When Allan Stevenson delivered his Public Lecture on Books and Bibliography at the University of Kansas in 1959, he characterized the study of paper in general and of watermarks specifically as having "hardly more than begun," noting that "most of the assumptions and most of the watermark books of the past are inadequate." During his career Stevenson contributed a great deal to the advancement of watermark studies, developing numerous procedures and demonstrating them to striking effect. Yet 30 years later, John Bidwell surveyed the state of contemporary paper studies for a Bibliographical Society festschrift and described Stevenson's methods as encumbered with "expensive and unwieldy equipment," with the result that "Only a few have mastered this method, and none, to my knowledge, has relied on it extensively except for the study of incunabula."
Recent publications demonstrate the limited uses to which scholars have put paper evidence in bibliographical work. By and large, watermarks serve as illustrative rather than analytical evidence, and even when they are used analytically, the marks are almost always represented in a single state rather than in multiple states. Scholars seem daunted by the technical barriers posed by received methods of reproduction, as well as by the complexity of the potential evidence in the impressions left by wire figures in paper. If Stevenson's model is ever to gain wider acceptance and become more than a singular example of brilliant but impractical procedures, if watermark evidence is ever to make the jump from tertiary illustration to primary source, then we must evolve new ways to overcome Bidwell's barrier of cost and convenience, the specter of "expensive and unwieldy equipment." This paper attempts to open a breach in that resistance by examining existing methods of watermark reproduction and enhancement, demonstrating how digital technology can significantly improve the usefulness of those methods, and finally suggesting avenues for analysis opened up by the synthetic application of these tools.
Scholars contemplating a paper-based study employing watermarks as primary evidence immediately face the question of reproduction techniques. That they will use reproductions is certain. The nature of rare book collecting, both personal and institutional, dictates a cycle of concentration and dispersal. A private collector will over his or her career amass holdings that, more likely than not, become the stuff of an estate sale at death. Likewise, institutions frequently go through periods of acquisition and deaccession brought on by budget pressures or other utilitarian concerns. Over a period of time this pattern in private and public collecting tends to create a corpus of materials widely spread over the rare books and manuscripts landscape. In order to make the greatest use of multiple copies, a bibliographer must become a sort of collector as well, searching out, reproducing and organizing evidence for later analysis.
While expanding the potential scope of a study, modern techniques also offer an attractive degree of complexity and accuracy. Modern watermark reproductions take the form of highly detailed two-dimensional images that fit easily onto a digital scanner, enabling one to exploit the ever-expanding resources offered by the micro- and mini-computer. Affordable for most scholars, these machines perform tasks much more quickly and accurately than is possible with non-digital techniques. They can detect information at scales too small or too large for the unaided human eye, compare and correlate large amounts of data, and store the extracted information in a stable, easily retrievable form for later analysis.
Although watermark reproductions form an essential component of a contemporary bibliographical analysis, they are not without their problems, chief among which is balance. On one side of the scales rests the need for economy and ease of use, mandated in part by the large quantity and scattered nature of the data that paper analysis requires. On the other side are the analytical requirements of accuracy and clarity, for no study will succeed without reliable evidence.
A number of reproduction methods are in use today, none of which meets the twin criteria I've just outlined. Tracings are cheap and easy but sorely lacking in accuracy. Dylux, a proofing medium from the printing industry, delivers cheap and accurate images but suffers from problems of clarity. The same general features hold for the so-called Ilkley method of contact printing, with the added disadvantage that it requires darkroom conditions for the initial exposure. Beta radiography, up to now the favorite if rather expensive technique of researchers, produces clear and accurate images when available. It does, however, suffer from a set of complications inherent to things radioactive: they inflate the cost of its use, subject the transport of its exposure-producing materials to state and federal regulations, and generally limit its application to holdings in major research libraries.
No single strategy satisfies the seemingly conflicting requirements of ease of use on the one hand and clarity and accuracy of detail on the other, with the predictable consequence that watermark studies vary widely in approach, sophistication and applicability to other studies. Perhaps in response to the limitations imposed by existing modes, researchers continue their quest for the perfect technology, the Philosopher's Stone of watermark reproduction. The December '93 issue of Print Quarterly includes a description of Dutch experiments with so-called "soft x-rays," reporting that "an emission of as little as 7kV at 15mA through the paper onto a film, exposed for about 150 seconds, produces a clear and publishable image."
Those seeking to improve beta radiography are searching the periodic table for alternatives to Carbon-14, with candidates including calcium and technetium. Yet while an ideal solution remains tantalizingly elusive, I would like to suggest that a practical and quite workable answer exists today. In the next few minutes, I will demonstrate how the coordinated use of two technologies produces a third, superior and synthetic technology.
Dylux is a non-water-soluble, pH-neutral photosensitive paper developed by the DuPont corporation for proofing photographic negatives. It comes in a number of forms, but Dylux 503-1B works best for watermark collection, for it reacts in two distinct ways to light. When exposed to ultraviolet light in the 200-400 nanometer range, its pale yellow coating turns a deep shade of blue. When exposed to visible light in the 410-500 nanometer range, the coating turns white and ceases to react to the ultraviolet spectrum. Over 20 years ago Thomas Gravell began experimenting with bibliographical applications for Dylux, and he has developed a set of working procedures for its use. As with any contact exposure, the Dylux is laid beneath the sheet of paper from which an image is to be taken. Fluorescent lighting, whose spectrum includes the visible but not the ultraviolet range, is then directed at the sheet for a period of 1-5 minutes, depending on the thickness of the paper. The light passes through the sheet at rates that vary with the paper's thickness and deactivates the underlying Dylux accordingly. Once this selective neutralization is complete, the Dylux is exposed to ultraviolet light for 5-10 seconds to bring out the watermark. The speed of the procedure affords one the luxury of experimentation, since the effect of small changes in exposure on clarity and contrast can quickly be discerned.
In addition to its overall ease of use, Dylux has three technical features that make it extremely useful for paper analysis: a sensitivity to small changes in the amount of light exposure, giving it the ability to reveal not only watermarks but also chain and wire lines; a broad palette of color within the blue spectrum, allowing one to isolate a detail or group of details in part through color differentiation; and a chemical coating of quite fine grain, which produces images that do not break down under repeated enlargement. Of these features, the last two -- palette and grain -- make Dylux exposures particularly apt subjects for digital image enhancement. However, because the process captures inked images as well as watermarks, it has found only limited use in bibliographical circles. This is where the application of a second technology enters the picture. By converting a Dylux image into digital form and employing widely available image-enhancement software, the type clutter that frequently obscures a watermark can be reduced to manageable background noise and the reproduction's overall quality significantly improved.
Microcomputer retailers have recently begun "bundling" software to enhance the appeal of their products. Both Macintosh and DOS-based machines will frequently arrive already loaded with a basic word-processing program, a spreadsheet application, and more often than not a graphics tool. This last piece of software not only calls up images onto a computer screen but also allows the user to manipulate and enhance them in a surprising number of subtle ways. Today I'm going to demonstrate image enhancement techniques developed on a larger and faster computer system, a UNIX-based machine using a graphical interface. However, everything I accomplished on the larger system can also be done on a personal computer whose price is rapidly approaching $1000. The conversion of a paper image into a digital graphic still requires a flatbed scanner, but these machines are becoming commonplace objects in most institutions, easily accessible to a scholar willing to put in a little time tracking one down.
The images I'll show you today were scanned on a Hewlett-Packard ScanJet IIc connected to a standard 386-based PC running Microsoft Windows. The image manipulation was performed on an IBM RS/6000 running the AIX operating system, with a terminal using the X-Windows environment. I also had access to a great deal of invaluable advice and aid from the Electronic Text Center and from the Institute for Advanced Technology in the Humanities. I must apologize for the manner in which these images appear on the screen; one of the technologies that remain expensive is the projection of computer images onto large viewing screens.
To begin, this sketch of a "shield with three lions" is from Heawood's 1950 catalogue of seventeenth- and eighteenth-century watermarks, number 576, which he lists as occurring in Thomas Hobbes's 1629 translation of Thucydides. Here is a similar shield watermark, found by Stevenson in an endpaper and used as Figure 4b in his article "Watermarks Are Twins." While he never explains the exact technology employed, Stevenson does refer to his plates as "photographs" in the article, and in his Kansas lecture he expresses a preference for collotypes of beta images. This is an unaltered Dylux exposure I made in February of this year at the Folger Shakespeare Library, another shield watermark, found this time in Ben Jonson's folio Workes of 1616.
At first glance this last image strikes us as rather lifeless, as flat and unappealing, certainly less useful than the Stevenson photograph, perhaps even less so than the Heawood sketch. When compared with a first-rate beta-radiograph the impression of inferiority is even greater. This points to a subtle but nonetheless important aspect of watermark reproduction, that of aesthetics. We have come to expect a watermark to look like a beta-radiograph, with its evocative black and white shading, strong set of chain lines, and an almost tactile background of crisp wire lines. Dylux delivers pastel shades and less texture; it presents less contrast and, as a result, less distinct wire lines. More importantly, it carries the characteristic type clutter that prevents us from getting a complete and unobstructed view of the mark. Note, however, that while the type obscures, it does not alter. The watermark may have small gaps caused by the overlaying ink, but the integrity and overall accuracy of the image are not compromised.
Computer enhancement should achieve two things, then: minimize the haze of types in the image while at the same time enhancing its aesthetic appeal. I'll begin by converting the image to black and white. I've manipulated these images with a UNIX application called XV, but similar packages are available for Mac and DOS platforms. Immediately the picture takes on the pleasing feel of a beta image, and we begin to grow more comfortable with it. By converting the blue-to-white spectrum to the wider black-to-white array, we also gain a larger palette of shades with which to highlight the sought-after features. Thus I can shift contrast and brightness to bring out the watermark and chain lines without allowing the types to overwhelm them. Finally, since the distracting type images are composed mainly of a relatively narrow band of grey shades, I can isolate those greys and selectively shift them toward the greys that make up the surrounding space, making them fade into the background without altering the watermark. The final result of this transformation is a clear and pleasing image, and while we will never mistake it for a beta-radiograph, we can now place the Dylux reproduction alongside images produced by other methods without feeling that one is inferior to another.
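For those who wish to follow the sequence with their own tools, the same steps can be sketched in a few lines of Python using the freely available Pillow and NumPy libraries in place of XV. The filenames, the contrast and brightness settings, and the band of greys assigned to the type clutter below are illustrative assumptions only, not the values used in the images you have just seen.

```python
# A minimal sketch of the enhancement steps described above, using Pillow
# and NumPy. Filenames and numeric settings are illustrative assumptions.
import numpy as np
from PIL import Image, ImageEnhance

# 1. Convert the scanned Dylux exposure from its blue-to-white palette
#    to a black-to-white greyscale image.
scan = Image.open("jonson_dylux_scan.tif").convert("L")

# 2. Shift contrast and brightness to bring out the watermark and chain lines.
scan = ImageEnhance.Contrast(scan).enhance(1.6)
scan = ImageEnhance.Brightness(scan).enhance(1.1)

# 3. Isolate the narrow band of greys that makes up the type clutter and
#    shift it toward the background grey, so the types fade without
#    touching the darker lines of the watermark itself.
pixels = np.asarray(scan, dtype=np.uint8).copy()
background_grey = int(np.median(pixels))        # estimate of the sheet's background tone
type_band = (pixels > 90) & (pixels < 150)      # assumed grey range of the type images
pixels[type_band] = background_grey

Image.fromarray(pixels).save("jonson_dylux_enhanced.tif")
```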
By combining the portability and practicality of Dylux with the image-manipulation capacity of a standard-issue computer, we can reproduce and collect watermarks from quite diverse sources and yet have images that approach the quality of a beta exposure. Having collected these images, what then can we do with them? Stevenson identifies ten points he felt were most useful in differentiating among marks, and subsequent researchers have used them as guides in their own work. While it is useful to have a list of elements that might serve to distinguish among twins or states, the work of identifying each class of variants looms as a tedious and time-consuming chore, particularly as Dylux allows one to collect a large number of images so easily.
As I stated earlier, this paper seeks to synthesize existing tools in an attempt to discover new methods in bibliographical research. Using that objective to guide further efforts, I'd like to suggest that one possible solution to the anticipated information overload now sits not 150 feet behind you, resting peacefully in an alcove of the Barrett Reading Room. Although it has fallen out of fashion, machine collation provides bibliographers with a fast and accurate means of identifying changes in the type page due to correction or resetting, or even small instances of plate damage. Collating machines employ one of two basic methods to bring out variants: the stereoscopic principle employed by the McLeod and Lindstrand machines, and the "zoetropic" effect basic to a Hinman Collator. I've used both types of machine extensively, and while I prefer the McLeod for textual collation, the illusion of motion created by the Hinman's flashing lights works better for bringing out small shifts in watermarks. One need merely print out the digitized images and sit down at the collator; the blinking image quickly draws the eye to the points of difference.
I can't set up the University's Hinman for public viewing in this forum, but I can give you a sense of how it works by showing you a short movie. [Macintosh] This sequence consists of the two Stevenson and Jonson watermark images I just showed you, with the color table reversed so the watermarks show up as dark lines, fed into a public-domain Macintosh application and set to cycle at about four frames per second. As you can see from this demonstration, certain elements of the mark shift back and forth, revealing immediately the points of variation and allowing one to place them more precisely within the overall sequence of marks across the life of the mould.
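A flicker sequence of this kind can also be produced without the Macintosh application. The sketch below, again in Python with Pillow, assumes two registered scans of equal size (the filenames are placeholders); it inverts them so the marks appear as dark lines and writes an animated image that cycles at roughly four frames per second, so that the points of variation appear to jump back and forth.

```python
# A rough equivalent of the flicker demonstration: alternate two inverted,
# registered scans at about four frames per second. Filenames are placeholders.
from PIL import Image, ImageOps

frame_a = ImageOps.invert(Image.open("stevenson_shield.tif").convert("L"))
frame_b = ImageOps.invert(Image.open("jonson_shield.tif").convert("L"))

frame_a.save(
    "watermark_blink.gif",
    save_all=True,
    append_images=[frame_b],
    duration=250,   # milliseconds per frame, roughly four frames per second
    loop=0,         # cycle indefinitely
)
```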
Beyond simple movies like this one, computer technology offers an expanding horizon of exciting analytical possibilities. Before concluding this paper, let me mention just two procedures I am now pursuing using watermarks taken from a two-year period in the career of William Stansby, reproduced and enhanced with the techniques I've just outlined. The first involves "morphing," an image manipulation tool used most visibly by moviemakers to transform a man into a raging werewolf or a pool of water into a Queen Anne chair. The same technology lends itself to the illustration of watermark wear in a pedagogical setting. By linking together multiple images from different points in a papermould's existence, one can effectively show the life and death of a watermark.
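A full morphing program warps one image into the next as well as dissolving between them, and I make no attempt to reproduce that here. Even a simple cross-dissolve between successive states of a mark, however, gives a classroom audience a vivid sense of its deterioration. The sketch below, in Python with Pillow, assumes a chronological series of registered scans of equal size; the filenames and frame counts are placeholders chosen for illustration.

```python
# A cross-dissolve (not a true morph) through successive states of a mark,
# written out as an animated image. Scans are assumed to be registered and
# of equal size; filenames are placeholders.
from PIL import Image

states = [Image.open(name).convert("L") for name in
          ("stansby_mark_a.tif", "stansby_mark_b.tif", "stansby_mark_c.tif")]

frames = []
steps = 8                                   # intermediate frames per transition
for early, late in zip(states, states[1:]):
    for i in range(steps):
        frames.append(Image.blend(early, late, i / steps))
frames.append(states[-1])

frames[0].save("stansby_wear.gif", save_all=True,
               append_images=frames[1:], duration=120, loop=0)
```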
A second and potentially more far-reaching technique employs advanced statistical tools to establish demonstrable relationships between variant states of a watermark. I've given watermarks from the Jonson folio to Kendon Stubbs who, using procedures that test similarities and differences among two-dimensional polygons, has identified clusters of related marks from different sections of the volume while at the same time eliminating foreign marks included as control images. Both approaches show promise as tools for teachers and researchers, and we hope to be able to report further developments with them in the future.
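Stubbs's polygon-matching procedures are his own, and I do not reproduce them here. As a rough illustration of the general approach, one can reduce each scanned mark to a small vector of scale-invariant shape measurements and let a standard hierarchical clustering routine group the vectors. Everything in the sketch below, from the filenames to the choice of moment features and the clustering threshold, is an assumption made for the sake of illustration.

```python
# A crude stand-in for polygon-based comparison: describe each mark by a few
# scale-invariant moments of its dark pixels, then cluster the descriptions.
# Filenames, threshold, and clustering cutoff are illustrative assumptions.
import numpy as np
from PIL import Image
from scipy.cluster.hierarchy import linkage, fcluster

def shape_features(path, threshold=100):
    """Reduce a scanned mark to a small vector of normalized central moments."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)
    ys, xs = np.nonzero(grey < threshold)          # dark pixels belonging to the mark
    xs, ys = xs - xs.mean(), ys - ys.mean()        # centre the figure
    scale = np.sqrt((xs**2 + ys**2).mean())        # normalise for size
    xs, ys = xs / scale, ys / scale
    return np.array([(xs**p * ys**q).mean()
                     for p in range(3) for q in range(3) if p + q >= 2])

scans = ["mark_a1.tif", "mark_a2.tif", "mark_b1.tif", "control.tif"]
features = np.vstack([shape_features(s) for s in scans])

# group marks whose feature vectors lie close together
clusters = fcluster(linkage(features, method="ward"), t=0.5, criterion="distance")
for name, label in zip(scans, clusters):
    print(name, "-> cluster", label)
```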
As computers become ubiquitous adjuncts to research in the Humanities in general, and to bibliographical work in particular, we're going to see time and again the stale made fresh, the forgotten discovered anew, the expensive turned affordable, the outdated transformed into the contemporary, and the marginal allotted a place in the mainstream. This, I think, is what finally marks the so-called computer revolution as a development of the first rank, and as one from which there is no turning back: the technology's power to revitalize and transform everything it touches. Thank you.