INTERFACING THE EDITION: Bethany Nowviskie
[Note: This is the text of a talk I recently gave at a conference on "literary truth and scientific method." BPN 4/10/2000]
A scholarly edition is, and always has been, an interface – a point of contact between a user and a set of embodied information. As such, the design of that interface becomes an issue of great importance. What gives us access to those literary and historical documents we treat with such reverence? By what means do we store and configure them and, periodically, download them into our culture?
We haven’t really thought of scholarly editions in such technologically loaded terms before. This is not (or not only) because we’ve only recently become enamoured with digital media. Books have always been information technology. We’ve rarely thought deeply about editions as interface, though, because we as readers are so accustomed to jacking into the codex.
We know what it is to read a book – even a meta-book that contains other books in a complicated textual apparatus. And we – too often – think of ourselves, in our interaction with editions, as readers, not as users. We know what it is to read a book, and we know what it is to construct one. The editors of literary and historical documents have a rich tradition of bibliographical work to draw on. They, in this most skills-based and concrete of humanistic fields, know what it is to make a proper scholarly edition. Certain procedures must be followed; the results of certain inquiries must be presented in certain formats. This is not to say that there is no room for creativity, innovation, or difference within codex-based scholarly editing. The sort of controversy that regularly wracks the discipline indicates otherwise. But, as most of us have come to realize, the spectrum of editions feasible in book form pales in the glow of new media.
Now it’s possible to create a scholarly edition in a form that de-familiarizes the entire undertaking and brings the issue of interface to the fore. The dangers we’re sure to encounter when we do this are not yet clear to us. I want to acknowledge that from the outset. Scholarly editing has entered an experimental phase in which it’s becoming more and more clear that no message is divorced from a medium. Many of the benefits of this de-familiarization of the edition, too, are yet to be discovered – but some are clear.
Editors are now working (although not without resistance, both conscious and unconscious, which I’ll discuss in a minute) to think outside the confines of the object they study. Electronic editions permit us to do things with documents that we’ve never been able to do before. We can analyze them computationally, and allow the user of the edition to do so for himself, in the terms of his own research queries, in real time. We can provide, for that user, more and better representations of our texts – some of them not possible in any other medium. We can offer multiple views, multiple arguments – embodied arguments made by editors and users alike. We can craft a site for action on and interaction with documents. The egalitarian electronic edition may go a long way toward bridging the gap between bibliography and interpretation.
In the codex form, a scholarly edition contains an editorial essay, which makes an argument about a text or set of texts, and is then followed by an arranged document that constitutes a frozen version of that argument. Let me make this clear: the text of a scholarly edition is an embodied argument being made by the text’s editor. The editor who wishes to make multiple arguments in book form therefore faces a real challenge. He finds himself bound by the materiality of his medium, and in most cases he must choose among a limited number of unsatisfactory options:
- He may create a complex and unwieldy apparatus, thereby risking his edition’s coherence. Will the Byzantine notation system this requires make all of his arguments unclear? Will users be willing to learn new modes of notation?
- He may petition his publisher to print multiple volumes of his edition. Economics will probably doom this request, since the vast majority of “scholarly editions” are commissioned as basic, inexpensive reading texts for the college classroom.
- He may give up – limiting his argument to (depending on his disposition) the most or least polemical stance supported by those documents he can fit between a couple of inches of board.
The electronic media, however, open up new avenues of presentation, or interface, for editors just as they open up new means of analysis and understanding for users. I want to concentrate today on only one of these avenues of interface – the one that leads to an image-based edition.
In order to do that, I need to provide you with a little bit of background. Traditional scholarly editing is dominated by the critical edition. You may have noticed that I’ve assiduously avoided the term “critical edition” thus far in my talk, using instead the more generic, if less commonplace, “scholarly edition.” That’s because there’s a fundamental difference between an edition we classify as “critical” and one that, in the eyes of many, is merely “scholarly.”
A critical edition is one that posits and presents a new, allegedly better version of a text than any available in an extant document. This “better” version is constructed by the editor and is compiled from his collation of every existing documentary embodiment of the work. The editor then employs his own faculties of reasoning and taste to cull out readings that do not fit his personal vision of that work’s ideal form and, in some cases, to insert readings not present in any preserved document, but which strike him as correct. In the Greg-Bowers school, which dominates modern textual criticism, the editorial ideal is meant to approach the original author’s own intention toward his work. Editors therefore, in concert with a pre-devised principle, eliminate those textual transformations that happen – either purposefully or accidentally – in multiple printings. The resultant new text is meant to approach a very old, lost text. This is monastic work, and understandably so. The earliest critical editing was done with Biblical texts – works for which authorial intention was all-important and unquestionable.
There are other sorts of editorial endeavor. Different kinds of editions have been constructed by editors who find arguments about documents as interesting as arguments about works. Diplomatic editions contain exact, careful transcriptions of these documents of interest – generally manuscripts that are difficult to read or gain access to. Facsimile editions – the type of work that most attracts me – offer photographic reproductions of manuscripts or rare books. They often replicate a fragile, valuable volume in the form of a relatively cheap and hardy one. They are invariably image-based.
Facsimile editing takes as its primary concern – although it’s seldom expressed in these terms – the ways in which humans interface with documents. Manuscripts are presented photographically because the facsimile editor recognizes that visual information is encoded into human handwriting, and that a substantial part of a manuscript’s essence is lost when it is transcribed (however faithfully) into print. Printed texts are presented photographically because the facsimile editor recognizes the historical value of page appearance. Typeface, lineation, the dimensions of the book and of its margins, the quality of paper, ink, binding material, the presence of illustration and ornament, and the visual quality of the page itself – all of these things are understood as bearers of valuable information.
Perhaps unsurprisingly, some editors of critical editions feel that scholars who produce diplomatic transcriptions or facsimiles work without thought or principle. Surely anyone literate enough (they imply) to transcribe text or competent enough to use a photocopy machine can fashion an edition on those terms?
The fact that these editions provide a valuable service to the scholarly community and enable modes of criticism that would not otherwise be possible tends to get swept under the rug. Instead, because so few editors have embraced those rare theories that champion the document as avidly as the work, the discussion generally devolves into accusations of brainlessness on one side and of careless traditionalism on the other. That is, the lack of a widely respected theory of documentary editing has boxed editors in.
And the book itself drove nails into the coffin of documentary editing. The codex has limited the utility of facsimile editions in two crucial ways:
- First, the material constraints of that form have made it difficult to present images in an array of configurations. The ability to do this would have allowed editors to take the crucial second step of offering multiple arguments supported by multiple embodiments – thereby forestalling the critique of brainless re-presentation.
- Second, our uncritical attitude toward the codex form has distracted us from those interface issues most central to and implicit in facsimile editing. Interface, as an area of study, is the fundamental reason facsimile editions exist. When we, as textual critics, begin to design those new interfaces that digital media demand, we are suddenly confronted with the concept of interface in a new light. We can address it directly, and channel its inherent energies into all our work.
Documentary editors therefore need a theory that emphasizes interface – both because it illuminates their subject of study and because they must position themselves as interface designers like never before. They also need a theory that can be supported in concrete terms, by the production of editions that embody it. Only now can both those conditions be met. But this theory must come soon.
I recently participated in a meeting with some members of the MLA’s Committee for Electronic Scholarly Editions. The committee was charged with revising its outdated guidelines for scholarly editing in new media. Because the field is so new and the technology that underlies it is developing so rapidly, we felt unable to codify a set of principles like that published by the committee that guides editing in the codex form. The result of our discussion was interesting, though – in our recommended guidelines for electronic editions, the reader is more patently figured as user and understood to take an active role in configuring and constructing what I’ve been calling embodied arguments. In addition, multimedia – that is, components other than traditional, transcribed and emended texts – are seen as crucial to the edition. Editors who do not offer readers points of interface into the visual and, where appropriate, audible instantiations of their documents have left their work half-finished.
The fact that the MLA’s guidelines for electronic scholarly editions are necessarily broad does not make the coherent and specific interface theory I’m calling for a pipe dream. It does, however, make its development more urgent. The worst thing that could happen right now – both for facsimile and diplomatic editing in general and more especially for image-based editorial work online – is if enthusiastic editors undertake electronic projects unthinkingly. Editors need a well thought-out, ardently defended theory of image-based editing – either to follow or to position themselves against.
I’ve said this theory must come soon. Problems are already developing in our new field. Many scholars who think they’re taking advantage of the new media to do “image-based” editing – the kind of work that should build on facsimile editions and some kind of interface theory – are really doing what’s better described as image-assisted editing. Their products are not, in fact, as radically different from traditional diplomatic and critical work as they would suppose. Visual elements of documents (most generally scanned images of book pages) are being presented as a supplement to plain, ASCII-text transcription. Julia Flanders, in a recent issue of Computers and the Humanities, made the sharp observation that editors generally employ digital images as “decoration, scholarly substantiation, and bravura display.” In most electronic editions, this display takes the form of an icon or link after a passage of type. “Click here to view an image of this page.” So-called image-based editions like this send a self-defeating message to the user through interface: the visual elements of the page are supplemental – the document is secondary to the work.
There are several possible explanations for the dearth of true image-based interfacing, the first of which is so uninspired that we may choose to discount it immediately.
That is, that technology is not yet capable of supporting a genuine image-based edition. In some ways this is true, and it may be this fear that holds us back. I can certainly imagine more to do with digital images than is currently possible, but projects like the Blake Archive, which has followed a strict image-based methodology virtually since the beginning of the Web, lend us great riches from small coffers. I’ll just pause here to add that Blake and a select few other electronic editions do indeed offer something close to interface through image. Unfortunately, they’re not talking a lot about interfacing – they’re too busy building and innovating to theorize much.
My other two explanations for the lackluster showing by so-called “image-based editions” are closely connected. The Text Encoding Initiative, or TEI, is the body that governs the markup schemes used by nearly all creators of electronic editions. Markup is the system of encoding that, after being applied manually to an electronic text by a saint – and there are many saints in Bryan Hall – can permit a computer (and therefore the stylesheets which generate appearance and the search engines which deliver the results of user queries) to understand the difference between italics and boldface type, between a line of poetry and a line of prose, between a city and a surname. TEI, an application of the broader SGML standard, reigns supreme in electronic editing. Why? It offers a well-defined, comprehensible practice for editors new to the field to follow (and we’re all fairly new to the field), and it is supported by something like a theory based in part on the familiar – on conceptions of hierarchical information, developed for and within print. There’s little place for the visual in TEI, and its imagination has been bound by what is possible and advisable in codex form.
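To make the point about markup concrete, here is a minimal sketch of what encoding buys us. The element names follow TEI conventions (<l> for a line of verse, <p> for prose, <placeName> and <surname> for names), but the fragment itself is invented for illustration, and real TEI documents are far richer:

```python
import xml.etree.ElementTree as ET

# An invented fragment using TEI-style element names.
fragment = """
<div>
  <l>The blessed damozel leaned out</l>
  <l>From the gold bar of Heaven;</l>
  <p>Rossetti revised the poem repeatedly in
     <placeName>London</placeName>.</p>
</div>
"""

root = ET.fromstring(fragment)

# Because the markup names the *kind* of text it wraps, a program
# can answer structural queries that plain ASCII cannot support:
# how many lines of verse? which strings are place names?
verse_lines = [l.text for l in root.iter("l")]
places = [p.text for p in root.iter("placeName")]

print(len(verse_lines))  # two lines of verse
print(places)            # the marked place names
```

Notice that nothing in the markup describes how anything looks; appearance is delegated to stylesheets, which is exactly the print-derived, hierarchical conception of text described above.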
It’s true that editors who wish to focus on the visual face technical difficulties. The engagement of computing humanists with digital imaging is minimal in comparison with the energies that go into advancing text markup. We need computational systems designed to read and understand the image as well as they understand marked text. And those systems should be here soon. An upcoming issue of Computers and the Humanities will be devoted to image-based work. Software such as Blobworld, which can visually differentiate a flower from a fish (and, one would hope, soon, sonnets from sestinas, or woodcuts from lithographs), is under development. Recent panels at Humanities Computing conferences have focused on the issue. An email discussion list and website, called LOOKSEE, serves as a clearinghouse for information. Grad students and faculty in our own department and in IATH – Jerry McGann, Johanna Drucker, Steve Ramsay, Worthy Martin and others – are beginning to think, talk, play, and program their way around the problem. So stay tuned. (Just as an aside: I do want to add that, as I was compiling the bibliography for that journal issue I mentioned, I noticed that the vast majority of humanities projects that are thinking critically about digital images are really concerned with data visualization, not with understanding pre-existing documents visually. So to “stay tuned,” I should really add, if you’re at all interested in the issue, “stay active.”)
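The kind of visual differentiation a system like Blobworld performs rests on comparing image features rather than transcribed text. A toy sketch of one such feature – a coarse, normalized color histogram compared by histogram intersection – is below. The pixel data is invented, and real systems segment images into regions and use much richer features; this only illustrates the principle that images can be queried computationally:

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels into coarse bins and normalize to sum to 1."""
    counts = Counter(
        (r * bins // 256, g * bins // 256, b * bins // 256)
        for r, g, b in pixels
    )
    total = sum(counts.values())
    return {b: n / total for b, n in counts.items()}

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(h1.get(b, 0), h2.get(b, 0)) for b in set(h1) | set(h2))

# Invented pixel data: two mostly-red "flowers" and a mostly-blue "fish".
flower  = [(220, 40, 60)] * 90 + [(30, 160, 40)] * 10
flower2 = [(210, 50, 60)] * 88 + [(40, 150, 50)] * 12
fish    = [(40, 60, 220)] * 85 + [(200, 200, 200)] * 15

# Visually similar subjects score high; dissimilar ones score low.
same = intersection(color_histogram(flower), color_histogram(flower2))
diff = intersection(color_histogram(flower), color_histogram(fish))
print(same, diff)
```

A system that can rank page images by visual similarity this way – by ink distribution, layout, or ornament rather than by transcription – is a small step toward the image-based interfacing argued for here.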
I wish I had more time here today. I’d like to tell you a story through which I’d lose all credibility by simultaneously biting the hand that feeds me and admitting to a sordid lifestyle in which I side with the enemy. The sad part of my tale is that the Rossetti Archive doesn’t know it’s the enemy. It wants to be good. But I don’t have time, so I’ll just say that, despite our best intentions at Rossetti, we’re a chief offender in the false image-based edition racket. Our material – which not only includes printed texts and manuscripts, but also paintings, sketches, and designs – strains against the TEI. And yet I, as interface designer for the project, have tacitly condoned our practice of substituting ASCII transcription for perfectly good, readable page images. I’m also the gal who plinked out the code that makes those “Click here for page image” icons appear. I’d be happy to talk about this after the panel with anyone who’s interested. I can also arrange to give you a tour of the Archive in our own defense.
So it’s clearly not easy going, but the goal is lofty. Within the space of a couple of decades, we’ll be much better equipped to address, in human terms, what happens between seeing and knowing. I feel confident that image-based editions, if they are undertaken whole-heartedly and backed by coherent theoretical work, will have a profound impact on textual criticism and literary interpretation alike. It should be simple to recognize your colleagues who are working on images and interfacing. They’re the ones looking at each other with a wild surmise. True image-based editions will possess a dual consciousness of that most vital quality of the page and of the edition – the way they serve as interface. I’ll say it again: interface is the essence of the page and of the edition alike, and if we want to understand documents, we must understand interface. The doubled impetus this fact brings, if we treat it seriously, can embody itself in an appreciation of the crucial junction between humans and documents, and between documents and scholarship.