Comp[u/e]ting Editorial F[u/ea]tures

Jerome McGann

A short time ago an essay of mine was rejected by the distinguished journal Computers and the Humanities. I had been informally asked to submit something by one of the journal's editors and I was pleased to be able to send a new piece that was nearing completion.

Don't turn off your set. It's true that anecdotes, especially personal ones, often make dismal invitations in the discourse of scholarship. But this one is, I hope, peculiarly apt in the present instance -- that is to say, in the context of this collection of original (in both senses of the word) essays about "re-imagining textuality".

The problem with my essay arrived in two waves.

When I sent it for consideration I noted to the editors that it called for some half dozen color reproductions "illustrating" the final section of the paper. Would this present any difficulties? I asked. None at all, a return letter assured me, though I would have to pay for the costs involved. The charges came to several thousand dollars.

Since I couldn't afford that expense I made a double decision. First, I would remove the last part of the essay and modify the earlier parts to accommodate the change. The essay was organized in modular units in any case, so the revision would not be too difficult to execute. But because that last section contained material that was for me the most intellectually challenging in the piece, I knew that I would have to find a way to "publish" it. That move wasn't very difficult either. I would simply put up the whole of the essay on my webpage, including links to the digital files of the color images that couldn't be included in the print essay. (There is a benevolent irony here. Those images were originally digital files created in Adobe Photoshop. Had they been reproduced in Computers and the Humanities, the print texts would have been, as Frank O'Hara might have said, "a step away from them".)

Several months passed before the reader's report arrived with its second wave of problems. It was an excellent report -- searching, intelligent, and even (always a pleasing matter) full of favorable remarks. Making revisions in light of the critique was going to improve the essay so I felt -- and still feel -- grateful to this reader, whoever s/he was.

But there was (is) for the reviewer a "major weakness" in the essay: "its somewhat unclear structure". The point was elaborated in this way:

There is no initial overview, and no final summary. . . . The thematic composition suffers from a lot of back and forth, and it contains a combination of project report and theoretical argument which makes the paper too long, confusing and hard to follow. The statement of goals for the paper (or for the Rossetti Archive) is clear enough, but it is not always clear that the material discussed is relevant to these goals, and there is no clear evaluation of the results in terms of the goals.
I weighed these comments for some time before realizing they couldn't be dealt with through any kind of "revision" process. The reader's difficulties signalled a skew between an essay that was being looked for, and an essay that had actually been written. More importantly, this skew seemed to me an index of a wider division of thought about how to address certain key conceptual issues that attend many current projects involving "computers" and "humanities" scholarship. The problem with printing the images represents an elementary version of this larger question.


Let me begin with this problem of printing the images. Within the community of scholars who work on and with electronic textuality -- theorizing and building the tools and then reflecting on the events -- a division has emerged in the area of electronic editions that goes, as Robert Frost once wrote, "out far and in deep". In its simplest form the question is: should a scholarly edition be organized as an "image-based" edition or should it concentrate its computerized resources on tools for searching, analyzing, and collating alphanumeric data? As Julia Flanders observes in a good recent essay on this question in Computers and the Humanities, this problem has two faces. One involves the practical design of scholarly electronic editions. This is

the issue of how visual evidence functions within the intellectual economy of the edition . . . and how it interacts with the other kinds of information the edition offers . . . [i.e.] the role of transcribed text, of metadata, of text encoding, of references, of computational features such as algorithms for collating variants or manipulating the text. (Flanders, 301)
The second question, more far reaching, involves "the way editions produce textual knowledge" and hence the role that digital imaging can play in the design and development of these scholarly instruments.

Working with images, particularly full-color images, in a paper-based medium has always been problematic. It isn't easy to produce these images in print. And it's always expensive, as my experience with the Computers and the Humanities editors once again showed. On the other hand, reproducing, storing, and manipulating color images in electronic media is relatively simple and inexpensive. (Many would perhaps say "all too simple".) Once again I refer to my initial example. When I knew that my digitized images couldn't be included in the Computers and the Humanities essay, I decided to load the complete essay with its color images for Web access. The whole process took very little time -- a couple of hours.

The event constructs an allegory about the respective limits and powers of paper-based texts and electronic texts. At first glance we see the far greater flexibility and power of the computerized environment, which can accommodate and interrelate so many types of information-bearing media: textual, of course, but visual (still and moving pictures) and audial as well. A second reflection will shift our view, for however easily I was able to upload my essay for (world wide) Web access, its internet availability also disconnects and invisibilizes it in important ways. Simply, the audience of the Web is vast whereas Computers and the Humanities targets and focusses its readership. The disadvantage to publishing my essay on my Website is that from that point it doesn't readily enter the network of scholarly discourse that it was written for.

Of course one might point out that soon (alas, a relative word) these scholarly journals will be stored and disseminated in electronic and some kind of "Web-based" form. Besides, the "network of scholarly discourse" in this field is fairly restricted now so that the internet location of the essay can be found with relative ease. A moment's reflection brings us back to sad reality. First: the transition of scholarly journals from print to electronic form is not happening quickly, even in a field like "computers and humanities" -- and despite the fact that an electronic journal offers vastly augmented capabilities for scholars and their work. Second: even among scholars interested in electronic textuality in humanities, materials put up on the internet escape notice, engagement, citation, as many can testify.

Both of these points carry some significant consequences. The first emerges when we consider the debate about whether scholarly editions should be alphanumeric or image-based. In addressing this question Flanders insists, rightly in my view, that computerized editions should be trying to "provide textual information as high-quality data which can be analysed and processed" (301). She understands that a computerized version of the traditional "facsimile" edition can be an important undertaking, but ultimately even those kinds of works, including their paper-based forebears, call for critical and analytic treatment. In this connection it helps to recall that the history of the scholarly edition has been written in two forms: the critical edition, on one hand, and the facsimile edition (or its "diplomatic" variant) on the other. That history tells us in the simplest way that "text" conveys itself in two coding systems, one linguistic, the other graphic (or, more precisely, bibliographic).

Prima facie, then, the optimal scholarly edition would combine the resources of facsimile and critical editing. Such a combination has been impossible in paper-based instruments, for obvious reasons. Computerized tools are leading us -- some of us, anyhow -- to revisit the question of whether that "optimal scholarly edition" might not now be achievable. The simplest hypermedia construction, for example, already represents a minimal instance of combining facsimile and critical resources. Electronic tools make it easy to search and analyze vast bodies of electronic objects. The objects can be physically dispersed and they can be of various kinds (pictorial, audial, alphanumeric). So far, however, unless this data is alphanumeric, the search and analysis capacities of electronic tools function only at gross levels. That fact is what has led to the rapid development of SGML projects and the deployment of its TEI offspring in "computers and humanities" scholarship. It is what leads Flanders to conclude that "the most significant future trend in electronic editing" is not with image-based editions but with alphanumeric ones (301).
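The asymmetry described above can be made concrete in a few lines. The sketch below is hypothetical and not drawn from any actual archive: it shows that even a minimally encoded transcription answers structured queries that a raw facsimile scan, which is only pixel data to the machine, cannot.

```python
import xml.etree.ElementTree as ET

# A minimally encoded transcription (hypothetical; TEI-like only in spirit).
transcription = """
<poem>
  <line n="1">The blessed damozel leaned out</line>
  <line n="2">From the gold bar of Heaven</line>
</poem>
"""

root = ET.fromstring(transcription)

# Structured query: which numbered lines contain the word "gold"?
hits = [line.get("n") for line in root.iter("line") if "gold" in line.text]
print(hits)  # ['2']

# The same page as a digitized facsimile is, to the machine, only pixels.
# No comparable query is possible without transcription or image analysis.
facsimile = bytes(300 * 200 * 3)  # placeholder 300x200 RGB scan, all black
assert b"gold" not in facsimile   # the word cannot be found in pixel data
```

The transcription here is invented for illustration; the point is only the contrast between queryable encoded text and opaque image bytes.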

Myself, I think this statement is probably true. I think the opposite is also true. Everything depends on what you mean by the phrase "the most significant future trend".

The "future" of electronic editions that are coded alphanumerically, or that deploy such coding as part of their work, has already been sketched. It has its "trend", so to speak, because it has its practitioners. We see it in certain determinative works like McCarty's Onomasticon, Robinson's Canterbury Tales, Duggan's Piers Plowman, as well as in certain tools that manipulate alphanumeric text for what are at present simply game-playing operations, like Batmemes and [name generator]. The Rossetti Archive itself, which has been associated with image-based editing models, is fully SGML-encoded. It's clear that the models being developed in these kinds of projects have already marked out a "future trend" that will be played out in important ways. Text encoding will continue to grow in wisdom, age, and grace -- not only hierarchical schemes (like SGML/TEI) but nonhierarchical ones as well (the Wittgenstein Project's MECS scheme is the first serious effort in this direction).

What is the "future" of image-based editing? That is much less clear for the simple reason that as yet we don't have the means to search and analyze digital images in ways that correspond to what we can do with alphanumeric data. And yet everyone agrees, as Flanders' essay reminds us once again, that an ideal computerized edition would deploy and coordinate structured electronic searches of both image and alphanumeric data. We will secure those means only if we insist on doing so, despite the evident obstacles. In this case we have to construct a future that has not yet been so shaped and defined as the other future preferred by Flanders. Which is the more "significant" -- the future we think we know or the future we know we need -- is a moot point. Both are significant, indeed, they depend on and create each other.

At this point some of the allegorical import of my experience with Computers and the Humanities begins to be relevant. It happened that the problematic images exemplified some experiments I had made with image-editing tools. I was demonstrating how they might be used for certain kinds of critical and interpretive operations. Because Computers and the Humanities is a paper-based journal, however, it was (is) organized a priori, and at the most basic levels, not to be able to take up these kinds of materials. The expense of reproducing the images is merely an index of the recalcitrance of the paper format, which in this case is itself an index of a future we think we know rather than a future we know we need. Even had I the cash to pay for the printing of the color images, the paper-based format would have delivered only a denatured version of the material. An electronic Computers and the Humanities is what my essay actually needed, for such an instrument could easily have shown the actual digital files themselves (rather than paper reproductions); more than that, it could have reconstituted the processes by which those files were generated. It could have restaged the initial set of experiments. It could have even given the reader/user the means to replay the same experiments, or to set up new ones for comparative analytic purposes.

Let me say in passing, briefly, that what I am describing here is not at all some merely longed-for but never-seen "future". The technical means for doing what I just described are readily available. Indeed, it would be easier (in terms of what one would have to learn), and probably much less expensive, to start up an electronic journal with those capacities than to try to found a correspondent scholarly print journal.

What is "the most significant future trend in electronic editing"? It all depends on how you imagine the future. What is "the future" of that subset future, text encoding? I strongly suspect that nonhierarchical models will overtake this field and that hierarchical markup models will become subordinated "moments", as Kant would say, within nonhierarchical schemes. This will happen for one simple reason: the texts that most interest humanities scholars do not appear to be organized primarily in hierarchized forms. We need nonhierarchical schemes, though at present this is exactly what we haven't got. We need them because so much of our attention focusses on "imaginative" works. Imitations of life, they reproduce themselves in correspondently mysterious and (apparently) nonpredictive ways. They continually make us aware of the inadequacy of hierarchized markup models.
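The overlap problem behind this claim can be shown in miniature. The sketch below is purely illustrative, with invented spans: a poem's metrical lines and a quoted speech that crosses a line boundary are each perfectly well-formed segmentations of the same words, yet neither can be nested inside the other, which is exactly what a single hierarchical (SGML/TEI-style) tree would require.

```python
def nests(inner, outer):
    """True if every span in `inner` falls wholly inside some span in `outer`.

    Spans are (start, end) word offsets, end-exclusive.
    """
    return all(any(o0 <= i0 and i1 <= o1 for (o0, o1) in outer)
               for (i0, i1) in inner)

# Eleven words of verse: two metrical lines, and a quoted speech
# (words 2 through 7) that begins in line 1 and ends in line 2.
lines  = [(0, 5), (5, 11)]
speech = [(2, 8)]

print(nests(speech, lines))  # False: the speech crosses a line boundary
print(nests(lines, speech))  # False: nor do the lines fit inside the speech
# Since neither segmentation contains the other, no single tree can encode
# both at once; this is the situation nonhierarchical schemes like MECS
# are meant to address.
```

The particular offsets are hypothetical; any text in which rhetorical, metrical, and physical structures overlap produces the same impasse.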

In this frame of reference, "the most significant future trend in electronic editing" may well be with projects committed to goals that are necessary but as yet difficult to locate. Projects like The Rossetti Archive get undertaken precisely because they involve imperative scholarly needs that we don't as yet know how to meet -- in this case, the need to find ways to search and analyze digital images. Here Necessity must be the Mother of Invention, and the undertaking of the project establishes its own set of demands, that its needs be met. In course of building The Rossetti Archive, and well after we had committed ourselves to SGML-based markup, we came to realize how inadequate that form of encoding was to the actual needs of our project. The Archive will realize itself and its SGML future, which I have no reason to deplore. But that future, in my view, is far less significant than the one being promised in two other current scholarly undertakings that involve nonhierarchical demands and materials. Not without reason has the MECS program for nonhierarchized markup been the pursuit of Wittgenstein scholars, and now lying at the horizon of our attention is the Peirce Project with its stunning corpus of "existential graph" manuscripts. The "iconic indeterminacy" of these documents -- the phrase is Mary Keeler's -- will not easily submit to hierarchized markup schemes, and they need as well search and analysis procedures that can include various kinds of images. Something new is going to have to be developed if these works are to be produced in a computerized format that aspires to something more than facsimile reproduction.


And there is another problem -- or opportunity. After more than 30 years of unfulfilled expectations, about 5 years ago humanities computing took off. Why this catastrophic change came about is less important, at least for this particular place and moment, than some of the consequences of it. The World Wild West, sometimes called the World Wide Web, has summoned thousands, millions of people, including large numbers of humanities scholars, to explore the resources of this strange but promising land.

Well, that's an American set of metaphors and the images are importantly inapt. What we have is not so much a new world as a new set of tools designed and built for certain purposes by certain people that have caught the attention of a very different set of people with very different interests. The computer pioneers among humanities scholars -- sorry, my American slip keeps showing -- were almost all practitioners of what German philology used to call "the lower criticism": linguists, enumerative and analytic bibliographers, lexicographers, textual logicians. These were the first people to glimpse the untravelled world whose margin keeps fading for ever and for ever as we move. (At least that's Tennyson and not James Fenimore Cooper.)

In the past 5 years these new tools have fallen into the hands of many other kinds of humanities and literary scholars. Most are students, graduate and undergraduate (who turn into graduate students with bewildering speed). At UVA and our Institute I've watched scores of these young people get hired into various projects on a work-study basis. Almost none of them have had any interest or training in editing, philology, or textual studies. They quickly learn what they need or want to know about these great elementary forms of our discipline and then pursue their own interests. All this takes place in a high energy feedback loop, so that in 5 years at UVA one has witnessed an extraordinary transformation in humanities education. Now when I advertise for a work-study student for The Rossetti Archive, every person applying already possesses a wide range of computer and even programming skills. These are not people I have met before. The ones I see are all literary scholars.

I tell this brief story because it relates to my first anecdote, about the rejection of my essay by Computers and the Humanities. The very skillful reader of that essay quite rightly pointed out that it lacked certain orderly procedures he expected from a scholarly presentation. Part of the essay was speculative, part was simple report; sometimes the exposition was theoretical, sometimes highly factive. The essay did not keep these different kinds of material separate from each other, but kept moving "back and forth" so much that it became "confusing and hard to follow". Perhaps most exasperating of all, the essay had "no initial overview, and no final summary . . . and . . . no clear evaluation of . . . results". He (or she) was quite right about all this.

Nonetheless, I can't see that the essay would be improved if I tried to reframe it along the lines suggested by the reader. I can see how those kinds of change would obscure the principal thrust of the essay. How do you summarize work that is constantly undergoing change -- not merely quantitative change, additions to the material corpus, but methodological change at various levels of the work's structure? How do you evaluate results that even as they are achieved will generate new problems and possibilities? And how do you present this kind of situation without shifting "back and forth" from speculative to practical considerations, from reports on concrete research activities to discussion of theoretical matters? The "back and forth" dialectic of the essay is a simplified representation of what has been going on day by day in the building of The Rossetti Archive.

I am not saying that this is the way all projects of this kind ought to proceed. It is the way we have been proceeding, deliberately. Our heuristic goal is to publish, in four installments, a hypermedia archive of all of D. G. Rossetti's textual and pictorial works that will be open to structured search and analysis, including collation, of its materials. Our ultimate goal is to study how to model archival instruments of this kind. Building the Archive is important and imperative, but studying the model we are building is far more important.

The primacy we give to this investigative process explains why we have found the presence of new students so crucial. For the truth of the matter is that none of us know what can or might come out of our engagement with these new tools, or how they might be exploited. We know some things but much remains an obscure object of desire in a sense Buñuel did not intend. And what we do know as often as not gets in the way of what we might learn. Does anyone here really believe that we will not find ways to lift the information in digital images so that they lay themselves open to structured search and analysis? No one can do that now but it will happen, beyond question.

It will happen in "the old fashioned way", by application and study. As that event approaches, who knows where or when, the studies that pursue it metastasize like cancer. This is not a growth we want to cut away or cure. Students come to our Institute and get hired into a project, say The Blake Archive. They help build it but in the process they are traitorously spying on the work and stealing away with novel ideas. Some get poured back into the project, but the project is an old wine skin and won't hold the new vintage (my metaphors are perhaps out of control at this point -- metastasized). I'm not inventing this narrative, any more than I invented my initial anecdote. The Blake Archive had and still has a spy, a young scholar named Matt Kirschenbaum, presently known as the "Project Manager" of that Archive. But he has a secret life that is gradually leaking out. He is the author of a thesis being completed in our English Department on the theory of the visual design of information. It is a remarkable work. It will be the first thesis completed in our department that will have no paper-based existence. It is entirely digital and online.

There are spies all over The Rossetti Archive.

"Great. Let a thousand flowers bloom. But let them bloom in Media Studies, in Information Technology, among librarians and archivists. We have poetry to think about. And our concern is with books. Whatever circuitries come with this brave new world, we will still have the books to deal with."

What I relate here is only one person's story so I can't expect to persuade you to believe what I have found to be true: that these tools are opening the books handed down to us in startlingly new and informative ways. They are utterly transforming editing and textual scholarship, which are the foundation of all literary interpretation and cultural studies. The Rossetti Archive and works like it were not undertaken to study computer hardware or software but poems, stories, translations, pictures, and photographs, as well as the vehicular forms in which they live and move and have their being. I am continually amazed by the unforeseen results this work seems to discover, as it were randomly. The last section of the essay I wanted to publish in Computers and the Humanities -- the part with the digital images -- centered in some experiments I was carrying out with those images. The experiments involved deforming the image of a recognized and highly overdetermined object -- in this case Rossetti's famous painting The Blessed Damozel.

What became an experiment had begun as recreation with a friend, just fooling around with Adobe Photoshop. As we played with random deformations I began to see the painting in entirely new ways. The deformations, I have since come to realize, were breaking down the rhetorical authority of the "finished" picture and allowing certain of its concealed features to emerge. I showed this work to a graduate student, a poet, who was writing her thesis with me. I knew it would interest her because her thesis was concerned with the problem of how to develop lucid explanations of poetic style, and she had been exploring the use of textual deformations as an instrument of stylistic analysis. Since that time we have written an essay together, "Deformance and Interpretation", that will appear shortly.

But why did I include that material in my Computers and the Humanities essay, whose subject was (according to the essay's subtitle) "The Theoretical Goals of The Rossetti Archive"? However interesting it might be in its own right, surely that material had nothing to do with that stated subject. Which is true enough, in one perspective, as my reader pointed out. But in another perspective -- in my perspective -- it had everything to do with the Archive and its goals. As I have said, building the Archive is important but studying the process and extrapolating its results are what drives the work. Those results often have nothing at all to do with the Archive, and least of all with computers or their software. People who work in the "Humanities" with computers want to stay aware of that, it seems to me.

Let me conclude with this: I've spent much of my working life as an editor and textual theorist, but it took this encounter with electronic instruments to overthrow what I thought I knew about books, what they are, how they work. I know, in part, why this has happened. Because computers are stupid. The idea of the computer is simple and profound, and I know -- I've been told -- there are very smart machines out there. They're not the ones I work with or ever will work with. All the computers I know are dull and unimaginative. To make them do anything at all you must supply incredibly precise and above all unambiguous instructions. You must not write poems to them -- though you can, I know for a fact, make them reveal things about poems that people find interesting. Trying to be computer-precise about things like poems and paintings and books, things immensely more complicated than computers, is a merciless experience. It forces you to look at your thinking, and your ideas, with unaccustomed rigor. In the event, it may lead you toward "imagining what you don't know".

That was the title of the genuine rejected article, by the way.