*1. the constructive arts in general textuality; especially the art of making things that have both beauty and usefulness. 2. geology that has to do with text structure. (Webster's New World Dictionary, revised)


      Jerome McGann


In play, there are two pleasures for your choosing,

The one is winning, and the other, losing.

                        Byron, Don Juan, Canto


He saw his education complete, and was sorry he ever began it.  As a matter of taste, he greatly preferred his eighteenth-century education when God was a father and nature a mother, and all was for the best in a scientific universe.  He repudiated all share in the world as it was to be and yet he could not detect where his responsibility began or ended.

                        Henry Adams, The Education of Henry Adams, chapter 31 (1907)



            I'll come back to Byron later.  Let me begin with Adams, whose urbane pessimism gets summarized in that late passage from his famous autobiography.  An education ought to make one ready for life, but Adams' education has turned out a kind of black comedy.  His humanistic training has left him unprepared for the dynamo of the twentieth century, which he is able to grasp only in its arresting superficies: in its images, as he tells us in his penultimate chapter -- not in its gritty fundamentals.  He sees only what is happening; he knows he has not seized it.  So he joins the coming race as an observer, a scholar, or what he calls an "historian".  But "all that the historian won was a vehement wish to escape".

Today, as we pass through a similar historical moment, a moment even more wrenching for a humanist than Adams' moment, The Education seems especially pertinent.  We don't want to guide our passage through this moment with tabloid reports like The Gutenberg Elegies, which supply us with a cartoon set of alternatives.  Information technology comprises an axis of evil that Birkerts advises us to "refuse".  We can no more "refuse" this digital environment than we can "refuse" the empire our country has become.  We may well feel "a vehement wish to escape" both of these unfolding, and closely enfolded, histories, but we would do better to recall that we are characters in these events and so bear a responsibility toward them.

And there precisely we find Henry Adams waiting for us, caught between two worlds.  Not between a dead world and a world powerless to be born, however, but between two living worlds, one relatively young, the other ancient.  He neither abandons the one nor refuses the other.  The positive revelation of his great book tells us that we all always inhabit such a condition.  At certain historical moments that universal experience seems especially clear, and certain figures come forward to render an honest accounting. 

The book also tells a cautionary tale, however, which is the second gift it passes on to us.  If the dynamo and the Virgin each have their humanities in Adams' view, he represents himself as the Nowhere Man.  Not that he takes no action, but that he restricts his action to honest reporting.  As a consequence, both Virgin and dynamo emerge from his book as mysterious forces -- in fact, as those "images" which so preoccupy and immobilize him throughout his book.

I was asked to speak here today on the subject of "Where Will Information Technology Leave Humanities Education Five/Ten/Twenty/. . .N Years from Now?"  The question implicitly asks for something more than an honest report.  Reading Adams helps me remember what at my better moments I know: that I have little reason for confidence in my understanding, and least of all in any prognostic powers.  But he also reminds me that I do have hopes, as well as a few convictions about what we should look to be doing to shape those imagined futures ahead of us. 

So let me begin with a conviction: that we have to carry out what Marxist scholars used to call "the praxis of theory" -- or as the poet better said, we must learn by going where we have to go.  Involved here are two hard sayings that can no longer be fudged or tabled.  First, integrating digital technology into our scholarship will have to be pursued on as broad a scale as possible.  Circumstances are such that this work can no longer be safely postponed.  Second, we have to restore textual and bibliographical work to the center of what we do. 

"What are you saying?  Learn UNIX, hypermedia design, one or more programming languages, or textual markup and its discontents?  Learn bibliography and the sociology of texts, ancient and modern textual theory, history of the book?"  Yes, that is exactly what I am saying.  And of course you ask why.  At this point I give only one reason, though by itself -- if we draw out its implications -- the reason will more than suffice: because digitization is even now transforming the fundamental character of the library, the chief locus of our cultural memory as well as our central symbol of that memory's life and importance.  That transformation is already altering the geography of scholarship, criticism, and educational method throughout the humanities, and it forecasts even more dramatic changes ahead, as I shall indicate later.  Moreover, the shifting plates are already registering on the seismographs.

Let's begin at that point, with the signals coming from current, well-known events.  First of all, some happy signs of the times.  Already the library's reference rooms are well along toward virtually complete virtualization, and it's difficult to believe any scholar regrets this.  The transformation reflects the relative ease with which expository and informational materials translate into digital forms.  To have those resources immediately available, wherever you might choose to set up your computer and go online, is a clear gain, and for older persons, an amazement.  Such things can turn the soberest scholar into a digital groupie.  Young persons tend to take such marvels for granted.

We want to cherish that generational difference when we begin to pick up on some other, less happy signals.  A grace of time is playing through the difficult period humanities education is now experiencing.  But time takes time, and some serious problems are short term, even immediate.  A widespread malaise has been notable in our discipline for at least a decade, particularly among those heavily invested in humanities research education.  One of the sources of this malaise -- it has many -- was addressed in a special letter sent to the members of the MLA last May by Stephen Greenblatt, the organization's president.  Greenblatt pointed to publishing conditions that make it difficult or even impossible for young scholars to meet current standards for tenure in research departments of literature.  He called the problem, correctly, a "systemic" one.  A network of relations has long bound together the work of scholarship, academic appointment, and paper-based -- in particular, university press -- publishing.  This network has been breaking up, or down, for many years, and the pace of its unraveling has recently accelerated.  In a grotesque inversion of our most basic goals, near-term economics, not long-term scholarship, has been a serious factor in humanities research for some time.  Just try to find a publisher for primary documentary materials, or for any basic research that doesn't come labeled for immediate consumption: "Sell this by such and such a date" -- before it spoils.

Do you see a digital savior waiting to descend?  Do you think I see this redeemer?  Well, I don't.  But I think I do see that these broad institutional problems intersect with the emergence of digital technology, and that we won't usefully address the former unless we come to terms with the latter.  The engagement won't solve our problems, but it will help us to see them more clearly.  Let me explain by recalling briefly a related part of our recent institutional history.

For as long as I've been an educator -- since the mid-1960s -- a system of apartheid has been in place in literary and cultural studies.  On one hand we have editing, bibliography, and archival work; on the other, theory and interpretation.  I don't have to tell you which of these two classes of work has been regarded as menial, if somehow also necessary.  And as in any system of apartheid, both groups were corrupted by it.  As Don McKenzie once remarked, material culture is never more grossly perceived than it is by theoreticians, whose ideas tend to remove them from base contacts with the physical objects that code and comprise material culture.  But of course, as he went on to remark, the gross theoretician met his match in the myopic scholar, who gets lost in the forest by trancing on the bark of the trees.

To this day at my own university -- an institution known for its commitment to serious work in textual and bibliographical studies -- most of our advanced graduate students could not talk sensibly, least of all seriously or interestingly, about problems of editing and textuality and why those problems are fundamental to every kind of critical work in literary and cultural studies.  I no longer ask our students in their Ph.D. exams to talk about the editions they read and use, why they choose this one rather than another, what difference it would or might make.  It goes without saying that these are bright and hard-working young people.  Nonetheless, the institutional tradition they have inherited largely set those matters at the margin of attention, and never more unfortunately so than in the last quarter of the twentieth century.  Until that time the American research program in English studies regularly made history of the language, editing, and bibliographical studies a requirement of the work.  I know from my own, painful experience that these requirements were often taught in killingly mindless ways, reinforcing our sense that they had nothing to teach us about literature, art, and culture -- either of the past or the present.  As we all know, in our country these requirements were universally dropped or eviscerated between about 1965 and 1990.  (In England and Europe the situation is very different.  Highly developed philological traditions permeate their scholarship.)

When I have described our recent educational history in these terms, I have been suspected of fellow-traveling with a cadre of moralizers and promoting an instrumentalist approach to education.  But remember, Bennett, Bloom, D'Souza, and Lynne Cheney are not enemies of theory or interpretation; they are simply strict constructionists in a field where Cornel West, Catharine Stimpson, Edward Said, and Stanley Fish look for broader intellectual opportunities.  Seeing the educational history of the past 15 or 20 years in terms of the celebrated struggles between these groups has obscured our view of an educational emergency now grown acute with the proliferation of digital technology.

I said before that I'm no haruspicator, but here at last I'm prepared to make a couple of prophecies.  First, this.

1.  In the next 50 years the entirety of our inherited archive of cultural works will have to be re-edited within a network of digital storage, access, and dissemination.  This system, which is already under development, is transnational and transcultural.

            Let's say this prophecy is true.  Now ask yourself these questions: "Who is carrying out this work, who will do it, who should do it?"  These turn into sobering queries when we reflect on the recent history of higher education in the United States.  Just when we will be needing young people well trained in the histories of textual transmission and the theory and practice of scholarly method and editing, our universities are seriously unprepared to educate such persons.  Electronic scholarship and editing necessarily draw their primary models from long-standing philological practices in language study, textual scholarship, and bibliography.  As we know, these three core disciplines preserve but a ghostly presence in most of our Ph.D. programs.

            The design and execution of editorial and archival projects in digital forms are now under way and will proliferate.  Departments of literary study have perhaps the greatest stake in these momentous events, and yet they are -- in this country -- probably the least involved.  The work is mostly being carried out by librarians and systems engineers.  Many, perhaps most, of these people are smart, hardworking, and literate.  Their digital skills and scholarship are often outstanding.  Few know anything about theory of texts, however, and they too, like us literary and cultural types, have labored for years in intellectually underfunded conditions.  It has been decades since library schools in this country taught courses in the history of the book.  Does it shock you to learn that?  We aren't shocked at our own instituted ignorance of history of the language or bibliography.

Restoring intimate relations between literarians and librarians, a pressing current need, has thus been hampered by institutional developments on both sides.  Insofar as departments of literature participate in the work and conversations of digitized librarians, it happens through that small band of angels who continue to pursue serious editorial and bibliographical work: scholarly editors and bibliographers.

            Ok, then, what's the problem?  Our traditional departments have managed to keep around a few old-fashioned editorial and bibliographical types.  Let's send them out to help with the technical jobs and hope that their (that's our) brains aren't completely fried by beetle-browed and positivist habits.  Once upon a time even they (that's we) were involved with the readerly text, right?

Those contacts might perhaps prove barely sufficient were it not for another recent upheaval in the world of higher education.  For it happens that between about 1965 and 1985 textual scholars began to rethink some of the most basic ideas and methods of their discipline.  I chose those dates because E. A. J. Honigmann published The Stability of Shakespeare's Text in 1965, and in 1985 D. F. McKenzie delivered his famous inaugural Panizzi Lectures, Bibliography and the Sociology of Texts (published 1986).  So disconnected had the general scholarly community grown from its foundational subfield of textual and bibliographical studies, however, that this historic moment passed it by with little notice.  The "genetic" and "social" editing theories and methods that emerged in those years signaled a major shift in literary and cultural scholarship.  Because this change overlapped with the more public emergence of what would be called Literary Theory -- perhaps "underlapped" is the better word -- it drew scant attention to itself in that more visible orbit of literary and cultural studies.

A publication scheduled for later this year measures the change that overtook textual scholarship at the end of the last century.  In 1982 Harold Jenkins published his celebrated edition of Hamlet in the Arden Shakespeare series.  A lifetime's work, the book epitomized a traditional, so-called eclectic approach whereby Jenkins educed a single text of the play out of a careful study of the three chief documentary witnesses.  At the end of this year a new Arden Shakespeare Hamlet, edited by Ann Thompson and Neil Taylor, will replace Jenkins's remarkable work.  The new Arden Hamlet will not publish a single conflated text; it will present all three witnesses -- F1 (1623), Q1 (1603), and Q2 (1604-5) -- each in its special integrity (or lack thereof).

The New Yorker magazine reported this event in a substantial piece by Ron Rosenbaum in its 13 May issue.  The article gives a good general introduction to an upheaval in textual studies that had been going on for almost 40 years, and that had been at white heat for 20.  Because the world of scholarship moves in a kind of slow motion -- this remains true even today, odd as that may seem -- such belated awareness would not normally be cause for much notice.  But at this particular historical moment, when information storage and transmission and methods of knowledge representation are calling for immediate practical attention, Rosenbaum's piece seems most interesting for what it does not talk about.  Force of circumstance today calls us to develop scholarly editions in digital forms.  The people who have done this work in the past in paper forms -- people like Jenkins and Thompson -- are involved in serious controversies over how it should be done.  The theory and practice of traditional textual scholarship is in a lively, not to say volatile, state of self-reflection.  Scholarly editing today cannot be undertaken in any medium without a disciplined engagement with editorial theory and method.  Scholars who think to use information technology resources, as now we must, therefore face a double difficulty.  We must learn to use digital tools whose capacities are still being explored in fundamental ways even by technicians.  We must also approach all the traditional questions of scholarly editing as if a transformed world stood all before us, and where to choose was fraught with uncertainty.  Fortunately, the way will not be a solitary one.


            To clarify our situation let me rehearse two exemplary recent events.  My own work was drawn into the gravity field of both. 

"Social text" theories like D. F. McKenzie's implicitly call for their practical implementation.  In literary and cultural studies, this means one thing: the transformation of a discursive presentation like McKenzie's lectures into that determining instrumental form of all literary and cultural studies: the scholarly edition, or -- more ideally still -- an all-purpose model for editing.  Whether model or exemplar, however, such a work is nothing more or less than a machine into which multiple forms of readerly texts have been abstractly reduced, and (therefore) out of which multiple forms of readerly texts might be regenerated.  Traditional text scholars in the mid-1980s charged social theories of textuality with collapsing an essential distinction between empirical/analytic disciplines on one hand, and readerly/interpretive procedures on the other.  In his Panizzi lectures McKenzie rejected the distinction and showed by discursive example why it could not be intellectually maintained.

The distinguished textual scholar T. H. Howard-Hill replied that while views like McKenzie's were all very well in a theoretical sense, they could not be implemented in a practical way.  That is to say, you could not translate such ideas into a scholarly edition.  His point was well taken in a paper-based context.  Social-text editing proposals commit one to editing books rather than texts -- an unfeasible idea in a paper-based view, as Howard-Hill insisted.  But digital technology makes such an approach to editing a realizable imagining.  One can in fact transform key social and documentary aspects of the book into computable code. 

A central purpose of The Rossetti Archive project was to prove the correctness of a social-text approach to editing -- which is to say, to push traditional scholarly models of editing and textuality beyond the masoretic wall of the linguistic object we call "the text".  The proof of concept would be the making of the Archive.  If our breach of the wall was minimal, as it was, its practical demonstration was significant.  We were able to build a machine that organizes for complex study and analysis, for collation and critical comparison, the entire corpus of Rossetti's documentary materials, linguistic as well as pictorial.  We were able to draw these documents into computable synthetic relations at macro as well as micro levels.  The Archive allows you to study the relationships between whole documents and between the specific elements of which they are composed.  In the process the Archive also discloses the hypothetical character of the physical documents and of their component parts.  Though completely physical and measurable, neither the documents nor their parts are self-identical; all can be reshaped and transformed in the environment of the Archive.

Don't misunderstand me.  Our successes, as I say, have been minimal, and some of our greatest hopes for the Archive have not been realized.  Nonetheless, the proof of concept was a crucial break with tradition, freeing us to imagine what as yet we don't know: how to build much better and more sophisticated machines of this kind.  Building the Archive, for instance, has brought me to realize a possibility for these kinds of instruments that stared us all in the face from the beginning, but that none of us thought to try to exploit.  A critical edition can clearly be built in digital form that allows a dynamical tracking and analysis of that recent literary discovery, the "readerly text".  This clearly also means that the fundamentally dynamical character of the textual condition can be digitally realized: the dialectic of the field relations between the history of the text's transmission and the history of its reception.

In a late lecture, "What's Past is Prologue", McKenzie speculated briefly on computerization and textual criticism.  His remarks came in the context of two ways that scholars were using digital tools: on one hand for the electronic storage of large corpora, on the other for the dynamic modeling of textual materials.  McKenzie saw the latter as the more interesting prospect, even if it would "represent a radical departure" from his central "article of bibliographical faith": "the primacy of the physical artifact (and the evidence it bears of its own making)".  (That is quintessential McKenzie: entertaining an idea that shook the ground beneath a cherished conviction.)

Had he become more involved with the making of electronic editions, I believe McKenzie would have realized that, far from departing radically from such primacies, digital tools return us to them in the ways he found most interesting.  For "the physical artifact" and "the evidence it bears of its own making" are both social in the sense that such objects, in particular such bibliographical objects, have been made and remade many times in their socio-historical passages.  No book is one thing; it is many things, fashioned and refashioned repeatedly under different circumstances.  Its meaning, as Wittgenstein would say, is in its use.  And because all its uses are always invested in real circumstances, the many meanings of any book are socially and physically coded in and by the books themselves.  They bear the evidence of the meanings they have helped to make.

One advantage digitization has over paper-based instruments comes not from the computer's modeling powers, but from its greater capacity for simulating phenomena -- in this case, bibliographical and socio-textual phenomena.  Books are simulation machines as well, of course, with hardcoded machine languages (we call those typography and graphic design) and various softwares (modes of expression -- expository, hortatory, imaginative -- and genres).  The hardware and software of book technology have evolved into a state of sophistication that dwarfs computerization as it currently stands.  In time this discrepancy will change, we can be sure.  McKenzie probably saw the computer as a modeling machine because of his attachment to "the primacy of the physical object".  Computers can be imagined to make models of such primary, self-identical objects.  But suppose, in our real-life engagements with those physical objects, we experience them as social objects, and hence see their self-identity as a quantum condition, a function of the measurements we choose to make for certain particular purposes.  In such a case you will not want to build a model of a made thing; you will try to design a system that can simulate every realizable possibility -- the possibilities that are known and recorded as well as those that have yet to be (re)constructed.

McKenzie's central idea, that bibliographical objects are social objects, begs to be realized in digital terms and tools.  The Rossetti Archive proves that it can be done.

My second example is a cautionary tale that illustrates how that realization can get sidetracked or blocked by a failure to think in clear ways about theory of textuality.  The focus of this example is the TEI, the "Text Encoding Initiative", which describes itself as follows:


Initially launched in 1987, the TEI is an international and interdisciplinary standard that helps libraries, museums, publishers, and individual scholars represent all kinds of literary and linguistic texts for online research and teaching, using an encoding scheme that is maximally expressive and minimally obsolescent. (http://www.tei-c.org/)


Still an invisible or ghostly presence for many if not most humanities scholars, TEI has become a widely accepted standard for creating electronic texts that require scholarly reliability.  It is an inline marking system designed specifically for humanities documents.  TEI defines in a precise way an elaborate set of textual information fields, so that a computer can search and analyze the texts with respect to those defined fields and extract the marked or "structured" information.
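A minimal sketch may make the point concrete for readers who have never seen markup at work.  The fragment below is a simplified, hypothetical TEI-style encoding (not a validated TEI document, and the tag names are illustrative only); once the verse lines are marked as fields, a few lines of ordinary code can search the text and extract the structured information:

```python
# Sketch: extracting marked "fields" from an inline-encoded text.
# The fragment is a simplified, TEI-like encoding -- an assumption for
# illustration, not a conformant TEI document.
import xml.etree.ElementTree as ET

fragment = """
<text>
  <body>
    <lg type="stanza">
      <l n="1">In play, there are two pleasures for your choosing,</l>
      <l n="2">The one is winning, and the other, losing.</l>
    </lg>
  </body>
</text>
"""

root = ET.fromstring(fragment)
# Pull out every marked verse line, in document order, with its number.
lines = [(l.get("n"), l.text) for l in root.iter("l")]
for n, text in lines:
    print(n, text)
```

The same principle scales up: whatever the encoding scheme chooses to mark -- stanzas, speakers, dates, emendations -- becomes computable in exactly this way.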

            I'm not going to rehearse the problems that have arisen in implementing a TEI approach to machine-readable texts.  These were initially aired by the creators of TEI themselves, and subsequent criticisms have confirmed and refined the difficulties.  More important to see is the level at which these problems are situated.  TEI's greatest legacy is the demonstration it makes of its own inadequacy as a means for computerizing the information content of humanities materials. 

TEI understands a text to be "an ordered hierarchy of content objects".  This is the same understanding that generated TEI's parent, Standard Generalized Markup Language (or SGML).  The view has been criticized, by myself and others, as inadequate for representing the character of poetical and imaginative texts, which mix and overlap various kinds of hierarchical and nonhierarchical features.  The criticism, while fairly made, falls far short of exposing the deep inadequacy of an SGML/TEI approach to textuality in the context of digital instruments.  It is a criticism, for instance, which can go on to point out -- as I have done elsewhere -- that if TEI will not do as a markup system for imaginative texts, it will serve nicely for informational texts.  That opportunistic position licensed what we did with The Rossetti Archive: we used TEI to mark up our informational texts, and we developed a special SGML design for all of the Archive's other materials, documentary as well as visual.
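The overlap problem can be shown in miniature.  Suppose, hypothetically, one wants to mark both verse lines and a sentence that runs across a line boundary: the two structures cross, a single ordered hierarchy cannot hold them, and a conforming parser must reject the encoding outright (the tag names here are again illustrative):

```python
# Sketch of the overlapping-hierarchies problem: a sentence (<s>) crossing
# a verse-line (<l>) boundary cannot nest inside one ordered hierarchy,
# so a strict XML parser rejects the overlapped tags.
import xml.etree.ElementTree as ET

overlapping = "<l>The one is winning, <s>and the other,</l><l>losing.</s></l>"

try:
    ET.fromstring("<lg>" + overlapping + "</lg>")
    well_formed = True
except ET.ParseError:
    well_formed = False

print(well_formed)  # the overlapped encoding is not well-formed
```

In practice encoders work around the crossing with devices like milestone (empty) tags or fragmented elements, but the workaround concedes the point: one of the two structures is demoted from the hierarchy.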

But now that we have built the Archive to those design specifications, we can see more clearly the poverty of the result.  At such moments Byron's comic wisdom helps you to keep your feet.  "In play, there are two pleasures for your choosing,/ The one is winning, and the other, losing".  The pleasure of losing is what John Unsworth has called, not quite so charmingly perhaps, "The Importance of Failure".  The best kinds of defeat come in games that are intense and interesting.  Those are the defeats that make you pay, and therefore make you pay attention.  Their mythic exemplar is probably the expulsion of Lucifer, the archangel of light and knowledge, from heaven.

As I reflected several years ago on the state of The Rossetti Archive, I could see how various practical demands had compromised our initial commitment to the idea of the social text.  Most waylaying was our focus on the system's logical design, to the neglect of its interface.  We wanted to build a structure that would be, as the digitists say, "bullet-proof" so far as the fast-changing world of hardware and software was concerned.  Amazing as it may seem, for six years we built the Archive piece by piece and file by file without ever actually seeing anything of the whole except its abstract form: the hypermedia organization of its SGML file structures.  The Archive was a soul without a body.

In building digital editions, McKenzie's idea of the social character of physical objects must be held fast.  To define a document as a text, as SGML/TEI does, is to follow the rationalist line of textual/bibliographical thinking that McKenzie's work fractured.  By contrast, regarding textual documents as physical objects prepares you to develop mechanisms that expose their status as social objects.  This is true because physical objects, as McKenzie argued, bear the manifest signs of how and where and by whom they were made.  In addition, and reciprocally, physical objects signal their immediate social condition.  We can think about ideas and take our solitary way with them.  If we fetishize the physical object, we can do the same.  But there the move is less easily made, because physical objects carry manifest signs of their public and social relations.  They have to be handled -- that's to say, used and interpreted -- with others, in institutional space and in physical ways.

An idea of a Rossetti Archive is not enough; you actually have to make the thing as a physical object.  Until you do that you are doomed to what E. P. Thompson once called "the poverty of theory".  By postponing -- in truth, neglecting -- interface design in favor of logical design, the Archive weakened its ability to realize the sociological character and meaning of the physical (social) objects it meant to process.  It's easy to see why this result comes about.  Logical design is grammar; interface design is rhetoric.  Interface enables and reflects the reader's active presence; it is the environment where readers live and move and have their being in digital simulations.



Inadequate as a model for bibliographical things, an SGML/TEI theory of textuality is even less adequate to the processing capabilities of digital instruments.  To program a digital information system for hierarchically ordered content objects is to shortchange from the start the simulation capacities of the system.  In text-critical terms, it is to design a system that will edit -- that will deliver for our use -- "texts", not "books".

These new critical instruments will not suffer for long that kind of dumbing-down.  We can fashion them to reconstruct an integrated interpretive network of sociological relations for books and other semiotic objects.  One type of content object in such a network will be "texts" -- that is, linguistic objects formally, as opposed to dialectically, conceived.  But we will be calling on these networks to integrate larger masses of different kinds of materials.  They will include even more than the bibliographical objects cherished by McKenzie's great Newtonian imagination.  I have in mind here what is implicit in the term "interactive", so often -- and rightly -- applied to digital environments.  The critical edition built in digital space interpellates the user as an essential and computable element in the system.  The logs that automatically track system usage have scarcely begun to be exploited or critically organized.  Skillfully organized, they will develop feedback loops within the network, augmenting the autopoietic mechanisms that are to this point only latent capacities of such systems.

Literary scholars should begin undertaking the serious study of interface design as a necessary modeling preliminary to such work.  Interfaces are the mirrors that these systems hold up to their imagined users.  Even now we can see -- theoretically -- that the ideal interface should be as user-specific as possible -- more than that, as use-specific as possible, for individuals coming to these works may arrive each time with different objects in view.  Designing interfaces that are at once stable and flexible, stimulating as well as clear, is one of the two most demanding tasks -- in both senses of "demanding" -- now facing the scholar who means to work with digital tools.  The interfaces should make it clear that when we use a particular machine, we are called to rethink -- to change -- the territories it initially maps for us.  We have therefore to see from the initial maps that those maps are not precepts but examples of understandings -- that they exist to encourage other kinds of mappings and explorations of the material.

            The second task insistently before us involves what a traditional humanist would probably see, in no happy Blakean sense, as a marriage of heaven and hell.  The work calls together the heavens of literary interpretation and meaning, and the hells of statistics and quantum mechanics.

I can best explain what I mean by reading a passage from a book I recently published:


To date, digital technology has remained instrumental in serving the technical and pre-critical occupations of librarians and archivists and editors.  But the general field of humanities education and scholarship will not take the use of digital technology seriously until one demonstrates how its tools improve the ways we explore and explain aesthetic works -- until, that is, they expand our interpretational procedures.   (Radiant Textuality xii)


I've spent most of my time this afternoon trying to indicate why and how scholarly editions, whether paper or digital, are not the pre-critical objects that many, probably most, humanities scholars take them to be.  That's a theme I've been worrying and preaching for more than 20 years, which is perhaps a sobering comment on my powers of persuasion.  However that may be, the theme returns in this context because most humanists take a similar view of information technology and its relation to the interpretation of cultural works.  Computers, which are machines for counting, are the children of a recent science, statistics.  Most people who love the humanities hate statistics and so, as with the devil at baptism, we renounce statistics and all its works and all its pomps.  At any rate, the statistical devil has been renounced for us by our elders -- those "wise guardians of the poor" children who march past us in Blake's wry little poetic treatise "Holy Thursday".

Computers are the work of a statistical devil.  Yes, they are.  But like Blake's Milton in his great poem so titled, we're ready for a satanic flight from the unhappy hermeneutical heavens where we've "walked about" -- amazing Blakean prophecy -- for "One hundred years, pondering the intricate mazes of [the] providence" that has kept us there.

Two years ago Johanna Drucker and I began entertaining a way to escape.  We would do it with a digital environment we called IVANHOE -- named after the once celebrated and massively influential bibliographical romance by Walter Scott, long since, alas, fallen on evil days and evil tongues.  We imagined a digitized textual environment -- more than that, a discourse field of indefinite extent -- which scholars would enter and engage much as people enter and engage with computer games.

You don't perform statistical analyses when you play computer games.  You let your servants, the computers, do that for you.  And the same is true in IVANHOE, which is a dynamic field where human persons interested in questions of meaning use computational tools to pursue those interests.  Digitization is a useful adjunct in this situation for two reasons: first, it can simulate in computable forms a wide variety of informational materials -- books, maps, pictures, and so forth -- that are the traditional focus of our acts of interpretation; second, it can store massive corpora of such materials and then retrieve, reorganize, and be made to transform the data and the data simulations.

I've talked often here and abroad about how IVANHOE actually works as an interpretational procedure.  Different groups have "played" IVANHOE -- if "playing" is the right term -- a number of times, including groups of seventh-grade students, college undergraduates, graduate students, and senior humanities scholars that included myself and Johanna Drucker.  The discourse fields have centered in works like Wuthering Heights, Frankenstein, Ivanhoe, and "The Turn of the Screw".  This fall I've brought an elementary model of IVANHOE into a graduate class to test its capacity for enhancing interpretational scholarship in a formal context of graduate research.  We're focusing IVANHOE on two distinct scholarly problems: to investigate issues of text and interpretation in Blake's The Four Zoas; and to study a set of D. G. Rossetti's so-called double works in the context of received scholarly ideas about Victorian and Modernist aesthetics.  This is the first time we've tried to use IVANHOE as a tool for advanced scholarly research.

Today I don't want to talk about IVANHOE in those operational terms.  I've brought a synopsis for anyone who wants to know a little more about how IVANHOE has been played, along with some bibliographical references for further information.  But today I'd like to speak instead about some broad issues of humanities research and scholarship that this digital tool -- perhaps it is a toy -- is raising.

As everyone knows, the scale of information that scholars today are required to negotiate is enormous.  Digital instruments have themselves generated -- and regenerated -- this information in such massive quantities that researchers for some years now have been trying to build quantum computers to handle it.  Libraries and museums gather and organize traditional humanities materials in the same way, integrating our received corpora of physical objects like books with our emerging digital corpora.  This ever-unfolding informational Archive represents a meta discourse field, a set of all sets within which we distinguish at our will and choice subset discourse fields that interest us.

When a humanist asks "What is this exploding Archive, what is happening here?" part of the answer is: that through such an Archive we expose ourselves to ourselves, and our world to itself, in unimaginable depth and detail.  But how can we possibly see ourselves or our world -- those foundational humanities goals -- in such an informational whiteout?  Henry Adams' "vehement wish to escape" -- a wish he does not indulge, let us remember -- turns into Sven Birkerts' advice of refusal.

That very bad advice does little justice to the power and usefulness of the book, which has been our simulation machine of choice for centuries.  Now more than ever we want to study the complex mechanisms of book technology in order to design digital environments of comparable sophistication.  Think how brilliantly the bibliographical interface organizes our reflective and perceptual experience.  It can hold large amounts of different kinds of data and information.  At the same time, it sends a clear message that such materials, however rich and strange, are integrated and negotiable.  It facilitates many ways of passaging and repassaging its materials, and of hyperlinking to related materials in and out of books.  It leaves us free to understand each in our own ways, and it supplies a bibliographical network ready to receive and feed those diverse readings back into the emergent discourse field.

Compared with that, contemporary digital interface design often seems -- often is -- less help than hindrance.  Bibliography and the sociology of texts are key points of departure for anyone who wants to understand and design digital environments.  Reciprocally, digital environments expose the bibliographical discourse field in important new ways.  Hypertext, cybertext, ergodic literature: it's true, we have always already been there in our traditional literary forms and functions.  But the common reader's view of these comparable technologies is important to remember.  People generally see digital objects operating at a vastly larger informational scale than books.  That scalar differential, which is both real and apparent, is important less for its reality than its apparency.

I'm not simply being paradoxical in saying that.  The apparition of nonhuman scales of reality in digital environments is what human beings must translate into human scales of understanding and perception.  That task returns us to the reciprocal apparencies of a bibliographical scale of reality -- arriving where we started, but now beginning to know that bibliographical place for the first time.

Physicists tell us that a quantum world thunders silently beyond (or below) our human scale of perception.  It is a world full of contradictions where everything is as it is perceived, and so everything changes depending on where and how and why you choose to take your observations.  In one perspective photons are wave functions, in another they are particles.  It is a world of random order and disorder.  We were only finally able to make contact with this world after the invention of statistical mathematics.  To the end of his life Einstein disbelieved in the reality of quantum worlds, maintaining they were nothing more than a set of (more or less useful) mathematical functions.

Reality or apparition, a quantum order of bibliographical objects becomes accessible to us through computerization.  I am not speaking about the physico-chemical makeup of paper objects but of the immense number of dynamic relations and functions that comprise the discourse field of social texts.  We touch the hem of this garment whenever we open a web browser.  The field of textual relations accessible through that digital device is statistically significant at a quantum order.  People are trying to build quantum computers precisely to improve controlled access to that discourse field. 

When such computers are built and made robust enough to be used, history tells us they will have very clumsy interfaces.  In the meantime, we have our hands full trying to design interfaces for our current digital tools and systems.  We must have them in order to translate the computer's statistical operations into terms that our embodied minds can seize, understand, and put to human uses.  The need is especially apparent when the database is a bibliographical discourse field.  The interface we have built for The Rossetti Archive is dismayingly inadequate to the Archive's dataset of materials.  At present the Archive organizes approximately 9,000 distinct files, about half of which are SGML/XML files.  When the Archive reaches its scheduled completion date some four years from now, it will have about twice that many files.  Here is a directed graph of a tiny subset of the Archive's current set of analytic relations.  We call this "Rossetti Spaghetti", and I show it to give you a graphic sense of the scale and complexity of this grain of Rossettian sand on the shore of the internet's opening ocean.  One can indeed, even here, see an infinite world dawning in that grain of sand.

            Or here is a narrative version of the statistical scale of the Archive.  Take those 9,000 files and understand that they are interconnected by a set of some 200,000 hyperlinks.  Then add to your equation the fact that every SGML/XML text file is structurally divided into hundreds of types of divisions.  Finally, factor in the specific divisionary instances that comprise any particular file, which will range from several hundreds to many thousands.  I could ask the server holding the Archive to make the actual counts in each case, but I think you can see the staggering number of possible relationships that the Archive puts into computational play.
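That back-of-envelope reckoning can be sketched in a few lines of code.  The file and hyperlink counts below are the ones cited above; the per-file division count is an assumed illustrative midpoint of the "several hundreds to many thousands" range, not a figure measured from the Archive's server.

```python
# Illustrative estimate of the Rossetti Archive's relational scale.
# Figures marked "cited" come from the talk; the rest are assumptions.
files = 9_000                # cited: distinct files in the Archive
hyperlinks = 200_000         # cited: hyperlinks interconnecting them
divisions_per_file = 500     # assumed midpoint of "several hundreds to many thousands"

# Every structural division is a potential node of relation, so the
# ordered universe of pairwise relations grows roughly quadratically.
total_divisions = files * divisions_per_file
possible_pairs = total_divisions * (total_divisions - 1) // 2

print(f"{total_divisions:,} divisions")
print(f"{possible_pairs:,} possible pairwise relations")
print(f"{hyperlinks:,} of those relations are already realized as hyperlinks")
```

Even with the conservative assumed midpoint, the possible relations number in the tens of trillions, which is the "staggering number" the paragraph gestures toward: the realized hyperlinks are a vanishingly small sample of the computable field.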

            Let me close with what is for me -- a fetishist of imaginative writing, especially poetry -- the most important moral of the whole story: that poems and other imaginative kinds of social texts are quantum fields.  We have said for a long time that their meanings are inexhaustible, but a digital frame of reference helps us to specify more clearly why and how this is the case.  I do not offer this as a useful metaphor but as a fact about the facts comprising poetic discourse fields -- a computable fact.  The implications of that view of social textuality for humanities studies seem to me considerable.  IVANHOE is a first effort to work out those implications for a program of what I. A. Richards once called "Practical Criticism".  Johanna Drucker and I call it 'Patacriticism.  Like Byron's, Jarry's ludic intelligence is (so to say) no joking matter.  From Ubu and Dr. Faustroll emerges an algorithmic form of scholarly method that should be seriously entertained (so to say).  It is, I believe, the only method adequate to the textual condition we now see clearly unfolding before us.