Rethinking Textuality

Jerome McGann

This is a report and an analysis of an experimental project I recently began with Johanna Drucker at U. of Virginia. We have been calling it "Metalogics of the Book". Briefly, it involves using computers to explore how the graphic features of textual documents function in a signifying field. The experiment analyzes and manipulates such graphic features through mark-up protocols (page description languages, tags) and other computer tools such as OCR software.

The point of the demonstration is simple: to show that the rationale of a textualized document is an ordered ambivalence and that this ambivalence can be seen functioning at the document's fundamental graphic levels. By rationale we mean the dynamic structure of a document as it is realized in determinate (artisanal) and determinable (reflective) ways. By "ordered ambivalence" we mean the signifying differentials set in play when a rhetoric of intention is applied to a document. Textual differentials at any level are a function of the effort to control or even eliminate them. <1>

The implications of this demonstration are, we believe, considerable. For our own special field of interest - the study of literary, cultural, and aesthetic works, and especially those deploying textual elements - the demonstration brings a strong argument for the following ideas:

Strictly speaking I probably shouldn't lay out before you at this point that constellation of related ideas. Several were unformulated at the beginning of our work, and at this point they constitute hypotheses for guiding our experiments. They emerged in a determinate form only after some months of preliminary theoretical conversations and initial experiments. A dialectical/experimental process came to clarify and unpack these inchoate ideas and direct the process of exploration. Nevertheless, I place them here so that you can follow and assess the adequacy of what we're doing, and why we came to these first conclusions.

The Initial Experimental Context

The project began out of a general dissatisfaction with two approaches to textuality and text interpretation that have great authority in the community of literary and linguistic scholars. One, a recent power, gained its position with the emergence of humanities computing. The logical system of text markup developed for computers, Standard Generalized Markup Language (SGML), fully represents this view. This hypergrammar treats its documentary materials as organized information, and it chooses to determine the system of organization as a hierarchy of nested elements; or, in the now well-known formulation: "text is an ordered hierarchy of content objects".
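By way of illustration only: the toy document below is mine, not anything drawn from SGML or the TEI Guidelines, but it shows in a few lines of Python what the OHCO formulation amounts to operationally. Under this view a text is captured without remainder by an ordered tree of content objects, which a machine can then traverse.

# Sketch: a text treated as an Ordered Hierarchy of Content Objects (OHCO).
# The document is a toy; each node is (element_type, children_or_text).

DOC = ("book", [
    ("chapter", [
        ("heading", "CHAPTER I"),
        ("paragraph", "A first paragraph of prose."),
        ("paragraph", "A second paragraph of prose."),
    ]),
    ("chapter", [
        ("heading", "CHAPTER II"),
        ("paragraph", "Another paragraph."),
    ]),
])

def walk(node, depth=0):
    kind, content = node
    if isinstance(content, str):               # a leaf: actual text content
        print("  " * depth + f"{kind}: {content}")
    else:                                      # an ordered list of children
        print("  " * depth + kind)
        for child in content:
            walk(child, depth + 1)

walk(DOC)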

This approach to textuality only became problematic when it was undertaken and then implemented by the Text Encoding Initiative (TEI), as several of its principal advocates pointed out in 1993: "the experience of the text encoding community, as represented and codified by the TEI Guidelines, has raised difficulties for the [OHCO] thesis".<2> As we know, TEI set about formulating a special subset of SGML that would be useful, in its view, for encoding cultural documents (as opposed to business and administrative documents) for computerized search and analysis. TEI is now a standard for humanities encoding practices. Because it treats the humanities corpus - typically, works of imagination - as informational structures, it ipso facto violates some of the most basic reading practices of the humanities community, scholarly as well as popular.

The revulsion that many humanists express for the emergence of digital technology reflects this sense that computerized models are alien things: if not alien to imaginative practice as such, then certainly alien to the received inheritance of literature and art.

This traditional community of readers comprises the second group to which our project is critically addressed. For this group textual interpretation (as opposed to text management and organization) is the central concern. In this community of readers, the very idea of a "standard generalized markup", which is to say a standard generalized interpretation, is either problematic or preposterous. The issue hangs upon the centrality of the poetical or imaginative text for cultural scholars, though it applies equally well to students of art, history, anthropology, politics, law and any discipline in which procedural rules of interpretation are perceived as more or less context-based, flexible, manipulable: "reader" or "community" organized - dialectical -- rather than structurally fixed.

But while our specific project consciously addressed this duplex audience of adversaries, it became a practical focus only after a set of (so to speak) pre-historical events had unfolded. Here I speak only for myself, not for Johanna, who was working elsewhere and whom I did not know, though I knew her work pretty well.

The determinate matter here is the project of The Rossetti Archive. This came about in late 1992, about seven years after I had first encountered computerized technology and hypermedia methodology during my tenure at Caltech. At that time I knew I would undertake a project like the Archive should the chance arise. This was a clear decision based on the idea that a hypermedia "edition" or "archive" would make it possible to study literary and aesthetic works in entirely new ways. The innovative possibilities were, in my view, not so much a function of the computational resources of these new tools. Two other matters interested me more. First, digital imaging resources offered hopes that students would be able to carry out their studies in a more direct relation to primary documentary materials. This was important to me because my previous work as an editor of such materials had shown me that what traditional interpretation sought as "meaning" in a text was always deeply funded in a text's material features.

When we began building The Rossetti Archive in 1992, I was introduced to SGML and TEI as tools for enhancing the analytic power of the Archive's resources. Their usefulness became apparent fairly quickly, and so, with the help of colleagues who had a deep understanding of logical markup forms, we created a specialized version of SGML to organize the data in the Archive.

Then followed seven years of practical implementations of our initial plans and ideas. These were years filled with those splendid, even ravishing enlightenments that only come when your plans and ideas are thwarted and overthrown. "In a dark time", as Theodore Roethke famously wrote, "the eye begins to see".

So, as we proceeded with the practical construction of the Archive we began to see the hidden fault lines of its design structure. As I've written about this matter elsewhere, I won't go into it here, except in one respect that relates to the subject of our workshop. I refer to the effect that this discovery of our errors had on our work in general. What began as a project to put out a certain product - an image-based design for electronic editing that would have wide applicability - bifurcated. Our initial purpose acquired a new one: to use the Archive's process of construction as a laboratory for reflecting on the project itself. That second purpose led inevitably to a regular set of critical inquiries into the basic organizing ideas of the Archive and its procedures. This new set of interests inevitably delayed the appearance of the Archive itself - a frustrating event in certain respects, but immensely fruitful in others.

One result of these new interests is the "Metalogics of the Book" project that Johanna Drucker and I are now involved with. In my case the project emerged directly from a reflection on three types of problem that the building of the Archive exposed. The first, which I've already noted, involved the weaknesses in "the OHCO thesis" of textuality that we found when implementing the Archive. The second problem centered on the way we were handling digital images. That is to say, the Archive's logical design had no means for integrating these objects into an analytic structure. Finally, Interface - or rather, the failure to consider Interface in a serious way - constituted yet a third problem. This last case was in certain respects the most interesting as well as the most surprising. For when we worked out the Archive's original design, we deliberately chose to focus on the logical structure and to set aside any thought about the Interface for delivering the Archive to its users. We made this decision in order to avoid committing ourselves prematurely to a delivery mechanism. The volatile character of interface software appeared so extreme that we determined to proceed in such a way that, when we were ready to deliver the work, we would have a product that could be accommodated to whatever software seemed best.

The Rewards of Failure


A great virtue of computerized tools is that they are simple. Consequently, to get them to perform their operations you have to make your instructions explicit and unambiguous. To do that means you have to be very clear in your own mind about what you're thinking, meaning, intending. The simplicity of the computer is merciless. It will expose every jot and tittle of your thought's imprecisions.

What I've just said is news to no one in this room. It is a banality. But I ask you to remember that time when you thought differently about computers and about cognitive precision. The recollection is important here because it will help to clarify our project.

Here is my recollection ca. 1983. Introduced to UNIX multitasking and to the possibilities of digital hypermedia, I was drawn to my initial imagining of something like The Rossetti Archive - a critical and scholarly environment for studying aesthetic works in novel and hitherto impossible ways. When the chance came some ten years later actually to build such a work, I thought I had come to the Promised Land.

It was Middle-earth after all, I would soon realize. But it was also a land of promise, although promising in a way I had not expected.

The simplifying rigors characteristic of digital systems have not been prized by humanities scholars for a long time. They are associated with disambiguated scientific - or at least scientistic - thinking. Humanities scholars pledge their allegiance to a different kind of rigor and precision. Or so we have always said. But what is that kind of precision, precisely?

Responding to that question is a primary, and in all likelihood long-term, goal of our "Metalogics of the Book" experiment. Our general aim is to study how digital tools fail to render or realize complex forms of imaginative works (the works of Rossetti, for instance). The purpose, however, is not to "correct" these "failures" but to try to understand their significance and meaning.

So we're trying to use computational operations not to realize our purposes and ideas but to de-realize them, as it were. Why do we want this? Because our subjects of interest are works that realize themselves not in standardized and disambiguated forms but through their active relation to such forms. This is why computerization can only realize imperfectly and imprecisely the projects most dear to scholars who study imaginative works. The problem does not lie "in" the computers but in the strategies of those who design them. This inevitable dysfunction, however, is no reason at all to dismiss computerization from the principal research interests of humanities scholars. On the contrary, these new tools offer an unprecedented opportunity for clarifying our thinking processes.

This project means to use computerized resources to clarify - to define precisely - what we imagine we know about books and texts. Because our computer tools are models of what we imagine we know - they're built to our specifications - when they show us what they know they are reporting ourselves back to us.

The Experiment

Ask this question: "Can a computer be taught to read a poem?" The answer is "yes". TEI is a grammar that computers can understand and manipulate. When you mark up text you are ipso facto reading and interpreting it. A poetical text marked up in TEI has been subjected to a certain kind of interpretation.
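A minimal sketch of what that means in practice (the encoding below is my own illustration, using only two TEI element names, lg for a line group and l for a verse line; a real TEI document would carry a header and far richer markup):

# A minimal, TEI-flavored encoding of a stanza fragment (illustrative only:
# a real TEI file would include a teiHeader and much fuller markup).
import xml.etree.ElementTree as ET

ENCODED = """
<lg type="stanza">
  <l n="1">The blessed damozel leaned out</l>
  <l n="2">From the gold bar of Heaven;</l>
  <l n="3">Her eyes were deeper than the depth</l>
  <l n="4">Of waters stilled at even;</l>
</lg>
"""

stanza = ET.fromstring(ENCODED)

# The machine can now "read" what the markup asserts about the poem:
print("unit:", stanza.get("type"))
for line in stanza.findall("l"):
    print(f"line {line.get('n')}: {len(line.text.split())} words")

Once the lines are wrapped in this grammar, the machine can report back the interpretive decisions the markup has already made: that these four lines constitute a stanza, that each is a discrete verse line, and what each line contains.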

But of course sophisticated readers of poetical works recoil when such a model of reading is recommended to them. Poems are rich with nuances that regularly and, it seems, inevitably transcend TEI protocols.

But suppose one were to step away from complex forms like poetry. Suppose one were to try to begin a computerized analysis of texted documents at a primitive level. The first move in this case would be to choose to "read" the document at a pre-semantic level. The focus would be on the document's graphical design, the latter being understood as a set of markup features comprising a reading of the document, i.e., a set of protocols for negotiating the textual scene. The idea would be to construct an initial set of elementary text descriptors that would be fed to a computer. The computer would use these to parse the document and then deliver an output of what it read.
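What such elementary descriptors might look like can be suggested in a small sketch (the page below is only a toy bitmap, and the single descriptor used, a run of blank scanlines taken as a zone boundary, is my own assumption for illustration). The point is simply that a parser working entirely below the semantic level already delivers a segmentation, which is to say a reading, of the document:

# Sketch: a pre-semantic "reading" of a page, driven by one elementary
# descriptor: any run of GAP or more blank scanlines separates two zones.
# The page is a toy bitmap; real input would come from a scanned image.

TOY_PAGE = [
    1, 1, 1, 0, 0, 0,        # a block of inked scanlines, then blank space
    1, 1, 0, 0, 0, 0, 0,
    1, 1, 1, 1,
]  # 1 = scanline containing ink, 0 = blank scanline

def segment(page, gap=3):
    """Return (start, end) scanline indices of each detected zone."""
    zones, start, blanks = [], None, 0
    for i, inked in enumerate(page):
        if inked:
            if start is None:
                start = i
            blanks = 0
        else:
            blanks += 1
            if start is not None and blanks >= gap:
                zones.append((start, i - blanks + 1))
                start = None
    if start is not None:
        zones.append((start, len(page)))
    return zones

print(segment(TOY_PAGE))        # [(0, 3), (6, 8), (13, 17)]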

Our hypothesis was that it would deliver multiple readings.

This initial model for the experiment did not survive a series of critical interrogations. Conversations we had with Worthy Martin (one of our colleagues at IATH and a specialist in computer vision) exposed the difficulty of constraining the text descriptors so that we would get usable results. To do this was theoretically feasible but would take a great deal of mathematical analysis. These conversations brought another important realization: that the text primitives we were trying to articulate would comprise an elementary set of markup codes. And that understanding brought out a crucial further understanding about textuality in general: that all texts are marked texts.

At this point let me quote Johanna's notes on our investigation as it then stood:

JJM said that his idea of automated mark-up was not simply to insert tags identifying semantic or syntactic or content features, but to show that texts were already "marked" in their written form. This suggested to me the idea of the "reveal codes" command in a digital document, since it would make evident what is usually unacknowledged and unseen: the commands and protocols according to which the file is encoded.

Though "reveal codes" was the first term that galvanized discussion, it was quickly evident that Jerry and I came at it from two different directions and with two different, but curiously complementary agendas. Jerry saw "reveal codes" as an aspect of "deformance" and I saw it as a first step in a "metalogics of the book." Thus we split from the outset between intepretation and analytic description, between a desire to create a demonstration of deformance as a mode of reading and an interrogation of book form and format as interface. In both instances, the point of commonality that links our project into one is the conviction that the graphic format of a text participates in the production of textual signification in ways that are generally unacknowledged. Our shared aim is to demonstrate this ?? Jerry leading us through experiments in computer misreading of graphic features and me trying to push the analysis of graphic form by developing a critical vocabulary for it. (Notes, p. 1)

As we tried to relate and define more precisely our two lines of inquiry, it occurred to us that we might take advantage of the elementary reading operations carried out by OCR programs. Even the best of these programs, as we knew, produced deformed readings of the documents they scanned. We therefore decided to see what could be discovered from the deformations generated by a good scanning program. We used Omnipage 10.0. We also decided to work with prose texts rather than with poetry. But the prose texts would be of two different kinds: first, a document with some complex display features, and second a relatively straightforward piece of prose formatted margin-to-margin in standard block form. We report here only on the first document.

We chose an advert page from the 20 August 1870 issue of the Victorian periodical The Athenaeum (page 256: see Figure A). We set the scanner at True Page/Greyscale. The plan was to run the page through several successive scannings, in this order: 1. An initial scanning; 2. A reprocessing (NOT a rescanning) of the document at exactly the same settings and without moving the document; 3. A repetition of operation 1, i.e., a rescanning of the document keeping all original settings and with the document unmoved; 4, 5, 6. A repetition of operations 1, 2, 3 but at a black and white setting; 7-12. A repetition of operations 1-6 except we would lift the document and replace it in as nearly the same position as we could. We had other similar repetition/variation scans in mind as well, but before planning them we judged it best to assess the results of these first twelve operations.

As it turned out, the results we obtained in the first two operations led us to modify these initial plans. We performed instead a second repetition of the initial scan and then went on to perform only a selection of the other operations.

Operation 1. The first scanning pass produced the usual double output: a rough image highlighting the page sectors, and a standard output of the alphanumeric text. The scan produced a text divided into 22 zones plus an alphanumeric text with a series of misreadings and error messages (see Figure B and Figure C).

Operation 2. The reprocessing of the first scan produced a startling double result: the output this time divided the document into 20 zones and displayed slightly different alphanumeric text (see Figure D and Figure E).

Operation 3. The rescanning of the document produced yet further variances in both the sectoring and the alphanumeric output. This time 17 zones were distinguished and new variances appeared in the alphanumeric text.

The results of operations 2 and 3 decided us on repeating Operation 3, which this time produced 18 sectors and new variances in the alphanumeric text. At this point we had results that were significant for our purposes, so we curtailed the rest of the planned experiment. We did two more rescannings: a scan at black and white settings, which reproduced the sectoring of Operation 1 but output yet a new set of alphanumeric variances; and a rescanning after the document had been lifted and then replaced on the scanner. This operation yielded 21 sectors plus new alphanumeric variances (see Figure G and Figure H).
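One way to make such variances tractable for comparison, not part of the experiment itself but a sketch of how the outputs might be tabulated, is to diff the alphanumeric text of successive passes. The three "outputs" in the sketch are toy placeholders, not our Omnipage results:

# Sketch: tabulate the divergence among the alphanumeric outputs of
# successive OCR passes. The three "outputs" below are toy placeholders,
# not the actual Omnipage results; in practice they would be loaded from
# the saved text of each pass.
import difflib
import itertools

PASSES = {
    "pass 1": "NEW BOOKS AND NEW EDITI0NS.\nIn one vol. 8vo, price 10s. 6d.",
    "pass 2": "NEW BOOKS AND NEW EDITIONS,\nIn one voL 8vo, price 10s. 6d.",
    "pass 3": "NEW BOOKS AND NEW EDITIONS.\nIn one vol. 8vo. price 1Os. 6d.",
}

for (name_a, text_a), (name_b, text_b) in itertools.combinations(PASSES.items(), 2):
    ratio = difflib.SequenceMatcher(None, text_a, text_b).ratio()
    diff = [d for d in difflib.ndiff(text_a.splitlines(), text_b.splitlines())
            if d.startswith(("+ ", "- "))]
    print(f"{name_a} vs {name_b}: similarity {ratio:.3f}, {len(diff)} divergent lines")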

Several important consequences flowed from these experiments. First, we now possessed a powerful physical argument for a key principle of "textual deformance" and its founding premise: that no text is self-identical.<4> Whatever the physical "causes" of the variant readings, and however severely one sought to maintain the integrity of the physical operation, it appeared that variance would remain a possibility.

Second, the OCR experiments showed that textual ambivalence can be located and revealed at graphical, pre-semantic levels. This demonstration is important if one wishes to explore the signifying value of the bibliographical codes of a textual document. For it is a commonplace in both the SGML/TEI and the hermeneutic communities that these codes do not signify in the way that semantic codes do.

Third, the experiments strongly suggested that while every text possesses, as it were, a self-parsing markup, the reading of that markup can only be executed by another parsing agent. That is to say, there can be no such thing as an "unread" text. (And while the experiments did nothing to argue for the following conviction, it remains strong with both of us: that every text "contains within itself", so to speak, a more or less obscured history of the readings/parsings, both semantic and bibliographical, that transmit the document to any immediate moment of reading/parsing.)

The Present Situation

Out of these experiments emerged the theses I set out at the beginning of this paper, and the issues raised through the theses have set Drucker and myself on a pair of new courses. First, here are Johanna Drucker's Notes for developing 4D Interface design models.

Rather than visualizing the thematic/semantic contents of a text in abstract form I now want to be sure to map it into a visualized spatialization of the book. Thinking of the book as a space, one that also unfolds along the temporal axis (or axes) of reading, I can envision a 4-d model of the book.

In this model every graphic element is actually a structuring element. Thus, for instance, a table of contents is not a simple notation lying on a thin sheet in the front matter, but is a means of dividing the sculptural form of the book into a set of discrete spaces, each demarcated in relation to that original point of spatial reference, and located relationally within the whole. This sounds terribly empirical, I know, and insofar as I am interested in describing an object in material, schematic, and logical terms, I intend for it to suggest a faith in what Worthy calls "the properties of things themselves." I would stop short of any suggestion that these are "self-identical" properties, or that a specific signification inheres in these properties, or that there might be a lexicon of values attached to such properties. Instead, I suggest that as organizing schemata, these format features function as an integral portion of the text because they function as an interface.

In a several stage process, I want to make a visualized model of a book, a wire-frame image of its format features down to their specifics and particularities, and then flow the text through that so that semantically/syntactically tagged features can be displayed (why? first to see what patterns figure forth from such a demonstration, and second, to be able to morph this display as another act of deformance).

JJM suggests a second form in this model, a second "book" that would emerge as an image of the discourse of reading, the trace of intercourse of reader and text. This begins to suggest the holographic projection of my original graph of deformance as a space between discourse and reference. Now I see that that space is in fact the space of reading -- with reading defined as deformance. (Notes, p. 7)
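The following is only my gloss on the passage above, not Drucker's design: a toy data structure in which every graphic element of a book carries both a spatial address (page and region) and a temporal one (its place along a reading path), and in which a table-of-contents entry behaves as a partition of the book's space rather than as front-matter notation. All the names and fields are invented for the sketch.

# Sketch: a toy 4-D model of a book. Each element has a spatial address
# (page, region) and a temporal one (its offset along a reading path).
# The class names and fields are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Element:
    page: int                 # where the element physically sits
    region: tuple             # (x0, y0, x1, y1) in page coordinates
    kind: str                 # "toc-entry", "heading", "paragraph", ...
    text: str
    order: int                # position along the temporal axis of reading
    target_page: int = None   # for toc-entries: the page they point into

@dataclass
class Book:
    elements: list = field(default_factory=list)

    def partitions(self, last_page):
        """Read each toc-entry as a divider of the book's space, yielding
        (label, first_page, last_page) spans rather than mere notation."""
        toc = sorted((e for e in self.elements if e.kind == "toc-entry"),
                     key=lambda e: e.order)
        spans = []
        for i, e in enumerate(toc):
            end = toc[i + 1].target_page - 1 if i + 1 < len(toc) else last_page
            spans.append((e.text, e.target_page, end))
        return spans

book = Book([
    Element(3, (0.1, 0.20, 0.9, 0.25), "toc-entry", "Chapter I", 1, target_page=7),
    Element(3, (0.1, 0.30, 0.9, 0.35), "toc-entry", "Chapter II", 2, target_page=31),
    Element(7, (0.1, 0.05, 0.9, 0.10), "heading", "CHAPTER I", 3),
])
print(book.partitions(last_page=96))
# [('Chapter I', 7, 30), ('Chapter II', 31, 96)]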

This new project intends to deepen the exploration of the "nature" of paper-based documents. And while my work with The Rossetti Archive "as a theoretical pursuit" has been directed almost exclusively to that end for the past five years, <5> I am beginning to see a need to clarify the critical possibilities of digital environments and tools at the user end. So the following questions begin to pose themselves. First: what practical difference does it make to understand documents as (in William Gibson's terms) "difference engines"? At one time we thought (I think) that a person might usefully engage in "endless play" with "the text", but the tediousness of such a thought is now apparent to (nearly) everyone. It is the dead-end of our fifteen-hundred-year experiment with the game of silent reading. The game will probably never cease to have its charms, but it is a game we now play with complete self-consciousness. (That is the "meaning" of a work like If on a Winter's Night a Traveller, though I should add that since Poe the meaning has been as available as the Purloined Letter.) Second: what good are cybertools for elucidating these difference engines?

Those two questions will be addressed in a practical way by asking two other questions: what use-functions distinguish cybertext from docutext? And (how) might any of those functions promote our appreciation of texts as difference engines?

(A Brief Digression)

Promoters of cybertext, whether critical (like Espen Aarseth) or inspirational (like Janet Murray), have sometimes obscured the issues. <6> Murray, for example, distinguishes four central properties of digital environments: two interactive properties (procedural, participatory); and two immersive properties (spatial, encyclopedic) (Murray 71-90). It can be shown, however, that none of these properties are peculiar to digital environments. They are even essential properties of the docutexts that control the way Janet Murray thinks about digital tools. Her interest is in fictional narrative, and if one thinks seriously about such narratives one easily sees that these four properties characterize their operational status. This fact is most apparent from Murray's own book, for when she introduces her view of the digital environment, she uses a number of paper-based works she calls "Harbingers of the Holodeck". All of her harbingers are recent, but the truth is that these harbingers go far back - the Bible being one of the most apparent. Murray chooses recent ones in order to seduce us into thinking these environments are recent phenomena. But they aren't, as book scholars have often pointed out to cyberiots.

Aarseth has proposed an elaborate taxonomy for texts in general in order to construct a distinctive set of criteria for understanding what he wants to call cybertexts. Unlike Murray, Aarseth recognizes that books have "dynamic" functions and hence that "new [cybernetic] media do not appear in opposition to the old [paper media] but as emulators of features and functions that are already invented" (Aarseth 74). Despite this remark, however, Aarseth makes a sharp distinction between what he calls "Linear" and "Ergodic" texts and he locates "Ordinary Text" - including "hyper"-ordinary texts - on the linear side of the distinction. Cybertexts, by contrast, are "ergodic" in that they have dynamic user-function(s) beyond the purely interpretive function common to all texts (Aarseth 62-67). (He distinguishes three other user functions: explorative, configurative, and textonic, the last signifying the user's ability to add permanent traversal functions to the text.)

Useful as Aarseth's study is, however, he too, like Murray, misconstrues "Ordinary Text" as "linear". One does not have to recall Andean quipu or any number of ideographical texts to recognize the nonlinear character of various kinds of pre-cybertexts. <7> Every poem comprised in our inherited Western corpus could fairly be described as a nonlinear game played (largely) with linear forms and design conventions, but sometimes with nonlinear forms as well. Nonverbal texts are useful to consider in this context because they highlight the socio-historical nature of "linear textuality". Epitomized by documents constructed from alphanumeric characters and by a "clockwork" temporality, even the most abstract linear texts contain residues of nonlinear semiotic functions and relations. The residues appear when textual spaces are treated as maps, when algorithms of traversal are deployed (as with glosses, footnotes, and such), or when the form taken by scripts and typefaces functions rhetorically (operates beyond an abstract and transparent informational function). C. S. Peirce's late effort to replace the alphanumeric text with what he called Existential Graphs in order to achieve a greater range and clarity of logical exposition is an extremely important event in the history of Western textuality. The graphs were an effort to develop a language for nonlinear relations. <8>

Games of Knowing

What then does distinguish cybertext from traditional docutext? Without pretending to answer that question, I would call attention to the special kinds of simulation that can be realized in cybernetic environments. While both Aarseth and Murray discuss computerized simulations, their critical taxonomies permit the subject to come forward only at the interspaces of their studies.

Of course all traditional texts construct simulations, but with docutexts we engage these simulations as "readers". Projects like Michael Joyce's celebrated hyperfiction Afternoon and my own Rossetti Archive are paradigms of the "humanities" cybertexts we see all around us now. Both were conceived and designed as high-order reading environments. The Rossetti Archive was imagined as a simulated syndesis of a critical edition of Rossetti's textual works with a complete collection of facsimile editions of those works and a complete set of illustrated catalogues of all his pictorial works, including the reproductions of those works. The whole, however, remains a study-environment embedded in a reading-environment.

In this context it helps to remember that Plato disapproved of these kinds of textual simulations as instruments of study, thought, and reflection. For Plato, the optimal scene for thinking had to be living and dialectical. Texts are inadequate because they do not converse: when we interrogate them, Plato observed, they maintain a majestic silence. But in MUDs and with various kinds of cybergames like ELIZA, one enters simulated environments where the user's interaction is no longer a readerly one. This result comes from the construction of a textual scene that simulates in real-time an n-dimensional spatial field. One thinks of the Chorus's speech to the audience at the opening of Henry V, except in cyberspace the "wooden O" of the Shakespearean stage has been extended to include the audience as characters in the action. <9>

Computer games exploit this new dynamic space of textuality by inviting the user to play a role in the gamespace. These are well-known role types like warrior, hero, explorer-adventurer, creator-nurturer, problem-solver, and so forth. And while players may well have to read at various points, their participation in the game is not readerly. (When cybertext enthusiasts speak of the "passive" docutext and the "active-participatory" cybertext, they are calling attention to this differential. Traditional readers quite rightly point out that reading is a highly participatory activity, and one that is commonly quite as "non-linear" as any cybertext.)

When a traditional literary text enters (or is translated into) a cyberspace, then, it will be laid open to "participations" that may or may not be readerly participations. Indeed, paperspace is a far more effective medium for reading than cyberspace. From the point of view of someone wanting to create imaginative works, however -- narrative or otherwise -- cyberspace is replete with inviting opportunities. But from the point of view of the scholar, or someone wanting to reflect upon and study our imaginative inheritance, the resources of cybernetic simulation remain underutilized. The difficulty is conceptual, not technical. Even when we work with cybernetic tools, our criticism and scholarship have not escaped the critical models brought to fruition in the nineteenth century: empirical and statistical analysis, on one hand, and hermeneutical reading on the other.

What critical equivalents might we develop for MUDs, LARPs, and other computer-driven simulation programs? How would one play The Game of Ivanhoe as a game of critical analysis and reflection?

It would be a Multi-User game designed to expose the imaginative structures of Scott's romance - which is also to say the structures that Ivanhoe makes possible through the double helix of its genetic (social) codes: its production history and its reception history. These are the content fields of The Game of Ivanhoe. The game would be played in either of two available multi-user domains: a real-time environment and a list-serve environment. Players can enter one or both as they like, and they can engage with others either as themselves or under consciously adopted roles.

The game is to rethink Ivanhoe by re-writing any part(s) of its codes. Two procedural rules pertain: first, all recastings of the codes must be done in an open-text environment such that those recastings can be themselves immediately rewritten or modified (or unwritten) by others; second, codes can only be recast by identifiable game-players, i.e., by persons who have consciously assumed a role in the game.
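A minimal sketch of the data model these two rules imply (mine, and purely hypothetical; no such system has been built): every recasting is stored rather than overwritten, so it remains open to further rewriting, and a recasting is accepted only from a player who has declared a role.

# Sketch: a data model implied by the two rules of The Game of Ivanhoe.
# Rule 1: recastings are stored, never overwritten, so each remains open
#         to further rewriting by others.
# Rule 2: a recasting is accepted only from a player who has assumed a role.
# All names here are hypothetical; nothing of the sort has been implemented.
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    role: str = ""            # e.g. "Rebecca", "an 1820 reviewer", "the printer"

@dataclass
class Passage:
    label: str                # which part of the work's "codes" this is
    versions: list = field(default_factory=list)   # (player, role, text) history

    def recast(self, player, text):
        if not player.role:
            raise ValueError("recastings accepted only from players in role")
        self.versions.append((player.name, player.role, text))

opening = Passage("Ivanhoe, chapter 1, opening sentence",
                  versions=[("W. Scott", "author",
                             "In that pleasant district of merry England ...")])
reader = Player("A. N. Other", role="an early reviewer")
opening.recast(reader, "In that unhappy district of England ...")
for who, role, text in opening.versions:
    print(f"[{who} as {role}] {text}")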

Any number of roles might be played. There are the roles of the fictional characters first imagined by Scott for his romance and for its surrounding materials. But to these we add other possible roles: persons involved in the book's material production; Scott's precursors, contemporaries, and inheritors (literary and otherwise); early reviewers and any of its later readers/reviewers/critics/illustrators/redactors/translators/scholarly commentators; in general, persons in the book or persons who might have been in it, real or imaginary, as well as persons who read the book or who might be imagined reading it, for whatever reason. The roles may be played in various forms: in conversation or dialogue, through critical commentary and appreciation, by re-writing any received text, primary or secondary, seen to pertain to Scott's work.

The goal is to rethink the work's textuality by consciously simulating its social reconstruction.

VOICE OF AN ANGEL. But this is implicitly to propose that the works of our cultural inheritance have no meaning or identity an sich - that their meanings are whatever we choose to make of them. It is to make a mere game of the acts of imagination.

VOICE OF THE DEVIL. Are we then to make a business or religion of those acts? If a business then we propose to make something of our inheritance and not simply bury it in the ground, lest it be lost. If a religion we propose to recreate the world anew exactly as did the demiurge of the Book of Genesis when he refashioned his pagan inheritance by pretending there were no strange gods before him, and then making a rule forbidding any later ones as well.


1. See my essay "Dialogue and Interpretation at the Interface of Man and Machine", where the work of G. Spencer Brown is used to demonstrate how states of identity are a function of states of difference and not - as in classical theory - the other way round. This view yields the axiom: a equals a if and only if a does not equal a. In poetical works, where ambiguities are often deliberately set in "controlled" play, a field of meta-ambivalences emerges to expose in the sharpest fashion the general textual condition.
2. See the work of G. Spencer Brown, whose Laws of Form demonstrates how states of identity are a function of states of difference and not, as in classical theory, the other way round. This view yields the axiom: a equals a if and only if a does not equal a. In poetical works, where ambiguities are often deliberately set in "controlled" play, a field of meta-ambivalences emerges to expose in the sharpest fashion the general textual condition.
3. For the broad context of this important idea see John Unsworth's "Documenting the reinvention of text: the importance of imperfection, doubt, and failure," and below, note 5.
4. The philosophical grounds for this idea, which is closely related to Gödel's Theorem, are set out in two related papers, including "Dialogue and Interpretation at the Interface of Man and Machine".
5. See my 1997 presidential address to the Society for Textual Scholarship, "Hideous Progeny, Rough Beasts: Editing as a Theoretical Pursuit," TEXT 11 (1998), 1-16.
6. Espen Aarseth, Cybertext. Perspectives on Ergodic Literature (Johns Hopkins UP: Baltimore and London, 1997); Janet H. Murray, Hamlet on the Holodeck. The Future of Narrative in Cyberspace (MIT Press: Cambridge, 1999).
7. See Marcia Ascher and Robert Ascher, Code of the Quipu: A Study of Media, Mathematics, and Culture (U. of Michigan Press: Ann Arbor, 1981); Joyce Marcus, Mesoamerican Writing Systems. Propaganda, Myth, and History in Four Ancient Civilizations (Princeton UP: Princeton, 1992); Elizabeth Hill Boone and Walter D. Mignolo, eds., Writing Without Words. Alternative Literacies in Mesoamerica and the Andes (Duke UP: Durham, 1994).
8. Don D. Roberts, The Existential Graphs of Charles S. Peirce (Mouton: The Hague, Paris, 1973).
9. Brenda Laurel's work (see Computers as Theatre [Reading MA: Addison-Wesley, 1993]) offers stimulating ideas about how to exploit the simulacral capacities of digital tools.