IV. Ways of Seeing
I want to be careful to distinguish the arguments that I have begun developing above from Jean Baudrillard’s explorations of the simulacra and the hyperreal. My arguments about artifice and information will ultimately rest on developments in computer graphics and visualization, as opposed to Baudrillard’s intellectual ancestry in the Frankfurt School and thinkers such as Guy Debord. My objective is not to refute or overturn Baudrillard, whose writings I consider timely, but rather to use a detailed, sometimes technical look at the technology of digital images to complement (and, yes, complicate) aspects of his work. For although Baudrillard has afforded scholars their most common interface between post-Marxist critical theory and media studies, his views often rest on an inadequate understanding of technical matters and, more importantly, of the relevance of those technical matters to broader theoretical issues.
Baudrillard’s positions are well known and have been formulated over the last twenty years in a series of books and monographs, from the early and important Symbolic Exchange and Death (1976) and Simulacra and Simulation (1981) to his most recent works, such as America (1989) and The Gulf War Did Not Take Place (1995). The basic ideas may be summarized as follows. Advanced media technologies, particularly technologies of the visible, are blurring boundaries between the real and the artificial. Artifactually, the consequence of this blurring is a class of objects known as simulacra, or copies without originals. Digital objects are particularly predisposed to the simulacral because they can, in principle and often in fact, be replicated with no discernible loss of quality or integrity (the key referent here of course being Benjamin’s notion of "aura" as articulated in "The Work of Art in the Age of Mechanical Reproduction"). Sociologically, advanced media technologies and the rise of the simulacra presage a shift from a production-based economy to a post-production or information economy. Capital is grounded not in the manufacturing of goods and things, but rather in the circulation of images through a "symbolic economy." (Here Baudrillard has invoked the sinister apparition of the twin towers of the World Trade Center, with their implicit contrast to the singular industrial spire of the Empire State Building.) The symbolic economy of the simulacra culminates in a state Baudrillard terms the hyperreal, in which all distinction between authenticity and artifice has been eroded by representational technologies, such that the artificial emerges as only another reality.
Disneyland becomes Baudrillard’s most famous exemplar of the hyperreal, a synthetic environment which, in the teflon economy of the spectacle, is received as more natural and organic than any real community or locale -- and which in turn forms the basis of real (or "real") communities, such as Disney’s own "Celebration" network of housing developments.
From this brief sketch, we should be able to see that Baudrillard’s critique of media culture is generally prejudiced toward the reception rather than the composition of hyperreal phenomena. Thus Baudrillard’s most famous dictum: "A possible definition of the real is: that for which it is possible to provide an equivalent representation" (emphasis in original; 145). Many contemporary critics have found this same formulation to be an attractive armature for theorizing digital culture because computers routinely succeed in projecting the illusion of the hyperreal. For example, William J. Mitchell writes of digital images:
The continuous spatial and tonal variation of analog pictures is not exactly replicable, so such images cannot be transmitted or copied without degradation. Photographs of photographs, photocopies of photocopies, and copies of videotapes are always of lower quality than the originals, and copies that are several generations away from an original are typically very poor. But discrete states can be replicated precisely, so a digital image that is a thousand generations away from the original is indistinguishable in quality from any one of its progenitors. A digital copy is not a debased descendant but is absolutely indistinguishable from the original. (6)
The digital image would thus seem to offer a classic instance of simulacral behavior. And it is of course true that digital images (or digital content of any kind) can be copied and multiplied in exactly this way (as Mitchell seems to acknowledge by his emphasis on "discrete" digital states). Anybody who spends any time working with computers routinely duplicates files and moves them around on a desktop or a server precisely because the copy is the ontological equivalent of the original. But the fact that digital images can be copied in this way does not mean that they always are, nor does it mean that this observation tells us all we need to know about the ontology of digital images. In fact, it tends to obscure more than it reveals.
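Mitchell's claim about discrete states can be made concrete in a small sketch. In the plain Python below, a byte string stands in for an image file's contents (no actual image is involved); a thousand "generations" of copying leave a cryptographic hash unchanged, which is the precise sense in which a digital copy is indistinguishable from its original.

```python
import hashlib

# A digital object is, at bottom, a finite sequence of bytes. Copying the
# sequence copies every byte exactly, so a hash of the thousandth-generation
# copy matches that of the original.
original = bytes(range(256)) * 4   # stand-in for an image file's contents

copy = original
for _ in range(1000):              # a thousand "generations" of copying
    copy = bytes(copy)

assert hashlib.sha256(copy).digest() == hashlib.sha256(original).digest()
```

This is exactly the behavior anyone relies on when duplicating files on a desktop or server: the copy is bit-identical, not merely similar.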
Digital images, for all their putative immateriality, in fact obey and respond to certain logics, behaviors, and constraints that follow from their composition at the computational level. The World-Wide Web’s ubiquitous JPEG (.jpg) image format, for example, is a "lossy" compression format optimized for use with photorealistic images. This means that the JPEG algorithm does its work by discarding ("losing") information that falls below the threshold of human vision. Thus, a JPEGed image will typically be smaller in size (and will download faster) than its master image, because it quite literally does not consist of as much information. Because the recommended range of JPEG settings is not meant to result in any observable degradation of the original image, the compressed image can in fact be said to be the "same" as the original, at least from the standpoint of the viewer -- or a simulacrum, in Baudrillard’s terms. This is demonstrated in figure 8, a screenshot from Adobe Photoshop with four copies of the "same" image -- one at 100% magnification and three at 500% magnification. The first of the three images displayed at 500% is a TIFF image created from a first-generation transparency of the frontispiece to William Blake’s Book of Urizen, while the middle and right-most images are both JPEG derivatives of the original TIFF (one at a high retention setting, and the other at a low retention setting). All three images appear optically identical, even at 500% magnification. But their file sizes differ dramatically: 955K, 144K, and 37K, respectively, a function of the JPEG compression. Therefore: the JPEG derivative is an altogether different image, even if the difference is not immediately visible.
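The mechanism by which JPEG "loses" information can be sketched in miniature. Real JPEG compression quantizes frequency coefficients after a discrete cosine transform; the toy example below quantizes raw brightness values instead (an illustrative simplification, not the actual algorithm), which is enough to show why the compressed file carries measurably less information even when the difference falls below the threshold of vision.

```python
def quantize(pixels, step):
    """Coarsen each 0-255 brightness value to the nearest multiple of `step`,
    discarding variation below that threshold (a toy stand-in for JPEG's
    quantization of DCT coefficients)."""
    return [round(p / step) * step for p in pixels]

row = [198, 199, 200, 201, 202, 200]   # nearly uniform pixel values
coarse = quantize(row, step=4)

# Five distinct values collapse to one: the coarsened row compresses far
# better, and a difference of a few brightness levels out of 256 is
# invisible to the eye -- but the row literally contains less information.
assert set(coarse) == {200}
```

A run of identical values compresses to almost nothing, which is why the JPEG derivatives in figure 8 are so much smaller than their TIFF master.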
This is why those who are knowledgeable about digital preservation standards regard JPEG as a poor format for serious archival imaging, despite the quick download times that lend it practical value on the Web -- precisely because once a portion of the pixelated bitmap is gone, it is gone for good. Figure 9, for example, depicts a TIFF derived from the low retention JPEG used in the previous example, displayed alongside the original master TIFF. Note the muddy quality of the TIFF derivative -- the information thrown away by the initial JPEG process is unrecoverable and cannot be used to reconstitute the same "original" image.
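The irreversibility that worries archivists follows from simple arithmetic: lossy quantization is a many-to-one mapping, so once it has been applied, no subsequent lossless step (such as resaving the JPEG as a TIFF) can tell the possible originals apart. A minimal sketch, using a toy quantization function as an illustrative simplification of lossy compression:

```python
def quantize(pixels, step=4):
    # Coarsen each value to the nearest multiple of `step` -- a toy
    # stand-in for the quantization step of a lossy codec.
    return tuple(round(p / step) * step for p in pixels)

a = (199, 201, 200)   # two distinct "original" pixel rows...
b = (200, 200, 198)
assert a != b
assert quantize(a) == quantize(b)   # ...collapse to the same output

# Because distinct originals map to a single compressed form, no
# decompressor -- and no later conversion to a lossless format -- can
# determine which original produced it. The discarded detail is gone.
```

Hence the muddy TIFF of figure 9: the lossless container faithfully preserves data that was already degraded.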
Another way of making this point is by way of the analytical tools included in high-end image processing software such as Photoshop. Figures 10 and 11 depict another electronic image file (this time a digital facsimile of a plate from Blake’s Songs of Innocence and of Experience) opened in Photoshop in two common data formats, GIF and JPEG. To a human observer, these images appear identical. Yet a histogram -- a graphical representation of the number of pixels at each of the 256 brightness levels in the image -- reveals a profoundly different compositional structure underlying the seemingly identical display of colored pixels. What the histogram shows is that the images are "identical" only in one very narrow and specific sense, that of their appearance to the unaided human eye. Computationally, they manifest significant differences. (The difference is accounted for by the different ways in which the GIF and JPEG formats store color information.)
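The histogram itself is simple to compute: count how many pixels fall at each brightness level. The sketch below uses made-up pixel values, chosen only to illustrate the point, and shows how two renderings a viewer would call "the same" can differ once counted -- as when a GIF's indexed palette snaps nearby colors to shared entries while a JPEG stores them in fuller range.

```python
from collections import Counter

def histogram(pixels):
    """Count pixels at each brightness level (Photoshop plots 256 such bins)."""
    return Counter(pixels)

# Hypothetical brightness values decoded from two encodings of "one" image:
truecolor = [10, 11, 10, 200, 201, 200]   # full-range values (JPEG-like)
paletted  = [10, 10, 10, 200, 200, 200]   # nearby values merged (GIF-like)

# Optically, a difference of one brightness level in 256 is invisible, but
# the histograms -- the machine's view of the data -- do not match.
assert histogram(truecolor) != histogram(paletted)
```

The two lists would paint indistinguishable pixels on screen, yet their counts diverge, which is precisely the "profoundly different compositional structure" the Photoshop histogram exposes.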
What I want to suggest is twofold. First, evidence such as this or the previous demonstration must, at the very least, complicate the notion of the simulacrum as the basic paradigm for understanding digital images. For though it is certainly true that I could have duplicated the image so as to produce a copy with an identical histogram had I wanted to, the fact that I could also duplicate it so as to produce a seemingly identical image that nonetheless yielded a markedly different composition at the computational level suggests that any theory of digital media that does not take into account such commonplace events is at best incomplete. Likewise, it is instructive to look at the occasional accidents of digital reproduction (figures 12 and 13, examples of packet loss and a crashed compression algorithm respectively) to remind ourselves that no medium can ever altogether truly embody the friction-free economy of the simulacra. Finally, and although I do not have space to go into detail here, tomorrow’s imaging standards -- built around compression engines driven by fractal and wavelet theory, such as JPEG 2000 -- suggest that the static data models most clearly aligned with a simulacral economy will soon be superseded by infinitely more porous and pliable generations of data.
An additional point of interest emerges from the visual glyph that is the histogram itself. Its spikes and valleys can be usefully understood not just as an abstract projection of the image, but also as an alternative and equally authoritative rendering of its underlying data structure. This is how the image "looks" to the computational algorithm that produced the histogram, and though the histogram might make little sense to an untrained human eye, it makes a great deal of sense to the machine. The histogram suggests that we would be well advised to evaluate digital images and objects in a number of different informational states, any one of which can be said to be the image at a given moment and only one of which is the normative view. I say this not to be perverse, but because such an observation seems to me the unavoidable consequence of following the logic of what Nicholas Negroponte calls "being digital" to its inevitable conclusion. That is to say, just as electronic artifacts are capable of endless permutations by virtue of their underlying homogeneity as binary code -- a fact often celebrated by boosters of the medium like Negroponte -- so too are they capable of manifesting themselves in a variety of different representational configurations, only some of which may be said to correspond to those representational configurations (say a facsimile reproduction) which we have found to be valuable -- or let us say "informative" -- in our encounters with analog phenomena. If, as Baudrillard states, "A possible definition of the real is: that for which it is possible to provide an equivalent representation," we might also say that "a possible definition of the real is: that for which it is possible to provide an equally real alternative presentation."