Waxweb Mosaic-Moo article copyright 1994 David Blair
I prefer to describe my work as image-processed narrative, in which both the images and the narrative are processed. On the image side, this puts me very much on the side of video makers who insist upon a mediated image, and for whom the process of technique is always foregrounded in the artwork. A major reason for my choice of working method is that video imaging is something that I discovered and learned on my own; unlike many of my peers, I do not have an art school education. I actually began at the public library, where my desire to make plastic-image work was fatally informed by the discovery of works like Emshwiller's "Sunstone" and Paik's "Suite 212", both of which I found at the Donnell Media Center in New York City. Later, by luck, I learned that it was possible to trade work for access to equipment at Film/Video Arts, a media access center also in New York; and not long after, I heard of the free studios at the Experimental TV Center, in Owego, upstate NY, where I discovered the tools and traditions of image-processed video. It is natural that the method of auto-apprenticeship should combine with the process-oriented approach of Owego-style videoart to create a taste for images whose shape and meaning emerge through the process of attempting to learn how to make them.
I studied fiction as an undergraduate in college, where I made the uninformed decision to become a director of narrative films. My models since high school had been "grotesque" fictions that often winked at the viewer while describing the processes of their own creation, a sort of fiction that has been given the name "metafiction", and was one of the most important precursors of what is now generally considered post-modernism. My earliest instructors were the Firesign Theater, an audio-theater group that distributed their fictions by LP, and Thomas Pynchon, whose "Gravity's Rainbow" I had the good fortune to accidentally buy when it came out. Much enjoying the Firesign Theater's method of constant association to create continuity, and Pynchon's method of reading through primary sources in order to discover the narratives of history, I began my own process of creating artificial histories, whose general form was predetermined, but whose improvisational shape was determined by the accidents of discovery and creation that followed during the execution of the piece. At the level of narrative, this could enter in the astonishing accidents that occur during directed random reading in the library (or any other meta-text). At the level of images, it could take place during the relatively unpredictable and uncontrollable shape-shifting that images undergo during machine-mediated creation. And at the higher levels of creation, it could take place in the strange accidents of synchronicity that bound the guided acts of narrative and image creation described above with the ordinary texture of my life, and the events of history around me.
"WAX or the discovery of television among the bees" (85:00, 1991) is an electronic-cinema feature created in this vein. This hybrid feature, which can be called a film both from habit and because modes of distribution necessitated a transfer to 16mm, is made completely of electronic images; the majority of its 2000 shots were either digitally post-processed, or synthesized using analog and digital techniques. The narrative was also processed. The availability of the cheap word processor, with its cut-and-paste functionality, made it possible for me to write the script, a job that took place continually over six years in parallel to the various forms of image composition (the making of the pictures, and their editing). In fact, in Wax's case it is very difficult to separate the creation of narrative from this pictorial composition process, as it was artist-access to the Montage non-linear editing system, a device archly self-described as a picture processor, that made it possible for me to finally compose the film (Wax was the first independent feature cut on a non-linear system). Though the edit machine was physically and computationally separated from the writing machine, the similarity of their processes (and the fact that I connected the two) made the visual work of writing differ only by a strange blur from the pre-verbal work of editing.
This description of image-processed narrative indicates that Wax is a heavily associative film, and in fact it is something like a first person road film, where continuity is created by the main character's endless monologue as he moves from one associative node to another; since it is a movie, and so time-based, it acts in totality like a punning machine on wheels, with each click of the gear chain spinning off a variety of verbal, audio-visual, or proto-haptic pointers across human and unhuman time or space, creating a virtual web of associative connections for which you are the processor. ... virtual, semi-transparent links that follow first person You like a cloud of unknowing. As indicated, in heading towards this type of fiction, I was molded by writers who rhetoricized a spatialized fiction, made of fragments that existed like connected places or many-exited plazas.... e.g. Firesign Theater or Thomas Pynchon. Unfortunately, working with either the word processor or the non-linear editing machine, I was limited in the amount of backstory, multiple paths from a single point, and general sense of process that I was able to present to an audience. One of the research goals I have set on the way to my second feature, "Jews in Space", has been to discover ways around this compositional/presentational restriction. A preliminary step along this path has been to embrace hypertext writing. Hypertext refers to computer-assisted navigation
through networked text documents where touching a word leads you to
another page, or another document, and you add these links as you see fit, between existing words and docs, or to new ones you write. Jay Bolter, in his book "Writing Space: The Computer, Hypertext, and the History of Writing", talks about hypertext as spatialized fiction, where nodes are places, and narrative is a process of travel by associative links between places. Bolter also writes software, and is one of the authors of the hypertext program I currently use, called Storyspace, which literally presents the written fiction as a spatial fiction, consisting of linked text-boxes arranged in a deeply recursive web, where travel through the fiction is much the same as travel from place to place, along a narrative topography.
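The spatial model described here — nodes as places, links as passages, reading as travel — can be sketched as a small graph. Everything below is illustrative invention (the place names and anchor words are not from Storyspace or from Wax); it only shows the structural idea.

```python
# A minimal sketch of hypertext-as-space: nodes are places, links
# are passages anchored to words, and reading is travel along them.
# All node names and link words here are invented for illustration.

class Node:
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = {}          # anchor word -> destination Node

    def link(self, word, destination):
        self.links[word] = destination

# Build a tiny three-place fiction.
garden = Node("Garden", "A garden of forking paths.")
library = Node("Library", "Shelves of unread books.")
hive = Node("Hive", "A humming hexagonal room.")

garden.link("books", library)
library.link("bees", hive)
hive.link("paths", garden)       # the web loops back on itself

def travel(start, words):
    """Follow a sequence of anchor words from place to place."""
    here = start
    for w in words:
        here = here.links[w]
    return here

assert travel(garden, ["books", "bees"]).title == "Hive"
```

The narrative topography is just this graph: any path through the links is one possible reading of the fiction.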
Unfortunately, since the expanded writing functionality offered by hypertext is still physically separated from picture composition tools such as digital video non-linear editing systems, as well as from image-synthesis and image-processing tools, research is still an appropriate mode at this time. This research travels in several directions, coincident with the construction of "Jews in Space", which in itself constitutes a type of research. The project's narrative will be a hybrid construction very much in the tradition of the encyclopedic narrative, collating huge numbers of historical and imaginary associations, often connected merely by the curved shape of the globe. To the end of this construction, the literal level of narrative research is the actual gathering and integration of external research and a large number of created ideas and associations. The technical aspect of this research deals with finding ways to amplify my usual ways of working. One direction I have already taken
is to integrate hypertext writing with the use of on-line databases such as the digital Encyclopedia Britannica, a totally hyperlinked, Boolean-searchable version of the famous encyclopedia, which is available across the Internet. Now that relatively inexpensive local-area-network-style connections to the Internet are easily available through dialup, allowing home desktop use of visual point-and-click interface software such as Mosaic (see below for a description of Mosaic), such large-scale sources of meaningful content, easily reconfigurable by individual users, will increase in number and quality in the very near term. Local tools such as optical character recognition, which allows easy importation of scanned paper-based text into the computer, and easily constructed, quickly parsed databases which allow quick search, collation, and annotation of large individually-owned masses of text and other types of data, allow additional functionality when used in conjunction with hypertext software.
However, construction of meaning from huge amounts of material continues as it has since even before the availability of cheap paper, as a form of intellectual handicraft which in general resists mechanization. Unfortunately, there are perhaps no true association machines that act as amplifiers of the creative composition process. Such machines could parse large amounts of input raw material, to present the author with processed associative clusters; the author could then select a few proto-compositional elements from the offered choices and use them as the beginning work of new plot sections, or reuse them in the machine as the iterative seeds of new associative processing. These sorts of association machines are necessary for the construction of very large scale hypernarratives, and are additionally theorized as the engines of story places, which will constitute a new medium of autogenerating single or multiple user machine-created hypernarratives. With the promise of these compositional tools in mind, a second, longer term research goal for this part of the Waxweb/Jews in Space projects involves a search for narrative and poetry machines, i.e. artificial intelligence tools for the automated creation of association or even narrative. Such tools would allow amplified imaginative use of the large personal and impersonal databases mentioned above, by assisting in complexifying the narrative associationism which in image-processed narrative can serve as a form of plot propulsion, while simultaneously creating more places for the viewer to travel in her enhanced story automobile.
Unfortunately, these latter tools are not yet easily available to artists, though prototypes do exist in research laboratories. Similar limitations apply to many modes of desirable image construction, for example, the use of shared remote visualization across wide area computer networks to assist interactive creation of images at a distance; the modular construction of large, high resolution shared virtual worlds in relatively inexpensive workstations, plus other applications of virtual reality to electronic cinema production; and the use of artificial intelligence techniques for interactive image creation.
Of course the simplest level of the research problem is shaped by the need to practically apply existing resources to produce results which at least imitate the above (current) unattainables. The simplest solution is always integration of existing resources in unfamiliar ways... i.e. hybridity. Fortunately, the growth of networked computing offers some interesting, on-the-way functionalities, which further shade the question in question by offering a new idea of what integration can be... not just the simultaneous operation of
text and image composition tools, but a profound blurring between the modes of production and distribution.
To this end, not surprisingly, I have continued to distribute "WAX" in order to discover new techniques of production. My catch-phrase for this working method is "multiple-media integrated narrative". Subtitles might include: How the Generic Brain-amplifier (networked computer) allows artists to cast the shadows of a single integrated narrative onto several media... or how Integrated tools allow the affordable creation of a multitude of Hybrid forms which together constitute a single narrative. One of the laboratories for the new feature has been the project of retrofitting Wax into what I call "Waxweb".
Waxweb is a number of things. It started as a Storyspace hypertext, an experiment in large-scale hypertext I began in parallel with the preliminary construction of the hypertext script for "Jews in Space". Wax has no dialogue, but instead a narrator who delivers much of the story through voice-over; a fact which combined with the film's natural resemblance to hypertext, and its need for audience assembly, made it a natural candidate for retrofit into a constructive hypertext... i.e. a hypertext that can not only be read, but also written to by its readers. To this end, I made what I call a base layer of 600 nodes (windows), roughly corresponding to the number of spoken lines in the film's monologue. Accompanying the text of the monologue are descriptions of the film's 2000 shots, roughly padded with what might be called author's commentary. These are connected on a single "script" path, and surrounded with a simple indexing system, allowing transport around the film. The experience of reading this text-only hypertext is morphologically similar to watching the film (like hand-bones vs. fin-bones, producing a certain type of aesthetic tension); pictures and sound are missing, but much extra information and near instant navigation have been added.
Storyspace has a simple groupware functionality, which allows people in different places to add hypertext nodes and links to a single document. I asked 25 writers scattered across the US, Japan, Germany, Finland, and Australia, all connected by the Internet and equipped with the software, to add writings onto the base layer. For most people, the Internet is a text-based medium where reading and traveling are mixed up, where distance is pointless, and where things can happen in many orders and still retain coherence, so that it very much resembles hypertext. And in reverse, the visual interface to Storyspace looks very much like a network diagram, with text windows resembling subnets or individual machines, and hypertext links as their virtual intercommunicative connections, altogether creating an interesting fit between form, process, and content. I expected that the new contributors would act almost as an analogic poetry machine, creating unexpected narrative connections and material through their processes of reading/writing. If necessary, editors could go through the material, not deleting submissions, but adding indexes and other metalinking schema in order to give coherent shape to the material.
Our tool needs were quite simple... Macintoshes, Storyspace (provided when needed through Eastgate's generosity), and dialup access to the Internet, which in turn provided access to an entire set of virtual tools, such as person to person email, and a listserv based at the Institute for Advanced Technology in the Humanities at the University of Virginia (headed by John Unsworth),
which allowed an individual correspondent to send a letter to all Waxweb participants, creating an asynchronous discussion group. Files were shared through the use of a private "ftp" site in St. Louis, a harddrive space from which all participants could retrieve (or upload) files. For synchronous conferencing, where people had to be in one place at the same time, we decided to use MOO software, installed at Brown University... using the "telnet" tool, we all could travel to that distant machine and logon.
MOOs are object oriented MUDs, and a MUD is a multi-user dungeon, a piece of multi-user software originally created as a game in the style of the text-based Dungeons and Dragons adventure. Like that game, MUDs are most often designed architectonically, as interconnected rooms. To play in a MUD, people travel (telnet) to a machine running the software, log on under archaic pseudonyms, and wage text against other users. The live, on-line intercommunication is what makes them unique... they are text-based virtual realities. While MUDs are fixed gaming areas, with fixed rules, MOOs are completely open and allow users to reconfigure the space, make new rooms, and even do a certain amount of Basic-style programming. The source code is also available, so that the software itself can be reconfigured at a deeper level by a programmer. MOOs can still have gaming aspects, but they are more often used as meeting, presentation, and workplaces, where you can be alone, or with many people.
Coincident with our decision to use the HotelMOO at Brown, Tom Meyer, the "owner" of that MOO, introduced some interesting customizations. First off, he wrote a filter which converted Storyspace hypertext files to MOO-space, in the process of which each hypertext node became a room in the MOO's virtual architecture, and each link became a passage between rooms. Meyer also converted the room-construction commands native to the original MOO software so that they would more resemble hypertext authoring commands.
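The mapping Meyer's filter performed — node becomes room, link becomes passage — can be sketched as a simple transformation. The input format below is invented for illustration; Storyspace's actual file format and the MOO's internal object model are both different.

```python
# Hypothetical sketch of a hypertext-to-MOO filter: each hypertext
# node becomes a room, each link becomes a passage between rooms.
# The input data format here is invented; the real Storyspace file
# format and MOO database differ.

def hypertext_to_moo(nodes, links):
    """nodes: {node_id: text}; links: [(from_id, to_id, anchor_word)]."""
    rooms = {nid: {"description": text, "exits": {}}
             for nid, text in nodes.items()}
    for src, dst, word in links:
        rooms[src]["exits"][word] = dst   # link -> passage
    return rooms

nodes = {"n1": "The bees discover television.",
         "n2": "A monologue about the hives of Mesopotamia."}
links = [("n1", "n2", "monologue")]

moo = hypertext_to_moo(nodes, links)
assert moo["n1"]["exits"]["monologue"] == "n2"
```

Once ported, walking from room to room in the MOO is the same act as following a link in the original hypertext.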
Thus it became possible to put the Waxweb hypertext base-layer in a public place, so that anyone with telnet, regardless of their desktop machine, could literally read and write the Waxweb hypertext. Access to a Macintosh and a copy of Storyspace were no longer prerequisites; internet access was the only requirement. Visitors to the MOO were invited not just to read the ported hypertext, but to add to it using the online hypertext tools, and in addition to talk to one another. Traditional writing, hypertext writing, various levels of programming, as well as several types of synchronous and asynchronous text communication were all supported in this environment, a hybrid functionality resulting from the placement of a constructive hypertext in a virtual-reality environment. Though the easy-to-use visual interface of the Storyspace software was lost, a huge group of potential writers/readers was added; Storyspace still remained the main authoring tool for myself and the 25 original writers, because of the power and speed with which links could be constructed. I had expected this first group of writers to act in unison as a poetry machine, and continued to believe that the quantum froth of net contribution would show an unexpected autocatalytic ability, which could be amplified by the pattern-recognizing abilities of an editor, should that become attractive.
Soon after Waxweb became a 600 room hybrid of text-based virtual reality and on-line hypertext, the project of adding Wax's audio and video was put forward in the context of an installation at SIGGRAPH '94, the largest annual computer
graphics conference. Tom Meyer realized that the best way to realize this, and preserve the existing on-line functionality, would be to make Waxweb a dynamic hypermedia document on the World Wide Web.
The World Wide Web (WWW) is essentially an Internet hypermedia document publishing standard established and maintained at CERN in Geneva, which allows the creation of a distributed, virtual, hypermedia library across the network. Documents can be defined in any ASCII editor, as the heart of the system is a simple markup language called HTML (hypertext markup language). These markup codes define intra- and interdocument links, allowing navigation through document data distributed throughout the world. A reader in New York may click a link on her local screen-displayed page to bring forward another virtual, formatted page from Cardiff. Clicking a word link on the Cardiff page may bring forward yet another page from the middle of a document in California, which itself may consist not just of text residing on that California machine, but also of a picture from another machine in the same laboratory, and a second picture from a machine running in Southern Florida. In essence, virtual hypermedia documents are formatted on a user's screen, using data distributed throughout the world, a system which is true even to the level of a single page's composition.
The ability to use the World Wide Web (WWW) is dependent on the type of connection a user has to the internet. If people have text-only, dumb-terminal style connections to the Internet provider, most usually through a telephone connection, they can still capably read hypertext-only, pictureless documents using Lynx, a DOS-commandline style of reader which runs on their provider's server, and shows links as highlighted text on the screen, chosen by using the cursor keys. If users have a LAN-style connection to the network, which allows them to use Windows-style intelligent terminal software, they can use a visual-interface "browser" for the World Wide Web, the most famous of which is Mosaic, an application created at the National Center for Supercomputing Applications (Illinois). Mosaic is freeware; versions are available for almost all current platforms.... Mac, Windows, Unix workstations of various types, and even Amigas. The power of Mosaic and other browsers like it lies in its ability to allow point and click navigation through links, plus the ability to easily view stills, audio, and video integrated in a single document. Files are usually transferred before being interpreted by the software, which means that even on a relatively high-speed (ethernet) Internet connection, a small one minute digital movie will often take much more than a minute to transfer, at the completion of which the playback begins. Some viewers are beginning to offer playback as the data is received, a solution that allows viewers to see a low resolution version of stills as they arrive, and hear some varieties of digital audio (or video) in real-time.
Mosaic-style browsers are essentially readers, and so do not offer useful on-line writing tools. Though a user can save personal annotations locally, there is no way to make these visible to others, and no real opportunity for synchronous intercommunication, all of which limits its usefulness as a workgrouping tool, though of course it is a wonderful platform-independent tool for the presentation of networked hypermedia, such as an audiovisual Waxweb. To keep the writing and intercommunication functionalities in a Mosaic environment, Tom Meyer's solution was to turn the WaxMOO into a virtual, dynamic World Wide Web document. This meant that the MOO, running on a distant machine, could answer requests for "pages" from a copy of Mosaic
running on a local user's machine by sending out a representation of a MOO-room (hypertext node) in WWW format, and then closing the connection with the browser until the next request. The "room" sent across the network would be displayed on the user's machine as a static, formatted page of hypermedia. This is quite different from the standard MOO command-line interface, which on one hand provides only text, but which on the other hand is constantly connected to the user, allowing real-time text-chat.
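The stateless cycle described above — a browser asks for a room, the MOO renders it once as a page and hangs up — can be sketched as follows. The room data and markup are purely illustrative; Meyer's actual MOO-side code is not reproduced here.

```python
# Sketch of the stateless MOO-to-Web cycle: each request for a room
# is answered with one formatted page, after which the connection is
# dropped (no session is kept, unlike a live telnet MOO session).
# Room contents and markup are illustrative only.

rooms = {
    17: {"title": "Room 17",
         "text": "The narrator pauses among the hives.",
         "exits": {"north": 18}},
    18: {"title": "Room 18",
         "text": "Television flickers behind the comb.",
         "exits": {"south": 17}},
}

def render_room(room_id):
    """Answer one page request; exits become ordinary hyperlinks."""
    room = rooms[room_id]
    exits = "".join(
        f'<a href="/room/{dest}">{word}</a>'
        for word, dest in room["exits"].items())
    return (f"<html><h1>{room['title']}</h1>"
            f"<p>{room['text']}</p>{exits}</html>")

page = render_room(17)
assert '<a href="/room/18">north</a>' in page
```

Each click in the browser simply triggers another one-shot request, so the MOO's rooms behave, from Mosaic's point of view, like any static Web pages.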
Intercommunication through the Mosaic browser was achieved through modifications to the MOO which allowed it to receive commands in html format from a Mosaic browser, thus letting users gain access to the hypertext writing interface of the MOO by pressing standard buttons and filling out forms in the Mosaic browser. Since the MOO is by definition user-reconfigurable (meant to record the intentional traces of its users), this interface allowed Mosaic users to make annotations that were made readable almost instantly for other reader/writers. What was missing for the Mosaic users, unfortunately, was the ability to have a real-time chat with other users, or to use some of the other real-time functionalities of the MOO. However, the first solution to this problem was provided by the fact that users able to run Mosaic on their local machine could usually run multiple, similar, "smart-terminal" style programs in a multi-tasking fashion, and so could easily have a MOO chat-session active in a separate window simultaneous with the Mosaic-reading session. The text-only MOO would provide hypertext authoring functionality and intercommunication, while the Mosaic session would allow the user to view formatted hypertext, and embedded stills, audio, and video. The MOO reader could even be slaved to the Mosaic reader through the MOO itself, so that whenever the Mosaic user changed pages, the text-only MOO-browser would change rooms to follow. As of this writing, Meyer is researching ways to integrate the real-time intercommunicative capacities of the MOO directly into the Mosaic browser. It should be noted that the quest to add a real-time datastream (such as a telnet session) to World Wide Web browsers is one of the highest current development priorities in the WWW community. A standardized, cross-platform implementation will open the way to such applications as a true, audio-visual intercommunicative and distributed virtual reality (as we shall see below).
With this basic functionality in place, Waxweb was extensively reworked and reimported into the MOO. By late July, Waxweb consisted of 900 pages of hypertext with over 9000 hyperlinks. This included the main 600 pages of the film, plus over 200 additional pages containing a wide variety of material from earlier versions of the script; the other 100 pages included material by guest authors, and miscellaneous materials. Embedded in the main pages are 2000 color stills, one for each shot in the film; each is available in three sizes, resizeable any time by the individual user, dependent on interest or bandwidth requirements. The film itself has been split into 600 mpeg-compressed video segments, most less than a megabyte in size. Audio is available separately in aiff format, mainly in order to offer the soundtrack in four languages. Readers can choose to hear audio at any time in either English, Japanese, French, or German; there are over 2400 audio clips at the site. The film's monologue is also available as text in each of these languages; if the user chooses a language besides English, this text will be automatically inserted on the appropriate pages. Waxweb supports kanji both for reading and writing, though a localized Japanese browser is necessary to see this text.
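Per-user language selection of this kind amounts to assembling the same page from different media files. A sketch of that assembly follows; the file-naming scheme is invented for illustration and does not reflect the actual Waxweb server layout.

```python
# Sketch of per-user page assembly: the still and video for a shot
# are language-independent, while the audio clip and monologue text
# are swapped by the user's chosen language.  The naming scheme
# below is invented for illustration.

LANGUAGES = ("en", "ja", "fr", "de")   # the four Waxweb languages

def page_assets(shot_number, language="en"):
    """Return the media files needed to build one page of the film."""
    if language not in LANGUAGES:
        raise ValueError(f"unsupported language: {language}")
    return {
        "still": f"stills/shot{shot_number:04d}.gif",
        "video": f"video/seg{shot_number:04d}.mpg",
        "audio": f"audio/{language}/line{shot_number:04d}.aiff",
        "text":  f"text/{language}/line{shot_number:04d}.txt",
    }

assets = page_assets(42, "ja")
assert assets["audio"] == "audio/ja/line0042.aiff"
```

Because the MOO builds each page dynamically, switching languages mid-session only changes which audio and text files get pointed to; the rest of the page is untouched.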
Using standard World Wide Web programming tools, a push button interface to the dynamic MOO is available directly from the Mosaic browser. The first choice users have upon entering the site is whether to register as a user, or visit as a guest. Registered users receive a password, and access to the authoring tools at the site. Registration is necessary in part for security reasons, and to encourage responsible participation at the site; but it also allows each user to have a small personal data file associated with her, which allows storage of bookmarks and configuration data from session to session.
At Waxweb, hypertext links have their usual color-underlining; access to the audio or video is through hyperlinked icons at the top of appropriate pages. Associative reading has been made easy; 100 key words have been hyperlinked throughout the entire text, allowing users to browse, for example, through all sequential occurrences of the word "bee". Each of the film's 2000 stills has been sorted into 30 idiosyncratic categories, such as the group of all "round things" that appear in the film. Clicking on any picture in the main 600 pages will take the reader to an index page where these similar pictures are displayed on a grid; the user can then click on any of the similar pictures on the grid, arranged left to right, top to bottom as they appear in the film, to be taken to that page of the film.
At the bottom of each Waxweb page is the main authoring interface. The first element of this is a configuration menu, which allows users to choose language, picture size, and video compression format, at any point in the reading. Beneath this is a simple comment area; users can type their name and a several-line comment, press the send button, and the comment will be immediately added to any others listed, and visible to other users anywhere in the world. Following this is a bookmarks area; pages can be added and subtracted from a personal bookmark list, and users can go to any of the bookmarked pages at any time. This is an important compositional tool for hypertext authors, who construct links between spatially distinct pages, and need to store references during their writing. Next is the hypertext writing interface, which allows users to add hyperlinks and new pages at will. Users can choose a word on the current page as the beginning of the link; they can then create a new page, to which they are immediately transported. At this point, the user begins to use the actual writing interface, which is just beneath the hyperlinking interface. Text can be entered, as well as pointers to media (stills, audio, video) at other sites, which will then be published embedded in the user's new page. Turning back to the hyperlinking interface, the user then chooses a place on the new page to anchor the link begun previously; when the link is complete, the user is taken back to the original page to test the new link. An important point to note in this entire process is that the user is free to make new Web pages at will, and indeed to make as many as she desires. Though the WWW is easy to author for, it is often difficult for new users to find a place to take their writing or home pages; Waxweb provides a very simple solution to this public access problem.
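The two-step link-authoring gesture described above — pick an anchor word, write a new page, then close the link — can be sketched as a pair of operations on a page store. The data structures are invented; they stand in for the MOO's internal objects, not for its real API.

```python
# Sketch of two-step link authoring: begin_link remembers the
# anchor word and creates a fresh page; finish_link closes the
# pending link once the new page is written.  Data structures are
# invented stand-ins for the MOO's internal objects.

pages = {"p1": {"text": "television among the bees", "links": {}}}
counter = [1]                    # simple page-id generator

def begin_link(page_id, word):
    """Start a link at `word`; return the new page id and pending link."""
    counter[0] += 1
    new_id = f"p{counter[0]}"
    pages[new_id] = {"text": "", "links": {}}
    return new_id, (page_id, word)

def finish_link(pending, target_page):
    """Anchor the far end of a previously begun link."""
    src_page, word = pending
    pages[src_page]["links"][word] = target_page

new_page, pending = begin_link("p1", "bees")
pages[new_page]["text"] = "A reader's annotation."
finish_link(pending, new_page)
assert pages["p1"]["links"]["bees"] == new_page
```

Since every reader can call these operations, the page store only grows: new pages and links accumulate without deleting anyone else's contributions.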
By early 1995, we will be implementing a new interface to the site, which will include the first large-scale implementation of distributed virtual reality using standard World Wide Web browsers. This spatialized interface will be based on the recently defined specification for VRML, or virtual reality modeling language, a subset of SGI's Open Inventor language which has been specifically designed for use in conjunction with the World Wide Web. VRML takes advantage of the ability that Web browsers have to auto-launch helper
applications in order to allow viewing of datatypes that the browsers themselves cannot handle internally. For example, pressing a link to download a jpeg-compressed still image will cause most viewers to simultaneously launch an external jpeg viewing application; the downloaded data is directed to the external application, which then displays the picture in a floating window. VRML works similarly; pressing a hyperlink, such as an underlined word or a button, causes 3-D object data and a scene description to be downloaded from a WWW server to the user, while simultaneously a VRML viewer is launched to interpret the data. The viewer is coded for extremely fast, software-only rendering of the 3D objects. When the objects are loaded, visitors can use the mouse to navigate through the rendered 3-D space at 5-20 frames per second on a 486-based machine, dependent on scene complexity; the viewers will be available on all the major platforms, and are expected to be able to render 20,000 polygons per second. Most interestingly, any rendered object can also have a hyperlink attached to it, so that clicking on a linked object, no matter what angle you are viewing it from, can take the user back to text, to a picture, sound, or movie, or to another 3-D scene. VRML is a true hybrid of hypertext and virtual reality. It wasn't so many years ago that Jaron Lanier, in the virtual reality camp, and Jay Bolter, on the side of hypertext, both rhetorically claimed that the two types of interface were mutually exclusive, as virtual reality was meant to couple computers and communication so profoundly that language would be left behind, even at the formal level of programming; whereas hypertext advocates saw the advent of universal hypertext as key to an argument that saw computers as both practically and formally a type of language technology.
Here, the two collapse into one another, to create a hybrid new form, which can again be recombined with other existing forms such as MOO's and Web browsers to create even stranger combinations.
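The helper-application mechanism that makes this hybridization possible can be sketched as a content-type table: the browser displays what it understands and hands everything else to an external viewer. The application names below are illustrative inventions; only the `x-world/x-vrml` type is the convention actually used for VRML data.

```python
# Sketch of a Web browser's helper-application dispatch: data the
# browser cannot display itself is handed to an external viewer
# chosen by content type.  Application names are illustrative;
# "x-world/x-vrml" is the conventional VRML content type.

HELPERS = {
    "image/jpeg": "jpeg_viewer",
    "x-world/x-vrml": "vrml_browser",   # launches the 3-D viewer
    "video/mpeg": "mpeg_player",
}

INTERNAL = {"text/html", "image/gif"}   # handled in-browser

def dispatch(content_type):
    """Decide who displays a downloaded file."""
    if content_type in INTERNAL:
        return ("browser", content_type)
    helper = HELPERS.get(content_type)
    if helper is None:
        return ("save-to-disk", content_type)
    return (helper, content_type)       # hand off to external viewer

assert dispatch("x-world/x-vrml")[0] == "vrml_browser"
assert dispatch("text/html")[0] == "browser"
```

From the browser's point of view a VRML world is just another helper-app datatype; the hybridization happens because the world handed off to the viewer can itself contain hyperlinks back into the Web.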
Since "Wax", the film, made extensive use of 3-D objects in its storytelling, it has been relatively easy to convert these objects for use in VRML. The spatial interface to Waxweb will consist of more than 300 browsable rooms, filled with hyperlinked 3-D objects from the film. The major limitation of the current version of VRML (1.0, October 1994) is that it does not allow objects to have behavior, so that all objects lie static in their spatial field. This is expected to change over the next year, as VRML 2.0 is developed with behavior specifically in mind, and, in parallel, a standardized solution is worked out for the problem of how to embed real-time datastreams in a Web browser. It is commonly expected that these two developments will allow visual intercommunication via distributed virtual reality; multiple distant users, connected to the Internet through standard dialup IP-connect (visual interface accessible) accounts, will be able to interact with each other in a simple realtime 3-D virtual reality, with communal text and graphics spaces easily available, as well as time-delayed audio (real-time audio available only for high-bandwidth users). In the near term, one of the specific technical goals of the Waxweb project is to find a simple, relatively standard way to provide these functionalities in a visual workspace which can be used both to view and add to Waxweb, and also serve as a production tool for "Jews in Space".
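A hyperlinked VRML 1.0 object of the sort described above is simply a shape wrapped in a WWWAnchor node, whose "name" field holds the destination URL. The sketch below generates such a scene as text; the URL and the helper function are hypothetical, and the cube stands in for any converted object from the film.

```python
def anchored_object(url, description):
    """Return a minimal VRML 1.0 scene in which a rendered object
    carries a hyperlink: clicking the cube from any viewing angle
    would send the browser to the attached URL.  (Illustrative
    helper; the URL passed in is an assumption, not a real address.)"""
    template = (
        "#VRML V1.0 ascii\n"
        "Separator {\n"
        "  WWWAnchor {\n"
        '    name "%s"\n'
        '    description "%s"\n'
        "    Cube { width 2 height 2 depth 2 }\n"
        "  }\n"
        "}\n"
    )
    return template % (url, description)
```

Because the link lives on the object rather than on a screen region, it survives navigation: the anchor fires no matter where the user has wandered in the rendered space.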
At present, Waxweb runs on an RS6000 server at the Institute for Advanced Technology in the Humanities at the University of Virginia, headed by John Unsworth. In early 1995, the project will be establishing mirror sites for the media at UNC's Sunsite, with additional mirrors planned at servers in Berlin, Sydney, and other sites. Because the site is based on a MOO, it is possible to dynamically create pointers to media files based on a user's stored profile, or even the IP
address by which she is entering the site. Thus, a user coming in from Australia would receive all text and interface information from the main server in Virginia, while simultaneously being sent to the mirror in Sydney for pictures, sound, video and 3-D files; since the WWW is by nature a scheme for distributed hypermedia, this dynamic reassignment would be transparent to the user, who would only notice that files loaded faster once the mirroring scheme was implemented. Besides respecting the network's bandwidth ecology, this mirroring scheme will allow a higher user load, as the main server would be left mainly to running the MOO software and answering requests for text, rather than constantly having to "think" about sending 100k or larger files to multiple simultaneous users. An additional, extremely important advantage of mirroring is that it is possible for the media files to exist on the individual user's machine. With support from the New York State Council on the Arts, I will be pressing a multi-platform CD-ROM of the complete Waxweb dataset (including a "frozen" html version of the MOO). Owners of the CD will be able to register at Waxweb, specifying through an easy-to-use interface where the CD-ROM is on their local system; thereafter, the MOO will point to the CD for all media files. This will allow low-bandwidth network users to enjoy Waxweb with quick access to heavy media types such as the mpeg video files, while at the same time dynamically interacting with the Waxweb server and other users, adding comments, hypertext, and pages as they wish.
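The media-pointer logic described above can be sketched as a single resolution function: the MOO answers all text requests itself, but rewrites each media URL per user, preferring a registered local CD-ROM, then the nearest mirror, then the main server. The profile fields, mirror hostnames, and paths below are all hypothetical placeholders, not the project's actual addresses.

```python
# Hypothetical mirror table, keyed by a region guessed from the
# user's profile or IP address.  Hostnames are placeholders.
MIRRORS = {
    "au": "http://mirror.sydney.example/waxweb",
    "de": "http://mirror.berlin.example/waxweb",
    "us": "http://sunsite.unc.example/waxweb",
}
MAIN = "http://main.server.example/waxweb"  # placeholder main server

def media_url(profile, media_path):
    """Resolve one media file for one user.  profile is a dict with
    optional 'cdrom_path' and 'region' keys, as might be stored by
    the MOO when a user registers."""
    cdrom = profile.get("cdrom_path")
    if cdrom:
        # User registered a local CD-ROM: point at their own machine.
        return "file://%s/%s" % (cdrom.rstrip("/"), media_path)
    mirror = MIRRORS.get(profile.get("region"))
    if mirror:
        # Nearest mirror carries the heavy media files.
        return "%s/%s" % (mirror, media_path)
    # Fall back to the main server.
    return "%s/%s" % (MAIN, media_path)
```

Because every page is generated dynamically by the MOO, this rewriting is invisible to the user, who simply sees files load faster.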
It is interesting to contrast the Waxweb project with a previous incarnation of Wax on the Internet. In May of 1993, Wax was sent across the mbone, or multicast backbone of the Internet, which is a special, high-bandwidth testbed for delivery of real-time audio and video across the Internet. The New York Times ran a story in the business section ["Cult Film is First on the Internet", May 23, 1993], which declared that the experiment pointed towards the 500 channels, unfortunately neglecting to point out that the net-cast was a multicast, meaning anyone who could receive could also send audio or video (or text, of course), so that an individual's reception screen could be filled with little boxes of reconfigurable intercommunication. I kept this partial misconception in mind as I planned the Waxweb project, which in many ways is a re-multicast of Wax over the standard, lower-bandwidth Internet. As this extremely inexpensive project has gone up on the public network, a wide variety of multi-million dollar commercial video-server trials have been announced around the US, and in some cases constructed. Many of these new networks have been conceived on an expanded cable-tv model, offering mainly more channels, and user interaction at the level of movies on demand and simple shopping. Many offer high-bandwidth networks 50 to 100 times faster than what is available to high-end Internet users. Though Waxweb on the Internet is based on file transfer, rather than a continuous stream of digital video, I like to point out that if stable bandwidth at least one order of magnitude lower than that being used in the video trials (in their "thinner" implementations) were available to Waxweb users, the functional difference between the two types of server would blur.
With a practical eye on the high end, Waxweb also offers functionality to lower-end (low-bandwidth) users, including those who have text-only access to the Internet via an ordinary dumb-terminal dialup connection, a type of user which at present constitutes the vast majority of Internet connectees. This project is an example of a narrative "server" scalable from the bottom up, from text up to pictures, and in a broader sense demonstrates the strengths of an open, reconfigurable system. If the bandwidth were available, the ability to send narrative audio/video in a single direction would only be a subset of the system's total functionality. My
rhetorical point is that the 500 channels offered by the video-server trials are simply a high-bandwidth subset of an open, accessible, reconfigurable system, not the other way around. In the coming years, as universal digital access becomes viable, as bandwidth becomes cheaper and more stable, and as the specifications and standard tools surrounding the World Wide Web increase in capacity, it will become possible to imagine a global tv system populated by an indefinite number of small, scalable servers, each offering synchronous or asynchronous one-way and two-way datastreams, serving simultaneously as production environments, content providers, and meeting places (or in other words, HTML 4.0 will be global television, plus!). This will be one of the most important areas of research for the Waxweb project as it moves from distribution of a reconfigurable "Waxweb" to the production of "Jews in Space". The first stage of the transition will involve a further enablement of the workgrouping tools, to allow a richer intercommunicative virtual production environment, capable not only of supporting production meetings and management, but also of providing access to still and moving image composition tools, and to large shared databases. These production tools will be developed with the idea in mind that they can become distribution tools at the completion of the film, to be used by an audience both for heightened on-line browsing of the completed multimedia narrative, and as intercommunication tools, configurable by users at the narrative server even for their own individual production purposes.
The high road to such a server is of course traveled by using high-bandwidth Internet tools, such as the upcoming Jupiter extensions to standard MOO software, which allow text/graphic/audio/video synchronous and asynchronous intercommunication by a group of users connected through the Internet's multicast backbone (mbone). The low road is through the standard WWW; for instance, a VRML MosaicMOO capable of handling realtime data streams could be a very capable visual workgrouping tool. Such a tool, with both open and hierarchical functionalities, could be used for low-budget, multi-continent electronic cinema production and post-production, and then for distribution of a 40-gigabyte multilingual hyper/cybermovie that can be verbally, pictorially, and spatially browsed and reconfigured across the network, and, recursively, used as a text/picture/spatial meeting place and workspace by those browsing and writing viewers, all maybe for a dollar an hour, and with portable media available in a variety of forms at the same place.
The same dataset created and presented through this server would of course also be used to make the linear theatrical feature. The permutable nature of this dataset is at the heart of "multiple media integrated narrative", a process by which hybrid tools are used to affordably create a unified dataset from which a multitude of hybrid media forms can be created, all of which constitute a single narrative. In working to partly produce "Jews in Space" through the network, I in no way intend to abandon the idea of the large-screen, linear narrative; the theatrical feature will be one iteration of the dataset, while an on-line, audience-reconfigurable 40-gigabyte proto-global tv version will be another, and portable media yet another. As far as possible under present conditions, my next feature is authored from the start as an integrated database preserving all varieties of association, collation, and composition, so that final authoring in a variety of related narrative forms can easily be accomplished. A feature film in a darkened theater offers one type of narrative, both in meaning and presentation; a parallel World Wide Web-style
version, with as much narrative material as 60 CD-ROMs, plus user interaction, constitutes another place, with many related stories; and the variety of user-reconfigurable personal, portable media, such as a videotape, floppy disc, or CD-ROM, each offers additional narrative functionalities. Rhetorically, it is as if the narrative were some fourth-dimensional object which cast shadows onto the 2-D spaces of composition and audience viewing; the shapes of the different shadows are captured in separate media by the computer-aided artist, whose working power has been amplified by the brain machine, which has allowed cheap access to all these different media, simultaneous with cheap access to workgrouping and distribution tools, whose formal properties additionally affect the final plural results.
It is the hybrid practicality of the open computer network medium, which amplifies the individual machine (just as the machine amplifies the individual user), that has allowed the new functionalities discovered and anticipated in the research described above. Here, we begin to see hints of a profound collapse of the previously distinct realms of production and distribution into one another. On the production side, Waxweb is an example of inexpensive distributed workgrouping tied to the integrated use of distributed resources. But this is not separate from distribution; the goal was obviously public distribution of a work which, iteratively, was designed for audience reconfiguration (production), renewed audience viewing (distribution), and so on. Concomitant with the perception of this blurring, the concepts of integration and hybridity come to the foreground. Narratively, integration can often be seen in the collapse of a large number of associations into a single coherent narrative; and technically, it can be seen in the continuing collapse of narrative tools into the individual user's workstation, and the collapse of machines into one another across networks. In parallel, narrative hybridity appears as the very strange combination of forms caused by the unexpected combination of various ways of telling; and technical hybridity as the sudden appearance of strange new functionalities caused by the clever recombination of tools, a process most easily performed if the tools are themselves open, easily available, and reconfigurable.
We have already seen most text tools collapse into the integrated text amplifier... or computer, allowing us to do anything we want to do with words, in any order we want, on the way to composition. Concomitantly, we have gained the ability to project these functionalities across any distance, allowing us not only to write or read, but to do a lot of hybrid things which are neither exactly one nor the other. General media tools will continue to collapse into the integrated media amplifier (or networked media workstation), where hypertext, image processing and synthesis, editing, and a variety of in-between functionalities will allow anything to happen in any order, on the way to composition, collaboration, presentation, and things in between. Inevitably, we are going to end up with a very large number of hybrid, multi-bodied media forms. Common to all will be the fact that a single, variegated chunk of proto-narrative, proto-image, proto-anything data can, and often will, take many different forms, which will all have the esthetic tension of being morphologically similar, though in different media.
Waxweb has gone from hypertext to hypertext MOO to hypertext Mosaic MOO to hypertext VRML Mosaic MOO in about a year. Integration and hybridity are so dynamic right now, what with new tools building other new tools all over the connected planet, that I think it is very difficult to really imagine what the
possibilities will be in two or three years (a WAIS-based scalable video-conference/server with AgentInteractionStoryplace and time-based-hypertext interpersonal-VRML MosaicMOO (with whiteboard) narrative natively synchronized with various types of cross-platform portable media, and able to output new portable media, also available text-only, as well as at the local art-cinema and the video-rental store... and what have I forgotten?). If we can only keep open, accessible tools and networks, we will see hybridity become the standard, and maybe even live to see the next true step in hybridity: the commonplace use of integrated, hybridized network tools for the semi-automatic creation of narrative elements, both in production and distribution. Where production and distribution begin to resemble one another, and integrated tools create hybrid narratives, it is possible to imagine the practical availability and narrative application of poetry machines, meaningful autocatalytic images, and visual VR techniques in the production (and distribution) of digital cinema (though I am prepared to accept this may be an archaic vision).