The Enduring Ephemeral
or the Future Is a Memory

Wendy Hui Kyong Chun

New media, like the computer technology on which it relies, races simultaneously towards the future and the past, towards what we might call the bleeding edge of obsolescence. Indeed, rather than asking, What is new media? we might want to ask what seem to be the more important questions: what was new media? and what will it be? To some extent the phenomenon stems from the modifier new: to call something new is to ensure that it will one day be old. The slipperiness of new media—the difficulty of engaging it in the present—is also linked to the speed of its dissemination. Neither the aging nor the speed of the digital, however, explains how or why it has become the new or why the yesterday and tomorrow of new media are often the same thing. Consider concepts such as social networking (MUDs to Second Life), or hot YouTube videos that are already old and old email messages forever circulated and rediscovered as new. This constant repetition, tied to an inhumanly precise and unrelenting clock, points to a factor more important than speed—a nonsimultaneousness of the new, which I argue sustains new media as such.

Also key to the newness of the digital is a conflation of memory and storage that both underlies and undermines digital media’s archival promise. Memory, with its constant degeneration, does not equal storage; although artificial memory has historically combined the transitory with the permanent, the passing with the stable, digital media complicates this relationship by making the permanent into an enduring ephemeral, creating unforeseen degenerative links between humans and machines. As I explain in more detail later, this conflation of memory with storage is not due to some inherent technological feature, but rather due to how everyday usage and parlance arrests memory and its degenerative possibilities in order to support dreams of superhuman digital programmability. Unpacking the theoretical implications of constantly disseminated and regenerated digital content, this paper argues these dreams create, rather than solve, archival nightmares. They proliferate nonsimultaneous enduring ephemerals.

The Future, This Time Around

Prophesying the future of digital media is, once more, in fashion. With the now-embarrassing utopian and dystopian hype around the internet and Y2K comfortably behind us (or at least archived), there is a growing impatience with the so-called critical hindsight that flourished after the dotbombs and 9/11. Rather than the sobering if banal reassessments of internet communications as a “double-edged sword,” the main strain of digital media analysis—popular and scholarly—focuses on future possibilities.

Howard Rheingold, who helped popularize virtual reality and virtual communities, has written a book on the next social revolution, smart mobs; everyone is now speculating about web 3.0, the semantic web in which information and meaning will finally coincide. Even longstanding critical organizations, such as the Australian organization fibreculture, dedicated to “critical and speculative interventions in the debate and discussions concerning information technology,” have joined the bandwagon, entitling the 2007 Digital Arts and Culture (DAC) association’s conference in Perth “The Future of Digital Media.”

This future 2.0, like web 2.0 or 3.0, is not as utopian or bold as its mid-1990s predecessor, which was billed as the future. There are no upbeat yet paranoid commercials promising an end to racial discrimination and the beginnings of a happy global village; there are no must-read cyberpunk novels or films outlining its gritty, all-encompassing nature, although new media does now encompass bio- and nanotech. This return to the future as future simple—as what will be, as what you will do, as a programmed upgrade to your already existing platform—will no doubt recede and then reappear. Its cycle is partly driven by economics. Silicon Valley has recovered from the demise of the “new economy.” Google is trading well over four hundred dollars per share. iPods and BlackBerry devices are everywhere. There is a sense that something is and has changed. NBC announced layoffs in 2006 not only because its programming was doing poorly but also because kids just aren’t watching TV on TV anymore. Also, Facebook has moved successfully from college campuses to the English-speaking public in general, and Mark Zuckerberg is apparently replacing Larry Page and Sergey Brin as the valley’s new IT kid. YouTube is impacting U.S. presidential elections; CNN now covers blog content as breaking news; and Skype seems poised to make the videophone, conceived in the 1970s and 1980s, an everyday reality.

This return to the future or to the “emerging” in new media and its study is also a reaction to a perceived crisis within net criticism. When, in 2001, Lev Manovich chastised scholars for focusing on future rather than already existing technologies—for conflating demo with reality, fiction with fact—and Peter Lunenfeld and Geert Lovink categorized much theoretical work as “vapor theory,” their criticism seemed a much-needed admonishment. It was a call for theorists to wake up from their virtual reality or, to play with William Gibson’s famous description of the matrix, their consensually hallucinated cyberspace. Even Gibson has started writing about actually existing technology. Engaging the present, however, has not been so easy. Gibson’s more recent books have not been as popular as his early ones. It would seem that presently existing media objects are rather boring or have a short lifespan. Indeed, as if to avoid both the future and the present, Neal Stephenson now writes about the past, and the scholarly trend towards “media archaeology” is similarly retrospective, even if it is not traditionally historical or progressivist.

Speed and variability apparently confound critical analysis. According to Lovink, “because of the speed of events, there is a real danger that an online phenomenon will already have disappeared before a critical discourse reflecting on it has had the time to mature and establish itself as institutionally recognized knowledge.” More broadly, McKenzie Wark has argued that traditional scholarship is incompatible with the types of images and events, produced and disseminated along lightning-like speed media vectors, that interrupt the homogeneous and abstract formal time of scholarship. In making this diagnosis, Wark draws from the work of Paul Virilio, who has argued that cyberspace has implemented a real time that is eradicating local spaces and times. This global one time threatens “a total loss of the bearings of the individual” and “a loss of control over reason,” as the interval between image and subject disappears. More narrowly, Manovich has argued that the critical blindness brought about by speed is peculiarly American: “the speed with which new technologies are assimilated in the United States makes them ‘invisible’ almost overnight: they become an assumed part of the everyday existence, something which does not seem to require much reflection. The slower speed of assimilation and the higher costs involved give other countries more time to reflect upon new technologies, as it was the case with new media and the Internet in the 1990s.” Manovich’s geographic analysis and his linking of speed to cost is intriguing, but once again speed is labeled as the culprit. In addition to speed, malleability also makes criticism difficult by troubling a grounding presumption of humanities research: the reproducibility of sources. The fact that we cannot all access the same text—because, for example, the page has simply disappeared—seems an affront to scholarly analysis. This lack of verifiability gives a different spin to discourses of trust that dominate technology planning.

In response to these difficulties, Lovink and Wark both argue that the time of theory itself needs to change; Lovink’s “theory on the run” and Wark’s theory as “micro-event” take on the same temporality or speed as digital media, refusing to stand outside their mode of dissemination. Lovink’s theory is a “living entity, a set of proposals, preliminary propositions and applied knowledge collected in a time of intense social-technological acceleration.” It is not only on the run because it engages the present but also because it “expresses itself in a range of ways, as code, interface design, social networks and hyperlinked aphorisms, hidden in mailing-list messages, weblogs and chatrooms and sent as SMS messages.” Wark similarly discusses the work of tactical intellectuals as a kind of micro-event in which “the media tactician presents an image that endangers the conventions of journalistic narrative time, yet which is capable of inserting itself into it.” That is, the micro-event travels along the same media vectors as the mainstream event itself, displacing the event’s terms in its travels. Wark’s critical work exemplifies this kind of intervention; it appears first on the net and then later in print. Although I am sympathetic to these efforts and agree that digital media criticism needs to be on the net rather than simply about it, I also believe we need to think beyond speed.

The fact that the present is hard to engage or that scholarly certainty lags behind its object of analysis or that there is a need for intervention is hardly profound. Scholars studying global climate change, for instance, have consistently argued that by the time we know whether or not their predictions are true it will be too late. Thus, one must act as though future predictions (models or demos) were fact in order to prevent the predicted future from taking place. Also, the lag between a digital object’s creation and its popular or scholarly uptake—its nonsimultaneous dissemination—does not belie new media, but rather, as I explain later, grounds it as new. Further, ephemerality is not new to new media. Television scholars have been grappling with this very question for years. Focusing on actually existing shows, rather than future episodes, they have theorized TV content in terms of flow, segmentation, and liveness. So, what is different or new about new media?

Most obviously, networked new media does not follow the same logic of seriality as television; flow and segmentation do not quite encompass digital media’s ephemerality. Programming TV and programming new media are significantly different enterprises. To program a television show is to schedule or to broadcast it; to program a computer is to produce a series of stored instructions that supposedly guarantee—and often stand in for—a certain action. One is descriptive, the other prescriptive. Second (and not unrelated), digital media with its memory was supposed to be the opposite of or the solution to television. That is, new-media scholars’ blindness to the similarities between new media and TV is ideological; it stems from an overriding belief in digital media as memory—and thus possibly memorable—and TV as liveness. When TV was still TV, memory supposedly marked the difference between it and digital media; unlike TV, digital media’s content, like the programs it runs, was to be available 24/7. The always-thereness of digital media was to make things more stable, more lasting. Digital media, through the memory at its core, was supposed to solve, if not dissolve, archival problems such as degrading celluloid or scratched vinyl, not create archival problems of its own. The limited lifespan of CDs will no doubt shock those who disposed of their vinyl in favor of digitally remastered classics, that is, if they still use CDs or an operating system that can read them. Old computer files face the same problem.
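The descriptive/prescriptive distinction drawn above can be made concrete in a minimal sketch. All names here (`tv_schedule`, `broadcast`, `stored_instructions`) are hypothetical illustrations, not anything from the essay's sources:

```python
from datetime import time

# Programming a TV show: descriptive. A schedule entry is a statement
# about when something will be broadcast; it does not itself act.
tv_schedule = {time(20, 0): "Evening News"}  # hypothetical listing

# Programming a computer: prescriptive. Stored instructions that, when
# executed, produce (and stand in for) the action they name.
def broadcast(show):
    return f"Now playing: {show}"

stored_instructions = [lambda: broadcast("Evening News")]
for instruction in stored_instructions:
    instruction()  # the program performs the action it describes
```

The schedule must be read by someone else to mean anything; the stored program executes itself, which is the sense in which it "stands in for" the action.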

The major characteristic of digital media is memory. Its ontology is defined by memory, from content to purpose, from hardware to software, from CD-ROMs to memory sticks, from RAM to ROM. Memory underlies the emergence of the computer as we now know it; the move from calculator to computer depended on “regenerative memory.” John von Neumann in his mythic and controversial First Draft of a Report on the EDVAC (1945) deliberately used the term memory organ rather than store, also in use at the time, in order to parallel biological and computing components and to emphasize the ephemeral nature of vacuum tubes. Vacuum tubes, unlike mechanical switches, can hold values precisely because their signals can degenerate—and thus regenerate. The internet’s content, memorable or not, is similarly based on memory. Many websites and digital media projects focus on preservation: from online museums to the YouTube phenomenon Geriatric1927, from Corbis to the Google databanks that store every search ever entered (and link each to an IP address, arguably making Google the Stasi resource of the twenty-first century). Memory allegedly makes digital media an ever-increasing archive in which no piece of data is lost. This always-thereness of new media is also what links it to the future as future simple, as what will be, as predictable progress. By saving the past, it was supposed to make knowing the future easier. More damningly, it was to put into place the future simple through the threat of constant exposure; as a New York Times article questioned in response to the posting on YouTube of a clip of Senator George Allen making a racist remark: “If... any moment of a candidate’s life can be captured on film and posted on the Web, will the last shreds of authenticity be stripped from our public officials?” Intriguingly, this formulation assumes that racist slurs are the authentic and the true and that public exposure always makes behavior more banal.
However, given the legion of students with compromising Facebook entries who seem oblivious to the fact that potential employers can check these entries, and given that people increasingly record their own “transgressions” (such as the English happy slappers), it is not so clear that this assumption will hold, even for politicians. Allen, after all, made his comment at a public rally and directly addressed the Indian American man holding the video recorder. Regardless, digital media was supposed to—in its very functioning—encapsulate the enlightenment ideal that better information leads to better knowledge, which in turn guarantees better decisions. As a product of programming, it was to program the future.
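The logic of regenerative memory described above, in which a value endures only because its decaying signal is repeatedly read and rewritten, can be sketched in a toy simulation. The class name, decay rate, and threshold below are all invented for illustration; this models the general degenerate-regenerate cycle, not any specific vacuum-tube or DRAM circuit:

```python
class RegenerativeCell:
    """Toy model of a memory cell whose signal decays and so must be
    periodically regenerated (refreshed) in order to appear permanent."""

    THRESHOLD = 0.5  # below this charge, a stored 1 reads back as 0

    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def decay(self):
        # the signal degenerates over time (rate chosen arbitrarily)
        self.charge *= 0.7

    def read(self):
        return 1 if self.charge > self.THRESHOLD else 0

    def refresh(self):
        # regeneration: rewrite the value just read at full strength
        self.charge = 1.0 if self.read() else 0.0

cell = RegenerativeCell(1)
for _ in range(100):
    cell.decay()
    cell.refresh()  # without this step, the stored 1 would soon be lost
assert cell.read() == 1
```

The "permanence" of the stored bit is thus an effect of constant repetition: remove the refresh and two rounds of decay already push the charge below threshold, which is the sense in which such media hold values precisely because their signals can degenerate and be regenerated.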

Save History