Cross Talk
Artbyte (New York) 2, no. 5 (January-February 2000), pp. 28-29
Jon Ippolito

Should some code be censored?

KNOWLEDGE SHOULD BE ADVANCED AT ALL COSTS
--Jenny Holzer, Truisms

Pressures to "clean up" the Internet have receded from the public eye lately, displaced by Bill Clinton's post-Littleton call for video game manufacturers to cut back on virtual decapitations and Rudolph Giuliani's jihad against risqué religious paintings at the Brooklyn Museum. There is even some evidence that the pendulum of public policy regarding the Internet may be swinging back in the direction of unfettered access, as suggested by the New York City school system's apologetic response to criticism that its family filter hindered student research. But if the clamor for overt, %de jure% Internet censorship is dying down, a tacit, %de facto% censorship seems to be quietly taking its place. Now that more and more of the Web is being shaped by a corporate bottom line instead of government-sponsored research, the academic model of freely exchanged information is giving way to a mercenary calculation of potential gains and losses. What if visitors follow one of your links to another site, are upset by what they find there, and sue you for directing them, intentionally or not, towards offensive or illegal material? If you're not getting paid to link to someone else's site, lawyers argue, why run the risk?

It's not hard to imagine similar threats of litigation aimed at online art. The Web site mockeries and misdirections perpetrated by Heath Bunting, ®™ark, and other online pranksters have already kept a few corporate lawyers employed writing cease-and-desist letters. Mattel told artist Mark Napier he was personally liable when he displayed distorted images of Barbie on his Web site. Of course, these interventions represent political stances that the artists themselves have chosen, and hence are candidates for free-speech protection. But other online artworks take a special risk that could only arise in a programmable language like CGI or JavaScript--a liability that is not personal, but algorithmic.

Jenny Holzer exposed herself to algorithmic liability when she chose to include a script in her Please Change Beliefs Web site that allows visitors to post their own rewrites of her famous "Truism" statements. Collaborating with the public sounds great in theory, but what if someone alters Holzer's "PEOPLE WHO DON'T WORK WITH THEIR HANDS ARE PARASITES" to read "JEWS AND BLACKS ARE PARASITES" or "MONICA LEWINSKY'S HOME PHONE NUMBER IS 202 357-1597"? Napier's Web Shredder requires even less intervention by a visitor to raise legal eyebrows: point the Shredder to a Web address--www.microsoft.com, say--and you'll see a delightfully jumbled version of Microsoft's home page, with HTML code where the images should be and vice versa. But if you're the one who chose Microsoft's logo to mangle, who's to say that you, rather than Napier, shouldn't be liable for the distortion? In theory, Holzer and Napier can hide behind the moral shield of the First Amendment, since they have essentially created an alternative public space for others to express views or target enemies.

What, then, if there is no other human agent involved besides the artist? Consider Maciej Wisniewski's Jackpot, which randomly calls up three different Web pages in adjacent frames every time you push the virtual "play" button. Should elementary schools filter out any site with a random URL generator, because of the chance it might load www.bestiality.com by accident? Of course, the random mechanism underlying works like Jackpot merely makes it possible that a viewer might be exposed to objectionable material. But what if a work is *guaranteed* to produce something offensive if you wait long enough? That's the case for John Simon's Every Icon, a Java program that automatically generates every possible black-and-white image that can be described by a 32 x 32-pixel grid. The equation responsible for these countless iterations--something like grid(i,j) = integer((t mod 2^(i+32j+1)) / 2^(i+32j))--looks harmless enough. Yet this equation, if left to its own devices for an unlimited amount of time, will exhaustively inventory every possible miniature black-and-white image, including Bill Gates wearing a Ku Klux Klan hood, a diagram for how to make a pipe bomb, and the Virgin Mary covered with elephant dung (not to mention the feces of every other earthbound or extraterrestrial pachyderm). To Albert Einstein, E=mc^2 looked not only innocent, but beautiful. To J. Robert Oppenheimer, watching the mushroom cloud billowing across the sands of Alamogordo, the equation left a decidedly more sinister impression. ("I am become Death, the shatterer of worlds," he is supposed to have uttered.)
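Simon's piece itself runs as a Java applet; what follows is only a minimal Python sketch of the bit-extraction equation quoted above, showing how a single integer counter t already contains every possible 32 x 32 icon--each pixel is just one binary digit of t.

```python
def every_icon(t):
    """Decode the counter t into a 32 x 32 black-and-white grid.

    Implements the equation quoted in the text:
        grid(i,j) = integer((t mod 2^(i+32j+1)) / 2^(i+32j))
    which simply reads off bit number (i + 32j) of t.
    Returns a list of 32 rows (j) of 32 pixels (i), each 0 or 1.
    """
    return [[(t % 2 ** (i + 32 * j + 1)) // 2 ** (i + 32 * j)
             for i in range(32)]
            for j in range(32)]
```

Counting t upward from zero thus enumerates all 2^1024 icons: t = 0 is the blank grid, t = 1 turns on the single top-left pixel, and so on--which is why, given unlimited time, every describable image must eventually appear.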

It's hard to imagine that the mere act of solving an equation could be socially irresponsible--that a malevolent force could be lurking among innocent-looking plus- and minus-signs waiting to be released, as the ancient Gnostics believed of their symbols. Then again, it's hard to look at photos of Eric Harris and Dylan Klebold and imagine those fresh-faced Littleton teenagers pulling the trigger on their defenseless classmates. And lest we pretend that code is inherently less culpable than flesh, let's remember that some creatures are composed of both. Medical ethics has already debated the question of whether to destroy the last remaining polio virus or to keep a specimen in a test tube for future research. Now that we have a genetic sequencer capable of building a virus's RNA nucleotide by nucleotide, epidemiologists could simply store the virus's code in a database rather than %in vitro% on a shelf. But then some ethicists would probably call for the eradication of the database itself--perhaps for the eradication of all databases containing potentially harmful codes, whether they corresponded to biological or computer viruses.

It's easy to extrapolate this line of reasoning to a nightmarish future in which recursive functions and random-number generators are outlawed, where a math professor who writes the wrong equation on the chalkboard might meet a similar fate to the follower of Pythagoras whose ship mysteriously sank after he discovered the existence of irrational numbers. However, any attempt to classify mathematical equations as good or bad in advance of solving them is doomed to fail, for there exist entire classes of equations whose solutions cannot be predicted in advance. These equations require a computer to chip away at the problem bit by bit--quite literally--until the solutions are gradually revealed. A well-known example of such an equation is z[n+1] = z[n]^2 + c, whose solution, the Mandelbrot set, is a fantastically intricate pattern of nested spirals and fractal boundaries. It's no accident that this solution can only be glimpsed as a printout from a computer screen. Humans cannot possibly test every approach to a given problem, but computers can. That's why the IP packets that make up a given Web page may travel to you via Atlanta the first time you load the URL and via Helsinki the second. It's why chemical manufacturers like Symyx are researching matrices that randomly combine sample chemicals as a method for producing thousands of new compounds %en masse%.
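The unpredictability is easy to demonstrate: the only way to learn whether a given point c belongs to the Mandelbrot set is to iterate the equation and watch what happens. A minimal Python sketch (using the conventional escape test, with an arbitrary iteration cap of 100):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z^2 + c starting from z = 0.

    If |z| ever exceeds 2, the orbit is guaranteed to escape to
    infinity, so c lies outside the Mandelbrot set. If it survives
    max_iter steps, we provisionally count c as inside -- there is
    no shortcut that decides membership without iterating.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True
```

No closed-form test exists for an arbitrary c; one must "chip away at the problem bit by bit," exactly as the essay says, which is why the set's fractal boundary was invisible until computers drew it.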

Of course, those who exploit digital technology's power to explore every variation on a given theme must realize in advance that the range of variations will be beyond their control. Does this mean we should hold Alan Kay or Tim Berners-Lee morally culpable when first-graders stumble across neo-Nazi Web sites? Yes, in a sense we should--but no more than Charles Darwin or Friedrich Nietzsche or Richard Wagner, each of whom had an indirect influence on the promulgation of Nazi ideology. And indirectly opening an entire field of inquiry is a lot different from committing a conscious act of "information abuse," such as secretly tracking your employees' passwords or e-mailing Muammar Qaddafi the formula for Sarin nerve gas. Ultimately we must weigh the social cost of algorithmic liability against the enormous benefits that its applications, from chaos theory to the World Wide Web, have brought us. In nearly every case, I believe, the advantages will outweigh the disadvantages. To outlaw research just because it explores the unknown would be to reduce science to dogma and art to decor--not to mention diminish significantly what it means to be human.

Web projects by artists mentioned in the text can be found at www.irational.org (Heath Bunting), www.rtmark.com (®™ark), adaweb.walker.org/project/holzer/cgi/pcb.cgi (Jenny Holzer), www.potatoland.org (Mark Napier), adaweb.walker.org/context/jackpot (Maciej Wisniewski), and www.numeral.com (John Simon).