
Be civil, asshole

“The reason the battles are so bitter,” someone once said about academia, “is because the stakes are so low.” It was hard not to be reminded of that famous quote when reading about Currygate and the attendant controversy over who played what role in the invention of podcasting. At least, I thought, this brouhaha will mark the pinnacle of Web 2.0 farce: Aging MTV star anonymously rewrites Wikipedia podcast entry to give himself more prominence in the community-written history of the creation of the latest overhyped online medium. What could possibly top that?

I was wrong, of course. Just days later, a new peak was conquered, in Paris at the Les Blogs conference. Mena Trott, cofounder of blog-software-maker Six Apart, gave a speech chiding bloggers for their nastiness. “Can we as bloggers be more civil?” she asked, echoing the great California philosopher Rodney King. “We need to create an environment where people feel welcomed.” To which audience member Ben Metcalfe, writing on the conference’s real-time, streaming message board, responded: “Bullshit.” Trott, still at the podium, then demanded that Metcalfe stand up and, when he obliged, called him “an asshole.”

In a perfect coda to the event, one of the stars of Currygate, Dave Winer, waded into this affair as well, calling Metcalfe “a coward” in a comment on Metcalfe’s blog and questioning the poor guy’s manhood: “If you were the tough guy you said you were, you would have stood up to Mena and said it was bullshit to her face.”

Utopias are great – until people start moving in.

Jellybeans for breakfast

When my daughter was a little girl, one of her favorite books was Jellybeans for Breakfast. (Holy crap. I just checked Amazon, and used copies are going for hundreds of bucks!) It’s the story of a couple of cute tykes who fantasize about all the fun stuff they’d do if they were free from their parents and their teachers and all the usual everyday constraints. They’d ride their bikes to the moon. They’d go barefoot all the time. They’d live in a treehouse in the woods. And they’d eat jellybeans for breakfast.

Yesterday, Dan Farber wrote a stirring defense of blogging, illustrated by a picture of a statue of Socrates. “For the most part,” he said, “self assembling communities of bloggers hold a kind of virtual Socratic court, sorting out the issues of the day in a public forum, open to anyone, including spammers.” After discussing some technologies for organizing the blogosphere, he concluded:

For a journalist, technologist, politician or anyone with a pulse and who doesn’t know everything, blogs matter. Every morning I can wake up to lots of IQ ruminating, fulminating, arguing, evangelizing and even dispassionately reporting on the latest happenings in the areas that interest me, people from every corner of the globe. That’s certainly preferable to the old world and worth putting up with what comes along with putting the means of production in the hands of anyone with a connection to the Net.

That’s one way of looking at it, and most of what Farber says is true. I don’t think it’s the whole story, though. The blogosphere’s a seductive place – it’s easy to get caught up in it – and there are lots of interesting thoughts and opinions bouncing around amid the general clatter. But does it really provide a good way of becoming informed? Experiencing the blogosphere feels a lot like intellectual hydroplaning – skimming along the surface of many ideas, rarely going deep. It’s impressionistic, not contemplative. Fun? Sure. Invigorating? Absolutely. Socratic? I’m not convinced. Preferable to the old world? It’s nice to think so.

For all the self-important talk about social networks, couldn’t a case be made that the blogosphere, and the internet in general, is basically an anti-social place, a fantasy of community crowded with isolated egos pretending to connect? Sometimes, it seems like we’re all climbing up into our own little treehouses and eating jellybeans for breakfast.

Your brain on Google

There’s a new book out called The Google Story, subtitled “Inside the Hottest Business, Media and Technology Success of Our Time.” I haven’t read it, but I did read a review in this morning’s New York Times. The reviewer describes a passage that comes at the end of the book:

Sergey Brin, one of the search engine’s founders, is marveling, as he and his co-founder, Larry Page, are wont to do, about their product’s awesome computational powers. Having hatched a plan to download the world’s libraries and begun a research effort aimed at cataloging people’s genes, Mr. Brin hungers, with the boundless appetite of a man who has obtained great success at a tender age, for the one place Google has yet to directly penetrate – your mind. “Why not improve the brain?” he muses. “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”

Visionary? Scary? Cute? Hey, give a kid a Fabulous Money Printing Machine, and he’s bound to get a little excited.

What struck me, though, is how Brin’s words echo something that a Google engineer said to technology historian George Dyson when he recently visited the company’s headquarters: “We are not scanning all those books to be read by people. We are scanning them to be read by an AI.” I wasn’t quite sure when I first read that quote how serious the engineer was being. Now, I’m sure. Forget the read-write web; the Google Brain Plug-In promises the read-write mind.

The theme that computers can help bring human beings to a more perfect state is a common one in writings on artificial intelligence, as David Noble documents in his book The Religion of Technology. Here’s AI pioneer Earl Cox: “Technology will soon enable human beings to turn into something else altogether [and] escape the human condition … Humans may be able to transfer their minds into the new cybersystems and join the cybercivilization … We will download our minds into vessels created by our machine children and, with them, explore the universe …”

Here’s computer guru Danny Hillis explaining the underlying philosophy more explicitly:

“We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture. And those two things have coevolved together, because they helped each other. But they’re fundamentally different things. What’s valuable about us, what’s good about humans, is the idea thing. It’s not the animal thing … I guess I’m not overly perturbed by the prospect that there might be something better than us that might replace us … We’ve got a lot of bugs, sorts of bugs left over history back from when we were animals.”

As I described in The Amorality of Web 2.0, this ethic is alive and well today, and clearly it’s held not only by the internet’s philosopher class but by those who are actually writing the code that, more and more, guides how we live, interact and, yes, think.

Plug me in, Sergey. I’m ready to be debugged.

The MySpace (bottle) rocket

Friendster’s fading. Flickr’s feeling tired. But MySpace is rocking, Facebook’s booming and TagWorld’s launching.

It’s clear that community sites can have a lot of appeal, particularly to the young. MySpace, for instance, logged nearly 12 billion page views last month – that’s more than eBay – according to Business Week. What’s less clear is how long the appeal will last. Will those who flock to community sites when they’re fashionable hang around indefinitely? Or will they stay only until a hipper joint opens up down the (virtual) street?

My guess is that online hot spots, like their real-world counterparts, will go in and out of fashion fairly quickly – and that those betting on their staying power will be disappointed. One reason is simply the fickleness of the young; as soon as a place gets too popular (and the bald-headed guys with backwards baseball caps start showing up), the trendsetters head for the exits, and the crowd soon follows.

But another, more subtle force may also be at work. On the surface, it would seem that the more you invest in a community site – designing a home page, uploading photos, tagging everything that moves – the harder it would be to leave. After all, if you go somewhere else, you’ll have to start all over. But maybe it’s just the opposite. Maybe what’s really fun about these sites is the initial act of exploring them, putting your mark on them, checking out the marks made by others, spreading the word to friends, and so on. Once you’ve done that, maybe you start to get bored and begin looking around for a new diversion – a different place to explore and set up temporary quarters. Community sites may be like games: once they become familiar, they lose their appeal. You want to start fresh.

When it comes to Web 2.0 communities, in other words, familiarity may breed not loyalty but contempt. As we’re seeing with beleaguered Friendster, their trajectories may follow the paths of bottle rockets: up fast, down fast.

If I were Rupert Murdoch, whose News Corp. bought MySpace a couple of months ago, I wouldn’t just be investing in expanding the MySpace property. I’d be building (or buying) the site that’s going to displace MySpace as the in-place. It’s fine to pitch a business to a capricious clientele; just don’t expect stability.

Wireless 1.0

As anybody who’s read my work knows, I’m fascinated by the utopianism that springs up whenever a major new technology comes along. I recently picked up a collection of essays on this theme, called Imagining Tomorrow, which was published in 1986 by the MIT Press. One of the essays, by Susan J. Douglas, looks at the excitement set off by Marconi’s introduction of radio – the “wireless telegraph” – to the American public in 1899. “Wireless held a special place in the American imagination precisely because it married idealism and adventure with science,” she writes.

The invention stirred dreams of a more perfect world, expressed in language that won’t sound unfamiliar to today’s readers:

Popular Science Monthly observed: “The nerves of the whole world are, so to speak, being bound together, so that a touch in one country is transmitted instantly to a far-distant one.” Implicit in this organic metaphor was the belief that a world so physically connected would become a spiritual whole with common interests and goals. The New York Times added: “Nothing so fosters and promotes a mutual understanding and a community of sentiment and interests as cheap, speedy and convenient communication.” Articles suggested that this technology could make men more rational; with better communications available, misunderstandings could be avoided. These visions suggested that machines, by themselves, could change history; the right invention could help people overcome human foibles and weaknesses.

The Atlantic Monthly even published a sonnet titled “Wireless Telegraphy” that ended with these lines:

Somewhere beyond the league-long silences,

Somewhere across the spaces of the years,

A heart will thrill to thee, a voice will bless,

Love will awake and life be perfected!

The rise of wireless also set off a popular movement to democratize media, as hundreds of thousands of “amateur operators” took to the airwaves. It was the original blogosphere. “On every night after dinner,” wrote Francis Collins in the 1912 book Wireless Man, “the entire country becomes a vast whispering gallery.” The amateurs, Douglas reports, “claimed to be surrogates for ‘the people.’”

But it didn’t last. By the 1920s, radio had become “firmly embedded in a corporate grid.” People happily went back to being passive consumers: “In the 1920s there was little mention of world peace or of anyone’s ability to track down a long-lost friend or relative halfway around the world. In fact, there were not many thousands of message senders, only a few … Thus, through radio, Americans would not transcend the present or circumvent corporate networks. In fact they would be more closely tied to both.”

The amorality of Web 2.0

This post, along with seventy-eight others, is collected in the book Utopia Is Creepy.

From the start, the World Wide Web has been a vessel of quasi-religious longing. And why not? For those seeking to transcend the physical world, the Web presents a readymade Promised Land. On the Internet, we’re all bodiless, symbols speaking to symbols in symbols. The early texts of Web metaphysics, many written by thinkers associated with or influenced by the post-60s New Age movement, are rich with a sense of impending spiritual release; they describe the passage into the cyber world as a process of personal and communal unshackling, a journey that frees us from traditional constraints on our intelligence, our communities, our meager physical selves. We become free-floating netizens in a more enlightened, almost angelic, realm.

But as the Web matured during the late 1990s, the dreams of a digital awakening went unfulfilled. The Net turned out to be more about commerce than consciousness, more a mall than a commune. And when the new millennium arrived, it brought not a new age but a dispiritingly commonplace popping of a bubble of earthly greed. Somewhere along the way, the moneychangers had taken over the temple. The Internet had transformed many things, but it had not transformed us. We were the same as ever.

The New New Age

But the yearning for a higher consciousness didn’t burst with the bubble. Web 1.0 may have turned out to be spiritual vaporware, but now we have the hyper-hyped upgrade: Web 2.0. In a profile of Internet savant Tim O’Reilly in the current issue of Wired, Steven Levy writes that “the idea of collective consciousness is becoming manifest in the Internet.” He quotes O’Reilly: “The Internet today is so much an echo of what we were talking about at [New Age HQ] Esalen in the ’70s – except we didn’t know it would be technology-mediated.” Levy then asks, “Could it be that the Internet – or what O’Reilly calls Web 2.0 – is really the successor to the human potential movement?”

Levy’s article appears in the afterglow of Kevin Kelly’s sweeping “We Are the Web” in Wired’s August issue. Kelly, erstwhile prophet of the Long Boom, surveys the development of the World Wide Web, from the Netscape IPO ten years ago, and concludes that it has become a “magic window” that provides a “spookily godlike” perspective on existence. “I doubt angels have a better view of humanity,” he writes.

But that’s only the beginning. In the future, according to Kelly, the Web will grant us not only the vision of gods but also their power. The Web is becoming “the OS for a megacomputer that encompasses the Internet, all its services, all peripheral chips and affiliated devices from scanners to satellites, and the billions of human minds entangled in this global network. This gargantuan Machine already exists in a primitive form. In the coming decade, it will evolve into an integral extension not only of our senses and bodies but our minds … We will live inside this thing.”

The revelation continues:

There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born.

You and I are alive at this moment.

We should marvel, but people alive at such times usually don’t. Every few centuries, the steady march of change meets a discontinuity, and history hinges on that moment. We look back on those pivotal eras and wonder what it would have been like to be alive then. Confucius, Zoroaster, Buddha, and the latter Jewish patriarchs lived in the same historical era, an inflection point known as the axial age of religion. Few world religions were born after this time. Similarly, the great personalities converging upon the American Revolution and the geniuses who commingled during the invention of modern science in the 17th century mark additional axial phases in the short history of our civilization.

Three thousand years from now, when keen minds review the past, I believe that our ancient time, here at the cusp of the third millennium, will be seen as another such era. In the years roughly coincidental with the Netscape IPO, humans began animating inert objects with tiny slivers of intelligence, connecting them into a global field, and linking their own minds into a single thing. This will be recognized as the largest, most complex, and most surprising event on the planet. Weaving nerves out of glass and radio waves, our species began wiring up all regions, all processes, all facts and notions into a grand network. From this embryonic neural net was born a collaborative interface for our civilization, a sensing, cognitive device with power that exceeded any previous invention. The Machine provided a new way of thinking (perfect search, total recall) and a new mind for an old species. It was the Beginning.

This isn’t the language of exposition. It’s the language of rapture.

The Cult of the Amateur

Now, lest you dismiss me as a mere cynic, if not a fallen angel, let me make clear that I’m all for seeking transcendence, whether it’s by going to church or living in a hut in the woods or sitting at the feet of the Maharishi or gazing into the glittering pixels of an LCD screen. One gathers one’s manna where one finds it. And if there’s a higher consciousness to be found, then by all means let’s get elevated. My problem is this: When we view the Web in religious terms, when we imbue it with our personal yearning for transcendence, we can no longer see it objectively. By necessity, we have to look at the Internet as a moral force, not as a simple collection of inanimate hardware and software. No decent person wants to worship an amoral conglomeration of technology.

And so all the things that Web 2.0 represents – participation, collectivism, virtual communities, amateurism – become unarguably good things, things to be nurtured and applauded, emblems of progress toward a more enlightened state. But is it really so? Is there a counterargument to be made? Might, on balance, the practical effect of Web 2.0 on society and culture be bad, not good? To see Web 2.0 as a moral force is to turn a deaf ear to such questions.

Let me bring the discussion down to a brass tack. If you read anything about Web 2.0, you’ll inevitably find praise heaped upon Wikipedia as a glorious manifestation of “the age of participation.” Wikipedia is an open-source encyclopedia; anyone who wants to contribute can add an entry or edit an existing one. O’Reilly, in a new essay on Web 2.0, says that Wikipedia marks “a profound change in the dynamics of content creation” – a leap beyond the Web 1.0 model of Britannica Online. To Kevin Kelly, Wikipedia shows how the Web is allowing us to pool our individual brains into a great collective mind. It’s a harbinger of the Machine.

In theory, Wikipedia is a beautiful thing – it has to be a beautiful thing if the Web is leading us to a higher consciousness. In reality, though, Wikipedia isn’t very good at all. Certainly, it’s useful – I regularly consult it to get a quick gloss on a subject. But at a factual level it’s unreliable, and the writing is often appalling. I wouldn’t depend on it as a source, and I certainly wouldn’t recommend it to a student writing a research paper.

Take, for instance, this section from Wikipedia’s biography of Bill Gates, excerpted verbatim:

Gates married Melinda French on January 1, 1994. They have three children, Jennifer Katharine Gates (born April 26, 1996), Rory John Gates (born May 23, 1999) and Phoebe Adele Gates (born September 14, 2002).

In 1994, Gates acquired the Codex Leicester, a collection of writings by Leonardo da Vinci; as of 2003 it was on display at the Seattle Art Museum.

In 1997, Gates was the victim of a bizarre extortion plot by Chicago resident Adam Quinn Pletcher. Gates testified at the subsequent trial. Pletcher was convicted and sentenced in July 1998 to six years in prison. In February 1998 Gates was attacked by Noël Godin with a cream pie. In July 2005, he solicited the services of famed lawyer Hesham Foda.

According to Forbes, Gates contributed money to the 2004 presidential campaign of George W. Bush. According to the Center for Responsive Politics, Gates is cited as having contributed at least $33,335 to over 50 political campaigns during the 2004 election cycle.

Excuse me for stating the obvious, but this is garbage, an incoherent hodge-podge of dubious factoids (who the heck is “famed lawyer Hesham Foda”?) that adds up to something far less than the sum of its parts.

Here’s Wikipedia on Jane Fonda’s life, again excerpted verbatim:

Her nickname as a youth—Lady Jane—was one she reportedly disliked. She traveled to Communist Russia in 1964 and was impressed by the people, who welcomed her warmly as Henry’s daughter. In the mid-1960s she bought a farm outside of Paris, had it renovated and personally started a garden. She visited Andy Warhol’s Factory in 1966. About her 1971 Oscar win, her father Henry said: “How in hell would you like to have been in this business as long as I and have one of your kids win an Oscar before you do?” Jane was on the cover of Life magazine, March 29, 1968.

While early she had grown both distant from and critical of her father for much of her young life, in 1980, she bought the play “On Golden Pond” for the purpose of acting alongside her father—hoping he might win the Oscar that had eluded him throughout his career. He won, and when she accepted the Oscar on his behalf, she said it was “the happiest night of my life.” Director and first husband Roger Vadim once said about her: “Living with Jane was difficult in the beginning … she had so many, how do you say, ‘bachelor habits.’ Too much organization. Time is her enemy. She cannot relax. Always there is something to do.” Vadim also said, “There is also in Jane a basic wish to carry things to the limit.”

This is worse than bad, and it is, unfortunately, representative of the slipshod quality of much of Wikipedia. Remember, this emanation of collective intelligence is not just a couple of months old. It’s been around for nearly five years and has been worked over by many thousands of diligent contributors. At this point, it seems fair to ask exactly when the intelligence in “collective intelligence” will begin to manifest itself. When will the great Wikipedia get good? Or is “good” an old-fashioned concept that doesn’t apply to emergent phenomena like communal on-line encyclopedias?

The promoters of Web 2.0 venerate the amateur and distrust the professional. We see it in their unalloyed praise of Wikipedia, and we see it in their worship of open-source software and myriad other examples of democratic creativity. Perhaps nowhere, though, is their love of amateurism so apparent as in their promotion of blogging as an alternative to what they call “the mainstream media.” Here’s O’Reilly: “While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls ‘we, the media,’ a world in which ‘the former audience,’ not a few people in a back room, decides what’s important.”

I’m all for blogs and blogging. (I’m writing this, ain’t I?) But I’m not blind to the limitations and the flaws of the blogosphere – its superficiality, its emphasis on opinion over reporting, its echolalia, its tendency to reinforce rather than challenge ideological extremism and segregation. Now, all the same criticisms can (and should) be hurled at segments of the mainstream media. And yet, at its best, the mainstream media is able to do things that are different from – and, yes, more important than – what bloggers can do. Those despised “people in a back room” can fund in-depth reporting and research. They can underwrite projects that can take months or years to reach fruition – or that may fail altogether. They can hire and pay talented people who would not be able to survive as sole proprietors on the Internet. They can employ editors and proofreaders and other unsung protectors of quality work. They can place, with equal weight, opposing ideologies on the same page. Forced to choose between reading blogs and subscribing to, say, the New York Times, the Financial Times, the Atlantic, and the Economist, I will choose the latter. I will take the professionals over the amateurs.

But I don’t want to be forced to make that choice.

Scary Economics

And so, having gone on for so long, I at long last come to my point. The Internet is changing the economics of creative work – or, to put it more broadly, the economics of culture – and it’s doing it in a way that may well restrict rather than expand our choices. Wikipedia might be a pale shadow of the Britannica, but because it’s created by amateurs rather than professionals, it’s free. And free trumps quality all the time. So what happens to those poor saps who write encyclopedias for a living? They wither and die. The same thing happens when blogs and other free on-line content go up against old-fashioned newspapers and magazines. Of course the mainstream media sees the blogosphere as a competitor. It is a competitor. And, given the economics of the competition, it may well turn out to be a superior competitor. The layoffs we’ve recently seen at major newspapers may just be the beginning, and those layoffs should be cause not for self-satisfied snickering but for despair. Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can’t imagine anything more frightening.

In “We Are the Web,” Kelly writes that “because of the ease of creation and dissemination, online culture is the culture.” I hope he’s wrong, but I fear he’s right – or will come to be right.

Like it or not, Web 2.0, like Web 1.0, is amoral. It’s a set of technologies – a machine, not a Machine – that alters the forms and economics of production and consumption. It doesn’t care whether its consequences are good or bad. It doesn’t care whether it brings us to a higher consciousness or a lower one. It doesn’t care whether it burnishes our culture or dulls it. It doesn’t care whether it leads us into a golden age or a dark one. So let’s can the millennialist rhetoric and see the thing for what it is, not what we wish it would be.