Monthly Archives: May 2010

Experiments in delinkification

A few years back, my friend Steve Gillmor, the long-time technology writer and blogger, went on a crusade against the hyperlink. He stopped putting links into his posts and other online writings. I could never quite understand his motivation, and the whole effort struck me as quixotic and silly. I mean, wasn’t the hyperlink the formative technology of the entire World Wide Web? Wasn’t the Web a hypermedia system, for crying out loud?

My view has changed. I’m still not sure what Gillmor was up to, but I now have a great deal of sympathy for his crusade. In fact, I’m beginning to think I should have joined up instead of mocking it.

Links are wonderful conveniences, as we all know (from clicking on them compulsively day in and day out). But they’re also distractions. Sometimes, they’re big distractions – we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head. Even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters. People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form. The more links in a piece of writing, the bigger the hit on comprehension.

The link is, in a way, a technologically advanced form of a footnote. It’s also, distraction-wise, a more violent form of a footnote. Where a footnote gives your brain a gentle nudge, the link gives it a yank. What’s good about a link – its propulsive force – is also what’s bad about it.

I don’t want to overstate the cognitive penalty produced by the hyperlink (or understate the link’s allure and usefulness), but the penalty seems to be real, and we should be aware of it. In The Shallows, I examine the hyperlink as just one element among many – including multimedia, interruptions, multitasking, jerky eye movements, divided attention, extraneous decision making, even social anxiety – that tend to promote hurried, distracted, and superficial thinking online. To understand the effects of the Web on our minds, you have to consider the cumulative effects of all these features rather than just the effects of any one individually.

The book, I’m pleased to say, has already prompted a couple of experiments in what I’ll call delinkification. Laura Miller, in her Salon review of The Shallows, put all her links at the end of the piece rather than sprinkling them through the text. She asked readers to comment on what they thought of the format. As with Gillmor’s earlier experiments, Miller’s seemed a little silly at first. The Economist writer Tom Standage tweeted a chortle: “Ho Ho.” But if you read through the (many) comments her review provoked, you will hear a chorus of approval for removing links from text. Here’s a typical response:

Collecting all the URLs into a single block of text at the end of the article works very well. It illustrates Carr’s point, and it improves the experience of reading the article. It also shows more respect for the reader – it assumes that we’ve actually thought about what we’ve read. (Which is not to say that all readers merit that level of respect.)

Now, Neuroethics at the Core, the fine blog published by the National Core for Neuroethics at the University of British Columbia, is carrying out a similar, informal experiment. As Peter Reiner explains, at the end of a lengthy, linkless post:

So here at the Core we are embarking upon a small experiment. For the next little while, we will try not to distract you from reading our blog posts in their entirety by writing them without hyperlinks in the main body of the text. We will still refer you to relevant posts, papers, etc., of course, but we will do so at the end of the post. Oh, the horrors, you might say, but really it is not so bad. One of my favourite science writers, Olivia Judson, regularly writes lovely articles for the New York Times in which she cites the relevant literature at the end of her article, and rarely includes links. If you have not read her posts, I highly recommend them. It would be great if you could share your experience of reading sans hyperlinks. Do you find it irritating? Does it allow you to read an entire blog post without skipping off to some other corner of the internet? Do you jump to the bottom of the post to get at the links anyway? Feel free to let us know.

My own feeling, in reading these works, is that I much prefer the links at the bottom. I do find that the absence of links encourages more concentrated, calmer, and more enjoyable reading. Of course, I’m biased. Try it yourself. You may be surprised.
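If you want to run the experiment on your own writing, the mechanical part is easy to automate. Here’s a toy sketch in Python – it handles only simple inline Markdown links of the form [text](url), and the delinkify helper and its output format are my own invention, not Miller’s or Reiner’s:

```python
import re

# Toy delinkifier: strip inline Markdown links from the body of a post
# and collect the URLs in a numbered list at the end.
INLINE_LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def delinkify(text):
    urls = []

    def strip_link(match):
        urls.append(match.group(2))   # remember the URL
        return match.group(1)         # keep the anchor text in place

    body = INLINE_LINK.sub(strip_link, text)
    if not urls:
        return body
    notes = "\n".join(f"[{i}] {u}" for i, u in enumerate(urls, 1))
    return body + "\n\nLinks:\n" + notes

print(delinkify("Read [her review](https://example.com/review) first."))
# Read her review first.
#
# Links:
# [1] https://example.com/review
```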

And here, patient reader, are the links:

Salon review

Neuroethics at the Core post

Standage’s tweeted chortle

The Shallows site

UPDATE: Wow. This post really seems to have ticked off the Self-Appointed Defenders of Web Orthodoxy. Jay Rosen, the NYU journalism professor and ubiquitous web presence, even accused me of wanting to “unbuild the web.” Don’t worry, guys, no one’s going to take your links away. If you’d taken the time to read the post, you’d see that it is about some simple experiments (note headline) aimed at improving our understanding of the Net’s effects on attention, comprehension, and reading.

I don’t want to unbuild the web, but I do want to question it. Is that allowed, Jay?

UPDATE2: And now the king of the linkbaiters, Jeff Jarvis, accuses me of writing “that piece about links to get links.” Yes, Jeff, whenever I write a post with the craven intent of harvesting a lot of links I always make a point of publishing it on the morning of Memorial Day.

The Shallows excerpt, reviews

The new issue of Wired features an excerpt from my new book, The Shallows: What the Internet Is Doing to Our Brains. The excerpt draws on material from the chapter of the book entitled “The Juggler’s Brain,” in which I examine an array of research on how the Internet and networked computers are influencing our mental habits and altering the way we think. (For those of a scientific bent, I should note that the chapter itself, which is considerably longer than the excerpt, surveys many more studies than could be accommodated in the Wired piece.)

I previously listed blurbs for the book provided by early readers. Some other early reviews have also appeared. You can find excerpts on the reviews page of the book site.

Facebook’s identity lock-in

“You’re invisible now, you’ve got no secrets to conceal.” – Bob Dylan

Facebook CEO Mark Zuckerberg has a knack for making statements that are at once sweeping and silly, but he outdoes himself with this one:

You have one identity … Having two identities for yourself is an example of a lack of integrity.

This is, at the obvious level, a clever and cynical ploy to recast the debate about Facebook’s ongoing efforts to chip away at its members’ privacy safeguards. Facebook, Zuckerberg implies, isn’t compromising your privacy by selling personal data to corporations; it is making you a better person. By forcing you, through its imposition of what it calls “radical transparency,” to have “one identity,” it is also imposing integrity on you. We should all be grateful that we have Zuck to act as our personal character trainer, I guess.

Zuckerberg’s self-servingly cavalier attitude toward other people’s privacy has provoked a firestorm of criticism over the last couple of weeks. Whether or not a critical mass of Facebook members actually care enough about online privacy to force Facebook to fundamentally shift its policies remains to be seen. Up to now, as I’ve pointed out in the past, Facebook’s strategy for turning identity into a commodity has consisted of taking two steps forward and then, when confronted with public resistance, apologizing profusely before taking one step back. I suspect that’s what will happen again – and again, and again.

But that’s not the subject of this post. Zuckerberg’s “one identity” proclamation reminded me of something I heard Jaron Lanier say in a recent lecture. He was talking about the way that Facebook and other social networking sites serve as a permanent public record of our lives. That’s great in a lot of ways – it gives us new ways to express ourselves, socialize, cement and maintain friendships. But there’s a dark side, too. Lanier pointed to the example of Bob Dylan. After growing up, as Robert Zimmerman, in Hibbing, Minnesota, Dylan shucked off his youthful identity, like a caterpillar in a chrysalis, and turned himself into the mysterious young troubadour Bob Dylan in New York City. It was a great act of self-reinvention, a necessary first step in a career of enormous artistic achievement. Indeed, it’s impossible to imagine the kid Zimmerman becoming the artist Dylan without that clean break from the past, without, as Zuckerberg would see it, the exercise of a profound lack of “integrity.”

Imagine, Lanier said, a young Zimmerman trying to turn himself into Dylan today. Forget it. He would be trailing his online identity – his “one identity” – all the way from Hibbing to Manhattan. “There’s that goofy Zimmerman kid from Minnesota,” would be the recurring word on the street in Greenwich Village. The caterpillar Zimmerman, locked into his early identity by myriad indelible photos, messages, profiles, friends, and “likes” plastered across the Web, would remain the caterpillar Zimmerman. Forever.

More insidious than Facebook’s data lock-in is its identity lock-in. The invisibility that Dylan describes at the end of “Like a Rolling Stone,” where you’re free of your secrets, of your past life, is a necessary precondition for personal reinvention. As Robert Zimmerman traveled from Hibbing to New York, he first became invisible – and then he became Bob Dylan. In the future, such acts of transformation may well become impossible. Facebook saddles the young with what Zuckerberg calls “one identity.” You can never escape your past. The frontier of invisibility is replaced by the cage of transparency.

Long player: super deluxe limited-edition reissue

A correspondent, noting the imminent re-re-re-release, in several analogue and digital formats at escalating price points, of the Rolling Stones masterwork Exile on Main Street, suggests that I issue my own re-release of my 2007 post Long Player, which was inspired, in part, by the Stones record and which, as it happens, I wrote in the cellar of a villa in the south of France. I was thinking of hiring a crackerjack blogsman to remix the post – Doc Searls, perhaps – but in rereading it I realized that the original mix has a certain distinctive quality that, whatever its flaws, captures the spirit of the heady times in which it was composed. Get down:

I started reading David Weinberger’s new book, Everything Is Miscellaneous, this weekend. I’d been looking forward to it. Weinberger has a supple, curious mind and an easy way with words. Even though I rarely agree with his conclusions, he gets the brain moving – and that’s what matters. But I have to say I didn’t get very far in the book, at least not this weekend. In fact, I only reached the bottom of page nine, at which point I crashed into this passage about music:

For decades we’ve been buying albums. We thought it was for artistic reasons, but it was really because the economics of the physical world required it: Bundling songs into long-playing albums lowered the production, marketing, and distribution costs because there were fewer records to make, ship, shelve, categorize, alphabetize, and inventory. As soon as music went digital, we learned that the natural unit of music is the track. Thus was iTunes born, a miscellaneous pile of 3.5 million songs from a thousand record labels. Anyone can offer music there without first having to get the permission of a record executive.

“… the natural unit of music is the track”? Well, roll over, Beethoven, and tell Tchaikovsky the news.

There’s a lot going on in that brief passage, and almost all of it is wrong. Weinberger does do a good job, though, of condensing into a few sentences what might be called the liberation mythology of the internet. This mythology is founded on a sweeping historical revisionism that conjures up an imaginary predigital world – a world of profound physical and economic constraints – from which the web is now liberating us. We were enslaved, and now we are saved. In a bizarrely fanciful twist, the digital world is presented as a “natural” counterpoint to the supposed artificiality of the physical world.

I set the book aside and fell to pondering. Actually, the first thing I did was to sweep the junk off the dust cover of my sadly neglected turntable and pull out an example of one of those old, maligned “long-playing albums” from my shrunken collection of cardboard-sheathed LPs (arrayed alphabetically, by artist, on a shelf in a cabinet). I chose Exile on Main Street. More particularly, I chose the unnatural bundle of tracks to be found on side three of Exile on Main Street. Carefully holding the thin black slab of scratched, slightly warped, but still serviceable vinyl by its edges – you won’t, I trust, begrudge me a pang of nostalgia for the outdated physical world – I eased it onto the spindle and set the platter to spinning at a steady thirty-three-and-a-third revolutions per minute.

Now, if you’re not familiar with Exile on Main Street, or if you know it only in a debauched digital form – whether as a single-sided plastic CD (yuk) or as a pile of miscellaneous undersampled iTunes tracks (yuk squared) – let me explain that side three is the strangest yet the most crucial of the four sides of the Stones’ double-record masterpiece. The side begins, literally, in happiness – or “Happy”-ness – and ends, figuratively, in a dark night of the soul. (I realize that, today, it’s hard to imagine Mick Jagger having a dark night of the soul, but at the dawn of the gruesome seventies, with the wounds of Brian Jones’s death, Marianne Faithfull’s overdose, and Altamont’s hippie apocalypse still fresh in his psyche, Mick was, I imagine, suffering from an existential pain that neither a needle and a spoon nor even another girl could fully take away.)

But it’s the middle tracks of the platter that seem most pertinent to me in thinking about Weinberger’s argument. Between Keith’s ecstatic, grinning-at-death “Happy” and Mick’s desperate, shut-the-lights “Let It Loose” come three offhand, wasted-in-the-basement songs – “Turd on the Run,” “Ventilator Blues,” and “Just Wanna See His Face” – that sound, in isolation, like throwaways. If you unbundled Exile and tossed these tracks onto the miscellaneous iTunes pile, they’d sink, probably without a trace. I mean, who’s going to buy “Turd on the Run” as a standalone track? And yet, in the context of the album that is Exile on Main Street, the three songs achieve a remarkable, tortured eloquence. They become necessary. They transcend their identity as tracks, and they become part of something larger. They become art.

Listening to Exile, or to any number of other long-playing bundles – The Velvet Underground & Nico, Revolver, Astral Weeks, Every Picture Tells a Story, Mott, Blood on the Tracks, Station to Station, London Calling, Get Happy!, Murmur, Tim (the list, thankfully, goes on and on) – I could almost convince myself that the 20-minute-or-so side of an LP is not just some ungainly byproduct of the economics of the physical world but rather the “natural unit of music.” As “natural” a unit, anyway, as the individual track.

The long-playing phonograph record, twelve inches in diameter and spinning at a lazy 33 rpm, is, even today, a fairly recent technological development. (In fact, recorded music in general is a fairly recent technological development.) After a few failed attempts to produce a long-player in the early thirties, the modern LP was introduced in 1948 by a record executive named Edward Wallerstein, then the president of Columbia Records, a division of William Paley’s giant Columbia Broadcasting System. At the time, the dominant phonograph record had for about a half century been the 78 – a fragile, ten-inch shellac disk that spun at seventy-eight rpm and could hold only about three or four minutes of music on a side.

Wallerstein, being a record executive, invented the long-player as a way to “bundle” a lot of tracks onto a single disk in order to enhance the economics of the business and force customers to buy a bunch of songs that they didn’t want to get a track or two that they did want. Right? Wrong. Wallerstein in fact invented the long-player because he wanted a format that would do justice to performances of classical works, which, needless to say, didn’t lend themselves all that well to three-minute snippets.

Before his death in 1970, Wallerstein recalled how he pushed a team of talented Columbia engineers to develop the modern record album (as well as a practical system for playing it):

Every two months there were meetings of the Columbia Records people and Bill Paley at CBS. [Jim] Hunter, Columbia’s production director, and I were always there, and the engineering team would present anything that might have developed. Toward the end of 1946, the engineers let Adrian Murphy, who was their technical contact man at CBS, know that they had something to demonstrate. It was a long-playing record that lasted seven or eight minutes, and I immediately said, “Well, that’s not a long-playing record.” They then got it to ten or twelve minutes, and that didn’t make it either. This went on for at least two years.

Mr. Paley, I think, got a little sore at me, because I kept saying, “That’s not a long-playing record,” and he asked, “Well, Ted, what in hell is a long-playing record?” I said, “Give me a week, and I’ll tell you.”

I timed I don’t know how many works in the classical repertory and came up with a figure of seventeen minutes to a side. This would enable about 90% of all classical music to be put on two sides of a record. The engineers went back to their laboratories. When we met in the fall of 1947 the team brought in the seventeen-minute record.
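Wallerstein’s method, note, was nothing fancier than percentile arithmetic: time a sample of works, then find the shortest side that lets the target share of them fit on the two faces of one disc. Here is a toy replay in Python – the timings below are invented stand-ins for his survey, not his data:

```python
import math

def side_minutes_for(durations, coverage=0.90):
    """Smallest per-side length L (in minutes) such that `coverage`
    of the works fit on the two sides of one disc (duration <= 2 * L)."""
    halves = sorted(d / 2 for d in durations)
    k = math.ceil(coverage * len(halves))  # how many works must fit
    return halves[k - 1]

works_minutes = [28, 31, 33, 24, 34, 30, 27, 22, 29, 45]  # invented timings
print(side_minutes_for(works_minutes))  # -> 17.0 with these timings
```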

The long-player was not, in other words, a commercial contrivance aimed at bundling together popular songs to the advantage of record companies and the disadvantage of consumers; it was a format specifically designed to provide people with a much better way to listen to recordings of classical works. In fact, in focusing on perfecting a medium for classical performances, Columbia actually sacrificed much of the pop market to its rival RCA, which at the time was developing a competing record format: the seven-inch, forty-five-revolutions-per-minute single. Recalls Wallerstein:

There was a long discussion as to whether we should move right in [to the market with the LP] or first do some development work on better equipment for playing these records or, most important, do some development work on a popular record to match these 12-inch classical discs. Up to now our thinking had been geared completely to the classical market rather than to the two- or three-minute pop disc market.

I was in favor of waiting a year or so to solve these problems and to improve the original product. We could have developed a 6- or 7-inch record and equipment to handle the various sizes for pops. But Paley felt that, since we had put $250,000 into the LP, it should be launched as it was. So we didn’t wait and in consequence lost the pops market to the RCA 45s.

A brief standards war ensued between the LP and the 45 – it was called “the battle of speeds” – which concluded, fortunately, with a technological compromise that allowed both to flourish. Record players were designed to accommodate both 33 rpm albums and 45 rpm singles (and, for a while, anyway, the old 78s as well). The 45 format allowed consumers to buy popular individual songs for a relatively low price, while the LP provided them with the option of buying longer works for a somewhat higher price. Of course, popular music soon moved onto LPs, as musicians and record companies sought to maximize their sales and provide fans with more songs by their favorite artists. The introduction of the pop LP did not force customers to buy more songs than they wanted – they could still cherry-pick individual tracks by buying 45s. The LP expanded people’s choices, giving them more of the music they clamored for.

Indeed, in suggesting that the long-player resulted in a big pile of “natural” tracks being bundled together into artificial albums, Weinberger gets it precisely backwards. It was the arrival of the LP that set off the explosion in the number of popular music tracks available to buyers. It also set off a burst of incredible creativity in popular music, as bands, songwriters, and solo performers began to take advantage of the new, extended format, to turn the longer medium to their own artistic purposes. The result was a great flowering not only of wonderful singles, sold as 45s, but of carefully constructed sets of songs, sold as LPs. Was there also a lot of filler? Of course there was. When hasn’t there been?

Weinberger also gets it backwards in suggesting that the LP was a record industry ploy to constrain the supply of products – in order to have “fewer records to make, ship, shelve, categorize, alphabetize, and inventory.” The album format, combined with the single format, brought a huge increase in the number of records – and, in turn, in the outlets that sold them. It unleashed a flood of recorded music. It’s worth remembering that the major competitor to the record during this time was radio, which of course provided music for free. (The arrival of radio nearly killed off the recorded music industry, in fact.) The best way – the only way – for record companies to compete against radio was to increase the number of records they produced, to give customers far more choices than radio could send over the airwaves. The long-playing album, in sum, not only gave buyers many more products to choose from; it gave artists many more options for expressing themselves, to everyone’s benefit. Far from being a constraint on the market, the physical format of the long-player was a great spur to consumer choice and, even more important, to creativity. Who would unbundle Exile on Main Street or Blonde on Blonde or Tonight’s the Night – or, for that matter, Dirty Mind or Youth and Young Manhood or (Come On Feel the) Illinoise? Only a fool would.

And yet it is the wholesale unbundling of LPs into a “miscellaneous pile” of compressed digital song files that Weinberger would have us welcome as some kind of deliverance from decades of apparent servitude to the long-playing album. One doesn’t have to be an apologist for record executives – who in recent years have done a great job in proving their cynicism and stupidity – to recognize that Weinberger is warping history in an attempt to prove an ideological point. Will the new stress on discrete digital tracks bring a new flowering of creativity in music? I don’t know. Maybe we’ll get a pile of gems, or maybe we’ll get a pile of crap. Probably we’ll get a mix. But I do know that the development of the physical long-playing album, together with the physical single, was a development that we should all be grateful for. We probably shouldn’t rush out to dance on the album’s grave.

As for the individual track being the “natural unit of music,” that’s a fantasy. Natural’s not in it.

Not addiction; dependency

This week’s New Yorker features an article, by Julia Ioffe, on Chatroulette, the quirky video chat service that at this point seems mainly of interest to pervs and reporters. Ioffe suggests that, in addition to all the wank artists and show-me-your-tits doofuses, expeditions into “the Chatroulette vortex” also reveal “a lot of joy”:

There is, for example, the video of the dancing banana, crudely drawn on lined paper, exhorting people to “Dance or gtfo!” (Dance or get the fuck out.) The banana’s partners usually respond with wriggling delight.

Well, one gathers one’s joy where one can these days.

Much of Ioffe’s piece is devoted to a profile of Andrey Ternovskiy, the “shy and evasive” Russian teenager who was inspired to invent Chatroulette out of, he claims, a love for “exploring other cultures” that apparently developed during a brief stint selling tchotchkes to tourists in Moscow. “Like much of his generation,” Ioffe writes, “Ternovskiy has an online persona far more developed than his real one.” The young man started skipping school in his early teens, preferring to spend his days at his computer. “The last three years at school, I haven’t done anything,” he tells Ioffe. “I just can’t make myself. There’s so much interesting stuff in the world, and I have to sit there with textbooks?” Ioffe comments:

By “the world,” of course, Ternovskiy means the Internet, which is also where most of his friends are. His closest confidant is a Russian immigrant named Kirill Gura, who lives in Charleston, West Virginia. Every night for the past five years, Ternovskiy has turned on his computer, found Kirill on MSN Messenger, and talked to him until one of them fell asleep. “He’s a real friend,” Ternovskiy says … Ternovskiy says that he sees the computer as “one hundred percent my window into the world.” He doesn’t seek much else. “I always believed that computer might be that thing that I only need, that I only need that thing to survive,” he says. “It might replace everything.”

Ternovskiy’s case is, of course, an extreme one, but it’s also, whether we care to admit it or not, representative. The world of the screen hasn’t replaced everything, but, for most of us, whether we’re of Ternovskiy’s generation or not, it has replaced a lot. According to recent media surveys, the average American spends some 8.5 hours a day peering at a screen – TV, computer, or cell phone – and that number continues to rise as smartphone use explodes. We’ve reached a point, in other words, where it’s more likely than not that we’re looking into a screen at any given moment when we’re awake: 8.5 hours is more than half of a sixteen-hour waking day.

Last month, the University of Maryland’s International Center for Media & the Public Agenda released the results of an informal study of college students’ attitudes toward media. Two hundred students at the school were asked to refrain from using any electronic media for a day and to write about their experiences. The students, the researchers reported, “use literal terms of addiction to characterize their dependence on media.” By using the a-word – “addiction” – the researchers assured themselves of a burst of media attention. (If there’s one thing we’re addicted to these days, it’s the word “addiction.”) “College students are ‘addicted’ to social media and even experience withdrawal symptoms from it,” ran a typical headline. “According to a new study out of the University of Maryland, students are addicted to social media, and computers and smartphones deliver their drug,” began a story at the Huffington Post. Predictably, the overheated reports were quickly countered by a flood of counter-reports pointing out the silliness of confusing the language of addiction with addiction itself.

The use of the addiction metaphor gave everybody an easy way to discuss, and dismiss, the study without actually looking at the study’s results, which provided a fascinating look at how we live today. Here’s a brief, representative sampling of how students described the experience of going without their devices for just a few hours:

“Texting and IMing my friends gives me a constant feeling of comfort. When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable.”

“Not having a cell phone created a logistical problem. It was manageable for one day, but I cannot see how life would be possible without one.”

“My attempt at the gym without the ear pieces in my iPhone wasn’t the same; doing cardio listening to yourself breath really drains your stamina.”

“It is almost second nature to check my Facebook or email; it was very hard for my mind to tell my body not to go on the Internet.”

“I began to compare my amount of media usage to that of my friends. I realized that I don’t usually check or update Facebook or Twitter like a lot of my friends that have Blackberrys or iPhones. I did however realize that as soon as I get home from class it has become a natural instinct to grab my computer (not to do school work which is the sole reason my parents got me my computer!) but to check my email, Gmail, umd account mail, Facebook account, Twitter account, Skype, AIM, and ELMS: that’s six websites and four social networking sites. This in itself is a wake-up call! I was so surprised to think that I probably spend at least 1-2 hours on these sites alone BEFORE I even make it to attempting my homework and then continue checking these websites while doing my school work.”

“With classes, location, and other commitments it’s hard to meet with friends and have a conversation. Instant messaging, SMS, and Facebook are all ways to make those connections with convenience, and even a heightened sense of openness. I believe that people are more honest about how they really feel through these media sources because they are not subject to nonverbal signals like in face to face communication.”

“When I was walking to class I always text and listen to my iPod so the walk to class felt extremely long and boring unlike all the other times.”

“My short attention span prevented me from accomplishing much, so I stared at the wall for a little bit. After doing some push-ups, I just decided to take a few Dramamine and go to sleep to put me out of my misery.”

“On a psychological note, my brain periodically went crazy because I found at times that I was so bored I didn’t know what to do with myself.”

“I clearly am addicted and the dependency is sickening. I feel like most people these days are in a similar situation, for between having a Blackberry, a laptop, a television, and an iPod, people have become unable to shed their media skin.”

“The day seemed so much longer and it felt like we were trying to fill it up with things to do as opposed to running out of time to do all of the things we wanted to do.”

“I couldn’t take it anymore being in my room…alone…with nothing to occupy my mind so I gave up shortly after 5pm. I think I had a good run for about 19 hours and even that was torture.”

“Honestly, this experience was probably the single worst experience I have ever had.”

And so on.

The problem with the addiction metaphor, which, as these quotes show, is easy to indulge in, is that it presents the normal as abnormal and hence makes it easy for us to distance ourselves from our own behavior and its consequences. By dismissing talk of “Internet addiction” as rhetorical overkill, which it is, we also avoid undertaking an honest examination of how deeply our media devices have been woven into our lives and how they are shaping those lives in far-reaching ways, for better and for worse. In the course of just a decade, we have become profoundly dependent on a new and increasingly pervasive technology.

There’s nothing unusual about this. We routinely become dependent on popular, useful technologies. If people were required to live without their cars or their indoor plumbing for a day, many of them would probably resort to the language of addiction to describe their predicament. I know that, after a few hours, I’d be seriously jonesing for that toilet. What’s important is to be able to see what’s happening as we adapt to a new technology – and the problem with the addiction metaphor is that it makes it too easy to avert our eyes.

The addiction metaphor also distorts the nature of technological change by suggesting that our use of a technology stems from a purely personal choice – like the choice to smoke or to drink. An inability to control that choice becomes, in this view, simply a personal failing. But while it’s true that, in the end, we’re all responsible for how we spend our time, it’s an oversimplification to argue that we’re free “to choose” whether and how we use computers and cell phones, as if social norms, job expectations, familial responsibilities, and other external pressures had nothing to do with it. The deeper a technology is woven into the patterns of everyday life, the less choice we have about whether and how we use that technology.

When it comes to the digital networks that now surround us, the fact is that most of us can’t just GTFO, even if we wanted to. The sooner we move beyond the addiction metaphor, the sooner we’ll be able to see, with some clarity and honesty, the extent and implications of our dependency on our networked computing and media devices. What happens to the human self as it comes to experience more and more of the world, and of life, through the mediation of the screen?

At the end of her piece, Ioffe reports on a recent trip that Ternovskiy made to West Virginia to meet his IM buddy and “real friend,” Kirill Gura, face to face: “‘It was a little weird, you know,’ Ternovskiy told me later. ‘We was just looking at each other without having much to say.'” At this point, there’s probably a little Ternovskiy in all of us.

My own private internet

Here’s Yahoo CEO Carol Bartz, in a new Esquire interview, describing her vision of the future of the Net:

I call it the Internet of One. I want it to be mine, and I don’t want to work too hard to get what I need. In a way, I want it to be HAL. I want it to learn about me, to be me, and cull through the massive amount of information that’s out there to find exactly what I want.

Cool. Going online would feel like being isolated in one of those comfy suspended-animation capsules where HAL kept the crew members in 2001:

[Image: the hibernating crew in their suspended-animation capsules, from 2001: A Space Odyssey]

That turned out well, as I recall.

Sunday rambles

The editors of n+1 examine the rise of “webism” and some of its paradoxes:

The webists met the [New York] Times’s schizophrenia with a schizophrenia of their own. The worst of them simply cheered the almost unbelievably rapid collapse of the old media, which turned out, for all its seeming influence and power, to be a paper tiger, held up by elderly white men. But the best of them were given pause: themselves educated by newspapers, magazines, and books, they did not wish for these things to disappear entirely. (For one thing, who would publish their books?) In fact, with the rise of web 2.0 and the agony of the print media, a profound contradiction came into view. Webism was born as a technophilic left-wing splinter movement in the late 1960s, and reborn in early ’80s entrepreneurial Silicon Valley, and finally fully realized by the generation born around 1980. Whether in its right-leaning libertarian or left-leaning communitarian mode it was against the Man, and all the minions of the Man: censorship, outside control, narrative linearity. It was against elitism; it was against inequality. But it wasn’t against culture. It wasn’t against books! An Apple computer—why, you could write a book with one of those things. (Even if they were increasingly shaped and designed mostly so you could watch a movie.) One of the mysteries of webism has always been what exactly it wanted …

In The American Scholar, Sven Birkerts thinks about technological change and the future of imagination and the creative mind:

From the vantage point of hindsight, that which came before so often looks quaint, at least with respect to technology. Indeed, we have a hard time imagining that the users weren’t at some level aware of the absurdity of what they were doing. Movies bring this recognition to us fondly; they give us the evidence. The switchboard operators crisscrossing the wires into the right slots; Dad settling into his luxury automobile, all fins and chrome; Junior ringing the bell on his bike as he heads off on his paper route. The marvel is that all of them—all of us—concealed their embarrassment so well. The attitude of the present to the past . . . well, it depends on who is looking. The older you are, the more likely it is that your regard will be benign—indulgent, even nostalgic. Youth, by contrast, quickly gets derisive, preening itself on knowing better, oblivious to the fact that its toys will be found no less preposterous by the next wave of the young.

In the Times Magazine, Gary Wolf speculates that obsessive self-monitoring may be moving out of the fringe and into the mainstream:

Ubiquitous self-tracking is a dream of engineers. For all their expertise at figuring out how things work, technical people are often painfully aware how much of human behavior is a mystery. People do things for unfathomable reasons. They are opaque even to themselves. A hundred years ago, a bold researcher fascinated by the riddle of human personality might have grabbed onto new psychoanalytic concepts like repression and the unconscious. These ideas were invented by people who loved language. Even as therapeutic concepts of the self spread widely in simplified, easily accessible form, they retained something of the prolix, literary humanism of their inventors. From the languor of the analyst’s couch to the chatty inquisitiveness of a self-help questionnaire, the dominant forms of self-exploration assume that the road to knowledge lies through words. Trackers are exploring an alternate route. Instead of interrogating their inner worlds through talking and writing, they are using numbers. They are constructing a quantified self.

Placing the spreadsheeting-of-the-self trend in the context of the social-networking trend, Wolf observes, “You might not always have something to say, but you always have a number to report.” To give it a different spin: Who needs imagination when you have the data?