Meanings of the metaverse: Productizing reality

Welcome, Earthlings.

Facebook, it’s now widely accepted, has been a calamity for the world. The obvious solution, most people would agree, is to get rid of Facebook. Mark Zuckerberg has a different idea: Get rid of the world.

Cyberutopians have been dreaming about replacing the physical world with a virtual one since Zuckerberg was in OshKosh B’gosh overalls. The desire is rooted in misanthropy — meatspace, yuck — but it is also deeply idealistic, Platonic even. The world as we know it, the thinking goes, is messy and chaotic, illogical and unpredictable. It is a place of death and decay, where mind — the true essence of the human — is subordinate to the vagaries of the flesh. Cyberspace liberates the mind from its bodily trappings. It is a place of pure form. Everything in it reflects the logic and order inherent to computer programming.

Hints of that old cyberian idealism float through Zuckerberg’s conception of the metaverse — he’s big on teleportation — but despite his habit of reminding us that he took philosophy and classics courses in college, Zuckerberg is no metaphysician. A Mammonist rather than a Platonist, he’s in it for the money. His goal with the metaverse is not just to create a virtual world that is more encompassing, more totalizing, than what we experience today with social media and videogames. It’s to turn reality itself into a product. In the metaverse, nothing happens that is not computable. That also means that, assuming the computers doing the computing are in private hands, nothing happens that is not a market transaction, a moment of monetization, either directly through an exchange of money or indirectly through the capture of data. With the metaverse, capital subsumes reality. It’s money all the way down.

Zuckerberg’s public embrace of the metaverse, culminating in last week’s Meta rebranding, has been widely seen as a cynical ploy to distract the public from the mess Facebook has made for itself and everyone else. There’s truth in that view, but it would be a mistake to think that the metaverse is just a change-the-subject tactic. It’s a coldly calculated, high-stakes, speculative bet on the future. Zuckerberg believes that several commercial, technological, and social trends are now converging in a way that justifies big investments in an all-encompassing virtual sphere. He knows that Facebook — er, Meta — needs to act quickly if it’s to become the dominant player in what could be the biggest of all markets. As one of his lieutenants wrote in a recent memo, “The Metaverse is ours to lose.”

For Meta, Facebook and Instagram are cash cows — established, mature businesses that throw off a lot of cash. The company will milk those social media platforms to fund billions of dollars of investment in metaverse technologies ($10 billion this year alone). Much of that money will go into hardware, including virtual-reality headsets, augmented-reality glasses, hologram projectors, and a myriad of digital sensor systems. Facebook’s greatest vulnerability has always been its dependence on competitors — Apple, Google, Microsoft — to provide the hardware and associated operating systems required to access its sites and apps. The extent of that vulnerability was made clear this year when Apple instituted its data blockade, curtailing Facebook’s ability to track people online and hence making its ads less effective.

If Meta can control the hardware and operating systems people use to frolic in the metaverse, it will neutralize the threat posed by Apple and its other rivals. It will disintermediate the intermediaries. Beyond the hardware, though, the very structure of the metaverse, as envisioned by Zuckerberg, would make it hard if not impossible to prevent a company like Meta from collecting personal data. That’s because, as Zuckerberg emphasized in his Facebook Connect keynote Thursday, a universal metaverse requires universal interoperability. Being in the metaverse needs to be as seamless an experience as being in the real world. That can only happen if all data is shared. Gaps in the flow of data become holes in reality.

And what data! Two of the most revealing, and unsettling, moments in Zuckerberg’s keynote came when he was describing work now being done in the company’s “Reality Labs.” (Does Facebook have a Senior Vice President of Dystopian Branding?) He showed a demo of a woman walking through her home while wearing a pair of Meta AR glasses. The glasses mapped, automatically and in precise detail, everything she looked at. Such digital mapping will allow Meta to create, as Reality Labs Chief Scientist Michael Abrash explained, “an index” of “every single object” in a person’s home, “including not only location, but also the texture, geometry, and function.” The maps will become the basis for “contextual AI” that will be able to anticipate a person’s intentions and desires by tracking eye movements. What you look at, after all, is what you’re interested in. “Ultimately,” said Abrash, “her AR glasses will tell her what her available actions are at any time.” The advertising opportunities are endless.
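
What Abrash is describing is, at bottom, a database keyed to your gaze. As a purely hypothetical illustration, and not a description of anything Meta has actually built, a sketch of such an “object index” might look like the Python below; the class names, fields, and dwell-time threshold are assumptions for the sake of illustration, drawn only from Abrash’s list of location, texture, geometry, and function plus the eye tracking Zuckerberg demoed.

# Hypothetical sketch of an "object index" of the kind Abrash describes,
# paired with a crude gaze-based interest signal. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class IndexedObject:
    name: str                  # e.g., "sofa", "coffee maker"
    location: tuple            # (x, y, z) position in the mapped home, in meters
    geometry: str              # stand-in for a mesh or bounding box
    texture: str               # stand-in for a surface or material descriptor
    function: str              # what the object is for: "seating", "brewing"
    gaze_seconds: float = 0.0  # cumulative time the wearer has looked at it

@dataclass
class HomeIndex:
    objects: dict = field(default_factory=dict)

    def record_gaze(self, name: str, seconds: float) -> None:
        # Accumulate eye-tracking dwell time on an indexed object.
        if name in self.objects:
            self.objects[name].gaze_seconds += seconds

    def inferred_interests(self, threshold: float = 5.0) -> list:
        # Objects the wearer keeps looking at: the raw material for
        # "contextual AI," and, by the same token, for ad targeting.
        return [o.name for o in self.objects.values() if o.gaze_seconds >= threshold]

# Usage: index two objects, log a few glances, read off the "interests."
home = HomeIndex()
home.objects["coffee maker"] = IndexedObject(
    "coffee maker", (2.1, 0.9, 1.4), "bbox 0.3x0.3x0.4", "brushed steel", "brewing")
home.objects["sofa"] = IndexedObject(
    "sofa", (4.0, 0.4, 2.2), "bbox 2.0x0.9x0.9", "grey fabric", "seating")
home.record_gaze("coffee maker", 7.5)
print(home.inferred_interests())   # ['coffee maker']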

“Come into my parlor,” said the spider to the fly.

But that’s just the start. Meta has designs on our bodies that go well beyond eye-tracking. Zuckerberg explained that Reality Labs is at work on “neural interfaces” that will tap directly into the nervous system:

We believe that neural interfaces are going to be an important part of how we interact with AR glasses, and more specifically EMG [electromyography] input from the muscles on your wrist combined with contextualized AI. It turns out that we all have unused neuromotor pathways, and with simple and perhaps even imperceptible gestures, sensors will one day be able to translate those neuromotor signals into digital commands that enable you to control your devices. It’s pretty wild.

Wild, indeed. If Facebook’s ability to collect, analyze, and monetize your personal data makes you nervous now, wait till you see what Meta has in store. There are no secrets in the metaverse.

There is, however, private property. One of the obstacles to the computerized productization of reality has always been the difficulty in establishing and enforcing property rights in cyberspace. Fifteen years ago, a company called Linden Lab took a stab at building a proto-metaverse in the form of the much-hyped videogame Second Life. The company promised its users, including many of the world’s biggest businesses, that they would be able to buy, sell, and own virtual goods in Second Life. What it failed to mention was that those goods, being composed purely of data, could be easily and perfectly copied. And that’s exactly what happened. Second Life was invaded by the so-called CopyBot, a software program that could replicate any object in the virtual world, including people’s avatars. An orgy of piracy ensued, dooming Second Life to irrelevance. Today, thanks to blockchains, cryptocurrencies, and non-fungible tokens (NFTs), the copyability problem seems to have been solved. Property rights, including identity rights, will be enforceable in the metaverse, which vastly expands its commercial potential.
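
To make the mechanism concrete: an NFT does not make the data uncopyable; it makes ownership a scarce ledger entry that exists apart from the data. Here is a toy sketch in Python, purely illustrative and nothing like a real blockchain (no consensus, no signatures, no smart contracts), of how a registry can name a single owner for an asset whose bytes remain as copyable as any Second Life prop.

# Toy ownership registry in the spirit of an NFT ledger. Illustrative only;
# a real system would add a blockchain, cryptographic signatures, and contracts.
import hashlib

class ToyLedger:
    def __init__(self):
        self.owners = {}  # token_id (hash of the asset) -> current owner

    def mint(self, asset_bytes: bytes, owner: str) -> str:
        # Register an asset. The token is just a hash; the bytes stay copyable.
        token_id = hashlib.sha256(asset_bytes).hexdigest()
        if token_id in self.owners:
            raise ValueError("already minted")
        self.owners[token_id] = owner
        return token_id

    def transfer(self, token_id: str, seller: str, buyer: str) -> None:
        # Only the recorded owner can pass title; holding a copy confers nothing.
        if self.owners.get(token_id) != seller:
            raise PermissionError("seller does not own this token")
        self.owners[token_id] = buyer

# Anyone can duplicate the avatar file, but the ledger still names one owner.
ledger = ToyLedger()
avatar = b"mesh and textures for a unique avatar"
token = ledger.mint(avatar, "alice")
pirated_copy = bytes(avatar)            # a perfect copy, CopyBot-style
ledger.transfer(token, "alice", "bob")
print(ledger.owners[token])             # prints 'bob'; the copy changes nothing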

Just because Zuckerberg wants a universal metaverse to exist doesn’t mean that it will exist. Anyone who’s been on a Zoom call knows that, even at a pretty basic level, we’re a long way from the kind of seamless, perfectly synchronized virtual existence that Meta is promising. As Michael Abrash himself cautioned, “It’s going to take about a dozen major technological breakthroughs to get to the next-generation metaverse.” That’s a lot of breakthroughs, and no breakthrough is foreordained.

But Zuckerberg has one thing on his side: When given the opportunity, people have shown themselves to be willing, even eager, to choose a simulation over the real thing. The metaverse, should it arrive, may feel like home, only better.

_____________

Part 2: “Secondary Embodiment.”
Part 3: “The Andreessen Solution.”
Part 4: “Reality Surfing.”
Part 5: “The People of the Metaverse.”
Part 6: “Liquid Death in Life.”

The mailbox and the megaphone

Now that it’s broadly understood that Facebook is a social disease, what’s to be done? In “How to Fix Social Media,” an essay in the new issue of The New Atlantis, I suggest a way forward. It begins by seeing social media companies for what they are. Companies like Facebook, Google, and Twitter are engaged in two very different communication businesses. They transmit personal messages between individuals, and they broadcast information to the masses. They’re mailbox, and they’re megaphone. The mailbox business is a common carriage business; the megaphone business is a business with a public calling. Disentangling the two businesses opens the way for a two-pronged regulatory approach built on well-established historical precedents.

Here’s a taste of the essay:

For most of the twentieth century, advances in communication technology proceeded along two separate paths. The “one-to-one” systems used for correspondence and conversation remained largely distinct from the “one-to-many” systems used for broadcasting. The distinction was manifest in every home: When you wanted to chat with someone, you’d pick up the telephone; when you wanted to view or listen to a show, you’d switch on the TV or radio. The technological separation of the two modes of communication underscored the very different roles they played in people’s lives. Everyone saw that personal communication and public communication entailed different social norms, presented different sets of risks and benefits, and merited different legal, regulatory, and commercial responses.

The fundamental principle governing personal communication was privacy: Messages transmitted between individuals should be shielded from others’ eyes and ears. The principle had deep roots. It stemmed from a European common-law doctrine, known as the secrecy of correspondence, established centuries ago to protect the confidentiality of letters sent through the mail. For early Americans, the doctrine had special importance. In the years leading up to the War of Independence, the British government routinely intercepted and read letters sent from the colonies to England. Incensed, the colonists responded by establishing their own “constitutional post,” with a strict requirement that mail be carried “under lock and key.” At the moment of the country’s birth, the secrecy of correspondence became a democratic ideal.

Read on.

Are you still there?

Late Tuesday night, just as the Red Sox were beginning a top-of-the-eleventh rally against the Rays, my smart TV decided to ask me a question of deep ontological import:

Are you still there?

To establish my thereness (and thus be permitted to continue watching the game), I would need to “interact with the remote,” my TV informed me. I would need to respond to its signal with a signal of my own. At first, as I spent a harried few seconds finding the remote and interacting with it, I was annoyed by the interruption. But I quickly came to see it as endearing. Not because of the TV’s solicitude — the solicitude of a machine is just a gentle form of extortion — but because of the TV’s cluelessness. Though I was sitting just ten feet away from the set, peering intently into its screen, my smart TV couldn’t tell that I was watching it. It didn’t know where I was or what I was doing or even if I existed at all. That’s so cute.

I had found a gap in the surveillance system, but I knew it would soon be plugged. Media used to be happy to transmit signals in a human-readable format. But as soon as it was given the ability to collect signals, in a machine-readable format, media got curious. It wanted to know, and then it wanted to know everything, and then it wanted to know everything without having to ask. If a smart device asks you a question, you know it’s not working properly. Further optimization is required. And you know, too, that somebody is working on the problem.

Rumor has it that most smart TVs already have cameras secreted inside them — somewhere in the top bezel, I would guess, not far from the microphone. The cameras generally haven’t been activated yet, but that will change. In a few years, all new TVs will have operational cameras. All new TVs will watch the watcher. This will be pitched as an attractive new feature. We’ll be told that, thanks to the embedded cameras and their facial-recognition capabilities, televisions will henceforth be able to tailor content to individual viewers automatically. TVs will know who’s on the couch without having to ask. More than that, televisions will be able to detect medical and criminal events in the home and alert the appropriate authorities. Televisions will begin to save lives, just as watches and phones and doorbells already do. It will feel comforting to know that our TVs are watching over us. What good is a TV that can’t see?

We’ll be the show then. We’ll be the show that watches the show. We’ll be the show that watches the show that watches the show. In the end, everything turns into an Escher print.

“If you’re not paying for the product, you are the product.” If I have to hear that sentence again, I swear I’ll barf. As Shoshana Zuboff has pointed out, it doesn’t even have the benefit of being true. A product has dignity as a made thing. A product is desirable in itself. That doesn’t describe what we have come to represent to the operators of the machines that gather our signals. We’re the sites out of which industrial inputs are extracted, little seams in the universal data mine. But unlike mineral deposits, we continuously replenish our supply. The more we’re tapped, the more we produce.

The game continues. My smart TV tells me the precise velocity and trajectory of every pitch. To know is to measure, to measure is to know. As the system incorporates me into its workings, it also seeks to impose on me its point of view. It wants me to see the game — to see the world, to see myself — as a stream of discrete, machine-readable signals.

Are you still there?

Honestly, I have no idea.

Not being there: from virtuality to remoteness

I used to be virtual. Now I’m remote.

The way we describe our digitally mediated selves, the ones that whirl through computer screens like silks through a magician’s hands, has changed during the pandemic. The change is more than just a matter of terminology. It signals a shift in perspective and perhaps in attitude. “Virtual” told us that distance doesn’t matter; “remote” says that it matters a lot. “Virtual” suggested freedom; “remote” suggests incarceration.

The idea of virtuality-as-liberation came to the fore in Silicon Valley after the invention of the World Wide Web in 1989, but its origins go back to the beginnings of the computer age. In the 1940s and 1950s, as Katherine Hayles describes in How We Became Posthuman, the pioneers of digital computing — Turing, Shannon, Wiener, et al. — severed mind from body. They defined intelligence as “a property of the formal manipulation of symbols rather than enaction in the human life-world.” Our essence as thinking beings, they implied, is independent of our bodies. It lies in patterns of information and hence can be represented through electronic data processing. The self can be abstracted, virtualized.

Though rigorously materialist in its conception, this new mind-body dualism soon took on the characteristics of a theology. Not only would we be able to represent our essence through data, the argument went, but the transfer of the self to a computer would be an act of transcendence. It would free us from the constraints of the physical — from the body and its fixed location in space. As virtual beings, we would exist everywhere all at once. We would experience the “bodiless exultation of cyberspace,” as William Gibson put it in his 1984 novel Neuromancer. The sense of disembodiment as a means of emancipation was buttressed by the rise of schools of social critics who argued that “identity” could and should be separated from biology. If the self is a pattern of data, then the self is a “construct” that is infinitely flexible.

The arrival of social media seemed to bring us closer to the virtual ideal. It gave everyone easy access to multimedia software tools for creating rich representations of the self, and it provided myriad digital theaters, or “platforms,” for these representations to perform in. More and more, self-expression became a matter of symbol-processing, of information-patterning. The content of our character became the character of our content, and vice versa.

The pandemic has brought us back to our bodies, with a vengeance. It has done this not through re-embodiment but, paradoxically, through radical disembodiment. We’ve been returned to our bodies by being forced into further separation from them, by being cut off from, to quote Hayles again, “enaction in the human life-world.” As we retreated from the physical world, social media immediately expanded to subsume everyday activities that traditionally lay outside the scope of media. The computer — whether in the form of phone, laptop, or desktop — became our most important piece of personal protective equipment. It became the sterile enclosure, the prophylactic, that enabled us to go about the business of our lives — work, school, meetings, appointments, socializing, shopping — without actually inhabiting our lives. It allowed us to become remote.

In many ways, this has been a good thing. Without the tools of social media, and our experience in using them, the pandemic would have been even more of a trial. We would have felt even more isolated, our agency more circumscribed. Social media schooled us in the arts of social distancing before those arts became mandatory. But the pandemic has also given us a lesson, a painful one, in the limits of remoteness. In promising to eliminate distance, virtuality also promised to erase the difference between presence and absence. We would always be there, wherever “there” happened to be. That seemed plausible when our virtual selves were engaged in the traditional pursuits of media — news and entertainment, play and performance, information production and information gathering — but it was revealed to be an illusion as soon as social media became our means of living. Being remote is a drag. The state of absence, a physical state but also a psychic one, is a state of loneliness and frustration, angst and ennui.

What the pandemic has revealed is that when taken to an extreme — the extreme Silicon Valley saw as an approaching paradise — virtuality does not engender a sense of liberation and exultation. It engenders a sense of confinement and despair. Absence will never be presence. A body in isolation is a self in isolation.

Think about the cramped little cells in which we appear when we’re on Zoom. It’s hard to imagine a better metaphor for our situation. The architecture of Zoom is the architecture of the Panopticon, but it comes with a twist that Jeremy Bentham never anticipated. On Zoom, each of us gets to play the roles of both jailer and jailed. We are the watcher and the watched, simultaneously. Each role is an exercise in remoteness, and each is demeaning. Each makes us feel small.

What happens when the pandemic subsides? We almost certainly will rejoice in our return to the human life-world — the world of embodiment, presence, action. We’ll celebrate our release from remoteness. But will we rebel against social media and its continuing encroachment on our lives? I have my doubts. As the research of Sherry Turkle and others has shown, one of the attractions of virtualization has always been the sense of safety it provides. Even without a new virus on the prowl, the embodied world, the world of people and things, presents threats, not just physical but also social and psychological. Presence is also exposure. When we socialize through a screen, we feel protected from many of those threats — less fearful, more in control — even if we also feel more isolated and constrained and adrift.

If, in the wake of the pandemic, we end up feeling more vulnerable to the risks inherent in being physically in the world, we may, despite our immediate relief, continue to seek refuge in our new habits of remoteness. We won’t feel liberated, but at least we’ll feel protected.

What is it like to be a smartphone?

“The fact that we cannot expect ever to accommodate in our language a detailed description of Martian or bat phenomenology should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own.” –Thomas Nagel

What is it like to be a smartphone? In all the chatter about the future of artificial intelligence, the question has been glossed over or, worse, treated as settled. The longstanding assumption, a reflection of the anthropomorphic romanticism of computer scientists, science fiction writers, and internet entrepreneurs, has been that a self-aware computer would have a mind, and hence a consciousness, similar to our own. We, supreme programmers, would create machine consciousness in our own image.

The assumption is absurd, and not just because the sources and workings of our own consciousness remain unknown to us and hence unavailable as models for coders and engineers. Consciousness is entwined with being, and being with body, and a computer’s body and (speculatively) being have nothing in common with our own. A far more reasonable assumption is that the consciousness of a computer, should it arise, would be completely different from the consciousness of a human being. It would be so different that we probably wouldn’t even recognize it as a consciousness.

As the philosopher Thomas Nagel observed in “What Is It Like to Be a Bat?,” his classic 1974 article, we humans are unable to inhabit the consciousness of any other animal. We can’t know the “subjective character” of other animals’ experience any more than they can understand ours. We are, however, able to see that, excepting perhaps the simplest of life forms, an animal has a consciousness — or at least a beingness. The animal, we understand, is a living thing with a mind, a sensorium, a nature. We know it feels like something to be that animal, even though we can’t know what that something is.

We understand this about other animals because we share with them a genetic heritage. Because they are products of the same evolutionary process that gave rise to ourselves and because their bodies and brains have the same essential biology, the same material substrate, as our own, they resemble us in both their physical characteristics and their behavior. It would be impossible, given this obvious likeness, to see them as anything other than living beings.

There would be no such shared heritage or shared substrate, no such likeness, between ourselves and any artificial intelligence that may spring into being through the workings of a computer or a network of computers. Our relationship to an AI, and its to us, would be characterized by radical unlikeness. Confronted with an AI, we would not only be unable to inhabit its consciousness or otherwise sense the character of its being; we would be unable to recognize that it even has a consciousness or a being. It would remain, in our perception, an inanimate thing that we have constructed.

But, you might ask, wouldn’t its being be an emanation of its programming? That might be true to some extent — though who can say where being comes from? — but even so, the programming would be of no help in understanding the character of a computer’s being. You would not be able to know what it’s like to be an AI by examining the 1s and 0s of its machine code any more than you’d be able to understand your own being by examining the As, Cs, Gs, and Ts of your genetic code. A conscious computer would likely be unaware of the routines of its software — just as we’re unaware of how our DNA shapes our body and being or even of the myriad signals that zip through our nervous system every moment. An intelligent computer may perform all sorts of practical functions, including taking our inputs and supplying us with outputs, without having any awareness that it is performing those functions. Its being may lie entirely elsewhere.

The Turing test, in all its variations, would also be useless in identifying an AI. It merely tests for a machine’s ability to feign likeness with ourselves. It provides no insight into the AI’s being, which, again, could be entirely separate from its ability to trick us into sensing it is like us. The Turing test tells us about our own skills; it says nothing about the character of the artificial being.

All of this raises another possibility. It may be that we are already surrounded by AIs but have no idea that they exist. Their beingness is invisible to us, just as ours is to them. We are both objects in the same place, but as beings we inhabit different universes. Our smartphones may right now be having, to borrow Nagel’s words, “experiences fully comparable in richness of detail to our own.”

Look at your phone. You see a mere tool, there to do your bidding, and perhaps that’s the way your phone sees you, the dutiful but otherwise unremarkable robot that from time to time plugs it into an electrical socket.

The love that lays the swale in rows

There’s a line of verse I’m always coming back to, and it’s been on my mind more than usual these last few months:

The fact is the sweetest dream that labor knows.

It’s the second to last line of one of Robert Frost’s earliest and best poems, a sonnet called “Mowing.” He wrote it just after the turn of the twentieth century, when he was a young man, in his twenties, with a young family. He was working as a farmer, raising chickens and tending a few apple trees on a small plot of land his grandfather had bought for him in Derry, New Hampshire. It was a difficult time in his life. He had little money and few prospects. He had dropped out of two colleges, Dartmouth and Harvard, without earning a degree. He had been unsuccessful in a succession of petty jobs. He was sickly. He had nightmares. His firstborn child, a son, had died of cholera at the age of three. His marriage was troubled. “Life was peremptory,” Frost would later recall, “and threw me into confusion.”

But it was during those lonely years in Derry that he came into his own as a writer and an artist. Something about farming—the long, repetitive days, the solitary work, the closeness to nature’s beauty and carelessness—inspired him. The burden of labor eased the burden of life. “If I feel timeless and immortal it is from having lost track of time for five or six years there,” he would write of his stay in Derry. “We gave up winding clocks. Our ideas got untimely from not taking newspapers for a long period. It couldn’t have been more perfect if we had planned it or foreseen what we were getting into.” In the breaks between chores on the farm, Frost somehow managed to write most of the poems for his first book, A Boy’s Will; about half the poems for his second book, North of Boston; and a good number of other poems that would find their way into subsequent volumes.

“Mowing,” from A Boy’s Will, was the greatest of his Derry lyrics. It was the poem in which he found his distinctive voice: plainspoken and conversational, but also sly and dissembling. (To really understand Frost—to really understand anything, including yourself—requires as much mistrust as trust.) As with many of his best works, “Mowing” has an enigmatic, almost hallucinatory quality that belies the simple and homely picture it paints—in this case of a man cutting a field of grass for hay. The more you read the poem, the deeper and stranger it becomes:

There was never a sound beside the wood but one,
And that was my long scythe whispering to the ground.
What was it it whispered? I knew not well myself;
Perhaps it was something about the heat of the sun,
Something, perhaps, about the lack of sound—
And that was why it whispered and did not speak.
It was no dream of the gift of idle hours,
Or easy gold at the hand of fay or elf:
Anything more than the truth would have seemed too weak
To the earnest love that laid the swale in rows,
Not without feeble-pointed spikes of flowers
(Pale orchises), and scared a bright green snake.
The fact is the sweetest dream that labor knows.
My long scythe whispered and left the hay to make.

We rarely look to poetry for instruction anymore, but here we see how a poet’s scrutiny of the world can be more subtle and discerning than a scientist’s. Frost understood the meaning of the mental state we now call “flow” long before psychologists and neurobiologists delivered the empirical evidence. His mower is not an airbrushed peasant, a rustic caricature. He’s a farmer, a man doing a hard job on a still, hot summer day. He’s not dreaming of “idle hours” or “easy gold.” His mind is on his work—the bodily rhythm of the cutting, the weight of the tool in his hands, the stalks piling up around him. He’s not seeking some greater truth beyond the work. The work is the truth.

The fact is the sweetest dream that labor knows.

There are mysteries in that line. Its power lies in its refusal to mean anything more or less than what it says. But it seems clear that what Frost is getting at, in the line and in the poem, is the centrality of action to both living and knowing. Only through work that brings us into the world do we approach a true understanding of existence, of “the fact.” It’s not an understanding that can be put into words. It can’t be made explicit. It’s nothing more than a whisper. To hear it, you need to get very near its source. Labor, whether of the body or the mind, is more than a way of getting things done. It’s a form of contemplation, a way of seeing the world face-to-face rather than through a glass. Action un-mediates perception, gets us close to the thing itself. It binds us to the earth, Frost implies, as love binds us to one another. The antithesis of transcendence, work puts us in our place.

Frost is a poet of labor. He’s always coming back to those revelatory moments when the active self blurs into the surrounding world—when, as he would write in another poem, “the work is play for mortal stakes.” Richard Poirier, in his book Robert Frost: The Work of Knowing, described with great sensitivity the poet’s view of the essence and essentialness of hard work: “Any intense labor enacted in his poetry, like mowing or apple-picking, can penetrate to the visions, dreams, myths that are at the heart of reality, constituting its articulate form for those who can read it with a requisite lack of certainty and an indifference to merely practical possessiveness.” The knowledge gained through such efforts may be as shadowy and elusive as a dream, but “in its mythic propensities, the knowledge is less ephemeral than are the apparently more practical results of labor, like food or money.”

When we embark on a task, with our bodies or our minds, on our own or alongside others, we usually have a practical goal in sight. Our eyes are looking ahead to the product of our work—a store of hay for feeding livestock, perhaps. But it’s through the work itself that we come to a deeper understanding of ourselves and our situation. The mowing, not the hay, is what matters most.

* * *

Frost is not romanticizing some distant, pre-technological past. Although he was dismayed by those who allowed themselves to become “bigoted in reliance / On the gospel of modern science,” he felt a kinship with scientists and inventors. As a poet, he shared with them a common spirit and pursuit. They were all explorers of the mysteries of earthly life, excavators of meaning from matter. They were all engaged in work that, as Poirier described it, “can extend the capability of human dreaming.” For Frost, the greatest value of “the fact”—whether apprehended in the world or expressed in a work of art or made manifest in a tool or other invention—lay in its ability to expand the scope of individual knowing and hence open new avenues of perception, action, and imagination. In the long poem “Kitty Hawk,” written near the end of his life, he celebrated the Wright brothers’ flight “Into the unknown, / Into the sublime.” In making their own “pass / At the infinite,” the brothers also made the experience of flight, and the sense of unboundedness it provides, possible for all of us.

Technology is as crucial to the work of knowing as it is to the work of production. The human body, in its native, unadorned state, is a feeble thing. It’s constrained in its strength, its dexterity, its sensory range, its calculative prowess, its memory. It quickly reaches the limits of what it can do. But the body encompasses a mind that can imagine, desire, and plan for achievements the body alone can’t fulfill. This tension between what the body can accomplish and what the mind can envision is what gave rise to and continues to propel and shape technology. It’s the spur for humankind’s extension of itself and elaboration of nature. Technology isn’t what makes us “posthuman” or “transhuman,” as some writers and scholars these days suggest. It’s what makes us human. Technology is in our nature. Through our tools we give our dreams form. We bring them into the world. The practicality of technology may distinguish it from art, but both spring from a similar, distinctly human yearning.

One of the many jobs the human body is unsuited to is cutting grass. (Try it if you don’t believe me.) What allows the mower to do his work, what allows him to be a mower, is the tool he wields, his scythe. The mower is, and has to be, technologically enhanced. The tool makes the mower, and the mower’s skill in using the tool remakes the world for him. The world becomes a place in which he can act as a mower, in which he can lay the swale in rows. This idea, which on the surface may sound trivial or even tautological, points to something elemental about life and the formation of the self.

“The body is our general means of having a world,” wrote the French philosopher Maurice Merleau-Ponty in his 1945 masterwork Phenomenology of Perception. Our physical makeup—the fact that we walk upright on two legs at a certain height, that we have a pair of hands with opposable thumbs, that we have eyes which see in a particular way, that we have a certain tolerance for heat and cold—determines our perception of the world in a way that precedes, and then molds, our conscious thoughts about the world. We see mountains as lofty not because mountains are lofty but because our perception of their form and height is shaped by our own stature. We see a stone as, among other things, a weapon because the particular construction of our hand and arm enables us to pick it up and throw it. Perception, like cognition, is embodied.

It follows that whenever we gain a new talent, we not only change our bodily capacities, we change the world. The ocean extends an invitation to the swimmer that it withholds from the person who has never learned to swim. With every skill we master, the world reshapes itself to reveal greater possibilities. It becomes more interesting, and being in it becomes more rewarding. This may be what Baruch Spinoza, the seventeenth-century Dutch philosopher who rebelled against René Descartes’ division of mind and body, was getting at when he wrote, “The human mind is capable of perceiving a great many things, and is the more capable, the more its body can be disposed in a great many ways.” John Edward Huth, a physics professor at Harvard, testifies to the regeneration that attends the mastery of a skill. A decade ago, inspired by Inuit hunters and other experts in natural wayfinding, he undertook “a self-imposed program to learn navigation through environmental clues.” Through months of rigorous outdoor observation and practice, he taught himself how to read the nighttime and daytime skies, interpret the movements of clouds and waves, decipher the shadows cast by trees. “After a year of this endeavor,” he recalled in a recent essay, “something dawned on me: the way I viewed the world had palpably changed. The sun looked different, as did the stars.” Huth’s enriched perception of the environment, gained through a kind of “primal empiricism,” struck him as being “akin to what people describe as spiritual awakenings.”

Technology, by enabling us to act in ways that go beyond our bodily limits, also alters our perception of the world and what the world signifies to us. Technology’s transformative power is most apparent in tools of discovery, from the microscope and the particle accelerator of the scientist to the canoe and the spaceship of the explorer, but the power is there in all tools, including the ones we use in our everyday lives. Whenever an instrument allows us to cultivate a new talent, the world becomes a different and more intriguing place, a setting of even greater opportunity. To the possibilities of nature are added the possibilities of culture. “Sometimes,” wrote Merleau-Ponty, “the signification aimed at cannot be reached by the natural means of the body. We must, then, construct an instrument, and the body projects a cultural world around itself.” The value of a well-made and well-used tool lies not only in what it produces for us but what it produces in us. At its best, technology opens fresh ground. It gives us a world that is at once more understandable to our senses and better suited to our intentions—a world in which we’re more at home. Used thoughtfully and with skill, a tool becomes much more than a means of production or consumption. It becomes a means of experience. It gives us more ways to lead rich and engaged lives.

Look more closely at the scythe. It’s a simple tool, but an ingenious one. Invented around 500 BC, by the Romans or the Gauls, it consists of a curved blade, forged of iron or steel, attached to the end of a long wooden pole, or snath. The snath typically has, about halfway down its length, a small wooden grip, or nib, that makes it possible to grasp and swing the implement with two hands. The scythe is a variation on the much older sickle, a similar but short-handled cutting tool that was invented in the Stone Age and came to play an essential role in the early development of agriculture and, in turn, of civilization. What made the scythe a momentous innovation in its own right is that its long snath allowed a farmer or other laborer to cut grass at ground level while standing upright. Hay or grain could be harvested, or a pasture cleared, more quickly than before. Agriculture leaped forward.

The scythe enhanced the productivity of the worker in the field, but its benefit went beyond what could be measured in yield. The scythe was a congenial tool, far better suited to the bodily work of mowing than the sickle had been. Rather than stooping or squatting, the farmer could walk with a natural gait and use both his hands, as well as the full strength of his torso, in his job. The scythe served as both an aid and an invitation to the skilled work it enabled. We see in its form a model for technology on a human scale, for tools that extend the productive capabilities of society without circumscribing the individual’s scope of action and perception. Indeed, as Frost makes clear in “Mowing,” the scythe intensifies its user’s involvement with and apprehension of the world. The mower swinging a scythe does more, but he also knows more. Despite outward appearances, the scythe is a tool of the mind as well as the body.

Not all tools are so congenial. Some deter us from skilled action. The technologies of computerization and automation that hold such sway over us today rarely invite us into the world or encourage us to develop new talents that enlarge our perceptions and expand our possibilities. They mostly have the opposite effect. They’re designed to be disinviting. They pull us away from the world. That’s a consequence not only of prevailing design practices, which place ease and efficiency above all other concerns, but also of the fact that, in our personal lives, the computer, particularly in the form of the smartphone, has become a media device, its software painstakingly programmed to grab and hold our attention. As most people know from experience, the computer screen is intensely compelling, not only for the conveniences it offers but also for the many diversions it provides. There’s always something going on, and we can join in at any moment with the slightest of effort. Yet the screen, for all its enticements and stimulations, is an environment of sparseness—fast-moving, efficient, clean, but revealing only a shadow of the world.

That’s true even of the most meticulously crafted simulations of space that we find in virtual-reality applications such as games, architectural models, three-dimensional maps, and the video-meeting tools used to mimic classrooms, conference rooms, and cocktail parties. Artificial renderings of space may provide stimulation to our eyes and to a lesser degree our ears, but they tend to starve our other senses—touch, smell, taste—and greatly restrict the movements of our bodies. A study of rodents, published in Science in 2013, indicated that the brain cells used in navigation are much less active when animals make their way through computer-generated landscapes than when they traverse the real world. “Half of the neurons just shut up,” reported one of the researchers, UCLA neurophysicist Mayank Mehta. He believes that the drop-off in mental activity likely stems from the lack of “proximal cues”—environmental smells, sounds, and textures that provide clues to location—in digital simulations of space. “A map is not the territory it represents,” the Polish philosopher Alfred Korzybski famously remarked, and a computer rendering is not the territory it represents either. When we enter the virtual world, we’re required to shed much of our body. That doesn’t free us; it emaciates us.

The world in turn is made less meaningful. As we adapt to our streamlined environment, we render ourselves incapable of perceiving what the world offers its most ardent inhabitants. We travel blindfolded. The result is existential impoverishment, as nature and culture withdraw their invitations to act and to perceive. The self can only thrive, can only grow, when it encounters and overcomes “resistance from surroundings,” wrote the American pragmatist John Dewey in Art as Experience. “An environment that was always and everywhere congenial to the straightaway execution of our impulsions would set a term to growth as sure as one always hostile would irritate and destroy. Impulsion forever boosted on its forward way would run its course thoughtless, and dead to emotion.”

Ours may be a time of material comfort and technological wonder, but it’s also a time of aimlessness and gloom. During the first decade of this century, the number of Americans taking prescription drugs to treat depression or anxiety rose by nearly a quarter. One in five adults now regularly takes such medications. Many also take sleep aids such as Ambien. The suicide rate among middle-age Americans increased by nearly 30 percent over the same ten years, according to a report from the Centers for Disease Control and Prevention. More than 10 percent of American schoolchildren, and nearly 20 percent of high school–age boys, have been given a diagnosis of attention-deficit/hyperactivity disorder, and two-thirds of that group take drugs like Ritalin and Adderall to treat the condition. The current pandemic has only exacerbated the discontent.

The reasons for our malaise are many and only dimly understood. But one of them may be that through the pursuit of a frictionless existence, we’ve succeeded in turning the landscape of our lives into a barren place. Drugs that numb the nervous system provide a way to rein in our vital, animal sensorium, to shrink our being to a size that better suits our constricted environs.

* * *

Frost’s sonnet also contains, as one of its many whispers, a warning about technology’s ethical hazards. There’s a brutality to the mower’s scythe. It indiscriminately cuts down flowers—those tender, pale orchises—along with the stalks of grass. It frightens innocent animals, like the bright green snake. If technology embodies our dreams, it also embodies other, less benign qualities in our makeup, such as our will to power and the arrogance and insensitivity that accompany it. Frost returns to this theme a little later in A Boy’s Will, in a second lyric about cutting hay, “The Tuft of Flowers.” The poem’s narrator comes upon a freshly mown field and, while following the flight of a passing butterfly with his eyes, discovers in the midst of the cut grass a small cluster of flowers, “a leaping tongue of bloom” that “the scythe had spared”:

The mower in the dew had loved them thus,
By leaving them to flourish, not for us,
Nor yet to draw one thought of us to him,
But from sheer morning gladness at the brim.

Working with a tool is never just a practical matter, Frost is telling us, with characteristic delicacy. It always entails moral choices and has moral consequences. It’s up to us, as users and makers of tools, to humanize technology, to aim its cold blade wisely. That requires vigilance and care.

The scythe is still employed in subsistence farming in many parts of the world. But it has no place on the modern farm, the development of which, like the development of the modern factory, office, and home, has required ever-more complex and efficient equipment. The threshing machine was invented in the 1780s, the mechanical reaper appeared around 1835, the baler came a few years after that, and the combine harvester began to be produced commercially toward the end of the nineteenth century. The pace of technological advance has only accelerated in the decades since, and today the trend is reaching its logical conclusion with the computerization of agriculture. The working of the soil, which Thomas Jefferson saw as the most vigorous and virtuous of occupations, is being off-loaded almost entirely to machines. Farmhands are being replaced by “drone tractors” and other robotic systems that, using sensors, satellite signals, and software, plant seeds, fertilize and weed fields, harvest and package crops, and milk cows and tend other livestock. In development are robo-shepherds that guide flocks through pastures. Even if scythes still whispered in the fields of the industrial farm, no one would be around to hear them.

The congeniality of hand tools encourages us to take responsibility for their use. Because we sense the tools as extensions of our bodies, parts of ourselves, we have little choice but to be intimately involved in the ethical choices they present. The scythe doesn’t choose to slash or spare the flowers; the mower does. As we become more expert in the use of a tool, our sense of responsibility for it naturally strengthens. To the novice mower, a scythe may feel like a foreign object in the hands; to the accomplished mower, hands and scythe become one thing. Talent tightens the bond between an instrument and its user. This feeling of physical and ethical entanglement doesn’t have to go away as technologies become more complex. In reporting on his historic solo flight across the Atlantic in 1927, Charles Lindbergh spoke of his plane and himself as if they were a single being: “We have made this flight across the ocean, not I or it.” The airplane was a complicated system encompassing many components, but to a skilled pilot it still had the intimate quality of a hand tool. The love that lays the swale in rows is also the love that parts the clouds for the stick-and-rudder man.

Automation weakens the bond between tool and user not because computer-controlled systems are complex but because they ask so little of us. They hide their workings in secret code. They resist any involvement of the operator beyond the bare minimum. They discourage the development of skillfulness in their use. Automation ends up having an anesthetizing effect. We no longer feel our tools as parts of ourselves. In a renowned 1960 paper, “Man-Computer Symbiosis,” the psychologist and engineer J. C. R. Licklider described the shift in our relation to technology well. “In the man-machine systems of the past,” he wrote, “the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye.” The introduction of the computer changed all that. “‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.” The more automated everything gets, the easier it becomes to see technology as a kind of implacable, alien force that lies beyond our control and influence. Attempting to alter the path of its development seems futile. We press the on switch and follow the programmed routine.

To adopt such a submissive posture, however understandable it may be, is to shirk our responsibility for managing progress. A robotic harvesting machine may have no one in the driver’s seat, but it is every bit as much a product of conscious human thought as a humble scythe is. We may not incorporate the machine into our brain maps, as we do the hand tool, but on an ethical level the machine still operates as an extension of our will. Its intentions are our intentions. If a robot scares a bright green snake (or worse), we’re still to blame. We shirk a deeper responsibility as well: that of overseeing the conditions for the construction of the self. As computer systems and software applications come to play an ever-larger role in shaping our lives and the world, we have an obligation to be more, not less, involved in decisions about their design and use—before progress forecloses our options. We should be careful about what we make.

If that sounds naive or hopeless, it’s because we have been misled by a metaphor. We’ve defined our relation with technology not as that of body and limb or even that of sibling and sibling but as that of master and slave. The idea goes way back. It took hold at the dawn of Western philosophical thought, emerging first with the ancient Athenians. Aristotle, in discussing the operation of households at the beginning of his Politics, argued that slaves and tools are essentially equivalent, the former acting as “animate instruments” and the latter as “inanimate instruments” in the service of the master of the house. If tools could somehow become animate, Aristotle posited, they would be able to substitute directly for the labor of slaves. “There is only one condition on which we can imagine managers not needing subordinates, and masters not needing slaves,” he mused, anticipating the arrival of computer automation and even machine learning. “This condition would be that each [inanimate] instrument could do its own work, at the word of command or by intelligent anticipation.” It would be “as if a shuttle should weave itself, and a plectrum should do its own harp-playing.”

The conception of tools as slaves has colored our thinking ever since. It informs society’s recurring dream of emancipation from toil. “All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery,” wrote Oscar Wilde in 1891. “On mechanical slavery, on the slavery of the machine, the future of the world depends.” John Maynard Keynes, in a 1930 essay, predicted that mechanical slaves would free humankind from “the struggle for subsistence” and propel us to “our destination of economic bliss.” In 2013, Mother Jones columnist Kevin Drum declared that “a robotic paradise of leisure and contemplation eventually awaits us.” By 2040, he forecast, our computer slaves—“they never get tired, they’re never ill-tempered, they never make mistakes”—will have rescued us from labor and delivered us into a new Eden. “Our days are spent however we please, perhaps in study, perhaps playing video games. It’s up to us.”

With its roles reversed, the metaphor also informs society’s nightmares about technology. As we become dependent on our technological slaves, the thinking goes, we turn into slaves ourselves. From the eighteenth century on, social critics have routinely portrayed factory machinery as forcing workers into bondage. “Masses of labourers,” wrote Marx and Engels in their Communist Manifesto, “are daily and hourly enslaved by the machine.” Today, people complain all the time about feeling like slaves to their appliances and gadgets. “Smart devices are sometimes empowering,” observed The Economist in “Slaves to the Smartphone,” an article published in 2012. “But for most people the servant has become the master.” More dramatically still, the idea of a robot uprising, in which computers with artificial intelligence transform themselves from our slaves to our masters, has for a century been a central theme in dystopian fantasies about the future. The very word “robot,” coined by a science fiction writer in 1920, comes from robota, a Czech term for servitude.

The master-slave metaphor, in addition to being morally fraught, distorts the way we look at technology. It reinforces the sense that our tools are separate from ourselves, that our instruments have an agency independent of our own. We start to judge our technologies not on what they enable us to do but rather on their intrinsic qualities as products—their cleverness, their efficiency, their novelty, their style. We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.

The metaphor encourages society to take a simplistic and fatalistic view of technology and progress. If we assume that our tools act as slaves on our behalf, always working in our best interest, then any attempt to place limits on technology becomes hard to defend. Each advance grants us greater freedom and takes us a stride closer to, if not utopia, then at least the best of all possible worlds. Any misstep, we tell ourselves, will be quickly corrected by subsequent innovations. If we just let progress do its thing, it will find remedies for the problems it creates. “Technology is not neutral but serves as an overwhelming positive force in human culture,” writes one pundit, expressing the self-serving Silicon Valley ideology that in recent years has gained wide currency. “We have a moral obligation to increase technology because it increases opportunities.” The sense of moral obligation strengthens with the advance of automation, which, after all, provides us with the most animate of instruments, the slaves that, as Aristotle anticipated, are most capable of releasing us from our labors.

The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.” Being freed up sounds a lot more pleasant than being fired.

There’s a callousness to such grandiose futurism. As history reminds us, high-flown rhetoric about using technology to liberate workers often masks a contempt for labor. It strains credulity to imagine today’s technology moguls, with their libertarian leanings and impatience with government, agreeing to the kind of vast wealth-redistribution scheme that would be necessary to fund the self-actualizing leisure-time pursuits of the jobless multitudes. Even if society were to come up with some magic spell, or magic algorithm, for equitably parceling out the spoils of automation, there’s good reason to doubt whether anything resembling the “economic bliss” imagined by Keynes would ensue.

In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past have been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.” For technology to fulfill humankind’s abiding “wish to be liberated from labor’s ‘toil and trouble’ ” at this point would be perverse. It would cast us deeper into a purgatory of malaise. What automation confronts us with, Arendt concluded, “is the prospect of a society of laborers without labor, that is, without the only activity left to them. Surely, nothing could be worse.” Utopianism, she understood, is a form of self-delusion.

* * *

A while back, I had a chance meeting on the campus of a small liberal arts college with a freelance photographer who was working on an assignment for the school. He was standing under a tree, waiting for some uncooperative clouds to get out of the way of the sun. I noticed he had a large-format film camera set up on a bulky tripod—it was hard to miss, as it looked almost absurdly old-fashioned—and I asked him why he was still using film. He told me that he had eagerly embraced digital photography a few years earlier. He had replaced his film cameras and his darkroom with digital cameras and a computer running the latest image-processing software. But after a few months, he switched back. It wasn’t that he was dissatisfied with the operation of the equipment or the resolution or accuracy of the images. It was that the way he went about his work had changed.

The constraints inherent in taking and developing pictures on film—the expense, the toil, the uncertainty—had encouraged him to work slowly when he was on a shoot, with deliberation, thoughtfulness, and a deep, physical sense of presence. Before he took a picture, he would compose the shot in his mind, attending to the scene’s light, color, framing, and form. He would wait patiently for the right moment to release the shutter. With a digital camera, he could work faster. He could take a slew of images, one after the other, and then use his computer to sort through them and crop and tweak the most promising ones. The act of composition took place after a photo was taken. The change felt intoxicating at first. But he found himself disappointed with the results. The images left him cold. Film, he realized, imposed a discipline of perception, of seeing, which led to richer, more artful, more moving photographs. Film demanded more of him. And so he went back to the older technology.

The photographer wasn’t the least bit antagonistic toward computers. He wasn’t beset by any abstract concerns about a loss of agency or autonomy. He wasn’t a crusader. He just wanted the best tool for the job—the tool that would encourage and enable him to do his finest, most fulfilling work. What he came to realize is that the newest, most automated, most expedient tool is not always the best choice. Although I’m sure he would bristle at being likened to the Luddites of the early nineteenth century, his decision to forgo the latest technology, at least in some stages of his work, was an act of rebellion resembling that of the old English machine-breakers, if without the fury. Like the Luddites, he understood that decisions about technology are also decisions about ways of working and ways of living—and he took control of those decisions rather than ceding them to others or giving way to the momentum of progress. He stepped back and thought critically about technology.

As a society, we’ve become suspicious of such acts. Out of ignorance or laziness or timidity, we’ve turned the Luddites into cartoon characters, emblems of backwardness. We assume that anyone who rejects a new tool in favor of an older one is guilty of nostalgia, of making choices sentimentally rather than rationally. But the real sentimental fallacy is the assumption that the new thing is always better suited to our purposes and intentions than the old thing. That’s the view of a child, naive and pliable. What makes one tool superior to another has nothing to do with how new it is. What matters is how it enlarges us or diminishes us, how it shapes our experience of nature and culture and one another. To cede choices about the texture of our daily lives to a grand abstraction called progress is folly.

Technology is a pillar and a glory of civilization. But it is also a test that we set for ourselves. It challenges us to think about what’s important in our lives, to ask ourselves what being human means. Computerization, as it extends its reach into the most intimate spheres of our existence, raises the stakes of the test. We can allow ourselves to be carried along by the technological current, wherever it may be taking us, or we can push against it. To resist invention is not to reject invention. It’s to humble invention, to bring progress down to earth. “Resistance is futile,” goes the glib Star Trek cliché beloved by techies. But that’s the opposite of the truth. Resistance is never futile. If the source of our vitality is, as Emerson taught us, “the active soul,” then our highest obligation is to resist any force, whether institutional or commercial or technological, that would enfeeble or enervate the active soul.

One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a challenge, we may be motivated by an anticipation of the ends of our labor, but, as Frost saw, it’s the work—the means—that makes us who we are. Automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?


This essay is adapted from the book The Glass Cage, published by W. W. Norton & Company. Copyright by Nicholas Carr.

The Shallows: tenth anniversary edition

My book The Shallows: What the Internet Is Doing to Our Brains turns ten this year, and to mark the occasion, my publisher, W. W. Norton, is bringing out a new and expanded tenth-anniversary edition. It will be out on March 3.

Along with a new introduction, the edition includes, as an afterword, a new chapter that explores the technological and cultural developments of the last decade, with a particular focus on the cognitive and behavioral effects of smartphones and social media. The chapter, titled “The Most Interesting Thing in the World,” also reviews salient research that’s appeared in the years since the first edition came out.

You can preorder the new edition from your local bookstore or through Amazon, Barnes & Noble, Powell’s, and other online booksellers.

Here’s a preview of the new Introduction:

Welcome to The Shallows. When I wrote this book ten years ago, the prevailing view of the Internet was sunny, often ecstatically so. We reveled in the seemingly infinite bounties of the online world. We admired the wizards of Silicon Valley and trusted them to act in our best interest. We took it on faith that computer hardware and software would make our lives better, our minds sharper. In a 2010 Pew Research survey of some 400 prominent thinkers, more than 80 percent agreed that, “by 2020, people’s use of the Internet [will have] enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices.”

The year 2020 has arrived. We’re not smarter. We’re not making better choices.

The Shallows explains why we were mistaken about the Net. When it comes to the quality of our thoughts and judgments, the amount of information a communication medium supplies is less important than the way the medium presents the information and the way, in turn, our minds take it in. The brain’s capacity is not unlimited. The passageway from perception to understanding is narrow. It takes patience and concentration to evaluate new information — to gauge its accuracy, to weigh its relevance and worth, to put it into context — and the Internet, by design, subverts patience and concentration. When the brain is overloaded by stimuli, as it usually is when we’re peering into a network-connected computer screen, attention splinters, thinking becomes superficial, and memory suffers. We become less reflective and more impulsive. Far from enhancing human intelligence, I argue, the Internet degrades it.

Much has changed in the decade since The Shallows came out. Smartphones have become our constant companions. Social media has insinuated itself into everything we do. The dark things that can happen when everyone’s connected have happened. Our faith in Silicon Valley has been broken, yet the big Internet companies wield more power than ever. This tenth anniversary edition of The Shallows takes stock of the changes. It includes an extensive new afterword in which I examine the cognitive and cultural consequences of the rise of smartphones and social media, drawing on the large body of new research that has appeared since 2010. I have left the original text of the book largely unchanged. I’m biased, but I think The Shallows has aged well. To my eyes, it’s more relevant today than it was ten years ago. I hope you find it worthy of your attention.