Goodbye Rough Type, Hello New Cartographies

Rough Type has had a twenty-year run. That seems like long enough, particularly seeing as the blog has been pretty much dormant in recent years. So this will be the last Rough Type post.

But don’t shed too many tears. I’m going to continue blogging, maybe even at a faster clip, through a Substack I’ve started called New Cartographies. My first new post is up. It’s titled “Dead Labor, Dead Speech,” and here’s how it begins:

If, as Marx argued, capital is dead labor, then the products of large language models might best be understood as dead speech. Just as factory workers produce, with their “living labor,” machines and other forms of physical capital that are then used, as “dead labor,” to produce more physical commodities, so human expressions of thought and creativity—“living speech” in the forms of writing, art, photography, and music—become raw materials used to produce “dead speech” in those same forms. LLMs, to continue with Marx’s horror-story metaphor, feed “vampire-like” on human culture. Without our words and pictures and songs, they would cease to function. They would become as silent as a corpse in a casket.

Read on (and thanks).

Large Language Manglers

[image: a ventriloquist. Caption: “Who’s the dummy?”]

I was reading Joanna Stern’s report in the Wall Street Journal about the new AI features that Apple is rushing to complete for the iPhone 16s. (Can’t LLMs debug their own code? I thought that was a done deal.) Among the promised features is a Rewrite function that will translate your messages and other writings into different styles of prose. One style is called Professional. Stern tested it on a note she was writing to her mom. Here’s the original:

I’ll be home tomorrow. 

Here’s how it reads after the rewrite:

I anticipate returning home tomorrow.

So, if I’m getting this right, you’d use Professional mode any time you want to sound like you have a stick up your ass. I anticipate forgoing its deployment.

This is all very silly, or at least would be if we hadn’t lost our collective mind. For years now, we’ve been acclimating ourselves to having machines speak on our behalf. It began with autocorrect and autoedit functions in word processors and has continued through ever more aggressive autocomplete functions on phones. Having an app fiddle with your writing now seems normal, even necessary given how much time we all spend messaging, posting, and commenting. The endless labor of self-expression cries out for the efficiency of automation.

We don’t even care that computers, despite years of experience, still do a crappy job of what would seem to be pretty simple algorithmic work. Here’s a sloppy text that I wrote with the aid of my messaging app. It’s filled with typos, weird punctuation, and bizarre word substitutions, but I’m sure you’ll get the gist. If not, who cares? Along with speeding up exchanges, the implicit “it’s autocomplete’s fault!” excuse that now accompanies every messy text has the added benefit of covering up the fact that we can’t be bothered to spend five seconds proofreading the messages we send to friends and family members. We’ve got headlines to read, YouTubes to watch.

Since OpenAI introduced ChatGPT two years ago, people have taken to using it for all sorts of formal writing tasks, from college papers to corporate memos to government reports. I was recently talking with a Methodist bishop, and she told me that a colleague now uses generative AI to help him write sermons. Apple’s Rewrite, along with the similar writing tools being introduced by Google, Microsoft, Meta, and others, extends the AI-based outsourcing of personal speech into more intimate areas, shaping the way we talk with the people closest to us. It may start with rewriting—to help us “deliver the right words to meet the occasion,” as Apple describes it—but it will soon expand into the automated production of condolence messages, wedding vows, and the like. LLMs give us ventriloquism in reverse. The mechanical dummy speaks through your mouth.

It’s also the next stage in the long-running industrialization of human communication—one of the subjects of my forthcoming book Superbloom. For nearly two centuries, we’ve embraced the relentless speeding up of communication by mechanical means, believing that the industrial ideals of efficiency, productivity, and optimization are as applicable to speech as to the manufacture of widgets. More recently, we’ve embraced the mechanization of editing, allowing software to replace people in choosing the information we see (and don’t see). With LLMs, the industrialization ethic moves at last into the creation of the very content of our speech.

It’s hard to know what to say. Why not make it easier? Or, as Apple Rewrite Professional puts it: The rendering of thoughts into prose is one of the most challenging endeavors in which a human being can engage. It would be advisable to subject the task to a process of simplification.

Introducing Superbloom

The poppies come out every March in Walker Canyon, an environmentally sensitive spot in the Temescal Mountains seventy miles southeast of Los Angeles, but the show they put on in early 2019 was something special. Thanks to a wet winter in the normally arid region, seeds that had long lain dormant germinated, and the poppies appeared in numbers not seen in years. The flowers covered the canyon’s slopes in carpets of vivid, almost fluorescent orange — the shade you get on hunters’ vests and caps. On social media, word of the so-called superbloom spread quickly. First on the scene were the influencers.

So begins my new book, Superbloom: How Technologies of Connection Tear Us Apart, to be published in January 2025 by W. W. Norton.

Fifteen years ago, when I was finishing up my book The Shallows: What the Internet Is Doing to Our Brains, I knew that I was telling only part of the story of the net’s effects. The book focused on the personal consequences of our entry into an artificial environment geared to agitation and distraction — the way it shapes our thoughts and perceptions, our ways of reading and sense-making. What it didn’t cover were the social and political effects of the technology. Back then — this was 2009 — smartphones were brand new, app stores had only recently opened, and social media platforms like Facebook, Twitter, and Instagram were just beginning to draw a mass audience. TikTok wouldn’t appear for nearly a decade. The social world we live in today, in short, didn’t exist. As for psychological and sociological studies of online socializing, they were few and their results were mixed.

We know a lot more now. Although we socialize through social media more than ever today, our attitude toward the experience has, for many good reasons, shifted from enthusiasm to wariness. Our view of the companies running the platforms, meanwhile, has pinballed from celebratory to contemptuous. There’s talk of warning labels, breakups, outright bans. But even as public opinion shifted over the last seven or eight years, I sensed that there was something important missing from all the debates and discussions. That sense, which strengthened in 2019 when I taught an undergraduate seminar on social media at Williams College in Massachusetts, inspired me to begin the research that led to Superbloom.

In the book, I try to put the phenomenon of social media into a broader context, one spanning the history of communication technology as well as the psychological and sociological evidence of how mediated communication works on the human psyche and influences people’s relationships. At the center of the book is a paradox that was summed up well by the Canadian scholar Harold Innis in a 1947 lecture: “Enormous improvements in communication have made understanding more difficult.” No one paid attention to the idea back then, but I think we need to pay attention to it now.

Superbloom is available for preordering. I hope you’ll read it.

photo: cultivar413 (cc).

Culture vultures

I’m not tearing up over Elon Musk’s termination, with extreme prejudice, of Twitter. Kill the blue bird, gut it, stuff it, and stick it in a media museum to collect dust. Think of all the extra time journalists will now have for journalism.

But there is something ominous about a superbillionaire taking over what had become a sort of public square, a center of discourse, for crying out loud, and doing with it what he pleases, including some pretty perverted acts. I mean, that X logo? Virginia Heffernan compares it to “the skull and crossbones on cartoon bottles of poison.” To me, it looks like something that a cop might spray-paint on a floor to mark the spot where a corpse lay before it was removed—the corpse in this case being the bird’s.

Musk’s toying dismemberment of Twitter feels even more unsettling in the wake of the announcement yesterday that private-equity giant KKR is buying Simon & Schuster, publisher of Catch-22 and Den of Thieves, among other worthy titles, for a measly billion and a half. Says S&S CEO Jon Karp: “They plan to invest in us and make us even greater than we already are. What more could a publishing company want?” That would have made a funny tweet.

Both gambits are asset plays, or, maybe a better term, asset undertakings. I don’t understand everything Musk’s doing—manic episodes have their own logic—but he does get an established social-media platform and a big pile of content to feed into the large language model he’s building at xAI. (Fun game: connect the Xs.) KKR gets its own pile of content to, uh, leverage. Its intentions probably aren’t entirely literary.

Well-turned sentences had a decent run, but after TikTok they’ve become depreciating assets. Traditional word-based culture—and, sure, I’ll stick Twitter into that category—is beginning to look like a feeding ground for vultures. Tell Colleen Hoover to turn out the lights when she leaves.

Vision Pro’s big reveal

At first glance, there doesn’t seem to be much to connect Meta’s $500 Quest 3 face strap-on for gamer-proles with Apple’s $3,500 Vision Pro face tiara for elite beings of a hypothetical nature, but the devices do have one important thing in common: redundancy. Both offer a set of features that lag far behind our already well-established psychic capabilities, kludgy imitations of what our minds now do effortlessly. Our reality has been augmented, virtual, and mixed for a long time, and we’re at home in it. Bulky headgear that projects images onto fields of vision feels like a leap backwards.

Baudrillard explained it all thirty years ago in The Perfect Crime:

The virtual camera is in our heads. No need of a medium to reflect our problems in real time: every existence is telepresent to itself. The TV and the media long since left their media space to invest “real” life from the inside, precisely as a virus does a normal cell. No need of the headset and the data suit: it is our will that ends up moving about the world as though inside a computer-generated image.

Who needs real goggles when we already wear virtual ones?

Vision Pro’s value seems to lie largely in the realm of metaphor. There’s that brilliant little reality dial—the “digital crown”—that allows you to fade in and out of the world, an analog rendering of the way our consciousness now wavers between presence and absence, here and not-here. And there’s the projection of your eyes onto the outer surface of the lens, so those around you can judge your degree of social and emotional availability at any given moment. Your eyes disappear, Apple explains, as you become more “immersed,” as you retreat from your physical surroundings into the screen’s captivating images. See you later. Your fingers keep moving, though, worrying their virtual worry beads, the body reduced to interface. In its metaphors, Vision Pro reveals us for what we have become: avatars in the uncanny valley.

Apple presents its Vision line as the next logical step in the progression of computing: from desktop computing to mobile computing to, now, “spatial computing.” Apps float in the air. The invisible data streams that already swirl around us become visible. The world is the computer. Maybe that is the future of computing. Maybe not. In most situations, the smartphone still seems more practical, flexible, and user-friendly than something that, like the xenomorph in Alien, commandeers the better part of your face.

The vision that Vision offers us seems more retrospective than prospective. It shows us a time when entering a virtual world required a gizmo. That’s the past, not the future.

Meanings of the metaverse: Liquid death in life

“Liquid Death”: if the metaverse ever gets rebranded into something more consumable, that’s the name I’d suggest. It’s edgy, it’s memorable, and it hits the bullseye.

Liquid Death, the edgy canned water, has already proclaimed itself, in one of its edgy TikToks, the “official water of the metaverse.”*

[embedded TikTok from @liquiddeath: “The official water of the metaverse”]

That’s apt. Liquid Death is, after all, the first product to exist entirely in the metaverse. In fact, calling it a “product” seems anachronistic. It just reveals how ill-suited our vocabulary is to the metaverse. Words are too tied to things; we’re going to need a new language. I guess you could say LD is a “metaproduct,” but that doesn’t seem quite right either. It suggests that, behind the metaproduct, there is an actual, primary “product.” And that’s not true at all.

Let me explain. We used to think that avatars were virtual representations of actual objects, digital symbols of “real” things, but LD turns that old assumption on its head. The real LD is a symbol, and the stuff you pour down your neck from the can is just a physical representation of the symbol, a derivative. The water is the avatar. The actual “product” — everything is going to need to be put into scare quotes soon — is the sum of the billions of digital Liquid Death messages and images that pour continuously through billions of streams. The actual product is nothing.

Jean Baudrillard, philosopher of the hyperreal, would have put it like this:

Liquid Death: more watery than water

That’s why LD can market itself as an alcoholic beverage — the “latest innovation in beer,” as the Wall Street Journal described it — even though it’s just water. In the metaverse, a tallboy of water is every bit as intoxicating as a double IPA. More intoxicating, actually, if you get the branding right. And if you’re still partying in the “quote-unquote real world,” as Marc Andreessen puts it, drinking a symbol of an alcoholic beverage without actually drinking an alcoholic beverage is the first step to becoming a symbol yourself. Liquid Death is the metaverse’s gateway drug.

Liquid Death: more boozy than booze

Whether you call it a product or a “product” or TBD, one thing’s for sure: Liquid Death is a prophecy. Mark Zuckerberg says that his immediate goal is to “get a billion people into the metaverse doing hundreds of dollars apiece in digital commerce.” That’s his “north star.” Meeting the goal is going to require that commerce accelerate its long-term shift from goods, as traditionally defined, to symbols. Which in turn will require a psychic shift on the part of consumers, a kind of caterpillar-to-butterfly transubstantiation. We’ll need to do to the self what Liquid Death has done to booze: shift its essence from the thing to the representation of the thing. The avatar becomes the person, the non-fungible token of the self. The body turns into an avatar of the symbol, a derivative of a derivative.

Liquid Death operates a virtual country club—called the Liquid Death Country Club—which you can join, it says, by “selling your soul.” That’s what I love about Liquid Death. It tells you the truth about the metaverse.

________
*When edginess achieves cultural centrality, is it still edgy?

This is the sixth installment in the series “Meanings of the Metaverse,” which began here.

At the Concord station (for Leo Marx)

Leo Marx has died, at the mighty age of 102. His work, particularly The Machine in the Garden, inspired many people who write on the cultural consequences of technological progress, myself included. As a small tribute, I’m posting this excerpt from The Shallows, in which Marx’s influence is obvious.

It was a warm summer morning in Concord, Massachusetts. The year was 1844. Nathaniel Hawthorne was sitting in a small clearing in the woods, a particularly peaceful spot known around town as Sleepy Hollow. Deep in concentration, he was attending to every passing impression, turning himself into what Ralph Waldo Emerson, the leader of Concord’s transcendentalist movement, had eight years earlier termed a “transparent eyeball.”

Hawthorne saw, as he would record in his notebook later that day, how “sunshine glimmers through shadow, and shadow effaces sunshine, imaging that pleasant mood of mind where gayety and pensiveness intermingle.” He felt a slight breeze, “the gentlest sigh imaginable, yet with a spiritual potency, insomuch that it seems to penetrate, with its mild, ethereal coolness, through the outward clay, and breathe upon the spirit itself, which shivers with gentle delight.” He smelled on the breeze a hint of “the fragrance of the white pines.” He heard “the striking of the village clock” and “at a distance mowers whetting their scythes,” though “these sounds of labor, when at a proper remoteness, do but increase the quiet of one who lies at his ease, all in a mist of his own musings.”

Abruptly, his reverie was broken:

But, hark! there is the whistle of the locomotive,—the long shriek, harsh above all other harshness, for the space of a mile cannot mollify it into harmony. It tells a story of busy men, citizens from the hot street, who have come to spend a day in a country village,—men of business,—in short, of all unquietness; and no wonder that it gives such a startling shriek, since it brings the noisy world into the midst of our slumbrous peace.

Leo Marx opens The Machine in the Garden, his classic 1964 study of technology’s influence on American culture, with a recounting of Hawthorne’s morning in Sleepy Hollow. The writer’s real subject, Marx argues, is “the landscape of the psyche” and in particular “the contrast between two conditions of consciousness.” The quiet clearing in the woods provides the solitary thinker with “a singular insulation from disturbance,” a protected space for reflection. The clamorous arrival of the train, with its load of “busy men,” brings “the psychic dissonance associated with the onset of industrialism.” The contemplative mind is overwhelmed by the noisy world’s mechanical busyness.

The stress that Google and other Internet companies place on the efficiency of information exchange as the key to intellectual progress is nothing new. It’s been, at least since the start of the Industrial Revolution, a common theme in the history of the mind. It provides a strong and continuing counterpoint to the very different view, promulgated by the American transcendentalists as well as the earlier English romantics, that true enlightenment comes only through contemplation and introspection. The tension between the two perspectives is one manifestation of the broader conflict between, in Marx’s terms, “the machine” and “the garden”—the industrial ideal and the pastoral ideal—that has played such an important role in shaping modern society.

When carried into the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of contemplative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in what Google calls the “world of numbers,” but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual locomotion.

Even as the printing press, invented by Johannes Gutenberg in the fifteenth century, made the literary mind the general mind, it set in motion the process that now threatens to render the literary mind obsolete. When books and periodicals began to flood the marketplace, people for the first time felt overwhelmed by information. Robert Burton, in his 1628 masterwork An Anatomy of Melancholy, described the “vast chaos and confusion of books” that confronted the seventeenth-century reader: “We are oppressed with them, our eyes ache with reading, our fingers with turning.” A few years earlier, in 1600, another English writer, Barnaby Rich, had complained, “One of the great diseases of this age is the multitude of books that doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought into the world.”

Ever since, we have been seeking, with mounting urgency, new ways to bring order to the confusion of information we face every day. For centuries, the methods of personal information management tended to be simple, manual, and idiosyncratic—filing and shelving routines, alphabetization, annotation, notes and lists, catalogues and concordances, indexes, rules of thumb. There were also the more elaborate, but still largely manual, institutional mechanisms for sorting and storing information found in libraries, universities, and commercial and governmental bureaucracies. During the twentieth century, as the information flood swelled and data-processing technologies advanced, the methods and tools for both personal and institutional information management became more complex, more systematic, and increasingly automated. We began to look to the very machines that exacerbated information overload for ways to alleviate the problem.

Vannevar Bush sounded the keynote for our modern approach to managing information in his much-discussed article “As We May Think,” which appeared in the Atlantic Monthly in 1945. Bush, an electrical engineer who had served as Franklin Roosevelt’s science adviser during World War II, worried that progress was being held back by scientists’ inability to keep abreast of information relevant to their work. The publication of new material, he wrote, “has been extended far beyond our present ability to make use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”

But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”

With his memex, Bush anticipated both the personal computer and the hypermedia system of the internet. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.” Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!” We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the net, the winds of opinion have become a whirlwind.

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”