
The automatic muse

In the fall of 1917, the Irish poet William Butler Yeats, by then in middle age and having had marriage proposals turned down first by his great love Maud Gonne and then by Gonne’s daughter Iseult, offered his hand to a well-off young Englishwoman named Georgie Hyde-Lees. She accepted, and the two were wed a few weeks later, on October 20, in a small ceremony in London.

Hyde-Lees was a psychic, and four days into their honeymoon she gave her husband a demonstration of her ability to channel the words of spirits through automatic writing. Yeats was fascinated by the messages that flowed through his wife’s pen, and in the ensuing years the couple held more than 400 such seances, the poet poring over each new script. At one point, Yeats announced that he would devote the rest of his life to interpreting the messages. “No,” the spirits responded, “we have come to give you metaphors for poetry.” And so they did, in abundance. Many of Yeats’s great late poems, with their gyres, staircases, and phases of the moon, were inspired by his wife’s mystical scribbles.

One way to think about AI-based text-generation tools like OpenAI’s GPT-3 is as clairvoyants. They are mediums that bring the words of the past into the present in a new arrangement. GPT-3 is not creating text out of nothing, after all. It is drawing on a vast corpus of human expression and, through a quasi-mystical statistical procedure (no one can explain exactly what it is doing), synthesizing all those old words into something new, something intelligible to and requiring interpretation by its interlocutor. When we talk to GPT-3, we are, in a way, communing with the dead. One of Hyde-Lees’ spirits said to Yeats, “this script has its origin in human life — all religious systems have their origin in God & descend to man — this ascends.” The same could be said of the script generated by GPT-3. It has its origin in human life; it ascends.
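
For readers who want the séance demystified a little: at its core, a model like GPT-3 runs a loop that looks at the words so far, consults statistics distilled from its training corpus about which word tends to come next, samples one, and repeats. Here is a toy sketch of that loop in Python, with a hand-made probability table standing in for the model’s billions of learned parameters (the phrases and numbers are invented for illustration):

```python
import random

# Toy stand-in for GPT-3's learned statistics: for each short context,
# a probability distribution over possible next words. In the real model
# these probabilities are distilled from a vast corpus of human writing.
next_word_probs = {
    "the spirits": {"speak": 0.5, "ascend": 0.3, "whisper": 0.2},
    "spirits speak": {"in": 0.6, "softly": 0.4},
    "spirits ascend": {"slowly": 0.7, "again": 0.3},
}

def generate(prompt, steps=2):
    words = prompt.split()
    for _ in range(steps):
        context = " ".join(words[-2:])   # condition on the last two words
        dist = next_word_probs.get(context)
        if dist is None:                 # no statistics for this context
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the spirits"))  # e.g. "the spirits speak softly"
```

Old words, new arrangements: nothing comes out that did not, in some statistical sense, go in.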

It’s telling that one of the first commercial applications of GPT-3, Sudowrite, is being marketed as a therapy for writer’s block. If you’re writing a story or essay and you find yourself stuck, you can plug the last few sentences of your work into Sudowrite, and it will generate the next few sentences, in a variety of versions. It may not give you metaphors for poetry (though it could), but it will give you some inspiration, stirring thoughts and opening possible new paths. It’s an automatic muse, a mechanical Georgie Hyde-Lees.
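
Sudowrite doesn’t publish its internals, but the workflow it describes maps naturally onto the GPT-3 completion API of the time: send the stalled passage as a prompt and ask for several alternative continuations. A minimal sketch, assuming the pre-2022 OpenAI Python library (the engine name, passage, and parameter values are illustrative):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The last few sentences of a draft where the writer is stuck.
stuck_passage = (
    "She set down the letter and looked out at the garden. "
    "The roses had gone wild over the summer."
)

# Ask GPT-3 for three alternative continuations of the passage.
response = openai.Completion.create(
    engine="davinci",   # a GPT-3 engine name from that era
    prompt=stuck_passage,
    max_tokens=60,      # roughly a few sentences
    n=3,                # number of alternative continuations
    temperature=0.8,    # higher values yield more varied suggestions
)

for i, choice in enumerate(response.choices, 1):
    print(f"--- Suggestion {i} ---")
    print(choice.text.strip())
```

The `n` and `temperature` parameters do the muse’s work here: more samples and more randomness mean more possible paths out of the block.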

Sudowrite, and GPT-3 in general, has already been used for a lot of stunts. Kevin Roose, the New York Times technology columnist, recently used it to generate a substantial portion of a review of a mediocre new book on artificial intelligence. (The title of the review was, naturally, “A Robot Wrote This Book Review.”) Commenting on Sudowrite’s output, Roose wrote, “within a few minutes, the AI was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.”

But the potential of these AI-powered automatic writers goes far beyond journalistic parlor tricks. They promise to serve as new tools for the creation of art. One of the most remarkable pieces of writing I read this year was Vauhini Vara’s story “Ghosts” in The Believer. While locked down in 2020, Vara became obsessed with GPT-3. “I sought out examples of GPT-3’s work, and they astonished me,” she writes in an introduction to her piece. “Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter — but often poetically so, almost truer than writing any human would produce.” Yeats would have understood.

Vara’s older sister contracted cancer in high school and died a few years later. The experience left Vara traumatized, and, though an accomplished writer, she had never been able to write about it. But with GPT-3 she began to find the words. “I found myself irresistibly attracted to GPT-3,” she explains, “to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them.” She began to feed GPT-3 some sentences about her sister’s illness, and the system started to weave those sentences into stories — fantastical and uncanny, but also stirring, and ultimately heartbreaking. The Believer story chronicles eight of her sessions with GPT-3. It reads as a conversation between a writer and her muse, a conversation that begins tentatively and grows richer and truer as it goes on.

At one point, after Vara confesses to feeling like “a ghost” after her sister’s death, GPT-3 continues the thought:

So I can’t describe her to you. But I can describe what it felt like to have her die. It felt like my life was an accident — or, worse, a mistake. I’d made a mistake in being born, and now, to correct it, I would have to die. I’d have to die, and someone else — a stranger — would have to live, in my place. I was that stranger. I still am.

What gives the exchange all the more resonance is the sense that, in drawing on its corpus of past human speech to generate its mysterious new pastiche, GPT-3 is expressing the pain of others who have suffered unbearable losses. Spirits are talking.

Social media as pseudo-community

In 1987, the year after the appearance of The Control Revolution, his seminal study of the role information systems play in society, James Beniger published an article called “Personalization of Mass Media and the Growth of Pseudo-Community” in the journal Communication Research. Beniger’s subject was the shift from “interpersonal communication” to “mass communication” as the basis of human relations. The shift had begun in the nineteenth century, with the introduction of high-speed printing presses and the proliferation of widely circulating newspapers and magazines; had accelerated with the arrival of broadcasting in the first half of the twentieth century; and was taking a new turn with the rise of digital media.

Beniger argued that interpersonal, or face-to-face, communication encourages the development of small, tightly knit, tightly controlled communities where individual interests are subordinate to group interests. For most of human history, society was structured along these intimate lines. Mass communication, more efficient but less intimate, encourages the development of large, loosely knit, loosely controlled communities where individual interests take precedence over group interests. As mass communication became ever more central to human experience in the second half of the twentieth century, thanks to the enormous popularity of radio and television, society restructured itself, with individualism and personal freedom becoming the governing ethos. The trend seemed to culminate in the free-wheeling, self-indulgent 1970s.

The arrival of the personal computer around 1980 put a twist in the story. By enabling mass media messages to be personalized, computers began to make mass communication feel as intimate as interpersonal communication, while also making mass communication even more efficient.* Imbuing broadcasting with an illusion of intimacy, computers expanded media’s power to structure and control human relations. Observed Beniger:

Gradually each of us has become enmeshed in superficially interpersonal relations that confuse personal with mass messages and increasingly include interactions with machines that write, speak, and even “think” with success steadily approaching that of humans. The change constitutes nothing less than a transformation of traditional community into impersonal association — toward an unimagined hybrid of the two extremes that we might call pseudo-community.

Beniger emphasized that, for broadcasters and advertisers, contriving a sense of intimacy had always been a central goal, as it served to give their programs and messages greater influence over the audience. Even during the early days of radio and TV, the performers who seemed most sincere to listeners and viewers tended to have the greatest success — whether their sincerity was real or feigned. With computer personalization, Beniger understood, individuals’ sense of personal connection with mass-media messages would strengthen. The glue of pseudo-community would be pseudo-intimacy. 
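
The machinery behind that pseudo-intimacy can be as humble as a mail merge: one mass message, automatically personalized for each recipient from whatever data has been collected about them. A toy sketch (all names and fields invented):

```python
# A small data set on individual recipients, of the kind Beniger
# saw as the fuel of personalized mass media.
recipients = [
    {"name": "Alice", "city": "Dublin", "interest": "poetry"},
    {"name": "Bob", "city": "Boston", "interest": "radio"},
]

# One mass message, written once for everyone.
mass_message = (
    "Dear {name}, as one of our most valued readers in {city}, "
    "we thought you would enjoy our new series on {interest}."
)

# The broadcast goes out personalized: each recipient receives
# what feels like an interpersonal message.
for person in recipients:
    print(mass_message.format(**person))
```

Scale the data set up from two fields per person to thousands, and the form letter becomes the personalized feed.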

Although Beniger wrote his article several years before the invention of the web and long before the arrival of social media, he was remarkably prescient about what lay ahead:

The capacity of such [digital] mass media for simulating interpersonal communication is limited only by their output technologies, computing power, and artificial intelligence; their capacity for personalization is limited only by the size and quality of data sets on the households and individuals to which they are linked.

The power of “sincerity” — today we would be more likely to use the terms “authenticity” and “relatability” — would also intensify, Beniger saw. Overwhelmed with personalized messages, people would put their trust and faith in whatever human or machine broadcaster felt most real, most genuine to them.

Mass communication skills would thereby prove as effective in influencing attitudes and behavior as would the corresponding interpersonal skills in a true “community of values.” Electorates of large nation states might even entrust mass media personalities with high public office as a consequence of this dynamic.

Beniger did not live long enough to see the rise of social media, but it seems clear he would have viewed its expansion and automation of personalized broadcasts as the fulfillment of his vision of pseudo-community. Digital media’s blurring of interpersonal and mass communication, he concluded in his article, was establishing a “new infrastructure” for societal control, on a scale far greater than was possible before. The infrastructure could be used, he wrote, “for evil or for good.”

________
*For a different take on the consequences of the blurring of personal and mass communication, see my recent New Atlantis article “How to Fix Social Media.”

Deep Fake State

In “Beautiful Lies: The Art of the Deep Fake,” an essay in the Los Angeles Review of Books, I examine the rise and ramifications of deep fakes through a review of two books, photographer Jonas Bendiksen’s The Book of Veles and mathematician Noah Giansiracusa’s How Algorithms Create and Prevent Fake News. As Bendiksen’s work shows, deep-fake technology gives artists a new tool for probing reality. As for the rest of us, the technology promises to turn reality into art.

Here’s a bit from the essay:

The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and even more disorienting effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Susan Sontag described in On Photography — to one where our bias is to take nothing as evidence.

The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?

Read it.