This review of Shoshana Zuboff’s The Age of Surveillance Capitalism appeared originally in the Los Angeles Review of Books.
1. The Resurrection
We sometimes forget that, at the
turn of the century, Silicon Valley was in a funk, economic and psychic. The
great dot-com bubble of the 1990s had imploded, destroying vast amounts of investment
capital along with the savings of many Americans. Trophy startups like Pets.com,
Webvan, and Excite@Home, avatars of the so-called New Economy, were punch lines.
Disillusioned programmers and entrepreneurs were abandoning their Bay Area bedsits
and decamping. Venture funding had dried up. As a business proposition, the
information superhighway was looking like a cul-de-sac.
Today, less than 20 years on, everything
has changed. The top American internet companies are among the most profitable
and highly capitalized businesses in history. Not only do they dominate the
technology industry but they have much of the world economy in their grip. Their
founders and early backers sit atop Rockefeller-sized fortunes. Cities and
states court them with billions of dollars in tax breaks and other subsidies. Bright
young graduates covet their jobs. Along with their financial clout, the
internet giants hold immense social and cultural sway, influencing how all of
us think, act, and converse.
Silicon Valley’s Phoenix-like resurrection
is a story of ingenuity and initiative. It is also a story of callousness, predation,
and deceit. Harvard Business School professor emerita Shoshana Zuboff argues in
her new book that the Valley’s wealth and power are predicated on an insidious,
essentially pathological form of private enterprise—what she calls
“surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now
spreading throughout the economy, surveillance capitalism uses human life as
its raw material. Our everyday experiences, distilled into data, have become a
privately owned business asset used to predict and mold our behavior, whether we’re
shopping or socializing, working or voting.
Zuboff’s fierce indictment of the big internet firms goes beyond the usual condemnations of privacy violations and monopolistic practices. To her, such criticisms are sideshows, distractions that blind us to a graver danger: By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
Capitalism has always been a
fraught system. Capable of both tempering and magnifying human flaws,
particularly the lust for power, it can expand human possibility or constrain
it, liberate people or oppress them. (The same can be said of technology.) Under
the Fordist model of mass production and consumption that prevailed for much of
the twentieth century, industrial capitalism achieved a relatively benign balance
among the contending interests of business owners, workers, and consumers. Enlightened
executives understood that good pay and decent working conditions would ensure
a prosperous middle class eager to buy the goods and services their companies
produced. It was the product itself — made by workers, sold by companies,
bought by consumers — that tied the interests of capitalism’s participants together.
Economic and social equilibrium was negotiated through the product.
By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers. But, as Zuboff makes clear, this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders. Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.
To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms. In contrast to the businesses of the industrial era, whose interests were by necessity entangled with those of the public, internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
2. The Map
It all began innocently. In the
1990s, before they founded Google, Larry Page and Sergey Brin were
computer-science students who shared a fascination with the arcane field of
network theory and its application to the internet. They saw that by scanning
web pages and tracing the links between them, they would be able to create a
map of the net with both theoretical and practical value. The map would allow them
to measure the importance of every page, based on the number of other pages
that linked to it, and that data would, in turn, provide the foundation for a powerful
search engine. Because the map could also be used to record the routes and choices
of people as they traveled through the network, it would provide a finely
detailed account of human behavior.
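The link-counting idea at the heart of Page and Brin’s map can be illustrated with a toy version of the PageRank algorithm. This is the generic textbook formulation, not Google’s production system, and the four-page “web” is invented for illustration:

```python
# Toy PageRank: a page's importance is the chance that a random surfer
# lands on it, following links with probability d and jumping to a
# random page otherwise.
links = {  # hypothetical four-page web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from uniform importance
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}   # random-jump share for everyone
        for page, outlinks in links.items():
            share = d * rank[page] / len(outlinks)
            for target in outlinks:             # each link passes on importance
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# "C" comes out on top: three of the four pages link to it.
print(max(ranks, key=ranks.get))
```

The same structure that ranks pages also records which links people follow, which is why the map doubled as a behavioral ledger.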
In Google’s early days, Page and
Brin were wary of exploiting the data they collected for monetary gain, fearing
it would corrupt their project. They limited themselves to using the
information to improve search results, for the benefit of users. That changed
after the dot-com bust. Google’s once-patient investors grew restive, demanding
that the founders figure out a way to make money, preferably lots of it. Under
pressure, Page and Brin authorized the launch of an auction system for selling
advertisements tied to search queries. The system was designed so that the
company would get paid by an advertiser only when a user clicked on an ad. This
feature gave Google a huge financial incentive to make accurate predictions
about how users would respond to ads and other online content. Even tiny increases
in click rates would bring big gains in income. And so the company began
deploying its stores of behavioral data not for the benefit of users but to aid
advertisers — and to juice its own profits. Surveillance capitalism had arrived.
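The economics behind that incentive are easy to see in miniature. In a pay-per-click auction of this general kind, expected revenue per impression is the advertiser’s bid times the predicted probability of a click, so even a small improvement in click prediction flows straight to the bottom line. The numbers and the ranking rule below are illustrative, not Google’s actual formula:

```python
# Illustrative pay-per-click economics: revenue per impression is
# bid * P(click), so better click prediction means more money.
ads = [
    {"advertiser": "X", "bid": 2.00, "predicted_ctr": 0.010},
    {"advertiser": "Y", "bid": 0.50, "predicted_ctr": 0.050},
]

def expected_revenue(ad):
    return ad["bid"] * ad["predicted_ctr"]

# Rank ads by expected revenue rather than raw bid: the low-bid ad wins
# because users are predicted to click it five times as often.
best = max(ads, key=expected_revenue)
print(best["advertiser"], expected_revenue(best))  # Y wins: 0.50 * 0.05 = 0.025 vs 0.02
```

Under this arithmetic, the accuracy of the click prediction matters as much as the bid itself, which is why behavioral data became the business.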
Google’s business now hinged on
what Zuboff calls “the extraction imperative.” To improve its predictions, it had
to mine as much information as possible from web users. It aggressively expanded
its online services to widen the scope of its surveillance. Through Gmail, it
secured access to the contents of people’s emails and address books. Through
Google Maps, it gained a bead on people’s whereabouts and movements. Through
Google Calendar, it learned what people were doing at different moments during
the day and whom they were doing it with. Through Google News, it got a readout
of people’s interests and political leanings. Through Google Shopping, it opened
a window onto people’s wish lists, brand preferences, and other material
desires. The company gave all these services away for free to ensure they’d be used
by as many people as possible. It knew the money lay in the data.
Once it embraced surveillance as the
core of its business, Google changed. Its innocence curdled, and its idealism
became a means of obfuscation.
Even as its army of PR agents and lobbyists continued to promote a cuddly Nerds-in-Toyland image for the firm, the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omertà backed up by stringent nondisclosure agreements. Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors. As one Google executive quoted by Zuboff put it, “Larry [Page] opposed any path that would reveal our technological secrets or stir the privacy pot and endanger our ability to gather data.”
As networked computers came to mediate more and more of people’s everyday lives, the map of the online world created by Page and Brin became far more lucrative than they could have anticipated. Zuboff reminds us that, throughout history, the charting of a new territory has always granted the mapmaker an imperial power. Quoting the historian John B. Harley, she writes that maps “are essential for the effective ‘pacification, civilization, and exploitation’ of territories imagined or claimed but not yet seized in practice. Places and people must be known in order to be controlled.” An early map of the United States bore the motto “Order upon the Land.” Should Google ever need a new slogan to replace its original, now-discarded “Don’t be evil,” it would be hard-pressed to find a better one than that.
3. The Heist
Zuboff opens her book with a look
back at a prescient project from the year 2000 on the future of home automation
by a group of Georgia Tech computer scientists. Anticipating the arrival of
“smart homes,” the scholars described how a mesh of environmental and wearable sensors,
linked wirelessly to computers, would allow all sorts of domestic routines,
from the dimming of bedroom lights to the dispensing of medications to the
entertaining of children, to be programmed to suit a house’s occupants.
Essential to the effort would be the processing of intimate data on people’s habits, predilections, and health. Taking it for granted that such information should remain private, the researchers envisaged a leak-proof “closed loop” system that would keep the data within the home, under the purview and control of the homeowner. The project, Zuboff explains, reveals the assumptions about “datafication” that prevailed at the time: “(1) that it must be the individual alone who decides what experience is rendered as data, (2) that the purpose of the data is to enrich the individual’s life, and (3) that the individual is the sole arbiter of how the data are put to use.”
What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information. It turned the details of the lives of millions and then billions of people into its own property. The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
Google conducted its great data
heist under the cover of novelty. The web was an exciting frontier — something
new in the world — and few people understood or cared about what they were
revealing as they searched and surfed. In those innocent days, data was there
for the taking, and Google took it. The public’s naivete and apathy were only
part of the story, however. Google also benefited from decisions made by
lawmakers, regulators, and judges — decisions that granted internet companies
free use of a vast taxpayer-funded communication infrastructure, relieved them of
legal and ethical responsibility for the information and messages they
distributed, and gave them carte blanche to collect and exploit user data.
Consider the terms-of-service
agreements that govern the division of rights and the delegation of ownership online.
Non-negotiable, subject to emendation and extension at the company’s whim, and
requiring only a casual click to bind the user, TOS agreements are parodies of
contracts, yet they have been granted legal legitimacy by the courts. Law
professors, writes Zuboff, “call these ‘contracts of adhesion’ because they
impose take-it-or-leave-it conditions on users that stick to them whether they
like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped
Google and other firms commandeer personal data as if by fiat.
The bullying style of TOS
agreements also characterizes the practice, common to Google and other
technology companies, of threatening users with a loss of “functionality”
should they try to opt out of data sharing protocols or otherwise attempt to escape
surveillance. Anyone who tries to remove a pre-installed Google app from an
Android phone, for instance, will likely be confronted by a vague but menacing
warning: “If you disable this app, other apps may no longer function as
intended.” This is a coy, high-tech form of blackmail: “Give us your data, or
the phone dies.”
In pulling off its data grab,
Google also benefited from the terrorist attacks of September 11, 2001. As much
as the dot-com crash, the horrors of 9/11 set the stage for the rise of
surveillance capitalism. Zuboff notes that, in 2000, members of the Federal
Trade Commission, frustrated by internet companies’ lack of progress in
adopting privacy protections, began formulating legislation to secure people’s
control over their online information and severely restrict the companies’
ability to collect and store it. It seemed obvious to the regulators that
ownership of personal data should by default lie in the hands of private
citizens, not corporations. The 9/11 attacks changed the calculus. The
centralized collection and analysis of online data, on a vast scale, came to be
seen as essential to national security. “The privacy provisions debated just
months earlier vanished from the conversation more or less overnight,” Zuboff
writes.
Google and other Silicon Valley
companies benefited directly from the government’s new stress on digital
surveillance. They earned millions through contracts to share their data
collection and analysis techniques with the National Security Agency and the
Central Intelligence Agency. But they also benefited indirectly. Online
surveillance came to be viewed as normal and even necessary by politicians, government
bureaucrats, and the general public. One of the unintended consequences of this
uniquely distressing moment in American history, Zuboff observes, was that “the
fledgling practices of surveillance capitalism were allowed to root and grow
with little regulatory or legislative challenge.” Other possible ways of
organizing online markets, such as through paid subscriptions for apps and
services, never even got a chance to be tested.
What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not. “Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses,” explained Supreme Court Justice William O. Douglas in a 1967 opinion. “Those who wrote the Bill of Rights believed that every individual needs both to communicate with others and to keep his affairs to himself. That dual aspect of privacy means that the individual should have the freedom to select for himself the time and circumstances when he will share his secrets with others and decide the extent of that sharing.”
Google and other internet firms
usurp this essential freedom. “The typical complaint is that privacy is eroded,
but that is misleading,” Zuboff writes. “In the larger societal pattern,
privacy is not eroded but redistributed . . . . Instead of people having the
rights to decide how and what they will disclose, these rights are concentrated
within the domain of surveillance capitalism.” The transfer of decision rights
is also a transfer of autonomy and agency, from the citizen to the corporation.
4. The Script
Fearing Google’s expansion and coveting
its profits, other internet, media, and communications companies rushed into the
prediction market, and competition for personal data intensified. It was no
longer enough to monitor people online; making better predictions required that
surveillance be extended into homes, stores, schools, workplaces, and the
public squares of cities and towns. Much of the recent innovation in the tech
industry has entailed the creation of products and services designed to vacuum
up data from every corner of our lives. There are the voice assistants like Alexa and Cortana,
the smart speakers like Amazon Echo and Google Home, the wearable computers
like Fitbit and Apple Watch. There are the navigation, banking, and health apps
installed on smartphones and the new wave of automotive media and telematics systems
like CarPlay, Android Auto, and Progressive’s Snapshot. And there are the myriad
sensors and transceivers of smart homes, smart cities, and the so-called
internet of things. Big Brother would be impressed.
But spying on the populace is not the end game. The real
prize lies in figuring out ways to use the data to shape how people think and
act. “The best way to predict the future is to invent it,” the computer
scientist Alan Kay once observed. And the best way to predict behavior is to
script it.
Google realized early on that the internet allowed market research to be conducted on a massive scale and at virtually no cost. Every click could become part of an experiment. The company used its research findings to fine-tune its sites and services. It meticulously designed every element of the online experience, from the color of links to the placement of ads, to provoke the desired responses from users. But it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react. The company rolled out its now ubiquitous “Like” button, for example, after early experiments showed it to be a perfect operant-conditioning device, reliably pushing users to spend more time on the site, and share more information.
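The click-level experimentation described here is, in essence, A/B testing run at web scale. A minimal sketch of the statistics involved, with invented traffic numbers standing in for any real experiment:

```python
import math

# Hypothetical A/B test: did a redesigned link (variant B) draw more
# clicks than the original (variant A)? All counts are invented.
clicks_a, views_a = 1_100, 100_000   # 1.10% click rate
clicks_b, views_b = 1_250, 100_000   # 1.25% click rate

def z_score(c1, n1, c2, n2):
    """Two-proportion z-test statistic for the difference in click rates."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = z_score(clicks_a, views_a, clicks_b, views_b)
# |z| > 1.96 means the difference is significant at the 5% level,
# so variant B would be rolled out to everyone.
print(round(z, 2), abs(z) > 1.96)
```

At the scale of billions of users, even a fraction of a percentage point registers as significant, which is what makes continuous experimentation on live audiences so attractive — and, in Zuboff’s telling, so troubling.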
Zuboff describes a revealing and in
retrospect ominous Facebook study that was conducted during the 2010 U.S. congressional
election and published in 2012 in Nature
under the title “A 61-Million-Person Experiment in Social Influence and
Political Mobilization.” The researchers, a group of data scientists from
Facebook and the University of California at San Diego, manipulated
voting-related messages displayed in Facebook users’ news feeds on election day
(without the users’ knowledge). One set of users received a message encouraging
them to vote, a link to information on poll locations, and an “I Voted” button.
A second set saw the same information along with photos of friends who had
clicked the button.
The researchers found that seeing the
pictures of friends increased the likelihood that people would seek information
on polling places and end up clicking the “I Voted” button themselves. “The
results show,” they reported, “that [Facebook] messages directly influenced
political self-expression, information seeking and real-world voting behaviour
of millions of people.” Through a subsequent examination of actual voter records,
the researchers estimated that, as a result of the study and its “social
contagion” effect, at least 340,000 additional votes were cast in the election.
Nudging people to vote may seem praiseworthy,
even if done surreptitiously. What the study revealed, though, is how even very
simple social-media messages, if carefully designed, can mold people’s opinions
and decisions, including those of a political nature. As the researchers put
it, “online political mobilization works.” Although few heeded it at the time,
the study provided an early warning of how foreign agents and domestic
political operatives would come to use Facebook and other social networks in
clandestine efforts to shape people’s views and votes. Combining rich
information on individuals’ behavioral triggers with the ability to deliver precisely
tailored and timed messages turns out to be a recipe for behavior modification
on an unprecedented scale.
To Zuboff, the experiment and its aftermath carry an even broader lesson, and a grim warning. All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.” This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists. “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
Behavior modification is the thread that ties today’s
search engines, social networks, and smartphone trackers to tomorrow’s
facial-recognition systems, emotion-detection sensors, and artificial-intelligence
bots. What the industries of the future will seek to manufacture is the self.
5. The Bargain
The
Age of Surveillance Capitalism is a long, sprawling book, but there’s a
piece missing. While Zuboff’s assessment of the costs that people incur under
surveillance capitalism is exhaustive, she largely ignores the benefits people
receive in return — convenience, customization, savings, entertainment, social connection,
and so on. The benefits can’t be dismissed as illusory, and the public can no
longer claim ignorance about what’s sacrificed in exchange for them. Over the
last two years, the press has uncovered one scandal after another involving
malfeasance by big internet firms, Facebook in particular. We know who we’re dealing
with.
This is not to suggest that our
lives are best evaluated with spreadsheets. Nor is it to downplay the abuses
inherent to a system that places control over knowledge and discourse in the
hands of a few companies that have both incentive and means to manipulate what
we see and do. It is to point out that a full examination of surveillance
capitalism requires as rigorous and honest an accounting of its boons as of its
banes.
In the choices we make as consumers
and private citizens, we have always traded some of our autonomy to gain other
rewards. Many people, it seems clear, experience surveillance capitalism less as
a prison, where their agency is restricted in a noxious way, than as an
all-inclusive resort, where their agency is restricted in a pleasing way. Zuboff
makes a convincing case that this is a short-sighted and dangerous view — that
the bargain we’ve struck with the internet giants is a Faustian one — but her
case would have been stronger still had she more fully addressed the benefits
side of the ledger.
The book has other, more cosmetic
flaws. Zuboff is prone to wordiness and hackneyed phrasing, and she at times
delivers her criticism in overwrought prose that blunts its effect. A less
tendentious, more dispassionate tone would make her argument harder for Silicon
Valley insiders and sympathizers to dismiss. The book
is also overstuffed. Zuboff feels compelled to make the same point in a dozen different
ways when a half dozen would have been more than sufficient. Here, too,
stronger editorial discipline would have sharpened the message.
Whatever its imperfections, The Age of Surveillance Capitalism is an original and often brilliant work, and it arrives at a crucial moment, when the public and its elected representatives are at last grappling with the extraordinary power of digital media and the companies that control it. Like another recent masterwork of economic analysis, Thomas Piketty’s 2013 Capital in the Twenty-First Century, the book challenges assumptions, raises uncomfortable questions about the present and future, and stakes out ground for a necessary and overdue debate. Shoshana Zuboff has aimed an unsparing light onto the shadowy new landscape of our lives. The picture is not pretty.