Different people will set the line between the private and the public in different places. Different societies will as well. As Evgeny Morozov writes in an excellent essay in the new issue of Technology Review, the growing ability of corporations, governments, and individuals to use computers to collect and analyze data on personal behavior has for many years now created social pressure to move the line ever more toward the public, squeezing the realm of the private. If some public good can be attributed to, or even anticipated from, an expansion in the collection of personal data — an increase in efficiency or safety, say — it becomes difficult to argue against that expansion. Privacy advocates become marginalized, as they attempt to defend an abstract good against a practical and measurable one.
As the trend continues, the outputs of data-analysis programs begin to shape public policy. What’s been termed “algorithmic regulation” takes the place of public debate. Policy decisions, and even personal ones, start to be automated, and the individual begins to be disenfranchised. Morozov quotes from a perceptive 1985 lecture by Spiros Simitis: “Where privacy is dismantled, both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.” The pursuit of transparency, paradoxically, ends up making society’s workings more opaque to its citizens. Comments Morozov:
In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”
Simitis offered a particularly prescient assessment of the kind of polity that would ultimately emerge from this trend:
Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.
Morozov goes on to explore the insidious effects of what he terms “the invisible barbed wire of big data,” and he argues, compellingly, that those effects can be tempered only through informed political debate, not through technological fixes.
I have only one quibble with Morozov’s argument. He declares that “privacy is not an end in itself” but rather “a means of achieving a certain ideal of democratic politics.” That strikes me as an overstatement. In claiming that the private can only be justified by its public benefits, Morozov displays the sensibility that he criticizes. I agree wholeheartedly that privacy is a means to a social end — to an ideal of democratic politics — but I think it is also an end in itself, or, to be more precise, it is a means to important personal as well as public ends. A sense of privacy is essential to the exploration and formation of the self, just as it’s essential to civic participation and political debate.
Photo by Alexandre Dulaunoy.
I’m open to the possibility of privacy being an end in itself, though I have great difficulty conceptualizing quite why this might be the case. Obviously, privacy is not an absolute – there are certainly situations that call for more or less of it. This leads me to Morozov’s line of reasoning: that privacy is instrumental in achieving other ends, and those ends are the factors determining, on a case-by-case basis, when privacy should be more or less respected.
This raises two additional concerns: (1) Will politicians, legislators, and regulators have the necessary understanding of these sophisticated data-analysis systems to use them correctly? (2) The potential (socioeconomic) damage that could come from a system failure (especially one that goes long unnoticed) could be immense.
In some ways, the first concern is still likely to net out positively. In an article published in The Atlantic, the former Director of Obama’s Office of Management and Budget (who also directed the Congressional Budget Office) and the director of the White House Domestic Policy Council under GWB wrote: “Based on our rough calculations, less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely… and that less than $1 out of every $1,000 that the government spends on health care this year will go toward evaluating whether the other $999-plus actually works.” In this light, any move toward actual analysis is likely to improve governance – even if the analysis is somewhat poorly understood.
The latter is more alarming. To quote the UK’s Government Office of Science: “(i)t seems likely, or at least plausible, that major advanced economies are becoming increasingly reliant on large-scale complex IT systems (LSCITS): the complexity of these LSCITS is increasing rapidly; their socio-economic criticality is also increasing rapidly; our ability to manage them, and to predict their failures before it is too late, may not be keeping up. That is, we may be becoming critically dependent on LSCITS that we simply do not understand and hence are simply not capable of managing.”
In either case, the implications of this trend will necessitate electing (or appointing) a different type of government. Perhaps more actuaries and fewer antiquarians.
“A sense of privacy is essential to the exploration and formation of the self, just as it’s essential to civic participation and political debate.”
A sense of privacy is secondary to the formation of the self. Without the prior of the self, there’s nothing that could have the quality of being private.
The problem isn’t that contemporary technology is redefining privacy, but that it’s telling us what our “self” is with a different set of parameters than the technology which preceded it.
Netflix has an algorithm which determines what choices it shows me based on its determination of who I am. There is no privacy conflict there, until it shares that determination with third parties. The whole privacy thing is an after-effect of the changing modes of subjection.
“privacy” stands in as a proxy for the more difficult to comprehend process by which our selves come into being.
The political problem is not that the privacy of one’s “vote” is imperative, but that a particular concept of self is prerequisite to voting in a liberal democracy, and that self has the attribute of being private.
To suss out just what the self is and how it’s constructed socially and privately is a whole rabbit hole in itself. I take Judy’s excellent point above, but we should also keep in mind that our definition of the self (an individual? a consciousness? a body? a “bundle of sensations”?) determines when and how it obtains. I’m reminded of the late Rick Roderick’s excellent “The Self Under Siege” lectures. Now there’s something that would do Morozov some good.
I can’t understand why Morozov is so insistent that his particular socio-political angle is the only way to appropriately critique technology. Nick’s quibble here is a perfect example of what such tunnel vision can overlook. He seems to take any attention paid to the individual, psychological, philosophical, or neuroscientific ramifications of media as a deliberate diversion from the political. The only reason I can think of is that this windmill swinging allows his frequent book reviews to reliably be the kind of bullying invectives he seems to enjoy writing. The way he makes straw men out of Lanier and Lessig in that article is typical; he’s done the same in past articles with Borgmann, Ellul, Korzybski, and plenty of others. Does he genuinely purport to be the only writer on technology worth reading?
It seems impossible to know exactly or completely, as Judy above says, “the process by which our selves come into being.” But some of that process, as people like Anthony Storr and Susan Cain have pointed out, has to do with ourselves alone as much as with or in response to others, and the algorithms I am confronted with on a daily basis seem only to acknowledge one half of that equation. Thus I’m not completely surprised at how often, to lift an example from Judy’s comment above, Netflix’s algorithm seems to get my tastes completely wrong and mainly recommends films I wouldn’t be caught dead watching.
Where Morozov is concerned, I’m totally in agreement with Daniel above, and I often wonder the same thing when I read him: Isn’t there anyone out there as smart as you, Evgeny? Still, I’m glad for his existence, since the value of big data has been, at least in my circles (educational), so enthusiastically and unquestioningly embraced that I am glad to have him say in loud and even “bullying” tones, “Hold on a minute, folks. There’s a flip side here that you are not attending to.”
“He seems to take any attention paid to the individual, psychological, philosophical, or neuro-scientific ramifications of media as a deliberate diversion from the political.”
I suspect this reflects his personal outlook on things while also allowing him to streamline his arguments. But, like all idées fixes, it is limiting.
“He seems to take any attention paid to the individual, psychological, philosophical, or neuro-scientific ramifications of media as a deliberate diversion from the political.”
They are a diversion from the political, to the extent that those realms are understood as being domains of our selves or bodies alone, which is most likely not the case.
The problem with Netflix isn’t merely that it gets my tastes wrong, or has dubious categories of taste, but that it reinforces the idea that I have tastes which belong to me, which I can share with others whose tastes belong to them, if only they could get the algorithm right.
I find this post very relevant to a Digital Ethics class I am currently taking, and while reading I couldn’t help but compare your thoughts to those in chapters of Rebecca MacKinnon’s book ‘Consent of the Networked’. When you made the point that the ease with which governments and corporations can use the internet to collect personal data is increasing, and therefore makes it an increasingly public domain, I thought immediately of MacKinnon’s discussion of the Chinese government’s use of the internet. “Where privacy is dismantled, both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.” This seems particularly true in the case of Chinese citizens and internet users, because there is such a prevalent theme of networked authoritarianism, authoritarian deliberation, and digital bonapartism. I am unsure, however, about the stance you are taking on technological development. While MacKinnon takes a Constructionist viewpoint, certain things in your post make it seem like you take more of a Determinist viewpoint. When you say that technological processing has become an ideal means of adapting society into a standardized and compliant ‘model patient’, are you implying that this course was inevitable because of the nature of technology itself? Or are you implying that those in control of the internet resources we use chose to make it as such?
Fhalyshia,
You’re attributing to me words and thoughts that actually belong to Simitis. If I’m reading him correctly, I would say that his view of technological progress lies somewhere between that of the strict constructionist and that of the strict determinist, and that’s also where my view lies. You might want to explore Thomas Hughes’s idea of “technological momentum,” if you haven’t already.
Nick
Good take on this:
http://marginalutilityannex.wordpress.com/2013/11/06/the-self-is-not-a-territory/