The informavore in its cage

Edge is featuring, in “The Age of the Informavore,” a fascinating interview with Frank Schirrmacher, the influential science and culture editor at Frankfurter Allgemeine Zeitung. “The question I am asking myself,” Schirrmacher says, “[which] arose through work and through discussion with other people, and especially watching other people, watching them act and behave and talk, [is] how technology, the Internet and the modern systems, has now apparently changed human behavior, the way humans express themselves, and the way humans think in real life … And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, it’s just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on.”

Tell me about it.

Later in the interview, Schirrmacher wonders what the effects will be as companies collect ever more behavioral data and apply ever more sophisticated predictive algorithms to it:

You have a generation — in the next evolutionary stages, the child of today — which [is adapting] to systems such as the iTunes “Genius”, which not only know which book or which music file they like, [but] which [go] farther and farther in [predicting] certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.

What will this mean for the question of free will? Because, in the bottom line, there are, of course, algorithms, who analyze or who calculate certain predictabilities … The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force — the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.

It’s very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology which is in fact a technology which is a brain technology, to put it this way, which is a technology which has to do with intelligence, which has to do with thinking, that this new technology now clashes in a very real way with the history of thought in the European way of thinking.
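The kind of prediction Schirrmacher describes is, at bottom, collaborative filtering: infer what a person will like from the recorded tastes of similar people. Here is a minimal sketch of that technique in Python, with invented users, items, and ratings; it illustrates the general idea, not the actual Genius or Google algorithm:

```python
from math import sqrt

# Invented ratings for illustration: user -> {item: rating on a 1-5 scale}
ratings = {
    "anna":  {"concert_a": 5, "album_x": 4, "book_y": 1},
    "ben":   {"concert_a": 4, "album_x": 5, "book_y": 2},
    "clara": {"concert_a": 1, "album_x": 2, "book_y": 5},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    pairs = [(similarity(user, v), r[item])
             for v, r in ratings.items()
             if v != user and item in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

# A new user who loves album_x and dislikes book_y resembles Anna and Ben,
# so her predicted rating for tonight's concert is weighted toward theirs.
ratings["dana"] = {"album_x": 5, "book_y": 1}
print(round(predict("dana", "concert_a"), 1))  # about 3.7 on the 1-5 scale
```

Scale that arithmetic up to millions of users and every logged click, and “Google will know it beforehand” stops sounding far-fetched.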

The interview has drawn many responses (including one from me), the most recent of which is from John Bargh, who heads Yale’s Automaticity in Cognition, Motivation and Evaluation Lab. He picks up on Schirrmacher’s comments on prediction and describes how recent research in brain science is opening up powerful new possibilities for manipulating human behavior:

Schirrmacher is quite right to worry about the consequences of a universally available digitized knowledge base, especially if it concerns predicting what people will do. And most especially if artificial intelligence agents can begin to search and put together the burgeoning data base about what situation (or prime) X will cause a person to do. The discovery of the pervasiveness of situational priming influences for all of the higher mental processes in humans does say something fundamentally new about human nature (for example, how tightly tied and responsive is our functioning to our particular physical and social surroundings). It removes consciousness or free will as the bottleneck that exclusively generates choices and behavioral impulses, replacing it with the physical and social world itself as the source of these impulses. …

It is because priming studies are so relatively easy to perform that this method has opened up research on the prediction and control of human judgment and behavior, ‘democratized’ it, basically, because studies can be done much more quickly and efficiently, and done well even by relatively untrained undergraduate and graduate students. This has indeed produced (and is still producing) an explosion of knowledge of the IF-THEN contingencies of human responses to the physical and social environment. And so I do worry with Schirrmacher on this score, because we [are] so rapidly building a database or atlas of unconscious influences and effects that could well be exploited by ever-faster computing devices, as the knowledge is accumulating at an exponential rate. …

More frightening to me still is Schirrmacher’s postulated intelligent artificial agents who can, as in the Google Books example, search and access this knowledge base so quickly, and then integrate it to be used in real-time applications to manipulate the target individual to think or feel or behave in ways that suit the agent’s (or its owner’s) agenda of purposes.
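In data-structure terms, the “atlas” Bargh worries about is little more than a queryable table mapping situational primes to the responses they tend to elicit, which is what would make automated lookup so easy. A toy sketch, again in Python, with invented entries and made-up strength values standing in for real findings:

```python
from dataclasses import dataclass

@dataclass
class Contingency:
    prime: str        # IF: a feature of the physical or social situation
    response: str     # THEN: the judgment or behavior it tends to elicit
    strength: float   # invented number for illustration, not a real result

# Placeholder entries loosely echoing well-known priming paradigms;
# every value here is hypothetical.
atlas = [
    Contingency("warm drink in hand", "judge a stranger as warmer", 0.3),
    Contingency("faint cleaning scent", "tidy up after eating", 0.4),
    Contingency("being in a hurry", "decline to help a passerby", 0.5),
]

def applicable(situation):
    """The lookup an automated agent would run: which known IF-THEN
    contingencies does the target's current situation trigger?"""
    hits = [c for c in atlas if c.prime in situation]
    return sorted(hits, key=lambda c: c.strength, reverse=True)

for c in applicable({"warm drink in hand", "being in a hurry"}):
    print(f"{c.prime} -> {c.response} (strength {c.strength})")
```

The step Bargh finds frightening is simply wiring such a lookup into a real-time system that also shapes the target’s environment.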

The Web has been called a “database of intentions.” The bigger that database grows, and the more deeply it is mined, the more difficult it may become to discern whether those intentions are our own or ones that have been implanted in us.

8 thoughts on “The informavore in its cage”

  1. Joe Linker

    CQ Researcher has just published a report: “Should advertisers’ collection of data on Web users be regulated?” An overview is available on their blog. With regard to control, we are reminded of Alice Cooper’s response when asked if he embedded secret messages in his songs. No, he said; he didn’t know how; but if he did, the message would be to buy more records. The question was also taken up by Lewis Carroll, in Humpty Dumpty’s conversation with Alice:

    `When I use a word,’ Humpty Dumpty said, in rather a scornful tone, `it means just what I choose it to mean — neither more nor less.’

    `The question is,’ said Alice, `whether you can make words mean so many different things.’

    `The question is,’ said Humpty Dumpty, `which is to be master — that’s all.’

  2. Nick Carr

    A postscript to my post:

    In his 1950 book The Human Use of Human Beings, cybernetics pioneer Norbert Wiener quotes from an article that a Dominican friar had published, on the topic of cybernetics, in Le Monde in 1948:

    One of the most fascinating prospects thus opened is that of the rational conduct of human affairs, and in particular of those which interest communities and seem to present a certain statistical regularity, such as the human phenomena of the development of opinion. Can’t one imagine a machine to collect this or that type of information … and then to determine as a function of the average psychology of human beings … what the most probable development of the situation might be? Can’t one even consider a State apparatus covering all systems of political decisions, either under a regime of many states distributed over the earth, or under the apparently much more simple regime of a human government of this planet? At present nothing prevents our thinking of this. We may dream of the time when the machine à gouverner may come to supply – whether for good or evil – the present obvious inadequacy of the brain when the latter is concerned with the customary machinery of politics. …

    Perhaps fortunately, the machine à gouverner is not ready for a very near tomorrow. For outside of the very serious problems which the volume of information to be collected and to be treated rapidly still put, the problems of stability of prediction remain beyond what we can seriously dream of controlling. For human processes are assimilable to games with incompletely designed rules, and above all, with the rules themselves functions of the time. The variation of the rules depends both on the effective detail of the situations engendered by the game itself, and on the system of psychological reactions of the players in the face of the results obtained at each instant. …

    All of this not only tends to complicate the degree of the factors which influence prediction, but perhaps to make radically sterile the mechanical manipulation of human situations. As far as one can judge, only two conditions here can guarantee stabilization in the mathematical sense of the term. There are, on the one hand, a sufficient ignorance on the part of the mass of players exploited by a skilled player, who moreover may plan a method of paralyzing the consciousness of the masses; or on the other, sufficient goodwill to allow one, for the sake of the stability of the game, to refer his decisions to one or a few players of the game who have arbitrary privileges. This is a hard lesson of cold mathematics, but it throws a certain light on the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan.

  3. Steve

    Nick,

    I suspect you’re going to find my forthcoming book, The Learning Layer (www.learninglayer.com), an interesting take on some of these issues from an enterprise perspective.

    Steve

  4. Linuxguru1968

    The informavore isn’t in a cage, he’s in a maze. Like the proverbial rat in a maze, his behavior will always be predictable because all the paths, both the dead ends and the ways out, are laid out in advance. But, unlike the rat, the informavore can jump out of the maze at any time. Just because we can do something doesn’t mean we should. It is not inevitable that all our actions will be recorded, categorized, and used to shape our destiny. The human rat can choose not to play the game.

  5. Matthieu Hug

    I really like this article, which deals with unusually fundamental questions for a blog post. Is there a risk that information technology obliterates free will? More bluntly, how much of his/her humanity does the informavore surrender? I’ll humbly try to contribute to this discussion.

    I’d like to emphasize a very important statistical or “group effect” when considering free thought and free will, and their control. We’ve all experienced cases where, within a group, we behave in ways we aren’t especially proud of afterwards: it may have been with a group of friends in college, or while attending a sports game or a political rally. The outcomes may be fairly benign or monstrous: shouting insults in the faces of people we don’t know but who support the other team, or kicking someone to death during Kristallnacht in Germany in 1938. The gravity differs but the principle is the same: when in a group we usually want to “belong to” the group, and as such become enrolled in a warm and comfortable “friendship” where we do not need to think for ourselves, only comply with the group’s rules, for better or for worse. Simply because the warmth of the group is cosy and comfortable. This was recently wonderfully illustrated by the philosopher Alain Finkielkraut (Un Coeur Intelligent, apparently not yet translated); but there are plenty of examples in the ignominious history of the 20th century, as well as in literature from Orwell to Huxley and Bradbury. The feeling that we belong to a group more often than not obliterates free will, free thought, and ultimately intelligence: we are far more susceptible to manipulation when we feel part of a larger group than when we feel like individuals.

    This is the kind of behavioral analysis a huge information database can support: it can predict statistically what an individual should like, do, think … based on the reactions of groups, and on our (dangerous) tendency to lose ourselves in a group. But there are exceptions: there are people who stand up. And this can never be predicted: these are events of probability 0. They happen but have no statistical existence. Free thought is always there; we just surrender it very easily. Gandhi, Martin Luther King, Mandela, and Jean Moulin, for instance, stood up: they were unique, and the way they stood up and fought was beyond predictability, because they were individual free thinkers rather than group members.

    So it’s not that information technology is a danger to free will: it’s a tool that may help manipulate groups. The danger is that we like to lose ourselves in groups, be it the Taliban or white supremacists, a company or a neighborhood, a sports team or a fascist summer camp. But free will will always exist: it’s just tremendously difficult. We can read beyond Google’s or Amazon’s suggestions; we can know more about the world than what CNN or Fox News tells us (especially easy in the case of Fox News ;-); we can disagree with an unjustified war; we can refuse to follow our group’s mainstream beliefs. It’s just difficult: free thought is complex, and group belief is often a simplistic relief. Huxley and Orwell, among others, told tales of a man who stands up: he is always alone; free will is the experience of loneliness. That’s why the informavore is in danger of losing his/her free will: he/she wants to belong; he/she doesn’t want to be alone.

    So, indeed, information control is the promised land of every kind of fascism. But it’s in our hands to refuse it.

  6. Joe McCarthy

    Interesting post and commentary!

    The post, and the comment by Matthieu Hug on the “group effect”, reminds me of a New Scientist article a few years ago by Liz Else and Sherry Turkle on Living Online: I’ll Have to Ask my Friends, in which they note:

    When technology brings us to the point where we’re used to sharing our thoughts and feelings instantaneously, it can lead to a new dependence, sometimes to the extent that we need others in order to feel our feelings in the first place.

    In an earlier elaboration on issues relating to self-reflection vs. self-expression, I’d further noted:

    According to Turkle, the increasing prevalence of talk culture, wherein “people share the feeling to see if they have the feeling”, comes at the expense of introspection and probing more deeply into complex thoughts and emotions. Questioning society’s tendency toward breathless techno-enthusiasm, with the increasing means available to quickly communicate our state, she champions self-reflection: “having an emotion, experiencing it, taking one’s time to think it through and understand it, but only sometimes electing to share it.”

    James Ogilvy, author of Living Without a Goal, raises some issues that suggest the difference between self-reflection and self-expression may not be significant:

    The self is a process of reflection, one that lacks a substantial, originary core. … Hegel put it this way: “Self-consciousness exists in itself and for itself, in that, and by the fact that, it exists for another self-consciousness; that is to say, it is only by being acknowledged or ‘recognized’”. More simply, there is a certain Tinkerbell effect for self-consciousness. You remember Peter Pan’s little sidekick, whose life and light threatened to flicker out unless the audience clapped. We’re all a little like that.

    … self-love must finally spread itself across the social pattern of reflections that constitute the self. When privacy goes public you see the self as a pattern of relations of mutual recognition. The celebration of self becomes a song for the ears of the other, not for the sake of self-aggrandizement but for the benefit of shared acts of artful self-creation.

    I know that the focus in this article is more on prediction (and manipulation) than reflection (or expression), but I do believe that there is an inverse correlation between level of self-reflection and level of manipulability (FWIW, I also believe in free will, if only for the pragmatic societal effects it entails).

    In any case, the potentially new insidious twist illuminated by the current article is that algorithms could supplant people as mirrors through which breathless techno-enthusiasts validate their thoughts and feelings.

  7. Petter

    What’s the opposite of a “vore”, e.g. an informavore? Much is written about the effect of all the information we’re taking in, but what about all the compulsive expressing and outpouring that’s going on (like here and now)? Surely that can’t be healthy? How can we learn to hold our peace? Help me! When I have a thought now I look around for some place – a tweet, a blog, a comments box – to express it. It’s become an internalized instinct, not unlike the gut-felt expectation of being able to see, again and again, the finest feints and goals at a football match.

  8. Alexandra

    In my 12th-grade history class we read “The Big Switch”. We were asked to write a “blog entry” on our class’s blog (http://blogs.milkenschool.org/america3point0/2009/12/01/the-big-switch/) about concepts in the book and ideas we found on Carr’s blog.

    In Nicholas Carr’s “The Big Switch”, a portion of the chapter “iGod” is dedicated to exposing Google founders Larry Page and Sergey Brin’s vision, or hope, for what is to come technologically for Google. Page and Brin hope that Google will someday be connected to our minds, so that we will have the “entirety of the world’s information” right in our brains. In other words, humans will have unlimited access to the world’s information, an unlimited surplus of “Artificial Intelligence” (A.I.). We, as humans, will be able to interact directly with computers by merely thinking; our thoughts will be programmable, much like the “thoughts” of computers. Carr presents a world where, in the words of Bill Gates, “the blending of computers and people is inevitable.” But Carr notes that to most of us the “desire of AI advocates to merge computers and people, to erase and blur the boundary between man and machine, is troubling.” According to inventor and author Ray Kurzweil, by the mid-2040s the advances in AI may be so great that there will no longer be a distinction between the biological and the mechanical, or between physical and virtual reality. Although there is no certainty in predicting what’s to come, based on the facts and occurrences Carr references it seems extremely plausible (I buy it) that this is the kind of world we are fated to live in.

    And thus Kurzweil addressed a major concern: many sense that Artificial Intelligence will be a threat to our integrity as freethinking individuals. My fear was exposed. The idea of having a little “Google chip” in my head, which answers all my questions as they are thought, is frightening. I am certainly not comfortable being a walking, talking half-computer, half-human. How will I know I am not being fed false information, brainwashed even? How will I know that when I say or do something it is really an execution of my own free will, of my own true desires? Does the agenda of AI advocates undermine the values expressed in the Declaration of Independence? According to the Declaration, we are endowed with the inalienable right to pursue happiness. How will I know that what I am really pursuing is what I want, and not what the computer wants?

    The chapter “iGod” presents the facts, or the assumptions we expect to see in the future (e.g., in 2004 Microsoft was granted a patent for transmitting power and data using the human body). Yet I wanted to get more of Carr’s opinion on AI, and to see if he has the same concerns as I do. After reading Carr’s “Is Google Making Us Stupid?” it was clear he was sure Google and the Internet are changing how we read, almost ruining our ability to read more than two or three pages without losing focus. A few weeks ago Carr posted a blog entry called “The Informavore in its Cage”, which notes key points from an interview (done by Edge) with Frank Schirrmacher. Carr points out that the web has been called a “database of intentions”. He believes that the deeper we mine this database, the more difficult it will be to decide whether our intentions are truly our own or ones that have been implanted in us (an idea which echoes my own concerns). Carr also includes a link to his personal reaction to Schirrmacher’s interview. In his response Carr agrees with Schirrmacher; he states that in order to “keep up with computers” we must think like computers (an opinion both men hold). He also acknowledges my concern: how will we remain individuals?

    Nick Bilton says we will only “consume information that makes us happy, fulfills us, and leave the rest by the wayside.” But Carr is far more skeptical. He responds, “Maybe.” Or maybe we’ll be like a “school of fish” carried along in the Web’s currents, constantly talking and sharing our every thought, telling ourselves we are individuals.

    This statement is only Carr’s response to what’s happening on the web today. If we are already becoming “shallow” and “narrow” versions of ourselves online right now, what will happen, only a couple of years down the line, when we have an Artificial Intelligence chip plugged into our brains? How much more of our individualism will we lose?

    1. “Is Google Making Us Stupid?”

    https://www.roughtype.com/archives/2008/08/is_google_makin.php

    2. “The Informavore in its Cage”

    https://www.roughtype.com/archives/2009/11/be_everywhere_n_1.php

    3. Carr’s response to “The Age of the Informavore” (scroll down to Carr)

    http://www.edge.org/3rd_culture/schirrmacher09/schirrmacher09_index.html#nc
