An android dreams of automation


Google’s Android guru, Sundar Pichai, provides a peek into the company’s conception of our automated future:

“Today, computing mainly automates things for you, but when we connect all these things, you can truly start assisting people in a more meaningful way,” Mr. Pichai said. He suggested a way for Android on people’s smartphones to interact with Android in their cars. “If I go and pick up my kids, it would be good for my car to be aware that my kids have entered the car and change the music to something that’s appropriate for them,” Mr. Pichai said.

What’s illuminating is not the triviality of Pichai’s scenario — that billions of dollars might be invested in developing a system that senses when your kids get in your car and then seamlessly cues up “Baby Beluga” — but what the urge to automate small, human interactions reveals about Pichai and his colleagues. With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.

I guess it’s no surprise that what Pichai expresses is a robot’s view of technology in general and automation in particular — mindless, witless, joyless; obsessed with productivity, oblivious to life’s everyday textures and pleasures. But it is telling. The question is not what can be automated but what should be automated.

Image: “Communicating with the Beluga” by Bob.

33 thoughts on “An android dreams of automation”

  1. Van Stranden

    Seth, you’re moving the goal posts. Now your challenge seems to be that we should mention a technology that was a source of cultural anxiety, but which practically everybody welcomed. Pretty meaningless challenge, I would say.

    Also, why do you keep juxtaposing Pichai’s idea with parents singing to their children? Nick said: “…. the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car.” This is not about technology vs. pre-technology, this is about two ways of using technology.

  2. Seth

    Van Stranden – No, I’m asking for a demonstration that the “analytical and critical pressure” isn’t just an erudite way of saying “New (culture-affecting) stuff is bad”. I have to qualify with phrases like “cultural anxiety” so as to avoid relying on context to convey that I’m concerned with concepts as are discussed in this post. Otherwise, phrasing relying on context, but read literally, indeed gets “ridiculous”. But I think I’ve been consistent all along that the problem is evidence that it’s not all just get-off-my-lawn in flowery language. Look at it this way, since we’re discussing machines, consider a sort of Turing Test. On one side is a great literary-type philosopher of society and technology. On the other side is an erudite writer, but who is a calculated panderer to fogey-fear, and will cynically flog the themes that anything geezers tend to grump about is harming brains (especially for kids) and represents a new low point in anti-humanism. Can one tell the difference? How?

    And I keep talking about parents singing to their children because Nick’s sentence brought me up short at that point of (my emphasis) “… picking out *a* *song* *to* *play* in the car”. The post was so harsh on Pichai, *starting* with “An android …” (clever, but still setting the tone), going on about “what it actually means to be a human being”, then “that the essence, and the joy, of parenting” and later on “mindless, witless, joyless; obsessed” – all fulminating over automating song selection. Then he swallows whole AUTOMATIC SONGS! It’s apparently completely human to accept this entire technology of mechanical reproduction of the human voice, the commercialization accompanying it, the well-established effects it’s had on reducing human participation (like parents singing with children). But don’t you dare automate the *selection* – then you’re taking a *ROBOT’S VIEW*. Everything else up to that point – no problem, humanity and parenting are fine. Cross that line, and you’ve violated (his emphasis) “what *should* be automated.” The combination of the hair-thinness of the line, the weight put on it, and the magnitude of the accepted old versus the near-trivial rejected new, took me aback.

    Daniel – “Water is hot, then it burns”, exactly. There is a quantity called “temperature”, where we can say “98 degrees is fine, 200 degrees is extremely harmful”. Despite some differences in perception, and internal variation, this quantity is fairly objective. Some people enjoy exposing their whole bodies to water much hotter than average use (“sauna”), and we do not say they are losing touch with their humanity for doing so, even though the majority of the population doesn’t engage in this pastime. If we had that sort of analysis, we wouldn’t be having this discussion.

  3. Daniel C.

    Seth – My point there was that there are tipping points/conditions past which something we might categorize as a single phenomenon begins to have significantly different effects on the subject, but you’re correct: if bad ideas hurt we wouldn’t be worrying about any of this : )

    It would all be so simple if emotions, experiences, and “humanity” were all quantifiable things that could be easily measured. I have no doubt Mr. Pichai and his colleagues are doing their best to deliver on that idea, and their incremental “Progress” along that trajectory is what this whole thing is about, isn’t it? They really don’t need a “human”, just a data set and a bank account. The narrative of one’s life is becoming superstitious, irrelevant noise, while the data proxy begins to look like the only “truth” that has any currency. This tracks perfectly with the shift from narrative based advertising to habit tracking and individual targeting.

    You’re portraying the situation as one in which irate literary traditionalists are the only ones who might find this sort of automation objectionable, and then you’re discounting those objections on the basis of your characterizations of their authors. I sympathize with the irritation at reactionaries, but I still don’t think that’s all this is. I believe there are plenty of genuine people who enjoy the sorts of conversation sparked by flipping through the radio with their kids and getting a spontaneous chance to slightly bridge the parental gap. Perhaps it’s also one instance among others that provides an opportunity to teach a child to put up with not being comfortable or not getting what they want. This is about choices which slide in and out of view depending upon the environment created. A major difference with the past is that the entertainment, pleasure, and distraction available to automate is at peak intensity at exactly the moment when automation itself is becoming exponentially more subtle and intrusive. It’s likely that this combination will push many people past their threshold for choosing in many areas.

  4. Seth

    Daniel – Yes, I understood the quantitative-change-becomes-qualitative-difference argument, but I was riposting that even so, that assumes some sort of quantity, say “unhumanness”. And hence the question I keep asking, why is automated song *selection* of an “unhumanness” which draws such very strong denunciation, but automated *songs* are apparently completely acceptable? Pichai is not a bad human being (literally) for using automated *songs* with his kids, but automated song *selection* is apparently another matter. It doesn’t make any sense to me in any sort of logically consistent framework. The potential “unhumanness” of replacing a parent’s singing with a robot’s singing (which is what’s being done) strikes me as massive. On the other hand, it does make a great deal of sense from the perspective of “What we grew up with is normal, but the new stuff is *scary*”. I’m not discounting the objections on the basis of the characterizations of their authors. I’m discounting the objections because they don’t seem to make any sense, except based on pure fogeyness.

    Note, sigh, I have to now take a paragraph to tediously clarify that when I say “the objections” above, I do not mean that any person who wants to do manual song selection is wrong and should get with the times. Rather, I am referring to the opposite, as outlined in the post, the fulmination that any person who wants to do automatic song selection is a bad *human being*, per, “Completely absent from this view is any sense of what it actually means to be a human being”, “mindless, witless, joyless”, etc. etc.

    People were very freaked out about mechanical voice reproduction when it was new. And here we are now, where it’s considered so normal and accepted that the “tech” guy in a discussion is the only one even mentioning the “robot” issue with it. To me, that’s a lesson in how much the objections are driven by nothing apart from highly elaborate get-off-my-lawn.

  5. Van Stranden

    Seth – Personally, I think singing to your kids and selecting a song on whatever technological device for them is the same: In both cases you, as a parent, are creating (a big part of) their environment. But having an algorithm create the environment by selecting what music is appropriate, crosses a line for me. Which line?

    “Humanness” is definitely not it. Often in tech discussions this means “autonomous” or “independent from technology”, which I think is quite naive. By juxtaposing automated song selection with singing, and claiming that singing is different from selecting songs yourself, you seem to fall for the same trap. Dutch philosopher Peter-Paul Verbeek says about this: “We are as autonomous with regard to technology as we are with regard to language, oxygen, or gravity.” And French philosopher Jacques Ellul already said something along the same lines 60 years ago. It seems that to you, there are only two possible responses to new technology: Either you embrace it, or you reject it because you didn’t grow up with it. I think it’s much more subtle than that. You’ll find many people amongst ‘anti-technologists’ that embrace technology, but just do not swallow it whole.

    So which line is crossed? I think my main problem with Pichai’s vision has to do with “appropriate”. I wouldn’t want my children to grow up in an environment that is devoid of anything “inappropriate”. Especially if the line between appropriate and inappropriate is drawn by some company. Also, I can imagine that one of the joys of parenthood is the small creative everyday act of drawing that appropriate/inappropriate line for your kids. Possibly that’s what Nick is talking about. And what Pichai, robotically, sees as valuable resources that need to be freed up.

  6. Daniel C.

    Seth – I understand your frustration; I think we’ve been talking past each other here at times, but maybe we’ve both made our points. I’ve had trouble choosing what to respond to without making my posts overlong, since this is a blog and it’s not really conducive to sustained debate (though I’ve enjoyed it). Unless you have something new to add, I’ll probably let it go with this. The question you’re reiterating is one I feel I’ve already addressed, but…

    If some people encounter a technology in a disruptive way, while others are conditioned to it in childhood, it doesn’t necessarily mean the effects were a mirage. So I don’t think those objections are always necessarily baseless just because they fade away. People have lived in some really crazy ways, and whole societies would have thought some of our basic ethical tenets were just silly. We can get accustomed to a lot of things that aren’t very good for us, and even forget they are there entirely.

    Comparing the recorded voice and the automated choice, one thing comes to the forefront: New automation as described by Pichai occurs within the existing technological milieu, making use of recorded audio as well as all kinds of other media. There’s really no point in a comparison attempting to isolate the two examples, because it’s always already a holistic situation. The cumulative effect is obscured when we try to examine media in decontextualized isolation. It’s the whole field that we encounter in our real lives, which is perhaps why seemingly small issues like this can cause so much anxiety.

    On another note, we’re having to bracket off questions of what we understand to be “human”, and how the conditions we’re born into give rise to those understandings, so it’s difficult to argue without a consensus upon which to ground value judgments. While I do believe that one understanding can be defended against another, I’m not arguing that here. I’m arguing for the preservation of the conditions that make any conception of humanity or human values possible.

    I also don’t think that Mr. Pichai is a bad human being. I think he and others in his field are quite naïve in regards to the effects of their work and seem to be unaware of their ignorance. They seem to feel it’s their privilege and duty to “hack” society simply because they see the opportunity, yet they seem to neither know nor care to know much about the social from any other perspective.

Comments are closed.