Monthly Archives: May 2015

The way of the tweet

[image: birds]

[notes in search of an essay]

The lifecycle of the twitterer:

1. Skepticism

2. Enchantment

3. Disenchantment

4. Servitude

(This may also be the lifecycle of Twitter itself.)

Every communication medium has formal qualities and social qualities. The uniqueness of each medium lies in the tension between the formal qualities and the social qualities.

The initial skepticism of the twitterer stems from the formal restrictiveness of the medium, which makes its social possibilities appear meager. Limiting messages to 140 characters would seem to prevent both rich expression and rich conversation. Hence the early rap on Twitter: who wants to hear what some narcissist had for breakfast? (Many users and would-be users never get beyond this stage.)

The enchantment begins with the realization that formal restrictiveness can be a spur to creative expression. A limit of 140 characters, it turns out, leaves plenty of room for wit. It also leaves plenty of room for conversation. The sense of enchantment grows as people come up with ingenious formal innovations that further expand the flexibility of the medium (hashtags, abbreviations, denotative punctuation marks, etc.), greatly enhancing its social qualities, without destroying the overarching formal constraint that distinguishes the service.

(Side note: When a link appears in a tweet, the link can serve as either context or text.)

Disenchantment begins with the realization that, even with all the formal innovations, Twitter remains a restrictive medium. The twitterer begins to sense that she has explored all the exciting or intriguing formal and social qualities of the medium, and now the limitations begin to grate. Tweeting begins to feel at best routine and at worst like an exercise in recycling. A contempt for the medium begins to grow silently within the twitterer.

(Side note: Unlike the charming formal innovations that come from the users, the formal innovations introduced by Twitter itself, such as the rote attachment of images or snippets to tweets, often feel forced, clumsy, manipulative. They’re disenchanting.)

To the extent that the disenchanted have been socially successful with Twitter (lots of followers, lots of interlocutors, achievement of status), they will find it difficult, if not impossible, to abandon the medium. Servitude begins.

(Side note: The formal restrictiveness of Twitter makes it a more interesting medium, a more enchanting medium, than, say, Facebook. Servitude is Facebook’s business model, as its founder and early investors understood long ago.)

Can computers improvise?

[image: eminem]

Bust this:

Girl I’m down for whatever cause my love is true
This one goes to my man old dirty one love we be swigging brew
My brother I love you Be encouraged man And just know
When you done let me know cause my love make you be like WHOA

These rap lyrics, cobbled together by a computer from a database of lines from actual rap songs, “rival those of Eminem,” wrote Esquire‘s Jill Krasny last week. I have to think that’s the biggest dis ever thrown Eminem’s way. But Krasny was not the only one gushing over the witless mashup. A Mashable headline said the program, dubbed DeepBeat by its Finnish creators, “produced rap lyrics that rival human-generated rhymes.” Quartz‘s Adam Epstein suggested that robots can now be considered “lyrical wordsmiths.” Reported UPI: “Even rappers might soon lose their jobs to robots.”

I guess it must have been a slow news day.

Silly as it is, the story is not atypical, and it illuminates something important about our sense of the possibilities and threats presented by computers. Our expectations about artificial intelligence have raced ahead of the reality, and that’s skewing our view not only of the future but of the very real accomplishments being made in the AI and robotics fields. We take a modest but meaningful advance in natural-language processing — DeepBeat fits lines together through a statistical analysis of rhyme, line length, and wording, its choices constrained by a requirement that a specified keyword (“love” in the example above) appear in every line* — and we leap to the conclusion that computers are mastering wordplay and, by implication, encroaching on the human facility for creativity and improvisation. In the process, we denigrate the accomplishments of talented people — just to make the case for the computer seem a little more compelling.
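To be clear about what the program is doing mechanically: the recipe described above (score candidate lines for rhyme and length similarity, insist on the keyword, chain the winners together) can be illustrated in a few lines of code. The sketch below is a toy with invented weights, not the researchers’ actual system:

```python
# Toy sketch of keyword-constrained line assembly. NOT DeepBeat's actual
# model, just an illustration of scoring candidate lines by rhyme, length,
# and a required keyword. All scoring weights are invented.

def vowel_tail(line, n=3):
    """Last n vowels of a line, a crude stand-in for its rhyme sound."""
    vowels = [c for c in line.lower() if c in "aeiou"]
    return tuple(vowels[-n:])

def score(prev, cand, keyword):
    if keyword not in cand.lower():
        return float("-inf")          # hard constraint: keyword must appear
    rhyme = 1.0 if vowel_tail(prev) == vowel_tail(cand) else 0.0
    length = 1.0 - abs(len(prev) - len(cand)) / max(len(prev), len(cand))
    return 2.0 * rhyme + length       # invented weights

def assemble(corpus, keyword, n_lines=4):
    verse = [next(l for l in corpus if keyword in l.lower())]
    pool = [l for l in corpus if l not in verse]
    for _ in range(n_lines - 1):
        best = max(pool, key=lambda c: score(verse[-1], c, keyword))
        verse.append(best)
        pool.remove(best)
    return verse

corpus = [
    "girl I'm down for whatever cause my love is true",
    "one love we be swigging brew",
    "my love make you be like whoa",
    "just know my love will grow",
]
print("\n".join(assemble(corpus, "love")))
```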

We humans have a well-documented tendency to perceive human characteristics in, and attribute human agency to, inanimate objects. That’s a side effect, scientists believe, of the human mind’s exquisite sensitivity to social signals. A hint of human-like cognition or behavior triggers a sense that we’re in the presence of a human-like being. The bias becomes particularly strong when we observe computers and automatons performing manual or analytical tasks similar to those we do ourselves. Joseph Weizenbaum, the MIT computer scientist who wrote the program for the early chatbot ELIZA, limned the phenomenon in a 1966 paper:

It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.
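Weizenbaum’s point is easy to make concrete. An ELIZA-style exchange can be generated by a handful of pattern-substitution rules. The sketch below is not Weizenbaum’s original DOCTOR script, just an invented illustration of how little machinery is needed to seem attentive:

```python
import re

# A minimal ELIZA-style responder: a few regex rules and canned reflections.
# Not Weizenbaum's original script, just an illustration of how a "mere
# collection of procedures" can pass for an attentive conversation partner.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."            # catch-all when no rule matches

print(respond("I am feeling anxious about computers"))
# -> How long have you been feeling anxious about computers?
```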

The procedures have grown more complex and impressive since then, and their practical applications have widened enormously, but Weizenbaum’s point still holds. We’re quick to mistake clever programming for actual talent.

Last week, in a New York Times op-ed examining human and machine error, I wrote a couple of sentences that I suspected would raise the odd hackle: “Computers are wonderful at following instructions, but they’re terrible at improvisation. Their talents end at the limits of their programming.” And hackles were raised. Krasny hooked her Esquire piece on DeepBeat to those lines, arguing that the program’s ability to spot correlations in spoken language is an example of machine improvisation that proves me wrong. On Twitter, the sociologist and technology writer Zeynep Tufekci suggested I was “denying the true state of advances in artificial intelligence.” She wrote: “I don’t agree that [computers] are unable to ‘improvise’ in the most practical way.”

The quotation marks in Tufekci’s statement are revealing. If we redefine what we mean by improvisation to encompass a computer’s ability to respond programmatically to events within a constrained field of activity, then, sure, we can say that computers “improvise.” But that’s not what we really mean by improvisation. To improvise — the word derives from the Latin improvisus, “unforeseen” — is to act without instruction, without programming, in novel and unforeseen situations. To improvise is to go off script. Our talent for improvisation, a talent we share with other animals, stems from the mind’s ability to translate particular experiences into a store of general know-how, or common sense, which then can be deployed, fluidly and often without conscious deliberation, to meet new challenges in new circumstances.

No computer has demonstrated an act of true improvisation, an act that can’t be explained by the instructions written by its programmers. Great strides are being made in machine learning and other AI techniques,** but the programming of common sense remains out of reach. The cognitive scientist Gary Marcus, in a recent New Yorker essay, “Hyping Artificial Intelligence, Yet Again,” explains:

Trendy new techniques like deep learning and neuromorphic engineering give A.I. programmers purchase on a particular kind of problem that involves categorizing familiar stimuli, but say little about how to cope with things we haven’t seen before. As machines get better at categorizing things they can recognize, some tasks, like speech recognition, improve markedly, but others, like comprehending what a speaker actually means, advance more slowly.

Marcus is hardly blasé about advances in artificial intelligence. He thinks it likely that “machines will be smarter than us before the end of the century — not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.” But he stresses that we’re still a long way from building machines with common sense, much less an ability to program themselves, and he’s skeptical that existing AI techniques will get us there.

Since we don’t know how the minds of human beings and other animals develop common sense, or gain self-awareness, or learn to improvise in novel situations, we have no template to follow in designing machines with such abilities. We’re working in the dark. As University of California, Berkeley, professor Michael Jordan, a leading expert in machine intelligence, said in an IEEE Spectrum interview with Lee Gomes last year, when it comes to “issues of higher cognition — how we perceive, how we remember, how we act — we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.”

And yet the assumption that computers are replicating our own thought processes persists. In reporting on DeepBeat’s use of a neural network, two Wall Street Journal bloggers wrote that the software “is based on a field of artificial intelligence that mimics the way the human brain works.” Writing about neural nets in general, Wired‘s Cade Metz says that “these systems approximate the networks of neurons inside the human brain.” As Jordan cautions, “people continue to infer that something involving neuroscience is behind [neural nets], and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.”

Even Marcus’s expectation of the arrival of human-level artificial intelligence in the next eighty or so years is based on a faith that we will find a way, without a map in hand or in the offing, to cross the undiscovered country that lies between where we are today and the promised land of human-level AI. That’s not to say it can’t happen. Our own minds would seem to be proof that common sense and improvisational skill can come from an assemblage of inanimate components. But it is to say that predictions that it will happen — in twenty years or fifty years or a hundred years — are speculations, not guarantees. They assume a lot of things that haven’t happened yet.

If in interpreting the abilities of machines we fall victim to our anthropomorphizing instinct, in forecasting the progress of machine abilities we’re often misled by our tendency to place unwarranted faith in our prediction techniques. Many of the predictions for the rapid arrival of human-like artificial intelligence, or “superintelligence” that exceeds human intelligence, begin with reference to “the exponential advance of computer power.” But even if we assume a doubling of available computer-processing power every year or two indefinitely into the future, that doesn’t tell us much about how techniques for programming AI will unfold. In warning against AI hype in another IEEE Spectrum interview, published earlier this year, Facebook’s director of AI research, Yann LeCun, described how easy it is to be led astray in anticipating ongoing exponential advances in AI programming:

As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit — physical, economical, societal — then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

Here’s a sketch of the sigmoidal pattern of progress that LeCun is talking about:

[image: sigmoid curve]
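To put rough numbers on LeCun’s point, here is a minimal sketch, using a standard logistic curve with arbitrary parameters, of how closely the early phase of a sigmoid tracks an exponential before bending toward its plateau:

```python
import math

# A logistic (sigmoid) curve and the exponential that matches its early growth.
# The parameters are arbitrary; the point is only that the two are nearly
# indistinguishable at first and diverge sharply around the inflection point.
def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def early_exponential(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    # For small t, logistic(t) is approximately ceiling * exp(rate * (t - midpoint))
    return ceiling * math.exp(rate * (t - midpoint))

for t in range(0, 21, 2):
    print(f"t={t:2d}  logistic={logistic(t):7.2f}  exponential={early_exponential(t):12.2f}")
```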

It’s very easy to get carried away during that exponential phase (“computers will soon do everything that people can do!”; “superintelligence will arrive in x years!”), and the more carried away you get, the more disheartened you’ll be when the plateau phase arrives. We’ve seen this cycle of hype and disappointment before in the progress of artificial intelligence, and it hasn’t just distorted our sense of the future of AI; it has also had a debilitating effect on the research itself. Coming after a period of hype, the plateau stage tends to provoke a sharp drop in both interest and investment, which ends up extending the plateau. Notes LeCun, “AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”

To be skeptical about the promises being made for AI is not to denigrate the ingenuity of the people who are actually doing the work. Appreciating the difficulties, uncertainties, and limits in pushing AI forward makes the achievements of computer scientists and programmers in the field seem more impressive, even heroic. If computers were actually capable of improvisation, a lot of the hardest programming challenges would evaporate. It’s useful, in this regard, to look at how Google has gone about programming its “autonomous” car to deal with unusual traffic situations. If you watch the car perform tricky maneuvers, you might be tempted to think that it has common sense and a talent for improvisation. But what it really has is very good and diligent programmers. Here’s Astro Teller, the head of Google X, the R&D unit developing the vehicle, explaining how the process has unfolded:

When we started, we couldn’t make a list of the 10,000 things we’d have to do to make a car drive itself. We knew the top 100 things, of course. But pretty good, pretty safe, most of the time isn’t good enough. We had to go out and just find a way to learn what should be on that list of 10,000 things. We had to see what all of the unusual real world situations our cars would face were. There is a real sense in which the making of that list, the gathering of that data, is fully half of what is hard about solving the self driving car problem.

The Google team, Teller says, “drives a thousand miles of city streets every single day, in pursuit of moments that stump the car.” As the team methodically uncovers novel driving challenges — challenges that human drivers routinely handle with ease, without requiring new instructions — it updates the car’s software to give the vehicle the ability to handle new categories of situations:

When we produce a new version of our software, before that software ends up on our actual cars, it has to prove itself in tens of thousands of [possible situations] in our simulator, but using real world data. We show the new software moments like this and say “and what would you do now?” Then, if the software fails to make a good choice, we can fail in simulation rather than in the physical world. In this way, what one car learns or is challenged by in the real world can be transferred to all the other cars and to all future versions of the software we’ll make so we only have to learn each lesson once and every rider we have forever after can get the benefit from that one learning moment.
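In other words, every hard case the fleet encounters becomes a permanent regression test that each new software release must pass in simulation before it reaches a real car. The sketch below shows that pattern schematically; the names (Scenario, drive, acceptable) are invented for illustration and bear no relation to Google’s actual test harness:

```python
# Sketch of scenario-based regression testing, in the spirit of the process
# Teller describes: every logged hard case becomes a permanent test that each
# new software version must pass in simulation before deployment.
# All names here (Scenario, drive, acceptable) are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    description: str          # e.g. "cyclist swerves into lane at dusk"
    sensor_log: dict          # recorded real-world sensor data
    acceptable: Callable      # judges whether the chosen maneuver is safe

def regression_test(drive: Callable, library: List[Scenario]) -> List[str]:
    """Run a candidate driving policy against every recorded hard case."""
    failures = []
    for scenario in library:
        decision = drive(scenario.sensor_log)      # "what would you do now?"
        if not scenario.acceptable(decision):
            failures.append(scenario.description)  # fail in simulation, not on the road
    return failures

# A release ships only if the failure list is empty; any new failure becomes
# a lesson learned once and inherited by every future version of the software.
```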

Behind the illusion of machine improvisation lies a whole lot of painstaking effort. In an article this week about the Darpa Robotics Challenge, a competition that tests the limits of robots, New York Times science reporter John Markoff emphasized this point:

Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, [Darpa program manager Gill] Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships. “It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”

If you’re worried about a robot or an AI algorithm taking your job, you can take a little comfort in what I’ve written here. But you shouldn’t take a lot of comfort. As the Google car shows, computers can take over a whole lot of sophisticated manual and intellectual work without demonstrating any common sense or improvisational skill. Many years ago, Alan Turing observed that, as computers sped up and databases swelled, the “ingenuity” of programmers could be substituted for the “intuition” of skilled professionals in many fields. We’re seeing that today on a broad scale, and we’re going to be seeing even more of it tomorrow. But Turing also concluded that there would always be limits to the use of ingenuity. There would always be an important place for intuition — for “spontaneous judgments which are not the result of conscious trains of reasoning.” Computers would not be able to substitute for talented, experienced people in all situations. And we’re seeing plenty of evidence of that today, too. If you’re a rapper, you may need to worry about shifts in fashion — capriciousness is another human quality that computers can’t match — but you can rest assured that robot rappers pose no threat whatsoever to your livelihood. Computers in the future will be able to do more than we assume but less than we fear.

Ever since our ancestors made the first tools, we have been dividing labor between ourselves and our technologies. And the line between human effort and machine effort is always changing, for better and for worse. It would probably be a good idea to spend a little less time worrying about, or yearning for, a future in which robots take all our jobs and a little more time thinking about how to divide labor between people and computers in the wisest possible way.

______

*The Finnish researchers admit that when they apply their statistical model of rap lyrics to Eminem’s work, it scores poorly. The reason? Eminem is a master at “bending” — altering the pronunciation of words to create assonance and other rhymes where the rules say they shouldn’t exist. Eminem, in other words, improvises.

**Tomorrow, for example, Berkeley researchers will present a paper on a machine-learning technique that appears to enable a robot to master simple new tasks through a process of trial and error.

Image: Marvel.

The Uber of labor unions

[image: protestcab]

One of the underappreciated benefits of the internet is that it is continually forcing us to relearn the lessons of the past. Take publishing, for instance. In the early days of blogging, there was a sense, shared by many, that online publishing systems were serving to “liberate” writers from editors and proofreaders and fact-checkers and all the other folks who had long stood between the scribbler and the printed page. The internet, it was said, had revealed these people to be useless interlopers — gatekeepers, even. They were the human faces of friction, relics of an inefficient, oppressive atom-based past.

Forgotten in all the enthusiasm was the reason editorial staffs had come into being in the first place. Early publishers didn’t hire a bunch of useless workers just for the joy of paying them wages. No, publishers realized that people prefer reading good prose to crappy prose. And what we’ve learned is that people still prefer reading good prose to crappy prose — even when they get the prose for free. So we still have editorial staffs, and thank goodness for that.

Which brings me to Uber and all the other online labor markets that, as Christopher Mims writes in the Wall Street Journal, “are remarkably efficient machines for producing near minimum-wage jobs.” The enormous valuations that venture capitalists and other investors are now giving to the Ubers of the world reflect an assumption that the clearinghouses, or “platforms,” will continue to wield almost absolute power over the masses of individual laborers who do the driving and other work that the companies dole out. Easily replaceable, the individual worker has little choice but to accept the platform’s terms. Which means that, as Uber’s recent pricing actions suggest, the platforms will face little resistance in ratcheting up their share of the take.

Mims suggests that Uber drivers, and other such contractors, will ultimately have to rely on the government to protect their interests:

The only way forward is something that has gotten far too little attention, called “dependent contractors.” In contrast with independent contractors, dependent contractors work for a single firm with considerable control over their work — as in, Lyft or Uber or Postmates or Instacart or any of a hundred other companies like them. This category doesn’t exist in current U.S. law, but it does exist in countries like Germany, where dependent contractors get more protections than freelancers but are still distinct from full-time employees.

That may well be a good idea, but history tells us that it’s far from the only way forward. Following the example of factory workers — as well as early taxi drivers, or “hackmen” — a century ago, the contractors could stop acting as individuals and start acting collectively. Some form of unionization is not just another way forward; for the workers, it could be a much more empowering one. I realize this may sound far-fetched at the moment. Labor unions aren’t exactly at the height of their popularity these days. And it’s true that many of the platform-dependent contractors are content with the current market arrangement — indeed, grateful for the new opportunities it’s given them to make a buck. Many drivers see Uber as their ally, their friend. But, as history also tells us, such attitudes can change quickly. There’s a latent economic antagonism between the workers and the clearinghouses, and as the clearinghouses wield their power to take an ever greater slice of the pie, in order to deliver the returns expected by their investors, the antagonism seems likely to burst into the open. Workers are content until the moment they feel cheated.

Clearinghouses like Uber may actually turn out to be the model for a new, digital form of labor union. Rather than relying on collective bargaining, these new unions would displace the third-party clearinghouses by taking over their role in the market. Think about it. The drivers join together and agree to contribute a small percentage of their fares — much smaller than the fees Uber extracts — as union dues, and the pooled cash is used to build and run their own, jointly owned ride-sharing platform. As the current plethora of such clearinghouses — the Uber of wiping smudges off eyeglasses! the Airbnb of caskets! — makes clear, setting up such platforms is, as a technical matter, pretty straightforward at this point, and once set up, they operate with great efficiency. By cutting out the Uber middleman, the drivers would not only keep more of their earnings; they’d also reap benefits of scale in establishing insurance plans, retirement accounts, and the other sorts of worker benefits that unions basically invented.
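The arithmetic behind that claim is simple enough to spell out. The percentages below are purely illustrative, not Uber’s actual commission or any real cooperative’s costs:

```python
# Illustrative only: these percentages are invented to show the shape of the
# argument; they are not Uber's actual commission or a real cooperative's dues.
weekly_fares = 1000.00   # hypothetical gross fares for one driver
platform_fee = 0.25      # hypothetical cut taken by a third-party platform
coop_dues    = 0.05      # hypothetical dues funding a driver-owned platform

print(f"Via third-party platform: driver keeps ${weekly_fares * (1 - platform_fee):,.2f}")
print(f"Via worker-owned co-op:   driver keeps ${weekly_fares * (1 - coop_dues):,.2f}")
```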

I’ve even come up with a cool term to describe this system of worker-owned clearinghouses. I call it the sharing economy.

Ladies of the code

That 1961 Life article about computers also gave a snarky nod to the important role that women were playing in mainframe programming:

Within the New Class [of computer experts] are more than 30,000 people, including hundreds of attractive young ladies who have studied computers at such colleges as Vassar. They are concerned not with the executive problems of how to use computers but with How to Talk to the Machine. They have their own language and like to discuss such things as automorphisms, combinatorial lemmas, (0,1)-matrices of size m by n, Monte Carlo theory, heuristic programing, Boolean trees and don’t care conditions. Fortunately space does not permit these terms to be explained here.

“The machines are taking over”

[image: life]

Our computers advance, but our fears about them remain remarkably consistent, cycling through peaks and valleys in some as yet undiagnosed pattern. In the spring of 1961, Life magazine ran a long feature story titled “The Machines Are Taking Over: Computers Outdo Man at His Work — and Soon May Outthink Him.” It made for unsettling reading.

“The American economy,” reported the writer, Warren R. Young, “is approaching the point of no return in its reliance on computers.” He provided a long list of examples to show how computers were quickly taking over not only factory work but also professional jobs requiring analysis and decision-making, in such fields as engineering, finance, and business. Computers “will tend to make middle-management obsolete.” The digital machines, he went on, were even moving into the creative trades, composing “passable pop songs” and “Beatnik poems.” Soon, they’d be able to perform “robotic translation of foreign publications, particularly scientific and political material written in Russian.”

The use of language is, of course, one of the traits that has most notably distinguished human beings from all other creatures. The complete mastery of human language by computers may well be on its way. Some scientists say that digital computers can already “think.” Though they greatly doubt that computers will be able to do creative thinking, they are coming close.

Most ominous of all, wrote Young, was the arrival of machine learning:

A new machine called the Perceptron is actually able to learn things by itself, by studying its environment. Built by a Cornell psychologist, Dr. Frank Rosenblatt, it is equipped to look at pictures and in future versions will hear spoken words. It not only recognizes what it has seen before but also teaches itself generalizations about these. It can even identify new shapes similar to those it has seen before.

The Perceptron is so complex that even its inventor can no longer predict how it will react to a new problem. “If devices like the Perceptron,” says one expert, “can really learn effectively by themselves, we will be approaching the making of a true robot, fantastic as that sounds. But remember, all this was begun and devised by human brains, so humans — if they take care — will remain supreme.”

Young didn’t find such tepid reassurances all that convincing:

This is cheering news, no doubt. But there is another view of the future in a story that computer designers now tell only as a macabre joke: A weary programmer who has spent his life tending a computer that always has the right answer for everything finally gets fed up. “All right,” he asks his machine, “if you’re so smart, tell me — is there a God?” The computer whirs gently, its lights flicker, its coils buzz and hum, and at last it clicks out the answer: THERE IS NOW.

Computers hadn’t even mastered lower-case letters, and already we’d infused them with delusions of grandeur.
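For the curious: the learning Rosenblatt’s Perceptron performed, the thing that so unnerved the Life writer, boils down to a simple weight-update rule. Here is a minimal sketch in modern Python (obviously not the 1958 hardware), trained on a toy classification task:

```python
# A minimal perceptron: the learning rule behind Rosenblatt's machine.
# Weights are nudged after each misclassified example; nothing more.
def train_perceptron(examples, epochs=20, lr=0.1):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:           # target is 0 or 1
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output               # -1, 0, or +1
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach it a toy "shape": output 1 only when both features are present.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(examples)
print(weights, bias)   # a learned linear boundary, generalizing to similar inputs
```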

Image: Life.

Insert human here

[image: ibm]

I have an op-ed about how we misperceive our computers and ourselves, “Why Robots Will Always Need Us,” in this morning’s New York Times. A snippet:

While our flaws loom large in our thoughts, we view computers as infallible. Their scripted consistency presents an ideal of perfection far removed from our own clumsiness. What we forget is that our machines are built by our own hands. When we transfer work to a machine, we don’t eliminate human agency and its potential for error. We transfer that agency into the machine’s workings, where it lies concealed until something goes awry.

Read it.