“What do you think about machines that think?” That’s the Edge question of the year for 2015. Here’s my reply.
Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it is reassuring. Our thinking machines aren’t about to leap beyond us intellectually, much less turn us into their servants or pets. They’re going to continue to do the bidding of their human programmers.
Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.
Where things get sticky is when we start looking to computers to perform not as our aids but as our replacements. That’s what’s happening now, and quickly. Thanks to advances in artificial-intelligence routines, today’s thinking machines can sense their surroundings, learn from experience, and make decisions autonomously, often at a speed and with a precision that are beyond our own ability to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied as robots or simply outputting algorithmically derived judgments, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their programming — unable to understand the context in which they operate — they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.
We got a preview of the dangers of autonomous software on the morning of August 1, 2012, when Wall Street’s biggest trading outfit, Knight Capital, switched on a new, automated program for buying and selling shares. The software had a bug hidden in its code, and it immediately flooded exchanges with irrational orders. Forty-five minutes passed before Knight’s programmers were able to diagnose and fix the problem. Forty-five minutes isn’t long in human time, but it’s an eternity in computer time. Oblivious to its errors, the software made more than four million deals, racking up $7 billion in errant trades and nearly bankrupting the company. Yes, we know how to make machines think. What we don’t know is how to make them thoughtful.
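To make that failure mode concrete, here is a minimal, purely hypothetical Python sketch. It is not Knight's actual code, which was never made public; `should_trade`, `send_order`, and every detail below are invented stand-ins. The point it illustrates is that a "mindless" order loop has no internal notion of "this can't be right": if a bug makes its trading signal fire on every tick, it will keep sending orders until a human intervenes from outside.

```python
# Hypothetical illustration only -- not Knight Capital's actual system.
import time

def should_trade(market_data):
    # Imagine a latent bug here, e.g. a leftover test flag that
    # makes the signal fire on every tick, no matter the data.
    return True

def send_order(symbol, qty):
    # Stand-in for a real exchange API call.
    print(f"ORDER: buy {qty} {symbol}")

def trading_loop(symbol="XYZ", qty=100, max_ticks=20):
    # max_ticks exists only so this demo terminates. The real hazard
    # is a loop with no such bound, no position limit, no loss cap --
    # nothing inside the program ever asks "does this make sense?"
    for _ in range(max_ticks):
        if should_trade(None):
            send_order(symbol, qty)
        time.sleep(0.001)  # in production: thousands of orders a minute

if __name__ == "__main__":
    trading_loop()
```

Nothing in such a loop can doubt itself; the only safeguards are the ones its programmers thought to put outside it.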
All that was lost in the Knight fiasco was money. As software takes command of more economic, social, military, and personal processes, the costs of glitches, breakdowns, and unforeseen effects will only grow. Compounding the dangers is the invisibility of software code. As individuals and as a society, we increasingly depend on artificial-intelligence algorithms that we don’t understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us. That creates an imbalance of power, and it leaves us open to clandestine surveillance and manipulation. Last year we got some hints about the ways that social networks conduct secret psychological tests on their members through the manipulation of information feeds. As computers become more adept at monitoring us and shaping what we see and do, the potential for abuse grows.
During the nineteenth century, society faced what the late historian James Beniger described as a “crisis of control.” The technologies for processing matter had outstripped the technologies for processing information, and people’s ability to monitor and regulate industrial and related processes had in turn broken down. The control crisis, which manifested itself in everything from train crashes to supply-and-demand imbalances to interruptions in the delivery of government services, was eventually resolved through the invention of systems for automated data processing, such as the punch-card tabulator that Herman Hollerith built for the U.S. Census Bureau. Information technology caught up with industrial technology, enabling people to bring back into focus a world that had gone blurry.
Today, we face another control crisis, though it’s the mirror image of the earlier one. What we’re now struggling to bring under control is the very thing that helped us reassert control at the start of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don’t lie in some dystopian future. They are here now.
Image: Jean Mottershead.
“Yes, we know how to make machines think. What we don’t know is how to make them thoughtful.”
Precisely. Witness the latest Facebook Year in Review feature, which sent photos of relatives who had died to people still grieving their loss. Similarly, a recent New Yorker article (“We Know How You Feel”) reports that computers have become so good at decoding emotional reactions that sometimes even the people reacting don’t know about them. This suggests to me that, beyond the obvious possibility that the computer misread the signs, those reacting might not want to recognize what they are feeling, perhaps for good reason. But the program doesn’t care about the context for such detection. It doesn’t give a flying fig about battles between ego and id.
To me, these are more examples of automation run amok, an idea that creeps me out but one that is beginning to seem inevitable. Corporations are licking their chops over what can be sold to consumers whose emotional responses to products are readily accessible through nifty new computer programs, programs those consumers may or may not know are running in the background as they navigate the Web.
Excellent analogy to the topic of the “Control Revolution” book. It hadn’t occurred to me.
Machines with learning capacity and autonomy will be given very dangerous capabilities and “motivations” in the context of military robotics and police control. This will put a very, very dangerous twist on the issue you raise. The military robot that acts fastest, implying tremendous decision-making autonomy, will win military showdowns (much as Wild West gunfighters won by being quick on the draw!). For once, I do not think Hollywood-esque Terminator concerns are far-fetched, given the demands of military competition. But you are also right that more subtle challenges will arise, as in the market example.
The dynamics of rapidly evolving systems will be extremely hard to control.
Disturbingly, right after I posted, this came up in my newsfeed on next-generation learning military robots as a probable trend:
http://www.huffingtonpost.com/heather-roff/autonomous-or-semi-autono_b_6487268.html?utm_hp_ref=technology&ir=Technology
You can call “artificial intelligence” whatever you want; the truth is there is no “field” or “corpus of knowledge” related to it, but there are algorithms and their related TEXTS (implementations in software or hardware).
And indeed we are now sitting on a huge “book” of technology, always dead but also always evolving through the people writing it.
Beyond this, the explosion that allowed all of it, namely access to cheap energy, is on its tail end.