Skills are gained through effort.
Automation relieves effort.
There’s always a tradeoff, but because the relief comes immediately whereas the loss of skills manifests itself slowly, we rarely question the pursuit of ever greater degrees of automation.
Recent case in point:
Pilots’ “automation addiction” has eroded their flying skills to the point that they sometimes don’t know how to recover from stalls and other mid-flight problems, say pilots and safety officials. The weakened skills have contributed to hundreds of deaths in airline crashes in the last five years … “We’re seeing a new breed of accident with these state-of-the art planes,” said Rory Kay, an airline captain and co-chair of a Federal Aviation Administration advisory committee on pilot training. “We’re forgetting how to fly.”
Something to think about on Labor Day.
For economic progress we have to automate tasks with technology. Otherwise there can be no uptick in productivity.
And, as always, the new technology brings with it a set of problems. These should be dealt with by further innovation, IMO.
Maybe augmented reality, or a form of external cognition in this case. Or maybe not.
Chris,
I agree with you up to a point, though I think you’re oversimplifying.
With regard to autopilot systems in commercial aircraft, I’d be curious about their effects on productivity. Have they reduced the number of pilots required to be in the cockpit of a plane or reduced the wages of the pilots? Surely it requires added labor (and capital) to design, install, and maintain the systems? I’m not arguing against the use of the systems (though the evidence makes me worry about the overuse of them), but I just don’t understand the productivity effects in this case.
Also, what does “external cognition” look like, exactly?
Nick
I don’t get an email when you add a reply to the comment. Bit counterproductive for conversation. Anyway…
Pilots’ wages have fallen over time. This could be due to other factors, however: cost-cutting by the airlines, supply and demand, etc.
I was talking about automation in general.
You bring up a good point. The aim of the automation is presumably to improve safety rather than increase productivity.
But despite its downside, the atrophy of skill, I presume the automation saves more lives than it costs.
On the flip side, it’s not unimaginable that automation becomes so reliable that it requires only one pilot per plane. It’s only a matter of time before we get driverless cars, wouldn’t you say?
External cognition in this case would be any system that supplemented the pilot’s memory: a system that allowed him to access a checklist of procedures, or a graphical representation of what to do in a given circumstance.
Under stress, humans’ higher brain functions degrade, so having something external to jog the pilot’s memory of what actions to take in an emergency may be useful.
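To make that concrete, here’s a minimal Python sketch of what such a memory aid could look like. The conditions and checklist steps are illustrative stand-ins of my own, not anything drawn from a real avionics system or quick reference handbook:

```python
# Hypothetical "external cognition" aid: a lookup that maps an emergency
# condition to a step-by-step checklist, so recall under stress falls on
# the system rather than on the pilot's degraded working memory.

CHECKLISTS = {
    "stall": [
        "Lower the nose to reduce angle of attack",
        "Apply maximum available thrust",
        "Level the wings",
        "Recover to normal flight attitude",
    ],
    "engine fire": [
        "Affected engine throttle to idle",
        "Fuel control switch to cutoff",
        "Discharge the fire extinguisher",
    ],
}

def recall_checklist(condition: str) -> list[str]:
    """Return the checklist for a condition, or a safe fallback."""
    return CHECKLISTS.get(condition.lower(), ["Fly the plane; consult the QRH"])

if __name__ == "__main__":
    for number, step in enumerate(recall_checklist("stall"), start=1):
        print(f"{number}. {step}")
```

The point isn’t the code; it’s that the lookup, not the stressed pilot, carries the burden of recall.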
I was talking about automation in general.
That’s fine, but I’m talking about particulars.
But despite its downside, the atrophy of skill, I presume the automation saves more lives than it costs.
Here, too, you’re talking in generalizations. The question isn’t whether autopilot systems on balance have saved lives, but whether automation at some point becomes counterproductive – the negative consequences from the atrophy of skills begin to outweigh further gains (in productivity, safety or whatever) from additional degrees of automation.
It’s only a matter of time before we get driverless cars, wouldn’t you say?
Entirely driverless? Unlikely, because there will always be some failure rate in technology, at both the component and the system level. You can have driverless trains on fixed routes with fixed schedules because in the event of a failure you can shut down the whole system immediately. You can’t shut down the whole automobile-traffic system in the event of a failure; therefore you need human backup.
Besides, driving’s fun. I don’t want some boring Google Robot driving my car. I’m not Larry Page.
I see the distinction now.
Automation would work better in a closed system with limited inputs. The more open the system, the greater the number of dimensions, and combinations of dimensions, and the more humans are needed. Combining those dimensions into unique outcomes requires creativity, something machines aren’t very good at.
“You can’t shut down the whole automobile-traffic system in the event of a failure; therefore you need human backup.”
If there were another system layer around the driverless car, it might work. The cars could run only when receiving a ‘go’ signal from this system. Turning off the signal for a given section would then shut that section down, and the other cars could route around it. It would be a network, and would display network-like characteristics.
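As a toy illustration, here’s a rough Python sketch of that layer under my own assumptions (a handful of hypothetical road sections, a go/stop flag per section, and breadth-first rerouting); it’s not any real traffic-control design:

```python
# Toy sketch of a "go"-signal layer over driverless cars: each road
# section broadcasts go/stop; cars only traverse sections signalling
# "go", and route around any section that has been shut down.
from collections import deque

# Hypothetical road network: section -> neighbouring sections.
NETWORK = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

go_signal = {section: True for section in NETWORK}  # all sections open

def shut_down(section: str) -> None:
    """Failure detected: turn off the go signal for one section only."""
    go_signal[section] = False

def route(start: str, goal: str) -> list[str] | None:
    """Breadth-first search that ignores sections without a go signal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in NETWORK[path[-1]]:
            if go_signal[nxt] and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no open route: cars hold position

shut_down("B")          # a failure closes section B...
print(route("A", "D"))  # ...and traffic reroutes: ['A', 'C', 'D']
```

Shutting down one section leaves the rest of the network running, which is the property that fixed-route trains only get by shutting everything down.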
However, this larger system would be too expensive to put in place just for cars. If such a system were already around as a general-purpose technology (GPT), then it might become an economic possibility.
“Besides, driving’s fun. I don’t want some boring Google Robot driving my car. I’m not Larry Page.”
Yeah, getting driven around by a robot would start to feel like we were being transported more for Technology than for ourselves.
Interesting discussion. This reminds me of a fraught discussion in an “Emerging Technologies” course about artificial intelligence, and one that I believe you touched on in your “Shallows,” Nick: can the human brain, and all its fuzzy logic, be recreated by fast enough computers? Are even things like “intuition” just super-fast processes that we don’t realize are actually happening, and that can therefore be mimicked by algorithms and silicon? Or is there something else there that can “never” (and I bridle at the word never) be approximated by computing? Could the “failure” rate in technology eventually fall below the comparable rate in our collective gray-matter tech? (To Nick & Chris) If a computer gets fast enough, doesn’t the definition of a “closed” system change? What might have seemed infinitely open and festooned with innumerable variables to a bank of computers from the ’80s might seem quite closed to today’s high-powered graphics card. Or not.
Nick,
One possible way to look at this may be to consider what has happened in a number of industries, not only the airline industry. I am reminded, for some reason, of the way that knowledge about the game (soccer as played in the English Premier League) has been eroded by the invasion of those who understand more about capital investment than about the underlying game. I mean, there was a time not so long ago when the players and managers of an English soccer team were considered to be high up in the organisation in terms of decision making. You would have held little credibility in those organisations if you had not grown up playing and been involved in the game itself.
Today, however, the team manager and players increasingly find themselves marginalised from authority and decision-making processes by those in charge of the capital. I am reminded of certain English league clubs which obtain new players because the owner goes out and buys them. Those players may not fit into the manager’s strategy at all, but the manager is forced to work with them, because the owner is pulling all the strings in terms of player selection.
It is possible that, if pilots (or persons with an aviation background) were still participants in the decision-making process, they would weight decisions towards outcomes which gave the pilot more control over operations. You always need a healthy balance, I think, between one extreme and the other. But if aviation-familiar personnel no longer have any input, you will witness a lot of decisions which try to diminish their involvement or importance even further.
A good case in point would be where engineers, instead of MBA graduates, manage to gain control over a company’s decision-making process. Normally, I would be all in favour of such an outcome. But having seen it play out in a number of real situations in my own country, I am very aware of the pitfalls in that. Engineers will often encourage decisions which propel them into importance, and give them more resources to continue doing what they do, even if it is to the detriment of the company. It’s unfortunate, but also very true. You would need to consider what happened to the aviation industry, over an extended period of time, when engineers may have held too much sway.
“Besides, driving’s fun. I don’t want some boring Google Robot driving my car. I’m not Larry Page.”
It’s fun, but every couple of years driving kills more people than died in World War II. It’s a silent holocaust, but it’s convenient, so we do it.
Of course, once we get all the cars hooked up to Googlenet, Chinese and Russian hackers will probably take over the grid and start crashing us all into one another, and then we’ll discover the Scylla on the other side of that Charybdis…
Life without risk is indistinguishable from death.
Nick & co,
I forget which season it was, but I distinctly remember an episode of the ‘Numb3rs’ TV series which was based on Malcolm Gladwell’s ‘Tipping Point’ notion. Those of you familiar with the series will understand that each episode is meant to unravel some obscure mathematical concept or other for easy consumption by passive TV viewers such as myself. The episode I refer to studied the effect of a sniper who terrorised the city: all of the folk stayed at home, and streets that were normally packed with people became empty. It turned out there was one main criminal at work, and a lot of ‘copy cats’ of lesser skill. The mathematician in the show tried to explain to people how low the mathematical probability was of anyone being the target, but still people stayed off the streets.
What is nice about ‘Numb3rs’ is that it allows all points of view to weigh in. In the episode in question, the FBI profiler countered the statement on probability by explaining that people have assimilated the reality of automobile fatalities into their daily lives. Sniper shootings in public places, though of a much lower statistical probability of affecting any inhabitant of a huge city, created far more terror, because they were something un-encountered, something entirely out of the ordinary. It is the same with sharks, wolves and other predators. There is one early episode of The West Wing I recall, where someone mentioned that more people in North America were killed per year trying to get their change out of a vending machine than were fatalities of wolf attacks. It is the same idea as the sniper: the idea of wild predators targeting people creates more widespread terror than vending machines.
@ All,
May I briefly add something to the above which could be worth voicing here? It is an extension of the point that certain things are assimilated into daily life and become ‘normal’. Work has been done in the United Kingdom on the subject of climate change, but not in the usual kind of way. I attended a talk not so long ago by a British man who worked in psychiatric services for the public jail system in the UK. He spoke about climate change and how it may affect people’s everyday lives. The point he expressed was that most of us follow some routine in our lives, and that enables us to maintain a certain level of sanity. He pointed out that once people lose their routine, the daily things, many other things shortly begin to go wrong. These may be some of the challenges awaiting us which we are not used to dealing with. The act of driving, and the vehicular transportation system, is part of the very routine daily existence of many. I suppose another example would be the mobile phone. When people are deprived of these items, they sometimes find it difficult to cope.
Apologies for the babble, but I thought it was worth bringing up.
It’s an interesting paradox that training pilots to work in highly automated cockpits requires highly automated training, i.e. in simulators. The opinion of most aviation experts is that more simulation-based training is required to prepare pilots for the sorts of events that brought down AF447.
See the blog by David Learmount for a good description, e.g. http://www.flightglobal.com/blogs/learmount/2011/08/airline-pilots-whove-forgotten.html
I’ve heard that “life without risk is indistinguishable from death,” but it’s hard to confirm that with anyone who’s actually dead. It’s a question of the acceptable degree of risk, I suppose. There were 228 fatalities on AF447, approximately 3,000 on 9/11, and around 34,000 deaths on US highways in 2009.
I suppose we should also remember that cars and roads are themselves automation technologies, as are Gatling guns and their various progeny. Automation’s historical death count is astronomical, though of course automation has saved many lives, too. My point, which I confess I’ve drifted away from, is that automation involves tradeoffs, which, in our enthusiasm for automation, we have a tendency to overlook.
As you know, there’s a debate in the airline industry about the consequences of highly automated cockpits, a debate which has intensified since the crash of Air France 447 in the South Atlantic in 2009. It’s a debate with relevance far beyond aircraft. Consider these two aspects:
1) Are pilots putting too much trust in automated systems?
This is a tricky question, because often a pilot MUST trust his avionics instruments rather than what his senses are telling him, or he risks being disoriented by the confusion of forces and visual cues he is subjected to. He doesn’t really have much choice in the matter in a tropical storm, in the dark of night, at 35,000 feet over the ocean. But when the systems fail, as they did on AF447, how will he know that he now MUST NOT trust the systems?
2) Are pilots losing their flying skills because of automated systems?
The challenge here is that the reliability of modern aircraft means that pilots’ skills aren’t challenged nearly as often as their predecessors’ were. So even if pilots develop their skills in basic training, those skills can atrophy. Experience ain’t what it used to be.
The focus on safety and training in aviation means that these topics are being intensely debated. You’re right when you say that the trade-offs in automation aren’t well understood in other industries and in society in general.
Larry Lumsden wrote on aeronautics,
That is precisely the same point that David Clark made in his talk at the Berkeley iSchool, a podcast of which you can download at this address:
http://www.ischool.berkeley.edu/newsandevents/events/sl20090304
Basically: when a network has become vulnerable to some kind of exploitation, or is not working as it ought to, how do we know that? David Clark gave the simple example of a technology conference he attended, where the DHCP (dynamic host configuration protocol) box for the wireless network at the conference had been limited to only 30 users. The audience consisted of a hundred or more people, all trying to gain access at the same time. Show stopper. But there was no way the little box on the stage could tell anyone what the difficulty was.
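To illustrate the failure mode, here’s a simplified Python sketch of my own (a toy model, not the actual DHCP protocol): a pool of 30 leases is handed out first-come, first-served, and every later request simply gets nothing back, with no way to tell the room why:

```python
# Simplified model of a DHCP box with a pool of 30 leases. Once the
# pool is exhausted, further requests are silently refused; the box
# has no channel for telling the room *why* nobody can get on.

POOL_SIZE = 30
leases: dict[str, str] = {}  # client MAC -> assigned address

def request_address(mac: str) -> str | None:
    if mac in leases:
        return leases[mac]           # renewing client keeps its lease
    if len(leases) >= POOL_SIZE:
        return None                  # pool exhausted: silent failure
    address = f"192.168.1.{100 + len(leases)}"
    leases[mac] = address
    return address

# A conference audience of ~100 laptops, all asking at once:
results = [request_address(f"mac-{n}") for n in range(100)]
print(sum(r is not None for r in results), "clients served;",
      sum(r is None for r in results), "silently refused")
```

The first 30 clients get addresses; the other 70 get silence, which is exactly the show stopper Clark described.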
The spelling and grammar features of modern computers have caused people to become over-reliant on auto-correction when writing. My generation has trouble writing the “old-fashioned way” because we were raised with this interactive system; I believe your book mentions that our writing skills have worsened in recent decades. Such is the danger of new technologies. They facilitate our lives in such a way that if you pull the plug, so to speak, we plunge into the discomfort of not having a crutch to rely on. This danger, as you pointed out, can be fatal. The Federal Aviation Administration, I imagine, is racing to address this new issue of pilot error due to spending too much time on autopilot. It’s lamentable that many fatalities occur before an adequate “fix” is put into place. New tech requires new, often compensatory, curriculum.