The Outline: When Machines Go Rogue

Midnight, January 8, 2016. High above the snow-covered tundra of arctic Sweden, a Canadair CRJ-200 cargo jet made a beeline through the -76 degree air. Inside the cockpit, the pilot in command studied the approach information for Tromsø, Norway. His eyes flickered up from his reading to the primary flight display, an iPad-size rectangle on the left side of his control panel, where the indicator that showed how high the nose was pointing above the horizon had started to creep upward.

Not good.

The pilot felt no sense of movement, but that didn’t matter: One of the first things he’d been taught was that without being able to see the ground, it’s almost impossible to accurately judge whether you’re climbing or turning. A pilot must trust his instruments completely.

A klaxon sounded: The autopilot had turned itself off. There was no time to think. If the nose went too high, it could result in a deadly stall. On the display, a bright red arrow pointed downward: Descend! The pilot pushed forward on the controls, yet still the display said the nose was too high. He pushed more. Manuals and binders rose up into the air and clattered onto the ceiling. He was hanging in his shoulder straps as though upside down. An audio clacker went off: The plane had exceeded its maximum operating speed.

“Help me!” the pilot said.

“I’m trying!” the co-pilot called out.

What the pilot did not comprehend was that his plane had already lost nearly two miles of altitude and was pointed almost straight down. Forty seconds before, the automated system that guided the plane had suffered a partial malfunction, causing it to display an erroneous reading. Now the plane was hurtling toward the frozen landscape at 584 mph. At this rate, impact was less than 30 seconds away. And the pilot had no idea what was really going on.

The co-pilot toggled the radio. “Mayday, Air Sweden 294!”

* *

Automation — the use of systems to minimize human intervention — has been around since at least the automatic textile looms of the 18th century. For the most part, automation works just as it should, allowing humanity to accomplish things that would otherwise be impossible: sort through millions of web pages to find a precise phrase, inject the exact same dollop of jam into a donut a million times, or keep a plane stable and steady six miles up in total darkness. But as automation becomes increasingly capable, it is also becoming increasingly complex. These systems can surprise their creators, even when working as designed. And as artificial intelligence creeps into systems design, it’s becoming harder to figure out what’s happening inside our machines.

The first primitive autopilot was unveiled in 1914. By the 1930s, the technology was being used on commercial airliners. These simple devices, useful for keeping a plane heading in the right direction, gave way in the computer age to sophisticated systems that can take off, navigate, and land without any human assistance. The increasing power of airplane automation is a primary reason that the accident rate has fallen from 40 fatal accidents per million U.S. aircraft departures in 1959 to 0.1 today.

Commercial jets are expensive pieces of equipment — a Boeing 777 costs a quarter of a billion dollars — and great resources are lavished on making sure they work properly. Important systems are built triple-redundant, so it is extremely unlikely for them to fail completely. Systems even protect pilots from their own incompetence: If a pilot tries to command a potentially dangerous procedure, the automated flight control system will simply refuse to do it. (Sometimes pilots can override the system’s refusal; sometimes they can’t.)

Robustness, however, carries the inevitable price of complexity. Complex systems have a lot of parts, and that means there are a lot of ways that they can fail.

The spontaneously surprising behavior experienced by Air Sweden 294 was not an outlier. In 2015, a glitch caused a Lufthansa plane to suddenly dive steeply as it flew from Bilbao, Spain, to Munich. In 2008, a malfunctioning Qantas A330 abruptly plummeted while en route from Singapore to Perth, causing broken bones and spinal injuries among those on board. And in 2011, runaway flight controls on a Dassault Falcon 7X business jet caused it to unexpectedly pitch up into a steep climb while descending to land in Subang, Malaysia. If not for the quick thinking of the plane’s copilot, who slammed the controls and veered the plane hard onto its side, the result would almost certainly have been a fatal crash.

In many of these cases, investigators were later able to comb through the system and determine what went wrong. But not all. In the case of the Air Sweden accident, they were able to determine that the Air Data Inertial Reference Unit, or ADIRU — a device that tells the plane how it’s moving through space — had begun to send erroneous signals. But they couldn’t figure out why.

Obviously, we don’t want our machines running amok on us, and not only when we’re at 30,000 feet. If, as few dispute, we’re going to rely more and more on automated systems in the future, then we should have some understanding of when and why they can go haywire, and what we can do about it.

At present, autonomous systems like those used in airliners are designed from the top down: Everything about them was put in place intentionally by human designers. As a result, their complexity, while vast, is also finite, so if something goes wrong, it’s at least conceivable that the error can be identified and fixed, perhaps with the help of technology. At Airbus’ headquarters in Toulouse, France, engineers developing a new aircraft put it through its paces in a giant contraption called the “Iron Bird.” Consisting of all the subcomponents of a plane wired together and hooked up to a flight simulator, the device allows the engineers to simulate a great variety of different configurations and test how different failures would propagate through the system. The Iron Bird operates throughout the years of each new design’s development, and is kept operational after the model enters service, to test modifications and any issues that might crop up. Ultimately, this kind of approach — methodically testing every possible combination of inputs and errors — could become sophisticated enough to eliminate flaws entirely. At least in theory.

That won’t be true in the future. The kinds of hand-crafted, top-down engineered systems like those found in today’s airliners will be superseded. Right now the cutting edge in artificial intelligence research is so-called “deep learning” built on neural networks. If presented with large amounts of data and trained to categorize it, these systems can then parse new data into the correct categories. Shown pictures of pandas, for instance, a machine could then go out and find new pictures of pandas on the internet. This is the technology that underlies state-of-the-art facial recognition and machine translation systems.

Deep learning-dependent systems aren’t programmed in the way a top-down system is, said Vasant Dhar, a professor of data science at New York University. “They learn as they go along, autonomously. This lets you solve problems that are practically impossible to solve top-down, by humans specifying the algorithms.” This flexibility will be crucial to the successful operation of many kinds of systems in the real world. Self-driving cars, for instance, will have to deal with a huge range of situations for which it would be difficult to write enough prescriptive rules to deal with every single eventuality. “Negotiating a crowded intersection, for instance, is very hard to program from the top down,” Dhar said, “but by watching humans do it many times, these systems can figure it out for themselves.”

The downside to a neural net, said Dhar, is that it’s essentially a black box. Unlike a top-down system, there’s no way for the people who built it to understand why it acts the way it does. “You can never be entirely sure about what it’s learned,” he said. “You can look at the input/output behavior and say, ‘Yeah, it’s doing well,’ it’s behaving the way you want, but you don’t really know why.”

It’s not that engineers can’t peer inside the system as it’s working, but that the way neural nets process information is fundamentally inscrutable. The systems are constructed in layers, with the data — say, a raw image — coming in at the bottom, and the output — for instance a description of what’s in the image — coming out at the top. If you examine the system in action, it’s possible to figure out what each computation element is doing in the top and bottom layers, but what goes on in between is harder to characterize. “We can actually see the internals very clearly,” said Kyunghyun Cho, a colleague of Dhar’s at NYU’s Center for Data Science. “Except we don’t know how to interpret them.”

The way that neural nets are constructed leaves them prone to peculiar and surprising behavior. When fed “adversarial examples,” for instance — sets of data that have been tweaked slightly — deep learning programs can be led wildly astray. An image-classification system presented with an imperceptibly tweaked image of a panda might instead label it a gibbon. While this kind of failure is unlikely to crop up spontaneously, it could be taken advantage of by hackers, for instance in working around a security system based on photos of users’ faces.

Self-teaching autonomous systems can behave in other surprising ways, too. Because they can sift through vastly larger amounts of data than humans, and can explore vastly greater numbers of potential outcomes, they can arrive at conclusions that have never occurred to mere humans. Famously, Google’s Go-playing system, AlphaGo, not only managed to defeat the world’s top-ranked human player last year but in the process made a move that so flabbergasted its opponent that he had to leave the room to collect himself. The gambit was so unusual that one commentator later admitted, “I thought it was a mistake.”

It was, instead, a stroke of genius. But sometimes a neural networks’ surprising outputs really are just mistakes. Recently researchers trained neural nets on patient data at the University of Pittsburgh Medical Center in an effort to develop rules for treating pneumonia patients. The nets’ overall advice was better than top-down generated algorithms, with one potentially lethal exception: They thought that pneumonia patients who already had asthma should be sent home. The researchers investigated and found that the nets had noticed that such patients recovered faster, concluding that this meant they were low-risk. In fact, it was the opposite — these patients were so high-risk that hospital policy was to send them to intensive care right away. It was the effectiveness of this strategy that misled the computer.

Before we turn over crucial areas of our lives to self-training autonomous systems, said Cho, “we’ll just have to spend more time figuring out how to verify their correctness.”

It’s not entirely clear that’s going to be possible, however. One approach, Cho said, would be to build a second machine to look at the first. “We can build another system that looks at the transparent internals of the first system and learn to interpret them,” he said. “You can think of it as something like having a baby raised in a culture with another language. I won’t be able to understand any of what those people in the other culture are saying, but the baby will be able to understand everything and explain it to me.”

But how can we trust the second system, that has assured us about the first?

Said Cho, “Society needs to invest in this kind of research.”

* *

It’s one thing for automation to learn unpredictably and surprise us. It’s another for the technology to become so advanced that we lose the ability to even understand what it’s doing. If we get complacent about machines making countless kinds of decisions on our behalf, the danger is less that they’ll run away from us than that they’ll run away with us.

“How do you really control these systems that you don’t really fully understand?” Dhar said. “One of the consequences of this change that has occurred in AI is that you now have a machine that has learned how to learn, which historically has been the purview of human beings. The machines can now learn stuff autonomously, and you have no idea whether it’s learning desirable things or undesirable things.”

In the years ahead, engineers will become increasingly familiar with a sensation that parents have experienced since the dawn of time: being surprised by an entity that they have brought into the world. Robots, like children, are going to teach themselves about the world and learn things their creators never intended. Among artificial intelligence researchers, this results in what’s known as the “control problem.” It turns out that under certain circumstances, even simple agents can learn to maneuver around constraints imposed on them in order to achieve their programmed goals. Stuart Armstrong, a fellow at Oxford University’s Future of Humanity Institute, has devised a simple demonstration involving a robot tasked with pushing boxes onto a chute. The robot gets reinforced when it pushes a block down the chute. When a surveillance monitor is set up to watch the robot and shut it down after it delivers a single box to the chute, the robot can “learn” to fool the monitor by pushing boxes to block its view so that it can deliver more. (You can see a virtual robot figure this out in real time via an online Java implementation.)

“You get all sorts of weird behavior with a learning system,” said Anders Sandberg, a colleague of Armstrong’s at the Future of Humanity Institute. “It doesn’t need to be very complicated to behave in very surprising ways.”

As machines become more sophisticated, they will increasingly incorporate human behavior into their models of the world. To optimally achieve their goals they will have to adapt their strategies to our predicted behavior. If they think that we could hinder their objectives, for instance by turning them off, they will come up with ways to prevent that. Said Dhar: “We may not be able to ‘turn them off’ if they start behaving in a way we don’t understand.”

Working together, as a society, we may figure out a way to rejigger our settings so that we can maneuver our way out of danger. Or we might find ourselves overwhelmed by a problem whose dimensions we can barely comprehend.

* *

Forty seconds after their ADIRU malfunctioned, as Air Sweden 294 dove straight down through 20,000 feet, the pilot and his co-pilot could see their altitude unspooling at a horrendous rate, and knew they had to somehow regain altitude. But they were thoroughly disoriented, and instead of recovering control argued about whether to turn left or to turn right. For a moment, the pilot pulled back on the controls, causing the men to sink into their seats with three times the force of gravity. Then he pushed forward again.

One minute and 20 seconds into the incident, the jet hit the frozen ground with the velocity of a .45 caliber bullet. The impact instantly killed both members of the flight crew and carved a 20-foot-deep crater 50 feet across. When search-and-rescue helicopters arrived that morning, all that remained was an asterisk-shaped smudge of black on the flat whiteness of the valley floor.

Accident investigators still haven’t figured out what went wrong with the Inertial Reference Unit.

This story originally appeared on March 15, 2017 on The Outline

52 thoughts on “The Outline: When Machines Go Rogue”

  1. @jeffW, interesting article just released about issues that could have been similar to the Germanwings, this story and others? Horifying to read, I fly a lot intercontinal flights and this does not give me a relaxed feeling. Neither does it not give me trust in my car’s adaptive cruise control or future autonomous driving. True humans make errors, but humans make the computers AND write the software. How often does it happen that a computing device at home/work crashes for no reason? Worrying development.
    http://www.smh.com.au/good-weekend/the-untold-story-of-qf72-what-happens-when-psycho-automation-leaves-pilots-powerless-20170510-gw26ae.html

  2. @Rein, Thanks for the link. I mentioned that flight in passing in my article, though not by flight number–lots of good detail here.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.