New York: AirAsia Flight 8501 Crash Reveals the Dangers of Putting Machines in the Driver’s Seat

Eleven months after AirAsia Flight 8501 crashed under mysterious circumstances south of Borneo, taking with it the lives of 162 passengers and crew, we at last understand what happened: On Tuesday, Indonesia released a report revealing that the plane was doomed by a combination of minor mechanical glitches and pilot error. While this in itself would be grounds for concern, eerie similarities with another crash five years earlier suggest that an underlying vulnerability remains unaddressed in the worldwide air-travel system — one that could eventually have unexpected and far-reaching consequences for the driverless cars currently being developed by some of the world’s richest and most ambitious companies.

Flight QZ8501 took off from Surabaya, on the Indonesian island of Java, at 5:35 a.m. local time on December 28, 2014, bound for Singapore. Ahead lay a band of thunderstorms, some of them towering up to 44,000 feet high. After reaching the assigned cruising altitude of 32,000 feet, the flight crew called air-traffic control and requested a turn to the left to avoid a storm. Given permission, the pilots then asked to climb to 38,000 feet. Controllers denied that request, then soon afterward said the plane could go to 34,000 feet. But something had gone wrong. The pilots did not respond to the new clearance. Instead, without issuing a distress call or signal, the plane abruptly climbed, slowed, and banked into a steep turn. When it disappeared from radar, it was plummeting at a rate of more than 11,000 feet per minute.

For days it seemed as though the plane had simply vanished. Then, on December 30, the first bodies and debris were pulled from the ocean six miles from the plane’s last known location. More wreckage was recovered soon after, and on January 12, the black boxes were retrieved from the ocean floor.

Given the proximity of the thunderstorms and the flight crew’s urgent efforts to avoid them, it seemed that weather was likely a major cause of the accident. Indeed, before the black boxes were found, Indonesia’s weather agency issued a 14-page report stating that the plane had most likely been brought down by icing in the thunderstorm cloud tops.

But as Tuesday’s report takes pains to emphasize, it turns out that weather had no direct bearing on what happened. Instead, it focuses on pieces of equipment located in the tail of the aircraft called the Rudder Travel Limiter Units, or RTLU.

These are designed to prevent the rudder from moving too much at high speed, in order to prevent it from damaging the tail. Due to faulty wiring, the units aboard QZ8501 had been generating error messages in the cockpit with increasing frequency during the month leading up to the accident. The pilots had learned to deal with these messages by turning off power to one of the plane’s computers, then turning it back on again.

It’s worth bearing in mind that this problem was annoying, but not dangerous, per se — the plane was still airworthy. But as QZ8501 approached the mass of thunderstorms, the fault occurred several times in rapid succession. At the time, the plane was being flown by the second-in-command, a 46-year-old Frenchman who had only recently become an airline pilot and had just 2,247 hours of flight experience (compare that to the flight’s captain, a 53-year-old Indonesian with more than 20,000 hours of experience). At 6:16 a.m., 40 minutes into the flight, a series of warning lights triggered by the faulty RTLU led the pilots to turn off one of the flight computers, which caused the autopilot to disengage. Now the plane had to be flown manually.

The plane was still perfectly airworthy. But, perhaps preoccupied with the threat posed by the thunderstorm looming beyond his windshield, the co-pilot became disoriented. Failing to notice that the plane had started to bank to the left, he let it roll over into a steep, 60-degree turn. At the same time he pulled the plane into a steep climb. Bleeding off speed as it gained altitude, the plane became dangerously slow, and continued banking to the left until it rolled past vertical. The captain shouted, “Level! Level! Level!”; the men managed to get the wings horizontal. But their problems were not over. Still badly disoriented, the co-pilot kept pulling back on the stick, causing the nose of the plane to remain high. This prevented the plane from regaining airspeed and recovering from its aerodynamic stall. The captain understood the danger. The black boxes show that as the plane sank precipitously, he pushed his control stick forward to break that stall. At the same time, however, the co-pilot was pulling his control stick back, negating the captain’s effort. Unable to move forward with sufficient speed, the plane plummeted. When it hit the water it was moving at a vertical speed of 95 mph.

The details line up to an uncanny degree with the crash of Air France Flight 447, which disappeared en route between Rio de Janeiro and Paris in 2009. In that case as well, a junior pilot was at the controls in the right-hand seat. A band of thunderstorms lay ahead, and the flight crew tried to climb and turn to avoid the worst of the weather. But again, a minor technical mishap — in this case, a blockage of a speed sensor by ice crystals — caused the autopilot to turn off so that the plane had to be flown by hand. Already unnerved by the weather, and now assaulted by flashing lights and alarms, the confused co-pilot pulled back on the stick so that the plane climbed several thousand feet and then stalled. The more experienced pilot in the left-hand seat recognized what had happened and pushed forward on his control stick to break the stall, but his efforts were negated by the still-confused junior pilot. In both cases, coincidentally, the planes wound up in a steep spiral descent to the left, with impact occurring less than five minutes after the precipitating mechanical fault.

When France’s air-accident investigation bureau published its report into AF447 in 2011, the sequence of events seemed mind-boggling. How could a pilot with one of the world’s most prestigious airlines make such a beginner’s mistake as pulling the nose up during a stall? (Aviation analysts around the world, myself included, had spent two years trying to figure out what might have happened, and none of our guesses proved correct.) It seemed unlikely that such a thing could ever happen again. And yet here we are, with an accident so stunningly similar that it verges on plagiarism.

This recurrence suggests that there’s a fundamental vulnerability in the way humans interact with automated systems in a crisis. In the normal course of things, we come to rely on the automation so much that our own abilities fade away. Then, when the automated system suddenly goes haywire and shuts itself off, we don’t have the skills to handle the situation expertly. To make matters worse, the sudden stress of a life-or-death crisis tends to shut down our capacity for reasoned thought and leaves us prone to freezing up. In essence, aboard QZ8501 and AF447 alike, the machines panicked, and then the humans panicked.

No doubt in the wake of this second crash, the global airline industry will take steps to prevent this specific sequence of events from happening a third time. Yet the underlying issue will remain, and indeed spread beyond aviation, as automation becomes an ever-larger part of daily life. Already, the first self-driving cars are operating on the road, and the problem of keeping disengaged drivers alert has emerged as a major issue. Imagine you’re in the driver’s seat at night on a winding, rain-swept road, reading a book while the car drives itself, when a deer suddenly jumps in front of the car and the autopilot turns off. You’re going 80, an alarm is blaring, you have no idea where you are, and you’ve got less than a second to react. For engineers tasked with designing the system, what happens in that moment — that critical instant of handover from machine to human control — will likely pose a serious problem for a long time to come.

This article originally appeared on the New York magazine website on December 2, 2015.

11 thoughts on “New York: AirAsia Flight 8501 Crash Reveals the Dangers of Putting Machines in the Driver’s Seat”

  1. “In the normal course of things, we come to rely on the automation so much that our own abilities fade away.”

    The main problem…

  2. Accident statistics simply do not support the notion that automation contributes to accidents. Every statistic – fatalities, hull loss, minor damage,… have all continued to decline (since 1960) as a percentage of departures or operating hours (take your pick).

  3. One huge advantage self driving cars have is that they can fail safe by stopping – they can give up without having to involve the occupants or a driver. The context doesn’t even matter that much – be it on a highway, or the middle of a suburban intersection.

    A moving plane does not have that option. In cruise there may be some times when everything could just switch off and be left at its current position, but the plane is still moving (at roughly 10 miles a minute), with far smaller tolerances (eg coffin corner), and realistically someone or something needs to be in almost constant control. Heck, even the information needed by whoever/whatever is in control of a plane is far more extensive – speeds, angles, densities, temperatures, rates of change, etc – which makes it all the more difficult when any of them are unreliable. Consider just how much of the dashboard a car *needs* in order to be driven (pretty much none of it) versus a plane.

    This is going off topic, but self driving cars have far greater moral issues. eg some people run out into the road in front of one, and the car can’t stop in time, so it has to pick what damage to do – go straight and take them all out? swing left or right to take only one out (which one?)? go into oncoming traffic to hit another car (better protected)? And the list goes on (how well can steering under hard braking be predicted?).

    Going back on topic, the fix is relatively easy. Have pilots spend more time in simulators, and have scenarios that better exercise their responses and their knowledge of the systems and goals as a whole. However, this is the same industry that has conditioned passengers to pay very little (was that cause or effect?), has odious practices where pilots end up paying to work at an airline in order to get enough qualifying hours, and has been fighting more frequent tracking.

  4. @jeffwise
    One big difference between Boeing and Airbus is that Boeing has retained the traditional control yoke in front of each pilot, while Airbus has chosen to install small sidesticks beside the pilots. In an emergency, neither pilot can see which way the other’s stick is being pulled. This was a key issue in the Air France crash too.

  5. @DennisW

    “Accident statistics simply do not support the notion that automation contributes to accidents. Every statistic – fatalities, hull loss, minor damage,… have all continued to decline (since 1960) as a percentage of departures or operating hours (take your pick).”

    It’s hard to determine how much of that is due to automation. Today’s planes are of much better quality, with full redundancy, better navigation, etc., so that contributes as well.

  6. @StevanG

    There is no doubt whatsoever that the IT industry, which earns trillions with its products, owes 100% fail-safe solutions to its customers and the endangered public.

    The hardware/software interface is still as vulnerable as it was 40 years ago. The difference is, it didn’t pose immediate threats to humans then. In our times this vulnerability is becoming more and more of an unbearable risk.

    It’s just not possible to continue with trial-and-error rituals when one of those errors costs 300 lives.

    The whole approach to the problem has to be altered in a revolutionary way.

  7. @Phil Webb


    The things that stand out to me on a quick first pass are:

    1) No additional commentary on either the radar data or the French forensics – especially the latter. While the report offers a speculative comment on a controlled ditch, someone has the answer and they are not providing it.

    2) Continued adherence to AP models. While this is consistent with previous reports, Figure 4 flies in the face of this model without any additional comments. The three “bifurcations” shown in that figure are simply not consistent with any AP mode. While I have not tried to look at the bifurcations in detail, they are most likely at equal angles to a line drawn from the sub-satellite point to the actual flight path. A property I pointed out some time ago that was greeted by howls of protest from the usual suspects.

    3) No mention of the drift models (i.e. Geomar) which contradict the current search area.

    One has to wonder why this report was even published. I guess another SIO “confirmation” was overdue.

  8. @CosmicAcademy

    …and then we have this.

    I tell you none of us are safe. At least the robots are bolted to the floor, and are unlikely to come crashing through our homes like an airplane that could fall out of the sky at any time.

    The arguments against automation are not new, and I certainly do not regard automation as an “unbearable risk” as you apparently do. I am much more inclined to trust automation than a human operator relative to any task.

  9. @DennisW
    “I am much more inclined to trust automation than a human operator relative to any task.”

    I agree. There are also rumors about AI dangers, but almost nobody understands what is and what isn’t possible. For example, in automatic semantic text processing and communication, it seems this will be possible very soon – to scan the net and find/mark what’s really truth, confirmed by more objective and safe sources, and what could be/is a proven lie… In fact, I wish that lies and liars would vanish from the net. Machines can be programmed and trained to be absolutely objective, without any political point of view etc… an imaginable thing 🙂
