To watch Deep Dive MH370 on YouTube, click the image above. To listen to the audio version on Apple Music, Spotify, or Amazon Music, click here.
For a concise, easy-to-read overview of the material in this podcast I recommend my 2019 book The Taking of MH370, available on Amazon.
Part one of figuring out the mystery of MH370 is finding explanations for the seemingly inexplicable things that happened. Part two is verifying whether those explanations hold water.
Today we revisit a topic that we explored in depth back in Episode 10, “The Vulnerability,” in which we talked about an idea that Victor Iannello and I have both worked on—namely, that MH370 had an unusual vulnerability that would have allowed a sophisticated attacker on board the plane to alter the data in its satellite communications system, so that when investigators looked at the data later they would think the plane went south when it really went north. (If you’re interested in learning the details of the theory, I’ve posted a précis here.)
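To make the shape of the proposed attack concrete, here’s a deliberately oversimplified sketch. This is my own toy model, not the real Inmarsat BTO/BFO math: the frequencies, velocities, and bias values are invented for illustration. The point it demonstrates is just this: if the logged frequency value is the sum of a fixed oscillator bias and a velocity-dependent Doppler term, then an attacker who can shift the bias by the right amount can make a northbound flight produce the frequency signature of a southbound one.

```python
# Toy model (illustrative numbers only): the logged Burst Frequency
# Offset (BFO) is approximated as a fixed oscillator bias plus a Doppler
# term that depends on the aircraft's radial velocity toward the satellite.

F0 = 1.6e9   # carrier frequency in Hz (L-band, rough order of magnitude)
C = 3.0e8    # speed of light, m/s

def doppler_hz(radial_velocity_ms):
    """Doppler shift for a given radial velocity toward the satellite."""
    return F0 * radial_velocity_ms / C

def bfo_hz(bias_hz, radial_velocity_ms):
    """Simplified measured BFO = oscillator bias + Doppler contribution."""
    return bias_hz + doppler_hz(radial_velocity_ms)

NOMINAL_BIAS = 150.0   # Hz, assumed fixed terminal oscillator offset
V_NORTH = 120.0        # m/s radial component on a northbound track (assumed)
V_SOUTH = -80.0        # m/s radial component on a southbound track (assumed)

# What the ground station would log for a genuinely southbound plane:
genuine_south = bfo_hz(NOMINAL_BIAS, V_SOUTH)

# An attacker who can shift the terminal's frequency bias by exactly the
# difference between the two Doppler terms makes a northbound flight
# produce the southbound BFO signature:
spoofed_bias = NOMINAL_BIAS + doppler_hz(V_SOUTH) - doppler_hz(V_NORTH)
spoofed_north = bfo_hz(spoofed_bias, V_NORTH)

print(genuine_south, spoofed_north)  # the two values are identical
```

From the ground station’s side, the two recordings are indistinguishable; the direction of flight is only recoverable if the bias assumption holds. That is the whole crux of the vulnerability argument.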
I’ve been thinking about this idea for a long time. There was even a whole episode of the Netflix documentary “MH370: The Plane That Disappeared” about it. But it’s taken this podcast to spur me to do something I wish I had done a long time ago, which is to seek out the opinion of cybersecurity professionals. From the perspective of someone whose job it is to assess potential hacking vulnerabilities, does it seem like MH370 had one?
I was able to tap the expertise of someone who really knows his stuff, Ken Munro, the founder of Pen Test Partners in the UK. As the name implies, Ken’s company specializes in penetration testing, which means that they probe a client’s computer network for vulnerabilities to see if they can get inside the system. The idea is that by imagining all the ways a bad guy could hurt you, you can take steps to prevent them from happening. Though his skills are applicable in every corner of IT, Ken specializes in aviation. Recently he and his team were able to get access to a real 747 that wasn’t being used and borrow it for a bit to test it for security vulnerabilities (and found some interesting ones).
I figured if anyone could tell whether a proposed vulnerability is plausible or not, it would be Ken.
I sent Ken my write-up of the idea and then asked him what he thought about it. We had a fascinating discussion, which you can watch in the YouTube video above. The take-home for me was his assessment of my proposed vulnerability: “Technically, it stacks up…is it possible? Yes.”
However, as Ken made clear, he doesn’t think that the vulnerability was exploited. “As one of my colleagues suggested, a ton of gold might land on my driveway tomorrow, which I’d be very happy about, but it’s pretty implausible,” he said.
I understand why he thinks it’s unlikely that the vulnerability was exploited, but I don’t think that means it should be dismissed. We’ve always known that executing a spoof attack to make MH370 disappear would be a huge technical challenge. But in trying to figure out what happened to MH370, we need to distinguish the impossible from the improbable.
I would argue that, just as penetration testers need to cast their imagination as far and wide as they can in order to detect possible avenues of attack, we need to explore the outermost limits of what might have happened to MH370 in order to be sure we haven’t missed the truth.
I think the Australian search authorities did themselves a serious disservice by deciding very early on that the Inmarsat data could not have been spoofed, especially since, ten years later, they don’t really have a workable hypothesis that explains the evidence in hand. Above all, they can’t explain why they didn’t find the plane on the seabed where their calculations said it would be, as I publicly predicted in 2015.
In a normal, innocent air crash environment, Occam’s Razor is a useful tool. As a general rule, simple explanations are best. But in an environment with security risks, you can’t afford to set aside implausible ideas. Bad actors can find devious and complicated ways to break through defenses — more ominously, to prevent you from even realizing that they are there. Engineers and scientists like those at the CSIRO, DSTG and ATSB are not used to looking at the world from a security perspective. They have never been in an environment in which Doppler shifts are spoofed. Everything in their training has told them that it’s okay to laugh off scenarios that seem low-probability and put their money on the one that seems most plausible.
In a world where there are bad actors, though, this is a dangerous state of mind. To be safe, you have to be able to imagine the worst that an adversary could do. Even if you are 99 percent certain that you’re safe and your security measures are satisfactory, you have to ask: what could I have missed? What assumption have I left untested?
If we look at MH370 through a security lens, the starting point shouldn’t be “is the spoof unlikely?” but “is the spoof impossible?” The best approach to possible attack scenarios is to explore their ramifications, rather than to assume they don’t exist.