Exclusive Interview with Top MH370 Search Mathematician Neil Gordon

Fig. 1: Probability distribution function or “heat map” of where MH370 might have wound up.

When the Australian Transport Safety Bureau (ATSB) was tasked with finding missing Malaysia Airlines flight 370, it tapped another arm of the government, the Defence Science & Technology Group (DSTG), to tell it where to look. There a team led by Dr. Neil Gordon devised a mathematical approach based on Bayesian analysis to weigh all the possible routes that the Boeing 777-200ER could have flown, given the seven Inmarsat “pings,” the plane’s fuel load, environmental conditions, and the different settings available on the autopilot. From this they derived a probabilistic “heat map” of where the plane might have wound up (Fig. 1, above). The results showed that the jet most likely flew fast and straight, at high altitude, before running out of fuel and crashing. It was this analysis that allowed the ATSB to define the search area currently being scoured for traces of seabed wreckage. Yet, with less than 10 percent of the area left to be searched and not a trace found, it now appears they looked in the wrong place. Earlier this summer, the three nations responsible for the investigation—Malaysia, China, and Australia—jointly announced that they would not be extending the search after the last portion is completed this fall. Last month Dr. Gordon went on record for the first time to explain what might have gone wrong and where the next place to look for the plane should be. His answers formed the basis of an article for Popular Mechanics; for the readers of this blog I present a less filtered version of what Dr. Gordon had to say.

One of the crucial decisions you had to make was how to treat the 18:22 radar return. In your report, you wrote, “The final reported position from radar was at very long range from the sensor and there was a long time delay between it and the penultimate radar report. The report is at long range and it is likely to have rather poor accuracy because the angular errors translate into large location errors at that range.” Are you confident that that radar return is not anomalous, that it actually comes from the plane?

You’ve got to understand what our job in this investigation is. Our job is to take the data as presented to us by the accident investigators and project a trajectory from that.

Was there any explanation or speculation on why a plane would be detected at that point but not before or after?

I guess it was that they’ve just got snapshots off the radar screen. I’m speculating here but I would imagine they’ve recorded a video of the screen but they don’t necessarily have a digital backup of the measurements.

Any speculation as to why the satellite data unit was turned off and back on again?

That’s not something that we’ve concerned ourselves with. Our job is to process this set of numbers from the Inmarsat system.

At the end of the day, it’s a prioritization exercise, it’s not an exhaustive enumeration of all the possible ways you can do this, because, you know, I can draw trajectories that perfectly match the metadata measurements that fill a humongously large segment of the seventh arc. You can draw an enormous area that you’ve got to look at. But the reality is, there’s a finite amount of money that’s available, a finite amount of time. You’ve got to prioritize.

A lot of people have looked at the BTO data and said, “You can draw an arbitrary route.” And the answer is, yes, you can draw an arbitrary route, but if you didn’t know that this thing was generating these BTO arcs, the probability that you would fly it just right is essentially zero. 

It prefers simpler explanations rather than highly complex ones. So if doing one turn, staying on the same autopilot mode, fits the data as well as doing 35 turns, all having to be carefully orchestrated with these measurements you didn’t know were going to occur, then yes, it would prefer the single turn explanation, the simple explanation, over the highly convoluted, complex explanation.
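
A minimal sketch of how that preference falls out of Bayes’ rule, assuming a toy prior that charges a fixed cost per autopilot maneuver; the likelihood values and the per-maneuver probability below are illustrative, not the DSTG’s actual numbers:

    import math

    def log_posterior(log_likelihood, n_maneuvers, log_p_maneuver=math.log(0.1)):
        """Toy Bayesian score: data fit plus a prior that penalizes each maneuver.

        log_likelihood -- how well a candidate route fits the BTO/BFO measurements
        n_maneuvers    -- number of autopilot changes (turns, speed/altitude changes)
        log_p_maneuver -- assumed log-prior cost per maneuver (illustrative value)
        """
        return log_likelihood + n_maneuvers * log_p_maneuver

    # Two candidate routes that fit the seven arcs equally well:
    single_turn = log_posterior(log_likelihood=-10.0, n_maneuvers=1)
    many_turns  = log_posterior(log_likelihood=-10.0, n_maneuvers=35)
    print(single_turn > many_turns)  # True: the simple route dominates the 35-turn route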

Now, as I understand it the idea of Bayesian analysis is that as new data comes in, the probability distribution changes. In the case of the seabed search, each time a ship makes a sidescan sonar pass, the probability in that swathe essentially drops.

Absolutely.

And the probability goes up by a commensurate amount somewhere else. 

Another figure you should probably have in your head, I suppose, is that the 120,000 square kilometers that’s currently funded for searching, encompasses approximately 72-ish, 75-ish percent of the probability.

So where is the rest of the probability? 

It’s outside that 120,000—as you increase the area away from that zone, you obviously increase the amount of probability you cover. If you’d said before they started searching this 120,000 square kilometers, “What’s the probability you think you’ll find it in there?” I’d have said, “mid 70s.” Because that’s the probability content of that zone. Conversely, if you’d said, “I’m going to search 120,000 square kilometers, where do I put it to cover the most probability?” Then that’s where I’d have put it.
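
A minimal sketch of that bookkeeping, assuming a toy grid of seabed cells and a notional sonar detection probability (all numbers invented for illustration):

    import numpy as np

    # Toy prior over four seabed cells; values are illustrative only and sum to 1.
    prior = np.array([0.40, 0.35, 0.15, 0.10])

    def update_after_empty_search(prob, searched_cells, p_detect=0.9):
        """Bayes update after searching `searched_cells` without finding the wreckage.

        p_detect is the assumed chance a sidescan pass spots the debris field if it
        really is in the cell; mass in searched cells shrinks and the remainder is
        renormalized, so unsearched cells gain probability.
        """
        posterior = prob.copy()
        posterior[searched_cells] *= (1.0 - p_detect)
        return posterior / posterior.sum()

    posterior = update_after_empty_search(prior, searched_cells=[0, 1])
    print(posterior.round(2))  # roughly [0.12, 0.11, 0.46, 0.31]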

If it’s not in this 120,000 square kilometers, what are the alternatives?

The analysis that we do defines the probability distribution along the 7th arc. You then have to have a descent scenario. And the one that’s focussed on is the uncontrolled descent, and that’s done for a few reasons. So, if you look at the final electronic communication signaling that goes on in the final reboot phase, as it tried to boot up again at 00:19, that points to very high descent rates. If you look at the simulation results that Boeing have done for uncontrolled descent from that time, they’re consistent with the numbers you get from the final data messages. You also get the probability distribution of how many control mode changes we think have been made in the last five hours of flight. And the probability distribution there is very heavily stacked up on “none.” So if you strongly believe that there were no changes to the autopilot in the last five hours, and the final communications messages are pointing to a very rapid descent rate, then clearly that’s going to prefer uncontrolled descent over controlled.

I guess the obvious problem with an uncontrolled descent, though, is that it would imply that you would wind up very close to the seventh arc.

Yes.

And so, would you expect—

Well, I’ve just told you, we’ve only searched just over 75 percent of the probability. When we started I’d have said there was a one in three chance you wouldn’t find it, even if everything was as you’d expect.

Fig. 2: The current search box roughly approximates the 90% confidence region. (source: DSTG)

The 00:19 BFO value, they believe, would have been generated two minutes after the second engine flameout, and so basically within two minutes you’ve got a plane that’s now descending at 15,000 feet per minute; you’re going to quickly run out of altitude. Is the assumption, then, that despite things turning pear-shaped very quickly, the plane nevertheless was able to get 40 nautical miles past the seventh arc?

Don’t forget the 7th arc isn’t a precise entity. There is a spread of error on that, and how you map it down to the—it’s a range from the satellite, which has got an error on it, and then depending on what altitude you think it happened at, also adds an extra component to the error. And I guess they’ve based their spread beyond that on the results of the Boeing simulation of how the aircraft can come down from that point. Not something I know about, I take their word for it.
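
A back-of-envelope check of those numbers, with the cruise altitude, descent rate, and ground speed all assumed for illustration (and ignoring the spiral and phugoid behaviour the Boeing simulations show):

    descent_rate_fpm = 15_000      # ft/min, the rate quoted above
    cruise_altitude_ft = 35_000    # assumed altitude at second-engine flameout
    ground_speed_kt = 450          # assumed ground speed as the descent begins

    minutes_to_surface = cruise_altitude_ft / descent_rate_fpm       # ~2.3 minutes
    ground_track_nm = ground_speed_kt * minutes_to_surface / 60.0    # ~17 nautical miles
    print(round(minutes_to_surface, 1), round(ground_track_nm, 1))

Under those assumed numbers the airplane covers well under 40 nautical miles before reaching the surface; the rest of the width Gordon describes comes from the error on the arc itself (the BTO range error plus the altitude assumption) and from the spread of the Boeing descent simulations.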

The alternative would be that it is very close to the 7th arc, but it’s further north of where they’ve been searching. That of course would require a curved, slower route, which is lower-probability by your analysis. But it seems like the ATSB is not looking at that right now.

I guess they have got funding in the area to search. You can certainly lay out where you would go and what you would do if you were prioritizing, but at the end of the day there’s a finite area that’s funded for search.

If they said to you, “Dr Gordon, we’ve got another 50,000, should we go further north or should we go further out?”

Well, if you look at the probability distribution it would say, “Go up north.” [The probability distribution function] spreads out a lot up north. It’s quite constrained and peaked in the southern end, if you look at the distribution, and I guess it’s also interesting in the sense that that falls right into the maximum achievable range area as well. But then as you move north it’s still there but it tails off more slowly, which means you’ve got to search a larger kilometer-squared area to aggregate the same probability.

As you have all this new Bayesian information because of the absence of wreckage in the search zone, are you constantly updating your heat map?

There’s two ways, in a sense, you’d update it. One is you update it because they’ve dragged the sonar over and looked. And that starts fading things out and accentuating other areas. And then there are debris finds which enable you to think about where could they have—what are the plausible places they could have entered the water to get there.

You’re talking about drift analysis. 

Yes. But there are significant uncertainties attached to the drift models that generate those, and the distributions for those are much flatter and wider than the ones that come out of the dynamic analysis based on the satellite data. It takes a lot of aggregation of those to move things. Basically we haven’t had that aggregation yet.

There have been press reports that they are doing revised drift analysis, and perhaps it will rise to the level of significance?

As the data accumulates, the statistics will follow the right zones, as long as you represent the uncertainty properly in the information you’re putting in. So we’ve worked quite hard with David Griffin down at CSIRO to create a scheme to aggregate measurements based on his drift modelling into the process.

After the search has been suspended, will more reports be forthcoming?

I’d certainly imagine we would need to write something that aggregates our thoughts in hindsight, for sure. And update things. I guess it’s a point to say, we’ve put our mathematics out there, we’ve been touring around professional societies here, the international conference of signal processors, gave a talk there, we’ve been telling everyone “Here’s what we’ve done, here’s how the data works, please have a go and tell us where we’ve gone wrong.” So, we’ve made a big effort to ask people. The sort of difficulty we always come up with is, at the end of the day it’s a probability distribution, it’s not an analytic solution with a guarantee. It’s a distribution that represents the prioritized belief as to where you think it is relative to other places.

To get back to this BFO point at 00:19. The thing has just been turned back on. The question is, is it reliable? Is it anomalous, can we trust it as much as we trust some of the other data points?

And I guess that was some of the reticence to use it as an exact indicator, but certainly the manufacturers have done lots of tests for us on what the warm-up characteristics of their devices are. So we do have an understanding of the uncertainty that could be caused by that. I guess what we’ve done is, we’ve said, “Let’s imagine that everything was the worst case, pushing the BFO in the opposite direction to disavow a descent, what if it had turned back the other way and it’s going completely the opposite direction—what if, what if, what if—and then try to come up with a bound as to what’s the minimum descent rate that would now be required to explain this?” And it’s still a big number. And if you look at it across the two of them, the eight seconds apart, it’s an increasingly big number.
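
To see why even a loosely bounded BFO residual still implies a steep descent, here is a rough sketch of the vertical-Doppler arithmetic. The real BFO chain also includes satellite motion, the AFC correction and a fixed bias, and the aircraft’s Doppler pre-compensation accounts for horizontal but not vertical velocity, which is why vertical speed shows up in the residual at all; the frequency, elevation angle, and residual value below are my assumptions, not the SATCOM Working Group’s figures.

    import math

    # Assumed values for illustration; not the SATCOM Working Group's actual inputs.
    F_HZ = 1.6466e9              # approximate L-band transmit frequency
    C = 2.998e8                  # speed of light, m/s
    ELEVATION_DEG = 39.0         # assumed satellite elevation angle near the 7th arc

    def descent_rate_fpm(residual_bfo_hz, elev_deg=ELEVATION_DEG, f_hz=F_HZ):
        """Vertical speed needed to produce `residual_bfo_hz` of uncompensated
        Doppler along the line of sight to the satellite, converted to ft/min."""
        v_vertical_ms = residual_bfo_hz * C / (f_hz * math.sin(math.radians(elev_deg)))
        return v_vertical_ms * 196.85   # m/s to ft/min

    # A hypothetical 100 Hz residual already implies several thousand ft/min of descent.
    print(round(descent_rate_fpm(100.0)))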

It’s a conundrum, though. Given your analysis and given this BFO value, the plane should have been where they were looking for it.

They’ve searched 70 percent of the probability. It’s not a guarantee, it’s—if you’re going to spend that much money and search that much area, that’s the area that’s going to get you the most probability. But it’s not a probability 1 event.

Maybe part of the problem with the expectations of the public was that there were a lot of very confident pronouncements.

I guess that no statistician would ever have said that.

When the search officially ends at the completion of the 120,000 square kilometers, which is going to be in the fall, our fall, your spring, are you going to officially close the books, or are you going to keep working on it? What’s your stance going to be at that point?

I can only speak personally, but we would certainly keep open minds to new analysis we can do. If data comes along we’ll certainly analyse it. If new insights come along, we’re more than happy to analyze them. And we’ll go out and seek peer review on the methods, as we have as much as possible throughout the last year and a half or whatever. There are not only things to learn from this on the methods; the things we can learn as research scientists carry over to lots of domains of activity we do, so yes. We’re not going to be forgetting it.

Are you going to be issuing a report when the search officially ends, or is over? 

That’s a matter for the ATSB. But I can’t imagine they won’t. Certainly the Malaysians have to do a report on the investigation. That’s the Malaysians, not the Australians.

Before the debris was found, and all we had indicating that the plane went south was the BFO data, how did you guys know the BFO hadn’t been tampered with?

All I’ve done is process the data as given to me to produce this distribution.

After we spoke, I sent Dr. Gordon two follow-up questions, to which he replied via email.

At one time, the ATSB considered the 00:19 BFO values unreliable. What has caused the change of heart?

The BFO at 0019 was always understood to most likely indicate a descent. When the book was being prepared, there was uncertainty about the state of the reference frequency oscillator, namely steady state or start-up. For level flight, the BFO measurements are insensitive to altitude, so the model in the book provides only an indicative altitude estimate.  We did not aim to model altitude rate and the 0019 BFO was initially treated qualitatively. Subsequently, a review was performed by the SATCOM Working Group based on observations of 9M-MRO’s SDU historic transient characteristics and those of other SDUs.  This review resulted in limits on the amount of BFO variation at start-up and these limits allowed a more detailed treatment of the 0019 BFO. This understanding of the limits of the BFO start-up behaviour provides a bounded range of dynamics consistent with the measured BFO rather than a single value.

I would be very curious to see what the output of the filter would be if the starting point was the 18:22 radar return.

The attached figure shows the pdf as a function of longitude for an 1822 initialisation compared with an 1801 initialisation. They are not identical, but they are quite similar.

Fig. 3: It makes little difference to the DSTG’s probability analysis whether their starting point is the 18:01 or the 18:22 primary radar detection.

If you’d like a truly unfiltered version of our discussion, I’ve put the whole transcript on Dropbox.

32 thoughts on “Exclusive Interview with Top MH370 Search Mathematician Neil Gordon”

  1. I think these two clinch it for me as a straight dive into oblivion:

    a. And the one that’s focussed on is the uncontrolled descent, and that’s done for a few reasons. So, if you look at the final electronic communication signaling that goes on in the final reboot phase, as it tried to boot up again at 00:19, that points to very high descent rates. If you look at the simulation results that Boeing have done for uncontrolled descent from that time, they’re consistent with the numbers you get from the final data messages. You also get the probability distribution of how many control mode changes we think have been made in the last five hours of flight. And the probability distribution there is very heavily stacked up on “none.” So if you strongly believe that there was no changes to the autopilot in the last five hours, the final communications messages are pointing to a very rapid descent rate, then clearly that’s going to prefer uncontrolled descent over controlled.

    b. Subsequently, a review was performed by the SATCOM Working Group based on observations of 9M-MRO’s SDU historic transient characteristics and those of other SDUs. This review resulted in limits on the amount of BFO variation at start-up and these limits allowed a more detailed treatment of the 0019 BFO. This understanding of the limits of the BFO start-up behaviour provides a bounded range of dynamics consistent with the measured BFO rather than a single value.

    Those two statements prove that Boeing and the rest flogged the BFO to death and came to a conclusion of a death dive.

    Why has it not been found? It’s there hidden in some crevice missed by Fugro’s limited equipment, which is why there was a report I linked here that they are going to re-scan the area. And it could be whole with minimal surface debris, in which case the scientists we were laughing at last year could be having the last laugh:

    http://today.tamu.edu/2015/06/08/mathematician-theorizes-what-happened-to-mh370/

    Or it’s smashed to smithereens and got sucked into the garbage gyre and is floating circuitously around, with occasionally one piece fortuitously splitting off and landing on distant shores.

  2. 1. The researchers used applied mathematics and computational fluid dynamics to conduct numerical simulations on the RAAD Supercomputer at Texas A&M at Qatar of a Boeing 777 plunging into the ocean, a so-called “water entry” problem in applied mathematics and aerospace engineering. The team simulated five different scenarios, including a gliding water entry similar to the one Capt. Chesley B. “Sully” Sullenberger skillfully performed when US Airways flight 1549 landed in the middle of New York City’s Hudson River, a feat that’s referred to as “the miracle on the Hudson.”

    Chen said based on all available evidence — especially the lack of floating debris or oil spills near the area of the presumed crash — the most likely theory is that the plane entered the water at a vertical or steep angle.

    2. The fluid dynamic simulations indicate, for a vertical water entry of the plane, that there would be no large bending moment, which is what happens when an external force, or moment, is applied to a structural element (such as a plane), which then causes the fuselage to buckle and break up. As the vertical water-entry is the smoothest with only small bending moment in contrast with other angles of entry, the aircraft is less likely to experience “global failure,” or break up on entry near the ocean surface, which would explain the lack of debris or oil near the presumed crash site.

    Based also on the suggestions of other aviation experts, Chen said in such a situation the wings would have broken off almost immediately and………

    Of course 1 is for glide aficionados while 2 succinctly accounts for @Rob’s obsession with the wing 😀

    For the full paper:

    http://www.ams.org/notices/201504/rnoti-p330.pdf

  3. @airlandseaman: Thanks, Mike!

    @Wazir: That Texas A&M study was thoroughly debunked — turned out they had a vertical water-entry speed of something like 35 mph. It’s pretty intuitively obvious that a plane slamming into the water at 500 knots (or whatever) might as well be hitting concrete.

  4. @JS @RetiredF4

    “Even without drive being built-in, it may have been the best performing drive at the time, despite being undersized.”

    The IG Assessment said the SSD was not connected. It doesn’t say anything about whether or not the drive was built-in.

    I agree that one would use a SSD for performance reasons. That’s what I use as my primary drive in my computer.

    Maybe someday we will be able to see the Malaysian police report for ourselves.

    I don’t recall it being debunked @Jeff but if true the modeled speed was not realistic in relation to what actually happened.

    As such, the smithereens version would definitely hold more credence at 500 knots or so.

    Nice if you could provide a link to the debunking, though.

    Yes agree with @ALSM, excellent read @Jeff. Thanks for a job well done.

    also:

    http://masterherald.com/flight-mh370-victims-families-welcome-follow-up-search-operation-to-begin-in-october/45094/

  6. Jeff,

    Nice interview! Did Dr. Neil Gordon have any comments regarding the so-called unresolvable ‘geographical’ dependence of the BFO bias values?

    Can he provide all the data and analysis of the ~March 7 flight to KL, or other 9M-MRO flights just before March 8, 2014?

    Some is described statistically in the report. It would be useful to have all the data by timestamp.

    I asked ATSB in December 2015 for it. However, I never received anything from them.

    It may be possible to develop a deterministic BFO model from this data.

  7. @Oleksandr
    Regarding your post:
    http://jeffwise.net/2016/08/31/mh370-flight-simulator-claim-unravels-under-inspection/#comment-183358
    You said to DrBobbyUlich: “Thus first you stated that 166 Hz is the maximum BFO for level flight, then you stated that this value is not sensitive to the speed.” …etc.

    You have misunderstood a statement DrBobbyUlich made. He said: “Slowing down makes the shortfall in BFO at 18:27 slightly worse. In fact, the BFO then is fairly insensitive to speed.” Understand, he made the comment in regard to insensitivity IN THE CASE OF/FOR THE SITUATION OF ‘slowing down’. Therefore his statement is not automatically incompatible with his earlier comment (that you quoted) about the 166Hz figure. Seen in this light, you were somewhat unfair to follow on with words in the nature of an emotive attack.
    _
    In a later post of yours, regarding “1. Magnetic heading/track.” (after reaching a waypoint and then encountering a Route Discontinuity, whilst flying onward), you helpfully suggested we obtain clarification either from Boeing or from an experienced B777 pilot. 1. above could also be checked by reference to flight simulation software (such as the PMDG 777, preferably) – at least until we could definitely receive confirmation from a pilot.
    _
    @Jeff Wise, LouVilla apparently has a sim (not PMDG?), but touches upon this forum only occasionally – would you consider it appropriate to contact LouVilla to ask him if he could inform the forum what behaviour he sees on the (simulated) aircraft displays in regard to 1. above, and if he is receptive to answering some follow-up questions (as there would inevitably be follow-up questions…)

  8. Yes, thanks much, Jeff. Very revealing. For instance:

    1) Search box width

    – The IG’s original Sep/14 search box width recommendation was ~20nmi
    – by Mar/15, this had widened to ~30nmi (“high probability”)
    – by Apr/15, my stochastic model, calibrated to IG flight sim results, suggested the ~40nmi width already scanned contained 99% of the statistical likelihood

    Each of the above took both of Dr. Gordon’s points about BTO uncertainty into consideration. Yet:

    – in Dec/15 – with the search already ~60nmi wide – the DSTG “book” – without any stated justification – paved the way for a FURTHER widening of the scan zone

    – now, here in Sep/16 – after 9 months spent (wasted?) widening the search to ~70nmi – Dr. Gordon deflects Jeff’s invitation to reconsider zone width, suggesting only 70% of the recommended zone has yet been searched, and pretending that BTO errors account for much more than a TENTH of this width.

    Perhaps a greater proportion of density near the arc COULD have been searched by now, had his organization’s “book” not in essence told the ATSB to go wide instead.

    2) Debris found to date

    Interestingly, it was the NORTHERN half of the DSTG’s 34-40°S zone (per above graphic) which my Apr/16 IPRC data-based drift study suggested should have resulted in significant quantities of debris hitting Australian shorelines by Dec/14. While the IPRC study which generated the shoreline hit probabilities was not publicized until after the flaperon sparked interest in drift analyses, such data was not dependent on any found debris – it could have been generated as soon as the priority search zone was set, back in Sep/14. And the absence of debris on Oz shores could have been used to force a rethink of this zone by as early as Jan/15.

    Hiding behind the uncertainty factor of drift analyses is a cop-out. If the entire WIDE distribution of, e.g. the “Roy” piece’s possible starting points misses your search zone by a wide margin, it is time for a re-think.

    This reminds me of the two months spent searching the Wallaby plateau, despite the obvious counter-indications of the FDR’s frequency and range, the BFO data fit, and the ridiculously trigonometric path curvature required to access it. Deaf ears + failure of media to hold anyone accountable = dysfunctional search. Highly suspicious.

    @Jeff: can you please request from Dr. Gordon…

    A) concrete support for the DSTG’s baffling decision to push the scan zone width past 60nmi – and now past 70nmi, and

    B) a reason (better than the one given) to ignore what IO shorelines cry out to us (OZ seemingly empty to Sep/16, SA seemingly “full” by Dec/15)?

    Thanks in advance for your consideration of these requests.

  9. BFO MODEL COMPARISON

    Here is a comparison of my BFO model results with Richard Godfrey’s 2016 BFO model results for the test case suggested by Oleksandr:

    https://drive.google.com/file/d/0BzOIIFNlx2aUXzl0bDZXTXNpUHM/view?usp=sharing

    I used the same FF Bias value of 150.27 Hz in both models.

    Here are the take-aways:

    1. The overall agreement is quite good (+1.3 Hz difference).
    2. The Delta F (up) terms agree within < 0.1 Hz.
    3. The Delta F (down) results differ by +0.1 Hz.
    4. The delta f (comp) results differ by +0.3 Hz.
    5. The delta f (sat) + delta f (AFC) results differ by +1.0 Hz (more on this below).

    The “Satellite and AFC values” were listed at 18:25 (only 2 minutes prior to this test case at 18:27:04) by the ATSB in their Table 4 which I reproduced in the file linked above. The ATSB value is 10.7 Hz. My empirical model fit (also shown in the linked file) predicts 10.6 Hz. Godfrey’s empirical model predicts 11.6 Hz. It seems Godfrey’s model fit is a bit high (~1 Hz) for this time. Since this is the largest contributor to the overall difference, the agreement of the remaining terms is within 0.3 Hz, which is excellent.

    Although this is only one test case, it does suggest that 10 Hz errors near this time are highly unlikely.

    It will be interesting to compare Oleksandr’s and Yap’s results (when they are provided) with the first two models.

  10. @buyerninety

    1.) I use the B777-200LR from PMDG in FSX. The same virtual aircraft the Cpt. of MH370 used at home, I think.

    2.) I’m not sure I understand your question. Do you want to find out what the PMDG aircraft would do when the flightplan has no more waypoints after reaching the last one and the autopilot would follow a Route Discontinuity next?

  11. @DrBobbyUlich: thanks much for your spreadsheet (which I haven’t yet perused, but will), and for the additional context re: the small heading variance (SHV) you introduce to reflect uncertainty in wind speed/direction.

    Yes, your SHV may help close gaps in my understanding on two fronts:

    – if you insert random SHVs – and then select only those paths which satisfy the BTO’s – then the SHVs will force your path back onto each arc, effectively reversing out the chaotic wind effects one would otherwise expect to see.

    – yes, it seems plausible to suppose that one consequence of this framework is the apparent anomaly between 5 and 20S, which curves upwind, albeit slightly.

    To eliminate both confusion and any possible misinterpretation of the SHV as a “fudge factor”, might I suggest you…

    1) publish the exact same path you are proposing, but with the SHV reduced from 0.7° to 0.0°, so we can see how far off the arcs the raw wind data would otherwise push the modeled path, and

    2) consider re-parameterizing the SHV as a WV = Wind Variance – i.e. simply back-solve for the adjustment to wind speeds required to keep your path bang on each arc, so we can compare the back-solved winds to the empirical data recordings.

    Please understand me, Bobby: I am not suggesting for a moment that the empirical wind recordings should be trusted to the nth decimal point – not at all. All I’m suggesting is that the general public should be able to view with crystal clarity just how much your SHV matters, i.e. what is it worth in terms of equivalent adjustments to wind speed.

    Thanks much for your continued and careful consideration of my feedback. Please accept it in the spirit it is offered: merely to ensure the science behind your proposal survives (and is improved by, hopefully) challenges from every angle.

  12. @Brock
    >…in Dec/15 – with the search already ~60nmi wide – the DSTG “book” – without any stated justification – paved the way for a FURTHER widening of the scan zone

    No, you have misread the ‘book’. The Bayesian model can have nothing to say about the flight after loss of power. Section 10.5 makes it clear that the width considerations come from the ATSB report (reference 5 in the book) and the Descent Kernel (fig 10.9) was constructed with ATSB advice. The book certainly did not tell ATSB to go wider. The position of the 7th arc is DSTG’s responsibility, so Gordon mentions there is a potential error there.

    I think it is more interesting that Gordon’s response to Jeff’s challenge is to extend the search north along the arc, rather than wider.

  13. @Richard Cole: “I think it is more interesting that Gordon’s response to Jeff’s challenge is to extend the search north along the arc, rather than wider.”

    That is nothing new. The ATSB/DSTG party line has always been, and is even more vigorously defended recently, that the descent was unpiloted, low bank angle at first, more steeply only recently. According to that doctrine the descent must have ended close to the 7th arc. Therefore it is politically correct to say that extending the search north makes more sense than extending the width.

    I’m not saying that I disagree, but it is nothing new.

  14. @Gysbreght

    Well, there are certainly only a few directions for the search to move, so it is unlikely to be a new one! ATSB have had plenty of opportunities to pursue the doctrine you describe (by extending the search north or south) but have instead extended the search wider in the current search area (and shortened its length along the arc). Here we have one of the architects of the search area suggesting a re-extension north.

    @Richard Cole: thanks for responding. While I consider your correction somewhat beside the point – the DSTG owns that probability oval, and its effect on search strategy – I concede that the ATSB may well have supplied the analytics on which that oval’s width was based, so I accept your refinement of my concern.

    As long as SOMEONE is held accountable for extending search width (particularly INSIDE the arc) to widths counter-indicated by all independently generated evidence, we have no cause for concern. If the DSTG and ATSB point fingers at each other, we do.

    The only thing I find interesting about suggestions parts further NE should be searched is why such public statements weren’t being made 21 months ago, when the physical evidence pointing us NE was pouring in. I refer again to my “deaf ears” point above. Highly suspicious.

    Here we are, entering Month 31, and we have yet to falsify a single theory: too wide and short to falsify ghost flights, too narrow to falsify a piloted glide. Taken together, such behaviour constitutes compelling evidence of a search designed either to fail, or to a schedule.

  16. @Brock
    I will probably regret saying this but

    >…21 months ago, when the physical evidence pointing us NE was pouring in.

    What _physical_ evidence was there in Dec 2014? Physical means you can touch it, right?

    @LouVilla, in your FSX installation w/ PMDG, do those 5th and 6th data points come pre-installed as demos/points/etc?

  18. Mr. Gordon says the possible more northerly path is “curved”. Does that rule out the McMurdo/simulator path?

  19. Some interesting comments by Dr. Neil Gordon. If this were an investment ‘opportunity’, these guys would be in trouble. They would essentially be telling their venture capitalists that the more money they put into this, the greater the risk becomes (largely because of the successive expansions of the search area).

    If the search area were periodically changed to a new location – or even to several new ‘spot’ locations – that could signal progress and a reduction in risk. Processing this data is not where the risk is – the risk is in the validity of the input data. Probably we are all familiar with the ‘garbage in – garbage out’ idiom. It seems that Dr. Neil Gordon’s group is not concerned that they may indeed be processing garbage – and so far their output seems to reflect that lack of concern.

    Some of this IMO is due to a ‘break’ in ownership or responsibility of the task to solve this problem (find the wreckage). “All I’ve done is process the data as given to me to produce this distribution.” Would you be happy with that style of management if this were your money on the table?

    I do know enough about this process to respect Dr. Neil Gordon’s capabilities but not enough to delve into the details. Still, I wonder if probability analysis is really the best approach here. This is really a binary problem – you find the wreckage or you don’t find the wreckage and you have finite resources available. One approach would be to go after the ‘low hanging fruit’ first. Get some best guesses from the best sources and go have a look. Another approach is to follow your nose. Pieces of the aircraft have been found – or so we have been told. IMO the best place to look for more pieces is in the regions where pieces have already been found. Perhaps a piece will be found that contains some unambiguous and useful information.

    Mostly I do not believe that the analytical approach will yield the coordinates of the wreckage. There simply is not enough reliable data. The solution does not seem to be converging. I think we must be hopeful that there will be a confession of conscience, a deathbed confession, a whistleblower or a smart, ruthless investigator who can punch holes in the barriers to relevant information.

  20. @Shadynuk

    Excellent points. When you are a “hired gun,” which I have been on many occasions before getting a real job, you have to be very careful about setting the scope of the work. You don’t want to get sucked into the nuances. There is no money in that, and you can piss away a lot of unbillable hours screwing around with stuff that you were not specifically contracted to do. Gordon tells it like it was. Good stuff.

    That brings me back to Figure 5.4 which slipped through the ATSB censorship. It was the DSTG’s way of saying look at the shit you gave us to work with.

  21. @Oleksandr,

    You are the one confused.

    When I say the BFO there is “fairly insensitive to speed” (actually it changes only about 0.1 Hz per knot), you say it is “not sensitive.” That is quite a different thing. Perhaps you have difficulty with English expressions, but you mischaracterize my statements.

    I’m still waiting for your BFO model values for each term for the test case you proposed, but I guess you have given up trying to prove my model calculations have significant errors since all comparisons with mine so far seem to match quite closely. Godfrey’s matches mine within 0.3 Hz when using the same AFC and bias values. Apparently Yap’s also matches mine to ~1 Hz when using the same bias.

    I disagree with your statement that “the uncertainty in the BFO bias and wind speed allow for even exceeding the challenging value of 176 Hz”. What “uncertainty in BFO bias” are you considering? You can’t change the bias between the 18:25 and 18:27 BFOs (and all the later data). They all must use the same bias value. If you increase the bias arbitrarily to fit the 18:27 BFO then you miss the 18:25 BFO as well as the KLIA values (which determine the bias value within 1-2 Hz uncertainty).

    Regarding the maximum realistic speed, the wind was only 3 knots from the East at the time, so the ground speed and the air speed are virtually identical on a North track. You can’t make 20 knots of tailwind where it doesn’t exist. As I said, 570 knots of ground speed (and the same air speed) is unrealistic in this situation in explaining the ~176 Hz BFO data.

  22. @Shadynuk, @DennisW

    The risk capitalists won’t get any happier when they realize that the main investor’s only incentive in the business is #cough#rural support#cough# and doesn’t consider a penny lost whichever way it ends.

  23. @Brock McEwen,

    I have investigated the relationship between the RMS speed errors and the RMS lateral navigation errors for true heading routes using my fitting program. This provides insight into the effect of errors in calculated wind velocities on the variability of headings and speed during the post-FMT route. What I did is constrain the BTO and BFO errors so they always matched their expected RMS values. Then I set the maximum allowable RMS navigation error and finally I found the RMS speed error. As you can realize, these two parameters are not orthogonal. With large allowable navigation errors one can get small speed errors (that is what Inmarsat did in their Example Path), or, conversely, with small allowable navigation errors one will get large speed errors. Plotted against one another, there will be a “frontier” (quasi-hyperbola) beyond which no fit can go.

    I have made a number of trial fits with various lateral navigation error constraints, and the following figure shows how the RMS speed errors behaved:

    https://drive.google.com/file/d/0BzOIIFNlx2aURC13RU5IWTF4aVk/view?usp=sharing

    In this figure, I have plotted lateral navigation error on the horizontal axis, and speed error on the vertical axis. The data demonstrate a “knee” in the curve near 0.2 degrees RMS navigation error, below which the speed error increases dramatically. Allowing navigation errors larger than 0.2 degrees RMS has little effect on the speed error, which remains below 1 knot.

    My current route fitting program averages the wind information at the start and at the end of each leg (and the legs are generally between the handshake arcs). Thus there are two crosswind estimates, and the difference between them is what primarily determines the change in heading between this leg and the previous leg. Thus the navigation error in radians is approximately SQRT(2) * crosswind error / leg speed. In degrees the navigation error = 0.18 * crosswind error, so 1 knot crosswind error produces 0.18 degrees navigation error, and 2 knots produces 0.36 degrees navigation error. From the figure this latter value seems like a good setting to use that “clears” the knee of the curve in the figure. I was previously using 0.50 degrees, so 0.38 degrees is a bit tighter. Remember we only have 5 post-FMT legs to measure navigation error statistics, so the “knee” is actually rather imprecisely determined. Still, it indicates the crosswind errors are probably in the neighborhood of 2 knots RMS. Considering the potential errors in the dataset and how it is applied in my program, 2 knots seems quite reasonable.

    Note also that the general wind error has a headwind/tailwind component that directly affects the air speed errors (which are about 1 knot RMS). The effect of the wind errors is different on speed than on navigation angles. For the air speed I use the average value of the headwind at each leg end to compensate the ground speed and convert to airspeed. The error in the average headwind is thus [1/SQRT(2)]*headwind error. So a 1 knot air speed error implies a 1.41 knot headwind error for each measurement. We see that the navigation errors and the speed errors imply a (single measurement) wind velocity error of about 1.4-2.0 knots.
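
    A quick numeric check of the 0.18-degrees-per-knot conversion above, assuming a representative post-FMT leg speed of about 450 knots (an illustrative figure that reproduces the 0.18 factor):

    import math

    leg_speed_kt = 450.0        # assumed representative ground speed per leg
    crosswind_error_kt = 1.0    # one knot of crosswind error

    nav_error_rad = math.sqrt(2) * crosswind_error_kt / leg_speed_kt
    print(round(math.degrees(nav_error_rad), 2))   # ~0.18 degrees per knot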

  24. @Shadynuk

    I agree, analytical solutions are not going to converge. The ISAT data by itself, which is the only actual flight path data, is too under-constrained. There is little chance of drift analyses converging – Neil Gordon admitted as much. I don’t see any deathbed confessions or whistle blowers on the horizon (just my opinion). As for smart, ruthless investigators…, well I’ve been trying for ages, and made little headway 🙂 Obviously not ruthless enough.

    With the search effort, the problem all along has been the split of responsibilities, with one of the parties – the Malaysian Authorities, being as unhelpful as possible. The Australians responsible for the search, the Malaysians supposedly responsible for the investigation, and the confounding effects of obscure political interests. It’s all been discussed here before, I know.

    The way things stand at the moment, there is little prospect of the plane being found.

  25. Dr. Neil Gordon: “Another figure you should probably have in your head, I suppose, is that the 120,000 square kilometers that’s currently funded for searching, encompasses approximately 72-ish, 75-ish percent of the probability.”
    and –
    “Well, I’ve just told you, we’ve only searched just over 75 percent of the probability. When we started I’d have said there was a one in three chance you wouldn’t find it, even if everything was as you’d expect.”

    Strange statements. Is that what these graphs show?
    https://www.dropbox.com/s/wmjvqvk9uvfp8m1/Scan004.jpg?dl=0

  26. The ATSB wrote in December:

    “The yellow and pink lines are the 6th and 7th arcs respectively. The green line outlines the main area of interest representing approximately 90% of the PDF.”

    https://www.atsb.gov.au/media/5747317/ae2014054_mh370-definition_of_underwater_search_areas_3dec2015_update.pdf
    p. 4

    Perhaps Gordon is figuring in the possibility that the FMT occurred after 18:40, something he says in the full transcript. In that case, the ghost flight scenario is still on the table, he says.

  27. @Richard

    Very nice graphic. I found myself wondering about the logic of the “one in three” statement.
