The way the FAA assesses the risk of a fatal airplane crash after a safety incident needs substantial overhaul, according to an evaluation conducted by an expert panel for the National Academies of Sciences, which Congress mandated following the two Boeing 737 MAX crashes.
The FAA declined to discuss with the expert committee the specific risk analysis that prompted the review: the safety regulator’s December 2018 risk assessment after the first crash of a Boeing 737 MAX in Indonesia that led to the decision to allow the MAX to continue to fly.
Nevertheless, the 74-page technical report, published Wednesday, offers 13 recommendations for substantial improvements in the process that produced that 2018 assessment.
A month after the Indonesian MAX crash, the FAA formally assessed the risk of further crashes due to a malfunction of the same flight control system. It concluded that, without a fix, there would be about 15 crashes over the entire life of the MAX fleet — one crash every two or three years on average.
With that analysis, the FAA decided to allow the MAX to continue flying while Boeing came up with what was expected to be a quick software fix — until the second crash occurred just over four months later.
Though unable to comment specifically on that faulty 737 MAX risk analysis, the committee "was able to make recommendations that, if adopted, would significantly improve the … process," the report states in its preface.
The new report notes upfront that “FAA management declined to provide additional details or to discuss the [December 2018 risk] analysis of the 737 MAX with the committee.”
Tarek Milleron, whose niece Samya Rose Stumo died in the second MAX crash in Ethiopia, has called the FAA’s 2018 risk assessment “stunningly incompetent.” He pushed to have this independent review of the process included as a mandate in the Aircraft Certification Safety and Accountability Act that Congress passed in 2020 in response to the MAX crashes.
Milleron welcomed the report’s recommendations, but was angered by the FAA’s unwillingness to share with the committee the specific data upon which it based its 2018 conclusions about the risk of another MAX crash.
“The broad scope of improvements dryly suggested by the committee amounts to an absolutely humiliating assessment of the FAA’s risk-analysis procedure,” Milleron said.
“But the FAA gave the double middle finger to families of victims of the MAX crashes, to Congress and the general public by refusing to discuss with the National Academies data inputs to the risk assessment it did for the MAX after the Lion Air crash” in Indonesia, he added. “I’m astonished at how the FAA continues to stonewall.”
In a statement, the FAA said, "We appreciate the Academies members' time and expertise for this important work. We welcome outside scrutiny and are carefully reviewing the report."
Rep. Peter DeFazio, D-Ore., chair of the U.S. House Transportation Committee, who played a big role in the legislation requiring the report, urged the FAA to implement its recommendations "without delay."
Subjective judgments in place of data
The FAA’s formal process for assessing the risk of an aircraft accident is known as the Transport Airplane Risk Assessment Methodology, or TARAM.
As with the TARAM done on the MAX, the result is a very precise set of numbers that specifies the mathematical probability of an accident happening and the predicted real-world outcome.
The 2018 TARAM on the MAX, for example, projected that without a fix the flight control flaw would cause 15.4 crashes and kill 2,921 people in the 30- to 40-year life of the entire fleet.
A bottom-line finding that the crash risk for an aircraft fleet is above 0.02 requires the FAA, at a minimum, to issue an Airworthiness Directive.
When the MAX figure came in at 15.4, the FAA did so, flagging the flight control issue for pilots and providing them instructions on how to handle a repeat incident — instructions that proved woefully inadequate.
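The 15.4 figure is, at bottom, an expected-value calculation: a per-flight probability multiplied by the total number of flights the fleet will accumulate. The inputs below are hypothetical round numbers chosen for illustration (the FAA has not disclosed its actual 2018 inputs), but they show how arithmetic of this kind produces a fleet-wide projection like 15 crashes.

```python
# Illustrative sketch of a TARAM-style expected-outcome calculation.
# All numbers are invented assumptions, not the FAA's actual inputs.

def expected_crashes(p_crash_per_flight: float, total_fleet_flights: float) -> float:
    """Expected crashes = per-flight crash probability x total fleet exposure."""
    return p_crash_per_flight * total_fleet_flights

# Hypothetical per-flight crash probability, and a fleet assumed to
# accumulate about 100 million flights over its 30- to 40-year life.
p = 1.5e-7        # assumed probability of a fatal crash per flight
flights = 1.0e8   # assumed total flights over the fleet's life

print(round(expected_crashes(p, flights), 1))  # 15.0
```

Note how sensitive the headline number is to the per-flight probability: every factor-of-10 uncertainty in that tiny input scales the projected crash count by the same factor of 10.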
In any case, those numbers are misleadingly precise.
Robert Jones, a former FAA safety engineer who has done several TARAM analyses in his career, said that while the process “gives the FAA a standardized way to do the analysis, it takes something that’s really qualitative and makes it look quantitative.”
“It’s not really that precise,” he said. He recalled doing one TARAM (not the 2018 MAX analysis) in which he had to come up with a probability that the pilots would not respond appropriately to a particular system failure.
He consulted several FAA pilots, who gave him hugely varying estimates. Based on his own experience with flight controls over 35 years, Jones selected an average figure from those the pilots suggested and used 1 in 100 as a reasonable estimate. But choosing a different value for that subjective estimate could have produced a significantly different outcome.
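Jones' point can be made concrete with a toy model. Suppose, hypothetically, that a crash requires both a system failure and the crew mishandling it; the failure rate and flight count below are invented for illustration. Sliding the subjective crew-response estimate from 1 in 10 to 1 in 1,000 moves the projected crash count by two orders of magnitude, even though every choice in that range might seem defensible.

```python
# Sketch of how one subjective input dominates a TARAM-style result.
# Hypothetical model: a crash occurs only when a system failure happens
# AND the crew does not respond appropriately. All rates are invented.

def projected_crashes(failure_rate_per_flight: float,
                      p_crew_mishandles: float,
                      fleet_flights: float) -> float:
    """Projected crashes = failure rate x mishandling probability x exposure."""
    return failure_rate_per_flight * p_crew_mishandles * fleet_flights

FAILURE_RATE = 1e-5   # assumed system failures per flight
FLEET_FLIGHTS = 1e8   # assumed flights over the fleet's life

for p_mishandle in (1 / 10, 1 / 100, 1 / 1000):
    crashes = projected_crashes(FAILURE_RATE, p_mishandle, FLEET_FLIGHTS)
    print(f"crew mishandles 1 in {round(1 / p_mishandle)}: "
          f"{round(crashes, 1)} projected crashes")
```

Because the final number is linear in the subjective estimate, the analyst's judgment call about pilot behavior can swing the result as much as all the hard reliability data combined.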
The National Academies report acknowledges this.
One recommendation asks that the FAA establish guidance on how to account for the uncertainties associated with varying the inputs to the TARAM.
Another focuses specifically on how to weigh the consequences of varying human reactions and asks that the FAA initiate an effort “to quantify the human performance of flight, maintenance, and cabin crews under the wide range of contexts experienced in civil aviation.”
Noting that the FAA’s decision-making process is “qualitative and relies on engineering judgment,” it recommends a more quantitative failure analysis and suggests the probabilistic risk analysis used by the U.S. Nuclear Regulatory Commission as a model.
Only one “super-expert”
Jeff Guzzetti, a former FAA and National Transportation Safety Board airplane accident investigator who is now an aviation safety consultant and served on the National Academies committee, said a precise quantitative analysis requires a lot of in-depth data about system reliability and maintenance "that the FAA currently doesn't use."
The estimates used “weren’t plucked out of the air,” Guzzetti said. “They were based on something, but they were not based on quantitative data. It was qualitative.”
He said the most important recommendation in the report is that all the aviation stakeholders — the FAA, Boeing, its suppliers and the airline operators — reach an agreement to explicitly define how safety data is collected, monitored and analyzed to improve the quality of the inputs that inform the TARAM outcome.
Boeing declined to comment.
The report also recommends that the TARAM process be subject to “independent peer review and quality assurance,” at least in the case of significant in-flight safety incidents.
Implementing the National Academies recommendations will clearly require the FAA to hire more engineers and invest in deeper training.
The report notes that the FAA has just a single recognized subject-matter expert in TARAM risk analysis: 21-year agency veteran John Craycraft, who helped develop the process and appeared before the committee last fall.
Guzzetti said the panel was surprised at the lack of depth in available expertise.
“There was really only one super-expert,” said Guzzetti. “There used to be two or three, but through retirements and things like that, the legacy brain trust was down to one person.”
The report adds that the FAA currently has no formal training curriculum or regular training schedule on the details of how a TARAM is compiled.
Two of the 13 recommendations address this lack of depth in expertise. They recommend the FAA formally develop “multiple employees” to be TARAM experts and set up a technical training program for aviation safety engineers and their management on how to do a risk analysis.
Jones said he’s heard from former colleagues that the FAA has already begun setting up an organization focused on improving its TARAM analyses.