(I’ve updated this post as the situation has developed.) Public panic, fueled by government and media overreaction to the Boeing 737 MAX incidents, makes a poignant case study of how political-economic and cognitive biases can fuel mass hysteria.
A hostile Chinese government, promoting its own aircraft industry, hastily banned 737 MAX flights before any evidence of the cause emerged. The UK and EU soon followed. The UK Civil Aviation Authority admitted in announcing its ban, “we do not currently have sufficient information from the flight data recorder.” The EU’s European Aviation Safety Agency confessed that “it is too early to draw any conclusions as to the cause of the accident,” admitting a decision based on ignorance. But safety doesn’t work by fact-blind majority vote.
Aircraft accident statistics show that human error (e.g., inadequate training) is the more likely cause. Airlines with higher standards have not reported problems with the 737 MAX. Lion Air is a highly troubled airline with a history of violating safety standards.
Statistically speaking, it makes more sense to ground Lion Air’s and Ethiopian Airlines’ 737 MAX fleets until the causes are determined. Rather than say “I’m not flying on a Boeing,” logical people would say “I’m not flying Lion Air or Ethiopian Airlines until they find the cause.”
Again, if one takes a statistically rational view, aircraft accidents mostly occur in the takeoff or landing phase, so it’s no surprise that two aircraft had accidents in the takeoff phase. As I understand it, the Lion Air investigation has already implicated improper maintenance on the angle-of-attack (AOA) vane, which of course affects the Maneuvering Characteristics Augmentation System (MCAS). I have researched Lion Air to some extent; it has a relatively terrible safety record as well as a toxic organizational culture.
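The phase-of-flight point above can be sketched numerically. The figures below are hypothetical, illustrative numbers I have chosen for the sketch, not real accident statistics: if a sizable share of accidents historically cluster in the takeoff/climb phase, then two independent accidents both falling in that phase is not, by itself, strong evidence of a shared cause.

```python
# Hypothetical distribution of accidents by flight phase.
# These shares are assumptions for illustration only.
phase_share = {
    "takeoff/climb": 0.35,
    "cruise": 0.10,
    "descent/approach": 0.25,
    "landing": 0.30,
}

# If two accidents are independent draws from this distribution,
# the chance that both land in the takeoff/climb phase is the
# square of that phase's share.
p_both_takeoff = phase_share["takeoff/climb"] ** 2
print(f"P(both in takeoff/climb) = {p_both_takeoff:.3f}")
```

Under these assumed shares, the coincidence has roughly a one-in-eight chance even with no common cause, which is why phase similarity alone shouldn’t drive conclusions.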
Even if there is an MCAS problem, the reason we still have human pilots in the cockpit is to properly address malfunctions. It’s much more likely that the pilots in the Lion Air case did not follow the procedures that were in place for a malfunctioning MCAS. It would be rare to see a design flaw in an aircraft that has passed all airworthiness qualifications yet cannot be overcome by proper piloting.
In the Ethiopian case, several ground observers (admittedly of very poor reliability most of the time) reported some kind of smoke from the plane. If there were a fire, it would not be surprising that, in dealing with that problem without autopilot, there would be pitch problems as distracted pilots attended to something else; i.e., pitch problems not caused by an MCAS malfunction.
Aside from the technical details, which will eventually become clearer, the flood of misinformation in both social and professional media serves as an excellent illustration of cognitive biases. The first is availability bias: statistically unlikely events are judged more probable because they come to mind more easily. People jump to conclusions from two high-visibility accidents with superficial similarities. Imagine several Prius crashes in similar circumstances; would there be a call to ban Prius travel? The false-consensus bias also shows: countries banning flights while still ignorant of the cause promote mass psychology, like a run on a bank, with social media threads feeding on the panic.
I’m not against grounding aircraft when there is actual evidence of a design flaw. Since I originally posted this article, it appears that some 737 MAX pilots may have previously raised concerns (though one should be skeptical of these reports until they are confirmed). Additionally, the FAA has cited evidence of altitude control problems that may implicate the MCAS. If true, it’s appropriate to ground the aircraft until it’s clear either that there is no problem or that it has been fixed.
The accident investigations may yet find a design or mechanical flaw in the 737 MAX. The point of my observations here is that public policy and opinion are easily swayed by strong political-economic influences born of self-interest, and sustained by cognitive biases that don’t rely on factual evidence.
How can/should organizations respond when facing public criticism fueled by such biases?