20th Mar 2019, 17:39
VicMel
 
Originally Posted by FCeng84
If it turns out that the Ethiopian accident was the result of the same issues that led to the Lion Air accident, our industry has some major soul searching to do.

While the MCAS software update developed after the Lion Air accident, which is almost ready to go to the fleet, will likely remove reliance on the three MCAS design assumptions listed above, and thus would have greatly improved the likelihood of a safe outcome for the Ethiopian event, we are left with a huge elephant in the room. After making the planned update we still must address the following:
A. How many other key points in the 737MAX safety story are based on pilot response assumptions that may not be valid?
B. How about other airplane models? Are they deemed safe based on faulty assumptions regarding pilot action?
- For instance, how many current 737 crews (all models) would not respond quickly enough to a classic stabilizer runaway that was not arrested by column cutout (i.e., pulling the column far enough)? I know this is covered in simulator sessions for 737 pilots, but is that enough?
C. Given the current status and future of commercial aviation, have we gotten to the point where basic flying skills and system awareness are so low that we are at risk throughout the whole industry?
D. Can current and future pilot reaction shortfalls be addressed through training? If so, what kind, how much, and how often?
E. How will we know that we have achieved a sufficient industry wide level of safety?

Hoping to see FDR data from the Ethiopian accident soon. I sure hope someone from the PPRUNE community will find a way to get ahold of it and share it here.
FCeng84 - Excellent comments on the ‘bigger picture’ of the problem. You refer to “the elephant in the room”; this does not only apply to MCAS. Some years ago, when I was an aviation safety assessor shortly after the loss of AF447, it became clear to me that the premise that ADU TAS output is not a ‘safety critical’ parameter was badly flawed. The approach of suppliers and the aviation authorities was that TAS was only ‘advisory information’ and that incorrect data would be handled by ‘good airmanship’. It is ‘obvious’ from AF447, and from three other incidents I am aware of, that the probability of pilots always safely dealing with bad air data is not high enough to justify treating the ADS as not safety critical. Sadly this elephant is still hidden away, for now.

The same flawed argument now seems to be being applied to AoA with MCAS. I am only aware of three cases of MCAS not working correctly, and in only one of them did the crew manage to deal safely with the situation. The real, hard evidence therefore suggests that a significant proportion of pilots (somewhere between 10% and 90%) would not be able to cope. From what I understand about MCAS from the PPRuNe posts, I would expect this probability of pilots failing to cope would need to be of the order of 0.1% (or lower) for MCAS to be considered not safety critical. IMHO, only a basic understanding of Human Factors is needed to show that the MCAS safety assessment is fundamentally flawed. This is the elephant in the room; Boeing might perhaps try to hide it with a software patch, but it will still be there.
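
For what it is worth, that ‘between 10% and 90%’ claim can be put on a slightly firmer footing. A minimal sketch, assuming the three known MCAS events can be treated as independent trials (my simplification, not anything from the actual safety assessment), using a standard Clopper-Pearson interval:

```python
# Rough Clopper-Pearson 95% interval for the probability that a crew
# fails to cope with an MCAS misfire, given 2 failures in the 3 known
# events (assumed independent -- my simplification, purely illustrative).
from scipy.stats import beta

failures, trials = 2, 3   # two accidents vs. the one flight that recovered
alpha = 0.05              # 95% confidence

lower = beta.ppf(alpha / 2, failures, trials - failures + 1)
upper = beta.ppf(1 - alpha / 2, failures + 1, trials - failures)
print(f"P(crew fails to cope), 95% interval: {lower:.1%} to {upper:.1%}")

# Compare with the ~0.1% that would be needed before crew response could
# plausibly be counted on as the mitigation:
print(f"needed for 'not safety critical': <= {0.001:.1%}")
```

On those assumptions the interval comes out at roughly 9% to 99%; even its optimistic end is two orders of magnitude above the sort of figure that would support relying on crew response as the mitigation.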

The software patch looks inadequate to me for the following reasons:-
A) Quote from https://www.seattletimes.com/busines...ion-air-crash/
“According to a detailed FAA briefing to legislators, Boeing will change the MCAS software to give the system input from both angle-of-attack sensors. It will also limit how much MCAS can move the horizontal tail in response to an erroneous signal. And when activated, the system will kick in only for one cycle, rather than multiple times.”
I find this quite disturbing:-
i) They seem to have ‘defined’ the software patch before they even know the cause.
ii) How does having both inputs help if one of them is ‘erroneous, but believable’? (See the sketch after this list.)
iii) Presumably the software would have to be set to be cautious and use the higher (potentially wrong) value?
iv) MCAS was originally designed with its current limits in order to counteract a known problem. Presumably lowering the limits means that more ‘real’ problems will now not be safely dealt with.
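
To make point ii) concrete, here is a toy model of the kind of two-input comparison the reported change implies. Every name, threshold and rule in it is my own assumption for illustration; it is emphatically not Boeing's code:

```python
# Toy model of a dual-input AoA check for MCAS. All names, thresholds
# and logic here are hypothetical illustrations, not Boeing's design.
DISAGREE_LIMIT_DEG = 5.5   # made-up disagreement threshold

def mcas_aoa_input(aoa_left_deg, aoa_right_deg):
    """Return the AoA value MCAS would act on, or None to inhibit MCAS."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
        return None        # vanes disagree: inhibit MCAS entirely
    # Point iii): a stall-protection function would plausibly be biased
    # towards the higher (more stall-suspicious) of the two values.
    return max(aoa_left_deg, aoa_right_deg)

# A single stuck vane is caught...
print(mcas_aoa_input(4.0, 21.0))    # -> None (inhibited)
# ...but a common-mode error that shifts both vanes high together is not:
print(mcas_aoa_input(14.8, 15.2))   # -> 15.2 (erroneous but believable)
```

A common-mode error (icing, miscalibration) that shifts both vanes the same way sails straight through a disagree check, and point iii) means the erroneous high value can still drive the system.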

B) There may well be failure modes other than the AoA vane that need to be considered. From https://leehamnews.com/2018/11/07/bo...-air-accident/, the alpha vanes feed into the ADIRUs, where presumably they are at least A-to-D converted. Are the (non-safety-critical) ADIRUs a potential source of failures?

C) From The Seattle Times report, the MCAS was not considered to be a Level 1 safety critical system, so presumably the software was not designed, developed and tested to Level 1 standards. In that case, a software failure within MCAS has to be considered a feasible cause of its undesirable behaviour.

Considering your points A to E on the future of aviation safety: I fear the aviation industry is approaching a ‘perfect storm’ dilemma:
a) aircraft are becoming more complex; even Boeing consider that “average” pilots cannot cope with the workload of extra information about MCAS;
b) air traffic is increasing and new aircraft designs are ‘down-sizing’ as smaller aircraft are more cost effective; this means many more new, inexperienced, pilots will be needed
c) it is not possible to train pilots such that they all become ‘super Sulleys’.

My conclusions are:-
a) systems must no longer use human intervention as part of their safety case; we are too unpredictable.
b) safety critical systems must get smarter; garbage in, garbage out is not an option, and neither is giving up and disconnecting (a sketch of what ‘smarter’ might mean follows below).
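
As a sketch of what ‘smarter’ might look like for conclusion b): vote the sensor against an independent estimate instead of either trusting it or disconnecting. The simple kinematics, the median vote and all the names here are my own hypothetical illustration, not any certified design:

```python
# Sketch of conclusion b): vote each vane against an independent,
# inertially-derived AoA estimate instead of trusting it or giving up.
def inertial_aoa_deg(pitch_deg, flight_path_deg):
    """Crude kinematic AoA estimate: pitch attitude minus flight path angle."""
    return pitch_deg - flight_path_deg

def voted_aoa_deg(vane_left, vane_right, pitch_deg, flight_path_deg):
    """Median of the two vanes and the inertial estimate."""
    estimate = inertial_aoa_deg(pitch_deg, flight_path_deg)
    return sorted([vane_left, vane_right, estimate])[1]

# Left vane stuck at 21 degrees: the median quietly outvotes it, so the
# output stays sane instead of becoming garbage or a disconnect.
print(voted_aoa_deg(vane_left=21.0, vane_right=5.0,
                    pitch_deg=7.0, flight_path_deg=2.5))  # -> 5.0
```

The point is not this particular algorithm; it is that the system degrades to a sensible value rather than to garbage or to a sudden ‘you have control’.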
