PPRuNe Forums - View Single Post - Ethiopian airliner down in Africa
Old 31st Mar 2019, 13:46
TryingToLearn
 
Originally Posted by Blythy
Triplexing/voting works on the assumption that channel failures are independent: a single failure is unlikely, so a failure affecting two channels simultaneously is extremely unlikely. It does not take into account a single root cause (as in the XL Airways incident) that affects two channels at the same time.
As an example, the Space Shuttle had four identical computers that voted against each other in case of a discrepancy. There was, however, a fifth computer (used only for ascent and re-entry) with different hardware and different software, in case a fault with a common root cause hit the software/hardware of all four.
I think that's the point:
Redundancy is a measure against random faults. Diversity is a measure against systematic faults.
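To make that difference concrete, here is a toy sketch (pure illustration, nothing to do with the actual 737 or any real flight-control code): a 2oo3 voter masks a random fault on one channel, but a common-cause bias that shifts all channels identically sails straight through.

```python
# Toy sketch only: 2oo3 median voting vs. a common-cause (systematic) bias.

def vote_2oo3(a, b, c):
    """Return the median of three redundant channel readings."""
    return sorted((a, b, c))[1]

true_aoa = 5.0

# Random fault: one channel goes wild -> the voter still returns a sane value.
print(vote_2oo3(true_aoa, true_aoa, 90.0))                            # 5.0

# Systematic fault: the same design/production error biases every channel
# by +22.5 deg -> the voter happily confirms the wrong value.
bias = 22.5
print(vote_2oo3(true_aoa + bias, true_aoa + bias, true_aoa + bias))   # 27.5
```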
A stone-age sensor that has worked perfectly on one aircraft type (thousands of planes over decades) is now mounted on a modified type, and the fault rate has gone up drastically (roughly 350 planes, maybe 500 flights each, 6 failures). Or does every old 737 get new AoA sensors every year because they fail that often?
How can you explain that with statistics?
I would assume this cannot be explained without a systematic failure (wrong design, production fault) that drives this drastic increase in failure probability, especially since the failure mode is always the same in timing (already present before/at the start of the flight) and even in magnitude (about 22.5°).

So how do you prevent that with redundancy (2oo2)? The actual statistics give roughly one fault per 29,000 flights (about 175,000 flights, 6 failures), so even if both sides were fully independent you would expect a double fault about once every 850 million flights. With the number of airplanes ordered, that is roughly once every 20 years. But since systematic faults can degrade reliability in unknown ways, even this calculation is very optimistic...
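A rough back-of-the-envelope check of those numbers (the fleet size and flights-per-aircraft figures are the same assumptions as above):

```python
# Back-of-the-envelope check using the assumed fleet numbers above:
# ~350 aircraft, ~500 flights each, 6 known AoA sensor faults.

flights  = 350 * 500                 # ~175,000 MAX flights so far (assumption)
failures = 6                         # observed AoA sensor faults (assumption)

p_single = failures / flights        # per-flight probability of one side failing
p_double = p_single ** 2             # only valid if both sides fail independently

print(f"single fault: ~1 per {1 / p_single:,.0f} flights")    # ~1 per 29,167
print(f"double fault: ~1 per {1 / p_double:,.0f} flights")    # ~1 per 850 million

# A systematic (common-cause) fault breaks the independence assumption,
# so the real double-fault rate could be orders of magnitude worse.
```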
Limiting the capabilities of MCAS is the band-aid (less trim, and only once); comparing the two sensors is just a gimmick.
And claiming to fix something caused by a systematic failure without having identified that failure is...

Oh, and as a safety consultant working on inductive resolvers at ASIL D (the highest automotive safety integrity level), I can only imagine two failure modes that would cause such a deviation if the usual diagnostics are in place (vector-length check, range check, ...):
a) Electromagnetic interference 'locks' the driving-coil resonator onto the EMI frequency, which is also picked up by the receiver coils and then demodulated onto the sin/cos outputs (maybe from the new engines / engine electrical generators...).
b) If the resolver is built with a ±45° mechanical angle range (360° electrical equals 90° mechanical) and the software runs on a stone-age 80286 without a math coprocessor for sin/cos, an error in the table-based sin/cos decoding would result in a deviation of exactly 90° electrical, i.e. 22.5° mechanical. Such tables contain only one quadrant of sin/cos and just switch signs to derive the other three, so an off-by-one in the quadrant logic shifts the result by exactly one quadrant (sketched below).
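A purely hypothetical sketch of how such a quadrant-selection bug produces a constant offset of exactly one quadrant (90° electrical = 22.5° mechanical for this pole ratio). This is illustration only, not real AoA-sensor firmware:

```python
import math

# Hypothetical sketch of a one-quadrant-table resolver decode with an
# off-by-one bug in the quadrant selection.

POLE_RATIO = 4        # 360 deg electrical == 90 deg mechanical (as assumed above)
QUADRANT   = 90.0     # one electrical quadrant in degrees

def decode_electrical(sin_v, cos_v, quadrant_bug=False):
    """Reconstruct the electrical angle from sin/cos samples.

    Mimics a one-quadrant table: compute a base angle in [0, 90) from the
    magnitudes, then add quadrant_index * 90 based on the signs.
    """
    base = math.degrees(math.atan2(abs(sin_v), abs(cos_v)))   # 0..90 deg
    if cos_v >= 0 and sin_v >= 0:
        q = 0
    elif cos_v < 0 and sin_v >= 0:
        q = 1
        base = 90.0 - base
    elif cos_v < 0 and sin_v < 0:
        q = 2
    else:
        q = 3
        base = 90.0 - base
    if quadrant_bug:
        q = (q + 1) % 4          # off-by-one in the quadrant logic
    return (q * QUADRANT + base) % 360.0

true_mech = 10.0                                  # degrees mechanical
theta_e   = true_mech * POLE_RATIO                # degrees electrical
s, c = math.sin(math.radians(theta_e)), math.cos(math.radians(theta_e))

good = decode_electrical(s, c) / POLE_RATIO
bad  = decode_electrical(s, c, quadrant_bug=True) / POLE_RATIO
print(good, bad, bad - good)                      # 10.0  32.5  22.5
```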

btw: For the highest automotive safety level you use 2oo2 with a very strict analysis of production/design common-cause errors and a dependent failure analysis, or even 2oo2 with different sensors from different fabs. But to be fair: randomly locking the wheels at 100 mph is even less controllable than MCAS, so it is not completely comparable.
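For the non-automotive readers, a toy sketch of what a plain 2oo2 cross-check means (illustration only, with a made-up tolerance; real ASIL D implementations add plausibility checks, latching, degradation strategies, etc.):

```python
# Toy 2oo2 cross-check: both channels must agree within a tolerance,
# otherwise the function is inhibited instead of acting on a possibly wrong value.

AGREE_TOL_DEG = 5.0   # hypothetical agreement tolerance

def two_oo_two(aoa_left, aoa_right):
    """Return a validated AoA value, or None if the channels disagree."""
    if abs(aoa_left - aoa_right) > AGREE_TOL_DEG:
        return None            # disagree -> inhibit the function (fail-safe)
    return (aoa_left + aoa_right) / 2.0

print(two_oo_two(5.0, 5.5))    # 5.25 -> usable
print(two_oo_two(5.0, 27.5))   # None -> inhibit (e.g. the function stays off)
```

Note that a common-cause fault shifting both channels by the same amount still passes such a cross-check, which is exactly why the dependent failure analysis is mandatory.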