Boeing pilot involved in Max testing is indicted in Texas
Are false alarms OK? Wasn't it the startle effect from the false alarm that caused the ET302 crew to overlook that full thrust remained as the plane exceeded the velocity envelope? Wasn't it the false alarm that forced the autopilot offline and allowed MCAS to operate?
I prefer to focus on the origin of the problem and not the edge of the last chance to correct it.
1) Why wasn't the autopilot software designed to choose the correct AoA sensor? 2) When the autopilot went offline, why didn't the autothrottle go offline too? These are also decades-old decisions. 3) Why is the AoA sensor not fail-safe? But, sure, multiple decades of depending on all these bad ideas.
1) How can the AP decide which is the correct one if there are two inputs that differ from each other? You need three AoAs to vote, or another input such as AHRS attitude and groundspeed to rule out the faulty one (currently being studied (implemented?) by Boeing).
2) If the AT had gone offline, it would not have reduced power either. If anything, the AT could have had a function to automatically reduce thrust in an overspeed (as the A320 has had for underspeed for 3+ decades).
3) What do you mean by fail-safe? How would it know the data it provides is incorrect without being able to compare it to other data?
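The three-sensor voting idea in point 1 can be sketched in a few lines. This is purely a hypothetical illustration, not actual avionics logic; the 2-degree agreement tolerance is an assumption:

```python
def vote_aoa(a: float, b: float, c: float, tolerance: float = 2.0):
    """Mid-value select across three AoA readings (degrees).

    Returns (value, healthy). The median is robust to a single
    hard-over failure: one wild reading becomes the lowest or
    highest value and is discarded. If no two sensors agree
    within `tolerance`, the output is flagged unhealthy.
    Illustrative sketch only - the tolerance is made up.
    """
    lo, mid, hi = sorted((a, b, c))
    healthy = (mid - lo) <= tolerance or (hi - mid) <= tolerance
    return mid, healthy
```

With two healthy vanes and one stuck at a high value, the median still tracks the real angle, which is exactly what a two-sensor system cannot do on its own.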
But yes. You are totally correct that the B737 design is decades overdue for a systems and cockpit design change. The B737NG was launched 10 years after the A320, over 25 years ago. The A320 has mostly triple sensors that vote, or let the pilot make a more informed choice about which is the correct one (it can still go wrong; look at the crash of the Airbus in Perpignan, where two of the three sensors were wrong).
The 737NG still mostly makes do with two, and when one breaks, it is up to the pilot to decide. Add the non-cancelable stick shaker, stall warning, and overspeed warning for some AoA faults for some added confusion in the cockpit.
The MAX was the last chance for Boeing to get it right, but they didn't. And the MCAS system, borrowed from the KC-46, initially for high-altitude flight characteristics and later put on steroids for low-and-slow flight, was just the rotting cherry on that already moldy cake. In the KC-46, MCAS takes info from both AoAs. In order to prevent extra training due to the comparator annunciation that came on if there was a difference between the two AoA inputs into MCAS, Boeing decided to do the wrong thing and make MCAS single source. It would only get info from one AoA, alternating between legs (power cycles). It was a deliberate design choice, to save money, and we know from the three confirmed flights that happened in that condition (a failed AoA feeding into MCAS) that the first one almost crashed, and the other two ended in a crash.
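The difference between the KC-46-style dual-input comparator and the MAX's single-source design can be sketched as follows. The disagreement threshold is illustrative, not a Boeing value:

```python
def aoa_disagree(left: float, right: float, threshold: float = 10.0) -> bool:
    """Comparator-style check between two AoA vanes (degrees).

    A dual-channel design (as described above for the KC-46 MCAS)
    can at least annunciate a disagreement and inhibit further
    automatic trim. A single-source design has nothing to compare
    against, so a failed vane drives the system unchallenged.
    The 10-degree threshold here is an illustrative assumption.
    """
    return abs(left - right) > threshold
```

The point is not the threshold itself but that a single-source design never executes this comparison at all.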
Some false alarms are inevitable, and every effort should be made to design them out, and make it easy to diagnose and rectify.
But the MCAS part of the story isn't so much about the false alarm IMO. It is about Boeing deliberately stepping backwards in an already outdated design.
I've posted this before, but some either didn't see or have forgotten:
The certification process groups failures into four categories - Minor, Major, Hazardous, and Catastrophic. These have associated acceptable probability numbers - 10⁻³, 10⁻⁵, 10⁻⁷, and 10⁻⁹ per flight hour, respectively (occasionally modified to per flight cycle).
The entire problem with MCAS started early in the design process, where the malfunctioning of MCAS (either erroneous activation or failure to activate when needed) was judged to be "Major" - Major is considered to be no big deal, readily handled by the crew with a moderate increase in crew workload (I'm quite familiar with Major, since most 'benign' engine failures are considered 'Major').
Since 'Major' failures are allowed to occur at a rate of 10⁻⁵/hour, redundancy is not required (BTW, apparently those who made that judgement also assumed that the flight crews would be told about and trained with regard to MCAS, but somewhere along the line that requirement was dropped).
Now, if someone had really sat down and thought about it - what the impact of a bad AoA sensor activating MCAS would be, along with all the other bells and warnings that would be going off (stick shaker, unreliable airspeed, etc.) - they might have realized that MCAS malfunction was at least Hazardous - but that obviously never happened prior to the first MAX crash. So the certification process for MCAS followed the (correct) process for a "Major" system. Now, if someone along the line realized that MCAS was worse than Major and withheld or hid that information - that's fraud, and someone should be prosecuted for it. But if it was all an honest mistake - it's just that, a horrible, tragic mistake, but humans design aircraft and humans make mistakes. I have it on good authority that there was at least one attempted suicide among the people who worked MCAS. These were not cold-blooded accountants that made these decisions - they were real, flesh-and-blood humans with feelings who made a horrible mistake. Was management pressure to keep things simple and 'on the cheap' a factor? Perhaps, but I know that I often experienced those pressures, and it never made me do or design something that I honestly believed was wrong.
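The severity/probability budget described above can be expressed as a simple lookup. The category names and rates follow the post; everything else is an illustrative sketch, not any real certification tool:

```python
# Failure-condition severity mapped to the maximum acceptable
# probability per flight hour, as summarized in the post above
# (the figures echo FAA guidance such as AC 25.1309-1).
SEVERITY_BUDGET = {
    "Minor":        1e-3,
    "Major":        1e-5,
    "Hazardous":    1e-7,
    "Catastrophic": 1e-9,
}

def classification_allows(severity: str, predicted_rate: float) -> bool:
    """True if a failure condition's predicted rate (per flight
    hour) fits within the budget for its assessed severity."""
    return predicted_rate <= SEVERITY_BUDGET[severity]
```

This makes the post's argument concrete: a single-sensor failure rate that comfortably fits a "Major" budget of 10⁻⁵ per hour fails a "Hazardous" budget of 10⁻⁷ by two orders of magnitude, which is why the initial classification drove everything downstream.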
MCAS did not fail. The AoA subsystem did, producing erroneous data and a false stall warning. MCAS did exactly what it was supposed to do based on the information it was provided. Isn't the suggestion for pilots to push the nose down when there is a stall warning and stick shaker? While MCAS wasn't designed to detect or react to stalls, and appears to have no such input, it is supposed to provide a correction to a high AoA, and it did. The FAA, Boeing, foreign CAAs, and all pilots trained on the 737 NG had already accepted the chance of a false stall warning, and had done so for, by my estimate, two decades.
Your level of knowledge of certification is not something I will ever approach. But either the KC-46 was over-engineered/certified, having dual-channel MCAS and a comparator annunciator, or corners were cut with the MAX when they made it single source. And they definitely made it single source to avoid training and the associated cost. Maybe they thought it was safe enough, but they would have known that it was less safe, and cheaper.......
KC-46 MCAS is fundamentally different than 737 MCAS. On the KC-46, it's intended to account for everyday occurrences - the rapidly changing CG as the tanker offloads fuel. Different design requirements when you design something to account for what will routinely happen.
737 MCAS was intended to account for something that should rarely occur - the pilot flying the aircraft into a near-stall condition. So MCAS would rarely come into play - again, a different design requirement.
Not excusing the sloppy engineering that resulted in the original MAX MCAS implementation, but comparing it to the KC-46 MCAS is apples to oranges.
The entire problem with MCAS started early in the design process, where the malfunctioning of MCAS (either erroneous activation or failure to activate when needed) was judged to be "Major"
…
(BTW, apparently those who made that judgement also assumed that the flight crews would be told about and trained with regard to MCAS, but somewhere along the line that requirement was dropped).
Now, if someone had really sat down and thought about it … they might have realized that MCAS malfunction was at least Hazardous - but that obviously never happened prior to the first MAX crash.
The engineers who make changes are the ones who determine whether safety needs to look at those changes. Often those engineers don't understand how their changes impact the larger system, yet the process relies on them at least suspecting a change could impact safety in order to bring it to the attention of others.
Another thing to note is different certification.
The 737 MAX was certified by the FAA. The military has its own certification - three, actually: the Army, Air Force, and Navy each have different certification for their respective aircraft. Just because the Navy certified something doesn't mean it's good for the Air Force.
Interesting. So is the AF flying under the STC and using commercial maintenance instead of organic maintenance? At least historically, military maintainers weren't certified to FAA/Boeing standards and thus didn't meet requirements for continued airworthiness.
Seems unusual for the AF (and not a practice I care for where other branches have done so), especially for such a specialized aircraft.
There has been a strong movement towards certifying commercially derived military aircraft to FAA Part 25 standards - something that I quite frankly don't understand, since it adds considerable cost (basically you need to certify twice - once to the FAA and once to the USAF) without any real added value.
Fair enough.
It would be interesting to see the artifacts the FAA cert was based on, particularly for MCAS.
I can't imagine anyone buying a -2C or KC-46 for strictly cargo use vs. another dedicated cargo plane without the baggage of the tanker.
Its job was to provide a “suitable” stick force gradient in specific flight envelope circumstances.
That didn't happen here, not least because those flight envelope circumstances didn't even exist.
It’s supposed to do what it’s designed for.
It didn’t.
That’s a failure.
Bbtengineer,
Are you satisfied that there was a false stall warning and that the AoA system reported false information?
Satisfied that the major errors in ET-302 happened primarily because of that false stall warning and prior to MCAS activation?
What other sensors should be allowed to lie? Fuel amount? Radalt? Engine fire?
I have been looking at the whole system. I agree - it was the failure to do so that got people killed.
You are looking at a piece of software that acted exactly as it was specified to act. It would have saved AF 447 if Airbus had installed a similar system.
In contrast, the AoA sensor didn't report the correct AoA and the related control subsystems all acted as if it did. All of them relied on the false AoA information, including the autopilot, which bugged out because of the false AoA sensor reading.
The software had faulty inputs.
I would expect a software engineer to anticipate faulty inputs, and to figure out how to detect them and deal with them.
Apparently they did neither.
In what universe was a totally unconstrained application of AND (airplane nose down) ever going to be appropriate?
It obviously didn’t work and I can’t quite actually believe we’re discussing a hypothesis that it did.
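The kind of single-sensor sanity checking being argued for here can be sketched as range and rate-of-change limits. All the limits below are made-up illustrative numbers, not certified values:

```python
def aoa_plausible(reading: float, previous: float, dt: float,
                  min_deg: float = -20.0, max_deg: float = 40.0,
                  max_rate: float = 30.0) -> bool:
    """Sanity checks on a single AoA reading (degrees).

    Even without a second sensor to compare against, a value
    outside the physically reachable envelope, or one that jumps
    faster than the airframe can rotate (deg/sec), can be
    rejected before it feeds a trim command. Every limit here is
    an illustrative assumption, not an actual avionics value.
    """
    in_range = min_deg <= reading <= max_deg
    rate_ok = abs(reading - previous) / dt <= max_rate
    return in_range and rate_ok
```

A reading like the ~74-degree vane output seen in the accident data would fail both checks; the debate above is precisely whether such checks should have been a requirement.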
The flaw wasn't in the software - it acted exactly as the software requirements would have it react.
The flaw was in the software requirements. Software is tested to confirm it conforms to the requirements - not to confirm it does what the designer intended...
This, unfortunately, is a common problem with software - poorly defined requirements that result in software not behaving as we'd like.
This is somewhat independent of s/w DAL (Design Assurance Level) - even DAL A (flight critical) software can behave in unanticipated ways if the requirements are not clearly defined.
I’m sorry but you’re treating the team implementing the software as idiots.
At best as people who aren’t expected to actually understand the requirement in any context whatsoever.
People who implement software aren’t supposed to exist in a vacuum. They’re supposed to actually understand what they’re building and why.
The requirement apparently said apply nose down repetitively forever.
Nobody should ever have accepted that requirement.
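One obvious repair to a requirement like that is to bound cumulative authority, so "apply nose down repetitively forever" becomes impossible by construction. A toy sketch of the idea (the numbers are illustrative assumptions, not Boeing's):

```python
def mcas_command(requested_increment: float, applied_so_far: float,
                 max_total: float = 2.5) -> float:
    """Clamp cumulative nose-down trim to a fixed authority limit.

    An unconstrained requirement lets repeated activations run the
    stabilizer to its stop. Tracking the total already applied and
    capping it at `max_total` (units of stabilizer trim, here an
    illustrative figure) bounds the worst case regardless of how
    many times the function activates.
    """
    remaining = max(0.0, max_total - applied_so_far)
    return min(requested_increment, remaining)
```

Once the budget is spent, every further activation returns zero, which is the property the requirement as described above never guaranteed.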
That's why it's so critically important to get the s/w requirements correct.
The requirements did not consider what would happen if MCAS kept trimming the nose down, because it was assumed early in the design process that if the stab trim was doing something the pilots didn't want or understand, they'd turn it off. Hence the classification of inappropriate MCAS activation as only Major - that's what a stab trim malfunction was classified as.
As I noted previously - the entire MCAS mess grew from that flawed assumption that an issue with MCAS was no worse than Major.