Go Back  PPRuNe Forums > Flight Deck Forums > Rumours & News

Ethiopian airliner down in Africa

Rumours & News Reporting Points that may affect our jobs or lives as professional pilots. Also, items that may be of interest to professional pilots.


Old 7th May 2019, 13:12
  #5061 (permalink)  
 
Join Date: Jan 2010
Location: UK
Posts: 180
Received 0 Likes on 0 Posts
Originally Posted by meleagertoo
Which rather vindicates Boeing's position on this; they reacted exactly as Boeing intended by identifying it as an STS runaway (which most assuredly is a runaway trim event) and dealt with it by using the correct pre-existing technique.


And as such, why is it really so necessary to inform pilots of this system? There is no specific control over it, just the generic runaway trim procedure. Surely telling people about systems they have no specific influence over is merely muddying the waters? If it presents itself as failure event X which is dealt with by checklist Y does anyone need to know that it could be system A or A.1 at fault, when both are addressed by the same checklist, show effectively the same symptoms and actually are components of the same system?

That, I am sure, was Boeing's rationale and though I'm not 100% comfortable with it I'm certainly not condemning it in the absolute and fundamental way some others are.
Except it was the jumpseater who identified the issue, NOT the crew, and it seems that neither the crew nor the jumpseater understood what the issue was. No mention of stab trim runaway was made in the writeup as I recall.

SamYeager is offline  
Old 7th May 2019, 13:14
  #5062 (permalink)  
 
Join Date: Apr 2019
Location: USA
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by rog747
Are there any contributors here who are 737 pilots who transitioned to the MAX?

May I ask please,
If you did, did you have any SIM, classroom, or line training on the MAX and its differences, or was it purely online modules, making your first flight on the type a ''pax on board'' flight?

Were you made fully aware of the adverse pitch-up tendency and CG changes of the new MAX due to the design-enforced forward location of the new, larger engines (whose nacelles can now generate lift) at low weights/high power settings, resulting in a (unrecoverable?) high AOA? (which, we now know, necessitated the MCAS software patch)

Were you (before the two fatal accidents and one near-accident) fully informed/trained on the new MCAS system and its functionality, its implications, and what to do if it went rogue?

Thanks.
MAX was added to our fleet of NGs about a year ago. All training was either online or bulletins pushed to our iPads. There is a quick reference card in the cockpit with key reminders. I had a couple of opportunities to fly the MAX before it was grounded. It actually flies very nicely, and the only real issue for me was that some of the switches and indicators were in different places. It would be comparable to transitioning from a 2001 Ford F-150 to a 2019 model. Drives pretty much the same, some new bells and whistles, some new switchology for the radios and climate control, but still a Ford F-150.

Our company continually stressed that the transition would be relatively straightforward, and to a certain point that was true in the context of normal operations. However, my contention always was (and this is not 20/20 hindsight) that any issues with the MAX would be less a case of normal operations and more a case of non-normal ops. As we have seen in aviation time and time again, it is very difficult to predict all the unique failure modes that may arise with a new aircraft. Given that, my concern with the MAX was not with adapting to any differences when things were going right, but rather how different it might be when things were going wrong. Sadly, those concerns were not misplaced.
737 Driver is offline  
Old 7th May 2019, 13:44
  #5063 (permalink)  
 
Join Date: Jan 2013
Location: UK
Age: 63
Posts: 37
Likes: 0
Received 0 Likes on 0 Posts
Boeing's biggest mistake was design, not underestimating the public

Originally Posted by meleagertoo
Boeing's big 'mistake' was to underestimate the public's, and to some extent the industry's, interpretation of two failures due almost exclusively to bad handling and incorrect procedures that they could hardly have anticipated. At least, Boeing thought they could hardly have been anticipated at the time, and I doubt (m)any of us would have thought otherwise before these accidents had we known about the system. Their mistake was to underestimate the amount and volume of criticism that would unexpectedly come their way because crews, maintenance and at least one airline screwed up in spades, and the world retrospectively divined faults in Boeing that no one had thought were faults before, in a vindictive and vitriolic way unprecedented in the history of aviation.
I am not a pilot so my view may not be correct, but I do design systems with functional safety requirements and I profoundly disagree with this. A system which cannot tolerate a single fault without entering a dangerous state that requires prompt action to prevent a catastrophe is not safe, particularly when at least one of the failures can occur in a high-workload situation, must be responded to within a time limit, and will generate misleading and distracting warnings. I am confident that I, and all the teams I have worked in, would have anticipated that this would cause problems and would not have considered it an acceptable design.

Yes we are all human and may overlook failure modes with common causes or fail to understand complex interactions between sub-systems but this was just straightforwardly poor design which should have been identified as such.

The idea that Boeing's big mistake was 'to underestimate the public and to some extent the industry's interpretation of two failures' is shockingly callous given the death toll and the relatively small timespan. As far as we know, the scenario concerned has occurred three times and been survived only once, and then perhaps a little fortuitously.
PiggyBack is offline  
Old 7th May 2019, 14:29
  #5064 (permalink)  
 
Join Date: Jul 2004
Location: Found in Toronto
Posts: 615
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by PiggyBack
I am not a pilot so my view may not be correct, but I do design systems with functional safety requirements and I profoundly disagree with this. A system which cannot tolerate a single fault without entering a dangerous state that requires prompt action to prevent a catastrophe is not safe, particularly when at least one of the failures can occur in a high-workload situation, must be responded to within a time limit, and will generate misleading and distracting warnings. I am confident that I, and all the teams I have worked in, would have anticipated that this would cause problems and would not have considered it an acceptable design.

Yes we are all human and may overlook failure modes with common causes or fail to understand complex interactions between sub-systems but this was just straightforwardly poor design which should have been identified as such.

The idea that Boeing's big mistake was 'to underestimate the public and to some extent the industry's interpretation of two failures' is shockingly callous given the death toll and the relatively small timespan. As far as we know, the scenario concerned has occurred three times and been survived only once, and then perhaps a little fortuitously.
There are many systems on an aircraft where one failure can cause entry to a "dangerous state".

MCAS was designed to be easily disabled by simply trimming the aircraft. There is no prompt action required. All that is needed is for the pilot to FLY THE AIRCRAFT, just as they were taught in their very first lesson: ATTITUDES and MOVEMENTS.

Pilots are taught to always control the aircraft and to TRIM the aircraft to maintain that control. If the aircraft is not doing what you want it to, it is up to the pilot to MAKE it happen.

The MCAS "problem" is just a form of un-commanded or un-wanted trim. In addition to being a memory item, it is also just common sense to disable a system that is not performing correctly. In this case MCAS was causing nose down trim. If repeated nose up trim did not stop the unwanted nose down trim, turn off the electric trim.

Problem solved.

You can't really blame Boeing any more than you can blame Airbus for not predicting that the AF447 crew would forget that you need to lower the nose to unstall an aircraft, or that Airbus had designed the side sticks so that they cancel each other out.
Lost in Saigon is offline  
Old 7th May 2019, 15:03
  #5065 (permalink)  
 
Join Date: Mar 2019
Location: Washington
Posts: 2
Likes: 0
Received 0 Likes on 0 Posts
The Refrain of Every Lousy Programmer

Originally Posted by PiggyBack
I am not a pilot so my view may not be correct, but I do design systems with functional safety requirements and I profoundly disagree with this. A system which cannot tolerate a single fault without entering a dangerous state that requires prompt action to prevent a catastrophe is not safe, particularly when at least one of the failures can occur in a high-workload situation, must be responded to within a time limit, and will generate misleading and distracting warnings. I am confident that I, and all the teams I have worked in, would have anticipated that this would cause problems and would not have considered it an acceptable design.

Yes we are all human and may overlook failure modes with common causes or fail to understand complex interactions between sub-systems but this was just straightforwardly poor design which should have been identified as such.

The idea that Boeing's big mistake was 'to underestimate the public and to some extent the industry's interpretation of two failures' is shockingly callous given the death toll and the relatively small timespan. As far as we know, the scenario concerned has occurred three times and been survived only once, and then perhaps a little fortuitously.
Everyone who writes lousy software has the same excuse, blame the user.
DCDave is offline  
Old 7th May 2019, 15:10
  #5066 (permalink)  
 
Join Date: Apr 2019
Location: USA
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Threat and Error Management

Part 4

Continuing the Threat and Error Management discussion.....
If you are just joining this sub-topic, please go back to the first post with the TEM graphic (Part 1)

First, a quick refresher. There are three components of the TEM model that are relevant here:

Threats are external and internal factors that can increase complexity or introduce additional hazards into flight operations. Weather, unfamiliar airports, terrain, placarded aircraft systems, language barriers, fatigue, and distraction are examples of threats. Once a threat has been identified, the crew can take steps to mitigate that threat.

Errors are divergences from expected behavior caused by human actions or inaction that increase the likelihood of an adverse event. The difference between an error and a threat is that an error can, with careful attention, be quickly identified and crew members can find prompt solutions to the error. This is sometimes known as "trapping" the error. Untrapped errors can turn into new threats.

Barriers are structures, procedures and tools available to flight crew to trap errors and contain threats. Since no barrier is perfect, the goal is to build sufficient barriers so that all threats are contained and all errors trapped. Untrapped errors and uncontained threats can ultimately lead to an undesired aircraft state, incident, or accident.
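The threat/error/barrier relationship described above can be illustrated with a toy sketch. Everything here (names, data shapes) is hypothetical and purely for illustration; the point is only that whatever the barriers fail to contain or trap flows through toward an undesired aircraft state, and untrapped errors become new threats:

```python
# Minimal, hypothetical illustration of the TEM relationship described above:
# barriers try to contain threats and trap errors; untrapped errors can
# themselves become new threats.

def tem_outcome(threats, errors, barriers):
    """Return (residual threats, untrapped errors) after barriers act."""
    uncontained = [t for t in threats if t not in barriers.get("contains", [])]
    untrapped = [e for e in errors if e not in barriers.get("traps", [])]
    # Per the model, an untrapped error turns into a new threat.
    residual_threats = uncontained + untrapped
    return residual_threats, untrapped

barriers = {"contains": ["weather"], "traps": ["wrong checklist"]}
print(tem_outcome(["weather", "fatigue"], ["late flap retraction"], barriers))
# prints (['fatigue', 'late flap retraction'], ['late flap retraction'])
```

The sketch deliberately has no notion of barrier failure modes; as the later discussion argues, the interesting case is when an entry in `barriers` is itself unreliable.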

The TEM model assumes that there are no perfect aircraft, perfect environments, or perfect humans. The goal is not to create a flawless system, but rather a resilient system.

The standard TEM model lists these available barriers for flight deck operations: policies and procedures (SOP's), checklists, CRM, aircraft systems (particularly warning and alert systems), knowledge, and airmanship. Knowledge and airmanship relate not only to training and experience, but also to an individual's commitment to developing them. CRM includes such things as crew communications, monitoring, flight deck discipline, and the assignment and execution of specific duties. The Captain is the primary driver behind CRM, but the First Officer has obligations here as well.

In Part 3 of this series, I used the TEM model as a lens to analyze where and how the existing barriers failed. The primary reason that multiple barriers failed is that the effective employment of virtually all of them depends heavily on the mental states of the two pilots. SOP's, checklists, CRM, knowledge, and airmanship only work as barriers when the crew can actually draw on them. It is unclear how much of this failure was due to a lack of particular knowledge and/or skill as opposed to an inability to draw on existing knowledge and/or skill under pressure. There are indications that the Captain had reached cognitive overload. This might also have applied to the First Officer, but we must also acknowledge that the FO had far less experience to draw on and may have been uncomfortable speaking up. I believe one of the key takeaways from this accident is an appreciation of the critical role of the First Officer in safe aircraft operations. A First Officer must not only be able to operate the aircraft, run the checklists, and demonstrate knowledge of systems and procedures; he must also be able to act as an effective barrier, trapping not only his own errors but also those of the Captain.

When the traditional barriers failed, they effectively became new threats. These threats went uncontained and allowed errors to go untrapped, leading ultimately to a hull loss and the death of all passengers and crew.

I ended Part 3 with the following question: What should one do when a barrier actually becomes a threat?

I'll be the first to admit that the "barrier as threat" is a bit different take on the TEM model, but I believe it is both valid and useful. From practical experience, I think TEM theory sometimes assumes that barriers are more resilient than they really are in practice and largely ignores the possibility that what was meant to be a barrier could actually become a threat.

However, by adopting a "barrier as potential threat" perspective, the TEM model actually provides some useful guidance. Threats should be identified or anticipated and steps should be taken to mitigate and contain those threats.

The key step here is awareness of the threat, or more specifically, awareness that what was initially considered a barrier might actually become a threat.

Let's go back to that list of potential barriers for flight deck operations - Policies and procedures (SOP's), checklists, CRM, aircraft systems, knowledge, and airmanship - and consider how these "barriers" might actually become threats.

Policies and procedures - I believe most airline SOP's provide useful barriers to the degree that the flight crew actually uses them. However, in some situations those policies may create unappreciated threats. For example, does the airline's policy drive an over-reliance on automation by mandating its use at all times? Do existing policies require/encourage Captains to do most of the actual flying, leaving the First Officer ill-equipped to serve as an effective back-up? Do airline policies and/or culture create or sustain a steep authority gradient which discourages First Officers from speaking up or correcting errors by the Captain?

Checklists - Are the checklists (normal and non-normal) well designed? Do they help trap likely crew errors? If a crew member believes a checklist contains a potential threat, how amenable is their airline to modifying that checklist?

Crew resource management - Is the level of knowledge and proficiency of your Captain/First Officer sufficient to be an effective barrier? Is yours? Do the pilots use effective communication and social skills? Do they maintain cockpit discipline? Do they feel free to speak up and correct each other without creating tension?

Knowledge and airmanship - Does the crew receive the right kind of training to be effective? (Just refer to the "mantra" discussion if you need to be reminded of my position on this). Does that training prepare the crew for the known as well as the unknown? Does that training help mitigate the well-known startle and fear reflexes? Does that training emphasize systems management at the expense of basic aircraft skills? Does that training emphasize the need for the execution of NNC in a methodical and deliberate manner?

As we go through this list of questions (please add more if you like), we can develop a picture of where these barriers may actually morph into threats.

Once these new threats are identified, the next step is to attempt to mitigate those threats.

To be continued.....
737 Driver is offline  
Old 7th May 2019, 16:24
  #5067 (permalink)  
 
Join Date: Apr 2009
Location: Hotel Gypsy
Posts: 2,821
Likes: 0
Received 0 Likes on 0 Posts
The problem with TEM is that it tends to encourage linear thought - actions will create the desired resolution. I spent some time in the UK RAF, where we often quoted the Boyd Cycle (OODA Loop), which was more of a circular decision-making process - think DODAR. The advantage of the Boyd Cycle is that you review the efficacy of your actions and then, potentially, choose additional or even different actions.

Of course, such flexibility and decision making (including potential divergence from checklists) requires experience and deep theoretical knowledge. In that area I think we all agree that aviation is struggling, not just due to the training system but also due to the manufacturers not telling the full story.

People quote Sully as an example in that he ‘got the job done’ regardless of checklist.
Cows getting bigger is offline  
Old 7th May 2019, 16:27
  #5068 (permalink)  
 
Join Date: Dec 2002
Location: UK
Posts: 2,451
Likes: 0
Received 9 Likes on 5 Posts
To be continued..... Oh please no.

Is it really necessary to explain the complete TEM concept and to use this model to fit the few facts that are available, or is it that the facts are fitted to the model in order to support an individual's (preconceived) viewpoint?

'All models are wrong, but some are useful' (George Box). The value of a model, like a tool, lies in selecting the appropriate one and knowing how it should be used, particularly its limits.

If you start with the human as a threat then you will conclude human error; alternatively starting with the human as an asset, pilot, designer, regulator, then with open thought, guided by a model, it may be possible to identify influencing factors, which in combination enabled the outcome.

Limitations of the TEM Model
- Assumes technical competency appropriate for the role.
- The threat-error-undesired-state relationship is not necessarily straightforward, and it may not always be possible to establish a linear, one-to-one linkage between threats, errors and undesired states: threats can on occasion lead directly to undesired states without any intervening error, and operational personnel may on occasion make errors when no threats are observable.
- Essentially a 'deficit' model.
- Benchmarks against a standard of 'safe' or 'safe enough', i.e., other operators.
- Descriptive: it describes an outcome or end state, not how to get there.
- Little focus on minimisation of error.
- Links the management of threats and errors to potential deficiencies in HF & NTS skills, but not to the processes supporting good TEM behaviour.
- Same challenge as 'Airmanship'.

(https://www.casa.gov.au/sites/g/file.../banks-tem.pdf)




Last edited by safetypee; 7th May 2019 at 17:50. Reason: typo
safetypee is offline  
Old 7th May 2019, 17:06
  #5069 (permalink)  
 
Join Date: Feb 2019
Location: shiny side up
Posts: 431
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by 737 Driver
Given that, my concern with the MAX was not with adapting to any differences when things were going right, but rather how different it might be when things were going wrong. Sadly, those concerns were not misplaced.
Sadly, those concerns WERE misplaced.

There are the legacy commands that line up, not necessarily under non-normal ops. Look what happened when, what was it, V10 of the HW FMS software came out? That one didn't last long.

The if/then sequence of commands can get one to a line in the code that has long been forgotten. A few that come to mind are the balked TOGA with a bounce; the aircraft porpoising down to the AA level of the next waypoint after crossing an FO waypoint; and, of course, the lookup finding a simple radius of the Earth instead of the geoid.

Unintended consequences of legacy programming. I would love to see a V1.0 of the FMS.

Last edited by Smythe; 7th May 2019 at 18:57.
Smythe is offline  
Old 7th May 2019, 17:28
  #5070 (permalink)  
 
Join Date: Apr 2019
Location: USA
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by safetypee

All models are wrong, but some are useful’ (George Box). The value of a model, like a tool is to select the appropriate one and know how it should be used; particularly its limits.
I agree. The TEM model has its limitations, but it also has its uses. One of its primary benefits is that it is a key part of the language of aviation safety. Pilots are usually on the receiving end of this dialogue. I submit that it can be pointed in the other direction.
737 Driver is offline  
Old 7th May 2019, 17:34
  #5071 (permalink)  
 
Join Date: Apr 2019
Location: USA
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Cows getting bigger
The problem with TEM is that it tends to encourage linear thought - actions will create the desired resolution. I spent some time in the UK RAF where we often quoted the Boyd`Cycle (OODA Loop) which was more of a circular decision making process - think DODAR. The advantage of the Boyd Cycle is that you review the efficacy of your actions and then, potentially, choose additional or even different actions.
If you go back and look at the original graphic, you will see that it does incorporate a cycle of input/output/review. I'm familiar with the Boyd Cycle, and it is appropriate in some circumstances, but it is less useful in setting up a resilient system in the first place. The OODA loop is more applicable once you are responding in the environment that has already been established.
737 Driver is offline  
Old 7th May 2019, 17:44
  #5072 (permalink)  
 
Join Date: Apr 2009
Location: Hotel Gypsy
Posts: 2,821
Likes: 0
Received 0 Likes on 0 Posts
Yep, like an aircraft trying to kill you when you've lost your way through process.
Cows getting bigger is offline  
Old 7th May 2019, 19:04
  #5073 (permalink)  
 
Join Date: Nov 2018
Location: Vancouver
Posts: 68
Likes: 0
Received 0 Likes on 0 Posts
Just another reiteration of some issues with MCAS' flawed logic, as discussed here and elsewhere... (with my emphasis)

Boeing says no flaws in 737 Max. Former engineer points to several

...Boeing CEO Dennis Muilenburg said the planes went down because of a chain of events.

"One of the links in that chain was the activation of the MCAS system because of erroneous angle attack data," he said at a recent news conference.

Peter Lemme, a former Boeing engineer and former FAA designated engineering representative, said MCAS is the main link. The flaws in that system, he said, need to be addressed...

***First, MCAS activated because of a single sensor with a false reading. On the Ethiopian jet, one indicator swung from showing a normal ascent to showing a steep ascent. Lemme said in that case it was a clear sign of failure.

"Having the vane change from 15 to 75 degrees in two seconds — it is immediately an indication of a fault. There's just no physical way to do that," he said. "And then 75 degrees is kind of a ridiculous number."

But MCAS acted on it, even though a sensor on the other side of the plane reported everything was fine.

"That was a big disappointment. If the systems had declared the signal failed then MCAS would not have fired and nothing would have happened," he said.

***Both planes were flying at a great speed when they crashed — another flaw, according to Lemme because MCAS should have stopped at that speed.

"There is no way to stall the airplane at that airspeed and MCAS should have had logic in place that would prohibit it from operating," Lemme said.

***The Lion Air flight pitched forward more than 20 times before that plane crashed into the sea. That is the greatest flaw in MCAS, Lemme said: the repeated descents.

"It persistently attempted to move the stabilizer down without giving up. I think if MCAS hadn't had the repeated feature where it could re-trigger, we probably would have been OK," he said.

Lemme said testing should have caught the problems with MCAS.

"That should have been found. You would expect the test program would look at the likely failure modes," he said. "That is a breakdown in the test program."...
- https://www.kuow.org/stories/engineer-gap-flaw-mcas
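The validity checks Lemme argues were missing (plausibility, rate of change, cross-channel agreement) could be sketched roughly as below. This is purely illustrative: all thresholds, names, and structure are hypothetical and are not Boeing's actual logic; only the 15-to-75-degrees-in-two-seconds figure comes from the article.

```python
# Hypothetical sketch of the AoA input validation Lemme says MCAS lacked.
# All thresholds and names are illustrative assumptions, not Boeing's design.

MAX_PLAUSIBLE_AOA_DEG = 35.0   # beyond this, treat the vane reading as suspect
MAX_RATE_DEG_PER_SEC = 10.0    # a 15 -> 75 deg swing in 2 s far exceeds this
MAX_CHANNEL_SPLIT_DEG = 5.0    # the two vanes should roughly agree

def aoa_signal_valid(left_aoa, right_aoa, prev_left_aoa, dt):
    """Return True only if the left AoA channel passes basic sanity checks."""
    if left_aoa > MAX_PLAUSIBLE_AOA_DEG:
        return False    # physically implausible absolute value
    if abs(left_aoa - prev_left_aoa) / dt > MAX_RATE_DEG_PER_SEC:
        return False    # "no physical way" to change that fast: declare a fault
    if abs(left_aoa - right_aoa) > MAX_CHANNEL_SPLIT_DEG:
        return False    # opposite-side sensor disagrees: declare a fault
    return True

# The ET302-style failure: vane swings from 15 to 75 degrees in ~2 seconds
print(aoa_signal_valid(75.0, 15.2, 15.0, 2.0))  # prints False
```

With checks like these declaring the signal failed, MCAS (in Lemme's words) "would not have fired and nothing would have happened".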
patplan is offline  
Old 7th May 2019, 19:14
  #5074 (permalink)  
 
Join Date: Oct 2011
Location: Lower Skunk Cabbageland, WA
Age: 74
Posts: 354
Likes: 0
Received 0 Likes on 0 Posts
The source of the above article, KUOW, is one of Seattle's NPR radio stations and is very well respected, so it should be taken seriously.
Organfreak is offline  
Old 7th May 2019, 19:25
  #5075 (permalink)  
 
Join Date: Jan 2008
Location: Wintermute
Posts: 76
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Lost in Saigon
There are many systems on an aircraft where one failure can cause entry to a "dangerous state".

MCAS was designed to be easily disabled by simply trimming the aircraft. There is no prompt action required. All that is needed is for the pilot to FLY THE AIRCRAFT, just as they were taught in their very first lesson: ATTITUDES and MOVEMENTS.

Pilots are taught to always control the aircraft and to TRIM the aircraft to maintain that control. If the aircraft is not doing what you want it to, it is up to the pilot to MAKE it happen.

The MCAS "problem" is just a form of un-commanded or un-wanted trim. In addition to being a memory item, it is also just common sense to disable a system that is not performing correctly. In this case MCAS was causing nose down trim. If repeated nose up trim did not stop the unwanted nose down trim, turn off the electric trim.

Problem solved.

You can't really blame Boeing any more than you can blame Airbus for not predicting that the AF447 crew would forget that you need to lower the nose to unstall an aircraft, or that Airbus had designed the side sticks so that they cancel each other out.
It may be interesting to note that the vast majority of people who design, develop and deliver safety-critical systems for a living (I am another example: high-software-content, life-critical military systems among other things) and who have commented here find the Boeing approach at best questionable, and for my part very concerning (as a very regular pax). I had expected better from the aviation regulation process.

Equally concerning is that the folk who fly these machines also appear to feel that this type of potentially inadequate (and demonstrably dangerous) systems design is acceptable. It may be the norm, and it may be what you are used to . . . but I'm surprised . . .

Edit : A wise man in the military safety community once told me that if I wasn't personally prepared to trust my life to the system I designed I shouldn't be in the industry . . . I wonder whether that ethos has been diluted in aviation . . . I hope not . . .

Fd

Last edited by fergusd; 7th May 2019 at 19:43.
fergusd is offline  
Old 7th May 2019, 20:06
  #5076 (permalink)  
 
Join Date: Dec 2015
Location: Cape Town, ZA
Age: 62
Posts: 424
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by patplan
Just another reiteration of some issues with MCAS' flawed logic, as discussed here and elsewhere... (with my emphasis)Peter Lemme, a former Boeing engineer and former FAA designated engineering representative, said MCAS is the main link. The flaws in that system, he said, need to be addressed...

[snip]

***Both planes were flying at a great speed when they crashed — another flaw, according to Lemme because MCAS should have stopped at that speed.

"There is no way to stall the airplane at that airspeed and MCAS should have had logic in place that would prohibit it from operating," Lemme said.

[snip]

Lemme said testing should have caught the problems with MCAS.

"That should have been found. You would expect the test program would look at the likely failure modes," he said. "That is a breakdown in the test program."...

- https://www.kuow.org/stories/engineer-gap-flaw-mcas
Isn't that statement a logical fallacy for two reasons:
- An aircraft can stall at any speed, if the altitude is sufficiently high and the wings are in a banked turn (accelerated stall).
- MCAS is not an anti-stall system, so that statement has no bearing on its activation.
GordonR_Cape is offline  
Old 7th May 2019, 20:42
  #5077 (permalink)  
 
Join Date: Apr 2019
Location: USA
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Europa01
Here's a polite challenge. Given your previously expressed views on what you consider to have been the inadequacy of this crew, are you sure you aren't levering your preconceptions into the TEM model rather than applying it from first principles?
An astute observation. I am not conducting a first-order analysis for a very specific reason. A first-order analysis would step through the initial threats, the barriers, and the errors (trapped and untrapped) and the outcome of each untrapped error. I think that has already been done in spades, though not necessarily through the lens of TEM. We know there were errors, and we know many of those errors went untrapped despite the theoretical presence of numerous barriers. I am doing something more of a second-order (and in some cases third-order) analysis that suggests that the "barriers" did not function as expected because they actually contained unrecognized threats. Those threats, being unrecognized, had no mitigation strategy or barrier to contain them and thus led to a series of actions resulting in the loss of this aircraft.

As far as the "inadequacy" of the crew, I think the picture I've been painting here is the inadequacy of the system that put them in that cockpit. These pilots were a product of their training, experience, and environment. In theory, that system gave them the tools (i.e. the barriers) that would have made this accident avoidable. Rather, my conjecture is that instead of creating resilient barriers, their training and operating protocols were actually producing unperceived, and hence unmitigated, threats.
737 Driver is offline  
Old 7th May 2019, 20:45
  #5078 (permalink)  
 
Join Date: Apr 2014
Location: Minneapolis, MN
Posts: 14
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by MurphyWasRight
It has pretty well been established on this thread that pilot electric trim works in all conditions (it does not stall under load) and interrupts MCAS if active, as shown by the ET trace at ~05:40:27. This pilot trim input was possibly then interrupted by the cutout switches.
Other than a 'deer in headlights' loss of control, I see a few possible factors:
1: A pilot accustomed to short blips may not comprehend the amount of trim truly needed; this fits Lion Air, where the FO was ineffective at the end when the Captain handed over control, while the Captain had been mostly successful.
2: Trimming by column feel, not position: it may seem that the aircraft is closer to trimmed than it is. In other words, if you have been pulling really hard, then pulling just slightly hard may feel close to trimmed, and I am sure the pilot did not want to over-trim given all the alarms.
This might explain ET's first retrim at 05:4-:15. Unfortunately we don't have the column force graph, just position.
3: Some as yet to be revealed flaw that interferes with pilot trim inputs; one possibility is biomechanical factors related to actuation switches after prolonged pulling.
This is unlikely but could explain the final seconds of both accidents.
Hopefully the final reports will fully address this question.
Please excuse me if I am repeating something that has been discussed earlier in this thread, but from the schematic for the “Horizontal Stabilizer Trim Control System – Functional Description – Electric Trim” (see the PPRuNe thread entitled “737MAX Stab Trim architecture”, post #194), one can see that when MCAS is active, then STS is inactive.
And when MCAS is inactive, then STS is active.
Is it possible that after a pilot electrically trims the aircraft nose-up after an MCAS nose-down trim event, then the STS system will activate to trim the aircraft nose-down again before the next MCAS nose-down trim event? Recall that the STS system trims the aircraft in the direction opposite to the speed change. So if the pilot has just trimmed with a nose-up command, then wouldn’t the STS system counter with a nose-down trim command? This same operation would still apply if the autopilot thought the aircraft had a higher angle of attack as a result of a defective AoA sensor (the autopilot controls the STS trim even when the autopilot is off). Could this help to explain the failure of the ET302 pilot to trim back to a fully neutral trim after an MCAS trim event?

One thing that confuses me here is that I’ve read that STS activates 5 seconds after release of the manual trim switches. MCAS has a similar 5 second delay. This may mean that any STS trim would be canceled by an MCAS trim event. But could there be a delay in MCAS activation relative to STS activation? By the way, STS trim and manual electric trim have the same trim rates, but differ in direction. Also, any STS trim should be canceled by the simultaneous activation of manual electric trim by the pilot. But if the pilot released the manual trim button when he believed the aircraft trim to be at neutral, then STS might give a short nose-down trim command before MCAS activates to give a larger nose-down command.
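If the arming delays work as speculated above, the ordering question can be illustrated with a toy model. All timings and the assumption that MCAS pre-empts STS when both are armed are illustrative guesses for discussion, not Boeing design data:

```python
# Toy timeline model of the speculated STS/MCAS interaction: both are
# assumed to wait ~5 s after the pilot releases the electric trim
# switches, and MCAS (when armed) is assumed to pre-empt STS.
# All numbers and the pre-emption rule are assumptions, not Boeing data.

RELEASE_DELAY_S = 5.0  # assumed arming delay for both STS and MCAS


def next_auto_trim(t_release, mcas_armed, mcas_extra_delay=0.0):
    """Return (time, source) of the next automatic trim command after
    the pilot releases the manual electric trim switches at t_release."""
    sts_time = t_release + RELEASE_DELAY_S
    if not mcas_armed:
        return sts_time, "STS"
    mcas_time = t_release + RELEASE_DELAY_S + mcas_extra_delay
    # If MCAS arms later than STS, STS could insert a short nose-down
    # command first, before the larger MCAS command takes over.
    if mcas_time > sts_time:
        return sts_time, "STS (briefly, before MCAS)"
    return mcas_time, "MCAS"


print(next_auto_trim(0.0, mcas_armed=False))  # -> (5.0, 'STS')
print(next_auto_trim(0.0, mcas_armed=True))   # -> (5.0, 'MCAS')
print(next_auto_trim(0.0, True, mcas_extra_delay=1.0))  # STS gets in first
```

Under these assumed delays, a short STS nose-down blip before MCAS only occurs if MCAS arms measurably later than STS; if both arm at exactly 5 seconds, MCAS wins outright.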

Also, the same schematic shows that the two pedestal cutout switches on the 737MAX operate as a logical “AND” function (“&” function) as follows:
Manual electric trim = [PRI] & [B/U]
Autopilot trim = [PRI]
STS speed trim = [PRI]
MCAS trim = [PRI] & [B/U]

This means that there is no way to turn off all automatic trim functions while keeping the manual electric trim operative. However, it would be a trivial change to have the switches operate as follows:
Manual electric trim = [PRI]
Autopilot trim = [PRI] & [B/U]
STS speed trim = [PRI]
MCAS trim = [PRI] & [B/U]

In this case the pilots would be able to turn off the autopilot trim and MCAS trim by turning off only the B/U cutout switch, while keeping the manual electric trim and STS trim active by leaving the PRI cutout switch on. This would make the 737 MAX operate more like the 737NG aircraft, allowing full use of manual electric trim at all times. Why has this not been done? Is it because it would have required re-certification of the aircraft by giving the pilots control over the MCAS function? Was certification of the 737 MAX with its new lift-generating engine nacelles dependent upon the MCAS correction function operating only in the background, without control from the pilots?
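The two wirings above can be sketched as simple boolean expressions. This is an illustrative model of the switch logic as described in this post, not Boeing's actual implementation:

```python
# Simplified model of the pedestal cutout switch wiring as described
# above. PRI and B/U are True when the respective switch is in NORMAL.
# Illustrative sketch only, not Boeing's actual logic.

def max_wiring(pri, bu):
    """Trim paths enabled under the MAX wiring described in the post."""
    return {
        "manual_electric": pri and bu,
        "autopilot":       pri,
        "sts":             pri,
        "mcas":            pri and bu,
    }


def proposed_wiring(pri, bu):
    """Hypothetical rewiring suggested above: cutting B/U alone kills
    MCAS and autopilot trim but leaves manual electric trim alive."""
    return {
        "manual_electric": pri,
        "autopilot":       pri and bu,
        "sts":             pri,
        "mcas":            pri and bu,
    }


# MAX wiring: cutting B/U alone also disables manual electric trim.
print(max_wiring(pri=True, bu=False))
# Proposed wiring: the same action keeps manual electric trim working.
print(proposed_wiring(pri=True, bu=False))
```

Running the two functions with PRI on and B/U off makes the post's point concrete: under the MAX wiring there is no switch combination that disables MCAS while retaining manual electric trim, whereas the hypothetical rewiring provides exactly that.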
Double07 is offline  
Old 7th May 2019, 21:22
  #5079 (permalink)  
 
Join Date: May 2010
Location: Boston
Age: 73
Posts: 443
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Double07
Please excuse me if I am repeating something that has been discussed earlier in this thread, but from the schematic for the “Horizontal Stabilizer Trim Control System – Functional Description – Electric Trim” (see the PPRuNe thread entitled “737MAX Stab Trim architecture”, post #194), one can see that when MCAS is active, then STS is inactive. And when MCAS is inactive, then STS is active.
...
... Could this help to explain the failure of the ET302 pilot to trim back to a fully neutral trim after an MCAS trim event?
...
This means that there is no way to turn off all automatic trim functions while keeping the manual electric trim operative.
......
This would make the 737 MAX operate more like the 737NG aircraft, allowing full use of manual electric trim at all times. Why has this not been done?
...
Other than its existence and a few things posted here I have no knowledge of STS; however, it does not look to be a significant factor in the ET case.
There is one brief NU automatic trim command on the trace at 05:40:00 after the AP disconnect, followed by (or even interrupted by) MCAS for 9 seconds; this could be an STS input.

The only possible impact of this is that the pilot may have heard trim starting and glanced down to see normal looking (NU) trim, which was quickly reversed by MCAS.
Had not thought of that before, but it is likely not significant; in any case it was short and "the right way".

BTW: The penultimate Lion Air crew wrote up the defect as "STS trimming in wrong direction".

Correct on cut out switch change from NG to MAX.

No one has offered an 'official' sounding reason for the change.
The most likely explanation is that training had shifted to always using both switches, and it simplified some aspect (certification?) of MCAS not to have a separate auto-only cutout, but both switches were retained for commonality.

Last edited by MurphyWasRight; 7th May 2019 at 21:27. Reason: spell check 'typos'
MurphyWasRight is offline  
Old 7th May 2019, 21:29
  #5080 (permalink)  
 
Join Date: Feb 2006
Location: USA
Posts: 487
Likes: 0
Received 0 Likes on 0 Posts
Yet another review panel

https://www.seattletimes.com/busines...pgrade-review/

FAA asks for NASA’s help in Boeing 737 MAX safety-upgrade review
May 7, 2019 at 12:40 pm Updated May 7, 2019 at 1:58 pm
By Alan Levin and Ryan Beene
Bloomberg

The Federal Aviation Administration is convening a panel of outside experts from the Air Force, NASA and a Transportation Department center to review Boeing’s software fixes for the grounded 737 MAX.

The agency announced the new Technical Advisory Board in a statement on Tuesday. The panel’s recommendations will “directly inform the FAA’s decision concerning the 737 MAX fleet’s safe return to service,” the agency said.

The plane was grounded on March 10 after the second fatal accident in less than five months claimed a total of 346 lives. Boeing designed the plane with a system that automatically forced down the nose in some circumstances, and malfunctions on both flights caused it to repeatedly dive until the pilots lost control.

The manufacturer is changing the software to make it less likely to fail and to limit how far it can drive down the nose. Boeing and the FAA have been working closely on the software fix, but the Chicago-based planemaker hasn’t completed its work.

The new panel is separate from two other existing reviews created by FAA. The DOT’s Volpe National Transportation Systems Center in Massachusetts is participating.

Zeffy is offline  
