PPRuNe Forums - View Single Post - Boeing 737 Max Software Fixes Due to Lion Air Crash Delayed
Old 16th Mar 2019, 23:25
  #242 (permalink)  
JPcont
 
Join Date: May 2011
Location: Finland
Age: 62
Posts: 16
Originally Posted by RickNRoll
All raw data is suspect and all systems working with raw data must be able to tolerate faulty data. It should be a part of the system design and testing that faulty data is going to have to be dealt with.
It sounds reasonable, but I am afraid that in reality it is only a pipe dream. There are a few fundamental issues. For example:

First, at the end of the day, control laws work as if the data were reliable. Yes, they can work with unreliable information, but only because the information is first made reliable by some means (e.g. (Kalman) filtering, or fuzzy sets in simple cases).
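To make the filtering point concrete, here is a minimal sketch of my own (not anything from Boeing's actual software): a one-dimensional Kalman filter that turns a noisy sensor reading, say angle of attack, into a smoothed estimate the control law can trust. The noise variances q and r are assumed values chosen for illustration.

```python
def kalman_1d(measurements, q=0.01, r=4.0):
    """Scalar Kalman filter.

    q: process noise variance (how fast the true value may drift).
    r: measurement noise variance (how noisy the sensor is).
    Both are illustrative assumptions, not tuned values.
    """
    x, p = measurements[0], 1.0      # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                       # predict: uncertainty grows between samples
        k = p / (p + r)              # Kalman gain: how much to trust the new reading
        x += k * (z - x)             # update the estimate toward the measurement
        p *= (1.0 - k)               # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

noisy = [5.0, 5.4, 4.8, 25.0, 5.1, 4.9]   # one wild outlier at index 3
print(kalman_1d(noisy))
```

Note how the single 25-degree spike is heavily attenuated rather than passed straight into the control loop; that is the sense in which the data is "made reliable" before the control law sees it.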

The second issue is testing. There might be a huge test set to cover all the possible cases. Unfortunately, failure modes that are considered impossible are not recognized, and so are never tested...

Third is money. Redundancy is kept minimal to avoid costs, largely maintenance costs. Sensors tend to break down, and nobody wants a hangar queen. If you have only two sensors, you can only disable the whole feature; with three, you can vote one out.
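The two-versus-three sensor point can be sketched in a few lines (my own illustration, with hypothetical function names and a made-up tolerance): with three redundant sensors, a mid-value select out-votes any single failure, while with only two you can merely detect a miscompare and give up.

```python
def mid_value_select(a, b, c):
    """Triplex voting: the median of three readings ignores one faulty sensor."""
    return sorted([a, b, c])[1]

def duplex_check(a, b, tolerance=2.0):
    """Duplex: we can detect disagreement but not say which sensor is wrong,
    so on a miscompare the feature must be disabled (returns None)."""
    if abs(a - b) > tolerance:
        return None
    return (a + b) / 2.0

print(mid_value_select(5.1, 4.9, 74.5))   # faulty third sensor is out-voted
print(duplex_check(5.1, 74.5))            # disagreement: feature disabled
```

This is exactly the trade-off in the paragraph above: the third sensor buys fault tolerance, but also a third unit to buy, install, and maintain.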

One additional problem with a modern distributed automation system, where signals are digital "messages", is that in troubleshooting you cannot rely even on causality. Signals travel with different priorities over different paths, so a response can sometimes be seen before its action. That can make even simple fault analysis complicated. There is a role for humans in the future as well; fortunately, humans are very good at troubleshooting.

Seen as a control system, the MAX's MCAS can cause positive feedback in the main control loop by trimming the plane badly. When the nose goes down, the speed increases, the stick force increases, and the speed tends to increase even more. In control theory, positive feedback means instability. It can be tolerated in some subsystems, but never in the main control loop. That is why, I think, the whole automation system should have a consistent "process image" of trimming decisions; it cannot be based on the opinion of one distinct subsystem (trimming beyond the point where stick forces remain reasonable as speed increases).
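The instability argument can be shown with a toy iteration (not an aerodynamic model, just the textbook feedback relation x[k+1] = x[k] + g*x[k]): a positive loop gain makes any small deviation grow without bound, while a negative gain damps it back toward zero.

```python
def run_loop(gain, x0=1.0, steps=10):
    """Iterate a first-order feedback loop: each step the deviation x
    is fed back into itself scaled by the loop gain."""
    x = x0
    for _ in range(steps):
        x += gain * x
    return x

print(run_loop(+0.3))   # positive feedback: deviation grows (1.3**10, about 13.8)
print(run_loop(-0.3))   # negative feedback: deviation decays (0.7**10, about 0.03)
```

In the MCAS case the "gain" comes from aerodynamics: nose-down trim raises speed, which raises stick force and the tendency to go faster still, so the deviation feeds on itself just as in the positive-gain run above.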

The automatic nose-down action should be implemented so that it can be reversed without long delays. If not, how can the pilots override a "mad" automation system? When it is working correctly, the automation system can override pilot actions, which increases safety. On the other hand, the automation system also has to keep the plane in a condition where the pilot can take control at any moment, likewise for safety reasons.

Edit:
I forgot to mention one largely ignored fact: in closed-loop systems, the feedback (autopilot, pilot, etc.) restricts the possibility of identifying the "state" of the system. So the automation architecture has to be defined so that it does not restrict the pilots' situation awareness too much.

Last edited by JPcont; 17th Mar 2019 at 13:22. Reason: Forget to mention