PPRuNe Forums - Boeing 737 Max Software Fixes Due to Lion Air Crash Delayed
18th Mar 2019, 07:46
  #253
RickNRoll
 
Join Date: Jul 2013
Location: Australia
Posts: 313
Originally Posted by JPcont
It sounds reasonable, but I am afraid that in reality it is only a pipe dream. There are a few fundamental issues. For example:

First: at the end of the day, control laws only work as well as the data feeding them is reliable. Yes, they can cope with noisy inputs, but only because the information is first made reliable by some means (e.g. Kalman filtering, or fuzzy sets in simple cases).
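
A rough sketch in Python of what I mean by making a noisy signal usable (a minimal one-dimensional Kalman-style filter; the numbers and names are purely illustrative, not anyone's real implementation):

Code:
# Minimal 1-D Kalman-style filter: blend a noisy reading with a prediction,
# weighting each by how much we trust it. Noise values are illustrative.

def kalman_step(estimate, variance, measurement,
                process_noise=0.01, measurement_noise=4.0):
    # Predict: state assumed roughly constant, uncertainty grows a little.
    variance += process_noise
    # Update: the Kalman gain decides how much to trust the new measurement.
    gain = variance / (variance + measurement_noise)
    estimate += gain * (measurement - estimate)
    variance *= (1.0 - gain)
    return estimate, variance

# Example: noisy angle-of-attack-like readings around a true value of 5 deg.
readings = [5.8, 4.1, 6.3, 4.7, 5.2, 5.9, 4.4]
est, var = readings[0], 4.0
for r in readings[1:]:
    est, var = kalman_step(est, var, r)
print(round(est, 2))   # settles near 5, much smoother than the raw readings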

The second issue is testing. There might be a huge test set intended to cover all the possible cases. Unfortunately, failure modes considered impossible are never recognised, and so are never tested...

Third is money. Redundancy is kept minimal to avoid costs, largely maintenance costs. Sensors tend to break down and nobody wants a hangar queen. With only two sensors you can merely detect a disagreement and disable the whole feature; with three you can vote the bad one out.
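
To make the voting point concrete, a hypothetical sketch (the threshold and function names are invented):

Code:
# With two sensors you can only detect disagreement and give up the feature;
# with three you can out-vote the odd one out. Threshold is illustrative.

def consolidate(readings, disagree_threshold=5.0):
    if len(readings) == 2:
        a, b = readings
        if abs(a - b) > disagree_threshold:
            return None          # disagreement: disable the feature
        return (a + b) / 2.0
    if len(readings) == 3:
        # Take the median: a single failed sensor is simply voted out.
        return sorted(readings)[1]
    raise ValueError("expected 2 or 3 sensors")

print(consolidate([4.8, 5.1]))        # agree -> 4.95, feature stays active
print(consolidate([4.8, 22.0]))       # disagree -> None, feature disabled
print(consolidate([4.8, 22.0, 5.1]))  # faulty 22.0 voted out -> 5.1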

One additional problem with modern distributed automation systems, where signals are digital "messages", is that during troubleshooting you cannot even rely on causality. Signals travel with different priorities and over different paths, so the response can sometimes be seen before the action that caused it. That can make even simple fault analysis complicated. There is a role for humans in the future, too; fortunately, humans are very good at troubleshooting.
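
A toy illustration of that ordering problem (everything here is invented): if a high-priority "response" message overtakes a lower-priority "cause" message on the bus, the log shows the effect before the cause.

Code:
import heapq

# Toy message bus: lower number = higher priority, and higher-priority
# messages are delivered first even if they were sent later.
bus = []
heapq.heappush(bus, (5, 1, "trim command sent"))    # the cause, sent at t=1
heapq.heappush(bus, (1, 2, "stick force warning"))  # the response, sent at t=2

while bus:
    priority, sent_at, text = heapq.heappop(bus)
    print(f"delivered: {text} (sent at t={sent_at})")

# The warning is delivered before the command that provoked it, so a naive
# reading of the log gets the causality backwards.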

Seen as a control system, the MAX has a problem: MCAS can introduce positive feedback into the main control loop by mis-trimming the aircraft. When the nose goes down, speed increases, stick force increases, and speed tends to increase even more. In control theory, positive feedback means instability; it can be tolerated in some subsystems, but never in the main control loop. That is why I think the whole automation system should have a consistent "process image" of trimming decisions, rather than acting on the opinion of a single subsystem (beyond trimming so that stick forces stay reasonable as speed increases).
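
A crude numerical illustration of why positive feedback in the main loop runs away while negative feedback settles (the gains are arbitrary, nothing aircraft-specific):

Code:
# Crude discrete-time loop: the error feeds back into the state each step.
# Negative feedback shrinks the error; positive feedback amplifies it.

def run_loop(gain, steps=10):
    state, target = 1.0, 0.0
    history = []
    for _ in range(steps):
        error = state - target
        state += gain * error     # gain < 0: negative feedback; > 0: positive
        history.append(round(state, 3))
    return history

print(run_loop(-0.5))   # 0.5, 0.25, ... converges towards the target
print(run_loop(+0.5))   # 1.5, 2.25, ... runs away: instability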

The automatic nose-down action should be implemented so that it can be reversed without long delays. If not, how can the pilots override a "mad" automation system? When it is working properly, the automation system may override pilot actions; that increases safety. On the other hand, the automation system has to keep the plane in a condition where the pilot can take control at any moment, also for safety reasons.
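
A hypothetical sketch of the kind of reversibility I mean: any automatic trim order is rate-limited, and it is dropped the instant the pilot commands anything at all (names and limits are invented):

Code:
# Hypothetical trim arbiter: pilot input always wins and takes effect
# immediately; automatic nose-down trim is rate-limited and reversible.

MAX_AUTO_RATE = 0.2   # trim units per cycle, illustrative only

def trim_command(pilot_input, auto_request):
    if pilot_input != 0.0:
        # Pilot action overrides and cancels the automatic request at once.
        return pilot_input
    # Clamp the automatic request so it can never run away faster than
    # the pilot can undo it.
    return max(-MAX_AUTO_RATE, min(MAX_AUTO_RATE, auto_request))

print(trim_command(0.0, -1.0))   # -0.2: automation limited
print(trim_command(+0.5, -1.0))  # +0.5: pilot wins immediately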

Edit:
I forgot to mention one largely ignored fact: in closed-loop systems, the feedback (autopilot, pilot, etc.) restricts the ability to identify the "state" of the system. So the automation architecture has to be defined so that it does not restrict the pilots' situation awareness too much.
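
A small, purely schematic illustration of that last point: a closed loop that keeps the flight path looking normal can hide a growing trim fault; the fault only shows up in how hard the loop has to work to hold it.

Code:
# Schematic: a slowly growing disturbance (e.g. runaway trim) acts on the
# state, but the closed loop cancels it, so the observed output barely moves
# while the control effort quietly grows.

state, control, disturbance = 0.0, 0.0, 0.0
for step in range(1, 11):
    disturbance += 0.3             # the hidden fault keeps growing
    state = disturbance + control  # plant: fault plus control action
    control -= state               # feedback drives the output back down
    if step % 5 == 0:
        print(f"step {step}: output={state:+.2f}  control effort={control:+.2f}")

# The output stays small and constant; only the growing control effort
# reveals that something is wrong underneath.
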
I don't mean it has to keep working despite a failure; it has to be tolerant, and that can mean failing gracefully. For example, give the pilot a clear warning message and hand control back to the pilot. The MCAS software could have done this, and now will.
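
As a sketch of what "failing gracefully" could look like in code (hypothetical names and limits, nothing to do with the real implementation):

Code:
# Hypothetical graceful degradation: if the two AoA sources disagree, the
# augmentation function stands down, tells the crew, and hands back full
# manual authority instead of acting on bad data.

AOA_DISAGREE_LIMIT = 5.5   # degrees, illustrative only

def augmentation_step(aoa_left, aoa_right, crew_alerts):
    if abs(aoa_left - aoa_right) > AOA_DISAGREE_LIMIT:
        crew_alerts.append("AOA DISAGREE - trim augmentation inhibited")
        return 0.0                     # no automatic trim input
    # Normal case: act on the average of two agreeing sensors (schematic).
    return compute_trim((aoa_left + aoa_right) / 2.0)

def compute_trim(aoa):
    # Placeholder for the normal trim computation.
    return -0.1 if aoa > 12.0 else 0.0

alerts = []
print(augmentation_step(4.0, 22.0, alerts))  # 0.0 -> feature inhibited
print(alerts)                                # clear message to the crew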