PPRuNe Forums - View Single Post - AF 447 Thread No. 9
Old 11th Jul 2012, 11:37
  #247
slats11
 
Join Date: Aug 2007
Location: sydney
Age: 60
Posts: 496
Quote:
From a system engineer's point of view, I perceive a gross discrepancy between the automation's behaviour *before* leaving the predefined, valid flight envelope and its behaviour thereafter. Before, the pilots are protected from all sorts of (possibly) stupid control inputs; but as soon as a single sensor fails, the whole protection system simply quits.
Humans are not computers, and are extremely susceptible to fatigue, habit, boredom, surprise and panic. If you wanted to design a system with the goal of provoking 'human error', these are the human weaknesses you'd exploit. And that is exactly what the Airbus did: lull the pilots into a seemingly fool-proof, fully automated environment, and then, at a slight (!) hardware malfunction, drop everything raw onto them, intermingled with inconsistent alerts and warnings.
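The abrupt handoff being criticised can be sketched as a toy control-law selector. This is a hypothetical illustration only, not Airbus's actual logic: the function and sensor names are invented, and the point is simply that a single invalid input flips the system from full protection to almost none in one step.

```python
# Hypothetical sketch (NOT Airbus's real implementation): a control-law
# selector that withdraws ALL envelope protections the moment any single
# sensor is flagged invalid -- the abrupt "normal law -> alternate law"
# handoff the quoted post describes.

def select_law(sensors_valid: dict) -> str:
    """Return the active control law given per-sensor validity flags."""
    if all(sensors_valid.values()):
        return "NORMAL"     # full envelope protection active
    return "ALTERNATE"      # one bad sensor: protections largely withdrawn

# One failed airspeed input is enough to remove every protection at once,
# even though the remaining sensors are still healthy.
print(select_law({"airspeed": True, "aoa": True}))    # NORMAL
print(select_law({"airspeed": False, "aoa": True}))   # ALTERNATE
```

The all-or-nothing branch is the design choice at issue: a more graceful scheme might degrade only the protections that depend on the failed sensor.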
I would agree with this. And I wonder whether this logic reflects underlying concerns about blame and legal ramifications should a "protected system" crash. So if there is a problem, downgrade the computer support, give greater authority to the pilot, and let him or her fly it. If it ends badly, people will lament the lack of manual flying skills and blame the pilots, the training, the operator, the culture... rather than the equipment.

So I wonder at the wisdom of abruptly changing horses midstream. And I wonder whether this design logic is in the best interests of the AI, or the passengers.