PPRuNe Forums - Airbus crash/training flight
2nd Feb 2009, 12:29
[Steve]
 
"Could you precise a little more why/how the aguadalte statement is flawed? "

Speaking as another computer boffin (and one who has been responsible for software where failure could result in loss of life), I read it as stickyb actually agreeing with the main assertion -- as I understand it -- that it is *really* hard for a computer to know when it is incapable of producing the correct outputs for a given set of inputs. aguadalte suggests a "big red button" (BRB), which stickyb claims is not feasible.
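To illustrate what I mean (a toy Python sketch of my own invention, nothing like real avionics code): a program can only validate its outputs against the checks its designers anticipated, so an output that is wrong but plausible passes every test.

Code:
# Toy sketch (my own invention, nothing like real avionics code):
# an output can only be checked against the failure modes the
# designers thought of in advance.

def compute_pitch_command(sensed_pitch_deg, target_pitch_deg):
    """Toy proportional controller for pitch."""
    return 0.5 * (target_pitch_deg - sensed_pitch_deg)

def command_is_plausible(command_deg):
    # These bounds encode the failures we *anticipated*. A command
    # that is wrong but in range (say, computed from a frozen sensor
    # feeding stale data) passes every check -- the program cannot
    # "know" it is incapable of being correct here.
    return -15.0 <= command_deg <= 15.0

cmd = compute_pitch_command(sensed_pitch_deg=2.0, target_pitch_deg=5.0)
if not command_is_plausible(cmd):
    cmd = 0.0  # fall back to neutral, for the cases we foresaw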

I tend to agree with both of them :-)

Where I disagree with stickyb is in his assertion that a BRB is not feasible. If the software already has the ability to gracefully degrade the level of automation, then another input that forces it to do so adds complexity, but nowhere near as much as the complexity involved in "knowing" all the exceptions that should cause this to happen automatically.
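To make that concrete, here is a toy sketch (invented names and events, not any real flight-control architecture): if a degradation path from NORMAL through ALTERNATE to DIRECT already exists, the BRB is a single extra transition into it, whereas the automatic triggers have to enumerate every exception in advance.

Code:
# Toy sketch of graceful degradation (invented names/events, not any
# real flight-control law). The automatic path must enumerate all the
# anticipated exceptions; the "big red button" is one extra transition
# that reuses the same machinery.

LAWS = ["NORMAL", "ALTERNATE", "DIRECT"]  # most to least automated

class FlightLaws:
    def __init__(self):
        self.law = "NORMAL"

    def degrade(self):
        """Step down one level of automation, reusing existing logic."""
        idx = LAWS.index(self.law)
        if idx < len(LAWS) - 1:
            self.law = LAWS[idx + 1]

    def on_system_event(self, event):
        # The genuinely hard part: every exception that should trigger
        # automatic degradation must be anticipated and listed here.
        if event in ("SENSOR_DISAGREE", "COMPUTER_FAULT", "HYDRAULIC_LOSS"):
            self.degrade()

    def on_big_red_button(self):
        # The BRB itself is trivial by comparison: a forced call into
        # the degradation path that already exists.
        self.degrade()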

What worries me, however, is whether such a manual override would lead to more or fewer problems. The obvious example is where some un-commanded pitch up or down is overridden by flying more "manually". However, in other cases, where a pilot is under extreme stress or where (s)he becomes situationally unaware, use of the BRB or the actions taken thereafter may be inappropriate and lead to a worse outcome.

From my reading here, some pilots are of the view that they should be given the opportunity to overstress an aircraft if flying within the limits would result in a less desirable outcome. I understand (poorly, probably) that Airbus' attitude is that the automation should help prevent you getting into that place between a rock and a hard place -- which does not directly address the issue.

In my case, the operators of the software were (are) not in personal danger, and the additional cost of dealing with the tricky cases was considered unwarranted, so we opted for a system that simply suggested the "correct" action but left the operator to manually move the controls (in a manner of speaking), and hence with the ultimate authority and ability to override the automatic suggestions. I am in awe of the software which runs aboard modern aircraft, as in many cases other design decisions have been taken (and with great success).
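For what it's worth, the shape of that design was roughly this (a minimal sketch with invented names, not our actual code): the software computes and displays a suggestion, but only the operator's explicit input actuates anything.

Code:
# Minimal sketch (invented names, not our actual code) of an
# advisory-only design: the software suggests, the human actuates.

def suggest_action(readings):
    """Compute the 'correct' action from readings (toy rule)."""
    return "OPEN_VALVE" if readings["pressure"] > 100.0 else "HOLD"

def display(msg):
    print(msg)

def actuate(action):
    print(f"Operator commanded: {action}")

def main_loop(readings, operator_input):
    display(f"Suggested action: {suggest_action(readings)}")
    # Crucially, nothing is actuated automatically. The operator keeps
    # the authority to follow, ignore, or override the suggestion.
    if operator_input is not None:
        actuate(operator_input)

main_loop({"pressure": 120.0}, operator_input="OPEN_VALVE")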

An interesting question from my perspective is: "How often do the various protections governing (say) Normal law get triggered in a way that protects the pilot and aircraft from a negative outcome, versus how many times have these protections, or the erroneous triggering of them, resulted in a negative outcome?" (I ask because I have no idea.)

I've been trying to write this last paragraph for some time, so please excuse me if I express myself poorly. Even if it were shown that pilots were statistically a far worse bet than potentially failing computers, I would hate to be the pilot who meets his fate *knowing, correctly,* that all he needed to do was override a computer :-(