PPRuNe Forums - Drone airlines - how long?
Old 30th Oct 2017, 09:08
Musician
 
Originally Posted by double_barrel
That is a strange attitude given that most fatal accidents are caused by human beings.
That's a bogus argument, especially as
a) many systems are still controlled by humans, so humans account for far more accidents on sheer numbers alone, and
b) in that model, accidents involving both automation and humans are usually blamed on the humans, not on the automation.

Computer systems usually have a narrow, well-defined area within which they can operate safely; venture outside that area, and their performance drops off far more sharply than a human's would.
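To make that concrete, here is a purely illustrative sketch (nothing here comes from a real avionics system; the function, limits and numbers are invented for the example): a controller that only works inside a narrow, pre-validated range and offers nothing at all outside it.

def pitch_command(airspeed_kt, altitude_ft):
    """Toy control law, valid (by assumption) only inside a narrow envelope."""
    if not (120.0 <= airspeed_kt <= 350.0 and 0.0 <= altitude_ft <= 41000.0):
        # Outside the validated range the design simply gives up:
        # there is no graceful degradation, just a hard edge.
        raise ValueError("outside validated envelope, no command available")
    # Inside the envelope, a made-up linear law does the job.
    return 2.0 + 0.01 * (250.0 - airspeed_kt)

A human in the same spot degrades gradually; this function goes from "works" to "nothing" in a single step at the boundary.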

On AF447, the autopilot encountered a condition it could not resolve and turned itself off. Had it not turned itself off, what would have happened? What would have occurred if it had adjusted its operation to the erroneous inputs? What would happen if it did that in every situation where it currently turns itself off and the humans save the day? Was the inability to cope with the situation an inherently human problem, or was the introduction of automation into the cockpit a contributory cause?
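As a purely illustrative sketch of that design choice (this is not the actual A330 logic; the function name, threshold and return values are invented), the question is what the automation should do once its redundant airspeed sources stop agreeing:

def airspeed_monitor(sources_kt, max_spread_kt=20.0):
    """Decide what the autopilot does when airspeed data may be bad."""
    spread = max(sources_kt) - min(sources_kt)
    if spread > max_spread_kt:
        # The sources disagree: the system's picture of the world is no
        # longer trustworthy, so hand the problem back to the crew.
        return "DISCONNECT"
    # The sources agree: keep flying the current mode.
    return "HOLD_MODE"

The alternative, silently adjusting to erroneous inputs and flying on with full authority, is exactly the failure mode the questions above are probing.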

Are human-interface issues a failure of humans to operate machinery, or a failure of automation to cooperate with humans?

I've been reading comp.risks on and off for almost three decades. It's about Risks to the Public in Computers and Related Systems, and as you may guess, those are almost all automation risks (the occasional laptop catching on fire notwithstanding). This includes aviation topics.

My personal takeaway is that whenever an automated system assumes that a) it has a complete picture of the situation and b) it has complete control, the time will come when one of these assumptions no longer holds, and then the system will fail. The problem is that these assumptions make a computerized system easy to design. It is hard to design a system that can recognize that its inputs may be bad, and therefore that its outputs may be bad as well, and still deal with that. (The usual approach to the system itself being bad is "put three of them in; if one disagrees, disable it". Bonus points if the three systems are not identical, because otherwise they would simply produce the same errors in some cases, but of course that requires three times the effort.)
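As a minimal sketch of that "put three of them in" approach (the names, values and tolerance are illustrative assumptions, not any certified voting scheme): three channels compute the same quantity, a voter compares them, and a clear outlier is excluded.

def vote(channel_a, channel_b, channel_c, tolerance=2.0):
    """Two-out-of-three voter: discard a single channel that disagrees."""
    readings = {"A": channel_a, "B": channel_b, "C": channel_c}
    for name, value in readings.items():
        others = [v for k, v in readings.items() if k != name]
        # If the other two agree with each other but both disagree with
        # this channel, treat this channel as failed and vote it out.
        if (abs(others[0] - others[1]) <= tolerance
                and all(abs(value - o) > tolerance for o in others)):
            return sum(others) / 2.0
    # No clear outlier: use the average (a real system would more likely
    # drop into a degraded mode here).
    return (channel_a + channel_b + channel_c) / 3.0

print(vote(101.0, 99.5, 250.0))  # channel C is voted out, result 100.25

If the three channels are identical implementations, a single software error can make all three agree on the same wrong answer, which is what the dissimilarity mentioned above guards against, at roughly three times the cost.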

And then there's malicious interference: it's hard to get a pilot to crash an airplane, but once you have found a way to make a computer do it, you can easily do it to all of them. That's another thing that will push fatal automation accidents up as automation spreads to more critical systems; yet since the accident was caused by a malicious human, it will once again land on the other side of the statistics.

You also need to consider this question: if the effort spent on making automation safer were instead spent on giving humans the tools to make their own activities safer, what would the result be?

Arguments by statistics may seem simple and convincing, but when you delve into the issues, you're going to find that no statistic tells the whole truth.