PPRuNe Forums - View Single Post - Drone airlines - how long?
1st Nov 2017, 16:43
Musician
 
I have multiple issues with your post, DB.

First of all, the automation can get it wrong. There's a list of incidents on the Wikipedia ADIRU article, e.g. Malaysia Airlines Flight 124, which the autopilot would have happily crashed, but for the humans in the cockpit.

You assert that automation would have gotten it right on AF447. But the reason the pilot pitched up was the information provided by the automation:
"The A330 static ports are located below the fuselage mid-line forward of the wing. On the A330-200 in particular, as a result of the position of teh static pressure sensors, the measured static pressure overestimates the actual static pressure. One of the first effects after AF447'spitot tubes became obstructed was that the internal altimeter corrections were recalculated as if the airplane was flaying at lower speeds. This resulted in false indications of a 300 foot decrease in altitude and a downward vertical speed approaching 600 feet per minute." (Bill Palmer, "Understanding Air France 447")
So what would the autopilot have done if faced with a descent like that? I'd say, about the same thing the humans did. Also note that because the ADIRU altitude is slowly pulled onto the barometric altitude whenever there's a deviation, this descent wouldn't have looked like a sudden jump. Bill Palmer also says, "You've heard that the crew did not react to the stall warning. But, you'll see that they reacted exactly how they were taught to - it just didn't do any good." (understandingaf447.com)
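To make the "slowly pulled onto barometric altitude" point concrete, here's a toy calculation (my own simplification with an invented gain and timing, not the actual ADIRU blending logic): a first-order blend between inertial and pressure altitude turns a sudden 300 ft error in the pressure source into what looks like a steady, credible descent rather than an instantaneous jump.

    # Toy model: inertial altitude is slowly pulled toward barometric altitude.
    # A sudden -300 ft error in the baro source then shows up as a gradual
    # "descent" with a plausible-looking vertical speed, not a step change.
    DT = 1.0        # sample interval in seconds (invented)
    TAU = 30.0      # blending time constant in seconds (invented)

    true_alt = 35000.0
    baro_alt = true_alt - 300.0     # static-source error: baro now reads 300 ft low
    blended = true_alt

    for t in range(0, 121):
        prev = blended
        blended += (baro_alt - blended) * (DT / TAU)    # first-order pull toward baro
        vs_fpm = (blended - prev) / DT * 60.0           # indicated vertical speed
        if t % 30 == 0:
            print(f"t={t:3d}s  indicated alt={blended:8.1f} ft  V/S={vs_fpm:7.1f} fpm")

With these made-up numbers the initial indicated sink rate comes out around 600 fpm, roughly the figure Palmer quotes; the point is only that the automation would see a smooth, believable descent, not an obvious sensor fault.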

"make a best guess at actual airspeed based on other parameters"-- well, there is the Back-up Speed Scale, but it's not supposed to be used above FL250. The fallback is pitch/thrust tables, but I'd assume these are only useful when the aircraft isn't about to stall.

Automatic systems can have a "startle factor" as well (disregarding for a moment the questionable assertion that the AF447 crew were "startled"). For one, the AF447 ADIRU was "startled" into computing a bad vertical speed, without being aware of it. For another, reset an IRU and it becomes useless. Now count all the various ways technical systems can fail and not be aware of it...
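To illustrate "fail and not be aware of it" (a toy example of my own, not any certified monitoring scheme): a classic 2-out-of-3 voter catches a single bad sensor, but if all three probes fail the same way -- as the AF447 pitots did -- the voter agrees with itself and reports a confident, wrong value with no warning at all.

    # Toy 2-out-of-3 voter: rejects a single outlier, but a common-mode failure
    # (all three probes iced the same way) passes straight through as "valid".
    def vote(a, b, c, tolerance=5.0):
        readings = [a, b, c]
        for i, r in enumerate(readings):
            others = [x for j, x in enumerate(readings) if j != i]
            if all(abs(r - o) > tolerance for o in others):
                readings.pop(i)         # discard the lone disagreeing sensor
                break
        return sum(readings) / len(readings)

    print(vote(272.0, 271.0, 180.0))    # one iced probe: outlier rejected, ~271 kt
    print(vote(180.0, 181.0, 179.0))    # all three iced: confident 180 kt, no flag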

The main problem though is this: "If you can describe the circumstances you can program them into the system." And that's not adequate. With humans, you can describe situations to them, and solutions, and when they're faced with something unknown, they'll look for analogies in their knowledge and apply them as best as they think appropriate. (This includes selection and execution of trained procedures.) This means that humans perform best in familiar circumstances; they don't always perform optimally, but their performance falls off gradually as circumstances go outside the norm.

Most computers can't reason like that (and those that can aren't fully understood). A computer's actions are guided by rules. Now think about bureaucracy and how inappropriate its rules can be in situations they weren't made for. You create the rules for a certain set of assumptions, and if you are really rigorous, you identify those assumptions and "do nothing" when they don't hold. Since a computer can't do nothing unless it turns itself off, that's what it does at present. But a system that can't turn itself off, because there is no pilot, has to follow rules that were not made for the situation it finds itself in, and that is when the behaviour of the automatic system deteriorates sharply: it has no way to select which rules it should be following and thus follows even the nonsensical ones. The behaviour that emerges from the interplay of a complex system of rules is always somewhat unpredictable.
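A crude sketch of that difference (entirely hypothetical, just to make the argument concrete): a rule-based controller matches the situation against the conditions its designers anticipated. With a pilot on board, "no applicable rule" can mean "disconnect and hand over"; without one, the system has to apply something, fitting or not, and that is exactly where the behaviour degrades.

    # Hypothetical rule-based controller. Each rule has a precondition and an action.
    # With a pilot available, "no rule applies" can mean "disconnect and hand over";
    # without one, the system is forced to apply *something*, fitting or not.
    RULES = [
        {"name": "stall_recovery",
         "applies": lambda s: s["aoa"] > 12 and s["speed_valid"],
         "action": "pitch down, adjust thrust per procedure"},
        {"name": "overspeed",
         "applies": lambda s: s["mach"] > 0.86 and s["speed_valid"],
         "action": "reduce thrust, raise the nose slightly"},
        {"name": "hold_attitude",
         "applies": lambda s: s["speed_valid"],
         "action": "maintain current pitch and thrust"},
    ]

    def decide(state, pilot_on_board):
        for rule in RULES:
            if rule["applies"](state):
                return rule["action"]
        if pilot_on_board:
            return "disconnect autopilot, alert crew"   # the honest "do nothing"
        # No rule matches and nobody to hand over to: apply the last rule anyway,
        # even though its assumptions don't hold. This is where it gets unpredictable.
        return RULES[-1]["action"]

    state = {"aoa": 15, "mach": 0.3, "speed_valid": False}   # stalled, airspeed rejected
    print(decide(state, pilot_on_board=True))    # hands the problem to the crew
    print(decide(state, pilot_on_board=False))   # blindly applies an unsuitable rule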

So you make some rules for the situations you've identified, and you add them to the system, and now you have a complex system of rules (including those that deal with the various types of failure that might occur), and sooner or later you stumble upon a situation you hadn't considered, where the behaviour generated by those rules results in a failure, possibly with many passengers aboard.

Your suggestion of "add some code for each nonstandard situation we know about" leads to an unstable, unmanageable software system with unpredictable performance in critical cases. (This is true of all software systems; ask any software engineer.) Well, "unpredictable" is not entirely true, because you can fall back on statistics, but then you need a large number of samples to get reliable data -- in other words, you have to learn from experience -- which means you can't predict the safety of the system in advance.
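To put a rough number on "a large number of samples" (my own back-of-the-envelope, assuming independent flights and a simple binomial model, which is not how certification actually works): showing from operational data alone that the per-flight probability of catastrophic failure is below some level p, at 95% confidence, takes on the order of 3/p consecutive failure-free flights.

    import math

    # How many failure-free flights are needed before you can claim, at a given
    # confidence, that the per-flight failure probability is below p?
    # Simple binomial model with independent flights -- an assumption for
    # illustration, not a regulatory method.
    def flights_needed(p, confidence=0.95):
        # smallest n with (1 - p)**n <= 1 - confidence
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

    for p in (1e-5, 1e-7, 1e-9):
        print(f"p < {p:.0e}: about {flights_needed(p):,} failure-free flights")

And that is with zero failures observed; every failure that does occur pushes the requirement up further. That's what I mean by not being able to predict the safety of the system in advance -- you can only accumulate the evidence.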

yellowperil made the point that automatic systems are not inherently safer, but they're seen as cheaper than human-controlled systems, so there's money that can be profitably invested to make them safer than human aviation, which would then enable their introduction. This means that the argument "drone flight is safer than human flight" goes out the window; it is only true because we want it to be, and it suffices for the stakeholders to make it appear to be true -- and they're motivated to make it appear true at the least cost to themselves. I think that is cause to be suspicious.

For advanced weapons systems, it is true that they're often demonstrated in controlled conditions (i.e. where the assumptions made by the designers are ensured to be true), and even then tests often fail. (I think in one of the recent cruise missile attacks on Syria, only about half of them hit anything. There go your fully automatic drones.) The same goes for software security on Internet-connected systems: once a hacker can engineer a situation that breaks the designers' assumptions, the system can often be compromised. But when the system was demonstrated, it certainly looked capable.

Will there be drone airlines in the future? Possibly. But they're definitely still a long way off.