PPRuNe Forums > Flight Deck Forums > Tech Log

Can automated systems deal with unique events?

Old 26th Oct 2015, 16:21
  #1 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 364
Likes: 0
Received 1 Like on 1 Post
Can automated systems deal with unique events?

There has always been interesting comment on Prune about software reliability, bugs, design requirements, testing, etc., most recently under the topic of a B787 Dreamliner engine issue. There appear to be a significant number of Ppruners who are serious and knowledgeable on the subject.

I would like to ask those members a philosophical question. This has an impact on the argument that a safety priority now should be the elimination of human pilots from the system via automation.

The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.

The pro-automation lobby (see, for example, the thread I started in March: "Pilotless airliners safer" - London Times article) starts from the view that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make and the consequent accidents.

This first started being discussed seriously in the late 1980s, when the Flight Engineer function was automated out of the B747 to create the -400, out of the DC-10 to create the MD-11, and so on. (Note - this was not the same as the 3-person vs. 2-person crew controversy, so please don't mix that into it!)

There has been a multiple-order-of-magnitude increase in computing capability since then, but my feeling is still the same. Human pilots on board will always be able to make SOME attempt to deal with a completely unforeseen and unique event that arises from a coincidence of imperfections in the total aviation system (vehicle, environment, and people) - even if unable to do so 100% successfully.

So: is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?

The entire industry works on a concept of "acceptable" and "target" levels of safety, defined by the probability of occurrence and the consequences of events that society is willing to accept. The regulatory authorities lay down numbers for those probability and consequence elements at various levels.

It seems to me that it is not possible to design an automated system to control physical equipment like an aircraft without making assumptions about that aircraft and its components - one of which must be that component failures ALWAYS occur no more often than the required probability.
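To illustrate the arithmetic with a toy Python sketch (all numbers invented, nothing like real certification figures): the fault-tree sums work beautifully right up until an unforeseen common cause couples two "independent" failures.

Code:

# Toy illustration (invented numbers, not certification values): how a
# safety case multiplies independent failure probabilities, and how one
# unmodelled common cause invalidates the arithmetic.

# assumed per-flight-hour failure probabilities of two "independent" channels
p_channel_a = 1e-5
p_channel_b = 1e-5

# classic fault-tree arithmetic: both must fail for a catastrophe
p_both_independent = p_channel_a * p_channel_b   # 1e-10, inside a 1e-9 budget

# now suppose a single upstream human error (say, the wrong part fitted
# to both channels) couples them: given that error, both fail together
p_common_cause = 1e-7      # invented rate for the upstream error

p_both_actual = p_both_independent + p_common_cause
print(f"assumed: {p_both_independent:.0e}/hr   actual: {p_both_actual:.0e}/hr")
# the design "meets" the target on paper, yet misses it by two orders
# of magnitude once the unforeseen coupling exists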

In reality, human errors occur at every stage of the process of getting a paying customer to their destination. In the vast majority of cases these errors are caught by the myriad checks in the system, but some are not. When two or more failures of those traps coincide, the result may be a problem that until now has required the pilot(s) to act creatively, because the situation was never considered as a possibility. That lack of foresight might itself be classed as a human error in the specification and implementation of the checking process.

To a human designing an overall automated control system, either an event is possible and can occur no more often than the required frequency, or it is impossible and need not be considered. There isn't a halfway house where the design engineer can say "this isn't supposed to happen but I think it might, so I'll cater for it." Apart from anything else, what steps can he take to cater for it when there is no means of knowing what the other circumstances are?

Take an uncontained engine failure, which is supposed to be a very improbable event. To quote a Skybrary summary: "Each uncontained failure will result in a 'unique' combination of collateral damage ... [which] carries the greater potential risk and that will require creative pilot assessment to ensure a positive outcome is achieved." That was amply demonstrated on QF32, where the problem originated in human errors during manufacturing and was prevented from becoming a catastrophe by the pilots.

Other "unique" event examples which show that they are not so rare as to be negligible might include 2 within a few years in one airline alone - the BA B777 dual engine flameout on short final LHR and B744 leading edge flap retraction on takeoff at JNB. Both were survived largely due to instantaneous on-the-spot human "creativity" in recognising that the situation did not conform to any known precedent.

Issues of bugs, validation, verification, system analysis, etc. appear to me to be essentially about meeting probability requirements for "known" possibilities. Is there an additional requirement for "creativity" that will have to be met before a pilotless system can even start to be considered?

Unless such a creative artificial intelligence system is included, is the concept of automating the pilot out of the commercial aircraft cockpit doomed to fail? After all, ALL human error - and with it 100% of the liability for the consequences of any unique event - will clearly be transferred to the manufacturer and/or other suppliers.

Finally, when such an event does occur, will "society", in the form of the legal processes that will inevitably follow, agree that the numbers used since the 1950s to define an acceptable level of safety to the authorities are the right ones to meet expectations in the mid-21st century? In other words, will potential product liability issues stop the bandwagon?

Any thoughts on this, ladies and gentlemen?
slast is offline  
Old 26th Oct 2015, 16:38
  #2 (permalink)  
 
Join Date: Jan 2008
Location: Reading, UK
Posts: 15,810
Received 199 Likes on 92 Posts
If you were to rephrase the question as "Can automated systems deal with unforeseen events?" then the answer would be obvious.

So a useful approach might be to consider what events, if any, are unique but not unforeseen and vice versa.
DaveReidUK is offline  
Old 26th Oct 2015, 16:54
  #3 (permalink)  
 
Join Date: Jun 2002
Location: Wor Yerm
Age: 68
Posts: 4
Likes: 0
Received 0 Likes on 0 Posts
A brilliant starting point for a discussion. My opinion is that what makes humans good operators is that they are capable of fact-finding, learning and self-programming. This is not a feature of a lump of traditional software. For example, software won't suggest that, because the aircraft will only turn left, the crew line up via a series of left turns. It won't think about re-seating passengers to fix CofG problems, etc.

But I must disagree with the following:

...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make...
It is the human that fills the functional gap (yawning chasm) between the useless device as delivered by the manufacturer and the all-singing, all-dancing, highly functional device that we see in service. That device may be a ship, a railway locomotive, an aircraft or a power station. We exist only because we can't be replaced. It is what the human does right millions of times every day that makes flying safe. It's not the few times we foul up that makes it dangerous.

Put the programmer in the plane to make it safer. Errr... Isn't that a pilot though?

PM

Last edited by Piltdown Man; 27th Oct 2015 at 16:09.
Piltdown Man is offline  
Old 26th Oct 2015, 16:54
  #4 (permalink)  
 
Join Date: Nov 2010
Location: Tamworth, UK / Nairobi, Kenya
Posts: 614
Likes: 0
Received 0 Likes on 0 Posts
In theory, computer systems (not just automated systems) could be developed which are able to take into account every failure or mistake that has ever happened in the history of transportation, and to evaluate the probabilities of success and failure for every possible action and outcome.

In theory. In practice, we're still a ways off from doing that, although systems like IBM's Jeopardy contestant are headed in that direction.
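To make the idea concrete, here's a toy Python sketch (actions, probabilities and utilities all invented): score every candidate action by probability-weighted outcome and pick the best. The catch, of course, is that it is only as good as the outcomes someone enumerated.

Code:

# Toy decision evaluator (all numbers invented for illustration).
candidate_actions = {
    # action: list of (probability, outcome_utility) pairs
    "continue":       [(0.999, 1.0),  (0.001, -1000.0)],
    "divert_nearest": [(0.995, 0.8),  (0.005, -200.0)],
    "immediate_land": [(0.980, 0.5),  (0.020, -50.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(candidate_actions, key=lambda a: expected_utility(candidate_actions[a]))
print(best)   # "continue" wins here, but only because of the numbers I made up;
              # a truly unique event is precisely one with no row in this table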

But more to the point: when it turns out that there are still accidents, now "blamed" on the computer systems, will the developers be blamed? And will we then want to automate the software developers? And when those systems are blamed, do we then develop automated systems for developing automated systems?

The fact is that in the future, there will be systems which are more capable of evaluating all the risks and outcomes from all the possible actions, faster and more effectively than the human mind.

The question then will be, would you rather trust a piece of equipment or a human being who actually comprehends the concept of failure due to mistakes?
darkroomsource is offline  
Old 26th Oct 2015, 17:49
  #5 (permalink)  
 
Join Date: Jul 2005
Location: SoCal
Posts: 1,929
Likes: 0
Received 0 Likes on 0 Posts
It's a good question and a fascinating subject.

One big problem in discussing it (and in arriving at any conclusion) is that the information we have WRT the actions of aircrew is heavily slanted towards the negative. Why? For the simple reason that we hear about accidents and incidents which were induced by pilot action, but we almost never hear about mishaps that were prevented by pilot action, unless they were dramatic enough to make the news.

There is an interesting analogy in the development of self-driving cars. Google are finding, in the course of their tests in California, that their cars conform 100% to the highway code - this has obviously been programmed into them. However, the real world doesn't always conform. The big challenge is to install a sort of fuzzy logic that allows the car to 'think', which in extreme cases also involves ethical dilemmas. I suggest you read this excellent article on the subject.
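To show roughly what 'fuzzy logic' means here, a minimal Python sketch (membership functions and thresholds entirely invented): instead of a hard rule like "stop if the obstacle is closer than 10 m", degrees of truth get blended into a braking level.

Code:

# Minimal fuzzy-flavoured controller (all functions and numbers invented).
def close(d):    # degree to which distance d (metres) counts as "close"
    return max(0.0, min(1.0, (20.0 - d) / 15.0))

def fast(v):     # degree to which closing speed v (m/s) counts as "fast"
    return max(0.0, min(1.0, v / 15.0))

def braking(d, v):
    # rule: IF close AND fast THEN brake hard, otherwise brake gently,
    # with degrees blended rather than a binary yes/no
    hard = min(close(d), fast(v))
    gentle = max(close(d), fast(v)) - hard
    return min(1.0, hard + 0.4 * gentle)

print(braking(d=8.0, v=12.0))    # ~0.8: brake hard
print(braking(d=18.0, v=3.0))    # ~0.16: ease off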

Personally, I'd much rather live with the errors my fellow human beings (and I!) make than hand over my life to some algorithm.
172driver is offline  
Old 26th Oct 2015, 18:03
  #6 (permalink)  

"Mildly" Eccentric Stardriver
 
Join Date: Jan 2000
Location: England
Age: 77
Posts: 4,136
Received 221 Likes on 64 Posts
which are able to take into account every failure or mistake that has ever happened in the history of transportation
This is fine, but there will always be "black swan" events, and that is where it will not be possible, at least in the foreseeable future, to automate the human out of the equation.
Herod is offline  
Old 26th Oct 2015, 18:04
  #7 (permalink)  
 
Join Date: Jun 2000
Location: Canada
Posts: 819
Likes: 0
Received 0 Likes on 0 Posts
"priority now should be the elimination of human pilots from the system via automation"

For the salaries on offer these days, great idea. Can't happen soon enough.

On a more serious note, is artificial intelligence refined enough to accommodate that level of automation, and how soon could it be incorporated into today's technology?
As an aside, I don't think I'd be comfortable getting onto anything, especially something leaving the ground, that doesn't have a human behind the wheel (other than the train at Disney World). The thought is still unnerving to me, and I can't imagine the average afraid-to-fly-in-the-first-place passenger would be comfortable either.

Willie
Willie Everlearn is offline  
Old 26th Oct 2015, 18:45
  #8 (permalink)  
 
Join Date: Mar 2015
Location: oregon usa
Posts: 11
Likes: 0
Received 0 Likes on 0 Posts
The legal system is not ready for driverless cars or pilotless aircraft. Regardless of the cause of any accident, there will always be the need for convenient blame.
bullfox is offline  
Old 26th Oct 2015, 19:15
  #9 (permalink)  
 
Join Date: Jan 2008
Location: Scandinavia
Posts: 98
Likes: 0
Received 0 Likes on 0 Posts
Interesting question but maybe too simply put.

Firstly, if an event is "unique" then whether it can be foreseen becomes a binary question. Your question then becomes "Can automated systems deal with all foreseeable unique events?", and the discussion moves to what counts as foreseeable and, of those, which are worth guarding against.

In most cases automated systems are constructed around generalisations of specific cases: e.g. avoiding crashing into Everest becomes GPWS. Similarly, preventing a pilot from exceeding the load limits of the aircraft becomes the flight control laws on an Airbus, etc.
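A caricature of that generalisation in Python (thresholds invented, nothing like the real GPWS envelopes): the specific case "don't hit Everest" becomes a general rule over radio altitude and terrain closure rate.

Code:

# Crude caricature of a GPWS terrain-closure mode (invented thresholds).
def gpws_alert(radio_alt_ft, closure_rate_fpm):
    if radio_alt_ft < 2500 and closure_rate_fpm > 3000:
        return "PULL UP"
    if radio_alt_ft < 2500 and closure_rate_fpm > 2000:
        return "TERRAIN TERRAIN"
    return None

print(gpws_alert(1200, 3500))   # PULL UP
print(gpws_alert(2000, 2200))   # TERRAIN TERRAIN

The point being that the rule guards against a whole class of foreseen events, not one mountain - but still only the class someone thought to write down.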

As another poster has pointed out, "...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make..." is false from many perspectives. While it may be "technically" correct that the pilot crashed the plane, they were probably trying to recover from a situation that circumstances had conspired to create. Only a few accidents are attributable to the pilots alone (AF447, Germanwings), and even then the chain of events and context is extremely complex - hence the need for accident investigation.

It might help to start with looking at the "Swiss Cheese Model" ( https://en.wikipedia.org/wiki/Swiss_cheese_model ) and read up on the work by James Reason on the whole concept of safety.

If you want a particularly readable book, have a look at Atul Gawande's Checklist Manifesto which'll give you an insight into how aviation's checklists are used in a completely different environment - one that has a very different idea of what automation is.

Huge area to discuss and lots of research, but take a look at Reason's books and papers,

fc101
fc101 is offline  
Old 26th Oct 2015, 19:57
  #10 (permalink)  
 
Join Date: Jul 2000
Location: London
Posts: 1,256
Likes: 0
Received 0 Likes on 0 Posts
Think cyber attacks. That alone is bye-bye to no pilots on the flight deck.

There should be a guarded yellow switch on every flight deck. When required, it can be switched on and it turns the thing back into an aeroplane.
4Greens is offline  
Old 26th Oct 2015, 20:30
  #11 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 364
Likes: 0
Received 1 Like on 1 Post
responses to a few points

Good to get some serious answers so fast...!

DaveReidUK, I pondered long and hard over whether to make it "unforeseen" or "unique" (or both). Can you continue that thought, with examples of what events, if any, are unique but not unforeseen, and vice versa?

PM (and several others)! Just to be clear, I DON'T consider that pilots ARE the predominant cause - that's the pro-automation lobby viewpoint. But it gets support from graphs like the one in an MIT study on "Safety Management Challenges for Aviation Cyber Physical Systems", picked at random from many similar ones.
Darkroom, re your "in theory" para: the failures I see as problematic are not ones that HAVE ever happened, but ones that have not YET happened and almost certainly never will. These are for practical purposes infinite in number - certainly many orders of magnitude more numerous than the possible moves in a game of chess (10^120?).

A human brain within a human body can be pretty good at chess, but is now relatively easily beaten by specialist programmes. However, the same brain/body combination can also deal with umpteen other issues (e.g. raising children, creating music) at which the same programme and hardware has zero capability. To what extent would a system "trained" on the QF32 scenario and every other historic event be able to deal with a second QF32 in which one hot chunk went in a 1-degree different direction, with significantly different consequential failures? A human would ATTEMPT to cope just the same.

172driver, I agree entirely with your comment about the information bias. I have devoted a couple of pages specifically to this on my own website, partly to make the point that where pilots ARE responsible they need collectively to do something about it, and with diagrams to illustrate that public perception does not align with the underlying reality.
Thanks for the link, very useful. The self-driving car issue is of course the canary in the coal mine: if the liability issue can't be resolved for cars, then the question goes away for aircraft too. See Volvo's recent statement - Volvo will accept liability for self-driving car crashes.


FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.



Thanks for the input, keep it coming...
slast is offline  
Old 26th Oct 2015, 21:23
  #12 (permalink)  
 
Join Date: Feb 2005
Location: flyover country USA
Age: 82
Posts: 4,579
Likes: 0
Received 0 Likes on 0 Posts
I have to ask about the inverse of pilot error, exemplified by Sully's decision to put his bird in the drink after a low-altitude loss of thrust.
barit1 is offline  
Old 26th Oct 2015, 21:41
  #13 (permalink)  
 
Join Date: Oct 2011
Location: United Kingdom
Posts: 310
Likes: 0
Received 0 Likes on 0 Posts
The first automated aircraft would probably be data-linked to the ground, with "pilots" monitoring several at once, able to step in and assume remote control if necessary. This would remove the "can it deal with any hypothetical situation?" problem: all the automation would need to do is flag that "something" is wrong, and the ground could take over. Similarly, flight attendants could communicate medical emergencies etc.

The problem this creates is the loss-of-datalink scenario. Programming the software to land at the nearest airport in that situation could be done, but it would require every diversion airport to be ILS- and autoland-equipped. The costs would soon mount up...
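To see how thin the logic actually is, a toy Python lost-link handler (timeout, airports and rules are my own assumptions): the code is trivial; the infrastructure the last branch demands is not.

Code:

# Toy lost-link decision logic (all values and rules invented).
LINK_TIMEOUT_S = 30

def lost_link_action(seconds_since_datalink, autoland_airports_in_range):
    if seconds_since_datalink < LINK_TIMEOUT_S:
        return "CONTINUE_UNDER_GROUND_CONTROL"
    if autoland_airports_in_range:
        # nearest field equipped for fully automatic landing
        return "DIVERT:" + autoland_airports_in_range[0]
    return "NO_SAFE_OPTION"   # the branch that drives the cost argument

print(lost_link_action(5, ["EGLL"]))
print(lost_link_action(60, []))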

Finally, hardware/software is expensive. The level of redundancy and function required to remove pilots would take years and many millions/billions of dollars. Money which the manufacturers have to gamble on there being enough people willing to fly on these new drones. Only a fool would say fully automated planes will never happen (think president of IBM 1940s - "there will only be a market for maybe 5 computers") but it's a long way off.
ManUtd1999 is offline  
Old 26th Oct 2015, 22:08
  #14 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary.
The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane -- not even via autopilot.

The pilot's job will transition to a pure supervisory & flight management role.

The entire flight will be completely automated from gate to gate. Compliance with ATC instructions and traffic / flow management will also be completely automated.

The "cockpit" will be redesigned -- supervisory controls will replace flight controls. Here I'm envisioning supervisory controls as higher-order controls for more suited for decision making rather than for piloting. More touchscreens, less joysticks.

In an in-flight emergency, the pilot will be entrusted to make safety-of-flight decisions, via the supervisory controls.
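Purely to illustrate what I mean by supervisory controls, a Python sketch (the interface and names are invented): the commander issues an intent, and the automation owns the "how".

Code:

# Sketch of a supervisory interface (structure invented, not a real system).
from enum import Enum

class Intent(Enum):
    CONTINUE = "continue as planned"
    HOLD = "enter hold, buy time"
    DIVERT = "divert to alternate"
    LAND_ASAP = "land at nearest suitable airport"

class Automation:
    def execute(self, intent):
        # would translate the intent into a trajectory, configuration
        # changes and ATC coordination; here it just acknowledges
        print("automation executing:", intent.value)

def command(intent, automation):
    # the human never touches a control surface
    automation.execute(intent)

command(Intent.LAND_ASAP, Automation())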
peekay4 is offline  
Old 26th Oct 2015, 22:13
  #15 (permalink)  
 
Join Date: Oct 2012
Location: SF Bay area, CA USA
Posts: 254
Likes: 0
Received 0 Likes on 0 Posts
Data link loss.

Quote: "The problem this creates is the loss of data-link scenario. Programming the software to land at the nearest airport in this situation could be done, but this would require every diversion airport to be ILS and auto-land equipped. The costs would soon mount up....."


So a data link loss becomes a 'clear the decks' priority landing?
jack11111 is offline  
Old 26th Oct 2015, 22:14
  #16 (permalink)  
 
Join Date: Oct 2011
Location: United Kingdom
Posts: 310
Likes: 0
Received 0 Likes on 0 Posts
The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane.
That would be a potentially better option.
ManUtd1999 is offline  
Old 26th Oct 2015, 22:57
  #17 (permalink)  
Nemo Me Impune Lacessit
 
Join Date: Jun 2004
Location: Derbyshire, England.
Posts: 4,091
Received 0 Likes on 0 Posts
Only a fool would say fully automated planes will never happen
Maybe I'm that fool then, if you mean a fully automated, sans pilot, commercial passenger aircraft.

Insurance premiums will go through the roof and the product liability cover required will outstrip fuel costs. I've mentioned it before on other threads - security and suicidal maniacs - but this thread isn't about that, so I'll stop here.

(The R&D costs of getting a pilotless pax aircraft that satisfies the regulators and is considered an insurable risk by underwriters will run to billions, assuming anyone can be found to stump up the funds. Pilots are cheaper!)
parabellum is offline  
Old 26th Oct 2015, 23:06
  #18 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
Likes: 0
Received 0 Likes on 0 Posts
Aside from the difficulty of providing a (hopefully) totally reliable and hack-proof worldwide datalink, what would be the point of that? Surely if you're going to pay someone to sit on board anyway, why not give them a stick and some buttons to press to keep them alert for the times, inevitably, when something goes wrong? And since you've done that, why not give them a uniform with some stripes on?
ShotOne is offline  
Old 26th Oct 2015, 23:13
  #19 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
>> The pilot's job will transition to a pure supervisory & flight management role.

Isn't that what the 'Captain' already does?
Not purely, no -- and certainly not when the Captain is PF. (And even as PM, a Captain today is still concerned with all aspects of flying the plane.)

Imagine instead: a Flight Commander who's not at the controls, working with two pilots at the controls. This Commander is in a pure management / supervisory role, completely "hands off" from the actual mechanics of flight. Then, automate both pilot positions.

Today we have the Pilot in Command (PIC) -- which is two roles in one: a pilot and a commander. So as the first step, I think it will be the pilot role which will be fully automated, leaving a Commander on board.
peekay4 is offline  
Old 27th Oct 2015, 02:22
  #20 (permalink)  
 
Join Date: Mar 2014
Location: Arizona
Age: 76
Posts: 62
Likes: 0
Received 0 Likes on 0 Posts
Yes, automation can handle unforeseen events

Automation of very complex processes, such as autonomous cars (or aircraft), is not just a matter of thinking up every scenario and programming the autopilot to handle it.

Modern software is absorbing more AI of the sort that learns. Although at its heart it may be a computer program, it is very different from traditional programming. And sometimes it is not a computer program at all - it may be a collection of electrically simulated neurons. In fact, what it has learned may not even be accessible or understandable to humans. Furthermore, such software still has access to high-speed, accurate models of physics and to many sensors, so it can tie the woo-woo deep AI to strong modelling and control.

This kind of technology has been in our lives for some time. Credit card companies have been using neural nets for a long time to evaluate credit risk. Google search results come partly from self-learning AI.
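As a toy illustration of why that worries people (a single perceptron in Python, data invented): after training, the "knowledge" is just opaque numeric weights, not rules an engineer can read and certify.

Code:

# Minimal learned classifier: a perceptron on made-up "transaction" data.
# features: (normalised amount, foreign flag) -> label: risky (1) or not (0)
data = [((0.10, 0), 0), ((0.20, 0), 0), ((0.90, 1), 1), ((0.80, 1), 1),
        ((0.70, 0), 0), ((0.95, 1), 1), ((0.15, 1), 0), ((0.85, 0), 1)]

w = [0.0, 0.0]
b = 0.0
for _ in range(100):                      # crude training loop
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                # perceptron update rule
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print("learned weights:", w, "bias:", b)
# what exactly did it "learn"? three numbers. certifying that is the problem.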

That said, it is not clear when this will be appropriate as a replacement for a pilot in commercial aviation. This sort of AI is rapidly advancing, but is still pretty weak. Furthermore, safety qualifying something that is not well understood is obviously a serious challenge. We try to qualify pilots, since we know a lot about human beings, but sometimes we goof. How do we qualify a big, complex artificial neural net?
Mesoman is offline  
