Can automated systems deal with unique events?

Old 28th Oct 2015, 18:23
  #81 (permalink)  
 
Join Date: Dec 2006
Location: Florida and wherever my laptop is
Posts: 1,350
Likes: 0
Received 0 Likes on 0 Posts
An interesting discussion.
There are three main approaches to automation in the presence of humans.

  1. Human in the loop
  2. Human On the loop
  3. Human out of the loop
Human in the loop
At this level of automation the human has decision support, and there may be assistance such as flight-control augmentation (auto-stabilization etc.), but the human is 'flying' the aircraft.


Human on the loop
At this level of automation the human does not fly the aircraft directly but tells the automation what is required, and the automation then implements that requirement.



Human out of the loop
Fully autonomous operation, in which the human may be able to intervene, but without intervention the aircraft will fly as the automation decides.


Note that these states may all occur within a single flight: hands-on control initially (human in the loop), then handing the aircraft to the FMS (human on the loop), then a CAT IIIb autoland (human out of the loop).
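As an aside, here is a minimal sketch (Python, with invented names; purely illustrative) of how those three involvement levels might be modelled changing across the phases of one flight:

```python
from enum import Enum

class HumanInvolvement(Enum):
    IN_THE_LOOP = "human flies the aircraft directly"
    ON_THE_LOOP = "human states intent; automation implements it"
    OUT_OF_THE_LOOP = "automation flies; human can only intervene"

# Hypothetical single-flight profile, per the taxonomy above:
# the involvement level changes from phase to phase.
FLIGHT_PROFILE = [
    ("takeoff and initial climb", HumanInvolvement.IN_THE_LOOP),
    ("climb/cruise/descent via FMS", HumanInvolvement.ON_THE_LOOP),
    ("CAT IIIb autoland", HumanInvolvement.OUT_OF_THE_LOOP),
]

for phase, level in FLIGHT_PROFILE:
    print(f"{phase}: {level.name} ({level.value})")
```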



The problems occur when a pilot who has rarely been 'in the loop', and has spent the last hundred or more flights out of the loop with occasional on-the-loop inputs, is suddenly required to jump back into the loop and take control.



Failure Modes and Exception Handling
It is unfair to blame pilots for more errors, as they have usually had to pick up the pieces when the automation 'fails'. But why does the automation fail? It is equally unfair on the automation to say it has 'failed': it actually behaved as designed. It is very costly to program for the 'otherwise' cases, the ones that fall outside the design conditions, where, after checking for all the anticipated causes, the programmer/system designer is left with something unidentified. These faults are rarer, more complex, and more expensive to program for. It is easier, knowing a pilot is there, to have the automation hand the bag of bolts to the pilot with a 'get out of that' ECAM message. Or to put it another way, the software design relies on the pilot being there so that it does not have to cope with complex or rare failure modes. Passing the bag of bolts to the pilot is a design feature that saves time and money for the systems builders.
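A minimal sketch of that 'otherwise' pattern (Python; the fault codes and responses are invented for illustration, not any real avionics interface): every anticipated fault gets a programmed response, and anything else falls through to the crew.

```python
# Hypothetical fault identifiers and responses -- illustration only.
KNOWN_FAULT_RESPONSES = {
    "ENG_1_FIRE": "run the engine fire drill automatically",
    "GEN_2_FAIL": "shed non-essential electrical loads",
    "CABIN_PRESS_LOW": "command an emergency descent profile",
}

def handle_fault(fault_code: str) -> str:
    """Anticipated faults get a designed response; the 'otherwise'
    branch is cheapest to solve by handing control to the pilot."""
    response = KNOWN_FAULT_RESPONSES.get(fault_code)
    if response is not None:
        return response
    # The design relies on a pilot being there for everything else.
    return f"ECAM: {fault_code} -- hand the bag of bolts to the crew"

print(handle_fault("GEN_2_FAIL"))        # programmed response
print(handle_fault("UNCONTAINED_FAN"))   # falls through to the pilot
```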


Certification Costs and Validation Testing
It is perfectly possible to program learning software, software that will share its learning, fully capable of dealing with aircraft damage way beyond the worst nightmares of a Qantas A380 uncontained engine failure. There are military aircraft flying with adaptive software that can correct for the loss of control surfaces, for example, and the software works so well that it is not easy for the pilot to realize that anything is wrong. So that's fine, let's put it on the next Boeing 797 or Airbus A390...?


Well, no. Certification costs for the complex testing of variable-response and learning software are extreme, that is if anyone from the certification bodies could even be found to agree the tests. Since, by definition, learning software will give a different response each time, it is anathema to certification testing and raises all sorts of flags in regression testing. Civil systems just cannot cope with the potential risks involved.
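A minimal sketch of why learning software and regression testing collide (Python; toy controller and numbers, not any real control law): the certified 'golden' answer stays valid for a deterministic system, but a system that keeps adapting between runs eventually stops matching it.

```python
def fixed_controller(error: float) -> float:
    # Deterministic gain: the same input always yields the same
    # output, so a recorded 'golden' value stays valid forever.
    return 0.5 * error

class AdaptiveController:
    """Toy learner whose gain drifts with experience."""
    def __init__(self) -> None:
        self.gain = 0.5

    def respond(self, error: float) -> float:
        output = self.gain * error
        self.gain += 0.01 * error  # learning step: behaviour changes
        return output

GOLDEN = fixed_controller(2.0)  # the certified expected response

adaptive = AdaptiveController()
for run in range(3):
    out = adaptive.respond(2.0)
    verdict = "PASS" if abs(out - GOLDEN) < 1e-9 else "FAIL"
    print(f"run {run}: output={out:.3f} regression={verdict}")
# run 0 passes; runs 1 and 2 fail because the system has learned.
```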



Sensors and Sullenberger
There are already systems that spend all their time doing what every good pilot should do, but hugely faster: identifying where the automated (usually unmanned) aircraft will put down, given a list of potential effects from problems identified by an automated failure-mode analysis. Again, the cost of certifying these is prohibitive, especially in time, even when compared with the cost of employing first officers for a lifetime. This is because the mathematicians managed to convince TPTB that every statement in 'safety critical software' needed to be mathematically proved. That is infeasible in a real-time network of sub-systems all capable of pre-empting each other.
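For illustration, a minimal sketch of that continuous 'where would I put down right now' calculation (Python; the candidate sites, glide ratio, and scoring are invented, and real systems weigh far more factors):

```python
# Hypothetical candidate sites: (name, distance in nm, suitability 0..1)
SITES = [
    ("Runway 22, home field", 35.0, 1.0),
    ("Regional strip", 12.0, 0.8),
    ("Flat farmland", 4.0, 0.3),
]

def reachable(distance_nm: float, altitude_ft: float,
              glide_ratio: float = 17.0) -> bool:
    """Crude still-air check: can the aircraft glide that far?"""
    glide_range_nm = (altitude_ft * glide_ratio) / 6076.0  # feet to nm
    return distance_nm <= glide_range_nm

def best_site(altitude_ft: float):
    # Among the sites actually reachable, prefer the most suitable.
    in_range = [s for s in SITES if reachable(s[1], altitude_ft)]
    return max(in_range, key=lambda s: s[2], default=None)

print(best_site(28000.0))  # high up: the distant home field wins
print(best_site(3000.0))   # low down: only the farmland remains
```

An automated version of this runs continuously, so the 'Sullenberger decision' is pre-computed before the failure even occurs.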





So do I believe there will be fully automated passenger-carrying aircraft? Yes, absolutely; we have some flying already that are 'optionally piloted' and can recover autonomously if the command link fails. Do I believe there will be fully automated passenger-carrying civil aircraft? Only over the dead bodies of an entire generation of 'safety engineers' who have set standardized safety tests that such autonomous aircraft can never meet, because they work differently from current crewed aircraft. I am even uncertain whether a single-pilot crew will ever pass safety certification requirements.



So the military will probably have these systems for decades before a new generation of safety engineers, perhaps with a more sophisticated approach, allows them into civil aviation. I won't hold my breath.
Ian W is offline  
Old 28th Oct 2015, 18:32
  #82 (permalink)  
 
Join Date: Aug 2006
Location: Timbukthree
Posts: 13
Likes: 0
Received 0 Likes on 0 Posts
Speaking of humans in the loop...

- Velcro fastened shoes are safer than laced shoes.

- Velcro can be re-fastened quicker than laces.

- Velcro failure (breakage) rate is one-quarter that of laces.

- Velcro fastened shoes have saved lives by reducing the risk of falls among senior citizens; falls lead to fractured hips, which lead to fatal hospital-acquired infections.

- I will never wear Velcro fastened shoes.

Last edited by evansb; 29th Oct 2015 at 13:51.
evansb is offline  
Old 28th Oct 2015, 18:57
  #83 (permalink)  
 
Join Date: Sep 2008
Location: 41S174E
Age: 57
Posts: 3,095
Received 479 Likes on 129 Posts
That's incredible! I'm off to buy some Velcro shoes right now! They will go well with my cords and I might live to 104!
framer is offline  
Old 28th Oct 2015, 19:21
  #84 (permalink)  
 
Join Date: Sep 2010
Location: Oz
Posts: 18
Likes: 0
Received 0 Likes on 0 Posts
Beyond Reason

I'm not sure if this work has been generally seen here, but it is well worth a read; https://mitpress.mit.edu/sites/defau...afer_World.pdf
Resar40 is offline  
Old 28th Oct 2015, 21:22
  #85 (permalink)  
 
Join Date: Mar 2014
Location: Arizona
Age: 76
Posts: 62
Likes: 0
Received 0 Likes on 0 Posts
Automation vs AI

"An automated control system may well have opted for heading for the nearest airport if the figures indicated it was possible to glide there. The automated system might then be caught out by windshear nearing final approach and landing, because that was a factor it was unable to detect in advance. So the aircraft could have landed short and hit buildings killing everone aboard and causing casualties on the ground. A human pilot will at least consider the possibility and additional risks of windshear and act accordingly.

Automation can only act in accordance with the information it receives from its sensors. It cannot autonomously consider the possibility of events for which there are no data, so it is pointless and impossible to plan ahead for every eventuality."

This is not correct, although it may be right for a non-AI system. The quote above addresses only the old-fashioned style, where every detail is programmed by humans, and only then where the humans failed to factor in windshear and obstructions.

AI stands for "artificial intelligence" for a reason: an AI system does not simply reproduce what a programmer intended. A "strong AI" system is best thought of as an intelligent creature, not just a collection of rules. A "strong AI" system can be trained, and it can draw inferences and make deductions. In other words, it can think, certainly not today in the same way as a human, but far differently from a simple programmed control system. It may use simulated neurons ("neural nets") or other technologies, or more likely, combinations.

Today, strong AI is not at the point where it can replace a pilot's judgement. It may never be, but there is a good chance that it will be. Strong AI, coupled with the power of ordinary automation (sensors, actuators, physics calculations, decision trees, etc.), may some day very well exceed the capacity of the very best pilot. I think that day will come. I think the strong AI problem is harder for automated cars than for aircraft, and the push for autonomous automobiles is very strong. Cars are far less complex, but they routinely encounter a very wide variety of situations where physics is only the start of the problem, for example, dealing with a ball that bounces into a residential street.

In such a world, one might take a strong AI system and literally train it the way you would train a human pilot. But once trained, it can be replicated: the training of one "pilot" produces thousands of immortal pilots. More likely, a whole lot of the flight smarts would be pre-coded as rules, with the strong AI there for overall management and unexpected scenarios.
Mesoman is offline  
Old 28th Oct 2015, 23:57
  #86 (permalink)  
 
Join Date: Dec 2013
Location: Norfolk
Age: 67
Posts: 1
Likes: 0
Received 0 Likes on 0 Posts
I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?

"Sorry folks, the computer says no."
"We haven't got a clue why it is refusing to fly, come back tomorrow."

The problem with neural networks is that their reasoning is not readily transparent. They may come up with the right answers, but the process can be so complex that it is impossible for humans to check how they arrived at the answer. This is already an issue with computer solutions to complex mathematics problems. They give a solution that is so complex and detailed that it would take several human lifetimes to verify the answer (without using more computers to cross check the workings of the first).
G0ULI is offline  
Old 29th Oct 2015, 01:52
  #87 (permalink)  
 
Join Date: Mar 2014
Location: Arizona
Age: 76
Posts: 62
Likes: 0
Received 0 Likes on 0 Posts
Strong AI systems

"I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?"

Retrain it? Seriously, if it made that assessment, it would be prudent to find out why.
"The problem with neural networks is that their reasoning is not readily transparent."
That is certainly an issue. How do you certify it as safe?

I think, however, that this will eventually be solved. You can design systems to give information on their reasoning, and I suspect that will be required. In other words, the system would need to be able to describe and justify its reasoning, just like a trainee pilot.
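As a toy illustration of that requirement (Python; the limits and wording are invented, and this is a transparent rule set rather than a neural network): a decision system can be built to return a human-readable justification alongside its verdict, so a refusal to fly is at least inspectable rather than "computer says no".

```python
def assess_departure(crosswind_kt: float, rvr_m: int, runway_state: str):
    """Toy go/no-go assessment that records *why* it decided,
    so the decision can be audited like a trainee pilot's answer."""
    reasons = []
    go = True
    if crosswind_kt > 35:
        go = False
        reasons.append(f"crosswind {crosswind_kt} kt exceeds the 35 kt limit")
    if rvr_m < 125:
        go = False
        reasons.append(f"RVR {rvr_m} m is below the 125 m minimum")
    if runway_state == "ice":
        go = False
        reasons.append("runway is contaminated with ice")
    if go:
        reasons.append("all checked parameters are within limits")
    return go, reasons

decision, trace = assess_departure(crosswind_kt=38, rvr_m=300,
                                   runway_state="dry")
print("GO" if decision else "NO GO")
for line in trace:
    print(" -", line)  # the system justifies its refusal to fly
```

The hard research problem is getting this kind of trace out of an opaque learned model rather than out of hand-written rules.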

I don't think that opaque strong AI (deep learning) systems will be accepted for executive reasoning without a huge amount of evidence that they are correct. They will be used very soon as part of sensor systems (at least for automobiles), as they offer perhaps the best hope for problems like understanding images.
Mesoman is offline  
Old 29th Oct 2015, 06:53
  #88 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Ian W

Very good post.

I agree that the challenges are not technical.

The problems are in certification, legal issues, and public perception.

The thing that will have a very large effect on all these areas is the nascent generation of autonomous cars now starting to appear.

The legal issues are the same even if different in scale. Who is responsible if it crashes?

The certification, the same. I think the thing that will happen is that the military will fly transports around autonomously for a while, and as evidence mounts for their safety advantages, the certification industry will be forced to evolve in the face of evidence.

Whether you understand it or not, safer is safer.

I think once people get used to their cars driving them around, the plane is not such a stretch...
Tourist is offline  
Old 29th Oct 2015, 10:22
  #89 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 369
Likes: 0
Received 1 Like on 1 Post
Strict application of rules

I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?
That may be a more serious issue than it appears! It was certainly the case that "work to rule" (strict interpretation of all procedures, etc.) was, and is, a pretty effective weapon in industrial disputes. In the normal world people use a surprising amount of discretion and interpretation to keep things running. Would strict application of all the rules cause things to grind to a halt?

Unlike a factory or other industrial enterprise, where management may be able to control or at least heavily influence nearly all the variables that affect production, airline operations seem very much subject to disruption by factors outside management's control, requiring the exercise of discretion by "operatives" such as crew members.

It would be interesting to speculate how that can be built into an automated system, and who will take responsibility for the correctness of its application.
slast is offline  
Old 29th Oct 2015, 11:24
  #90 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Instead of asking ‘can’ automation manage unique events, we might consider ‘if’ we should use it, which might help clarify the critical issues.
Many views seek to eliminate 'error', yet the vast majority of operations tolerate error; humans adapt and successfully manage the variability of normal operations. There are many risks in seeking to eliminate 'error'. We have scant understanding of the mechanism of 'error'; in some views the processes of success and failure are the same, the difference being decided only by outcome.
How might fallible humans understand the very fallibility which makes them human? And how can we be assured that any solution will be infallible, or in particular that managing 'error' outcomes will not undermine the required successful ones?

One obstacle in our thinking is in applying the absolute (one or zero) view of automation to humans. Humans are not absolute; human advantages lie in adaptability (reconfiguration), which might be better equated to probabilistic behaviour. No course of action is perfect, but it is normally good enough; this requires judgement.
Safety involves a balance, judging what is good enough in each and every situation, but often without adequate information or a measure of what is 'good'.

The human should not be seen as a hazard, but as a help, having unique inspiration and judgement which contributes to the successes in operation.
Instead of attempting to totally eliminate the undesired outcomes, we should promote the desirable aspects, the behaviours, thoughts and actions used today, and understand the situations in which they apply. If there are more good outcomes then there should be less opportunity for bad ones.

There are indications that the use of automation detracts from the learning process; although the overall level of safety has improved with automation, the human contribution has not, and in some circumstances their effectiveness might be degraded.

There is also an important distinction between automation and technological aids, where the latter can encourage thought, with improved awareness and decision making. EGPWS, ACAS, and windshear warning all use technology (not automation) to improve awareness and error detection. There are good arguments for automation in the event of inaction, but with society's influence on safety – who pays if inappropriate activation hurts someone (a litigious culture, an artefact of being safe)? – risk-averse management prefers to leave the responsibility with the sharp end.

It is important to consider the blunt end of operations; many of the contributors to unique events have 'blunt' origins. 'Unique accidents' normally involve many contributing factors, each necessary but none in isolation sufficient. Thus identifying and uncoupling the conjunction of factors could be of greater benefit to safety, and easier (cheaper) to implement, than focusing on replacing the human with costly or untimely automation.
Blunt-end activities might be more receptive to technology and automation: databases, pattern matching, and the essential element of reduced time; but these still require human involvement in choosing what and when to implement.

The objective of change should be to promote the human advantage rather than constrain or replace it, particularly if the latter results in management, regulators, and manufacturers becoming the new sharp end, with the focus on their fallibility – and then back onto the merry-go-round.

In order to progress we must look at the perceived problems and potential solutions in new ways; we have to change the manner of our thinking, not automate it.
alf5071h is offline  
Old 29th Oct 2015, 11:45
  #91 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by alf5071h
There is also an important distinction between automation and technological aids, where the latter can encourage thought, with improved awareness and decision making. EGPWS, ACAS, and windshear warning all use technology (not automation) to improve awareness and error detection.
This is very true; unfortunately, you are taking the wrong lesson from it.

EGPWS, TCAS and windshear warning are near-flawless systems which lend themselves perfectly to automation.

Unfortunately, we have decided to throw in a human who has no purpose except to add errors to the system.

For example:

TCAS sees a problem. It works out the RA and tells the pilot to fly it. The pilot tries to do what he is told. The pilot regularly gets it wrong (>50% of the time at a large British airline).
If the autopilot were simply slaved to the TCAS system, many hundreds of people would be alive today.
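A minimal sketch of the coupling being argued for (Python; the message format and numbers are invented, and a real RA-coupled autopilot mode is far more involved): the RA's required vertical speed is fed straight to the autopilot target instead of being flown by a startled human.

```python
# Hypothetical RA message: sense plus a required vertical speed.
RA = {"sense": "CLIMB", "required_vs_fpm": 1500}

def ra_to_autopilot_target(ra: dict) -> int:
    """Convert a resolution advisory into an autopilot vertical-speed
    target, removing the step where the pilot must fly the manoeuvre
    correctly within a few seconds of being startled."""
    if ra["sense"] == "CLIMB":
        return ra["required_vs_fpm"]
    if ra["sense"] == "DESCEND":
        return -ra["required_vs_fpm"]
    return 0  # e.g. a level-off advisory

print(f"AP vertical-speed target: {ra_to_autopilot_target(RA):+d} fpm")
```

For what it is worth, a coupled mode along these lines (AP/FD TCAS) has since been offered on some Airbus types.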

I would call that "promoting the human advantage".
Tourist is offline  
Old 29th Oct 2015, 16:22
  #92 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 369
Likes: 0
Received 1 Like on 1 Post
Hi Tourist
The pilot regularly gets it wrong (>50% of the time at a large British airline).
If the autopilot were simply slaved to the TCAS system, many hundreds of people would be alive today.
You are usually pretty meticulous about providing evidence for your positions, as well as asking for it from others; can you give chapter and verse for this statement? (PM or email me if you do not want it on a public forum.)
slast is offline  
Old 29th Oct 2015, 16:39
  #93 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 369
Likes: 0
Received 1 Like on 1 Post
alf,
we might consider ‘if’ we should use it
There seems to be a significant number of people who have already made up their minds to go down that route, for reasons that satisfy THEM. However, my own suspicion has long been that it will actually prove to be a dead end for many decades to come, for fundamentally non-technical reasons. My motive for posting the question in its original form was to see whether that view is supported by others who have better knowledge than I do.
Steve

Last edited by slast; 29th Oct 2015 at 18:49.
slast is offline  
Old 29th Oct 2015, 17:53
  #94 (permalink)  
 
Join Date: Jul 2005
Location: SoCal
Posts: 1,929
Likes: 0
Received 0 Likes on 0 Posts
Taking humans out of the loop

Removing humans from the information loop can have catastrophic consequences. I recommend reading this article about how human intervention prevented a nuclear war. A computer would have fired the missiles....

Short synopsis: a US missile station on Okinawa was sent the attack codes during the Cuban missile crisis. It was only the skepticism of the commander (who wondered why some of his targets were *not* in the USSR) that saved the day. He queried the veracity of the codes up the chain of command, and the error was thus detected.

Without this fine gentleman, none of us might be here to have this discussion....
172driver is offline  
Old 29th Oct 2015, 18:22
  #95 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
....and then the last 50 years happened.

You know we went to the moon since then?
Tourist is offline  
Old 29th Oct 2015, 18:25
  #96 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
Removing humans from the information loop can have catastrophic consequences. I recommend reading this article about how human intervention prevented a nuclear war. A computer would have fired the missiles....
But the reason there was an incident at all was BECAUSE of a human error. The commanding officer (the Major) allegedly issued the nuclear strike codes by mistake. He was later court-martialed, according to the article.

Also, a computer probably would have summarily rejected the mistaken launch order because it did not conform to requirements (i.e., the force was not at DEFCON 1).
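A toy sketch of that point (Python; the order format and checks are entirely invented): precondition validation is exactly the kind of cross-check a machine applies every time, however authoritative the order looks.

```python
def validate_launch_order(order: dict, current_defcon: int):
    """Reject any order whose stated preconditions are not met;
    a machine never skips this check under stress."""
    if current_defcon != 1:
        return False, f"rejected: force is at DEFCON {current_defcon}, not 1"
    if not order.get("authentication_valid", False):
        return False, "rejected: authentication codes do not verify"
    return True, "accepted"

order = {"authentication_valid": True}
print(validate_launch_order(order, current_defcon=2))
# -> (False, 'rejected: force is at DEFCON 2, not 1')
```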

On the other hand, a different human being that night might have launched the nukes as instructed, without checking all the prerequisites.
peekay4 is offline  
Old 30th Oct 2015, 03:20
  #97 (permalink)  
 
Join Date: Jan 2015
Location: Near St Lawrence River
Age: 53
Posts: 198
Likes: 0
Received 0 Likes on 0 Posts
Self-driving cars

Exciting video, but would you:
- buy a self-driving car with no wheel or pedals installed?
- comfortably read a book in the back seat while the car runs along a two-way country road at 60 mph?
- allow this car to drive your kids to school?
_Phoenix is offline  
Old 30th Oct 2015, 03:48
  #98 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
Tesla's brand-new "autopilot" self-driving mode has already saved multiple lives:

Tesla Autopilot Stops Uber Driver's Car Crash - Fortune


And we're still in the pioneering days of self-driving cars. I have no doubt self-driving cars will soon be much safer than conventional cars -- if they're not already.
peekay4 is offline  
Old 30th Oct 2015, 04:19
  #99 (permalink)  
 
Join Date: Jan 2015
Location: Near St Lawrence River
Age: 53
Posts: 198
Likes: 0
Received 0 Likes on 0 Posts
Some unique events

Tough to program or predict though
_Phoenix is offline  
Old 30th Oct 2015, 06:20
  #100 (permalink)  
 
Join Date: Jul 2013
Location: Alternative Universe
Posts: 19
Likes: 0
Received 0 Likes on 0 Posts
Everyone keeps giving Sully as an example of human versus automation, forgetting that the A320 Sully was piloting never left Normal Law.

He was assisted throughout the flight, and during the last seconds he did not have any control over the airplane because it was at the edge of the envelope.

Would he be able to pull off a perfect landing if the airplane were in Direct Law? Maybe, maybe not.

Last edited by Standard Toaster; 30th Oct 2015 at 08:17.
Standard Toaster is offline  

