Can automated systems deal with unique events?
Old 27th Oct 2015, 03:43
  #21 (permalink)  
 
Join Date: Dec 2013
Location: Norfolk
Age: 67
Posts: 1
Likes: 0
Received 0 Likes on 0 Posts
I fail to see how completely automating an aircraft would make flight inherently safer. A computer is capable of conducting a flight from A to B with extreme accuracy and can be programmed to avoid typical hazards en route. Big problems occur if the computer loses the ability to extract meaningful data from remote sensors, or data lines to control mechanisms are severed. As a last resort, a human being can look out the window and potentially still navigate and land safely.

What might work would be a ground based system programmed with every known accident and failure mechanism to date in all known aircraft. If such a system could be programmed and built, it might identify common design failures and potential faults that have not yet occurred and identify potential defects in aircraft yet to be built.

If a fault or series of conditions is identified that no human pilot could deal with but a computer could, then it would be easiest to set up an on-board computer to assist the pilot when the situation requires it. Something Airbus seems to have been struggling to get right for some years.
G0ULI is offline  
Old 27th Oct 2015, 04:34
  #22 (permalink)  
 
Join Date: Mar 2006
Location: expat
Posts: 129
Likes: 0
Received 0 Likes on 0 Posts
It is not just the aircraft fleets that would need highly developed AI. ATC would as well, with backups for backups to achieve adequate margins. Perhaps this would involve internet-like layers of redundant autonomous capability between individual aircraft in case of datalink loss. Until ATC becomes automated and digitised, the current analogue method of R/T between humans is the only way to manage traffic in high-intensity situations.
As far as product liability being an obstacle: why would an AI component be different to any other component? None is perfect, yet we still have airliners.
HPSOV L is offline  
Old 27th Oct 2015, 04:43
  #23 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
I fail to see how completely automating an aircraft would make flight inherently safer.
In a nutshell: it is to recognize that humans are better at some tasks, while computers are better at other tasks.

(And also conversely, that humans are bad at some tasks, while computers are bad at other tasks.)

Today there is no such recognition or separation of tasks. The (sophisticated) automation we have in the cockpit is designed to augment a human pilot. But the pilot is still expected to execute all the tasks, albeit with help -- or let's say "protection" -- from the computers.

But the fact that "protections" are needed points to a sub-optimal division of tasks.

Example: with the current model, since pilots are responsible for flying tasks (but are not very reliable), they are expected to actively "monitor" each other and the automation. Yet humans are really bad at monitoring. We get bored and lose concentration. Play games on our iPhones. Get distracted by conversation. Fall asleep. Think sexy thoughts. Become incapacitated.

An independent, specialized computer can do a much better job at monitoring.
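
As a toy sketch (the parameter names and limits below are invented for illustration, not any real avionics interface), an independent monitor is basically a tireless loop comparing state against limits:

# Toy sketch of an independent flight-parameter monitor.
# Parameter names and limits are invented for illustration only.

LIMITS = {
    "airspeed_kt": (130.0, 350.0),   # (min, max) acceptable values
    "bank_deg":    (-35.0, 35.0),
    "sink_fpm":    (-2000.0, 1500.0),
}

def check(sample):
    """Return a list of exceedance messages for one data sample."""
    alerts = []
    for name, (lo, hi) in LIMITS.items():
        value = sample.get(name)
        if value is None:
            alerts.append(f"{name}: no data (sensor/bus fault?)")
        elif not lo <= value <= hi:
            alerts.append(f"{name}={value} outside [{lo}, {hi}]")
    return alerts

# Example: one out-of-limits sample
print(check({"airspeed_kt": 118.0, "bank_deg": 12.0, "sink_fpm": -300.0}))

It never gets bored and it checks every single cycle; the hard engineering is in trusting the sensors that feed it, not in the monitoring logic itself.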

So perhaps, we should let computers fully do what they're good at: e.g., fly from point A to B, completely automated. Take the human factor out. Computers are not tempted to bypass checklists, bust minimums, or take unauthorized shortcuts. They also don't consume alcohol or drugs, fly fatigued, or develop suicidal or homicidal tendencies. (HAL excepted).

Human pilots can then concentrate on decision making, supervising (not monitoring!), and handling emergency or other non-routine situations. Not by flying, but by commanding.
peekay4 is offline  
Old 27th Oct 2015, 05:59
  #24 (permalink)  
 
Join Date: Jan 2006
Location: Ijatta
Posts: 435
Likes: 0
Received 0 Likes on 0 Posts
I can remember when automatic elevators (lifts) came on the scene; many folks would use the stairs instead.

Last edited by wanabee777; 27th Oct 2015 at 06:21.
wanabee777 is offline  
Old 27th Oct 2015, 08:26
  #25 (permalink)  
 
Join Date: Jul 2009
Location: Surrey, UK
Posts: 130
Likes: 0
Received 0 Likes on 0 Posts
How will a computer be able to be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it will take a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.

... or recognising a drone about to cause conflict, or a sudden volcanic eruption in the immediate flight path (how could that be detected and instantly recognised as such by a camera?)
HamishMcBush is offline  
Old 27th Oct 2015, 08:36
  #26 (permalink)  
 
Join Date: Dec 2001
Location: what U.S. calls ´old Europe´
Posts: 941
Likes: 0
Received 0 Likes on 0 Posts
If we look at it from a purely technical standpoint, every event (no matter how unique) has to follow the laws of physics (or the laws of nature, but for aviation it is probably 99% physics). All of them can be translated into formulas and constants. For everything that can happen, there is some sort of sensor available that would detect it. So if you could have a computer that can do any calculation of any natural law (one unbelievably powerful and quick) and has every such sensor wired to it, able to detect any parameter around it, then yes, a computer could deal with any event.
but...
We all know that the more complex a system gets and the more sensors it has, the more bugs it contains, the more failures occur and the more false information is gathered, which then also results in wrong decisions being taken.
So with every additional aspect we consider in the system (to make sure we are able to deal with whatever is physically possible), we increase the number of bugs and sources of failure. At a certain level, the event we want to prevent becomes less probable than a malfunction of the system we install to deal with it.
Thinking of real MTBF figures for existing components, which sometimes (for whatever reason, most probably cost...) are as low as 1000 FH, such a system would not make the aircraft safer against any event rare enough to happen only once every 1000 FH. Of course we can significantly increase reliability by installing redundancy, but the system will always have finite reliability, so there is always a class of events whose probability of occurring is lower than the probability that the system installed to deal with them fails.
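
To put rough numbers on that (a back-of-the-envelope sketch only, assuming independent channel failures and the 1000 FH MTBF figure above):

# Rough reliability arithmetic, assuming independent channel failures
# and a constant failure rate (both strong simplifications).

mtbf_fh = 1000.0                 # per-channel MTBF in flight hours (example figure)
lam = 1.0 / mtbf_fh              # failure rate per flight hour

for channels in (1, 2, 3):
    # probability that ALL channels fail in a given flight hour
    p_total_loss = lam ** channels
    print(f"{channels} channel(s): ~{p_total_loss:.0e} total-loss probability per FH")

# 1 channel:  ~1e-03 per FH
# 2 channels: ~1e-06 per FH
# 3 channels: ~1e-09 per FH -- any "unique event" rarer than this is
# less likely than the failure of the system installed to handle it.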

Therefore:
practically, we will never be able to create a system that can deal with every unique event without that system itself failing and causing accidents at a rate of the same order of magnitude as the unique events it was meant to deal with.
The same is true for pilots: we can never train them well enough for every unique event that may happen; they will always make mistakes and errors (another interesting philosophical discussion would be whether pilot errors or pilot mistakes are the real issue...) at a rate an order of magnitude higher than truly unique events.

But slast is absolutely correct: instead of looking at statistics of very, very rare events, we should look at statistics of the billions of correct decisions taken, correct actions performed and flight hours of systems doing exactly what they should. We should not try to avoid something very rare by taking responsibility away from those who do something right 99.99999% of the time. We should try to support those doing a good job so they can do a perfect job (humans and systems). We should check who does what best, and then support him / improve it.
Volume is offline  
Old 27th Oct 2015, 10:20
  #27 (permalink)  
 
Join Date: Dec 2003
Location: Tring, UK
Posts: 1,845
Received 2 Likes on 2 Posts
I have always thought that I am there on the aeroplane to deal with the rare, the unexpected, the edge cases and anything that requires “thinking outside the box”. Most of the regular day-to-day stuff can be automated (I include humans following SOPs in that). My personal opinion is that many problems have their roots not in pilots or aircraft systems but in the interface between the two: we are still in the relative dark ages here using methods that were old half a century ago.

It’s normal to consider risk in terms of severity factored by likelihood of occurrence: you can accept something with pretty bad potential consequences as long as it sits way out at one end of the probability curve. If you could define an area where the results of automation were uncertain or even catastrophic, as long as that area was tiny compared with the area of competence, that would be acceptable.
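
(A crude way to picture that trade-off, with entirely made-up numbers; risk is roughly severity weighted by probability, so a catastrophic but vanishingly rare outcome can still score lower than a mild but common one:)

# Crude severity-times-likelihood sketch; all numbers are made up.
hazards = [
    # (description, severity 1-5, probability per flight)
    ("automation mishandles a true edge case", 5, 1e-9),
    ("routine human slip on a normal sector",  2, 1e-4),
]

for name, severity, prob in hazards:
    print(f"{name}: weighted risk = {severity * prob:.1e}")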

As far as removing pilots completely from the equation, even now there are large autonomous drones flying missions around the globe but I don’t think they have quite the safety record that would entice many to sit on board as a paying customer.

The day will come when we have AI strong enough to give a comparable performance to a trained and experienced human in terms of air transport operations. At the moment, it would appear to be cheaper (and possibly safer) to retain the status quo. Also, by the time autonomous airliners become a reality, every other mode of transport will be similar and pretty much every job a human could do could be done as well or better by AI...
FullWings is offline  
Old 27th Oct 2015, 12:06
  #28 (permalink)  
 
Join Date: Aug 2001
Location: se england
Posts: 1,580
Likes: 0
Received 48 Likes on 21 Posts
I think the real risk to pilot jobs from automation and AI is the point made above: that most other jobs will be automated before pilots'.
So who do pilots have left to fly when no one has a job and cannot afford to travel, and with so many business functions automated there are no bums on seats in business class?

The computers themselves can telecommute, so there is no need to fly the hardware around either, and so there is not much call for airlines overall. In fact, what is there much demand for? We can automate banking and finance functions so they are run by honest, incorruptible machines. Same with the stock exchanges, where machines will do the job better than their coke-fuelled, risk-taking, reckless human counterparts.

So yes, automation is a threat to cockpit jobs because it is a threat to all jobs in the long run, and as the saying goes, in the long run we are all dead -- but perhaps that was not meant in the sense of what the AI-run machines will do when they realise they no longer need us!
pax britanica is offline  
Old 27th Oct 2015, 12:36
  #29 (permalink)  
 
Join Date: Mar 2007
Location: Up North….
Posts: 502
Likes: 0
Received 0 Likes on 0 Posts
I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.

The chances of an accident are incredibly small. The statistics are something like 1 in 2,000,000 flights having an incident. Well, by my reckoning that's 1,999,999 flights where the pilots have done a bloody good job...

Where do we hear about that? 80% is a huge percentage and scares the pants off the general public. But what about 0.00005% (1 in 2,000,000)? And that's assuming every one of those 1-in-2,000,000 incidents is the pilot's fault, which it isn't, even by the 80% statement.

Come on, the incident rate can NEVER be zero and can always be improved, but 0.00005% means that, all in all, pilots do a great job!
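
(For what it's worth, the arithmetic behind those figures, taking the 1-in-2,000,000 rate as the premise:)

# Sanity check on the quoted figures (the 1-in-2,000,000 rate is the premise here).
flights_per_incident = 2_000_000
incident_rate = 1 / flights_per_incident

print(f"Incident rate: {incident_rate:.7%} of flights")            # 0.0000500%
print(f"Flights without incident: {flights_per_incident - 1:,}")   # 1,999,999
print(f"Share attributed to pilots at 80%: {0.8 * incident_rate:.7%}")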
felixthecat is offline  
Old 27th Oct 2015, 13:40
  #30 (permalink)  
 
Join Date: Mar 2002
Location: Florida
Posts: 4,569
Likes: 0
Received 1 Like on 1 Post
I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.
fair point

Replace the words "blame" and "due to" and the pilot becomes only one of several causal factors. In the long run, as many have pointed out, we can't seem to achieve zero mechanical or system screw-ups, so we depend on the pilot to mitigate a large percentage of them.

He just becomes one of the areas addressed in the recommended actions, in spite of what the newspapers or a discussion board like this say.
lomapaseo is offline  
Old 27th Oct 2015, 15:42
  #31 (permalink)  
 
Join Date: Jan 2008
Location: Scandinavia
Posts: 98
Likes: 0
Received 0 Likes on 0 Posts
FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.
I think you misread....I don't agree that pilots are the major problem either.

c101
fc101 is offline  
Old 27th Oct 2015, 16:23
  #32 (permalink)  
 
Join Date: Sep 1999
Location: South of the Border
Posts: 19
Likes: 0
Received 0 Likes on 0 Posts
I like the Air Safety Iceberg Reality model. Certainly a more balanced way of representing the reality of the situation.

I know there are many others, but I would like to add one other accident to the 'spectacular saves' list.

Cathay flight CX780 a few years ago. Accident report here:

http://www.cad.gov.hk/reports/2%20Fi...0compliant.pdf

Another 300+ lives saved by, in my opinion, the pilots. I wouldn't have confidence in computers bringing this situation to a successful ending. It really was one out of left field.

They, like Sully, received various awards for their actions.

And I very much like this thread with sound, reasoned discussion and not the usual trolls spoiling it all. May it continue.
gearupmaxpower is offline  
Old 27th Oct 2015, 16:24
  #33 (permalink)  
 
Join Date: Dec 2001
Location: England
Posts: 1,389
Likes: 0
Received 0 Likes on 0 Posts
How will a computer be able to be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it will take a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.
Driverless cars will probably have even less time to react.
cwatters is offline  
Old 27th Oct 2015, 16:49
  #34 (permalink)  
Thread Starter
 
Join Date: Jan 2010
Location: Marlow (mostly)
Posts: 369
Likes: 0
Received 1 Like on 1 Post
Cathay event

Thanks GUMP, I had been intending to use that event for just that reason, but couldn't lay hands on the report at the time I did the diagram. Fits the pattern totally, especially the phrase in the report summary about "A SERIES (my emphasis) of causal factors" leading to loss of control of both engines.

Many other qualifying events are also in the IFALPA Polaris awardees list https://en.wikipedia.org/wiki/Polaris_Award
although for the purpose of this discussion the hijacking events probably have to be discounted. Even so we have 7 in the last decade.
Steve

Last edited by slast; 27th Oct 2015 at 17:00. Reason: typo, add reference
slast is offline  
Old 27th Oct 2015, 17:13
  #35 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
There is a lot of misunderstanding of the challenges here.


The BA 777 running out of engines is not an instance where the human is better.

That is an example of where a computer would have the advantage.

No human has practiced glide approaches in a 777, whereas programming the physics into a computer is easy.

It can monitor the approach in real time and fly the perfect AoA without difficulty.
It can know the effect a flap raise will have.

Just like you can't match an autopilot for normal flight, you can't match it for abnormal flight.

The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
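
(A minimal sketch of the kind of sum involved; the glide ratio, wind and distances below are invented round numbers, not real aircraft data:)

# Minimal "can we make it?" glide sketch. The glide ratio, wind and
# distances are invented round numbers, not real aircraft data.

def glide_range_nm(altitude_ft, glide_ratio, headwind_kt, glide_speed_kt):
    """Still-air glide range scaled by the ground/air speed ratio (crude wind model)."""
    still_air_nm = altitude_ft * glide_ratio / 6076.0   # feet -> nautical miles
    ground_speed = glide_speed_kt - headwind_kt
    return still_air_nm * ground_speed / glide_speed_kt

candidates = {
    "runway A": 12.0,   # distance to go in nm (invented)
    "runway B": 7.5,
    "off-field": 2.0,
}

reach = glide_range_nm(altitude_ft=3000, glide_ratio=17, headwind_kt=10, glide_speed_kt=200)
for name, dist in candidates.items():
    print(f"{name}: {'reachable' if dist <= reach else 'out of reach'} "
          f"(need {dist:.1f} nm, have {reach:.1f} nm)")

Real energy management is more subtle (configuration, turns, changing wind), but it is still deterministic arithmetic, which is exactly the computer's home ground.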



People are trying to make out that the computer has to be perfect.
It doesn't.

It merely has to be better than we are at present.

There will be black swan events, and that is where a good crew might be better, but the vast majority of events are endless repeats of the same accidents.


We are currently in a bad transitional phase.

An Example.

TCAS

The aircraft tells us what to do.
We are not supposed to second guess it, just do what it says.
We then try to do what it says but usually get it wrong. The aircraft itself would never get the response wrong.

EGPWS

Again, simply connecting these systems to the autopilot would make them a lot safer.

Those who say that the sensors will never match the human eye are talking utter rubbish.

1. The pilots barely look out the window.
2. We fly blind in IMC all the time.
3. If anyone actually thought that lookout was important, we would have cockpits like fighters that you could actually see out of.
4. Modern fighters already have integrated systems far better at spotting other aircraft.
Tourist is offline  
Old 27th Oct 2015, 17:46
  #36 (permalink)  
 
Join Date: Mar 2014
Location: WA STATE
Age: 78
Posts: 0
Likes: 0
Received 0 Likes on 0 Posts

The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
And would it pick the congested Hudson River (not a designated airfield, with the designated ones out of reach) to land on?

Or would it simply bring up "does not compute" or "sorry Sully, I cannot allow you to do that"?

Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?
CONSO is offline  
Old 27th Oct 2015, 17:52
  #37 (permalink)  

"Mildly" Eccentric Stardriver
 
Join Date: Jan 2000
Location: England
Age: 77
Posts: 4,142
Received 224 Likes on 66 Posts
There has been talk that Sully could have made Teterboro, and that a computer would have done so. That is all very well, but the computer would have to be programmed with ALL building work on EVERY approach to EVERY airfield in the world, and in many cases ANY large ships etc. transiting the area. Yes, it could be done, but the constant updating (assuming EVERY building contractor informed the relevant authorities) would be an enormous task.
Herod is offline  
Old 27th Oct 2015, 17:52
  #38 (permalink)  
 
Join Date: Feb 2005
Location: flyover country USA
Age: 82
Posts: 4,579
Likes: 0
Received 0 Likes on 0 Posts
Tourist:
The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
And the computer will respond with a ONE or a ZERO.

ONE: no problem. It steers for TEB with optimum configuration, gear up if necessary.

ZERO: now what? Would it steer for the Hudson, and avoid the watercraft? What database would apply?

Seems to me the Cat3a-configured, pilot-optional airplane will not be capable of flying safely in a strictly VFR environment.
barit1 is offline  
Old 27th Oct 2015, 17:59
  #39 (permalink)  
 
Join Date: Jun 2009
Location: Canada
Posts: 464
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by CONSO
Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?
The Apollo Guidance Computer is actually a good example of coding to handle unexpected events. It had CPU power to spare in normal operation, but was programmed to keep flying and drop less essential tasks like displaying data to the crew if it was overloaded. Had the programmers not built that capability into the software, Neil Armstrong wouldn't have been the first man on the Moon.
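
(Loosely in that spirit, a toy sketch of priority-based task shedding; this has nothing to do with the AGC's actual executive design or code:)

# Toy sketch of graceful degradation under overload, loosely inspired by
# the idea behind the AGC's executive; not its actual design or code.

# (priority, cost, name) -- lower priority number = more essential. Values invented.
tasks = [
    (1, 30, "guidance and navigation"),
    (1, 20, "engine/attitude control loop"),
    (2, 15, "radar data processing"),
    (3, 25, "crew display updates"),
    (3, 10, "telemetry formatting"),
]

def schedule(tasks, capacity):
    """Run the most essential tasks first; shed the rest when capacity runs out."""
    ran, shed, used = [], [], 0
    for prio, cost, name in sorted(tasks):
        if used + cost <= capacity:
            ran.append(name)
            used += cost
        else:
            shed.append(name)
    return ran, shed

ran, shed = schedule(tasks, capacity=70)
print("ran: ", ran)    # essential control and guidance loops survive
print("shed:", shed)   # low-priority jobs (displays, telemetry) are dropped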

Of course, it had no idea that it was aiming to land them on top of a big rock (or was it the edge of a crater? I forget). So, without Armstrong, it would probably just be a pile of debris on the Moon.
MG23 is offline  
Old 27th Oct 2015, 18:06
  #40 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Why would it have to crash?

Why not have it scan the ground in front and around to the sides in search of the most suitable landing area.

This technology is already in existence and operating. This is not pie in the sky.

https://www.youtube.com/watch?v=GoCFE8xVhKA
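
(Conceptually it is a scoring problem. A toy sketch with invented terrain attributes and arbitrary weights, nothing like the sensor fusion in the video:)

# Toy landing-site selection: score candidate patches from (invented) sensed
# attributes. Real systems fuse lidar/radar/vision; this only shows the idea.

candidates = [
    # (name, slope_deg, obstacle_count, length_m, reachable)
    ("field north",  2.0,  1, 600, True),
    ("highway",      0.5, 12, 900, True),
    ("ridge west",   9.0,  0, 400, False),
]

def score(slope_deg, obstacles, length_m, reachable):
    if not reachable or length_m < 450:
        return float("-inf")                              # hard constraints first
    return length_m - 40 * slope_deg - 50 * obstacles     # arbitrary weights

best = max(candidates, key=lambda c: score(*c[1:]))
for name, *attrs in candidates:
    print(f"{name}: score {score(*attrs):.0f}")
print("chosen:", best[0])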

Incidentally, Sully was an exceptional pilot. Just because one human did well does not mean that pilots don't usually crash in these scenarios.

The reason we know his name is because he is the very rare human who got this right, not the norm.
Tourist is offline  

