
Airbus pitches pilotless jets -- at Le Bourget



Old 20th Jun 2019, 21:21
  #81 (permalink)  
 
Join Date: Feb 2007
Location: Spain
Age: 81
Posts: 57
Received 0 Likes on 0 Posts
Originally Posted by Auxtank
And, equally as happy as you, that hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.
I usually spread my breakfast marmalade on the toast rather than on the toaster.
kkbuk is offline  
Old 21st Jun 2019, 00:03
  #82 (permalink)  
 
Join Date: Sep 2017
Location: Europe
Posts: 1,674
Likes: 0
Received 0 Likes on 0 Posts
The chief salesman of Airbus
Is there any need to expand his 'knowledge' on the difference between sales pitch and reality?
Rated De is offline  
Old 21st Jun 2019, 04:24
  #83 (permalink)  
 
Join Date: Aug 2002
Location: sydney
Posts: 136
Likes: 0
Received 0 Likes on 0 Posts
More likely is a hybrid approach first, with a remote pilot.

Because we already have that, with military drones. I'm unaware of their failure rate, but I'd imagine it would be improving as experience is gained.

There are objections about things like up/down link security, but I'd imagine the typical drone is of huge interest to very well-resourced organisations that presumably are willing to try to degrade or snoop on those links. Whilst there may have been instances of exactly that, I suspect they have been patched pretty quickly.

The only really big barrier to similar control over pax operations is passenger psychology.
Groaner is offline  
Old 21st Jun 2019, 04:42
  #84 (permalink)  
 
Join Date: Jul 2013
Location: Everett, WA
Age: 68
Posts: 4,399
Received 180 Likes on 88 Posts
Currently, the FAA is on record - in writing - that they will not permit or certify any flight critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I took the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell phone based). After I picked up the car, it had turned into a gutless wonder - the engine was so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turned out one of them had had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalog every single known survivable failure and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), and determine whether it was feasible to land at an airport or a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
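For illustration, a minimal sketch (in Python) of that kind of pre-programmed evaluation. Everything here is a toy assumption - the fixed glide ratio, the airport list, the distances - and real logic would also weigh wind, configuration, terrain, and energy state:

```python
# Toy "all engines out" evaluator: given altitude and candidate landing
# sites, decide whether any airport is within the glide footprint.
# Glide ratio and distances are hypothetical, for illustration only.

GLIDE_RATIO = 17.0  # assumed clean-configuration lift/drag ratio

def glide_footprint_m(altitude_m: float, glide_ratio: float = GLIDE_RATIO) -> float:
    """Still-air ground distance reachable from the current altitude."""
    return altitude_m * glide_ratio

def best_option(altitude_m: float, airports: list[tuple[str, float]]) -> tuple[str, float]:
    """Pick the nearest reachable airport, else recommend ditching."""
    footprint = glide_footprint_m(altitude_m)
    reachable = [(name, d) for name, d in airports if d <= footprint]
    return min(reachable, key=lambda a: a[1]) if reachable else ("DITCH", 0.0)

# Hypothetical numbers loosely inspired by US1549: low altitude, two fields.
print(best_option(850.0, [("LGA", 16_000.0), ("TEB", 15_000.0)]))
# -> ('DITCH', 0.0): a 17:1 glide from 850 m gives ~14.5 km, short of either field
```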
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.
tdracer is online now  
Old 21st Jun 2019, 05:01
  #85 (permalink)  
 
Join Date: Feb 2015
Location: The woods
Posts: 5
Likes: 0
Received 3 Likes on 2 Posts
Originally Posted by kkbuk
I usually spread my breakfast marmalade on the toast rather than on the toaster.
Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD-80. When I asked him what he was up to, he answered, "Now we have a jammed stabiliser." Took him the next half hour to clean it. AI might have been better.
bill fly is offline  
Old 21st Jun 2019, 09:34
  #86 (permalink)  
Thread Starter
 
Join Date: Mar 2019
Location: Canada
Posts: 72
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by tdracer
Currently, the FAA is on record - in writing - that they will not permit or certify any flight critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I took the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell phone based). After I picked up the car, it had turned into a gutless wonder - the engine was so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turned out one of them had had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalog every single known survivable failure and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), and determine whether it was feasible to land at an airport or a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.
Well, not really. Large classes of AI/ML algorithms are as deterministic & predictable as any "classical" algorithms.

And most systems using machine learning algorithms aren't actually "learning" (updating themselves) while being used. All the "learning" happens back in the lab while the algorithms are being modeled, trained, tuned, and validated. The resulting model (its parameters) is then "baked" into production systems.

In your BMW, for example, the AI isn't really "learning" while you're driving around. The learning already took place in Munich -- long before you bought your car -- when BMW data scientists & data engineers used machine learning to create many configuration sets (apparently including an "old lady" configuration). From time to time, perhaps once or twice a year, BMW might use new datasets to "re-train" their AI models, validate them, and provide the new updated parameters as part of the next software release. (Now, your car might be "smart" enough to notice whether you prefer to drive like an old lady or an F1 driver and automatically load the appropriate configuration, or adjust some variables between well-defined limits, but that's not AI.)
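A minimal sketch of that train-offline, deploy-frozen pattern (all names and numbers here are hypothetical, not BMW's actual pipeline): the fitting happens once "in the lab", and what ships is just a fixed set of parameters, so inference is as deterministic as any classical function.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# --- "In the lab": fit a model once, on collected training data ---
X = rng.normal(size=(1000, 3))                   # hypothetical sensor features
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)
weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # "training" = one-off optimisation

# --- "In production": the parameters are frozen; nothing updates in the field ---
FROZEN_WEIGHTS = weights.copy()

def predict(features: np.ndarray) -> float:
    """A pure function of its inputs: same input, same output, every time."""
    return float(features @ FROZEN_WEIGHTS)

x = np.array([1.0, 0.5, -0.2])
assert predict(x) == predict(x)  # deterministic between validated software updates
```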

Anyway, the bottom line is that an AI system can be "predictable" and doesn't substantially change between rigorously validated updates.

Related to this are the concepts of "interpretability" and "explainability". I won't go into details (here's an academic paper if one cares) but many machine learning algorithms work like a "black box", so their use may be problematic in safety-critical systems. However, not all of them work this way, and we're making great strides in making the rest "interpretable" and/or "explainable".
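To make the interpretability point concrete, a toy example (using scikit-learn's bundled iris dataset, nothing aviation-specific): a shallow decision tree is a machine-learned model whose entire learned behaviour prints out as auditable if/then rules.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A deliberately small, "interpretable" model: its entire learned
# behaviour can be dumped and audited as plain if/then rules.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

print(export_text(tree, feature_names=list(iris.feature_names)))
```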
futurama is offline  
Old 21st Jun 2019, 09:45
  #87 (permalink)  
 
Join Date: Nov 2013
Location: Equatorial
Age: 51
Posts: 1,067
Received 124 Likes on 61 Posts
Originally Posted by bill fly


Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD-80. When I asked him what he was up to, he answered, "Now we have a jammed stabiliser." Took him the next half hour to clean it. AI might have been better.
Classic!!!
Global Aviator is online now  
Old 21st Jun 2019, 09:58
  #88 (permalink)  
 
Join Date: Sep 2007
Location: london
Posts: 741
Likes: 0
Received 1 Like on 1 Post
Would someone kindly define AI for me?
Having read all the subsequent posts, I guess not. The fact that a computer programme stores results and incorporates those results into future runs is merely machine learning. It's a derivation of what we did with mainframe computers in the 1970s - write a programme, run it, correct it, run it, repeat - except that you continue once the programme works, and now have the ability, with greater computing power, to incorporate more feedback and variables. However, it is just machine learning. It is data crunching. The computer hasn't got intelligence, or anything more than an ability to crunch data.

When an AP can decide to crash on the elderly woman and not the child because we care more for children, I will accept it is AI-enabled. Until then it is just a computer.
homonculus is offline  
Old 21st Jun 2019, 09:59
  #89 (permalink)  
 
Join Date: Apr 1998
Location: Mesopotamos
Posts: 5
Likes: 0
Received 0 Likes on 0 Posts
There's an ex-NASA/JPL engineer, now at Nissan, who regularly holds demonstrations on why autonomous vehicles will never make it. In his demonstrations he shows a library of videos of humans breaking the law to avoid fatal automobile accidents. He rightly suggests that computers could never be programmed to do what these humans did, let alone be programmed to break the law.

As long as the marketing people have control over the remaining qualified engineers on design issues, expect to hear more of this kind of autonomous vehicle cr@p ad nauseam.

cattletruck is offline  
Old 21st Jun 2019, 10:15
  #90 (permalink)  
 
Join Date: Mar 2012
Location: DFFD Ouagadougou
Age: 62
Posts: 11
Likes: 0
Received 0 Likes on 0 Posts
I bet O'Leary was interested.
Raffles S.A. is offline  
Old 21st Jun 2019, 14:47
  #91 (permalink)  
 
Join Date: May 2008
Location: Paris
Age: 60
Posts: 101
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Auxtank
That problem is examined and answered so well in that book above as he mulls over Asimov's Three Laws.
It has not been answered satisfactorily in any conference I've attended.

Computers making control decisions in life/death scenarios? They handle nominal situations very well and are used routinely in the industry.

However, they fail. They always will. They are complex. They rely on a consistent sequence of reliable events all the way from the power source through to the outputs. They rely on the expertise of the thousands of designers and coders.

I've worked on major systems for decades. Mature technologies, with massive redundancy. They still occasionally fail.

Applying a patch to reasonably placid systems performing simple functions can take a month or so as it passes through the change control/test/user acceptance cycle.

You know what? Even after that cycle of rigorous testing, it can still introduce failures.

If I were to be involved in any autonomous flight project... well, I said walk away. Maybe I wouldn't. My costs for development and ongoing support would far exceed the cost of a pilot, though.

Several thousand people providing 24/7 expert support (we're speaking third level rather than helpline) will not come cheap.

And I repeat the risk inherent in a single error being propagated across multiple users.

I am no technophobe. I've been in the industry for decades. I would never fly on a craft which relied entirely on code.
Nialler is offline  
Old 21st Jun 2019, 15:10
  #92 (permalink)  
 
Join Date: May 2008
Location: Paris
Age: 60
Posts: 101
Likes: 0
Received 0 Likes on 0 Posts
On the issue of AI:

The systems I've examined have been good at avoiding an error if that error has been encountered in the past and is part of their dataset. The issue is not avoiding predictable errors, though. The core issue is that resolving a fresh problem may require a new solution. It may require a solution exceeding the constraints and limits of the programme design.

I return to my issue with the term AI. That first word is enough for me. Artificial. Computers are brilliant at high-speed processing. Millions, if not billions, of times quicker than humans. Yet, as a dataset grows, they suffer performance anxiety. One of the issues with 9/11 was not that the intelligence agencies had too little data in advance. The problem was that they had too much.

Armstrong had to fight on his descent to the Moon when the guidance computer's executive became flooded with tasks (the 1201/1202 program alarms).

Amdahl's law also has a place. As you add components to a computing system (and he specified additional processing power), the chatter and handshaking between them begin to overwhelm their capacity to perform the role they were expected to perform. They now exist for each other. They're no longer tallying bank balances or calculating an angle of attack. They're making sure that the system is fine.
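For reference, Amdahl's law in its textbook form says that if a fraction p of the work can be spread over n units, the speedup is 1 / ((1 - p) + p/n), so the serial fraction caps the gain. A quick sketch, with a hypothetical per-node coordination cost bolted on to model the chatter described above:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Textbook Amdahl's law: the serial fraction (1 - p) caps the speedup."""
    return 1.0 / ((1.0 - p) + p / n)

def with_chatter(p: float, n: int, c: float = 0.002) -> float:
    """Hypothetical variant: a per-node coordination cost c, so adding
    components eventually makes the system slower, as described above."""
    return 1.0 / ((1.0 - p) + p / n + c * n)

for n in (1, 8, 64, 512):
    print(f"n={n:4d}  amdahl={amdahl_speedup(0.95, n):5.1f}  "
          f"with_chatter={with_chatter(0.95, n):5.1f}")
# Even with 95% of the work parallelisable, speedup caps near 20x;
# with coordination overhead it peaks and then falls below 1x.
```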

IBM have designed massive transaction and database systems: CICS, IMS, DB2. 90% of the code is about resilience, recoverability, and integrity. A small fraction does the job it is expected to do.
Nialler is offline  
Old 21st Jun 2019, 15:21
  #93 (permalink)  
 
Join Date: Oct 2007
Location: Germany
Posts: 0
Likes: 0
Received 0 Likes on 0 Posts
There is a lot of false information and misinterpretation buried in this thread, for instance that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings', and therefore the system is predictable: put an identical system into the same situation and you get the same response. But that's completely beside the point. Are human pilots predictable? Do all human pilots react in the same perfect way, or do they fail from time to time? One example given in this thread is AF447. A computer system could be programmed to just fly Pitch&Power and get itself out of the critical situation. Why didn't the pilots do this? Did anybody predict this, or did the pilots behave unpredictably?
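Such a pitch-and-power fallback needs no AI at all. A deliberately dumb sketch of the kind of lookup a system could drop back to when airspeed data goes unreliable (the names and numbers are hypothetical, not from any FCOM or memory-item table):

```python
# Hypothetical unreliable-airspeed fallback: fly fixed pitch and thrust
# from a lookup table instead of trusting suspect sensors. The numbers
# are illustrative only, not from any aircraft manual.
PITCH_POWER_TABLE = {
    # flight phase: (pitch attitude in degrees, thrust in % N1)
    "climb":   (12.5, 95.0),
    "cruise":  (2.5, 80.0),
    "descent": (0.0, 55.0),
}

def unreliable_airspeed_targets(phase: str) -> tuple[float, float]:
    """Return the (pitch, thrust) memory-item targets for the current phase."""
    return PITCH_POWER_TABLE[phase]

pitch, n1 = unreliable_airspeed_targets("cruise")
print(f"Hold {pitch} deg pitch and {n1}% N1")
```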
ThorMos is offline  
Old 21st Jun 2019, 15:42
  #94 (permalink)  
 
Join Date: May 2008
Location: Paris
Age: 60
Posts: 101
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by ThorMos
There is a lot of false information and misinterpretation buried in this thread, for instance that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings', and therefore the system is predictable: put an identical system into the same situation and you get the same response. But that's completely beside the point. Are human pilots predictable? Do all human pilots react in the same perfect way, or do they fail from time to time? One example given in this thread is AF447. A computer system could be programmed to just fly Pitch&Power and get itself out of the critical situation. Why didn't the pilots do this? Did anybody predict this, or did the pilots behave unpredictably?
This gets sort of to the point. We're not comparing systems as to which one is perfect. Merely, which one is better fitted for purpose.

The joy of science is that it recognises its limitations and - more importantly - has mechanisms by which it can self-correct. A purely mechanistic approach based on rules has no such flexibility. It will fly into the cliff because the rules decreed so. A patch will be supplied. In an unexpected situation a stall will occur. A patch will be issued.

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel situation.
Nialler is offline  
Old 21st Jun 2019, 15:50
  #95 (permalink)  
 
Join Date: Aug 2010
Location: UK
Age: 67
Posts: 167
Received 31 Likes on 18 Posts
Originally Posted by Auxtank
And, equally as happy as you, that hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.

You spread marmalade on your toaster?
golfbananajam is offline  
Old 21st Jun 2019, 16:37
  #96 (permalink)  
 
Join Date: Oct 2018
Location: Uka Duka
Posts: 1,003
Received 37 Likes on 13 Posts
Originally Posted by golfbananajam
You spread marmalade on your toaster?
Alright guys - hilarious. I got it wrong... but it was just a slip-up with wording. (Not on the marmalade.)
I apologise. I meant to say that the particular piece of AI BEING TALKED ABOUT is as intelligent as your toaster - IN WHICH YOU TOAST YOUR BREAD AND ONTO WHICH (THE BREAD, THAT IS) YOU SPREAD YOUR BLOODY MARMALADE.

Now, can we get back to discussing the earth-shattering dawn of AI?

(And no, I do not like blood in my marmalade)
Auxtank is offline  
Old 21st Jun 2019, 16:46
  #97 (permalink)  
 
Join Date: Oct 2007
Location: Germany
Posts: 0
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Nialler

<snip>

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.
The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel situation.

See what I did here?

ThorMos is offline  
Old 21st Jun 2019, 17:19
  #98 (permalink)  
 
Join Date: Aug 2009
Location: Nova Scotia Canada
Age: 77
Posts: 24
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Chris2303
On the flight deck: one human, one dog.

The dog bites the human if he touches anything.
Actually the human is there to feed the dog.
RobertP is offline  
Old 21st Jun 2019, 21:39
  #99 (permalink)  
 
Join Date: Jul 2014
Location: Harbour Master Place
Posts: 662
Likes: 0
Received 0 Likes on 0 Posts
The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel situation.

See what I did here?
Humans have a generalised ability to solve novel problems using accumulated knowledge and experience of nearby or similar situations, plus intuition. There have been numerous instances posited in this thread where humans adapted on the fly to unanticipated or unprecedented scenarios that they had not been trained for: CX780, QF32, Sioux City, Sully, and a slew of others. A good primer on the subject of intuition (and its flaws) is Daniel Kahneman's unexpected bestseller "Thinking, Fast and Slow", noting the work of his adversarial collaborator Gary Klein on naturalistic decision-making.

In the interest of honest debate, humans are also completely capable of screwing it up, and strict Standard Operating Procedures have had to be developed, trained, and complied with to save them from themselves, such that for the most part pilots, against their will, are required to function as automatons. The hull loss rate suggests we have probably optimised the hybrid between the advantages that automation can provide and the human tolerance for novelty and ambiguity in "out of design" scenarios.

Can you please point to a generalised problem-solving artificial intelligence system that can solve and adapt to an unprecedented, novel scenario in real time? Because that is what humans bring to the game.
CurtainTwitcher is offline  
Old 21st Jun 2019, 22:10
  #100 (permalink)  
 
Join Date: Oct 2018
Location: Uka Duka
Posts: 1,003
Received 37 Likes on 13 Posts
Originally Posted by CurtainTwitcher
Humans have a generalised ability to solve novel problems using accumulated knowledge and experience of nearby or similar situations, plus intuition. There have been numerous instances posited in this thread where humans adapted on the fly to unanticipated or unprecedented scenarios that they had not been trained for: CX780, QF32, Sioux City, Sully, and a slew of others. A good primer on the subject of intuition (and its flaws) is Daniel Kahneman's unexpected bestseller "Thinking, Fast and Slow", noting the work of his adversarial collaborator Gary Klein on naturalistic decision-making.

In the interest of honest debate, humans are also completely capable of screwing it up, and strict Standard Operating Procedures have had to be developed, trained, and complied with to save them from themselves, such that for the most part pilots, against their will, are required to function as automatons. The hull loss rate suggests we have probably optimised the hybrid between the advantages that automation can provide and the human tolerance for novelty and ambiguity in "out of design" scenarios.

Can you please point to a generalised problem-solving artificial intelligence system that can solve and adapt to an unprecedented, novel scenario in real time? Because that is what humans bring to the game.
That is what humans bring to the game.

That's essentially, in a nutshell, what GAI is (GAI: Generalised Artificial Intelligence).

We're about 20 years away from that. NAND gates are where it all started, and their slow but steady, rising to exponential, evolutionary growth is going to be the death of us - or our salvation.

Which of those it is - is down to us.

Start here: https://futureoflife.org/superintelligence-survey/

Last edited by Auxtank; 22nd Jun 2019 at 07:08.
Auxtank is offline  

