PPRuNe Forums

PPRuNe Forums (https://www.pprune.org/)
-   Rumours & News (https://www.pprune.org/rumours-news-13/)
-   -   Airbus pitches pilotless jets -- at Le Bourget (https://www.pprune.org/rumours-news/622618-airbus-pitches-pilotless-jets-le-bourget.html)

kkbuk 20th Jun 2019 21:21


Originally Posted by Auxtank (Post 10498883)
And, equally happy as you, that it hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.

I usually spread my breakfast marmalade on the toast rather than on the toaster.

Rated De 21st Jun 2019 00:03


The chief salesman of Airbus: is there any need to expand his 'knowledge' of the difference between a sales pitch and reality?

Groaner 21st Jun 2019 04:24

More likely is a hybrid approach first, with a remote pilot.

Because we already have that, with military drones. I'm unaware of their failure rate, but I'd imagine it would be improving as experience is gained.

There are objections about things like up/down-link security, but I'd imagine the typical drone is of huge interest to very well-resourced organisations that are presumably willing to try to degrade or snoop on those links. While there may have been instances of exactly that, I suspect they have been patched pretty quickly.

The only really big barrier to similar control over pax operations is passenger psychology.

tdracer 21st Jun 2019 04:42

Currently, the FAA is on record - in writing - that they will not permit or certify any flight-critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I took the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell-phone based). After I picked up the car, it had turned into a gutless wonder - the engine was so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turned out one of them had had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalogue every single known survivable failure and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), determine whether it was feasible to land at an airport or whether a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.
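The "catalogue the known failures" step described above can be sketched in a few lines of ordinary, deterministic code. This is a hypothetical illustration only - the glide ratio, airport distances and ditch logic are invented for the example, not real performance data:

```python
# Illustrative assumption: clean-configuration glide ratio for a mid-size twin.
GLIDE_RATIO = 17.0  # feet travelled forward per foot of altitude lost

def glide_range_nm(altitude_ft):
    """Still-air glide range in nautical miles from the given altitude."""
    return altitude_ft * GLIDE_RATIO / 6076.0  # 6076 ft per nautical mile

def best_option(altitude_ft, airports_nm):
    """Pick the nearest reachable airport; otherwise recommend ditching."""
    reachable = {name: dist for name, dist in airports_nm.items()
                 if dist <= glide_range_nm(altitude_ft)}
    if reachable:
        return min(reachable, key=reachable.get)
    return "DITCH"

# Hypothetical low-altitude scenario: no airport within glide range.
print(best_option(2800.0, {"LGA": 9.5, "TEB": 8.5}))
```

A real system would of course fold in wind, configuration, terrain and a certified performance model; the point is only that the "known failure" branch is conventional, fully predictable code - no AI required.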

bill fly 21st Jun 2019 05:01


Originally Posted by kkbuk (Post 10498978)
I usually spread my breakfast marmalade on the toast rather than on the toaster.

Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD 80. When I asked him what he was up to, he answered, "Now we have a jammed stabiliser." Took him the next half hour to clean it. AI might have been better.

futurama 21st Jun 2019 09:34


Originally Posted by tdracer (Post 10499131)
Currently, the FAA is on record - in writing - that they will not permit or certify any flight-critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.

<snip>

Well, not really. Large classes of AI/ML algorithms are as deterministic & predictable as any "classical" algorithms.

And most systems using machine learning algorithms aren't actually "learning" (updating themselves) while being used. All the "learning" happens back in the lab, while the algorithms are being modelled, trained, tuned, and validated. The resulting model (a set of parameters) is then "baked" into production systems.

In your BMW, for example, the AI isn't really "learning" while you're driving around. The learning already took place in Munich -- long before you bought your car -- when BMW data scientists & data engineers used machine learning to create many configuration sets (apparently including a "little old lady" configuration). From time to time, perhaps once or twice a year, BMW might use new datasets to re-train their models, validate them, and ship the updated parameters as part of the next software release. (Now, your car might be "smart" enough to notice whether you prefer to drive like an old lady or an F1 driver and automatically load the appropriate configuration, or adjust some variables between well-defined limits - but that's not AI.)

Anyway, the bottom line is that an AI system can be "predictable", and doesn't change between rigorously validated updates.

Related to this are the concepts of "interpretability" and "explainability". I won't go into details (here's an academic paper if one cares), but many machine learning algorithms work like a "black box", so their use may be problematic in safety-critical systems. However, not all of them work this way, and we're making great strides in making the rest "interpretable" and/or "explainable".
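The point about parameters being "baked" into production systems is easy to show. A toy sketch (the weights below are made up, not from any real system): once training is finished, inference is a pure function, so the same input always produces the same output.

```python
# Toy linear model with parameters "baked in" after offline training.
# The weights and bias are illustrative, not from any real system.
WEIGHTS = [0.42, -1.3, 0.07]
BIAS = 0.5

def predict(features):
    """Pure function: no state is updated at inference time."""
    return BIAS + sum(w * x for w, x in zip(WEIGHTS, features))

x = [1.0, 2.0, 3.0]
assert predict(x) == predict(x)  # deterministic: same input, same output
```

"Re-training" in this picture simply means shipping a new `WEIGHTS`/`BIAS` pair as part of a validated software release - exactly like any other software update.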

Global Aviator 21st Jun 2019 09:45


Originally Posted by bill fly (Post 10499136)


Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD 80. When I asked him what he was up to, he answered, "Now we have a jammed stabiliser." Took him the next half hour to clean it. AI might have been better.

Classic!!!

homonculus 21st Jun 2019 09:58


Would someone kindly define AI for me?
Having read all the subsequent posts, I guess not. The fact that a computer programme stores results and incorporates them into future runs is merely machine learning. It's a derivative of what we did with mainframe computers in the 1970s - write a programme, run it, correct it, run it, repeat - except that the loop continues after the programme works, and greater computing power now lets you incorporate more feedback and more variables. However, it is just machine learning. It is data crunching. The computer hasn't got intelligence, or anything more than an ability to crunch data.

When an AP can decide to crash on the elderly woman and not the child because we care more for children, I will accept that it is AI-enabled. Until then it is just a computer.

cattletruck 21st Jun 2019 09:59

There's an ex-NASA/JPL engineer, now at Nissan, who regularly holds demonstrations on why autonomous vehicles will never make it. In his demonstrations he shows a library of videos of humans breaking the law to avoid fatal automobile accidents. He suggests, rightly, that computers could never be programmed to do what these humans did - let alone be programmed to break the law.

While the marketing people have gained control over the remaining qualified engineers on design issues, expect to hear more of this kind of autonomous-vehicle cr@p ad nauseam.


Raffles S.A. 21st Jun 2019 10:15

I bet O'Leary was interested.

Nialler 21st Jun 2019 14:47


Originally Posted by Auxtank (Post 10498716)
That problem is examined and answered so well in that book above as he mulls over Asimov's Three Laws.

It has not been answered satisfactorily in any conference I've attended.

Computers making control decisions in life/death scenarios? They handle nominal situations very well and are used routinely in the industry.

However, they fail. They always will. They are complex. They rely on a consistent sequence of reliable events all the way from the power source through to the outputs. They rely on the expertise of the thousands of designers and coders.

I've worked on major systems for decades. Mature technologies, with massive redundancy. They still occasionally fail.

Applying a patch on reasonably placid systems performing simple functions can take a month or so as it passes through the change control/test/user acceptance cycle.

You know what? Even after that cycle of rigorous testing, it can still introduce failures.

If I were to be involved in any autonomous flight project... well, I said walk away. Maybe I wouldn't. My costs, for development and ongoing support would far exceed those charged by a pilot, though.

Several thousand people providing 24/7 expert support (we're speaking third level rather than helpline) will not come cheap.

And I repeat the risk inherent in a single error being propagated across multiple users.

I am no technophobe. I've been in the industry for decades. I would never fly on a craft which relied entirely on code.

Nialler 21st Jun 2019 15:10

On the issue of AI:

The systems I've examined have been good at avoiding error if that error has been encountered in the past and is part of its dataset. The issue is not avoiding predictable errors, though. The core issue is that resolving a fresh problem may require a new solution. It may require a solution exceeding the constraints and limits of the programme design.

I return to my issue with the term AI. That first word is enough for me. Artificial. Computers are brilliant at high-speed processing - millions, if not billions, of times quicker than humans. Yet, as a dataset grows, they suffer performance anxiety. One of the issues with 9/11 was not that the intelligence agencies had too little data in advance. The problem was that they had too much.

Armstrong had to fight on his descent to the Moon when the job entry subsystem became flooded with tasks.

Amdahl's law also has a place. As you add components to a computing system (he was speaking of additional processors), the serial coordination - the chatter and handshaking between them - begins to dominate, and extra capacity stops translating into extra work done. The components now exist for each other. They're no longer tallying bank balances or calculating an angle of attack; they're making sure the system is fine.
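The Amdahl point can be made concrete with the standard formula: if a fraction p of the work parallelises and the remainder is serial (coordination, handshaking), the speedup from n processors is 1/((1-p) + p/n), which is capped at 1/(1-p) no matter how large n grows. A quick sketch:

```python
def amdahl_speedup(p, n):
    """Speedup from n processors when fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 1024 processors give under 20x -
# the serial 5% (coordination, handshaking) dominates.
for n in (2, 16, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

With p = 0.95 the hard ceiling is 1/0.05 = 20x, however much hardware is added - which is the sense in which the extra components end up working "for each other".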

IBM have designed massive transaction and database systems - CICS, IMS, DB2. 90% of the code is about resilience, recoverability and integrity. Only a small fraction does the job it is nominally there to do.

ThorMos 21st Jun 2019 15:21

There is a lot of false information and misinterpretation buried in this thread - for instance, that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings', and the system is then predictable if you put an identical system into the same situation. But that's completely beside the point. Are human pilots predictable? Do all human pilots react in the same perfect way, or do they fail from time to time? One example given in this thread is AF447. A computer system could be programmed to just fly pitch & power and get itself out of the critical situation. Why didn't the pilots do this? Did anybody predict this, or did the pilots behave unpredictably?
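The "just fly pitch and power" fallback can be read as a very small piece of logic. A hypothetical sketch - the disagreement threshold and the memory pitch/thrust values are invented for illustration, not taken from any aircraft manual:

```python
# Hypothetical "pitch and power" fallback: when airspeed sources disagree,
# ignore them and command known-safe memory values instead.
# All numbers are illustrative, not from any real aircraft.
SAFE_PITCH_DEG = 5.0    # assumed safe cruise pitch attitude
SAFE_THRUST_PCT = 85.0  # assumed safe cruise thrust setting

def airspeed_reliable(sensors, tolerance_kt=10.0):
    """Treat airspeed as reliable only if all sources agree within tolerance."""
    return max(sensors) - min(sensors) <= tolerance_kt

def flight_command(airspeed_sensors):
    if airspeed_reliable(airspeed_sensors):
        return "normal-law"                   # keep the usual control laws
    return (SAFE_PITCH_DEG, SAFE_THRUST_PCT)  # fall back to pitch & power

print(flight_command([252.0, 251.0, 100.0]))  # disagreeing probes trigger fallback
```

The behaviour is fixed at design time and fully deterministic - which is the post's point about what such a system could have done with AF447's frozen pitot probes.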

Nialler 21st Jun 2019 15:42


Originally Posted by ThorMos (Post 10499733)
There is a lot of false information and misinterpretation buried in this thread - for instance, that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings', and the system is then predictable if you put an identical system into the same situation. But that's completely beside the point. Are human pilots predictable? <snip>

This gets close to the point. We're not comparing systems to see which one is perfect - merely asking which one is better fitted for purpose.

The joy of science is that it recognises its limitations and - more importantly - has mechanisms by which it can self-correct. A purely mechanistic approach based on rules has no such flexibility. It will fly into the cliff because the rules decreed so. A patch will be supplied. In an unexpected situation a stall will occur. A patch will be issued.

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

golfbananajam 21st Jun 2019 15:50


Originally Posted by Auxtank (Post 10498883)
And, equally happy as you, that it hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.


You spread marmalade on your toaster?

Auxtank 21st Jun 2019 16:37


Originally Posted by golfbananajam (Post 10499762)
You spread marmalade on your toaster?

Alright guys - hilarious. I got it wrong...but it was just a slip up with wording. (Not on the marmalade)
I apologise. I meant to say that the particular piece of AI BEING TALKED ABOUT is as intelligent as your toaster - IN WHICH YOU TOAST YOUR BREAD AND ON TO WHICH; THE BREAD THAT IS, YOU SPREAD YOUR BLOODY MARMALADE.
:O
Now, can we get back to discussing the earth-shattering dawn of AI.

(And no, I do not like blood in my marmalade)

ThorMos 21st Jun 2019 16:46


Originally Posted by Nialler (Post 10499756)

<snip>

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

see what i did here?


RobertP 21st Jun 2019 17:19


Originally Posted by Chris2303 (Post 10497288)
On the flight deck one human, one dog.

Dog bites human if he touches anything

Actually the human is there to feed the dog.

CurtainTwitcher 21st Jun 2019 21:39


The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

see what i did here?
Humans have a generalised ability to solve novel problems using accumulated knowledge and experience of nearby or similar situations, plus intuition. There have been numerous instances posited in this thread where humans adapted on the fly to unanticipated or unprecedented scenarios that they had not been trained for: CX780, QF32, Sioux City, Sully and a slew of others. A good primer on the subject of intuition (and its flaws) is Daniel Kahneman's unexpected best seller "Thinking, Fast and Slow", noting the work of his antagonist-collaborator Gary Klein on naturalistic decision-making.

In the interest of honest debate, humans are also completely capable of screwing it up, and strict Standard Operating Procedures have had to be developed, trained and complied with to save them from themselves - to the point that, for the most part, pilots are required against their will to function as automatons. The hull-loss rate suggests we have probably optimised the hybrid between the advantages automation can provide and the human tolerance for novelty and ambiguity in "out of design" scenarios.

Can you please point to a generalised problem solving artificial intelligence system that can solve and adapt to an unprecedented novel scenario in real time? Because that is the humans bring to the game.

Auxtank 21st Jun 2019 22:10


Originally Posted by CurtainTwitcher (Post 10499996)
<snip>

Can you please point to a generalised problem solving artificial intelligence system that can solve and adapt to an unprecedented novel scenario in real time? Because that is the humans bring to the game.

That is what the humans bring to the game.

That's essentially, in a nutshell, what GAI is (GAI - Generalised Artificial Intelligence).

We're about 20 years away from that. NAND gates are where it all started, and their slow but steady - now rising to exponential - evolutionary growth is going to be the death of us, or our salvation.

Which of those it is - is down to us.

Start here; https://futureoflife.org/superintelligence-survey/


All times are GMT.


Copyright © 2024 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.