
View Full Version : Airbus pitches pilotless jets -- at Le Bourget


futurama
18th Jun 2019, 11:05
AP Interview: Airbus is ready for pilotless jets - are you?

(with not-so-subtle reference/implication to pilot performance re: the 737 MAX crashes)

https://apnews.com/d8d911a9f1844df1a314a42c346e74a4


LE BOURGET, France (AP) — The chief salesman for Airbus says his company already has the technology to fly passenger planes without pilots at all — and is working on winning over regulators and travelers to the idea.

Christian Scherer also said in an interview with The Associated Press on Monday that Airbus hopes to be selling hybrid or electric passenger jets by around 2035.

While the company is still far from ready to churn out battery-operated jumbo jets, Scherer said Airbus already has “the technology for autonomous flying” and for planes flown by just one pilot.

“This is not a matter of technology — it’s a matter of interaction with the regulators, the perception in the traveling public,” he told The Associated Press.

“When can we introduce it in large commercial aircraft? That is a matter we are discussing with regulators and customers, but technology-wise, we don’t see a hurdle.”

Several manufacturers are presenting unmanned aircraft at the Paris Air Show, primarily for military purposes — and some are also proposing pilotless “air taxis” of the future.

When it comes to autonomous passenger jets, safety is an obvious concern. It’s an issue that is on many minds after two deadly crashes of the Boeing 737 Max jet that have implicated problematic anti-stall software.

Scherer said the crashes “highlighted and underlined the need for absolute, uncompromising safety in this industry, whether from Airbus, Boeing or any other plane.”

While he said Airbus’ sales strategy hasn’t changed as a result of the crashes in Indonesia and Ethiopia, “there is a capacity need that materialized as a result of this, and naturally you have airlines that are frustrated over capacity, that are looking for answers.”

Airbus announced several orders Monday as the air show kicked off, while Boeing had an anemic day as it works to win back trust from customers.

Scherer forecast continued growth in the aviation industry after several boom years, predicting the world will need at least 37,000 new aircraft in the next 20 years, especially in Asia — and that eventually the whole industry will stop creating emissions and “decarbonize.”

Rated De
18th Jun 2019, 11:27
The chief salesman for Airbus

Who would have thought a salesman making such a statement /sarc


predicting the world will need at least 37,000 new aircraft in the next 20 years, especially in Asia — and that eventually the whole industry will stop creating emissions and “decarbonize.”

So Christian, please explain why aircraft manufacturers will only 'eventually' decarbonise, when other industries have already transitioned and even the IMO (International Maritime Organisation) will be fossil-fuel free within fifty years.
Maybe because that wouldn't please airlines, and it's much better to pitch something that calms the frayed nerves of IR managers at airlines worldwide.

Herod
18th Jun 2019, 11:39
Single-pilot operation? Possibly, as technology advances. Single occupant behind a locked door? Cross me off the passenger list.

wan2fly99
18th Jun 2019, 12:03
Pilotless planes, or one pilot?

Trust my life to a computer, or to one guy up front behind locked doors?
That is the day I will start taking the boat again, if boats still exist.
Computers fail. It is code typed in by a programmer. Every consideration has to be typed in.

I program, and I know that things get missed or not thought of. And then what? We put in a quick fix.

ATC Watcher
18th Jun 2019, 12:13
That is the day will start taking the boat again if they still exist.
at that time the boats and the trains will likely be automated too :E

Herod
18th Jun 2019, 12:43
at that time the boats and the trains will likely be automated too

Yes, but they are not so dependent on the law of gravity

infrequentflyer789
18th Jun 2019, 12:46
at that time the boats and the trains will likely be automated too :E

Automation won't kill you because it makes a mistake, because it's gone nuts, because it feels suicidal or it's just having a real bad day. Automation will kill you because someone got the design wrong, or the implementation, or failed to account for the situation in the design, or... essentially all the ways transport and engineering can kill you today anyway. Or maybe you just ended up on the wrong side of the trolley problem - that one'll be new.

One guy behind a locked door? Not sure. I may take the automation instead. Maybe that's the intention, to make that appear to be the choice.

emeritus
18th Jun 2019, 12:55
I suspect the biggest hurdle will be getting a salesperson who will be capable of selling tickets on one of these a/c. We have seen recently that technology has eliminated a lot of causes of accidents but has substituted new causes. A lot of pax decide to entrust their lives to the skill of aircrew on the basis that said aircrew enjoy living as much as they do. A computer doesn't understand the concept.

Emeritus

Auxtank
18th Jun 2019, 13:17
They're all at it!
Looks like Boeing landed a "pilotless" KC-46 there as well, if the landing was anything to go by:

https://www.pprune.org/rumours-news/622596-kc-46-tanker-landing-paris-airshow.html

https://youtu.be/PzRGsZJkjUY

:ok: (https://youtu.be/PzRGsZJkjUY)

fergusd
18th Jun 2019, 13:50
Aircraft are much easier to fly than cars are to drive. There will come a point when an automated system will be statistically safer than using pilots. Neither will ever be 100% safe. The real question is: which one is safer at a given point in time? Views of the flying public will rapidly change IF fully autonomous aircraft are shown to be safer than using pilots as part of the control system. It is inevitable that this will happen at some point. Whether it is cost-effective is a different matter.

Hotel Tango
18th Jun 2019, 14:36
I'll be long gone before full automation will be used on commercial passenger airliners. I imagine that if and when the day comes, it will first be on cargo aircraft for a good many years before the final step of fully automated passenger operations. Future generations will see it as quite normal. Now, if it was to come in next year, I would most definitely change my mode of transport!

Herod
18th Jun 2019, 15:32
Aircraft are much easier to fly than cars are to drive .

Errr...on what do you base that? A pilot is thinking and acting (or at least monitoring) in 4 dimensions (the conventional 3 plus time). A car driver is only in 2. In fact, if you accept that a car is following a road, he is only in 1; straight ahead.

CargoOne
18th Jun 2019, 15:58
Flight engineers (and navigators) are waving their hands towards pilots....

Meester proach
18th Jun 2019, 16:26
Absolute bollocks

seconded .

Written by someone who’s never flown an aircraft I’d guess, or a bitter wannabe

lurkio
18th Jun 2019, 16:32
Auxtank - no it definitely takes a pilot to stuff it up that badly and still get away with it.

KiloB
18th Jun 2019, 16:33
Whatever they design will have to pass ‘the Hudson Test’ and that will be a long-long time. Or do people think the bar should be set lower?

futurama
18th Jun 2019, 17:38
Whatever they design will have to pass ‘the Hudson Test’ and that will be a long-long time. Or do people think the bar should be set lower?

Not a good test. An autonomous aircraft facing the exact same situation might simply return to La Guardia (https://www.cbsnews.com/news/ntsb-sully-could-have-made-it-back-to-laguardia/) without drama.

Not to say that Sully didn't do a superb job (he absolutely did), but computers are much better than humans in such situations, i.e. problems with clear constraints where an immediate solution can be computed. There would be no hesitation, no need to query ATC about possible options, no "can we make it to Teterboro?" back-and-forth, etc., all wasting precious seconds.

OldLurker
18th Jun 2019, 18:00
I suspect the biggest hurdle will be getting a salesperson who will be capable of selling tickets on one of these a/c. ...
No, just cut the price enough and the loco pax will line up.

Of course, pilotless is just the start. Look how much we could save by going cabin-crewless too!

20driver
18th Jun 2019, 18:00
Errr...on what do you base that? A pilot is thinking and acting (or at least monitoring) in 4 dimensions (the conventional 3 plus time). A car driver is only in 2. In fact, if you accept that a car is following a road, he is only in 1; straight ahead.
One thing you could say is: if airspace is totally controlled, the advantage in autonomous flying is that all the aircraft are under the same control system, going to a small number of known destinations on a coordinated and published schedule. Commercial aircraft can be relied upon to obey the rules of the road, so to speak. The challenge in cars is blending autonomous cars, which will presumably be acting rationally (a big if!), with drunk texting yahoos taking selfies, sheep wandering across the road, etc.
Of the two, autonomous aircraft will be "easier" to develop, but the consequences of failure are more severe. In cars you could be right 99% of the time and still be much safer than what we have. In aircraft the threshold is much higher because you start with a much higher bar. The public seems to accept carnage on the road, but not in the air.
In the end I very much doubt it will be worth getting rid of pilots, for all sorts of reasons, public acceptance being the biggest. I can think of a lot of good reasons to get rid of drivers! I personally think fully autonomous vehicles will be limited to well-defined infrastructure.
20driver
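The point above about thresholds can be put in rough numbers. A back-of-the-envelope sketch (all baseline figures here are made up purely for illustration, not real accident statistics): the reliability an automated system must reach depends entirely on the baseline it has to beat, and aviation's baseline is orders of magnitude stricter than driving's.

```python
# Illustrative sketch: the per-trip success rate an automated system needs
# in order to merely match an existing baseline accident rate.
# All baseline numbers below are ASSUMED for illustration only.

def required_success_rate(baseline_accidents_per_million: float) -> float:
    """Minimum per-trip success rate needed to match the given baseline."""
    return 1.0 - baseline_accidents_per_million / 1_000_000

# Hypothetical baselines: roads see far more accidents per trip than
# commercial aviation (exact figures vary widely by study and definition).
road_baseline = 100.0   # accidents per million car trips (assumed)
air_baseline = 0.2      # accidents per million flights (assumed)

print(f"Car automation must beat {required_success_rate(road_baseline):.6f}")
print(f"Aircraft automation must beat {required_success_rate(air_baseline):.8f}")
```

With these made-up numbers, car automation "only" needs four nines per trip, while aircraft automation needs nearly seven, which is the "much higher bar" 20driver describes.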

DaveReidUK
18th Jun 2019, 18:19
Whatever they design will have to pass ‘the Hudson Test’ and that will be a long-long time. Or do people think the bar should be set lower?

Perhaps 'the AF447 test' ?

CargoOne
18th Jun 2019, 18:28
Airbus has been doing this quietly for more than 10 years now. It is the only possible breakthrough that can improve flight safety further. And by the way, the AF447 case would have been happily resolved on such a new kind of aircraft, just by applying pre-programmed binary logic.

BSD
18th Jun 2019, 18:28
A terrorist's dream, perhaps?

Vulnerable to hacking, with potentially utterly appalling consequences.

A real live person (preferably 2) at the helm when I fly please.

BSD.

CargoOne
18th Jun 2019, 18:35
A terrorist's dream, perhaps?

Vulnerable to hacking, with potentially utterly appalling consequences.

A real live person (preferably 2) at the helm when I fly please.

BSD.

you must be the one who is avoiding LGW and FRA inter-terminal trains?

CurtainTwitcher
18th Jun 2019, 21:47
I'd like to know how a fully automated system would have dealt with the Cathay CX780 fuel contamination (https://www.cad.gov.hk/reports/2%20Final%20Report%20-%20CX%20780%202013%2007%20web%20access%20compliant.pdf), with one engine stuck at high thrust and the other at idle. How do you program that scenario? OK, you don't: you have remote control. But then it's not actually fully "automated"; you have just moved the human decision maker to a different location, still completely capable of making a Human Factors screw-up. Humans have been covering for, and saving, computers in aviation for a long time, and the manufacturers may not even be aware of the extent and nature of this problem. See the Therac-25 report below, where the fail-safe mechanical interlock operated in opposition to the software-commanded lethal dose and protected patients. Two humans are the fail-safe in aviation; we save a lot more than we kill.

The history of computer accidents is replete with Human Factors screw-ups; they just occur in the coding cubicle, not at the interface with the real world. A close read of the first documented computer accident, the Therac-25 (http://sunnyday.mit.edu/papers/therac.pdf), and some of Nancy Leveson's other work comparing the introduction of software with the accidents of the high-pressure steam era, and how to encourage public confidence, should give anyone pause for thought about the future of aviation automation. Her contention is that software is the laggard, and its reliable operation is subject to enormous, drum roll please, Human Factors.

As Leveson sagely notes, the steam business couldn't move forward until regulations had caught up with the boiler makers' technological advancements, and those regulations were driven by public outrage at the deaths and maiming caused by poor-quality products.

A second reason for the number of accidents was that engineers had badly miscalculated the working environment of steam engines and the quality of the operators and maintainers. Most designs for engines and safety features were based on the assumption that owners and operators would behave rationally, conscientiously and capably. But operators and maintainers were poorly trained, and economic incentives existed to override safety features in order to get more work done. Owners had little understanding of the workings of the engine and the limits of its operation.


We have already had an inkling of the public's tolerance for accidents in this sphere with the 737 MAX: two. Two accidents of a pilotless aircraft and the entire effort will be put in jeopardy. There are enormous risks in actually proceeding toward a commercial product.

Loose rivets
18th Jun 2019, 23:34
Absolute bollocks

Is that your considered opinion? :}

I made a wisecrack years ago about full automation, with a couple of actors up the front . . . and then added, oh, there already is. Don't worry, just based on jealousy.

The reality is, aircraft of the future will set out with a computer driving that knows the details of every tall thing in the world, the weather everywhere, and a full memory, not only of every accident but the post accident analysis. It will be one smart cookie. Two, better make that two smart cookies, plus a spacesaver in a box somewhere.

PerPurumTonantes
18th Jun 2019, 23:37
Absolute bollocks
Not quite. He does have a point. In some ways it's easier to write software to fly an aircraft than to drive a car. There is very little to crash into up in fresh air. There are endless things to crash into on your average street. And that lady with a pram isn't broadcasting ADS-B, so good luck programming your collision avoidance.

Loose rivets
18th Jun 2019, 23:45
There is very little to crash into up in fresh air.

Oh yes. My addendum above didn't mention the projected vectors of every flight in the world.


50 years ago there was a story of a transatlantic flight made with only one human aboard: a press reporter. It was repeated for so long that I began to wonder if there was any truth in it, but I doubt it; they would have been using valves/tubes in that era (the era of the flight).

Lord Farringdon
19th Jun 2019, 01:18
Single-pilot operation? Possibly, as technology advances. Single occupant behind a locked door? Cross me off the passenger list.

You've just described high-speed trains carrying hundreds of passengers at a time. Although, if the driver of one of those decided to punch through a red signal, auto-braking systems would engage and can't be overridden. Everyone lives and the driver is carted off by the police. Anyone ever considered why we don't have driverless trains? I guess the point is that just because we have the technology to automate our transport systems doesn't mean it is the most sensible thing to do. The fact is that automated systems all have failures at some point, whether of power, sensors, actuators, command signals or an external influence. Of the latter, it is impossible to program a response to an unknown event, and because of that it is simply unacceptable to put hundreds of people's lives at risk on a pilotless or single-pilot commercial airliner. How many times could the collective experience of this forum recount events where, if it wasn't for the skill of the crew in managing a dynamically changing, multi-headed emergency, all would have perished? Humans don't have artificial intelligence, thank God! This salesman for Airbus can't be serious, can he?

Theaviator10101
19th Jun 2019, 01:18
Single-pilot operation? Possibly, as technology advances. Single occupant behind a locked door? Cross me off the passenger list.

Me too, I wouldn't fly on a Single Pilot Commercial Jet.
Maybe a ground based pilot alongside the single pilot, checking that everything is done correctly?
But then why not just have two pilots in the cockpit?

Loose rivets
19th Jun 2019, 01:37
Of course, pilotless is just the start. Look how much we could save by going cabin-crewless too!

And of course standing room only. This is where MOL enters stage right dressed as a pantomime horse.

What is funny is the press leaping on a story like this, even to have a sketch of the folk all lined up holding onto straps.

tdracer
19th Jun 2019, 01:54
I've posted variations of this many times - I have little doubt that we'll eventually end up with pilotless commercial aircraft. I also foresee a future where not only are fully autonomous cars common, driving a car yourself will be expressly banned aside from a few areas set aside for dinosaurs like me that actually enjoy a brisk drive through the countryside.
That being said, I also believe we are still decades away from that future - far enough I doubt I'll live to see it.
Airbus said a lot of stuff in that press conference that I have issues with. I wonder if he'd actually talked to his engineers before spouting off about going completely to hydrogen-powered aircraft. I know people who have looked at that, and the problems are tremendous, especially where to put the fuel. In order to have a reasonable density of hydrogen, it needs to be liquid, which means really, really cold, and even then the density is so low that it takes a massive volume to fly even a few thousand miles. You're not just going to stick it in the wings... You're talking a massive volume that needs to be kept cryogenically cold for long periods of time. Materials that can handle that level of cold for extended periods, reusable thousands of times, are also a problem. Not to mention all the inefficiency in using a cryogenic fuel, as well as obtaining large quantities of hydrogen in an environmentally friendly manner (currently most hydrogen fuel is created by stripping the hydrogen off hydrocarbon fuels).

ACMS
19th Jun 2019, 02:03
There have been a few "automation" errors that were only saved by the crew. The confused computers could not have fixed the problem; indeed, they caused it.....

So then what? You'll have yet another computer monitoring the other computers' output?

atakacs
19th Jun 2019, 02:17
There’s been a few “automation” errors that were only saved by the crew. The confused computers could not have fixed the problem, indeed they caused it.....

So then what? You’ll Have yet another computer monitoring the other computers output?
Just as there have been many crew errors fixed by automation.
As long as you can feed it reliable data, automation already works very well and can only improve further. Of course absolute perfection will probably never be achieved, but pilotless aircraft will happen, and I'd wager sooner than most expect.

futurama
19th Jun 2019, 03:46
Anyone ever considered why we don't have driverless trains?
Oh but we do. For trains, Fully Automated Operations (FAO) have been the reality since Kobe New Transit in Japan achieved Grade of Automation 4 -- the highest possible -- way back in 1981.

FAO trains have no drivers at all. Human attendants (if present) are mainly for customer service. Many of these trains operate 24/7 as part of the most complex systems in the world (typically large city metro / commuter light rails) and achieve the absolute highest safety records.

A prime example is the Copenhagen Metro, which was designed to be fully automated from day 1. At peak the Metro carries 12,000 passengers per hour. It was awarded the World's Best Metro for three years in a row (2009-2011).

In fact if you visit Airbus in Toulouse you might notice that all of the metro trains in the city are fully automated. Today FAO GoA 4 trains operate in 40+ cities / 20 countries around the world (https://en.wikipedia.org/wiki/List_of_automated_train_systems), including the Sydney Metro Northwest line that just opened last month.

KRviator
19th Jun 2019, 04:09
at that time the boats and the trains will likely be automated too
Yes, but they are not so dependent on the law of gravity
Tell that to BHP... (https://www.abc.net.au/news/2019-01-22/iron-ore-train-derailment-inflicts-heavy-financial-blow-on-bhp/10737426).

You've just described high speed trains carrying hundreds of passenger at a time. Although, if the driver of one of those decided to punch through a red signal, auto braking systems would engage and cant be over ridden. Everyone lives and the driver is carted off by Police. Anyone ever considered why we don't have driverless trains?
You do. Many systems are driverless. Even leaving the drivers on board, you are still reliant on automated signalling systems, supposedly fail-safe, and they still have problems. As Futurama pointed out, Sydney just opened their new Metro system and is planning on extending it into the southwest in years to come, with contracts being let to commence the installation of ATO-capable signalling on the heavy-rail network.

Rio Tinto is moving some of the heaviest trains in the world thousands of miles daily, with no drivers within cooee. They are not without their teething problems, but the system appears to be proving itself reliable enough to continue with thus far. If it were so bad or unreliable that it was hurting their bottom line, they would get rid of it in a heartbeat and reintroduce manned trains.

Chris2303
19th Jun 2019, 04:21
On the flight deck one human, one dog.

Dog bites human if he touches anything

Smythe
19th Jun 2019, 04:24
Interesting thoughts, but really, think it through...

On a CAT III autoland, what exactly does the crew have to do? Drop the gear?

I watch landings all the time; who can't tell when the a/c lands on auto vs flown by a pilot?

On DEP, once weight is off wheels... again, retract the gear?

There are fully autonomous aircraft, some sizable ones, including helos, fuelers, and armed with ****, that fly sorties 24/7.....

Looking at that AB concept, with the driver in the front baggage area, kicking back, surrounded by screens, I would love to kick back in that space with that view...

We learn to fly in the sim, but supposedly cannot fly by large screens in front of us... really, sign me up!

Really, I am telling you, especially with drivers of experience: forget the whole BS of getting to the airport, the duty hours, and the rest of the bull****; sign on remote, see the same thing, well, even more, and fly it remote...

(why do you think there is an ADSB-In port?)

Look at the positives...

petrichor
19th Jun 2019, 04:59
Not a good test. An autonomous aircraft facing the exact same situation might simply return to La Guardia (https://www.cbsnews.com/news/ntsb-sully-could-have-made-it-back-to-laguardia/) without drama.

Not to say that Sully didn't do a superb job (he absolutely did), but computers are much better than humans in such situations, i.e. problems with clear constraints where an immediate solution can be computed. There would be no hesitation, no need to query ATC about possible options, no "can we make it to Teterboro?" back-and-forth, etc., all wasting precious seconds.

Have you read the report you referenced? "An immediate decision would have been required to complete a successful turnback, and even that was questionable" (paraphrasing). An immediate decision isn't necessarily the most appropriate one. Scenario: a pilotless aircraft has to force-land in a field a) full of children or b) full of animals. Which would it choose? I know what a human would do...

futurama
19th Jun 2019, 05:43
Have you read the report you referenced? "An immediate decision would have been required to complete a successful turnback, and even that was questionable (paraphrasing) ".

That's my whole point. :ugh:

An "immediate decision" for a human being is an eternity for a computer. By the time Sully noticed something was amiss, an autonomous system could have considered thousands of scenarios, computed the best course of action (having the highest probability of success), and initiated a safe turn back to La Guardia, all without much drama. This is exactly what computers are good for.

CurtainTwitcher
19th Jun 2019, 07:35
That's my whole point. :ugh:
This is exactly what computers are good for.
What exactly are they good at? The leading type of AI, neural nets, are exceptional at taking a closed problem and solving it, e.g. AlphaGo (https://deepmind.com/research/alphago/) and AlphaGo Zero (https://en.wikipedia.org/wiki/AlphaGo_Zero) (AGZ). However, these problems were not solved on the fly; they took extensive computational resources. AGZ generated its own training data in a 40-day process of playing against itself.

However, neural nets need extensive clean training data, either from the real world or from simulation. There was no extensive corpus of air returns with double engine damage for A320s, unlike the millions of recorded games of Go available for AlphaGo's training. Even Sully's effort represents a single instance that is effectively useless for future algorithmic training. Billions of simulations would be necessary just to replicate and solve this exact scenario on this day. For self-driving cars they actually model intersections and run billions of simulations to generate the training data that enables self-driving: Inside Waymo's Secret World for Training Self-Driving Cars (https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/).

There are many things that computers can do exceptionally better than humans, but solving novelty is not one of them using the current leading AI technology.
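The Waymo-style approach CurtainTwitcher describes can be sketched in miniature. A toy example (the "simulator" and all its numbers are invented for illustration): sweep scenario parameters programmatically to mass-produce labelled training cases, precisely because real-world examples of rare events like a dual engine failure at low altitude are far too scarce to learn from.

```python
# Toy sketch of simulation-driven training-data generation: vary scenario
# parameters over a grid to build a corpus, instead of waiting for rare
# real-world events. The "simulator" below is a crude invented stand-in.

import itertools

def simulate(altitude_ft: float, airspeed_kt: float, wind_kt: float) -> dict:
    """Stand-in for a flight-dynamics simulator (hypothetical physics)."""
    # Crude proxy: glide range grows with altitude, shrinks with headwind
    # (airspeed is recorded but ignored in this toy formula).
    glide_nm = (altitude_ft / 1000) * 2.5 * (1 - wind_kt / 200)
    return {"altitude_ft": altitude_ft, "airspeed_kt": airspeed_kt,
            "wind_kt": wind_kt, "glide_range_nm": round(glide_nm, 2)}

# Sweep the parameter grid to mass-produce labelled scenarios.
altitudes = [2000, 3000, 5000]   # ft
winds = [0, 10, 20]              # kt headwind
corpus = [simulate(alt, 220, w) for alt, w in itertools.product(altitudes, winds)]
print(len(corpus))  # 9 simulated scenarios from a 3x3 grid
```

A real pipeline would run billions of such cases through a high-fidelity simulator; the point of the sketch is only the shape of the approach: data is manufactured, not collected.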

Gove N.T.
19th Jun 2019, 07:39
you must be the one who is avoiding LGW and FRA inter-terminal trains?

Perhaps include the Docklands Light Railway in London, which carries more than 119 million people a year over a 24-mile network and has an on-time departure reliability rate of 99% with no driver, just a computer. Yes, it has a fixed trajectory, which reduces the chance of straying. This is the future of all rail travel. Yes, I know it's a flying forum, but planes now won't work safely without computers, properly programmed of course.

LOONRAT
19th Jun 2019, 07:42
'Welcome to the first computer-flown transatlantic crossing. Let me assure you that although we have no pilots aboard, our triple computer system and advanced programming guarantee you a safe and pleasant flight to New York. Be assured that nothing can go Wrong Go Wrong Go Wrong Go Wrong Go Wrong >>>>>>>>>>'

fergusd
19th Jun 2019, 07:47
Errr...on what do you base that? A pilot is thinking and acting (or at least monitoring) in 4 dimensions (the conventional 3 plus time). A car driver is only in 2. In fact, if you accept that a car is following a road, he is only in 1; straight ahead.

Your view is highly oversimplified. Next time you get in a car, lock the steering straight ahead and accelerate to 70 mph with your eyes shut; see how far you get . . .

Aircraft operate in a highly controlled environment (cars do not). The control system required to fly a plane does not need the complex decision-making logic that requires non-deterministic deep learning (cars do; there is no other approach today that allows a car to be driven on current roads). An aircraft's flight behaviour is mathematically deterministic; a car's is not, and the deep-learning part is non-deterministic and 'difficult' from a safety perspective. From a software and safety perspective, aircraft are much simpler to fly than cars are to drive (assuming the car is being driven in any kind of realistic scenario).

It is a very commonly held view (in the safety, systems and software communities) that full automation of aircraft and trains is far more practical than that of cars.

Airbus, unsurprisingly, concurs.

parabellum
19th Jun 2019, 07:55
Of course absolute perfection will probably never be achieved but pilotless aircrafts will happen, and I'd wager to say sooner than most expect.

It won't happen until the world's insurance markets agree to let it happen, when they feel it is an acceptable risk, especially the third party element, (pilotless aircraft involved in a mid-air collision over the CBD of a major city). The leaders in the aviation insurance market notch up a lot of air miles and will have definite opinions on this subject.
Before the skies become full of pilotless aircraft dependent on ground control, all threat from sophisticated terrorism will have to have been eliminated.
Don't hold your breath.

KiloB
19th Jun 2019, 08:01
I wish people would stop using ground-based examples of automated control to justify systems for aircraft.
There is one BIG difference. Just about any ground-based control system faced with data 'outside parameters' will contain code to 'halt and call for assistance'. Difficult to do when you are flying!
And for all those second guessing Sully; remember a big part of the decision making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do Computers/ Programmers deal with that?

George Glass
19th Jun 2019, 08:14
Yeah, right. People's enthusiasm for this fairy-land stuff is in inverse proportion to their operational experience. Physically handling the aircraft is about 5% of what an airline pilot does. I wonder if an enthusiastic PhD student has ever followed a domestic crew on, say, a 4-sector-per-day, 4-day trip and added up the critical decisions made from sign-on to sign-off. It would be an interesting exercise. Just boarding, getting the doors closed, pushing back and taxiing to the hold point would be beyond automation. Ain't gunna happen.

DaveReidUK
19th Jun 2019, 08:18
And for all those second guessing Sully; remember a big part of the decision making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do Computers/ Programmers deal with that?

Quite so.

It's not hard to envisage that a computer flying US1549 might well have decided that diving into the Hudson was the least worst option (confining any deaths or injuries to those on board), compared to the carnage that would likely ensue if the flight came down in the middle of a New York borough.

I'd rather have the guy(s) up front making that decision, rather than some anonymous programmer who's having a bad day at his/her keyboard.

CargoOne
19th Jun 2019, 08:24
And for all those second guessing Sully; remember a big part of the decision making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do Computers/ Programmers deal with that?

This is very easy, and you don't need any AI for that. Processing power equivalent to an iPhone, with pre-programmed aircraft performance and other relevant data inputs, will make a decision in a split second with no element of guessing, as it will analyse a few thousand possible scenarios and choose the best one. This is where computers are light years ahead of people.
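The kind of pre-programmed option-scoring described above can be sketched in a few lines. Everything here (the candidate options, the distances, the hazard penalties and the scoring formula) is hypothetical and purely illustrative of the idea, not how any real flight computer works.

```python
# Minimal sketch of deterministic option scoring: enumerate candidate
# actions, score each against a (toy) performance model, pick the best.
# All numbers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    distance_nm: float      # distance to the landing point
    glide_range_nm: float   # achievable glide range toward it
    hazard_penalty: float   # 0..1, assumed risk to people on the ground

def success_probability(opt: Option) -> float:
    """Crude score: glide margin over distance, minus ground-risk penalty."""
    if opt.glide_range_nm <= 0:
        return 0.0
    margin = min(opt.glide_range_nm / max(opt.distance_nm, 0.1), 1.0)
    return max(margin - opt.hazard_penalty, 0.0)

# Hypothetical options loosely inspired by the US1549 discussion above.
options = [
    Option("return to La Guardia", distance_nm=8.0, glide_range_nm=7.5, hazard_penalty=0.3),
    Option("divert to Teterboro", distance_nm=10.0, glide_range_nm=7.5, hazard_penalty=0.2),
    Option("ditch in the Hudson", distance_nm=3.0, glide_range_nm=7.5, hazard_penalty=0.1),
]

best = max(options, key=success_probability)
print(best.name)  # with these made-up numbers: "ditch in the Hudson"
```

In a real system the scenario list would number in the thousands and the model would be the certified aircraft performance data, but the evaluation loop itself is exactly this cheap, which is CargoOne's point about split-second decisions.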

Flocks
19th Jun 2019, 08:30
When I see everything I have to sort out on the ground as captain with my FO (oh, I'm so glad there are two of us, to make our short turnaround today), and that's just the ground stuff: pax problems, bag loaders, cabin crew requests... I always think the flying part is the easy part. So if a fully automated plane comes, they will need to change the whole way the plane is operated, from the gate agent to the ground staff and ATC. Given that those planes will have to operate alongside the good old-fashioned two-crew ones, I wonder how they plan to manage that transition.

Of course, they will start with a single-crew plane, but one small problem I see is this: it works when your single pilot has past experience. A few years later, how will it work with 200 hours of training and you are the captain making all the decisions (again, I'm talking about the ground)? It took me so long, watching others do it, before I was able to do it myself.

We have all been promised autonomous cars... still waiting. Even Tesla has admitted it is more difficult than they initially thought. Airbus and others are telling us how we will fly autonomous flying buses in the city. Again, it is good to talk about it and to try to develop the technology, but the public already doesn't accept low-altitude helicopters over cities (noise, among other things), so imagine hundreds of those electric autonomous machines flying all over London or New York. I'll take the bet it won't arrive any time soon (not on the schedule the marketing people give us!).

Same with the hyperloop project: if I listened to the advertisements, in a few years we would be able to travel amazingly fast. Again, I'll take the bet it won't happen that quickly.

Fly safe.

futurama
19th Jun 2019, 08:50
What exactly are they good at? The leading type of AI, neural nets, are exceptional at taking a closed problem and solving it, e.g. AlphaGo (https://deepmind.com/research/alphago/) and AlphaGo Zero (https://en.wikipedia.org/wiki/AlphaGo_Zero)
(AGZ). However, these problems were not solved on the fly; they took extensive computational resources to generate their own training data. In the case of AGZ it was a 40-day process of simulated self-play to generate the dataset.

However, neural nets need extensive clean training data, either from the real world or from simulation. There was no extensive corpus of air returns with double engine damage for A320s, unlike the millions of recorded games of Go available for AlphaGo's training. Even Sully's effort represents a single instance that is effectively useless for future algorithmic training. Billions of simulations would be necessary just to replicate and solve this exact scenario on this day. For self-driving cars they actually model intersections and run billions of simulations to generate the training data that enables self-driving: Inside Waymo's Secret World for Training Self-Driving Cars (https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/).

There are many things that computers can do exceptionally better than humans, but solving novelty is not one of them using the current leading AI technology.
You're conflating AI with machine learning and with automation.

Furthermore, you're wrong that "AI" requires millions of recorded games to learn from. An extensive corpus is not necessarily required. Read about AlphaGo Zero (https://en.wikipedia.org/wiki/AlphaGo_Zero), which used zero recorded games (hence the name). In just three days AlphaGo Zero played 5 million games against itself -- and surpassed the original AlphaGo (the version that beat Lee Sedol), winning against it 100 games to 0. The successor AlphaZero (https://en.wikipedia.org/wiki/AlphaZero) is even more impressive.

Now, AlphaGo Zero's technique is probably not suited to autonomous vehicles, but we don't even need machine learning to safely return Sully's plane back to La Guardia. For a computer, this is actually a much simpler problem, where a constraint satisfaction (https://en.wikipedia.org/wiki/Constraint_satisfaction) system combining a finite state machine with path planning (https://dzone.com/articles/how-does-path-planning-for-autonomous-vehicles-wor) would be sufficient.
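To make that concrete, here's a minimal constraint-satisfaction sketch: each candidate runway must pass every hard constraint, and a path planner would then work only on the survivors. The glide ratio, distances and runway lengths below are all invented for illustration:

```python
def satisfies_all(runway, state, constraints):
    """A runway is viable only if every hard constraint holds."""
    return all(c(runway, state) for c in constraints)

# Hard constraints (hypothetical): within glide range, runway long enough.
constraints = [
    lambda r, s: s["altitude_m"] * s["glide_ratio"] >= r["distance_m"],
    lambda r, s: r["length_m"] >= s["min_runway_m"],
]

state = {"altitude_m": 2500, "glide_ratio": 17.0, "min_runway_m": 1800}

runways = [
    {"name": "04", "distance_m": 30000, "length_m": 2100},
    {"name": "22", "distance_m": 52000, "length_m": 2100},
    {"name": "13", "distance_m": 35000, "length_m": 1500},
]

viable = [r["name"] for r in runways if satisfies_all(r, state, constraints)]
# 2500 m * 17 gives ~42500 m of glide range: runway 04 passes both
# constraints, 22 is out of range, and 13 is reachable but too short.
```

Real systems would add wind, energy management and obstacle constraints, but the shape of the computation is the same: filter by constraints, then plan a path to whatever survives.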

CurtainTwitcher
19th Jun 2019, 09:20
You're conflating AI with machine learning and with automation
Yes, I concede this point. However, an aircraft operates in a total system where some "smarts", either human or AI, commands a lower-level system to implement the automation that does the manipulation of the aircraft in space. The automation bit is relatively easy, and well-established technology.

What most people here are talking about is the smarts; that is the challenge most are alluding to. In the case of Sully, I also concede that the automation level required to implement the turn and configure for a return is relatively straightforward. What I am questioning is how to implement the AI command decision to make the turn, or take another choice, or handle any one of the thousands of possible scenarios that arise across the planet every day for takeoffs and landings with a range of mechanical malfunctions.

How do you write code for each specific scenario? Is every takeoff in a twin jet going to be running the double-engine-failure-and-return calculation throughout every takeoff and climb until a return is no longer possible? How about the CX scenario above, with one thrust lever at idle and one stuck at 75% for approach? Is the computer going to be running every known scenario constantly? There are a lot of predictable scenarios that haven't yet been seen, and even more that we can't imagine, but that will happen given sufficient time.

I am specifically talking about the implementation of the command aspects of the flying problem, and I thank you for forcing clarification of this point.

As to AlphaGo vs AlphaGo Zero: yes, I was well aware of the differences between the two, which raises more subtle points. Humans cannot follow the gameplay of AGZ in some cases, and see unprecedented and seemingly crazy moves which are completely foreign. The issue this raises is reproducibility: how does an algorithm arrive at its decision? It isn't always explainable, or in some cases reproducible, which is going to be a legal issue when the inevitable accidents occur. Any highly complex system that meets the real world, human or AI, is going to have accidents eventually.

Rated De
19th Jun 2019, 09:55
I'd rather have the guy(s) up front making that decision, rather than some anonymous programmer who's having a bad day at his/her keyboard.

Given pilots are strapped to the same machine, always pays to bet on that old horse called self interest!

Global Aviator
19th Jun 2019, 10:11
Full automation and aviation. What about the phenomena that we as pilots see out the window? The CB that is not painting on the radar, and the one that is but isn't. That wind on short final that seems to come from nowhere. So many variables that humans can and do deal with.

Feeling what is obviously wake turbulence and using initiative to avoid further, even though separation exists.

I don’t doubt it will happen, but I feel it will be a loooong way away until it’s the norm.

S speed
19th Jun 2019, 11:15
To have truly autonomous aircraft will require real artificial general intelligence, and that is decades away at best.

The flip side of that coin is that you will then be placing hundreds of human lives in the hands of something you cannot control.

Hopefully I'm pushing up poppies by the time this era comes to be.

Water pilot
19th Jun 2019, 14:00
So is the computer going to threaten to land the plane at the nearest airport if a bunch of drunken yahoos onboard start pissing on the floor and molesting the female passengers? Will 400 terrified humans who have just dropped 1000' be mollified by a computer voice saying "BE CALM. ALL IS WELL"? When the plane catches fire, do we just let the pax decide which doors to open? How long would you stay on an automated plane that was apparently stuck on the runway for reasons you don't understand? I spent four hours on one (hot) plane in Denver waiting for thunderstorms to clear; it would have been pretty damn ugly if there hadn't been some authority to both coerce us into staying onboard and assure us (falsely, it turned out) that we were leaving soon. Imagine some rumor while flying across the Atlantic that the plane is going the wrong way or that communication has been lost...

I have been on automated trains late at night in big cities, it can get a bit scary even for a guy. On the street you can avoid situations that you can't when you are trapped in a little tube, which is not a natural situation for humans.

Luke_2
19th Jun 2019, 14:06
In 15-20 years, probably - but for now just look at the total fiasco with the 737 MAX, and the Airbus crashes during this decade.

Add to this the possible hacking of the plane. A good reason not to trust the computers too much (yet).

hexboy
19th Jun 2019, 14:38
The reactions of the general public riding in an elevator (lift) which has a malfunction and does its own thing when buttons are pushed should give a clear indication of how accepting people are of machinery which malfunctions with no one controlling it.
It moves up and down on rails in a concrete shaft so what could possibly go wrong and be scary about that?

SARF
19th Jun 2019, 17:24
I’m sorry Dave. I’m afraid I can’t do that

Auxtank
19th Jun 2019, 18:57
I’m sorry Dave. I’m afraid I can’t do that

HAL, shut the f*ck up, I've kicked the tyres now you light the Godamn fires and let's get the hell out of here!


https://cimg6.ibsrv.net/gimg/pprune.org-vbulletin/640x361/05282014hal_ce5583ddf873c1e2832b2b58b6df22d20f04c333.jpg

PropPiedmont
20th Jun 2019, 01:03
How many airframes & lives have been saved by human pilots recognizing various types of runway incursions? Delaying a rotation or rotating early? Going around? How many pilots have saved engines and tires on aircraft while avoiding FOD while taxiing? How many ATC controllers have made errors that a human pilot trapped, preventing an accident? What about enroute weather considerations? There are so many different situations that can occur in aviation and it’s human pilots, making human decisions that keep it safe.

Technology and hacking is not what will prevent pilotless airliners, it’s the required human interface that will.

Bend alot
20th Jun 2019, 01:57
In 15-20 years, probably - but for now just look at the total fiasco with the 737 MAX, and the Airbus crashes during this decade.

Add to this the possible hacking of the plane. A good reason not to trust the computers too much (yet).
The 737 MAX fiasco happened because MCAS needed to be introduced because pilots are in the cockpit - in automated flight (autopilot) MCAS is not required.

Airbus crashes in the last decade? Nothing really stands out there where pilots would have saved the day.

It is far easier to protect against hacking/hijacking on an aircraft without a cockpit. If a ground-based change to the flight plan were required, it could have a delay-and-report mode - the main flight data, with allowable deviations, could sit on a number of hardware drives placed on the aircraft in several external locations by ground staff for the day's planned flights.

Water pilot
20th Jun 2019, 03:43
The early successes with AI in chess programs led to a false confidence that we would have machine intelligence shortly thereafter. It was decades after the first reasonable chess-playing program before we had any kind of decent natural language recognition, which is something that three-year-olds master. Go is a much more complex game than chess, and it was quite an accomplishment to create programs that could win at it, but even so it is not a very good analogy for real life. Games are significantly easier to create learning networks for than real life, because with a game you have perfect knowledge of your current state (no failed AOA sensors) and a very deterministic outcome: you either win the game, or you lose it.

There are many challenges to the problem. Pattern recognition is one: it is easy to recognize a pattern in a game, a little harder to recognize a pattern in a picture, and I have no idea how you recognize the pattern of what your jet feels like when it is hit by wake turbulence on takeoff -- but I am sure all of the real pilots here 'instinctively' recognize that pattern and can distinguish it from the feeling of taking off from a wet runway, or in a crosswind, or what it feels like if a tire blows out on takeoff (if that is something that you practice in the sim.)

Weighing the outcome is another major difference between game play and real life. The player with the most enclosed spaces wins the game of Go, so it is pretty simple to score: when the ending condition has been met, count up the enclosed squares, and the winner is the one with the most of them. In an excellent post earlier, a poster brought up the "Sully" question: with no engines, is it better to try to return to the airport with a nonzero probability of accidentally recreating 9/11, or is it better to try a water landing? How do you score the neural network's decision? If 3 times out of 10 the "return to base" scenario kills 1,000 people on the ground, is that a failure? What about 3 times out of 100? What are the weather conditions? What if you are an American plane in this situation over Moscow at a time when the US and Russia are on the brink of nuclear war? Does it make a difference if the plane is full of Mexicans, or if Mitch McConnell is onboard? The neural network will faithfully reproduce whatever value decisions you make (which is one of the real dangers of using AI for police and military work.)
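That scoring question can be sketched as an expected-loss calculation. Every probability and loss value below is invented, which is exactly the point: the weights someone chooses, not the algorithm, determine the "right" answer.

```python
def expected_loss(outcomes):
    """outcomes: list of (probability, loss) pairs for one action."""
    return sum(p * loss for p, loss in outcomes)

# How heavily do ground casualties weigh against occupant risk?
ground_weight = 1000  # purely illustrative

# Action A: turn back. 90% success (no loss), 10% it comes down in a
# populated area (occupants plus people on the ground).
loss_return = expected_loss([(0.90, 0), (0.10, 150 + ground_weight)])

# Action B: ditch. 95% survivable ditching (small loss), 5% severe
# outcome confined to the occupants.
loss_ditch = expected_loss([(0.95, 10), (0.05, 300)])

choice = "ditch" if loss_ditch < loss_return else "return"
# With ground_weight set to 0, the very same code picks "return": the
# value judgment lives entirely in the weights, not in the maths.
```

Whoever sets `ground_weight` is making exactly the ethical call the post describes, just in a config file instead of a cockpit.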

There is also the question of transparency. Do the people onboard have the right to know about these value judgments? I'd certainly like to know if the automatic plane is programmed to self destruct if a failure occurs over a populated area. With a human pilot, I can be fairly sure that the decisions that are made will closely match the decisions that I would make in the same situation, since the pilot shares my fate. A computer pilot doesn't care about survival at all, and a remote pilot knows he is going home at the end of the day no matter what the outcome for the passengers.

It is a tricky issue, and I think it will be a long time before the public cottons onto self driving airplanes, and by that time we may not be able to fly anymore anyway. The problem with the AI approach is that it only works with perfect humans, of which there have been very few in history. The self driving Uber car sounds great, but who wants to get into a car in the morning that got puked all over last night?

moosepileit
20th Jun 2019, 04:05
How do you integrate manned and unmanned aircraft into and out of the same runways at current capacity flow?

The controlled chaos of max-rate landings and departures at KEWR and similar comes to mind.

Will ATC have to change? Who will fund that?

Is there enough bandwidth at enough rate globally? If not, who will pay for this? CPDLC is not fast enough for terminal operations, is it? The latest and greatest is probably not up to the task.

Pilotless planes today are not autonomous: someone does the takeoff and landing locally, and those that are more automated get a huge airspace restriction around their launch and recovery.

In an unmanned combat vehicle, the weight that was the crew and life support systems becomes increased payload and firepower - do we see those first? Pilot, seat, etc. weigh the equivalent of a few more missiles, and there are fewer physical limitations in play.

Pilotless passenger and even cargo aircraft will be clean-sheet designs, yet will still need all the air conditioning and pressurization systems, so they have to pay for themselves just in labor costs, not weight saved and payload gained. Attractive in principle, but someone new gets to work all the unglamorous bits of the duty.

Each paper gain comes with fresh vulnerabilities.

How will single pilot come into play? Will that also be a fresh generation of aircraft? How will that interact with ATC and bandwidth needs? Who funds that stepping stone? Customers? Governments?

bill fly
20th Jun 2019, 07:36
...set aside for dinosaurs like me that actually enjoy a brisk drive through the countryside

Is that in an MG TD, Racer?

^_^

Nialler
20th Jun 2019, 13:17
How do you integrate manned and unmanned aircraft into and out of the same runways at current capacity flow?

The controlled chaos of max-rate landings and departures at KEWR and similar comes to mind.

Will ATC have to change? Who will fund that?

Is there enough bandwidth at enough rate globally? If not, who will pay for this? CPDLC is not fast enough for terminal operations, is it? The latest and greatest is probably not up to the task.

Pilotless planes today are not autonomous: someone does the takeoff and landing locally, and those that are more automated get a huge airspace restriction around their launch and recovery.

In an unmanned combat vehicle, the weight that was the crew and life support systems becomes increased payload and firepower - do we see those first? Pilot, seat, etc. weigh the equivalent of a few more missiles, and there are fewer physical limitations in play.

Pilotless passenger and even cargo aircraft will be clean-sheet designs, yet will still need all the air conditioning and pressurization systems, so they have to pay for themselves just in labor costs, not weight saved and payload gained. Attractive in principle, but someone new gets to work all the unglamorous bits of the duty.

Each paper gain comes with fresh vulnerabilities.

How will single pilot come into play? Will that also be a fresh generation of aircraft? How will that interact with ATC and bandwidth needs? Who funds that stepping stone? Customers? Governments?
I've worked for nearly forty years in IT, having taken a doctorate along the way.

Systems routinely encounter critical failures. That's the way they are. Not simply that, but they must be maintained. Situations occur which require a patch; the specific situation was not considered at the design stage.

I've worked exclusively on mainframes. With at least one vendor, the sequence numbers of PTFs rolled past 99,999. A PTF is a Program Temporary Fix.

Now, you're an airline operator. A mail arrives. It links to a PTF. It is marked "HIPER", meaning high-impact and pervasive. The software support contract with your supplier specifies that it must be applied or your support lapses.

Do you apply it immediately to your fleet and let them fly immediately? What if it introduces a different aberrant behaviour?

The problem is that it might not be just one of your fleet which fails. It could be many.

I must repeat that if you don't maintain software currency your system *will* in time fail. Other users will encounter failures not yet suffered by you, and it may involve circumstances that you may meet in the future.

Hell, systems developed over decades to process something as simple as a bank balance fall over.

It's not simply that I wouldn't fly in an autonomous plane; I would refuse to have any part in any such development project.

Don't get me started on AI.

homonculus
20th Jun 2019, 14:01
Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......

Nialler
20th Jun 2019, 15:12
Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......
My experience of it is that there is a lot of probability involved. Bayes' stuff. I backed away from it the moment I first encountered it.

In the drive to autonomous operation, the issue with the MAX stands as a lesson. The key word in AI is that word "Artificial".

OPENDOOR
20th Jun 2019, 15:12
How will that interact with ATC and bandwidth needs?

The answer to that is probably here; https://en.wikipedia.org/wiki/Starlink_(satellite_constellation)

As far as pilotless passenger aircraft go I suspect that by the time we have an AI based computer that can handle any event as well as a human pilot it will have bought itself a big watch and done away with humanity.

Auxtank
20th Jun 2019, 15:22
Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......

This is the best examination I've read so far - and I've done a lot of reading on it.


https://cimg3.ibsrv.net/gimg/pprune.org-vbulletin/181x279/unknown_648cec53f924ff5ffefae7a95a4873fb55195648.jpeg

Nialler
20th Jun 2019, 15:35
Someone mentioned earlier the trolley problem. It was apt. Your child is in that field beside the field with ten people. Who decides on the decision to crash into that field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be decided on in committees? Will these have joint liability?

Auxtank
20th Jun 2019, 15:42
Someone mentioned earlier the trolley problem. It was apt. Your child is in that field beside the field with ten people. Who decides on the decision to crash into that field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be decided on in committees? Will these have joint liability?

That problem is examined and answered so well in that book above as he mulls over Asimov's Three Laws.

a_q
20th Jun 2019, 15:56
I can see it now...

Currently -
Both pilots get food poisoning, are ill, hostess comes into cabin "Can anyone fly a plane???"

In 20 years time -
Computer gets Blue Screen of Death, hostess comes into cabin "Can anyone program a computer - to fly a plane???"

Auxtank
20th Jun 2019, 16:59
I can see it now...

Currently -
Both pilots get food poisoning, are ill, hostess comes into cabin "Can anyone fly a plane???"

In 20 years time -
Computer gets Blue Screen of Death, hostess comes into cabin "Can anyone program a computer - to fly a plane???"

Maybe not in twenty years time but eventually the answer will be;

Passenger comes forward with his own computer..."No, but this computer here can fly the plane."

DaveReidUK
20th Jun 2019, 17:12
Would someone kindly define AI for me?

Computers making decisions that have real-life consequences.

A bit like autopilots have been doing for the last 60-odd years. :O

Auxtank
20th Jun 2019, 17:20
Computers making decisions that have real-life consequences.

A bit like autopilots have been doing for the last 60-odd years. :O

And there it is - the crux of the matter.

Computers making decisions? - or dumbly following programmed code.

There's an awfully big difference.

Your AP is an example of the latter btw.

Autopilots can't author their own algorithms. Therefore they are dumb followers of programmed code.
Making them no more "intelligent" than your washing machine.

Fusibleplugg
20th Jun 2019, 17:55
What happens when the sensors send the autopilots a load of conflicting information? The autopilots kick out and hand the a/c over to you.
What would a pilotless a/c do in that situation?

CargoOne
20th Jun 2019, 18:33
Someone mentioned earlier the trolley problem. It was apt. Your child is in that field beside the field with ten people. Who decides on the decision to crash into that field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be decided on in committees? Will these have joint liability?

You have been reading too many newspaper articles titled "brave pilots struggled at the controls in a desperate attempt to avoid an orphanage". In real life no one is avoiding anything; at most it is a matter of choosing a place that is more suitable than another, and even that is extremely rare in real life. Usually it is a boring CFIT.

DaveReidUK
20th Jun 2019, 18:55
Autopilots can't author their own algorithms.

Happily, that's true.

AI, on the other hand does indeed "author its own algorithms".

In other words, when a flight under the control of AI meets a situation that hasn't specifically been foreseen by the programmers, nobody can predict WTF it's going to do. :O

Auxtank
20th Jun 2019, 19:00
Happily, that's true.

AI, on the other hand does indeed "author its own algorithms".

In other words, when a flight under the control of AI meets a situation that hasn't specifically been foreseen by the programmers, nobody can predict WTF it's going to do. :O


And, equally happy as you: that hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.

ATC Watcher
20th Jun 2019, 20:44
How will that interact with ATC and bandwidth needs? The answer to that is probably here: https://en.wikipedia.org/wiki/Starlink_(satellite_constellation).
The problem with that is that we are slowly building a single point of failure: the communications with sats. Signals from sats are very weak and can easily be jammed, spoofed or hijacked. Relying only on sats for navigation/position, communications, separation (anti-collision) and now autonomous operations is not a very good option, especially in a world that is getting more dangerous each day.
At Le Bourget this week, in various panel discussions, the emphasis seemed to be on terrestrial 5G or even 6G data transfer using clouds based in strange places like the Arctic. We need a backup to satellites, and to think differently.

kkbuk
20th Jun 2019, 21:21
And, equally happy as you: that hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.

I usually spread my breakfast marmalade on the toast rather than on the toaster.

Rated De
21st Jun 2019, 00:03
The chief salesman of Airbus

Is there any need to expand his 'knowledge' on the difference between sales pitch and reality?

Groaner
21st Jun 2019, 04:24
More likely is a hybrid approach first, with a remote pilot.

Because we already have that, with military drones. I'm unaware of their failure rate, but I'd imagine it would be improving as experience is gained.

There's objections about things like up/down link security, but I'd imagine the typical drone is of huge interest to very well-resourced organisations that presumably are willing to try and degrade or snoop. Whilst there may have been instances of same, I suspect they have been patched pretty quickly.

The only really big barrier to similar control over pax operations is passenger psychology.

tdracer
21st Jun 2019, 04:42
Currently, the FAA is on record - in writing - that they will not permit or certify any flight-critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I took the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell-phone based). After I picked up the car, it had turned into a gutless wonder - the engine was literally so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turns out one of the others had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalog every single known survivable failure, and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), and determine whether it was feasible to land at an airport or a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.
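The "catalog every known survivable failure" step above amounts to a static lookup table mapping diagnosed failure states to pre-validated responses. A toy sketch (all entries invented):

```python
# Hypothetical playbook: (failure, detail) -> pre-validated response.
PLAYBOOK = {
    ("engine_fail", "single"):  "continue_on_remaining_engine",
    ("engine_fail", "all"):     "evaluate_glide_options",
    ("pressurisation", "loss"): "emergency_descent",
    ("gear", "unsafe"):         "manual_extension_procedure",
}

def respond(failure, detail):
    """Look up the pre-validated response for a diagnosed failure."""
    # Anything absent from the table is exactly the hard case described
    # above: no pre-computed answer exists for it.
    return PLAYBOOK.get((failure, detail), "no_precomputed_response")

print(respond("engine_fail", "all"))      # evaluate_glide_options
print(respond("uncontained", "cascade"))  # no_precomputed_response
```

The table handles the cataloged cases in microseconds; a Qantas 32-style cascade of semi-random secondary failures falls straight through to the catch-all, which is where the real difficulty lives.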

bill fly
21st Jun 2019, 05:01
I usually spread my breakfast marmalade on the toast rather than on the toaster.

Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD-80. When I asked him what he was up to he answered, "Now we have a jammed stabiliser." Took him the next half hour to clean it. AI might have been better.

futurama
21st Jun 2019, 09:34
Currently, the FAA is on record - in writing - that they will not permit or certify any flight-critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I took the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell-phone based). After I picked up the car, it had turned into a gutless wonder - the engine was literally so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turns out one of the others had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalog every single known survivable failure, and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), and determine whether it was feasible to land at an airport or a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.

Well, not really. Large classes of AI/ML algorithms are as deterministic & predictable as any "classical" algorithms.

And most systems using machine learning algorithms aren't actually "learning" (updating themselves) while being used. All the "learning" happens back in the lab while the algorithms are being modeled, trained, tuned, and validated. The resulting model (various parameters) is then "baked" into production systems.

In your BMW, for example, the AI isn't really "learning" while you're driving around. The learning already took place in Munich -- long before you bought your car -- when BMW data scientists & data engineers used machine learning to create many configuration sets (apparently including an "old lady" configuration). From time to time, perhaps once or twice a year, BMW might use new datasets to "re-train" their AI models, validate them, and provide the new updated parameters as part of the next software release. (Now, your car might be "smart" enough to notice if you prefer to drive like an old lady or an F1 driver and automatically load the appropriate configuration or adjust some variables between some well defined limits, but that's not AI).

Anyway, the bottom line is that AI systems can be "predictable" and don't substantially change between rigorously validated updates.
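A minimal sketch of that train-in-the-lab / freeze-for-production pattern - the model and data are made up, but it shows why inference on fixed parameters is fully deterministic:

```python
def train(samples, epochs=5000, lr=0.01):
    """'Lab' phase: fit y = w*x + b by gradient descent on (x, y) pairs.
    All the learning happens here, before deployment."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b  # the "baked" parameters shipped to production

def predict(params, x):
    """'Production' phase: a pure function of fixed parameters and input.
    Same parameters + same input -> same output, every time."""
    w, b = params
    return w * x + b

params = train([(0, 1), (1, 3), (2, 5)])   # learns y ~ 2x + 1
outputs = {predict(params, 4.0) for _ in range(1000)}
assert len(outputs) == 1  # inference is fully deterministic
```

Re-training on new data and shipping new parameters is the analogue of the once-or-twice-a-year software release described above.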

Related to this are the concepts of "interpretability" and "explainability". I won't go into details (here's an academic paper if one cares (https://arxiv.org/pdf/1806.00069.pdf)), but many machine learning algorithms work like a "black box", so their use may be problematic in safety-critical systems. However, not all of them work this way, and we're making great strides in making the rest "interpretable" and/or "explainable".

Global Aviator
21st Jun 2019, 09:45
Flew with a copilot once who spread his breakfast strawberry jam all over the secondary trim handles in an MD-80. When I asked him what he was up to, he answered, "Now we have a Jammed Stabiliser." Took him the next half hour to clean it. AI might have been better.

Classic!!!

homonculus
21st Jun 2019, 09:58
Would someone kindly define AI for me?

Having read all the subsequent posts, I guess not. The fact that a computer programme stores results and incorporates those results into future runs is merely machine learning. It's a derivation of what we did with mainframe computers in the 1970s - write a programme, run it, correct it, run it, repeat - except that you continue once the programme works, and now have the ability, with greater computing power, to incorporate more feedback and variables. However, it is just machine learning. It is data crunching. The computer hasn't got intelligence, or anything more than an ability to crunch data.

When an AP can decide to crash on the elderly woman and not the child because we care more for children, I will accept it is AI-enabled. Until then it is just a computer.

cattletruck
21st Jun 2019, 09:59
There's an ex-NASA/JPL engineer, now at Nissan, who regularly holds demonstrations on why autonomous vehicles will never make it. In his demonstrations he shows a library of videos of humans breaking the law to avoid fatal automobile accidents. He rightly suggests that computers could never be programmed to do what these humans did, let alone be programmed to break the law.

As long as the marketing people have control over the remaining qualified engineers on design issues, expect to hear more of this kind of autonomous-vehicle cr@p ad nauseam.

Raffles S.A.
21st Jun 2019, 10:15
I bet O'Leary was interested.

Nialler
21st Jun 2019, 14:47
That problem is examined and answered so well in that book above as he mulls over Asimov's Three Laws.
It has not been answered satisfactorily in any conference I've attended.

Computers making control decisions in life/death scenarios? They handle nominal situations very well and are used routinely in the industry.

However, they fail. They always will. They are complex. They rely on a consistent sequence of reliable events all the way from the power source through to the outputs. They rely on the expertise of the thousands of designers and coders.

I've worked on major systems for decades. Mature technologies, with massive redundancy. They still occasionally fail.

Applying a patch on reasonably placid systems performing simple functions can take a month or so as it passes through the change control/test/user acceptance cycle.

You know what? Even after that cycle of rigorous testing, it can still introduce failures.

If I were to be involved in any autonomous flight project... well, I said walk away. Maybe I wouldn't. My costs, for development and ongoing support, would far exceed those charged by a pilot, though.

Several thousand people providing 24/7 expert support (we're speaking third level rather than helpline) will not come cheap.

And I repeat the risk inherent in a single error being propagated across multiple users.

I am no technophobe. I've been in the industry for decades. I would never fly on a craft which relied entirely on code.

Nialler
21st Jun 2019, 15:10
On the issue of AI:

The systems I've examined have been good at avoiding error if that error has been encountered in the past and is part of its dataset. The issue is not avoiding predictable errors, though. The core issue is that resolving a fresh problem may require a new solution. It may require a solution exceeding the constraints and limits of the programme design.

I return to my issue with the term AI. That first word is enough for me. Artificial. Computers are brilliant at high-speed processing - millions, if not billions, of times quicker than humans. Yet as a dataset grows they suffer performance anxiety. One of the issues with 9/11 was not that the intelligence agencies had too little data in advance. The problem was that they had too much.

Armstrong had to fight on his descent to the Moon when the guidance computer's task scheduler became flooded with jobs (the 1202 executive-overflow alarms).

Amdahl's law also has a place. As you add components to a computing system (and he specified additional processing power), the chatter and handshaking between them begins to overcome their capacity to perform the role they were expected to perform. They now exist for each other. They're no longer tallying bank balances or calculating an angle of attack. They're making sure that the system is fine.
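For reference, Amdahl's law with a simple per-component coordination cost bolted on - the overhead term is an illustrative model of that "chatter", not part of Amdahl's original formula:

```python
def speedup(p: float, n: int, overhead: float = 0.0) -> float:
    """Amdahl's law plus an assumed per-node coordination cost.

    p: parallelisable fraction of the work; n: number of processors;
    overhead: coordination cost added per processor (illustrative).
    Classic Amdahl is the overhead=0 case: 1 / ((1-p) + p/n).
    """
    return 1.0 / ((1.0 - p) + p / n + overhead * n)

# With no chatter, returns diminish but never reverse:
assert speedup(0.9, 10) > speedup(0.9, 2)
# With a per-node coordination cost, adding components eventually hurts:
assert speedup(0.9, 100, overhead=0.001) < speedup(0.9, 20, overhead=0.001)
```

That last assertion is the point being made: past some size, the components "exist for each other" and total useful throughput goes down.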

IBM have designed massive database systems - CICS, IMS, DB2. 90% of the code is about resilience, recoverability and integrity. A small fraction does the job it is expected to do.

ThorMos
21st Jun 2019, 15:21
There is a lot of false information and misinterpretation buried in this thread - for instance, that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings', and the system is then predictable: put an equal system into the same situation and you get the same response. But that's completely beside the point. Are human pilots predictable? Do all human pilots react in the same perfect way, or do they fail from time to time? One example given in this thread is AF447. A computer system could be programmed to just fly pitch & power and get itself out of the critical situation. Why didn't the pilots do this? Did anybody predict this, or did the pilots behave unpredictably?
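A crude sketch of what such a pitch & power fallback might look like - the table values and the 20 kt disagreement threshold are made-up placeholders, not any type's real QRH numbers:

```python
# Memorised pitch & power targets per flight phase (illustrative only).
PITCH_POWER_TABLE = {
    "climb":   (10.0, 0.90),   # (pitch in degrees, thrust fraction)
    "cruise":  ( 2.5, 0.80),
    "descent": (-1.0, 0.55),
}

def airspeed_disagree(ias_readings, tolerance_kt=20.0):
    """Flag unreliable airspeed when redundant sources disagree."""
    return max(ias_readings) - min(ias_readings) > tolerance_kt

def fallback_targets(phase, ias_readings):
    """Return (pitch, thrust) targets, falling back to the memorised
    pitch & power pairing when the airspeed sources disagree."""
    if airspeed_disagree(ias_readings):
        return PITCH_POWER_TABLE[phase]
    return None  # airspeed agrees: keep normal speed-based control
```

Detecting *which* source is lying, and which phase you are really in, is of course the part that makes the real problem hard.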

Nialler
21st Jun 2019, 15:42
There is a lot of false information and misinterpretation buried in this thread, for instance that A.I. would make changes to its algorithms as it goes along. After training, you can fix the A.I. 'settings' and therefore the system is predictable if you put an equal system into the same situation. But that's completely beside the point. Are human pilots predictable? Do all human pilots react in the same perfect way or do they fail from time to time? One example given in this thread is AF447. A computer system could be programmed to just fly Pitch&Power and get itself out of the critical situation. Why didn't the pilots do this? Did anybody predict this or did the pilots behave unpredictable?
This gets sort of to the point. We're not comparing systems as to which one is perfect. Merely, which one is better fitted for purpose.

The joy of science is that it recognises its limitations and - more importantly - has mechanisms by which it can self-correct. A purely mechanistic approach based on rules has no such flexibility. It will fly into the cliff because the rules decreed so. A patch will be supplied. In an unexpected situation a stall will occur. A patch will be issued.

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

golfbananajam
21st Jun 2019, 15:50
And, equally happily as you, that hasn't happened yet. Because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instructional code.

Sorry, that's not AI - it's on a level with your toaster on which you spread your breakfast marmalade.


You spread marmalade on your toaster?

Auxtank
21st Jun 2019, 16:37
You spread marmalade on your toaster?

Alright guys - hilarious. I got it wrong... but it was just a slip-up with wording. (Not on the marmalade)
I apologise. I meant to say that the particular piece of AI BEING TALKED ABOUT is as intelligent as your toaster - IN WHICH YOU TOAST YOUR BREAD AND ON TO WHICH; THE BREAD THAT IS, YOU SPREAD YOUR BLOODY MARMALADE.
:O
Now, can we get back to discussing the earth-shattering dawn of AI?

(And no, I do not like blood in my marmalade)

ThorMos
21st Jun 2019, 16:46
<snip>

The AI systems I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

see what i did here?

RobertP
21st Jun 2019, 17:19
On the flight deck one human, one dog.

Dog bites human if he touches anything
Actually the human is there to feed the dog.

CurtainTwitcher
21st Jun 2019, 21:39
The humans I've seen were all based on accumulated experience. That's not of much use if you find yourself in an entirely novel one.

see what i did here?
Humans have a generalised ability to solve novel problems using accumulated knowledge, experience of nearby or similar situations, and intuition. There have been numerous instances posited in this thread where humans adapted on the fly to unanticipated or unprecedented scenarios that they had not been trained for: CX780, QF32, Sioux City, Sully and a slew of others. A good primer on the subject of intuition (and its flaws) is Daniel Kahneman's unexpected best seller "Thinking Fast and Slow (https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)", noting the work of his adversarial collaborator Gary Klein on Naturalistic decision-making (https://en.wikipedia.org/wiki/Naturalistic_decision-making).

In the interest of honest debate, humans are also completely capable of screwing it up, and strict Standard Operating Procedures have to be developed, trained and complied with to save them from themselves - so that, for the most part, pilots are required (against their will) to function as automatons. The hull-loss rate suggests we have probably optimised the hybrid between the advantages automation can provide and the human tolerance for novelty and ambiguity in "out of design" scenarios.

Can you please point to a generalised problem-solving artificial intelligence system that can solve and adapt to an unprecedented, novel scenario in real time? Because that is what humans bring to the game.

Auxtank
21st Jun 2019, 22:10
Humans have a generalised ability to solve novel problems with accumulated knowledge and experience of nearby or similar situations and intuition. There have been numerous instances posited in this thread where humans adapted on the fly to unanticipated or unprecedented scenario's that they had not been trained for. CX780, QF32, Sioux city, Sully and a slew of others. A good primer on the subject of intuition (and it's flaws) is Daniel Kahneman's unexpected best seller "Thinking Fast and Slow (https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)", noting the work of his antagonist collaborator Gary Klein on Naturalistic decision-making (https://en.wikipedia.org/wiki/Naturalistic_decision-making).

In the interest of honest debate, humans are also completely capable of screwing it up, and required strict Standard Operating Procedures has to be developed trained and complied with to save them from themselves, that for most part pilots against their will are required to function as automatons. The hull loss rate suggest we have probably optimised the hybrid between the advantages that automation can provide and human tolerance for novelty and ambiguity for "out of design" scenario's.

Can you please point to a generalised problem solving artificial intelligence system that can solve and adapt to an unprecedented novel scenario in real time? Because that is the humans bring to the game.

That is what the humans bring to the game.

That's essentially, in a nutshell, what GAI is. (GAI - Generalised Artificial Intelligence)

We're about 20 years away from that. NAND gates are where it all started, and their slow-but-steady, rising-to-exponential evolutionary growth is going to be the death of us - or our salvation.

Which of those it is - is down to us.

Start here: https://futureoflife.org/superintelligence-survey/

FlexibleResponse
22nd Jun 2019, 08:11
Who is going to turn off the stab trim cut-out switches?

Dan Winterland
22nd Jun 2019, 12:23
I was attending a conference where it was confidently stated we were only 15 years from pilotless airliners. I had to pipe up and say "double that". I was challenged, and reversed the challenge by pointing out that the statement obviously came from an engineer. This is because engineers have the confidence that they can find a solution to every issue. The FBW Airbus types are a case in point. The engineers design the ECAM procedures for every eventuality they can think of. But there are many more situations they can't anticipate. In about 8000 hours of command on these aircraft, I have had five 'significant' events, none of which were resolved by the ECAM procedures, and all of which required the pilot's systems knowledge and analysis to make the aircraft safe. One was a programming error in the Flight Warning Computer software which reported one problem but ignored a bigger issue. A similar case was the FWC reporting an issue with a system not actually fitted to our aircraft. Another was an issue with the Navigation System not seen before; another was the result of a dual failure which the ECAM couldn't resolve (the suggested course of action would have resulted in the aircraft depressurising); and the last was a sequence of events which started as an engine fire indication but led to a depressurisation because of a failure that had not been seen before and had not been considered in 25 years of the type being in service. This event led to the checklists being re-written.

Until the computers monitoring the systems have sufficient artificial intelligence to evaluate and make decisions based on the information presented, we are a long way from certifying autonomous systems in public transport. I certainly wouldn't get on one of those aircraft, and I suspect many would feel the same. And even then, I wouldn't trust the AI!

Kerosene Kraut
22nd Jun 2019, 14:26
..."Welcome onboard this is Captain Emcas flying you today"...

Ian W
22nd Jun 2019, 15:34
Who is going to turn off the stab trim cut-out switches?

No need to.
The FMS has no problem with the feel changing, which is why MCAS doesn't (didn't) operate with the autopilot engaged. MCAS was there because it was thought human pilots would be unable to fly an aircraft with a varying force required to move the elevator.

PerPurumTonantes
22nd Jun 2019, 17:58
In about 8000 hours of command on these aircraft, I have had five 'significant' events, none of which were resolved by the ECAM procedures and required the pilot's systems knowledge and analysis to make the aircraft safe. One was a programming error in the Flight Warning Computer software which reported one problem, but ignored a bigger issue. A similar case was with the FWC reporting an issue with a system not actually fitted to our aircraft. Another was an issue with the Navigation System not seen before; another was as a result of a dual failure which the ECAM couldn't resolve (the suggested course of action would have resulted in the aircraft depressurising) and the last was sequence of events which started as an engine fire indication, but led to a depressurisation because of a failure that had not been seen before...
Interested to know: out of these issues, how many would have been disasters, and how many would have been OK, if the a/c had been programmed to divert to the nearest airport and get down asap?
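For what it's worth, the "divert to nearest airport" part is the easy bit to program. A naive sketch, choosing purely by great-circle distance (a real diversion logic would also weigh runway length, weather and terrain):

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 3440.065 * 2 * math.asin(math.sqrt(a))  # Earth radius in NM

def nearest_airport(lat, lon, airports):
    """airports: {icao: (lat, lon)} of fields deemed suitable.
    Returns the closest by great-circle distance."""
    return min(airports,
               key=lambda a: haversine_nm(lat, lon, *airports[a]))
```

The hard bit - as the post above illustrates - is knowing whether the aircraft is still in a state where "fly to that point and land" is actually achievable.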


Kerosene Kraut
22nd Jun 2019, 19:08
Just look at the high crash rates of military drones - not shoot-downs. BTW, an RQ-4 is more expensive than an F-35.

Higher risk and higher cost will prevent unmanned commercial aircraft for a very long time. Maybe we'll see something like unmanned wingmen flying in formation with a manned leader one day.
Single-seat means no redundancy, and from a systems-reliability standpoint the aircraft is effectively unmanned the moment that one pilot becomes unavailable.