Self Flying Airbus
Thread Starter
Join Date: Feb 2008
Location: Wasilla, Alaska
Age: 69
Posts: 38
Likes: 0
Received 0 Likes on 0 Posts
Self Flying Airbus
https://us.yahoo.com/news/airbus-sel...120900008.html
Airbus' self-flying plane just completed successful taxi, take-off, and landing tests, opening the door for fully autonomous flight...
Also: https://www.businessinsider.com/airb...-boeing-2020-4
Nemo Me Impune Lacessit
Join Date: Jun 2004
Location: Derbyshire, England.
Posts: 4,094
Received 0 Likes on 0 Posts
Now find an insurer willing to cover not the aircraft but the passengers and third-party liability; don't hold your breath. The control of the aircraft will also have to be impervious to interference by terrorists.
Join Date: Dec 2019
Location: Copenhagen
Posts: 3
Likes: 0
Received 0 Likes on 0 Posts
The article doesn't appear to describe a self-flying exercise. Instead it appears to be an automated refuelling exercise, and the article mentions the autonomous flight earlier in the year in the A350 XWB.
Pilot error is already the most common cause of air transport accidents and is trending upwards as mechanical reliability improves, technology mitigates ATC errors (TCAS/GPWS/GPS etc), and safety management systems bear down upon maintenance errors and organisational factors. And many posters on here routinely decry the standards of training, the experience, the pay and the working conditions of the younger and/or foreign members of the profession.
Far from being reticent over full automation, I’d be surprised if insurers aren’t looking forward to it, if not actively investing in research to help bring it about. The Miracle on the Hudson, the Gimli Glider, and no doubt a couple of other notable human ‘saves’ (*) make us feel good about what we can do that machines can’t, but ultimately they’re consolation scores in what’s going to become an increasingly one-sided contest as sensing and computing advance.
* Sioux City... but artificial intelligence nowadays can 'do and learn' quickly enough that I suspect it could teach itself to fly on differential throttle just like Al Haynes did.
Last edited by Easy Street; 27th Jul 2020 at 11:03.
Join Date: Apr 1998
Location: Mesopotamos
Posts: 5
Likes: 0
Received 0 Likes on 0 Posts
I wonder what the fine print will say, deep down, when flying as pax on one of these craft. Probably something similar to a typical Windows operating system EULA, where you waive all your rights and accept the uncertified workmanship given to you as the norm forever.
Now if only all those software engineers' livelihoods were on the line, with jail time for negligence; only then would you see a real improvement in quality before release, rather than an "it's fixed in version 2.0" response.
Tesla gets around this by requiring the driver to keep their hands on the steering wheel - which isn't autonomous, though I accept that designing for road travel has considerably more challenges.
I am at a loss as to why we keep throwing millions if not billions of dollars at a task that is relatively simple, when the actual benefit in real terms proves to be quite marginal.
Of course it can be done, and unlike the dog walking on its hind legs, it can be done well. Given the level of automation available in modern aircraft, it's much easier to do a self-flying airliner than a self-driving car, because the environment is so much more controlled.
To me the puzzle is why Airbus are spending so much money on this.
Scenario 1: fully autonomous — nobody in the cockpit (maybe no cockpit). Remember the old joke about the automatic announcement saying "Nothing can go wrong ... go wrong ... go wrong ..."? However cheap the tickets were, I don't think it'd survive the first crash – and there would be a crash, because the world is imperfect and AI isn't really that clever. Murphy's law says something unexpected would be certain to happen that the automation couldn't cope with. Then parabellum's point about liability kicks in.
Scenario 2: supervised autonomous — pilot lines up for takeoff, presses a button and goes to sleep until the aircraft either lands or wakes him/her up because something odd is happening. Just feasible, but it retains most of the cost, because you've still got to have a fully-functioning cockpit with at least one competent pilot in it.
Join Date: Sep 2013
Location: 370
Posts: 94
Likes: 0
Received 0 Likes on 0 Posts
Pilot error is already the most common cause of air transport accidents and is trending upwards as mechanical reliability improves, technology mitigates ATC errors (TCAS/GPWS/GPS etc), and safety management systems bear down upon maintenance errors and organisational factors. And many posters on here routinely decry the standards of training, the experience, the pay and the working conditions of the younger and/or foreign members of the profession.
Far from being reticent over full automation, I’d be surprised if insurers aren’t looking forward to it, if not actively investing in research to help bring it about. The Miracle on the Hudson, the Gimli Glider, and no doubt a couple of other notable human ‘saves’ (*) make us feel good about what we can do that machines can’t, but ultimately they’re consolation scores in what’s going to become an increasingly one-sided contest as sensing and computing advance.
* Sioux City... but artificial intelligence nowadays can 'do and learn' quickly enough that I suspect it could teach itself to fly on differential throttle just like Al Haynes did.
Personally, I’ve seen an Airbus, as it goes from ALT* to ALT, take managed speed to MACH 0 and pull all the thrust off, with no explanation for it. I’ve had a DUAL ADR FAULT where the aircraft becomes fairly useless in terms of protecting itself. These are small examples and I’m sure there are many, many more. I also accept that in time these will be ironed out, but as long as Airbus are releasing OEBs and Boeing are installing faulty alpha-protection systems, pilots are needed as a backstop. We are faulty and full of latent errors, but at the moment, every day we make minor corrections to the automation that keep the aircraft safe.
Do I think aviation will be automated eventually? Yes. In the near future? No.
That said (and playing devil's advocate to some extent) it would be considerably less likely to make a total hash of a straightforward failure (or no failure at all) resulting in the destruction of the aircraft as has happened with human crews on more than one occasion! It's also not going to spill coffee on the avionics panel, drop its camera onto the sidestick or let its nephew have a go at the controls. (It won't try to pull the cabin crew either - although since it's French that will probably be an optional add-on! )
You'd certainly want to know how it would troubleshoot a non-standard problem - presumably there will be a data-link backup to allow input from a ground station in such cases (but this could not be guaranteed).
Join Date: Aug 2010
Location: EDDH
Posts: 11
Likes: 0
Received 0 Likes on 0 Posts
Cheap tickets shouldn't be a problem then, right? You save the cost of the cockpit crew...
I am sure they will find someone. The engineer? Maybe robots and electronics will have emotions and be humanised through artificial intelligence by then. So robots in court? Haha.
What happens after an accident when you can’t blame the pilots?
Last edited by just2010; 27th Jul 2020 at 13:18.
Join Date: Feb 2005
Location: Botswana
Posts: 890
Likes: 0
Received 0 Likes on 0 Posts
I'd immediately call bullsh1t on that one, but even if it were true, Hal wouldn’t have the local knowledge to put it down at an airfield that wasn’t in the database.
The myth of Pilot Error
Easy Street, WonderBus
Myth - an assumption about something, taken for granted rather than verified.
Cause - a verified hypothesis.
Pilot error assumes that you can both form and verify a hypothesis for human behaviour in all situations; which to date we are unable to do.
Thus humans can never be a 'cause', only a contribution to good or not so good outcomes.
The issue of human behaviour would also apply to design, build, and maintenance of the 'automation', thus autonomous operations with a higher level of safety than current operations may be an (unachievable) ideal.
Alternatively combine man and machine as a system which makes use of the best of each in normal and non normal operation.
https://static1.squarespace.com/stat...ech-Report.pdf
http://www.iploca.com/platform/conte...afetyMyths.pdf
Last edited by alf5071h; 27th Jul 2020 at 15:46.