Passengers & SLF (Self Loading Freight) If you are regularly a passenger on any airline then why not post your questions here?

Drone airlines - how long?

Old 21st Oct 2017, 16:14
  #81 (permalink)  
 
Join Date: Aug 2007
Location: Ireland
Posts: 216
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Mechta
The size together with the production & maintenance requirement appear to contradict the price bracket. For example, have a look at the new cost of a 53 seater coach (Euro 300k+).

The problem with quadcopter-type drones when scaled up, is the velocity and noise of the downwash. When V-22 Ospreys were sent to help after the Nepal earthquake, they were rapidly withdrawn when it was found their downwash just added to the destruction.
Downwash at takeoff and landing would not be a problem in normal operation if you use dedicated hard-surface landing spots as suggested, and otherwise fly high enough. And there is a lot of expensive stuff on a coach that would not be needed on a drone, like the transmission, brakes and wheels.

In all you would want to keep gadgets to a minimum to keep weight down, meaning a carbon chassis and seats, petrol engines that are cheaper and lighter per kW, and natural-draught aircon.
The lidar, radar and laser stuff will soon be cheap enough to fit on most new cars, so that won't be a moneypit either. It just needs some additional programming to make it suitable in the air. If I can have radar on my boat for less than 2 grand, why not on my bus.
vikingivesterled is offline  
Old 29th Oct 2017, 13:10
  #82 (permalink)  
 
Join Date: Apr 2010
Location: London
Posts: 7,072
Likes: 0
Received 0 Likes on 0 Posts
This week's "Flight" has an article in which Dassault say they are already planning for the day when you'll only need one pilot in the cockpit... long-term studies are already underway.

Sounds like they think they need to design new cockpits with that option in mind.
Heathrow Harry is offline  
Old 29th Oct 2017, 16:01
  #83 (permalink)  
 
Join Date: Oct 2013
Location: London, Monte Carlo and Bermuda (I wish!)
Age: 80
Posts: 119
Likes: 0
Received 0 Likes on 0 Posts
Drone airlines? They're almost here already, surely. Most planes could take off and land themselves now. Trains, buses and cars do so anyway, so it's only a matter of time. I've sat in cars parking themselves and on driverless trains and buses. Do I worry? You bet your life I do, but so far without cause.

A major concern, though, occurs to me. Will people actually need to travel so much in the future? When every home is equipped with interactive 3D multi-screen facilities will anybody need to travel anywhere unless they absolutely have to or want to? Mass-travel will be a thing of the past and holidays won't be necessary when robots and drones at home and away will provide for all our wants and needs leaving us with plenty of time for peeling grapes and sipping G-and-Ts. And it won't be much longer before we're all beaming up and down all over the place.

Good times are coming, they told me years ago, when nuclear power was going to provide free electricity and computerization unlimited leisure. That worked out well, didn't it? After all, it was all nicely predicted in Blade Runner years ago!
Mr Oleo Strut is offline  
Old 30th Oct 2017, 08:06
  #84 (permalink)  
 
Join Date: Aug 2015
Location: 5Y
Posts: 597
Received 13 Likes on 5 Posts
Originally Posted by alserire
All it would take is one fatal accident........

I won't ever get into a motorised vehicle that is not driven/flown/operated by a human being.
That is a strange attitude given that most fatal accidents are caused by human beings.

I am sure that the increase in automation will continue, and that we will reach a situation without an onboard pilot rather soon. I cannot think of a recent situation in which a human being has saved a flight by unusual intervention (although I am sure some will argue that happens every day), but plenty where inappropriate human intervention has been fatal. Fly-by-wire means that all the feedback mechanisms are already in place, so the hardest technical part is done.

The biggest problem I guess is that in most automated systems, if all else fails, and the computers have no idea what is happening, they can just stop! In aircraft systems the equivalent is to just hand back control to a human, whose response is often slow and inappropriate.

To take the crazy example of AF447, everyone died because the system was 'conservative': the philosophy of "if in doubt, ask a human to sort it out" was a disaster. You can argue that is because the human crew were too reliant on their systems, and when suddenly presented with confusing data they had almost literally forgotten how to fly. So is the solution less automation, so they keep their hand in and crash a few aircraft in the process?! Or more automation, until the crew become passengers?
double_barrel is online now  
Old 30th Oct 2017, 09:08
  #85 (permalink)  
 
Join Date: Sep 2017
Location: Bremen
Posts: 118
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by double_barrel
That is a strange attitude given that most fatal accidents are caused by human beings.
That's a bogus argument, especially as:
a) many systems are still being controlled by humans, so on numbers alone there will be many more human accidents;
b) accidents involving both automation and humans are usually blamed on the humans in that model, not on the automation.

Computer systems usually have a narrow, well-defined area within which they can operate safely; venture outside of that area, and performance drops off far more sharply than a human's performance will.

On AF447, the autopilot encountered a condition it could not resolve and turned itself off. Had it not turned itself off, what would have happened? What would have occurred if it had adjusted its operation to the erroneous inputs? What would happen if it did that in every situation where it now turns itself off and humans save the day? Was the inability to cope with the situation an inherently human problem, or was the introduction of automation into the cockpit a contributory cause?

Are human interface issues the failure of humans to operate machinery, or is it the failure of automation to cooperate with humans?

I've been reading comp.risks on and off for almost three decades. It's about Risks to the Public in Computers and Related Systems, and as you may guess, those are almost all automation risks (the occasional laptop catching on fire notwithstanding). This includes aviation topics.

My personal takeaway is that whenever an automated system assumes that it a) has a complete picture of the situation and b) has complete control, the time will come when one of these assumptions is no longer true, and then the system will fail. The problem is that these assumptions make a computerized system easy to design. It is hard to design a system that is able to recognize when its inputs may be bad, and its outputs may be bad as well, and still deal with it. (The approach to the system itself being bad is usually "put three of them in; if one is off, disable it". Bonus points if the three systems are not identical, because otherwise they would simply show the same errors in some cases, but of course that requires three times the effort.)
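In code, that "put three of them in, if one is off, disable it" approach is essentially a median voter. A minimal Python sketch, with the function name, readings and tolerance all invented for illustration (nothing here comes from a real air-data system):

[CODE]
# Minimal sketch of triple-redundancy voting: take the median of three
# readings and flag any channel that disagrees with it by more than a
# tolerance. Illustrative only - thresholds and values are made up.

def vote(channels, tolerance):
    """Return (voted_value, suspect_channel_indices) for 3 readings."""
    voted = sorted(channels)[1]  # median of three
    suspects = [i for i, v in enumerate(channels)
                if abs(v - voted) > tolerance]
    return voted, suspects

# Two pitot-derived speeds agree, one has iced over:
speed, bad = vote([272.0, 271.5, 118.0], tolerance=10.0)
print(speed, bad)  # -> 271.5 [2]  (channel 2 flagged and ignored)
[/CODE]

Of course, if all three channels ice up the same way, the median is still wrong and the voter never notices - which is exactly the "make them not identical, at three times the effort" problem.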

And then there's malicious interference: it's hard to get a pilot to crash an airplane; once you have found a way to do it to the computer, you can easily do it to all of them. So that's another thing that will cause fatal automation accidents to go up when automation spreads to more critical systems; but since the accident was caused by a malicious human, it'll appear on the other side of the statistics yet again.

You also need to consider the question: if the same effort that is spent on making automation safer were spent on giving humans the tools to make their own activities safer, what would the result be?

Arguments by statistics may seem simple and convincing, but when you delve into the issues, you're going to find that no statistic tells the whole truth.
Musician is offline  
Old 30th Oct 2017, 09:37
  #86 (permalink)  
 
Join Date: Aug 2015
Location: 5Y
Posts: 597
Received 13 Likes on 5 Posts
Originally Posted by Musician
On AF447, the autopilot encountered a condition it could not resolve and turned itself off. Had it not turned itself off, what would have happened? What would have occurred if it had adjusted its operation to the erroneous inputs? What would happen if it did that in every situation where it now turns itself off and humans save the day? Was the inability to cope with the situation an inherently human problem, or was the introduction of automation into the cockpit a contributory cause?
OK, to take just that point. The system detected anomalous inputs and reverted to 'alternate law', ie handed control back to the humans. Of course, as it was designed, it might not have successfully handled the situation if it had retained control (although the dumbest rule could hardly have done worse than the human crew). But a fully automated system would not have just 'surrendered'; it would have looked at other sources of information. It would have been immediately obvious that the pitot tubes were sending nonsense data by comparison with multiple other sensors. It would be trivial to have designed a system to cope seamlessly with a temporary loss of airspeed data. This was not done because the 'safest' option was assumed to be 'if in doubt, give it to the humans'.
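A minimal sketch of the kind of cross-check I mean, in Python; the sensor names, rates and thresholds are all invented for illustration, not how any certified system actually does it:

[CODE]
# Sketch of a plausibility check on indicated airspeed: reject a reading
# that changes faster than the aircraft can physically accelerate, or
# that is wildly inconsistent with GPS ground speed. Illustrative only.

def airspeed_plausible(indicated_kts, previous_kts, ground_speed_kts,
                       dt_s, max_accel_kts_per_s=5.0, max_wind_kts=150.0):
    # 1. Rate check: airspeed cannot jump faster than the aircraft can
    #    actually accelerate or decelerate.
    if abs(indicated_kts - previous_kts) > max_accel_kts_per_s * dt_s:
        return False
    # 2. Gross check against ground speed: the difference should never
    #    exceed a plausible wind.
    if abs(indicated_kts - ground_speed_kts) > max_wind_kts:
        return False
    return True

# AF447-style dropout: indicated speed collapses in a second while the
# ground speed barely changes -> reading rejected, last good value held.
print(airspeed_plausible(100.0, previous_kts=275.0,
                         ground_speed_kts=470.0, dt_s=1.0))  # False
[/CODE]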


Of course, it could be argued that this was itself an automation failure, ie had the crew been hand flying it, there would have been no incident. But that gets into a very circular argument!
double_barrel is online now  
Old 30th Oct 2017, 12:13
  #87 (permalink)  
 
Join Date: Sep 2017
Location: Bremen
Posts: 118
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by double_barrel
It would have been immediately obvious that the pitot tubes were sending nonsense data by comparison with multiple other sensors.
What are those sensors, and what are their failure modes? There were already multiple pitot tubes, but it did not help.
Musician is offline  
Old 31st Oct 2017, 07:27
  #88 (permalink)  
 
Join Date: Aug 2015
Location: 5Y
Posts: 597
Received 13 Likes on 5 Posts
What would/should the crew have done when the system dropped to alternate mode? How might they have determined what the aircraft was doing and how to respond? In fact the correct response was to do nothing, which they failed to do. It would not be hard to program a system to do better than that :-)

Alternatives? A ten-cent accelerometer would have been enough! So would GPS or simple interpolation. The system 'knows' the attitude and what the engines are doing; it knows the ground speed now and what the airspeed was a few seconds ago. Of course those do not give the actual instantaneous airspeed, but it would not require an especially smart system to detect that the airspeed data was simply missing and to use other sources of data to make an approximation that would have been good enough to keep the aircraft flying until the airspeed data stream returned. I don't know if pitot tubes have ice detection as well as anti-icing - in fact, if I were to design a pitot ice-detection system, it might look for anomalous readings or big pressure drops across critical parts!!
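To make the idea concrete, here is roughly what "interpolate until the data comes back" could look like - a deliberately crude Python sketch that just propagates the last good airspeed with the longitudinal accelerometer, ignoring wind changes, attitude and air density (all the numbers are made up):

[CODE]
# Bridge a pitot dropout by dead-reckoning airspeed from the last good
# value plus integrated longitudinal acceleration. Grossly simplified
# and purely illustrative.

def bridge_airspeed(last_good_kts, accel_samples_mps2, dt_s):
    KTS_PER_MPS = 1.94384
    estimate = last_good_kts
    for a in accel_samples_mps2:          # one sample per time step
        estimate += a * dt_s * KTS_PER_MPS
    return estimate

# Example: 30 s of missing airspeed with a slight deceleration.
samples = [-0.1] * 30                     # m/s^2
print(round(bridge_airspeed(275.0, samples, dt_s=1.0), 1))  # ~269.2 kt
[/CODE]

Not certifiable, obviously, but enough to hold the aeroplane roughly where it was for the short period the pitots were iced.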

Of course you should now be asking how the system decides which set of data to trust if all the redundant but not truly independent members of a system drop out - eg all the pitots or all the GPS (plus both alternative GPS-like systems) which might conceivably give spurious but matching values under some circumstances. I don't think it is hard to manage such circumstances.

And of course, under some highly unusual circumstances, the system gets it wrong and everyone dies. How is that different from today's arrangement?
double_barrel is online now  
Old 31st Oct 2017, 17:26
  #89 (permalink)  
Thread Starter
 
Join Date: Jan 2000
Location: Henley-on-Thames, UK
Posts: 52
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Musician
You also need to consider the question: if the same effort that is spent on making automation safer were spent on giving humans the tools to make their own activities safer, what would the result be?
No, you don't, as this will never be a valid question. The effort is only spent on automation in the short term as it lowers cost in the long term. Money is the motivating factor, not safety. Aviation is made relatively safe, not absolutely safe, as the safest form of aviation is not to aviate at all.

The 2-person cockpit of today will shrink to the one-person cockpit of tomorrow, as surely as it shrank from 3 to 2. The survivor might retain the title captain, but they'll be increasingly deskilled into some sort of 'flight manager' overseeing the flight in general, whilst the responsibility for dealing with unexpected events takes place on the ground.

True autonomy is still a while off, but high levels of automation, remote operation and oversight are far closer, and driven by reasons of cost alone.
yellowperil is offline  
Old 31st Oct 2017, 17:59
  #90 (permalink)  
 
Join Date: Aug 2007
Location: Ireland
Posts: 216
Likes: 0
Received 0 Likes on 0 Posts
The problem with only having one pilot, aka a captain, on board is: how do you make people captains? Where do they get their training and hours of practice for the one-person cockpit? Will the training happen on the ground, in realistic flight-simulator-like environments, with those not found suitable to take on multiple simultaneous remotely controlled flights sidelined into yellowperil's "flight manager" position in the sky?
At least that way the ground-based pilots could have regular working hours close to home, without having their days extended by the usual unforeseen circumstances. The flight(s) in progress would just be handed over to the next shift, like in air-traffic control.
vikingivesterled is offline  
Old 31st Oct 2017, 23:14
  #91 (permalink)  
 
Join Date: Sep 2017
Location: Bremen
Posts: 118
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by double_barrel
Alternatives? $0.1 accelerometer would have been enough ! So would GPS or simple interpolation. The system 'knows' the attitude and what the engines are doing, it knows the ground speed now and what the airspeed was a few seconds ago. Of course those do not give the actual instantaneous airspeed, but it would not require an especially smart system to detect that the airspeed data was simply missing and to use other sources of data to make an approximation that would have been good enough to keep the aircraft flying until the airspeed data stream returned.
The existing system already detected that the input was bad. The problem is that unless you can determine why the input was bad, it can't be trusted from that point on: it might look as expected, but still be off, just by less.

Your alternative sensors can't replace air speed data: inertial speed or GPS speed is ground speed, which is fine for navigating, but for aviating you need the air speed because it determines how close to the limits the plane is: how close to a stall, how close to being overstressed?
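To put numbers on why GPS is no substitute: ground speed differs from true airspeed by the wind vector, which the aircraft can no longer measure once the air data is gone, and at cruise level that difference can exceed 100 kt. A small Python illustration with made-up figures:

[CODE]
# Ground speed = air vector + wind vector. With the same true airspeed,
# an unknown jetstream shifts ground speed by its full strength, so GPS
# alone cannot tell you how close you are to the stall or to overspeed.
import math

def ground_speed(tas_kts, heading_deg, wind_from_deg, wind_kts):
    hdg, wnd = math.radians(heading_deg), math.radians(wind_from_deg)
    east  = tas_kts * math.sin(hdg) - wind_kts * math.sin(wnd)
    north = tas_kts * math.cos(hdg) - wind_kts * math.cos(wnd)
    return math.hypot(east, north)

print(round(ground_speed(470.0, 30.0, 210.0, 0.0)))    # 470 kt, still air
print(round(ground_speed(470.0, 30.0, 210.0, 120.0)))  # 590 kt, 120 kt tailwind
[/CODE]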

"Do nothing" might have been "keep air speed constant by applying power", which is what it was doing before the autopilot turned itself off. That's pretty much what the pilot did, isn't it? And if you change the system behaviour to "do nothing", you may have changed it for situations where it shouldn't have changed (any programmer knows that fixing a bug often introduces new bugs). What you say is "easy to manage" is in fact hard to manage. It is easy for humans, but hard for computers, which is my point.
Musician is offline  
Old 1st Nov 2017, 00:39
  #92 (permalink)  
 
Join Date: May 2017
Location: NZ
Posts: 4
Likes: 0
Received 0 Likes on 0 Posts
End of the day... they won't be the ones who decide if this stuff really takes off or not. Plenty of other factors to consider: regulations, political and social factors will all have an influence.

Ultimately, if people aren't comfortable with fully autonomous planes, airlines won't be too keen on buying them, and R&D may take a different direction. Plenty of people worry about flying now, even though they won't think twice about the drive to the airport... good luck trying to get them onto some autonomous airliner. It would take a brave airline to try it.


In my opinion, it would be wise to keep one person up front. Crashes tend to be expensive and bad PR. But if the apparent "experts" are right about a massive number of jobs being automated in however many years, I'd say the pilot's job as it currently stands would be more at risk from airlines losing sales - a massive share of the population being out of an income and unable to travel - than from layoffs due to automation. Sometimes you have to wonder about the wisdom of all this automation and where it ends up... no point automating everything if no one has an income to buy stuff, with companies ending up shooting themselves in the foot.


Who knows. Nobody has a crystal ball. Maybe one day there will only be one pilot up front on short haul... Timeframe: anyone's guess. But just because something is new doesn't necessarily mean it's going to be the next big thing... Crystal Pepsi?
cflier is offline  
Old 1st Nov 2017, 05:59
  #93 (permalink)  
 
Join Date: Aug 2015
Location: 5Y
Posts: 597
Received 13 Likes on 5 Posts
Originally Posted by Musician
The existing system already detected that the input was bad. The problem is that unless you can determine why the input was bad, it can't be trusted from that point on: it might look as expected, but still be off, just by less.

Your alternative sensors can't replace air speed data: inertial speed or GPS speed is ground speed, which is fine for navigating, but for aviating you need the air speed because it determines how close to the limits the plane is: how close to a stall, how close to being overstressed?

"Do nothing" might have been "keep air speed constant by applying power", which is what it was doing before the autopilot turned itself off. That's pretty much what the pilot did, isn't it? And if you change the system behaviour to "do nothing", you may have changed it for situations where it shouldn't have changed (any programmer knows that fixing a bug often introduces new bugs). What you say is "easy to manage" is in fact hard to manage. It is easy for humans, but hard for computers, which is my point.
Actually, in this particular (admittedly trivial from an engineering standpoint) case it is rather straightforward to manage. If you can describe the circumstances, you can program them into the system. You have already said that the system correctly determined that the airspeed indication was faulty, so your primary concern has been dealt with. It would be very easy to have set it up to keep control and make a best guess at actual airspeed based on other parameters - as the crew should have done: they had no more information, but should have looked at the vast array of other information available to them and simply managed attitude. Why would it be dangerous to have left the computer to do no more than that?

True, in a fully automatic system I would want to look at the nature of the backup sensors - eg multiple identical pitot tubes were not actually independent of each other as they all had the same characteristics.

The crew failed because of 'startle' - computers don't suffer from that! Having totally screwed up the situation, the crew were further confused by the very interesting stall warning that disappeared when the airspeed fell below what the system considered possible in flight. It was actually correct - they were not flying when the stall warning went silent, they were falling like a brick - but there is a lesson there when considering more automation.
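For what it's worth, the gating logic behind that silence is simple to show. Something like this toy Python version - the validity threshold and stall angle are illustrative, not the certified A330 logic:

[CODE]
# Toy version of stall-warning gating: the warning is driven by angle of
# attack, but AoA is treated as invalid - and the warning inhibited -
# when the measured airspeed falls below a validity threshold.
# Values are illustrative, not the real A330 implementation.

def stall_warning(aoa_deg, measured_airspeed_kts,
                  stall_aoa_deg=8.0, speed_valid_above_kts=60.0):
    if measured_airspeed_kts < speed_valid_above_kts:
        return False   # AoA deemed unreliable -> warning silenced
    return aoa_deg > stall_aoa_deg

print(stall_warning(aoa_deg=40.0, measured_airspeed_kts=250.0))  # True
print(stall_warning(aoa_deg=40.0, measured_airspeed_kts=45.0))   # False -
# deeply stalled and falling, yet the warning goes quiet, and it comes
# back the moment the nose is lowered and the speed reading recovers.
[/CODE]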

Last edited by double_barrel; 1st Nov 2017 at 12:25.
double_barrel is online now  
Old 1st Nov 2017, 12:12
  #94 (permalink)  
Paxing All Over The World
 
Join Date: May 2001
Location: Hertfordshire, UK.
Age: 67
Posts: 10,149
Received 62 Likes on 50 Posts
As has been said before, cargo flights will try this first.
PAXboy is offline  
Old 1st Nov 2017, 16:43
  #95 (permalink)  
 
Join Date: Sep 2017
Location: Bremen
Posts: 118
Likes: 0
Received 0 Likes on 0 Posts
I have multiple issues with your post, DB.

First of all, the automation can get it wrong. There's a list of incidents on the Wikipedia ADIRU article, e.g. Malaysia Airlines Flight 124, which the autopilot would have happily crashed, but for the humans in the cockpit.

You assert that automation would have gotten it right on AF447. But the reason the pilot pitched up was the information provided by the automation:
"The A330 static ports are located below the fuselage mid-line forward of the wing. On the A330-200 in particular, as a result of the position of the static pressure sensors, the measured static pressure overestimates the actual static pressure. One of the first effects after AF447's pitot tubes became obstructed was that the internal altimeter corrections were recalculated as if the airplane was flying at lower speeds. This resulted in false indications of a 300 foot decrease in altitude and a downward vertical speed approaching 600 feet per minute." (Bill Palmer, "Understanding Air France 447")
So what would the autopilot have done if faced with a descent like that? I'd say about the same thing the humans did. Also note that because the ADIRU altitude is slowly pulled onto barometric altitude whenever there's a deviation, this descent wouldn't have looked like a sudden jump. Bill Palmer also says, "You've heard that the crew did not react to the stall warning. But, you'll see that they reacted exactly how they were taught to - it just didn't do any good." (understandingaf447.com)

"make a best guess at actual airspeed based on other parameters"-- well, there is the Back-up Speed Scale, but it's not supposed to be used above FL250. The fallback is pitch/thrust tables, but I'd assume these are only useful when the aircraft isn't about to stall.

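As a sketch of what that fallback amounts to in software terms - fly a memorised pitch and power setting for the phase of flight - something like the toy Python below. The numbers are placeholders, not the real A330 unreliable-airspeed tables:

[CODE]
# Toy pitch/thrust fallback: with no valid airspeed, hold a memorised
# pitch attitude and power setting for the current flight phase.
# Placeholder values for illustration only.

FALLBACK_TARGETS = {
    # phase:             (pitch deg, thrust %N1)
    "climb_below_FL100": (10.0, 85.0),
    "cruise":            (2.5, 80.0),
    "descent":           (0.0, 55.0),
}

def unreliable_airspeed_targets(phase):
    return FALLBACK_TARGETS[phase]

pitch, n1 = unreliable_airspeed_targets("cruise")
print(f"Hold {pitch} deg pitch and {n1}% N1")  # Hold 2.5 deg pitch and 80.0% N1
[/CODE]

Which is fine as long as the aircraft is in the regime the table was written for - it says nothing about an aircraft already on the edge of a stall, which is the limitation I mean.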
Automatic systems can have a "startle factor" as well (disregarding for a moment the questionable assertion that the AF447 crew were "startled"). For one, the AF447 ADIRU was "startled" into computing a bad vertical speed, without being aware of it. For another, reset an IRU and it becomes useless. Now count all the various ways technical systems can fail and not be aware of it...

The main problem though is this: "If you can describe the circumstances you can program them into the system." And that's not adequate. With humans, you can describe situations to them, and solutions, and when they're faced with something unknown, they'll look for analogies in their knowledge and apply them as best as they think appropriate. (This includes the selection and execution of trained procedures.) This means that humans perform best in familiar circumstances; they don't always perform optimally, but their performance falls off gradually as circumstances go outside of the norm.

Most computers can't reason like that (and those that can aren't fully understood). A computer's actions are guided by rules. Now think about bureaucracy and how inappropriate its rules can be in situations they weren't made for. You are creating the rules for a certain set of assumptions, and if you are really rigorous, you identify those assumptions and "do nothing" when they don't hold. Since a computer can't do nothing unless it turns itself off, that's what it does at present. But a system that can't turn itself off, because there is no pilot, needs to follow rules that were not made for the situation it finds itself in, and it is then that the behaviour of the automatic system deteriorates sharply, because it has no way to select which rules it should be following and thus follows even the nonsensical ones. The behaviour that emerges from the interplay of a complex system of rules is always somewhat unpredictable.

So you make some rules for situations you found, and you add them to the system, and now you have a complex system of rules (including those that deal with the various types of failures that might occur), and you're going to stumble upon a situation you hadn't considered where the behaviour generated by those rules results in a failure, possibly with many passengers aboard.

Your suggestion of "add some code for each nonstandard situation we know about" leads to an unstable, unmanageable software system with unpredictable performance in critical cases. (This is true for all software systems - ask any software engineer.) Well, "unpredictable" is not entirely true, because you can fall back on statistics, but then you need a large number of samples to have reliable data, aka learn from experience, which means you can't predict the safety of the system in advance.

yellowperil made the point that automatic systems are not inherently safer, but they're seen as cheaper than human-controlled systems, and so there's money that can be profitably invested to make them safer than human aviation, which would then enable their introduction. This means that the argument "drone flight is safer than human flight" goes out the window: it is only true because we want it to be, and it suffices for the stakeholders to make it appear to be true, and they're motivated to make it appear to be true at the least cost to themselves. I think that is cause to be suspicious.

For advanced weapons systems, it is true that they're often demonstrated in controlled conditions (aka where the assumptions made by the designers are ensured to be true), and even then tests often fail. (I think for one of the recent cruise missile attacks on Syria, only about half of them hit anything. There go your fully automatic drones.) The same goes for software security for Internet-connected systems: once a hacker can engineer a situation that breaks the designer's assumptions, the system can often be compromised. But when the system was demonstrated, it certainly looked capable.

Will there be drone airlines in the future? Possibly. But they're definitely still a long ways off.
Musician is offline  
Old 1st Nov 2017, 17:21
  #96 (permalink)  
Paxing All Over The World
 
Join Date: May 2001
Location: Hertfordshire, UK.
Age: 67
Posts: 10,149
Received 62 Likes on 50 Posts
Coincidentally, in today's Guardian: https://www.theguardian.com/science/...h-experts-warn

Artificial intelligence risks GM-style public backlash, experts warn.

Researchers say social, ethical and political concerns are mounting and greater oversight is urgently needed
Noted that air regulation is going to be significantly stronger than for AI cars, but the problem is the same.
PAXboy is offline  
Old 2nd Nov 2017, 13:49
  #97 (permalink)  
Thread Starter
 
Join Date: Jan 2000
Location: Henley-on-Thames, UK
Posts: 52
Likes: 0
Received 0 Likes on 0 Posts
Dunno what you made of the rest of the article, but to me it bore little resemblance to that clickbait headline you've quoted....
yellowperil is offline  
