"Looking Forward" to a Pilotless Future

2nd Dec 2017, 18:57  #21
If you are all too naive to believe that pilotless aircraft will begin with cargo, followed by many years of review and safety validation, and eventually followed by pilotless pax operations, then...... or wait a minute, am I the naive one?

20 years ago we hardly had the Internet, let alone all the recent introductions of smartphones and the like.

Another 20 years will bring about enormous evolution, unless of course it goes too far and gives 'some of the more idiotic' leaders of the world the avenue to bring about our own destruction.

While there are cargo planes or trains out there, they are the perfect avenue to test new concepts.

Just being real.
WindSheer is offline  
2nd Dec 2017, 21:00  #22 (Thread Starter)
Originally Posted by KayPam
Your reasoning has a bias.
Why would the insurance rates go up? Because airplanes would crash more.
Therefore it would be inefficient, since passengers care so much about safety (or rather, since passengers so greatly overestimate the danger of flying)
It may be that autonomous aircraft are safer; however, the perception (shown here) is that they will be less safe. That is just a perception, without supporting evidence. Nevertheless, I would expect insurance rates initially to reflect the perceived risk, until there was sufficient history to demonstrate whether the perception was soundly based. If the savings to operators are significant, a rise in rates may not be a disincentive.
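
To put a rough shape on that, insurers often blend a prior, perception-driven rate with the observed rate as exposure accumulates. The sketch below is entirely illustrative; the function, the credibility constant and the rates are all made-up numbers, not anything from an actual underwriter.

Code:
def blended_rate(perceived, observed, exposure_hours, k=1_000_000):
    """Credibility-weighted loss rate: priced off perception at first,
    converging on the observed rate as flight-hour history builds.
    k is an assumed credibility constant (hours for a 50/50 blend)."""
    z = exposure_hours / (exposure_hours + k)  # credibility weight, 0..1
    return z * observed + (1 - z) * perceived

# Perception says 10x today's hull-loss rate; suppose reality is half of it.
for hours in (0, 100_000, 1_000_000, 10_000_000):
    print(hours, round(blended_rate(10.0, 0.5, hours), 2))

The premium penalty decays as history accumulates; the actual constants are anyone's guess.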
Ian W is offline  
3rd Dec 2017, 03:35  #23
Perceptions change. Wait until driverless cars are perfected and the automotive accident rate plummets. Once that happens, every time there is an aircraft disaster caused by "pilot error" there will be an outcry asking why we still have fallible humans piloting these things.
It won't happen soon, but it will happen.
In my lifetime, we've gone from a world where they sold flight life insurance in airports (because many insurance policies specifically excluded aircraft accidents) to a world where it is no exaggeration that the most dangerous part of flying is the drive to the airport. 100 years ago most people wouldn't even dream of using an aircraft to cross an ocean; now not only is it common, it's pretty much the only option. Assuming we avoid blowing ourselves up, life 100 years in the future will be similarly unimaginable.
Computers are getting exponentially more capable - humans not so much.
tdracer is offline  
3rd Dec 2017, 07:18  #24
Don't be fooled; computers are getting better only because the humans who program them are getting better. But they're still human, and prone to human mistakes and foibles. I know, because it's my job to do exactly that, and frankly I sweat buckets over systems that have to be good but are not immediately safety critical.

Moral Question

It basically comes down to this: is it moral to risk a backward step in aviation safety for the sake of some financiers' wet dream of an investment? Especially as the aviation industry is so very good these days at not crashing, having learned from crashes over the past century? There's not a lot of room for improvement in safety, so that's not really a viable justification...

Are we really going to say "let's go through all that again" for the sake of pilotless aircraft?

(Let's not distinguish between cargo and passenger aircraft; people on the ground are just as dead if either crashes into their house. Incidentally one of the costs of UAV testing is finding somewhere where there's no one to crash land on top of...)

I refuse to participate in any effort to develop software for an automated airliner, or car (i.e. something intended to operate constantly in or over a human-dense environment; most UAVs are operated in sparsely populated areas, so I am less concerned about those at the moment). Trains are different; they operate in an environment that is almost entirely under our control if we so choose (i.e. avoid rivers, cliffs, etc.). That means we can write software to control them autonomously; we really can think of almost everything they have to do. They also benefit from being much bigger and heavier than most of the movable things that might get in their way, and they don't go wandering away from their rails very often...

Why it Happens

There's what is called a "conspiracy of optimism" surrounding AI and transport at the moment, and those have a nasty habit of being self-perpetuating until some unsuspecting user pays the price of the thing not being as good as it needed to be.

There's a ton of investors out there who are putting huge sums of money into self driving this, self flying that, on a "just in case" basis. It doesn't take much of a sales pitch from a bunch of enthusiastic engineers to convince some very big wallets to splash the cash.

The problems arise when some return on that investment is demanded. The technology is very obviously not up to expectations, and so the autonomous car industry tries to make it a political issue. There are PR campaigns running; effectively, some are seeking to have the "rules of the road" changed to suit the industry, and that will be at the expense of all other road users.

It is the same with self-flying passenger aircraft; the idea of having them is rapidly transforming into a self-perpetuating "we must have them", without any real explanation of why, just as it has with cars. There are lots of dubious justifications...

In the aviation industry we already have rules, engineering standards, etc. for certification of software and hardware for safe flight. These have been very effective at putting aviation safety at a very high level.

The trouble is that I cannot see us ever successfully developing an autonomous pilotless aircraft whilst sticking to the existing rules, certification procedures, etc. It would be enormously difficult to do. What I can see happening is various companies, backed by very rich investors, applying pressure to have the rules relaxed to make it easier for their technology to be put into the sky. The rule book is in the way? Burn the rule book. Yet it's these rules that have made the industry as safe as it currently is.

And because we cannot build one that's guaranteed to be 100% reliable, the only way we're going to find out whether they're worthwhile from a safety point of view is to build lots of them, fly them with paying passengers, and let the crash stats build up over decades whilst the elements, terrorists, hackers, maintenance crews and airline management do their worst.

That is, use the paying public as guinea pigs, yet again.

And for all that time we're just one software bug, or undreamt-of hardware failure, or network security issue, or hacker away from killing a lot of people. And that's only for the sake of what is demonstrably a very marginal improvement for passengers and a very doubtful financial gain for the airlines and manufacturers.

And it might truly be a lot of people who end up dying to prove the point; what if one ATC net got hacked and issued convergent directions to 1000 airborne aircraft at the same instant?

The cost / benefit / risk analysis is poor in my estimation. I wouldn't want to be partly responsible.

Paranoia or History Repeating?

OK, so perhaps I'm being paranoid. But humans are very poor at engaging rationally with the prospect of risks that have huge consequences but are fairly unlikely to occur. TEPCO didn't, and look where that's got them.

Humans are also very poor at adding risks to the equation in the first place. For example how do you assess the risk of being hacked? Can't do that, so let's not consider it and assume the security will be good enough (that's the usual response). Yet it's 100% guaranteed that hackers will try... Just like it's 100% guaranteed that thieves will hijack driverless lorries.

Vanity

Vanity is a dangerous thing in this business, and it's present in the self driving car industry. To illustrate the "problem" in the self driving car industry, consider the possibility of autonomous cars being "bullied" by humans (they won't drive into me, so I can intimidate it!). When asked on BBC Radio recently (Tech Tent, 10th Nov 2017), an industry personality was deadly serious about solving this problem with laws. Seriously? It becomes illegal for you to act in a way that is interpreted as a danger by someone else's lame brained self driving car? No way! How vain is that, expecting everyone else to be compelled by law to account for the nature of one's own product!!!

Airports and Everyone Else

I don't think the airports would value a self-flying, pilotless autonomous airliner. Want to take some landing aid out of service? Why no you can't, the planes can't land without it. So you'd have to have two, just in case.

The knock-on costs to other players are going to be quite large, and they will simply get passed on to the paying passenger; manned aircraft would probably be cheaper to land.

msbbarratt is offline  
3rd Dec 2017, 12:52  #25
There are fundamental differences between cars and planes.

Airliners are operated by very few professionals.
Cars can be operated by almost anyone over 18.

Basically, a driver has the liberty of driving like an idiot all he wants. You can drive for several years without seeing a single policeman (let alone handing him your papers).
So obviously the accident rate would plummet if you had self-driving cars. But it would also plummet in a very similar fashion if all cars were operated to the same standards as airliners: 2 drivers with 200 hours of practical training, years of experience, an operating manual to rule everything, regular and exhausting training in the sim, another crew member to monitor what you're doing, and pax to complain at the slightest problem...

The only question is: do we need self-driving cars? No, driving is not terrifying; it's actually already very safe.
Do we need self-flying planes? No, flying is already even safer.
KayPam is offline  
3rd Dec 2017, 15:46  #26
Originally Posted by KayPam
There are fundamental differences between cars and planes.

Airliners are operated by very few professionals.
Cars can be operated by almost anyone over 18.
There are some similarities; both are difficult challenges for full autonomy (cars probably more so; roads are an even more uncontrolled environment than the air), and the kind of people most keen on achieving it aren't necessarily the ones I'd trust to make a fully objective assessment as to the wisdom of attempting it. Especially as they're commonly the ones also keen on eroding the normal rules and processes surrounding safety-critical systems development, testing and certification.

Basically what I think will result from the current AI-Transport bubble is a bunch of "partial" projects:
  • Cars that are self-driving some of the time, in some circumstances.
    • But what's the use in that? If it can't bring me home from the pub half cut or take the kids to school, I'm not interested.
  • Airliners that might allow pilots to nod off for a period of time.
    • So not so different from today's tech.
    • How does anyone think the world's aviation infrastructure can be updated and certified to the point where we trust an airliner to 1) land at any airport en route or diversion, 2) taxi to the right place, 3) do all that in the foulest and fairest of conditions, and 4) do all that with the requisite ground infrastructure going offline just as it commits to a landing? Landing is basically the only thing a modern airliner isn't trusted to do for itself today.
    • I'd be interested to know exactly how an autonomous airliner is supposed to declare a Mayday or a Pan, and what exactly is supposed to happen when it does...
msbbarratt is offline  
3rd Dec 2017, 15:50  #27
Airlines are just another business, like any other. Economics and technological advances will overtake everything and everyone; the only question is when. We discuss so many accidents where the people sitting up front have not given the impression of highly skilled professionals. It may be difficult to visualize at present, but it will happen.
vilas is offline  
3rd Dec 2017, 16:38  #28
As someone calculated earlier, the front-seat crew account for about 2% of the price of a ticket.
Probably even less for long-haul operations (with more pax aboard).
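
A quick sanity check on that figure, with round numbers that are purely illustrative assumptions:

Code:
# Back-of-envelope; every figure below is a hypothetical round number.
crew_cost_per_hour = 500.0       # two pilots, fully loaded (pay + training + benefits)
block_hours = 8.0                # a longish sector
seats, load_factor = 300, 0.85   # wide-body, typical fill
avg_fare = 600.0

crew_cost_per_pax = crew_cost_per_hour * block_hours / (seats * load_factor)
print(f"${crew_cost_per_pax:.2f} per passenger, "
      f"{100 * crew_cost_per_pax / avg_fare:.1f}% of the fare")
# -> $15.69 per passenger, 2.6% of the fare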

All the sensors and computers required, in addition to the advanced AI required to make the airplane autonomous, will surely cost more than pilots for a very long time.

And by the time that's no longer the case, it's possible there won't be enough petrol left to keep airplanes flying.
So there's a serious chance we'll never see autonomous airplanes in our lifetime.
KayPam is offline  
3rd Dec 2017, 17:01  #29
What about situations such as a forgotten chock preventing right gear extension... 2 go-arounds, and finally anti-skid off to intentionally burst the tires?

Can a computer deal with that?
Pugilistic Animus is offline  
3rd Dec 2017, 21:00  #30
Out of curiosity, are the latest generation of aircraft (A350, B787, CS300/100) authorised to conduct an autoland with braking action less than good, on contaminated runways, or with turbulence greater than moderate? Current/previous-generation aircraft (A320/A330/B737) are not allowed to conduct autoland ops in these conditions, so I'm wondering whether today's technology allows this type of landing, or if that's still another generation away.
EI_DVM is offline  
3rd Dec 2017, 21:23  #31
Boeing and Airbus are working on the next generation of narrow-body aircraft. They would be at least 10 to 15 years away from service on the most optimistic timeframe. They are not being developed as pilotless aircraft, and they will be in service for at least 40 years in their various forms. Any attempt to reconfigure them as pilotless will require recertification. If you don't think that's an issue, then look at the 737 flight deck, on an aircraft that has been in service for 50 years.
Lookleft is offline  
4th Dec 2017, 03:36  #32
Originally Posted by msbbarratt

Vanity

Vanity is a dangerous thing in this business, and it's present in the self driving car industry. To illustrate the "problem" in the self driving car industry, consider the possibility of autonomous cars being "bullied" by humans (they won't drive into me, so I can intimidate it!). When asked on BBC Radio recently (Tech Tent, 10th Nov 2017), an industry personality was deadly serious about solving this problem with laws. Seriously? It becomes illegal for you to act in a way that is interpreted as a danger by someone else's lame brained self driving car? No way! How vain is that, expecting everyone else to be compelled by law to account for the nature of one's own product!!!

Airports and Everyone Else

I don't think the airports would value a self-flying, pilotless autonomous airliner. Want to take some landing aid out of service? Why no you can't, the planes can't land without it. So you'd have to have two, just in case.

The knock-on costs to other players are going to be quite large, and they will simply get passed on to the paying passenger; manned aircraft would probably be cheaper to land.
You seem to have more foresight of this issue than most.

The thing is, the cost/benefit analysis for cars is different than for airplanes. First, the economies of scale are more favorable for autos. Second, there are more potential benefits to society, such as access to transport for those who formerly had difficulty operating or using personal autos. Of course, this has me far more fearful of the autonomous-car parade trampling the driving rights of everyone else. You already gave one example, but there are several other foreseeable consequences as well. Traffic congestion is another big one, in my opinion. The autonomous fans tend to tout that traffic will improve since the AI drivers are better and more orderly. That may be so, but I strongly suspect the overall effect will be much more gridlock. Once cars are untethered from their drivers, the utilization rate will increase, meaning the car will be on the road more, often for trivial benefits to the owner, such as having the car circle the block endlessly to avoid parking. I also foresee an explosion of passengerless vehicles performing delivery services. I really believe it's going to be a huge problem in urban and suburban areas, and a backlash will form as well.

Fortunately, for aircraft, I don't see it being a big issue in the years I have left to fly.
Sorry Dog is offline  
5th Dec 2017, 07:29  #33
I hope the rest of your flying time brings smooth air and comfortable runways, for many years to come!

It's quite interesting studying some of the claims the autonomous vehicle people make; safety, traffic improvements, etc. There's also what's being touted as a virtuous circle between electric cars and self-driving cars; you need electric cars to save the environment, you need self-driving to make electric cars viable, ergo please invest megabucks into self-driving cars.

The trouble is that absolutely none of these claims will ever come to fruition while remaining 100% "safe" with the roads and other road users we have today. The danger is that giga-bucks will be poured into changing the roads and banishing other road users, simply to fulfil something that is little better than a religious prophecy. Such expenditure would be an acknowledgement that the original prophecy was unfulfillable, and that the most important thing had become fulfilling a heavily amended form of it, even if that is in itself no longer worthwhile.

Ask yourselves how many times governments have spent lots of good money on top of bad chasing an already dead and pointless idea, simply because they’ve already started…

Seen it All Before

The whole pattern of research, argument, persuasion, policy, law, and eventual retraction is an oft-repeated human thing. Look at the food industry: for decades the industry, experts, governments, magazines, etc. have been saying "butter is bad for you, it'll make you fat, give you heart disease and cancer, make you go bald, cause flatulence". A lot of businesses made a lot of money out of trans-fats and artificial substitutes for decades. And then (quite recently) it turns out that the "advice" is rubbish, that butter and the like are fine (even necessary) in moderation; cue the butter shortage in France and elsewhere as demand suddenly goes through the roof.

How many health £ have been spent and how many people have died due to trans-fats? Probably an incalculably large number.

The Power of Lobby

Where there's money to be made, there's almost no end to the measures, arguments, research, advertising, lobbying, voting, etc. that will be wheeled out to create a market. That is exactly what Tesla, Google, Amazon, Uber, etc. are doing right now. Just like the smoking industry did, the food industry, the car industry back in the bad old days, the drinks industry today, the oil industry, etc.

The danger is that a lot of harm will be done along the way from here to there when we start seeing the poor accident statistics / traffic flow figures rolling in. The industry's end goal is to make it too late to go back before everyone has a chance to work out whether it's a bad idea or not.

With self driving cars, what I think will actually happen is that people will not value the inevitable "partial" solution enough for the market to take off, and it'll simply be too expensive to give it away for free.

The only company doing it properly is Volvo - they're saying they'll take the rap for collisions and accidents caused by their software. Everyone else is saying "Read the End User License Agreement".

Musk

Tesla are particularly bad because there's a lot of "Tech Rock God" going on with Musk, which creates a severe and possibly harmful reality-distortion field. People believe him literally, as an article of faith (I know some who are like this). Someone killed themselves misusing Autopilot, yet Tesla are not even attempting, nor claiming to offer, a fully autonomous self-driving car (apart from their refusal thus far to drop the term "Autopilot").

Musk in particular is interesting; Tesla does not make money, and they're only now finding out just how hard it is to mass-produce cars; SpaceX does not make money; so what's in it for him? His personal reputation is his way of making money, and his co-investors simply have to hope that somehow Tesla and SpaceX will actually turn a profit (for him that's merely a bonus). Tesla won't; they're building the wrong sort of battery (Toyota have the right battery, solid-electrolyte lithium-ion, watch this space). SpaceX are unlikely to, as they're trying to make money in an expensive market where margins are already pretty thin, and they've wasted a lot through careless engineering brought about by cost and time pressures. Meanwhile their competitors (e.g. Ariane) are fearsomely good at being on time (in my own experience, to the second, planned years in advance), on cost, and reliable.

People such as this can be very influential beyond their actual achievements, qualifications, or field of expertise. If Musk were to turn his gaze on the aviation industry, things might end up happening that you pilots would have deep concerns over. You just kinda have to hope that the barriers to achieving anything at all in aviation are too high for even someone like Musk to consider leaping. It depends on what he wants to do. Battery-hybrid aircraft - OK, no problems with that. Tesla-inspired pilotless aircraft - anyone want to get on board?
msbbarratt is offline  
6th Dec 2017, 06:42  #34
MSB, you make some very good points (some of your comments on Tesla are spot on - as another automotive CEO put it 'it's really easy to build good cars when you're losing money on every one you build').
But you're still thinking "short term" (and by that I mean 20-50 years). Like you, I'm involved in aviation software (or more accurately, was involved; I retired about a year ago). I witnessed massive changes in the 40 years I spent in the industry before that retirement. I clearly recall pilots stating they'd "NEVER" get on an aircraft with "plastic" (composite) wings, computer flight controls (fly-by-wire), or electronically controlled engines (FADEC). Being an engine guy, I'm particularly familiar with the reservations regarding FADEC engines, and some of the concerns I heard were quite simply laughable...
Yet as you certainly know, all these things are now commonplace. Software coding today bears almost zero resemblance to what we did in the 1980s; "coding errors" as such basically no longer exist, since the coding is done by computers. The weak link now is in the requirements and how they are translated into flow diagrams.
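
A toy example of what I mean, with invented names. Both functions below are "correct" against their author's reading of the requirement; the defect lives entirely in the requirement:

Code:
# Requirement as written: "total_impulse() shall return the total impulse
# of the burn." The spec author meant pound-force seconds; the downstream
# team assumed newton-seconds. Neither side made a coding error.

LBF_S_TO_N_S = 4.448222  # the unit conversion the requirement never mentioned

def total_impulse(thrust_lbf: float, burn_s: float) -> float:
    return thrust_lbf * burn_s                 # exactly what the spec says

def delta_v(impulse_n_s: float, mass_kg: float) -> float:
    return impulse_n_s / mass_kg               # expects SI units

print(delta_v(total_impulse(20.0, 300.0), 500.0))                 # silently ~4.4x low
print(delta_v(total_impulse(20.0, 300.0) * LBF_S_TO_N_S, 500.0))  # what was intended
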
Basically, anything that has ever happened can be accounted for (yes, Pugilistic Animus, that includes a failed gear extension); the weakness is the 'unknown unknown'. It's really hard to incorporate human ingenuity into a computer program given today's technology. (The often-mentioned Miracle on the Hudson is actually pretty trivial, likely with an even better outcome of making it to an airfield, since it would take a computer milliseconds to determine what took Sully roughly 20 seconds. That's not to detract from what Sully did in any way; by human standards he was exceptional.)
But that's today's technology. With computer capabilities increasing exponentially via Moore's law (which isn't a law, it's an observation, but an observation that has held remarkably accurate for several decades), can any of us even imagine what computers will be capable of 50 years from now?
As I mentioned, it won't happen soon, likely not in my lifetime. But we're already in a world where pilot error, CFIT, and suicidal pilots result in far more aircraft accidents than mechanical failure. The major auto manufacturers are predicting fully autonomous cars within five years; even if they are off by ten years, it's the future of transportation. When autonomous cars become perfected and commonplace, and driving to the airport is no longer the most dangerous part of flying, the public tolerance for pilot-error accidents will vanish (not that it's ever been very high).
tdracer is offline  
6th Dec 2017, 07:33  #35
Hello tdracer, I envy you, I'm still slaving away at the keyboard...

The requirements are indeed the weak link. It's possible to completely set out the requirements for a wing, a FADEC, or fly-by-wire. We can't do that with machine learning and AI.

And if we can't do that, then the only thing to do is to put them into service and see how many crash over the following decades. Given the excellent state of aviation safety, there's precious little room for improvement.

That's the moral question, and I don't think we should entertain it at all, certainly not simply for the sake of eliminating a company cost.

The fear I have is that the money power will try and brush the morals under the carpet (just as they are trying to do in the self driving car industry).

However, if someone could prove, in advance of the first revenue flight, that it would be a safety improvement (a truly narrow window of opportunity), that would be harder to argue against. But that's impossible to do.

I work with a bunch of guys who do machine-learning algorithms, and the one permanent fact is that they need training data. And if that data is incomplete, or just plain wrong, you end up with a junk machine. Moore's observation doesn't come into it; more transistors do not make our training data set complete, or right. And there's precious little sign of these things exhibiting inspired imagination or adaptability like Sully did.
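
A minimal sketch of the point, using nothing but numpy and made-up data: fit a flexible model on a narrow operating regime and it looks superb inside that regime and is junk a fraction outside it.

Code:
import numpy as np

rng = np.random.default_rng(0)

# "Operational history": the system has only ever seen inputs in [0, 1].
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=x_train.size)

# A flexible learned model: a degree-7 polynomial fitted to that history.
coeffs = np.polyfit(x_train, y_train, deg=7)

# Inside the training envelope: accurate. Just outside it: nonsense.
for x in (0.5, 1.1, 1.5):
    print(f"x={x:4.2f}  model={np.polyval(coeffs, x):10.2f}  "
          f"truth={np.sin(2 * np.pi * x):6.2f}")

More transistors make that loop run faster; they do nothing about the gap between the data and the world.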
msbbarratt is offline  
6th Dec 2017, 07:55  #36
But we're already in a world where pilot error, CFIT, and suicidal pilots result in far more aircraft accidents than mechanical failure.
As the pilot is the last line of defence, the failure of that last line can, on occasion, lead to an accident. That line of defence (the pilot), on the other hand, catches many hardware and software failures and thus prevents many accidents. Accidents that are prevented are not reported. The ratio must be of the order of hundreds of thousands of prevented accidents for every caused accident, in favour of the pilot.

It is clear that while operating in the middle of the envelope, computers can do a better job than humans. The issue arises when external conditions (WX), or procedural or mechanical failures, place you at the edge (or beyond) of the envelope. In the case of cars, the software defaults to brakes on and STOP. In the case of aircraft, it defaults to automation disconnect and manual handling. The difference is significant: an autonomous aircraft will have to be designed with a much greater envelope capability than current aircraft, and will require greater system redundancy and a larger number of sensors and sensor types, with the added cost that brings.
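
The difference in default behaviour fits on one screen (schematic only, all names invented):

Code:
from enum import Enum, auto

class Fallback(Enum):
    STOP_IN_PLACE = auto()           # car: brakes on and STOP
    HAND_TO_HUMAN = auto()           # today's airliner: automation disconnect
    FLY_THE_WHOLE_ENVELOPE = auto()  # pilotless aircraft: no one to hand to

def degraded_mode(vehicle: str) -> Fallback:
    if vehicle == "car":
        return Fallback.STOP_IN_PLACE    # zero speed is a safe terminal state
    if vehicle == "crewed_airliner":
        return Fallback.HAND_TO_HUMAN    # the edge cases go to the pilot
    # An aircraft has no safe "stop": remove the pilot and the software must
    # own every edge-of-envelope case itself, with all the extra sensors and
    # redundancy that implies.
    return Fallback.FLY_THE_WHOLE_ENVELOPE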

The scale of the problem can be seen by looking at the FMC irregularities published by both Airbus and Boeing. These are known software glitches that have undesired consequences for the behaviour of the aircraft. In very mature products such as the A320 family or the B737, these irregularities run to many pages. How long, then, to develop a far more complex system that has no such irregularities at all?

Anyone who flies an Airbus knows that resetting a computer is a daily occurrence to restore normal function. That being the current state of affairs, I cannot see a system with the required level of reliability arriving any time soon.

Time is long, so no doubt in the end it will happen, but it's certainly not around the corner.
Elephant and Castle is offline  
6th Dec 2017, 18:26  #37
from last week:
The complex failure scenario of the second Soyuz rocket launch from Vostochny continued emerging in the days following the accident. Although the culprit had quickly been pinned down by flight control specialists, even seasoned space engineers, who were not directly involved in the intricacies of guidance systems, struggled to fully comprehend the bizarre nature of the accident.

In the Soyuz/Fregat launch vehicle, the first three booster stages of the rocket and the Fregat upper stage have their two separate guidance systems controlled by their own gyroscopic platforms. The guidance reference axes used by the gyroscopes on the Soyuz and on the Fregat had a 10-degree difference. The angle of a roll maneuver for rockets lifting off from Baikonur, Plesetsk and Kourou, which was required to guide them onto the correct azimuth of ascent, normally lay within a range from positive 140 to negative 140 degrees. To bring the gyroscopic guidance system into a position matching the azimuth of the launch, its main platform has to be rotated into a zero-degree position via the shortest possible route. The ill-fated launch from Vostochny required a roll maneuver of around 174 degrees (which was apparently conducted from the 5th to the 22nd second of the flight), and with an additional 10 degrees for the Fregat's reference axis, it meant that its gyro platform had to turn around 184 degrees in order to reach the required "zero" position.

In the Soyuz rocket, the gyro platform normally rotated from 174 degrees back to the zero position, providing the correct guidance. On the Fregat, however, the shortest path for its platform to a zero-degree position was to increase its angle from 184 to 360 degrees. Essentially, the platform came to the same position, but this is not how the software in the main flight control computer on the Fregat interpreted the situation. Instead, the computer decided that the spacecraft was 360 degrees off target and dutifully commanded its thrusters to fire to turn it around to the required zero-degree position. After the nearly 60-degree turn at a rate of around one degree per second, the Fregat began a preprogrammed trajectory correction maneuver with its main engine. Unfortunately, the spacecraft was in the wrong attitude and, as a result, the engine fired in the wrong direction.

Soyuz fails to deliver 19 satellites from Vostochny
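
That is a textbook angle-wraparound bug. A minimal sketch of the class of failure (illustrative Python, obviously not the actual Fregat code): the shortest-path slew from 184 degrees passes through 360, which is physically the same attitude as zero, but software comparing raw, unnormalised angles sees a full turn of error and dutifully "corrects" it.

Code:
def error_naive(target_deg, current_deg):
    return target_deg - current_deg                # no wraparound handling

def error_wrapped(target_deg, current_deg):
    """Signed shortest-path error, normalised into [-180, 180)."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

platform = 184.0                           # platform angle after the roll maneuver
platform += error_wrapped(0.0, platform)   # shortest slew is +176 degrees...
print(platform)                            # ...landing on 360.0, physically zero

print(error_naive(0.0, platform))          # -360.0: "a full turn off target!"
print(error_wrapped(0.0, platform))        #    0.0: correctly on target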
mrfox is offline  
6th Dec 2017, 21:27  #38
Originally Posted by Elephant and Castle
Accidents that are prevented are not reported. The ratio must be of the order of hundreds of thousands of prevented accidents for every caused accident, in favour of the pilot.
I think it would be interesting and illuminating if pilots themselves organised collection and reporting of such data, independent of their companies. It would serve as a good measure of how important pilots are.

Originally Posted by Elephant and Castle
In the case of cars the software defaults to brakes on and STOP.
Probably, yes. Unfortunately there are occasions when stopping would be the wrong thing to do!

Originally Posted by Elephant and Castle
The scale of the problem can be seen by looking at the FMC irregularities published by both Airbus and Boeing. These are known software glitches that have undesired consequences for the behaviour of the aircraft. In very mature products such as the A320 family or the B737, these irregularities run to many pages. How long, then, to develop a far more complex system that has no such irregularities at all?
Glitches and irregularities are OK so long as they're known about and can be worked around, at which point they become "quirks" (a technical term...). The danger with an AI/machine-learning-based system is that the number, severity and exact behaviour of its quirks are not quantifiable, even after long operation; the first you may know of one is when you look out the window and wonder why the ground seems to be looming large...

The autonomous car industry is in effect hoping that it never has to prove that its systems "work and are an improvement on humans in all circumstances" before they go into mass production.

Originally Posted by Elephant and Castle
Anyone who flies an Airbus knows that resetting a computer is a daily occurrence to restore normal function. That being the current state of affairs, I cannot see a system with the required level of reliability arriving any time soon.

Time is long, so no doubt in the end it will happen, but it's certainly not around the corner.
I can't see it happening in the end, not with the state of technology we have now. With today's machine learning / AI systems we cannot say exactly what it is we have built; thus it cannot be certified, "examined", etc. Too dumb to be fully trusted, too clever for their behaviour to be fully analysed and understood. Not a good combination.

To really get there we'd need AI more or less as portrayed in sci-fi films like I, Robot (we'd best hope we don't end up with Marvin the Paranoid Android). And that's a looooooong way off. In fact we haven't the first scoobies of an idea how to actually, really, do that.

At the risk of going down a deep rabbit hole, Roger Penrose (the mathematician) has written some interesting observations on how the brain works. The Turing machine halting problem is interesting: no Turing machine can tell you, in general, whether another Turing machine will complete its program without actually running that program (except in trivial cases). Yet a human brain can look at a program and work it out. Penrose's suggestion is that perhaps the human brain is not a Turing machine (i.e. it is not a computer, nor can a computer be like it), and that perhaps there's something quantum going on inside our heads.

If so, then there's no hope of today's computers (for that's all these machine learning / AI systems are) emulating the human brain. It might be that they mathematically cannot have truly human characteristics like imagination, universal adaptability, etc. It'd take a significant breakthrough in quantum computing (that's a wild-arsed guess on my part) to begin to get something plausibly intelligent.
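
For anyone who hasn't met the halting problem, the diagonal argument fits in a few lines of Python. The oracle below is hypothetical, which is the whole point:

Code:
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) would eventually halt.
    Assume, for contradiction, that it exists and is always right."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for program(program).
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    # oracle said "loops forever" -> halt immediately

# diagonal(diagonal) halts if and only if halts(diagonal, diagonal) says it
# doesn't. Contradiction: no such oracle can exist.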
msbbarratt is offline  
6th Dec 2017, 21:31  #39 (Thread Starter)
@msbbarratt

I can assure you that there are no attempts to reduce the safety standards or certification requirements for autonomous or remotely piloted aircraft; indeed, quite the contrary. Considerable effort is being expended on researching the different system and safety aspects of unpiloted aircraft in the various xAAs and in standards and safety bodies. This includes the generation of appropriate additions to certification testing and processes. Some of these efforts are (and will be) translating into better safety for piloted aircraft.

Current FMC/FMS should not be confused with those that will be needed for autonomous flight. In manned aircraft FMC/FMS designers can avoid attempting to cope with the more difficult emergencies and just hand control back to the pilot. The designers of autonomous systems have no such leeway. Some designs from military systems that cope with battle damage and similar will no doubt port to the civil implementations.

There is considerable industry pressure to implement UAS as passenger-carrying autonomous aircraft. Yes, some of these ideas are being pushed by people with little grasp of the problems: people who are only now discovering that saying "not above 400 ft" is not a useful spec (the immediate question being: from what datum?), and who do not understand the complexities of even basic flying and ATM operations. At the same time, there are those who do understand these issues and are convinced that the delays are due to bureaucracy and regulation rather than anything technical.

As an example:
https://www.uber.com/elevate.pdf

https://www.uber.com/info/elevate/

I expect that in 20 years' time there will be autonomous aircraft integrated into normal airspace. In many respects there already are: a UA with a lost link is an autonomous aircraft.
Ian W is offline  
6th Dec 2017, 22:12  #40
As is often the case with new, ground-breaking technology, it'll be the military that leads the way. There are obvious benefits to getting rid of the pilot in military aircraft: not only do you not put the pilot in harm's way, no pilot means lower cost, lighter weight, and increased maneuverability, since you don't need to worry about the g-loads on the pilot. Heck, with that fancy helmet on the F-35 that lets you look through the aircraft, all it would need today is a secure, high-speed data link. But you don't want your $100 million aircraft to crash if you lose the data link, so there would be a simple return-to-base routine. Eventually that simple return-to-base routine would gain increased capabilities, acquiring strike and dog-fighting capability and finally full autonomy. And when unmanned military aircraft have a better safety record than piloted ones, there will be the final push to extend it to commercial aircraft. It won't happen soon, but I have little doubt it will eventually happen.
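
For what it's worth, lost-link handling on today's military UAS is already structured roughly like this; a schematic sketch with invented states and timings:

Code:
import time

class LostLinkLadder:
    """Schematic lost-link contingency: continue, then hold, then go home."""
    HOLD_AFTER_S = 10.0     # assumed thresholds, purely illustrative
    RTB_AFTER_S = 120.0

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()   # control link is alive

    def next_action(self):
        silent_for = time.monotonic() - self.last_heartbeat
        if silent_for < self.HOLD_AFTER_S:
            return "CONTINUE_MISSION"
        if silent_for < self.RTB_AFTER_S:
            return "HOLD_AT_CONTINGENCY_FIX"     # orbit, try to reacquire
        return "RETURN_TO_BASE"                  # fly the pre-planned recovery route

The interesting evolution is exactly the one described above: each rung of that ladder quietly acquiring more capability until "return to base" has become "complete the mission".
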
As Ian notes, we're not talking about today's FMC and autothrottle/autothrust type systems; for the most part those are created assuming human intervention. A better example would be FADEC or FBW: systems that can't readily be overridden by humans.
tdracer is offline  

