PPRuNe Forums - "Looking Forward" to a Pilotless Future
6th Dec 2017, 21:27
msbbarratt
 
Originally Posted by Elephant and Castle
Accidents that are prevented are not reported. The ratio of prevented accidents to caused accidents must be on the order of hundreds of thousands to one, in favour of the pilot.
I think it would be interesting and illuminating if pilots themselves organised collection and reporting of such data, independent of their companies. It would serve as a good measure of how important pilots are.

Originally Posted by Elephant and Castle
In the case of cars the software defaults to brakes on and STOP.
Probably, yes. Unfortunately there are occasions when stopping would be the wrong thing to do!
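
A purely illustrative toy sketch (all names invented, nothing to do with any real vehicle or avionics codebase) of the point: "brakes on and STOP" is just one fallback policy among several, and the right one depends on context:

Code:
from enum import Enum, auto

class Context(Enum):
    ORDINARY_ROAD = auto()
    LEVEL_CROSSING = auto()   # stopping astride the rails is the wrong move
    MOTORWAY_LANE = auto()    # stopping dead in a live lane is dangerous too

def fallback_action(context: Context) -> str:
    """Pick a minimal-risk manoeuvre when the main system gives up."""
    if context is Context.LEVEL_CROSSING:
        return "clear the crossing first, then stop"
    if context is Context.MOTORWAY_LANE:
        return "limp to the hard shoulder, then stop"
    return "brake and stop"   # the blunt default for everything else

print(fallback_action(Context.LEVEL_CROSSING))

The dispatch table is the easy bit; the hard bit is knowing the context reliably enough to trust anything other than the blunt default.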

Originally Posted by Elephant and Castle
The scale of the problem can be seen by looking at the FMC irregularities published by both Airbus and Boeing. These are known software glitches that have undesired consequences for the behaviour of the aircraft. In very mature products such as the A320 family or the B737, the lists of these irregularities run to many pages. How long, then, to develop a far more complex system that has no such irregularities at all?
Glitches and irregularities are OK so long as they're known about and can be worked around, at which point they become "quirks" (a technical term...). The danger with an AI / machine learning based system is that the number, severity and exact behaviour of its quirks are not quantifiable, even after long operation; the first you may know about one is when you look out of the window and wonder why the ground seems to be looming so large...
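
To make that concrete, a toy contrast in Python (the "quirks" below are invented, not real Airbus or Boeing FMC irregularities). A conventional bug list is an enumerable table crews can be trained around; a trained model is an opaque function whose misbehaving inputs can only be sampled, never listed:

Code:
# A documented quirk is a known (condition -> workaround) pair:
KNOWN_QUIRKS = {
    ("VNAV", "step climb entered near top of descent"): "re-enter cruise altitude",
    ("LNAV", "duplicate waypoint in route"): "delete and re-insert the waypoint",
}

def workaround(mode: str, condition: str):
    """Look up the documented workaround, if this quirk is in the book."""
    return KNOWN_QUIRKS.get((mode, condition))

# A trained model is just an opaque function of learned weights. There is
# no table to print in a manual; you cannot enumerate the inputs on which
# it misbehaves, only sample them and hope.
WEIGHTS = [0.31, -1.27, 0.84]   # stand-ins for millions of real weights

def learned_policy(sensor_inputs):
    return sum(w * x for w, x in zip(WEIGHTS, sensor_inputs))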

The autonomous car industry is, in effect, hoping that it never has to prove that its systems "work and are an improvement on humans in all circumstances" before they go into mass production.

Originally Posted by Elephant and Castle
Anyone who flies an Airbus knows that resetting a computer is a daily occurrence to restore normal function. That being the current state of affairs, I cannot see a system with the required level of reliability arriving any time soon.

Time is long, so no doubt in the end it will happen, but it's certainly not around the corner.
I can't see it happening in the end, not with the state of technology we have now. With today's machine learning / AI systems we cannot say exactly what it is we have built; thus they cannot be certified, "examined", etc. Too dumb to be fully trusted, too clever for their behaviour to be fully analysed and understood. Not a good combination.

To really get there we'd need AI more or less as portrayed in sci-fi films like I, Robot (we'd best hope we don't end up with Marvin the Paranoid Android). And that's a looooooong way off. In fact we haven't the first scoobies of an idea how to actually, really, do that.

At the risk of going down a deep rabbit hole, Roger Penrose (the mathematician) has written some interesting observations on how the brain works. The Turing machine halting problem is relevant here: no Turing machine can tell you, in general, whether another Turing machine will complete its program, short of running that program (except in trivial cases). Yet a human brain can look at a program and work it out. Penrose's suggestion is that perhaps the human brain is not a Turing machine (i.e. it is not a computer, nor can a computer be like it), and that perhaps there's something quantum going on inside our heads.
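
For the curious, the diagonal argument behind the halting problem fits in a few lines of Python (halts() here is only claimed to exist for the sake of argument; the construction shows no such general decider can):

Code:
def halts(program, argument) -> bool:
    """Claimed general halt-decider: True if program(argument) ever stops."""
    raise NotImplementedError("assumed to exist only for the sake of argument")

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about a program fed itself.
    if halts(program, program):
        while True:           # predicted to halt -> loop forever
            pass
    return "halted"           # predicted to loop -> halt immediately

# Does troublemaker(troublemaker) halt? If halts() says yes, it loops;
# if halts() says no, it halts. Either answer is wrong, so no correct
# general halts() can exist.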

If so, then there's no hope of today's computers (for that's all these machine learning / AI systems are) emulating the human brain. It might be that they mathematically cannot have truly human characteristics like imagination, universal adaptability, etc. It'd take a significant breakthrough in quantum computing (that's a wild-arsed guess on my part) to begin to get something plausibly intelligent.