PPRuNe Forums > Flight Deck Forums > Tech Log
Can automated systems deal with unique events?

Old 2nd Nov 2015, 09:15
  #161 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Steve, as you note, many of the early contributors have made up their minds, but few explain why.
An apparently unthoughtful choice of automation might reflect social change: using Wiki and Google instead of thinking, preferring automation dependency and belief in the system without checking, etc.

So: “… is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?”
I don’t think so; as discussed previously, human ability is limited by inherent, yet necessary, fallibility. How can we design an error-free system if we cannot understand our own errors?

Re the QF example, the warning and display systems provided the crew with the ‘best’ picture that technology could provide. The crew actions could be automated, but apart from a shorter timescale the process would still be limited by the quality and availability of sensors (as noted by previous contributors).
After landing and selecting fuel off to stop the engine, what more could an automatic system do when the engine did not stop? – Automation only computes, Humans reason.

“… will potential product liability issues stop the bandwagon?” Probably; but legal liability is only a small part of an ever-changing society which influences human development.

An alternative line of thought is to ask ‘why’ we should replace existing capability – use technology wisely to support humans, but not to replace them.
If we choose ‘safety’, then this requires careful thought about what safety is, what we would be attempting to improve, and why. I prefer not to define safety but to consider it as an activity; so will change affect this activity? Might it upset the finely balanced state that we have achieved so far?
Whatever our views are, we require thought and explanation before choice. My thoughts would start with natural human risk-aversion; if we are to change a finely tuned industry, make only small changes first and assess the feedback.
For those choosing full automation, look for and evaluate the feedback from recent accidents; what should we have learned from them – without blaming the crew.
alf5071h is offline  
Old 2nd Nov 2015, 15:57
  #162 (permalink)  
 
Join Date: Mar 2014
Location: Arizona
Age: 76
Posts: 62
Likes: 0
Received 0 Likes on 0 Posts
"Automation only computes, Humans reason."

This is not true, and in the future computers will do even more reasoning. There are too many comments now which assume that computers only do what is pre-programmed. They can do more, and they will do more.

We don't know and cannot know how far they will get. Some very smart people (e.g. Stephen Hawking) fear that they will be able to out-reason us. I hope they are wrong.
Mesoman is offline  
Old 2nd Nov 2015, 17:33
  #163 (permalink)  
 
Join Date: Jun 2000
Location: last time I looked I was still here.
Posts: 4,507
Likes: 0
Received 0 Likes on 0 Posts
"Ladies & Gentlemen, this is your captain speaking. You are presently flying at ...............etc. etc. This is our first fully automated flight from XYZ - ABC. Indeed I am on the ground in XYZ controlling & monitoring your flight. I hope you are enjoying the flight and I assure you nothing can go wrong..go wrong..go wrong.. go wrong........"
RAT 5 is offline  
Old 2nd Nov 2015, 19:34
  #164 (permalink)  
 
Join Date: Jan 2011
Location: on the cusp
Age: 52
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
"Ladies and Gentlemen, the cans of burning fuel either side of you are not under my control; I merely get to make suggestions to them. But don't worry, in the unlikely event of something going wrong I can switch them off, and I'm sure we'll make it across the Atlantic."

People very quickly adapt to the concept of handing over to automation.

Automation can manage all the tasks of aviation: it can aviate, navigate and communicate. We already have machines so unstable that they can't be flown without automation. We have automated drones that you give a mission to and let go, or that are datalinked from halfway round the world. But the great advantages of automated aircraft can only be realised if the aircraft doesn't have to provide all the equipment necessary to keep humans comfortable. While aircraft carry humans, there is no real disadvantage to having a pilot. As there will always be a need to carry humans in a passenger aircraft, it seems obvious to me to invest our efforts in optimising the human-machine combination rather than striving for full automation.
dClbydalpha is offline  
Old 2nd Nov 2015, 21:09
  #165 (permalink)  
Resident insomniac
 
Join Date: Aug 2005
Location: N54 58 34 W02 01 21
Age: 79
Posts: 1,873
Likes: 0
Received 1 Like on 1 Post
G-CPTN is offline  
Old 3rd Nov 2015, 04:22
  #166 (permalink)  
 
Join Date: Jul 2003
Location: Somewhere
Posts: 3,072
Received 139 Likes on 64 Posts
Given how the regulators seem to struggle to approve what are really old-technology upgrades in aviation, I think fully autonomous aircraft are at least 50+ years away, assuming it can even be done. Don't forget that the "new" aircraft now, the 737 MAX and A320neo, are 70s and 80s technology.
neville_nobody is offline  
Old 3rd Nov 2015, 06:39
  #167 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
Neville, fully autonomous aircraft are already a reality today and already approved by various regulators to fly in controlled airspace under special AOC.

Good examples include the so called Optionally Piloted Aircraft (OPA) such as the Diamond DA42 Centaur, the Lockheed/Kaman K-Max helicopter, and the Northrop Grumman (Scaled Composites) Firebird.

(Aurora Flight Sciences’ Centaur Optionally Piloted Aircraft (OPA) flew multiple unmanned flights from Griffiss International Airport in Rome, N.Y., from June 12-15, 2015.)

These aircraft can be flown from inside the cockpit, or piloted from the ground, or programmed to fly fully automated from take-off to landing. They are not "testbeds" but are all production aircraft in service today.

The K-Max notably did nearly 2,000 unmanned sorties delivering cargo for U.S. troops in Afghanistan.

They are not carrying passengers yet, but the K-Max is being pitched as a possible Combat SAR Evac (Air Ambulance) platform; i.e., as an unmanned transport to take wounded troops from the battlefield to a medical facility.

Yes we are far away from adopting this primarily military technology to the commercial transport realm, but I don't think it will be 50+ years. As I mentioned in an earlier post, I think we'll see fully automated commercial cargo ops sooner rather than later, before proceeding to pioneering passenger flights.
peekay4 is offline  
Old 3rd Nov 2015, 11:22
  #168 (permalink)  

Do a Hover - it avoids G
 
Join Date: Oct 1999
Location: Chichester West Sussex UK
Age: 91
Posts: 2,206
Likes: 0
Received 0 Likes on 0 Posts
In the mid 60s I was a safety pilot for the Blind Landing Experimental Unit at RAE Bedford on their Comet 3B doing cross wind autoland trials with a component of over 30kt. To watch that system flare, smoothly remove the drift angle and squeak the wheels onto the numbers over and over again, convinced me that automatics could achieve standards of ‘flying’ that I could not match.

I have put quotes round ‘flying’ because I believe the word means different things to different people. To avoid ambiguity I suggest we separate the tasks of flying into ‘steering’ the aircraft and ‘operating’ the aircraft.

By steering, I mean controlling any flight parameter. By operating, I mean every other aspect of a flight from pre-flight preparation to selecting the appropriate flight parameters and filling in the Tech Log afterwards. I believe automatic systems are better at steering tasks while humans are better at operating tasks.

In reply to “What are you going to do when the autopilot fails?” my answer is that future automatic steering systems will not fail in a critical way. Unlike today’s autopilots which disconnect themselves in the event of a problem, future automatics will be designed to fail safe and carry on performing their functions. Just like today’s wing structures. Autoland, thanks to special certification standards, has not caused a landing accident since it was first used with passengers in the 70s. Sadly there have been quite a few steering errors by aircrew over the same period.

I am a future Captain climbing out of La Guardia when both engines fail. As the operator I decide the crisis needs a landing on the Hudson. I lift the guard protecting the Glide Landing button and press it, which tells the steering systems to set up the best glide. With my knowledge of the aircraft’s gliding performance I estimate the touchdown zone on the local area map that appears, draw the final approach track I want with my stylus, press the Glide Landing button again, and thank my lucky stars that I did not have to use skill to save my aeroplane. Just knowledge.
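The knowledge behind such a Glide Landing button is largely simple arithmetic: still-air glide range is roughly height times lift-to-drag ratio, corrected for wind. A minimal sketch, where the function name, the L/D figure and the scenario numbers are my own illustrative assumptions, not any real FMS logic:

```python
# Rough reachable-range arithmetic a hypothetical "Glide Landing" mode
# would need. The L/D value, speeds and heights are illustrative guesses.

def glide_range_m(altitude_m, lift_to_drag, wind_mps=0.0, glide_speed_mps=100.0):
    """Still-air glide range (height * L/D), scaled for head/tailwind."""
    still_air = altitude_m * lift_to_drag
    ground_speed = glide_speed_mps + wind_mps   # tailwind positive
    return still_air * ground_speed / glide_speed_mps

# Gliding from 850 m with an assumed clean L/D of 17:
print(glide_range_m(850, 17))                 # 14450.0 m in still air
print(glide_range_m(850, 17, wind_mps=-10))   # a headwind shrinks the footprint
```

The point is that the button encodes knowledge (a reachable footprint) so the pilot only has to choose within it, not hand-fly to it.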

As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.
John Farley is offline  
Old 3rd Nov 2015, 12:59
  #169 (permalink)  
 
Join Date: Jan 2011
Location: on the cusp
Age: 52
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
Excellent post John Farley.

From my perspective, steering the aircraft can be readily achieved by automation. It can even cope with abnormal events. Computing can "try" something, measure the response, adjust, and try again, all much faster than a human can even recognise there is an issue.
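That "try, measure, adjust, try again" cycle is just closed-loop feedback. A bare-bones sketch, with an entirely made-up plant model and gain:

```python
# "Try something, measure the response, adjust, try again" as a minimal
# feedback loop. The plant model and gain here are invented for illustration.

def plant(u):
    """Hypothetical aircraft response to control input u."""
    return 2.0 * u

def adjust_until(target, gain=0.3, tol=1e-3, max_iters=1000):
    u = 0.0
    for _ in range(max_iters):
        error = target - plant(u)   # measure the response
        if abs(error) < tol:        # close enough: stop adjusting
            break
        u += gain * error           # adjust the input and try again
    return u

print(round(adjust_until(10.0), 3))   # 5.0 -- plant(5.0) hits the target
```

Each pass of the loop takes microseconds; the discrepancy is corrected long before a human would even register it.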

Human operations require human operators, as we don't necessarily correspond to the same rule-set as a physical item.

While an aircraft needs to support human physiology then there is little to no advantage gained from adding the automation necessary to mimic human decisions. It is better to use a human.

Currently we appear to be designing a long way from the optimal point. We put automation on board that removes the pilot from the loop, other than as an operations director, but we don't give it the authority to act fully. The pilot is mostly removed from the minute-to-minute situational awareness of what the aircraft is doing, but is suddenly catapulted from monitoring to handling with no time to appraise. Appraisal of the situation is the strength of the pilot, if the automation can buy them some time to make a decision and communicate it. At the moment the rules and traditions for implementing automation, and the level of information provided to the pilot, just don't seem to achieve this aim. The automation isn't allowed to control, but the system is too complicated for a pilot to quickly comprehend what does and doesn't require manual intervention.

I don't like referring to AF447, but do people think the outcome would have been different if, rather than saying "you have control - well, mostly", the "system" had said: "Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free-run inertials and GPS. Let me know if you want me to do something different; meanwhile I've switched pitot heat on and I'll let you know if anything changes."
dClbydalpha is offline  
Old 3rd Nov 2015, 14:14
  #170 (permalink)  
 
Join Date: Jun 2000
Location: last time I looked I was still here.
Posts: 4,507
Likes: 0
Received 0 Likes on 0 Posts
As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.

Review the oft quoted definition of a skilful pilot. (light heartedly)

The other poignant issue is the coordination, or lack thereof, between airline a/c design and airline pilot training. They seem to happen in isolation. The former can head off in any direction in leaps & bounds, driven by technocrats and accountants, and as long as it meets XAA specs and is cheaper long-term they go ahead. Out it comes, every 15 years or so: a new bag of bells & whistles that is more sophisticated, trouble-free and crash-proof than the generation before. The latter, meanwhile, plods along with great inertia. The only real difference I've noticed over 30 years is that the training has gone from 250hrs to 148hrs (CPL), MCC has been thrown in, and the MPL is now the rage. But just how much of it is focused on the 'technology' that is going into the next generation of a/c? A short, intense TQ course with very strict SOPs that teach only one method of doing anything, and is well short of the total capability of the systems, is not IMHO satisfactory training. I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c. Meanwhile we test to the same criteria as a B732. Hence my comment about uncoordinated training programs.
RAT 5 is offline  
Old 3rd Nov 2015, 20:21
  #171 (permalink)  
 
Join Date: Feb 2005
Location: flyover country USA
Age: 82
Posts: 4,579
Likes: 0
Received 0 Likes on 0 Posts
Can automated systems deal with unique events?
Of course they can. Just use logic similar to that of the Climate Change models, used to predict with great certainty the state of Earth's climate 30, 100, or 1000 years hence!
barit1 is offline  
Old 3rd Nov 2015, 20:37
  #172 (permalink)  
 
Join Date: Jan 2011
Location: on the cusp
Age: 52
Posts: 217
Likes: 0
Received 0 Likes on 0 Posts
I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c.
I hope not. When I learned to drive, my car had a manual choke. I understood what it was for and how it worked. I've even driven a car that had an ignition advance lever. Now I drive a car where the throttle is a digital input to a computer. A computer is even used to decide exactly how much brake pressure is applied to each brake. I don't have to understand how it works; I just press pedals. The automation is transparent to me. This may be because cars have to be capable of being driven by the vast majority of the population. The relationship between pilots and aircraft is very different, and the industry is required to design automation in a particular way. We need as an industry to redefine that relationship along the lines of "operator" and "steering", as suggested by John Farley's post. A better definition of roles and responsibilities will allow clear boundaries to be drawn in terms of what needs to be communicated between human and machine, and of what knowledge and skill a pilot requires to fulfil their part of the system.

At the moment we have the scenario where the aircraft can hand a "bag of spanners" (thanks Tourist) to the pilot without the courtesy of passing it over handles first. I feel this has to change.
dClbydalpha is offline  
Old 4th Nov 2015, 00:17
  #173 (permalink)  
 
Join Date: Nov 2015
Location: New Zealand
Posts: 2
Likes: 0
Received 0 Likes on 0 Posts
Computers certainly do have the ability to 'reason' in a functionally equivalent way to humans (whether the mechanism by which they do so is similar is debatable).

Consider the following as an example of what most would regard as a 'unique event'.

A single engine aircraft is approaching a runway for landing when, simultaneously, a large moose and a small child enter the aircraft's landing path. Immediately on attempting to go around, the engine fails.

Could a computer control the aircraft and achieve a statistically better outcome than a human pilot, even though it is highly unlikely the software would have been programmed explicitly for this scenario?

Given the current state of the art of autonomous devices, it is already probable that it could.

Up to the point of the runway incursion, we can assume that current-era technology could have the aircraft lined up and able to land successfully.

Detecting the runway incursion would require a vision system. Self-driving cars already have such systems and are able to navigate the vehicle to avoid obstacles. Aircraft, having more degrees of freedom than a car, actually have an advantage here, and the system would command a go-around. At the point the engine failure occurs, the range of available trajectories decreases significantly. Let's assume our motion prediction system can calculate that range as anything from crash-landing the plane short of the incursions, to hitting both objects, to impacting one or the other of them.

What should the system do? Can the system 'reason' that it should aim to save the plane, the moose or the child? First it would need to recognise the objects in its path and determine a "consequence of impact" value for each. The best outcome might simply be the solution that minimises the overall sum of those values.

Object recognition and classification is well within the domain of current technology (think Xbox Kinect for a consumer-available example). Having classified the objects it 'sees', all the data needed to calculate the consequence of a collision is then available.

Things get interesting, of course. A simple algorithm might infer it is best to collide with smaller objects rather than larger ones of similar density, disregarding that children have higher intrinsic value than moose. A slightly more complex system might attempt to assign 'intrinsic value' to the objects.

However, the data for such a decision tree might be as simple as {object/animal/twolegs=1, object/animal/fourlegs=2, object/animal/nolegs=3, object/animal/unknownlegs=4}.

You can of course build any data structure you like to classify the real world. This is where the learning aspect of a computing system comes in. Over time, many systems, if able to communicate, could adjust these parameters to minimise the number of negative outcomes.

Considering that all of the above could be computed for an optimal solution 60+ times per second by a sufficiently powerful system, it is probable even now that computers could significantly outperform humans in 'unique events'.
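For what it's worth, the consequence-minimisation idea above fits in a few lines. Everything here (the class names, the consequence values, the candidate trajectories) is invented purely for illustration; a real system would derive them from its vision and classification stack:

```python
# Sketch of choosing the trajectory with the least summed "consequence
# of impact". All class names, values and options are illustrative only.

CONSEQUENCE = {
    "object/animal/twolegs": 100,   # child: highest assigned intrinsic value
    "object/animal/fourlegs": 20,   # moose
    "object/aircraft": 30,          # hull damage from a deliberate short landing
}
DEFAULT_CONSEQUENCE = 50            # unclassified obstacle

def cost(objects_hit):
    return sum(CONSEQUENCE.get(o, DEFAULT_CONSEQUENCE) for o in objects_hit)

def best_trajectory(options):
    """Pick the candidate trajectory whose impacted objects sum to least harm."""
    return min(options, key=lambda name: cost(options[name]))

options = {
    "land_short": ["object/aircraft"],   # crash-land before the incursion
    "hit_both":   ["object/animal/twolegs", "object/animal/fourlegs"],
    "veer_right": ["object/animal/fourlegs"],
}
print(best_trajectory(options))   # veer_right: the moose takes the hit
```

With these (assumed) weightings, the optimiser sacrifices the moose; re-run that loop 60+ times per second and the choice tracks the evolving trajectory set.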

I don't expect it to happen any time soon in real life though as aviation seems determined to stay in the technological dark ages.

Last edited by Symbion90; 4th Nov 2015 at 02:56. Reason: typos
Symbion90 is offline  
Old 4th Nov 2015, 02:47
  #174 (permalink)  
 
Join Date: Dec 2013
Location: Norfolk
Age: 67
Posts: 1
Likes: 0
Received 0 Likes on 0 Posts
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?

What cannot be determined by an AI system is the background history associated with the objects. Is the vehicle autonomous or a bus full of school children that has just been hijacked? Is the animal one of the last breeding pair on the planet? Is the human intentionally trying to commit suicide?

Vehicles are replaceable, critically endangered species are not and while human life should be sacrosanct, the sad truth is that human life is cheap in cash terms.

A human pilot may well be aware, or informed, of facts that an AI system will just not be equipped to recognise, so killing a single human may well be the least worst option, rather than potentially killing a bus load of people or wiping out a species.

But if an AI system is ever set up in such a way that it is capable of making the decision to kill someone in preference to somebody else, that is the start of a very dangerous technological development. We are already seeing the development of such capabilities in autonomous drones, but at the moment at least, a human operator allegedly makes the final decision.

Of course we have been wiping out species all over the planet from the day humans evolved, so the logical answer is to minimise human casualties - but that decision is influenced by cultural bias. Other cultures and societies may hold animal life higher than human life, particularly for rare or endangered species.

The potential development of artificial intelligence should give everyone pause for thought and instill a great deal of concern about oversight and who controls such systems, or even if they can be controlled once released on the world.
G0ULI is offline  
Old 4th Nov 2015, 03:15
  #175 (permalink)  
 
Join Date: Nov 2015
Location: New Zealand
Posts: 2
Likes: 0
Received 0 Likes on 0 Posts
Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?
It's not 'burdened' as such, unless calculating the decision loop slows below an acceptable rate. 99.999% of the time such 'reasoning ability' is not required, but software is weightless, so why not carry it along?

What cannot be determined by an AI system is the background history associated with the objects.
I'd disagree. Look at the way Google performs speech recognition or language translation. One of the reasons it is accurate is that it has the context of billions of other searches performed against a massive database in all languages. It uses that history to infer the most likely context for words far more accurately than interpreting single words or sounds.
Symbion90 is offline  
Old 4th Nov 2015, 06:29
  #176 (permalink)  
 
Join Date: Sep 2014
Location: Canada
Posts: 1,257
Likes: 0
Received 0 Likes on 0 Posts
There was a time when people wouldn't get into lifts without a human operator...

NPR: Remembering When Driverless Elevators Drew Skepticism
peekay4 is offline  
Old 4th Nov 2015, 11:40
  #177 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Lots and lots and lots of opinions without supporting evidence from the naysayers.....

Saying a thing doesn't make it true.

To be fair, neither does supporting material on the Internet, but it certainly lends a bit of credence.

"Nobody will get on a steam train!"
"You will die if you go faster than 100mph!"
"Machine looms will never replace the craftsman"
"A hand built car will be superior"
"A computer will never beat a grandmaster at chess"
"It will never fly"
"We can't get to the moon"
"Nothing can go faster than light"
"There will never be a market for more than a few computers"
"Nobody wants a camera on a phone"
"You will die if you sail west"
Tourist is offline  
Old 4th Nov 2015, 12:15
  #178 (permalink)  
 
Join Date: Mar 2004
Location: Oxfordshire
Age: 54
Posts: 470
Likes: 0
Received 0 Likes on 0 Posts
If we can accept that pilots are 'allowed' to crash planes from time to time when the odds are too heavily stacked against them, then why should we hold automated systems to a higher standard?

Surely if (and that's the question still unresolved) pilots 'cause' most of the crashes, then replacing them with automation which will vastly reduce the number of crashes is a good thing, even if those automated systems do still crash from time to time due to the unique / unforeseen events?
glum is offline  
Old 4th Nov 2015, 15:00
  #179 (permalink)  
 
Join Date: Aug 2003
Location: Surrey
Posts: 1,217
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by G0ULI
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?.....
Given the past performance of pilots, it is reasonably probable the pilot will totally forget about the detail of the objects - or possibly fixate on the child and then fail to execute a safe engine-out landing. The odds are slim that a pilot would have the mental bandwidth to (for example) determine that going well below best-glide speed to lose altitude, then accelerating to a safe engine-out flare speed while aiming at the undershoot, would allow the aircraft to be going less than 20 knots at impact, thus minimising the risk of fatal injury to passengers and obstacles alike.

There are real questions about the reliability of today's computer systems that would IMHO prevent large-scale passenger transport with no human able to intervene in the event of a system failure. However, it is clear that (despite some creative attempts to construct 'moral dilemma' unique events) computers today can sense and respond to the outside world better than people, within their design scope (a very important qualification). Moreover, the design scope of an 'automated aircraft' could cover a vast array of situations, and the computers would achieve significantly lower hull losses and loss of life (both on the ground and in the air). There would still be situations outside the design scope, where the computer could reach a point with no available options, resulting in a catastrophe that a human might have averted.


The record of self-driving cars to date is instructive in the advantages and disadvantages.
Advantage - they report every incident and have a much lower rate of serious incidents than human-driven cars (as far as I can tell, 0 so far).

Disadvantage - they slavishly follow the law and traffic rules, which appears to result in them being hit from behind far more often than normal cars. Think of parking lots with a 5 mph speed limit: only your grandma or a Google car will actually be doing 5 mph, so the human behind doesn't expect it and hits the car in front!

Automatic cars will 'always' see the hazard and avoid it to a much higher standard than humans. However, they will frequently surprise the human who thinks he could have bent the rule and squeezed in, and who is then surprised the automatic car didn't move out of the way.
mm_flynn is offline  
Old 5th Nov 2015, 00:50
  #180 (permalink)  
 
Join Date: Jan 2015
Location: Near St Lawrence River
Age: 53
Posts: 198
Likes: 0
Received 0 Likes on 0 Posts
"Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free run inertials and GPS. Let me know if you want me to do something different, meanwhile I've switched pitot heat on and I'll let you know if anything changes."
Firstly, airspeed sensing design should change for increased reliability. The pitot tube is about 100 years old. Usually, pitot tubes get clogged with ice in thunderstorms, and thunderstorms bring a lot of turbulence = noisy input for the inertials. Good luck with that:
https://www.youtube.com/watch?v=a5FrIDwq-qE
_Phoenix is offline  

