PPRuNe Forums > Flight Deck Forums > Tech Log

Automation dependency stripped of political correctness.

Old 20th Jan 2016, 10:03
  #121 (permalink)  
 
Join Date: Sep 2008
Location: Scotland
Posts: 891
Received 6 Likes on 2 Posts
Since the high level of accidents in the 70s we have had:

- New generations of aircraft
- Better automation
- TCAS (which is generally there to protect against human error, mainly ATC's)
- GPWS/EGPWS
- The CRM revolution

As far as I'm aware, without the ability to crunch MORs, NASA ASRS reports, company safety reports etc., we only really know that the accident rate has decreased, and that a number of high-profile accidents have involved human failings.

I am interested in how one can ascribe a certain percentage of the accident-rate reduction to the different factors above, and to others I have not mentioned. I would welcome links to studies that attempt to do that.
Jwscud is offline  
Old 20th Jan 2016, 10:28
  #122 (permalink)  
 
Join Date: Feb 2000
Location: 500 miles from Chaikhosi, Yogistan
Posts: 4,295
Received 139 Likes on 63 Posts
Just to clarify the point about the Apollo 11 rendezvous radar being switched on early and overloading the guidance computer...

I just watched an interview with Buzz Aldrin in which he said he switched it on deliberately so it would be ready for the departure and subsequent rendezvous. There was no published restriction on it, as the design team never thought it would be activated before it was needed.
compressor stall is offline  
Old 20th Jan 2016, 12:26
  #123 (permalink)  
Thread Starter
 
Join Date: Jun 2000
Location: Australia
Posts: 4,188
Likes: 0
Received 14 Likes on 5 Posts
Since the high level of accidents in the 70s we have had:

- New generations of aircraft
- Better automation
- TCAS (which is generally there to protect against human error, mainly ATC's)
- GPWS/EGPWS
- The CRM revolution
CRM is a cottage industry that has swept through most airlines of the Western world. My guess, from personal experience, is that CRM has not been effective in preventing accidents in the vast majority of those airlines where ethnic culture is the dominant factor in the cockpit. And that is lots of airlines, large and small. There may be much bumf written about CRM in those operators' Operations Manuals; but lip service, in the way of written words, is about as far as it goes.
Centaurus is offline  
Old 20th Jan 2016, 13:44
  #124 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Sorry Dog
Even if you take a gross example of error like AF447, sure those pilots could have done 10 things differently to save things, but you could say some of the same things about the automation, such as.... Why wasn't there a degraded autopilot mode, why wasn't there a different sensor mode to estimate the plane's airspeed, etc, etc.
There was a degraded autopilot mode.

It was called "hand it to the pilot"

Airbus aircraft are not designed to be autonomous and deal with all problems themselves, so they don't.
You cannot reasonably judge the feasibility of autonomous aircraft based upon aircraft that were never intended to be autonomous.


The design philosophy (rightly or wrongly) is that the systems will look after everything whilst it is all going fine, then hand more and more to the pilot once things are not.
They also work under the assumption that the pilot can do it all himself if necessary.

This may well have been true before Airbus and other modern aircraft came along, but 10yrs of monitoring Airbus magic can kill the skill level of even a Chuck Yeager.


The simple fact is that if you designed a scenario to make life difficult for human beings, modern airline operations tick nearly every box.

Humans are nothing like as good at repetitive dull actions as machines.

Humans are very poor at monitoring systems that very rarely go wrong.

Not just that, but the people who are better at this sort of thing (without being too offensive, shall we call them people of limited imagination) are not the sort of people who make pilots.

Machines never tire of watching an EPR gauge. They are never tired, they never miss a single flicker and they can watch vastly more metrics than a human ever could.


https://www.newscientist.com/article...ncentrate-for/

Humans need continuous practice to be good at things.
Machines not only don't need practice, but once one machine has learnt something, all machines can know it.
Autonomous aircraft will have "pilot error" crashes, but at least each accident should be the last of its type. All the others can be programmed not to make the same error.
We have tried to do that forever with humans, but the simple fact is that the same errors appear again and again in accidents.

Humans cannot concentrate on more than a very limited number of things at once.

Humans Can Only Think About Four Things At Once, Study Says - InformationWeek


Humans do not operate well when tired.
Airline ops fatigue everybody.
Machines do not tire.

Last edited by Tourist; 20th Jan 2016 at 14:11.
Tourist is offline  
Old 20th Jan 2016, 13:47
  #125 (permalink)  
 
Join Date: Jun 2000
Location: last time I looked I was still here.
Posts: 4,507
Likes: 0
Received 0 Likes on 0 Posts
So in parallel with the question of automation dependence there may be a fear of violation or non-compliance. This goes straight back to the sense of autonomy that should be inherent when ensuring that "the successful outcome of the maneuver is never in doubt".

Don't get me wrong - SOPs are the bedrock of a good safe standard operation, and adherence to them is vital. BUT - there are times when it is necessary to apply basic understanding - and this requires education rather than the slavish following of rules.


Mansfield & Bergerie have both made the same great point. It might not be absolutely in the 'automation dependency' file, but it is relevant. I saw so much over the past years whereby cadets join a rigid-SOP airline. They are taught only one method of achieving a task. They are keen but green pilots, terrified of non-compliance with SOPs, be it using automatic or manual flight. As a result of having a small knowledge base, and a blinkered view of aviation requirements, they are often slow to realise that their singular technique was not the best one applicable to their current scenario. Further, their knowledge of all the possibilities available within the AFDS was far too limited.
A simple example: SOP for departure says no use of V/S until flaps are up. SOP for altitude capture says max 1000fpm within 1000' of the cleared altitude. 99% of the time, no problem. Now they have a SID which requires F1 to achieve the turn radius. SID cap = 4000'. Passing 3000', F1 ROC = 1500fpm. Inbound traffic is descending to 5000' and a TA occurs. I suggest V/S 1000fpm to avoid a possible RA (and the associated paperwork). The F/O now has a major mental conflict: SOP compliance or application of airmanship. He thinks it is forbidden (a violation? unsafe?) to use V/S, but the Boeing FCTM has no such restriction. This is a very simple example, but there are similar conflicting occurrences where it is more complicated. In those latter scenarios, not using the best method available can cause some confusing consequences and delay in action.
Even worse, I once flew with an outfit that had just received their new VNAV/LNAV a/c. They had been taught that in a LVL CHG descent you increase ROD by increasing MCP speed. So there they were: too high downwind, 210kts, clean, and ATC gives descent to circuit altitude, a step down of 4000' before the base turn. Solution 1: LVL CHG, increase drag and maintain 210kts, ready for flaps later in the circuit. Solution 2: LVL CHG, speed 250kts (above flap speed). When solution 1 was insisted upon there was a confused silence.
I'm concerned about automation dependence diluting manual flying skills, but I'm more concerned about automation dependence causing a dilution of airmanship, situational awareness, and effective management of the flight. IMHO automation dependence, combined with only 50% knowledge of what the automatics are capable of, is very dangerous.
I see too much of this: after a selection, the a/c does not perform in the manner you expected. There is a confused pause to try to figure out why, rather than making another, simpler selection and forcing the a/c to do what you want. Even worse, a selection is made, it is assumed the a/c will do what you want, and there is no follow-up monitoring while you go off on another task until quite a few moments later. You are really behind the a/c now, playing catch-up.
What is missing in normal ops is the application of a non-normal scenario-management technique. There are many titles for this, but in principle you identify a problem, consider options, select one to apply, action it and review its effect. There is a complete circle with a feedback loop. That structure is often missing in the use of automatics in normal ops. Why? Because it is taught only in non-normal ops, not in normal ops. IMHO that is a lack of training awareness and part of the modern trained-monkey syndrome.
Couple all this with quick low-hour commands and low-time SFIs and the dilution process is complete. In the old days (not always good, old) TR courses were conducted by training captains; recurrency could be done by F/O SFIs. Command upgrade courses were all captain TRI/TREs. Commands were minimum 5000hrs. Now there are F/O SFIs training at 2 years, even on initial sessions of command courses, and commands at 3000hrs. Is this the best way forward, given that the students under training are now joining with so little aviation experience? It's not quite the blind leading the blind, but it does lead to an education based very heavily on strict following of rigid SOPs, singular methods of operation and heavy use of automatics.
RAT 5 is offline  
Old 20th Jan 2016, 14:00
  #126 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by Jwscud
Since the high level of accidents in the 70s we have had:

- New generations of aircraft
- Better automation
- TCAS (which is generally there to protect against human error, mainly ATC's)
- GPWS/EGPWS
- The CRM revolution

As far as I'm aware, without the ability to crunch MORs, NASA ASRS reports, company safety reports etc., we only really know that the accident rate has decreased, and that a number of high-profile accidents have involved human failings.

I am interested in how one can ascribe a certain percentage of the accident rate reduction to different factors above and others I have not mentioned. I would welcome links to studies that attempt to do that.

Let us look at your list, which I fully agree with.

1. New generations of aircraft.
Things go wrong less often, thus requiring less input from the pilot to avoid a crash.
=Less reliance on pilots being capable of dealing with emergencies.

2. Better automation.
=Less reliance on pilots being capable of actually flying (the downside being reduced pilot capability!).

3.TCAS
Catches pilot/ATC errors and tells pilot what to do to solve problem.
=Less reliance on pilots being capable of spotting aircraft and avoiding.

4.GPWS/EGPWS
Catches pilot/ATC errors and tells pilot what to do to solve problem.
=Less reliance on pilots being capable of maintaining SA.

5.The CRM revolution
In large part, in reality, a lot of CRM is following SOPs. When you sift it down, SOPs are an attempt to automate humans. They are an attempt to minimise human creativity in how to accomplish the various tasks. As many scenarios as the writers can think of have a prescribed way of accomplishing them.
This has been very successful at helping teams to integrate and work well, but to pretend it is not another instance of automation is slightly fuzzy thinking.
=Less reliance on pilots being capable of coming up with best solution on their own under pressure.


The result of this is that I don't think we need to work out the percentages between these factors.

All your factors once rendered down are to do with higher automation and less reliance on human pilots being good at their job.

There is a lesson to take away from that.

Every instance of higher automation that I can think of has improved safety overall.

Why stop now?
Tourist is offline  
Old 20th Jan 2016, 15:18
  #127 (permalink)  
 
Join Date: Feb 2015
Location: Alternate places
Age: 76
Posts: 97
Likes: 0
Received 0 Likes on 0 Posts
Mansfield, I very much enjoyed your comments regarding Taylorism. To me it showed a broader view of matters, which is something the "automation discourse" does need, including a touch of philosophy.

Such a predisposition helps to delineate and therefore to be aware of our assumptions behind some of the terms we use all the time but for which there will be slightly differing interpretations; so thank you for the entry.
FDMII is offline  
Old 20th Jan 2016, 17:00
  #128 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Mansfield, Bergerie, “compliance” reflex. AF447, yes, correcting an apparent height loss of 400ft, or complying with the loss of airspeed procedure – box items; both actions relating to ‘trained’ memories (pull up), or acquired beliefs and biases.
A similar viewpoint of other accidents might identify the same behavioural patterns: Colgan, where both crew members' actions were consistent with the recently learned tail-stall procedure; or resetting CBs in flight, or engaging the autopilot to resolve misunderstood situations.

The hazard in this behaviour is not necessarily the actor, but the source of the ‘norm’.
Inappropriate training, ill-conceived SOP’s, reliance on SOPs – fit the SOP to the perceived situation vs the need to assess the situation first.
How might we identify the source, avoid these aspects, create barriers or mitigations?

In most accidents we are unable to ask the pilots for their view of the 'norm', i.e. the origin of the 'reflex'. Of greater concern is that even with crew recollection they might not know, because the 'reflex' comes from subconscious memory; thus the problem of identification would include how these aspects came to be embedded in subconscious memory in the first place.

A further thought from modern ideas of cognition is that all of our thoughts and actions come from the fast acting subconscious process, and that slower conscious thought is only an error checker / detector. If so, then an inappropriate reflex action could be seen as a by-pass or failure of error detection, which could also be a function of the appreciation of time – another contributor to these types of accident.

A wider view of these accidents and the previous hypothetical example (#116), is that these accidents were ‘designed’ by others just for these crews, all the holes lined up, human variation peaked, limited attention resources, … , but no one identified these future circumstances.

Centaurus, CRM, agreed; “everyone knows what CRM is for, but no one knows what it is”, (it’s like an inverted bidet in English humour).
alf5071h is offline  
Old 21st Jan 2016, 01:21
  #129 (permalink)  
Thread Starter
 
Join Date: Jun 2000
Location: Australia
Posts: 4,188
Likes: 0
Received 14 Likes on 5 Posts
but 10yrs of monitoring Airbus magic can kill the skill level of even a Chuck Yeager.
Seems Boeing don't trust the skill level of lots of customer pilots. A Boeing test pilot told a friend of mine, who was a simulator instructor on the 787: "Boeing have designed the 787 assuming it will be flown by incompetent pilots."
Centaurus is offline  
Old 21st Jan 2016, 04:41
  #130 (permalink)  
 
Join Date: Feb 2001
Location: The Winchester
Posts: 6,555
Received 5 Likes on 5 Posts
"Boeing have designed the 787 assuming it will be flown by incompetent pilots."
I think it has crept onto other Boeings, given that they recently introduced a rather annoying "[] VNAV step climb" checklist on my type. Guess they think reading and understanding the FMC is a bit tricky for some.
wiggy is offline  
Old 21st Jan 2016, 05:28
  #131 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
I think it is an unavoidable chicken and egg situation.

When aircraft were poor, the constant struggle to stay alive and not lost kept the pilots skill levels high and stopped all but the gifted entering the profession.

Now that aircraft are fantastic, pilot skills wane through lack of usage and a lower standard of pilot can successfully enter the profession.

Boeing and Airbus are merely recognising the fact and attempting to mitigate.
Tourist is offline  
Old 21st Jan 2016, 05:40
  #132 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
As a slight aside, it seems to me that autonomous aircraft would mean that fewer aircraft would end up airborne with emergencies, and aircraft could safely get airborne heavier off shorter runways....

V1 was invented to deal with human failings.

Recognising that humans need thinking time, and have trouble making the correct decision under extreme time pressure and multiple inputs, V1 is a very good idea for humans.
It is a binary system designed around a best-guess set of circumstances, with flex built in for reaction time, after history showed that humans kept making the wrong decision.


A computer could simultaneously assimilate enough information about A/C energy state, actual position on the runway, engine thrust, engine health, A/C acceleration etc. to make a tailored decision for each circumstance, including the decision not to get airborne despite insufficient stopping distance being available.


Can anybody think of a reason why V1 would have to be retained?
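The real-time go/no-go check described above can be sketched with basic kinematics (a hypothetical illustration only, not any certified performance model; the function name, deceleration input and safety factor are all assumptions):

```python
def can_stop(groundspeed_mps, runway_remaining_m, decel_mps2, safety_factor=1.15):
    """Return True if the aircraft can stop in the runway remaining.

    groundspeed_mps: measured groundspeed in m/s (real data, not planned data)
    decel_mps2: currently achievable deceleration in m/s^2 (in a real system
        this would come from braking-coefficient and reverse-thrust models)
    safety_factor: illustrative buffer for unknowns such as contamination
    """
    # From v^2 = 2*a*d: distance needed to brake from v to a standstill
    stop_distance_m = groundspeed_mps ** 2 / (2 * decel_mps2)
    return stop_distance_m * safety_factor <= runway_remaining_m
```

Evaluated continuously through the takeoff roll, a check like this replaces the single pre-computed V1 with a rolling answer to "can I still stop from here?".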

Last edited by Tourist; 21st Jan 2016 at 05:57.
Tourist is offline  
Old 21st Jan 2016, 06:13
  #133 (permalink)  
 
Join Date: Jan 2016
Location: UK
Posts: 3,781
Received 66 Likes on 40 Posts
Originally Posted by Tourist
As a slight aside, it seems to me that autonomous aircraft would mean that fewer aircraft would end up airborne with emergencies, and aircraft could safely get airborne heavier off shorter runways....

V1 was invented to deal with human failings.

Recognising that humans need thinking time, and have trouble making the correct decision under extreme time pressure and multiple inputs, V1 is a very good idea for humans.
It is a binary system designed around a best-guess set of circumstances, with flex built in for reaction time, after history showed that humans kept making the wrong decision.


A computer could simultaneously assimilate enough information about A/C energy levels, actual position on the runway, engine thrust, engine health, A/C acceleration etc to make a tailored decision about each circumstance including making the decision to not get airborne despite insufficient stopping distance available.


Can anybody think of a reason why V1 would have to be retained?

The 80kts call, V1 and the go/no-go decision are merely a human-run algorithm, albeit with some human judgement applied.


There is still a point, depending on performance and weight, at which an autonomous aircraft cannot stop safely in the distance remaining.


Some judgement is needed though and it's not always black and white.


There was a thread on here maybe a few weeks ago regarding a 738 in Canada, I think, that got smoke in the cockpit below V1... they continued, as it is known that an APU-bleed takeoff after de-icing can cause smoke, and versus an RTO on a contaminated runway they decided continuing was the safer option (and I agree).

Can an autonomously flown aircraft detect and decide what's smoke and what's SMOKE? Will it be able to include all factors when weighing up decisions?



I think it would still have a multi-level go/no-go scheme, like we have now...
level one (below 80kts): it can stop for anything
level two (80kts to V1): it will stop for anything major (engine failure or any fire)
level three (above V1): it will continue.
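The tiered scheme above is straightforward to express as a decision table. A minimal sketch, assuming the 80kts boundary and the "major" alerts named in the post; everything else (names, alert strings) is illustrative:

```python
# Alerts treated as "major" in the middle speed band (illustrative set)
MAJOR_ALERTS = {"engine failure", "fire"}

def rto_decision(groundspeed_kt, v1_kt, alert):
    """Tiered reject-takeoff policy: stop for anything at low speed,
    stop only for major failures between 80kts and V1, continue above V1."""
    if groundspeed_kt < 80:
        return "STOP"
    if groundspeed_kt < v1_kt:
        return "STOP" if alert in MAJOR_ALERTS else "GO"
    return "GO"
```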


But then, if you can suffer a bird strike to an engine after V1 (as has happened several times), you can, not implausibly, suffer a bird strike to both engines. What does it do in the event of a "can't fly, won't fly" situation?
LlamaFarmer is offline  
Old 21st Jan 2016, 07:04
  #134 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by LlamaFarmer

There is still a point, depending on performance and weight, at which an autonomous aircraft cannot stop safely in the distance remaining.
Yes, I quite agree, but this need not be based upon a set speed. A computer can easily calculate it in real time from real-time information about energy levels/acceleration versus runway remaining.

Originally Posted by LlamaFarmer
But then, if you can suffer a bird strike to an engine after V1 (as has happened several times), you can, not implausibly, suffer a bird strike to both engines. What does it do in the event of a "can't fly, won't fly" situation?
That is exactly where a computer would have the advantage. It will be constantly monitoring the hundreds of engine metrics which are already recorded but are currently opaque to the pilot, and will have a far better idea of the health of the engines. If the aircraft knows it will not have enough thrust to fly, it can abort despite the lack of runway and accept the overrun.
This is very difficult to train a human pilot to do, even if he could assimilate the engine data.

Originally Posted by LlamaFarmer
Can an autonomously flown aircraft detect and decide what's smoke and what's SMOKE? Will it be able to include all factors when weighing up decisions?
There a computer may be in the same position as a human re diagnosis, but it still has the advantage.

1. Because there is no need to build in the reaction time/ worst legal pilot buffer.

2. There is no need to have a set speed delineating the go/no-go decision.
This can always be made in real-time by a computer.

i.e.

At any point when something triggers an alert, the computer can calculate the energy situation from real data, not planned data.
This is not tricky for a computer.
It knows whether it can safely stop on the runway.
This would mean that an autonomous aircraft with a smoke caution, well above the speed at which a manned aircraft would be committed to take off, could safely stop, sure in the knowledge that it had sufficient runway.

This can only lead to safer flights.

Last edited by Tourist; 21st Jan 2016 at 07:35.
Tourist is offline  
Old 21st Jan 2016, 07:16
  #135 (permalink)  
 
Join Date: Feb 2001
Location: The Winchester
Posts: 6,555
Received 5 Likes on 5 Posts
Tourist

I can see how there may well be advantages to having a real-world, dynamically/real-time calculated V1. I can see how it might well be a safety improvement if the emergency you are dealing with is an easily defined "binary" go or no-go type of failure (e.g. a fire warning, or "engine failure confirmed by at least 2 parameters...").

What I'm struggling with is how your automated decision-making process is going to handle more subtle/less binary events. Smoke has already been mentioned, but there's also the likes of an engine failure indicated by only one measured parameter but with an associated external noise, a "blocked runway" (what runway length is HAL going to use for the number-crunching?), or the good old abandon because, in the opinion of the captain, it "was unsafe to fly" (good luck with coding that one).
wiggy is offline  
Old 21st Jan 2016, 07:32
  #136 (permalink)  
 
Join Date: Jan 2016
Location: UK
Posts: 3,781
Received 66 Likes on 40 Posts
Originally Posted by Tourist
Yes, I quite agree, but this need not be based upon a set speed. A computer can easily calculate it in real time from real-time information about energy levels/acceleration versus runway remaining.


It knows whether it can safely stop on the runway.

It would still need safety factors... there are a lot of variables affecting whether it can stop safely in the distance it thinks it can.

Braking coefficient and reverse thrust being the main ones.


But what about in the event of an RTO where the cabin needs evacuating? Who makes the decision, and who deems it safe: the cabin crew or the aircraft?



A lot of your points are valid, and computers can make decisions much quicker than humans... but only the decisions they are programmed to make, and only using the factors they are told to take into account.
LlamaFarmer is offline  
Old 21st Jan 2016, 07:41
  #137 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by wiggy

What I'm struggling with is how is your automated decision making process is going to handle more subtle/less binary events - smoke has been already been mentioned but there's also the likes of an engine failure indicated by only one measured parameter- but there was an associated external noise, a "blocked runway" ( what runway length is HAL going to use for the number crunching?), or the good old abandon because in the opinion of the captain it "was unsafe to fly"..(good luck with coding that one).
These are valid points; however, at the very least, the dynamic V1 gives longer thinking/evaluating time to watch developments before making the decision.

A computer watching hundreds of parameters has a far greater chance of distinguishing spurious from valid engine problems than a human, particularly since it can monitor trends over short spaces of time.
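Trend monitoring of this kind is simple for a machine: fit a slope over a short rolling window of samples and flag a rapid deterioration. A minimal sketch (the class name, window length and threshold are illustrative assumptions, not real engine-monitoring values):

```python
from collections import deque

class TrendMonitor:
    """Flag a rapid downward trend in one engine parameter (e.g. EPR)."""

    def __init__(self, window=10, slope_threshold=-0.05):
        self.samples = deque(maxlen=window)
        self.slope_threshold = slope_threshold  # per-sample decline that triggers

    def update(self, value):
        """Add a sample; return True once the windowed trend is alarming."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        # Least-squares slope of value against sample index
        n = len(self.samples)
        mean_x = (n - 1) / 2
        mean_y = sum(self.samples) / n
        num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(self.samples))
        den = sum((i - mean_x) ** 2 for i in range(n))
        return num / den < self.slope_threshold
```

A human scanning the same gauge would be unlikely to notice a slow roll-off within a few seconds of data; a monitor like this sees it as soon as the window fills.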
Tourist is offline  
Old 21st Jan 2016, 07:46
  #138 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by LlamaFarmer
It would still need safety factors... there are a lot of variables affecting whether it can stop safely in the distance it thinks it can.

Braking coefficient and reverse thrust being the main ones.
Yes, it would still need some kind of safety buffer to account for unknowns.

Originally Posted by LlamaFarmer
A lot of your points are valid, and computers can make decisions much quicker than humans... but only the decisions it's programmed to make, and only using the factors it's told to take into account.
This is true; however, at the moment the decision-making process is necessarily extremely simplified because of human limitations regarding time and complexity.
It should not be impossible to at the very least match human capabilities.

i.e.

An autonomous aircraft is going down the runway.
An unknown/unplanned/confusing situation occurs.
Do I have enough stopping distance available?

No: take it airborne.

Yes: stop.
Tourist is offline  
Old 21st Jan 2016, 07:49
  #139 (permalink)  
 
Join Date: Jan 2001
Location: Home
Posts: 3,399
Likes: 0
Received 0 Likes on 0 Posts
Thinking about it, there is actually no reason that the same system could not be fitted to manned aircraft in the same way as TCAS and EGPWS have appeared to assist humans.

You might lose the benefit re the shock/reaction times but would still gain a lot.


Loud voice says "GO" or "STOP"

Too late to patent the idea now
Tourist is offline  
Old 21st Jan 2016, 08:16
  #140 (permalink)  
 
Join Date: Feb 2001
Location: The Winchester
Posts: 6,555
Received 5 Likes on 5 Posts
It should not be impossible to at the very least match human capabilities.

i.e.

An autonomous aircraft is going down the runway.
An unknown/unplanned/confusing situation occurs.
Do I have enough stopping distance available?

No: take it airborne.

Yes: stop.
Playing Devil's advocate here, but you might need a third option:
"stopping distance now unknown"...

As would apply in the case of a sudden runway incursion ahead during the take off roll - the blocked runway case....

(I'm not suggesting an answer - just wondering what logic you would suggest is used )
wiggy is offline  

