Automation dependency stripped of political correctness.
Old 24th Jan 2016, 11:25
  #179  
alf5071h
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
1202, “… that we have no statistics about how often the humans actually saved the day.”
Yes, but more than that, we have little understanding of the mechanisms behind any statistics.

In Fig 1, the negative HE-LOC vector is given as percentages, but in order to improve safety we need to understand how these adverse outcomes came about.
One simplified view considers incorrect situation assessment or incorrect choice of action (Orasanu); turning this around, the successes might represent appropriate assessment or choice of action. An alternative view holds that behaviour in adverse events and in successes has the same basis, and thus the successes should be investigated just as fully (Hollnagel).

In some successes the initial assessment/action was not as required; e.g. of the 20+ ICI/ADC events prior to AF447, several aircraft pitched up, but subsequent action (adjustment) prevented a stall. Another view involves a continuous process of adjustment: reviewing awareness and action based on each interim outcome.
A further aspect of success involves the really hidden events, e.g. where an unmodified aircraft faced the ICI situational threat but the crew managed the situation (adjusted behaviour) to avoid an unwanted outcome: normal operation, a non-event (Weick et al., 'safety is a dynamic non-event').
Successful crews / operators appear able to manage 'potential accident scenarios': not just avoiding the fatal accidents, but also all events which could have adverse outcomes. Yet the solution need not be 'machine' (or SOP) perfect, only acceptable for the situation (machine ~ technology / automation).
This may involve (situation) recognition-primed / naturalistic decision making, the basis of which may not be completely feasible with machine-based decision making. A machine might provide better situation assessment, but not the choice of action, which may depend on learning. This assumes that machine learning is based on previously encountered situations, whereas human learning enables previously experienced situations to be extended to other, un-sampled situations (intuition?); thus for machines to have sufficiently reliable 'intuition', the boundaries of this process might have to be programmed by the fallible human.

As for safety improvements, Orasanu considers machine-aided awareness, but also complementary crew training to improve experience and judgement: airmanship.
For choice of action, a machine may help in judging risk, but such a judgement would require an understanding of both the situation and the proposed action (what has the human decided to do?); i.e. machines may be better at catching an 'error' than at making the decision, e.g. EGPWS.

Orasanu. http://www.dcs.gla.ac.uk/~johnson/pa...ithlynne-p.pdf

Hollnagel. https://www.scribd.com/doc/296474809...hat-Goes-Wrong

Weick. Managing the Unexpected - University of Michigan Business School

Klein. http://xstar.ihmc.us/research/projec...ensemaking.pdf
And http://xstar.ihmc.us/research/projec...semaking.2.pdf
And http://psych.colorado.edu/~vanboven/..._expertise.pdf

Other refs: http://high-reliability.org/Critical...Johns_2010.pdf

Error management in aviation training