PPRuNe Forums - View Single Post - Can automated systems deal with unique events?
Old 29th Oct 2015, 11:24
alf5071h
Instead of asking ‘can’ automation manage unique events, we might consider ‘if’ we should use it, which might help clarify the critical issues.
Many views seek to eliminate ‘error’, yet the vast majority of operations tolerate error; humans adapt and successfully manage the variability in normal operations. There are many risks in seeking to eliminate ‘error’. We have scant understanding of the mechanism of ‘error’; in some views the processes of success and failure are the same, the difference being decided only by outcome.
How might fallible humans understand the very fallibility which makes them human? And how can we be assured that any solution will be infallible, or in particular that managing ‘error’ outcomes will not undermine the required successful ones?

One obstacle in our thinking is in applying the absolute (one or zero) view of automation to humans. Humans are not absolute; the human advantage lies in adaptability (reconfiguration), which might be better equated to probabilistic behaviour. No course of action is perfect, but it is normally good enough, and deciding what is good enough requires judgement.
Safety involves a balance, judging what is good enough in each and every situation, but often without adequate information or measure of what is ‘good’.

The human should not be seen as a hazard, but as a help, having unique inspiration and judgement which contributes to the successes in operation.
Instead of attempting to totally eliminate the undesired outcomes, we should promote the desirable aspects – the behaviours, thoughts and actions used today – and understand the situations in which they apply. If there are more good outcomes, there should be fewer opportunities for bad ones.

There are indications that the use of automation detracts from the learning processes; although the overall level of safety has improved with automation, the human contribution has not, and in some circumstances their effectiveness might be degraded.

There is also an important distinction between automation and technological aids, where the latter can encourage thought through improved awareness and decision making. EGPWS, ACAS, and windshear warning all use technology (not automation) to improve awareness and error detection. There are good arguments for automation in the event of inaction, but societal influences on safety intervene – who pays if inappropriate activation hurts someone (a litigious culture, an artefact of being safe)? – thus risk-averse management prefers to leave the responsibility with the sharp end.

It is important to consider the blunt end of operations; many of the contributors to unique events have ‘blunt’ origins. ‘Unique accidents’ normally involve many contributing factors, each necessary, but none in isolation sufficient. Thus identifying and uncoupling the conjunction of factors could be of greater benefit to safety and easier (cheaper) to implement than focusing on replacing the human with costly or untimely automation.
Blunt-end activities might be more receptive to technology and automation – databases, pattern matching, and the essential element of reduced time – but these still require human involvement in choosing what and when to implement.

The objective of change should be to promote the human advantage rather than constrain or replace it, particularly if the latter results in management, regulators, and manufacturers becoming the new sharp end, with the focus shifting to their fallibility – and then back onto the merry-go-round.

In order to progress we must look at the perceived problems and potential solutions in new ways; we have to change the manner of our thinking, not automate it.