PPRuNe Forums - Man-machine interface and anomalies
12th Apr 2012, 20:33 – #23
safetypee
 
TT, “The intelligence interface becomes the problem” (#22).
I have some agreement with this view, but not as you describe it – human and machine intelligence cannot be equated.
Whatever limited ‘intelligence’ an aircraft has is the result of human ‘programming’; so “the bells, lights, clicks, horns, and ECAM messages” carry a prioritisation relating to the safe operation of the aircraft, specified by the design team. However, as the design team cannot foresee all possible circumstances, there remains a requirement for human ‘intelligence’ to seek out specific information and apply understanding – which depends on the context of the situation and on background systems knowledge. Thus the ‘intelligence’ interface is a partnership requiring a balance of activities, a balance influenced by the overall system context. It is not an easy balance, as technology never holds responsibility (that is exercised via human intelligence) – ‘technology is dutiful but dumb’.
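As a rough illustration of that design-time prioritisation, a minimal sketch in Python (the three-tier scheme, the names, and the messages are my own assumptions, not the actual ECAM logic): alerts are presented in the order the design team fixed, regardless of the order in which they arrive.

```python
from dataclasses import dataclass, field
from enum import IntEnum
import heapq

class Level(IntEnum):
    # Assumed three-tier scheme, loosely modelled on warning/caution/advisory.
    WARNING = 0    # immediate crew action required
    CAUTION = 1    # crew awareness; action may be required
    ADVISORY = 2   # information only

@dataclass(order=True)
class Alert:
    level: Level
    seq: int                           # tie-breaker: arrival order within a level
    message: str = field(compare=False)

class AlertQueue:
    """Presents pending alerts in the priority order fixed at design time."""
    def __init__(self) -> None:
        self._heap: list[Alert] = []
        self._seq = 0

    def raise_alert(self, level: Level, message: str) -> None:
        heapq.heappush(self._heap, Alert(level, self._seq, message))
        self._seq += 1

    def next_alert(self) -> Alert | None:
        return heapq.heappop(self._heap) if self._heap else None

q = AlertQueue()
q.raise_alert(Level.ADVISORY, "FUEL IMBALANCE")   # hypothetical messages
q.raise_alert(Level.WARNING, "STALL")
assert q.next_alert().message == "STALL"          # warning pre-empts advisory
```

The point is that every ordering decision here was made in advance by a human, for the circumstances that human could foresee.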

The human has the advantage of being able to construct context – to understand the situation (though that understanding can be flawed); the machine is very limited: consider, for example, how it might differentiate between a null (failed) value, an erroneous value, and a valid signal.
This is not a failure of machine intelligence, but a limitation of design and engineering. Nor is it the dividing line between being able to fly the aircraft or not – the loss of autoflight and envelope protection are consequential issues.
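To see why that differentiation is hard, a minimal sketch (the thresholds and the voting rule are illustrative assumptions, not any certified monitor):

```python
from statistics import median

def classify(value: float | None, peers: list[float],
             valid_range: tuple[float, float] = (30.0, 350.0),
             max_disagreement: float = 15.0) -> str:
    """Classify one airspeed-like reading against redundant sources.

    Illustrative only: real monitors add rate limits, persistence
    timers and source-specific failure flags, not a single snapshot.
    """
    if value is None:
        return "null"        # the sensor flagged itself as failed: the easy case
    lo, hi = valid_range
    if not lo <= value <= hi:
        return "erroneous"   # outside the physically plausible envelope
    if peers and abs(value - median(peers)) > max_disagreement:
        return "erroneous"   # plausible alone, but disagrees with its peers
    return "valid"           # consistent -- yet possibly still wrong, e.g. under
                             # a common-mode fault affecting every source alike
```

Note the last case: a reading can pass every check the designers thought of and still be wrong, which is exactly where human construction of context has to take over.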

Serious problems arise at the limits of the machine ‘intelligence’, where the interface becomes ambiguous (‘capability’, strictly, since machines cannot think). In these situations the human is required to change mode of operation – of thinking – which requires deeper understanding of the situation and its implications.
Many accidents involve a failure of the human to recognise the need to change ‘mode’, or, where the need is recognised, a limited ability to operate in the changed mode – all of which is influenced by context.

Without intelligence, technology cannot describe its own graceful degradation; where a human can indicate the approach to limiting performance, technology just quits.
Why should we expect anything else? Many posts state what their authors need from technology – usually context-dependent and biased (by human cognition). But consider: what if the reasoning behind these human wish lists is flawed – what if we have inappropriate expectations, or mental models, of the big system involving humans and technology?
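Returning to the ‘technology just quits’ point above: a sketch of what describing degradation might look like (the names, thresholds, and behaviour are my own assumptions, not any real autoflight system) – automation that announces its shrinking margin before handing over, rather than disconnecting silently at the limit.

```python
def control_step(demand: float, authority: float = 1.0, announce=print) -> float:
    """Apply a demand within a fixed authority limit, reporting the
    approach to the limit instead of failing silently at it."""
    margin = 1.0 - min(abs(demand) / authority, 1.0)   # remaining headroom, 0..1
    if margin == 0.0:
        announce("AUTHORITY LIMIT REACHED - handing over")  # degrade overtly,
        raise RuntimeError("automation limit reached")      # not a silent quit
    if margin < 0.2:
        announce(f"approaching limit: {margin:.0%} margin remaining")
    return max(-authority, min(demand, authority))
```

Even this is only the designer’s foresight again: the only margins it can report are the ones somebody anticipated.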

An alternative view: Don Norman, “The Design of Future Things.”