jcjeant, considering the various regulations: a requirement for an indication of failure is not the same as a requirement for an indication of system state. The pitots did not ‘fail’; the problem was that the data they provided was not reliable.
Similarly, before considering the ‘risk of error’, what is an error? A system should be designed to minimise risk, but it cannot ensure that there is no risk, nor that there will be no error.
Winnerhofer, irrespective of your experience with ASI problems, what was your experience in unusual ice crystal conditions? And please do read an HF reference on the limits of human performance.
As per CONF iture’s link, Airbus had associated the pitot deficiency with ice crystals. Unfortunately the manufacturer, the regulator, the operator, or all of them, continued to focus on the pitot system as the threat (flight with UAS) instead of the conditions which triggered the system problems.
Looking at this accident with hindsight, and with a ‘Safety-II’ view (resilience: the human as a help, not a hazard), a backward search can identify a situation where a crew’s normal behaviour could have avoided the pitot ‘failure’ situation altogether, by avoiding ice crystals / Cbs. The normal, everyday behaviour, as opposed to LoC recovery, would be using Wxr to deviate around Cbs, with a greater distance margin due to an awareness of the ice crystal threat.
Comparing flight paths with those of other aircraft, this operation might be questioned, as might the crew’s knowledge of the ice crystal threat, as above.
Thus the safety weakness lay in an incorrect choice of threat, possibly aided by viewing the human as a hazard requiring refresher training.