PPRuNe Forums - View Single Post - AF 447 Thread No. 12
Old 25th Feb 2019, 21:17
Winnerhofer
 
Winnerhoffer C&P's someone else's work yet again

This article was cut and pasted by Winnerhofer from some other web site. It has been cleaned up to improve readability. The original work was compiled by Richard J. Ranaudo. (Mod)
Increasing automation has introduced new situational awareness challenges for pilots.

“We’re going to crash! … This can’t be happening!”

These were the last words of the first officer as Air France Flight 447, an Airbus A330, crashed into the Atlantic Ocean on June 1, 2009, killing all 228 persons on board.1 During the final minutes of the flight, the cockpit voice recorder painted a picture of confusion and frustration in the cockpit, likely due to the crew’s inability to understand what was happening.

The Flight 447 accident, according to the findings of the French Bureau d’Enquêtes et d’Analyses (BEA),² was precipitated by the loss of three sources of airspeed indication at high altitude due to blockage of the pitot tubes by ice crystals. Subsequently, the fly-by-wire flight control system reverted to a degraded mode, and the autopilot disconnected, likely startling the pilots.

This required manual handling of the airplane at high altitude — a requirement for which the pilot flying (PF) had no prior training. The PF began a climb to a higher altitude and unknowingly stalled the aircraft. He continued to make inappropriate control inputs until it crashed.

Among the accident’s causal factors, the BEA cited the “ergonomic features” of the warning system design and the manner in which pilots are trained for stall conditions — using methods that would not elicit the appropriate response behaviors in this situation.

In its findings, the BEA stated: “The crew, progressively becoming de-structured, likely never understood that it was faced with a ‘simple’ loss of three sources of airspeed information. In the minute that followed the autopilot disconnection, the failure of the attempts to understand the situation and the de-structuring of crew cooperation fed on each other until the total loss of cognitive control of the situation.”

The crew’s loss of situational awareness (SA) began a chain of events resulting in the accident.
Airmanship and SA
Airmanship skills are defined broadly as “the consistent use of good judgment and well-developed skills to accomplish flight objectives. This consistency is founded on a cornerstone of uncompromising flight discipline and is developed through systematic skill acquisition and proficiency. A high state of situational awareness completes the airmanship picture and is obtained through knowledge of one’s self, aircraft, environment, team and risk.”³

Maintaining SA in modern transport aircraft requires the attention and cognitive skills to sense and process information in a timely and accurate manner. Older-generation aircraft demanded a focus on motor skills, but modern, highly automated aircraft require more focus on attention and cognition.

According to a recent U.S. Federal Aviation Administration (FAA) report on human factors: “Highly automated systems in which the flight crew serves primarily as a monitor may reduce their awareness of system state, leading to longer response times in emergencies and loss of knowledge or skill. Additionally, humans are traditionally poor monitors, and as time spent in a purely monitoring mode increase[s], the ability to remain attentive decreases dramatically as does (their) performance.”4 The transport aircraft accident rate has continued to decrease since the introduction of more automated aircraft systems in the 1980s, as shown in International Civil Aviation Organization Doc 9683-AN/950, Human Factors Training Manual, and illustrated in Figure 1. While automation has improved safety and reliability, it has introduced new and different challenges to achieving the goal of safe flight operations.5 Stated another way, when it comes to human error, automated systems have not eliminated it; they have relocated it.
Figure 1 — Accident Rates After Introduction of Automated Aircraft
Source: International Civil Aviation Organization and Boeing

Limits of Human Information Processing
Humans are essentially limited-capacity, single-channel operators, which means that we are serial processors and cannot attend to independent input and output activities simultaneously without suffering a performance loss on other tasks.6 Human limitations in attention and memory resources, especially in a high-workload or stressful situation, can have a detrimental effect on achieving good SA. Mica Endsley, a noted human factors expert in SA, defined three levels or stages required to achieve good SA: Level 1 — Perception of the Elements in the Environment, Level 2 — Comprehension of the Current Situation and Level 3 — Projection of Future Status.7 These stages can also be mapped onto the stages of human information processing (Figure 2).8

Figure 2 — Information Processing Model



Source: Ranaudo, R., University of Tennessee Space Institute, “Human Factors” course notes; adapted from Wickens, C.D.; Flach, J.M., Human Factors in Aviation, Ch. 5, Academic Press, 1988, Wiener, E.L.; Nagel, D.C. (editors).

SA begins with the perception of a stimulus (visual, auditory, somatosensory, etc.) and proceeds to higher levels of cognition. As shown in Figure 2, working memory, which is a component of attention resources, is called upon to assess what is perceived, and may extract information from knowledge, experience or training stored in long-term memory to define the situation and decide what action or actions are required. Based on the result of the action, and the expectation of its outcome, a determination is made either to be satisfied with the result or to seek more cues and repeat the cycle. What this simplified model does not capture are the effects of stressors such as time compression, confusion and emotions arising from fear or from difficult interactions with others in a team process. Further complications occur when this process is triggered by a surprising or startling event. Startle and surprise, however, are terms that are frequently and incorrectly used synonymously. In reality, they are different phenomena, with different causes, responses and effects. A startle occurs quickly, as a result of an unexpected event such as a pistol shot or the blast of a loud horn, and elicits a physiological reaction — eye blinks, muscle tightening and elevated heart rate. Surprise, on the other hand, occurs when something does not react or behave as expected, such as a failure annunciation or a stall warning when no such failure is expected.
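As a purely illustrative aside, the perceive-comprehend-project-act cycle described above can be caricatured in a few lines of Python. Nothing below comes from Endsley's work or from any published tool; the class and function names are invented for this sketch.

# Illustrative sketch only: a toy rendering of the perceptual cycle in Figure 2,
# with Endsley's three SA levels as labels. All names are invented for the example.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Situation:
    cues: List[str] = field(default_factory=list)  # Level 1: perception of elements
    assessment: str = ""                           # Level 2: comprehension of the situation
    projection: str = ""                           # Level 3: projection of future status

def perceptual_cycle(sense: Callable[[], List[str]],
                     comprehend: Callable[[List[str]], str],
                     project: Callable[[str], str],
                     act: Callable[[str], bool],
                     max_cycles: int = 10) -> Situation:
    """Perceive, assess, act, then evaluate the outcome; repeat until satisfied."""
    situation = Situation()
    for _ in range(max_cycles):
        situation.cues = sense()                            # perceive stimuli
        situation.assessment = comprehend(situation.cues)   # draw on long-term memory
        situation.projection = project(situation.assessment)
        if act(situation.projection):                       # outcome matched expectation?
            break                                           # satisfied with the result
        # otherwise seek more cues and repeat the cycle
    return situation

The only point of the sketch is that SA is a loop rather than a one-shot assessment, and that every pass through the loop draws on the same attention resources that stressors can consume.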

Confusing automation behavior is a common source of surprise in modern aircraft. Surprises generally manifest themselves in subtle ways, such as the discovery of a slow loss of cabin pressure; however, surprises can also follow a startling event. “These events are known to interrupt information processing to the point where the selection and execution of actions become reactive and sequential instead of anticipatory and proactive,” according to a 2017 report.9 “Tunnel vision” can then occur, causing a cognitive lockup. Quite possibly, this explains the continued incorrect pitch inputs by the PF in the Air France Flight 447 accident; as the BEA said, “The excessive nature of the PF’s inputs (in a stalled condition) can be explained by the startle effect and the emotional shock at the autopilot disconnection.”

Training for Better SA
The capacity for attaining SA varies among individuals. The reasons are complex, involving differences in cognitive capabilities combined with knowledge and experience gained throughout life. Nevertheless, some people are much better at SA than others. These individuals also tend to be better at observing and extracting information from their environment, situation or activity. They direct their attention resources more efficiently, remain focused on their goals and ignore distractions. According to Endsley, 88 percent of accidents involving pilot error are due to problems with SA.10 She also believes that acquiring SA is a trainable skill11 and identifies 12 key principles around which SA training programs should be built. They include a host of key behavioral skills such as task management, comprehension, projection, attention sharing, team skills and forming mental models of systems and environments. This training would ostensibly transfer to better SA skills in the cockpit.

Michael Gillen, an airline pilot and human factors expert, studied 40 airline crews receiving targeted training designed to mitigate startle and surprise. The research was conducted to assess training effectiveness in both high- and low-altitude scenarios using a Level D flight simulator. The thrust of the training was to enforce call-outs to identify and stabilize an undesirable situation. In an article summarizing the results of his research that appeared in the November 2017 issue of AeroSafety World (ASW),12 Gillen said, “The most significant factor in determining scenario success was problem identification, which was consistent with previous research (showing) when crews make an initial wrong decision, the in-flight issue tends to rapidly degrade.” He added, “The data showed that targeted training can help pilots bridge the cognitive gap when startled, and the fact that trained crews performed equally well in both (high- and low-altitude) scenarios suggested that the training had a broad array of effectiveness.” Gillen believes that startle can be mitigated with behavior-based training. The ASW article was based on a presentation Gillen made at the Foundation’s 70th International Air Safety Summit in Dublin in October 2017.

Specific training requirements for airline pilots are found in U.S. Federal Aviation Regulations (FARs) Part 121.13 These training requirements are usually well scripted, involving standardized and relatively predictable scenarios. This type of training, which is termed “brittle,” does not require flight crews to have a high level of SA and may not transfer well to novel and unexpected emergency situations.14 As a result, pilots generally know what to expect during a check ride. At the highest level of information processing, SA is achieved in a perceptual cycle that evaluates and re-evaluates a problem. This high level of processing can come through training, which is why accident investigators in recent years have proposed changing airline pilot training requirements to include more random and unexpected scenarios. A 2018 report summarized a research program conducted by Delft University of Technology in the Netherlands to determine whether exposing pilots to unpredictability and variability (U/V) in training scenarios would improve their response performance in startle and surprise events. The test was conducted with 20 experienced airline pilots using a Delft research simulator with a full motion base, a visual display and a hybrid aircraft model, shown in the photo below (Figure 3).

Figure 3 — Delft University of Technology “Simona” Research Simulator


Source: Courtesy of Olaf Stroosma; photograph by Thierry Shut, Delft University of Technology

The pilots were divided into two groups of 10 — a control group whose members were given non-variable and predictable training, and an experimental group whose training was U/V. The simulator used a hybrid aircraft model with which none of the pilots had previous experience. Variability was achieved for the U/V group by alternating the failure conditions from run to run, whereas the control group practiced the failure cases in succession (non-variable). Before a practice run, the U/V group was told only that a malfunction would occur (unpredictable failure), while the control group was told the details of what to expect (predictable failure).
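To make the two practice regimes concrete, the sketch below shows one way such run schedules could be generated. It is purely illustrative and is not the Delft team's protocol code; the failure-case names and run counts are placeholders, since the report's exact practice cases are not listed here.

# Illustrative only: one way to build the two groups' practice schedules.
# The failure-case names and run counts are placeholders, not the study's.
import random

FAILURE_CASES = ["engine_power_loss", "engine_fluctuation", "rudder_degradation"]
RUNS_PER_CASE = 3

def control_group_schedule():
    """Non-variable and predictable: each failure case is practiced in
    succession, and the pilot is briefed on what to expect before each run."""
    return [case for case in FAILURE_CASES for _ in range(RUNS_PER_CASE)]

def uv_group_schedule(seed=None):
    """Unpredictable and variable: failure cases change from run to run
    (a random shuffle stands in for the alternation described in the text),
    and the pilot is told only that some malfunction will occur."""
    rng = random.Random(seed)
    schedule = [case for case in FAILURE_CASES for _ in range(RUNS_PER_CASE)]
    rng.shuffle(schedule)
    return schedule

if __name__ == "__main__":
    print("Control group:", control_group_schedule())
    print("U/V group:    ", uv_group_schedule(seed=1))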

After the practice sessions, the pilots in both groups were tested in identical failure scenarios that were designed to create surprise or startle. Surprise was caused by engine and control system failures. The presence of startle was assessed post-test with Likert-type opinion questions — for example, “How startled or shocked were you when you discovered the issue?” (1 = not at all, 5 = extremely). The test profile is shown in Figure 4.

Figure 4 — Traffic Pattern Flown in Surprise Test


1. Right engine loses power over 20 seconds.
2. Brief decrease in left engine power, which was restored immediately.
3. Rudder effectiveness decreases 20 percent.

Source: R. Ranaudo, Adapted from Reference 12.

A moderate, nearly direct crosswind was present, and pilots were told to fly a left traffic pattern and to call out failures as soon as they became aware of them. The criterion for success was a safe landing. The test profile began with a right engine power loss on takeoff at 55 kt (1). After the callout (or after approximately 30 seconds), pilots were instructed to continue the takeoff and were given a lower level-off altitude. A second surprise was an engine power loss, after which the pilot was informed that both engines were unreliable but still running (2). When turning downwind, rudder effectiveness was reduced by 20 percent (3). A successful landing was possible only if pilots identified the failures and managed differential thrust to offset the loss of rudder effectiveness.

The U/V pilots adapted what they had practiced in training and developed a control strategy that allowed them to fly a steeper approach with reduced thrust on the good engine and to land safely. Successful landings were made by nine of the 10 pilots in the U/V group but by only two of the 10 pilots in the control group. Callout times for the failures were not significantly different between the two groups.

Regarding the subjective determination of startle versus surprise from the opinion questionnaires, there was no significant difference between the groups. On a scale of 1 to 5, startle was rated between 2 (slight) and 3 (moderate). Surprise was rated 3 (moderate) on average, with a highest rating of 4 (very) across all events. Overall, the U/V group rated all events as “significantly easier to understand” than the control group did. The higher surprise ratings indicated that the failure events were unexpected but not of such significance that the pilots felt threatened. Real-world conditions would likely have produced higher ratings.
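The article does not say how the Delft researchers tested the landing-outcome difference statistically, but as a rough back-of-the-envelope check, a Fisher exact test on the reported 9-of-10 versus 2-of-10 split can be run in a couple of lines (this is my illustration, not the study's analysis):

# Back-of-the-envelope check only, not the study's published analysis.
# Requires SciPy. Contingency table rows = group, columns = landed / did not land.
from scipy.stats import fisher_exact

uv_group = [9, 1]        # 9 of 10 U/V-trained pilots landed successfully
control_group = [2, 8]   # 2 of 10 control-group pilots landed successfully

odds_ratio, p_value = fisher_exact([uv_group, control_group], alternative="greater")
print(f"odds ratio = {odds_ratio:.1f}, one-sided p = {p_value:.4f}")
# A small p-value here would be consistent with the article's point that the
# U/V-trained pilots landed successfully far more often than the control pilots.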

Changes in Training
Following the Air France Flight 447 accident, the BEA recommended that unexpected and unusual situations — that is, surprise and startle events — be incorporated into pilot training scenarios. This training takes a behavioral approach that emphasizes problem solving through analysis of failure indicators and their meaning. The objective is to teach pilots how to make sense of a novel or unusual situation, achieve good SA, and make better decisions and action responses. The results of the two studies cited in this article provide evidence of the efficacy of this training. But completing the picture requires that training also include manual aircraft control skills in failure modes; these are still the most basic airmanship skills required of a pilot.

Richard J. Ranaudo was a U.S. National Aeronautics and Space Administration (NASA) research pilot for 25 years and the lead project test pilot in the icing research program for 16 years. After retiring from NASA, he spent five years as manager of Canadair flight test programs, and conducted icing development and certification testing on prototype business and regional aircraft. As a research assistant professor at the University of Tennessee Space Institute, he taught graduate level courses in the Aviation Systems Program, including human factors in aviation, flight test engineering and airport systems.