PPRuNe Forums - Is fatigue a problem at Emirates?
11th Aug 2016, 22:19
Longtimer
 
Fatigue or Boredom or Both?

THE HAZARDS OF GOING ON AUTOPILOT
By Maria Konnikova, September 4, 2014
At 9:18 p.m. on February 12, 2009, Continental Connection Flight 3407, operated by Colgan Air, took off from Newark Liberty International Airport. Rebecca Shaw, the first officer, was feeling ill and already dreaming of the hotel room that awaited in Buffalo. The captain, Marvin Renslow, assured her that she’d feel just fine once they landed. As the plane climbed to its cruising altitude of sixteen thousand feet, the pair continued to chat amiably, exchanging stories about Shaw’s ears and Renslow’s Florida home.

The flight was a short one and, less than an hour after takeoff, the plane began its initial descent. At 10:06 p.m., it dropped below ten thousand feet. According to the F.A.A.’s “sterile cockpit” rule, all conversation from that point forward is supposed to be essential to the flight. “How’s the ears?” Renslow asked. “Stuffy and popping,” Shaw replied. Popping is good, he pointed out. “Yeah, I wanna make ’em pop,” she assured him. They laughed and began talking about how a different Colgan flight had reached Buffalo before theirs did.

As ground control cleared the flight to descend to twenty-three hundred feet, the pilots’ conversation continued, unabated. There was the captain’s own training, which was, when he first got hired, substantially less than Shaw’s. There were Shaw’s co-workers, complaining about not being promoted quickly enough. There was the ice outside. Renslow recalled his time flying in Charleston, West Virginia, and how, being a Florida man, the cold had caught him doubly off guard. As the plane lost altitude, it continued to decelerate.


At 10:16 p.m., the plane’s impending-stall alert system—the stick shaker—kicked in. “Jesus Christ,” Renslow said, alarmed. In his panicked confusion, he pulled the control column toward him instead of pushing it forward. Seventeen seconds later, he said, “We’re down,” and, two seconds after that, the plane crashed, killing everyone on board and one person on the ground.

In its report about Flight 3407, the National Transportation Safety Board (N.T.S.B.) concluded that the likely cause of the accident was “the captain’s inappropriate response to the activation of the stick shaker, which led to an aerodynamic stall from which the airplane did not recover.” The factors that the board said had contributed to Renslow’s response were, “(1) the flight crew’s failure to monitor airspeed in relation to the rising position of the low-speed cue, (2) the flight crew’s failure to adhere to sterile cockpit procedures, (3) the captain’s failure to effectively manage the flight, and (4) Colgan Air’s inadequate procedures for airspeed selection and management during approaches in icing conditions.” All but the fourth suggested a simple failure to pay attention.

In this respect, Flight 3407 followed a long-established trend. A 1994 N.T.S.B. review of thirty-seven major accidents between 1978 and 1990 that involved airline crews found that in thirty-one cases faulty or inadequate monitoring was partly to blame. Nothing mechanical had failed; the crews had simply neglected to monitor their instruments properly.

The period studied coincided with an era of increased cockpit automation, which was designed to save lives by eliminating the dangers related to human error. The supporting logic was the same in aviation as it was in other fields: humans are highly fallible; systems, much less so. Automation would prevent mistakes caused by inattention, fatigue, and other human shortcomings, and free people to think about big-picture issues and, therefore, make better strategic decisions. Yet, as automation has increased, human error has not gone away: it remains the leading cause of aviation accidents.

***

In 1977, the House Committee on Science and Technology identified automation as a major safety concern for the coming decade, and, three years later, the Senate Committee on Commerce, Science, and Transportation repeated the warning. Boeing, McDonnell Douglas, and other leading commercial-aviation companies were, at the time, developing new aircraft models with ever more sophisticated cockpits. With the move toward automation seemingly inevitable, Congress requested that NASA research the effects of the changes on pilots.

Leading the charge at NASA’s Ames Research Center was Earl Wiener, a pioneer of human-factors and automation research in aviation. Wiener had been studying flight records in the years since automation was first introduced into the cockpit. Beginning in the nineteen-seventies, he published a series of papers that analyzed the interplay among automation, pilot error, and accidents. By the early nineteen-eighties, he had concluded that a striking number of innovations designed to address the perceived risk of human error had, in fact, led to accidents. Among the most notorious examples he cited was the 1983 crash of Korean Air Lines Flight 007, which was shot down by the Soviet Union after veering three hundred miles off course. The official report cited the crew’s “lack of alertness” as the most plausible cause of the navigational error. Such inattention, the report went on to say, was far from unique in civilian-aircraft navigation.

By 1988, Wiener had added more cases to his list and had begun supplementing his research with extensive pilot interviews. He was well aware that automation could work wonders: computers had markedly improved navigation, for example, and their ability to control the airplanes’ every tiny wiggle via the yaw damper was helping to prevent potentially fatal Dutch rolls. But, as pilots were being freed of these responsibilities, they were becoming increasingly susceptible to boredom and complacency—problems that were all the more insidious for being difficult to identify and assess. As one pilot whom Wiener interviewed put it: “I know I’m not in the loop, but I’m not exactly out of the loop. It’s more like I’m flying alongside the loop.”

Wiener accused the aviation industry of succumbing to what he called the “let’s just add one more computer” phenomenon. Companies were introducing increasingly specialized automated functions to address particular errors without looking at their over-all effects, he said, when they should have been making slow and careful innovations calibrated to pilots’ abilities and needs. As it stood, increased automation hadn’t reduced human errors on the whole; it had simply changed their form.

***

It was against this backdrop, in 1990, that Stephen Casner arrived at Ames, armed with a doctorate in Intelligent Systems Design from the University of Pittsburgh. Casner had been studying automation, and although he didn’t have any particular experience with planes (he became a licensed pilot soon after), he brought a new perspective to the problem: that of human psychology. His adviser at Pitt had been a psychologist, and the field had deeply influenced his understanding of automation. He hoped to bring a new experimental rigor to the problem, by testing the effects of computerized systems on pilots.

Over the next two decades, Casner dedicated himself to systematically studying how, exactly, humans and computers were interacting in the cockpit, and how that interaction could be improved to minimize error and risk. How were the pilots solving complex problems as a flight progressed along its regular course? How well-suited were the displays and functions to the pilots’ preferences and behaviors?

Cockpit systems, he found, were not particularly well understood by the pilots who had to use them, and he concurred with Wiener that the forms of automation in use were not particularly well suited to the way pilots’ minds operated during a flight. In 2006, Casner attempted to remedy the first part of the problem by publishing a textbook on automation in the cockpit. Since then, he has focussed increasingly on the problem of inattention. Last year, he teamed up with the psychologist Jonathan Schooler, from the University of California, Santa Barbara, who studies attention and problem-solving ability, to see whether automation was genuinely responsible for the kinds of monitoring errors that Wiener had identified. If computerized systems performed as intended ninety-nine per cent of the time, Casner and Schooler asked, how would that affect a pilot’s ability to engage at a moment’s notice when something went wrong, as it had for Colgan Air?

When Casner and Schooler ran tests using a Boeing 747-400 flight simulator, they confirmed that the degree of automation a pilot relied on during a flight directly impacted how closely he paid attention to his work. It was true, as automation proponents argued, that pilots spent less time worrying about the minutiae of flying when they were using more highly automated systems. But they weren’t necessarily using the newfound mental space to perform higher-order computations. Instead, a full twenty-one per cent of the pilots surveyed reported thinking about inconsequential topics, just as Shaw and Renslow had done.

Even more troublingly, a new study published by Casner and Schooler in Human Factors reveals that automation has also caused some pilots’ skills to atrophy. In the experiment, a group of sixteen pilots, each with approximately eighteen thousand hours of flight time, were asked to fly in a Boeing 747-400 simulator. As the simulated flights progressed, the researchers systematically varied the levels of automation in use. At some point in the flight, they would disable the alert system without advising their subjects and introduce errors into the instrument indicators. Casner and Schooler wanted to see if the pilots would notice, and, if so, what they would do.

Surprisingly, the pilots’ technical skills, notably their ability to scan instruments and operate manual controls, had remained largely intact. These were the skills that pilots and industry experts had been most concerned about losing, but it seemed that flying an airplane was much like riding a bike. The pilots’ ability to make complex cognitive decisions, however—what Casner calls their “manual thinking” skills—had suffered a palpable hit. They were less able to visualize their plane’s position, to decide what navigational step should come next, and to diagnose abnormal situations. “The things you do with your hands are good,” Casner told me. “It’s the things you do with your mind and brain that we really need to practice.”

Only one pilot had been able to complete the test without making a mistake. The rest exhibited the same behavior that Casner and Schooler had identified in their earlier study: mind-wandering. The more the pilots’ thoughts had drifted—and the researchers confirmed that drifting increased with the level of automation—the more errors they made. In most cases, they could detect that something had gone wrong, but they didn’t respond as they should have, by cross-checking other instruments, diagnosing the problem, and planning for the consequences. “We’re asking human beings to do something for which human beings are just not well suited,” Casner said. “Sit and stare.”

The more a procedure is automated, and the more comfortable we become with it, the less conscious attention we feel we need to pay it. In Schooler’s work on insight and attention, he uses rote, automated tasks to induce the best mind-wandering state in his subjects. If anyone needs to remain vigilant, it’s an airline pilot. Instead, the cockpit is becoming the experimental ideal of the environment most likely to cause you to drift off.

In the cockpit, as automated systems have become more reliable, and as pilots have grown accustomed to their reliability—this is particularly the case for younger pilots, who have not only trained with those systems from the outset of their careers but grown up in a world filled with computers and automation—they have almost inevitably begun to abdicate responsibility on some deeper level. “It’s complacency,” Casner said. “If a buzzer goes off, I’ll do something about it. If it doesn’t, I’m good.” The July, 2013, crash of Asiana Airlines Flight 214 as it attempted to land in San Francisco is a recent example. None of the four pilots on board had noticed that the plane was coming in too slowly. “One explanation is they had the automation configured so that something like this couldn’t happen,” Casner said. “They truly believed it would keep flying that airspeed.”

***

The problems identified by Wiener, Casner, and Schooler have implications that reach far beyond the airline industry. Think about cruise control in cars: the moment you set it, you’re no longer forced to vigilantly monitor your speed. Does this make you look more closely at the road, or does your mind begin to drift off? Casner compared the dynamic to our modern collective inability to remember phone numbers programmed into our phones—normally not a problem, but in the event of an emergency potentially a major issue.

“What we’re doing is using human beings as safety nets or backups to computers, and that’s completely backward,” Casner said. “It would be much better if the computing system watched us and chimed in when we do something wrong.” Ideally, he said, automation would adopt a human-centered approach—one that takes seriously our inability to sit and stare by alerting us when we need to be alerted rather than by outright replacing our routines with computerized ones. This kind of shift from passive observation to active monitoring would help to ensure that our minds remain stimulated. Casner likened the desired approach to one taken by good lifeguards. In the absence of a safety net, they must remain aware. “They don’t just sit and wait to see if someone’s screaming,” he said. “They scan the pool, look for certain signs.” While lifeguards are taught all the possible signs of a person who is drowning, pilots don’t receive elaborate training on all the things that can go wrong, precisely because the many things that can go wrong so rarely do. “We need to give pilots more practice at the thinking skills,” Casner said. “Present them with abnormal situations, show them some interesting-looking instrument panels and say, ‘What’s going on?’”

Active monitoring may be difficult to achieve in aviation, given the degree of automation already present in cockpits. But industries in which automation is nascent—automotive, medical, housing construction—still have the opportunity to learn from the problems that have occurred in the cockpit. Casner is working with the designer Don Norman to apply what he has learned to other fields, beginning with the car industry. As we visualize a future in which more of our tasks are left to machines—Google’s driverless car, computer-guided surgery—we may be able to make our systems easier and safer without inducing complacency. We assume that more automation is better—that a driverless car or a drone-delivered package is progress, no matter the guise it takes—but the experience we’ve had in aviation teaches us to be suspicious of that assumption. “Don’t just automate something because you can,” Casner said. “Automate it because you should.”