1st Jan 2015, 20:14
Join Date: Jun 2009
Location: Chelan, WA
Age: 43
Posts: 43
Opportunities, Challenges, and Limits of Automation in Aircraft

Because I suspect this will come up frequently, I figured the topic of automation in aircraft, and its limits, deserves its own post. I am not a pilot. I am a systems/software engineer with a very strong interest in aerodynamics, and I have been following closely the field of vehicular automation and the ways in which reliance on automation can lead to impaired situational awareness and tragedy. This is not just an issue with aircraft but with trains, cruise ships, and now even cars. The field affects, frankly, everybody, and in the case of aircraft it is not one where the techies are simply going to solve the problems. This is going to require a lot of feedback and thinking from everyone in the field.

This is not an Airbus vs Boeing flame. There are advantages and disadvantages to both designs, and significant flaws in both approaches. This field is still in its infancy. I will probably talk about Airbus more than Boeing here, but I think you will see it doesn't go all one way.

There is also no doubt that automation in some areas has made travel safer in all modes. However, because it is usually introduced incrementally, the operator/automation interface is not designed from the start but instead added on top of the existing control interface, or a slightly modified version of it. The advantage here is familiarity. The disadvantage is that the coupling often happens at suboptimal levels.

Let's start with a couple of very basic problems with automation. While automation has been, on the whole, good, it is not an unmixed blessing. Increasingly reliable and capable automation has increasingly complex and problematic failure modes. This leaves the operator in a worse position for recovery, facing more complicated troubleshooting information. One of the critical findings of the AF447 report was that the pilots were not in a position to quickly and reliably determine that all the errors stemmed simply from blocked pitot tubes. So instead of being told exactly what they needed to know, they got a slew of warnings, and in the ensuing confusion they stalled a perfectly well-flying airplane. A hard "I don't know what to do -- you take over" approach therefore has real problems associated with it (including lives lost).

So the primary challenges with automation as we currently do it, assuming it works as designed (more on problems with that assumption below), are that situation awareness often degrades when things go wrong, and that recovery is harder when they do.

But what if it doesn't work as intended? In good weather things may be recoverable. In bad weather? Who knows?

In 2007, a squadron of F-22s on their first overseas deployment ran into a problem as they crossed the International Date Line: large portions of their avionics (including some communication systems, inertial reference, and many other systems) suddenly stopped working. Unable to navigate, and with very little computer aid, they followed their tankers back to Hawaii, where the problem was found and fixed. Based on the description of the error, it sounds like an integer overflow or underflow. One bug in a codebase of roughly a million lines, and the date line proved more than a match for the most advanced fighter the US had at the time. Pilots obviously need to be able to perform recovery even when computer errors cause problems.
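Since this is a class of bug rather than a documented one (the actual F-22 code is not public), a minimal sketch may help. Everything below is invented for illustration; it shows how longitude arithmetic that is fine over most of the globe can break at the +/-180 degree discontinuity of the date line:

```python
# Hypothetical illustration -- not the actual F-22 code. Longitude runs
# from -180 to +180, so naive east-west differences blow up when a route
# crosses the date line.

def heading_error_naive(target_lon, current_lon):
    # Works everywhere except across the +/-180 boundary.
    return target_lon - current_lon

def heading_error_wrapped(target_lon, current_lon):
    # Correct version: normalize the difference into (-180, 180].
    return (target_lon - current_lon + 180.0) % 360.0 - 180.0

# Flying west across the date line: from 179 E toward 179 W (-179).
print(heading_error_naive(-179.0, 179.0))    # -358.0: "go the long way around"
print(heading_error_wrapped(-179.0, 179.0))  # 2.0: just two degrees further west
```

The same discontinuity bites any representation of angles or dates near a wrap point; a fixed-point or integer encoding of longitude makes an overflow on top of the wraparound even easier.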

A similar software-induced problem was seen on Malaysia Airlines flight 124 on August 1, 2005, when the aircraft suddenly performed a series of uncommanded maneuvers, climbing toward 41,000 feet and then losing thousands of feet. The pilots recovered. The problem was that two (out of six) accelerometers had failed in an inertial reference unit, and a software bug caused data to be read from a faulty accelerometer. The plane was a Boeing 777-200. Recovery from bad automation in clear weather has so far cost little beyond nerves, stress, and schedules. In bad weather, both of these incidents could have turned out very differently.
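The actual ADIRU logic is proprietary, so the sketch below is an invented simplification of the class of defect: a latent fallback path in redundancy management that quietly readmits a sensor already known to be bad.

```python
# Hypothetical illustration -- not the real ADIRU code. The selector keeps
# two failure records: faults detected this flight and faults recorded on
# earlier flights. The latent bug lives in a fallback path that forgets
# the second record.

def select_accel(readings, failed_now, failed_before):
    """Pick one accelerometer reading from the redundant set.

    readings      : raw values, one per unit
    failed_now    : indices of units detected failed this flight
    failed_before : indices recorded failed on earlier flights
    """
    healthy = [i for i in range(len(readings))
               if i not in failed_now and i not in failed_before]
    if healthy:
        return readings[healthy[0]]
    # Latent bug: this fallback, written for "everything has failed",
    # ignores the prior-failure record and resurrects a dead unit.
    usable = [i for i in range(len(readings)) if i not in failed_now]
    return readings[usable[0]]

readings = [55.0, 0.01, 0.02]   # unit 0 is long dead and outputs garbage
print(select_accel(readings, failed_now={1}, failed_before={0}))     # 0.02 (fine)
print(select_accel(readings, failed_now={1, 2}, failed_before={0}))  # 55.0 (garbage!)
```

The nasty property is that the bug is invisible for years: it only triggers when a second unit fails in flight, which is exactly when the crew can least afford a surprise.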

Indeed, the current generation of automation-related tragedies come not when systems malfunction relative to their design, but when they operate in accordance with it. One of the earliest cases I know of was SAS 751, which crashed after automatic thrust restoration spooled surging engines up enough to tear them apart (all passengers and crew survived). That was an MD-81, a DC-9 derivative.

The problems come in two forms: uncommanded changes, whether overriding the captain's throttle settings or suddenly climbing; and autopilot disengagement procedures that cause a loss of situation awareness. Frankly, regarding Adam Air, since I am not a pilot, I am wondering: the procedure is, in the middle of a thunderstorm in a plane slowly banking right, to fly wings level with no artificial horizon while the system resets? Is that even realistically survivable in that set of circumstances? Isn't a big part of the problem an insufficient safety margin on artificial horizon availability in the glass cockpit, at least when the 737-400 came out? Hopefully newer models are better?

Since AF447, one of the areas where I am most critical of Airbus is the insufficient feedback about what the other pilot is doing with stick inputs. This is a serious oversight in the Airbus design. In theory, "I am in control" should be enough. In practice, when that call doesn't happen in an emergency, you might not know what the other pilot is doing. That is a problem, and it was a further contributor to the flight crew's loss of situation awareness there. It is particularly a problem when the crew is distracted by a large number of warnings following autopilot disengagement.

So what is to be done?

One thing I think Airbus deserves some credit for is the flight law system, which adds a logical layer of automation and abstraction between the pilot and the controls. It is a pioneering system and, like all such systems, it builds on past knowledge and makes some new mistakes. But conceptually I think it is a good start. The interface between pilots and aircraft needs to be rethought, and this is, I think, the first step.

So here are my thoughts on where things should head. They come out of fairly close following of this topic for several years, but they lack practical flying experience behind them. They are therefore offered for discussion, in the hope that even if they are not useful in themselves, they may inspire useful thoughts:

1. Less piecemeal elimination of human functions. The crew is likely to be either taken out of the loop entirely or incorporated totally in the flying. Crew functions are likely to become high level (airplane, do this!), with the automation assuming the role of carrying that out.

2. Replace "flight laws" with "flight strategies" and theme the glass cockpit according to the strategy, so that there are subtle reminders built into many instruments as to what the plane is doing. For example, with unreliable airspeed, the plane can fly pitch and power.

3. Work needs to be done to better understand failure hierarchies and to avoid displaying, by default, cascading failures to pilots in the event of problems. Of course pilots should have *access* to this in the course of troubleshooting....

4. Bring back the flight engineer in modified form. It may be worth having a flight engineer station on many long-range aircraft that can be optionally filled (in lieu of a data link to ground engineers), though the pilot not flying may also take over more of this role.
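To make point 2 concrete, here is a sketch of what a "flight strategy" might look like in software. All names, themes, and cue lists are invented for illustration; the idea is simply that the active strategy is one explicit object that both the control logic and every display consult, so the whole cockpit is themed consistently around what the plane is actually doing:

```python
# Hypothetical illustration of "flight strategies" replacing flight laws.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlightStrategy:
    name: str
    display_theme: str    # visual theme applied across the glass cockpit
    primary_cues: tuple   # what the instruments emphasize in this strategy

NORMAL = FlightStrategy("normal", "standard", ("airspeed", "flight director"))

# With unreliable airspeed the plane flies pitch and power, so the displays
# de-emphasize airspeed in favor of pitch and thrust targets.
UNRELIABLE_AIRSPEED = FlightStrategy(
    "unreliable airspeed", "pitch-power", ("pitch", "N1"))

def annunciation(strategy):
    # One-line reminder that every instrument cluster could render.
    return "[%s] fly by: %s" % (strategy.display_theme,
                                " + ".join(strategy.primary_cues))

print(annunciation(UNRELIABLE_AIRSPEED))   # [pitch-power] fly by: pitch + N1
```

The subtle reminders come for free: any instrument that renders itself from the strategy object cannot disagree with the others about what mode the aircraft is in.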
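Point 3 can also be sketched. The message names and the causal map below are invented, and in reality the hard part is knowing the map at all (AF447's systems could not directly report blocked pitots, which was exactly the problem); this shows only the display-side filtering: given a causal map, show the crew the root causes by default and keep the cascade one level down, available for troubleshooting:

```python
# Hypothetical illustration of default root-cause filtering for alerts.

CAUSED_BY = {   # alert -> set of alerts that can explain it (invented data)
    "IAS DISAGREE": {"PITOT BLOCKED"},
    "AUTOPILOT OFF": {"IAS DISAGREE"},
    "A/THR OFF": {"IAS DISAGREE"},
    "ALTN LAW": {"IAS DISAGREE"},
}

def root_causes(active_alerts):
    """Keep only alerts not explained by another currently-active alert."""
    active = set(active_alerts)
    return sorted(a for a in active
                  if not (CAUSED_BY.get(a, set()) & active))

cascade = ["AUTOPILOT OFF", "A/THR OFF", "ALTN LAW",
           "IAS DISAGREE", "PITOT BLOCKED"]
print(root_causes(cascade))   # ['PITOT BLOCKED']
```

Five simultaneous warnings collapse to one actionable item, while the suppressed four remain a tap away for the troubleshooting case mentioned above.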

But of course there are limits. Automation won't change aerodynamics. Automation can't work in areas where the automated systems cannot know the information directly. And automated systems can apply heuristics but they cannot exercise judgement.

Anyway open to thoughts and discussion.
einhverfr