PPRuNe Forums

PPRuNe Forums (https://www.pprune.org/)
-   Tech Log (https://www.pprune.org/tech-log-15/)
-   -   Opportunities, Challenges, and Limits of Automation in Aircraft (https://www.pprune.org/tech-log/553856-opportunities-challenges-limits-automation-aircraft.html)

einhverfr 1st Jan 2015 20:14

Opportunities, Challenges, and Limits of Automation in Aircraft
 
Because I suspect this will come up frequently, I figured the topic of automation in aircraft and its limits deserves its own thread. I am not a pilot. I am a systems/software engineer with a very strong interest in aerodynamics, and I have been closely following the field of vehicular automation and how, sometimes, reliance on automation can lead to impaired situational awareness and tragedy. This is not just an issue with aircraft but with trains, cruise ships, and now even cars. The field affects, frankly, everybody, and in the case of aircraft it is not one where the techies are going to just solve the problems. This is going to require a lot of feedback and thinking from everyone in the field.

This is not an Airbus vs Boeing flame. There are advantages and disadvantages to both designs, and significant flaws in both approaches. This field is still in its infancy. I will probably talk about Airbus more than Boeing here, but I think you will see it doesn't go all one way.

There is also no doubt that automation in some areas has made travel in all modes safer. However, because it is usually introduced incrementally, the operator/automation interface is not designed from the start but is instead bolted onto the existing control interface, or a slightly modified version of it. The advantage is familiarity. The disadvantage is that the coupling often happens at suboptimal levels.

Let's start with a couple of very basic problems with automation. While automation has been, on the whole, good, it is not an unmixed blessing. Increasingly reliable and capable automation has increasingly complex and problematic failure modes. This forces the operator into a worse position recovery-wise, with more complicated troubleshooting information presented. One of the critical findings of the AF447 report was that the pilots were not in a position to quickly and reliably determine that all the errors were coming simply from blocked pitot tubes. So instead of being told exactly what they needed to know, they got a slew of warnings, and in the ensuing confusion they stalled a perfectly flyable airplane. So a hard "I don't know what to do -- you take over" approach has real problems associated with it (including lives lost).

So the primary challenges with automation as we currently do it, assuming it works as designed (more on problems with that assumption below), are that situational awareness often degrades when things go wrong, and that recovery is harder when it does.

But what if it doesn't work as intended? In good weather things may be recoverable. In bad? Who knows?

In 2007, a squadron of F-22s on their first overseas deployment ran into a problem: as they crossed the international date line, large portions of their avionics (including some communication systems, inertial reference, and many other systems) suddenly stopped working. Unable to navigate, and with very little computer aid, they were able to follow their tankers back to Hawaii, where the problem was found and fixed. Based on the description of the error, it sounds like an integer overflow or underflow. One line in a million-line codebase, and the international date line proved more than a match for the most advanced fighter the US had at the time. Pilots obviously need to be able to perform recovery even when computer errors cause problems.
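
The root cause was never published in detail, so purely as an illustration of the failure class (a made-up fixed-point longitude register - my assumption, not the actual F-22 avionics code), here is how a naive signed representation can blow up when it crosses the date line, in Python:

# Illustrative only: a hypothetical fixed-point longitude that overflows at the
# international date line. Not the actual F-22 avionics code.

SCALE = 2**23 / 180.0          # hypothetical: degrees stored as signed 24-bit fixed point
MAX_RAW = 2**23 - 1
MIN_RAW = -2**23

def encode(deg):
    """Encode degrees of longitude into the signed 24-bit raw value."""
    return int(round(deg * SCALE))

def step_east_naive(raw, delta_deg):
    """Naive update: add the increment without handling the +/-180 wrap."""
    raw += encode(delta_deg)
    if raw > MAX_RAW or raw < MIN_RAW:
        raise OverflowError("longitude register overflow")   # downstream systems fault here
    return raw

def step_east_safe(raw, delta_deg):
    """Correct update: wrap around the date line instead of overflowing."""
    span = MAX_RAW - MIN_RAW + 1
    return (raw - MIN_RAW + encode(delta_deg)) % span + MIN_RAW

raw = encode(179.9)                     # just west of the date line, heading east
try:
    raw = step_east_naive(raw, 0.2)
except OverflowError as e:
    print("naive update:", e)
print("safe update:", step_east_safe(encode(179.9), 0.2) / SCALE, "deg")

One line of wrap handling is the difference between the two functions, which is roughly the scale of fix the post above describes.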

A similar software-induced problem was seen with Malaysia Airlines flight 124 on 1 August 2005, where the aircraft suddenly performed a series of uncommanded maneuvers, taking the plane up toward 41,000 feet and then losing thousands of feet. The pilots recovered. The problem was that two (out of six) accelerometers had failed in an inertial reference unit, and a software bug caused data to be read from a faulty accelerometer. The plane was a Boeing 777-200. Recovery from bad automation in clear weather has not been a huge problem so far, other than in terms of nerves, stress, and schedules. In bad weather, both of these events could have turned out very differently.
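
One way this class of bug can arise (an assumption on my part about the general pattern, not the documented ADIRU mechanism) is a sensor-exclusion list that is not latched across restarts, so a previously rejected sensor quietly comes back into use. A toy sketch:

# Toy sketch of a sensor-management bug: assumed failure class, not the actual ADIRU code.

class Accelerometer:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

class InertialUnit:
    def __init__(self, sensors, latch_failures=True):
        self.sensors = sensors
        self.latch_failures = latch_failures
        self.excluded = set()            # names of sensors we refuse to use

    def mark_failed(self, name):
        self.excluded.add(name)

    def restart(self):
        if not self.latch_failures:
            self.excluded.clear()        # the bug: earlier failures are forgotten

    def pick_source(self):
        for s in self.sensors:
            if s.name not in self.excluded:
                return s.name
        return None

sensors = [Accelerometer("acc5", healthy=False), Accelerometer("acc6")]
unit = InertialUnit(sensors, latch_failures=False)
unit.mark_failed("acc5")                 # known bad before the flight
unit.restart()                           # power cycle wipes the exclusion
unit.mark_failed("acc6")                 # second unit fails in flight
print(unit.pick_source())                # -> "acc5": the previously failed sensor is used again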

Indeed, the current generation of automation-related tragedies occur not when systems malfunction relative to their design, but when they operate exactly in accordance with it. One of the earliest cases I know of was SAS 751, which crashed after automatic thrust restoration spun stalling engines up enough to tear them apart (all passengers and crew survived). That was an MD-81, a DC-9 derivative.

The problems come in two forms: uncommanded changes (overriding the captain's throttle settings, or suddenly climbing), and autopilot disengagement procedures that cause a loss of situational awareness. Frankly, regarding Adam Air, since I am not a pilot, I am wondering: the procedure is, in the middle of a thunderstorm in a plane slowly banking right, to fly wings level with no artificial horizon while the system resets? Is that even realistically survivable in that set of circumstances? Is not a big part of the problem an insufficient safety margin on artificial horizon availability in the glass cockpit, at least when the 737-400 came out? Hopefully newer models are better?

Since AF447, one of the areas where I am most critical of Airbus is the lack of feedback as to what the other pilot is doing with the sidestick. This is a serious oversight in the Airbus design. In theory, "I am in control" should be enough. In practice, when that doesn't happen in an emergency, you might not know it. That's a problem, and it is a further contributor to the flight crew's loss of situational awareness there. This is particularly a problem when the crew is distracted by a large number of warnings following autopilot disengagement.

So what is to be done?

One thing I think Airbus deserves some credit for is the flight law system. The flight law system adds a logical layer of automation and abstraction between the pilot and the controls. It's a pioneering system and like all such systems, it builds on past knowledge and makes some new mistakes. But I think conceptually it is a good start. The interface between pilots and aircraft needs to be rethought and this is I think the first step.

So here are my thoughts on where things should head. They come out of fairly close following of this topic for several years, not out of practical flying experience. They are therefore offered for discussion, in the hope that even if they are not useful in themselves, they may inspire useful thoughts:

1. Less piecemeal elimination of human functions. The crew is likely to end up either totally eliminated from the flying or totally incorporated in it. The functions of the crew are likely to be high level ("airplane, do this!"), with the automation then assuming the role of doing it.

2. Replace "flight laws" with "flight strategies" and theme the glass cockpit according to the strategy, so that there are subtle reminders built into many instruments as to what the plane is doing. For example, with unreliable airspeed, the plane can fly pitch and power.

3. Work needs to be done to better understand failure hierarchies and to avoid displaying, by default, cascading failures to pilots in the event of problems. Of course pilots should have *access* to this in the course of troubleshooting....

4. Bring back the flight engineer in modified form. It may be worth having a flight engineer station on many long-range aircraft which can be optionally filled (in lieu of a data link to ground engineers), but the pilot not flying may also take over more of this role.
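
To make item 2 above a little more concrete, here is a rough sketch of what a "flight strategy" object might look like - the names, themes and memo items are entirely hypothetical, just to illustrate theming the displays per strategy:

# Hypothetical sketch of the "flight strategy" idea - all names and values invented.

from dataclasses import dataclass, field

@dataclass
class FlightStrategy:
    name: str
    display_theme: str                 # subtle theming cue for the glass cockpit
    memo_items: list = field(default_factory=list)

NORMAL = FlightStrategy("NORMAL", "default")
UNRELIABLE_AIRSPEED = FlightStrategy(
    "UNRELIABLE AIRSPEED", "amber-border",
    memo_items=["Fly pitch and power", "Disregard IAS/Mach", "Crosscheck groundspeed"])

def select_strategy(airspeed_suspect: bool) -> FlightStrategy:
    """Pick the active strategy; the displays would then theme themselves accordingly."""
    return UNRELIABLE_AIRSPEED if airspeed_suspect else NORMAL

print(select_strategy(airspeed_suspect=True).memo_items)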

But of course there are limits. Automation won't change aerodynamics. Automation can't work in areas where the automated systems cannot know the information directly. And automated systems can apply heuristics but they cannot exercise judgement.

Anyway open to thoughts and discussion.

No Fly Zone 1st Jan 2015 22:30

And the Question is...
 
Nice sermon and one that few can object to. Was there a question embedded someplace in there?:confused:

5N-206 1st Jan 2015 22:45

Like he said, this is to inspire discussion

Superpilot 2nd Jan 2015 09:48

Couple of different things I’d like to share:

#1 Re-current training of crews in the simulator

Every 6 months we get sim checked; some airlines (the better ones) will also throw in a few training sessions before this. However, the training and the sim check ride scenarios are almost identical to the ones before.

We take off, climb to around 10,000ft, get a TCAS RA (traffic avoidance instruction), followed by a hydraulic or electrical problem. We decide to turn back, perform an ILS to land. Simulator is reset, we depart same runway, engine fails upon rotation, we perform a single engine ILS approach with a go around. We come back again to do a non-precision approach to land. Simulator is reset once more, we depart, there's an engine fire prior to rotation, we stop on the runway, carry out the evacuation. The sim session ends and we go home.

In 5 years of flying the Airbus, my training / sim check scenarios have not deviated from the above. In fact, I've never climbed above 15,000ft in a simulator ever. I've never had an unreliable speed problem (other than a single ADR failure which results in one of the PFDs spitting out an inaccurate air speed - no big deal, switch the source over to ADR3) and even that was in simulated visual conditions.

There is so much that could potentially go wrong in flight, yet most of the training focusses on resolving issues related to that specific aircraft. The handling, unusual attitude, and partial panel (where only some instruments are available) skills you learn during initial IR training do not get practiced in a medium/large jet simulator. It is assumed that what you learned on a single engine piston plane will stay with you forever. Quite simply, training budgets don't allow for this kind of training, and it's my personal belief that airlines will probably end up firing many pilots who are simply not up to the job. Yes, I'm sad to say, but within this industry we have a very large pool of semi-talented people (especially in the third world) who are only in paid flying jobs because of nepotism and friendly/financial/political favours.

#2 – Software/hardware improvements and bureaucracy

There are countless improvements we could make to modern EFIS setups that would help enormously, and it's no secret that the avionics of many modern general aviation aircraft are far better in terms of features and functionality than what Airbus and Boeing are producing even today. Despite the industry experiencing some pretty disastrous events, we have not seen much done. Once lessons have been learned from accidents, a few software changes are often all that is required to decrease the likelihood of the same mishandling occurring again. The inputs are there because they are integral to the original design of the aircraft, and in most cases the computing hardware is there too, but the logic wasn't considered at the time of initial design. However, even minor software changes need to go through exhaustive certification processes that end up becoming uneconomical for manufacturers to pursue. Thus, we typically do not see improvements for 15-20 years.

I have perhaps not made this point properly, but I have also worked in other industries where the timescales associated with idea inception, coding, testing and certification are much shorter and the costs highly manageable. In aviation, the pace of change and improvement is heavily stagnated.

Centaurus 2nd Jan 2015 10:50


So the primary challenges with automation as we currently do it, assuming it works as designed (more on problems with that assumption below), are that situational awareness often degrades when things go wrong, and that recovery is harder when it does.

In just about every simulator initial training I have seen where the pilot has never flown a jet or large turbo-prop transport aircraft before, the immediate accent is on use of all the automatic features from the word go. Follow the flight director is the universal cry by the simulator instructor. The old saying of "I can't fly but I can type at 80 wpm" applies to a great number of pilots who have been totally brain-washed into automatics.

While there has been knee-jerk advice (usually following the latest accident) to add more manual flying during recurrent simulator sessions, the whole point is being missed, and that is that a frightening number of airline captains and first officers simply cannot fly, or have forgotten how to fly. By "fly" I mean basic instrument flying skills in IMC without the aid of a flight director and automatic throttle. Those of us fortunate enough to have flown the Boeing 737-200 series or the 727, where manual flying on line was considered SOP, usually had no trouble, if flying the glass cockpit Boeings, with disengaging the automatics and seamlessly taking control manually.

This opinion, like most PPRuNe contributions, is personal opinion, but it is based on one's own flying experience over many years. I would have thought the natural progression of a training syllabus for a type rating would start with the first few sessions getting to know the flying characteristics of the aircraft. That is vital in order to gain confidence. In essence it will include raw data, non-automatic features. Once a pilot can fly accurate visual circuits without the FMC and other goodies to help him find his way in the circuit, can consistently handle max crosswind landings, is competent at high and low altitude stall recoveries in IMC and knows how to recover from serious unusual attitudes in the simulator, then the time has come to learn the next step, and that is auto flight.

If the candidate is unable to demonstrate he can fly the aircraft without blind reliance on the automatics, then someone has to make a decision regarding his future with that company.

Forget the cries of extra costs of simulator time. Operators cannot have it both ways. If they want competent pilots able to safely handle the aircraft manually as well be entirely familiar with using the automatics, then the extra costs involved must be accepted.

It reminds me of another old saying that flying aeroplanes means hours and hours of boredom punctuated by occasional moments of intense fear. From what I see, the occasional intense fear is when the incompetent pilot loses the automatics and is forced to fly manually raw data...

Jwscud 2nd Jan 2015 15:18

Without wishing to get too deeply into the raw data argument, a lot of it depends on individual attitude. Some pilots fly raw data quite regularly, normally surface to FL100, or vectors to the ILS down. Others plonk the AP in at 1000' and take it out at about the same.

However, there are a number of raw data Jedi out there who will switch everything off at FL100 whatever the weather, and these are the types who pop up in the monthly safety digest landing without landing flaps, or throwing the approach away still doing 250kts at the landing gate.

Regarding automation philosophies, whenever Airbus especially is discussed, their philosophy is defended (especially by DozyWannabe) by pointing out that Airbus pilots were intimately involved in its development. That does not mean that they weren't wrong, or that they weren't part of a particular intellectual movement within their branch of engineering that is now coming under further scrutiny as it is exposed to a new generation of flight crew.

Airbus vs Boeing is always going to arise, simply because there are two competing philosophies of computer flight control - the Airbus protections, and Boeing's view that the pilot can do what he wants, but we will employ control forces and other forms of tactile feedback to make him aware that what he is demanding is unusual.

I should add the caveat that neither of the type ratings on my licence starts with Airbus, but it is my firm view that the Boeing philosophy is both more intuitive to a pilot trained on conventional aircraft, and more conducive to the maintenance of essential manual flight skills and direct feedback and understanding of the aircraft's flying characteristics.

glum 5th Jan 2015 12:41

I think we need to give credit where it is due, and just pause to consider that there were about 33 million commercial flights in 2014, and only 150 crashes.

Which is a crash rate of 0.00045%.

And that's per flight, not miles flown.

These are staggering numbers, and they show that the industry (both technical staff and aircrew) does a fantastic job. It is perhaps the slow pace of change which has positively contributed to this safety record, and not made things worse.

Of course we can improve, but changes must be made in a measured, sensible way taking advantage of properly proven technological advancements in an affordable way.

In an increasingly congested airspace, humans simply cannot keep track of everything going on inside and outside the flight deck so automation must take on ever more tasks.

Perhaps aircraft systems have already gone beyond the point where any single person can understand how everything works such that they can piece together disparate symptoms of failures to produce a recovery plan?

If it takes thousands to develop these systems and dozens to maintain them (with no time constraints and a set of manuals to follow) why are we expecting a single pilot to know enough about everything important?

Humans are the weak link, and always will be - both in the design stage and in the flying. On the ground we have many many processes to follow to try and eradicate errors, but they do still occur. 99% are found during the testing phase, and whilst costly they can be put right.

When errors compound faults during flight, the odds are stacked against the pilots...

mickjoebill 6th Jan 2015 04:36

The public may believe that the loss of flying skills is not as relevant as the lack of skill in managing the computer.

mikedreamer787 6th Jan 2015 20:31

What happened on Apollo XIII shoots your last statement down Mr 69.

Centaurus 7th Jan 2015 05:36

As a Boeing B787 test pilot told a friend of mine, "Boeing have built the 787 assuming that incompetent pilots will fly them". Hence the sophisticated automation.:ok:

Tee Emm 7th Jan 2015 11:17


The role of flying the aircraft has largely shifted from humans to computers. The role of today's pilot is to interface with those computers. Hence his competency is to be measured in that role. Crashes have been caused because of gaps in that interface.
The Loss of Control crashes we are talking about have very little to do with misunderstanding by the pilots of mode control or selection/interface of the computers. The crashes were the result of the apparent total inability of the pilot in command to fly manually in IMC after the autopilot had either disengaged itself or been disengaged by the pilot.

The investigation report of one Middle Eastern 737 loss of control at night shortly after take off said the captain was continually calling for the autopilot to be engaged even as the 737 was in a steep spiral dive until it hit the sea. It is drawing a long bow to claim the cause was gaps in the interface between the crew and computers. The cause was sheer incompetency by the captain in hand flying on instruments in IMC. The question then arises about the efficacy or otherwise of his simulator training.

RVF750 7th Jan 2015 12:03

Fortunately, as a 737NG pilot, I still have the ability to disconnect the automatics and decide how much automation I want on a day-to-day basis.

I once had an interesting conversation with a fleet manager where he justified telling his fleet captains to refrain from hand flying, because more FDM 'events' occurred whilst hand flying and stopping it would improve safety stats.

My reply - that it showed they should do more hand flying, as clearly they weren't as good at it as they should be - was lost on him.

Not my fleet thank the Lord.........

Personally I'm more of a 1000' kind of pilot, but on a smooth day I'll do the Jedi bit a while longer......

Our company are quite chilled about letting us keep our skills up. The 737 is still a stone-age thing with clever boxes bolted on, really.

island_airphoto 7th Jan 2015 15:08

Computers are great - until they aren't.
Then killing yourself and maybe somewhere between 1 and a few hundred passengers because you either never could fly or forgot how is not really a good thing. YMMV

macdo 7th Jan 2015 20:02

Airlines to roll out man-and-dog flight crews.
Man is there to feed the dog, dog is there to bite the man if he touches anything!
Truth stranger than fiction.

FCeng84 7th Jan 2015 20:46

Role of FBW Augmentation
 
While fly-by-wire augmentation provides the benefits of greatly reducing pilot workload and providing flight envelope protection, those are not the only reasons this technology has been developed for commercial transport aircraft. An additional driving factor (and in some ways the key motivation for FBW) is the opportunity to increase airplane performance. FBW provides handling qualities enhancement through augmentation thus enabling airplane configurations to be optimized for performance rather than handling.

Prior to FBW, commercial airplanes had to be configured to provide acceptable handling qualities without control system augmentation. Metrics such as stick force per g, stick force per knot, and maneuver response damping posed design constraints on cg range. In addition, wing design had to account for stall characteristics such as positive Stall ID and pitch stability. With FBW the control laws can be designed to augment the open-loop airplane characteristics such that the closed-loop response is acceptable. This allows pushing the cg further aft and designing wings for higher L/D (with less concern about stall characteristics) thus improving airplane fuel economy.

As a result, the truly open-loop (i.e., no computers involved) handling qualities of today's FBW airplanes are not sufficient to support certification. Reversionary control law modes are provided where the full-up normal mode system does not have sufficient availability, but most often these include some level of augmentation to help improve the handling qualities above what would be found with no augmentation at all. Because the probability of being in a reversionary control law mode is quite low, the handling qualities requirements that apply are not as stringent as for the full-up, everyday normal system. Calls for the flight crew to have the ability to "turn all of the computers off" must be considered carefully, as the handling characteristics they would encounter could be more than a handful.

Modern FBW commercial airplane control systems are designed to provide graceful degradation of levels of augmentation in response to detected failures. Of particular interest is loss of airspeed and/or angle-of-attack. Multiple sensors and monitoring logic that compares signals from independent sources make these systems robust to equipment failures through signal selection and fault detection logic. This leaves common mode failures (ones that corrupt all sources of a particular type of data equally) as the most serious and potentially dangerous. Severe icing or an encounter with a volcanic ash cloud are two scenarios that can cause blocked pitot probes leading to undetected erroneous air data, and thus must be considered.

Pilots have long been taught to consider all of the sources of data that they have available to them and to be on the lookout for inconsistent data that may point to a sensor failure. Climbing with idle thrust while indicated airspeed is increasing is an example of a clear inconsistency that should cause the crew to question the airspeed indication. While some of the latest control systems include logic to detect such inconsistencies and select lower levels of augmentation that do not rely on the suspect data, pilots need to be able to make such determinations themselves.
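
For illustration only, a minimal sketch (my own, not any certified monitor) of the two ideas above: a mid-value select across redundant sources, plus a crude energy-consistency test that flags airspeed rising while climbing at idle thrust:

# Illustrative sketch only - not any certified signal-selection or monitoring logic.

def mid_value_select(sources):
    """Pick the middle value of three redundant sensors; robust to a single bad source."""
    low, mid, high = sorted(sources)
    return mid

def airspeed_consistent(ias_trend_kt_per_s, climbing, thrust_at_idle):
    """Very crude plausibility test: IAS should not be increasing while climbing
    at idle thrust. The threshold here is invented for the example."""
    if climbing and thrust_at_idle and ias_trend_kt_per_s > 0.5:
        return False                    # suspect air data, e.g. blocked pitot system
    return True

ias_sources = [252.0, 251.0, 118.0]     # one source has failed low
print("selected IAS:", mid_value_select(ias_sources))
print("air data consistent:", airspeed_consistent(ias_trend_kt_per_s=2.0,
                                                   climbing=True, thrust_at_idle=True))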

Whether or not commercial transport control systems include provisions for pilots to select lower levels of augmentation is a design philosophy issue - one where A and B have taken different paths. Any procedures for crews to selectively down-mode to less augmentation must consider the handling qualities consequences. Going all the way to open loop is most likely not the best choice.

island_airphoto 8th Jan 2015 14:32

FCeng84 - I do not think unstable aircraft not flyable by humans are the problem. I know such airplanes exist, but your post is the first time I have ever heard that anyone would approve one for passenger service. I have also never heard of a crash that went like "the FBW automation blew a fuse and despite the best efforts of the very skilled pilots, the airplane was just not controllable by anyone without superhuman reflexes so they all died".

What we have seen more than once is some part of the automation/autopilot system - which would be connected to the FBW system if the plane is so equipped, but could also be connected to chains and sprockets in Grandpa's Cessna 150 - going wrong, and the pilots utterly unable to take over or even detect what had gone wrong. Air France stalling into the sea and running a Boeing into a seawall were not caused by an airplane that just could not be controlled by hand. You don't even have to have a jet to see this - my local rental nag C-172 has enough electronics that I hardly have to touch the controls except for the first few hundred feet and the last couple hundred. I could become helpless flying THAT thing by hand in IMC if Otto is all I ever used ;)

island_airphoto 8th Jan 2015 18:19

Jockey69 - bull.
Have you ever flown an airplane?
Emergency half-arse flying is what you get when you recruit passengers from the back to have a go, not the guys up front presumably getting paid to know how to operate their equipment. You would massively fail an IR Comp Check with me if you pointed at the autopilot as a method to get things done. Sure Otto is great on a long boring leg, but there are times when humans are better, especially in extremis when the plane is going beyond limits Otto is programmed for. For one thing, Otto is happy to rip the wings right off in a good updraft or give up and disconnect.

island_airphoto 8th Jan 2015 18:48

The pilot of course. The autopilot will fly right into the side of a mountain and kill everyone aboard without a care in the world. The pilot at least WANTS to survive the flight, EgyptAir excepted.
The autopilot is an incredible servant, but a poor master.

FCeng84 8th Jan 2015 19:09

Levels of Augmentation
 
Island Airphoto

I fully appreciate your concern that there have been far too many instances of crews pressing on with a full-up control system when sensor data failures appear to have gone undetected and been blindly followed by the crew, with disastrous consequences. I think there are a number of improvements that are needed.
1. Better training of crews to recognize inconsistencies in airplane response data so that they are able to recognize potential sensor failures and adjust their control actions accordingly.
2. Refinement of control system signal selection and fault detection logic to perform signal consistency checks automatically, to identify errors such as blocked pitot tubes that may corrupt all air data sources equally. Crews should then be alerted as to which data is suspect, and the control augmentation mode should revert to a configuration that does not rely on the data deemed to be in question.
3. Clear guidance to flight crews as to how the control system should be reconfigured if they suspect errant data.
4. Simulator experience flying the airplane manually throughout the flight envelope including exposure to any degraded levels of augmentation that involve significant handling qualities changes.

As to the point about open loop airplane stability, it is important to note that even the lowest level of control system configuration may require some level of augmentation. For instance, FBW augmentation has allowed airplane configurations with a certified cg range such that the cg at its aft limit results in zero steady state elevator for a maneuver. This does not present a system that is wildly unstable, but it does yield neutral pitch stability. In order to assure that the handling qualities experienced by the flight deck crew are at least acceptable, augmentation of some sort has been added to all modes. The B777 is a good example, where the lowest level of augmentation (Direct Mode) includes inertial pitch rate feedback to the elevator.
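
As a rough illustration of what inertial pitch rate feedback buys you (a toy sketch with invented gains and sign conventions, not the actual B777 Direct Mode law): the elevator command blends the pilot's column input with a damping term proportional to pitch rate, restoring apparent pitch damping that the bare airframe may lack at aft cg.

# Toy pitch-rate damping sketch - invented gains, not the B777 control law.

K_COLUMN = 1.0      # elevator units per unit of column command (hypothetical)
K_Q      = 0.8      # elevator units per deg/s of pitch rate (hypothetical)

def elevator_command(column_cmd, pitch_rate_deg_s):
    """Blend pilot input with pitch-rate feedback to augment pitch damping
    (positive values = nose-up demand in this toy convention)."""
    return K_COLUMN * column_cmd - K_Q * pitch_rate_deg_s

# A nose-up column input while the nose is already rising is partially opposed
# by the rate term, which damps the response.
print(elevator_command(column_cmd=2.0, pitch_rate_deg_s=1.5))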

I think it is more appropriate to speak of pilot selection of the lowest certified level of augmentation rather than pilot ability to turn off all of the computers. A subtle difference, but one that I feel we need to be clear about. We should not advocate having the crew manipulate the control system such that it is in a configuration that has not been fully tested. In the case of the B777, the handling qualities have been carefully and completely evaluated in Direct Mode (the lowest augmentation level that is both pilot selectable and can be automatically engaged in the event of detected failures) and found to be adequate for a low-probability backup system. The B777 has not, however, been tested in a configuration where the elevators are commanded by pilot control column position alone.

island_airphoto 8th Jan 2015 19:23

I am more worried about pilots not realizing the auto throttle is not working and totally ignoring airspeed trending to zero than the FBW totally failing into a super-emergency mode. Speaking of which, I once was subjected to some bad cargo loading and flying the ILS with zero pitch stability is a very large PITA :eek: Otto WAS better at it than I was - too bad he couldn't land.

FCeng84 8th Jan 2015 23:37

Car without cruise control and GPS
 
I think a better analogy is an auto driver whose GPS navigation system and cruise control fail. Many would have a hard time these days with a paper map and having to control speed on the highway, but all should be able to.

Mad (Flt) Scientist 9th Jan 2015 02:30

To extend the car analogy, though - should someone who has only ever driven an automatic (and maybe only passed whatever test they passed on one) be allowed to drive a manual?

What if the automatic has provision for operation as a manual?

Oakape 9th Jan 2015 03:47


ROUTINE vs rare, you are talking about one in a million event.
We are not talking about the military, or experimental aircraft, or aviation theory here. We are talking about passenger-carrying commercial aviation. A one in a million event would be quite frequent these days & I don't believe that the industry could sustain a weekly fatal hull loss. The traveling public would just not accept it.

Tweaking aircraft to get maximum performance, to the detriment of a fall-back position where the pilot can get it home when the automatics fail, combined with less & less pilot training due to cost, is pushing the boundaries of what the customer will find acceptable. The never ending quest for lower fares & more profit has a dark side. Try telling the customer that they are in a lottery & bad luck - your number just came up. Worse still, try telling that to the family members left behind.

Things cost. Safety & reliability cost. And the sooner people wake up to the cost of flying around the world at speed in comfort, with safety in mind, the better! Constant downward pressure on costs, combined with constant demand for more profit isn't sustainable. And that applies to all aspects of business.

Hunter58 9th Jan 2015 04:10

All commercial FBW aircraft are first and foremost very nice flying machines. They have to fulfill the same flight dynamic capabilities as any conventional aircraft. The FBW part concerns only the way the pilot input gets to the control surfaces. This as such is NOT automation.

Otto is the additional stuff commonly referred to as the 'autopilot'. And there is that famous dependency on the magenta line, regardless of manufacturer and type. All manufacturers and regulatory authorities operate on the basic principle that the crew consists of PILOTS and not system administrators punching buttons.

We may, however, have a problem in that some airlines have forgotten the latter part of the basics of aviation. Which airlines those are is not evident from their business model. Last time I checked, Air France was not a low-frills/low-cost carrier...

island_airphoto 9th Jan 2015 05:13

The better analogy would be drivers so used to lane assist and adaptive cruise control they hit the car ahead of them or drive right off the road if the system goes offline.
(having grown up with an ancient Porsche with carbs and no power assist auto ANYTHING, I wonder if my son could even get such a car started when he turns 16. ABS, DSC, and EFI will be all he has ever seen)

@Mad (Flt) Scientist
To extend the car analogy, though - should someone who has only ever driven an automatic (and maybe only passed whatever test they passed on one) be allowed to drive a manual?

What if the automatic has provision for operation as a manual?

island_airphoto 9th Jan 2015 07:06

Please let us know what you fly.

island_airphoto 9th Jan 2015 08:15

It is relevant. If you have never had to fight weather AND some crazy autopilot at the same time, you have no frame of reference for what we are talking about. Kind of like how a GPS map coupled to an engine governor that will never allow a car to exceed the speed limit makes perfect sense to a bicycle rider from Bermuda who has never driven a car or seen a highway ;)

FCeng84 9th Jan 2015 16:03

Hunter58 has the following paragraph in an entry above:

"All commercial FBW aircraft are first and foremost very nice flying maschines. They have to fulfill the same flight dynamic capabilities than any conventional aircraft. The FBW part concerns only the way the pilot input gets to the control surfaces. This as such is NOT automation."

I agree completely with the first, second, and fourth sentences above. My issue is with the third sentence. FBW concerns both the way that the pilot input gets to the control surfaces and how feedback of sensed airplane response also factors into the surface commands to achieve the desired response characteristics. Connecting the elevator directly to the pilot's controller without any feedback augmentation will not necessarily (and need not) result in acceptable response characteristics. Careful design of the combination of pilot-to-surface and feedback control paths results in an integrated system that defines the augmented handling qualities that the pilot experiences.

FCeng84 9th Jan 2015 16:31

Jockey69 - Your line of thinking seems to suggest throwing out the baby along with the bath water. With that perspective we would not be flying at all.

Technology advancement has involved taking risks to explore new horizons. We all owe a debt of gratitude to those who have been pioneers in these areas. Our challenge now is to take the knowledge available and apply it to the design and operation of commercial transport systems in such a manner that the risk imposed on the travelling public is reduced to an acceptable level - far below that encountered by the pioneers. We cannot push that risk to zero, but we can make it very small.

Part of our challenge is also to make commercial air transport as affordable as possible without increasing risk above an acceptable threshold. FBW control system technology is playing a major role in that evolution. With this comes dependency on feedback augmentation to provide acceptable handling qualities while enabling performance optimization.

As long as there are failure scenarios that can disable or corrupt data vital to the performance of the flight control system in its full-up configuration, there will be a need for means of recognizing that corruption and gracefully transitioning to reversionary control system modes employing a smaller, more robust set of feedback signals.

Pilots must be capable of continuing safe flight and landing in all control system modes to which they may be exposed. Training should be defined with consideration for control system mode reversion so that flight crews have both the confidence and skills needed for the task.

island_airphoto 11th Jan 2015 15:56

Hey - now the thread is cleaned up we can continue on.
Basic Problem:
If you make the plane able to overrule the pilot, you can prevent many accidents. You can also cause them when the HAL gets wrong data or when a new situation arises that HAL has no idea how to handle and doesn't allow the pilot to make the needed input.
IMHO humans would rather be killed by human error than a stupid computer that can't be stopped ;)

FCeng84 12th Jan 2015 19:18

Examples of Hal not letting a pilot save the day?
 
I agree with the opinion that people would rather tilt the scale in the direction of allowing a pilot to screw it up vs. allowing the system to end the day by not permitting the crew to take over. I would like to know about examples of when the pilot's inability to take over has been a major contributor to an accident - I can't think of any right off hand.

I think the bigger problem is that crews have all too often lost situational awareness and have either blindly followed errant displays or simply persisted with pilot inputs that were not appropriate (e.g., nose up command when at high AOA and descending due to stall, or simply insufficient lift resulting from very low speed). I would put erroneous air data high on that list. The air data technology we have on commercial transports today is not reliable enough to ignore the need to recognize when it has failed and to respond accordingly. Both Hal and the carbon units up front have a role to play in this one.

Another issue is pilots being unfamiliar with flying without the autopilot and/or autothrottle. Closely coupled is lack of awareness of the current automation mode(s), and thus not recognizing what pilot inputs are needed to fly the intended path/speed. The B777 clipping the seawall on the way into SFO is a prime example of the crew not recognizing the need to monitor and control speed. For that event it seems that the crew may have assumed that the autothrottle was minding speed while they made an attempt (albeit not a very good one) to manually control path.

I would sure be a supporter of simulator training where, starting at cruise, an engine is failed, both the autopilot and autothrottle are disengaged, and the crew has the task of landing at an alternate. As SLF I would be more comfortable in back if I knew that the team up front had that practice on a regular basis.

island_airphoto 12th Jan 2015 19:48

The Air Force lost some F-16s early in the program when HAL would not let the pilots pull up hard enough to avoid solid ground.
Autothrottles are a prime example of where automation fails us by making humans complacent. It apparently makes flying a lot easier - never had a plane with this myself - but absent autothrottles forgetting about power and airspeed on final would be like forgetting how to walk. Now - maybe not so much because the airplane does it for you for so much of the time. Of course if you try and take them away the 99.9% of pilots that didn't forget how to use them correctly would be screaming.

john_tullamarine 12th Jan 2015 21:07

First, my apologies as I had not been monitoring this thread as routinely as I should have done ... otherwise it would have been brought to heel a bit quicker ..

Some comments -

(a) if one has a disagreement with the thrust of the conversation, by all means put your disagreement but, please, don't restate the point ad infinitum .. it gets terribly distracting and boring

(b) the rational person accepts that the automatics are here to stay for all the right reasons. There is no realistic view which suggests that we can (or should) reverse the trend.

(c) whether an individual pilot prefers stick and rudder or automatics largely is irrelevant to the discussion

(d) clearly, if an unrecoverable catastrophic event occurs, the pilot is along for the ride .. eg wing separation

(e) however, depending on the individual pilot's knowledge base, skill set and determination .. a varying range of significant failures can result in a variety of outcomes .. some more successful than others. While there are a few in the records, my favourite is UA 232. By rights none on board should have had any chance of survival .. yet Haynes and his crew pulled off a miracle .. albeit with some peripheral bits of good fortune to help out along the way. If the phugoid hadn't caught them out at the last moment, they may have pulled off the miracle of the century .. but the gremlins have to get a foot in the door.

(f) it should be a reasonable view that the typical competent pilot can address minor, but initially confusing or improbable, JB failures and reversions

(g) the question is, regardless of the presumed probability of an undesirable event outcome, do we want pilots who have no/little chance of rising to the occasion on the day (ie the button pushers) - in which case we should look to UAVs ... or folk who have sufficient training and exposure to have a reasonable chance of saving the day (ie those who use the buttons intelligently but retain a sufficient basic skill competence to operate) following intentional/unintentional use of the O-F-F button ?

climber314 4th Dec 2018 18:03


Originally Posted by FCeng84 (Post 8822388)
Examples of Hal not letting a pilot save the day?

I agree with the opinion that people would rather tilt the scale in the direction of allowing a pilot to screw it up vs. allowing the system to end the day by not permitting the crew to take over. I would like to know about examples of when the pilot's inability to take over has been a major contributor to an accident - I can't think of any right off hand.

Looks like we have one now... JT610.

megan 4th Dec 2018 22:59

I'm surprised that the Qantas A330 upset has not gained a mention thus far. Synopsis,

On 7 October 2008, an Airbus A330-303 aircraft, registered VH-QPA and operated as Qantas flight 72, departed Singapore on a scheduled passenger transport service to Perth, Western Australia. While the aircraft was in cruise at 37,000 ft, one of the aircraft's three air data inertial reference units (ADIRUs) started outputting intermittent, incorrect values (spikes) on all flight parameters to other aircraft systems. Two minutes later, in response to spikes in angle of attack (AOA) data, the aircraft's flight control primary computers (FCPCs) commanded the aircraft to pitch down. At least 110 of the 303 passengers and nine of the 12 crew members were injured; 12 of the occupants were seriously injured and another 39 received hospital medical treatment.

Although the FCPC algorithm for processing AOA data was generally very effective, it could not manage a scenario where there were multiple spikes in AOA from one ADIRU that were 1.2 seconds apart. The occurrence was the only known example where this design limitation led to a pitch-down command in over 28 million flight hours on A330/A340 aircraft, and the aircraft manufacturer subsequently redesigned the AOA algorithm to prevent the same type of accident from occurring again.

Each of the intermittent data spikes was probably generated when the LTN-101 ADIRU's central processor unit (CPU) module combined the data value from one parameter with the label for another parameter. The failure mode was probably initiated by a single, rare type of internal or external trigger event combined with a marginal susceptibility to that type of event within a hardware component. There were only three known occasions of the failure mode in over 128 million hours of unit operation. At the aircraft manufacturer's request, the ADIRU manufacturer has modified the LTN-101 ADIRU to improve its ability to detect data transmission failures.

At least 60 of the aircraft's passengers were seated without their seat belts fastened at the time of the first pitch-down. The injury rate and injury severity was substantially greater for those who were not seated or seated without their seat belts fastened. The investigation identified several lessons or reminders for the manufacturers of complex, safety-critical systems.
The worrying part is the highlighted portion, and it is the reason this SLF wants a human up front: he/she may not be perfect, and is prone to their own failures, but they represent the final get-out-of-jail card when the folk/designers on the ground have boo-booed on coding/design/architecture. Post event, the Captain retired with PTSD. See page 191 of the link for the analysis. It is perhaps pertinent to this thread to post a portion of it.

Limitations of simulation and testing activities
Another means of detecting a design problem is through the use of the simulation and testing activities conducted during the verification and validation processes. However, the selection of the simulations and tests needs to be prioritised based on an identified need, and this will usually focus on confirming that the design meets the specified requirements, and that it effectively manages identified failure modes or specific types of incorrect inputs. Any activities beyond the scope of verifying the explicitly defined design requirements must rely on the expertise of those involved, which is as fallible as any other human activity.
Due to the wide range of potential inputs into a complex system such as the EFCS, simulation and testing programs cannot exhaustively examine all the possible patterns of inputs. In the case of the FCPC algorithm for processing AOA, the simulation and testing activities examined the new design’s ability to handle the situation that led to the redesign. They also included previously identified tests to ensure there were no regression problems with the system design. However, they would not realistically have included a scenario involving multiple AOA data spikes 1.2 seconds apart unless the potential problem had previously been identified.

https://www.atsb.gov.au/media/3532398/ao2008070.pdf
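
For what it's worth, here is a toy reconstruction of the general failure pattern described in the synopsis - not the actual FCPC algorithm; the structure and thresholds are invented. A filter that memorises the last good value for a fixed window after a detected jump, then re-syncs to whatever the sensor reads when the window expires, is defeated by a second spike arriving just as the window ends:

# Toy spike filter - a simplified reconstruction of the general idea, not the FCPC code.

HOLD_SECONDS = 1.2      # memorisation period after a detected jump
JUMP_LIMIT   = 5.0      # deg of change treated as a spike (value invented for the example)

class SpikeFilter:
    def __init__(self):
        self.last_good = None
        self.hold_until = None

    def update(self, t, aoa):
        if self.last_good is None:
            self.last_good = aoa
            return aoa
        if self.hold_until is not None:
            if t < self.hold_until:
                return self.last_good    # inside the hold window: use the memorised value
            self.hold_until = None
            self.last_good = aoa         # window expired: accept the current reading as-is
            return aoa
        if abs(aoa - self.last_good) > JUMP_LIMIT:
            self.hold_until = t + HOLD_SECONDS
            return self.last_good        # spike detected: start the hold
        self.last_good = aoa
        return aoa

f = SpikeFilter()
samples = [(0.0, 2.0), (0.5, 2.1),
           (1.0, 50.0),                  # first spike: rejected, hold starts
           (1.5, 2.1),                   # sensor back to normal, hold still active
           (2.2, 50.0),                  # second spike just as the hold expires: passes through
           (2.5, 2.1)]
for t, aoa in samples:
    print(t, "->", f.update(t, aoa))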

underfire 5th Dec 2018 12:28

Personally, I think the certification process needs to be rethought. The latest version of the FMS is based on what, a 486 processor? Instead of certifying a new version with new, up-to-date electronics and software, the avionics software is bastardized by the process. Version upon version is dogpiled on, because a revision is easy to certify, while a new system certification would take forever.
Thus we are constrained by legacy avionics, even on brand new variants. We are all well aware that there is far more computing power in our watches and phones than what is installed on the flightdeck, and that is a sad result of the certification process.


Due to the wide range of potential inputs into a complex system such as the EFCS, simulation and testing programs cannot exhaustively examine all the possible patterns of inputs. In the case of the FCPC algorithm for processing AOA, the simulation and testing activities examined the new design’s ability to handle the situation that led to the redesign.
Exactly - with a software version built upon the legacy programming. That is, until somewhere in the million lines of code a lookup feature finds some legacy variable, and it all falls apart.

Allow a streamlined certification process for a brand new system, built upon the capabilities of the brand new variants. The software and avionics would then follow a straight path in the programming, not a multi-threaded path of cascading possibilities that, many times, is not figured out until the aircraft is on revenue flights.

climber314 5th Dec 2018 20:03

With respect to JT610 and the 737 MAX, does it matter if the technology is based on an 8086, a 286 or some other older processor? It's not a full blown AI system we're talking about. If MCAS is limited to function only when AoA Agree = Yes, there is no crash. Assuming pilots can fly.
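
A minimal sketch of the gate being proposed (purely hypothetical logic and thresholds on my part - not Boeing's MCAS implementation):

# Hypothetical AoA-agreement gate - not Boeing's MCAS logic; the threshold is invented.

AOA_DISAGREE_LIMIT = 5.5    # deg of vane disagreement treated as "AoA Disagree"

def aoa_agree(aoa_left, aoa_right):
    """True when the two vanes agree within the (invented) limit."""
    return abs(aoa_left - aoa_right) <= AOA_DISAGREE_LIMIT

def auto_trim_command(function_demands_nose_down, aoa_left, aoa_right):
    """Only pass an automatic nose-down trim demand when both vanes agree."""
    if function_demands_nose_down and aoa_agree(aoa_left, aoa_right):
        return "TRIM_NOSE_DOWN"
    return "NO_AUTO_TRIM"

print(auto_trim_command(True, aoa_left=24.0, aoa_right=4.5))   # vanes disagree -> inhibited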

If we're talking autonomous flight, FBW or something similar then by all means more processing power is better and new code preferred.

Atlas Shrugged 6th Dec 2018 02:00

The biggest thing with automation at the moment is that it is not 'complete' automation. Partial automation is a huge issue - having a pilot who's supposed to take over when the system craps itself, but who otherwise does nothing............... he's only there for when the going gets really tough, but the automation won't always keep him in the loop, or keep him in practice, but will still expect him to go from brain dead to aviation hero in an instant.

At some point the technology will become reliable enough but it is nowhere even remotely near that right now. First, you have to assume that the programmers will make NO ERRORS (!), and second, that they will think of EVERYTHING (!!) or come up with an AI that can HANDLE EVERYTHING (!!!).

I expect they'll be working on it for many, many, years before it ever becomes viable.

Vessbot 6th Dec 2018 02:16


Originally Posted by Atlas Shrugged (Post 10329038)
The biggest thing with automation at the moment is that it is not 'complete' automation. Partial automation is a huge issue - having a pilot who's supposed to take over when the system craps itself, but who otherwise does nothing............... he's only there for when the going gets really tough, but the automation won't always keep him in the loop, or keep him in practice, but will still expect him to go from brain dead to aviation hero in an instant.

At some point the technology will become reliable enough but it is nowhere even remotely near that right now. First, you have to assume that the programmers will make NO ERRORS (!), and second, that they will think of EVERYTHING (!!) or come up with an AI that can HANDLE EVERYTHING (!!!).

I expect they'll be working on it for many, many, years before it ever becomes viable.

You're capturing something I've been trying to say for a while. To be even more succinct (something I'm not good at) -- right now we're in a gap, where the baton of ultimate responsibility is being passed from human pilots to the automation - but it's not a smooth pass, and there's no one to hold it in the interim.

Atlas Shrugged 6th Dec 2018 02:57

Exactly Vessbot.

Another thing that I think about is that as the systems improve, which they will, the point at which they fail will be further and further into the areas that make the aircraft harder to fly, with the system handing over a larger and larger pile of excrement as it progresses - QF32 being a possible example.

It seems as if Airbus have tried to engineer pilots out as much as possible, and in doing so have made aircraft that are, in some situations, very much more difficult to fly than they need to be. They have taken a lot of the day-to-day things and automated them and this has had the effect of weakening pilot skills and when it does drop its bundle you'll need those very same now weakened skills to fix it. Every day there are possibly hundreds of events around the world where the automatics go haywire in one form or another, and it's fixed by the pilots, who simply tidy up and continue on their way. If you stopped those fixes, it would start raining aluminium.

There is no such thing as something that cannot fail..... at least not at the moment.

