
Why so many computers for flight controls in A320?


IFLY_INDIGO
21st Jun 2013, 10:28
I wonder why a single computer couldn't be used for the flight controls with a back up or two?

mixture
21st Jun 2013, 10:46
I wonder why a single computer couldn't be used for the flight controls with a back up or two?

Really ? You really wonder that ? :rolleyes:

It's a safety-critical system. The fundamental rule of safety-critical systems is the KISS principle (KISS = Keep It Simple, Silly).

The more features and functions you introduce, the greater the complexity of programming that needs to be done, and the greater the risk of bugs etc.

You also need to consider the maintenance aspect. Safety critical parts tend to be expensive, if you have one big computer doing everything, that's going to be very expensive to replace. Having a larger number of discrete components means you can replace individual parts with less expense.

Clandestino
21st Jun 2013, 12:52
Why does she have two engines? Two wheels per undercarriage leg? Three hydraulic systems? Three generators? Three GNADIRS?

Same reason.

FCeng84
21st Jun 2013, 15:23
It all boils down to meeting safety requirements for flight-critical systems. No single failure (regardless of probability), nor any combination of failures more likely than 10^-9 per flight hour (one occurrence in a billion flight hours), shall leave the system in a catastrophic state. Consider that the failure rate for any single LRU such as a flight computer is on the order of 10^-6 per flight hour, so the overall system must be roughly 1000 times more reliable than any single component. To put 10^-9 in perspective, the fleet lifetime of a very successful commercial airplane model will be about 10^9 hours!

In order to meet this level of safety, redundant systems are required including sufficient levels of dissimilarity to protect against generic faults.

All commercial transport aircraft must meet these requirements regardless of the company that produces them or the airline operating them.
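
As a back-of-the-envelope sketch (my numbers, assuming the channels fail independently at roughly the 10^-6 per flight hour quoted above):

```latex
% Two independent channels, each failing at roughly 1e-6 per flight hour:
P(\text{both channels fail}) \approx p_1 \, p_2 = 10^{-6} \times 10^{-6} = 10^{-12} < 10^{-9}

% A common-mode fault (e.g. the same software bug loaded into both channels)
% breaks the independence assumption, and the combined probability collapses
% back towards the probability of that single fault:
P(\text{both channels fail}) \approx p_{\text{common-mode}}
```

That collapse is exactly why dissimilarity is demanded on top of simple duplication.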

EEngr
21st Jun 2013, 15:27
Setting aside the issue of redundancy for a moment, multiple computers (controllers, etc.) are still a good idea. Many of the flight control functions involve control loops that have to read a few sensors and operate an actuator very rapidly with pilot/autopilot inputs (error correction) entered at a much slower rate.

Designing one big controller that can handle multiple loops, read numerous sensors and drive many actuators becomes problematic, with the possibility of bugs creeping into the software and of unknown, undesired coupling between the various functions. It's much easier to write smaller modules and run each on hardware optimized for that particular function.
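
As a rough illustration of that split (a hypothetical sketch, not Airbus code - the names, rates and gains are made up), a dedicated surface controller typically looks something like this:

```c
#include <stdint.h>

/* Hypothetical surface controller: a fast inner loop closes the actuator
 * position servo, while pilot/autopilot commands arrive at a slower rate. */

#define OUTER_DIVIDER  10   /* command input sampled every 10th cycle of the fast loop */

/* Dummy stand-ins for sensor/actuator/RTOS interfaces so the sketch compiles. */
static float read_surface_position_deg(void) { return 0.0f; }
static float read_pilot_command_deg(void)    { return 1.0f; }
static void  drive_actuator(float demand)    { (void)demand; }
static void  wait_for_next_tick(void)        { /* block until the next scheduler tick */ }

void control_task(void)
{
    float surface_cmd_deg = 0.0f;   /* latest demanded surface position */
    uint32_t cycle = 0;

    for (;;) {
        if ((cycle % OUTER_DIVIDER) == 0) {
            /* slow path: sample the pilot/autopilot demand */
            surface_cmd_deg = read_pilot_command_deg();
        }

        /* fast path: simple proportional servo loop around the actuator */
        float error = surface_cmd_deg - read_surface_position_deg();
        drive_actuator(0.8f * error);   /* gain is illustrative only */

        cycle++;
        wait_for_next_tick();
    }
}
```

Keeping each such loop on its own dedicated hardware is what avoids the coupling problem described above.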

FCeng84
21st Jun 2013, 15:36
Airplane handling qualities are particularly sensitive to delay between pilot control input and airplane response. As a result, the fastest signal flow and processing paths are used for the linkage between pilot cockpit controller input and associated surface motion.

Division of control processing among different system LRUs is not driven by complexity concerns. It is driven by failure impact / propagation issues.

Uplinker
23rd Jun 2013, 23:57
Really ? You really wonder that ?

mixture, was that really necessary? Indigo is asking for information, and you effectively call them an idiot - why did you do that? Would you like people to treat you the same way when you ask a question?

The other replies have been far more sensible - despite flying the Airbus for 8 years, I personally hadn't thought of the parallel processing angle - the computers are distributed around the flight controls, which would indeed give quick response, rather than one doing everything.

Many designers and engineers at Airbus will have had many "strokey beard" sessions to decide how many items and back-up items would be needed in every part of the aircraft. They would have done a risk analysis and not included more than was considered necessary. There are 5 FBW computers and 2 FACs (flight augmentation computers), all of which translate the sidestick or autopilot commands into flight control responses. This gives a high level of redundancy. I think I'm right in saying (please correct me) that each computer is of a different design and has different software written by different software suppliers, so not only do the computers back each other up, they do so with different methods and philosophies, which is an additional safeguard.

I was also told, though I don't know if it is actually true, of the FBW fighter jet that was the new whizz-bang thing, but one day it crossed the equator and flipped inverted - someone had left a minus sign out of the program and it hadn't been spotted.

IFLY_INDIGO
24th Jun 2013, 18:26
I doubt that the 'code' differs between ELAC 1 and ELAC 2, or between any of the SECs.

Redundancy could not be the reason for so many different computers. After all, you can have a single computer and its copies for redundancy.

Speed of processing could also not be the reason for so many computers. A high-speed microprocessor processes millions of instructions per second even with inputs coming in from many sources. In the case of the A320, input is mainly just flowing in from the sidestick (the rudder too, for the yaw damping).

A task is shared among many small entities instead of a single big entity. Let's see the 7-computer FBW as a big single computer with 7 components inside it. If just a single component fails, why reject all the other ones? Backing up individual components in different ratios is much more economical (cost-, space- and weight-wise) than backing up the entire big single computer.

Popsiclestix
24th Jun 2013, 20:03
A single-large computer may not be cheaper than many individual discrete components.

The largest cost to manufacture these computers is actually the cost of programming them. If you integrate multiple functions into a single chip, you suddenly open up very strange ways in which those functions may interact.

In order to have high confidence that they don't interact, you now have to write tests that exercise the functions in combination, in many different ways.

Separate chips don't have this limitation; they cannot possibly interact with each other (outside of their control signals).
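
A crude illustration of that isolation argument (hypothetical C, nothing to do with real avionics code): functions hosted together can couple silently through anything they share, whereas separate boxes can only interact through the signals you explicitly wire between them.

```c
/* Integrated: two functions in one program can couple through shared state. */
static float shared_scratch;                    /* accidental shared resource */

float pitch_law(float input)  { shared_scratch = 2.0f * input; return shared_scratch; }
float yaw_damper(float input) { return shared_scratch + input; }   /* hidden dependency */

/* Separated: each "box" owns its own state; the only possible interaction is
 * the value explicitly passed between them - which can be enumerated and tested. */
typedef struct { float scratch; } pitch_box_t;
typedef struct { float scratch; } yaw_box_t;

float pitch_box_step(pitch_box_t *box, float input) { box->scratch = 2.0f * input; return box->scratch; }
float yaw_box_step(yaw_box_t *box, float input)     { box->scratch = 0.5f * input; return box->scratch; }
```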

clark y
24th Jun 2013, 20:15
Remember that the A320 was designed in the 80's. Back then nobody could have dreamed of a gigabyte or even a terabyte.
I too have been told that all the flight control computers are supposedly designed and built by different companies to maintain redundancy.
As for processing speed, I think that is also plausible as a reason, given the age of the design and the fact that it is not just the controls that need to be instantaneous but also other critical systems like instrumentation.
The FMGS on the A320s I fly have a whopping 2.8 meg (not gig) of memory. There are also 3.5 inch floppy drive data loaders on the centre pedestals!

FakePilot
24th Jun 2013, 20:47
Might also be a reflection of the internal company structure. This group makes this subsystem, so it would probably be a good idea if they made the controller for it too.

Of course, this is uninformed speculation based on experience from other fields.

Uplinker
25th Jun 2013, 00:57
Well Indigo; I back you up and you dismiss me. Thanks very much.

I don't know why you 'doubt that the code would change between ELACs'. Consider two computers which are identical electronically and have been loaded with exactly the same software, which has a tiny flaw that no-one has spotted. One day, the aircraft enters the zone where this flaw produces a fault. No matter, because the second computer can take over... ah, no, wait - it has the same flaw.

Now consider two computers which may be different electronically, have been programmed differently by different people perhaps even using a different computer language. Now when the aircraft enters the area where the flaw in the first computer shows itself, the second computer can take over and work properly, because it does not have the same flaw.
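
To make that concrete, here's an illustrative sketch (not how any ELAC is actually coded - names, values and tolerance are invented): the same gain schedule is computed by two deliberately different methods, and a monitor trips if they ever disagree beyond a tolerance.

```c
#include <math.h>
#include <stdbool.h>

/* Channel A: gain schedule via table lookup and linear interpolation. */
static float gain_channel_a(float airspeed_kt)
{
    static const float speeds[] = { 100.0f, 200.0f, 300.0f };
    static const float gains[]  = {   1.0f,   0.6f,   0.4f };
    if (airspeed_kt <= speeds[0]) return gains[0];
    if (airspeed_kt >= speeds[2]) return gains[2];
    int   i = (airspeed_kt < speeds[1]) ? 0 : 1;
    float t = (airspeed_kt - speeds[i]) / (speeds[i + 1] - speeds[i]);
    return gains[i] + t * (gains[i + 1] - gains[i]);
}

/* Channel B: independently written; approximates the same schedule with a
 * fitted polynomial instead of a table (valid over the same 100-300 kt range). */
static float gain_channel_b(float airspeed_kt)
{
    if (airspeed_kt < 100.0f) airspeed_kt = 100.0f;
    if (airspeed_kt > 300.0f) airspeed_kt = 300.0f;
    return 1.0e-5f * airspeed_kt * airspeed_kt - 0.007f * airspeed_kt + 1.6f;
}

/* Monitor: a disagreement beyond tolerance flags the channel as faulty. */
bool channels_agree(float airspeed_kt)
{
    return fabsf(gain_channel_a(airspeed_kt) - gain_channel_b(airspeed_kt)) < 0.05f;
}
```

A bug in one implementation (a wrong table entry, a sign error in the polynomial) shows up as a miscompare instead of silently driving a surface - which the same bug loaded into both computers never would.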

Your 'seven subsections of the same computer' idea has less redundancy than seven separate computers: 7 separate computers have 7 separate power supplies, 7 separate motherboards, 7 separate sets of data wiring, 7 separate outputs, and so on. Can you see how this is a far more redundant system than your one computer with 7 parts?

With your one 'mega-system'; if the internal power supply fails, you lose the whole lot. With Airbus's 7 separate systems, if you lose one power supply, you lose one computer, still leaving you with 6.


As for high-speed computers being able to process many inputs: I can easily overload my modern desktop PC or my smartphone with too many rapid inputs. It would not be very clever if rapid joystick/yoke inputs on an approach in gusty, turbulent conditions overloaded the one flight computer and it froze, now would it?

galaxy flyer
25th Jun 2013, 02:04
Once again, I'm amazed questions like this come from line pilots who clearly do NOT understand aviation. You do understand how important flight controls are, don't you?

I FLY INDIGO, why not give up your day job and replace the huge work force in Toulouse that do nothing but design systems for certified planes?

Fly3
25th Jun 2013, 02:32
IIRC, during my initial training on the A320 in Toulouse back in 1991 they told us that only two computers from the same manufacturer could be fitted in each aircraft, that the others had to be from another source, and that those two from the same company could not be loaded with software written by the same software provider. This would minimize the risk of a bug affecting more than one computer at a time.

IFLY_INDIGO
25th Jun 2013, 05:43
"Lets see the 7 computer FBW as a big single computer with 7 components inside it. if just a single component fails, why reject all other ones? backing up individual components in different ratios is much more economical (cost, space, weight-wise) than backing up the entire big single computer."I was taking the present arrangement of 7 computers as the most ideal case and was trying to find out the hidden logic in this unique selection. What I meant was to look at the 7 computers as 7 components of a big notional single computer.

On redundancy: the elevator has 3 backups, the stabilizer has 3 backups, the ailerons have a single backup, the spoilers have no backup. If a SEC fails, its spoilers are gone. The rudder is mechanical with no backup.

IFLY_INDIGO
25th Jun 2013, 06:33
From Popsiclestix:

The largest cost to manufacture these computers is actually the cost of programming them. If you integrate multiple functions into a single chip you suddenly have very weird ways the functions may interact.

In order to have a high confidence they don't interact, you now have to write tests that exercise the multiple functions in different ways.

Separate chips don't have this limitation, they cannot possibly interact with each other (outside of their control signals).

I guess programming and hardware constraints of the past (1980s) are being carried forward.

I know for sure that both ELACs have to be from the same vendor - Thales, Honeywell, etc. - and the same goes for the SECs (from a discussion with our chief aircraft engineer).

IFLY_INDIGO
25th Jun 2013, 23:29
In a completely normal situation, ELAC 1 and 2 do all the signal processing. After processing, ELAC 1 provides roll orders to the ailerons directly and to the spoilers via the SECs. ELAC 1 also provides the yaw order to FAC 1 for turn coordination. ELAC 2 provides the pitch orders to the elevators and THS.

In case of failure of one of the ELACs, the remaining ELAC will carry out all the signal processing and give all the orders; only LAF (the load alleviation function) would not be available.

The point is that there is not as much spreading of the processing as it seems: in normal situations, only two computers do the processing and pass the orders to the other computers.

Swedish Steve
26th Jun 2013, 06:29
Once again, I'm amazed questions like this come from line pilots who clearly do NOT understand aviation. You do understand how important flight controls are, don't you?

Well, if separate computers are so important, why does the B777 have only ONE ADIRU?
Admittedly there are many sensors and channels inside it, but when it fails there is only one box to change.
I know the SAARU is more complicated than the average standby gyro, but there is still only one ADIRU.

EEngr
26th Jun 2013, 18:50
why does the B777 only have ONE ADIRU?

That is a bunch of computers in one box. Aside from multiple redundant channels, each channel comprises several CPUs, each processing its inputs and providing outputs on buses within the box.

This architecture achieves a few things. It is fault tolerant, in that the loss of one CPU (or other function) does not disable the entire LRU. Ship's wiring being more prone to faults, it is more reliable to keep all these buses within one housing. This design also simplifies maintenance: there is only one box to diagnose/replace in the field, and the internal modules are better repaired in an electronics workshop.

NSEU
26th Jun 2013, 22:57
Real scenario: Water drips from overflowing galley plumbing through improperly sealed floorboards onto a cracked Main Equipment Centre drip shield and onto multiple rows of computers. Instead of complete systems failing, parts of systems survive.

Real scenario: A pax oxygen bottle blows up in flight and damages flight control cables on the right hand side of the fuselage. The left hand flight control cables are ok.

Real scenario: Someone accidentally stands on an antenna cable in the Forward Cargo Area and damages it. Two antenna cables remain serviceable.

Don't put all your eggs in one giant basket. For starters, one person may not be able to lift it. Also, you won't be able to fit the basket through the (Equipment Centre) door.
Also, don't stack smaller egg baskets on top of each other. Spread them out so one doesn't affect the other.

And always apply Murphy's Law.

panda-k-bear
27th Jun 2013, 11:58
The answer is redundancy (whether you accept it or not, that is the answer), and when designing the systems Airbus insisted that separate teams within the manufacturers, or where possible different manufacturers, designed both the hardware and the software. Airbus (or the DGAC) was not willing to accept the risk that a fault in the same piece of code, running in two separate computers, could cause the loss of an aircraft.

EEngr
27th Jun 2013, 14:05
Redundancy is why we have three computers instead of one. Functional isolation is why we have dozens of computers instead of three.

DozyWannabe
27th Jun 2013, 21:29
OK, I guess here's where I come in... :8

The general gist of the previous replies is correct. To answer the question thoroughly would require an essay-length reply or longer, so I'll try to stay brief for now and, for a more in-depth look, direct you to this PDF, which covers things in greater detail:

http://www.cs.ucc.ie/~herbert/CS4504/X%20Software%20Risks/A320%20Software%2000627364.pdf

A few ground rules worth understanding include the following:

We're talking purely of flight control computers here - i.e. the FBW aspect. FMS/automatics are a completely separate concept.
As the A320 was not just the first iteration of Airbus's civil FBW technology but a world first, the tendency was to overengineer things slightly. As an example, the functions handled by the FACs in the A320 were folded into the PRIMs and SECs on the A330/340 (hence the change in designation).
There are several layers of redundancy at work. Not just duplicating the systems themselves, but function-sharing in terms of physical connections and a dissimilar codebase for each.
The hardware used was already obsolete by the mid-'80s. This meant that the hardware was fully understood and predictable in terms of operation.
This in turn necessitated a very simple codebase - which was and is standard practice in realtime systems development.
Megabytes and gigabytes of storage have little bearing on this kind of technology, because it is designed to be lightweight in terms of complexity (and as such reduces the risk of error).


As far as the dissimilar codebase goes, the ELACs, SECs and FACs each have the same basic hardware implementation per type (the ELACs based around the Motorola 68010 and the SECs around the Intel 80186), but the code developed for ELAC1 was different to that of ELAC2, and the same went for the SECs and FACs.

To give an example of function-sharing and redundancy in the physical sense, I'll quote from the article linked above:

The computers and actuators are also redundant. This is illustrated by the A320 pitch control (left and right elevator, plus Trimmable Horizontal Stabilizer - THS). Four control and monitoring computers are used; one is sufficient to control the aircraft. In normal operation, one of the computers (ELAC2) controls the pitch, with one servocontrol pressurised by the Green hydraulic system for the left elevator, one pressurised by the Yellow hydraulic system on the right elevator, and by electric motor No. 2 for the THS. The other computers control the other control surfaces. If ELAC2 or one of the actuators that it controls fails, ELAC1 takes over (with the servocontrols pressurised by the Blue hydraulic system on the elevators, and with THS motor No. 1). Following the same failure logic, ELAC1 can hand over control to SEC2. Likewise, pitch control can be passed from one SEC to the other depending on the number of control surfaces that one of these computers can handle.
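
The reconfiguration order described in that quote can be pictured as a simple priority chain (an illustrative sketch, not the real implementation): each computer only takes pitch control if everything ahead of it has dropped out.

```c
#include <stdbool.h>
#include <stddef.h>

/* Pitch control candidates in the priority order quoted above:
 * ELAC2 first, then ELAC1, then the SECs. */
typedef struct {
    const char *name;
    bool healthy;        /* result of the computer's own self-monitoring      */
    bool actuators_ok;   /* the hydraulics/motors it commands are available   */
} pitch_computer_t;

static pitch_computer_t chain[] = {
    { "ELAC2", true, true },
    { "ELAC1", true, true },
    { "SEC2",  true, true },
    { "SEC1",  true, true },
};

/* Returns the highest-priority computer able to take pitch control,
 * or NULL if none remain. */
const pitch_computer_t *select_pitch_master(void)
{
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++) {
        if (chain[i].healthy && chain[i].actuators_ok)
            return &chain[i];
    }
    return NULL;
}
```

If the whole chain were exhausted, the aircraft would still have the mechanical trim and rudder path mentioned earlier in the thread.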

At the risk of repeating myself, I'm going to reiterate the point about the hardware used being obsolete even by mid-'80s standards because it is crucial to understand, given the scuttlebutt aimed at the system over the years. Your home PC with its modern processor is the equivalent of a Formula 1 engine - designed to run complex software at tremendous speed, with the caveat that it will crash from time to time. Realtime systems use obsolete hardware and very simple code and would be the equivalent of an old pick-up truck or VW engine - not amazingly fast, but designed to run problem-free and reliably for an unbelievable amount of time. For this reason, specialist firms are still knocking out hardened packages based on processor designs that are over three decades old, because for most realtime work you don't need anything much more complex!

The software too is based on completely different principles to what most people are used to. In engineering terms this design principle is known as a Finite State Machine and you tend to encounter them in things like washing machines and microwave ovens. Put simply, there is only a finite number of states (e.g. Fill, Wash, Rinse, Spin, Drain) that the machine can be in, and usually only one of those states applies at any given time. The Airbus flight control software is essentially an interconnected set of these FSMs which can trigger changes in behaviour amongst themselves. This means that a system which appears complex to the outside observer is actually made up of a group of very simple systems which were tested exhaustively in isolation and later in combination with each other.
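
Using the washing machine example, a finite state machine is tiny enough to write down in full (a toy sketch, obviously nothing like the real flight control code):

```c
#include <stdbool.h>

/* Toy finite state machine: at any instant the system is in exactly one
 * state, and the transitions form a small, enumerable, testable set. */
typedef enum { FILL, WASH, RINSE, SPIN, DRAIN, DONE } wash_state_t;

typedef struct {
    bool tub_full;
    bool timer_expired;
    bool tub_empty;
} wash_inputs_t;

wash_state_t wash_step(wash_state_t state, wash_inputs_t in)
{
    switch (state) {
    case FILL:  return in.tub_full      ? WASH  : FILL;
    case WASH:  return in.timer_expired ? RINSE : WASH;
    case RINSE: return in.timer_expired ? SPIN  : RINSE;
    case SPIN:  return in.timer_expired ? DRAIN : SPIN;
    case DRAIN: return in.tub_empty     ? DONE  : DRAIN;
    default:    return DONE;
    }
}
```

Because every state and every transition can be written down, every one of them can be exercised in testing - which is the exhaustive-testing property described above.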

I realise I've exceeded the bounds of the original question slightly, but I hope this info is useful!

IFLY_INDIGO
28th Jun 2013, 03:16
Thanks for the link - a very detailed answer. It proves that sometimes 'OLD IS GOLD'.

Another point I would like to add is that redundancy means duplication which usually 'kicks in' AFTER a failure.

Sharing the processing job among many computers is not redundancy; it is reducing the risk of failure by not letting a single computer run the entire show.

HighWind
28th Jun 2013, 16:25
@IFLY_INDIGO (http://www.pprune.org/members/172928-ifly_indigo)
Another point I would like to add is that redundancy means duplication which usually 'kicks in' AFTER a failure.
No - you also need redundancy to detect faults.

Say we have an application with a pressure sensor to monitor whether the oil pressure is so low that it could damage an engine.

The sensor might have failure modes where you can't detect whether the reading is correct or not. In this case you need to compare the reading with another sensor to find out if the pressure reading is OK. In case of disagreement you might not know which of the two sensors is correct. This is fine if it is acceptable to stop the engine on a pressure disagreement.

If we are not allowed to stop the engine on a false low reading, then we need additional redundancy, e.g. 2oo3 (two-out-of-three voting) or dual 1oo2.

Microcontrollers have the same problem: a single microcontroller cannot detect that it has suffered a hardware error resulting in a faulty calculation. (There are some that claim they can, like the TMS570.)

Normally lock-stepping and voting are used (a design that avoids Byzantine faults).
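
A minimal sketch of the 2oo3 idea (illustrative only - a real voter also handles time skew, validity flags, and so on): take three readings, use the median, and flag any sensor that strays too far from it.

```c
#include <math.h>
#include <stdbool.h>

/* 2oo3 voter: the median of three readings is immune to any single sensor
 * failing high or low; a miscomparing sensor can also be flagged. */
float vote_2oo3(float a, float b, float c, bool *miscompare)
{
    /* median of three */
    float median;
    if ((a >= b && a <= c) || (a <= b && a >= c))      median = a;
    else if ((b >= a && b <= c) || (b <= a && b >= c)) median = b;
    else                                               median = c;

    /* flag if any reading disagrees with the median beyond a tolerance */
    const float tol = 0.5f;   /* units of the measured quantity - illustrative */
    *miscompare = fabsf(a - median) > tol ||
                  fabsf(b - median) > tol ||
                  fabsf(c - median) > tol;
    return median;
}
```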

DozyWannabe
28th Jun 2013, 17:24
Another point I would like to add is that redundancy means duplication which usually 'kicks in' AFTER a failure.

Sharing processing job among many computer is not redundancy. it is reducing risk of failure by not letting a single computer run the entire show.

Not always true. Even simple redundant systems, such as multiple elevator (as in lift) cables, can have all of them sharing the load at any given time; it's just that losing one or more of them still allows the system to continue functioning well enough to ensure a safe outcome. Likewise, the cross-checking that goes on between the ADIRU modules has all of them working on similar data, monitoring for discrepancies.

Such functions in the computing world are a specialised form of redundancy known as "Defence In Depth".

http://en.wikipedia.org/wiki/Defense_in_depth_(computing)

Teldorserious
28th Jun 2013, 20:13
It's about reliance on the gear, trying to take the pilot out of the equation.

One good lightning strike and it's back to pilots again... no telling them that though - lightning strikes, bombs, oxygen fires, sabotage, malware, ground interception... none of that happens in their world.

DozyWannabe
28th Jun 2013, 22:33
It's about reliance on the gear, trying to take the pilot out of the equation.

No it isn't, and it never was.

Not only that but there has not been a single accident where the FBW systems failed - not due to a lightning strike or anything else.

Fly3
29th Jun 2013, 15:38
I have suffered five lightning strikes flying FBW aircraft, three in A320s and two in A340s, and there has never been any problem with the aircraft post-strike.

deptrai
29th Jun 2013, 17:20
Everyone has his/her own favorite "weird" thread with seemingly unnecessary questions (arguably they're still ok though).

If I were to choose, this one gets my vote. Many excellent and very detailed answers here, but I'll add to the noise, because this is a simple question which doesn't need complicated answers. All the computers... it's definitely not about conspiracy theories or computers vs. men taking over the world. Why do we need so many doors on aircraft? Isn't one pilot enough? Why all the cabin attendants? Multiple hydraulic systems? Engines? Generators? Sensors? Instruments? Radios? And do we really need fuel for alternate airports?

Redundancy, redundancy, redundancy. The more, the safer. It's that simple. Without redundancy, the evil computerized Qantas A380 would have fallen out of the sky because of an uncontained engine failure. Despite heavy damage and multiple failures, it didn't. It was built like a tank, metaphorically, and kept flying in a safe and controllable way :)

Mr Optimistic
30th Jun 2013, 17:22
I would imagine that safety analysis of real-time operating systems is much simpler if the hardware and software are cleanly partitioned, so that the functional architecture maps onto the physical architecture.

deptrai
30th Jun 2013, 17:49
btw, from what I see, the A320 has 2 ELACs (elevator and aileron computers) and 3 SECs (spoiler and elevator computers); the ELACs were produced by Thomson-CSF based on a 68010 CPU and the SECs were made by SFENA/Aerospatiale based on the 80186, as already noted. So you have 2 sets of completely different hardware (CPU and architecture), and also different functional specs (and at the software level you have 4 different software packages: ELAC control channel and monitoring channel, and SEC control channel and monitoring channel). Then there are also the FACs for the A320... all this info is easily available. Imho this level of redundancy is completely appropriate; it's not "overdone" in any way, it's just based on safe and sound principles to avoid common-mode failures, conservatively engineered, and adapted to the capacity of the hardware.
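
The control/monitor channel split can be sketched roughly like this (invented names, formulas and tolerance; the real channels run dissimilar code and compare before any order reaches an actuator):

```c
#include <math.h>
#include <stdbool.h>

/* COM and MON channels compute the same surface order from the same inputs,
 * but are written separately; the order is only released while they agree. */
static float com_compute_order(float stick, float airspeed)
{
    return stick * (1.0f - airspeed / 1000.0f);   /* control channel */
}

static float mon_compute_order(float stick, float airspeed)
{
    return stick - stick * airspeed * 0.001f;     /* same spec, written differently */
}

bool release_order(float stick, float airspeed, float *order_out)
{
    float com = com_compute_order(stick, airspeed);
    float mon = mon_compute_order(stick, airspeed);

    if (fabsf(com - mon) > 0.5f) {                /* tolerance is illustrative */
        /* Disagreement: this computer passivates itself and the next one
         * in the priority chain takes over the surface. */
        return false;
    }
    *order_out = com;
    return true;
}
```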

If you look at the A380, a much more modern design, possibly there's been a minimal increase in functional density (I counted 51 control surfaces vs X for the A320?), but it still needs 3 Primary Flight Control and Guidance Computers, 3 Secondary Flight Control Computers, and 2 Slats & Flaps Control Computers.

I can't see how reducing the number of computers would be of any big help for anything (quite the contrary).

dash6
30th Jun 2013, 21:45
Original question: why not one computer with a couple of backups? Job done - they are called the other computers. Duh! Fly old jets with strings attached :)

DozyWannabe
2nd Jul 2013, 16:13
@dash6:

To be fair, it's a little more complicated than that... :)

@deptrai:

Unfortunately I do not as yet have the kind of in-depth info on the A380 that I have for the A320, but to my mind the big 'bus has to be a significantly more complex beast than her smaller older sister. Not just in terms of flight surfaces, but things like landing gear operation are far more involved - and the systems built on top of them (e.g. Brake To Vacate) require a much more precise level of monitoring.

That said, the principles behind the design seem to have remained fundamentally the same - they've just evolved over time.