Computers need to know what they are doing


em3ry
16th Aug 2016, 04:55
Computers know how to do things but don't know what they are doing.

If you want the computer to know what is happening, then it seems to me that all it really needs to be able to do is run a simulation in real time with current conditions and look ahead to see what is going to happen, so it can take the appropriate action. I think Google self-driving cars do exactly that. (As do our own brains.)

No, it will not be omniscient. No, it will not be infallible. No, it will not replace the pilot. But it will be much smarter.
Anybody who thinks that smarter computers are a bad thing has rocks in their head.

Some of the recent airplane crashes would have been prevented if the computer had simply been able to look ahead and see what was going to happen.

Google patent: https://www.google.com/patents/US9248834
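
To make the look-ahead idea concrete, here is a minimal illustrative sketch in Python. The toy model, limits and numbers are all invented for this example; it is not taken from the patent or from any real avionics. It only shows the general shape of "simulate ahead of real time under current conditions and alert if trouble is predicted":

# Toy look-ahead monitor: step a simplified aircraft model forward in time
# from the current state and flag any predicted state that breaches a limit.
# Everything here (model, thresholds, numbers) is invented for illustration.
from dataclasses import dataclass

@dataclass
class State:
    altitude_m: float      # height above the runway
    airspeed_ms: float     # airspeed in m/s
    vert_speed_ms: float   # vertical speed in m/s, positive up

def step(state: State, thrust_accel: float, dt: float) -> State:
    """Advance the toy model by dt seconds under a fixed thrust setting."""
    drag_accel = 0.02 * state.airspeed_ms                         # crude drag term
    new_speed = max(0.0, state.airspeed_ms + (thrust_accel - drag_accel) * dt)
    new_vs = state.vert_speed_ms + 0.1 * (new_speed - 70.0) * dt  # sink grows below 70 m/s
    new_alt = max(0.0, state.altitude_m + new_vs * dt)
    return State(new_alt, new_speed, new_vs)

def look_ahead(state: State, thrust_accel: float, horizon_s: float = 20.0, dt: float = 0.5):
    """Simulate ahead of real time; return the first predicted problem, if any."""
    t = 0.0
    while t < horizon_s:
        state = step(state, thrust_accel, dt)
        t += dt
        if state.airspeed_ms < 60.0:                              # toy minimum-speed limit
            return t, "predicted airspeed below safe minimum"
        if state.altitude_m == 0.0 and state.vert_speed_ms < -3.0:
            return t, "predicted hard ground contact"
    return None

# Example: a go-around flown with the thrust left near idle.
warning = look_ahead(State(altitude_m=150.0, airspeed_ms=75.0, vert_speed_ms=-2.0),
                     thrust_accel=0.0)
if warning:
    print(f"ALERT in {warning[0]:.1f} s: {warning[1]}")

In this toy the monitor only warns; whether such a prediction should ever do more than warn is exactly what the rest of the thread argues about.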

Check Airman
16th Aug 2016, 06:20
What would a computer have done on the IGS approach to Kai Tak?

What would a computer do when the spacing on final gets tight?

What would a computer do at 200ft if it gets a report of possible windshear?

What would a computer do if low on fuel and an aircraft crossed the hold short line, but was not actually on the runway surface?

There are lots of examples in day-to-day operations where the default computer response, or airline SOP would be the less safe course of action. You can't program for everything. What you're talking about is artificial intelligence. We're quite a bit away from that in an airplane. A car can pull over to the side if something doesn't compute. We don't have that luxury.

em3ry
16th Aug 2016, 06:31
All of those are perfect examples of why the computer needs to be able to run a simulation in real time with current conditions and look ahead to see what is going to happen so it can take the appropriate action.

And if it can't do anything else, then it can at the very least alert the pilot.

Do you really think the computer can't figure out that if it only has enough gas for one last attempt at landing, it must land no matter what? I should think that would be very easy for it to figure out.

em3ry
16th Aug 2016, 06:55
There are lots of examples in day-to-day operations where the default computer response, or airline SOP would be the less safe course of action. You can't program for everything.

Yes, that's been the cause of several recent accidents, and that is exactly why we need computers that are able to run a simulation in real time with current conditions and look ahead to see what is going to happen, so they can take the appropriate action.

em3ry
16th Aug 2016, 07:41
The key is for the computer to be able to run a simulation and see not just what is happening but also what is about to happen, and what actions can be taken to steer events in a desirable direction.

https://en.wikipedia.org/wiki/Tensor_processing_unit

Intruder
16th Aug 2016, 08:53
Read what the patent is all about: Detecting motion and assessing possible outcomes based on likely scenarios. The computer can only choose from scenarios that are programmed into it.

There are several examples of such computer behavior in airplanes today: TCAS, EGPWS, Predictive WindShear. In all of these, however, the final step - actually taking the corrective action - is left to the pilot. Adding an interface to the autopilot and autothrottles would be a straightforward step IF the regulators wanted to place that much trust in the computer algorithms.

Goldenrivett
16th Aug 2016, 09:12
The computer can only choose from scenarios that are programmed into it.
Very true.
See: https://aviation-safety.net/database/record.php?id=20010207-0
edit.
"The cause of the accident was the activation of the angle of attack protection system which, under a particular combination of vertical gusts and windshear and the simultaneous actions of both crew members on the sidesticks, not considered in the design, prevented the aeroplane from pitching up and flaring during the landing."

Airbus FBW Normal Law computers don't allow the pilot to override them.
(B777 & B787 FBW computers do permit the pilot to override them.)

em3ry
16th Aug 2016, 09:12
There are several examples of such computer behavior in airplanes today: TCAS, EGPWS, Predictive WindShear.
Great. Now extend that concept. Look further into the future.
Compute what the probable state of the airplane will be every 100 meters down the length of the runway.
Determine if any of those states are problematic.
Determine what actions need to be taken to avoid any difficult situation.

The human pilot will always be better than the computer at certain tasks.
But the computer will always be better than the human pilot at certain other tasks
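
A rough illustration of the "state every 100 metres down the runway" idea, as a toy Python check. The deceleration figure, speeds and runway length are invented; it simply predicts groundspeed at each 100 m interval under an assumed deceleration and flags any point where the remaining runway is no longer enough to stop:

# Sketch of the "state every 100 m down the runway" idea: predict groundspeed at
# successive runway positions under an assumed deceleration and flag any point
# where the runway remaining is no longer enough to stop. All figures invented.
def runway_check(groundspeed_ms: float, runway_remaining_m: float,
                 decel_ms2: float = 2.0, interval_m: float = 100.0):
    """Return (distance gone, predicted speed, overrun risk?) every interval_m metres."""
    results = []
    v, gone = groundspeed_ms, 0.0
    while v > 0.0 and gone < runway_remaining_m:
        gone += interval_m
        v = max(0.0, v * v - 2.0 * decel_ms2 * interval_m) ** 0.5   # v^2 = u^2 - 2as
        stop_dist = v * v / (2.0 * decel_ms2)              # distance still needed to stop
        results.append((gone, v, stop_dist > runway_remaining_m - gone))
    return results

# Example: touching down fast with only 1400 m of runway remaining.
for gone, speed, problem in runway_check(groundspeed_ms=80.0, runway_remaining_m=1400.0):
    print(f"{gone:5.0f} m: {speed:5.1f} m/s  {'OVERRUN RISK' if problem else 'ok'}")

The weak link, as later posts point out, is the assumed deceleration: on a contaminated runway the real figure may be far worse than the one the prediction used.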

Check Airman
16th Aug 2016, 09:25
The human pilot will always be better than the computer at certain tasks.
But the computer will always be better than the human pilot at certain other tasks.

Hit the nail on the head. That's why we build computers with only so much authority now, and the ability to turn them off if necessary. With TCAS, EGPWS etc, they advise the pilot, and it's left to him to take final action.

Ever had the EGPWS go off in cruise at FL350? How about a defective radio altimeter triggering a configuration warning on approach? What would your computer do?

Check Airman
16th Aug 2016, 09:31
Computers, as we know them today, are great at doing repetitive tasks and monitoring. Decision-making, not so much.

em3ry
16th Aug 2016, 09:49
Not to beat a dead horse, but that's exactly why they need to know what they are doing, and that's why they need to be able to run a simulation in real time with current conditions and look ahead to see what is going to happen so they can take the appropriate action.

Uplinker
16th Aug 2016, 10:05
Neither approach - human pilot nor computer pilot - will ever be 100% reliable. However, humans can think and react to situations that the computer programmers never thought of or allowed for.

I think the question we need to ask is: why do you want computers to fly our aircraft? Well, humans cause crashes, you might reply. Sometimes, yes (but so do badly programmed computers, or computers that have lost their power supply when the fuse/CB trips). The real problem is that we pilots:

Are working longer and longer duties.
Are receiving less and less basic handling instruction and practice.
Are working earlier or later in the day/night.
Are working in high stress phases of flight during the WOCL.*
Are flying in busier and busier airspace with fewer and fewer ATCOs.
Are more stressed because of the downward pressure on salaries and the effects that has on our family life.
Are seeing worse terms and conditions year on year.

Why is all this happening? To save money. Oh right - so are we all getting better and better salaries, year on year then? No, quite the reverse. Meanwhile somebody somewhere is making big profits out of us, but it is not us, and pilots and passengers alike are suffering because of it.

I love technology, but I am first and foremost a pilot. I like having computers to help me fly and manage the flight (FMGC, automatic cabin pressurisation, fuel trimming, efficiency predictions, Autolands etc) but I don't think it would be safe or sensible to replace pilots with computers.

Give us better training, more handling practice, and remove the unnecessary daily stressors in modern flying, even if that means adding £10 to every ticket.

* Window of circadian low: 0200-0559 hours local time.

em3ry
16th Aug 2016, 10:10
I am not suggesting that the computer should replace the pilot. I am suggesting that the computer should be smarter. Maybe replace the first officer. Then again maybe not.

Tourist
16th Aug 2016, 10:15
That's why we build computers with only so much authority now, and the ability to turn them off if necessary. With TCAS, EGPWS etc, they advise the pilot, and it's left to him to take final action.



Nope.

EASA certifies new "Autopilot/Flight Director" TCAS mode for A380 | Airbus Press release (http://www.airbus.com/presscentre/pressreleases/press-release-detail/detail/easa-certifies-new-autopilotflight-director-tcas-mode-for-a380/)


Ever had the EGPWS go off in cruise at FL350?

10 If height >= 20000ft then ignore EGPWS.

Problem solved.

Next?

Check Airman
16th Aug 2016, 10:48
1. I'm aware of that option. It's imperfect. My company management explained why they decided against the option after a company aircraft was involved in an incident: having the autopilot automatically fly the RA would have caused a collision. In any event, the red button is still there.

2. How do you account for mountainous terrain above 20,000ft, incorrect altimeter setting or a defective altimeter?

Uplinker
16th Aug 2016, 11:16
I am not suggesting that the computer should replace the pilot. I am suggesting that the computer should be smarter. Maybe replace the first officer. Then again maybe not.

You have written about "the pilot". This seems to be a very common misconception. Are you aware that there are two fully qualified pilots on a commercial passenger airliner? One is designated the Captain, the other is designated the First Officer. Both pilots hold the same flying licence and have therefore passed all the same flying tests. (The Captain has passed more tests than the F/O to become a Captain, but flying-wise they are the same. It's not one pilot and a secretary.)

Why do you want to only have one pilot? There can be only one reason - to save money. Why do you want to save money? Will you receive the extra profits or get cheaper air travel? Is that sensible?

em3ry
16th Aug 2016, 11:25
Yes there are two pilots but only one is flying the plane at any given time

Ian W
16th Aug 2016, 12:23
Computers know how to do things but don't know what they are doing.

If you want the computer to know what is happening then it seems to me that all it really needs to be able to do is to run a simulation and look ahead to see what is going to happen. I think Google self-driving cars do exactly that. (As do our own brains)

Some of the recent airplane crashes would have been prevented if the computer had simply been able to look ahead and see what was going to happen

Google patent: https://www.google.com/patents/US9248834

But computers in aircraft already have this capability, which you use all the time. The FMC takes the uploaded list of waypoints, speed schedules, weight information etc., and creates a 'trajectory' description. The FMS then provides guidance to follow that trajectory: slow down/speed up, climb or descend to stay on the aircraft trajectory and meet the required time of arrival.

In research, aircraft have been able to forecast their touchdown time at destination to within 5 seconds at the moment weight came off wheels, for a 2-hour flight, 'knowing' their trajectory for the entire flight.
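
For readers outside the industry, the flavour of that FMC trajectory prediction can be shown with a toy Python sketch. The waypoints, distances and speeds are invented, and a real FMC integrates full performance, wind and weight models rather than simple leg arithmetic:

# Toy 4D-trajectory flavour: accumulate leg times over planned groundspeeds
# to get an ETA at each waypoint, ending with a predicted touchdown time.
from datetime import datetime, timedelta

# (waypoint, distance from previous point in NM, planned groundspeed for that leg in kt)
legs = [
    ("TOC",    120, 450),
    ("MIDPT",  310, 470),
    ("TOD",     90, 440),
    ("FAF",     25, 210),
    ("RWY",      5, 140),
]

def predict_times(start_time: datetime, legs):
    """Return a list of (waypoint, ETA) by accumulating leg times."""
    eta, schedule = start_time, []
    for name, dist_nm, gs_kt in legs:
        eta += timedelta(hours=dist_nm / gs_kt)
        schedule.append((name, eta))
    return schedule

for wpt, eta in predict_times(datetime(2016, 8, 16, 9, 0), legs):
    print(f"{wpt:6s} {eta:%H:%M:%S}")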

The more complex the requirement, the more extended the development of the computer system required and the more exhaustive the testing. The more safety-related the computer function (say, dealing with common and less common failure modes), the more certification testing is required. All of this is expensive.

So it has been easier for the system and software development project to assume that, as there is always a qualified pilot, each time the computation becomes difficult for whatever reason it can sound an alarm and say "you have control".
Now it is being found that 'qualified pilot' may on occasion be an overstatement for even many 'routine' contingencies. At the same time, computers are getting significantly more powerful and, for that matter, analyst/programmers much better.

As with all things aviation whether we like it or not the arbiter is cost. As soon as wetware pilots are more expensive than the automation that appears just as capable, they will be replaced by automation.

barit1
16th Aug 2016, 12:25
Just wonderful, em3ry.

Simulation? The rules, algorithms, and such will be written by humans. The computer will operate on signals provided by remote instrumentation. The output will go to a screen that we hope has not gone blue. You want redundancy? Go for 2 or 3 independent parallel systems; they can then out-vote the human pilot.

This is a metaphor for central planning, is it not?

WeeJeem
16th Aug 2016, 12:26
Computers, as we know them today are great at doing repetitive tasks and monitoring. Decision-making, no so much.

With all due respect, your "today" seems to be some two or more decades out of date.

These days, fr'instance, some 75% or so of trading of financial instruments on the major financial exchanges (USA, UK, Japan, Germany, etc) is done by "computers" (automated trading systems) and their "decision-making". And, interestingly enough, no calls from anyone to get rid of the computers and bring back the jackets, the shouting and the paper pads.

One aviation example that I've worked on is lost-contact recovery in ASW, where the system ("computer") does indeed project forward in time to see what might happen and then make a "decision" based on future possible outcomes.

Now, there is the occasional operator whose skill and intuition is "better" at times, but the system will - for the greatest part - outperform the human. And when it's not just one system, but cooperative systems with serious informational flows, the humans just cannot compete.
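
At its simplest, the lost-contact projection described above amounts to scattering hypotheses about the contact's course and speed and pushing them forward from the last known position. A toy Python version follows; every figure is invented and it bears no relation to any real ASW system:

# Toy lost-contact projection: sample random course/speed hypotheses, project them
# forward from the last known position, and use the spread to bound the search area.
import math
import random

def project_contact(last_x_m, last_y_m, minutes_since_contact,
                    speed_range_kts=(2.0, 8.0), n_hypotheses=5000, seed=1):
    """Return projected (x, y) positions in metres for random course/speed guesses."""
    rng = random.Random(seed)
    t_s = minutes_since_contact * 60.0
    points = []
    for _ in range(n_hypotheses):
        course = rng.uniform(0.0, 2.0 * math.pi)
        speed_ms = rng.uniform(*speed_range_kts) * 0.5144        # knots -> m/s
        points.append((last_x_m + speed_ms * t_s * math.sin(course),
                       last_y_m + speed_ms * t_s * math.cos(course)))
    return points

points = project_contact(0.0, 0.0, minutes_since_contact=15.0)
xs, ys = [p[0] for p in points], [p[1] for p in points]
print(f"search roughly {min(xs):.0f}..{max(xs):.0f} m east-west, "
      f"{min(ys):.0f}..{max(ys):.0f} m north-south of the last fix")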

em3ry
16th Aug 2016, 12:35
My point is that if the computer can run a simulation then the computer can "know what it is doing".
Anybody that thinks that is a bad thing has rocks in their head.
It is not knowing what you're doing that gets people killed

Goldenrivett
16th Aug 2016, 13:07
Anybody that thinks that is a bad thing has rocks in their head.
Are you saying that those pilots "had rocks in their heads" because the Airbus FBW would not let them flare before impact with the runway?

From post #7 "The cause of the accident was the activation of the angle of attack protection system which, under a particular combination of vertical gusts and windshear and the simultaneous actions of both crew members on the sidesticks, not considered in the design, prevented the aeroplane from pitching up and flaring during the landing."

em3ry
16th Aug 2016, 13:10
That was because the computer didn't know what it was doing.
If it had run a simulation then it would have known what it was doing.

That's the whole point of this thread,
and I just restated that point for you in the very post you just quoted.

Goldenrivett
16th Aug 2016, 13:30
if it had run a simulation then it would have known what it was doing
Don't you think that Airbus would have run thousands of simulations for every conceivable combination they could have imagined?
The difference is that in real life there can be a combination of things which result in something we never expected.

It's great to use computers to perform routine tasks - but please still give us the authority to override.

em3ry
16th Aug 2016, 13:36
That's why it needs to run a simulation in real time with current conditions,
so it can know what is happening and take appropriate action.

Of course the pilot should override.
That's what he is there for.

andytug
16th Aug 2016, 13:40
This is the issue - you can program a computer for every conceivable problem, and combination thereof. Then a problem occurs that no-one conceived of and the computer has no answer, what then? You need a human with the experience to solve the problem there and then, and the computer to fail safe and help out when required.
Anyone who thinks a computer can be made to cope with every possible situation is deluded, and when failure means likely death you can't just wait and "turn it off and back on again".
Maybe AI will eventually have the answer, but it's a long, long way off yet.

em3ry
16th Aug 2016, 13:42
exactly what I just answered in the post before yours

em3ry
16th Aug 2016, 13:46
AI is not that far off.
All it needs to do is run a simulation in real time to see what is happening.

andytug
16th Aug 2016, 13:51
AI based on previous experience is here, but AI that can deduce a solution to a new, unknown problem from other experiences (as a human can) is still in the early stages yet. It will come one day though.

Goldenrivett
16th Aug 2016, 14:53
Hi em3ry,
of course the pilot should override that's what he is there for
I'm glad we agree on that.
Pity Airbus don't permit pilots to override in Normal Law.

Tourist
16th Aug 2016, 15:52
em3ry

I absolutely agree with you.

You are never going to persuade the luddites though.

Even when they are flying around, they will still deny their existence....

Tourist
16th Aug 2016, 15:58
You need a human with the experience to solve the problem there and then, and the computer to fail safe and help out when required.

Who is this human with experience you talk about?

A vanishingly small % of airline pilots have any experience of any emergencies whatsoever in their entire career.


I had one "emergency", and I use the term loosely, in my civil airline time. It ended up essentially autopilot-less and autothrust-less, with all the captain's-side instruments failed.

It was the most serious thing that the captain had seen in his 20 yrs.

This is not a bad thing, this is a testament to the exceptional engineering that goes into these aircraft, but it certainly does not translate into experience.

Derfred
16th Aug 2016, 20:47
em3ry,

You are so busy repeating yourself I don't think you are actually doing any thinking.

"The cause of the accident was the activation of the angle of attack protection system which, under a particular combination of vertical gusts and windshear and the simultaneous actions of both crew members on the sidesticks, not considered in the design, prevented the aeroplane from pitching up and flaring during the landing."

This has been quoted at you twice, and you don't seem to have comprehended it.

The point of this quote was that this scenario could NOT have been simulated in advance by the computer because no-one predicted this scenario in advance.

You say:

That's why it needs to run a simulation in real time with current conditions,
so it can know what is happening and take appropriate action.

What are these "current conditions" you speak of and how are they relevant to the quoted scenario? How could the computer have predicted the combination of vertical gust, windshear and simultaneous actions of the crew members?

As for AI being not far off, they've been saying that for decades. What is often not discussed is that there is a difference between "simulated AI" and "real AI". Real AI is real intelligence. That would be handy, but that is a looong way off. However simulated AI is not real intelligence, it's just a pre-programmed dumb machine. That's what the Google Car will be. If you throw something at it that the programmers have not thought of, game over.

In principle, however, I think I agree with your general point that predictive systems such as TCAS, EGPWS and Predictive Windshear could (and very likely will) be expanded upon to extend to other predictions and warnings. A dynamic warning system for runway overruns (takeoff and landing) wouldn't be that hard and could save a lot of regular incidents/accidents.

But then again we come back to your "simulate with current conditions" argument. What are the current conditions and how does the aircraft know them? How is it going to know the friction coefficient of a contaminated runway? It won't. So your argument might save the day sometimes, but not always.

Where to draw the line in the human/computer interface will continue to be controversial. The two largest aircraft manufacturers don't agree, and the recent trend from both manufacturers is that the pilots have been pushed a little too far out of the loop - they are now trying to bring them back in.

Three recent high-profile crashes (Asiana, Air Asia and Air France) have all been put down to lack of basic flying skills. You either take the pilots out altogether or you let them fly the aircraft. You can't just have them sit there doing nothing and be expected to save the day when the computer gives up.

Oakape
16th Aug 2016, 23:04
And that last paragraph sums it up nicely!

BleedingAir
17th Aug 2016, 03:33
Em3ry,

I'm sure this all sounds fantastic in your head (more so every time you repeat it), but it's science fiction.

A computer system that can simulate, predict and counter all possible flight scenarios is simply not going to happen in our lifetimes. Even if current aircraft systems were literally 100% reliable, you need humans to make the decisions the machine cannot, of which there are thousands of possibilities.

And your comment about perhaps "removing the First Officer" indicates you have little to no idea about the modern flight deck and how it actually operates, like most of the general public who think there's a "pilot" (who always flies the plane) and a "co-pilot" (who helps out in the hope he too can become a "pilot" one day).

em3ry
17th Aug 2016, 07:30
What is your point? Because the computer isn't omniscient, it can never fly the plane? Neither is the human pilot!

Don't you think the computer in the recent crash could have determined that if the pilot is attempting a go-around and has forgotten to give it thrust, it will end badly?

All too often the crash turns out to be due to some mundane pilot or computer error that would have been prevented had the computer simply been able to look ahead into the future.

If humans make mistakes like that, then by your reasoning humans should never even be allowed near an airplane.

em3ry
17th Aug 2016, 07:35
As I said earlier

The human pilot will always be better than the computer at certain tasks.
But the computer will always be better than the human pilot at certain other tasks

And as I said earlier

Of course the pilot should override.
That's what he is there for.

em3ry
17th Aug 2016, 07:42
And all of this is relevant to my original post. All I'm suggesting is a way of making the computers smarter.
Are you people seriously suggesting that smarter computers would be a bad thing?

em3ry
17th Aug 2016, 08:00
Running a simulation is science fiction? Really? I didn't know that!

em3ry
17th Aug 2016, 08:07
If I were guessing, I would guess that you people already have this technology and the government and the airlines have made you swear an oath that you will keep it secret.

Check Airman
17th Aug 2016, 08:41
If I were guessing I would guess that you people already have this technology and the government and the airlines have made you swear an oath that you will keep it secret

We're not allowed to discuss that sort of sensitive security information on a public message board.

BleedingAir
17th Aug 2016, 09:08
Yes, it would be fantastic if aircraft could "run a simulation" and foresee all possible outcomes of everything that could happen to them in the next minute, and automatically react accordingly.

It would also be fantastic if airliners were doing Mach 3 everywhere at half the current fuel burn. Why don't we just do that now?

em3ry
17th Aug 2016, 09:35
I didn't say all possible outcomes of everything that could happen to them.

Anything can be ridiculed if you exaggerate it to the point of absurdity.

How many possibilities are there in a chess game?
Yet computers play chess just fine.

Slatye
17th Aug 2016, 12:09
Chess is a vastly simpler proposition though. The moves the player can make are limited and discrete. The moves the opponent can make are also limited and discrete. At the start of a chess game, for example, there are twenty available moves for White (eight pawns can each move one or two spaces, two knights can each go to two positions). For each of these moves the opponent can also make twenty moves, so to look ahead one turn requires 400 separate evaluations. The second turn is a lot more complex; depending on which pawns moved you might be able to shift any of the other pieces. Depending on how the opponent moved there might even be a capture. Still, we're talking about a few billion moves to be evaluated after a few moves, and with a bit of alpha-beta pruning (practical when "score" is reasonably well defined, for example by assigning values to pieces) this is quite manageable on a modern PC.
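
For readers unfamiliar with the chess side of the argument, it is exactly this "limited and discrete" property that makes minimax search with alpha-beta pruning workable. A bare-bones Python illustration over a tiny hand-made game tree (a real engine would generate chess moves instead of reading a fixed tree):

# Bare-bones minimax with alpha-beta pruning over an explicit toy game tree.
# The tree and leaf scores are hand-made for illustration only.
TREE = {                      # node -> child nodes (empty list = leaf)
    "root": ["a", "b"],
    "a": ["a1", "a2"], "b": ["b1", "b2"],
    "a1": [], "a2": [], "b1": [], "b2": [],
}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}   # leaf values for the maximiser

def alphabeta(node, maximising, alpha=float("-inf"), beta=float("inf")):
    children = TREE[node]
    if not children:
        return SCORES[node]
    best = float("-inf") if maximising else float("inf")
    for child in children:
        score = alphabeta(child, not maximising, alpha, beta)
        if maximising:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:     # the opponent would never allow this line, so prune it
            break
    return best

print(alphabeta("root", maximising=True))   # prints 3; leaf b2 is never even evaluated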

Running a simulation on a transport aircraft using fixed control inputs and in perfectly known conditions is practical. Obviously you'd have to make some sacrifices in aerodynamic modelling to have it done by an aviation-grade computer (not exactly the latest and greatest desktop hardware) much, much faster than realtime (there's not much point if at any time you can only look a few seconds ahead) but I can see that being possible.

Then you start throwing in variables. What happens if we apply 5% more thrust? What happens if we apply 10% more? Or 15%? Or maybe 14.3726% on the left engine but 1.3978% on the right since that might be a practical way to deal with a rudder malfunction? Or should we apply 5% left aileron? Or 5% right rudder? Or drop the gear?

What if there's a bit of wind shear? Maybe some wake turbulence? Maybe the C-17 that's about to taxi onto our runway will realise what's going on, apply reverse thrust, and get out of the way. Or maybe he'll stop, so that just staying a bit to one side provides sufficient safety margin. Or maybe he'll keep on coming, in which case absolutely any response that involves continuing this landing is going to cause a disaster. Of course, for the first option the computer has to be able to recognise that (a) the plane is a C-17, and (b) C-17s can taxi backwards on reverse thrust. And if it's listening on the radio it might be able to hear ATC telling the C-17 to stop, and the C-17 pilot confirming (or not).

So now instead of running one simulation to see what will happen, the computer is running thousands or millions of simulations just for the next millisecond. And for each of those simulations, thousands or millions for the next millisecond (so now we're looking at millions to trillions). By the time you're looking maybe 30 seconds ahead (which is probably about the minimum you could expect) you're asking for a *lot* of simulations - and of course because these simulations are covering the dynamics of a large aircraft each one is pretty complex!
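
Some back-of-envelope arithmetic, with figures invented purely for illustration, puts a rough size on that explosion:

# Rough branching arithmetic for the explosion described above; figures invented.
import math

options_per_step = 1000      # candidate control combinations tried each step
step_s = 0.001               # one simulated millisecond per step, as in the post
horizon_s = 30.0             # look-ahead window

steps = int(horizon_s / step_s)                     # 30,000 steps
digits = steps * math.log10(options_per_step)       # log10 of the exhaustive tree size
print(f"{steps} steps -> roughly 10^{digits:.0f} candidate trajectories")
# For comparison, the chess game tree is often quoted at around 10^120.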

Plus, of course, you need big sanity-checks. If someone's gone and parked a plane on the runway you were about to land on, then the best strategy for the next 30 seconds may well be to cut power, lower flaps, lower gear, and raise the nose. All of these will reduce the speed, and therefore prevent a crash within the 30-second simulation time. What the simulation does not know (because it only looks 30 seconds ahead) is that at 31 seconds the plane will stall and crash, and the burning wreckage will slide into the fully-fuelled plane sitting on the runway. You can force the computer to only accept "nice" solutions (eg. where the plane finishes well above stall speed, well below VNE, within 30 degrees of level in both axes, etc) but then that may cut out beneficial solutions that a human pilot could find (eg. coming very close to a stall in order to pull up fast enough). The other effect of the 30-second look-ahead time is that the computer won't even start to think about the guy sitting on the runway until it's 30 seconds in the future; a human might have seen that several minutes ago.



The first part of this plan, controlling the plane itself, is essentially heading for a field known as "optimal control" - how do you perform control in the absolute best way to achieve the goal. One way, as you've described, is to brute-force every possible action and pick the best. The downside, of course, is that this tends to take many times the age of the universe on any conventional computer that can be constructed within the universe. The fact that there's a field of study rather than a single "optimal control equation" indicates that nobody's actually solved optimal control for any but the most trivial cases. We do have numerical methods that can produce a reasonable approximation for more complex systems, but a plane is a very complex system which will need a very large amount of careful analysis.

Add to that the outside factors. What are other planes doing? What is the air around the plane doing? Can ATC deal with your crazy plan? What happens if your plane suffers a mechanical failure and so the best plan you had no longer works?

If you can assign numbers to the probability and impact of every possible event, and then spend a long time crunching numbers, you can undoubtedly come up with a single best course of action. By the time this has occurred it's likely that the human race will have become extinct - and that's a very big "if" on assigning the numbers.


Or you can just go with the existing solution, where if the problem is one that the computer knows about then it just implements a solution, and otherwise it hands over to a human. The computer already knows about most of the reasonably common problems, and handles them so well that nobody even notices. Humans are good at spotting future problems well in advance by collating extra information; if there's someone entering the runway that you're about to land on, you'll be recalling what they and ATC have been saying to determine whether it's a risk or not.

Of course, there's still a class of problems that can't be seen coming but do require an instant response, like the 737 rudder PCU malfunctions. We don't have a good solution to this, but I'm not sure that a simulation would help at all. After all, how can you simulate a malfunction when you don't know what the malfunction is?

em3ry
17th Aug 2016, 12:58
Flying a plane should be simpler than playing chess. You know the exact route that you want to take. You just don't know the exact moves that will take you along that exact route

Derfred
17th Aug 2016, 14:32
Em3ry,

Your posts make it clear that you have no knowledge of piloting an aircraft. Some of us who are pilots also know a little about computers. I have studied university level AI. You obviously haven't. You've read an article about the Google car and have come onto a professional pilot's forum telling everyone how to fix the industry. You are way out of your depth.

One speculated cause of the latest crash was a man-machine interface issue. I'm not aware of a similar accident in recent times, so it's a rare one.

As I have said, this will continue to be an issue, but rushing automation will not be the solution. In fact, it is very likely the cause.

Yes, more advanced warning systems and improved man-machine philosophy and training will continue to improve safety, but in aviation it is a slow process because a new technology must be well proven and certified before it can be implemented (a bit like medical science). If it takes an automotive company 5 years to develop and certify a lane-warning system on a car, you can bet it would take 15 years to do something similar on an aircraft.

The state of the industry at present (and for the foreseeable future) is that it is very difficult for a human to hand-fly an aircraft all day safely. It is also impossible for a computer to do the same. So the best solution is a combination of the two. It is the man-machine interface that we are working on, and it's not easy. Financial pressure on training, pay, conditions, fatigue etc also influences the human element, often leading to suggestions that increased automation is the answer. Widely held recent opinion is that it is not.

Volume
17th Aug 2016, 15:04
Computers, as we know them today are great at doing repetitive tasks and monitoring. Decision-making, no so much.
An interesting read...
IBM's Watson is better at diagnosing cancer than human doctors (http://www.wired.co.uk/article/ibm-watson-medical-doctor)
I would call this (especially when it comes to deciding on the type of treatment) some sort of decision making.
However, this decision is purely statistical and based on comparison with millions of datasets. So our first step is that we need to install very accurate flight data recorders, fly for some centuries and, after building a database of millions of accidents, the computer will be perfect at predicting an impending crash from the current parameters.
Maybe it will even be able to decide upon an action which in the past had saved the day. However, we need to freeze airports, airspace and aircraft design for some time to collect relevant data...

em3ry
17th Aug 2016, 15:18
If it takes an automotive company 5 years to develop and certify a lane-warning system on a car, you can bet it would take 15 years to do something similar on an aircraft.

That's why they should get to work on it right away

BBK
17th Aug 2016, 16:34
em3ry

If you want to advocate that a computer system can do the job of a pilot then you need to understand what the job entails. You appear not to. Or maybe you do and you're a troll and trying to wind us up.

So much of what flight crews do involves a balanced judgement rather than black and white decisions. Disruptive passengers, in flight medical emergencies, weather avoidance etc.
Of course this all goes on behind a locked cockpit door, unseen by the passengers, but that's no excuse for not having the imagination to think about such matters if you want to argue that computers can do a better job.

I read an article recently by a pilot on this subject and he made a very good comparison with the medical profession in that technology has improved medical care but we never think that doctors will be replaced by robots.

BBK

em3ry
17th Aug 2016, 17:20
all I said was computers can be made smarter and that doing so would have prevented some of the recent accidents

BBK
17th Aug 2016, 17:42
em3ry

I was going to reply but Derfred's posting above pretty much sums up my views. Computers, in a well designed system, can enhance flight safety but not, IMHO, in the role of decision making.

Just my two penn'orth.

BBK

em3ry
17th Aug 2016, 17:46
which is why you need a pilot

em3ry
17th Aug 2016, 17:50
As I said earlier:

The human pilot will always be better than the computer at certain tasks.
But the computer will always be better than the human pilot at certain other tasks

BBK
17th Aug 2016, 17:51
Pilots. Plural! Over and out.

tdracer
17th Aug 2016, 18:25
The FAA is on record as prohibiting the use of anything resembling AI for aircraft or aircraft engine control. That's right, a flat prohibition. The reason is that AI can be somewhat unpredictable - and that's the very last thing you want to integrate into an aircraft system.
Personal example - my BMW has a very basic form of AI that 'learns' my driving style and adapts the engine and transmission controls accordingly. A while back, they needed to re-flash the BMW computer to correct a malfunction. When I picked the car up, it had turned into a gutless wonder - literally unsafe - I nearly got into an accident on the way home when it failed to accelerate when I expected it to - so I took it back the next day. Turns out the AI values had gotten corrupted when they did the re-flash. They reset the AI values and the car was fine. A co-worker had a similar experience with his car (non-BMW).
It's going to be a long time before AI has advanced to the level where we're going to be using it to control aircraft.

Check Airman
17th Aug 2016, 18:28
Flying a plane should be simpler than playing chess. You know the exact route that you want to take. You just don't know the exact moves that will take you along that exact route

And with that, I'm leaving this discussion. Clearly a troll.

em3ry
17th Aug 2016, 18:40
Is the FAA on record as forbidding running a simulation in real time with current events to see what is just about to happen, if only to warn the pilot?

andytug
17th Aug 2016, 19:30
How do you ensure that your "simultaneous simulation" is capable of predicting the future, as that's effectively what you need it to do?

Intruder
17th Aug 2016, 20:09
And all of this is relevant to my original post. All I'm suggesting is a way of making the computers smarter.
Are you people seriously suggesting that smarter computers would be a bad thing?
. . .

Is the FAA on record as forbidding running a simulation in real time with current events to see what is just about to happen if only to warn the pilot
You are not suggesting anything new. Computers are constantly improved to make them "smarter".

As has been pointed out before, even computers in airplanes have been improved to make them smarter. They have even been programmed with the capability to assess the current situation, predict possible outcomes, AND warn the pilot - all with the FAA's blessing.

So, unless you have a SPECIFIC proposal to perform a SPECIFIC prediction to generate a SPECIFIC set of responses or warnings, you are merely repeating a baseless proposition ad nauseam. So far, it is apparent you do not have that capability...

semmern
17th Aug 2016, 23:26
10 If height >= 20000ft then ignore EGPWS.

Problem solved.

Next?

Ah, so, bang into Mt. Denali and the Himalayas, then. Brilliant...

Derfred
18th Aug 2016, 01:06
That's why they should get to work on it right away

Who said they aren't?

Radu Poenaru
18th Aug 2016, 05:52
Ok, sorry but reading that sentence over and over again is killing me inside:

Real time simulation means 1 sec computer simulation = 1 sec real-time.
You are not predicting any future, you are simulating as time goes along : https://en.wikipedia.org/wiki/Real-time_simulation

(... bunch of other rants and censored screaming ... )

Ok, that's better, I can go to sleep now.

em3ry
18th Aug 2016, 10:03
Not anything new? Then why didn't the computer warn the pilot that he had forgotten to give it thrust when he was attempting that go around?

em3ry
18th Aug 2016, 10:06
I guess on the fly would have been better than in real time. Actually I thought they were synonymous. I guess I was wrong.

cattletruck
18th Aug 2016, 10:45
Computers need to know what they are doing.

WRONG!

Pilots need to know what the computer is doing.

Otherwise switch it orf.

Yes, it really is that simple.

Volume
18th Aug 2016, 14:17
all I said was computers can be made smarter and that doing so would have prevented some of the recent accidents

You will never be able to make a computer smart. You can enlarge the database a computer is working with, you can optimize algorithms, you can monitor more parameters, and yes, you can prevent some accidents that way. But by relying more on computers, you will also see additional accidents.
Let the computers do what they can do best, and allow pilots to switch them off when the "assistance" they offer does in fact not help.

If it takes an automotive company 5 years to develop and certify a lane-warning system on a car, you can bet it would take 15 years to do something similar on an aircraft.

In fact it took automotive companies 50 years to develop a car that can do what a Trident was already able to do: get to your destination. Automatically. Even in fog so dense no sensor can see the road markings.
The reliability of some modern car equipment is simply ridiculous compared to some 20-year-old aircraft equipment. Autopilots crashing into turning trucks, satnav systems showing a ferry as a bridge, automatic wipers wiping in bright sunshine, lane-warning systems that want you to stay with the white marks in a construction site... It will take them at least another 15 years to reach aircraft reliability standards.

em3ry
18th Aug 2016, 14:48
Let the computers do what they can do best, and allow pilots to switch them off when the "assistance" they offer does in fact not help.
Isn't that exactly what I just said?

em3ry
18th Aug 2016, 14:49
Sure let's just make all the pilots omniscient and infallible and that will solve all the problems. Good luck with that

Ian W
20th Aug 2016, 14:59
Not to beat a dead horse, but that's exactly why they need to know what they are doing, and that's why they need to be able to run a simulation in real time with current conditions and look ahead to see what is going to happen so they can take the appropriate action.
Computers do what their manufacturers designed them to do. That is not a limit on computing; that is a limit on what the manufacturer thought they could sell. There would be immediate pushback from the community here if someone gave the FMC more intelligence and reduced the role of the pilot. Similarly, making the computer do more costs more and may initially be non-economic. Those limitations are rapidly disappearing.

Computers can do multiple simulations and select the optimal one against the system optimization goals. Several years ago I saw just that kind of approach to sequencing traffic into a busy airport with multiple runways and all the WTC rules, with an hour of sequencing modeled in less than a second.

It is commercial pressure and risk that are the limits, not computing power or capability. There are multiple fast-time simulations on the market that effectively 'fly' every aircraft in a center's airspace in accordance with their performance, fly them on routes and procedures, land them at destination and taxi them in, then out for takeoff, while at the same time sequencing and deconflicting for efficient use of the runways, taxiways and the airspace route structure. They will do that simulation in fast time, with several hours' traffic taking seconds on a relatively standard PC. They do that because that is what they were designed to do.
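
In its very simplest form, that kind of fast-time arrival sequencing is just ordering arrivals and enforcing a separation matrix. A toy Python sketch follows; the separation times are invented for illustration (not real wake-turbulence minima), and a real tool would also consider re-ordering, runway allocation and much more:

# Toy arrival sequencer: take aircraft in earliest-landing-time order and push each
# landing time back until the wake-turbulence separation behind the leader is met.
# The separation matrix is invented for illustration.
SEPARATION_S = {             # (leader category, follower category) -> seconds
    ("HEAVY", "HEAVY"): 90, ("HEAVY", "MEDIUM"): 120, ("HEAVY", "LIGHT"): 180,
    ("MEDIUM", "HEAVY"): 60, ("MEDIUM", "MEDIUM"): 90, ("MEDIUM", "LIGHT"): 120,
    ("LIGHT", "HEAVY"): 60, ("LIGHT", "MEDIUM"): 60, ("LIGHT", "LIGHT"): 90,
}

arrivals = [                 # (callsign, category, earliest landing time in seconds)
    ("ABC1", "HEAVY", 0), ("DEF2", "LIGHT", 30), ("GHI3", "MEDIUM", 70),
]

def sequence(arrivals):
    schedule, prev_cat, prev_time = [], None, None
    for callsign, cat, earliest in sorted(arrivals, key=lambda a: a[2]):
        t = earliest
        if prev_cat is not None:
            t = max(t, prev_time + SEPARATION_S[(prev_cat, cat)])
        schedule.append((callsign, t))
        prev_cat, prev_time = cat, t
    return schedule

for callsign, t in sequence(arrivals):
    print(f"{callsign}: lands at t+{t} s")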

The FMCs that go into degraded mode and hand control to the pilot on some events do that because that was what they were designed to do, not because they cannot cope with those events. It is just cheaper and (supposedly) less risky to use the flight crew rather than write handlers for exceptions.

Derfred
21st Aug 2016, 08:40
Em3ry,

The discussion of the man-machine interface on airliners is an interesting one, but is not a new one. There are threads on this topic on this forum dating back forever.

You have added nothing of interest to this topic. You read an article about Google Cars. Fascinating. Do you have anything else interesting to say?

The general etiquette for posting on forums is to provide something of value to the readers, rather than yourself.

Out.

em3ry
21st Aug 2016, 18:03
The topic is about running a simulation on the fly in order to see what is about to happen.

pax britanica
21st Aug 2016, 19:34
Couple of points
An airliner doesn't need a pilot, it needs a crew. OK, one is in overall charge and normally has more experience, but they swap the handling and monitoring roles.

We all know that when computers get confused they just stop - no good having 'Err 404' or 'no internet connection' at 200 ft on finals, is it?

Someone used the financial trading analogy, but that is a cowboy industry, as we have all now learned. If airliners used computers the way the markets do, given the disasters those have caused, we would be seeing tens of thousands of people killed every ten years or so when the computers on airliners ran into the depression/panic-selling modes the markets do.

At the end of the day I don't think even the most ardent low-cost-at-any-price / fly-to-an-airport-100 km-from-your-destination-at-0300 / buy-crap-food-on-board-and-lotto-tickets / no-customer-service-at-all passenger will ever get on a plane with no windows up front and no living, breathing forms behind each one.

However, with automation confusion being a big cause of accidents these days, it would seem training needs some revision. Engine out at V1 was a regular occurrence on pistons and still quite frequent on 707-era jets, but today?????

The Flying Pram
21st Aug 2016, 20:44
You can program a computer for every conceivable problem, and combination thereof. Then a problem occurs that no-one conceived of and the computer has no answer, what then? You need a human with the experience to solve the problem there and then

The Sioux City, Iowa DC-10? As I remember it, McDonnell Douglas never envisaged all three hydraulic systems failing.

And the computer to fail safe and help out when required

I dare say Sully was grateful for that aspect when gliding down to the Hudson.

It goes to show there are good reasons for having both computers AND pilots on an airliner, but I would like the pilot to have the final say.

Ian W
22nd Aug 2016, 00:40
The topic is about running a simulation on the fly in order to see what is about to happen.
Unfortunately, that is not how simulations work.
All simulations can do is simulate the possibilities that the simulation script writer thinks may happen. Types of simulation vary, so some are relatively deterministic and some are stochastic, but the possibilities of what may happen have already been thought of by the simulation designer. This is the same as the basic FMC issue: the software analyst/designer has to think of all possible cases, then decide which to give to the crew, and any unexpected cases (the 'otherwise' cases) are automatically given to the crew.

A simulation can only play the probability game with things that are already expected so by definition it cannot simulate the unexpected. That is why the pilots are still in the cockpit.

At system design time, all the possible variables, all the boundary cases and all the time-related issues are simulated as a means of verifying and validating the system. The results of those tests using simulation-run inputs are then used to correct any shortcomings in the systems being tested. But it is not possible to think of every potential eventuality and every possible mix of unrelated circumstances; there are simply far too many variables. And that is at the _testing_ phase, before the system is even in final development.

There is no way that the airborne systems can run simulations varying every possible input and initialization state in real time. Not only that, but by definition this is being done to deal with the unexpected unknown - and if it is unexpected and unknown, it is not going to be part of the simulation. The other problem is that the FMCs, because of a rather old-fashioned view of computer safety, are required to be mathematically proven correct. So chips, microcode, firmware and software all have to be mathematically modeled and proven correct. This is a somewhat 1980s concept, but it still limits the hardware that can be used by FMCs, to the extent that current-generation multi-core chips with predictive fetch and preemption cannot be used, as there is no mathematical proof that can cope with infinite levels of preemption in the chip operation. In consequence there is more power in some watches than there is in advanced FMCs. In my view this is not the correct way to go, in the same way that the OSI/ISO communications model is no longer seen as the way to go for reliable and safe communications. Nevertheless, the current FMCs are beasts of very little brain and very constrained in what they are allowed to do.

So in summary: simulation can only simulate what is expected, and if it is expected it is not a problem. FMCs do not have the grunt or the safety clearance to do anything other than a very constrained set of processing, due to certification rules and limitations that are around 40 years old. (I am sure someone here will correct me :) )

Google and others have a real advantage with driverless cars as they do not (yet) have the dead weight of bureaucratic certification rules rooted in the past. Even there, they are unlikely to be running simulations of what might happen.

em3ry
22nd Aug 2016, 00:53
The whole point of a simulation is to find things that you wouldn't expect.

As for there being too many possibilities: well, as I said earlier, how many possibilities are there in a chess game? Yet computers play chess just fine. And unlike a chess game, you know the exact ideal route that you would like to take.

Ian W
22nd Aug 2016, 01:05
Imagine a chess game where there was suddenly a new type of piece on the board with different moves, but you don't know anything about it. Now simulate its effects.
That is what you are asking for.
Chess is extremely easy to simulate: every piece has a constrained method of movement and only one moves at a time, so while there may be a huge number of potential variations and it is a large problem, they are all KNOWN. Making the right decision is then a case of comparing many simple and forecastable moves. That is nothing like the difficulty of simulating the complexities of an aircraft in flight in real time.

Your idea will not work until you find a way of adding one or more things into a simulation that you don't know about yet - not at all - not in any way.

BleedingAir
22nd Aug 2016, 02:24
I'm thinking we've got a subtle troll here, I'd let it go.

Tourist
22nd Aug 2016, 03:27
The Sioux City, Iowa DC-10? As I remember it, McDonnell Douglas never envisaged all three hydraulic systems failing.


You use Sioux City as an example of where you need a human, however the reality is that it is actually an area where computers excel.

This from Scientific American


July 26, 2004

Crippled but Not Crashed

Neural networks can help pilots land damaged planes

By Mike Corder

On July 19, 1989, as United Airlines flight 232 cruised over Iowa, the fan disk of the tail engine on the DC-10 broke apart, and the debris cut through all three of the plane's hydraulic lines. Because the pilots could not move any of the jet's control surfaces--the ailerons on the wings and the elevators and rudder on the tail--a horrific crash seemed inevitable. But by carefully adjusting power to the two remaining engines, the crew managed to maneuver the plane to the Sioux City airport. Although the jet flipped over and caught fire after hitting the runway, 184 of the 296 passengers and crew members survived.

The pilots of flight 232 proved that it was possible to control a modern airliner using only the engines. And this discovery led some innovative engineers to wonder if they could program flight computers to achieve the same feat, making it easier for a crew to safely land a heavily damaged aircraft. This research has been gradually progressing over the past 15 years, and the technology could be incorporated into commercial and military planes in the not too distant future. To judge how well these computer-controlled flight systems perform, I decided to see if they could enable a moderately experienced pilot like myself to fly a crippled jet.

But first, a little background. On early aircraft, the control stick and rudder pedals were directly connected to the control surfaces with wires or rods or cables. But as planes got faster and larger, pilots found it hard to move the stick. So engineers added "power steering," connecting the cables to hydraulic servos that amplify the pilot's efforts. Then, with the advent of the digital age, aircraft makers developed control systems that feed the input from pilots into a computer. This so-called fly-by-wire system can greatly improve an airplane's performance. For example, a fighter jet may fly well when lightly loaded but not so well when it carries bombs on its wings. With a computer in the loop, the control rules can be modified to make the plane behave more consistently. Fly-by-wire also allows the creation of safeguards: if a pilot tries to do something that would cause the aircraft to break apart or plummet to the ground, the computer can ignore the inputs and take the plane only to the edge of the flight envelope.

Shortly after the crash of flight 232, Frank W. (Bill) Burcham, Jr., then chief propulsion engineer at the NASA Dryden Flight Research Center in Edwards, Calif., began an effort to develop software that would enable jet engines to compensate for damage to a plane's control surfaces. Initially the research was considered too far-out to be funded, but a few engineers at Dryden volunteered their spare time. The project, which became known as Propulsion Controlled Aircraft (PCA), eventually received a small budget and proceeded to flight tests with an MD-11 jet. On August 29, 1995, the PCA team brought the plane in for a smooth landing at Edwards Air Force Base using only the computer-controlled engines to maneuver the craft. The NASA engineers felt they had demonstrated that airliner safety could be significantly enhanced just by modifying a plane's software. Unfortunately, none of the aircraft manufacturers chose to adopt the technology.

A few years later researchers in the Intelligent Flight Control (IFC) group at the NASA Ames Research Center in Mountain View, Calif., followed up on the PCA work by developing a system that would allow the computer-controlled engines of a damaged aircraft to work together with any control surfaces that remain functional. The system is based on neural-network software, which mimics the behavior of the human brain by learning from experience--the network's connections strengthen with use and weaken with disuse. The neural networks in the IFC system compare the way the plane should be flying with the way it actually is flying. Differences may be caused by inaccuracies in the reference model, normal wear and tear on the plane, or damage to the aircraft's physical structure. The networks monitor these differences and attempt to minimize them.

For example, if you want to make an undamaged airplane climb, you pull back on the control stick, which raises the elevators. But if the elevators are not working, the IFC system will raise both ailerons to lift the airplane's nose. (Ailerons typically move asymmetrically, with one rising as the other falls.) If this maneuver does not correct the error or if it reaches the limits imposed to prevent the aircraft from rolling over, the IFC system uses the thrust of the engines to achieve the desired pitch.

The Ames researchers tested their system by inviting professional airline pilots and NASA test pilots to fly in the lab's simulator. First, the pilots operated the simulated aircraft under normal conditions. Then the researchers mimicked a variety of failures and observed how the pilots reacted using different types of control systems. In almost every case, the IFC system performed better than a conventional fly-by-wire control system. When the engineers simulated the failure of all tail controls, only half the pilots could safely land the plane using the fly-by-wire system, but all of them made it back to the runway using IFC.

So what's it like to fly a plane equipped with neural networks? At the invitation of Karen Gundy-Burlet, head of the IFC group, I recently spent several hours in its lab to see the system firsthand. I am a private pilot with no experience flying larger aircraft. The IFC simulator was set up to represent a very big plane: the U.S. Air Force's four-engine C-17 transport jet. The simulator features a large wraparound screen to show the animated landscape and a mockup of a glass cockpit, which replaces the traditional flight gauges with flat-panel color displays.

Gundy-Burlet set me up on a 12-mile final approach to the San Francisco airport and let me embarrass myself trying to get an undamaged plane to the ground. Don Bryant, a retired U.S. Navy fighter pilot who works with the IFC group, was polite enough not to openly laugh at my ham-handed attempts to control the craft. My biggest problem was my unfamiliarity with the glass cockpit, which is only now starting to appear in private planes. I spent more time staring at the simulated display trying to find familiar values such as airspeed and altitude than I did actually flying the aircraft. That said, I got a basic feel for how the undamaged plane flew.

Then Gundy-Burlet reset the simulator to the initial location and said, "Captain, I'm sorry, but you've lost all the control surfaces on the tail." Both the elevators and rudders were inoperative, which would probably be a death sentence for an amateur pilot in the real world. But I was pleasantly surprised to find that the simulated aircraft was pretty controllable. I made a few gentle turns to get a feel for the plane while also trying to stay on the right heading. The damaged jet was sluggish in roll and pitch, but its behavior seemed more natural once I slowed down my steering. This change was undoubtedly facilitated by the neural networks, which were training themselves to compensate for the damage. As the networks adjusted to the new conditions, the plane kept getting easier to fly. Within a few minutes, I was able to safely land the simulated craft, although it did stray from the runway.

The overall experience was fairly tame, almost ordinary. It was only later that I recognized the true magnitude of this advance. A private pilot who had never flown a large aircraft was able to land a heavily damaged four-engine jet without killing anybody (in a simulation, at least).

How quickly might this technology see actual use? NASA researchers plan to flight-test the IFC system on F-15 fighter jets and C-17 transport craft over the next two years. The earliest adopters will most likely be the makers of military aircraft. Damage-compensating flight controls should be particularly useful to pilots who fly aircraft that get shot at from time to time.


Mike Corder is a freelance writer in Santa Cruz, Calif., who is building a Van's Aircraft RV-7A plane in his spare time.

em3ry
22nd Aug 2016, 19:21
What do you mean it won't work? All I'm saying is that there is a way to make computers much smarter.

No it will not be omniscient. No it will not be infallible. No it will not replace the pilot. But it will be much smarter.

Anybody who thinks that smarter computers are a bad thing has rocks in their head.

A smarter computer would have prevented several of the recent crashes

DozyWannabe
3rd Sep 2016, 04:30
Hullo all, just thought I'd stop by...

I'm seeing a fair bit of misunderstanding as to the status quo (and possible future) of aviation/computer interaction, so figured I'd weigh in briefly:

To start with, it's a fallacy to conflate the concept of FBW with that of autoflight (FMC/FMS) - they serve separate purposes and are engineered very differently.

Very true [re:computers only dealing with programmed scenarios]
...the simultaneous actions of both crew members on the sidesticks, not considered in the design...
From a purely technological point of view, following this accident Airbus modified the ELAC software's "AoA Protection" activation logic to take into account turbulent conditions. As the report says :

With these modifications the protection level is maintained against dynamically aggressive manoeuvres made by the pilot, but the premature activation of the AoA protection triggered by wind gusts is inhibited, and a de-activation in flight at low height under less stringent conditions is allowed.

The report itself is fairly thorough, but doesn't seem to question the crew's actions in terms of continuing an approach despite weather conditions being considerably worse than they were expecting. Those conditions, along with both crew pulling hard on the sidesticks (a no-no as far as handling training is concerned), created what some engineers call an "edge case", where a very specific set of circumstances defeats the design. That it took some 13 years for that edge case to be found implies that the design and implementation were pretty damned thorough.

Airbus FBW Normal Law computers don't allow the pilot to override them.
(B777 & B787 FBW computers do permit the pilot to override them)
This gets brought up a lot, but as far as the above accident is concerned it's a bit of a tangent. Yes, the conditions and crew actions defeated the logic - but on finals at 60ft RA in windshear conditions, being able to override the AoA protection wouldn't make any material difference to the outcome. Also, in the two decades or so the B777 has been around there has only been a single known incident where overriding the flight control computers could have been appropriate (Malaysian B777 9M-MRG over Perth, Australia; a dodgy accelerometer feeding the ADIRU caused an in-flight upset), but for whatever reason the crew did not do so.

We all know that when computers get confused they just stop - no good having 'Err 404' or 'no internet connection' at 200ft on finals is it.
Real-time, fault-tolerant software engineering is an entirely different kettle of fish from the processes used to make the software in the machines we use day to day. Safety-critical embedded systems also tend to use obsolete/proven hardware precisely because it is a known and predictable quantity.
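
For anyone wondering what "fault-tolerant" looks like in code, the basic pattern is to cross-compare redundant inputs and discard the odd one out rather than trust any single source. A deliberately simplified sketch follows - the thresholds and structure are invented, and this is not how any particular ADIRU or flight computer actually works.

# Simplified redundancy-voting sketch, loosely inspired by the idea of
# cross-comparing sensor channels. Thresholds and structure are invented.

from statistics import median

def vote(readings, max_spread=0.5):
    """Take one reading per redundant channel, return (value, failed_channels)."""
    mid = median(readings.values())
    failed = [ch for ch, val in readings.items() if abs(val - mid) > max_spread]
    good = [val for ch, val in readings.items() if ch not in failed]
    return sum(good) / len(good), failed


if __name__ == "__main__":
    # Channel 'b' has gone haywire; the voter uses the other two and flags it.
    value, failed = vote({"a": 1.02, "b": 9.87, "c": 0.98})
    print(value, failed)   # ~1.0, ['b']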

As far as the OP goes: sorry em3ry, but you're a bit off in some of your assumptions as far as I can tell. For starters, that Google patent you linked to clearly relates to their "self-driving car" efforts. Now, a car's behaviour is relatively simple to model and control - applying/reducing power or braking, steering in a given direction etc. results in a near-instant change of trajectory and closure rate. A fixed-wing aircraft is a massively different proposition because its ability to manoeuvre is reliant on a far more complex form of energy management.

For example, if an autonomous car wants to avoid an obstacle, a combination of acceleration, braking and steering can be applied to quickly remedy the situation. An aircraft responds much more slowly - and additionally it needs to have enough airspeed to stay aloft, but cannot exceed a certain airspeed without risking structural damage - so an avoiding manoeuvre first requires that there is sufficient energy to pull it off, and there is (particularly with jet engines) a significant lag between applying power and that power translating into useful energy. This lag massively increases the amount of "look-ahead" any simulation must perform, which in turn exponentially increases the number of variables that simulation must take into account. Multiply that by the number of scenarios it has to model and there simply isn't a feasible way to implement it practically and cost-effectively using state-of-the-art hardware, let alone the obsolete and proven hardware required for aviation certification purposes.
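
To put a rough number on the lag point: below is a toy point-mass calculation (all figures invented, representative of no real type) showing how an engine modelled with a simple first-order spool-up lag stretches the time needed to convert a thrust command into usable airspeed - which is exactly why an aircraft's "look-ahead" has to simulate tens of seconds rather than the second or two a car needs.

# Toy illustration of why jet-engine spool-up stretches the look-ahead horizon.
# The model, time constants and numbers are invented for illustration only.

def time_to_gain_speed(delta_speed_mps, thrust_step_n=40000.0,
                       mass_kg=70000.0, spool_tau_s=6.0, dt=0.1):
    """Seconds for a thrust step (with first-order spool lag) to add delta_speed."""
    t, v_gain, thrust = 0.0, 0.0, 0.0
    while v_gain < delta_speed_mps and t < 120.0:
        thrust += (thrust_step_n - thrust) * dt / spool_tau_s  # engine spools up
        v_gain += (thrust / mass_kg) * dt                      # a = F/m, drag ignored
        t += dt
    return t


if __name__ == "__main__":
    print(f"~{time_to_gain_speed(10.0):.1f} s to gain 10 m/s with spool lag")
    print(f"~{time_to_gain_speed(10.0, spool_tau_s=0.1):.1f} s with near-instant response")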

A smarter computer would have prevented several of the recent crashes
Which ones, and how so?

Centaurus
3rd Sep 2016, 06:03
You have written about "the pilot". This seems to be a very common misconception. Are you aware that there are two fully qualified pilots on a commercial passenger airliner? One is designated the Captain, the other is designated the First Officer.

And in many airlines the first officer is nothing more than a young apprentice whose only actual flying experience in the air may have been at his flying school. If the captain becomes incapacitated his apprentice is now in charge of a plane load of passengers and he is all by himself. So yes - he is "fully qualified" on paper but that may mean nothing without the flying experience to fall back on.

Don't get me wrong. That is the norm nowadays. And statistically safe. Meets regulatory requirements too. But "fully qualified" doesn't necessarily reveal the whole story.

Goldenrivett
3rd Sep 2016, 13:00
where a very specific set of circumstances defeats the design. That it took some 13 years for that edge case to be found implies that the design and implementation was pretty damned thorough.

@DozyWannabe,
Any comment on the 50% reduction of aileron authority once one wheel touches the ground during landing?

"The BFU stated, that at the time of flare and touchdown there was no significant gust. The weather situation was well within the forecasts.

The flight control laws of the Airbus Fly By Wire (FBW) change from flight mode via flare mode to ground mode in the pitch axis and change directly from flight mode to ground mode in the roll control. In ground mode the side stick deflection leads to a directly proportional deflection of ailerons and roll spoilers without computer interaction.

However, above 80 knots the effectiveness of roll control, ailerons and roll spoilers, is reduced by half (e.g. aileron deflection limited to 50% of maximum deflection).

When the left hand main gear contacted the ground - the radar altimeter indicating less than 50 feet AGL and both landing gear control interface units detecting weight on the left hand wheel - the airplane changed from flight to ground mode (confirmed by Airbus), and the effectiveness of the roll control was reduced by 50 percent at that point.
.....The airplane subsequently touched down with the left hand main gear at a roll angle of 4 degrees to the left and got airborne again. The roll angle increased to 23 degrees to the left, first officer and captain each now pushing their side sticks full right,"
Report: Lufthansa A320 at Hamburg on Mar 1st 2008, wing touches runway in cross wind landing (http://avherald.com/h?article=42826d3a)
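
Purely to illustrate the behaviour the report describes - the names, numbers and structure below are my own invention, not Airbus code - the ground-mode reduction amounts to a simple gain change once weight-on-wheels is sensed above 80 knots:

# Illustrative sketch of the ground-mode roll-authority reduction described
# in the BFU report. Names and the way the condition is expressed are invented.

MAX_AILERON_DEG = 25.0   # made-up full-travel figure

def aileron_command(stick_roll, weight_on_any_main_gear, groundspeed_kt):
    """stick_roll in [-1, 1]; returns commanded aileron deflection in degrees."""
    demand = stick_roll * MAX_AILERON_DEG
    if weight_on_any_main_gear and groundspeed_kt > 80.0:
        # Ground mode above 80 kt: effectiveness reduced to 50%,
        # even if the aircraft skips back into the air moments later.
        demand *= 0.5
    return demand


if __name__ == "__main__":
    # Full right stick, one wheel has just touched at 130 kt:
    print(aileron_command(1.0, weight_on_any_main_gear=True, groundspeed_kt=130))   # 12.5
    # Same input while still airborne:
    print(aileron_command(1.0, weight_on_any_main_gear=False, groundspeed_kt=130))  # 25.0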

em3ry
4th Sep 2016, 08:00
Avoid an obstacle? Like a mountain?

BleedingAir
4th Sep 2016, 08:22
We already have "computers that look ahead to see what is going to happen" with regard to flying into mountains - it's called EGPWS. What additions are you proposing?
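
The look-ahead in EGPWS is conceptually straightforward: project the current flight path forward against a terrain database and warn if the two come too close. A very rough sketch of the idea (flat-earth arithmetic, invented thresholds, nothing like the certified algorithms):

# Crude sketch of terrain look-ahead in the spirit of EGPWS. The terrain
# "database", projection maths and thresholds are invented; real EGPWS logic
# (caution/warning envelopes, turns, geometry) is far more involved.

def terrain_ahead(alt_ft, groundspeed_kt, vs_fpm, terrain_profile_ft,
                  look_ahead_s=60, step_s=5, clearance_ft=300):
    """Return seconds until predicted terrain conflict, or None if clear.

    terrain_profile_ft: callable giving terrain elevation at a distance (NM) ahead.
    """
    for t in range(step_s, look_ahead_s + 1, step_s):
        dist_nm = groundspeed_kt * t / 3600.0
        predicted_alt = alt_ft + vs_fpm * t / 60.0
        if predicted_alt < terrain_profile_ft(dist_nm) + clearance_ft:
            return t
    return None


if __name__ == "__main__":
    def ridge(d_nm):
        # A made-up ridge rising to 3,500 ft about 4 NM ahead.
        return 3500.0 if 3.0 < d_nm < 5.0 else 500.0

    # Level at 3,000 ft doing 240 kt: conflict predicted well before the ridge.
    print(terrain_ahead(3000.0, 240.0, 0.0, ridge))      # e.g. 50 (seconds)
    # Climbing at 2,000 fpm clears it.
    print(terrain_ahead(3000.0, 240.0, 2000.0, ridge))   # None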

Uplinker
4th Sep 2016, 12:02
All commercial passenger airliners flying today are already equipped with at least two of the most complex processors known to man: the human brain.

These devices consist of a network of some 100 billion neurons (each neuron a mini-computer in itself) and are able to adapt, predict and plan ahead in real time. They also produce "what if?" scenarios. The vision system alone has been developed and refined over millions of years - it uses short- and long-term memory resources to enhance the processed vision from the eyes to construct a real-time, three-dimensional, predictive situational awareness.

Even so, these incredibly complex devices are not perfect. They can become tired, they make mistakes, and there are errors of perception and vision. Therefore a way of operating aircraft safely has been gradually developed over the years to try to mitigate failures and shortcomings as they become known.

What is happening in aviation today is that it has become locked into a descending spiral of ever-lower operating costs to encourage an increasing number of people to fly. This will generate profits and bonuses for the owners and shareholders, but to facilitate these lower costs, pilots (and crews) are being utilised beyond what is sensible and safe. We are technically allowed and required to fly when we are tired. Training time and quality are being reduced. Ground school - to learn the intricacies of the aircraft systems - is often reduced to computer-based training, where, with enough practice, the tests can be passed without any true understanding of those systems and how they relate to operating the aircraft on a dark, stormy night.

So mistakes are happening and passengers are quite rightly concerned. The answer is not to develop more and more computers to take over from the pilots - we have enough computers already. Most (except those such as TCAS and EGPWS) remove the pilot further and further away from engagement and situational awareness of their flight. Autothrust, for example, removes the need to constantly monitor and adjust the aircraft speed. We should of course always monitor our speed very carefully, but when it is operated by a computer that rarely gets this (simple linear parameter) wrong, monitoring is perhaps not as rigorously performed as it should be. Then, one day, you get pilots who have not done 'proper' groundschool to fully understand their systems; have not practised flying with manual thrust; and fly with the autopilot engaged so often; that they literally sit and allow the speed to decay to 30 knots slow on approach without doing anything about it, and just watch as their aircraft crashes around them !

I think the answer is not to spend time and resources trying to build computers to replace pilots - such a thing would be a waste of time and take decades to even work reliably enough, let alone convince the flying public. Those resources should instead be directed to proper pilot training and sensible rosters.


.

DozyWannabe
7th Sep 2016, 01:14
@DozyWannabe,
Any comment on the 50% reduction of aileron authority once one wheel touches the ground during landing?
Ultimately, we're talking about two incidents in which a crew elected to proceed with an approach and landing in spite of the overall conditions being questionable at best. At the time of flare and touchdown there may have been no significant gust, but there was certainly enough adverse wind activity beforehand to destabilise the approach. To the best of my knowledge, LH is not an airline known to arbitrarily penalise crew for going around - in my book the Captain in this case let the aircraft and his FO get ahead of him. FBW or conventional, grabbing the controls without a proper hand-over as your aircraft is crossing the threshold (in any scenario besides one-on-one training) is a clear indication of having "dropped the ball"...

Given that in the Hamburg incident one of the wing fences contacted the ground (possibly as a result of summation of dual input) I'd argue that the reduction in roll authority in those circumstances was probably a good idea!

Avoid an obstacle? Like a mountain?
Like anything, sir; but that's not what I was getting at.

The point I was making was that whether you're talking about Google, Tesla or whomever, the complexity inherent in dynamic autonomous guidance (or, in layman's terms, computers automatically driving in response to immediate outside situations) of a ground vehicle is at least several orders of magnitude less than doing the same in a fixed-wing aircraft. Not only would the logic have to deal with a far greater need to scan and evaluate in the vertical axis, but because aircraft controls (particularly thrust) tend to respond much more slowly than those of a car, the overall requirement for look-ahead, simulation and evaluation would be well beyond the scope of current technology (to say nothing of the technology - several generations behind - which is currently certified for aviation use).

You haven't answered my question - which recent accidents do you believe could have been avoided with the technology you describe, how, and why?

The answer is not to develop more and more computers to take over from the pilots
No-one directly involved with the tech side ever claimed it was. The notion that FBW/digital flight controls plus FMC/autoflight was the first step in replacing pilots was purely an invention of the press.

Autothrust, for example, removes the need to constantly monitor and adjust the aircraft speed.
That's not an especially new thing though - it's been a part of line flying since the '60s.

Then, one day, you get pilots who have not done 'proper' groundschool to fully understand their systems; have not practised flying with manual thrust;
You're talking about two different things there. Since the advent of the widebodies in the late '60s and early '70s, we're talking about airliners with a degree of complexity such that they're on the very limit of what human beings are capable of dealing with (case in point, a couple of years back I walked through the flight deck of a static B741 exhibit and the sheer number of switches, CBs and gauges blew my mind - those FEs got a silent salute out of me that day!). The reason that later aircraft systems design moved towards computer management and monitoring of those systems is because the complexity grew to such a degree that it was too much to ask of flight crew (and because that kind of work is something computers - when programmed correctly - are very good at).

Failure to require pilots to practise certain skills (e.g. flying with manual thrust), on the other hand, is a rather dubious practice of some airlines, and I don't think it's fair to blame the technology itself for that state of affairs.

...and fly with the autopilot engaged so often; that they literally sit and allow the speed to decay to 30 knots slow on approach without doing anything about it, and just watch as their aircraft crashes around them !
In all fairness, pilots have been "falling behind" their aircraft for far longer than autoflight has been around. If you're referring to Asiana into SFO, I think it's fair to point out that poor training and a series of CRM blunders were involved well before the automation mix-up came into the picture.

I think the answer is not to spend time and resources trying to build computers to replace pilots
For the reasons I listed to the OP above - among others - absent some kind of unforeseen leap in guidance or transportation technology, technology isn't likely to replace pilots until long after I'm pushing up the daisies! :)

Uplinker
7th Sep 2016, 13:21
Hello dozy,

Forgive me: As I often find with your posts, I am never sure whether you are agreeing or disagreeing, or whether you simply enjoy countering other people's points of view? (i.e. winding us pilots up !? :):ok:)


The answer is not to develop more and more computers to take over from the pilots
No-one directly involved with the tech side ever claimed it was. The notion that FBW/digital flight controls plus FMC/autoflight was the first step in replacing pilots was purely an invention of the press.

Straw man argument: the OP was claiming this, and I was responding to the OP. Perhaps I could have phrased it more tidily by saying "the answer is not to develop a computer to think ahead.........."



Autothrust, for example, removes the need to constantly monitor and adjust the aircraft speed.
That's not an especially new thing though - it's been a part of line flying since the '60s.

Another straw man: How does the fact that autothrust has been around for quite a while render my point invalid?



Then, one day, you get pilots who have not done 'proper' groundschool to fully understand their systems; have not practised flying with manual thrust;..

You're talking about two different things there.........

Yes I am, and they are directly related in this crash, so how does that nullify my point? It was a lack of understanding of the A/T HOLD mode, coupled with a reluctance - or lack of ability - to monitor speed and take over using manual thrust, that was the main cause of this crash.

Yes, they were too high all the way down the approach. Yes, the two other captains in that cockpit failed to properly alert or take control from the obviously very senior but incompetent Captain. (The non-flying F/O was the only one who spoke up until it was far too late.)

Since the advent of the widebodies in the late '60s and early '70s, we're talking about airliners with a degree of complexity such that they're on the very limit of what human beings are capable of dealing with (case in point, a couple of years back I walked through the flight deck of a static B741 exhibit and the sheer number of switches, CBs and gauges blew my mind - those FEs got a silent salute out of me that day!). The reason that later aircraft systems design moved towards computer management and monitoring of those systems is because the complexity grew to such a degree that it was too much to ask of flight crew (and because that kind of work is something computers - when programmed correctly - are very good at).

Setting and monitoring the correct thrust and speed is not a complex task, nor is it a difficult one. It can be tedious to do for long sectors (in my past I have flown five types without autothrust), and it is legally necessary for CAT III autolands.

Don't be too overawed by the FE panels of yesteryear. I have a background and previous life in electronics, so it is easy for me to see; but each part was quite simple, there were just a lot of parts! A bit like the music mixing desks you might have seen in recording studios: they look insanely complex to the novice but they really are not. (Last night every passenger who visited the flight deck as they boarded was in awe of the cockpit: "Wow, do you know what every switch and light does?" etc.)


...and fly with the autopilot engaged so often; that they literally sit and allow the speed to decay to 30 knots slow on approach without doing anything about it, and just watch as their aircraft crashes around them !
In all fairness, pilots have been "falling behind" their aircraft for far longer than autoflight has been around. If you're referring to Asiana into SFO, I think it's fair to point out that poor training and a series of CRM blunders were involved well before the automation mix-up came into the picture.

I am trying to understand how the fact that "pilots have been falling behind their aircraft for far longer than autoflight has been around" nullifies my point? You point out that poor training was to blame as if I hadn't thought of that, but this was one of the points I made.

And, yes, I was referring to Asiana. I do agree there were CRM issues and other factors in this accident, because, as we all know, no accident is ever caused by a single hole in the cheese. Having said that, a proper understanding of the A/T system modes would have 'converted' this crash into a hot and high approach. Not pretty, or proper, but the landing (or go-around) would have been reasonably OK and three people would not have died.


Regards.

Goldenrivett
7th Sep 2016, 15:14
Hello DozyWannabe,
Given that in the Hamburg incident one of the wing fences contacted the ground (possibly as a result of summation of dual input) I'd argue that the reduction in roll authority in those circumstances was probably a good idea!
The BFU state that the continued roll to the left, despite the application of full right aileron, was due to the flight control law switching to ground mode whilst the aircraft was still technically airborne.

The crew needed more than half aileron to control the roll - but the computer logic denied it.

em3ry
7th Sep 2016, 15:59
So you think that smarter computers would be a bad thing?

Goldenrivett
7th Sep 2016, 16:22
So you think that smarter computers would be a bad thing?

You can write software to make them "smarter" - but that won't make them any smarter than the bloke who is programming them.
Hence from post #24
"It's great to use computers to perform routine tasks - but please still give us the authority to over ride."

em3ry
7th Sep 2016, 16:50
that won't make them any smarter than the bloke who is programming them.
I don't see why not. The brain is a computer. Who programmed your brain?

Computers play chess better than any human.
How is that not smarter than the programmer who programmed them?

Uplinker
8th Sep 2016, 10:47
Oh for goodness' sake.

Look, chess is a very strictly defined game. A chess board has just 64 squares, OK? Each square can only have one piece on it. Each piece only has certain defined moves available. To win a chess game, you only need to work through the possible moves you can make WITHIN those 64 squares, and the moves the opponent can make in response. Grand masters can do this to an extraordinary degree, but it is something that is very well suited to a computer, because a computer has the memory and speed available to work through a vast number of possible moves, score each resulting position and then make the best move according to its programmed algorithms. Chess is also not life and death.
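
And to be fair, that "work through the moves and score them" part really is that mechanical. Here is a stripped-down search over a toy take-away game - not chess, and nothing like a real engine with its evaluation heuristics and pruning - just to show the skeleton:

# Bare-bones game-tree search over a toy take-away game (take 1-3 from a pile,
# taking the last counter wins). This is the "work through the moves and score
# them" idea in miniature; a real chess engine adds evaluation, pruning and
# enormous engineering on top, but the skeleton is the same.

def negamax(pile):
    """Return (score, best_move) from the point of view of the player to move.

    score is +1 for a forced win, -1 for a forced loss.
    """
    if pile == 0:
        return -1, None          # previous player took the last counter: we lost
    best_score, best_move = -2, None   # below any achievable score
    for take in (1, 2, 3):
        if take <= pile:
            score = -negamax(pile - take)[0]   # opponent's best reply, negated
            if score > best_score:
                best_score, best_move = score, take
    return best_score, best_move


if __name__ == "__main__":
    for pile in range(1, 9):
        score, move = negamax(pile)
        print(f"pile={pile}: {'win' if score > 0 else 'loss'}, take {move}")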

Now, imagine you are making an approach to a runway at night. It is raining and turbulent with scattered cloud. An aircraft ahead of you has just been cleared to take off. Your aircraft is bucking around and you have the windshield wipers on. You fly through occasional clumps of cloud in which you get a brief white-out effect from your landing lights. ATC has brought you quite close in to the aircraft that has just been cleared. You have heard the aircraft acknowledge his take-off clearance. In these conditions, you can only really see the runway edge and centre lighting. You cannot see the aircraft on the runway, but you can just make out his strobes and infer where he is when he blocks out a runway light. He seems to be moving very slowly and is just lining up.

You have to make a decision according to the acceleration of the aircraft taking off whether to execute a missed approach, or whether to keep going.

Question: How do you propose to engineer and program the level of real time visual processing required to make sense of what your "thinking ahead" computer is even looking at - let alone be able to make any sort of decision about it?

All you can see is blackness with a myriad of lights ahead of you. The human pilot knows that those lights are the runway, those lights are the road nearby, those tiny lights are the aircraft on the runway, and the human brain can track that aircraft by observing where he is according to which runway lights he is blocking out. As well as this, the scene is moving around all the time due to the turbulence. The scene completely disappears now and then - replaced with just random whiteness as we fly through each cloud. The rain and the windshield wipers constantly distort and block parts of the scene. The human brain can assimilate all of this and process it into a three-dimensional, moving, predictive situational awareness. Can any computer see this well?

Stop bloody trolling will you?

BleedingAir
8th Sep 2016, 12:52
Stop feeding him.

em3ry
8th Sep 2016, 18:43
And as I've said before this thread is not about replacing the pilot with a computer. It's about how to make the computer smarter.

DozyWannabe
9th Sep 2016, 01:46
Forgive me: As I often find with your posts, I am never sure whether you are agreeing or disagreeing, or whether you simply enjoy countering other people's points of view? (i.e. winding us pilots up !? :):ok:)
No worries - I've said it often enough, but I am honestly just trying to add to the conversation and learn stuff as I go, and that has always been the case. I know that the tendency is for internet conversation to be frustratingly adversarial by default, which is something I absolutely try to avoid wherever possible. It would seem that my desire to go against the grain in this way throws some folks!

When you say "us pilots", it implies to me that you're drawing separation lines in anticipation of there being some kind of antagonism before it actually happens, which I reckon is a slightly sad indication (of how these things in general seem to go - not you specifically).

Straw man argument: ... Perhaps I could have phrased it more tidily by saying "the answer is not to develop a computer to think ahead.........."
And perhaps my response wasn't as clear as it should have been. I wasn't necessarily responding to the OP as much as re-stating a common misconception. Apologies...

How does the fact that autothrust has been around for quite a while render my point invalid?
Again, I didn't intend that statement to be directed solely at you (sorry if it came across that way), I was making a more generalised response, following on from the "FBW/FMC intended to eventually replace pilots" canard. The only group that bothers me there is the press, which fed the "controversy" that it made up out of whole cloth back in the late '80s and in turn created a division between pilots and techies which rumbles on to this day and is massively unhelpful.

Yes I am, and they are directly related in this crash, so how does that nullify my point?
I wasn't out to nullify your point or rebut your argument sir, I was only trying to provide a little more background info and add a few extra things I've read to the mix.

As we know, aviation accident scenarios are usually fairly complex sequences of events involving equally complex networks of decision making, and (while not aiming at you personally) I tend to be wary of the notion of "main cause[s]" in the singular. This is because it gives rise to a tendency to focus on a few (or even single) aspects at the expense of properly understanding things from a holistic "systems safety" perspective (to say nothing of feeding the media's tendency to foment a 'blame game').

In that case it would appear that Asiana's training systems all the way back to ground school and sim training were outright unfit for purpose in many respects, and they certainly came across as one of those airlines that trained pilots to be over-reliant on aids, technology and automation (i.e. I agree with your point there... :) ). But if I recall correctly it went rather further than that. To start with, Asiana maintained a list of "difficult" airfields (within which SFO was a prominent example) and effectively forbade flight crew to land there without ILS (unless there was no other option). Sim training for non-ILS approaches was always done using their home base locale, which is relatively forgiving terrain-wise.

I guess what I'm getting at is that - as you say - whilst the last hole in the Swiss cheese was a failure to monitor airspeed which fell below safe margins as a result of A/THR mode confusion (and a failure of the check Captain to properly monitor and remedy the situation), my view is that this (along with the automation-reliance aspect) was but one part of the whole. In citing certain airfields as problematic and strongly discouraging flight crew from attempting non-ILS approaches at those airfields, the company's attitude ran the risk of effectively undermining flight crew self-confidence in general, even before we get to the training aspect (which further reinforced the notion that pilots should only be confident in doing non-ILS approaches at certain airfields).

In HF/psychological terms that is pretty much teaching your crew that some scenarios are probably beyond their abilities before they've even tried. It's accepted that the probability we humans have of making a mistake when performing a task increases dramatically as the amount of stress we are put under increases. As you said, I'm not a pilot, but many of those I've spoken to have said that checkrides tend to be pretty nerve-wracking even if you're usually confident in your abilities - that's stressor number one. Our newly-minted Asiana Captain was rostered to SFO (which the company considered challenging) to start with - stressor two; the check Captain was apparently of the quiet "hands-off" tendency (number three); then on finals, SFO Approach informs him that ILS is inoperative - and that's four. Minutes away from scheduled arrival time and the unfortunate guy had every reason to feel he'd drawn every single short straw possible - as such his stress level was (and the consequent odds of his making a mistake were) already drastically higher than should have been the case.

[to Uplinker : I've gone into the above tangent not to refute your point or be contrary in general - you're absolutely correct when you state that Asiana's company policy at the time was rather automation-centric - I just wanted to explain my view (for anyone who may be reading) that this particular accident had causal roots in several other aspects as well. If a person is subject to an implicit (and oft-reinforced) notion that a certain task is beyond them, and then subsequently expected to perform that task under already high-stress conditions, it risks becoming a self-fulfilling prophecy - and the profession, organisation, ethnicity etc. involved is immaterial. ]

Don't be too overawed by the FEs panels of yesteryear. I have a background and previous life in electronics, so it is easy for me to see; but each part was quite simple, there were just a lot of parts !
Sure - and thanks for the "mixing desk" analogy ;) - I know what you're saying - in that each part was in and of itself relatively simple - what I was getting at was that I imagined a scenario of multiple and/or cascading failures with the tool for diagnosis and remedy being literally hundreds of gauges and switches on the FE panel plus a ceiling full of hundreds more CBs each linked to an individual system - and thinking that while the tech was relatively simple in an individual sense, getting into the possible combinations and permutations had to have been (particularly in a high-stress scenario) either at or near the limit of the Mk.1 human brain.

So - to reiterate - I wasn't trying to nullify your points (promise!), I was trying to add a bit of extra background info and put forward some points of my own that some readers might find interesting. I'm keeping my own counsel as to whether the OP may or may not have been a deliberate wind-up attempt (though my responses assumed giving them the benefit of the doubt) - but I promise you that I'm not doing that, and never have done.

The crew needed more than half aileron to control the roll - but the computer logic denied it.
That's one viewpoint (and arguably a fair one) - it's just that what we're talking about here is another "edge case" (in which the scenario fell outside the design parameters). That's not a computer-specific thing - it applies to every engineering-related discipline (including going all the way back to the rods, cables and counterweights of the first few decades of aviation). If the logic involved could have been improved as a result of discovering that edge case, then it probably was (one of the benefits of having digital flight controls is that applying a design/implementation fix to the entire fleet is relatively straightforward). Also, the inherent complexity of "weight-on-wheels" logic and how it applies to flight controls has been a perennial headache for engineers since long before the digital age!

In that scenario an "override" of the kind available on the T7 would not have helped because the timescale involved was far too short for the crew to have engaged it, let alone taken advantage of it. In my view (with which you're welcome to disagree), to say the logic "denied" the crew is an exaggeration. It gave the crew the maximum amount of right aileron that the design parameters considered safe - and to be fair, whilst the ground contact was certainly a bit of a "brown trousers" moment, the logic nevertheless gave the crew enough control authority to prevent things from getting worse.

Consider this - in that one particular scenario the aspect of the design which limits aileron travel in "ground mode" might have contributed to the wing fence "scrape". The engineers (pilot, aero, mechanical or software) who designed that system had to take multiple (tens at least, if not hundreds of) scenarios into account and come up with the best possible compromise in terms of addressing them all as safely as possible. For example, consider a scenario (one of many alternatives) in which the same inputs were applied and aileron travel was not limited, resulting in an overcontrolled roll to the right and a probable fatal crash. Then consider that for every second in time you "rewind" from that wingtip scrape, you're adding several more scenarios that must be addressed. Engineering is about compromise above all and it ain't easy.

I reckon it's worth bearing in mind that when it comes to flight controls, engineers have had to design in myriad ways of controlling and limiting input and response to help the pilots keep their craft pointed in the right direction - from mechanical baulks and counterweights through electro-hydraulic systems to today's digital technology; all of which involved compromise.

And as I've said before this thread is not about replacing the pilot with a computer. It's about how to make the computer smarter.
If that's the case (and you're actually on the level, which I'm beginning to doubt if I'm honest), then:

Why illustrate your point with a Google patent clearly related to their "self-driving" car project?
Why are you seemingly ignoring my posts explaining why autonomous operation of an airliner is at least several orders of magnitude more complex?
Why have you not listed (and this is the third time of asking) those aviation accidents that you think could have been avoided with "smarter" computers?

If you are just fishing for responses and having a giggle at our expense (and mine), then please be aware that, at least in my case, looking up information to answer these kind of questions is something that I happily do for the sake of it and as such, I never consider it a waste of time and effort on my part.

On the other hand, and to give you the benefit of the doubt one last time for now...

As a software engineer myself (and a dyed-in-the-wool techie since not long after I was out of nappies [aka diapers]), there's this: "Smarter" is very much a subjective term - I've stated many times that the kind of computer technology used in aviation (as is the case with any safety-critical real-time embedded use) always uses hardware that would be considered obsolete in any other field. Case in point - the ELAC and SEC units fitted to every A320 that has rolled off the production line from 1988 to the present day are based around the Motorola 68000 and Intel 80186. Both designs date back to the late '70s and early '80s, both are effectively 16-bit and both are designed to run at clock speeds not much greater than 10MHz. The 68k found its way into a lot of homes via the Atari ST, CBM Amiga, original Apple Macintosh and Sega MegaDrive/Genesis in the late '80s/early '90s and yet...

When combined in the A320 (two of each type plus a duplicated FAC), the system overall is capable of running tens of logical finite-state machines per unit, all of which are capable of self-checking and cross-checking each other in real-time. The same (arguably) "ancient" devices are also capable of assessing the crew's control inputs, calculating a certain amount of "look-ahead" in terms of the aircraft's trajectory and power settings (exactly the kind of 'simulation' you seem to be getting at) and providing the best combination of control surface and thrust response possible - all (again) in real-time.

In other words, I'd argue that whilst the underlying tech is obsolete and each individual software component is kept deliberately simple in order to enable thorough testing, in concert the system is "smart" enough to give the crew what they're asking for, and - on rare occasions - also capable of helping them avoid or get out of trouble (to a certain extent) by keeping the aircraft within the safe flight envelope.
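
To give a flavour of what I mean - and this is purely illustrative, with invented thresholds and units, nothing to do with the actual ELAC/SEC software - the basic pattern of a command lane being cross-checked by an independently coded monitor lane, plus a simple envelope clamp, looks something like this:

# Minimal sketch of the command/monitor pattern and an envelope clamp.
# Entirely illustrative: the limits, units and structure are invented.

def command_channel(pilot_demand_g):
    """'Command' lane: clip the pilot's load-factor demand to a safe envelope."""
    return max(-1.0, min(2.5, pilot_demand_g))

def monitor_channel(pilot_demand_g):
    """Independently coded 'monitor' lane computing the same thing."""
    clipped = pilot_demand_g
    if clipped > 2.5:
        clipped = 2.5
    elif clipped < -1.0:
        clipped = -1.0
    return clipped

def flight_control_step(pilot_demand_g, tolerance=0.01):
    """Cross-check both lanes; on disagreement, fail passive and hand off."""
    cmd, mon = command_channel(pilot_demand_g), monitor_channel(pilot_demand_g)
    if abs(cmd - mon) > tolerance:
        return None, "channel disagreement: this computer drops offline"
    return cmd, "ok"


if __name__ == "__main__":
    print(flight_control_step(3.2))   # demand clipped to 2.5 g, lanes agree
    print(flight_control_step(0.8))   # passed through unchanged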

That said, I ask one last time - what do you mean by "smarter", and which accidents would your notion of "smarter" have avoided?

Uplinker
9th Sep 2016, 23:55
Hi Dozy,

Your points accepted, and no offence taken. (Your written style sometimes reads as that of one who has more flying experience than anybody - no offence intended.)

As far as "us pilots" are concerned; I remember reading some years ago - your own post I think - that you are not a pilot?

Like so many things, the ideal and the actual are not necessarily the same thing - one has to do the actual job to realise why.

Regards,

Uplinker

@ BleedingAir: Yeah I know, I should know better by now.

.

DozyWannabe
10th Sep 2016, 18:14
(Your written style sometimes reads as one who has more flying experience than anybody, no offence intended.)

As far as "us pilots" are concerned; I remember reading some years ago - your own post I think - that you are not a pilot?
Along with my user profile, I've explicitly said in more posts than I care to remember that I am not a pilot, and more implicitly (in the days when I was around here more often) I deliberately avoided getting involved in piloting aspects - I always did my best to stick to the tech side, and any time I ventured other information it was always because I had enough evidence to back things up.

Like so many things, the ideal and the actual are not necessarily the same thing - one has to do the actual job to realise why.
To some extent, but I'd suggest that as long as a reasonable and thorough effort is made to understand what people doing the job have to do and have to face, then it's possible to be only a few steps away from having to have done the job. For my part, while I didn't end up flying for a living, I've been utterly obsessed with aviation since I was about five years old, joined the Air Cadets as a teenager (almost applied to Cranwell, I think my Mum still has the papers somewhere), used to play around with sims when free time was still a thing for me and throughout that time have voraciously read just about every book and watched every video I could lay my hands on. When I was lucky enough a few years ago to take part in an experiment in a proper A320 sim I was like the proverbial kid in a candy store inside, though obviously did my absolute best to perform the experiment diligently and thoroughly.

It can cut both ways sometimes as well. Some of those on here who are (for want of a better term) of a "traditionalist" mindset seem to be of the opinion/belief that some time in the '80s airline managers got together with Airbus and us techies and resolved to design pilots out of the cockpit by degrees. As such I have in the past (thanks to one of my professors who took a very keen interest at the time) tried to explain that this was never the case, and that all of the engineers involved were committed to helping pilots do their job, not take it away from them. :)

Uplinker
12th Sep 2016, 14:09
:ok: Absolutely, Dozy, I was not judging or having a dig at you, merely confirming.

I am sorry that you didn't make it into piloting; it used to be a great job, but to be absolutely honest, you really are not missing much in these days of low cost. You are probably a lot less stressed, and work more sensible hours. (And you have saved yourself £120,000!)

I was recently physically assaulted by a baggage handler whom I had tried to ask to be more careful with our passengers' cases, which he was (literally) throwing out of the hold, some of them falling to the ground.

Sadly, I find that the bad days outnumber the good days now..........


Re your last paragraph, I agree. As a previous electronics engineer myself, and now a pilot, I personally think the Airbus FBW design is very good, and I like flying it. Perhaps having an engineer's brain helps?