Why is automation dependency encouraged in modern aviation?
+1 vilas. The pilot must maintain their scan of basic parameters even with the autos in.
Just as when driving a car one should check the rear-view mirrors every 10 seconds or so, pilots should regularly glance at the instruments. Even on auto at FL410 at 30W with nobody else in the sky, one should glance at the PFD and ED regularly - checking pitch, roll, speed and V/S, and the engine parameters.
The problem of pilots unable to hand fly seems much worse than I imagined. The F/O who trembled when given control for 20 secs in the descent, the F/O who was scared to fly into IMC - they should not have been in the cockpit in the first place - how the hell were they allowed in there? How did they pass their captain's-incapacitation detail and raw-data hand-flown approaches in the SIM?
You don't specify every X days; you require X approaches within every six-month period, exactly as we did with practise autolands.
That way, on the very busy approach, on the bad-weather day, on the long, tiring multi-sector day, you don't do it. But on the nice day when things are quiet-ish and both crew are feeling fine, go for it. As pilots see that the Chief Pilot is encouraging them to practise hand-flying, personal confidence will build; flying skills won't go so rusty and the fleet will improve its competence.
Only a tiny minority of pilots will regularly voluntarily practise their raw data flying - it's human nature. There has to be some sort of mandate from the CP to push us all into doing this.
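The recency scheme described above - a required number of hand-flown approaches within every rolling six-month window, just like practise autolands - is simple to track. A minimal sketch; the threshold of 3 approaches and the 182-day window are illustrative assumptions, not figures from any regulation or ops manual:

```python
from datetime import date, timedelta

def is_current(approach_dates, today, required=3, window_days=182):
    """Rolling-window recency check, like the autoland requirement.

    required and window_days are illustrative assumptions only.
    """
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in approach_dates if cutoff <= d <= today]
    return len(recent) >= required

# Three hand-flown approaches logged in the last six months: current.
print(is_current([date(2020, 6, 1), date(2020, 9, 15), date(2020, 11, 2)],
                 today=date(2020, 11, 30)))  # True
```

Because the window rolls with each flight rather than expiring on a fixed date, a quiet month doesn't force a raw-data approach on an unsuitable day - the pilot simply needs to catch up when conditions allow.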
PilotLZ
Tightening the regulatory requirements for manual flying proficiency sounds tempting right until you realise that it can also backfire. For example, if you require a pilot to perform a raw-data approach every X days, how can you be certain that, during a period of low flying activity, his/her only flight won't be in conditions where flying raw-data is not appropriate? Hence, I don't think that hard-and-fast rules are the best way forward.
Last edited by Uplinker; 30th Nov 2020 at 11:18. Reason: typo
Now let's take a step into reality. It does. I'll admit that I've flown approaches where I didn't know what my speed or N1 was, because the AT was on. Now imagine I'm the sort of person who never turns off the AT- it won't take much to overload me on the day that it fails.
I wonder how some of us ever got through an instrument rating checkride in a piston single where the only automation was a mode C transponder.
Vessbot, how do you recognise the need to disconnect?
Are you able to describe the process which you use, one which will always apply in every situation? If this starts with understanding the situation, then how do you understand it?
To you it might be obvious; to others not so, depending on the situation, experience and training - even assuming this can be trained at all.
When the aircraft trajectory has deviated, is deviating, or is even about to deviate from the prescribed and safe track. Whether you catch it as "deviated", "deviating" or "about to deviate" depends on the pilot's capacity to detect the problem as early as possible.
(For very minor deviations, the automation could be kept on. I heard of a case where the AP descended below the minimum altitude, but to 2450 ft instead of 2500 ft. In clear blue sky, one could argue it is more interesting to observe the aircraft for as long as it is doing something safe (once again: VMC), and so be able to describe the problem in more detail to the manufacturer, so that the system can be improved in the future.)
One could even say that if there is the mere possibility that the system might misbehave, the automation mode can be changed or the automation even disconnected, if it is deemed safer to manually control the aircraft than to risk the aircraft doing something unexpected.
Some well-known criteria are: descent below safety altitude, deviation of more than half a dot on the localiser or glideslope, half-scale deviation on the VOR, 0.5 nm on a DME arc, etc.
Other well-known criteria are speed below VLS (Airbus world) or alpha above alpha prot. There is no well-known figure for beta (sideslip); I think that is because the figure would be 0: since aircraft are yaw-stable, any deviation is caused by a large external force (engine failure, aerodynamic problem...) which requires immediate correction.
You could also put a limit at 33° of bank, and debate a figure for negative V/S. Some say "one minute before touchdown at all times".
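Taken together, the thresholds above amount to a checklist that can be sketched directly. The function below simply restates the post's criteria (half a dot on LOC/glide, speed below VLS, 33° of bank, safety altitude); the function name, argument names and units are illustrative assumptions:

```python
def should_intervene(loc_dots, gs_dots, speed_kt, vls_kt, bank_deg,
                     altitude_ft, safety_alt_ft):
    """Return the list of criteria from the post that are exceeded.

    Thresholds restate the post: below safety altitude, more than
    half a dot on LOC or glide, speed below VLS, bank beyond 33 deg.
    Names and units are illustrative assumptions.
    """
    reasons = []
    if altitude_ft < safety_alt_ft:
        reasons.append("below safety altitude")
    if abs(loc_dots) > 0.5:
        reasons.append("LOC deviation > half dot")
    if abs(gs_dots) > 0.5:
        reasons.append("glide deviation > half dot")
    if speed_kt < vls_kt:
        reasons.append("speed below VLS")
    if abs(bank_deg) > 33:
        reasons.append("bank beyond 33 deg")
    return reasons

# 0.7 dots off the localiser, everything else within limits:
print(should_intervene(0.7, 0.1, 138, 132, 5, 3000, 2500))
# -> ['LOC deviation > half dot']
```

An empty list means the automation can stay in; a non-empty one names exactly what to announce while taking over, which is the point of training such criteria as hard numbers.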
The process that is used and trained is to monitor the aircraft. Speed, pitch, power, bank, altitude, radionavigation, all these are monitored constantly. The whole point of training is to know what to expect, how to achieve it, in a light aircraft. In a large aircraft it is exactly the same except it is more often achieved by the flight guidance system on the pilot's orders rather than manual inputs.
Vessbot,
That’s a couple of very cogent posts, dense with some really good observations.
For me, having had a fairly long career in aviation but coming from an engineering/science background, one of the (many) issues that gets me is the user interface that a modern commercial transport presents to its operator. In a word: suboptimal. They are an unholy mix of a century’s worth of ideas and technology, keeping the bad as well as the good for some kind of continuity. IMO many EFIS+FMC presentations are actually worse for SA in some areas than an equivalent steam-driven setup: un-annunciated hidden modes, weird logic and a plethora of known but too-expensive-to-fix bugs in prehistoric firmware/hardware.
One very interesting aspect is flight at high altitude.
Yes, people are not supposed to be flying manually at high altitude. But many accidents have happened there. So pilots should still have an optimized interface.
Suppose your aircraft is on its way to cruise flight level 380. At flight level 372, the AP disconnects. What are you going to do? Suppose the best course of action at this time is first to stabilise at FL380.
Say you climb at 800 ft/min and the speed is about 450 kt. You just have to push the nose down slightly and stabilise the aircraft; the pitch change required is about 1 degree.
But with which instrument do you control this?
https://lh3.googleusercontent.com/pr...30Dzx6K4Hkrabg
From what I read, the centre square on the PFD is about 1.6° across, and each pixel is about 0.2 mm wide.
Between +10° and -10° there is 4 cm: that is 2 mm per degree, and the centre square is about 10x10 pixels. So the pitch change required to go from climb to cruise, or cruise to climb, is about half the size of the centre square - a millimetre or two. Now look at a ruler with millimetre markings and ask yourself whether it's easy to differentiate them.
It leads to something like this :
In climb, the top of the bottom segment of the center square is aligned with the top of the 2.5° line.
To go from climb to cruise, you have to align the bottom of the top segment with the top of the 2.5° line.
This is a variation of about 0.8°, a realistic value for a heavy day.
It is absurd to think a normal human being can easily control something so tiny.
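The arithmetic above is easy to verify using the post's own figures (4 cm between +10° and -10°, an assumed 0.2 mm pixel pitch, a 1.6° centre square). Note that with those inputs the square works out to about 16 pixels rather than the ~10 quoted, but the conclusion - a target movement on the order of a millimetre or two - is unchanged:

```python
# Reproduce the PFD pitch-ladder arithmetic from the post.
span_cm = 4.0        # distance between +10 deg and -10 deg on the display
span_deg = 20.0
mm_per_deg = span_cm * 10 / span_deg            # 2.0 mm per degree
pixel_mm = 0.2                                  # assumed pixel pitch
square_deg = 1.6                                # width of the centre square
square_px = square_deg * mm_per_deg / pixel_mm  # ~16 px (post says ~10)

pitch_change_deg = 0.8                          # climb -> cruise, heavy day
movement_mm = pitch_change_deg * mm_per_deg     # 1.6 mm of symbol movement
print(mm_per_deg, round(square_px), movement_mm)
```

Either way, the pilot is being asked to control a symbol movement of under 2 mm, which is the post's point.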
If Airbus really wanted to make manual flight at high altitude possible (for cases of automation failure), they should implement a button, usable only at high altitude, which would switch the aircraft into a "high-precision mode":
- The PFD would be magnified at least 4 times. There is no point in displaying +/-20° in cruise, as in the picture above.
- The flight controls would also have reduced sensitivity (not sure about this one though - just a possibility; in my opinion the sidestick is too small even for low-altitude flight, so...).
There is the same kind of problem on the thrust levers.
When you want to begin taxi and apply breakaway thrust, say 30%, there are a few centimetres of dead area. So the thrust lever travel, which was not very long to begin with, is reduced even further. On final approach, a variation of just 1% corresponds to almost 100 ft/min.
I did not measure the distance to be travelled to change the thrust by just one percent, but my guess is that it is about half a millimetre.
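That guess can be sanity-checked with some assumed geometry. The lever length, usable arc and N1 range below are all illustrative assumptions (no Airbus figures); under them the travel comes out at roughly 1.5 mm per percent - the same order of magnitude as the post's estimate:

```python
import math

# Rough estimate of thrust-lever travel per 1% N1.
# All three figures below are assumptions for illustration only.
lever_length_cm = 15.0   # assumed pivot-to-grip radius
usable_arc_deg = 40.0    # assumed usable arc once past the dead area
n1_range_pct = 70.0      # assumed N1 span covered by that arc

arc_length_mm = math.radians(usable_arc_deg) * lever_length_cm * 10
mm_per_pct = arc_length_mm / n1_range_pct
print(round(mm_per_pct, 2), "mm of lever travel per 1% N1")
```

Whatever the exact numbers, the result stays in the millimetre range, so a thrust change worth ~100 ft/min on final demands a hand movement smaller than most people can reliably make.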
I don't get it. The aircraft is huge, there is plenty of space in the cockpit, but you have to move the thrust levers and the PFD target by distances similar to that of an ant's arse!
It seems that I completely agree with you overall.
The idea of giving airline pilots a few hours every now and then in a light aircraft, to get the old scan and handling ability up again, often comes around and is generally pooh-poohed as being impractical and expensive.
At one point during the COVID crisis I wasn't flying much on the Airbus. To get myself going again, I booked a flight in a small aircraft and did 15 short circuits followed by landings, all full flap and idle power. This gives a descent rate close to what you get on the A320, so it was good practise; my landings were definitely better than without it.
Last edited by KayPam; 30th Nov 2020 at 12:17.
I’m not too sure the scans are the same: with the AP in, the aircraft is being controlled through the MCP/FMC, which necessarily takes attention away from instrument monitoring. Most SOPs state that when the PF is hand-flying, the PM looks after all the selections so the PF can concentrate on the instruments?
Vessbot, do find time to read the previous links in https://www.pprune.org/tech-log/636976-why-automation-dependency-encouraged-modern-aviation.html#post10937097
If they are too scientific then see the following :-
https://www.dropbox.com/s/c5otpsl20a...g%20I.pdf?dl=0
https://www.dropbox.com/s/7f9h8qh8yh...%20II.pdf?dl=0
https://www.dropbox.com/s/sk5agfbjbu...20III.pdf?dl=0
P.S. re 777 example - you blame the crew and create a workaround for a weak aircraft system (double button press), but might the event be something to do with the aircraft:- https://www.pprune.org/rumours-news/636908-southern-air-777-stall-recovery-after-takeoff-nov-15th.html#post10932069
Again no! I have said it before: with AP on or off, the pilot's scan remains the same; the only difference is that with AP on the AP makes the changes, and in manual flight the pilot makes them. With or without AP, it's the loss of scan that causes the problem. Mode or no mode, right or wrong mode, you simply cannot make an approach without a periodic glance at the speed. In the cases we are discussing, the crew never even expressed surprise or uttered a word about the falling speed, because they never looked at the speed - and worse, not even at the mode. They assumed a mode that wasn't there; they assumed a speed that wasn't being maintained. In both cases, even the go-around was not because they noticed the speed or the ROD, but because they were not reaching the runway; otherwise they would have executed a 3 or 4 g landing. In the case of Indian Airlines they lost their lives because they were nine seconds too late - the engines were spooling up. Somebody needs to maintain the parameters: either the AP does it, the mode does it, or the pilot does it. Nobody is not acceptable.
Regarding “interpretation,” here’s what I mean. You say that you have to keep up a good scan regardless of AP on vs. off, and that “you simply cannot make an approach without a periodic glance at the airspeed.” Taken in the factual sense, that is of course laughably wrong. People can and do exactly that all the time as a baseline default, and accidents like these two reveal the tip of the iceberg. But of course you didn’t mean it in the factual sense, but rather in the prescriptive sense - what should be done.
However, as it relates to the cause of these accidents, it is the factual sense, not the prescriptive one, that is relevant: what these pilots do, not what they should do. And what they actually did was clearly to rely on the autothrottles to maintain the airspeed without their brains being in the loop. Had the modes been what the pilots thought they were (as you say, “they assumed a mode that wasn’t there”), the autothrottles would have done exactly that.
So where do we disagree? What part of my posts, if you can be more specific, do you take issue with?
If you know what's wrong, then you're not confused; you're the opposite: you accurately know the situation. Confusion is when you think the automation setup will result in something different from what it's actually resulting in.
I said no to this. In the two cases, had they seen the speed falling they could still have been confused as to why it was falling, if they didn't check the mode/FMA. But the falling speed would have drawn their attention to the mode or FMA, which they would have corrected. Alternatively, had they checked the mode first, as they must in Airbus, they would have corrected it then and there, knowing that it (OP DES) won't maintain the speed. In Airbus you are not to assume anything is happening unless you confirm it on the FMA. If the FMA were correct and yet the speed started falling, that could be confusing. But that wasn't the case here. They committed the first error of not getting into the correct mode and checking it on the FMA, then committed the second, confirmatory error, as it were, of not monitoring the parameter - the speed - causing the disaster.
If they saw the speed falling they could still be confused as to why - yes, and I said this (“knowing something is wrong but not understanding exactly what or why”). The falling speed should have drawn their attention to the FMA - also yes (but among other things, like drawing their hand to the thrust levers). Had they fixed the mode, after checking it as required, the AT would have been active - of course.
The next sentence may be a disagreement? Not sure. “If FMA was correct and yet speed starts falling that can be confusing. But that wasn't the case here.” Yes, of course that would have been confusing (and not only confusing, but an actual malfunction). And yes, that wasn’t the case here - and I hope you’re not hinging an argument on their not being confused by this event that didn’t happen, since there are myriad other possible confusion scenarios, one of them being the actual case.
And yeah, the first error was getting into the wrong mode (compounded by not checking the FMA), and the second, not monitoring the airspeed. I’ll add a third: not monitoring the picture outside the window. How many thousands of hours does it take to learn what an approximate 3-degree descent over flat terrain looks like? Too many thousands, apparently, when you never use it as a source of SA because you've put all your chips in the automation. And a fourth (Check Airman’s third), IMO the most significant of all: not flying the airplane when required. Seventeen seconds from “You are descending on idle open descend ha, all this time” (at 400 feet) until TOGA, in the meanwhile playing with modes and questioning each other on what switches had been flipped.
Last edited by Vessbot; 30th Nov 2020 at 21:30.
I’ll add a third, not monitoring the picture outside the window. How many thousands of hours does it take to learn what an approximate 3 degree descent over flat terrain looks like?
Last edited by vilas; 1st Dec 2020 at 04:03.
Check airman
NTSB Chairman on Asiana SFO
I'll admit that I've flown approaches where I didn't know what my speed or N1 was, because the AT was on.
Hersman has repeatedly emphasized it is the pilot’s responsibility to monitor and maintain correct approach speed
Last edited by vilas; 1st Dec 2020 at 06:42.
Coming back to the original question "Why is automation dependency encouraged in modern aviation?" Because:
1. Aircraft behave (mostly) as designed, in the US, China or anywhere else in the world.
2. Humans operate the machines differently in different countries, cultures and ethnicities.
3. No matter how well a pilot is trained, or how experienced, he does not become immune to all the ills human flesh is heir to (human factors).
4. Automation can perform repeatedly to a given standard. There is no variation due to skill. If it doesn't, just replace it.
5. A few failures of automation are not enough evidence against automation, just as one Sully or Al Haynes doesn't make a summer. Any number of fully serviceable aircraft have crashed through inadequate piloting.
6. Accidents will happen in manual or automated flight, but the frequency and economics of the accidents will be the deciding factor.
The industry is moving towards automation. AF447 triggered some training, but eventually the alternate/backup speed in the A350 and synthetic speed in the B787. The final solution always ends up in automation.
Actually the question should have been how to acquire/retain manual skill. That's what all the discussion is about.
Regarding the last bit, on how to acquire/retain manual skill: I believe we all know the answer is training, and more specifically SIM training. But SIM training is expensive and, aside from the mandatory annual sessions, unproductive from an entrepreneurial point of view. How do we get the message across that some extra, non-jeopardy yearly sim sessions focused on FPM are beneficial to everybody in the long term? I believe this is another question we should be able to answer as a pilot/trainer community.
KayPam, from what you say about control movements: don't worry, you will get used to it.
The control movements can seem weirdly small until you get used to them. For taxiing, take the brakes off and you will normally start to move. If not, advance the thrust levers, say, a quarter of the available arc, and as soon as you see the start of movement, put them back to idle. Now you will taxi.
If you ever need to make fine adjustments to the thrust levers, "walk" them against each other: twist your hand left and right to move one lever at a time by a small amount.
Reducing the pitch when the AP drops out 800 ft below the level: gentle pressure on a conventional control column - not a push or even a movement, just gentle pressure.
On Airbus FBW, on my initial type rating, NOBODY could tell me how to use the side-stick. Even though I asked my TRE and several others, nobody could tell me. I eventually taught myself after seeing a film of a pilot operating the joystick in a Tornado (military fast jet):
On Airbus FBW, the attitude will stay where it was until a further input is received. So in your level-off scenario, push the side-stick forwards against the centring spring - one very brief forward push of about 20% of full travel - and immediately let the side-stick return to centre. Just a small nudge or jab against the spring, lasting half a second, then centre. The action is nudge-release, in half the time it takes to say that phrase. This will lower the nose a small amount, and the FBW will hold the aircraft at the new attitude. If it was not enough, nudge-release again. The same applies in roll. That is how to make very small, fine adjustments to the attitude of an Airbus FBW.
Re instruments, I agree: indications are too small for a given deviation, or are badly designed, and this is why raw-data flying is such a challenge. I remember finding NDB tracking in a PA28 very difficult, because the NDB needle was on one dial and the heading bug was on another, and there was no bug for the NDB track - you had to remember what it was. One had to continually compare the needle with the heading instrument, and parallax and misreading could occur. Also, you were flying an aircraft that never stayed where you put it, so you were busy hand flying and continually correcting the aircraft while tracking a non-bugged NDB needle - and it was quite common for the NDB to drift more than 5 degrees out. When I later flew the Dash 8, you could overlay the NDB needle, the NDB track bug and the compass rose all on one instrument, and suddenly NDB tracking was a piece of piss! Instead of having to remember the NDB track, look at a different instrument to read the heading and then go back to the NDB instrument, all you had to do was glance at the one composite overlay. You did not even have to read any values: you could see at a glance whether the NDB needle was under the track bug, and if it was just one degree out to one side, you clicked the AP heading bug towards it by one degree. Really easy.
I have always found LOC and G/S displays too insensitive: by the time you can see a deviation, it is quite a large error. With my engineer's hat on, I would redesign the display so the markers were in two halves. One half would move as the markers currently do; the other half would move over the whole range of the display for, say, 1/4 of a degree LOC or 50 ft G/S - much more sensitive, giving a large movement for a small deviation - so you would be able to see a deviation before it became too big. A lot of the time the sensitive marker would be pegged at one extreme or the other, but once you had captured the LOC and G/S it would come off the stops, and a perfect ILS would see all the bugs centred.
(I should qualify that I am referring to the ILS markers on the PFD, not the navigation ILS beam bar display)
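The coarse-plus-fine idea above can be sketched as a mapping from one deviation to two marker positions, each normalised to -1..+1 of display travel. The full-scale values below are illustrative assumptions; the fine marker pegs at the stops outside its narrow range, exactly as described:

```python
def display_positions(deviation_deg, coarse_full_scale=2.5,
                      fine_full_scale=0.25):
    """Map one LOC deviation (degrees) onto two markers.

    coarse: conventional scaling (assumed 2.5 deg full scale).
    fine: full display travel over 1/4 degree, pegged beyond that.
    Both full-scale values are illustrative assumptions.
    """
    def clamp(x):
        return max(-1.0, min(1.0, x))
    return (clamp(deviation_deg / coarse_full_scale),
            clamp(deviation_deg / fine_full_scale))

# A 0.1 deg error barely moves the coarse marker (4% of travel)
# but gives a clear 40% deflection on the fine one.
print(display_positions(0.1))
```

The design choice is the same one behind coarse/fine altimeter pointers: one scale for situational range, one for early detection of small errors.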
The control movements can seem weirdly small until you get used to it. For taxiing, take the brakes off and you will normally start to move. If not you advance the thrust levers say a quarter of the available arc and as soon as you see the start of movement, you put them back to idle. Now you will taxi.
If you ever need to make fine adjustments to thrust levers, "walk" them against each other; twist your hand left and right to move each lever at a time by a small amount.
Reducing the pitch if the AP drops out 800' below the level. Gentle pressure on a conventional control column, not a push or even a movement, just gentle pressure.
On Airbus FBW on my initial type rating NOBODY could tell me how to use the side-stick. Even though I asked my TRE and several others, neither they or anybody could tell me. I eventually taught myself after seeing a film of a pilot operating the joystick in a Tornado (military fast jet):
On Airbus FBW, the attitude will stay where it was until a further input is received. So in your level-off scenario push the side-stick forwards against the centring spring one very brief forward push of about 20% of the full travel and immediately let the side-stick return to central. So just a small nudge or jab or bump against the spring, lasting half a second and then centre. So the action is nudge-release in half the time it takes to say that phrase. This will lower the nose a small amount and the FBW will hold the aircraft at this new attitude. If it was not enough, nudge-release again. Same applies in roll. That is how to make very small fine adjustments to the attitude of an Airbus FBW.
Re instruments, I agree, indications are too small for a given deviation or are badly designed, and this is why raw data flying is such a challenge. I remember finding NDB tracking in a PA28 to be very difficult because the NDB needle was on one dial and the heading bug was on another, and there was no bug for the NDB track - you had to remember what it was. One had to continually compare the needle with the heading instrument, and parallax and misreading could occur. Also, you were flying an aircraft that never stayed where you put it, so you were busy hand flying and continually correcting the aircraft and tracking a non bugged NDB needle, it was quite common for the NDB to drift more than 5 degrees out. When I later flew the Dash 8, you could overlay the NDB needle, the NDB track bug and the compass rose all on one instrument and suddenly NDB tracking was a piece of piss ! Instead of having to remember the NDB track and look at a different instrument to read what the heading was and then go back to the NDB instrument, all you had to do was glance at the one composite overlay. You did not even have to read any values, you could see at a glance if the NDB needle was under the track bug, and if it was just one degree out one side, you clicked the AP heading bug towards it by one degree. Really easy.
I have always found LOC and G/S displays to be too limited: by the time you can see a deviation, it is already quite a large error. With my engineer's hat on, I would redesign the display so the markers were in two halves. One half would move as the markers currently do; the other half would move over the whole range of the display for, say, 1/4 of a degree LOC or 50 ft G/S: much more sensitive, with a large movement for a small deviation, so you could see a deviation before it became too big. A lot of the time the sensitive marker would be pegged on one extreme or the other, but once you had captured the LOC and G/S it would come off the stops, and a perfect ILS would see all the bugs centred.
(I should qualify that I am referring to the ILS markers on the PFD, not the navigation ILS beam bar display)
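The two-halves marker idea can be sketched numerically. Here is a minimal illustration in Python, assuming a normalised marker travel of ±1 and the full-scale values suggested above (2.5° for the coarse half, 0.25° for the sensitive half); this is purely hypothetical display logic, not anything certified:

```python
def ils_marker_positions(dev_deg, coarse_full_scale=2.5, fine_full_scale=0.25):
    """Map a LOC angular deviation (degrees) to two normalised marker
    positions in [-1, +1]: a coarse marker behaving like today's display,
    and a fine marker that uses its whole travel for a small deviation.
    Scale values are illustrative assumptions, not real avionics figures."""
    def clamp(x):
        return max(-1.0, min(1.0, x))
    coarse = clamp(dev_deg / coarse_full_scale)
    fine = clamp(dev_deg / fine_full_scale)   # pegged beyond 0.25 deg
    return coarse, fine

# On a perfect ILS both markers are centred; at 0.1 deg the fine marker
# already shows a large, easily seen displacement while the coarse one
# has barely moved.
print(ils_marker_positions(0.0))   # (0.0, 0.0)
```

The fine marker spends most of the approach pegged on a stop, exactly as described above, and only comes alive near beam centre, where the extra sensitivity is useful.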
Last edited by Uplinker; 1st Dec 2020 at 10:55.
Aircraft descending below minimum descent altitude, initiating the first SID turn on the wrong side, unable to switch to LAND then FLARE mode, etc...
Airbus could very well have designed a cockpit full of automation that still left the possibility to fly raw data.
As it is, that is impossible, so the slow erosion of manual skills is almost unavoidable: even if we were all well aware of the problem and willing to practise raw data flying, in most cases (RNAV) we simply can't, due to a design choice.
Plus, a sim remains a sim. In so far as it represents the aircraft, you still can't fly raw data RNAVs in it; in so far as it does not match the aircraft perfectly, the training is not as realistic.
Flying raw data manual departures and approaches on a regular basis on the line, to me, is the only option to maintain a high standard of manual skill.
Two other examples:
When flying a DME arc with an old DME like this one:
https://s7d2.scene7.com/is/image/hon...detail-470x290
You have the DME distance, but also the very precise and precious indication of DME speed (the rate of change of distance)!
So to fly a DME arc during flight school, on an aircraft equipped with DME, I used TAS/200 as the lead distance to initiate the turn onto the arc, and then I did not worry about the wind, the angles, the calculations, any of that: I just kept the DME speed around zero.
Let's say that after the initial interception I was 0.1 nm away. I would set 10 kt of convergence until the correct DME distance was displayed, at which point I would set the DME speed back to zero by diverging a little.
Then the DME speed would increase slowly, and when it reached a limit I had chosen, say 5 kt, I would converge until I read 5 kt on the other side. At some point, after several corrections like this, the DME distance, which had been stuck on the correct value, might deviate by 0.1 nm. I would correct this and continue.
This allowed me to fly almost perfect DME arcs, always remaining within +/-0.1 nm; I would have stayed within 0.05 nm if the distance had been displayed to that precision.
Even the best of the other students, meanwhile, struggled to remain within +/-0.3 nm.
I could see very clearly when they were about to deviate, thanks to the DME speed: a DME speed deviation of about 30 kt outwards would appear. At 0.1 nm of deviation they would reduce the DME speed to 20 kt, but by the time they reached 0.2 nm their DME speed was 25 kt outwards. They would apply a larger correction and now have 10 kt outwards, still increasing, reaching 0.3 nm at 15 kt outwards. They would apply the same large correction and be at 0 kt, but trending outwards again. They could reach 0.4 nm at 5 kt outwards, and only then, with yet another correction, finally converge! By that point the risk of overcorrecting was huge.
When I shared the technique with them, they aced their DME arcs, problems like the one described above never happened again, and the instructors were amazed.
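The DME-speed rule of thumb described above (roughly 10 kt of convergence per 0.1 nm of range error, re-zeroing the rate once back on distance) amounts to a tiny proportional rule. A hypothetical sketch in Python, where the function name, gain and cap are illustrative values taken from the description, not any real avionics logic:

```python
def target_dme_speed_kt(dme_distance_nm, arc_distance_nm, cap_kt=10.0):
    """Return the DME speed (range rate, kt) to aim for while flying an arc:
    zero when on distance, and roughly 10 kt of convergence per 0.1 nm of
    error when displaced, capped. Negative = distance decreasing
    (converging towards the station)."""
    error_nm = dme_distance_nm - arc_distance_nm   # positive = outside the arc
    cmd_kt = -100.0 * error_nm                     # 10 kt per 0.1 nm of error
    return max(-cap_kt, min(cap_kt, cmd_kt))

print(target_dme_speed_kt(10.25, 10.0))   # -10.0 : converge at 10 kt
print(target_dme_speed_kt(10.0, 10.0))    #  0.0  : hold the arc
```

The pilot then simply banks towards or away from the station until the indicated DME speed matches the target, which is why the arcs stayed within a tenth of a mile.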
Then we changed airplanes, and I could no longer use this technique, because DME speed was no longer displayed!
So I had to find another technique: add 90° to the bearing to the station, and that is the desired track.
This also works very well; I flew an almost complete DME circle in a glider this way, around my home gliding airfield, with just my GPS watch.
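Both rules of thumb mentioned here, the TAS/200 lead distance and the bearing ± 90° desired track, are easy to sketch. A small illustration in Python; the function names are made up, and the sign of the 90° offset depends on the direction of the arc:

```python
def arc_lead_distance_nm(tas_kt):
    """Flight-school rule of thumb: start the turn onto the arc TAS/200 nm
    before reaching the arc distance (about 0.5% of your speed)."""
    return tas_kt / 200.0

def arc_track_deg(bearing_to_station_deg, clockwise=True):
    """On a DME arc the desired track is the bearing to the station offset
    by 90 degrees: minus 90 with the station on the right (clockwise arc
    as seen from above), plus 90 with it on the left."""
    offset = -90.0 if clockwise else 90.0
    return (bearing_to_station_deg + offset) % 360.0

print(arc_lead_distance_nm(120))        # 0.6 nm lead at 120 kt TAS
print(arc_track_deg(180.0))             # 90.0 : station due south, track east
```

Recomputing the track as the bearing changes keeps you on the arc without ever needing the range rate, which is presumably how it worked with just a GPS watch.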
A computer can analyse in real time the deviation, the derivative of the deviation, and even that derivative's derivative!
A human cannot extract a derivative from a needle, at least not as well as a computer can.
Regarding the ILS, I practise them raw data on the Airbus.
The difficulty is that the green diamond should give an indication of the LOC deviation's derivative: right of course, deviating to the right, and so on.
But since there are a few degrees of imprecision on the diamond, especially after a long flight, this does not work. The computer can compute the LOC's derivative, but I have no indication of it, so a deviation trend must develop before I can detect it, and even then I still don't know the heading at which the derivative is zero.
Instead of your double indicator, I would rather think of a very sensitive "LOC trend" or "glide trend", displayed beside the diamond, as is done with the speed trend.
Then you would just fly the LOC trend to zero, and if there was a deviation, you could see much faster whether it was growing or shrinking, which would also help the PM's job.
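A "LOC trend" cue like this could be little more than a smoothed derivative of the deviation, analogous to the speed trend arrow. A rough sketch in Python, assuming deviation samples at a fixed interval and an arbitrary smoothing constant (both are assumptions for illustration, not any real display specification):

```python
def loc_trend(samples, dt_s=1.0, smoothing=0.3):
    """Sketch of a 'LOC trend' cue: an exponentially smoothed rate of
    change of localiser deviation (dots per second), computed from a
    list of successive deviation readings taken dt_s seconds apart."""
    trend = 0.0
    for prev, cur in zip(samples, samples[1:]):
        raw_rate = (cur - prev) / dt_s
        trend += smoothing * (raw_rate - trend)   # simple low-pass filter
    return trend

# A steady offset gives a zero trend (no correction building up), while a
# slowly growing deviation produces a clearly positive trend long before
# the needle displacement itself looks alarming.
```

Flying the trend to zero then corresponds exactly to finding the heading at which the derivative is zero, which is the missing cue complained about above.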
Yes, a trend arrow might work really well. I like that idea.
Another thought is that the V/S indication has a logarithmic presentation (or expanded centre scale), but the ILS is linear.
If we made the ILS markers logarithmic/expanded centre scale, it would be easier to notice deviation trends. So half the marker travel from centre could be, say 0.25° LOC deviation. The next 1/4 of scale would go up to say 1° LOC dev, the last quarter up to 2.5° dev.
But I think your idea is better.
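The expanded-centre idea maps neatly to a piecewise scale. An illustrative Python version using the breakpoints suggested above (half the travel for 0.25°, the next quarter up to 1°, the last quarter up to 2.5°); the function and its breakpoints are hypothetical, not any existing instrument:

```python
def expanded_centre_position(dev_deg):
    """Piecewise 'expanded centre' LOC scale: half of marker travel covers
    0-0.25 deg, the next quarter covers 0.25-1 deg, the last quarter covers
    1-2.5 deg. Returns a normalised position in [-1, +1]."""
    sign = 1.0 if dev_deg >= 0 else -1.0
    d = abs(dev_deg)
    if d <= 0.25:
        pos = 0.5 * (d / 0.25)
    elif d <= 1.0:
        pos = 0.5 + 0.25 * (d - 0.25) / 0.75
    else:
        pos = 0.75 + 0.25 * min(d - 1.0, 1.5) / 1.5   # clamp beyond 2.5 deg
    return sign * pos

# A 0.1 deg deviation moves the marker five times further than the
# current linear scale would, so small trends become visible earlier.
```

Compared with the two-marker proposal, this keeps a single marker but sacrifices linearity away from the centre, which is the same trade-off the V/S scale already makes.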
If you ever need to make fine adjustments to the thrust levers, "walk" them against each other: twist your hand left and right to move one lever at a time by a small amount.