View Full Version : End of Aircraft Operation


twistedenginestarter
30th Jun 2020, 18:24
The future is Mission Management

https://www.flightglobal.com/systems-and-interiors/airbus-completes-two-year-autonomous-systems-trials/139044.article

capngrog
30th Jun 2020, 19:16
It seems that Airbus intends to enhance aircraft automation, which: " ... could help flightcrews focus less on the aircraft operation and more on mission management." Other than monitoring aircraft systems, routing, turbulence avoidance, etc., what is "mission management"? I also find it interesting that Airbus uses the term "flight crew" rather than "pilots". We have seen the future, and it may not include "pilots" as we know them now. So goes Airbus, so goes the World.

Cheers,
Grog

armchairpilot94116
30th Jun 2020, 21:36
On the first 737 max crash the crew certainly had no idea what was happening thanks to Boeing. The second crash they may have had an idea but didn't know exactly what to do or were unable to do the right thing in the very narrow window of opportunity they had, again thanks to Boeing.

twistedenginestarter
1st Jul 2020, 09:02
On the first 737 max crash the crew certainly had no idea what was happening thanks to Boeing.

The whole point is they didn't need to know what was happening. If the computers were flying, none of this would have happened. Automation is so complicated nowadays, it is difficult for the pilots to know how the plane works and what it is currently doing. So why try and make them gain that knowledge? What's the point any longer?

Hundreds of people died because pilots were controlling the planes. Much worse than that, Boeing lost $Billions. There was nothing wrong with the Max. It was just that the pilots did not do what the plane was designed to have done.

Boeing did make some cryptic statement about re-thinking the computer-pilot interface. The obvious way is to apply currently available technology to remove the need for the pilot for second-to-second control.

You can currently buy a number of small planes with Garmin Autoland. It chooses which airport to go to, does the R/T, makes the approach and lands.

Boeing sees the problem (as no doubt Airbus does) as 'third world' customers who have pilots who are less likely to fly the planes as tested. The most economical solution to that challenge is to make the plane be flown by Boeing, i.e. tried and tested computer programs.

It has got to happen. The Max pressed the eject button.

jmmoric
1st Jul 2020, 10:00
The whole point.....

Are you saying that if the pilots weren't there, those two aircraft wouldn't have crashed?

The thread wasn't about the Max though, there's already one on that.

Flight crew, pilots, airmen or whatever you call them, it doesn't really matter... it's just something people have always done, changing the names of positions to make them sound more "fancy"....

And yes, I believe we're closing in on the date when both pilots and controllers are made redundant. They are certainly investing A LOT of money in "developing" everyone into unemployment. Whereas technically we could save A LOT of money by reducing the development teams instead.

Pugilistic Animus
1st Jul 2020, 10:14
Every time I hear nonsense about automation taking over the role of the "Flite Cru", I think back to that 737 on which one gear wouldn't deploy. After two go-arounds to try to bump the gear down were unsuccessful, he realized that he couldn't do it, so he pulled the antiskid, blew the tire and landed safely... No computer could do that; there's no "Landing gear won't deploy due to a misplaced chock" item in the QRH!

Plus a computer can't ever do a Hudson River dead-stick landing. Though I admit that they can be trained to do an Al Haynes scenario... the MD-11 has demonstrated a total-hydraulic-loss landing using something called PCA ("Propulsion Controlled Aircraft"). But hydraulic check valves make the Al Haynes situation highly improbable today.

safetypee
1st Jul 2020, 10:16
The type of research conducted by Airbus is likely to gather information required for single pilot operations.

Extended automatic functions would reduce workload to enable single pilot operation, but require a new automatic, autonomous capability to provide recovery in the event of incapacitation.
The current standard of automation, software, and implementation could achieve such an operation, but certification often depends on pilot involvement in extreme situations.

A difficulty for single pilot ops is being able to prove a sufficiently high level of system safety for commercial operation; both with a complicated system (today's systems), and also with a self-adapting 'complex' software system which might be required in an autonomous (pilot-less) role.

Quoting the 737 is meaningless in this context, being at least two generations out of date.

Pugilistic Animus
1st Jul 2020, 10:32
Safetypee, such a mishap can occur with any airplane with retractable gear. What happened was that one of the ground support people left chocks in the wheel well, causing failure of the gear to deploy on one side.

Less Hair
1st Jul 2020, 10:37
Pilots must be able to fly even after some lightning strike crippled the electronics compartment or after heavy mechanical damage or in unforeseeable situations. Something beyond drone management is needed for passenger transport. Just look at drone accident rates.

twistedenginestarter
1st Jul 2020, 12:55
Pilots must be able to fly even after some lightning strike crippled the electronics compartment or after heavy mechanical damage or in unforeseeable situations.

Possibly so, but not otherwise. The normal operating procedure should be that the computers receive information directly (not through R/T-driven pilot input) and manage the flight profile according to predictable rules and procedures.

The current framework of pilots operating computers which in turn try to warn them of various dangers or mistakes has performed extraordinarily well since I first started flying over half a century ago. However I suspect the tide is finally turning.

vilas
1st Jul 2020, 15:52
Safetypee, such a mishap can occur with any airplane with retractable gear. What happened was that one of the ground support people left chocks in the wheel well, causing failure of the gear to deploy on one side.
When you pit this one against the dozens that have crashed in fully serviceable condition, it is not that convincing. Take PK8303 (an Airbus) or the Indian IX812 (a Boeing): without pilots, those guys would be alive. One Sully does not make a summer.

tdracer
1st Jul 2020, 18:18
When you pit this one against the dozens that have crashed in fully serviceable condition, it is not that convincing. Take PK8303 (an Airbus) or the Indian IX812 (a Boeing): without pilots, those guys would be alive. One Sully does not make a summer.
Fully autonomous passenger aircraft will happen - it's only a matter of time. The only real question is how much time - years, decades? Computing power continues to expand exponentially, the human capabilities not so much.
My personal view is we're still decades away (single pilot - really only there to take action if the automation goes crazy - will happen first). Examples of the humans saving the day with the current aircraft are meaningless - current aircraft are not designed to be autonomous, they are designed to reduce crew workload but to hand the aircraft back to the human(s) if things go south. Commercial aircraft have become incredibly safe and rarely crash - but the percentage of those crashes that can be attributed to either suicidal pilots (e.g. Germanwings) or incomprehensibly bad piloting keeps going up (latest example being PIA 8303).
Fully autonomous cars have proved to be more difficult than expected, but progress is continuing and eventually they will become mainstream - and I foresee a future where human-controlled autos will be banned from most roads since they are 'unsafe' (I probably won't live to see it, but I expect it will happen). There will come a time when people will start to wonder why they can ride safely to the airport in a fully autonomous taxi - so why do we still let human pilots try to land a perfectly serviceable aircraft at 210 knots with the gear up...

Pugilistic Animus
1st Jul 2020, 19:14
Gentlemen, the point that I am trying to make is there will always be something unforeseen in engineering airplanes. A more modern example would be that 777 that crashed at LHR due to ice in the fuel/oil heat exchanger. Would an automated plane retract the flaps to avoid the highway? I think pilots will be needed, at least for the next 4 or 5 decades.

jcbmack
1st Jul 2020, 21:14
Gentlemen, the point that I am trying to make is there will always be something unforeseen in engineering airplanes. A more modern example would be that 777 that crashed at LHR due to ice in the fuel/oil heat exchanger. Would an automated plane retract the flaps to avoid the highway? I think pilots will be needed, at least for the next 4 or 5 decades.

While I am not a pilot or an aviation engineer, I research AI/ML and various autonomous systems. It is highly improbable that in years or even decades we will see fully autonomous commercial aircraft. The issues we currently have in land vehicles are legion, and the problems of using deep learning and neural networks for split-second decision making, with limited learning rates and gradient-descent-based generalization, are far more significant than those outside of AI/ML realize. Some aviation engineers I consult with from time to time concur with PA's general assessment.
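The learning-rate fragility mentioned above can be made concrete with a minimal sketch (illustrative only, nothing to do with any avionics code): the same plain gradient-descent optimizer that converges with one learning rate diverges outright with a slightly larger one, which is one reason tuning such systems for safety-critical use is harder than it looks.

```python
# Minimal sketch: sensitivity of gradient descent to the learning rate.
# We minimise f(w) = (w - 3)^2, whose gradient is 2*(w - 3).

def gradient_descent(lr, steps=50, w=0.0, target=3.0):
    """Run plain gradient descent and return the final parameter."""
    for _ in range(steps):
        grad = 2.0 * (w - target)   # derivative of (w - target)^2
        w = w - lr * grad           # standard update rule
    return w

# A modest learning rate converges toward the minimum at w = 3...
good = gradient_descent(lr=0.1)

# ...while an overly large one makes every step overshoot and diverge.
bad = gradient_descent(lr=1.1)

print(good, bad)
```

Nothing in the sketch is specific to aviation; it only shows that a one-line hyperparameter change flips the same algorithm from stable to unstable.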

jcbmack
1st Jul 2020, 21:17
Fully autonomous passenger aircraft will happen - it's only a matter of time. The only real question is how much time - years, decades? Computing power continues to expand exponentially, the human capabilities not so much.
My personal view is we're still decades away (single pilot - really only there to take action if the automation goes crazy - will happen first). Examples of the humans saving the day with the current aircraft are meaningless - current aircraft are not designed to be autonomous, they are designed to reduce crew workload but to hand the aircraft back to the human(s) if things go south. Commercial aircraft have become incredibly safe and rarely crash - but the percentage of those crashes that can be attributed to either suicidal pilots (e.g. Germanwings) or incomprehensibly bad piloting keeps going up (latest example being PIA 8303).
Fully autonomous cars have proved to be more difficult than expected, but progress is continuing and eventually they will become mainstream - and I foresee a future where human-controlled autos will be banned from most roads since they are 'unsafe' (I probably won't live to see it, but I expect it will happen). There will come a time when people will start to wonder why they can ride safely to the airport in a fully autonomous taxi - so why do we still let human pilots try to land a perfectly serviceable aircraft at 210 knots with the gear up...

You make several salient points. The fields of computer vision and semi-supervised learning give us significant useful information, but applying cost functions to, and generalizing over, the variability of real-world events has proved more difficult than we expected.

twistedenginestarter
1st Jul 2020, 22:09
When I started flying, a Trident could do a Cat 3C autoland. That was over 50 years ago. Self driving cars are here now. True, they can't drive on ordinary roads. That may take a decade or more. But flying is not like driving a car. It takes place in a highly organised, structured and protected context.

I am not suggesting there will be pilotless airliners in my lifetime. That would be far too ambitious. What Boeing/Airbus want is to reduce costs and avoid Max-type complications. Going too far would simply present new alternative deadly threats to their revenue streams.

Single pilot is missing the point. That still implies a pilot operating the controls. It still means the pilot has to know what to do in every situation. It still means pilots making mistakes or doing bad things.

No, I think the logic is to enable the plane to carry out all phases of every flight. That in turn puts pressure on everyone to ensure all the pieces are in place (eg The Bergerac Ryanair 737 being set up to perform a GPS approach).

The remaining pilot (or pilots) would still have an important and skilled job in managing the mission (including handling emergencies) although it would quite likely be less enjoyable.

By the way Sully did not handle the Hudson River ditching perfectly. As it happens his mis-settings were not fatal, but they could have been in different circumstances. I think the essence was he didn't know what the correct configuration was for a Hudson River landing because it is something no one would ever train you for. Airbus of course did know, so a computer operated landing would have been just that tad safer.

jcbmack
2nd Jul 2020, 03:09
When I started flying, a Trident could do a Cat 3C autoland. That was over 50 years ago. Self driving cars are here now. True, they can't drive on ordinary roads. That may take a decade or more. But flying is not like driving a car. It takes place in a highly organised, structured and protected context.

I am not suggesting there will be pilotless airliners in my lifetime. That would be far too ambitious. What Boeing/Airbus want is to reduce costs and avoid Max-type complications. Going too far would simply present new alternative deadly threats to their revenue streams.

Single pilot is missing the point. That still implies a pilot operating the controls. It still means the pilot has to know what to do in every situation. It still means pilots making mistakes or doing bad things.

No, I think the logic is to enable the plane to carry out all phases of every flight. That in turn puts pressure on everyone to ensure all the pieces are in place (eg The Bergerac Ryanair 737 being set up to perform a GPS approach).

The remaining pilot (or pilots) would still have an important and skilled job in managing the mission (including handling emergencies) although it would quite likely be less enjoyable.

By the way Sully did not handle the Hudson River ditching perfectly. As it happens his mis-settings were not fatal, but they could have been in different circumstances. I think the essence was he didn't know what the correct configuration was for a Hudson River landing because it is something no one would ever train you for. Airbus of course did know, so a computer operated landing would have been just that tad safer.

While again not a pilot myself, I cannot see how Airbus's computer-operated landing would have been safer--this event was unprecedented at the time. In some cases, Airbus or Boeing systems do have settings or semi-autonomous applications that potentially are safer within specific contexts, but it is not clear that would have been superior in Sully's case. If you have specific experience as an Airbus pilot and/or references we can look at--that would be very helpful. Thanks.

Manwell
2nd Jul 2020, 05:57
The whole point is they didn't need to know what was happening. If the computers were flying, none of this would have happened. Automation is so complicated nowadays, it is difficult for the pilots to know how the plane works and what it is currently doing. So why try and make them gain that knowledge? What's the point any longer?


The short answer is the meaning of life, twisted. What would be the purpose of life without the opportunity to transcend our human failings and perform far better than any machine?

Cornish Jack
2nd Jul 2020, 09:12
Some of the respondents above seem to view 'pilots' as a uniform entity with standard skill sets and operating ability. Sadly, it's not so. The number of Capt Sullenbergers and Eric Moodys is (from personal experience) limited. I had the pleasure of working with Eric Moody when he converted to the 400, and his personal skills matched, if not exceeded, his flying. Such people are rarities, and to base future operating criteria on the premise that they will exist in the right place at the right time is (forgive the pun) 'pie in the sky'!

esscee
2nd Jul 2020, 11:24
Any computer is only as good as the input information to it plus its own "100%" solid safe operating software. You only have to consider the situation of Windows 10 and the many problems of nearly every "Update".

waco
2nd Jul 2020, 11:55
One thing is certain... whoever will be sat at the pointy end will not be paid very much...

Better get used to it...

vilas
2nd Jul 2020, 13:15
By the way Sully did not handle the Hudson River ditching perfectly. As it happens his mis-settings were not fatal, but they could have been in different circumstances
That's true. He was faced with an unpracticed situation and tremendous time pressure. The engines had not failed outright but were too damaged to produce any meaningful thrust; therefore the aircraft was in Normal Law. Configuration was not important, but he didn't maintain his speed. The low speed triggered Valpha prot and he wasn't able to flare sufficiently, leading to a heavy impact. Perhaps twistedengine wants to say that if automation had been possible it could've controlled the speed better.

twistedenginestarter
2nd Jul 2020, 13:20
Any computer is only as good as the input information to it plus its own "100%" solid safe operating software. You only have to consider the situation of Windows 10 and the many problems of nearly every "Update".
This, I think, is the biggest argument against automation.

The perspective I proposed is, in an era of concern over the competence and authenticity of airline pilots, you can guarantee quality performance by changing to computer systems which deliver consistency and can be made aware of every known parameter, procedure, and optimum failure response, apart from being largely immune to the temptations of doing things that are not supposed to be done.

The argument is $Billions can be invested in your pilot because your pilot is exactly the same as everybody else's flying the same aircraft type. Your pilot can know more than any human pilot because your pilot will have millions of man-hours of training. Your pilot will have been tested for thousands upon thousands of hours in the most extreme and rare situations. Clever aeronautical engineers will have honed your pilot to the pinnacle of precision.

The logic is irresistible.

Until you spend a few hours on the internet.

We are all exposed to websites which are chronically inept and dysfunctional. These are usually from large, global organisations with huge revenue streams at stake. Companies with super sophisticated and super expensive IT departments. Yet they punt out endless persistent rubbish. I totally fail to understand why a) software engineers create crap and b) testers do not pick up the problems (having been both a software developer and a tester during my working life).

So the calculation then becomes do you want to risk your life (or your massive aeronautical colossus) on slightly flawed but mainly very, very effective human pilots, or choose a digital nirvana where virtually anything is possible but history shows if there is a cock-up that can be made, it somehow can't be ruled out?

If only we knew what Boeing and Airbus are thinking...

jcbmack
2nd Jul 2020, 16:29
Any computer is only as good as the input information to it plus its own "100%" solid safe operating software. You only have to consider the situation of Windows 10 and the many problems of nearly every "Update".

Great point. Even in Machine Learning, a subset of AI in which the system/computer/machine learns over time to improve performance with experience, errors are made--potentially fatal ones. In supervised ML all instructions are explicitly programmed in, and the machine then learns using preset parameter weights; in semi-supervised learning there is a balance between pre-programmed instructions and flexible learning parameters; and then there is unsupervised ML, which was very useful in uncovering genes we did not know existed during the Human Genome Project. Theoretically, yes, these ML algorithms can save lives or land with greater ease under some circumstances, but no one knows in advance, and given Boeing's recent failures (787, 737 Max) we should all be more skeptical of the so-called capabilities of new smart autonomous systems. In research with my team--biometrics, AI financial/stock market analysis, autonomous car research, and cybersecurity--so much goes wrong in testing, a lot of it unanticipated. Speaking with my pilot friends here on PPRuNe and elsewhere, split-second decisions on an aircraft are not only complex but often life and death.
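The "supervised ML" case described above can be sketched in its simplest possible form (a toy illustration, not any aviation system): the machine is given labelled examples and adjusts a parameter weight by gradient descent on a squared-error cost until its predictions match the labels.

```python
# Toy supervised learning: fit y = w*x from labelled examples
# by gradient descent on a mean-squared-error cost function.

def fit_slope(xs, ys, lr=0.01, epochs=200):
    """Learn the slope w minimising mean((w*x - y)^2)."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Labelled training data generated by the true rule y = 2x.
xs = list(range(10))
ys = [2.0 * x for x in xs]

w = fit_slope(xs, ys)
print(round(w, 3))  # close to 2.0
```

The catch the post is pointing at: the learned weight is only as good as the labelled data and the chosen learning rate, and neither is guaranteed to cover the novel situations a real flight can produce.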

jcbmack
2nd Jul 2020, 16:37
"This I think is this biggest argument against automation.

The perspective I proposed is, in an era of concern over the competence and authenticity of airline pilots, you can guarantee quality performance by changing to computer systems which deliver consistency and can be made aware of every known parameter, procedure, and optimum failure response, apart from being largely immune to the temptations of doing things that are not supposed to be done."

I wanted to address this part of your quote in isolation. In no industry in the world can any computer system guarantee quality performance. In all my years of programming, algorithm assessment, design, testing, implementation, and leading specialist teams with their own expertise, it has never worked out that way. During the Cold War it was believed so-called "AI" would be able to interpret Russian and other languages near flawlessly--it is only in the last few years that even Google Translate has become more than proficient in a plethora of languages, and even today it often misses sub-dialects and known slang in languages like Bisaya (Visayan/Cebuano) and Russian. Computer vision is still in many ways in its infancy too--an area I worked on with researchers from Saarland University in Germany. Even in self-driving cars there are significant performance and safety issues, sometimes ending in death--and while aircraft might operate in clearer airspace than vehicles on the road, when things go wrong they can often become widespread fatalities. Clearly, not every pilot is a "PPRuNe heavyweight", but that does not make the argument for autonomous computer systems working perfectly--a negative in one area does not prove a positive in another. I have a deep respect for pilots and I love traveling by flight--I work with technology that has enhanced human beings' lives--but the limitations are greater than most realize.

jcbmack
2nd Jul 2020, 16:40
That's true. He was faced with unpracticed situation and tremendous time pressure. The Engines had not failed but damaged to produce any meaningful thrust. Therefore the aircraft was in Normal Law. Configuration was not important but he didn't maintain his speed. The low speed triggered Valpha prot and he wasn't able to flare sufficiently leading to heavy impact. Perhaps twistedengine wants say if on automation was possible it could've controlled speed better.

I have no reason to doubt that claim from multiple professional pilots. Still, Sully got the A320 safely down, no casualties, and everyone was grateful. I am skeptical that the system itself could have controlled for all the variables and done a better job--not impossible; I don't work on the systems in Airbuses, but knowing the technology's limitations, I can still be skeptical.

vilas
2nd Jul 2020, 17:13
Speaking with my pilot friends here on PPRuNe and elsewhere, split-second decisions on an aircraft are not only complex but often life and death.

There are simply no split-second decisions for any failures. If anyone told you that, it's an exaggeration. Most accidents/incidents are attributed to pilot error. The long list of human factors responsible for errors makes it tempting for the technology to attempt to remove the human. Accidents happened many times more often without automation. All-weather CAT3 operations, and the reduced vertical separation that increases the number of aircraft using an airway, are just not possible without automation. And if, even with humans present, accidents cannot be avoided, then the question is whether there would be fewer of them with full automation. The B737 Max failures are not at all because of automation. They are because of trying to match a competitor, fitting a more fuel-efficient engine onto an unsuitable airframe, then trying to correct the problems that arose through software without redundancy, and selling it in immoral haste. Don't compare that.

jcbmack
2nd Jul 2020, 19:57
There are simply no split-second decisions for any failures. If anyone told you that, it's an exaggeration. Most accidents/incidents are attributed to pilot error. The long list of human factors responsible for errors makes it tempting for the technology to attempt to remove the human. Accidents happened many times more often without automation. All-weather CAT3 operations, and the reduced vertical separation that increases the number of aircraft using an airway, are just not possible without automation. And if, even with humans present, accidents cannot be avoided, then the question is whether there would be fewer of them with full automation. The B737 Max failures are not at all because of automation. They are because of trying to match a competitor, fitting a more fuel-efficient engine onto an unsuitable airframe, then trying to correct the problems that arose through software without redundancy, and selling it in immoral haste. Don't compare that.

There are no split-second decisions to make, at all? No novel events not foretold by an AC, flight manual, or computer system? I know the NTSB likes to blame pilots virtually straight away--this I have been told by a few experienced pilots here on PPRuNe, and the same case was attempted against Sully. I cannot comment on actually flying a B737, but I can guarantee automation failure and other computer-based factors were at play there--just as the detrimental idea of using lithium-ion batteries in the 787 was a grave error. I can tell from the information I have access to, which is not all Boeing has, that the software was buggy for the 737 Max and the issue here was system design. There are other system-error examples, like Qantas Flight 72, where a malfunction in the ADIRU was found, or the Lufthansa Airbus A321 that had autopilot issues and, when the autopilot was turned off, the nose dropped. The Boeing 737 Max's MCAS did indeed malfunction. Now, we all know the claims of William Langewiesche--while not the pilot his father was (the author of Stick and Rudder), he is still a pilot, so I did read his statements very carefully. There is ample evidence Boeing downplayed the serious issues with MCAS, the FAA did not look hard enough, and now we have disparate experts trying to blame the pilots. It is easy for us non-pilots to blame the pilots, but I do know, with years of experience in software engineering and AI/ML research, and from talking to seasoned pilots, that the narrative is not so clear-cut. Sure there are pilot errors, and they get found out, but to blame the pilots and play down Boeing's sloppy engineering of the upgrades to the 737 Max does a great disservice. These computer systems are not infallible either.

twistedenginestarter
3rd Jul 2020, 02:40
we have disparate experts trying to blame the pilots;

With MCAS, the argument is whether or not it was correct to expect the pilots to do the right things at the speed they needed to. MCAS strongly supports the proposition of automated flying, simply because MCAS is not active when the plane is flying automatically. The autopilot does not need any special help to cope with the evil larger engine configuration.

If you take the Max crashes and factor out MCAS, you are left with an interesting question: if the plane had been designed so that the computers were in charge, how easy would it have been to program them to cope with the sensor failures? Inconsistent or suspicious speed (etc.) readings can be caused by a number of system failures, so they would have to be a class of situations the computers could cope with. Probably the answer is that Boeing would have to have built in greater redundancy (i.e. resilience), more like Airbus do.

And more like Boeing will now have to do anyway.
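The redundancy point above can be sketched in miniature (an illustrative toy, not actual avionics logic): with three independent sensors, a mid-value select tolerates any single faulty reading, whereas a design fed by one sensor, as the original MCAS was, has no way to vote a bad value out.

```python
# Toy triple-redundancy sketch: mid-value (median) select of three
# sensor readings rejects any single wildly wrong input.

def mid_value_select(a, b, c):
    """Return the middle of three sensor readings."""
    return sorted([a, b, c])[1]

# Two healthy AoA vanes reading near 5 degrees; one failed high.
aoa = mid_value_select(5.1, 4.9, 26.0)
print(aoa)  # the faulty 26-degree reading is voted out
```

This is the classic argument for why flight-critical inputs are usually triplicated: the voter needs no knowledge of *how* a sensor failed, only that the other two agree.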

jcbmack
3rd Jul 2020, 03:38
With MCAS, the argument is whether or not it was correct to expect the pilots to do the right things at the speed they needed to. MCAS strongly supports the proposition of automated flying, simply because MCAS is not active when the plane is flying automatically. The autopilot does not need any special help to cope with the evil larger engine configuration.

If you take the Max crashes and factor out MCAS, you are left with an interesting question: if the plane had been designed so that the computers were in charge, how easy would it have been to program them to cope with the sensor failures? Inconsistent or suspicious speed (etc.) readings can be caused by a number of system failures, so they would have to be a class of situations the computers could cope with. Probably the answer is that Boeing would have to have built in greater redundancy (i.e. resilience), more like Airbus do.

And more like Boeing will now have to do anyway.

http://www.b737.org.uk/mcas.htm

There was an early issue regarding how the system responded in slow flight (adding slow-speed activation along with the high-speed pitch-up conditions). There was also an issue with the AOA DISAGREE alert, which was not properly activated in all 737 Max aircraft across all airlines. This was a contributing factor in the Lion Air accident.

In the Ethiopian accident, after the pilots turned off the stab trim via the cutout switches they could not move the manual trim wheel, so they switched them back on to regain electric trim, but this also reactivated MCAS; it's not known why they could not manually control the trim wheel.

As far as Boeing becoming more like Airbus, I think, historically, they have had different engineering philosophies, and perhaps an issue here is Boeing is trying too hard to be like Airbus. They are very different aircraft. They are not the same style or engineering practice.

Here is a salient quote: "During The House Committee on Transportation & Infrastructure hearing (http://www.b737.org.uk/mcas.htm#govtinvs) in October 2019, an email exchange was disclosed between Boeing employees from 2015 which read: "Are we vulnerable to single AOA sensor failures with the MCAS implementation?" The response from CEO Dennis Muilenburg was that the email showed that "our engineers do raise questions, in an open culture," but that the single-sensor design met the standards. John Hamilton, chief engineer for Boeing's commercial airplane division, who testified alongside Muilenburg, said that single points of failure are allowed in airplane design depending on the hazard assessment. Any dissent the committee could present on the final assessment that a single sensor was merited "highlights that our engineers do raise questions and it's an open culture."

The final KNKT investigation report into the Lion Air accident (http://www.b737.org.uk/incident_pk-lqp.htm) said as a contributing factor: "The replacement AOA sensor that was installed on the accident aircraft had been miscalibrated during an earlier repair. This miscalibration was not detected during the repair." The angle it registered was 21 degrees too high. Following the publication of this report, the FAA revoked the certificate of approval of Xtra Aerospace of Miramar, Fla., the company that supplied the faulty AoA sensor. Xtra subsequently issued a statement saying that "we respectfully disagree with the agency's findings." It added that the revocation of its certificate "is not an indication that Xtra was responsible for the accident."

Again, sensor issues have become more commonplace, and this needs to be addressed even while everyone tries to avoid responsibility. More significant redundancies, perhaps--but maybe MCAS itself was a mistaken way to gain longitudinal stability. That takes more digging.
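The redundancy point can be made concrete. Below is a minimal sketch (invented names and threshold - not Boeing's actual MCAS logic) of the kind of two-vane cross-check that would inhibit AoA-driven trim augmentation on disagreement:

```python
# Hypothetical two-vane AoA comparator; the threshold value is invented
# for illustration and is not taken from any certified design.
DISAGREE_LIMIT_DEG = 5.5

def augmentation_permitted(aoa_left_deg: float, aoa_right_deg: float) -> bool:
    """Allow AoA-driven trim augmentation only while both vanes agree."""
    return abs(aoa_left_deg - aoa_right_deg) <= DISAGREE_LIMIT_DEG
```

A vane reading 21 degrees high, as in the Lion Air report, fails this check against a healthy vane, so the augmentation would stay inhibited; with a single vane there is nothing to compare against, which is exactly the single-point-of-failure question raised in the 2015 email.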

Pugilistic Animus
3rd Jul 2020, 07:25
When I started flying, a Trident could do a Cat 3C autoland. That was over 50 years ago. Self driving cars are here now. True, they can't drive on ordinary roads. That may take a decade or more. But flying is not like driving a car. It takes place in a highly organised, structured and protected context.

I am not suggesting there will be pilotless airliners in my lifetime. That would be far too ambitious. What Boeing/Airbus want is to reduce costs and avoid Max-type complications. Going too far would simply present new alternative deadly threats to their revenue streams.

Single pilot is missing the point. That still implies a pilot operating the controls. It still means the pilot has to know what to do in every situation. It still means pilots making mistakes or doing bad things.

No, I think the logic is to enable the plane to carry out all phases of every flight. That in turn puts pressure on everyone to ensure all the pieces are in place (e.g. the Bergerac Ryanair 737 being set up to perform a GPS approach).

The remaining pilot (or pilots) would still have an important and skilled job in managing the mission (including handling emergencies) although it would quite likely be less enjoyable.

By the way, Sully did not handle the Hudson River ditching perfectly. As it happens, his mis-settings were not fatal, but they could have been in different circumstances. I think the essence was that he didn't know the correct configuration for a Hudson River landing, because it is something no one would ever train you for. Airbus of course did know, so a computer-operated landing would have been just that tad safer.

How would Airbus know how to ditch with computers? Sully had very little time to think; he made the correct decision to ditch in the Hudson rather than attempt a turn back, which would most likely have ended in disaster.

CaptainMongo
3rd Jul 2020, 14:27
There are simply no split-second decisions for any failures; if anyone told you there are, it's an exaggeration. Most accidents/incidents are attributed to pilot error. The long list of human factors responsible for errors makes it tempting to use technology to remove the human. Accidents happened many times more often without automation.


What we don’t read about are the thousands of flights each day which are operated safely because of, not despite decisions of the pilots. We make a myriad of seeming insignificant decisions during flight which result in a safe outcome of that flight.

Technology is great until it isn’t. Pilots aren’t paid for when things go right, we are paid for when things go wrong.

Regards,

jcbmack
3rd Jul 2020, 15:23
What we don’t read about are the thousands of flights each day which are operated safely because of, not despite decisions of the pilots. We make a myriad of seeming insignificant decisions during flight which result in a safe outcome of that flight.

Technology is great until it isn’t. Pilots aren’t paid for when things go right, we are paid for when things go wrong.

Regards,

Great points and I have a deep respect for you pilots who do fly day in and day out.

vilas
3rd Jul 2020, 18:50
There are no split-second decisions to make, at all? No novel events not foretold by an AC, flight manual, or computer system? I know the NTSB likes to blame pilots virtually straight away--this I have been told by a few experienced pilots here on PPRuNe, and the same case was attempted against Sully. I cannot comment on actually flying a B737, but I can guarantee automation failure and other computer-based factors were at play there--just as the detrimental idea of using lithium-ion batteries in the 787 was a grave error. I can tell from the information I have access to, which is not all Boeing has, that the software for the 737 Max was buggy and the issue was system design. There are other systems-error examples: Qantas Flight 72, where a malfunction in the ADIRU was found, and a Lufthansa Airbus A321 that had autopilot issues--when the autopilot was turned off, the nose dropped. The Boeing 737 Max's MCAS did indeed malfunction. Now, we all know the claims of William Langewiesche--while not the pilot his father was (the author of Stick and Rudder), he is still a pilot, so I read his statements very carefully. There is ample evidence Boeing downplayed the serious issues with MCAS, the FAA did not look hard enough, and now disparate experts are trying to blame the pilots. It is easy for us non-pilots to blame the pilots, but I know, with years of experience in software engineering, AI/ML research, and talking to seasoned pilots, that the narrative is not so clear-cut. Sure, there are pilot errors, and they get found out, but to blame the pilots and downplay Boeing's sloppy engineering of the 737 Max upgrades does a great disservice. These computer systems are not infallible either.
You are making too many assumptions. Pilots are not blamed straight away; the inquiry states that its purpose is not to assign blame, and it takes years to publish an inquiry report. There was no question of blaming Sully. He was faced with something not in the book nor practiced, and he landed in water. Had he turned back he would have been blamed, because he wouldn't have made it. That the majority of accidents are due to pilot error is a fact of life. Emirates 521, the B777 go-around accident where the pilot didn't know his thrust was at idle--who should be blamed? PK8303 at Karachi, where the pilot did not do a single thing per procedure--who will be blamed? We as humans have problems in the air; that's why they are called human factors, but for creatures that aren't birds we have done a pretty good job. Don't discuss the MAX, because it should not have been designed; yes, Boeing trying to blame pilots was nothing short of criminal. It's a juggling act now between humans and computers. If humans make mistakes, so will computers; it's a matter of which is safer. Naturally it's an emotional issue for pilots, but technology marches relentlessly. Besides, aviation is a business like any other: whether you sell wine, computers, hotel rooms, or aircraft seats, it's done to make a profit, and what's profitable will decide the future. Poor piloting is a bad advertisement for human presence in the cockpit. Take the AF447 unreliable-speed accident: how was it solved? Automation. First came the backup speed scale; now in the A350 it's the backup speed itself. The pilot does nothing; the computers do it and inform the pilot we are on alternate speed. This is an endless discussion. Piloting is not the only issue: humans won't be required for most jobs, or at least not in those numbers. It's a serious problem.

jcbmack
3rd Jul 2020, 19:08
You are making too many assumptions. Pilots are not blamed straight away; the inquiry states that its purpose is not to assign blame, and it takes years to publish an inquiry report. There was no question of blaming Sully. He was faced with something not in the book nor practiced, and he landed in water. Had he turned back he would have been blamed, because he wouldn't have made it. That the majority of accidents are due to pilot error is a fact of life. Emirates 521, the B777 go-around accident where the pilot didn't know his thrust was at idle--who should be blamed? PK8303 at Karachi, where the pilot did not do a single thing per procedure--who will be blamed? We as humans have problems in the air; that's why they are called human factors, but for creatures that aren't birds we have done a pretty good job. Don't discuss the MAX, because it should not have been designed; yes, Boeing trying to blame pilots was nothing short of criminal. It's a juggling act now between humans and computers. If humans make mistakes, so will computers; it's a matter of which is safer. Naturally it's an emotional issue for pilots, but technology marches relentlessly. Besides, aviation is a business like any other: whether you sell wine, computers, hotel rooms, or aircraft seats, it's done to make a profit, and what's profitable will decide the future. Poor piloting is a bad advertisement for human presence in the cockpit. Take the AF447 unreliable-speed accident: how was it solved? Automation. First came the backup speed scale; now in the A350 it's the backup speed itself. The pilot does nothing; the computers do it and inform the pilot we are on alternate speed. This is an endless discussion. Piloting is not the only issue: humans won't be required for most jobs, or at least not in those numbers. It's a serious problem.

There is no determination yet as to whether computers or humans are safer. Aviation, like many endeavors, is based on human-computer interaction. Self-checkouts are more convenient but more error-prone than a live cashier; MCAS killed a lot of people due to sensor and systems errors; drivers crash all the time on the road; self-driving cars also crash. There is no statistical technique that can robustly show that more automation will make flying safer in and of itself. Airbuses are a different breed than Boeings, and I like both aviation philosophies, but Airbuses malfunction too. There is no way to know whether computers will reduce the estimated 75-85% of accidents with pilot error as a causal factor, if that estimate is even accurate. Look, I love CS/IT and AI/ML--it is a huge and blossoming field--but I believe it is you who is making too many assumptions.

twistedenginestarter
3rd Jul 2020, 22:31
MCAS is a red herring here. It is a red herring because it is not automation. It is a fly-by-wire feature to help human pilots to fly manually. It doesn't work when the Max autopilot is engaged. In fact I recall that one of the crash crews tried to engage the autopilot because they knew, or believed, it would correct the trim problem.

For the purposes of this thread, the 737 Max crashes were caused by faulty angle-of-attack data, which among other things gave incorrect speed data. I can't remember exactly whether anyone said the autopilots could have coped with the sensor failures. I believe the autopilot tripped out when selected in the relevant crash, but the cause was the excessive misconfiguration, i.e. things had already gone too far. If autopilots had been used from the very first possible moment they might have been able to avoid the two crashes.

If that were true, it would make an interesting case for enhanced use of automation. Even if it were not, I suspect you wouldn't need a huge change to the software to make it cope with sensor failure.

The argument I am making is not whether computers should fly planes rather than pilots. My point is simply Boeing have suffered massively and they might well feel they could easily get into the same situation again. They have to make things more sophisticated (ie complicated) to make progress. You can therefore imagine why a spin of the strategy wheel could end up pointing at backing automation and leaving pilots behind. That would have been wild speculation until we found out this week that Airbus have spent two years making something like 450 flights in a big A350. Imagine the cost of that. You can only think momentum is building up.

jcbmack
4th Jul 2020, 05:32
MCAS is a red herring here. It is a red herring because it is not automation. It is a fly-by-wire feature to help human pilots to fly manually. It doesn't work when the Max autopilot is engaged. In fact I recall that one of the crash crews tried to engage the autopilot because they knew, or believed, it would correct the trim problem.

For the purposes of this thread, the 737 Max crashes were caused by faulty angle-of-attack data, which among other things gave incorrect speed data. I can't remember exactly whether anyone said the autopilots could have coped with the sensor failures. I believe the autopilot tripped out when selected in the relevant crash, but the cause was the excessive misconfiguration, i.e. things had already gone too far. If autopilots had been used from the very first possible moment they might have been able to avoid the two crashes.

If that were true it would make an interesting case for enhanced use of automation. Even if it were not true then I suspect you wouldn't need a huge change to the software to make it cope with sensor failure.

The argument I am making is not whether computers should fly planes rather than pilots. My point is simply Boeing have suffered massively and they might well feel they could easily get into the same situation again. They have to make things more sophisticated (ie complicated) to make progress. You can therefore imagine why a spin of the strategy wheel could end up pointing at backing automation and leaving pilots behind. That would have been wild speculation until we found out this week that Airbus have spent two years making something like 450 flights in a big A350. Imagine the cost of that. You can only think momentum is building up.

MCAS is certainly part of automation; I suggest you open the link I left for further reading. Airbus is cool, but different than Boeing. I fear you may need a little more CS background :)


http://www.b737.org.uk/mcas.htm

https://www.airwayslife.com/news/2019/03/29/boeing-737-max-mcas-and-the-future-of-automation-in-aviation/

Lookleft
4th Jul 2020, 05:46
How many times do we have to put up with tech nerds getting all excited about the impending demise of the pilot? There is one simple reality: there is no current airliner in the advanced design stage that does not have two pilots in the front. If Boeing thought they were going to get a quantum leap in the marketplace, they would not have persisted with a 50-year-old design but gone down the pilotless-aircraft path.

Uplinker
4th Jul 2020, 08:11
Human error exists amongst computer programmers too. Anyone who has written computer software will know that even though your program looks good and logical, the first time you run it, it will almost certainly throw up some errors or lock up and not work at all because of a logical cul-de-sac you hadn't thought of. You then spend time working through the errors and rewriting the code. Even then, if someone other than yourself uses the software, they will often find bugs that you hadn't, because they will almost certainly take the software to places you never thought of.

AF447 had the STALL warning stopping because the aircraft thought it was below 60 kt IAS.

The tragically unnecessary PIA crash possibly had the landing gear warnings suppressed by other warnings.

Which self-driving car was it that thought the grey side of a large lorry was just the sky and crashed into the lorry?

Even if an autonomous flight deck were possible - and deemed safe - who would compute the take-off performance, monitor and cross-check the fuelling, (and decide the fuel load), do the walk-around and monitor the loading? Engineers could if there were many more of them, but I really think you need these things to be done by someone whose life will depend on getting it right by actually flying in the aircraft, not waving it goodbye from the ground. Who would liaise with Ops and ATC regarding slots etc? How would the aircraft taxi to and from the runway? Who would check the controls were not reversed before take-off?

The Cabin Crew have their own work to do, they would not have time to prepare the aircraft as well.


Boeing MCAS is not FBW, it is a bodge, and a bad bodge at that. If Boeing had introduced FBW on the 737 years ago - even just in pitch - and certified it, they would still be a brilliant manufacturer and the MAX would have just required some software parameter changes to the FBW in pitch to account for the longer engines. As it was, the company changed and concentrated on making money rather than making airliners.

safetypee
4th Jul 2020, 11:13
There is little value in considering replacing humans with computers in the short term; similarly comparing current operations with future developments.
Future operations will be innovative, evolving as a combined man-machine team, each side contributing its particular strengths.

New design concepts will enable single-pilot operations with economic benefit. Some highly automated systems are already capable, but without technical enhancement they may not have sufficient redundancy for an acceptable level of safety with a single crew.
A critical 'failure' is pilot incapacitation; a rare event nowadays - but not that rare for a single pilot when considering the overall level of safety.

Semi-autonomous operation (automatic recovery) would only be required after the first failure (no human) and, depending on certification requirements - probability and capability - on avoiding adverse consequences. A 'relatively simple' recovery system might be sufficient; it would not have to manage pilot incapacitation plus lightning strike, etc. - such combinations are excluded by certification probability - just a lower minimum standard with accepted risk.

One problem is the level of piloting skill for single-crew operation. Current operations provide the opportunity to build experience: First Officers aspire to Captaincy, though perhaps with less experience than in previous operations. With single-pilot ops, however, there may be no opportunity to learn on the job - no FO - so the automation will have to compensate for reduced levels of experience within the pilot-automation team. However, the improved technology needed may be no more than would be required for abnormal flight without a pilot, although the tasks differ.

Airbus is thinking ahead, considering means, method, and validation; most of all they are not depending on simulation. Their tests are real, man and machine, warts and all, to provide a wider experience base and safety confidence in choosing a way forward.

Background reading; 'Complex Operational Decision Making in Networked Systems of Humans and Machines: - Integrating Humans, Machines, and Networks: A Global Review of Data-to-Decision Technologies'
https://download.nap.edu/cart/download.cgi?record_id=18844 'Download free pdf'
Summary ~ page 16, with interesting findings.
Chapter 3 is a valuable view of the human element - HF, CRM, etc.

Cornish Jack
4th Jul 2020, 11:24
Seems a pity that this topic ends up as a series of individual incidents/accidents quoted as 'evidence' supporting or condemning the 'systems' being mooted. It may be possible to select one or more such events to support or condemn a course of action, but the ultimate criterion is life or death. Apart from having somebody to buy the beers and guide you to the best night spots, was the 747 Classic a safer machine than the -400? If in doubt, listen to the CVR tape and accompanying visual depiction of the Evergreen freighter into KL, or the details of the Delta TriStar into the swamp. A lifetime in and around aviation encompassed more than enough involvement with the outcomes of piloting 'skills' to view any potential improvements with interest and cautious enthusiasm. Both humans and computers are limited; using the best features of each would seem, to me, a sensible compromise.

jcbmack
4th Jul 2020, 17:47
How many times do we have to put up with tech nerds getting all excited about the impending demise of the pilot. There is one simple reality. There is no current airliner in the advanced design stage that does not have two pilots in the front. If Boeing thought they were going to get a quantum leap in the market place they would not have persisted with a 50 year old design but gone the pilotless aircraft path.

I am a tech nerd :) but I understand the limitations within any field. Even in cutting edge areas where significant progress has been made, we must be cautious. When I fly to the Philippines and Hong Kong (or anywhere, for that matter) as a passenger I want two competent pilots flying :)

jcbmack
4th Jul 2020, 17:48
How would Airbus know how to ditch with computers? Sully had very little time to think, he made the correct decision to Ditch in the Hudson rather than attempt a turn back which would have most likely ended in disaster.

There is no evidence any computer system would make the same decision or do it as well as Sully did. I have found this thread very interesting, but I will go back to reading it--as I have said what I can on this subject--and will leave it to you, professional pilots.

tdracer
4th Jul 2020, 19:38
There is no evidence any computer system would make the same decision or do it as well as Sully did. I have found this thread very interesting, but I will go back to reading it--as I have said what I can on this subject--and will leave it to you, professional pilots.
You are totally missing the point. Sully is a reasonably easy scenario to program for:
Scenario - you just lost thrust on both engines and are unlikely to get it back - so you're looking at a forced landing. So you need to determine - given your weight, altitude, and airspeed - how far you can glide and what potential landing spots are available within that range (as well as any configuration changes needed to achieve that range). Furthermore, with appropriate programming an autonomous system would instantly know where every available landing spot was (no need to ask ATC). The only 'hard' part would be determining the best option of where to put it down (obviously a runway would be best, but if available range doesn't allow that, picking the best alternative).
Now, Sully did all this, but it took him ~20 seconds - exceptionally good for a human under those circumstances - but an autonomous system could have done all that in a fraction of a second - and by making that determination ~20 seconds earlier, there would still have been enough altitude/airspeed to make an actual runway (in which case John Q Public probably wouldn't even remember it happened).
Basically, if the scenario has ever happened, or if the designers can dream it up, an autonomous system can be developed to account for it - with the designers having the advantage of being able to sort through various different actions to determine which is most likely to provide a happy outcome (unlike a human pilot who basically only gets one chance to get it right). The weakness of any autonomous system is dealing with a totally new, unanticipated scenario - humans are creative, and can think up new, inventive ways to deal with unknown - computers not so much.
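The glide-range part of that determination is indeed simple to sketch. Assuming still air and a fixed lift-to-drag ratio (all values and names here are illustrative; a real system would model wind, turns, flap configuration, and energy for the flare), a hypothetical reachable-field filter might look like:

```python
import math

EARTH_RADIUS_KM = 6371.0

def glide_range_km(altitude_m: float, lift_to_drag: float) -> float:
    """Still-air glide distance is roughly L/D times the height available."""
    return altitude_m * lift_to_drag / 1000.0

def reachable_fields(fields, lat, lon, altitude_m, lift_to_drag=17.0):
    """Filter (name, lat, lon) candidates to those inside the glide
    footprint, using an equirectangular approximation - adequate over
    the few tens of kilometres a glide covers."""
    max_km = glide_range_km(altitude_m, lift_to_drag)
    names = []
    for name, f_lat, f_lon in fields:
        dx = math.radians(f_lon - lon) * math.cos(math.radians(lat)) * EARTH_RADIUS_KM
        dy = math.radians(f_lat - lat) * EARTH_RADIUS_KM
        if math.hypot(dx, dy) <= max_km:
            names.append(name)
    return names
```

From roughly 900 m this yields a footprint of about 15 km, which frames how tight the US1549 decision actually was; ranking the surviving candidates is the genuinely hard part the post identifies.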

jcbmack
4th Jul 2020, 21:07
You are totally missing the point. Sully is a reasonably easy scenario to program for:
Scenario - you just lost thrust on both engines and are unlikely to get it back - so you're looking at a forced landing. So you need to determine - given your weight, altitude, and airspeed - how far you can glide and what potential landing spots are available within that range (as well as any configuration changes needed to achieve that range). Furthermore, with appropriate programing an autonomous system would instantly know where every available landing spot was (no need to ask ATC). The only 'hard' part would be determining the best option of where to put it down (obviously a runway would be best, but if available range doesn't allow that picking the best alternative).
Now, Sully did all this, but it took him ~20 seconds - exceptionally good for a human under those circumstances - but an autonomous system could have done all that in a fraction of a second - and by making that determination ~20 seconds earlier, there would still have been enough altitude/airspeed to make an actual runway (in which case John Q Public probably wouldn't even remember it happened).
Basically, if the scenario has ever happened, or if the designers can dream it up, an autonomous system can be developed to account for it - with the designers having the advantage of being able to sort through various different actions to determine which is most likely to provide a happy outcome (unlike a human pilot who basically only gets one chance to get it right). The weakness of any autonomous system is dealing with a totally new, unanticipated scenario - humans are creative, and can think up new, inventive ways to deal with unknown - computers not so much.

I think you are missing several points. It did take Sully about 20 seconds to carry out his actions, and that was no detriment to the crew or the passengers: no one died, and everyone went on living. I think you overestimate the power of autonomous systems in general. Many idealists in engineering and software engineering make this common overstatement of capabilities; many in my own teams have made similar, biased claims.

Here is where you make a mostly false claim, unfortunately: "Basically, if the scenario has ever happened, or if the designers can dream it up, an autonomous system can be developed to account for it - with the designers having the advantage of being able to sort through various different actions to determine which is most likely to provide a happy outcome." Such systems can account for a plethora of scenarios, yes, and often help pilots make more rapid decisions, but this is not universally true. The progress in autonomous and intelligent systems (AI/ML) for aviation, finance, big data, and epidemiology has not been as rapid or consistent as many in the field predicted, or as many of us hoped. No engineer or computer scientist is that good.

"but an autonomous system could have done all that in a fraction of a second - and by making that determination ~20 seconds earlier."

The operative phrase here is "could have" - maybe; there is no guarantee this would happen under actual real-world conditions.

"The weakness of any autonomous system is dealing with a totally new, unanticipated scenario - humans are creative, and can think up new, inventive ways to deal with unknown - computers not so much."

We mostly agree here, and that is a major issue, but there are other issues in detecting nuanced conditions that, to date, autonomous systems still handle rather poorly. Even when sensors are not malfunctioning, real-world systems have trouble differentiating between some important visual data cues. Sully's situation was, at the time, an unanticipated scenario that involved more than just two engines out.

After seeing Boeing destroy its engineering legacy with the 787 Dreamliner, a worthless aircraft, and now the 737 Max, they have a lot of errors to fix; hopefully the engineers will now report known design flaws rather than mention them only briefly in an email. Hopefully Airbus will not become too theoretical with its new autonomous systems and will remain practical.

In my own experience with global teams working on computer vision, machine learning, and accident avoidance in land systems, we have seen incredible improvements where information is processed far faster and at least as accurately as by human end-users, but we have also seen systems make mistakes people rarely, if ever, make.

More salient links to the subject at hand:

https://www.skybrary.aero/index.php/Cockpit_Automation_-_Advantages_and_Safety_Challenges

https://www.reuters.com/article/us-airbus-a220-exclusive-idUSKBN1X31ST

https://www.news.com.au/technology/innovation/inventions/how-a-confused-ai-may-have-fought-pilots-attempting-to-save-boeing-737-max8s/news-story/bf0d102f699905e5aa8d1f6d65f4c27e

tdracer
4th Jul 2020, 23:20
Jcbmack - you're still making the fundamental error of applying the current state to something decades in the future. Compare today's computing capability to what it was 50 years ago (your digital watch has more computing capability than the Lunar Module did when they landed on the moon - never mind what your phone can do), then try to extrapolate that another 50 years into the future. Sure, sensor failures are a problem, but with orders of magnitude more computing capability available (and cheaper), you might have a dozen redundant sensors instead of today's two or three. Today's computers still struggle with the basic vision that the human Mk 1 eyeball is capable of, but that's improving rapidly, and the computer's vision isn't limited to what the human eyeball can do - plus the computer can look at 360 degrees in all three dimensions simultaneously - something we humans can only fantasize about. FBW and FADEC were greeted with massive skepticism when they started coming on-line 40 years ago (and yes, I heard the naysayers who claimed they'd never fly on an aircraft so equipped) - today nobody would even dream of designing a new aircraft without them. Meanwhile, human capabilities are not meaningfully different than they were 50 years ago, and pilot error has become the leading cause of aircraft accidents (the MAX fiasco notwithstanding).
Like I originally posted, it won't happen soon - my prediction is 40 to 50 years - but assuming that humankind doesn't manage to kill itself off in the meantime, the time will come when fully autonomous aircraft are not just common, like FBW and FADEC, they will have become the norm.
BTW, I'm not sure when this became about Boeing (given the discussion started regarding an Airbus project), but calling the 787 "worthless" unfortunately reveals your bias. In spite of its early difficulties, it has become the most successful new widebody in history - with just shy of a thousand aircraft delivered less than 9 years after EIS; no other widebody is even in the ballpark to those numbers.
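The "dozen redundant sensors" idea maps onto a standard voting scheme. A median select, for example, tolerates outlier channels without having to identify which unit failed (a sketch only, not any certified architecture):

```python
from statistics import median

def voted_reading(channel_values):
    """Median-select across redundant channels: with n channels, up to
    (n - 1) // 2 arbitrarily wrong values cannot pull the output outside
    the range spanned by the healthy channels."""
    if not channel_values:
        raise ValueError("no sensor data available")
    return median(channel_values)
```

A single vane stuck 21 degrees high barely moves a five-channel median, whereas it dominates a single- or dual-channel scheme - which is the cheap-compute argument for more sensors rather than smarter ones.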

jcbmack
5th Jul 2020, 00:35
Jcbmack - you're still making the fundamental error of applying the current state to something decades in the future. Compare today's computing capability to what it was 50 years ago (your digital watch has more computing capability than the Lunar Module did when they landed on the moon - never mind what your phone can do), then try to extrapolate that another 50 years into the future. Sure, sensor failures are a problem, but with orders of magnitude more computing capability available (and cheaper), you might have a dozen redundant sensors instead of today's two or three. Today's computers still struggle with the basic vision that the human Mk 1 eyeball is capable of, but that's improving rapidly and the computer's vision isn't limited to what the human eyeball can do - plus the computer can look at 360 degrees in all three dimensions simultaneously - something we humans can only fantasize about. FBW and FADEC were greeted with massive skepticism when they started coming on-line 40 years ago (and yes, I heard the naysayers who claimed they'd never fly on an aircraft so equipped) - today nobody would even dream of designing a new aircraft without them. Meanwhile, human capabilities are not meaningfully different than they were 50 years ago and pilot error has become the leading cause of aircraft accidents (the MAX fiasco not withstanding).
Like I originally posted, it won't happen soon - my prediction is 40 to 50 years - but assuming that humankind doesn't manage to kill itself off in the meantime, the time will come when fully autonomous aircraft are not just common, like FBW and FADEC, they will have become the norm.
BTW, I'm not sure when this became about Boeing (given the discussion started regarding an Airbus project), but calling the 787 "worthless" unfortunately reveals your bias. In spite of its early difficulties, it has become the most successful new widebody in history - with just shy of a thousand aircraft delivered less than 9 years after EIS; no other widebody is even in the ballpark to those numbers.

I agree with you, tdracer: there will be many more amazing advances in computing power. Moore's law is still in effect and, even though there is evidence it might slow down soon, computing efficiency is still rapidly advancing. Yes, smartphones are amazing; for example, I have the Samsung S20 Ultra, and it is an elegant computing system in itself. I also agree that autonomous systems are necessary and significantly useful. More specifically on sensor issues: we are seeing many errors in biometric research and in CV (computer vision) differentiation, even at the top-funded companies and universities--unforeseen issues keep popping up. It is not that human capabilities are significantly different, though cognitive psychology and behavioral neuroscience research has helped humans enhance performance here and there; it is that humans do certain things far better than computerized systems, and certain types of judgment are among them. While deep-learning neural networks can analyze more data faster and discern patterns--and technically, the more data NNs process, the better their generalization can become--they sometimes analyze meaningless variables or treat detrimental actions as useful, where on average a human would not. I also agree that the sensors will get better, but we need a more effective way to switch to manual control over actions (like trim) when necessary, and more efficient ways to check sensor operations.

On the 40-50 year timeline, I think we will see a plethora of technological improvements and more ML involved in autonomous systems as a whole. I am just conservative about what the systems' consistent output capabilities will be.

More beer here! Happy Fourth!

armchairpilot94116
5th Jul 2020, 05:03
We had HAL in 2001. Well, actually we didn't, but he is coming in 2101, and he is not going to let you control the flight then either.

The problem (well at least one problem) is that the modern autopilot seems to carry on until it can't handle the situation then dumps the problem on the pilot who is not up to speed in real time needed to right the wrong. Being out of practice by not hand flying on a regular basis and practicing irregular situations isn't helping the pilots to handle problems on the rare occasions they do arise. There is a disconnect between man and machine which rears its head now and then.

vilas
5th Jul 2020, 06:39
Jcbmack - you're still making the fundamental error of applying the current state to something decades in the future. Compare today's computing capability to what it was 50 years ago (your digital watch has more computing capability than the Lunar Module did when they landed on the moon - never mind what your phone can do), then try to extrapolate that another 50 years into the future.
tdracer I fully agree with you. It's a common mistake everyone makes, judging tomorrow's pilotless aircraft by today's technology. But as I said, this is an endless discussion and just that; it is not going to decide the future. It is possible to program a computer to deal with all the mistakes and errors recorded in aviation, but not possible to train a human for all those situations. Routinely, millions of hours are flown without any requirement for human creativity. But the discussion will carry on.

safetypee
5th Jul 2020, 07:06
For those who persist in 'designing' the future by only looking backwards, or wistfully speculating, a few words from Russell Ackoff:
https://thesystemsthinker.com/a-lifetime-of-systems-thinking/

"… acting appropriately in the present …, you cannot learn from my mistakes, only from your own. I want to encourage, not discourage, your making your own." An alternative view of Airbus research ?

Moore's 'association', never a law, is dying; the expansion is now an 'S'-shaped curve levelling off: power required, heat dissipation, cost effectiveness.

also https://brandongaille.com/25-splendid-russell-l-ackoff-quotes/

"Expertise reflects tacit knowledge, not the mastery of procedures or the accumulation of academic honours."

Uplinker
5th Jul 2020, 10:39
I think you will always need humans in the loop.

Yes there have been far too many accidents caused by stupid or poor or bad or fatigued piloting. But I think the answer is not to remove the pilots - the way forward is to try to remove the poor piloting.

We should certainly refine the present automation so the human can interface with it even more reliably, knowing as we now do the fallibilities of the human senses and brain. Under stress, what we hear gets ignored. Tests show that in high-workload situations we fail to notice a person in a gorilla suit walking in full view amongst other people. We suffer tunnel vision or information overload, and cannot see the wood for the trees.

Pilot training should be a true measure of a pilot's skill and ability, not a box-ticking exercise. How about this: every year, only the top-performing pilots in the sim become the Captains for the next year. The others stay as, or are demoted to, F/Os. Repeat every year, so those in the LHS would always be the top 50% by skill, ability and good organisation. That would remove LHS complacency or low ability. I was deeply shocked in my last sim that an "experienced" long-haul Captain did not know how to program a hold.

Autonomous aircraft could certainly cruise unaided up where there is nothing close to hit - as long as there were no system failures or passengers becoming ill. And once aircraft systems have been set and checked (by the pilots), they can auto-land, as we know. But what about taxiing, and the nose-wheel slipping in a turn due to ice or oil on the taxiway? We humans would instantly reduce the steering angle and perhaps brake gently. I know cars have ABS and traction control using yaw-angle processing, but it is hard to imagine an autonomous system dealing 100% with taxiing: a baggage truck pulling out in front of it, for example, or low visibility at night.

And why autonomous aircraft anyway? You still need a cabin crew, fuellers, engineers, ramp agents, loaders, caterers etc etc, someone has to coordinate all those people and plan the fuel load and check weather etc. Getting rid of two people amongst the dozens needed to get an aircraft into the sky seems to me to be the wrong focus.

Any company constantly tries to reduce costs. If a weaving mill can buy an automated weaving machine that does the work of two skilled weavers and only needs one unskilled worker to oil it, of course the company will become more efficient and make more money. But a weaving machine is not an aircraft or a coach operating in four dimensions and full of passengers.

tdracer
5th Jul 2020, 19:26
Or low visibility at night
Even today, electronic systems can see better in challenging visual conditions than humans - the only time the human Mark 1 eyeball is superior is in near-perfect conditions (and that advantage is shrinking fast). Add some smoke or fog, low light, etc., and electronics become superior (with the advantage of instantly determining distances to millimetres - far more accurately than any human). A CAT III autoland wouldn't be remotely possible using only human vision.
If we can design a car to drive down a busy, uncontrolled neighborhood road autonomously, taxiing an aircraft at a controlled airport becomes almost trivial.

Less Hair
5th Jul 2020, 19:44
True, there are drones that can auto-land on aircraft carriers these days. However, the extremely high total-loss rates and bad crash statistics of the bigger military drones suggest that human operators are still needed for unusual situations - operators with not only systems knowledge but pilotage skills and aviation background knowledge. The military tried to convert non-pilot ground staff into drone operators to save money, AFAIK with big problems: clueless operators attempting steep turns at high altitude and stalling, ignoring known icing conditions and losing their drone, and such.
Therefore, especially for passenger aircraft, I see a need for human pilots - at least two of them, on board - for some time to come. Just wait for the first drone crashes in public view and see how that affects their acceptance. We are observing this with automated cars already.

YYZjim
6th Jul 2020, 06:15
I don’t think that large passenger aircraft will ever be designed for a single (human) pilot. I think that the transition, when it comes, will be from two to zero.

Unless the pilot is chained into his seat (which is feasible) and guaranteed never to become incapacitated (which isn't), the possibility exists that the cockpit will become unmanned at some unpredictable phase of flight. The control system will have to be designed to take the aircraft all the way to a safe landing with no human input at all.

SLF are going to love Peter the Pilot. He has 5,000,000 hours and has memorized all the arrivals and departures at every airport in the world. He knows more about the effect of moving any control surface of his airplane than any of his human designers. He can call up and execute a non-normal checklist in a heartbeat. He is so efficient that he can even adjust the control parameters for a particular airplane’s quirks during flight.

Airlines are going to love Peter the Pilot, too. He can fly any airplane that shows up at the gate. He never complains to the Chief Pilot. He doesn't suffer from circadian lows. He remembers every lesson from his very first flight. He can train a new co-pilot in a minute with a USB stick. And, neither of them need any time in the sim.

Designing Peter the Pilot isn’t going to require artificial intelligence, or machine learning, or quantum computers, or any of a plethora (today’s word) of other fancy processes. Nope, he’s going to be developed in the traditional manner, with engineers trying to imagine all of the possibilities and to come up with strategies to achieve an acceptable outcome for each.

Unfortunately, the Peter they develop won’t be perfect. There will be accidents, and even fatalities. But that doesn’t mean that SLF will demand humans up front. Peter has almost unlimited capacity for being improved, unlike humans, who are effectively at their limit now.

Sully might be the cat’s pyjamas, but who gets to fly behind him now?

YYZjim

finestkind
6th Jul 2020, 06:58
Agree that the number of "superior pilots" who save aircraft is far outweighed by the number who cause accidents (whether from incompetence or fatigue is another question). But one of the biggest hurdles for the pilotless aircraft will be decision making. How many warnings were there on QF32? (I'm unable to find the number.) Although computers are faster than humans, and getting more so, they will continue to digest data before making a "D" (decision). If new data keeps coming in, when will the "D" be made - or do we get a "sensory" overload, with the computer reverting to piloting 101, fly the aircraft, which a) it never did and b) cannot do with data still incoming? Yes, the day will come when computers fly the aircraft, but the Sully example of a computer being quicker only holds water (forgive the pun) if the computer has a program for a bird strike at low level with thrust loss from both engines, for every possible airport (and any other scenario not yet thought of).

twistedenginestarter
6th Jul 2020, 07:59
Whilst there has been much discussion about pilotless operations, my post started by quoting Airbus. I say quoting Airbus but I actually mean quoting a Flight International article. I can't read it now because they say I've exceeded my article limit. If they think I'll give them any money for their entirely worthless re-printing of PR handouts they must be deluded.

Airbus refer to a new way of flying aircraft - potentially a complete game changer affecting most of its readers - and the 'journalists' at Flight International make absolutely zero effort to find out what it's all about.

I just pray somewhere there is someone worth their pay, picking up a phone and dialling Toulouse.

Uplinker
6th Jul 2020, 14:08
Even today, electronic systems can see better in challenging visual conditions than humans - the only time the human Mark 1 eyeball is superior is in near-perfect conditions (and that advantage is shrinking fast). Add some smoke or fog, low light, etc., and electronics become superior (with the advantage of instantly determining distances to millimetres - far more accurately than any human). A CAT III autoland wouldn't be remotely possible using only human vision.


If we can design a car to drive down a busy, uncontrolled neighborhood road autonomously, taxiing an aircraft at a controlled airport becomes almost trivial.



Rather you than me.:)


I was not saying human vision is better than electronic vision, I meant that the human eye/brain combination and a human's ability to adapt to changing situations and the unforeseen is vastly superior to a computer's.


If we think we can design a car to drive down the freeway and avoid all obstacles, but it confuses the side of a truck for the sky and crashes into the truck....?


Asimo is an extremely impressive humanoid robot that can run up a few stairs and walk bipedally unaided. But he has to spend about a minute standing in front of the stairs programming himself and his visual detectors before he can do so. As he does this he adjusts his position by a few cm one way or another before making the attempt. Then, eventually, he runs up four steps. Absolutely brilliant, really impressive. But any human who can walk can do that. And then as Asimo left the auditorium, he walked into a door that he was expecting to be open, but which had swung half shut.


I am not a non believer, my first career was in electronics and I have written computer code. I just don't see a valid reason for making commercial passenger aircraft autonomous. It won't prevent accidents, it will replace current accidents with other types.


CAT III autolands don't use electronic vision; they follow radio beams and radar-altimeter measurements. As you know. :ok:

twistedenginestarter
6th Jul 2020, 17:32
I had a short rummage around on Airbus's site. There was no real explanation as to why they were doing computer-vision taxi, take-off and landing. However, if you haven't already seen this, it will be informative:

https://www.cleansky.eu/european-aviation-in-the-driving-seat-clean-skys-disruptive-cockpit-for-large-passenger-aircraft

To quote: "The consensus at Airbus is that single pilot operations will be a necessary and inevitable 'game changer' to face next generation aircraft challenges."

PilotLZ
6th Jul 2020, 17:43
Even if there is a principally new transport-category aircraft flown by one pilot or, highly unlikely, operated autonomously from the ground, that will be decades away from now. With the present state of affairs, I seriously doubt that we'll see ANY major new type until the 2030s. Some Chinese fast-hand remake of something existing, like that dinosaur based on the MD80 which was rolled out recently - maybe. Some minor engine or computer upgrades to an existing type - maybe. But something principally new, involving technology which couldn't become widespread even in cars, let alone aircraft? I doubt it.

First, all major manufacturers need a return on investment for whatever was designed in the past decade. They will sell quite a lot of A320neo, B737MAX, A350, B777-9, E190-E2, A220 etc. before they even look at investing in a brand-new project, and that will drag on, given the overall deferral of orders for the next 2-4 years. Second, a realistic timeline for developing, testing and certifying a new aircraft type is at least 5 to 7 years, and that is if you use only or mostly off-the-shelf components. If you decide to implement avant-garde solutions like autonomous operation capability, you're looking at 8-10 years to make it happen.

In the next couple of years, airframers will be struggling to survive as new orders lag well behind airline recovery. Lots of prospective projects have already been binned because of that - think of the Airbus electric engine project, for example. So it's highly unlikely that anyone will roll out anything principally new anytime soon. And then there are the issues of legislation, insurance, earning public trust and so on.

tdracer
6th Jul 2020, 17:58
Uplinker - read my post #47 again. You're making the same error of basing the future state on current technology. Sure, autonomous cars still struggle with certain scenarios, but that's changing fast. Fast forward 50 years and add in several orders of magnitude more computing capability and who knows what capabilities will have been invented.

vilas
7th Jul 2020, 04:13
I am not a non believer, my first career was in electronics and I have written computer code. I just don't see a valid reason for making commercial passenger aircraft autonomous. It won't prevent accidents, it will replace current accidents with other types.
It is expected to save billions per year in salary costs. And when people die in accidents, what matters is not so much the type of accident but whether the overall number is significantly reduced; that can decide in favour of automation.

Uplinker
8th Jul 2020, 07:07
...........You're making the same error of basing the future state on current technology. Sure, autonomous cars still struggle with certain scenarios, but that's changing fast. Fast forward 50 years and add in several orders of magnitude more computing capability and who knows what capabilities will have been invented.

Ah OK. Well I don't think I will exist in 50 years time :)

I just think of the human's visual ability to, say, distinguish the adverts from the programs on TV, and knowing when to hit the mute button. And that is the level we would need to make a safe autonomous passenger transport system. Just imagine trying to put that into computer code, or even a neural network: These people moving across the screen are an advert, but these people moving across the screen are in the film.

Or for an autonomous car: that shape moving into the road is a dog - a driver shouldn't endanger themselves or others by avoiding it or braking heavily. Whereas that shape moving into the road is a child chasing a ball, whom we must try to avoid at all costs. How do you program that? Basically, by recreating the entire human eye-brain-memory processing system, which has taken millions of years to develop. Granted, a plane only taxis along defined paths, but there are still plenty of unexpected hazards that an autonomous system would need to accommodate.

I heard a chap on the radio who is trying to develop autonomous freight barges, so we will see :ok:

Pugilistic Animus
9th Jul 2020, 08:29
Just to add a very tiny bit to this...weight doesn't affect glide ratio; the heavier plane reaches the landing spot sooner due to a faster sink rate at a particular glide speed.

jmmoric
9th Jul 2020, 14:01
Just to add a very tiny bit to this...weight doesn't affect glide ratio; the heavier plane reaches the landing spot sooner due to a faster sink rate at a particular glide speed.

The light aircraft and the heavy aircraft will not have the same air speed though.

The speed of the lighter aircraft will be slower than that of the heavier aircraft, if they are to maintain their best glide ratio.

You probably meant that.

Pugilistic Animus
9th Jul 2020, 14:13
The light aircraft and the heavy aircraft will not have the same air speed though.

The speed of the lighter aircraft will be slower than that of the heavier aircraft, if they are to maintain their best glide ratio.

You probably meant that.

Yes that is what I meant...too many beers, thank you
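For the curious, the exchange above can be put into numbers. Best L/D depends only on the drag polar, while the speed at which it occurs grows with the square root of weight, so the heavier aircraft glides the same distance but gets there sooner. The drag-polar coefficients, air density, wing area and weights below are made-up illustrative values, not those of any particular type:

```python
import math

# Illustrative parabolic drag polar: CD = CD0 + k*CL^2 (values invented)
CD0, k = 0.020, 0.045
rho, S = 1.225, 124.0        # sea-level density (kg/m^3), wing area (m^2)

# Best L/D occurs at CL = sqrt(CD0/k); it depends only on the polar, not weight.
CL_best = math.sqrt(CD0 / k)
LD_best = CL_best / (CD0 + k * CL_best**2)

def best_glide_speed(weight_newtons):
    # From L = W: V = sqrt(2W / (rho*S*CL)), so V scales with sqrt(weight).
    return math.sqrt(2 * weight_newtons / (rho * S * CL_best))

for W in (500e3, 700e3):     # lighter and heavier aircraft, in newtons
    V = best_glide_speed(W)
    sink = V / LD_best       # sink rate at best glide (m/s): faster when heavy
    print(f"W={W/1e3:.0f} kN  V={V:.1f} m/s  L/D={LD_best:.1f}  sink={sink:.2f} m/s")
```

Both weights print the same L/D; only the speed and sink rate change, which is exactly the point made above.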

jcbmack
11th Jul 2020, 00:29
tdracer I fully agree with you. It's a common mistake everyone makes, judging tomorrow's pilotless aircraft by today's technology. But as I said, this is an endless discussion and just that; it is not going to decide the future. It is possible to program a computer to deal with all the mistakes and errors recorded in aviation, but not possible to train a human for all those situations. Routinely, millions of hours are flown without any requirement for human creativity. But the discussion will carry on.

One of my areas of research is neural networks, and another is on a team working in computer vision - very technical, advanced stuff. We are very excited about the results, and over time these techniques will be indispensable in ML and in aviation systems. The technology has revolutionized biochemistry and molecular biology as well. Still, time and again we see errors in the systems, and real limitations compared to humans.

Uplinker
11th Jul 2020, 08:09
The vision system of living things is awesomely impressive. It has taken around a billion years to evolve from light-sensitive spots on the skin to the advanced vision system we have today. It uses incredibly sophisticated sensors, the eyes, linked to a huge brain - about a third of which is used for visual processing - and it combines that with memory to assemble a live, detailed three-dimensional image. The eye-brain-memory combination also has a subconscious function that detects the movement of potential predators and alerts us.

The result is a full colour, real time*, moving, three dimensional, detailed vision-scape.

Those visual tests you have to pass when registering with certain internet sites - 'click on the photographs containing traffic lights or zebra crossings', or 'type in the wonky letters' - to prove you are not a robot, show how basic computer vision systems are at the moment.


*A few hundred milliseconds delayed owing to all the sophisticated processing.

twistedenginestarter
11th Jul 2020, 09:12
Uplinker, we are so, so much faster at creating things than Evolution.

If you watch football, cricket or tennis on the telly in the UK, you'll notice computers are needed to work out where the balls are and their trajectories. Unfortunately, our billion-year-old 'eye-brain-memory combination' doesn't cut it.

In the UK, trains are driven by humans. To my knowledge there are only two exceptions: a short stretch of tunnel south of St Pancras, where I believe human driving was deemed incapable of supporting the traffic density, and of course the Docklands Light Railway, which carries over 120 million passengers per annum. The DLR demonstrates that while it can be impossibly challenging to drive things the way humans do, it can equally be trivial to drive things the way computers can. The 1960s Trident could carry out a CAT 3 landing, but only because a wire had been put down the centre of the runway.

The discussion of whether computers can emulate human beings misses the point. You don't need an autopilot to be able to do everything humans can do. An autopilot does not need to see that there is a Land Rover on the taxiway with a couple of blokes looking at a mark on the ground; all it needs is to detect whether there is something solid in its path, or to have ground systems that monitor and provide control information. For normal operating procedures, the task of flying a plane is very much simpler than the capabilities of a human pilot.

As far as anyone can tell, it is at least ten years before an automated car could be let loose in Central London. Yet automated cars have clocked up millions of miles, probably with a higher level of safety than humans: there have been a few fatalities, but fewer than if humans had been driving instead. If you got a group of aviation engineers and PPRuNe posters in a room together, I bet it wouldn't take more than a day to come up with a technical environment that would support automated flight. Cars, for example, need fine-density maps, 5G-like internet connections, various road furniture that defines the landscape to the car, protection from moving objects entering lanes in an unplanned way, etc.

I started this thread because of Airbus's PR release, slavishly regurgitated by Flight International. In retrospect it has turned out to be a red herring. Why Airbus has spent so much money on computer vision is not obvious. I suspect the funding mainly came from EU subsidies ('Clean Sky'); perhaps they used it to subsidise other A350-1000 testing tasks. More relevant are other things they are doing, like the Disruptive Cockpit project - https://www.airbus.com/innovation/autonomous-and-connected/autonomous-flight.html - which is less esoteric and specifically aimed at single-pilot operation.

As always, these things are technically possible long before the political and commercial will allows them to happen.

safetypee
11th Jul 2020, 14:46
We view machine performance as being black or white, yet we exist and think in an uncertain grey world.
We should judge with a range of thoughts, but all too often we, with abhorrence of uncertainty, seek black or white solutions.

“Just what do you think you’re doing, Dave?”
https://demos.co.uk/blog/just-what-do-you-think-youre-doing-dave/

"… they are still just good guesses made on whatever data we fed the machine."

Don't miss the embedded link https://www.wired.com/2016/05/the-end-of-code/

"… humans still have to train these systems. But for now, at least, that's a rarefied skill."

However, if machines have to learn to act like pilots, which pilots will be the teachers and what will they teach?

Or if "… computers are becoming devices for turning experience into technology", the machines learn by copying, who will these systems copy and in which situations?

… and what is 'experience' ?

Uplinker
12th Jul 2020, 07:36
I believe that the London Circle tube line is driven by computers, but there is a human driver in the cab to operate the doors, and presumably to hit the emergency stop if the computer goes haywire or the human sees an obstacle. But a tube train runs in one dimension, on tracks. It also runs in specific tunnels, so the probability of obstructions is very low.

What you say about putting wires down the runway, or special street furniture etc is fine - that will work - and for that matter, planes could be simply towed to and from the runway by tug crews. But that is not computer vision, and you will still need pilots to 'conduct the flight', since they do rather more than taxi. Would you or your family fly in a pilotless aircraft?

A so-called autonomous car, even if the route is lined with copper wires or whatever, will still not be able to differentiate between a rubbish bag blowing into the road and a child chasing a ball, so it will have to emergency stop for anything it "sees", even a tumble weed, or a bird flying across the road, as they do.

Perhaps instead; a way forward to preventing horrendous airplane accidents would be to have live flight data monitoring?
If certain FDM parameters are significantly exceeded, or the profile looks badly wonky; a flight could be flagged up in real time and the chief pilot in their office could look at the flight parameters and have a direct communication channel to say to the crew, "er guys, what is going on here? You will go-around or immediately do x,y,z." etc.

Or maybe even something like the Apollo mission control?
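The live-FDM idea above is essentially threshold monitoring on a telemetry stream: compare each incoming parameter sample against limits and flag exceedances for someone on the ground to review. A minimal sketch; the parameter names and limits are invented for illustration, not taken from any real FDM programme:

```python
# Invented per-parameter limits (lo, hi) an ops desk might monitor in real time.
LIMITS = {
    "sink_rate_fpm":     (0, 1200),   # e.g. below 1000 ft on approach
    "bank_angle_deg":    (-35, 35),
    "vapp_deviation_kt": (-5, 15),
}

def check_sample(sample):
    """Return a list of (parameter, value, (lo, hi)) for every exceedance."""
    events = []
    for name, value in sample.items():
        lo, hi = LIMITS.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            events.append((name, value, (lo, hi)))
    return events

# One telemetry frame from a hypothetical unstable approach (made-up numbers):
frame = {"sink_rate_fpm": 1500, "bank_angle_deg": 12, "vapp_deviation_kt": 22}
for name, value, (lo, hi) in check_sample(frame):
    print(f"FLAG: {name}={value} outside [{lo}, {hi}]")
```

A real system would add trend logic, phase-of-flight context and a secure downlink, but the core of "flag the flight in real time" is no more than this comparison run continuously.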

Pugilistic Animus
12th Jul 2020, 20:52
Even when the 707 was the Queen, they managed to fly one by remote control during a test (something to do with fire, though I can't remember whether it was fuel-safety or material-flammability testing), and that was in the 60s. There has been autothrottle forever, too, and DC3s were equipped with autopilots. The point is that, even with today's technology, airplanes are still very far from being autonomous, even though the base technology for pilotless aircraft has been there all that time. Is there a reason we still haven't done it?
Edit: How could I forget to mention CAT III autoland?

What I really wanna see is a new more efficient supersonic airplane...with pilots.... the tech people are really barking up the wrong tree.

FullWings
13th Jul 2020, 07:53
I have no doubt that some time in the future, we will have autonomous aircraft that are safer and more efficient than human piloted ones. However, we will have to completely change the way we certify things if ML/AI is involved. Maybe a LPC/OPC and LOE for Hal? At the moment I have little confidence in the state-of-the-art as we are only just starting to realise how complex the problems are and how fragile and narrow the competence of things like neural networks can be.

Also, 95% of my job as a pilot is interacting with other humans and trying to sort out their problems, which tends to get overlooked.

Peter47
17th Jul 2020, 08:56
I believe that the London Circle tube line is driven by computers, but there is a human driver in the cab to operate the doors, and presumably to hit the emergency stop if the computer goes haywire or the human sees an obstacle. But a tube train runs in one dimension, on tracks. It also runs in specific tunnels, so the probability of obstructions is very low.

What you say about putting wires down the runway, or special street furniture etc is fine - that will work - and for that matter, planes could be simply towed to and from the runway by tug crews. But that is not computer vision, and you will still need pilots to 'conduct the flight', since they do rather more than taxi. Would you or your family fly in a pilotless aircraft?

A so-called autonomous car, even if the route is lined with copper wires or whatever, will still not be able to differentiate between a rubbish bag blowing into the road and a child chasing a ball, so it will have to emergency stop for anything it "sees", even a tumble weed, or a bird flying across the road, as they do.

Perhaps instead; a way forward to preventing horrendous airplane accidents would be to have live flight data monitoring?
If certain FDM parameters are significantly exceeded, or the profile looks badly wonky; a flight could be flagged up in real time and the chief pilot in their office could look at the flight parameters and have a direct communication channel to say to the crew, "er guys, what is going on here? You will go-around or immediately do x,y,z." etc.

Or maybe even something like the Apollo mission control?

Some railways do operate entirely automatically, such as Paris Métro Line 1 and several routes in Singapore. Only certain sections of the Circle line are there yet, but it will in due course become fully automatic, as several other tube lines already are. The driver closes the doors but is advised when to do so by the regulation system. As said, it works well in a self-enclosed system. The operator must still look for obstructions, although lidar systems are being developed to do this (and Docklands Light Railway trains may not have anyone at the front). He or she is primarily a systems operator, able to take over if something goes wrong. With a metro system, the theory is that help will always be nearby in an emergency - reasonably so, since a driver may be incapacitated should there be, for example, a collision. Also, in the case of a problem, the system should fail safe: the train will stop, ideally at the next station but possibly in a tunnel, and will not fall from the sky. Obviously there are issues should there be a fire, or should someone on board have a medical emergency.

If absolute safety is the target (not that you will ever get it), you can argue that a plane should have dual control systems, one on board and one on the ground. Working out a protocol for which to use in an emergency - such as a suspected on-board hijacking, or indeed a failure of the ground facilities or the communication system - could be an interesting problem! Not to mention whether the pilot or the ground controller is legally in command.

You could argue that if the car were invented today the historical system of obtaining driving licences would be considered totally inadequate. You might want to limit driving to say the top 50% of the population by aptitude and insist on overlaying onboard anti collision systems. All very hypothetical...

JRK
20th Jul 2020, 10:29
The error bracket of any decent computer program today is much tighter than ours as humans. Full automation of flight ops is therefore inevitable. Better get used to it and let it go as soon as possible. Professions disappear all the time; this is part of our human evolution. Being pissed off about the airline pilot vanishing is like being pissed off about commercial sailing ships - for many centuries the economic backbone of most countries - having eventually become redundant.

vilas
20th Jul 2020, 15:52
The error bracket of any decent computer program today is much tighter than ours as humans. Full automation of flight ops is therefore inevitable. Better get used to it and let it go as soon as possible. Professions disappear all the time; this is part of our human evolution. Being pissed off about the airline pilot vanishing is like being pissed off about commercial sailing ships - for many centuries the economic backbone of most countries - having eventually become redundant.
Yes! One Sully doesn't make a summer. Poor piloting is the worst advertisement for human presence in the cockpit. As more and more human factors are discovered, the message is that for a human to work safely and efficiently, a hundred things have to be changed or done differently. Technology may find it safer and, importantly, cheaper - it's a business, after all - to replace the human. It is sad, but as you say, it's part of evolution. It may start with single pilot, then cargo flights, then pilotless. Will accidents happen? Surely there will be some, but as long as they are significantly fewer it won't matter. People will board if it costs less. Some may even see it as an adventure, and the phobics are uncomfortable even now.

fltlt
21st Jul 2020, 03:36
Take a look at how many highly paid support technicians and equipment it takes to operate the Northrop Grumman Triton, or for that matter the plain old General Atomics family, Predator, etc., even discounting military specific hardware.

The dollars per flight hour will make your eyes bleed, and the Triton is currently the closest thing to the level of autonomous operation the airlines would need. It all comes down to how and where you do the required information processing, on board or off.

Two pilots/aircraft, minuscule cost compared to current autonomous operations, and nobody sees it getting any cheaper.

That's one of the reasons autonomous military aircraft have fallen off the radar, so to speak; less manpower/footprint is the groupthink right now.
A few demo sideshow programs will run; my bet is they peter out. Companies like to sell what they demo; if not, it's on to something else.

But I could be wrong.

compressor stall
21st Jul 2020, 03:40
For every Sully there are many Karachis.