PPRuNe Forums (https://www.pprune.org/)
-   Tech Log (https://www.pprune.org/tech-log-15/)
-   -   Can automated systems deal with unique events? (https://www.pprune.org/tech-log/569674-can-automated-systems-deal-unique-events.html)

evansb 27th Oct 2015 18:18

Fully automated cockpits will increase cyber attacks, not reduce them. You can't cyber attack a computer that isn't there...

Tourist 27th Oct 2015 18:55

But the computer is there.

All Boeings and Airbuses are already entirely reliant on computers, with no fallback options if the computers fail, so there is no added risk.

There is no fallback mode that doesn't require a computer.

slast 27th Oct 2015 20:14

I would like to hear from the guys who are actually really experienced in the automation and control side about this.

There is a lot of talk about programming and sensors and databases and systems that learn, which are progressing by leaps and bounds. I have no doubt that it will be possible in a relatively short timeframe to do far more things quasi-autonomously than we do now. BUT:

In the following I am using the term sensor and system very loosely, e.g. sensor simply means a "problem detector" and "system" simply means some aspect of safe flight. So it could be a sensor for a hydraulic leak in the airframe "system" or "conflicting traffic" in the air environment "system", or just about anything else, we want to stay conceptual here.

For the sake of argument, imagine we have a device we'll call a super-smart box (SSB), and accept that the SSB design is such that it "doesn't make mistakes". Every problem can be detected by a sensor, and the SSB produces the correct output signals to deal with it, with greater reliability than a human can.

But correct SSB output is not the end product we are looking for. A change of trajectory of hundreds of tons of aircraft is what we are looking for, and SSB output needs to be converted to physical machinery activity.

My question is what happens in this chain of events.

Sensors detect problem in system A > SSB chooses correct response which demands action by physical system B > physical system B does not respond as expected by SSB. It may do nothing or may do something entirely different. "Something entirely different" could trigger other sensors in system A, B, C, D etc.... ad infinitum.

What is the process by which the SSB knows what to do now, and who is responsible for the correct outcome of that process?
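
To make the chain concrete, here is a toy sketch of the loop slast describes (all names are hypothetical, not any real avionics interface): the supervisor detects a problem, commands physical system B, then checks whether the measured response matches its own prediction.

Code:

EXPECTED_TOLERANCE = 0.05  # acceptable gap between predicted and measured response

def ssb_step(sensors, actuators, model):
    """One supervisory cycle: detect, command, verify."""
    problem = sensors.detect()              # a sensor flags a fault in system A
    if problem is None:
        return "nominal"

    command = model.best_response(problem)  # SSB chooses the "correct" response
    actuators.apply(command)                # demand action from physical system B

    predicted = model.predict(command)
    measured = sensors.measure_response()
    if abs(measured - predicted) <= EXPECTED_TOLERANCE:
        return "resolved"

    # slast's open question lives here: system B did nothing, or did
    # something entirely different, possibly tripping sensors in systems
    # C, D, ... The SSB now needs a policy for an aircraft that no longer
    # matches its own model of it.
    return "model_mismatch"

The unanswered part is what policy fills that final branch, and who is responsible for certifying it.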

G-CPTN 27th Oct 2015 20:33

IF THEN ELSE :E

Nested if necessary - or concatenated.


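Taken literally, and with tongue firmly in cheek, the recipe looks like this (the events and handlers are placeholders, not any real checklist logic); the catch is the final branch:

Code:

def handle(event):
    if event == "engine_fire":
        return "shut down engine"
    elif event == "hydraulic_leak":
        return "isolate hydraulic system"
    elif event == "conflicting_traffic":
        return "follow the TCAS RA"
    else:
        # ...the branch for the truly unique event this thread is about.
        return "no rule was ever written for this"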

llondel 27th Oct 2015 20:36


Had the programmers not built that capability into the software, Neil Armstrong wouldn't have been the first man on the Moon.
He probably would have been, or if not, a very close second, depending on what hit first.

MG23 27th Oct 2015 20:58


Originally Posted by llondel (Post 9160287)
He probably would have been, or not then a very close second depending on what hit first.

Aborts could be flown by a completely different, independently developed, backup computer. They didn't trust them that much :).

cjm_2010 27th Oct 2015 21:34

I'm a lowly student on my way to an NPPL(m), but my job is automation of software testing processes.

I always sell automation to my clients in the same way: it will save money by taking on the bulky, repetitive tasks (like regression testing), but there will always be an element of manual intervention required. Humans will spot things the automated code won't; automation is not good at performing highly complex tasks that require intuition, and it cannot perform exploratory testing.
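
For illustration, the bulky, repetitive work meant here is the kind of thing below: a hypothetical automated regression check (the fuel function and its numbers are invented) that re-runs identically on every build, but only verifies what somebody already thought to assert.

Code:

# Hypothetical regression check, pytest style. Cheap to re-run on every
# build, but it can only catch what it was explicitly told to check.
def fuel_required(distance_nm, burn_per_nm, reserve_kg):
    return distance_nm * burn_per_nm + reserve_kg

def test_typical_sector():
    assert fuel_required(450, 11.0, 1200) == 6150

def test_zero_distance_keeps_reserve():
    assert fuel_required(0, 11.0, 1200) == 1200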

Automation tools are often sold on the promise that they will catch everything, and it is true that, in some rare cases, defect detection rates may well be better than when the testing is executed manually. But true AI does not exist as far as I know, and even if processing power ever reaches that point, I still firmly believe the human element can't ever be eliminated. We are simply far better at picking the important stuff out of the data we're bombarded with, we're better at recognising patterns, and we can act upon those cues subconsciously.

Granted, sometimes the humans make incorrect decisions. But we are adaptable and often able to self-correct in time to avoid disaster.

I don't think we'll see pilotless passenger aircraft anytime soon. And I'd certainly never set foot on an aircraft without a human flight crew.

peekay4 27th Oct 2015 22:26

No offence CJM but you can't really compare QA test automation to life-critical systems automation.

Two very, very different worlds.

CONSO 27th Oct 2015 22:37


All Boeings and Airbus are entirely reliant with no fallback options
Wrong re Boeing - don't know re Airbus.

Through the 777, Boeing has always had cable backups running direct to a minimum number of flight controls and trim tabs. Simply push/pull hard enough to override the autopilot, safety limits, etc. 707 and 747 aircraft have been saved by that concept: the pilot always has the final word, despite the many bells, lights, buzzers and pull-up warnings.

Read about the Gimli Glider (a 767) for but one example, a China Airlines 747 over the Pacific for another, and an early 707 over the Atlantic whose wings were bent but which remained in service.

vmandr 27th Oct 2015 23:08

'philo' questions tend to generate more questions than answers.

here are a few of mine...

assuming Sully is 'the perfect pilot':

a. how will you program/teach your 'computer pilot' Sully's hope, fear, anger and sense of futility, or his 'taking risks'?
b. how will you teach situational awareness and see-and-avoid to an inherently 'blind' computer pilot?
c. how will you inhibit 'self-destruct'?
d. how will you teach responsibility and 'departure from the rules', as in ICAO Annex 2 and FAR 91.3:
'The pilot-in-command of an aircraft shall, whether manipulating the controls or not, be responsible for the operation of the aircraft in accordance with
the rules of the air, except that
the pilot-in-command may depart from these rules in circumstances that render such departure absolutely necessary in the interests of safety.'

_Phoenix 28th Oct 2015 04:16

Too much automation doesn't work for safety

Flying is not rocket science or AI science, whatever. The pilot just needs an operational aircraft that responds correctly to inputs, and he will put it on the ground within reasonable g factors. Volume wrote a very good post on page 2; he clearly shows why more automation will not improve safety.
These days automation is good only for the profitability figures, but it erodes pilots' skills. When unique event X happens, the pilot has to aviate alone, while the automation is there only to pop up an endless list of ECAM messages, as in the QF32 case, or even worse, as in the XL888 case.
Apart from the fact that a fully automated airliner is a sweet delusion, I found this thread interesting and at least a good warm-up for the QZ8501 final report, due (again) this month.

Tourist 28th Oct 2015 06:28

CONSO

Wrong

Boeing has FADEC on the engines. A computer hack means you have lost your engines.

We ignore FADEC and have got used to it because it almost never fails, but it is a computer and it is 100% required to fly the aircraft.

If the FADEC fails, the aircraft is lost.

Slast

Your question about sensors requiring responses and not getting the right one is another area where computers are better.

After an F-15 incident, and post the Sioux City crash, NASA did some work on exactly that sort of problem. That was a long time ago, and already the learning systems were so good that the problem became invisible to the people on board.

https://www.nasa.gov/centers/dryden/...main_srfcs.pdf

This is old tech. A problem solved.
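
For a flavour of the reconfiguration idea in that NASA work, here is a toy control-allocation sketch (the surfaces and effectiveness numbers are invented for illustration): when a surface fails, its column of the effectiveness matrix is zeroed and the remaining surfaces are re-tasked to deliver the same demanded moments.

Code:

import numpy as np

# Toy reconfigurable control allocation, loosely in the spirit of the
# self-repairing flight control studies. All numbers are invented.
# Rows: roll and pitch moment demands. Columns: left aileron,
# right aileron, left elevator, right elevator.
B = np.array([[ 1.0, -1.0,  0.2, -0.2],   # roll effectiveness
              [ 0.1,  0.1,  1.0,  1.0]])  # pitch effectiveness

demand = np.array([0.5, -0.3])            # demanded roll and pitch moments

def allocate(B, demand, failed=()):
    """Distribute the moment demands over the healthy surfaces."""
    B_eff = B.copy()
    B_eff[:, list(failed)] = 0.0           # a failed surface contributes nothing
    return np.linalg.pinv(B_eff) @ demand  # minimum-norm surface deflections

print(allocate(B, demand))                 # all surfaces healthy
print(allocate(B, demand, failed=(0,)))    # left aileron gone: others re-tasked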

vmandr

Please make sense.

Herod.

You are acting as if the computer must be blind.


Can we just have a reality check here for a moment re the situational awareness bit.

Does anybody think that for the last 30 years fighter pilots have been finding other aircraft using their eyes?

No, they don't. They have systems which do a far better job.

Modern military aircraft have a range of sensors which surpass the human eye in every way. Google F35 EO DAS.

AN/AAQ-37 Distributed Aperture System (DAS) for the F-35

This is obviously aside from the fact that the eye has already been considered and discarded as a reliable method of not hitting aircraft.
It's so bad that we invented TCAS.

In VMC, are you allowed to disregard TCAS?

No.

Essentially, we are already relying on automated systems for aircraft avoidance.
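
To put a number on that reliance, here is a stripped-down illustration of the time-to-closest-approach ("tau") test at the heart of TCAS; real TCAS thresholds vary with altitude and include vertical logic, so the 25-second figure here is a placeholder.

Code:

# Stripped-down illustration of the TCAS "tau" idea: time to closest
# approach = range / closure rate. The threshold is a placeholder; the
# real logic is altitude-dependent and includes a vertical dimension.
RA_TAU_THRESHOLD_S = 25.0

def resolution_advisory_needed(range_nm, closure_kt):
    if closure_kt <= 0:                       # opening or co-speed: no convergence
        return False
    tau_s = range_nm / closure_kt * 3600.0    # hours to seconds
    return tau_s < RA_TAU_THRESHOLD_S

# 2 nm apart, closing at 400 kt: tau = 18 s, inside the threshold.
print(resolution_advisory_needed(2.0, 400.0))  # True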


You will note that I am attempting to add references and evidence to turn my opinions into accepted facts.

Just saying things without any supporting evidence is fairly worthless and does not contribute to the debate.

felixthecat 28th Oct 2015 07:33

Having flown modern jets and the old clockwork wonders for many years, I have had on occasion to think outside the box and not follow SOPs. Computers follow SOPs, and that's not always great. Also, even in today's marvels of computerised wizardry, I have flown several flights where the computers have dropped out and systems have been lost..... we regularly get notifications from the manufacturers with changes to software and block points, not to mention workarounds that the PILOTS have to implement until the software is fixed.

Who implements the workarounds when the pilots are gone? Does the computer see it's wrong and correct itself? You would hope so, but even today we haven't got to that stage.

ChickenHouse 28th Oct 2015 08:05


The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.
A philosophical answer: why?

Automation and standardization always come as a couple. The current trend is towards highly automated systems for the problem of "getting from A to B in the air".

Earlier in history this problem was solved by humans, called pilots, who explored the ways it could be done without too many losses. As always, once you reach a certain level of confidence in a "should be" process, you can start automating along the track. One of the reasons we are having this discussion is that automation started before we really understood enough; we left the decision to the commercially minded too early.

You do not need an automated system to deal with "unforeseen" events once sufficient knowledge has been gathered to avoid getting into situations where those events can have an impact. If you have found a process with a certain corridor of stability, the only thing you do is follow the center line; unforeseen events then really do no harm, so you can ignore them. Creativity was only needed to find the path along which the automation is rock solid. Once you follow that road, it does not matter whether a properly functioning human, formerly called a pilot, or a machine does the work, because the probability of hitting an unforeseen and potentially dangerous event is so low that the human no longer retains the skill anyway. This is one of the essentials of the "follow the magenta line" discussion.

Do we want to develop an automated system with creativity and artificial intelligence? I say no, as it would pose the same threats to the earth as humans do, but with much more power.

Tourist 28th Oct 2015 09:35

Chickenhouse

That is a very valid point.

Tourist 28th Oct 2015 09:38

Felixthecat

That would be a reasonable question if your basic tenet were correct.


"Modern" aircraft are not in fact "technical marvels" at all.

They are astonishingly old technology.

Nobody drives a car that old-fashioned. Nobody uses a telly or a phone a tenth of the age of that technology.

The Airbus is practically Stone Age.

http://i404.photobucket.com/albums/p..._D-AIQT_01.jpg

http://i404.photobucket.com/albums/p...s/untitled.png

These two gestated at about the same time for about the same length of time. They even have the same plastic beige finish...
Clever in their day......

Imagine if we judged what a phone would be capable of today based upon its 1980s equivalent.

Don't judge what old aircraft can do against the future.

stilton 28th Oct 2015 10:00

I'd bet that most, if not all, of the posters advocating total automation as the future for transport aircraft are not pilots.


A few thousand hours of real flight time, experiencing the myriad dynamic situations and unexpected failures most pilots have, should convince even the most die-hard computer nerd that you can't program for every situation, not to mention computer failures themselves.


So allegedly Sully could have 'made' Teterboro that day. Interesting. What if he had tried for it and come up short? You can't always predict glide distance, and winds can change at a moment's notice; in that case their chances of survival were most likely zero. Imagine 'landing' an A320 in a built-up area without power :eek:



He made the right decision, saving everyone using his judgement, experience and skill, three qualities a computer will never have.

Tourist 28th Oct 2015 10:46

Stilton

Thank you for a perfect example of an entirely opinion-based post with no evidence to back up any of it.


Wishful thinking will not change anything.


Your example could be turned around.

What if he had flown to the river and hit a boat, killing all on board, and the computers had then suggested that he could have made the runway...?


Re judgement

Computers don't use judgement. They use factual, physics-based calculation. They will always beat a human at such things.

Re Skill

Can you fly at coffin corner as well as a computer? If you can, you have more "skill" than U2 pilots.
Can you fly a perfect speed, height and heading better than a computer? If you can, you need to start a super flying school...

Re experience

Can you hold the combined experience of all computer pilots and instantly share that knowledge with every other pilot, so nobody ever makes the same mistake twice, or are you doomed to keep re-learning the same mistakes again and again?

p.s. I'm a pilot.

Tourist 28th Oct 2015 10:55

I notice, incidentally, that many on here are choosing to mandate that, to be viable, autonomous aircraft have to match the best of pilots like Sully in their strongest areas.

The reality is that most pilots are nowhere near Sully.

What matters is whether, overall, there will be fewer deaths with autonomous aircraft.

Not whether there will be different causes, because there inevitably will be, particularly in the early stages, but whether lives and money (airlines are businesses) are saved.

Autonomous pilots will never be drunk.
Autonomous pilots will never be suicidal.
Autonomous pilots will never be tired.
Autonomous pilots will never be stressed.
Autonomous pilots will never be rusty.
Autonomous pilots will never fall out with each other in the cockpit.
Autonomous pilots will never misread a plate/minima.
Autonomous pilots will correctly carry out TCAS RAs.
Autonomous pilots will never ignore a GPWS pull-up.
Autonomous pilots will never fail to fly a perfectly serviceable 777 to a VMC runway.
Autonomous pilots will follow the rules/SOPs.
Autonomous pilots will never break the law.


You need to judge any computer against the average, not the exception, and the average is very, very average...

framer 28th Oct 2015 11:48


The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary. This would remove the "can it deal with any hypothetical situation?"
Can you imagine a guy on the ground, sipping his coffee, being "handed" Sully's ship with both engines out, and making the same decisions Sully made?
Nup. The aircraft would be at 500 ft AGL before he had begun to understand the situation. He wouldn't be 'invested' enough.

