PPRuNe Forums

PPRuNe Forums (https://www.pprune.org/)
-   Tech Log (https://www.pprune.org/tech-log-15/)
-   -   Can automated systems deal with unique events? (https://www.pprune.org/tech-log/569674-can-automated-systems-deal-unique-events.html)

slast 26th Oct 2015 16:21

Can automated systems deal with unique events?
 
There has always been interesting comment on Prune about software reliability, bugs, design requirements, testing, etc., most recently under the topic of a B787 Dreamliner engine issue. There appear to be a significant number of Ppruners who are serious and knowledgeable on the subject.

I would like to ask those members a philosophical question. This has an impact on the argument that a safety priority now should be the elimination of human pilots from the system via automation.

The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.

The pro-automation lobby (see for example the thread I started in March, '"Pilotless airliners safer" - London Times article') starts from the view that, as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make and the consequent accidents.

This first started being discussed seriously in the late 1980s, when the Flight Engineer function was automated out of the B747 to create the -400, and out of the DC10 to create the MD11, etc. (Note - this was not the same as the 3-person vs. 2-person crew controversy, so please don't mix that into it!)

There has been a multiple-order-of-magnitude increase in computing capability since then, but my feeling is still the same. Human pilots on board will always be able to make SOME attempt to deal with a completely unforeseen and unique event that arises from a coincidence of imperfections in the total aviation system (vehicle, environment, and people) - even if unable to do so 100% successfully.

So: is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?

The entire industry works on a concept of "acceptable" and "target" levels of safety, involving a probability of occurrence and a consequence of events that society is willing to accept. The regulatory authorities lay down numbers for those probability and consequence elements at various levels.

It seems to me that it would not be possible to design any automated system to control physical equipment like an aircraft without making assumptions about that aircraft and its components, one of which must be that component failures ALWAYS occur no more often than the required probability.
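To put rough numbers on what "acceptable" means - a back-of-envelope sketch in Python, assuming the commonly quoted certification target of 1 in 10^9 per flight hour for catastrophic failure conditions (the fleet figures are purely illustrative):

```python
# Back-of-envelope sketch: what a 1e-9 per flight hour "catastrophic"
# target implies at fleet scale. All fleet numbers are illustrative.

p_catastrophic_per_fh = 1e-9     # certification target, per flight hour
fleet_size = 25_000              # illustrative worldwide airliner fleet
hours_per_aircraft_year = 3_000  # illustrative annual utilisation

fleet_hours_per_year = fleet_size * hours_per_aircraft_year
expected_events_per_year = p_catastrophic_per_fh * fleet_hours_per_year

print(f"Fleet hours/year: {fleet_hours_per_year:.2e}")
print(f"Expected catastrophic events/year for such a system: "
      f"{expected_events_per_year:.3f}")
# ~0.075/year, i.e. roughly one event every 13 years across the whole
# fleet - "acceptable" only because the probability is assumed to hold.
```

The whole design argument stands or falls on that assumed probability actually holding in service.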

In reality, human errors occur in all stages of the process of getting a paying customer to their destination. In the vast majority of cases these errors are caught by the myriad checks in the system, but some are not. When two or more such trapping failures coincide, they may end up as a problem that until now has required the pilot(s) to act creatively, because the situation has never been considered as a possibility. That lack of foresight in itself might even be classed as a human error in the specification and implementation of the checking process.

To a human designing an overall automated control system, either an event is possible and can occur no more often than the required frequency, or it is impossible and need not be considered. There isn't a halfway house where the design engineer can say "this isn't supposed to happen but I think it might, so I'll cater for it." Apart from anything else, what steps can he take to cater for it when there is no means of knowing what the other circumstances are?

Take an uncontained engine failure, which is supposed to be a very improbable event. To quote a Skybrary summary: "Each uncontained failure will result in a “unique” combination of collateral damage ....... [which] carries the greater potential risk and that will require creative pilot assessment to ensure a positive outcome is achieved." That was amply demonstrated on QF32, where the problem originated in human errors in manufacturing and was prevented from becoming a catastrophe by the pilots.

Other "unique" event examples which show that they are not so rare as to be negligible might include 2 within a few years in one airline alone - the BA B777 dual engine flameout on short final LHR and B744 leading edge flap retraction on takeoff at JNB. Both were survived largely due to instantaneous on-the-spot human "creativity" in recognising that the situation did not conform to any known precedent.

Issues of bugs, validation, verification, system analysis etc. appear to me to be essentially about meeting probability requirements for "known" possibilities. Is there an additional requirement that will have to be met for "creativity" in such a system before a pilotless system can even start to be considered?

Unless such a creative artificial intelligence system is included, is the concept of automating the pilot out of the commercial aircraft cockpit doomed to fail, given that ALL human error, and with it 100% of the liability for all consequences of any unique event, will clearly be transferred to the manufacturer and/or other suppliers?

Finally, when such an event occurs, will "society", in the form of the legal processes which will inevitably follow, agree that the numbers used since the 1950s to define an acceptable level of safety to the authorities are the correct ones to meet expectations in the mid 21st century? In other words, will potential product liability issues stop the bandwagon?

Any thoughts on this, ladies and gentlemen?

DaveReidUK 26th Oct 2015 16:38

If you were to rephrase the question as "Can automated systems deal with unforeseen events?" then the answer would be obvious.

So a useful approach might be to consider what events, if any, are unique but not unforeseen and vice versa.

Piltdown Man 26th Oct 2015 16:54

A brilliant starting point for a discussion. My opinion is that the thing that makes humans good operators is that they are capable of fact finding, learning and self-programming. This is not a feature of a lump of traditional software. For example, software won't suggest that, as the aircraft will only turn left, they line up following a series of left turns. It won't think about re-seating passengers to fix CofG problems, etc.

But I must disagree with the following:


...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make...
It is the human that fills the functional gap (yawning chasm) between the useless device as delivered by the manufacturer and the all-singing, all-dancing, highly functional device that we see in service. That device may be a ship, a railway locomotive, an aircraft or a power station. We exist only because we can't be replaced. It is what the human does right millions of times every day that makes flying safe. It's not the few times we foul up that makes it dangerous.

Put the programmer in the plane to make it safer. Errr... Isn't that a pilot though?

PM

darkroomsource 26th Oct 2015 16:54

In theory, computer systems (not just automated systems) could be developed, which are able to take into account every failure or mistake to have ever happened in the history of transportation, and to evaluate all the probabilities of success and failure for every possible action and outcome.

In theory. In practice, we're still a ways off from doing that, although systems like IBM's Jeopardy contestant are headed in that direction.

But more to the point: when it turns out that there are still accidents, which are now "blamed" on the computer systems, will the developers then be blamed? And will we then want to automate the software developers? And when those systems are blamed, do we then develop automated systems for developing automated systems?

The fact is that in the future, there will be systems which are more capable of evaluating all the risks and outcomes from all the possible actions, faster and more effectively than the human mind.

The question then will be, would you rather trust a piece of equipment or a human being who actually comprehends the concept of failure due to mistakes?

172driver 26th Oct 2015 17:49

It's a good question and a fascinating subject.

One big problem in discussing it (and in arriving at any conclusion) is that the information we have WRT the actions of aircrew is heavily slanted towards the negative. Why? For the simple reason that we hear about accidents and incidents which were induced by pilot action, but we almost never hear about mishaps that were prevented by pilot action, unless they were dramatic enough to make the news.

There is an interesting analogy in the development of self-driving cars. Google are finding, in the course of their tests in California, that their cars of course conform 100% to the highway code. This has obviously been programmed into them. However, the real world doesn't always conform. The big challenge here is to install a sort of fuzzy logic that allows the car to 'think', which in extreme cases also involves ethical dilemmas. I suggest you read this excellent article on the subject.

Personally, I'd much rather live with the errors my fellow human beings (and I!) make than hand over my life to some algorithm.

Herod 26th Oct 2015 18:03


which are able to take into account every failure or mistake to have ever happened in the history of transportation
This is fine, but there will always be "black swan" events, and that is where it will not be possible, at least in the foreseeable future, to automate the human out of the equation.

Willie Everlearn 26th Oct 2015 18:04

"priority now should be the elimination of human pilots from the system via automation"

For the salaries on offer these days, great idea. Can't happen soon enough.

On a more serious note, is the artificial intelligence refined enough to accommodate that level of automation and how soon could it be incorporated into today's technology?
As an aside, I don't think I'd be that comfortable getting onto anything, especially something leaving the ground, that doesn't have a human behind the wheel (other than the train at Disney World). The thought is still unnerving to me, and I can't imagine the average afraid-to-fly-in-the-first-place passenger would feel any different.

Willie

bullfox 26th Oct 2015 18:45

The legal system is not ready for driverless cars or pilotless aircraft. Regardless of the cause of any accident, there will always be the need for convenient blame.

fc101 26th Oct 2015 19:15

Interesting question but maybe too simply put.

Firstly, if an event is "unique" then by definition it becomes a binary question whether that event can be foreseen or not. Your question is then "Can automated systems deal with all foreseeable unique events?" The discussion then moves to what counts as foreseeable, and which of those events are worth guarding against.

In most cases automated systems are constructed around generalisations of specific cases, e.g. avoiding crashing into Everest becomes GPWS. Similarly, preventing a pilot from exceeding the load limits of an aircraft becomes the flight control laws on an Airbus, etc.

As another poster has pointed out, "...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make..." is false from many perspectives. While it is "technically" correct that the pilot probably did crash the plane, they were probably trying to figure out how to get out of a situation where circumstances had eventually conspired against them. Only a few accidents are attributable solely to the pilots (AF, Germanwings), and even then the chain of events and context is extremely complex - hence the need for accident investigation.

It might help to start with looking at the "Swiss Cheese Model" ( https://en.wikipedia.org/wiki/Swiss_cheese_model ) and read up on the work by James Reason on the whole concept of safety.

If you want a particularly readable book, have a look at Atul Gawande's Checklist Manifesto which'll give you an insight into how aviation's checklists are used in a completely different environment - one that has a very different idea of what automation is.

A huge area to discuss, and lots of research - but take a look at Reason's books and papers.

fc101

4Greens 26th Oct 2015 19:57

Think cyber attacks. Bye-bye, "no pilots on the flight deck".

There should be a guarded yellow switch on every flight deck. When required this can be switched on and it turns it back into an aeroplane.

slast 26th Oct 2015 20:30

responses to a few points
 
Good to get some serious answers so fast...!

DaveReidUK, I pondered long and hard over whether to make it "unforeseen" or "unique" (or both). Can you continue that thought with examples of what events, if any, are unique but not unforeseen, and vice versa?

Piltdown Man (and several others!) - just to be clear, I DON'T consider that pilots ARE the predominant cause - that's the pro-automation lobby viewpoint. But it gets support from graphs like this one from an MIT study on "Safety Management Challenges for Aviation Cyber Physical Systems", picked at random from many similar: http://picma.org.uk/sites/default/fi...al%20stats.jpg


Darkroom, your "in theory"... para. The failures I see as problematic to deal with are not ones that HAVE ever happened, but ones that have not YET happened and almost certainly never will. These are for practical purposes infinite in number - certainly many order of magnitude greater than the possible moves in a game of chess (10^120?)

A human brain within a human body can be pretty good at chess, but is now relatively easily beaten by specialist programmes. However, the same brain/body combination can also deal with umpteen other issues (e.g. raise children, create music) at which the same programme and hardware has zero capability. To what extent would a system "trained" to handle the QF32 scenario and every historic event be able to deal with a second QF32 in which one hot chunk went in a 1-degree different direction, with significantly different consequential failures? A human, however, would ATTEMPT to cope just the same.

172driver, I agree entirely with your comment about the information bias. I have devoted a couple of pages specifically to this on my own website, not least to make the point that where pilots ARE responsible, they need collectively to do something about it. See these diagrams I made to illustrate that public perception http://picma.org.uk/sites/default/fi...%20iceberg.png

does not align with the underlying reality:
http://picma.org.uk/sites/default/fi...%20iceberg.png

Thanks for the link, very useful. The self-driving car issue is of course the canary in the coal mine: if they can't resolve the liability issue for cars, then the whole idea goes away for aircraft. See Volvo's recent statement: Volvo will accept liability for self-driving car crashes


FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.

http://picma.org.uk/sites/default/fi...wisscheese.png

Thanks for the input, keep it coming...

barit1 26th Oct 2015 21:23

I have to ask about the inverse of pilot error, exemplified by Sully's decision to put his bird in the drink after a low-altitude loss of thrust. :uhoh:

ManUtd1999 26th Oct 2015 21:41

The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary. This would remove the "can it deal with any hypothetical situation?" problem. All the automation would need to do is flag that "something" is wrong, and the ground could take over. Similarly, flight attendants could communicate medical emergencies etc.

The problem this creates is the loss of data-link scenario. Programming the software to land at the nearest airport in this situation could be done, but this would require every diversion airport to be ILS and auto-land equipped. The costs would soon mount up.....
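As a sketch of what that lost-link logic might look like (Python; the airport data and the autoland filter are invented for illustration - a real system would work from a certified navigation database):

```python
import math

# Hypothetical lost-link contingency: pick the nearest diversion field
# that is autoland-capable. Airport data below is invented for the sketch.
AIRPORTS = [
    # (name, lat, lon, has_cat3_autoland)
    ("ALPHA", 51.47, -0.46, True),
    ("BRAVO", 51.15, -0.19, False),
    ("CHARLIE", 52.45, -1.75, True),
]

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (spherical earth)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 3440.1 * 2 * math.asin(math.sqrt(a))  # earth radius ~3440 NM

def lost_link_divert(lat, lon):
    """Nearest autoland-capable field -- the constraint that drives cost."""
    candidates = [(distance_nm(lat, lon, a[1], a[2]), a[0])
                  for a in AIRPORTS if a[3]]
    return min(candidates) if candidates else None

print(lost_link_divert(51.8, -0.8))  # e.g. (~23 NM, 'ALPHA')
```

The filter on autoland capability is the expensive line: the code is trivial, equipping every candidate diversion field is not.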

Finally, hardware/software is expensive. The level of redundancy and functionality required to remove pilots would take years and many millions/billions of dollars - money which the manufacturers would have to gamble on there being enough people willing to fly on these new drones. Only a fool would say fully automated planes will never happen (think of the president of IBM in the 1940s - "there will only be a market for maybe 5 computers") but it's a long way off.

peekay4 26th Oct 2015 22:08


The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary.
The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane -- not even via autopilot.

The pilot's job will transition to a pure supervisory & flight management role.

The entire flight will be completely automated from gate to gate. Compliance with ATC instructions and traffic / flow management will also be completely automated.

The "cockpit" will be redesigned -- supervisory controls will replace flight controls. Here I'm envisioning supervisory controls as higher-order controls for more suited for decision making rather than for piloting. More touchscreens, less joysticks.

In an in-flight emergency, the pilot will be entrusted to make safety-of-flight decisions, via the supervisory controls.

jack11111 26th Oct 2015 22:13

Data link loss.
 
Quote: "The problem this creates is the loss of data-link scenario. Programming the software to land at the nearest airport in this situation could be done, but this would require every diversion airport to be ILS and auto-land equipped. The costs would soon mount up....."


So a data link loss becomes a 'clear the decks' priority landing?

ManUtd1999 26th Oct 2015 22:14


The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane.
That would be a potentially better option.

parabellum 26th Oct 2015 22:57


Only a fool would say fully automated planes will never happen
Maybe I'm that fool then, if you mean a fully automated, sans pilot, commercial passenger aircraft.

Insurance premiums will go through the roof and the product liability cover required will outstrip fuel costs. I've mentioned it before, on other threads: security and suicidal maniacs. But this thread isn't about that, so I'll stop here.

(The R&D costs to get a pilotless pax aircraft that satisfies the regulators and is considered an insurable risk by underwriters will run to billions, assuming anyone can be found to stump up these funds, pilots are cheaper!).

ShotOne 26th Oct 2015 23:06

Aside from the difficulty of providing a (hopefully) totally reliable and hack-proof worldwide datalink, what would be the point of that? Surely if you're going to pay someone to sit on board anyway, why not give them a stick and some buttons to press, to keep them alert for the times when, inevitably, something goes wrong. And since you've done that, why not give them a uniform with some stripes on.

peekay4 26th Oct 2015 23:13


>> The pilot's job will transition to a pure supervisory & flight management role.

Isn't that what the 'Captain' already does?
Not purely, no -- and certainly not when the Captain is PF. (And even as PM, a Captain today is still concerned with all aspects of flying the plane.)

Imagine instead: a Flight Commander who's not at the controls, working with two pilots at the controls. This Commander is in a pure management / supervisory role, completely "hands off" from the actual mechanics of flight. Then, automate both pilot positions.

Today we have the Pilot in Command (PIC) -- which is two roles in one: a pilot and a commander. So as the first step, I think it will be the pilot role which will be fully automated, leaving a Commander on board.

Mesoman 27th Oct 2015 02:22

Yes, automation can handle unforeseen events
 
Automation of very complex processes, such as autonomous cars (or aircraft) is not just a practice of thinking up every scenario and programming the autopilot to handle it.

Modern software is absorbing more AI of the sort that learns. Although at its heart it may be a computer program, it is very different from just programming. And sometimes it is not a computer program at all - it may be a collection of electrically simulated neurons. In fact, what it has learned may not even be accessible or understandable to humans. Furthermore, such software still has access to high-speed, accurate models of physics, and to many sensors, so it can tie the woo-woo deep AI to strong modeling and control.
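A toy illustration of the "learned, not programmed" point - a tiny neural net (numpy) that learns XOR from examples. The resulting "knowledge" is just weight matrices, with no human-readable rule anywhere in them:

```python
import numpy as np

# Toy "learned, not programmed" demo: a tiny neural net learns XOR from
# examples. Nobody writes the XOR rule down; it ends up smeared across
# the weight matrices below.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):                    # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backpropagation
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(2).ravel())  # should be ~[0, 1, 1, 0]: it has "learned" XOR
print(W1.round(2))           # ...but the "knowledge" is just these numbers
```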

This kind of technology has been in our lives for some time. Credit card companies have been using neural nets for a long time to evaluate credit risk. Google search results come partly from self-learning AI.

That said, it is not clear when this will be appropriate as a replacement for a pilot in commercial aviation. This sort of AI is rapidly advancing, but is still pretty weak. Furthermore, safety qualifying something that is not well understood is obviously a serious challenge. We try to qualify pilots, since we know a lot about human beings, but sometimes we goof. How do we qualify a big, complex artificial neural net?

G0ULI 27th Oct 2015 03:43

I fail to see how completely automating an aircraft would make flight inherently safer. A computer is capable of conducting a flight from A to B with extreme accuracy and can be programmed to avoid typical hazards en route. Big problems occur if the computer loses the ability to extract meaningful data from remote sensors, or data lines to control mechanisms are severed. As a last resort, a human being can look out the window and potentially still navigate and land safely.

What might work would be a ground based system programmed with every known accident and failure mechanism to date in all known aircraft. If such a system could be programmed and built, it might identify common design failures and potential faults that have not yet occurred and identify potential defects in aircraft yet to be built.

If a fault or series of conditions are identified that no human pilot could deal with but a computer could, then it would be easiest to set up an on board computer to assist the pilot when the situation required. Something Airbus seems to have been struggling to get right for some years.

HPSOV L 27th Oct 2015 04:34

It is not just the aircraft fleets that would need highly developed AI. ATC would as well, with backups for backups to achieve adequate margins. Perhaps this would involve internet-like layers of redundant autonomous capability between individual aircraft in case of datalink loss. Until ATC becomes automated and digitised, the current analogue method of R/T between humans is the only way to manage traffic in high-intensity traffic situations.
As far as product liability being an obstacle: why would an AI component be different to any other component? None are perfect yet we still have airliners.

peekay4 27th Oct 2015 04:43


I fail to see how completely automating an aircraft would make flight inherently safer.
In a nutshell: it is to recognize that humans are better at some tasks, while computers are better at other tasks.

(And also conversely, that humans are bad at some tasks, while computers are bad at other tasks.)

Today there is no such recognition or separation of tasks. The (sophisticated) automation we have in the cockpit is designed to augment a human pilot. But the pilot is still expected to execute all the tasks, albeit with help -- or let's say "protection" -- from the computers.

But the fact that "protections" are needed points to a sub-optimal division of tasks.

Example: with the current model, since pilots are responsible for flying tasks (but are not very reliable), pilots are expected to actively "monitor" each other and the automation. Yet humans are really bad at monitoring. We get bored and lose concentration. Play games on our iPhones. Get distracted by conversation. Fall asleep. Think sexy thoughts. Become incapacitated.

An independent, specialized computer can do a much better job at monitoring.
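As a sketch of the sort of specialised monitor I mean (Python, with thresholds invented purely for illustration) - a stabilised-approach watchdog that checks every cycle and never gets bored:

```python
# Sketch of an independent approach monitor. Thresholds are invented
# for illustration; the point is that it checks every cycle, forever,
# without boredom, distraction, or sleep.
LIMITS = {
    "sink_rate_fpm": 1000,      # max descent rate below 1000 ft AGL
    "speed_dev_kt": 10,         # max deviation from Vapp
    "loc_dev_dots": 1.0,        # max localiser deviation
}

def monitor(sample: dict) -> list[str]:
    """Return a list of alerts for one sensor sample (run at 10-20 Hz)."""
    alerts = []
    if sample["agl_ft"] < 1000:
        if sample["sink_rate_fpm"] > LIMITS["sink_rate_fpm"]:
            alerts.append("SINK RATE")
        if abs(sample["ias_kt"] - sample["vapp_kt"]) > LIMITS["speed_dev_kt"]:
            alerts.append("SPEED")
        if abs(sample["loc_dots"]) > LIMITS["loc_dev_dots"]:
            alerts.append("LOCALISER")
        if not sample["gear_down"]:
            alerts.append("GEAR")
    return alerts

print(monitor({"agl_ft": 800, "sink_rate_fpm": 1300, "ias_kt": 152,
               "vapp_kt": 138, "loc_dots": 0.3, "gear_down": True}))
# ['SINK RATE', 'SPEED']
```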

So perhaps, we should let computers fully do what they're good at: e.g., fly from point A to B, completely automated. Take the human factor out. Computers are not tempted to bypass checklists, bust minimums, or take unauthorized shortcuts. They also don't consume alcohol or drugs, fly fatigued, or develop suicidal or homicidal tendencies. (HAL excepted).

Human pilots can then concentrate on decision making, supervising (not monitoring!), and handling emergency or other non-routine situations. Not by flying, but by commanding.

wanabee777 27th Oct 2015 05:59

I can remember when automatic elevators (lifts) came on the scene: many folks would use the stairs instead. :\

HamishMcBush 27th Oct 2015 08:26

How will a computer be able to be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it will take a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.

... or recognising a drone about to cause conflict, or a sudden volcanic eruption in the immediate flight path (how could that be detected and instantly recognised as such by a camera?)

Volume 27th Oct 2015 08:36

If we look at it from a purely technical standpoint, every event (no matter how unique) has to follow the laws of physics (or the laws of nature, but for aviation it is probably 99% physics). All of them can be translated into formulae and constants. For everything that can happen, there is some sort of sensor available which would detect it. So if you have a computer that can do any calculation of any natural law (i.e. is unbelievably powerful and quick), and has wired to it every sensor that could detect any parameter in existence, then yes, a computer can deal with any event.
but...
We all know that the more complex a system gets and the more sensors you have, the more bugs are in it, the more failures happen, and the more false information is gathered, which then also results in wrong decisions being taken.
So with every additional aspect we would consider in the system (to make sure we are able to deal with whatever is physically possible) we would increase the number of bugs and sources for failure. At a certain level, the event we want to prevent is less probable than a malfunction of the system we install to deal with it.
Thinking of real MTBF figures for existing components, which sometimes (for whatever reason, most probably cost...) are as low as 1000 FH, such a system would not make the aircraft safer for any event that is unique enough to happen only once every 1000 FH. Of course we can significantly increase reliability figures by installing redundancy, but we will always have a finite reliability for the system, hence there is a certain class of events for which the risk of the event happening is lower than the risk that the system installed to deal with it fails.
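To put rough numbers on that trade-off (a sketch assuming fully independent channels, which common-cause failures - bugs, power, sensors - always undermine in practice):

```python
# Sketch: failure probability of N redundant channels vs. the rarity of
# the event they are meant to handle. Assumes independent failures,
# which common-cause faults (bugs, power, sensors) always undermine.
mtbf_fh = 1000.0                    # per-channel MTBF, flight hours
p_channel = 1.0 / mtbf_fh           # ~failure probability per FH

for n in range(1, 5):
    p_system = p_channel ** n       # all N independent channels fail
    print(f"{n} channel(s): system failure ~{p_system:.0e} per FH")

# 1 channel:  1e-03 per FH
# 2 channels: 1e-06 per FH
# 3 channels: 1e-09 per FH  <- only here does the system beat a
#                              once-per-1e9-FH "unique" event, and only
#                              if the independence assumption holds.
```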

therefore:
Practically, we will never be able to create systems which can deal with every unique event without those systems' own failures causing accidents at a rate of the same order of magnitude as the unique events we wanted to deal with.
This is also true for pilots: we can never train them well enough for every unique event that may happen. They will always make mistakes and errors (which would be another interesting philosophical discussion: whether pilot errors or pilot mistakes are the real issue...) at a rate orders of magnitude higher than that of very unique events.

But slast is absolutely correct: instead of looking at statistics of very, very rare events, we should look at statistics of the billions of correct decisions taken, correct actions performed, and flight hours of systems doing exactly what they should. We should not try to avoid something very rare by taking responsibility away from those who do something right 99.99999% of the time. We should try to support those doing a good job so they can do a perfect job (humans and systems). We should check who does what best, and then support him / improve it.

FullWings 27th Oct 2015 10:20

I have always thought that I am there on the aeroplane to deal with the rare, the unexpected, the edge cases and anything that requires “thinking outside the box”. Most of the regular day-to-day stuff can be automated (I include humans following SOPs in that). My personal opinion is that many problems have their roots not in pilots or aircraft systems but in the interface between the two: we are still in the relative dark ages here using methods that were old half a century ago.

It’s normal to consider risk in terms of severity factored by likelihood of occurrence: you can accept something with pretty bad potential consequences as long as it sits way out at one end of the probability curve. If you could define an area where the results of automation were uncertain or even catastrophic, as long as that area was tiny compared with the area of competence, that would be acceptable.
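In code, that acceptance rule is almost embarrassingly simple (a sketch; the budget and severity scale are purely illustrative):

```python
# Severity x likelihood, as in a standard risk matrix. Numbers invented:
# severity on a 0-1 scale, budget as tolerable expected harm per FH.
def acceptable(severity: float, prob_per_fh: float,
               budget: float = 1e-7) -> bool:
    """Accept if expected harm per flight hour is inside the budget."""
    return severity * prob_per_fh <= budget

print(acceptable(severity=1.0, prob_per_fh=1e-9))  # catastrophic, rare: True
print(acceptable(severity=1.0, prob_per_fh=1e-5))  # catastrophic, common: False
```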

As far as removing pilots completely from the equation, even now there are large autonomous drones flying missions around the globe but I don’t think they have quite the safety record that would entice many to sit on board as a paying customer.

The day will come when we have AI strong enough to give a comparable performance to a trained and experienced human in terms of air transport operations. At the moment, it would appear to be cheaper (and possibly safer) to retain the status quo. Also, by the time autonomous airliners become a reality, every other mode of transport will be similar and pretty much every job a human could do could be done as well or better by AI...

pax britanica 27th Oct 2015 12:06

I think the real risk to pilot jobs from automation and AI is the point made above: that most other jobs will be automated before pilots.
So who will pilots have left to fly, when no one has a job and can't afford to travel, and when so many business functions are automated that there are no bums on seats in business class?

The computers themselves can telecommute, so there is no need to fly the hardware around either, and so there is not much call for airlines overall. In fact, what is there much demand for? We can automate banking and finance functions so they are run by honest, incorruptible machines. Same with the stock exchanges, where machines will do the job better than their coke-fuelled, risk-taking, reckless human counterparts.

So yes, automation is a threat to cockpit jobs because it is a threat to all jobs in the long run. And as the saying goes, in the long run we are all dead - though perhaps that was not meant in the sense of asking what the AI-run machines will do when they realise they no longer need us!

felixthecat 27th Oct 2015 12:36

I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.

The chances of an accident are incredibly small. The statistics are something like 1 incident per 2,000,000 flights. Well, to my mind that's 1,999,999 flights where the pilots have done a bloody good job......

Where do we hear about that? 80% is a huge percentage and scares the pants off the general public. But what about 0.00005% (1 in 2,000,000)? And that's assuming every one of those 1-in-2,000,000 incidents is the pilots' fault, which it isn't even by the 80% statement.

Come on - the incident rate can NEVER be zero and can always be improved, but 0.00005% means that, all in all, pilots do a great job!

lomapaseo 27th Oct 2015 13:40


I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.
fair point :ok:

replace the words "blame and Due to" and the pilot becomes only one of several causal factors. In the long run as many have pointed out that we can't seem to achieve zero mechanical or system screw ups so we depend on the pilot to mitigate a large percentage.

He just becomes one of the areas covered in the recommended actions, in spite of what the newspapers say, or a discussion board like this :)

fc101 27th Oct 2015 15:42


FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.
I think you misread....I don't agree that pilots are the major problem either.

fc101

gearupmaxpower 27th Oct 2015 16:23

I like the Air Safety Iceberg Reality model. Certainly a more balanced way of representing the reality of the situation.

I know there are many others, but I would like to add one other accident to the 'spectacular saves' list.

Cathay flight CX780 a few years ago. Accident report here:

http://www.cad.gov.hk/reports/2%20Fi...0compliant.pdf

Another 300+ lives saved by, in my opinion, the pilots. I wouldn't have confidence in computers bringing this situation to a successful ending. It really was one out of left field.

They, like Sully, received various awards for their actions.

And I very much like this thread with sound, reasoned discussion and not the usual trolls spoiling it all. May it continue. :)

cwatters 27th Oct 2015 16:24


How will a computer be able to be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it will take a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.
Driverless cars will probably have even less time to react.

slast 27th Oct 2015 16:49

Cathay event
 
Thanks GUMP, I had been intending to use that event for just that reason, but couldn't lay hands on the report at the time I did the diagram. It fits the pattern totally, especially the phrase in the report summary about "a SERIES (my emphasis) of causal factors" leading to loss of control of both engines.

Many other qualifying events are also in the IFALPA Polaris awardees list https://en.wikipedia.org/wiki/Polaris_Award
although for the purpose of this discussion the hijacking events probably have to be discounted. Even so, we have seven in the last decade.
Steve

Tourist 27th Oct 2015 17:13

There is a lot of misunderstanding of the challenges here.


The BA 777 running out of engines is not an instance where the human is better.

That is an example of where a computer would have the advantage.

No human has practiced glide approaches in a 777, whereas programming the physics into a computer is easy.

It can monitor the approach in real time and fly the perfect AoA without difficulty.
It knows the effect a flap raise will have.

Just like you can't match an autopilot in normal flight, you can't match it in abnormal flight.

The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
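The sum the computer would be doing is indeed short. A sketch in still air with a fixed L/D (a real system would model wind, configuration drag and the energy cost of manoeuvring; the US1549-ish numbers are illustrative only):

```python
# Sketch: is a runway reachable in a glide? Still air, fixed L/D,
# no allowance for turns or configuration changes -- a real system
# would model wind, flap/gear drag and manoeuvring losses.
def glide_range_nm(altitude_ft: float, lift_to_drag: float) -> float:
    """Still-air glide range: height times L/D, converted ft -> NM."""
    return altitude_ft * lift_to_drag / 6076.0

def reachable(altitude_ft, lift_to_drag, distance_nm, margin=0.8):
    """Apply a margin so 'just barely' counts as unreachable."""
    return glide_range_nm(altitude_ft, lift_to_drag) * margin >= distance_nm

# Roughly US1549-like numbers (illustrative): ~3000 ft, L/D ~17 clean
print(glide_range_nm(3000, 17.0))   # ~8.4 NM in still air
print(reachable(3000, 17.0, 8.0))   # marginal -> False once margin applied
```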



People are trying to make out that the computer has to be perfect.
It doesn't.

It merely has to be better than we are at present.

There will be black swan events, and that is where a good crew might be better, but the vast majority of events are endless repeats of accidents ad infinitum.


We are currently in a bad transitional phase.

An Example.

TCAS

The aircraft tells us what to do.
We are not supposed to second guess it, just do what it says.
We then try to do what it says but usually get it wrong. The aircraft itself would never ever get the response wrong.

EGPWS

Again, simply connecting these systems to the autopilot would make them a lot safer.

Those who say that the sensors will never match the human eye are talking utter rubbish.

1. The pilots barely look out the window.
2. We fly blind in IMC all the time.
3. If anyone actually thought that lookout was important, we would have cockpits like fighters that you could actually see out of.
4. Modern fighters already have integrated systems far better at spotting other aircraft.

CONSO 27th Oct 2015 17:46


The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
And would it pick the congested Hudson River (not a designated airfield) to land on, with the airfields out of reach? :ugh:

Or would it simply bring up "does not compute" or "sorry Sully, I cannot allow you to do that"?

Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?
:sad:

Herod 27th Oct 2015 17:52

There has been talk that Sully could have made Teterboro, and that a computer would have done so. That is all very well, but the computer would have to be programmed with ALL building work on EVERY approach to EVERY airfield in the world, and in many cases ANY large ships etc. transiting the area. Yes, it could be done, but the constant updating (assuming EVERY building contractor informed the relevant authorities) would be an enormous task.

barit1 27th Oct 2015 17:52

Tourist:

The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
And the computer will respond with a ONE or a ZERO.

ONE, No problem. It steers for TEB with optimum configuration - Gear up, if necessary. :D

ZERO - Now what? Would it steer for the Hudson, and avoid the watercraft? What database would apply? :rolleyes:

Seems to me the Cat3a-configured, pilot-optional airplane will not be capable of flying safely in a strictly VFR environment.

MG23 27th Oct 2015 17:59


Originally Posted by CONSO (Post 9160122)
Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?

The Apollo Guidance Computer is actually a good example of coding to handle unexpected events. It had CPU power to spare in normal operation, but was programmed to keep flying and drop less essential tasks, like displaying data to the crew, if it was overloaded. Had the programmers not built that capability into the software, Neil Armstrong wouldn't have been the first man on the Moon.
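A loose sketch in the spirit of the AGC's Executive (not the actual AGC design; budgets and job costs are invented): run jobs in priority order and shed the low-priority ones when the cycle budget runs out:

```python
# Loose sketch in the spirit of the AGC Executive: when a compute cycle
# is overloaded, shed low-priority jobs (displays) and keep flying.
# Budgets and job costs are invented for illustration.
JOBS = [
    # (priority, name, cost_ms) -- lower number = more important
    (0, "guidance_and_control", 60),
    (1, "navigation_update", 25),
    (2, "crew_display_refresh", 20),
    (3, "telemetry_downlink", 15),
]

def run_cycle(budget_ms: float):
    """Run jobs in priority order; shed whatever no longer fits."""
    spent, ran, shed = 0.0, [], []
    for prio, name, cost in sorted(JOBS):
        if spent + cost <= budget_ms:
            spent += cost
            ran.append(name)
        else:
            shed.append(name)   # drop it this cycle; retry next cycle
    return ran, shed

print(run_cycle(120))  # all four jobs fit
print(run_cycle(90))   # overload: displays and telemetry are shed
```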

Of course, it had no idea that it was aiming to land them on top of a big rock (or was it the edge of a crater? I forget). So, without Armstrong, it would probably just be a pile of debris on the Moon.

Tourist 27th Oct 2015 18:06

Why would it have to crash?

Why not have it scan the ground in front and around to the sides in search of the most suitable landing area.

This technology is already in existence and operating. This is not pie in the sky.

https://www.youtube.com/watch?v=GoCFE8xVhKA

Incidentally, Sully was an exceptional pilot. Just because one human did well does not mean that pilots don't usually crash in these scenarios.

The reason we know his name is because he is the very rare human who got this right, not the norm.

