Can automated systems deal with unique events?


slast
26th Oct 2015, 16:21
There has always been interesting comment on Prune about software reliability, bugs, design requirements, testing, etc., most recently under the topic of a B787 Dreamliner engine issue. There appear to be a significant number of Ppruners who are serious and knowledgeable on the subject.

I would like to ask those members a philosophical question. This has an impact on the argument that a safety priority now should be the elimination of human pilots from the system via automation.

The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.

The pro-automation lobby (see for example the thread I started in March, "'Pilotless airliners safer' - London Times article") starts from the view that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make and the consequent accidents.

This first started being discussed seriously in the late 1980s, when the Flight Engineer function was automated out of the B747 to create the -400, and out of the DC10 to create the MD11, etc. (Note - this was not the same as the 3-person vs. 2-person crew controversy, so please don't mix that into it!)

There has been a multiple-order-of-magnitude increase in computing capability since then, but my feeling is still the same. Human pilots on board will always be able to make SOME attempt to deal with a completely unforeseen and unique event that arises from a coincidence of imperfections in the total aviation system (vehicle, environment, and people) - even if unable to do so 100% successfully.

So: is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?

The entire industry works on a concept of "acceptable" and "target" levels of safety, involving the probability of occurrence and the consequences of events that society is willing to accept. The regulatory authorities lay down numbers for both of those elements at various levels.

It seems to me that it would not be possible to design any automated system to control physical equipment like an aircraft without making assumptions about that aircraft and its components, one of which must be that component failures ALWAYS occur no more often than the required probability.

In reality, human errors occur in all stages of the process of getting a paying customer to their destination. In the vast majority of cases these errors are caught by the myriad checks in the system, but some are not. When two or more such trapping failures coincide, they may end up as a problem that until now has required the pilot(s) to act creatively, because the situation has never been considered as a possibility. That lack of foresight in itself might even be classed as a human error in the specification and implementation of the checking process.

To a human designing an overall automated control system, either an event is possible and can occur no more often than the required frequency, or it is impossible and need not be considered. There isn't a halfway house where the design engineer can say "this isn't supposed to happen but I think it might, so I'll cater for it." Apart from anything else, what steps can he take to cater for it when there is no means of knowing what the other circumstances are?

Take an uncontained engine failure, which is supposed to be a very improbable event. To quote a Skybrary summary: "Each uncontained failure will result in a "unique" combination of collateral damage ... [which] carries the greater potential risk and that will require creative pilot assessment to ensure a positive outcome is achieved." That was amply demonstrated on QF32, where the problem originated in human errors in manufacturing and was prevented from becoming a catastrophe by the pilots.

Other "unique" event examples which show that they are not so rare as to be negligible might include 2 within a few years in one airline alone - the BA B777 dual engine flameout on short final LHR and B744 leading edge flap retraction on takeoff at JNB. Both were survived largely due to instantaneous on-the-spot human "creativity" in recognising that the situation did not conform to any known precedent.

Issues of bugs, validation, verification, system analysis etc. appear to me to be essentially about meeting probability requirements for "known" possibilities. Is there an additional requirement that will have to be met for "creativity" in such a system before a pilotless system can even start to be considered?

Unless such a creative artificial intelligence system is included, is the concept of automating the pilot out of the commercial aircraft cockpit doomed to fail, because ALL human error, and with it 100% of the liability for all consequences of any unique event, will clearly be transferred to the manufacturer and/or other suppliers?

Finally, if such an event occurs, will "society", in the form of the legal processes which will inevitably follow, agree that the numbers used since the 1950s to define an acceptable level of safety to the authorities are the correct ones to meet expectations in the mid-21st century? In other words, will potential product liability issues stop the bandwagon?

Any thoughts on this, ladies and gentlemen?

DaveReidUK
26th Oct 2015, 16:38
If you were to rephrase the question as "Can automated systems deal with unforeseen events?" then the answer would be obvious.

So a useful approach might be to consider what events, if any, are unique but not unforeseen and vice versa.

Piltdown Man
26th Oct 2015, 16:54
A brilliant starting point for a discussion. My opinion is that the thing that makes humans good operators is that they are capable of fact finding, learning and self-programming. This is not a feature of a lump of traditional software. For example, software won't suggest that since the aircraft will only turn left, they should line up via a series of left turns. It won't think about re-seating passengers to fix CofG problems, etc.

But I must disagree with the following:

...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make...

It is the human that fills the functional gap (yawning chasm) between the useless device as delivered by the manufacturer and the all-singing, all-dancing, highly functional device that we see in service. That device may be a ship, a railway locomotive, an aircraft or a power station. We exist only because we can't be replaced. It is what the human does right millions of times every day that makes flying safe. It's not the few times we foul up that makes it dangerous.

Put the programmer in the plane to make it safer. Errr... Isn't that a pilot though?

PM

darkroomsource
26th Oct 2015, 16:54
In theory, computer systems (not just automated systems) could be developed which are able to take into account every failure or mistake that has ever happened in the history of transportation, and to evaluate the probabilities of success and failure for every possible action and outcome.

In theory. In practice, we're still a ways off from doing that, although systems like IBM's Jeopardy contestant are headed in that direction.
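To make "evaluate all the probabilities" concrete, here's a toy sketch in Python - every action, probability and cost is invented for illustration, but it's the shape of the calculation:

    # Toy expected-utility action selection; all numbers are hypothetical.
    actions = {
        "continue_approach": [(0.95, 0.0), (0.05, -1000.0)],   # (probability, utility)
        "go_around":         [(0.999, -1.0), (0.001, -1000.0)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # "go_around": a small certain cost beats a rare catastrophe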

But more to the point: when it turns out that there are still accidents, which are now "blamed" on the computer systems, will the developers be blamed? And will we then want to automate the software developers? And when those systems are blamed, do we then develop automated systems for developing automated systems?

The fact is that in the future, there will be systems which are more capable of evaluating all the risks and outcomes from all the possible actions, faster and more effectively than the human mind.

The question then will be, would you rather trust a piece of equipment or a human being who actually comprehends the concept of failure due to mistakes?

172driver
26th Oct 2015, 17:49
It's a good question and a fascinating subject.

One big problem in discussing it (and in arriving at any conclusion) is that the information we have WRT the actions of aircrew is heavily slanted towards the negative. Why? For the simple reason that we hear about accidents and incidents which were induced by pilot action, but we almost never hear about mishaps that were prevented by pilot action, unless they were dramatic enough to make the news.

There is an interesting analogy with the development of self-driving cars. Google are finding, in the course of their tests in California, that their cars of course conform 100% to the highway code. This has obviously been programmed into them. However, the real world doesn't always conform. The big challenge here is to install a sort of fuzzy logic that allows the car to 'think', which in extreme cases also involves ethical dilemmas. I suggest you read this excellent article (http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/) on the subject.

Personally, I'd much rather live with the errors my fellow human beings (and I!) make than hand over my life to some algorithm.

Herod
26th Oct 2015, 18:03
which are able to take into account every failure or mistake that has ever happened in the history of transportation

This is fine, but there will always be "black swan" events, and that is where it will not be possible, at least in the foreseeable future, to automate the human out of the equation.

Willie Everlearn
26th Oct 2015, 18:04
"priority now should be the elimination of human pilots from the system via automation"

For the salaries on offer these days, great idea. Can't happen soon enough.

On a more serious note, is the artificial intelligence refined enough to accommodate that level of automation and how soon could it be incorporated into today's technology?
As an aside, I don't think I'd be that comfortable getting onto anything, especially something leaving the ground, that doesn't have a human behind the wheel (other than the train at Disney World). The thought is still unnerving to me, and I can't imagine the average afraid-to-fly-in-the-first-place passenger feels any different.

Willie

bullfox
26th Oct 2015, 18:45
The legal system is not ready for driverless cars or pilotless aircraft. Regardless of the cause of any accident, there will always be the need for convenient blame.

fc101
26th Oct 2015, 19:15
Interesting question but maybe too simply put.

Firstly if an event is "unique" then by definition it becomes a binary thing whether that event can be foreseen or not. Your question is then "Can automated systems deal with all foreseeable unique events?" Then the discussion moves to what counts as foreseeable and of those what is it worth guarding against.

In most cases automated systems are constructed around generalisations of specific cases, e.g. avoiding crashing into Everest becomes GPWS. Similarly, preventing a pilot exceeding the load limits of an aircraft becomes the flight control laws on an Airbus, etc.
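To make the generalisation point concrete, here's a caricature of the GPWS idea in Python - the thresholds are invented and nothing like the real box, but the shape is "specific mountain becomes general rule":

    # "Don't hit Everest" generalised into a terrain-closure rule.
    # Thresholds are purely illustrative.
    def gpws(radio_alt_ft, closure_rate_fpm):
        if radio_alt_ft < 1000 and closure_rate_fpm > 4000:
            return "PULL UP"           # immediate escape manoeuvre required
        if radio_alt_ft < 2500 and closure_rate_fpm > 2000:
            return "TERRAIN, TERRAIN"  # early caution
        return None                    # no alert

    print(gpws(800, 4500))  # -> PULL UP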

As another poster has pointed out: "...that as pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make..." is false from many perspectives. While it is "technically" correct that the pilot probably did crash the plane, they were probably trying to figure out how to get out of that situation when circumstances eventually conspired against them. Only a few accidents are attributable to the pilots alone (AF, Germanwings), but even then the chain of events and context is extremely complex - hence the need for accident investigation.

It might help to start with looking at the "Swiss Cheese Model" ( https://en.wikipedia.org/wiki/Swiss_cheese_model ) and read up on the work by James Reason on the whole concept of safety.

If you want a particularly readable book, have a look at Atul Gawande's Checklist Manifesto which'll give you an insight into how aviation's checklists are used in a completely different environment - one that has a very different idea of what automation is.

Huge area to discuss and lots of research, but take a look at Reason's books and papers,

fc101

4Greens
26th Oct 2015, 19:57
Think cyber attacks. Bye-bye, no-pilots-on-the-flight-deck.

There should be a guarded yellow switch on every flight deck. When required this can be switched on and it turns it back into an aeroplane.

slast
26th Oct 2015, 20:30
Good to get some serious answers so fast...!

DaveReidUK, I pondered long and hard over whether to make it unforeseen or unique (or both). Can you continue that thought with examples of what events, if any, are unique but not unforeseen, and vice versa?

PM (and several others!): just to be clear, I DON'T consider that pilots ARE the predominant cause - that's the pro-automation lobby viewpoint. But it gets support from graphs like this, from an MIT study on "Safety Management Challenges for Aviation Cyber Physical Systems", picked at random from many similar: http://picma.org.uk/sites/default/files/images/rationale/typical%20stats.jpg


Darkroom, re your "in theory..." para: the failures I see as problematic are not ones that HAVE ever happened, but ones that have not YET happened and almost certainly never will. These are for practical purposes infinite in number - certainly many orders of magnitude more than the possible moves in a game of chess (10^120?).

A human brain within a human body can be pretty good at chess but is now relatively easily beaten by specialist programmes. However, the same brain/body combination can also deal with umpteen other issues (e.g. raise children, create music) at which the same programme and hardware has zero capability. To what extent would a system "trained" to handle the QF32 scenario and every historic event be able to deal with a second QF32 in which one hot chunk went in a 1-degree different direction, with significantly different consequential failures? Yet a human would ATTEMPT to cope just the same.

172driver, I agree entirely with your comment about the information bias. I have devoted a couple of pages specifically to this on my own website, partly to make the point that where pilots ARE responsible they collectively need to do something about it. See these diagrams I made to illustrate that public perception http://picma.org.uk/sites/default/files/images/rationale/perceived%20iceberg.png

does not align with the underlying reality:
http://picma.org.uk/sites/default/files/images/rationale/whole%20iceberg.png

Thanks for the link, very useful. The self-drive car issue is of course the canary in the mine: if they can't resolve the liability issue for cars, then it goes away for aircraft. See Volvo's recent statement... Volvo will accept liability for self-driving car crashes (http://www.autoblog.com/2015/10/07/volvo-accept-liability-self-driving-car-crashes/)


FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.

http://picma.org.uk/sites/default/files/images/rationale/swisscheese.png

Thanks for the input, keep it coming...

barit1
26th Oct 2015, 21:23
I have to ask about the inverse of pilot error, exemplified by Sully's decision to put his bird in the drink after a low-altitude loss of thrust. :uhoh:

ManUtd1999
26th Oct 2015, 21:41
The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary. This would remove the "can it deal with any hypothetical situation?" problem. All the automation would need to do is flag "something" as wrong, and the ground could take over. Similarly, flight attendants could communicate medical emergencies etc.

The problem this creates is the loss of data-link scenario. Programming the software to land at the nearest airport in this situation could be done, but this would require every diversion airport to be ILS and auto-land equipped. The costs would soon mount up.....
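For illustration, the lost-link rule itself could be trivial - it's the autoland requirement that bites. A sketch in Python (airport data entirely invented):

    # Lost-link diversion rule: only autoland-equipped fields qualify.
    airports = [
        {"name": "NEAREST",   "distance_nm": 15, "autoland": False},
        {"name": "ALTERNATE", "distance_nm": 60, "autoland": True},
    ]

    def divert_on_lost_link(airports):
        usable = [a for a in airports if a["autoland"]]
        if not usable:
            return None  # no equipped field in range - the expensive gap
        return min(usable, key=lambda a: a["distance_nm"])

    print(divert_on_lost_link(airports)["name"])  # ALTERNATE, four times further away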

Finally, hardware/software is expensive. The level of redundancy and function required to remove pilots would take years and many millions/billions of dollars. Money which the manufacturers have to gamble on there being enough people willing to fly on these new drones. Only a fool would say fully automated planes will never happen (think president of IBM 1940s - "there will only be a market for maybe 5 computers") but it's a long way off.

peekay4
26th Oct 2015, 22:08
The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary.

The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane -- not even via autopilot.

The pilot's job will transition to a pure supervisory & flight management role.

The entire flight will be completely automated from gate to gate. Compliance with ATC instructions and traffic / flow management will also be completely automated.

The "cockpit" will be redesigned -- supervisory controls will replace flight controls. Here I'm envisioning supervisory controls as higher-order controls for more suited for decision making rather than for piloting. More touchscreens, less joysticks.

In an in-flight emergency, the pilot will be entrusted to make safety-of-flight decisions, via the supervisory controls.

jack11111
26th Oct 2015, 22:13
Quote: "The problem this creates is the loss of data-link scenario. Programming the software to land at the nearest airport in this situation could be done, but this would require every diversion airport to be ILS and auto-land equipped. The costs would soon mount up....."


So a data link loss becomes a 'clear the decks' priority landing?

ManUtd1999
26th Oct 2015, 22:14
The way I envision it, the first fully automated aircraft will still have a "pilot" onboard -- but the pilot will no longer fly the airplane.

That would be a potentially better option.

parabellum
26th Oct 2015, 22:57
Only a fool would say fully automated planes will never happen

Maybe I'm that fool then, if you mean a fully automated, sans pilot, commercial passenger aircraft.

Insurance premiums will go through the roof, and the product liability cover required will outstrip fuel costs. I've mentioned security and suicidal maniacs before, on other threads, but this thread isn't about that so I'll stop here.

(The R&D costs to get a pilotless pax aircraft that satisfies the regulators and is considered an insurable risk by underwriters will run to billions, assuming anyone can be found to stump up these funds, pilots are cheaper!).

ShotOne
26th Oct 2015, 23:06
Aside from the difficulty of providing a (hopefully) totally reliable and hack-proof worldwide datalink, what would be the point of that? Surely if you're going to pay someone to sit on board anyway, why not give them a stick and some buttons to press, to keep them alert for the times when, inevitably, something goes wrong. And since you've done that, why not give them a uniform with some stripes on.

peekay4
26th Oct 2015, 23:13
>> The pilot's job will transition to a pure supervisory & flight management role.

Isn't that what the 'Captain' already does?
Not purely, no -- and certainly not when the Captain is PF. (And even as PM, a Captain today is still concerned with all aspects of flying the plane.)

Imagine instead: a Flight Commander who's not at the controls, working with two pilots at the controls. This Commander is in a pure management / supervisory role, completely "hands off" from the actual mechanics of flight. Then, automate both pilot positions.

Today we have the Pilot in Command (PIC) -- which is two roles in one: a pilot and a commander. So as the first step, I think it will be the pilot role which will be fully automated, leaving a Commander on board.

Mesoman
27th Oct 2015, 02:22
Automation of very complex processes, such as autonomous cars (or aircraft), is not just a matter of thinking up every scenario and programming the autopilot to handle it.

Modern software is absorbing more AI of the sort that learns. Although at its heart it may be a computer program, it is very different from just programming. And, sometimes it is not a computer program - it may be a collection of electrically simulated neurons. In fact, what it has learned may not even be accessible or understandable to humans. Furthermore, such software still has access to high speed accurate models of physics, and to many sensors, so it can tie the woo-woo deep AI to strong modeling and control.

This kind of technology has been in our lives for some time. Credit card companies have been using neural nets for a long time to evaluate credit risk. Google search results come partly from self-learning AI.

That said, it is not clear when this will be appropriate as a replacement for a pilot in commercial aviation. This sort of AI is rapidly advancing, but is still pretty weak. Furthermore, safety qualifying something that is not well understood is obviously a serious challenge. We try to qualify pilots, since we know a lot about human beings, but sometimes we goof. How do we qualify a big, complex artificial neural net?

G0ULI
27th Oct 2015, 03:43
I fail to see how completely automating an aircraft would make flight inherently safer. A computer is capable of conducting a flight from A to B with extreme accuracy and can be programmed to avoid typical hazards en route. Big problems occur if the computer loses the ability to extract meaningful data from remote sensors, or data lines to control mechanisms are severed. As a last resort, a human being can look out the window and potentially still navigate and land safely.

What might work would be a ground-based system programmed with every known accident and failure mechanism to date in all known aircraft. If such a system could be built, it might identify common design failures, potential faults that have not yet occurred, and potential defects in aircraft yet to be built.

If a fault or series of conditions are identified that no human pilot could deal with but a computer could, then it would be easiest to set up an on board computer to assist the pilot when the situation required. Something Airbus seems to have been struggling to get right for some years.

HPSOV L
27th Oct 2015, 04:34
It is not just the aircraft fleets that would need highly developed AI. ATC would as well, with backups for backups to achieve adequate margins. Perhaps this would involve internet-like layers of redundant autonomous capability between individual aircraft in case of datalink loss. Until ATC becomes automated and digitised, the current analogue method of R/T between humans is the only way to manage high-intensity traffic situations.
As far as product liability being an obstacle: why would an AI component be different to any other component? None are perfect yet we still have airliners.

peekay4
27th Oct 2015, 04:43
I fail to see how completely automating an aircraft would make flight inherently safer.

In a nutshell: it is to recognize that humans are better at some tasks, while computers are better at other tasks.

(And also conversely, that humans are bad at some tasks, while computers are bad at other tasks.)

Today there is no such recognition or separation of tasks. The (sophisticated) automation we have in the cockpit is designed to augment a human pilot. But the pilot is still expected to execute all the tasks, albeit with help -- or let's say "protection" -- from the computers.

But the fact that "protections" are needed points to a sub-optimal division of tasks.

Example: with the current model, since pilots are responsible for flying tasks (but are not very reliable), pilots are expected to actively "monitor" each other and the automation. Yet humans are really bad at monitoring. We get bored and lose concentration. Play games on our iPhones. Get distracted by conversation. Fall asleep. Think sexy thoughts. Become incapacitated.

An independent, specialized computer can do a much better job at monitoring.

So perhaps, we should let computers fully do what they're good at: e.g., fly from point A to B, completely automated. Take the human factor out. Computers are not tempted to bypass checklists, bust minimums, or take unauthorized shortcuts. They also don't consume alcohol or drugs, fly fatigued, or develop suicidal or homicidal tendencies. (HAL excepted).

Human pilots can then concentrate on decision making, supervising (not monitoring!), and handling emergency or other non-routine situations. Not by flying, but by commanding.

wanabee777
27th Oct 2015, 05:59
I can remember when automatic elevators (lifts) came on the scene, many folks would, instead, use the stairs.:\

HamishMcBush
27th Oct 2015, 08:26
How could a computer be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it takes a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.

... or recognising a drone about to cause conflict, or a sudden volcanic eruption in the immediate flight path (how could that be detected and instantly recognised as such by a camera?)

Volume
27th Oct 2015, 08:36
If we look at it from a purely technical standpoint, every event (no matter how unique) has to follow the laws of physics (or the laws of nature, but for aviation it is probably 99% physics). All of them can be translated into formulae and constants. For everything that can happen, there is some sort of sensor available which would detect it. So if you could have a computer that can do any calculation of any natural law (is unbelievably powerful and quick), and has all the sensors wired to it which could detect any parameter around it, then yes, a computer can deal with any event.
but...
We all know that the more complex a system gets and the more sensors you have, the more bugs there are, the more failures happen, and the more false information is gathered - which then also results in wrong decisions being taken.
So with every additional aspect we consider in the system (to make sure we are able to deal with whatever is physically possible), we increase the number of bugs and sources of failure. At a certain level, the event we want to prevent is less probable than a malfunction of the system we install to deal with it.
Thinking of real MTBF figures of existing components, which sometimes (for whatever reason, most probably cost...) are as low as 1000 FH, such a system would not make the aircraft safer against any event unique enough to happen less often than every 1000 FH. Of course we can significantly increase reliability figures by installing redundancy, but we will always have finite system reliability, hence there is a certain class of events whose risk of happening is lower than the risk that the system installed to deal with them fails.
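To put rough numbers on that (all rates invented for illustration, and independent failures assumed - real common-mode failures make it worse):

    # Back-of-envelope version of the argument above.
    event_rate   = 1e-7        # the "unique" event, per flight hour
    channel_mtbf = 1000.0      # one protective-system channel, FH
    channel_rate = 1.0 / channel_mtbf

    duplex_rate = channel_rate ** 2     # two independent channels: 1e-6/FH
    print(duplex_rate > event_rate)     # True: even duplicated, the protection
                                        # fails ten times as often as the event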

therefore:
practically, we will never be able to create systems which can deal with every unique event without those systems themselves failing and causing accidents at a rate of the same order of magnitude as the unique events we wanted to deal with.
The same is true for pilots: we can never train them well enough for every unique event that may happen. They will always make mistakes and errors (whether pilot errors or pilot mistakes are the real issue would be another interesting philosophical discussion...) at a rate an order of magnitude higher than very unique events.

But slast is absolutely correct: instead of looking at statistics of very, very rare events, we should look at statistics of the billions of correct decisions taken, correct actions performed, and flight hours of systems doing exactly what they should. We should not try to avoid something very rare by taking responsibility from those who do something right 99.99999% of the time. We should try to support those doing a good job in doing a perfect job (humans and systems). We should check who does what best, and then support him / improve it.

FullWings
27th Oct 2015, 10:20
I have always thought that I am there on the aeroplane to deal with the rare, the unexpected, the edge cases and anything that requires “thinking outside the box”. Most of the regular day-to-day stuff can be automated (I include humans following SOPs in that). My personal opinion is that many problems have their roots not in pilots or aircraft systems but in the interface between the two: we are still in the relative dark ages here using methods that were old half a century ago.

It’s normal to consider risk in terms of severity factored by likelihood of occurrence: you can accept something with pretty bad potential consequences as long as it sits way out at one end of the probability curve. If you could define an area where the results of automation were uncertain or even catastrophic, as long as that area was tiny compared with the area of competence, that would be acceptable.
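In code terms that test is something like this sketch (the bands are roughly the simplified textbook per-flight-hour certification figures, not anything from a specific document):

    # Severity-vs-likelihood acceptance test, simplified.
    max_probability = {
        "minor":        1e-3,
        "major":        1e-5,
        "hazardous":    1e-7,
        "catastrophic": 1e-9,
    }

    def acceptable(severity, probability_per_fh):
        return probability_per_fh <= max_probability[severity]

    print(acceptable("catastrophic", 1e-10))  # True: bad, but improbable enough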

As far as removing pilots completely from the equation, even now there are large autonomous drones flying missions around the globe but I don’t think they have quite the safety record that would entice many to sit on board as a paying customer.

The day will come when we have AI strong enough to give a comparable performance to a trained and experienced human in terms of air transport operations. At the moment, it would appear to be cheaper (and possibly safer) to retain the status quo. Also, by the time autonomous airliners become a reality, every other mode of transport will be similar and pretty much every job a human could do could be done as well or better by AI...

pax britanica
27th Oct 2015, 12:06
I think the real risk to pilot jobs from automation and AI is the point made above: most other jobs will be automated before pilots.
So who do pilots have left to fly when no one has a job and can't afford to, and when so many business functions are automated that there are no bums on seats in business class?

The computers themselves can telecommute, so there is no need to fly the hardware around either, and thus not much call for airlines at all. In fact, what is there much demand for? We can automate banking and finance functions so they are run by honest, incorruptible machines. Same with the stock exchanges, where machines will do the job better than their coke-fuelled, reckless, risk-taking human counterparts.

So yes, automation is a threat to cockpit jobs because it is a threat to all jobs in the long run. And as the saying goes, in the long run we are all dead - but perhaps that was not meant in this sense: what will the AI-run machines do when they realise they no longer need us?

felixthecat
27th Oct 2015, 12:36
I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.

The chances of an accident are so incredibly rare. The statistics are something like 1 in 2,000,000 flights having an incident. Well, to my mind that's 1,999,999 where the pilots have done a bloody good job......

Where do we hear about that? 80% is a huge percentage and scares the pants off the general public. But what about 0.00005% (1 in 2,000,000)? And that's assuming every one of those is the pilots' fault, which it isn't even by the 80% statement.

Come on - the incident rate can NEVER be zero and can always be improved, but 0.00005% means that all in all pilots do a great job!

lomapaseo
27th Oct 2015, 13:40
I hate the way blame always seems to be laid at the pilots' feet... Statements like "80% of crashes are due to pilot error" do nothing for the public or the profession.


fair point :ok:

Replace the words "blame" and "due to", and the pilot becomes only one of several causal factors. In the long run, as many have pointed out, we can't seem to achieve zero mechanical or system screw-ups, so we depend on the pilot to mitigate a large percentage of them.

He just becomes one of the areas in the recommended actions in spite of what the newspapers say or a discussion board like this :)

fc101
27th Oct 2015, 15:42
FC101: see earlier, I do not consider that pilots ARE the major problem. I have an adaptation of Jim Reason's diagram here... and agree Gawande's Checklist Manifesto is a good read.

I think you misread....I don't agree that pilots are the major problem either.

c101

gearupmaxpower
27th Oct 2015, 16:23
I like the Air Safety Iceberg Reality model. Certainly a more balanced way of representing the reality of the situation.

I know there are many others, but I would like to add one other accident to the 'spectacular saves' list.

Cathay flight CX780 a few years ago. Accident report here:

http://www.cad.gov.hk/reports/2%20Final%20Report%20-%20CX%20780%202013%2007%20web%20access%20compliant.pdf

Another 300+ lives saved by, in my opinion, the pilots. I wouldn't have confidence in computers bringing this situation to a successful ending. It really was one out of left field.

They, like Sully, received various awards for their actions.

And I very much like this thread with sound, reasoned discussion and not the usual trolls spoiling it all. May it continue. :)

cwatters
27th Oct 2015, 16:24
How could a computer be programmed to recognise, for example, a vehicle about to trespass onto a runway when the plane is about to land? It may be programmed to recognise a shape, but it takes a human to instantly interpret the shape, speed and likely trajectory, and make the split-second decision for a go-around.


Driverless cars will probably have even less time to react.

slast
27th Oct 2015, 16:49
Thanks GUMP, I had been intending to use that event for just that reason, but couldn't lay my hands on the report at the time I did the diagram. It fits the pattern totally, especially the phrase in the report summary about "a SERIES (my emphasis) of causal factors" leading to loss of control of both engines.

Many other qualifying events are also in the IFALPA Polaris awardees list https://en.wikipedia.org/wiki/Polaris_Award
although for the purpose of this discussion the hijacking events probably have to be discounted. Even so, we have seven in the last decade.
Steve

Tourist
27th Oct 2015, 17:13
There is a lot of misunderstanding of the challenges here.


The BA 777 losing both engines is not an instance where the human is better.

That is an example of where a computer would have the advantage.

No human has practised glide approaches in a 777, whereas programming the physics into a computer is easy.

It can monitor the approach in real time and fly the perfect AoA without difficulty.
It knows the effect a flap raise will have.

Just like you can't match an autopilot for normal flight, you can't match it for abnormal flight.

The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.
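For a sense of how simple that physics is, a first-order sketch in Python (still air and a fixed, assumed best-glide L/D - the real calculation would add wind, the turn, configuration changes and margins):

    # Can we glide to the field? First approximation only.
    FT_PER_NM = 6076.0

    def can_reach(height_ft, distance_nm, glide_ratio=17.0):  # L/D assumed
        still_air_range_nm = height_ft * glide_ratio / FT_PER_NM
        return still_air_range_nm >= distance_nm

    print(can_reach(2800, 7.0))  # True on paper (~7.8 nm available),
                                 # but with almost no margin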



People are trying to make out that the computer has to be perfect.
It doesn't.

It merely has to be better than we are at present.

There will be black swan events, and that is where a good crew might be better, but the vast majority of events are endless repeats of accidents ad infinitum.


We are currently in a bad transitional phase.

An Example.

TCAS

The aircraft tells us what to do.
We are not supposed to second guess it, just do what it says.
We then try to do what it says but usually get it wrong. The aircraft itself would never get the response wrong.

EGPWS

Again, simply connecting these systems to the autopilot would make things a lot safer.

Those who say that the sensors will never match the human eye are talking utter rubbish.

1. The pilots barely look out the window.
2. We fly blind in IMC all the time.
3. If anyone actually thought that lookout was important, we would have cockpits like fighters that you could actually see out of.
4. Modern fighters already have integrated systems far better at spotting other aircraft.

CONSO
27th Oct 2015, 17:46
The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.


And would it pick the congested Hudson River (not a designated airfield, with the airfields out of reach) to land on? :ugh:

Or would it simply bring up 'does not compute', or 'Sorry Sully, I cannot allow you to do that'?

Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?
:sad:

Herod
27th Oct 2015, 17:52
There has been talk that Sully could have made Teterboro, and that a computer would have done so. That is all very well, but the computer would have to be programmed with ALL building work on EVERY approach to EVERY airfield in the world, and in many cases ANY large ships etc. transiting the area. Yes, it could be done, but the constant updating (assuming EVERY building contractor informed the relevant authorities) would be enormous.

barit1
27th Oct 2015, 17:52
Tourist:
The Sully case is another one.

The computer would not be trying to guess whether he could make the runway. It would know. It is very simple physics for the computer.


And the computer will respond with a ONE or a ZERO.

ONE, No problem. It steers for TEB with optimum configuration - Gear up, if necessary. :D

ZERO - Now what? Would it steer for the Hudson, and avoid the watercraft? What database would apply? :rolleyes:

Seems to me the Cat3a-configured, pilot-optional airplane will not be capable of flying safely in a strictly VFR environment.

MG23
27th Oct 2015, 17:59
Or, as was the case for the Moon landing, a computer about to overload and abort the sequence?

The Apollo Guidance Computer is actually a good example of coding to handle unexpected events. It had CPU power to spare in normal operation, but was programmed to keep flying and drop less essential tasks like displaying data to the crew if it was overloaded. Had the programmers not built that capability into the software, Neil Armstrong wouldn't have been the first man on the Moon.
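The spirit of it, grossly simplified (an illustration in Python, nothing like actual AGC code): run jobs in priority order and shed whatever the time budget can't fit, rather than crashing.

    # Priority-ordered task shedding within a fixed cycle budget.
    jobs = [("guidance", 0, 20), ("steering", 1, 20), ("display", 9, 30)]

    def run_cycle(jobs, budget_ms):
        done = []
        for name, priority, cost_ms in sorted(jobs, key=lambda j: j[1]):
            if cost_ms <= budget_ms:
                budget_ms -= cost_ms
                done.append(name)   # anything that doesn't fit is dropped
        return done

    print(run_cycle(jobs, 50))  # ['guidance', 'steering'] - display shed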

Of course, it had no idea that it was aiming to land them on top of a big rock (or was it the edge of a crater? I forget). So, without Armstrong, it would probably just be a pile of debris on the Moon.

Tourist
27th Oct 2015, 18:06
Why would it have to crash?

Why not have it scan the ground in front and around to the sides in search of the most suitable landing area?

This technology is already in existence and operating. This is not pie in the sky.

https://www.youtube.com/watch?v=GoCFE8xVhKA

Incidentally, Sully was an exceptional pilot. Just because one human did well does not mean that pilots usually don't crash in these scenarios.

The reason we know his name is because he is the very rare human who got this right, not the norm.

evansb
27th Oct 2015, 18:18
Fully automated cockpits will increase cyber attacks, not reduce them. You can't cyber attack a computer that isn't there...

Tourist
27th Oct 2015, 18:55
But the computer is there.

All Boeings and Airbuses are entirely reliant on computers, with no fallback options if they fail, so there is no added risk.

There is no fallback mode that doesn't require a computer.

slast
27th Oct 2015, 20:14
I would like to hear from the guys who are actually really experienced in the automation and control side about this.

There is a lot of talk about programming and sensors and databases and systems that learn, which are progressing by leaps and bounds. I have no doubt that it will be possible in a relatively short timeframe to do far more things quasi-autonomously than we do now. BUT:

In the following I am using the terms sensor and system very loosely; e.g. sensor simply means a "problem detector" and "system" simply means some aspect of safe flight. So it could be a sensor for a hydraulic leak in the airframe "system" or "conflicting traffic" in the air environment "system", or just about anything else - we want to stay conceptual here.

For the sake of argument imagine we have this device we'll call a super-smart box (SSB), and accept that SSB design is such that it "doesn't make mistakes". All problems can be detected by a sensor and the SSB provides signals with correct output to deal with it, with greater reliability than a human can.

But correct SSB output is not the end product we are looking for. A change of trajectory of hundreds of tons of aircraft is what we are looking for, and SSB output needs to be converted to physical machinery activity.

My question is what happens in this chain of events.

Sensors detect problem in system A > SSB chooses correct response which demands action by physical system B > physical system B does not respond as expected by SSB. It may do nothing or may do something entirely different. "Something entirely different" could trigger other sensors in system A, B, C, D etc.... ad infinitum.

What is the process by which the SSB knows what to do now, and who is responsible for the correct outcome of that process?
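A minimal runnable sketch of the loop I'm asking about (every name here is a hypothetical placeholder, not any real architecture):

    # Command, verify the physical response, escalate when reality disagrees.
    class SSB:
        def decide(self, problem):           # chooses the "correct" response
            return {"hydraulic_leak": "isolate_system_B"}.get(problem, "monitor")
        def expected(self, command):         # what system B should now report
            return "isolated" if command == "isolate_system_B" else "normal"

    class Aircraft:
        def __init__(self):                  # system A is already misbehaving
            self.state = "hydraulic_leak"
        def sense(self):
            return self.state
        def actuate(self, command):
            self.state = "something_entirely_different"  # B does not respond as expected

    ssb, jet = SSB(), Aircraft()
    problem = jet.sense()                    # sensors detect problem in system A
    command = ssb.decide(problem)            # SSB output is correct...
    jet.actuate(command)                     # ...but the machinery is not
    if jet.sense() != ssb.expected(command):
        # Now what? The SSB needs a model of its own effectors failing,
        # and that model is just another piece of fallible software.
        print("unmodelled response:", jet.sense())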

G-CPTN
27th Oct 2015, 20:33
IF THEN ELSE (https://en.wikipedia.org/wiki/Conditional_(computer_programming)#If.E2.80.93then.28.E2.80.93else.29) :E

Nested if necessary - or concatenated.



llondel
27th Oct 2015, 20:36
Had the programmers not built that capability into the software, Neil Armstrong wouldn't have been the first man on the Moon.

He probably would have been - or if not, then a very close second, depending on what hit first.

MG23
27th Oct 2015, 20:58
He probably would have been - or if not, then a very close second, depending on what hit first.

Aborts could be flown by a completely different, independently developed, backup computer. They didn't trust them that much :).

cjm_2010
27th Oct 2015, 21:34
I'm a lowly student on my way to an NPPL(m), but my job is automation of software testing processes.

I always sell automation to my clients in the same way: it will save money by taking on the bulky repetitive tasks (like regression testing), but there will always be an element of manual intervention required. Humans will spot things the automated code won't; automation is not good at performing highly complex tasks that require intuition, and cannot perform exploratory testing.

Automation tools are often sold on the promise that they will catch everything, and it is true that in some rare cases defect detection rates may well be better than when the testing is executed manually. But true AI does not exist as far as I know - and even if processing power ever reaches that point, I still firmly believe the human element can't ever be eliminated. We are simply far better at picking the important stuff out of the data we're bombarded with, we're better able to recognise patterns, and we can act upon those cues subconsciously.

Granted, sometimes the humans make incorrect decisions. But we are adaptable and often able to self-correct in time to avoid disaster.

I don't think we'll see pilotless passenger aircraft anytime soon. And I'd certainly never set foot on an aircraft without a human flight crew.

peekay4
27th Oct 2015, 22:26
No offence CJM but you can't really compare QA test automation to life-critical systems automation.

Two very, very different worlds.

CONSO
27th Oct 2015, 22:37
All Boeings and Airbuses are entirely reliant on computers, with no fallback options

Wrong re Boeing - don't know re Airbus.

Thru the 777, Boeing has always had cable backups, direct to a minimum number of flight controls and trim tabs. Simply push/pull hard enough to override autopilot, safety limits, etc. 707 and 747 aircraft have been saved by that concept - the pilot always has the final word despite many bells and lights and buzzers and pull-up warnings.

Read about the Gimli Glider for but one example (a 767), a China Airlines 747 over the Pacific for another, and an early 707 over the Atlantic - wings were bent, but it remained in service.

vmandr
27th Oct 2015, 23:08
'philo' questions tend to generate more questions than answers.

here a few of mine...

assuming Sully is 'the perfect pilot'

a. how will you program/teach your 'computer pilot' Sully's hope, fear, anger and sense of futility, or 'taking risks'?
b. how will you teach situational awareness and see-and-avoid to an inherently 'blind' computer pilot?
c. how will you inhibit 'self-destruct'?
d. how will you teach responsibility and 'departure from the rules', as in ICAO Annex 2 and FAR 91.3:
'The pilot-in-command of an aircraft shall, whether manipulating the controls or not, be responsible for the operation of the aircraft in accordance with the rules of the air, except that the pilot-in-command may depart from these rules in circumstances that render such departure absolutely necessary in the interests of safety.'

_Phoenix
28th Oct 2015, 04:16
Flying is not rocket science or AI science, whatever. The pilot just needs an operational aircraft that responds correctly to inputs, and he will put it on the ground within reasonable g factors. Volume wrote a very good post on page 2; he clearly shows why more automation will not improve safety.
These days, automation is good just for the profitability figures, but it erodes pilots' skills. When unique event X happens, the pilot has to aviate alone, while the automation is there only to pop up an endless list of ECAM messages, as in the QF32 case - or even worse, as in the XL888 case.
Apart from the fact that a fully automated airliner is a sweet delusion, I find this thread interesting and at least a good warm-up for the QZ8501 final report, due (again) this month.

Tourist
28th Oct 2015, 06:28
CONSO

Wrong

Boeing has FADEC on the engines. A computer hack means you have lost your engines.

We ignore FADEC and have got used to it because it never fails, but it is a computer and it is 100% required to fly the aircraft.

If the FADEC fails, the aircraft is lost

Slast

Your question about sensors requiring responses and not getting the right one is another area where computers are better.

After an F15 incident, and post the Sioux City crash, NASA did some work on exactly that sort of problem. This was a long time ago, and the learning systems were so good that the problem became opaque to the people on board.

https://www.nasa.gov/centers/dryden/pdf/88798main_srfcs.pdf

This is old tech. A problem solved.

vmandr

Please make sense.

Herod.

You are acting as if the computer must be blind.


Can we just have a reality check here for a moment re the situational awareness bit.

Does anybody think that for the last 30 years fighter pilots have been finding other aircraft using their eyes?

No, they don't. They have systems which do a far better job.

Modern military aircraft have a range of sensors which surpass the human eye in every way. Google F35 EO DAS.

AN/AAQ-37 Distributed Aperture System (DAS) for the F-35 (http://www.northropgrumman.com/capabilities/anaaq37f35/pages/default.aspx)

This is obviously aside from the fact that the eye has already been considered and discarded as a reliable method of not hitting aircraft.
It's so bad that we invented TCAS.

In VMC, are you allowed to disregard TCAS?

No.

Essentially, we are already relying on automated systems for aircraft avoidance.


You will note that I am attempting to add references and evidence to turn my opinions into accepted facts.

Just saying things without any supporting evidence is fairly worthless and does not contribute to the debate.

felixthecat
28th Oct 2015, 07:33
Having flown modern jets and the old clockwork wonders for many years, I have had to on occasion think outside the box and not follow SOPs. Computers follow SOPs, and that's not always great. Also, even in today's marvels of computerised wizardry, I have flown several flights where the computers have dropped out and systems have been lost... We regularly get notifications from the manufacturers with changes to software and block points, not to mention workarounds that the PILOTS have to implement until the software is fixed.

Who implements the workarounds when the pilots are gone? Does the computer see it's wrong and correct itself? You would hope so, but even today we haven't got to that stage.

ChickenHouse
28th Oct 2015, 08:05
The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.

Philosophical answer: why?

Automation and standardization always come as a couple. The current trend is towards highly automated systems for the problem of "getting from A to B in the air".

Earlier in history this was solved by humans, called pilots, who explored how this can be done without too many losses. As always, once you reach a certain level of confidence in a "should be" process, you can start automating along the track. One of the reasons we are having this discussion is that automation started before we really understood enough - we left the decision to commercially-minded people too early.

You do not need an automated system to deal with "unforeseen" events once sufficient knowledge has been gathered to avoid getting into situations where those events can have an impact. If you have found a process with a certain corridor of stability, the only thing you do is follow the centre line, and unforeseen events really do no harm, so you can ignore them. Creativity was only used in finding the path along which the automation is rock solid. Once you follow that road, it does not matter whether a properly functioning human, formerly called a pilot, or a machine does the work, because the probability of hitting a potentially dangerous unforeseen event is so low that the human no longer carries the skill anyway. This is one of the essentials of the "follow the magenta line" discussion.

Do we want to develop an automated system with creativity and artificial intelligence? I say no, as it will produce the same threats to Earth as humans do, but with much more power.

Tourist
28th Oct 2015, 09:35
Chickenhouse

That is a very valid point.

Tourist
28th Oct 2015, 09:38
Felixthecat

That would be a reasonable question if your basic tenet was correct.


"Modern" aircraft are not in fact "technical marvels" at all.

They are astonishingly old technology.

Nobody drives a car that old-fashioned. Nobody uses a telly or a phone a tenth of the age of that technology.

The Airbus is practically Stone Age.

http://i404.photobucket.com/albums/pp121/Tourist_photos/600px-Lufthansa_Airbus_A320-211_D-AIQT_01.jpg (http://s404.photobucket.com/user/Tourist_photos/media/600px-Lufthansa_Airbus_A320-211_D-AIQT_01.jpg.html)

http://i404.photobucket.com/albums/pp121/Tourist_photos/untitled.png (http://s404.photobucket.com/user/Tourist_photos/media/untitled.png.html)

These two gestated at about the same time for about the same length of time. They even have the same plastic beige finish...
Clever in their day......

Imagine if we judged what a phone would be capable of today based upon the 80's equivalent.

Don't judge what old aircraft can do against the future.

stilton
28th Oct 2015, 10:00
I'd bet that most if not all of the posters advocating total automation as the future for transport aircraft are not pilots.


A few thousand hours of real flight time, experiencing the myriad dynamic situations and unexpected failures most pilots have, should convince even the most die-hard computer nerd that you can't program for every situation, not to mention computer failures themselves.


So allegedly Sully could have 'made' Teterboro that day. Interesting - but what if he had tried for it and come up short? You can't always predict glide distance, and winds can change at a moment's notice. In that case their chances of survival were most likely zero: imagine 'landing' an A320 in a built-up area without power :eek:



He made the right decision, saving everyone using his judgement, experience and skill, three qualities a computer will never have.

Tourist
28th Oct 2015, 10:46
Stilton

Thank you for a perfect example of an entirely opinion-based post with no evidence to back up any of it.


Wishful thinking will not change anything.


Your example could be turned around.

What if he had flown to the river and hit a boat, killing all on board, and computers then suggested that he could have made the runway...?


Re judgement

Computers don't use judgement. They use factual, physics-based calculation. They will always beat a human at such things.

Re Skill

Can you fly at coffin corner as well as a computer? If you can, you have more "skill" than U2 pilots.
Can you fly a perfect speed, height and heading better than a computer? If you can, you need to start a super flying school.....

Re experience

Can you contain the repository of all computer pilots' experience and instantly share that knowledge around the other pilots, so nobody ever makes the same mistake twice - or are you doomed to keep re-learning all the same mistakes again and again?

p.s. I'm a pilot.

Tourist
28th Oct 2015, 10:55
I notice, incidentally, that many on here are choosing to mandate that, to be viable, autonomous aircraft have to match the best of pilots like Sully in their strongest areas.

The reality is that most pilots are nowhere near Sully.

What matters is whether overall there will be fewer deaths with autonomous aircraft.

Not whether there will be different causes, because there inevitably will, particularly in the early stages, but whether lives and money (airlines are businesses) are saved.

Autonomous pilots will never be drunk.
Autonomous pilots will never be suicidal.
Autonomous pilots will never be tired.
Autonomous pilots will never be stressed.
Autonomous pilots will never be rusty.
Autonomous pilots will never fall out with each other in the cockpit.
Autonomous pilots will never misread a plate/minima.
Autonomous pilots will correctly carry out TCAS RAs.
Autonomous pilots will never ignore a GPWS pull-up.
Autonomous pilots will never fail to fly a perfectly serviceable 777 to a VMC runway.
Autonomous pilots will follow the rules/SOPs.
Autonomous pilots will never break the law.


You need to judge any computer against the average, not the exception, and the average is very very average......

framer
28th Oct 2015, 11:48
The first automated aircraft would probably be data-linked to the ground with "pilots" monitoring several at once, able to step in and assume remote control if necessary. This would remove the "can it deal with any hypothetical situation?" problem.
Can you imagine a guy on the ground, sipping his coffee, being "handed" Sully's ship with both engines out, and making the same decisions as Sully?
Nup, the aircraft would be at 500 ft AGL before he began to understand the situation. He wouldn't be 'invested' enough.

vmandr
28th Oct 2015, 12:03
@ Tourist

You need to judge any computer against the average, not the exception, and the average is very very average......

There is a surplus of 'average' pilots out there. Why is a 'mechanical' one, a bit 'above average' and very expensive, needed?

My belief is that flawed, limited humans cannot create perfect, flawless machines, and history keeps proving my point.

Regarding perfection, a quick link: http://philosophy.stackexchange.com/questions/3510/is-it-possible-for-something-perfect-to-be-created-by-humans

The answer is 'yes' only if you settle for the... '2nd best'.

slast
28th Oct 2015, 12:04
Tourist, thanks for the Dryden reference. That was the sort of thing I started the thread for. Will study it.

Tourist
28th Oct 2015, 12:15
vmandr

There you go again

Why should the computer have to be perfect?

I used the term average as a derogatory term.

It only has to perform better than the average human pilot, and average human pilot performance is not good a lot of the time.

So poor, in fact, that engineers have quietly snuck in various automation devices, such as TCAS and EGPWS, without people noticing.

Tourist
28th Oct 2015, 12:21
slast

If you like that then I recommend this TED talk.

It is a great talk all the way through, but the bit at 6:46 is relevant

https://www.ted.com/talks/raffaello_d_andrea_the_astounding_athletic_power_of_quadcopters?language=en#t-394181

funfly
28th Oct 2015, 12:32
We seem to have two distinct requirements for a (human) pilot.
One is a technical ability to understand information and to control the mechanics and electronics involved in a commercial flight.
The other is the ability to 'fly' which is a motor skill.
I know people who fly a Cessna who cannot use a computer and I also know wizards on the computer who would never be able to fly a Cessna.
Does a commercial pilot have to have both abilities? If so, are we asking the impossible of the driver of a modern airliner - and might this be the reason for the suggested pilot errors in aircraft disasters?

Tourist
28th Oct 2015, 12:37
I don't think that is the main problem.

I think the problem is that a modern airline pilot never has the chance to practise his skills often enough to remain good at them; plus, the sort of people who are good at these things are rarely the sort who cope well with endless tedium waiting for something to happen.


I heard, possibly on here, a really good idea that would unfortunately be very hard to keep safe and effective.


Make the two sides of the cockpit selectively independent, and run one side at a time in the cruise in computer-game mode.
Each side takes an hour at a time flying simulated emergencies all flight.

That would produce decent involved experienced pilots.

vmandr
28th Oct 2015, 12:44
Tourist

the OP posted a philosophical question, to which I can only comment ...'philosophically'

I cannot introduce 'technological' views - such as those mentioned by you -
as philosophy and technology do not blend.

In my early years in aviation a friend, a Captain, told me once 'you can't possibly provide for every eventuality', to which I will add 'conceivable'.

Why should the computer have to be perfect? If it is 'not perfect', then who needs it?

If it is 'near perfect', how do you ensure that it 'adapts' to 'dynamic environmental changes' when those changes cannot be ...'conceived' and thus predicted?

'Bookies' make a living out of this very fact: 'non-predictability'.

I mean it is a question of a human 'Vne', the will to 'exceed one's own limitations'. With all the technology available to the military, why do they still use human pilots? Even their drones have 'pilots'.

G0ULI
28th Oct 2015, 12:52
Since Sully's actions are being held up as the gold standard for dealing with an unexpected event, it might be worth considering his options again.

Returning to the airport he had just taken off from was not viable.
Landing at another airport nearby was potentially possible but involved flying over a built-up area.
That left looking for the longest, largest unobstructed area to put the aircraft down.

Aircraft are strongly built to survive in the air but perform very badly when coming into contact with solid immovable objects like buildings. Aircraft are designed to potentially survive a ditching, they even carry liferafts and life jackets for passengers and crew.

Sully acted in the only logical manner to ensure the best chance of survival for those on board. Even if the aircraft had hit a vessel on the Hudson and disintegrated, it is likely that some on board would still have survived.

If he had gone for a landing elsewhere on land and hit a building, undoubtedly everyone aboard would have perished.

It wasn't such a hard decision to make although the flying skills and great deal of luck required to carry out a successful ditching were exceptional.

An automated control system may well have opted for heading for the nearest airport if the figures indicated it was possible to glide there. The automated system might then be caught out by windshear nearing final approach and landing, because that was a factor it was unable to detect in advance. So the aircraft could have landed short and hit buildings, killing everyone aboard and causing casualties on the ground. A human pilot will at least consider the possibility and additional risks of windshear and act accordingly.

Automation can only act in accordance with the information it receives from its sensors. It cannot autonomously consider the possibility of events for which there are no data, so it is pointless and impossible to plan ahead for every eventuality.

Computers effectively live in the moment, but humans are always looking to the future; even if it is as simple as putting one foot in front of the other, we are constantly aware of our surroundings and potential danger through millions of years of evolution.

Perhaps computers will evolve sentience and be able to consider and take control of unlikely events, but will a sentient computer put people or its own survival first? The survival instinct is the most basic one found in nature. All living things ultimately act in a way that best ensures their own survival, so why would a sentient computer be any different? Could a rogue sentient computer decide to crash an aircraft deliberately because of some perceived threat to computers generally? Perhaps one of the passengers on board is a computer programmer intending to 'adjust' the level of autonomy of the computers used in aircraft or false data is being input for testing purposes, shades of HAL in 2001.

Automation should assist to the point where it disappears into the background, not assume overall control. That is a human prerogative.

ChickenHouse
28th Oct 2015, 12:56
Don't put all discussion around flying into one basket!

There are pilots and airmen flying airplanes.
There are airliners and automatons air-transporting SLF from A to B.
The future may be the more automated flying cattle paddock or stapled coffins, as discussed lately.

The art and joy of flying is in the first, not in the other two.
That is why pilots and airmen don't like the other two options.

slast
28th Oct 2015, 13:19
Thanks Tourist, again a very interesting video.

I found your comment re dealing with tedium interesting. In the late 80s, when the automation of the F/E was being done, I suggested at meetings with manufacturers that the stated objective - e.g. that "normal workload" on the B744 should be lower than that on the B757 - was misconceived. What we actually needed was not workload reduction but workload optimisation through a variable degree of automation.

E.g. operating LHR-MAN-JFK in a 744 would require maximum automation on the 40-minute LHR-MAN leg, but should permit much more pilot involvement en route on the oceanic leg, where strategic issues could be more relevant, as well as maintaining alertness and situational awareness.

I said in a 1987 paper at a Flight Safety Foundation meeting on this subject "We have to do better in achieving optimum arousal levels. If you were setting up experiments to study people sleeping, I suggest you might look for a person due for natural sleep on his body clock, in a comfortable chair, low light level, minutely changing light patterns, temperature about 75 deg. F, a white noise background, and the elimination of body movements and intellectual stimuli.

Anyone who's flown long night flights in oceanic airspace on a B-757 or B-767 will recognize the scene well! Our problem arises when something goes wrong, because on these aeroplanes the abnormal workload appears to be, and in some cases is, extremely high - certainly monitoring breaks down almost entirely for a high percentage of the time. It is the step function between workload levels which is hazardous, not necessarily the absolute levels."

Keep it coming...

Tourist
28th Oct 2015, 13:20
Tourist

With all technology available to military, why they still use human pilots ? even their drones have 'pilots'.

Factually incorrect.

Even the most basic google search will find many that are not.

https://en.wikipedia.org/wiki/Northrop_Grumman_X-47B

https://en.wikipedia.org/wiki/BAE_Systems_Taranis

https://www.flightglobal.com/news/articles/farnborough-bae-systems-tests-uav-auto-control-syst-374389/


I should also point out that the challenge of operating a military aircraft autonomously is orders of magnitude greater than an autonomous airliner.


I will restate.

It does not need to be perfect, merely better than humans to be worth the effort.

Tourist
28th Oct 2015, 13:25
Automation can only act in accordance with the information it receives from its sensors. It cannot autonomously consider the possibility of events for which there are no data, so it is pointless and impossible to plan ahead for every eventuality.


Factually untrue.

You are just making things up based upon no knowledge of systems.

Your example is a case in point.

All you need to do is build in an error bar to the decision making process.

Can the aircraft glide to a runway?

If yes, how much flex is there?

For glides, demand 10% flex.

If no runway is available and the route is over a built-up area with a river option available, then choose the river.

Job done.
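
For the avoidance of doubt, here's a minimal sketch of that margin-based logic in code. Everything in it - names, numbers, options - is illustrative, not taken from any real FMS:

```python
# Minimal sketch of margin-based forced-landing logic; all figures invented.

GLIDE_MARGIN = 0.10  # demand 10% flex on any glide solution

def choose_landing_option(options):
    """options: list of (name, range_required_nm, range_available_nm, over_city).

    Return the first runway that meets the margin and avoids built-up
    areas; otherwise fall back to the river.
    """
    for name, required, available, over_city in options:
        margin = (available - required) / required
        if margin >= GLIDE_MARGIN and not over_city:
            return name
    return "river"  # no runway meets the demanded flex

# Hypothetical Hudson-style case: both runways marginal and over the city.
options = [("LGA 13", 9.0, 9.2, True), ("TEB 24", 11.0, 10.5, True)]
print(choose_landing_option(options))  # -> river
```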

funfly
28th Oct 2015, 13:48
There seem to be three distinct areas involved.
1. Driving the aircraft under 'normal' circumstances
2. Dealing when conditions drift outside of normal
3. Coping with a potentially fatal emergency situation

It would appear to me that it is situation 3 that some pilots have been unable to deal with - or indeed that some pilots have deliberately caused.

In the cases over the last couple of years, are there any that a computer could have been programmed to rectify?

The rest, general flying, must be the easy stuff.

Tourist
28th Oct 2015, 13:54
In the cases over the last couple of years, are there any that a computer could have been programmed to rectify?



You mean like fly an approach and land a 777 on a huge runway in VMC in good weather without hitting the water?

or don't fly the new Russian airliner into a hill?

or don't hold an A330 in a stall until it hits the water?

or not stick a Super Puma into Vortex ring?

I'm actually struggling to think of an accident recently where a computer wouldn't have been better.

vmandr
28th Oct 2015, 14:03
'Northrop' UAV

...capable of semi-autonomous operation
...designed to allow ground crews to steer the X-47B while on the carrier deck
...the aircraft expected to enter service in the early 2020s

humans present here...

'BAE Taranis' UAV

a. A semi-autonomous unmanned warplane...
b. ... is intended to incorporate "full autonomy", allowing it to operate without human control for a large part of its mission
c. The Taranis is planned to be operational "post 2030" and used in concert with manned aircraft

more humans involved...

Nothing about orders, production, or operation - just tests by the prototypes.

Anyway, the issue here is air transport aircraft, not 'wannabe' UAVs.

Tourist
28th Oct 2015, 14:07
You do realise that "ground crews" is referring to the deck move team?

Like the push back crew?


It is deck landing and airborne refuelling by itself.

Do you have any idea how much more difficult that is than cruising an airway?


If you are not going to bother debating sensibly after reviewing the evidence then why post?

You will notice that the last link was to a BAe baby airliner trialling systems for airliners...?

vmandr
28th Oct 2015, 14:19
If you are not going to bother debating sensibly ...

Now, this lacks 'democratic spirit', methinks :)

G0ULI
28th Oct 2015, 14:22
Tourist

The examples you posted are all ones where sensory data was misinterpreted (by humans) or was missing due to a fault condition. Inexperience of the pilots and failure to follow best practice were contributing factors.

A computer can only react to the data it receives. If all airspeed data disappears and the angle of attack indicators are giving out-of-range readings, the computer can then only follow a best recovery procedure programmed by humans, based upon previous human experience. Nose down, throttles to two thirds, and pray the wings don't come off due to overspeed. The computer will consider all data sources, which a human pilot might disregard in the stress of the moment, so on that basis a computer could potentially achieve a better outcome.
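
To sketch what such a human-programmed fallback might look like (purely my illustration - the targets below are invented for the example, not any manufacturer's memory items):

```python
# Illustrative unreliable-airspeed fallback, not real avionics code.

FALLBACK_PITCH_DEG = 2.5    # assumed safe nose-up attitude for this sketch
FALLBACK_THRUST_PCT = 66.0  # "throttles to two thirds"

def pitch_thrust_demand(airspeed_valid, aoa_in_range, pitch_deg, thrust_pct):
    """Hold current demands while air data is trusted; otherwise revert
    to a conservative pitch-and-power pairing and keep flying."""
    if airspeed_valid and aoa_in_range:
        return pitch_deg, thrust_pct
    return FALLBACK_PITCH_DEG, FALLBACK_THRUST_PCT

print(pitch_thrust_demand(False, False, 1.0, 90.0))  # -> (2.5, 66.0)
```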

slast
28th Oct 2015, 17:22
Tourist,
Reading quickly through the Dryden "Self-repairing Flight Control Systems" paper: if I understand it correctly, an Artificial Neural Network (ANN) flight control system would use continuous feedback from the control surfaces so that, under damage or failure conditions, the system re-allocates control to non-traditional flight control surfaces and/or incorporates propulsion control when additional control power is necessary to achieve the desired flight control performance - and that is now well proven.

In this discussion, the relevance would be that much of the human pilot creativity demonstrated in the United DC10 uncontained failure could have been replaced by automation, as could much of the aircraft flyability aspects of the Qantas A380 uncontained failure. So I take your point about that aspect having been addressed - to a degree.
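
For readers unfamiliar with the idea, the reallocation step itself can be illustrated without any neural network. Here is a deliberately simplified control-allocation sketch (my own illustration, not Dryden's algorithm, with invented effectiveness numbers): the matrix B maps effector deflections to moments, a failed effector's column is zeroed, and a least-squares solve redistributes the demand to whatever remains - including, in principle, differential thrust.

```python
# Simplified control-allocation sketch; all numbers invented for illustration.
import numpy as np

# Rows: roll and pitch moment. Columns: left elevon, right elevon, diff thrust.
B = np.array([[ 1.0, -1.0, 0.3],
              [ 0.8,  0.8, 0.1]])

demand = np.array([0.2, 0.5])  # desired roll and pitch moments

def allocate(B, demand, failed=()):
    """Zero the columns of failed effectors, then least-squares solve."""
    B = B.copy()
    for i in failed:
        B[:, i] = 0.0
    u, *_ = np.linalg.lstsq(B, demand, rcond=None)
    return u

print(allocate(B, demand))               # all effectors available
print(allocate(B, demand, failed=(0,)))  # left elevon lost: demand shifts
```

Of course - and this is where my questions below come in - this only helps when the remaining effectors actually respond to the commands they are given.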

So can you explain how an ANN would deal with e.g. the BA 747 Nairobi T/O event, where prior human failures involving engine and airframe hardware, and system logic, resulted in undamaged surfaces ending up incorrectly configured for the actual flight regime?

In CX 780, where the engine fuel control units simply got "stuck" and unresponsive, the 70%/17% asymmetry problem could also have been handled automatically, at least in some parts of the flight envelope, by such ANN reconfiguring. And by chance the total energy being delivered was not too great to prevent the crew landing.

But if both engines had been at high power, the outcome might have been different. From what I have read, it appears that for an ANN to be effective there needs to be a transition to much more electric involvement in actuation as well; e.g. in the Dryden document there's a picture of the F15 wing opened up for the replacement of mechanical actuators with electric servos. In other cases engines and other mechanical components may simply ignore control inputs, including shutdown. How does an ANN deal with that?

Nialler
28th Oct 2015, 17:41
Pilots are the cancer.

I mean that positively, and I mean it as someone who has been involved in supporting mission-critical systems over decades.

If I as a doctor discover the means to cure every ill but cancer, then cancer will rocket in the statistics as a cause of death. I've eliminated every other cause of death, but something will still grab ya. Cancer will now become ever more prominent as a vector of death.

If automation becomes the new deal, no matter how well it is developed, it will still be down to the pilots to be the backstop when it fails.

Ian W
28th Oct 2015, 18:23
An interesting discussion.
There are three main approaches to automation in the presence of humans.



Human in the loop
Human on the loop
Human out of the loop

Human in the loop
In this level of automation there is decision support to the human, and there may be support such as flight control augmentation, auto-stabs, etc., but the human is 'flying' the aircraft.


Human on the loop
In this level of automation the human does not fly the aircraft directly but informs the automation what is required, and the automation then implements that requirement.



Human out of the loop
Full autonomous operation where the human may be able to intervene but without intervention the aircraft will fly as the automation wants.


Note that these states may exist in different phases of a single flight: hands-on control initially (human in the loop), then giving the aircraft to the FMS (human on the loop), then a CAT IIIb autoland (human out of the loop).



The problems occur when a pilot who has rarely been 'in the loop' and has spent the last hundred or more flights out of the loop with occasional on the loop inputs, is required to jump into the loop and take control.



Failure Modes and Exception Handling
It is unfair to blame pilots for more errors, as they usually have had to pick up the pieces when the automation 'fails'. But why does the automation fail? It is unfair on the automation to say it has 'failed': it actually behaved as designed. It is very costly to program for the 'otherwise' cases - the ones that fall outside the design conditions, where after checking for all known reasons the programmer/system designer still has something else to identify. These are rarer, more complex faults and more expensive to program for. It is easier, knowing a pilot is there, to have the automation hand the bag of bolts to the pilot with a 'get out of that' ECAM message. Or to put it another way, the software design relies on the pilot being there so that it does not have to cope with complex or rare failure modes. Passing the bag of bolts over to the pilot is a design feature that saves time and energy for the systems builders.


Certification Costs and Validation Testing
It is perfectly possible to program learning software, software that will share its learning, fully capable of dealing with aircraft damage way beyond the worst nightmares of a QANTAS 380 uncontained engine failure. There are military aircraft flying with adaptive software that can correct for the loss of control surfaces, for example, and the software works so well that it is not easy for the pilot to realize that something is wrong. So that's fine - let's put it on the next Boeing 797 or Airbus 390?


Well, no. Certification costs for complex tests of variable-response and learning software are extreme - that is, if anyone from the certification bodies could be found to agree the tests. As, by definition, the learning software will have a different response each time :eek: this is anathema to certification testing and raises all sorts of flags in regression testing. Civil systems just cannot cope with the potential risks involved.



Sensors and Sullenberger
There are already systems that spend all their time doing what every good pilot should do - but hugely faster - identifying where the automated (usually unmanned) aircraft will put down, given a list of potential effects from problems produced by an automated failure-mode analysis. Again, certification of these is prohibitive, especially in time, even when compared with the cost of lifetime employment of first officers. This is because the mathematicians managed to convince TPTB that every statement in 'safety critical software' needed to be mathematically proved. That is infeasible in a real-time network of sub-systems all capable of pre-empting each other.





So do I believe that there will be fully automated passenger-carrying aircraft? Yes, absolutely - we have some flying already that are 'optionally piloted' and can recover autonomously if the command link fails. Do I believe that there will be fully automated passenger-carrying civil aircraft? Only over the dead bodies of an entire generation of 'safety engineers' who have set standardization safety tests that such autonomous aircraft cannot ever meet, because they work differently to current crewed aircraft. I am even uncertain whether a single-pilot crew will ever pass safety certification requirements.



So the military will have these systems, probably for decades, before a new generation of safety engineers, perhaps with a more sophisticated approach, allows them into civil aviation. I won't hold my breath.

evansb
28th Oct 2015, 18:32
Speaking of humans in the loop...

- Velcro fastened shoes are safer than laced shoes.

- Velcro can be re-fastened quicker than laces.

- Velcro failure (breakage) rate is one-quarter that of laces.

- Velcro fastened shoes have saved lives by reducing the risk of falls by senior citizens, which leads to fractured hips, which leads to fatal hospital-borne infections.

- I will never wear Velcro fastened shoes.

framer
28th Oct 2015, 18:57
That's incredible! I'm off to buy some Velcro shoes right now! They will go well with my cords and I might live to 104!

Resar40
28th Oct 2015, 19:21
I'm not sure if this work has been generally seen here, but it is well worth a read; https://mitpress.mit.edu/sites/default/files/titles/free_download/9780262016629_Engineering_a_Safer_World.pdf

Mesoman
28th Oct 2015, 21:22
"An automated control system may well have opted for heading for the nearest airport if the figures indicated it was possible to glide there. The automated system might then be caught out by windshear nearing final approach and landing, because that was a factor it was unable to detect in advance. So the aircraft could have landed short and hit buildings killing everone aboard and causing casualties on the ground. A human pilot will at least consider the possibility and additional risks of windshear and act accordingly.

Automation can only act in accordance with the information it receives from its sensors. It cannot autonomously consider the possibility of events for which there are no data, so it is pointless and impossible to plan ahead for every eventuality."

This is not correct, although perhaps right for a non-AI system. The quote above responds only to the old-fashioned style - complete detail programmed by humans - and only then where the humans left out factoring in windshear and obstructions.

AI stands for "artificial intelligence" for a reason: an AI system does not simply reproduce what a programmer intended. A "strong AI" system is best thought of as an intelligent creature, not just a collection of rules. A "strong AI" system can be trained, and it can draw inferences and make deductions. In other words, it can think - certainly today not in the same way as a human, but far differently from a simple programmed control system. It may use simulated neurons ("neural nets") or other technologies - or, more likely, combinations.

Today, strong AI is not at the point where it can replace a pilot's judgement. It may never be, but there is a good chance that it could be. Strong AI, coupled with the power of ordinary automation (sensors, actuators, physics calculations, decision trees, etc.), may some day very well exceed the capacity of the very best pilot. I think that day will come. I think the strong AI problem is harder for automated cars than for aircraft, and the push for autonomous automobiles is very strong. Cars are far less complex, but they encounter, on a routine basis, a very wide variety of situations where physics is only the start of the problem - for example, dealing with a ball that bounces into a residential street.

In such a world, one might take a strong AI system and literally train it the way you would train a human pilot. But, once trained, it can be replicated - the training for one "pilot" produces thousands of immortal pilots. More likely, a whole lot of the flight smarts would be pre-coded as rules, with the strong AI there for the overall management and unexpected scenarios.

G0ULI
28th Oct 2015, 23:57
I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?

"Sorry folks, the computer says no."
"We haven't got a clue why it is refusing to fly, come back tomorrow."

The problem with neural networks is that their reasoning is not readily transparent. They may come up with the right answers, but the process can be so complex that it is impossible for humans to check how they arrived at the answer. This is already an issue with computer solutions to complex mathematics problems. They give a solution that is so complex and detailed that it would take several human lifetimes to verify the answer (without using more computers to cross check the workings of the first).

Mesoman
29th Oct 2015, 01:52
"I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?"

Retrain it? :) Seriously, if it made that assessment, it would be prudent to find out why.

"The problem with neural networks is that their reasoning is not readily transparent."
That is certainly an issue. How do you certify it as safe?

I think, however, that this will eventually be solved. You can design systems to give information on their reasoning, and I suspect that will be required. In other words, the system would need to be able to describe and justify its reasoning, just like a trainee pilot.

I don't think that opaque strong AI (deep learning) systems will be accepted for executive reasoning without a huge amount of evidence that they are correct. They will be used very soon as part of sensor systems (at least for automobiles), as they offer perhaps the best hope for problems like understanding images.

Tourist
29th Oct 2015, 06:53
Ian W

Very good post.

I agree that the challenges are not technical.

The problems are in certification and legal issues, plus public perception.

The thing that will have a very large effect on all these areas is the nascent autonomous cars that are now starting to appear.

The legal issues are the same even if different in scale. Who is responsible if it crashes?

The certification, the same. I think the thing that will happen is that the military will fly transports around autonomously for a while, and as evidence mounts for their safety advantages, the certification industry will be forced to evolve in the face of evidence.

Whether you understand it or not, safer is safer.

I think once people get used to their cars driving them around, the plane is not such a stretch...

slast
29th Oct 2015, 10:22
I wonder how airline managers would react when the AI system refuses to fly because it assesses the conditions as being sub optimal for profit or safety?
That may be a more serious issue than it appears! It was certainly the case that "work to rule" (strict interpretation of all procedures etc.) was/is a pretty effective weapon in industrial disputes. In the normal world people use a surprising amount of discretion and interpretation to keep things running. Would strict application of all the rules cause things to grind to a halt?

Unlike a factory or other industrial enterprise where the management may be able to control or at least heavily influence nearly all the variables that affect its production, airline operations seem very much subject to disruption by factors that are not within management's control and require the exercise of discretion by "operatives" such as crew members etc.

It would be interesting to speculate how that can be built into an automated system, and who will take responsibility for the correctness of its application.

alf5071h
29th Oct 2015, 11:24
Instead of asking ‘can’ automation manage unique events, we might consider ‘if’ we should use it, which might help clarify the critical issues.
Many views seek to eliminate ‘error’, yet the vast majority of operations tolerate error; humans adapt and successfully manage the variability in normal operations. There are many risks in seeking to eliminate ‘error’. We have scant understanding of the mechanism of ‘error’, where in some views the process of success and failure are the same, the difference only being decided by outcome.
How might fallible humans understand the very fallibility which makes them human? And similarly, how can we be assured that any solution will be infallible, or particularly that managing ‘error’ outcomes will not undermine the required successful ones?

One obstacle in our thinking is in applying the absolute (one or zero) view of automation to humans. Humans are not absolute; human advantages are in adaptability (reconfiguration), which might be better equated to probabilistic behaviour. No course of action is perfect, but it is normally good enough; this requires judgement.
Safety involves a balance, judging what is good enough in each and every situation, but often without adequate information or measure of what is ‘good’.

The human should not be seen as a hazard, but as a help, having unique inspiration and judgement which contributes to the successes in operation.
Instead of attempting to totally eliminate the undesired outcomes, we should promote the desirable aspects, the behaviours, thoughts and actions used today, and understand the situations in which they apply. If there are more good outcomes then there should be less opportunity for bad ones.

There are indications that the use of automation detracts from the learning processes; although the overall level of safety has improved with automation, the human contribution has not, and in some circumstances their effectiveness might be degraded.

There is also an important distinction between automation and technological aids, where the latter can encourage thought with improved awareness and decision making. EGPWS, ACAS, and Windshear Warning all use technology (not automation) to improve awareness and error detection. There are good arguments for automation in the event of inaction, but there are also society's influences on safety – who pays if inappropriate activation hurts someone (a litigious culture, an artefact of being safe)? – thus risk-averse management prefer to leave the responsibility with the sharp end.

It is important to consider the blunt end of operations; many of the contributors to unique events have ‘blunt’ origins. ‘Unique accidents’ normally involve many contributing factors, each necessary, but none in isolation sufficient. Thus identifying and uncoupling the conjunction of factors could be of greater benefit to safety and easier (cheaper) to implement than focusing on replacing the human with costly or untimely automation.
Blunt-end activities might be more receptive to technology and automation - databases, pattern matching, and the essential element of reduced time - but these still require human involvement in choosing what and when to implement.

The objective of change should be to promote the human advantage vice constraint or replacement, particularly if the latter results in the management, regulators, and manufacturers becoming the new sharp end, focussing on their fallibility - then back onto the merry-go-round.

In order to progress we must look at the perceived problems and potential solutions in new ways; we have to change the manner of our thinking, not automate it.

Tourist
29th Oct 2015, 11:45
There is also an important distinction between automation and technological aids; where the latter can encourage thought with improved awareness and decision making. EGPWS, ACAS, and Windshear Warning all use technology (not automation) to improve awareness and error detection.

This is very true; unfortunately you are taking the wrong lesson from it.

EGPWS, TCAS and windshear warning are near-flawless systems which lend themselves perfectly to automation.

Unfortunately, we have decided to throw in a human who has no purpose except to add errors to the system.

example.

TCAS sees a problem. It works out the RA and tells the pilot to fly it. The pilot tries to do what he is told. The pilot regularly gets it wrong (>50% of the time at a large British airline).
If the autopilot were just slaved to the TCAS system, many hundreds of people would be alive today.

I would call that "promoting the human advantage"
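
To make "slaving the autopilot to TCAS" concrete, here is a crude sketch - illustrative only, though I understand the AP/FD TCAS mode now offered on some Airbus types works along these lines:

```python
# Illustrative only: couple a TCAS RA directly to the autopilot's
# vertical-speed target instead of asking the pilot to fly it by hand.

RA_VS_FPM = 1500  # assumed minimum RA vertical speed for this sketch

def ra_vertical_speed_target(ra_sense, current_vs_fpm):
    """Map an RA sense to a vertical-speed demand in feet per minute."""
    if ra_sense == "CLIMB":
        return max(current_vs_fpm, RA_VS_FPM)
    if ra_sense == "DESCEND":
        return min(current_vs_fpm, -RA_VS_FPM)
    return current_vs_fpm  # e.g. "MONITOR VS": hold what we have

print(ra_vertical_speed_target("CLIMB", 0))      # -> 1500
print(ra_vertical_speed_target("DESCEND", 200))  # -> -1500
```

No startle factor, no reversed RA - the aircraft just flies the escape manoeuvre.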

slast
29th Oct 2015, 16:22
Hi Tourist
The pilot regularly gets it wrong (>50% of the time at a large British airline).
If the autopilot were just slaved to the TCAS system, many hundreds of people would be alive today.
You are usually pretty meticulous at providing evidence for your positions, as well as asking for it from others; can you give chapter and verse for this statement? (PM or email me if you do not want it on a public forum.)

slast
29th Oct 2015, 16:39
alf,
we might consider ‘if’ we should use it
There seems to be a significant number of people who have already made up their minds to go down that route, for reasons that satisfy THEM. However, my own suspicion has long been that it will actually prove to be a dead end for many decades to come, for fundamentally non-technical reasons. My motive for posting the question in its original form was to see if that view is supported by others who actually have better knowledge than I do.
Steve

172driver
29th Oct 2015, 17:53
Removing humans from the information loop can have catastrophic consequences. I recommend reading this article (http://thebulletin.org/okinawa-missiles-october8826) about how human intervention prevented a nuclear war. A computer would have fired the missiles....

Short synopsis: a US missile station on Okinawa got sent the attack codes during the Cuban crisis. It was only the skepticism of the commander (who wondered why some of his targets were *not* in the USSR) that saved the day. He queried the veracity of the codes up the chain of command and the error was thus detected.

Without this fine gentleman, none of us might be here to have this discussion....

Tourist
29th Oct 2015, 18:22
....and then the last 50 years happened.

You know we went to the moon since then?

peekay4
29th Oct 2015, 18:25
Removing humans from the information loop can have catastrophic consequences. I recommend reading this article about how human intervention prevented a nuclear war. A computer would have fired the missiles....
But the reason there was an incident at all was BECAUSE of a human error. The commanding officer (the Major) allegedly issued the nuclear strike codes by mistake. He was later court-martialed, according to the article.

Also, a computer probably would have summarily rejected the mistaken launch order because it did not conform to requirements (i.e., not being in DEFCON-1).

On the other hand, a different human being that night might have launched the nukes as instructed, without checking all the prerequisites.

_Phoenix
30th Oct 2015, 03:20
Exciting video, but would you:
- buy a self-driving car without wheel and pedals installed?
- read a book comfortably in the back seat while the car runs on a two-way country road at 60 mph?
- allow this car to drive your kids to school?
http://www.youtube.com/watch?v=9CoyKEttxNk

peekay4
30th Oct 2015, 03:48
Tesla's brand new "autopilot" self-driving mode has already saved multiple lives:

Tesla Autopilot Stops Uber Driver's Car Crash - Fortune (http://fortune.com/2015/10/29/tesla-autopilot-uber-crash/)

http://www.youtube.com/watch?v=9X-5fKzmy38

and we're still in the pioneering days of self-driving cars. I have no doubt self-driving cars will soon be much safer than conventional cars -- if they're not already.

_Phoenix
30th Oct 2015, 04:19
Tough to program or predict though
http://www.youtube.com/watch?v=WmQ4ICKMMBA

Standard Toaster
30th Oct 2015, 06:20
Everyone keeps giving Sully as an example of human vs automation, while forgetting that the A320 Sully was piloting never left Normal Law.

During the flight, he was assisted and, during the last seconds of the flight, he did not have any control over the airplane because it was on the edge of the envelope.

Would he be able to pull off a perfect landing if the airplane were in Direct Law? Maybe, maybe not.

Tourist
30th Oct 2015, 06:52
Phoenix


You are using the video examples as a "the computer has to be as good as this" test for it to be suitable to take the job.

Those videos are almost entirely flukes.

The sad reality is that most such events with human drivers end in bent metal and lives lost.

Why do you hold a computer to a higher standard?

Reality of human drivers

http://www.youtube.com/watch?v=_MycmdlcwK0

The other thing to take away from those videos is that in almost all the cases we saw, the human was driving at totally inappropriate speed for the conditions.

A computer can be programmed to not do that.

msjh
30th Oct 2015, 07:06
It's been a common theme over the last few decades that computers become more talented than humans in various areas of endeavour. This will doubtless continue and flying will be no different.

One example is Watson, a development by IBM. In its first public outing it beat human experts in the American quiz show Jeopardy. It's being developed now in other areas, such as a diagnostic aid for doctors, helping utilities forecast and deliver power, and so on. Most important, Watson is a learning system; it adapts as new information comes its way.

It's been reported in these fora and elsewhere that in some emergencies some pilots have been so intent on what they are doing that they miss other critical signals. This human failing is, perhaps, best illustrated by the basketball test.

https://www.youtube.com/watch?v=vJG698U2Mvo

In day-to-day flying conditions, human pilots have huge inputs not available to computers: windows, seat-of-the-pants, speech input from others (ATC reporting something flying off the aircraft at take-off). This will change: computers are increasingly adept at understanding natural language (a feature of Watson), for example, and cheap and reliable sensors are being placed everywhere.

I have no doubt that computer systems will, over the next decade or two, become so deft that pilots (and doctors and others) will find it increasingly difficult to justify their roles.

bubbers44
30th Oct 2015, 07:09
Standard Toaster: yes, Sully would have been able to do what he did in Direct Law with no problem. Any competent pilot should be able to do the flying part as well as he did. The judgement part - deciding exactly where to navigate given his altitude, speed and options for a touchdown point - he could just as easily have exercised in any Boeing product flying manually. No computer could be programmed for every conceivable condition.

RAT 5
30th Oct 2015, 08:19
I'm coming to this late in the game. Already I notice Sully's scenario cropping up in more recent posts, where a human made judgements in a unique/extremely improbable scenario that automatics could not have catered for. True; and there are others I can think of where it was a human who saved the day after the manure hit the air conditioning, unexpectedly.
RYR into CIA: on short finals, a bird strike was known on one engine; the PF initiated a G/A, and the second engine coughed and nearly died. The Captain took control and plonked the a/c unceremoniously on the runway. A/C broken, all survived. Any slower-thinking system/human would have made a burning hole in Rome.
The transatlantic glider: human error in maintenance caused the initial problem. Lack of human monitoring compounded it when the first error caused a major problem; human skill resolved it. I doubt any automated system would have saved the day.
While it might be true that humans from all links in the chain are the largest numerical cause of incidents/accidents, I wonder how many more were avoided by timely human intervention as a preventative force? Automatics, and sadly too often humans, act in only a reactive manner. This constant thinking about the future - what if I do this, then what - and choice selection at a phenomenal rate based on experience: this is what good human operators can do. Can automatics?

msjh
30th Oct 2015, 08:32
A lot has been made of this. Let's look at it from the point of view of a computer program. It would go roughly like this:

a. Can I maintain altitude? No: branch to landing routine.

b. Landing routine: which airports of sufficient size can I reach? None: branch to Forced landing routine.

c. Forced Landing routine: Of the places I can reach, which poses the least hazard? List, evaluate, sort.

d. Choose least hazardous area.

Now none of this is impossible to program with today's technology. That's not a criticism of Sully: his cool evaluation of alternatives and calm actions were outstanding.

But it is entirely possible to build computer programs that would perform no worse.
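
Spelled out as deliberately naive, self-contained code (every input below is invented for illustration), the routine reads:

```python
# Naive sketch of the a-d branching above; all data invented.

def choose_action(can_hold_altitude, reachable_airports, landing_areas):
    """landing_areas: list of (name, hazard_score); lower score = safer."""
    if can_hold_altitude:
        return "continue flight"                        # step a
    if reachable_airports:
        return "land at " + reachable_airports[0]       # step b
    ranked = sorted(landing_areas, key=lambda a: a[1])  # step c: evaluate, sort
    return "force-land at " + ranked[0][0]              # step d

areas = [("Hudson River", 2.0), ("parkway over built-up area", 7.5)]
print(choose_action(False, [], areas))  # -> force-land at Hudson River
```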

Standard Toaster
30th Oct 2015, 08:50
Standard toaster. Yes, Sully would have been able to do what he did in direct law with no problem. Any competent pilot should be able to do the flying part as well as he did.

How can you be so sure?
In Direct Law the plane behaves differently from what the pilot is used to, so...


The judgement part on deciding exactly where to navigate to given his altitude, speed and options for touchdown point he could just as easily have done in any Boeing product flying manually.

Once again, maybe, maybe not.
The matter of fact is that in Sully's case the protections were activated. If they had not been, the plane would have stalled. Or did he fly in a way that forced them to activate because he knew he was in Normal Law?


No computer could be programmed for every conceivable condition.


And they don't have to be programmed to respond to every conceivable situation; they just need to be programmed to stay out of those situations in the first place.
What most people fail to understand (or simply don't want to) is that the humans in the cockpit are responsible for the crashes most of the time. The human is the weakest link, that's it. And it's not only in flying, but in driving and so on.

And to the opponents of automation who insist it has to be perfect in order to replace the human: that's impossible, but isn't it enough for it to be much better?
Let me give you an example: the self-driving car. If all cars were self-driven, what would the accident rate be? Extremely low. But would it be zero? No, because that's impossible. So, if the rate is not zero, is that a reason to rule out the technology? Of course not.

The same applies to aviation. When automation is discussed, there is a trend to give examples of the exception, when the human "saved the day", not the norm, when the human caused the crash.

Regards

stilton
30th Oct 2015, 10:14
There's a salient point that the robot lovers are missing, and it's quite simple:



Automation is designed, programmed and built by humans, who are not and never will be perfect, so automation itself can never be either.



It's quite alarming how many 'pilots' place so much faith in it.

deptrai
30th Oct 2015, 11:15
I think we will soon see more automation, like TCAS RA integration with flight automation (which already exists, e.g. in Bristow's EC225).

True "artificial intelligence" is on the US Navy wish list for their future F/A-XX, probably some kind of decision aid, but it's not yet clear how or what exactly. They have the budgets though, and aren't constrained by certification issues.

TCAS began as a decision aid, and will get automated. I could imagine similar developments in other areas of civil aviation, eg enhanced awareness / moving ground maps / synthetic vision for taxiing could at some point get interfaced with brakes.

_Phoenix
30th Oct 2015, 11:23
Tourist,
The other thing to take away from those videos is that in almost all the cases we saw, the human was driving at totally inappropriate speed for the conditions.
A computer can be programmed to not do that.

Yes, speeding is the principal factor in accidents.
But your fully automated car would not necessarily protect you if another car smashed into you.
All cars should be equipped with a speed limiter. It could be overridden, but a ticket would automatically be issued.

Old Carthusian
30th Oct 2015, 11:46
It's an interesting debate, but a historical approach clearly indicates that more automation leads to more safety. One examines the accidents due to pilot error, and as each function is taken out of the pilot's hands they reduce. I am also interested by the assertion that pilots are better suited to deal with totally unexpected incidents. This really qualifies as blind faith. There is nothing in the record to indicate that pilots are indeed uniquely qualified to deal with such situations. In fact, the evidence indicates that they are singularly ill-equipped to handle them. That being said, totally automated flight is still some way off, but one can indeed look forward to it.

msjh
30th Oct 2015, 12:09
There is a salient point that the anti-computerisation lobby is missing.

It's that computer systems that learn can learn faster and more rigorously than any human. Combine that with an ability to process far more data, far more swiftly, and most human roles are at risk.

macdo
30th Oct 2015, 12:21
Having watched my A330 autothrust spectacularly fail to deal with some mountain wave activity last week, the answer is a definite NO!

Volume
30th Oct 2015, 12:50
You mean like
... or don't hold an A330 in a stall until it hits the water?
I'm actually struggling to think of an accident recently where a computer wouldn't have been better.
Actually it was the computer, not the pilot, trimming the aircraft full nose up... No pilot ever did this before.
A computer alone would not have caused this accident.
A pilot alone would not have caused this accident.
It required some misunderstanding on both sides to cause it.

Which probably is part of the issue. A mix of both is not the best solution; however, there are still situations which pilots alone have demonstrated the ability to handle, while computers have not. So a computer alone is no option yet.
Look at the accident statistics of pilotless aviation systems: they are by far inferior (currently). The question however was: can automated systems one day deal with any event? In the future it will mainly be a question of how much I am willing to pay for computers and which failure rate I will accept.

pax britanica
30th Oct 2015, 13:00
msjh
You make an excellent point which reinforces my rather sarcastic post about three pages back. This said, the threat to pilots from computers is not them taking over the flight deck but taking over all the jobs that justify business and first class air travel.
No more bankers - dead easy to replace them; no more lawyers - same; no more financial 'professionals'; in fact, no more jobs at all for professionals who just talk and do not actually do anything.

The jobs that cannot be replaced, of course, are plumbers, installation techs, domestic cleaners, etc., but with no well-off middle class to pay them anymore, even those people are no longer needed. Pretty bleak world.

Tourist
30th Oct 2015, 15:25
Having watched my a330 autothrust spectacularly fail to deal with some mountain wave activity last week, the answer is a definite NO!

Exactly!

Just because an A330 is very old tech is no reason not to base all assumptions on its abilities, despite the fact that technology has moved on......

No, wait.....:suspect:

Tourist
30th Oct 2015, 15:29
There's a salient point that the robot lovers are missing, and it's quite simple:

Automation is designed, programmed and built by humans, who are not and never will be perfect, so automation itself can never be either.

It's quite alarming how many 'pilots' place so much faith in it.

Again, why are you attempting to hold the computers to a higher standard than human pilots?


Nothing will ever be perfect; I'm sure some of the early versions will have horrendous accidents. This is true of all new tech, yet somehow we always manage to advance through the teething troubles.

Out of interest, does your car have ABS?

Early ABS was pretty poor - a pretty good driver was better. Modern ABS? There is a reason it is banned from Formula 1.

Perfect enough.

Tourist
30th Oct 2015, 15:30
Actually it was the computer, not the pilot, trimming the aircraft full nose up... No pilot ever did this before.
A computer alone would not have caused this accident.
A pilot alone would not have caused this accident.
It required some misunderstanding on both sides to cause it.


No, the computer knew it was stalled all the way down.

Tourist
30th Oct 2015, 15:39
I've said this before, but I think it's worth reiterating.

I'm not a "computer lover"

I gave up an airline job flying airbus because I didn't enjoy the feeling of uselessness/interfering.

I now fly an aircraft with no automation.

It needs me. I far prefer it.

I would like to be proved wrong and for it never to happen - in fact I'd like an excuse to go back to steam-driven cockpits for all aircraft - but I hear no valid arguments that persuade me on the technical front.

llondel
30th Oct 2015, 16:28
When doing a forced landing, is the aircraft equipped with sensors to allow it to spot the optimum spot to land given that there may be traffic on the ground? In picking which patch of water, would the computer be able to see all the boats/ferries/whatever? If deciding to land on a highway, could it do its best to pick an empty patch and be mindful of lamp posts and other street furniture?

As for the transatlantic glider, one of the issues with that was that it was necessary for the flight crew to do all the fuel calculations. I understand that the software was modified after that incident so that it flags an alert if the apparent rate of consumption is higher than can be explained by the engine settings.

Having the computers run checks and at least ask "are you sure?" would prevent some things happening. Wasn't there one crash where the autopilot got programmed for a constant rate of climb and did its best, right up until it stalled? Ideally it should have queried whether the programming was sensible, so the pilot could review what he'd just asked for and why the system didn't think it was a good idea.
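
A toy version of that fuel cross-check (my own illustration - I don't know how the actual modification was implemented) is simple enough:

```python
# Toy fuel-leak monitor: compare actual burn with what the engines explain.
# Threshold and flow figures are invented for illustration.

TOLERANCE_KG_H = 300.0  # allowable mismatch before alerting

def fuel_check(fuel_used_kg, elapsed_h, engine_flows_kg_h):
    """Alert if apparent consumption exceeds the commanded engine flow."""
    apparent = fuel_used_kg / elapsed_h
    expected = sum(engine_flows_kg_h)
    if apparent > expected + TOLERANCE_KG_H:
        return "FUEL DISAGREE - consumption exceeds engine settings"
    return "consumption consistent with engine settings"

# 9000 kg gone in 1.5 h (6000 kg/h) vs 4800 kg/h commanded -> alert.
print(fuel_check(9000, 1.5, [2400, 2400]))
```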

Tourist
30th Oct 2015, 16:55
When doing a forced landing, is the aircraft equipped with sensors to allow it to spot the optimum spot to land given that there may be traffic on the ground? In picking which patch of water, would the computer be able to see all the boats/ferries/whatever? If deciding to land on a highway, could it do its best to pick an empty patch and be mindful of lamp posts and other street furniture?



Yes, the technology exists, has been trialled (see my link previously to the autonomous trial Blackhawk)

Incidentally, it could do it in the dark or IMC too....

Re forced landings, you get what you are given.

If there had been boats on the river, Sully would have hit them. His choices were limited.

Seabreeze
31st Oct 2015, 04:31
Designers and programmers make all the same types of errors that pilots do. The big difference is that the pilots' own lives are at stake. Design and programming problems can never be fully overcome, so onboard pilots will always remain essential for pax ops, IMHO.
SEABREEZE.

Tourist
31st Oct 2015, 05:19
Sea breeze

Do you have any evidence that that is true whatsoever? Even a tiny bit of evidence to support it?


Follow that thought to its logical conclusion.


Humans designed the wings too.....


If humans make errors, and we all agree they do, why is it an advantage to have humans on board when we accept they are going to make more errors?

One of the good things about an automated aircraft is that at least the same error won't keep occurring. They will all have perfect recall of previous mistakes and what didn't work.


Having your own life at stake makes you care more. Caring more does not make you better at doing your job.

Having your life at stake can lead to fear and stress.

Up to a point stress is good....

harpf
31st Oct 2015, 07:39
Assuming 50,000 pilots flying for flag carriers at $200,000 per year, the annual pay bill for airline pilots is $10 billion.

At current crew pay rates, 30 billion dollars of R&D is a reasonable investment to remove pilots from the cockpit.

As far as the cost of hardware goes, it's a wash - removal of TSO displays etc. covers the cost and weight of the extra computers. The communications bandwidth and security issues are being addressed outside the aviation community and are on the way via new SATCOM networks.

The question is how, when and where - people are running the numbers and drafting schedules today.

Timelines being developed suggest 12 +/- 5 years for airfreight; add 10 +/- 5 more for PAX.
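
The payback arithmetic behind those numbers, for anyone who wants to check it (all figures are my assumptions above, not industry data):

```python
# Back-of-envelope payback on the assumed figures.
pilots = 50_000
pay_per_pilot_usd = 200_000
annual_payroll = pilots * pay_per_pilot_usd  # 10_000_000_000 -> $10bn/year

r_and_d_usd = 30_000_000_000
print(r_and_d_usd / annual_payroll)          # 3.0 -> pays back in ~3 years
```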

KiloB
31st Oct 2015, 09:58
Airbus would seem to have the greatest amount of experience in this area, and they don't seem to think automation is all that dependable. Hence Airbuses being relatively quick to announce "you have control" and change to one of the other "Laws" if "Hal" doesn't like the data.

KB

wilsr
31st Oct 2015, 10:41
This thread is so similar to those in other media about self driving cars.

There's so much talk about the "programming" of the computer systems, but it's not programming that's the problem.

The real problem is that artificial intelligence isn't actually about programming, deep down. Given that no-one agrees how to define intelligence, or consciousness, it's difficult to see how a truly artificially intelligent aircraft can be envisaged at present.

And if real AI CAN be built, it opens a whole new can of worms anyway.

MainDude
31st Oct 2015, 10:52
As a pilot and instructor myself I would say yes, pilots can be replaced by good technology.

For any abnormal event, we train pilots to follow procedures. Some abnormal events have defined procedures; for others you need to make it up as you go, but based on a structured process of elimination. No doubt a machine would do a great job if programmed properly.

Anyone who says it's not possible lacks imagination!

harpf
31st Oct 2015, 11:13
Keep in mind that current designs being looked at are remotely piloted with autonomous operation as an emergency condition vs HAL acting as PIC.

The first step is to automate the QRH; the second is a data link system with a probability of lost link < 1x10^-6. The chance of having a lost link together with an event that cannot be solved by the automated QRH should be on the order of < 1x10^-12.

When the data link goes down, the aircraft executes the lost-comms procedure as defined in the AIM and does a CAT III landing per the AFM/AIM as well.
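
The 10^-12 figure simply treats the two failures as independent, so the probabilities multiply - a strong assumption worth flagging, since one event (severe weather, say) could degrade the link and create the emergency at the same time:

```python
# Independent-failure arithmetic behind the quoted targets.
p_lost_link = 1e-6   # probability of losing the data link
p_unhandled = 1e-6   # probability of an event the automated QRH can't solve
print(p_lost_link * p_unhandled)  # 1e-12, valid only if truly independent
```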

stilton
31st Oct 2015, 12:08
Pilots that believe automation can replace them are probably right.

Tourist
31st Oct 2015, 12:15
Airbus would seem to have the greatest amount of experience in this area, and they don't seem to think automation is all that dependable. Hence Airbuses being relatively quick to announce "you have control" and change to one of the other "Laws" if "Hal" doesn't like the data.

KB

No.

The "current" airbus types are all 80s tech.

They were not designed to be autonomous, so they are not autonomous.


Honda makes lawnmowers that you push.

Because they are for pushing, they don't make them capable of autonomy.

Honda is capable of making an autonomous lawnmower.

Tourist
31st Oct 2015, 12:17
Pilots that believe automation can replace them are probably right.

I don't think automation can replace me as a pilot yet.

I do think that automation can replace me in my role as an airline pilot.

Tourist
31st Oct 2015, 12:19
This thread is so similar to those in other media about self driving cars.

There's so much talk about the "programming" of the computer systems, but it's not programming that's the problem.

The real problem is that artificial intelligence isn't actually about programming, deep down. Given that no-one agrees how to define intelligence, or consciousness, it's difficult to see how a truly artificially intelligent aircraft can be envisaged at present.

And if real AI CAN be built, it opens a whole new can of worms anyway.

Nobody involved is trying to make an AI.
Neural networks etc yes.

The last thing anyone wants is an AI. It might get bored. Or angry.


You are correct though that it is very much like the chat about self driving cars.

In both cases there are legions saying it cannot and will not ever be done, and in both cases people are going ahead and doing it anyway.

Tourist
31st Oct 2015, 12:21
Keep in mind that current designs being looked at are remotely piloted with autonomous operation as an emergency condition vs HAL acting as PIC.


That is just not true.

Why did you even post such a statement without bothering to do 5 mins google?

staplefordheli
31st Oct 2015, 13:10
Whilst automation can vastly improve safety in any form of transportation, in an unsterile environment - such as roads, most current railways and passenger-carrying flight - this is never going to be fully achieved by current technologies. The human eye and mind are simply far too complex to replace.


However, it is a positive thing if, for instance, algorithms can be run, as they currently are, to maintain safe limits of flight to avoid stalls or damaging the AC - or, in the case of railways, to remain within track speeds and braking zones for signals, reacting if the driver fails to respond in time.


This protection is now being applied to many family cars, such as the VW radar and adaptive cruise system, providing collision warning and intervention at low speed to avoid pedestrian and vehicle impacts by applying the brakes.


Where the mk1 eyeball and most human brains, especially trained ones, excel is in spotting subtle warnings of impending danger - which a machine can never do.


Example: on final approach, a taxiing AC or ground vehicle is spotted encroaching. The PIC can then decide if the danger is a threat to the AC and go around.
There are many other scenarios, such as the Hudson river, where having an experienced PIC prevented a disaster; no current technology would have pulled that off autonomously, not to mention communicating with the tower.


A recent railway example, one of many, where a driver noticed something amiss - from dangerous flood waters near a bridge, to livestock on the trackside, to a dangerous load on another passing train:
On this occasion the driver had muddy water hit his windscreen passing through Old St tunnel, north London, and he immediately contacted control. All trains were stopped and an inspection train was sent in at low speed, to find a massive piling drill bit from a building site above had penetrated the roof and was blocking the tracks. Had that been the driverless DLR, the consequences are all too easy to imagine.


If you still need a skilled human to take over, then make their workload easier and more pleasant, but never remove them completely from the equation.
I for one would never wish to fly on a pilotless airliner :eek:

Tourist
31st Oct 2015, 15:39
Whilst automation can vastly improve safety in any form of transportation, in an unsterile environment such as roads, most current railways and passenger-carrying flight this is never going to be fulfilled by current technologies. The human eye and mind are simply far too complex to replace.


Before you made this post, did you consider doing five minutes of googling?

If you had bothered, you would have found a host of fully autonomous railways that have been operating worldwide for decades.

You would also have very quickly found the large number of autonomous cars currently on our roads or in trials from serious manufacturers. These companies have invested billions in the technology, and they don't mess around. If they think they can do it, then unless you are astonishingly knowledgeable in the field, you're a brave man to bet against them.

https://www.youtube.com/watch?v=Ol3g7i64RAI
http://www.dailymail.co.uk/sciencetech/article-2535789/Self-driving-cars-just-got-cool-BMW-trials-high-speed-prototype-slides-round-corners-skill-racing-driver.html
https://www.youtube.com/watch?v=XZxZC0lgOlc
https://www.youtube.com/watch?v=HdSRUG4KTPA&feature=youtu.be
http://www.cnet.com/news/mercedes-benz-unveils-luxury-concept-self-driving-car/
https://en.wikipedia.org/wiki/Google_driverless_car
http://www.marketwatch.com/story/elon-musk-tesla-almost-ready-to-go-driverless-2015-07-31


Various serious players in the military aviation scene are working very hard to field autonomous military aircraft. The challenges of military aircraft are an order of magnitude higher than those of passenger-carrying flight.
BAe are, however, flying a trial passenger aircraft.

The human eye is amazing, but it is not in the same ballpark as the wealth of sensors available to a computer: EO/IR, radar, LIDAR, etc. They can see in IMC.

More importantly, it will actually be looking every moment of the flight, rather than reading the paper like a human pilot. If airliner manufacturers intended or relied on the pilot to be looking out, they would put them in a cockpit with decent visibility, like a fighter's.

StuntPilot
31st Oct 2015, 18:06
On a philosophical note:

All computers are built of circuitry that can perform a discrete set of operations, which can be reduced to the NAND (logical not-and) operation. Rigorous mathematical analysis shows that systems built from arbitrary numbers of such parts can compute a set of mathematical functions. So in a sense, all computers using a sufficiently rich 'language' (containing if-then and a loop instruction) are equally powerful in terms of what can be computed in finite time. Hilbert, a century ago, along these lines formulated a mathematical challenge: to prove that a simple formalizable system / 'computer language', arithmetic, is free of internal contradictions.

Kurt Gödel came up with an answer to Hilbert's problem that surprised everyone: he proved that for every consistent formal system rich enough to express arithmetic (such as a description of the reality of aviation, formulated in computer languages) there are theorems that are true but that are not algorithmically provable within the formal system. This means that there always exist correct conclusions about how to fly a plane that AI cannot draw.

An interesting point is that for us humans, using 'insight', it is possible to 'see' that these theorems are true. Gödel's theorem puts a fundamental limit on what AI can do, even in a world where software is perfect.

To get a flavour of Gödel's theorem:
http://isites.harvard.edu/fs/docs/icb.topic1470808.files/boolos.pdf .
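
Stated compactly (a standard textbook formulation, in the Gödel-Rosser form, added here only for flavour):

\[ T \supseteq \mathrm{PA},\ T \text{ consistent and recursively axiomatized} \;\Longrightarrow\; \exists\, G_T :\; T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T \]

i.e. any such system contains a sentence it can neither prove nor refute.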

Tourist
31st Oct 2015, 18:57
Stuntpilot


1. Nobody is trying to make an AI fly a plane.

2. Do you honestly believe that in 1000yrs there will not be a computer smarter than us, because that is the only conclusion to be drawn from your statement about fundamental limits?

3. Is your point of view in any way influenced by belief in some brand of sky fairy perhaps?:rolleyes:

dClbydalpha
31st Oct 2015, 19:17
An interesting point is that for us humans, using 'insight', it is possible to 'see' that these theorems are true. Gödel's theorem puts a fundamental limit on what AI can do, even in a world where software is perfect.

However, the point is that you don't need AI to fly a plane. Flying a plane is easy according to the laws of physics. The reason pilots must possess such exceptional skills is that humans have very little ability to interact directly with those laws; everything in aviation is limited by the need to give a pilot the sensory cues necessary to effect the correct control responses. To an automated aircraft, flying would be as simple and "intuitive" as walking. FADECs already monitor, control and tune an engine at a rate and precision no human could match. This doesn't take AI, just simple transfer functions.

An automated aircraft can constantly compare what it is doing with what it should be doing. From that it knows what control authority it has and can tune its responses in line with its measured errors, learning in the way a Kalman filter learns. None of which is AI. The skies an automated aircraft would use are much more regulated than the roads an automated car has to use. The aircraft can have sensors added that vastly outperform the human senses; these already exist in other forms.
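
To illustrate the kind of "learning" I mean, here is a minimal one-dimensional Kalman update. It is a sketch only; the noise figures and readings are invented for the example:

    # Estimate a value (say, airspeed) from noisy measurements.
    # The gain K adapts each step to the current uncertainty.
    def kalman_step(est, var, z, process_var=0.1, meas_var=4.0):
        var = var + process_var              # predict: uncertainty grows over time
        K = var / (var + meas_var)           # Kalman gain: how much to trust z
        est = est + K * (z - est)            # update: blend estimate and measurement
        var = (1.0 - K) * var                # estimate is now more certain
        return est, var

    est, var = 0.0, 1000.0                   # start knowing almost nothing
    for z in [251.0, 249.5, 250.8, 250.2]:   # noisy readings, knots
        est, var = kalman_step(est, var, z)
    print(round(est, 1))                     # converges towards ~250

The filter "learns" only in the sense of continuously re-weighting its trust in prediction versus measurement, which is exactly the point: no intelligence required.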

The question is not can a fully automated aircraft be made ... IMHO it can be implemented using current technologies. The question is what would we gain by doing it? I suspect very little in the civilian world. However optimising human-automation interaction could yield massive benefits.

RAT 5
31st Oct 2015, 19:51
Computers are great for certain activities, e.g. robotic car building with NC machines. They can reproduce perfection 24/7, in a perfect world. No unforeseen circumstances. If they encounter one they shut down and alert the nearest supervisor.
Humans have a flexibility second to none; also the capacity to make mistakes, but the clever ones then correct them. Some mistakes may be caused by an unexpected event; they try a solution, feedback loop, and then use ingenuity to try another possible solution; feedback loop again.
In comparing computers to humans I always look at sportsmen, of which I was one, and compare their reactions. I played squash and reckon I could beat a computer. I watch a cricket fielder chase the ball, slide, grab it, turn and, in one motion without standing up, throw the ball directly onto or over the stumps with pinpoint accuracy in a crosswind. The speed of the human brain in computing all those angles & forces, while twisting & turning, is IMHO astonishing. When a computer + machine can do that reliably, maybe I'll think again.

skyship007
31st Oct 2015, 19:54
It is possible to develop an adaptive, semi-intelligent flight control program capable of dealing with unique events such as an uncontained engine failure or structural failure. BUT, and that is a big but, the cost of developing such a high-tech certifiable program would be biblical, and it would also take many years.
There are only about half a dozen companies in the US or EU capable of writing, testing and developing the type of programs that can think for themselves in terms of dealing with non-standard emergency situations for which no checklist can be written in advance. All of those high-tech companies are very busy working for various defence or automotive sector customers, doing such things as developing driverless cars or intelligent fire-and-forget missile systems.

Most airlines can't even afford to fit a certified 3-channel autoland system, so it is rather unlikely that they would be interested in paying the hundred billion plus to develop either an integral system OR an android that sits in the pilot's seat. I suspect that an android pilot adapted from a military or car-industry unit is more likely than a built-in system.

"priority now should be the elimination of human pilots from the system via automation"

For the salaries on offer these days, great idea. Can't happen soon enough.

On a more serious note, is the artificial intelligence refined enough to accommodate that level of automation and how soon could it be incorporated into today's technology?
As an aside, I don't think I'd be that comfortable getting onto anything, especially something leaving the ground, that doesn't have a human behind the wheel (other than the train at Disney World). The thought is still unnerving to me, and I can't imagine the average afraid-to-fly-in-the-first-place passenger feels any different.

Willie

Tourist
31st Oct 2015, 20:08
RAT 5

That is an interesting comparison, however the sports field is where a human is in his or her optimum environment.

In an aircraft the human body is most definitely out of its ideal environment. Our sensory organs are ill-suited. Our balance organs have no way to give us situational awareness; in fact they give us false information, which we must learn to suppress while instead monitoring instruments that require conscious thought rather than the subconscious action of the sports field.

If we were birds you would have a more valid point; however, even they have the same problem as a human when it comes to damage.

We learn using muscle memory and by practice. A change in our fitness level or mobility takes us a long time to adapt to. By contrast, computers have almost instantaneous feedback loops and adapt very quickly, as shown by the NASA neural-net trials for control adaptation after damage.

The simple fact is that computers are faster at computing angles etc.

This is obvious when you consider the type of fully autonomous aircraft that has been flying very successfully for a long time.

https://www.youtube.com/watch?v=EBF-0OxpW6Q

Fancy controlling those angles by hand?

dClbydalpha
31st Oct 2015, 20:24
As Tourist points out, humans give the illusion of carrying out massive quantities of calculations. In most cases it is muscle memory in response to repeated stimulus. In fact I can personally attest that an injury I can hardly perceive has affected my ability to do a repeated task, in such a way that my brain doesn't recognise it until another sense reveals the error. Human senses are easily tricked into causing the wrong response.

An effective automated system has a large number of sensors and effectors, a number of hierarchical goals, a series of transfer functions and an adaptive error matrix. It doesn't have any concept of "trusting its senses or using its instinct"; it simply acts, measures the effect, judges how well it did, then acts again. This it can do thousands of times a second.
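
As a toy illustration of that act-measure-judge-adjust cycle, here is an adaptive proportional controller. It is only a sketch, and the plant response and numbers are invented:

    # Drive a value towards a target, adapting the gain from the
    # measured effect of each action (act -> measure -> judge -> adjust).
    target, state, gain = 100.0, 0.0, 0.5
    for _ in range(50):                      # thousands per second in a real system
        error = target - state               # measure
        state += 0.8 * (gain * error)        # act (0.8 = plant response, unknown to controller)
        if abs(target - state) > abs(error): # judge: did we make things worse?
            gain *= 0.5                      # adjust: act more gently next time
    print(round(state, 2))                   # settles at the target

No concept of trust or instinct anywhere, just a loop that measures its own errors.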

The reason automation hands over to a human when things "go wrong" is that we build it in. I don't believe that this is always the best option. But those are the rules by which we currently play.

RAT 5
31st Oct 2015, 20:29
I did watch a fascinating lab demo of automatic drone flying. First there was a maze, then a 3D maze and then a labyrinth to negotiate. OK, it had been pre-programmed. It was brilliant. Then they performed formation square dancing, and then in 3D. Astonishing. But it had all been pre-programmed in a perfect environment. Calm winds etc. I would like to see it performed with some air-wave activity to see if they could adapt, both laterally & vertically. Adapt, that's what we are good at. We are mission-orientated and that can be a good AND a bad thing. Fixated on success, get-home-itis, but also adapting to unforeseen circumstances.
I've no hesitation in saying an a/c can get airborne from a large runway at A and fly a profile and land at B 1000's nm away. Heck, they went to the moon and back, and I wonder how much stick time Buzz logged. Perfect world, perfect day, no problem. Now add the endless what-ifs and I doubt there is a computerised solution for all of them. Until there is, and with a HUGE degree of certainty and reliability, I expect there to be Captain Kirk, Spock & Mr. Zulu around for yonks to come.
Which XAA will be the brave one to authorise such pilotless ops? It would have to be worldwide agreement, with worldwide standards. Heck we can't even get a standard size of cabin bag; and look how long it took to get common EU FTL's, and even then there are dispensations etc. There are not even worldwide FTL's or even a/c certification specifics.

peekay4
31st Oct 2015, 22:33
Which XAA will be the brave one to authorise such pilotless ops? It would have to be worldwide agreement, with worldwide standards. Heck we can't even get a standard size of cabin bag; and look how long it took to get common EU FTL's, and even then there are dispensations etc. There are not even worldwide FTL's or even a/c certification specifics.
I would guess the first fully automated commercial aircraft will fly cargo routes over remote/unpopulated locations, until enough experience / data / improvements are gathered (probably after some years) to move forward to human flights.

tdracer
1st Nov 2015, 06:04
I've been intentionally staying out of this debate, but it's late Saturday night - Halloween - I've had several "adult beverages" - and what the heck :E
They were not designed to be autonomous, so they are not autonomous.

This is something that is all too often overlooked - just because automation didn't do something it wasn't designed to do doesn't mean it couldn't have done it had it been so designed.
The progress in autonomous vehicles over the last 10 years has been astounding. By design, aviation tends to be slow to adopt new technologies - there is too much at stake to do otherwise. But it will come.
While Moore's law isn't so much a law as an observation - and it has recently shown signs of slowing - it's held up remarkably well. So computing power continues to increase at an exponential rate, while humans remain pretty much stagnant.
I doubt it will happen - or at least hope it doesn't - during my lifetime (and I plan to be good for at least another 30 years :ok:), but I foresee a future where fully autonomous vehicles are so much safer than those driven by humans that driving your own car will be severely limited - if not banned outright. Basically those of us who love driving will be limited to what today we call "track days" :rolleyes:

I don't want to in any way diminish Sully's "Miracle on the Hudson" - but an all-engine power loss is actually relatively easy to design for (and not exactly an unknown, since it's happened multiple times). A computer can determine the maximum glide distance, identify suitable landing points within that distance (if any), determine the lowest-risk alternatives for a forced landing/ditching, and notify air traffic of what's going on. Oh, and do all that in less than a second.
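
The first step of that is just geometry. A back-of-envelope sketch, where the glide ratio and the list of fields are invented purely for the example:

    # Which landing options are reachable in a glide? Still air assumed.
    GLIDE_RATIO = 17.0        # distance covered per height lost; assumed value
    altitude_ft = 2800.0      # height above terrain

    max_glide_nm = altitude_ft * GLIDE_RATIO / 6076.0   # feet -> nautical miles

    options_nm = {"Airport A": 9.0, "Airport B": 7.5, "River": 2.0}  # invented
    reachable = {k: d for k, d in options_nm.items() if d <= max_glide_nm}
    print(round(max_glide_nm, 1), reachable)  # 7.8 {'Airport B': 7.5, 'River': 2.0}

Ranking the reachable options by risk is the harder part, but the reachability calculation itself is trivial and instantaneous.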
Air France 447? In the event of 'unreliable' airspeed, the autopilot was designed to disconnect and give control to the humans, lest the autopilot do something stupid. As we all know, a human pilot then did something incomprehensibly stupid and killed everyone on board. The computer could easily have been designed to perform the 'unreliable airspeed' QRH procedure - or better yet look at GPS, AOA, power setting, etc. and synthesize airspeed until ADIRU airspeed was again reliable - and we'd never have even heard about it.
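
For flavour, that kind of synthesis can be as crude as a voter that blends independent estimates. A sketch only, with invented numbers and threshold, and nothing like a real ADIRU algorithm:

    # Fall back to a synthetic airspeed when the pitot value disagrees with
    # independent estimates (GPS groundspeed corrected for wind, and an
    # AOA/power performance model).
    def best_airspeed(pitot, gps_minus_wind, aoa_power_model, tol=15.0):
        synthetic = (gps_minus_wind + aoa_power_model) / 2.0
        if abs(pitot - synthetic) > tol:    # pitot is the odd one out
            return synthetic, "SYNTHETIC"
        return pitot, "PITOT"

    print(best_airspeed(65.0, 272.0, 268.0))  # iced pitot -> (270.0, 'SYNTHETIC')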
Now, that's all within today's automation capabilities. Moore's law says that in 20 years, we'll have over 100x that capability. Today, the most dangerous part of flying is the drive to/from the airport - what happens 30 years from now when autonomous cars have a near zero accident rate but human pilot error makes flying the most dangerous part of the trip?

Tourist
1st Nov 2015, 09:15
I did watch a fascinating lab demo of automatic drone flying. First there was a maze, then a 3D maze and then a labyrinth to negotiate. OK, it had been pre-programmed. It was brilliant. Then they performed formation square dancing, and then in 3D. Astonishing. But it had all been pre-programmed in a perfect environment.

That is not actually true. It was not preprogrammed at all.

These videos make that perfectly clear.

https://www.ted.com/talks/raffaello_d_andrea_the_astounding_athletic_power_of_quadcopters?language=en

https://www.youtube.com/watch?v=MvRTALJp8DM

https://www.youtube.com/watch?v=geqip_0Vjec

https://www.youtube.com/watch?v=S-dkonAXOlQ

Note that these videos are a few years old now, and the tech has moved on a long way since.

Note also that these are low budget toys, not state of the art.

Tourist
1st Nov 2015, 09:17
Just to cheer up the naysayers....

https://www.youtube.com/watch?v=TVrxvqYlCDs


There are still some details to be worked out....:ok:

_Phoenix
1st Nov 2015, 14:30
The computer could easily have been designed to perform the 'unreliable airspeed' QRH procedure - or better yet look at GPS, AOA, power setting, etc. and synthesize airspeed until ADIRU airspeed was again reliable - and we'd never have even heard about it.
Obviously the autopilot disconnected; the computer cannot process rubbish inputs (see the scattered data in the reference). But the FAC cannot be disconnected, and the FAC "helped" with the deep stall and the pilots' confusion.
No, the computer cannot easily be programmed to deal with 'unreliable airspeed', rubbish data or unique events for which the FAC is impaired.

http://www.mediafire.com/convkey/71e1/i5s1c9pm8o80e2vzg.jpg
http://image.b4in.net/resources/2013/09/18/1379518305-gigo350.gif

dClbydalpha
1st Nov 2015, 15:34
No, the computer cannot easily be programmed to deal with 'unreliable airspeed'

Yes it can, and it was. The accepted method is to hand over to the humans. It could alternatively have been programmed to free-run using inertials for a specified period of time, cross-comparing with other sources to ensure what is happening matches the expected behaviour. But that is currently unacceptable as a strategy. In the end a relatively benign scenario was handed over to the humans with a "confusing" SA picture, while the automation almost certainly knew exactly what was happening.

The important thing is that we need to understand better the interaction between the automation and the human. Optimising this I feel would yield better results than striving for full automation irrespective.

Tourist
1st Nov 2015, 15:35
You are misunderstanding what he means.

He does not mean that an actual A330 could be programmed to do what he says.

He means that you could easily program a system to do what he says if you were building one with that intention.


i.e. the A330 handed a bag of spanners back to the pilots because that is what it was designed to do, not because there was no other option.

_Phoenix
1st Nov 2015, 16:28
dClbydalpha understood well.
But the alternative of free-running on g-load would not have saved the day, since the FAC would continue to maintain the g-load of the stalled condition while the pitch trim ran to full nose-up, as AF447 demonstrated. From that point recovery is impossible without human intervention to reduce pitch trim manually.
I completely agree with:
The important thing is that we need to understand better the interaction between the automation and the human. Optimizing this I feel would yield better results than striving for full automation irrespective.

peekay4
1st Nov 2015, 16:36
From that point recovery is impossible without human intervention to reduce pitch trim manually.

Only because it was designed that way. All airliners today are designed to augment human pilots "in the loop". And as AF447 demonstrated, the humans in the loop weren't up to the job that day.

Future aircraft may be based on a different design philosophy, elaborated previously.

dClbydalpha
1st Nov 2015, 16:45
Fairly much my thinking, Tourist.
The software in most aircraft is designed to a philosophy that corresponds with the overall architecture, driven by regulations and safety cases. To do something different usually requires a change in that philosophy, which touches a lot of the design.

The avionics are normally monitoring a vast number of sensors 20 - 50 times a second. At those intervals most things are approximately "linear" and so integrating inertials etc can give reliable results, at least in the short term, to allow stability to be retained. Automation could hold things steady before defaulting to handing over control, giving the human a chance to familiarise, assess and decide on a course of action. I am also in no doubt we could fully automate operations, with the correct airborne and ground infrastructure. But then we are swapping a set of human vulnerabilities for a set of machine ones. For me we need to learn how to overlap the strengths. I feel the AF447 disaster yields a valuable lesson about automation and human interaction.

RAT 5
1st Nov 2015, 16:53
Amazing that the A330 Atlantic Glider didn't ring the egg timer when the fuel was balanced and tell them to stop X-feeding. Then they would have realised, 30 mins later, that there was a leak and behaved differently. Such a sophisticated a/c, such a basic human error.
Mind you, the same could be said of AF 447; except the egg timer.
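
The cross-check being asked for here is only arithmetic. A sketch, with all quantities and the alert threshold invented for the example:

    # Fuel-leak check: what's in the tanks should equal what we left with
    # minus what the engines have burned (integrated from fuel flow).
    departure_fuel_kg = 46000.0
    burned_kg = 9300.0          # integral of measured engine fuel flow
    tanks_now_kg = 34900.0      # sum of tank quantity gauges

    discrepancy = (departure_fuel_kg - burned_kg) - tanks_now_kg  # 1800 kg
    if discrepancy > 500.0:     # alert threshold (assumption)
        print(f"FUEL DISCREPANCY {discrepancy:.0f} kg - check for leak, stop crossfeed")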

KiloB
1st Nov 2015, 16:58
Anyone want to comment on why the Airbus reaction to an inaccurate airspeed indication cannot be "power and pitch"? Seems simple enough.
KB

_Phoenix
1st Nov 2015, 17:05
Future aircraft may be based on a different design philosophy, elaborated previously.

Automation is only as good as program quality and processing power. In normal conditions computers are super-precise and fast, but they are a wonderful and powerful tool, nothing more. When a FAC bug surfaces (e.g. a divide-by-zero) we need human intervention to take it out of the infinite loop, to think outside the box. Only when AI can beat human intelligence, imagination and adaptability, and can reprogram itself for the unique situation, will automation perhaps no longer need a human in the loop.

Tourist
1st Nov 2015, 17:31
Phoenix

You are not listening.

Nobody wants an AI in the cockpit.

Nobody.
Seriously.

As mentioned by many on here, neural networks can do some very clever thinking.

RAT 5

How good were mobile phones in the 80s?

That is what an A330 is.

It is not sophisticated. It is stone age.

Nothing about any system on board belongs in any place other than a museum.
Unfortunately, certification requirements stifle advancement.

Tourist
1st Nov 2015, 17:36
As many on here have mentioned, we are in a very odd place in the cockpit at present.

We have automation doing some things, but some things the human has to do even when the automation is better.


ECAM

Computer tells you what to do.
You do it.
Computer sees you do it, and moves on to next item.
You are not supposed to think, just do.

You are merely an error vector.

You can mention times when you shouldn't follow ECAM, but how do you know not to follow ECAM? Because you are told not to in the book. Therefore it is just a rule that a computer could follow itself.

TCAS

Computer tells you what to do.
You do it.
You are not supposed to think, just do.
It's a simple manoeuvre that a computer would always do perfectly

You are merely an error vector.

EGPWS

Computer tells you what to do.
You do it.
You are not supposed to think, just do.
It's a simple manoeuvre that a computer would always do perfectly

You are merely an error vector.

dClbydalpha
1st Nov 2015, 20:25
Tourist - very interesting. I believe an aircraft can aviate more accurately and navigate more precisely when automated than when under manual control. It can sense not just acceleration, but also velocities, orientations and positions. It can react faster than a human and with more precision. But we don't need AI in the cockpit ... we've already got natural intelligence there. It is important for us to work on the new relationship between pilot and machine. They should never end up fighting each other but should work to their relative strengths, easier said than done I know. A machine could be programmed with gains to calculate whether it is better to spend a little time avoiding uncomfortable weather at the expense of time and fuel etc. but that's the kind of strategy best left to a human who can take a larger number of factors in to account, including physiological and emotional ones.

I bring up the point I made previously: what do we gain by fully automating air travel? Air travel is still an experience, however often we do it. Automation wouldn't take that into account. I'll give a philosophical example. A few years ago I was flying back home. The aircraft was ready at the end of the runway, when the pilot made an announcement that "... the view of the sunset from the cockpit is spectacular, I am awaiting permission to carry out two turns following our take off so I can share it with all of you." This he did, and the view of that sunset over the Alps was truly breathtaking. Even if you programmed a machine to sense a sunset, how could it make such a judgement call to enhance the experience for the passengers? We could automate the whole process of flying with a lot of investment, but why bother ... we're not short of people who want to be pilots, and pilots get it right the vast majority of the time. When it goes wrong it is usually because of the limitations of giving a pilot full SA, or of the pilot not being able to understand what the automation is doing for them at that moment. If we can crack that problem then I think it will be as good as we can get.

RAT 5
2nd Nov 2015, 08:52
I think where we're not seeing the wood for the trees, and becoming clogged up in the discussion, is this: no doubt automatics can FLY an a/c adequately and mostly as well as or better than a human. After all, now, once we've gently made an input to the elevator and slipped our earthly bonds, engaged HAL and relaxed, we then listen to a human ATC voice telling us what to do; we respond verbally and input this into HAL. HAL then, supposedly, does what we tell it to after having had an air-ground conversation. We all know this whole process could be digitised or even controlled by an earthly pilot. The a/c can then navigate itself in 4D all the way to an ILS/RNAV approach & landing anywhere we choose. A technical doddle.
The human is/will become a manager. They use intuition to make decisions about predicted circumstances, i.e. preventative action. They use experience to handle unforeseen non-technical events. They use experience to make choices when confronted with scenarios where there are a multitude of options; they have gut feelings; they have ingenuity; they have inventiveness when required; they are very good in grey areas where a computer might be black & white yes/no; they can employ finesse and dexterity and be as brutal as necessary; they can adapt to unknown/unforeseen circumstances.
Captain Kirk was a Starship manager. He decided what he wanted to do and then commanded Mr Zulu & HAL to achieve it. If Plan A needed altering he switched to plan B or C, usually after Mr. Spock had whispered in his ear. (they should have been married). Is that the way we shall go? Whatever the evolution I do not see the removal of the human manager. It might be that the pilot is less involved with the act of flying and is even more of a manager & monitor than now (almost certainly) but they will still need to be able to save the day when HAL goes AWOL.

stilton
2nd Nov 2015, 09:05
Tourist.


You are a subscriber to the illusion that with 'progress', more sophistication and processing power in computers, we are getting closer to the point where we can 'hand off' all human piloting tasks to them.


With more sophistication there are more possibilities for error, and even if we reach that technological nirvana you dream of, with a perfect robot, it doesn't matter.


There will always be faults we can't anticipate, dynamic situations that can't be programmed for that will require the judgement and real life experience of a human pilot.


Finally, if you really believe you can be replaced as an airline pilot by automation then you have no business working as one.

alf5071h
2nd Nov 2015, 09:15
Steve, as you note, many of the early contributors have made up their minds, but few explain why.
An apparently unthinking preference for automation might reflect social change: the use of Wiki and Google instead of thinking, preferring automation dependency and belief in the system, without checking, etc.

So: “… is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?”
I don’t think so; as discussed previously, human ability is limited by inherent, yet necessary, fallibility. How can we design an error-free system if we cannot understand our own errors?

Re the QF example, the warning and display systems provided the crew with the ‘best’ picture that technology could provide. The crew actions could be automated, but apart from a shorter timescale the process would still be limited by the quality and availability of sensors (as noted by previous contributors).
After landing and selecting fuel off to stop the engine, what more could an automatic system do when the engine did not stop? Automation only computes; humans reason.

“… will potential product liability issues stop the bandwagon?” Probably; but legal liability is only a small part of an ever-changing society which influences human development.

An alternative line of thought is to ask ‘why’ we should replace existing capability – use technology wisely to support humans, but not to replace them.
If we choose ‘safety’ then this requires careful thought about what safety is and what we would be attempting to improve, and why. I prefer not to define safety but to consider it as an activity; so will change affect this activity; might it upset the finely balanced state that we have achieved so far?
Whatever our views are, we require thought and explanation before choice. My thoughts would start with natural human risk-aversion; if we are to change a finely tuned industry, make only small changes first and assess the feedback.
For those choosing full automation, look for and evaluate the feedback from recent accidents; what should we have learned from them – without blaming the crew.

Mesoman
2nd Nov 2015, 15:57
"Automation only computes, Humans reason."

This is not true, and in the future computers will do even more reasoning. There are too many comments now which assume that computers only do what is pre-programmed. They can do more, and they will do more.

We don't know and cannot know how far they will get. Some very smart people (e.g. Stephen Hawking) fear that they will be able to out-reason us. I hope they are wrong.

RAT 5
2nd Nov 2015, 17:33
"Ladies & Gentlemen, this is your captain speaking. You are presently flying at ...............etc. etc. This is our first fully automated flight from XYZ - ABC. Indeed I am on the ground in XYZ controlling & monitoring your flight. I hope you are enjoying the flight and I assure you nothing can go wrong..go wrong..go wrong.. go wrong........"

dClbydalpha
2nd Nov 2015, 19:34
"Ladies and Gentlemen the cans of burning fuel either side of you are not under my control, I merely get to make suggestions to them. But don't worry in the unlikely event of something going wrong I can switch them off and I'm sure we'll make it across the Atlantic."

People very quickly adapt to the concept of handing over to automation.

Automation can manage all the tasks of aviation: it can aviate, navigate and communicate. We already have machines so unstable that they can't be flown without automation. We have automated drones that you give a mission to and let go, or that are datalinked from half way round the world. But the great advantages of automated aircraft can only be realised if the aircraft doesn't have to provide all the equipment necessary to keep humans comfortable. While aircraft carry humans, there is no disadvantage to having a pilot. As there will always be the need to carry humans in a passenger aircraft, it seems obvious to me to invest our efforts in optimising the human-machine combination rather than striving for full automation.

G-CPTN
2nd Nov 2015, 21:09
https://www.youtube.com/watch?v=rbsqaJwpu6A

neville_nobody
3rd Nov 2015, 04:22
Given how the regulators seem to struggle with what are really old-technology upgrades in aviation, I think fully autonomous aircraft are at least 50+ years away, assuming it can even be done. Don't forget that the NEW aircraft now, the 737 Max and A320neo, are 70s and 80s technology.

peekay4
3rd Nov 2015, 06:39
Neville, fully autonomous aircraft are already a reality today and already approved by various regulators to fly in controlled airspace under special AOC.

Good examples include the so called Optionally Piloted Aircraft (OPA) such as the Diamond DA42 Centaur (http://www.airforce-technology.com/projects/diamond-da42-centaur-optionally-piloted-aircraft-opa/), the Lockheed/Kaman K-Max helicopter (http://www.lockheedmartin.ca/content/dam/lockheed/data/ms2/documents/K-MAX-brochure.pdf), and the Northrop Grumman (Scaled Composites) Firebird (https://en.wikipedia.org/wiki/Northrop_Grumman_Firebird).

http://www.youtube.com/watch?v=9mFc3BhDwyE
(Aurora Flight Sciences’ Centaur Optionally Piloted Aircraft (OPA) flew multiple unmanned flights from Griffiss International Airport in Rome, New York, from June 12-15, 2015)

These aircraft can be flown from inside the cockpit, or piloted from the ground, or programmed to fly fully automated from take-off to landing. They are not "testbeds" but are all production aircraft in service today.

The K-Max notably did nearly 2,000 unmanned sorties delivering cargo for U.S. troops in Afghanistan.

They are not carrying passengers yet, but the K-Max is being pitched as a possible Combat SAR Evac (Air Ambulance) platform; i.e., as an unmanned transport to take wounded troops from the battlefield to a medical facility.

Yes, we are far away from adapting this primarily military technology to the commercial transport realm, but I don't think it will be 50+ years. As I mentioned in an earlier post, I think we'll see fully automated commercial cargo ops sooner rather than later, before proceeding to pioneering passenger flights.

John Farley
3rd Nov 2015, 11:22
In the mid 60s I was a safety pilot for the Blind Landing Experimental Unit at RAE Bedford on their Comet 3B, doing crosswind autoland trials with a component of over 30kt. To watch that system flare, smoothly remove the drift angle and squeak the wheels onto the numbers, over and over again, convinced me that automatics could achieve standards of ‘flying’ that I could not match.

I have put quotes round flying because I believe the word means different things to different people. To avoid ambiguity I suggest we separate the tasks of flying into ‘steering’ the aircraft and ‘operating’ the aircraft.

By steering, I mean controlling any flight parameter. By operating, I mean every other aspect of a flight from pre-flight preparation to selecting the appropriate flight parameters and filling in the Tech Log afterwards. I believe automatic systems are better at steering tasks while humans are better at operating tasks.

In reply to “What are you going to do when the autopilot fails?” my answer is that future automatic steering systems will not fail in a critical way. Unlike today’s autopilots which disconnect themselves in the event of a problem, future automatics will be designed to fail safe and carry on performing their functions. Just like today’s wing structures. Autoland, thanks to special certification standards, has not caused a landing accident since it was first used with passengers in the 70s. Sadly there have been quite a few steering errors by aircrew over the same period.

I am a future Captain climbing out of La Guardia when both engines fail. As the operator I decide the crisis needs a landing on the Hudson. I lift the guard protecting the Glide Landing button and press it, which tells the steering systems to set up the best glide. With my knowledge of the aircraft’s gliding performance I estimate the touchdown zone on the local area map that appears, draw the final approach track I want with my stylus, press the Glide Landing button again, and thank my lucky stars that I did not have to use skill to save my aeroplane. Just knowledge.

As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.

dClbydalpha
3rd Nov 2015, 12:59
Excellent post John Farley.

From my perspective, steering the aircraft can be readily achieved by automation. It can even cope with abnormal events. Computing can "try" something, measure the response, adjust, and try again, all much faster than a human can recognise there is even an issue.

Human operations require human operators, as we don't necessarily correspond to the same rule-set as a physical item.

While an aircraft needs to support human physiology then there is little to no advantage gained from adding the automation necessary to mimic human decisions. It is better to use a human.

Currently we appear to be designing a long way from the optimal point. We put automation on board that removes the pilot from the loop other than as an operations director, but we don't give it the authority to act fully. The pilot is mostly removed from the minute-to-minute situational awareness of what the aircraft is doing, but is suddenly catapulted from monitoring to handling with no time to appraise. Appraisal of the situation is the strength of the pilot, if the automation can buy them some time to make a decision and communicate it. At the moment the rules/tradition for implementing automation and the level of information provided to the pilot just don't seem to achieve this aim. The automation isn't allowed to control, but the system is too complicated for a pilot to quickly comprehend what is and isn't requiring manual intervention.

I don't like referring to AF447, but do people think the outcome would have been different if, rather than saying "you have control - well, mostly", the "system" had said "Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free run inertials and GPS. Let me know if you want me to do something different, meanwhile I've switched pitot heat on and I'll let you know if anything changes."?

RAT 5
3rd Nov 2015, 14:14
As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.

Review the oft-quoted definition of a skilful pilot. (Light-heartedly.)

The other poignant issue is the coordination, or lack of it, between airline pilot training and airline a/c design. They seem to happen in isolation. Aircraft design can head off in any direction with leaps & bounds, driven by technocrats and accountants, and as long as it meets XAA specs and is cheaper long-term they go ahead. Out it comes, every 15 years or so, a new bag of bells & whistles that is more sophisticated, trouble-free and crash-proof than the generation before. Training, meanwhile, plods along with great inertia. The only real difference I've noticed over 30 years is that the training has gone from 250hrs to 148hrs (CPL), MCC has been thrown in and the MPL is now the rage. But just how much of it is focused on the 'technology' that is going into the next generation of a/c? A short intense TQ course with very strict SOP's that teach only one method of doing anything, and is well short of the total capability of the systems, is not IMHO a satisfactory training. I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c. Meanwhile we test to the same criteria as a B732. Hence my comment about uncoordinated training programs.

barit1
3rd Nov 2015, 20:21
Can automated systems deal with unique events?

Of course they can. Use the similar logic of Climate Change models, used to predict with great certainty the state of Earth's climate 30, 100, or 1000 years hence! ;)

dClbydalpha
3rd Nov 2015, 20:37
I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c.

I hope not. When I learned to drive my car had a manual choke. I understood what it was for and how it did it. I've even driven a car that had an ignition advance lever. Now I drive a car where the throttle is a digital input to a computer. A computer is even used to decide exactly how much brake pressure is applied to each brake. I don't have to understand how it works, I just press pedals. The automation is transparent to me. This may be due to the fact that cars have to be capable of being driven by the vast majority of the population. The relationship between pilots and aircraft is very different and the industry is required to design automation in a particular way. We need as an industry to redefine that relationship along the lines of "operator" and "steering" as pointed at by John Farley's post. Getting better definition of roles and responsibilities will allow clear boundaries to be defined in terms of what needs to be communicated between human and machine and of what knowledge and skill is required by a pilot to fulfil their part of the system.

At the moment we have the scenario where the aircraft can hand a "bag of spanners" (thanks Tourist) to the pilot without the courtesy of passing it over handles-first. I feel this has to change.

Symbion90
4th Nov 2015, 00:17
Computers certainly do have the ability to 'reason' in a functionally equivalent way to humans (whether the mechanism by which they do it is similar is debatable).

Consider the following as an example of what most would regard as a 'unique event'.

A single-engine aircraft is approaching a runway for landing when a large moose and a small child simultaneously enter the landing path. Immediately, on attempting to go around, the engine fails.

Could a computer control the aircraft and achieve a statistically better outcome than a human pilot, even though it is highly unlikely the software would have been programmed explicitly for this scenario?

Given the current state of the art of autonomous devices, it is already probable that it could.

Up to the point of the runway incursion we could assume that existing technology could have the aircraft lined up and able to land successfully.

Detecting the runway incursion would require a vision system. Self-driving cars already have such systems and are able to navigate the vehicle to avoid obstacles. Aircraft, having more degrees of freedom than a car, actually have an advantage here, and the system would command a go-around. At the point the engine failure occurs the range of available trajectories decreases significantly. Let's assume our motion-prediction system can calculate the range of available trajectories as anything from crash-landing the plane short of the incursions, to hitting both objects, to impacting one or other object.

What should the system do? Can the system 'reason' that it should aim to save either the plane, the moose or the child? First it would need to be able to recognise the objects in its path and determine a "consequence of impact" value for each. The best outcome might simply be the solution that minimises the overall sum of those values.

Object recognition and classification is well within the domain of current technology (think Xbox Kinect for a consumer-available example). Having classified the object it 'sees', all the data needed to calculate the consequence of collision is then available.

Things get interesting of course. A simple algorithm might infer it is best to collide with smaller objects compared with larger ones of similar density, disregarding that children have higher intrinsic value than moose. A slightly more complex system might attempt to assign 'intrinsic value' to the objects.

However, the data for such a decision tree might be as simple as {object/animal/twolegs=1, object/animal/fourlegs=2, object/animal/nolegs=3, object/animal/unknownlegs=4}.

You can of course build any data structure you like to classify the real world. This is where the learning aspect of computing systems comes in. Over time many systems, if able to communicate, could adjust these parameters to minimise the number of negative outcomes.
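
A toy version of that value-minimising choice, with the classes, costs and candidate trajectories all invented for the sketch:

    # Pick the trajectory with the lowest summed "consequence of impact".
    impact_cost = {"two_legs": 1000, "four_legs": 50, "aircraft": 500}

    trajectories = {
        "land_short":   ["aircraft"],               # crash-land before the incursion
        "continue":     ["two_legs", "four_legs"],  # hit both
        "swerve_left":  ["four_legs"],              # hit the moose
        "swerve_right": ["two_legs"],               # hit the child
    }

    best = min(trajectories,
               key=lambda t: sum(impact_cost[o] for o in trajectories[t]))
    print(best)   # swerve_left - the cost table encodes the 'morality'

The interesting (and uncomfortable) part is, of course, who writes the cost table.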

Considering that all of the above could be computed for an optimal solution 60+ times per second by a sufficiently powerful system, it is probable even now that computers could significantly outperform humans in 'unique events'.

I don't expect it to happen any time soon in real life though as aviation seems determined to stay in the technological dark ages.

G0ULI
4th Nov 2015, 02:47
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?

What cannot be determined by an AI system is the background history associated with the objects. Is the vehicle autonomous or a bus full of school children that has just been hijacked? Is the animal one of the last breeding pair on the planet? Is the human intentionally trying to commit suicide?

Vehicles are replaceable, critically endangered species are not and while human life should be sacrosanct, the sad truth is that human life is cheap in cash terms.

A human pilot may well be aware or informed of facts that an AI system will just not be equipped to recognise, so killing a single human may well be the least-worst option, rather than potentially killing a bus load of people, or wiping out a species.

But if an AI system is ever set up in such a way that it is capable of making the decision to kill someone in preference to somebody else, that is the start of a very dangerous technological development. We are already seeing the development of such capabilities in autonomous drones, but at the moment at least, a human operator allegedly makes the final decision.

Of course we have been wiping out species all over the planet from the day humans evolved, so the logical answer is to minimise human casualties - but that decision is influenced by cultural bias. Other cultures and societies may hold animal life higher than human life, particularly for rare or endangered species.

The potential development of artificial intelligence should give everyone pause for thought and instill a great deal of concern about oversight and who controls such systems, or even if they can be controlled once released on the world.

Symbion90
4th Nov 2015, 03:15
Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?

It's not 'burdened' as such, unless the speed at which the decision loop is calculated falls below an acceptable value. 99.999% of the time such 'reasoning ability' is not required, but software is weightless, so why not carry it along?


What cannot be determined by an AI system is the background history associated with the objects.


I'd disagree. Look at the way Google performs speech recognition or language translation. One of the reasons it is accurate is that it has the context of billions of other searches performed on a massive database in all languages. It uses that history to infer the most likely context for words far more accurately than by interpreting single words or sounds.

peekay4
4th Nov 2015, 06:29
There was a time when people wouldn't get into lifts without a human operator...

NPR: Remembering When Driverless Elevators Drew Skepticism (http://www.npr.org/2015/07/31/427990392/remembering-when-driverless-elevators-drew-skepticism)

Tourist
4th Nov 2015, 11:40
Lots and lots and lots of opinions without supporting evidence from the naysayers.....

Saying a thing doesn't make it true.

To be fair, neither does supporting material on the Internet, but it certainly lends a bit of credence.

"Nobody will get on a steam train!"
"You will die if you go faster than 100mph!"
"Machine looms will never replace the craftsman"
"A hand built car will be superior"
"A computer will never beat a grandmaster at chess"
"It will never fly"
"We can't get to the moon"
"Nothing can go faster than light"
"There will never be a market for more than a few computers"
"Nobody wants a camera on a phone"
"You will die if you sail west"

glum
4th Nov 2015, 12:15
If we can accept that pilots are 'allowed' to crash planes from time to time when the odds are too heavily stacked against them, then why should we hold automated systems to a higher level?

Surely if (and that's the question still unresolved) pilots 'cause' most of the crashes, then replacing them with automation that will vastly reduce the number of crashes is a good thing, even if those automated systems do still crash from time to time due to unique/unforeseen events?

mm_flynn
4th Nov 2015, 15:00
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?.....

Given past performance of pilots, it is reasonably probable the pilot will totally forget about the detail of the objects, or possibly fixate on the child and then fail to execute a safe engine-out landing. The odds are very limited of a pilot having the mental bandwidth to (for example) determine that going well below best glide speed to lose altitude, and then accelerating to a safe engine-out flare speed while aiming at the undershoot, would allow the aircraft to be going less than 20 knots at impact, thus minimising the risk of fatal injury to passengers and to the unidentified objects.

There are real questions about the reliability of today's computer systems that would IMHO prevent large-scale passenger transport with no human able to intervene in the event of a system failure. However, it is clear that (despite some creative attempts to construct 'moral dilemma unique events') computers today can sense and respond to the outside world better than people, within their design scope (a very important qualification). Moreover, the design scope of an 'automated aircraft' could cover a vast array of situations, and the computers would have significantly lower hull losses and loss of life (both on the ground and in the air). There would still be situations that were out of scope of the design, where the computer could reach a point at which it had no available options, resulting in a catastrophe that a human might have averted.


The record of self-driving cars to date is instructive in the advantages and disadvantages.
Advantage - they report every incident and have a much lower rate of serious incidents than human driven cars (as far as I can tell 0 so far)

Disadvantage - they slavishly follow the law and traffic rules, which appears to result in them being hit from behind far more often than normal cars. Think of parking lots with a 5 mph speed limit: only your grandma or a Google car will actually be doing 5 mph, hence you, the human, don't expect them and hit the car in front!

Automatic cars will 'always' see the hazard and avoid it to a much higher standard than humans. However, they will frequently surprise the human who thinks he could have bent the rule and got in, and who is then surprised the automatic car didn't move out of the way.

_Phoenix
5th Nov 2015, 00:50
"Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free run inertials and GPS. Let me know if you want me to do something different, meanwhile I've switched pitot heat on and I'll let you know if anything changes."
Firstly, airspeed sensing should change for increased reliability; the pitot tube is about 100 years old. Usually the pitot tube gets clogged with ice in thunderstorms, where there is also a lot of turbulence, which means noisy inputs for the inertials. Good luck with that:
https://www.youtube.com/watch?v=a5FrIDwq-qE

Tourist
5th Nov 2015, 04:07
There are now about a thousand different ways to feed info to an autonomous system, and unlike humans, they don't get swamped by inputs.

The more inputs you have, the less likely a spurious one will cause confusion.


Oh look. Even cheap toy drones can see now.....

MIT drone knows how to swerve to avoid crashes - BBC News (http://www.bbc.co.uk/news/technology-34725949)

dClbydalpha
5th Nov 2015, 07:41
_Phoenix
turbulence, which means noisy inputs for the inertials. Good luck with that:


Rate aiding from inertials has been used for decades. It can and does cope with all sorts of disturbances. The only question is do you allow the system to shift a datum, or must it try to return to the original? Look at some modern helicopters in rock-solid hovers; I suspect the vast majority of them are using inertial stabilisation. My pilot friends tell me that hovering in gusty wind conditions is about as hard as it gets. So keeping an aircraft in steady safe flight, at least within the same bounds as those expected of a pilot, is not really going to be that difficult, surely? On the principle that such systems operate already, I don't need luck, just expertise.

As for the old pitot debate: the requirement is not to measure airspeed as such, but the air's ability to support the lift you want to generate and, interlinked with that, the amount of drag that will result. Give or take, a pitot-static system measures this directly. I can't think of any other method that will, so we would otherwise have to measure some other property or properties and carry out a translation. That's why pitots are still with us. Yes, they ice, but then they de-ice pretty quickly too, so you are left having to maintain stable flight for a relatively short time until you get them back. It appears to be the case that misleading airspeed is the dangerous condition. Take away airspeed entirely and pilots adapt using other fall-back parameters. But present airspeeds that contradict other indications and SA is rapidly lost as pilots try to establish what they should use to "steer" safely, while the machine already has a large set of reliable inputs it can use to maintain safe flight.
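
The underlying relation (the standard incompressible approximation, quoted here just to show why the pitot-static pair is such a direct measurement):

\[ q \;=\; p_t - p_s \;=\; \tfrac{1}{2}\,\rho\, v^2, \qquad L \;=\; C_L\, q\, S \]

The dynamic pressure q that the pitot-static pair senses is exactly the q that appears in the lift equation, so it measures the air's ability to support lift with no modelling in between.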

So back to my original point.
Give the correct information to the pilots to let them decide what to do, then let the machine get on with the doing.

RAT 5
5th Nov 2015, 07:45
Automatic cars will 'always' see the hazard and avoid it to a much higher standard than humans. However, they will frequently surprise the human who thinks he could have bent the rule and got in, and who is then surprised the automatic car didn't move out of the way.

I wonder. The sensors can only 'see' so far in front. If I'm on a twisty road with good visibility I'm looking across the corners to see what's coming, and even at what's ahead of me in my own lane, adjusting speed accordingly to avoid having to brake at the last minute. The same is true at higher speeds on the motorway. I'm looking way ahead to anticipate a problem, be it a necessary lane change or a speed-up or slow-down to avoid entering a risk zone; and I'm looking in 360 degrees to anticipate actions/threats approaching from behind or even to the side; and a threat can be detected many cars ahead or behind.
It might be possible for transponder equipped cars to talk to each other and tell each other what they are doing, but what about what they are going to do? And that will require 100% compatible equipment fitted.
Regarding just the motorway system: it is thought that automatic car control, with equal speeds, no lane changes and short spacing, will increase capacity and avoid jams. A steady piston-in-a-cylinder type of traffic flow. It might work, but only when there is 100% compliance. That might then become a requirement to use such roads. It'd be quicker to get from MAN to LON with the car on a high-speed train than on the road, perhaps.

Jwscud
5th Nov 2015, 09:53
My understanding is that image processing and recognition is the big Achilles heel for high-powered computer systems at the moment.

Developing, say, a FBW architecture, even the learning/damage-compensating kind, is easier, as the inputs and outputs are both very easily specified and provided. However, the ability to use visual/IR imaging and then to make sense of that picture and act appropriately is very challenging at the moment.

Self-driving cars manage because road signs and cars are simple, predictable shapes, easy for an image-recognition program to "learn". An autonomous or optionally crewed aircraft out there will not have that luxury, but must be able to visually interpret anything the world can throw at it.

This is one task the human brain is bloody brilliant at compared to computers. The oft-quoted example is somebody throwing a tennis ball at you: a human has seen, recognised and caught it before a modern computer has even identified it, and the way the brain recognises objects is still somewhat of a mystery, so we don't really know how to train computers to do it efficiently.

Autonomous flights between Heathrow and Frankfurt? Maybe. Autonomous flights into any Greek island of your choice with nothing but a ****e VOR or a visual? That is the real challenge for automated air transport - to match the flexibility of the current system without massive infrastructure expenditure.

Tourist
5th Nov 2015, 10:19
Jwscud


I sometimes wonder why I bother.

Did you actually watch the video I just posted?

It clearly shows a toy drone interpreting the world so as not to hit things.
This is not supercomputer stuff, this is toy drone level tech.

Re the catching a ball.

Watch the TED talk I posted earlier. It shows both catching and using a racquet to bounce back a ball (considerably harder than merely catching), plus doing it in autonomous teams.

RAT 5

You say that the sensors "can only see so far in front".
This is true of human eyes.
A computer could combine EO/IR, radar, UV, LIDAR etc., giving a clear view in fog and the like. Vastly superior to the human eye.

Modern military aircraft do not go hunting using eyes; their range is too limited. Google "EO DAS F35" for what a true all-round view is.

dClbydalpha
5th Nov 2015, 19:01
My understanding is that image processing and recognition is the big Achilles heel for high-powered computer systems at the moment.

It is pretty processor-intensive to process an image, but not beyond capability. You don't need to recognise things, just classify them: "ball = catch : knife = don't catch".

I've seen a stereo camera system that can interpret a lot of information in real time. With some relatively simple high-contrast markings it can give orientation, speed, distance, height above the runway, etc.
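
A minimal sketch of both points, with invented calibration numbers: range falls straight out of stereo disparity (Z = f * B / d), and the "classify, don't recognise" step can be as crude as a lookup table.

FOCAL_PX = 1400.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.30    # distance between the two cameras (assumed)

def range_from_disparity(disparity_px):
    # Classic pinhole stereo relation: Z = f * B / d.
    return FOCAL_PX * BASELINE_M / disparity_px

ACTION = {"ball": "catch", "knife": "don't catch"}  # classify, then act

print(range_from_disparity(21.0))  # -> 20.0 m to the object
print(ACTION["ball"])              # -> catch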

Infrastructure costs would not be that great IMHO. A combination of rad nav for identification, then DGPS plus "visual", can deal with most of it.

But, for me, there isn't a lot of advantage unless we're talking about cargo aircraft. Better to place our efforts elsewhere.

Mesoman
5th Nov 2015, 19:35
I don't think the main challenge is processor power these days, although that is an issue. The really hard part - still very much in the research area - is making sense of the image. These days, smartphones and cameras can detect faces, for example. But there's a lot more you want these devices to be able to do: to match human image analysis and even go beyond it. I expect rapid progress to continue in this area, but it gets into hard AI.

RAT 5
5th Nov 2015, 19:36
I'm glad that the discussion has separated flying from operating, and concluded, I think, that computers, as things stand, are great at repetitive actions while humans are better at reasoning. Predicting the future is always difficult, and AI may catch up with us.
What I perceive is that a computer is great at deciding on an action based on certain parameters; it then has a feedback loop to judge whether that action is succeeding, and it will adjust its response until it achieves the required result. An experienced human can go through a lot of 'if I do this, then that will happen, but if I do the other, then something else will happen...' type thinking and decide on the best first course of action from various options; only then does the human feedback loop start. That is, there can be quite a lot of reasoning before the first action, based on an expected best and successful outcome, with plans B, C and D always in the background. I wonder whether AI will ever be trusted fully by politicians in foreign-policy decisions, or by generals in deciding war and battle strategy. I'm not sure of the human v computer chess score at the moment. There will be those with a deeper knowledge than mine of what is already in use or contemplated: it doesn't always mean it's the best method. It's very easy to let things run away from us.
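
The two styles are easy to contrast on a toy one-dimensional task (drive x toward a target); everything below is illustrative only.

def feedback_step(x, target, k=0.5):
    # Computer style: act, measure the remaining error, adjust, repeat.
    return x + k * (target - x)

def lookahead_choice(x, target, actions=(-1.0, 0.0, 1.0)):
    # Human style: simulate each candidate first action and pick the one
    # whose predicted outcome is best, before moving at all.
    return min(actions, key=lambda a: abs((x + a) - target))

x = 0.0
for _ in range(5):
    x = feedback_step(x, 10.0)
print(round(x, 2))                  # ~9.69: converging by pure iteration
print(lookahead_choice(0.0, 10.0))  # 1.0: the best first move, chosen up front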

airman1900
5th Nov 2015, 19:52
Computers are fast idiots.

mm_flynn
5th Nov 2015, 21:46
I'm not sure of the human v computer chess score at the moment.
I think you will find the sport is now how much of a head start the computer needs to give the grandmasters to have an interesting game. A straight-up grandmaster v machine match is like watching Arsenal v a strong club side: Arsenal could lose on the day, but it's not very likely; take three of Arsenal's players off the pitch, though, and it could be quite an exciting and even match.

_Phoenix
6th Nov 2015, 01:02
I admit that I'm a bit subjective regarding the Bombardier brand; however, I strongly believe in their approach to, and vision of, human-machine integration. The new fly-by-wire platform and enhanced cockpit take the best of the A & B experience and technology and go beyond them, bringing in the best technology available and phenomenal design. The flight deck offers the best situational awareness and enhanced pilot sensing directly through the HUD (the machine's extra senses), i.e. night vision, advanced landing guidance, synthetic 3D vision of the runway and terrain, GPWS, windshear. The fly-by-wire is robust and reliable, with only two laws: normal and direct.
More interesting details in videos:
https://m.youtube.com/watch?v=QN0qQrMaLYw
https://m.youtube.com/watch?v=OghtdzFXFoo

Tourist
6th Nov 2015, 06:01
Computers are fast idiots.

Better to be a slow idiot, yes?

https://www.youtube.com/watch?v=Y4kNeixcOrY

TransAsia plane crash: crew 'shut off working engine' - reports | World news | The Guardian (http://www.theguardian.com/world/2015/jul/01/transasia-plane-crash-crew-shut-off-working-engine-reports)

metadalek
6th Nov 2015, 10:30
This is my first post here, after several years of reading. It's long, unfortunately.
I am a GA pilot, not commercial, and I work in IT as a programmer.

I think the thrust of most of the comments in this thread is misguided.

The question of dealing with unique events, while interesting, is not the important one.
The real question is whether computer pilots will increase safety, and the answer is almost certainly yes.

Let's look at the pros and cons of computer pilots. Read this list and think about the fatal accidents of the last 20 years.

Pros:

- they don't get tired
- they don't have wives, children, husbands, pets, medical conditions or financial worries,
which leads to:
- they don't have anxieties
- they are not religious and do not believe in an afterlife; in fact they do not believe in anything
- they never go to the bathroom, or need to eat
- they do not go to sleep
- they do not get distracted by trivia
- they never forget a checklist item
- they do not suffer from get-there-itis
- they don't worry about losing their jobs
- they do not lie
- they can handle 10, 20, 30... inputs at once without getting confused
- they do maths really well and never make a mistake in unit conversions
- they never panic
- they never need checks, and additional development can be done offline
- upgrades can be done to entire fleets at once, or over any period deemed sensible
- they double in capacity every 2-3 years
- they are cheap and light
- the incremental cost of deployment is zero, so scaling is easy
- they don't have sex, or get distracted by pretty/handsome pilots or cabin crew
- they never worry about daughters getting pregnant, or sons becoming drug addicts
- they do not have heart attacks, fainting spells, hangovers...
- they never forget to feather props, or to turn fuel pumps off/on...
- they don't need oxygen or heat
- they can compute optimum settings in no time
- they do not suffer from confirmation bias
- they never get ATC instructions wrong (a protocol is required, but it's trivial; see the sketch after this list)

This list is endless...
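
On the ATC point above, a hedged sketch of what "a protocol is required, but it's trivial" might look like (field names invented, not from any real datalink standard): a typed clearance with a checksum leaves no room for readback/hearback error.

from dataclasses import dataclass
import zlib

@dataclass(frozen=True)
class Clearance:
    callsign: str
    cleared_level_ft: int
    heading_deg: int

    def checksum(self):
        # A compact digest of the instruction, in place of a spoken readback.
        text = f"{self.callsign}|{self.cleared_level_ft}|{self.heading_deg}"
        return zlib.crc32(text.encode())

uplink = Clearance("ABC123", 37000, 270)
# The aircraft "reads back" only the checksum; any mismatch triggers a resend.
assert uplink.checksum() == Clearance("ABC123", 37000, 270).checksum()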

Cons:

- they don't handle strange/unforeseen situations well
- they are not afraid to die
- a flaw in one is a flaw in all

Now for humans:

Pros:

- they can handle strange/unforeseen situations well, but most don't

Cons:

- see list of computer pros and negate them all

In any sensible analysis, the computer wins once it can do the job.
And that is, if not today, then real soon.

As for me, I would feel much safer on a pilotless aircraft, just as I feel comfortable on a driverless train.

Oh, and despite the fact that I love driving my car and would hate to lose the ability to do so, I am pretty sure a driverless car is much safer than I am. It's annoying, but it's true.

Sure, some accidents will occur that maybe could have been prevented by a human pilot.

Who cares, as long as more accidents are prevented because there are no human pilots.

It's about percentages. Get used to it.

airman1900
6th Nov 2015, 23:19
Better to be a slow idiot, yes?
You are correct.

Slow idiots are of slow mind.

Fast idiots are of faster slow mind.

Mach E Avelli
7th Nov 2015, 02:04
Metadalek, that is a brilliant post!
Summed up perfectly too, with the bit about percentages.
I feel that the pilots commencing their flying lessons today are the last generation who will actually finish their careers flying large aircraft.
Humans may fly turboprop aircraft to regional destinations and in the third world for a generation or two beyond that, because total automation for smaller aircraft will probably remain unaffordable for longer.
Also, when a turboprop augers in somewhere in Africa and kills 50 people, the legal fallout is not likely to concentrate minds as much as the first A380 crash caused by pilot error will.

neville_nobody
7th Nov 2015, 04:49
The question of dealing with unique events, while interesting, is not the important one.
The real question is whether computer pilots will increase safety, and the answer is almost certainly yes.


I think you are missing the point: if we go to autonomous aircraft, all that happens is that the safety risk shifts from one risk to another.

Whilst computers may do a whole bunch of things better than humans, the middle of the Atlantic or the Indian Ocean at FL370 in the middle of the night is not the time to discover some bizarre problem with the autonomous machine that no one thought of. Presently, if that happens, the human can always hand-fly to the alternate. QANTAS has had at least three incidents that were 'never supposed to happen' that I can recall off the top of my head.

If you look at the accident rate of Western airlines you will find it is ludicrously low, and the fatality rate is even lower.

The questions are:

1. Can autonomous aircraft beat that fatality rate?
2. Can they do it cheaper than pilots do now?

Single-pilot airliner ops in the future are probably going to happen.

Full-blown autonomous airliners are probably going to be deemed too much of a risk, and I would suggest that regulators and/or insurance companies will probably make it difficult. Regulation is something the tech industry is not particularly used to dealing with, nor particularly good at.

metadalek
7th Nov 2015, 05:07
I think you are missing the point: if we go to autonomous aircraft, all that happens is that the safety risk shifts from one risk to another.

The above is obviously true. But shifting from a higher risk to a lower one is a good thing.

Can autonomous aircraft beat that fatality rate?

And as I said, almost certainly. But that's an opinion, not a fact.

Can they do it cheaper than pilots do now?

If they can do it at all, they can certainly do it cheaper. I would bet my house on this one.

If you look at the accident rate of Western airlines you will find it is ludicrously low, and the fatality rate is even lower.

Not relevant unless the accident fatality rate is zero and the cost is zero. We are looking for improvement in all things.

As for regulation, the people making the rules are not idiots, and it's a mistake to think they are. They will change the regulations as soon as they are convinced the change produces the outcomes they desire.

As for the public, people adapt amazingly quickly. There will always be some who resist, but I suspect most people simply do not care. I certainly don't.

RAT 5
7th Nov 2015, 07:39
Single-pilot airliner ops in the future are probably going to happen.

Let's consider future FTLs. Aircraft can now fly for 16+ hours; what of the future? Imagine a single pilot on board an automatic aircraft crossing the world's oceans. Launch, enter oceanic airspace, engage the autonomous automatics according to the transit clearance, and then go to sleep in a cockpit bunk; 5-6 hours later, wake up and carry on. Using current split-duty rules you could be on duty for an outrageous period. It would not be surprising, reflecting on the past performance of the XAAs & EASA, if FTLs were adjusted to match aircraft performance and extended even more. With automatic aircraft and cockpit bunks, hugely lengthy duty periods on ultra-long-haul flights with single pilots might well be approved. What horrors.

Tourist
7th Nov 2015, 09:17
But if all you were doing was sleeping when you fancied it, or reading a book / watching a movie, would that be awful? Assuming the aircraft was on automatics, which I don't think anybody on here disputes is entirely possible, at least in the case of normal ops without emergencies, there would be no requirement to be involved with the aircraft in any way unless there was a problem.
An easy 20 hours' "work".

Tourist
7th Nov 2015, 09:20
Full-blown autonomous airliners are probably going to be deemed too much of a risk, and I would suggest that regulators and/or insurance companies will probably make it difficult. Regulation is something the tech industry is not particularly used to dealing with, nor particularly good at.

I would counter that the exact opposite will be the case. I think insurance companies are exactly the people who will force it to happen once a body of evidence is gathered to show the safety advantages of autonomous aircraft.

This evidence will be gathered from autonomous military freight (already happening) and eventually civvy freight.

Jwscud
7th Nov 2015, 10:26
I'm in my 30s now. Airlines are still buying A320s and 737 derivatives, and I have little doubt that I will retire flying aircraft that are flying now. The 747 and 767 have been with BA for 20-30 years; I look forward to retiring off maybe the 380 some time in the 2040s.

If you need one pilot on the aircraft, then quite frankly you need two. I think the vast majority of tasks will be automated, with various forms of computer control, but we are looking at the next generation of clean sheet designs, so 30-50 years away.

dClbydalpha
7th Nov 2015, 11:03
1. Can autonomous aircraft beat that fatality rate?
2. Can they do it cheaper than pilots do now?

1. Yes, if implemented properly, taking into account a systems design that is fully autonomous. #
2. Absolutely. Pilots are extremely expensive.

# What I would like to know, though, is where the optimum lies. What combination of pilot/aircraft/automation will yield the best return in terms of fuel economy/payload/accident rate/fatality rate? Whatever it is, it will take a change in attitudes within the design, operating and regulating communities.



... but we are looking at the next generation of clean sheet designs, so 30-50 years away.

Not necessarily. The majority of the latest designs have the necessary infrastructure on board already; if required, they could be adapted quite readily with the addition of some computing and tweaking of the existing algorithms. But frankly, where is the need? While the aircraft has to be designed to accommodate people, having a pilot on board is a small penalty for the massive gain of a highly flexible and capable systems manager; a human also brings a lot more to the decision-making, things a computer need not consider but which enhance the whole experience. For a cargo aircraft, that trade-off may yield different results.

Tourist
7th Nov 2015, 11:05
I'm in my 30s now. Airlines are still buying A320s and 737 derivatives, and I have little doubt that I will retire flying aircraft that are flying now. The 747 and 767 have been with BA for 20-30 years; I look forward to retiring off maybe the 380 some time in the 2040s.


That is all fine, but we are not talking about the end of manned aircraft, merely the start of unmanned.


If you need one pilot on the aircraft, then quite frankly you need two.

Umm. I don't know quite where to start with that statement.

If you need one chef in the kitchen, then quite frankly you need two.
If you need one prime minister, then quite frankly you need two.
If you need one midwife, then quite frankly you need two.

You realise that it makes no sense, right?

A huge number of aircraft fly around with one pilot. Many of them carry passengers in airways, in RVSM airspace, etc. As the job gets easier, it is not a great stretch to imagine that one person could do it.

hunterboy
8th Nov 2015, 06:09
I get the impression that the powers that be aren't planning initially to have only one pilot flying an airliner, but rather one pilot sitting in the cockpit during the cruise while the other pilot goes off task for rest. For take-off and landing, there would be two pilots at the controls.

neville_nobody
8th Nov 2015, 12:43
I would counter that the exact opposite will be the case. I think insurance companies are exactly the people who will force it to happen once a body of evidence is gathered to show the safety advantages of autonomous aircraft

Until they have a fatal accident killing 500 people that a human could have prevented.

The other issue is actually proving your case to be correct. The amount of autonomous flight data you would need is huge, given the number of human-piloted flights taking place now. Flying a testbed around the world a few times is not the same as the millions of hours that humans put up every year.

And the insurance companies will have to consider the original question of this thread: can computers deal with the weird failures that occur in aviation?

RAT 5
8th Nov 2015, 12:50
Regarding the pilot-cost question: one captain with all the skills, the experience and a proper salary, plus one cruise pilot, paid peanuts, while said captain nods off in a cockpit bunk ready to leap into action if required. So, 1.25 pilots' worth of cost. The cruise pilot then goes back to the smaller jets from whence they came to refresh their techniques, achieve command and, if they so desire, move on to the sleepy, ultra-long-haul, single-(almost)-pilot big jet.
The cruise pilot, during take-off and landing, is trained thoroughly to move the landing gear & flaps, to bark at & bite the captain if he tries to kill anyone or touches a wrong switch, and to read any normal/non-normal checklists as needed. Cruise pilots answer to the name of 'Rover'.