
Computers in the cockpit and the safety of aviation


Old 27th Jun 2011, 16:41
  #161 (permalink)  
 
For the sake of discussion ...

How much effort should we really be putting into this? The circumstances in which there is a total and permanent loss of enough of the flight systems are pretty unusual - even with the large amount of aviation in the world today, how many hull losses are we talking about? One in five years? And in the case of AF, wasn't the more fundamental problem thought to be the fact that it flew through a CB? Surely the more important issue is to ensure that pilots avoid the situations where their superior basic handling skills are needed to save lives?

On the other hand, I can think of several double engine failures on twinjets that have occurred in the last five years. Should training to handle this wisely not be getting a higher priority?

Particularly in a professional pilots' forum, of course there will be people who nod sagely and say that being a professional pilot requires good basic flying skills. And so they should. But my suggestion (I am open to being refuted) is that the real risks that need to be addressed aren't shortcomings in basic flying skills. For instance, I genuinely think that one of the greatest risks - not one that will CAUSE the next accident, but one that will be a significant contributory factor - is poor CRM. Most people have got to grips with it. But accidents and near accidents will continue to happen where it has broken down - and what that will actually look like is CFIT (with a captain assuring a junior F/O that he knows the local area) or loss of control in a CB (with the captain unhappy about flying so close to the red bit but not wanting to intervene a fifth time in the F/O's operation) ...

Thoughts?
Young Paul is offline  
Old 27th Jun 2011, 18:40
  #162 (permalink)  
 
Young Paul,

If I understand you correctly, you are saying that training should be targeted at the areas of most risk to address that risk? You cite loss of thrust from both engines as an example.

I disagree, to the extent that I believe flying is a 360deg problem requiring a 360deg solution. What I mean by that is that a pilot does not develop superior skills and awareness by tackling single issues. Sure, one would be more proficient at dealing with a total loss of thrust by practicing it. But one would also be more proficient by developing a superior sense of situational awareness and general competence.

The issue of computers in the cockpit is not simply an issue of hand flying, but an overall issue of maintaining the skills required to be sufficiently aware and knowledgeable. Hand flying specifically, doesn't just improve one area of one's ability, it attunes the pilot to the nature of the environment he is operating in.

You also cite CRM as a problem. CRM is generally a problem when the F/O lets his responsibility to ensure a safe flight slide. I believe a major reason for this is a lack of confidence, which generates a lack of willingness to tackle unsafe practices by Captains. I am sure that an F/O who is competent in all aspects of flying, and thus confident in his knowledge and ability to handle the aircraft in any recoverable situation, is not the kind of F/O who would let a Captain continue with an approach like the one at Mangalore, where the entire approach was high and the touchdown so far down the runway that it must have been obvious the aircraft was in great danger. That is an extreme example, but even in my airline, F/Os who have no fear of disciplinary action (quite the opposite) often fail to challenge Captains when it is their job to do so.

Specifically targeting high risk failures is not going to help anyone in the long run. Specifically encouraging crew to develop as complete, confident and thus thoroughly able and flexible pilots is critical. Being able to confidently take over from the automation at any point in a normal or abnormal flight is, I believe, absolutely critical to that development.
Sciolistes is offline  
Old 27th Jun 2011, 20:36
  #163 (permalink)  
 
I agree that there's a need to be an all-round pilot. What I'm challenging is how great the role of "traditional" skills and "traditional" behaviour of the aeroplane should be in this matrix.

To take traditional aeroplane behaviour to start with. I am quite happy (even as an Airbus pilot) to say that FBW was placed in a life-critical application ten years too early. Now, however, on the back of 30 years of practice, I don't have any significant doubts about it. The designers had to make decisions: should we have manual reversion? Should we be able to fly with flight control computers all off? Should there be tactile feedback in the control system? Whilst the decisions they made at the time may have been ambitious, I don't think that history demonstrates they were wrong. Where there have been hull-losses of fbw aircraft, I don't think you can really show that it was an issue with the computers. At worst, it was an issue at the man-machine interface, which highlights the real safety issue - that is, the human factor - people not using the system properly. So in what circumstances do we need to know that we can fly it in the "traditional" way? I think BA have gone too far in saying that you can't fly without the ... whatever it is they say you can't fly without. Autothrottle? But all airlines have to make a decision which can be justified to the regulatory authorities.

Now traditional skills - the ability to switch off all the automatics and fly by hand. In what circumstances is it necessary? I flew with a technically switched-on F/O the other day who knew that you could get the best descent out of an A320 by switching the automatics off. That was safely done in VMC. But the number of times in many years when I've been put in a position where this made the difference between a landing and a go-around is - well, one. If you're high, you can ask for extra miles. Or go around. Or you can plan things from further back to make sure that you reach the gate. And the price of switching all the automatics off voluntarily is that, unless you're very careful, you're taking out not only those protections but also the inclusion of your fellow pilot in the monitoring loop. Much better as a rule, surely, to work within the constraints of the automatic systems.

What about technical issues or multiple system failures or lightning strikes coupled with autopilot disengagement? Again, whilst as pilots we should be able to cope with this, would it be proportionate to gear our training towards a once-in-a-flying-career combination of failures? To be honest, when everything goes wrong, none of us really knows if we'll be successful when we step up to the plate. Every famous air incident you can think of - in each case, the pilots didn't know what they were going to be facing when they went to work in the morning. In all of them, arguably, they were heroes - in some they saved lives, in some they didn't. But it's a bit like the engine-failure-at-V1 thing - the danger is that you prepare for the "worst" case, and get very good at managing that, but don't know how to cope with anything less than extreme. We really don't want people switching autopilots and flight directors off because one of the flight control computers has failed, do we? But I think there's a risk of that. Again, I've heard of experienced captains, doubtless very good at handling the aeroplane, who have used their superior handling skills to escape from encounters with CBs that good airmanship ought to have kept them away from in the first place.
Young Paul is offline  
Old 28th Jun 2011, 01:54
  #164 (permalink)  
 
Lonewolf_50 refers to ‘probabilistic’ regulation, which has served the industry well, but the method does not capture human activity except by assumption. Adverse human contributions, to some extent, have been mitigated by selection, training, and proficiency, but even so, these are still subject to basic human fallibility.
Most new designs overtly aim to guard against error, but a technology-driven complacent industry, ‘by counting numbers’, might have unwittingly accepted that the safety improvements apparently rooted in technology would mitigate even lesser human standards.
More likely, commercial pressures have argued for a stabilisation (containment) in the quest for ever higher levels of safety (by using technology); this might be a facet of an ‘almost totally safe transport system’ (Amalberti).

Tee Emm reiterates some of the sharp-end issues and in part identifies a solution: “If automation dependency has you by the short and curly, then you have only yourself to blame.” A facet of self-discipline, perhaps?
This solution is still only treating a symptom, as there are many situations where humans now have to depend on automation, e.g. RVSM, PRNAV, because the industry has changed. Thus the availability of automation (and other technologies) has altered both the operating situation and the choice available in executing a task (auto vs manual). Furthermore, human nature biases individual assessment of capability – we think that we are better than we are; complacency, ‘we can do that when required’, etc, etc - "repetition and entrenched thinking".

If as BOAC states, modern systems are ahead of (beyond) human capability, in not having the skills for failure cases, then the context of the failure should be avoided. But the context is driven by the perception of safety, the risk of encountering a situation – probability. Moreover, if this is a public perception – a social perception – then the industry might have more to fear from the media than from technology (cf the nuclear industry).

Avoiding the context (operational situation) could involve either, or a selection of, highly reliable technical solutions (everything automatic), focused training, or changing the workplace / task.
IMHO it is not technology that is beyond human capability, it is the situations which the human has to face if technology fails, that demand too much; this is an artefact of the modern ‘system’ – the modern technological complacent industry.
Assuming that the problems are perceived as being severe enough to worry about (probability), then solutions may not emerge until the industry recognises that some situational demands are too great for the human, whether these individuals are at the sharp-end, in management or design, or regulators.

“We cannot change the human condition. But we can change the conditions under which humans work”, Professor James Reason.

“… to really understand risk, we have no choice but to take account of the way people interpret events.” Professor Nicolas Bouleau in “To understand risk, use your imagination”, New Scientist, 27 June 2011.
alf5071h is offline  
Old 28th Jun 2011, 07:20
  #165 (permalink)  
Originally Posted by alf
If as BOAC states, modern systems are ahead of (beyond) human capability,
- my 'broadbrush' comment also takes into account the fact that the systems are extremely complex. So complex that in the case of 447, if a question on the operation of the FBW system is asked 2 years after a ? 6 minute ? disaster window, we still get conflicting answers from 'experts'. These systems (and I do accept the need for them, by the way) must operate in a way that either there is no possibility of human confusion through the cycling of their various code loops AND/OR there is a clear 'escape route' available to a pilot to allow a less than perfect but survivable exit from the problem. It may sound trite, but when you are trying to fly out of whatever 447 had, RVSM, PRNAV and even alpha protection CAN be dispensed with. Of course, 'acceptable risk' rears its head and we could more or less shrug our shoulders and say 'C'est la vie', but I believe current trends suggest not.

I would also expand on Reason - the 'human condition' is itself changing with time, as each generation grows up with a different tech landscape, and thus we need to be constantly reviewing the way we change 'the conditions'. Anecdotally, a recent thread about an email 'problem' was sorted by the poster's 9-year-old grandson arriving on the scene with an Android 'app' to solve it. In 10 years or so, said grandson could be in the RHS of a transport aircraft. Are we adapting our training philosophy at the same speed?
BOAC is offline  
Old 28th Jun 2011, 08:27
  #166 (permalink)  
 
I might be a little late to the discussion... but: BRING BACK THE FEs! When failures/emergencies of this magnitude occur in adverse weather, the two guys up front have their hands full flying the thing; having a guy in the back who knows the systems and can maintain them as best as possible is going to beat a computer any day of the week.
aviatorhi is offline  
Old 28th Jun 2011, 10:18
  #167 (permalink)  
Unfortunately not! Short of using the fire axe on the PF's head, the F/E has no way of stopping the PF exceeding stall AoA. F/Es are also excellent at working with and hitting big mucky whirring and thrashing bits, but not at diagnosing where a couple of zeros in a memory stack may have rolled off the end and dropped onto the next stack.

There is, of course, nowhere for the F/E to sit any more
BOAC is offline  
Old 28th Jun 2011, 11:58
  #168 (permalink)  
 
They have no way of stopping it, but they relieve a lot of the workload on the two pilots, who have lights and warnings going off as they're trying to get the plane under control - and I hope you're not saying that excessive AoA is the only thing that can happen.

Point being, planes have grown more and more complex over time, with less and less direct control of the systems - just lights and procedures that aren't as well understood, clear or complete as they need to be.
aviatorhi is offline  
Old 28th Jun 2011, 12:44
  #169 (permalink)  
 
Professor Nicolas Bouleau in “To understand risk, use your imagination”, New Scientist, 27 June 2011.
Fine words - but no captain would ever have thought that a nervous-Nellie first officer would snatch back the unguarded thrust levers on that Ryanair 737 and abort during rotation simply because "things didn't seem right".
Tee Emm is offline  
Old 30th Jun 2011, 02:10
  #170 (permalink)  
 
TM, I interpreted ‘the fine words’ as being directed at management; at someone who can restrict the operational situations which pilots might face.

With your example (737 RTO), the generic risky situation evolved from relatively recent RTO training materials which introduced ‘if unable to fly’.
This text requires an evaluation, whereas an engine failure is a simple ‘if-then’ assessment (If engine fails below V1, Then stop).

The speed trend is a poorly described artefact of technology which has been put to good use in some situations (IMHO it's a crutch for an inferior speed display, but that's another matter).
The decision to add speed / trend anomalies or similar to an RTO decision increases operational complexity; such situations must be bounded. What is anomalous? How much change from the norm? What difference can be tolerated, when, and why?

The availability of technology (speed trend) introduces the opportunity for complexity, but it's humans who control the level of complexity; e.g. with specific guidance in a procedure – "If ASI fail or error (<= 5 kt split) below 80 kts, Then stop; trend vector N/A. Above 80kts go, use the standby and resolve any ambiguity at a safe altitude".
The guidance / procedure controls the circumstance of the situation; this should be based on a risk assessment – technical probabilities, e.g. is the trend vector an essential item, if not then ignore; cf dual vs single ASI failure during take off – bound the situation with an error margin and total speed.
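To show the shape of such a bounded rule, here is a minimal sketch, reading the 5 kt figure as the tolerated split; the 80 kt gate and the thresholds are simply the example figures quoted above, not a recommendation from any manufacturer or operator.

```python
# Illustrative only: encodes the bounded ASI-anomaly guidance quoted above,
# reading "<= 5 kt split" as the tolerated disagreement. Figures are the
# example values from this post, not real SOP values.

def rto_decision(capt_ias_kt, fo_ias_kt, reference_speed_kt,
                 gate_kt=80.0, tolerated_split_kt=5.0):
    """Return 'STOP' or 'GO' for an ASI disagreement noticed during the take-off roll.

    reference_speed_kt is the best independent estimate of how fast the
    aircraft is actually going (standby ASI or ground speed, say).
    """
    split = abs(capt_ias_kt - fo_ias_kt)
    asi_anomaly = split > tolerated_split_kt
    if asi_anomaly and reference_speed_kt < gate_kt:
        return "STOP"   # low speed: reject; trend vector not a factor
    return "GO"         # above the gate: continue on the standby ASI and
                        # resolve the ambiguity at a safe altitude

# A 12 kt split noticed at 70 kt -> STOP; the same split first seen at 100 kt -> GO.
print(rto_decision(82, 70, 70))     # STOP
print(rto_decision(112, 100, 100))  # GO
```

The point is not the code; it is that once the bound is written down that explicitly, there is very little left for the crew to have to evaluate at 100 kt.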

In an RTO scenario, by controlling the context / circumstance of the situation, and thus the opportunity for dilemma*, well-formed operational guidance should maintain safety even with high-tech systems.

* I use ‘dilemma’ as opposed to ‘error’ in this instance, as the decision would likely be ‘correct’ for the situation at the time, based on what was perceived and judged against the crew’s beliefs – vague / incomplete guidance, and common (perhaps mistaken) knowledge and training about a facet of new technology.
alf5071h is offline  
Old 1st Jul 2011, 13:58
  #171 (permalink)  
 
The 80 knot take-off roll check of the captain's and copilot's ASIs in the 737 is really a gross error check. Not five knots or even 10 knots, because even as the call-out is made, the aircraft is accelerating so quickly that the speed comparison is useless a second or so later.

From simulator experience, by the time either crew member focuses on the standby ASI for comparison purposes, another 10-20 knots has passed by. Boeing do mention use of the ground speed reading as a confirming factor since rarely does one see a defective ground speed read-out. Of course any ground speed check must take into account the wind component.
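To put rough numbers on that - a sketch only, assuming IAS is close to TAS near sea level, which is good enough for a gross-error check; all the figures are invented, not from any manual:

```python
# Rough illustration of using the ground speed read-out as a cross-check on the ASIs.
# Assumes IAS ~ TAS close to sea level; figures are invented for illustration.

def expected_groundspeed_kt(target_ias_kt, headwind_component_kt):
    """Ground speed the GS read-out should show when the ASI reads target_ias_kt."""
    return target_ias_kt - headwind_component_kt

# At the 80 kt call with a 15 kt headwind component, GS should read roughly 65 kt.
print(expected_groundspeed_kt(80, 15))    # 65
# Rotating on ground speed instead of a suspect ASI: Vr 145 kt, 15 kt headwind -> about 130 kt GS.
print(expected_groundspeed_kt(145, 15))   # 130
```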

What is important, though, is what happens if the PNF does not call out "80 knots", either because he had his attention elsewhere, or he was simply too slow to react, or of course because his ASI had not yet reached 80 knots.

In that case, it is incumbent on the PF to make his own call-out based upon his own ASI - for example "going through 95 knots my side." In turn, this should stir the other pilot either to agree, to disagree, or to remain bemused. The latter is more likely given the time factor.

All the time the aircraft is accelerating towards V1. Believe me, we see this in the simulator a lot when a fault is set into one or other of the main ASIs. There are immediate doubts in both pilots' minds.

This is where knowledge of the expected ground speed is good airmanship: rather than risk a high-speed rejected take-off under the circumstances, the next best thing is to rotate on the expected ground speed reading, allowing for the wind component.

Provided the pilot applies common-sense knowledge of the initial climb pitch attitude coupled with the known N1 for the circumstances, the confusion of which ASI is the problem can be sorted out later at relative leisure. Sorry about the thread drift.
Tee Emm is offline  
Old 3rd Jul 2011, 13:58
  #172 (permalink)  
 
TM, there’s no thread drift if you consider the effects of technology; this becomes a good example of potential problems of ‘computers in the cockpit’.

I would be very surprised if Boeing recommended (officially) a ground speed (GS) check during take-off. Where does GS come from, how accurate, update rate, etc, etc, and what time is available for a crew member to look and crosscheck? You mention some of the problems.
The ‘availability’ of GS is an artefact of technology – “let’s use it because it’s there”, without thought to the added complexity and potential for confusion.

Why do we check/crosscheck ASIs?
With old steam-driven systems, as you say - a gross error check.
However, with modern technology, many ADC driven instruments have a comparator alerting system, either ‘Speed’ or for the total system, ‘ADC’.
So the take-off SOP could be simplified by deleting the speed check and relying on the comparator; but is that warning one of those inhibited during take off? If so, would that imply that the malfunction (amber level) does not pose significant risk, certainly not sufficient for rejecting at high speed?

Operators may still call 80 kts or similar as an indication of the take-off's progress, or because it could reduce the PF's need to glance into the flight deck.
Or is the call an acceleration check? If so, how would such a system be used, and what action might ensue (acceleration requires speed measured against time, and humans are poor timekeepers)? Speed trend is a form of acceleration (check the exact computation and smoothing), but do the crew know what specific value is required for each take-off? Perhaps they are only familiar with a range of values (approximations) gained from experience. (A toy sketch of what such a check would actually compute follows below.)
Most aircraft have a designated engine thrust parameter, if that value is set and maintained then the take-off thrust is assured (providing the value has been calculated correctly). A reduction in the thrust parameter should be part of the engine failure process; If engine failed, Then …
Keep SOPs simple, practical, meaningful.
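For illustration, the sort of comparison an acceleration check implies might look like this - a toy sketch with invented numbers, not a description of any real take-off performance monitor:

```python
# Toy illustration of why an acceleration check suits a machine better than a human:
# it needs speed sampled against time and a pre-computed required value for the day.
# All figures are invented for illustration.

def acceleration_ok(speed_samples_kt, sample_interval_s, required_accel_kt_per_s):
    """Compare average acceleration over the sampled interval with a required value."""
    gained_kt = speed_samples_kt[-1] - speed_samples_kt[0]
    elapsed_s = (len(speed_samples_kt) - 1) * sample_interval_s
    return (gained_kt / elapsed_s) >= required_accel_kt_per_s

# Speed sampled once a second early in the roll; requiring at least 2 kt/s here.
print(acceleration_ok([40, 43, 46, 48, 51], 1.0, 2.0))   # True  (about 2.75 kt/s)
print(acceleration_ok([40, 41, 42, 43, 44], 1.0, 2.0))   # False (about 1 kt/s)
```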

Most of the above is just my view of what you posted, but skewed by technology (automation, computers in the cockpit).
With increasing use of technology there is opportunity for operators to misjudge the effects of ‘change’, to carry on using the same old procedures (complacency), or to unwittingly add unnecessary complexity – ‘because it seems like a good idea’ – “it improves safety”.
Is 'use it because it's there' a failed application of CRM? See the adjacent thread.

Unfortunately many of the technology-inspired changes, particularly in SOPs, involve weak or inaccurate risk assessment which may not improve safety. Technology is not the cause of this; it's human judgement, and judgement is part of ‘airmanship’, except in this instance it should be exercised by management and regulators, who have to apply professionalism.
alf5071h is offline  
Old 6th Jul 2011, 16:14
  #173 (permalink)  
 
Originally Posted by BOAC
So complex that in the case of 447, if a question on the operation of the FBW system is asked 2 years after a ? 6 minute ? disaster window, we still get conflicting answers from 'experts'.
I'm not so sure of that - what you're getting is a combination of people with varying amounts of knowledge: in my case, a reasonable idea of the design philosophy and some knowledge (by no means complete) of the logic trees and reliability/testing phases involved; current and former Airbus FBW pilots in the form of PJ2 and Chris Scott, among others; pilots who don't necessarily know the systems but have their opinions anyway; and finally those who we know well have a major axe to grind with Airbus and are deliberately muddying the waters, like they always do when the subject comes up.

In terms of the current Airbus pilots in particular, you have speculation based on knowledge which is sound but may not be current - notably, none of them has suggested that the pilots in the case you're referring to were confused by laws, displays or ergonomics. One current pilot is fascinated with the possibility of Byzantine software failure (which I'm not discounting out of hand, but suspect is unlikely). If you read the threads, however, the ones claiming confusion on the flight deck and slating the systems design are all from the latter two camps.

Just thought I'd better clear that up - on with the discussion!
DozyWannabe is offline  
Old 7th Jul 2011, 01:25
  #174 (permalink)  
 
“… a courageous realization …”

BOAC has a problem “How to live in an unfathomable world”, as we all do according to New Scientist, 17 May 2011.

“… opposing positions are predictable, but they are also incoherent, unintelligible and entirely unhelpful in navigating the complexities of our technological age.”

The gist of the New Scientist article is that we fail to distinguish between various levels of technology.
Level 1 is simply functional, level 2 is part of a network with increasing complexity, and level 3 is a highly complex system with adaptive subsystems and human interaction, which we cannot fully understand. Level 3 systems are beyond our cognitive abilities.

The problem is that we tend to focus on levels 1 and 2 because we can understand and assess them, and manage their complexity. It's our expectation that all technology be like this and that we remain in control, except that in reality, at level 3, we are not.

“Level 3 systems whose implications you cannot fathom.”

“We are not the ‘knowledge society’; that's Level 1. We are in fact an ignorance society, continually creating more and more ignorance as we busily expand the complexity of the anthropogenic Earth. But our ignorance is not a 'problem' with a 'solution': it is inherent in the techno-human condition.”

“The question now is how to enable rational and ethical behaviour in a world too complex for applied rationality, how to make our ignorance an opportunity for continual learning and adjustment.
This necessary evolution does not demand radical changes in human behaviour and institutions, but the opposite: a courageous realisation that the condition we are always trying to escape - of ignorance and disagreement about the consequences of our actions - is in fact the source of the imagination and agility necessary to act wisely in the Level 3 world.”

Take care not to interpret the final quote out of context;

“… that to participate ethically, rationally and responsibly in the world we are creating together, we must accept fundamental cognitive dissonance as integral to the techno-human condition. What we believe most deeply, we must distrust most strongly.”

IMHO this is not the distrust of technology / automation, it’s about how we should trust/distrust what we feel about it, how technology can be used, and what can be expected with human interaction. We need to be a learning society, except in this instance there is a limit to our understanding, and we need “agility necessary to act wisely in the Level 3 world”.

We have to accept that we may never understand aspects of ‘level 3’; complex technical systems in a vast operational environment, with human interaction, such as AF 447.
safetypee is offline  
Old 7th Jul 2011, 07:40
  #175 (permalink)  
Originally Posted by sp
BOAC has a problem
- phew - someone to talk to......

Yes, in essence a good summary, but it is missing 'level 4'. Our acquiescing to 'level 3' leads to the age-old question of who 'supervises the supervisors', does it not? We 'learn' to live with a complex system we do not really understand - where are the 'long-stops' on this? Particularly in aviation, we surely need to ensure that this complex and almost unfathomable sequence of bits and bytes and failure modes etc etc is 'fit for purpose', at least for the time being, until we have truly automated systems.

That takes us to level 4, where AI rules. Therein is a dark pit. Let's hope beta testing of level 4 is VERY thorough.

The article says - far more eloquently than I can -
"how to make our ignorance an opportunity for continual learning and adjustment.
This necessary evolution does not demand radical changes in human behaviour and institutions, but the opposite: a courageous realisation that the condition we are always trying to escape - of ignorance and disagreement about the consequences of our actions - is in fact the source of the imagination and agility necessary to act wisely in the Level 3 world.”

which in my crude way was a call for a major review of the way we teach it -
a courageous realisation.
BOAC is offline  
Old 7th Jul 2011, 18:11
  #176 (permalink)  
 
when the problem is a 'mess' that individual is usually a part of the problem

BOAC - level 4? I don't think so.
I interpreted level 3 as including the deeper ‘bits and bytes’.
I abbreviated the quote “Level 3 systems whose implications you cannot fathom”, which continues … “With input from tablet computers, cameraphones and walls of dancing video, and with much of your memory outsourced to Google and your social relations to Facebook, you now embody the accelerating charge of the Five Horsemen of converging technology - nanotechnology, biotechnology, robotics, information and communication technology, and applied cognitive science – whose cumulative potency will transform the human-Earth system in ways that are impossible to predict.”

Re …. who 'supervises the supervisors' - again, the article implies that this is up to us; apart from a deity, there is no one else. We, humans, have created this ‘mess’ and thus, with the necessary courageous realisation, have to ‘self-police’ the situation.

A later quote explains this in part –

We have to become a lot smarter in moving ourselves and our institutions of learning and innovation, of political and economic decision-making, out of their Level I playrooms. This transition will require us to increase the diversity of world views involved in creating and assessing our technological activities. It asks us to create more richly imagined futures, seeded with more potential choices, so that we have improved opportunities to learn from and respond to the choices we are making.

I am not sure what aviation might pick out of that, but IMHO part of the realisation must include ‘transition’ - that the industry is changing - and ‘learning’: if not from the very rare level 3 accidents, then from everyday behaviour - how humans successfully manage these complex technological systems, in complex operational environments, with normal human interaction.
Aviation, with modern aircraft, is a very safe form of transport.


… we have created a ‘mess’. Perhaps the following is an appropriate summary:-
A difference between a difficulty (level 1 & 2) and a mess (level 3) is that when the problem is a difficulty, an individual claiming to have the solution is an asset, but when the problem is a mess that individual is usually a large part of the problem!
Paraphrased from Systems Failure, J Chapman.

safetypee is offline  
Old 7th Jul 2011, 21:56
  #177 (permalink)  
 
PBL

Have just finished working through this thread and, as an engineer, found your posts most interesting, though you obviously don't suffer fools gladly. A certain arrogance is normal in some professions and I don't have any problem with that, nor am I offended, so let's not have this degenerate into ad hominem territory. Please, let's have more like #127.

Even if the systems have a failure rate of only 1 in 10e5 hours, they are arguably more consistent than the human souls that drive them. The problem is that the system, collectively, does not degrade anything like gracefully enough at the extreme ends of its capability in terms of flight control. This is not the same thing as system failure due to, e.g., a software bug. It is a limit in the capabilities of the system as designed. Despite the complexity, there seems to be no overall coordinating intelligence providing a big-picture monitoring view at all times. They say a picture is worth a thousand words. Why? Because a picture is effectively parallel-processed by the brain, while reading text or scanning instruments is serial and it takes much longer to assimilate the meaning. Trying to diagnose problems by wading through pages of error messages, and / or getting out handbooks, finding the right page, ad nauseam, takes far too much time in an emergency. There just has to be a better way. In some ways, modern a/c are quite primitive, despite all the complexity and shiny paintwork.

There should be more than enough data running around the system as a whole to enable a monitoring processor / subsystem to spot trends and provide a continuous assessment of the current state and developing situations. If the crew choose to ignore this, that's another issue, but system failure in such a way as to produce ambiguous information does nothing to inspire confidence in those systems and is arguably more dangerous than having no data at all.
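As a toy example of the sort of consolidated cross-check I mean - invented signal names and thresholds, nothing to do with any real architecture:

```python
# Toy sketch of a 'big picture' monitor: cross-check redundant sources and
# publish one consolidated value plus a plain-language status, rather than
# handing the crew a page of individual fault messages. Thresholds invented.

from statistics import median

def consolidated_airspeed(sources_kt, max_spread_kt=10.0):
    """Vote three (or more) airspeed sources into a single value and a status line."""
    best = median(sources_kt)
    spread = max(sources_kt) - min(sources_kt)
    if spread <= max_spread_kt:
        return best, "AIRSPEED AGREE"
    # Name the outlier instead of leaving the crew to referee three disagreeing numbers.
    outlier = max(sources_kt, key=lambda v: abs(v - best))
    return best, f"AIRSPEED DISAGREE: suspect source reads {outlier:.0f} kt, using {best:.0f} kt"

print(consolidated_airspeed([252, 250, 251]))   # (251, 'AIRSPEED AGREE')
print(consolidated_airspeed([252, 180, 251]))   # flags the 180 kt source as the suspect
```

Trivial, of course, and real voting logic is anything but - the point is only that the machine can present one assessment instead of raw disagreement.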

More R&D, fresh thinking and intelligence are needed. Perhaps a second revolution, as the original Airbus FBW concept was in its time...

Regards,

Chris
syseng68k is offline  
Old 8th Jul 2011, 02:35
  #178 (permalink)  
 
There have been some interesting posts made here while I have been off rattling some cages in another thread.

To understand risk, use your imagination
Imagination is just as fallible as thinking, however. It is no panacea.

This transition will require us to increase the diversity of world views involved in creating and assessing our technological activities.
This made me laugh. How does the mere increase in diversity increase safety? Not all ideas are created equal, and not all probabilities are equally likely. Diversity is unhelpful if it leads us down blind alleys and over steep cliffs. Shuttling the onus from thinking to imagination to values, all the while looking for the silver bullet that will solve the problem of human fallibility, is nothing but an academic shell game unworthy of honorable men.

My honest disagreement with PBL and others stems from my firm conviction that there are some events we can never predict and thus there is no rational way to quantify them; when we hear phrases like "the odds of that are 1 in 10e5 hours" we need to treat this as a best guess and not anything certain.

The follow-up point is that generally human beings do a bad job of estimating odds, and the more unlikely an event, the worse we are at estimating it. Quantifying events with numbers often gives us a false sense of security. Once we put a number to it we think we understand it, and thus we feel we control it. Until it all falls apart... then the statistician runs into a corner and says, "Well, don't blame me if the one-in-a-million event happened on your watch. It's not my problem you were unlucky."

Hand flying specifically, doesn't just improve one area of one's ability, it attunes the pilot to the nature of the environment he is operating in
This is true. The problem is that with airplanes the mistakes are often costly. That's the motivation behind my posts in the Ryanair thread: experience is costly. Human learning is costly. The question then becomes at what point does the cost of the experience become more than the flying public will bear and it's simply cheaper to automate the flight deck and get rid of the pilots entirely.

People like to pretend they know things when they really don't, and they like to pretend things are free when they are not. All it takes is someone to put his nose up in the air, stick a number on the problem, and put his hand in the other guy's pocket, and the rabble in the crowd will give him a cheer.

MountainBear is offline  
Old 8th Jul 2011, 13:28
  #179 (permalink)  
 
MountainBear, #178

All it takes is someone to put his nose up in the air, stick a number on the problem, and put his hand in the other guy's pocket, and the rabble in the crowd will give him a cheer.
That's the human condition in the 21st century. The obsession with putting numbers on and compartmentalising everything is a sickness of the modern age and owes everything to the age of enlightenment, when man started to discard religion and superstition in favour of science and the classification of everything. As you quite rightly say, once something has been quantified, everyone can go away and be happy in the knowledge that due diligence has been satisfied, even though no one but specialists in the field understand what the numbers actually mean. In some ways, it's all gone too far, but it's not unique to aviation.

Having said that, it's often the case that the only way to get an indication that there has been an improvement in any process is to put numbers on things via analytical methods. In aviation, as in things like the climate debate, the change may be so small that it's down in the noise and difficult to measure reliably anyway. Even so, the effort is worthwhile if progress is made. The 1 in 10e5 will be a statistical value that is based on MTBF figures (also statistical) for individual components and would be updated with data from in-service components over a multi-year timescale. Obviously, the value doesn't mean that there will be no failures until 10e5 hours. That single failure could be in the next 5 minutes, but the figures are usually very conservative and real-world kit is often far more reliable than the figures might suggest.
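To put a number on what that sort of figure does and doesn't promise - a sketch under the usual constant-failure-rate assumption, reading "1 in 10e5" as one failure per 1e5 hours on average (the exact exponent doesn't change the point):

```python
# Probability of at least one failure within t hours, assuming a constant
# failure rate (exponential model) with MTBF = 1e5 hours. Illustrative only.

import math

def p_failure(hours, mtbf_hours=1e5):
    return 1.0 - math.exp(-hours / mtbf_hours)

print(f"{p_failure(10):.4%}")       # one 10-hour flight:      about 0.01%
print(f"{p_failure(5 / 60):.6%}")   # the next 5 minutes:      about 0.00008%, small but never zero
print(f"{p_failure(60_000):.1%}")   # a 60,000-hour airframe:  about 45%
```

Which is exactly the point above: the figure promises nothing about when the single failure arrives, only how often it does on average.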

Any engineer will tell you that it's not possible to make any system 100% reliable. In many areas, it's a devil's compromise between cost, safety and performance. The graph of cost vs improved safety probably looks something like an exponential decay, in that you can get vast improvement at the start of the curve, but beyond a certain point, you could spend another 10x present cost to get any serious effect at all. I suspect we are well down that curve in terms of civil aviation and most likely need the analytical methods to detect anything.

An activity where you put several hundred people into an aluminium can, together with tens of tons of fuel, then fly it at 35k feet, will always be high risk, irrespective of how reassuring the numbers are. You are also correct in saying that learning is high cost, though excessive timidity in terms of risk taking can be a serious bar to progress. If you look at the early space program in the US in the 60's, a high degree of risk was accepted to attain great goals and was, imho, an example of the highest aspirations of mankind, even though the initial driver for it was arguably less than altruistic. Take big risks, make great progress. If they had had the health and safety culture that exists now, where nothing moves because of multi-layered a** covering, the program would never have got off the ground.

The question then becomes at what point does the cost of the experience become more than the flying public will bear and it's simply cheaper to automate the flight deck and get rid of the pilots entirely.
I don't see the connection here. It seems as though you think that the pilots are the problem, when I would suggest that the systems are nothing like smart enough in terms of the way they interface with the pilot, nor in the way that they degrade when expected to handle something outside a strictly defined set of limits. From comms / information theory, you achieve the lowest error rate when you match the transmitting and receiving ends and use a low-noise channel. Put simply, if you want to talk to humans, the onus is on the system to talk the correct language, rather than, as at present, the human being expected to adapt to the inadequacies and rigidity of the system.

Fully automated flight decks are, imho, a fantasy and will never happen until computing has at least the same reasoning and abstract problem-solving ability as a human brain, trained in an activity and augmented by years of experience. A lot of human processing is analog and driven by subconscious responses, even if it is learned. Imagine trying to model all that in a computer. Having worked in computing and electronics for a lifetime, I can tell you computers are not even close yet, thankfully...

Regards,

Chris
syseng68k is offline  
Old 8th Jul 2011, 22:30
  #180 (permalink)  
 
I don't see the connection here. It seems as though you think that the pilots are the problem,
I don't know if the pilots are the problem or not; I think it's too soon to tell. We haven't given full automation an opportunity to produce results.

when I would suggest that the systems are nothing like smart enough in terms of the way they interface with the pilot
That leads to a circular argument. They are 'nothing like smart enough' because they haven't been programmed to be. They haven't been programmed to be precisely because the pilot is there.

It's unfair to blame the machine, or the programmers behind the machine, for the inadequacy of the design document they were handed. The human/machine interface only becomes an issue when you assume that a human being must be on the flight deck. Take away that design requirement and the design of a FBW system is going to look a lot different than it does now.
MountainBear is offline  

