PPRuNe Forums - Computers need to know what they are doing
Post #95 - DozyWannabe - 9th Sep 2016, 01:46
Originally Posted by Uplinker
Forgive me: As I often find with your posts, I am never sure whether you are agreeing or disagreeing, or whether you simply enjoy countering other people's points of view? (i.e. winding us pilots up !? )
No worries - I've said it often enough, but I am honestly just trying to add to the conversation and learn stuff as I go, and that has always been the case. I know internet conversation tends to be frustratingly adversarial by default, which is something I absolutely try to avoid wherever possible. My desire to go against the grain in this way seems to throw some folks!

When you say "us pilots", it implies to me that you're drawing separation lines in anticipation of antagonism before it actually happens, which I reckon is a slightly sad sign of how these things tend to go in general (not you specifically).

Originally Posted by Uplinker
Straw man argument: ... Perhaps I could have phrased it more tidily by saying "the answer is not to develop a computer to think ahead.........."
And perhaps my response wasn't as clear as it should have been. I wasn't necessarily responding to the OP as much as re-stating a common misconception. Apologies...

Originally Posted by Uplinker
How does the fact that autothrust has been around for quite a while render my point invalid?
Again, I didn't intend that statement to be directed solely at you (sorry if it came across that way); I was making a more generalised response, following on from the "FBW/FMC intended to eventually replace pilots" canard. The only group that bothers me there is the press, which fed the "controversy" it made up out of whole cloth back in the late '80s, in turn creating a division between pilots and techies that rumbles on to this day and is massively unhelpful.

Originally Posted by Uplinker
Yes I am, and they are directly related in this crash, so how does that nullify my point?
I wasn't out to nullify your point or rebut your argument, sir - I was only trying to provide a little more background info and add a few extra things I've read to the mix.

As we know, aviation accident scenarios are usually fairly complex sequences of events involving equally complex networks of decision making, and (while not aiming this at you personally) I tend to be wary of the notion of a "main cause" in the singular. It gives rise to a tendency to focus on a few aspects (or even a single one) at the expense of properly understanding things from a holistic "systems safety" perspective, to say nothing of feeding the media's tendency to foment a 'blame game'.

In that case it would appear that Asiana's training systems, all the way back to ground school and sim training, were outright unfit for purpose in many respects. They certainly came across as one of those airlines that trained pilots to be over-reliant on aids, technology and automation (i.e. I agree with your point there), but if I recall correctly it went rather further than that. To start with, Asiana maintained a list of "difficult" airfields (of which SFO was a prominent example) and effectively forbade flight crews to land there without ILS unless there was no other option. Sim training for non-ILS approaches was always done using their home-base locale, which is relatively forgiving terrain-wise.

I guess what I'm getting at is that - as you say - whilst the last hole in the Swiss cheese was a failure to monitor airspeed (which fell below safe margins as a result of A/THR mode confusion), compounded by the check Captain's failure to properly monitor and remedy the situation, my view is that this (along with the automation-reliance aspect) was but one part of the whole. By citing certain airfields as problematic and strongly discouraging flight crews from attempting non-ILS approaches at them, the company's attitude risked undermining flight crew self-confidence in general, even before we get to the training aspect (which further reinforced the notion that pilots should only be confident doing non-ILS approaches at certain airfields).

In HF/psychological terms that is pretty much teaching your crew that some scenarios are probably beyond their abilities before they've even tried. It's well established that the probability of a human making a mistake on a task rises dramatically as stress increases. As you said, I'm not a pilot, but many of those I've spoken to have said that checkrides tend to be pretty nerve-wracking even if you're usually confident in your abilities - that's stressor number one. Our newly-minted Asiana Captain was rostered to SFO (which the company considered challenging) to start with - stressor two; the check Captain was apparently of the quiet, "hands-off" tendency - number three; then on finals, SFO Approach informed him that the ILS was inoperative - and that's four. Minutes away from scheduled arrival time, the unfortunate guy had every reason to feel he'd drawn every possible short straw - as such, his stress level (and with it the odds of his making a mistake) was already drastically higher than it should have been.

[To Uplinker: I've gone into the above tangent not to refute your point or be contrary in general - you're absolutely correct that Asiana's company policy at the time was rather automation-centric - I just wanted to explain my view (for anyone who may be reading) that this particular accident had causal roots in several other aspects as well. If a person is subject to an implicit (and oft-reinforced) notion that a certain task is beyond them, and is then expected to perform that task under already high-stress conditions, it risks becoming a self-fulfilling prophecy - and the profession, organisation, ethnicity etc. involved is immaterial.]

Originally Posted by Uplinker
Don't be too overawed by the FE's panels of yesteryear. I have a background and previous life in electronics, so it is easy for me to see; but each part was quite simple, there were just a lot of parts!
Sure - and thanks for the "mixing desk" analogy - I know what you're saying: each part was, in and of itself, relatively simple. What I was getting at was a scenario of multiple and/or cascading failures, with the tools for diagnosis and remedy being literally hundreds of gauges and switches on the FE panel, plus a ceiling full of hundreds more CBs, each linked to an individual system. While the tech was relatively simple in an individual sense, the possible combinations and permutations had to have been (particularly in a high-stress scenario) at or near the limit of the Mk.1 human brain.
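
Purely as an illustrative aside (my own toy numbers, not anything from Uplinker's post): even if you model each CB as a simple in/out binary and ignore the gauges and multi-position switches entirely, the state space explodes. A quick sketch in C:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Toy model: each circuit breaker is a simple in/out binary,
           so n breakers give 2^n possible panel states. Gauges and
           multi-position switches only make this bigger. */
        for (int n = 25; n <= 100; n += 25) {
            printf("%3d breakers -> ~%.2e possible panel states\n",
                   n, pow(2.0, n));
        }
        return 0;
    }

Even at the low end of that range the number is far beyond exhaustive human reasoning, which is why diagnosis had to lean on training, checklists and pattern recognition.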

So - to reiterate - I wasn't trying to nullify your points (promise!); I was trying to add a bit of extra background info and put forward some points of my own that some readers might find interesting. I'm keeping my own counsel as to whether the OP may or may not have been a deliberate wind-up attempt (though my responses gave them the benefit of the doubt) - but I promise you that I'm not doing that myself, and never have.

Originally Posted by Goldenrivett
The crew needed more than half aileron to control the roll - but the computer logic denied it.
That's one viewpoint (and arguably a fair one) - it's just that what we're talking about here is another "edge case" (one in which the scenario fell outside the design parameters). That's not a computer-specific thing - it applies to every engineering-related discipline, going all the way back to the rods, cables and counterweights of aviation's first few decades. If the logic involved could have been improved as a result of discovering that edge case, then it probably was (one of the benefits of digital flight controls is that applying a design/implementation fix to the entire fleet is relatively straightforward). Also, the inherent complexity of "weight-on-wheels" logic and how it applies to flight controls has been a perennial headache for engineers since long before the digital age!

In that scenario an "override" of the kind available on the T7 would not have helped because the timescale involved was far too short for the crew to have engaged it, let alone taken advantage of it. In my view (with which you're welcome to disagree), to say the logic "denied" the crew is an exaggeration. It gave the crew the maximum amount of right aileron that the design parameters considered safe - and to be fair, whilst the ground contact was certainly a bit of a "brown trousers" moment, the logic nevertheless gave the crew enough control authority to prevent things from getting worse.

Consider this: in that one particular scenario, the aspect of the design that limits aileron travel in "ground mode" might have contributed to the wing-fence "scrape". The engineers (pilot, aero, mechanical or software) who designed that system had to take tens, if not hundreds, of scenarios into account and come up with the best possible compromise in terms of addressing them all as safely as possible. For example, consider a scenario (one of many alternatives) in which the same inputs were applied with aileron travel not limited, resulting in an overcontrolled roll to the right and a probably fatal crash. Then consider that for every second you "rewind" from that wingtip scrape, you're adding several more scenarios that must be addressed. Engineering is about compromise above all, and it ain't easy.
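
To make the shape of that compromise concrete, here's a deliberately simplified C sketch of ground-mode authority limiting. The mode logic, the 50% cap and every name in it are invented for illustration - this is emphatically not the actual Airbus (or anyone else's) control law:

    #include <stdio.h>

    typedef enum { MODE_FLIGHT, MODE_GROUND } fctl_mode_t;

    /* Real weight-on-wheels logic is far hairier (debounce, radio
       altitude, wheel speed, multi-channel voting...) - a squat
       switch per main gear is enough for a sketch. */
    static fctl_mode_t select_mode(int left_wow, int right_wow)
    {
        return (left_wow && right_wow) ? MODE_GROUND : MODE_FLIGHT;
    }

    /* pilot_demand in [-1.0, 1.0]; authority is capped on the ground. */
    static double aileron_command(double pilot_demand, fctl_mode_t mode)
    {
        const double limit = (mode == MODE_GROUND) ? 0.5 : 1.0;
        if (pilot_demand >  limit) return  limit;
        if (pilot_demand < -limit) return -limit;
        return pilot_demand;
    }

    int main(void)
    {
        /* Full right-roll demand with the mains still on the runway
           yields half travel - the "needed more than half aileron
           but the logic denied it" situation quoted above. */
        printf("ground: %.2f\n", aileron_command(1.0, select_mode(1, 1)));
        printf("flight: %.2f\n", aileron_command(1.0, select_mode(0, 0)));
        return 0;
    }

Pick the cap too low and you get the scrape scenario; remove it altogether and you invite the overcontrolled roll - and the real engineers had to make that call across all the scenarios at once, not just this one.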

I reckon it's worth bearing in mind that when it comes to flight controls, engineers have had to design in myriad ways of controlling and limiting input and response to help the pilots keep their craft pointed in the right direction - from mechanical baulks and counterweights through electro-hydraulic systems to today's digital technology; all of which involved compromise.

Originally Posted by em3ry
And as I've said before this thread is not about replacing the pilot with a computer. It's about how to make the computer smarter.
If that's the case (and you're actually on the level, which I'm beginning to doubt if I'm honest), then:
  1. Why illustrate your point with a Google patent clearly related to their "self-driving" car project?
  2. Why are you seemingly ignoring my posts explaining why the problem of autonomous airliners is at least several orders of magnitude more complex?
  3. Why have you not listed (and this is the third time of asking) those aviation accidents that you think could have been avoided with "smarter" computers?
If you are just fishing for responses and having a giggle at our expense, then please be aware that, at least in my case, looking up information to answer these kinds of questions is something I happily do for its own sake - as such, I never consider it a waste of time and effort on my part.

On the other hand, and to give you the benefit of the doubt one last time for now...

As a software engineer myself (and a dyed-in-the-wool techie since not long after I was out of nappies [aka diapers]), here's the thing: "smarter" is very much a subjective term. I've stated many times that the kind of computer technology used in aviation (as with any safety-critical real-time embedded use) is always hardware that would be considered obsolete in any other field. Case in point: the ELAC and SEC units fitted to every A320 that has rolled off the production line from 1988 to the present day are based around the Motorola 68000 and Intel 80186. Both designs were already years old when the A320 went into service (the 68000 dates from 1979, the 80186 from 1982), both are effectively 16-bit, and both were designed to run at clock speeds not much greater than 10MHz. The 68k found its way into a lot of homes via the Atari ST, CBM Amiga, original Apple Macintosh and Sega MegaDrive/Genesis in the late '80s/early '90s - and yet...

When combined in the A320 (two of each type plus a duplicated FAC), the system overall is capable of running tens of logical finite-state machines per unit, all of which are capable of self-checking and cross-checking each other in real-time. The same (arguably) "ancient" devices are also capable of assessing the crew's control inputs, calculating a certain amount of "look-ahead" in terms of the aircraft's trajectory and power settings (exactly the kind of 'simulation' you seem to be getting at) and providing the best combination of control surface and thrust response possible - all (again) in real-time.
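
For anyone curious what "self-checking and cross-checking in real-time" can look like in miniature, here's a toy command/monitor pair in C. The structure, names and tolerance are all my own invention for illustration - the real ELAC/SEC internals are proprietary and vastly more involved:

    #include <stdio.h>

    #define TOLERANCE 0.01  /* arbitrary disagreement threshold */

    /* Two independently written routines compute the same output;
       in the real systems they would come from separate teams using
       separate tools, precisely so a common bug is unlikely. */
    static double command_channel(double input) { return 0.8 * input; }
    static double monitor_channel(double input) { return input * 4.0 / 5.0; }

    /* Compare the channels every cycle; on disagreement the unit
       declares itself failed rather than send a possibly bad command. */
    static int channel_ok(double input, double *out)
    {
        double a = command_channel(input);
        double b = monitor_channel(input);
        double diff = (a > b) ? a - b : b - a;
        if (diff > TOLERANCE)
            return 0;   /* failed: the next unit in line takes over */
        *out = a;
        return 1;
    }

    int main(void)
    {
        double cmd;
        if (channel_ok(0.5, &cmd))
            printf("channel healthy, command = %.2f\n", cmd);
        else
            printf("channel failed, standby unit takes over\n");
        return 0;
    }

Scale that pattern up to multiple independently developed units watching one another and you get a system that is "smart" in the dependability sense, without any single part being clever.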

In other words, I'd argue that whilst the underlying tech is obsolete and each individual software component is kept deliberately simple in order to enable thorough testing, in concert the system is "smart" enough to give the crew what they're asking for, and - on rare occasions - also capable of helping them avoid or get out of trouble (to a certain extent) by keeping the aircraft within the safe flight envelope.

That said, I ask one last time - what do you mean by "smarter", and which accidents would your notion of "smarter" have avoided?
