PPRuNe Forums - View Single Post - AF 447 report out
11th Jul 2012, 01:52
#309
soylentgreen
 
Join Date: Jul 2012
Location: California
Posts: 6
Originally Posted by clandestino
Originally Posted by soylentgreen
If you don't have doctoral-level statistics knowledge and don't understand this, I'd be happy to explain in more detail?
Nice example of paternalistic appeal to authority. I'd be happy to see more details.

Well, on the one hand, that's not precisely an appeal to authority, since I was prepared to tell you why I disagreed (rather than just saying "Trust me, I'm a doctor"). But on re-reading my words, they did come across as snarky and like a cheap shot, so I apologize.


Originally Posted by clandestino
Originally Posted by soylentgreen
Care to explain your "Bad Science" comment?
Sure. You have proposed a method whereby 100 x 3-man crews would be put in a multi-day full-experience simulator. At some point, each crew would get, on 1% of their flights, an AF447-type scenario. No warning, it just happens. From this you would see what percentage "flubs" the scenario, and determine from that whether the pilots or the machine was the root cause of the AF447 demise. So far so good; the methodology seems sound, the logic too. So it could be scientific. Why is it bad?

Because it is based on a monumental misunderstanding of aviation and the human factors in it. A logically correct conclusion from false premises is still false!
To be fair, I did not mean to suggest this was the perfect experiment that would end all debate. I said "it would be fun" and "The outcome would be quite interesting."

Obviously, simulator-based research has a number of flaws, the biggest of which (the one you mention) is called Ecological Validity. Still, I think it could be argued that simulator research provides a lower bound on the crash-rate estimate: the pilots may suspect this is not a routine flight, and, knowing they aren't actually going to die, may be less likely to panic, etc.

Again, I'm not familiar with this research, so I'm just speculating.


Originally Posted by clandestino
Originally Posted by soylentgreen
Or maybe in either case, we need to consider the human-machine interface as the thing that must change?
So whatever we conclude from the study, the premise that the human-machine interface is inadequate must be confirmed. Why make the study at all, if the initial bias is confirmed no matter the outcome?
Not at all. Some things (bad weather in the ITCZ, human frailty) we can't change. Some things we can: training and machines. I'm simply saying that we should think of the big picture here, and improve the things that we do control.

Originally Posted by clandestino
Originally Posted by soylentgreen
Get 100 x 3-man crews, and put them in a multi-day full-experience simulator.
If you crash in the simulator, you can restart. The nonavailability of this feature in real life is a very important factor when the proverbial hits the fan, and it greatly increases the chance of inadequate response and panic.
Agreed -- see above.


Originally Posted by clandestino
Originally Posted by soylentgreen
Full-time AoA sensors.
While they are demanded by certification standards, Mother Nature has shown total indifference to the righteous demands of an outraged public that the letter of the certification laws be followed. There is no AoA probe that will work well and reliably at both 1 kt and Mach 0.82. Make fame and fortune by inventing one.
Not me, but perhaps someone else? This video, "Google's next driverless car goal? 1,000,000 miles", is interesting and relevant: Google claims to have driven 160,000 miles in their robot car with only one fender-bender (which they claim was human error).

How many accident-free miles do we need Google's robot cars to log before we trust them?

Seems like this technology (using LIDAR, GPS, and a bunch of other data) could be useful as a "third eye" autopilot that normally sits in the back and stays quiet, but occasionally speaks up: "Hey guys, uh, I know you are the bosses here, but we seem to be falling towards the sea in a strange attitude. I can show you on this 3D display just what I'm seeing..."
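
To make the idea concrete, here is a minimal sketch in Python. Everything in it (the names, the thresholds, the sensor mix) is invented purely for illustration; a real monitor would need certified independent sensors and far more careful logic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AircraftState:
    altitude_ft: float          # e.g. from a GPS/baro blend
    vertical_speed_fpm: float   # inertial, independent of the pitot-static system
    pitch_deg: float            # inertial attitude

def advisory(state: AircraftState) -> Optional[str]:
    # A rapid descent combined with a nose-up attitude is the signature
    # of a stalled descent, which is what AF447 was in.
    if state.vertical_speed_fpm < -8000 and state.pitch_deg > 10:
        return ("Advisory: descending at %.0f fpm with %.0f deg nose-up pitch; "
                "possible stall." % (-state.vertical_speed_fpm, state.pitch_deg))
    return None  # all quiet: say nothing

# An AF447-like snapshot: roughly 10,000 fpm down, nose held above the horizon.
print(advisory(AircraftState(altitude_ft=20000, vertical_speed_fpm=-10000, pitch_deg=15)))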


Originally Posted by clandestino
Originally Posted by soylentgreen
From this study, we calculate the ultimate data point: what % of crews survive. And perhaps more interestingly: what % of crews survive for the right reasons.
So the wealth of data available to make good case studies out of a few occurrences is just thrown away, to draw one or two conclusions from very-small-sample statistics? I'm glad the BEA took a different path.
I'm not sure I understand you here, but I'll try to make my points again.

The "naturalistic" study shows that 1 of 37 crews in similar situations crashed. As I mentioned, that's such a small sample size that we can't say whether the actual percentage is closer to 0% or closer to 10%.

I hope that if the actual number is 10%, you'd agree with me that something is wrong, either with pilot training, or with the human-computer interface, yes?

My proposed simulator study would increase the N to, say, 100.

The statistics show that if 1 of 100 crashed, the 95% confidence interval narrows to roughly 0.03% to 5.5%.

If we ran 1,000 crews through the simulator and 10 crashed (the same 1% observed rate), the interval shrinks further, to roughly 0.5% to 1.8%.
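
(For anyone who wants to check those figures, here is a minimal sketch in Python using the exact Clopper-Pearson interval; other interval methods, e.g. Wilson, give slightly different but similar numbers.)

from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    # Exact binomial confidence interval for k "crashes" in n trials.
    alpha = 1.0 - conf
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for k, n in [(1, 37), (1, 100), (10, 1000)]:
    lo, hi = clopper_pearson(k, n)
    print("%d/%d: 95%% CI = %.2f%% to %.2f%%" % (k, n, lo * 100, hi * 100))
# Prints roughly: 0.07% to 14.2%, 0.03% to 5.4%, and 0.48% to 1.84%.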

Then a naive analysis could be made (*with a ton of assumptions, which we shall ignore*):

Let p1 be the % of flights on which the pitot tubes freeze and the autopilot drops out.
Let p2 be the % of those autopilot-dropout events in which the crew crashes.

Then the overall risk per flight = p1 * p2.

To me (again, not a pilot, but interested in cognitive psychology and statistics) both of these numbers are relevant here.

From what I know, the airline industry likes catastrophic risks to be in the one-per-million range or lower, and if there's a suggestion that p2 is anywhere near 1% or 3%, or even 10%, yikes!
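
To make that arithmetic concrete, a tiny example with purely made-up numbers (neither figure is an estimate of anything real):

# Both figures below are invented purely for illustration.
p1 = 1e-5   # say 1 in 100,000 flights suffers pitot icing + autopilot dropout
p2 = 0.03   # say 3% of those crews then crash (mid-range of the intervals above)
print(p1 * p2)   # ~3e-07: about 0.3 catastrophic events per million flights

With p2 = 10% instead, the product is 1e-06, right at that rough one-per-million threshold. In other words, even a very rare trigger does not make a percent-level p2 comfortable.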