
Could data mining help with the automation vs. hand flying debate?


Zionstrat2
27th Nov 2013, 18:45
Although I am an aviation fanatic with GA experience, I am a technology strategist by trade and rarely post on aviation forums. However, considering the wide variety of 'lack of hands-on experience' vs. 'not using available data and automation' incidents (AF447, OZ214, Dreamlifter), I'm wondering whether this would be a good time to consider database systems that learn and advise in unusual situations.

I'm aware of point specific systems like TCAS and GPWS that address very specific problems where unsafe parameters were defined well in advance. And AB is often criticized when automation takes control.

But has anyone considered a more generalized database approach that simply provides observations/recommendations to assist in unusual situations?
Big data makes a big difference in other industries. For example, retailers use data-mining algorithms to sift through huge volumes of seemingly unrelated data looking for potential cause and effect. Significant relationships that were entirely unknown are often discovered (the apocryphal example is the relationship between beer sales and disposable diapers in convenience stores).

The idea would be to collect all flight data and outcomes over a long period and let the software look for potential relationships and make future recommendations based on those relationships-

Of course the data could be wrong, causality could be off, and there are many reasons why this wouldn't work all the time. Data mining is never 100% effective, but it provides an additional input that isn't otherwise obvious to humans who have limited bandwidth.
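
By way of illustration only, a minimal sketch in Python of the pairwise co-occurrence test behind the beer-and-diapers story; the 'transactions' and the lift cut-off below are invented, and in the flight-data case the 'items' would be discretised flight parameters and outcomes rather than shopping baskets.

from collections import Counter
from itertools import combinations

# Made-up shopping baskets, purely to show the mechanics.
transactions = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"bread", "milk"},
    {"diapers", "beer", "bread"},
    {"milk", "beer"},
]

n = len(transactions)
item_count = Counter(item for t in transactions for item in t)
pair_count = Counter(frozenset(p) for t in transactions
                     for p in combinations(sorted(t), 2))

# Lift > 1 means a pair occurs together more often than independence predicts.
for pair, c in pair_count.items():
    a, b = tuple(pair)
    lift = (c / n) / ((item_count[a] / n) * (item_count[b] / n))
    if lift > 1.2:  # arbitrary cut-off for the sketch
        print(f"{a} + {b}: support={c / n:.2f}, lift={lift:.2f}")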

However, with enough data, it seems likely that an independent system could easily announce (a rough sketch of this kind of advisory logic follows the list below):
• This seems to be an approach to a stall- Consider these remedies...
• You appear to be in a full stall...
• You appear to be well below the normal glide slope...
• You appear to be on final for AAO however the flight plan lists IAB as the final destination...
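
Purely as a sketch of the sort of advisory logic I mean (the thresholds, field names and example values below are invented; in practice they would come from the mined 'normal' profiles rather than being hard-coded):

from dataclasses import dataclass

@dataclass
class FlightState:
    angle_of_attack_deg: float
    stall_aoa_deg: float
    glideslope_deviation_dots: float  # negative = below the beam
    tuned_runway: str
    planned_runway: str

def advisories(s: FlightState) -> list[str]:
    msgs = []
    if s.angle_of_attack_deg > s.stall_aoa_deg:
        msgs.append("You appear to be in a full stall.")
    elif s.angle_of_attack_deg > 0.9 * s.stall_aoa_deg:
        msgs.append("This seems to be an approach to a stall - consider these remedies...")
    if s.glideslope_deviation_dots < -1.5:
        msgs.append("You appear to be well below the normal glide slope.")
    if s.tuned_runway != s.planned_runway:
        msgs.append(f"You appear to be set up for {s.tuned_runway}, "
                    f"but the flight plan lists {s.planned_runway}.")
    return msgs

print(advisories(FlightState(11.0, 12.0, -2.0, "AAO", "IAB")))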

I imagine pilots might not appreciate another potentially conflicting data source and the 'big brother' aspect. However, think about this: automation is likely to win in the long run for many reasons, and it is very likely that similar systems would be integrated as automation increases.

So why not get ahead of the game and see if this would add value now? In the test phase, system recommendations could be withheld (to avoid conflicts with current procedures), and it would be relatively easy to monitor the recommendations that would have been made until those recommendations are shown to correlate strongly with outcomes.
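
A toy illustration of that 'silent trial' bookkeeping, with made-up records: log what the system would have said, then score how often the withheld advisories lined up with the actual outcomes.

# Each record: did the (withheld) advisory fire, and did an undesired outcome follow?
records = [
    {"advisory_fired": True,  "undesired_outcome": True},
    {"advisory_fired": True,  "undesired_outcome": False},
    {"advisory_fired": False, "undesired_outcome": False},
    {"advisory_fired": False, "undesired_outcome": True},
]

tp = sum(r["advisory_fired"] and r["undesired_outcome"] for r in records)
fp = sum(r["advisory_fired"] and not r["undesired_outcome"] for r in records)
fn = sum(not r["advisory_fired"] and r["undesired_outcome"] for r in records)

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")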

I'm sure some will say that this is just another prop to hold up the undertrained and that airmanship would suffer. But unlike GPS, ILS, or even the introduction of VORs, we're not talking about a system that simplifies the job or reduces the workload; the pilot still has to do everything as usual.
The system I am describing would be 'learning' all the time, but would only come into play on the rare occasions when it appears likely that something is significantly wrong, and the pilot still has the last word.

This may be entirely unrealistic, but it will be interesting to see if anyone is already thinking in this direction.

swh
27th Nov 2013, 21:44
The ATSB recently published a report on 250 stall-related events it has investigated over the past five years; it would seem that those aircraft with "speed protection" were less prone to stall events.

News: Stall warning events (http://www.atsb.gov.au/newsroom/news-items/2013/stall-warning-events.aspx)

" As a rate per hours flown, stall warnings were more common in Dash 8, Boeing 767, Boeing 717 and Fokker F100 aircraft, although for the F100, almost all reports were for the aircraft’s stall warning systems activating spuriously. Stall warnings were found to occur in all flight phases and a range of aircraft configurations, not exclusively those related to slow speed, high pitch attitude flight, or flight in poor meteorological conditions."

underfire
27th Nov 2013, 22:05
I think there is value in collecting the data, just not sure how much would actually be available, or what the direct/indirect correlation potential is.
Prying this sort of data out of the hands of a provider, airline, or manufacturer would require a bit of skill in our litigious environment, not to mention the pilots' unions!

Collecting the data would be the first step, to see if any issues/correlations fall out.

I agree that the automation will prevail, and the data gathering could be used for pilot training, especially in the sim. The sims need to be far more realistic in their scenarios and responses.

Real time, well...

As an example, your note on being too low on the glideslope would require a whole bunch of real-time in/out data correlated with the ground, the landing system at the airport, ATC instructions, and ...

KBPsen
27th Nov 2013, 22:46
First of all, what is a "technology strategist"?

However, with enough data, it seems likely that an independent system could easily announce:
(1)• This seems to be an approach to a stall- Consider these remedies...
(2)• You appear to be in a full stall...
(3)• You appear to be well below the normal glide slope...
(4)• You appear to be on final for AAO however the flight plan lists IAB as the final destination...

1. Already exists and has for many years.
2. See 1.
3. Already exists and has for many years.
4. Already exists and has for many years.

Perhaps technology strategists don't know much about the problems they are suggesting solutions to?

cattletruck
27th Nov 2013, 23:07
Lost in pea-soup murk, you type "clear air" into the DMP (Data Mining for Pilots) system and the answer comes back:

Notice Level: Alarm
Safety Level: Danger, personal injury risk.
Message: Pilots do not pick their nose in turbulence.

:}

underfire
27th Nov 2013, 23:26
Zion, you wouldn't be working on the GE Flight Quest, would you?

alf5071h
28th Nov 2013, 01:11
Zionstrat2, interesting thoughts – green field thinking.
Unfortunately, most commercial aircraft already have the alerting and warning systems you suggest: stick shaker for approaching the stall (advanced with manoeuvre), stall warning for an imminent or actual stall, glideslope alerting from the instrument display, and EGPWS.
The underlying problem appears to be with the human: a lack of awareness or comprehension of these alerts and, most importantly, a weak general awareness when approaching these conditions.

Data can be useful, but it requires interpretation in context, which in this instance is the situational context of the pilot – what is seen, understood, planned, knowledge recall/use, etc. These cannot be recorded as data, and even if observed by another pilot, the view could be biased – ‘I wouldn’t do it that way”, but neither view would be incorrect or known to the other pilot; human factors – cognition.

Various reports and studies cite a range of issues, which appear to reflect the problems of an increasingly complex operating environment where the interactions are tightly wound up; ‘close coupled’ in systems theory. There are indications of this in recent FAA and BEA reports, provided that the issues are considered as a whole. A major safety problem is that we humans - the industry - like to simplify issues and deal with them separately, and are also inadvertently biased towards fixing the human (blame and train). However, with a wider view it may be possible to identify changes in task expectation, complex airspace requirements, reduced training time, the expectation that the automation will manage everything, etc.; but we should not overlook the important aspect of how all of these factors interact and together affect the human.

At times, the industry overly focusses on the human, whereas wider consideration of the operational system may provide a more balanced view; also, solutions appear to be proposed in areas which suggest that we don't fully understand the problem.
A complex situation rarely has a simple solution.

framer
28th Nov 2013, 05:15
I like that you're thinking and coming up with new ideas, Zion, so I don't want to sound like I am knocking the idea... but if you go to any airline with a great safety record and ask some of their pilots what they need to be safer pilots, they will give you good answers that would improve the industry's safety stats within six months if implemented, and none of them will ask for more information to be aimed at their senses in high-workload situations. In fact, some will probably ask for less. Competent professional pilots already know what they need to maintain their own standards and also what would improve them.
So why is this not happening? Because their answers cost the airline they work for money. It's that simple. There is nothing magical about improving flight safety. Imagine this for a second:
'Perfect World Airlines' doesn't care about how much money they make; they are solely focused on maintaining their impeccable safety record, and as such every pilot completes one simulator session, then one line flight, another simulator session, then another line flight, and so on for their entire career. All sim sessions are non-jeopardy. They would be one sharp bunch of pilots who knew their aircraft inside and out and could fly the pants off the thing under all but the most extreme circumstances.
Now that's a completely ridiculous fantasy, but by going to an extreme it becomes obvious which direction we need to move in. We need to train our pilots more. We have given them increasingly complicated aircraft with new ways of making approaches, new procedures and new technology, and have neglected the basics. Many airline pilots who used to be totally comfortable with their machines are not anymore. They have become detached from them, and the skills which used to be second nature to them are rusty. There is so much information in a modern airliner that a great skill to develop is to ignore it all and focus on the threshold, the airspeed and the rate of descent when placed in situations that don't happen every day. It's hard to do sometimes because you have been focussing elsewhere for the last 100 or so flights.
So although I like your original thinking, I feel it would be a hindrance to me if I was ignoring all the dials, flashing indicators, call-outs etc. to ensure I didn't miss the most important things, the basics.

framer
28th Nov 2013, 19:20
Zion, what about data mining and using that information as an input for 'Evidence-Based Training'? Can you see that working?
I think we need a global collaborative on EBT whereby any airline can pay their $200 annual fee and log on to download the last twelve months of type-specific incident data from which they build their next recurrent sim session; the data-based info could be one more input and wouldn't have to be type-specific.
Thoughts?

DozyWannabe
28th Nov 2013, 20:17
Hey,

However, considering the wide variety of 'lack of hands-on experience' vs. 'not using available data and automation' incidents (AF447, OZ214, Dreamlifter)...

Firstly, it could be argued that such interpretations are false dichotomies - e.g. the PF who pulled AF447 up into an unrecoverable state arguably had more general "hands-on" experience than most of his peers, but the experience he lacked was of manual handling at high altitude.

In terms of raw stats, there is no "debate" - advances in flight control technology and automation have improved flight safety. What tempers that conclusion is that those advances have changed the nature of incidents and accidents, such that when they do happen the question is frequently raised over whether a crew may have been able to better resolve the problem if they were more rehearsed in manual flying skills.

And AB is often criticized when automation takes control.

Which is in itself a fallacy. Brand A is no more involved in pushing automation than brand B or others.

Big data makes a big difference in other industries- For example, retailers use data mining algorithms to sift thru tons and tons of seemingly unrelated data looking for the potential for cause and effect.

That data is largely numerical in nature though. Data mining algorithms are becoming increasingly impressive at determining correlation in a quantitative sense, but doing so in a qualitative sense is considerably more difficult.

The idea would be to collect all flight data and outcomes over a long period and let the software look for potential relationships and make future recommendations based on those relationships

The problem with that is the requirement for natural language understanding to separate the technical issues from the operational ones. Unless the data has some kind of way to distinguish these things, there would be no way of distinguishing, say, a runway overrun caused by brake failure from one caused by a problematic approach.

Even if one brings in incident reports as an attempt to distinguish the differences, the ability to distinguish relies on the impartiality and wording of those reports.
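
To make that concrete, here is a deliberately naive sketch (the keywords and sample report lines are invented) of a text-based split between technical and operational causes; any report worded outside the keyword lists defeats it, which is exactly the problem.

TECHNICAL = {"brake", "hydraulic", "failure", "malfunction"}
OPERATIONAL = {"unstable", "approach", "late", "configuration"}

def classify(report: str) -> str:
    words = set(report.lower().split())
    t, o = len(words & TECHNICAL), len(words & OPERATIONAL)
    if t == o:
        return "ambiguous"
    return "technical" if t > o else "operational"

print(classify("overrun after brake failure on landing roll"))
print(classify("overrun following an unstable approach in gusty conditions"))
print(classify("crew elected to continue"))  # no keywords at all -> ambiguous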

This may be entirely unrealistic, but it will be interesting to see if anyone is already thinking in this direction.

The FAA are already using technology capable of performing "big data" operations, even if not in the way you're suggesting:

Customer Interview: FAA, Will Lawrence | MarkLogic (http://www.marklogic.com/resources/customer-interview-faa-will-lawrence/)

underfire
28th Nov 2013, 20:31
Dozy,

Yes, there are several FAA sponsored programs that datamine. At least one company has an automated program that uses the flight recorder data.

An event, such as a wake encounter, windshear, or clear air turbulence, can be culled from the aircraft systems response (the g-rate, roll, etc).
This shows the significance of the effect and shows the pilot reaction, if any, to the event.

This data mining was deliberately blind, so it automatically scrubbed the airline/flight number info. It was fairly extensive, something like 75K flight hours.

The results were very interesting below 10,000 feet.
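
Something along those lines, purely as an illustrative sketch (the thresholds, parameter names and sample values below are invented, not taken from the programme mentioned above):

# (time_s, vertical_g, roll_rate_deg_s) samples from a recorded flight
samples = [
    (0.0, 1.00, 0.5),
    (0.5, 1.45, 12.0),   # something happened here
    (1.0, 0.60, -9.0),
    (1.5, 1.02, 1.0),
]

G_LIMIT = 0.3          # deviation from 1 g
ROLL_RATE_LIMIT = 8.0  # deg/s

# Cull candidate events by thresholding the aircraft response.
events = [(t, g, roll) for t, g, roll in samples
          if abs(g - 1.0) > G_LIMIT or abs(roll) > ROLL_RATE_LIMIT]
print(events)  # candidates for an analyst (or a miner) to classify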

DozyWannabe
28th Nov 2013, 21:40
Hi underfire,

Yes, there are several FAA sponsored programs that datamine. At least one company has an automated program that uses the flight recorder data.

An event, such as a wake encounter, windshear, or clear air turbulence, can be culled from the aircraft systems response (the g-rate, roll, etc).
This shows the significance of the effect and shows the pilot reaction, if any, to the event.

That doesn't surprise me in the slightest. It confirms my thoughts on the ability to correlate quantitative data to a fairly useful extent. What quantitative data can't do, however, is provide a comprehensive context for those correlations - and that's the crux of Zionstrat2's initial question.

The results were very interesting below 10,000 feet.

I bet - and I'd give a lot to have access to that data! :ok:

alf5071h
28th Nov 2013, 22:58
framer, a single-focus solution is unlikely to produce the benefits required.
The vast majority of operations are safe; pilots are able to manage day-to-day situations both in auto and manual flight. This positive view suggests that their training is satisfactory for the conditions faced; thus in incidents/accidents, either the particular crews lacked training (but they were safe enough yesterday), or the situation exceeded the normal standard of training (which otherwise was satisfactory).

There is increasing support for the latter view; if the recent FAA report on automation and the BEA study on go-arounds are read as an overview (noting the dominant components and groupings), then many incidents/accidents indicate that the limits of human performance were reached or exceeded. This was due to the amalgamation of components in operational situations, which require more mental resources than might be available at that time.

In these situations the mental workload was too high, indicating a previous misjudgement of workload or of the situation – we didn’t see it coming. Part of the solution requires changing or removing the unnecessary situational components; simplifying the operational demands (ATC should not request a late change of runway). Another aspect would be to improve the human’s ability to manage workload by avoiding situations (‘unable’ to accept a late runway change). The latter depends on experience, judgement; airmanship. Thus if training is to be part of the solution it should focus on improving the skills of being a pilot; skills applicable to both auto and manual flight.

Now if data was available for the high workload situations, such that they could be identified in real time, then perhaps pilots would benefit from a workload meter or a situation alert.
Meanwhile we should help pilots to think ahead, and educate the regulators, ATC, operators, etc; all those people who can affect operating situations.

Refs
http://www.skybrary.aero/bookshelf/books/2501.pdf

http://www.bea.aero/etudes/asaga/asaga.study.pdf

Kefuddle
29th Nov 2013, 12:22
Zion,
The idea would be to collect all flight data and outcomes over a long period and let the software look for potential relationships and make future recommendations based on those relationships-
The problem with incidents/accidents is that all analysis is retrospective, and only of the incident/accident in question. The event may be determined to have been mishandled by the crew, and the associated reports can only really focus on what the crew should have done in the seconds leading up to the incident. I am much more interested in examples where incidents were averted much earlier in the chain.

So, for me the point of data mining would be to discover flight patterns (based on previous incidents or other parameters) that could have been incidents/accidents and also discover how the crew recovered. The crew may not even be aware that the situation they were in was very similar to a situation that once led to a hull loss, but on this occasion, they did something important and subtle that recovered the situation before it became event worthy.
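
As an illustrative sketch only (the feature vectors, units and threshold are invented, and in practice each feature would need scaling before a raw distance means anything), the matching could be as simple as a distance check between a flight's parameter signature and those of past incidents:

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# features: (approach speed deviation kt, sink rate at 100 ft fpm, touchdown point ft)
incident_profile = (12.0, 900.0, 2400.0)

routine_flights = {
    "FLT001": (2.0, 650.0, 1400.0),
    "FLT002": (10.0, 880.0, 2300.0),  # near-miss candidate
    "FLT003": (-1.0, 700.0, 1500.0),
}

THRESHOLD = 150.0
for flight, features in routine_flights.items():
    if distance(features, incident_profile) < THRESHOLD:
        print(f"{flight}: close to a known incident signature - worth studying the recovery")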

Naturally, this would mean recording all/most/some flights continuously, which is technically feasible with FOQA and SD cards right now. Clearly, the data would need to be de-identified and handled only by independent agencies. There would be issues with voice recording storage, anonymity and interpretation, though.

This would be a hugely positive step; we would be able to draw upon clear evidence of good practice, excellent SA and handling skills, and how they related directly to problem solving. As Tony Kern puts forward in his book, this would potentially be much more powerful training than just focusing on what went wrong.

FLEXPWR
29th Nov 2013, 13:18
At a safety meeting at one large Asian airline, where FOQA/FDM is used extensively for finger-pointing and punitive measures (grounding, monetary fines, etc.), I once asked if all this data could be used to identify who flies with the fewest deviations and the fewest exceedances, and whether we could all learn how and why some pilots (anonymously, of course) have fewer or more FOQA events.

Maybe by studying the "good" examples, we could also draw seriously constructive conclusions, instead of the slap-on-the-wrist attitude of most of the airlines in the region and elsewhere.

The answer I got was that it would require analyzing too much data, and that it's easier just to assign blame based on a FOQA trigger.

When you think they have up to 80 employees whose job is tracking all FOQA events, it would be nice to praise instead of blaming. It seems to me that the human resources should be available anyway.

As a comparative note, slightly drifting from the original post, our modern societies usually look at what makes a success rather than a failure in other sectors: we look at why/how Bill Gates became one of the most successful, why or how Usain Bolt is the fastest sprinter, etc. Our society rarely studies in great depth why or how the slowest sprinter failed to perform, or which startup company shut down.

In other areas of our economy, we have a tendency to study success. In FOQA, we study pilot errors. Any chance to reverse this scenario? Would it benefit in the long term?

Zionstrat2
29th Nov 2013, 18:31
Wow, thanks for all the input- A few points seem to stand out:
1. Although real time may be questionable, it sounds like collecting the data, building the 'normal' models, and looking at variations might be useful for training, safety, and design.

2. The qualitative vs. quantitative aspect is very real and is always an issue. In the diapers = beer sales example, it took exit surveys to understand the correlation: mom yelling to go get some diapers, dad going to the nearest store and grabbing the beer to calm the nerves.

But considering underfire's example ('too low on glideslope would require a whole bunch of real-time in/out data correlated with the ground, the landing system at the airport, ATC instructions'), with enough data collected and crunched ahead of time, it seems logical that a normal profile, one that includes lots of variation, would emerge. Equipment, performance parameters, and lots of other variables would be predefined, so real-time analysis would be limited to conditions well outside of normal.
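
As a minimal sketch of what I mean by 'well outside of normal' (the historical values and the three-sigma cut-off are invented for illustration): learn the spread from the accumulated data ahead of time, then stay silent unless a live value falls far outside it.

import statistics

# Pretend these were mined from thousands of normal approaches.
historical_sink_rate_fpm = [620, 680, 700, 650, 710, 690, 640, 660, 705, 675]

mean = statistics.mean(historical_sink_rate_fpm)
sd = statistics.stdev(historical_sink_rate_fpm)

def outside_normal(value, n_sigma=3.0):
    return abs(value - mean) > n_sigma * sd

print(outside_normal(715))   # within the normal spread -> stay quiet
print(outside_normal(1100))  # well outside -> candidate for an advisory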

If the dive is to avoid a flight of geese, the pilot knows what he did and why he did it and the system could award him for quantifying unusual behavior.

Yes, the GE program looks extremely interesting, but I have nothing to do with it. I work with non-aviation technology and healthcare companies to collect user needs and the thinking of 'thought leaders' (who provide the 'out of the box' aspect), and to look for matches with short-term and long-term feature functionality. Add some risk/benefit and return-on-investment analysis, and we put together product roadmaps.

It's fun to project this thinking into aviation and it's always fun to find other like-minds out there, so thanks for the input!
Cheers,
ZS

alf5071h
1st Dec 2013, 20:07
It may help to consider the ideas in the presentations below:

http://easa.europa.eu/essi/ecast/wp-content/uploads/2012/01/Characteriz2.pdf

http://easa.europa.eu/essi/ecast/wp-content/uploads/2013/01/EOFDM_CONFERENCE_2013_-DE_JONG_Landing_Performance.pdf

Caution; the problems of runway overruns include the quality of data (real-time reporting of conditions) and the human factor in understanding the situation and the choice of action.

DonH
1st Dec 2013, 21:32
Really helpful links to these valuable presentations and papers, alf5071h, thank you. I'm data mining right now and, being a one-man band, need all the help I can get in assessing and presenting overrun risk. I've seen Pere Fabregas' (of Vueling) presentations in other links and consider them of great interest, as they are the first attempt I've seen to turn actual landing distances into expressions of threat (of runway excursion) and subsequent risk assessment. The FAA/PARC paper is of exceptional interest.

Your caution regarding the quality of data is a good one, and I would include in this the innate inability to accurately determine the touchdown point on non-GPS-equipped aircraft.

With the exception of Vertical 'g', sample rates for parameters normally used to determine touchdown are not frequent enough to accurately determine the point of touchdown. At speeds of 160 to 200fps, the best accuracy one can expect is +/- 360 to +/- 400 ft, meaning the touchdown point indicated in the data can be between 700 and 800ft "in error".
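
Working backwards from those figures, the relationship is simply error = groundspeed x timing uncertainty; a back-of-the-envelope sketch (nothing here is recorder-specific):

# (groundspeed fps, quoted +/- error ft) from the paragraph above
quoted = [(160, 360), (200, 400)]

for gs, err_ft in quoted:
    implied_window_s = err_ft / gs
    print(f"{gs} fps, +/-{err_ft} ft -> about {implied_window_s:.2f} s of timing "
          f"uncertainty, total band about {2 * err_ft} ft")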

This means that determining runway remaining will also be in error and I haven't seen a successful way of fine-tuning this fundamental issue. The old method of assuming that the airplane is "over the threshold at 50ft RA" isn't good enough.

The mathematics behind using the glideslope makes the t/d point a bit more accurate, as does determining where the glideslope antenna is located and how far the airplane travelled past it before touchdown, but it is still not as accurate as desired.

If anyone has thoughts on this basic flight data issue I welcome them. In the meantime, both touchdown distances past the threshold and runway-remaining distances and related events should, in my view, be viewed broadly rather than as statements of high accuracy in non-GPS-equipped aircraft.

I think that perhaps a more valuable number might be the flare times and distances, primarily because we know that when the thrust levers/throttles are closed, the airplane is definitely over the runway and a landing is intended. Measuring from, say, 20 ft flare height to touchdown will tell us how much runway was used in the flare and as such may be more useful, although we still will not know where on the runway the airplane touched down.

Zionstrat2
2nd Dec 2013, 14:27
alf5071h, were you part of this study? I really enjoyed the PowerPoints and wonder if the entire presentation has been recorded or transcribed? I believe I understood most of the data and conclusions, but it would be nice to fill in the gaps.

This appears rather targeted, and I'm assuming that the dataset was large; however, I'm also assuming that data mining per se wasn't used... so it's probably significantly different from what I am talking about (though your example is probably more useful and likely to add value in the long run).

The reason I say this is that glideslope, touchdown point, and flap settings are exactly the variables that we would imagine might impact overruns, and it does make sense to clean up the data, look for predictive behaviors, and use the outcome for training or designing systems.

To me, this is a lot like using Doppler radar for windshear, or stick shakers for stalls. If the problem is big enough, it's likely that out-of-normal parameters will be identified and something will be done in the long haul. However, it is very point-specific.

On the other hand, data mining provides the opportunity to find relationships that are far less intuitive. There is no way I can realistically imagine a strange enough scenario; however, for the sake of argument, let's pretend that wheels out a second later than average, plus wind gusts of x%, plus one repeated radio call in the last 90 seconds before touchdown, increases overruns by y%.

Any one of those variables is unlikely to be a problem in itself, so they would fall within the 'normal' profile until all 3 show up together.

Of course this is a bad example because unless the relationship was extreme (perhaps a 25% increase in overruns) I can't imagine wanting to communicate with a pilot at such a critical point in the flight (again assuming that the odds are highly favorable that the pilot will make a safe landing).

But what if there is a 4th, seemingly unrelated variable that pushes the overrun odds beyond 25%? And what about the thousands of other 'holes in the cheese' where we never imagined cause and effect?

Again, there probably aren't any immediate applications for what I am talking about. However, when aviation goes primarily autonomous, it seems very likely that the virtual pilot will be provided with some kind of similar feed because the virtual pilot will have the bandwidth to deal with it.

So this gets me thinking in another direction... I wonder if data mining is in use with autonomous cars that are already on the road?

Thanks again for the thought experiments!

phiggsbroadband
2nd Dec 2013, 14:58
Hi, my thoughts are it would be just one more warning horn to be dealt with, or not.

I flew a twin seat glider in some aerobatic manoeuvres, and hardly looked at the instruments. It also had no stall warner. Then when I tried the same manoeuvres in an aerobatic power plane, the stall warner was constantly sounding... Pulling into and out of a loop, doing moderately tight turns, and intermittently whilst in turbulence.
I said to the Instructor that the stall warner was making a lot of noise; he agreed and said we should just ignore it... It's just one more distraction, and one of the reasons they are not fitted to gliders.

Maybe data mining is a useful tool in situations where decisions can be made over a time span of a few weeks, but to expect pilots to make split-second decisions at the drop of the electronic hat may be just too much data at the wrong time.

alf5071h
2nd Dec 2013, 17:43
As much as there are difficulties in determining what the aircraft did/is doing, there are just as many external parameters which are equally difficult to establish. For landing: any difference between the reported runway friction and that experienced, and similarly the wind speed. Then there are problems of human interaction, ability vs achievement, or the choice of braking level or any change when on the runway, and why. Landing distances already have a 60% margin for ‘normal’ variability (depending on application and conditions), but accidents appear to indicate that the combination of variabilities is more significant; a good target for data mining.

Thus where data may help with retrospective analysis, there are considerable limitations in real time use.
I prefer a more strategic view, aiding organisational factors which can be very powerful, but this should not exclude ‘strategic’ aiding during operations. As an example the landing data could indicate the level of risk in the crew’s choice during a pre landing briefing – expected landing distance, braking action, windspeed; although these data are not known precisely, previous data mining may be able to indicate the risks from their variability, at a particular time/location – a sort of ‘have you considered this’, or ‘add some safety margin’. I can imagine a flight deck display where if ‘3 lemons’ line up the situation requires a diversion, but even then with the variability of human perception, some people’s lemons are oranges.
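
A toy rendering of the 'three lemons' idea, with invented factors and bands, just to show the shape of such an aid:

def lemons(reported_braking, tailwind_kt, landing_margin_pct):
    checks = [
        reported_braking in ("medium", "poor"),  # runway state doubtful
        tailwind_kt > 5,                         # tailwind component
        landing_margin_pct < 15,                 # thin distance margin
    ]
    return sum(checks)

n = lemons("medium", 8, 12)
if n >= 3:
    print("3 lemons - consider a diversion")
elif n == 2:
    print("2 lemons - add some safety margin")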

I am not involved with any these activities, but take interest in ‘technological safety’. E.g. the effectiveness of ‘reactive’ warnings (accuracy permitting) vs proactive, strategic awareness. Reactive systems might encourage complacency and a dependency on the alerts, whereas proactive information might guide thoughts – ‘should we be doing this’ vs ‘how can we do it’; frame the anticipated situation - dynamic awareness.
This opens up alternative views of data use – not just number crunching, but the formation of meaningful data (information) for human use (reactive/proactive), and all within an operating environment (context); man-machine-environment. Thus we should be aware of any narrowing views – ‘not how to solve this problem’, instead ‘what characteristics identify these problems’, and can these be assessed in context and time.

Fresh off the press, with links to previous conferences at the bottom of the page: EASA - 3rd Conference of the European Operators Flight Data Monitoring forum (EOFDM) (http://easa.europa.eu/events/events.php?startdate=06-02-2014&page=3rd_Conference_of_the_European_Operators_Flight_Data_Monitoring_forum_%28EOFDM%29)

Piltdown Man
2nd Dec 2013, 21:45
Data mining might help but a better use of processing power would be a "You are going to end up here!" indicator. If it's on the runway then OK, if not...

cattletruck
3rd Dec 2013, 04:13
In medicine, "data mining" is referred to as epidemiology, and it's as old as the hills, even predating computers. It is quite a difficult study to get right and employs many scientific disciplines.

Epidemiology provides numerous health benefits to the community and in some respects the same principles can be also applied to aircraft/engine health.

However, I'm having trouble seeing any tangible benefit from the non-deterministic system being proposed here. If the system said "Don't fly with cheapskate airline X in February because that is when they are most likely to crash", then nobody would fly with them in February and they wouldn't crash.

alf5071h
3rd Dec 2013, 13:06
Piltdown, when humans are able to determine where the aircraft will end up, and explain why, then perhaps the programmers might be able to code this and use the processing power.
Also, remember that for any meaningful output there has to be a corresponding accurate input, otherwise it’s just a guess.
In most circumstances an experienced human provides a very good ‘guess’ (though on rare occasions not good enough); however, if data mining could provide a more accurate guess based on many previous successful human guesses (humans don’t learn, forget, or are corrupted by bias), then the computation might be able to advise the human that they will not end up where they plan to (providing the human declares the intention).

At the other end of the scale there are situations where the human will excel; here the computer should not intervene. Thus there is a problem in determining which situation applies (context). Data mining might first focus on establishing the situation (which requires sensor accuracy), then indicate the level of risk inherent in a choice of action, bearing in mind that humans tend to underestimate risk (operators and programmers both).

Zionstrat2
4th Dec 2013, 17:13
cattletruck-
I had to think about your example for a while because a lot of my time is healthcare-oriented, and I would never have imagined what I think of as near-real-time recommendations based on data mining at this point in health care.

The reason is that healthcare data hasn't been readily available (stuck in charts without incentives to make operational data available) and algorithms need volume to throw out the outliers to build the normal model.

But even with trends to change data capture and standards and to create a sharing infrastructure, I'm not sure I would expect the near-real-time analysis we are talking about any time soon in health care. Health providers have a lot more control over their clinical environment, and they intentionally limit a lot of variables in ways that are unrealistic in aviation.

I get the idea of data mining in health care - model building, identifying unimagined causality - it makes total sense, but I would have imagined it being used for training and developing better procedures, because I would imagine that healthcare outcomes are also far more deterministic.

On the other hand, if aviation had the equivalent of millions of lab rats (= aircraft) that could be tested to the point of failure over and over for hundreds of years, and that data was collected, I would think the comparison would be closer?

In a way, that is what I am imagining... if thousands of drones are delivering pizza, we're going to see a lot more failures, the data sets will get bigger, and the causes of failure might lead to more traditional fixes (training and developing better procedures).

No idea if any of this fits your example, but I appreciate the analogy...

cattletruck
5th Dec 2013, 12:22
Mr Strat2, no doubt your idea got one thinking, so it must be a good one; it's just that the focus and end result may not be right, and perhaps the world isn't quite ready for it.

In the immediate term, I can see a use for this kind of analysis in providing high-level consultative information. For example, my friend runs a company that provides high-level, well-researched executive briefs on what's happening in the fast-paced telecommunications industry, for which his clients pay a premium. His motto is "Distilling market noise into market sense".

Create something similar in the aviation industry and you'd be on a winner.

misd-agin
5th Dec 2013, 13:24
The capability exists for simulators. It can also incorporate instrument scan. NASA has it.

The issue is: what is the cost, and can it be used to improve performance?

Zionstrat2
5th Dec 2013, 21:11
Cattletruck-
"Distilling market noise into market sense"