PPRuNe Forums - View Single Post - AF447
Old 10th Aug 2009, 12:05
syseng68k
Quote:
I think I understand what you mean, and I think we understand that there may be as much a philosophical meaning as a semantic one here. No software is "aware". My very simple understanding of software is that, while software can mimic learning, software doesn't "learn" in the meaning of the term we usually understand.
Semantics… No computer has real 'intelligence', but systems can be designed to be adaptive, or more or less 'aware', as required for a particular application. The problem is that the more complex the algorithm, the more likely it is to contain corner cases that the designers didn't consider, and there are many complex interacting variables involved in the control of an a/c. Thus the KISS principle applies, and it is especially relevant in safety-critical applications: the simpler the system, the easier it is to test and to show that it is demonstrably correct and deterministic.


Quote:
For a number of reasons, I am not sure that "fuzzy logic" solutions are suitable to airline work.
Fuzzy logic is a sort of halfway house between a completely dumb control system and AI. At a basic level, it takes the weighted values of many inputs (and perhaps time) to compute an output value or decision.
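To make the "weighted values of many inputs" idea concrete, here is a minimal sketch (my own illustration, not any real avionics code): several normalised confidence inputs are combined by weight to give a graded verdict rather than a hard yes/no. All names and thresholds are invented for the example.

```python
def fuzzy_decision(inputs, weights):
    """inputs: dict name -> confidence in [0, 1]; weights: dict name -> weight.
    Returns a graded verdict from the weighted average, not a binary one."""
    total = sum(weights.values())
    score = sum(inputs[k] * weights[k] for k in inputs) / total
    if score > 0.8:
        return "trust"
    if score > 0.4:
        return "degraded"
    return "reject"

# Airspeed disagrees badly with GPS ground speed, but attitude and AoA
# look normal -> the overall picture is "degraded", not an outright reject.
verdict = fuzzy_decision(
    {"airspeed_vs_gps": 0.2, "attitude": 0.9, "aoa": 0.8},
    {"airspeed_vs_gps": 0.5, "attitude": 0.3, "aoa": 0.2},
)
print(verdict)  # -> degraded
```

The point of the sketch is only that the output is a weighted judgement over many inputs, which is what distinguishes this from a single-sensor trip threshold.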

An example of where such an approach might be useful in avionics is the pitot failure situation. Assume level flight for some period, constant airspeed, and ground speed available from GPS and perhaps Doppler. A computer keeps track of these sensors, their rates of change and their relationships over a several-minute period, as well as other variables of interest such as AoA, vertical speed, attitude etc. The historical set and running averages can be used to predict the next set of values to a fair degree of accuracy over a short timescale.

If any single value falls outside a defined window, or exceeds rate limits at the next sample time, the system may resample to filter noise before rejecting the sensor. It then makes a best-effort estimate from the remaining sensors and presents that to the user, together with alarms for the suspected failing subsystem and for the degraded accuracy of the data.

Such an approach becomes even more relevant where the outputs from one subsystem become dependent inputs for others; i.e. systems can and should be designed to prevent domino-effect failure. The key thing is that, whatever the overall system design, it should degrade gracefully and produce unambiguous data at all times. It fails completely if it is unable to do this, or gives up in such a way as to present the controlled entity to the user in an unknown state...
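The monitoring scheme described above (track history, predict the next value, resample before rejecting) can be sketched in a few lines. This is purely illustrative: the class name, window size, deviation/rate limits and retry count are all assumptions of mine, not drawn from any real flight control system.

```python
from collections import deque

class SensorMonitor:
    """Track a sensor's recent history; flag a sample that falls outside
    a deviation window from the running mean or exceeds a rate limit,
    and allow a few resamples (to filter noise) before declaring failure."""

    def __init__(self, window=60, max_dev=10.0, max_rate=5.0, retries=2):
        self.hist = deque(maxlen=window)  # rolling history of good samples
        self.max_dev = max_dev            # allowed deviation from running mean
        self.max_rate = max_rate          # allowed change between samples
        self.retries = retries            # resamples before rejecting the sensor
        self.suspect = 0

    def update(self, sample):
        if self.hist:
            mean = sum(self.hist) / len(self.hist)
            rate = abs(sample - self.hist[-1])
            if abs(sample - mean) > self.max_dev or rate > self.max_rate:
                self.suspect += 1
                if self.suspect > self.retries:
                    return "FAILED"   # reject sensor; caller raises alarms and
                                      # falls back to a best-effort estimate
                return "SUSPECT"      # resample before committing to rejection
        self.suspect = 0
        self.hist.append(sample)      # sample accepted into the history
        return "OK"

# Steady airspeed readings, then a sudden implausible jump (e.g. iced pitot):
m = SensorMonitor()
for _ in range(10):
    m.update(250.0)                   # -> "OK" each time
print(m.update(180.0))                # -> SUSPECT (resample 1)
print(m.update(180.0))                # -> SUSPECT (resample 2)
print(m.update(180.0))                # -> FAILED  (sensor rejected)
```

A real implementation would compare across redundant channels and other sensor types as well, but the shape of the logic, predict from history, resample, then reject and degrade gracefully, is what the paragraph above describes.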

Chris

Last edited by syseng68k; 10th Aug 2009 at 13:48.