PPRuNe Forums - View Single Post - Airbus pitches pilotless jets -- at Le Bourget
21st Jun 2019, 09:34
  #86
futurama
 
Join Date: Mar 2019
Location: Canada
Posts: 72
Originally Posted by tdracer
Currently, the FAA is on record - in writing - that they will not permit or certify any flight-critical software (DAL A or B) that incorporates AI (or anything resembling AI). The reason is quite simple - AI isn't predictable in its responses - and unpredictability is the exact opposite of what you want in aircraft avionics.
Personal example - my last BMW 3 series had a simple form of AI - it would 'learn' my driving habits and incorporate that into the engine and transmission response algorithms. When I'd taken the car in for service, I mentioned that I'd seen an error message for "BMW Connect" a couple of times (BMW Connect is similar to "On Star", but cell-phone based). After I picked up the car, it had turned into a gutless wonder - the engine was literally so slow and unresponsive as to be dangerous to drive. I took it back the next day, let the service manager drive it around the block, and he immediately confirmed something was seriously wrong.
Turns out they'd re-flashed the memory to correct the BMW Connect error messages - somehow in doing that, they'd inadvertently set all the AI learning to "little old lady", making the car almost undriveable. They reset all the AI, and the car drove perfectly. When I talked about this with some co-workers later, it turned out one of the others had had a similar occurrence - on their Jeep Grand Cherokee...
Programming for 'known' failures is relatively easy - the first step in any fully autonomous aircraft would be to catalog every single known survivable failure and come up with the best solution to each one. Not to shortchange Sully in any way, but an all-engine power loss is pretty straightforward - a proper program could evaluate the possible glide range based on all the relevant parameters (altitude, airspeed, aircraft weight and drag), determine whether it was feasible to land at an airport or whether a water landing would be better - and do all that in a fraction of a second, while simultaneously trying to restart the engines. Where the computer falls short is something that's never happened before - e.g. the failures associated with an uncontained engine failure (think Qantas 32) - what works and what doesn't work after such a failure is somewhat random - a programmer's nightmare.
As I mentioned previously - I have no doubt fully autonomous aircraft will eventually occur, but it's going to take a long time.
Well, not really. Large classes of AI/ML algorithms are as deterministic & predictable as any "classical" algorithms.
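
To make that concrete, here's a toy sketch (the weights and numbers are invented for illustration, and this is obviously not avionics code): once a model's weights are fixed, inference is just arithmetic -- a pure function that returns the same output for the same input, every time.

Code:
import numpy as np

# Weights are constants decided at training time; nothing below changes them.
W1 = np.array([[0.2, -0.5], [0.7, 0.1]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0], [-0.3]])
b2 = np.array([0.05])

def infer(x: np.ndarray) -> np.ndarray:
    """Deterministic forward pass: no randomness, no state, no updates."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                # linear output

x = np.array([1.2, -0.4])
assert np.array_equal(infer(x), infer(x))  # identical output on every call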

And most systems that use machine learning algorithms aren't actually "learning" (updating themselves) while being used. All the "learning" happens back in the lab while the algorithms are being modeled, trained, tuned, and validated. The resulting model (a set of parameters) is then "baked" into production systems.
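
As a rough sketch of that train-offline / deploy-frozen pattern (the model, the data, and the file name are all made up, and scikit-learn is used purely for convenience): training happens once, the parameters are written out with the software release, and the production side only ever reads them.

Code:
import json
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- "In the lab": model, train, tune, validate, then freeze the parameters
X_train = np.random.RandomState(0).randn(200, 3)
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)
frozen = {"coef": model.coef_.ravel().tolist(),
          "intercept": float(model.intercept_[0])}
with open("model_params.json", "w") as f:  # shipped with the software release
    json.dump(frozen, f)

# --- "In production": load the baked-in parameters; fit() is never called
with open("model_params.json") as f:
    p = json.load(f)
coef = np.array(p["coef"])
intercept = p["intercept"]

def predict(x: np.ndarray) -> int:
    """Read-only inference -- the parameters cannot drift in service."""
    return int(x @ coef + intercept > 0)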

In your BMW, for example, the AI isn't really "learning" while you're driving around. The learning already took place in Munich -- long before you bought your car -- when BMW data scientists and data engineers used machine learning to create many configuration sets (apparently including an "old lady" configuration). From time to time, perhaps once or twice a year, BMW might use new datasets to "re-train" their AI models, validate them, and ship the updated parameters as part of the next software release. (Now, your car might be "smart" enough to notice whether you prefer to drive like an old lady or an F1 driver and automatically load the appropriate configuration, or adjust some variables within well-defined limits, but that's not AI.)
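
Purely as a hypothetical -- this is not BMW's actual logic, just an illustration of the distinction -- that kind of bounded, rule-based adaptation could look like the following: pick one of a few pre-validated configurations and clamp any per-driver trim to fixed limits. Nothing is "trained" in service, so the behaviour stays predictable.

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class ThrottleProfile:
    name: str
    response_gain: float  # pre-validated at the factory

PROFILES = {
    "relaxed": ThrottleProfile("relaxed", 0.6),
    "normal":  ThrottleProfile("normal", 1.0),
    "sport":   ThrottleProfile("sport", 1.4),
}

GAIN_MIN, GAIN_MAX = 0.6, 1.4  # hard, well-defined limits

def select_profile(avg_pedal_rate: float) -> ThrottleProfile:
    """Plain if/else selection from observed driving -- no model training."""
    if avg_pedal_rate < 0.3:
        return PROFILES["relaxed"]
    if avg_pedal_rate > 0.7:
        return PROFILES["sport"]
    return PROFILES["normal"]

def adjusted_gain(profile: ThrottleProfile, trim: float) -> float:
    """Any per-driver adjustment is clamped to the validated envelope."""
    return min(GAIN_MAX, max(GAIN_MIN, profile.response_gain + trim))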

Anyway, the bottom line is that an AI system can be "predictable" and doesn't change substantially between rigorously validated updates.

Related to this are the concepts of "interpretability" and "explainability". I won't go into details (here's an academic paper if anyone cares), but many machine learning algorithms work like a "black box", so their use may be problematic in safety-critical systems. However, not all of them work this way, and we're making great strides in making the rest "interpretable" and/or "explainable".
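
As a simple illustration of what "interpretable" means in practice (the feature names and numbers below are invented): with a linear model, the prediction decomposes exactly into per-feature contributions, so every output can be traced back to its inputs -- unlike a black-box model.

Code:
import numpy as np

feature_names = ["altitude_kft", "airspeed_kt", "gross_weight_klb"]
coef = np.array([0.8, 0.05, -0.3])  # fitted offline, then frozen
intercept = 2.0

def explain(x: np.ndarray) -> None:
    """Print each feature's exact contribution to the prediction."""
    contributions = coef * x
    for name, c in zip(feature_names, contributions):
        print(f"{name:>18}: {c:+.2f}")
    print(f"{'intercept':>18}: {intercept:+.2f}")
    print(f"{'prediction':>18}: {intercept + contributions.sum():+.2f}")

explain(np.array([3.0, 140.0, 1.5]))  # every term in the sum is visible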