PPRuNe Forums - View Single Post - End of Aircraft Operation
Old 4th Jul 2020, 21:07
jcbmack
 
Join Date: Oct 2008
Location: united states
Age: 45
Posts: 113
Too Theoretical

Originally Posted by tdracer
You are totally missing the point. Sully is a reasonably easy scenario to program for:
Scenario - you just lost thrust on both engines and are unlikely to get it back - so you're looking at a forced landing. So you need to determine - given your weight, altitude, and airspeed - how far you can glide and what potential landing spots are available within that range (as well as any configuration changes needed to achieve that range). Furthermore, with appropriate programing an autonomous system would instantly know where every available landing spot was (no need to ask ATC). The only 'hard' part would be determining the best option of where to put it down (obviously a runway would be best, but if available range doesn't allow that picking the best alternative).
Now, Sully did all this, but it took him ~20 seconds - exceptionally good for a human under those circumstances - but an autonomous system could have done all that in a fraction of a second - and by making that determination ~20 seconds earlier, there would still have been enough altitude/airspeed to make an actual runway (in which case John Q Public probably wouldn't even remember it happened).
Basically, if the scenario has ever happened, or if the designers can dream it up, an autonomous system can be developed to account for it - with the designers having the advantage of being able to sort through various different actions to determine which is most likely to provide a happy outcome (unlike a human pilot who basically only gets one chance to get it right). The weakness of any autonomous system is dealing with a totally new, unanticipated scenario - humans are creative, and can think up new, inventive ways to deal with unknown - computers not so much.
I think you are missing several points. Yes, it took Sully about 20 seconds to carry out his actions, but that delay caused no harm to the crew or the passengers; no one died, and everyone went on living. You also overestimate the power of autonomous systems in general. Many idealists in engineering and software engineering make this kind of overstatement of capabilities; people on my own teams have made similarly biased claims.

Here, unfortunately, is where you make a mostly false claim: "Basically, if the scenario has ever happened, or if the designers can dream it up, an autonomous system can be developed to account for it - with the designers having the advantage of being able to sort through various different actions to determine which is most likely to provide a happy outcome." Autonomous systems can account for a plethora of scenarios, yes, and often help pilots make more rapid decisions, but this is not universally true. Progress in autonomous and intelligent systems (AI/ML) for aviation, finance, big data, and epidemiology has not been as rapid or consistent as many in the field predicted, or as many of us hoped it would be. No engineer or computer scientist is that good.

" but an autonomous system could have done all that in a fraction of a second - and by making that determination ~20 seconds earlier".

The operative phrase is "could have"; there is no guarantee this would happen under actual real-world conditions.
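To be fair, the raw arithmetic tdracer describes is trivially fast for a computer; the hard part is everything around it (sensing, terrain data, certification). A minimal still-air sketch of the glide-range step, assuming a constant lift-to-drag ratio of roughly 17:1 (an approximate clean-configuration figure for an A320-class airframe) and an illustrative event altitude near 2,800 ft, would look like this. The function name and constants here are mine, not from any real flight-management system:

```python
# Rough still-air glide-range estimate from altitude and lift-to-drag ratio.
# A real system would also model configuration changes, wind, and the energy
# available from remaining airspeed, then intersect this range with a
# database of candidate landing sites.

def glide_range_nm(altitude_ft: float, lift_to_drag: float = 17.0) -> float:
    """Return approximate still-air glide distance in nautical miles."""
    FEET_PER_NM = 6076.12
    return altitude_ft * lift_to_drag / FEET_PER_NM

# Illustrative: a dual engine failure near 2,800 ft
print(round(glide_range_nm(2800), 1))  # → 7.8
```

The computation itself takes microseconds; the 20 seconds Sully needed went into perceiving the situation, judging which options were survivable, and committing to one, which is exactly the part that is hard to automate.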

" The weakness of any autonomous system is dealing with a totally new, unanticipated scenario - humans are creative, and can think up new, inventive ways to deal with unknown - computers not so much."

We mostly agree here, and that is a major issue, but there are other problems too: detecting nuanced conditions, which autonomous systems to date still handle rather poorly. Even when sensors are not malfunctioning, they have real-world trouble differentiating between some important visual cues. Sully's situation was, at the time, an unanticipated scenario involving more than just two engines out.

After watching Boeing damage its engineering legacy with the 787 Dreamliner and now the 737 Max, they have a lot of errors to fix, and hopefully their engineers will now report known design flaws rather than just mentioning them briefly in an email. Hopefully, Airbus will not become too theoretical with its new autonomous systems and will remain practical.

In my own experience with global teams working on computer vision, machine learning, and accident avoidance in land-based systems, we have seen incredible improvements: information is processed far faster than, and at least as accurately as, by human end-users. But we have also seen these systems make mistakes that people rarely, if ever, make.

More salient links to the subject at hand:

https://www.skybrary.aero/index.php/...ety_Challenges

https://www.reuters.com/article/us-a...-idUSKBN1X31ST

https://www.news.com.au/technology/i...8d1f6d65f4c27e


Last edited by jcbmack; 4th Jul 2020 at 21:30.