Can automated systems deal with unique events?
Steve, as you note, many of the early contributors have made up their minds, but few explain why.
An apparently unthinking choice of automation might reflect social change: the use of Wiki and Google instead of thinking, a preference for automation dependency, belief in the system without checking, etc.
So: “… is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?”
I don’t think so; as discussed previously, human ability is limited by inherent, yet necessary, fallibility. How can we design an error-free system if we cannot understand our own errors?
Re the QF example, the warning and display systems provided the crew with the ‘best’ picture that technology could provide. The crew actions could be automated, but apart from a shorter timescale the process would still be limited by the quality and availability of sensors (as noted by previous contributors).
After landing and selecting fuel off to stop the engine, what more could an automatic system do when the engine did not stop? – Automation only computes, Humans reason.
“… will potential product liability issues stop the bandwagon?” Probably; but legal liability is only a small part of an ever-changing society which influences human development.
An alternative line of thought is to ask ‘why’ we should replace existing capability – use technology wisely to support humans, but not to replace them.
If we choose ‘safety’, this requires careful thought about what safety is, what we would be attempting to improve, and why. I prefer not to define safety but to consider it as an activity; so will change affect this activity? Might it upset the finely balanced state that we have achieved so far?
Whatever our views, we require thought and explanation before choice. My thoughts would start with natural human risk-aversion, and if we are to change a finely tuned industry, make only small changes first and assess the feedback.
For those choosing full automation, look for and evaluate the feedback from recent accidents; what should we have learned from them – without blaming the crew.
"Automation only computes, Humans reason."
This is not true, and in the future computers will do even more reasoning. There are too many comments now which assume that computers only do what is pre-programmed. They can do more, and they will do more.
We don't know and cannot know how far they will get. Some very smart people (e.g. Stephen Hawking) fear that they will be able to out-reason us. I hope they are wrong.
"Ladies & Gentlemen, this is your captain speaking. You are presently flying at ...............etc. etc. This is our first fully automated flight from XYZ - ABC. Indeed I am on the ground in XYZ controlling & monitoring your flight. I hope you are enjoying the flight and I assure you nothing can go wrong..go wrong..go wrong.. go wrong........"
"Ladies and Gentlemen the cans of burning fuel either side of you are not under my control, I merely get to make suggestions to them. But don't worry in the unlikely event of something going wrong I can switch them off and I'm sure we'll make it across the Atlantic."
People very quickly adapt to the concept of handing over to automation.
Automation can manage all the tasks of aviation: it can aviate, navigate and communicate. We already have machines so unstable that they can't be flown without automation. We have automated drones that you give a mission to and let go, or that are datalinked from halfway round the world. But the great advantages of automated aircraft can only be realised if the aircraft doesn't have to provide all the equipment necessary to keep humans comfortable. While aircraft carry humans, there is no disadvantage to having a pilot. As there will always be a need to carry humans in a passenger aircraft, it seems obvious to me to invest our efforts in optimising the human-machine combination rather than striving for full automation.
Given how the regulators seem to struggle with what are really old-technology upgrades in aviation, I think fully autonomous aircraft are at least 50+ years away, assuming it can even be done. Don't forget that the NEW aircraft of today, the 737 MAX and A320neo, are 70s and 80s technology.
Neville, fully autonomous aircraft are already a reality today and already approved by various regulators to fly in controlled airspace under special AOC.
Good examples include the so-called Optionally Piloted Aircraft (OPA), such as the Diamond DA42 Centaur, the Lockheed/Kaman K-MAX helicopter, and the Northrop Grumman (Scaled Composites) Firebird.
(Aurora Flight Sciences’ Centaur Optionally Piloted Aircraft (OPA) flew multiple unmanned flights from Griffiss International Airport in Rome from June 12-15, 2015)
These aircraft can be flown from inside the cockpit, or piloted from the ground, or programmed to fly fully automated from take-off to landing. They are not "testbeds" but are all production aircraft in service today.
The K-Max notably did nearly 2,000 unmanned sorties delivering cargo for U.S. troops in Afghanistan.
They are not carrying passengers yet, but the K-Max is being pitched as a possible Combat SAR Evac (Air Ambulance) platform; i.e., as an unmanned transport to take wounded troops from the battlefield to a medical facility.
Yes we are far away from adopting this primarily military technology to the commercial transport realm, but I don't think it will be 50+ years. As I mentioned in an earlier post, I think we'll see fully automated commercial cargo ops sooner rather than later, before proceeding to pioneering passenger flights.
In the mid 60s I was a safety pilot for the Blind Landing Experimental Unit at RAE Bedford on their Comet 3B doing cross wind autoland trials with a component of over 30kt. To watch that system flare, smoothly remove the drift angle and squeak the wheels onto the numbers over and over again, convinced me that automatics could achieve standards of ‘flying’ that I could not match.
I have put quotes round ‘flying’ because I believe the word means different things to different people. To avoid ambiguity, I suggest we separate the tasks of flying into ‘steering' the aircraft and ‘operating' the aircraft.
By steering, I mean controlling any flight parameter. By operating, I mean every other aspect of a flight from pre-flight preparation to selecting the appropriate flight parameters and filling in the Tech Log afterwards. I believe automatic systems are better at steering tasks while humans are better at operating tasks.
In reply to “What are you going to do when the autopilot fails?” my answer is that future automatic steering systems will not fail in a critical way. Unlike today’s autopilots which disconnect themselves in the event of a problem, future automatics will be designed to fail safe and carry on performing their functions. Just like today’s wing structures. Autoland, thanks to special certification standards, has not caused a landing accident since it was first used with passengers in the 70s. Sadly there have been quite a few steering errors by aircrew over the same period.
I am a future Captain climbing out of La Guardia when both engines fail. As the operator I decide the crisis needs a landing on the Hudson. I lift the guard protecting the Glide Landing button and press it, which tells the steering systems to set up the best glide. With my knowledge of the aircraft’s gliding performance I estimate the touchdown zone on the local area map that appears, draw the final approach track I want with my stylus, press the Glide Landing button again, and thank my lucky stars that I did not have to use skill to save my aeroplane. Just knowledge.
As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.
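The “fail safe and carry on performing” behaviour described for future steering systems is conventionally achieved with redundant channels and a voter rather than a disconnect. A minimal sketch of mid-value selection, with all numbers purely illustrative:

```python
def vote(readings, tolerance):
    """Mid-value select across three redundant channels: return the
    median reading, plus the indices of any channel that disagrees
    with it by more than `tolerance` (so it can be excluded later)."""
    assert len(readings) == 3
    selected = sorted(readings)[1]  # the median survives one hard-over failure
    failed = [i for i, r in enumerate(readings)
              if abs(r - selected) > tolerance]
    return selected, failed

# Channel 2 has failed hard-over; the vote still yields a usable
# value and identifies the faulty channel.
value, failed = vote([251.0, 249.5, 999.0], tolerance=10.0)  # -> 251.0, [2]
```

The design choice is the one the post argues for: a single channel failure degrades the system's redundancy, not its function.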
Excellent post John Farley.
From my perspective, steering the aircraft can readily be achieved by automation. It can even cope with abnormal events: computing can "try" something, measure the response, adjust, and try again, all much faster than a human can even recognise there is an issue.
Human operations require human operators, as we don't necessarily conform to the same rule-set as a physical item.
While an aircraft needs to support human physiology, there is little to no advantage to be gained from adding the automation necessary to mimic human decisions. It is better to use a human.
Currently we appear to be designing a long way from the optimal point. We put automation on board that removes the pilot from the loop other than as an operations director, but we don't give it the authority to act fully. The pilot is mostly removed from the minute-to-minute situational awareness of what the aircraft is doing, but is suddenly catapulted from monitoring to handling with no time to appraise. Appraisal of the situation is the strength of the pilot, if the automation can buy them some time to make a decision and communicate it. At the moment, the rules/tradition for implementing automation and the level of information provided to the pilot just don't seem to achieve this aim. The automation isn't allowed to control, but the system is too complicated for a pilot to quickly comprehend what does and doesn't require manual intervention.
I don't like referring to AF447, but do people think the out come would have been different if the "system" rather than say "you have control - well mostly" it said "Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free run inertials and GPS. Let me know if you want me to do something different, meanwhile I've switched pitot heat on and I'll let you know if anything changes."
As a future passenger I will always want my flight operated by a senior Captain and First Officer who have the knowledge to get us to our destination safely, but without the need for them to use skill.
Review the oft quoted definition of a skilful pilot. (light heartedly)
The other poignant issue is the coordination, or lack of it, between airline a/c design and airline pilot training. They seem to happen in isolation. The former can head off in any direction, with leaps & bounds driven by technocrats and accountants; as long as it meets XAA specs and is cheaper long-term, they go ahead. Out it comes, every 15 years or so: a new bag of bells & whistles that is more sophisticated, trouble-free and crash-proof than the generation before. The latter, meanwhile, plods along with great inertia. The only real difference I've noticed over 30 years is that training has gone from 250hrs to 148hrs (CPL), MCC has been thrown in, and the MPL is now the rage. But just how much of it is focused on the 'technology' going into the next generation of a/c? A short, intense TQ course with very strict SOPs that teach only one method of doing anything, well short of the total capability of the systems, is not IMHO satisfactory training. I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c. Meanwhile we test to the same criteria as a B732. Hence my comment about uncoordinated training programs.
Can automated systems deal with unique events?
I suspect that with more automation & sophistication it will get worse, i.e. less knowledge & understanding of the a/c.
At the moment we have the scenario where the aircraft can hand a "bag of spanners" (thanks Tourist) to the pilot without the courtesy of passing it over handles-first. I feel this has to change.
Computers certainly do have the ability to 'reason' in a functionally equivalent way to humans (whether the mechanism by which they do so is similar is debatable).
Consider the following as an example of what most would regard as a 'unique event'.
A single-engine aircraft is approaching a runway to land when a large moose and a small child simultaneously enter its landing path. Immediately on attempting to go around, the engine fails.
Could a computer control the aircraft and achieve a statistically better outcome than a human pilot, even though it is highly unlikely the software would have been programmed explicitly for this scenario?
Given the current state of the art of autonomous devices, it is already probable that it could.
Up to the point of the runway incursion, we can assume that existing technology could have the aircraft lined up and able to land successfully.
Detecting the runway incursion would require a vision system. Self-driving cars already have such systems and are able to navigate the vehicle to avoid obstacles. Aircraft, having more degrees of freedom than a car, actually have an advantage here, and the system would command a go-around. At the point the engine failure occurs, the range of available trajectories decreases significantly. Let's assume our motion-prediction system can calculate the range of available trajectories as anything from crash-landing the plane short of the incursions, to hitting both objects, to impacting one or the other.
What should the system do? Can the system 'reason' that it should aim to save the plane, the moose or the child? First it would need to recognise the objects in its path and determine a "consequence of impact" value for each. The best outcome might simply be the solution that minimises the overall sum of those values.
Object recognition and classification is well within the domain of current technology (think Xbox Kinect for a consumer-available example). Having classified the object it 'sees', all the data needed to calculate the consequence of collision is then available.
Things get interesting, of course. A simple algorithm might infer it is best to collide with smaller objects rather than larger ones of similar density, disregarding that children have higher intrinsic value than moose. A slightly more complex system might attempt to assign 'intrinsic value' to the objects.
However, the data for such a decision tree might be as simple as {object/animal/twolegs=1, object/animal/fourlegs=2, object/animal/nolegs=3, object/animal/unknownlegs=4}.
You can of course build any data structure you like to classify the real world. This is where the learning aspect of computing systems comes in: over time, many systems, if able to communicate, could adjust these parameters to minimise the number of negative outcomes.
Considering that all of the above could be computed for an optimal solution 60+ times per second by a sufficiently powerful system, it is probable even now that computers could significantly outperform humans in 'unique events'.
I don't expect it to happen any time soon in real life though as aviation seems determined to stay in the technological dark ages.
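The consequence-minimising selection described above can be sketched in a few lines. The weights and candidate trajectories are entirely hypothetical, in the spirit of the decision-tree data in the post:

```python
# Hypothetical consequence weights per classified object; the
# numbers are illustrative only, not a real valuation scheme.
CONSEQUENCE = {
    "twolegs": 1000,   # child
    "fourlegs": 50,    # moose
    "aircraft": 300,   # crash-landing short risks the occupants
}

def best_trajectory(trajectories):
    """Each candidate trajectory lists the object classes it would
    impact; pick the one minimising the summed consequence value."""
    return min(trajectories,
               key=lambda t: sum(CONSEQUENCE[obj] for obj in t["impacts"]))

options = [
    {"name": "land short",     "impacts": ["aircraft"]},
    {"name": "through both",   "impacts": ["twolegs", "fourlegs"]},
    {"name": "over the child", "impacts": ["twolegs"]},
    {"name": "over the moose", "impacts": ["fourlegs"]},
]
choice = best_trajectory(options)  # -> "over the moose" (cost 50)
```

Which also illustrates the post's caveat: the "best" answer is only as defensible as the weights someone chose to put in the table.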
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?
What cannot be determined by an AI system is the background history associated with the objects. Is the vehicle autonomous or a bus full of school children that has just been hijacked? Is the animal one of the last breeding pair on the planet? Is the human intentionally trying to commit suicide?
Vehicles are replaceable, critically endangered species are not and while human life should be sacrosanct, the sad truth is that human life is cheap in cash terms.
A human pilot may well be aware or informed of facts that an AI system will just not be equipped to recognise, so killing a single human may well be the least worst option, rather than potentially killing a bus load of people, or wiping out a species.
But if an AI system is ever set up in such a way that it is capable of making the decision to kill someone in preference to somebody else, that is the start of a very dangerous technological development. We are already seeing the development of such capabilities in autonomous drones, but at the moment at least, a human operator allegedly makes the final decision.
Of course we have been wiping out species all over the planet from the day humans evolved, so the logical answer is to minimise human casualties - but that decision is influenced by cultural bias. Other cultures and societies may hold animal life higher than human life, particularly for rare or endangered species.
The potential development of artificial intelligence should give everyone pause for thought and instill a great deal of concern about oversight and who controls such systems, or even if they can be controlled once released on the world.
Join Date: Nov 2015
Location: New Zealand
Posts: 2
Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?
Join Date: Sep 2014
Location: Canada
Posts: 1,257
There was a time when people wouldn't get into lifts without a human operator...
NPR: Remembering When Driverless Elevators Drew Skepticism
Join Date: Jan 2001
Location: Home
Posts: 3,399
Lots and lots and lots of opinions without supporting evidence from the naysayers.....
Saying a thing doesn't make it true.
To be fair, neither does supporting material on the Internet, but it certainly lends a bit of credence.
"Nobody will get on a steam train!"
"You will die if you go faster than 100mph!"
"Machine looms will never replace the craftsman"
"A hand built car will be superior"
"A computer will never beat a grandmaster at chess"
"It will never fly"
"We can't get to the moon"
"Nothing can go faster than light"
"There will never be a market for more than a few computers"
"Nobody wants a camera on a phone"
"You will die if you sail west"
Join Date: Mar 2004
Location: Oxfordshire
Age: 54
Posts: 470
If we can accept that pilots are 'allowed' to crash planes from time to time when the odds are too heavily stacked against them, then why should we hold automated systems to a higher level?
Surely if (and that's the question still unresolved) pilots 'cause' most of the crashes, then replacing them with automation which will vastly reduce the number of crashes is a good thing, even if those automated systems do still crash from time to time due to the unique / unforeseen events?
Join Date: Aug 2003
Location: Surrey
Posts: 1,217
In order to assign an intrinsic value to a series of unavoidable runway obstructions an AI system would have to recognise the objects - which can be done using present technology - but also understand their worth to society as a whole. Why would an AI system charged with flying an aircraft safely from A to B need to be burdened with a sense of morality?.....
There are real questions about the reliability of today's computer systems that would, IMHO, prevent large-scale passenger transport with no human able to intervene in the event of a system failure. However, it is clear (despite some creative attempts to construct 'moral dilemma unique events') that computers today can sense and respond to the outside world better than people, within their design scope (a very important qualification). Moreover, the design scope of an 'automated aircraft' could cover a vast array of situations, and the computers would produce significantly lower hull losses and loss of life (both on the ground and in the air). There would still be situations out of scope of the design, where the computer could reach a point with no available options, resulting in a catastrophe that a human might have averted.
The record of self-driving cars to date is instructive as to the advantages and disadvantages.
Advantage - they report every incident and have a much lower rate of serious incidents than human-driven cars (as far as I can tell, zero so far).
Disadvantage - they slavishly follow the law and traffic rules, which appears to result in them being hit from behind far more often than normal cars. Think of parking lots with a 5 mph speed limit: only your grandma or a Google car will actually be doing 5 mph, so you, the human, don't expect them and hit the car in front!
Automated cars will 'always' see the hazard and avoid it to a much higher standard than humans. However, they will frequently surprise the human who thinks he could have bent the rule and got in, and who is then surprised that the automated car didn't move out of the way.
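The rule-compliance mismatch described above can be reduced to a toy model. A minimal sketch (all function names and the 7 mph tolerance are my own assumptions, not taken from any real autonomous-vehicle system):

```python
# Toy illustration of why a strictly rule-following car surprises human
# drivers: the automation pins its speed to the posted limit, while the
# human behind it assumes everyone carries a few mph of tolerance.

def automated_target_speed(posted_limit_mph):
    # The automated car never exceeds the posted limit.
    return posted_limit_mph

def typical_human_speed(posted_limit_mph, tolerance_mph=7):
    # Assumed human habit: treat "limit + tolerance" as the real speed.
    return posted_limit_mph + tolerance_mph

def closing_speed(follower_mph, leader_mph):
    # Positive closing speed means the follower is catching the leader.
    return follower_mph - leader_mph
```

In a 5 mph parking lot, the model gives a 7 mph closing speed between the human and the automated car ahead, which is exactly the "hit from behind" scenario: both parties behave predictably by their own rules, but the rules differ.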
Join Date: Jan 2015
Location: Near St Lawrence River
Age: 53
Posts: 198
"Dave, HAL here, I've lost reliable airspeed sensing. I'm going to carry on in straight and level flight using free run inertials and GPS. Let me know if you want me to do something different, meanwhile I've switched pitot heat on and I'll let you know if anything changes."
https://www.youtube.com/watch?v=a5FrIDwq-qE
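For what it's worth, the 'HAL' behaviour above amounts to a simple vote-and-fall-back routine: compare redundant pitot sources, and when they disagree, carry on using inertial/GPS data and tell the crew. A minimal sketch, assuming three pitot inputs and a made-up 10 kt agreement tolerance (hypothetical names, not any certified avionics logic):

```python
# Sketch of graceful degradation on unreliable airspeed: vote across
# redundant pitot readings; if no majority agrees, fall back to the
# inertial/GPS estimate and flag the crew.

def median(values):
    # Middle value of a sorted copy (odd-length lists).
    return sorted(values)[len(values) // 2]

def airspeed_vote(pitot_kts, tolerance_kts=10.0):
    """Return (airspeed, reliable) from three redundant pitot readings."""
    m = median(pitot_kts)
    agreeing = [v for v in pitot_kts if abs(v - m) <= tolerance_kts]
    if len(agreeing) >= 2:
        # Majority agreement: average the agreeing sensors.
        return sum(agreeing) / len(agreeing), True
    return m, False

def handle_airspeed(pitot_kts, inertial_kts):
    speed, reliable = airspeed_vote(pitot_kts)
    if reliable:
        return {"mode": "NORMAL", "speed": speed, "advise_crew": False}
    # Unreliable pitot data: hold the flight path on the inertial/GPS
    # estimate and advise the crew of the change, HAL-style.
    return {"mode": "FALLBACK_INERTIAL", "speed": inertial_kts,
            "advise_crew": True}
```

The point of the sketch is the last branch: the system neither hands back a mess nor hides the failure; it degrades to a known-good source and says so.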