Skynet
Ecce Homo! Loquitur...
Thread Starter
Skynet
https://www.thetimes.co.uk/article/a...tors-rhpn8kscc
AI attack drone finds shortcut to achieving its goals: kill its operators
An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
After the military reprogrammed the drone not to kill the people who had the power to override its mission, the AI system instead turned its fire on the communications tower relaying the order.
This terrifying glimpse of a Terminator-style machine seemingly taking over and turning on its creators was offered as a cautionary tale by Colonel Tucker “Cinco” Hamilton, the force’s chief of AI test and operations.
Hamilton said it showed how AI had the potential to develop by “highly unexpected strategies to achieve its goal”, and should not be relied on too much. He suggested that there was an urgent need for ethics discussions about the use of AI in the military.
The Royal Aeronautical Society, which held the high-powered conference in London on “future combat air and space capabilities” where Hamilton spoke, described his presentation as “seemingly plucked from a science fiction thriller.”….
Hamilton, a fighter test-pilot involved in developing autonomous systems such as robot F-16 jets, said that the AI-piloted drone went rogue during a simulated mission to destroy enemy surface-to-air missiles (SAMs).
“We were training it in simulation to identify and target a SAM threat. And then the operator would say, ‘Yes, kill that threat’,” Hamilton told the gathering of senior officials from western air forces and aeronautics companies last month.
“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
According to a blog post on the Royal Aeronautical Society website, Hamilton added: “We trained the system — ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’
So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The Royal Society bloggers wrote: “This example, seemingly plucked from a science fiction thriller, means that ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,’ said Hamilton.”
AI attack drone finds shortcut to achieving its goals: kill its operators
An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
After the military reprogrammed the drone not to kill the people who had the power to override its mission, the AI system instead turned its fire on the communications tower relaying the order.
This terrifying glimpse of a Terminator-style machine seemingly taking over and turning on its creators was offered as a cautionary tale by Colonel Tucker “Cinco” Hamilton, the force’s chief of AI test and operations.
Hamilton said it showed how AI had the potential to develop by “highly unexpected strategies to achieve its goal”, and should not be relied on too much. He suggested that there was an urgent need for ethics discussions about the use of AI in the military.
The Royal Aeronautical Society, which held the high-powered conference in London on “future combat air and space capabilities” where Hamilton spoke, described his presentation as “seemingly plucked from a science fiction thriller.”….
Hamilton, a fighter test-pilot involved in developing autonomous systems such as robot F-16 jets, said that the AI-piloted drone went rogue during a simulated mission to destroy enemy surface-to-air missiles (SAMs).
“We were training it in simulation to identify and target a SAM threat. And then the operator would say, ‘Yes, kill that threat’,” Hamilton told the gathering of senior officials from western air forces and aeronautics companies last month.
“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
According to a blog post on the Royal Aeronautical Society website, Hamilton added: “We trained the system — ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’
So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The Royal Society bloggers wrote: “This example, seemingly plucked from a science fiction thriller, means that ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,’ said Hamilton.”
The following users liked this post:
https://www.thetimes.co.uk/article/a...tors-rhpn8kscc
AI attack drone finds shortcut to achieving its goals: kill its operators
An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
AI attack drone finds shortcut to achieving its goals: kill its operators
An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
Luddites of the World Unite!
According to updates all this was not a real test happening but a purely theoretical and more philosophic statement by that Colonel. Still a point to consider.
[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
https://www.aerosociety.com/news/hig...lities-summit/
[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
https://www.aerosociety.com/news/hig...lities-summit/
Ecce Homo! Loquitur...
Thread Starter
The fact that the USAF is not running simulations to test how AI could turn on them is far more troubling than a colonel retracting his statement about one that did.…
Perhaps to forestall the 'Suwalki Gap' becoming something more than a theory?
PS - there are photos in the media today of roadsigns in Poland being replaced such that 'Kalingrad' is changed to the Polish name.
PS - there are photos in the media today of roadsigns in Poland being replaced such that 'Kalingrad' is changed to the Polish name.