
Phalanx and the AI quandary


Old 6th May 2023, 11:45
  #1 (permalink)  
Thread Starter
 
Join Date: Feb 2006
Location: Hanging off the end of a thread
Posts: 33,047
Received 2,920 Likes on 1,249 Posts
Phalanx and the AI quandary

This is becoming a farce; we might as well just surrender now…

https://ukdefencejournal.org.uk/uk-f...dgetail-order/


In a recent AI in Weapons Systems Committee session, experts debated the ethical, legal, and technical concerns of AI in weapons like the Royal Navy’s Phalanx system, discussing potential bans on specific autonomous systems.

The Artificial Intelligence in Weapons Systems Committee recently held a public evidence session, inviting experts to discuss ethical and legal concerns surrounding the use of AI in weaponry.

The session included testimony from Professor Mariarosaria Taddeo, Dr. Alexander Blanchard, and Verity Coyle, who examined the implications of AI in defence and security.

Professor Taddeo highlighted three main issues with the implementation of AI in weapons systems, stating, “We need to take a step back here, because it is important to understand that, when we talk about artificial intelligence, we are not just talking about a new tool like any other digital technology. It is a form of agency.”

She emphasised concerns regarding the limited predictability of outcomes, difficulty attributing responsibility, and the potential for AI systems to perpetrate mistakes more effectively than humans. Taddeo argued that the unpredictability issue is intrinsic to the technology itself and unlikely to be resolved.

Verity Coyle, a Senior Campaigner/Adviser at Amnesty International, emphasised the potential human rights concerns raised by autonomous weapons systems (AWS), saying, “The use of AWS, whether in armed conflict or in peacetime, implicates and threatens to undermine fundamental elements of international human rights law, including the right to life, the right to remedy, and the principle of human dignity.”

She argued that without meaningful human control over the use of force, AWS cannot be used in compliance with international humanitarian law (IHL) and international human rights law (IHRL).

During the session, Verity Coyle gave an example of an existing AWS: the Kargu-2 drones deployed by Turkey, which have autonomous functions that can be switched on and off. She warned: “We are on a razor’s edge in terms of how close we are to these systems being operational and deadly.”

In response to questions about existing AI-driven defence systems, such as the Phalanx used by the Royal Navy, Coyle stated, “If it is targeting humans, yes,” indicating that any system targeting humans should be banned.


The experts recommended the establishment of a legally binding instrument that mandates meaningful human control over the use of force and prohibits certain systems, particularly those that target human beings.
NutLoose is online now  
Old 6th May 2023, 14:30
  #2 (permalink)  
 
Join Date: Oct 2018
Location: Ferrara
Posts: 8,464
Received 364 Likes on 213 Posts
This has been an issue since Asimov wrote I, Robot. How much can you trust the machines?
Asturias56 is offline  
Old 6th May 2023, 14:50
  #3 (permalink)  
 
Join Date: Apr 2010
Location: Herefordshire
Posts: 775
Received 571 Likes on 209 Posts
Originally Posted by Asturias56
This has been an issue since Asimov wrote I, Robot. How much can you trust the machines?
Just finished the novel 'DELIO Phase One', a freebie with Audible. A self-aware AI becomes all-powerful and totally amoral. Excellent.
Video Mixdown is online now  
Old 6th May 2023, 16:41
  #4 (permalink)  
 
Join Date: Oct 2018
Location: Ferrara
Posts: 8,464
Received 364 Likes on 213 Posts
More worrying was a story - I think it might have been Eric Frank Russell - about some poor sod who gets caught in a completely machine-driven process over a missing murder-mystery book, which the machines gradually escalate, quite logically, to finding him guilty of murder and having him executed…

Asturias56 is offline  
Old 6th May 2023, 18:42
  #5 (permalink)  
 
Join Date: Nov 2000
Location: UK
Age: 69
Posts: 1,406
Received 40 Likes on 22 Posts
Originally Posted by NutLoose
This is becoming a farce; we might as well just surrender now…

https://ukdefencejournal.org.uk/uk-f...dgetail-order/
I don't think that this is a farce at all. ROE rely on identifying the enemy; AI may not necessarily use the laid-down criteria, and because its decision-making may very well be opaque, we may never know what criteria it uses.

Last edited by beardy; 6th May 2023 at 19:35.
beardy is offline  
Old 6th May 2023, 18:46
  #6 (permalink)  
 
Join Date: Jan 2019
Location: Cumbria
Posts: 366
Received 161 Likes on 50 Posts
Doesn't Phalanx have an Off-Switch? Operated by a human?
DuncanDoenitz is offline  
Old 6th May 2023, 18:48
  #7 (permalink)  
 
Join Date: Oct 2007
Location: York
Posts: 627
Received 23 Likes on 14 Posts
Terminator was a film before its time…
dctyke is offline  
Old 6th May 2023, 19:37
  #8 (permalink)  
 
Join Date: Nov 2000
Location: UK
Age: 69
Posts: 1,406
Received 40 Likes on 22 Posts
Originally Posted by DuncanDoenitz
Doesn't Phalanx have an Off-Switch? Operated by a human?
Probably, but that would mean having a human monitoring its performance, which kind of negates the autonomous AI bit.
beardy is offline  
Old 6th May 2023, 19:59
  #9 (permalink)  
 
Join Date: Jan 2019
Location: Cumbria
Posts: 366
Received 161 Likes on 50 Posts
Originally Posted by beardy
Probably, but that would mean having a human monitoring its performance, which kind of negates the autonomous AI bit.
And is that different to any smart munition/sub-munition, which doesn't have an Off-Switch?
DuncanDoenitz is offline  
Old 6th May 2023, 20:25
  #10 (permalink)  
 
Join Date: Jan 2008
Location: Glorious Devon
Posts: 2,698
Received 936 Likes on 554 Posts
You mean like loitering munitions or mines?
Ninthace is online now  
Old 6th May 2023, 21:17
  #11 (permalink)  
 
Join Date: Oct 2013
Location: UK
Age: 42
Posts: 654
Received 9 Likes on 8 Posts
Basically, as with 'self-driving cars', someone still has to be held liable when there is a crash. Who is it? Hopefully there's a person behind the wheel (as you must be ready to take back control at any time...), or the manufacturer if it's operating in full self-drive mode?

A cynic might say the self-drive system is programmed to hand back control of the car before the crash, so that the driver is responsible for it...

Same for any other autonomous system - who owns the risk of it going wrong?
unmanned_droid is offline  
Old 6th May 2023, 21:19
  #12 (permalink)  
 
Join Date: Jan 2019
Location: Cumbria
Posts: 366
Received 161 Likes on 50 Posts
I suppose something like the CBU-97 would fit the bill. It deploys 10 sub-munitions, each of which deploys 4 individually self-targeting sub-sub-munitions: "skeets". Each skeet searches using a laser sensor and height/contour algorithms to detect a "target", and attacks with an Explosively Formed Penetrator. Apparently it has controversies of its own, but largely based on its characteristics involving the term "cluster".
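For anyone wondering what a "height/contour algorithm" might even look like, here is a purely notional Python toy - the thresholds and the whole approach are my own invention for illustration, nothing to do with the actual BLU-108 skeet logic: scan a one-dimensional laser height profile and flag anything vehicle-sized standing proud of the ground.

GROUND_TOL_M = 0.3        # heights below this count as ground (invented)
MIN_HEIGHT_M = 1.5        # anything taller starts to look vehicle-like (invented)
MIN_WIDTH_SAMPLES = 4     # ...provided it is also wide enough (invented)

def detect_target(profile_m):
    """Return True if a 1-D height profile contains a vehicle-sized
    lump standing proud of the ground. Purely illustrative."""
    run = 0
    for h in profile_m:
        if h > MIN_HEIGHT_M:
            run += 1                  # still over the raised contour
            if run >= MIN_WIDTH_SAMPLES:
                return True
        elif h < GROUND_TOL_M:
            run = 0                   # back down at ground level
    return False

print(detect_target([0.1, 0.2, 2.1, 2.3, 2.2, 2.0, 0.1]))  # True
print(detect_target([0.1, 0.0, 0.2, 0.1, 0.3, 0.2, 0.1]))  # False

The point being: everything a skeet will and won't attack is baked into fixed, inspectable criteria like these before it ever leaves the aircraft.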

I'm pretty sure this isn't the only dog of war designed to detect, select and attack a target after being let slip by a human.
DuncanDoenitz is offline  
Old 6th May 2023, 21:21
  #13 (permalink)  
 
Join Date: Jan 2008
Location: Glorious Devon
Posts: 2,698
Received 936 Likes on 554 Posts
Originally Posted by unmanned_droid
.....

Same for any other autonomous system - who owns the risk of it going wrong?
At first glance, the person switching it on/launching it/putting it in position.
Ninthace is online now  
Old 6th May 2023, 22:42
  #14 (permalink)  
 
Join Date: Oct 2013
Location: UK
Age: 42
Posts: 654
Received 9 Likes on 8 Posts
Originally Posted by Ninthace
At first glance, the person switching it on/launching it/putting it in position.
It's a good start; however, the person turning it on is likely to be the least liable: they probably know the least about how the system works, and potentially have the least say in its operation.

For a deterministic system - i.e. you know what it will do (very nearly) every time you turn it on, with something like a 1e-9 failure rate, as for systems in an airliner - then sure, for the most part that is fine. The problem is that AI isn't a deterministic system as we understand it. There's nothing to say it will do what it is expected to do; every time it is switched on it has the opportunity to give a different answer. So leaving it alone to determine what it should attempt to kill is a pretty risky idea. We already have enough problems with algorithms trying to understand traffic on smart motorways; let's not give them guns.
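A toy sketch of the difference in Python (entirely notional - the track fields, thresholds and the random jitter are invented for illustration): a designed-in rule returns the same answer for the same input every run, while a stand-in for a stochastic learned model need not.

import random

def rule_based_engage(track):
    # Designed-in criteria: same input, same answer, every time.
    return (track["speed_kts"] > 550
            and track["closing"]
            and track["iff"] == "unknown")

def learned_engage(track):
    # Toy stand-in for a learned model whose output can vary
    # (stochastic inference, retraining drift, tie-breaking...).
    score = 0.5 + random.uniform(-0.1, 0.1)   # notional confidence
    return score > 0.55

track = {"speed_kts": 600, "closing": True, "iff": "unknown"}
print([rule_based_engage(track) for _ in range(5)])  # always [True]*5
print([learned_engage(track) for _ in range(5)])     # can differ run to run

Swap the random jitter for a silent model update between sorties and the effect is the same: identical input, no guarantee of identical output.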

When an AI system does something unexpected, it's difficult to troubleshoot - maybe even impossible. If the outcome of that unexpected action is fatal, expensive or even just embarrassing, the finger-pointing starts. Someone needs to be the risk holder. I know for a fact I'd never take that on.

The whole point of the AI system is that it can be left alone to do its job - potentially faster and 'better' than the human it was supposed to replace. If you require a human to nanny it, then its value proposition tanks.
unmanned_droid is offline  
Old 6th May 2023, 23:30
  #15 (permalink)  
 
Join Date: Mar 2005
Location: Land of the Angles
Posts: 359
Received 2 Likes on 2 Posts
Slightly off topic, but it is my understanding that a report last year by the Law Commissions of England and Wales and of Scotland suggested that legal responsibility for accidents caused by self-driving vehicles should rest not with the person in the driver's seat, but with the company or body that obtained authorisation for the self-driving features used by the vehicle. To add to the confusion, there is currently no legal definition in the UK of what constitutes a 'Self-Driving Vehicle'.

Back on topic. Twenty years ago I went to see 'Terminator 3: Rise of the Machines' and I immediately thought the opening sequence was where the world was heading; my view has not changed since.
Hilife is offline  
Old 7th May 2023, 01:06
  #16 (permalink)  
 
Join Date: Jan 2008
Location: Glorious Devon
Posts: 2,698
Received 936 Likes on 554 Posts
Originally Posted by unmanned_droid
.....

The problem is that AI isn't a deterministic system as we understand it. There's nothing to say it will do what it is expected to do; every time it is switched on it has the opportunity to give a different answer.
Nevertheless, as with any weapon system, when you use it you have an expectation of what it will do and how it will perform. Even an AI system should conform to that, as it should have gone through rigorous testing during development. To that extent, the operator is liable. No one is going to use a system with no way of knowing what it will do.
Ninthace is online now  
Old 7th May 2023, 03:05
  #17 (permalink)  
 
Join Date: Mar 2005
Location: N/A
Posts: 5,947
Received 394 Likes on 209 Posts
Anyone promoting no-pilot airliners, I'll direct to this thread.
megan is offline  
Old 7th May 2023, 06:43
  #18 (permalink)  
 
Join Date: Jan 2011
Location: In a van down by the river
Posts: 706
Received 1 Like on 1 Post
I highly recommend the movie Colossus: The Forbin Project for anyone who hasn’t seen it, way ahead of its time.
Fonsini is offline  
Old 7th May 2023, 07:58
  #19 (permalink)  
 
Join Date: Nov 2000
Location: UK
Age: 69
Posts: 1,406
Received 40 Likes on 22 Posts
Originally Posted by DuncanDoenitz
And is that different to any smart munition/sub-munition, which doesn't have an Off-Switch?
Yes, of course it is. Dumb munitions don't possess a self-generated intent; they do what they are designed and built for, their ROE are designed in, and they are deployed knowing their capabilities and limitations. AI develops its own intent and ROE; that's the point of AI. The difficulty at the moment is determining how it develops and deploys itself: the process is opaque and could be too fast to be overridden by a human. Of course, you could have a supervisory AI monitoring the operational AI.
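To make the distinction concrete, a notional Python sketch (the track fields and criteria are invented, and model.predict stands for a hypothetical black-box call): designed-in ROE can be audited line by line, whereas a learned policy can only be queried.

def designed_in_roe(track):
    # Every criterion is written down and can be audited line by line.
    return (track["iff"] == "hostile"
            and track["range_nm"] < 5.0
            and not track["near_civilian_traffic"])

def learned_roe(track, model):
    # The criteria live in the model's learned weights: you can query
    # the decision, but you cannot read off *why* it was made.
    return model.predict(track)   # hypothetical black-box call

track = {"iff": "hostile", "range_nm": 3.2, "near_civilian_traffic": False}
print(designed_in_roe(track))  # True - and you can see exactly why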
beardy is offline  
Old 7th May 2023, 08:21
  #20 (permalink)  
 
Join Date: Dec 2021
Location: Uk
Posts: 177
Likes: 0
Received 10 Likes on 7 Posts
Originally Posted by unmanned_droid
Basically, as with 'self-driving cars', someone still has to be held liable when there is a crash. Who is it? Hopefully there's a person behind the wheel (as you must be ready to take back control at any time...), or the manufacturer if it's operating in full self-drive mode?

.....
Except we are probably only 10-15 years away from truly self-driving cars. Lots of taxis will be driving to and from pick-ups with nobody in them. These cars won't even have a steering wheel for someone to take control.
Flyhighfirst is offline  

