Military Aviation

AI in Combat

Old 11th Apr 2023, 10:22
  #1
Ecce Homo! Loquitur...
Thread Starter
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,452
Received 1,612 Likes on 737 Posts
AI in Combat

An interesting read.

https://warontherocks.com/2023/04/ai...man-advantage/

AI’S INHUMAN ADVANTAGE
ORAC is offline  
Old 11th Apr 2023, 10:43
  #2
 
Join Date: Jan 2008
Location: Glorious Devon
Posts: 2,700
Received 965 Likes on 570 Posts
As you say, interesting. It is the lack of passion, and therefore the ability to change fighting style to suit the circumstances, that stood out for me. Perhaps the other thing is AI's ability to think outside the box, something not traditionally associated with machine intelligence.
Ninthace is offline  
Old 11th Apr 2023, 11:04
  #3
Ecce Homo! Loquitur...
Thread Starter
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,452
Received 1,612 Likes on 737 Posts
Not stuck in boxes any more. ChatGPT searches the internet, finds data and organises it sensibly - and now they talk and learn from each other…

ORAC is offline  
Old 11th Apr 2023, 11:20
  #4
 
Join Date: Jul 2005
Location: Great yarmouth, Norfolk UK
Age: 72
Posts: 640
Received 14 Likes on 12 Posts
If they start to call it Skynet, start running for cover......
bobward is offline  
Old 11th Apr 2023, 12:47
  #5
 
Join Date: Nov 2009
Location: London
Posts: 555
Received 21 Likes on 15 Posts
Don't be overwhelmed by the current crop of AI.

They do much more than play chess, but years ago we saw computers that could beat humans at chess and, despite the hype, they couldn't do much else. GPT can predict which words come next based on some input, but that is not quite thinking. It might be a part of thinking, but it's not the whole story.
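To illustrate what "predicting the next word" actually means, here's a toy sketch - a simple word-pair counter, nothing like the real thing in scale, and the training text is made up:

Code:
# Toy next-word predictor: count which word follows which in some text,
# then pick the most likely continuation. Real LLMs use huge neural
# networks trained on vast corpora, but the core task is the same.
from collections import Counter, defaultdict

def train(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(follows, word):
    # Most frequent follower of `word`, or None if we never saw it.
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train("the cat sat on the mat and the cat slept")
print(next_word(model, "the"))   # -> 'cat' (follows 'the' most often)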

DeepMind's AlphaStar can play StarCraft, and its utility is that it can make lots of sensible choices about the many things going on where a human might be unable to pay adequate attention. I'd say the disadvantage is probably that it is heavily tied to the rules it has been given and would have trouble if its actions were really unbounded - like coming over to your house and resetting your computer while you were off getting it a coffee, that kind of out-of-the-box effort to win.

As the article says, performance can be brittle, and that's why after years and years of hype we still aren't being driven around by AI in cars. They must still have our hands on the wheel so they can hand control back to us when they're out of their depth, or they operate in a geofenced area which they can be somewhat confident about. I think they will end up giving us "special powers" for the time being - the ability to take control of details for us or do certain things we cannot do accurately. They will end up being great at shooting down cruise missiles or other "autonomous" things which are less "intelligent", or they will be used in particularly horribly clever cruise missiles, but we will employ them more when the hardware wouldn't have had a pilot anyhow.
t43562 is offline  
Old 11th Apr 2023, 14:47
  #6
 
Join Date: Oct 2022
Location: Home
Posts: 12
Received 1 Like on 1 Post
Originally Posted by t43562
Don't be overwhelmed by the current crop of AI. […]
Which suggests that critical military skills are going to need to shift to Cyber Attack/Defense.....
DodgyGeezer is offline  
Old 11th Apr 2023, 15:06
  #7
 
Join Date: Aug 2010
Location: UK
Age: 67
Posts: 170
Received 35 Likes on 20 Posts
Originally Posted by t43562
Don't be overwhelmed by the current crop of AI. […]

How long is "for the time being", and at what point do we say enough is enough? Or will we, as usual, realize too late that the human race has become superfluous?
golfbananajam is offline  
Old 11th Apr 2023, 15:32
  #8
 
Join Date: Nov 2009
Location: London
Posts: 555
Received 21 Likes on 15 Posts
Originally Posted by golfbananajam
How long is "for the time being", and at what point do we say enough is enough? Or will we, as usual, realize too late that the human race has become superfluous?
I have recently been reading Isaac Asimov's "Caves of Steel" books, which are nearly about this topic. They were written in the 1950s, and I think people then would have assumed walking, talking robots by now, but they had very quaint ideas about a lot of future technologies - like viewing books on reels of film (that's an Asimov one). In other words, we don't progress the way we think we will.

The Large Language Models like GPT are the kind of thing that's been waiting to pop out for quite a long time - they're "just" neural networks that have "just" been trained, at great expense, on a huge amount of input information. Their intelligence is possibly a result of there being so much written out there already. It's a bit like being faced with a book that has all of recorded knowledge in it: instead of searching it with Google, we can now ask it to look things up like a librarian. So librarians might have less book-searching to do. On the other hand, you cannot really trust its answers unless you're an expert already and know when it's producing false results. Programmers use it to write small programs but usually have to correct them, for example. If I asked it for medical advice I'd be taking my life in my hands.

I think the people who are studying the things needed for more general intelligence are those like the Human Brain Project. They're beavering away now, and IMO it's people like them whose work is quietly advancing the state of "real" or recognisable intelligence. It could be a long time before something comes of those efforts, but when it comes it will seem sudden, just like the new things we're gawking at today. I think you cannot say when "a long time" is, but I would guess at half a lifetime as a reasonable time not to worry too much.

At some point we will need universal basic income, and possibly a limitation on robots just as in Asimov's stories, if we are not to become useless and effete. The problem will be that we won't have a world government to enforce this, nor anything as smart as the Three Laws, and the building of robots to do harm will be far too tempting. Skynet would start to be a possibility then. We will have to build AI battle computers and robots simply because of the threat that others might do it and put us at their mercy.

In the meantime, the advantage of what we have now will be in applying these things to help humans do work rather than replacing them. Let's put it this way: your AI fighter will be great until some weirdness in the machine learning makes it nosedive into a building - so it will probably only be useful where that kind of danger is less than the advantage it gives, "for now".

To be fair, I'm quite good at talking as if I'm an expert, but I'm just another person with a lowish Computer Science degree and some reading. It's all just my opinion. You're seeing a bandwagon in the making - people will want to hype it to promote themselves, their companies and so on. You have to be careful about both rejecting and accepting everything about it.

Last edited by t43562; 11th Apr 2023 at 15:39. Reason: Clarification
t43562 is offline  
Old 11th Apr 2023, 15:53
  #9
Ecce Homo! Loquitur...
Thread Starter
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,452
Received 1,612 Likes on 737 Posts
The point here is that they are not trying to build machines that “think” like human beings and are self-aware.

One of the main advantages of self-learning machines, whether in designing components, playing games or folding proteins is that they “think” differently and avoid the blind spots in human cognition.

https://www.nature.com/articles/d41586-020-03348-4

https://www.nasa.gov/feature/goddard...ssion-hardware
ORAC is offline  
Old 11th Apr 2023, 16:20
  #10
 
Join Date: Nov 2009
Location: London
Posts: 555
Received 21 Likes on 15 Posts
Originally Posted by ORAC
The point here is that they are not trying to build machines that “think” like human beings and are self-aware. […]
IMO they're just making a virtue out of necessity - they cannot make machines that can consider their situation, decide what the problems are and plan to solve them. So they take a network and throw millions of simulated or real events at it until it stops failing. Hey presto, it thinks "differently", yay.
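For a feel of what that loop looks like, here's a crude toy version (the "sim" and its reward numbers are invented - the real thing runs millions of far richer simulations):

Code:
# Toy reinforcement-learning loop: run the same simulated encounter over
# and over, nudging each action's value towards whatever scored well.
# No understanding anywhere - just repetition until it stops failing.
import random

actions = ["turn_left", "turn_right", "climb", "fire"]
q = {a: 0.0 for a in actions}           # running value estimate per action

def simulate(action):
    # Stand-in for a combat sim: 'fire' usually pays off, sometimes not.
    return 1.0 if action == "fire" and random.random() < 0.7 else -0.1

for episode in range(100_000):          # "millions of events", scaled down
    a = random.choice(actions) if random.random() < 0.1 else max(q, key=q.get)
    q[a] += 0.01 * (simulate(a) - q[a]) # nudge estimate towards the outcome

print(max(q, key=q.get))                # ends up preferring 'fire'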

Whatever's left in the network is a kind of learning, but it can be deceptive. You might think that the aircraft is identifying the enemy in some clever way, but it might just be shooting at the first thing coming towards it. It's potentially quite difficult to work out whether you've got something safe, or something that's just waiting for a situation you never gave it before to blow up in your face.

There are ways around this, but at the moment you cannot ask the AI to explain itself. I should think there's a lot of effort going on now to find ways to "get the explanation" out of such systems and make that problem go away.
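One of the simpler tricks is just to wiggle each input and see which ones actually move the output - a poor man's explanation. A sketch, where the "model" is an invented stand-in for a trained network:

Code:
# Crude sensitivity probe: nudge each input feature and measure how much
# the output moves. Whatever dominates is what the network really uses -
# which may not be what you hoped it had learned.

def model(features):
    # Stand-in for a trained network that secretly keys on closing speed,
    # not on the IFF response we assumed it was using.
    return 0.9 * features["closing_speed"] + 0.1 * features["iff_response"]

baseline = {"closing_speed": 0.8, "iff_response": 0.2, "altitude": 0.5}
base_out = model(baseline)

for name in baseline:
    nudged = dict(baseline, **{name: baseline[name] + 0.1})
    print(f"{name}: output moved by {model(nudged) - base_out:+.3f}")
# closing_speed dominates -> "shooting at the first thing coming towards it"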
t43562 is offline  
Old 11th Apr 2023, 16:37
  #11
 
Join Date: Apr 2010
Location: Herefordshire
Posts: 777
Received 590 Likes on 210 Posts
Originally Posted by ORAC
The point here is that they are not trying to build machines that “think” like human beings and are self-aware. […]
I'm curious about how well AI copes when faced with circumstances that are abnormal. For example, there have been occasions when relatively minor sensor or data processing faults have confused human pilots so much that an essentially serviceable airliner has crashed. Would an AI do any better when presented with illogical or contradictory data? I imagine such faults are relatively common and in most cases a human pilot is able to use other clues to troubleshoot the problem - clues that AI may be oblivious to.
Video Mixdown is offline  
Old 11th Apr 2023, 16:54
  #12
 
Join Date: Jan 2008
Location: Glorious Devon
Posts: 2,700
Received 965 Likes on 570 Posts
Conversely, AI may be less likely to be confused in the first place, as it can deal with far more simultaneous inputs and is less likely to repeat the same input expecting a different outcome.
Ninthace is offline  
Old 11th Apr 2023, 17:10
  #13
 
Join Date: Nov 2000
Location: UK
Age: 69
Posts: 1,407
Received 40 Likes on 22 Posts
Hmm..... medals, bravery, cowardice, pacifism. Could these very human attributes find a place in the machine world? Would they be recognised as virtues or vices?
beardy is online now  
Old 11th Apr 2023, 17:13
  #14
 
Join Date: Oct 2022
Location: Home
Posts: 12
Received 1 Like on 1 Post
Originally Posted by t43562
I have recently been reading Isaac Asimov's "Caves of Steel" books, which are nearly about this topic. […]
Two points:

The only storage available in the 50s/60s was perforated paper/card or magnetic tape, so in books from that period you get vast megabrain computers depending on miles of mag tape.

If we ever emulate a conscious brain in software, we will be faced with no end of ethical rather than technical problems.
DodgyGeezer is offline  
Old 11th Apr 2023, 18:18
  #15
 
Join Date: Jul 2003
Location: uk
Posts: 1,042
Likes: 0
Received 0 Likes on 0 Posts
Felt a bit sad for the player.
BigEndBob is offline  
Old 11th Apr 2023, 18:29
  #16
 
Join Date: Jan 2003
Location: Southampton
Posts: 859
Received 48 Likes on 23 Posts
If AI is that smart, then it would be finding a way to talk to the enemy's AI and learn its secrets.

Of course the enemy's AI would be doing the same.

Therefore would they cancel each other out and declare a draw?
Saintsman is offline  
Old 11th Apr 2023, 19:02
  #17
Guest
 
Join Date: Dec 2004
Posts: 1,264
Received 180 Likes on 106 Posts
The other thing to bear in mind with AI is that most of the successful models are trained on huge datasets, and one thing there is an absence of in warfare is large amounts of information about how the enemy is likely to behave. "Niche" skills (see the DARPA AlphaDogfight trials) are quite trainable because it's a bit like playing chess - physics provides a set of rules. The same doesn't apply to more general warfare. Where AI is likely to be useful is in hoovering up large volumes of data and pre-processing it for interpretation - real-time imagery analysis from multiple satellites in multiple spectra is a great example of something that humans couldn't keep up with but an AI would eat for breakfast.
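The shape of that pre-processing is simple enough to sketch - score everything, discard the boring 99%, queue the rest for a human. The scoring model and filenames below are invented placeholders:

Code:
# Toy imagery triage: an AI scores incoming satellite frames for "something
# changed here" and only the top handful ever reach a human analyst.
import heapq, random

def change_score(frame):
    return random.random()   # stand-in for a real change-detection model

frames = [f"sat3_pass{n:04d}.img" for n in range(10_000)]
scored = ((change_score(f), f) for f in frames)
for score, frame in heapq.nlargest(20, scored):  # humans see 20, not 10,000
    print(f"{score:.3f}  {frame}  -> analyst queue")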
PPRuNeUser0211 is offline  
Old 11th Apr 2023, 19:11
  #18
 
Join Date: Apr 2009
Location: Wherever it is this month
Posts: 1,792
Received 78 Likes on 35 Posts
I would offer that the principles of air combat, and the tactics which have been developed to deal with various kinds of air and ground threats, are at their core fairly simple. The difficulty is executing them under the extreme physical and mental stresses of a fighter cockpit while maintaining a sufficient degree of situational awareness. To deal with that problem, we have rigorous selection and training programmes to find the elite few who can make reasonable decisions most of the time under such conditions. Then there is the elite within the elite, the Warfare Centres, EWIs and QWIs, boiling down data and lessons learned into rules of thumb to keep things manageable for the junior elite. Anyone who's sat through mass debriefs of the air battle at the likes of RED FLAG will know that there are only so many ways such battles play out, and while the playing board is not constrained in the same way as in Go or chess, the patterns which emerge are consistent and therefore readily learnable by machines. And once the machines have learned them (or, in the DARPA case, derived them from first principles), they won't ever get maxed out or find their engagement timelines getting squeezed. They won't need rules of thumb and so can be more precise in how they execute tactics, the ruthless accuracy of the head sector gun shot being a prime example.

The main difficulty as I see it will be in sensing: building the information needed to feed the algorithm. The DARPA trial allowed the robots 100% unbroken awareness of the human's position and vector, which is obviously unrealistic. But there is already a very good argument that this limitation only applies WVR; when we talk BVR, humans are entirely reliant on sensors and comms links which an AI would have little difficulty interpreting. You could even have human controllers issuing verbal instructions over R/T: "Alexa, engage group bullseye 170/43" (see the sketch below). And I'm sure that our finest legal minds could conjure up an argument that human input needn't be of the unbroken, real-time variety. We've already been there with Brimstone, after all.
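Turning such an instruction into a machine tasking is the trivial part, incidentally - a few lines of parsing (the order format here is invented); the hard part is trusting what the machine does with it:

Code:
# Toy parser for a verbal-style tasking order (invented format).
import re

def parse_order(text):
    m = re.match(r"engage group bullseye (\d{3})/(\d+)", text.lower())
    if not m:
        return None
    return {"task": "engage",
            "bearing_deg": int(m.group(1)),
            "range_nm": int(m.group(2))}

print(parse_order("Engage group bullseye 170/43"))
# {'task': 'engage', 'bearing_deg': 170, 'range_nm': 43}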

To ethical objectors, I would pose a question: if (heaven forbid) WW3 should break out, perhaps over an island off China, and the technology existed to field an AI driven fighter, and it could be rigidly geofenced into a set operating area (let's call it the South China Sea FAOR) to avert the possibility of it taking over the world, would it be ethical to choose not to use that technology, and instead put human pilots in harm's way in the full knowledge that they were less capable in the lethal BVR battle which would ensue? I would suggest that the British public would take an even dimmer view of that than of the Snatch Land Rover debacle.
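For what it's worth, the geofencing itself is the easy bit: a point-in-polygon check run every cycle, with a hard abort if it ever comes back false. A minimal sketch, with invented coordinates:

Code:
# Minimal geofence: ray-casting point-in-polygon test. A real system would
# add altitude limits, margins and a failsafe, but the core check is this.
def inside(lat, lon, fence):
    hits = False
    for (lat1, lon1), (lat2, lon2) in zip(fence, fence[1:] + fence[:1]):
        if (lon1 > lon) != (lon2 > lon):   # edge spans this longitude
            if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                hits = not hits
    return hits

# Invented box roughly over the South China Sea.
FAOR = [(6.0, 108.0), (6.0, 118.0), (20.0, 118.0), (20.0, 108.0)]
print(inside(12.0, 113.0, FAOR))   # True  - inside the area, carry on
print(inside(25.0, 113.0, FAOR))   # False - outside: abort and RTB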

Last edited by Easy Street; 11th Apr 2023 at 20:31.
Easy Street is online now  
Old 12th Apr 2023, 07:31
  #19
 
Join Date: May 2003
Location: SAUDI
Posts: 462
Received 13 Likes on 9 Posts
Not quite AI, but if memory serves, Dale Brown's "Day of the Cheetah" featured using the human mind as an interface to the computer.
finestkind is offline  
Old 12th Apr 2023, 08:00
  #20
Ecce Homo! Loquitur...
Thread Starter
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,452
Received 1,612 Likes on 737 Posts
Originally Posted by Easy Street
(let's call it the South China Sea FAOR)
Populate with AI driven fighters and call it what it would be - a Kill Box.
ORAC is offline  

