AI in Combat


ORAC
11th Apr 2023, 10:22
An interesting read.

https://warontherocks.com/2023/04/ais-inhuman-advantage/

AI’S INHUMAN ADVANTAGE

Ninthace
11th Apr 2023, 10:43
As you say, interesting. It is the lack of passion, and therefore the ability to change fighting style to suit circumstance, that stood out for me. Perhaps the other thing is AI's ability to think outside the box, something not traditionally associated with machine intelligence.

ORAC
11th Apr 2023, 11:04
Not stuck in boxes any more: ChatGPT searches the internet, finds data and organises it sensibly - and now they talk and learn from each other…

https://twitter.com/itakgol/status/1645491031071236120?s=61&t=rmEeUn68HhlFHGKbTPQr_A

bobward
11th Apr 2023, 11:20
If they start to call it Skynet, start running for cover......

t43562
11th Apr 2023, 12:47
Don't be overwhelmed by the current crop of AI.

They do much more than play chess, but years ago we saw computers that could beat humans at chess, and despite the hype they couldn't do much else. GPT can predict what words come next based on some input, but that is not quite thinking. It might be a part of thinking, but it is not the whole story.
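Roughly, "predicting the next word" means sampling from a probability distribution over possible continuations. A toy sketch in Python (a hand-written bigram table standing in for the billions of learned weights in a real model):

    import random

    # Hypothetical bigram "model": probability of the next word given the last.
    # A real LLM learns these distributions over an enormous vocabulary.
    model = {
        "the": {"cat": 0.5, "dog": 0.3, "plane": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
    }

    def next_word(prev: str) -> str:
        # Sample the next word from the model's distribution for `prev`.
        candidates = model.get(prev, {"<unknown>": 1.0})
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    print(next_word("the"))  # e.g. "cat" - likely, but not guaranteed

Run it repeatedly and you get plausible continuations, not reasoning - which is the point.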

AlphaStar, AlphaGo's StarCraft-playing sibling, can play StarCraft, and its utility is that it can make lots of sensible choices about many things that are going on where a human might be unable to pay adequate attention. I'd say the disadvantage is that it is heavily tied to the rules it has been given and would have trouble if its actions were really unbounded - like coming over to your house and resetting your computer while you were off getting it a coffee, that kind of out-of-the-box effort to win.

As the article says, performance can be brittle, and that's why after years and years of hype we still aren't being driven around by AI in cars. They still need our hands on the wheel so they can hand control back to us when they're out of their depth, or they operate in a geofenced area they can be somewhat confident about. I think they will end up giving us "special powers" for the time being - the ability to handle details for us or do certain things we cannot do accurately. They will end up being great at shooting down cruise missiles or other "autonomous" things which are less "intelligent", or they will be used in particularly horribly clever cruise missiles, but we will employ them mostly where the hardware wouldn't have had a pilot anyhow.

DodgyGeezer
11th Apr 2023, 14:47
Don't be overwhelmed by the current crop of AI. […]

Which suggests that critical military skills are going to need to shift to Cyber Attack/Defense.....

golfbananajam
11th Apr 2023, 15:06
Don't be overwhelmed by the current crop of AI. […]


How long is "for the time being", and at what point do we say enough is enough? Or will we, as usual, realise too late that the human race has become superfluous?

t43562
11th Apr 2023, 15:32
How long is "for the time being", and at what point do we say enough is enough? Or will we, as usual, realise too late that the human race has become superfluous?

I have recently been reading Isaac Asimov's "Caves of Steel" books, which are very nearly about this topic. They were written in the 1950s, and I think people then would have assumed walking, talking robots by now, but they had very quaint ideas about a lot of future technologies - like viewing books on reels of film (that's an Asimov one). In other words, we don't progress the way we think we will.

The Large Language Models like GPT are the kind of thing that's been waiting to pop out for quite a long time - they're "just" neural networks that have "just" been trained at great expense on a huge amount of input information. Their intelligence is possibly a result of there being so much written out there already. It's a bit like being faced with a book that has all of recorded knowledge in it: instead of searching it with Google, we can now ask it to look things up like a librarian. So librarians might have less book searching to do. On the other hand, you cannot really trust its answers unless you're an expert already and know when it's producing false results. Programmers use it to write small programs, for example, but usually have to correct them. If I asked it for medical advice I'd be taking my life in my hands.

I think the people studying the things needed for more general intelligence are those like the Human Brain Project. They're beavering away now, and IMO it's people like them whose work is quietly advancing the state of "real" or recognisable intelligence. It could be a long time before something comes of those efforts, but when it comes it will seem sudden, just like the new things we're gawking at today. I think you cannot say when "a long time" is, but I would guess at half a lifetime as a reasonable time not to worry too much.

At some point we will need universal basic income, and possibly a limitation on robots just as in Asimov's stories, if we are not to become useless and effete. The problem will be that we won't have a world government to enforce this, nor anything smart like the Three Laws, and building robots to do harm will be far too tempting. Skynet would start to be a possibility then. We will have to build AI battle computers and robots simply because of the threat that others might do it and put us at their mercy.

In the meantime, the advantage of what we have now will be in applying these things to help humans do work rather than replacing them. Let's put it this way: your AI fighter will be great until some weirdness in the machine learning makes it nosedive into a building - so it will probably only be useful where that kind of danger is less than the advantage it gives, "for now".

To be fair, I'm quite good at talking as if I'm an expert, but I'm just another person with a lowish Computer Science degree and some reading. It's all just my opinion. You're seeing a bandwagon in the making - people will want to hype it to promote themselves, their companies and so on. You have to be careful about both rejecting and accepting everything about it.

ORAC
11th Apr 2023, 15:53
The point here is that they are not trying to build machines that "think" like human beings and are self-aware.

One of the main advantages of self-learning machines, whether in designing components, playing games or folding proteins is that they “think” differently and avoid the blind spots in human cognition.

https://www.nature.com/articles/d41586-020-03348-4

https://www.nasa.gov/feature/goddard/2023/nasa-turns-to-ai-to-design-mission-hardware

t43562
11th Apr 2023, 16:20
The point here is that they are not trying to build machines that "think" like human beings and are self-aware. […]

IMO they're just making a virtue of necessity - they cannot make machines that can consider their situation, decide what the problems are and plan to solve them. So they take a network and throw millions of simulated or real events at it until it stops failing. Hey presto, it "thinks differently".
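In miniature, "throw events at it until it stops failing" looks something like this - a toy one-weight "network" in Python, standing in for the millions of parameters in a real system:

    import random

    w = 0.0                               # the entire "network"
    for _ in range(10_000):               # simulated events
        x = random.uniform(-1, 1)
        target = 1.0 if x > 0 else -1.0   # the behaviour we want
        out = 1.0 if w * x > 0 else -1.0  # the behaviour we got
        if out != target:                 # it "failed" - nudge the weight
            w += 0.1 * target * x
    print(f"learned weight: {w:.2f}")     # ends up positive: it mostly stops failing

Nothing in that loop "understands" the task; the weight just drifts until failures become rare - which is exactly why what's left inside can be deceptive.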

Whatever's left in the network is a kind of learning, but it can be deceptive. You might think the aircraft is identifying the enemy in some clever way, but it might just be shooting at the first thing coming towards it. It's potentially quite difficult to work out whether you've got something safe, or something that's just waiting for a situation you didn't train it on to blow up in your face.

There are ways around this, but at the moment you cannot ask the AI to explain itself. I should think there's a lot of effort going on now to find ways to "get the explanation" out of such systems and make that problem go away.
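One crude way people probe a black box today is occlusion testing: knock out each input in turn and see how much the output moves. A sketch, with a made-up stand-in for a trained network:

    def score(inputs: dict) -> float:
        # Hypothetical black box - in reality we wouldn't know
        # it mostly keys off closing speed.
        return 0.9 * inputs["closing_speed"] + 0.1 * inputs["altitude"]

    baseline = {"closing_speed": 1.0, "altitude": 1.0}
    base_out = score(baseline)

    for name in baseline:
        perturbed = dict(baseline, **{name: 0.0})   # zero out one input
        delta = abs(base_out - score(perturbed))
        print(f"{name}: output shifts by {delta:.2f} when removed")

Here closing_speed dominates - evidence that the "clever classifier" really is just shooting at whatever comes towards it fastest.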

Video Mixdown
11th Apr 2023, 16:37
The point here is that they are not trying to build machines that "think" like human beings and are self-aware. […]
I'm curious about how well AI copes when faced with abnormal circumstances. For example, there have been occasions when relatively minor sensor or data-processing faults have confused human pilots so much that an essentially serviceable airliner has crashed. Would an AI do any better when presented with illogical or contradictory data? I imagine such faults are relatively common, and in most cases a human pilot is able to use other clues to troubleshoot the problem - clues that AI may be oblivious to.
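For what it's worth, the classic non-AI defence against contradictory data is voting between redundant sensors: take the median of three independent channels so a single lying one is outvoted. A sketch of the idea (not any particular flight system):

    from statistics import median

    def voted_airspeed(probe_a: float, probe_b: float, probe_c: float) -> float:
        # Median of three channels: one faulty probe cannot win the vote.
        return median([probe_a, probe_b, probe_c])

    print(voted_airspeed(250.0, 251.0, 48.0))   # iced-up probe C is outvoted -> 250.0

An AI would presumably need something like that built in, or learned, before it could "use other clues" the way a human does.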

Ninthace
11th Apr 2023, 16:54
Conversely, AI may be less likely to be confused in the first place, as it can deal with far more simultaneous inputs and is less likely to expect a different outcome from repeatedly performing the same action.

beardy
11th Apr 2023, 17:10
Hmm..... medals, bravery, cowardice, pacifism. Could these very human attributes find a place in the machine world? Would they be recognised as virtues or vices?

DodgyGeezer
11th Apr 2023, 17:13
I have recently been reading Isaac Asimov's "Caves of Steel" books, which are very nearly about this topic. […]

Two points:

The only available storage in the 50s/60s was perforated paper/card or mag tape, so you get vast megabrain computers depending on miles of mag tape in books from that period.

If we ever emulate a conscious brain in software we will be faced with no end of ethical rather than technical problems.

BigEndBob
11th Apr 2023, 18:18
Felt a bit sad for the player.
https://www.youtube.com/watch?v=WXuK6gekU1Y

Saintsman
11th Apr 2023, 18:29
If AI is that smart, then it would be finding a way to talk to the enemy's AI and learn its secrets.

Of course the enemy's AI would be doing the same.

Therefore would they cancel each other out and declare a draw?

PPRuNeUser0211
11th Apr 2023, 19:02
The other thing to bear in mind with AI is that most of the successful models are trained on huge datasets, and one thing warfare lacks is large amounts of information about how the enemy is likely to behave. "Niche" skills (see AlphaDogfight) are quite trainable because it's a bit like playing chess - physics provides a set of rules. The same doesn't apply to more general warfare. Where AI is likely to be useful is hoovering up large volumes of data and pre-processing it for interpretation - real-time imagery analysis from multiple satellites in multiple spectra is a great example of something humans couldn't keep up with, but an AI would eat for breakfast.
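As a flavour of what that pre-processing might look like, here's a toy change-detection pass over fake imagery (numpy arrays standing in for co-registered satellite passes; the tile size and threshold are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    old = rng.random((1024, 1024))   # yesterday's pass (fake data)
    new = old.copy()
    new[512:544, 256:288] += 0.8     # something appeared here overnight

    TILE, THRESHOLD = 32, 0.5
    for r in range(0, new.shape[0], TILE):
        for c in range(0, new.shape[1], TILE):
            diff = np.abs(new[r:r+TILE, c:c+TILE] - old[r:r+TILE, c:c+TILE])
            if diff.mean() > THRESHOLD:
                print(f"analyst attention: tile at row {r}, col {c}")

A million pixels go in; one "look here" line comes out for a human to interpret.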

Easy Street
11th Apr 2023, 19:11
I would offer that the principles of air combat and the tactics which have been developed to deal with various kinds of air and ground threats are, at their core, fairly simple. The difficulty is executing them under the extreme physical and mental stresses associated with a fighter cockpit while maintaining a sufficient degree of situational awareness. To deal with that problem, we have rigorous selection and training programmes to find the elite few who can make reasonable decisions most of the time under such conditions. Then there is the elite within the elite, Warfare Centres, EWIs and QWIs, boiling down data and lessons learned into rules of thumb to keep things manageable for the junior elite. Anyone who's sat through mass debriefs of the air battle at the likes of RED FLAG will know that there are only so many ways that such battles play out, and while the playing board is not constrained in the same way as in go or chess, the patterns which emerge are consistent and therefore readily learnable by machines. And once the machines have learned them (or, in the DARPA case, derived them from first principles), the machines won't ever get maxed out or find their engagement timelines getting squeezed. They won't need rules of thumb and so can be more precise in how they execute tactics, the ruthless accuracy of the head sector gun shot being a prime example.

The main difficulty as I see it will be in sensing, or building the information needed to feed the algorithm. The DARPA trial allowed the robots to have 100% unbroken awareness of the human's position and vector, which is obviously unrealistic. But even today, there is already a very good argument that this limitation only applies WVR; when we talk BVR then humans are entirely reliant on sensors and comms links which an AI would have little difficulty interpreting. You could even have human controllers issuing verbal instructions over R/T: "Alexa, engage group bullseye 170/43". And I'm sure that our finest legal minds could conjure up an argument that human input needn't be of the unbroken, real time variety. We've already been there with Brimstone, after all.

To ethical objectors, I would pose a question: if (heaven forbid) WW3 should break out, perhaps over an island off China, and the technology existed to field an AI driven fighter, and it could be rigidly geofenced into a set operating area (let's call it the South China Sea FAOR) to avert the possibility of it taking over the world, would it be ethical to choose not to use that technology, and instead put human pilots in harm's way in the full knowledge that they were less capable in the lethal BVR battle which would ensue? I would suggest that the British public would take an even dimmer view of that than of the Snatch Land Rover debacle.

finestkind
12th Apr 2023, 07:31
Not quite AI, but using the human mind as an interface for the computer, if memory serves: Dale Brown's "Day of the Cheetah".

ORAC
12th Apr 2023, 08:00
(let's call it the South China Sea FAOR)
Populate it with AI-driven fighters and call it what it would be - a Kill Box.

Sue Vêtements
12th Apr 2023, 12:36
Scene: an AI-powered war somewhere in the future. A crew of black robots has been captured by a team of green robots.


Black Robot Commander: When we win the war you will be brought to account

Captain of the green robots: You can write what you like, you're not going to win this war

Black Robot Commander: Oh yes we are

Captain of the green robots: Oh no you're not

Black Robot Commander: Oh yes we are

Junior green robot: (singing) Whistle while you work, your leader is a twerp, he's half barmy, so's his army, whistle while you...

Black Robot Commander: Your name will also go on the list. What is it?

Captain of the green robots: Don't tell him! Serial number P147226

;)

Personally, I think nature will win in the end.

Ninthace
12th Apr 2023, 13:04
I seem to recall an SF story of two large battlefleets manoeuvring in space, unable to engage because their respective systems had worked out that whoever opened fire first would lose.

ORAC
12th Apr 2023, 13:29
https://youtu.be/h73PsFKtIck

BEagle
12th Apr 2023, 14:34
Remember WarGames?

"A strange game. The only winning move is not to play."

steamchicken
12th Apr 2023, 23:32
Here's a really successful and important example of military AI that came out of the massive early-80s DoD investment widely seen as a failure:

https://en.wikipedia.org/wiki/Dynamic_Analysis_and_Replanning_Tool

Nobody noticed, because it's logistics :-)

India Four Two
13th Apr 2023, 03:05
steamchicken,

Thanks for that really interesting link. It led me down an IT rabbit hole to, among other places, this wiki page:

https://en.wikipedia.org/wiki/Raytheon_BBN

I had never heard of BBN before.

ORAC
16th Apr 2023, 08:59
https://twitter.com/carlbfrey/status/1646800988743823360?s=61&t=rmEeUn68HhlFHGKbTPQr_A

New paper comparing GPT-3.5 & GPT-4 performance on college physics problems.

It shows that in just a few months, AI has made a leap from the 39th to the 96th percentile of human-level performance.

Now imagine where it will be in 10 years.