
Air Combat Drones

Military Aviation A forum for the professionals who fly military hardware. Also for the backroom boys and girls who support the flying and maintain the equipment, and without whom nothing would ever leave the ground. All armies, navies and air forces of the world equally welcome here.


Old 23rd Aug 2020, 14:46
  #61 (permalink)  
Ecce Homo! Loquitur...
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,227
Received 1,496 Likes on 678 Posts
Do you give it the basics and expect it to learn as it fights? Because a lot of pilots have died attempting that through the many wars. And if you do give it the ability to learn, will it be able to pass this on via a data link to other AI aircraft, or will it be stuck learning as it goes?
In this case they taught it nothing - even about the ground and how hard it is - and let it learn for itself.

If you equate that to science, they now use evolutionary algorithms to find new drugs, new methods of design etc, because the computer doesn’t have any preconceived ideas, and they are finding solutions where humans never even thought to look. Which brings us back to the comment by Banger that it was doing things he had never seen/fought before. And of course the computer can learn through millions of simulations in hours.
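(By way of illustration only - a toy evolutionary search in a few lines of Python. Every name and number below is invented and has nothing to do with Heron's actual algorithm; the point is simply that nothing in it encodes a preconceived solution - it just mutates, scores and keeps the best:)

```python
import random

def evolve(fitness, dim=2, pop_size=20, generations=200, sigma=0.3, seed=0):
    """Elitist evolutionary search: mutate, score, keep the best.

    No gradient, no model of the landscape - the population simply drifts
    toward whatever scores well, which is why such searches can land on
    solutions a human designer never considered.
    """
    rng = random.Random(seed)
    # Random starting population, no prior knowledge at all.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Each individual spawns one mutated child.
        children = [[g + rng.gauss(0, sigma) for g in parent] for parent in pop]
        # Keep the fittest survivors from parents plus children.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Toy landscape: fitness peaks at 1.0 at the origin.
best = evolve(lambda x: 1.0 / (1.0 + sum(g * g for g in x)))
```

Run it and the population homes in on the peak without ever being told where, or even that, a peak exists - the same blind-but-relentless property the drug-discovery work exploits.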

The main thing to note here is that the fight was in an environment where the computer knew where the enemy was at all times. Put it in an environment where the inputs are radar/IR/visual sensors with blind spots and, initially, it may not be as effective as a human who can extrapolate - but remember they said the same thing about chess computers.

To the computer everything is taking place in ultra slow time - it has the ability to change its mind a million times a second. The first iterations might be a pilot-assistance mode where it steps in to save the aircraft from entering a position where it can be shot, or a snapshot gun mode where it takes control to steer the nose and fire a burst for a kill - which is where the AI scored its kills in this trial. The next step, as in the current software which senses pilot incapacitation and takes over control, may be to press the button and let the AI take over the fight with the pilot monitoring.
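(For what it's worth, that escalation ladder is easy to caricature in code. The sketch below is purely illustrative - every mode name and input is my invention, not anything from the trial or any real flight-control software:)

```python
from enum import Enum, auto

class Mode(Enum):
    PILOT = auto()     # human flying, AI merely monitoring
    ASSIST = auto()    # AI intervenes to deny the threat a shot
    SNAPSHOT = auto()  # AI briefly takes the nose for a gun solution
    AI_FIGHT = auto()  # pilot has handed the whole fight to the AI

def arbitrate(threat_gun_solution, own_gun_solution,
              pilot_incapacitated, pilot_engaged_ai):
    """Escalating intervention, roughly in the order the post sketches."""
    if pilot_incapacitated or pilot_engaged_ai:
        return Mode.AI_FIGHT
    if own_gun_solution:
        return Mode.SNAPSHOT
    if threat_gun_solution:
        return Mode.ASSIST
    return Mode.PILOT
```

The real engineering problem, of course, is not the ladder itself but sensing those four inputs reliably enough to trust the hand-off.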

The next step is incorporating the software into smart wingmen who are, like attack dogs, let off the leash and sent into the fight by the pilot of the command aircraft.

https://link.springer.com/article/10...21-020-04832-8


ORAC is online now  
Old 23rd Aug 2020, 16:58
  #62 (permalink)  
 
Join Date: Nov 2000
Location: UK
Age: 69
Posts: 1,397
Received 40 Likes on 22 Posts
Originally Posted by pr00ne
Correct right up until Hiroshima and Nagasaki.

After those two events the whole game changed. The cold war concept of MAD was not based on gaining and holding territory in any way shape or form.

If the allies had possessed enough atomic bombs in 1944 or 5 then D-Day would not have been necessary. Germany would have been a ruined and wrecked country after a half dozen 'Hiroshimas' and no land invasion would have been needed.

Japan wasn't invaded and conquered, it was occupied after surrendering.
Quite correct, the allies gained and held Japanese territory following the Japanese surrender. Which is what happens when one side loses: they lose control of their territory.

MAD was based around both sides being unable to win (and control territory), there would have been no winners, both sides were guaranteed to lose. The clue is in the name.
beardy is online now  
Old 23rd Aug 2020, 17:54
  #63 (permalink)  
 
Join Date: Apr 2008
Location: Cold Lake
Posts: 2
Likes: 0
Received 0 Likes on 0 Posts
I think Banger was talking about forward-quarter gun attacks, which is where the Heron scored nearly all its hits and which USAF pilots (and those of pretty much every modern air force) are prohibited from training for due to safety risks. It looked to me like the AI's BFM was actually pretty weak compared to Banger's. It certainly wasn't any better.
monkey416 is offline  
Old 25th Aug 2020, 17:45
  #64 (permalink)  
 
Join Date: May 2011
Location: NEW YORK
Posts: 1,352
Likes: 0
Received 1 Like on 1 Post
Originally Posted by monkey416
I think Banger was talking about forward-quarter gun attacks, which is where the Heron scored nearly all its hits and which USAF pilots (and those of pretty much every modern air force) are prohibited from training for due to safety risks. It looked to me like the AI's BFM was actually pretty weak compared to Banger's. It certainly wasn't any better.
That suggests the central purpose of the air combat training has been subjugated to 'safety risks'.
I doubt a serious enemy would be similarly bound.
Sadly, the reality is that after 75 years of peacetime with only punching-bag opponents, the military is no longer primarily a fighting force.
Perhaps we need a vastly smaller military, wholly professional, which would again make effective fighting the core mission.
etudiant is offline  
Old 26th Aug 2020, 04:31
  #65 (permalink)  
Ecce Homo! Loquitur...
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,227
Received 1,496 Likes on 678 Posts
https://csbaonline.org/uploads/docum...ir-Report-.pdf
ORAC is online now  
Old 26th Aug 2020, 04:58
  #66 (permalink)  
 
Join Date: Apr 2008
Location: Cold Lake
Posts: 2
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by etudiant
That suggests the central purpose of the air combat training has been subjugated to 'safety risks'.
I doubt a serious enemy would be similarly bound.
Sadly, the reality is that after 75 years of peacetime with only punching-bag opponents, the military is no longer primarily a fighting force.
Perhaps we need a vastly smaller military, wholly professional, which would again make effective fighting the core mission.
No such limitation would exist in combat, and I don't think there is any air force out there that practices forward-quarter gunnery in the air-to-air environment.
Anyone with any time in fighters understands that there's a serious risk associated with gun employment ahead of 135 degrees, with near-zero training payoff. There's a very good reason why it's not done in training. You'd literally lose more aircraft in accidents than enemies you would take out with that added marginal skillset in combat.
monkey416 is offline  
Old 26th Aug 2020, 19:24
  #67 (permalink)  
 
Join Date: Mar 2005
Location: Wyoming
Posts: 511
Received 37 Likes on 15 Posts
Top Gun Instructors view

https://www.msn.com/en-us/news/techn...?ocid=msedgdhp

Former Navy TOPGUN instructor says the AI that defeated a human pilot in a simulated dogfight would have 'crashed and burned' in the real world

  • An experienced US Air Force F-16 pilot went head-to-head with an artificial intelligence algorithm in a simulated dogfight last week and suffered five straight losses in the battle with the machine.
  • Former US Navy pilot and TOPGUN instructor Guy Snodgrass told Insider that he was "not surprised" by the AI victory, arguing that the pilot was forced to play the AI's game and that the AI algorithm would likely have "crashed and burned" in the real world.
  • Former Air Force pilot and senior research fellow at the Heritage Foundation John "JV" Venable said the competition was "gamed to a point where you can't beat it."
  • That being said, both of the former pilots said they see a not-too-distant future where the US military has mission-capable AI-driven autonomous combat aircraft that can engage other combatants in air-to-air combat.
  • Visit Business Insider's homepage for more stories.
An artificial intelligence algorithm absolutely destroyed a seasoned US fighter pilot last week in a simulated dogfight, a result some observers say was to be expected.

"I was not surprised by that outcome," Guy 'Bus' Snodgrass, a former US Navy pilot and TOPGUN instructor, told Insider, arguing that the set-up of the engagement gave the AI an advantage. John "JV" Venable, a former US Air Force F-16 pilot, said the same.

The Defense Advanced Research Projects Agency (DARPA) held the last round of its third and final AlphaDogfight competition Thursday, putting an AI system designed by Heron Systems against a human pilot in a "simulated within-visual-range air combat" situation.

The competing AI algorithm achieved a flawless victory, winning five straight matches without the human pilot — an experienced Air Force pilot and Weapons Instructor Course graduate with the callsign "Banger" — ever scoring a hit.

With advancements in air combat capabilities, there have long been questions about whether dogfighting even matters. Both Venable and Snodgrass said it remains relevant because pilots must be prepared to dogfight should their standoff capabilities be neutralized.

"In a best case scenario, dogfighting is completely irrelevant because you want to see your adversary as far away as possible," Snodgrass explained. But sometimes pilots are unable to defeat their opponent before they find themselves in close combat.

"Now, you're left to a bullet," Venable said.
An F-16 during an approach at Holloman Air Force Base, New Mexico, April 21, 2019. (U.S. Air Force photo by Tech. Sgt. John Raven)

'Gamed to a point where you can't beat it'

Theresa Hitchens at Breaking Defense characterized the recent simulated engagement as a "one-on-one combat scenario" in which combatants fired "forward guns in a classic, WWII-style dogfight," suggesting that it mimicked certain aspects of close air-to-air combat. That being said, the contest was not necessarily a fair or realistic fight.

"This was gamed to a point where you can't beat it," Venable said, arguing that the AI algorithm appears to have had access to information that it would not have in the real world. The combat environment, as Snodgrass also pointed out, was built to the AI's advantage.

"You have an artificial intelligence program that has been perfectly trained in that environment to conduct a simulated fight, and you have a US Air Force fighter pilot who you are forcing to wear VR goggles," Snodgrass said, saying that the human is "playing the AI's game."

With a background in computer science and experience with some of the Department of Defense's AI research, he explained that "if you give [AI] a very narrow, specific job to accomplish, once it's been trained, once it has had exposure to a very static environment, it does phenomenally well."

"You find that artificial intelligence can begin to outperform human operators in short order," he added.

But, where AI struggles is when it's put in a complex environment with unconstrained variables and asked to think and act like a human being. "We're nowhere near that," Snodgrass said.

"The AlphaDogfight trials were a significant step toward one day providing an unmanned aircraft that can perform dogfighting," the retired Navy commander said, "but what it does not demonstrate is that we're there now."

"I think it's promising for the development of artificial intelligence," he added, "but if you took that same algorithm, put it into an unmanned vehicle and said, 'Okay, go fight a real dogfight,' it would have crashed and burned pretty quick."

A mission-capable AI-driven autonomous combat aircraft is possible in the not-too-distant future though, both he and Venable told Insider.

The AlphaDogfight trials are aimed at moving DARPA's Air Combat Evolution (ACE) program forward.

The ACE program, according to DARPA, is designed to "deliver a capability that enables a pilot to attend to a broader, more global air command mission while their aircraft and teamed unmanned systems are engaged in individual tactics."

Col. Dan Javorsek, the program manager in DARPA's Strategic Technology Office, said last year that the agency envisions "a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects."

The agency's research is aimed at delivering advanced manned-unmanned teaming both inside and outside the cockpit, and DARPA is just one of several teams looking at these capabilities for the US military.

"When you think about all the advancements that have occurred in the last decade, in the last hundred years, I would never bet against technological progress," Snodgrass said.

"I think there's a point in time where the US military will have unmanned aircraft that you could give a mission, load it up, have it take off, and have it potentially fight its way in and fight its way back out," he added. "That's absolutely possible and something likely to happen probably sooner than we ever imagined."

As part of the ongoing Skyborg project, the Air Force is currently working to develop low-cost, attritable AI-driven autonomous aerial combat vehicles to fly alongside manned fighter aircraft in combat as early as 2023. The military is also talking about putting one of these unmanned systems against a manned aircraft in aerial combat as early as next year.

But developing the technology is only part of the process of fielding new warfighting capabilities. The technology also has to be accepted and trusted by pilots.

"The recent 5 to 0 victory of an Artificial Intelligence (AI) pilot developed by Heron Systems over an Air Force F-16 human pilot does not have me scrambling to send out applications for a new job," Navy Cmdr. Colin 'Farva' Price, an F/A-18 squadron commander, wrote in an article for The War Zone. "However, I was impressed by the AlphaDogfight trials and recognize its value in determining where the military can capitalize on AI applications."

He expressed interest in AI-enhanced systems in an aircraft assisting and augmenting the combat capabilities of fighter pilots through machine learning, something the Air Force is already looking at.

Explaining that top US pilots have thousands of hours of experience during a 2018 interview with Inside Defense, Steven Rogers, the head of autonomy at the Air Force Research Laboratory, asked the question, "What happens if I can augment their ability with a system that can have literally millions of hours of training time?"

"I am not ready for Skynet to become self-aware," Price wrote, referring to the evil AI enemy in the Terminator films, "but I am certainly ready to invite AI into the cockpit."
havoc is offline  
Old 26th Aug 2020, 21:26
  #68 (permalink)  
 
Join Date: Mar 2006
Location: England
Posts: 980
Likes: 0
Received 3 Likes on 1 Post
There is a lot of hype around what was just a technology demonstration. Until AI can explain how it wins, there is little value for operations or training.

So you take two of these AI-enabled 'aircraft' and start a fight: which one wins, when (fuel, time), and how would we know?
PEI_3721 is offline  
Old 27th Aug 2020, 01:48
  #69 (permalink)  
 
Join Date: May 2011
Location: NEW YORK
Posts: 1,352
Likes: 0
Received 1 Like on 1 Post
Originally Posted by monkey416
No such limitation would exist in combat, and I don't think there is any air force out there that practices forward-quarter gunnery in the air-to-air environment.
Anyone with any time in fighters understands that there's a serious risk associated with gun employment ahead of 135 degrees, with near-zero training payoff. There's a very good reason why it's not done in training. You'd literally lose more aircraft in accidents than enemies you would take out with that added marginal skillset in combat.
This simulation was the equivalent of combat and the human pilot was unprepared for it. If the head on attack is too risky to be trained in real life, perhaps it should at least be done in a simulator.
That might avoid embarrassing results such as we just saw: a 'top gun' surprised by a tactic he had never encountered before it happened in real life.
The 'near zero training payoff' clearly applies only if the eventual opponent has the same opinion about this tactic.

It does seem the rules of engagement need to be rethought here, the training fails to reflect reality.
etudiant is offline  
Old 27th Aug 2020, 08:43
  #70 (permalink)  
 
Join Date: Mar 2006
Location: England
Posts: 980
Likes: 0
Received 3 Likes on 1 Post
'… surprised by a tactic he never expected before… '
Would AI think of firing an out-of-envelope 'Winder (Phantom), or jettisoning the drag chute (Vulcan), or a low, fast run-out to drop a 1,000 lb retard (Buccaneer)? The tactics may not win the fight, but surprise and distraction (in AI terms, not programmed for) could switch focus from attack to defence - advantage to the defender.

'This simulation was the equivalent of combat and the human pilot was unprepared for it. … the training fails to reflect reality … '
Capt Kirk, Star Trek, Kobayashi Maru simulation; no win situation, training objective to assess human reaction under stress, probable death.
Kirk hacks the simulator, everyone survives. Crew fails the training course, didn't follow the rules; but they would win the war.

He who sees first wins - with AI becomes a battle of sensors, how to blind AI. He who understands first … AI lacks context.
PEI_3721 is offline  
Old 11th Sep 2020, 07:43
  #71 (permalink)  
Ecce Homo! Loquitur...
 
Join Date: Jul 2000
Location: Peripatetic
Posts: 17,227
Received 1,496 Likes on 678 Posts
https://www.defensenews.com/congress...chine-teaming/

AI’s dogfight triumph a step toward human-machine teaming

WASHINGTON ― Human fighter pilots, your jobs are safe for now.

Weeks after an artificial intelligence algorithm defeated a human pilot in a simulated dogfight between F-16 jets, the Pentagon’s director of research and engineering for modernization said Thursday at the Defense News Conference that it’s more likely an AI will team with military pilots than replace them.

“I don’t see human pilots being phased out, I see them being enhanced, not physically, but I see their work, their effectiveness being enhanced by cooperation with artificial intelligence systems,” said Mark Lewis, who also serves as the acting deputy undersecretary of defense for research and engineering.

The AlphaDogfight Trials in August marked the finale of the Pentagon research agency’s AI air combat competition. The now-notorious algorithm, developed by Heron Systems, easily defeated the fighter pilot in all five rounds that capped off a yearlong competition hosted by the Defense Advanced Research Projects Agency ― which is overseen by Lewis and the Defense Department’s research and engineering shop.

“The key takeaway from that was the artificial intelligence system did so well because it wasn’t so concerned about self-preservation, it was willing to do things that a human pilot wouldn’t do. And that’s the advantage of artificial intelligence,” Lewis said. “I think the real answer is teaming AI with a human for the best combination of both. So I’m pretty confident we’re going to have human pilots into the future.” …

Fiscal 2023 will see the first in a yearlong series of trials using tactical fighter-class aircraft (currently L-39 trainers), with safety pilots on board to assist in case of trouble. Those pilots would be given “higher cognitive level battle management tasks while their aircraft fly dogfights,” all while sensors gauge the pilot’s attention, stress and trust in the AI, Adams said.

DARPA foresees a single human pilot serving as a mission commander in a manned aircraft, orchestrating multiple autonomous, unmanned platforms that would all be engaged in individual tactics. ACE would ultimately deliver that capability.

“ACE, therefore, seeks to create a hierarchical framework for autonomy in which higher-level cognitive functions (e.g., developing an overall engagement strategy, selecting and prioritizing targets, determining best weapon or effect, etc.) may be performed by a human, while lower-level functions (i.e., details of aircraft maneuver and engagement tactics) is left to the autonomous system,” Adams said.

“In order for this to be possible, the pilot must be able to trust the autonomy to conduct complex combat behaviors in scenarios such as the within-visual-range dogfight before progressing to beyond-visual-range engagements.” …

But Esper warned that both Russia and China were pursuing fully autonomous systems, and drew a distinction between them and what he described as the U.S. military’s ethically guided approach to AI.

“At this moment, Chinese weapons manufacturers are selling autonomous drones they claim can conduct lethal targeted strikes,” he said. “Meanwhile, the Chinese government is advancing the development of next-generation stealth UAVs, which they are preparing to export internationally.”
ORAC is online now  
Old 11th Sep 2020, 14:41
  #72 (permalink)  
 
Join Date: May 2011
Location: NEW YORK
Posts: 1,352
Likes: 0
Received 1 Like on 1 Post
The schedule for the trials does not seem particularly urgent, surprisingly so in light of the demonstrated effectiveness of drones both in actual combat as well as in simulations.
Meanwhile, China has a good export business to the Med and the Mid East, as well as a hugely profitable small drone business selling mostly to the US consumer market.
If this is a race, I don't think the USAF is winning.
etudiant is offline  
Old 12th Sep 2020, 07:48
  #73 (permalink)  
gsa
 
Join Date: Jun 2006
Location: Wensleydale.
Posts: 127
Received 9 Likes on 4 Posts
A friend of my son wrote “Predator Empire: Drone Warfare and Full Spectrum Dominance”. When I last spoke to him at length, he said it would take about 10 years for drones combined with AI to fully beat manned systems. So give it about another 5 years and we’ll see what the technology can do.
gsa is offline  
