Do you give it the basics and expect it to learn as it fights? A lot of pilots have died attempting that over the course of many wars. And if you do give it the ability to learn, will it be able to pass what it learns to other AI aircraft via a data link, or will each aircraft be stuck learning as it goes? To draw a parallel with science: evolutionary algorithms are now used to find new drugs and design methods because the computer has no preconceived ideas, and they are finding solutions where humans never even thought to look. Which brings us back to Banger's comment that the AI was doing things he had never seen or fought before. And of course the computer can learn through millions of simulations in hours.

The main thing to note here is that the fight took place in an environment where the computer knew where the enemy was at all times. Put it in an environment where the inputs are radar/IR/visual sensors with blind spots and, initially, it may not be as effective as a human who can extrapolate - but remember, they said the same thing about chess computers. To the computer everything is taking place in ultra slow time - it can change its mind a million times a second.

The first iterations might be a pilot-assistance mode that steps in to save the aircraft from entering a position where it can be shot, or a snapshot gun mode that takes control to steer the nose and fire a burst for a kill - which is where the AI scored its kills in this trial. The next step, building on current software that senses pilot incapacitation and takes over control, may be to press a button and let the AI take over the fight with the pilot monitoring. The step after that is incorporating the software into smart wingmen who are, like attack dogs, let off the leash and sent into the fight by the pilot of the command aircraft.

https://link.springer.com/article/10...21-020-04832-8
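The evolutionary-search idea mentioned above (no preconceived ideas, just selection pressure on a scored population) can be sketched in a few lines. This is a toy example with a made-up fitness function - real applications score drug candidates or airframe parameters instead of bit strings - and the names and parameters are illustrative only:

```python
import random

random.seed(0)

TARGET = [1] * 20  # toy "design" the search should discover

def fitness(genome):
    # Score a candidate: count positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=60):
    # Start from random candidates with no built-in assumptions.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # selection: keep the best half
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children         # survivors plus mutated offspring
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward the maximum of 20
```

The point of the sketch is that nothing in the loop encodes *how* to solve the problem, only how to score attempts - which is why such searches can land on solutions a human designer would never have tried.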
Originally Posted by pr00ne
(Post 10868436)
Correct right up until Hiroshima and Nagasaki.
After those two events the whole game changed. The Cold War concept of MAD was not based on gaining and holding territory in any way, shape or form. If the Allies had possessed enough atomic bombs in 1944 or '45 then D-Day would not have been necessary: Germany would have been a ruined and wrecked country after a half dozen 'Hiroshimas' and no land invasion would have been needed. Japan wasn't invaded and conquered, it was occupied after surrendering. MAD was based around both sides being unable to win (and control territory); there would have been no winners, both sides were guaranteed to lose. The clue is in the name.
I think Banger was talking about forward-quarter gun attacks, which is where the Heron scored nearly all its hits and which USAF pilots (and pretty much those of every modern air force) are prohibited from training for due to safety risks. It looked to me like the AI's BFM was pretty weak compared to Banger, actually. It certainly wasn't any better.
Originally Posted by monkey416
(Post 10868603)
I think Banger was talking about forward-quarter gun attacks, which is where the Heron scored nearly all its hits and which USAF pilots (and pretty much those of every modern air force) are prohibited from training for due to safety risks. It looked to me like the AI's BFM was pretty weak compared to Banger, actually. It certainly wasn't any better.
That suggests the central purpose of the air combat training has been subjugated to 'safety risks'. I doubt a serious enemy would be similarly bound. Sadly, after 75 years of peacetime with only punching-bag opponents, the military is no longer primarily a fighting force. Perhaps we need a vastly smaller, wholly professional military, which would again make effective fighting the core mission.
Originally Posted by etudiant
(Post 10870383)
That suggests the central purpose of the air combat training has been subjugated to 'safety risks'.
I doubt a serious enemy would be similarly bound. Sadly, after 75 years of peacetime with only punching-bag opponents, the military is no longer primarily a fighting force. Perhaps we need a vastly smaller, wholly professional military, which would again make effective fighting the core mission.

Anyone with any time in fighters understands that there's a serious risk associated with ahead-of-135° gun employment, with near-zero training payoff. There's a very good reason why it's not done in training. You'd literally lose more aircraft in accidents than enemies you would take out with that added marginal skillset in combat.
Top Gun instructor's view:
https://www.msn.com/en-us/news/techn...?ocid=msedgdhp
There is a lot of hype around what was just a technology demonstration. Until AI can explain how it wins, there is little value for operations or training.
So take two of these AI-enabled 'aircraft' and start a fight: which one wins, when (fuel, time), and how would we know?
Originally Posted by monkey416
(Post 10870686)
No such limitation would exist in combat and I don't think there is any air force out there that practices forward quarter gunnery in the air to air environment.
Anyone with any time in fighters understands that there's a serious risk associated with ahead-of-135° gun employment, with near-zero training payoff. There's a very good reason why it's not done in training. You'd literally lose more aircraft in accidents than enemies you would take out with that added marginal skillset in combat.

That might avoid embarrassing results such as we just saw: a 'top gun' surprised by a tactic he had never expected before it happened in real life. The 'near-zero training payoff' clearly applies only if the eventual opponent holds the same opinion about this tactic. It does seem the rules of engagement need to be rethought here; the training fails to reflect reality.
'… surprised by a tactic he never expected before …'

Would AI think of firing an out-of-envelope 'Winder' (Phantom), jettisoning the drag chute (Vulcan), or making a low, fast run-out to drop a 1,000 lb retard bomb (Buccaneer)? The tactic may not win the fight, but surprise and distraction (in AI terms: not programmed for) could switch the focus from attack to defence - advantage to the defender.

'This simulation was the equivalent of combat and the human pilot was unprepared for it. … the training fails to reflect reality …'

Capt Kirk, Star Trek, the Kobayashi Maru simulation: a no-win scenario whose training objective is to assess human reaction under stress and probable death. Kirk hacks the simulator and everyone survives. The crew fails the training course for not following the rules - but they would win the war.

He who sees first wins - with AI this becomes a battle of sensors: how to blind the AI. He who understands first wins - AI lacks context.
https://www.defensenews.com/congress...chine-teaming/
AI’s dogfight triumph a step toward human-machine teaming

WASHINGTON ― Human fighter pilots, your jobs are safe for now. Weeks after an artificial intelligence algorithm defeated a human pilot in a simulated dogfight between F-16 jets, the Pentagon’s director of research and engineering for modernization said Thursday at the Defense News Conference that it’s more likely an AI will team with military pilots than replace them.

“I don’t see human pilots being phased out, I see them being enhanced, not physically, but I see their work, their effectiveness being enhanced by cooperation with artificial intelligence systems,” said Mark Lewis, who also serves as the acting deputy undersecretary of defense for research and engineering.

The AlphaDogfight Trials in August marked the finale of the Pentagon research agency’s AI air combat competition. The now-notorious algorithm, developed by Heron Systems, easily defeated the fighter pilot in all five rounds that capped off a yearlong competition hosted by the Defense Advanced Research Projects Agency ― which is overseen by Lewis and the Defense Department’s research and engineering shop.

“The key takeaway from that was the artificial intelligence system did so well because it wasn’t so concerned about self-preservation, it was willing to do things that a human pilot wouldn’t do. And that’s the advantage of artificial intelligence,” Lewis said. “I think the real answer is teaming AI with a human for the best combination of both. So I’m pretty confident we’re going to have human pilots into the future.” …

Fiscal 2023 will see the first in a yearlong series of trials using tactical fighter-class aircraft (currently L-39 trainers), with safety pilots on board to assist in case of trouble. Those pilots would be given “higher cognitive level battle management tasks while their aircraft fly dogfights,” all while sensors gauge the pilot’s attention, stress and trust in the AI, Adams said.
DARPA foresees a single human pilot serving as a mission commander in a manned aircraft, orchestrating multiple autonomous, unmanned platforms that would all be engaged in individual tactics. ACE would ultimately deliver that capability. “ACE, therefore, seeks to create a hierarchical framework for autonomy in which higher-level cognitive functions (e.g., developing an overall engagement strategy, selecting and prioritizing targets, determining best weapon or effect, etc.) may be performed by a human, while lower-level functions (i.e., details of aircraft maneuver and engagement tactics) is left to the autonomous system,” Adams said. “In order for this to be possible, the pilot must be able to trust the autonomy to conduct complex combat behaviors in scenarios such as the within-visual-range dogfight before progressing to beyond-visual-range engagements.” …

But Esper warned that both Russia and China were pursuing fully autonomous systems, and drew a distinction between them and what he described as the U.S. military’s ethically guided approach to AI. “At this moment, Chinese weapons manufacturers are selling autonomous drones they claim can conduct lethal targeted strikes,” he said. “Meanwhile, the Chinese government is advancing the development of next-generation stealth UAVs, which they are preparing to export internationally.”
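The hierarchical split DARPA describes - human issues the high-level commands, autonomy flies the low-level maneuvering - can be illustrated with a deliberately simplified sketch. All class and function names here are invented for illustration; this is not DARPA's or ACE's actual software, and "distance" stands in for real engagement geometry:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensed contact; distance is in arbitrary toy units."""
    name: str
    distance: float

class Autonomy:
    """Low-level layer: closes on whatever target the commander assigns."""
    def __init__(self):
        self.target = None

    def assign(self, track):
        # High-level command arriving from the human mission commander.
        self.target = track

    def step(self):
        # One low-level control cycle: maneuver toward the assigned target.
        if self.target and self.target.distance > 0:
            self.target.distance = max(0.0, self.target.distance - 1.0)

def commander_policy(tracks):
    # Human-level decision: prioritize the nearest threat.
    return min(tracks, key=lambda t: t.distance)

tracks = [Track("bandit-1", 5.0), Track("bandit-2", 3.0)]
ai = Autonomy()
ai.assign(commander_policy(tracks))   # human picks the priority target...
for _ in range(10):
    ai.step()                         # ...autonomy flies the engagement
print(ai.target.name, ai.target.distance)  # prints "bandit-2 0.0"
```

The design point matches the quoted framework: the commander layer only selects *what* to engage, while the autonomy layer owns *how*, so trust can be built at the maneuvering level before the human delegates more.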
The schedule for the trials does not seem particularly urgent, surprisingly so in light of the demonstrated effectiveness of drones both in actual combat and in simulations.
Meanwhile, China has a good export business to the Mediterranean and the Middle East, as well as a hugely profitable small-drone business selling mostly to the US consumer market. If this is a race, I don't think the USAF is winning.
A friend of my son wrote “Predator Empire: Drone Warfare and Full Spectrum Dominance”. When I last spoke to him at length, he said it would take about 10 years for drones combined with AI to fully beat manned systems. So give it about another 5 years and we'll see what the technology can do.
Copyright © 2024 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.