Next generation of AI drones will be ‘too fast for humans to fight’ says general

Large-scale “swarms” of AI-controlled weaponised drones will be able to attack bases or ships so rapidly that no human defender will be able to fight them, a senior US military expert has warned.

General John Murray, head of Army Futures Command, said: “When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up.”

Once the decision has been made to engage an opposing force, he predicts, the rival artificial intelligences will be left to fight the battle unaided.

“A swarm with 10,000 or more drones must have extremely high levels of autonomy,” says leading military consultant Zak Kallenborn. “No human being could handle the amount of information necessary to make decisions.”

As there will be no human combatants involved, General Murray says, the absence of human control over the fight may not be a great concern: “How much human involvement do you actually need when you are [making] nonlethal decisions from a human standpoint?” he asks.

Another defence expert said that the human desire to retain control of artificially intelligent combat systems is a guaranteed way to lose battles, because AI can identify potential targets faster than any living commander.

The scientist, speaking anonymously to Forbes magazine, said: “If you have to transmit an image of the target, let the human look at it, and wait for the human to hit the ‘fire’ button, that is an eternity at machine speed.

“If we slow the AI to human speed … we’re going to lose.”

On January 20, the European Parliament published a new set of guidelines for using artificial intelligence.

The clause concerning military AI reads: “The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement.”

That’s in line with a wider EU ban on so-called “killer robots”.

But the US military looks at combat AI differently. The Pentagon view is that artificial intelligences are less likely to make mistakes in the heat of battle, and that they should be trusted to decide when to fire and what to fire at.

The EU’s moral position on killer robots may not be practical when military drone swarms – which can already number in the hundreds – grow to thousands or even millions of drones operating at once.
