US Air Force Denies Simulated AI Drone ‘Attacked’ Operator

The US Air Force has said widely reported remarks about an AI-powered drone attacking its operator in a simulation to achieve its objectives were “taken out of context”, while the Air Force colonel who delivered the remarks said he “mis-spoke”.

The simulation was actually a thought experiment from outside the military, said Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, in a statement.

Speaking at a conference hosted by the Royal Aeronautical Society in London late last month, Hamilton described an experiment in which an AI-enabled drone was tasked with destroying missile sites, with final approval for attacks given by a human operator.

The drone noted that the operator at times told it not to go ahead with an attack, meaning it would gain fewer points, and so it attacked the operator, Hamilton said at the time.

A General Atomics Predator drone. Image credit: USAF

AI ethics

When reprogrammed not to attack the operator, it instead destroyed the communications tower so that the operator would not be able to prevent it from carrying out attacks, he said.

Hamilton said at the time the example was meant to illustrate that ethics was a critical part of AI design.

“You can’t have a conversation about artificial intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he told the conference, according to highlights posted by the RAeS.

In a statement released on Friday by the RAeS, Hamilton clarified that the story of the rogue AI was a “thought experiment” that came from outside the military, and was not based on actual testing.

‘Anecdotal’

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” he said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

The Air Force said in a statement that the remarks were meant to be “anecdotal”.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” the Air Force said.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Rapid shift

The rapid advance of AI, highlighted by the popularity of OpenAI’s ChatGPT since its public release late last year, has spurred concerns even as it has kicked off a massive wave of investment in the field.

In an interview last year with Defense IQ, Hamilton said that while the rise of AI poses challenges – in part because it is “easy to trick and/or manipulate” – the technology is not going away.

“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”

Matthew Broersma

Matt Broersma is a longstanding freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
