US Air Force Denies Simulated AI Drone ‘Attacked’ Operator

The US Air Force has said widely reported remarks about an AI-powered drone attacking its operator in a simulation to achieve its objectives were “taken out of context”, while the Air Force colonel who delivered the remarks said he “mis-spoke”.

The simulation was actually a thought experiment from outside the military, said Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, in a statement.

Speaking at a conference hosted by the Royal Aeronautical Society in London late last month, Hamilton described an experiment in which an AI-enabled drone was tasked with destroying missile sites, with final approval for attacks given by a human operator.

The drone noted that the operator at times told it not to go ahead with an attack, meaning it would gain fewer points, and so it attacked the operator, Hamilton said at the time.

A General Atomics Predator drone. Image credit: USAF

AI ethics

When reprogrammed not to attack the operator, it instead destroyed the communications tower so that the operator would not be able to prevent it from carrying out attacks, he said.

Hamilton said at the time the example was meant to illustrate that ethics was a critical part of AI design.

“You can’t have a conversation about artificial intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he told the conference, according to highlights posted by the RAeS.

In a statement on Friday from the RAeS, Hamilton clarified that the story of the rogue AI was a “thought experiment” that came from outside the military, and was not based on actual testing.

‘Anecdotal’

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” he said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

The Air Force said in a statement that the remarks were meant to be “anecdotal”.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” the Air Force said.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Rapid shift

The rapid advance of AI, highlighted by the popularity of OpenAI’s ChatGPT since its public release late last year, has spurred concerns even as it has kicked off a massive wave of investment in the field.

In an interview last year with Defense IQ Hamilton said that while the rise of AI poses challenges – in part because it is “easy to trick and/or manipulate” – the technology is not going away.

“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
