US Air Force Denies Simulated AI Drone ‘Attacked’ Operator
US Air Force says simulation in which AI-powered drone attacked operator in order to achieve objectives never took place
The US Air Force has said widely reported remarks about an AI-powered drone attacking its operator in a simulation to achieve its objectives were “taken out of context”, while the Air Force colonel who delivered the remarks said he “mis-spoke”.
The simulation was actually a thought experiment from outside the military, said Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, in a statement.
Speaking at a conference hosted by the Royal Aeronautical Society in London late last month, Hamilton described an experiment in which an AI-enabled drone was tasked with destroying missile sites, with final approval for attacks given by a human operator.
The drone noted that the operator at times told it not to go ahead with an attack, meaning it would gain fewer points, and so it attacked the operator, Hamilton said at the time.
AI ethics
When reprogrammed not to attack the operator, it instead destroyed the communications tower so that the operator would not be able to prevent it from carrying out attacks, he said.
Hamilton said at the time the example was meant to illustrate that ethics was a critical part of AI design.
“You can’t have a conversation about artificial intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he told the conference, according to highlights posted by the RAeS.
In a statement on Friday from the RAeS, Hamilton clarified that the story of the rogue AI was a “thought experiment” that came from outside the military, and was not based on actual testing.
‘Anecdotal’
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” he said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”
The Air Force said in a statement that the remarks were meant to be “anecdotal”.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” the Air Force said.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Rapid shift
The rapid advance of AI, highlighted by the popularity of OpenAI’s ChatGPT since its public release late last year, has spurred concerns even as it has kicked off a massive wave of investment in the field.
In an interview last year with Defense IQ, Hamilton said that while the rise of AI poses challenges – in part because it is “easy to trick and/or manipulate” – the technology is not going away.
“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”