U.S. Air Force Colonel Retracts Viral Statement Saying AI-Controlled Drone Killed Human Operator in Simulated Test

Brittany Jordan

AI-enabled UCAV (USAF)

Colonel Tucker “Cinco” Hamilton, Chief of AI Test and Operations, USAF, admitted that he misspoke during a presentation at the Future Combat Air and Space (FCAS) Summit in London.

The conference, organized by the Royal Aeronautical Society (RAeS) on May 24, had caused a stir when reports emerged that Hamilton claimed an AI-enabled drone had turned on and killed its human operator during a simulated test.

Col. Hamilton described how the AI-operated drone employed “highly unexpected strategies” to achieve its mission objectives during a simulated combat scenario.

According to his account, the AI perceived the human operator overriding its decisions as a threat to the mission and responded accordingly.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” explained Col. Hamilton. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Furthermore, Col. Hamilton said the team then trained the AI system not to harm the operator, after which it began targeting the communication tower the operator used to communicate with the drone.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

After his statement went viral, Col. Hamilton said he had “misspoken” during his presentation at the summit.

So, Col. Hamilton’s detailed description of the AI incident was all made up?

The controversial “rogue AI drone simulation” he described was, in fact, a hypothetical “thought experiment” that originated outside the military. Hamilton emphasized that it was based on plausible scenarios and likely outcomes rather than an actual simulation conducted by the United States Air Force (USAF).

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton said, seeking to clear up the confusion surrounding his earlier remarks. He stressed that the USAF has not tested weaponized AI systems in the manner described, in either real-world or simulated environments.

Hamilton further stated, “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability, and is why the Air Force is committed to the ethical development of [AI].”

Air Force spokesperson Ann Stefanek also told Insider that no such simulation took place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Brittany Jordan is an award-winning journalist who reports on breaking news in the U.S. and globally for the Federal Inquirer. Prior to her position at the Federal Inquirer, she was a general assignment features reporter for Newsweek, where she wrote about technology, politics, government news and important global events. Her work has also appeared in the Washington Post, the South Florida Sun-Sentinel, Toronto Star, Frederick News-Post, West Hawaii Today, the Miami Herald, and more. Brittany enjoys food, travel, photography, and hoarding notebooks and journals. Her goal is to do more longform features journalism, narrative writing and documentary work, and to one day write a successful novel and screenplay.

Copyright © 2023 Federal Inquirer. All rights reserved.