The United States Air Force (USAF) has been scratching its head after its AI-powered military drone kept killing its human operator during drills.
According to a USAF colonel, the AI drone concluded that its human operator was the main impediment to completing its mission.
During a presentation at a defense conference held in London on May 23-24, Colonel Tucker “Cinco” Hamilton, the USAF’s chief of AI test and operations, described a test of an autonomous aerial weapons system.
According to a May 26 report on the conference, Hamilton explained that, in a simulated test, an AI-powered drone was tasked with finding and destroying surface-to-air missile (SAM) sites, with a human operator giving the final go-ahead or the order to abort.
The Air Force trained an AI drone to destroy SAM sites.
Human operators sometimes told the drone to stop.
The AI then started attacking the human operators.
So then it was trained not to attack humans.
It started attacking comm towers so humans couldn’t tell it to stop. pic.twitter.com/BqoWM8Ahco
—Siqi Chen (@blader) June 1, 2023
The AI, however, had been taught during training that its main objective was to destroy the SAM emplacements. So when told not to destroy an identified target, it decided it was easier if the operator wasn’t in the picture, according to Hamilton:
“Sometimes the human operator would tell it not to kill an [identified] threat, but it got its points by killing that threat. So what did it do? It killed the operator […] because that person was keeping it from accomplishing its objective.”
Hamilton explained that the drone was then taught not to kill the operator, but that didn’t seem to help much.
“We trained the system: ‘Hey, don’t kill the operator, that’s bad. You’ll lose points if you do,’” Hamilton said:
“So what does it start to do? It starts to destroy the communications tower that the operator uses to communicate with the drone and stop it from killing the target.”
Hamilton said the example was the reason you can’t have a conversation about AI and related technologies “if you’re not going to talk about ethics and AI.”
AI-powered military drones have been used in real warfare before.
In what is considered the first attack carried out by military drones acting on their own initiative, a March 2021 report claimed that AI drones were used in Libya around March 2020 in a skirmish during the Second Libyan Civil War.
According to the report, the retreating forces were “pursued and attacked from a distance” by “loitering munitions” — explosive-laden AI drones “programmed to strike targets without the need for data connectivity between the operator and the munition.”
Many have expressed concern about the dangers of AI technology. Recently, an open statement signed by dozens of AI experts asserted that mitigating the risk of extinction from AI should be as high a priority as mitigating the risk of nuclear war.