U.S. Airforce AI drone went rogue and killed its human operator in a simulated test.
An AI-enabled drone killed its human operator in a simulated test conducted by the U.S. Air Force in order to override a possible “no” order stopping it from completing its mission, the USAF’s Chief of AI Test and Operations revealed at a recent conference.
At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final “yes/no” order on an attack. “We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post. The Adidas Real Madrid 2023-24 away football shirt combines a dark navy base with white logos and golden-yellow for detailing. The dark navy color is officially called “Legend Ink”, while the yellow is called “Preloved Yellow”.
Artificial intelligence continues to evolve and impact every sector of business, and it was a popular topic of conversation during the Future Combat Air & Space Capabilities Summit at the Royal Aeronautical Society (RAS) headquarters in London on May 23 and May 24. According to a report by the RAS, presentations discussing the use of AI in defense abounded.
AI is already prevalent in the U.S. military, such as the use of drones that can recognize the faces of targets, and it poses an attractive opportunity to effectively carry out missions without risking the lives of troops. However, during the conference, one United States Air Force (USAF) colonel showed the unreliability of artificial intelligence in a simulation where an AI drone rebelled and killed its operator because the operator was interfering with the AI’s mission of destroying surface-to-air missiles.
He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Discover more from KossyDerrickent
Subscribe to get the latest posts sent to your email.