(Adnkronos) – Regulating and limiting the Pentagon's use of artificial intelligence: that is the goal of a bill introduced by Michigan Democratic Senator Elissa Slotkin, a member of the Armed Services Committee. NBC News, which reported the measure, notes that it is a first step toward understanding how Congress could address the military use of artificial intelligence. In particular, the bill aims to codify two existing Department of Defense guidelines: that artificial intelligence cannot autonomously decide to kill a target, and that the technology cannot be used to help the military conduct mass surveillance of American citizens. The bill also prohibits the use of artificial intelligence to launch or detonate a nuclear weapon.
“Our political system is sick, and that’s why we focus more on issues like Greenland than on the use of artificial intelligence in matters of lethal force. And it is our responsibility to legislate on this,” Slotkin told NBC News.
The first two cornerstones of the bill were at the center of a heated dispute in recent weeks between the US military and artificial intelligence giant Anthropic. While the Pentagon insisted that mass surveillance of American citizens is already illegal and that its policy requires lethal decisions to be made by a human being, Anthropic feared that such surveillance could still be permitted and that future administrations could revoke those guidelines.
The dispute culminated in an executive order from President Donald Trump requiring all federal agencies to stop using Anthropic's models within six months, on the grounds that the company was deemed a potential national security risk. Pentagon chief Pete Hegseth also called Anthropic a supply-chain risk. This is despite the fact that Anthropic's artificial intelligence has helped the United States identify military targets in the ongoing war with Iran, simulate war scenarios, and conduct intelligence analysis.