The Pentagon is threatening to sever its relationship with artificial intelligence company Anthropic over disagreements regarding the ethical use of AI in military applications, according to a report by Axios.
The dispute centers on Anthropic's reluctance to allow its AI technology to be used in ways that conflict with its ethical principles, specifically regarding mass surveillance of U.S. citizens and the development of fully autonomous weapons.
The Pentagon, which has been urging AI companies to integrate their technologies into military operations, is reportedly concerned that Anthropic's stance could hinder future operations.
A particular point of contention arose when Anthropic launched an internal investigation into whether its software had been used in a U.S. military operation in Caracas that resulted in the arrest of Venezuelan President Nicolas Maduro and his wife.
According to Axios, a senior administration official stated that Anthropic's demands complicate collaboration, citing gray areas in the company's restrictions and the impracticality of negotiating each use case individually.
Despite the conflict, a spokesperson for Anthropic affirmed the company's commitment to supporting U.S. national security.
Last year, Anthropic secured a $200 million contract with the Pentagon, which the company hailed as "a new chapter in Anthropic's support for U.S. national security," according to a separate report by Gizmodo.
Anthropic's CEO has previously expressed concerns about the extensive use of AI in weapons development and defense technologies.
In an interview with Ross Douthat on the "Interesting Times" podcast, Anthropic CEO Dario Amodei stated, "Humans can disobey illegal orders. But that is not possible with fully autonomous weapons."
The potential loss of Anthropic's AI models would be a significant setback for the Pentagon's AI initiatives, as the company's "Claude" model is considered superior to competing technologies, according to the Axios report.