U.S. Military Reportedly Used AI Tool 'Claude' in Operations Despite Ban

The U.S. military reportedly used an artificial intelligence tool developed by Anthropic, known as 'Claude,' in operational contexts despite a directive to cease using the technology.

According to a report in the Wall Street Journal, the AI tool was used to coordinate strikes, raising concerns about compliance with official guidelines.

U.S. command centers, including those within Central Command in the Middle East, used 'Claude' to manage aspects of military operations, the report stated.

This marks the second reported instance of the U.S. military employing Anthropic's tools in operations; the first was during the apprehension of Venezuelan President Nicolas Maduro, according to the report.

The AI played a pivotal role in intelligence assessments, target identification, and battle-scenario simulations, the report added, highlighting how deeply Anthropic's tools are integrated into the Pentagon's systems and how difficult immediate disengagement would be.

Separately, Axios reported that disagreements between the Pentagon and Anthropic escalated during contract negotiations after Anthropic declined to grant the Department of Defense full access to its tools or allow their use in all legally permissible scenarios, including comprehensive surveillance or the development of autonomous weapons systems.

The dispute culminated in the Pentagon designating Anthropic a risk to U.S. supply chains, a move the company reportedly intends to challenge in U.S. courts.

Former President Trump had previously issued directives to all federal agencies to immediately discontinue the use of 'Claude,' criticizing the company as a "left-wing radical AI company run by people who know nothing about the real world," according to a report in the Guardian.

However, then-Secretary of War Pete Hegseth said Anthropic's tools would remain in use within the Pentagon for no more than six months to allow a "smooth" transition to alternative AI systems.

OpenAI CEO Sam Altman announced an agreement with the Department of Defense to utilize the company's technologies, including the 'ChatGPT' model, in classified and secure environments, paving the way for the gradual replacement of Anthropic's tools.

These developments underscore the accelerating integration of AI technologies into U.S. military operations, and the technical and political challenges of decoupling combat systems from tools embedded in their operational infrastructure.