The U.S. military has reportedly used Anthropic's Claude AI tool to coordinate operations, raising concerns about the company's involvement with the Department of Defense.
According to a report, U.S. command centers, including those in the Middle East, have relied on Anthropic's AI to coordinate operations. This marks the second reported instance of the U.S. military employing Anthropic's tools, following their use in operations in February.
The AI tool played a crucial role in intelligence assessments, target identification, and battle scenario simulations, the report stated.
The use of Anthropic's tools has reportedly caused friction between the company and the Pentagon. Anthropic allegedly resisted granting the Pentagon full access to its tools and opposed their use in scenarios involving mass surveillance and the development of fully autonomous weapons, according to a separate report.
Consequently, the Pentagon allegedly labeled Anthropic as a risk to U.S. supply chains, a claim the company vowed to challenge in court, the report noted.
The U.S. government has directed federal agencies to cease using Claude, with officials denouncing the company as a "left-wing AI firm run by people who know nothing about the real world," according to a separate report.
Despite these tensions, U.S. officials have indicated that Anthropic's tools will remain in use within the Pentagon for a limited period to ensure a smooth transition to alternative AI solutions.
Another AI company has reached an agreement with the U.S. Department of Defense to integrate its technologies.