Anthropic launches new Claude service for military purposes

On Thursday, Anthropic announced Claude Gov, a product designed specifically for US defense and intelligence agencies. The artificial intelligence models have relaxed restrictions for government use and are trained to better analyze classified information.

The company said that the models it is announcing are “already in use by agencies at the highest levels of U.S. national security” and that access to these models will be limited to government agencies that deal with classified information. The company did not confirm how long they had been in use.

The Claude Gov models are designed for unique government needs, such as threat assessment and intelligence analysis, Anthropic said in a blog post. While the company said they “have passed the same rigorous security testing as all of our Claude models,” the models have certain accommodations for national security work. For example, they refuse less often when handling classified information presented to them, material that the consumer-facing Claude is trained to recognize and avoid.

According to Anthropic, Claude Gov models also have a better understanding of defense and intelligence documents and context, and are more proficient in languages and dialects relevant to national security.

The use of artificial intelligence by government agencies has long been scrutinized for its potential harm and adverse effects on minorities and vulnerable communities. There is a long record of wrongful arrests in multiple US states stemming from police use of facial recognition, documented bias in predictive policing, and discrimination in government algorithms that evaluate welfare benefits. For years, the industry has also debated whether major tech companies such as Microsoft, Google, and Amazon should allow militaries, particularly Israel’s, to use their AI products, a debate accompanied by employee campaigns and public protests under the “No Tech for Apartheid” banner.

Anthropic’s terms of use specifically dictate that any user must “not create or facilitate the exchange of illegal or highly regulated weapons or goods,” including the use of Anthropic products or services to “manufacture, modify, design, sell, or distribute weapons, explosives, hazardous materials, or other systems designed to cause harm or loss of life.”

At least eleven months ago, the company said it had created a set of contractual exceptions to its use policy that are “carefully calibrated to allow for beneficial use by carefully selected government agencies.” Certain restrictions, such as disinformation campaigns, the development or use of weapons, the creation of censorship systems, and malicious cyber operations, remain prohibited. But Anthropic may choose to “tailor use restrictions to the mission and legal authority of the government agency,” while striving to “balance the beneficial uses of our products and services with mitigating potential harm.”

Claude Gov is Anthropic’s response to ChatGPT Gov, an OpenAI product for U.S. government agencies that the company launched in January. It is also part of a broader trend where AI giants and startups alike are looking to strengthen their business with government agencies, especially in an uncertain regulatory landscape.

When OpenAI announced ChatGPT Gov, the company stated that over the preceding year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, create summaries, prepare policy memoranda, write code, build apps, and more. Anthropic declined to share numbers or similar use cases, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy software to the federal government.

Scale AI, an AI giant that provides training data to industry leaders such as OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March to create a first-of-its-kind AI agent program for US military planning. Since then, the company has expanded its operations to global governments, recently signing a five-year deal with Qatar to provide automation tools for public service, healthcare, transportation, and other areas.
