
Anthropic’s AI Claude vs U.S. government

An example of the dilemma around the use of AI for security purposes.

Source: Cryptopolitan

What is this AI?

Claude is an AI assistant developed by Anthropic and one of ChatGPT’s main competitors. This generative AI model can perform many tasks, including:

  • Write text and reports

  • Analyze documents

  • Summarize information

  • Answer complex questions

  • Help with coding and research

Claude can process very long documents and has strong reasoning abilities, which makes it popular with researchers, analysts, and consultants. A minimal example of how it is typically called from code is sketched below.
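
For technically minded readers, this is roughly how an analyst might use Claude programmatically. It is a minimal sketch using Anthropic’s official Python SDK; the model alias and file path are placeholders to be checked against current documentation:

    import anthropic

    client = anthropic.Anthropic()  # reads the API key from ANTHROPIC_API_KEY

    # Load a long document to analyze (placeholder path).
    with open("report.txt", encoding="utf-8") as f:
        report = f.read()

    # Ask Claude for a summary; the model alias is a placeholder.
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=400,
        messages=[{"role": "user",
                   "content": "Summarize the key points:\n\n" + report}],
    )
    print(message.content[0].text)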

A word on Anthropic

Anthropic is a U.S. AI research company headquartered in San Francisco. It was founded in 2021 by former OpenAI researchers. Its investors and key partners include Amazon, Microsoft, and Google, which gives Anthropic access to the huge cloud computing infrastructure that is crucial for training AI models.

The company aims to develop advanced but safer AI systems. The idea is to build powerful AIs that remain aligned with human values. Its models are trained to follow a set of ethical principles, with the goal of reducing the risks of misinformation, harmful outputs, and misuse of AI systems.

AI companies are key actors. Their advanced models can influence:

  • Economic productivity,

  • Military capabilities,

  • Intel analysis,

  • Information ecosystems.

For this reason, states show great interest in their models. But sometimes, disputes can emerge from diverging views. This has recently been the case for Anthropic and the U.S. government.

Dispute with the U.S. government

Since 2024, Claude has been part of Project Maven, under a 200 million USD contract signed with the Department of War. Claude has been used in counterterrorism operations and in Venezuela. With the war in Iran, this is the first time the AI has been used in a large-scale military campaign. AI is mostly used for intelligence, planning, and logistics: it can exploit large quantities of data, and it can identify and prioritize large numbers of targets.

But this cooperation has suffered a setback in recent weeks. Anthropic does not want its models used for autonomous lethal systems or mass surveillance. The company argues that AI models are not yet reliable enough for fully autonomous tasks, and it refuses this kind of application of its AI. It does not oppose military use as long as humans remain in control of the process and the decision-making. It also believes that mass surveillance is incompatible with democratic values. Anthropic therefore imposed safety limits that the U.S. government demanded be lifted.

Anthropic refused to remove the limits, so the U.S. government labelled it a “supply chain risk”. It is the first U.S. company to fall under that label, which is usually reserved for foreign firms like China’s Huawei or Russia’s Kaspersky. Trump ordered the federal government to stop using Anthropic’s models. In theory, this also means that other suppliers to the federal government should not deal with Anthropic either, but Anthropic’s key partners said the services will stay available. The government also threatened to invoke the Defense Production Act to force the removal of the limits. This measure dates to the Cold War and allows the U.S. president to direct industry for the sake of national security.

In return, Anthropic filed a lawsuit against the government over these claims. Meanwhile, Claude’s downloads have increased while ChatGPT’s popularity has decreased. Indeed, a few hours after Anthropic was blacklisted, a deal between OpenAI and the Pentagon was announced.

How is AI an issue in LAWS?

LAWS stands for lethal autonomous weapons systems. They use AI to identify, track, and potentially engage targets. Unlike traditional weapons, the decision loop may partially or fully bypass human operators. AI allows these systems to process data faster and can reduce the number of personnel needed in operations.

But in U.S. doctrine, humans must remain responsible for lethal decisions: AI systems assist humans but should not fully replace them. The U.S. fields semi-autonomous systems, such as automated naval defense systems that can intercept incoming missiles, or AI-assisted drone targeting and surveillance systems. But these systems still operate under human supervision.

In fact, full automation raises concerns and debates within states and at the UN. Many worry about ethics and accountability. The models also have limits of their own: they sometimes produce incorrect outputs, they reason probabilistically rather than stating accurate facts, and they are vulnerable to adversarial prompts. For these reasons, even with a human in the loop, the use of AI in a combat context should come with extra precautions. A sketch of what such a human-in-the-loop gate could look like follows.
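
To make the “human in the loop” idea concrete, here is a purely illustrative sketch. It reflects no real military system; the confidence threshold, names, and review step are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        target_id: str
        label: str
        confidence: float  # probabilistic model score, not a verified fact

    CONFIDENCE_FLOOR = 0.90  # invented threshold, not a doctrinal value

    def human_approves(d: Detection) -> bool:
        # Stand-in for an operator review step (a console prompt here).
        answer = input(f"Engage {d.target_id} ({d.label}, p={d.confidence:.2f})? [y/N] ")
        return answer.strip().lower() == "y"

    def decision_loop(detections: list[Detection]) -> list[str]:
        approved = []
        for d in detections:
            if d.confidence < CONFIDENCE_FLOOR:
                continue  # the model is too uncertain: never surfaced for engagement
            if human_approves(d):  # a human retains the lethal decision
                approved.append(d.target_id)
        return approved

The point of the sketch is the two gates: the AI’s probabilistic score only filters candidates, and nothing is engaged without an explicit human decision.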

Decoding geopolitics isn’t a job. It’s survival.

Joy