Pentagon’s Project Maven
Insights on the program bringing AI to the heart of decision-making in warfare

Source: Trends Research and Advisory
What is Project Maven?
Project Maven is a U.S. Department of Defense program. Its official name is the Algorithmic Warfare Cross-Functional Team. It was launched in 2017. Two key elements of context led to its creation.
The U.S. military can collect huge amounts of data, but analysts cannot process it fast enough. Like any intelligence service, they face masses of data to exploit. They must review and verify it, then sort out what matters and make connections, then analyze the situation and transmit their intel reports in time for decision-making. During operations, this cycle runs at an even faster pace. So, what if AI could help them? That is the idea behind Maven. The goal is to use AI to analyze military data. At first, it was mostly for drone and satellite imagery. Analysts could then focus on interpretation rather than spending hours watching raw video or images to identify elements of interest.
The project evolved to add other sensors and tools. It became an AI-assisted targeting and battlefield management system. It integrates data and provides visualization for analysts and decision-makers. It can flag potential targets, including vehicles, buildings, people, and weapons. It can also add details on the battlefield. This gives many elements of a situation:
The target
Enemy troops
Allied troops
Means available to engage the target
Risks in the vicinity of the target
AI automation makes it easier and faster to gather all this information. It also speeds up decision cycles. But AI only assists; it does not make any decision. Humans do the analysis, write the reports, and make the decisions.
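To make this division of labor concrete, here is a minimal, purely illustrative Python sketch. It is not Maven’s actual software: the function names, object classes, and canned detections are all hypothetical. It simply shows the pattern described above: a model scores objects in an image, only high-confidence detections are surfaced as flags, and the analyst keeps the final judgment.

```python
# Purely illustrative sketch, not Project Maven's actual code.
# It mimics the general idea of AI-assisted flagging: a model scores
# objects in an image, and only confident detections are surfaced
# to a human analyst, who makes the actual decision.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "vehicle", "building" (hypothetical classes)
    confidence: float  # model score between 0 and 1
    box: tuple         # (x, y, width, height) in pixels


def detect_objects(image_path: str) -> list[Detection]:
    """Hypothetical stand-in for a real computer-vision model."""
    # A real system would run a trained detector on the image here.
    # We return canned results so the sketch stays self-contained.
    return [
        Detection("vehicle", 0.91, (120, 80, 40, 20)),
        Detection("building", 0.55, (300, 150, 200, 180)),
    ]


def flag_for_analyst(detections: list[Detection],
                     threshold: float = 0.8) -> list[Detection]:
    """Surface only high-confidence detections; the human reviews them."""
    return [d for d in detections if d.confidence >= threshold]


if __name__ == "__main__":
    flags = flag_for_analyst(detect_objects("frame_0001.jpg"))
    for d in flags:
        print(f"FLAG for review: {d.label} ({d.confidence:.0%}) at {d.box}")
```

The threshold is the interesting design choice: set it low and the analyst drowns in false alarms; set it high and real objects may slip through. Either way, the model only filters; the human decides.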
This program was also seen as crucial because of U.S. rivals. China in particular has been heavily working on the use of AI in warfare.
Contractors
Many companies work on Maven. Palantir is the primary tech contractor; its AI forms the operational backbone of the program. Classified materials are processed through Amazon Web Services. Booz Allen Hamilton works on AI. DBA Yonder/Popily works on social media analytics. DigitalGlobe provides imagery and algorithms. ECS Federal has served as the primary support contractor and has led AI integration since the program’s creation. Anduril Industries deploys its sensor fusion platform and edge hardware for data capture. Other companies have also contributed, like L3Harris Technologies, Microsoft, Maxar Technologies, and Sierra Nevada Corporation.
In 2018, Google was a key AI tech provider for the program. But the company faced internal controversy and did not renew the contract. Thousands of employees protested, saying the contract had crossed a red line. They feared the technology could be used for lethal targeting. Some signed a protest letter, and others resigned.
Anthropic’s AI Claude joined the program in late 2024. But the DoD recently labeled the company a supply chain risk. For ethical reasons, Anthropic refused to lift restrictions that could allow autonomous lethal weapon systems and domestic mass surveillance. The DoD said that all use of Anthropic’s products would be phased out within six months.
These events with contractors are a reminder that AI in warfare also raises ethical debates. This is one of the challenges tied to Maven.
Challenges
Ethical and legal concerns. As seen with Google and Anthropic, some people are reluctant to help build tools that could be used for lethal purposes. There are also debates on where the line should be drawn between AI and human judgment. And there are worries about civilian casualties, about responsibility in case of AI mistakes, and about compliance with the Law of Armed Conflict.
Data quality and reliability. The AI’s performance depends on the quality of its training data. It must perform as well, and with discernment, in all situations. These can include:
Weather effects or shadows
Different urban environments and landscapes
Camouflages and decoys
Civilian facilities or vehicles that look military
If analysts over-rely on AI results, mistakes can happen. Having a human in the loop is not a total guarantee.
Adversary countermeasures. Another risk comes from the enemy. Rivals adapt and develop ways to defeat AI systems. These methods can include decoys or camouflage, but also data poisoning attacks. AI must be able to identify such counter-AI warfare means.
Dependence on the private sector. This can be a strategic vulnerability. Cutting-edge AI innovation comes from the private sector, and military AI depends on it. It also depends on the private sector’s engineers. There is a defense tech talent gap: many prefer to work for private companies because salaries are more competitive.
As a former targeting officer, I have experienced mass data and the fast-paced rhythm. Some missions require intel reports on strict deadlines. But I also know how important it is to respect the full targeting process without rushing it. I believe AI can be very useful in a monitoring approach. But human double-checking and proper analysis are mandatory. To develop a target, it is our duty as analysts to acquire exhaustive knowledge of it. And this knowledge is better acquired when the analyst conducts the whole research process than when AI serves it up on a silver platter. AI can miss elements, make mistakes, and misunderstand subtleties. It can certainly be a tool to help analysts. But there are limits to AI’s role in the military intelligence and decision-making process.
Decoding geopolitics isn’t a job. It’s survival.
Joy
