
In-depth: How the Anthropic-Pentagon saga affects Indian users

Updated: Mar 11

The answer to the question lies in the statement issued by Anthropic, refusing to comply with a US Department of War (DoW) memo demanding unconditional access to its services for “all lawful purposes”.


  • Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

  • For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

  • Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.


Three points from the statement by Anthropic CEO Dario Amodei stand out. In the first, ‘modelling and simulation’ and ‘operational planning’ are the key operative words for me. The second warns that “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.” And the third concedes that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.”


At the centre of this controversy is a DoW memo issued on January 9 that aims to make the US military an “AI-first” war-fighting force “across all components, from front to back.”


The unwitting participants in this whole scenario are Indian AI app users. Any and all queries, conversations, and ideations with ChatGPT, Claude, Gemini and Grok enter the data pool that powers AI models (GPT4o, GPT5, Sonnet, Opus, Gemini 3.1, etc).


The nature of the data that AI companies absorb is so complicated that establishing the provenance of data ownership is nearly impossible. Data ownership is at the crux of this issue, and proving provenance requires understanding how each company uses the data. The common thread among the top US-based AI companies - OpenAI, Anthropic, Google and xAI - is how they train their models on the data they collect.


Understanding AI data processing


For example, an Indian user asks ChatGPT, “What is this mole?”, along with a picture of the mole. The usual output would be ChatGPT’s response: a description of the mole, what it might mean, and what action the user could take.


To answer, the query needs to travel from the user’s system to processing centres in the United States; the data is processed there, and the answer travels back along the same route to the user.
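The round trip described above can be sketched in code. This is a minimal, illustrative sketch only: the payload shape and model name are hypothetical, not any company’s real API. What it shows is that the raw question and the photo themselves are what leave the user’s device for the processing centre.

```python
# Illustrative sketch of the round trip described above: the user's query
# (text plus image) is serialized and sent to a US-hosted inference API.
# The field names and model name below are hypothetical.
import base64
import json

def build_request(prompt: str, image_bytes: bytes) -> str:
    """Package the user's query exactly as it leaves the device.
    Everything in this payload crosses the border to the processing centre."""
    payload = {
        "model": "example-model",  # hypothetical model name
        "prompt": prompt,          # e.g. "What is this mole?"
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

request_body = build_request("What is this mole?", b"\x89PNG...")
# The serialized body contains the raw question and the raw photo;
# nothing is anonymised before it leaves the user's system.
print("prompt" in request_body and "image" in request_body)
```

The point of the sketch is the absence of any anonymisation step: the query is packaged and shipped as-is, and whatever happens to it next happens outside the user’s jurisdiction.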




Free annual AI subscriptions for India


The sheer scale and diversity of its population make the Indian user base important to these companies. This became evident when they started offering expensive subscriptions for free - starting with AI search platform Perplexity tying up with Airtel, Google recently tying up with Jio, and ChatGPT offering its own free annual plan.


Free plans "fill gaps in AI training data sets that currently lack information on user behaviour patterns in the region," said Sagar Vishnoi, co-founder at AI think tank Future Shift Labs.


Think back to any action movie about intelligence agencies: behavioural data is the bedrock on which simulations and battle plans are built. Amodei’s statement says Claude is already being used for “modeling and simulation, operational planning.” Conversational data with chatbots could be categorised as such.

“Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.” - Amodei.
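The aggregation Amodei describes can be illustrated with a toy sketch. All the records, identifiers and field names below are invented for illustration; the mechanism - joining scattered, individually innocuous records on a shared identifier - is the point.

```python
# Toy sketch of assembling "scattered, individually innocuous data" into
# a single profile. Every record here is invented for illustration.
from collections import defaultdict

location_pings = [("user_42", "28.61N,77.20E"), ("user_7", "19.07N,72.87E")]
browsing = [("user_42", "clinic-dermatology.example")]
chat_topics = [("user_42", "asked about a mole photo")]

def assemble_profiles(*sources):
    """Join scattered records from many sources on a shared identifier."""
    profiles = defaultdict(list)
    for source in sources:
        for user_id, fact in source:
            profiles[user_id].append(fact)
    return dict(profiles)

profiles = assemble_profiles(location_pings, browsing, chat_topics)
print(profiles["user_42"])
# Three separate, harmless-looking records now read as one narrative:
# where the user was, what they browsed, and what they asked an AI.
```

Each source on its own is innocuous; the join is what produces the “comprehensive picture”, and it is a few lines of code once the data sits in one pool.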

In addition to the nature of the data, the volume of data is equally important. In any scientific experiment, the size of the empirical sample is critical: the larger the sample, the more reliable the results. Now consider the volume of data that the Indian AI user base provides these companies.



ChatGPT metrics based on OpenAI's February 2026 user base reports; Claude global usage shares derived from Anthropic's September 2025 Economic Index; Gemini download distribution sourced from Sensor Tower's 2025–2026 generative AI app market analytics.




Protections offered by Indian laws

Adding to this is the Indian Government’s stated priority of data localisation. Here’s the catch: data localisation primarily implies ‘data at rest’, where the data stays in storage within India; it does not safeguard the data from being processed by US companies in the US.


India’s DPDP Act, 2023: Key sections of the Indian Government’s Digital Personal Data Protection Act, 2023 (DPDP Act) that directly impact this are as follows:


Chapter I, section 2 (h) - “data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means

(u) “personal data breach” means any unauthorised processing of personal data or accidental disclosure, acquisition, sharing, use, alteration, destruction or loss of access to personal data, that compromises the confidentiality, integrity or availability of personal data


The Indian legal system is not mature enough to safeguard citizens’ data, especially the kind of data that AI companies consume. It becomes especially complicated when the US government tries to bulldoze AI companies into opening their services for ‘any lawful purpose’, especially when that ‘lawful purpose’ has no bearing on the Indian Constitution.


The definition of data in the DPDP Act doesn’t fully encompass the nature of the data that users provide while interacting with AI apps. A conversation involving psychological and medical data (https://chatgpt.com/g/g-NZ3hO0d8Z-india-healthcare-gpt) that goes into these systems is not fully covered by the Act’s definition of data - ‘a representation of information, facts, concepts, opinions or instructions’.


One form of protection offered by the act can be found in Chapter II, section (6) - The consent given by the Data Principal shall be free, specific, informed, unconditional and unambiguous with a clear affirmative action, and shall signify an agreement to the processing of her personal data for the specified purpose and be limited to such personal data as is necessary for such specified purpose.


The issue here is that Indian user data processed by AI companies trains the models. Once a model is trained, how can the user exercise the above clause? If the data has been consumed and has trained a model, in what form or manner can an Indian user retract their consent?


This analogy should help explain it better:

Analogy: Your AI query is a blood donation. You give it at a local clinic. It enters a global supply chain. The same blood type that saves a patient in Delhi also transfuses a soldier in combat. The clinic doesn't control, or even know, the final use. The 'donation for everyone' policy means exactly that: everyone. And you cannot ask for your blood back. The clinic never could.
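A toy numeric illustration makes the retraction problem concrete. The one-parameter “model” below is deliberately simplistic and not how any real model is trained; it only shows that once a record has shifted a model’s weights, deleting the record from storage does not undo its influence.

```python
# Toy illustration of why consent cannot be retracted after training:
# deleting a record from the dataset leaves the trained weight unchanged.
def train(records):
    """Fit a single weight as the mean of the training records."""
    return sum(records) / len(records)

records = [2.0, 4.0, 6.0]   # 6.0 stands in for one user's data
weight = train(records)      # the model has now absorbed that data

records.remove(6.0)          # "withdraw consent": delete the stored record
print(weight)                # 4.0 - the already-trained weight is unchanged
print(train(records))        # 3.0 - only retraining from scratch removes it
```

The gap between the two printed numbers is the legal gap: the DPDP Act’s consent clause speaks to stored records, while the user’s data lives on inside the weights unless the model is retrained without it.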


Indian law is yet to catch up, but that won’t be possible until there is uniform agreement on the very definition of the data that AI companies process.


India AI Summit


The leadership-level participation at the recently concluded AI Summit in Delhi is proof that India is a huge focus for these companies. Anthropic’s Economic Index report established India as its second-largest user base for Claude, after the US.


In the ‘AI Impact Summit Declaration, New Delhi’, signed by 91 countries (US included) and international organisations, emphasis is placed on


  • Respecting national sovereignty

  • Advancing AI through accessible, and trustworthy frameworks


How does the January 9 DoW memo "respect India's national sovereignty" if AI companies are given carte blanche access to data? India is positioning itself as the ‘AI use case capital of the world’, but at what cost?


The DoW January 9 memo contradicts the New Delhi declaration’s “Advancing AI through accessible, and trustworthy frameworks” in the section titled - Clarifying “Responsible AI” at the DoW — Out with Utopian Idealism, In with Hard-Nosed Realism.

An excerpt from Hegseth’s memo:


“Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use” language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.”

Since I started researching this article, a few major incidents have taken place. Anthropic was directed by the DoW to provide unfettered access to its services for ‘any lawful use’. This directive was issued in the January 9 memo by US Secretary of War Pete Hegseth on the department’s strategy for military use of AI.


The company was given a deadline to comply, failing which it would face punitive action such as being designated a ‘supply chain risk’. It refused, and the Pentagon’s retaliation was swift, with US President Donald Trump posting on his social media platform TruthSocial, directing “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”


Anthropic refused to change its stance, maintaining its safety guardrails notwithstanding the intimidation and punishment from the DoW, and added that it will “challenge any supply chain risk designation in court.”


OpenAI, which signed its own deal with the DoW after Anthropic’s refusal, had CEO Sam Altman do an AMA (ask me anything). (Which, in all honesty, confused me further; if you want to read about OpenAI’s ‘approach’ to signing the contract with the DoW, you can do so here - We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. Here’s why.)


In his post, Trump directed a “Six Month phase out period” for the DoW and others to transition away from Anthropic. Incidentally, in 2024, Anthropic and Palantir (a top defence contractor and a leader in defence software and AI) partnered to ‘bring Claude AI models to Amazon Web Services (AWS) for US Government Intelligence and Defense Operations’.


The United States, alongside Israel, is now at war with Iran. So Claude.ai, the same chatbot you may use to discuss your life or ask for recipes, was most likely an integral part of the bombing that killed Iran’s top leadership on March 1, 2026.




Disclaimers: This article is based on publicly available documents, official statements, and primary source research current as of March 2, 2026. The situation described is rapidly evolving, statements have been revised, positions have shifted, and new developments are emerging daily. AI tools were used for research and fact verification only; all analysis, language, and editorial judgements are the author's own. Technical claims reflect the best available information at the time of publication. The author has made every effort to verify all figures directly from primary sources, which are hyperlinked throughout. Where verification was not possible, this has been noted explicitly. Readers are strongly encouraged to consult the linked primary documents and exercise independent judgement. The author is an Indian user of multiple AI platforms referenced in this article.

All technical infographics are representative of the data flow process. Specific routing paths, data centre locations, and model architecture details vary and are not publicly disclosed by the respective companies.


