
ET Explainer: What OpenAI's local data residency means for India

ChatGPT-maker OpenAI earlier this month enabled local data residency in key Asian countries including India—its second-largest market—as well as Japan, Singapore, and South Korea. The move is aimed at organisations that want to use its ChatGPT Enterprise, ChatGPT Edu, and OpenAI API (application programming interface) offerings but also face data localisation requirements. ET explains what this means for Indian businesses and whether data sovereignty is on the horizon.

What does OpenAI’s data residency policy mean for India?

The feature allows ‘data at rest’ such as prompts, uploaded files, and chat interactions to be stored within India. The models themselves, however, still reside on foreign servers, and processing enterprise information at inference time (run time) still involves data exchange with servers outside India.
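To make the distinction concrete, here is a minimal sketch using the official OpenAI Python SDK. It assumes an enterprise has already configured a project for India data residency (the API key and project ID below are placeholders, and residency itself is set up outside this code): stored artefacts can remain at rest in India, but each request is still processed at inference time on OpenAI-hosted models abroad.

```python
# Illustrative sketch only: the project ID and API key are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-...", project="proj_india_residency_demo")

# Prompts, files, and chat history can be stored at rest in India, but this
# request is still served by models hosted outside the country, so the prompt
# and the response transit foreign servers during inference.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this loan application."}],
)
print(response.choices[0].message.content)
```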


Data localisation has until now prevented OpenAI from gaining market share in India as BFSI customers opted to host open models like Meta's Llama and DeepSeek on-premise.


According to Aadya Misra, Partner at Spice Route Legal, “OpenAI’s residency option could allow financial institutions to deploy AI for use cases like payment processing while remaining compliant with existing requirements that require payment data to be stored locally.”

She explained that the Reserve Bank of India does permit transient cross-border processing under certain conditions, “so if implemented thoughtfully, concerns about data in motion could also be addressed. This move could shift reliance on self-hosted open-source models to enterprise-grade and centrally managed AI solutions.”

Does this spell data sovereignty for India?

The move may at best be seen as a first step towards compliance enablement that could help companies bolster contracts with responsible data-handling clauses. It falls short of complete data sovereignty, however, experts said.

“The architecture stores data ‘at rest’ locally, but not necessarily ‘in transit’ or during model inference. That data may still leave the country, exposing enterprises to regulatory scrutiny,” said Leslie Joseph, principal analyst at Forrester.

Joseph noted that OpenAI has not announced local hosting of its GPT models or inference engines in India. “There’s no evidence of compute or model weights residing in-country. This is partial localisation at best, not sovereign AI,” Joseph added.

He explained that although OpenAI has added AES-256 encryption for data at rest and TLS 1.2+ for data in transit, without full model localisation, including inference compute, enterprises handling PII (personally identifiable information) will continue to face regulatory and data-exposure concerns.
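For the transit side, a client can at least enforce the TLS 1.2+ floor on its own outbound connections. The sketch below assumes the official openai Python SDK together with httpx; the encryption of data at rest (AES-256) happens on OpenAI's side and is not something client code controls.

```python
# Minimal sketch: pin a TLS 1.2 minimum on outbound calls to the OpenAI API.
import ssl
import httpx
from openai import OpenAI

# Refuse to negotiate anything below TLS 1.2 for data in transit.
tls_context = ssl.create_default_context()
tls_context.minimum_version = ssl.TLSVersion.TLSv1_2

client = OpenAI(
    api_key="sk-...",  # placeholder credential
    http_client=httpx.Client(verify=tls_context),
)
```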

“...There is no explicit indication that the underlying GPT models, including their inference engines, tokens, or trained weights, will themselves be hosted in India,” said Ankit Sahni, Partner at Ajay Sahni & Associates.

What impact could the move have?

Speculation remains that OpenAI may eventually bring full-stack model hosting to India, given its enterprise ambitions and steady competition from cost-effective open-weight models. For now, experts say companies must treat this as a “compliance-forward gesture.”

It could also mean opportunities for Indian data centre players.

Although the company is likely to host local storage within the data centres of its long-time exclusive partner Microsoft, sources told ET that OpenAI is also fielding proposals from other colocation data centre operators in India.

“Given OpenAI’s shift to a for-profit structure and changing dynamics with Microsoft, we are actively seizing this opportunity to commit to a long-term relationship with them,” a senior executive at a leading data centre company told ET.

Annapurna Roy contributed to this story.