AI advisory is only for significant platforms, says MoS IT Rajeev Chandrasekhar

Days after the IT ministry issued an advisory to multiple companies, including Google, Microsoft and Adobe, requiring them to get explicit permission from the government for all their “under-testing” or “unreliable” artificial intelligence (AI) models before releasing them to users in India, minister of state for electronics and information technology Rajeev Chandrasekhar tweeted that the advisory is aimed only at “significant platforms” and will not apply to start-ups.
“Recent advisory of @GoI_MeitY needs to be understood. -> Advisory is aimed at the Significant platforms and permission seeking from Meity is only for large platforms and will not apply to startups. -> Advisory is aimed at untested AI platforms from deploying on Indian internet -> Process of seeking permission, labelling & consent-based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers,” Chandrasekhar tweeted on Monday.
Chandrasekhar’s tweet came in response to the backlash by the industry about the AI advisory. Aravind Srinivas, the CEO of Perplexity, called it a “bad move by India” in a tweet.
Pratik Desai, founder of KissanAI, which built the agriculture LLM Dhenu, had tweeted, “I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4 years full time bringing AI to this domain in India.”
After Chandrasekhar’s tweet, Desai tweeted, “Not for startups! A good start [thumbs up emoji] The next step would be defining revenue and user base numbers to classify a startup, which would clear the uncertainty cloud for many of us.”
To be sure, the advisory that MeitY sent out on Monday said that “all intermediaries/ platforms” must ensure compliance with the advisory. It also said that “all intermediaries or platforms” are required to ensure that their AI tools do not allow any bias or discrimination or threaten the integrity of the electoral process.
The advisory did not limit itself to “significant social media intermediaries”, that is, social media intermediaries with more than 5 million users in India, which Chandrasekhar seems to be referring to in his tweet. It is only the instruction about needing to identify synthetically created content using a label, unique identifier or metadata that is limited to only “intermediaries” and does not mention “platforms”.
To be sure, a “platform” is a term that has neither been used nor defined in either the Information Technology Act, 2000, or the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, under which the December 26, 2023 advisory was issued. The March 1 advisory was issued in continuation of the December 26 advisory.
According to the IT Rules, a “significant social media intermediary” is “an intermediary which primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”.
Large language models (LLMs) developed by OpenAI, Google, Ola and Perplexity, which users access directly through ChatGPT, Gemini, Krutrim and Perplexity AI, respectively, arguably enable interaction not between two users but between a user and a machine. They are thus arguably not social media intermediaries, despite being among the biggest and most consequential players in the Indian industry.
If the advisory is meant only for significant social media intermediaries which have over 5 million users in India such as YouTube, Facebook, Instagram, WhatsApp, Snapchat, etc., it is not clear if it is applicable to their parent companies’ AI tools such as Gemini and LLaMA.
An “intermediary”, under the IT Act, is “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”. It is not clear if AI tools such as ChatGPT or Gemini are intermediaries to begin with.
At the heart of the tussle is the government’s attempt to allocate responsibility for outputs that these AI tools may generate which may be incorrect, or unacceptable to the government, if not illegal. Intermediaries, according to the law, are protected from liability for third party content. It is not clear if the outputs generated by these AI tools, which depend on the prompts entered by the user as well as their training datasets, are similarly protected.
In his tweet, Chandrasekhar also said that the process of seeking permission, labelling and consent-based disclosure to users about untested platforms is an “insurance policy” for platforms against being sued by consumers. It is not clear if this means that users could then sue the government of India, which, as per the advisory, would permit the deployment of such models.