India has instructed technology companies to obtain its approval before publicly releasing artificial intelligence (AI) tools that are deemed “unreliable” or still in the testing phase. The tools should also be labelled to indicate their potential to provide incorrect answers to user queries, according to an advisory issued by the country’s IT ministry last Friday.
The use of such tools, including generative AI, and their availability to Indian internet users must be explicitly authorized by the Government of India, the advisory stated. The move comes as governments worldwide race to establish rules for AI; India has also been tightening regulation of social media companies, which count the country as a key growth market.
The advisory was issued following criticism of Google’s Gemini AI tool by a senior minister on February 23, after the tool generated a response suggesting that Indian Prime Minister Narendra Modi had implemented policies some have characterized as “fascist”. Google responded by acknowledging that the tool “may not always be reliable,” especially on current events and political topics.
Deputy IT Minister Rajeev Chandrasekhar said on social media that ensuring safety and trust is a legal obligation for platforms, and that labelling a tool “Sorry Unreliable” does not exempt it from the law.
Additionally, the advisory warned platforms to ensure that their AI tools do not compromise the integrity of the electoral process, particularly with India’s general elections approaching this summer, in which the ruling Hindu nationalist party is expected to win a significant majority.
On March 1, the Ministry of Electronics and Information Technology (MeitY) issued a notice to platforms, indicating that failure to comply with the advisory could result in legal action. In essence, platforms are required to label under-trial AI models, seek government approval before deploying models categorized as “under-testing” or “unreliable,” and obtain explicit user consent before exposing users to such models.