The Ministry of Electronics and Information Technology (‘MEITY’) has recently issued an advisory dated 15 March 2024 (‘Advisory’) on the use and deployment of artificial intelligence tools. This Advisory supersedes a previous advisory on the subject dated 1 March 2024 (‘Previous Advisory’).
In the Previous Advisory, issued to check the rising instances of deepfakes and misinformation posing a threat to users and to electoral integrity, MEITY directed intermediaries[1] that were failing to meet their due diligence obligations to deploy technical interventions to label and monitor such forms of information on their platforms.
The new Advisory has expanded the scope of the due diligence to be carried out by intermediaries to include compliance requirements associated with the use and deployment of artificial intelligence tools.
It may be noted that, at this point, these requirements have been issued only by way of an advisory, and no amendments have been made to the IT Rules. It remains to be seen whether the IT Rules, the IT Act, or other law will eventually be amended to align with the guidelines and requirements set out in the Advisory.
As part of the expanded diligence requirements, the Advisory requires intermediaries to ensure the following:
1. Users are not able to host, display, publish, or transmit, through the use of AI models, Generative AI, large language models, software, or algorithms (collectively ‘AI Models’), any content that is restricted under the IT Rules[2] or that otherwise violates any provision of the IT Act.
The Advisory states that the general content moderation rules under the IT Rules apply equally to content produced or generated through AI Models, which must conform to the same standards.
Consequently, intermediaries (and AI developers in turn) must ensure that any generated content, especially content produced through AI Models, complies with the content restrictions set out in the IT Rules.
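The Advisory does not prescribe how this is to be implemented. Purely as an illustrative sketch, the following Python snippet (with hypothetical names such as classify and moderate_ai_output, and a toy keyword check standing in for a real moderation model) shows one way a platform might gate AI-generated output against restricted content categories before display:

```python
# Illustrative only: gate AI-generated output behind a content check.
# The category names loosely track the kinds of content restricted under
# Rule 3(1)(b) of the IT Rules; the Advisory prescribes no implementation.

RESTRICTED_CATEGORIES = {
    "impersonation", "obscene", "defamatory",
    "misinformation", "threat_to_electoral_integrity",
}

def classify(text: str) -> set[str]:
    """Placeholder classifier: a real system would call a trained
    moderation model or a human-review pipeline, not keyword checks."""
    flags = set()
    if "deepfake" in text.lower():
        flags.add("misinformation")
    return flags

def moderate_ai_output(generated_text: str) -> str:
    """Release the generated text only if no restricted category is flagged."""
    flags = classify(generated_text) & RESTRICTED_CATEGORIES
    if flags:
        raise ValueError(f"Output blocked; flagged for: {sorted(flags)}")
    return generated_text
```

The gating pattern, rather than the toy classifier, is the point: the moderation step sits between the AI Model and the user, so restricted content is caught before it is hosted or displayed.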
2. Computer resources, by themselves or through the use of AI Models, do not permit bias or discrimination or threaten the integrity of the electoral process.
The Advisory states that intermediaries and AI developers should ensure that AI Models do not perpetuate bias or discrimination or threaten the integrity of the electoral process. This bears similarities with the OECD Principle on ‘Human-Centered Values and Fairness’[3], which requires AI systems to be designed to avoid creating or reinforcing bias.
The Advisory, however, does not specify what may constitute a ‘threat to the integrity of the electoral process’, nor does it provide further guidance on the thresholds of these requirements or on the liability and responsibility of AI developers in case of violations.
3. Under-tested or unreliable AI Models are used and deployed only after explicitly informing users of the possible inherent fallibility or unreliability of the output generated by such AI Models, and such AI Models are made available on the basis of a consent pop-up or equivalent mechanism.
The Advisory has done away with the requirement of obtaining prior government permission (as prescribed under the Previous Advisory) and has retained only the requirement of explicit disclosure to users of the possible fallibility and unreliability of AI Models and their outputs. The removal of the prior governmental approval requirement comes as a relief to developers of such AI Models and reflects a more realistic regulatory approach based on transparency, accountability, and disclosure.
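The Advisory leaves the ‘consent pop-up or equivalent mechanism’ open-ended. A minimal sketch of such a gate, assuming a hypothetical in-memory consent log and a placeholder model call (none of which come from the Advisory itself), might look as follows:

```python
import time

FALLIBILITY_NOTICE = (
    "This AI model is under-tested; its output may be unreliable or "
    "incorrect. Do you wish to proceed?"
)

# Hypothetical in-memory consent log; a real platform would durably
# persist consent records (user, notice version, timestamp).
consent_log: dict[str, float] = {}

def record_consent(user_id: str) -> None:
    """Record that the user has acknowledged the fallibility notice."""
    consent_log[user_id] = time.time()

def serve_model(user_id: str, prompt: str) -> str:
    """Gate access to the under-tested model behind recorded consent."""
    if user_id not in consent_log:
        # In a real UI this would render the consent pop-up instead.
        raise PermissionError(FALLIBILITY_NOTICE)
    return run_untested_model(prompt)

def run_untested_model(prompt: str) -> str:
    # Placeholder standing in for the actual under-tested AI Model.
    return f"[model output for: {prompt}]"
```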
4. Users are informed of the consequences of dealing with unlawful information on the platform, including suspension or termination of the user’s access or usage rights and punishment under ‘applicable law’.
Intermediaries and platforms are required to inform users, through their Terms of Use and User Agreements, about the consequences of dealing with unlawful information. The periodic user intimation requirement already exists under the IT Rules and is complied with by many intermediaries. Pursuant to the Advisory, intermediaries can be expected to also inform their users of the legal consequences of dealing with unlawful information.
5. Permanent unique metadata or identifiers must be deployed on all forms of information that may potentially be a deepfake or misinformation, and such metadata or identifiers must be capable of pinpointing the originator of the information on the platform. Further, where the information is changed or modified by a user, the metadata should be configured to enable identification of the changes made by that user.
The requirement to embed identifiers in any synthetic creation, generation, or modification of text, audio, visual, audio-visual, and other content stems from the need to identify and distinguish AI-generated content from user-generated content, albeit both being subject to similar thresholds of content moderation.
The inclusion of the permanent metadata requirement also intertwines with the ‘first originator of information’[4] provision, which enables the Government to issue directions for identifying the originator of information and which was earlier limited to significant social media intermediaries under the IT Rules. Under the Advisory, by contrast, the requirement applies broadly to intermediaries and platforms. Such permanent ‘labels’ must make it possible to identify that the content is ‘synthetic’, to identify the user or computer resource through which the information is generated, and to identify the intermediary through whose software the information is generated or the first originator of the information.
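The Advisory prescribes no particular format for such labels. One illustrative shape, sketched below with entirely hypothetical names, is a provenance record that carries a permanent unique identifier and a hash-chained modification history, so that the synthetic origin, the originator, the generating intermediary, and any later edits all remain identifiable (standards efforts such as C2PA content credentials address a similar problem):

```python
import hashlib
import json
import time
import uuid

def _digest(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def label_content(content: str, originator_id: str, intermediary: str) -> dict:
    """Attach a provenance record marking the content as synthetic and
    identifying its originator and the generating intermediary."""
    return {
        "label_id": str(uuid.uuid4()),  # permanent unique identifier
        "synthetic": True,
        "originator": originator_id,
        "intermediary": intermediary,
        "history": [{
            "action": "created",
            "actor": originator_id,
            "timestamp": time.time(),
            "content_hash": _digest(content),
        }],
    }

def record_modification(label: dict, new_content: str, actor_id: str) -> dict:
    """Append a tamper-evident entry identifying who changed the content."""
    prev_entry = json.dumps(label["history"][-1], sort_keys=True)
    label["history"].append({
        "action": "modified",
        "actor": actor_id,
        "timestamp": time.time(),
        "content_hash": _digest(new_content),
        "prev_entry_hash": _digest(prev_entry),
    })
    return label
```

Because each modification entry hashes the one before it, the record cannot be silently rewritten, and the user responsible for each change remains identifiable, as the Advisory contemplates.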
6. Non-compliance with the IT Act and the rules made thereunder would attract penal consequences, such as fines and criminal proceedings, against the intermediary, the platform, and its users.
While it is evident that non-compliance with the IT Rules (including the due diligence conditions) may expose ‘intermediaries’ to liability for third-party content, it is unclear how platforms (which are not considered intermediaries) and users would be held liable or responsible for violations of the IT Rules, apart from actions that intermediaries themselves may take against them.
The Advisory requires all intermediaries to ensure compliance with immediate effect, i.e., from 15 March 2024 onwards, without any further requirement to submit or file an Action Taken Cum Status report with MEITY.
Key differences between the advisories
The Advisory retains most of the provisions of the Previous Advisory. However, there are some significant changes, including the following:
| Advisory dated 1 March 2024 | Advisory dated 15 March 2024 |
| --- | --- |
| Explicit prior approval from the government is necessary before deploying under-tested and unreliable AI Models to Indian users. | No explicit prior approval from the government is required before deploying under-tested and unreliable AI Models to Indian users. |
| Compliance recommendation to intermediaries or platforms to configure metadata to identify the user/computer posting the information, to pinpoint the originator of the original information. | Compliance recommendation to intermediaries and platforms to configure metadata in such a way that it enables identification of the user/computer involved in making changes or modifications to the original information. |
| Consequences of non-compliance to be faced by intermediaries or platforms or their users. | Consequences of non-compliance to be faced by all: intermediaries, platforms, and users. |
| Compliance with the advisory to be ensured within 15 days, in the form of an Action Taken Cum Status Report to be submitted to the Ministry. | Compliance with the advisory to be ensured with immediate effect, with no requirement for an Action Taken Cum Status Report to be submitted to the Ministry. |
| Obligation on intermediaries or platforms to inform users of the consequences of dealing with unlawful information on their platform, including inter alia the removal of such non-compliant information, suspension or termination of the user’s access or usage rights to their account, and punishment under applicable law. | Obligation on intermediaries and platforms to inform users of the consequences of dealing with unlawful information, including inter alia the removal of such information entirely, suspension or termination of the user’s access or usage rights to their account, and punishment under applicable law. |
Way forward
Regulation of AI warrants a balanced approach that safeguards users and citizens from the existing and emerging harms associated with AI while simultaneously protecting and strengthening innovation and growth. This may necessitate classifying AI systems based on their risk of harm (akin to the EU’s AI Act[5]) and imposing obligations such as risk management, record-keeping, disclosure, human oversight, and quality management on developers and designers of AI systems. In this context, it is unclear whether the IT Act (or the Digital India Act, in the future) would be adequate as a regulatory tool to achieve this balance.
As we witness the evolution of information technology laws (such as the yet-to-be-enforced Digital Personal Data Protection Act, 2023, and the Telecommunications Act, 2023), only time will reveal whether the IT Act (or the Digital India Act, in the future) is better suited to regulate AI or whether separate, dedicated legislation would better allay the concerns emerging from AI. In the meantime, it is important for intermediaries and developers of AI systems to keep pace with such advisories and to actively contribute to the discourse and deliberations on AI regulation.
[The first author is an Associate Partner in Corporate and M&A practice, while the other two are Senior Associate and Associate, respectively, in TMT-Data Protection practice at Lakshmikumaran & Sridharan Attorneys]
[1] As per Section 2(w) of the Information Technology Act, an intermediary, with respect to any particular electronic records, means ‘any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places, and cyber cafes’. Based on the products and services offered by intermediaries, or platforms owned or operated by them, intermediaries have been further classified as social media intermediaries, online gaming intermediaries, significant social media intermediaries (based on userbase), and news aggregators.
[2] Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
[3] OECD Artificial Intelligence Principles.
[4] Rule 4(2) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.