In February 2026, India's Ministry of Electronics and Information Technology (MeitY) issued dedicated regulations governing AI-generated synthetic content, which took effect on February 20. The rules establish a comprehensive regulatory framework for synthetic content, strengthen India's digital governance regime, and draw clear boundaries for the compliant application of AI.
The new regulations define the scope of supervision and the exemption scenarios, focusing on algorithmically generated audio, visual, and audio-visual content that could be mistaken for real people or events. Clear exemptions are carved out for activities that do not materially alter the substance of the content, such as routine editing, AI-assisted production that introduces no false information, and technical processing that improves accessibility. This balance preserves regulatory effectiveness while protecting industrial innovation, avoiding over-regulation that could hinder AI development.
For platform compliance operations, the new regulations specify the scope of content that platforms may lawfully act against. They enumerate the categories of illegal information that require timely measures such as removal and blocking, setting a clear legal baseline for handling illegal synthetic content. They also provide that platforms' compliant takedown actions enjoy legal protection, removing hesitation and encouraging platforms to actively fulfill their content-management responsibilities.
For all online intermediary platforms, the new regulations set unified compliance standards. They shorten the time limits for acting on official removal notices, high-risk complaints, and user appeals, pushing platforms to streamline their processes and improve operational efficiency. They also strengthen user compliance guidance: platforms must inform users of the relevant legal boundaries and the consequences of violations, and platforms offering synthetic-content tools must issue dedicated risk warnings to regulate user behavior.
For AI platforms that create or disseminate synthetic content, the new regulations set targeted compliance requirements. Such platforms must establish sound technical mechanisms to prevent illegal content. For legitimate synthetic content, they must implement standardized labeling and traceability: prominent visual labels, a prominent audio disclosure at the start of audio content, and embedded traceable metadata where technically feasible, protecting users' right to know.
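The regulations do not prescribe a metadata schema, but the traceability idea can be illustrated with a minimal sketch: attach a provenance record to a synthetic asset (a content hash plus an origin field) so that the asset can later be verified against its record. All field names ("synthetic", "generator", "sha256") are illustrative assumptions, not terms from the regulations.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> str:
    """Build a traceability record for a synthetic asset.

    The schema here is a hypothetical example; real deployments would
    follow whatever standard the regulator or industry adopts.
    """
    return json.dumps({
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }, sort_keys=True)

def verify_provenance(content: bytes, record: str) -> bool:
    """Check that a record actually refers to this exact content."""
    data = json.loads(record)
    return (data.get("synthetic") is True
            and data.get("sha256") == hashlib.sha256(content).hexdigest())
```

Because the record binds to a hash of the bytes, any tampering with the asset breaks verification, which is the core property traceability requirements aim for.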
As the main channels for synthetic-content dissemination, large social media platforms bear heightened supervisory responsibilities. The new regulations require them to review content at the source, accurately identify synthetic content through a combination of user declaration and technical verification, and release it only after standardized labeling. They must also deploy technical review tools and strengthen their prevention capabilities. A platform that condones the dissemination of illegal synthetic content is legally deemed to have failed its due-diligence obligations and will be held accountable.
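The "user declaration + technical verification" gate described above can be sketched as a simple release check: content counted as synthetic by either signal may only go out once the standardized label is applied. The `Upload` fields and the detector threshold are assumptions for illustration; the regulations specify the policy, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_synthetic: bool  # the user-declaration signal
    detector_score: float          # hypothetical classifier output in [0, 1]
    labeled: bool                  # standardized synthetic-content label applied

# Illustrative cutoff, not a value taken from the regulations.
DETECTOR_THRESHOLD = 0.8

def may_release(u: Upload) -> bool:
    """Release only if content deemed synthetic carries the standard label."""
    is_synthetic = u.user_declared_synthetic or u.detector_score >= DETECTOR_THRESHOLD
    return u.labeled or not is_synthetic
```

Either signal alone is enough to require labeling, which mirrors the regulation's point that platforms cannot rely solely on user honesty.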
For more details on the specific revisions, please refer to the link below:
https://www.meity.gov.in/static/uploads/2026/02/f55fe52418b03f58b0669f6a8bc03b6d.pdf