ASCI’s new draft rules aim to curb misleading AI-generated advertising content

The Advertising Standards Council of India (ASCI) on Tuesday released draft guidelines for labelling AI-generated content in advertising, outlining when disclosures would be required for synthetically created or enhanced material.

The proposed framework is aligned with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, amended on February 10, and aims to improve transparency in advertising while avoiding what ASCI described as “consumer label fatigue around synthetically-generated information.”

The guidelines adopt what ASCI called a “risk-based approach,” focusing on the impact of AI-generated content on consumers rather than regulating the technology itself.

According to the draft, the use of AI in advertising would be considered misleading or harmful if it creates “unfulfillable expectations, exploits vulnerable populations, depicts unsafe situations, or replicates a real person’s likeness without consent.”

The draft classifies AI-generated advertising content into three categories based on consumer risk:

  1. High Risk: covers content that would remain prohibited even if labelled as AI-generated. Examples include fabricated endorsements, misleading claims about product performance, fake locations presented as real, and deepfakes or use of a person’s likeness without consent. The category also includes AI-generated fictional authority figures, such as a fake doctor endorsing a supplement.
  2. Medium Risk: would require mandatory disclosure where AI use could materially influence consumer decisions. This includes virtual influencers, AI-generated replicas of real people’s likeness or voice, synthetic product demonstrations, realistic AI-generated events or settings, and sponsored AI-generated product recommendations.
    Brands could use labels such as ‘Audio/Video created using AI’ or ‘Audio/Video enhanced using AI’ where disclosures are required.
  3. Low Risk: would not require labelling. This includes routine edits such as colour correction, noise reduction, minor blemish removal and lighting adjustments, as well as decorative backgrounds, ambient sound effects and clearly fantastical elements such as dragons or fairies.

The guidelines also exempt AI-assisted administrative and text uses, including generating advertising copy and creating accessibility descriptions, provided they do not create false records.

The draft guidelines are open for public consultation, with feedback invited from industry groups, consumers and other stakeholders until June 13, 2026, after which the finalisation process will begin.
