Advertising or Manipulating? The Use of AI in Children’s Advertisements

In an era where artificial intelligence (AI) increasingly permeates various facets of society, its application in influencing behavior — particularly among vulnerable populations like children — raises significant ethical and legal concerns.

The concept of “nudging,” introduced by behavioral economists Richard H. Thaler and Cass R. Sunstein in 2008, involves subtly guiding individuals toward certain decisions by leveraging cognitive biases. Though nudging was initially proposed as a public policy tool to promote beneficial behaviors, the integration of AI into nudging strategies has transformed its scope and impact, especially in advertising directed at children. Finding a balanced regulatory approach to these practices is therefore vital.

In May 2024, the BBB National Programs’ Children’s Advertising Review Unit (CARU) issued a compliance warning emphasizing that its Self-Regulatory Guidelines for Children’s Advertising and Self-Regulatory Guidelines for Children’s Online Privacy Protection apply to the use of artificial intelligence in advertising and data collection practices directed at children. In particular, the warning took issue with advertisements using AI that could mislead children about product characteristics, blur the distinction between reality and fantasy, or create a false sense of personal connection with brands, celebrities, or influencers.

Additionally, the document highlights that advertisers must ensure that AI does not reinforce harmful stereotypes or unsafe behaviors. From a privacy standpoint, companies utilizing AI in child-directed content must transparently disclose their data collection practices and obtain verifiable parental consent before gathering personal information from children.

These self-regulatory guidelines align with the Federal Trade Commission’s (FTC) Children’s Online Privacy Protection Rule (the COPPA Rule), which requires operators to obtain verifiable parental consent before collecting, using, or disclosing personal information from children under 13. However, while the COPPA Rule and CARU’s guidelines provide essential safeguards, the broader U.S. framework still leans heavily on industry self-regulation, leaving gaps in enforcement and compliance.

The EU’s Stricter Regulatory Stance

The European Union (EU) has taken a markedly different approach to AI’s role in influencing children. The AI Act, the world’s first comprehensive piece of legislation on AI, explicitly prohibits AI systems from exploiting age-related vulnerabilities, recognizing children as a particularly susceptible group. Unlike CARU’s self-regulatory model, the AI Act imposes legally binding requirements on companies, particularly for high-risk AI applications such as those used in education and digital advertising.

The EU’s regulatory framework mandates that AI-generated content, such as deepfakes, be clearly labeled, and that users be notified when they are interacting with AI. Furthermore, high-risk AI applications must undergo strict risk assessment procedures to ensure they do not harm children’s rights. This level of regulatory scrutiny stands in contrast to the U.S. approach, which focuses more on corporate responsibility than on enforceable restrictions.

AI Nudging: A Form of Manipulation?

The broader ethical concerns surrounding AI nudging extend well beyond children’s advertising, though young people remain among the most affected. Behavioral nudging has become a powerful instrument in marketing, often operating without consumers’ explicit awareness. Yuval Noah Harari warned in 2018 that as AI advances, it will become easier to manipulate individuals by tapping into their deepest emotions and desires. This concern is particularly relevant in the digital marketplace, where AI-powered nudges shape consumer preferences in ways that challenge the foundations of liberal market economies.

In a free-market model, consumers exert counterpressure on producers by making informed choices, compelling businesses to offer competitive products at fair prices. However, AI-driven nudging distorts this mechanism by subtly influencing consumer behavior, potentially reducing genuine choice and diminishing market transparency. The same logic applies to democratic participation, as AI’s ability to shape opinions raises concerns about election integrity and informed decision-making.

The Policy Divide: Innovation vs. Regulation

The regulatory debate over AI’s role in nudging reflects broader tensions between innovation and consumer protection. The Biden administration’s Executive Order 14110 emphasized the need for safeguards in AI deployment, prioritizing responsible AI development. However, the Trump administration’s recent executive order revoked it, aiming to eliminate perceived bureaucratic obstacles to American AI dominance.

This policy shift underscores the ideological divide between a regulatory approach that prioritizes accountability and a laissez-faire model that seeks to maintain the U.S.’s competitive edge in AI innovation. While minimizing regulatory barriers may accelerate technological advancement, it also raises the risk of unchecked AI applications with significant ethical and societal implications.

The Need for a Balanced Approach

AI-driven nudging, particularly in child-directed advertising, presents a complex challenge that requires a nuanced regulatory approach. While self-regulatory frameworks like CARU’s guidelines serve as an essential first step, they lack the enforceability needed to prevent manipulative practices effectively. In contrast, the EU’s AI Act demonstrates a more robust commitment to protecting vulnerable populations from AI-driven influence.

A balanced approach should integrate elements of both models: fostering innovation while implementing enforceable safeguards to prevent exploitation. Policymakers must consider stricter transparency requirements, enforceable ethical guidelines, and independent oversight mechanisms to ensure that AI serves the public interest rather than undermining autonomy and market integrity.

As AI continues to evolve, so too must the legal and ethical frameworks governing its use.

AUTHOR

Monika Mercz

Monika Mercz is a visiting researcher at The George Washington University. She is a Hungarian lawyer focusing on how AI can be used to better protect children.

EDITOR’S NOTE: This Washington Stand column is republished with permission. All rights reserved. ©2025 Family Research Council.

