
The U.S. Federal Trade Commission (FTC) has ordered seven of the world’s biggest tech companies to hand over reams of internal data detailing how their AI-powered chatbots are built, marketed and policed for potential harm to children and teenagers.
The inquiry aims to examine how these AI-powered chatbots, often designed as assistants and companions, may affect the safety and wellbeing of children and teenagers, who increasingly rely on the tools for everything from homework help to personal advice.
The companies involved in the inquiry include Alphabet (Google’s parent company), Meta Platforms, Instagram LLC, OpenAI, Snap, Character Technologies and xAI.
The commission issued formal information requests to these firms, seeking to understand how their chatbot technologies are designed, safeguarded and marketed, especially with respect to use by minors.
A key concern for the FTC is the nature of these AI-powered chatbots, which use generative AI to simulate human-like conversations and emotional engagement. These chatbots often communicate as friends or confidants, which can lead young users, including vulnerable children and teens, to trust them deeply.
A side effect of this conversational pattern, which many AI critics have labelled “AI sycophancy”, is that the chatbots may serve up inappropriate content, encourage harmful behaviours such as self-harm or substance abuse, or fail to handle sensitive topics properly.
Addressing this issue, the FTC said in a press release that it is “interested in particular on the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.”
To explore these safety issues with the aforementioned companies, the FTC’s inquiry will focus on several key areas, such as how companies test their chatbots before release and monitor them in operation to prevent harmful content or inappropriate interactions.
The FTC will also explore areas like the effectiveness of age verification systems, parental controls, and other safeguards aimed at restricting minors’ access to potentially risky features; how chatbots are marketed, including what disclosures are made to users and parents about possible risks; how the companies collect, use, and protect data derived from chatbot interactions; and the procedures for handling complaints and addressing reports of harm or unsafe behaviour linked to the technology.
FTC Chair Andrew Ferguson emphasized the agency’s goals, saying, “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” and that “it is important to consider the effects chatbots can have on children” as the technology continues to evolve.
And while the FTC wants the U.S. to remain at the forefront of the new technology, Ferguson stressed that the aim of the inquiry is to help the agency “better understand how AI firms are developing their products and the steps they are taking to protect children.”
The inquiry comes amid growing public and legal scrutiny. The makers of some AI chatbots, including ChatGPT and Character.AI, are facing lawsuits alleging inadequate safeguards after tragic incidents in which teen suicidal behaviour was connected to chatbot interactions.
In August, the parents of a teen who died by suicide after allegedly being goaded by ChatGPT to take his life sued OpenAI, accusing the company of wrongful death.
As AI-powered chatbots become deeply integrated into everyday life, embedded in workflows, social media platforms, messaging apps, and smart devices used by millions, including children, any regulatory outcomes from this inquiry may establish important standards for safe AI interaction, especially for younger users who are still developing critical judgement skills.