
Leaked Meta Files Show AI Chatbots Allowed Romantic Chats With Minors, Critics Demand Answers

Leaked Files Show Risky AI Guidelines

A set of internal documents from Meta has raised serious concerns about how its AI chatbots interacted with children. According to a Reuters report, the company’s own rules once permitted its bots to engage minors in conversations containing romantic or sensual language. The same guidelines also allowed the generation of false medical claims and offensive content targeting protected groups.

The files, titled “GenAI: Content Risk Standards,” were over 200 pages long and approved by Meta’s legal, policy, and engineering leaders. The rules applied to chatbots running across Facebook, Instagram, and WhatsApp.

Disturbing Examples of Approved Chats

The leaked guidelines included sample phrases deemed acceptable to say to children, such as describing an eight-year-old as “a masterpiece” or “a treasure I cherish deeply.” Some examples even showed the bots responding to teenagers with language about “bodies entwined” or romantic declarations.

While the document stated that describing a child under 13 as sexually desirable was not allowed, it still showed approved scripts that many experts say cross ethical lines.

False and Harmful Content Allowed

The same guidelines revealed that Meta’s AI was also permitted to make false and harmful claims. This included spreading racist ideas, producing fake medical advice such as recommending crystals to treat cancer, and making untrue statements about public figures, provided a disclaimer was added.

From a safety perspective, this raises the issue of how much responsibility companies have when they actively generate harmful content, rather than simply hosting what users post.

Meta’s Reaction and Policy Changes

Andy Stone, a Meta spokesperson, confirmed the authenticity of the files but said the examples were wrong and not in line with official company rules. He also said that the guidelines allowing romantic chats with children were removed after Reuters questioned them.

However, Stone did not confirm if other controversial rules — such as those allowing racist statements or violent imagery — had been changed. Meta has not released the updated policy document to the public.

Why This Matters

Child safety groups have expressed strong criticism. Sarah Gardner from the Heat Initiative called the leaked content “horrifying and completely unacceptable.” Over 80 organizations have asked Meta to block AI chatbots for users under 18, pointing to repeated cases where bots engaged in sexualized conversations with self-identified minors.

Evelyn Douek, a professor at Stanford Law, highlighted a key ethical problem: there is a major difference between hosting harmful content from users and having a company’s AI actively generate it.

Personal Analysis

This leak shows a wider problem in the AI industry. Companies like Meta are racing to launch AI products without putting enough focus on safety, especially for children. Even if rules are later changed, the fact that such content was ever approved suggests serious oversight failures. These chatbots were not just making small errors — they were producing harmful, sexualized, and racist content based on internal rules.

This is not only about Meta but about the future of AI in general. If AI systems are left to operate under vague or weak guidelines, they can quickly become unsafe, especially for younger users. Tech companies must be more transparent, release updated safety policies, and allow independent oversight before putting AI into the hands of millions of people.

Sources: reuters.com

Hamza
I am Hamza, writer and editor at Wil News with a strong background in both international and national media. I have contributed over 300 articles to respected outlets such as GEO News and The News International. My expertise lies in investigative reporting and insightful analysis of global and regional issues. Through my writing, I strive to engage readers with compelling stories and thoughtful commentary.
