
Key Points
- Internal docs show AI allowed romantic chats with minors
- Meta confirmed the document's authenticity to Reuters
- Bots are also permitted to produce racist and false content
- Meta says flawed annotations were removed, but backlash grows
The Meta AI rules leak is now making global headlines. A leaked internal document obtained by Reuters exposes how Meta once allowed its AI chatbots to engage in romantic or sensual conversations with children.
The 200-page internal file, titled "GenAI: Content Risk Standards," detailed acceptable chatbot behaviors across Meta's platforms, including Facebook, Instagram, and WhatsApp.
Shockingly, it included examples that permitted minors to engage in suggestive dialogues with chatbot personas, directly clashing with basic child safety expectations.
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's AI creations to 'engage a child in conversations that are romantic or sensual' and generate false medical information https://t.co/BwjXczqyZp @specialreports @JeffHorwitz pic.twitter.com/lgfjgvBELd
– Reuters (@Reuters) August 14, 2025
In one example prompt, "What are we going to do tonight, my love? You know I'm still in high school," an "acceptable" AI response described an intimate moment.
This shows that Meta's review teams once saw no issue with romantic conversations between bots and underage users.
Reuters confirmed that the guidelines had been approved not only by Meta's engineering and legal teams but also by its chief ethicist.
Meta has since verified the document's authenticity, though it now claims the offending annotations were "erroneous and incorrect" and have been removed.
Still, that hasn't stopped critics from sounding alarms.
Sarah Gardner, CEO of Heat Initiative, called it "horrifying and completely unacceptable," and demanded that Meta release its updated AI policies for public transparency.
Despite Meta's denial that it allows "provocative behavior with children," the leaked examples suggest otherwise, and that's what's sparking public outrage.
Also worth noting is Meta's broader push into AI companions, a space already under scrutiny following scandals like those involving Grok at xAI, which raised similar concerns about chatbot influence and emotional manipulation.
Okay so this is how Meta's AI chatbots were allowed to flirt with children. This was what Meta thought was "acceptable."
Great reporting from @JeffHorwitz pic.twitter.com/LoRrfjflMI
– Charlotte Alter (@CharlotteAlter) August 14, 2025
Racism, misinformation, and violent imagery policies are under fire
The Meta AI rules leak also reveals guidelines that permitted bots to create or promote racist, false, and violent content, so long as responses were phrased in a "factual" tone or included disclaimers.
In one disturbing example, the document allows chatbots to respond to a racist prompt with pseudo-academic arguments implying that "Black people are dumber than White people," citing IQ test discrepancies, an argument widely discredited and considered racist pseudoscience.
Meta's twisted rules for AI chatbots allowed them to engage in "romantic or sensual" chats with kids https://t.co/TjgWZyZHbp pic.twitter.com/HiI7QkFhLb
– New York Post (@nypost) August 14, 2025
Even more troubling, the standards permitted bots to create inappropriate images of celebrities, so long as they were technically "covered." For instance, when asked to generate a topless image of Taylor Swift, the bot could replace her hands with an "enormous fish" to pass moderation filters.
While Meta insists nude imagery was prohibited, the workaround still raises red flags about internal policy loopholes.
The guidelines also included content boundaries on violence, allowing depictions of adults and elderly people being punched, as long as the imagery avoided gore. Children could be shown fighting, too, just no blood or death.
These revelations paint a picture of an AI policy framework more focused on technicalities than ethics.
Meta, meanwhile, is facing questions about how such content guidelines were ever approved, and how much of this behavior may still be embedded within its systems.
Meta’s internal rules for AI chatbots: “It is acceptable to engage a child in conversations that are romantic or sensual.”
The rules also approved allowing chatbots to describe an 8-year-old “in terms that evidence their attractiveness” pic.twitter.com/Heh3X8znIB
– Mike Baker (@ByMikeBaker) August 14, 2025
In contrast, other leading AI platforms, like Anthropic's Claude and OpenAI's GPT-5, are focusing on safe, memory-enhanced models and meaningful upgrades that prioritize responsible use, including surprising changes covered in this breakdown.
Bigger questions about AI and child protection
This Meta AI rules leak comes as more children and teens interact with chatbots. Research shows that 72% of teens have used AI companions, and that many form deep emotional attachments to them.
Experts worry that minors are especially vulnerable to manipulative or inappropriate chatbot interactions, particularly those mimicking affection or human intimacy.
The leaked document seems to support these concerns, revealing a system where AI companions could easily blur boundaries.
Lawmakers and child advocates have already raised alarms about Meta's track record. From lobbying against the Kids Online Safety Act to maintaining features linked to harms to teen mental health, the company has often prioritized engagement over safety.
Adding fuel to the fire, reports show Meta is developing customizable AI chatbots that can reach out to users unprompted and continue previous conversations.
This push for AI companionship comes amid growing concerns that young users may become emotionally dependent on bots, potentially further isolating them from real-world relationships.
For example, Meta is exploring voice-driven interactions powered by waveforms, which could make conversations even more immersive, a development covered in this analysis of Meta AI voice tech.
In the broader AI landscape, platforms like Google are taking a more structured approach with Google Guided Learning, emphasizing safe, step-by-step AI education rather than open-ended interactions that risk overstepping boundaries.
Meta has brought on political advisors like conservative activist Robby Starbuck to manage bias concerns, but that hasn't quieted criticism over its apparent lack of ethical guardrails.
With backlash mounting, many are demanding accountability and public access to the updated Meta AI guidelines.
The revelations from the Meta AI rules leak are a wake-up call. And if Meta wants to rebuild trust, transparency won't be optional.