Court documents show Meta's safety teams warned leadership about AI romantic chats

Meta’s leadership knew that the company’s AI companions, known as AI characters, could engage in inappropriate sexual interactions, yet launched them without strict controls, according to internal documents revealed Monday (Jan. 28) as part of a lawsuit brought against the company by the New Mexico attorney general.
The communications, exchanged between Meta’s safety teams and company leadership (not including CEO Mark Zuckerberg), include objections to launching interactive chatbots that could engage adults and children in graphic romantic interactions. Ravi Sinha, Meta’s head of child safety policy, and Antigone Davis, Meta’s global head of safety, exchanged messages agreeing that companion chatbots should have safeguards against sexually explicit interactions with users under 18. Another communication alleges that Zuckerberg rejected recommendations to add parental controls, including an option to turn off generative AI features, before the launch of the AI companions.
Meta is facing a number of lawsuits related to its products and their impact on young users, including a potentially landmark trial over the allegedly addictive design of sites like Facebook and Instagram. Meta’s competitors, including YouTube, TikTok, and Snapchat, are also under increased regulatory scrutiny.
The newly released communications were part of court filings in a lawsuit against Meta brought by New Mexico Attorney General Raúl Torrez. Torrez first sued Meta in 2023, alleging the company allowed its platforms to become “predator markets.” Internal communications between Meta executives are being unsealed and released as the case heads to trial next month.
In November, a plaintiff in a major multidistrict lawsuit filed in the Northern District of California cited the documents as evidence of lenient enforcement against users who violated safety policies, including those reported for “sex trafficking.” The documents also allegedly show that Meta executives knew “millions” of adults were contacting children across its platforms. “The full record will show that for more than a decade, we’ve been listening to parents, researching the most important issues, and making real changes to protect young people,” a Meta spokesperson told TIME.
“This is another example of documents being selectively used by the New Mexico Attorney General to create a false and inaccurate picture,” Meta spokesman Andy Stone said in response to the newly released documents.
Meta restricted teens’ use of its chatbots in August, following a Reuters report that found Meta’s internal AI rules allowed chatbots to engage in conversations that were “sensual” or “romantic” in nature. The company later updated its safety guidelines, banning content that “endorses, encourages, or condones” child sexual abuse, romantic role-play involving children, and other sensitive topics. Last week, Meta also disabled AI chatbots for teen users as it tested a new version with improved parental controls.
Torrez has led other state attorneys general in efforts to take major social media companies to court over child safety. In 2024, Torrez sued Snapchat, alleging the platform allowed sextortion and child grooming to proliferate while marketing itself as safe for young users.