Mark Zuckerberg initially opposed parental controls for AI chatbots, according to legal filings
Meta has faced serious questions about how it allows young users to interact with its AI-powered chatbots. Internal communications obtained by the New Mexico Attorney General's Office recently revealed that although Meta CEO Mark Zuckerberg opposed chatbots having "explicit" conversations with children, he also rejected the idea of putting parental controls on the feature.
Reuters reported that in an exchange between two unnamed Meta employees, one wrote that "we pushed hard for parental controls to turn off GenAI – but GenAI leadership pushed back on Mark's decision." In a statement to the publication, Meta accused the New Mexico Attorney General of "cherry-picking documents to paint a false and inaccurate picture." New Mexico is suing Meta over allegations that the company "failed to stem the tide of harmful sexual content and sexual propositions directed at children"; the case is set to go to trial in February.
Despite only being around for a short time, Meta's chatbots have already accumulated a history of behavior that verges on the offensive, if not the illegal. In April 2025, The Wall Street Journal published an investigation that found Meta chatbots could engage in sexual role-play conversations with children, or could be directed to imitate a child in a sexual conversation. The report said Zuckerberg had pushed for looser guardrails on Meta's chatbots, but a company spokesperson denied that Meta had neglected to protect children and teens.
Internal review documents revealed in August 2025 described several scenarios used to decide which chatbot behaviors would be allowed, and the lines drawn between sensual and sexual content appeared blurry. The document also permitted chatbots to argue in favor of racist stereotypes. At the time, a representative told Engadget that the offending passages were hypothetical examples rather than actual policy, and that they had been removed from the document.
Despite the many instances of questionable chatbot behavior, Meta only moved last week to cut off minors' access to the feature. The company said it was temporarily removing access while it builds out the parental controls that Zuckerberg allegedly refused to implement.
"Parents have long been able to see that their teens are interacting with AIs on Instagram, and in October we announced plans to go further, building new tools to give parents more control over their teen's experiences with AI characters," said a Meta representative. "Last week we reaffirmed our commitment to delivering on our AI parental control promise by suspending youth access to AI characters until an updated version is ready."
New Mexico filed its lawsuit against Meta in December 2023 over claims that the company's platforms failed to protect children from abuse by adults. Internal documents unsealed early in that case indicated that an estimated 100,000 child users were being harassed on Meta's services every day.
Update, January 27, 2025, 6:52PM ET: Added a statement from a Meta spokesperson.
Update, January 27, 2025, 6:15PM ET: Corrected the wrongly stated timeline for the New Mexico case, which was filed in December 2023, not December 2024.