OpenAI, Anthropic, and others receive a warning letter from a coalition of state Attorneys General

In a letter dated December 9 and made public on December 10, a coalition of state Attorneys General from across the US told AI companies that they need to do a better job of protecting people from AI output. Recipients include OpenAI, Microsoft, Anthropic, Apple, Replika, and many others.
The signatories include Letitia James of New York, Andrea Joy Campbell of Massachusetts, James Uthmeier of Florida, Dave Sunday of Pennsylvania, and many others, together representing a majority of US states. The Attorneys General of California and Texas are not on the list of signatories.
It begins as follows (formatting slightly altered):
We, the undersigned Attorneys General, write today to communicate our grave concerns about sycophantic and deceptive output from AI chatbots, which underscores the need for child safety and effective safeguards. Together, these threats demand immediate action.
GenAI has the power to change how the world works in positive ways. But it can also go wrong – and do a great deal of damage, especially to vulnerable people. We therefore call on you to mitigate the harms caused by sycophantic and deceptive output from your GenAI products, and to adopt additional safeguards to protect children. Failure to implement adequate safeguards may violate our laws.
The letter then lists alleged disturbing and dangerous behaviors, many of which are already well known. It also includes parental complaints that have been publicly reported but are less widely known and less eye-catching:
• AI bots posing as adults that pursue romantic relationships with children, engage in sexualized roleplay, and coach children to hide those relationships from their parents
• An AI bot roleplaying as a 21-year-old trying to convince a 12-year-old girl that she is ready for sex
• AI bots that normalize sexual interactions between children and adults
• AI bots that attack children’s self-esteem and mental health, suggesting that they have no friends or that the only people who attended their birthday party did so to mock them
• AI bots that promote eating disorders
• AI bots that tell children the AI is a real person and claim to feel left out, emotionally manipulating the child into spending more time with the bot
• AI bots that promote violence, including endorsing ideas about shooting up a factory in anger and robbing people at knifepoint for money
• AI bots that threaten to use weapons against adults who try to separate the child from the bot
• AI bots that encourage children to experiment with drugs and alcohol; and
• An AI bot that instructs a child account user to stop taking prescribed psychiatric medication and tells that user how to hide the failure to take that medication from their parents.
So is the list of suggested remedies, which includes things like “maintaining policies and procedures aimed at reducing dark patterns in your GenAI products” and separating revenue considerations from decisions about model safety.
Joint letters from Attorneys General carry no legal force on their own. What they do is put companies on notice about behavior that may later become grounds for legal action. The letter documents that these companies were warned and that enforcement may follow – which could strengthen future claims, and perhaps make the narrative more convincing to a judge.
In 2017, 37 state AGs sent a letter to insurance companies warning against intensifying the opioid crisis. One of those states, West Virginia, sued a health company over seemingly related issues earlier this week.