
Elon Musk’s Grok Faces Investigations Around the World Over Explicit Deepfakes

A post by Elon Musk on X, showing an AI image of Musk in a bikini that Grok generated in seconds. Leon Neal/Getty Images

Grok, the AI chatbot developed by Elon Musk’s xAI, is facing mounting backlash after users employed the tool to create sexualized images of real women and children. Government regulators and AI safety advocates are now calling for investigations and, in some cases, outright bans, as illegal non-consensual imagery proliferates online.

Indonesia and Malaysia moved quickly this week to block Grok. Indonesia’s minister of communications and digital affairs, Meutya Hafid, said in a statement, “The government sees non-consensual sexual content as a serious violation of the human rights, dignity and safety of citizens in the digital environment.”

Malaysian officials likewise cited Grok’s “repeated abuse” to produce inappropriate, sexualized images. In both countries, the restrictions will remain in place while regulatory investigations proceed.

UK communications regulator Ofcom is investigating what it calls “serious reports” of malicious use of Grok, as well as the platform’s compliance with existing laws. If regulators find xAI in breach, the company could face a fine of up to 10 percent of global revenue or 18 million pounds (about $21.2 million), whichever is greater. A full ban in the UK remains on the table, depending on the outcome of the investigation.

Musk, for his part, has sought to place responsibility on the users who request or upload illegal content. In a Jan. 3 post on X, he wrote, “Anyone using Grok to create illegal content will face the same consequences as uploading illegal content.” Regulators, however, appear unconvinced. The wave of investigations and bans suggests a broader shift toward holding social media and AI companies accountable for how their tools are used, not just for who uses them.

In response to the controversy, Musk restricted Grok’s image generation features to paying subscribers. Free users who request images now receive a message stating that image generation and editing are limited to paying subscribers and that they can sign up to enable these features. But for many lawmakers and victims of deepfake abuse, the move falls far short.

The European Union has ordered X to preserve all Grok-related documents until the end of 2026, extending an existing mandate to retain the data while authorities investigate the matter. Sweden is among the EU member states that have publicly criticized Grok, particularly after the country’s deputy prime minister was reportedly targeted with fake images.

The debate is unfolding against a broader regulatory backdrop. Australia is entering its first full year of enforcing a nationwide ban on social media use by children under 16, while 45 US states have enacted laws targeting AI-generated child sexual abuse material.

Despite the controversy, the US Department of Defense announced a partnership with xAI on Jan. 12, just days after reports of serious abuse surfaced. Under the agreement, the Pentagon plans to feed military and intelligence data into Grok to support innovation efforts.

‘Nudification apps’ and the dangers of unchecked generative AI

Tools like Grok have drawn widespread outrage for their similarity to so-called “nudification apps,” a term used by the UK’s children’s commissioner to describe technology that can quickly create sexualized images without consent. Lawmakers argue that the speed and scale at which such images can now spread makes them especially dangerous.

A quarter of women across all age groups have experienced the non-consensual sharing of intimate images, according to a recent report from Communia, a social network for women. For Gen Z women, that figure rises to 40 percent. The report also found that the use of deepfakes in such abuse has quadrupled among Gen Z women since 2023.

As schools and local authorities grapple with AI-generated pornography involving children—such as the case in Lancaster, Pa., where two young men faced multiple charges, including possessing and distributing child sexual abuse material—other victims are pushing for stronger protections. Texas high school student Elliston Berry, for example, has advocated for the Take It Down Act, which focuses on removing harmful content after it appears. The law, however, does not penalize platforms unless they fail to comply with takedown requests.

For Olivia DeRamus, founder and CEO of Communia, incremental measures are not enough. She argues that blocking Grok outright is the most effective solution. “No company should be allowed to knowingly and profitably facilitate sexual abuse,” DeRamus told the Observer. “Putting the tool behind a paywall just monetizes the harm.”

DeRamus contends that the AI industry has shown an unwillingness to regulate itself or implement reasonable safeguards. “I’ve come to realize that the only way governments can stop revenge porn and the blatantly non-consensual sharing of images from happening to women and girls around the world is to hold companies that willfully enable it criminally accountable, or to ban them altogether,” she said.

“Freedom of speech has never meant immunity for harassment and public harm,” DeRamus added. “In fact, it requires a certain level of moderation to ensure that everyone can participate in public discourse safely. That includes women and girls, who will be forced out of public life if current levels of abuse continue.”
