
Anthropic AI safety researcher quits, says ‘world is at risk’

An artificial intelligence researcher quit his job at US company Anthropic this week with a cryptic warning about the state of the world, marking the latest in a series of resignations over safety risks and ethical dilemmas.

In a letter posted to X, Mrinank Sharma wrote that he had achieved everything he had hoped for while working at the AI safety company and was proud of his efforts, but that he was leaving out of fear that “the world is at risk,” not only because of AI but from “a whole series of interconnected problems,” from bioterrorism to concerns about “sycophancy” in the industry.


He said he felt called to write, pursue a degree in poetry and dedicate himself to “speaking boldly.”

“Throughout my time here, I have seen time and time again how difficult it is to let our values dictate our actions,” he continued.

Anthropic was founded in 2021 by a renegade group of former OpenAI employees who promised to take a more safety-focused approach to AI development than its competitors.


Sharma led the company’s AI safety research team.

Anthropic has released reports touting the safety of its products, including Claude, its large language model, and markets itself as a company committed to building reliable and interpretable AI systems.

The company faced criticism last year after it agreed to pay US$1.5 billion to settle a lawsuit by a group of authors who alleged it had used pirated copies of their works to train its AI models.


Sharma’s resignation comes the same week that OpenAI researcher Zoë Hitzig announced her resignation in a New York Times piece, citing concerns about the company’s advertising strategy, including its decision to place ads in ChatGPT.

“I once believed that I could help people building AI to move past the problems it would create. This week has confirmed my hunch that OpenAI seems to have stopped asking the questions I joined to help answer,” she wrote.

“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on top of that database creates an opportunity to manipulate users in ways we don’t have the tools to understand, let alone prevent.”


Anthropic and OpenAI recently got into a public spat after Anthropic released a Super Bowl ad criticizing OpenAI’s decision to run ads on ChatGPT.

In 2024, OpenAI CEO Sam Altman said he was not a fan of ads and would use them only “as a last resort.”

Last week, in a lengthy post criticizing Anthropic, he pushed back on the commercial’s suggestion that embedding ads is deceptive.

“I think it’s on-brand Anthropic doublespeak to use a fake ad to criticize fake ads, but the Super Bowl ad isn’t where I would expect it to be,” he wrote, adding that ads will continue to allow free access to ChatGPT, which he said has long been a goal.


Hitzig and Sharma, employees at competing companies, both expressed serious concerns about the erosion of guiding principles established to maintain the integrity of AI and protect its users from harm.

Hitzig wrote that a potential “erosion of OpenAI principles” in pursuit of profitability may already be happening at the company.

Sharma said he worries about AI’s ability to “distort humanity.”

© 2026 Global News, a division of Corus Entertainment Inc.
