
Anthropic CEO Worries Humanity May Not Be ‘Mature’ Enough For Advanced AI

Dario Amodei has some thoughts on artificial intelligence. About 38 pages’ worth of thoughts, actually. The founder of Anthropic, the maker of Claude, on Monday published a lengthy essay entitled “The Adolescence of Technology,” in which he discusses what he sees as the biggest risks that the development of powerful AI could bring to the world.

His company will continue to develop AI, by the way.

Amodei, who publishes these essays from time to time, suggests that humanity is about to enter a new era. “I believe we are entering a turbulent and inevitable process of transition that will test who we are as a species.” It could also be our last era, depending on how things go. “Humanity is about to be given almost unimaginable power, and it’s not clear that our social, political, and technological systems are capable of handling it,” Amodei wrote, later adding that “AI-enabled authoritarianism terrifies me.”

Side note: Anthropic offered Claude to the federal government, under the Trump administration, for $1 a year.

To his credit, Amodei has a vivid imagination that he puts to work throughout the essay. He recounts how the cult Aum Shinrikyo released sarin nerve gas in the Tokyo subway in 1995, killing 14 people and injuring many more. He then suggests that putting “intelligence in everyone’s pocket” would lower the barrier to such attacks, or even more dangerous ones.

“A deranged loner who wants to kill people but lacks the discipline or skill to do so will now be elevated to the level of a PhD virologist who has that motive,” he wrote. “I’m worried that there may be a large number of such people, and that if they can find an easy way to kill millions of people, soon one of them will do it.”

Apropos of nothing, did you know that one of the experiments Anthropic published in Claude Opus 4.5’s system card tasked the model with helping virologists recreate a challenging virus?

Amodei is understandably impressed by the rate of progress AI has seen in recent years, but warns that if it continues to develop at the same rate, we are not far from superintelligence, what people like Amodei used to call artificial general intelligence before that term fell out of favor. “If the exponential continues – which is not certain, but now has years of history supporting it – then it will not be more than a few years before AI is better than humans at everything,” he wrote.

What would that actually mean? Amodei offered an analogy: “Let’s say a true ‘country of geniuses’ appears somewhere in the world in 2027. Imagine 50 million people, all more talented than any Nobel Prize winner, government official, or technologist,” he wrote. “Let’s say you were the national security advisor of a major power, responsible for assessing and responding to the situation. Consider that, because AI systems can work hundreds of times faster than humans, this ‘country’ operates at a temporal advantage over all other countries: for every cognitive step we can take, this country can take ten.”

From that framework, Anthropic’s CEO said it’s worth considering what our biggest concerns should be. Amodei floated his own, including the dangers of autonomous AI, its misuse for destruction, and its misuse to seize power, and finally concluded that the imagined advisor’s report would deem the situation “the single worst threat to national security that we’ve faced in a century, probably ever.”

A reminder that Anthropic is the one building the country in that analogy.

Anthropic, more than any other AI company, has been active in identifying the risks of AI development and advocating for more regulatory scrutiny and consumer protection (whether you believe that is genuine or a form of regulatory capture is in the eye of the beholder, but at least it’s talking a good game). But it keeps building the very machine it warns could bring about the coming disaster. You don’t have to build the doom machine! And frankly, continuing to build it undermines how seriously anyone should take its warnings about those threats.

If there’s a real concern that humanity might not be mature enough to handle AI, maybe don’t make it publicly available with such low barriers to access, then brag about your monthly active users.
