Dario Amodei on the Biggest Risks of AI—and Anthropic's Efforts to Stop Them

Dario Amodei says AI risks putting dangerous information in the wrong hands without strict monitoring. Photo by Chance Yeh/Getty Images for HubSpot

Anthropic is known for its strict safety standards, which it has used to differentiate itself from competitors such as OpenAI and xAI. Those hard-line policies include safeguards that prevent users from turning to Claude to produce biological weapons—a threat that CEO Dario Amodei described as one of the most pressing risks of AI in a new 20,000-word essay.

“Humanity needs to wake up, and this essay is an attempt – perhaps in vain, but worth trying – to wake people up,” Amodei wrote in the post, which he framed as a sobering counterpart to his 2024 essay describing the benefits AI will bring.

One of Amodei’s biggest fears is that AI could give large groups of people access to instructions for making and using dangerous tools—knowledge that has traditionally been confined to a small group of highly trained experts. “I worry that a genius in everyone’s pocket could remove that barrier, essentially turning everyone into a Ph.D. virologist who can be walked through the step-by-step process of designing, synthesizing, and releasing a biological weapon,” Amodei wrote.

To address that risk, Anthropic relies on strategies like its Claude Constitution, a set of principles that guides its model training. A prohibition on assisting with biological, chemical, nuclear or radiological weapons is listed among the constitution's “hard constraints”—actions Claude must never take regardless of the user's instructions.

Still, the potential for AI models to be jailbroken means Anthropic needs a “second line of defense,” Amodei said. That is why, in mid-2025, the company began deploying additional safeguards—classifiers designed to identify and block bioweapons-related outputs. “These classifiers proportionally increase the cost of serving our models (for some models, close to 5 percent of total cost) and thus cut into our margins, but we feel that using them is the right thing to do,” he noted.

Besides urging other AI companies to take similar steps, Amodei also called on governments to introduce legislation to curb AI-fueled bioweapon threats. He suggested countries invest in defense technologies such as rapid vaccine development and advanced personal protective equipment, adding that Anthropic is “excited” to work on those efforts with biotech and pharmaceutical companies.

Anthropic’s reputation, however, goes beyond safety. The startup, co-founded by Amodei in 2021 and now approaching a valuation of $350 billion, has seen its Claude products – especially its coding agent – widely adopted. Its 2025 revenue is expected to reach $4.5 billion, a 12-fold increase from 2024, according to The Information, although its overall margin of 40 percent is lower than expected due to high compute costs, which include running its safety defenses.

Amodei argues that the pace of AI training and development is driving these rapidly emerging risks. He predicts that models as talented as Nobel Prize winners will arrive within the next one to two years. Other risks include the potential for AI models to misbehave, be manipulated by governments, or disrupt labor markets and concentrate economic power in the hands of a few, he said.

There are ways that development can be slowed down, Amodei said. Limiting the sale of chips to China, for example, would give democracies a “safeguard,” allowing them to build the technology carefully and under strict regulations. But the large sums of money at stake make such controls difficult. “This is the trap: AI is so powerful, such a glorious prize, that it’s very difficult for human civilization to put any limits on it at all,” he said.
