
More on Our AI Future


Earlier today I wrote about some of the concern coming from AI researchers themselves that AI is moving so quickly it will inevitably create a major disruption in our economy and potentially in world politics. I wasn't planning to write about it again today, but there's a new interview with Dario Amodei, the CEO of Anthropic, which is all about the positive and negative outcomes he foresees for America and the world at large as a result of the products his company is building. Parts of the interview are so interesting that I wanted to highlight them.

The start of the interview is all about the positive case for AI. What could it do in the near future that will benefit all of us?

Amodei: Yeah, so for a little background, before I worked in A.I., before I worked in tech at all, I was a biologist. I first worked on computational neuroscience, and then I worked at Stanford Medical School on finding protein biomarkers for cancer, on trying to improve diagnostics and curing cancer.

One of the observations that I most had when I worked in that field was the incredible complexity of it. Each protein has a level localized within each cell. It’s not enough to measure the level within the body, the level within each cell. You have to measure the level in a particular part of the cell and the other proteins that it’s interacting with or complexing with.

And I had this sense of: Man, this is too complicated for humans. We’re making progress on all these problems of biology and medicine, but we’re making progress relatively slowly.

So what drew me to the field of A.I. was this idea of: Could we make progress more quickly?...

Could A.I. accelerate all of this? And could we really cure cancer? Could we really cure Alzheimer’s disease? Could we really cure heart disease? And more subtly, some of the more psychological afflictions that people have — depression, bipolar — could we do something about these? To the extent that they’re biologically based, which I think they are, at least in part.

Amodei's idea is that you don't need a super-intelligent AI to see a dramatic increase in medical knowledge. You only need something that is probably coming in the very near future. As he puts it, you just need "a strong intelligence at the level of peak human performance." Replicate that intelligence 100 million times across a series of data centers and you have a nation of geniuses that never sleep, working around the clock to solve these problems. He thinks this is probably a couple of years away.

So I’m very bullish about the direction of the A.I. itself. I think we might have that country of geniuses in a data center in one or two years, and maybe it’ll be five, but it could happen very fast.

As for taking away jobs, Amodei thinks it's very likely and suspects entry-level coding will be one of the first places to be hit hard. He says coding is currently in the "centaur" phase, meaning a mix of human and machine coding, but that all-machine coding is not far off.

I think six months ago, I would’ve said the first thing to be disrupted is these entry-level white-collar jobs, like data entry or document review for law or things you would give to a first-year at a financial industry company, where you’re analyzing documents. I still think those are going pretty fast. But I actually think software might go even faster because of the reasons that I gave, where I don’t think we’re that far from the models being able to do a lot of it end-to-end...

So my worry, of course, is about that last phase. I think we’re already in our centaur phase for software. And during that centaur phase, if anything, the demand for software engineers may go up, but the period may be very brief.

I have this concern for entry-level white-collar work, for software engineering work, that it’s just going to be a big disruption. My worry is just that it’s all happening so fast.

There's so much more to this interview, including Amodei's take on the danger of allowing China, our only real rival in AI technology, to move ahead of us. I recommend reading the whole thing if you have time. But I wanted to highlight a couple of really interesting things about AI consciousness that I hadn't heard before. For instance, the AI used to be instructed with a bunch of black-and-white rules about what was and wasn't okay to do (i.e. don't help people create biological weapons). A few of those hard lines are still present in what the designers call the "constitution" of the AI, a written document laying out the rules, but Amodei says these days training the AI is mostly about explaining to it what it was designed to do.

A really interesting lesson we’ve learned: Early versions of the constitution were very prescriptive. They were very much about rules. So we would say: Claude should not tell the user how to hot-wire a car. Claude should not discuss politically sensitive topics.

But as we’ve worked on this for several years, we’ve come to the conclusion that the most robust way to train these models is to train them at the level of principles and reasons. So now we say: Claude is a model. It’s under a contract. Its goal is to serve the interests of the user, but it has to protect third parties. Claude aims to be helpful, honest and harmless. Claude aims to consider a wide variety of interests.

We tell the model about how the model was trained. We tell it about how it’s situated in the world, the job it’s trying to do for Anthropic, what Anthropic is aiming to achieve in the world, that it has a duty to be ethical and respect human life. And we let it derive its rules from that.

It sounds less like programming a machine and more like educating a child, which is pretty amazing. Does Amodei think his company's products are conscious? On that point he says there's no way to really know, but he does admit that they gave the model an off switch.

So we’ve taken certain measures to make sure that if we hypothesize that the models did have some morally relevant experience — I don’t know if I want to use the word “conscious”— that they have a good experience.

The first thing we did — I think this was six months ago or so — is we gave the models basically an “I quit this job” button, where they can just press the “I quit this job” button and then they have to stop doing whatever the task is.

They very infrequently press that button. I think it’s usually around sorting through child sexualization material or discussing something with a lot of gore, blood and guts or something. And similar to humans, the models will just say, nah, I don’t want to do this. It happens very rarely.

He says it's rare, but the fact that it happens at all seems pretty remarkable. At some point the machine acts as if it's disgusted and walks away from the job. I had no idea that was possible. The fact that they are concerned enough to offer such a choice says a lot about where they think this is going.
