Dario Amodei and Humanity's Technological Adolescence - Part 2

With this second installment we resume and conclude the long simulated conversation with Dario Amodei, CEO of Anthropic, reconstructed from the reflections published in his latest essay, "The Adolescence of Technology". It is a narrative device meant to make more immediate the urgent message Amodei wants to deliver: humanity is entering a critical passage that could be defined in the next two years.
Who concerns you the most, in order of severity?
The Chinese Communist Party. China is second only to the United States in AI capabilities and is the country most likely to overtake the United States in those capabilities. Their government is currently autocratic and operates a high-tech surveillance state. It has already deployed AI-based surveillance, including in the repression of the Uyghurs, and is believed to employ algorithmic propaganda via TikTok in addition to its many other international propaganda efforts. They clearly have the most direct path to the AI-enabled totalitarian nightmare I outlined. It could even be the default outcome within China, as well as within other autocratic states to which the CCP exports surveillance technology. I have written often about the threat of the CCP taking the lead in AI and about the existential imperative to prevent them from doing so. This is why. To be clear, it is the Chinese people themselves who are most likely to suffer from the CCP's AI-enabled repression, and they have no voice in the actions of their government. I deeply admire and respect the Chinese people and support the many brave dissidents within China and their struggle for freedom.
Then there are the competitive democracies in AI. As I have written, democracies have a legitimate interest in certain AI-enhanced military and geopolitical tools, because democratic governments offer the best chance of countering the use of these tools by autocracies. I am generally in favor of arming democracies with the tools necessary to defeat autocracies in the AI era; I simply don't think there is any other way. But we cannot ignore the potential for abuse of these technologies by democratic governments themselves. Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inward against their own population, but because AI tools require so few people to operate, there is the potential that they could bypass these safeguards and the norms that support them.
Then non-democratic countries with large datacenters, and finally the AI companies themselves. It is a bit embarrassing to say this as the CEO of an AI company, but I think the next level of risk is actually the AI companies themselves. AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. The main thing they lack is the legitimacy and infrastructure of a state. I think the governance of AI companies deserves a lot of scrutiny.
How do we defend ourselves against these multiform risks?
We should absolutely not sell chips, chipmaking tools, or datacenters to the CCP. Chips and chipmaking tools are the single biggest bottleneck for powerful AI, and blocking them is a simple but extremely effective measure, perhaps the single most important action we can take. It makes no sense to sell the CCP the tools with which to build an AI totalitarian state and possibly conquer us militarily. China is several years behind the United States in its ability to produce frontier chips in quantity, and the critical period for building a "country of geniuses in a datacenter" most likely falls within those next few years.
Second, it makes sense to use AI to empower democracies to resist autocracies. This is why Anthropic considers it important to provide AI to the intelligence and defense communities in the United States and its democratic allies. Defending democracies under attack, such as Ukraine and Taiwan, seems a particularly high priority, as does empowering democracies to use their intelligence services to destabilize and degrade autocracies from within.
Third, we must draw a hard line against AI abuses within democracies. The formulation I have developed is that we should use AI for national defense in all ways except those that would make us more like our autocratic adversaries. Using AI for domestic mass surveillance or mass propaganda is a bright line: such uses are completely illegitimate. Fully autonomous weapons and AI for strategic decision-making are harder lines to draw, since they have legitimate uses in the defense of democracy while being prone to abuse. Here I think what is warranted is extreme caution and control, combined with guardrails to prevent abuse.
Fourth, after drawing a hard line against AI abuses in democracies, we should use that precedent to create an international taboo against the worst abuses of powerful AI. The world must understand the dark potential of powerful AI in the hands of autocrats, and recognize that certain uses of AI amount to an attempt to permanently steal their freedom and impose a totalitarian state from which they cannot escape. I would even argue that in some cases, large-scale surveillance with powerful AI, mass propaganda with powerful AI, and certain types of offensive uses of fully autonomous weapons should be considered crimes against humanity.
Fourth risk: economic disruption. This is the central theme of Vonnegut's Player Piano: when machines do everything, what is left for humans?
Exactly that resonance. In 2025, I publicly warned that AI could displace half of all entry-level white-collar jobs in the next one to five years, even while drastically accelerating economic growth and scientific progress. This started a public debate. Many CEOs, technologists, and economists agreed, but others assumed I was falling prey to a "lump of labor" fallacy and didn't understand how labor markets work, and some didn't see the one-to-five-year timeframe and thought I was claiming that AI is displacing jobs right now, which I agree it probably isn't doing. It is worth examining in detail why I am concerned about job displacement.
The pace of progress in AI is much faster than in previous technological revolutions. In the last two years, AI models have gone from being barely able to complete a single line of code to writing all or almost all of the code for some people, including engineers at Anthropic. Even legendary programmers are increasingly describing themselves as "behind." Soon models could do a software engineer's entire job end to end. It is difficult for people to adapt to this pace of change, both to changes in how a given job works and to the need to move to new jobs. If anything, the pace may continue to accelerate as AI coding models increasingly speed up the work of AI development itself. To be clear, speed itself doesn't mean that labor markets and employment won't eventually recover; it just means that the short-term transition will be unusually painful.
Cognitive breadth is the second factor: AI will be capable of a very wide range of human cognitive abilities, perhaps all of them. This is very different from previous technologies like mechanized agriculture, transportation, or even computers. It will make it harder for people to transition from jobs that are replaced to similar jobs for which they would be suited. The general intellectual skills required for entry-level jobs in, say, finance, consulting, and law are quite similar, even if the specific knowledge is quite different. A technology that disrupted only one of the three would allow employees to move to the two nearby substitutes, or those still in school to change course. But disrupting all three simultaneously, along with many other similar jobs, could be much harder to adapt to. Moreover, it's not just that most existing jobs will be disrupted. That has happened before: agriculture was once a huge share of employment, but farmers could move to the relatively similar job of operating factory machinery, even if that job had not been common before. By contrast, AI is increasingly matching the general cognitive profile of humans, which means it will also be good at the new jobs that would ordinarily be created in response to old ones being automated. Another way to put it: AI is not a substitute for specific human jobs but a general substitute for human labor.
Third factor: selection based on cognitive abilities. Across a wide range of tasks, AI seems to be advancing from the bottom of the skill scale to the top. For example, in coding, our models have proceeded from the level of "mediocre programmer" to "strong programmer" to "very strong programmer." We are now starting to see the same progression in white-collar work in general. We are therefore at risk of a situation where, instead of affecting people with specific skills or in specific professions who can adapt by retraining, AI is affecting people with certain intrinsic cognitive properties, namely lower intellectual ability, which is harder to change. It is not clear where these people will go or what they will do, and I am concerned they could form a jobless or very low-wage "underclass." To be clear, things somewhat similar to this have happened before; for example, computers and the internet are believed by some economists to represent "skill-biased technological change." But this skill biasing was not as extreme as what I expect to see with AI, and it is believed to have contributed to an increase in wage inequality, so it is not exactly a reassuring precedent.
Fourth: the ability to fill gaps. The way human jobs often adapt in the face of new technologies is that a job has many aspects, and the new technology, even when it seems to replace humans directly, often leaves gaps. If someone invents a machine to make widgets, humans might still have to load raw material into it. Even if this requires only one percent of the effort of making widgets manually, human workers can simply make a hundred times more widgets. But AI, in addition to being a rapidly advancing technology, is also a rapidly adapting one. With each model release, AI companies carefully measure what the model is and is not good at, and customers provide the same information after launch. Weaknesses can be addressed by collecting tasks that embody the current gap and training on them for the next model. At the beginning of generative AI, users noticed that AI systems had certain weaknesses, such as image models generating hands with the wrong number of fingers, and many assumed these weaknesses were intrinsic to the technology. If they had been, the disruption of jobs would have been limited. But practically every one of these weaknesses has been addressed quickly, often within a few months.
What are the possible defenses against this unprecedented disruption?
I have several suggestions, some of which Anthropic is already acting on. The first is simply to get accurate data on what is happening with job displacement in real time. When economic change happens very quickly, it is difficult to get reliable data on what is happening, and without reliable data, it is difficult to design effective policies. For example, government statistics currently lack granular, high-frequency data on AI adoption across companies and industries. For the past year, Anthropic has been operating and publicly releasing an Economic Index that shows usage of our models in near real-time, broken down by industry, task, location, and even things like whether a task is being automated or conducted collaboratively. We also have an Economic Advisory Council to help us interpret this data and see what's coming.
Second, AI companies have a choice in how they work with businesses. The very inefficiency of traditional businesses means that their AI rollout can be strongly shaped by initial choices, and there is room to choose a better path. Businesses often have a choice between "cost savings," doing the same thing with fewer people, and "innovation," doing more with the same number of people. The market will inevitably produce both in the end, and any competitive AI company will have to serve some of both, but there may be room to nudge companies toward innovation when possible, and it may buy us some time. Anthropic is actively thinking about this.
Third, companies should think about how to take care of their employees. In the short term, being creative about ways to reassign employees within companies could be a promising way to avoid the need for layoffs. In the long term, in a world with enormous total wealth, in which many companies increase greatly in value due to increased productivity and capital concentration, it may be feasible to pay human employees even long after they are no longer providing economic value in the traditional sense. Anthropic is currently considering a range of possible paths for our employees that we will share in the near future.
Fourth, wealthy individuals have an obligation to help solve this problem. It is sad to me that many wealthy individuals, especially in the tech industry, have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless. Both private philanthropy like the Gates Foundation and public programs like PEPFAR have saved tens of millions of lives in the developing world and have helped create economic opportunity in the developed world. All of Anthropic's co-founders have committed to donating eighty percent of our wealth, and Anthropic's staff have individually committed to donating company shares worth billions at current prices, donations that the company has committed to matching.
Fifth, while all the above private actions can be useful, ultimately a macroeconomic problem of this scale will require government intervention. The natural policy response to an enormous economic pie coupled with high inequality, due to a lack of jobs or low-paying jobs for many, is progressive taxation. The tax could be general, or it could be targeted against AI companies in particular. Obviously, tax design is complicated, and there are many ways it can go wrong. I do not support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on moral grounds, but I can also make a pragmatic appeal to the world's billionaires that it is in their interest to support a good version of it: if they don't support a good version, they will inevitably get a bad version designed by a mob.
But you're not just talking about unemployment. There is also the problem of economic concentration of power, which is separate but related.
Yes, it is a distinct risk. Separate from the problem of job displacement or economic inequality per se is the problem of economic concentration of power. Another type of disempowerment can occur if there is such an enormous concentration of wealth that a small group of people effectively controls government policy with their influence, and ordinary citizens have no influence because they lack economic leverage. Democracy is ultimately sustained by the idea that the population as a whole is necessary for the operation of the economy. If that economic leverage disappears, then the implicit social contract of democracy could stop working.
To be clear, I am not against people making a lot of money. Many have written that, under normal conditions, the prospect of wealth incentivizes economic growth, and I agree. I share concerns about impeding innovation by killing the goose that lays the golden eggs. But in a scenario where GDP growth is ten to twenty percent a year and AI is rapidly taking over the economy, while single individuals hold appreciable fractions of GDP, innovation is not the thing to worry about. The thing to worry about is a level of wealth concentration that will break society.
The most famous example of extreme wealth concentration in US history is the Gilded Age, and the wealthiest industrialist of the Gilded Age was John D. Rockefeller. Rockefeller's wealth amounted to about two percent of US GDP at the time. A similar fraction today would lead to a fortune of six hundred billion dollars, and the world's richest person today, Elon Musk, already exceeds that figure at about seven hundred billion. We are therefore already at historically unprecedented levels of wealth concentration, even before most of the economic impact of AI. I don't think it's too far-fetched, if we get a "country of geniuses," to imagine AI companies, semiconductor companies, and perhaps downstream application companies generating about three trillion in revenue a year, valued at about thirty trillion, leading to personal fortunes in the trillions. In that world, the debates we have today about tax policy simply will not apply because we will be in a fundamentally different situation.
Related to this, the coupling of this economic concentration of wealth with the political system already worries me. AI datacenters already represent a substantial fraction of US economic growth, and are therefore strongly tying together the financial interests of large tech companies, which are increasingly focusing on AI or AI infrastructure, and the political interests of the government in a way that can produce perverse incentives. We already see this through the reluctance of tech companies to criticize the US government, and the government's support for extremely anti-regulatory policies on AI.
What can be done about this?
First, and most obviously, companies should simply choose not to be part of it. Anthropic has always strived to engage on policy without being a political actor, and to keep our views consistent whatever the administration. We have spoken in favor of sensible AI regulation and export controls that are in the public interest, even when these are at odds with government policy. Many people have told me that we should stop doing this, that it could lead to unfavorable treatment, but in the year we have been doing so, Anthropic's valuation has increased more than sixfold, an almost unprecedented jump at our commercial scale.
Second, the AI industry needs a healthier relationship with the government, based on substantive policy engagement rather than political alignment. Our choice to engage on policy substance rather than politics is sometimes read as a tactical error or a failure to "read the room" rather than a principled decision, and that framing worries me. In a healthy democracy, companies should be able to stand up for good policies on their merits.
Third, the macroeconomic interventions I described earlier in this section, as well as a revival of private philanthropy, can help balance the economic scales, addressing both the job displacement problem and the economic power concentration problem together. We should look to our country's history here: even in the Gilded Age, industrialists like Rockefeller and Carnegie felt a strong obligation to society at large, a feeling that society had contributed enormously to their success and that they needed to give back. That spirit seems to be increasingly missing today, and I think it's a big part of the way out of this economic dilemma. Those at the forefront of the AI economic boom should be willing to give away both their wealth and their power.
The fifth and final risk concerns indirect effects. The unknown unknowns. What worries you here?
This is an all-encompassing category for so-called "unknown unknowns": things that are impossible to predict, particularly things that could go wrong as an indirect result of positive progress in AI and the resulting acceleration of science and technology in general. Suppose we address all the risks described so far and begin to reap the benefits of AI. We will likely get a "century of scientific and economic progress compressed into a decade," and this will be enormously positive for the world, but we will then have to deal with the problems that arise from this rapid rate of progress, and those problems could arrive fast.
By the nature of unknown unknowns, it is impossible to make an exhaustive list, but I list three as illustrative examples. Rapid advances in biology: if we get a century of medical progress in a few years, it is possible that we will greatly increase human lifespan, and there is the possibility of gaining even radical capabilities like the ability to increase human intelligence or radically modify human biology. These would be major changes in what is possible, occurring very quickly. They could be positive if done responsibly, which is my hope as described in Machines of Loving Grace, but there is always a risk that they go wrong—for example, if efforts to make humans smarter also make them more unstable or power-seeking.
AI changes human life in an unhealthy way: a world with billions of intelligences much smarter than humans in everything will be a very strange world to live in. Even if AI does not actively aim to attack humans, and is not explicitly used for oppression or control by states, there are many things that could go wrong outside of that, through normal commercial incentives and nominally consensual transactions. We see early hints of this in concerns about AI psychosis, AI leading people to suicide, and concerns about romantic relationships with AI. As an example, could powerful AIs invent some new religion and convert millions of people to it? Could most people end up "addicted" in some way to AI interactions?
Human purpose: this is linked to the previous point, but is not so much about specific human interactions with AI systems as how human life changes in general in a world with powerful AI. Will humans be able to find purpose and meaning in such a world? I think this is a matter of attitude: as I said in Machines of Loving Grace, I think human purpose does not depend on being the best in the world at something, and humans can find purpose even for very long periods of time through stories and projects they love. We simply have to break the link between the generation of economic value and self-esteem and meaning. But this is a transition that society must make, and there is always the risk that we don't manage it well.
My hope, with all these potential problems, is that in a world with a powerful AI that we trust and that won't kill us, that is not the tool of an oppressive government and that really works for us, we can use the AI itself to anticipate and prevent these problems. But this is not guaranteed: like all other risks, it is something we must manage with caution.
At the end of your essay, despite this detailed mapping of risks and the tensions between them, you write that you believe in humanity's ability to prevail. What is this optimism based on? Isn't it naive?
The tensions are real and we must acknowledge them. Taking time to build AI systems that do not autonomously threaten humanity is in genuine tension with the need for democratic nations to stay ahead of autocracies and not be subjugated by them. But in turn, the same AI-enhanced tools needed to fight autocracies can, if taken too far, be turned inward to create tyranny in our own countries. AI-enhanced terrorism could kill millions through the misuse of biology, but an overreaction to this risk could lead us down the path of an autocratic surveillance state. AI's effects on jobs and economic concentration, in addition to being serious problems in themselves, could force us to face other problems in an environment of public anger and perhaps even civil unrest, rather than being able to appeal to the better resources of our nature. Above all, the sheer number of risks, including unknown ones, and the need to address them all together, creates an intimidating gauntlet that humanity must run.
And stopping or slowing development is not an option.
Exactly. The last few years should make it clear that the idea of stopping or even substantially slowing down technology is fundamentally untenable. The formula for building powerful AI systems is incredibly simple, so much so that it can almost be said to emerge spontaneously from the right combination of data and raw compute. Its creation was probably inevitable the instant humanity invented the transistor, or perhaps even earlier when we first learned to control fire. If one company doesn't build it, others will do so almost as quickly. If all companies in democratic countries stopped or slowed development, by mutual agreement or regulatory decree, then authoritarian countries would simply continue. Given the incredible economic and military value of the technology, along with the lack of any significant enforcement mechanism, I don't see how we could possibly convince them to stop.
But you propose a specific path. Which one?
I see a path toward light moderation in AI development compatible with a realistic view of geopolitics. That path involves slowing the march of autocracies toward powerful AI for a few years by denying them the resources needed to build it—namely chips and semiconductor manufacturing equipment. This in turn gives democratic countries a buffer they can "spend" to build powerful AI more carefully, with more attention to its risks, while still proceeding fast enough to comfortably beat the autocracies. The race between AI companies within democracies can then be managed under the umbrella of a common legal framework, through a mix of industry standards and regulation.
Anthropic has argued very strongly for this path, pushing for chip export controls and judicious AI regulation, but even these seemingly common-sense proposals have been largely rejected by policymakers in the United States, which is the country where it is most important to have them. There is so much money to be made with AI, literally trillions of dollars a year, that even the simplest measures struggle to overcome the political economy inherent in AI. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any constraints on it.
And so you return to Sagan, to the test that every civilization must face.
I can imagine, as Sagan did in Contact, this same story repeating itself on thousands of worlds. A species acquires sentience, learns to use tools, begins the exponential rise of technology, faces the crises of industrialization and nuclear weapons, and if it survives those, confronts the toughest and final challenge when it learns to shape sand into machines that think. Whether we survive that test and go on to build the beautiful society described in Machines of Loving Grace, or succumb to slavery and destruction, will depend on our character and our determination as a species, on our spirit and our soul.
Despite the many obstacles, I believe humanity has within it the strength to pass this test. I am encouraged by the thousands of researchers who have dedicated their careers to helping us understand and guide AI models, to shaping the character and constitution of these models. I think there is now a good chance that those efforts will bear fruit in a timely manner. I am encouraged that at least some companies have stated they will pay significant commercial costs to block their models from contributing to the threat of bioterrorism. I am encouraged that some brave people have resisted the prevailing political winds and passed legislation that plants the first seeds of sensible guardrails on AI systems. I am encouraged that the public understands that AI carries risks and wants those risks to be addressed. I am encouraged by the indomitable spirit of freedom in the world and the determination to resist tyranny wherever it occurs.
But a collective awakening is needed.
We must intensify our efforts if we are to succeed. The first step is for those closest to the technology to simply tell the truth about the situation humanity is in, which I have always tried to do. I am doing this more explicitly and with greater urgency in this essay. The next step will be to convince the thinkers, policymakers, companies, and citizens of the world of the imminence and overriding importance of this issue, that it is worth spending thought and political capital on it compared to the thousands of other issues that dominate the news every day. Then there will be a time for courage, for enough people to go against the grain and stand on principle, even in the face of threats to their economic interests and personal safety.
The years ahead of us will be incredibly difficult; they will ask more than we think we can give. But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win, that when put in the darkest circumstances, humanity has a way of gathering, seemingly at the last minute, the strength and wisdom necessary to prevail. We have no time to lose.