
AITalk

News and analysis on Artificial Intelligence

The Wrong Apocalypse: Andrea Pignataro responds to Amodei - Part 2

Business · Generative AI · Ethics & Society


With this second installment we resume and conclude the long simulated conversation with Andrea Pignataro, CEO of ION Group, reconstructed backward from the reflections published in his document "The Wrong Apocalypse". It is a narrative device meant to make more immediate the critical analysis Pignataro levels at Dario Amodei's document and at the market's reaction to it.

Companies are training their replacement

So far you have dismantled the market panic. But in your document there is a point where you introduce a paradox that is, if possible, even more disturbing than the thesis you were refuting. What are you referring to?

When a consulting firm uses Claude to draft analyses for clients, it's not just getting a productivity gain. It's teaching Anthropic, through aggregated patterns of use, feedback, refinement, and evaluation—all in compliance with stated privacy policies—what the language games of consulting look like. Not the firm's proprietary data in the strictly legal sense, but something potentially more valuable: the form, the structure, the grammar of consulting work. How analyses are structured. What clients expect. What standards of rigor apply. What failure modes look like.

Over time, across thousands of such firms, the AI platform accumulates a cartography of the consulting language game at a level of resolution that no single firm possesses of itself. The same applies to law firms, accounting firms, financial advisors, insurance brokers, marketing agencies, architectural firms, engineering firms—any intellectual work business that adopts AI tools from a platform company.

You are describing a trap. Every company that rationally adopts these tools collectively accelerates its own irrelevance.

Exactly. Companies adopt AI tools to remain competitive. In doing so, they feed the very system that is learning to make them unnecessary. The individual company's logic is rational in isolation: if you don't adopt AI, your competitor will, they will be faster and cheaper, and you will lose market share. But the collective logic is irrational: every rational adoption of AI by every company accelerates the platform's ability to disintermediate the entire sector.

It's a classic tragedy of the commons, except the common good being destroyed is not a physical resource but the competitive moat of every knowledge-intensive business in the economy. Every customer is simultaneously a revenue source and a training signal. The game structure is identical to an arms race: the individually rational strategy leads to a collectively catastrophic outcome. Every company arms itself with AI. AI platforms learn from the armament. Platforms become capable of doing what companies do. Companies become unnecessary. And by the time companies realize it, they have already trained their replacement.
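The arms-race structure Pignataro describes is the standard dominant-strategy dilemma from game theory. A minimal sketch, with purely illustrative payoff numbers that are my own assumption and not figures from the document:

```python
# Stylized payoff model of the AI-adoption arms race described above.
# All payoff values are illustrative assumptions, not data from the document.

ADOPT, ABSTAIN = "adopt", "abstain"

# Payoff to a firm for (my_choice, rival_choice). Adopting beats abstaining
# against either rival choice, yet mutual adoption pays less than mutual
# abstention once the platform's learning erodes the sector's moat.
PAYOFFS = {
    (ADOPT, ADOPT): 1,      # both adopt: platform learns, moat erodes
    (ADOPT, ABSTAIN): 5,    # I adopt, rival doesn't: I win market share
    (ABSTAIN, ADOPT): 0,    # rival adopts, I don't: I lose market share
    (ABSTAIN, ABSTAIN): 3,  # neither adopts: moat preserved
}

def best_response(rival_choice):
    """Each firm's individually rational reply to the rival's choice."""
    return max((ADOPT, ABSTAIN), key=lambda c: PAYOFFS[(c, rival_choice)])

# Adoption dominates: it is the best reply to either rival choice...
assert best_response(ADOPT) == ADOPT
assert best_response(ABSTAIN) == ADOPT

# ...so both firms adopt, and each ends up worse off than under
# mutual abstention: individually rational, collectively catastrophic.
print(PAYOFFS[(ADOPT, ADOPT)] < PAYOFFS[(ABSTAIN, ABSTAIN)])  # True
```

Whatever the actual numbers, the argument only requires this ordering of outcomes: unilateral adoption beats abstention, but universal adoption pays worse than universal restraint.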

There is a quote from Warren Buffett that you bring in support of this point.

Buffett observed that when you hire someone you have to look for three qualities: integrity, intelligence, and energy, and that if the candidate lacks the first, the other two will kill you. The aphorism can be applied to AI. Integrity—the alignment of interests between the tool and the user—is what the dynamics I described call into question. Every interaction teaches the platform to make the companies using it unnecessary.

Is there a way out of this trap?

Instead of adopting AI on closed platforms, companies can invest in open source models trained on their own data, deployed on their own infrastructure, under their own control. This path requires technical investment, data management, and a strategic orientation that most companies do not currently possess. But it preserves institutional knowledge as a proprietary resource. The window to build this capacity is open now. It will not remain open indefinitely.

The chain reaction the market doesn't see

Do the consequences of this paradox stop at enterprise software, or do you see wider chain effects?

These consequences are systemic and will propagate through every sector, because the sectors being disintermediated are themselves the infrastructure of other sectors. Consulting firms lose revenue as clients turn directly to AI. Law firms lose revenue as corporate legal departments automate contract work. Accounting firms lose revenue as AI handles compliance and audit preparation. Insurance brokers lose revenue as consumers and businesses compare policies directly through AI agents. Financial consulting firms lose revenue as automated portfolio managers take the place of human advisors. Marketing agencies lose revenue as systems like Claude Cowork produce campaign strategies and creative briefs. In each case, the professional service's position as an intermediary between knowledge and the client is eroded by a platform that knows the language game well enough to play it directly.

And then the effects on adjacent industries.

Consulting firms are large clients of commercial real estate, airlines, hotels, corporate catering services, recruiting firms, and training companies. Law firms are large clients of the same, plus legal tech companies, court reporting services, and document management providers. When revenue contracts in professional services, demand contracts across all these adjacent sectors. The collapse in software is the visible part of the iceberg. The collapse in the sectors that serve software companies and professional services is the part still underwater.

The $2 trillion destroyed in software market value is not the extent of the damage. It's a down payment. The same dynamics will play out across every sector. The total economic shift is not $2 trillion; it is two orders of magnitude higher. And it is unhedged, because no asset class is insulated from a systemic reduction in the volume of intellectual-work intermediation.

Describe the domino effect in detail.

The first phase sees AI platforms becoming fluent enough in the industry's language games to directly handle routine tasks for end clients. Professional service firms lose revenue from these core services. Some adapt by moving up the value chain; many cannot. The first wave of closures begins.

In the second phase, as AI platforms accumulate more institutional knowledge from aggregated interactions, they begin to encroach on work that previously required deep contextual understanding: strategic consulting, complex litigation strategy, bespoke financial modeling, organizational change management. Platforms don't completely replace human judgment, but they reduce the number of humans needed for each engagement. Secondary cascade effects hit commercial real estate, business travel, and adjacent sectors.

In the third phase, the reduction in professional services revenue propagates through the financial system. Venture capital and private equity portfolios undergo significant write-downs. The AI that proved powerful enough to destroy existing software also reveals the capital expenditures of the large cloud infrastructure providers as unjustified, because the total volume of economic activity requiring AI infrastructure contracts along with the sectors that AI disintermediates. The investment thesis collapses on both sides simultaneously.

The fourth phase is the one concerning cities and the social fabric. The loss of employment in professional services—legal, consulting, accounting, specialist advice, financial services—affects not only workers but the communities, institutions, and tax bases that depend on them. Cities whose economies depend heavily on professional services—London, New York, Singapore, Zurich, Sydney—experience structural declines in commercial real estate values, local tax revenues, and consumer spending. University enrollments in business, law, and accounting programs collapse, triggering a crisis in higher education that propagates further through the economy. The social structures built around intellectual work—identities, career paths, middle-class livelihoods—begin to dissolve in the way Vonnegut described.

Sincerity as a competitive advantage

There is a passage in your document that struck me: the way you analyze Anthropic's "safety-first" brand not just as an ethical posture, but as a tool for strategic accumulation. Can you explain it?

The safety-first brand builds trust with regulators, enterprise clients, and the public. That trust creates access: access to more sectors, more use cases, more interactions, more language games. That access compounds the learning advantage I described earlier. The company that businesses trust most is the one they give most access to. The one they give most access to is the one that learns their language games fastest. The one that learns their language games fastest is the best positioned to eventually disintermediate those same businesses.

Amodei's essay devotes a fifth of its length to the risks of concentrated economic power, warning of "a single company or a small number of companies" controlling AI production. But he does not turn this lens on the structural position that any sufficiently trusted AI platform occupies. He warns of power concentration in the abstract while describing a company that, by the logic of its own products, is accumulating the institutional knowledge of every sector it serves. Sincerity and structural advantage are not mutually exclusive. In an industry where trust is the scarcest resource, sincerity is the strategic advantage.

Europe as a saving friction

Your document closes with a surprising geographical note. Usually, European regulatory fragmentation is cited as a disadvantage in the AI era. You flip it.

European regulatory fragmentation, usually cited as a handicap in the AI era, could prove to be a brake on the domino effect. The same institutional frictions that slow AI adoption also slow every phase of the transmission mechanism: twenty-seven regulatory regimes, multiple legal traditions, strict labor protections, and language barriers do not prevent disruption, but they limit the speed at which disruption in one layer propagates to the next.

GDPR and the AI Act limit the learning of aggregated patterns that drives cross-sectional knowledge accumulation. Robust employment protections slow the translation of revenue loss into head-count reduction. Cultural resistance to rapid restructuring slows the translation of head-count reduction into community collapse. None of this is immunity. It is friction, and friction, in a domino effect, is the difference between a managed transition and a structural break.

The final question: what should we really fear?

The market is panicking about the wrong thing. What deserves the panic is the structural incentive for companies to adopt tools whose competitive logic requires the platform behind them to learn, cross-sectionally and longitudinally, the entire grammar of every sector it serves. This is not a repricing event. It is a civilizational transition. The right question is: can AI enter the language games that constitute economic life? And the next question is: what happens when the institutions that invite AI into their language games discover they have taught it to play without them?

Vonnegut understood this. His Player Piano was not a story of machines smarter than humans. It was a story of a society that had forgotten what humans were for. This is the question we should be asking ourselves: not whether AI can do what software does, or what consultants do, or what lawyers do, but what happens to the institutional fabric of civilization when the entities that hold it together are no longer necessary. And that question will be answered, slowly and painfully, over the course of the next decade, through the cumulative choices of millions of companies that are, right now, making the individually rational and collectively catastrophic decision to train their replacement.

Vonnegut's dystopia was the result of a society that stopped paying attention. The domino effect I described is one possible trajectory. Trajectories can be changed.