Margin Note

The Jobpocalypse Is Not Yet a Finding

[Image: Hand-drawn blueprint of an apparently stable bridge. A magnified inset reveals an amber-marked support element eroded from within.]

Andrew Ng argues that there will be no AI jobpocalypse. The story of mass unemployment, he says, is exaggerated, fuels unnecessary fear, and mainly benefits those who want AI to look more powerful than it currently is inside real operations.

The interesting part is not only the optimistic forecast. The interesting part is the incentive map behind it.

Frontier labs benefit when AI sounds like a historic replacement machine. Vendors benefit when they anchor pricing not against SaaS seats but against salaries: at 1,000 euros per user per year, AI is software; at 10,000 euros per year, it suddenly looks reasonable when the comparison is an employee earning 100,000 euros. And companies benefit when old cost cuts can be narrated as new efficiency. “We overhired” sounds worse than “we are AI-first.”

That is the strongest part of Ng’s argument: the jobpocalypse is not only a forecast. It is also a sales frame.

But that does not settle the other side.

Goldman Sachs estimated in 2023 that the equivalent of 300 million full-time jobs could be exposed to automation from generative AI. The IMF estimates that almost 40 percent of global employment is exposed to AI, rising to around 60 percent in advanced economies. Brookings found that more than 30 percent of U.S. workers could see at least half of their occupational tasks disrupted by GenAI. Challenger reported that in March 2026, AI was cited in a quarter of announced job cuts.

That is not fantasy. Something is moving.

But “jobpocalypse” may be the wrong measuring instrument. At the macro level, the labor market can remain stable while specific career starts, departments, and learning ladders are hollowed out. The unemployment rate can look healthy while junior roles disappear. Net new jobs can emerge while certain training paths collapse. An organization can look more productive because it employs fewer people, and still be dismantling its own capacity for judgment.

That is the uncomfortable middle ground.

The ILO puts it more soberly: many jobs will be transformed rather than replaced. That sounds reassuring, but only halfway. Transformation does not mean everything stays. It means tasks move, entry-level work disappears, quality standards shift, and someone has to decide again where people are still supposed to learn what good work looks like.

The WEF expects 92 million jobs to be displaced by 2030, offset by enough new roles to produce a net gain of 78 million. It also estimates that, for every 100 workers, 59 will need reskilling or upskilling. Eleven of them are unlikely to receive it. That is not an apocalypse either. But for those eleven, it does not matter much if the macro chart ends in green.

Maybe the error is reading “jobs” too coarsely.

AI rarely replaces whole roles in one move. It eats tasks, especially tasks that are describable, repeatable, text-heavy, or code-heavy. But those were often the learning ramps for beginners. Research, first drafts, documentation, analysis, variants. What looks to the company like the automation of small stuff was often the path by which humans acquired judgment.

Then the issue is not a jobpocalypse. It is a pipeline problem.

At the same time, Ng’s warning remains useful: not every layoff carrying the AI label was caused by AI. Some companies are cutting because capital became more expensive, pandemic headcounts were too large, or margins are under pressure. AI then provides the better sentence for investors. Not cause, but narrative.

For management, neither panic nor reassurance is useful.

The better question is: Which tasks are becoming cheaper, which roles are losing their learning logic, and which cuts are merely being painted with AI after the fact?

There is no final conclusion yet.

But there is a useful suspicion: if the jobpocalypse comes, it will not arrive evenly. It will be patchy. Softer at the macro level, harsher at the micro level. Less end of the world, more quiet erosion in the places where organizations used to build juniors, context, and judgment.

Ask yourself or your AI: Which tasks are you automating because they look like small stuff, even though they are actually the learning ramps through which people in your organization build context, quality judgment, and taste?