Werner Glinka

AI LABOR CULTURE

The Wrong Argument

Apr 3, 2026

This is the fourth in a series that began with “Who Buys What We Build?” and continued with “I’ve Seen This Before” and “The Corporate Benevolence Fantasy.” If you’re new here, the first essay is where the argument starts.

On April 4, 2026, the New York Times published a piece about economists finally taking AI displacement seriously. [1] The most striking quote came from Molly Kinder, a senior fellow at the Brookings Institution: “I really don’t know anything a college student can bring to my team that Claude can’t do.” [1]

A Brookings senior fellow said she no longer needs entry-level researchers. That’s not a prediction. That’s a present-tense observation from someone whose job is to study the labor market — and who just eliminated a piece of it.

But the article’s most revealing detail came at the very end. Anton Korinek, the University of Virginia economist most willing to consider extreme AI scenarios, is leaving academia at the end of the semester to join Anthropic — the company that builds Claude. [1] The economist most engaged with the risk is leaving the institution that studies it to join the institution that creates it. Draw your own conclusions about what he sees coming and where he thinks the real work will be done.

I’ve been writing about AI displacement since early this year. Each essay advanced the argument. Each found an audience larger than I expected, which tells me people recognize what’s happening even when the institutions that are supposed to speak for them won’t say it out loud.

This essay is different. I’m not extending the argument. I’m stepping back to question the frame itself — including parts of my own earlier framing.

The Battle at the Gates

The public debate has two sides, and both are performing.

The tech evangelists — Altman, Khosla, Musk — predict massive job loss in terms designed to excite investors. [1] When the CEO of a company positioning for what could be one of the largest IPOs in history tells the world that AI will replace half the workforce, that’s not a neutral observation. It’s a sales pitch.

The economists, until very recently, have responded with what the Times called “skepticism bordering on dismissiveness.” [1] Daniel Rock at the University of Pennsylvania told the Times: “I don’t think A.I. has hit the labor market yet... but I think it’s coming.” [1] They’re debating whether the wave is approaching while the water is around their ankles.

Between these two camps, the policy conversation has defaulted to a single answer: retraining. Teach people to work alongside AI. Upskill the workforce. Adapt.

Left, right, corporate, academic. A multi-year study highlighted in Foreign Affairs found that adults in both the U.S. and Canada ranked worker retraining as their top policy choice, cutting across party lines. [2] It sounds reasonable. And by the available evidence, it has never worked at scale.

Why Not Retraining

I covered the Ruhr Valley retraining programs in “I’ve Seen This Before.” The U.S. record is no better.

Brookings published an analysis in May 2025 that should have ended this conversation. Researcher Julian Jacobs reviewed the entire history of U.S. public retraining programs and concluded that the evidence “suggests we should be skeptical of retraining programs as an effective policy response to support labor force adjustment to AI.” [3] The National JTPA Study — a genuine randomized controlled trial — showed no statistically significant improvement in employment rates or earnings. [3] The WIA program that replaced it: no positive impact. [3] And in the Trade Adjustment Assistance program, designed specifically for displacement, participants had worse outcomes than non-participants. [4]

And Brookings identified a problem specific to AI: retraining programs frequently move workers from one automation-susceptible occupation to another. [5] You retrain someone as a data analyst, and eighteen months later, that role is being automated. The program organizers themselves admit they have a foggy understanding of AI’s future economic impact. [5] You can’t aim at a target that keeps moving.

But the most devastating data point came from a corporate earnings call. In September 2025, Accenture CEO Julie Sweet told analysts that the company was “exiting on a compressed timeline people where reskilling, based on our experience, is not a viable path for the skills we need.” [9]

The CEO of the world’s largest consulting firm — the company that sells workforce transformation services to every other company — admitted on a public earnings call that reskilling doesn’t work fast enough for their own people. Accenture trained over 550,000 employees on generative AI and still couldn’t retrain the 11,000 it cut that quarter. [9] If the company that sells retraining to everyone else can’t make it work on its own workforce, the rest of the policy conversation starts to look like theater.

The Fog

A measurement problem compounds the retraining problem: we can’t see displacement clearly enough to respond to it, in part because companies are misrepresenting its causes.

Deutsche Bank analysts coined the term “AI redundancy washing” at Davos in January 2026. [10] A Resume.org survey of 1,000 U.S. hiring managers put a number on it: 59 percent admitted they emphasize AI’s role in explaining layoffs because it resonates better with stakeholders than citing financial pressure. Only 9 percent said AI has fully replaced certain roles at their company. [10]

Amazon provides the sharpest example. CEO Andy Jassy wrote in June 2025 that AI would reduce the corporate workforce. [10] When Amazon actually cut 14,000 positions in October, Jassy said on the earnings call that the cuts were “not even really AI-driven.” [10] The same week, senior VP Beth Galetti’s memo to the affected employees cited AI as the transformative force. [10] Three contradictory statements in a single quarter. That’s not ambiguity. That’s a company that doesn’t know — or doesn’t care — whether its own narrative is coherent.

Forrester’s 2026 Predictions report found that 55 percent of employers who cited AI in layoff decisions now regret it, primarily because the technology didn’t deliver on what executives had promised themselves. [11] Gartner predicts that by 2027, half of the companies that cut customer service roles citing AI will rehire for similar functions — but under different job titles, often offshore, at lower wages. [11] [12]

The genuine displacement is real — at Block, Atlassian, Duolingo and Salesforce. [10] But it’s tangled with fabricated displacement used for investor optics, and nobody can separate the signal from the noise. Economists can’t measure what they can’t see clearly. Policymakers can’t design interventions for a crisis whose contours no one can define. And the retraining programs being built in response are targeting a problem they can’t accurately diagnose, using methods that have never worked, on timelines that the technology is already outrunning.

But all of this — the retraining debate, the AI-washing, the economist-versus-evangelist arguments — is the battle at the gates. It’s the visible conflict. Everyone is watching it. Everyone has an opinion.

And while that argument plays out, the actual displacement is already inside the walls.

This fog distorts the visible layer. But the more consequential problem is a second layer of displacement, one that doesn’t need distortion to remain invisible.

The Trojan Horse

When people talk about AI replacing workers, they picture a CEO standing at a podium announcing layoffs. That’s the image. It’s dramatic, it’s trackable, and it’s what the policy conversation is built around.

The most consequential displacement isn’t happening through layoff announcements. It’s happening through software updates.

What’s changing is not just capability, but the role of software itself. Software is shifting from a tool layer that augments labor to a layer that directly substitutes for it.

In January 2026, Bessemer Venture Partners made an observation that deserves more attention than it’s received: “Vertical AI isn’t competing for IT budgets; it’s competing for labor budgets.” [13] Unlike horizontal software that helps people work, vertical AI platforms — those built for specific industries like healthcare, legal, construction, insurance and field services — are designed to perform the work itself.

The economics are explicit. Vertical SaaS companies now trade at 12x to 15x revenue — nearly matching AI pure-play valuations of 14.5x — while generic horizontal SaaS sits at 2x to 5x. [14] That spread tells you how the market sees this: platforms that absorb what people do are valued at roughly triple the multiple of platforms that assist people in what they do. Investors are pricing replacement at a premium over assistance.

And the business models are being redesigned to make this explicit. Vertical AI companies are moving from per-seat pricing — you pay for each employee who uses the software — to outcome-based pricing: you pay per document reviewed, per claim processed, per dispatch scheduled. [13] When you price software by the outcome instead of by the seat, you’re no longer selling a tool. You’re competing directly with the salary of the person who used to produce that outcome.
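A hypothetical comparison makes the budget shift visible. None of these figures come from the cited reports; every number below is an illustrative assumption:

```python
# Hypothetical numbers contrasting per-seat pricing (competes with the IT
# budget) against outcome pricing (competes with the labor budget).
# Every figure is an assumption chosen for illustration only.

claims_per_adjuster_per_year = 2500   # assumed routine-claim throughput
loaded_salary = 85_000                # assumed salary + benefits + overhead
labor_cost_per_claim = loaded_salary / claims_per_adjuster_per_year  # $34

per_seat_license = 3_000              # assumed per-adjuster annual license
seat_vs_labor = per_seat_license / loaded_salary      # share of labor cost

outcome_price = 12.0                  # assumed vendor charge per claim processed

print(f"labor cost per routine claim: ${labor_cost_per_claim:.0f}")
print(f"per-seat license vs. labor budget: {seat_vs_labor:.1%}")
print(f"outcome price vs. labor cost per claim: {outcome_price / labor_cost_per_claim:.0%}")
```

Under these assumptions, a per-seat license runs at a few percent of an adjuster’s loaded cost: a tool purchase. A per-claim price at roughly a third of the labor cost of that claim is a direct bid against the salary, which is the Bessemer point in one line.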

Let me make this concrete.

What It Looks Like From the Inside

Consider a mid-sized property-casualty insurance carrier — 2,000 employees, regional, the kind of company that nobody writes about. They’ve used Guidewire for claims management for years. It’s their core platform. Their adjusters work inside it every day.

In Q1 2026, Guidewire pushes a platform update. The new module handles first notice of loss for routine auto claims — the fender benders, the parking lot dings, anything under a certain threshold. It reads the filing, cross-references the policy, pulls the damage estimate from photo analysis, and routes it for payment. An adjuster still reviews the output, but the review takes four minutes instead of the thirty-five it took to build the file from scratch.

Nothing dramatic happens. No announcement. No restructuring memo. The team handles more volume with the same headcount. The claims manager notices at the quarterly review that throughput is up 40 percent on routine auto. Good news.

In Q3, an adjuster retires. The claims manager reviews the numbers and the volume the team is handling, and decides not to backfill. The work is getting done. Budget pressure is real. Hiring takes three months anyway. The position stays open, then quietly disappears.

By Q1 2027, two more adjusters have left — one to a competitor, one to take care of an aging parent. Neither position gets filled. The team is down three people. Nobody was laid off. Nobody’s job was eliminated. The platform just kept getting better at the routine work, and the team shrank around it.
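The shrinkage arithmetic can be sketched with made-up numbers. The team size, workday length, and workload mix below are all my assumptions, not figures from the scenario:

```python
# Back-of-envelope model of the adjuster scenario above.
# Every number is an illustrative assumption, not carrier or Guidewire data.

TEAM = 10                   # adjusters before the platform update (assumed)
MINUTES_PER_DAY = 6.5 * 60  # productive minutes per adjuster per day (assumed)

def daily_capacity(team_size, minutes_per_claim):
    """Claims a team can clear per day at a given average handling time."""
    return team_size * MINUTES_PER_DAY / minutes_per_claim

# Before the update: every routine file is built from scratch (35 min).
before_capacity = daily_capacity(TEAM, 35)

# After: the platform builds the file and the adjuster reviews it (4 min),
# but only for routine auto claims; assume those are ~30% of the workload.
routine_share = 0.30
blended_after = routine_share * 4 + (1 - routine_share) * 35  # ~25.7 min/claim

print(f"before: {before_capacity:.0f} claims/day")
print(f"after, same team: {daily_capacity(TEAM, blended_after):.0f} claims/day")
print(f"after, team of 7: {daily_capacity(7, blended_after):.0f} claims/day")
```

With routine work at roughly a third of the load, the same team clears about a third more volume, close to the 40 percent in the scenario, and a team of seven roughly matches the old team of ten. Three departures, zero layoffs, no lost throughput.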

Now multiply this by every carrier using Guidewire, by every field services company using ServiceTitan, by every construction firm using Procore, by every legal department using a contract review platform that processes routine NDAs without human input. None of these show up in the Challenger report. None trigger WARN Act notices. None generate headlines. The insurance trade press might run a piece about Guidewire’s new AI capabilities, framed as innovation rather than displacement. The adjusters who left found other jobs — or didn’t — and nobody connected their departure to a platform update.

This is the mechanism. Not a dramatic replacement. A slow absorption, one function at a time, one update at a time, one unbackfilled position at a time.

The Scale

The capital is already allocated. Crunchbase reported that in 2026, “it will be very difficult for a SaaS company without native AI/agentic capabilities to find VC dollars at any stage.” [16] Vertical AI startups are growing at 400 percent and competing at roughly 80 percent of traditional SaaS pricing. [17] Gartner expects that by 2026, 80 percent of enterprises will have deployed generative AI-enabled applications. [15] BetterCloud put it plainly: “The SaaS industry spent more than two decades building tools that enabled human work. Now in 2026, the focus is shifting to software that autonomously performs the work.” [15]

These platforms are embedding themselves into insurance, construction, field services, healthcare billing, property management, legal services, and logistics — industries employing tens of millions of people that rarely feature in the AI displacement conversation, because the disruption doesn’t look like AI. It looks like a software upgrade.

You can’t file for unemployment because your job shrank. You can’t retrain for a role that still technically exists — it’s just not a full-time job anymore. There is no layoff announcement for a position that was never eliminated, just never refilled.

The Timing Problem

A fair objection: if vertical SaaS is silently absorbing labor at this scale, it will eventually show up in industry employment statistics. BLS data lags, but it catches up. Productivity data will reflect it. GDP per worker will move. The displacement I’m calling invisible won’t stay invisible forever.

That’s true. And it’s exactly the problem.

The Ruhr Valley’s supply chain collapse eventually showed up in the statistics, too. By the time it did, the damage was structural. The small firms were gone. The workers had been unemployed long enough that their skills had atrophied, and their savings were spent. The measurement systems caught up just in time to document what had already happened — not in time for anyone to intervene.

The question isn’t whether the data will eventually reflect this displacement. The question is whether it will reflect it in time for institutions to respond. And every indicator says no. The vertical SaaS adoption cycle is measured in quarters. Enterprise procurement decisions are made annually. Platform updates ship continuously. The BLS publishes employment data monthly, but industry-specific breakdowns lag by a year or more. By the time the data shows a pattern, the pattern has been running for two or three years.

This is a timing problem. But a timing problem where the delay is long enough for millions of jobs to shrink before any institution notices is not a minor technical caveat. It is the problem.

The Two Layers

So here is where we actually stand.

The first layer of displacement is visible. It’s the headline layoffs — Salesforce, Block, Amazon, Meta. It’s messy, contested, and partially fabricated. Nearly 60 percent of it is narrative cover for financial decisions that would have happened regardless of AI. [10] The displacement within this layer is genuine, but it’s tangled with AI-washing to the point that nobody can accurately measure its scope. This is the layer that economists study, journalists cover, policymakers react to, and retraining programs are designed to address.

The second layer is structurally invisible. It’s the embedding of AI into industry-specific platforms that companies already use, already trust, and already depend on. It doesn’t trigger layoff announcements. It doesn’t show up in BLS data. It doesn’t activate WARN Act notices. It happens inside procurement decisions, vendor renewals, and budget line items — a thousand small optimizations at a thousand companies, none of them dramatic enough to register in any tracking system we have. And it’s being funded at scale by investors who are explicitly pricing these platforms based on their ability to capture labor value. [13] [14]

The retraining conversation is happening entirely within the first layer. And it fails on both counts. It fails against the genuine displacement in the first layer because the evidence shows retraining has never worked at scale [3], because the target keeps moving [5], and because even Accenture can’t make it work for its own people. [9] And it’s completely irrelevant to the second layer, because you can’t retrain someone for a job that wasn’t eliminated — it was absorbed.

I’ve Seen This Before, Too

I keep coming back to Gelsenkirchen.

When the coal crisis hit the Ruhr Valley in 1957, the visible crisis was the mine closures. That’s what got the political attention. That’s what the co-determination laws were designed to address. And the programs worked — for the miners at the major companies. They got early retirement, retraining and transfers to the metal industries. Government officials could truthfully say that none of the miners at the big coal companies became unemployed.

But the co-determination laws only covered coal and steel. Not the suppliers, the service businesses, the small firms that existed because the mines existed. Those workers were on their own. Unemployment in the affected industries exceeded 15 percent. And the official narrative framed the whole thing as a success story for decades while Gelsenkirchen sat at 15.6 percent unemployment sixty-three years after the crisis began.

The parallel isn’t exact — the Ruhr’s supply chain collapse was downstream of the mine closures, while vertical SaaS absorption is a separate mechanism operating on its own logic. But the structural lesson holds: the people at the center of a transition get the programs. The people on the periphery absorb the damage for generations. And there are always more people on the periphery.

The Honest Position

I’ve been working with Claude for more than two years now. I’ve watched it go from a powerful but limited tool to something that changes what a single person can produce. The progression has been steep, and nothing in the trajectory suggests it’s slowing down.

But between AI getting competent and AI being fully integrated into existing business processes, there is a gap. That gap is real. It’s the reason Gartner predicts rehiring. [12] It’s the reason Forrester says 55 percent of companies regret premature cuts. [11] It’s the reason the economists can look at the aggregate data and not see much yet. [1] Integration is slow, messy, and dependent on organizational capacity that most companies don’t have. The hype merchants are selling the capability curve. The economists are measuring the integration curve. They’re looking at real data and reaching different conclusions because they’re measuring different things.

The honest position is that the gap exists, and it buys time. The equally honest position is that nobody is using the time it buys.

Not the companies, which are either cutting prematurely for investor optics or ignoring the problem until it’s undeniable. No executive needs to decide to “replace workers” for this to happen; they only need to accept tools that improve margins. The outcome is the same.

Not the policymakers, who are defaulting to retraining because it’s the only lever they know how to pull. Not the economists, who are only now arriving at concerns that were visible to anyone paying attention a year ago. And not the tech leaders, who are selling a future that serves their capital-raising needs while building bunkers in Hawaii and New Zealand for the world they see coming.

The Ruhr Valley eventually stabilized — sort of — through massive, sustained public investment over decades. New universities, new industries, new infrastructure, a fundamental reimagining of what the region was for. It took the better part of sixty years and billions in subsidies. And it still left my hometown with a 15.6 percent unemployment rate.

What’s happening now is a transition at a global scale, at internet speed, with a second layer of displacement that no institution is watching and no measurement system can detect in time to respond.

The question from my first essay remains unanswered: who buys what we build, when the people who used to buy things no longer earn enough to do so? This essay adds a second question: what happens when we can’t even see the displacement clearly enough to act, when the most consequential layer of the transition is invisible to every institution that might intervene?

The time that “gradual” buys us will run out. It always does. And the longer we spend arguing about the visible layer, the less time we have to respond to the one we’re not even measuring.

By the time we can see it clearly, it won’t be a transition. It will be an outcome.


Updates

April 8, 2026

Five days after this essay was published, Jennifer M. Harris — a former economic official in the Biden White House — published an op-ed in the New York Times making substantially the same diagnosis I’ve laid out across this series. The wealth concentration. The productivity-pay decoupling. The AI-washing question. The inadequacy of every current policy response. She cites Gabriel Zucman’s finding that 19 households have added $1.8 trillion to their wealth in the past two years — roughly the size of Australia’s economy — and writes that “we are no longer sharing in self-government.”

That sentence points at something I’ve been circling but hadn’t named directly: the displacement isn’t just an economic problem. It’s a constitutional one. When that much wealth and power concentrate so quickly, the democratic part of democratic capitalism ceases to be meaningful, regardless of what the formal institutions look like.

The convergence is the news. Six months ago, the frame Harris is using would have been considered alarmist. Now it’s appearing under institutional credentials in the paper of record. The conversation is moving from the periphery to the center faster than the institutions themselves seem to realize.

But Harris stops where the mainstream conversation still stops. Her entire piece operates within the visible layer — the headline layoffs, the IBM cuts, the Stanford study on early-career employees. She names AI-washing as a possibility but doesn’t pursue it. She doesn’t grapple with the evidence that retraining has never worked at scale. And she doesn’t touch the vertical SaaS mechanism at all. Her policy prescriptions — taxing investment income, sovereign wealth funds, worker cooperatives — are responses to the layer she can see. They’re good ideas. They’re also what emerges when half the problem is still invisible.

This is what I meant by “the wrong argument.” Not that Harris is wrong — she’s largely right, and her piece is the most important mainstream signal yet that the diagnosis is breaking through. But the frame she’s using is still the frame at the gates. The Trojan Horse is still inside the walls, still getting software updates, and still not part of the conversation she’s joining.

The mainstream is catching up to where this series was a week ago. That’s encouraging. It also means the next layer of the argument is still waiting for them.

April 11, 2026

Three days after the Harris update above, Anthropic released Claude Mythos Preview. The news cycle has been dominated by its security capabilities — Mythos found thousands of zero-day vulnerabilities across every major operating system and browser, which is why Anthropic withheld public release and formed a defensive coalition before announcing the model’s existence. That’s the story getting coverage.

The detail that matters for this essay sits in a different part of the announcement. On SWE-bench Verified — the benchmark that measures how well an AI system can resolve real software engineering tasks from actual codebases — Mythos scores 93.9 percent. The previous frontier model was at 80.8. A year ago the best systems were in the low 70s.

Mythos itself isn’t generally available. That’s not the point. The point is what this capability means for the platforms inside the walls.

Return to the insurance carrier in the Trojan Horse section above. The Guidewire update that absorbed first-notice-of-loss for routine auto claims was built against the model capabilities available when that release was scoped — sometime in 2025, using tools roughly at the level of last year’s frontier systems. The Guidewire release after next will be built against something much closer to what Mythos can do. Not Mythos itself, which Anthropic is holding back, but the generally available model Anthropic will ship next, which will incorporate lessons from Mythos’s development.

The question isn’t whether a given vertical SaaS platform can absorb a specific function. The question is how much of what used to be review-and-judgment work can now be absorbed into routine, release-over-release, as the upstream models keep advancing. A year ago, routine auto claims. This year, maybe commercial property. Next year, something closer to the adjuster’s actual judgment work. Nothing dramatic. Just the same pattern — an update ships, throughput goes up, a position doesn’t get backfilled.

The Harris update noted that the mainstream conversation was catching up to where it had been a week earlier. Mythos is a different kind of confirmation. It’s evidence that the upstream pressure on the mechanism I described is accelerating faster than the essay implied. The vendors inside the Trojan Horse are getting better tools, on a shorter cycle, than anyone watching the visible layer of displacement has reason to track.

By the time we can see it clearly, it won’t be a transition. It will be an outcome.


Sources

[1] Ben Casselman, “Economists Are Starting to Take A.I. Elimination of Jobs Seriously,” New York Times, April 4, 2026.

[2] Beatrice Magistro et al., “The Coming AI Backlash,” Foreign Affairs, 2025; covered in Northeastern University News, November 20, 2025. https://news.northeastern.edu/2025/11/20/ai-retraining-programs-northeastern-research/

[3] Julian Jacobs, “AI Labor Displacement and the Limits of Worker Retraining,” Brookings Institution, May 16, 2025. https://www.brookings.edu/articles/ai-labor-displacement-and-the-limits-of-worker-retraining/

[4] Julian Jacobs, “AI & the Retraining Challenge,” AI Policy Perspectives, June 26, 2025. https://www.aipolicyperspectives.com/p/ai-and-the-retraining-challenge

[5] Brookings Institution, analysis of AI and workforce retraining programs, May 2025 (as referenced in “I’ve Seen This Before”).

[6] “The Reskilling Delusion: AI Reskilling Myth,” Fast Company, March 2026. https://www.fastcompany.com/91484887/the-reskilling-delusion-ai-reskilling-myth

[7] Matt Hopkins, “The Reskilling Myth: Why Retraining Won’t Save Us from AI Displacement,” matthopkins.com, April 1, 2026. https://matthopkins.com/business/the-reskilling-myth/

[8] “Why Traditional AI Training Isn’t Working in 2026,” DataCamp (survey of 500+ US and UK enterprise leaders, conducted with YouGov), March 2026. https://www.datacamp.com/blog/why-traditional-ai-training-isn-t-working-in-2026

[9] Accenture Q4 FY2025 earnings call, September 25, 2025; CEO Julie Sweet statement. Reported by CNBC, September 26, 2025; CX Today, October 2025; Nearshore Americas, October 2025. Accenture workforce and restructuring figures from company filings.

[10] “11 Companies Replacing Workers With AI (2026): Real Layoff Data,” PrepoAI, March 14, 2026. https://prepoai.com/article/11-companies-replacing-workers-with-ai-2026-real-layoff-data-mmqqj4qu — Aggregates data from: Challenger, Gray & Christmas 2025 Annual Report; Resume.org US Business Leaders AI Hiring Survey, 2025; Jack Dorsey shareholder letter on X, February 26, 2026; Marc Benioff on The Logan Bartlett Show, August 29, 2025; Amazon CEO Jassy internal memo (CNBC, June 17, 2025), Q3 earnings call (Fortune, CNN, GeekWire, October 30–31, 2025), and VP Galetti memo (Gizmodo, October 2025); Deutsche Bank analysts, World Economic Forum, Davos, January 20, 2026; Duolingo spokesperson statement, January 2024.

[11] Forrester Research, Predictions 2026: The Future of Work, October 2025. Reported by The Register, October 29, 2025; Computerworld, November 4, 2025.

[12] Gartner, “Gartner Predicts Half of Companies That Cut Customer Service Staff Due to AI Will Rehire by 2027,” press release, February 3, 2026 (based on survey of 321 customer service and support leaders, October 2025). https://www.gartner.com/en/newsroom/press-releases/2026-02-03-gartner-predicts-half-of-companies-that-cut-customer-service-staff-due-to-ai-will-rehire-by-2027

[13] Eze Vidra, “Vertical AI in 2026: The Good, The Bad, and The Ugly,” VC Cafe, January 7, 2026. https://www.vccafe.com/2026/01/07/vertical-ai-in-2026-the-good-the-bad-and-the-ugly/ — Citing Bessemer Venture Partners’ Vertical AI Playbook.

[14] Forbes Finance Council / Tomas Milar, “Vertical SaaS: An Overlooked Winner in the AI Valuation Race,” Forbes, March 30, 2026. https://www.forbes.com/councils/forbesfinancecouncil/2026/03/30/vertical-saas-an-overlooked-winner-in-the-ai-valuation-race/ — Valuation data sourced from PitchBook 2025 VC Emerging Opportunities report.

[15] “AI and the SaaS Industry in 2026,” BetterCloud, January 2026. https://www.bettercloud.com/monitor/saas-industry/ — Citing Gartner forecast on GenAI enterprise deployment and McKinsey 2025 survey on organizational AI adoption.

[16] “Crunchbase Predicts: Why Top VCs Expect More Venture Dollars, Bigger Rounds And Fewer Winners In 2026,” Crunchbase News, January 5, 2026. https://news.crunchbase.com/venture/crunchbase-predicts-vcs-expect-more-funding-ai-ipo-ma-2026-forecast/

[17] “AI SaaS Valuation Premium: 1-3x More in 2026,” Livmo, February 2026. https://livmo.com/blog/ai-impact-saas-valuations-2026/ — Citing Bessemer State of the Cloud data on vertical AI startup growth rates.

Additional background sources referenced in previous essays in this series:

Challenger, Gray & Christmas, 2025 Annual Job Cut Report

University of Sydney / Prof. Uri Gal, “Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On?”, March 17, 2026. https://www.sydney.edu.au/news-opinion/news/2026/03/17/tech-companies-are-blaming-massive-layoffs-on-ai.html

Fox Business / Eric Revell, “Oracle Laying Off Thousands of Workers to Cut Costs Amid AI Push,” March 31, 2026. https://www.foxbusiness.com/economy/oracle-laying-off-thousands-workers-cut-costs-amid-ai-push-report

National Academies, “Retraining Workers for the Age of AI,” December 2025. https://www.nationalacademies.org/news/retraining-workers-for-the-age-of-ai