AI LABOR CULTURE
I've Seen This Before
Feb 16, 2026
My previous essay, Who Buys What We Build?, struck a nerve. The engagement tells me that people recognize the core argument — that AI-driven layoffs are eroding the consumer base that the entire economy depends on — even if nobody in power seems willing to say it out loud.
In that essay, I framed the problem as a feedback loop that capitalism forgot — implying it could remember, could self-correct. I’ve spent some time since then researching what people, institutions, and governments are actually doing to prepare. What I found pushed my thinking further than I expected.
But let me start with why, because I’ve watched this movie before. I grew up in it.
Gelsenkirchen, 1957
I was born in Gelsenkirchen, in Germany’s Ruhr Valley — the industrial heartland that powered the German economic miracle of the 1950s and 60s. At its peak, coal mining alone employed roughly 600,000 people. The entire region — some 8.5 million people — was built around coal, steel, and the vast supply chains that fed them. The mines, the foundries, the rail yards, the downstream businesses: they were the economy.
Then the coal crisis hit in 1957. I grew up inside the unraveling. Between 1955 and 1980, coal employment dropped from 480,000 to 143,000. The steel crisis followed. I watched it happen — the closures, the empty storefronts, the men who had defined themselves by their work suddenly having no work to define themselves by.
The German government responded with what, on paper, appeared to be a comprehensive transition plan. The large mining companies and their workers were protected by Germany’s co-determination laws[1]. Miners at the big firms were offered early retirement, retraining, and transfers to the metal industries. Government officials could truthfully say that none of the miners at the major coal companies became unemployed.
But that’s not the whole story. The co-determination law only covered coal and steel, not the upstream and downstream industries — the suppliers, the service businesses, the small firms that existed because the mines existed. Those workers got crushed. Unemployment in the affected industries exceeded 15%. And the replacement jobs that eventually materialized — mostly in the service sector — paid less.
The official narrative today frames the Ruhr’s transformation as a success story: dozens of universities were built, the region rebranded itself as a knowledge and tourism economy, old mines became UNESCO World Heritage Sites and design museums. And there’s truth in that. But it took the better part of sixty years, billions in subsidies from European, federal, state, and municipal budgets, and a 25-year concentrated push in research and education.
And after all of that? As of 2020, regional unemployment in the Ruhr still stood at 10.1% — nearly double the national average of 6.0%. Gelsenkirchen[2], my hometown, hit 15.6%. More than six decades after the crisis began.
That’s the thing about industrial collapse. The official programs help the workers at the center. The people on the periphery — and there are always more of them — absorb the damage for generations.
The Retraining Reflex
When you raise the issue of AI displacing workers, the first answer you hear from policymakers and corporate leaders is retraining. Upskill the workforce. Teach people to work alongside AI. Adapt.
It sounds reasonable. It’s also largely inadequate, and the research supports that conclusion.
Harvard’s Project on Workforce has found that workforce development in the U.S. is chronically underfunded compared to peer nations. The existing federal infrastructure under the Workforce Innovation and Opportunity Act[3] is a patchwork that varies wildly by state, with dollars overwhelmingly directed toward classroom learning rather than work-based training. Apprenticeship programs represent only a tiny fraction of eligible programs.
Brookings published a sobering analysis[4] in May 2025 identifying a fundamental flaw: retraining programs frequently move workers from one automation-susceptible occupation to another. Program organizers themselves admit they have a foggy understanding of AI’s future economic impact, which makes it nearly impossible to identify the right skills to train for. You retrain someone as a data analyst, and eighteen months later that role is being automated too.
The National Academies[5] reported that nearly half of American workers were already using AI tools at least monthly by spring 2025, up from a third the year before. Nobody is giving these people a roadmap — they’re just expected to figure it out.
There’s also the age problem. The fastest-growing segment of the U.S. labor force is workers 55 and older. Research shows older learners can do well in self-directed learning, but they need more time and support. The retraining pipeline was never designed for them.
The difference between coal and AI is that with coal, you at least knew what was replacing it — oil, gas, nuclear, eventually renewables. The target was visible. With AI, the target keeps moving. Entire categories of white-collar work — not just individual tasks, but whole job descriptions — are being absorbed. Legal research, financial analysis, customer support, content creation, software testing, entry-level coding. These aren’t fringe occupations. They’re the backbone of the professional class.
In the Ruhr, displaced miners could transfer to the metal industries. There was somewhere to go. When AI eliminates the entry-level analyst role at a bank, what’s the adjacent industry that absorbs that person? There isn’t one — because AI is hitting those adjacent industries simultaneously.
The Thinkers
The most rigorous thinker on this is Daron Acemoglu[6], the Nobel-winning MIT economist. His framework identifies two forces that have historically balanced each other: automation displaces workers from existing tasks, while the creation of new tasks reinstates them. For most of the 20th century, these forces roughly offset each other, which is why industrialization ultimately raised living standards broadly.
But Acemoglu’s core argument is that this balance broke around 1980. Automation accelerated. New task creation slowed. The result is what we’ve lived through for four decades: rising productivity, stagnant wages, and a growing gap between what the economy produces and what workers earn.
AI threatens to widen that gap dramatically. In an IMF publication, Acemoglu and his co-author Simon Johnson proposed five concrete policy steps: reform business models that let AI companies expropriate consumer data without compensation, fix a tax code that penalizes hiring humans more than investing in automation, expand competition among AI developers, invest in worker-complementary technologies, and strengthen labor voice in how technology gets deployed.
These are structural interventions. They’re also politically dead on arrival in the current U.S. climate.
Howard Marks[7], the investor and co-chairman of Oaktree Capital, put the psychological dimension bluntly: financial support alone will not replace the psychological and social benefits of employment. Work gives people identity, structure, and social connection. A check doesn’t replace that. Anyone who watched what happened to the men in the Ruhr Valley when the mines closed understands this intuitively.
Universal Basic Income: From Theory to Legislation
The policy conversation that has moved fastest is around Universal Basic Income. It’s no longer theoretical.
Sam Altman has proposed an “American Equity Fund[8]” where large AI companies and landholders would contribute roughly 2.5% of their value annually to a fund distributed to all citizens — effectively socializing a share of the gains from automation. Elon Musk has floated the idea of “Universal High Income,” though his vision requires a level of post-scarcity abundance that no economy has achieved and that his critics consider detached from reality.
On the legislative side, Rep. Bonnie Watson Coleman introduced the Guaranteed Income Pilot Program Act of 2025[9], authorizing $495 million annually for five years to test nationwide guaranteed income. Rep. Rashida Tlaib’s BOOST Act proposes a $250 per month refundable tax credit. Over 150 mayors across the country have joined a network normalizing unconditional cash transfers as a municipal policy tool.
One of the more interesting experiments is Ireland’s Basic Income for the Arts[10], which began as a three-year pilot and became permanent in 2026. People working in creative fields can apply for a basic income that allows them to pursue their craft without needing side work to cover living expenses. It’s a targeted acknowledgment that certain kinds of valuable work don’t survive market pressures without support.
But the skeptics raise legitimate problems. A Newsweek analysis[11] in January 2026 pointed out that even redirecting an entire record year of S&P 500 stock buybacks — over a trillion dollars — wouldn’t come close to funding a national UBI at the levels most proponents suggest.
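To make the Newsweek point concrete, here’s a rough back-of-envelope check. The arithmetic is my own, not the article’s: I’m assuming roughly $1 trillion in annual buybacks (the record year cited), about 260 million U.S. adults, and the $1,000-per-month figure common in UBI proposals.

```python
# Back-of-envelope: can record S&P 500 buybacks fund a national UBI?
# All three inputs are rough assumptions, not figures from the Newsweek piece.
buybacks = 1.0e12          # dollars per year, approximate record buyback total
adults = 260e6             # rough count of U.S. adults
proposed_monthly = 1_000   # dollars per adult per month, a common UBI benchmark

# Annual cost of the proposal versus what buybacks alone could cover.
ubi_annual_cost = adults * proposed_monthly * 12
monthly_from_buybacks = buybacks / adults / 12

print(f"Annual cost of $1,000/month UBI: ${ubi_annual_cost / 1e12:.2f} trillion")
print(f"Buybacks alone would cover about ${monthly_from_buybacks:.0f}/month")
```

Even on these generous assumptions, redirecting every buyback dollar covers roughly a third of the proposed benefit — which is the skeptics’ point.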
The honest assessment is that UBI might be necessary but is nowhere near sufficient, and the political infrastructure to implement it at scale doesn’t exist.
The Toolbelt Generation
While policymakers debate and economists publish, the most organic adaptation is coming from young people who are simply opting out of the knowledge-work pipeline.
Gen Z enrollment in trade programs has jumped 42% in recent years. Enrollment in vocational-focused community colleges rose 16% from 2022 to 2023, and enrollment at public community colleges offering vocational programs surged nearly 20% since 2020. NPR has dubbed them the “toolbelt generation[12].”
The numbers tell the story of a generation doing the math. Only 16% of Gen Z parents believe a college degree guarantees long-term job security. Seventy-seven percent of Gen Zers say choosing a career resistant to automation is a top priority. The professions they rank as most AI-resistant? Plumbers, HVAC technicians, electricians, and nurses.
There’s a deep irony here. A CSIS analysis[13] found that by 2030, the U.S. may need 140,000 additional electricians, HVAC techs, and welders — just to build the AI infrastructure itself. Google has pledged $10 million to train electricians for data centers. The machines that are eliminating knowledge work need human hands to build and maintain their physical homes.
Jim Farley, CEO of Ford — the company whose founder once understood that workers need to earn enough to buy what they build — launched a campaign in 2025 to partner with vocational colleges and fund scholarships for future technicians. The trade school resurgence is real and pragmatic. But it deserves a harder look than the feel-good headlines suggest.
The argument for trades rests on a simple premise: AI can think, but it can’t reach into a wall cavity and reroute old wiring. The brain is solved; the body isn’t. That gap is what makes a plumber’s job safe while a paralegal’s isn’t. But the gap is closing faster than most people realize. AI provides the intelligence. Advanced robotics provides the body. The last bottleneck is the hand — building a mechanical hand with the dexterity, tactile sensitivity, and adaptive grip of a human one. And that bottleneck is being solved right now, on multiple fronts simultaneously.
In 2025 alone, researchers unveiled robotic hands — described in Nature Machine Intelligence[14] and Science Advances[15] — that achieve human-like adaptive grasping and can handle delicate objects without damaging them. Humanoid robot funding grew fivefold from 2022 to 2024, now exceeding a billion dollars annually. McKinsey[16] estimates the general-purpose robotics market could reach $370 billion by 2040. Morgan Stanley[17] projects adoption will accelerate sharply in the late 2030s and 2040s. And in January 2026, Tesla announced[18] it would discontinue the Model S and Model X to convert its Fremont factory line to produce up to one million Optimus humanoid robots per year. When one of the world’s largest manufacturers pivots from building cars to building humanoid robots, this is no longer a research project. It’s an industrial strategy.
What this means is that today’s eighteen-year-old enrolling in trade school is buying a reprieve, not an escape. The trades are more resistant to automation than knowledge work — today. But the timeline isn’t infinite. A young electrician starting an apprenticeship in 2026 will be in their mid-thirties when humanoid robots start deploying at scale in construction. Not everyone can or wants to become an electrician, and the ones who do should know that the clock on their advantage is already ticking.
The Entry-Level Collapse
Perhaps the most alarming indicator of what’s coming is what’s already happening to people just entering the workforce.
In the UK, tech companies cut graduate hiring by 46%[19] from 2023 to 2024, with an additional 53% drop projected by 2026. Stanford’s Digital Economy Lab[20] found a 67% decrease in U.S. entry-level tech job postings over the same period. At one Indian engineering institute, fewer than a quarter of 400 graduating students had secured job offers as of late 2025.
A CBS News report[21] profiled a University of Connecticut mechanical engineering graduate who applied for 200 positions without landing a job in his field. He’s working as an assistant pool director. Labor economists point out that AI is best at the kind of rote, repetitive tasks that are the staple of entry-level work — which means the very bottom rung of the career ladder is being sawed off.
This is not just an inconvenience. Entry-level jobs are how people develop professional skills, build networks, and establish careers. If AI eliminates the first three to five years of professional development across multiple industries simultaneously, we’re not looking at a temporary adjustment. We’re looking at a generation that never gets on the ladder at all.
Scale and Speed
Every transition I’ve read about — the Ruhr Valley, the American Rust Belt, the decline of British coal under Thatcher — shares a common feature: the collapse was geographically concentrated and industry-specific. The Ruhr lost coal. Detroit lost auto manufacturing. These were devastating, but they were contained. Other regions and industries absorbed some of the displaced workers.
AI doesn’t work like that. It’s hitting every knowledge-work sector simultaneously, in every geography, with no clearly defined adjacent industry to absorb the displacement. The World Economic Forum[22] projects 92 million jobs displaced by 2030, against 78 million created — but those numbers assume the new jobs will require skills that the displaced workers can acquire, in locations they can access, within timeframes that matter. The Ruhr Valley’s experience suggests that the assumption is heroically optimistic.
And the speed is different. The Ruhr’s decline played out over six decades. The coal workers who were fifty in 1957 could ride out the transition on early retirement. The supply chain workers had at least some time to find alternatives, even if those alternatives were worse. AI is compressing what took the Ruhr sixty years into perhaps a decade. The layoff announcements come weekly. The LinkedIn posts follow the same day.
The Operating System Problem
After all this research — the retraining programs, the UBI proposals, the trade school resurgence, the policy frameworks — I keep circling back to a deeper discomfort. Every remedy I’ve described is a patch applied within a system whose core logic is producing the problem.
In my first essay, I wrote that AI didn’t break capitalism — it exposed a version of capitalism that had already stopped distributing its gains. I believed that then. I’m less sure now. The Ruhr Valley didn’t forget to distribute its gains. Germany distributed them as well as any capitalist economy ever has — co-determination, strong unions, massive public investment, a cultural commitment to social solidarity. And it still produced sixty years of generational damage from a single-industry, geographically contained transition. The system didn’t malfunction. It functioned exactly as designed. The design just isn’t built for this. What we’re facing isn’t a disruption — a temporary turbulence before a new equilibrium. It’s a structural incompatibility.
Capitalism optimizes for returns to capital. That’s not a moral failing — it’s the operating system. When replacing a worker with an algorithm increases shareholder value, the system will do it every time. Not because CEOs are villains, but because the incentive structure demands it. Retraining doesn’t change that incentive. UBI doesn’t change it. Trade schools don’t change it. They address the symptoms while the machine keeps running as designed. We are trying to run a post-labor technology on a labor-dependent philosophy.
The Writers Who Saw It Coming
Science fiction writers have understood this better than economists, and they’ve understood it for decades.
In 1952, Kurt Vonnegut published Player Piano[23], his first novel, based on what he’d seen working at General Electric. In it, automation has replaced most human workers. The displaced population has everything it needs materially — housing, appliances, a form of basic income — but, as one character puts it, everything worth enjoying about material abundance, “pride, dignity, self-respect, work worth doing, has been condemned as unfit for human consumption.” The novel’s most devastating detail is its ending: the workers revolt, destroy the machines — and then immediately start rebuilding them. Not because they’re stupid. Because the system has so thoroughly colonized their understanding of progress that they can’t conceive of life outside it.
Vonnegut wrote that based on watching a computer-operated milling machine cut jet engine rotors in 1949. He saw what was coming — not just the automation, but the trap. The machines weren’t the problem. The system that made people’s dignity contingent on their economic utility was the problem. Take away the utility, and you don’t just lose jobs. You lose the entire framework through which people understand their own worth.
Isaac Asimov got at the same thing from the other direction. In a 1964 essay[24], he predicted that by 2014, “the lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” He foresaw automation eliminating routine work and warned that humanity would suffer from “the disease of boredom” — not material want, but purposelessness. What he underestimated was how tightly capitalism would tie human purpose to economic function, making that boredom not just psychological but existential.
Iain M. Banks imagined one possible endpoint in his Culture series[25]. The Culture is a post-scarcity civilization where super-intelligent AIs manage everything, and humans are free to do whatever they want — create art, explore, play. Banks, an avowed socialist, described it as “hippie commies with hyper-weapons.” It’s utopian and intentionally provocative, but the telling detail is this: the Culture only works because it abandoned capitalism entirely. There’s no money. No property. No labor market. The AIs produce abundance and distribute it freely. Banks understood that you can’t graft post-scarcity onto a capitalist framework. The operating system has to change.
And Kim Stanley Robinson, whose Ministry for the Future[26] is perhaps the most serious attempt to imagine the actual transition out of capitalism — not life after it, but the wrenching process of getting there — has argued that markets are structurally incapable of valuing what doesn’t generate returns. Robinson’s point isn’t that markets are evil. It’s that they can’t price a stable climate, or a population that can afford to eat, because those things don’t fit on a balance sheet.
The same structural incapacity applies to AI displacement. Capitalism is very good at deploying AI to increase productivity and concentrate the gains. It has no mechanism — none — for ensuring those gains are distributed in a way that preserves social stability. If the best version of capitalism couldn’t manage the Ruhr, what chance does American-style shareholder capitalism have with a disruption that’s hitting every knowledge-work sector on the planet simultaneously?
This is the conversation nobody wants to have. Retraining is safe to talk about. UBI is edgy but increasingly mainstream. Trade schools are a feel-good story. But saying that the economic system itself might be structurally incompatible with what’s coming — that’s the third rail. Vonnegut saw it in 1952. Banks saw it in 1987. Robinson is writing about it right now. The science fiction writers have been ahead of the economists for seventy years, and we still haven’t caught up.
This Time Is Different
The standard rebuttal is that capitalism has absorbed every previous disruption. The steam engine displaced hand weavers, but created factory jobs. Electrification killed the lamplighter, but powered an entire industrial expansion. Computing eliminated typing pools, but created an information economy. Creative destruction has a 250-year track record. Why would this time be different?
Because every previous disruption replaced human muscle or human routine while still needing human judgment to direct it. The displaced worker could always move up the value chain — from the loom to the factory floor, from the switchboard to the help desk, from data entry to data analysis — because there was always a rung above the machine’s reach. The machine was a tool. The human was still the agent.
AI breaks that pattern. It doesn’t just replace the task. It replaces the capacity — judgment, analysis, pattern recognition, creative synthesis, strategic reasoning. These aren’t the bottom rungs of the value chain. They’re the top. When the machine can do what you got promoted into, there’s no rung left to climb to. That’s not a difference in degree from previous disruptions. It’s a difference in kind.
And the reflexive counter — “well, communism was worse” — misses the point entirely. Soviet central planning didn’t challenge the core premise. Mao didn’t challenge it. They still needed human labor as the engine of production. They just changed who held the steering wheel. Capitalism and communism were both built on the same assumption: that human work is the source of economic value, and the argument is over who controls it and who benefits from it. Every system we’ve tried — feudal, mercantile, capitalist, socialist — has that assumption at its foundation.
What AI introduces is something none of those systems were designed to handle: a world where human labor becomes optional. Not just manual labor. Not just routine cognitive labor. Labor itself. That’s not a crisis capitalism has solved before, because it’s never faced it before. And pointing to the failed alternatives doesn’t help, because they were built on the same assumption that’s breaking.
I don’t know what replaces this. Nobody does. The science fiction writers have imagined some possibilities — Banks’s post-scarcity anarchism, Robinson’s managed transition, Vonnegut’s bleak warning about what happens if we don’t figure it out. But imagining a destination is easier than mapping the route. The honest position is that we’re heading into territory where the old maps don’t apply, and pretending the current system can absorb this because it absorbed everything before — that’s the real fantasy.
What Conversation Do We Need?
The Ruhr eventually stabilized — sort of — through massive, sustained public investment over decades. New universities, new industries, new infrastructure, a fundamental reimagining of what the region was for. And it still left Gelsenkirchen with 15.6% unemployment sixty-three years after the crisis began.
What’s happening now is the Ruhr Valley at global scale and internet speed. The question from my first essay remains: who buys what we build, when the people who used to buy things no longer earn enough to do so? And the follow-up question is equally urgent: who are we preparing, for what, and is anyone being honest about whether it’s enough?
The economists know the problem. Acemoglu has been writing about it for years. The policymakers have their pilots and their bills. The universities are scrambling to retrofit their curricula. Gen Z is picking up welding torches. And the companies keep posting record profits while cutting headcount.
The science fiction writers saw it most clearly: a structural problem requires a structural response. Not just new training programs or cash transfers or tax incentives, but a serious reckoning with whether an economic system that ties human dignity to market utility can survive a technology that makes most human labor uncompetitive. The Ruhr tried everything within the existing system, and sixty years later, my hometown still hasn’t recovered.
Something has to give. The Ruhr taught me that waiting for the system to self-correct is a multi-generational death sentence. We are no longer waiting for the movie to start — we are in the third act. The question is no longer how we fix the workers to fit the machine, but whether we have the honesty to admit the machine itself is the problem. And whether, this time, the conversation will include the people on the periphery, not just the ones at the center.
Sources
[1] Germany's co-determination laws — explained at deutschland.de.
[2] Gelsenkirchen urban transition profile — from Urban Transitions.
[3] Workforce Innovation and Opportunity Act — from the U.S. Department of Labor.
[4] AI labor displacement and the limits of worker retraining — a Brookings analysis.
[5] Retraining workers for the age of AI — from the National Academies.
[6] What do we know about the economics of AI — with Daron Acemoglu at MIT.
[7] Is it a bubble? — a memo by Howard Marks.
[8] Moore's Law for Everything — Sam Altman's proposal for an American Equity Fund.
[9] Guaranteed Income Pilot Program Act of 2025 — at Congress.gov.
[10] Basic Income for the Arts — Ireland's program page.
[11] The AI universal basic income trap — a Newsweek analysis.
[12] Gen Z and vocational schools — NPR on the toolbelt generation.
[13] GenAI's human infrastructure challenge — a CSIS analysis.
[14] Robotic hand with adaptive grasping — published in Nature Machine Intelligence.
[15] Prosthetic robotic hand research — from Johns Hopkins / Science Advances.
[16] Will embodied AI create robotic coworkers — from McKinsey.
[17] The humanoid robot market — from Morgan Stanley.
[18] Tesla ending Model S and X production for Optimus robots — from CNBC.
[19] UK tech graduate hiring cuts — from The Register.
[20] Employment changes for young workers — from Stanford's Digital Economy Lab.
[21] AI, jobs, and college graduate unemployment — a CBS News report.
[22] The Future of Jobs Report 2025 — from the World Economic Forum.
[23] Player Piano by Kurt Vonnegut — on Wikipedia.
[24] Isaac Asimov's 1964 essay on the world of 2014 — in the New York Times archive.
[25] Iain M. Banks's Culture series — an overview.
[26] The Ministry for the Future by Kim Stanley Robinson — on Wikipedia.
This essay was also published on Substack.