Werner Glinka

AI LABOR CULTURE

Not Yet

Mar 10, 2026

Last week, the Pentagon labeled Anthropic — the AI company whose CEO has been the most persistent voice of caution in the industry — a supply chain risk to national security. [1] The designation, historically reserved for foreign adversaries, came because Anthropic refused to remove two conditions from its military contracts: no fully autonomous weapons, and no mass surveillance of Americans. [2]

Within hours, OpenAI had signed a deal to replace Anthropic in classified environments. [1] The government punished the company that tried to set boundaries. A competitor stepped in to take its place. If you want to understand what AI is doing to our democracy, you could start with the economics. But you might learn more from watching what happened when one company tried to say no.

The Economics

I’ve been writing about the economics of AI displacement for a while, and the argument I keep making is simple enough that it shouldn’t be controversial: companies are using AI to eliminate the jobs of the people who buy their products. That’s not a prediction. Salesforce cut thousands of support roles while raising prices. Microsoft did the same. Across corporate America in 2025, 1.2 million positions were eliminated. Only 55,000 were officially attributed to AI, but as I argued in “Who Buys What We Build?,” anyone who’s watched a company announce “restructuring for efficiency” while simultaneously trumpeting its AI investments knows that official attribution is a fiction. Companies have every incentive to avoid the label. The savings didn’t flow to consumers or into new hiring. They flowed into $1.6 trillion in dividends and stock buybacks. [3]

This is not how the story is supposed to go. When Sam Altman promises that AI will make everything “radically cheaper,” he’s invoking a theory that has a name in economics — the deflationary benefit of automation. The idea is that as production costs fall, prices follow, and everyone benefits. But that theory assumes companies pass the savings on. They don’t. They pass them to shareholders, to executives, to the few people who own the machines. Henry Ford understood in 1914 that his workers needed to earn enough to buy his cars. A century later, the people building the most powerful technology in history have ignored that lesson, or decided it no longer applies.

The standard response is retraining. It’s always retraining. I grew up in Gelsenkirchen, in Germany’s Ruhr Valley, where I watched this script play out over decades. The coal mines closed. The steelworks followed. Workers were retrained for jobs that were subsequently automated, outsourced, or simply never materialized at the promised scale. Retraining is a political reflex, not an economic strategy. It makes the displacement invisible for a while, distributes the cost over years rather than quarters, and lets everyone involved claim they’re doing something.

But here’s where AI differs from coal, steel, or even the first waves of software automation. Those disruptions hit the bottom of the value chain first — manual labor, routine processing, repetitive assembly. Workers were told to move up. Get more education. Learn to code. Develop the skills that machines can’t replicate: judgment, analysis, creative synthesis. AI inverts that logic entirely. It targets the entire value chain. It writes legal briefs, generates marketing strategies, produces code, and drafts architectural plans. There is no higher rung. The ladder itself is being disassembled from the top.

Earlier this month, Anthropic’s research team published a paper attempting to measure what’s actually happening in the labor market. [4] Their findings are careful, rigorous, and precisely the kind of analysis that should inform policy. Building on their ongoing Economic Index — which has tracked real-world AI usage across occupations since 2025 — they found that AI is disproportionately targeting educated, higher-paid professionals. [5] Computer programmers, customer service representatives, and data entry workers showed the highest levels of task coverage by AI systems. The professional class, the knowledge workers, they’re the ones in the crosshairs.

The paper also found something that the people citing it as reassurance don’t read carefully enough: a 14 percent drop in the job-finding rate for workers aged 22 to 25 in AI-exposed fields, compared to 2022. [4] The entry-level pipeline is contracting. Young people trained for professional careers are arriving at a door that’s closing. This is not an abstraction. It’s a generation discovering that the investment they made — the degrees, the debt, the years of preparation — may not convert into the careers they were promised.

And then there’s the headline finding, the one that gets quoted by people who want to believe the disruption is manageable: “no systematic increase in unemployment” for workers in highly exposed occupations. [4] The Anthropic researchers are careful to note that current AI usage lags far behind its theoretical capability, that legal, technical, and verification hurdles slow adoption, and that their framework is designed to find vulnerable jobs “before displacement is visible.”

Not yet.

Those are the two most dangerous words in economic history. The Ruhr Valley didn’t show catastrophic unemployment numbers on day one either. The decline was gradual. Communities adapted, downshifted, and made do. People left, but slowly. Services contracted, but incrementally. And then, over a span that felt long to the people living through it but looks sudden in retrospect, the floor fell out. Not because anything dramatic happened on a particular Tuesday, but because the structural supports had been quietly removed over the years, and one day, there was nothing left to hold the weight.

“Not yet” is not a reassurance. It’s a diagnosis. It means the underlying condition is progressing while the visible symptoms remain within normal range.

The Source

A skeptic might ask why we should give special weight to research published by an AI company about the effects of its own technology. It’s a fair question. The answer lies in Anthropic’s demonstrated willingness to act against its own financial interests.

Dario Amodei, Anthropic’s CEO, is the only leader of a major AI company whose caution is credible because it costs him something. He published a Responsible Scaling Policy. He released research showing early signs of labor-market disruption caused by his own technology. In his essay “Machines of Loving Grace,” he laid out a conditional vision of AI’s benefits: conditional on getting governance right, on distributing gains broadly, on preventing concentration of power. [6] He was careful to say that these outcomes are not automatic. They require deliberate choices.

And then he made one. Amodei drew two lines in Anthropic’s military contracts. [7] Not twenty. Two. No mass surveillance of Americans. No fully autonomous weapons. The Pentagon answered with the supply chain risk designation. [1] The president ordered federal agencies to stop using Anthropic’s technology. OpenAI, whose leadership has made no comparable commitments, stepped in to take the contracts.

Some commentators, including Bruce Schneier, have argued that Amodei’s stance is primarily a matter of brand positioning. Maybe so. Bismarck said it best: “Motive does not change the effect.” Whether Amodei acted from deep conviction or market calculation, the effect was the same: Anthropic declined to make the concessions. Two lines held. That’s what matters.

The point is not the geopolitics. The point is what this tells us about the research. When a company publishes data showing that its own technology is beginning to displace entry-level workers and reshape the labor market — and that same company has just sacrificed hundreds of millions of dollars in defense contracts rather than compromise on ethical principles — you are not reading a marketing document. You are reading the work of people who have demonstrated, at real cost, that they take the societal consequences of their technology seriously. The “not yet” in their findings deserves to be read with that context in mind.

Democracy

So what does all of this do to democracy?

The question is usually answered with a list of threats — disinformation, surveillance, and election manipulation. Those are real, but they’re symptoms. The structural threat is deeper and operates on a longer timeline.

Democracy has always depended on a broad middle class — people with enough economic security to participate in civic life, enough autonomy to think independently, enough stake in the system to defend it. These are the people who serve on school boards, run local businesses, volunteer for campaigns, and show up at town meetings. Not because they’re virtuous, but because they have the bandwidth. Economic stability creates civic capacity.

What happens when AI hollows out that class over the next decade? Not all at once — gradually, the way the Ruhr Valley declined. Young professionals can’t find entry-level positions. Mid-career workers are displaced into lower-paying roles or out of the workforce entirely. The savings flow upward. The tax base contracts. Public services deteriorate. The people who once formed the backbone of democratic participation become an anxious, precarious population managing decline rather than building futures.

The concentration of wealth that’s already underway accelerates this. According to the Federal Reserve’s Distributional Financial Accounts, the top one percent of U.S. households held 31.7 percent of all wealth in the third quarter of 2025 — the highest share on record since tracking began in 1989. [8] In dollar terms, that one percent held roughly $55 trillion, approximately equal to the combined wealth of the bottom 90 percent. [9] AI amplifies that asymmetry. The technology’s benefits accrue to those who own and deploy it. Its costs fall on those it replaces. Every percentage point of “productivity gain” that translates into headcount reduction rather than wage growth widens the gap.

We are watching a feedback loop assemble itself. AI concentrates economic power. Concentrated economic power undermines democratic participation. Weakened democratic institutions lose the capacity to regulate AI. The technology advances without constraint. The cycle repeats.

And the institution that is supposed to break that cycle — Congress — is structurally incapable of doing so. This is not a matter of individual competence or partisan failure. It is a design problem. The people charged with regulating the most consequential technology of the century are, by and large, not fluent in how it works. They depend on the executives building it to explain it to them. When a senator needs to understand what large language models can do, the briefing comes from the same companies that profit from minimal regulation. The tech industry is simultaneously the legislature’s primary source of technical expertise, one of its largest sources of campaign funding, and the entity that most needs regulation. That is not a relationship that produces oversight. It produces capture.

Even if a handful of representatives understood the technology independently, the time horizons are wrong. Members of the House operate on two-year election cycles. AI displacement unfolds over a decade. It doesn’t produce the kind of acute, visible crisis that mobilizes voters or donors within a single campaign season — it produces “not yet,” which is politically indistinguishable from “never.” There is no electoral reward for addressing a structural problem that won’t show up in the unemployment data until after the next election. There is considerable electoral cost in antagonizing the industry that funds your campaign. So Congress holds hearings, asks questions that reveal how little it understands, and produces nothing. The most powerful technology to emerge in a generation advances essentially unregulated — not because regulation is impossible, but because the institution responsible for it is optimized for a completely different set of incentives.

The people who see this most clearly are, ironically, the wealthiest. Bloomberg reported that Silicon Valley executives have been buying multimillion-dollar survival bunkers in New Zealand for years, shipped from Texas and buried eleven feet underground, outfitted for long-term habitation in the event of, as one bunker manufacturer put it, “a revolution or a change where society is going to go after the 1 percenters.” [10] Reid Hoffman, the co-founder of LinkedIn, estimated that more than 50 percent of tech billionaires have some form of escape plan. [11] They’re not investing in social solutions or democratic resilience. They’re building escape routes. When the people best positioned to understand a system’s trajectory are planning their exit rather than its repair, that tells you something about what they expect.

Gelsenkirchen

Anthropic’s research paper concludes by calling for improved measurement frameworks to identify vulnerable jobs before displacement becomes visible. That’s exactly what a responsible research organization should do. Measurement matters.

But measurement is not action. And the gap between “not yet” and “too late” is shorter than anyone with a quarterly earnings call to manage wants to admit.

I know what “not yet” looks like. I grew up inside it.

My father was a coal miner in Gelsenkirchen-Horst, in the heart of Germany’s Ruhr Valley. In the 1960s, Gelsenkirchen was an industrial city with a clear social contract: you worked hard, you earned a living, you raised a family. The mines and steelworks employed 650,000 people across the region. The work was brutal, but it was work, and it sustained a civic life — football clubs, church communities, neighborhood pubs, the whole texture of a functioning society.

The decline didn’t announce itself. The mines started closing, one by one. Workers were retrained for jobs in sectors that were themselves contracting, or for positions that never materialized at the scale promised. Young people left for wherever the work was. The population shrank. Services contracted. The tax base eroded. At each stage, the people managing the transition kept measuring, kept retraining, kept promising that the adjustments were working.

Today, Gelsenkirchen has a poverty rate of nearly 38 percent [12], in a country where the national average is about 15 percent. Among young people aged 15 to 24, it has the highest welfare dependency rate in all of Germany. [13] The city has lost more than 30 percent of its population since my father’s generation worked the mines. Employment in coal and steel across the region fell from 650,000 to 73,000. [14] The last colliery in Gelsenkirchen closed in 2000. Three thousand miners lost their jobs that day, but the real losses had been accumulating for decades before anyone counted them. The city that was once an industrial engine is now sometimes called the most dangerous in Germany. That breaks my heart. But what do you expect when nearly four out of ten residents live in poverty?

None of this happened suddenly. There was no single catastrophe, no dramatic collapse that would have forced an emergency response. It was a long, quiet process of doors closing, options narrowing, communities adapting downward until adaptation itself became the permanent condition. By the time the data confirmed what everyone living there already knew, a generation had been lost — and a second generation was growing up inside the consequences.

That is what “not yet” looks like from the inside. Not a crisis. A slow diminishment that the instruments don’t register as an emergency until it’s structural, until the civic capacity to respond has itself been eroded by the very process it needed to address.

We are in that period now, in this country, with this technology. The data says “not yet.” The structure says “already underway.” The most credible research we have — published by a company that has proven it takes these consequences seriously — indicates that the early signs are already visible, if we choose to read them.

The question is not whether AI will transform the economy. It will. The question is not whether the transformation poses risks to democracy. It does. The question is whether we will act on what the structure is telling us, or wait for the data to catch up — and discover, as Gelsenkirchen did, that by then there’s nothing left to save.


Sources

[1] NPR, “Pentagon labels AI company Anthropic a supply chain risk ‘effective immediately’,” March 6, 2026.

[2] Axios, “Anthropic sues Pentagon over rare ‘supply chain risk’ label,” March 9, 2026.

[3] Layoff data from Challenger, Gray & Christmas annual report, 2025. Buyback figures from S&P Dow Jones Indices. See also Who Buys What We Build? and The Corporate Benevolence Fantasy.

[4] Anthropic Economic Research, “Labor market impacts of AI: A new measure and early evidence,” March 5, 2026.

[5] Anthropic, “Anthropic Economic Index reports,” 2025–2026.

[6] Dario Amodei, “Machines of Loving Grace,” October 2024.

[7] Anthropic, “Where We Stand,” March 2026.

[8] Federal Reserve, “Distributional Financial Accounts,” Q3 2025.

[9] CBS News, “Wealth inequality in America just hit its widest gap in more than 3 decades,” January 21, 2026.

[10] Bloomberg, “The Super Rich of Silicon Valley Have a Doomsday Escape Plan,” September 5, 2018.

[11] The New Yorker, “Doomsday Prep for the Super-Rich,” January 30, 2017. Source of Reid Hoffman’s estimate that more than 50 percent of tech billionaires have some form of escape plan.

[12] Paritätischer Armutsbericht (Paritätische Welfare Association Poverty Report), 2024, reporting a 37.9 percent poverty rate for Gelsenkirchen.

[13] Bremen Institute for Workplace Research and Career Support, youth welfare dependency data.

[14] Urban Transitions Alliance, “City profile: Gelsenkirchen.” Regional employment decline (650,000 to 73,000) and 30%+ population loss. Last colliery closure (Ewald-Hugo, April 28, 2000) from municipal records.

This essay was also published on Substack.