Werner Glinka

AI LABOR CULTURE

Is Anthropic Vital for U.S. Security?

Apr 11, 2026

On Wednesday, April 8, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned the CEOs of America’s largest banks to an urgent, unscheduled meeting at Treasury headquarters in Washington.[1][2] The subject was a single AI model — Claude Mythos Preview, built by Anthropic. The CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs showed up. Jamie Dimon couldn’t make it.[1] The message was simple: this technology poses real risks to the financial system. Prepare yourselves.

You know who wasn’t in the room? Anthropic.

Not because they had nothing to contribute. Anthropic had already briefed government officials before releasing Mythos. They had already decided, on their own, not to make the model publicly available.[7] They had already formed Project Glasswing — a coalition of over 50 organizations, including Amazon, Apple, Microsoft, Google, JPMorgan, CrowdStrike and Palo Alto Networks — to use Mythos defensively, finding and fixing vulnerabilities in critical infrastructure before others exploit them. They committed $100 million in credits to the effort, plus $4 million in direct donations to open-source security organizations. They contracted professional security teams to manually validate every vulnerability report before sending it to maintainers.[5] Anthropic had displayed every trait of a responsible corporate citizen.

So why wasn’t Anthropic in the room?

The most likely explanation is the simplest: inviting Anthropic would mean publicly partnering with a company that the President had personally attacked.

The Contradiction

Weeks before the Bessent meeting, the Pentagon designated Anthropic a “supply chain risk” to national security.[3] The reason? Anthropic refused to remove safeguards that prevent its models from being used for mass domestic surveillance and fully autonomous weapons systems.[4] President Trump and Defense Secretary Pete Hegseth publicly attacked the company for insisting on limits. Federal agencies were ordered to stop using Anthropic’s products.[3]

The administration punished Anthropic for being too careful. Then, when Anthropic produced something genuinely dangerous and handled it responsibly — withholding public release, briefing officials, building a defensive coalition — the administration had to scramble to warn the banking system about a threat Anthropic had already brought to the government itself.

The Operational Cost

By excluding Anthropic, the government cast itself as a messenger, delivering secondhand information about a threat while the people who understand it best were kept out of the room. The bank CEOs got a warning, but they didn't get the expertise behind it. That's not just politically awkward — it's operationally worse. The people best equipped to answer questions about Mythos's capabilities, its limits and what defensive measures actually work were missing from the conversation where those answers mattered most.

The Choreography

What you get instead is theater. Anthropic does the work — identifies thousands of zero-day vulnerabilities across every major operating system and browser, some dating back 27 years, builds the coalition, funds the defensive effort and sets up responsible disclosure processes.[5][6] And Bessent plays the middleman, relaying the danger to bank CEOs who are hearing it secondhand, after Anthropic has already told the government directly.

JPMorgan is already a Glasswing partner. They have access to Mythos. Their CISO called it “a unique, early-stage opportunity to evaluate next-generation AI tools for defensive cybersecurity.”[5] But the other banks at that table? They got summoned to be warned about a threat that one of their peers is already working on with the company the government says it can’t trust.

What Anthropic Actually Did

It’s worth being precise about what happened here because the coverage has been almost entirely about what Mythos can do — the vulnerabilities, the exploits, the security implications. The more important story is what Anthropic chose not to do.

They built what public benchmarks rank as the most capable AI model to date.[8] They tested it. They discovered it could find and exploit vulnerabilities that had survived decades of human review and millions of automated security scans. And instead of releasing it — instead of capturing the market advantage, the revenue, the headlines — they locked it down.[7]

They restricted access to a small group of partners who maintain critical infrastructure.[5] They engaged with government officials before going public. They published a 244-page system card detailing the model’s capabilities and risks, with a level of transparency unmatched by any other lab.[6] They committed to developing new safeguards before making Mythos-class capabilities generally available.

We can argue about whether Anthropic gets everything right. But the posture is consistent: they built something powerful, they recognized the danger, and they chose to be deliberate about how it enters the world.

The Dangerous Part

The dangerous part of this story is not Mythos. Mythos has a responsible owner.

The dangerous part is that the institutions charged with governing this moment cannot decide whether that owner is a partner or a threat.

Consider what Anthropic has actually done. The company found thousands of zero-day vulnerabilities in every major operating system and browser. It built the defensive coalition. It briefed the government before acting. It voluntarily withheld a product that would have generated enormous revenue. That is national security work, regardless of what label the Defense Department puts on it. The Bessent meeting itself is the proof.

You don’t summon bank CEOs over a threat discovered by a company that doesn’t matter.

And yet the administration designated this same company a supply chain risk because it refused to build tools for mass surveillance and autonomous weapons. Federal agencies were ordered to cut ties. The decision wasn't made on the merits. It was punishment for a company that told the administration no.

Anthropic told the Pentagon no. For that, it was labeled a national security threat. Anthropic told the world its most powerful model was too dangerous for public release and built a coalition to deploy it defensively. For that, it was left out of the room where the danger was being discussed.

The question isn’t whether AI companies can act responsibly. At least one of them just demonstrated that it can. The question is whether the institutions that are supposed to govern this technology can act coherently.

That’s a harder problem than any vulnerability Mythos will ever find.


Sources

1 - Powell, Bessent discussed Anthropic’s Mythos AI cyber threat with major U.S. banks (CNBC)

2 - Bessent, Powell Summon Bank CEOs to Urgent Meeting (Bloomberg)

3 - Pentagon labels Anthropic a supply chain risk (NPR)

4 - Anthropic rejects Pentagon offer (CNN)

5 - Project Glasswing: Securing critical software (Anthropic)

6 - Claude Mythos Preview system card (Anthropic)

7 - Why Anthropic won’t release Mythos to the public (NBC News)

8 - Everything You Need to Know About Claude Mythos (Vellum)