Werner Glinka

AI LABOR CULTURE

My Claude Wishlist

Mar 21, 2026

I’ve spent the last year working seriously with AI — not casually, not experimentally, but as a daily collaborator on writing, research, and technical projects. I use Claude for intellectual work: developing arguments across sessions, stress-testing ideas, writing essays, and building software. It’s the most productive working relationship I’ve had with a tool — and I keep running into the same wall.

Every time I work with Claude, we start almost from scratch.

What I Keep Losing

Right now, if you use Claude seriously, you use it in fragments. Chat for thinking. Code for building. Cowork for file management. Each lives in its own context. Each starts from scratch — or close to it. The reasoning you developed in chat doesn’t follow you to the terminal. The document structure you built in Cowork doesn’t inform the conversation.

So you become the integration layer. You carry context between sessions, re-explain what you’re working on, re-establish the intellectual thread every time you switch modes. If you’ve ever onboarded the same colleague three times in one week because they keep forgetting what you discussed, you know the feeling.

This isn’t minor. It shapes what’s possible. Complex intellectual work — the kind where an argument develops across weeks, where each session builds on the last, where the collaboration deepens because both parties carry shared history — that work gets stunted when the system resets every time you switch tools.

What I Want

So what would I like that to look like in my daily work?

I’m on my iPad at a coffee shop. An argument takes shape in conversation — a structural parallel between two historical displacements clicks into focus. I sketch it out with Claude, testing the logic, pushing on the weak points. The reasoning chain is solid.

I get home, open my laptop, and say: “Let’s turn that into a draft.”

Claude knows what “that” means. It was there. Not a summary of what happened, not a re-reading of exported notes, but the actual thread — the false starts, the moments where the argument pivoted, the specific objections that got addressed. It opens a file, starts writing, and pulls in the right execution environment because the task shifted from thinking to producing.

That doesn’t exist yet. What exists is: copy your chat, paste it into a new context, hope the new instance picks up the thread, lose the texture of how you got there.

If I could design what I want, it would have three properties:

A persistent instance — one Claude bound to a project, not a device or a mode. It accumulates context the way a human collaborator does, not through explicit memory commands, but through participation. It was there for the conversation, so it knows the conversation.

Capabilities, not modes — shell access, file system access, browser access, document generation. These are tools the instance reaches for as needed, not separate applications I switch between. I wouldn’t think of picking up a pen as entering “pen mode.” The same principle applies.

Device-agnostic access — the project is the anchor. iPad, laptop, phone, desktop — those are windows into the same ongoing collaboration. The interface adapts. The context persists.

Why This Isn’t Just About Convenience

The current silo model implicitly assumes that AI collaboration is transactional — you ask, you get an answer, you move on. But the most valuable intellectual work is cumulative. It requires a collaborator who remembers not just facts but reasoning — why you rejected one framing, what counterargument shifted your position, which thread you deliberately left open for later.

Memory features help, but they store attributes rather than argumentation. They know where you live and what you’re working on. They don’t know that you spent three sessions testing a specific philosophical framework and came out with a set of unresolved questions you intended to return to. That’s not a fact to store. It’s a trajectory to continue.

There’s a conciseness argument here too, and it’s not trivial. Every context reset inflates the conversation with information both parties already possess. You re-explain your framework. The AI re-hedges with qualifications it wouldn’t need if it already knew where you stand.

That redundancy isn’t a style problem — it’s an architectural one. A persistent instance doesn’t just remember more. It communicates more efficiently, because shared context compresses the conversation down to the actual work. The same principle that makes good prose concise — don’t tell the reader what they already know — applies to collaboration itself.

Why It Doesn’t Exist Yet

I’m not naive about why this is hard. A persistent instance that carries full reasoning history across weeks of sessions would require enormous context windows or a very sophisticated compression scheme. Context windows are finite and expensive — every token of history carried forward is computation you’re paying for on every subsequent exchange. The economics push toward exactly the silo architecture I’m criticizing: short sessions, fresh contexts, minimal carryover.
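The economics are easy to feel in the abstract and easier to see with numbers. A back-of-the-envelope sketch, using purely illustrative figures (the token counts and price per million tokens below are assumptions, not real pricing), shows how carried history gets billed again on every exchange:

```python
# Back-of-the-envelope cost of carrying history forward.
# All numbers are illustrative assumptions, not real pricing.
history_tokens = 50_000        # accumulated reasoning re-sent with each turn
exchanges = 40                 # turns across a week of sessions
price_per_million = 3.00       # hypothetical $ per 1M input tokens

# Every exchange re-reads the full history, so the cost scales
# with history size times the number of turns.
carried_cost = history_tokens * exchanges * price_per_million / 1_000_000
print(f"${carried_cost:.2f} just to re-read shared history")  # → $6.00
```

The absolute dollar figure matters less than the shape: the cost grows with both the depth of the history and the length of the collaboration, which is exactly why the incentives favor short, stateless sessions.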

And I imagine the storage side is huge. Millions of users, each with multiple projects, each project accumulating weeks or months of conversational history — that’s a fundamentally different infrastructure commitment than serving stateless chat sessions.

The question is whether that’s a permanent constraint or a cost curve that’s coming down. Context windows have been expanding rapidly. Retrieval-augmented approaches could keep the full history in storage and surface what’s relevant rather than carrying everything. The memory system already does a primitive version of this — distilling attributes from conversations and injecting them into future contexts. The persistent instance would need that same pattern but for reasoning rather than facts. That’s a harder problem. But it’s an engineering problem, not an impossibility.
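The retrieval pattern described above can be sketched in a few lines. This is a toy illustration only: it uses naive word-overlap scoring as a stand-in for real embeddings, and the chunked history is invented for the example. The point is the shape of the idea — keep everything in storage, inject only what is relevant to the current task:

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())

def score(query, chunk):
    """Naive word-overlap similarity; a real system would use embeddings."""
    q, c = Counter(tokens(query)), Counter(tokens(chunk))
    return sum((q & c).values())  # size of the multiset intersection

def relevant_history(history, query, k=2):
    """Surface the k most relevant past chunks instead of carrying
    the entire transcript into every new context."""
    ranked = sorted(history, key=lambda chunk: score(query, chunk), reverse=True)
    return ranked[:k]

# Invented example history: two reasoning chunks and one irrelevant note.
history = [
    "We compared two historical displacements and found a structural parallel.",
    "Grocery list: eggs, coffee, bread.",
    "The counterargument about economic causes shifted the framing.",
]

context = relevant_history(history, "draft the essay on the structural parallel")
# The argument-related chunks are surfaced; the grocery list stays in storage.
```

Swapping word overlap for embedding similarity, and flat chunks for summaries of reasoning steps, is where the hard part lives — preserving a trajectory of argumentation rather than a bag of facts.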

The Pieces Are Already Moving

It seems that Anthropic is already heading in this direction. Claude Code gave the model execution capability. Cowork extended it beyond developers. The Projects feature, shipped this week, adds persistent task context within a workspace. Memory carries personal context across conversations.

Each of these is a fragment of what I’m wishing for. What’s missing is the connective tissue — the persistent identity layer that unifies them. The difference between a set of good tools and an actual collaborator.

I don’t know what’s on Anthropic’s roadmap. I don’t know when the economics make this viable. But I know what I need, because I bump into the absence of it every day. And I suspect anyone doing sustained intellectual work with AI — not one-off queries, but ongoing projects with real depth — is bumping into the same wall.

This is my wishlist. I’d like to stop being the integration layer.

Update Mar 26

What do you know? Two days before I published “My Claude Wishlist,” Anthropic shipped Dispatch — a persistent thread that follows you between phone and desktop without resetting context. I didn’t know. It’s clear Anthropic understands where this needs to go. The reasoning layer — continuity of argumentation, not just task history — is still the hard part. But the direction is right, and they’re moving fast.