In January 2026, Anthropic made a quiet change.
They blocked Claude Pro and Max subscribers from using Claude through third-party tools. No announcement. No warning. Tools like OpenCode just stopped working overnight.
The backlash was immediate.
David Heinemeier Hansson — the creator of Ruby on Rails — posted on X: “Confirmation that Anthropic is intentionally blocking OpenCode, and any other 3P harness, in a paranoid attempt to force devs into Claude Code. Terrible policy for a company built on training models on our code, our writing, our everything.”
OpenCode gained 18,000 GitHub stars in two weeks.
That’s the story behind the AI agent war. Let’s break it down.
What Exactly Happened
On January 9, 2026, Anthropic deployed server-side checks. Any third-party tool using a Claude Pro or Max OAuth token started getting this error:
“This credential is only authorized for use with Claude Code and cannot be used for other API requests.”
If you were paying $100 or $200 per month for Claude Max and using OpenCode as your interface, it stopped working. Your subscription still charged you. The product just didn't work.
On February 19, Anthropic made it official. Updated Terms of Service explicitly banned using subscription tokens in any third-party tool — including their own Agent SDK.
By March, OpenCode’s maintainer removed Claude’s OAuth plugin entirely from the codebase.
What Is OpenCode?
OpenCode is an open-source coding agent built for the terminal. It was created by the team at SST (the same people who built the SST framework for serverless apps).
It supports 75+ AI providers: Claude, GPT, Gemini, Ollama, Groq, AWS Bedrock, Azure OpenAI, OpenRouter, and local models via llama.cpp. You pick the model. You can swap it with one config change.
As of April 2026, OpenCode has over 140,000 GitHub stars and more than 6.5 million monthly users. It hit #1 on Hacker News in March 2026.
Install it in one line:
curl -fsSL https://opencode.ai/install | bash
Or with npm:
npm i -g opencode-ai@latest
Run it:
opencode
It opens a terminal UI. You pick a provider, add your API key, and start coding.
Swap models with one line in your config:
[model]
provider = "anthropic"
model = "claude-sonnet-4-6"
Change anthropic to openai or google and you’re done.
The Three-Way Comparison
Here is how OpenCode, Claude Code, and Codex CLI compare today:
| Feature | OpenCode | Claude Code | Codex CLI |
|---|---|---|---|
| Cost | Free (pay API only) | $20–$200/month | $8–$200/month |
| Provider lock-in | None (75+ providers) | Anthropic only | OpenAI only |
| Open source | Yes (MIT) | No | Yes (Apache 2.0) |
| Terminal-based | Yes | Yes | Yes |
| Local model support | Yes (Ollama, llama.cpp) | No | No |
| Privacy (no code storage) | Yes | No | No |
| GitHub stars | 140,000+ | — | 67,000+ |
Claude Code
Claude Code is Anthropic’s official terminal agent. It runs on Claude Opus 4.6 with up to 1 million tokens of context. It has deep features: Plan Mode, Agent Teams for parallel work, MCP integrations, lifecycle hooks.
Pricing: $20/month (Pro), $100/month (Max 5x), $200/month (Max 20x).
The quality is real. Claude Code scores 80.9% on SWE-bench Verified — the highest of any coding agent. But heavy users regularly burn through their usage window in a single complex session.
If you want to use Claude through a third-party tool, you must pay per-token via the API. The subscription no longer covers it.
Codex CLI
Codex CLI is OpenAI’s open-source terminal agent. It takes a fire-and-forget approach — you submit a task and walk away. No per-step approval needed.
It leads Terminal-Bench 2.0 at 77.3% and uses roughly 4x fewer tokens than Claude Code for equivalent tasks. Good for automated and CI/CD workflows.
Pricing: $8/month (Go), $20/month (Plus), $200/month (Pro). Locked to OpenAI models.
OpenCode
OpenCode is the only one that lets you swap the underlying model without changing your workflow. Same interface, same commands. Swap from Claude API to GPT to local Gemma 4 with a single config line.
No subscription. You pay only for API tokens you use.
For local models via Ollama: zero ongoing cost.
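A local setup might look like the following config sketch, mirroring the earlier example. The `ollama` provider name and `gemma3` model tag are assumptions for illustration; check OpenCode's provider documentation for the exact keys and the model tags you have pulled locally:

```toml
[model]
provider = "ollama"   # assumed provider name; model is served locally, so no API fees
model = "gemma3"      # assumed tag; use whatever you pulled with `ollama pull`
```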
Real cost example: run Claude Sonnet 4.6 via the API at $3 per million input tokens, and a heavy day of 1 million input tokens costs $3 (output tokens are billed separately). For the same work through Claude Code Pro, you might hit your usage limit mid-session. OpenCode lets you decide when it is cheaper to pay the API directly versus paying for a subscription.
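The arithmetic above can be sketched as a quick break-even check. The rates are the article's illustrative figures (not an official price list), and output-token costs are ignored for simplicity:

```python
# Back-of-envelope comparison: pay-per-token API vs. a flat subscription.
# Rates are the illustrative numbers from the text, not an official rate card.
INPUT_RATE_PER_M = 3.00         # dollars per million input tokens (per the article)
SUBSCRIPTION_PER_MONTH = 20.00  # dollars, Claude Code Pro tier

def api_cost(input_tokens: int, rate_per_million: float = INPUT_RATE_PER_M) -> float:
    """Dollar cost of pushing `input_tokens` through a pay-per-token API."""
    return input_tokens / 1_000_000 * rate_per_million

# A heavy day of 1 million input tokens costs $3.
print(api_cost(1_000_000))

# Break-even: monthly input tokens at which the flat subscription starts to win.
break_even_tokens = SUBSCRIPTION_PER_MONTH / INPUT_RATE_PER_M * 1_000_000
print(f"{break_even_tokens:,.0f} input tokens/month")
```

Under these assumed rates, you would need to push roughly 6.7 million input tokens a month through the API before the $20 subscription becomes the cheaper option.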
OpenCode also includes two built-in agents: build (full file access for development work) and plan (read-only for exploring and analyzing your codebase). You can run the plan agent first to understand a codebase before making changes — a useful safeguard before you let the build agent touch production files.
Is Anthropic’s Block Defensible?
There is a real business argument here.
The Claude Max subscription is priced as an “all-you-can-eat” plan through Claude Code. Anthropic controls the rate limits and execution context. When developers ran Claude Max through third-party tools, they could automate high-volume tasks that would normally cost hundreds of dollars via the API — all for a flat $200/month.
Anthropic’s position: the subscription is for Claude Code. If you want API access for other tools, you pay API prices.
That is a defensible pricing decision. The execution was the problem — they made the change silently, with no transition period, cutting off paying subscribers overnight.
DHH’s phrase “very customer hostile” captures it well. The policy may have been justified. The rollout was not.
The Real Risk: Vendor Lock-In
The deeper story here is not about OpenCode vs Claude Code. It is about what happens when your entire coding workflow depends on one provider’s pricing decisions.
Today, Claude Code is excellent. Next year, Anthropic could raise prices, change terms, or degrade performance for Max subscribers. If your workflow is fully locked to Claude Code, you have no exit.
The same is true for Codex CLI and OpenAI.
OpenCode’s value is not that it is better than Claude Code in a head-to-head test. It is that it is the exit ramp. If Anthropic raises prices tomorrow, you swap the model. Your muscle memory stays the same.
Who Should Use What
Use Claude Code if:
- You want the best code quality and the deepest codebase context
- You are already on Claude Pro/Max for other Anthropic products
- You do not mind the provider lock-in at this price point
Use Codex CLI if:
- You want automated, autonomous task execution
- You prefer token efficiency over raw code quality
- You are in the OpenAI ecosystem
Use OpenCode if:
- You want full control over which model you use
- You want to run local models (Ollama + Gemma 4 = zero cost)
- You want one interface that works with any provider
- You want to avoid lock-in before you are dependent on it
The Bigger Picture
This conflict was not really about OpenCode. It was a preview of what happens as AI coding agents become infrastructure.
When your agent is writing real production code, the cost of switching providers is high. Muscle memory, configuration, context — all built around one tool. At that point, a pricing change is not just annoying. It is a business risk.
Cursor went from code editor to agent console in a single release, Cursor 3, in April 2026. Claude Code went from CLI helper to primary development environment. Codex CLI is being baked into CI/CD pipelines.
None of this was planned. It emerged from a hundred individual developer decisions. The tools that started as experiments are now running production deployments.
The AI coding market is moving fast. According to recent surveys, over 70% of developers now use AI coding tools daily. Trust in these tools is starting to slip — partly because of subscription lock-in stories like this one, and partly because developers are realizing how dependent they have become on something they do not control.
OpenCode was not built to win a benchmark. It was built for the moment when a company you depend on changes its terms. That moment has already happened once. It will happen again.
The AI agent war nobody planned is already here. The smart move is to pick your tools before they pick you.