
Anthropic's Wild March: Mythos, Computer Use and GPUs

5 min read

Fourteen launches, five outages, an accidental leak of their secret model, and a Chinese industrial espionage scandal. Anthropic’s March was wild.


Mythos: the model you weren’t supposed to see

In late March, security researchers from LayerX Security and the University of Cambridge found an unprotected Anthropic data store. Inside: nearly 3,000 unpublished assets, including a draft blog post revealing the existence of Claude Mythos — a model Anthropic internally describes as “a step change” in capabilities.

Mythos (also referenced as Capybara in internal documents) would sit as a new tier above Opus. Not an incremental improvement — a leap.

The kicker: internal documents describe Mythos as the most capable cybersecurity model in existence. So advanced that Anthropic is directly briefing senior U.S. government officials, warning that Mythos makes large-scale cyberattacks “much more likely” from 2026 onward.

And as if the Mythos leak wasn’t enough, days later the source code for Claude Code leaked too — roughly 500,000 lines of code across 1,900 files. Two security breaches in one week. For a company that preaches AI security, not a great look.

Computer Use: Claude now touches your desktop

On March 23, Anthropic launched Computer Use in preview for Mac. On April 3, it hit Windows. Eleven days — the fastest platform expansion they’ve done for a preview feature.

What is Computer Use? Claude can see your screen, move the mouse, click, open applications, fill forms, navigate menus. If it has a connector (Slack, Google Calendar), it uses that. If not, it controls the computer the way a human would.
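That behavior boils down to an observe-plan-act loop: capture the screen, ask the model for the next action, execute it, repeat. Here is a minimal, purely illustrative sketch of that loop in Python — the planner is a scripted stub standing in for the model, and all names (`Action`, `stub_planner`, `run_agent`) are hypothetical, not Anthropic's API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    target: str = ""   # UI element to click, or text to type

def stub_planner(screenshot: str, goal: str) -> Action:
    """Stand-in for the model: maps what it 'sees' to the next action.
    A real computer-use agent sends the actual screenshot to a
    vision-language model and parses its tool call."""
    if "login form" in screenshot:
        return Action("type", "user@example.com")
    if "submit button" in screenshot:
        return Action("click", "submit")
    return Action("done")

def run_agent(goal: str, screens: list[str]) -> list[Action]:
    """Observe -> plan -> act, until the planner declares it is done."""
    trace = []
    for screenshot in screens:
        action = stub_planner(screenshot, goal)
        trace.append(action)
        if action.kind == "done":
            break
    return trace

trace = run_agent("fill the form", ["login form", "submit button", "confirmation"])
```

The security implications fall out of this structure: the model decides each action from pixels alone, so anything visible on screen can influence what it does next.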

Available to Pro and Max subscribers through Claude Cowork and Claude Code. Works with Dispatch for remote task orchestration.

It’s a move consistent with what we already discussed about Anthropic’s ecosystem: Claude is no longer a chatbot — it’s an agent that operates on your machine. Your subscription no longer pays just for answers — it pays for a digital worker.

24,000 fake accounts: China distilling Claude

In February, Anthropic publicly accused three Chinese labs — DeepSeek, Moonshot AI, and MiniMax — of orchestrating industrial-scale campaigns to extract Claude’s capabilities.

The numbers are staggering:

| Lab | Exchanges | Focus |
| --- | --- | --- |
| MiniMax | 13M+ | Massive general volume |
| Moonshot AI | 3.4M+ | Kimi models |
| DeepSeek | 150K+ | Reasoning, reward models, censorship |

In total, 24,000 fraudulent accounts generated over 16 million exchanges with Claude. The technique: distillation — a weaker model trains on a stronger one’s outputs to absorb its capabilities.
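In its textbook form (the soft-target formulation from Hinton et al.), distillation trains the student to match the teacher's temperature-softened output distribution by minimizing the cross-entropy between them. A minimal pure-Python sketch of that loss — illustrative only, with no claim about what these labs actually ran:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's -- the quantity the student minimizes during distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The loss shrinks as the student's logits approach the teacher's.
far = distillation_loss([0.1, 0.2, 0.3], [5.0, 1.0, 0.5])
near = distillation_loss([4.9, 1.1, 0.4], [5.0, 1.0, 0.5])
```

Each of those 16 million exchanges is one more (prompt, teacher output) pair for the student to fit — which is why the volume, not any single query, is the story.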

To bypass commercial access restrictions in China, they used commercial proxies running “hydra cluster” architectures — distributed networks of fake accounts across the API and third-party cloud platforms. A single proxy managed over 20,000 accounts simultaneously.

This connects directly to what we analyzed about DeepSeek and Chinese data sovereignty. Not an isolated incident — a systematic strategy.

The GPU crisis as backdrop

All of this is happening in a context where compute capacity is the scarcest resource in the industry. NVIDIA’s Blackwell GPUs are sold out for the next 12 months. The “hardware shortage” is no longer temporary — it’s the new normal.

Anthropic has responded by diversifying: a deal with Google Cloud to access up to one million TPUs, over 1 GW of capacity. Plus a $50 billion infrastructure plan for data centers in Texas and New York.

The multi-chip strategy (Google TPUs + Amazon Trainium + NVIDIA GPUs) gives them flexibility that competitors like OpenAI don’t have. And it explains decisions like cutting off third-party harnesses — when every inference cycle counts, you can’t give compute away.

Inference costs remain the elephant in the room. And with models like Mythos, presumably more expensive to run, the pressure on capacity will only grow.

My take

Anthropic’s March was a microcosm of everything happening in the industry right now:

  1. Models are advancing faster than security. You leak your secret model through a configuration error. You leak your flagship tool’s source code days later. Development velocity is outpacing the ability to secure what you’re building.

  2. Hardware is king. Doesn’t matter how good your model is if you don’t have GPUs to serve it. Anthropic’s chip diversification is smart, but it’s also an admission that depending solely on NVIDIA is an existential risk.

  3. China isn’t playing by the same rules. 24,000 fake accounts and 16 million exchanges isn’t hacking — it’s an industrial operation. And if Anthropic caught this one, imagine what they didn’t.

  4. Computer Use changes the game. A model that controls your computer is qualitatively different from one that answers questions. The implications for security, productivity, and technological dependence are massive.

Anthropic wants to be the provider of AI agents that work for you. Mythos is the power, Computer Use is the interface, and chip infrastructure is the bottleneck. Whoever controls all three wins.

The problem is that in March they showed that, while they build the future, the present is slipping through the cracks.
