AI Weekly Radar: February 10-18, 2026
The week was dominated by three narratives: frontier intelligence getting dramatically cheaper (Sonnet 4.6), the global race for AI infrastructure (India courting commitments in the hundreds of billions), and the maturation of AI agents, with both their promises and their very real security limits.
Sonnet 4.6: the model that rewrites the pricing rules
Anthropic launched Claude Sonnet 4.6 and made it the default model in claude.ai and Claude Code. The numbers speak for themselves:
| Benchmark | Sonnet 4.6 | Opus 4.6 |
|---|---|---|
| SWE-bench Verified | 79.6% | 80.8% |
| OSWorld (computer use) | 72.5% | 72.7% |
| GDPval-AA Elo (office tasks) | 1633 | below 1633 |
| Price per 1M tokens (input/output) | $3/$15 | $5/$25 |
Virtually identical performance to the flagship at 40% less. Computer use went from 14.9% to 72.5% in 16 months — nearly 5x. And it ships with 1M token context in beta.
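The 40% figure is easy to verify from the table. As a back-of-the-envelope sketch, here is the comparison applied to a hypothetical monthly workload; the token volumes are illustrative assumptions, not figures from either vendor:

```python
# Cost comparison at the published per-1M-token rates.
# The workload (tokens per month) is an illustrative assumption.

PRICES = {  # USD per 1M tokens: (input, output)
    "Sonnet 4.6": (3.0, 15.0),
    "Opus 4.6": (5.0, 25.0),
}

def monthly_cost(model, input_mtok, output_mtok):
    """Cost in USD for a workload given in millions of tokens."""
    inp, out = PRICES[model]
    return input_mtok * inp + output_mtok * out

# Hypothetical workload: 500M input tokens, 100M output tokens per month.
sonnet = monthly_cost("Sonnet 4.6", 500, 100)  # 500*3 + 100*15 = 3000
opus = monthly_cost("Opus 4.6", 500, 100)      # 500*5 + 100*25 = 5000
print(f"Sonnet: ${sonnet:,.0f}  Opus: ${opus:,.0f}  savings: {1 - sonnet/opus:.0%}")
# → Sonnet: $3,000  Opus: $5,000  savings: 40%
```

Because both the input and output rates are exactly 40% lower, the savings hold at 40% regardless of the input/output mix.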
Mistral acquires Koyeb: European consolidation
Mistral AI made its first acquisition: Koyeb, a Paris-based startup for AI application deployment. Mistral, valued at $13.8 billion, is positioning itself as a “full-stack” player with Mistral Compute.
Mistral’s numbers are impressive: ARR exceeding $400 million and a $1.4 billion investment announced in Swedish data centers. As the data sovereignty debate intensifies, Europe is consolidating its alternative.
EU Parliament blocks ChatGPT, Claude, and Copilot
The European Parliament’s IT department blocked access to AI tools on legislator devices. The reason: US authorities can legally compel OpenAI, Anthropic, and Microsoft to hand over user data.
This isn’t paranoia. It’s the same legislation that has enabled mass surveillance programs for decades, applied to a new context. With the Trump administration in office, several European countries are reevaluating their tech dependency on the US.
I wrote about this in depth — AI is the new data leak channel, and now even legislators acknowledge it.
India: the new global AI front
India means business. This week’s numbers:
- $200+ billion in AI infrastructure investment sought by 2028
- Amazon, Google, and Microsoft have already committed ~$70 billion
- Adani Group pledges $100 billion for AI data centers
- 100 million weekly ChatGPT users in India (per Sam Altman)
- India is Claude’s second-largest market (6% of global usage). Anthropic opened a Bengaluru office.
- Infosys partners with Anthropic to build enterprise AI agents, integrating Claude into its Topaz AI platform
The challenge: energy and water access for intensive data centers. The same infrastructure bottleneck we see in the West, but at a different scale.
Funding: 17 startups raised $100M+ in 7 weeks
In less than two months of 2026, AI investment remains relentless:
| Company | Round | Valuation | Detail |
|---|---|---|---|
| Anthropic | $30B Series G | $380B | Largest startup funding round in history |
| xAI | $20B Series E | — | Acquired by SpaceX shortly after |
| SkildAI | $1.4B Series C | $14B | Autonomous robots |
| ElevenLabs | $500M Series D | $11B | Audio and voice AI |
| humans& | $480M seed | — | Record seed round |
| Ricursive Intelligence | $335M | $4B | AI for semiconductor chip design |
| Runway | $315M Series E | — | Video AI |
Ricursive Intelligence deserves special mention: founded by ex-Google Brain and Anthropic researchers, it uses AI to design semiconductor chips, reducing a process that normally takes a year with human designers to hours. If it works, it could accelerate the very hardware that powers AI.
The AI bubble chasing $7 trillion keeps inflating. Will it translate into returns? History says no for most.
OpenClaw: the saga continues
Two weeks ago I wrote that I wasn’t installing OpenClaw. Since then:
- Stars doubled: 190,000 (vs 100,000 when I published)
- Moltbook (a Reddit-for-AI-agents built on OpenClaw) had credentials publicly exposed
- Peter Steinberger, the project’s creator, was hired by OpenAI days after the scandal
- The verdict from cybersecurity researchers: prompt injection remains unsolved
My “wait” thesis has been reinforced. Updated post with details.
Cohere launches multilingual open-weight models
Cohere released the Tiny Aya family: open-weight models covering more than 70 languages that can run on a laptop without an internet connection. At 3.35 billion parameters, they’re well suited to markets where lower-resource languages dominate.
Regional variants exist: TinyAya-Earth (Africa), TinyAya-Fire (South Asia), TinyAya-Water (Asia-Pacific/Europe). This connects to the Small Language Model trend — Gartner predicts that by 2027, organizations will use SLMs 3x more than general LLMs.
Quick notes
| Development | Detail |
|---|---|
| WordPress AI assistant | Native assistant that edits content, modifies styles, and generates images with Gemini Nano. Opt-in, requires block themes. |
| GPT-5.3-Codex | OpenAI’s most capable coding model yet: combines Codex + GPT-5, ~25% faster. |
| Codex-Spark | Preview of OpenAI’s ultra-fast model for real-time coding. |
| o3-pro | Replaces o1-pro in ChatGPT for Pro and Team users. |
| DRAM: 7x in one year | DRAM prices increased ~7x in 12 months. Memory management (prompt caching) is now critical for AI costs at scale. |
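To see why prompt caching moves the needle on costs, here is a minimal sketch. The 90% discount on cache hits and all token counts are illustrative assumptions for this example, not figures from any provider:

```python
# Illustrative cost model for prompt caching: a large shared prefix (system
# prompt + reference docs) is billed at a discounted "cached" rate after the
# first request. The 90% cache discount and token counts are assumptions.

INPUT_RATE = 3.0                 # USD per 1M input tokens (Sonnet 4.6 list price)
CACHED_RATE = INPUT_RATE * 0.1   # assumed 90% discount on cache hits

def cost_without_cache(prefix_mtok, query_mtok, requests):
    """Every request pays full price for prefix + query."""
    return (prefix_mtok + query_mtok) * requests * INPUT_RATE

def cost_with_cache(prefix_mtok, query_mtok, requests):
    """First request pays full price; later requests hit the cached prefix."""
    first = (prefix_mtok + query_mtok) * INPUT_RATE
    rest = (prefix_mtok * CACHED_RATE + query_mtok * INPUT_RATE) * (requests - 1)
    return first + rest

# Hypothetical: 0.05M-token shared prefix, 0.002M-token queries, 10,000 requests/day.
base = cost_without_cache(0.05, 0.002, 10_000)
cached = cost_with_cache(0.05, 0.002, 10_000)
print(f"no cache: ${base:,.0f}/day  cached: ${cached:,.0f}/day")
# → no cache: $1,560/day  cached: $210/day
```

The bigger the shared prefix relative to the per-request query, the closer the savings get to the cache discount itself, which is exactly why memory pressure on the serving side now translates directly into API economics.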
Conclusion
The week confirms a pattern: frontier intelligence is getting cheaper, but the infrastructure to run it is getting more expensive. Sonnet 4.6 proves that premium performance and premium pricing no longer go hand in hand. Meanwhile, the race for data centers, chips, and energy intensifies — with India emerging as the new front.
For companies: if you haven’t reviewed your AI model stack recently, this is the week to do it. What was a fair price three months ago is probably about 80% more than you need to pay today.
What was the most relevant news to you? Am I missing something important?