Today in AI — 15 April 2026
Today's top AI news — curated links and commentary on the stories that matter for product builders.
Figma lost 6% of its market cap today because of a product Anthropic hasn't shipped yet. That reaction tells you where AI competition has moved: the models are converging, so the fight is over the surfaces and infrastructure wrapped around them. Stanford's annual AI Index confirms the premise — frontier models match human baselines on PhD-level science, coding benchmarks near 100%, and 88% of organisations have adopted AI. The raw capability race is flattening. Everything else is accelerating.
Surfaces over models
Anthropic is pairing Opus 4.7 with a design tool for websites and presentations. The model is the engine; the design surface is what spooked public markets. Epitaxy, its Claude Code redesign, pushes the same logic into development: orchestrating parallel sub-agents across repos rather than autocompleting lines. Stanford's transparency finding is the uncomfortable footnote — the most capable models now disclose the least about how they work.
- Anthropic preps Opus 4.7 and an AI design tool — Figma, Adobe, and Wix stocks tumble on the news — The Information
- Stanford AI Index 2026: capability is historic, transparency is in crisis — Stanford HAI
- Anthropic launches Epitaxy, a major Claude Code desktop redesign with Coordinator Mode — TestingCatalog
The physical layer scales
ASML raised its 2026 forecast to €40B and plans to ship 60 EUV tools this year, up 25% from 2025. Downstream, the investment follows the watts: nEye.ai raised $80M for optical circuit switching in data centres, Sygaldry raised $139M for quantum-classical AI servers, NVIDIA released open models for quantum error correction, and DeepX is fabricating sub-5-watt AI chips on Samsung 2nm for Hyundai's robots.
- ASML raises 2026 sales forecast to €40B as AI chip demand stays strong — CNBC
- NVIDIA launches Ising, the first open AI models for quantum computing — NVIDIA Newsroom
- Sygaldry raises $139M to build quantum computers purpose-built for AI workloads — Fortune
- nEye.ai raises $80M to bring optical circuit switching to AI data centers — Tech Startups
- DeepX expands Hyundai partnership for generative AI robots as it prepares for Korean IPO — Reuters via Investing.com
Agents go vertical
OpenAI acquired Hiro Finance to build financial reasoning into ChatGPT, an acqui-hire signalling vertical depth over horizontal breadth. Synera raised $40M for agentic workflows at manufacturers like BMW. Microsoft is testing always-on Copilot agents that monitor email and calendar without prompting. The common thread: agents leaving the chat window for domain-specific, continuously running infrastructure.
- OpenAI acquires personal finance startup Hiro to build financial AI into ChatGPT — TechCrunch
- Synera raises $40M to bring agentic AI into industrial engineering workflows — SiliconANGLE
- Microsoft tests always-on AI agents in Copilot that run autonomously in the background — The AI Insider
The discovery layer
Two raises, same thesis: if agents mediate how people find products, someone needs to build the knowledge infrastructure underneath. Bluefish helps Fortune 500 brands optimise for AI-mediated surfaces like ChatGPT and Amazon Rufus. Mintlify, now valued at $500M, positions documentation as the interface through which agents understand your product. Think of it as SEO for the post-search era.
- Bluefish raises $43M to help brands show up in ChatGPT, Gemini, and AI shopping agents — Adweek
- Mintlify raises $45M to become the knowledge infrastructure layer for AI agents — Mintlify
Open science
Microsoft open-sourced GigaTIME, trained on 40 million cancer cells, which translates routine $10 pathology slides into high-resolution imaging across 21 protein channels. It's the kind of application where model performance approaching human-expert level actually changes outcomes.
- Microsoft open-sources GigaTIME, an AI system that turns $10 tissue slides into advanced cancer imaging — Microsoft Research
Model capability gaps are shrinking. Stanford's data makes that clear. If you're building today, your differentiation lives in the vertical knowledge, the surface, and the infrastructure around the model, not the model itself.