
Hey, it’s Andreas.
For the last few months, it felt like Anthropic and Claude were setting the pace in AI.
But this week, OpenAI punched back.
Hard.
Four major launches in one week: ChatGPT Images 2.0, GPT-5.5, Workspace Agents, and ChatGPT for Clinicians.
The message is clear: OpenAI is back in the momentum game.
Especially with Images 2.0, which feels like a major step forward for text-heavy visuals, diagrams, infographics, and social graphics. It’s beyond impressive with text; I haven’t seen any generation with typos (I’ll go deeper on that later in this issue).
But the bigger takeaway is this: Stay model agnostic.
Anthropic will punch back.
Google will punch back.
Open source will punch back.
In today’s issue:
DeepSeek and Moonshot AI release strong new open-source models
Cohere acquires Aleph Alpha in a sovereign AI push
Nvidia offers free inference for frontier models
Yann LeCun says economists, not AI labs, should lead the labor debate
A deep dive on ChatGPT Images 2.0
Let’s get into it.

Weekly Field Notes
🧰 Industry Updates
🌀 DeepSeek previews V4 with cheaper frontier-class models → DeepSeek is back with open-source V4 models, 1M-token context, Huawei chip support, and pricing far below GPT-5.5 and Claude Opus.
🌀 Cohere acquires Aleph Alpha for sovereign AI push → Cohere is acquiring Germany’s Aleph Alpha in a merger aimed at governments and enterprises wary of U.S. AI dependency.
🌀 SpaceX partners with Cursor to build frontier coding AI → SpaceX is reportedly pairing its training compute with Cursor’s product distribution, with an option to acquire Cursor for $60B or pay $10B for the work.
🌀 Moonshot AI open-sources Kimi K2.6 → K2.6 is a serious open-source challenger in agentic coding, reportedly matching or beating GPT-5.4, Opus 4.6, and Gemini 3.1 Pro on key reasoning and coding benchmarks - while costing far less.
🌀 Google DeepMind forms coding “strike team” to chase Anthropic → Sergey Brin is reportedly pushing DeepMind to close the coding gap with Claude, framing code generation as the fastest path to self-improving AI.
🌀 Meta turns to AWS chips for agentic AI workloads → Meta signed a major AWS deal to use millions of Graviton5 cores. More and more AI players are diversifying infra beyond Nvidia-heavy stacks.
🌀 Anthropic’s Mythos model access reportedly leaked → Anthropic’s restricted cybersecurity model was reportedly accessed by a private Discord group days after launch via guessed deployment patterns and vendor credentials.
🎓 Learning & Upskilling
📘 Meta staff engineer on mastering OpenAI Codex → A practical guide to using Codex better - from prompting and parallel tasks to custom agent workflows.
📘 IBM Technology on the 7 skills for AI agent builders → Good breakdown on the shift from prompt engineering to agent engineering: system design, retrieval, reliability, security, and production readiness.
📘 Nvidia offers free inference for frontier models → Nvidia is hosting ~80 models ready to plug into coding agents (see the short sketch after this list).
📘 Hermes tutorial - Self-hosting AI agents → This tutorial walks through deploying a self-improving agent on your own machine, connecting it to Telegram, and adding tools like web search, voice, and scheduling.
📘 Google Cloud releases Agent Skills repository → Google Cloud launched an open GitHub repo for Agent Skills: compact, agent-first docs that give agents task-specific capabilities.
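A quick note on the Nvidia item above: I haven't wired it into a full agent yet, but as a rough sketch, Nvidia's hosted models speak the OpenAI-compatible chat API, so pointing the standard client at their endpoint is usually all it takes. The base URL, key prefix, and model id below are illustrative assumptions, so check what Nvidia actually lists for you:

from openai import OpenAI

# Assumes an API key from Nvidia's developer portal and their
# OpenAI-compatible endpoint (illustrative values, not gospel).
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",  # your key here
)

# Example model id only; swap in any of the hosted models.
resp = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Refactor this function to be pure."}],
)
print(resp.choices[0].message.content)

Because it's the same chat-completions shape, most coding agents that let you set a custom base URL can use these models without further code changes.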
🌱 Perspectives & Research
🔹 Yann LeCun says economists, not AI labs, should lead the labor debate → The ex-Meta chief AI scientist pushed back on big AI names framing the future of work, saying people should listen less to figures like Dario Amodei, Sam Altman, Yoshua Bengio, Geoff Hinton - or even himself - and more to economists.
🔹 Goldman Sachs on world models for enterprise decisions → After years of AI predicting patterns, Goldman argues the next shift is models that simulate outcomes.
🔹 McKinsey on the “great AI paradox” → Most companies invest in AI, but few show real ROI because they bolt AI onto old workflows. Successful companies redesign work end to end: agents handle prep, humans move above the loop, and judgment becomes the core job.
🔹 Jensen Huang vs Dwarkesh Patel on China compute → A good podcast featuring a heated exchange over the U.S. GPU ban: Huang argues that export bans push China straight to Huawei, while Dwarkesh warns that more compute accelerates dangerous frontier AI.

♾️ Thought Loop - What I've been thinking, building, circling this week
For the last few months, it felt like Google had taken the lead in image generation. Nano Banana became my daily driver because it was fast, useful, and good enough for real workflows.
But OpenAI is back in the game.
GPT-Image-2 takes #1 in every single Text-to-Image category - all 7 of them - surpassing the next leading model (Nano-banana-2 with web search) across the board.
That’s a significant jump, but benchmarks can be gamed, and we all know they don’t mean much if the model is poor in actual usage. Still, ChatGPT Images 2 feels like the first image model where I don’t immediately start looking for the usual AI tells. The typography is cleaner, the layouts are more stable, and the outputs feel less synthetic.
The biggest difference for me is text.
With Nano Banana, text was always the pain point. You could get a beautiful image, a strong composition, and a useful concept, but then one broken word would make the whole asset unusable. With ChatGPT Images 2, I haven’t seen a generation with typos yet, even when the image contains a lot of text.

Example 1: Infographic “The Hallmarks of Aging”

Example 2: UI Screenshot MacBook

Example 3: Infographic “Linear Transformations, Eigen Decomposition and the Spectral Theorem”
If you haven't used ChatGPT in the last few months, this is your sign to get back into it and try out the new image model. For now, it’s also available on the free tier.

That’s it for today. Thanks for reading.
Enjoy this newsletter? Please forward to a friend.
See you next week, and have an epic week ahead,
- Andreas




