
Hey, it’s Andreas.
I’m currently pushing ahead with a new book on Claude Code. As part of that, I started speaking with heavy users, friends, and colleagues who use it seriously, gathering feedback and insight for the book.
And I realized something:
Right now, there are two extremes when it comes to how people use Claude.
Some don’t use the tools at all. Others go too far and end up drowning in all the functionality they can set up. And this does not just apply to Claude (Code). You see the same pattern across AI tools more broadly.
So today, I want to show you how to take a deep breath, clear the clutter, and audit your setup.
Besides that, in today’s issue:
Meta rolls out their new model Muse Spark via Superintelligence Labs
Anthropic publishes Claude Cookbook (50+ ready-to-use guides)
OpenAI introduces $100 ChatGPT Pro tier
BCG with a comprehensive guide on why AI-first insurers will pull ahead
And more.
Let’s get into it.

Weekly Field Notes
🧰 Industry Updates
🌀 Meta rolls out Muse Spark via Superintelligence Labs → Meta’s new multimodal reasoning model handles voice, text, and images, and introduces a contemplating mode where multiple agents debate harder problems. Benchmarks look competitive with frontier models on reasoning, though still weaker in coding.
🌀 Anthropic launches Project Glasswing with Mythos Preview → Anthropic unveiled a new cybersecurity coalition with AWS, Apple, Google, Microsoft, Nvidia, and others around Mythos Preview, an unreleased frontier model that is so powerful that "Anthropic does not want to release it to the public."
🌀 Anthropic launches Claude Managed Agents → Developers can now build and deploy agents in Claude’s console without handling the backend infrastructure themselves.
🌀 OpenAI introduces $100 ChatGPT Pro tier → A new $100/month plan now sits between the $20 Plus and $200 Pro options, aimed at power users who want more headroom without jumping to the top tier.
🌀 Perplexity expands into personal finance with Plaid → Perplexity now lets users connect bank accounts, credit cards, loans, and savings to track spending, liabilities, and net worth inside a broader AI-native finance dashboard.
🌀 HappyHorse-1.0 debuts at No. 1 on Artificial Analysis’ video leaderboard → A new mystery model just took the top spot, surpassing ByteDance’s viral Seedance 2.0.
🎓 Learning & Upskilling
📘 Anthropic publishes Claude Cookbook → 50+ ready-to-use guides for building with Claude. A useful resource for builders who want practical patterns, not just docs.
📘 IBM Technology on AI technical debt → Jeff Crume breaks down one of the biggest hidden risks in ML projects: systems move fast, but data quality, evaluation, scalability, and governance often lag behind.
📘 DeepLearning.AI on efficient inference with SGLang → A strong intermediate course if you want to understand what actually makes LLM serving fast and affordable at scale.
📘 Kiro brings back startup credits → Early-stage teams can now apply for up to a year of Kiro Pro+ credits, with tiers for 2, 10, or 30 users.
📘 Perplexity launches Billion Dollar Build → An 8-week competition where teams use Perplexity Computer to build a company with a path to $1B, with finalists eligible for up to $1M in investment and up to $1M in credits.
🌱 Perspectives & Research
🔹 The Economist on Demis Hassabis’ AI worldview → DeepMind CEO Demis Hassabis reflects on the beliefs driving his AI mission, the risks of building powerful systems, and why global cooperation is getting harder as competition intensifies.
🔹 Anthropic’s Head of Growth says Claude is now “growing itself” → Amol Avasare says Anthropic is increasingly using Claude to run growth experiments internally, with activation as the key bottleneck and AI coding as a major product flywheel.
🔹 BCG on why AI-first insurers will pull ahead → BCG argues the real shift is not adding AI to legacy workflows, but redesigning underwriting, claims, and distribution around agent-led execution.
🔹 Inside the AI industry’s most expensive mistake → Meta reportedly used around 60 trillion tokens in 30 days, roughly 3x the estimated token count of all published books.
🔹 Penn researchers use AI to surface hidden Ozempic side effects → Researchers analyzed 400K+ Reddit posts on Ozempic and Mounjaro with LLMs and surfaced side effects like menstrual irregularities, chills, hot flashes, and fatigue that clinical trials underreported or missed. An effective method for using LLMs in quantitative research.

♾️ Thought Loop - What I've been thinking, building, and circling this week
I started writing a book on Claude Code.
The manuscript is already quite far along. But at one point, I wanted to understand how other people were actually using it. So I reached out to friends, colleagues, and heavy users. People deep enough into Claude (Code) to know where the real leverage is.
I came to a simple conclusion:
There is no single perfect way to use Claude Code, and even defining "best practices" is quite challenging.
But I also noticed something else.
A lot of people are nowhere near the ceiling of these tools. They are drowning in the mess they built around them.
Spend enough time with Claude (Code) and you start collecting instructions, plugins, hooks, skills, and context files like digital clutter.
You watch/read a tutorial, install someone else’s skill, copy a prompt framework from X, add another file, connect an MCP, tweak your CLAUDE.md, maybe throw in a hook or two.
It feels productive because every addition solves a local problem.
But three months later, your setup is full of half-remembered rules, overlapping instructions, and imported habits you never properly evaluated.
That is when the system starts getting worse, not better.
Some instructions are redundant. Some are outdated. Some directly conflict with each other.
"Be concise" and "always explain your reasoning" are not exactly natural allies.
And when Claude is forced to satisfy too many priorities at once, it usually satisfies none of them particularly well.
We talk a lot about giving models more context.
Much less about context hygiene.
But context debt is real.
The more junk you pile into your setup, the more hidden interference you create:
- weaker outputs
- less predictable behavior
- harder debugging when something breaks
- more effort spent managing the setup than getting value from it
That is why I created the audit below: it helps you strip the dead weight out of your written instructions. The vague rules. The overlapping ones. The scar tissue from old bad outputs that no longer needs to be there.
Just copy the prompt below and paste it into Claude (Code):
# Setup Audit
Before responding, read my entire configuration:
- CLAUDE.md (and any other root-level instruction files)
- Every file in /skills (read each SKILL.md in full, not just names)
- Every file in /context
- Any other .md, .txt, or instruction file you can find in the project
Do not skim. If a file is long, read it fully. Tell me upfront which files you read so I can confirm you didn't miss any.
Then audit every rule, instruction, and preference against these five tests:
1. **Default behavior** — Is this something you'd already do without being told? (If yes, the instruction is doing no work.)
2. **Conflict** — Does it contradict another rule somewhere else in the setup? Name both sides.
3. **Redundancy** — Is it already covered by another rule or file? Name the overlap.
4. **Scar tissue** — Does it read like a patch for one specific bad output rather than a general improvement? Look for oddly specific prohibitions, lone examples, or rules that only make sense if you know the backstory.
5. **Vagueness** — Is it so subjective you'd interpret it differently each run? ("be natural", "good tone", "don't overdo it", "use judgment".)
For each rule that fails at least one test, quote the rule verbatim, say which file it's in, and mark which test(s) it failed.
Also check for conflicts between CLAUDE.md/skills and any project-level instructions, hooks, or MCP tool descriptions you can see in this conversation. Flag contradictions the same way.
## Deliverables
**1. Files read** — bulleted list, so I can verify coverage.
**2. Cut list** — every rule you'd remove, one line each, with the reason (e.g. "Default — Claude already does this" or "Redundant with CLAUDE.md line 14").
**3. Conflicts** — every contradiction between files or within a file. Quote both sides and say which should win and why.
**4. Vague rules** — the subjective ones, with a concrete rewrite for each (or "delete — unfixable").
**5. Scar tissue suspects** — rules that look like one-off patches. Flag them but don't auto-cut; ask me whether the original problem still matters.
**6. Cleaned CLAUDE.md** — the rewritten file with dead weight removed. Preserve anything load-bearing. If you're unsure whether something is load-bearing, leave it and flag it in a "Kept but unsure" section at the end.
## Rules for the audit itself
- Be blunt. If a rule is useless, say so plainly — don't hedge.
- Don't invent problems to look thorough. If the setup is already tight, say that.
- Don't rewrite rules you're cutting — just cut them.
- Don't touch the skills themselves in the cleaned version; only CLAUDE.md.
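Before running the audit, you can build a ground-truth file list yourself to check the "Files read" deliverable against. A minimal sketch, assuming a Unix shell and that your instruction files end in .md or .txt (adjust the patterns and excluded folders to your project):

```shell
# Inventory every Markdown/text instruction file in the project,
# skipping dependency folders, so you can cross-check Claude's
# "Files read" list against it.
find . -type d -name node_modules -prune -o \
  -type f \( -name '*.md' -o -name '*.txt' \) -print | sort
```

Any file that shows up here but not in Claude's report is a coverage gap worth flagging before you trust the rest of the audit.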
P.S.: Works not only for Claude Code but also for your regular Claude + Cowork.
P.P.S.: If you have any special requests for topics to be covered in the Claude Code book, my inbox is open. Just hit reply here.

That’s it for today. Thanks for reading.
Enjoy this newsletter? Please forward to a friend.
See you next week, and have an epic week ahead,
- Andreas

P.S. I read every reply - if there’s something you want me to cover or share your thoughts on, just let me know!


