#20 Edition: Back from Las Vegas – Oracle’s Quiet Masterstroke in AI

PLUS: Salesforce’s flagship AI event drops major news — and IBM teams up with Groq

Hey, it’s Andreas.
Welcome back to Human in the Loop — your field guide to the latest in AI agents, emerging workflows, and how to stay ahead of what’s here today and what’s coming next.

This week:
→ Salesforce drops major news on enterprise AI agents at its flagship event
→ IBM × Groq team up to supercharge inference and enterprise AI
→ Andrej Karpathy delivers a much-needed reality check on AI agents

Plus: a deep dive into Oracle’s AI strategy — after spending a week on the ground in Vegas, fully immersed in their vision.

Let’s dive in.

Weekly Field Notes

🧰 Industry Updates
New drops: Tools, frameworks & infra for AI agents

🌀 Salesforce launches Agentforce 360 for enterprise AI agents
→ Unified platform to build, deploy, and govern agents across CRM, Slack, and Data Cloud — unveiled at Dreamforce SF alongside other major AI updates.

🌀 IBM x Groq partner to accelerate enterprise AI
→ GroqCloud inference connects to watsonx Orchestrate, with planned Granite-on-GroqCloud support, delivering ultra-low-latency inference for real agent workflows (5x faster, at 20% of the cost).

🌀 Apple M5 chip adds Neural Accelerators in every GPU core
→ 10× faster on-device inference — AI-native compute built into the silicon.

🌀 Anthropic launches Claude “Skills” for reusable task agents
→ Modular behaviors teams can mix, share, and chain.

🌀 OpenAI x Broadcom partner on 10GW AI accelerators
→ Multi-year deal to co-develop custom chips and Ethernet systems for next-gen AI clusters.

🌀 Google Veo 3.1 pushes lifelike video + narrative control
→ Adds motion-aware diffusion and scene-consistent storytelling. Another big step towards full cinematic sequences.

🌀 Google brings Nano Banana AI editor to Search & Photos
→ Lightweight image-editing model that lets you restyle and remix photos directly in-app, quietly making Google tools agent-native.

🌀 Claude Haiku 4.5 hits Sonnet-level coding at 2× speed, ⅓ cost
→ Optimized for IDEs and backend tasks — serious lift for dev agents at scale.

🌀 Microsoft makes Copilot native to Windows 11 as “AI PC”
→ Deep OS-level integration — context-aware across apps, search, and settings.

🌀 Hugging Face debuts HuggingChat Omni
→ Routes queries across 115 LLMs in real time — an open meta-model router for hybrid stacks.

🌀 Cognition releases SWE-grep for ultra-fast codebase search
→ Semantic grep engine that powers reasoning over gigabyte-scale repos.

🌀 Microsoft’s MAI-Image-1 cracks LM Arena Top 10
→ Photorealistic diffusion model trained on synthetic data — enterprise-safe by design.

🎓 Learning & Upskilling
Sharpen your edge: top free courses this week

📘 Hugging Face launches new Robotics Course
→ Free, interactive, and certified – this course takes you from classical robotics to modern learning-based systems. Perfect for anyone ready to bridge AI and robotics.

📘 Karpathy drops Nanochat — build a ChatGPT-style model for ~$100
→ Minimal, open-source blueprint proving small-model training is now anyone’s game.

🌱 Mind Fuel
Strategic reads, enterprise POVs and research

🔹 Andrej Karpathy gives a reality check on AI agents
→ On the Dwarkesh Podcast, the former OpenAI and Tesla researcher called current agents “slop,” RL “noise,” and the hype “oversold” — warning that true autonomy is still a decade away.

🔹 Anthropic co-founder: AI is a ‘real and mysterious creature’
→ Jack Clark says AI is evolving beyond predictable tools, showing hints of self-awareness — and warns even optimists should be “deeply afraid” as models start designing their own successors.

🔹 IBM C-suite survey: Agentic AI reshaping operations
→ In a survey of 800+ execs across 20 countries, 67% expect autonomous agents by 2027. The biggest barrier isn’t tech but mindset and readiness — the next edge lies in operating-model innovation.

🔹 MIT unveils Recursive Language Models (RLMs)
→ A new inference strategy that lets a language model decompose tasks and recursively interact with input contexts of effectively unbounded length through REPL-like environments (a toy sketch of the idea follows this list).

🔹 Stanford introduces Agentic Context Engineering (ACE)
→ Framework for self-improving agents that rewrite context mid-task.
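
To make the recursion idea a bit more tangible, here is a toy sketch in Python. To be clear: this is my loose reading of the concept, not MIT's implementation. The real RLM lets the model itself write REPL code to decide how to slice the context and when to recurse; the sketch below hard-codes a simple split-and-merge, and `sub_model` is a stub standing in for an actual LM call.

```python
# Toy sketch of the recursive-decomposition idea behind RLMs (my reading, not the
# MIT implementation). A root call never sees the full context at once: it splits
# the context, queries a stubbed sub-model on each half, recurses until the pieces
# fit a "window", then merges the partial answers. In the real RLM, the model
# itself writes REPL code to decide how to slice and when to recurse.

def sub_model(question: str, chunk: str) -> str:
    """Stand-in for a real LM call on a chunk that fits in the model's window."""
    hits = [line for line in chunk.splitlines() if "invoice" in line.lower()]
    return "; ".join(hits)

def recursive_answer(question: str, lines: list[str], window: int = 50) -> str:
    """Recursively split the context until each piece is small enough to query."""
    if len(lines) <= window:
        return sub_model(question, "\n".join(lines))
    mid = len(lines) // 2
    left = recursive_answer(question, lines[:mid], window)
    right = recursive_answer(question, lines[mid:], window)
    # Merge step: in the paper this aggregation is itself a model call.
    return "; ".join(part for part in (left, right) if part)

if __name__ == "__main__":
    # A "context" far larger than any single window.
    huge_context = [
        f"Record {i}: invoice {i} paid." if i % 500 == 0 else f"Record {i}: routine log entry."
        for i in range(5000)
    ]
    print(recursive_answer("Which invoices were paid?", huge_context))
```

The shape of the computation is the takeaway: cost grows with the context, but no single call ever has to hold all of it.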

♾️ Thought Loop
What I've been thinking, building, circling this week

I spent last week in Las Vegas for Oracle AI World 2025 — a conference that’s evolved almost poetically over time: 

OpenWorld → CloudWorld → AI World

Each name marks a shift in Oracle’s identity — from database company, to cloud player, to full-stack AI operator.

This year’s edition offered one of the clearest looks yet at Oracle’s long-term AI posture. If you want the source, start with Larry Ellison’s keynote (worth the watch).

1) The full-stack advantage (practical, not flashy)

What stood out most: Oracle’s transformation feels complete. For a company once defined by databases, the end-to-end integration now matters more than any single model announcement.

  • Infrastructure Layer:
    Oracle announced its OCI Zettascale10 supercluster, connecting hundreds of thousands of NVIDIA GPUs (targeting up to ~800,000), built on multi-gigawatt data-center campuses and the new Oracle Acceleron RoCE network fabric. The scale is not incremental — it was framed as “the largest AI supercomputer in the cloud.” On top of that, Oracle is broadening its infrastructure vendor base by deepening its partnership with AMD (deploying ~50,000 MI450 GPUs alongside earlier MI355X/MI300X units) to ensure capacity, flexibility, and ecosystem control.

  • Data Layer:
    Oracle’s new Oracle AI Database 26ai focuses on embedding AI capabilities into the core of its data platform: vector search, RAG-based reasoning, agentic workflows, support for open formats like Apache Iceberg, multi-cloud deployment (OCI + Azure + AWS + Google Cloud), and quantum-resistant encryption. Importantly, the goal is to let enterprises use AI where their data lives, rather than copying large data sets into separate silos.

  • Business Layer:
    Oracle announced embedded AI agents across Fusion Cloud Applications (finance, HR, supply chain, sales, marketing, service), built via its AI Agent Studio and integrated directly into workflows, with no bolt-ons required. The scale: hundreds of prebuilt agents, plus a marketplace that lets customers and partners build and deploy their own. Examples include the Payables Agent, Ledger Agent, and Talent Advisor Agent.

Few vendors can truly claim the full chain of raw compute → data layer → business apps → workflow execution inside a coherent, vertically integrated ecosystem with enterprise-grade governance and scale. Another thing that stands out: the strength of Oracle’s partner ecosystem. They don’t bet on a single provider — they work with anyone who helps create value. That’s Oracle’s edge right now.

2) Strategy = contextual intelligence

At its core, Oracle’s philosophy is simple: “Your competitive advantage lives in your private data — not on the public internet.”

The challenge is letting AI reason on that data without exposing it.
Oracle’s answer is built into its stack: RAG + vectorization, tightly bound to governance and control.

  • Data stays within your environment.

  • Models query through vector indexes.

  • You receive contextual answers that are auditable, compliant, and enterprise-grade.

In short, it’s about usable intelligence that respects sovereignty. It’s not about what a model can do, but what your model can do with your data, on your terms.
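
To make that pattern concrete, here is a minimal sketch in Python. It is not Oracle’s API, just the generic retrieve-then-generate loop the bullets above describe, kept self-contained: a toy hashing embedder and an in-memory index stand in for the database-side vector index, and the final prompt is printed instead of being sent to a governed model endpoint.

```python
# Minimal sketch of "RAG over an in-place vector index": not Oracle's API, just
# the generic pattern. Documents are embedded and indexed where they live, the
# question is embedded the same way, the top matches are retrieved, and the model
# is grounded strictly in those snippets so answers stay auditable.

import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash tokens into a fixed-size vector (stand-in for a real model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity (vectors are already normalized)."""
    return sum(x * y for x, y in zip(a, b))

# "Index" the private documents in place; nothing is copied out of this store.
documents = [
    "Q3 invoice backlog cleared after the payables process change.",
    "Supplier X contract renews in January with revised payment terms.",
    "Travel expense policy updated: approvals above 5k EUR need a second sign-off.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    """Ground the model strictly in retrieved context, so every answer is traceable."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the context below and cite the line you used.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # In production this prompt would go to a governed LLM endpoint, not stdout.
    print(build_prompt("What changed in the expense approval policy?"))
```

The shape is what matters: the model never sees the full corpus, only the retrieved snippets, and every answer can be traced back to the rows that produced it. Oracle’s pitch is that this loop, plus the governance around it, lives inside the database rather than in a separate pipeline.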

And that makes a lot of sense — because there’s arguably no company whose core products hold more enterprise data than Oracle’s. Unlocking that data safely and intelligently could unleash enormous power for organizations that already run their critical systems on Oracle infrastructure.

Most enterprise data isn’t public — and most models aren’t trained on it. The major unlock is making that private corpus usable — safely — and automating decisions across it. That’s the whole philosophy: context over scale.

It’s not the flashiest strategy, but it’s strategically complete.

🔧 Tool Spotlight
A tool I'm testing and watching closely this week

While everyone’s chasing new drag-and-drop AI builders, Alteryx has quietly mastered visual, governed data workflows for years — at real enterprise scale. It’s a reminder that not everything new is better. Sometimes, innovation is rediscovery.

Why it stands out:
→ Visual Data Workflows — pull, clean, and transform data from Excel, SQL, Salesforce, or APIs — fully traceable, no scripts.
→ Smart AI Suggestions — auto-detects joins, filters, and next steps.
→ Reusable Automation Flows — build once, schedule forever.
→ Advanced Analytics (Code Optional) — regression, clustering, and predictions without code, or drop down to Python/R.
→ Governed Collaboration — versioning, access control, audit trails — built for scale.

How it works:
It runs on both desktop and cloud, and there’s now a free 30-day trial. It’s a solid option if you want a well-established platform with a long enterprise track record, and a great place to start if you’re aiming to level up your data skills or explore a new niche.

→ Try it here: Alteryx

That’s it for today. Thanks for reading.

Enjoy this newsletter? Please forward to a friend.

See you next week and have an epic week ahead,

— Andreas

P.S. I read every reply — if there’s something you want me to cover or share your thoughts on, just let me know!
