Lava Layers

This week it's Kris and Matt diving into the state of hardware, security, and what local AI actually needs to work. The conversation starts with AI agent social networks and why prompt injection is the unsolved SQL injection of our era, then shifts into why memory bandwidth is the real bottleneck for running models locally. Matt compiles Rust on a Mac Studio at the Apple Store, and the two debate whether the traditional PC build is even worth it anymore.

As always, we've got supporter content! This week that includes the security primitives nobody uses, Kris's local AI research pipeline, the myth that you'll actually upgrade your components, Matt's DaVinci Resolve nightmare on Arch Linux, and why the Mac Pro doesn't know what it is anymore. Not a supporter yet? Fix that by heading over to https://fallthrough.fm/subscribe, where you'll get not only extra content but also higher quality audio. Sign up today!

If you prefer to watch this episode, you can view it on YouTube.

This week's episode of Break continues the conversation. Kris and Matt dig into why the chat interface is just the piano keyboard moment for AI, the pair programming gap where agents can't notice your manual edits, and the Codex personality controversy. They close with a teaser for next week's Go generic methods discussion. Watch it on YouTube or listen with your favorite podcasting app! Learn more by going to https://break.show/28.

Thanks for tuning in and happy listening!


Table of Contents:
  • Prologue (00:00:00)
  • Chapter 1: Welcome and Catching Up (00:00:45)
  • Chapter 2: OpenClaw and AI Social Networks (00:12:18)
  • Chapter 3: Prompt Injection Is the New SQL Injection (00:17:01)
  • Chapter 4: Sandboxing and Defense in Depth (00:19:56)
  • Chapter 6: Lava Layers of Abstraction (00:21:53)
  • Chapter 8: Memory Bandwidth Is the Real Bottleneck (00:24:32)
  • Chapter 9: Consumer Hardware Is at an Inflection Point (00:27:34)
  • Chapter 10: The RAM Shortage and Supply Chain Crisis (00:32:03)
  • Chapter 12: Nobody Actually Upgrades (00:34:36)
  • Chapter 13: Compiling Rust at the Apple Store (00:36:28)
  • Chapter 14: Do You Still Need a Big Desktop? [Extended] (00:41:24)
  • Chapter 16: The Future of Local AI (00:41:25)
  • Chapter 18: Two Terabytes of RAM and What We'd Do With It (00:50:17)
  • Chapter 19: Reimagining the PC for Massive Parallelism (00:52:56)
  • Epilogue (00:55:08)


Creators and Guests

Matthew Sanabria
Host
Matthew is an engineering leader focused on building reliable, scalable, and observable systems. Matthew is known for using his breadth and depth of experience to add value in minimal-context situations and for helping great people become great engineers through mentoring. Matthew serves the Go community as a member of GoBridge. In his spare time, Matthew spends time with his family, helps grow his wife's chocolate business, works on home improvement projects, and reads technical resources to learn and tinker.