A year ago, I hit “publish” on the first post for Letters from Silicon Valley, not knowing where it would lead. I just knew I wanted to write about the messy, hands-on process of building with AI. Twelve months later, the blog has become a record of experiments, discoveries, and more than a few wrong turns that taught me something valuable.
When I started, I had a specific itch to scratch. I wanted to see what would happen if I took a large, personally meaningful dataset—in my case, nearly two decades of podcasts—and built an AI system around it. My hunch was that somewhere in those thousands of hours of stories and interviews was a hidden web of ideas I could surface with the right tools. I wasn’t chasing an abstract “AI project”; I was building something for myself, something that could finally answer the question: “Where did I hear that thing about…?”
But there was a deeper reason, too. I had just come off a role managing very large teams and was intentional about what I wanted next. I wanted more time with my hands on a keyboard, building things that were helpful to me and others. I wanted to help developers understand how to use AI in their own work and explain core concepts—like I did in my series on embeddings—in a way that was accessible and grounded in real use. Most importantly, I wanted to reconnect with the simple satisfaction of writing code that solves my own problems.
I found the perfect environment for this on the AI Developer team, working alongside the people behind the Gemini API, AI Studio, Colab, and Kaggle. The team’s maturity freed me from constant hands-on management, giving me the space to build and write. At first, my schedule felt too unpredictable for mission-critical work, so building open-source projects and writing about them was the ideal way to contribute. It also let me lead by example, showing the kind of developer engagement I hoped to inspire.
As the year went on, I grew more comfortable with my schedule and began taking on core projects. The biggest of these is the Gemini CLI, which grew directly from the philosophy of this blog: solve real developer needs through hands-on experimentation.
Along the way, I built and released two other open-source projects:
- Podcast Rag: Tools for downloading, transcribing, embedding, and searching podcast archives, which grew directly from my initial project.
- Gemini Scribe: An Obsidian plugin that turns your notes into an interactive workspace with AI-powered recall, summarization, and writing support.
These projects became both proofs of concept for ideas I’ve shared here and real tools that others can adopt, modify, and build on.
Looking back, the throughline is clear: building with AI isn’t about replacing what I know—it’s about reframing what’s possible. The best work this year came from combining my own experience with the model’s speed and breadth, letting each push the other in new directions. That’s the spirit I plan to carry into year two.
📅 Year One Recap
September 2024: The Podcast Project
My first posts documented the journey of turning two decades of listening into a searchable, AI-powered personal knowledge base.
- Building an AI System Grounded in My Podcast History: The “why” behind the project—turning a personal passion into a practical AI experiment.
- Building My Homelab: From Gemma on a laptop to a rack-mounted AI powerhouse.
- The Great Podcast Download: How I used Podgrab to create a local, 180GB archive of my favorite shows.
- My 9,000 File Problem: Using Gemini and Linux command-line tools to wrangle tens of thousands of audio files.
- Cracking the Code: Exploring Transcription Methods: Moving from Gemini’s transcription to a more consistent Whisper-based system.
- Building a Podcast Transcription Script with AI Assistance: An experiment in building an entire workflow just by describing it to a model.
October 2024: The Embeddings Series
A deep dive into embeddings, one of the most important and most misunderstood concepts in modern AI, showing how I put them to work in the podcast project.
- The Magic of Embeddings: Transforming Data for AI: An accessible introduction to what embeddings are and why they matter.
- Unlocking AI Potential: Vector Databases and Embeddings: How embeddings and vector databases power semantic search.
- Unlocking Podcast Search with Embeddings: Practical Examples: A hands-on look at making my podcast archive searchable by idea, not just keyword.
- Turning Podcasts into Your Personal Knowledge Base with AI: The final step—querying the podcast archive like a personal library.
November 2024
- The AI-First Developer: A New Breed of Software Engineer: My take on the emerging AI-first mindset for developers.
- Introducing Gemini Scribe: Your AI Writing Assistant for Obsidian: Announcing my open-source plugin for AI-powered note-taking.
December 2024
- Prompts are Code: Treating AI Instructions Like Software: Why prompts should be managed with the same care as the rest of your codebase.
March 2025
- AI Is Changing Collaboration, and We’ll All Need to Adapt: How to bridge the gap between human intuition and machine precision on a team.
- Gemini Scribe Update: Supercharged Chat History: A big leap forward for Gemini Scribe with faster, more capable context handling.
April 2025
- Waiting for the True AI Coding Partner: Experimenting with “vibe coding”—letting the model take the first pass.
- How Throwaway AI Experiments Lead to Better Code: Why the quickest way to the right solution is often to throw away the first few attempts.
- Docker Did Nothing Wrong (But I’m Trying Podman Anyway): Exploring Podman’s daemonless architecture and Kubernetes-friendly features.
May 2025
- Small Tools, Big Ideas: A look at the small, quiet tools that can transform a workflow.
June 2025
- Unlocking the Future of Coding: Introducing the Gemini CLI: The official launch of my command-line interface for Gemini.
July 2025
- Gemini Scribe Supercharged: A Faster, More Powerful Workflow Awaits: The evolution of Gemini Scribe from a simple chat interface into a sophisticated writing partner.
August 2025
- Enhance Your Writing with Gemini Scribe’s New Rewrite Feature: A feature update that swaps Gemini Scribe’s blunt replace function for a more targeted rewrite tool.
What’s Next
Year two will bring more of the same, just with more intention. I started year one with an ambitious goal of publishing weekly, but quickly learned that pace wasn’t sustainable. This year, I’m aiming for at least two thoughtful posts a month—a balance that allows for a regular cadence and the breathing room to go deep.
The explainer series on embeddings was some of the most-read content I published, and I plan to do more of that. There are plenty of concepts in modern AI that could use the same treatment, and it’s a great way for me to learn alongside my readers.
At the same time, not every post needs to be a 10,000-word essay. I plan to mix in more short pieces—quick takes, reactions, and smaller ideas worth sharing in the moment. I enjoy the variety on Simon Willison’s Weblog, where short links and deep dives sit comfortably side-by-side, and I’d like to bring more of that spirit here.
When I launched this blog, I imagined it covering more than just AI. My “About Me” page mentions guitar building and woodworking, but I’ve hesitated to bring those topics in, worried they might feel “off-topic” for those who subscribed for AI content. This year, I’m going to embrace it. Those interests are part of the picture, and they may connect to the technical work in ways I don’t yet expect.
Finally, I’ve been inspired by the personal, thoughtful writing of former colleagues like Brian Brown (Changing Coordinates) and Chris DiBona (Substack). I hope to try a few posts in that spirit as well, sharing ideas that don’t fit into a narrow category but feel important to explore.