Small Tools, Big Ideas

It’s a strange time to love simple things. Everywhere I look, the future seems to be rushing toward bigger models, smarter systems, and more complex layers of automation. The story of modern technology is often told as a relentless climb toward more: more intelligence, more capability, more speed. And yet, in the quiet corners of my own work, I keep finding myself drawn back to something much older and simpler. A clean note in a vault. A script with a single, clear purpose. A search box that just works. These small tools, which once felt ordinary, now feel almost radical in their elegance. In a world where everything is getting smarter, I’m finding unexpected joy in the tools that stay beautifully dumb.

Lately, I’ve been thinking a lot about Simon Willison’s llm tool — a little Python utility that gives you a command-line interface for large language models. It doesn’t hide the complexity behind a thousand settings or a shiny UI. It just gives you a simple, direct line to the model, letting you wire it into your workflows however you want. His files-to-prompt tool is another one I admire: an almost absurdly minimal way to push files into a prompt template for LLMs. Both tools feel like reminders that power doesn’t have to mean complexity. Sometimes the most transformative tools are the ones that stay small, sharp, and focused.
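The idea behind files-to-prompt is small enough to sketch in a few lines of shell. This is not Willison's implementation, just an illustration of the pattern it embodies: walk a directory and emit each file's path followed by its contents, ready to pipe into a prompt. The function name and suffix argument here are my own.

```shell
# Minimal sketch of the files-to-prompt idea (not the real tool):
# each matching file becomes its path, a separator, then its contents.
files_to_prompt() {
  find "$1" -name "*$2" -type f | sort | while read -r f; do
    printf '%s\n---\n' "$f"   # the file's path, then a separator
    cat "$f"                  # the file's contents
    printf '%s\n\n' '---'     # close the separator
  done
}
```

The real tool handles the edge cases this skips, but the shape is the same: a plain string of files you can hand to a model in one shot.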

This same idea keeps showing up for me in other places too. I’ve been spending more time with tmux lately — not a simple tool in the sense of being easy, but a simple one in its spirit. It doesn’t try to be clever or guess what I want. It gives me a set of building blocks: sessions, windows, and panes — and lets me compose my environment exactly how I like it. Once you internalize its grammar, you realize that you’re no longer fighting your tools. You’re building with them.
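That grammar is concrete enough to show. The sketch below scripts a small two-pane session from scratch; the session name, layout, and editor choice are arbitrary, just one way of composing an environment out of tmux's primitives (it assumes the default window base-index of 0).

```shell
# Compose a working environment from tmux's building blocks (a sketch;
# the session name and layout are personal preference).
tmux new-session -d -s work            # start a detached session named "work"
tmux split-window -h -t work           # split its window into two side-by-side panes
tmux send-keys -t work:0.0 'vim' C-m   # left pane: launch an editor
tmux attach -t work                    # step into the composed environment
```

Because each command is a small, scriptable verb, a layout like this can live in a shell script and be rebuilt identically every morning.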

In my podcast RAG project, I’ve seen this play out with Whisper too. Whisper isn’t flashy. It’s a humble little engine that quietly turns audio into text, and the latest version is astonishingly good at it. I didn’t need to fine-tune it or coax it into working. I just pointed it at my podcast archive, and it got to work. And it kept working. There’s a kind of magic in that — a tool that doesn’t require worship or endless maintenance, just quiet trust.

The same feeling hit me again recently when I started using uv for Python package management. For years, Python developers have wrestled with slow installs, dependency conflicts, and the occasional cryptic error that turns a five-minute task into a two-hour rabbit hole. uv doesn’t try to paper over those problems with another layer of complexity — it just fixes them. Installs are blindingly fast. Dependency resolution is smart and sane. Virtual environments are first-class citizens, not an afterthought. Using it feels like someone finally rebuilt the foundation without adding a skyscraper on top. It’s one of those tools that makes you wonder how you ever put up with the old way.
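The day-to-day surface is just as small as the idea. Assuming uv is installed, a typical session looks something like this; the package and file names are placeholders, not a prescription:

```shell
# Typical uv workflow (a sketch; package and file names are examples).
uv venv                                               # create .venv for the project
uv pip install requests                               # resolve and install into it
uv pip compile requirements.in -o requirements.txt    # pin dependencies to a lockfile
```

The commands mirror the pip interface people already know, which is part of the point: the foundation got faster without the vocabulary changing.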

Then there’s Ollama, which has completely changed the way I think about local models. Before Ollama, running large language models yourself meant a tangle of Docker containers, custom scripts, GPU configurations, and crossed fingers. Now? You run ollama run, and you’re talking to a model. It’s almost unsettling how easy they’ve made it — not because they hid the power, but because they made a conscious choice to minimize the friction.
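The whole workflow really is about two commands. Assuming Ollama is installed, and using llama3 purely as an example model name:

```shell
# Running a local model with Ollama (llama3 is just an example model).
ollama pull llama3                                        # fetch the weights once
ollama run llama3 "Explain tmux panes in one sentence."   # talk to it
```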

And finally, I can’t talk about this new season of rediscovery without mentioning Ghostty. Since it was released in December, Ghostty has become my daily driver for terminals. It doesn’t try to reinvent what a terminal is; it just fixes all the little things that made older terminals frustrating, and it does it with style. Fast, beautiful, reliable. It feels like someone finally sat down and asked: what if we just made this delightful?

When I step back and look at all of these tools — the tiny Python scripts, the old-school multiplexers, the whisper-quiet transcription engines, the frictionless model runners, the sleek terminals, the rebuilt package managers — I realize they all share something in common. They don’t try to be everything. They aren’t built around a fantasy of replacing me. They’re built around the idea of empowering me.

Maybe that’s the real story happening quietly in the margins of our AI-first world.

It’s not just about building bigger models or smarter systems. It’s about rebuilding the foundations — making the tools that carry us forward simpler, faster, sturdier. Tools that invite us to stay close to the work, instead of drifting away from it.

The future won’t be built by magic.

It will be built by people who still care about the foundations.

How Throwaway AI Experiments Lead to Better Code

Over the past few months, I’ve accidentally discovered a new rhythm when coding with AI—and it has reshaped my approach significantly. It wasn’t something I planned or found in a manual. Instead, it emerged naturally through my experiments as I kept noticing consistent patterns whenever I used AI models to build new features. What started as casual exploration has evolved into a trusted process: vibe, vibe again, and then build. Each step plays a distinct role, and together they’ve transformed how I move from a rough idea to functional software.

I first noticed this pattern while developing new features for Gemini Scribe, my Obsidian plugin. I was exploring ways to visualize the file context tree for an upcoming update. Out of curiosity, I gave Cursor an open brief—virtually no guidance from me at all. I simply wanted to see how the model would respond when left entirely to its own devices. I wasn’t disappointed. The model produced a surprisingly creative user interface and intriguing visualization approaches. The first visualization was a modal dialog showing all files in the tree with a simple hierarchy. It wasn’t ready to ship, but I vividly remember feeling genuine excitement at the unexpected creativity the model demonstrated. The wiring was messy, and there were integration gaps, but it sparked ideas I wouldn’t have reached on my own.

Encouraged by this, I initiated a second round—this time with more structure. I took insights from the initial attempt and guided the model with clearer prompts and a deliberate breakdown of the problem. Again, the model delivered: this time, a new panel on the right-hand side that displayed the hierarchy and allowed users to click directly to any note included in the file context. This feature was genuinely intriguing, closely aligning with the functional design I envisioned. Between these two experiments, I gathered valuable insights on shaping the feature, making it more useful, and improving my future interactions with the model.

These experiences have crystallized into the three phases of my workflow:

  • Max vibe: Completely open-ended exploration to find creative possibilities.
  • Refined vibe: Targeted experimentation guided by learnings from the first round.
  • Build: Structured, focused development leveraging accumulated insights.

The first step—“max vibe and throw away”—is about unleashing the model’s creativity with maximum freedom. No constraints, no polish, just pure experimentation. It’s a discovery phase, surfacing both clever ideas and beautiful disasters. I spend roughly an hour here, take notes, then discard the output entirely. This early output is for exploration, not production.

Next comes “vibe with more detail and throw away again.” Equipped with insights from the initial exploration, I return to the model with a detailed plan, breaking the project into smaller, clearer steps. It’s still exploratory but more refined. This output remains disposable, maintaining fluidity in my thinking and preventing premature attachment to early drafts.

Only after these two rounds do I transition into production mode. At this point, experimentation gives way to deliberate building. Using my notes, I craft precise prompts and break the project into clear, manageable tasks. By now, the route forward is clear and defined. The resulting code, refined and polished, makes it to production, enriched by earlier explorations.

Interestingly, the realization of this workflow was born out of initial frustration. The first time I tried to prompt my way directly to a specific feature set, my codebase became so tangled and problematic that I gave up on fixing it and threw it away entirely. That sense of frustration was pivotal—it highlighted how valuable it is to treat the first two tries as disposable experiments rather than final products.

Stepping back, this workflow feels more like a structured series of experiments than a partnership or conversation. While I appreciate the creative input from AI models, I don’t yet see this approach as the true AI coding partner I envision. Instead, these models currently serve as tools that help me explore possibilities, challenge assumptions, and provide fresh perspectives. I discard many branches along the way, but the journey itself remains immensely valuable.

AI isn’t just writing code—it’s changing how I approach problems and explore solutions. The journey has become as valuable as the destination.

If you’ve been experimenting with AI in your projects, I’d love to hear about your rhythm and discoveries. Have you found your own version of “vibe and build”? Drop me a note—I’d love to learn how others navigate this fascinating new landscape.