GitHub issues transforming into glowing skill cards floating above a laptop screen.

Bundled Skills in Gemini Scribe

The feature that became Bundled Skills started with a GitHub issues page.

I wrote and maintain Gemini Scribe, an Obsidian plugin that puts a Gemini-powered agent inside your vault. Thousands of people use it, and they have questions. People would open discussions and issues asking how to configure completions, how to set up projects, what settings were available. I was answering the same questions over and over, and it hit me: the agent itself should be able to answer these. It has access to the vault. It can read files. Why am I the bottleneck for questions about my own plugin?

So I built a skill. I took the same documentation source that powers the plugin’s website, packaged it up as a set of instructions the agent could load on demand, and suddenly users could just ask the agent directly. “How do I set up completions?” “What settings are available?” The agent would pull in the right slice of documentation and give a grounded answer. The docs on the web and the docs the agent reads are built from the same source. There is no separate knowledge base to keep in sync.

That first skill opened a door. I was already using custom skills in my own vault to improve how the agent worked with Bases and frontmatter properties. Once I had the bundled skills mechanism in place, I started looking at those personal skills differently. The ones I had built for myself around Obsidian-specific tasks were not just useful to me. They would be useful to anyone running Gemini Scribe. So I started migrating them from my vault into the plugin as built-in skills.

With the latest version of Gemini Scribe, the plugin now ships with four built-in skills. In a future post I will walk through how to create your own custom skills, but first I want to explain what ships out of the box and why this approach works.

Four Skills Out of the Box

That first skill became gemini-scribe-help, and it is still the one I am most proud of conceptually. The plugin’s own documentation lives inside the same skill system as everything else. No special case, no separate knowledge base. The agent answers questions about itself using the same mechanism it uses for any other task.

The second skill I built was obsidian-bases. I wanted the agent to be good at creating Bases (Obsidian’s take on structured data views), but it kept getting the configuration wrong. Filters, formulas, views, grouping: there is a lot of surface area and the syntax is particular. So I wrote a skill that guides the agent through creating and configuring Bases from scratch, including common patterns like task trackers and project dashboards. Instead of me correcting the agent’s output every time, I describe what I want and the agent builds it right the first time.

Next came audio-transcription. This one has a fun backstory. Audio transcription was one of the oldest outstanding bugs in the repo. People wanted to use it with Obsidian’s native audio recording, but the results were poor. In this release, fixes around binary file uploads meant the model could finally receive audio files properly. Once that was working, I realized I did not need to write any more code to get good transcriptions. I just needed to give the agent good instructions. The skill guides it through producing structured notes with timestamps, speaker labels, and summaries. It turns a messy audio file into a clean, searchable note, and the fix was not code but context.

The fourth is obsidian-properties. Working with note properties (the YAML frontmatter at the top of every Obsidian note) sounds trivial until you are doing it across hundreds of notes. The agent would make inconsistent choices about property types, forget to use existing property names, or create duplicates. This skill makes it reliable at creating, editing, and querying properties consistently, which matters enormously if you are using Obsidian as a serious knowledge management system.

The pattern behind all four is the same. I watched the agent struggle with something specific to Obsidian, and instead of accepting that as a limitation of the model, I wrote a skill to fix it.

Why Not Just Use the System Prompt

You might be wondering why I did not just shove all of this into the system prompt. I wrote about this problem in detail in Managing the Agent’s Attention, but the short version is that system prompts are a “just-in-case” strategy. You load up the agent with everything it might need at the start of the conversation, and as you add more instructions, they start competing with each other for the model’s attention. Researchers call this the “Lost in the Middle” problem: models pay disproportionate attention to the beginning and end of their context, and everything in between gets diluted. If I packed all four skills’ worth of instructions into the system prompt, each one would make the others less effective. Every new skill I add would degrade the ones already there.

Skills avoid this entirely. The agent always knows which skills are available (it gets a short name and description for each one), but only loads the full instructions when it actually needs them. When a skill activates, its instructions land in the most recent part of the conversation, right before the model starts reasoning. Only one skill’s instructions are competing for attention at a time, and they are sitting in the highest-attention position in the context window.
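A minimal sketch of that progressive-disclosure pattern might look like the following. This is illustrative only, not the plugin’s actual code: the `SkillRegistry` class and its method names are my assumptions. The key idea is that the system prompt only ever sees one short line per skill, and the full instructions are loaded lazily on activation.

```typescript
// Illustrative sketch of progressive disclosure for skills.
// Only a name and description are visible up front; the full
// instructions load when the agent activates the skill.

interface Skill {
  name: string;
  description: string;
  loadInstructions: () => string; // read the full skill file on demand
}

class SkillRegistry {
  private skills = new Map<string, Skill>();

  register(skill: Skill): void {
    this.skills.set(skill.name, skill);
  }

  // What the system prompt sees: one short line per skill.
  manifest(): string {
    return [...this.skills.values()]
      .map((s) => `- ${s.name}: ${s.description}`)
      .join("\n");
  }

  // What lands at the end of the conversation on activation.
  activate(name: string): string {
    const skill = this.skills.get(name);
    if (!skill) throw new Error(`Unknown skill: ${name}`);
    return skill.loadInstructions();
  }
}

const registry = new SkillRegistry();
registry.register({
  name: "obsidian-bases",
  description: "Create and configure Obsidian Bases",
  loadInstructions: () => "Full multi-page Bases instructions...",
});

console.log(registry.manifest());
// - obsidian-bases: Create and configure Obsidian Bases
```

Because `activate` is an explicit step, the instructions arrive in the most recent, highest-attention part of the context rather than sitting diluted in the system prompt.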

There is a second benefit that surprised me. Because skills activate through the activate_skill tool call, you can watch the agent load them. In the agent session, you see exactly when a skill is activated and which one it chose. This gives you something that system prompts never do: observability. If the agent is not following your instructions, you can check whether it actually activated the skill. If it activated the skill but still got something wrong, you know the problem is in the skill’s instructions, not in the agent’s attention. That feedback loop is what lets you iterate and improve your skills over time. You are no longer guessing whether the agent read your instructions. You can see it happen.

Skills follow the open agentskills.io specification, and this matters more than it might seem. We have seen significant standardization around this spec across the industry in 2026. That means skills are portable. If you have been using skills with another agent, you can bring them into Gemini Scribe and they will work. If you build skills in Gemini Scribe, you can take them with you. They are not a proprietary format tied to one tool. They are Markdown files with a bit of YAML frontmatter, designed to be human-readable, version-controllable, and portable across any agent that supports the spec.
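To make the “Markdown plus a bit of YAML frontmatter” point concrete, here is roughly what a skill file could look like. The structure follows the description above; the specific field names and instruction text are illustrative assumptions, so check the agentskills.io spec for the exact schema.

```markdown
---
name: meeting-notes
description: Format raw meeting notes into a structured summary
---

When the user asks you to process meeting notes:

1. Extract attendees, decisions, and action items.
2. Use a heading for each section.
3. List action items as tasks, with the owner's name in bold.
```

The `description` line is what the agent sees at all times; everything below the frontmatter only loads when the skill activates.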

What Comes Next

The four built-in skills are just the beginning. When I decide what to build next, I think about skills in four categories. First, there are skills that give the agent domain knowledge about Obsidian itself, things like Bases and properties where the model’s general training is not specific enough. Second, there are skills that help the agent use Gemini Scribe’s own tools effectively. The plugin has capabilities like deep research, image generation, semantic search, and session recall, and each of those benefits from a skill that teaches the agent when and how to use them well. Third, there are skills that bring entirely new capabilities to the agent, like audio transcription. And fourth, there is user support: the help skill that started this whole process, making sure people can get answers without leaving their vault.

The next version of Gemini Scribe will add built-in skills for semantic search, deep research, image generation, and session recall. The skills system is also designed to be extended by users. In a future post I will walk through creating your own custom skills, both by hand and by asking the agent to build them for you.

For now, the takeaway is simple. A general-purpose model knows a lot, but it does not know your tools. When I watched the agent struggle with Obsidian Bases or produce flat transcripts or make a mess of note properties, I could have accepted those as limitations. Instead, I wrote skills to close the gap. The model’s knowledge is broad. Skills make it deep.

A bird's-eye view of a winding river of glowing green GitHub contribution tiles flowing across a dark landscape, with bright yellow-green flames rising from clusters of the brightest tiles, while a lone figure sits at a laptop at the edge of the mosaic under a distant skyline of code-filled windows.

4255 Contributions – A Year of Building in the Open

I was staring at my GitHub profile the other day when a number caught my eye. 4,255. That’s how many contributions GitHub has recorded for me over the past year. I sat with it for a moment, doing the quick mental math: that’s close to twelve contributions every single day, weekends included. The shape of the year looked just as striking. I showed up on 332 of the 366 days in the window, 91% of them, and at one point put together a 113-day streak without a gap. It felt like a lot. It felt like proof of something I hadn’t been able to articulate until I saw it rendered as a green heatmap on a screen.

About a year ago, I wrote about my decision to move back to individual contributor work after years in leadership roles. I talked about missing the flow state, the direct feedback loop of writing code and watching it work. What I didn’t know at the time was just how dramatically that shift would show up in the data. 4,255 contributions is the quantitative answer to the question I was trying to answer qualitatively in that post: what happens when you give a builder back the time to build?

The Shape of a Year

Numbers by themselves are just numbers. What makes them interesting is the shape they take when you zoom in. My year wasn’t a single monolithic effort on one project. It was a constellation of interconnected work, each project feeding into the next, each one teaching me something that made the others better.

The largest body of work was on Gemini CLI, Google’s open-source AI agent for the terminal. This project alone accounts for a significant chunk of those contributions, spanning everything from core feature development to building the Policy Engine that governs how the agent interacts with your system. But the contributions weren’t just code. A huge portion of my time went into code reviews, issue triage, and community engagement. Working on a repository with over 100,000 stars means that every merged PR has real impact, and every review is a conversation with developers around the world.

Then there was Gemini Scribe, my Obsidian plugin that started as a weekend experiment and grew into a tool with 302 stars and a community of writers who depend on it. Over the past year, I shipped a major 3.0 release, built agent mode, and iterated constantly on the rewrite features that make it useful for daily writing. In fact, this very blog post was drafted in the tool I built, which is a strange and satisfying loop.

Alongside these larger efforts, I shipped a handful of small, sharp tools that I needed for my own workflows. The GitHub Activity Reporter is one I’ve written about before, a utility that uses AI to transform raw GitHub data into narrative summaries for performance reviews and personal reflection. More recently, I built the Workspace extension for Gemini CLI and a deep research extension that lets you conduct multi-step research from the terminal. Each of these tools was born from a specific itch, and each turned out to be useful to more people than I expected. The Workspace extension alone has gathered 510 stars.

The Rhythm of Building

One thing the contribution graph doesn’t capture is the rhythm behind the numbers. My weeks developed a cadence over the year that I didn’t plan but that emerged naturally. Mornings were for deep work on Gemini CLI, the kind of focused system design and implementation that benefits from a fresh mind. Afternoons were for reviews and community work, responding to issues, providing feedback on PRs, and engaging with the developers building on top of our tools. Evenings and weekends were where the personal projects lived: Gemini Scribe, the extensions, and whatever new idea was rattling around in my head.

This rhythm is something I couldn’t have had in my previous role. When your calendar is stacked with meetings from nine to five, the creative work gets squeezed into the margins. Now, the creative work is the whole page. That’s the real story behind 4,255 contributions. It’s not about productivity metrics or GitHub gamification. It’s about what happens when you align your time with the work that energizes you.

What Surprised Me

A few things caught me off guard when I looked back at the year.

First, the ratio of code to “everything else” wasn’t what I expected. I assumed the majority of my contributions would be commits. In reality, a massive portion was reviews, comments, and issue management. On Gemini CLI alone I logged 205 reviews over the year. This was especially true as my role on that project evolved from pure contributor to something closer to a technical steward. Reviewing a complex PR, asking the right questions, and helping someone refine their approach takes just as much skill as writing the code yourself. Sometimes more.

Second, the personal projects had more reach than I anticipated. When I wrote about building personal software, I was mostly thinking about tools I built for myself. But Gemini Scribe has real users who file real bugs and request real features. The Workspace extension took off because it solved a problem that a lot of Gemini CLI users were hitting. Building in the open means you discover an audience you didn’t know was there.

Third, and this is the one I keep coming back to, the year felt shorter than 4,255 contributions would suggest. Flow state compresses time. When you’re deep in a problem, hours feel like minutes. I remember entire weekends spent in the codebase that felt like an afternoon. That compression is, for me, the clearest signal that I made the right call in going back to IC work.

Fourth, and this is the one I never would have predicted until I charted it out: the weekend, not the weekday, turned out to be my most productive window by a wide margin. Saturdays averaged 14.7 contributions, Sundays 14.5, and Thursday, the day I’d have guessed was safest, came in last at 8.3. The busiest single day of the entire year was a Saturday, December 20, when I shipped 89 contributions into podcast-rag, rebuilding the web upload flow, adding episode management to the admin dashboard, and migrating email delivery over to Resend, all in one afternoon. I didn’t plan for the weekends to become the engine. They just did, because that’s where the personal projects live, and the personal projects are where the work is loudest, most direct, and most free of interruption. A day with no meetings on it, I’ve come to realize, is worth more than I ever gave it credit for.

Looking Forward

I don’t know what next year’s number will be, and I’m not particularly interested in making it bigger. The number is a side effect, not a goal. What I care about is continuing to work on problems that matter, in the open, with people who push me to think more clearly. The AI-first developer model I wrote about over a year ago is now just how I work every day. The agents I’m building are the collaborators I’m building with, and both keep getting better.

If you’re someone who’s been thinking about a similar shift, whether it’s moving back to IC work, contributing to open source, or just carving out more time for the work that lights you up, I’d encourage you to try it. You might be surprised by what a year of focused building can produce. I certainly was.

A focused workspace at a desk in a vast library, with nearby shelves illuminated and distant shelves visible but softened, a pair of sunglasses resting on the desk

Scoping AI Context with Projects in Gemini Scribe

My son has a friend who likes to say, “born to dilly dally, forced to lock in.” I’ve started to think that describes AI agents in a large Obsidian vault perfectly.

My vault is a massive, sprawling entity. It holds nearly two decades of thoughts, ranging from deep dives into LLM architecture to my kids’ school syllabi and the exact dimensions needed for an upcoming home remodeling project. When I first introduced Gemini Scribe, the agent’s ability to explore all of that was a feature. I could ask it to surface surprising connections across topics, and it would. But as I’ve leaned harder into Scribe as a daily partner, both at home and at work, the dilly dallying became a real problem. My work vault has thousands of files with highly overlapping topics. It’s not a surprise that the agent might jump from one topic to another, or get confused about what we’re working on at any given time. When I asked the agent to help me structure a paragraph about agentic workflows, I didn’t want it pulling in notes from my jazz guitar practice.

I could have created a new, isolated vault just for my blog writing. I tried that briefly, but I immediately found myself copying data back and forth. I was duplicating Readwise syncs, moving research papers, and fracturing my knowledge base. That wasn’t efficient, and it certainly wasn’t fun. The problem wasn’t that the agent could see too much. The problem was glare. I needed sunglasses, not blinders. I needed to force the agent to lock in.

So, I built Projects in Gemini Scribe.

A project defines scope without acting as a gatekeeper

Fundamentally, a project in Gemini Scribe is a way to focus the agent’s attention without locking it out of anything. It defines a primary area of work, but the rest of the vault is still there. Think of it like sitting at a desk in the engineering section of a library. Those are the shelves you browse by default, the ones within arm’s reach. But if you know the call number for a book in the history section, nobody stops you from walking over and grabbing it. You can even leave a stack of books from other sections on your desk ahead of time if you know you’ll need them. If you’ve followed along with the evolution of Scribe from plugin to platform, you’ll recognize this as a natural extension of the agent’s growing capabilities.

The core mechanism is remarkably simple. Any Markdown file in your vault can become a project by adding a specific tag to its YAML frontmatter.

```yaml
---
tags:
  - gemini-scribe/project
name: Letters From Silicon Valley
skills:
  - writing-coach
permissions:
  delete_file: deny
---
```

Once tagged, that file’s parent directory becomes the project root. From that point on, when an agent session is linked to the project, its discovery tools are automatically scoped to that directory and its subfolders. Under the hood, the plugin intercepts API calls to tools like list_files and find_files_by_content, transparently prepending the project root to the search paths. The practical difference is immediate. Before projects, I could be working on a blog post about agent memory systems and the agent would surface notes from a completely unrelated project that happened to use similar terminology. Now I can load up a project and work with the agent hand in hand, confident it won’t get distracted by similar ideas or overlapping vocabulary from other corners of the vault.
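That path-prefixing step is simple enough to sketch. This is not the plugin’s actual implementation, just a minimal illustration of the idea: derive the project root from the tagged file’s location, then prepend it to any discovery-tool search path that isn’t already inside the project.

```typescript
// Illustrative sketch of scoping discovery tools to a project root.
// Function names and shapes are assumptions, not the plugin's real API.

// The tagged file's parent directory becomes the project root.
function projectRoot(projectFilePath: string): string {
  const idx = projectFilePath.lastIndexOf("/");
  return idx === -1 ? "" : projectFilePath.slice(0, idx);
}

// Transparently prepend the root to a search path, unless the
// path already lives inside the project.
function scopePath(root: string, searchPath: string): string {
  if (root === "" || searchPath.startsWith(root + "/")) return searchPath;
  return `${root}/${searchPath}`;
}

const root = projectRoot("Blog/Letters/project.md"); // "Blog/Letters"
console.log(scopePath(root, "drafts"));              // "Blog/Letters/drafts"
```

A call like `list_files("drafts")` inside a linked session effectively becomes `list_files("Blog/Letters/drafts")`, which is why the agent stops rummaging through unrelated folders without any change to how you phrase requests.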

The project file serves as both configuration and context

The project file itself serves a dual purpose. It acts as both configuration and context. The frontmatter handles the configuration, allowing me to explicitly limit which skills the agent can use or override global permission settings. For example, denying file deletions for a critical writing project is a simple but effective safety net. But the real power is in customizing the agent’s behavior per project. For my creative writing, I actually don’t want the agent to write at all. I want it to read, critique, and discuss, but the words on the page need to be mine. Projects let me turn off the writing skill entirely for that context while leaving it fully enabled for my blog work. The same agent, shaped differently depending on what I’m working on.

Everything below the frontmatter is treated as context. Whatever I write in the body of the project note is injected directly into the agent’s system prompt, acting much like an additional, localized set of instructions. The global agent instructions are still respected, but the project instructions provide the specific context needed for that particular workspace. This is similar in spirit to how I’ve previously discussed treating prompts as code, where the instructions you give an agent deserve the same rigor and iteration as any other piece of software.

This is where the sunglasses metaphor really holds. The agent’s discovery tools, things like list_files and find_files_by_content, are scoped to the project folder. That’s the glare reduction. But the agent’s ability to read files is completely unrestricted. If I am working on a technical post and need to reference a specific architectural note stored in my main Notes folder, I have two options. I can ask the agent to go grab it, or I can add a wikilink or embed to the project file’s body and the agent will have it available from the start. One is like walking to the history section yourself. The other is like leaving that book on your desk before you sit down. Either way, the knowledge is accessible. The project just keeps the agent from rummaging through every shelf on its own. This builds directly on the concepts of agent attention I explored in Managing AI Agent Attention.

Session continuity keeps the agent focused across your vault

One of the more powerful aspects of this system is how it interacts with session memory. When I start a new chat, Gemini Scribe looks at the active file. If that file lives within a project folder, the session is automatically linked to that project. This is a direct benefit of the supercharged chat history work that landed earlier in the plugin’s life.

This linkage is stable for the lifetime of the session. I can navigate around my vault, opening files completely unrelated to the project, and the agent will remain focused on the project’s context and instructions. This means I don’t have to constantly remind the agent of the rules of the road. The project configuration persists across the entire conversation.
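The linking rule described above, pick the project whose folder contains the active file at session start, then stick with it, can be sketched in a few lines. Again, this is an assumed shape for illustration, not the plugin’s code.

```typescript
// Sketch of linking a new session to a project based on the active file.
// Assumes the set of project roots is already known.

function linkSession(activeFile: string, projectRoots: string[]): string | null {
  // Prefer the most specific (longest) project root containing the file,
  // so nested projects resolve to the innermost one.
  const matches = projectRoots
    .filter((root) => activeFile.startsWith(root + "/"))
    .sort((a, b) => b.length - a.length);
  return matches[0] ?? null;
}

// Linked once at session start; navigating elsewhere later doesn't re-link.
console.log(linkSession("Blog/Letters/draft.md", ["Blog/Letters", "Work"]));
// "Blog/Letters"
```

Resolving the link once and holding it for the session’s lifetime is what lets you wander the vault mid-conversation without the agent losing the project’s context.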

Furthermore, session recall allows the agent to look back at past conversations. When I ask about prior work or decisions related to a specific project, the agent can search its history, utilizing the project linkage to find the most relevant past interactions. This creates a persistent working environment that feels much more like a collaboration than a simple transaction.

Structuring projects effectively requires a few simple practices

To get the most out of projects, I’ve found a few practices to be particularly effective.

First, lean into the folder-based structure. Place the project file at the root of the folder containing the relevant work. Everything underneath it is automatically in scope. This feels natural if you already organize your vault by topic or project, which many Obsidian users do.

Second, start from the defaults and adjust as the project demands. Out of the box, a new project inherits the agent’s standard skills and permissions, which is a sensible baseline for most work. From there, you tune. If you find the agent reaching for tools that don’t make sense in a given context, narrow the allowed skills in the frontmatter. If a project needs extra safety, tighten the permissions. The creative writing example I mentioned earlier came about exactly this way. I started with the defaults, realized I wanted the agent as a reader and critic rather than a co-writer, and adjusted accordingly. This aligns with the broader principle I’ve written about when discussing building responsible agents: the right guardrails are the ones shaped by the actual work.

Finally, treat the project body as a living document. As the project evolves, update the instructions and external links to ensure the agent always has the most current and relevant context. It’s a simple mechanism, but it fundamentally changes how I interact with an AI embedded in a large knowledge base. It allows me to keep my single, massive vault intact, while giving the agent the precise focus it needs to be genuinely helpful.

A cracked-open obsidian geode on a weathered wooden desk reveals a glowing golden network of interconnected nodes and pathways inside. Tendrils of golden light extend outward from the geode across the desk toward open notebooks and a mechanical keyboard, with bookshelves softly blurred in the background.

Gemini Scribe: From Agent to Platform

Six months ago, I wrote about building Agent Mode for Gemini Scribe from a hotel room in Fiji. That post ended with a sense of possibility. The agent could read your notes, search the web, and edit files. It was, by the standards of the time, pretty remarkable. I remember watching it chain together a sequence of tool calls for the first time and thinking I’d built something meaningful.

I had no idea it was just the beginning.

In the six months since that post, Gemini Scribe has gone through fifteen releases, from version 3.3 to 4.6. There have been over 400 commits, a complete architectural rethinking, and a transformation from “a chat plugin with an agent mode” into something I can only describe as a platform. The agent didn’t just get better. It got a memory, a research department, a set of extensible skills, and the ability to talk to external tools through the Model Context Protocol. If the vacation version was a clever assistant, this version is closer to a collaborator who actually understands your vault.

I want to walk through how we got here, because the journey reveals something I think is important about building with AI right now: the hardest problems aren’t the ones you set out to solve. They’re the ones that reveal themselves only after you ship the first version and start living with it.

The Agent Grows Up

The first big milestone after the vacation was version 4.0, released in November 2025. This was the release where I made a decision that felt risky at the time: I removed the old note-based chat entirely. No more dual modes, no more confusion about which interface to use. Everything became agent-first. Every conversation had tool calling built in. Every session was persistent.

It sounds simple in hindsight, but killing a feature that works is one of the hardest decisions in software. The old chat mode was comfortable. People used it. But it was holding back the entire plugin, because every new feature had to work in two completely different paradigms. Ripping it out was liberating. Suddenly I could focus all my energy on making one experience truly great instead of maintaining two mediocre ones.

Alongside 4.0, I built the AGENTS.md system, a persistent memory file that gives the agent an overview of your entire vault. When you initialize it, the agent analyzes your folder structure, your naming conventions, your tags, and the relationships between your notes. It writes all of this down in a file that persists across sessions. The result is that the agent doesn’t start every conversation from scratch. It already knows how your vault is organized, where you keep your research, and what projects you’re working on. It’s the difference between hiring a new intern every morning and having a colleague who’s been on the team for months.

Seeing and Searching

Version 4.1 brought something I’d wanted since the beginning: real thinking model support. When Google released Gemini 2.5 Pro and later Gemini 3 with extended thinking capabilities, I added a progress indicator that shows you the model’s reasoning in real time. You can watch it think through a problem, see it plan its approach, and understand why it chose a particular tool. It sounds like a small UI feature, but it fundamentally changes your relationship with the agent. You stop treating it like a black box and start treating it like a thinking partner whose process you can follow.

That same release added a stop button (which sounds trivial until you’re watching an agent go on a tangent and have no way to interrupt it), dynamic example prompts that are generated from your actual vault content, and multilingual support so the agent responds in whatever language you write in.

But the real game-changer came in version 4.2 with semantic vault search. I wrote about the magic of embeddings over a year ago, and this feature is that idea fully realized inside Obsidian. It uses Google’s File Search API to index your entire vault in the background. Once indexed, the agent can search by meaning, not just keywords. If you ask it to “find my notes about the trade-offs of microservices,” it will surface relevant notes even if they never use the word “microservices.” It understands that a note titled “Why We Split the Monolith” is probably relevant.
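The core of search-by-meaning is easy to demonstrate in miniature. In the plugin, Google’s File Search API handles the embedding and indexing; the toy sketch below just shows why a vector comparison surfaces “Why We Split the Monolith” for a microservices query even with zero keyword overlap. The vectors here are made up for illustration.

```typescript
// Toy illustration of semantic ranking: score notes by cosine
// similarity between embedding vectors. Real embeddings come from
// a model; these three-dimensional vectors are invented.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const query = [0.9, 0.1, 0.2]; // "trade-offs of microservices"
const notes = [
  { title: "Why We Split the Monolith", vec: [0.85, 0.15, 0.25] },
  { title: "Jazz Guitar Practice Log", vec: [0.05, 0.9, 0.1] },
];

const ranked = notes
  .map((n) => ({ ...n, score: cosine(query, n.vec) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].title); // "Why We Split the Monolith"
```

Keyword search would score both notes at zero for the word “microservices”; embedding similarity ranks the conceptually related note first.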

The indexing runs in the background, handles PDFs and attachments, and can be paused and resumed. Getting the reliability right was one of the more frustrating engineering challenges of the whole project. There were weeks of debugging race conditions, handling rate limits gracefully, and making sure a crash mid-index didn’t corrupt the cache. Version 4.2.1 was almost entirely dedicated to stabilizing the indexer, adding incremental cache saves and automatic retry logic. It’s the kind of work that nobody sees but everyone benefits from.

Images, Research, and the Expanding Toolbox

Version 4.3, released in January 2026, added multimodal image support. You can now paste or drag images directly into the chat, and the agent can analyze them, describe them, or reference them in notes it creates. The image generation tool, which I’d been building in the lead-up to 4.3, lets the agent create images on demand using Google’s Imagen models. There’s even an AI-powered prompt suggester that helps you describe what you want if you’re not sure how to phrase it.

That release also introduced two new selection-based actions: Explain Selection and Ask About Selection. These join the existing Rewrite feature to give you a full right-click menu for working with selected text. It sounds like a small addition, but in practice these micro-interactions are where people spend most of their time. Being able to highlight a paragraph, right-click, and ask “What’s the logical flaw in this argument?” without leaving your note is the kind of frictionless experience I’m always chasing.

Then came deep research in version 4.4. This is fundamentally different from the regular Google Search tool. Where a search returns quick snippets, deep research performs multiple rounds of investigation, reading and cross-referencing sources, synthesizing findings, and producing a structured report with inline citations. It can combine web sources with your own vault notes, so the output reflects both what the world knows and what you’ve already written. A single research request takes several minutes, but what you get back is closer to what a research assistant would produce after an afternoon in the library.

I built this on top of my gemini-utils library, which is a separate project I created to share common AI functionality across all of my TypeScript Gemini projects, including Gemini Scribe, my Gemini CLI deep research extension, and more. Having that shared foundation means deep research improvements benefit every project simultaneously.

Opening the Platform

If I had to pick the release that transformed Gemini Scribe from a plugin into a platform, it would be version 4.5. This is where MCP server support and the agent skills system arrived.

MCP, the Model Context Protocol, is an open standard that lets AI applications connect to external tool providers. In practical terms, it means Gemini Scribe can now talk to tools that I didn’t build. You can connect a filesystem server, a GitHub integration, a Brave Search provider, or anything else that speaks MCP. The plugin supports both local stdio transport (spawning a process on your desktop) and HTTP transport with full OAuth authentication, which means it works on mobile too. When you connect an MCP server, its tools appear alongside the built-in vault tools, with the same confirmation flow and safety features.
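For a concrete sense of what "connecting a server" means, here is a sketch of a stdio server and an HTTP server in the `mcpServers` JSON shape that many MCP clients share. Gemini Scribe configures servers through its own settings, so treat the exact keys, the server names, and the URL here as illustrative only:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "remote-search": {
      "url": "https://example.com/mcp",
      "transport": "http"
    }
  }
}
```

The first entry spawns a local process over stdio; the second points at a remote endpoint over HTTP, which is the variant that works on mobile.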

This was the moment the plugin stopped being a closed system. Instead of me having to build every integration myself, the entire MCP ecosystem became available. Someone who needs to query a database from their notes can connect a database MCP server. Someone who wants to interact with their GitHub issues can connect the GitHub server. The plugin becomes a hub rather than a destination.

The agent skills system, which follows the open agentskills.io specification, takes a similar approach to extensibility but for knowledge rather than tools. A skill is a self-contained instruction package that gives the agent specialized expertise. You can create a “meeting-notes” skill that teaches it your preferred format for processing meetings, or a “code-review” skill with your team’s specific standards. Skills use progressive disclosure, so the agent always knows what’s available but only loads the full instructions when it activates one. This keeps conversations focused while making specialized knowledge available on demand.
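To make the shape concrete, here is what a minimal skill might look like following the agentskills.io convention of a SKILL.md file with metadata up front. The fields and the "meeting-notes" content are illustrative, not copied from a real skill:

```markdown
---
name: meeting-notes
description: Turn raw meeting transcripts into structured notes with decisions and action items.
---

When the user asks you to process a meeting transcript:

1. Identify attendees, key decisions, and open questions.
2. Extract action items as a checklist with one owner per item.
3. Write the result to a new note titled "Meeting - <date> - <topic>".
```

The metadata block is the part the agent always sees; the body below it loads only when the skill activates. That split is the progressive disclosure in action.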

Version 4.5 also migrated API key storage to Obsidian’s SecretStorage, which uses the OS keychain. Your API key is no longer sitting in a plain JSON file in your vault. It’s a small change that matters a lot for security, especially for people who sync their vaults to cloud storage or version control.

Managing the Conversation

The most recent release, version 4.6, tackles a problem that only becomes apparent after you’ve been using an agent for a while: conversations get long, and long conversations hit token limits.

The solution is automatic context compaction, a direct answer to the attention management challenge I explored in the Agentic Shift series. When a conversation approaches the model’s token limit, the plugin automatically summarizes older turns to make room for new ones. There’s also an optional live token counter that shows you exactly how much of the context window you’re using, with a breakdown of cached versus new tokens. It’s the kind of visibility that helps you understand why the agent might be “forgetting” things from earlier in the conversation and gives you the information to manage it.

This release also added a per-tool permission policy system, which is the practical realization of the guardrails philosophy I wrote about in the Agentic Shift series. Instead of the binary choice between “confirm everything” and “confirm nothing,” you can now set individual tools to allow, deny, or ask-every-time. There are presets too: Read Only, Cautious, Edit Mode, and (for the brave) YOLO mode, which lets the agent execute everything without asking. I use Cautious mode myself, which auto-approves reads and searches but asks before any file modifications. It strikes a balance between speed and safety that feels right for daily use.

What I’ve Learned

Building Gemini Scribe has taught me something I keep coming back to in this blog: the most interesting work happens at the intersection of AI capabilities and human workflows. The technical challenges (semantic indexing, MCP integration, context compaction) are real, but they’re in service of a simple goal: making the AI useful enough that you forget it’s there.

The plugin now has users like Paul O’Malley building entire self-organizing knowledge systems on top of it. Seeing that kind of creative adoption is what keeps me building. Every feature request, every bug report, every surprising use case reveals another facet of what’s possible when you give a capable AI agent the right set of tools and the right context.

If you’re curious, Gemini Scribe is available in the Obsidian Community Plugins directory. All you need is a free Google Gemini API key. I’d love to hear what you build with it.

Great Video on Gemini Scribe and Obsidian

I was recently looking through the feedback in the Gemini Scribe repository when I noticed a few insightful comments from a user named Paul O’Malley. Curiosity got the better of me (I love seeing who is actually pushing the boundaries of the tools I build), so I took a look at his YouTube channel. I quickly found myself deep in a walkthrough titled “I Built a Second Brain That Organises Itself.”

What caught my eye wasn’t just another productivity system; we’ve all seen the “shiny new app” cycle that leads to digital bankruptcy. It was seeing Gemini Scribe used as the engine for a fully automated Obsidian vault.

The Friction of Digital Maintenance

Paul hits on a fundamental truth: most systems fail because the friction of maintenance—the tagging, the filing, the constant admin—eventually outweighs the benefit. He argues that what we actually need is a system that “bridges the gap in our own executive function”.

In his setup, he uses Obsidian as the chassis because it relies on Markdown. I’ve long believed that Markdown is the native language of AI, and seeing it used here to create a “seamless bridge” between messy human thoughts and structured AI processing was incredibly satisfying.

Gemini Scribe as the Engine

It was a bit surreal to watch Paul walk through the installation of Gemini Scribe as the core engine for this self-organizing brain. He highlights a few features that I poured a lot of heart into:

  • Session History as Knowledge: By saving AI interactions as Markdown files, they become a searchable part of your knowledge base. You can actually ask the AI to reflect on past conversations to find patterns in your own thinking.
  • The Setup Wizard: He uses a “Setup Wizard” to convert the AI from a generic chatbot into a specialized system administrator. Through a conversational interview, the agent learns your profession and hobbies to tailor a project taxonomy (like the PARA method) specifically to you.
  • Agentic Automation: The video demonstrates the “Inbox Processor,” where the AI reads a raw note, gives it a proper title, applies tags, and physically moves it to the right folder.

Beyond the Tool: A Human in the Loop

One thing Paul emphasized that really resonated with my own philosophy of Guiding the Agent’s Behavior is the “Human in the Loop”. When the agent suggests a change or creates a new command, it writes to a staging file first.

As Paul puts it, you are the boss and the AI is the junior employee—it can draft the contract, but you have to sign it before it becomes official. You always remain in control of the files that run your life.

Small Tools, Big Ideas

Seeing the Gemini CLI mentioned as a “cleaner and slightly more powerful” alternative for power users was another nice nod. It reinforces the idea that small, sharp tools can be composed into something transformative.

Building tools in a vacuum is one thing, but seeing them live in the wild, helping someone clear their “mental RAM” and close their loop at the end of the day, is one of the reasons I do this. It’s a reminder that the best technology doesn’t try to replace us; it just makes the foundations a little sturdier.

A laptop sits on a dark wooden desk under the warm glow of an Edison bulb; above the screen, a stream of glowing, holographic research papers and data visualizations cascades downward like a waterfall, physically dissolving into lines of green and white markdown text as they enter the open terminal window.

Bringing Deep Research to the Terminal

I lost the report somewhere between browser tabs. One moment it was there in the Gemini app, a detailed deep research analysis on how AI agents communicate with each other, complete with citations and a synthesis I’d spent an hour reviewing. The next moment, gone. Along with the draft blog post I’d been weaving it into.

I was working on part nine of my Agentic Shift series, trying to answer the question of what happens when agents start talking to each other instead of just talking to us. The research was sprawling—academic papers on multi-agent systems, documentation from LangGraph and AutoGen, blog posts from researchers at DeepMind and OpenAI. I’d been using Gemini’s deep research feature in the app to help synthesize all of this, and it was genuinely useful. The AI would spend minutes thinking through the question, querying sources, building a structured report. But then I had to move that report into my text-based workflow. Copy, paste, reformat, lose formatting, copy again. Somewhere in that dance between the browser and my terminal, I lost everything.

I stared at the empty browser tab for a moment. I could start over, rerun the research in the Gemini app, be more careful about saving this time. But this wasn’t the first time I’d hit this friction. Every time I used deep research in the browser, I had to bridge two worlds: the app where the AI did its thinking, and the terminal where I actually write and build.

What looked like yak shaving was actually a prerequisite. I needed deep research capabilities in my terminal workflow, not just wanted them. I couldn’t keep jumping between environments. And I was in luck. Just a few weeks earlier, Google had announced that deep research was now available through the Gemini API. The capability I’d been using in the browser could be accessed programmatically.

When Features Live in the Wrong Place

I’m not going to pretend this was built based on demand from the community. I needed this. Specifically, I needed to stop context-switching between the Gemini app and my terminal, because every time I did, I was introducing friction and risk. The lost report was just the most recent symptom of a workflow that was fundamentally broken for how I work.

I live in the terminal. My notes are markdown files. My drafts are plain text. My build process, my git workflow, my entire development environment assumes I’m working with files and command-line tools. When I have to move work from a browser back into that environment, I’m not just inconvenienced—I’m fighting against the grain of everything else I do.

Deep research is powerful. It works. But living in a web app meant it was disconnected from the places where I actually needed it. Sure, other people might benefit from having this integrated into MCP-compatible tools, but that’s a nice side effect. The real reason I built this was simpler: I had to finish part nine of the Agentic Shift series, and I couldn’t do that without fixing my workflow first.

The Model Context Protocol made this possible. It’s a standard for exposing AI capabilities as tools that can plug into different environments. Google’s API gave me the primitives. I just needed to connect them to where I actually work.

Building the Missing Piece

The extension wraps Gemini’s deep research capabilities into the Model Context Protocol, which means it integrates seamlessly with Gemini CLI and any other MCP-compatible client. The architecture is deliberately simple, but it supports two distinct workflows depending on what you need.

The first workflow is straightforward: you have a research question, and you want a deep investigation. You can kick off research with a simple command, but if you use the bundled /deep-research:start slash command, the model actually guides you through a step to optimize your question to get the most out of deep research. The agent then spends tens of minutes—or as much time as it needs—planning the investigation, querying sources, and synthesizing findings into a detailed report with citations you can follow up on.

The second workflow is for when you want to ground the research in your own documents. You use /deep-research:store-create to set up a file search store, then /deep-research:store-upload to index your files. Once they’re uploaded, you have two options: you can include that dataset in the deep research process so the agent grounds its investigation in your specific sources, or you can query against it directly for a simpler RAG experience. This is the same File Search capability I wrote about in November when I rebuilt my Podcast RAG system, but now it’s accessible from the terminal as part of my normal workflow.

The extension maintains local state in a workspace cache, so you don’t have to remember arcane resource identifiers or lose track of running research jobs. The whole thing is designed to feel as natural as running a grep command or kicking off a build—it’s just another tool in the environment where I already work.

So did it actually work?

The first time I ran it, I asked for a deep dive into Stonehenge construction. I’d been reading Ken Follett’s novel Circle of Days and found myself curious about the scientific evidence behind the story: what do we actually know about how it was built, and who built it? I kicked off the query and watched something fascinating happen. The model understood that deep research takes time. Instead of just waiting silently, it kept checking in to see if the research was done, almost like checking the oven to see if dinner was ready. Twenty minutes later, a markdown file appeared in my filesystem with a comprehensive research report, complete with citations to academic sources, isotope analysis, and archaeological evidence. I didn’t have to copy anything from a browser. I didn’t lose any formatting. It was just there, ready to reference.

The report mentioned the Bell Beaker culture and what happened to the Neolithic builders around 2500 BCE, which sent me down another rabbit hole. I immediately ran a second research query on that transition. Same seamless experience. That’s when I knew this was exactly what I needed.

What This Actually Means

I think extensions like this represent something important about where AI development is heading. We’re past the proof-of-concept phase where every AI interaction is a magic trick. Now we’re in the phase where AI capabilities need to integrate into actual workflows—not replace them, but augment them in ways that feel natural.

This is what I wrote about in November when I talked about the era of Personal Software. We’ve crossed a threshold where building a bespoke tool is often faster—and certainly less frustrating—than trying to adapt your workflow to someone else’s software. I didn’t build this extension for the community. I built it because I needed it. I had lost work, and I needed to stop context-switching between environments. If other people find it useful, that’s a nice side effect, but it’s fundamentally software for an audience of one.

The key insight for me was that the Model Context Protocol isn’t just a technical standard; it’s a design pattern for making AI tools composable. Instead of building a monolithic research application with its own UI and workflow, I built a small, focused extension that does one thing well and plugs into the environment where I already work. That composability matters because it means the tool can evolve with my workflow rather than forcing my workflow to evolve around the tool.

There’s also something interesting happening with how we think about AI capabilities. Deep research isn’t about making the model smarter—it’s about giving it time and structure. The same model that gives you a superficial answer in three seconds can give you a genuinely insightful report if you let it think for tens of minutes and provide it with the right sources. We’re learning that intelligence isn’t just about raw capability; it’s about how you orchestrate that capability over time.

What Comes Next

The extension is live on GitHub now, and I’m using it daily for my own research workflows. The immediate next step is adding better control over the research format—right now you can specify broad categories like “Technical Deep Dive” or “Executive Brief,” but I want more granular control over structure and depth. I’m also curious about chaining multiple research tasks together, where the output of one investigation becomes the input for the next.

But the bigger question I’m sitting with is what other AI capabilities are hiding in plain sight, waiting for someone to make them accessible. Deep research was always there in the Gemini API; it just needed a wrapper that made it feel like a natural part of the development workflow. What else is out there?

If you want to try it yourself, you’ll need a Gemini API key (get one at ai.dev), exposed through the GEMINI_DEEP_RESEARCH_API_KEY environment variable. Deep research runs on Gemini 3.0 Pro, and you can find the current pricing here. It’s charged based on token consumption for the research process plus any tool usage fees.
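Setup is just the one environment variable; the key value below is obviously a placeholder:

```shell
# Add this to your shell profile (e.g. ~/.zshrc) so the extension can find the key.
export GEMINI_DEEP_RESEARCH_API_KEY="your-api-key-here"
```
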

Install the extension with:

gemini extensions install https://github.com/allenhutchison/gemini-cli-deep-research --auto-update

The full source is on GitHub.

As for me, I still need to finish part nine of the Agentic Shift series. But now I can get back to it with the confidence that I’m working in my preferred environment, with the tools I need accessible right from the terminal. Fair warning: once you start using AI for actual deep research, it’s hard to go back to the shallow stuff.

A retro computer monitor displaying the Gemini CLI prompt "> Ask Gemini to scaffold a web app" inside a glowing neon blue and pink holographic wireframe box, representing a digital sandbox.

The Guardrails of Autonomy

I still remember the first time I let an LLM execute a shell command on my machine. It was a simple ls -la, but my finger hovered over the Enter key for a solid ten seconds.

There is a visceral, lizard-brain reaction to giving an AI that level of access. We all know the horror stories—or at least the potential horror stories. One hallucinated argument, one misplaced flag, and a helpful cleanup script becomes rm -rf /. This fear creates a central tension in what I call the Agentic Shift. We want agents to be autonomous enough to be useful—fixing a bug across ten files while we grab coffee—but safe enough to be trusted with the keys to the kingdom.

Until now, my approach with the Gemini CLI was the blunt instrument of “Human-in-the-Loop.” Any tool call with a side effect—executing shell commands, writing code, or editing files—required a manual y/n confirmation. It was safe, sure. But it was also exhausting.

I vividly remember asking Gemini to “fix all the linting errors in this project.” It brilliantly identified the issues and proposed edits for twenty different files. Then I sat there, hitting yyy… twenty times.

The magic evaporated. I wasn’t collaborating with an intelligent agent; I was acting as a slow, biological barrier for a very expensive macro. This feeling has a name—“Confirmation Fatigue”—and it’s the silent killer of autonomy. I realized I needed to move from micromanagement to strategic oversight. I didn’t want to stop the agent; I wanted to give it a leash.

The Policy Engine

The solution I’ve built is the Gemini CLI Policy Engine.

Think of it as a firewall for tool calls. It sits between the LLM’s request and your operating system’s execution. Every time the model reaches for a tool—whether it’s to read a file, run a grep command, or make a network request—the Policy Engine intercepts the call and evaluates it against a set of rules.

The system relies on three core actions:

  1. allow: The tool runs immediately.
  2. deny: The AI gets a “Permission denied” error.
  3. ask_user: Fall back to manual approval (the default).

A Hierarchy of Trust

The magic isn’t just in blocking or allowing things; it’s in the hierarchy. Instead of a flat list of rules, I built a tiered priority system that functions like layers of defense.

At the base, you have the Default Safety Net. These are the built-in rules that apply to everyone—basic common sense like “always ask before overwriting a file.”

Above that sits the User Layer, which is where I define my personal comfort zone. This allows me to customize the “personality” of my safety rails. On my personal laptop, I might be a cowboy, allowing git commands to run freely because I know I can always undo a bad commit. But on a production server, I might lock things down tighter than a vault.

Finally, at the top, is the Enterprise/Admin Layer. These are the immutable laws of physics for the agent. In an enterprise setting, this is where you ensure that no matter how “creative” the agent gets, it can never curl data to an external IP or access sensitive directories.

Safe Exploration

In practice, this means I can trust the agent to look but ask it to verify before it touches. I generally trust the agent to check the repository status, review history, or check if the build passed. I don’t need to approve every git log or gh run list.

[[rule]]
toolName = "run_shell_command"
commandPrefix = [
  "git status",
  "git log",
  "git diff",
  "gh issue list",
  "gh pr list",
  "gh pr view",
  "gh run list"
]
decision = "allow"
priority = 100

YOLO Mode

Sometimes, I’m working in a sandbox and I just want speed. I can use the dedicated YOLO mode to take the training wheels off. There is a distinct feeling of freedom—and a slight thrill of danger—when you watch the terminal fly by, commands executing one after another.

However, even in YOLO mode, I want a final sanity check before I push code or open a PR. While YOLO mode is inherently permissive, I define specific high-priority rules to catch critical actions. I also explicitly block docker commands—I don’t want the agent spinning up (or spinning down) containers in the background without me knowing.

# Exception: Always ask before committing or creating a PR
[[rule]]
toolName = "run_shell_command"
commandPrefix = ["git commit", "gh pr create"]
decision = "ask_user"
priority = 900
modes = ["yolo"]

# Exception: Never run docker commands automatically
[[rule]]
toolName = "run_shell_command"
commandPrefix = "docker"
decision = "deny"
priority = 999
modes = ["yolo"]

The Hard Stop

And then there are the things that should simply never happen. I don’t care how confident the model is; I don’t want it rebooting my machine. These rules are the “break glass in case of emergency” protections that let me sleep at night.

[[rule]]
toolName = "run_shell_command"
commandRegex = "^(shutdown|reboot|kill)"
decision = "deny"
priority = 999

Decoupling Capability from Control

The significance of this feature goes beyond just saving me from pressing y. It fundamentally changes how we design agents.

I touched on this concept in my series on autonomous agents, specifically in Building Secure Autonomous Agents, where I argued that a “policy engine” is essential for scaling from one agent to a fleet. Now, I’m bringing that same architecture to the local CLI.

Previously, the conversation around AI safety often presented a binary choice: you could have a capable agent that was potentially dangerous, or a safe agent that was effectively useless. If I wanted to ensure the agent wouldn’t accidentally delete my home directory, the standard advice was to simply remove the shell tool. But that is a false choice. It confuses the tool with the intent. Removing the shell doesn’t just stop the agent from doing damage; it stops it from running tests, managing git, or installing packages—the very things I need it to do.

With the Policy Engine, I can give the agent powerful tools but wrap them in strict policies. I can give it access to kubectl, but only for get commands. I can let it edit files, but only on specific documentation sites.
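That kubectl example would look something like this in the same rule format as the earlier snippets. The prefixes and priority numbers are illustrative, and I’m assuming (as the examples above suggest) that the higher-priority, more specific rule wins:

```toml
# Read-only kubectl is fine; the specific allow outranks the broad deny.
[[rule]]
toolName = "run_shell_command"
commandPrefix = ["kubectl get"]
decision = "allow"
priority = 200

# Everything else under kubectl falls through to a deny.
[[rule]]
toolName = "run_shell_command"
commandPrefix = ["kubectl"]
decision = "deny"
priority = 150
```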

This is how we bridge the gap between a fun demo and a production-ready tool. It allows me to define the sandbox in which the AI plays, giving me the confidence to let it run autonomously within those boundaries.

Defining Your Own Rules

The Policy Engine is available now in the latest release of Gemini CLI. You can dive into the full documentation here.

If you want to see exactly what rules are currently active on your system—including the built-in defaults and your custom additions—you can simply run /policies list from inside the Gemini CLI.

I’m currently running a mix of “Safe Exploration” and “Hard Stop” rules. It’s quieted the noise significantly while keeping my file system intact. I’d love to hear how you configure yours—are you a “deny everything” security maximalist, or are you running in full “allow” mode?

A stylized, dark digital illustration of an open laptop displaying lines of blue code. Floating above the laptop are three glowing, neon blue wireframe icons: a document on the left, a calendar in the center, and an envelope on the right. The icons appear to be formed from streams of digital particles rising from the laptop screen, symbolizing the integration of digital tools. The overall aesthetic is futuristic and high-tech, with dramatic lighting emphasizing the connection between the code and the applications.

Bringing the Office to the Terminal

There is a specific kind of friction that every developer knows. It’s the friction of the “Alt-Tab.”

You’re deep in the code, holding a complex mental model of a system in your head, when you realize you need to check a requirement. That requirement lives in a Google Doc. Or maybe you need to see if you have time to finish a feature before your next meeting. That information lives in Google Calendar.

So you leave the terminal. You open the browser. You navigate the tabs. You find the info. And in those thirty seconds, the mental model you were holding starts to evaporate. The flow is broken.

But it’s not just the context switch that kills your momentum—it’s the ambush. The moment you open that browser window, the red dots appear. Chat pings, new emails, unresolved comments on a doc you haven’t looked at in two days—they all clamor for your attention. Before you know it, the quick thing you needed to look up has morphed into an hour of answering questions and putting out fires. You didn’t just lose your place in the code; you lost your afternoon.

I’ve been thinking a lot about this friction lately, especially as I’ve moved more of my workflow into the Gemini CLI. If we want AI to be a true partner in our development process, it can’t just live in a silo. It needs access to the context of our work—and for most of us, that context is locked away in the cloud, in documents, chats, and calendars.

That’s why I built the Google Workspace extension for Gemini CLI.

Giving the Agent “Senses”

We often talk about AI agents in the abstract, but their utility is defined by their boundaries. An agent that can only see your code is a great coding partner. An agent that can see your code and your design documents and your team’s chat history? That’s a teammate.

This extension connects the Gemini CLI to the Google Workspace APIs, effectively giving your terminal-based AI a set of digital senses and hands. It’s not just about reading data; it’s about integrating that data into your active workflow.

Here is what that looks like in practice:

1. Contextual Coding

Instead of copying and pasting requirements from a browser window, you can now ask Gemini to pull the context directly.

“Find the ‘Project Atlas Design Doc’ in Drive, read the section on API authentication, and help me scaffold the middleware based on those specs.”

2. Managing the Day

I often get lost in work and lose track of time. Now, I can simply ask my terminal:

“Check my calendar for the rest of the day. Do I have any blocks of free time longer than two hours to focus on this migration?”

3. Seamless Communication

Sometimes you just need to drop a quick note without leaving your environment.

“Send a message to the ‘Core Eng’ chat space letting them know the deployment is starting now.”

The Accidental Product

Truth be told, I didn’t set out to build a product. When I first joined Google DeepMind, this was simply my “starter project.” My manager suggested I spend a few weeks experimenting with Google Workspace and our agentic capabilities, and the Gemini CLI seemed like the perfect sandbox for that kind of exploration.

I started building purely for myself, guided by my own daily friction. I wanted to see if I could check my calendar without leaving the terminal. Then I wanted to see if I could pull specs from a Doc. I followed the path of my own curiosity, adding tools one by one.

But when I shared this little experiment with a few colleagues, the reaction was immediate. They didn’t just think it was cool; they wanted to install it. That’s when I realized this wasn’t just a personal hack—it was a shared need. It snowballed from a few scripts into a full-fledged extension that we knew we had to ship.

Under the Hood

The extension is built as a Model Context Protocol (MCP) server, which means it runs locally on your machine. It uses your own OAuth credentials, so your data never passes through a third-party server. It’s direct communication between your local CLI and the Google APIs.

It currently supports a wide range of tools across the Workspace suite:

  • Docs & Drive: Search for files, read content, and even create new docs from markdown.
  • Calendar: List events, find free time, and schedule meetings.
  • Gmail: Search threads, read emails, and draft replies.
  • Chat: Send messages and list spaces.

Why This Matters

This goes back to the idea of “Small Tools, Big Ideas.” Individually, a command-line tool to read a calendar isn’t revolutionary. But when you combine that capability with the reasoning engine of a large language model, it becomes something else entirely.

It turns your terminal into a cockpit for your entire digital work life. It allows you to script interactions between your code and your company’s knowledge base. It reduces the friction of context switching, letting you stay where you are most productive.

If you want to try it out, the extension is open source and available now. You can install it directly into the Gemini CLI:

gemini extensions install https://github.com/gemini-cli-extensions/workspace

I’m curious to see how you all use this. Does it change your workflow? Does it keep you in the flow longer? Give it a spin and let me know.

A developer leans back in his chair with hands behind his head, smiling with relief. His monitor displays a large glowing "DELETE" button. In the background, a messy, tangled server rack is fading away, symbolizing the removal of complex infrastructure.

The Joy of Deleting Code: Rebuilding My Podcast Memory

Late last year, I shared the story of a personal obsession: building an AI system grounded in my podcast history. I had hundreds of hours of audio—conversations that had shaped my thinking—trapped in MP3 files. I wanted to set them free. I wanted to be able to ask my library questions, find half-remembered quotes, and synthesize ideas across years of listening.

So, I built a system. And like many “v1” engineering projects, it was a triumph of brute force.

It was a classic Retrieval-Augmented Generation (RAG) pipeline, hand-assembled from the open-source parts bin. I had a reliable tool called podgrab acting as my scout, faithfully downloading every new episode. But downstream from that was a complex RAG implementation to chop transcripts into bite-sized chunks. I had an embedding model to turn those chunks into vectors. And sitting at the center of it all was a vector database (ChromaDB) that I had to host, manage, and maintain.

It worked, but it was fragile. I didn’t even have a proper deployment setup; I ran the whole thing from a tmux session, with different panes for the ingestion watcher, the vector database, and the API server. It felt like keeping a delicate machine humming by hand. Every time I wanted to tweak the retrieval logic or—heaven forbid—change the embedding model, I was looking at a weekend of re-indexing and refactoring. I had built a memory for my podcasts, but I had also built myself a part-time job as a database administrator.

Then, a few weeks ago, I saw this announcement from the Gemini team.

They were launching File Search, a tool that promised to collapse my entire precarious stack into a single API call. The promise was bold: a fully managed RAG system. No vector DB to manage. No manual chunking strategies to debate. No embedding pipelines to debug. You just upload the files, and the model handles the rest.

I remember reading the documentation and feeling that specific, electric tingle that hits you when you realize the “hard problem” you’ve been solving is no longer a hard problem. It wasn’t just an update; it was permission to stop doing the busy work. I was genuinely excited—not just to write new code, but to tear down the old stuff.

Sometimes, it’s actually more fun to delete code than it is to write it.

The first step was the migration. I wrote a script to push my archive—over 18,000 podcast transcripts—into the new system. It took a while to run, but when it finished, everything was just… there. Searchable. Grounded. Ready.

That was the signal I needed. I opened my editor and started deleting code I had painstakingly written just last year. Podgrab stayed—it was doing its job perfectly—but everything else was on the chopping block.

  • I deleted the chromadb dependency and the local storage management. Gone.
  • I deleted the custom logic for sliding-window text chunking. Gone.
  • I deleted the manual embedding generation code. Gone.
  • I deleted the old web app and a dozen stagnant prototypes that were cluttering up the repo. Gone.

I watched my codebase shrink by hundreds of lines. The complexity didn’t just move; it evaporated. And it was more than a cleanup: it was a fresh start. I wasn’t patching an old system anymore; I was building a new one, unconstrained by the decisions I had made a year ago.

In its place, I wrote a new, elegant ingestion script. It does one thing: it takes the transcripts generated from the files podgrab downloads and uploads them to the Gemini File Search store. That’s it. Google handles the indexing, the storage, and the retrieval.

With the heavy lifting gone, I was free to rethink the application itself. I built a new central brain for the project, a lightweight service I call mcp_server.py (implementing the Model Context Protocol).

Previously, my server was bogged down with the mechanics of how to find data. Now, mcp_server.py simply hands a user’s query to my rag.py module. That module doesn’t need to be a database client anymore; it just configures the Gemini FileSearch tool and gets out of the way. The model itself, grounded by the tool, does the retrieval, the synthesis, and even the citation.
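The query path is just as thin. To show how little is left, here is a sketch that assembles the request as plain dictionaries; the field names are shaped after the google-genai request structure (the real SDK wraps them in typed objects), and the model name and store name are placeholder assumptions.

```python
def build_query_request(question, store_name, model="gemini-2.5-flash"):
    """Assemble a generate-content request grounded by a File Search store.

    Field names follow the google-genai request shape as an assumption,
    shown here as plain dicts for clarity.
    """
    return {
        "model": model,
        "contents": question,
        "config": {
            "tools": [
                {"file_search": {"file_search_store_names": [store_name]}}
            ]
        },
    }
```

That one tool entry is the whole retrieval configuration: no query embedding, no nearest-neighbor search, no reranking code on my side.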

The difference is profound. The “RAG” part of my application—the part that used to consume 80% of my engineering effort—is now just a feature I use, like a spell checker or a date parser.

This shift is bigger than my podcast project. It changes the calculus for every new idea I have. Previously, if I wanted to build a grounded AI tool for a different context—say, for my project notes or my email archives—I would hesitate. I’d think about the boilerplate, the database setup, the chunking logic. Now? I can spin up a robust, grounded system in an hour.

My podcast agent is smarter now, faster, and much cheaper to run. But the best part? I’m not a database administrator anymore. I’m just a builder again.

You can try out the new system yourself at podcast-rag.hutchison.org or check out the code on GitHub.

Abstract digital visualization of glowing lines and nodes converging on a central geometric shape labeled 'AGENTS.md', symbolizing interconnected AI systems and a unifying standard.

On Context, Agents, and a Path to a Standard

When we were first designing the Gemini CLI, one of the foundational ideas was the importance of context. For an AI to be a true partner in a software project, it can’t just be a stateless chatbot; it needs a “worldview” of the codebase it’s operating in. It needs to understand the project’s goals, its constraints, and its key files. This philosophy isn’t unique; many agentic tools use similar mechanisms. In our case, it led to the GEMINI.md context system (first introduced in this commit): a simple Markdown file that acts as a charter, guiding the AI’s behavior within a specific repository.

At its core, GEMINI.md is designed for clarity and flexibility. It gives developers a straightforward way to provide durable instructions and file context to the model. We also recognized that not every project is the same, so we made the system adaptable. For instance, if you prefer a different convention, you can easily change the name of your context file with a simple setting.
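For instance, the file name is controlled by the `contextFileName` setting in your `.gemini/settings.json`. The `AGENTS.md` value below is just one possible choice, and the exact key may differ across CLI versions, so check the settings reference for yours:

```json
{
  "contextFileName": "AGENTS.md"
}
```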

This approach has worked well, but I’ve always been mindful that bespoke solutions, however effective, can lead to fragmentation. In the open, collaborative world of software development, standards are the bridges that connect disparate tools into a cohesive ecosystem.

That’s why I’ve been following the emergence of the Agents.md specification with great interest. We have several open issues in the Gemini CLI repo (like #406 and #12345) from users asking for Agents.md support, so there’s clear community interest. The idea of a universal standard for defining an AI’s context is incredibly appealing. A shared format would mean that a context file written for one tool could work seamlessly in another, allowing developers to move between tools without friction. I would love for Gemini CLI to become a first-class citizen in that ecosystem.

However, as I’ve considered a full integration, I’ve run into a few hurdles—not just technical limitations, but patterns of use that a standard would need to address. This has led me to a more concrete set of proposals for what an effective standard would need.

So, what would it take to bridge this gap? With a few key additions, I believe Agents.md could become the robust standard we need. Here’s a breakdown of what’s required:

  1. A Standard for @file Includes: From my perspective, this is mandatory. In any large project, you need the ability to break down a monolithic context file into smaller, logical, and more manageable parts—much like a C/C++ #include. A simple @file directive, which GEMINI.md and some other systems support, would provide the modularity needed for real-world use.
  2. A Pragma System for Model-Specific Instructions: Developers will always want to optimize prompts for specific models. To accommodate this without sacrificing portability, the standard could introduce a pragma system. This could leverage standard Markdown callouts to tag instructions that only certain models should pay attention to, while others ignore them. For example:

    > [!gemini]
    > Gemini only instructions here

    > [!claude]
    > Claude only instructions here

    > [!codex]
    > Codex only instructions here
  3. Clear Direction on Context Hierarchy: We need clear rules for how an agentic application should discover and apply context. Based on my own work, I’d propose a hierarchical strategy. When an agent is invoked, it should read the context in its current directory and all parent directories. Then, when it’s asked to read a specific file, it should first apply the context from that file’s local directory before applying the broader, inherited context. This ensures that the most specific instructions are always considered first, creating a predictable and powerful system.

If the Agents.md standard were to incorporate these three features, I believe it would unlock a new level of interoperability for AI developer tools. It would create a truly portable and powerful way to define AI context, and I would be thrilled to move Gemini CLI to a model of first-class support.

The future of AI-assisted development is collaborative, and shared standards are the bedrock of that collaboration. I’ve begun outreach to the Agents.md maintainers to discuss these proposals, and I’m optimistic that with community feedback, we can get there. If you have your own opinions on this, I’d love to hear them in the discussion on our repo.