I lost the report somewhere between browser tabs. One moment it was there in the Gemini app, a detailed deep research analysis on how AI agents communicate with each other, complete with citations and a synthesis I’d spent an hour reviewing. The next moment, gone. Along with the draft blog post I’d been weaving it into.
I was working on part nine of my Agentic Shift series, trying to answer the question of what happens when agents start talking to each other instead of just talking to us. The research was sprawling—academic papers on multi-agent systems, documentation from LangGraph and AutoGen, blog posts from researchers at DeepMind and OpenAI. I’d been using Gemini’s deep research feature in the app to help synthesize all of this, and it was genuinely useful. The AI would spend minutes thinking through the question, querying sources, building a structured report. But then I had to move that report into my text-based workflow. Copy, paste, reformat, lose formatting, copy again. Somewhere in that dance between the browser and my terminal, I lost everything.
I stared at the empty browser tab for a moment. I could start over, rerun the research in the Gemini app, be more careful about saving this time. But this wasn’t the first time I’d hit this friction. Every time I used deep research in the browser, I had to bridge two worlds: the app where the AI did its thinking, and the terminal where I actually write and build.
What looked like yak shaving was actually a prerequisite. I needed deep research capabilities in my terminal workflow, not just wanted them. I couldn’t keep jumping between environments. And I was in luck. Just a few weeks earlier, Google had announced that deep research was now available through the Gemini API. The capability I’d been using in the browser could be accessed programmatically.
When Features Live in the Wrong Place
I’m not going to pretend this was built based on demand from the community. I needed this. Specifically, I needed to stop context-switching between the Gemini app and my terminal, because every time I did, I was introducing friction and risk. The lost report was just the most recent symptom of a workflow that was fundamentally broken for how I work.
I live in the terminal. My notes are markdown files. My drafts are plain text. My build process, my git workflow, my entire development environment assumes I’m working with files and command-line tools. When I have to move work from a browser back into that environment, I’m not just inconvenienced—I’m fighting against the grain of everything else I do.
Deep research is powerful. It works. But living in a web app meant it was disconnected from the places where I actually needed it. Sure, other people might benefit from having this integrated into MCP-compatible tools, but that’s a nice side effect. The real reason I built this was simpler: I had to finish part nine of the Agentic Shift series, and I couldn’t do that without fixing my workflow first.
The Model Context Protocol made this possible. It’s a standard for exposing AI capabilities as tools that can plug into different environments. Google’s API gave me the primitives. I just needed to connect them to where I actually work.
Building the Missing Piece
The extension wraps Gemini’s deep research capabilities into the Model Context Protocol, which means it integrates seamlessly with Gemini CLI and any other MCP-compatible client. The architecture is deliberately simple, but it supports two distinct workflows depending on what you need.
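To make that shape concrete, here is a minimal sketch of what exposing a long-running capability as an MCP tool looks like with the official MCP Python SDK. The tool name and the placeholder return value are illustrative, not the extension’s actual code:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("deep-research")

@mcp.tool()
def start_research(question: str) -> str:
    """Kick off a deep research job and return an id the client can poll."""
    # The real extension calls the Gemini API here; this stub just
    # shows where that call slots into the MCP tool surface.
    return "research-job-0001"

if __name__ == "__main__":
    mcp.run()  # serve over stdio so Gemini CLI can attach to it

Once a client like Gemini CLI connects, start_research shows up as just another tool the model can decide to call.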
The first workflow is straightforward: you have a research question, and you want a deep investigation. You can kick off research with a simple command, but if you use the bundled /deep-research:start slash command, the model first walks you through refining your question so you get the most out of deep research. The agent then spends tens of minutes, or as long as it needs, planning the investigation, querying sources, and synthesizing the findings into a detailed report with citations you can follow up on.
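That long-running design means starting research and collecting the report are separate steps: the client kicks off a job, then checks back until it is done. Here is a sketch of that polling shape; the helper names and status payload are invented for illustration, and the real extension talks to the Gemini API instead of this stub:

import time

_STARTED = time.monotonic()

def check_research(job_id: str) -> dict:
    # Invented stand-in for the extension's status check; pretend the
    # job finishes after a minute so the loop below can run as-is.
    done = time.monotonic() - _STARTED > 60
    return {"done": done, "report": "# Findings..." if done else None}

def wait_for_report(job_id: str, interval: float = 30.0) -> str:
    """Poll until the research job completes, then return the report."""
    while True:
        status = check_research(job_id)
        if status["done"]:
            return status["report"]
        time.sleep(interval)  # check back later, like peeking at the oven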
The second workflow is for when you want to ground the research in your own documents. You use /deep-research:store-create to set up a file search store, then /deep-research:store-upload to index your files. Once they’re uploaded, you have two options: you can include that dataset in the deep research process so the agent grounds its investigation in your specific sources, or you can query against it directly for a simpler RAG experience. This is the same File Search capability I wrote about in November when I rebuilt my Podcast RAG system, but now it’s accessible from the terminal as part of my normal workflow.
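Under the hood this maps onto the Gemini API’s File Search surface. Here is a rough sketch using the google-genai Python SDK, written from memory of the File Search docs, so exact parameter names may differ; the store name, file path, and query model are placeholders:

import time
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Create a store and index a local file into it.
store = client.file_search_stores.create(
    config={"display_name": "agentic-shift-notes"}
)
op = client.file_search_stores.upload_to_file_search_store(
    file_search_store_name=store.name,
    file="notes/agent-protocols.md",
)
while not op.done:  # indexing runs asynchronously
    time.sleep(5)
    op = client.operations.get(op)

# Query with the store attached as a grounding tool.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # any generation model works for direct RAG
    contents="What do my notes say about agent-to-agent protocols?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(
            file_search=types.FileSearch(
                file_search_store_names=[store.name]
            )
        )]
    ),
)
print(response.text)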
The extension maintains local state in a workspace cache, so you don’t have to remember arcane resource identifiers or lose track of running research jobs. The whole thing is designed to feel as natural as running a grep command or kicking off a build—it’s just another tool in the environment where I already work.
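The cache itself is nothing exotic. Conceptually it is a small JSON map from friendly names to the opaque resource ids the API hands back, something like this hypothetical sketch (the file name and schema are mine, not necessarily the extension’s):

import json, pathlib

# Hypothetical location and schema for the workspace cache.
CACHE = pathlib.Path(".gemini") / "deep-research-cache.json"

def remember(kind: str, name: str, resource_id: str) -> None:
    """Map a friendly name to an opaque API resource id for this workspace."""
    state = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    state.setdefault(kind, {})[name] = resource_id  # e.g. kind="stores"
    CACHE.parent.mkdir(exist_ok=True)
    CACHE.write_text(json.dumps(state, indent=2))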
So did it actually work?
The first time I ran it, I asked for a deep dive into Stonehenge construction. I’d been reading Ken Follett’s novel Circle of Days and found myself curious about the scientific evidence behind the story: what do we actually know about how it was built, and who built it? I kicked off the query and watched something fascinating happen. The model understood that deep research takes time. Instead of just waiting silently, it kept checking in to see if the research was done, almost like checking the oven to see if dinner was ready. Twenty minutes later, a markdown file appeared in my filesystem with a comprehensive research report, complete with citations to academic sources covering isotope analysis and archaeological evidence. I didn’t have to copy anything from a browser. I didn’t lose any formatting. It was just there, ready to reference. The report mentioned the Bell Beaker culture and what happened to the Neolithic builders around 2500 BCE, which sent me down another rabbit hole. I immediately ran a second research query on that transition. Same seamless experience. That’s when I knew this was exactly what I needed.
What This Actually Means
I think extensions like this represent something important about where AI development is heading. We’re past the proof-of-concept phase where every AI interaction is a magic trick. Now we’re in the phase where AI capabilities need to integrate into actual workflows—not replace them, but augment them in ways that feel natural.
This is what I wrote about in November when I talked about the era of Personal Software. We’ve crossed a threshold where building a bespoke tool is often faster—and certainly less frustrating—than trying to adapt your workflow to someone else’s software. I didn’t build this extension for the community. I built it because I needed it. I had lost work, and I needed to stop context-switching between environments. If other people find it useful, that’s a nice side effect, but it’s fundamentally software for an audience of one.
The key insight for me was that the Model Context Protocol isn’t just a technical standard; it’s a design pattern for making AI tools composable. Instead of building a monolithic research application with its own UI and workflow, I built a small, focused extension that does one thing well and plugs into the environment where I already work. That composability matters because it means the tool can evolve with my workflow rather than forcing my workflow to evolve around the tool.
There’s also something interesting happening with how we think about AI capabilities. Deep research isn’t about making the model smarter—it’s about giving it time and structure. The same model that gives you a superficial answer in three seconds can give you a genuinely insightful report if you let it think for tens of minutes and provide it with the right sources. We’re learning that intelligence isn’t just about raw capability; it’s about how you orchestrate that capability over time.
What Comes Next
The extension is live on GitHub now, and I’m using it daily for my own research workflows. The immediate next step is adding better control over the research format—right now you can specify broad categories like “Technical Deep Dive” or “Executive Brief,” but I want more granular control over structure and depth. I’m also curious about chaining multiple research tasks together, where the output of one investigation becomes the input for the next.
But the bigger question I’m sitting with is what other AI capabilities are hiding in plain sight, waiting for someone to make them accessible. Deep research was always there in the Gemini API; it just needed a wrapper that made it feel like a natural part of the development workflow. What else is out there?
If you want to try it yourself, you’ll need a Gemini API key (get one at ai.dev), exported in your shell as the GEMINI_DEEP_RESEARCH_API_KEY environment variable. Deep research runs on Gemini 3.0 Pro, and current pricing is listed in the Gemini API documentation: you’re charged for token consumption during the research process plus any tool usage fees.
Install the extension with:
gemini extensions install https://github.com/allenhutchison/gemini-cli-deep-research --auto-update
The full source is on GitHub.
As for me, I still need to finish part nine of the Agentic Shift series. But now I can get back to it with the confidence that I’m working in my preferred environment, with the tools I need accessible right from the terminal. Fair warning: once you start using AI for actual deep research, it’s hard to go back to the shallow stuff.