I recently wrapped up the development cycle for Gemini Scribe 4.8.0. Looking back at the ~99 pull requests merged over the last month, the volume of change is striking. Not only are we shipping major features, but I’m also seeing a steady uptick in contributions from collaborators, more issues filed by the community, and much more activity in our discussion group. Beyond the changelog and community growth, two structural narratives define this release: automation and measurement.
As I discussed in the evolution of Gemini Scribe, the goal has always been to move beyond a simple chat interface. With 4.8.0, we are taking a massive step toward making the agent a true background worker in your vault.
Here is a look at the architecture, the code, and what this release means for the future of our agentic workflows.
The Push for Automation
For a long time, running a complex agent task meant staring at a blocking UI. If you asked the agent to perform deep research or generate an image, you waited.
To solve this, we introduced a unified background execution lane. The new BackgroundTaskManager allows tools like DeepResearchTool and GenerateImageTool to accept a background: true parameter. The agent submits the task, receives an ID immediately, and returns to its turn. You can monitor these tasks in the new Gemini Activity modal, which consolidates background tasks and RAG indexing status into one view.
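The core idea can be sketched in a few lines. This is an illustrative sketch, not the plugin's actual `BackgroundTaskManager` API: `submit` hands back an ID immediately while the work resolves later, which is what lets the agent return to its turn without blocking.

```typescript
// Minimal sketch of the background execution lane (names and shapes are
// assumptions for illustration, not the plugin's real implementation).
type TaskStatus = "running" | "done" | "failed";

interface TaskRecord {
  status: TaskStatus;
  result?: string;
}

class BackgroundTaskManager {
  private tasks = new Map<string, TaskRecord>();
  private nextId = 0;

  // Submit work and return an ID without awaiting completion.
  submit(work: () => Promise<string>): string {
    const id = `task-${++this.nextId}`;
    this.tasks.set(id, { status: "running" });
    work()
      .then((result) => this.tasks.set(id, { status: "done", result }))
      .catch(() => this.tasks.set(id, { status: "failed" }));
    return id;
  }

  // Something like the Activity modal would poll task state this way.
  status(id: string): TaskRecord | undefined {
    return this.tasks.get(id);
  }
}
```

The key design property is that `submit` never awaits the work itself, so the caller's turn ends immediately and progress is observed by polling.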
But unblocking the UI was only half the battle. We wanted to lay the groundwork for an agent that operates in the background. While true autonomy is a spectrum, the first step is moving away from the chat box and into scheduled, asynchronous workflows.
The Scheduled Task Engine
The marquee feature of 4.8.0 is the full task scheduling system. You can now define a task as a markdown file, and the plugin will run it on a cadence as a headless agent session, writing the output back to the vault.
To make this work, we built a ScheduledTaskManager with a 60-second tick loop. Tasks are stored in [state-folder]/Scheduled-Tasks/ with a sidecar JSON file for state. The headless ScheduledTaskRunner mirrors the standard AgentViewTools but auto-approves all tool calls.
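For a sense of what a task definition looks like, here is a hypothetical example (the frontmatter keys shown are illustrative; the documentation has the real schema):

```markdown
---
schedule: daily@16:30
runIfMissed: true
---

Review today's meeting notes and append a summary to the daily note.
```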
We also expanded the schedule grammar. Originally, daily meant “every 24 hours from creation,” which surprised users. Now, you can specify daily@HH:MM and weekly@HH:MM:DAYS, so you can finally tell the agent to run “every weekday at 4:30 PM.”
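The expanded grammar is small enough to sketch as a parser. This is an illustrative version, not the plugin's actual code:

```typescript
// Hedged sketch of the daily@HH:MM / weekly@HH:MM:DAYS grammar.
interface Schedule {
  kind: "daily" | "weekly";
  hour: number;
  minute: number;
  days?: string[]; // e.g. ["MON", "TUE"] for weekly schedules
}

function parseSchedule(spec: string): Schedule {
  const at = spec.indexOf("@");
  const kind = spec.slice(0, at);
  if (kind !== "daily" && kind !== "weekly") {
    throw new Error(`unsupported schedule: ${spec}`);
  }
  // After the "@": HH:MM, optionally followed by :DAY,DAY,...
  const parts = spec.slice(at + 1).split(":");
  return {
    kind,
    hour: Number(parts[0]),
    minute: Number(parts[1]),
    days: parts[2]?.split(","),
  };
}
```

With this shape, “every weekday at 4:30 PM” is simply `weekly@16:30:MON,TUE,WED,THU,FRI`.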
We also handle missed runs gracefully. On startup, any task with runIfMissed: true that missed its window surfaces in a CatchUpModal.
Right now, this is essentially a highly intelligent cron job. You are still explicitly telling the agent when to run. But this scheduling engine is the foundational infrastructure for what comes next. In the next release, we are introducing Obsidian lifecycle hooks. Instead of just running on a timer, the agent will be able to react to events, triggering workflows when you create a new file, save a note, or modify a project board. That is where we cross the threshold into true ambient AI.
How I Use This in Practice
To give you an idea of what this unlocks, I currently rely on a few specific scheduled workflows:
The Daily Setup: Every afternoon, a scheduled skill runs to prepare my vault for the following day. It looks up my calendar, creates my daily note if it doesn’t exist, and seeds it with my upcoming meetings. It goes a step further by creating individual meeting note entries and building out context notes for the people I’ll be meeting with. When I walk into the office the next morning, my daily note is already prepped and ready to go.
Automated Blog Drafts: I also use this to automate my content pipeline. I have a scheduled skill that monitors my Readwise syncs and automatically generates drafts for my “Reading List” blog posts. Instead of manually curating and formatting these, the agent handles the heavy lifting in the background, leaving me to just review and polish the draft.
If you are worried about the agent running amok in your vault while you aren’t looking, there are several ways to mitigate this. You can limit the tools the agent has access to. If you don’t want it overwriting files, you can simply restrict its write access. Additionally, the agent’s response from any scheduled task is always saved in the Scheduled-Tasks/Runs file, giving you a complete audit log of what the agent had to say during the session.
In my case, I’m automating skills that I’ve been running manually for a while now, and I run my agent in a mode where I let it write and edit files day-to-day. You should set up your tasks to match your own comfort level. You can read more about how to configure this in the Scheduled Tasks Documentation.
Extracting the Agent Loop
To support headless scheduled tasks, I had to refactor how the agent executes tools. Previously, the tool-execution loop was tightly coupled to the UI in AgentViewTools.
I extracted this logic into a UI-agnostic AgentLoop class. AgentViewTools shrank from 386 lines down to 187, becoming a thin adapter over AgentLoop with specific hooks (onToolBatchStart, onToolCallStart, etc.).
```typescript
// Conceptual extraction of the AgentLoop
export class AgentLoop {
  constructor(private engine: ToolExecutionEngine) {}

  async execute(turn: AgentTurn) {
    // Iterative tool execution, removing the recursive stack-depth ceiling
    while (this.hasPendingToolCalls(turn)) {
      // Loop detection, batching, and execution logic lives here
    }
  }
}
```
This extraction immediately paid dividends, catching bugs that a duplicate headless runner had introduced, and eliminating a recursive stack-depth ceiling on deep tool chains. More importantly, it means scheduled tasks, evals, and the UI all share the exact same execution engine.
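To make the sharing concrete, here is a hedged sketch of how one loop can serve both consumers by swapping hook implementations. The hook names and shapes are assumptions for illustration, not the plugin's real interfaces:

```typescript
// Illustrative: the same loop drives the UI adapter and the headless
// scheduled runner; only the injected hooks differ.
interface LoopHooks {
  onToolCallStart(name: string): void;
  approveToolCall(name: string): boolean;
}

class SharedAgentLoop {
  constructor(private hooks: LoopHooks) {}

  runTool(name: string): boolean {
    this.hooks.onToolCallStart(name);
    return this.hooks.approveToolCall(name);
  }
}

// UI adapter: surfaces activity and could prompt the user for approval.
const uiHooks: LoopHooks = {
  onToolCallStart: (name) => console.log(`UI: running ${name}`),
  approveToolCall: () => true, // a real UI might ask the user here
};

// Headless runner: silent, auto-approves everything, as scheduled tasks do.
const headlessHooks: LoopHooks = {
  onToolCallStart: () => {},
  approveToolCall: () => true,
};

const uiLoop = new SharedAgentLoop(uiHooks);
const headlessLoop = new SharedAgentLoop(headlessHooks);
```

The payoff is that a bug fixed in the loop is fixed everywhere at once, which is exactly how the duplicate-runner bugs surfaced.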
Local Models with Ollama and Gemma 4
First-class local-model support is here. By leveraging the ModelApi seam, chat, summarization, rewrite, and agent tool-calling all work against a local Ollama server. You can use any model from Ollama that supports tool calling, though I have personally only tested this extensively with Gemma 4.
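To show roughly what a tool-calling request against a local Ollama server looks like: the endpoint and payload shape below follow Ollama's `/api/chat` API, but the `read_note` tool is a hypothetical example, not one of the plugin's actual tools.

```typescript
// Sketch of an Ollama /api/chat request body with a tool definition.
interface OllamaTool {
  type: "function";
  function: {
    name: string;
    description: string;
    parameters: object;
  };
}

function buildChatRequest(model: string, prompt: string, tools: OllamaTool[]) {
  return {
    model, // whichever Ollama model tag you have pulled locally
    messages: [{ role: "user", content: prompt }],
    tools,
    stream: false,
  };
}

// Hypothetical vault tool, for illustration only.
const readNoteTool: OllamaTool = {
  type: "function",
  function: {
    name: "read_note",
    description: "Read a note from the vault by path",
    parameters: {
      type: "object",
      properties: { path: { type: "string" } },
      required: ["path"],
    },
  },
};

const body = buildChatRequest("gemma", "Summarize my daily note", [readNoteTool]);
// POST this as JSON to http://localhost:11434/api/chat
```

If the model decides to call a tool, the response carries the call in the message's `tool_calls`, which is what the agent loop executes and feeds back.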
In my local evaluation harness, Gemma 4 performed exceptionally well. It is capable and fast, and it handles the agent loop with a level of reliability that makes local-only agentic workflows genuinely viable.
The way I use this right now is as an offline fallback: when I don’t have an internet connection, I switch to Gemma 4 and just keep working. Obviously, running offline means I don’t have access to online-dependent tools like Google Search, Deep Research, or Image Generation. But for synthesizing notes, organizing projects, or drafting content securely, it is incredibly powerful.
In the future, we will be refining the system to allow you to pick the model you want on a per-function basis. This means you’ll be able to route sensitive, local text processing to an offline model while still leveraging cloud models for heavy-lifting tasks like Deep Research or Image Generation when you are connected.
Moving from Guessing to Measuring
As the agent loop gets more complex (handling runaway loop aborts and budget constraints), we can no longer rely on “vibes” to know whether a change improved the system.
To solve this, I built a new CLI-driven eval harness (npm run eval) that drives a live Obsidian instance. It captures turns, tool calls, token usage, cache ratios, and cost. Crucially, it measures reliability. By passing --repeat=N, the harness repeats each task to surface flakiness, reporting a pass^k metric. We can now test multi-hop retrieval and loop-trap cyclic references programmatically, ensuring the agent bails cleanly instead of spinning forever.
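The intuition behind pass^k can be sketched in a couple of lines. This is an illustrative estimator, not the harness's exact code: if a task passes a fraction p of N repeated runs, the probability that k independent attempts all pass is estimated as p^k.

```typescript
// Hedged sketch of the pass^k reliability metric.
// results: pass/fail outcomes from --repeat=N runs of one task.
function passPowK(results: boolean[], k: number): number {
  const p = results.filter(Boolean).length / results.length;
  return Math.pow(p, k);
}

// A task that passes 3 of 4 repeats looks fine at pass@1 (0.75),
// but pass^2 already drops to ~0.56, surfacing the flakiness.
const score = passPowK([true, true, true, false], 2);
```

Raising k makes the metric punish flakiness harshly, which is the point: an agent you schedule unattended has to pass every time, not most times.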
Right now, the focus for 4.8.0 was getting this infrastructure in place and establishing the beginnings of our eval set. Having the harness is the first step; the next step is building out a robust suite of test cases that reflect real-world vault interactions.
I would love to see contributions from the community for the evals themselves! If you have complex agentic workflows or edge cases you want to ensure remain stable, please submit them. In the next release, we will start publishing the actual eval results and benchmarks directly in the repo so we can transparently track the agent’s performance over time.
What’s Next?
What does this implementation tell us about the future of software engineering and personal knowledge management?
We are seeing a clear shift toward ambient AI. The chat interface is a great starting point, but the true value of an agentic system is its ability to operate asynchronously. While the scheduling engine in 4.8.0 acts as a highly capable cron job, it lays the groundwork for the event-driven lifecycle hooks coming in the next release.
By combining the AgentLoop extraction with asynchronous execution, Gemini Scribe is no longer just a tool you use; it is becoming a system that reacts and works alongside you. When you can rely on a background orchestrator to run your housekeeping routines (like updating changelogs or triaging issues) while you eat dinner, the vault becomes a living, breathing entity. The agent becomes a true extension of your workflow, utilizing the built-in skills we’ve developed entirely in the background.
Gemini Scribe 4.8.0 is a massive architectural leap forward. The code is cleaner, the tests are faster (thanks to a Vitest migration), and the agent is more autonomous than ever.
If you want to dive into the specifics or try out the new scheduling grammar, check out the updated documentation on scheduled tasks.
Let me know what automated tasks you end up building. I’m already finding new ways to let the agent do the heavy lifting while I focus on the work that matters.