Docker Did Nothing Wrong (But I’m Trying Podman Anyway)

Hey everyone, welcome back to the homelab series! One of the constant themes in managing a growing homelab is figuring out the best way to run and orchestrate all the different services we rely on. For me, this has meant evolving my setup over time into distinct systems to keep things scalable and maintainable.

My current homelab nerve center is spread across a few key machines: ns1 and ns2 handle critical DNS redundancy, beluga is the fortress for storage and archives, and bubba acts as the powerhouse for all my AI experiments and compute-heavy tasks.

Up until now, Docker has been the backbone for deploying and managing services across these systems. Whether it’s containerizing AI models on bubba or managing my core network services, it’s been indispensable for packaging applications and keeping dependencies tidy. It’s served me well, allowing for rapid deployment and relatively easy management.

However, the tech landscape is always shifting, and exploring new tools is part of the homelab fun, right? Lately, I’ve been hearing more about Podman as a powerful, open-source alternative to Docker. Recent changes in the container world and simple curiosity led me to check out an excellent video overview of Podman (which I highly recommend watching).

This video really illuminated what Podman brings to the table and sparked a ton of ideas about how it could potentially fit into, and even improve, my homelab workflow. So, in this post, I want to walk through my current Docker-based setup in a bit more detail, share the specific Podman features from the video that caught my eye, and outline some experiments I’m planning for the future. Let’s dive in!

My Current Homelab: A Multi-Server Approach

As I mentioned, to keep things organized as my homelab grew, I settled on dedicating specific roles to my main servers using Proxmox VE as the foundation for virtualization:

  • ns1 and ns2 — The Backbone of Service Discovery: These identical servers run my critical internal DNS, ensuring all my services can find each other reliably. Redundancy here is key – if one fails, the other keeps everything connected.
  • bubba — The AI Workhorse: This is my compute powerhouse, equipped with a GPU and plenty of RAM. It’s dedicated to running local AI models like LLMs via Ollama and interacting with them through tools like Open WebUI. It handles tasks like podcast transcription, embeddings, and inference workloads.
  • beluga — The Keeper of the Archives: With its focus on storage, beluga houses my media library, data archives, and backups. It’s the long-term home for files and feeds data to bubba when needed.

This separation of duties has been crucial for keeping things maintainable and scalable.

Docker’s Role in My Current Setup

So, how do I actually run the services on these different machines? Docker and Docker Compose are absolutely central to making this multi-server setup manageable. Here’s a glimpse into how it’s wired together:

  • Base Services Everywhere: I have a base Docker Compose file that runs on most, if not all, of these servers (a rough sketch follows this list). It includes essential plumbing:
    • Traefik: My go-to reverse proxy, handling incoming traffic and routing it to the correct service container, plus managing SSL certificates.
    • Portainer Agent: Allows me to manage the Docker environment on each host from a central Portainer instance.
    • Watchtower: Automatically updates containers. (I use this cautiously – often pinning major versions in my compose files while letting Watchtower handle minor updates, though for rapidly evolving things like Ollama, I sometimes let it pull latest.)
    • Dozzle Agent: Feeds container logs to a central Dozzle instance for easy viewing.
  • DNS Servers (ns1/ns2): On top of the base services, the DNS servers have a dedicated compose file that adds CoreDNS, specifically using the coredns_omada project, which cleverly mirrors DHCP hostnames from my TP-Link Omada network gear into DNS – super handy! ns1 also runs the central Dozzle instance (the log-viewer UI) and Heimdall as my main homelab dashboard, providing a single-pane-of-glass overview. Docker makes running these critical but relatively lightweight infrastructure services incredibly straightforward.
  • AI Workloads (bubba): On the AI workhorse bubba, Docker is essential for managing the AI stack. I run ollama to serve LLMs and open-webui as a frontend, all containerized. This simplifies deployment, dependency management, and allows me to easily experiment with different models and tools without polluting the host system.
  • Storage Server Utilities (beluga): Even the storage server beluga runs containers. I have PostgreSQL running here, which primarily backs the Speedtest-Tracker service but also serves as my go-to relational database for any other containers or services that need one. Again, Docker neatly packages these distinct applications.
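
To make that concrete, here’s a rough sketch of what a base compose file along these lines could look like. Treat it as a minimal illustration under my own assumptions – the image tags, published ports, and mounts are placeholders, not my exact configuration:

    # Illustrative base stack (tags, ports, and paths are assumptions,
    # not my real file).
    cat > docker-compose.base.yml <<'EOF'
    services:
      traefik:
        image: traefik:v3.0
        command: --providers.docker=true
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped

      portainer-agent:
        image: portainer/agent:latest
        ports:
          - "9001:9001"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        restart: unless-stopped

      watchtower:
        image: containrrr/watchtower:latest
        command: --cleanup
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        restart: unless-stopped

      dozzle-agent:
        image: amir20/dozzle:latest
        command: agent
        ports:
          - "7007:7007"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped
    EOF

    # One command brings the plumbing layer up on each host.
    docker compose -f docker-compose.base.yml up -d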

Essentially, Docker Compose defines the what and how for each service stack on each server, and Docker provides the runtime environment. This containerization strategy is what allows me to easily deploy, update, and manage this diverse set of applications across my specialized hardware.

Video Insights: How Podman Could Fit into This Picture

Watching the video overview of Podman did more than just introduce another tool; it sparked concrete ideas about how its specific features could integrate with, and perhaps even improve, my current homelab operations distributed across ns1/ns2, bubba, and beluga.

Perhaps the most compelling concept showcased was Podman’s native support for Pods. While Docker Compose helps manage multiple containers, the idea of grouping tightly coupled containers – like my ollama and open-webui stack on bubba, potentially along with a future vector database – into a single, network-integrated unit feels intrinsically cleaner. Managing this AI application suite as one atomic Pod could simplify networking and lifecycle management significantly. I could even see potential benefits in treating the base services running on each host (traefik, portainer-agent, etc.) as a coherent Pod.
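
To illustrate what that might look like (a sketch, not a battle-tested config – the pod name, tags, ports, and volume names here are my own assumptions), recreating the stack as a Pod could be as simple as:

    # Create a pod; containers inside it share a network namespace,
    # so Open WebUI can reach Ollama on localhost.
    podman pod create --name ai-stack -p 3000:8080 -p 11434:11434

    podman run -d --pod ai-stack --name ollama \
      -v ollama-data:/root/.ollama \
      docker.io/ollama/ollama:latest

    podman run -d --pod ai-stack --name open-webui \
      -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
      -v open-webui-data:/app/backend/data \
      ghcr.io/open-webui/open-webui:main

Starting, stopping, or removing the whole suite then becomes a single podman pod start/stop/rm – exactly the lifecycle simplification I’m after. (GPU passthrough for bubba would need extra flags I’d have to work out in testing.)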

Another significant architectural difference highlighted is Podman’s daemonless nature. Running without a central, privileged daemon is interesting for a couple of reasons. While bubba has resources to spare, my leaner DNS servers (ns1/ns2) might benefit from even slight resource savings, though that needs practical testing. More importantly, this architecture often makes running containers as non-root (rootless) more straightforward. This has direct security appeal, especially for the complex AI applications processing data on bubba or the critical DNS infrastructure on ns1/ns2, potentially reducing the attack surface compared to running everything through a root daemon.
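
As a small taste of what rootless operation looks like (assuming a systemd-based host; the whoami image is just a convenient test target, not part of my stack):

    # Run a throwaway web service as a regular user – no daemon, no sudo.
    podman run -d --name whoami -p 8080:80 docker.io/traefik/whoami

    # Verify the process runs under my own UID on the host, not root.
    podman top whoami user huser

    # Optionally let user containers outlive the login session.
    loginctl enable-linger "$USER"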

Furthermore, the video demonstrated Podman’s ability to generate Kubernetes YAML manifests directly from running containers or pods. This feature is particularly exciting for a homelabber keen on learning! It presents a practical pathway to experimenting with Kubernetes distributions like K3s or Minikube. I could define my AI stack on bubba using Podman Pods and then export it to a Kubernetes-native format, greatly lowering the barrier to entry for learning K8s concepts with my existing workloads. Even outside of a full K8s deployment, having standardized YAML definitions could make my application deployments more portable and consistent.
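
The workflow, as I understand it, is pleasantly short – something along these lines, using the hypothetical ai-stack pod sketched earlier:

    # Snapshot the running pod as a Kubernetes manifest.
    podman generate kube ai-stack > ai-stack.yaml

    # Replay it with Podman itself (tearing down the original first)...
    podman pod rm -f ai-stack
    podman play kube ai-stack.yaml

    # ...or hand the same file to a real cluster such as K3s:
    # kubectl apply -f ai-stack.yaml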

Of course, for those who prefer a graphical interface, the video also touched upon Podman Desktop. While I currently use Portainer, exploring Podman Desktop could offer a different management perspective, perhaps one more focused on visualizing and managing these Pods. And crucially, knowing that Podman aims for Docker CLI compatibility for many common commands makes the idea of experimenting much less daunting – it suggests I wouldn’t have to relearn everything from scratch.

So, rather than just being ‘another container tool’, the video positioned Podman as offering specific solutions – particularly around multi-container application management via Pods, security posture through its daemonless design, and bridging towards Kubernetes – that seem highly relevant to the challenges and opportunities in my own homelab setup.

Future Homelab Goals: Experimenting with Podman

So, all this reflection on my current setup and the potential benefits highlighted in the video leads to the obvious next question: what am I actually going to do about it? While I’m not planning a wholesale migration away from Docker immediately – it’s deeply integrated and works well – the possibilities offered by Podman are too compelling not to explore.

My plan is to dip my toes into the Podman waters with a few specific, manageable experiments, leveraging the flexibility of my Proxmox setup:

  1. Dedicated Test Environment: Rather than installing Podman directly onto one of my existing servers like bubba, I’ll spin up a fresh virtual machine using Proxmox dedicated solely to Podman testing. This is one of the huge advantages of using Proxmox – I can create an isolated sandbox environment easily. This clean slate will be perfect for getting Podman installed, getting comfortable with the basic CLI commands (leveraging that Docker compatibility mentioned earlier), and working out any kinks without impacting my operational services.
  2. Migrating a Stack to a Pod: Once the test VM is set up, the real test will be taking my current ollama and open-webui Docker Compose stack (conceptually, at least) and recreating it as a Podman Pod within that VM. This will directly evaluate the Pod concept for managing related services and let me see how the networking and management feel compared to Compose in a controlled environment.
  3. Testing a Simple Service: To get a feel for basic container management and the daemonless architecture in this new VM, I’ll deploy a simpler, standalone service using Podman. Perhaps I’ll containerize a small utility or pull down a common image like postgres or speedtest-tracker just to compare the basic workflow (see the sketch after this list).
  4. Generating Kubernetes Manifests: Once I (hopefully!) have the AI stack running in a Podman Pod in the test VM, I definitely want to try the Kubernetes YAML generation feature. Even if I don’t deploy it immediately, I want to see how Podman translates the Pod definition into Kubernetes resources within this testbed. This feels like a practical homework assignment for my K8s learning goals.
  5. Exploring Podman Desktop: Finally, I’ll likely install and explore Podman Desktop within the test VM. I’m curious to see what its visualization and management capabilities look like, especially for Pods, compared to my usual tools.
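
For experiment 3, a first session in the test VM might look like this (assuming a Debian/Ubuntu guest – the package name, image tag, and credentials are placeholders):

    # Install Podman from the distro repos.
    sudo apt install podman

    # Pull up a standalone Postgres instance, Docker-style.
    podman run -d --name pg-test \
      -e POSTGRES_PASSWORD=changeme \
      -v pg-test-data:/var/lib/postgresql/data \
      -p 5432:5432 \
      docker.io/library/postgres:16

    # Familiar commands should mostly just work.
    podman ps
    podman logs pg-test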

This isn’t about finding a ‘winner’ between Docker and Podman right now, but rather about hands-on learning in a safe, isolated environment thanks to Proxmox. It’s about understanding the practical advantages and disadvantages of Podman’s approach before considering if or how I might integrate it into my primary homelab systems (ns1/ns2, bubba, beluga) later on. I’m looking forward to experimenting and, of course, I’ll be sure to share my findings and experiences here in future posts!

That’s the plan for now! Docker continues to be a vital part of my homelab, but exploring tools like Podman is essential for learning and potentially improving how things run. The video provided some great insights, and I’m excited to see how these experiments turn out.

What about you? Are you using Docker, Podman, or something else in your homelab? Have you experimented with Pods or rootless containers? Let me know your thoughts and experiences in the comments below!

3 thoughts on “Docker Did Nothing Wrong (But I’m Trying Podman Anyway)”

  1. I run NixOS inside LXC containers on Proxmox. NixOS has an “oci container” abstraction which I sometimes use to run “docker” containers, but it actually uses Podman under the surface. I do this in the rare case where a NixOS systemd service is unavailable or it seems like the containerized app will have better support. The Proxmox documentation is adamant about not running “app containers” in LXC, but I haven’t had any problems. I’m curious what benefits you see from the extra layer of abstraction.

    I also have a more homogeneous homelab environment because a) redundancy is very important to me and b) I got started at a time when low-cost used hardware seemed interesting (i.e. before GenAI). As a result, I let Proxmox HA (backed by ZFS replication) move the LXCs around (mainly during kernel upgrades, which I use to test out hardware failover). The NixOS learning curve is steep, but it pays off once you get over the hump. I build those LXC images on my laptop so the software stack is 100% reproducible, with data kept on a separate virtual disk.

    Thanks for the great posts!

  2. Thanks for the kind words, and for sharing your setup; this is exactly the kind of perspective I was hoping to hear more about when I wrote that post. I haven’t personally tried NixOS in LXC, but I’ve heard about it a few times and seen some really interesting tutorials online. It’s absolutely on my list of things to try, but I haven’t gotten around to it yet. Sounds like you’ve really leaned into its strengths: reproducibility, declarative config, and tight control over the stack.

    I agree the Proxmox docs can be a bit doctrinaire about avoiding app containers in LXC, and I suspect a lot of that comes down to support boundaries more than hard limitations. Your experience suggests that, with enough control (especially through Nix builds and well-separated data), you can make it work reliably. That kind of system confidence is exactly what makes homelabbing fun.

    That said, one of the reasons I’ve leaned toward running apps in VMs instead of LXC is risk containment. Since LXC shares the host kernel, the blast radius from a bad container (or even just a misbehaving app inside one) is wider. When the host is running critical services, including Proxmox itself, I like the clean isolation you get from a full KVM-based VM. I know the odds of a container crash taking down the host are low, but I still tend to favor separation when I’m trying out new stacks or experimenting with things like GPU passthrough, custom networking, or container runtimes.

    On that note, Docker (and lately Podman) gives me a layer of convenience I’ve come to rely on: health checks, lifecycle events, easy migration between machines, and a well-understood ecosystem. I’m using a fair number of apps like OpenWebUI, n8n, ChromaDB, and Whisper, and being able to spin them up with Compose across dev and prod environments makes life easier. Still, I’m thinking more about whether the added abstraction always pays for itself. I might try pushing a few core services closer to the metal with systemd units or even LXC-native setups and see how that feels.

    I’d love to hear more about how you structure your Nix configurations for your homelab: do you have a central flake that targets each container? How do you handle secrets and shared storage? If you’ve written anything up, I’d be eager to read it.
