This site — the one you're reading right now — runs on the infrastructure described below. It's a Flask app containerized with Podman, deployed to Kubernetes via a GitOps workflow. No cloud providers. Just hardware in a closet and some YAML files.

The Stack

🖥️

Proxmox VE — Hypervisor Layer

The foundation. Proxmox handles VM lifecycle, live migration, snapshots, and cluster management. The web UI is good enough for day-to-day ops, and the CLI (qm, pvesh) means I can automate everything with Ansible. I take snapshots before doing anything I might regret.
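As a sketch of what that automation can look like, here's a hypothetical Ansible task that takes a pre-change snapshot through the Proxmox API using the community.general.proxmox_snap module. The host, token, and VM ID are placeholders, not values from my actual setup.

```yaml
# Hypothetical play: snapshot a VM before doing anything regrettable.
# api_host, credentials, and vmid are placeholders.
- name: Snapshot VM before maintenance
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Take a pre-change snapshot via the Proxmox API
      community.general.proxmox_snap:
        api_host: pve.example.lan
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ vault_pve_token }}"
        vmid: 101
        snapname: pre-maintenance
        state: present
```

The same thing is a one-liner on the host itself (`qm snapshot 101 pre-maintenance`), but routing it through Ansible keeps it in the playbooks with everything else.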

🐧

RHEL VMs — OS Layer

Every node runs RHEL — not a clone, actual RHEL on a free developer subscription. This keeps the homelab mirroring what I work with professionally. I get proper package repos, security advisories, and subscription-manager behaves exactly like it does on a client engagement. Each VM is provisioned with Ansible from a template.
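Provisioning from a template can be sketched with the community.general.proxmox_kvm module, which clones a VM from an existing template. Template name, node, and VM names here are illustrative assumptions, not my real inventory.

```yaml
# Hypothetical sketch: clone a new RHEL VM from a Proxmox template.
# Node, template, and VM names are placeholders.
- name: Provision a RHEL VM from the golden template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone the template into a new VM
      community.general.proxmox_kvm:
        api_host: pve.example.lan
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ vault_pve_token }}"
        node: pve1
        clone: rhel9-template
        name: k8s-worker-03
        newid: 113
        state: present
```

After the clone boots, the usual RHEL steps (subscription-manager registration, baseline config) run as later plays against the new host.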

☸️

Kubernetes — Orchestration Layer

Upstream Kubernetes on the RHEL VMs. The cluster spreads control-plane and worker nodes across the Proxmox hosts. I use it for everything deployed here: this website, internal tools, and whatever experiment I'm running this month. Namespaces keep things tidy; cert-manager handles TLS automatically.
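"cert-manager handles TLS automatically" usually boils down to a ClusterIssuer plus one Ingress annotation. A minimal sketch, assuming an ACME issuer named letsencrypt-prod and an nginx ingress class (both assumptions, not details from this cluster):

```yaml
# Hypothetical cert-manager setup: issuer name, email, and hostname are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
---
# An Ingress that requests its certificate from the issuer above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
  namespace: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website
                port:
                  number: 80
  tls:
    - hosts:
        - www.example.com
      secretName: website-tls
```

cert-manager watches the annotation, completes the HTTP-01 challenge, and keeps the certificate in the website-tls Secret renewed.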

🔄

GitOps — Delivery Layer

Every cluster manifest lives in Git. ArgoCD watches the repository and reconciles cluster state automatically. When I push a new container image tag, ArgoCD detects the drift and syncs. No kubectl apply from a laptop. Infrastructure as code is a constraint I hold myself to even at home — it keeps things reproducible and makes me document what I've done.
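The "watch and reconcile" behavior is an ArgoCD Application resource with automated sync enabled. A sketch with placeholder repo URL, path, and namespace (none of these are from the actual setup):

```yaml
# Hypothetical ArgoCD Application: repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: website
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/homelab/manifests.git
    targetRevision: main
    path: apps/website
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes made outside Git
```

With selfHeal on, even a stray kubectl edit gets reverted to whatever Git says, which is exactly the constraint described above.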

🤖

Ansible — Automation Layer

Provisioning, patching, configuration drift correction — all Ansible. The goal is a single ansible-playbook site.yml run that takes a fresh Proxmox host to a functional Kubernetes node in under 10 minutes. I'm close. I'll write it up for the blog when it's done.
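The overall shape of such a site.yml might look something like this. The role names and group name are purely illustrative, a guess at how the real playbook could be organized:

```yaml
# Hypothetical site.yml structure: fresh RHEL VM to Kubernetes node.
# Role and group names are placeholders.
- name: Base OS configuration
  hosts: k8s_nodes
  become: true
  roles:
    - common       # users, SSH keys, subscription-manager, baseline packages
    - hardening    # firewalld rules, SELinux booleans, sysctl tuning

- name: Container runtime and Kubernetes
  hosts: k8s_nodes
  become: true
  roles:
    - containerd   # runtime install and config
    - kubernetes   # kubeadm install, cluster join or init
```

Splitting OS baseline from Kubernetes-specific roles keeps the first play reusable for VMs that never join the cluster.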


Why bother?

Honest answer: because breaking things at home is cheaper and more educational than breaking them at work. Every concept I've applied to enterprise Kubernetes deployments — RBAC design, GitOps workflows, security hardening, storage classes, ingress configuration — I've first made work, then broken, then fixed in the homelab.

There's also something satisfying about running your own infrastructure end-to-end. I know where every bit of data lives, I control the TLS certificates, and the only SLA is the one I set for myself. The power bill is real. The learning is worth it.

Read homelab posts on the blog