At MyPartner ISC, we build ERP apps for a modest but growing base of about 300 active users. Two years ago, we jumped into Kubernetes (K8s), chasing its promise of scalability. We thought it would future-proof our infrastructure. Instead, K8s overwhelmed our servers, caused outages, and left us fielding angry user complaints when services went down under resource overload. K8s was a mismatch for our scale. Here’s how Kubernetes nearly derailed us, why we ditched it for Docker Compose and Swarm, and how we eliminated outages to keep our users happy.
The Kubernetes Promise
Our journey started with Docker Compose, running smoothly on our two servers: a local on-premises server (16 CPU cores, 64 GB RAM) and a second dedicated to the database. It handled the microservices behind our apps just fine. But as we planned for growth, K8s beckoned with its hype: auto-scaling, self-healing pods, and enterprise-grade orchestration. We set up K8s clusters on VE and used Rancher for management. For a while, it worked: YAML configs gave us clean deployments, and we spun up staging environments for our 300 users in minutes.
Then, about a year in, the cracks appeared and our servers started to suffer.
The K8s Reality
Kubernetes shines for massive systems with thousands of nodes, but for our small setup, it was a disaster. With 300 users and 16 VMs (50 CPU cores, 242 GB RAM, 2870 GB disk), K8s’s overhead caused outages that frustrated our users. Here’s how it went wrong:
- Resource Overload Crashed Servers: K8s’s control plane (etcd, API server, scheduler) consumed 25–30% of CPU and memory at idle, eating 12–15 CPU cores and 60–73 GB RAM across our clusters. (A quick way to check this on your own cluster is sketched after this list.)
- Complexity Stalled Our Team: K8s’s learning curve was brutal. We spent 10–15 hours a week on YAML configs, RBAC policies, and debugging network issues. Instead of coding, we were wrestling with Helm charts and kubectl commands, delaying feature releases by 2–3 weeks.
- User Fury from Downtime: When our production machines maxed out during a query spike, K8s rescheduled pods onto already-stressed nodes, worsening the outage. We got 15+ angry calls and WhatsApp messages in one week alone. On top of that, K8s was costing us 40% more in compute and doubling our deployment times. We were firefighters, not builders, and our users were paying the price.
I personally spent EVERY WEEKEND carrying my laptop around, expecting the machines to go down at any minute and bracing for calls from management or clients directly.
Breaking Free: Back to Basics with Docker Compose and Swarm
In July 2025, I hit my breaking point. Users were frustrated, and our team was burned out (even the CEO was fed up with client calls). We reviewed our infrastructure and made a bold call: ditch K8s entirely. We returned to Docker Compose for its simplicity and added Docker Swarm as a lightweight scaling option. Here’s how we did it:
- Decommissioning K8s: We analyzed our 16 VMs and consolidated down to 8, freeing 18 CPU cores, 83 GB RAM, and 1170 GB of disk space.
- Rebuilding with Docker Compose: We rewrote the services for Paramedics, Digikids, and the others in `docker-compose.yaml`. Traefik handled routing, and Harbor stayed on for registries. Deployments became dead simple: `docker-compose up -d` on our local server or on our cloud server. No more pod evictions, just stable containers. (A minimal sketch follows this list.)
- Scaling with Swarm: For occasional spikes, we set up Swarm: `docker swarm init` on the local server, `docker swarm join` on the VPS. Scaling a service? `docker service scale paramedics_ms_gs=3`. It’s lightweight, with no etcd or scheduler bloat. (The exact commands are sketched below.)
- Monitoring to Prevent Outages: To keep users happy, we deployed Homarr and Portainer for intuitive dashboards, alongside Dash for lightweight metrics. Alerts now catch CPU/memory spikes before they crash VMs, and Trivy scans keep our images secure.
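For context, here is a minimal sketch of the kind of Compose setup described above. It is illustrative, not our production config: the service name, image path, and hostname are placeholders, and the Traefik options are trimmed to the basics:

```bash
# Illustrative only: writes a minimal compose file and starts the stack.
# Image path and hostnames are hypothetical placeholders.
cat > docker-compose.yaml <<'EOF'
version: "3.8"
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  paramedics:
    image: harbor.example.local/apps/paramedics:latest   # placeholder registry path
    labels:
      - traefik.http.routers.paramedics.rule=Host(`paramedics.example.local`)
      - traefik.http.services.paramedics.loadbalancer.server.port=8080
EOF

# One command, no pods, no scheduler
docker-compose up -d
```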
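The Swarm side is just as small. These are the same commands mentioned in the list, in order; the join command and manager address are placeholders for whatever `swarm init` prints on your machine:

```bash
# On the local on-prem server: make this node the Swarm manager
docker swarm init

# `docker swarm init` prints a join command with a token; run it on the VPS:
# docker swarm join --token <token> <manager-ip>:2377

# During a spike, scale the hot service up; drop it back when the spike passes
docker service scale paramedics_ms_gs=3
docker service scale paramedics_ms_gs=1
```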
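For the image scans, Trivy’s CLI makes it easy to gate a deploy on serious findings. A sketch with a placeholder image path (our actual pipeline wiring differs):

```bash
# Fail (exit code 1) if the image has HIGH or CRITICAL vulnerabilities
trivy image --severity HIGH,CRITICAL --exit-code 1 harbor.example.local/apps/paramedics:latest
```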
Since the switch, we’ve had zero “angry” calls about downtime. Our team now ships in peace, unburdened by K8s complexity.
I’ll never forget that night at 9 PM when the CEO and I stayed late in the office, transferred all our databases to a cloud server, and clicked together to delete our three K8s machines: a symbolic end to the chaos.
Why K8s Isn’t Right for Us (Yet)
Don’t get me wrong: Kubernetes is a beast for the right use case. If you’re running thousands of nodes or millions of users, its auto-scaling and fault tolerance are unmatched. But it wasn’t right for ours: our workloads are steady, not spiky, and our team is lean, not an army of SREs and DevOps engineers.
Here’s what I learned:
- Small Teams, Simple Tools: With few engineers, we need tools that improve velocity, not slow us down. Docker Compose delivers 80% of K8s’s benefits with 20% of the hassle.
- Scale Smart, Not Big: Swarm handles our occasional spikes (e.g., 50–100 new users) without K8s’s overhead. We’ll revisit K8s when we hit 3,000+ users, not 300.
The Road Ahead: Keeping Users Happy
Our users don’t care about our tech stack; they care about uptime and features.
If your servers are crashing and users are angry, take a hard look at your stack. Sometimes, the “best” tool is the one that doesn’t strangle your progress.