
> using container orchestration is about reliability, zero-downtime deployments

I think that's the first time I've heard any "techie" say we use containers because of reliability or zero-downtime deployments. Those feel like they have nothing to do with each other; we were building reliable server-side software with zero-downtime deployments long before containers became the "go-to", and if anything it was easier before containers.



It would be interesting to hear your story. Mine is that containers generally start an order of magnitude faster than VMs (in general! we can easily find edge cases), so e.g. horizontal scaling is faster. You say it was easier before containers; I say k8s, in spite of its complexity, is a huge blessing: teams can upgrade their own parts independently and do things like canary releases easily, with automated rollbacks and so on. It's so much faster than VMs or bare metal (which I still use a lot and don't plan to abandon anytime soon, but I understand their limitations).
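To make the "automated rollbacks" point concrete, here's a minimal sketch in Python of one way to do it, shelling out to kubectl: push a new image, wait for the rollout to finish, and undo it if it never goes healthy. The deployment name and image are made up for illustration; a real setup (or a tool like Argo Rollouts) would be more involved.

    import subprocess
    import sys

    DEPLOYMENT = "web"                          # hypothetical deployment name
    NEW_IMAGE = "registry.example.com/web:v2"   # hypothetical image tag

    def sh(*args):
        # Run a command, raising CalledProcessError on non-zero exit.
        subprocess.run(args, check=True)

    try:
        # Point the deployment at the new image; k8s starts a rolling update.
        sh("kubectl", "set", "image", f"deployment/{DEPLOYMENT}",
           f"{DEPLOYMENT}={NEW_IMAGE}")
        # Block until the rollout completes, or fail after a timeout.
        sh("kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}",
           "--timeout=120s")
    except subprocess.CalledProcessError:
        # Rollout stalled or failed: revert to the previous ReplicaSet.
        sh("kubectl", "rollout", "undo", f"deployment/{DEPLOYMENT}")
        sys.exit("rollout failed, rolled back")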


In general, my experience across two decades of running web services is "more moving parts == less reliable". The most reliable platforms I've helped manage have been the ones that avoided adding extra complexity until they really couldn't, and when I left they still deployed applications by copying a built binary to a Linux host, reloading the systemd service, switching the port in the proxy to let traffic hit the new service while health-checking it, and, once green, switching over and stopping the old service.
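As a rough sketch of that flow in Python (the unit name, ports, health endpoint, and nginx upstream file are all made up; assume a systemd template unit "app@<port>.service" and that the new binary is already on the host):

    import subprocess
    import time
    import urllib.request

    OLD_PORT, NEW_PORT = 8080, 8081
    UPSTREAM_FILE = "/etc/nginx/conf.d/app_upstream.conf"  # hypothetical path

    def sh(*args):
        subprocess.run(args, check=True)

    # 1. Start the new binary next to the old one, on a spare port.
    sh("systemctl", "start", f"app@{NEW_PORT}.service")

    # 2. Health-check it until green, or give up and bail out.
    for attempt in range(30):
        try:
            with urllib.request.urlopen(
                    f"http://127.0.0.1:{NEW_PORT}/healthz", timeout=2) as r:
                if r.status == 200:
                    break
        except OSError:
            pass
        time.sleep(1)
    else:
        sh("systemctl", "stop", f"app@{NEW_PORT}.service")
        raise SystemExit("new instance never became healthy, aborting")

    # 3. Point the proxy at the new port and reload (nginx reloads
    #    without dropping in-flight connections).
    with open(UPSTREAM_FILE, "w") as f:
        f.write(f"upstream app {{ server 127.0.0.1:{NEW_PORT}; }}\n")
    sh("nginx", "-s", "reload")

    # 4. Stop the old instance once traffic flows to the new one.
    sh("systemctl", "stop", f"app@{OLD_PORT}.service")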

Deploys usually took minutes (unless something was broken), scaling worked the same as with anything else (increase a number and redeploy), and there was no Kubernetes, Docker, or even containers as far as the eye could see.



