
I think you're just used to AWS services and don't see the complexity there. I tried running some stateful services on ECS once and it took me hours to have something _not_ working. In Kubernetes it takes me literally minutes to achieve the same task (+ automatic chart updates with renovatebot).


I'm not saying there's no complexity. It exists, and there are skills to be learned, but once you have the skills, it's not that hard.

Obviously that part's not different from Kubernetes, but here's the part that is: maintenance and upgrades are either completely out of my scope or absolutely minimal. On ECS, it might involve switching to a more recently built AMI every six months or so. AWS is famously good about not making backward incompatible changes to their APIs, so for the most part, things just keep working.

And don't forget you'll need a lot of those AWS skills to run Kubernetes on AWS, too. If you're lucky, you'll get simple use cases working without them. But once PVCs aren't getting mounted, or pods are stuck waiting because you ran out of ENI slots on the box, or requests are timing out somewhere between your ALB and your pods, you're going to be digging into the layer between AWS and Kubernetes to troubleshoot those things.
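The ENI-slot issue above is a good example of that AWS-specific layer: with the AWS VPC CNI plugin, each pod consumes a secondary IP address on one of the instance's ENIs, so pod capacity per node is capped by the instance type's ENI and per-ENI IP limits. A quick sketch of the documented formula (the m5.large numbers, 3 ENIs with 10 IPv4 addresses each, are from AWS's published instance limits):

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """Max pods per node under the AWS VPC CNI.

    Each ENI's primary IP is reserved for the node itself, and two
    slots are added back for host-networking pods (kube-proxy, aws-node).
    """
    return enis * (ips_per_eni - 1) + 2

# m5.large: 3 ENIs, 10 IPv4 addresses per ENI
print(max_pods(3, 10))  # -> 29
```

Once you hit that cap, new pods sit in Pending even though the node has plenty of CPU and memory free, which is exactly the kind of failure you can only diagnose if you understand both layers.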

I run Kubernetes at home for my home lab, and it's not zero maintenance. It takes care and feeding, troubleshooting, and resolution to keep things working over the long term. And that's for my incredibly simple use cases (single-node clusters with no shared virtualized network, no virtualized storage, no centralized logs or metrics). I've been in charge of much more involved ones at work, and the complexity ceiling is almost unbounded. Running a distributed, scalable container orchestration platform is a lot more involved than piggybacking on ECS (or Lambda).



