Kubernetes is what happens when you need to support everyone's wants and desires within the core platform. The abstraction facade breaks and ends up exposing all of the underlying pieces because someone needs feature X. So much of Kubernetes' complexity is YAGNI (for most users).
Kubernetes 2.0 should be a boring pod scheduler with some RBAC around it. Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.
Kubernetes doesn't need a flipping package manager or charts. It needs to do one single job well: workload scheduling.
Kubernetes clusters shouldn't be bespoke and weird, with behaviors that change based on what flavor of plugins you added. That is antithetical to the principle behind the workloads you're trying to manage. You should be able to headshot the whole thing with ease.
Service discovery is just one of many things that should be a different layer.
> Let folks swap out the abstractions if they need it instead of having everything so tightly coupled within the core platform.
Sure, but then one of those third party products (say, X) will catch up, and everyone will start using it. Then job ads will start requiring "10 years of experience in X". Then X will replace the core orchestrator (K8s) with their own implementation. Then we'll start seeing comments like "X is a horribly complex, bloated platform which should have been just a boring orchestrator" on HN.
Kubernetes is what happens when you want to sell complexity, because complexity makes money and naturally gets you vendor lock-in even while being ostensibly vendor neutral. Never interrupt the customer while they are foot-gunning themselves.
Yup. Having migrated from a puppet + custom scripts environment and then terraform + custom scripts, K8s is a breath of fresh air.
I get that it's not for everyone, and I wouldn't recommend it for everyone. But once you start getting a pretty diverse ecosystem of services, k8s solves a lot of problems while being pretty cheap.
Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.
> Storage is a mess, though, and something that really needs to be addressed. I typically recommend people wanting persistence to not use k8s.
I have actually come to wonder if this is an AWS problem, and not a Kubernetes problem. I mention this because the CSI controllers seem to behave sanely, but they are only as good as the requests being fulfilled by the IaaS control plane. I secretly suspect that EBS just wasn't designed for such a hot-swap world.
Now, I posit this knowing I haven't had to run clusters in Azure or GCP, so I can't say whether my theory has legs.
I guess the counter-experiment would be to forgo the AWS storage layer and try Ceph or Longhorn, but no company I've ever worked at wants to blaze trails on that, so they just build up institutional tribal knowledge about treating PVCs with kid gloves.
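For what it's worth, the Longhorn version of that experiment isn't much YAML. A minimal sketch, assuming Longhorn is already installed in the cluster (the class name, replica count, and sizes below are illustrative, not a recommendation):

```yaml
# Replicated storage class backed by Longhorn instead of EBS (illustrative values).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"          # keep copies on multiple nodes so a PVC isn't tied to one disk
  staleReplicaTimeout: "2880"    # minutes before a failed replica is cleaned up
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# A PVC that uses it; pods reference this claim by name as usual.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-replicated
  resources:
    requests:
      storage: 50Gi
```

Whether that actually behaves better than EBS under hot-swap churn is exactly the trail nobody wants to blaze.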
Honestly this feels like Kubernetes just solving the easy problems and ignoring the hard bits. You notice the pattern a lot after a certain amount of time watching new software being built.
> Yup. Having migrated from a puppet + custom scripts environment and then terraform + custom scripts, K8s is a breath of fresh air.
Having experience in both the former and the latter (in big tech), and then going on to write my own controllers and deal with fabric and overlay networking problems, I'm not sure I agree.
In 2025 engineers need to deal with persistence, they need storage, they need high performance networking, they need HVM isolation, and they need GPUs. When a philosophy starts to get in the way of solving real problems and your business falls behind, that philosophy will be left on the side of the road. IMHO it's destined to go the way of OpenStack when someone builds a simpler, cleaner abstraction, and it will take the egos of a lot of people with it when it does.
My life experience so far is that "simpler and cleaner" tends to be mostly achieved by ignoring the harder bits of actually dealing with the real world.
I used Kubernetes' (lack of) support for storage as an example elsewhere; it's the same sort of thing where you can look really clever and elegant if you just ignore the 10% of the problem space that's actually hard.
The problem is k8s is both an orchestration system and a service provider.
Grid/batch/tractor/cube are all much, much simpler to run at scale. Moreover, they can support complex dependencies (but mapping storage is harder).
But k8s fucks around with DNS and networking, and disables swap.
Making a simple deployment is fairly simple.
But if you want _any_ kind of CI/CD you need Flux, and for any kind of config management you need Helm.
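To be fair to the first point, the simple case really is simple. A minimal sketch of a Helm-free deployment you can `kubectl apply -f` (image and names are made up for illustration):

```yaml
# One Deployment, no Helm, no Flux: two replicas of a stock nginx image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27
          ports:
            - containerPort: 80
```

It's once you want that manifest templated per environment and reconciled from git that the Helm/Flux tax shows up.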
K8s has swap now. I am managing a fleet of nodes with 12TB of NVMe swap each. Each container gets a swap limit of (container memory request / node memory) * (total node swap). There's no way to specify swap demand in the pod spec yet, so it needs to be managed “by hand” with taints or some other correlation.
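For anyone who wants to try it, a minimal sketch of the kubelet side, assuming a recent release with the NodeSwap feature available (the sizes in the comments just redo the arithmetic above, they're not a recommendation):

```yaml
# KubeletConfiguration fragment enabling swap for containers.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true            # needed on versions where the gate isn't on by default
failSwapOn: false           # let the kubelet start on a node that has swap enabled
memorySwap:
  swapBehavior: LimitedSwap # swap limit derived per container, nothing in the pod spec
# With LimitedSwap the per-container limit works out to
#   (container memory request / node memory) * node swap,
# e.g. a container requesting 96Gi on a 768Gi node with 12Ti of swap
# gets (96 / 768) * 12Ti = 1.5Ti of swap.
```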