I'm not sure why they state "although the AWS Load Balancer Controller is a fantastic piece of software, it is surprisingly tricky to roll out releases without downtime."

The AWS Load Balancer Controller uses readiness gates by default, exactly as described in the article. Am I missing something?

Edit: Ah, it's not on by default; it requires a label on the namespace. I'd forgotten about this. To be fair, though, the AWS docs do tell you to add this label.
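
For reference, a minimal sketch of that namespace label (the namespace name here is hypothetical; the label key is the one the controller's docs describe for pod readiness gate injection):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-app                # hypothetical namespace
      labels:
        # Tells the AWS Load Balancer Controller to inject readiness
        # gates into pods created in this namespace.
        elbv2.k8s.aws/pod-readiness-gate-inject: enabled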



I think the "label (edit: annotation) based configuration" has got to be my least favorite thing about the k8s ecosystem. They're super magic, completely undiscoverable outside the documentation, not typed, not validated (e.g. for mutually exclusive options), and they rely on introspecting the cluster, so they aren't part of the k8s solver.

AWS uses them for all of their integrations and they're never not annoying.
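
For anyone who hasn't run into this, a sketch of what it looks like in practice, using documented AWS Load Balancer Controller annotations on an Ingress (resource names here are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app                          # hypothetical
      annotations:
        # All controller behaviour is driven by free-form string
        # annotations like these; nothing here is schema-validated.
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/healthcheck-path: /healthz
    spec:
      ingressClassName: alb
      defaultBackend:
        service:
          name: my-app                      # hypothetical
          port:
            number: 80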


I think you mean annotations; labels and annotations are different things. And by the way, annotations can be typed and validated, via validating admission webhooks.
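
A minimal sketch of that, assuming a hypothetical in-cluster service that rejects Ingresses with invalid annotation combinations:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: ingress-annotation-validator        # hypothetical
    webhooks:
      - name: annotations.example.com           # hypothetical
        admissionReviewVersions: ["v1"]
        sideEffects: None
        failurePolicy: Fail
        rules:
          - apiGroups: ["networking.k8s.io"]
            apiVersions: ["v1"]
            operations: ["CREATE", "UPDATE"]
            resources: ["ingresses"]
        clientConfig:
          service:
            name: annotation-validator          # hypothetical service
            namespace: default
            path: /validate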


Yes, that is what we thought as well, but it turns out there is still a delay between the load balancer controller registering a target as offline and the load balancer actually draining it, so the pod can already be terminated while requests are still being routed to it. We ran some benchmarks to highlight that gap.
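
One common mitigation for that window, as a general Kubernetes pattern rather than anything specific to the article, is a preStop sleep so the pod keeps serving while the load balancer finishes deregistering it. A minimal sketch with hypothetical names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app                        # hypothetical
    spec:
      terminationGracePeriodSeconds: 60   # must exceed the preStop sleep
      containers:
        - name: app                       # hypothetical
          image: my-app:latest            # hypothetical
          lifecycle:
            preStop:
              exec:
                # Keep serving while the load balancer deregisters the
                # target; SIGTERM arrives only after this hook returns.
                command: ["sleep", "15"]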


You mean the problem you describe in "Part 3" of the article?

Damn it, now you've made me paranoid. I'll have to check the ELB logs for 502 errors during our deployment windows.


Exactly! We initially received some Sentry errors that piqued our curiosity.



