1. Instead of recreating the "gooey internal network" anti-pattern with CNI, provide strong zero-trust authentication for service-to-service calls.
2. Integrate with public networks. With IPv6, there's no _need_ for an overlay network.
3. Interoperability between several K8s clusters. I want to run a local k3s controller on my machine to develop a service, but this service still needs to call a production endpoint for a dependent service.
To the best of my knowledge, nothing is stopping you from doing any of those things right now, including, ironically, authentication for pod-to-pod calls, since that's how Service Accounts work today. That even crosses the Kubernetes API boundary thanks to IRSA and, if one were really advanced, any OIDC-compliant provider that would trust the OIDC issuer in Kubernetes. The eks-anywhere distribution even shows how to pull off this stunt from your workstation by publishing the JWKS to S3 or some other publicly resolvable HTTPS endpoint.
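To make that concrete, here's a rough sketch of what the receiving side could look like, assuming a hypothetical audience `my-service` and a placeholder EKS-style issuer URL: the callee treats the caller's projected ServiceAccount token as an ordinary OIDC bearer token and verifies it against the issuer's JWKS (the same JWKS the S3 trick publishes).

```go
// Sketch only: verify a caller's projected ServiceAccount token against the
// cluster's OIDC issuer. The issuer URL and audience are placeholders; the
// issuer's discovery document and JWKS just need to be reachable by the
// verifier (e.g. the publicly resolvable endpoint mentioned above).
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"strings"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Hypothetical issuer; for EKS this would be the cluster's OIDC issuer URL.
	issuer := "https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		log.Fatalf("discovering issuer: %v", err)
	}

	// The audience callers request when projecting their tokens.
	verifier := provider.Verifier(&oidc.Config{ClientID: "my-service"})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		tok, err := verifier.Verify(r.Context(), raw)
		if err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Subject looks like system:serviceaccount:<namespace>:<name>.
		fmt.Fprintf(w, "hello, %s\n", tok.Subject)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```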
I am not aware of any reason why you couldn't connect directly to any Pod (which necessarily includes the kube-apiserver's Pod) from your workstation, except for your own company's networking policies.
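As a quick illustration, a minimal sketch assuming your workstation can actually route to the Pod CIDR, your kubeconfig sits at the default path, and the Pods happen to serve HTTP on 8080 (all assumptions): there is nothing between you and a Pod IP.

```go
// Sketch only: list Pod IPs with client-go and hit them directly from a
// workstation. No Service, no Ingress, no port-forward in between.
package main

import (
	"context"
	"fmt"
	"net/http"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	client := &http.Client{Timeout: 2 * time.Second}
	for _, pod := range pods.Items {
		// Dial the Pod IP directly; 8080 is a placeholder port.
		resp, err := client.Get(fmt.Sprintf("http://%s:8080/", pod.Status.PodIP))
		if err != nil {
			fmt.Printf("%s (%s): %v\n", pod.Name, pod.Status.PodIP, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s (%s): %s\n", pod.Name, pod.Status.PodIP, resp.Status)
	}
}
```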
Service accounts are _close_ to what I want, but not quite there: they aren't seamless enough for service-to-service calls.
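For context, the caller side today looks roughly like this (the token mount path, volume name, and target URL are all made up): you project a token with the callee's audience into the Pod and attach it to every outbound request yourself, which is exactly the part that doesn't feel seamless.

```go
// Caller-side sketch: read a projected ServiceAccount token and send it as a
// bearer token. The mount path assumes a hypothetical serviceAccountToken
// projected volume named "caller-token"; the target URL is a placeholder.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/run/secrets/tokens/caller-token")
	if err != nil {
		log.Fatalf("reading projected token: %v", err)
	}

	req, err := http.NewRequest(http.MethodGet, "http://my-service.default.svc/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(string(raw)))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```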
> I am not aware of any reason why you couldn't connect directly to any Pod (which necessarily includes the kube-apiserver's Pod) from your workstation, except for your own company's networking policies.
I don't think I can create a pod on EKS that isn't attached to a private network?
K8s will happily let you run a pod with host networking, and even the original "kubenet" network implementation allowed calling pods directly, as long as your network wasn't hopelessly broken (which, in my experience, describes most enterprise networks).
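A minimal sketch of the host-networking part, using client-go; the pod name, namespace, and image are arbitrary placeholders.

```go
// Sketch: create a Pod that shares the Node's network namespace, so it is
// reachable at the Node's own IP like any other host process.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostnet-demo"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // share the Node's network namespace
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "nginx",
			}},
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
```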
In addition to what the sibling comment pointed out about that being an EKS-ism: yes, I know with 100% certainty that VPC-CNI will allocate Pod IP addresses from the Subnet's definition, which includes public IP addresses. We used that to circumvent the NAT GW tax, since all the Pods had Internet access without themselves being Internet accessible. Last I heard, one cannot run Fargate workloads in a public subnet, for reasons unknown to me, but that's the only mandate I'm aware of for the public/private delineation.
And, if it wasn't obvious: VPC-CNI isn't the only CNI, nor even the best CNI, since the number of ENIs (and thus Pod IPs) one can attach to a Node varies with its instance type, which caps Pod density per Node and is just stunningly dumb IMHO. Using an overlay network allows all Pods that can fit on a Node to actually run there.