
A Deep Dive into AWS EKS Networking and Its Best Practices

January 12, 2024

Deploying a Kubernetes cluster on AWS requires an understanding of networking in both AWS and Kubernetes. Here, we take a deeper dive into AWS EKS cluster networking and its best practices.

To deploy your containerized application onto AWS EKS, one of the key areas to address is planning for cluster networking and pod networking. When you deploy an EKS cluster, the Kubernetes control plane runs on EC2 instances hosted in a VPC in an Amazon-managed AWS account. You are not given access to those instances, as they are fully managed by AWS.

AWS EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane to the worker nodes. The worker node (data plane) resources are deployed in a VPC in the AWS account where you are responsible for hosting the containerized application. In the following cluster networking section, we will discuss the different options available to establish communication between the worker nodes and the control plane.

Cluster Networking: Below is a quick overview of EKS Cluster Components.

Control Plane: The control plane runs on a dedicated set of EC2 instances in an Amazon-managed AWS account. It provides an endpoint for the managed Kubernetes API server, which is used to establish communication with the cluster.

Data Plane: Kubernetes worker nodes run on EC2 instances in your organization’s AWS account. They use the API endpoint to connect to the control plane.
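
As a small illustration of this relationship, the API server endpoint that the worker nodes connect to, along with the current endpoint access settings, can be read back with the AWS SDK for Python. This is a minimal sketch; the cluster name below is a placeholder.

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # "demo-cluster" is a placeholder cluster name.
    cluster = eks.describe_cluster(name="demo-cluster")["cluster"]

    # The API server endpoint that worker nodes and kubectl connect to,
    # plus the current endpoint access configuration.
    print("endpoint:", cluster["endpoint"])
    print("public access:", cluster["resourcesVpcConfig"]["endpointPublicAccess"])
    print("private access:", cluster["resourcesVpcConfig"]["endpointPrivateAccess"])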

API endpoint access control lets you choose whether to keep the endpoint completely private, make it reachable from the public internet, or limit its accessibility over the public internet. EKS lets you control API endpoint access with any of the networking modes below.

Public endpoint only

This is the default behavior for new Amazon EKS clusters. In this mode, Kubernetes API requests that originate from within your cluster's VPC (such as worker node to control plane communication) leave the VPC but not Amazon's network. In this case, the worker nodes should be deployed in a public subnet, or in a private subnet with a route to a NAT gateway. The cluster's API server is accessible from the internet, and by default the public endpoint is reachable from anywhere (0.0.0.0/0).

You can optionally limit the CIDR blocks that can access the public endpoint. If you choose to limit API server access to specific CIDR blocks, it is recommended to enable both the public and private endpoints.
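
As a rough sketch of this recommendation using the AWS SDK (the cluster name and CIDR block below are placeholders), you can restrict the public endpoint to specific CIDR blocks while keeping the private endpoint enabled:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # Restrict the public API endpoint to a specific CIDR block and keep the
    # private endpoint enabled so worker nodes can still reach the control plane.
    # "demo-cluster" and 203.0.113.0/24 are placeholder values.
    eks.update_cluster_config(
        name="demo-cluster",
        resourcesVpcConfig={
            "endpointPublicAccess": True,
            "endpointPrivateAccess": True,
            "publicAccessCidrs": ["203.0.113.0/24"],
        },
    )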

Public and Private endpoints

When access through both private and public endpoints is enabled, Kubernetes API requests within your cluster’s VPC (worker node to control plane communication) use the private VPC endpoint. The cluster API server is accessible from the internet. When endpoint private access is enabled for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster’s VPC.

This private hosted zone is managed by Amazon EKS, and it doesn’t appear in your account’s Route 53 resources. For the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. Access to the cluster API server can be limited by specifying the CIDR blocks that can access the public endpoint.
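
To make the DNS requirement concrete, here is a minimal sketch (with a placeholder VPC ID) that checks whether enableDnsSupport and enableDnsHostnames are set to true:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

    # Both attributes must be true for the EKS-managed private hosted zone to
    # resolve the API server endpoint from inside the VPC.
    for attribute in ("enableDnsSupport", "enableDnsHostnames"):
        response = ec2.describe_vpc_attribute(VpcId=vpc_id, Attribute=attribute)
        # The response key is the attribute name with an uppercase first letter.
        key = attribute[0].upper() + attribute[1:]
        print(attribute, "=", response[key]["Value"])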

Private endpoint only

When only the private endpoint is enabled, all traffic to your cluster API server must come from within your cluster’s VPC or a connected network. There is no public access to your API server from the internet, so any kubectl commands must come from your VPC or a connected network.
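
Switching an existing cluster to this mode uses the same endpoint access settings as the earlier sketch; again, the cluster name below is a placeholder:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # Disable the public endpoint entirely; afterwards, kubectl and node traffic
    # must originate from the VPC or a connected network.
    # "demo-cluster" is a placeholder cluster name.
    eks.update_cluster_config(
        name="demo-cluster",
        resourcesVpcConfig={
            "endpointPublicAccess": False,
            "endpointPrivateAccess": True,
        },
    )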

VPC AND SUBNET REQUIREMENTS

It is very important to plan your VPC and subnet requirements for EKS at an early stage for a successful cluster deployment. A common failure scenario is a worker node deployment failing because the nodes cannot register with the API server, which is largely due to how the worker nodes and control plane networking is designed and deployed.

  • When you create a cluster, you specify a VPC and at least two subnets that are in different AWS Availability Zones.
  • Make sure that the VPC has enough IP addresses available for the cluster, its nodes, and any other Kubernetes resources that you want to create (a quick way to check candidate subnets is sketched after this list).
  • Since containerized applications are meant to scale, plan the CIDR range of the VPC used by the EKS cluster with future growth in mind.
  • The VPC must have DNS hostname and DNS resolution support enabled. Otherwise, nodes won’t be able to register themselves with your cluster.
  • The subnets must use IP address-based naming; EC2 resource-based naming isn’t supported.
  • If any pod requires inbound access from the internet, make sure to have at least one public subnet with enough available IP addresses to deploy load balancers and ingress controllers.
  • Load balancers can distribute traffic to pods in private or public subnets.
  • It is recommended to deploy your nodes in private subnets (in this case, you can use the public-and-private or private-only endpoint access modes).
  • If you choose to deploy your worker nodes in a public subnet, automatic assignment of public IP addresses must be enabled for that subnet (in this case, you would use the public-only endpoint access mode).
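
As referenced in the list above, here is a minimal sketch (with placeholder subnet IDs) that checks the Availability Zone spread, free IP count, and public IP auto-assignment of the subnets you plan to use:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder IDs for the subnets you plan to pass to the EKS cluster.
    subnet_ids = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]

    response = ec2.describe_subnets(SubnetIds=subnet_ids)
    for subnet in response["Subnets"]:
        print(
            subnet["SubnetId"],
            subnet["AvailabilityZone"],
            "free IPs:", subnet["AvailableIpAddressCount"],
            "auto-assign public IP:", subnet["MapPublicIpOnLaunch"],
        )

    # EKS requires subnets in at least two different Availability Zones.
    zones = {s["AvailabilityZone"] for s in response["Subnets"]}
    assert len(zones) >= 2, "EKS requires subnets in at least two Availability Zones"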

We have discussed how networking is vital to AWS EKS cluster configuration, along with some of the best practices for AWS EKS cluster design.

Codincity specializes in Cloud Native Foundation design and application modernization with AWS EKS. We can also help you with a cost-neutral Well-Architected Review. Reach out to us for a conversation on this topic; we will be happy to discuss and provide appropriate solutions.
