AWS provides an easy way to deploy the Kubernetes cluster control plane with the EKS service. The control plane consists of the etcd database and the Kubernetes components kube-apiserver, kube-controller-manager, kube-scheduler, and cloud-controller-manager, which are deployed and managed by AWS in an AWS-managed account.
When you create an EKS cluster, all managed components are created in a separate AWS-managed account and provided to you as a service. AWS guarantees cluster high availability by deploying at least two instances of Kubernetes components and at least three instances of etcd database. These components are managed entirely by AWS, and you don’t need to worry about their lifecycle.
Attach AmazonEKSClusterPolicy policy to cluster role.
Create an EKS cluster with AWS CLI, Terraform or eksctl.
Confirm cluster installation and create the kubeconfig file.
Investigate what we have after creating an EKS cluster.
Step 1 – Create IAM role and attach the policy:
Your EKS cluster needs permissions to act in your account on your behalf: create and manage volumes, describe network components, and create routes, security groups, load balancers, and more.
Important: For each cluster, you should create a separate cluster role.
To create the trust relationship policy, run this command:
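The following is a sketch of Step 1. The role name kubedemy-cluster-role and the policy file name are assumptions; any names work. The trust policy allows the EKS service to assume the role.

```shell
# Write the trust relationship policy to a file.
# It permits eks.amazonaws.com to assume the role.
cat > eks-cluster-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the cluster role with the trust policy above.
aws iam create-role \
  --role-name kubedemy-cluster-role \
  --assume-role-policy-document file://eks-cluster-trust-policy.json

# Attach the AWS-managed AmazonEKSClusterPolicy to the role.
aws iam attach-role-policy \
  --role-name kubedemy-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```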
If not specified, serviceIpv4Cidr defaults to 10.100.0.0/16 or 172.20.0.0/16.
publicAccessCidrs defaults to 0.0.0.0/0.
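A create-cluster invocation for the AWS CLI option can be sketched as follows. The account ID, subnet IDs, and role name are placeholders you must replace with values from your own account; the serviceIpv4Cidr value shown is one of the defaults mentioned above.

```shell
# Create the EKS cluster using the role from Step 1.
# Replace <ACCOUNT_ID> and the subnet IDs with your own values.
aws eks create-cluster \
  --name kubedemy \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/kubedemy-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaa111,subnet-bbb222,endpointPublicAccess=true \
  --kubernetes-network-config serviceIpv4Cidr=10.100.0.0/16
```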
Creating the cluster may take a while. In the meantime, you can run the following command to wait for the cluster to become active. The wait command supports several cluster events, such as cluster-active, cluster-deleted, nodegroup-active, and nodegroup-deleted.
aws eks wait cluster-active --name kubedemy
Note: I will talk about cluster logging, private clusters, encryption at rest, ipv6 clusters, additional security groups, and a multitude of other things about EKS in future articles.
After 10-15 minutes, the cluster should be ready; you can then view the cluster information:
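One way to check the cluster details is with describe-cluster; the status field should read ACTIVE once provisioning finishes:

```shell
# Show only the cluster status (ACTIVE when ready).
aws eks describe-cluster --name kubedemy --query cluster.status --output text

# Or dump the full cluster description, including endpoint and VPC config.
aws eks describe-cluster --name kubedemy
```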
To connect to the EKS cluster, you also need the kubectl command.
Now, run this command to create the kubeconfig file:
bash-4.2# aws eks update-kubeconfig --name kubedemy
Added new context arn:aws:eks:eu-west-2:231144931069:cluster/kubedemy to /root/.kube/config
To check your access to the Kubernetes API Server, run the following:
kubectl auth can-i "*" "*"
kubectl get ns
It shows I can do anything, which means I’m the cluster admin.
The IAM principal that creates the cluster becomes the cluster admin; it is mapped to the Kubernetes system:masters RBAC group inside the cluster.
You can't see a ClusterRoleBinding for the cluster owner. AWS manages this mapping behind the scenes, and it does not appear in any Kubernetes objects.
You need to keep the cluster owner IAM user; otherwise, you may lose cluster access.
The cluster owner cannot be changed after creating the cluster.
Step 4 – Investigate EKS resources:
Cluster Security Group:
AWS EKS service creates a new security group for each cluster. This security group is assigned to Kubernetes control plane ENIs and managed node groups.
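You can look up the ID of this cluster security group via describe-cluster, for example:

```shell
# Retrieve the ID of the security group EKS created for the cluster.
aws eks describe-cluster \
  --name kubedemy \
  --query cluster.resourcesVpcConfig.clusterSecurityGroupId \
  --output text
```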
Cluster Control plane ENI:
AWS EKS creates between two and four ENIs for the Kubernetes control plane in different subnets. These ENIs allow worker nodes, applications, and other AWS resources within your VPC to communicate with the Kubernetes control plane.
Moreover, AWS EKS uses these ENIs to reach the worker nodes, which is what enables the kubectl logs and kubectl exec features for the cluster.
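You can inspect these ENIs from the CLI. In my experience, the control-plane ENIs carry a description like "Amazon EKS kubedemy"; this filter is an observation-based assumption, not a documented contract, and may change.

```shell
# List the control-plane ENIs for the cluster by their description.
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=Amazon EKS kubedemy" \
  --query "NetworkInterfaces[].{Id:NetworkInterfaceId,Subnet:SubnetId,IP:PrivateIpAddress}" \
  --output table
```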
Here is the architecture diagram of what we did in the previous steps:
Here are the results of the previous commands; we need them in the next articles:
So far, we have created an EKS cluster and confirmed access to it using the kubectl command. In the following articles, we will add worker nodes through different methods: managed node groups, self-managed nodes, managed nodes with custom AMIs, and Fargate profiles.
If you like this series of articles, please share them and write your thoughts in the comments. Your feedback encourages me to complete this ambitious plan. Share and give feedback, and I'll make you an AWS EKS black belt.