So far, we have deployed clusters with a Public API Endpoint, which means the kube-apiserver deployed by EKS can be accessed over the Internet. For Public Endpoint clusters, you can restrict access to the API using the publicAccessCidrs option. EKS also provides a way to deploy Private Endpoint clusters; in that case, the cluster API endpoint is not exposed to the Internet and can only be accessed from within the AWS virtual private cloud.
Public vs Private in worker nodes:
In public endpoint mode, all requests from worker nodes to the kube-apiserver leave the VPC but stay inside the AWS global network. In private mode, all traffic to the kube-apiserver must come from within the VPC. In mixed mode, traffic from within the VPC goes through the VPC itself, and the cluster is also accessible over the Internet for kubectl communications.
Public vs Private with Kubectl:
In public and mixed modes, you can access your cluster over the Internet, and that access can be restricted using the publicAccessCidrs option. In private mode, you can connect to the cluster using a VPN, a connected network, an EC2 bastion host, or any other way that lets you send requests from within the virtual private cloud.
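For example, assuming a cluster named kubedemy that still has its public endpoint enabled, access can be limited to a single range with update-cluster-config; the CIDR below is only a placeholder:
# 203.0.113.0/24 is a placeholder; use your own office/VPN range
aws eks update-cluster-config \
--name kubedemy \
--resources-vpc-config publicAccessCidrs=203.0.113.0/24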
Private API Endpoint vs Private Worker nodes:
As you have realized, a cluster with a Private API Endpoint means the kube-apiserver is not accessible over the Internet. On the other hand, there are two other terms, Private-network cluster and Air-gapped cluster, which relate to the worker nodes’ communication with the Internet. In private-network clusters, worker nodes sit in private subnets without public IP addresses and reach the Internet through a NAT gateway. In air-gapped clusters, worker nodes sit in private subnets with no Internet access at all. I will explain these types in depth, with examples, in future articles.
Private API Endpoint cluster deployment:
- Enable enableDnsHostnames and enableDnsSupport VPC options.
- Enable AmazonProvidedDNS DHCP option.
- Create an EKS cluster with Private API Endpoint.
- Deploy a bastion EC2 instance to connect to the cluster.
- Confirm cluster installation with Kubectl.
Step 1 – Enable needed VPC options:
We already enabled these options in the first article to be able to deploy EKS clusters, but to confirm they are set, run the following commands:
aws ec2 modify-vpc-attribute \
--vpc-id vpc-09a71d02067cf835d \
--enable-dns-hostnames
aws ec2 modify-vpc-attribute \
--vpc-id vpc-09a71d02067cf835d \
--enable-dns-support
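These modify commands are idempotent, but if you only want to check the current values, describe-vpc-attribute reads them back (one attribute per call):
aws ec2 describe-vpc-attribute \
--vpc-id vpc-09a71d02067cf835d \
--attribute enableDnsHostnames
aws ec2 describe-vpc-attribute \
--vpc-id vpc-09a71d02067cf835d \
--attribute enableDnsSupport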
Step 2 – Enable needed DHCP options:
We must enable the AmazonProvidedDNS option in the VPC’s DHCP options. Although this option is enabled by default, run the following command to check and confirm:
aws ec2 describe-dhcp-options
{
    "DhcpOptions": [
        {
            "DhcpConfigurations": [
                {
                    "Key": "domain-name",
                    "Values": [
                        {
                            "Value": "eu-west-2.compute.internal"
                        }
                    ]
                },
                {
                    "Key": "domain-name-servers",
                    "Values": [
                        {
                            "Value": "AmazonProvidedDNS"
                        }
                    ]
                }
            ],
            "DhcpOptionsId": "dopt-0c3ad00db38b27d36",
            "OwnerId": "231144931069",
            "Tags": []
        }
    ]
}
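If your VPC uses a custom DHCP options set without AmazonProvidedDNS, a sketch for fixing it is to create a new options set and associate it with the VPC; the dopt ID below is a placeholder for the ID returned by the first command:
aws ec2 create-dhcp-options \
--dhcp-configurations "Key=domain-name-servers,Values=AmazonProvidedDNS"
# replace dopt-xxxxxxxxxxxxxxxxx with the DhcpOptionsId returned above
aws ec2 associate-dhcp-options \
--dhcp-options-id dopt-xxxxxxxxxxxxxxxxx \
--vpc-id vpc-09a71d02067cf835d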
Step 3 – Deploy EKS with Private API Endpoint:
Run the following command to create the cluster with the private endpoint enabled and public access disabled:
aws eks create-cluster \
--name kubedemy \
--role-arn arn:aws:iam::231144931069:role/Kubedemy_EKS_Cluster_Role \
--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true,subnetIds=subnet-0ff015478090c2174,subnet-01b107cea804fdff1,subnet-09b7d720aca170608 \
--kubernetes-network-config serviceIpv4Cidr=172.20.0.0/16,ipFamily=ipv4 \
--kubernetes-version 1.28 \
--tags owner=kubedemy
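Cluster creation takes several minutes. As a quick confirmation, you can wait until the cluster becomes active and check that only the private endpoint is enabled:
aws eks wait cluster-active --name kubedemy
aws eks describe-cluster \
--name kubedemy \
--query "cluster.resourcesVpcConfig.[endpointPublicAccess,endpointPrivateAccess]"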
Step 4 – Deploy Bastion Host Instance:
A bastion instance is a server used to manage access to an internal or private network from an external network. Imagine we are on the east side of a river and want to get to the west side; to cross, we need a bridge. The bastion instance is that bridge: it lets us connect to resources within the VPC from external networks over the Internet.
To find the latest Amazon Linux 2023 image for the bastion host:
aws ec2 describe-images \
--max-items 1 --filters \
Name=architecture,Values=x86_64 \
Name=virtualization-type,Values=hvm \
Name=name,Values=al2023-ami-2023* \
Name=creation-date,Values=2023-07*
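The filters above may match more than one image; as an alternative sketch, you can let the CLI sort Amazon-owned images by creation date and return only the newest AMI ID:
aws ec2 describe-images \
--owners amazon \
--filters \
Name=architecture,Values=x86_64 \
Name=virtualization-type,Values=hvm \
Name=name,Values=al2023-ami-2023* \
--query "sort_by(Images, &CreationDate)[-1].ImageId" \
--output text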
Create a new Security Group and allow SSH connections:
aws ec2 create-security-group \
--description "Kubedemy Bastion Host Instance" \
--group-name Kubedemy_SG_Bastion_Host_Instance \
--vpc-id vpc-09a71d02067cf835d \
--tag-specifications "ResourceType=security-group,Tags=[{Key=owner,Value=kubedemy}]"
aws ec2 authorize-security-group-ingress \
--group-id sg-00cbdaca52422606c \
--cidr 0.0.0.0/0 \
--protocol tcp \
--port 22
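Opening port 22 to 0.0.0.0/0 is acceptable for a quick lab, but for anything longer-lived you may want to restrict SSH to your own address; as a sketch, replace the placeholder CIDR with your real public IP:
aws ec2 revoke-security-group-ingress \
--group-id sg-00cbdaca52422606c \
--cidr 0.0.0.0/0 \
--protocol tcp \
--port 22
# 203.0.113.10/32 is a placeholder; use your own public IP
aws ec2 authorize-security-group-ingress \
--group-id sg-00cbdaca52422606c \
--cidr 203.0.113.10/32 \
--protocol tcp \
--port 22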
Now, create the bastion host instance:
aws ec2 run-instances \
--image-id ami-020737107b4baaa50 \
--instance-type t2.micro \
--key-name kubedemy \
--security-group-ids sg-00cbdaca52422606c \
--subnet-id subnet-0ff015478090c2174 \
--tag-specifications "ResourceType=instance,Tags=[{Key=owner,Value=kubedemy}]" \
--associate-public-ip-address \
--count 1
Note: Replace all values with your own.
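Once the instance is running, you can look up its public IP address and connect over SSH; the instance ID below is a placeholder for the InstanceId returned by run-instances, and kubedemy.pem is assumed to be the private key of the kubedemy key pair:
# replace i-xxxxxxxxxxxxxxxxx with your instance ID
aws ec2 describe-instances \
--instance-ids i-xxxxxxxxxxxxxxxxx \
--query "Reservations[0].Instances[0].PublicIpAddress" \
--output text
# ec2-user is the default user on Amazon Linux; use the IP returned above
ssh -i kubedemy.pem ec2-user@<public-ip>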
To allow traffic from the Bastion Host to the kube-apiserver, allow its security group in the cluster security group:
aws ec2 authorize-security-group-ingress \
--group-id sg-09ff6d3276b8ff697 \
--source-group sg-00cbdaca52422606c \
--protocol all
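The sg-09ff6d3276b8ff697 group above is the Cluster Security Group that EKS created; if you need to look it up for your own cluster, describe-cluster returns it:
aws eks describe-cluster \
--name kubedemy \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
--output text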
Step 5 – Confirm EKS Cluster installation:
Connect to the instance using SSH, install aws-cli and kubectl, and confirm the cluster installation with the following commands. For worker nodes, you can deploy both public and private ones. In previous articles, I explained how to deploy public-network worker nodes, and in future articles, I will explain how to deploy private-network and air-gapped worker nodes in AWS EKS as well.
Note: AWS CLI is available on Amazon Linux images.
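kubectl, however, is not pre-installed; a minimal sketch is to download the upstream v1.28.4 binary to match the cluster’s Kubernetes version (you may prefer your distribution’s package or the EKS-distributed binary instead):
# v1.28.4 matches the 1.28 cluster deployed above
curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl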
aws configure
aws eks update-kubeconfig --name kubedemy
kubectl auth can-i "*" "*"
To add worker nodes, follow these articles:
AWS EKS – Part 3 – Deploy worker nodes using managed node groups
AWS EKS – Part 4 – Deploy worker nodes using custom launch templates
AWS EKS – Part 5 – Deploy self-managed worker nodes
AWS EKS – Part 6 – Deploy Bottlerocket worker nodes and update operator
AWS EKS – Part 7 – Deploy ARM-based Kubernetes Worker nodes
AWS EKS – Part 8 – Deploy Worker nodes using Spot Instances
AWS EKS – Part 9 – Deploy Worker nodes using Fargate Instances
Results:
Here are the results of the previous commands; we will need them in the next articles:
VPC ID | vpc-09a71d02067cf835d
Public Subnet ID | subnet-0ff015478090c2174
Security Group Name | Kubedemy_SG_Bastion_Host_Instance
Security Group ID | sg-00cbdaca52422606c
Cluster Security Group ID | sg-09ff6d3276b8ff697
Bastion Host AMI ID | ami-020737107b4baaa50
Conclusion:
So far, you have learned how to deploy a cluster with a Private API Endpoint. This is the first step in implementing EKS security best practices. In future articles, you will learn how to deploy a cluster with a Private API Endpoint and private-network worker nodes.
If you like this series of articles, please share them and write your thoughts in the comments. Your feedback encourages me to complete this extensive program, and I’ll make you an AWS EKS black belt.
Follow my LinkedIn https://www.linkedin.com/in/ssbostan
Follow Kubedemy LinkedIn https://www.linkedin.com/company/kubedemy
Follow Kubedemy Telegram https://telegram.me/kubedemy