AWS EKS – Part 24 – Delete EKS Cluster and Resources
Every deployment is destroyed and deleted one day. When that time comes for your Kubernetes and EKS clusters, you must delete the cluster and all dependent resources, like worker nodes, node groups, Fargate profiles, storage, load balancers, etc., to avoid unexpected costs. In this lesson, you will learn how to delete an EKS cluster, find all of its dependencies, and delete them following best practices.
Note: You may be charged if you leave any external resource behind.
Follow our social media:
https://www.linkedin.com/in/ssbostan
https://www.linkedin.com/company/kubedemy
https://www.youtube.com/@kubedemy
Register for the FREE EKS Tutorial:
If you want to access the course materials, register from the following link:
Register for the FREE AWS EKS Black Belt Course
EKS Cluster Resource Lifecycle:
It’s a good question to ask which resources are bound to the cluster lifecycle and which ones are not. The answer depends on how each resource is managed in your environment and which AWS service is responsible for its lifecycle. In EKS, we have various dependent resources, like EC2 Auto Scaling groups, EC2 instances, Fargate profiles, etc., which EKS creates directly, as well as various independent resources, like EBS volumes and CLB/NLB/ALB load balancers, which are created by operators and controllers from within the cluster but are managed by other AWS services outside the EKS space. When you want to delete an EKS cluster, you must delete its dependent resources first; otherwise, EKS refuses the deletion and asks you to remove them. For example, you can’t delete an EKS cluster that still has an active node group. For independent resources, however, you are responsible for finding and removing them yourself; otherwise, you may be charged for whatever you leave behind.
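To see which dependent resources are still attached to the cluster before you start, you can list them with the AWS CLI. A quick check, assuming the cluster name placeholder NAME used throughout this lesson:
# List managed node groups attached to the cluster.
aws eks list-nodegroups --cluster-name NAME
# List Fargate profiles attached to the cluster.
aws eks list-fargate-profiles --cluster-name NAME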
EKS Cluster graceful delete procedure:
- Delete Kubernetes LoadBalancer Service resources.
- Delete Kubernetes Ingress resources.
- Delete Kubernetes PersistentVolume resources.
- Delete other independent resources.
- Delete Kubernetes worker nodes (managed and self-managed).
- Delete the EKS Kubernetes cluster.
Step 1 – Delete Kubernetes Service resources:
When you create a Service resource of type LoadBalancer, EKS asks the ELB service to create an external load balancer. Depending on how you configure the Service, the ELB service creates either a Classic Load Balancer (CLB) or a Network Load Balancer (NLB) for you. If you delete the cluster before deleting these load balancers, they will remain in your account, and you may be charged for them.
Note: Make sure to delete all load balancers created from within Kubernetes.
To find LoadBalancer Services:
kubectl get svc -A | grep LoadBalancer
To delete a service resource:
kubectl -n NAMESPACE delete svc NAME
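After the Service resources are gone, it’s worth confirming that no load balancers remain in the ELB service. A minimal check, assuming the AWS CLI is configured for the same account and region as the cluster:
# Classic Load Balancers.
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].LoadBalancerName"
# Network Load Balancers.
aws elbv2 describe-load-balancers --query "LoadBalancers[?Type=='network'].LoadBalancerName"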
Step 2 – Delete Kubernetes Ingress resources:
If you use the AWS Load Balancer Controller inside your cluster, it also provides Ingress controller capability by creating an Application Load Balancer (ALB) for Ingress resources. If you forget to delete Ingress resources, those ALBs will be left behind, and you may be charged for them. Always make sure to delete them.
To find Ingress resources:
kubectl get ingress -A
To delete an ingress resource:
kubectl -n NAMESPACE delete ingress NAME
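As with Services, you can verify that no ALBs are left after the Ingress resources are deleted. A small check, again assuming the same account and region:
# Application Load Balancers created for Ingress resources.
aws elbv2 describe-load-balancers --query "LoadBalancers[?Type=='application'].LoadBalancerName"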
Step 3 – Delete Kubernetes Persistent Volumes:
The other things you may leave behind are Kubernetes PersistentVolumes. If you use any kind of AWS storage, like EBS, EFS, etc., you must delete it manually before deleting the cluster, as EKS is not responsible for deleting storage volumes. To delete a PV resource, you must delete its PVC first, and no running Pod may still be using that PVC. So, to delete storage, you must delete the Pods, and if they are managed by other Kubernetes workloads, like Deployment, StatefulSet, ReplicaSet, Job, etc., you must delete those first to prevent Pod recreation.
To find all available storage volumes:
kubectl get pv
To delete a Persistent Volume resource:
# Delete running workloads first.
kubectl -n NAMESPACE delete po NAME
# Delete PersistentVolumeClaim resource.
kubectl -n NAMESPACE delete pvc NAME
# Delete PersistentVolume resource.
kubectl delete pv NAME
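If you also want to make sure no dynamically provisioned EBS volumes were left behind, you can search for them by tag. A sketch, assuming the volumes carry the kubernetes.io/created-for/pvc/name tag that Kubernetes provisioners usually add:
# Find EBS volumes that were provisioned for PVCs and still exist.
aws ec2 describe-volumes \
    --filters Name=tag-key,Values=kubernetes.io/created-for/pvc/name \
    --query "Volumes[].{ID:VolumeId,State:State}"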
Step 4 – Delete other related resources:
You may have many other AWS resources created by Kubernetes operators from within the cluster, which EKS does not manage, and you must take care of them.
Here are a couple of examples you may have:
- Route53 zones/records created by the External-DNS operator.
- Images built using Kaniko and pushed to the AWS ECR service.
- KMS keys created for Encryption At Rest or any other encryption purposes.
- AWS resources created by the Crossplane operator.
- VPC Lattice resources created by the AWS Gateway API controller.
- Applications created by the AWS Resilience Hub service.
- Reports created by the AWS GuardDuty and Detective services.
- AWS Cognito user pool created for EKS OIDC authentication.
And many more! Always look for other resources left behind.
You can also find all created resources with specific tags. In this course, we created all the resources with the owner=kubedemy tag to be able to find them easily.
cat <<EOF > search-query.json
{
"Type": "TAG_FILTERS_1_0",
"Query": "{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"owner\", \"Values\":[\"kubedemy\"]}]}"
}
EOF
aws resource-groups search-resources \
--resource-query file://search-query.json
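If you prefer a one-liner without a query file, the Resource Groups Tagging API returns roughly the same result:
aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=owner,Values=kubedemy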
Step 5 – Delete EKS Worker nodes:
After you delete all the resources mentioned above, it’s time to delete the EKS worker nodes. How you delete them depends on how you created them.
To delete fully managed node groups:
aws eks delete-nodegroup \
--cluster-name NAME \
--nodegroup-name NAME
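Note that delete-nodegroup only starts the deletion. If you script the teardown, you can wait until the node group is actually gone before moving on, using the built-in waiter:
aws eks wait nodegroup-deleted \
    --cluster-name NAME \
    --nodegroup-name NAME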
To delete managed node groups with custom launch templates:
aws eks delete-nodegroup \
--cluster-name NAME \
--nodegroup-name NAME
aws ec2 delete-launch-template \
--launch-template-name NAME
To delete self-managed worker nodes:
EKS knows nothing about your self-managed worker nodes and allows you to delete the cluster while they are still running, so make sure you remove them yourself.
aws ec2 terminate-instances \
--instance-ids ID
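If you’re not sure which instance IDs belong to the cluster, you can filter by the cluster ownership tag. A sketch, assuming your self-managed nodes were tagged kubernetes.io/cluster/NAME=owned, the common convention for cluster-owned EC2 resources:
# Find running instances tagged as owned by the cluster.
aws ec2 describe-instances \
    --filters "Name=tag:kubernetes.io/cluster/NAME,Values=owned" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId"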
If you use an EC2 Auto Scaling group, delete it as well:
aws autoscaling delete-auto-scaling-group \
--auto-scaling-group-name NAME \
--no-force-delete
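With --no-force-delete, the call fails if the group still has instances attached, which is the safer behaviour. If you have already drained the nodes and want the group and its remaining instances removed in one go, --force-delete is an option:
aws autoscaling delete-auto-scaling-group \
    --auto-scaling-group-name NAME \
    --force-delete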
To delete Fargate profiles:
aws eks delete-fargate-profile \
--cluster-name NAME \
--fargate-profile-name NAME
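Fargate profile deletion is also asynchronous and can take a few minutes; you can wait for it the same way:
aws eks wait fargate-profile-deleted \
    --cluster-name NAME \
    --fargate-profile-name NAME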
Step 6 – Delete EKS Cluster:
EKS doesn’t allow the cluster to be deleted before its node groups and Fargate profiles are removed, to prevent accidental cluster deletion. After you check the other resources and ensure everything is deleted completely, you can remove the cluster safely.
To delete the EKS cluster:
aws eks delete-cluster \
--name NAME
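The delete-cluster call also returns immediately. If you want to block until the control plane is fully removed, for example in a cleanup script, there is a waiter for that too:
aws eks wait cluster-deleted \
    --name NAME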
Conclusion:
I’m not happy to delete a cluster, but sometimes we have no choice. After removing the EKS cluster and all independent resources, check the AWS Cost Explorer to see if anything is left behind and remove it to avoid unexpected costs.
If you like this series of articles, please share them and write your thoughts as comments here. Your feedback encourages me to complete this massively planned program. I’ll make you an AWS EKS black belt.
Follow my LinkedIn https://www.linkedin.com/in/ssbostan
Follow Kubedemy LinkedIn https://www.linkedin.com/company/kubedemy
Follow Kubedemy Telegram https://telegram.me/kubedemy