When using the AWS VPC CNI, you can enable an additional feature, EC2 Security Groups for Pods, which lets you control network access per Pod, similar to Kubernetes Network Policy but implemented with native AWS Security Groups. In this lesson, you will learn how to set up the Security Groups for Pods feature.

Follow our social media:

https://linkedin.com/in/ssbostan

https://linkedin.com/company/kubedemy

https://youtube.com/@kubedemy

https://telegram.me/kubedemy

Register for the FREE EKS Tutorial:

If you want to access the course materials, register from the following link:

Register for the FREE AWS EKS Black Belt Course

Considerations and Limitations:

  • For IPv6 support, VPC CNI version 1.16.0 or later is required.
  • Most Nitro-based instances are supported, but the feature does not work on t-series instances such as t3.medium; you must choose other Nitro-based instance types.
  • If you use custom networking (the ENIConfig resource) and Security Groups for Pods at the same time, the security group assigned through SGs for Pods is used for the Pod’s networking instead of the one specified in the ENIConfig resource.
  • On VPC CNI 1.11 and later, with the standard Pod Security Group enforcing mode, outgoing traffic is NATed at the worker node, and the Security Group assigned to the worker node’s primary network interface is used. If you want the outbound rules of the Pod’s Security Group to apply to outgoing traffic, set the enforcing mode to strict.
  • To use Calico network policy in addition to SGs for Pods, the Pod Security Group enforcing mode must be standard, and VPC CNI 1.11 or later is required.
  • To use the Kubernetes Node Local DNS service, the Pod Security Group enforcing mode must be standard, and VPC CNI 1.11 or later is required.

I recommend using AWS VPC CNI version 1.11 or later in your cluster.

Does this instance type support SGs for Pods?

To find out, look at the limits.go file in the amazon-vpc-resource-controller-k8s project and check the IsTrunkingCompatible option for your instance type. This option must be true in the instance limits for the feature to be supported.
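
As a quick sanity check, you can grep the file locally. This is a minimal sketch that assumes limits.go still lives at pkg/aws/vpc/limits.go in the amazon-vpc-resource-controller-k8s repository (the path may differ between versions):

# Clone the controller repository and check whether c5.large is trunking-compatible.
git clone --depth 1 https://github.com/aws/amazon-vpc-resource-controller-k8s.git
grep -A 30 '"c5.large"' amazon-vpc-resource-controller-k8s/pkg/aws/vpc/limits.go | grep IsTrunkingCompatible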

Security Groups for Pods setup procedure:

  • Deploy worker nodes that support trunking mode.
  • Attach the VPC Resource Controller policy to the cluster role.
  • Enable the ENI for Pod feature in AWS VPC CNI.
  • Set Security Group Enforcing Mode for Pods in AWS VPC CNI.
  • Create SecurityGroupPolicy resources and test traffic.

Step 1 – Deploy worker node for SG for Pods:

As mentioned above, we need worker nodes that support the trunking feature so a trunk network interface can be created. Through the trunk interface, VPC CNI can manage the branch (Pod) network interfaces and implement the SGs for Pods feature.

For this lesson, I’ve used c5.large, which supports trunking.

aws eks create-nodegroup \
  --cluster-name kubedemy \
  --nodegroup-name application-managed-workers-001 \
  --scaling-config minSize=2,maxSize=5,desiredSize=2 \
  --subnets subnet-0ff015478090c2174 subnet-01b107cea804fdff1 subnet-09b7d720aca170608 \
  --node-role arn:aws:iam::231144931069:role/Kubedemy_EKS_Managed_Nodegroup_Role \
  --remote-access ec2SshKey=kubedemy \
  --instance-types c5.large \
  --ami-type AL2_x86_64 \
  --capacity-type ON_DEMAND \
  --update-config maxUnavailable=1 \
  --labels node.kubernetes.io/scope=application \
  --tags owner=kubedemy
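
Before moving on, you can verify the node group is ready and the new nodes have joined the cluster; a quick check, assuming your kubeconfig already points to the kubedemy cluster:

# Wait for the node group to become ACTIVE, then list the new worker nodes.
aws eks wait nodegroup-active \
  --cluster-name kubedemy \
  --nodegroup-name application-managed-workers-001

kubectl get nodes -l node.kubernetes.io/scope=application -o wide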

Step 2 – VPC Resource Controller for SG for Pods:

VPC Resource Controller is an AWS-managed service running as part of the cluster control plane. It is responsible for creating the trunk network interface on each worker node that has AWS VPC CNI. We need to attach a policy to the cluster IAM role to allow this service to manage interfaces on our behalf and create trunk interfaces.

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
  --role-name Kubedemy_EKS_Cluster_Role
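
You can confirm the policy is now attached to the cluster IAM role:

# List the policies attached to the cluster role.
aws iam list-attached-role-policies \
  --role-name Kubedemy_EKS_Cluster_Role \
  --query "AttachedPolicies[].PolicyName"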

Step 3 – Enable Pod ENI for SG for Pods:

Set the following environment variable on the aws-node DaemonSet.

kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true

This option makes VPC CNI add a new label, vpc.amazonaws.com/has-trunk-attached, to the worker node. The VPC Resource Controller later uses this label to identify which worker nodes need a trunk interface and creates and attaches one to them. When the trunk interface is attached, the IPAMD controller of VPC CNI sets this label to true; otherwise, it stays false, which means the instance either has no free slot for a trunk interface or does not support trunking at all. If a node has no free interface slot for the trunk interface, remove a Pod from that node to free one.
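
To verify, once the aws-node Pods have restarted, you can check the trunking label and the branch interface capacity advertised by each node; a minimal check:

# Show the has-trunk-attached label for every node.
kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached

# Each trunking-capable node should advertise a vpc.amazonaws.com/pod-eni resource.
kubectl describe nodes | grep "vpc.amazonaws.com/pod-eni"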

Allow kubelet to probe (liveness and readiness probes):

When Pod ENI is enabled, kubelet must run its liveness and readiness probes through the branch interface, which is created on the worker node in addition to the trunk interface as soon as the first Pod is selected for Security Group enforcement. To allow kubelet to run these probes, the TCP early demux feature must be disabled.

kubectl patch daemonset aws-node -n kube-system \
  -p '{"spec": {"template": {"spec": {"initContainers": [{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}]}}}}'

Step 4 – Set Security Group Enforcing mode:

By default, the Security Group Enforcing mode is set to standard. In this mode, ingress traffic to the Pod must match the Security Group assigned to the Pod; egress traffic from the Pod to destinations inside the cluster (except Pods on the same node) must also match the Pod’s Security Group; and egress from the Pod to outside the cluster uses the Security Group assigned to the worker node’s primary interface. If you want to enforce the Pod’s Security Group for external traffic as well, change the mode to strict. Be aware that if you run NodeLocalDNS or want to use native NetworkPolicy resources with Calico, Cilium, etc., you must use standard mode.

# Default value
kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard

# To enable strict security group enforcement.
kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=strict

In this lesson, I’ve used the strict mode.
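
To double-check which mode is currently in effect, you can list the environment variables set on the DaemonSet:

# List the aws-node environment and filter the SGs for Pods settings.
kubectl set env daemonset aws-node -n kube-system --list | grep -E "ENABLE_POD_ENI|POD_SECURITY_GROUP_ENFORCING_MODE"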

Step 5 – Create SecurityGroupPolicy in EKS:

Create a SG and delete the default outbound rule:

aws ec2 create-security-group \
  --description "Kubedemy EKS cluster SG for Pods Test" \
  --group-name Kubedemy_EKS_SG_For_Pods_Test \
  --vpc-id vpc-09a71d02067cf835d \
  --tag-specifications "ResourceType=security-group,Tags=[{Key=owner,Value=kubedemy}]"

aws ec2 revoke-security-group-egress \
  --group-id sg-001d7b0dc86125cb9 \
  --cidr 0.0.0.0/0 \
  --protocol all
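
You can confirm the default egress rule is gone; the group ID below is the one returned by the create-security-group call above:

# The IpPermissionsEgress list should now be empty.
aws ec2 describe-security-groups \
  --group-ids sg-001d7b0dc86125cb9 \
  --query "SecurityGroups[0].IpPermissionsEgress"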

Create a SecurityGroupPolicy resource and select Pods:

cat <<EOF | kubectl apply -f -
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: sg-for-pod-test
spec:
  podSelector:
    matchLabels:
      app: test
  securityGroups:
    groupIds:
      - sg-001d7b0dc86125cb9
EOF
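
The SecurityGroupPolicy CRD should already be available on EKS clusters; you can list the created policy to make sure it was accepted:

kubectl get securitygrouppolicies

kubectl describe securitygrouppolicy sg-for-pod-test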

Create a new Pod with proper labels:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: alpine:latest
      command: ["sleep", "infinity"]
EOF

From now on, the Security Group for Pods will be used to manage the Pod’s ingress/egress traffic. Ensure you create the Pod after creating the SecurityGroupPolicy, as the policy only applies to Pods created after it.
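
You can verify the Pod received a branch network interface by looking for the vpc.amazonaws.com/pod-eni annotation, which describes the branch ENI attached to the Pod:

kubectl describe pod test | grep "vpc.amazonaws.com/pod-eni"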

Now try to access something on the internet or resolve a domain:

kubectl exec -it test -- nslookup google.com

It doesn’t work, and you will get a connection timeout error because, at the moment, there is no rule allowing traffic between the test Pod and CoreDNS. Now add the following rules to the cluster security group and the SG for Pods security group.

### Allow access from Pod to CoreDNS within the worker node security group.
# Allow incoming traffic to worker node from SG for Pod.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0434119d2c8b49857 \
  --source-group sg-001d7b0dc86125cb9 \
  --protocol all

# Allow outgoing traffic from SG for Pod to worker node.
aws ec2 authorize-security-group-egress \
  --group-id sg-001d7b0dc86125cb9 \
  --source-group sg-0434119d2c8b49857 \
  --protocol all

### Allow access from the kubelet (worker node) to the Pod for liveness/readiness probes.
# Allow incoming traffic to the SG for Pod from the worker node security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-001d7b0dc86125cb9 \
  --source-group sg-0434119d2c8b49857 \
  --protocol all

Try the same command and resolve the domain. It will work now.

Important: Don’t forget to add all the required rules to security groups.
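
To review everything that is now in place for the Pod’s security group, you can list its rules using the same group ID:

# Show all ingress and egress rules of the SG for Pods.
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values=sg-001d7b0dc86125cb9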

Conclusion:

Security Groups for Pods is another cool feature of AWS VPC CNI. It allows you to customise Pod networking through Security Groups. Although native NetworkPolicy support is available in newer versions of VPC CNI, you may still need this feature to enforce Security Group rules on Pods. In future lessons, you will learn about NetworkPolicy as well.

If you like this series of articles, please share them and write your thoughts as comments here. Your feedback encourages me to complete this massively planned program. Just share them and provide feedback. I’ll make you an AWS EKS black belt.

Follow my LinkedIn https://linkedin.com/in/ssbostan

Follow Kubedemy’s LinkedIn https://linkedin.com/company/kubedemy

Follow Kubedemy’s Telegram https://telegram.me/kubedemy
