Are you looking for somewhere to learn more about Kubernetes interview questions? You’re in the right place! We publish a Kubernetes question every day on our social media channels, LinkedIn, Telegram, and YouTube, and at the end of the week, we provide the correct answers with details here. If you want to test your Kubernetes knowledge or prepare yourself for Kubernetes role interviews, follow our social media.
Follow our social media:
https://www.linkedin.com/in/ssbostan
https://www.linkedin.com/company/kubedemy
https://www.youtube.com/@kubedemy
Kubernetes Interview Questions:
30 October 2023:
At what number of worker nodes does Kubernetes consider a cluster to be a large cluster?
- A) More than 10 worker nodes
- B) More than 25 worker nodes
- C) More than 50 worker nodes
- D) More than 100 worker nodes
If your cluster has more than 50 worker nodes (50 is the default value), Kubernetes treats it as a large cluster. In large clusters, scheduling decisions, evictions, health checks, etc., follow some additional rules. This threshold can be configured in kube-controller-manager with the --large-cluster-size-threshold option.
Check the kube-controller-manager command-line arguments
Read more about the Node controller in the Unofficial Kubernetes Documentation
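To make this concrete, here is a minimal sketch of where the flag would be set on a kubeadm-based cluster, assuming the usual static Pod manifest path; the image tag and the omitted flags are illustrative only:

```yaml
# Excerpt from the kube-controller-manager static Pod manifest
# (typically /etc/kubernetes/manifests/kube-controller-manager.yaml on kubeadm clusters)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.28.4   # tag is illustrative
    command:
    - kube-controller-manager
    # 50 is the default; clusters with more worker nodes are treated as large
    - --large-cluster-size-threshold=50
    # ... other flags omitted for brevity
```

After editing the static Pod manifest, the kubelet restarts kube-controller-manager automatically, so no extra command is needed.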
31 October 2023:
In a multi-zone Kubernetes deployment, how many worker nodes in a zone must be down before Kubernetes considers that zone unhealthy?
- A) At least 35% of worker nodes
- B) At least 45% of worker nodes
- C) At least 55% of worker nodes
- D) At least 75% of worker nodes
By default, the value of the --unhealthy-zone-threshold option is 0.55, which means that if at least 55% of the worker nodes in a zone (with a minimum of 3 nodes) are NotReady (unhealthy, down, failed, etc.), the zone is treated as unhealthy. If a zone becomes unhealthy and the cluster is a large cluster, the node eviction rate is reduced; otherwise, the eviction process is stopped to avoid misleading evictions.
Check the kube-controller-manager command-line arguments
Read more about the Node controller in the Unofficial Kubernetes Documentation
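As a sketch, the flag sits alongside the other kube-controller-manager arguments, as in the excerpt above; the value shown here is the default:

```yaml
# kube-controller-manager command excerpt (default value shown)
command:
- kube-controller-manager
# Mark a zone unhealthy when at least 55% of its nodes (and at least 3 of them) are NotReady
- --unhealthy-zone-threshold=0.55
```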
01 November 2023:
What is the default node eviction rate if a node fails?
- A) 1 node per second
- B) 1 node per 10 seconds
- C) 1 node per 100 seconds
- D) 1 node per 5 minutes
In a normal situation, if a node fails, Kubernetes follows the --node-eviction-rate option, whose default value is 0.1, which means 1 node per 10 seconds. So, if several nodes fail at the same time, only 1 node is evicted every 10 seconds. This option limits the number of changes and requests hitting the cluster at once.
Check the kube-controller-manager command-line arguments
Read more about the Node controller in the Unofficial Kubernetes Documentation
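A sketch of the flag with its default value, which works out to 1 eviction every 10 seconds:

```yaml
# kube-controller-manager command excerpt (default value shown)
command:
- kube-controller-manager
# 0.1 nodes per second = 1 node evicted every 10 seconds while the zone is healthy
- --node-eviction-rate=0.1
```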
02 November 2023:
In small clusters, what happens to eviction decisions if a zone becomes unhealthy?
- A) Eviction rate will be changed to 1 node per 10 seconds
- B) Eviction rate will be changed to 1 node per 100 seconds
- C) Node controller will be restarted to solve the issue
- D) The eviction process will be stopped
Normally, the Node controller evicts Pods from NotReady (failed) nodes at a rate of 1 node per 10 seconds. If a zone becomes unhealthy based on the defined threshold, the eviction rate is reduced to 1 node per 100 seconds in large clusters (clusters with more than 50 worker nodes). In small clusters, which are mostly not deployed across multiple zones, the threshold effectively applies to the entire cluster, and the Node controller stops the eviction process altogether: either the cluster clearly lacks the resources to absorb 55% of its workloads, or the downtime is more likely related to master issues than to the worker nodes themselves.
Check the kube-controller-manager command-line arguments
Read more about the Node controller in the Unofficial Kubernetes Documentation
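The reduced rate used in unhealthy zones of large clusters comes from a separate flag; a minimal sketch with its default value of 0.01, i.e. 1 node per 100 seconds:

```yaml
# kube-controller-manager command excerpt (default value shown)
command:
- kube-controller-manager
# Rate used instead of --node-eviction-rate when a zone is unhealthy in a large cluster:
# 0.01 nodes per second = 1 node evicted every 100 seconds
- --secondary-node-eviction-rate=0.01
```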
03 November 2023:
How long does it take to deploy a new Pod after a node fails?
- A) 10 seconds
- B) 1 minute
- C) 5 minutes
- D) 10 minutes
After a node fails, Kubernetes waits 5 minutes before deleting the Pods on that node. This is the default eviction delay for failed nodes. You can change this setting with the --pod-eviction-timeout flag in older versions, or with PodEvictionTimeout in the kube-controller-manager configuration manifest. You can also change this behaviour per Pod using Taint-based Evictions within the Pod manifest.
Read more about Taint-based Evictions
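Here is a minimal sketch of the per-Pod override using Taint-based Evictions; the Pod name, image, and the 60-second value are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-app          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25         # illustrative image
  tolerations:
  # Evict this Pod 60 seconds after its node becomes NotReady,
  # instead of the default 300 seconds (5 minutes)
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
  # Same override for the case where the node becomes unreachable
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
```

Without these tolerations, the admission controller adds both of them automatically with tolerationSeconds set to 300, which is where the 5-minute default comes from.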
Conclusion:
Kubernetes itself is wild. Surviving the man-vs-wild of interviews takes deep knowledge, hands-on experience and additional skills. If you want to pass Kubernetes role interviews, read more and practise more with Kubernetes. Our goal at Kubedemy is to help you understand and solve Kubernetes difficulties in every situation.
If you like this series of articles, please share them and write your thoughts in the comments. Your feedback encourages me to complete this ambitious program.
Follow my LinkedIn https://www.linkedin.com/in/ssbostan
Follow Kubedemy LinkedIn https://www.linkedin.com/company/kubedemy
Follow Kubedemy Telegram https://telegram.me/kubedemy