Kubernetes Overview

What is Kubernetes?

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

What is Kubernetes used for?

Kubernetes is used to manage clusters of containerized applications. It provides the infrastructure needed to deploy and run applications in a cloud-native environment, allowing for easy scaling, load balancing, and self-healing. Kubernetes is especially powerful for managing complex, microservices-based architectures that require automated deployment and scaling.

Example Interview Questions:

1. What are the key components of Kubernetes architecture?

Answer: The key components include the control plane (historically called the master node), which runs the API Server, Scheduler, Controller Manager, and etcd (a distributed key-value store), and the Worker Nodes, which run the containerized applications in Pods. Each worker node runs a kubelet, a container runtime (such as containerd or Docker), and kube-proxy for network routing.
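These components can be observed on a running cluster. For example, assuming kubectl is configured against a typical cluster where the system components run in the kube-system namespace:

```shell
# List the nodes in the cluster and their roles (control-plane vs. worker)
kubectl get nodes -o wide

# On most clusters the control-plane components (API server, scheduler,
# controller manager, etcd) run as Pods in the kube-system namespace
kubectl get pods -n kube-system
```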

2. How does Kubernetes handle scaling?

Answer: Kubernetes handles scaling through the use of Horizontal Pod Autoscalers (HPA). The HPA can automatically adjust the number of pods in a deployment based on observed CPU utilization or other select metrics.
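As a minimal sketch, the following HPA targets a hypothetical Deployment named nginx-deployment and keeps average CPU utilization around 70% (this assumes the metrics-server add-on is installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The HPA adds Pods when average utilization stays above the target and removes them when it falls below, always staying within the min/max bounds.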

3. Can you explain what a Kubernetes Pod is?

Answer: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in the cluster. Pods can contain one or more containers that share storage and network resources, along with a specification for how to run the containers.
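A minimal single-container Pod manifest looks like this (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21.6
    ports:
    - containerPort: 80
```

In practice, Pods are rarely created directly like this; they are usually managed through a higher-level controller such as a Deployment.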

4. How do you perform a rolling update in Kubernetes?

Answer: A rolling update is triggered by changing a Deployment's Pod template, for example with kubectl set image or by applying an updated manifest. Kubernetes then updates the Deployment without downtime by incrementally replacing old pods with new ones, and the kubectl rollout command lets you monitor, pause, or undo the process.
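For example, a rolling update could be triggered and monitored like this (the Deployment name and new image tag are illustrative):

```shell
# Trigger a rolling update by changing the container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22.0

# Watch the rollout until all Pods have been replaced
kubectl rollout status deployment/nginx-deployment

# Inspect the revision history of the Deployment
kubectl rollout history deployment/nginx-deployment
```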

5. What are Kubernetes namespaces, and how do they work?

Answer: Namespaces in Kubernetes provide a way to divide cluster resources between multiple users. They allow you to manage different environments (e.g., dev, staging, production) within the same cluster while ensuring resource isolation.
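A namespace is itself a simple resource; for example, a hypothetical staging namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Once created (or with kubectl create namespace staging), resources are placed in it and queried with the --namespace (or -n) flag, e.g. kubectl get pods -n staging.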

6. How do you secure a Kubernetes cluster?

Answer: Security in Kubernetes can be enhanced by implementing Role-Based Access Control (RBAC), using network policies to control traffic between pods, securing the API server with TLS certificates, and regularly updating the cluster to address vulnerabilities.
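As an RBAC sketch, the following Role grants read-only access to Pods in a hypothetical staging namespace, and the RoleBinding grants it to an illustrative user named jane:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

For permissions that span the whole cluster, the analogous ClusterRole and ClusterRoleBinding resources are used instead.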



Simple Kubernetes Deployment Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80

This example creates a deployment named nginx-deployment with three replicas of the nginx container running version 1.21.6. The containers listen on port 80.
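Assuming the manifest above is saved as nginx-deployment.yaml, it can be applied and verified like this:

```shell
# Create (or update) the Deployment from the manifest
kubectl apply -f nginx-deployment.yaml

# Confirm the Deployment reports 3/3 ready replicas
kubectl get deployment nginx-deployment

# List the Pods the Deployment created, selected by their label
kubectl get pods -l app=nginx
```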


Kubernetes Hierarchy: Containers, Pods, and Nodes

1. Containers (image-based application units)

2. Pods (the smallest deployable units, wrapping one or more containers)

3. Nodes (the physical or virtual machines that make up the Kubernetes cluster)

Hierarchy Summary

Containers are the application units that run inside Pods.

Pods are the deployment units that encapsulate one or more containers, running on Nodes.

Nodes are the infrastructure units that provide the necessary resources to run Pods and manage the execution of containers within those Pods.


Difference Between Pods, Nodes, and Containers in Kubernetes


Summary

Pods are the smallest deployable units in Kubernetes, designed to host one or more containers that share the same environment and network. They are ephemeral and run the application workloads.

Nodes are the machines (physical or virtual) that make up the Kubernetes cluster. They provide the computational resources needed to run the pods and ensure that the containers within the pods are running correctly. Nodes are categorized into control plane nodes (which manage the cluster, historically called master nodes) and worker nodes (which run the application workloads).

Containers are lightweight, portable units that package an application and its dependencies. They run inside pods and provide isolation and resource efficiency, making them ideal for running applications in distributed environments like Kubernetes.


Kubernetes Deployments

What is a Kubernetes Deployment?

A Kubernetes Deployment is a resource object in Kubernetes that provides declarative updates to applications. Deployments manage the creation and scaling of a set of Pods and ensure that the desired number of Pods are running at any given time. They provide a way to manage the rollout of new versions of an application, rollback to previous versions, and scale the application up or down.

What is a Kubernetes Deployment used for?

Kubernetes Deployments are used to automate the management of the application lifecycle, including:

Rolling out new versions of an application without downtime.

Rolling back to a previous version when an update misbehaves.

Scaling the number of replicas up or down to match demand.

Recreating Pods automatically when they fail, so the desired replica count is always maintained.

Most Asked Job Interview Questions and Answers on Kubernetes Deployments:

1. How do you create a Kubernetes Deployment?

Answer: A Kubernetes Deployment can be created using a YAML file or with the kubectl command. A basic YAML file for a Deployment includes metadata (like name and labels), specifications for the number of replicas, and a template for the Pods (which includes the container image, ports, and other settings). You can apply the Deployment with the kubectl apply -f deployment.yaml command.

2. What is a rolling update in Kubernetes, and how does it work?

Answer: A rolling update is a deployment strategy where Kubernetes gradually replaces old Pods with new Pods. The update proceeds incrementally, ensuring that a specified number of Pods are always running during the update. This strategy prevents downtime during application updates.

3. How can you roll back a Deployment in Kubernetes?

Answer: You can roll back a Deployment in Kubernetes using the kubectl rollout undo command. By default, it rolls back to the previous revision, but you can also specify a specific revision if needed.
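For example, with a hypothetical Deployment named my-deployment:

```shell
# List the recorded revisions of the Deployment
kubectl rollout history deployment/my-deployment

# Roll back to the previous revision
kubectl rollout undo deployment/my-deployment

# Or roll back to a specific revision from the history
kubectl rollout undo deployment/my-deployment --to-revision=2
```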

4. How do you scale a Kubernetes Deployment?

Answer: You can scale a Deployment by changing the number of replicas in the Deployment's YAML file and applying the changes, or by using the kubectl scale command, for example, kubectl scale deployment my-deployment --replicas=5.

5. What is the difference between a Deployment and a StatefulSet?

Answer: Deployments are used for stateless applications where the identity of individual Pods is not important. StatefulSets, on the other hand, are used for stateful applications where each Pod requires a unique identity and persistent storage. StatefulSets are used for applications like databases where the state needs to be preserved across Pod restarts.

6. How does Kubernetes handle updates to a Deployment?

Answer: Kubernetes handles updates to a Deployment by creating a new ReplicaSet for the updated Pods and gradually replacing the old Pods with the new ones. The update can be configured to proceed at a specified rate, and the progress of the update can be monitored with the kubectl rollout status command.

7. What is a ReplicaSet in Kubernetes, and how is it related to Deployments?

Answer: A ReplicaSet is a Kubernetes resource that ensures a specified number of Pods are running at any given time. A Deployment manages one or more ReplicaSets to orchestrate rolling updates, rollbacks, and scaling. While you can create and manage ReplicaSets directly, it is more common to use Deployments to manage ReplicaSets for you.
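This relationship is visible on a live cluster; for a Deployment like the nginx-deployment example earlier in this document:

```shell
# The Deployment owns one ReplicaSet per revision; the current one holds
# the desired replica count, while older ones are scaled down to 0
kubectl get replicasets -l app=nginx

# The Events section shows the Deployment scaling ReplicaSets up and down
kubectl describe deployment nginx-deployment
```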

Conclusion:

Kubernetes Deployments are a powerful tool for managing the lifecycle of applications in a Kubernetes cluster. Understanding how to create, update, scale, and roll back Deployments is essential for maintaining reliable and scalable applications. In interviews, be prepared to discuss your experience with Kubernetes Deployments, focusing on how you've used them to manage application updates, ensure high availability, and handle scaling.


CPU Usage in Kubernetes Pods

Can a Kubernetes Pod Use More than 1 CPU?

Yes, a Kubernetes Pod can use more than 1 CPU, and it is configurable.

CPU Resource Requests and Limits in Kubernetes

CPU Request: This is the amount of CPU that a Pod is guaranteed to have. Kubernetes uses this value to schedule Pods on nodes that have sufficient resources. For example, if a Pod requests 1 CPU, Kubernetes will ensure that the node where the Pod is scheduled has at least 1 CPU available for that Pod.

CPU Limit: This is the maximum amount of CPU that a Pod can use. If a Pod's process tries to exceed this limit, Kubernetes will throttle the CPU usage, ensuring that the Pod does not consume more than the specified amount.

Configuring CPU Requests and Limits

You can configure the CPU request and limit in the Pod or container definition using the resources field in the Pod's YAML file. Here's an example:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-limits-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "0.5"       # Requesting 0.5 CPU (500 millicores)
      limits:
        cpu: "2"         # Limiting to 2 CPUs

Understanding CPU Units

1 CPU in Kubernetes is equivalent to 1 vCPU/core on a cloud provider, or 1 hyperthread on a bare-metal machine with Hyper-Threading enabled.

Millicores: CPU resources can also be specified in millicores, where 1000m equals 1 CPU. So 500m is equivalent to 0.5 CPU.
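Both notations are interchangeable, so the following resources fragment means the same thing whichever form is used:

```yaml
resources:
  requests:
    cpu: "500m"    # identical to cpu: "0.5"
  limits:
    cpu: "1500m"   # identical to cpu: "1.5"
```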

Multiple CPUs for a Pod

If you want a Pod to use more than 1 CPU, you would set the cpu limit to a value greater than 1. For example, setting cpu: "2" means the Pod can use up to 2 CPUs.

Example with Multiple CPUs

apiVersion: v1
kind: Pod
metadata:
  name: multi-cpu-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "1"       # Requesting 1 CPU
      limits:
        cpu: "4"       # Limiting to 4 CPUs

In this example:

The Pod is guaranteed 1 CPU (the request), so the scheduler only places it on a node with at least 1 CPU available.

The Pod may burst up to 4 CPUs (the limit) when the node has spare capacity; beyond that, its CPU usage is throttled.

Conclusion

Yes, a Kubernetes Pod can use more than 1 CPU, and it is configurable through the resources field in the Pod's specification. By setting appropriate requests and limits, you can control how much CPU a Pod can use, ensuring it gets the resources it needs while preventing it from overconsuming.