Getting Started with Kubernetes Architecture


Introduction to Kubernetes (K8s)

Kubernetes (often written as K8s) is an open-source platform that automates the deployment, scaling, and management of containerized applications. It was originally created by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes is widely used to run applications consistently across cloud, on-premises, and hybrid environments, and it has become the de facto standard for container orchestration.


Kubernetes Architecture Summary

Control Plane

The Control Plane is the brain of the Kubernetes cluster. It manages the overall state of the cluster, including the scheduling of tasks, maintaining the desired state of applications, and coordinating all the components.

Key components of the Control Plane:

  1. API Server
    The API Server is the central communication hub for all components in the Kubernetes cluster. It processes requests from users and other components, making it the heart of the cluster.

    • Role:

      • It receives commands, such as those from kubectl, and ensures smooth communication within the cluster.

      • It acts as the gateway for all operations within Kubernetes and enforces authentication and authorization.

    • Example:
      When you run the following command:

        kubectl get pods

      The API Server processes this request, interacts with etcd (the cluster's storage system), and returns the list of Pods and their status.

    • Functionality:

      • The API Server keeps track of the cluster state by interacting with etcd.

      • It manages access controls, deciding who can perform actions within the cluster.
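The request flow described above can be observed directly. The commands below are a sketch that assumes kubectl is configured against a running cluster; they show that kubectl is just a client for the API Server's REST endpoints.

```shell
# -v=6 raises kubectl's log verbosity so it prints the HTTP request it
# sends to the API Server (a GET against /api/v1/namespaces/default/pods).
kubectl get pods -v=6

# The same resource can be fetched from the API Server's REST API directly:
kubectl get --raw /api/v1/namespaces/default/pods
```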

  2. etcd
    etcd is a distributed key-value store that holds the cluster’s state data. It stores all the configuration and status information for Kubernetes resources, such as Pods, Secrets, ConfigMaps, Deployments, Services, and more.

    • Role:

      • etcd acts as the source of truth for Kubernetes. It maintains the desired state and current state of all resources in the cluster.

      • When the API Server receives a request (e.g., to get a list of Pods), etcd provides the required data.

    • Example:
      If the API Server is asked for the current state of Pods, it queries etcd to retrieve this information.
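To make the "source of truth" idea concrete, the sketch below lists the keys etcd stores for Pods in the default namespace. It assumes a kubeadm-style cluster with etcdctl available on the etcd member; the certificate paths are kubeadm defaults and will differ on other installations.

```shell
# Illustrative only: Kubernetes stores objects under /registry/<resource>/<namespace>/<name>.
ETCDCTL_API=3 etcdctl get /registry/pods/default --prefix --keys-only \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```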

  3. Scheduler
    The Scheduler is responsible for assigning Pods to the appropriate worker nodes in the cluster. It watches for Pods that haven’t been assigned to nodes yet.

    • Role:

      • It evaluates available resources (such as CPU, memory) and selects the best node for each unassigned Pod.

      • After making the decision, the Scheduler updates the API Server with the assigned node, and etcd records this information.

    • Example:
      If there is a Pod that needs to be scheduled, the Scheduler will analyze the cluster’s resource availability and assign the Pod to a node that has enough resources.
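As a sketch of what the Scheduler evaluates, the Pod below declares resource requests; the Scheduler will only place it on a node with at least this much unreserved CPU and memory. The names and image are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web              # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:        # what the Scheduler counts against node capacity
          cpu: "250m"    # a quarter of one CPU core
          memory: "128Mi"
```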

  4. Controller Manager
    The Controller Manager ensures that the actual state of the cluster matches the desired state. It monitors the health and status of resources and takes corrective actions when needed.

    • Role:

      • It watches over the nodes and Pods to ensure they are running as expected.

      • If a Pod managed by a controller (such as a ReplicaSet) fails, the Controller Manager creates a replacement Pod, which the Scheduler then assigns to a suitable node, maintaining the desired state.

    • Example:
      If a Pod is terminated unexpectedly, the Controller Manager ensures that a new Pod is created to replace it, keeping the application running.
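The desired state that controllers reconcile is usually declared in a manifest. In the illustrative Deployment below, replicas: 3 is the desired state; if one of the three Pods dies, the ReplicaSet controller (run by the Controller Manager) creates a replacement.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # illustrative name
spec:
  replicas: 3             # desired state: always three Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```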


Summary of Components in the Control Plane:

  • API Server: Central hub for communication and request processing.

  • etcd: Distributed key-value store that holds the cluster’s configuration and state (both desired and current).

  • Scheduler: Decides where Pods should run based on available resources.

  • Controller Manager: Ensures the cluster stays in the desired state, fixing problems like crashed Pods.



Worker Node

A Worker Node is a machine within the Kubernetes cluster that runs applications and manages containers. It is where the actual work happens, as it hosts Pods, the smallest deployable units in Kubernetes.

Each Worker Node performs key functions to ensure that containers are deployed, running, and communicating correctly. These functions are carried out by various components that are part of the worker node.


Components of a Worker Node:

  1. Kubelet
    The Kubelet is the primary agent running on each worker node. It is responsible for ensuring that the Pods and containers on the node are functioning properly.

    • Role:

      • The Kubelet continuously monitors the status of the Pods and the worker node itself.

      • If there is an issue (e.g., a Pod crashes or doesn't start), it reports the issue to the API Server.

      • The Kubelet itself restarts failed containers according to the Pod's restartPolicy; replacing a Pod that is lost entirely (for example, when a node fails) is handled by the controllers in the Control Plane.

    • Example:
      If a container inside a Pod stops unexpectedly, the Kubelet restarts it according to the Pod's restartPolicy and reports the updated status to the API Server.
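The Kubelet's health checking can be made explicit with a probe. In the illustrative Pod below, the Kubelet polls the container over HTTP and, because restartPolicy is Always, restarts it whenever the probe fails.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                  # illustrative name
spec:
  restartPolicy: Always      # the Kubelet restarts failed containers
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:         # the Kubelet runs this check periodically
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
```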

  2. Container Runtime
    The Container Runtime is the software responsible for running containers on the worker node.

    • Role:

      • It pulls container images from container registries, runs containers, and ensures they operate as expected.
    • Examples:

      • Docker Engine (supported through the cri-dockerd adapter; built-in dockershim support was removed in Kubernetes 1.24)

      • containerd

      • CRI-O

  3. kube-proxy
    The kube-proxy manages network traffic within the Kubernetes cluster.

    • Role:

      • It ensures that network requests to and from Pods are properly routed.

      • It maintains network rules and load-balances traffic between Pods, making sure they can communicate with each other and with external services.

    • Example:
      If a Pod wants to communicate with another Pod or an external service, the kube-proxy ensures that the traffic is properly forwarded and balanced.
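kube-proxy's routing is driven by Service objects. For the illustrative Service below, kube-proxy programs network rules on every node so that traffic to the Service's cluster IP on port 80 is load-balanced across all Pods labeled app: web.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  selector:
    app: web           # traffic is balanced across Pods with this label
  ports:
    - port: 80         # the Service port kube-proxy routes
      targetPort: 80   # the container port on each Pod
```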

  4. Pods
    Pods are the smallest and simplest deployable units in Kubernetes. A Pod can hold one or more containers that share the same network namespace and storage volumes.

    • Role:

      • Pods host the containers that run your applications.

      • They are scheduled on worker nodes, and the containers inside them work together as a single unit.

      • Pods ensure that related containers are managed together and can communicate with each other easily.

    • Example:
      A web server container and a log-shipping sidecar container might run in the same Pod so they can share storage and a network namespace. Separate applications, such as a front end and its database, normally run in separate Pods so they can be scaled and updated independently.
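A minimal multi-container sketch: the two illustrative containers below share the Pod's network namespace and an emptyDir volume, which is the typical sidecar pattern.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # illustrative name
spec:
  volumes:
    - name: logs
      emptyDir: {}             # shared scratch space for both containers
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper        # sidecar reading what the web container writes
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```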


Summary of Worker Node Components:

  • Kubelet: Ensures the health of Pods and containers, reporting any issues to the API Server.

  • Container Runtime: Runs the containers on the worker node (e.g., Docker, containerd).

  • kube-proxy: Manages network traffic and communication between Pods and services.

  • Pods: Host the containers that run your application, and they are the smallest deployable unit in Kubernetes.


Key Features of Kubernetes (K8s)

  1. Auto-Scheduling
    Kubernetes automatically assigns Pods to the most suitable nodes in the cluster based on the available resources (like CPU and memory). This ensures that your infrastructure is used efficiently and that the workload is balanced across the nodes.

  2. Self-Healing
    If a Pod fails or stops responding, Kubernetes automatically takes action to fix the problem. It will either restart the Pod or replace it with a new one, ensuring that the application remains in the desired state with minimal downtime.

  3. Automated Rollouts and Rollbacks
    Kubernetes helps you manage application updates smoothly. When you update your application, Kubernetes will roll out the new version. If there is an issue with the new version, it can quickly roll back to a stable version, ensuring no disruption in service.
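A typical rollout and rollback sequence with kubectl is sketched below. It assumes a Deployment named web already exists in the cluster; the name and image tags are illustrative.

```shell
# Roll out a new image; the Deployment replaces Pods incrementally.
kubectl set image deployment/web web=nginx:1.26

# Watch the rollout progress.
kubectl rollout status deployment/web

# If the new version misbehaves, revert to the previous revision.
kubectl rollout undo deployment/web
```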

  4. Horizontal Scaling
    Kubernetes can automatically scale your application up or down based on demand. For example, when more users visit your application, Kubernetes will add more Pods to handle the increased traffic. Similarly, it will reduce the number of Pods when the traffic decreases, saving resources.
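Horizontal scaling can be automated with a HorizontalPodAutoscaler. The illustrative manifest below keeps average CPU utilization around 70% by varying an assumed Deployment named web between 2 and 10 replicas.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                        # illustrative name
spec:
  scaleTargetRef:                  # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds this
```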

  5. Service Discovery & Load Balancing
    Kubernetes gives each Pod its own IP address and can expose a group of Pods behind a Service with a stable virtual IP and a single DNS name, making it easy for other workloads to find them. The Service also load-balances incoming traffic across the matching Pods for better performance and reliability.

  6. Storage Orchestration
    Kubernetes allows you to manage storage for your applications. You can mount the storage system of your choice, whether it’s local storage on the node, network file systems such as NFS, or cloud storage such as AWS EBS or Google Cloud Persistent Disk.
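Storage is usually requested through a PersistentVolumeClaim, which Kubernetes binds to a matching volume regardless of the backend (local disk, NFS, cloud storage). A minimal illustrative claim and mount:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi           # how much space the application needs
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data      # binds the Pod to the claim above
```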


Conclusion

Kubernetes (K8s) is an open-source tool that helps automate the deployment, scaling, and management of applications inside containers. It makes it easier to manage applications by automatically handling tasks like scheduling, scaling, and fixing problems. With features like auto-scaling, self-healing, and automatic updates, Kubernetes helps keep applications running smoothly and efficiently, making it a popular choice for modern cloud-based systems.