
Kubernetes in 2024 – understanding the orchestrator that changed software deployment

Kubernetes has become the de facto standard for container orchestration. But between the promise and the operational reality, there is a steep learning curve. What it does, why it's powerful, and when not to use it.

In 2014, Google released Kubernetes, an open-source orchestrator built on the lessons of Borg, its internal cluster management system. Ten years later, the project is governed by the Cloud Native Computing Foundation (CNCF), counts over 88,000 contributors, and is the de facto standard for deploying containerized applications at scale. There is hardly a large cloud infrastructure in 2024 that isn't exposed to it.

Yet Kubernetes remains misunderstood – sometimes presented as the universal solution to all deployment problems, sometimes as an unmanageable complexity machine. The reality is more nuanced.


The problem Kubernetes solves

Imagine an application composed of 20 microservices. Each service runs in a Docker container. Without orchestration, you manually manage:

  • Which server does each container run on?
  • What happens if a server goes down? Do containers restart elsewhere?
  • How do you distribute incoming traffic between multiple instances of the same service?
  • How do you deploy a new version without service interruption?
  • How do you scale from 3 to 30 instances of a service under load?
  • How do services communicate with each other?

Kubernetes answers all these questions with a declarative model: you describe the desired state, Kubernetes takes care of reaching it and maintaining it.


The fundamental concepts

Pod – the basic unit

A Pod is the smallest deployable object in Kubernetes. It contains one or more containers that share the same network namespace and, optionally, volumes. In practice, one Pod = one application container (+ optional sidecars).

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.4.2
      ports:
        - containerPort: 3000
      resources:
        requests:
          memory: "128Mi"
          cpu: "250m"
        limits:
          memory: "256Mi"
          cpu: "500m"

Pods are ephemeral – they are born and die. They are almost never managed directly in production; higher-level objects manage them.

Deployment – lifecycle management

A Deployment declares how many replicas of a Pod should run, and manages updates.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.4.2
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10

The rollingUpdate ensures at least 2 out of 3 instances remain available during a deployment. The readinessProbe ensures traffic is only routed to a Pod when it's genuinely ready.

Service – the network abstraction

Pods have ephemeral IPs. A Service exposes a group of Pods under a stable IP and internal DNS, acting as a load balancer.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP  # internal to the cluster
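Inside the cluster, other Pods reach this Service through its stable DNS name rather than through Pod IPs. A minimal sketch of a client container consuming it (the client image and environment variable are hypothetical; the DNS behavior is standard Kubernetes):

```yaml
# Fragment of a hypothetical client Pod spec. The cluster DNS resolves
# "my-app" to the Service's ClusterIP; from another namespace the fully
# qualified name would be my-app.<namespace>.svc.cluster.local.
spec:
  containers:
    - name: client
      image: my-client:1.0.0        # hypothetical image
      env:
        - name: API_URL
          value: "http://my-app"    # port 80, as exposed by the Service
```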

Ingress – the entry point

An Ingress manages HTTP(S) routing from outside the cluster to internal Services. This is where domain names, TLS, and path-based routing rules are configured.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.mydomain.com
      secretName: my-app-tls

ConfigMap and Secret – externalized configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  API_URL: "https://api.mydomain.com"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: <base64>  # ideally managed by Vault or External Secrets
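A sketch of how a container typically consumes these objects, assuming the app-config and app-secrets names above (envFrom is the standard mechanism; both objects can also be mounted as files):

```yaml
# Pod-template fragment: every key in app-config and app-secrets
# becomes an environment variable inside the container.
spec:
  containers:
    - name: app
      image: my-app:1.4.2
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
```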

The features that change everything

Horizontal Pod Autoscaler (HPA)

Kubernetes can automatically increase or decrease the number of replicas of a Deployment based on metrics – CPU, memory, or custom metrics via Prometheus.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
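For the custom-metrics case, the same autoscaling/v2 API accepts Pods-type metrics served by an adapter such as prometheus-adapter. A sketch of the metrics section, assuming a hypothetical requests_per_second metric exposed by the application:

```yaml
# Alternative metrics block: scale on application throughput instead of CPU.
metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second   # hypothetical metric, served via prometheus-adapter
      target:
        type: AverageValue
        averageValue: "100"         # target ~100 req/s per Pod on average
```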

Self-healing

If a Pod crashes, the Deployment automatically creates a new one. If a cluster node goes down, the Pods running on it are rescheduled on other nodes. Kubernetes continuously monitors the actual state of the cluster and reconciles it with the declared state.
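Self-healing extends to hung processes: a livenessProbe tells the kubelet to restart a container that stops answering, even if it hasn't exited. A minimal sketch, reusing the /health endpoint from the Deployment above:

```yaml
# Container fragment: restart the container if /health stops responding.
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3   # restart after 3 consecutive failed checks
```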

Rolling deployments and rollbacks

A kubectl rollout undo deployment/my-app returns to the previous version in seconds. Deployment history is preserved.

Namespaces – logical isolation

A cluster can host multiple teams or environments (dev, staging, prod) via namespaces, with resource quotas and network policies per namespace.
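Those quotas are themselves declarative objects. A sketch of a ResourceQuota for a hypothetical staging namespace (names and figures are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging        # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"      # sum of CPU requests across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"              # hard cap on Pod count
```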


The ecosystem in 2024

Kubernetes alone is not enough. The CNCF ecosystem has matured around it:

  • Package manager: Helm
  • GitOps / declarative deployment: ArgoCD, Flux
  • Service mesh: Istio, Linkerd
  • Observability: Prometheus + Grafana, OpenTelemetry
  • Secret management: HashiCorp Vault, External Secrets Operator
  • TLS certificates: cert-manager
  • Security: Trivy (image scanning), Falco (runtime detection)
  • Persistent storage: Rook-Ceph, Longhorn

Helm deserves a special mention: it's the package manager for Kubernetes. A Helm chart is a parameterizable template of Kubernetes resources – deploying Nginx, PostgreSQL or a custom application is done with one command and a values file.
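To make that concrete, a sketch of a values file for a hypothetical chart packaging the application from the earlier examples (the key names depend entirely on how the chart is written):

```yaml
# values.yaml for a hypothetical in-house chart, applied with
# something like: helm install my-app ./chart -f values.yaml
replicaCount: 3
image:
  repository: my-app
  tag: "1.4.2"
ingress:
  enabled: true
  host: app.mydomain.com
resources:
  requests:
    cpu: 250m
    memory: 128Mi
```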


Managed distributions

In 2024, nobody should be managing a Kubernetes cluster from scratch in production. Cloud providers offer managed offerings that handle the control plane, updates and high availability:

  • Amazon EKS (AWS)
  • Google GKE (GCP) – historically the most mature, Google having invented Kubernetes
  • Azure AKS (Microsoft)
  • OVHcloud Managed Kubernetes for teams preferring to stay in Europe
  • k3s – not a managed service but a lightweight distribution, suited to edge environments or local development clusters

When not to use Kubernetes

This is the question too few people ask.

Kubernetes is complex. The control plane involves etcd, kube-apiserver, kube-scheduler, kube-controller-manager. Operating it correctly requires specific skills. Debugging it when something goes wrong can take hours.

Kubernetes is probably oversized if:

  • Your application is a monolith with one or two services
  • Your team has fewer than 5 developers and no dedicated infrastructure engineer
  • Your traffic is predictable and doesn't require dynamic scaling
  • You're in product validation phase (MVP, early stage)

In these cases, Docker Compose on a server, a PaaS like Railway, Render or Fly.io, or even a simple VPS are sufficient. Operational simplicity has real value.
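For a sense of the difference in scale: the Docker Compose equivalent of the earlier examples fits in a few lines, at the cost of no multi-node scheduling, no rolling updates and no autoscaling:

```yaml
# docker-compose.yml - single-host deployment of the same container
services:
  app:
    image: my-app:1.4.2
    ports:
      - "80:3000"
    restart: unless-stopped   # restarts on crash, but only on this machine
```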

Kubernetes justifies its cost when:

  • You have 10+ services to orchestrate
  • You need zero-downtime deployments
  • Automatic scaling is critical to your business model
  • You have a dedicated infra or platform team

The learning curve

This is the reality to accept: Kubernetes has one of the steepest learning curves in the DevOps ecosystem. Understanding the basic concepts takes a few days. Operating a production cluster with confidence takes months.

Recommended resources for 2024:

  • Kubernetes.io – the official documentation is excellent
  • killer.sh – CKA/CKAD exam simulator for practice
  • k3s + k9s locally to experiment without cloud costs
  • The CKA (Certified Kubernetes Administrator) certification to validate knowledge

In summary

Kubernetes has delivered on its promise: deploying containerized applications at scale, reproducibly, with high availability and automatic scaling. It's a remarkable tool – and remarkably complex.

In 2024, the question is no longer "should a backend developer or infra engineer know Kubernetes?" The answer is yes. The real question is "should you operate it yourself?" – and there, the answer depends on context.

Have a project in mind?

Let's talk about your challenges and see how Gotan can help.

Contact us