Kubernetes for Beginners: Orchestrating containers like a pro

Use Kubernetes to automate the deployment, scaling, and management of your containerized applications.

Background

If you’ve worked with containerized applications, you know how powerful they can be. Containers provide a consistent, isolated environment for your applications, making them easier to develop, deploy, and scale. But managing containers across multiple hosts can quickly become complex and overwhelming.

This is where Kubernetes comes in. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

The competition

If you’re considering Kubernetes, you might ask: “Why Kubernetes? What about other container orchestration platforms like Docker Swarm or Apache Mesos?” That’s a fair question. All of these platforms aim to solve similar problems, but Kubernetes has some distinct advantages:

  1. Extensive Ecosystem: Kubernetes has a large and vibrant ecosystem, with a wide range of tools, plugins, and integrations available.
  2. Declarative Configuration: Kubernetes uses declarative configuration, allowing you to describe your desired state, and Kubernetes will work to ensure that the actual state matches.
  3. Self-Healing: Kubernetes can automatically restart containers that fail, replace and reschedule containers when nodes die, and kill containers that don’t respond to your user-defined health check.
  4. Automatic Scaling: Kubernetes can automatically scale your application up and down based on CPU usage or other application-provided metrics.
  5. Industry Standard: Kubernetes has become the de facto standard for container orchestration, with a large community and strong industry support.
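The self-healing described above is driven by health probes you define on your containers. As a rough sketch (the endpoint path and timings here are illustrative, and assume your app exposes an HTTP health route), a liveness probe looks like this:

```yaml
# Fragment of a container spec: restart the container if /healthz stops answering.
livenessProbe:
  httpGet:
    path: /healthz         # hypothetical health endpoint your app would expose
    port: 3000
  initialDelaySeconds: 5   # give the app time to boot before the first check
  periodSeconds: 10        # probe every 10 seconds
```

If the probe fails repeatedly, the kubelet restarts the container automatically.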

Getting Started with Kubernetes

Now that we’ve covered the background and advantages of Kubernetes, let’s dive into creating a simple Kubernetes deployment. We’ll deploy a basic Node.js application.

Setting Up

First, you’ll need access to a Kubernetes cluster. If you’re just getting started, Minikube lets you run a single-node cluster locally. Install Minikube according to the official documentation, then start a cluster with minikube start.

Next, make sure you have kubectl, the Kubernetes command-line tool, installed. You can find installation instructions in the official Kubernetes documentation.

Creating Our Application

Let’s create a simple Node.js application. Create a new directory and initialize a new Node.js project:

mkdir my-app
cd my-app
npm init -y

Install Express.js:

npm install express

Create a new file named app.js:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Kubernetes!');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

This is a basic Express.js application that responds with a simple message.

Dockerizing Our Application

To deploy our application on Kubernetes, we first need to containerize it. Create a new file named Dockerfile:

FROM node:18

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

This Dockerfile starts from the Node.js 18 base image (Node.js 14 has reached end of life), installs the dependencies, copies our application code into the image, and starts the application.
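It’s also worth adding a .dockerignore file next to the Dockerfile so local artifacts like node_modules aren’t copied into the image by COPY . .:

```
node_modules
npm-debug.log
```

This keeps the image small and ensures dependencies are installed fresh inside the image.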

Build the Docker image. If you’re using Minikube, first point your shell at Minikube’s Docker daemon so the cluster can find the locally built image:

eval $(minikube docker-env)
docker build -t my-app .

Deploying on Kubernetes

Now, let’s deploy our application on Kubernetes. Create a new file named deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app
        imagePullPolicy: Never
        ports:
        - containerPort: 3000

This deployment configuration tells Kubernetes to create 3 replicas of our application, each running in a separate pod. Because our image exists only locally, imagePullPolicy: Never tells Kubernetes to use it as-is instead of trying to pull it from a registry.

Apply the deployment, then check that the pods come up:

kubectl apply -f deployment.yaml
kubectl get pods

Exposing Our Application

To make our application accessible from outside the cluster, we need to create a service. Create a new file named service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

This service configuration exposes our application on port 80 and load-balances traffic across the pods. Note that on a local cluster there is no cloud provider behind the LoadBalancer type, so the external IP will stay pending; Minikube instead provides access through the minikube service command.

Apply the service:

kubectl apply -f service.yaml

Accessing Our Application

After a few minutes, your application should be accessible. If you’re using Minikube, you can get the URL with:

minikube service my-app --url

Visit this URL in your browser, and you should see the “Hello from Kubernetes!” message.

Advanced Features

While our example is simple, Kubernetes offers a wide range of features for managing complex applications. Here are a few advanced features to consider:

Rolling Updates and Rollbacks

Kubernetes allows you to perform rolling updates to your applications, gradually replacing old pods with new ones. If something goes wrong, you can also easily roll back to a previous version. Assuming you’ve built and tagged a new image as my-app:v2:

kubectl set image deployment/my-app my-app=my-app:v2
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
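How aggressively a rolling update proceeds is itself configurable on the Deployment; a sketch of the relevant fragment of the spec (values are illustrative):

```yaml
# Fragment of the Deployment spec; tune these for your rollout tolerance.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # allow one extra pod above the desired count during an update
    maxUnavailable: 0    # never drop below the desired count of ready pods
```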

Autoscaling

Kubernetes can automatically scale your application based on CPU usage or custom metrics. You can define Horizontal Pod Autoscalers (HPA) to automatically adjust the number of pods based on observed metrics.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

This HPA configuration scales the deployment between 1 and 10 replicas, aiming to keep average CPU utilization across the pods around 50%. Note that autoscaling/v2 is the current stable API (the older v2beta1 version has been removed from recent Kubernetes releases), and that the HPA relies on the metrics-server, which on Minikube you can enable with minikube addons enable metrics-server.
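One prerequisite worth calling out: CPU utilization is computed against the container’s CPU request, so the Deployment’s container spec needs one for the HPA to work. A sketch of the addition (the request and limit values are illustrative):

```yaml
# Fragment of the Deployment's container spec; values are illustrative.
resources:
  requests:
    cpu: 100m        # the HPA's utilization target is measured against this request
    memory: 64Mi
  limits:
    cpu: 250m
    memory: 128Mi
```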

ConfigMaps and Secrets

Kubernetes provides ConfigMaps and Secrets for separating configuration from your application code. ConfigMaps are used for non-sensitive data, while Secrets are used for sensitive data like passwords or API keys.
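One caveat about Secrets before we look at a ConfigMap example: values in a Secret manifest are stored base64-encoded, and base64 is an encoding, not encryption. You can produce an encoded value like so:

```shell
# base64-encode a value for a Secret manifest (-n avoids encoding a trailing newline)
echo -n 'hunter2' | base64
# → aHVudGVyMg==
```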

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  APP_COLOR: blue
  APP_MODE: prod

This ConfigMap defines configuration data that can be consumed by your application.
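To actually consume the ConfigMap, reference it from the Deployment’s container spec. A sketch of the pattern (the secretKeyRef shows the analogous mechanism for a hypothetical Secret named my-app-secret, which you would create separately):

```yaml
# Fragment of the Deployment's container spec.
envFrom:
- configMapRef:
    name: my-app-config    # injects APP_COLOR and APP_MODE as environment variables
env:
- name: API_KEY
  valueFrom:
    secretKeyRef:
      name: my-app-secret  # hypothetical Secret; create it separately
      key: api-key
```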

Conclusion

We’ve only scratched the surface of what’s possible with Kubernetes. By automating the deployment, scaling, and management of your containerized applications, Kubernetes can significantly simplify your operations and allow you to build more resilient, scalable systems.

While Kubernetes is a powerful tool, it also comes with a learning curve. Its concepts and API can be complex, and it requires a significant shift in how you think about your application architecture.

As with any technology, it’s important to evaluate whether Kubernetes is the right fit for your specific needs. For complex, large-scale applications with high scalability requirements, Kubernetes can be a game-changer. For simpler applications, it might introduce unnecessary complexity.

If you’re just getting started with Kubernetes, I recommend going through the official Kubernetes tutorials and experimenting with deploying your own applications. The Kubernetes documentation is also an excellent resource as you dive deeper.

Happy orchestrating!
