Deploying services with K3s
Understanding K3s
Before we dive into the deployment process, let’s take a moment to understand what K3s is and why it’s gaining popularity. K3s is a certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations, or inside IoT appliances. It’s packaged as a single binary of less than 100MB and has minimal to no operating system dependencies.
The key advantages of K3s include:
1. Simplified installation and maintenance
2. Reduced resource requirements
3. Easier to understand and troubleshoot
4. Suitable for edge computing and IoT scenarios
Now that we have a basic understanding of K3s, let’s move on to the installation process.
Installing K3s
One of the most appealing aspects of K3s is its straightforward installation process. Unlike traditional Kubernetes, which can be complex to set up, K3s can be installed with a single command.
To install K3s on a Linux system, you can use the following command:
curl -sfL https://get.k3s.io | sh -
This command downloads the K3s binary and installs it as a service. By default, it starts the K3s server and configures it to run on boot.
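Once the script finishes, it’s worth verifying that the node came up. K3s bundles kubectl and writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, so you can check the cluster immediately. As a sketch, adding extra nodes follows the same one-line pattern using the join token the server generates; the server address and token below are placeholders:

# Verify the server node is ready (K3s bundles kubectl)
sudo k3s kubectl get nodes

# On the server: print the token that agents use to join
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node: install K3s in agent mode and join the cluster
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -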
Writing Dockerfiles: Containerizing Your Application
Now that we have K3s installed, the next step is to containerize our application. This process involves creating a Dockerfile, which is a text document containing all the commands needed to build a Docker image.
Let’s consider an example of a simple Node.js application. Here’s what a Dockerfile for such an application might look like:
# Use an official Node.js runtime as the base image
FROM node:14

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install application dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the app
CMD [ "node", "app.js" ]
Let’s break down this Dockerfile:
1. We start with a base image (node:14) that includes the Node.js runtime.
2. We set the working directory inside the container.
3. We copy the package.json and package-lock.json files and install dependencies. This step is separated from copying the rest of the code to take advantage of Docker’s layer caching mechanism.
4. We copy the rest of the application code into the container.
5. We expose port 3000, which our application will listen on.
6. Finally, we specify the command to start our application.
This Dockerfile is relatively simple, but it demonstrates the key concepts. In practice, you might need to add more steps, such as setting environment variables, running tests, or optimizing the image size.
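As one sketch of that last point, a multi-stage build can keep build-time tooling out of the final image. The specifics below (the slim base image, copying only production dependencies) are illustrative assumptions, not requirements of K3s:

# Build stage: use the full image to install production dependencies
FROM node:14 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production

# Runtime stage: start from a smaller base and copy in only what’s needed
FROM node:14-slim
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD [ "node", "app.js" ]

For the rest of this walkthrough, though, the simple single-stage Dockerfile above is all we need.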
To build the Docker image from this Dockerfile, you would run:
docker build -t my-nodejs-app .
This command builds an image tagged as my-nodejs-app based on the Dockerfile in the current directory.
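One caveat before moving on: K3s uses containerd as its container runtime rather than Docker, so an image built with docker build is not automatically visible to the cluster. You can push it to a registry the cluster can reach, or import it directly into K3s on each node; a sketch of the import route:

# Export the image from Docker and load it into K3s’s containerd
docker save -o my-nodejs-app.tar my-nodejs-app:latest
sudo k3s ctr images import my-nodejs-app.tar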
Kubernetes YAML Files: Defining Your Deployment
With our application containerized, the next step is to define how it should be deployed in our K3s cluster. This is where Kubernetes YAML files come into play. These files describe the desired state of our application in the cluster.
Let’s create a deployment and a service for our Node.js application:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
      - name: nodejs-app
        image: my-nodejs-app:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app-service
spec:
  selector:
    app: nodejs-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
This YAML file defines two Kubernetes resources:
1. A Deployment named nodejs-app:
– It specifies that we want three replicas of our application running.
– It uses the my-nodejs-app:latest image we built earlier.
– It exposes port 3000 from the container.
2. A Service named nodejs-app-service:
– It selects all pods with the label app: nodejs-app.
– It forwards traffic from port 80 to port 3000 of the selected pods.
The Deployment ensures that a specified number of pod replicas are running at any given time, while the Service provides a stable network endpoint to access these pods.
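One detail to watch if you imported the image locally rather than pushing it to a registry: with the :latest tag, Kubernetes defaults the image pull policy to Always, so the kubelet will try (and fail) to pull from a registry. Setting the policy explicitly in the container spec avoids this:

      containers:
      - name: nodejs-app
        image: my-nodejs-app:latest
        # Prefer the image already present on the node over a registry pull
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000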
Applying Configuration: Bringing It All Together
With our YAML file ready, we can now apply this configuration to our K3s cluster. Here’s how you would do it:
kubectl apply -f nodejs-app.yaml
This command tells Kubernetes to create or update the resources defined in nodejs-app.yaml. Kubernetes will then work to ensure that the actual state of the cluster matches the desired state we’ve defined.
You can check the status of your deployment with:
kubectl get deployments
kubectl get pods
kubectl get services
These commands will show you the status of your deployment, the pods that have been created, and the service that’s routing traffic to your pods.
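If a pod doesn’t come up as expected, a few standard kubectl commands usually narrow the problem down (the names below match the manifests defined earlier):

# Wait for the rollout to finish, or see why it is stuck
kubectl rollout status deployment/nodejs-app

# Show events for the app’s pods (e.g., image pull or scheduling errors)
kubectl describe pods -l app=nodejs-app

# View the application logs from the matching pods
kubectl logs -l app=nodejs-app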
Integrating Traefik: Ingress Configuration
K3s comes with Traefik as its default ingress controller. Traefik is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Let’s configure Traefik to route external traffic to our Node.js application.
First, we need to create an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-app-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nodejs-app-service
            port:
              number: 80
This Ingress resource tells Traefik to route traffic for myapp.example.com to our nodejs-app-service.
Apply this configuration with:
kubectl apply -f nodejs-app-ingress.yaml
Now, Traefik will handle routing external traffic to your application. Remember to configure your DNS to point myapp.example.com to your K3s node’s IP address.
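Until the DNS record exists, you can exercise the route by overriding the Host header; replace the placeholder with your node’s actual IP address:

# Ask Traefik for the app as if the request came for myapp.example.com
curl -H "Host: myapp.example.com" http://<node-ip>/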
Conclusion
Throughout this blog post, we’ve explored the process of deploying an application with K3s, from installation to configuration. While the examples in this post use a simple Node.js application, the principles apply to applications of any complexity. As you become more comfortable with K3s, you’ll find it’s capable of handling a wide range of deployment scenarios.