
## Introduction
Hello, this is Tai Qi Ito from the TIG/DX team. This is the first article in our CNCF series.
Containerization of applications has become mainstream, and containers are increasingly being used in edge computing as well. In this article, I would like to share some key points about running containers at the edge.
Specifically, we will look at k3s, which joined the CNCF as a Sandbox project in August 2020.
## What is k3s?
k3s is an OSS project originally developed by Rancher Labs (headquartered in Cupertino, California, USA) and announced in February 2019. It attracted attention at launch because it is an official Kubernetes distribution whose binary is less than 40MB. It allows enterprises already running Kubernetes to use containers on edge devices as well, and furthermore to centralize the operation and monitoring of the expanded Kubernetes-as-a-Service (KaaS) infrastructure. Typical use cases include the following:
- Edge computing
- CI
- ARM environments
- IoT
- Development Environments
- Embedded Kubernetes
## 5 changes in k3s
The name k3s comes from the five changes from Kubernetes (k8s).
![k3s architecture](k3s_architecture.png)
(Source: https://k3s.io/)
### Datastore changes
The datastore of the Kubernetes control plane is etcd by default, but k3s replaces it with SQLite. The datastore is not fixed, though: MySQL, PostgreSQL, and etcd can also be used.
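For example, starting the Server against an external MySQL datastore looks roughly like the following sketch, using the `--datastore-endpoint` option (the connection-string values are placeholders):

```shell
# start the k3s server against an external MySQL datastore
# (user, pass, host and dbname are placeholders)
$ curl -sfL https://get.k3s.io | sh -s - server \
    --datastore-endpoint="mysql://user:pass@tcp(host:3306)/dbname"
```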
### Single binary of components
The components required for the Kubernetes control plane are combined into a single binary that runs as a single process. This has the advantage of automating otherwise cumbersome cluster operations (such as certificate distribution).
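One way to observe this (assuming the default install, which sets up a systemd service named k3s) is that the whole control plane shows up as a single process:

```shell
# the control plane components all live inside one "k3s server" process
$ systemctl status k3s
$ ps -ef | grep "k3s server"
```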
### Minimal external dependencies
Apart from the following packages that k3s requires, external dependencies are kept to a minimum.
- containerd
- Flannel
- CoreDNS
- Host utilities
### Built-in features
Features such as a Helm controller and the Traefik ingress controller are built in, so they are available with k3s alone.
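As an illustration of the built-in Helm controller, a chart can be deployed declaratively with a HelmChart resource. This is just a sketch; the nginx chart and Bitnami repo are illustrative choices, not part of the original walkthrough:

```yaml
# a minimal HelmChart manifest for the built-in Helm controller
# (the chart and repo values are illustrative)
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: nginx
  targetNamespace: default
```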
### Removing plugins
Kubernetes ships with in-tree plugins for storage and various cloud providers, but these have been removed in k3s.
Looking at these changes from Kubernetes, we can see once again that k3s is designed for environments with relatively limited computing resources.
## Let's try it out
Getting the prerequisite knowledge is important, but so is actually trying things out. So, I'd like to try running k3s on my favorite GCP. The minimum requirements to run k3s are:
- RAM: 512MB minimum
- CPU: minimum 1 CPU
So, I will run the Server side on n1-standard-1 (1 vCPU, 3.75 GB memory) and the Agent side on f1-micro (1 vCPU, 0.6 GB memory). The OS is Ubuntu 18.04.
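For reference, creating the two instances with gcloud would look something like this (a sketch; the instance names, zone, and image family are my assumptions):

```shell
# Server (n1-standard-1) and Agent (f1-micro), both Ubuntu 18.04
# the zone is an illustrative choice
$ gcloud compute instances create k3s-server \
    --machine-type=n1-standard-1 \
    --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud \
    --zone=asia-northeast1-a
$ gcloud compute instances create k3s-agent \
    --machine-type=f1-micro \
    --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud \
    --zone=asia-northeast1-a
```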
### Firewall
To allow external access to the Server, open the following:
- Inbound: 0.0.0.0/0
- tcp:6443
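On GCP this can be done with a firewall rule like the following (a sketch; the rule name and use of the default network are assumptions):

```shell
# allow inbound 6443/tcp from anywhere to reach the k3s API server
$ gcloud compute firewall-rules create allow-k3s-api \
    --direction=INGRESS --action=ALLOW \
    --rules=tcp:6443 --source-ranges=0.0.0.0/0
```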
### Install the Server
Name the instance k3s-server. First, install the Server side.
```shell
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable-agent" K3S_KUBECONFIG_MODE="644" sh -
```
This time, I added INSTALL_K3S_EXEC="--disable-agent" because the Agent will run on a different server, and K3S_KUBECONFIG_MODE="644" so that the kubeconfig can be read without root privileges.
After the installation is complete, output the token that the Agent will use to register:
```shell
$ cat /var/lib/rancher/k3s/server/node-token
```
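The Agent side is installed with the same script, passing the Server URL and this token (a sketch; replace the bracketed placeholders with your Server's external IP and the token printed above):

```shell
# run on the Agent instance; [server_external_ip] and [node_token] are placeholders
$ curl -sfL https://get.k3s.io | K3S_URL=https://[server_external_ip]:6443 \
    K3S_TOKEN=[node_token] sh -
```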
Once the Agent has registered, we should be able to see the node, so let's check from the Server side:
```shell
$ k3s kubectl get nodes
```
Also, if you copy the file /etc/rancher/k3s/k3s.yaml to your PC, you can check the nodes locally.
```/etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: secret
    server: https://[server_external_ip]:6443 # rewrite this to the Server's external IP
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: xxxxxxxxx
    username: admin
```
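For example, assuming the file was saved as ~/.kube/k3s.yaml on your PC (the local path is my assumption; any location works), you can point kubectl at it directly:

```shell
# query the cluster from your PC using the copied kubeconfig
$ kubectl --kubeconfig ~/.kube/k3s.yaml get nodes
```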
### Running the sample
Now that we've got a k3s cluster up and running, let's see if it works using a simple sample.
The sample we will use is from Rancher's official learning path [Kubernetes Deployment: How to Run a Containerized Workload on a Cluster](https://rancher.com/learning-paths/how-to-deploy-your-application-to-kubernetes/).
To begin, apply the following.
```testdeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysite
labels:
app: mysite
spec:
replicas: 1
selector:
matchLabels:
app: mysite
template:
metadata:
labels:
app: mysite
spec:
containers:
- name: mysite
image: kellygriffin/hello:v1
ports:
- containerPort: 80
```

Apply it with kubectl apply -f testdeploy.yaml. After the apply is complete, make sure the pod is running and curl it with the following command.

```shell
kubectl get pods
kubectl exec -it [container_name] curl localhost
```
The following should be returned when executed.
```html
<!DOCTYPE html>
<html>
<head>
<title>Hello World This is Version 1 of our Application</title>
</head>
</html>
```
Next, let's set Replica to 4 to scale.
```shell
kubectl scale --replicas=4 deploy/mysite
```

Verify that it scales.

```shell
kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mysite-5bc4c5898d-8s448   1/1     Running   0          11m
mysite-5bc4c5898d-5d68q   1/1     Running   0          2m11s
mysite-5bc4c5898d-2zcpp   1/1     Running   0          3s
mysite-5bc4c5898d-nc6pc   1/1     Running   0          3s
```
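When you are done experimenting, cleanup is simple. The install script also places uninstall scripts on each machine; a sketch assuming the standard install paths:

```shell
# remove the sample deployment
kubectl delete -f testdeploy.yaml
# uninstall k3s on the Server
/usr/local/bin/k3s-uninstall.sh
# uninstall k3s on the Agent
/usr/local/bin/k3s-agent-uninstall.sh
```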
## Summary
Kubernetes is becoming more and more popular, and I believe k3s will expand its reach even further. Where Kubernetes speaks of Masters and Nodes, k3s speaks of Servers and Agents. The Server is essentially the control plane, and Agents can be placed on edge devices, so clusters can be managed even when environments differ, rather than everything being pulled into the cloud. As a different pattern, with embedded Kubernetes it may be possible to embed a cluster in each product (since you can choose whether Agents share a Server or are kept separate).
I have covered k3s at some length here, but it is easy to try, so please give it a go and experience how approachable it is.
## Reference
- Official documentation (English)
- Documentation (Japanese)