Containerization of applications has become mainstream, and there are opportunities to use containers in edge computing as well. In this article, we will look at k3s, which entered the CNCF as a sandbox project in August 2020, and cover its key points.
k3s is an OSS project originally developed by Rancher Labs (headquartered in Cupertino, California, USA) and announced in February 2019. It became a hot topic at launch because it is a certified Kubernetes distribution whose binary is less than 40 MB. It enables enterprises already running Kubernetes to use containers on edge devices as well, and furthermore to centralize the operation and monitoring of the extended Kubernetes as a Service (KaaS) infrastructure. The best-suited applications therefore include the following:
- Edge computing
- ARM environments
- Development Environments
- Embedded Kubernetes
- Host utilities
Looking at the minimum system requirements compared with upstream Kubernetes, we can once again see that it is designed for environments with relatively limited computing resources.
- RAM: 512MB minimum
- CPU: minimum 1 CPU
So, I will run the Server side on an n1-standard-1 instance (1 vCPU, 3.75 GB memory) and the Agent side on an f1-micro instance (1 vCPU, 0.6 GB memory). The OS is Ubuntu 18.04.
- Inbound: 0.0.0.0/0 (allowed from anywhere, for this experiment only)
## Install the Server.

The instance name is set up as `k3s-server`. First, install the Server side.
Install it with the following command:

```shell
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable-agent" K3S_KUBECONFIG_MODE="644" sh -
```

This time, I added `INSTALL_K3S_EXEC="--disable-agent"` to run the agent on a separate server, and `K3S_KUBECONFIG_MODE="644"` so that the kubeconfig can be read by non-root users.
After the installation is complete, output the token that will be registered with the Agent:

```shell
$ cat /var/lib/rancher/k3s/server/node-token
```
## Install the Agent.
The instance name is set up as `k3s-agent-x`. The Agent side is installed as follows, using the token output above and the external IP of the Server:

```shell
$ curl -sfL https://get.k3s.io | K3S_TOKEN=[server_token] K3S_URL=https://[server_external_ip]:6443 sh -
```
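The install script registers the agent as a systemd service named `k3s-agent`, so one quick sanity check on the Agent instance is:

```shell
# Check that the k3s agent service came up (run on the Agent instance)
sudo systemctl status k3s-agent --no-pager
```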
The node (Agent) should now be registered, so let's check it from the Server side:

```shell
$ k3s kubectl get nodes
```
Also, if you copy the file `/etc/rancher/k3s/k3s.yaml` to your PC, you can check the nodes locally. In the copied file, rewrite the `server` field to the Server's external IP (excerpt):

```yaml
server: https://[server_external_ip]:6443 # rewrite this to the Server's external IP
...
- name: default
  username: admin
```
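As a minimal sketch of using the copied file locally (the local path `~/k3s.yaml` is an assumption; adjust it to wherever you saved the file):

```shell
# Point kubectl at the kubeconfig copied from the Server
# (the local path is an assumption)
export KUBECONFIG=~/k3s.yaml
kubectl get nodes
```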
### Running the sample
Now that we've got a k3s cluster up and running, let's see if it works using a simple sample.
The sample we will use is from the official [Kubernetes Deployment: How to Run a Containerized Workload on a Cluster](https://rancher.com/learning-paths/how-to-deploy-your-application-to-kubernetes/) learning path.
To begin, apply the following manifest (excerpt):

```yaml
- name: mysite
...
- containerPort: 80
```
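Only fragments of the manifest are reproduced above. A minimal Deployment consistent with those fragments might look like the following; the image name here is an assumption (the tutorial serves a "Hello World" page from its own sample image), so substitute the image from the learning path:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
        - name: mysite
          image: mysite:v1   # assumption: replace with the tutorial's sample image
          ports:
            - containerPort: 80
```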
After the apply is complete, make sure the pod is running, then curl it from inside the container with the following command:

```shell
$ kubectl exec -it [container_name] -- curl localhost
```
The following should be returned when executed:

```html
<title>Hello World This is Version 1 of our Application</title>
```
Next, let's set the replica count to 4 to scale out:
```shell
$ kubectl scale --replicas=4 deploy/mysite
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mysite-5bc4c5898d-8s448   1/1     Running   0          11m
mysite-5bc4c5898d-5d68q   1/1     Running   0          2m11s
mysite-5bc4c5898d-2zcpp   1/1     Running   0          3s
mysite-5bc4c5898d-nc6pc   1/1     Running   0          3s
```
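When you are done experimenting, you can scale back down or remove the sample entirely (the names match the Deployment above):

```shell
# Scale the Deployment back down to a single replica
kubectl scale --replicas=1 deploy/mysite

# Or remove the sample Deployment altogether
kubectl delete deployment mysite
```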
That covers k3s. It is easy to try, so please give it a go and see for yourself how lightweight it is.
- English official document
- Japanese document