Hot Ingress in the K3S Cluster

October 31, 2020

Traefik logo, copyright by TraefikLabs. Used with permission.

Update 2023-04-21: I have revised the K3S and Traefik installation processes and written a new article in which these are described. The Test-Drive part of this article is still relevant.

In this article I will install the free open-source edition of Traefik 2, version 2.3 being the latest at the time of writing, into a K3S (Kubernetes) cluster. Having installed Traefik 2 into my cluster, I will also take it for a quick test-drive.

What is Traefik?

Traefik will act as a gateway to the services deployed in a Kubernetes cluster, making it possible to access the selected services from outside of the Kubernetes cluster.

Traefik gateway in a Kubernetes cluster.

Some features provided by Traefik (not an exhaustive list):

  • Routing of requests to services based on, for example, host names and path prefixes.
  • Load balancing of requests over multiple instances of a service.
  • Automatic certificate management, for instance via Let's Encrypt.
  • Middlewares such as authentication, rate limiting and request rewriting.
  • Observability in the form of metrics, access logs and tracing.
  • A dashboard showing the current routing configuration.

When deployed to a Kubernetes cluster, as in this article, Traefik acts as an ingress controller.

Please refer to the Traefik documentation for further details!

Prerequisites

The prerequisites for my K3S cluster are:

  • Two Multipass VMs running Ubuntu 18.04 LTS, named k3s-master and k3s-agent01.
  • kubectl installed on the VM host, to manage the cluster from outside the VMs.
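
For reference, the VMs could be created with Multipass along these lines (the resource sizes below are just examples):

# Create the two VMs (CPU, memory and disk sizes are example values).
multipass launch -n k3s-master -c 2 -m 2G -d 10G 18.04
multipass launch -n k3s-agent01 -c 2 -m 2G -d 10G 18.04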

Installing K3S

The installation process is, for the most part, identical to what I described in my earlier article. The difference is that this time I will exclude the default version of Traefik, which at the time of writing is Traefik 1.

Install K3S on Master Node

First I am going to install K3S on the master node.

  • Open a terminal window.
  • Open a shell on the k3s-master VM:
multipass shell k3s-master
  • Install the latest version of K3S for a master node, excluding the default version of Traefik:
curl -sfL https://get.k3s.io | sh -s - --disable=traefik

Output similar to the following should appear in the console:

[INFO]  Finding release for channel stable
[INFO]  Using v1.18.9+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
  • Check the configuration of K3S installed on the master node:
k3s check-config

Example output:

Verifying binaries in /var/lib/rancher/k3s/data/688c8ca42a6cd0c042322efea271d6f3849d3de17c850739b0da2461f6c69ee8/bin:
- sha256sum: good
- links: good

System:
- /sbin iptables v1.6.1: older than v1.8
- swap: disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-118-generic
info: reading kernel config from /boot/config-4.15.0-118-generic ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- /sbin/apparmor_parser
apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled (as module)
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled
      - CONFIG_CRYPTO_GCM: enabled
      - CONFIG_CRYPTO_SEQIV: enabled
      - CONFIG_CRYPTO_GHASH: enabled
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled (as module)
      - CONFIG_XFRM_ALGO: enabled (as module)
      - CONFIG_INET_ESP: enabled (as module)
      - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled (as module)
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled (as module)

STATUS: pass

Notice the last line saying “STATUS: pass”, which means that the K3S master node installation has been verified and found satisfactory.

  • Retrieve the master node token:
sudo cat /var/lib/rancher/k3s/server/node-token

This token will be different for each installation. It is needed when installing the agent node(s). In my case it looks like this:

K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2
  • Exit the k3s-master VM shell:
exit
  • Find the IP address of the master node, which will be needed when installing K3S on the agent node:
multipass list

Example output:

Name                    State             IPv4             Image
k3s-master              Running           192.168.64.7     Ubuntu 18.04 LTS
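
Before moving on to the agent node, the state of the one-node cluster can be checked from inside the k3s-master VM, using the kubectl that is bundled with K3S. This is an optional sanity check:

multipass shell k3s-master
sudo k3s kubectl get nodes
exit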

Install K3S on Agent Node

Having installed K3S on the master node and obtained the master node token, I can now install K3S on the agent node:

  • Open a terminal window if needed.
  • Open a shell on the k3s-agent VM:
multipass shell k3s-agent01
  • Install K3S for an agent node.
    Remember to replace the master node IP address and master node token with your values!
curl -sfL https://get.k3s.io | K3S_URL="https://192.168.64.7:6443" K3S_TOKEN="K10288e77934e06dda1e7523114282478fdc1798545f04235a86b97c71a0bca41f4::server:baecfccac88699f5a12e228e72a69cf2" sh -

Console output from K3S agent installation should look similar to this:

[INFO]  Finding release for channel stable
[INFO]  Using v1.18.9+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
  • Check the configuration of K3S installed on the agent node.
k3s check-config

The console output should be almost identical to the corresponding output on the master node earlier. In my case, there is one difference, in the System section:

System:
- /sbin iptables v1.6.1: older than v1.8
- swap: disabled
- routes: default CIDRs 10.42.0.0/16 or 10.43.0.0/16 already routed

However, at the end of the output, the status is still pass.

  • Exit the k3s-agent01 VM shell:
exit

Configure kubectl

The configuration of kubectl, which I have installed on the VM host to manage my K3S cluster, is described in my previous article.
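
With kubectl configured on the VM host, an optional check is to verify that both the master and the agent node have joined the cluster:

kubectl get nodes -o wide

If k3s-agent01 does not show up after a minute or so, a reasonable place to start looking is the k3s-agent service inside that VM, for example with sudo systemctl status k3s-agent.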

Installing Helm 3

I am going to install Traefik using Helm, the Kubernetes package manager, and so Helm needs to be installed. I followed the Helm installation instructions, using the “From Script” alternative.

  • Execute the following in a terminal window:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

If already installed, the above script will update Helm 3 if there is a newer version available. If the latest version of Helm 3 is already installed, the script will inform you and exit.
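
A quick way to confirm which version of Helm is installed:

helm version --short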

Installing Traefik

Finally it is time to install Traefik 2 in the K3S cluster!

  • Add Traefik 2’s Helm chart repository:
helm repo add traefik https://helm.traefik.io/traefik
  • Update the local information about available charts from the chart repositories:
helm repo update
  • Create a namespace for Traefik 2:
kubectl create ns traefik-v2
  • Download the Traefik 2 default Helm values file from https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml
    This step can be skipped if you do not want to make any modifications to the Traefik Helm chart values.
  • Modify the content of the values file as you find suitable.
    I am going to use the default values file, without any modifications. I could have skipped the values file altogether but wanted to include the option to customize the Traefik installation.
  • Install Traefik 2:
helm install --namespace=traefik-v2 --values=./values.yaml traefik traefik/traefik

If you haven’t made any modifications to the Traefik Helm chart values, the following command should be used instead:

helm install --namespace=traefik-v2 traefik traefik/traefik

Example output from the Traefik installation:

NAME: traefik
LAST DEPLOYED: Thu Oct 22 21:11:19 2020
NAMESPACE: traefik-v2
STATUS: deployed
REVISION: 1
TEST SUITE: None
  • Verify that the Traefik pods have been successfully started:
kubectl get pods -n=traefik-v2 -o wide

Example output showing the Traefik pods:

NAME                       READY   STATUS    RESTARTS   AGE   IP          NODE          NOMINATED NODE   READINESS GATES
svclb-traefik-895t4        2/2     Running   0          61s   10.42.0.8   k3s-master    <none>           <none>
svclb-traefik-ztjj6        2/2     Running   0          61s   10.42.1.3   k3s-agent01   <none>           <none>
traefik-7fb947cdf7-sjjf8   1/1     Running   0          61s   10.42.1.2   k3s-agent01   <none>           <none>

All pods should have the status Running.
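
If a pod does not reach the Running status, the Traefik logs and the Helm release status are reasonable places to start looking. These are just suggestions; they were not needed in my installation:

kubectl logs -n traefik-v2 deployment/traefik
helm status traefik --namespace traefik-v2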

View Traefik Dashboard

The Traefik dashboard lets you view the current state of Traefik in your cluster.

  • List the Traefik pod:
kubectl get pods -n traefik-v2 --selector "app.kubernetes.io/name=traefik" --output name

Example output:

pod/traefik-7fb947cdf7-sjjf8
  • Forward requests on localhost port 9000 to the Traefik pod from the previous step.
    Note that you will have to replace the name of the Traefik pod with the name obtained in the previous step!
kubectl port-forward -n traefik-v2 pod/traefik-7fb947cdf7-sjjf8 9000:9000
  • Open http://localhost:9000/dashboard/ in a browser (note the trailing slash) to view the dashboard.

Traefik 2 dashboard.

Test-Drive

With Traefik 2 installed in my K3S cluster, I will demonstrate how to expose a service using Traefik 2. The example consists of three parts: a deployment, a service and an ingress route.

Traefik 2 example; one ingress route exposing a service that delegates requests to
two pods created from a deployment.

Deployment

The deployment used in the example, stored in a file with the name “01_example-deployment.yaml”, looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami

spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80

Note that:

  • Deployment is a standard Kubernetes resource (apiVersion: apps/v1).
  • The deployment uses the traefik/whoami image.
    This is a small web server that, when it receives an HTTP request, prints out information about the request.
  • The server will listen on port 80, named “web”, in the pod(s) in which it will be running.
  • All pods created from this deployment will have the label app=whoami, as specified by spec.template.metadata.labels.
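
Once the example has been applied (see further down), an optional check is to list the pods using this label:

kubectl get pods -l app=whoami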

Service

The service defines a set of pods that are to process requests as a single service. The service definition used in the example is stored in a file with the name “02_example-service.yaml” and looks like this:

apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - port: 80
      targetPort: web
  selector:
    app: whoami

Note that:

  • Service is also a standard Kubernetes resource (apiVersion: v1).
  • The service will be made available on port 80, as specified by spec.ports.port.
  • Requests to the service will be forwarded to the port with the name “web” in the pod(s) backing the service, as specified by spec.ports.targetPort.
  • All pods with the label app=whoami will be part of the service.
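
Once the example has been applied, the pods actually selected by the service can be inspected through its endpoints, which is another optional check:

kubectl get endpoints whoami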

Ingress Route

The final part of the example is the ingress route, which is listed below. It is stored in a file with the name “03_example-plainhttp-ingressroute.yaml”.

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: PathPrefix(`/whoami`)
    kind: Rule
    services:
    - name: whoami
      port: 80

Note that:

  • The IngressRoute is a Traefik resource (apiVersion: traefik.containo.us/v1alpha1).
  • The IngressRoute will have the name “simpleingressroute”.
  • The IngressRoute will be created in the “default” namespace.
  • The entry point on which this ingress route will accept connections is the web entry point.
    This is the plain HTTP entry point. The name of the HTTPS entry point is “websecure”.
  • There is a route definition.
    The route will match all requests with the path prefix “/whoami” and route them to a service with the name “whoami” on port 80. As if by coincidence, this perfectly matches the service defined above!
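
For completeness: a hypothetical HTTPS variant of the same ingress route would use the websecure entry point and add a tls section. With an empty tls section, Traefik falls back to its default, self-signed certificate. This variant is not used in the remainder of the article:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - match: PathPrefix(`/whoami`)
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls: {}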

Applying the Example Configuration

With the three files in place, we are now ready to apply the configuration using the following commands:

kubectl apply -f 01_example-deployment.yaml
kubectl apply -f 02_example-service.yaml
kubectl apply -f 03_example-plainhttp-ingressroute.yaml

Examine the pods created by the deployment:

kubectl get pods

The result should be similar to this:

NAME                      READY   STATUS    RESTARTS   AGE
whoami-5db58df676-gfrsn   1/1     Running   1          3s
whoami-5db58df676-sgm6t   1/1     Running   1          3s

Examine the service created:

kubectl get service whoami

The result should look like this:

NAME     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
whoami   ClusterIP   10.43.185.44   <none>        80/TCP    4m

Note that the service has no external IP.

Examine the ingress route:

kubectl get ingressroute

The result should look like:

NAME                 AGE
simpleingressroute   3s

Accessing the Service

In order to be able to access the whoami service, we first need to find the port and IP address of the Traefik service:

kubectl get svc -n traefik-v2

The output should look like this:

NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
traefik   LoadBalancer   10.43.4.135   192.168.64.8   80:30074/TCP,443:30261/TCP   7d

The external IP address of the Traefik load balancer is, in my case, 192.168.64.8. Since we were using the web entry point in the ingress route, we want to look at the port that is mapped to port 80, which in my case is port 30074. Recall also that in the ingress route, we defined matching against the path prefix “/whoami”.
Thus the URL at which I can access the whoami service will be:

http://192.168.64.8:30074/whoami

When entering the above URL in a browser, the response is:

Hostname: whoami-5db58df676-sgm6t
IP: 127.0.0.1
IP: ::1
IP: 10.42.2.6
IP: fe80::9403:3ff:fee0:3627
RemoteAddr: 10.42.1.11:53602
GET /whoami HTTP/1.1
Host: 192.168.64.8:30074
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 Firefox/83.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Dnt: 1
Sec-Gpc: 1
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.1.1
X-Forwarded-Host: 192.168.64.8:30074
X-Forwarded-Port: 30074
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-7fb947cdf7-sjjf8
X-Real-Ip: 10.42.1.1
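
The same request can also be made from the command line. As a small sketch, assuming the IP address and port above, repeating the request a few times should show the Hostname alternating between the two whoami pods, since requests are load-balanced over them:

# Send four requests and print only the Hostname line of each response.
for i in 1 2 3 4; do curl -s http://192.168.64.8:30074/whoami | grep Hostname; done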

We have succeeded in exposing the whoami service to the world outside of the Kubernetes cluster using Traefik 2.

Happy coding!

5 thoughts on “Hot Ingress in the K3S Cluster”

  1. Rinoy

    Great Tutorial. One question .. how can we enable tls for traefik with k8s or k3s?

  2. Claiton Campos

    Very good post, Ivan.
    In my case, I’m testing Microk8s, on an EC2 cluster, hosted on AWS. That is, I am not localhost but in the cloud. So I’m having trouble accessing the Traefik dashboard because the instance’s DNS has a public IP, which is not the same generated as it happened in your deploy.
    Anyway, I’m new to kubernetes and I’ve encountered some difficulty because most of the tutorials are presented on localhost. I have been able to access the Kubernetes Dashboard through NodePort, which Traefik does not.
    Congratulations one more time.
