Two years ago already?!? Apparently more than two years have passed since I wrote an article about installing and testing Traefik v2 in a K3S cluster. Given a recent article in which I was able to completely automate the creation and setup of a K3S cluster, I thought I would revisit my old article and automate the installation of Traefik v2 as well. Prompted by a comment on the earlier article, I have also completely revised the procedure for installing the latest version of Traefik v2 in my K3S cluster.
Regarding the Traefik versions: as of writing, installing K3S installed Traefik version 2.9.4 by default. When installing Traefik using the latest version of the Traefik Helm chart, version 2.9.10 was installed.
This article will show how to use the Ansible role to create a K3S cluster and then how to install and, albeit ever so slightly, customize Traefik v2 in the K3S cluster.

Prerequisites
Please refer to my previous article on Automatized K3S Cluster Installation with Ansible – this article shares the same prerequisites.
Clone or download the k3s-cluster-ansible repository available here: https://github.com/krizsan/k3s-cluster-ansible
With the repository cloned, or downloaded and unpacked, open a terminal window in the root of the project and install the Ansible collection:
ansible-galaxy collection install k3s_cluster --force
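As an optional sanity check, you can list the installed Ansible collections; assuming the installation above succeeded, the k3s_cluster collection should show up in the output:
ansible-galaxy collection list | grep -i k3s_cluster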
Create K3S Cluster
To create the K3S cluster, I will use a modified version of the k3s-cluster-create.yml Ansible playbook from the project mentioned in the prerequisites above. I am assuming the same cluster configuration as in the earlier article.
- Edit the k3s-cluster-create.yml file so that it looks like this:
Note how the k3s_install_master_cmd is customized to disable the default Traefik installation.
---
- name: K3S Cluster
  hosts: all
  gather_facts: false
  vars:
    k3s_install_master_cmd: "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -s - --disable=traefik"
  tasks:
    - name: Create the K3S cluster
      ansible.builtin.include_role:
        name: ivankrizsan.k3s_cluster.k3scluster
- Create the K3S cluster:
ansible-playbook -i inventory.yml k3s-cluster-create.yml
- Configure the local kubectl to manage the K3S cluster:
ansible-playbook -i inventory.yml kubectl-local-config.yml
- Verify K3S cluster:
Note that your actual output will, depending on the contents of your inventory file, differ from the output shown below but you should have one master node and multiple agent nodes.
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k3s-agent-02   Ready    <none>                 113s   v1.26.3+k3s1
k3s-agent-01   Ready    <none>                 112s   v1.26.3+k3s1
k3s-master     Ready    control-plane,master   2m7s   v1.26.3+k3s1
We now have a K3S cluster that does not have Traefik installed.
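If you want to double-check that the bundled Traefik installation really was disabled, you can look for Traefik pods across all namespaces; at this point the following command should produce no output:
kubectl get pods --all-namespaces | grep -i traefik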
Install Traefik v2
As mentioned in the introduction, I have revised the installation of Traefik v2, given that K3S contains a Helm controller. The following deployment YAML replaces the invocation of the Helm command; I have placed it in a file named “deploy-traefik.yml”.
# Creates the namespace into which Traefik will be deployed.
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-v2
---
# Install Traefik v2 using the Helm controller.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  # Name of the Helm chart to use.
  name: traefik
  # Namespace in which the Traefik Helm chart job will be run.
  namespace: kube-system
spec:
  # Helm chart repository. Does not have to be defined in the K3S cluster prior to deploy of this configuration.
  repo: https://traefik.github.io/charts
  # Name of Helm chart to deploy.
  chart: traefik
  # Namespace to which Traefik will be deployed.
  targetNamespace: traefik-v2
  # Modified configuration of the Helm chart.
  # Add any values from the Helm chart here that are to have a value that is not
  # the same as the value found in the Helm chart.
  valuesContent: |-
    globalArguments: [ ]
Note that:
- The above YAML includes a Namespace resource that creates the traefik-v2 namespace in which Traefik v2 will be deployed.
- Next is the HelmChart resource, which contains the parameters that are passed to the Helm controller to install Traefik v2.
- Under the metadata key, the name of the Helm chart to be used is specified. Not surprisingly, the name of the Helm chart is “traefik”.
- Also under the metadata key, the namespace in which the installation job will be run is specified. I have chosen to run the installation in the “kube-system” namespace.
- The spec.repo field specifies the Helm chart repository in which the Helm chart to be used is to be found. Unlike when using the Helm command, the chart repository does not have to be explicitly added prior to installation, but is specified as part of it.
- The spec.chart field specifies the name of the Helm chart in the Helm chart repository.
- The spec.targetNamespace field specifies the namespace in which Traefik will be installed. In this case, the namespace naturally matches the namespace created first in the file, which is “traefik-v2”.
- Finally, under spec.valuesContent, any customizations of the Helm chart values are specified.
In this example, I have overridden globalArguments in order to disable version checking and the sending of anonymous usage data. The original contents of globalArguments in the Traefik Helm chart at the time of writing are:
globalArguments:
  - "--global.checknewversion"
  - "--global.sendanonymoususage"
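For readers more used to the Helm CLI: the HelmChart resource above is, roughly speaking, the declarative equivalent of the following commands, where values.yaml is a hypothetical file containing only the overridden values (here, the empty globalArguments list). This is merely a sketch for comparison; the actual installation in this article is performed by the K3S Helm controller.
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --namespace traefik-v2 --values values.yaml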
To install Traefik v2 in the K3S cluster, apply the above using kubectl:
kubectl create -f deploy-traefik.yml
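Before looking at the logs, it can be useful to verify that the HelmChart resource was created and that the Helm controller has started the corresponding installation job; assuming the default behaviour, the job is named helm-install-traefik, which is the same name used in the label selector below:
kubectl get helmchart traefik -n kube-system
kubectl get job helm-install-traefik -n kube-system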
To verify the outcome of the installation, issue the following command:
kubectl logs $(kubectl get pods -n kube-system -l job-name=helm-install-traefik --output name) -n kube-system
The above is actually two invocations of kubectl combined into one line. The first invocation retrieves the name of the pod in the kube-system namespace in which the Helm installation of Traefik was executed. The second invocation retrieves the logs from that pod. If Traefik was installed successfully, the output will look something like this:
if [[ ${KUBERNETES_SERVICE_HOST} =~ .*:.* ]]; then
	echo "KUBERNETES_SERVICE_HOST is using IPv6"
	CHART="${CHART//%\{KUBERNETES_API\}%/[${KUBERNETES_SERVICE_HOST}]:${KUBERNETES_SERVICE_PORT}}"
else
	CHART="${CHART//%\{KUBERNETES_API\}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}}"
fi
set +v -x
+ [[ '' != \t\r\u\e ]]
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ helm_v2 init --skip-refresh --client-only --stable-repo-url https://charts.helm.sh/stable/
+ tiller --listen=127.0.0.1:44134 --storage=secret
Creating /home/klipper-helm/.helm
Creating /home/klipper-helm/.helm/repository
Creating /home/klipper-helm/.helm/repository/cache
Creating /home/klipper-helm/.helm/repository/local
Creating /home/klipper-helm/.helm/plugins
Creating /home/klipper-helm/.helm/starters
Creating /home/klipper-helm/.helm/cache/archive
Creating /home/klipper-helm/.helm/repository/repositories.yaml
Adding stable repo with URL: https://charts.helm.sh/stable/
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/klipper-helm/.helm.
Not installing Tiller due to 'client-only' flag having been set
[main] 2023/04/20 20:25:55 Starting Tiller v2.17.0 (tls=false)
[main] 2023/04/20 20:25:55 GRPC listening on 127.0.0.1:44134
[main] 2023/04/20 20:25:55 Probes listening on :44135
[main] 2023/04/20 20:25:55 Storage driver is Secret
[main] 2023/04/20 20:25:55 Max history per release is 0
++ jq -r '.Releases | length'
++ timeout -s KILL 30 helm_v2 ls --all '^traefik$' --output json
[storage] 2023/04/20 20:25:55 listing all releases with filter
+ V2_CHART_EXISTS=
+ [[ '' == \1 ]]
+ [[ '' == \v\2 ]]
+ [[ -f /config/ca-file.pem ]]
+ [[ -n '' ]]
+ shopt -s nullglob
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/traefik.tgz.base64
+ CHART_PATH=/tmp/traefik.tgz
+ [[ ! -f /chart/traefik.tgz.base64 ]]
+ return
+ [[ install != \d\e\l\e\t\e ]]
+ helm_repo_init
+ grep -q -e 'https\?://'
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
+ [[ traefik == stable/* ]]
+ [[ -n https://traefik.github.io/charts ]]
+ helm_v3 repo add traefik https://traefik.github.io/charts
"traefik" has been added to your repositories
+ helm_v3 repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈
+ helm_update install --namespace traefik-v2 --repo https://traefik.github.io/charts
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
++ tr '[:upper:]' '[:lower:]'
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ helm_v3 ls --all -f '^traefik$' --namespace traefik-v2 --output json
+ LINE=null,null
+ IFS=,
+ read -r INSTALLED_VERSION STATUS _
+ VALUES=
+ for VALUES_FILE in /config/*.yaml
+ VALUES=' --values /config/values-01_HelmChart.yaml'
+ [[ install = \d\e\l\e\t\e ]]
+ [[ null =~ ^(|null)$ ]]
+ [[ null =~ ^(|null)$ ]]
+ echo 'Installing helm_v3 chart'
+ helm_v3 install --namespace traefik-v2 --repo https://traefik.github.io/charts traefik traefik --values /config/values-01_HelmChart.yaml
NAME: traefik
LAST DEPLOYED: Thu Apr 20 20:25:57 2023
NAMESPACE: traefik-v2
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Traefik Proxy v2.9.10 has been deployed successfully on traefik-v2 namespace !
+ exit
We can see that, after having added the Traefik Helm chart repository and having retrieved the most recent Helm chart from the repository, Traefik v2.9.10 was successfully deployed.
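As an additional, optional, check you can list the resources in the traefik-v2 namespace; assuming the default Helm chart values, there should be one Traefik pod in the Running state together with a traefik service:
kubectl get pods,services -n traefik-v2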
View Traefik Dashboard
To be able to view the Traefik dashboard, run the following command in a terminal window:
kubectl port-forward -n traefik-v2 $(kubectl get pods -n traefik-v2 -o name) 9000:9000
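Note that the command substitution above simply picks up whatever pods exist in the traefik-v2 namespace, which works here because Traefik is the only thing deployed in it. If you have deployed other pods to the namespace, or run multiple Traefik replicas, a variant using a label selector is safer; this sketch assumes the chart's default app.kubernetes.io/name=traefik label:
kubectl port-forward -n traefik-v2 $(kubectl get pods -n traefik-v2 -l app.kubernetes.io/name=traefik -o name | head -n 1) 9000:9000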
In a browser, navigate to http://127.0.0.1:9000/dashboard/
The Traefik dashboard with the dark theme should look like this:

Traefik v2.9.10 dashboard.
Here we can also see that it is version 2.9.10 of Traefik that has been installed and is running in the K3S cluster. For an actual test of Traefik, please refer to my earlier article on Traefik!
Happy coding!