Automated K3S Cluster in Multipass VMs with Ansible

April 13, 2023

Some time ago I set out to automate the creation of a K3S (lightweight Kubernetes) cluster in Multipass virtual machines using Ansible, since I wanted to be able to quickly create a K3S cluster for wild experiments. Given a number of nodes and a configuration for each of the virtual machines on which the nodes are to run, I wanted to create the virtual machines and then install K3S on the different nodes. Initially I was under the impression that I would have to create an Ansible dynamic inventory plugin in order to be able to install K3S on the virtual machines, but newly gained knowledge has allowed me to manage this without such a plugin.

In this article I will create an Ansible role that creates a K3S cluster, as well as an Ansible playbook that uses the role. The Ansible role will also contain a Molecule test that can be used to create and destroy a K3S cluster, including the Multipass virtual machines in which the cluster nodes run.
For convenience I will also create an Ansible playbook that configures kubectl so that it can control a K3S cluster created by the Ansible role.

The complete project is available on GitHub.

Historical References

This article builds on a few articles I have written earlier:

Reading the above articles before this article is not necessary.

Prerequisites

The following are required prerequisites in order to be able to run the code in this article:

  • Multipass
    Manages Ubuntu virtual machines. Available for Linux, macOS and Windows.
  • Ansible
    Infrastructure-as-code automation tool.
  • Molecule
    Tool used to test Ansible roles.
  • Kubectl
    Tool to manage one or more Kubernetes clusters.

Create the Project

In an earlier article I have detailed the creation of Ansible projects that contain Molecule tests, so here I will just list the commands that set up the project for this article.

mkdir k3s-cluster-ansible
cd k3s-cluster-ansible
ansible-galaxy collection init ivankrizsan.k3s_cluster
mv ivankrizsan/k3s_cluster .
rmdir ivankrizsan
cd k3s_cluster/roles/
molecule init role k3scluster
cd k3scluster
rm -r files
rm -r handlers
rm -r tests
rm -r vars

Additionally, in the project root create an Ansible configuration file named “ansible.cfg” with the following contents:

[defaults]
collections_paths = ./ansible_collections
roles_path = ./ansible_roles
remote_tmp = /var/tmp

There should now be a project skeleton containing an Ansible collection named “k3s_cluster” that in turn contains one Ansible role named “k3scluster”.
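
For orientation, the project skeleton should look roughly like the listing below; the exact set of generated files depends on the Ansible and Molecule versions used.

k3s-cluster-ansible/
├── ansible.cfg
└── k3s_cluster/
    ├── galaxy.yml
    ├── README.md
    ├── docs/
    ├── plugins/
    └── roles/
        └── k3scluster/
            ├── README.md
            ├── defaults/
            ├── meta/
            ├── molecule/
            ├── tasks/
            └── templates/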

K3S Cluster Inventory

The first file specific to this article’s project is the Ansible inventory file named “inventory.yml”, located in the root of the project. In this file the number of nodes in the cluster and the configuration of each node are specified. Unlike what is most common, at least for me, the nodes do not exist prior to execution of the example playbook that will be developed later.

all:
  children:
    # Currently only one master node is supported.
    # The name of a master node must contain the string "master".
    master-nodes:
      hosts:
        k3s-master:
          # Multipass VM configuration for the master node.
          vm_config: "-m 3G --disk 5G 20.04"
    # Names of K3S agent nodes must contain the string "agent".
    agent-nodes:
      hosts:
        k3s-agent-01:
          # Multipass VM configuration for agent 1.
          vm_config: "-m 4G --disk 5G 20.04"
        k3s-agent-02:
          # Multipass VM configuration for agent 2.
          vm_config: "-m 5G --disk 5G 20.04"

Note that:

  • There are two host groups: “master-nodes” and “agent-nodes”.
    These groups determine, as we will see later, what will be installed on the nodes that belong to the group. There is a comment saying that currently only a single master node is supported. A single master node is sufficient for a cluster created for experimental purposes and I have tried to keep things simple.
  • Below each host there is a variable named “vm_config”.
    The value of this variable contains the parameters to the “multipass launch” command that will create the virtual machines for the nodes of the cluster. The -m option specifies how much memory to allocate to the virtual machine and the --disk option specifies how much disk space to allocate. Finally, the parameter “20.04” specifies the version of Ubuntu that will run in the virtual machines. How these parameters end up in the actual command is shown in the sketch below.
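
To make the mapping concrete: the role developed later in this article combines the VM name (taken from the inventory host name), a cloud-init file generated by the role and the vm_config value into a single multipass launch command. For the k3s-master host above, the resulting command will look roughly like this:

multipass launch \
--name k3s-master \
--cloud-init ~/k3scluster/temp/cloud-init.yaml \
-m 3G --disk 5G 20.04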

Example Playbook

The example Ansible playbook that uses the K3S cluster role developed later in this article is trivial. It is located in a file named “k3s-cluster-create.yml”, also in the root of the project:

---
- name: K3S Cluster
  hosts: all
  gather_facts: false
  tasks:
    - name: Create the K3S cluster
      ansible.builtin.include_role:
        name: ivankrizsan.k3s_cluster.k3scluster

Note that:

  • gather_facts is set to false.
    The reason for this is that facts cannot be gathered for the hosts in the inventory, since the virtual machines representing the hosts will only be created during execution of the playbook.

Configure kubectl Playbook

For convenience, an Ansible playbook that configures a previously installed kubectl has been included in the file “kubectl-local-config.yml” in the project root. Instructions on how to install kubectl under different operating systems can be found here.

---
# Configures a previously installed kubectl as to be able to manage the K3S cluster
# running in Multipass VMs.
#
# Variables:
# vm_user - User in the Multipass VM used to SSH to the VM with.
# vm_user_ssh_private_key_file - File containing above user's private key used to SSH to the VM with.
- name: Install and configure kubectl
  hosts: master-nodes
  vars:
    vm_user: "vmadmin"
    tempfiles_root_rel: "~/k3scluster"
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_connection: local
    k3s_cluster_user: "k3s_user"
    k3s_cluster: "k3s"
    k3s_context: "k3s_context"
  tasks:
    - name: Find absolute path to tempfiles root
      ansible.builtin.shell: "echo {{ tempfiles_root_rel }}"
      register: tempfiles_root_out
      delegate_to: localhost
    - name: Retrieve absolute path to tempfiles root
      ansible.builtin.set_fact:
        tempfiles_root: "{{ tempfiles_root_out.stdout }}"
    - name: Set paths where tempfiles root is used
      ansible.builtin.set_fact:
        vm_user_ssh_private_key_file: "{{ tempfiles_root  }}/multipass-certs/user_key"
        client_key_file: "{{ tempfiles_root }}/temp/k3s_client_key"
        client_certificate_file: "{{ tempfiles_root }}/temp/k3s_client_certificate"
        cluster_certificate_authority_file: "{{ tempfiles_root }}/temp/k3s_cluster_certificate_authority"
    - name: Log the path of the VM user SSH private key file
      ansible.builtin.debug:
        msg: "Using VM user SSH private key file: {{ vm_user_ssh_private_key_file }}"
    - name: Obtain IP address of K3S cluster master node
      ansible.builtin.include_role:
        name: ivankrizsan.k3s_cluster.k3scluster
        tasks_from: get-node-ip
    - name: Store K3S master node IP address
      ansible.builtin.set_fact:
        k3s_master_ip: "{{ vm_ip }}"
    - name: Retrieve K3S master kubectl configuration from K3S master node
      ansible.builtin.include_role:
        name: ivankrizsan.k3s_cluster.k3scluster
        tasks_from: execute-cmd-and-log.yml
      vars:
        vm_command: "sudo kubectl config view --minify --flatten"
        log_msg_prefix: "K3S master token"
        ansible_user: "{{ vm_user }}"
        ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
        ansible_connection: ssh
    - name: Parse kubectl configuration YAML
      ansible.builtin.set_fact:
        master_kubectl_config: "{{ cmd_output.stdout | from_yaml }}"
    - name: Retrieve kubectl user client certificate data from K3S kubectl configuration.
      ansible.builtin.set_fact:
        kubectl_client_certificate_data: "{{ master_kubectl_config['users'][0]['user']['client-certificate-data'] }}"
    - name: Write user client certificate to file.
      ansible.builtin.copy:
        content: "{{ kubectl_client_certificate_data | b64decode }}"
        dest: "{{ client_certificate_file }}"
    # User client key from K3S kubectl configuration
    - name: Retrieve kubectl user client key data
      ansible.builtin.set_fact:
        kubectl_client_key_data: "{{ master_kubectl_config['users'][0]['user']['client-key-data'] }}"
    - name: Write user client key to file
      ansible.builtin.copy:
        content: "{{ kubectl_client_key_data | b64decode }}"
        dest: "{{ client_key_file }}"
    # Cluster certificate authority from K3S kubectl configuration
    - name: Retrieve kubectl cluster certificate-authority-data
      ansible.builtin.set_fact:
        kubectl_cluster_certificate_authority_data: "{{ master_kubectl_config['clusters'][0]['cluster']['certificate-authority-data'] }}"
    - name: Write cluster certificate authority to file
      ansible.builtin.copy:
        content: "{{ kubectl_cluster_certificate_authority_data | b64decode }}"
        dest: "{{ cluster_certificate_authority_file }}"
    # Create or re-create a K3S cluster in the local kubectl configuration
    - name: Delete any existing K3S cluster in the local kubectl configuration
      ansible.builtin.shell: kubectl config delete-cluster {{ k3s_cluster }}
      ignore_errors: true
    - name: Add a K3S cluster to the local kubectl configuration
      ansible.builtin.shell: |
        kubectl config set-cluster {{ k3s_cluster }} \
        --embed-certs \
        --server=https://{{ k3s_master_ip }}:6443 \
        --certificate-authority={{ cluster_certificate_authority_file }}
    # Create or re-create a K3S user in the local kubectl configuration
    - name: Delete any existing user in the local kubectl configuration
      ansible.builtin.shell: kubectl config delete-user {{ k3s_cluster_user }}
      ignore_errors: true
    - name: Add the K3S user to the local kubectl configuration
      ansible.builtin.shell: |
        kubectl config set-credentials {{ k3s_cluster_user }} \
        --embed-certs \
        --client-certificate={{ client_certificate_file }} \
        --client-key={{ client_key_file }}
    # Create or re-create a context in the local kubectl configuration that connects
    # the cluster and the user created earlier
    - name: Delete any existing context in the local kubectl configuration
      ansible.builtin.shell: kubectl config delete-context {{ k3s_context }}
      ignore_errors: true
    - name: Create the K3S context in the local kubectl configuration
      ansible.builtin.shell: |
        kubectl config set-context {{ k3s_context }} \
        --cluster={{ k3s_cluster }} \
        --user={{ k3s_cluster_user }}
    # Set the current context to the K3S context
    - name: Set the current kubectl context to the K3S context
      ansible.builtin.shell: |
        kubectl config use-context {{ k3s_context }}
    # Delete the files containing client certificate, client key and cluster certificate authority
    - name: Delete certificate files used when configuring kubectl
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      with_items:
        - "{{ client_certificate_file }}"
        - "{{ client_key_file }}"
- "{{ cluster_certificate_authority_file }}"

Note that:

  • The above playbook assumes that a K3S cluster has already been created and that the relevant temporary files written to the directory specified by the variable “tempfiles_root_rel” are present.
  • The playbook also assumes that at least the K3S cluster master node is running.
    Information necessary to configure kubectl will be retrieved from the cluster master node during execution of the playbook.
  • The playbook uses tasks from the k3scluster role that will be developed later.
    These tasks obtain the IP address of the K3S master node and execute a command, kubectl, on the K3S master node.
  • The variable “k3s_cluster_user” specifies the name of the user that will be created in the kubectl configuration.
  • The variable “k3s_cluster” specifies the name of the cluster that will be created in the kubectl configuration.
  • The variable “k3s_context” specifies the name of the context that will be created in the kubectl configuration.
  • The absolute path to the directory containing the temporary files needs to be obtained.
    This is because paths to files containing information needed to configure kubectl have to be passed to kubectl, and such paths must be absolute, at least at the time of writing this article.
  • The playbook logs in to the K3S master node and executes the copy of kubectl installed on that node in order to obtain the information needed to configure the kubectl instance on the node on which the above playbook is executed.
    Information obtained from the K3S master node kubectl configuration includes:
    Client certificate, client key and certificate authority.
  • A cluster is deleted and created in the local kubectl configuration.
  • A user is deleted and created in the local kubectl configuration.
  • A context is deleted and created in the local kubectl configuration.
  • The context is made the current context in the local kubectl configuration; the result can be verified as shown in the example after this list.
  • Files on the Ansible controller node containing information from the K3S master kubectl configuration are deleted.
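
Once the playbook has finished, the new context should be active and the cluster reachable from the local machine. Assuming the default variable values in the playbook, a quick check could look like this:

kubectl config current-context   # should print k3s_context
kubectl get nodes                # should list the master and agent nodes with status Ready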

Ansible k3scluster Role

The empty Ansible role “k3scluster”, created when the project was set up, has its files located at the path k3s_cluster/roles/k3scluster relative to the root of the project. It contains the following directories:

  • defaults
    Default variable values.
  • meta
    Ansible role metadata.
  • molecule
    Ansible Molecule “test”. While not a test in the regular sense, the Molecule code will be developed so as to allow creating and tearing down a K3S cluster running in Multipass virtual machines using only two commands: molecule converge and molecule destroy.
  • tasks
    The tasks of the Ansible role – the most central part of this article if I dare say.
  • templates
    Will contain only a template from which the cloud-init configuration file used when creating the Multipass virtual machines is generated.

Ansible role metadata and documentation will be skipped in this article. The project on GitHub includes these items.

Defaults

The defaults located in the file k3s_cluster/roles/k3scluster/defaults/main.yml contain default values for variables of the role. These variables allow for customization of the K3S cluster created for instance by modifying the commands used to install K3S on a master or an agent node.

---
# K3S Cluster defaults
#
# Path to directory that will be used to write data related to the K3S cluster to.
# Data in this directory include:
# - Cloud-init configuration used when creating Multipass VMs.
# - Public and private key of user Ansible will use to log in to the Multipass VMs created for the cluster.
# - IP address of K3S master node.
# - K3S master node token.
# - K3S cluster certificate authority. Will be deleted after kubectl configuration.
# - K3S client certificate and key. Will be deleted after kubectl configuration.
#
# There must be no '.' in the tempfiles path!
# If the tempfiles_root variable is modified here, do also modify the value
# in the file kubectl-local-config.yml in the project root.
tempfiles_root: "~/k3scluster"
# Path to file to which IP address of the K3S cluster master node will be written.
k3s_master_ip_file: "{{ tempfiles_root }}/temp/k3s-master-ip.txt"
# Path to file in which the K3S master node token will be stored.
# This token is used during installation of K3S agent nodes.
k3s_master_token_file: "{{ tempfiles_root }}/temp/k3s-master-token.txt"

# Name of user for which a password-less certificate-based account will be created.
# This user is used to connect to the Multipass virtual machines during the creation of the K3S cluster.
vm_user: "vmadmin"
# Path to the above user's private key.
vm_user_ssh_private_key_file: "{{ tempfiles_root }}/multipass-certs/user_key"

# Disable strict host checking when connecting to Multipass VMs with Ansible.
ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

# Command that installs K3S master on a master node
k3s_install_master_cmd: "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -"
# Command that installs K3S agent on an agent node
k3s_install_agent_cmd: 'curl -sfL https://get.k3s.io | K3S_URL="https://{{ k3s_master_ip }}:6443" INSTALL_K3S_CHANNEL=latest K3S_TOKEN="{{ k3s_master_token }}" sh -'

# Default Ansible connection for the K3S cluster role is local connection.
# This is due to the fact that there are no IP addresses to the nodes in the cluster
# since the Multipass VMs are created by the role and the IP addresses discovered during
# execution of the role.
ansible_connection: local

The documentation of the variables is fairly good but some things to note are:

  • The tempfiles_root variable contains the path to a directory which will contain files written by the role during the creation of the K3S cluster.
    The directory will be created as needed and files in it with matching names will be overwritten. An example listing of the directory is shown after this list.
  • The variable ansible_ssh_common_args is assigned a value that disables host key checking when connecting to the Multipass VMs with Ansible.
    The reason for this is that a Multipass virtual machine may be created and assigned an IP address that was previously assigned to a virtual machine that has since been deleted. The key stored by SSH for the host will then mismatch and Ansible will not be allowed to connect to the virtual machine unless host key checking is disabled.
  • The variable k3s_install_master_cmd specifies the command used to install K3S on a master node.
    The Quick-Start Guide and Installation section of the K3S documentation contain further details on installation of K3S.
  • The variable k3s_install_agent_cmd specifies the command used to install K3S on an agent node.
    Further details on installation of K3S can be found in the sources mentioned above.
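
As an illustration, after a cluster has been created the temporary directory will contain roughly the following files. The client certificate, client key and cluster certificate authority files are only present temporarily while the kubectl configuration playbook runs.

~/k3scluster/
├── multipass-certs/
│   ├── user_key
│   └── user_key.pub
└── temp/
    ├── cloud-init.yaml
    ├── k3s-master-ip.txt
    └── k3s-master-token.txt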

Molecule

Molecule is commonly used to test Ansible roles and may indeed be used for this with the k3scluster role as well. Another way to use the Molecule scenario developed below is to create and start a K3S cluster using “molecule converge” and, later, destroy the cluster, including the virtual machines it runs on, using the “molecule destroy” command.

Converge

The converge playbook for the default Molecule scenario is located at k3s_cluster/roles/k3scluster/molecule/default/converge.yml and has the following contents:

---
- name: Converge
  hosts: all
  vars:
    ansible_connection: local
  tasks:
    - name: "Include k3scluster"
      include_role:
        name: "k3scluster"

As can be seen, the playbook just includes the k3scluster role, relying on the default configuration.

Destroy

The file k3s_cluster/roles/k3scluster/molecule/default/destroy.yml contains a playbook that cleans up after the default Molecule scenario by deleting all the virtual machines that make up the cluster:

---
- name: Cleanup after Molecule tests deleting all the Multipass VMs
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: local
  tasks:
    - name: Delete Multipass VM
      ansible.builtin.shell: |
        multipass info {{ ansible_host }} > /dev/null 2> /dev/null
        if [ $? -eq 0 ]; then
          multipass delete {{ ansible_host }}
        fi
    - name: Purge Multipass VMs
      ansible.builtin.shell: |
        multipass purge

Molecule Configuration

The Molecule configuration file for the default scenario is located at k3s_cluster/roles/k3scluster/molecule/default/molecule.yml:

---
dependency:
  name: galaxy
driver:
  name: delegated
platforms:
  - name: "k3s-master"
    groups:
      - master-nodes
  - name: "k3s-agent-01"
    groups:
      - agent-nodes
  - name: "k3s-agent-02"
    groups:
      - agent-nodes
provisioner:
  name: ansible
  inventory:
    host_vars:
      k3s-master:
        vm_config: "-m 3G --disk 5G 20.04"
      k3s-agent-01:
        vm_config: "-m 4G --disk 5G 20.04"
      k3s-agent-02:
        vm_config: "-m 5G --disk 5G 20.04"
scenario:
  name: default
  test_sequence:
    - destroy
    - create
    - prepare
    - converge
    # Idempotence has been removed.
    # - lint
    - verify
    - cleanup
    - destroy
verifier:
  name: ansible

Note that:

  • The scenario uses the delegated driver.
    Since the K3S cluster will be created in virtual machines created by the role itself, there is no need to create Docker containers or anything similar to contain the K3S cluster hosts.
  • The definition of the inventory seen earlier is split up into the platforms and provisioner sections.
    The platforms section defines the hosts and which group each host belongs to. The provisioner section contains an inventory section in which variables, vm_config in this case, can be set for each host.
  • Finally, the scenario section contains the definition of the test sequence.
    The idempotence and lint steps have been removed.

Prepare

The prepare playbook of the scenario located in the file k3s_cluster/roles/k3scluster/molecule/default/prepare.yml does nothing except for logging a message:

---
- name: Prepares for Molecule tests
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: local
  tasks:
  - name: Prepare
    ansible.builtin.debug:
      msg: "Prepare completed!"

Verify

The verify playbook of the scenario, located in k3s_cluster/roles/k3scluster/molecule/default/verify.yml, currently contains no verification of the outcome of executing the k3scluster role.

---
# This is an example playbook to execute Ansible tests.
- name: Verify
  hosts: all
  gather_facts: false
  tasks:
  - name: Example assertion
    ansible.builtin.assert:
      that: true

Implementing verification of the K3S cluster status is left as an exercise for the reader. One or more of the following tests are possible candidates; a minimal sketch covering the first two follows the list:

  • Ensure that all the Multipass virtual machines are running.
  • Ensure that all the Multipass virtual machines have been assigned an IP.
  • Execute “k3s -version” on all the nodes in the K3S cluster to verify that K3S has been installed.
  • Using the command “kubectl get nodes”, executed either on the K3S master node or on the Ansible control node, ensure that all the nodes of the cluster are present and have the status “Ready”.
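
As an example, a minimal sketch of a verify playbook covering the first two candidates could look as follows. It parses the same multipass info output that the role itself relies on; note that the 'state' field name is an assumption about the multipass info YAML format and should be checked against the Multipass version in use.

---
- name: Verify
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: local
  tasks:
    - name: Retrieve Multipass VM info
      ansible.builtin.shell: multipass info --format yaml {{ ansible_host }}
      register: vm_info_output
    - name: Assert that the VM is running and has been assigned an IPv4 address
      ansible.builtin.assert:
        that:
          # The 'state' field name assumes the multipass info YAML format.
          - (vm_info_output.stdout | from_yaml)[ansible_host][0]['state'] == 'Running'
          - (vm_info_output.stdout | from_yaml)[ansible_host][0]['ipv4'] | length > 0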

Tasks

The tasks are the central part of the role and have been quite interesting to develop, given that the virtual machines are created by the role and no IP addresses are available for the hosts until the virtual machines have been created and started. All the task files of the role are located in the directory k3s_cluster/roles/k3scluster/tasks/ relative to the root of the example project.

Main

The main.yml task file looks like this:

---
# Main task file for the K3S Cluster role

- name: Delete and create directories used by role
  ansible.builtin.include_tasks: create-directories.yml
  run_once: true
- name: Create SSH keys and cloudinit configuration file.
  ansible.builtin.include_tasks: create-keys-and-cloudinit.yml
  run_once: true
- name: Delete any existing Multipass VM and create a new one
  ansible.builtin.include_tasks: delete-and-create-new-vm.yml
  vars:
    vm_creation_parameters: "{{ vm_config }}"
- name: Retrieve the IP address of the newly created Multipass VM
  ansible.builtin.include_tasks: get-node-ip.yml
- name: Log VM IPs
  ansible.builtin.debug:
    msg: "VM {{ ansible_host }} has IP address: {{ vm_ip }}"
- name: Install K3S on a master node
  ansible.builtin.include_tasks: install-k3s-master.yml
  when: '"master" in ansible_host'
- name: Install K3S on agent nodes
  ansible.builtin.include_tasks: install-k3s-agent.yml
  when: '"agent" in ansible_host'

In words, what happens during the execution of the role is:

  • Directories in the temporary directory used by the role are deleted, if present, and re-created.
  • The SSH keys and a cloud-init file are created.
  • Any Multipass virtual machines that have the same names as the hosts in the inventory are deleted.
  • Multipass virtual machines are created and started.
  • The IP address of each host/virtual machine is retrieved.
  • K3S in “master-mode” is installed on the master host.
  • K3S in “agent-mode” is installed on the agent hosts.

Create Directories

As mentioned earlier, the tasks in the create-directories.yml file create directories in the temporary directory. The temporary directory is located on the computer on which the Ansible role is executed and can be configured by assigning a value to the variable tempfiles_root, as seen in the defaults file earlier.

---
# Deletes directories used by the role and all files in the directories
# and then re-creates the empty directories.
- name: Delete directories used by the role
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  with_items:
    - "{{ tempfiles_root }}/multipass-certs"
    - "{{ tempfiles_root }}/temp"
  delegate_to: localhost
- name: Create directories used by the role
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
  with_items:
    - "{{ tempfiles_root }}/multipass-certs"
    - "{{ tempfiles_root }}/temp"
  delegate_to: localhost

Create Keys and Cloud-init

The keys, one public and one private, and the cloud-init file will be used when creating the virtual machines, in order to have a user that can log into the virtual machines using only an SSH key, without having to supply a password. These tasks are located in the create-keys-and-cloudinit.yml file.

---
# Creates the VM admin keypair that will be used to log into the Multipass VMs
# and the cloud-init file used when creating the VMs.
- name: Create VM admin key-pair
  ansible.builtin.shell: ssh-keygen -C {{ vm_user }} -N "" -f {{ tempfiles_root }}/multipass-certs/user_key
  delegate_to: localhost
- name: Create cloud-init file inserting the public key
  ansible.builtin.template:
    src: templates/cloud-init-template.j2
    dest: "{{ tempfiles_root }}/temp/cloud-init.yaml"
  delegate_to: localhost
  vars:
    public_key: "{{lookup('file', '{{ tempfiles_root }}/multipass-certs/user_key.pub')}}"

Note that:

  • The ssh-keygen command is used to create the private and public keys for the VM user.
  • A cloud-init file is created by inserting the public key created earlier and the name of the user into a template; a sketch of such a template is shown below.
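
The actual template, cloud-init-template.j2, is part of the project on GitHub and is not listed here. A hypothetical minimal template serving the same purpose, creating the VM user with passwordless sudo and installing the generated public key, could look like this:

#cloud-config
# Hypothetical minimal cloud-init template; the real template is in the GitHub project.
users:
  - name: {{ vm_user }}
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - {{ public_key }}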

Delete and Create VM

The tasks in the delete-and-create-new-vm.yml file delete any existing Multipass virtual machine for each host and create a new one, using the cloud-init file created earlier.

---
# Deletes any existing Multipass VM with the supplied name, then creates
# a new VM with the supplied name using the supplied parameters.
# Prerequisites:
# A SSH keypair that will be used as one option to log into the new VM must exist.
# A cloud-init configuration file named "cloud-init.yaml" that will be used
# when creating the new VM must exist.
# Parameters:
# ansible_host - Name of the VM that is to be created.
# vm_creation_parameters - Parameters that will be used when creating the new VM
# with Multipass.
- name: Delete any existing VM
  ansible.builtin.shell: |
    multipass info {{ ansible_host }} > /dev/null 2> /dev/null
    if [ $? -eq 0 ]; then
      multipass delete {{ ansible_host }}
      multipass purge
    fi
- name: Create new Multipass VM
  ansible.builtin.shell: |
    multipass launch \
    --name {{ ansible_host }} \
    --cloud-init {{ tempfiles_root }}/temp/cloud-init.yaml \
    {{ vm_creation_parameters }}

Note that:

  • The above Ansible tasks rely on the Multipass CLI client to manage Multipass virtual machines using shell commands.
  • When deleting a virtual machine, all virtual machines that are candidates for purging will be purged in the process.

Get Node IP Address

Once a Multipass virtual machine has been created and started, it is assigned an IP address which can be used to communicate with the virtual machine and any applications running in it. The following tasks retrieve the first IPv4 address of a Multipass virtual machine with a specified name.

---
# Retrieves the IP address of the Multipass VM with the supplied name.
#
# Parameters:
# ansible_host - Name of the VM which IP is to be retrieved
# Output:
# vm_ip - First IPv4 of Multipass VM.
- name: Retrieve Multipass VM info
  ansible.builtin.shell: |
    multipass info --format yaml {{ ansible_host }}
  register: vm_info_output
- name: Extract Multipass VM info YAML from task output
  ansible.builtin.set_fact:
    vm_info: "{{ vm_info_output.stdout | from_yaml }}"
- name: Extract first Multipass VM IPv4
  ansible.builtin.set_fact:
    vm_ip: "{{ vm_info[ansible_host][0]['ipv4'][0] }}"

Again, the above Ansible tasks rely on the Multipass CLI client to retrieve information about a virtual machine, including its IP address. The IP address is stored in the vm_ip variable which, since standard Ansible mechanisms for hosts in an inventory are relied upon, will be host-local for each host for which the tasks are executed.
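
For reference, the YAML produced by the multipass info command has roughly the shape shown below (abbreviated, and the exact fields vary between Multipass versions), which is why the first IPv4 address is extracted with vm_info[ansible_host][0]['ipv4'][0].

$ multipass info --format yaml k3s-master
k3s-master:
- ipv4:
  - 192.168.1.52
  release: Ubuntu 20.04.6 LTS
  state: Running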

Execute Command and Log

Before looking at the K3S installation on the master and agent nodes, we’ll first have a look at the tasks that execute a shell command on a host, which in this case will always be a Multipass virtual machine, with a specified IP address. These tasks are the key to avoiding a dynamic inventory plugin.

---
# Executes the supplied command in the Multipass VM with the supplied IP connecting
# to the VM using the supplied user and the supplied private key file.
# The output from the command is logged with the supplied message prefix.
#
# Parameters:
# vm_ip - IP address of the Multipass VM on which to execute the command.
# vm_command - Command to execute in the Multipass VM.
# log_msg_prefix - Message to log before the result of the command output.
# vm_user - User in the Multipass VM used to SSH to the VM with.
# vm_user_ssh_private_key_file - File containing above user's private key used to SSH to the VM with.
- name: Execute command on K3S node
  ansible.builtin.shell: "{{ vm_command }}"
  register: cmd_output
  delegate_to: "{{ vm_ip }}"
  vars:
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Log message prefix and the output from the command.
  ansible.builtin.debug:
    msg: "{{ log_msg_prefix }}: {{ cmd_output.stdout }}"

Note that:

  • There are three parameters that are essential in order to be able to connect to the Multipass virtual machine in question.
    The parameters are vm_user, vm_user_ssh_private_key_file and vm_ip. The user is the user for which the keys were created, which is also the user specified in the cloud-init file used when the virtual machine was created. The private key file is that user’s private key.
  • When executing the shell command specified by vm_command, the task is delegated to the address contained in the vm_ip variable.
    In a normal Ansible playbook, the ansible_host variable contains an address, or a DNS name that resolves to an address, of the host on which the task is to be performed. In this case ansible_host contains the name of the virtual machine, which does not resolve to any address, so the address of the host, the virtual machine, must be supplied using delegate_to.
  • The ansible_user, ansible_ssh_private_key_file and ansible_connection variables are set on a single task, the task that executes the shell command.
    To my surprise it is indeed possible to set these variables, which affect how Ansible connects to the host, on a per-task basis. A minimal illustration of the pattern follows after this list.
  • The ansible_connection variable is set to ssh.
    Recall that in the defaults of the role the ansible_connection variable was set to local. The variable must therefore be explicitly set to ssh for tasks that are to be performed on a host to which Ansible connects using SSH.
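
To isolate the pattern from the role, a minimal play demonstrating the same per-task connection switch could look like this; the vm_ip, vm_user and vm_user_ssh_private_key_file variables are assumed to be set as in the role:

---
- name: Per-task connection example
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: local
  tasks:
    - name: Runs on the Ansible controller
      ansible.builtin.command: hostname
    - name: Runs in the Multipass VM over SSH
      ansible.builtin.command: hostname
      delegate_to: "{{ vm_ip }}"
      vars:
        ansible_connection: ssh
        ansible_user: "{{ vm_user }}"
        ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"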

Install K3S Master Node

Next is the installation of K3S on the master node. The node selection is conditional, as seen earlier in the main task file, and the tasks below will only be applied to the node whose name contains “master”. Actually, there is nothing stopping you from having multiple master nodes in the inventory, but those master nodes will not be aware of each other and the agents will only register with one of them.

---
# Installs K3S on a master node with the supplied IP address.
# Stores the IP address and the access token of the K3S master node in dedicated files
# as to make them available when installing K3S agent(s) on other host(s).
#
# Parameters:
# vm_ip - IP address of Multipass VM on which to install the K3S master.
# vm_user - User in the Multipass VM used to SSH to the VM with.
# vm_user_ssh_private_key_file - File containing above user's private key used to SSH to the VM with.
# Output:
# k3s_master_token - Token to be used when installing K3S agents to be managed by master.
# k3s_master_ip - IP address of K3S master node.
- name: Delete any existing file containing K3S master node IP address.
  ansible.builtin.file:
    path: "{{ k3s_master_ip_file }}"
    state: absent
  vars:
    ansible_connection: local
- name: Delete any existing file containing K3S master node token.
  ansible.builtin.file:
    path: "{{ k3s_master_token_file }}"
    state: absent
  vars:
    ansible_connection: local
- name: Install K3S on master node
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: "{{ k3s_install_master_cmd }}"
    log_msg_prefix: "K3S master installation log"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Find K3S version
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: k3s -version
    log_msg_prefix: "K3S master version"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Check K3S master configuration
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: k3s check-config
    log_msg_prefix: "K3S master check-config result"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Retrieve the K3S master node token
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: "sudo cat /var/lib/rancher/k3s/server/node-token"
    log_msg_prefix: "K3S master token"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Save K3S master node token
  ansible.builtin.set_fact:
    k3s_master_token: "{{ cmd_output.stdout }}"
- name: Log K3S master node token
  ansible.builtin.debug:
    msg: "K3S master node token: {{ k3s_master_token }}"
- name: Save K3S master node IP
  ansible.builtin.set_fact:
    k3s_master_ip: "{{ vm_ip }}"
# The K3S master node IP address is stored in a file in order to be made available
# when installing K3S on the agent nodes.
- name: Write K3S master node IP to file to make it globally available
  ansible.builtin.copy:
    content: "{{ k3s_master_ip }}"
    dest: "{{ k3s_master_ip_file }}"
  vars:
    ansible_connection: local
- name: Write K3S master node token to file to make it globally available
  ansible.builtin.copy:
    content: "{{ k3s_master_token }}"
    dest: "{{ k3s_master_token_file }}"
  vars:
    ansible_connection: local

Note that:

  • The K3S master is installed using a command in the variable k3s_install_master_cmd.
  • Having installed K3S on the master node, the K3S version is retrieved and logged.
  • The configuration of the K3S master node is checked using the “k3s check-config” command and the result logged.
  • The K3S master node IP address and the K3S master token are written to files.
    This is a substitute for global variables in order to make the information available to other hosts, that is when installing K3S agents that need to register with the K3S master.

Install K3S Agent Node

Installing K3S on an agent node (virtual machine) is similar to installing K3S on the master node, with the difference that the IP address of the K3S master node and the master node token are read from files and passed as parameters to the K3S installation command. As with the master, the node selection is conditional and the tasks below will only be applied to nodes whose names contain “agent”.

---
# Installs K3S on an agent node with the supplied IP address.
# A file containing the K3S master node IP address must exist and contain the
# valid IP address of a running K3S master node.
#
# Parameters:
# vm_ip - IP address of Multipass VM on which to install the K3S agent.
# vm_user - User in the Multipass VM used to SSH to the VM with.
# vm_user_ssh_private_key_file - File containing above user's private key used to SSH to the VM with.
# k3s_master_token - Token of K3S master that is to manage the agent.
- name: Read K3S master node IP address from file
  ansible.builtin.set_fact:
    k3s_master_ip: "{{ lookup('file', k3s_master_ip_file) }}"
  vars:
    ansible_connection: local
- name: Read K3S master node token from file
  ansible.builtin.set_fact:
    k3s_master_token: "{{ lookup('file', k3s_master_token_file) }}"
  vars:
    ansible_connection: local
- name: Install K3S on agent node and log result
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: "{{ k3s_install_agent_cmd }}"
    log_msg_prefix: "K3S agent installation log"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Find and log K3S version
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: k3s -version
    log_msg_prefix: "K3S agent version"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh
- name: Check K3S agent configuration
  ansible.builtin.include_tasks: execute-cmd-and-log.yml
  vars:
    vm_command: k3s check-config
    log_msg_prefix: "K3S agent check-config result"
    ansible_user: "{{ vm_user }}"
    ansible_ssh_private_key_file: "{{ vm_user_ssh_private_key_file }}"
    ansible_connection: ssh

Note that:

  • The master node IP address is read from a file.
    The file was written, as earlier, during the installation of K3S on the master node.
  • The K3S master node token is read from a file.
    As with the IP address, the master node token was also written to a file during the installation of the K3S on the master node.
  • The K3S agent is installed using the command in the variable k3s_install_agent_cmd.
    If customized, this command must still reference the k3s_master_ip and k3s_master_token variables in order for this information to be conveyed to the K3S agent installation.
  • The K3S version is retrieved and logged.
  • The configuration of the K3S agent node is checked using the “k3s check-config” command and the result logged.

Create a K3S Cluster with Molecule

With the k3scluster Ansible role completed, we can now create a K3S cluster using Molecule. Recall that the inventory defining the nodes of the K3S cluster is, in this case, part of the file k3s_cluster/roles/k3scluster/molecule/default/molecule.yml.
Before running Molecule on the role, open a terminal window and set the current directory, relative to the project root directory, as below:

cd k3s_cluster/roles/k3scluster/

Create the Cluster

The K3S cluster is created using:

molecule converge

After the execution of the role has completed, the Multipass virtual machines of the cluster can be listed using the multipass command:

$ multipass list
Name                    State             IPv4             Image
k3s-agent-01            Running           192.168.1.186    Ubuntu 20.04 LTS
k3s-agent-02            Running           192.168.1.104    Ubuntu 20.04 LTS
k3s-master              Running           192.168.1.52     Ubuntu 20.04 LTS

The K3S cluster can be verified using the following steps:

  • SSH into the master node.
multipass shell k3s-master
  • Examine the nodes of the K3S cluster using kubectl:
$ sudo kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k3s-master     Ready    control-plane,master   6m3s    v1.26.3+k3s1
k3s-agent-01   Ready    <none>                 5m48s   v1.26.3+k3s1
k3s-agent-02   Ready    <none>                 5m47s   v1.26.3+k3s1
  • Examine K3S cluster-information using kubectl:
$ sudo kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  • Obtain even more detailed cluster-information using kubectl.
    Output from the command has been omitted due to its length.
sudo kubectl cluster-info dump
  • Exit the k3s-master virtual machine.
exit

Tear Down the Cluster

The K3S cluster can be torn down using the following command:

molecule destroy --all

The “--all” flag ensures that all instances from all scenarios are destroyed and has proven necessary to me at times.

Create a K3S Cluster with the Ansible Playbook

The project also includes an Ansible playbook, the example playbook seen earlier, and an inventory file, the K3S cluster inventory also seen earlier, which use the k3scluster role to create a K3S cluster.

Create the Cluster

The example playbook is located in the root of the project. Thus make sure that there is a terminal window open and that the current directory is indeed the root of the project.

  • Install the k3s_cluster Ansible collection locally. The installation is forced in case the collection has already been installed. This ensures that the most current version of the collection will be used.
ansible-galaxy collection install k3s_cluster --force
  • Run the example playbook to create the K3S cluster:
ansible-playbook -i inventory.yml k3s-cluster-create.yml
  • Update the local kubectl installation so that it can manage the K3S cluster running in the virtual machines:
ansible-playbook -i inventory.yml kubectl-local-config.yml
  • Examine the nodes of the K3S cluster using kubectl.
    Note that since the local kubectl installation has been configured, there is no need to SSH into the K3S master virtual machine.
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k3s-agent-02   Ready    <none>                 113s   v1.26.3+k3s1
k3s-agent-01   Ready    <none>                 112s   v1.26.3+k3s1
k3s-master     Ready    control-plane,master   2m7s   v1.26.3+k3s1
  • Examine K3S cluster-information using kubectl:
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.199:6443
CoreDNS is running at https://192.168.1.199:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.1.199:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
  • Obtain even more detailed cluster-information using kubectl.
    Output from the command has been omitted due to its length.
kubectl cluster-info dump

Tear Down the Cluster

In the root of the project, create a file named “k3s-cluster-teardown.yml” with the following contents:

---
- name: K3S Cluster teardown
  hosts: all
  gather_facts: false
  vars:
    ansible_connection: local
  tasks:
    - name: Delete Multipass VM
      ansible.builtin.shell: |
        multipass info {{ ansible_host }} > /dev/null 2> /dev/null
        if [ $? -eq 0 ]; then
          multipass delete {{ ansible_host }}
        fi
    - name: Purge Multipass VMs
      ansible.builtin.shell: multipass purge

The above playbook is almost identical to the Molecule destroy playbook seen earlier in this article.
The K3S cluster can now be torn down by executing the following command from the root of the project:

ansible-playbook -i inventory.yml k3s-cluster-teardown.yml

Customization

The k3scluster role is built to be flexible, at least to some extent. Customizing, for example, the K3S installation on the master node is as simple as adding a variable to the playbook that creates the cluster. Variable definitions may also be added in the inventory, for example if the customization is to be applied to one single host; an example of this is shown at the end of this section.

---
- name: K3S Cluster
  hosts: all
  gather_facts: false
  vars:
    k3s_install_master_cmd: "curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET INSTALL_K3S_CHANNEL=latest sh -s - server --cluster-init"
  tasks:
    - name: Create the K3S cluster
      ansible.builtin.include_role:
        name: ivankrizsan.k3s_cluster.k3scluster

In the above playbook the command that installs K3S on a master node is modified so as to install K3S with an embedded etcd, as described here.
The cluster can then be created and the local kubectl installation configured as described earlier:

ansible-playbook -i inventory.yml k3s-cluster-create.yml
ansible-playbook -i inventory.yml kubectl-local-config.yml

After the above playbooks have finished executing, the nodes of the cluster can be examined using kubectl:

$ kubectl get nodes
NAME           STATUS   ROLES                       AGE   VERSION
k3s-agent-01   Ready    <none>                      27s   v1.26.3+k3s1
k3s-agent-02   Ready    <none>                      26s   v1.26.3+k3s1
k3s-master     Ready    control-plane,etcd,master   43s   v1.26.3+k3s1

Note that “etcd” now appears in the ROLES column for the k3s-master node in the output from kubectl.
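
If a customization should only apply to a single host, the corresponding variable can instead be set on that host in inventory.yml. As a hypothetical example, not taken from the project, the agent installation command could be overridden for one agent node to add a node label, which the K3S installation script passes on to the agent:

    agent-nodes:
      hosts:
        k3s-agent-01:
          vm_config: "-m 4G --disk 5G 20.04"
          # Hypothetical per-host override: label this agent node during installation.
          k3s_install_agent_cmd: 'curl -sfL https://get.k3s.io | K3S_URL="https://{{ k3s_master_ip }}:6443" INSTALL_K3S_CHANNEL=latest K3S_TOKEN="{{ k3s_master_token }}" sh -s - --node-label workload=experiments'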

Final Words

I now have a way to quickly create and tear down a small K3S cluster running in virtual machines and will, from now on, be able to spend more time on learning. In addition, I now have a good base for further development of my K3S cluster.

Happy coding!
