In this article I will install OpenEBS on my K3S Kubernetes cluster and present a simple example showing how to allocate storage for an application running in the cluster.
I will take a slight detour into kubectl user contexts and, later, I will use a Persistent Volume and a Persistent Volume Claim to make storage available and to allocate storage for a pod.
Again, this is another of the Kubernetes-related notebook entries in my developer diary and will not provide any ground-breaking information – OpenEBS will be installed with all the default settings and the example will use local directories in the virtual machines as storage.
Container Attached Storage
Instead of attempting to introduce container attached storage myself, I will just provide this link to a very good article written by an expert on the subject matter.
Some use cases for persistent storage of applications/services running in Kubernetes are listed in the OpenEBS documentation.
Prerequisites
There are two prerequisites that need to be met in order to be able to install and use OpenEBS. The first is a Kubernetes cluster, in my case K3S, and the second is the iSCSI client.
K3S Cluster
The K3S cluster I will use for this example is a three-node Kubernetes cluster running on virtual machines, prepared as described in an earlier article.
iSCSI Client
The following information is available in the OpenEBS documentation but I have chosen to extract what is relevant for my setup below. If your cluster is not Ubuntu 18.04 LTS virtual machines running in Multipass, please refer to this page in the OpenEBS documentation that contains instructions on how to verify the presence of, and if necessary install or activate, the iSCSI client on a number of other platforms.
A prerequisite for OpenEBS is the iSCSI client running on all nodes in the cluster. In Ubuntu 18.04 LTS, as obtained when creating a new virtual machine with Multipass, the iSCSI client is not enabled by default, so I needed to enable it on each of the nodes in my cluster.
Repeat the following procedure for each node:
- Start the Multipass virtual machine for the node in question.
Example: multipass start k3s-master
- Open a shell connecting to the Multipass virtual machine.
Example: multipass shell k3s-master
- Examine the status of the iSCSI client:
systemctl status iscsid
In my virtual machines the iSCSI client is disabled:
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:iscsid(8)
- Enable and start the iSCSI client:
sudo systemctl enable --now iscsid
- Re-examine the status of the iSCSI client:
systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2020-06-27 21:20:23 CEST; 2s ago
Docs: man:iscsid(8)
Process: 1508 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
Process: 1495 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
Main PID: 1510 (iscsid)
Tasks: 2 (limit: 2362)
CGroup: /system.slice/iscsid.service
├─1509 /sbin/iscsid
└─1510 /sbin/iscsid
Jun 27 21:20:23 test systemd[1]: Starting iSCSI initiator daemon (iscsid)...
Jun 27 21:20:23 test iscsid[1508]: iSCSI logger with pid=1509 started!
Jun 27 21:20:23 test systemd[1]: iscsid.service: Failed to parse PID from file /run/iscsid.pid: Invalid argument
Jun 27 21:20:23 test iscsid[1509]: iSCSI daemon with pid=1510 started!
Jun 27 21:20:23 test systemd[1]: Started iSCSI initiator daemon (iscsid).
The iSCSI client is now started for the virtual machine in question and will be started automatically when the virtual machine is rebooted.
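As a convenience, the whole procedure can also be scripted from the host computer using multipass exec instead of opening a shell in each virtual machine. A minimal sketch, assuming the node names k3s-master, k3s-agent01 and k3s-agent02 used in this article:
for NODE in k3s-master k3s-agent01 k3s-agent02; do
    multipass exec "$NODE" -- sudo systemctl enable --now iscsid
done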
OpenEBS
Before installing OpenEBS, I am going to digress slightly and take a look at kubectl contexts.
Administrator User Context
A kubectl user context associates a user with a Kubernetes cluster in order to be able to conveniently manage the cluster. The kubectl configuration file can contain multiple clusters, users and contexts. Using kubectl config use-context, it is possible to quickly switch between different contexts.
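To see all the contexts in the kubectl configuration file, and which one is currently active (marked with an asterisk), the following command can be used:
kubectl config get-contexts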
The OpenEBS installation instructions state that a cluster-administrator context is required. With my K3S cluster, which is only for educational purposes and not for production use, the default user is an administrator of the default cluster so strictly speaking there is no need for (another) cluster-administrator context in order to install OpenEBS. Readers that do not want to set up an administrator context may skip the remainder of this section.
To examine the current state, I use:
kubectl config view
My kubectl configuration looks like this:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.64.7:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 5d4fc5cc4168318fb63a24a160d63a05
    username: admin
Note that:
- The name of the cluster is “default”.
- The URL of the master-node of the “default” cluster is https://192.168.64.7:6443.
- There is one context named “default” for the “default” cluster.
- There is a user named “default” in the “default” context.
- The current context is the “default” context.
- The “default” user has the user name “admin” and the password “5d4fc5cc4168318fb63a24a160d63a05”.
Since I know that the default user is an administrator, I will not add another user, but I will just create a new context:
kubectl config set-context admin-ctx --cluster=default --user=default
The above creates a context named “admin-ctx” for the “default” cluster and adds the “default” user to the new context.
If I now view the configuration again, after having created the new context, using kubectl config view, it looks like this:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.64.7:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: admin-ctx
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 5d4fc5cc4168318fb63a24a160d63a05
    username: admin
Note that:
- Creating the new “admin-ctx” context caused the addition of a second context entry, named “admin-ctx”, in the above kubectl configuration.
Finally, I make the new “admin-ctx” the current context:
kubectl config use-context admin-ctx
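To verify that the switch took effect, the current context can be displayed:
kubectl config current-context
The expected output is “admin-ctx”.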
Install OpenEBS
For those readers that have Helm installed, there is an OpenEBS Helm chart available. I am going to install OpenEBS by applying a YAML manifest with kubectl, using all the default settings:
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
As a result, the following is output to the console:
namespace/openebs created
serviceaccount/openebs-maya-operator created
clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created
clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created
deployment.apps/maya-apiserver created
service/maya-apiserver-service created
deployment.apps/openebs-provisioner created
deployment.apps/openebs-snapshot-operator created
configmap/openebs-ndm-config created
daemonset.apps/openebs-ndm created
deployment.apps/openebs-ndm-operator created
deployment.apps/openebs-admission-server created
deployment.apps/openebs-localpv-provisioner created
Note that a namespace named “openebs” is created. OpenEBS will be installed in this namespace.
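The presence of the new namespace can be verified with:
kubectl get namespace openebs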
Verify the OpenEBS Installation
To verify that OpenEBS has been properly installed, I start with examining the pods in the OpenEBS namespace:
kubectl get pods -n openebs
It will take a little time for OpenEBS to become ready and some of the pods will initially fail and be restarted, so have patience.
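Instead of repeatedly issuing the above command, the pods can be watched continuously. Interrupt with Ctrl-C once all the pods have reached the Running status:
kubectl get pods -n openebs --watch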
With OpenEBS properly up and running, the status of the pods should look like this:
NAME READY STATUS RESTARTS AGE
openebs-localpv-provisioner-695ffd78d6-m2z55 1/1 Running 0 6m51s
openebs-ndm-5pd4r 1/1 Running 0 6m51s
openebs-ndm-8858b 1/1 Running 0 6m51s
openebs-snapshot-operator-cf5cc6c54-pgp9n 2/2 Running 0 6m51s
openebs-provisioner-64c9565ccb-9b6bb 1/1 Running 0 6m52s
openebs-admission-server-766f5d7c48-tf5jw 1/1 Running 0 6m51s
openebs-ndm-hpbml 1/1 Running 0 6m51s
openebs-ndm-operator-58ccd48f9d-6cbc9 1/1 Running 1 6m51s
maya-apiserver-5d87746c75-9lcmm 1/1 Running 2 6m52s
The second verification step is to examine the storage classes:
kubectl get storageclasses
In my K3S cluster I see the following storage classes after having installed OpenEBS:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 5d16h
openebs-jiva-default openebs.io/provisioner-iscsi Delete Immediate false 19m
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter Delete Immediate false 19m
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 19m
openebs-device openebs.io/local Delete WaitForFirstConsumer false 19m
There is one storage class, local-path, that was present before the installation. OpenEBS has added four storage classes:
- openebs-jiva-default
- openebs-snapshot-promoter
- openebs-hostpath
- openebs-device
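Details about any of these storage classes, for instance the provisioner and the reclaim policy, can be examined using kubectl describe. Example:
kubectl describe storageclass openebs-hostpath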
With the OpenEBS pods up and running and the above storage classes present, the OpenEBS installation can be considered successful.
OpenEBS Version
Having verified the installation, let’s check which version of OpenEBS is running.
First, list the OpenEBS pods:
kubectl get pods -n openebs
Select the name of a pod. In my case, I selected the pod named “openebs-ndm-8858b”.
Describe the pod and search for the string “version” using the following command:
kubectl describe pod openebs-ndm-8858b -n openebs | grep version
The resulting output displays the OpenEBS version. In my example, the following is output to the console:
openebs.io/version=1.11.0
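Alternatively, since the version is stored in the openebs.io/version label, it can be displayed as an extra column for all the OpenEBS pods in a single command:
kubectl get pods -n openebs -L openebs.io/version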
This concludes the OpenEBS installation and I am now ready for an example in which I will try to allocate some storage for an application running in my cluster.
Allocating Storage Example
In this example I will use OpenEBS to allocate storage for an application that is to run in my K3S cluster. I will create a Kubernetes persistent volume using the openebs-hostpath storage class. It does not matter if you do not understand these terms at this point.
Multipass Shared Directories
In order not to use disk space in the virtual machines, and to be able to quickly examine any files written by the application running in a pod, I create three shared directories, one for each virtual machine.
- Create the directories on the host computer.
I have chosen to locate the shared directories in the host in a directory named “multipass_storage” in my home directory. If you relocate the “multipass_storage” directory then please remember to change the commands in this and the next step.
mkdir -p ~/multipass_storage/k3s-agent01
mkdir -p ~/multipass_storage/k3s-agent02
mkdir -p ~/multipass_storage/k3s-master
- Tell Multipass to mount the directories created in the previous step in each of my virtual machines at the path /data/openebs.
This can be done either before the virtual machines have been started, in which case the directories will be mounted after the next start, or while the virtual machines are running. In the latter case, there is no need to restart the virtual machines for the change to take effect.
multipass mount ~/multipass_storage/k3s-agent01 k3s-agent01:/data/openebs
multipass mount ~/multipass_storage/k3s-agent02 k3s-agent02:/data/openebs
multipass mount ~/multipass_storage/k3s-master k3s-master:/data/openebs
Later (not now!), when finished with this example, the shared directories can be unmounted from the virtual machines using the following commands:
multipass unmount k3s-agent01:/data/openebs
multipass unmount k3s-agent02:/data/openebs
multipass unmount k3s-master:/data/openebs
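Whether the shared directories are currently mounted in a virtual machine can be checked with multipass info, which lists the mounts configured for the virtual machine. Example:
multipass info k3s-master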
Pod
I will start by creating the pod, which requires some storage to write to.
Create a file named “greeting-generator-pod.yaml” with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: greeting-files-generator
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: openebs-pvc
  containers:
  - name: greeting-files-generator
    image: busybox
    command:
    - sh
    - -c
    - 'while true; do echo "Time is now: `date` on host: [`hostname`]" >> /data/openebs/$(date "+%Y-%m-%d-%H:%M:%S").txt; sleep $(($RANDOM % 5 + 300)); done'
    volumeMounts:
    - mountPath: /data/openebs
      name: local-storage
Note that:
- The name of the pod created will be “greeting-files-generator”.
- The volume named “local-storage” is to be backed by a persistent volume claim named “openebs-pvc”.
- The pod contains one container created from the “busybox” image.
- The busybox container runs a script that, at regular intervals, writes a new file.
- Under volumeMounts, the volume “local-storage” is declared to be mounted at the path “/data/openebs”.
This is the volume seen earlier to be backed by the persistent volume claim “openebs-pvc”.
Create the pod using the command:
kubectl apply -f greeting-generator-pod.yaml
Examine the pod using the following command:
kubectl get pods
There should be output similar to the following on the console:
NAME READY STATUS RESTARTS AGE
greeting-files-generator 0/1 Pending 0 15m
Note that the status of the pod is pending.
Examining the pod in more detail with the following command:
kubectl describe pod greeting-files-generator
should produce more detailed information about the pod, similar to the following:
Name: greeting-files-generator
Namespace: default
Priority: 0
Node: <none>
Labels: <none>
Annotations:
Status: Pending
IP:
IPs: <none>
Containers:
greeting-files-generator:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sh
-c
while true; do echo "Time is now: `date` on host: [`hostname`]" >> /data/openebs/$(date "+%Y-%m-%d-%H:%M:%S").txt; sleep $(($RANDOM % 5 + 300)); done
Environment: <none>
Mounts:
/data/openebs from local-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-92zbs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
local-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: openebs-pvc
ReadOnly: false
default-token-92zbs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-92zbs
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "openebs-pvc" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "openebs-pvc" not found
In the last section, listing events, we can see that the reason for the pod being pending is that the persistent volume claim “openebs-pvc” cannot be found. Since the persistent volume claim is not available, the pod cannot be started, and it will remain in the pending state until the volume claim can be found or the pod is deleted.
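The absence of the persistent volume claim can also be confirmed directly. At this point, the following command should report that no resources were found in the default namespace:
kubectl get persistentvolumeclaims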
Persistent Volume
Before being able to create a persistent volume claim there must be storage available in the Kubernetes cluster. This is accomplished by creating a Persistent Volume.
Create a file named “persistent-volume.yaml” with the following contents:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openebs-pv-volume
  labels:
    type: local
spec:
  storageClassName: openebs-hostpath
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/data/openebs"
Note that:
- The name of the persistent volume is “openebs-pv-volume”.
- The storage class is “openebs-hostpath”.
This storage class allocates storage in a directory on the node(s) on which data is to be persisted. Since nothing specifies particular nodes, storage may be allocated on any node in the cluster.
- The maximum capacity of the storage on one node is 10Gi.
- The access mode of the storage allows for reading and writing by multiple consumers.
The access mode can be ReadWriteOnce, ReadOnlyMany or ReadWriteMany. Different types of storage resources allow for different access modes – please refer to the Kubernetes documentation on access modes for details.
- The path of the directory on the nodes in which data will be stored is “/data/openebs”.
The astute reader will recognize this path from the Multipass shared directories configured earlier.
There is more to persistent volumes, but the above will suffice for this simple example.
Apply the persistent volume declaration using the following command:
kubectl apply -f persistent-volume.yaml
Verify the creation of the persistent volume using the following command:
kubectl get persistentvolumes
The result should be similar to this:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
openebs-pv-volume 10Gi RWX Retain Available openebs-hostpath 17m
Persistent Volume Claim
A persistent volume claim is an allocation of storage for, in the case of this example, a pod. Recall that the persistent volume claim name used when creating the pod was “openebs-pvc”.
Create a file named “persistent-volume-claim.yaml” with the following contents:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openebs-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Note that:
- The name of the persistent volume claim is “openebs-pvc”.
This name matches that of the persistent volume claim in the pod declaration.
- The storage class name is “openebs-hostpath”.
The storage class matches that used in the persistent volume declaration.
- The access mode is ReadWriteMany.
As in the notes about the persistent volume earlier, this allows for reading from and writing to the storage by multiple consumers.
- The size of the requested storage is 2Gi.
Apply the persistent volume claim using kubectl:
kubectl apply -f persistent-volume-claim.yaml
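Before returning to the pod, verify that the persistent volume claim has been bound to the persistent volume created earlier. The STATUS column should show Bound and the VOLUME column should show openebs-pv-volume:
kubectl get persistentvolumeclaims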
Examine the greeting-files-generator pod created earlier again:
kubectl describe pod greeting-files-generator
The pod information should now look something like this:
Name: greeting-files-generator
Namespace: default
Priority: 0
Node: k3s-agent01/192.168.64.8
Start Time: Mon, 06 Jul 2020 21:02:45 +0200
Labels: <none>
Annotations:
Status: Running
IP: 10.42.1.24
IPs:
IP: 10.42.1.24
Containers:
greeting-files-generator:
Container ID: containerd://18acb65b986e5ef9977a2d83ff55d6d622a542777cca3922f831e936876ef938
Image: busybox
Image ID: docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
Port: <none>
Host Port: <none>
Command:
sh
-c
while true; do echo "Time is now: `date` on host: [`hostname`]" >> /data/openebs/$(date "+%Y-%m-%d-%H:%M:%S").txt; sleep $(($RANDOM % 5 + 300)); done
State: Running
Started: Mon, 06 Jul 2020 21:02:49 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data/openebs from local-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-92zbs (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
local-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: openebs-pvc
ReadOnly: false
default-token-92zbs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-92zbs
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "openebs-pvc" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "openebs-pvc" not found
Warning FailedScheduling <unknown> default-scheduler persistentvolumeclaim "openebs-pvc" not found
Normal Scheduled <unknown> default-scheduler Successfully assigned default/greeting-files-generator to k3s-agent01
Normal Pulling 103s kubelet, k3s-agent01 Pulling image "busybox"
Normal Pulled 100s kubelet, k3s-agent01 Successfully pulled image "busybox"
Normal Created 100s kubelet, k3s-agent01 Created container greeting-files-generator
Normal Started 100s kubelet, k3s-agent01 Started container greeting-files-generator
Finale
With the greeting-files-generator pod up and running, examine the Multipass shared directories that were set up in preparation for the example. There should be files appearing in the directory that corresponds to the node on which the pod is running.

[Figure: Files created by the greeting-files-generator pod appearing in the shared directory.]
In my case, as can be seen in the above figure, the greeting-files-generator pod is running on the k3s-agent01 node.
This shows that the pod is writing to the allocated hostpath storage.
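Since, in my case, the pod runs on the k3s-agent01 node, the generated files can also be listed directly on the host computer:
ls -l ~/multipass_storage/k3s-agent01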
If you do not see any files in the Multipass shared directories, try the following kubectl command to list the files in the /data/openebs directory on the node on which the pod is running:
kubectl exec greeting-files-generator -- ls -al /data/openebs
You should see a list similar to this:
drwxr-xr-x 1 1000 1000 510 Jul 9 19:52 .
drwxr-xr-x 3 root root 4096 Jul 9 19:02 ..
-rw-r--r-- 1 1000 1000 78 Jul 6 19:27 2020-07-06-19:27:07.txt
-rw-r--r-- 1 1000 1000 78 Jul 6 19:32 2020-07-06-19:32:09.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:02 2020-07-09-19:02:43.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:07 2020-07-09-19:07:43.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:12 2020-07-09-19:12:43.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:17 2020-07-09-19:17:43.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:22 2020-07-09-19:22:43.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:27 2020-07-09-19:27:44.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:32 2020-07-09-19:32:45.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:37 2020-07-09-19:37:45.txt
-rw-r--r-- 1 1000 1000 78 Jul 9 19:42 2020-07-09-19:42:46.txt
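The contents of the generated files can be displayed in a similar way. Note that the wildcard is expanded by the shell inside the container, not on the local computer:
kubectl exec greeting-files-generator -- sh -c 'cat /data/openebs/*.txt'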
This concludes this article.
Happy coding!