Kubernetes Persistent Volumes

In this blog post, I’ll walk through a simple example of how to create a persistent volume in Kubernetes. There are several volume types in Kubernetes, but to get started I’ll be using the local volume type. A local volume represents a mounted local storage device such as a disk, partition, or directory. I’ll be using a single-node Kubernetes cluster running on Ubuntu Server, and a Docker image of a .NET Core Web API application that writes its logs to the persistent volume. Below are the main steps needed to achieve this goal; it is assumed that you already know how to create the .NET Core application and the single-node Kubernetes cluster.

Create a Persistent Volume

You begin by creating a hostPath Persistent Volume. Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath Persistent Volume uses a file or directory on the Node to emulate network-attached storage. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use StorageClasses to set up dynamic provisioning.

The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the Persistent Volume, which will be used to bind Persistent Volume Claim requests to this Persistent Volume.

Create the PersistentVolume: Copy the code into a pv-volume.yaml file and run:

kubectl apply -f pv-volume.yaml

View information about the Persistent Volume:

kubectl get pv my-pv-volume
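
The output should look something like this, with a STATUS of Available since no claim has been bound yet (the exact columns and ages vary by Kubernetes version):

NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
my-pv-volume   10Gi       RWO            Retain           Available           manual                  10s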


Configuration for the Persistent Volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Create a Persistent Volume Claim

The next step is to create a PersistentVolumeClaim. Pods use Persistent Volume Claims to request physical storage. In this exercise, you create a Persistent Volume Claim that requests a volume of at least three gibibytes that can provide read-write access for at least one Node.

The configuration file specifies the same StorageClass name, manual, and the same ReadWriteOnce access mode as the Persistent Volume, and requests 3 gibibytes of storage. Because these requirements are satisfied by the 10 gibibyte Persistent Volume created above, the claim will bind to that volume.

Create the Persistent Volume Claim: Copy the code into a pv-volume-claim.yaml file and run:

kubectl apply -f pv-volume-claim.yaml

After you create the Persistent Volume Claim, the Kubernetes control plane looks for a Persistent Volume that satisfies the claim’s requirements.
If the control plane finds a suitable Persistent Volume with the same StorageClass, it binds the claim to the volume.
Look again at the Persistent Volume:

kubectl get pv my-pv-volume

Now the output shows a STATUS of Bound.
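
For example:

NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
my-pv-volume   10Gi       RWO            Retain           Bound    default/my-pv-claim   manual                  2m
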
Look at the Persistent Volume Claim:

kubectl get pvc my-pv-claim
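
The claim should also report Bound. Note that CAPACITY shows the size of the volume it bound to (10Gi), not the 3Gi that was requested:

NAME          STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pv-claim   Bound    my-pv-volume   10Gi       RWO            manual         30s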

Configuration for the Persistent Volume Claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Create a Pod

The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.

Notice that the Pod’s configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod’s point of view, the claim is a volume.

Create the Pod: Copy the code into a pv-pod.yaml file and run:

kubectl apply -f pv-pod.yaml

Verify that the container in the Pod is running:

kubectl get pod my-pv-pod
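
Once the Pod reports Running, you can confirm the volume is mounted by listing the mount path inside the container (this assumes the image includes a shell and ls):

kubectl exec -it my-pv-pod -- ls -la /my/logs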

Configuration for the Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pv-pod
  labels:
    run: my-app
spec:
  volumes:
    - name: my-pv-storage
      persistentVolumeClaim:
        claimName: my-pv-claim
  containers:
    - name: my-app
      image: 'docker.io/mydockerwebapp:v1'
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/my/logs"
          name: my-pv-storage
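
Because this is a hostPath volume, anything the application writes to /my/logs inside the container also appears under /mnt/data on the Node, which you can check directly on the host:

# run this on the Node itself
ls -la /mnt/data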

Using Persistent Volume Claims with Deployments

At this point, a persistent volume and a persistent volume claim have been created, and the claim has been bound to the volume by matching the StorageClass, access modes, and capacity. You can confirm this by running the kubectl get pv command and looking at the STATUS column for my-pv-volume. The claim is now ready to be used by a pod or deployment spec. To demonstrate this I’ll use a deployment that creates a NodePort service for the Web API and runs a single application container. Pay close attention to the volumes and volumeMounts sections. The volumeMounts section of the container spec specifies the name of the volume and the mountPath inside the container. The volumes section maps the persistent volume claim to an actual volume that can be used by the container.

Create the Service and Deployment:
Copy the code into a pv-deployment.yaml file and run:

kubectl apply -f pv-deployment.yaml

Verify that the Pod created by the Deployment is running:

kubectl get pods -l run=my-app


You can now test your app using http://localhost:31718/api/mycontroller.
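
For example, from the host:

curl http://localhost:31718/api/mycontroller
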
Configuration for the Service and Deployment:

apiVersion: v1
kind: Service
metadata:
  name: dockerwebapi-srv
  namespace: default
  labels:
    run: my-app
spec:
  ports:
    - protocol: TCP
      port: 80
      nodePort: 31718
  selector:
    run: my-app
  type: NodePort

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
  labels:
    run: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      volumes:
        - name: my-pv-storage
          persistentVolumeClaim:
            claimName: my-pv-claim
      containers:
        - name: my-app
          image: 'docker.io/mydockerwebapp:v1'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/my/logs"
              name: my-pv-storage

Understanding Persistent Volume Reclaim Policies

There is a setting you can define called persistentVolumeReclaimPolicy in the PersistentVolume manifest. This setting determines what happens to the data in a volume when it is released from a claim, meaning the claim was deleted and the volume is no longer bound to any claim. There are three options for persistentVolumeReclaimPolicy: Retain, Delete, and Recycle. Retain preserves the data and the volume; when a claim is removed the volume simply remains as it was and can be attached to new claims that fit its requirements. Note that with Retain, new claims would have access to the old data stored on the volume. Delete deletes the volume when a claim is deleted. For a hostPath volume, however, it does not delete the data on disk, just the volume object, so if you re-create a volume at the same path it will load up the old data. Recycle is deprecated in favor of dynamic provisioning, but it’s worth knowing that it can be used to preserve the volume while deleting the data in it when a claim is deleted.
Reclaim Policies: Retain, Delete, Recycle (deprecated in favor of dynamic volumes)
Configuration showing a reclaim policy:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"

