Volume Expansion in LKE
It sure does!
You'll want to make sure you've already deployed a Persistent Volume Claim (PVC) to your LKE cluster. If you haven't, you can check out our Deploy Persistent Volume Claims with the Linode Block Storage CSI Driver guide to learn how.
Using that guide as an example, you should have created a PVC with the following manifest file, named pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linode-block-storage-retain
```
To expand that PVC, all you need to do is update the `spec.resources.requests.storage` field to the new desired size and then apply it to your cluster. So if you wanted to expand your PVC to 50GiB, your updated manifest should look like this:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: linode-block-storage-retain
```
All that's left to do is apply the updated manifest to your cluster:

```
kubectl apply -f pvc.yaml
```

The Linode Block Storage CSI Driver will handle the rest!
Awesome! I was really looking forward to this feature. I hope it also works to shrink a volume size. Thanks!
Hey @remusmp. I'm sorry to say that shrinking volumes is not supported at this time. LKE uses the Block Storage service, and the Block Storage Volumes are what cannot be sized down. From our resizing Block Storage guide:
> Storage Volumes cannot be sized down, only up. Keep this in mind when sizing your Volumes.
That said, if you resize up, you can retain your information by using `linode-block-storage-retain` when specifying your `storageClassName` in your manifest yaml file.
Just wanted to provide an additional bit of insight into resizing PVCs attached to Nodes in your LKE pools.
At this time, the Linode Block Storage CSI Driver does not automatically resize the filesystem when the volume is resized. So if you simply resize the claim by updating your yaml file, you will not be able to make use of the additional storage space. To work around this, you will need to complete the resize manually.
Log directly into the Linode in your pool that the volume is attached to (available in the Volumes tab in Cloud Manager).
If you are unsure about how to access the Linode, check out this post on SSHing into an LKE node. There is a real nifty tool that can help you with it!
Once you have console access to the Linode, you will want to resize the filesystem by running `resize2fs` against your volume's device path:
The correct device path for your volume is also available in the Volumes tab in Cloud Manager.
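If you'd like to see the mechanics in isolation first, here's a self-contained sketch that grows an ext4 filesystem with `resize2fs` using a file-backed image in place of a real Block Storage device (the `disk.img` filename and the sizes are just for the demo):

```shell
# Demo only: grow an ext4 filesystem with resize2fs, using a plain file
# in place of a real Block Storage device.
truncate -s 64M disk.img    # create a sparse 64MiB "disk"
mkfs.ext4 -q -F disk.img    # format it (-F because it's a file, not a device)
truncate -s 128M disk.img   # simulate the Block Storage volume being resized
e2fsck -f -p disk.img       # offline resize2fs wants a clean fsck first
resize2fs disk.img          # grow the filesystem to fill the new size
```

On a real Node, the last step is just `resize2fs <device-path>` — there's no `truncate` involved, since the Block Storage service has already grown the underlying device.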
Once you've resized the filesystem, you can use `kubectl` to confirm the resize by running the below command, replacing `<pod-name>` with the name of the Pod the volume is mounted in:

```
kubectl exec -it <pod-name> -- df
```
This will output the disk space for your disks, including the total size, the amount used, and the amount available.
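If you want to pull just those numbers out, you can pipe `df` through `awk`. A quick local illustration against the root filesystem (inside the Pod you'd point `df` at the volume's mount path instead of `/`):

```shell
# Print total, used, and available space for a filesystem on one line.
# -P forces POSIX output so long device names don't wrap across lines.
df -hP / | awk 'NR==2 {print "total:", $2, "used:", $3, "available:", $4}'
```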
Note that I was informed by Linode Support that a reboot of your Node is required before you can run `resize2fs`.
Rebooting the nodes is pretty extreme. I'm not sure why support stated that; simply scaling the affected Deployment to 0, waiting for the volumes to detach, and then scaling back up is more than enough.
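As a sketch of that workaround (assuming the Deployment using the volume is named `my-app` and normally runs one replica — substitute your own name and replica count):

```shell
# Scale to zero so the Pod terminates and the volume detaches from the Node.
kubectl scale deployment my-app --replicas=0
# Wait until the volume shows as detached (e.g. in Cloud Manager), then
# scale back up; the new Pod reattaches the now-larger volume on start.
kubectl scale deployment my-app --replicas=1
```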
Ideally you could just echo 1 to the appropriate device's rescan file (`/sys/block/<device>/device/rescan`); however, this doesn't seem to work with the Linode iSCSI disks.