Old Linode CSI Driver on LKE?
I'm seeing repeated RBAC errors in the logs for the csi-attacher container. The error recurs every second, so I suspect a configuration problem in my LKE cluster. It's a newly provisioned cluster, so I don't think I broke anything myself.
Example Log Item
E0107 04:39:52.244787 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:csi-controller-sa" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
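To confirm this is purely an RBAC gap, you can replay the failing check with kubectl auth can-i while impersonating the service account named in the error (a quick sketch; the account name is copied verbatim from the log above):

kubectl auth can-i list csinodes.storage.k8s.io \
  --as=system:serviceaccount:kube-system:csi-controller-sa

This should print "no" as long as the ClusterRole below is missing the relevant rule.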
Here we see permissions for csinodeinfos but not csinodes… Did the name change for this resource? (It looks like the old CSINodeInfo CRD in the csi.storage.k8s.io group was replaced by the built-in CSINode object in storage.k8s.io.)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    lke.linode.com/caplke-version: v1.18.14-002
  name: external-attacher-role
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - csi.storage.k8s.io
  resources:
  - csinodeinfos
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - volumeattachments
  verbs:
  - create
  - get
  - list
  - watch
  - update
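For comparison, the grant the attacher is asking for would look something like this extra rule (a sketch derived from the error message; the actual fix in the driver repo may differ in details):

- apiGroups:
  - storage.k8s.io
  resources:
  - csinodes
  verbs:
  - get
  - list
  - watch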
After checking out the code in the CSI driver's GitHub repo, I think the problem is that my cluster was provisioned with an outdated version of the CSI controller.
After restarting the controller and watching the log output, I see this User-Agent in use: LinodeCSI/v0.1.7-1-gd1de67d-dirty. The latest release according to the CSI project on GitHub is v0.3.0.
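One way to see which driver build a cluster is actually running is to list the images of the kube-system pods (a generic sketch; label selectors and pod names vary across driver releases, so I'm just grepping for "linode"):

kubectl -n kube-system get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
  | grep -i linode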
I will try updating this in my cluster, but if this is the problem, it's quite annoying to have to do so on a freshly provisioned LKE cluster.
Probable Smoking Gun
Found the apparent missing commit… https://github.com/linode/linode-blockstorage-csi-driver/pull/60/commits/786a1c7f7b87c07405d24a3a94d48adde6816090
So this looks a lot like a bug report, but there are some questions in here.
What happened here? Will LKE keep my CSI driver updated, or is that my responsibility? After an LKE cluster is created, is it completely handed off to me to take care of?
Hi @kekoav - This is a known problem with the current version of our CSI driver, and we'll be fixing it in our next release. Thanks for keeping an eye out, and feel free to let us know if you have any other feedback.
I updated my CSI driver by applying this yaml for 0.3.0. This worked and the errors disappeared, but a few minutes later the old version was back again.
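To catch the rollback in the act, a simple polling loop over the reported images works (a sketch reusing the jsonpath query from above; the 30-second interval is arbitrary):

while true; do
  date
  kubectl -n kube-system get pods \
    -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
    | grep -i linode | sort -u
  sleep 30
done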
How do I make a CSI update stick in an LKE cluster? Is there something that periodically reconciles the CSI driver in my cluster back to a managed version? How does that work, and is there a way to disable it?