Add support for CSI driver in CKS #11419
Conversation
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
Codecov Report — additional details and impacted files:

```
@@             Coverage Diff              @@
##               main   #11419      +/-   ##
============================================
- Coverage     17.55%   17.55%    -0.01%
  Complexity    15526    15526
============================================
  Files          5908     5908
  Lines        528696   528779      +83
  Branches      64569    64579      +10
============================================
+ Hits          92801    92805       +4
- Misses       425457   425537      +80
+ Partials      10438    10437       -1
```
@blueorangutan package
(force-pushed 4fb7b37 to 4e53e96)
@blueorangutan package
@kiranchavala a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 14731
@blueorangutan package
@blueorangutan package
@Pearl1594 a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15377
harikrishna-patnala left a comment:
Code LGTM.
@Pearl1594 please see if this requires any doc PR; if not needed, please ignore.
rohityadavcloud left a comment:
Couldn't test it, but LGTM. Adds the CSI driver while orchestrating the CKS cluster.
@blueorangutan test
@harikrishna-patnala a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
kiranchavala left a comment:
@Pearl1594, unable to launch a CKS cluster with the CSI driver enabled on VMware.

- CloudStack with vmware80u3
- Isolated network with 10.1.1.0/24 and 172.16.10.0/24

Logs on the control node:
Oct 10 09:48:51 systemvm kubelet[1987]: I1010 09:48:51.756269 1987 kubelet_node_status.go:75] "Attempting to register node" node="test7-control-199cd7b84dd"
Oct 10 09:48:51 systemvm kubelet[1987]: E1010 09:48:51.757198 1987 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.16.10.213:6443/api/v1/nodes\": dial tcp 172.16.10.213:6443: connect: connection refused" node="test7-control-199cd7b84dd"
Oct 10 09:23:03 systemvm kubelet[1966]: E1010 09:23:03.153634 1966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-cks-test2-control-199cd6685c2_kube-system(02e215e6c12bdc87238cda2c37d4ff4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-cks-test2-control-199cd6685c2_kube-system(02e215e6c12bdc87238cda2c37d4ff4c)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fb835ae483a592040f34461f9fd51d6cff4c6ce9fc0dd8a054d411e41bdc6a6e/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown\"" pod="kube-system/kube-controller-manager-cks-test2-control-199cd6685c2" podUID="02e215e6c12bdc87238cda2c37d4ff4c"
Oct 10 09:23:05 systemvm kubelet[1966]: E1010 09:23:05.738176 1966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.1.1.241:6443/api/v1/namespaces/default/events\": dial tcp 10.1.1.241:6443: connect: connection refused" event="&Event{ObjectMeta:{cks-test2-control-199cd6685c2.186d16b5dab68691 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cks-test2-control-199cd6685c2,UID:cks-test2-control-199cd6685c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:cks-test2-control-199cd6685c2,},FirstTimestamp:2025-10-10 09:19:59.686018705 +0000 UTC m=+0.443168357,LastTimestamp:2025-10-10 09:19:59.686018705 +0000 UTC m=+0.443168357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cks-test2-control-199cd6685c2,}"
Oct 10 09:23:05 systemvm kubelet[1966]: E1010 09:23:05.896012 1966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cks-test2-control-199cd6685c2\" not found" node="cks-test2-control-199cd6685c2"
Oct 10 09:23:05 systemvm containerd[1855]: time="2025-10-10T09:23:05.897300037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-cks-test2-control-199cd6685c2,Uid:7ebc26e2e530bcd942d8560c4c775c2a,Namespace:kube-system,Attempt:0,}"
Oct 10 09:23:06 systemvm systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa-rootfs.mount: Deactivated successfully.
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.032401897Z" level=info msg="shim disconnected" id=b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa namespace=k8s.io
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.032463031Z" level=warning msg="cleaning up after shim disconnected" id=b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa namespace=k8s.io
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.032479083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.060718358Z" level=warning msg="cleanup warnings time=\"2025-10-10T09:23:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status -1: \" runtime=io.containerd.runc.v2\ntime=\"2025-10-10T09:23:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.063364953Z" level=error msg="copy shim log" error="read /proc/self/fd/18: file already closed" namespace=k8s.io
Oct 10 09:23:06 systemvm systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa-shm.mount: Deactivated successfully.
Oct 10 09:23:06 systemvm containerd[1855]: time="2025-10-10T09:23:06.069802460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-cks-test2-control-199cd6685c2,Uid:7ebc26e2e530bcd942d8560c4c775c2a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown"
Oct 10 09:23:06 systemvm kubelet[1966]: E1010 09:23:06.070208 1966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown"
Oct 10 09:23:06 systemvm kubelet[1966]: E1010 09:23:06.070377 1966 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown" pod="kube-system/etcd-cks-test2-control-199cd6685c2"
Oct 10 09:23:06 systemvm kubelet[1966]: E1010 09:23:06.070536 1966 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown" pod="kube-system/etcd-cks-test2-control-199cd6685c2"
Oct 10 09:23:06 systemvm kubelet[1966]: E1010 09:23:06.070626 1966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-cks-test2-control-199cd6685c2_kube-system(7ebc26e2e530bcd942d8560c4c775c2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-cks-test2-control-199cd6685c2_kube-system(7ebc26e2e530bcd942d8560c4c775c2a)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b122d422a3f523e21b2267d3435832046c1108cd0c7641702e34b6f73ba20bfa/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown\"" pod="kube-system/etcd-cks-test2-control-199cd6685c2" podUID="7ebc26e2e530bcd942d8560c4c775c2a"
Oct 10 09:23:07 systemvm kubelet[1966]: E1010 09:23:07.410878 1966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.1.1.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cks-test2-control-199cd6685c2?timeout=10s\": dial tcp 10.1.1.241:6443: connect: connection refused" interval="7s"
Oct 10 09:23:07 systemvm kubelet[1966]: I1010 09:23:07.634251 1966 kubelet_node_status.go:75] "Attempting to register node" node="cks-test2-control-199cd6685c2"
Oct 10 09:23:07 systemvm kubelet[1966]: E1010 09:23:07.635785 1966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.1.1.241:6443/api/v1/nodes\": dial tcp 10.1.1.241:6443: connect: connection refused" node="cks-test2-control-199cd6685c2"
Oct 10 09:23:09 systemvm kubelet[1966]: E1010 09:23:09.812289 1966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cks-test2-control-199cd6685c2\" not found"
Oct 10 09:23:09 systemvm kubelet[1966]: E1010 09:23:09.899416 1966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cks-test2-control-199cd6685c2\" not found" node="cks-test2-control-199cd6685c2"
Oct 10 09:23:09 systemvm containerd[1855]: time="2025-10-10T09:23:09.901691263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-cks-test2-control-199cd6685c2,Uid:d3306bac0aa612e11f6e3900e97cc854,Namespace:kube-system,Attempt:0,}"
Oct 10 09:23:09 systemvm systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac-rootfs.mount: Deactivated successfully.
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.031176021Z" level=info msg="shim disconnected" id=8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac namespace=k8s.io
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.031234710Z" level=warning msg="cleaning up after shim disconnected" id=8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac namespace=k8s.io
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.031245928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.084975115Z" level=warning msg="cleanup warnings time=\"2025-10-10T09:23:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status -1: \" runtime=io.containerd.runc.v2\ntime=\"2025-10-10T09:23:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.086674074Z" level=error msg="copy shim log" error="read /proc/self/fd/18: file already closed" namespace=k8s.io
Oct 10 09:23:10 systemvm systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac-shm.mount: Deactivated successfully.
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.092437851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-cks-test2-control-199cd6685c2,Uid:d3306bac0aa612e11f6e3900e97cc854,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown"
Oct 10 09:23:10 systemvm kubelet[1966]: E1010 09:23:10.092889 1966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown"
Oct 10 09:23:10 systemvm kubelet[1966]: E1010 09:23:10.093111 1966 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown" pod="kube-system/kube-apiserver-cks-test2-control-199cd6685c2"
Oct 10 09:23:10 systemvm kubelet[1966]: E1010 09:23:10.093235 1966 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown" pod="kube-system/kube-apiserver-cks-test2-control-199cd6685c2"
Oct 10 09:23:10 systemvm kubelet[1966]: E1010 09:23:10.093308 1966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-cks-test2-control-199cd6685c2_kube-system(d3306bac0aa612e11f6e3900e97cc854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-cks-test2-control-199cd6685c2_kube-system(d3306bac0aa612e11f6e3900e97cc854)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8678a4979a70f114b742deb127d2ad5572cd6d52aafc11bd3b16af6d5973bcac/log.json: no such file or directory): runc did not terminate successfully: exit status 139: unknown\"" pod="kube-system/kube-apiserver-cks-test2-control-199cd6685c2" podUID="d3306bac0aa612e11f6e3900e97cc854"
Oct 10 09:23:10 systemvm kubelet[1966]: E1010 09:23:10.895443 1966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"cks-test2-control-199cd6685c2\" not found" node="cks-test2-control-199cd6685c2"
Oct 10 09:23:10 systemvm containerd[1855]: time="2025-10-10T09:23:10.896187204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-cks-test2-control-199cd6685c2,Uid:1f054c6ebe3045d41fdc0c1b1e22303c,Namespace:kube-system,Attempt:0,}"
Oct 10 09:23:10 systemvm systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5acb55be8befc85f001e241b5cf85298bc75477c7856a6baba29e6d580f7f6b-
root@test7-control-199cd7b84dd:/opt/bin# ./kubectl get nodes
E1010 09:50:11.604465 2779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E1010 09:50:11.607276 2779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E1010 09:50:11.609917 2779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E1010 09:50:11.612688 2779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
E1010 09:50:11.615051 2779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
@kiranchavala I observed this failure even without the CSI changes; I'm investigating. Thanks for testing.
[SF] Trillian test result (tid-14606)
kiranchavala left a comment:
LGTM, tested with KVM, XCP-ng 8.3 and VMware 7.
The only issue is with VMware 8, for which an issue is logged.
| Test Case Execution | Result |
|---|---|
| CSI driver should deploy on KVM hypervisor | Pass |
| CSI driver should deploy on VMware | Pass |
| CSI driver should deploy on XenServer | Pass |
| CSI option should be displayed in the CKS cluster creation form and in the details | Pass |
| CSI driver should be installed when an external CKS node is added to the k8s cluster | Pass |
| CSI driver should work fine when autoscaling is enabled on the CKS cluster | Pass |
| Test the creation of storage class, persistent volume, persistent volume claim and pod which uses the PV | Pass |
| Test the persistent volume resize feature | Pass |
| Test the reclaimPolicy and volumeBindingMode of a storage class | Pass |
| Test the deletion of storage class, persistent volume and persistent volume claim | Pass |
| Test the deployment of the CSI driver on a CAPC cluster | Pass |
| Test the creation of storage class, persistent volume, persistent volume claim and snapshots on a CAPC cluster | Pass |
| Test the creation of a snapshot for the persistent volume | Pass |
| Test the creation of a persistent volume from a snapshot | Pass |
| Test the deletion of the volume snapshots of a persistent volume | Pass |
| Test the nodeSelector option in the pod specification to make the persistent volume attach to the correct node | Pass |
| Test the custom disk offering with tags matching a particular primary storage | Pass |
| Test the CSI driver on a CKS cluster deployed in a project | Pass |
| Test the CSI driver on a CKS cluster deployed in a user account | Pass |
| Test the CSI driver on a CKS cluster deployed under a domain | Pass |
| Test the CSI driver on an isolated network | Pass |
| Test the CSI driver on a shared network | Pass |
| Test the CSI driver on a VPC network | Pass |
| Test the CSI driver on a routed mode network | Pass |
| Test deletion of the CKS cluster when the reclaim policy is set to Retain and Delete | Pass |
| Test the upgrade of a k8s cluster with the CSI driver enabled | Pass |
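The StorageClass scenarios in the table above (reclaimPolicy, volumeBindingMode, volume expansion, disk-offering selection) can be illustrated with a minimal manifest sketch. The provisioner name and the disk-offering parameter key below are assumptions based on the CloudStack CSI driver and may differ in this build; the offering UUID is a placeholder.

```yaml
# Hypothetical sketch, not taken from this PR; provisioner name and
# parameter key are assumptions about the CloudStack CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
provisioner: csi.cloudstack.apache.org                 # assumed driver name
parameters:
  csi.cloudstack.apache.org/disk-offering-id: "<disk-offering-uuid>"  # assumed key
reclaimPolicy: Delete              # Retain was also exercised by the tests
volumeBindingMode: WaitForFirstConsumer   # bind the PV where the pod is scheduled
allowVolumeExpansion: true         # required for the PV resize test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cloudstack-custom
  resources:
    requests:
      storage: 5Gi
```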
* Support creation of PV (persistent volumes) in CloudStack projects
* Add support for snapshot APIs for project role
* Add support to setup csi driver on k8s cluster creation
* Fix deploy script
* Update response
* Fix table name
* Fix linter
* Show if csi driver is setup in cluster
* Delete PVs whose reclaim policy is delete when cluster is destroyed
* Update ref
* Move changes to 4.22
* Fix variables
* Fix EOF
Description
This PR adds support for deploying the CSI driver when deploying CKS clusters. It isn't set up by default; it is only deployed when the enablecsi parameter is passed. It also adds support for snapshot-related APIs for project-level users, to allow snapshots of persistent volumes at the project level.
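As a sketch, the new flag would be passed at cluster-creation time. The invocation below uses cloudmonkey (cmk) syntax; apart from enablecsi, the parameter names follow the standard createKubernetesCluster API and may vary by CloudStack version, and all UUIDs are placeholders.

```
# Hypothetical cloudmonkey invocation; only enablecsi comes from this PR.
create kubernetescluster name=csi-demo \
  zoneid=<zone-uuid> kubernetesversionid=<version-uuid> \
  serviceofferingid=<offering-uuid> size=2 \
  enablecsi=true
```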
Types of changes: Feature/Enhancement