HA Kubernetes Cluster Not Using Auto-Generated Public IP in Apache CloudStack 4.21.0.0 #11642
@hodie-aurora there is a similar upstream issue logged. As a workaround, pass the following flag when executing kubectl commands: `kubectl --insecure-skip-tls-verify=true`
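For reference, a minimal example of the workaround in use; the kubectl subcommands shown are only illustrative, any command can carry the flag:

```shell
# Workaround from the comment above: skip TLS certificate verification
# when talking to the cluster with the kubeconfig downloaded from CloudStack.
kubectl --insecure-skip-tls-verify=true get nodes
kubectl --insecure-skip-tls-verify=true get pods -A
```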
@weizhouapache Since #11579 reproduces this issue, can you confirm whether this is a bug introduced in CloudStack 4.21.0.0? If so, I'd like to know when it might be fixed: in 4.22, or in a patch release of the 4.21 series? And if it is fixed in the 4.21 series, will the release packages be updated? Thank you.
@weizhouapache Following up on my previous comment, I believe that using `kubectl --insecure-skip-tls-verify=true` only works around the symptom of accessing the cluster and doesn't resolve the root cause. The fundamental issue appears to be that during cluster initialization, the Kubernetes API server endpoint is configured to point to the internal IP of a single control node VM (e.g., 10.1.0.219:6443) instead of the auto-generated public IP. If the cluster were set up to use the public IP (behind the load balancer), the kubectl access problems would be resolved naturally, and the cluster would be truly highly available: it could tolerate the failure of fewer than half of the control nodes without the entire cluster going down. Is my understanding of the root cause correct? Thank you for any confirmation or additional insights!
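As a sketch of how to verify this, the command below prints the API server endpoint recorded in the active kubeconfig; the expected public endpoint is hypothetical and shown only for contrast:

```shell
# Print the API server endpoint from the kubeconfig downloaded from CloudStack
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Observed on the affected cluster: https://10.1.0.219:6443  (internal IP of a single control node)
# Expected for a true HA setup:     https://<auto-generated public IP>:6443  (load-balanced endpoint)
```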




yes, #11720 should fix it