What happened?

I ran

```
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace -f scripts/kyverno-overrides.yaml
```

to install Kyverno in the vcluster.

The admission-controller pod fails to start. I believe the problem is that it is unable to list ConfigMaps.
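If it helps with triage, the permission gap can be probed with an impersonated RBAC check against the vcluster. A sketch, assuming the chart's default service account name kyverno-admission-controller (my overrides may change it):

```
# Ask the vcluster API server whether the admission controller's service
# account may list ConfigMaps cluster-wide (SA name is an assumption)
kubectl auth can-i list configmaps --all-namespaces \
  --as=system:serviceaccount:kyverno:kyverno-admission-controller
```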
What did you expect to happen?
I expected the Kyverno deployments to run without issue.
How can we reproduce it (as minimally and precisely as possible)?
I believe creating a vcluster and deploying Kyverno is all that is required to recreate the problem.
Anything else we need to know?
I will attach logs and YAML for the k8s objects I believe are related.
To see the logs for the admission controller I had to run kubectl logs against the physical pod in the host cluster; with Kyverno in a bad state, a number of commands were failing when run against the vcluster.
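For anyone reproducing this, the host-side log retrieval looked roughly like the sketch below. The pod name translation assumes vcluster's default `<pod>-x-<namespace>-x-<vcluster>` scheme, and `host` / `vcluster` are placeholder context and namespace names for my setup:

```
# Locate the synced admission-controller pod in the host cluster
kubectl --context host -n vcluster get pods | grep admission-controller

# Read its logs directly from the host, bypassing the vcluster API server
kubectl --context host -n vcluster logs kyverno-admission-controller-<hash>-x-kyverno-x-vcluster
```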
I removed kyverno-resource-mutating-webhook-cfg and kyverno-resource-validating-webhook-cfg from the virtual cluster because, with Kyverno in a bad state, the associated services were non-responsive, and this in turn caused the k8s API server to fail too many commands (including pod creation; see below).
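Concretely, the removal was along these lines (a sketch; the object kinds are the standard ones Kyverno registers under those names):

```
# Stop the vcluster API server from calling the unresponsive admission service
kubectl delete mutatingwebhookconfiguration kyverno-resource-mutating-webhook-cfg
kubectl delete validatingwebhookconfiguration kyverno-resource-validating-webhook-cfg
```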
I created a pod that used the bitnami/kubectl image and the Kyverno admission controller service account. In a shell session in that pod, kubectl get -A configmap didn't have a problem. kubectl auth whoami also confirmed I was running as the admission controller service account.
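The debug pod was essentially the following; the service account name kyverno-admission-controller is the chart default and an assumption on my part:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-debug
  namespace: kyverno
spec:
  serviceAccountName: kyverno-admission-controller  # assumed default SA name
  containers:
    - name: kubectl
      image: bitnami/kubectl
      command: ["sleep", "infinity"]   # keep the container alive for exec
```

From there, `kubectl -n kyverno exec -it kubectl-debug -- bash` gives the shell session used for the checks above.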
Host cluster Kubernetes version
```
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:40:17Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.5", GitCommit:"74e84a90c725047b1328ff3d589fedb1cb7a120e", GitTreeState:"clean", BuildDate:"2024-09-12T00:11:55Z", GoVersion:"go1.22.6", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.26) and server (1.30) exceeds the supported minor version skew of +/-1
```
Another observation: after removing the problematic webhooks (see original description) and a lease that seemed to be problematic, and then performing a rolling restart of the admission-controller, I saw a different error in the admission-controller log:
```
2024-10-24T15:46:56Z    INFO    webhooks.server logging/log.go:184      2024/10/24 15:46:56 http: TLS handshake error from 10.250.0.135:58636: secret "kyverno-svc.kyverno.svc.kyverno-tls-pair" not found
```
Checking the secrets in the vcluster, it does indeed appear to be missing. I haven't checked whether it was missing from the very beginning or not.
```
$ kubectl get -A secret
NAMESPACE   NAME                                                      TYPE                             DATA   AGE
default     dockersecret                                              kubernetes.io/dockerconfigjson   1      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-ca     kubernetes.io/tls                2      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-pair   kubernetes.io/tls                2      29m
kyverno     kyverno-svc.kyverno.svc.kyverno-tls-ca                    kubernetes.io/tls                2      11h
kyverno     sh.helm.release.v1.kyverno.v1                             helm.sh/release.v1               1      11h
```
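Note that kyverno-svc.kyverno.svc.kyverno-tls-pair is missing while its -tls-ca counterpart exists. One way to check whether the pair was ever created on the host side (context and namespace names are placeholders for my setup):

```
# List TLS secrets in the vcluster's host namespace
kubectl --context host -n vcluster get secret | grep kyverno-tls
```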
vcluster version
VCluster Config