Get "https://node:10250/containerLogs/kube-system/coredns-b96499967-86vbm/coredns": Access violation #6119
Comments
Can you reproduce this with the latest 1.24 release? Do you see anything in the k3s journald log or containerd log file?
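On a systemd-based install, the k3s service log should be available with something like the following (the unit name assumes a standard k3s server install):
journalctl -u k3s --no-pager | tail -n 200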
The CoreDNS error is as follows:
# crictl logs coredns-b96499967-ppxpz
E0912 11:33:38.667444 18312 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"coredns-b96499967-ppxpz\": not found" containerID="coredns-b96499967-ppxpz"
FATA[0000] rpc error: code = NotFound desc = an error occurred when try to find container "coredns-b96499967-ppxpz": not found
# crictl logs 2a00f365f3e67
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
.:53
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9500e4afd6c1108d
CoreDNS-1.9.1
linux/amd64, go1.17.8, 4b597f8
[ERROR] plugin/errors: 2 8475937091263715528.8917947399188439399. HINFO: read udp 10.42.0.5:38111->10.42.0.2:53: read: connection refused
[ERROR] plugin/errors: 2 8475937091263715528.8917947399188439399. HINFO: read udp 10.42.0.5:39613->10.42.0.2:53: read: connection refused
[ERROR] plugin/errors: 2 8475937091263715528.8917947399188439399. HINFO: read udp 10.42.0.5:51733->10.42.0.2:53: read: connection refused
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[ERROR] plugin/errors: 2 update.traefik.io. A: read udp 10.42.0.5:40937->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 update.traefik.io. AAAA: read udp 10.42.0.5:49397->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 collect.traefik.io. AAAA: read udp 10.42.0.5:39185->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 collect.traefik.io. A: read udp 10.42.0.5:41055->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 update.traefik.io. A: read udp 10.42.0.5:40784->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 update.traefik.io. AAAA: read udp 10.42.0.5:45919->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 collect.traefik.io. A: read udp 10.42.0.5:44121->10.42.0.2:53: i/o timeout
[ERROR] plugin/errors: 2 collect.traefik.io. AAAA: read udp 10.42.0.5:53980->10.42.0.2:53: i/o timeout
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
At the same time, I searched journalctl and the following warning logs appeared:
level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found"
level=warning msg="Unable to fetch coredns config map: configmaps \"coredns\" not found" Now, I think the source of Access violation errors was CoreDNS。 |
No, I'm looking for the k3s service logs and the logs from containerd itself, not any of the pods.
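The containerd log is a separate file under the agent data directory; on a default install it should be roughly:
tail -n 200 /var/lib/rancher/k3s/agent/containerd/containerd.log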
Is this what you need?
-- Logs begin at Tue 2022-09-13 09:49:32 CST, end at Tue 2022-09-13 14:10:01 CST. --
13:28:36 test systemd[1]: Starting Lightweight Kubernetes...
13:28:36 test sh[21461]: /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
13:28:36 test sh[21461]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
13:28:36 test k3s[21472]: time="2022-09-13T13:28:36 08:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
13:28:36 test k3s[21472]: time="2022-09-13T13:28:36 08:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265248631b7be688833e6bf5ad7869fa2afe02f15f1cd2"
13:28:39 test k3s[21472]: time="2022-09-13T13:28:39 08:00" level=info msg="Starting k3s v1.24.4 k3s1 (c3f830e9)"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Database tables and indexes are up to date"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Kine available at unix://kine.sock"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40.595624119 0000 UTC notAfter=2032-09-10 05:28:40.595624119 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1663046920: notBefore=2022-09-13 05:28:40.60107555 0000 UTC notAfter=2032-09-10 05:28:40.60107555 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1663046920: notBefore=2022-09-13 05:28:40.602244227 0000 UTC notAfter=2032-09-10 05:28:40.602244227 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1663046920: notBefore=2022-09-13 05:28:40.603299202 0000 UTC notAfter=2032-09-10 05:28:40.603299202 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1663046920: notBefore=2022-09-13 05:28:40.605001035 0000 UTC notAfter=2032-09-10 05:28:40.605001035 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.195:192.168.100.195 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-test:test listener.cattle.io/fingerprint:SHA1=1F97F3A490AF470DFF1713A67FFA6D76BC9DF7AF]"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Tunnel server egress proxy mode: agent"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --profiling=false"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Waiting for API server to become available"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.100.195:6443 -t ${SERVER_NODE_TOKEN}"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.100.195:6443 -t ${AGENT_NODE_TOKEN}"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="Run: k3s kubectl"
13:28:40 test k3s[21472]: I0913 13:28:40.834604 21472 server.go:576] external host was not specified, using 192.168.100.195
13:28:40 test k3s[21472]: I0913 13:28:40.834825 21472 server.go:168] Version: v1.24.4 k3s1
13:28:40 test k3s[21472]: I0913 13:28:40.834846 21472 server.go:170] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=test signed by CN=k3s-server-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:40 test k3s[21472]: time="2022-09-13T13:28:40 08:00" level=info msg="certificate CN=system:node:test,O=system:nodes signed by CN=k3s-client-ca@1663046920: notBefore=2022-09-13 05:28:40 0000 UTC notAfter=2023-09-13 05:28:40 0000 UTC"
13:28:41 test k3s[21472]: I0913 13:28:41.333580 21472 shared_informer.go:255] Waiting for caches to sync for node_authorizer
13:28:41 test k3s[21472]: I0913 13:28:41.334692 21472 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
13:28:41 test k3s[21472]: I0913 13:28:41.334711 21472 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
13:28:41 test k3s[21472]: I0913 13:28:41.335831 21472 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
13:28:41 test k3s[21472]: I0913 13:28:41.335850 21472 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
13:28:41 test k3s[21472]: W0913 13:28:41.362955 21472 genericapiserver.go:557] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: I0913 13:28:41.363921 21472 instance.go:274] Using reconciler: lease
13:28:41 test k3s[21472]: I0913 13:28:41.468059 21472 instance.go:586] API group "internal.apiserver.k8s.io" is not enabled, skipping.
13:28:41 test k3s[21472]: W0913 13:28:41.650048 21472 genericapiserver.go:557] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.652426 21472 genericapiserver.go:557] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.668490 21472 genericapiserver.go:557] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.670703 21472 genericapiserver.go:557] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.677772 21472 genericapiserver.go:557] Skipping API networking.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.681333 21472 genericapiserver.go:557] Skipping API node.k8s.io/v1alpha1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.690460 21472 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.690482 21472 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.692344 21472 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.692364 21472 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.697353 21472 genericapiserver.go:557] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.701993 21472 genericapiserver.go:557] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.707031 21472 genericapiserver.go:557] Skipping API apps/v1beta2 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.707053 21472 genericapiserver.go:557] Skipping API apps/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: W0913 13:28:41.709297 21472 genericapiserver.go:557] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
13:28:41 test k3s[21472]: I0913 13:28:41.713659 21472 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
13:28:41 test k3s[21472]: I0913 13:28:41.713690 21472 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
13:28:41 test k3s[21472]: W0913 13:28:41.729620 21472 genericapiserver.go:557] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
13:28:43 test k3s[21472]: I0913 13:28:43.362478 21472 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
13:28:43 test k3s[21472]: I0913 13:28:43.362480 21472 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
13:28:43 test k3s[21472]: I0913 13:28:43.362744 21472 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
13:28:43 test k3s[21472]: I0913 13:28:43.362964 21472 secure_serving.go:210] Serving securely on 127.0.0.1:6444
13:28:43 test k3s[21472]: I0913 13:28:43.363019 21472 tlsconfig.go:240] "Starting DynamicServingCertificateController"
13:28:43 test k3s[21472]: I0913 13:28:43.363063 21472 controller.go:83] Starting OpenAPI AggregationController
13:28:43 test k3s[21472]: I0913 13:28:43.363113 21472 controller.go:80] Starting OpenAPI V3 AggregationController
13:28:43 test k3s[21472]: I0913 13:28:43.363171 21472 available_controller.go:491] Starting AvailableConditionController
13:28:43 test k3s[21472]: I0913 13:28:43.363183 21472 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
13:28:43 test k3s[21472]: I0913 13:28:43.363674 21472 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
13:28:43 test k3s[21472]: I0913 13:28:43.363827 21472 apiservice_controller.go:97] Starting APIServiceRegistrationController
13:28:43 test k3s[21472]: I0913 13:28:43.363838 21472 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
13:28:43 test k3s[21472]: I0913 13:28:43.363865 21472 apf_controller.go:317] Starting API Priority and Fairness config controller
13:28:43 test k3s[21472]: I0913 13:28:43.363899 21472 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
13:28:43 test k3s[21472]: I0913 13:28:43.368069 21472 customresource_discovery_controller.go:209] Starting DiscoveryController
13:28:43 test k3s[21472]: I0913 13:28:43.373069 21472 controller.go:85] Starting OpenAPI controller
13:28:43 test k3s[21472]: I0913 13:28:43.373109 21472 controller.go:85] Starting OpenAPI V3 controller
13:28:43 test k3s[21472]: I0913 13:28:43.373136 21472 naming_controller.go:291] Starting NamingConditionController
13:28:43 test k3s[21472]: I0913 13:28:43.373164 21472 establishing_controller.go:76] Starting EstablishingController
13:28:43 test k3s[21472]: I0913 13:28:43.373187 21472 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
13:28:43 test k3s[21472]: I0913 13:28:43.373204 21472 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
13:28:43 test k3s[21472]: I0913 13:28:43.373229 21472 crd_finalizer.go:266] Starting CRDFinalizer
13:28:43 test k3s[21472]: I0913 13:28:43.376410 21472 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
13:28:43 test k3s[21472]: I0913 13:28:43.379338 21472 crdregistration_controller.go:111] Starting crd-autoregister controller
13:28:43 test k3s[21472]: I0913 13:28:43.379352 21472 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
13:28:43 test k3s[21472]: I0913 13:28:43.376416 21472 autoregister_controller.go:141] Starting autoregister controller
13:28:43 test k3s[21472]: I0913 13:28:43.380625 21472 cache.go:32] Waiting for caches to sync for autoregister controller
13:28:43 test k3s[21472]: I0913 13:28:43.382354 21472 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
13:28:43 test k3s[21472]: I0913 13:28:43.382370 21472 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
13:28:43 test k3s[21472]: I0913 13:28:43.393001 21472 controller.go:611] quota admission added evaluator for: namespaces
13:28:43 test k3s[21472]: E0913 13:28:43.400845 21472 controller.go:166] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
13:28:43 test k3s[21472]: I0913 13:28:43.433890 21472 shared_informer.go:262] Caches are synced for node_authorizer
13:28:43 test k3s[21472]: I0913 13:28:43.464245 21472 apf_controller.go:322] Running API Priority and Fairness config worker
13:28:43 test k3s[21472]: I0913 13:28:43.464287 21472 cache.go:39] Caches are synced for AvailableConditionController controller
13:28:43 test k3s[21472]: I0913 13:28:43.464319 21472 cache.go:39] Caches are synced for APIServiceRegistrationController controller
13:28:43 test k3s[21472]: I0913 13:28:43.479383 21472 shared_informer.go:262] Caches are synced for crd-autoregister
13:28:43 test k3s[21472]: I0913 13:28:43.480658 21472 cache.go:39] Caches are synced for autoregister controller
13:28:43 test k3s[21472]: I0913 13:28:43.482962 21472 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
13:28:44 test k3s[21472]: I0913 13:28:44.006613 21472 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
13:28:44 test k3s[21472]: I0913 13:28:44.369233 21472 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
13:28:44 test k3s[21472]: I0913 13:28:44.374284 21472 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
13:28:44 test k3s[21472]: I0913 13:28:44.374311 21472 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
13:28:45 test k3s[21472]: time="2022-09-13T13:28:45 08:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
13:28:50 test k3s[21472]: time="2022-09-13T13:28:50 08:00" level=info msg="Slow SQL (started: 2022-09-13 13:28:44.557567103 0800 CST m= 5.502965546) (total time: 6.074633778s): INSERT INTO kine(name, created, deleted, create_revision, prev_revision, lease, value, old_value) values(?, ?, ?, ?, ?, ?, ?, ?) : [[/registry/clusterroles/system:controller:resourcequota-controller 1 0 0 0 0 [107 56 115 0 10 43 10 28 114 98 97 99 46 97 117 116 104 111 114 105 122 97 116 105 111 110 46 107 56 115 46 105 111 47 118 49 18 11 67 108 117 115 116 101 114 82 111 108 101 18 163 4 10 182 3 10 42 115 121 115 116 101 109 58 99 111 110 116 114 111 108 108 101 114 58 114 101 115 111 117 114 99 101 113 117 111 116 97 45 99 111 110 116 114 111 108 108 101 114 18 0 26 0 34 0 42 36 51 99 51 56 49 54 97 55 45 54 49 52 99 45 52 99 54 48 45 57 55 54 49 45 102 100 52 57 49 102 100 52 97 53 53 52 50 0 56 0 66 8 8 140 170 128 153 6 16 0 90 44 10 27 107 117 98 101 114 110 101 116 101 115 46 105 111 47 98 111 111 116 115 116 114 97 112 112 105 110 103 18 13 114 98 97 99 45 100 101 102 97 117 108 116 115 98 51 10 43 114 98 97 99 46 97 117 116 104 111 114 105 122 97 116 105 111 110 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 117 116 111 117 112 100 97 116 101 18 4 116 114 117 101 122 0 138 1 231 1 10 3 107 51 115 18 6 85 112 100 97 116 101 26 28 114 98 97 99 46 97 117 116 104 111 114 105 122 97 116 105 111 110 46 107 56 115 46 105 111 47 118 49 34 8 8 140 170 128 153 6 16 0 50 8 70 105 101 108 100 115 86 49 58 163 1 10 160 1 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 114 98 97 99 46 97 117 116 104 111 114 105 122 97 116 105 111 110 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 117 116 111 117 112 100 97 116 101 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 98 111 111 116 115 116 114 97 112 112 105 110 103 34 58 123 125 125 125 44 34 102 58 114 117 108 101 115 34 58 123 125 125 66 0 18 19 10 4 108 105 115 116 10 5 119 97 116 99 104 18 1 42 26 1 42 18 33 10 6 117 112 100 97 116 101 18 0 26 21 114 101 115 111 117 114 99 101 113 117 111 116 97 115 47 115 116 97 116 117 115 18 48 10 6 99 114 101 97 116 101 10 5 112 97 116 99 104 10 6 117 112 100 97 116 101 18 0 18 13 101 118 101 110 116 115 46 107 56 115 46 105 111 26 6 101 118 101 110 116 115 26 0 34 0] []]]"
13:28:50 test k3s[21472]: I0913 13:28:50.632760 21472 trace.go:205] Trace[1430945124]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.24.4 k3s1 (linux/amd64) kubernetes/c3f830e,audit-id:c004691c-d13e-4912-a2aa-9fb1eeead826,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Sep-2022 13:28:44.556) (total time: 6076ms):
13:28:50 test k3s[21472]: Trace[1430945124]: ---"Object stored in database" 6075ms (13:28:50.632)
13:28:50 test k3s[21472]: Trace[1430945124]: [6.076098764s] [6.076098764s] END
13:28:50 test k3s[21472]: I0913 13:28:50.823573 21472 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
13:28:50 test k3s[21472]: time="2022-09-13T13:28:50 08:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
13:28:50 test k3s[21472]: I0913 13:28:50.854926 21472 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
13:28:50 test k3s[21472]: I0913 13:28:50.924603 21472 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
13:28:50 test k3s[21472]: W0913 13:28:50.929237 21472 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.100.195]
13:28:50 test k3s[21472]: I0913 13:28:50.930035 21472 controller.go:611] quota admission added evaluator for: endpoints
13:28:50 test k3s[21472]: I0913 13:28:50.933659 21472 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Module overlay was already loaded"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Module nf_conntrack was already loaded"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Module br_netfilter was already loaded"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Module iptable_nat was already loaded"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Kube API server is now running"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="ETCD server is now running"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="k3s is up and running"
13:28:51 test systemd[1]: Started Lightweight Kubernetes.
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Waiting for cloud-controller-manager privileges to become available"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Applying CRD addons.k3s.cattle.io"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Applying CRD helmcharts.helm.cattle.io"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
13:28:51 test k3s[21472]: time="2022-09-13T13:28:51 08:00" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Containerd is now running"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=test --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Handling backend connection request [test]"
13:28:52 test k3s[21472]: Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
13:28:52 test k3s[21472]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
13:28:52 test k3s[21472]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
13:28:52 test k3s[21472]: I0913 13:28:52.188744 21472 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
13:28:52 test k3s[21472]: I0913 13:28:52.197600 21472 server.go:395] "Kubelet version" kubeletVersion="v1.24.4 k3s1"
13:28:52 test k3s[21472]: I0913 13:28:52.197621 21472 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
13:28:52 test k3s[21472]: I0913 13:28:52.198692 21472 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
13:28:52 test k3s[21472]: W0913 13:28:52.205621 21472 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:52 test k3s[21472]: E0913 13:28:52.205646 21472 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:52 test k3s[21472]: I0913 13:28:52.213078 21472 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
13:28:52 test k3s[21472]: I0913 13:28:52.213426 21472 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
13:28:52 test k3s[21472]: I0913 13:28:52.213499 21472 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
13:28:52 test k3s[21472]: I0913 13:28:52.213528 21472 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
13:28:52 test k3s[21472]: I0913 13:28:52.213544 21472 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
13:28:52 test k3s[21472]: I0913 13:28:52.213629 21472 state_mem.go:36] "Initialized new in-memory state store"
13:28:52 test k3s[21472]: I0913 13:28:52.217246 21472 kubelet.go:376] "Attempting to sync node with API server"
13:28:52 test k3s[21472]: I0913 13:28:52.217287 21472 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
13:28:52 test k3s[21472]: I0913 13:28:52.217331 21472 kubelet.go:278] "Adding apiserver pod source"
13:28:52 test k3s[21472]: I0913 13:28:52.217352 21472 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
13:28:52 test k3s[21472]: I0913 13:28:52.217943 21472 kuberuntime_manager.go:239] "Container runtime initialized" containerRuntime="containerd" version="v1.6.6-k3s1" apiVersion="v1"
13:28:52 test k3s[21472]: I0913 13:28:52.218317 21472 server.go:1177] "Started kubelet"
13:28:52 test k3s[21472]: I0913 13:28:52.218449 21472 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
13:28:52 test k3s[21472]: I0913 13:28:52.219209 21472 server.go:410] "Adding debug handlers to kubelet server"
13:28:52 test k3s[21472]: I0913 13:28:52.220096 21472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
13:28:52 test k3s[21472]: I0913 13:28:52.220228 21472 volume_manager.go:289] "Starting Kubelet Volume Manager"
13:28:52 test k3s[21472]: I0913 13:28:52.220319 21472 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
13:28:52 test k3s[21472]: E0913 13:28:52.222303 21472 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
13:28:52 test k3s[21472]: E0913 13:28:52.222340 21472 kubelet.go:1298] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
13:28:52 test k3s[21472]: E0913 13:28:52.225730 21472 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"test\" not found" node="test"
13:28:52 test k3s[21472]: I0913 13:28:52.229351 21472 serving.go:355] Generated self-signed cert in-memory
13:28:52 test k3s[21472]: I0913 13:28:52.258001 21472 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv4
13:28:52 test k3s[21472]: I0913 13:28:52.284017 21472 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv6
13:28:52 test k3s[21472]: I0913 13:28:52.284051 21472 status_manager.go:161] "Starting to sync pod status with apiserver"
13:28:52 test k3s[21472]: I0913 13:28:52.284073 21472 kubelet.go:1986] "Starting kubelet main sync loop"
13:28:52 test k3s[21472]: E0913 13:28:52.284119 21472 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
13:28:52 test k3s[21472]: E0913 13:28:52.321211 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: E0913 13:28:52.385224 21472 kubelet.go:2010] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.19.300.tgz"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.19.300.tgz"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Failed to get existing traefik HelmChart" error="helmcharts.helm.cattle.io \"traefik\" not found"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
13:28:52 test k3s[21472]: E0913 13:28:52.422233 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: E0913 13:28:52.522567 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Creating deploy event broadcaster"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Creating svccontroller event broadcaster"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Waiting for control-plane node test startup: nodes \"test\" not found"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Cluster dns configmap has been set successfully"
13:28:52 test k3s[21472]: E0913 13:28:52.585330 21472 kubelet.go:2010] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
13:28:52 test k3s[21472]: E0913 13:28:52.623543 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting apps/v1, Kind=Deployment controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting batch/v1, Kind=Job controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=ConfigMap controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=ServiceAccount controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=Service controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=Pod controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=Endpoints controller"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=Node controller"
13:28:52 test k3s[21472]: E0913 13:28:52.724094 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: E0913 13:28:52.821302 21472 csi_plugin.go:301] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "test" not found
13:28:52 test k3s[21472]: E0913 13:28:52.824428 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: I0913 13:28:52.865181 21472 controllermanager.go:180] Version: v1.24.4 k3s1
13:28:52 test k3s[21472]: I0913 13:28:52.865205 21472 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
13:28:52 test k3s[21472]: I0913 13:28:52.867808 21472 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
13:28:52 test k3s[21472]: I0913 13:28:52.867825 21472 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
13:28:52 test k3s[21472]: I0913 13:28:52.867851 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
13:28:52 test k3s[21472]: I0913 13:28:52.867852 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
13:28:52 test k3s[21472]: I0913 13:28:52.867869 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:52 test k3s[21472]: I0913 13:28:52.867860 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:28:52 test k3s[21472]: I0913 13:28:52.867929 21472 secure_serving.go:210] Serving securely on 127.0.0.1:10257
13:28:52 test k3s[21472]: I0913 13:28:52.867977 21472 tlsconfig.go:240] "Starting DynamicServingCertificateController"
13:28:52 test k3s[21472]: E0913 13:28:52.924860 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:52 test k3s[21472]: time="2022-09-13T13:28:52 08:00" level=info msg="Starting /v1, Kind=Secret controller"
13:28:52 test k3s[21472]: I0913 13:28:52.968760 21472 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
13:28:52 test k3s[21472]: I0913 13:28:52.968794 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:28:52 test k3s[21472]: I0913 13:28:52.968825 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:52 test k3s[21472]: E0913 13:28:52.985458 21472 kubelet.go:2010] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
13:28:53 test k3s[21472]: E0913 13:28:53.029713 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.074587 21472 shared_informer.go:255] Waiting for caches to sync for tokens
13:28:53 test k3s[21472]: I0913 13:28:53.080424 21472 controller.go:611] quota admission added evaluator for: serviceaccounts
13:28:53 test k3s[21472]: I0913 13:28:53.082155 21472 controllermanager.go:593] Started "deployment"
13:28:53 test k3s[21472]: I0913 13:28:53.082195 21472 deployment_controller.go:153] "Starting controller" controller="deployment"
13:28:53 test k3s[21472]: I0913 13:28:53.082209 21472 shared_informer.go:255] Waiting for caches to sync for deployment
13:28:53 test k3s[21472]: I0913 13:28:53.094818 21472 controllermanager.go:593] Started "horizontalpodautoscaling"
13:28:53 test k3s[21472]: I0913 13:28:53.094849 21472 horizontal.go:168] Starting HPA controller
13:28:53 test k3s[21472]: I0913 13:28:53.094861 21472 shared_informer.go:255] Waiting for caches to sync for HPA
13:28:53 test k3s[21472]: I0913 13:28:53.103081 21472 controllermanager.go:593] Started "disruption"
13:28:53 test k3s[21472]: I0913 13:28:53.103208 21472 disruption.go:363] Starting disruption controller
13:28:53 test k3s[21472]: I0913 13:28:53.103219 21472 shared_informer.go:255] Waiting for caches to sync for disruption
13:28:53 test k3s[21472]: I0913 13:28:53.109337 21472 controllermanager.go:593] Started "persistentvolume-expander"
13:28:53 test k3s[21472]: I0913 13:28:53.109455 21472 expand_controller.go:341] Starting expand controller
13:28:53 test k3s[21472]: I0913 13:28:53.109468 21472 shared_informer.go:255] Waiting for caches to sync for expand
13:28:53 test k3s[21472]: I0913 13:28:53.115607 21472 controllermanager.go:593] Started "endpointslicemirroring"
13:28:53 test k3s[21472]: I0913 13:28:53.115740 21472 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
13:28:53 test k3s[21472]: I0913 13:28:53.115750 21472 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
13:28:53 test k3s[21472]: I0913 13:28:53.121698 21472 controllermanager.go:593] Started "podgc"
13:28:53 test k3s[21472]: I0913 13:28:53.121783 21472 gc_controller.go:92] Starting GC controller
13:28:53 test k3s[21472]: I0913 13:28:53.121795 21472 shared_informer.go:255] Waiting for caches to sync for GC
13:28:53 test k3s[21472]: E0913 13:28:53.129791 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.144090 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
13:28:53 test k3s[21472]: I0913 13:28:53.144147 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144205 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
13:28:53 test k3s[21472]: I0913 13:28:53.144237 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144266 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
13:28:53 test k3s[21472]: I0913 13:28:53.144315 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
13:28:53 test k3s[21472]: I0913 13:28:53.144357 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
13:28:53 test k3s[21472]: I0913 13:28:53.144389 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
13:28:53 test k3s[21472]: I0913 13:28:53.144419 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144446 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144473 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144511 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144540 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144561 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
13:28:53 test k3s[21472]: I0913 13:28:53.144587 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
13:28:53 test k3s[21472]: I0913 13:28:53.144654 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
13:28:53 test k3s[21472]: I0913 13:28:53.144679 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
13:28:53 test k3s[21472]: I0913 13:28:53.144710 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
13:28:53 test k3s[21472]: I0913 13:28:53.144749 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
13:28:53 test k3s[21472]: I0913 13:28:53.144783 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
13:28:53 test k3s[21472]: I0913 13:28:53.144818 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
13:28:53 test k3s[21472]: I0913 13:28:53.144843 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
13:28:53 test k3s[21472]: I0913 13:28:53.144866 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
13:28:53 test k3s[21472]: I0913 13:28:53.144886 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
13:28:53 test k3s[21472]: I0913 13:28:53.144910 21472 controllermanager.go:593] Started "resourcequota"
13:28:53 test k3s[21472]: I0913 13:28:53.144936 21472 resource_quota_controller.go:273] Starting resource quota controller
13:28:53 test k3s[21472]: I0913 13:28:53.144949 21472 shared_informer.go:255] Waiting for caches to sync for resource quota
13:28:53 test k3s[21472]: I0913 13:28:53.144975 21472 resource_quota_monitor.go:308] QuotaMonitor running
13:28:53 test k3s[21472]: I0913 13:28:53.151786 21472 controllermanager.go:593] Started "ttl"
13:28:53 test k3s[21472]: I0913 13:28:53.151843 21472 ttl_controller.go:121] Starting TTL controller
13:28:53 test k3s[21472]: I0913 13:28:53.151853 21472 shared_informer.go:255] Waiting for caches to sync for TTL
13:28:53 test k3s[21472]: I0913 13:28:53.175476 21472 shared_informer.go:262] Caches are synced for tokens
13:28:53 test k3s[21472]: time="2022-09-13T13:28:53 08:00" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.195:192.168.100.195 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-test:test listener.cattle.io/fingerprint:SHA1=1F97F3A490AF470DFF1713A67FFA6D76BC9DF7AF]"
13:28:53 test k3s[21472]: I0913 13:28:53.217784 21472 apiserver.go:52] "Watching apiserver"
13:28:53 test k3s[21472]: E0913 13:28:53.230187 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.278361 21472 controllermanager.go:593] Started "endpointslice"
13:28:53 test k3s[21472]: I0913 13:28:53.278405 21472 endpointslice_controller.go:257] Starting endpoint slice controller
13:28:53 test k3s[21472]: I0913 13:28:53.278426 21472 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
13:28:53 test k3s[21472]: E0913 13:28:53.331293 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: W0913 13:28:53.387495 21472 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:53 test k3s[21472]: E0913 13:28:53.387520 21472 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:53 test k3s[21472]: I0913 13:28:53.427780 21472 controllermanager.go:593] Started "daemonset"
13:28:53 test k3s[21472]: I0913 13:28:53.427810 21472 daemon_controller.go:284] Starting daemon sets controller
13:28:53 test k3s[21472]: I0913 13:28:53.427823 21472 shared_informer.go:255] Waiting for caches to sync for daemon sets
13:28:53 test k3s[21472]: E0913 13:28:53.432198 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: E0913 13:28:53.533172 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.578755 21472 controllermanager.go:593] Started "csrcleaner"
13:28:53 test k3s[21472]: W0913 13:28:53.578778 21472 controllermanager.go:558] "cloud-node-lifecycle" is disabled
13:28:53 test k3s[21472]: I0913 13:28:53.578826 21472 cleaner.go:82] Starting CSR cleaner controller
13:28:53 test k3s[21472]: time="2022-09-13T13:28:53 08:00" level=info msg="Active TLS secret kube-system/k3s-serving (ver=217) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.100.195:192.168.100.195 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-test:test listener.cattle.io/fingerprint:SHA1=1F97F3A490AF470DFF1713A67FFA6D76BC9DF7AF]"
13:28:53 test k3s[21472]: E0913 13:28:53.633329 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.728932 21472 controllermanager.go:593] Started "endpoint"
13:28:53 test k3s[21472]: I0913 13:28:53.729500 21472 endpoints_controller.go:178] Starting endpoint controller
13:28:53 test k3s[21472]: I0913 13:28:53.729012 21472 shared_informer.go:255] Waiting for caches to sync for endpoint
13:28:53 test k3s[21472]: E0913 13:28:53.733853 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: E0913 13:28:53.785934 21472 kubelet.go:2010] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
13:28:53 test k3s[21472]: time="2022-09-13T13:28:53 08:00" level=info msg="Waiting for control-plane node test startup: nodes \"test\" not found"
13:28:53 test k3s[21472]: E0913 13:28:53.834607 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:53 test k3s[21472]: I0913 13:28:53.881238 21472 controllermanager.go:593] Started "cronjob"
13:28:53 test k3s[21472]: W0913 13:28:53.881256 21472 controllermanager.go:558] "tokencleaner" is disabled
13:28:53 test k3s[21472]: W0913 13:28:53.881263 21472 controllermanager.go:558] "service" is disabled
13:28:53 test k3s[21472]: I0913 13:28:53.881335 21472 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
13:28:53 test k3s[21472]: I0913 13:28:53.881347 21472 shared_informer.go:255] Waiting for caches to sync for cronjob
13:28:53 test k3s[21472]: E0913 13:28:53.935050 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.029797 21472 controllermanager.go:593] Started "attachdetach"
13:28:54 test k3s[21472]: I0913 13:28:54.029842 21472 attach_detach_controller.go:328] Starting attach detach controller
13:28:54 test k3s[21472]: I0913 13:28:54.029856 21472 shared_informer.go:255] Waiting for caches to sync for attach detach
13:28:54 test k3s[21472]: E0913 13:28:54.036084 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: E0913 13:28:54.137038 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.180203 21472 controllermanager.go:593] Started "root-ca-cert-publisher"
13:28:54 test k3s[21472]: I0913 13:28:54.180254 21472 publisher.go:107] Starting root CA certificate configmap publisher
13:28:54 test k3s[21472]: I0913 13:28:54.180265 21472 shared_informer.go:255] Waiting for caches to sync for crt configmap
13:28:54 test k3s[21472]: E0913 13:28:54.238056 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.329934 21472 controllermanager.go:593] Started "ephemeral-volume"
13:28:54 test k3s[21472]: I0913 13:28:54.329970 21472 controller.go:170] Starting ephemeral volume controller
13:28:54 test k3s[21472]: I0913 13:28:54.329983 21472 shared_informer.go:255] Waiting for caches to sync for ephemeral
13:28:54 test k3s[21472]: E0913 13:28:54.338750 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: E0913 13:28:54.438900 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.477942 21472 controllermanager.go:593] Started "replicaset"
13:28:54 test k3s[21472]: I0913 13:28:54.478002 21472 replica_set.go:205] Starting replicaset controller
13:28:54 test k3s[21472]: I0913 13:28:54.478012 21472 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
13:28:54 test k3s[21472]: I0913 13:28:54.528203 21472 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
13:28:54 test k3s[21472]: I0913 13:28:54.528337 21472 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving"
13:28:54 test k3s[21472]: I0913 13:28:54.528357 21472 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
13:28:54 test k3s[21472]: I0913 13:28:54.528399 21472 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
13:28:54 test k3s[21472]: I0913 13:28:54.528704 21472 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client"
13:28:54 test k3s[21472]: I0913 13:28:54.528720 21472 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
13:28:54 test k3s[21472]: I0913 13:28:54.528728 21472 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
13:28:54 test k3s[21472]: I0913 13:28:54.529015 21472 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client"
13:28:54 test k3s[21472]: I0913 13:28:54.529029 21472 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
13:28:54 test k3s[21472]: I0913 13:28:54.529069 21472 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
13:28:54 test k3s[21472]: I0913 13:28:54.529384 21472 controllermanager.go:593] Started "csrsigning"
13:28:54 test k3s[21472]: I0913 13:28:54.529422 21472 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown"
13:28:54 test k3s[21472]: I0913 13:28:54.529441 21472 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
13:28:54 test k3s[21472]: I0913 13:28:54.529467 21472 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"56e60922-f9b1-4a62-9115-10df455e901d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"236\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
13:28:54 test k3s[21472]: E0913 13:28:54.539303 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"56e60922-f9b1-4a62-9115-10df455e901d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"236\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"30abed4e-d57a-41ed-b311-9b7dd6b8e4c8\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"244\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
13:28:54 test k3s[21472]: E0913 13:28:54.621674 21472 csi_plugin.go:301] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "test" not found
13:28:54 test k3s[21472]: E0913 13:28:54.639643 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.678333 21472 node_ipam_controller.go:91] Sending events to api server.
13:28:54 test k3s[21472]: E0913 13:28:54.739976 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Waiting for control-plane node test startup: nodes \"test\" not found"
13:28:54 test k3s[21472]: E0913 13:28:54.840020 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: E0913 13:28:54.940104 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:54 test k3s[21472]: I0913 13:28:54.944180 21472 controller.go:611] quota admission added evaluator for: deployments.apps
13:28:54 test k3s[21472]: I0913 13:28:54.955383 21472 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.43.0.10]
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"30abed4e-d57a-41ed-b311-9b7dd6b8e4c8\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"244\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
13:28:54 test k3s[21472]: time="2022-09-13T13:28:54 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"93cb4507-b45d-4c5d-99b6-00b1ec4e86a5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"256\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"93cb4507-b45d-4c5d-99b6-00b1ec4e86a5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"256\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"e913af62-6332-4ab2-90d0-82d47ee8f96a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"266\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"e913af62-6332-4ab2-90d0-82d47ee8f96a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"266\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"137879e8-dcbf-4ba3-bcc3-703c01697779\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"271\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"137879e8-dcbf-4ba3-bcc3-703c01697779\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"271\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"2068980c-0cd6-4372-9f13-9a1f0579351d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"276\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
13:28:55 test k3s[21472]: E0913 13:28:55.040166 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"2068980c-0cd6-4372-9f13-9a1f0579351d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"276\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"fa9fc7a9-1206-44b7-8394-7bdc59c2d208\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"281\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"fa9fc7a9-1206-44b7-8394-7bdc59c2d208\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"281\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
13:28:55 test k3s[21472]: E0913 13:28:55.140913 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: I0913 13:28:55.190923 21472 serving.go:355] Generated self-signed cert in-memory
13:28:55 test k3s[21472]: E0913 13:28:55.241226 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: W0913 13:28:55.315267 21472 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:55 test k3s[21472]: E0913 13:28:55.315315 21472 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
13:28:55 test k3s[21472]: E0913 13:28:55.342340 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"b9ddab2c-65c9-4bc1-b783-f63c46d0b820\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"287\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"b9ddab2c-65c9-4bc1-b783-f63c46d0b820\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"287\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
13:28:55 test k3s[21472]: E0913 13:28:55.386478 21472 kubelet.go:2010] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
13:28:55 test k3s[21472]: E0913 13:28:55.443201 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: E0913 13:28:55.543828 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: E0913 13:28:55.644181 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: E0913 13:28:55.745177 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"8a0ae246-ddac-4665-a2de-98ecdd874c63\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"293\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
13:28:55 test k3s[21472]: I0913 13:28:55.769578 21472 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.43.127.184]
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"8a0ae246-ddac-4665-a2de-98ecdd874c63\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"293\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
13:28:55 test k3s[21472]: time="2022-09-13T13:28:55 08:00" level=info msg="Waiting for control-plane node test startup: nodes \"test\" not found"
13:28:55 test k3s[21472]: E0913 13:28:55.845328 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: E0913 13:28:55.946134 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:55 test k3s[21472]: I0913 13:28:55.955877 21472 controllermanager.go:143] Version: v1.24.4+k3s1
13:28:55 test k3s[21472]: I0913 13:28:55.958415 21472 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
13:28:55 test k3s[21472]: I0913 13:28:55.958428 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
13:28:55 test k3s[21472]: I0913 13:28:55.958444 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:55 test k3s[21472]: I0913 13:28:55.958442 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
13:28:55 test k3s[21472]: I0913 13:28:55.958433 21472 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
13:28:55 test k3s[21472]: I0913 13:28:55.958458 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:28:55 test k3s[21472]: I0913 13:28:55.958560 21472 secure_serving.go:210] Serving securely on 127.0.0.1:10258
13:28:55 test k3s[21472]: I0913 13:28:55.958589 21472 tlsconfig.go:240] "Starting DynamicServingCertificateController"
13:28:56 test k3s[21472]: E0913 13:28:56.047228 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: I0913 13:28:56.059103 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:28:56 test k3s[21472]: I0913 13:28:56.059159 21472 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
13:28:56 test k3s[21472]: I0913 13:28:56.059196 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:56 test k3s[21472]: W0913 13:28:56.075366 21472 handler_proxy.go:102] no RequestInfo found in the context
13:28:56 test k3s[21472]: W0913 13:28:56.075395 21472 handler_proxy.go:102] no RequestInfo found in the context
13:28:56 test k3s[21472]: E0913 13:28:56.075411 21472 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
13:28:56 test k3s[21472]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
13:28:56 test k3s[21472]: E0913 13:28:56.075418 21472 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
13:28:56 test k3s[21472]: I0913 13:28:56.075424 21472 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
13:28:56 test k3s[21472]: I0913 13:28:56.077422 21472 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
13:28:56 test k3s[21472]: E0913 13:28:56.148306 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"b0fa3589-f310-4af6-b976-4fe42ad5029a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"b0fa3589-f310-4af6-b976-4fe42ad5029a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
13:28:56 test k3s[21472]: E0913 13:28:56.221150 21472 csi_plugin.go:301] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "test" not found
13:28:56 test k3s[21472]: E0913 13:28:56.248666 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.349546 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.450528 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.551450 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"677acadf-4d2d-4f8d-a842-741cc8220966\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"306\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"677acadf-4d2d-4f8d-a842-741cc8220966\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"306\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
13:28:56 test k3s[21472]: E0913 13:28:56.651684 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.752667 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Waiting for control-plane node test startup: nodes \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.852799 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: E0913 13:28:56.953579 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"2878f3c9-afeb-476e-bbc0-d69dd1739d15\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
13:28:56 test k3s[21472]: I0913 13:28:56.967924 21472 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"314\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"2878f3c9-afeb-476e-bbc0-d69dd1739d15\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
13:28:56 test k3s[21472]: time="2022-09-13T13:28:56 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"316\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:28:56 test k3s[21472]: I0913 13:28:56.986934 21472 controller.go:611] quota admission added evaluator for: jobs.batch
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"326\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"327\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:28:57 test k3s[21472]: E0913 13:28:57.053986 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: E0913 13:28:57.154269 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=test --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
13:28:57 test k3s[21472]: I0913 13:28:57.192819 21472 server.go:231] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
13:28:57 test k3s[21472]: E0913 13:28:57.207058 21472 node.go:152] Failed to retrieve node info: nodes "test" not found
13:28:57 test k3s[21472]: E0913 13:28:57.254852 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: I0913 13:28:57.321929 21472 kubelet_node_status.go:70] "Attempting to register node" node="test"
13:28:57 test k3s[21472]: I0913 13:28:57.322872 21472 cpu_manager.go:213] "Starting CPU manager" policy="none"
13:28:57 test k3s[21472]: I0913 13:28:57.322900 21472 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
13:28:57 test k3s[21472]: I0913 13:28:57.322931 21472 state_mem.go:36] "Initialized new in-memory state store"
13:28:57 test k3s[21472]: E0913 13:28:57.355717 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: I0913 13:28:57.357843 21472 policy_none.go:49] "None policy: Start"
13:28:57 test k3s[21472]: I0913 13:28:57.358496 21472 memory_manager.go:168] "Starting memorymanager" policy="None"
13:28:57 test k3s[21472]: I0913 13:28:57.358531 21472 state_mem.go:35] "Initializing new in-memory state store"
13:28:57 test k3s[21472]: I0913 13:28:57.363917 21472 request.go:601] Waited for 1.048993231s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/autoscaling/v2
13:28:57 test k3s[21472]: I0913 13:28:57.408255 21472 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
13:28:57 test k3s[21472]: I0913 13:28:57.408549 21472 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
13:28:57 test k3s[21472]: E0913 13:28:57.408924 21472 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test\" not found"
13:28:57 test k3s[21472]: E0913 13:28:57.456633 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: E0913 13:28:57.556772 21472 kubelet.go:2424] "Error getting node" err="node \"test\" not found"
13:28:57 test k3s[21472]: I0913 13:28:57.622254 21472 kubelet_node_status.go:73] "Successfully registered node" node="test"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Updated coredns node hosts entry [192.168.100.195 test]"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Annotations and labels have been set successfully on node: test"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Starting flannel with backend vxlan"
13:28:57 test k3s[21472]: time="2022-09-13T13:28:57 08:00" level=info msg="Labels and annotations have been set successfully on node: test"
13:28:57 test k3s[21472]: E0913 13:28:57.964598 21472 controllermanager.go:463] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
13:28:57 test k3s[21472]: I0913 13:28:57.964931 21472 node_controller.go:118] Sending events to api server.
13:28:57 test k3s[21472]: I0913 13:28:57.965020 21472 controllermanager.go:291] Started "cloud-node"
13:28:57 test k3s[21472]: I0913 13:28:57.965142 21472 node_controller.go:157] Waiting for informer caches to sync
13:28:57 test k3s[21472]: I0913 13:28:57.965219 21472 node_lifecycle_controller.go:77] Sending events to api server
13:28:57 test k3s[21472]: I0913 13:28:57.965251 21472 controllermanager.go:291] Started "cloud-node-lifecycle"
13:28:58 test k3s[21472]: I0913 13:28:58.065342 21472 node_controller.go:406] Initializing node test with cloud provider
13:28:58 test k3s[21472]: I0913 13:28:58.070022 21472 node_controller.go:470] Successfully initialized node test with cloud provider
13:28:58 test k3s[21472]: I0913 13:28:58.070088 21472 event.go:294] "Event occurred" object="test" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
13:28:58 test k3s[21472]: I0913 13:28:58.407743 21472 node.go:163] Successfully retrieved node IP: 192.168.100.195
13:28:58 test k3s[21472]: I0913 13:28:58.407789 21472 server_others.go:138] "Detected node IP" address="192.168.100.195"
13:28:58 test k3s[21472]: I0913 13:28:58.414269 21472 server_others.go:206] "Using iptables Proxier"
13:28:58 test k3s[21472]: I0913 13:28:58.414306 21472 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
13:28:58 test k3s[21472]: I0913 13:28:58.414319 21472 server_others.go:214] "Creating dualStackProxier for iptables"
13:28:58 test k3s[21472]: I0913 13:28:58.414341 21472 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
13:28:58 test k3s[21472]: I0913 13:28:58.414367 21472 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
13:28:58 test k3s[21472]: I0913 13:28:58.414488 21472 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
13:28:58 test k3s[21472]: I0913 13:28:58.414683 21472 server.go:661] "Version info" version="v1.24.4 k3s1"
13:28:58 test k3s[21472]: I0913 13:28:58.414697 21472 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
13:28:58 test k3s[21472]: I0913 13:28:58.415093 21472 config.go:317] "Starting service config controller"
13:28:58 test k3s[21472]: I0913 13:28:58.415110 21472 shared_informer.go:255] Waiting for caches to sync for service config
13:28:58 test k3s[21472]: I0913 13:28:58.415115 21472 config.go:226] "Starting endpoint slice config controller"
13:28:58 test k3s[21472]: I0913 13:28:58.415128 21472 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
13:28:58 test k3s[21472]: I0913 13:28:58.415138 21472 config.go:444] "Starting node config controller"
13:28:58 test k3s[21472]: I0913 13:28:58.415146 21472 shared_informer.go:255] Waiting for caches to sync for node config
13:28:58 test k3s[21472]: I0913 13:28:58.416936 21472 controller.go:611] quota admission added evaluator for: events.events.k8s.io
13:28:58 test k3s[21472]: I0913 13:28:58.445900 21472 serving.go:355] Generated self-signed cert in-memory
13:28:58 test k3s[21472]: I0913 13:28:58.515712 21472 shared_informer.go:262] Caches are synced for endpoint slice config
13:28:58 test k3s[21472]: I0913 13:28:58.515761 21472 shared_informer.go:262] Caches are synced for node config
13:28:58 test k3s[21472]: I0913 13:28:58.515778 21472 shared_informer.go:262] Caches are synced for service config
13:28:58 test k3s[21472]: I0913 13:28:58.662257 21472 reconciler.go:159] "Reconciler: start to sync state"
13:28:58 test k3s[21472]: I0913 13:28:58.772996 21472 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4 k3s1"
13:28:58 test k3s[21472]: I0913 13:28:58.773028 21472 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
13:28:58 test k3s[21472]: I0913 13:28:58.775840 21472 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
13:28:58 test k3s[21472]: I0913 13:28:58.775871 21472 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
13:28:58 test k3s[21472]: I0913 13:28:58.775919 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
13:28:58 test k3s[21472]: I0913 13:28:58.775923 21472 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
13:28:58 test k3s[21472]: I0913 13:28:58.778639 21472 secure_serving.go:210] Serving securely on 127.0.0.1:10259
13:28:58 test k3s[21472]: I0913 13:28:58.776013 21472 tlsconfig.go:240] "Starting DynamicServingCertificateController"
13:28:58 test k3s[21472]: I0913 13:28:58.775950 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:58 test k3s[21472]: I0913 13:28:58.775939 21472 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:28:58 test k3s[21472]: I0913 13:28:58.876518 21472 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
13:28:58 test k3s[21472]: I0913 13:28:58.876538 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
13:28:58 test k3s[21472]: I0913 13:28:58.876622 21472 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
13:29:00 test k3s[21472]: I0913 13:29:00.289490 21472 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/a85544c0-e081-4e8f-907e-7c55e2e87c42/volumes"
13:29:00 test k3s[21472]: I0913 13:29:00.289549 21472 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/a92ccbdd-ed1e-4348-8a5d-22a766551bbb/volumes"
13:29:00 test k3s[21472]: I0913 13:29:00.289572 21472 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/8870130a-8216-4035-834b-724e9375b951/volumes"
13:29:00 test k3s[21472]: I0913 13:29:00.289593 21472 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/ddd11994-ace6-4d3f-a535-d149cfccc145/volumes"
13:29:00 test k3s[21472]: I0913 13:29:00.289614 21472 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/185af83a-4310-4a56-945e-f4f5c3502752/volumes"
13:29:00 test k3s[21472]: time="2022-09-13T13:29:00 08:00" level=info msg="Stopped tunnel to 127.0.0.1:6443"
13:29:00 test k3s[21472]: time="2022-09-13T13:29:00 08:00" level=info msg="Connecting to proxy" url="wss://192.168.100.195:6443/v1-k3s/connect"
13:29:00 test k3s[21472]: time="2022-09-13T13:29:00 08:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
13:29:00 test k3s[21472]: time="2022-09-13T13:29:00 08:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
13:29:00 test k3s[21472]: time="2022-09-13T13:29:00 08:00" level=info msg="Handling backend connection request [test]"
13:29:02 test k3s[21472]: I0913 13:29:02.635738 21472 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
13:29:04 test k3s[21472]: I0913 13:29:04.693990 21472 range_allocator.go:83] Sending events to api server.
13:29:04 test k3s[21472]: I0913 13:29:04.694139 21472 range_allocator.go:117] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
13:29:04 test k3s[21472]: I0913 13:29:04.694173 21472 controllermanager.go:593] Started "nodeipam"
13:29:04 test k3s[21472]: I0913 13:29:04.694305 21472 node_ipam_controller.go:154] Starting ipam controller
13:29:04 test k3s[21472]: I0913 13:29:04.694316 21472 shared_informer.go:255] Waiting for caches to sync for node
13:29:04 test k3s[21472]: I0913 13:29:04.700723 21472 controllermanager.go:593] Started "pv-protection"
13:29:04 test k3s[21472]: I0913 13:29:04.700817 21472 pv_protection_controller.go:79] Starting PV protection controller
13:29:04 test k3s[21472]: I0913 13:29:04.700829 21472 shared_informer.go:255] Waiting for caches to sync for PV protection
13:29:04 test k3s[21472]: I0913 13:29:04.707007 21472 controllermanager.go:593] Started "ttl-after-finished"
13:29:04 test k3s[21472]: I0913 13:29:04.707093 21472 ttlafterfinished_controller.go:109] Starting TTL after finished controller
13:29:04 test k3s[21472]: I0913 13:29:04.707104 21472 shared_informer.go:255] Waiting for caches to sync for TTL after finished
13:29:04 test k3s[21472]: I0913 13:29:04.713444 21472 controllermanager.go:593] Started "job"
13:29:04 test k3s[21472]: I0913 13:29:04.713550 21472 job_controller.go:184] Starting job controller
13:29:04 test k3s[21472]: I0913 13:29:04.713562 21472 shared_informer.go:255] Waiting for caches to sync for job
13:29:04 test k3s[21472]: I0913 13:29:04.719548 21472 controllermanager.go:593] Started "clusterrole-aggregation"
13:29:04 test k3s[21472]: I0913 13:29:04.719678 21472 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
13:29:04 test k3s[21472]: I0913 13:29:04.719690 21472 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
13:29:04 test k3s[21472]: I0913 13:29:04.725983 21472 controllermanager.go:593] Started "pvc-protection"
13:29:04 test k3s[21472]: I0913 13:29:04.726110 21472 pvc_protection_controller.go:103] "Starting PVC protection controller"
13:29:04 test k3s[21472]: I0913 13:29:04.726123 21472 shared_informer.go:255] Waiting for caches to sync for PVC protection
13:29:04 test k3s[21472]: I0913 13:29:04.732710 21472 controllermanager.go:593] Started "serviceaccount"
13:29:04 test k3s[21472]: I0913 13:29:04.732742 21472 serviceaccounts_controller.go:117] Starting service account controller
13:29:04 test k3s[21472]: I0913 13:29:04.732754 21472 shared_informer.go:255] Waiting for caches to sync for service account
13:29:04 test k3s[21472]: I0913 13:29:04.734708 21472 controllermanager.go:593] Started "csrapproving"
13:29:04 test k3s[21472]: W0913 13:29:04.734728 21472 controllermanager.go:558] "route" is disabled
13:29:04 test k3s[21472]: I0913 13:29:04.734827 21472 certificate_controller.go:119] Starting certificate controller "csrapproving"
13:29:04 test k3s[21472]: I0913 13:29:04.734842 21472 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
13:29:04 test k3s[21472]: I0913 13:29:04.741066 21472 controllermanager.go:593] Started "persistentvolume-binder"
13:29:04 test k3s[21472]: I0913 13:29:04.741178 21472 pv_controller_base.go:311] Starting persistent volume controller
13:29:04 test k3s[21472]: I0913 13:29:04.741191 21472 shared_informer.go:255] Waiting for caches to sync for persistent volume
13:29:04 test k3s[21472]: I0913 13:29:04.747225 21472 controllermanager.go:593] Started "replicationcontroller"
13:29:04 test k3s[21472]: I0913 13:29:04.747361 21472 replica_set.go:205] Starting replicationcontroller controller
13:29:04 test k3s[21472]: I0913 13:29:04.747372 21472 shared_informer.go:255] Waiting for caches to sync for ReplicationController
13:29:04 test k3s[21472]: E0913 13:29:04.789687 21472 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
13:29:04 test k3s[21472]: I0913 13:29:04.789805 21472 controllermanager.go:593] Started "namespace"
13:29:04 test k3s[21472]: I0913 13:29:04.789870 21472 namespace_controller.go:200] Starting namespace controller
13:29:04 test k3s[21472]: I0913 13:29:04.789883 21472 shared_informer.go:255] Waiting for caches to sync for namespace
13:29:05 test k3s[21472]: I0913 13:29:05.032114 21472 controllermanager.go:593] Started "garbagecollector"
13:29:05 test k3s[21472]: I0913 13:29:05.032306 21472 garbagecollector.go:149] Starting garbage collector controller
13:29:05 test k3s[21472]: I0913 13:29:05.032320 21472 shared_informer.go:255] Waiting for caches to sync for garbage collector
13:29:05 test k3s[21472]: I0913 13:29:05.032347 21472 graph_builder.go:289] GraphBuilder running
13:29:05 test k3s[21472]: W0913 13:29:05.247649 21472 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
13:29:05 test k3s[21472]: I0913 13:29:05.282793 21472 controllermanager.go:593] Started "statefulset"
13:29:05 test k3s[21472]: W0913 13:29:05.282821 21472 controllermanager.go:558] "bootstrapsigner" is disabled
13:29:05 test k3s[21472]: I0913 13:29:05.282884 21472 stateful_set.go:147] Starting stateful set controller
13:29:05 test k3s[21472]: I0913 13:29:05.282899 21472 shared_informer.go:255] Waiting for caches to sync for stateful set
13:29:05 test k3s[21472]: I0913 13:29:05.331943 21472 node_lifecycle_controller.go:377] Sending events to api server.
13:29:05 test k3s[21472]: I0913 13:29:05.332113 21472 taint_manager.go:163] "Sending events to api server"
13:29:05 test k3s[21472]: I0913 13:29:05.332181 21472 node_lifecycle_controller.go:505] Controller will reconcile labels.
13:29:05 test k3s[21472]: I0913 13:29:05.332213 21472 controllermanager.go:593] Started "nodelifecycle"
13:29:05 test k3s[21472]: I0913 13:29:05.332318 21472 node_lifecycle_controller.go:539] Starting node controller
13:29:05 test k3s[21472]: I0913 13:29:05.332334 21472 shared_informer.go:255] Waiting for caches to sync for taint
13:29:05 test k3s[21472]: I0913 13:29:05.336100 21472 shared_informer.go:255] Waiting for caches to sync for resource quota
13:29:05 test k3s[21472]: I0913 13:29:05.341905 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:05 test k3s[21472]: I0913 13:29:05.341963 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:05 test k3s[21472]: W0913 13:29:05.345549 21472 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test" does not exist
13:29:05 test k3s[21472]: I0913 13:29:05.347886 21472 shared_informer.go:262] Caches are synced for ReplicationController
13:29:05 test k3s[21472]: E0913 13:29:05.351208 21472 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
13:29:05 test k3s[21472]: I0913 13:29:05.352372 21472 shared_informer.go:262] Caches are synced for TTL
13:29:05 test k3s[21472]: E0913 13:29:05.353149 21472 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
13:29:05 test k3s[21472]: I0913 13:29:05.355450 21472 shared_informer.go:255] Waiting for caches to sync for garbage collector
13:29:05 test k3s[21472]: I0913 13:29:05.378141 21472 shared_informer.go:262] Caches are synced for ReplicaSet
13:29:05 test k3s[21472]: I0913 13:29:05.381371 21472 shared_informer.go:262] Caches are synced for crt configmap
13:29:05 test k3s[21472]: I0913 13:29:05.381397 21472 shared_informer.go:262] Caches are synced for cronjob
13:29:05 test k3s[21472]: I0913 13:29:05.382317 21472 shared_informer.go:262] Caches are synced for deployment
13:29:05 test k3s[21472]: I0913 13:29:05.383249 21472 shared_informer.go:262] Caches are synced for stateful set
13:29:05 test k3s[21472]: I0913 13:29:05.390491 21472 shared_informer.go:262] Caches are synced for namespace
13:29:05 test k3s[21472]: I0913 13:29:05.394731 21472 shared_informer.go:262] Caches are synced for node
13:29:05 test k3s[21472]: I0913 13:29:05.394768 21472 range_allocator.go:173] Starting range CIDR allocator
13:29:05 test k3s[21472]: I0913 13:29:05.394781 21472 shared_informer.go:255] Waiting for caches to sync for cidrallocator
13:29:05 test k3s[21472]: I0913 13:29:05.394796 21472 shared_informer.go:262] Caches are synced for cidrallocator
13:29:05 test k3s[21472]: I0913 13:29:05.395354 21472 shared_informer.go:262] Caches are synced for HPA
13:29:05 test k3s[21472]: I0913 13:29:05.399889 21472 range_allocator.go:374] Set node test PodCIDR to [10.42.0.0/24]
13:29:05 test k3s[21472]: time="2022-09-13T13:29:05 08:00" level=info msg="Flannel found PodCIDR assigned for node test"
13:29:05 test k3s[21472]: time="2022-09-13T13:29:05 08:00" level=info msg="The interface ens192 with ipv4 address 192.168.100.195 will be used by flannel"
13:29:05 test k3s[21472]: I0913 13:29:05.401140 21472 shared_informer.go:262] Caches are synced for PV protection
13:29:05 test k3s[21472]: I0913 13:29:05.402032 21472 kube.go:121] Waiting 10m0s for node controller to sync
13:29:05 test k3s[21472]: I0913 13:29:05.402068 21472 kube.go:402] Starting kube subnet manager
13:29:05 test k3s[21472]: I0913 13:29:05.402959 21472 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
13:29:05 test k3s[21472]: I0913 13:29:05.403404 21472 shared_informer.go:262] Caches are synced for disruption
13:29:05 test k3s[21472]: I0913 13:29:05.403417 21472 disruption.go:371] Sending events to api server.
13:29:05 test k3s[21472]: I0913 13:29:05.403717 21472 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
13:29:05 test k3s[21472]: I0913 13:29:05.407370 21472 shared_informer.go:262] Caches are synced for TTL after finished
13:29:05 test k3s[21472]: I0913 13:29:05.409598 21472 shared_informer.go:262] Caches are synced for expand
13:29:05 test k3s[21472]: I0913 13:29:05.413803 21472 shared_informer.go:262] Caches are synced for job
13:29:05 test k3s[21472]: I0913 13:29:05.419823 21472 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
13:29:05 test k3s[21472]: I0913 13:29:05.421974 21472 shared_informer.go:262] Caches are synced for GC
13:29:05 test k3s[21472]: I0913 13:29:05.426350 21472 shared_informer.go:262] Caches are synced for PVC protection
13:29:05 test k3s[21472]: I0913 13:29:05.428572 21472 shared_informer.go:262] Caches are synced for daemon sets
13:29:05 test k3s[21472]: I0913 13:29:05.428620 21472 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
13:29:05 test k3s[21472]: I0913 13:29:05.429719 21472 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
13:29:05 test k3s[21472]: I0913 13:29:05.429778 21472 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
13:29:05 test k3s[21472]: I0913 13:29:05.429795 21472 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
13:29:05 test k3s[21472]: I0913 13:29:05.429929 21472 shared_informer.go:262] Caches are synced for attach detach
13:29:05 test k3s[21472]: I0913 13:29:05.430022 21472 shared_informer.go:262] Caches are synced for ephemeral
13:29:05 test k3s[21472]: I0913 13:29:05.433099 21472 shared_informer.go:262] Caches are synced for service account
13:29:05 test k3s[21472]: I0913 13:29:05.435298 21472 shared_informer.go:262] Caches are synced for certificate-csrapproving
13:29:05 test k3s[21472]: I0913 13:29:05.441925 21472 shared_informer.go:262] Caches are synced for persistent volume
13:29:05 test k3s[21472]: I0913 13:29:05.479452 21472 shared_informer.go:262] Caches are synced for endpoint_slice
13:29:05 test k3s[21472]: time="2022-09-13T13:29:05 08:00" level=info msg="Starting the netpol controller"
13:29:05 test k3s[21472]: I0913 13:29:05.506919 21472 network_policy_controller.go:162] Starting network policy controller
13:29:05 test k3s[21472]: I0913 13:29:05.516304 21472 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
13:29:05 test k3s[21472]: I0913 13:29:05.529783 21472 shared_informer.go:262] Caches are synced for endpoint
13:29:05 test k3s[21472]: I0913 13:29:05.534808 21472 network_policy_controller.go:174] Starting network policy controller full sync goroutine
13:29:05 test k3s[21472]: I0913 13:29:05.632748 21472 shared_informer.go:262] Caches are synced for taint
13:29:05 test k3s[21472]: I0913 13:29:05.632795 21472 taint_manager.go:187] "Starting NoExecuteTaintManager"
13:29:05 test k3s[21472]: I0913 13:29:05.632834 21472 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
13:29:05 test k3s[21472]: W0913 13:29:05.632897 21472 node_lifecycle_controller.go:1014] Missing timestamp for Node test. Assuming now as a timestamp.
13:29:05 test k3s[21472]: I0913 13:29:05.632931 21472 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
13:29:05 test k3s[21472]: I0913 13:29:05.632981 21472 event.go:294] "Event occurred" object="test" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test event: Registered Node test in Controller"
13:29:05 test k3s[21472]: I0913 13:29:05.636329 21472 shared_informer.go:262] Caches are synced for resource quota
13:29:05 test k3s[21472]: I0913 13:29:05.645822 21472 shared_informer.go:262] Caches are synced for resource quota
13:29:05 test k3s[21472]: I0913 13:29:05.886870 21472 controller.go:611] quota admission added evaluator for: replicasets.apps
13:29:05 test k3s[21472]: I0913 13:29:05.889888 21472 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-b96499967 to 1"
13:29:05 test k3s[21472]: I0913 13:29:05.893117 21472 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-668d979685 to 1"
13:29:05 test k3s[21472]: I0913 13:29:05.893144 21472 event.go:294] "Event occurred" object="kube-system/local-path-provisioner" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7b7dc8d6f5 to 1"
13:29:05 test k3s[21472]: I0913 13:29:05.946406 21472 event.go:294] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-n2f6q"
13:29:05 test k3s[21472]: I0913 13:29:05.946516 21472 event.go:294] "Event occurred" object="kube-system/helm-install-traefik-crd" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-crd-mmbqk"
13:29:05 test k3s[21472]: I0913 13:29:05.947100 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:05 test k3s[21472]: I0913 13:29:05.947199 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:05 test k3s[21472]: I0913 13:29:05.953347 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:05 test k3s[21472]: I0913 13:29:05.955930 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:05 test k3s[21472]: I0913 13:29:05.957615 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:05 test k3s[21472]: I0913 13:29:05.957858 21472 topology_manager.go:200] "Topology Admit Handler"
13:29:05 test k3s[21472]: I0913 13:29:05.958017 21472 topology_manager.go:200] "Topology Admit Handler"
13:29:05 test k3s[21472]: I0913 13:29:05.958400 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:05 test k3s[21472]: time="2022-09-13T13:29:05+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:29:05 test k3s[21472]: time="2022-09-13T13:29:05+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:29:05 test k3s[21472]: I0913 13:29:05.982084 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:05 test k3s[21472]: I0913 13:29:05.990946 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:06 test k3s[21472]: I0913 13:29:06.003378 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flt52\" (UniqueName: \"kubernetes.io/projected/229a248f-bb6e-4d71-aa15-2f55d817aee2-kube-api-access-flt52\") pod \"helm-install-traefik-n2f6q\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") " pod="kube-system/helm-install-traefik-n2f6q"
13:29:06 test k3s[21472]: I0913 13:29:06.003426 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-values\") pod \"helm-install-traefik-crd-mmbqk\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") " pod="kube-system/helm-install-traefik-crd-mmbqk"
13:29:06 test k3s[21472]: I0913 13:29:06.003459 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-content\") pod \"helm-install-traefik-crd-mmbqk\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") " pod="kube-system/helm-install-traefik-crd-mmbqk"
13:29:06 test k3s[21472]: I0913 13:29:06.003522 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvg9\" (UniqueName: \"kubernetes.io/projected/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-kube-api-access-tpvg9\") pod \"helm-install-traefik-crd-mmbqk\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") " pod="kube-system/helm-install-traefik-crd-mmbqk"
13:29:06 test k3s[21472]: I0913 13:29:06.003563 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-values\") pod \"helm-install-traefik-n2f6q\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") " pod="kube-system/helm-install-traefik-n2f6q"
13:29:06 test k3s[21472]: I0913 13:29:06.003603 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-content\") pod \"helm-install-traefik-n2f6q\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") " pod="kube-system/helm-install-traefik-n2f6q"
13:29:06 test k3s[21472]: I0913 13:29:06.056556 21472 shared_informer.go:262] Caches are synced for garbage collector
13:29:06 test k3s[21472]: I0913 13:29:06.132433 21472 shared_informer.go:262] Caches are synced for garbage collector
13:29:06 test k3s[21472]: I0913 13:29:06.132454 21472 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
13:29:06 test k3s[21472]: I0913 13:29:06.240082 21472 event.go:294] "Event occurred" object="kube-system/local-path-provisioner-7b7dc8d6f5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7b7dc8d6f5-7q5x6"
13:29:06 test k3s[21472]: I0913 13:29:06.244940 21472 event.go:294] "Event occurred" object="kube-system/metrics-server-668d979685" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-668d979685-2x9ld"
13:29:06 test k3s[21472]: I0913 13:29:06.247124 21472 event.go:294] "Event occurred" object="kube-system/coredns-b96499967" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-b96499967-zcgn5"
13:29:06 test k3s[21472]: I0913 13:29:06.259237 21472 topology_manager.go:200] "Topology Admit Handler"
13:29:06 test k3s[21472]: I0913 13:29:06.259380 21472 topology_manager.go:200] "Topology Admit Handler"
13:29:06 test k3s[21472]: I0913 13:29:06.259752 21472 topology_manager.go:200] "Topology Admit Handler"
13:29:06 test k3s[21472]: I0913 13:29:06.305210 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399-custom-config-volume\") pod \"coredns-b96499967-zcgn5\" (UID: \"c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399\") " pod="kube-system/coredns-b96499967-zcgn5"
13:29:06 test k3s[21472]: I0913 13:29:06.305254 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmvb\" (UniqueName: \"kubernetes.io/projected/c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399-kube-api-access-gmmvb\") pod \"coredns-b96499967-zcgn5\" (UID: \"c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399\") " pod="kube-system/coredns-b96499967-zcgn5"
13:29:06 test k3s[21472]: I0913 13:29:06.305369 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6cr\" (UniqueName: \"kubernetes.io/projected/63bbedb4-99d8-4eec-8e94-b82d281a7e88-kube-api-access-fv6cr\") pod \"metrics-server-668d979685-2x9ld\" (UID: \"63bbedb4-99d8-4eec-8e94-b82d281a7e88\") " pod="kube-system/metrics-server-668d979685-2x9ld"
13:29:06 test k3s[21472]: I0913 13:29:06.305423 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399-config-volume\") pod \"coredns-b96499967-zcgn5\" (UID: \"c67dfbb3-1bb8-4467-aea6-1f7ccdcc1399\") " pod="kube-system/coredns-b96499967-zcgn5"
13:29:06 test k3s[21472]: I0913 13:29:06.305482 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/63bbedb4-99d8-4eec-8e94-b82d281a7e88-tmp-dir\") pod \"metrics-server-668d979685-2x9ld\" (UID: \"63bbedb4-99d8-4eec-8e94-b82d281a7e88\") " pod="kube-system/metrics-server-668d979685-2x9ld"
13:29:06 test k3s[21472]: I0913 13:29:06.305522 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0e440ab-b6bc-46fc-826d-a78b5d066a9b-config-volume\") pod \"local-path-provisioner-7b7dc8d6f5-7q5x6\" (UID: \"b0e440ab-b6bc-46fc-826d-a78b5d066a9b\") " pod="kube-system/local-path-provisioner-7b7dc8d6f5-7q5x6"
13:29:06 test k3s[21472]: I0913 13:29:06.305572 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77s4z\" (UniqueName: \"kubernetes.io/projected/b0e440ab-b6bc-46fc-826d-a78b5d066a9b-kube-api-access-77s4z\") pod \"local-path-provisioner-7b7dc8d6f5-7q5x6\" (UID: \"b0e440ab-b6bc-46fc-826d-a78b5d066a9b\") " pod="kube-system/local-path-provisioner-7b7dc8d6f5-7q5x6"
13:29:06 test k3s[21472]: I0913 13:29:06.402477 21472 kube.go:128] Node controller sync successful
13:29:06 test k3s[21472]: I0913 13:29:06.402572 21472 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
13:29:06 test k3s[21472]: I0913 13:29:06.410477 21472 kube.go:357] Skip setting NodeNetworkUnavailable
13:29:06 test k3s[21472]: time="2022-09-13T13:29:06+08:00" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
13:29:06 test k3s[21472]: time="2022-09-13T13:29:06+08:00" level=info msg="Running flannel backend."
13:29:06 test k3s[21472]: I0913 13:29:06.412101 21472 vxlan_network.go:61] watching for new subnet leases
13:29:06 test k3s[21472]: I0913 13:29:06.418335 21472 iptables.go:177] bootstrap done
13:29:06 test k3s[21472]: I0913 13:29:06.421881 21472 iptables.go:177] bootstrap done
13:29:07 test k3s[21472]: W0913 13:29:07.141871 21472 handler_proxy.go:102] no RequestInfo found in the context
13:29:07 test k3s[21472]: W0913 13:29:07.141875 21472 handler_proxy.go:102] no RequestInfo found in the context
13:29:07 test k3s[21472]: E0913 13:29:07.141919 21472 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
13:29:07 test k3s[21472]: E0913 13:29:07.141977 21472 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
13:29:07 test k3s[21472]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
13:29:07 test k3s[21472]: I0913 13:29:07.141988 21472 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
13:29:07 test k3s[21472]: I0913 13:29:07.143081 21472 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
13:29:07 test k3s[21472]: I0913 13:29:07.586382 21472 request.go:601] Waited for 1.152933547s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token
13:29:35 test k3s[21472]: E0913 13:29:35.650071 21472 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
13:29:36 test k3s[21472]: W0913 13:29:36.071247 21472 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
13:29:52 test k3s[21472]: E0913 13:29:52.229742 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1fbd466c1f862d95ab4303ca48e48a985d60531d8c81631f14c8447d159c5b0\": not found" containerID="d1fbd466c1f862d95ab4303ca48e48a985d60531d8c81631f14c8447d159c5b0"
13:29:52 test k3s[21472]: I0913 13:29:52.229795 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d1fbd466c1f862d95ab4303ca48e48a985d60531d8c81631f14c8447d159c5b0" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1fbd466c1f862d95ab4303ca48e48a985d60531d8c81631f14c8447d159c5b0\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.230396 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c874c63b54075f06e023770d5844c8367c65c4f22ec4039f0cdccfa7b2a9b658\": not found" containerID="c874c63b54075f06e023770d5844c8367c65c4f22ec4039f0cdccfa7b2a9b658"
13:29:52 test k3s[21472]: I0913 13:29:52.230429 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c874c63b54075f06e023770d5844c8367c65c4f22ec4039f0cdccfa7b2a9b658" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c874c63b54075f06e023770d5844c8367c65c4f22ec4039f0cdccfa7b2a9b658\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.230805 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d1a38138ba0b63fde2bdfef7836b219d9cb3748c74cd1cb6eccdea4345d4c2b\": not found" containerID="1d1a38138ba0b63fde2bdfef7836b219d9cb3748c74cd1cb6eccdea4345d4c2b"
13:29:52 test k3s[21472]: I0913 13:29:52.230841 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1d1a38138ba0b63fde2bdfef7836b219d9cb3748c74cd1cb6eccdea4345d4c2b" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d1a38138ba0b63fde2bdfef7836b219d9cb3748c74cd1cb6eccdea4345d4c2b\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.231227 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c307e2f4a67aa70a47d5a2d84c77c66d80384b60b1405a8b34f9598daf51ba68\": not found" containerID="c307e2f4a67aa70a47d5a2d84c77c66d80384b60b1405a8b34f9598daf51ba68"
13:29:52 test k3s[21472]: I0913 13:29:52.231255 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c307e2f4a67aa70a47d5a2d84c77c66d80384b60b1405a8b34f9598daf51ba68" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c307e2f4a67aa70a47d5a2d84c77c66d80384b60b1405a8b34f9598daf51ba68\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.231708 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c61f32f1b80faca73e0f78910543083fe13a0e86dbf94b592e2a5c52e01e8fa0\": not found" containerID="c61f32f1b80faca73e0f78910543083fe13a0e86dbf94b592e2a5c52e01e8fa0"
13:29:52 test k3s[21472]: I0913 13:29:52.231745 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c61f32f1b80faca73e0f78910543083fe13a0e86dbf94b592e2a5c52e01e8fa0" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c61f32f1b80faca73e0f78910543083fe13a0e86dbf94b592e2a5c52e01e8fa0\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.232132 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9891ede20cfba2eb34e5c7bce8590f7165c38c82b3f4e9209a0e751fa0cd5fba\": not found" containerID="9891ede20cfba2eb34e5c7bce8590f7165c38c82b3f4e9209a0e751fa0cd5fba"
13:29:52 test k3s[21472]: I0913 13:29:52.232159 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9891ede20cfba2eb34e5c7bce8590f7165c38c82b3f4e9209a0e751fa0cd5fba" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9891ede20cfba2eb34e5c7bce8590f7165c38c82b3f4e9209a0e751fa0cd5fba\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.232544 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5698eb4653beb8efa90ee2942253397ddc381b5e354c61e496445dee909677a5\": not found" containerID="5698eb4653beb8efa90ee2942253397ddc381b5e354c61e496445dee909677a5"
13:29:52 test k3s[21472]: I0913 13:29:52.232571 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="5698eb4653beb8efa90ee2942253397ddc381b5e354c61e496445dee909677a5" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5698eb4653beb8efa90ee2942253397ddc381b5e354c61e496445dee909677a5\": not found"
13:29:52 test k3s[21472]: E0913 13:29:52.232947 21472 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc83f7e0f4f6cc87d4f2ff33007bc026e4622f5e1f9604674136e2cd84c6b9e\": not found" containerID="2dc83f7e0f4f6cc87d4f2ff33007bc026e4622f5e1f9604674136e2cd84c6b9e"
13:29:52 test k3s[21472]: I0913 13:29:52.232975 21472 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2dc83f7e0f4f6cc87d4f2ff33007bc026e4622f5e1f9604674136e2cd84c6b9e" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dc83f7e0f4f6cc87d4f2ff33007bc026e4622f5e1f9604674136e2cd84c6b9e\": not found"
13:29:56 test k3s[21472]: I0913 13:29:56.419463 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:56 test k3s[21472]: I0913 13:29:56.434649 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:57 test k3s[21472]: I0913 13:29:57.430414 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:29:57 test k3s[21472]: time="2022-09-13T13:29:57+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:29:57 test k3s[21472]: I0913 13:29:57.444052 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:29:57 test k3s[21472]: time="2022-09-13T13:29:57+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:30:25 test k3s[21472]: time="2022-09-13T13:30:25+08:00" level=info msg="Slow SQL (started: 2022-09-13 13:30:22.1882423 +0800 CST m=+103.133640745) (total time: 2.946084317s): INSERT INTO kine(name, created, deleted, create_revision, prev_revision, lease, value, old_value) values(?, ?, ?, ?, ?, ?, ?, ?) : [[/registry/apiextensions.k8s.io/customresourcedefinitions/tlsstores.traefik.containo.us 0 0 527 543 0 [... value and old_value byte arrays (serialized tlsstores.traefik.containo.us CustomResourceDefinition JSON) trimmed ...]]]"
13:30:25 test k3s[21472]: I0913 13:30:25.136391 21472 trace.go:205] Trace[1564060424]: "GuaranteedUpdate etcd3" type:*apiextensions.CustomResourceDefinition (13-Sep-2022 13:30:22.185) (total time: 2950ms):
13:30:25 test k3s[21472]: Trace[1564060424]: ---"Transaction committed" 2948ms (13:30:25.136)
13:30:25 test k3s[21472]: Trace[1564060424]: [2.950407536s] [2.950407536s] END
13:30:25 test k3s[21472]: I0913 13:30:25.136542 21472 trace.go:205] Trace[564045123]: "Update" url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/tlsstores.traefik.containo.us/status,user-agent:k3s/v1.24.4 k3s1 (linux/amd64) kubernetes/c3f830e,audit-id:40e0802d-4d01-44bf-aebc-d09def4d942c,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Sep-2022 13:30:22.185) (total time: 2950ms):
13:30:25 test k3s[21472]: Trace[564045123]: ---"Object stored in database" 2950ms (13:30:25.136)
13:30:25 test k3s[21472]: Trace[564045123]: [2.950747883s] [2.950747883s] END
13:30:26 test k3s[21472]: I0913 13:30:26.468703 21472 scope.go:110] "RemoveContainer" containerID="3e6d77630f428b6b1bf4fe761d48f1ae9c852a5c629b4a742b61e306c23090a0"
13:30:26 test k3s[21472]: I0913 13:30:26.487040 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:26 test k3s[21472]: I0913 13:30:26.495633 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:27 test k3s[21472]: I0913 13:30:27.488285 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:27 test k3s[21472]: I0913 13:30:27.492922 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:27 test k3s[21472]: I0913 13:30:27.497394 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:27 test k3s[21472]: time="2022-09-13T13:30:27+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:30:27 test k3s[21472]: I0913 13:30:27.500605 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:27 test k3s[21472]: time="2022-09-13T13:30:27+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:30:27 test k3s[21472]: I0913 13:30:27.803879 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvg9\" (UniqueName: \"kubernetes.io/projected/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-kube-api-access-tpvg9\") pod \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") "
13:30:27 test k3s[21472]: I0913 13:30:27.803936 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-values\") pod \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") "
13:30:27 test k3s[21472]: I0913 13:30:27.803966 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-content\") pod \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\" (UID: \"6ff1e9c7-11c7-4d69-9b08-827770cfbdcc\") "
13:30:27 test k3s[21472]: W0913 13:30:27.804228 21472 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled
13:30:27 test k3s[21472]: I0913 13:30:27.804592 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-content" (OuterVolumeSpecName: "content") pod "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc" (UID: "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""
13:30:27 test k3s[21472]: W0913 13:30:27.804803 21472 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc/volumes/kubernetes.io~configmap/values: clearQuota called, but quotas disabled
13:30:27 test k3s[21472]: I0913 13:30:27.804939 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-values" (OuterVolumeSpecName: "values") pod "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc" (UID: "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue ""
13:30:27 test k3s[21472]: I0913 13:30:27.812581 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-kube-api-access-tpvg9" (OuterVolumeSpecName: "kube-api-access-tpvg9") pod "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc" (UID: "6ff1e9c7-11c7-4d69-9b08-827770cfbdcc"). InnerVolumeSpecName "kube-api-access-tpvg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
13:30:27 test k3s[21472]: I0913 13:30:27.904190 21472 reconciler.go:384] "Volume detached for volume \"content\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-content\") on node \"test\" DevicePath \"\""
13:30:27 test k3s[21472]: I0913 13:30:27.904317 21472 reconciler.go:384] "Volume detached for volume \"values\" (UniqueName: \"kubernetes.io/configmap/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-values\") on node \"test\" DevicePath \"\""
13:30:27 test k3s[21472]: I0913 13:30:27.904368 21472 reconciler.go:384] "Volume detached for volume \"kube-api-access-tpvg9\" (UniqueName: \"kubernetes.io/projected/6ff1e9c7-11c7-4d69-9b08-827770cfbdcc-kube-api-access-tpvg9\") on node \"test\" DevicePath \"\""
13:30:28 test k3s[21472]: I0913 13:30:28.478417 21472 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="129334c456192a24de7e55485c896b38a6bd46dab0b18389523c4632a50d14af"
13:30:28 test k3s[21472]: I0913 13:30:28.492880 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:29 test k3s[21472]: I0913 13:30:29.502172 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:29 test k3s[21472]: I0913 13:30:29.507265 21472 event.go:294] "Event occurred" object="kube-system/helm-install-traefik-crd" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
13:30:29 test k3s[21472]: I0913 13:30:29.507545 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:29 test k3s[21472]: time="2022-09-13T13:30:29+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:30:29 test k3s[21472]: I0913 13:30:29.516965 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
13:30:29 test k3s[21472]: E0913 13:30:29.524570 21472 job_controller.go:533] syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch "helm-install-traefik-crd": the object has been modified; please apply your changes to the latest version and try again
13:30:29 test k3s[21472]: time="2022-09-13T13:30:29+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"c67dd1f0-a0c2-4784-ba15-f4d864330ee1\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
13:30:30 test k3s[21472]: I0913 13:30:30.071905 21472 event.go:294] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set traefik-7cd4fcff68 to 1"
13:30:30 test k3s[21472]: I0913 13:30:30.082808 21472 alloc.go:327] "allocated clusterIPs" service="kube-system/traefik" clusterIPs=map[IPv4:10.43.195.239]
13:30:30 test k3s[21472]: I0913 13:30:30.086010 21472 event.go:294] "Event occurred" object="kube-system/traefik-7cd4fcff68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: traefik-7cd4fcff68-5jvm5"
13:30:30 test k3s[21472]: I0913 13:30:30.094130 21472 topology_manager.go:200] "Topology Admit Handler"
13:30:30 test k3s[21472]: E0913 13:30:30.094178 21472 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6ff1e9c7-11c7-4d69-9b08-827770cfbdcc" containerName="helm"
13:30:30 test k3s[21472]: I0913 13:30:30.094209 21472 memory_manager.go:345] "RemoveStaleState removing state" podUID="6ff1e9c7-11c7-4d69-9b08-827770cfbdcc" containerName="helm"
13:30:30 test k3s[21472]: I0913 13:30:30.094725 21472 controller.go:611] quota admission added evaluator for: daemonsets.apps
13:30:30 test k3s[21472]: time="2022-09-13T13:30:30+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"580\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:30 test k3s[21472]: I0913 13:30:30.103603 21472 event.go:294] "Event occurred" object="traefik" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/traefik: endpoints \"traefik\" already exists"
13:30:30 test k3s[21472]: time="2022-09-13T13:30:30+08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"580\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:30 test k3s[21472]: I0913 13:30:30.112199 21472 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
13:30:30 test k3s[21472]: I0913 13:30:30.129140 21472 event.go:294] "Event occurred" object="kube-system/svclb-traefik-14536c6f" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: svclb-traefik-14536c6f-96sxm"
13:30:30 test k3s[21472]: I0913 13:30:30.136734 21472 topology_manager.go:200] "Topology Admit Handler"
13:30:30 test k3s[21472]: I0913 13:30:30.142262 21472 controller.go:611] quota admission added evaluator for: ingressroutes.traefik.containo.us
13:30:30 test k3s[21472]: I0913 13:30:30.217440 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b8bab962-5ac7-4090-8b83-5a7e0ab19c3b-data\") pod \"traefik-7cd4fcff68-5jvm5\" (UID: \"b8bab962-5ac7-4090-8b83-5a7e0ab19c3b\") " pod="kube-system/traefik-7cd4fcff68-5jvm5"
13:30:30 test k3s[21472]: I0913 13:30:30.217487 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8bab962-5ac7-4090-8b83-5a7e0ab19c3b-tmp\") pod \"traefik-7cd4fcff68-5jvm5\" (UID: \"b8bab962-5ac7-4090-8b83-5a7e0ab19c3b\") " pod="kube-system/traefik-7cd4fcff68-5jvm5"
13:30:30 test k3s[21472]: I0913 13:30:30.217525 21472 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbln\" (UniqueName: \"kubernetes.io/projected/b8bab962-5ac7-4090-8b83-5a7e0ab19c3b-kube-api-access-ghbln\") pod \"traefik-7cd4fcff68-5jvm5\" (UID: \"b8bab962-5ac7-4090-8b83-5a7e0ab19c3b\") " pod="kube-system/traefik-7cd4fcff68-5jvm5"
13:30:30 test k3s[21472]: I0913 13:30:30.482683 21472 scope.go:110] "RemoveContainer" containerID="3e6d77630f428b6b1bf4fe761d48f1ae9c852a5c629b4a742b61e306c23090a0"
13:30:30 test k3s[21472]: I0913 13:30:30.493595 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:31 test k3s[21472]: I0913 13:30:31.499330 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:31 test k3s[21472]: I0913 13:30:31.501463 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:31 test k3s[21472]: time="2022-09-13T13:30:31 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:30:31 test k3s[21472]: I0913 13:30:31.825315 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flt52\" (UniqueName: \"kubernetes.io/projected/229a248f-bb6e-4d71-aa15-2f55d817aee2-kube-api-access-flt52\") pod \"229a248f-bb6e-4d71-aa15-2f55d817aee2\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") "
13:30:31 test k3s[21472]: I0913 13:30:31.825373 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-values\") pod \"229a248f-bb6e-4d71-aa15-2f55d817aee2\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") "
13:30:31 test k3s[21472]: I0913 13:30:31.825404 21472 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-content\") pod \"229a248f-bb6e-4d71-aa15-2f55d817aee2\" (UID: \"229a248f-bb6e-4d71-aa15-2f55d817aee2\") "
13:30:31 test k3s[21472]: W0913 13:30:31.825673 21472 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/229a248f-bb6e-4d71-aa15-2f55d817aee2/volumes/kubernetes.io~configmap/values: clearQuota called, but quotas disabled
13:30:31 test k3s[21472]: I0913 13:30:31.825883 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-values" (OuterVolumeSpecName: "values") pod "229a248f-bb6e-4d71-aa15-2f55d817aee2" (UID: "229a248f-bb6e-4d71-aa15-2f55d817aee2"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue ""
13:30:31 test k3s[21472]: W0913 13:30:31.826021 21472 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/229a248f-bb6e-4d71-aa15-2f55d817aee2/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled
13:30:31 test k3s[21472]: I0913 13:30:31.826161 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-content" (OuterVolumeSpecName: "content") pod "229a248f-bb6e-4d71-aa15-2f55d817aee2" (UID: "229a248f-bb6e-4d71-aa15-2f55d817aee2"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""
13:30:31 test k3s[21472]: I0913 13:30:31.833901 21472 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229a248f-bb6e-4d71-aa15-2f55d817aee2-kube-api-access-flt52" (OuterVolumeSpecName: "kube-api-access-flt52") pod "229a248f-bb6e-4d71-aa15-2f55d817aee2" (UID: "229a248f-bb6e-4d71-aa15-2f55d817aee2"). InnerVolumeSpecName "kube-api-access-flt52". PluginName "kubernetes.io/projected", VolumeGidValue ""
13:30:31 test k3s[21472]: I0913 13:30:31.926217 21472 reconciler.go:384] "Volume detached for volume \"content\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-content\") on node \"test\" DevicePath \"\""
13:30:31 test k3s[21472]: I0913 13:30:31.926250 21472 reconciler.go:384] "Volume detached for volume \"kube-api-access-flt52\" (UniqueName: \"kubernetes.io/projected/229a248f-bb6e-4d71-aa15-2f55d817aee2-kube-api-access-flt52\") on node \"test\" DevicePath \"\""
13:30:31 test k3s[21472]: I0913 13:30:31.926265 21472 reconciler.go:384] "Volume detached for volume \"values\" (UniqueName: \"kubernetes.io/configmap/229a248f-bb6e-4d71-aa15-2f55d817aee2-values\") on node \"test\" DevicePath \"\""
13:30:32 test k3s[21472]: I0913 13:30:32.297838 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:32 test k3s[21472]: I0913 13:30:32.489753 21472 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="852c30cdadd7c62d64c6266caebbf39f00e1deeb88a530fe7c4bb958ac1524c5"
13:30:32 test k3s[21472]: I0913 13:30:32.507325 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:32 test k3s[21472]: I0913 13:30:32.513535 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:32 test k3s[21472]: I0913 13:30:32.513601 21472 event.go:294] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
13:30:32 test k3s[21472]: time="2022-09-13T13:30:32 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:30:32 test k3s[21472]: I0913 13:30:32.519530 21472 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
13:30:32 test k3s[21472]: E0913 13:30:32.522908 21472 job_controller.go:533] syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch "helm-install-traefik": the object has been modified; please apply your changes to the latest version and try again
13:30:32 test k3s[21472]: time="2022-09-13T13:30:32 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"f338e750-521b-4432-a34e-400bd35715dc\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
13:30:35 test k3s[21472]: I0913 13:30:35.685580 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for tlsoptions.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685640 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serverstransports.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685672 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for middlewaretcps.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685709 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingressrouteudps.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685738 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for traefikservices.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685767 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingressroutetcps.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685795 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for middlewares.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685826 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for tlsstores.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685863 21472 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingressroutes.traefik.containo.us
13:30:35 test k3s[21472]: I0913 13:30:35.685934 21472 shared_informer.go:255] Waiting for caches to sync for resource quota
13:30:35 test k3s[21472]: I0913 13:30:35.887056 21472 shared_informer.go:262] Caches are synced for resource quota
13:30:36 test k3s[21472]: I0913 13:30:36.104014 21472 shared_informer.go:255] Waiting for caches to sync for garbage collector
13:30:36 test k3s[21472]: I0913 13:30:36.104062 21472 shared_informer.go:262] Caches are synced for garbage collector
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"580\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"633\", FieldPath:\"\"}): type: 'Normal' reason: 'UpdatedIngressIP' LoadBalancer Ingress IP addresses updated: 192.168.100.195"
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"633\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"633\", FieldPath:\"\"}): type: 'Normal' reason: 'UpdatedIngressIP' LoadBalancer Ingress IP addresses updated: 192.168.100.195"
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"635\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:49 test k3s[21472]: time="2022-09-13T13:30:49 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"635\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:30:52 test k3s[21472]: time="2022-09-13T13:30:52 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"635\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:31:10 test k3s[21472]: time="2022-09-13T13:31:10 08:00" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"14536c6f-d600-4b93-b00f-34030b318be7\", APIVersion:\"v1\", ResourceVersion:\"635\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-14536c6f"
13:39:03 test k3s[21472]: time="2022-09-13T13:39:03 08:00" level=warning msg="Proxy error: write failed: write tcp 127.0.0.1:6443->127.0.0.1:60854: write: broken pipe"
13:44:03 test k3s[21472]: time="2022-09-13T13:44:03 08:00" level=warning msg="Proxy error: write failed: write tcp 127.0.0.1:6443->127.0.0.1:34222: write: broken pipe" |
Yes, but the log appears to be truncated to terminal width, so the line ends are missing. Can you copy it, and the containerd log, off the node and attach it to a comment instead of pasting it all inline? Also, are there any messages in there that correspond to the times when you're running your kubectl logs command and getting the Access violation error? |
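For completeness, a minimal sketch of pulling both logs off the node for attaching, assuming a default systemd-managed install (service name k3s) and the default data directory /var/lib/rancher/k3s; paths may differ on a customized setup:

# dump the full k3s journal without terminal-width truncation
journalctl -u k3s --no-pager > /tmp/k3s-journal.log
# on a default install the embedded containerd keeps its own log under the agent directory
cp /var/lib/rancher/k3s/agent/containerd/containerd.log /tmp/containerd.log
# then copy both files off the node, e.g.: scp root@<node>:/tmp/k3s-journal.log .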
It has been redeployed many times; the problem is the same and the logs are the same.

# kubectl -n kube-system logs coredns-b96499967-zcgn5
Error from server: Get "https://192.168.100.195:10250/containerLogs/kube-system/coredns-b96499967-zcgn5/coredns": Access violation

# crictl logs coredns-b96499967-zcgn5
E0913 14:56:09.437437 31884 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"coredns-b96499967-zcgn5\": not found" containerID="coredns-b96499967-zcgn5"
FATA[0000] rpc error: code = NotFound desc = an error occurred when try to find container "coredns-b96499967-zcgn5": not found |
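As an aside, crictl logs expects a container ID rather than a pod name, which is why it reports NotFound for coredns-b96499967-zcgn5 here. A minimal sketch of looking up the ID first, assuming the crictl bundled with k3s; the grep pattern and the placeholder are illustrative only:

# list running containers and pick out the coredns container ID (first column of the output)
crictl ps | grep coredns
# then fetch logs by that container ID rather than the pod name
crictl logs <container-id>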
That seems wrong. Are you using the containerd that is packaged with K3s? Do you see anything interesting in the containerd log at |
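Regardless of the exact log path, one quick way to confirm whether the k3s-bundled containerd is the one in use is to check the containerd process and the CRI socket it serves; this is only a sketch, and it assumes the default socket path /run/k3s/containerd/containerd.sock used by k3s's embedded containerd:

# the embedded containerd is launched by k3s itself and serves the CRI socket under /run/k3s
ps -ef | grep containerd
crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock version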
This is my containerd log; please check it.

time="2022-09-14T11:31:28 08:00" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header"
time="2022-09-14T11:31:28.052839094 08:00" level=info msg="starting containerd" revision= version=v1.6.6-k3s1
time="2022-09-14T11:31:28.069484365 08:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2022-09-14T11:31:28.069639765 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071309284 08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found.\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071405499 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071609599 08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.btrfs (xfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071637225 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071654047 08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2022-09-14T11:31:28.071666965 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071749939 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.071987951 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.072075216 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.stargz\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.090146338 08:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.090336236 08:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2022-09-14T11:31:28.090367157 08:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-09-14T11:31:28.090468885 08:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2022-09-14T11:31:28.090491780 08:00" level=info msg="metadata content store policy set" policy=shared
time="2022-09-14T11:31:28.187524996 08:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2022-09-14T11:31:28.187577374 08:00" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2022-09-14T11:31:28.187600185 08:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2022-09-14T11:31:28.187665598 08:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187720306 08:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187748788 08:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187771607 08:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187794397 08:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187816270 08:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187834306 08:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187855507 08:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.187878056 08:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2022-09-14T11:31:28.187987506 08:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2022-09-14T11:31:28.188135767 08:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2022-09-14T11:31:28.188648666 08:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2022-09-14T11:31:28.188711735 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188736086 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2022-09-14T11:31:28.188842313 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188867398 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188888111 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188904283 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188924470 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188944168 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188964489 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.188980038 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.189001812 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2022-09-14T11:31:28.189184652 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.189213368 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.189232379 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.189257604 08:00" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2022-09-14T11:31:28.189293618 08:00" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
time="2022-09-14T11:31:28.189324272 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2022-09-14T11:31:28.189361087 08:00" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
time="2022-09-14T11:31:28.189418625 08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2022-09-14T11:31:28.189697354 08:00" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/var/lib/rancher/k3s/data/577968fa3d58539cc4265248631b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin NetworkPluginConfDir:/var/lib/rancher/k3s/agent/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:rancher/mirrored-pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/rancher/k3s/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/k3s/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
time="2022-09-14T11:31:28.189778517 08:00" level=info msg="Connect containerd service"
time="2022-09-14T11:31:28.189866562 08:00" level=info msg="Get image filesystem path \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2022-09-14T11:31:28.190579182 08:00" level=info msg="Start subscribing containerd event"
time="2022-09-14T11:31:28.190640130 08:00" level=info msg="Start recovering state"
time="2022-09-14T11:31:28.190686964 08:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
time="2022-09-14T11:31:28.190743751 08:00" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2022-09-14T11:31:28.190742897 08:00" level=info msg="Start event monitor"
time="2022-09-14T11:31:28.190778148 08:00" level=info msg="containerd successfully booted in 0.138774s"
time="2022-09-14T11:31:28.190785094 08:00" level=info msg="Start snapshots syncer"
time="2022-09-14T11:31:28.190812187 08:00" level=info msg="Start cni network conf syncer for default"
time="2022-09-14T11:31:28.190826363 08:00" level=info msg="Start streaming server"
time="2022-09-14T11:31:42.650090378 08:00" level=info msg="No cni config template is specified, wait for other system components to drop the config."
time="2022-09-14T11:31:43.615460172 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-crd-5bfkm,Uid:076acc7e-b4d4-4d29-8b37-52cebc79852e,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:31:43.624739382 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-l4zwz,Uid:eec71f65-8e76-431c-ab9a-5505a14fb2ad,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:31:44.731752820 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-b96499967-mggvt,Uid:f847b3c7-c387-4170-9e64-333d18398233,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:31:44.731830120 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-7b7dc8d6f5-cg8m9,Uid:50750f51-7f40-46a9-83b5-f2045d037063,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:31:45.340862390 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-668d979685-5cpdl,Uid:fd08404f-b1bb-4470-a9e5-46611c645e16,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:31:56.668609041 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:56.816381876 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:56.868650003 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:56.924921521 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:56.960168045 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:56.988517679 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.102796302 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.120111093 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.177311559 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.205803045 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.222897029 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.268057728 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000bc900), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:31:57.308653576 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.337036671 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.411619494 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.491454368 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.571366308 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:31:57.588588089 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a970), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000bc900), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:31:57.639100605 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016970), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:31:57.708615998 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000014970), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:31:58.239449908 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:31:58.239545232 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:31:58.239561203 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:31:58.239744338 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/31187721f285fe9cdae8debdf49349e2e1df23daddd595ed468fdbae05477062 pid=19437 runtime=io.containerd.runc.v2
time="2022-09-14T11:31:58.324217138 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-b96499967-mggvt,Uid:f847b3c7-c387-4170-9e64-333d18398233,Namespace:kube-system,Attempt:0,} returns sandbox id \"31187721f285fe9cdae8debdf49349e2e1df23daddd595ed468fdbae05477062\""
time="2022-09-14T11:31:58.326200180 08:00" level=info msg="PullImage \"rancher/mirrored-coredns-coredns:1.9.1\""
time="2022-09-14T11:31:58.474736870 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:31:58.474830019 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:31:58.474850538 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:31:58.475042320 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87 pid=19477 runtime=io.containerd.runc.v2
time="2022-09-14T11:31:58.522198450 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:31:58.522299301 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:31:58.522316845 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:31:58.522494529 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/db6d3379bdb5e281256e7386f0af3f49a8d95a486e9910dd3e4f7928e5cb9d60 pid=19510 runtime=io.containerd.runc.v2
time="2022-09-14T11:31:58.563159525 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-crd-5bfkm,Uid:076acc7e-b4d4-4d29-8b37-52cebc79852e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\""
time="2022-09-14T11:31:58.564635518 08:00" level=info msg="PullImage \"rancher/klipper-helm:v0.7.3-build20220613\""
time="2022-09-14T11:31:58.590839628 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:31:58.590968603 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:31:58.591008463 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:31:58.591214742 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/c20418e89501a4726a3d1259ea53888c2a062800234831d993445bfa3fbb5c86 pid=19552 runtime=io.containerd.runc.v2
time="2022-09-14T11:31:58.608650431 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-668d979685-5cpdl,Uid:fd08404f-b1bb-4470-a9e5-46611c645e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6d3379bdb5e281256e7386f0af3f49a8d95a486e9910dd3e4f7928e5cb9d60\""
time="2022-09-14T11:31:58.612453060 08:00" level=info msg="PullImage \"rancher/mirrored-metrics-server:v0.5.2\""
time="2022-09-14T11:31:58.632125393 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:31:58.632223502 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:31:58.632240380 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:31:58.632439926 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d pid=19591 runtime=io.containerd.runc.v2
time="2022-09-14T11:31:58.676025154 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-7b7dc8d6f5-cg8m9,Uid:50750f51-7f40-46a9-83b5-f2045d037063,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20418e89501a4726a3d1259ea53888c2a062800234831d993445bfa3fbb5c86\""
time="2022-09-14T11:31:58.678440960 08:00" level=info msg="PullImage \"rancher/local-path-provisioner:v0.0.21\""
time="2022-09-14T11:31:58.723763556 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-l4zwz,Uid:eec71f65-8e76-431c-ab9a-5505a14fb2ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\""
time="2022-09-14T11:31:58.725090113 08:00" level=info msg="PullImage \"rancher/klipper-helm:v0.7.3-build20220613\""
time="2022-09-14T11:32:14.273560450 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-coredns-coredns:1.9.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:14.336358511 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:99376d8f35e0abb6ff9d66b50a7c81df6e6dfdb649becc5df84a691a7b4beca4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:14.359921731 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-coredns-coredns:1.9.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:14.394633211 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-coredns-coredns@sha256:35e38f3165a19cb18c65d83334c13d61db6b24905f45640aa8c2d2a6f55ebcb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:14.394962562 08:00" level=info msg="PullImage \"rancher/mirrored-coredns-coredns:1.9.1\" returns image reference \"sha256:99376d8f35e0abb6ff9d66b50a7c81df6e6dfdb649becc5df84a691a7b4beca4\""
time="2022-09-14T11:32:14.403644236 08:00" level=info msg="CreateContainer within sandbox \"31187721f285fe9cdae8debdf49349e2e1df23daddd595ed468fdbae05477062\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
time="2022-09-14T11:32:14.819314528 08:00" level=info msg="CreateContainer within sandbox \"31187721f285fe9cdae8debdf49349e2e1df23daddd595ed468fdbae05477062\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c1869e2fd657a9e1ac258db3a0c0a13c712d07e71af34476456a02d327c8905\""
time="2022-09-14T11:32:14.820056997 08:00" level=info msg="StartContainer for \"7c1869e2fd657a9e1ac258db3a0c0a13c712d07e71af34476456a02d327c8905\""
time="2022-09-14T11:32:14.990962962 08:00" level=info msg="StartContainer for \"7c1869e2fd657a9e1ac258db3a0c0a13c712d07e71af34476456a02d327c8905\" returns successfully"
time="2022-09-14T11:32:20.336659041 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-metrics-server:v0.5.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.377187213 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/local-path-provisioner:v0.0.21,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.537286443 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f73640fb506199d02192ef1dc99404aeb1afec43a9f7dad5de96c09eda17cd71,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.554375021 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb9b574e03c344e1619ced3ef0700acb2ab8ef1d39973cabd90b8371a46148be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.627915452 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-metrics-server:v0.5.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.685144931 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/local-path-provisioner:v0.0.21,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.708628627 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-metrics-server@sha256:48ecad4fe641a09fa4459f93c7ad29d4916f6b9cf7e934d548f1d8eff96e2f35,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.709482094 08:00" level=info msg="PullImage \"rancher/mirrored-metrics-server:v0.5.2\" returns image reference \"sha256:f73640fb506199d02192ef1dc99404aeb1afec43a9f7dad5de96c09eda17cd71\""
time="2022-09-14T11:32:20.712055223 08:00" level=info msg="CreateContainer within sandbox \"db6d3379bdb5e281256e7386f0af3f49a8d95a486e9910dd3e4f7928e5cb9d60\" for container &ContainerMetadata{Name:metrics-server,Attempt:0,}"
time="2022-09-14T11:32:20.725597419 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/local-path-provisioner@sha256:1da612c913ce0b4ab82e20844baa9dce1f7065e39412d6a0bb4de99c413f21bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:20.726375580 08:00" level=info msg="PullImage \"rancher/local-path-provisioner:v0.0.21\" returns image reference \"sha256:fb9b574e03c344e1619ced3ef0700acb2ab8ef1d39973cabd90b8371a46148be\""
time="2022-09-14T11:32:20.728216682 08:00" level=info msg="CreateContainer within sandbox \"c20418e89501a4726a3d1259ea53888c2a062800234831d993445bfa3fbb5c86\" for container &ContainerMetadata{Name:local-path-provisioner,Attempt:0,}"
time="2022-09-14T11:32:21.055789284 08:00" level=info msg="CreateContainer within sandbox \"db6d3379bdb5e281256e7386f0af3f49a8d95a486e9910dd3e4f7928e5cb9d60\" for &ContainerMetadata{Name:metrics-server,Attempt:0,} returns container id \"12abf52091a7f17b84676a3377b4d50f0390d50a2c41bf88f4f13ffd325c9f03\""
time="2022-09-14T11:32:21.056687857 08:00" level=info msg="StartContainer for \"12abf52091a7f17b84676a3377b4d50f0390d50a2c41bf88f4f13ffd325c9f03\""
time="2022-09-14T11:32:21.061423132 08:00" level=info msg="CreateContainer within sandbox \"c20418e89501a4726a3d1259ea53888c2a062800234831d993445bfa3fbb5c86\" for &ContainerMetadata{Name:local-path-provisioner,Attempt:0,} returns container id \"8f9a81ac78c71656f6a073e7432857dadd3b9ed3e851410a7a08e911df41b5bc\""
time="2022-09-14T11:32:21.061811056 08:00" level=info msg="StartContainer for \"8f9a81ac78c71656f6a073e7432857dadd3b9ed3e851410a7a08e911df41b5bc\""
time="2022-09-14T11:32:21.170474274 08:00" level=info msg="StartContainer for \"12abf52091a7f17b84676a3377b4d50f0390d50a2c41bf88f4f13ffd325c9f03\" returns successfully"
time="2022-09-14T11:32:21.187627894 08:00" level=info msg="StartContainer for \"8f9a81ac78c71656f6a073e7432857dadd3b9ed3e851410a7a08e911df41b5bc\" returns successfully"
time="2022-09-14T11:32:29.074262369 08:00" level=error msg="ContainerStatus for \"c1368d68f32e9bdaf7072e37c8b3568b0e7454564fd656d164385b873cc854be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1368d68f32e9bdaf7072e37c8b3568b0e7454564fd656d164385b873cc854be\": not found"
time="2022-09-14T11:32:29.074934087 08:00" level=error msg="ContainerStatus for \"cada374fa07bc4b6abbd3131e14632a5658009e20fb47546259f935452357997\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cada374fa07bc4b6abbd3131e14632a5658009e20fb47546259f935452357997\": not found"
time="2022-09-14T11:32:29.075411930 08:00" level=error msg="ContainerStatus for \"7ebaa0a5a78728511919f2940502249b6085abd5c59309be3b863f4263155c28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ebaa0a5a78728511919f2940502249b6085abd5c59309be3b863f4263155c28\": not found"
time="2022-09-14T11:32:29.075827816 08:00" level=error msg="ContainerStatus for \"3fe4151219c6fdc8db4e11e04f3b063838f9bbeb20aae8c332b6813e02b41aa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fe4151219c6fdc8db4e11e04f3b063838f9bbeb20aae8c332b6813e02b41aa5\": not found"
time="2022-09-14T11:32:29.076233481 08:00" level=error msg="ContainerStatus for \"a206c9b3e49b51da720c7f4fd48bd5c6e9c74b43235fc893374fa9da8ec33307\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a206c9b3e49b51da720c7f4fd48bd5c6e9c74b43235fc893374fa9da8ec33307\": not found"
time="2022-09-14T11:32:29.076684448 08:00" level=error msg="ContainerStatus for \"e72af6653c3978390b860d81f1d7a59256ed316f9a61234a376ee9bf53fedb2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e72af6653c3978390b860d81f1d7a59256ed316f9a61234a376ee9bf53fedb2e\": not found"
time="2022-09-14T11:32:29.077183699 08:00" level=error msg="ContainerStatus for \"b4b2c8b51555e6bdff3ea7ae18a430219136197fb2fd3def912190edb9b8d8c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b2c8b51555e6bdff3ea7ae18a430219136197fb2fd3def912190edb9b8d8c9\": not found"
time="2022-09-14T11:32:29.077652479 08:00" level=error msg="ContainerStatus for \"8fc4778d0d679017353477ca3ebe813884ebe2148adac315bbeafc3ad82180af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fc4778d0d679017353477ca3ebe813884ebe2148adac315bbeafc3ad82180af\": not found"
time="2022-09-14T11:32:30.359934719 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/klipper-helm:v0.7.3-build20220613,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.399911110 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/klipper-helm:v0.7.3-build20220613,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.417052582 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38b3b9ad736afd9f64e9844cdb78f8d43f1924a9e6dbeabc399fe57b9ec15247,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.462794965 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/klipper-helm:v0.7.3-build20220613,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.485568833 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:38b3b9ad736afd9f64e9844cdb78f8d43f1924a9e6dbeabc399fe57b9ec15247,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.508438390 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/klipper-helm@sha256:6a8e819402e3fdd5ff9ec576174b6c0013870b9c0627a05fa0ab17374b5cf189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.509484477 08:00" level=info msg="PullImage \"rancher/klipper-helm:v0.7.3-build20220613\" returns image reference \"sha256:38b3b9ad736afd9f64e9844cdb78f8d43f1924a9e6dbeabc399fe57b9ec15247\""
time="2022-09-14T11:32:30.511640748 08:00" level=info msg="CreateContainer within sandbox \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" for container &ContainerMetadata{Name:helm,Attempt:0,}"
time="2022-09-14T11:32:30.531358063 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/klipper-helm:v0.7.3-build20220613,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.588545777 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/klipper-helm@sha256:6a8e819402e3fdd5ff9ec576174b6c0013870b9c0627a05fa0ab17374b5cf189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:32:30.589384777 08:00" level=info msg="PullImage \"rancher/klipper-helm:v0.7.3-build20220613\" returns image reference \"sha256:38b3b9ad736afd9f64e9844cdb78f8d43f1924a9e6dbeabc399fe57b9ec15247\""
time="2022-09-14T11:32:30.591477597 08:00" level=info msg="CreateContainer within sandbox \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\" for container &ContainerMetadata{Name:helm,Attempt:0,}"
time="2022-09-14T11:32:30.902324913 08:00" level=info msg="CreateContainer within sandbox \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" for &ContainerMetadata{Name:helm,Attempt:0,} returns container id \"1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8\""
time="2022-09-14T11:32:30.903137915 08:00" level=info msg="StartContainer for \"1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8\""
time="2022-09-14T11:32:30.909735876 08:00" level=info msg="CreateContainer within sandbox \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\" for &ContainerMetadata{Name:helm,Attempt:0,} returns container id \"16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835\""
time="2022-09-14T11:32:30.910383530 08:00" level=info msg="StartContainer for \"16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835\""
time="2022-09-14T11:32:31.022752532 08:00" level=info msg="StartContainer for \"1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8\" returns successfully"
time="2022-09-14T11:32:31.030137119 08:00" level=info msg="StartContainer for \"16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835\" returns successfully"
time="2022-09-14T11:32:34.329430753 08:00" level=error msg="collecting metrics for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:37.236209528 08:00" level=error msg="collecting metrics for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:43.148312894 08:00" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,ID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,Pid:19999,ExitStatus:1,ExitedAt:2022-09-14 03:32:33.147226191 0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
time="2022-09-14T11:32:44.191428520 08:00" level=info msg="TaskExit event &TaskExit{ContainerID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,ID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,Pid:19999,ExitStatus:1,ExitedAt:2022-09-14 03:32:33.147226191 0000 UTC,XXX_unrecognized:[],}"
time="2022-09-14T11:32:44.348779197 08:00" level=error msg="collecting metrics for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:44.349503552 08:00" level=error msg="collecting metrics for 16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:46.192054177 08:00" level=error msg="get state for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="context deadline exceeded: unknown"
time="2022-09-14T11:32:46.192117829 08:00" level=warning msg="unknown status" status=0
time="2022-09-14T11:32:51.162031254 08:00" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835,ID:16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835,Pid:20001,ExitStatus:0,ExitedAt:2022-09-14 03:32:41.161134702 0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
time="2022-09-14T11:32:52.234356628 08:00" level=error msg="collecting metrics for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:52.234720641 08:00" level=error msg="collecting metrics for 16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:54.191993342 08:00" level=error msg="Failed to handle backOff event &TaskExit{ContainerID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,ID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,Pid:19999,ExitStatus:1,ExitedAt:2022-09-14 03:32:33.147226191 0000 UTC,XXX_unrecognized:[],} for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
time="2022-09-14T11:32:54.192114442 08:00" level=info msg="TaskExit event &TaskExit{ContainerID:16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835,ID:16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835,Pid:20001,ExitStatus:0,ExitedAt:2022-09-14 03:32:41.161134702 0000 UTC,XXX_unrecognized:[],}"
time="2022-09-14T11:32:54.362660747 08:00" level=error msg="collecting metrics for 16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:54.365535419 08:00" level=error msg="collecting metrics for 1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8" error="cgroups: cgroup deleted: unknown"
time="2022-09-14T11:32:56.193042517 08:00" level=error msg="get state for 16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835" error="context deadline exceeded: unknown"
time="2022-09-14T11:32:56.193085751 08:00" level=warning msg="unknown status" status=0
time="2022-09-14T11:32:56.487646757 08:00" level=info msg="TaskExit event &TaskExit{ContainerID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,ID:1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8,Pid:19999,ExitStatus:1,ExitedAt:2022-09-14 03:32:33.147226191 0000 UTC,XXX_unrecognized:[],}"
time="2022-09-14T11:32:57.293841145 08:00" level=info msg="CreateContainer within sandbox \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" for container &ContainerMetadata{Name:helm,Attempt:1,}"
time="2022-09-14T11:32:57.560069586 08:00" level=info msg="CreateContainer within sandbox \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" for &ContainerMetadata{Name:helm,Attempt:1,} returns container id \"3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd\""
time="2022-09-14T11:32:57.560627311 08:00" level=info msg="StartContainer for \"3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd\""
time="2022-09-14T11:32:57.653172250 08:00" level=info msg="StartContainer for \"3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd\" returns successfully"
time="2022-09-14T11:32:58.298508797 08:00" level=info msg="StopPodSandbox for \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\""
time="2022-09-14T11:32:58.298597674 08:00" level=info msg="Container to stop \"16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2022-09-14T11:32:58.416868035 08:00" level=info msg="shim disconnected" id=a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87
time="2022-09-14T11:32:58.416923190 08:00" level=warning msg="cleaning up after shim disconnected" id=a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87 namespace=k8s.io
time="2022-09-14T11:32:58.416936166 08:00" level=info msg="cleaning up dead shim"
time="2022-09-14T11:32:58.417476894 08:00" level=info msg="shim disconnected" id=16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835
time="2022-09-14T11:32:58.417528864 08:00" level=warning msg="cleaning up after shim disconnected" id=16e82bbce55902a88d885575ef7b6cc46ac3681c7d1df2c0a39980834ae53835 namespace=k8s.io
time="2022-09-14T11:32:58.417541672 08:00" level=info msg="cleaning up dead shim"
time="2022-09-14T11:32:58.426197099 08:00" level=warning msg="cleanup warnings time=\"2022-09-14T11:32:58 08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=20396 runtime=io.containerd.runc.v2\n"
time="2022-09-14T11:32:58.426420902 08:00" level=warning msg="cleanup warnings time=\"2022-09-14T11:32:58 08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=20397 runtime=io.containerd.runc.v2\n"
time="2022-09-14T11:32:58.474644232 08:00" level=info msg="TearDown network for sandbox \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\" successfully"
time="2022-09-14T11:32:58.474695918 08:00" level=info msg="StopPodSandbox for \"a6372681d74eef861e844cb50e07389ef657769bbb748c650086eacb887d9d87\" returns successfully"
time="2022-09-14T11:33:00.893778809 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:svclb-traefik-0e009984-j7vp8,Uid:1c5d6b53-6261-4baf-a6c0-ed011ee5a4bb,Namespace:kube-system,Attempt:0,}"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018970), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:33:01.159103471 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:traefik-7cd4fcff68-f6sqm,Uid:5ed19d2f-773b-4bb7-ac0a-2af7f2f4158d,Namespace:kube-system,Attempt:0,}"
time="2022-09-14T11:33:01.169172135 08:00" level=info msg="shim disconnected" id=3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd
time="2022-09-14T11:33:01.169228833 08:00" level=warning msg="cleaning up after shim disconnected" id=3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd namespace=k8s.io
time="2022-09-14T11:33:01.169241396 08:00" level=info msg="cleaning up dead shim"
map[string]interface {}{"cniVersion":"1.0.0", "forceAddress":true, "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.42.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0x2a, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00012a900), "name":"cbr0", "type":"bridge"}
{"cniVersion":"1.0.0","forceAddress":true,"hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.42.0.0/24"}]],"routes":[{"dst":"10.42.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2022-09-14T11:33:01.178954895 08:00" level=warning msg="cleanup warnings time=\"2022-09-14T11:33:01 08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=20673 runtime=io.containerd.runc.v2\n"
time="2022-09-14T11:33:01.307104458 08:00" level=info msg="RemoveContainer for \"1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8\""
time="2022-09-14T11:33:01.352979883 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:33:01.353068858 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:33:01.353090455 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:33:01.353344515 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a pid=20703 runtime=io.containerd.runc.v2
time="2022-09-14T11:33:01.441149726 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:svclb-traefik-0e009984-j7vp8,Uid:1c5d6b53-6261-4baf-a6c0-ed011ee5a4bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a\""
time="2022-09-14T11:33:01.442734400 08:00" level=info msg="PullImage \"rancher/klipper-lb:v0.3.5\""
time="2022-09-14T11:33:01.451883271 08:00" level=info msg="RemoveContainer for \"1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8\" returns successfully"
time="2022-09-14T11:33:01.580801435 08:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2022-09-14T11:33:01.580896164 08:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2022-09-14T11:33:01.580919209 08:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2022-09-14T11:33:01.581111965 08:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d488e9aa525e36713b2f370ef3a661631f40ab7508ae1b0747bef6898abad46c pid=20745 runtime=io.containerd.runc.v2
time="2022-09-14T11:33:01.665359088 08:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:traefik-7cd4fcff68-f6sqm,Uid:5ed19d2f-773b-4bb7-ac0a-2af7f2f4158d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d488e9aa525e36713b2f370ef3a661631f40ab7508ae1b0747bef6898abad46c\""
time="2022-09-14T11:33:01.666894257 08:00" level=info msg="PullImage \"rancher/mirrored-library-traefik:2.6.2\""
time="2022-09-14T11:33:02.309847379 08:00" level=info msg="StopPodSandbox for \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\""
time="2022-09-14T11:33:02.309913904 08:00" level=info msg="Container to stop \"3755dd26e7ce3fd8399662754dd4f9964abaae9e30ef61a485d87d4f41bddefd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2022-09-14T11:33:02.458072371 08:00" level=info msg="shim disconnected" id=0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d
time="2022-09-14T11:33:02.458130967 08:00" level=warning msg="cleaning up after shim disconnected" id=0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d namespace=k8s.io
time="2022-09-14T11:33:02.458143953 08:00" level=info msg="cleaning up dead shim"
time="2022-09-14T11:33:02.458961613 08:00" level=info msg="shim disconnected" id=1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8
time="2022-09-14T11:33:02.459002507 08:00" level=warning msg="cleaning up after shim disconnected" id=1b1524c02bb058e9c817ddf7383c3b5455cd57206d5f3ce5cc06863fc4107cb8 namespace=k8s.io
time="2022-09-14T11:33:02.459014458 08:00" level=info msg="cleaning up dead shim"
time="2022-09-14T11:33:02.467793392 08:00" level=warning msg="cleanup warnings time=\"2022-09-14T11:33:02 08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=20798 runtime=io.containerd.runc.v2\n"
time="2022-09-14T11:33:02.467793487 08:00" level=warning msg="cleanup warnings time=\"2022-09-14T11:33:02 08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=20799 runtime=io.containerd.runc.v2\n"
time="2022-09-14T11:33:02.516325570 08:00" level=info msg="TearDown network for sandbox \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" successfully"
time="2022-09-14T11:33:02.516377270 08:00" level=info msg="StopPodSandbox for \"0f3bb604785bc2785b59832bf11547c876877c2e5fda66e5dcc1373e0e43525d\" returns successfully"
time="2022-09-14T11:33:20.920001913 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/klipper-lb:v0.3.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.039792865 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-library-traefik:2.6.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.079940580 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dbd43b6716a084c929553d435c5f86f17b331739fe4652eb95e6bdc12df38a10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.147565869 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/klipper-lb:v0.3.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.171266094 08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72463d8000a351a929f93834ecbb65b63e1d5dd2990f9c4b8f7cd28a66acec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.199765822 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/klipper-lb@sha256:02f8cb41d53fe08b5726a563ce36c3675ad7f2694b65a8477f6a66afac89fef7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.200709971 08:00" level=info msg="PullImage \"rancher/klipper-lb:v0.3.5\" returns image reference \"sha256:dbd43b6716a084c929553d435c5f86f17b331739fe4652eb95e6bdc12df38a10\""
time="2022-09-14T11:33:21.202758499 08:00" level=info msg="CreateContainer within sandbox \"2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a\" for container &ContainerMetadata{Name:lb-tcp-80,Attempt:0,}"
time="2022-09-14T11:33:21.233449548 08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-library-traefik:2.6.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.297025120 08:00" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-library-traefik@sha256:ad2226527eea71b7591d5e9dcc0bffd0e71b2235420c34f358de6db6d529561f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-09-14T11:33:21.297936516 08:00" level=info msg="PullImage \"rancher/mirrored-library-traefik:2.6.2\" returns image reference \"sha256:72463d8000a351a929f93834ecbb65b63e1d5dd2990f9c4b8f7cd28a66acec44\""
time="2022-09-14T11:33:21.299970265 08:00" level=info msg="CreateContainer within sandbox \"d488e9aa525e36713b2f370ef3a661631f40ab7508ae1b0747bef6898abad46c\" for container &ContainerMetadata{Name:traefik,Attempt:0,}"
time="2022-09-14T11:33:21.754621188 08:00" level=info msg="CreateContainer within sandbox \"2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a\" for &ContainerMetadata{Name:lb-tcp-80,Attempt:0,} returns container id \"9126eebfed2ab5f3cd9bcc4bbde6bf1345b8204dce42dca86c1721ae1ab84643\""
time="2022-09-14T11:33:21.755121600 08:00" level=info msg="StartContainer for \"9126eebfed2ab5f3cd9bcc4bbde6bf1345b8204dce42dca86c1721ae1ab84643\""
time="2022-09-14T11:33:21.848862064 08:00" level=info msg="StartContainer for \"9126eebfed2ab5f3cd9bcc4bbde6bf1345b8204dce42dca86c1721ae1ab84643\" returns successfully"
time="2022-09-14T11:33:21.851581171 08:00" level=info msg="CreateContainer within sandbox \"2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a\" for container &ContainerMetadata{Name:lb-tcp-443,Attempt:0,}"
time="2022-09-14T11:33:21.874808599 08:00" level=info msg="CreateContainer within sandbox \"d488e9aa525e36713b2f370ef3a661631f40ab7508ae1b0747bef6898abad46c\" for &ContainerMetadata{Name:traefik,Attempt:0,} returns container id \"9fe6a66ff5e37441cdd22892ae753701e7fe2f0796b7fa78404aa57c1740a8e4\""
time="2022-09-14T11:33:21.875219006 08:00" level=info msg="StartContainer for \"9fe6a66ff5e37441cdd22892ae753701e7fe2f0796b7fa78404aa57c1740a8e4\""
time="2022-09-14T11:33:21.960863897 08:00" level=info msg="StartContainer for \"9fe6a66ff5e37441cdd22892ae753701e7fe2f0796b7fa78404aa57c1740a8e4\" returns successfully"
time="2022-09-14T11:33:22.073853843 08:00" level=info msg="CreateContainer within sandbox \"2cc5adde61e642fbcb6b247ac20cb6f17dc9d57313a35a7d8056be5e68ceba8a\" for &ContainerMetadata{Name:lb-tcp-443,Attempt:0,} returns container id \"0ae7fea9d094c85398a5e4256ee2a112c966074db99a5e10b7d790d16c570936\""
time="2022-09-14T11:33:22.074498435 08:00" level=info msg="StartContainer for \"0ae7fea9d094c85398a5e4256ee2a112c966074db99a5e10b7d790d16c570936\""
time="2022-09-14T11:33:22.175785510 08:00" level=info msg="StartContainer for \"0ae7fea9d094c85398a5e4256ee2a112c966074db99a5e10b7d790d16c570936\" returns successfully" |
Can you update to the latest 1.24 release (available later this week), and see if this error still occurs? There were some issues with runc and cgroups that I believe might be causing this: #6064 |
OK, I upgraded to v1.24.4+k3s1 and have now updated to v1.25.0+k3s1, and I still have the same problem. I will try the next release later. |
The fix should be in v1.25.0. If it's still happening with that release then something else is going on with your host that is causing these errors. Can you try fetching the same logs directly from the container runtime with crictl and share the output? |
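For reference, a minimal sketch of that comparison (the pod name and container ID are placeholders; use an affected pod on your node). First fetch the logs through the kubelet, which is the request that fails with "Access violation":

# kubectl -n kube-system logs <pod-name>

Then read the same logs directly from containerd, using the container ID rather than the pod name:

# crictl ps
# crictl logs <container-id>

If crictl can read the logs but kubectl cannot, the failure is in the kubelet/log-fetch path rather than in the container runtime itself.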
# kubectl -n kube-system logs coredns-75fc8f8fff-rgt4l
Error from server: Get "https://192.168.100.195:10250/containerLogs/kube-system/coredns-75fc8f8fff-rgt4l/coredns": Access violation # crictl logs coredns-75fc8f8fff-rgt4l
E0915 18:08:02.193737 17658 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"coredns-75fc8f8fff-rgt4l\": not found" containerID="coredns-75fc8f8fff-rgt4l"
FATA[0000] rpc error: code = NotFound desc = an error occurred when try to find container "coredns-75fc8f8fff-rgt4l": not found

# crictl logs 7ba00e39a7080
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
.:53
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9500e4afd6c1108d
CoreDNS-1.9.1
linux/amd64, go1.17.8, 4b597f8
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server |
OK, so the logs are there and available if you use the container ID instead of the pod name (which makes sense). Do you by any chance have an HTTP proxy configured in your environment? If so, the "Access violation" error may be coming from that proxy server when K3s tries to connect to the kubelet to retrieve logs. You might try adding your node IP addresses and cluster CIDRs to the NO_PROXY environment variable so that K3s bypasses the proxy when talking to the kubelet. |
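For anyone hitting this later, a minimal sketch of that change, assuming K3s was installed as a systemd service and reads its environment from /etc/systemd/system/k3s.service.env (the proxy address and CIDRs below are placeholders for your own network):

# cat /etc/systemd/system/k3s.service.env
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=localhost,127.0.0.0/8,192.168.100.0/24,10.42.0.0/16,10.43.0.0/16,.svc,.cluster.local

# systemctl restart k3s

After the restart, requests from the apiserver to the kubelet on port 10250 should bypass the proxy, and kubectl logs should work again.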
Thank you. It's all right now. |
I'm having exactly this problem as well. Happy to open a new issue if that's easier, but I'd really appreciate some help investigating this.
This happens with any pod, in any namespace. I can |
Environmental Info:
K3s Version:
# k3s -v
k3s version v1.24.1+k3s1 (0581808f)
go version go1.18.1
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
# kubectl get nodes
NAME   STATUS   ROLES                  AGE     VERSION
node   Ready    control-plane,master   3h28m   v1.24.1+k3s1
Describe the bug:
Fetching logs for any Pod returns only "Access violation".
Steps To Reproduce:
Expected behavior:
Logs for all Pods can be retrieved normally
Actual behavior:
Pod logs cannot be retrieved; the request returns only "Access violation"
Additional context / logs:
# kubectl get pod -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
local-path-provisioner-7b7dc8d6f5-4cf7z   1/1     Running     0          3h35m
coredns-b96499967-86vbm                   1/1     Running     0          3h35m
metrics-server-668d979685-7f9ln           1/1     Running     0          3h35m
helm-install-traefik-crd-tllwv            0/1     Completed   0          3h35m
helm-install-traefik-tbmfl                0/1     Completed   1          3h35m
svclb-traefik-pg4fp                       2/2     Running     0          3h34m
traefik-7cd4fcff68-njh6l                  1/1     Running     0          3h34m