minikube delete doesn't always delete the docker network #9776

Open
sharifelgamal opened this issue Nov 24, 2020 · 4 comments
Labels: co/docker-driver (Issues related to kubernetes in container), kind/bug (Categorizes issue or PR as related to a bug), lifecycle/frozen (Indicates that an issue or PR should not be auto-closed due to staleness), priority/backlog (Higher priority than priority/awaiting-more-evidence)

Comments

@sharifelgamal (Collaborator) commented Nov 24, 2020

~/minikube (master) $ mk delete
🙄  "minikube" profile does not exist, trying anyways.
💀  Removed all traces of the "minikube" cluster.

~/minikube (master) $ mk start -n 2
😄  minikube v1.15.1 on Darwin 10.15.7
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
E1124 10:22:03.958407   70271 network_create.go:77] error while trying to create network create network minikube 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500: exit status 1
stdout:

stderr:
Error response from daemon: network with name minikube already exists
❗  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create network minikube 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500: exit status 1
stdout:

stderr:
Error response from daemon: network with name minikube already exists

^C
~/minikube (master) $ docker network rm minikube
minikube

~/minikube (master) $ mk delete
🔥  Deleting "minikube" in docker ...
🔥  Deleting container "minikube" ...
🔥  Removing /Users/selgamal/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.

~/minikube (master) $ mk start -n 2
😄  minikube v1.15.1 on Darwin 10.15.7
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  Multi-node clusters are currently experimental and might exhibit unintended behavior.
📘  To track progress on multi-node clusters, see https://github.com/kubernetes/minikube/issues/7538.

👍  Starting node minikube-m02 in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.59.2
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    β–ͺ env NO_PROXY=192.168.59.2
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
@sharifelgamal added the kind/bug and co/docker-driver labels on Nov 24, 2020
@medyagh (Member) commented Nov 26, 2020

I think this is similar to this failure:

https://storage.googleapis.com/minikube-builds/logs/9781/a60176c/Docker_Linux.html#fail_TestErrorSpam
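
In that run, docker run failed because the network it referenced was reported as missing (the "network nospam-20201125224657-8457 not found" error in the pasted log below). A rough manual check before retrying, using the profile name from that run rather than minikube's internal logic, might be:

# hypothetical manual step: recreate the profile network if docker no longer has it
NET=nospam-20201125224657-8457
docker network inspect "$NET" >/dev/null 2>&1 || docker network create --driver=bridge "$NET"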

=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20201125224657-8457 -n=1 --memory=2250 --wait=false --driver=docker 
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20201125224657-8457 -n=1 --memory=2250 --wait=false --driver=docker : (4m7.689221659s)
error_spam_test.go:77: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname nospam-20201125224657-8457 --name nospam-20201125224657-8457 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20201125224657-8457 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=nospam-20201125224657-8457 --network nospam-20201125224657-8457 --ip 192.168.70.10 --volume nospam-20201125224657-8457:/var --security-opt apparmor=unconfined --memory=2250mb --memory-swap=2250mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot@sha256:8aba7f18c2de2d5fa32da5a03653814482682311fa35f146b01d8c569fb73dc5: exit status 125"
error_spam_test.go:77: unexpected stderr: "stdout:"
error_spam_test.go:77: unexpected stderr: "0c210e1a2094fd1b699db4e2a534f6f77df68d892b7c8645e49c699a8b85088b"
error_spam_test.go:77: unexpected stderr: "stderr:"
error_spam_test.go:77: unexpected stderr: "WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
error_spam_test.go:77: unexpected stderr: "docker: Error response from daemon: network nospam-20201125224657-8457 not found."
error_spam_test.go:91: minikube stdout:
* [nospam-20201125224657-8457] minikube v1.15.1 on Debian 9.13
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9781-4385-a60176cf00267e54f9001f9d3b267bc4b1d7d537/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9781-4385-a60176cf00267e54f9001f9d3b267bc4b1d7d537/.minikube
- MINIKUBE_LOCATION=9781
* Using the docker driver based on user configuration
* Starting control plane node nospam-20201125224657-8457 in cluster nospam-20201125224657-8457
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20201125224657-8457" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-20201125224657-8457" cluster and "default" namespace by default
error_spam_test.go:92: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname nospam-20201125224657-8457 --name nospam-20201125224657-8457 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=nospam-20201125224657-8457 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=nospam-20201125224657-8457 --network nospam-20201125224657-8457 --ip 192.168.70.10 --volume nospam-20201125224657-8457:/var --security-opt apparmor=unconfined --memory=2250mb --memory-swap=2250mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot@sha256:8aba7f18c2de2d5fa32da5a03653814482682311fa35f146b01d8c569fb73dc5: exit status 125
stdout:
0c210e1a2094fd1b699db4e2a534f6f77df68d892b7c8645e49c699a8b85088b
stderr:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
docker: Error response from daemon: network nospam-20201125224657-8457 not found.
error_spam_test.go:94: *** TestErrorSpam FAILED at 2020-11-25 22:51:04.86530289 +0000 UTC m=+1397.764022570
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestErrorSpam]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect nospam-20201125224657-8457
helpers_test.go:229: (dbg) docker inspect nospam-20201125224657-8457:
-- stdout --
	[
	    {
	        "Id": "2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6",
	        "Created": "2020-11-25T22:50:25.966661662Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-25T22:50:26.611296052Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1270733446388b870519ce2516a29a5b34dac213ca3a909e0aef1f6f268f94b7",
	        "ResolvConfPath": "/var/lib/docker/containers/2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6/hosts",
	        "LogPath": "/var/lib/docker/containers/2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6/2a37722027937797f33dd25c8dc3c9b2deab584682571280f09dc541ac6ad2e6-json.log",
	        "Name": "/nospam-20201125224657-8457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "nospam-20201125224657-8457:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "nospam-20201125224657-8457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2359296000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e31b83db67ac67d292bf85a41d5f6dd2d3ccdf79deea936b77175c78a0690749-init/diff:/var/lib/docker/overlay2/f15ed2a7847bd2e5ac672989c1f499030b7d27415afeaeaaee254c7124631502/diff:/var/lib/docker/overlay2/3acdef08094895b483ee598319ce2a58adb2eca370a6ff92f2526df22604153d/diff:/var/lib/docker/overlay2/69f96e3b4a08e9bb1f0aa614815777e7202be8554e615af219f353d5e1428f07/diff:/var/lib/docker/overlay2/acef036ed8a5dd90f0f679c139a76256f049fab83a43d19a74037aef58b50e8e/diff:/var/lib/docker/overlay2/db819c79d4d92c3722357ee3369db60f0f22e1398153c3ffb52edb48746fb2cb/diff:/var/lib/docker/overlay2/06d6cf34cfce0d5f02fb3f67ef4c07e8a801bcbdfc0a4ca7d3f2d3a8b7bf9696/diff:/var/lib/docker/overlay2/32805c6b33fa27247ebf09a7e4e2c53d5b6125f32534b44878c8e7df8df608de/diff:/var/lib/docker/overlay2/3a607a93ee98375c133fb58d8dc70c19912ad45ed7fe6a85b0f466f3b1c24251/diff:/var/lib/docker/overlay2/155b48d83d6556694e3d4da75300f6590abcc025460194faab176b54c30e7e3c/diff:/var/lib/docker/overlay2/02b21d
a8cd537fa580ed2c3907c32c421615c42d4079d9621946b300fefda2ef/diff:/var/lib/docker/overlay2/7e3b1af86135e23c1493ac7effd085c1e270d7f71e40adbe48dd8afe93d145cd/diff:/var/lib/docker/overlay2/5123864845d15ff6ade6bb7bdca25700bef99f0fb46b0b4c65d4ccec464747ff/diff:/var/lib/docker/overlay2/dcb15b7ca485eac5d56e18be2fbd009999860cb3bd00bc3c4bffb4c918a3d686/diff:/var/lib/docker/overlay2/5ff3fee912ccee2526c2a3ad295f05a7745b7baad722c67ae704da5cdd9a2aca/diff:/var/lib/docker/overlay2/d5dbf624fde7cf6f9f726e90c7421861806b9c86a8d958c72ab9102078ea672f/diff:/var/lib/docker/overlay2/f79457b5c791bee57c0233d3e075e5d3913e8ff8a0b190d1d85403ea01830504/diff:/var/lib/docker/overlay2/a4444c2be10c41ce45026e1ae5304a3fb47d288cc0e33763904aaa952efa3e68/diff:/var/lib/docker/overlay2/96b4b105f156c86f17b821bbeb8bb955fc5d412f184c892468c01cd8ce492188/diff:/var/lib/docker/overlay2/9d96b19f123e3528c3c17dedcec24ea5fa4902b372de0c8f2d71562e4978ad7b/diff:/var/lib/docker/overlay2/7df1efa231be270de98cf5f28d44689ba639b4761ba9be1e6326d5fa05d679f5/diff:/var/lib/d
ocker/overlay2/be1d11697b54531d0b77e4b07e3e7b62ea1871c443029eeb127596685cd086de/diff:/var/lib/docker/overlay2/4d89793615b80f29a3304225b9e81b366017c7f52fe866f5704af6e07f6f7087/diff:/var/lib/docker/overlay2/b39391a389fb0067f4b7c0dece2100deabc9d2ab52e5f16d9c89973a0f392df8/diff:/var/lib/docker/overlay2/4dcb84d15241377ae0a0d9a4d4189011f33fd1f98cd8b78853da3caebcd74705/diff:/var/lib/docker/overlay2/19e7b59ecac8293e773e17b4ccf3b32d3b545ecb17d6f9cbc25dc56b0815ddda/diff:/var/lib/docker/overlay2/3a4c2bbac3ec6def9951d4d6d1b43374697deea17aecfc8e395cdc0de063cec2/diff:/var/lib/docker/overlay2/cd73208beccf15c657e1bde7ee6c3a609b060dc9abfbcd53b3bd9ca9662f7418/diff:/var/lib/docker/overlay2/d4d1bd4383bdc0df1e2d49c48a2cd4e3d60661c37dbc9682d075a22fcff4d302/diff:/var/lib/docker/overlay2/6592a70f33c7abc3ec6610a878f9344887958109ab5bb1175698d09588beab6f/diff:/var/lib/docker/overlay2/20f1a7fe6f9ab5d8cf4f9b3e035f281097dd4c43c807862deb9638823b477ccb/diff:/var/lib/docker/overlay2/1b711feabab61769cdc062727895f63240c32d834c88cc048bac51625b8
c047b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e31b83db67ac67d292bf85a41d5f6dd2d3ccdf79deea936b77175c78a0690749/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e31b83db67ac67d292bf85a41d5f6dd2d3ccdf79deea936b77175c78a0690749/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e31b83db67ac67d292bf85a41d5f6dd2d3ccdf79deea936b77175c78a0690749/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "nospam-20201125224657-8457",
	                "Source": "/var/lib/docker/volumes/nospam-20201125224657-8457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "nospam-20201125224657-8457",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot@sha256:8aba7f18c2de2d5fa32da5a03653814482682311fa35f146b01d8c569fb73dc5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "nospam-20201125224657-8457",
	                "name.minikube.sigs.k8s.io": "nospam-20201125224657-8457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN 3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3c2879659765531b44713d998fc490e787d4002ac94313c6b045dfbedf4f8ea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f3c287965976",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "nospam-20201125224657-8457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.95.10"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a3772202793"
	                    ],
	                    "NetworkID": "3cbd622ff04096605985f8f5f3ef6012abd9e43b7a4b549880851871fa0a337d",
	                    "EndpointID": "dd1e13cfe6f576140b96464cc62150065d43cc8a95196026e3d8968e99bd8045",
	                    "Gateway": "192.168.95.1",
	                    "IPAddress": "192.168.95.10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5f:0a",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20201125224657-8457 -n nospam-20201125224657-8457
=== CONT  TestErrorSpam
helpers_test.go:238: <<< TestErrorSpam FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestErrorSpam]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20201125224657-8457 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p nospam-20201125224657-8457 logs -n 25: (2.564248177s)
helpers_test.go:246: TestErrorSpam logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2020-11-25 22:50:27 UTC, end at Wed 2020-11-25 22:51:06 UTC. --
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[166]: time="2020-11-25T22:50:34.259381932Z" level=info msg="Daemon shutdown complete"
	* Nov 25 22:50:34 nospam-20201125224657-8457 systemd[1]: docker.service: Succeeded.
	* Nov 25 22:50:34 nospam-20201125224657-8457 systemd[1]: Stopped Docker Application Container Engine.
	* Nov 25 22:50:34 nospam-20201125224657-8457 systemd[1]: Starting Docker Application Container Engine...
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.337721421Z" level=info msg="Starting up"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.340446825Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.340486625Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.341497473Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.341790415Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.344924218Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.344961252Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.344996675Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.345019253Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.369714709Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.378168974Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.378200500Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.378212782Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.378392605Z" level=info msg="Loading containers: start."
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.491676740Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.540433610Z" level=info msg="Loading containers: done."
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.577921672Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.578002192Z" level=info msg="Daemon has completed initialization"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.597752296Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 25 22:50:34 nospam-20201125224657-8457 dockerd[420]: time="2020-11-25T22:50:34.597821640Z" level=info msg="API listen on [::]:2376"
	* Nov 25 22:50:34 nospam-20201125224657-8457 systemd[1]: Started Docker Application Container Engine.
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 7448fe38e1bb7       14cd22f7abe78       19 seconds ago      Running             kube-scheduler            0                   c947156078056
	* 33bb568702a8d       b15c6247777d7       19 seconds ago      Running             kube-apiserver            0                   e94ff920ae571
	* 3dd42d51528a4       4830ab6185860       19 seconds ago      Running             kube-controller-manager   0                   1655f3e3a14ef
	* b1a78d27dac2e       0369cf4303ffd       19 seconds ago      Running             etcd                      0                   a1b0973d6f235
	* 
	* ==> describe nodes <==
	* Name:               nospam-20201125224657-8457
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=nospam-20201125224657-8457
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=5786acc75905914d848a07d56959d20e9026c202
	*                     minikube.k8s.io/name=nospam-20201125224657-8457
	*                     minikube.k8s.io/updated_at=2020_11_25T22_50_57_0700
	*                     minikube.k8s.io/version=v1.15.1
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 25 Nov 2020 22:50:54 +0000
	* Taints:             node.kubernetes.io/not-ready:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  nospam-20201125224657-8457
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 25 Nov 2020 22:50:58 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 25 Nov 2020 22:50:58 +0000   Wed, 25 Nov 2020 22:50:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 25 Nov 2020 22:50:58 +0000   Wed, 25 Nov 2020 22:50:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 25 Nov 2020 22:50:58 +0000   Wed, 25 Nov 2020 22:50:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            False   Wed, 25 Nov 2020 22:50:58 +0000   Wed, 25 Nov 2020 22:50:58 +0000   KubeletNotReady              container runtime status check may not have completed yet
	* Addresses:
	*   InternalIP:  192.168.95.10
	*   Hostname:    nospam-20201125224657-8457
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 612979e05d104a57abeede850501ae8d
	*   System UUID:                a0da8a75-58b1-48a6-bd41-d9688c36dcab
	*   Boot ID:                    d58fb65e-1de7-441d-b5b1-f4ebc4379466
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.4
	*   Kube-Proxy Version:         v1.19.4
	* Non-terminated Pods:          (5 in total)
	*   Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-nospam-20201125224657-8457                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	*   kube-system                 kube-apiserver-nospam-20201125224657-8457             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	*   kube-system                 kube-controller-manager-nospam-20201125224657-8457    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	*   kube-system                 kube-proxy-lp5tl                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	*   kube-system                 kube-scheduler-nospam-20201125224657-8457             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                550m (6%)  0 (0%)
	*   memory             0 (0%)     0 (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                From     Message
	*   ----    ------                   ----               ----     -------
	*   Normal  NodeHasSufficientMemory  20s (x5 over 21s)  kubelet  Node nospam-20201125224657-8457 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    20s (x5 over 21s)  kubelet  Node nospam-20201125224657-8457 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     20s (x4 over 21s)  kubelet  Node nospam-20201125224657-8457 status is now: NodeHasSufficientPID
	*   Normal  Starting                 8s                 kubelet  Starting kubelet.
	*   Normal  NodeHasSufficientMemory  8s                 kubelet  Node nospam-20201125224657-8457 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    8s                 kubelet  Node nospam-20201125224657-8457 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     8s                 kubelet  Node nospam-20201125224657-8457 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             8s                 kubelet  Node nospam-20201125224657-8457 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	* 
	* ==> dmesg <==
	* [  12.456435] cgroup: cgroup2: unknown option "nsdelegate"
	* [   0.220667] cgroup: cgroup2: unknown option "nsdelegate"
	* [  18.460394] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov25 22:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  26.623808] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov25 22:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [  22.113303] cgroup: cgroup2: unknown option "nsdelegate"
	* [  27.645928] tee (139392): /proc/133068/oom_adj is deprecated, please use /proc/133068/oom_score_adj instead.
	* [Nov25 22:47] cgroup: cgroup2: unknown option "nsdelegate"
	* [  14.853059] cgroup: cgroup2: unknown option "nsdelegate"
	* [   4.854882] cgroup: cgroup2: unknown option "nsdelegate"
	* [  25.488651] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov25 22:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [  10.569182] cgroup: cgroup2: unknown option "nsdelegate"
	* [  12.334177] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov25 22:49] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov25 22:50] cgroup: cgroup2: unknown option "nsdelegate"
	* [   8.365465] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [   0.000004] ll header: 00000000: ff ff ff ff ff ff 72 b2 cf 16 c4 04 08 06        ......r.......
	* [   0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [   0.000002] ll header: 00000000: ff ff ff ff ff ff 72 b2 cf 16 c4 04 08 06        ......r.......
	* [   5.318170] cgroup: cgroup2: unknown option "nsdelegate"
	* [  10.110045] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe4693b29
	* [   0.000003] ll header: 00000000: ff ff ff ff ff ff 32 52 59 8f 3f f6 08 06        ......2RY.?...
	* [  15.049823] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [b1a78d27dac2] <==
	* 2020-11-25 22:50:47.438119 I | etcdserver/membership: added member 8a9c826ac8d5d8c9 [https://192.168.95.10:2380] to cluster 4ea01d85898afdca
	* 2020-11-25 22:50:47.439980 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-25 22:50:47.440105 I | embed: listening for peers on 192.168.95.10:2380
	* 2020-11-25 22:50:47.440185 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/25 22:50:47 INFO: 8a9c826ac8d5d8c9 is starting a new election at term 1
	* raft2020/11/25 22:50:47 INFO: 8a9c826ac8d5d8c9 became candidate at term 2
	* raft2020/11/25 22:50:47 INFO: 8a9c826ac8d5d8c9 received MsgVoteResp from 8a9c826ac8d5d8c9 at term 2
	* raft2020/11/25 22:50:47 INFO: 8a9c826ac8d5d8c9 became leader at term 2
	* raft2020/11/25 22:50:47 INFO: raft.node: 8a9c826ac8d5d8c9 elected leader 8a9c826ac8d5d8c9 at term 2
	* 2020-11-25 22:50:47.527628 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-11-25 22:50:47.528722 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-25 22:50:47.528785 I | etcdserver: published {Name:nospam-20201125224657-8457 ClientURLs:[https://192.168.95.10:2379]} to cluster 4ea01d85898afdca
	* 2020-11-25 22:50:47.528811 I | embed: ready to serve client requests
	* 2020-11-25 22:50:47.528993 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-25 22:50:47.529086 I | embed: ready to serve client requests
	* 2020-11-25 22:50:47.531660 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-25 22:50:47.532066 I | embed: serving client requests on 192.168.95.10:2379
	* 2020-11-25 22:51:02.046990 W | etcdserver: request "header:<ID:15621146531764307368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/namespace-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/namespace-controller\" value_size:127 >> failure:<>>" with result "size:16" took too long (2.264042374s) to execute
	* 2020-11-25 22:51:02.047601 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-nospam-20201125224657-8457\" " with result "range_response_count:1 size:3008" took too long (2.629232834s) to execute
	* 2020-11-25 22:51:02.047862 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (2.255440252s) to execute
	* 2020-11-25 22:51:02.048203 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.252375588s) to execute
	* 2020-11-25 22:51:03.957772 W | wal: sync duration of 1.898397154s, expected less than 1s
	* 2020-11-25 22:51:03.971605 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-nospam-20201125224657-8457\" " with result "range_response_count:1 size:6028" took too long (1.900568169s) to execute
	* 2020-11-25 22:51:03.975727 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (1.838176603s) to execute
	* 2020-11-25 22:51:03.975984 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.16752242s) to execute
	* 
	* ==> kernel <==
	*  22:51:07 up 33 min,  0 users,  load average: 5.31, 5.47, 3.44
	* Linux nospam-20201125224657-8457 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [33bb568702a8] <==
	* I1125 22:50:56.386888       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1125 22:50:57.318688       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1125 22:50:57.429951       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1125 22:50:58.225343       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1125 22:51:02.048645       1 trace.go:205] Trace[1321325920]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kube-controller-manager/v1.19.4 (linux/amd64) kubernetes/d360454/kube-controller-manager,client:192.168.95.10 (25-Nov-2020 22:50:59.282) (total time: 2766ms):
	* Trace[1321325920]: ---"Object stored in database" 2766ms (22:51:00.048)
	* Trace[1321325920]: [2.766355395s] [2.766355395s] END
	* I1125 22:51:02.048823       1 trace.go:205] Trace[1026303146]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner,user-agent:kubectl/v1.19.4 (linux/amd64) kubernetes/d360454,client:127.0.0.1 (25-Nov-2020 22:50:59.791) (total time: 2256ms):
	* Trace[1026303146]: [2.256991653s] [2.256991653s] END
	* I1125 22:51:02.051796       1 trace.go:205] Trace[988627774]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-nospam-20201125224657-8457,user-agent:kubelet/v1.19.4 (linux/amd64) kubernetes/d360454,client:192.168.95.10 (25-Nov-2020 22:50:59.417) (total time: 2634ms):
	* Trace[988627774]: ---"About to write a response" 2631ms (22:51:00.048)
	* Trace[988627774]: [2.634127158s] [2.634127158s] END
	* I1125 22:51:03.973283       1 trace.go:205] Trace[1243427438]: "Create" url:/api/v1/namespaces/kube-system/secrets,user-agent:kube-controller-manager/v1.19.4 (linux/amd64) kubernetes/d360454/tokens-controller,client:192.168.95.10 (25-Nov-2020 22:51:02.057) (total time: 1915ms):
	* Trace[1243427438]: ---"Object stored in database" 1915ms (22:51:00.973)
	* Trace[1243427438]: [1.915882689s] [1.915882689s] END
	* I1125 22:51:03.973608       1 trace.go:205] Trace[862205916]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-nospam-20201125224657-8457,user-agent:kubelet/v1.19.4 (linux/amd64) kubernetes/d360454,client:192.168.95.10 (25-Nov-2020 22:51:02.070) (total time: 1903ms):
	* Trace[862205916]: ---"About to write a response" 1902ms (22:51:00.973)
	* Trace[862205916]: [1.903154792s] [1.903154792s] END
	* I1125 22:51:03.973293       1 trace.go:205] Trace[1721996189]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kubectl/v1.19.4 (linux/amd64) kubernetes/d360454,client:127.0.0.1 (25-Nov-2020 22:51:02.070) (total time: 1902ms):
	* Trace[1721996189]: ---"Object stored in database" 1902ms (22:51:00.973)
	* Trace[1721996189]: [1.902521504s] [1.902521504s] END
	* I1125 22:51:03.976492       1 trace.go:205] Trace[1089128034]: "Get" url:/apis/storage.k8s.io/v1/storageclasses/standard,user-agent:kubectl/v1.19.4 (linux/amd64) kubernetes/d360454,client:127.0.0.1 (25-Nov-2020 22:51:02.137) (total time: 1839ms):
	* Trace[1089128034]: [1.83941337s] [1.83941337s] END
	* I1125 22:51:06.540700       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1125 22:51:06.611867       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	* 
	* ==> kube-controller-manager [3dd42d51528a] <==
	* I1125 22:51:06.614210       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	* I1125 22:51:06.614284       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I1125 22:51:06.614704       1 shared_informer.go:247] Caches are synced for namespace 
	* I1125 22:51:06.614750       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	* I1125 22:51:06.615079       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1125 22:51:06.615835       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	* I1125 22:51:06.618760       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	* I1125 22:51:06.629186       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I1125 22:51:06.718238       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lp5tl"
	* I1125 22:51:06.731021       1 shared_informer.go:247] Caches are synced for taint 
	* I1125 22:51:06.731126       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W1125 22:51:06.731191       1 node_lifecycle_controller.go:1044] Missing timestamp for Node nospam-20201125224657-8457. Assuming now as a timestamp.
	* I1125 22:51:06.731835       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I1125 22:51:06.732278       1 event.go:291] "Event occurred" object="nospam-20201125224657-8457" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nospam-20201125224657-8457 event: Registered Node nospam-20201125224657-8457 in Controller"
	* I1125 22:51:06.732610       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* E1125 22:51:06.765461       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4c7cd7de-5eac-4900-b3de-d4460c3a9a33", ResourceVersion:"225", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63741941457, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0011e96c0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc0011e96e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0011e9700), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000a1c880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0011e9720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0011e9740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc0011e9780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00175b980), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000389748), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000022620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006bad08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000389818)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I1125 22:51:06.766001       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1125 22:51:06.779730       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I1125 22:51:06.809147       1 shared_informer.go:247] Caches are synced for resource quota 
	* E1125 22:51:06.835635       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	* I1125 22:51:06.861072       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* E1125 22:51:06.864508       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I1125 22:51:07.145168       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1125 22:51:07.145209       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1125 22:51:07.161365       1 shared_informer.go:247] Caches are synced for garbage collector 
	* 
	* ==> kube-scheduler [7448fe38e1bb] <==
	* I1125 22:50:54.217743       1 registry.go:173] Registering SelectorSpread plugin
	* I1125 22:50:54.221725       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1125 22:50:54.221907       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1125 22:50:54.221931       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1125 22:50:54.221954       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1125 22:50:54.228048       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1125 22:50:54.234629       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1125 22:50:54.237203       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1125 22:50:54.238724       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1125 22:50:54.239028       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1125 22:50:54.239310       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1125 22:50:54.237279       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1125 22:50:54.238857       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1125 22:50:54.239738       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1125 22:50:54.239405       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1125 22:50:54.239670       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1125 22:50:54.239675       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1125 22:50:54.240148       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1125 22:50:55.085244       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1125 22:50:55.133875       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1125 22:50:55.140385       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1125 22:50:55.149210       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1125 22:50:55.210097       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1125 22:50:55.236640       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* I1125 22:50:57.322127       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2020-11-25 22:50:27 UTC, end at Wed 2020-11-25 22:51:07 UTC. --
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.022259    2437 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.031097    2437 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.037740    2437 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.040767    2437 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.118714    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-flexvolume-dir") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.118998    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-local-share-ca-certificates") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119205    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/5b79f8089a5bcf39aed5917ac6fdc1d0-etcd-data") pod "etcd-nospam-20201125224657-8457" (UID: "5b79f8089a5bcf39aed5917ac6fdc1d0")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119418    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/42563dd2899bd3034447363d305d7208-etc-ca-certificates") pod "kube-apiserver-nospam-20201125224657-8457" (UID: "42563dd2899bd3034447363d305d7208")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119613    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/42563dd2899bd3034447363d305d7208-usr-share-ca-certificates") pod "kube-apiserver-nospam-20201125224657-8457" (UID: "42563dd2899bd3034447363d305d7208")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119710    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-kubeconfig") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119744    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/42563dd2899bd3034447363d305d7208-ca-certs") pod "kube-apiserver-nospam-20201125224657-8457" (UID: "42563dd2899bd3034447363d305d7208")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119817    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-etc-ca-certificates") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119878    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-share-ca-certificates") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119930    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/42563dd2899bd3034447363d305d7208-usr-local-share-ca-certificates") pod "kube-apiserver-nospam-20201125224657-8457" (UID: "42563dd2899bd3034447363d305d7208")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.119977    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-ca-certs") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.120030    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-k8s-certs") pod "kube-controller-manager-nospam-20201125224657-8457" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.120083    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/38744c90661b22e9ae232b0452c54538-kubeconfig") pod "kube-scheduler-nospam-20201125224657-8457" (UID: "38744c90661b22e9ae232b0452c54538")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.120134    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/5b79f8089a5bcf39aed5917ac6fdc1d0-etcd-certs") pod "etcd-nospam-20201125224657-8457" (UID: "5b79f8089a5bcf39aed5917ac6fdc1d0")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.120191    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/42563dd2899bd3034447363d305d7208-k8s-certs") pod "kube-apiserver-nospam-20201125224657-8457" (UID: "42563dd2899bd3034447363d305d7208")
	* Nov 25 22:50:59 nospam-20201125224657-8457 kubelet[2437]: I1125 22:50:59.120232    2437 reconciler.go:157] Reconciler: start to sync state
	* Nov 25 22:51:06 nospam-20201125224657-8457 kubelet[2437]: I1125 22:51:06.727458    2437 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 25 22:51:06 nospam-20201125224657-8457 kubelet[2437]: I1125 22:51:06.754299    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2ee5119e-a60b-4ea7-9071-512c22f77dfd-lib-modules") pod "kube-proxy-lp5tl" (UID: "2ee5119e-a60b-4ea7-9071-512c22f77dfd")
	* Nov 25 22:51:06 nospam-20201125224657-8457 kubelet[2437]: I1125 22:51:06.754371    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-ptvnj" (UniqueName: "kubernetes.io/secret/2ee5119e-a60b-4ea7-9071-512c22f77dfd-kube-proxy-token-ptvnj") pod "kube-proxy-lp5tl" (UID: "2ee5119e-a60b-4ea7-9071-512c22f77dfd")
	* Nov 25 22:51:06 nospam-20201125224657-8457 kubelet[2437]: I1125 22:51:06.754416    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2ee5119e-a60b-4ea7-9071-512c22f77dfd-kube-proxy") pod "kube-proxy-lp5tl" (UID: "2ee5119e-a60b-4ea7-9071-512c22f77dfd")
	* Nov 25 22:51:06 nospam-20201125224657-8457 kubelet[2437]: I1125 22:51:06.754466    2437 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2ee5119e-a60b-4ea7-9071-512c22f77dfd-xtables-lock") pod "kube-proxy-lp5tl" (UID: "2ee5119e-a60b-4ea7-9071-512c22f77dfd")
-- /stdout --
** stderr ** 
	E1125 22:51:07.269838  199866 out.go:286] unable to execute * 2020-11-25 22:51:02.046990 W | etcdserver: request "header:<ID:15621146531764307368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/namespace-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/namespace-controller\" value_size:127 >> failure:<>>" with result "size:16" took too long (2.264042374s) to execute
	: html/template:* 2020-11-25 22:51:02.046990 W | etcdserver: request "header:<ID:15621146531764307368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/namespace-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/namespace-controller\" value_size:127 >> failure:<>>" with result "size:16" took too long (2.264042374s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p nospam-20201125224657-8457 -n nospam-20201125224657-8457
helpers_test.go:255: (dbg) Run:  kubectl --context nospam-20201125224657-8457 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: coredns-f9fd979d6-hjww8 storage-provisioner
helpers_test.go:263: ======> post-mortem[TestErrorSpam]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context nospam-20201125224657-8457 describe pod coredns-f9fd979d6-hjww8 storage-provisioner
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context nospam-20201125224657-8457 describe pod coredns-f9fd979d6-hjww8 storage-provisioner: exit status 1 (152.83732ms)
** stderr ** 
	Error from server (NotFound): pods "coredns-f9fd979d6-hjww8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:268: kubectl --context nospam-20201125224657-8457 describe pod coredns-f9fd979d6-hjww8 storage-provisioner: exit status 1
helpers_test.go:171: Cleaning up "nospam-20201125224657-8457" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20201125224657-8457
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20201125224657-8457: (3.61353147s)
--- FAIL: TestErrorSpam (255.47s)

@priyawadhwa priyawadhwa added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 30, 2020
@priyawadhwa priyawadhwa added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Feb 3, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2021
@rastafarien

Hi, I got a similar issue. It's stuck at this step:
Creating kvm2 VM (CPUs=2, Memory=3000MB, Disk=20000MB) ...
🀦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: creating network: un-retryable: no free private network subnets found with given parameters (start: "192.168.59.0", step: 11, tries: 20)

docker network ls didn't show any network named minikube

Anyway, the only workaround I found was to run:
docker network prune

After that, minikube ran smoothly.
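
The prune helps because the stale subnets belong to leftover bridge networks that no container references any more, and removing them frees the 192.168.x.0/24 ranges minikube scans. Below is a minimal sketch of a more targeted cleanup, assuming minikube tags its networks with the created_by.minikube.sigs.k8s.io=true label; if that label is absent on your version, fall back to matching the network name or to plain docker network prune.

# List candidate leftover minikube networks (label filter is an assumption).
docker network ls --filter "label=created_by.minikube.sigs.k8s.io=true"

# Remove them one by one; docker network rm refuses if a container is still
# attached, which is the safe default here.
for net in $(docker network ls --filter "label=created_by.minikube.sigs.k8s.io=true" --format '{{.Name}}'); do
  docker network rm "$net"
done

# Broader fallback: delete every network no container is using.
docker network prune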

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 25, 2021
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jun 30, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Oct 20, 2021