
k8s kube-apiserver report some errlog when install kubevirt v1.0.0 #10338

Closed
caijian76 opened this issue Aug 25, 2023 · 15 comments
Labels
kind/bug lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@caijian76

caijian76 commented Aug 25, 2023

E0825 16:18:50.570760       1 controller.go:113] loading OpenAPI spec for "v1.subresources.kubevirt.io" failed with: Error, could not get list of group versions for APIService
I0825 16:18:50.570782       1 controller.go:126] OpenAPI AggregationController: action for item v1.subresources.kubevirt.io: Rate Limited Requeue.
E0825 16:18:50.573848       1 controller.go:113] loading OpenAPI spec for "v1alpha3.subresources.kubevirt.io" failed with: Error, could not get list of group versions for APIService
I0825 16:18:50.573859       1 controller.go:126] OpenAPI AggregationController: action for item v1alpha3.subresources.kubevirt.io: Rate Limited Requeue.

The same log lines repeat about every minute.

@aburdenthehand
Contributor

Does KubeVirt install regardless? We have seen this issue in the past, where KubeVirt installs successfully but the error still gets logged: #9725

If not, it might help to know your environment details.

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 28, 2023
@starcraft66

I'm seeing this constant spam in my kube-apiserver logs.

@aburdenthehand
Contributor

@starcraft66 What version of k8s and kubevirt are you running?

@starcraft66

I'm on kubernetes 1.27.8 at the moment.

@berlineric

Same here.
k8s 1.26.8
kubevirt 1.1.0

The api-server logs are heavily spammed.

@aburdenthehand
Contributor

@fossedihelm (since you were also pinged on #9725), are you able to shed any light on this?

@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@aburdenthehand
Contributor

/remove-lifecycle rotten

@kubevirt-bot kubevirt-bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 29, 2024
@tiraboschi
Member

Checking the virt-api pod logs, I see:
2024-04-07T09:45:21.077587774Z {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1097","timestamp":"2024-04-07T09:45:21.072417Z"}

So virt-api sets a rate limit of 5 queries per second with a burst of 10.

This limit is probably exceeded during installation, when many requests arrive in a short sequence. It simply means we are hitting the rate limit and the request is delayed by requeuing, which is harmless as long as the install then completes fine.

Whether it occurs depends on the speed of the hardware; maybe we can fine-tune the rate limit on virt-api to prevent this noise.
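As a rough illustration (a sketch in Python, not KubeVirt's actual Go implementation), a 5 QPS / 10 burst token bucket admits at most 10 back-to-back requests; anything beyond that is rejected until tokens refill, which is what forces the aggregator to requeue:

```python
class TokenBucket:
    """Token-bucket rate limiter: `qps` tokens added per second, up to `burst` capacity."""

    def __init__(self, qps, burst):
        self.qps = qps
        self.burst = burst
        self.tokens = burst  # bucket starts full
        self.last = 0.0      # simulated clock of the last call

    def allow(self, now):
        # Refill tokens for the elapsed simulated time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: caller must requeue

bucket = TokenBucket(qps=5, burst=10)
# 20 requests arriving at the same instant: only the burst of 10 gets through.
results = [bucket.allow(now=0.0) for _ in range(20)]
print(sum(results))  # prints 10
```

An install burst of many APIService/OpenAPI requests behaves like the 20 simultaneous requests above: the first 10 succeed, the rest are delayed and retried, producing the "Rate Limited Requeue" log lines.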

@xpivarc
Member

xpivarc commented Apr 10, 2024

The root cause is the missing /openapi/v3 endpoint. It should not have any functional impact. We would welcome a PR if anyone is interested.
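For context on what a fix would need to serve: the aggregator discovers per-group OpenAPI v3 specs through a discovery document at /openapi/v3. The shape below is illustrative only (the group/version is taken from the logs above; the hash query parameter is elided):

```json
{
  "paths": {
    "apis/subresources.kubevirt.io/v1": {
      "serverRelativeURL": "/openapi/v3/apis/subresources.kubevirt.io/v1?hash=..."
    }
  }
}
```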

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 9, 2024
@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 8, 2024
@kubevirt-bot
Contributor

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot
Contributor

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


7 participants