Use upstream docker.io/rancher/k3s images for k3s preset #875
Conversation
Hi @VJftw, thank you for the kind words and your contribution.

First, this is mind-blowing. I remember struggling to make k3s work and to expose the configuration over HTTP, and I really like your solution of packaged addons. Second, thanks for helping me get rid of my custom image: I haven't updated it for a long time, so it is now broken and doesn't work on arm64.

Next, I'm not sure how much this preset is actually used... My original adapted k3s-dind image has 80k downloads, so I guess somebody out there might still be trying to use it. That's why I'd like this change to be as backwards compatible as possible. If I understand correctly, the public API hasn't really changed much (the tests needed only minimal changes), and the changes to the endpoint address and config port are mostly internal. Is this correct? Also, what about the removal of the Port option? Code that uses it won't compile anymore; can we still support custom ports, or is that impossible now? If it's complicated, that's fine: the compilation error will lead people to the docs, and we'll have a short migration guide (a couple of sentences) ready.

Anyway, there is not much left to do here. The 1.19 version can be upgraded to something that actually works and is used by people. This should be documented in the README: it has a list of supported versions per preset and an indicator of whether each works on arm64 (the latest image works; I haven't checked the other ones, but I can test whatever versions you pick to support).

Thanks for taking care of this
I'll give this a bit more love to improve the backwards compatibility, test coverage, and documentation! :)
I've just pushed a few commits to try to cover all the concerns from your last comment, mainly around ensuring these changes are entirely backwards compatible. I've purposely been a little less specific about which versions of k3s we support, as I think there's a good case for primarily focusing on the current versions of Kubernetes/k3s while supporting previous versions where possible. The good news is that we can modify the implementation based on the version if we really need to (like what I've done for <1.17.0). Regarding ARM support, there are ARM images for all the rancher/k3s images, so I'm expecting it to work consistently, but please feel free to test more deeply if you can. Let me know what you think :)
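As a rough illustration of the version-gating approach mentioned above (this is a hedged sketch, not the actual Gnomock code; the function name and the "legacy"/"modern" labels are hypothetical), the "<1.17.0" check could look something like this in shell, using `sort -V` to compare version tags:

```shell
#!/bin/sh
# Hypothetical sketch: choose a behaviour depending on the k3s image tag,
# mirroring the "<1.17.0" gate mentioned in the comment above.

k3s_mode() {
  ver="${1#v}"      # drop the leading "v" from a tag like v1.16.3-k3s1
  ver="${ver%%-*}"  # drop the "-k3sN" suffix
  # sort -V orders version strings; if ver sorts strictly before 1.17.0,
  # it is an older release and gets the legacy code path.
  lowest="$(printf '%s\n%s\n' "$ver" "1.17.0" | sort -V | head -n1)"
  if [ "$lowest" = "$ver" ] && [ "$ver" != "1.17.0" ]; then
    echo legacy
  else
    echo modern
  fi
}

k3s_mode "${1:-v1.26.3-k3s1}"
```

The same idea applies in Go with a semver comparison; the point is only that behaviour can branch on the requested version rather than constraining the supported range.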
Codecov Report

Patch coverage:

Additional details and impacted files

@@            Coverage Diff             @@
##           master     #875      +/-   ##
==========================================
- Coverage   85.87%   85.68%   -0.19%
==========================================
  Files          50       52       +2
  Lines        2350     2459     +109
==========================================
+ Hits         2018     2107      +89
- Misses        173      183      +10
- Partials      159      169      +10

View full report in Codecov by Sentry.
Amazing, no blockers here, but I think we still need to make a decision about supported versions.

According to the docs, since 1.19 each Kubernetes version is supported for one year, and there is also an end-of-life schedule for past versions. Since Gnomock is a testing tool, maybe it should support a bit more than that, so people can test against slightly older versions; to me it makes sense to support 1.5-2 years back instead of 1 year. This would keep the number of supported versions limited to 4-8, and some really old versions would be removed. Or we can just stick to the official release policy and keep versions at most 1 year old in our tests.

Regardless of what you choose, the k3s job is still failing due to a test timeout. It is possible that one container is just stuck somewhere so the entire job times out, or everything works and the timeout just needs to be increased. The fix depends on what you suggest doing about version support. I don't have a strong preference here.
Let's stick with the official Kubernetes policy (1 year), as it feels quite difficult to claim we support something that's out of our control and EOL upstream. We can say that older versions may work, but we won't actively support them or diminish support for newer versions in their favour. I also think it's unlikely that the problem scenario will happen; if it does, I think we should recommend using a custom preset instead. Let me know if there's anything else you spot; happy for you to override my thoughts on the support policy too if you feel strongly :)
Thank you for the contribution, this is great!
👋, thanks for this awesome tool!

I've been trying to use the k3s preset, but it currently only supports up to k3s v1.21.2 and I needed to test against more recent versions.

To fix this (hopefully forever), this PR updates the preset to use Rancher's official k3s images; their usage is described here: https://docs.k3s.io/advanced#running-k3s-in-docker

The fun/fiddly part was making use of k3s's /var/lib/rancher/k3s/server/manifests auto-deploy directory to automatically create a pod that serves the kubeconfig k3s generates over httpd. This pod manifest is base64-encoded and written by the overridden entrypoint before calling k3s server ....

Let me know if there are any questions or if I've missed something from the contributing guidelines. I'm more than happy to update as needed :)
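For readers following along, the overridden entrypoint described above can be sketched roughly like this. It is a hedged illustration, not the PR's exact script: the environment variable name (KUBECONFIG_MANIFEST_B64), the manifest filename, and the ENTRYPOINT_RUN guard are all hypothetical; only the /var/lib/rancher/k3s/server/manifests auto-deploy directory and the `k3s server` handoff come from the description.

```shell
#!/bin/sh
set -eu

# Decode a base64-encoded pod manifest into k3s's auto-deploy directory;
# k3s applies every manifest found there when the server starts, which is
# how the kubeconfig-serving httpd pod gets created automatically.
install_manifest() {
  mkdir -p "$1"
  printf '%s' "$2" | base64 -d > "$1/kubeconfig-server.yaml"
}

# Only act as an entrypoint when explicitly asked to (guarded so the
# function above can also be sourced and tested in isolation).
if [ "${ENTRYPOINT_RUN:-0}" = "1" ]; then
  install_manifest /var/lib/rancher/k3s/server/manifests \
    "${KUBECONFIG_MANIFEST_B64:?manifest is required}"
  # Hand over to the real server process with whatever args were given.
  exec k3s server "$@"
fi
```

Base64-encoding the manifest sidesteps YAML quoting and newline issues when passing it through a single environment variable into the container.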