Tag component caches #9550
Conversation
tools/packaging/kata-deploy/local-build/kata-deploy-binaries.sh
Force-pushed from dbecf55 to c3a0a69
Force-pushed from c3a0a69 to ed2f7c5
/test
Force-pushed from 36a4d8f to 661b3c6
lgtm, thanks @stevenhorsman!
Force-pushed from 661b3c6 to 51fd728
@@ -10,6 +10,7 @@ jobs:
   build-kata-static-tarball-amd64:
     uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
     with:
+      push-to-registry: yes
Hi @stevenhorsman!
In ./.github/workflows/build-kata-static-tarball-amd64.yaml, with push-to-registry=yes, it logs in to quay.io but pushes the image to ghcr.io. Unless I missed something, we should log in to the right registry :D (this also applies to the arm64, ppc64le and s390x workflows).
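For reference, a registry-specific login step could look like the sketch below. This assumes the docker/login-action action and the built-in GITHUB_TOKEN; the step name and credential choice are illustrative, not taken from this PR:

```yaml
# Hypothetical sketch: log in to the registry the artefacts are actually
# pushed to (ghcr.io), rather than quay.io.
- name: Login to the artefact registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
```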
Hmm... I just realized that it's passing the ARTEFACT_REGISTRY_USERNAME and ARTEFACT_REGISTRY_PASSWORD variables, so it's likely logging in to ghcr.io somewhere internally.
Yeah - I was puzzled by the ghcr.io login too, but assumed it must have been done auto-magically by GitHub.
@@ -105,6 +99,7 @@ jobs:
       RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}

     - name: store-artifact ${{ matrix.asset }}
+      if: ${{ matrix.stage != 'release' && (matrix.component == 'agent' || matrix.component == 'coco-guest-components' || matrix.component == 'pause-image') }}
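The RELEASE line above uses the GitHub Actions `&& / ||` ternary idiom to turn the stage into a 'yes'/'no' flag. A small shell sketch of the same logic, as a build script consuming the env might implement it (the function name is hypothetical, not from the PR):

```shell
# Map a stage name to the RELEASE flag: 'yes' only when the stage is
# 'release', 'no' for anything else.
stage_to_release() {
  if [ "$1" = "release" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

stage_to_release "release"   # prints: yes
stage_to_release "test"      # prints: no
```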
This bugged me!
I think you meant:
matrix.stage != 'release' || (matrix.component != 'agent' && matrix.component != 'coco-guest-components' && matrix.component != 'pause-image')
If I'm correct here, then the ifs of the following store-artifact steps need adjusting too.
Yes - you are correct, I think I should have had a ! before (matrix.component == 'agent' || matrix.component == 'coco-guest-components' || matrix.component == 'pause-image'), but your code works too. Thanks for the good spot!
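The two corrected forms, the `!`-prefixed condition and the De Morgan expansion suggested above, are logically equivalent. A quick brute-force shell check over some representative inputs (component names taken from the discussion; 'shim-v2' is just an arbitrary non-matching example):

```shell
# Compare two forms of the condition for every (stage, component) pair:
#   form1: !(stage == 'release' && component in {agent, coco-guest-components, pause-image})
#   form2: stage != 'release' || component not in that set
check() {
  stage="$1"; comp="$2"

  # Form 1: negate the original buggy condition.
  if [ "$stage" = "release" ] && { [ "$comp" = "agent" ] || [ "$comp" = "coco-guest-components" ] || [ "$comp" = "pause-image" ]; }; then
    form1=false
  else
    form1=true
  fi

  # Form 2: the De Morgan expansion from the review comment.
  if [ "$stage" != "release" ] || { [ "$comp" != "agent" ] && [ "$comp" != "coco-guest-components" ] && [ "$comp" != "pause-image" ]; }; then
    form2=true
  else
    form2=false
  fi

  [ "$form1" = "$form2" ] || { echo "mismatch: $stage $comp"; exit 1; }
}

for stage in release test; do
  for comp in agent coco-guest-components pause-image shim-v2; do
    check "$stage" "$comp"
  done
done
echo "equivalent for all cases"   # prints: equivalent for all cases
```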
Ok, I've updated the logic on all the build jobs to actually work now. Thanks!
Force-pushed from 51fd728 to e02e9f2
This will be very useful on peer-pods indeed! Thanks @stevenhorsman !
Force-pushed from e02e9f2 to 71e75d6
/test
- For other projects (e.g. CoCo projects), being able to access the released versions of components is helpful, so push them during the release process.
Signed-off-by: stevenhorsman <[email protected]>
- Set the RELEASE env to 'yes' or 'no', based on whether the stage passed in was 'release', so we can use it in the build scripts.
Signed-off-by: stevenhorsman <[email protected]>
- We don't want to ship certain components (agent, coco-guest-components) as part of the release, but for other consumers it's useful to be able to pull in the components from oras, so rather than not building them, just don't upload them as part of the release. Also make all the archs consistent in not shipping the agent.
Signed-off-by: stevenhorsman <[email protected]>
- CoCo wants to use the agent and coco-guest-components cached artifacts, so tag them with a helpful version to make them easier to get.
Signed-off-by: stevenhorsman <[email protected]>
Force-pushed from 71e75d6 to 7f41329
/test
In kata-deploy-binaries.sh we want to understand if we are running as part of a release, so we need to pass through the RELEASE env from the workflow, which I missed in kata-containers#9550.
Fixes: kata-containers#9921
Signed-off-by: stevenhorsman <[email protected]>
Add the ability to tag certain components in our ghcr cache, rather than just storing them all by digest. In particular, the cloud-api-adaptor implementation of the remote hypervisor wants to use the agent, agent-opa and guest-components.
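With tagged caches, a downstream consumer can fetch a component by a meaningful version instead of a digest. As an illustration only, the registry path and tag scheme below are assumptions, not taken from this PR:

```shell
# Hypothetical: construct an oras pull command for a tagged cached component.
# The repository path and tag are placeholders; the real layout is set by the
# kata-containers workflows.
component="agent"
tag="latest"
ref="ghcr.io/kata-containers/cached-artefacts/${component}:${tag}"

# A consumer would then run: oras pull "$ref"
echo "would run: oras pull ${ref}"
```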