
Can't load the image that is in the localhost registry #19155

Open
smartroaddev opened this issue Jun 28, 2024 · 4 comments
Labels

  • co/podman-driver — podman driver issues
  • kind/bug — Categorizes issue or PR as related to a bug.
  • lifecycle/stale — Denotes an issue or PR has remained open with no activity and has become stale.
  • os/linux
  • priority/backlog — Higher priority than priority/awaiting-more-evidence.

Comments

@smartroaddev

What Happened?

I used podman to build an image into the localhost registry, then tried to load it with the "minikube image load" command, but it failed. Is there a reason images in the localhost registry aren't allowed to be loaded? I also tested with another registry name that doesn't contain a "." and the same thing happened.

Attach the log file

[Screenshot attached: 2024-06-28 15-48-25]

Operating System

Redhat/Fedora

Driver

Podman

@afbjorklund
Collaborator

afbjorklund commented Jun 28, 2024

Try without the "localhost/" prefix; it is not a real registry, just an annoying prefix that podman adds.

The full name docker.io/localhost/... is buggy: minikube was not supposed to add the default registry when a host is already there...

@afbjorklund afbjorklund added co/podman-driver podman driver issues os/linux labels Jun 28, 2024
@smartroaddev
Author

Actually, I've tried other registry names and they work. But does minikube have a plan to support the registry name used for locally built images?

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jun 28, 2024
@afbjorklund
Collaborator

afbjorklund commented Jun 28, 2024

Well, it's a bug. It is not supposed to add any registry when one is already present; even if it is a fake host, it is still there.

Possibly related to this hack:

// addRepoTagToImageName makes sure the image name has a repo tag in it.
// in crictl, images in the list have the repo tag prepended to them,
// for example "kubernetesui/dashboard:v2.0.0" will show up as "docker.io/kubernetesui/dashboard:v2.0.0"
// warning: this is only meant for kubernetes images where we know the GCR addresses have .io in them,
// not meant to be used for public images
func addRepoTagToImageName(imgName string) string {
        if !strings.Contains(imgName, ".io/") {
                return "docker.io/" + imgName
        } // else it already has a repo name, don't add anything
        return imgName
}
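
For illustration (an added sketch, not part of the original comment, using a hypothetical image name "localhost/myapp:latest"), running the quoted check against a localhost-prefixed name shows where the broken docker.io/localhost/... reference comes from:

package main

import (
        "fmt"
        "strings"
)

// same logic as the quoted helper, restated so this sketch is runnable on its own
func addRepoTagToImageName(imgName string) string {
        if !strings.Contains(imgName, ".io/") {
                return "docker.io/" + imgName
        }
        return imgName
}

func main() {
        // "localhost" contains no ".io/", so the default registry is prepended
        // even though a host part is already present in the name.
        fmt.Println(addRepoTagToImageName("localhost/myapp:latest"))
        // Output: docker.io/localhost/myapp:latest
}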

That helper should use the real parsing code instead...


https://pkg.go.dev/github.com/distribution/reference#ParseNormalizedNamed

// ParseImageName parses a docker image string into three parts: repo, tag and digest.
// If both tag and digest are empty, a default image tag will be returned.
func ParseImageName(image string) (string, string, string, error) {
        named, err := dockerref.ParseNormalizedNamed(image)
        if err != nil {
                return "", "", "", fmt.Errorf("couldn't parse image name %q: %v", image, err)
        }

        repoToPull := named.Name()
        var tag, digest string

        tagged, ok := named.(dockerref.Tagged)
        if ok {
                tag = tagged.Tag()
        }

        digested, ok := named.(dockerref.Digested)
        if ok {
                digest = digested.Digest().String()
        }
        // If no tag was specified, use the default "latest".
        if len(tag) == 0 && len(digest) == 0 {
                tag = "latest"
        }
        return repoToPull, tag, digest, nil
}
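
A minimal sketch (added here for illustration, assuming the github.com/distribution/reference module and the same hypothetical image name as above) of how ParseNormalizedNamed keeps a registry host such as "localhost" intact and only falls back to docker.io when no host is present:

package main

import (
        "fmt"

        "github.com/distribution/reference"
)

func main() {
        for _, img := range []string{"localhost/myapp:latest", "myapp:latest"} {
                named, err := reference.ParseNormalizedNamed(img)
                if err != nil {
                        fmt.Println("parse error:", err)
                        continue
                }
                // Domain and Path split the normalized name into registry host and repository.
                fmt.Printf("%s -> domain=%s path=%s\n",
                        img, reference.Domain(named), reference.Path(named))
        }
        // Expected, per the package's normalization rules:
        //   localhost/myapp:latest -> domain=localhost path=myapp
        //   myapp:latest           -> domain=docker.io path=library/myapp
}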

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 26, 2024