Support EndpointSlice addressType "FQDN" #10080
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@ChristianAnke, I did these steps:
I was able to get a response code of 200. So can you write your own instructions based on the above commands and the manifests produced by the kubectl commands above, using the flag --dry-run=client? Edit the manifests as required, and also provide the curl command that does not get a 200 response. Then add the output of the related commands. Then copy/paste the entire set of commands, instructions, and manifests for all related objects, so someone can reproduce the problem you are reporting. Next, the new-issue template asks questions so that there is data available to analyse the reported problem. You have not answered any of those questions; there is no info even on the controller version. So please edit your issue description and answer the questions asked in the new-issue template, formatting the information in markdown with code snippets. |
@longwuyuan, thanks for the answer. I deliberately provided a Kubernetes manifest that reflects what is required to reproduce the issue, so I do not understand why you came up with a completely different setup than the one I provided. Furthermore, I did fill out the template with the asked questions; I just removed the un-commented parts because I had no idea how they were meant to be used, since nothing of the template was visible in preview mode. The template is this:
|
Understood.
|
Hi @ChristianAnke, why would you play with the low-level EndpointSlice API? Have you tried a Service of type ExternalName? https://kubernetes.io/docs/concepts/services-networking/service/#externalname |
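For readers unfamiliar with the alternative being suggested here, a minimal sketch of an ExternalName Service follows. The Service name and the ElastiCache hostname are placeholders, not values from this thread:

```yaml
# Hypothetical sketch of the suggested alternative: a Service of type
# ExternalName. Cluster DNS returns a CNAME for the given FQDN; no
# Endpoints/EndpointSlices are involved and no proxying takes place.
apiVersion: v1
kind: Service
metadata:
  name: my-redis          # placeholder name
  namespace: default
spec:
  type: ExternalName
  externalName: my-cluster.abc123.use1.cache.amazonaws.com  # placeholder FQDN
```

Note that ExternalName works purely at the DNS level, which is presumably why the thread keeps returning to EndpointSlices for the TCP-proxying use case.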
/triage needs-information |
This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out. |
I am verifying that the issue is still present (kubernetes-version=1.27.4). I ran the configuration from the description; here is the full log output:
ingress logs:
|
@tombokombo you can't use that here. @longwuyuan, is there any chance this can be fixed?
|
I don't fully understand the tiny details that the question would imply. But if you are asking whether an EndpointSlice can be created manually for the purpose of the controller picking it up, in lieu of its own function to get EndpointSlices, as a feature, then it's not likely in the near future. It will also help to know, in layman's terms, the bigger-picture problem that is blocking use of ingress-nginx controller functions and that would get fixed if you created an EndpointSlice and made the controller use it for routing. Hoping for some elaboration on the reference to a |
I'm working with @Ghilteras on this. "TCP service" is as per https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/. Because we want to use the service as a proxy to an FQDN, we created a k8s Service that has no selectors, and an EndpointSlice of type FQDN that maps to the Service (which hopefully creates the endpoints for the "tcp" service). But we are getting:
Code refs:
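The exact manifests were not captured in this thread; the following is a hedged reconstruction of the setup described (selector-less Service plus a manually created FQDN EndpointSlice). Names, namespace, and the ElastiCache hostname are placeholders:

```yaml
# Sketch, assuming the setup described above: a Service with no selector,
# plus an EndpointSlice of addressType FQDN tied to it via the
# kubernetes.io/service-name label.
apiVersion: v1
kind: Service
metadata:
  name: my-proxy          # placeholder; matches "my-proxy" mentioned later
  namespace: default
spec:
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-proxy-1
  namespace: default
  labels:
    kubernetes.io/service-name: my-proxy   # associates the slice with the Service
addressType: FQDN
endpoints:
  - addresses:
      - my-cluster.abc123.use1.cache.amazonaws.com  # placeholder FQDN
    conditions:
      ready: true
ports:
  - port: 6379
    protocol: TCP
```

Per the Kubernetes EndpointSlice documentation linked in the issue, FQDN is a valid addressType; the complaint in this thread is that the controller does not route to it.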
|
Looks like you want to host a proxy inside the cluster, listening on port 6379, and you are expecting that a connection to this LB:6379 will in turn connect to an AWS Redis instance.
|
@longwuyuan I think there might be a misunderstanding so let me address your questions in a different order:
The Redis (ElastiCache) is only reachable inside the k8s cluster VPC (our ElastiCache shares the same VPC as the k8s cluster); we're trying to EXPOSE the ElastiCache for access outside the VPC. As mentioned, the proxy is just a normal k8s Service (my-proxy); the expose is through another LoadBalancer Service, as in https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
then proxying through the tcp-services map, passed via args to the nginx controller,
and we make the NLB public ("external: false") so that redis is reachable from outside the VPC
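A sketch of the tcp-services wiring described in the steps above, following the exposing-tcp-udp-services guide linked earlier. The namespace, Service name, and ConfigMap name are assumptions; the data format (`<external port>: <namespace>/<service>:<service port>`) is the one documented by ingress-nginx:

```yaml
# Assumed ConfigMap for the controller's --tcp-services-configmap flag,
# mapping external port 6379 to the selector-less proxy Service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service name>:<service port>"
  "6379": "default/my-proxy:6379"
```

The controller would then be started with `--tcp-services-configmap=ingress-nginx/tcp-services`, and port 6379 exposed on the controller's LoadBalancer Service.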
That's interesting. We feared that might be the case, but are you saying that basically "ingress for TCP traffic to a k8s cluster" will never be supported because the k8s spec says so? |
|
My comments may not reflect the same opinions as others', so please wait and see if others comment on this |
@longwuyuan please see inline below the comments
The issue is that the ingress controller does not recognize EndpointSlice of type FQDN
Since Endpoints are deprecated, we have just created an EndpointSlice
Not really. We would expect the Service to pick up the EndpointSlice as per the k8s documentation, which works fine for an EndpointSlice of type IPv4; but for an EndpointSlice of type FQDN, the controller thinks it's an IPv6 address. This looks like a bug, not a feature request. Shouldn't we change the
That's what we are doing with haproxy to circumvent the fqdn/ipv6 EndpointSlice bug
I don't think you can tie a Service to another Service, though. This could work if we could use an Ingress, but we can't. That's why we are hooking the Service up with the EndpointSlice |
@Ghilteras thanks for the update. it helped
On the Redis-proxy part, my thoughts were that I found some hits when searching, like https://artifacthub.io/packages/search?ts_query_web=redis proxy&sort=relevance&page=1
On a complete tangent: if I were to implement this, I would have the frontend consume a configurable env var for the AWS Redis ElastiCache FQDN, instead of redis queries first coming to a K8S cluster and then getting bounced off to AWS. The efficiency & security of K8S as the target of redis queries ultimately destined for AWS ElastiCache would only be compromised if there was some really unpleasant design aspect forcing you to do this. But these are my opinions; it is clear that a developer needs to comment here. There is a really acute shortage of developer time, so the choices are to join the community meeting https://github.com/kubernetes/community/tree/master/sig-network (and of course wait here for comments from community experts and developers) |
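The env-var alternative suggested above could look like the following. This is a hypothetical sketch; the Deployment name, image, variable names, and FQDN are all placeholders, not anything from this issue:

```yaml
# Sketch of the suggested alternative: the client workload reads the
# ElastiCache endpoint from environment variables and connects directly,
# instead of bouncing redis traffic through an in-cluster proxy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: app
          image: example/frontend:latest   # placeholder image
          env:
            - name: REDIS_HOST             # assumed variable name
              value: my-cluster.abc123.use1.cache.amazonaws.com
            - name: REDIS_PORT
              value: "6379"
```

This only applies when the clients run inside the same VPC; it does not address the thread's actual goal of exposing ElastiCache outside the VPC.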
We already use a TCP proxy (haproxy) in the meantime while we wait for that |
@Ghilteras sorry for not being clear enough.
Hope you have more info now. |
The previous post contains enough data, already supplied by @BulatSaif and @ChristianAnke. If you require additional information, please let us know.
I think we are all aware of that, that's why we filed this issue against NGINX Ingress repo and not against k8s
Again, we are not asking to change how tcp/udp port expose works, we are asking to fix a bug
IMHO, bugs that are easy to fix (and this one looks like it should not require a lot of effort) can be prioritized without dramatically altering the project's roadmap. |
/remove-kind feature |
@Ghilteras thanks for your comments. |
Just circling back to this to check whether someone can accept the triage and remove the needs-more-information tags |
@Ghilteras: The label In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I'm using k8s v1.29.1 and was using helm chart v4.8.3.
|
I am running into this bug. I need to deploy a helm application multiple times and load balance it for reliability / no-downtime upgrades. Deploying the application multiple times in the same namespace will not work due to automation we have, so as a work-around I attempted to deploy the application in two separate namespaces. The idea was to load balance them using EndpointSlices. I was hoping to use a combination of The idea on paper: The overall setup actually works. When tested without Only when attempting to connect through
We are currently running The current work-around I have is to specify This is my currently invalid setup.
To someone following along and wanting my workaround, you would change the endpoint slice to something like this:
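The original example did not survive in this thread; the following is a hedged sketch of the work-around as described (switching the slice from addressType FQDN to IPv4 with resolved addresses). The names, namespace, port, and IP are placeholders:

```yaml
# Sketch of the described work-around: an IPv4 EndpointSlice in place of
# the FQDN one. Resolved IPs of managed services can change, so this
# mapping may need periodic refreshing.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-1        # placeholder
  namespace: my-namespace   # placeholder
  labels:
    kubernetes.io/service-name: my-service  # placeholder backing Service
addressType: IPv4
endpoints:
  - addresses:
      - 10.0.12.34          # placeholder: the resolved IP of the target
    conditions:
      ready: true
ports:
  - port: 8080              # placeholder port
    protocol: TCP
```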
|
@antoniolago which component did you update to v4.10.1? @sig-piskule the whole point of this bug is that you cannot use |
Hello, that would be the ingress-nginx helm chart. |
I don't see that EndpointSlices have been updated in the last few years, so I don't see how bumping the chart would do anything for this bug. |
Hi,

If the expectation is that the project will support & maintain manual creation of EndpointSlices, then please note that there are no resources to work on that kind of support or maintenance.

If the expectation is that the project will support & maintain manual creation of EndpointSlices for the ultimate goal of routing TCP/UDP traffic from outside the cluster to pods inside the cluster or to an FQDN, then this is not going to be worked on anytime in the foreseeable future. This is because the project cannot support features and use-cases that are not close to, and implied by, the Ingress API specs & functionality. There is just not enough developer time available to maintain all the features that are far away from the Ingress API's implications, and the requirement of securing the controller by default while working on the Gateway API is a higher priority.

It also seems a fair expectation that a user should be able to create an EndpointSlice and configure it with an FQDN destination. But this project is primarily an ingress controller, and there was never a promise made to support/maintain manual creation of EndpointSlices. There are many other features the project provides that are not part of the Ingress API, but those were done when conditions were favorable in terms of expectations and resources.

Thus this issue is not really a bug as such. Allowing creation of EndpointSlices would be a fringe feature compared to the routing of HTTP/HTTPS traffic from outside the cluster to pods inside the cluster. |
Agreed, but it did for me. |
@antoniolago I'm saying that I don't think it fixed it; rather, you don't see the error anymore because you are not leveraging the slice. @longwuyuan this project has always been stretched thin; this is not new. What is not clear is the priority of bugs vs features. I frankly don't understand why a bug like this is allowed to persist while other things are prioritized, especially since fixing EndpointSlices, a feature that is not end-of-life, should not require a lot of effort. Also, you mentioned manual creation? I don't understand what you mean by that; these are definitely automated IaC manifests that we lay down in k8s along with the EndpointSlice. |
There is no EndpointSlice-creation procedure in the docs AFAIK: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/ |
When configuring an EndpointSlice with addressType "FQDN", it should be handled correctly:
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
Currently the following configuration is accepted, but not working when accessing the Ingress endpoint:
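The configuration referenced in the issue description was not captured in this copy; the following is a minimal sketch of a configuration of the kind described (an Ingress backed by a selector-less Service whose endpoints come from a manually created FQDN EndpointSlice). All names, hosts, and the FQDN are placeholders:

```yaml
# Hedged reconstruction of the reported setup; the Ingress is accepted
# but traffic to it reportedly fails because the controller mishandles
# the FQDN addressType.
apiVersion: v1
kind: Service
metadata:
  name: external-backend    # placeholder
spec:
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-backend-1
  labels:
    kubernetes.io/service-name: external-backend
addressType: FQDN
endpoints:
  - addresses:
      - backend.example.com # placeholder FQDN
    conditions:
      ready: true
ports:
  - port: 80
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-backend
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-backend
                port:
                  number: 80
```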
Error when accessing URL:
Requires Kubernetes Version: v1.21