nginx-proxy only sees docker's virtual interface (and IP) #133
Could you describe how you're generating this HTTP request? What URL are you using, what command are you running, or what are you doing in the browser?
First of all, thanks for your assistance. The HTTP request is being generated by my web browser, running on my local machine. The server that is hosting nginx-proxy is a remote VPS. The reason why I'm using …
I just wanted to add that I have recently switched on the …
Just to confirm, you're putting a DNS name that resolves to the public IP of your VPS (or the public IP itself) in the browser and you're seeing this behavior? If you're running with `--iptables=false`, … Also, it would probably be good to know what version of Docker you're running. One other thing that could be involved here is the userland proxy (`docker-proxy`).
This looks possibly relevant to my issue as well.
Yes, it's an existing DNS name that resolves to the public IP of the VPS. As far as I can see, the iptables rules that were applied initially by Docker are still in place, even after restarting Docker with `--iptables=false`. I am running Docker 1.5 on Ubuntu 14.04, and the VPS public IP is v4.
I think it was a bug in boot2docker; I run it in Ubuntu and it works well.
I believe this issue is related to moby/moby#14856. The solution might be adding `--userland-proxy=false` to the daemon options (see the sketch below).
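A sketch of how a daemon flag like that was typically applied on Ubuntu in the Docker 1.x era; the file path and variable follow the stock packaging of that time and are an assumption, not part of the original comment:

```sh
# /etc/default/docker (read by the sysvinit/upstart Docker service on Ubuntu 14.04)
# Disable the userland proxy so published ports are handled by iptables NAT,
# which preserves the client's source address.
DOCKER_OPTS="--userland-proxy=false"
```

After editing, restart the daemon, e.g. `sudo service docker restart`.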
I have this same problem with CoreOS. This parameter did not help on CoreOS (1068.9.0 and 1122.0.0).
I have the same issue explained here: …
Same problem here 😞
Anyone resolve this?
Same problem :(
I just want to confirm that the problem is still present.
Same problem.
Hi, if you are using a docker-compose file, add the following field to your nginx service settings (see the sketch below):
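The field itself was not preserved in this extract; based on the follow-up replies, which discuss attaching services to the host network, it was presumably along these lines (an assumption, not the verbatim original):

```yaml
services:
  nginx:
    # Attach nginx directly to the host's network stack so it sees
    # real client IPs instead of the Docker bridge gateway address.
    network_mode: host
```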
Now the user's real IP will be in the `x-real-ip` header of your web services.
@wandebandera Hi. I tried this too. Once all services are attached to the host network, the service names can no longer be resolved. Any solution? I'm using a Mac, and port 80 is not mapped to the host. Related link: https://stackoverflow.com/questions/43349996/docker-cannot-link-containers-in-net-host-mode
I'm getting it with this: …
There is no problem now.
@emrecanozkok which block does this go into, and is that a complete directive? Were you still using the bridge network too?
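For reference, directives in this area usually come from nginx's ngx_http_realip_module; a hypothetical example (not necessarily the directive elided above) looks like this:

```nginx
# Trust forwarded-address headers coming from the Docker bridge subnet
set_real_ip_from 172.17.0.0/16;
# Replace the client address with the value of the X-Real-IP header
real_ip_header X-Real-IP;
```

These go inside an `http`, `server`, or `location` block; note they only help when some upstream proxy actually sets the header with the true client address.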
I was also having this issue with local development. For me, it was due to updating my hosts file and pointing all my domains at my localhost IP (127.0.0.1). Something about that address always has it resolve via Docker's virtual interface. When I changed it so that the domains pointed at my eth0 interface address (192.168.1.2, etc.), the real IP addresses suddenly came through in the reverse proxy container and in the container behind it.
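For illustration, with hypothetical domain names, the hosts-file change amounts to something like this:

```
# Before: loopback routing ends up showing the docker bridge address
# 127.0.0.1    app.example.test api.example.test

# After: resolve via the LAN interface address instead
192.168.1.2    app.example.test api.example.test
```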
Hi all! I also had the problem that my installation always changed the source IP on packets sent to containers on e.g. the bridge interface. I found a vague MASQUERADE rule which causes this; unfortunately I still cannot explain how it gets there after every restart (most likely it is not created by Docker, since I didn't find it on other Docker installations).

I found it by listing the NAT POSTROUTING chain, and the rule masquerades everything. This does not make sense, since only traffic from the containers to other (external) networks should be rewritten; you should find other, narrower rules in POSTROUTING for that in your installation. The rule can be deleted (please don't run this on any system without understanding it in detail); a reconstruction of the commands follows below.

Best regards,
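The commands and the rule were not preserved in this extract; a sketch of what they plausibly were, based on the description (the catch-all rule shown is an assumption):

```sh
# List the NAT POSTROUTING rules with counters and rule numbers
sudo iptables -t nat -L POSTROUTING -v --line-numbers

# A catch-all rule like this masquerades all traffic:
#   -A POSTROUTING -j MASQUERADE

# Delete it only if you understand the consequences for your host:
sudo iptables -t nat -D POSTROUTING -j MASQUERADE
```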
I found the solution on the Traefik page and used it with jwilder/nginx-proxy. The relevant part is:

```yaml
ports:
  # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
  - target: 80
    published: 80
    mode: host
  # Listen on port 443, default for HTTPS
  - target: 443
    published: 443
    mode: host
```

The main point is that you don't have to put the whole container into host networking mode, just those ports.
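This relies on the Compose long-form port syntax (file format 3.2+). With `mode: host` the port is bound directly on the host (in swarm mode it bypasses the ingress routing mesh), which is what preserves the client's source address.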
Your router works the same way: it bridges your internal network to your ISP and performs Network Address Translation between the two. That's why you don't see the IP your router assigned to you when you check your "public" IP: you actually see the ISP side of the translated connection. NAT is a hack to solve the address exhaustion of IPv4; it works well, but it is a pain.

Hence: Nginx will never see an external IP if it is behind Docker's network bridges. That's why the only way to do this is to use the host network.
I encountered this issue today and discovered a reason why this can happen that isn't (directly) documented here, so I wanted to add it to the conversation. Most versions of Docker will use the userland proxy, rather than iptables/NAT, for sending traffic to nginx-proxy if the Docker daemon is configured to use IPv6. The iptables/NAT capabilities didn't exist for Docker's IPv6 networking stack until recently (moby/moby#41622), so the userland proxy was required to compensate in this case. This causes incoming connections to carry a container-local IP address rather than the real client IP. As this is now fixed, there is light at the end of the tunnel if IPv6 support is the reason this doesn't work for you, as it was for me.
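A minimal sketch of the corresponding daemon setting, assuming your Docker version already routes IPv6 through iptables/NAT so the userland proxy is no longer needed (verify this for your version before applying):

```json
{
  "userland-proxy": false
}
```

This goes in `/etc/docker/daemon.json`; restart the daemon afterwards.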
Windows / macOS hosts AFAIK don't work with this (I don't have either available to test, but I have read those platforms are more problematic, and I'm not aware of any progress being made with them). If it's helpful to those on Linux hosts, the following appears to work. This is an observation from an IPv6-enabled VPS (Vultr) running Ubuntu 22.10 with Docker Engine 20.10.22 and Docker Compose 2.14.1:
Below I use …
Many thanks @polarathene for this detailed howto. It helped me to better understand the IPv6 Docker documentation. I share here the Docker daemon configuration I came up with:
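The configuration itself was not preserved in this extract; the following is a sketch of what such a `/etc/docker/daemon.json` could look like, with illustrative ULA prefixes and pool sizes (all values are assumptions, not the commenter's actual settings):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:0:0:1::/64",
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 16 },
    { "base": "172.18.0.0/15", "size": 24 },
    { "base": "fd00:1::/56", "size": 64 }
  ]
}
```

Listing the IPv4 pools explicitly like this also keeps 192.168.0.0/16 free for manually configured subnets, matching the note further down.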
A few notes below. It is recommended to use ULA addresses. From the IPv6 Docker doc: …
Configuring the IPv6 subnet pools is tricky. I had to try many prefix length / size configurations to understand what was possible and what was not. This note from the Docker documentation is important: …
Configuring IPv6 subnet pools enables declaring IPv6 networks with no subnet specified. The network configuration from @polarathene's example above becomes something like the sketch below:
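A sketch of a compose network that relies on the daemon's subnet pools; the network name is a placeholder and the original snippet was not preserved:

```yaml
networks:
  frontend:
    # No subnet specified: Docker allocates one from default-address-pools
    enable_ipv6: true
```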
If the above example produces an error saying no available, non-overlapping address pool could be found, then you'll have to change the subnet pools' prefix length (up) and / or size (down). I configured IPv4 subnet pools as well, following a helpful blog post. I keep 192.168.0.0/16 out of the pools for manually configured subnets.
You are welcome! I also participated in the review and revisions of the Docker IPv6 docs you linked; they are in much better shape than they used to be! 😝 I invested a lot of time going through the same pains (or worse, haha), so I'm glad to hear it's helpful! ❤️
This was my advice, and I emphasized that it was important to document the default IPv4 pools, along with an example of adding an IPv6 pool. I don't think the IPv6 docs are the best location for that; they may relocate the documentation on default pools to another page in future. I thought that all you needed to know about configuring the pools was covered in the IPv6 docs: https://docs.docker.com/config/daemon/ipv6/#dynamic-ipv6-subnet-allocation Perhaps the terminology was a bit too much and not as easy to follow when first exposed to it? I would suggest the …

I wasn't happy about their decision to use the documentation IPv6 range while they mixed in actual private-range IPv4 subnets. I couldn't win the Docker reviewers over on that inconsistency, but if you look at the IPv6 docs I wrote for Docker Mailserver, you will get some better advice: …
You don't need to set this to be so large, btw; it's unrelated to the IPv6 pools. That is for the default bridge network (`docker0`).
In the docs I wrote, I assign a …
That is only relevant to …
My front-facing `nginx-proxy` container doesn't seem to see the real IP a connection is coming from; here is an example: … The IP 172.17.42.1 is actually from the virtual interface Docker has created (`docker0`). For this reason, even if I set Nginx to put the real IP in a header, it's all for nothing, since Nginx can't see the real IP to start with. So the question is: how do I set up `nginx-proxy` to see the real IP a connection is coming from? Is this something that should rather be adjusted in the Docker daemon instead? Thanks!