
Unable to choose outbound (external) IP for containers #30053

Closed
mitar opened this issue Jan 11, 2017 · 50 comments · Fixed by #40579
Labels
area/networking kind/enhancement
Comments

@mitar

mitar commented Jan 11, 2017

In single-host mode (no swarm or anything more complicated) I have a host with multiple public IPs. It seems there is no way to configure which of those IPs containers use for outbound communication; the primary IP on the host is always used. I need different containers to be seen on the Internet as using different IPs.

My use case is a mail server. I have an extra IP allocated to the server for sending e-mails so that forward and reverse DNS entries can match. The other IP address is used for HTTP virtual hosting and has many different DNS entries. Additionally, using an extra IP for a dedicated mail server is in general a good practice.

Tried with Docker 1.12.5 on Linux (Ubuntu 16.04.1 LTS) with a 4.8.0 kernel.

@mitar
Author

mitar commented Jan 11, 2017

cc @kostko, @gw0

@thaJeztah added the area/networking and kind/enhancement labels on Jan 11, 2017
@thaJeztah
Member

/cc @sanimej ptal

@sanimej

sanimej commented Jan 13, 2017

@mitar For external connectivity Docker programs a MASQUERADE rule. It works better than an SNAT rule since it's not tied to a particular IP. Currently there is no option to change this behavior.

One workaround I can suggest (a sketch follows the list):

  • create a new routing table with a default route via the interface you want for e-mail traffic
  • add an iptables entry to mark the e-mail traffic
  • add an ip rule to direct the marked traffic to the new routing table
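A minimal sketch of those three steps, assuming the e-mail interface is eth1 with gateway 198.51.100.1, SMTP on port 25, and routing table 100 (all interface names and addresses are placeholders):

# 1. new routing table with a default route via the e-mail interface
ip route add default via 198.51.100.1 dev eth1 table 100

# 2. mark outgoing SMTP traffic coming from the Docker bridge
iptables -t mangle -A PREROUTING -i docker0 -p tcp --dport 25 -j MARK --set-mark 0x1

# 3. direct the marked traffic to the new routing table
ip rule add fwmark 0x1 table 100
ip route flush cache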

@mitar
Author

mitar commented Jan 13, 2017

OK, but marking would be based on the e-mail traffic port, not on the container. Is there currently no way to ask Docker to mark all traffic from a container?

@sanimej

sanimej commented Jan 13, 2017

Yes, this custom marking is something you have to do yourself.

@gw0

gw0 commented Jan 19, 2017

Is there a way to make the container's internal IP static? Or a preferred way to run a command on the host each time the container starts? If yes, then some simple iptables rules on the host would be enough.

Another idea is to mark traffic with iptables inside the mail container. Is it possible?

@thaJeztah
Member

The docker events command (or API) can be used to listen for containers being started/stopped, or connected/disconnected from a network.
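For example, a host-side script could react to container starts like this (a sketch using standard docker events filters; the setup it triggers is up to you):

docker events --filter 'type=container' --filter 'event=start' --format '{{.ID}}' | while read id; do
  echo "container $id started"   # run your custom iptables/routing setup here
done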

@mitar
Author

mitar commented Jan 19, 2017

Oh, I had hoped we could get rid of dynamic configuration of the network stack now that there is support for Docker networks. We made this daemon in the past to set up custom network configuration so that we could use custom routing inside Docker, but with Docker networks that is more or less obsolete. The only open case is this outbound/external IP.

I think it would be great if this could be something supported by Docker directly.

@thaJeztah
Member

If someone can write a proposal for this functionality, including what the UX would look like, we could look into whether there's a way to implement it.

@mitar
Author

mitar commented Jan 30, 2017

I think it could be as simple as adding two more options to the docker run command:

  • --outgoing-ip – Container outgoing IPv4 address
  • --outgoing-ip6 – Container outgoing IPv6 address

The behavior would be similar to how you can "bind" a normal program to an IP so that it uses that IP for outgoing packets. Simply, all outgoing traffic would leave through that IP.

Some other names I was considering: --host-ip (but it might get confused with the incoming IP) and --bind-ip (but it might get confused with binding volumes).

I think this would cover most cases where one needs this. But I would also consider adding another option: --outgoing-mark, which would add an iptables mark to all outgoing traffic; one could then route it however one wants, for more complicated routing behavior. It could also simply be --traffic-mark, which would mark all traffic to and from the container.

I am not sure if iptables marks are considered to break Docker's abstraction.

For me, the simple argument above would be enough.
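A hypothetical invocation under this proposal (to be clear: none of these flags exist; the image name and addresses are illustrative only):

docker run -d --outgoing-ip 203.0.113.10 --outgoing-ip6 2001:db8::10 my-mail-image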

@viossat

viossat commented Mar 2, 2017

I have the same problem; --outgoing-ip would be really appreciated.
My workaround:

docker network create NETWORK --subnet=192.168.1.0/24 --gateway=192.168.1.1 # choose an unused subnet
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source OUTGOING_IP # remember that Docker also edits POSTROUTING
docker network connect NETWORK CONTAINER # or with Compose
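One way to check which source IP containers on that network end up with (a sketch, assuming outbound HTTP is allowed; the curl image is the one used later in this thread):

docker run --rm --network NETWORK byrnedo/alpine-curl http://httpbin.org/ip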

@mitar
Author

mitar commented Jun 4, 2017

@viossat: But what if I want CONTAINER to also be on another network, so that other containers can talk to it? I can attach it to two networks, but how do I know which one it will use to communicate out?

@nazar-pc

nazar-pc commented Aug 7, 2017

I agree with --outgoing-ip. There is a lack of parity right now: we can easily and conveniently specify which IP and port to listen on, but we can't specify which IP is used when a container makes an outbound connection. I also have 2 IPs on one of my machines and do not want to mess with the host configuration; I'd like to specify it in docker-compose.yml just like the rest of the networking settings.

@ozburo

ozburo commented Sep 8, 2017

Have there been any new thoughts or progress on this issue?

Do you think we need to raise this issue on the official Docker tracker, or is this sufficient?

I really want there to be an elegant solution to this as well; I also concur that --outgoing-ip would be a great solution.


@thematrixdev

@viossat I am trying to assign a container to eth1.

docker network create MYNETWORK
docker network inspect MYNETWORK
iptables -t nat -I POSTROUTING -s SUBNET/xx -j SNAT --to-source WAN_IP_OF_ETH1
docker run -ti --network MYNETWORK ubuntu:16.04 bash

Inside the container, there is no Internet connectivity.
I am running all of this on Amazon Linux. There is an Internet connection via eth1 on the host.
Could you please help?

@viossat

viossat commented Sep 17, 2017

@y2kbug-hk Have you assigned a fixed custom --subnet and --gateway on network creation? The subnet has to match the one in your iptables rule.

@thematrixdev

@viossat I had no idea what values to specify, so I inspected them after creation. It gives:
[ { "Name": "mynetwork", "Id": "df9406e83af0e1d41ae4b82eb70b55e4e0c21c41eb4c9d3b5ce2cc9b8c531643", "Created": "2017-09-16T18:13:20.459974528Z", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" } ] }, "Internal": false, "Attachable": false, "Containers": {}, "Options": {}, "Labels": {} } ]

Hence iptables -t nat -I POSTROUTING -s 172.18.0.0/16 -j SNAT --to-source WAN_IP_OF_ETH1

route -n gives
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-df9406e83af0

@mitar
Author

mitar commented Feb 9, 2018

I made a Docker image which uses docker-gen to configure the external IP of any container that has an EXTERNAL_IP environment variable. It seems to work, but this should really be part of Docker proper.

@FalkNisius

In swarm mode we can prepare a network with driver bridge and scope swarm.
The problem is that all --opt configurations are ignored at scope swarm, so there is no way to set --opt com.docker.network.bridge.enable_ip_masquerade=false and add our own iptables rule with an SNAT action. The MASQUERADE rule is renewed from time to time by the Docker daemon, so deleting and replacing it is not a solution. It would be nice if the driver opts were also recognized at scope swarm.
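For illustration, a sketch of the kind of creation command whose --opt is reportedly ignored at scope swarm (the network name is a placeholder):

docker network create --driver bridge --scope swarm \
  --opt com.docker.network.bridge.enable_ip_masquerade=false \
  swarm-bridge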

@thaJeztah
Member

The problem is, that all --opt configurations are ignored at scope swarm

@FalkNisius could you open a separate ticket for that with details? Not sure if that was "by design" or "for future implementation" (and don't want to derail the discussion here 😄)

@FalkNisius

What is the right GitHub project? This one, the network subsystem, or the bridge driver?

@thaJeztah
Member

This project / repository is fine

@Bessonov

Ran into the same issue. I have multiple IPv4 and IPv6 addresses attached to eth0 and want to use them on a per-container basis in swarm mode with Compose.

arkodg pushed a commit to arkodg/libnetwork that referenced this issue Sep 25, 2019
This commit allows a user to specify a Host IP via the
com.docker.network.host_ipv4 label which is used as the
Source IP during SNAT for bridge networks.

The use case is for hosts with multiple interfaces and
this label can dictate which IP will be used as Source IP
for North-South traffic

In the absence of this label, MASQUERADE is used which picks the Source IP
based on Next Hop from the Route Table

Addresses: moby/moby#30053

Signed-off-by: Arko Dasgupta <[email protected]>
thaJeztah added a commit to thaJeztah/docker that referenced this issue Feb 17, 2020
full diff: moby/libnetwork@feeff4f...6659f7f

includes:

- moby/libnetwork#2317 Allow bridge net driver to skip IPv4 configuration of bridge interface
    - adds support for a `com.docker.network.bridge.inhibit_ipv4` label/configuration
    - addresses moby#37430 Prevent bridge network driver from setting IPv4 address on bridge interface
- moby/libnetwork#2454 Support for com.docker.network.host_ipv4 driver label
    - addresses moby#30053 Unable to choose outbound (external) IP for containers
- moby/libnetwork#2491 Improving load balancer performance
    - addresses moby#35082 [SWARM] Very poor performance for ingress network with lots of parallel requests

Signed-off-by: Sebastiaan van Stijn <[email protected]>
@thaJeztah
Member

fixed on master through #40579

@mitar
Author

mitar commented Feb 28, 2020

I think this is related as well: moby/libnetwork#2454

@thaJeztah
Member

yes that's the change that's being vendored through #40579

@tikoflano

tikoflano commented Jun 24, 2020

This configuration worked for me, using @mitar's image:

I have one NIC (eth0) with two sub-interfaces (eth0:1 and eth0:2). I used private IPs which are NATed upstream by my network router, but I guess this should work with public IPs too.

  • eth0 -->10.100.36.246
  • eth0:1 -->10.100.36.245
  • eth0:2 -->10.100.36.244

My docker-compose.yml file

version: '3'

services:
  nat_manager:
    image: tozd/external-ip
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_RAW

  a:
    image: byrnedo/alpine-curl
    entrypoint: tail -f /dev/null
    environment:
      EXTERNAL_IP: 10.100.36.246

  b:
    image: byrnedo/alpine-curl
    entrypoint: tail -f /dev/null
    environment:
      EXTERNAL_IP: 10.100.36.245

  c:
    image: byrnedo/alpine-curl
    entrypoint: tail -f /dev/null
    environment:
      EXTERNAL_IP: 10.100.36.244

networks:
  default:
    driver_opts:
      com.docker.network.bridge.enable_ip_masquerade: "false"

You can test it using:

for ID in a b c; do echo -n "$ID: "; docker-compose exec $ID curl ifconfig.me; echo; done

I hope this helps someone out there.

@karser

karser commented Jul 10, 2020

Thank you @tikoflano, this helped a lot! I had to convert tozd/external-ip to Alpine and add ARM support though.

@mitar
Author

mitar commented Jul 10, 2020

Feel free to make an MR to the repo with the Alpine changes and such if you want.

@karser

karser commented Jul 10, 2020

@mitar done tozd/docker-external-ip#5

@iambenmitchell

Hi, I just stumbled across this issue; it's the closest thing I've found so far, so please could you help me :)

Was this ever done? I have bought a public RIPE /28 subnet from my hosting provider. I have managed to route the IPs to each container and I can connect to them individually using their public IPs; however, when I curl a site like "what's my IP address" from a container, the external IP shows up as the host/dedicated server's IP instead of the container's IP.

Do you know what I need to do in order to have my containers show up with their own IPs? Thanks :)

@P4sca1

P4sca1 commented Jan 22, 2021

@MrBenFTW A label has been added to Docker networks which you can use: com.docker.network.host_ipv4.
You need to create a new network, assign the label with the public IP you want, and then attach the container to that network.

moby/libnetwork#2454 for reference.
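A sketch of that flow (the network name and IP are placeholders):

docker network create -o "com.docker.network.host_ipv4=203.0.113.10" mail-net
docker run --rm --network mail-net byrnedo/alpine-curl http://httpbin.org/ip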

@iambenmitchell

@MrBenFTW A label has been added to Docker networks which you can use: com.docker.network.host_ipv4.
You need to create a new network, assign the label with the public IP you want, and then attach the container to that network.

moby/libnetwork#2454 for reference.

Thanks!

Just to clarify, do I need to make a new network for every container, or can com.docker.network.host_ipv4 be assigned to each container individually?

I.e., can I create a Docker bridge subnet with my 123.123.123.123/28 IP and then on each container do

com.docker.network.host_ipv4 = 123.123.123.124
com.docker.network.host_ipv4 = 123.123.123.125
com.docker.network.host_ipv4 = 123.123.123.126
...

and so on

@P4sca1

P4sca1 commented Jan 22, 2021

The label only exists on a network, so you would need to create one network per public IP address.
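So, for example (a sketch reusing the placeholder addresses above), one network per source IP:

docker network create -o "com.docker.network.host_ipv4=123.123.123.124" net-124
docker network create -o "com.docker.network.host_ipv4=123.123.123.125" net-125
docker run --rm --network net-124 byrnedo/alpine-curl http://httpbin.org/ip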

@iambenmitchell

The label only exists on a network, so you would need to create one network per public IP address.

I think something like this would be better

docker network create \
  --driver=bridge \
  --subnet=x.x.x.0/28 \
  --gateway=x.x.x.1 \
  --isPublic=true \
  bignet

and then

docker run -it --net=bignet --ip=x.x.x.2 ubuntu /bin/bash

When --ip is specified, Docker should check the network to see if --isPublic=true and, if so, assume the container's IP should be publicly accessible both incoming and outgoing, rather than requiring a new network for each IP.

@iambenmitchell

I can't get this solution to actually work anyway. My host requires that the IPs are statically routed through the main IP of the server, so the gateway must be the first IP in the subnet. But if I have to create a new network for each IP in the subnet, I don't see how I can do so without each one being in a /32 subnet, in which case the gateway IP is not reachable as it is outside the subnet.

What do I do?

@iambenmitchell


I forgot to change the name of the network; the actual error is:
Error response from daemon: Pool overlaps with other one on this address space

@iambenmitchell

iambenmitchell commented Jan 22, 2021

I forgot about IP ranges; that could be my solution, except that I cannot use the same gateway again.

and I also can't just not specify a gateway, because:
Error response from daemon: cannot create network afe8207c3a649f14f8173500c62a237b6033e812c585cc0f45329ef51ccfe077 (br-afe8207c3a64): conflicts with network fac274f74d818cbfe760b5a7394591d20eba1959b7cd11b44b2787810f0619fc (br-fac274f74d81): networks have overlapping IPv4

I cannot use the network either:

docker run -it --net=mail --ip=x.x.242.2 ubuntu /bin/bash

docker: Error response from daemon: Address already in use. ERRO[0005] error waiting for container: context canceled

@AlexGrs

AlexGrs commented Jan 26, 2021

Hello @itouch5000 :)

I used the same approach as you to bind a physical NIC to one Docker network, and so far it works really well.

My issue now is that, using the same kind of routes as you, if I attach 2 containers to this bridge-coi network with IPs 172.18.0.3 and 172.18.0.4, somehow they can't communicate anymore.

It seems logical, as all the traffic is routed to the external NIC. I am not the best in terms of routing and iptables, but any ideas how to proceed so that containers in the bridge-coi network can still reach each other while all outbound traffic uses the interface the bridge is linked to?

cpuguy83 pushed a commit to cpuguy83/docker that referenced this issue May 25, 2021
This commit allows a user to specify a Host IP via the
com.docker.network.host_ipv4 label which is used as the
Source IP during SNAT for bridge networks.

The use case is for hosts with multiple interfaces and
this label can dictate which IP will be used as Source IP
for North-South traffic

In the absence of this label, MASQUERADE is used which picks the Source IP
based on Next Hop from the Route Table

Addresses: moby#30053

Signed-off-by: Arko Dasgupta <[email protected]>
@nwithan8

nwithan8 commented Mar 24, 2022

It looks like the article https://medium.com/@havloujian.joachim/advanced-docker-networking-outgoing-ip-921fc3090b09 shared by @P4sca1 only works for virtual interfaces (eth0:0). I have separate interfaces, so some modifications are necessary.

List of all interfaces

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:23:a1:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.224/24 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 4291sec preferred_lft 4291sec
    inet6 fe80::a00:27ff:fe23:a1a7/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:e5:a4:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.129/24 brd 192.168.30.255 scope global dynamic enp0s8
       valid_lft 4353sec preferred_lft 4353sec
    inet6 fe80::a00:27ff:fee5:a418/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ee:b3:23:93 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:eeff:feb3:2393/64 scope link 
       valid_lft forever preferred_lft forever

The default external IP used by Docker is 85.144.163.27:

$ docker run --rm byrnedo/alpine-curl http://httpbin.org/ip
{
  "origin": "85.144.163.27, 85.144.163.27"
}

Create a new Docker bridge network bridge-coi (the host-side bridge interface gets the same name):

$ docker network create --attachable --opt "com.docker.network.bridge.name=bridge-coi" --opt "com.docker.network.bridge.enable_ip_masquerade=false" bridge-coi

Inspect the new bridge bridge-coi to get the subnet, 172.18.0.0/16 in this case

$ docker network inspect bridge-coi
[
    {
        "Name": "bridge-coi",
        "Id": "5beaf23a7d3a3c58885b7968ccf03ccd631440827c4ea0715038fdf3b99311b8",
        "Created": "2019-09-24T16:20:20.084657739 02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "false",
            "com.docker.network.bridge.name": "bridge-coi"
        },
        "Labels": {}
    }
]

Create a route from interface bridge-coi to interface enp0s8

sudo ip route add 172.18.0.0/16 dev enp0s8 tab 1
sudo ip route add default via 192.168.30.1 dev enp0s8 tab 1
sudo ip rule add from 172.18.0.0/16 tab 1
sudo ip route flush cache

Add a NAT rule translating traffic from bridge-coi to the IP used by the interface enp0s8:

sudo iptables -t nat -A POSTROUTING -s 172.18.0.0/16 ! -o bridge-coi -j SNAT --to-source 192.168.30.129

Now the external IP used by containers on the bridge-coi network is 46.166.122.114. You have to specify the DNS server here or in /etc/resolv.conf.

$ docker run --rm --network bridge-coi --dns=192.168.30.1 byrnedo/alpine-curl http://httpbin.org/ip
{
  "origin": "46.166.122.114, 46.166.122.114"
}

I can vouch that this works perfectly (Ubuntu VM running inside Proxmox).
I am able to get specific containers to use my 10G NIC rather than the default 1G interface.

@pulakivasilaki

Is it possible to add a similar option, com.docker.network.host_ipv6, for IPv6?

@rhansen
Contributor

rhansen commented Sep 12, 2023

I opened #46469 to request IPv6 support.
