This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

Support for swarm mode in Docker 1.12 #141

Closed
alantrrs opened this issue Jul 18, 2016 · 24 comments

Comments

@alantrrs

alantrrs commented Jul 18, 2016

I'm trying to use nvidia-docker with the swarm functionality introduced in the new Docker 1.12.

I tried to create a service (nvidia-docker service create ...) and it didn't work. I haven't seen any way to pass devices to docker service create, so I'm wondering if it's even supported on Docker's side.

Any thoughts?

@flx42
Member

flx42 commented Jul 18, 2016

This question was asked on the Docker GitHub a few hours ago:
moby/moby#24750

Currently, nvidia-docker doesn't support Docker Swarm, so service create is simply passed through to the docker CLI.
You are right that there doesn't seem to be a way to pass devices to service create; so far we can only mount the volume:

docker service create --mount type=volume,source=nvidia_driver_367.35,target=/usr/local/nvidia,volume-driver=nvidia-docker [...]

But that's not enough, we can't get around the device cgroup.

Even if we could, in a cluster environment with Swarm there will also be a problem if different machines have a different number of GPUs.

@3XX0 thoughts?

@Josca

Josca commented Dec 6, 2016

+1 to add nvidia-docker support for Docker Swarm.

@davad

davad commented Jan 29, 2017

@flx42 @3XX0 any movement on this? I looked over the related issues and don't see anything recent. I'm itching to orchestrate CUDA jobs via docker across multiple machines 😄

@el3ment

el3ment commented Mar 31, 2017

Any progress on this?

@anaderi

anaderi commented Mar 31, 2017

would be cool, no?

@3XX0
Member

3XX0 commented Apr 4, 2017

This is basically what we need for basic GPU support:

moby/swarmkit#2090

@mjp0

mjp0 commented May 27, 2017

@3XX0 It got merged today! I've been managing nvidia-docker containers manually via docker-compose so having swarmkit and all the v3 deploy things work would be absolutely fantastic.

Are there any potential merge conflicts that have to be dealt with before this can be merged into nvidia-docker?

@cheyang

cheyang commented May 30, 2017

@0fork, can you share any docs about how you played with it, so we can also try this cool feature? Thanks.

@3XX0
Member

3XX0 commented May 30, 2017

Yes, this is a big step forward toward getting GPUs working within Swarm. However, we're not quite there yet: we still need to add support in Docker itself, and we are still missing some pieces, which should come with nvidia-docker 2.0.

Stay tuned ;)

@luiscborbon

+1

@erbas

erbas commented Jun 9, 2017

@3XX0 What's the timeframe for this to come together?

@thommiano

thommiano commented Jun 14, 2017

My team is working on a machine with several GPUs, and we're using Docker to containerize all of our projects. I'm trying to figure out the best way to schedule GPU jobs that are running in Docker containers so that users don't accidentally interfere with existing jobs or have to sit around until one of the other team members frees up a GPU. Would swarm functionality solve this problem? Our current approach is to use NV_GPU=n in our nvidia-docker run command to isolate a GPU to that container, as referenced here, and I'm hoping that we can do away with this with job scheduling.
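For context, the manual GPU-pinning approach described in this comment looks roughly like this with nvidia-docker 1.x (the image name and GPU indices below are illustrative):

    # Restrict the container to GPU 0 only; the container then sees a single
    # device no matter how many GPUs the host has (nvidia-docker 1.x syntax).
    NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi

    # Several GPUs can be exposed with a comma-separated list:
    NV_GPU=0,1 nvidia-docker run --rm nvidia/cuda nvidia-smi

This gives per-container isolation, but as noted above it is manual bookkeeping, not scheduling.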

@omerh

omerh commented Jun 19, 2017

This is a great feature and a must-have for Docker Swarm.
I am going to solve it with a pre-baked AMI and an autoscaling group,
but only because it fits my use case.
Waiting for updates on both the moby project and nvidia-docker.

@fvillarr

@0fork, can I also get some information about how you played with it?
Thanks.

@mjp0

mjp0 commented Jun 25, 2017

@fvillarr & @cheyang I'm sorry, I don't understand what you want to know :) We've been using nvidia-docker via nvidia-docker-compose, not this swarmkit feature we're all anxiously waiting for. A word of caution: using nvidia-docker at scale is a PITA right now. You have to manage each server separately, because nvidia-docker-compose needs to generate specific mount points for the NVIDIA drivers to work via compose. There's nothing available to automate this, and I don't think we can scale much further with the current setup of scripts and manual effort. I don't have any docs because it's all in the nvidia-docker-compose repo; we just took it to scale.

@88plug

88plug commented Aug 26, 2017

+1

@3XX0
Member

3XX0 commented Nov 14, 2017

Closing, most of the issues remaining are on the Docker side. You can track our progress here:
moby/moby#33439

@3XX0 3XX0 closed this as completed Nov 14, 2017
@nikoargo

nikoargo commented Jan 9, 2018

Any update on this now that all the PRs in moby/moby#33439 have been merged? They allow placing services according to generic resources, but I'm not sure how to actually mount the GPU inside the service's container.

@3XX0
Member

3XX0 commented Jan 10, 2018

@nikoargo with 17.12.0-ce you can configure the docker daemon to expose your GPUs to swarm:

1. Create an override for the dockerd configuration, changing your default runtime and adding GPU resources. You can generate the resource flags like this:

       nvidia-smi -a | grep UUID | awk '{print "--node-generic-resource gpu="substr($4,0,12)}' | paste -d' ' -s

   Then edit the service unit:

       sudo systemctl edit docker

       [Service]
       ExecStart=
       ExecStart=/usr/bin/dockerd -H fd:// --default-runtime=nvidia <resource output from the above>

2. Uncomment swarm-resource under /etc/nvidia-container-runtime/config.toml.

3. Restart the docker daemon, create your swarm, and create a new service requesting GPUs:

       docker service create -t --generic-resource "gpu=1" ubuntu bash

Note: there is currently a bug (moby/moby#35970); the flag should normally be --node-generic-resources. This will be fixed in the future.
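To illustrate what the flag-generation pipeline in step 1 produces, here is a sketch run against fabricated "GPU UUID" lines in the shape nvidia-smi -a prints (the UUIDs below are made up):

    # Simulate the "GPU UUID : ..." lines that `nvidia-smi -a` emits and run
    # the same grep/awk/paste pipeline over them (UUIDs are fabricated):
    printf 'GPU UUID : GPU-aaaabbbbcccc\nGPU UUID : GPU-eeeeffff0000\n' \
      | grep UUID \
      | awk '{print "--node-generic-resource gpu="substr($4,0,12)}' \
      | paste -d' ' -s
    # Prints one --node-generic-resource flag per GPU, joined with spaces,
    # ready to append to the dockerd ExecStart line.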

@nikoargo

This is incredible. Thank you so much!

@romilbhardwaj

@3XX0 This is fantastic, thanks a lot!

An observation: this also seems to enforce exclusive allocation of the GPUs at the orchestration layer. For example, if I have a machine with two physical GPUs, I cannot create more than two services (each of which requests one GPU). Adding a third service results in a no suitable node (insufficient resources) message and docker swarm waits for a running service to end before scheduling the new one.

Is there any way to overcome this and allow sharing of GPUs across services while maintaining isolation? For instance, adding a third service in the above example should create a service and have it share a GPU with one of the existing services.

This can be achieved by using node labels (keep a count label for GPUs on the node, and deploy any service that requires fewer GPUs than the count on that node), but this approach is unaware of the resource requirements of the service and does not enforce isolation: all GPUs on the machine will be visible to a service that may require only one GPU.
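For anyone wanting to try the label-based workaround described above, a minimal sketch (the node name and image are hypothetical, and the caveats about isolation apply):

    # Tag a GPU node with a label recording its GPU count (node name
    # "worker-1" is hypothetical), then constrain services to labeled nodes.
    docker node update --label-add gpu_count=2 worker-1

    # Placement constraints only support ==/!= checks on the label value,
    # not "enough free GPUs"; every GPU on the node remains visible to the
    # container (no isolation).
    docker service create --constraint 'node.labels.gpu_count == 2' my-cuda-image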

@3XX0
Member

3XX0 commented Feb 20, 2018

Unfortunately, we do not support sharing GPUs. We have the same limitation with Kubernetes, and we're looking into relaxing this constraint.
Having said that, the hardware doesn't support true multi-tenancy, so doing this can be quite costly. We usually recommend writing your application with this in mind instead, and implementing your own scheduling/batching to take full advantage of the whole GPU.

@CharlesJQuarra

Must one change the default runtime for a given node in order to use the GPU for swarm services? Can the gpu generic resource be added to a swarm node while leaving runc as the default runtime?

@hholst80

hholst80 commented Mar 4, 2019

The runtime has to be specified at the dockerd level as shown above, since services do not support the runtime directive (yet).
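As a sketch of an alternative to editing the systemd unit: with nvidia-docker 2.0 the default runtime can also be set in /etc/docker/daemon.json (keys as documented for nvidia-docker 2.0; adjust the runtime path for your distro):

    # Write a daemon.json that makes the NVIDIA runtime the default, then
    # restart the daemon. The --node-generic-resource flags from earlier in
    # this thread are still needed for swarm scheduling.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "default-runtime": "nvidia",
      "runtimes": {
        "nvidia": {
          "path": "nvidia-container-runtime",
          "runtimeArgs": []
        }
      }
    }
    EOF
    sudo systemctl restart docker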
