MWMS tutorial: Replace tf.distribute.experimental.MultiWorkerMirroredStrategy with tf.distribute.MultiWorkerMirroredStrategy.

PiperOrigin-RevId: 348508697
rchao authored and copybara-github committed Dec 21, 2020
1 parent 2b72cbe commit 187adca
Showing 1 changed file with 5 additions and 4 deletions.
9 changes: 5 additions & 4 deletions site/en/tutorials/distribute/multi_worker_with_keras.ipynb
@@ -63,7 +63,7 @@
"source": [
"## Overview\n",
"\n",
"This tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API, specifically `tf.distribute.experimental.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.\n",
"This tutorial demonstrates multi-worker distributed training with Keras model using `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.\n",
"\n",
"[Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide is available for an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of `tf.distribute.Strategy` APIs.\n"
]
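As context for this change, a minimal sketch of the single-strategy pattern the paragraph above describes, assuming TF 2.4+ (where `MultiWorkerMirroredStrategy` is no longer under `experimental`); the model and layer sizes are placeholders, not the tutorial's `mnist.py` helpers:

```python
import tensorflow as tf

# Create the strategy first; variables created afterwards inside its scope
# are mirrored across all workers.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
  # Placeholder model: any single-worker Keras model built here becomes
  # multi-worker capable without further code changes.
  model = tf.keras.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),
      tf.keras.layers.Dense(128, activation='relu'),
      tf.keras.layers.Dense(10)
  ])
  model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
      metrics=['accuracy'])
```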
@@ -422,7 +422,7 @@
"* Asynchronous training, where the training steps are not strictly synced.\n",
"\n",
"`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.\n",
"To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`.\n",
"To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.\n",
"\n",
"`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy."
]
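As an aside, a hedged sketch of how the collective communication backing `CollectiveOps` could be selected explicitly, assuming TF 2.4+; the default (`AUTO`) is generally sufficient, and `NCCL` here is only an illustrative choice for GPU workers:

```python
import tensorflow as tf

# Optionally pick the collective implementation used to aggregate gradients.
# AUTO lets TensorFlow choose; RING and NCCL can be requested explicitly.
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)

strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=communication_options)
```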
@@ -435,7 +435,7 @@
},
"outputs": [],
"source": [
"strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()"
"strategy = tf.distribute.MultiWorkerMirroredStrategy()"
]
},
{
@@ -520,7 +520,7 @@
"tf_config = json.loads(os.environ['TF_CONFIG'])\n",
"num_workers = len(tf_config['cluster']['worker'])\n",
"\n",
"strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n",
"strategy = tf.distribute.MultiWorkerMirroredStrategy()\n",
"\n",
"global_batch_size = per_worker_batch_size * num_workers\n",
"multi_worker_dataset = mnist.mnist_dataset(global_batch_size)\n",
@@ -1126,6 +1126,7 @@
"colab": {
"collapsed_sections": [],
"name": "multi_worker_with_keras.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
