Mirrored Strategy

Have more than one GPU in your setup? Data parallelism across multiple identical GPUs is easy when training with TensorFlow Estimators, and only marginally less convenient with Keras' model.fit.

Estimators

import tensorflow as tf

# e.g. local_gpu_list = ['/gpu:0', '/gpu:1']; pass devices=None to use all visible GPUs
strat = tf.distribute.MirroredStrategy(devices=local_gpu_list)
runconfig = tf.estimator.RunConfig(train_distribute=strat,
                                   eval_distribute=strat,
                                  )
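
The RunConfig then goes to the estimator at construction (a minimal sketch; my_model_fn and train_input_fn stand in for your own model and input functions):

estimator = tf.estimator.Estimator(model_fn=my_model_fn,
                                   config=runconfig)
estimator.train(input_fn=train_input_fn)  # each batch is split across the GPUs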

If the evaluation dataset's input_fn is something TensorFlow can't figure out how to split/shard, you might run into errors during evaluation: the exact same input function works fine in training but throws an error at evaluation time.
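
One fix consistent with how MirroredStrategy splits input is to have the input function return a tf.data.Dataset rather than raw tensors, since that's the form the strategy knows how to divide across replicas. A minimal sketch, assuming eval_features and eval_labels are in-memory numpy arrays:

def eval_input_fn():
    ds = tf.data.Dataset.from_tensor_slices((eval_features, eval_labels))
    # batch with the global batch size; the strategy splits each batch
    # across the replicas
    return ds.batch(64)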

Keras models

Getting Keras models to take advantage of multiple GPUs is just slightly more annoying: the model has to be created and compiled inside the strategy's scope.

Example with a sequential model:

import tensorflow as tf
from tensorflow.keras import models, layers

strat = tf.distribute.MirroredStrategy(devices=local_gpu_list)
with strat.scope():
    model = models.Sequential([layers.InputLayer(input_shape=[64, 64, 3]),
                               layers.Conv2D(64, 3, padding='same'),  # Conv2D takes filters first, then kernel size
                               # ... more layers ...
                              ])
    model.compile(loss='binary_crossentropy', optimizer='adam')
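
Only building and compiling need the scope; model.fit can be called outside it afterwards. A sketch, assuming train_ds is a tf.data.Dataset batched with the global batch size:

model.fit(train_ds, epochs=10)  # batches are split across the GPUs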

TODO:

  • Splitting datasets/sharding

Edits:

  • 5/6/2019: added snippet for Keras models
