Scalable Distributed DL Training: Batching Communication and Computation

Authors

  • Shaoqi Wang, University of Colorado, Colorado Springs
  • Aidi Pi, University of Colorado, Colorado Springs
  • Xiaobo Zhou, University of Colorado, Colorado Springs

DOI:

https://doi.org/10.1609/aaai.v33i01.33015289

Abstract

Scalability of distributed deep learning (DL) training with the parameter server (PS) architecture is often constrained by communication in large clusters. Recent efforts use a layer-by-layer strategy to overlap gradient communication with backward computation, reducing the impact of the communication constraint on scalability. However, these approaches cannot be effectively applied to overlapping parameter communication with forward computation. In this paper, we propose and design iBatch, a novel communication approach that batches parameter communication and forward computation so that the two overlap with each other. We formulate the batching decision as an optimization problem and solve it with a greedy algorithm to derive communication and computation batches. We implement iBatch in the open-source DL framework BigDL and evaluate it with various DL workloads. Experimental results show that iBatch improves the scalability of a cluster of 72 nodes by up to 73% over the default PS architecture and 41% over the layer-by-layer strategy.
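The abstract states only that the batching decision is formulated as an optimization problem and solved with a greedy algorithm; it gives no further detail. The following is a minimal illustrative sketch of the general idea of grouping consecutive layers' parameter pulls into batches so that each pull can be hidden behind the forward computation of the preceding batch. The function name, inputs, and the specific heuristic are assumptions for illustration, not the paper's algorithm or BigDL's API.

    def greedy_batches(comm_time, comp_time):
        """Group consecutive layers into parameter-pull batches so that the pull
        for batch k can overlap with the forward computation of batch k-1.
        Illustrative greedy heuristic only, not the paper's exact method.

        comm_time[i]: estimated time to pull layer i's parameters from the PS
        comp_time[i]: estimated forward-computation time of layer i
        """
        n = len(comm_time)
        batches = [[0]]            # layer 0's parameters must be pulled up front
        budget = comp_time[0]      # compute time available to hide the next pull
        current, current_comm = [], 0.0
        for i in range(1, n):
            if current and current_comm + comm_time[i] > budget:
                # The pull would no longer finish before the previous batch's
                # computation does, so close this batch and start a new one.
                batches.append(current)
                budget = sum(comp_time[j] for j in current)
                current, current_comm = [], 0.0
            current.append(i)
            current_comm += comm_time[i]
        if current:
            batches.append(current)
        return batches

    # Example with hypothetical per-layer estimates (milliseconds) for a 6-layer model:
    # returns [[0], [1, 2, 3], [4, 5]]
    print(greedy_batches([4, 1, 1, 3, 2, 2], [5, 2, 2, 4, 3, 3]))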

Published

2019-07-17

How to Cite

Wang, S., Pi, A., & Zhou, X. (2019). Scalable Distributed DL Training: Batching Communication and Computation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5289-5296. https://doi.org/10.1609/aaai.v33i01.33015289

Section

AAAI Technical Track: Machine Learning