Call for papers: AAAI 2017 Workshop on Distributed Machine Learning

February 4-5, 2017, San Francisco, California, USA

With the rapid development of machine learning (especially deep learning) and cloud computing, it has become a trend to train machine learning models in a distributed manner on a cluster of machines. In recent years, there has been much exciting progress along this direction, with quite a few papers published and several open-source projects released. For example, distributed machine learning tools such as Petuum, TensorFlow, and DMTK have been developed; parallel learning algorithms such as LightLDA, parallel logistic regression, XGBoost, and PV-Tree have been proposed; and convergence theory for both synchronous and asynchronous parallelization has been established. However, there are also many open issues in this field, for example:

  • How to select an appropriate infrastructure (e.g., parameter server vs. data flow) and parallelization mechanism (e.g., synchronous vs. asynchronous), given the application and system configuration?
  • Why do many papers report linear speed-ups, yet when accuracy on real-world workloads is required, the practical speed-up is far smaller?
  • Why do parallelization mechanisms with similar convergence rates perform so differently in practice?
  • How to conduct proper comparison/evaluation for distributed machine learning (e.g., benchmark, criteria, system configurations, and baselines)?

Without answers to these important questions, the wide adoption of distributed machine learning in real applications can hardly be expected. This workshop is designed to address them. With this workshop, we hope to provide the community with deep insights and to substantially push the frontier of distributed machine learning.

The workshop will consist of invited talks, contributed talks, and a panel discussion. The contributed talks mainly call for blue-sky ideas, but ongoing research work is also welcome. You are highly encouraged to submit your ideas or work to our workshop and share them with the wide audience of AAAI. Authors are encouraged to focus on (but are not limited to) the following topics:

  • Distributed machine learning systems and infrastructure
  • Parallelization mechanisms for distributed machine learning
  • Parallel machine learning algorithms
  • Theory for distributed machine learning
  • Toolkits for distributed machine learning
  • Applications of distributed machine learning

Submissions should be 4-6 pages in AAAI format and must be anonymized.

The important dates of the workshop are as follows:

  • Paper submission deadline: Oct 21, 2016
  • Notification: Nov 18, 2016
  • Camera ready due: Dec 8, 2016

More information about the workshop will be posted shortly.


Program Committee

  • Tom Goldstein (University of Maryland)
  • Cho-Jui Hsieh (UC Davis)
  • Abhimanu Kumar (Groupon)
  • Wu-Jun Li (Nanjing University)
  • Dhruv Mahajan (Yahoo)
  • Martin Takac (Lehigh University)
  • Taifeng Wang (Microsoft Research Asia)