Monash University


Video: Effective Parallelisation for Machine Learning

Version 2 2020-08-27, 04:10
Version 1 2020-08-20, 02:34
posted on 2020-08-27, 04:10, authored by Michael Kamp
Effective Parallelisation for Machine Learning

Michael Kamp (University of Bonn and Fraunhofer IAIS)
Mario Boley (Max Planck Institute for Informatics and Saarland University)
Olana Missura (Google Inc.)
Thomas Gärtner (University of Nottingham)

We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data, as well as to growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question [21] on the efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation comes in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.
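The abstract treats the learning algorithm as a black box: local models are trained on disjoint parts of the data in parallel and then combined, which is what makes the scheme applicable without dedicated code. As a minimal illustrative sketch of that pattern only (the toy least-squares learner, the partitioning, and the plain model-averaging aggregator are assumptions for illustration, not the scheme's actual aggregation rule or guarantees):

```python
# Illustrative sketch of black-box parallelisation: the base learner is
# treated as an opaque function, run independently on data partitions,
# and the resulting models are aggregated. Averaging is a stand-in
# aggregator, not the mechanism used by the scheme described above.
import random

def fit_slope(points):
    """Toy base learner: least-squares slope w for y ~ w * x (no intercept)."""
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return sxy / sxx

def parallel_fit(points, n_parts):
    """Train the base learner on disjoint partitions, then aggregate models."""
    size = len(points) // n_parts
    parts = [points[i * size:(i + 1) * size] for i in range(n_parts)]
    local_models = [fit_slope(p) for p in parts]  # independent; parallelisable
    return sum(local_models) / len(local_models)  # aggregate (here: average)

random.seed(0)
data = [(x, 3.0 * x) for x in (random.uniform(1, 2) for _ in range(400))]
w = parallel_fit(data, n_parts=8)
```

Because each partition is processed independently, the per-partition calls could be dispatched to separate processing units; the "larger sample complexity" mentioned above reflects that each local model sees only a fraction of the data.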