First look at the benchmark, submitted to the AutoML Workshop at ICML 2019.
abstract: In recent years, an active field of research has developed around automated machine learning (AutoML). Unfortunately, comparing different AutoML systems is hard and often done incorrectly. We introduce an open, ongoing, and extensible benchmark framework that follows best practices and avoids common mistakes. The framework is open source, uses public datasets, and has a website with up-to-date results. We use the framework to conduct a thorough comparison of 4 AutoML systems across 39 datasets and analyze the results.