An Open Source AutoML Benchmark

This is the homepage for the open and extensible AutoML Benchmark. The AutoML Benchmark provides an overview and comparison of open-source AutoML systems. It is open because the benchmark infrastructure is open-source, and extensible because you can add your own problems and datasets.

A brief overview of, and further references for, each AutoML system can be found on the AutoML systems page. For a thorough explanation of the benchmark and an evaluation of its results, you can read our paper. If you want to analyze the results yourself, you can do so on the results pages.

Because the benchmark infrastructure is open-source, you can rerun the benchmark yourself, use custom datasets, or evaluate your own AutoML platform, as explained in our project documentation. We also invite you to submit your own AutoML system to be evaluated against the benchmark and included in the overview.