Papers
AMLB: an AutoML Benchmark
Comparing different AutoML frameworks is notoriously challenging and
often done incorrectly. We introduce an open and extensible benchmark
that follows best practices and avoids common mistakes when comparing
AutoML frameworks. We conduct a thorough comparison of 9 well-known
AutoML frameworks across 71 classification and 33 regression tasks.
The differences between the AutoML frameworks are explored with a
multi-faceted analysis, evaluating model accuracy, its trade-offs with
inference time, and framework failures. We also use Bradley-Terry
trees to discover subsets of tasks where the relative AutoML framework
rankings differ. The benchmark comes with an open-source tool that
integrates with many AutoML frameworks and automates the empirical
evaluation process end-to-end: from framework installation and
resource allocation to in-depth evaluation. The benchmark uses public
data sets, can be easily extended with other AutoML frameworks and
tasks, and has a website with up-to-date results.
@article{JMLR:v25:22-0493,
  author  = {Pieter Gijsbers and Marcos L. P. Bueno and Stefan Coors and Erin LeDell and S{\'e}bastien Poirier and Janek Thomas and Bernd Bischl and Joaquin Vanschoren},
  title   = {AMLB: an AutoML Benchmark},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {101},
  pages   = {1--65},
  url     = {http://jmlr.org/papers/v25/22-0493.html}
}
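The Bradley-Terry analysis mentioned in the abstract treats each task as a set of pairwise contests between frameworks. The paper fits Bradley-Terry trees (model-based recursive partitioning over task properties); the sketch below shows only the core worth-parameter fit, via Hunter's (2004) minorization-maximization updates. The function name, variable names, and example win counts are illustrative assumptions, not the paper's code.

import numpy as np

def bradley_terry(wins, n_iter=1000, tol=1e-10):
    """Fit Bradley-Terry 'worth' parameters from a pairwise win-count matrix.

    wins[i, j] = number of tasks on which framework i outperformed framework j.
    Under the model, P(i beats j) = p[i] / (p[i] + p[j]).
    """
    k = wins.shape[0]
    n = wins + wins.T            # comparisons per pair (diagonal stays zero)
    w = wins.sum(axis=1)         # total wins per framework
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        denom = (n / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = w / denom
        p_new /= p_new.sum()     # worths are identifiable only up to scale
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p

# Hypothetical win counts among three frameworks over a task suite.
wins = np.array([[ 0, 40, 55],
                 [31,  0, 48],
                 [16, 23,  0]])
print(bradley_terry(wins))       # larger worth = stronger framework overall

A Bradley-Terry tree then splits the task set on features such as dataset size or class imbalance and refits these worths per leaf, exposing subsets of tasks where the framework ranking changes.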
[preprint, '22] AMLB: an AutoML Benchmark
This is the preprint of the 2024 JMLR paper: the first submission, before revision.
This version reports on the experimental results obtained in 2021.
Please only cite this paper if you specifically refer to results reported herein
and cannot use the JMLR paper for that purpose.
@misc{https://doi.org/10.48550/arxiv.2207.12560,
  doi       = {10.48550/ARXIV.2207.12560},
  url       = {https://arxiv.org/abs/2207.12560},
  author    = {Gijsbers, Pieter and Bueno, Marcos L. P. and Coors, Stefan and LeDell, Erin and Poirier, S{\'e}bastien and Thomas, Janek and Bischl, Bernd and Vanschoren, Joaquin},
  keywords  = {Machine Learning (cs.LG), Machine Learning (stat.ML), FOS: Computer and information sciences},
  title     = {AMLB: an AutoML Benchmark},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
An Open Source AutoML Benchmark
In recent years, an active field of research has developed around
automated machine learning (AutoML). Unfortunately, comparing
different AutoML systems is hard and often done incorrectly. We
introduce an open, ongoing, and extensible benchmark framework which
follows best practices and avoids common mistakes. The framework is
open-source, uses public datasets and has a website with up-to-date
results. We use the framework to conduct a thorough comparison of 4
AutoML systems across 39 datasets and analyze the results.
@article{amlb2019,
  title   = {An Open Source AutoML Benchmark},
  author  = {Gijsbers, P. and LeDell, E. and Poirier, S. and Thomas, J. and Bischl, B. and Vanschoren, J.},
  journal = {arXiv preprint arXiv:1907.00909 [cs.LG]},
  url     = {https://arxiv.org/abs/1907.00909},
  note    = {Accepted at AutoML Workshop at ICML 2019},
  year    = {2019}
}
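One common mistake in cross-dataset comparisons, which motivates benchmarks like this one, is averaging raw scores over datasets whose metrics and difficulties are not comparable; ranking frameworks within each dataset first is the usual remedy. A minimal sketch of that aggregation, assuming a tasks-by-frameworks score matrix (the layout, names, and data are hypothetical, not the benchmark tool's API):

import numpy as np
from scipy.stats import rankdata

def mean_ranks(scores):
    """scores[t, f] = score of framework f on dataset t (higher is better).
    Rank frameworks within each dataset (1 = best; ties share the average
    rank), then average ranks over datasets so results are scale-free.
    """
    ranks = rankdata(-scores, axis=1)   # negate: higher score -> better rank
    return ranks.mean(axis=0)

# Hypothetical AUC scores for 4 systems on 3 datasets.
scores = np.array([[0.91, 0.88, 0.90, 0.85],
                   [0.75, 0.78, 0.78, 0.70],
                   [0.66, 0.60, 0.64, 0.61]])
print(mean_ranks(scores))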