openml.evaluations.list_evaluations(function: str, offset: Optional[int] = None, size: Optional[int] = None, task: Optional[List] = None, setup: Optional[List] = None, flow: Optional[List] = None, run: Optional[List] = None, uploader: Optional[List] = None, tag: Optional[str] = None, study: Optional[int] = None, per_fold: Optional[bool] = None, sort_order: Optional[str] = None, output_format: str = 'object') → Union[Dict, pandas.core.frame.DataFrame]

List all run-evaluation pairs matching all of the given filters. (Supports a large number of results.)


function : str

the evaluation function, e.g., predictive_accuracy

offset : int, optional

the number of runs to skip, starting from the first

size : int, optional

the maximum number of runs to show

task : list, optional

the list of task IDs to filter on

setup : list, optional

the list of setup IDs to filter on

flow : list, optional

the list of flow IDs to filter on

run : list, optional

the list of run IDs to filter on

uploader : list, optional

the list of uploader IDs to filter on

tag : str, optional

filter evaluations by the given tag

study : int, optional

filter evaluations to runs in the given study

per_fold : bool, optional

whether to return evaluations per fold rather than aggregated over folds
sort_order : str, optional

order of sorting evaluations, ascending ("asc") or descending ("desc")

output_format : str, optional (default='object')

This parameter decides the format of the output.

- If 'object', the output is a dict of OpenMLEvaluation objects
- If 'dict', the output is a dict of dicts
- If 'dataframe', the output is a pandas DataFrame
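The three output formats can be sketched with a stand-in formatter (`format_output` and the sample records below are hypothetical illustrations, not part of the openml library):

```python
# Hypothetical evaluation records keyed by run ID, standing in for
# what the server would return; not part of the openml library.
records = {
    1: {"run_id": 1, "value": 0.95},
    2: {"run_id": 2, "value": 0.90},
}

def format_output(records, output_format="object"):
    # Mirror the documented branching: 'object' and 'dict' keep the
    # run-id -> evaluation mapping, 'dataframe' flattens it to rows.
    if output_format in ("object", "dict"):
        return records
    if output_format == "dataframe":
        import pandas as pd  # only needed for this branch
        return pd.DataFrame(list(records.values()))
    raise ValueError(f"unsupported output_format: {output_format!r}")

print(list(format_output(records, "dict")))  # [1, 2]
```

Any other string raises a ValueError, matching the fact that only these three formats are documented.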

Returns a dict or a pandas DataFrame, depending on output_format.
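offset and size page through the full result set. The loop below sketches how a client might collect every result in batches; `fetch_page` is a hypothetical stand-in for a call to list_evaluations with the same offset/size semantics, operating on a local list instead of the server:

```python
def fetch_page(all_results, offset=None, size=None):
    # Stand-in for a server call: skip `offset` results, return at most `size`.
    start = offset or 0
    return all_results[start:] if size is None else all_results[start:start + size]

def fetch_all(all_results, page_size=3):
    # Page through results until a short (or empty) page signals the end.
    collected, offset = [], 0
    while True:
        page = fetch_page(all_results, offset=offset, size=page_size)
        collected.extend(page)
        if len(page) < page_size:
            return collected
        offset += page_size

print(fetch_all(list(range(10)), page_size=3))  # all 10 IDs, in order
```

Stopping on a short page avoids an extra empty request when the total happens to be a multiple of the page size minus a remainder; a real client would apply the same loop to successive list_evaluations calls.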