openml.evaluations.list_evaluations(function: str, offset: Optional[int] = None, size: Optional[int] = 10000, tasks: Optional[List[Union[str, int]]] = None, setups: Optional[List[Union[str, int]]] = None, flows: Optional[List[Union[str, int]]] = None, runs: Optional[List[Union[str, int]]] = None, uploaders: Optional[List[Union[str, int]]] = None, tag: Optional[str] = None, study: Optional[int] = None, per_fold: Optional[bool] = None, sort_order: Optional[str] = None, output_format: str = 'object') → Union[Dict, pandas.core.frame.DataFrame]

List all run-evaluation pairs matching all of the given filters. (Supports a large number of results.)


function : str

the evaluation function, e.g., predictive_accuracy

offset : int, optional

the number of runs to skip, starting from the first

size : int, default 10000

The maximum number of runs to show. If set to None, all results are returned.

tasks : list[int, str], optional

the list of task IDs

setups : list[int, str], optional

the list of setup IDs

flows : list[int, str], optional

the list of flow IDs

runs : list[int, str], optional

the list of run IDs

uploaders : list[int, str], optional

the list of uploader IDs

tag : str, optional

filter evaluations based on the given tag

study : int, optional

the ID of the study whose runs to filter on

per_fold : bool, optional

whether to return evaluations per fold rather than aggregated over folds

sort_order : str, optional

order of sorting evaluations, ascending ("asc") or descending ("desc")

output_format : str, optional (default='object')

Determines the format of the output.

- If 'object', the output is a dict of OpenMLEvaluation objects
- If 'dict', the output is a dict of dicts
- If 'dataframe', the output is a pandas DataFrame

Returns

dict or dataframe
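A minimal usage sketch: with output_format="dataframe", each row of the result is one run-evaluation pair, so ordinary pandas operations can post-process it. The helper below assumes the frame contains 'task_id' and 'value' columns (the metric value per run); the task IDs in the commented call are placeholders for illustration, and the call itself requires network access to the OpenML server.

```python
import pandas as pd

def best_run_per_task(evals: pd.DataFrame) -> pd.Series:
    """Return the highest evaluation value for each task.

    Assumes the DataFrame has 'task_id' and 'value' columns, as in the
    dataframe output of list_evaluations.
    """
    return evals.groupby("task_id")["value"].max()

# Real call (requires network and the openml package; task IDs are examples):
# import openml
# evals = openml.evaluations.list_evaluations(
#     function="predictive_accuracy",
#     tasks=[3, 6],              # restrict to these task IDs
#     size=100,                  # cap the number of results
#     sort_order="desc",         # highest values first
#     output_format="dataframe",
# )
# print(best_run_per_task(evals))
```

Requesting a capped size with sort_order="desc" is a cheap way to look at top results without paging through the full listing via offset.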