openml.evaluations.list_evaluations

openml.evaluations.list_evaluations(function: str, offset: int | None = None, size: int | None = 10000, tasks: List[str | int] | None = None, setups: List[str | int] | None = None, flows: List[str | int] | None = None, runs: List[str | int] | None = None, uploaders: List[str | int] | None = None, tag: str | None = None, study: int | None = None, per_fold: bool | None = None, sort_order: str | None = None, output_format: str = 'object') → Dict | DataFrame

List all run-evaluation pairs matching all of the given filters. (Supports a large number of results.)

Parameters:
function : str

the evaluation function, e.g. predictive_accuracy

offset : int, optional

the number of runs to skip, starting from the first

size : int, default 10000

The maximum number of runs to show. If set to None, it returns all the results.

tasks : list[int, str], optional

the list of task IDs

setups : list[int, str], optional

the list of setup IDs

flows : list[int, str], optional

the list of flow IDs

runs : list[int, str], optional

the list of run IDs

uploaders : list[int, str], optional

the list of uploader IDs

tag : str, optional

filter evaluations based on the given tag

study : int, optional

the ID of the study to filter the evaluations by

per_fold : bool, optional

whether to return evaluations per fold rather than aggregated over folds

sort_order : str, optional

order of sorting evaluations, ascending ("asc") or descending ("desc")

output_format : str, optional (default='object')

The parameter decides the format of the output.

- If 'object', the output is a dict of OpenMLEvaluation objects
- If 'dict', the output is a dict of dicts
- If 'dataframe', the output is a pandas DataFrame

Returns:
dict or dataframe

a dict of OpenMLEvaluation objects, a dict of dicts, or a pandas DataFrame, depending on output_format
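
Example:

A minimal usage sketch (assumes the openml package is installed and the OpenML server is reachable; task ID 31 is an arbitrary illustration, not prescribed by this API):

import openml

# List up to 100 predictive-accuracy evaluations for task 31,
# best scores first, returned as a pandas DataFrame.
evals = openml.evaluations.list_evaluations(
    function="predictive_accuracy",
    tasks=[31],
    size=100,
    sort_order="desc",
    output_format="dataframe",
)
print(evals.head())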