openml.runs
#
OpenMLRun
#
OpenMLRun(task_id: int, flow_id: int | None, dataset_id: int | None, setup_string: str | None = None, output_files: dict[str, int] | None = None, setup_id: int | None = None, tags: list[str] | None = None, uploader: int | None = None, uploader_name: str | None = None, evaluations: dict | None = None, fold_evaluations: dict | None = None, sample_evaluations: dict | None = None, data_content: list[list] | None = None, trace: OpenMLRunTrace | None = None, model: object | None = None, task_type: str | None = None, task_evaluation_measure: str | None = None, flow_name: str | None = None, parameter_settings: list[dict[str, Any]] | None = None, predictions_url: str | None = None, task: OpenMLTask | None = None, flow: OpenMLFlow | None = None, run_id: int | None = None, description_text: str | None = None, run_details: str | None = None)
Bases: OpenMLBase
OpenML Run: result of running a model on an OpenML dataset.
Parameters#
task_id : int
The ID of the OpenML task associated with the run.
flow_id : int
The ID of the OpenML flow associated with the run.
dataset_id : int
The ID of the OpenML dataset used for the run.
setup_string : str
The setup string of the run.
output_files : Dict[str, int]
Specifies where each related file can be found.
setup_id : int
An integer representing the ID of the setup used for the run.
tags : List[str]
The tags associated with the run.
uploader : int
User ID of the uploader.
uploader_name : str
The name of the person who uploaded the run.
evaluations : Dict
The evaluations of the run.
fold_evaluations : Dict
The evaluations of the run for each fold.
sample_evaluations : Dict
The evaluations of the run for each sample.
data_content : List[List]
The predictions generated from executing this run.
trace : OpenMLRunTrace
The trace containing information on internal model evaluations of this run.
model : object
The untrained model that was evaluated in the run.
task_type : str
The type of the OpenML task associated with the run.
task_evaluation_measure : str
The evaluation measure used for the task.
flow_name : str
The name of the OpenML flow associated with the run.
parameter_settings : List[OrderedDict]
The parameter settings used for the run.
predictions_url : str
The URL of the predictions file.
task : OpenMLTask
An instance of the OpenMLTask class, representing the OpenML task associated with the run.
flow : OpenMLFlow
An instance of the OpenMLFlow class, representing the OpenML flow associated with the run.
run_id : int
The ID of the run.
description_text : str, optional
Description text to add to the predictions file. If left None, it is set to the time the arff file is generated.
run_details : str, optional (default=None)
Description of the run stored in the run meta-data.
Source code in openml/runs/run.py
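Example#
In practice, an OpenMLRun is usually created locally via run_model_on_task (documented below) or fetched from the server with openml.runs.get_run. A minimal sketch of inspecting a downloaded run; the run id used here is an arbitrary placeholder:

```python
import openml

# Fetch an existing run from the server (the id is a placeholder).
run = openml.runs.get_run(10)

print(run.task_id)        # task the run was executed on
print(run.flow_id)        # flow (model description) that produced the run
print(run.uploader_name)  # who uploaded the run
print(run.evaluations)    # server-side evaluation measures
print(run.openml_url)     # web page of the run on the server
```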
openml_url
property
#
The URL of the object on the server, if it was uploaded, else None.
from_filesystem
classmethod
#
from_filesystem(directory: str | Path, expect_model: bool = True) -> OpenMLRun
The inverse of the to_filesystem method. Instantiates an OpenMLRun object based on files stored on the file system.
Parameters#
directory : str
A path leading to the folder where the results are stored.
expect_model : bool
If True, the model pickle is required to be present and an error is thrown if it is not. Otherwise, the model may or may not be present.
Returns#
run : OpenMLRun the re-instantiated run object
Source code in openml/runs/run.py
get_metric_fn
#
Calculates metric scores based on the predicted values. Assumes the run has been executed locally (and contains prediction data). Furthermore, it assumes that the 'correct' or 'truth' attribute is specified in the ARFF file (an optional field, but always present for openml-python runs).
Parameters#
sklearn_fn : function
A function pointer to a sklearn function that accepts y_true, y_pred and **kwargs.
kwargs : dict
Keyword arguments passed to the function.
Returns#
scores : ndarray
Array of metric scores of length num_folds * num_repeats.
Source code in openml/runs/run.py
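Example#
A sketch of computing local scores for a freshly executed run, assuming scikit-learn is installed; the task id is a placeholder:

```python
import openml
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Execute a model locally so that the run contains prediction data.
task = openml.tasks.get_task(119)  # placeholder task id
run = openml.runs.run_model_on_task(
    DecisionTreeClassifier(), task, avoid_duplicate_runs=False
)

# One score per fold/repeat, computed from the local predictions.
scores = run.get_metric_fn(accuracy_score)
print(scores.mean(), scores.std())
```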
open_in_browser
#
Opens the OpenML web page corresponding to this object in your default browser.
Source code in openml/base.py
publish
#
publish() -> OpenMLBase
Publish the object on the OpenML server.
Source code in openml/base.py
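Example#
A sketch of uploading a locally executed run; publishing requires an API key to be configured (e.g. via openml.config.apikey or the configuration file), and the task id is a placeholder:

```python
import openml
from sklearn.tree import DecisionTreeClassifier

# openml.config.apikey = "..."  # required for publishing

task = openml.tasks.get_task(119)  # placeholder task id
run = openml.runs.run_model_on_task(
    DecisionTreeClassifier(), task, avoid_duplicate_runs=False
)

run = run.publish()  # uploads the run (and the flow, if it is not on the server yet)
print(run.run_id, run.openml_url)
```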
push_tag
#
Annotates the entity with a tag on the server.
Source code in openml/base.py
remove_tag
#
Removes a tag from the entity on the server.
Source code in openml/base.py
to_filesystem
#
The inverse of the from_filesystem method. Serializes a run on the filesystem, to be uploaded later.
Parameters#
directory : str
A path leading to the folder where the results will be stored. Should be empty.
store_model : bool, optional (default=True)
If True, the model is pickled as well. As this is the most storage-expensive part, it is often desirable not to store the model.
Source code in openml/runs/run.py
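Example#
A sketch of the to_filesystem/from_filesystem round trip, e.g. for running now and publishing later; the directory name and task id are placeholders:

```python
import openml
from openml.runs import OpenMLRun
from sklearn.tree import DecisionTreeClassifier

task = openml.tasks.get_task(119)  # placeholder task id
run = openml.runs.run_model_on_task(
    DecisionTreeClassifier(), task, avoid_duplicate_runs=False
)

# Store the run (including the pickled model) in an empty directory ...
run.to_filesystem("my_run_dir", store_model=True)

# ... and restore it later, for example from another process.
restored = OpenMLRun.from_filesystem("my_run_dir", expect_model=True)
```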
url_for_id
classmethod
#
Return the OpenML URL for the object of the class entity with the given id.
OpenMLRunTrace
#
OpenMLRunTrace(run_id: int | None, trace_iterations: dict[tuple[int, int, int], OpenMLTraceIteration])
OpenML Run Trace: parsed output from Run Trace call
Parameters#
run_id : int
OpenML run id for which the trace content is stored.
trace_iterations : dict
Mapping from key (repeat, fold, iteration) to an object of OpenMLTraceIteration, containing the trace content obtained by running a flow on a task.
Source code in openml/runs/trace.py
generate
classmethod
#
generate(attributes: list[tuple[str, str]], content: list[list[int | float | str]]) -> OpenMLRunTrace
Generates an OpenMLRunTrace.
Generates the trace object from the attributes and content extracted while running the underlying flow.
Parameters#
attributes : list
List of tuples describing the arff attributes.
content : list
List of lists containing information about the individual tuning runs.
Returns#
OpenMLRunTrace
Source code in openml/runs/trace.py
get_selected_iteration
#
Returns the trace iteration that was marked as selected. In case multiple iterations are marked as selected (which should not happen), the first of these is returned.
Parameters#
fold: int
repeat: int
Returns#
int The trace iteration from the given fold and repeat that was selected as the best iteration by the search procedure
Source code in openml/runs/trace.py
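Example#
A sketch of looking up the best iteration of a hyperparameter search; the run id is a placeholder and the run must contain a trace:

```python
import openml

# Fetch the optimization trace of a run produced by a hyperparameter search.
trace = openml.runs.get_run_trace(10)  # placeholder run id

# Iteration number that the search marked as best for repeat 0, fold 0.
best_iteration = trace.get_selected_iteration(fold=0, repeat=0)
print(best_iteration)
```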
merge_traces
classmethod
#
merge_traces(traces: list[OpenMLRunTrace]) -> OpenMLRunTrace
Merge multiple traces into a single trace.
Parameters#
cls : type Type of the trace object to be created. traces : List[OpenMLRunTrace] List of traces to merge.
Returns#
OpenMLRunTrace A trace object representing the merged traces.
Raises#
ValueError If the parameters in the iterations of the traces being merged are not equal. If a key (repeat, fold, iteration) is encountered twice while merging the traces.
Source code in openml/runs/trace.py
trace_from_arff
classmethod
#
trace_from_arff(arff_obj: dict[str, Any]) -> OpenMLRunTrace
Generate trace from arff trace.
Creates a trace file from arff object (for example, generated by a local run).
Parameters#
arff_obj : dict LIAC arff obj, dict containing attributes, relation, data.
Returns#
OpenMLRunTrace
Source code in openml/runs/trace.py
trace_from_xml
classmethod
#
trace_from_xml(xml: str | Path | IO) -> OpenMLRunTrace
Generate trace from xml.
Creates a trace file from the xml description.
Parameters#
xml : string | file-like object
An xml description that can be either a string or a file-like object.
Returns#
run : OpenMLRunTrace Object containing the run id and a dict containing the trace iterations.
Source code in openml/runs/trace.py
trace_to_arff
#
Generate the arff dictionary of the optimization trace for uploading to the server.
Uses the trace object to generate an arff dictionary representation.
Returns#
arff_dict : dict Dictionary representation of the ARFF file that will be uploaded. Contains information about the optimization trace.
Source code in openml/runs/trace.py
OpenMLTraceIteration
dataclass
#
OpenMLTraceIteration(repeat: int, fold: int, iteration: int, evaluation: float, selected: bool, setup_string: dict[str, str] | None = None, parameters: dict[str, str | int | float] | None = None)
OpenML Trace Iteration: parsed output from Run Trace call
Exactly one of setup_string or parameters must be provided.
Parameters#
repeat : int
Repeat number (in case of no repeats: 0).
fold : int
Fold number (in case of no folds: 0).
iteration : int
Iteration number of the optimization procedure.
setup_string : str, optional
JSON string representing the parameters. If not provided, parameters should be set.
evaluation : double
The evaluation that was awarded to this trace iteration. The measure is defined by the task.
selected : bool
Whether this was the best of all iterations, and hence selected for making predictions. Per fold/repeat there should be only one iteration selected.
parameters : OrderedDict, optional
Dictionary specifying parameter names and their values. If not provided, setup_string should be set.
get_parameters
#
Get the parameters of this trace iteration.
Source code in openml/runs/trace.py
delete_run
#
Delete run with id run_id from the OpenML server. You can only delete runs which you uploaded.
Parameters#
run_id : int OpenML id of the run
Returns#
bool True if the deletion was successful. False otherwise.
Source code in openml/runs/functions.py
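Example#
A sketch; only runs uploaded under your own account can be deleted, and the run id below is a placeholder:

```python
import openml

# Returns True if the deletion succeeded.
success = openml.runs.delete_run(10)  # placeholder run id
print(success)
```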
get_run
#
get_run(run_id: int, ignore_cache: bool = False) -> OpenMLRun
Gets run corresponding to run_id.
Parameters#
run_id : int
ignore_cache : bool
Whether to ignore the cache. If true, this will download and overwrite the run xml even if the requested run is already cached.
Returns#
run : OpenMLRun Run corresponding to ID, fetched from the server.
Source code in openml/runs/functions.py
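Example#
A minimal sketch; the run id is a placeholder:

```python
import openml

run = openml.runs.get_run(10)                       # served from the local cache if present
fresh = openml.runs.get_run(10, ignore_cache=True)  # force a re-download of the run xml
print(run.evaluations)
```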
get_run_trace
#
get_run_trace(run_id: int) -> OpenMLRunTrace
Get the optimization trace object for a given run id.
Parameters#
run_id : int
Returns#
openml.runs.OpenMLRunTrace
Source code in openml/runs/functions.py
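Example#
A sketch of inspecting the individual entries of a trace; the run id is a placeholder:

```python
import openml

trace = openml.runs.get_run_trace(10)  # placeholder run id

# trace_iterations maps (repeat, fold, iteration) to OpenMLTraceIteration objects.
for (repeat, fold, iteration), entry in trace.trace_iterations.items():
    if entry.selected:
        print(repeat, fold, iteration, entry.evaluation, entry.get_parameters())
```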
get_runs
#
get_runs(run_ids: list[int]) -> list[OpenMLRun]
Gets all runs in run_ids list.
Parameters#
run_ids : list of ints
Returns#
runs : list of OpenMLRun List of runs corresponding to IDs, fetched from the server.
Source code in openml/runs/functions.py
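Example#
A sketch of a batch download; the run ids are placeholders:

```python
import openml

runs = openml.runs.get_runs([10, 11, 12])  # placeholder run ids
for run in runs:
    print(run.run_id, run.task_id, run.flow_id)
```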
initialize_model_from_run
#
Initialize a model based on a run_id (i.e., using the exact same parameter settings).
Parameters#
run_id : int
The Openml run_id
strict_version : bool (default=True)
See flow_to_model strict_version.
Returns#
model
Source code in openml/runs/functions.py
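Example#
A sketch of reproducing the model of an existing run, assuming the library behind the corresponding flow (e.g. scikit-learn) is installed locally; the run id is a placeholder:

```python
import openml

# Rebuild the (untrained) model with exactly the parameter settings of the run.
model = openml.runs.initialize_model_from_run(10)  # placeholder run id
print(model)
```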
initialize_model_from_trace
#
initialize_model_from_trace(run_id: int, repeat: int, fold: int, iteration: int | None = None) -> Any
Initialize a model based on the parameters that were set by an optimization procedure (i.e., using the exact same parameter settings)
Parameters#
run_id : int
The OpenML run_id. Should contain a trace file, otherwise an OpenMLServerException is raised.
repeat : int
The repeat nr (column in trace file).
fold : int
The fold nr (column in trace file).
iteration : int
The iteration nr (column in trace file). If None, the best (selected) iteration will be searched (slow), according to the selection criteria implemented in OpenMLRunTrace.get_selected_iteration.
Returns#
model
Source code in openml/runs/functions.py
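Example#
A sketch; the run id is a placeholder and the run must contain a trace:

```python
import openml

# Rebuild the model with the parameters of a specific trace entry.
model = openml.runs.initialize_model_from_trace(run_id=10, repeat=0, fold=0, iteration=5)

# With iteration=None the selected (best) iteration is looked up instead.
best_model = openml.runs.initialize_model_from_trace(run_id=10, repeat=0, fold=0)
```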
list_runs
#
list_runs(offset: int | None = None, size: int | None = None, id: list | None = None, task: list[int] | None = None, setup: list | None = None, flow: list | None = None, uploader: list | None = None, tag: str | None = None, study: int | None = None, display_errors: bool = False, task_type: TaskType | int | None = None) -> DataFrame
List all runs matching all of the given filters. (Supports a large number of results.)
Parameters#
offset : int, optional
The number of runs to skip, starting from the first.
size : int, optional
The maximum number of runs to show.
id : list, optional
task : list, optional
setup : list, optional
flow : list, optional
uploader : list, optional
tag : str, optional
study : int, optional
display_errors : bool, optional (default=False)
Whether to list runs which have an error (for example a missing prediction file).
task_type : TaskType or int, optional
Returns#
dataframe
Source code in openml/runs/functions.py
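Example#
A sketch of a filtered listing; the task id is a placeholder:

```python
import openml

# The first 100 runs on a given task, as a pandas DataFrame.
df = openml.runs.list_runs(task=[119], size=100)  # placeholder task id
print(len(df))
print(df.head())
```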
run_exists
#
Checks whether a task/setup combination is already present on the server.
Parameters#
task_id : int
setup_id : int
Returns#
Set of run ids for runs where flow setup_id was run on task_id. Empty set if it wasn't run yet.
Source code in openml/runs/functions.py
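Example#
A sketch; the task and setup ids are placeholders:

```python
import openml

run_ids = openml.runs.run_exists(task_id=119, setup_id=12)  # placeholder ids
if run_ids:
    print("already run:", run_ids)
else:
    print("no run with this setup on this task yet")
```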
run_flow_on_task
#
run_flow_on_task(flow: OpenMLFlow, task: OpenMLTask, avoid_duplicate_runs: bool | None = None, flow_tags: list[str] | None = None, seed: int | None = None, add_local_measures: bool = True, upload_flow: bool = False, n_jobs: int | None = None) -> OpenMLRun
Run the model provided by the flow on the dataset defined by task.
Takes the flow and repeat information into account. The Flow may optionally be published.
Parameters#
flow : OpenMLFlow
A flow wraps a machine learning model together with relevant information.
The model has a function fit(X,Y) and predict(X),
all supervised estimators of scikit learn follow this definition of a model.
task : OpenMLTask
Task to perform. This may be an OpenMLFlow instead if the first argument is an OpenMLTask.
avoid_duplicate_runs : bool, optional (default=None)
If True, the run will throw an error if the setup/task combination is already present on
the server. This feature requires an internet connection.
If not set, it will use the default from your openml configuration (False if unset).
flow_tags : List[str], optional (default=None)
A list of tags that the flow should have at creation.
seed: int, optional (default=None)
Models that are not seeded will get this seed.
add_local_measures : bool, optional (default=True)
Determines whether to calculate a set of evaluation measures locally,
to later verify server behaviour.
upload_flow : bool (default=False)
If True, upload the flow to OpenML if it does not exist yet.
If False, do not upload the flow to OpenML.
n_jobs : int (default=None)
The number of processes/threads to distribute the evaluation asynchronously.
If None or 1, then the evaluation is treated as synchronous and processed sequentially.
If -1, then the job uses as many cores as are available.
Returns#
run : OpenMLRun Result of the run.
Source code in openml/runs/functions.py
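Example#
A sketch using a flow downloaded from the server; the flow and task ids are placeholders, and reinstantiating the flow assumes its underlying library (in a compatible version, unless strict_version=False is passed to get_flow) is installed locally:

```python
import openml

task = openml.tasks.get_task(119)  # placeholder task id

# reinstantiate=True attaches a runnable model object to the flow.
flow = openml.flows.get_flow(100, reinstantiate=True)  # placeholder flow id

run = openml.runs.run_flow_on_task(flow, task, avoid_duplicate_runs=False)
print(run.fold_evaluations)  # locally computed measures, per fold
```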
run_model_on_task
#
run_model_on_task(model: Any, task: int | str | OpenMLTask, avoid_duplicate_runs: bool | None = None, flow_tags: list[str] | None = None, seed: int | None = None, add_local_measures: bool = True, upload_flow: bool = False, return_flow: bool = False, n_jobs: int | None = None) -> OpenMLRun | tuple[OpenMLRun, OpenMLFlow]
Run the model on the dataset defined by the task.
Parameters#
model : sklearn model
A model which has a function fit(X,Y) and predict(X),
all supervised estimators of scikit learn follow this definition of a model.
task : OpenMLTask or int or str
Task to perform or Task id.
This may be a model instead if the first argument is an OpenMLTask.
avoid_duplicate_runs : bool, optional (default=None)
If True, the run will throw an error if the setup/task combination is already present on
the server. This feature requires an internet connection.
If not set, it will use the default from your openml configuration (False if unset).
flow_tags : List[str], optional (default=None)
A list of tags that the flow should have at creation.
seed: int, optional (default=None)
Models that are not seeded will get this seed.
add_local_measures : bool, optional (default=True)
Determines whether to calculate a set of evaluation measures locally,
to later verify server behaviour.
upload_flow : bool (default=False)
If True, upload the flow to OpenML if it does not exist yet.
If False, do not upload the flow to OpenML.
return_flow : bool (default=False)
If True, returns the OpenMLFlow generated from the model in addition to the OpenMLRun.
n_jobs : int (default=None)
The number of processes/threads to distribute the evaluation asynchronously.
If None or 1, then the evaluation is treated as synchronous and processed sequentially.
If -1, then the job uses as many cores as are available.
Returns#
run : OpenMLRun
Result of the run.
flow : OpenMLFlow (optional, only if return_flow is True).
Flow generated from the model.
Source code in openml/runs/functions.py
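Example#
A typical end-to-end sketch with a scikit-learn model, assuming the scikit-learn extension of openml-python is available; the task id is a placeholder and publishing requires an API key:

```python
import openml
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

clf = make_pipeline(SimpleImputer(), DecisionTreeClassifier())

# Run locally on the task's predefined cross-validation splits.
run = openml.runs.run_model_on_task(clf, task=119, avoid_duplicate_runs=False)

# Inspect locally computed measures, then optionally upload.
print(run.fold_evaluations)
# run.publish()  # requires openml.config.apikey to be set
```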