# evaluation

## OpenMLEvaluation

Contains all meta-information about a run/evaluation combination, according to the `evaluation/list` function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `run_id` | `int` | Refers to the run. | required |
| `task_id` | `int` | Refers to the task. | required |
| `setup_id` | `int` | Refers to the setup. | required |
| `flow_id` | `int` | Refers to the flow. | required |
| `flow_name` | `str` | Name of the referred flow. | required |
| `data_id` | `int` | Refers to the dataset. | required |
| `data_name` | `str` | The name of the dataset. | required |
| `function` | `str` | The evaluation metric of this item (e.g., accuracy). | required |
| `upload_time` | `str` | The time of evaluation. | required |
| `uploader` | `int` | Uploader ID (user ID). | required |
| `upload_name` | `str` | Name of the uploader of this evaluation. | required |
| `value` | `float` | The value (score) of this evaluation. | required |
| `values` | `List[float]` | The values (scores) per repeat and fold (if requested). | required |
| `array_data` | `str` | List of information per class (e.g., in case of precision, AUROC, recall). | `None` |
Source code in openml/evaluations/evaluation.py
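These objects are usually not constructed by hand; they are returned by the evaluation listing API. The following is a minimal sketch, assuming an `openml-python` version where `openml.evaluations.list_evaluations` accepts `function`, `tasks`, `size`, and `output_format` arguments; the task ID and metric name are illustrative placeholders, not values taken from this page.

```python
import openml

# Fetch a handful of evaluation records for one metric on one task.
# Task 31 and "predictive_accuracy" are placeholder choices.
evaluations = openml.evaluations.list_evaluations(
    function="predictive_accuracy",
    tasks=[31],
    size=10,
    output_format="object",  # OpenMLEvaluation objects keyed by run id
)

# Each value is an OpenMLEvaluation with the attributes documented above.
for run_id, evaluation in evaluations.items():
    print(
        f"run {evaluation.run_id}: flow '{evaluation.flow_name}' on "
        f"'{evaluation.data_name}' -> {evaluation.function} = {evaluation.value}"
    )
```

In recent versions, passing `output_format="dataframe"` instead returns the same fields as columns of a pandas DataFrame, which can be more convenient for bulk analysis.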