Metrics
This module provides utility functions for evaluating model performance and activation functions. It includes functions to compute the accuracy and top-k accuracy of model predictions, the F1 score, and the sigmoid function.
accuracy(out, yb)
Computes the accuracy of model predictions.
Parameters:

Name | Type | Description | Default
---|---|---|---
`out` | `Tensor` | The output tensor from the model, containing predicted class scores. | required
`yb` | `Tensor` | The ground truth labels tensor. | required

Returns:

Type | Description
---|---
`Tensor` | The mean accuracy of the predictions, computed as a float tensor.
Source code in `openml_pytorch/metrics.py`, lines 9-21.
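The source listing is not rendered here, so the following is a minimal sketch consistent with the signature and description above; the actual implementation in `openml_pytorch/metrics.py` may differ in detail.

```python
import torch

def accuracy(out, yb):
    # Mean of exact matches between the argmax class and the label.
    preds = out.argmax(dim=1)            # (batch_size,)
    return (preds == yb).float().mean()

# Tiny example: 2 of 3 predictions match the labels.
out = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
yb = torch.tensor([1, 0, 0])
print(accuracy(out, yb))  # tensor(0.6667)
```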
accuracy_topk(out, yb, k=5)
Computes the top-k accuracy of the given model outputs.
Parameters:

Name | Type | Description | Default
---|---|---|---
`out` | `Tensor` | The output predictions of the model, of shape (batch_size, num_classes). | required
`yb` | `Tensor` | The ground truth labels, of shape (batch_size,). | required
`k` | `int` | The number of top predictions to consider. | `5`
Returns:

Type | Description
---|---
`float` | The top-k accuracy as a float value.
The function calculates how often the true label is among the top-k predicted labels.
Source code in `openml_pytorch/metrics.py`, lines 24-39.
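Since the source listing is not rendered here, this is a hedged sketch of top-k accuracy matching the description above (true label counted as correct when it appears among the k highest-scoring classes); the library's actual code may differ.

```python
import torch

def accuracy_topk(out, yb, k=5):
    # A prediction counts as correct when the true label is among
    # the k highest-scoring classes for that row.
    _, topk_idx = out.topk(k, dim=1)                 # (batch_size, k)
    hits = topk_idx.eq(yb.unsqueeze(1)).any(dim=1)   # broadcast compare
    return hits.float().mean()

# With k=2: rows 0 and 1 contain the label in their top-2, row 2 does not.
out = torch.tensor([[0.1, 0.2, 0.6, 0.1],
                    [0.5, 0.3, 0.1, 0.1],
                    [0.3, 0.2, 0.4, 0.1]])
yb = torch.tensor([1, 0, 3])
print(accuracy_topk(out, yb, k=2))  # tensor(0.6667)
```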
f1_score(out, yb)
Computes the F1 score for the given model outputs and true labels.
Parameters:

Name | Type | Description | Default
---|---|---|---
`out` | `Tensor` | The output predictions of the model, of shape (batch_size, num_classes). | required
`yb` | `Tensor` | The ground truth labels, of shape (batch_size,). | required
Returns:

Type | Description
---|---
`float` | The F1 score as a float value.
Source code in `openml_pytorch/metrics.py`, lines 42-63.
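The docs above do not state which averaging scheme `f1_score` uses, so the sketch below assumes macro-averaging (per-class F1 from argmax predictions, then an unweighted mean over classes); the actual implementation in `openml_pytorch/metrics.py` may use a different scheme.

```python
import torch

def f1_score(out, yb):
    # Macro-averaged F1: per-class F1 from argmax predictions,
    # then an unweighted mean over classes. Classes with no
    # predictions and no positives contribute 0.
    preds = out.argmax(dim=1)
    num_classes = out.shape[1]
    f1s = []
    for c in range(num_classes):
        tp = ((preds == c) & (yb == c)).sum().item()
        fp = ((preds == c) & (yb != c)).sum().item()
        fn = ((preds != c) & (yb == c)).sum().item()
        precision = tp / (tp + fp) if tp + fp > 0 else 0.0
        recall = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall > 0 else 0.0)
    return sum(f1s) / num_classes

# Binary example: class 0 has F1 = 0.8, class 1 has F1 = 0.6667.
out = torch.tensor([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.6, 0.4]])
yb = torch.tensor([0, 1, 0, 0])
print(f1_score(out, yb))  # ~0.7333
```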