Metrics

This module provides utility functions for evaluating model performance. It includes functions to compute the accuracy, top-k accuracy, and F1 score of model predictions.

accuracy(out, yb)

Computes the accuracy of model predictions.

Parameters:

    out (Tensor): The output tensor from the model, containing predicted class scores.
    yb (Tensor): The ground truth labels tensor.

Returns:

    Tensor: The mean accuracy of the predictions, computed as a float tensor.

Source code in openml_pytorch/metrics.py
import torch


def accuracy(out, yb):
    """
    Computes the accuracy of model predictions.

    Parameters:
    out (Tensor): The output tensor from the model, containing predicted class scores.
    yb (Tensor): The ground truth labels tensor.

    Returns:
    Tensor: The mean accuracy of the predictions, computed as a float tensor.
    """
    # The predicted class is the index of the highest score; compare against
    # the labels and average the resulting 0/1 values.
    return (torch.argmax(out, dim=1) == yb.long()).float().mean()
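
A minimal usage sketch, not part of the library source (the tensors below are made up for illustration):

import torch
from openml_pytorch.metrics import accuracy

# Fake batch: 3 samples, 4 classes; argmax picks classes 1, 0, 2.
out = torch.tensor([[0.1, 0.7, 0.1, 0.1],
                    [0.8, 0.1, 0.05, 0.05],
                    [0.2, 0.2, 0.5, 0.1]])
yb = torch.tensor([1, 0, 3])

print(accuracy(out, yb))  # tensor(0.6667): 2 of 3 predictions match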

accuracy_topk(out, yb, k=5)

Computes the top-k accuracy of the given model outputs.

Parameters:

    out (Tensor): The output predictions of the model, of shape (batch_size, num_classes).
    yb (Tensor): The ground truth labels, of shape (batch_size,).
    k (int, optional): The number of top predictions to consider. Default is 5.

Returns:

    float: The top-k accuracy as a float value.

The function calculates how often the true label is among the top-k predicted labels.

Source code in openml_pytorch/metrics.py
def accuracy_topk(out, yb, k=5):
    """
    Computes the top-k accuracy of the given model outputs.

    Args:
        out (torch.Tensor): The output predictions of the model, of shape (batch_size, num_classes).
        yb (torch.Tensor): The ground truth labels, of shape (batch_size,).
        k (int, optional): The number of top predictions to consider. Default is 5.

    Returns:
        float: The top-k accuracy as a float value.

    The function calculates how often the true label is among the top-k predicted labels.
    """
    # torch.topk(...)[1] holds the indices of the k highest scores per sample.
    # A sample is correct if any of those indices matches its label; reducing
    # with .any(dim=1) before averaging keeps the mean over samples, not over k.
    return (torch.topk(out, k, dim=1)[1] == yb.long().unsqueeze(1)).any(dim=1).float().mean()
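
A quick illustrative check, again with made-up tensors and k=2 to keep the example small:

import torch
from openml_pytorch.metrics import accuracy_topk

# 3 samples, 4 classes; the true label only needs to appear among the top 2 scores.
out = torch.tensor([[0.1, 0.6, 0.2, 0.1],
                    [0.5, 0.3, 0.1, 0.1],
                    [0.3, 0.3, 0.2, 0.2]])
yb = torch.tensor([2, 0, 3])

print(accuracy_topk(out, yb, k=2))  # tensor(0.6667): hits for the first two samples only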

f1_score(out, yb)

Computes the F1 score for the given model outputs and true labels.

Parameters:

    out (Tensor): The output predictions of the model, of shape (batch_size, 2).
    yb (Tensor): The ground truth binary labels, of shape (batch_size,).

Returns:

    float: The F1 score as a float value.

The score is computed for binary classification, treating class 1 as the positive class.

Source code in openml_pytorch/metrics.py
def f1_score(out, yb):
    """
    Computes the F1 score for the given model outputs and true labels.

    Args:
        out (torch.Tensor): The output predictions of the model, of shape (batch_size, 2).
        yb (torch.Tensor): The ground truth binary labels, of shape (batch_size,).

    Returns:
        float: The F1 score as a float value.
    """
    pred = torch.argmax(out, dim=1)

    # Binary F1, treating class 1 as the positive class.
    tp = torch.sum((pred == 1) & (yb == 1)).float()  # true positives
    fp = torch.sum((pred == 1) & (yb == 0)).float()  # false positives
    fn = torch.sum((pred == 0) & (yb == 1)).float()  # false negatives

    # The small epsilon avoids division by zero when a count is empty.
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)

    f1 = 2 * (precision * recall) / (precision + recall + 1e-8)

    return f1
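
A small illustrative run on a made-up binary batch, where precision and recall both come out to 0.5:

import torch
from openml_pytorch.metrics import f1_score

# 4 samples, 2 classes; argmax predicts [1, 0, 1, 0].
out = torch.tensor([[0.2, 0.8],
                    [0.9, 0.1],
                    [0.4, 0.6],
                    [0.7, 0.3]])
yb = torch.tensor([1, 0, 0, 1])

# tp=1, fp=1, fn=1 -> precision = recall = 0.5 -> F1 = 0.5
print(f1_score(out, yb))  # tensor(0.5000)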