
Loss

Loss Factory

mindcv.loss.loss_factory.create_loss(name='CE', weight=None, reduction='mean', label_smoothing=0.0, aux_factor=0.0)

Creates a loss function.

PARAMETER DESCRIPTION
name

Loss name: 'CE' for cross-entropy, 'BCE' for binary cross-entropy. The factory also accepts 'asl_single_label', 'asl_multi_label', and 'jsd' (see the source below). Default: 'CE'.

TYPE: str DEFAULT: 'CE'

weight

Class weight. A rescaling weight applied to the loss of each class. If given, must be a Tensor of shape [C], where C is the number of classes (for 'BCE', it is broadcast to the shape of logits). Data type must be float16 or float32.

TYPE: Tensor DEFAULT: None

reduction

Reduction method applied to the output: 'mean' (the sum of the output is divided by the number of its elements) or 'sum' (the output is summed). Default: 'mean'.

TYPE: str DEFAULT: 'mean'

label_smoothing

Label smoothing factor, a regularization tool used to prevent the model from overfitting when computing the loss. The value range is [0.0, 1.0] (see the note after this parameter list). Default: 0.0.

TYPE: float DEFAULT: 0.0

aux_factor

Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs (i.e., deep supervision), like inception_v3. Default: 0.0.

TYPE: float DEFAULT: 0.0
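
Note on label smoothing: for a smoothing factor ε in [0.0, 1.0] and C classes, the hard targets are blended with a uniform distribution before the loss is computed, i.e. y_smooth = y * (1 - ε) + ε / C. This is the transform applied explicitly in BinaryCrossEntropySmooth below; for 'CE', the factor is forwarded to the label_smoothing argument of F.cross_entropy.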

Inputs
  • logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. A tuple of two logits is supported, in the order (main_logits, aux_logits), for the auxiliary loss used in networks like inception_v3. Data type must be float16 or float32.
  • labels (Tensor): Ground truth labels. Shape [N] or [N, C]. (1) If in shape [N], sparse labels representing the class indices; must be int type. (2) If in shape [N, C], dense labels representing the ground-truth class probability values or one-hot labels; must be float type. If the loss type is BCE, the shape of labels must be [N, C].
RETURNS DESCRIPTION

Loss function to compute the loss between the input logits and labels.

Source code in mindcv/loss/loss_factory.py
def create_loss(
    name: str = "CE",
    weight: Optional[Tensor] = None,
    reduction: str = "mean",
    label_smoothing: float = 0.0,
    aux_factor: float = 0.0,
):
    r"""Creates loss function

    Args:
        name (str): Loss name: 'CE' for cross-entropy, 'BCE' for binary cross-entropy. Also accepts
            'asl_single_label', 'asl_multi_label' and 'jsd'. Default: 'CE'.
        weight (Tensor): Class weight. A rescaling weight applied to the loss of each class.
            If given, has to be a Tensor of shape [C], where C is the number of classes.
            Data type must be float16 or float32.
        reduction: Reduction method applied to the output: 'mean' (the sum of the output is divided
            by the number of its elements) or 'sum' (the output is summed). Default: 'mean'.
        label_smoothing: Label smoothing factor, a regularization tool used to prevent the model
            from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default: 0.0.
        aux_factor (float): Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs
            (i.e., deep supervision), like inception_v3. Default: 0.0.

    Inputs:
        - logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N means the number of samples,
            C means number of classes. Tuple of two input logits are supported in order (main_logits, aux_logits)
            for auxiliary loss used in networks like inception_v3. Data type must be float16 or float32.
        - labels (Tensor): Ground truth labels. Shape: [N] or [N, C].
            (1) If in shape [N], sparse labels representing the class indices. Must be int type.
            (2) shape [N, C], dense labels representing the ground truth class probability values,
            or the one-hot labels. Must be float type. If the loss type is BCE, the shape of labels must be [N, C].

    Returns:
       Loss function to compute the loss between the input logits and labels.
    """
    name = name.lower()

    if name == "ce":
        loss = CrossEntropySmooth(smoothing=label_smoothing, aux_factor=aux_factor, reduction=reduction, weight=weight)
    elif name == "bce":
        loss = BinaryCrossEntropySmooth(
            smoothing=label_smoothing, aux_factor=aux_factor, reduction=reduction, weight=weight, pos_weight=None
        )
    elif name == "asl_single_label":
        loss = AsymmetricLossSingleLabel(smoothing=label_smoothing)
    elif name == "asl_multi_label":
        loss = AsymmetricLossMultilabel()
    elif name == "jsd":
        loss = JSDCrossEntropy(smoothing=label_smoothing, aux_factor=aux_factor, reduction=reduction, weight=weight)
    else:
        raise NotImplementedError(f"Unknown loss name: {name}")

    return loss
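
A minimal usage sketch (shapes and values are illustrative, and the snippet assumes a working mindspore/mindcv installation):

import numpy as np
import mindspore as ms
from mindspore import Tensor

from mindcv.loss.loss_factory import create_loss

# Cross entropy with label smoothing: logits of shape [N, C], sparse int labels of shape [N].
loss_fn = create_loss(name="CE", label_smoothing=0.1)
logits = Tensor(np.random.randn(4, 10), ms.float32)
labels = Tensor(np.array([1, 0, 4, 7]), ms.int32)
loss = loss_fn(logits, labels)  # scalar Tensor under the default 'mean' reduction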

Cross Entropy

mindcv.loss.cross_entropy_smooth.CrossEntropySmooth

Bases: LossBase

Cross-entropy loss with label smoothing. Applies a softmax activation to the input logits and computes the cross entropy between the logits and the labels.

PARAMETER DESCRIPTION
smoothing

Label smoothing factor, a regularization tool used to prevent the model from overfitting when computing the loss. The value range is [0.0, 1.0]. Default: 0.0.

DEFAULT: 0.0

aux_factor

Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs (i.e., deep supervision), like inception_v3. Default: 0.0.

DEFAULT: 0.0

reduction

Reduction method applied to the output: 'mean' or 'sum'. Default: 'mean'.

DEFAULT: 'mean'

weight

Class weight. Shape [C]. A rescaling weight applied to the loss of each class. Data type must be float16 or float32.

TYPE: Tensor DEFAULT: None

Inputs

  • logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. A tuple of logits is supported, in the order (main_logits, aux_logits), for the auxiliary loss used in networks like inception_v3.
  • labels (Tensor): Ground truth labels. Shape [N] or [N, C]. (1) If in shape [N], sparse labels representing the class indices; must be int type. (2) If in shape [N, C], dense labels representing the ground-truth class probability values or one-hot labels; must be float type.

Source code in mindcv/loss/cross_entropy_smooth.py
class CrossEntropySmooth(nn.LossBase):
    """
    Cross-entropy loss with label smoothing.
    Applies a softmax activation to the input `logits` and computes the cross entropy
    between the logits and the labels.

    Args:
        smoothing: Label smoothing factor, a regularization tool used to prevent the model
            from overfitting when computing the loss. The value range is [0.0, 1.0]. Default: 0.0.
        aux_factor: Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs
            (i.e., deep supervision), like inception_v3. Default: 0.0.
        reduction: Reduction method applied to the output: 'mean' or 'sum'. Default: 'mean'.
        weight (Tensor): Class weight. Shape [C]. A rescaling weight applied to the loss of each class.
            Data type must be float16 or float32.

    Inputs:
        logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is # samples, C is # classes.
            A tuple of logits is supported, in the order (main_logits, aux_logits),
            for the auxiliary loss used in networks like inception_v3.
        labels (Tensor): Ground truth label. Shape: [N] or [N, C].
            (1) Shape (N), sparse labels representing the class indices. Must be int type.
            (2) Shape [N, C], dense labels representing the ground truth class probability values,
            or the one-hot labels. Must be float type.
    """

    def __init__(self, smoothing=0.0, aux_factor=0.0, reduction="mean", weight=None):
        super().__init__()
        self.smoothing = smoothing
        self.aux_factor = aux_factor
        self.reduction = reduction
        self.weight = weight

    def construct(self, logits, labels):
        loss_aux = 0

        if isinstance(logits, tuple):
            main_logits = logits[0]
            # Accumulate the auxiliary losses only when they contribute to the total.
            if self.aux_factor > 0:
                for aux in logits[1:]:
                    loss_aux += F.cross_entropy(
                        aux, labels, weight=self.weight, reduction=self.reduction, label_smoothing=self.smoothing
                    )
        else:
            main_logits = logits

        loss_logits = F.cross_entropy(
            main_logits, labels, weight=self.weight, reduction=self.reduction, label_smoothing=self.smoothing
        )
        loss = loss_logits + self.aux_factor * loss_aux
        return loss
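
A sketch of the auxiliary-loss path (shapes are illustrative; any model with deep supervision, such as inception_v3, would produce such a logits tuple):

import numpy as np
import mindspore as ms
from mindspore import Tensor

from mindcv.loss.cross_entropy_smooth import CrossEntropySmooth

loss_fn = CrossEntropySmooth(smoothing=0.1, aux_factor=0.4)
main_logits = Tensor(np.random.randn(4, 10), ms.float32)
aux_logits = Tensor(np.random.randn(4, 10), ms.float32)
labels = Tensor(np.array([2, 5, 0, 9]), ms.int32)

# total loss = CE(main_logits, labels) + aux_factor * CE(aux_logits, labels)
loss = loss_fn((main_logits, aux_logits), labels)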

Binary Cross Entropy

mindcv.loss.binary_cross_entropy_smooth.BinaryCrossEntropySmooth

Bases: LossBase

Binary cross-entropy loss with label smoothing. Applies a sigmoid activation to the input logits and computes the binary cross entropy between the logits and the labels.

PARAMETER DESCRIPTION
smoothing

Label smoothing factor, a regularization tool used to prevent the model from overfitting when computing the loss. The value range is [0.0, 1.0]. Default: 0.0.

DEFAULT: 0.0

aux_factor

Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs (i.e., deep supervision), like inception_v3. Default: 0.0.

DEFAULT: 0.0

reduction

Reduction method applied to the output: 'mean' or 'sum'. Default: 'mean'.

DEFAULT: 'mean'

weight

Class weight. A rescaling weight applied to the loss of each batch element. Shape [C]. It can be broadcast to the shape of logits. Data type must be float16 or float32.

TYPE: Tensor DEFAULT: None

pos_weight

Positive weight for each class; a weight for positive examples. Shape [C]. Must be a vector of length equal to the number of classes. It can be broadcast to the shape of logits. Data type must be float16 or float32.

TYPE: Tensor DEFAULT: None

Inputs

  • logits (Tensor or Tuple of Tensor): (1) Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. Or (2) a tuple of two logits, (main_logits, aux_logits), for the auxiliary loss.
  • labels (Tensor): Ground truth labels, (1) of shape [N, C], the same shape as logits, or (2) of shape [N]. Can be a class-probability matrix or one-hot labels. Data type must be float16 or float32.

Source code in mindcv/loss/binary_cross_entropy_smooth.py
class BinaryCrossEntropySmooth(nn.LossBase):
    """
    Binary cross-entropy loss with label smoothing.
    Applies a sigmoid activation to the input `logits` and computes the binary cross entropy
    between the logits and the labels.

    Args:
        smoothing: Label smoothing factor, a regularization tool used to prevent the model
            from overfitting when computing the loss. The value range is [0.0, 1.0]. Default: 0.0.
        aux_factor: Auxiliary loss factor. Set aux_factor > 0.0 if the model has auxiliary logit outputs
            (i.e., deep supervision), like inception_v3. Default: 0.0.
        reduction: Reduction method applied to the output: 'mean' or 'sum'. Default: 'mean'.
        weight (Tensor): Class weight. A rescaling weight applied to the loss of each batch element. Shape [C].
            It can be broadcast to the shape of `logits`. Data type must be float16 or float32.
        pos_weight (Tensor): Positive weight for each class; a weight for positive examples. Shape [C].
            Must be a vector of length equal to the number of classes.
            It can be broadcast to the shape of `logits`. Data type must be float16 or float32.

    Inputs:
        logits (Tensor or Tuple of Tensor): (1) Input logits. Shape [N, C], where N is # samples, C is # classes.
            Or (2) Tuple of two input logits (main_logits and aux_logits) for auxiliary loss.
        labels (Tensor): Ground truth labels, (1) of shape [N, C], the same shape as `logits`, or (2) of shape [N].
            Can be a class-probability matrix or one-hot labels. Data type must be float16 or float32.
    """

    def __init__(self, smoothing=0.0, aux_factor=0.0, reduction="mean", weight=None, pos_weight=None):
        super().__init__()
        self.smoothing = smoothing
        self.aux_factor = aux_factor
        self.reduction = reduction
        self.weight = weight
        self.pos_weight = pos_weight
        self.ones = P.OnesLike()
        self.one_hot = P.OneHot()

    def construct(self, logits, labels):
        loss_aux = 0

        if isinstance(logits, tuple):
            main_logits = logits[0]
        else:
            main_logits = logits

        if main_logits.size != labels.size:
            # Convert the labels to one-hot explicitly,
            # since binary_cross_entropy_with_logits requires input and label to have the same shape.
            class_dim = 0 if main_logits.ndim == 1 else 1
            n_classes = main_logits.shape[class_dim]
            labels = self.one_hot(labels, n_classes, Tensor(1.0), Tensor(0.0))

        ones_input = self.ones(main_logits)
        if self.weight is not None:
            weight = self.weight
        else:
            weight = ones_input
        if self.pos_weight is not None:
            pos_weight = self.pos_weight
        else:
            pos_weight = ones_input

        if self.smoothing > 0.0:
            class_dim = 0 if main_logits.ndim == 1 else -1
            n_classes = main_logits.shape[class_dim]
            labels = labels * (1 - self.smoothing) + self.smoothing / n_classes

        # Accumulate the auxiliary losses over any extra logits in the tuple.
        if self.aux_factor > 0 and isinstance(logits, tuple):
            for aux_logits in logits[1:]:
                loss_aux += F.binary_cross_entropy_with_logits(
                    aux_logits, labels, weight=weight, pos_weight=pos_weight, reduction=self.reduction
                )

        loss_logits = F.binary_cross_entropy_with_logits(
            main_logits, labels, weight=weight, pos_weight=pos_weight, reduction=self.reduction
        )

        loss = loss_logits + self.aux_factor * loss_aux

        return loss
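
A sketch with dense one-hot labels (illustrative values; sparse labels of shape [N] would be one-hot encoded inside construct, as shown above):

import numpy as np
import mindspore as ms
from mindspore import Tensor

from mindcv.loss.binary_cross_entropy_smooth import BinaryCrossEntropySmooth

loss_fn = BinaryCrossEntropySmooth(smoothing=0.1)
logits = Tensor(np.random.randn(4, 10), ms.float32)
labels = Tensor(np.eye(10)[[1, 3, 5, 7]], ms.float32)  # one-hot labels, shape [4, 10]
loss = loss_fn(logits, labels)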