Loss
Loss Factory
mindcv.loss.loss_factory.create_loss(name='CE', weight=None, reduction='mean', label_smoothing=0.0, aux_factor=0.0)
Creates the loss function.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | Loss name: `'CE'` for cross entropy, `'BCE'` for binary cross entropy. Default: `'CE'`. TYPE: `str` |
| `weight` | Class weight. A rescaling weight given to the loss of each batch element. If given, it has to be a Tensor of size `nbatch`. Data type must be float16 or float32. TYPE: `Tensor`, optional |
| `reduction` | Reduction method applied to the output: `'mean'` or `'sum'`. `'mean'`: the sum of the output is divided by the number of elements in the output. `'sum'`: the output is summed. Default: `'mean'`. TYPE: `str` |
| `label_smoothing` | Label smoothing factor, a regularization tool used to prevent the model from overfitting when calculating the loss. The value range is [0.0, 1.0]. Default: `0.0`. TYPE: `float` |
| `aux_factor` | Auxiliary loss factor. Set `aux_factor > 0.0` if the model has auxiliary logit outputs (i.e., deep supervision), as in inception_v3. Default: `0.0`. TYPE: `float` |
Inputs
- logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. A tuple of two logits is supported, in the order (main_logits, aux_logits), for the auxiliary loss used in networks like inception_v3. Data type must be float16 or float32.
- labels (Tensor): Ground truth labels. Shape: [N] or [N, C]. (1) If in shape [N], sparse labels representing the class indices; must be int type. (2) If in shape [N, C], dense labels representing the ground truth class probability values or one-hot labels; must be float type. If the loss type is BCE, the shape of labels must be [N, C].
| RETURNS | DESCRIPTION |
|---|---|
| `LossBase` | Loss function to compute the loss between the input logits and labels. |
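A minimal usage sketch, following the signature above (the batch values are illustrative random data):

```python
import numpy as np
import mindspore as ms
from mindcv.loss.loss_factory import create_loss

# Cross-entropy loss with label smoothing, averaged over the batch.
loss_fn = create_loss(name="CE", reduction="mean", label_smoothing=0.1)

# Dummy batch: 4 samples, 10 classes.
logits = ms.Tensor(np.random.randn(4, 10), ms.float32)
labels = ms.Tensor(np.random.randint(0, 10, size=(4,)), ms.int32)  # sparse class indices

loss = loss_fn(logits, labels)  # scalar Tensor
```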
Source code in mindcv/loss/loss_factory.py
Cross Entropy
mindcv.loss.cross_entropy_smooth.CrossEntropySmooth
Bases: LossBase
Cross entropy loss with label smoothing.
Applies a softmax activation function to the input logits, and computes the cross entropy between the logits and the labels.
| PARAMETER | DESCRIPTION |
|---|---|
| `smoothing` | Label smoothing factor, a regularization tool used to prevent the model from overfitting when calculating the loss. The value range is [0.0, 1.0]. DEFAULT: `0.0` |
| `aux_factor` | Auxiliary loss factor. Set `aux_factor > 0.0` if the model has auxiliary logit outputs (i.e., deep supervision), as in inception_v3. DEFAULT: `0.0` |
| `reduction` | Reduction method applied to the output: `'mean'` or `'sum'`. DEFAULT: `'mean'` |
| `weight` | Class weight. Shape [C]. A rescaling weight applied to the loss of each batch element. Data type must be float16 or float32. TYPE: `Tensor`, optional |
Inputs
- logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. A tuple of logits is supported, in the order (main_logits, aux_logits), for the auxiliary loss used in networks like inception_v3.
- labels (Tensor): Ground truth labels. Shape: [N] or [N, C]. (1) If in shape [N], sparse labels representing the class indices; must be int type. (2) If in shape [N, C], dense labels representing the ground truth class probability values or one-hot labels; must be float type.
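A minimal sketch of the auxiliary-loss usage described above (the `aux_factor` value and random tensors are illustrative assumptions):

```python
import numpy as np
import mindspore as ms
from mindcv.loss.cross_entropy_smooth import CrossEntropySmooth

# Smoothed cross entropy with an auxiliary head (inception_v3-style).
loss_fn = CrossEntropySmooth(smoothing=0.1, aux_factor=0.4, reduction="mean")

main_logits = ms.Tensor(np.random.randn(4, 10), ms.float32)
aux_logits = ms.Tensor(np.random.randn(4, 10), ms.float32)
labels = ms.Tensor(np.random.randint(0, 10, size=(4,)), ms.int32)

# Pass the logits as a tuple; the auxiliary term is weighted by aux_factor.
loss = loss_fn((main_logits, aux_logits), labels)
```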
Source code in mindcv/loss/cross_entropy_smooth.py
Binary Cross Entropy
mindcv.loss.binary_cross_entropy_smooth.BinaryCrossEntropySmooth
Bases: LossBase
Binary cross entropy loss with label smoothing.
Applies a sigmoid activation function to the input logits, and computes the binary cross entropy between the logits and the labels.
| PARAMETER | DESCRIPTION |
|---|---|
| `smoothing` | Label smoothing factor, a regularization tool used to prevent the model from overfitting when calculating the loss. The value range is [0.0, 1.0]. DEFAULT: `0.0` |
| `aux_factor` | Auxiliary loss factor. Set `aux_factor > 0.0` if the model has auxiliary logit outputs (i.e., deep supervision), as in inception_v3. DEFAULT: `0.0` |
| `reduction` | Reduction method applied to the output: `'mean'` or `'sum'`. DEFAULT: `'mean'` |
| `weight` | Class weight. A rescaling weight applied to the loss of each batch element. Shape [C]; it can be broadcast to a tensor with the same shape as logits. TYPE: `Tensor`, optional |
| `pos_weight` | Positive weight for each class, i.e., a weight of positive examples. Shape [C]; must be a vector with length equal to the number of classes, and can be broadcast to a tensor with the same shape as logits. TYPE: `Tensor`, optional |
Inputs
- logits (Tensor or Tuple of Tensor): Input logits. Shape [N, C], where N is the number of samples and C is the number of classes. A tuple of two logits, (main_logits, aux_logits), is supported for the auxiliary loss.
- labels (Tensor): Ground truth labels, either (1) shape [N, C], the same shape as logits, as a class probability matrix or one-hot labels, or (2) shape [N]. Data type must be float16 or float32.
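A minimal multi-label sketch (the class count and the all-ones `pos_weight` are placeholder assumptions):

```python
import numpy as np
import mindspore as ms
from mindcv.loss.binary_cross_entropy_smooth import BinaryCrossEntropySmooth

# Per-class positive weights; all-ones here as a placeholder.
pos_weight = ms.Tensor(np.ones(5), ms.float32)
loss_fn = BinaryCrossEntropySmooth(smoothing=0.1, reduction="mean", pos_weight=pos_weight)

# 4 samples, 5 classes; multi-hot float labels with the same shape as the logits.
logits = ms.Tensor(np.random.randn(4, 5), ms.float32)
labels = ms.Tensor(np.random.randint(0, 2, size=(4, 5)).astype(np.float32))

loss = loss_fn(logits, labels)
```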
Source code in mindcv/loss/binary_cross_entropy_smooth.py