Common Layers in Model¶
Activation¶
mindcv.models.layers.activation.Swish¶
Bases: Cell
Swish activation function: x * sigmoid(x).
Returns
Tensor

Example

```python
x = Tensor(((20, 16), (50, 50)), mindspore.float32)
Swish()(x)
```
Source code in mindcv/models/layers/activation.py
DropPath¶
mindcv.models.layers.drop_path.DropPath¶
Bases: Cell
DropPath (Stochastic Depth) regularization layer: during training, it randomly drops entire residual branches on a per-sample basis.
Source code in mindcv/models/layers/drop_path.py
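The math behind stochastic depth is simple enough to sketch directly. The following is a minimal NumPy sketch of the mechanism (not mindcv's implementation): each sample's residual-branch output is zeroed with probability drop_prob, and survivors are rescaled by 1 / (1 - drop_prob) so the expected output is unchanged.

```python
import numpy as np

def drop_path_sketch(x: np.ndarray, drop_prob: float = 0.2, training: bool = True) -> np.ndarray:
    """Zero whole samples of a residual branch with probability drop_prob."""
    if not training or drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast over all remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = np.random.binomial(1, keep_prob, size=shape)
    return x * mask / keep_prob  # rescale so the expected output matches x
```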
Identity¶
mindcv.models.layers.identity.Identity¶
Bases: Cell
Identity layer that returns its input unchanged.
Source code in mindcv/models/layers/identity.py
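A minimal usage sketch: since the layer simply passes its input through, it works as a drop-in no-op, for example when removing a classifier head while keeping the attribute in place.

```python
import numpy as np
import mindspore
from mindspore import Tensor
from mindcv.models.layers.identity import Identity

x = Tensor(np.ones((2, 3)), mindspore.float32)
y = Identity()(x)  # same values and shape as x
```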
MLP¶
mindcv.models.layers.mlp.Mlp¶
Bases: Cell
Source code in mindcv/models/layers/mlp.py
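The block follows the common two-layer feed-forward pattern used in transformer models. Below is a minimal sketch of that pattern, assuming the usual fc1 -> activation -> dropout -> fc2 -> dropout structure; the parameter names are illustrative of the timm-style convention, not necessarily mindcv's exact signature.

```python
import mindspore.nn as nn

class MlpSketch(nn.Cell):
    """Two-layer feed-forward block: expand, activate, project back."""

    def __init__(self, in_features, hidden_features=None, out_features=None, drop=0.0):
        super().__init__()
        hidden_features = hidden_features or in_features
        out_features = out_features or in_features
        self.fc1 = nn.Dense(in_features, hidden_features)
        self.act = nn.GELU()
        self.fc2 = nn.Dense(hidden_features, out_features)
        self.drop = nn.Dropout(p=drop)  # `p=` on MindSpore >= 2.0; older versions take keep_prob

    def construct(self, x):
        x = self.drop(self.act(self.fc1(x)))
        return self.drop(self.fc2(x))
```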
Patch Embedding¶
mindcv.models.layers.patch_embed.PatchEmbed¶
Bases: Cell
Image to Patch Embedding
PARAMETER | DESCRIPTION
---|---
image_size | Image size. Default: 224. TYPE: int
patch_size | Patch token size. Default: 4. TYPE: int
in_chans | Number of input image channels. Default: 3. TYPE: int
embed_dim | Number of linear projection output channels. Default: 96. TYPE: int
norm_layer | Normalization layer. Default: None. TYPE: nn.Cell, optional
Source code in mindcv/models/layers/patch_embed.py
mindcv.models.layers.patch_embed.PatchEmbed.construct(x)¶
Split the input image into non-overlapping patches and project them to the embedding dimension.
Source code in mindcv/models/layers/patch_embed.py
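With the defaults above, a 224x224 RGB image yields (224 / 4)^2 = 3136 patch tokens of dimension 96. A minimal usage sketch; the output shape stated in the comment is what these defaults imply:

```python
import numpy as np
import mindspore
from mindspore import Tensor
from mindcv.models.layers.patch_embed import PatchEmbed

embed = PatchEmbed(image_size=224, patch_size=4, in_chans=3, embed_dim=96)
x = Tensor(np.random.rand(1, 3, 224, 224), mindspore.float32)
tokens = embed(x)  # expected shape: (1, 3136, 96), since (224 / 4) ** 2 == 3136
```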
Pooling¶
mindcv.models.layers.pooling.GlobalAvgPooling¶
Bases: Cell
GlobalAvgPooling, equivalent to torch.nn.AdaptiveAvgPool2d with output size 1.
Source code in mindcv/models/layers/pooling.py
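Functionally, global average pooling just takes the mean over the spatial dimensions, producing one value per channel. A NumPy sketch of the equivalent computation:

```python
import numpy as np

x = np.random.rand(2, 512, 7, 7)  # (N, C, H, W) feature map
pooled = x.mean(axis=(2, 3))      # -> (2, 512): one scalar per channel
```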
Selective Kernel¶
mindcv.models.layers.selective_kernel.SelectiveKernelAttn¶
Bases: Cell
Selective Kernel Attention Module: the Selective Kernel attention mechanism, factored out into its own module.
Source code in mindcv/models/layers/selective_kernel.py
mindcv.models.layers.selective_kernel.SelectiveKernel¶
Bases: Cell
Selective Kernel Convolution Module

As described in Selective Kernel Networks (https://arxiv.org/abs/1903.06586), with some modifications. The largest change is the input split, which divides the input channels across each convolution path; this can be viewed as a grouping of sorts, but the output channel counts expand to the module-level value. This keeps the parameter count from ballooning when the convolutions themselves don't have groups, but still provides a noteworthy increase in performance over similar-param-count models without this attention layer. -Ross W

Args:
- in_channels (int): module input (feature) channel count
- out_channels (int): module output (feature) channel count
- kernel_size (int, list): kernel size for each convolution branch
- stride (int): stride for convolutions
- dilation (int): dilation for the module as a whole, impacts dilation of each branch
- groups (int): number of groups for each branch
- rd_ratio (int, float): reduction factor for attention features
- rd_channels (int): reduction channels can be specified directly by arg (if rd_channels is set)
- rd_divisor (int): divisor can be specified to keep channels % div == 0
- keep_3x3 (bool): keep all branch convolution kernels as 3x3, changing larger kernels for dilations
- split_input (bool): split input channels evenly across each convolution branch; keeps param count lower, can be viewed as grouping by path, output expands to module out_channels count
- activation (nn.Module): activation layer to use
- norm (nn.Module): batchnorm/norm layer to use
Source code in mindcv/models/layers/selective_kernel.py
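The heart of the module is the fusion step: each branch produces a feature map, the attention module produces per-branch, per-channel weights that softmax to one across branches, and the output is their weighted sum. A NumPy sketch of that fusion with hypothetical shapes (batch of 2, 2 paths, 64 channels), not the mindcv code:

```python
import numpy as np

def sk_fuse_sketch(branches: np.ndarray, attn_logits: np.ndarray) -> np.ndarray:
    """branches: (B, paths, C, H, W); attn_logits: (B, paths, C, 1, 1)."""
    # Softmax over the path axis: per-channel weights sum to 1 across branches.
    e = np.exp(attn_logits - attn_logits.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    # Weighted sum over paths -> (B, C, H, W)
    return (branches * attn).sum(axis=1)

fused = sk_fuse_sketch(np.random.rand(2, 2, 64, 14, 14),
                       np.random.rand(2, 2, 64, 1, 1))
```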
Squeeze and Excite¶
mindcv.models.layers.squeeze_excite.SqueezeExcite¶
Bases: Cell
SqueezeExcite Module as defined in the original SE-Nets, with a few additions:

- divisor can be specified to keep channels % div == 0 (default: 8)
- reduction channels can be specified directly by arg (if rd_channels is set)
- reduction channels can be specified by float rd_ratio (default: 1/16)
- customizable activation, normalization, and gate layer
Source code in mindcv/models/layers/squeeze_excite.py
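The SE forward pass is compact: squeeze the spatial dimensions to a per-channel descriptor, reduce, activate, expand, and gate. A NumPy sketch of that computation, assuming a ReLU reduction activation and sigmoid gate (weight shapes are hypothetical and bias terms are omitted):

```python
import numpy as np

def se_sketch(x: np.ndarray, w_reduce: np.ndarray, w_expand: np.ndarray) -> np.ndarray:
    """x: (B, C, H, W); w_reduce: (C, C_rd); w_expand: (C_rd, C)."""
    s = x.mean(axis=(2, 3))                    # squeeze: (B, C) channel descriptor
    s = np.maximum(s @ w_reduce, 0.0)          # reduce + ReLU: (B, C_rd)
    g = 1.0 / (1.0 + np.exp(-(s @ w_expand)))  # expand + sigmoid gate: (B, C)
    return x * g[:, :, None, None]             # excite: rescale each channel
```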
mindcv.models.layers.squeeze_excite.SqueezeExciteV2¶
Bases: Cell
SqueezeExcite Module as defined in the original SE-Nets, with a few additions. V1 uses 1x1 convolutions in place of the fully-connected layers, while V2 implements them directly with nn.Dense.
Source code in mindcv/models/layers/squeeze_excite.py
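The two variants compute the same function and differ only in how the reduce/expand projections are expressed. A sketch of the contrast with hypothetical channel counts; the layer choices below are illustrative, not the exact mindcv definitions:

```python
import mindspore.nn as nn

channels, rd_channels = 64, 4

# V1-style: a 1x1 convolution applied to the pooled (B, C, 1, 1) map.
reduce_v1 = nn.Conv2d(channels, rd_channels, kernel_size=1)

# V2-style: nn.Dense applied to the flattened (B, C) descriptor.
reduce_v2 = nn.Dense(channels, rd_channels)
```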