LuminaNextDiT2DModel

The next version of the Diffusion Transformer (DiT) model for 2D data, from Lumina-T2X.
mindone.diffusers.LuminaNextDiT2DModel

Bases: `ModelMixin`, `ConfigMixin`

Inherits from `ModelMixin` and `ConfigMixin` so the model is compatible with diffusers samplers such as `StableDiffusionPipeline`.
| PARAMETER | DESCRIPTION |
|---|---|
| `sample_size` | The width of the latent images. This is fixed during training, since it is used to learn a number of position embeddings. |
| `patch_size` | The size of each patch in the image. This parameter defines the resolution of patches fed into the model. |
| `in_channels` | The number of input channels for the model. Typically, this matches the number of channels in the input images. |
| `hidden_size` | The dimensionality of the hidden layers in the model. This parameter determines the width of the model's hidden representations. |
| `num_layers` | The number of layers in the model. This defines the depth of the neural network. |
| `num_attention_heads` | The number of attention heads in each attention layer. This parameter specifies how many separate attention mechanisms are used. |
| `num_kv_heads` | The number of key-value heads in the attention mechanism, if different from the number of attention heads. If `None`, it defaults to `num_attention_heads`. |
| `multiple_of` | A factor that the hidden size should be a multiple of. This can help optimize certain hardware configurations. |
| `ffn_dim_multiplier` | A multiplier for the dimensionality of the feed-forward network. If `None`, a default value based on the model configuration is used. |
| `norm_eps` | A small value added to the denominator for numerical stability in normalization layers. |
| `learn_sigma` | Whether the model should learn the sigma parameter, which may relate to uncertainty or variance in predictions. |
| `qk_norm` | Whether the queries and keys in the attention mechanism should be normalized. |
| `cross_attention_dim` | The dimensionality of the text embeddings. This parameter defines the size of the text representations used in the model. |
| `scaling_factor` | A scaling factor applied to certain parameters or layers in the model. This can be used to adjust the overall scale of the model's operations. |
Source code in mindone/diffusers/models/transformers/lumina_nextdit2d.py
mindone.diffusers.LuminaNextDiT2DModel.construct(hidden_states, timestep, encoder_hidden_states, encoder_mask, image_rotary_emb, cross_attention_kwargs=None, return_dict=False)

Forward pass of LuminaNextDiT.
| PARAMETER | DESCRIPTION |
|---|---|
| `hidden_states` | Input tensor of shape `(N, C, H, W)`. |
| `timestep` | Tensor of diffusion timesteps of shape `(N,)`. |
| `encoder_hidden_states` | Tensor of caption features of shape `(N, D)`. |
| `encoder_mask` | Tensor of caption masks of shape `(N, L)`. |
Source code in mindone/diffusers/models/transformers/lumina_nextdit2d.py
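For intuition about the shapes above: a DiT-style model splits the `(N, C, H, W)` input into non-overlapping `patch_size × patch_size` patches, so each image becomes `(H / patch_size) * (W / patch_size)` tokens. The helper below is purely illustrative arithmetic (its name is not part of the API), assuming non-overlapping patching:

```python
def num_patch_tokens(height: int, width: int, patch_size: int = 2) -> int:
    """Number of transformer tokens produced from an (H, W) latent (illustrative only)."""
    # dimensions must divide evenly into non-overlapping patches
    assert height % patch_size == 0 and width % patch_size == 0
    return (height // patch_size) * (width // patch_size)

# a 128x128 latent with patch_size=2 becomes a sequence of 4096 tokens
print(num_patch_tokens(128, 128, 2))  # 4096
```

This is why `sample_size` is fixed during training: the number of learned position embeddings depends directly on the token count derived from it.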