QwenImageTransformer2DModel

The model can be loaded with the following code snippet.

```python
import mindspore

from mindone.diffusers import QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/QwenImage", subfolder="transformer", mindspore_dtype=mindspore.bfloat16
)
```
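Once loaded, the transformer can be handed to a pipeline. A minimal sketch, assuming `mindone.diffusers` exposes a `QwenImagePipeline` mirroring the upstream diffusers API:

```python
import mindspore

from mindone.diffusers import QwenImagePipeline, QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/QwenImage", subfolder="transformer", mindspore_dtype=mindspore.bfloat16
)
# Reuse the custom-loaded transformer instead of the one bundled with the checkpoint.
pipe = QwenImagePipeline.from_pretrained(
    "Qwen/QwenImage", transformer=transformer, mindspore_dtype=mindspore.bfloat16
)
```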
mindone.diffusers.QwenImageTransformer2DModel

Bases: `ModelMixin`, `ConfigMixin`, `PeftAdapterMixin`, `FromOriginalModelMixin`
The Transformer model introduced in Qwen-Image.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `patch_size` | `int` | Patch size to turn the input data into small patches. |
| `in_channels` | `int` | The number of channels in the input. |
| `out_channels` | `int`, *optional* | The number of channels in the output. If not specified, it defaults to `in_channels`. |
| `num_layers` | `int` | The number of layers of dual-stream DiT blocks to use. |
| `attention_head_dim` | `int` | The number of dimensions to use for each attention head. |
| `num_attention_heads` | `int` | The number of attention heads to use. |
| `joint_attention_dim` | `int` | The number of dimensions to use for the joint attention (embedding/channel dimension of `encoder_hidden_states`). |
| `guidance_embeds` | `bool` | Whether to use guidance embeddings for the guidance-distilled variant of the model. |
| `axes_dims_rope` | `Tuple[int]` | The dimensions to use for the rotary positional embeddings. |
Source code in `mindone/diffusers/models/transformers/transformer_qwenimage.py`
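To illustrate how the constructor arguments fit together, the following sketch builds a small randomly initialized variant. The dimensions here are assumptions chosen for demonstration, not the released checkpoint's configuration:

```python
# Illustrative sketch: a tiny random-weight instance of the transformer.
# All dimensions below are assumptions, not the Qwen-Image defaults.
from mindone.diffusers import QwenImageTransformer2DModel

model = QwenImageTransformer2DModel(
    patch_size=2,
    in_channels=16,
    out_channels=16,
    num_layers=2,                # two dual-stream DiT blocks
    attention_head_dim=32,
    num_attention_heads=4,
    joint_attention_dim=64,      # channel dim of encoder_hidden_states
    guidance_embeds=False,
    axes_dims_rope=(8, 12, 12),  # the axis dims sum to attention_head_dim
)
```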
mindone.diffusers.QwenImageTransformer2DModel.construct(hidden_states, encoder_hidden_states=None, encoder_hidden_states_mask=None, timestep=None, img_shapes=None, txt_seq_lens=None, guidance=None, attention_kwargs=None, controlnet_block_samples=None, return_dict=True)

The [`QwenImageTransformer2DModel`] forward method.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `hidden_states` | `Tensor` of shape `(batch_size, image_sequence_length, in_channels)` | Input `hidden_states`. |
| `encoder_hidden_states` | `Tensor` of shape `(batch_size, text_sequence_length, joint_attention_dim)` | Conditional embeddings (embeddings computed from the input conditions such as prompts) to use. |
| `encoder_hidden_states_mask` | `Tensor` of shape `(batch_size, text_sequence_length)` | Mask of the input conditions. |
| `timestep` | `Tensor` | Used to indicate the denoising step. |
| `attention_kwargs` | `dict`, *optional* | A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in `diffusers.models.attention_processor`. |
| `return_dict` | `bool`, *optional*, defaults to `True` | Whether or not to return a [`Transformer2DModelOutput`] instead of a plain tuple. |
| RETURNS | DESCRIPTION |
|---|---|
| `Union[Tensor, Transformer2DModelOutput]` | If `return_dict` is `True`, a [`Transformer2DModelOutput`] is returned; otherwise a plain `tuple` whose first element is the sample tensor. |
Source code in `mindone/diffusers/models/transformers/transformer_qwenimage.py`
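To make the expected shapes concrete, here is a hedged sketch of a single forward call on the small model constructed above. The `img_shapes` layout (one `(frames, latent_height, latent_width)` tuple per sample) and all dimensions are assumptions chosen to be mutually consistent, not values taken from a real pipeline:

```python
# Assumption-laden sketch: shapes are made up but internally consistent with the
# small model above (in_channels=16, joint_attention_dim=64).
import numpy as np
import mindspore as ms

batch = 1
img_shapes = [(1, 4, 4)]        # assumed (frames, latent_h, latent_w) per sample
img_len = 1 * 4 * 4             # image sequence length implied by img_shapes
txt_len = 7

hidden_states = ms.Tensor(np.random.randn(batch, img_len, 16), ms.float32)
encoder_hidden_states = ms.Tensor(np.random.randn(batch, txt_len, 64), ms.float32)
encoder_hidden_states_mask = ms.ops.ones((batch, txt_len), ms.int64)
timestep = ms.Tensor([500], ms.float32)

sample = model(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    encoder_hidden_states_mask=encoder_hidden_states_mask,
    timestep=timestep,
    img_shapes=img_shapes,
    txt_seq_lens=[txt_len],
    return_dict=False,
)[0]                            # predicted sample, (batch, img_len, out_channels)
```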
mindone.diffusers.models.modeling_outputs.Transformer2DModelOutput dataclass

Bases: `BaseOutput`

The output of [`Transformer2DModel`].
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `sample` | `Tensor` of shape `(batch_size, num_channels, height, width)`, or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` for discrete transformers | The hidden states output conditioned on the `encoder_hidden_states` input. |
Source code in `mindone/diffusers/models/modeling_outputs.py`
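As a usage note, `Transformer2DModelOutput` behaves like a diffusers `BaseOutput`: the sample tensor can be reached by attribute, key, or index. A small sketch, reusing the inputs from the forward-call example above with the default `return_dict=True`:

```python
out = model(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    encoder_hidden_states_mask=encoder_hidden_states_mask,
    timestep=timestep,
    img_shapes=img_shapes,
    txt_seq_lens=[txt_len],
)                          # Transformer2DModelOutput

sample = out.sample        # attribute access
sample = out["sample"]     # BaseOutput also supports dict-style access
sample = out[0]            # ...and tuple-style indexing
```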