FluxTransformer2DModel

A Transformer model for image-like data from Flux.

mindone.diffusers.models.transformers.transformer_flux.FluxTransformer2DModel

Bases: ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, FluxTransformer2DLoadersMixin, CacheMixin, AttentionMixin

The Transformer model introduced in Flux.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
| PARAMETER | DESCRIPTION |
|---|---|
| `patch_size` | Patch size to turn the input data into small patches. TYPE: `int`, defaults to `1` |
| `in_channels` | The number of channels in the input. TYPE: `int`, defaults to `64` |
| `out_channels` | The number of channels in the output. If not specified, it defaults to `in_channels`. TYPE: `int`, *optional* |
| `num_layers` | The number of layers of dual-stream DiT blocks to use. TYPE: `int`, defaults to `19` |
| `num_single_layers` | The number of layers of single-stream DiT blocks to use. TYPE: `int`, defaults to `38` |
| `attention_head_dim` | The number of dimensions to use for each attention head. TYPE: `int`, defaults to `128` |
| `num_attention_heads` | The number of attention heads to use. TYPE: `int`, defaults to `24` |
| `joint_attention_dim` | The number of dimensions to use for the joint attention (the embedding/channel dimension of `encoder_hidden_states`). TYPE: `int`, defaults to `4096` |
| `pooled_projection_dim` | The number of dimensions to use for the pooled projection. TYPE: `int`, defaults to `768` |
| `guidance_embeds` | Whether to use guidance embeddings for the guidance-distilled variant of the model. TYPE: `bool`, defaults to `False` |
| `axes_dims_rope` | The dimensions to use for the rotary positional embeddings. TYPE: `Tuple[int]`, defaults to `(16, 56, 56)` |
Source code in mindone/diffusers/models/transformers/transformer_flux.py
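A minimal usage sketch follows. It is not an official example: the `black-forest-labs/FLUX.1-dev` checkpoint name, the top-level `FluxTransformer2DModel` export, and the `mindspore_dtype` keyword are assumptions based on the usual mindone.diffusers loading API, and the tiny test config is purely illustrative (it only respects the constraint that `axes_dims_rope` sums to `attention_head_dim`).

```python
# Sketch: load the Flux transformer from a pretrained pipeline checkpoint.
# Assumes the `black-forest-labs/FLUX.1-dev` weights are accessible and that
# any Flux checkpoint exposes the model under a `transformer` subfolder.
import mindspore as ms
from mindone.diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    mindspore_dtype=ms.bfloat16,
)

# Alternatively, build a small randomly initialized model from config,
# e.g. for testing. These values are illustrative, not the Flux defaults;
# note that sum(axes_dims_rope) must equal attention_head_dim (4 + 6 + 6 = 16).
tiny = FluxTransformer2DModel(
    patch_size=1,
    in_channels=4,
    num_layers=1,
    num_single_layers=1,
    attention_head_dim=16,
    num_attention_heads=2,
    joint_attention_dim=32,
    pooled_projection_dim=32,
    axes_dims_rope=(4, 6, 6),
)
```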
mindone.diffusers.models.transformers.transformer_flux.FluxTransformer2DModel.construct(hidden_states, encoder_hidden_states=None, pooled_projections=None, timestep=None, img_ids=None, txt_ids=None, guidance=None, joint_attention_kwargs=None, controlnet_block_samples=None, controlnet_single_block_samples=None, return_dict=False, controlnet_blocks_repeat=False)

The [`FluxTransformer2DModel`] forward method.
| PARAMETER | DESCRIPTION |
|---|---|
| `hidden_states` | Input `hidden_states`. TYPE: `ms.Tensor` of shape `(batch_size, image_sequence_length, in_channels)` |
| `encoder_hidden_states` | Conditional embeddings (embeddings computed from the input conditions such as prompts) to use. TYPE: `ms.Tensor` of shape `(batch_size, text_sequence_length, joint_attention_dim)` |
| `pooled_projections` | Embeddings projected from the embeddings of input conditions. TYPE: `ms.Tensor` of shape `(batch_size, projection_dim)` |
| `timestep` | Used to indicate denoising step. TYPE: `ms.Tensor` |
| `block_controlnet_hidden_states` | A list of tensors that, if specified, are added to the residuals of transformer blocks. TYPE: `list` of `ms.Tensor`, *optional* |
| `joint_attention_kwargs` | A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor. TYPE: `dict`, *optional* |
| `return_dict` | Whether or not to return a [`Transformer2DModelOutput`] instead of a plain tuple. TYPE: `bool`, *optional*, defaults to `False` |
| RETURNS | DESCRIPTION |
|---|---|
| `Union[Tensor, Transformer2DModelOutput]` | If `return_dict` is `True`, a [`Transformer2DModelOutput`] is returned; otherwise a plain `tuple` is returned where the first element is the sample tensor. |
Source code in mindone/diffusers/models/transformers/transformer_flux.py
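A forward-pass sketch, reusing the `tiny` model from the earlier example. The tensor shapes follow the parameter table above; the 2D `(sequence_length, 3)` layout for `img_ids`/`txt_ids` is an assumption based on recent diffusers conventions, not something stated on this page.

```python
# Sketch: run a forward pass through the tiny illustrative model defined above.
# hidden_states is (batch, image_seq_len, in_channels); img_ids/txt_ids are
# assumed to use the 2D (seq_len, 3) rotary-embedding ID layout.
import mindspore as ms
from mindspore import ops

batch, img_len, txt_len = 1, 16, 8

hidden_states = ops.randn(batch, img_len, 4)           # in_channels=4
encoder_hidden_states = ops.randn(batch, txt_len, 32)  # joint_attention_dim=32
pooled_projections = ops.randn(batch, 32)              # pooled_projection_dim=32
timestep = ms.Tensor([1.0], ms.float32)                # one denoising step per batch item
img_ids = ops.zeros((img_len, 3), ms.float32)
txt_ids = ops.zeros((txt_len, 3), ms.float32)

# return_dict defaults to False, so the call returns a tuple whose first
# element is the sample tensor.
sample = tiny(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    pooled_projections=pooled_projections,
    timestep=timestep,
    img_ids=img_ids,
    txt_ids=txt_ids,
    return_dict=False,
)[0]
print(sample.shape)  # (1, 16, 4): out_channels defaults to in_channels
```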