
Stable Cascade

This model is built upon the Würstchen architecture, and its main difference from other models like Stable Diffusion is that it works in a much smaller latent space. Why is this important? The smaller the latent space, the faster inference runs and the cheaper training becomes. How small is the latent space? Stable Diffusion uses a compression factor of 8, so a 1024x1024 image is encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning a 1024x1024 image can be encoded to 24x24 while maintaining crisp reconstructions. The text-conditional model is then trained in this highly compressed latent space. Previous versions of this architecture achieved a 16x cost reduction over Stable Diffusion 1.5.

This kind of model is therefore well suited for use cases where efficiency is important. Furthermore, all known extensions like finetuning, LoRA, ControlNet, IP-Adapter, LCM, etc. are possible with this method as well.

The original codebase can be found at Stability-AI/StableCascade.

Model Overview

Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images, hence the name "Stable Cascade".

Stage A & B are used to compress images, similar to what the job of the VAE is in Stable Diffusion. However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a spatial compression factor of 8, encoding an image with resolution of 1024 x 1024 to 128 x 128, Stable Cascade achieves a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24, while being able to accurately decode the image. This comes with the great benefit of cheaper training and inference. Furthermore, Stage C is responsible for generating the small 24 x 24 latents given a text prompt.

The Stage C model operates on the small 24 x 24 latents and denoises the latents conditioned on text prompts. The model is also the largest component in the Cascade pipeline and is meant to be used with the StableCascadePriorPipeline.

The Stage B and Stage A models are used with the StableCascadeDecoderPipeline and are responsible for generating the final image given the small 24 x 24 latents.

Warning

There are some restrictions on data types that can be used with the Stable Cascade models. The official checkpoints for the StableCascadePriorPipeline do not support the mindspore.float16 data type. Please use mindspore.bfloat16 instead.

In order to use the mindspore.bfloat16 data type with the StableCascadeDecoderPipeline you need to have MindSpore 2.3.0 or higher installed. This also means that using the StableCascadeCombinedPipeline with mindspore.bfloat16 requires MindSpore 2.3.0 or higher, since it calls the StableCascadeDecoderPipeline internally.

If it is not possible to install MindSpore 2.3.0 or higher in your environment, the StableCascadeDecoderPipeline can be used on its own with the mindspore.float16 data type. You can download the full precision or bf16 variant weights for the pipeline and cast the weights to mindspore.float16.
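
The snippet below is a minimal sketch of that fallback path: the decoder is loaded without the bf16 variant, and mindspore_dtype casts the full-precision weights to float16 on load. The tokenizer and text encoder are loaded the same way as in the usage example that follows.

import mindspore as ms
from transformers import AutoTokenizer
from mindone.diffusers import StableCascadeDecoderPipeline
from mindone.transformers import CLIPTextModelWithProjection

tokenizer = AutoTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
text_encoder = CLIPTextModelWithProjection.from_pretrained("path/to/weight", use_safetensors=True)

# Download the full-precision decoder weights and cast them to float16 on load;
# alternatively, pass variant="bf16" as in the usage example below.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    tokenizer=tokenizer,
    text_encoder=text_encoder,
    mindspore_dtype=ms.float16,
)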

Usage example

import mindspore as ms
from mindone.diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline
from transformers import AutoTokenizer
from mindone.transformers import CLIPTextModelWithProjection

tokenizer = AutoTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
text_encoder = CLIPTextModelWithProjection.from_pretrained("path/to/weight", use_safetensors=True)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", tokenizer=tokenizer, text_encoder=text_encoder, variant="bf16", mindspore_dtype=ms.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", tokenizer=tokenizer, text_encoder=text_encoder, variant="bf16", mindspore_dtype=ms.float16)

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

# Pipelines return plain tuples by default (return_dict=False), so the image
# embeddings are the first element of the prior output.
decoder_output = decoder(
    image_embeddings=prior_output[0].to(ms.float16),
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
)[0][0]
decoder_output.save("cascade.png")

Using the Lite Versions of the Stage B and Stage C models

import mindspore as ms
from mindone.diffusers import (
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
    StableCascadeUNet,
)

from transformers import AutoTokenizer
from mindone.transformers import CLIPTextModelWithProjection

tokenizer = AutoTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
text_encoder = CLIPTextModelWithProjection.from_pretrained("path/to/weight", use_safetensors=True)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade-prior", subfolder="prior_lite")
decoder_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade", subfolder="decoder_lite")

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", prior=prior_unet, tokenizer=tokenizer, text_encoder=text_encoder)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", decoder=decoder_unet, tokenizer=tokenizer, text_encoder=text_encoder)

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder_output = decoder(
    image_embeddings=prior_output[0],
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
)[0][0]
decoder_output.save("cascade.png")

Loading original checkpoints with from_single_file

Loading the original format checkpoints is supported via the from_single_file method of StableCascadeUNet.

import mindspore as ms
from mindone.diffusers import (
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
    StableCascadeUNet,
)

from transformers import AutoTokenizer
from mindone.transformers import CLIPTextModelWithProjection

tokenizer = AutoTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
text_encoder = CLIPTextModelWithProjection.from_pretrained("path/to/weight", use_safetensors=True)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors",
    mindspore_dtype=ms.bfloat16
)
decoder_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_b_bf16.safetensors",
    mindspore_dtype=ms.bfloat16
)

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", tokenizer=tokenizer, text_encoder=text_encoder, prior=prior_unet, mindspore_dtype=ms.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", tokenizer=tokenizer, text_encoder=text_encoder, decoder=decoder_unet, mindspore_dtype=ms.bfloat16)

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder_output = decoder(
    image_embeddings=prior_output[0],
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
)[0][0]
decoder_output.save("cascade-single-file.png")

Uses

Direct Use

The model is intended for research purposes for now. Possible research areas and tasks include:

  • Research on generative models.
  • Safe deployment of models which have the potential to generate harmful content.
  • Probing and understanding the limitations and biases of generative models.
  • Generation of artworks and use in design and other artistic processes.
  • Applications in educational or creative tools.

Excluded uses are described below.

Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events; therefore, using the model to generate such content is out of scope for its abilities. The model should not be used in any way that violates Stability AI's Acceptable Use Policy.

Limitations and Bias

Limitations

  • Faces and people in general may not be generated properly.
  • The autoencoding part of the model is lossy.

mindone.diffusers.StableCascadeCombinedPipeline

Bases: DiffusionPipeline

Combined Pipeline for text-to-image generation using Stable Cascade.

This model inherits from [DiffusionPipeline]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

PARAMETER DESCRIPTION
tokenizer

The decoder tokenizer to be used for text inputs.

TYPE: `CLIPTokenizer`

text_encoder

The decoder text encoder to be used for text inputs.

TYPE: `CLIPTextModel`

decoder

The decoder model to be used for decoder image generation pipeline.

TYPE: `StableCascadeUNet`

scheduler

The scheduler to be used for decoder image generation pipeline.

TYPE: `DDPMWuerstchenScheduler`

vqgan

The VQGAN model to be used for decoder image generation pipeline.

TYPE: `PaellaVQModel`

feature_extractor

Model that extracts features from generated images to be used as inputs for the image_encoder.

TYPE: [`~transformers.CLIPImageProcessor`]

image_encoder

Frozen CLIP image-encoder (clip-vit-large-patch14).

TYPE: [`CLIPVisionModelWithProjection`]

prior_prior

The prior model to be used for prior pipeline.

TYPE: `StableCascadeUNet`

prior_scheduler

The scheduler to be used for prior pipeline.

TYPE: `DDPMWuerstchenScheduler`

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py
class StableCascadeCombinedPipeline(DiffusionPipeline):
    """
    Combined Pipeline for text-to-image generation using Stable Cascade.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

    Args:
        tokenizer (`CLIPTokenizer`):
            The decoder tokenizer to be used for text inputs.
        text_encoder (`CLIPTextModel`):
            The decoder text encoder to be used for text inputs.
        decoder (`StableCascadeUNet`):
            The decoder model to be used for decoder image generation pipeline.
        scheduler (`DDPMWuerstchenScheduler`):
            The scheduler to be used for decoder image generation pipeline.
        vqgan (`PaellaVQModel`):
            The VQGAN model to be used for decoder image generation pipeline.
        feature_extractor ([`~transformers.CLIPImageProcessor`]):
            Model that extracts features from generated images to be used as inputs for the `image_encoder`.
        image_encoder ([`CLIPVisionModelWithProjection`]):
            Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
        prior_prior (`StableCascadeUNet`):
            The prior model to be used for prior pipeline.
        prior_scheduler (`DDPMWuerstchenScheduler`):
            The scheduler to be used for prior pipeline.
    """

    _load_connected_pipes = True

    def __init__(
        self,
        tokenizer: CLIPTokenizer,
        text_encoder: CLIPTextModel,
        decoder: StableCascadeUNet,
        scheduler: DDPMWuerstchenScheduler,
        vqgan: PaellaVQModel,
        prior_prior: StableCascadeUNet,
        prior_text_encoder: CLIPTextModel,
        prior_tokenizer: CLIPTokenizer,
        prior_scheduler: DDPMWuerstchenScheduler,
        prior_feature_extractor: Optional[CLIPImageProcessor] = None,
        prior_image_encoder: Optional[CLIPVisionModelWithProjection] = None,
    ):
        super().__init__()

        self.register_modules(
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            decoder=decoder,
            scheduler=scheduler,
            vqgan=vqgan,
            prior_text_encoder=prior_text_encoder,
            prior_tokenizer=prior_tokenizer,
            prior_prior=prior_prior,
            prior_scheduler=prior_scheduler,
            prior_feature_extractor=prior_feature_extractor,
            prior_image_encoder=prior_image_encoder,
        )
        self.prior_pipe = StableCascadePriorPipeline(
            prior=prior_prior,
            text_encoder=prior_text_encoder,
            tokenizer=prior_tokenizer,
            scheduler=prior_scheduler,
            image_encoder=prior_image_encoder,
            feature_extractor=prior_feature_extractor,
        )
        self.decoder_pipe = StableCascadeDecoderPipeline(
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            decoder=decoder,
            scheduler=scheduler,
            vqgan=vqgan,
        )

    def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None):
        self.decoder_pipe.enable_xformers_memory_efficient_attention(attention_op)

    def progress_bar(self, iterable=None, total=None):
        self.prior_pipe.progress_bar(iterable=iterable, total=total)
        self.decoder_pipe.progress_bar(iterable=iterable, total=total)

    def set_progress_bar_config(self, **kwargs):
        self.prior_pipe.set_progress_bar_config(**kwargs)
        self.decoder_pipe.set_progress_bar_config(**kwargs)

    def __call__(
        self,
        prompt: Optional[Union[str, List[str]]] = None,
        images: Union[ms.Tensor, PIL.Image.Image, List[ms.Tensor], List[PIL.Image.Image]] = None,
        height: int = 512,
        width: int = 512,
        prior_num_inference_steps: int = 60,
        prior_guidance_scale: float = 4.0,
        num_inference_steps: int = 12,
        decoder_guidance_scale: float = 0.0,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        prompt_embeds: Optional[ms.Tensor] = None,
        prompt_embeds_pooled: Optional[ms.Tensor] = None,
        negative_prompt_embeds: Optional[ms.Tensor] = None,
        negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
        num_images_per_prompt: int = 1,
        generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
        latents: Optional[ms.Tensor] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = False,
        prior_callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
        prior_callback_on_step_end_tensor_inputs: List[str] = ["latents"],
        callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
        callback_on_step_end_tensor_inputs: List[str] = ["latents"],
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation for the prior and decoder.
            images (`ms.Tensor`, `PIL.Image.Image`, `List[ms.Tensor]`, `List[PIL.Image.Image]`, *optional*):
                The images to guide the image generation for the prior.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `guidance_scale` is less than `1`).
            prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, text embeddings will be generated from `prompt` input argument.
            prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
                prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
                input argument.
            negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
                prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
                input argument.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            height (`int`, *optional*, defaults to 512):
                The height in pixels of the generated image.
            width (`int`, *optional*, defaults to 512):
                The width in pixels of the generated image.
            prior_guidance_scale (`float`, *optional*, defaults to 4.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `prior_guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
                `prior_guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked
                to the text `prompt`, usually at the expense of lower image quality.
            prior_num_inference_steps (`Union[int, Dict[float, int]]`, *optional*, defaults to 60):
                The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference. For more specific timestep spacing, you can pass customized
                `prior_timesteps`
            num_inference_steps (`int`, *optional*, defaults to 12):
                The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at
                the expense of slower inference. For more specific timestep spacing, you can pass customized
                `timesteps`
            decoder_guidance_scale (`float`, *optional*, defaults to 0.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
                One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
                to make generation deterministic.
            latents (`ms.Tensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will ge generated by sampling using the supplied random `generator`.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
                (`np.array`) or `"ms"` (`ms.Tensor`).
            return_dict (`bool`, *optional*, defaults to `False`):
                Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
            prior_callback_on_step_end (`Callable`, *optional*):
                A function that calls at the end of each denoising steps during the inference. The function is called
                with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep:
                int, callback_kwargs: Dict)`.
            prior_callback_on_step_end_tensor_inputs (`List`, *optional*):
                The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
                list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
                the `._callback_tensor_inputs` attribute of your pipeine class.
            callback_on_step_end (`Callable`, *optional*):
                A function that calls at the end of each denoising steps during the inference. The function is called
                with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
                callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
                `callback_on_step_end_tensor_inputs`.
            callback_on_step_end_tensor_inputs (`List`, *optional*):
                The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
                will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
                `._callback_tensor_inputs` attribute of your pipeine class.

        Examples:

        Returns:
            [`~pipelines.ImagePipelineOutput`] or `tuple` [`~pipelines.ImagePipelineOutput`] if `return_dict` is True,
            otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
        """
        prior_outputs = self.prior_pipe(
            prompt=prompt if prompt_embeds is None else None,
            images=images,
            height=height,
            width=width,
            num_inference_steps=prior_num_inference_steps,
            guidance_scale=prior_guidance_scale,
            negative_prompt=negative_prompt if negative_prompt_embeds is None else None,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
            num_images_per_prompt=num_images_per_prompt,
            generator=generator,
            latents=latents,
            output_type="ms",
            return_dict=False,
            callback_on_step_end=prior_callback_on_step_end,
            callback_on_step_end_tensor_inputs=prior_callback_on_step_end_tensor_inputs,
        )
        image_embeddings = prior_outputs[0]
        prompt_embeds = prior_outputs[1]
        prompt_embeds_pooled = prior_outputs[2]
        negative_prompt_embeds = prior_outputs[3]
        negative_prompt_embeds_pooled = prior_outputs[4]

        outputs = self.decoder_pipe(
            image_embeddings=image_embeddings,
            prompt=prompt if prompt_embeds is None else None,
            num_inference_steps=num_inference_steps,
            guidance_scale=decoder_guidance_scale,
            negative_prompt=negative_prompt if negative_prompt_embeds is None else None,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
            generator=generator,
            output_type=output_type,
            return_dict=return_dict,
            callback_on_step_end=callback_on_step_end,
            callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
        )

        return outputs

mindone.diffusers.StableCascadeCombinedPipeline.__call__(prompt=None, images=None, height=512, width=512, prior_num_inference_steps=60, prior_guidance_scale=4.0, num_inference_steps=12, decoder_guidance_scale=0.0, negative_prompt=None, prompt_embeds=None, prompt_embeds_pooled=None, negative_prompt_embeds=None, negative_prompt_embeds_pooled=None, num_images_per_prompt=1, generator=None, latents=None, output_type='pil', return_dict=False, prior_callback_on_step_end=None, prior_callback_on_step_end_tensor_inputs=['latents'], callback_on_step_end=None, callback_on_step_end_tensor_inputs=['latents'])

Function invoked when calling the pipeline for generation.

PARAMETER DESCRIPTION
prompt

The prompt or prompts to guide the image generation for the prior and decoder.

TYPE: `str` or `List[str]` DEFAULT: None

images

The images to guide the image generation for the prior.

TYPE: `ms.Tensor`, `PIL.Image.Image`, `List[ms.Tensor]`, `List[PIL.Image.Image]`, *optional* DEFAULT: None

negative_prompt

The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

TYPE: `str` or `List[str]`, *optional* DEFAULT: None

prompt_embeds

Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

prompt_embeds_pooled

Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds

Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds_pooled

Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

num_images_per_prompt

The number of images to generate per prompt.

TYPE: `int`, *optional*, defaults to 1 DEFAULT: 1

height

The height in pixels of the generated image.

TYPE: `int`, *optional*, defaults to 512 DEFAULT: 512

width

The width in pixels of the generated image.

TYPE: `int`, *optional*, defaults to 512 DEFAULT: 512

prior_guidance_scale

Guidance scale as defined in Classifier-Free Diffusion Guidance. prior_guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting prior_guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

TYPE: `float`, *optional*, defaults to 4.0 DEFAULT: 4.0

prior_num_inference_steps

The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. For more specific timestep spacing, you can pass customized prior_timesteps.

TYPE: `Union[int, Dict[float, int]]`, *optional*, defaults to 60 DEFAULT: 60

num_inference_steps

The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. For more specific timestep spacing, you can pass customized timesteps.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 12

decoder_guidance_scale

Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

generator

One or a list of np.random.Generator(s) to make generation deterministic.

TYPE: `np.random.Generator` or `List[np.random.Generator]`, *optional* DEFAULT: None

latents

Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

output_type

The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "ms" (ms.Tensor).

TYPE: `str`, *optional*, defaults to `"pil"` DEFAULT: 'pil'

return_dict

Whether or not to return a [~pipelines.ImagePipelineOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

prior_callback_on_step_end

A function that is called at the end of each denoising step during inference. The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict).

TYPE: `Callable`, *optional* DEFAULT: None

prior_callback_on_step_end_tensor_inputs

The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

TYPE: `List`, *optional* DEFAULT: ['latents']

callback_on_step_end

A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.

TYPE: `Callable`, *optional* DEFAULT: None

callback_on_step_end_tensor_inputs

The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

TYPE: `List`, *optional* DEFAULT: ['latents']

RETURNS DESCRIPTION

[~pipelines.ImagePipelineOutput] if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py
def __call__(
    self,
    prompt: Optional[Union[str, List[str]]] = None,
    images: Union[ms.Tensor, PIL.Image.Image, List[ms.Tensor], List[PIL.Image.Image]] = None,
    height: int = 512,
    width: int = 512,
    prior_num_inference_steps: int = 60,
    prior_guidance_scale: float = 4.0,
    num_inference_steps: int = 12,
    decoder_guidance_scale: float = 0.0,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    prompt_embeds: Optional[ms.Tensor] = None,
    prompt_embeds_pooled: Optional[ms.Tensor] = None,
    negative_prompt_embeds: Optional[ms.Tensor] = None,
    negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
    num_images_per_prompt: int = 1,
    generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
    latents: Optional[ms.Tensor] = None,
    output_type: Optional[str] = "pil",
    return_dict: bool = False,
    prior_callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    prior_callback_on_step_end_tensor_inputs: List[str] = ["latents"],
    callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    callback_on_step_end_tensor_inputs: List[str] = ["latents"],
):
    """
    Function invoked when calling the pipeline for generation.

    Args:
        prompt (`str` or `List[str]`):
            The prompt or prompts to guide the image generation for the prior and decoder.
        images (`ms.Tensor`, `PIL.Image.Image`, `List[ms.Tensor]`, `List[PIL.Image.Image]`, *optional*):
            The images to guide the image generation for the prior.
        negative_prompt (`str` or `List[str]`, *optional*):
            The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
            if `guidance_scale` is less than `1`).
        prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, text embeddings will be generated from `prompt` input argument.
        prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, text embeddings will be generated from `prompt` input argument.
        negative_prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
            prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
            input argument.
        negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
            prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
            input argument.
        num_images_per_prompt (`int`, *optional*, defaults to 1):
            The number of images to generate per prompt.
        height (`int`, *optional*, defaults to 512):
            The height in pixels of the generated image.
        width (`int`, *optional*, defaults to 512):
            The width in pixels of the generated image.
        prior_guidance_scale (`float`, *optional*, defaults to 4.0):
            Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
            `prior_guidance_scale` is defined as `w` of equation 2. of [Imagen
            Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
            `prior_guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked
            to the text `prompt`, usually at the expense of lower image quality.
        prior_num_inference_steps (`Union[int, Dict[float, int]]`, *optional*, defaults to 60):
            The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the
            expense of slower inference. For more specific timestep spacing, you can pass customized
            `prior_timesteps`
        num_inference_steps (`int`, *optional*, defaults to 12):
            The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at
            the expense of slower inference. For more specific timestep spacing, you can pass customized
            `timesteps`
        decoder_guidance_scale (`float`, *optional*, defaults to 0.0):
            Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
            `guidance_scale` is defined as `w` of equation 2. of [Imagen
            Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
            1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
            usually at the expense of lower image quality.
        generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
            One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
            to make generation deterministic.
        latents (`ms.Tensor`, *optional*):
            Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
            generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
            tensor will ge generated by sampling using the supplied random `generator`.
        output_type (`str`, *optional*, defaults to `"pil"`):
            The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
            (`np.array`) or `"ms"` (`ms.Tensor`).
        return_dict (`bool`, *optional*, defaults to `False`):
            Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
        prior_callback_on_step_end (`Callable`, *optional*):
            A function that calls at the end of each denoising steps during the inference. The function is called
            with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep:
            int, callback_kwargs: Dict)`.
        prior_callback_on_step_end_tensor_inputs (`List`, *optional*):
            The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
            list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
            the `._callback_tensor_inputs` attribute of your pipeine class.
        callback_on_step_end (`Callable`, *optional*):
            A function that calls at the end of each denoising steps during the inference. The function is called
            with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
            callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
            `callback_on_step_end_tensor_inputs`.
        callback_on_step_end_tensor_inputs (`List`, *optional*):
            The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
            will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
            `._callback_tensor_inputs` attribute of your pipeine class.

    Examples:

    Returns:
        [`~pipelines.ImagePipelineOutput`] or `tuple` [`~pipelines.ImagePipelineOutput`] if `return_dict` is True,
        otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
    """
    prior_outputs = self.prior_pipe(
        prompt=prompt if prompt_embeds is None else None,
        images=images,
        height=height,
        width=width,
        num_inference_steps=prior_num_inference_steps,
        guidance_scale=prior_guidance_scale,
        negative_prompt=negative_prompt if negative_prompt_embeds is None else None,
        prompt_embeds=prompt_embeds,
        prompt_embeds_pooled=prompt_embeds_pooled,
        negative_prompt_embeds=negative_prompt_embeds,
        negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        num_images_per_prompt=num_images_per_prompt,
        generator=generator,
        latents=latents,
        output_type="ms",
        return_dict=False,
        callback_on_step_end=prior_callback_on_step_end,
        callback_on_step_end_tensor_inputs=prior_callback_on_step_end_tensor_inputs,
    )
    image_embeddings = prior_outputs[0]
    prompt_embeds = prior_outputs[1]
    prompt_embeds_pooled = prior_outputs[2]
    negative_prompt_embeds = prior_outputs[3]
    negative_prompt_embeds_pooled = prior_outputs[4]

    outputs = self.decoder_pipe(
        image_embeddings=image_embeddings,
        prompt=prompt if prompt_embeds is None else None,
        num_inference_steps=num_inference_steps,
        guidance_scale=decoder_guidance_scale,
        negative_prompt=negative_prompt if negative_prompt_embeds is None else None,
        prompt_embeds=prompt_embeds,
        prompt_embeds_pooled=prompt_embeds_pooled,
        negative_prompt_embeds=negative_prompt_embeds,
        negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        generator=generator,
        output_type=output_type,
        return_dict=return_dict,
        callback_on_step_end=callback_on_step_end,
        callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
    )

    return outputs

mindone.diffusers.StableCascadePriorPipeline

Bases: DiffusionPipeline

Pipeline for generating image prior for Stable Cascade.

This model inherits from [DiffusionPipeline]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

PARAMETER DESCRIPTION
prior

The Stable Cascade prior to approximate the image embedding from the text and/or image embedding.

TYPE: [`StableCascadeUNet`]

text_encoder

Frozen text-encoder (laion/CLIP-ViT-bigG-14-laion2B-39B-b160k).

TYPE: [`CLIPTextModelWithProjection`]

feature_extractor

Model that extracts features from generated images to be used as inputs for the image_encoder.

TYPE: [`~transformers.CLIPImageProcessor`] DEFAULT: None

image_encoder

Frozen CLIP image-encoder (clip-vit-large-patch14).

TYPE: [`CLIPVisionModelWithProjection`] DEFAULT: None

tokenizer

Tokenizer of class CLIPTokenizer.

TYPE: `CLIPTokenizer`

scheduler

A scheduler to be used in combination with prior to generate image embedding.

TYPE: [`DDPMWuerstchenScheduler`]

resolution_multiple

Default resolution for multiple images generated.

TYPE: 'float', *optional*, defaults to 42.67 DEFAULT: 42.67

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py
class StableCascadePriorPipeline(DiffusionPipeline):
    """
    Pipeline for generating image prior for Stable Cascade.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

    Args:
        prior ([`StableCascadeUNet`]):
            The Stable Cascade prior to approximate the image embedding from the text and/or image embedding.
        text_encoder ([`CLIPTextModelWithProjection`]):
            Frozen text-encoder ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
        feature_extractor ([`~transformers.CLIPImageProcessor`]):
            Model that extracts features from generated images to be used as inputs for the `image_encoder`.
        image_encoder ([`CLIPVisionModelWithProjection`]):
            Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
        tokenizer (`CLIPTokenizer`):
            Tokenizer of class
            [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
        scheduler ([`DDPMWuerstchenScheduler`]):
            A scheduler to be used in combination with `prior` to generate image embedding.
        resolution_multiple ('float', *optional*, defaults to 42.67):
            Default resolution for multiple images generated.
    """

    unet_name = "prior"
    text_encoder_name = "text_encoder"
    model_cpu_offload_seq = "image_encoder->text_encoder->prior"
    _optional_components = ["image_encoder", "feature_extractor"]
    _callback_tensor_inputs = ["latents", "text_encoder_hidden_states", "negative_prompt_embeds"]

    def __init__(
        self,
        tokenizer: CLIPTokenizer,
        text_encoder: CLIPTextModelWithProjection,
        prior: StableCascadeUNet,
        scheduler: DDPMWuerstchenScheduler,
        resolution_multiple: float = 42.67,
        feature_extractor: Optional[CLIPImageProcessor] = None,
        image_encoder: Optional[CLIPVisionModelWithProjection] = None,
    ) -> None:
        super().__init__()
        self.register_modules(
            tokenizer=tokenizer,
            text_encoder=text_encoder,
            image_encoder=image_encoder,
            feature_extractor=feature_extractor,
            prior=prior,
            scheduler=scheduler,
        )
        self.register_to_config(resolution_multiple=resolution_multiple)

    def prepare_latents(self, batch_size, height, width, num_images_per_prompt, dtype, generator, latents, scheduler):
        latent_shape = (
            num_images_per_prompt * batch_size,
            self.prior.config.in_channels,
            ceil(height / self.config.resolution_multiple),
            ceil(width / self.config.resolution_multiple),
        )

        if latents is None:
            latents = randn_tensor(latent_shape, generator=generator, dtype=dtype)
        else:
            if latents.shape != latent_shape:
                raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latent_shape}")

        latents = (latents * scheduler.init_noise_sigma).to(dtype)
        return latents

    def encode_prompt(
        self,
        batch_size,
        num_images_per_prompt,
        do_classifier_free_guidance,
        prompt=None,
        negative_prompt=None,
        prompt_embeds: Optional[ms.Tensor] = None,
        prompt_embeds_pooled: Optional[ms.Tensor] = None,
        negative_prompt_embeds: Optional[ms.Tensor] = None,
        negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
    ):
        if prompt_embeds is None:
            # get prompt text embeddings
            text_inputs = self.tokenizer(
                prompt,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="np",
            )
            text_input_ids = text_inputs.input_ids
            attention_mask = ms.tensor(text_inputs.attention_mask)

            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids

            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not np.array_equal(
                text_input_ids, untruncated_ids
            ):
                removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
                logger.warning(
                    "The following part of your input was truncated because CLIP can only handle sequences up to"
                    f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                )
                text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
                attention_mask = attention_mask[:, : self.tokenizer.model_max_length]

            text_encoder_output = self.text_encoder(
                ms.tensor(text_input_ids), attention_mask=attention_mask, output_hidden_states=True
            )
            prompt_embeds = text_encoder_output[2][-1]
            if prompt_embeds_pooled is None:
                prompt_embeds_pooled = text_encoder_output[0].unsqueeze(1)

        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype)
        prompt_embeds_pooled = prompt_embeds_pooled.to(dtype=self.text_encoder.dtype)
        prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
        prompt_embeds_pooled = prompt_embeds_pooled.repeat_interleave(num_images_per_prompt, dim=0)

        if negative_prompt_embeds is None and do_classifier_free_guidance:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="np",
            )
            negative_prompt_embeds_text_encoder_output = self.text_encoder(
                ms.Tensor(uncond_input.input_ids),
                attention_mask=ms.Tensor(uncond_input.attention_mask),
                output_hidden_states=True,
            )

            negative_prompt_embeds = negative_prompt_embeds_text_encoder_output[2][-1]
            negative_prompt_embeds_pooled = negative_prompt_embeds_text_encoder_output[0].unsqueeze(1)

        if do_classifier_free_guidance:
            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
            seq_len = negative_prompt_embeds.shape[1]
            negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype)
            negative_prompt_embeds = negative_prompt_embeds.tile((1, num_images_per_prompt, 1))
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)

            seq_len = negative_prompt_embeds_pooled.shape[1]
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.to(dtype=self.text_encoder.dtype)
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.tile((1, num_images_per_prompt, 1))
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.view(
                batch_size * num_images_per_prompt, seq_len, -1
            )
            # done duplicates

        return prompt_embeds, prompt_embeds_pooled, negative_prompt_embeds, negative_prompt_embeds_pooled

    def encode_image(self, images, dtype, batch_size, num_images_per_prompt):
        image_embeds = []
        for image in images:
            image = self.feature_extractor(image, return_tensors="np").pixel_values
            image = ms.tensor(image, dtype=dtype)
            image_embed = self.image_encoder(image)[0].unsqueeze(1)
            image_embeds.append(image_embed)
        image_embeds = ops.cat(image_embeds, axis=1)

        image_embeds = image_embeds.tile((batch_size * num_images_per_prompt, 1, 1))
        negative_image_embeds = ops.zeros_like(image_embeds)

        return image_embeds, negative_image_embeds

    def check_inputs(
        self,
        prompt,
        images=None,
        image_embeds=None,
        negative_prompt=None,
        prompt_embeds=None,
        prompt_embeds_pooled=None,
        negative_prompt_embeds=None,
        negative_prompt_embeds_pooled=None,
        callback_on_step_end_tensor_inputs=None,
    ):
        if callback_on_step_end_tensor_inputs is not None and not all(
            k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs
        ):
            raise ValueError(
                f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, "
                f"but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}"
            )

        if prompt is not None and prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
                " only forward one of the two."
            )
        elif prompt is None and prompt_embeds is None:
            raise ValueError(
                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
            )
        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if negative_prompt is not None and negative_prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
            )

        if prompt_embeds is not None and negative_prompt_embeds is not None:
            if prompt_embeds.shape != negative_prompt_embeds.shape:
                raise ValueError(
                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
                    f" {negative_prompt_embeds.shape}."
                )

        if prompt_embeds is not None and prompt_embeds_pooled is None:
            raise ValueError(
                "If `prompt_embeds` are provided, `prompt_embeds_pooled` must also be provided. "
                "Make sure to generate `prompt_embeds_pooled` from the same text encoder that was used to generate `prompt_embeds`"
            )

        if negative_prompt_embeds is not None and negative_prompt_embeds_pooled is None:
            raise ValueError(
                "If `negative_prompt_embeds` are provided, `negative_prompt_embeds_pooled` must also be provided. "
                "Make sure to generate `prompt_embeds_pooled` from the same text encoder that was used to generate `prompt_embeds`"
            )

        if prompt_embeds_pooled is not None and negative_prompt_embeds_pooled is not None:
            if prompt_embeds_pooled.shape != negative_prompt_embeds_pooled.shape:
                raise ValueError(
                    "`prompt_embeds_pooled` and `negative_prompt_embeds_pooled` must have the same shape when passed"
                    f"directly, but got: `prompt_embeds_pooled` {prompt_embeds_pooled.shape} !="
                    f"`negative_prompt_embeds_pooled` {negative_prompt_embeds_pooled.shape}."
                )

        if image_embeds is not None and images is not None:
            raise ValueError(
                f"Cannot forward both `images`: {images} and `image_embeds`: {image_embeds}. Please make sure to"
                " only forward one of the two."
            )

        if images:
            for i, image in enumerate(images):
                if not isinstance(image, ms.Tensor) and not isinstance(image, PIL.Image.Image):
                    raise TypeError(
                        f"'images' must contain images of type 'ms.Tensor' or 'PIL.Image.Image, but got"
                        f"{type(image)} for image number {i}."
                    )

    @property
    def guidance_scale(self):
        return self._guidance_scale

    @property
    def do_classifier_free_guidance(self):
        return self._guidance_scale > 1

    @property
    def num_timesteps(self):
        return self._num_timesteps

    def get_timestep_ratio_conditioning(self, t, alphas_cumprod):
        s = ms.tensor([0.003])
        clamp_range = [0, 1]
        min_var = ops.cos(s / (1 + s) * pi * 0.5) ** 2
        var = alphas_cumprod[t]
        var = var.clamp(*clamp_range)
        ratio = (((var * min_var) ** 0.5).acos() / (pi * 0.5)) * (1 + s) - s
        return ratio

    def __call__(
        self,
        prompt: Optional[Union[str, List[str]]] = None,
        images: Union[ms.Tensor, PIL.Image.Image, List[ms.Tensor], List[PIL.Image.Image]] = None,
        height: int = 1024,
        width: int = 1024,
        num_inference_steps: int = 20,
        timesteps: List[float] = None,
        guidance_scale: float = 4.0,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        prompt_embeds: Optional[ms.Tensor] = None,
        prompt_embeds_pooled: Optional[ms.Tensor] = None,
        negative_prompt_embeds: Optional[ms.Tensor] = None,
        negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
        image_embeds: Optional[ms.Tensor] = None,
        num_images_per_prompt: Optional[int] = 1,
        generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
        latents: Optional[ms.Tensor] = None,
        output_type: Optional[str] = "ms",
        return_dict: bool = False,
        callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
        callback_on_step_end_tensor_inputs: List[str] = ["latents"],
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            height (`int`, *optional*, defaults to 1024):
                The height in pixels of the generated image.
            width (`int`, *optional*, defaults to 1024):
                The width in pixels of the generated image.
            num_inference_steps (`int`, *optional*, defaults to 20):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            guidance_scale (`float`, *optional*, defaults to 4.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `decoder_guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
                `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that
                are closely linked to the text `prompt`, usually at the expense of lower image quality.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `decoder_guidance_scale` is less than `1`).
            prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
                If not provided, pooled text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt` input
                argument.
            image_embeds (`ms.Tensor`, *optional*):
                Pre-generated image embeddings. Can be used to easily tweak image inputs, *e.g.* prompt weighting.
                If not provided, image embeddings will be generated from `image` input argument if existing.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
                One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
                to make generation deterministic.
            latents (`ms.Tensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will be generated by sampling using the supplied random `generator`.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
                (`np.array`) or `"ms"` (`ms.Tensor`).
            return_dict (`bool`, *optional*, defaults to `False`):
                Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
            callback_on_step_end (`Callable`, *optional*):
                A function that is called at the end of each denoising step during inference. The function is called
                with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
                callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
                `callback_on_step_end_tensor_inputs`.
            callback_on_step_end_tensor_inputs (`List`, *optional*):
                The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
                will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
                `._callback_tensor_inputs` attribute of your pipeline class.

        Examples:

        Returns:
            [`StableCascadePriorPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a
            tuple, the first element is the generated image embeddings.
        """

        # 0. Define commonly used variables
        dtype = next(self.prior.get_parameters()).dtype
        self._guidance_scale = guidance_scale
        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        # 1. Check inputs. Raise error if not correct
        self.check_inputs(
            prompt,
            images=images,
            image_embeds=image_embeds,
            negative_prompt=negative_prompt,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
            callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
        )

        # 2. Encode caption + images
        (
            prompt_embeds,
            prompt_embeds_pooled,
            negative_prompt_embeds,
            negative_prompt_embeds_pooled,
        ) = self.encode_prompt(
            prompt=prompt,
            batch_size=batch_size,
            num_images_per_prompt=num_images_per_prompt,
            do_classifier_free_guidance=self.do_classifier_free_guidance,
            negative_prompt=negative_prompt,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        )

        if images is not None:
            image_embeds_pooled, uncond_image_embeds_pooled = self.encode_image(
                images=images,
                dtype=dtype,
                batch_size=batch_size,
                num_images_per_prompt=num_images_per_prompt,
            )
        elif image_embeds is not None:
            image_embeds_pooled = image_embeds.tile((batch_size * num_images_per_prompt, 1, 1))
            uncond_image_embeds_pooled = ops.zeros_like(image_embeds_pooled)
        else:
            image_embeds_pooled = ops.zeros(
                (batch_size * num_images_per_prompt, 1, self.prior.config.clip_image_in_channels), dtype=dtype
            )
            uncond_image_embeds_pooled = ops.zeros(
                (batch_size * num_images_per_prompt, 1, self.prior.config.clip_image_in_channels), dtype=dtype
            )

        if self.do_classifier_free_guidance:
            image_embeds = ops.cat([image_embeds_pooled, uncond_image_embeds_pooled], axis=0)
        else:
            image_embeds = image_embeds_pooled

        # For classifier free guidance, we need to do two forward passes.
        # Here we concatenate the unconditional and text embeddings into a single batch
        # to avoid doing two forward passes
        text_encoder_hidden_states = (
            ops.cat([prompt_embeds, negative_prompt_embeds]) if negative_prompt_embeds is not None else prompt_embeds
        )
        text_encoder_pooled = (
            ops.cat([prompt_embeds_pooled, negative_prompt_embeds_pooled])
            if negative_prompt_embeds is not None
            else prompt_embeds_pooled
        )

        # 4. Prepare and set timesteps
        self.scheduler.set_timesteps(num_inference_steps)
        timesteps = self.scheduler.timesteps

        # 5. Prepare latents
        latents = self.prepare_latents(
            batch_size, height, width, num_images_per_prompt, dtype, generator, latents, self.scheduler
        )

        if isinstance(self.scheduler, DDPMWuerstchenScheduler):
            timesteps = timesteps[:-1]
        else:
            if self.scheduler.config.clip_sample:
                self.scheduler.config.clip_sample = False  # disable sample clipping
                logger.warning("set `clip_sample` to False")
        # 6. Run denoising loop
        if hasattr(self.scheduler, "betas"):
            alphas = 1.0 - self.scheduler.betas
            alphas_cumprod = ops.cumprod(alphas, dim=0)
        else:
            alphas_cumprod = []

        self._num_timesteps = len(timesteps)
        for i, t in enumerate(self.progress_bar(timesteps)):
            if not isinstance(self.scheduler, DDPMWuerstchenScheduler):
                if len(alphas_cumprod) > 0:
                    timestep_ratio = self.get_timestep_ratio_conditioning(t.long(), alphas_cumprod)
                    timestep_ratio = timestep_ratio.broadcast_to((latents.shape[0],)).to(dtype)
                else:
                    timestep_ratio = (
                        t.float().div(self.scheduler.timesteps[-1]).broadcast_to((latents.shape[0],)).to(dtype)
                    )
            else:
                timestep_ratio = t.broadcast_to((latents.shape[0],)).to(dtype)
            # 7. Denoise image embeddings
            predicted_image_embedding = self.prior(
                sample=ops.cat([latents] * 2) if self.do_classifier_free_guidance else latents,
                timestep_ratio=ops.cat([timestep_ratio] * 2) if self.do_classifier_free_guidance else timestep_ratio,
                clip_text_pooled=text_encoder_pooled,
                clip_text=text_encoder_hidden_states,
                clip_img=image_embeds,
                return_dict=False,
            )[0]

            # 8. Check for classifier free guidance and apply it
            if self.do_classifier_free_guidance:
                predicted_image_embedding_text, predicted_image_embedding_uncond = predicted_image_embedding.chunk(2)
                predicted_image_embedding = ops.lerp(
                    predicted_image_embedding_uncond,
                    predicted_image_embedding_text,
                    ms.tensor(self.guidance_scale, dtype=predicted_image_embedding_text.dtype),
                )

            # 9. Renoise latents to next timestep
            if not isinstance(self.scheduler, DDPMWuerstchenScheduler):
                timestep_ratio = t
            latents = self.scheduler.step(
                model_output=predicted_image_embedding, timestep=timestep_ratio, sample=latents, generator=generator
            )[0]

            if callback_on_step_end is not None:
                callback_kwargs = {}
                for k in callback_on_step_end_tensor_inputs:
                    callback_kwargs[k] = locals()[k]
                callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)

                latents = callback_outputs.pop("latents", latents)
                prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
                negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)

        if output_type == "np":
            latents = latents.float().asnumpy()  # float() because bfloat16 -> numpy conversion doesn't work
            prompt_embeds = prompt_embeds.float().asnumpy()  # float() because bfloat16 -> numpy conversion doesn't work
            negative_prompt_embeds = (
                negative_prompt_embeds.float().asnumpy() if negative_prompt_embeds is not None else None
            )  # float() because bfloat16 -> numpy conversion doesn't work

        if not return_dict:
            return (
                latents,
                prompt_embeds,
                prompt_embeds_pooled,
                negative_prompt_embeds,
                negative_prompt_embeds_pooled,
            )

        return StableCascadePriorPipelineOutput(
            image_embeddings=latents,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        )

mindone.diffusers.StableCascadePriorPipeline.__call__(prompt=None, images=None, height=1024, width=1024, num_inference_steps=20, timesteps=None, guidance_scale=4.0, negative_prompt=None, prompt_embeds=None, prompt_embeds_pooled=None, negative_prompt_embeds=None, negative_prompt_embeds_pooled=None, image_embeds=None, num_images_per_prompt=1, generator=None, latents=None, output_type='ms', return_dict=False, callback_on_step_end=None, callback_on_step_end_tensor_inputs=['latents'])

Function invoked when calling the pipeline for generation.

PARAMETER DESCRIPTION
prompt

The prompt or prompts to guide the image generation.

TYPE: `str` or `List[str]` DEFAULT: None

height

The height in pixels of the generated image.

TYPE: `int`, *optional*, defaults to 1024 DEFAULT: 1024

width

The width in pixels of the generated image.

TYPE: `int`, *optional*, defaults to 1024 DEFAULT: 1024

num_inference_steps

The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

TYPE: `int`, *optional*, defaults to 20 DEFAULT: 20

guidance_scale

Guidance scale as defined in Classifier-Free Diffusion Guidance. decoder_guidance_scale is defined as w of equation 2. of Imagen Paper. Guidance scale is enabled by setting decoder_guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

TYPE: `float`, *optional*, defaults to 4.0 DEFAULT: 4.0

negative_prompt

The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if decoder_guidance_scale is less than 1).

TYPE: `str` or `List[str]`, *optional* DEFAULT: None

prompt_embeds

Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

prompt_embeds_pooled

Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds

Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds_pooled

Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds_pooled will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

image_embeds

Pre-generated image embeddings. Can be used to easily tweak image inputs, e.g. prompt weighting. If not provided, image embeddings will be generated from image input argument if existing.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

num_images_per_prompt

The number of images to generate per prompt.

TYPE: `int`, *optional*, defaults to 1 DEFAULT: 1

generator

One or a list of np.random.Generator(s) to make generation deterministic.

TYPE: `np.random.Generator` or `List[np.random.Generator]`, *optional* DEFAULT: None

latents

Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

output_type

The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "ms" (ms.Tensor).

TYPE: `str`, *optional*, defaults to `"ms"` DEFAULT: 'ms'

return_dict

Whether or not to return a [~pipelines.ImagePipelineOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

callback_on_step_end

A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.

TYPE: `Callable`, *optional* DEFAULT: None

callback_on_step_end_tensor_inputs

The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

TYPE: `List`, *optional* DEFAULT: ['latents']

RETURNS DESCRIPTION

[StableCascadePriorPipelineOutput] if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the generated image embeddings.
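
A minimal, hedged usage sketch for the call documented above. The checkpoint id and the `mindspore_dtype` loading argument are assumptions, and the prompt is illustrative; with the default `return_dict=False`, the first tuple element holds the 24 x 24 image embeddings that StableCascadeDecoderPipeline consumes.

import mindspore as ms
from mindone.diffusers import StableCascadePriorPipeline

# Assumed checkpoint id and loading arguments; the official prior weights do not
# support float16, so bfloat16 is used here.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", mindspore_dtype=ms.bfloat16
)

# return_dict=False (the default) returns a plain tuple; element 0 holds the
# image embeddings for the decoder stage.
prior_output = prior(
    prompt="a photograph of a red fox in a snowy forest",
    height=1024,
    width=1024,
    num_inference_steps=20,
    guidance_scale=4.0,
    num_images_per_prompt=1,
)
image_embeddings = prior_output[0]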

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py
def __call__(
    self,
    prompt: Optional[Union[str, List[str]]] = None,
    images: Union[ms.Tensor, PIL.Image.Image, List[ms.Tensor], List[PIL.Image.Image]] = None,
    height: int = 1024,
    width: int = 1024,
    num_inference_steps: int = 20,
    timesteps: List[float] = None,
    guidance_scale: float = 4.0,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    prompt_embeds: Optional[ms.Tensor] = None,
    prompt_embeds_pooled: Optional[ms.Tensor] = None,
    negative_prompt_embeds: Optional[ms.Tensor] = None,
    negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
    image_embeds: Optional[ms.Tensor] = None,
    num_images_per_prompt: Optional[int] = 1,
    generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
    latents: Optional[ms.Tensor] = None,
    output_type: Optional[str] = "ms",
    return_dict: bool = False,
    callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    callback_on_step_end_tensor_inputs: List[str] = ["latents"],
):
    """
    Function invoked when calling the pipeline for generation.

    Args:
        prompt (`str` or `List[str]`):
            The prompt or prompts to guide the image generation.
        height (`int`, *optional*, defaults to 1024):
            The height in pixels of the generated image.
        width (`int`, *optional*, defaults to 1024):
            The width in pixels of the generated image.
        num_inference_steps (`int`, *optional*, defaults to 20):
            The number of denoising steps. More denoising steps usually lead to a higher quality image at the
            expense of slower inference.
        guidance_scale (`float`, *optional*, defaults to 4.0):
            Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
            `decoder_guidance_scale` is defined as `w` of equation 2. of [Imagen
            Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
            `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that
            are closely linked to the text `prompt`, usually at the expense of lower image quality.
        negative_prompt (`str` or `List[str]`, *optional*):
            The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
            if `decoder_guidance_scale` is less than `1`).
        prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
            provided, text embeddings will be generated from `prompt` input argument.
        prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
            If not provided, pooled text embeddings will be generated from `prompt` input argument.
        negative_prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
            argument.
        negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt` input
            argument.
        image_embeds (`ms.Tensor`, *optional*):
            Pre-generated image embeddings. Can be used to easily tweak image inputs, *e.g.* prompt weighting.
            If not provided, image embeddings will be generated from `image` input argument if existing.
        num_images_per_prompt (`int`, *optional*, defaults to 1):
            The number of images to generate per prompt.
        generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
            One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
            to make generation deterministic.
        latents (`ms.Tensor`, *optional*):
            Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
            generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
            tensor will be generated by sampling using the supplied random `generator`.
        output_type (`str`, *optional*, defaults to `"pil"`):
            The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
            (`np.array`) or `"ms"` (`ms.Tensor`).
        return_dict (`bool`, *optional*, defaults to `False`):
            Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
        callback_on_step_end (`Callable`, *optional*):
            A function that is called at the end of each denoising step during inference. The function is called
            with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
            callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
            `callback_on_step_end_tensor_inputs`.
        callback_on_step_end_tensor_inputs (`List`, *optional*):
            The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
            will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
            `._callback_tensor_inputs` attribute of your pipeline class.

    Examples:

    Returns:
        [`StableCascadePriorPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a
        tuple, the first element is the generated image embeddings.
    """

    # 0. Define commonly used variables
    dtype = next(self.prior.get_parameters()).dtype
    self._guidance_scale = guidance_scale
    if prompt is not None and isinstance(prompt, str):
        batch_size = 1
    elif prompt is not None and isinstance(prompt, list):
        batch_size = len(prompt)
    else:
        batch_size = prompt_embeds.shape[0]

    # 1. Check inputs. Raise error if not correct
    self.check_inputs(
        prompt,
        images=images,
        image_embeds=image_embeds,
        negative_prompt=negative_prompt,
        prompt_embeds=prompt_embeds,
        prompt_embeds_pooled=prompt_embeds_pooled,
        negative_prompt_embeds=negative_prompt_embeds,
        negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
    )

    # 2. Encode caption + images
    (
        prompt_embeds,
        prompt_embeds_pooled,
        negative_prompt_embeds,
        negative_prompt_embeds_pooled,
    ) = self.encode_prompt(
        prompt=prompt,
        batch_size=batch_size,
        num_images_per_prompt=num_images_per_prompt,
        do_classifier_free_guidance=self.do_classifier_free_guidance,
        negative_prompt=negative_prompt,
        prompt_embeds=prompt_embeds,
        prompt_embeds_pooled=prompt_embeds_pooled,
        negative_prompt_embeds=negative_prompt_embeds,
        negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
    )

    if images is not None:
        image_embeds_pooled, uncond_image_embeds_pooled = self.encode_image(
            images=images,
            dtype=dtype,
            batch_size=batch_size,
            num_images_per_prompt=num_images_per_prompt,
        )
    elif image_embeds is not None:
        image_embeds_pooled = image_embeds.tile((batch_size * num_images_per_prompt, 1, 1))
        uncond_image_embeds_pooled = ops.zeros_like(image_embeds_pooled)
    else:
        image_embeds_pooled = ops.zeros(
            (batch_size * num_images_per_prompt, 1, self.prior.config.clip_image_in_channels), dtype=dtype
        )
        uncond_image_embeds_pooled = ops.zeros(
            (batch_size * num_images_per_prompt, 1, self.prior.config.clip_image_in_channels), dtype=dtype
        )

    if self.do_classifier_free_guidance:
        image_embeds = ops.cat([image_embeds_pooled, uncond_image_embeds_pooled], axis=0)
    else:
        image_embeds = image_embeds_pooled

    # For classifier free guidance, we need to do two forward passes.
    # Here we concatenate the unconditional and text embeddings into a single batch
    # to avoid doing two forward passes
    text_encoder_hidden_states = (
        ops.cat([prompt_embeds, negative_prompt_embeds]) if negative_prompt_embeds is not None else prompt_embeds
    )
    text_encoder_pooled = (
        ops.cat([prompt_embeds_pooled, negative_prompt_embeds_pooled])
        if negative_prompt_embeds is not None
        else prompt_embeds_pooled
    )

    # 4. Prepare and set timesteps
    self.scheduler.set_timesteps(num_inference_steps)
    timesteps = self.scheduler.timesteps

    # 5. Prepare latents
    latents = self.prepare_latents(
        batch_size, height, width, num_images_per_prompt, dtype, generator, latents, self.scheduler
    )

    if isinstance(self.scheduler, DDPMWuerstchenScheduler):
        timesteps = timesteps[:-1]
    else:
        if self.scheduler.config.clip_sample:
            self.scheduler.config.clip_sample = False  # disable sample clipping
            logger.warning("set `clip_sample` to False")
    # 6. Run denoising loop
    if hasattr(self.scheduler, "betas"):
        alphas = 1.0 - self.scheduler.betas
        alphas_cumprod = ops.cumprod(alphas, dim=0)
    else:
        alphas_cumprod = []

    self._num_timesteps = len(timesteps)
    for i, t in enumerate(self.progress_bar(timesteps)):
        if not isinstance(self.scheduler, DDPMWuerstchenScheduler):
            if len(alphas_cumprod) > 0:
                timestep_ratio = self.get_timestep_ratio_conditioning(t.long(), alphas_cumprod)
                timestep_ratio = timestep_ratio.broadcast_to((latents.shape[0],)).to(dtype)
            else:
                timestep_ratio = (
                    t.float().div(self.scheduler.timesteps[-1]).broadcast_to((latents.shape[0],)).to(dtype)
                )
        else:
            timestep_ratio = t.broadcast_to((latents.shape[0],)).to(dtype)
        # 7. Denoise image embeddings
        predicted_image_embedding = self.prior(
            sample=ops.cat([latents] * 2) if self.do_classifier_free_guidance else latents,
            timestep_ratio=ops.cat([timestep_ratio] * 2) if self.do_classifier_free_guidance else timestep_ratio,
            clip_text_pooled=text_encoder_pooled,
            clip_text=text_encoder_hidden_states,
            clip_img=image_embeds,
            return_dict=False,
        )[0]

        # 8. Check for classifier free guidance and apply it
        if self.do_classifier_free_guidance:
            predicted_image_embedding_text, predicted_image_embedding_uncond = predicted_image_embedding.chunk(2)
            predicted_image_embedding = ops.lerp(
                predicted_image_embedding_uncond,
                predicted_image_embedding_text,
                ms.tensor(self.guidance_scale, dtype=predicted_image_embedding_text.dtype),
            )

        # 9. Renoise latents to next timestep
        if not isinstance(self.scheduler, DDPMWuerstchenScheduler):
            timestep_ratio = t
        latents = self.scheduler.step(
            model_output=predicted_image_embedding, timestep=timestep_ratio, sample=latents, generator=generator
        )[0]

        if callback_on_step_end is not None:
            callback_kwargs = {}
            for k in callback_on_step_end_tensor_inputs:
                callback_kwargs[k] = locals()[k]
            callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)

            latents = callback_outputs.pop("latents", latents)
            prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
            negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)

    if output_type == "np":
        latents = latents.float().asnumpy()  # float() because bfloat16 -> numpy conversion doesn't work
        prompt_embeds = prompt_embeds.float().asnumpy()  # float() because bfloat16 -> numpy conversion doesn't work
        negative_prompt_embeds = (
            negative_prompt_embeds.float().asnumpy() if negative_prompt_embeds is not None else None
        )  # float() because bfloat16 -> numpy conversion doesn't work

    if not return_dict:
        return (
            latents,
            prompt_embeds,
            prompt_embeds_pooled,
            negative_prompt_embeds,
            negative_prompt_embeds_pooled,
        )

    return StableCascadePriorPipelineOutput(
        image_embeddings=latents,
        prompt_embeds=prompt_embeds,
        prompt_embeds_pooled=prompt_embeds_pooled,
        negative_prompt_embeds=negative_prompt_embeds,
        negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
    )

mindone.diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput dataclass

Bases: BaseOutput

Output class for StableCascadePriorPipeline.

PARAMETER DESCRIPTION
image_embeddings

Prior image embeddings for the text prompt.

TYPE: `ms.Tensor` or `np.ndarray`

prompt_embeds

Text embeddings for the prompt.

TYPE: `ms.Tensor`

negative_prompt_embeds

Text embeddings for the negative prompt.

TYPE: `ms.Tensor`

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py
@dataclass
class StableCascadePriorPipelineOutput(BaseOutput):
    """
    Output class for StableCascadePriorPipeline.

    Args:
        image_embeddings (`ms.Tensor` or `np.ndarray`):
            Prior image embeddings for the text prompt.
        prompt_embeds (`ms.Tensor`):
            Text embeddings for the prompt.
        negative_prompt_embeds (`ms.Tensor`):
            Text embeddings for the negative prompt.
    """

    image_embeddings: Union[ms.Tensor, np.ndarray]
    prompt_embeds: Union[ms.Tensor, np.ndarray]
    prompt_embeds_pooled: Union[ms.Tensor, np.ndarray]
    negative_prompt_embeds: Union[ms.Tensor, np.ndarray]
    negative_prompt_embeds_pooled: Union[ms.Tensor, np.ndarray]
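
A short hedged sketch of reading the prior's output in both forms; `prior` is assumed to be an already-loaded StableCascadePriorPipeline and the prompt is illustrative.

# With return_dict=True, the call returns the dataclass above.
output = prior(prompt="a watercolor painting of a lighthouse", return_dict=True)
image_embeddings = output.image_embeddings
prompt_embeds_pooled = output.prompt_embeds_pooled

# With return_dict=False (the default), the same values come back as a plain tuple, ordered
# (image_embeddings, prompt_embeds, prompt_embeds_pooled, negative_prompt_embeds, negative_prompt_embeds_pooled).
image_embeddings, prompt_embeds, prompt_embeds_pooled, neg_embeds, neg_pooled = prior(
    prompt="a watercolor painting of a lighthouse"
)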

mindone.diffusers.StableCascadeDecoderPipeline

Bases: DiffusionPipeline

Pipeline for generating images from the Stable Cascade model.

This model inherits from [DiffusionPipeline]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

PARAMETER DESCRIPTION
tokenizer

The CLIP tokenizer.

TYPE: `CLIPTokenizer`

text_encoder

The CLIP text encoder.

TYPE: `CLIPTextModel`

decoder

The Stable Cascade decoder unet.

TYPE: [`StableCascadeUNet`]

vqgan

The VQGAN model.

TYPE: [`PaellaVQModel`]

scheduler

A scheduler to be used in combination with the decoder to generate images.

TYPE: [`DDPMWuerstchenScheduler`]

latent_dim_scale

Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and width=int(24*10.67)=256 in order to match the training conditions.

TYPE: float, `optional`, defaults to 10.67 DEFAULT: 10.67
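
A small arithmetic sketch of the latent_dim_scale relationship described above, mirroring what prepare_latents in the source below computes; the variable names are illustrative.

latent_dim_scale = 10.67
embedding_height = embedding_width = 24                    # spatial size of the prior's image embeddings
vq_height = int(embedding_height * latent_dim_scale)       # int(256.08) == 256
vq_width = int(embedding_width * latent_dim_scale)         # int(256.08) == 256
# prepare_latents then samples noise of shape (batch_size * num_images_per_prompt, 4, vq_height, vq_width)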

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py
class StableCascadeDecoderPipeline(DiffusionPipeline):
    """
    Pipeline for generating images from the Stable Cascade model.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

    Args:
        tokenizer (`CLIPTokenizer`):
            The CLIP tokenizer.
        text_encoder (`CLIPTextModel`):
            The CLIP text encoder.
        decoder ([`StableCascadeUNet`]):
            The Stable Cascade decoder unet.
        vqgan ([`PaellaVQModel`]):
            The VQGAN model.
        scheduler ([`DDPMWuerstchenScheduler`]):
            A scheduler to be used in combination with the `decoder` to generate images.
        latent_dim_scale (float, `optional`, defaults to 10.67):
            Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are
            height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and
            width=int(24*10.67)=256 in order to match the training conditions.
    """

    unet_name = "decoder"
    text_encoder_name = "text_encoder"
    model_cpu_offload_seq = "text_encoder->decoder->vqgan"
    _callback_tensor_inputs = [
        "latents",
        "prompt_embeds_pooled",
        "negative_prompt_embeds",
        "image_embeddings",
    ]

    def __init__(
        self,
        decoder: StableCascadeUNet,
        tokenizer: CLIPTokenizer,
        text_encoder: CLIPTextModel,
        scheduler: DDPMWuerstchenScheduler,
        vqgan: PaellaVQModel,
        latent_dim_scale: float = 10.67,
    ) -> None:
        super().__init__()
        self.register_modules(
            decoder=decoder,
            tokenizer=tokenizer,
            text_encoder=text_encoder,
            scheduler=scheduler,
            vqgan=vqgan,
        )
        self.register_to_config(latent_dim_scale=latent_dim_scale)

    def prepare_latents(
        self, batch_size, image_embeddings, num_images_per_prompt, dtype, generator, latents, scheduler
    ):
        _, channels, height, width = image_embeddings.shape
        latents_shape = (
            batch_size * num_images_per_prompt,
            4,
            int(height * self.config.latent_dim_scale),
            int(width * self.config.latent_dim_scale),
        )

        if latents is None:
            latents = randn_tensor(latents_shape, generator=generator, dtype=dtype)
        else:
            if latents.shape != latents_shape:
                raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")

        latents = (latents * scheduler.init_noise_sigma).to(dtype)
        return latents

    def encode_prompt(
        self,
        batch_size,
        num_images_per_prompt,
        do_classifier_free_guidance,
        prompt=None,
        negative_prompt=None,
        prompt_embeds: Optional[ms.Tensor] = None,
        prompt_embeds_pooled: Optional[ms.Tensor] = None,
        negative_prompt_embeds: Optional[ms.Tensor] = None,
        negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
    ):
        if prompt_embeds is None:
            # get prompt text embeddings
            text_inputs = self.tokenizer(
                prompt,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="np",
            )
            text_input_ids = text_inputs.input_ids
            attention_mask = ms.Tensor(text_inputs.attention_mask)

            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids

            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not np.array_equal(
                text_input_ids, untruncated_ids
            ):
                removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
                logger.warning(
                    "The following part of your input was truncated because CLIP can only handle sequences up to"
                    f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                )
                text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
                attention_mask = attention_mask[:, : self.tokenizer.model_max_length]

            text_encoder_output = self.text_encoder(
                ms.tensor(text_input_ids), attention_mask=attention_mask, output_hidden_states=True
            )
            prompt_embeds = text_encoder_output[2][-1]
            if prompt_embeds_pooled is None:
                prompt_embeds_pooled = text_encoder_output[0].unsqueeze(1)

        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype)
        prompt_embeds_pooled = prompt_embeds_pooled.to(dtype=self.text_encoder.dtype)
        prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
        prompt_embeds_pooled = prompt_embeds_pooled.repeat_interleave(num_images_per_prompt, dim=0)

        if negative_prompt_embeds is None and do_classifier_free_guidance:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="np",
            )
            negative_prompt_embeds_text_encoder_output = self.text_encoder(
                ms.Tensor(uncond_input.input_ids),
                attention_mask=ms.Tensor(uncond_input.attention_mask),
                output_hidden_states=True,
            )

            negative_prompt_embeds = negative_prompt_embeds_text_encoder_output[2][-1]
            negative_prompt_embeds_pooled = negative_prompt_embeds_text_encoder_output[0].unsqueeze(1)

        if do_classifier_free_guidance:
            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
            seq_len = negative_prompt_embeds.shape[1]
            negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype)
            negative_prompt_embeds = negative_prompt_embeds.tile((1, num_images_per_prompt, 1))
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)

            seq_len = negative_prompt_embeds_pooled.shape[1]
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.to(dtype=self.text_encoder.dtype)
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.tile((1, num_images_per_prompt, 1))
            negative_prompt_embeds_pooled = negative_prompt_embeds_pooled.view(
                batch_size * num_images_per_prompt, seq_len, -1
            )
            # done duplicates

        return prompt_embeds, prompt_embeds_pooled, negative_prompt_embeds, negative_prompt_embeds_pooled

    def check_inputs(
        self,
        prompt,
        negative_prompt=None,
        prompt_embeds=None,
        negative_prompt_embeds=None,
        callback_on_step_end_tensor_inputs=None,
    ):
        if callback_on_step_end_tensor_inputs is not None and not all(
            k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs
        ):
            raise ValueError(
                f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, "
                f"but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}"
            )

        if prompt is not None and prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
                " only forward one of the two."
            )
        elif prompt is None and prompt_embeds is None:
            raise ValueError(
                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
            )
        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if negative_prompt is not None and negative_prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
            )

        if prompt_embeds is not None and negative_prompt_embeds is not None:
            if prompt_embeds.shape != negative_prompt_embeds.shape:
                raise ValueError(
                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
                    f" {negative_prompt_embeds.shape}."
                )

    @property
    def guidance_scale(self):
        return self._guidance_scale

    @property
    def do_classifier_free_guidance(self):
        return self._guidance_scale > 1

    @property
    def num_timesteps(self):
        return self._num_timesteps

    def __call__(
        self,
        image_embeddings: Union[ms.Tensor, List[ms.Tensor]],
        prompt: Union[str, List[str]] = None,
        num_inference_steps: int = 10,
        guidance_scale: float = 0.0,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        prompt_embeds: Optional[ms.Tensor] = None,
        prompt_embeds_pooled: Optional[ms.Tensor] = None,
        negative_prompt_embeds: Optional[ms.Tensor] = None,
        negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
        num_images_per_prompt: int = 1,
        generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
        latents: Optional[ms.Tensor] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = False,
        callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
        callback_on_step_end_tensor_inputs: List[str] = ["latents"],
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            image_embeddings (`ms.Tensor` or `List[ms.Tensor]`):
                Image embeddings either extracted from an image or generated by a prior model.
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            num_inference_steps (`int`, *optional*, defaults to 10):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            guidance_scale (`float`, *optional*, defaults to 0.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `decoder_guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
                `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that
                are closely linked to the text `prompt`, usually at the expense of lower image quality.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `decoder_guidance_scale` is less than `1`).
            prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
                If not provided, pooled text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`ms.Tensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
                Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt` input
                argument.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
                One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
                to make generation deterministic.
            latents (`ms.Tensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will be generated by sampling using the supplied random `generator`.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
                (`np.array`) or `"ms"` (`ms.Tensor`).
            return_dict (`bool`, *optional*, defaults to `False`):
                Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
            callback_on_step_end (`Callable`, *optional*):
                A function that is called at the end of each denoising step during inference. The function is called
                with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
                callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
                `callback_on_step_end_tensor_inputs`.
            callback_on_step_end_tensor_inputs (`List`, *optional*):
                The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
                will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
                `._callback_tensor_inputs` attribute of your pipeline class.

        Examples:

        Returns:
            [`~pipelines.ImagePipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a
            tuple, the first element is a list with the generated images.
        """

        # 0. Define commonly used variables
        dtype = next(self.decoder.get_parameters()).dtype
        self._guidance_scale = guidance_scale

        # 1. Check inputs. Raise error if not correct
        self.check_inputs(
            prompt,
            negative_prompt=negative_prompt,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
            callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
        )
        if isinstance(image_embeddings, list):
            image_embeddings = ops.cat(image_embeddings, axis=0)

        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        # Compute the effective number of images per prompt
        # We must account for the fact that the image embeddings from the prior can be generated with num_images_per_prompt > 1
        # This results in a case where a single prompt is associated with multiple image embeddings
        # Divide the number of image embeddings by the batch size to determine if this is the case.
        num_images_per_prompt = num_images_per_prompt * (image_embeddings.shape[0] // batch_size)

        # 2. Encode caption
        if prompt_embeds is None and negative_prompt_embeds is None:
            _, prompt_embeds_pooled, _, negative_prompt_embeds_pooled = self.encode_prompt(
                prompt=prompt,
                batch_size=batch_size,
                num_images_per_prompt=num_images_per_prompt,
                do_classifier_free_guidance=self.do_classifier_free_guidance,
                negative_prompt=negative_prompt,
                prompt_embeds=prompt_embeds,
                prompt_embeds_pooled=prompt_embeds_pooled,
                negative_prompt_embeds=negative_prompt_embeds,
                negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
            )

        # The pooled embeds from the prior are pooled again before being passed to the decoder
        prompt_embeds_pooled = (
            ops.cat([prompt_embeds_pooled, negative_prompt_embeds_pooled])
            if self.do_classifier_free_guidance
            else prompt_embeds_pooled
        )
        effnet = (
            ops.cat([image_embeddings, ops.zeros_like(image_embeddings)])
            if self.do_classifier_free_guidance
            else image_embeddings
        )

        self.scheduler.set_timesteps(num_inference_steps)
        timesteps = self.scheduler.timesteps

        # 5. Prepare latents
        latents = self.prepare_latents(
            batch_size, image_embeddings, num_images_per_prompt, dtype, generator, latents, self.scheduler
        )

        # 6. Run denoising loop
        self._num_timesteps = len(timesteps[:-1])
        for i, t in enumerate(self.progress_bar(timesteps[:-1])):
            timestep_ratio = t.broadcast_to((latents.shape[0],)).to(dtype)

            # 7. Denoise latents
            predicted_latents = self.decoder(
                sample=ops.cat([latents] * 2) if self.do_classifier_free_guidance else latents,
                timestep_ratio=ops.cat([timestep_ratio] * 2) if self.do_classifier_free_guidance else timestep_ratio,
                clip_text_pooled=prompt_embeds_pooled,
                effnet=effnet,
                return_dict=False,
            )[0]

            # 8. Check for classifier free guidance and apply it
            if self.do_classifier_free_guidance:
                predicted_latents_text, predicted_latents_uncond = predicted_latents.chunk(2)
                predicted_latents = ops.lerp(
                    predicted_latents_uncond,
                    predicted_latents_text,
                    ms.tensor(self.guidance_scale, dtype=predicted_latents_text.dtype),
                )

            # 9. Renoise latents to next timestep
            # TODO: check prev_sample
            latents = self.scheduler.step(
                model_output=predicted_latents,
                timestep=timestep_ratio,
                sample=latents,
                generator=generator,
            )[0]

            if callback_on_step_end is not None:
                callback_kwargs = {}
                for k in callback_on_step_end_tensor_inputs:
                    callback_kwargs[k] = locals()[k]
                callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)

                latents = callback_outputs.pop("latents", latents)
                prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
                negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)

        if output_type not in ["ms", "np", "pil", "latent"]:
            raise ValueError(
                f"Only the output types `ms`, `np`, `pil` and `latent` are supported not output_type={output_type}"
            )

        if not output_type == "latent":
            # 10. Scale and decode the image latents with vq-vae
            # TODO: check self.vqgan.decode(latents).sample.clamp(0, 1)
            latents = self.vqgan.config.scale_factor * latents
            images = self.vqgan.decode(latents)[0].clamp(0, 1)
            if output_type == "np":
                images = images.permute((0, 2, 3, 1)).float().asnumpy()  # cast to float32: bfloat16 -> numpy doesn't work
            elif output_type == "pil":
                images = images.permute((0, 2, 3, 1)).float().asnumpy()  # cast to float32: bfloat16 -> numpy doesn't work
                images = self.numpy_to_pil(images)
        else:
            images = latents

        if not return_dict:
            return images
        return ImagePipelineOutput(images)

mindone.diffusers.StableCascadeDecoderPipeline.__call__(image_embeddings, prompt=None, num_inference_steps=10, guidance_scale=0.0, negative_prompt=None, prompt_embeds=None, prompt_embeds_pooled=None, negative_prompt_embeds=None, negative_prompt_embeds_pooled=None, num_images_per_prompt=1, generator=None, latents=None, output_type='pil', return_dict=False, callback_on_step_end=None, callback_on_step_end_tensor_inputs=['latents'])

Function invoked when calling the pipeline for generation.

PARAMETER DESCRIPTION
image_embeddings

Image Embeddings either extracted from an image or generated by a Prior Model.

TYPE: `ms.Tensor` or `List[ms.Tensor]`

prompt

The prompt or prompts to guide the image generation.

TYPE: `str` or `List[str]` DEFAULT: None

num_inference_steps

The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

TYPE: `int`, *optional*, defaults to 10 DEFAULT: 10

guidance_scale

Guidance scale as defined in Classifier-Free Diffusion Guidance. decoder_guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting decoder_guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

negative_prompt

The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if decoder_guidance_scale is less than 1).

TYPE: `str` or `List[str]`, *optional* DEFAULT: None

prompt_embeds

Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

prompt_embeds_pooled

Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds

Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

negative_prompt_embeds_pooled

Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds_pooled will be generated from negative_prompt input argument.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

num_images_per_prompt

The number of images to generate per prompt.

TYPE: `int`, *optional*, defaults to 1 DEFAULT: 1

generator

One or a list of np.random.Generator(s) to make generation deterministic.

TYPE: `np.random.Generator` or `List[np.random.Generator]`, *optional* DEFAULT: None

latents

Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

TYPE: `ms.Tensor`, *optional* DEFAULT: None

output_type

The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "ms" (ms.Tensor).

TYPE: `str`, *optional*, defaults to `"pil"` DEFAULT: 'pil'

return_dict

Whether or not to return a [~pipelines.ImagePipelineOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

callback_on_step_end

A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.

TYPE: `Callable`, *optional* DEFAULT: None

callback_on_step_end_tensor_inputs

The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

TYPE: `List`, *optional* DEFAULT: ['latents']

RETURNS DESCRIPTION

[~pipelines.ImagePipelineOutput] if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
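
For orientation, a minimal sketch of how these arguments fit together when chaining the two stages. It assumes prior and decoder are already-constructed StableCascadePriorPipeline and StableCascadeDecoderPipeline instances, and that with the default return_dict=False the first element of the prior's output holds the Stage C image embeddings; the prompt, guidance values and file name are illustrative.

prompt = "a watercolor painting of a lighthouse at dusk"

# Stage C (prior): produce the compressed image embeddings from the text prompt
prior_output = prior(prompt=prompt, guidance_scale=4.0, num_images_per_prompt=1)
image_embeddings = prior_output[0]  # assumption: first element holds the image embeddings

# Stage B/A (decoder): turn the embeddings into a full-resolution image;
# guidance_scale=0.0 (the default) keeps classifier-free guidance disabled
images = decoder(
    image_embeddings=image_embeddings,
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
    output_type="pil",
)
images[0].save("lighthouse.png")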

Source code in mindone/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py
def __call__(
    self,
    image_embeddings: Union[ms.Tensor, List[ms.Tensor]],
    prompt: Union[str, List[str]] = None,
    num_inference_steps: int = 10,
    guidance_scale: float = 0.0,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    prompt_embeds: Optional[ms.Tensor] = None,
    prompt_embeds_pooled: Optional[ms.Tensor] = None,
    negative_prompt_embeds: Optional[ms.Tensor] = None,
    negative_prompt_embeds_pooled: Optional[ms.Tensor] = None,
    num_images_per_prompt: int = 1,
    generator: Optional[Union[np.random.Generator, List[np.random.Generator]]] = None,
    latents: Optional[ms.Tensor] = None,
    output_type: Optional[str] = "pil",
    return_dict: bool = False,
    callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    callback_on_step_end_tensor_inputs: List[str] = ["latents"],
):
    """
    Function invoked when calling the pipeline for generation.

    Args:
        image_embeddings (`ms.Tensor` or `List[ms.Tensor]`):
            Image Embeddings either extracted from an image or generated by a Prior Model.
        prompt (`str` or `List[str]`):
            The prompt or prompts to guide the image generation.
        num_inference_steps (`int`, *optional*, defaults to 10):
            The number of denoising steps. More denoising steps usually lead to a higher quality image at the
            expense of slower inference.
        guidance_scale (`float`, *optional*, defaults to 0.0):
            Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
            `decoder_guidance_scale` is defined as `w` of equation 2 of the [Imagen
            Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting
            `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that
            are closely linked to the text `prompt`, usually at the expense of lower image quality.
        negative_prompt (`str` or `List[str]`, *optional*):
            The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
            if `decoder_guidance_scale` is less than `1`).
        prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
            provided, text embeddings will be generated from `prompt` input argument.
        prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
            If not provided, pooled text embeddings will be generated from `prompt` input argument.
        negative_prompt_embeds (`ms.Tensor`, *optional*):
            Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
            argument.
        negative_prompt_embeds_pooled (`ms.Tensor`, *optional*):
            Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
            weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt` input
            argument.
        num_images_per_prompt (`int`, *optional*, defaults to 1):
            The number of images to generate per prompt.
        generator (`np.random.Generator` or `List[np.random.Generator]`, *optional*):
            One or a list of [np.random.Generator(s)](https://numpy.org/doc/stable/reference/random/generator.html)
            to make generation deterministic.
        latents (`ms.Tensor`, *optional*):
            Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
            generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
            tensor will be generated by sampling using the supplied random `generator`.
        output_type (`str`, *optional*, defaults to `"pil"`):
            The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
            (`np.array`) or `"ms"` (`ms.Tensor`).
        return_dict (`bool`, *optional*, defaults to `False`):
            Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
        callback_on_step_end (`Callable`, *optional*):
            A function that is called at the end of each denoising step during inference. The function is called
            with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
            callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
            `callback_on_step_end_tensor_inputs`.
        callback_on_step_end_tensor_inputs (`List`, *optional*):
            The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
            will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
            `._callback_tensor_inputs` attribute of your pipeline class.

    Examples:

    Returns:
        [`~pipelines.ImagePipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a
        tuple, the first element is a list with the generated images.
    """

    # 0. Define commonly used variables
    dtype = next(self.decoder.get_parameters()).dtype
    self._guidance_scale = guidance_scale

    # 1. Check inputs. Raise error if not correct
    self.check_inputs(
        prompt,
        negative_prompt=negative_prompt,
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
    )
    if isinstance(image_embeddings, list):
        image_embeddings = ops.cat(image_embeddings, axis=0)

    if prompt is not None and isinstance(prompt, str):
        batch_size = 1
    elif prompt is not None and isinstance(prompt, list):
        batch_size = len(prompt)
    else:
        batch_size = prompt_embeds.shape[0]

    # Compute the effective number of images per prompt
    # We must account for the fact that the image embeddings from the prior can be generated with num_images_per_prompt > 1
    # This results in a case where a single prompt is associated with multiple image embeddings
    # Divide the number of image embeddings by the batch size to determine if this is the case.
    num_images_per_prompt = num_images_per_prompt * (image_embeddings.shape[0] // batch_size)

    # 2. Encode caption
    if prompt_embeds is None and negative_prompt_embeds is None:
        _, prompt_embeds_pooled, _, negative_prompt_embeds_pooled = self.encode_prompt(
            prompt=prompt,
            batch_size=batch_size,
            num_images_per_prompt=num_images_per_prompt,
            do_classifier_free_guidance=self.do_classifier_free_guidance,
            negative_prompt=negative_prompt,
            prompt_embeds=prompt_embeds,
            prompt_embeds_pooled=prompt_embeds_pooled,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_embeds_pooled=negative_prompt_embeds_pooled,
        )

    # The pooled embeds from the prior are pooled again before being passed to the decoder
    prompt_embeds_pooled = (
        ops.cat([prompt_embeds_pooled, negative_prompt_embeds_pooled])
        if self.do_classifier_free_guidance
        else prompt_embeds_pooled
    )
    effnet = (
        ops.cat([image_embeddings, ops.zeros_like(image_embeddings)])
        if self.do_classifier_free_guidance
        else image_embeddings
    )

    self.scheduler.set_timesteps(num_inference_steps)
    timesteps = self.scheduler.timesteps

    # 5. Prepare latents
    latents = self.prepare_latents(
        batch_size, image_embeddings, num_images_per_prompt, dtype, generator, latents, self.scheduler
    )

    # 6. Run denoising loop
    self._num_timesteps = len(timesteps[:-1])
    for i, t in enumerate(self.progress_bar(timesteps[:-1])):
        timestep_ratio = t.broadcast_to((latents.shape[0],)).to(dtype)

        # 7. Denoise latents
        predicted_latents = self.decoder(
            sample=ops.cat([latents] * 2) if self.do_classifier_free_guidance else latents,
            timestep_ratio=ops.cat([timestep_ratio] * 2) if self.do_classifier_free_guidance else timestep_ratio,
            clip_text_pooled=prompt_embeds_pooled,
            effnet=effnet,
            return_dict=False,
        )[0]

        # 8. Check for classifier free guidance and apply it
        if self.do_classifier_free_guidance:
            predicted_latents_text, predicted_latents_uncond = predicted_latents.chunk(2)
            predicted_latents = ops.lerp(
                predicted_latents_uncond,
                predicted_latents_text,
                ms.tensor(self.guidance_scale, dtype=predicted_latents_text.dtype),
            )

        # 9. Renoise latents to next timestep
        # TODO: check prev_sample
        latents = self.scheduler.step(
            model_output=predicted_latents,
            timestep=timestep_ratio,
            sample=latents,
            generator=generator,
        )[0]

        if callback_on_step_end is not None:
            callback_kwargs = {}
            for k in callback_on_step_end_tensor_inputs:
                callback_kwargs[k] = locals()[k]
            callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)

            latents = callback_outputs.pop("latents", latents)
            prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
            negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)

    if output_type not in ["ms", "np", "pil", "latent"]:
        raise ValueError(
            f"Only the output types `ms`, `np`, `pil` and `latent` are supported not output_type={output_type}"
        )

    if not output_type == "latent":
        # 10. Scale and decode the image latents with vq-vae
        # TODO: check self.vqgan.decode(latents).sample.clamp(0, 1)
        latents = self.vqgan.config.scale_factor * latents
        images = self.vqgan.decode(latents)[0].clamp(0, 1)
        if output_type == "np":
            images = images.permute((0, 2, 3, 1)).float().asnumpy()  # cast to float32: bfloat16 -> numpy doesn't work
        elif output_type == "pil":
            images = images.permute((0, 2, 3, 1)).float().asnumpy()  # cast to float32: bfloat16 -> numpy doesn't work
            images = self.numpy_to_pil(images)
    else:
        images = latents

    if not return_dict:
        return images
    return ImagePipelineOutput(images)
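
Note that the callback hook in the loop above must return a dict: the tensors named in callback_on_step_end_tensor_inputs are passed in through callback_kwargs, and the values found in the returned dict are fed back into the denoising loop. A small sketch, reusing the hypothetical decoder, image_embeddings and prompt from the example above, of a callback that only logs latent statistics:

def log_latents(pipe, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in callback_on_step_end_tensor_inputs
    latents = callback_kwargs["latents"]
    print(f"step {step}: latents mean={latents.mean()}, std={latents.std()}")
    # returning the (possibly modified) dict feeds the tensors back into the loop
    return callback_kwargs

images = decoder(
    image_embeddings=image_embeddings,
    prompt=prompt,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
)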