EDMEulerScheduler

The Karras formulation of the Euler scheduler (Algorithm 2) from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson.
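
A minimal standalone sketch of constructing the scheduler and inspecting the noise schedule it builds (no pretrained pipeline is assumed; the 30 steps match the range above):

from mindone.diffusers import EDMEulerScheduler

# Construct with the documented defaults (Karras sigma schedule, sigma_min=0.002, sigma_max=80.0).
scheduler = EDMEulerScheduler()

# Build a 30-step inference schedule and inspect it.
scheduler.set_timesteps(30)
print(scheduler.timesteps.shape)   # (30,) preconditioned noise levels, 0.25 * log(sigma)
print(scheduler.sigmas.shape)      # (31,) sigmas from sigma_max down to an appended 0.0
print(scheduler.init_noise_sigma)  # (sigma_max**2 + 1) ** 0.5, used to scale the initial noise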

mindone.diffusers.EDMEulerScheduler

Bases: SchedulerMixin, ConfigMixin

Implements the Euler scheduler in EDM formulation as presented in Karras et al. 2022 [1].

[1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models." https://arxiv.org/abs/2206.00364

This model inherits from [SchedulerMixin] and [ConfigMixin]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.

PARAMETER DESCRIPTION
sigma_min

Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable range is [0, 10].

TYPE: `float`, *optional*, defaults to 0.002 DEFAULT: 0.002

sigma_max

Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable range is [0.2, 80.0].

TYPE: `float`, *optional*, defaults to 80.0 DEFAULT: 80.0

sigma_data

The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1].

TYPE: `float`, *optional*, defaults to 0.5 DEFAULT: 0.5

sigma_schedule

Sigma schedule to compute the sigmas. By default, we use the schedule introduced in the EDM paper (https://arxiv.org/abs/2206.00364). The other accepted value is "exponential". The exponential schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.

TYPE: `str`, *optional*, defaults to `karras` DEFAULT: 'karras'

num_train_timesteps

The number of diffusion steps to train the model.

TYPE: `int`, defaults to 1000 DEFAULT: 1000

prediction_type

Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the Imagen Video paper).

TYPE: `str`, defaults to `epsilon`, *optional* DEFAULT: 'epsilon'

rho

The rho parameter used for calculating the Karras sigma schedule, which is set to 7.0 in the EDM paper [1].

TYPE: `float`, *optional*, defaults to 7.0 DEFAULT: 7.0
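
The two accepted sigma schedules can be selected at construction time; a short sketch using only the arguments documented above:

from mindone.diffusers import EDMEulerScheduler

# Default Karras schedule: sigmas interpolated in sigma**(1/rho) space between sigma_max and sigma_min.
karras_scheduler = EDMEulerScheduler(sigma_min=0.002, sigma_max=80.0, sigma_data=0.5, rho=7.0)

# Exponential schedule (used by stabilityai/cosxl): sigmas spaced uniformly in log(sigma).
exponential_scheduler = EDMEulerScheduler(sigma_schedule="exponential")

# Any other value for sigma_schedule raises a ValueError in __init__.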

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
class EDMEulerScheduler(SchedulerMixin, ConfigMixin):
    """
    Implements the Euler scheduler in EDM formulation as presented in Karras et al. 2022 [1].

    [1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models."
    https://arxiv.org/abs/2206.00364

    This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
    methods the library implements for all schedulers such as loading and saving.

    Args:
        sigma_min (`float`, *optional*, defaults to 0.002):
            Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable
            range is [0, 10].
        sigma_max (`float`, *optional*, defaults to 80.0):
            Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable
            range is [0.2, 80.0].
        sigma_data (`float`, *optional*, defaults to 0.5):
            The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1].
        sigma_schedule (`str`, *optional*, defaults to `karras`):
            Sigma schedule to compute the `sigmas`. By default, we use the schedule introduced in the EDM paper
            (https://arxiv.org/abs/2206.00364). The other accepted value is "exponential". The exponential schedule
            was incorporated in this model: https://huggingface.co/stabilityai/cosxl.
        num_train_timesteps (`int`, defaults to 1000):
            The number of diffusion steps to train the model.
        prediction_type (`str`, defaults to `epsilon`, *optional*):
            Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
            `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
            Video](https://imagen.research.google/video/paper.pdf) paper).
        rho (`float`, *optional*, defaults to 7.0):
            The rho parameter used for calculating the Karras sigma schedule, which is set to 7.0 in the EDM paper [1].
    """

    _compatibles = []
    order = 1

    @register_to_config
    def __init__(
        self,
        sigma_min: float = 0.002,
        sigma_max: float = 80.0,
        sigma_data: float = 0.5,
        sigma_schedule: str = "karras",
        num_train_timesteps: int = 1000,
        prediction_type: str = "epsilon",
        rho: float = 7.0,
    ):
        if sigma_schedule not in ["karras", "exponential"]:
            raise ValueError(f"Wrong value for provided for `{sigma_schedule=}`.`")

        # setable values
        self.num_inference_steps = None

        ramp = ms.tensor(np.linspace(0, 1, num_train_timesteps), ms.float32)
        if sigma_schedule == "karras":
            sigmas = self._compute_karras_sigmas(ramp)
        elif sigma_schedule == "exponential":
            sigmas = self._compute_exponential_sigmas(ramp)

        self.timesteps = self.precondition_noise(sigmas)

        self.sigmas = ops.cat([sigmas, ops.zeros(1, dtype=sigmas.dtype)])

        self.is_scale_input_called = False

        self._step_index = None
        self._begin_index = None
        self.sigma_data = self.config.sigma_data

    @property
    def init_noise_sigma(self):
        # standard deviation of the initial noise distribution
        return (self.config.sigma_max**2 + 1) ** 0.5

    @property
    def step_index(self):
        """
        The index counter for current timestep. It will increase 1 after each scheduler step.
        """
        return self._step_index

    @property
    def begin_index(self):
        """
        The index for the first timestep. It should be set from pipeline with `set_begin_index` method.
        """
        return self._begin_index

    # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_begin_index
    def set_begin_index(self, begin_index: int = 0):
        """
        Sets the begin index for the scheduler. This function should be run from pipeline before the inference.

        Args:
            begin_index (`int`):
                The begin index for the scheduler.
        """
        self._begin_index = begin_index

    def precondition_inputs(self, sample, sigma):
        c_in = 1 / ((sigma**2 + self.sigma_data**2) ** 0.5)
        scaled_sample = sample * c_in
        return scaled_sample

    def precondition_noise(self, sigma):
        if not isinstance(sigma, ms.Tensor):
            sigma = ms.tensor(sigma)

        c_noise = 0.25 * ops.log(sigma)

        return c_noise

    def precondition_outputs(self, sample, model_output, sigma):
        sigma_data = self.sigma_data
        c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)

        if self.config.prediction_type == "epsilon":
            c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
        elif self.config.prediction_type == "v_prediction":
            c_out = -sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
        else:
            raise ValueError(f"Prediction type {self.config.prediction_type} is not supported.")

        denoised = (c_skip * sample + c_out * model_output).to(sample.dtype)

        return denoised

    def scale_model_input(self, sample: ms.Tensor, timestep: Union[float, ms.Tensor]) -> ms.Tensor:
        """
        Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
        current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.

        Args:
            sample (`ms.Tensor`):
                The input sample.
            timestep (`int`, *optional*):
                The current timestep in the diffusion chain.

        Returns:
            `ms.Tensor`:
                A scaled input sample.
        """
        if self.step_index is None:
            self._init_step_index(timestep)

        sigma = self.sigmas[self.step_index]
        sample = self.precondition_inputs(sample, sigma).to(sample.dtype)

        self.is_scale_input_called = True
        return sample

    def set_timesteps(self, num_inference_steps: int):
        """
        Sets the discrete timesteps used for the diffusion chain (to be run before inference).

        Args:
            num_inference_steps (`int`):
                The number of diffusion steps used when generating samples with a pre-trained model.
        """
        self.num_inference_steps = num_inference_steps

        ramp = ms.tensor(np.linspace(0, 1, self.num_inference_steps))
        if self.config.sigma_schedule == "karras":
            sigmas = self._compute_karras_sigmas(ramp)
        elif self.config.sigma_schedule == "exponential":
            sigmas = self._compute_exponential_sigmas(ramp)

        sigmas = sigmas.to(ms.float32)
        self.timesteps = self.precondition_noise(sigmas)

        self.sigmas = ops.cat([sigmas, ops.zeros(1)])
        self._step_index = None
        self._begin_index = None

    # Taken from https://github.com/crowsonkb/k-diffusion/blob/686dbad0f39640ea25c8a8c6a6e56bb40eacefa2/k_diffusion/sampling.py#L17
    def _compute_karras_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> ms.Tensor:
        """Constructs the noise schedule of Karras et al. (2022)."""
        sigma_min = sigma_min or self.config.sigma_min
        sigma_max = sigma_max or self.config.sigma_max

        rho = self.config.rho
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

        return sigmas

    def _compute_exponential_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> ms.Tensor:
        """Implementation closely follows k-diffusion.

        https://github.com/crowsonkb/k-diffusion/blob/6ab5146d4a5ef63901326489f31f1d8e7dd36b48/k_diffusion/sampling.py#L26
        """
        sigma_min = sigma_min or self.config.sigma_min
        sigma_max = sigma_max or self.config.sigma_max
        sigmas = ms.tensor(np.exp(np.linspace(math.log(sigma_min), math.log(sigma_max), len(ramp)))[::-1].copy())
        return sigmas

    # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.index_for_timestep
    def index_for_timestep(self, timestep, schedule_timesteps=None):
        if schedule_timesteps is None:
            schedule_timesteps = self.timesteps

        if (schedule_timesteps == timestep).sum() > 1:
            pos = 1
        else:
            pos = 0

        # The sigma index that is taken for the **very** first `step`
        # is always the second index (or the last index if there is only 1)
        # This way we can ensure we don't accidentally skip a sigma in
        # case we start in the middle of the denoising schedule (e.g. for image-to-image)
        indices = (schedule_timesteps == timestep).nonzero()

        return int(indices[pos])

    # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._init_step_index
    def _init_step_index(self, timestep):
        if self.begin_index is None:
            self._step_index = self.index_for_timestep(timestep)
        else:
            self._step_index = self._begin_index

    def step(
        self,
        model_output: ms.Tensor,
        timestep: Union[float, ms.Tensor],
        sample: ms.Tensor,
        s_churn: float = 0.0,
        s_tmin: float = 0.0,
        s_tmax: float = float("inf"),
        s_noise: float = 1.0,
        generator: Optional[np.random.Generator] = None,
        return_dict: bool = False,
    ) -> Union[EDMEulerSchedulerOutput, Tuple]:
        """
        Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
        process from the learned model outputs (most often the predicted noise).

        Args:
            model_output (`ms.Tensor`):
                The direct output from learned diffusion model.
            timestep (`float`):
                The current discrete timestep in the diffusion chain.
            sample (`ms.Tensor`):
                A current instance of a sample created by the diffusion process.
            s_churn (`float`):
            s_tmin  (`float`):
            s_tmax  (`float`):
            s_noise (`float`, defaults to 1.0):
                Scaling factor for noise added to the sample.
            generator (`np.random.Generator`, *optional*):
                A random number generator.
            return_dict (`bool`):
                Whether or not to return a [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or tuple.

        Returns:
            [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or `tuple`:
                If return_dict is `True`, [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] is
                returned, otherwise a tuple is returned where the first element is the sample tensor.
        """

        if isinstance(timestep, int) or (isinstance(timestep, ms.Tensor) and timestep.dtype in [ms.int32, ms.int64]):
            raise ValueError(
                (
                    "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to"
                    " `EDMEulerScheduler.step()` is not supported. Make sure to pass"
                    " one of the `scheduler.timesteps` as a timestep."
                ),
            )

        if not self.is_scale_input_called:
            logger.warning(
                "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
                "See `StableDiffusionPipeline` for a usage example."
            )

        if self.step_index is None:
            self._init_step_index(timestep)

        # Upcast to avoid precision issues when computing prev_sample
        sample = sample.to(ms.float32)

        sigma = self.sigmas[self.step_index]

        gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0

        noise = randn_tensor(model_output.shape, dtype=model_output.dtype, generator=generator)

        eps = noise * s_noise
        sigma_hat = sigma * (gamma + 1)

        if gamma > 0:
            sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5

        # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
        pred_original_sample = self.precondition_outputs(sample, model_output, sigma_hat)

        # 2. Convert to an ODE derivative
        derivative = (sample - pred_original_sample) / sigma_hat

        dt = self.sigmas[self.step_index + 1] - sigma_hat

        prev_sample = sample + derivative * dt

        # Cast sample back to model compatible dtype
        prev_sample = prev_sample.to(model_output.dtype)

        # upon completion increase step index by one
        self._step_index += 1

        if not return_dict:
            return (prev_sample,)

        return EDMEulerSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)

    # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise
    def add_noise(
        self,
        original_samples: ms.Tensor,
        noise: ms.Tensor,
        timesteps: ms.Tensor,
    ) -> ms.Tensor:
        broadcast_shape = original_samples.shape
        # Make sure sigmas and timesteps have the same device and dtype as original_samples
        sigmas = self.sigmas.to(dtype=original_samples.dtype)
        schedule_timesteps = self.timesteps

        # self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index
        if self.begin_index is None:
            step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timesteps]
        elif self.step_index is not None:
            # add_noise is called after first denoising step (for inpainting)
            step_indices = [self.step_index] * timesteps.shape[0]
        else:
            # add noise is called before first denoising step to create initial latent(img2img)
            step_indices = [self.begin_index] * timesteps.shape[0]

        sigma = sigmas[step_indices].flatten()
        # while len(sigma.shape) < len(original_samples.shape):
        #     sigma = sigma.unsqueeze(-1)
        sigma = ops.reshape(sigma, (timesteps.shape[0],) + (1,) * (len(broadcast_shape) - 1))

        noisy_samples = original_samples + noise * sigma
        return noisy_samples

    def __len__(self):
        return self.config.num_train_timesteps

mindone.diffusers.EDMEulerScheduler.begin_index property

The index for the first timestep. It should be set from pipeline with set_begin_index method.

mindone.diffusers.EDMEulerScheduler.step_index property

The index counter for current timestep. It will increase 1 after each scheduler step.

mindone.diffusers.EDMEulerScheduler.scale_model_input(sample, timestep)

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm.

PARAMETER DESCRIPTION
sample

The input sample.

TYPE: `ms.Tensor`

timestep

The current timestep in the diffusion chain.

TYPE: `int`, *optional*

RETURNS DESCRIPTION
Tensor

ms.Tensor: A scaled input sample.
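
A sketch of where this call sits in a denoising loop; unet and latents are placeholders for the denoising model and its latent input, and are not part of this scheduler:

for t in scheduler.timesteps:
    # Apply the EDM input preconditioning (see precondition_inputs) before the model forward pass.
    model_input = scheduler.scale_model_input(latents, t)
    noise_pred = unet(model_input, t)                    # placeholder model call
    latents = scheduler.step(noise_pred, t, latents)[0]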

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
def scale_model_input(self, sample: ms.Tensor, timestep: Union[float, ms.Tensor]) -> ms.Tensor:
    """
    Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
    current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.

    Args:
        sample (`ms.Tensor`):
            The input sample.
        timestep (`int`, *optional*):
            The current timestep in the diffusion chain.

    Returns:
        `ms.Tensor`:
            A scaled input sample.
    """
    if self.step_index is None:
        self._init_step_index(timestep)

    sigma = self.sigmas[self.step_index]
    sample = self.precondition_inputs(sample, sigma).to(sample.dtype)

    self.is_scale_input_called = True
    return sample

mindone.diffusers.EDMEulerScheduler.set_begin_index(begin_index=0)

Sets the begin index for the scheduler. This function should be run from pipeline before the inference.

PARAMETER DESCRIPTION
begin_index

The begin index for the scheduler.

TYPE: `int` DEFAULT: 0
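
A hedged sketch of an image-to-image style use, where denoising starts partway through the schedule; start_index is a hypothetical offset chosen by the pipeline:

scheduler.set_timesteps(30)
start_index = 10                        # hypothetical: skip the first 10 (noisiest) steps
scheduler.set_begin_index(start_index)

# add_noise and step will index the sigma schedule from begin_index onward.
for t in scheduler.timesteps[start_index:]:
    ...  # scale_model_input / model call / step, as in the other examples on this page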

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
def set_begin_index(self, begin_index: int = 0):
    """
    Sets the begin index for the scheduler. This function should be run from pipeline before the inference.

    Args:
        begin_index (`int`):
            The begin index for the scheduler.
    """
    self._begin_index = begin_index

mindone.diffusers.EDMEulerScheduler.set_timesteps(num_inference_steps)

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

PARAMETER DESCRIPTION
num_inference_steps

The number of diffusion steps used when generating samples with a pre-trained model.

TYPE: `int`
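
A small sketch of what this call produces; the assertions restate the behavior of the code below:

from mindone.diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()
scheduler.set_timesteps(25)

assert scheduler.timesteps.shape[0] == 25   # one preconditioned noise level per inference step
assert scheduler.sigmas.shape[0] == 26      # a final sigma of 0.0 is appended
assert scheduler.step_index is None         # index counters are reset until the first step() call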

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
def set_timesteps(self, num_inference_steps: int):
    """
    Sets the discrete timesteps used for the diffusion chain (to be run before inference).

    Args:
        num_inference_steps (`int`):
            The number of diffusion steps used when generating samples with a pre-trained model.
    """
    self.num_inference_steps = num_inference_steps

    ramp = ms.tensor(np.linspace(0, 1, self.num_inference_steps))
    if self.config.sigma_schedule == "karras":
        sigmas = self._compute_karras_sigmas(ramp)
    elif self.config.sigma_schedule == "exponential":
        sigmas = self._compute_exponential_sigmas(ramp)

    sigmas = sigmas.to(ms.float32)
    self.timesteps = self.precondition_noise(sigmas)

    self.sigmas = ops.cat([sigmas, ops.zeros(1)])
    self._step_index = None
    self._begin_index = None

mindone.diffusers.EDMEulerScheduler.step(model_output, timestep, sample, s_churn=0.0, s_tmin=0.0, s_tmax=float('inf'), s_noise=1.0, generator=None, return_dict=False)

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).

PARAMETER DESCRIPTION
model_output

The direct output from learned diffusion model.

TYPE: `ms.Tensor`

timestep

The current discrete timestep in the diffusion chain.

TYPE: `float`

sample

A current instance of a sample created by the diffusion process.

TYPE: `ms.Tensor`

s_churn

Amount of stochastic noise ("churn") to add at this step; with the default of 0.0 the update is a deterministic Euler step.

TYPE: `float` DEFAULT: 0.0

s_tmin

Lower bound of the sigma range in which churn is applied.

TYPE: `float` DEFAULT: 0.0

s_tmax

Upper bound of the sigma range in which churn is applied.

TYPE: `float` DEFAULT: float('inf')

s_noise

Scaling factor for noise added to the sample.

TYPE: `float`, defaults to 1.0 DEFAULT: 1.0

generator

A random number generator.

TYPE: `np.random.Generator`, *optional* DEFAULT: None

return_dict

Whether or not to return a [~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput] or tuple.

TYPE: `bool` DEFAULT: False

RETURNS DESCRIPTION
Union[EDMEulerSchedulerOutput, Tuple]

[~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput] or tuple: If return_dict is True, [~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput] is returned, otherwise a tuple is returned where the first element is the sample tensor.
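
Putting the pieces together, a minimal end-to-end denoising loop sketch; the latent shape and the unet call are placeholders, not part of this class:

import numpy as np
import mindspore as ms

from mindone.diffusers import EDMEulerScheduler

scheduler = EDMEulerScheduler()
scheduler.set_timesteps(30)

# Start from Gaussian noise scaled by the initial noise sigma.
latents = ms.tensor(np.random.randn(1, 4, 64, 64), ms.float32) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)
    noise_pred = unet(model_input, t)                    # placeholder denoising model
    latents = scheduler.step(noise_pred, t, latents)[0]  # return_dict=False -> (prev_sample,)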

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
def step(
    self,
    model_output: ms.Tensor,
    timestep: Union[float, ms.Tensor],
    sample: ms.Tensor,
    s_churn: float = 0.0,
    s_tmin: float = 0.0,
    s_tmax: float = float("inf"),
    s_noise: float = 1.0,
    generator: Optional[np.random.Generator] = None,
    return_dict: bool = False,
) -> Union[EDMEulerSchedulerOutput, Tuple]:
    """
    Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
    process from the learned model outputs (most often the predicted noise).

    Args:
        model_output (`ms.Tensor`):
            The direct output from learned diffusion model.
        timestep (`float`):
            The current discrete timestep in the diffusion chain.
        sample (`ms.Tensor`):
            A current instance of a sample created by the diffusion process.
        s_churn (`float`):
        s_tmin  (`float`):
        s_tmax  (`float`):
        s_noise (`float`, defaults to 1.0):
            Scaling factor for noise added to the sample.
        generator (`np.random.Generator`, *optional*):
            A random number generator.
        return_dict (`bool`):
            Whether or not to return a [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or tuple.

    Returns:
        [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] or `tuple`:
            If return_dict is `True`, [`~schedulers.scheduling_euler_discrete.EDMEulerSchedulerOutput`] is
            returned, otherwise a tuple is returned where the first element is the sample tensor.
    """

    if isinstance(timestep, int) or (isinstance(timestep, ms.Tensor) and timestep.dtype in [ms.int32, ms.int64]):
        raise ValueError(
            (
                "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to"
                " `EDMEulerScheduler.step()` is not supported. Make sure to pass"
                " one of the `scheduler.timesteps` as a timestep."
            ),
        )

    if not self.is_scale_input_called:
        logger.warning(
            "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
            "See `StableDiffusionPipeline` for a usage example."
        )

    if self.step_index is None:
        self._init_step_index(timestep)

    # Upcast to avoid precision issues when computing prev_sample
    sample = sample.to(ms.float32)

    sigma = self.sigmas[self.step_index]

    gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0

    noise = randn_tensor(model_output.shape, dtype=model_output.dtype, generator=generator)

    eps = noise * s_noise
    sigma_hat = sigma * (gamma + 1)

    if gamma > 0:
        sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5

    # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
    pred_original_sample = self.precondition_outputs(sample, model_output, sigma_hat)

    # 2. Convert to an ODE derivative
    derivative = (sample - pred_original_sample) / sigma_hat

    dt = self.sigmas[self.step_index + 1] - sigma_hat

    prev_sample = sample + derivative * dt

    # Cast sample back to model compatible dtype
    prev_sample = prev_sample.to(model_output.dtype)

    # upon completion increase step index by one
    self._step_index += 1

    if not return_dict:
        return (prev_sample,)

    return EDMEulerSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)

mindone.diffusers.schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput dataclass

Bases: BaseOutput

Output class for the scheduler's step function output.

PARAMETER DESCRIPTION
prev_sample

Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the denoising loop.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)` for images

pred_original_sample

The predicted denoised sample (x_{0}) based on the model output from the current timestep. pred_original_sample can be used to preview progress or for guidance.

TYPE: `ms.Tensor` of shape `(batch_size, num_channels, height, width)` for images DEFAULT: None
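
With return_dict=True, step returns this dataclass, so the intermediate x_0 prediction can be inspected; the variables follow the step example above:

out = scheduler.step(noise_pred, t, latents, return_dict=True)
latents = out.prev_sample                 # next model input in the denoising loop
x0_preview = out.pred_original_sample     # predicted denoised sample at this step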

Source code in mindone/diffusers/schedulers/scheduling_edm_euler.py
@dataclass
# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete
class EDMEulerSchedulerOutput(BaseOutput):
    """
    Output class for the scheduler's `step` function output.

    Args:
        prev_sample (`ms.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
            Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
            denoising loop.
        pred_original_sample (`ms.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
            The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
            `pred_original_sample` can be used to preview progress or for guidance.
    """

    prev_sample: ms.Tensor
    pred_original_sample: Optional[ms.Tensor] = None