Speed up inference

There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, like Flash Attention, that reduce memory usage, which indirectly speeds up inference as well. Different speed optimizations can be stacked together to get the fastest inference times.

Tip

Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the Reduce memory usage guide.

The inference times below are obtained from generating a single 512x512 image from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps on an Ascend 910B in Graph mode.

setup      latency    speed-up
baseline   5.64s      x1
fp16       4.03s      x1.40
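
These timings depend on hardware and setup. If you want to reproduce them, a simple wall-clock loop is enough. The sketch below is one possible way to measure latency, not the exact benchmarking script behind the table; it assumes the scheduler can be swapped for DDIM with from_config, as in Diffusers.

import time

from mindone.diffusers import DDIMScheduler, DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    use_safetensors=True,  # add mindspore_dtype=ms.float16 here to reproduce the fp16 row
)
# the table uses 50 DDIM steps, so swap in a DDIM scheduler
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
prompt = "a photo of an astronaut riding a horse on mars"

# warmup run so graph compilation is not counted in the measurement
pipe(prompt, num_inference_steps=50)

start = time.perf_counter()
pipe(prompt, num_inference_steps=50)
print(f"latency: {time.perf_counter() - start:.2f}s")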

Half-precision weights

To save Ascend memory and get a speedup, set mindspore_dtype=ms.float16 to load and run the model weights directly in half precision.

import mindspore as ms
from mindone.diffusers import DiffusionPipeline

# load and run the weights directly in half precision
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    mindspore_dtype=ms.float16,
    use_safetensors=True,
)
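
With the pipeline loaded in half precision, generation works the same as in full precision. The indexing below follows the distilled example later in this guide, where the first element of the pipeline output is the list of generated images.

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, num_inference_steps=50)[0][0]
image.save("astronaut.png")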

Distilled model

You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet's residual and attention blocks are shed to reduce the model size by 51% and improve latency by 43%. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model.

Tip

Read the Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model.

The inference times below are obtained from generating 4 images from the prompt "a photo of an astronaut riding a horse on mars" with 25 PNDM steps on an Ascend 910B. Each generation is repeated 3 times with the distilled Stable Diffusion v1.4 model by Nota AI.

setup                          latency    speed-up
baseline                       5.89s      x1
distilled                      3.82s      x1.54
distilled + tiny autoencoder   3.77s      x1.56

Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model.

from mindone.diffusers import StableDiffusionPipeline
from mindone.diffusers.utils import make_image_grid
import mindspore as ms
import numpy as np

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", mindspore_dtype=ms.float16, use_safetensors=True,
)
prompt = "a golden vase with different flowers"
# one seeded numpy generator per image for reproducible outputs
generator = [np.random.Generator(np.random.PCG64(i)) for i in range(4)]
images = distilled(
    prompt,
    num_inference_steps=25,
    generator=generator,
    num_images_per_prompt=4,
)[0]
make_image_grid(images, rows=2, cols=2)
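
To produce the matching grid from the original model for the side-by-side comparison below, you can load the full stable-diffusion-v1-5 checkpoint (the same one used earlier in this guide) and reuse the seeded generators. This is a sketch, not the exact script behind the comparison images.

original = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    mindspore_dtype=ms.float16,
    use_safetensors=True,
)
generator = [np.random.Generator(np.random.PCG64(i)) for i in range(4)]
images = original(
    prompt,
    num_inference_steps=25,
    generator=generator,
    num_images_per_prompt=4,
)[0]
make_image_grid(images, rows=2, cols=2)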
original Stable Diffusion
distilled Stable Diffusion

Tiny AutoEncoder

To speed up inference even more, replace the autoencoder with a distilled version of it.

import mindspore as ms
from mindone.diffusers import AutoencoderTiny, StableDiffusionPipeline
from mindone.diffusers.utils import make_image_grid
import numpy as np

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", mindspore_dtype=ms.float16, use_safetensors=True,
)
# swap in the distilled Tiny AutoEncoder for faster decoding
distilled.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", mindspore_dtype=ms.float16, use_safetensors=True,
)

prompt = "a golden vase with different flowers"
generator = [np.random.Generator(np.random.PCG64(i)) for i in range(4)]
images = distilled(
    prompt,
    num_inference_steps=25,
    generator=generator,
    num_images_per_prompt=4,
)[0]
make_image_grid(images, rows=2, cols=2)
distilled Stable Diffusion + Tiny AutoEncoder