Unconditional image generation

Unconditional image generation produces images that resemble random samples from the data the model was trained on, because the denoising process is not guided by any additional context such as text or an image.

To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image.

from mindone.diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
image = generator().images[0]
image
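Diffusion sampling is stochastic, so each call produces a different image. Diffusers-style pipelines typically accept a `generator` argument to make sampling reproducible; the sketch below assumes mindone.diffusers follows the same convention and takes a seeded NumPy generator (the pipeline call itself is shown as a comment because it requires downloading the checkpoint):

```python
import numpy as np

# Two generators created from the same seed produce identical random
# streams, which is what makes a seeded pipeline call reproducible.
seed = 42
g1 = np.random.default_rng(seed)
g2 = np.random.default_rng(seed)

# identical seeds -> identical draws
assert g1.integers(0, 2**31) == g2.integers(0, 2**31)

# hypothetical usage, assuming a diffusers-style `generator` parameter:
# image = generator_pipeline(generator=np.random.default_rng(seed)).images[0]
```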

Tip

Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images.

The output image is a PIL.Image object that can be saved:

image.save("generated_image.png")
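Because the result is a standard PIL image, all of PIL's methods apply. A small sketch of the save/reload round trip, using a solid-color placeholder in place of a generated image so it runs without the model:

```python
from PIL import Image

# placeholder standing in for a generated 128x128 butterfly image
image = Image.new("RGB", (128, 128), color=(90, 120, 200))

# save as PNG, then reload to confirm the round trip
image.save("generated_image.png")
reloaded = Image.open("generated_image.png")
assert reloaded.size == (128, 128)
assert reloaded.mode == "RGB"
```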

You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce a higher quality image, but generation takes longer. Feel free to play around with this parameter to see how it affects the image quality.

image = generator(num_inference_steps=100).images[0]
image
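The quality/latency trade-off can be seen even in a toy setting. The sketch below is not the actual DDPM sampler, just a plain exponential-shrink "denoiser"; it shows that running more steps leaves less residual noise, at the cost of proportionally more work:

```python
import numpy as np

rng = np.random.default_rng(0)
noisy = rng.normal(size=1_000)  # pure noise around a clean signal of zeros

def toy_denoise(x, steps):
    # each step shrinks the remaining noise by a fixed factor
    for _ in range(steps):
        x = 0.9 * x
    return x

residual_10 = np.abs(toy_denoise(noisy, 10)).mean()
residual_100 = np.abs(toy_denoise(noisy, 100)).mean()

# more steps -> smaller residual noise, but 10x the iterations
assert residual_100 < residual_10
```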