DDPM¶
Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain, and Pieter Abbeel proposes a diffusion-based model of the same name. In the 🤗 Diffusers library, DDPM refers to both the discrete denoising scheduler from the paper and the pipeline.
The abstract from the paper is:
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
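The forward (noising) process the paper defines admits a closed form: given a clean image x₀, the noisy sample at step t can be drawn directly as x_t = √ᾱ_t·x₀ + √(1−ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1−β_t). A minimal NumPy sketch of this closed-form forward process, using the paper's linear β schedule (1e-4 to 0.02); the toy image shape is illustrative:

```python
import numpy as np

# Linear beta schedule from the DDPM paper: 1e-4 -> 0.02 over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)  # cumulative product, i.e. alpha-bar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 32, 32))  # a toy "image"
xt = q_sample(x0, t=999, rng=rng)      # near t = T, x_t is almost pure noise
```

Because ᾱ_t shrinks toward zero as t grows, the signal term vanishes and x_T is nearly standard Gaussian noise, which is what the sampling pipeline starts from.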
The original codebase can be found at hojonathanho/diffusion.
Tip
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
mindone.diffusers.DDPMPipeline
¶
Bases: DiffusionPipeline
Pipeline for image generation.
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
PARAMETER | DESCRIPTION |
---|---|
`unet` | A `UNet2DModel` to denoise the encoded image latents. TYPE: `UNet2DModel` |
`scheduler` | A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of `DDPMScheduler` or `DDIMScheduler`. TYPE: `SchedulerMixin` |
Source code in mindone/diffusers/pipelines/ddpm/pipeline_ddpm.py
mindone.diffusers.DDPMPipeline.__call__(batch_size=1, generator=None, num_inference_steps=1000, output_type='pil', return_dict=False)
¶
The call function to the pipeline for generation.
PARAMETER | DESCRIPTION |
---|---|
`batch_size` | The number of images to generate. TYPE: `int`, defaults to `1` |
`generator` | A `np.random.Generator` to make generation deterministic. TYPE: `np.random.Generator`, optional |
`num_inference_steps` | The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. TYPE: `int`, defaults to `1000` |
`output_type` | The output format of the generated image. Choose between `"pil"` (`PIL.Image.Image`) and `"np"` (`np.ndarray`). TYPE: `str`, defaults to `"pil"` |
`return_dict` | Whether or not to return an [`ImagePipelineOutput`] instead of a plain tuple. TYPE: `bool`, defaults to `False` |
>>> from mindone.diffusers import DDPMPipeline
>>> # load model and scheduler
>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
>>> # run pipeline in inference (sample random noise and denoise)
>>> image = pipe()[0][0]
>>> # save image
>>> image.save("ddpm_generated_image.png")
RETURNS | DESCRIPTION |
---|---|
`Union[ImagePipelineOutput, Tuple]` | If `return_dict` is `True`, an [`ImagePipelineOutput`] is returned; otherwise a `tuple` is returned where the first element is a list with the generated images. |
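Each of the `num_inference_steps` iterations applies the DDPM reverse update (Algorithm 2 in the paper): the model's predicted noise is subtracted from x_t, the result is rescaled, and fresh Gaussian noise is added except at the final step. A rough NumPy sketch of one such reverse step, assuming the paper's linear β schedule and σ_t² = β_t; this is illustrative, not the library's scheduler code, and `eps_pred` stands in for the UNet's noise prediction:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

def ddpm_step(x_t, eps_pred, t, rng):
    """One reverse step x_t -> x_{t-1} (Algorithm 2, with sigma_t^2 = beta_t)."""
    coef = betas[t] / np.sqrt(1.0 - alphas_cumprod[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean  # no noise is added at the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x_t = rng.standard_normal((3, 32, 32))           # stand-in for a partially denoised sample
x_prev = ddpm_step(x_t, np.zeros_like(x_t), 500, rng)
```

The pipeline's `__call__` simply runs this loop from t = T−1 down to 0, starting from pure Gaussian noise, with the UNet supplying `eps_pred` at every step.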
Source code in mindone/diffusers/pipelines/ddpm/pipeline_ddpm.py
mindone.diffusers.pipelines.pipeline_utils.ImagePipelineOutput
dataclass
¶
Bases: BaseOutput
Output class for image pipelines, whose `images` attribute holds the generated images as a list of `PIL.Image.Image` objects or a `np.ndarray`.
Source code in mindone/diffusers/pipelines/pipeline_utils.py