AutoencoderOobleck

The Oobleck variational autoencoder (VAE) model with KL loss was introduced in Stability-AI/stable-audio-tools and Stable Audio Open by Stability AI. The model is used in 🤗 Diffusers to encode audio waveforms into latents and to decode latent representations into audio waveforms.
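
For example, a minimal encode/decode round trip looks like the following sketch. The `stabilityai/stable-audio-open-1.0` repository id and `vae` subfolder follow the Stable Audio Open checkpoint layout on the Hugging Face Hub and are given here for illustration:

from mindspore import ops
from mindone.diffusers import AutoencoderOobleck

# Load the pretrained VAE weights (repository id and subfolder are assumptions).
vae = AutoencoderOobleck.from_pretrained(
    "stabilityai/stable-audio-open-1.0", subfolder="vae"
)

# One second of stereo audio at the model's native 44.1 kHz sampling rate.
waveform = ops.randn(1, 2, 44100)

# Encode to a diagonal Gaussian posterior over latents and sample from it.
posterior = vae.encode(waveform).latent_dist
latents = posterior.sample()

# Decode the latents back into an audio waveform.
reconstruction = vae.decode(latents).sample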

The abstract from the paper is:

Open generative models are vitally important for the community, allowing for fine-tunes and serving as baselines when presenting new models. However, most current text-to-audio models are private and not accessible for artists and researchers to build upon. Here we describe the architecture and training process of a new open-weights text-to-audio model trained with Creative Commons data. Our evaluation shows that the model's performance is competitive with the state-of-the-art across various metrics. Notably, the reported FDopenl3 results (measuring the realism of the generations) showcase its potential for high-quality stereo sound synthesis at 44.1kHz.

mindone.diffusers.AutoencoderOobleck

Bases: ModelMixin, ConfigMixin

An autoencoder for encoding waveforms into latents and decoding latent representations into waveforms. First introduced in Stable Audio.

This model inherits from [ModelMixin]. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

PARAMETER DESCRIPTION
encoder_hidden_size

Intermediate representation dimension for the encoder.

TYPE: `int`, *optional*, defaults to 128 DEFAULT: 128

downsampling_ratios

Ratios for downsampling in the encoder. These are used in reverse order for upsampling in the decoder.

TYPE: `List[int]`, *optional*, defaults to `[2, 4, 4, 8, 8]` DEFAULT: [2, 4, 4, 8, 8]

channel_multiples

Channel multiples used to determine the hidden sizes of the intermediate layers.

TYPE: `List[int]`, *optional*, defaults to `[1, 2, 4, 8, 16]` DEFAULT: [1, 2, 4, 8, 16]

decoder_channels

Intermediate representation dimension for the decoder.

TYPE: `int`, *optional*, defaults to 128 DEFAULT: 128

decoder_input_channels

Input dimension for the decoder. Corresponds to the latent dimension.

TYPE: `int`, *optional*, defaults to 64 DEFAULT: 64

audio_channels

Number of channels in the audio data. Either 1 for mono or 2 for stereo.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

sampling_rate

The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).

TYPE: `int`, *optional*, defaults to 44100 DEFAULT: 44100
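
Together, `downsampling_ratios` determine the model's hop length, i.e. how many waveform samples collapse into a single latent frame. A back-of-the-envelope sketch with the default configuration (the exact latent length may differ slightly depending on padding):

import math

downsampling_ratios = [2, 4, 4, 8, 8]
hop_length = math.prod(downsampling_ratios)    # 2 * 4 * 4 * 8 * 8 = 2048

# One second of 44.1 kHz audio maps to roughly 22 latent frames, each with
# `decoder_input_channels` (64 by default) channels.
num_latent_frames = math.ceil(44100 / hop_length)  # 22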

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
class AutoencoderOobleck(ModelMixin, ConfigMixin):
    r"""
    An autoencoder for encoding waveforms into latents and decoding latent representations into waveforms. First
    introduced in Stable Audio.

    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
    for all models (such as downloading or saving).

    Parameters:
        encoder_hidden_size (`int`, *optional*, defaults to 128):
            Intermediate representation dimension for the encoder.
        downsampling_ratios (`List[int]`, *optional*, defaults to `[2, 4, 4, 8, 8]`):
            Ratios for downsampling in the encoder. These are used in reverse order for upsampling in the decoder.
        channel_multiples (`List[int]`, *optional*, defaults to `[1, 2, 4, 8, 16]`):
            Channel multiples used to determine the hidden sizes of the intermediate layers.
        decoder_channels (`int`, *optional*, defaults to 128):
            Intermediate representation dimension for the decoder.
        decoder_input_channels (`int`, *optional*, defaults to 64):
            Input dimension for the decoder. Corresponds to the latent dimension.
        audio_channels (`int`, *optional*, defaults to 2):
            Number of channels in the audio data. Either 1 for mono or 2 for stereo.
        sampling_rate (`int`, *optional*, defaults to 44100):
            The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
    """

    _supports_gradient_checkpointing = False

    @register_to_config
    def __init__(
        self,
        encoder_hidden_size=128,
        downsampling_ratios=[2, 4, 4, 8, 8],
        channel_multiples=[1, 2, 4, 8, 16],
        decoder_channels=128,
        decoder_input_channels=64,
        audio_channels=2,
        sampling_rate=44100,
    ):
        super().__init__()

        self.encoder_hidden_size = encoder_hidden_size
        self.downsampling_ratios = downsampling_ratios
        self.decoder_channels = decoder_channels
        self.upsampling_ratios = downsampling_ratios[::-1]
        self.hop_length = int(np.prod(downsampling_ratios))
        self.sampling_rate = sampling_rate

        self.encoder = OobleckEncoder(
            encoder_hidden_size=encoder_hidden_size,
            audio_channels=audio_channels,
            downsampling_ratios=downsampling_ratios,
            channel_multiples=channel_multiples,
        )

        self.decoder = OobleckDecoder(
            channels=decoder_channels,
            input_channels=decoder_input_channels,
            audio_channels=audio_channels,
            upsampling_ratios=self.upsampling_ratios,
            channel_multiples=channel_multiples,
        )

        self.use_slicing = False

    def enable_slicing(self):
        r"""
        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
        """
        self.use_slicing = True

    def disable_slicing(self):
        r"""
        Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
        decoding in one step.
        """
        self.use_slicing = False

    def encode(
        self, x: ms.Tensor, return_dict: bool = True
    ) -> Union[AutoencoderOobleckOutput, Tuple[OobleckDiagonalGaussianDistribution]]:
        """
        Encode a batch of audio waveforms into latents.

        Args:
            x (`ms.Tensor`): Input batch of audio waveforms.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether to return an [`AutoencoderOobleckOutput`] instead of a plain tuple.

        Returns:
            The latent representations of the encoded audio. If `return_dict` is True, an
            [`AutoencoderOobleckOutput`] is returned, otherwise a plain `tuple` is returned.
        """
        if self.use_slicing and x.shape[0] > 1:
            encoded_slices = [self.encoder(x_slice) for x_slice in x.split(1)]
            h = ops.cat(encoded_slices)
        else:
            h = self.encoder(x)

        posterior = OobleckDiagonalGaussianDistribution(h)

        if not return_dict:
            return (posterior,)

        return AutoencoderOobleckOutput(latent_dist=posterior)

    def _decode(self, z: ms.Tensor, return_dict: bool = True) -> Union[OobleckDecoderOutput, ms.Tensor]:
        dec = self.decoder(z)

        if not return_dict:
            return (dec,)

        return OobleckDecoderOutput(sample=dec)

    def decode(self, z: ms.Tensor, return_dict: bool = True, generator=None) -> Union[OobleckDecoderOutput, ms.Tensor]:
        """
        Decode a batch of latent representations into audio waveforms.

        Args:
            z (`ms.Tensor`): Input batch of latent vectors.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether to return a [`~models.vae.OobleckDecoderOutput`] instead of a plain tuple.

        Returns:
            [`~models.vae.OobleckDecoderOutput`] or `tuple`:
                If return_dict is True, a [`~models.vae.OobleckDecoderOutput`] is returned, otherwise a plain `tuple`
                is returned.

        """
        if self.use_slicing and z.shape[0] > 1:
            decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
            decoded = ops.cat(decoded_slices)
        else:
            decoded = self._decode(z).sample

        if not return_dict:
            return (decoded,)

        return OobleckDecoderOutput(sample=decoded)

    def construct(
        self,
        sample: ms.Tensor,
        sample_posterior: bool = False,
        return_dict: bool = True,
        generator: Optional[np.random.Generator] = None,
    ) -> Union[OobleckDecoderOutput, ms.Tensor]:
        r"""
        Args:
            sample (`ms.Tensor`): Input sample.
            sample_posterior (`bool`, *optional*, defaults to `False`):
                Whether to sample from the posterior.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`OobleckDecoderOutput`] instead of a plain tuple.
        """
        x = sample
        posterior = self.encode(x).latent_dist
        if sample_posterior:
            z = posterior.sample(generator=generator)
        else:
            z = posterior.mode()
        dec = self.decode(z).sample

        if not return_dict:
            return (dec,)

        return OobleckDecoderOutput(sample=dec)

mindone.diffusers.AutoencoderOobleck.construct(sample, sample_posterior=False, return_dict=True, generator=None)

PARAMETER DESCRIPTION
sample

Input sample.

TYPE: `ms.Tensor`

sample_posterior

Whether to sample from the posterior.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

return_dict

Whether or not to return a [OobleckDecoderOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
def construct(
    self,
    sample: ms.Tensor,
    sample_posterior: bool = False,
    return_dict: bool = True,
    generator: Optional[np.random.Generator] = None,
) -> Union[OobleckDecoderOutput, ms.Tensor]:
    r"""
    Args:
        sample (`ms.Tensor`): Input sample.
        sample_posterior (`bool`, *optional*, defaults to `False`):
            Whether to sample from the posterior.
        return_dict (`bool`, *optional*, defaults to `True`):
            Whether or not to return a [`OobleckDecoderOutput`] instead of a plain tuple.
    """
    x = sample
    posterior = self.encode(x).latent_dist
    if sample_posterior:
        z = posterior.sample(generator=generator)
    else:
        z = posterior.mode()
    dec = self.decode(z).sample

    if not return_dict:
        return (dec,)

    return OobleckDecoderOutput(sample=dec)
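
Calling the model instance directly runs `construct`, i.e. a full encode/decode pass. A minimal sketch, reusing the `vae` instance loaded in the example at the top of this page:

from mindspore import ops

sample = ops.randn(1, 2, 44100)  # (batch, audio_channels, sequence_length)

# Deterministic pass: decodes from the posterior mode.
reconstruction = vae(sample).sample

# Stochastic pass: samples latents from the posterior instead.
reconstruction = vae(sample, sample_posterior=True).sample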

mindone.diffusers.AutoencoderOobleck.decode(z, return_dict=True, generator=None)

Decode a batch of latent representations into audio waveforms.

PARAMETER DESCRIPTION
z

Input batch of latent vectors.

TYPE: `ms.Tensor`

return_dict

Whether to return a [~models.vae.OobleckDecoderOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

RETURNS DESCRIPTION
Union[OobleckDecoderOutput, Tensor]

[~models.vae.OobleckDecoderOutput] or tuple: If return_dict is True, a [~models.vae.OobleckDecoderOutput] is returned, otherwise a plain tuple is returned.

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
def decode(self, z: ms.Tensor, return_dict: bool = True, generator=None) -> Union[OobleckDecoderOutput, ms.Tensor]:
    """
    Decode a batch of latent representations into audio waveforms.

    Args:
        z (`ms.Tensor`): Input batch of latent vectors.
        return_dict (`bool`, *optional*, defaults to `True`):
            Whether to return a [`~models.vae.OobleckDecoderOutput`] instead of a plain tuple.

    Returns:
        [`~models.vae.OobleckDecoderOutput`] or `tuple`:
            If return_dict is True, a [`~models.vae.OobleckDecoderOutput`] is returned, otherwise a plain `tuple`
            is returned.

    """
    if self.use_slicing and z.shape[0] > 1:
        decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
        decoded = ops.cat(decoded_slices)
    else:
        decoded = self._decode(z).sample

    if not return_dict:
        return (decoded,)

    return OobleckDecoderOutput(sample=decoded)
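
Both return styles are available. A short sketch, reusing the `vae` and `latents` objects from the example at the top of this page:

# Dataclass-style access (return_dict=True, the default).
decoded = vae.decode(latents).sample

# Tuple-style access with return_dict=False.
(decoded,) = vae.decode(latents, return_dict=False)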

mindone.diffusers.AutoencoderOobleck.disable_slicing()

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
def disable_slicing(self):
    r"""
    Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
    decoding in one step.
    """
    self.use_slicing = False

mindone.diffusers.AutoencoderOobleck.enable_slicing()

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
def enable_slicing(self):
    r"""
    Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
    compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
    """
    self.use_slicing = True
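
Slicing trades a little throughput for a lower peak memory footprint on batched inputs, since `encode` and `decode` then process one batch element at a time. A usage sketch, reusing the `vae` instance from above:

from mindspore import ops

batch = ops.randn(8, 2, 44100)  # eight one-second stereo clips

vae.enable_slicing()   # process one batch element at a time
posterior = vae.encode(batch).latent_dist
audio = vae.decode(posterior.mode()).sample
vae.disable_slicing()  # restore single-step decoding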

mindone.diffusers.AutoencoderOobleck.encode(x, return_dict=True)

Encode a batch of audio waveforms into latents.

PARAMETER DESCRIPTION
x

Input batch of audio waveforms.

TYPE: `ms.Tensor`

return_dict

Whether to return an [AutoencoderOobleckOutput] instead of a plain tuple.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

RETURNS DESCRIPTION
Union[AutoencoderOobleckOutput, Tuple[OobleckDiagonalGaussianDistribution]]

The latent representations of the encoded audio. If return_dict is True, an [AutoencoderOobleckOutput] is returned, otherwise a plain tuple is returned.

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
def encode(
    self, x: ms.Tensor, return_dict: bool = True
) -> Union[AutoencoderOobleckOutput, Tuple[OobleckDiagonalGaussianDistribution]]:
    """
    Encode a batch of audio waveforms into latents.

    Args:
        x (`ms.Tensor`): Input batch of audio waveforms.
        return_dict (`bool`, *optional*, defaults to `True`):
            Whether to return an [`AutoencoderOobleckOutput`] instead of a plain tuple.

    Returns:
        The latent representations of the encoded audio. If `return_dict` is True, an
        [`AutoencoderOobleckOutput`] is returned, otherwise a plain `tuple` is returned.
    """
    if self.use_slicing and x.shape[0] > 1:
        encoded_slices = [self.encoder(x_slice) for x_slice in x.split(1)]
        h = ops.cat(encoded_slices)
    else:
        h = self.encoder(x)

    posterior = OobleckDiagonalGaussianDistribution(h)

    if not return_dict:
        return (posterior,)

    return AutoencoderOobleckOutput(latent_dist=posterior)
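
The returned `latent_dist` is an `OobleckDiagonalGaussianDistribution`, so latents can either be sampled stochastically or taken deterministically from the distribution's mode (its mean). A short sketch, reusing the `vae` instance from above:

from mindspore import ops

waveform = ops.randn(1, 2, 44100)
posterior = vae.encode(waveform).latent_dist

latents = posterior.sample()    # draw from the Gaussian posterior
latents_det = posterior.mode()  # or take the mode (the mean) instead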

mindone.diffusers.models.autoencoders.autoencoder_oobleck.OobleckDecoderOutput dataclass

Bases: BaseOutput

Output of decoding method.

PARAMETER DESCRIPTION
sample

The decoded output sample from the last layer of the model.

TYPE: `ms.Tensor` of shape `(batch_size, audio_channels, sequence_length)`

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
@dataclass
class OobleckDecoderOutput(BaseOutput):
    r"""
    Output of decoding method.

    Args:
        sample (`ms.Tensor` of shape `(batch_size, audio_channels, sequence_length)`):
            The decoded output sample from the last layer of the model.
    """

    sample: ms.Tensor

mindone.diffusers.models.autoencoders.autoencoder_oobleck.AutoencoderOobleckOutput dataclass

Bases: BaseOutput

Output of AutoencoderOobleck encoding method.

PARAMETER DESCRIPTION
latent_dist

Encoded outputs of Encoder represented as the mean and standard deviation of OobleckDiagonalGaussianDistribution. OobleckDiagonalGaussianDistribution allows for sampling latents from the distribution.

TYPE: `OobleckDiagonalGaussianDistribution`

Source code in mindone/diffusers/models/autoencoders/autoencoder_oobleck.py
@dataclass
class AutoencoderOobleckOutput(BaseOutput):
    """
    Output of AutoencoderOobleck encoding method.

    Args:
        latent_dist (`OobleckDiagonalGaussianDistribution`):
            Encoded outputs of `Encoder` represented as the mean and standard deviation of
            `OobleckDiagonalGaussianDistribution`. `OobleckDiagonalGaussianDistribution` allows for sampling latents
            from the distribution.
    """

    latent_dist: "OobleckDiagonalGaussianDistribution"  # noqa: F821