
Video Processor

The VideoProcessor provides a unified API for video pipelines to prepare inputs for VAE encoding and to post-process outputs once they're decoded. The class inherits from VaeImageProcessor, so it includes transformations such as resizing, normalization, and conversion between PIL images, MindSpore tensors, and NumPy arrays.
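A minimal round-trip sketch (the shapes and value ranges follow the conventions documented below; the random array is just a stand-in for real frames):

```python
import numpy as np
from mindone.diffusers.video_processor import VideoProcessor

processor = VideoProcessor()

# A batch of one 8-frame RGB video as a 5D NumPy array, shaped
# (batch_size, num_frames, height, width, num_channels), values in [0, 1].
video_np = np.random.rand(1, 8, 256, 256, 3).astype(np.float32)

# preprocess_video returns a 5D MindSpore tensor shaped
# (batch_size, num_channels, num_frames, height, width), normalized to [-1, 1].
video_ms = processor.preprocess_video(video_np, height=256, width=256)

# postprocess_video reverses the layout and denormalizes; with output_type="np"
# it returns an array shaped (batch_size, num_frames, height, width, num_channels).
frames = processor.postprocess_video(video_ms, output_type="np")
```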

mindone.diffusers.video_processor.VideoProcessor.preprocess_video(video, height=None, width=None)

Preprocesses input video(s).

PARAMETER DESCRIPTION
video

The input video. It can be one of the following:

* List of PIL images.
* List of lists of PIL images.
* 4D MindSpore tensor (expected shape for each tensor: `(num_frames, num_channels, height, width)`).
* 4D NumPy array (expected shape for each array: `(num_frames, height, width, num_channels)`).
* List of 4D MindSpore tensors (expected shape for each tensor: `(num_frames, num_channels, height, width)`).
* List of 4D NumPy arrays (expected shape for each array: `(num_frames, height, width, num_channels)`).
* 5D NumPy array (expected shape: `(batch_size, num_frames, height, width, num_channels)`).
* 5D MindSpore tensor (expected shape: `(batch_size, num_frames, num_channels, height, width)`).

TYPE: `List[PIL.Image]`, `List[List[PIL.Image]]`, `ms.Tensor`, `np.array`, `List[ms.Tensor]`, `List[np.array]`

height

The height of the preprocessed video frames. If `None`, the default height is obtained from `get_default_height_width()`.

TYPE: `int`, *optional*, defaults to `None` DEFAULT: None

width

The width of the preprocessed video frames. If `None`, the default width is obtained from `get_default_height_width()`.

TYPE: `int`, *optional*, defaults to `None` DEFAULT: None

Source code in mindone/diffusers/video_processor.py
def preprocess_video(self, video, height: Optional[int] = None, width: Optional[int] = None) -> ms.Tensor:
    r"""
    Preprocesses input video(s).

    Args:
        video (`List[PIL.Image]`, `List[List[PIL.Image]]`, `ms.Tensor`, `np.array`, `List[ms.Tensor]`, `List[np.array]`):
            The input video. It can be one of the following:
            * List of PIL images.
            * List of lists of PIL images.
            * 4D MindSpore tensors (expected shape for each tensor `(num_frames, num_channels, height, width)`).
            * 4D NumPy arrays (expected shape for each array `(num_frames, height, width, num_channels)`).
            * List of 4D MindSpore tensors (expected shape for each tensor `(num_frames, num_channels, height,
              width)`).
            * List of 4D NumPy arrays (expected shape for each array `(num_frames, height, width, num_channels)`).
            * 5D NumPy arrays: expected shape for each array `(batch_size, num_frames, height, width,
              num_channels)`.
            * 5D MindSpore tensors: expected shape for each array `(batch_size, num_frames, num_channels, height,
              width)`.
        height (`int`, *optional*, defaults to `None`):
            The height of the preprocessed video frames. If `None`, the default height is obtained from
            `get_default_height_width()`.
        width (`int`, *optional*, defaults to `None`):
            The width of the preprocessed video frames. If `None`, the default width is obtained from
            `get_default_height_width()`.
    """
    if isinstance(video, list) and isinstance(video[0], np.ndarray) and video[0].ndim == 5:
        warnings.warn(
            "Passing `video` as a list of 5d np.ndarray is deprecated."
            "Please concatenate the list along the batch dimension and pass it as a single 5d np.ndarray",
            FutureWarning,
        )
        video = np.concatenate(video, axis=0)
    if isinstance(video, list) and isinstance(video[0], ms.Tensor) and video[0].ndim == 5:
        warnings.warn(
            "Passing `video` as a list of 5d ms.Tensor is deprecated."
            "Please concatenate the list along the batch dimension and pass it as a single 5d ms.Tensor",
            FutureWarning,
        )
        video = ops.cat(video, axis=0)

    # ensure the input is a list of videos:
    # - if it is a batch of videos (5d ms.Tensor or np.ndarray), it is converted to a list of videos (a list of 4d ms.Tensor or np.ndarray)
    # - if it is a single video, it is converted to a list of one video.
    if isinstance(video, (np.ndarray, ms.Tensor)) and video.ndim == 5:
        video = list(video)
    elif (isinstance(video, list) and is_valid_image(video[0])) or is_valid_image_imagelist(video):
        # a single video, given either as a list of frames or as a single image
        video = [video]
    elif isinstance(video, list) and is_valid_image_imagelist(video[0]):
        # already a list of videos (each a list of frames); keep as is
        video = video
    else:
        raise ValueError(
            "Input is in incorrect format. Currently, we only support numpy.ndarray, ms.Tensor, PIL.Image.Image"
        )

    video = ops.stack([self.preprocess(img, height=height, width=width) for img in video], axis=0)

    # move the number of channels before the number of frames.
    video = video.permute(0, 2, 1, 3, 4)

    return video
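As a quick illustration of the input handling above, a plain list of PIL images is treated as a single video and comes back batched; a sketch (the frame count and sizes are arbitrary):

```python
from PIL import Image
from mindone.diffusers.video_processor import VideoProcessor

processor = VideoProcessor()

# A flat list of PIL images is interpreted as one video of 8 frames.
frames = [Image.new("RGB", (512, 512)) for _ in range(8)]

video = processor.preprocess_video(frames, height=256, width=256)
# video.shape -> (1, 3, 8, 256, 256): a batch dimension is added and the
# channel axis is moved in front of the frame axis.
```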

mindone.diffusers.video_processor.VideoProcessor.postprocess_video(video, output_type='np')

Converts a video tensor to a list of frames for export.

PARAMETER DESCRIPTION
video

The video as a tensor.

TYPE: `ms.Tensor`

output_type

Output type of the postprocessed video tensor.

TYPE: `str`, defaults to `"np"` DEFAULT: 'np'

Source code in mindone/diffusers/video_processor.py
def postprocess_video(
    self, video: ms.Tensor, output_type: str = "np"
) -> Union[np.ndarray, ms.Tensor, List[PIL.Image.Image]]:
    r"""
    Converts a video tensor to a list of frames for export.

    Args:
        video (`ms.Tensor`): The video as a tensor.
        output_type (`str`, defaults to `"np"`): Output type of the postprocessed `video` tensor.
    """
    batch_size = video.shape[0]
    outputs = []
    for batch_idx in range(batch_size):
        batch_vid = video[batch_idx].permute(1, 0, 2, 3)
        batch_output = self.postprocess(batch_vid, output_type)
        outputs.append(batch_output)

    if output_type == "np":
        outputs = np.stack(outputs)
    elif output_type == "pt":
        outputs = ops.stack(outputs)
    elif output_type != "pil":
        raise ValueError(f"{output_type} does not exist. Please choose one of ['np', 'pt', 'pil']")

    return outputs
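A short sketch of the export path, with random data standing in for an actual VAE decode:

```python
import mindspore as ms
import numpy as np
from mindone.diffusers.video_processor import VideoProcessor

processor = VideoProcessor()

# A decoded video tensor shaped (batch_size, num_channels, num_frames, height, width),
# with values in [-1, 1] as a VAE decode would produce.
decoded = ms.Tensor(np.random.uniform(-1, 1, (1, 3, 8, 256, 256)).astype(np.float32))

# With output_type="pil", each batch item becomes a list of PIL.Image frames.
pil_videos = processor.postprocess_video(decoded, output_type="pil")
first_video_frames = pil_videos[0]  # list of 8 PIL.Image.Image frames
```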