
VAAPI: format map should only be used when trying to map frames for the CPU #18

Open
Gnurou opened this issue May 22, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@Gnurou
Collaborator

Gnurou commented May 22, 2023

As far as decoding is concerned, the only relevant format is the RT format. The VA_FOURCC_* values are only used to present a view of the buffer in the layout given by the fourcc.

However, the current code requires a fourcc to be specified (or at least chosen by default) in the open method, even if we have no intent to read the decoded result with the CPU. This should probably be changed so that fourccs are not involved until we actually want to map a decoded buffer.

The interface would probably be such that we can query which DecodedFormats are supported by the decoder with its current settings; then we can request a mapping of a given frame into one of the supported formats.
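
Something along these lines, just as an illustration (the trait and method names below are hypothetical, not existing crate API, and DecodedFormat is reduced to a stand-in enum):

    use std::collections::HashSet;

    /// Stand-in for the crate's DecodedFormat enum (subset, for illustration only).
    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub enum DecodedFormat {
        NV12,
        I420,
    }

    /// Hypothetical trait: fourccs only come into play once a CPU view of a
    /// decoded frame is actually requested, never when the decoder is opened.
    pub trait FrameMapper {
        /// The backend's opaque decoded-frame handle.
        type Handle;
        /// A CPU-visible view of a decoded frame.
        type Mapping;

        /// Queries which DecodedFormats can be produced under the current
        /// stream settings (RT format, profile, resolution).
        fn supported_decoded_formats(&self) -> anyhow::Result<HashSet<DecodedFormat>>;

        /// Maps an already-decoded frame into one of the supported formats.
        fn map_frame(
            &self,
            handle: &Self::Handle,
            format: DecodedFormat,
        ) -> anyhow::Result<Self::Mapping>;
    }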

Gnurou added the enhancement label on May 22, 2023
@dwlsalmeida
Collaborator

dwlsalmeida commented May 23, 2023

The idea back then was that changing the format would reallocate the surfaces with the new fourcc hint. I noticed that 464cbe7 changes this by removing the va_fourcc hint during surface creation.

In that case, we can probably simplify things by calling open() only once and removing the FormatMap argument.
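
As a rough illustration of that simplification (the types and signatures below are made up for the sketch, not the actual backend API):

    /// Stand-ins for the real types, only so the sketch is self-contained.
    pub struct StreamParams;
    pub struct Decoder;

    // Before (simplified): a FormatMap/fourcc had to be chosen when opening,
    // even if frames were never going to be mapped for the CPU:
    //
    //     fn open(params: &StreamParams, map_format: FormatMap) -> anyhow::Result<Decoder>
    //
    // After: open() takes no fourcc at all; the mapping format is only chosen
    // later, when a CPU view of a decoded frame is requested.
    pub fn open(params: &StreamParams) -> anyhow::Result<Decoder> {
        let _ = params;
        Ok(Decoder)
    }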

Nevertheless, I find that the current strategy works well with format() and try_format():

  • During format(), we use the current VA_FOURCC (i.e. the current VAImageFormat) to inform the user of the format currently in effect.
  • During try_format(), we check whether the chosen fourcc is supported by calling vaQueryImageFormats. If so, we replace the current VAImageFormat.

When mapping, we use the format that was set at the time the picture was decoded, by saving the map_format variable in GenericBackendHandle. This is important because, if the resolution changes, the hardware may be unable to honor the previously negotiated DecodedFormat.
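
Roughly, the flow looks like this (type and field names are simplified stand-ins, not the actual code):

    /// Simplified stand-in for libva's VAImageFormat.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    pub struct VaImageFormat {
        pub fourcc: u32,
    }

    pub struct Backend {
        /// The image format currently negotiated for CPU mapping.
        current: VaImageFormat,
        /// Formats reported by the driver (vaQueryImageFormats).
        supported: Vec<VaImageFormat>,
    }

    /// Per-picture handle; it snapshots the mapping format at decode time so
    /// that a later resolution or format change cannot invalidate a picture
    /// that was already decoded.
    pub struct GenericBackendHandle {
        map_format: VaImageFormat,
        // ... surface, decoded picture state, etc.
    }

    impl Backend {
        /// format(): report the fourcc currently in effect.
        pub fn format(&self) -> u32 {
            self.current.fourcc
        }

        /// try_format(): accept the requested fourcc only if the driver lists it.
        pub fn try_format(&mut self, wanted: u32) -> anyhow::Result<()> {
            match self.supported.iter().find(|f| f.fourcc == wanted) {
                Some(f) => {
                    self.current = *f;
                    Ok(())
                }
                None => anyhow::bail!("fourcc {:#x} is not supported by the driver", wanted),
            }
        }

        /// At decode time the current format is captured into the handle,
        /// which is what the eventual mapping will use.
        pub fn new_handle(&self) -> GenericBackendHandle {
            GenericBackendHandle {
                map_format: self.current,
            }
        }
    }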

The interface would probably be such that we can query which DecodedFormats are supported by the decoder with its current settings; then we can request a mapping of a given frame into one of the supported formats.

This used to be possible, but after a refactor this method was moved into the struct below and made private:

    /// Gets a set of supported formats for the particular stream being
    /// processed. This requires that some buffers be processed before this call
    /// is made. Only formats that are compatible with the current color space,
    /// bit depth, and chroma format are returned such that no conversion is
    /// needed.
    fn supported_formats_for_stream(&self) -> anyhow::Result<HashSet<DecodedFormat>> {
        let metadata = self.metadata_state.get_parsed()?;
        let image_formats = self.display.query_image_formats()?;

        let formats = supported_formats_for_rt_format(
            &self.display,
            metadata.rt_format,
            metadata.profile,
            libva::VAEntrypoint::VAEntrypointVLD,
            &image_formats,
        )?;

        Ok(formats.into_iter().map(|f| f.decoded_format).collect())
    }

See https://chromium-review.googlesource.com/c/crosvm/crosvm/+/3875043/15/media/cros-codecs/src/decoders.rs#70

The original negotiation strategy was described in https://chromium-review.googlesource.com/c/crosvm/crosvm/+/3875043/15/media/cros-codecs/src/decoders.rs#100

Notice that the client used to be able to access the backend, e.g.:

https://chromium-review.googlesource.com/c/crosvm/crosvm/+/3875043/15/media/cros-codecs/src/decoders/vp8/decoder.rs#402

But when this changed, some methods were not reimplemented on VideoDecoder, such as fn display_resolution(&self) -> Option<Resolution> and supported_formats_for_stream(). I think we should fix that.
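
For reference, adding them back could be as small as the following sketch (Resolution and DecodedFormat are reduced to stand-ins so the snippet is self-contained; the real trait would of course keep its existing methods):

    use std::collections::HashSet;

    /// Stand-ins for the crate's actual types, for illustration only.
    pub struct Resolution {
        pub width: u32,
        pub height: u32,
    }

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    pub enum DecodedFormat {
        NV12,
        I420,
    }

    pub trait VideoDecoder {
        /// Visible (display) resolution of the stream, if known at this point.
        fn display_resolution(&self) -> Option<Resolution>;

        /// Formats the current stream can be mapped to without conversion,
        /// i.e. the public counterpart of supported_formats_for_stream() above.
        fn supported_formats_for_stream(&self) -> anyhow::Result<HashSet<DecodedFormat>>;

        // ... the existing decoding entry points stay as they are ...
    }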
