In 4.1, an OpenCL variant filter was added for GPU processing.
In 4.2, threading support was added to the tonemap CPU filter.
In 4.3, a VAAPI variant filter was added for GPU processing on Linux.
I have a real-world use case: I have a full Atmos audio setup (7.1.4) and a projector for video. Now projectors don't do HDR very well[1], and to be honest I'm more interested in immersive audio than shiny highlights in the picture.
Unfortunately, quite a few film releases only put the Atmos audio on the UHD disc along with HDR. Running those discs through my rig and forcing SDR yields a picture that is too dark compared to the original HD SDR copy (and letting it try HDR isn't much better).
So I'm wondering: will this let me rip and convert a film to a good SDR UHD copy with the Atmos audio?
--
[1] ...and 100 inch OLED TVs are stupidly expensive.
Your output device's color space needs to match the color space the display device expects. We've been in an SDR world of everything being sRGB/Rec. 709 for so long that we often don't even realize when something is putting out a mismatched color space. Normally a device like an Apple TV will do the transform automatically, from whatever the media is to the color space of the screen it's connected to. But if you're using something like a PC that assumes sRGB, and you force out BT.2020 video at sRGB levels, you're going to have a bad time.
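One quick way to see what a file is actually flagged as (assuming ffprobe is available; the filename is a placeholder) is to dump the video stream's color metadata:

```shell
# Print the color metadata of the first video stream.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=color_space,color_transfer,color_primaries \
  -of default=noprint_wrappers=1 input.mkv
```

An HDR10 source typically reports bt2020nc / smpte2084 / bt2020, while a plain SDR file reports bt709 for all three; a mismatch between these flags and what the display expects is exactly the "bad time" scenario above.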
ffmpeg command to convert hdr video to SDR:
ffmpeg -i video.mp4
and it returned:
ffmpeg -i video.mp4 -vf format=pix_fmts=yuv420p -c:v libx264 -crf 18 -preset veryslow -c:a copy output.mp4

"This FFmpeg command will take your HDR video and convert it to an SDR video using the libx264 codec with a CRF value of 18."

Then I asked for an ffmpeg command to convert HDR video to SDR and fix the colorspace, and it returned:

ffmpeg -i input.hdr -pix_fmt yuv420p -filter_complex "colorlevels=entropy_mode=1:levels=srgb:source_range=full:target_range=full:rgb_to_yuv=sigmoidal" -y output.sdr.mp4
I don't really understand this, so I asked it to explain: how does this fix the colorspace?
It replied: The RGB to YUV conversion is done using a "sigmoidal" function, which basically means that it applies a non-linear transformation to the input video. This transformation is designed to "compress" the bright colors in the video, making them more manageable for SDR displays.
The colorlevels filter is also adjusting the black and white points of the video. This is done to ensure that the video can be displayed correctly on SDR displays, which have a limited color gamut.
Finally, the target_range and source_range options are used to ensure that the video is encoded in the correct range for SDR displays.
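For reference, a widely documented ffmpeg filter chain for HDR10-to-SDR tonemapping uses the zscale and tonemap filters. A rough sketch, assuming an ffmpeg build with zimg (zscale) support; the filenames and the crf/preset values are placeholders:

```shell
# Tonemap HDR10 (BT.2020 / PQ) down to SDR BT.709 and encode with x264.
# The chain linearizes the PQ signal, converts primaries to BT.709,
# applies Hable tonemapping, then converts back to TV-range BT.709 YUV.
ffmpeg -i input.mkv \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy output.mp4
```

Note that -c:a copy passes the audio through bit-for-bit, so a TrueHD Atmos track survives the conversion untouched.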