I used to work at a ~$2B unicorn where a big part of the products we worked on relied on ffmpeg.
*Fabrice Bellard. Also creator of QEMU, TCC, QuickJS, and others.
> The name of the project is inspired by the MPEG video standards group, together with "FF" for "fast forward"
Corresponding source (Fabrice Bellard himself): https://ffmpeg.org/pipermail/ffmpeg-devel/2006-February/0103...
In all seriousness though, the sheer amount of devices running code he wrote at any given moment is just ridiculous.
Comparable only to famous film director Alan Smithee who has credit for so many films.
It's optimized, Pythonic video filtering... But also so much more: https://vsdb.top/
And Staxrip, which makes such good use of ffmpeg, vapoursynth, and dozens of other encoders and tools that I reboot from Linux to Windows just to use it: https://github.com/staxrip/staxrip
It is also incredibly stupid how 99% of the time ffprobe is used without any arguments just to quickly see something as mundane as duration, resolution, framerate, MAYBE the number of audio tracks, yet 99% of its output is completely irrelevant bullshit like compile options.
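For what it's worth, ffprobe can be told to print only the interesting bits via `-show_entries` (the entry names below are from ffprobe's stream/format sections; the filename is a placeholder):

```shell
# Print just resolution, frame rate, and duration -- no banner, no build options
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,r_frame_rate:format=duration \
  -of default=noprint_wrappers=1 input.mp4
```

`-v error` suppresses the banner and `-of default=noprint_wrappers=1` drops the section headers, so the output is a handful of key=value lines.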
There were (are?) tons of them on GitHub. But many are still obscure, or are single dev efforts that fizzled out.
Some focused, purpose-built CLI frontends (like Av1an, specifically for transcoding) are excellent at what they do. Perhaps that is a better approach than an all-encompassing wrapper.
* https://ffmpeg.guide/ — create complex FFmpeg filtergraphs quickly and correctly
* https://www.hadet.dev/ffmpeg-cheatsheet/ — clipping, adding fade in/out, scaling, concat, etc
It's also an issue in the original post.
Seeking within the container is usually much faster than decoding and then throwing away what you don't need, but it has a fatal flaw: most videos use P-frames and thus require you to decode the frames before them.
So, say you want to skip to 60 seconds in. The solution is to use "-ss 50 -i input.mkv -ss 10", which is fast and should land on the keyframes you need.
...which becomes obvious once you notice that options apply either to one of the inputs or to the output.
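Spelled out, the two-seek trick from the comment above looks like this (filenames and codecs are placeholders):

```shell
# Input-side -ss (before -i): fast, jumps to a keyframe at or before 50s
# without decoding anything.
# Output-side -ss (after -i): exact, decodes and discards the remaining 10s.
# Net effect: the output starts at ~60s into input.mkv.
ffmpeg -ss 50 -i input.mkv -ss 10 -c:v libx264 -c:a aac clip.mp4
```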
I started using chatgpt to come up with ffmpeg commands, just so much faster and easier to find what I need.
I made a small tool so I can do it right in the cli: https://github.com/alexkrkn/help-cli
and made a video about it: https://www.youtube.com/watch?v=pOda6TDBqcY
I've finally started compressing my 15-year, 300GB personal video collection...
https://www.shutterencoder.com/
I'm compressing everything using H.265, and videos are sometimes shrinking to 1/10th the size. Can anyone give me reasons why I would not want to do this? I've read that it takes more processing power to play back these compressed videos, but I'm not sure that will cause much trouble in the future...
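For reference, a plain ffmpeg command for that kind of shrink (the CRF value and preset here are assumptions; tune them to taste against a short sample first):

```shell
# CRF ~26 with libx265 is a common "visually fine, much smaller" starting point;
# slower presets trade encode time for compression efficiency.
# -c:a copy keeps the audio untouched.
ffmpeg -i input.mp4 -c:v libx265 -crf 26 -preset slow -c:a copy output.mp4
```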
But at 300GB, storage is cheap enough that you could just keep the masters.
Also it's just easier on my homelab to use Plex without having to transcode
Though if you are re-encoding, you might as well go whole hog and use QTGMC.
It took me more time than I wish it did to become open to using CLI apps, the Windows world had taught me to expect a GUI for everything.
I'm not certain, but I highly suspect that if I sat down and learned about digital video encoding and compression on a granular enough level then figuring out how to do things in ffmpeg would be rather intuitive. Does anyone have experience doing this?
Formats like mkv or codecs like HEVC didn't exist back then but the concept of manipulating audio/video through a bunch of filters is a wonderful one and most (all?) a/v transforming software does it. When I started looking into FFmpeg's man pages I could connect the dots and start using it after a day of fooling around.
I'm a CLI lover and man page reader so perhaps it worked to my advantage.
Nowadays I've wrapped them all in Emacs functions. This makes them easily accessible as a "right-click" menu of sorts via M-x.
find . -maxdepth 1 -type f -name "*" -exec sh -c 'pv "$1" | ffmpeg -i pipe:0 -filter:v scale=720:-2 -c:a copy "${1%.*}.mp4" 2> /dev/null' _ {} \;
Not sure if it can be improved but it works well
In the past (over 10 years ago), I used to work with H.264, but I remember fiddling with parameters was a pain. I wonder if nowadays there are some promising new codecs based on ML. Again, as long as it works on my machine it's good, so anything from GitHub, HuggingFace and so on is acceptable, as long as it doesn't need too much effort or specialized knowledge to run.
If you have more time, then AV1 is good. Read through the trac page [1] and do test encodes with 5-10 seconds of video to determine what settings give you your desired quality. Note that low `-cpu-used` or `-preset` values will give great quality but take incredibly long. Then, encode a few minutes of video with various settings to determine what settings give you your desired file size.
For human time usage, keep track of the commands and options you use and how those affect the output. If the job will take more than a few hours, write your script to be cancellable and resumable.
You use "access" several times but I don't know what you mean by it. I'm going to guess that is some non-english usage slipping in. Nothing else to complain about at this time. [EDIT] I should say "is used" and "they mean" because I don't know if the author is also the poster.
(At least that is how the term is used by collections librarians. Even there terminology may vary)
I'd use gifski: https://gif.ski/
Specific recipes that should be added:
Removing audio: ffmpeg -i $input_file -c copy -an $output_file
Halving resolution: ffmpeg -i $input_file -vf "scale=iw/2:ih/2" $output_file
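One caveat on the halving recipe: common yuv420p encoders such as libx264 require even dimensions, so `scale=iw/2:ih/2` fails on odd-sized sources. Passing -2 for one axis tells the scale filter to pick the nearest even value that preserves the aspect ratio:

```shell
# Halve the width; ffmpeg rounds the height to the nearest
# even value that keeps the aspect ratio.
ffmpeg -i "$input_file" -vf "scale=iw/2:-2" "$output_file"
```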
ffmpeg is infrastructure-level important, and tools like this keep it going.