This is how I convert videos to 720p web-playable videos (if it's not already web-playable):
https://github.com/nicbou/homeserver/blob/22c0a160f9df5f4c34...
This is how I create hover previews like on modern streaming sites:
https://github.com/nicbou/timeline/blob/9d9340930ed0213dffdd...
I like collections of commands. However, the challenges that seem unsolved are (1) keeping the example in sync with the CLI options, and (2) making it easy to dig into parts of examples. The former is a classic documentation problem, of course.
I use a collection of commands in a dotfiles [1] repo I share around a few machines.
This [2] command compresses any video into a small, web-playable file at relatively high quality, so you can quickly share HD footage.
ffcompress INPUT.mp4
# creates INPUT.compressed.mp4
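The linked script is the real thing; a minimal sketch of such a wrapper might look like the following. The CRF value, scale, and preset here are my assumptions for illustration, not the actual settings from the linked script.

```shell
# Hypothetical sketch of an ffcompress-style wrapper (not the linked
# script's actual settings): re-encode to 720p H.264 at a CRF that
# trades some quality for a much smaller, web-playable file.
ffcompress() {
  local in="$1"
  local out="${in%.*}.compressed.mp4"   # INPUT.mp4 -> INPUT.compressed.mp4
  ffmpeg -i "$in" \
    -vf "scale=-2:720" \
    -c:v libx264 -crf 28 -preset slow \
    -c:a aac -b:a 128k \
    -movflags +faststart \
    "$out"
}
```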
[1] https://github.com/benwinding/dotfiles
[2] https://github.com/benwinding/dotfiles/blob/master/bin/ffcom...
https://amiaopensource.github.io/ffmprovisr/
https://github.com/amiaopensource/ffmprovisr
(CC Licensed, they accept pull requests.)
---
The ffmpeg subreddit also has some helpful contributors:
https://github.com/HotpotDesign/FFMpeg-Online
could we list and credit your commands?
The videoprocessing one is more mature. It's been converting various torrented movies to playable mp4s for a few years.
The 10x1s preview looks magical once you see it in production. It's also a great introduction to ffmpeg filter syntax, which really isn't that complex.
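For the curious, the general shape of such a preview (ten one-second samples stitched into one short clip) can be sketched like this. The filenames, output size, and sample count are assumptions for illustration, not the linked script's actual values.

```shell
# Sketch of a 10x1s hover preview: sample ten 1-second clips spread
# evenly through the video, then stitch them with the concat demuxer.
in=input.mp4
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$in")
rm -f list.txt
for i in $(seq 0 9); do
  # start time of sample i, as a fraction of the total duration
  start=$(awk -v d="$dur" -v i="$i" 'BEGIN { printf "%.2f", d * i / 10 }')
  ffmpeg -y -ss "$start" -i "$in" -t 1 -an -vf scale=320:-2 \
    -c:v libx264 "part$i.mp4"
  echo "file 'part$i.mp4'" >> list.txt
done
# all parts share identical encoding settings, so stream copy is safe
ffmpeg -y -f concat -safe 0 -i list.txt -c copy preview.mp4
```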
This seemingly easy task is actually pretty hard to get right with FFmpeg.
I'll just leave a few notes here in case anyone wants to do that.
Video part
Chroma sampling
Without a `-pix_fmt` argument, FFmpeg defaults to 4:4:4 chroma subsampling instead of the typical 4:2:0, which has very limited player support (Firefox, for example, doesn't play it).
To fix: add `-pix_fmt yuv420p`, or append `format=yuv420p` to the end of your other video filter chain.
Color space
FFmpeg defaults to BT.601 when converting your RGB images into YUV. This is a problem if your image / output video is in HD resolution: almost all video players (including browsers) assume BT.709 for anything >= 720p, so colors shift on playback (255,0,0 becomes 255,24,0).
To fix: add `-vf zscale=matrix=709`.
Note: some other video filters can do the same, most famously good ol' `scale` (based on libswscale). However, it has a notorious bug that shifts colors on its own (towards yellow) when the input is BGR (all the `.bmp`s). See: https://trac.ffmpeg.org/ticket/979 So stick with the better `zscale`.
Framerate etc.
You can set the framerate with `-r` to a small number, for both input (reading the same image X times per second) and output (since it's a still image, you can get away with a very low framerate). `-tune stillimage` should also be used for the (default) x264 encoder.
In summary:
ffmpeg -loop 1 -r 1 -i image.png -i audio.wav -r 1 -shortest -vf zscale=matrix=709,format=yuv420p -c:v libx264 -tune stillimage output_video.mp4
Audio part - length mismatch bug
Even if we ignore all the image/video troubles above, `ffmpeg -loop 1 -i image.png -i sound.mp3 -shortest video.mp4` still doesn't work well. The `-shortest` argument has a long-standing bug (https://trac.ffmpeg.org/ticket/5456) where the output video ends up longer than the input audio (by quite a few seconds, worse with `-r 1`). There are some workarounds (listed in the ticket), but they don't eliminate the issue entirely.
Your best bet (if the length match is crucial) is to just convert the output video again and use -t to cut to the proper length.
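A sketch of that second pass, assuming ffprobe is available to read the audio's exact duration:

```shell
# Work around the -shortest overshoot: measure the audio's real
# duration with ffprobe, then trim the finished video to match.
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 audio.wav)
ffmpeg -i output_video.mp4 -t "$dur" -c copy output_fixed.mp4
```

`-c copy` avoids a second lossy re-encode; only the container is rewritten.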
# batch convert MTS files into mp4 files
for f in *.MTS; do avconv -i "$f" -deinterlace -vcodec libx264 -pass 1 "${f%.MTS}.mp4"; done
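avconv is the now-abandoned libav fork; with current ffmpeg the same batch loop might look like this (CRF value is my assumption; single-pass CRF, since two-pass mainly matters when targeting an exact bitrate):

```shell
# Same batch conversion with ffmpeg instead of the deprecated avconv;
# yadif deinterlaces, and CRF replaces the two-pass bitrate dance.
for f in *.MTS; do
  ffmpeg -i "$f" -vf yadif -c:v libx264 -crf 20 -c:a aac "${f%.MTS}.mp4"
done
```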
https://www.mrfdev.com/ffmpeg-command-generator
I've also added some shell functions[0] to my dotfiles to make it a bit easier to use when switching between several different machines.
[0] https://gist.github.com/devadvance/03d3c8f57b3e0254fb989e946...
And yeah, it's great, and not just for gifs. Just quickly re-encoding something in a different format / container is so much easier than faffing around opening a proper program.
It took a while to stop being annoyed with it, though, and to figure out watch-outs like -pix_fmt yuv420p, which I now finally have memorised.
<video width="320" height="240" autoplay loop playsinline muted>
<source src="foo.mp4" type="video/mp4">
<source src="foo.ogg" type="video/ogg">
<!-- GIF fallback for ancient browsers but it's 2021 and you probably don't need this anymore -->
<img src="foo.gif" width="320" height="240">
</video>
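Producing that mp4 from an existing gif is a one-liner; the scale expression rounds both dimensions down to even numbers, which yuv420p (and x264) require. Filenames here are just the ones from the snippet above.

```shell
# Convert a gif to a web-playable mp4: yuv420p for broad player
# support, and even dimensions (x264 rejects odd widths/heights).
ffmpeg -i foo.gif \
  -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" \
  -pix_fmt yuv420p -c:v libx264 -movflags +faststart \
  foo.mp4
```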
With H.264 or any modern codec you'll get 24-bit color and smaller file sizes than GIF. It's better in every way I can imagine.

There are still situations where you want a gif. Your approach works fine if you're writing your own code, but if I want to embed a looping animation on a Confluence page, it isn't a viable option.
A bit overkill, but a GUI to hand-hold old folks like me would be handy. Perhaps an add-in for Blender could work, since its video-handling chops are growing bigger every day?
> I'm an engineering manager at Google Stadia
That's gotta be an interesting gig!
As far as I know there is no automated tool to do it, though. It's done manually by the meme lords.
Also, dialing in the quality is some black magic, and not only about adjusting the fps. You can adjust the image size, bit depth, and other params. This is also not automated, and ffmpeg makes it very cumbersome to dial in the right parameters to get decent quality at a small file size.
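As a concrete illustration of those knobs, this is the kind of incantation you end up hand-tuning; the values are arbitrary starting points, not recommendations:

```shell
# Typical gif-size knobs in one place: fps drops frames, scale
# shrinks the image, lanczos keeps the downscale reasonably sharp.
ffmpeg -i clip.mp4 \
  -vf "fps=12,scale=480:-1:flags=lanczos" \
  out.gif
```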
I'd actually be really interested to know if there is a better reference out there
https://www.lcdf.org/gifsicle/
https://kornel.ski/lossygif (merged into gifsicle)
Even imagemagick's convert can handle the static stuff. As for the palette stage, that depends on ffmpeg to get things right. I find the guide at https://superuser.com/a/556031 a lot more comprehensive.
I once wrote a similar thing as part of a tool to record Android devices and make gifs from the resulting video. I recall having to make two or three separate ffmpeg invocations, but here I only see one. It always frustrated me to have to do that, and I'm pleased to learn how to do it concisely.
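The single-invocation trick (likely what's happening here, though that's an assumption) splits the decoded frames inside one filtergraph, so the palette is generated and applied without an intermediate file:

```shell
# One ffmpeg invocation instead of two: split the decoded frames,
# feed one branch to palettegen, then map the other branch through
# the generated palette with paletteuse.
ffmpeg -i clip.mp4 -filter_complex \
  "[0:v] fps=12,scale=480:-1:flags=lanczos,split [a][b]; \
   [a] palettegen [p]; \
   [b][p] paletteuse" \
  out.gif
```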
From a very high level point of view you are right. Yes, webm videos use vp8 and webp uses vp8 key frames, but there are still differences between animated webp and webm videos without sound.
A webm video is a video with a vp8/vp9 video stream contained inside an mkv container, while webp files, including the animated ones, are inside RIFF containers. Also, webp's animation mode does not have the full inter frame system that vp8 has. It does support carrying over of state from one frame to the next, but does this in the simplest way possible: just paint the next frame over the prior one, respecting the alphas. Most importantly, those frames are still encoded like vp8 Intraframes (keyframes), not like vp8 Interframes [1]. So you won't need a full vp8 decoder to write an animated webp decoder.
[0] https://github.com/devadvance/terminalcheatsheet/blob/stagin...