- Intel QSV-accelerated MJPEG encoding
- NVIDIA NVDEC-accelerated H.264, HEVC, MJPEG, MPEG-1/2/4, VC1, VP8/9 hwaccel decoding
- Intel QSV-accelerated overlay filter
- OpenCL overlay filter
- VAAPI MJPEG and VP8 decoding
- AMD AMF H.264 and HEVC encoders
- VideoToolbox HEVC encoder and hwaccel
- VAAPI-accelerated ProcAmp (color balance), denoise and sharpness filters
Confused, as I've been using ffmpeg for HEVC NVDEC already...
I was trying to do something the other day and couldn't figure it out; maybe someone here has ideas.
The end goal is to take a set of video files, with timestamps for each, and splice them into one file while removing the parts I don't want.
That is straightforward enough, as long as you're willing to re-encode the whole file. Otherwise, it seems ffmpeg is restricted to making cuts at key frames.
It's rare for a key frame to be placed at the exact spot where I would want to make a cut, so the section of the video around the cut would need to be re-encoded. Ideally that would be the only part that is re-encoded - everything else would be a straight copy from key frame to key frame.
I believe this is called ‘smart rendering’, and the pages I could find in the past said ffmpeg isn’t really suited for it, or it’s very difficult.
Does anyone know if that has changed recently, or has anyone found a way to do it?
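For what it's worth, the usual workaround is to re-encode only the short span between the desired cut and the next keyframe, stream-copy the rest, and concatenate. A rough sketch, with illustrative timestamps and filenames - the concat step only produces a clean result if the re-encoded piece's codec parameters match the copied piece, which is exactly the hard part:

```shell
# Re-encode only the span from the cut point up to the next keyframe
# (timestamps illustrative; x264 settings must match the source stream).
ffmpeg -ss 00:01:23.456 -to 00:01:26.000 -i input.mp4 -c:v libx264 -c:a aac bridge.mp4
# Stream-copy from that keyframe to the end - no quality loss here.
ffmpeg -ss 00:01:26.000 -i input.mp4 -c copy tail.mp4
# Concatenate the pieces with the concat demuxer.
printf "file 'bridge.mp4'\nfile 'tail.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
```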
Afraid I don't know how to do what you want with the ffmpeg commandline tool, though, either by partial re-encoding or by edit lists.
It's good to be able to edit video without losing quality.
Are you sure you need sub-keyframe precision? In h264+aac+mp4, for example, if it's not keyframe aligned, the result is usually a stalled video frame for a split second, but since the audio continues smoothly, it's not that noticeable.
If you know the exact codec settings that were used to encode the video, you can create new pieces to be fit losslessly together. Otherwise, it is more difficult.
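To align cuts to keyframes in the first place, you can list where they actually are; one way (assuming a reasonably recent ffprobe, filename illustrative):

```shell
# Print the timestamp of every keyframe packet in the first video stream;
# the flags field contains 'K' for keyframes.
ffprobe -select_streams v:0 -show_entries packet=pts_time,flags \
        -of csv=print_section=0 input.mp4 | awk -F, '/K/ {print $1}'
```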
Contact me on twitter at @downpoured and I can describe more.
EDIT: Just found that FFmpeg merges (almost) all libav.org changes: https://github.com/FFmpeg/FFmpeg/blob/master/doc/libav-merge...
Just this week there was an update showing that they had nearly a year-long window of vulnerability due to an out-of-date version [1].
A media-format Christmas tree like this has a lot of vulnerabilities and exposes the user to them fairly directly through media files.
[1] https://bugs.launchpad.net/ubuntu/+source/ffmpeg/+bug/169778...
If you actually go on the AV1 spec issue tracker, there are issues (both closed and open) from people at Nvidia, ARM's hardware team, Google and Netflix.
Lots of good times with ffserver, although thankfully https://github.com/arut/nginx-rtmp-module seems to meet the same use cases and execs ffmpeg under the hood.
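For anyone curious, the exec hook in nginx-rtmp-module looks roughly like this - a config sketch with illustrative application names and encoder settings ($name is the incoming stream key):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # Launch ffmpeg for each incoming stream and push the
            # re-encoded result to another application.
            exec ffmpeg -i rtmp://localhost/live/$name
                        -c:v libx264 -c:a aac
                        -f flv rtmp://localhost/hls/$name;
        }
    }
}
```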
Sounds great. Is there any benefit for Linux computers that don't otherwise support aptX? Also, I'm wondering how it's possible to include the aptX codec, since its license terms conflict with the GPL?
https://patchwork.ffmpeg.org/patch/5879/
> Aptx support for linux with FFMpeg and bluez-alsa
https://github.com/Samt43/BluetoothAPTXForLinux https://github.com/Arkq/bluez-alsa/issues/92
But FFmpeg has a clean-room implementation, based on the (expired) EP0398973B1 patent and on reverse engineering of the binary library.
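With 4.0 you should be able to exercise that implementation directly from the command line; a sketch, assuming the build includes the aptx codec (filename and sample rate illustrative - the encoder expects stereo input):

```shell
# Encode a WAV file to raw aptX.
ffmpeg -i input.wav -ar 44100 -c:a aptx output.aptx
```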
The problem that I think the parent post is referring to is that mpv 0.28.0, which introduced Vulkan support, also introduced a hard dependency on FFmpeg APIs that hadn't been released until now (4.0). Linux distros prefer to use stable versions of packages, so most of them have been packaging FFmpeg 3.x and mpv 0.27.0. They can only upgrade to mpv 0.28.0 (with Vulkan support) now that FFmpeg 4.0 has been released.
For a personal project, I would like to generate videos to visualize the evolution of our git repository.
Is ffmpeg the best approach to programmatically create videos? What is the state of Java, Python, or Go bindings for such a use case?
Or should I use OpenGL for this particular use?
I'm new to this, so any help and guidance would be great for me to get started.
Thanks!
Here is a nice excerpt from a tutorial exercise in the book The Go Programming Language: http://www.informit.com/articles/article.aspx?p=2453564&seqN...
As an example, here's a video covering 22 years of the evolution of Python:
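Videos like that are typically produced by rendering frames with a dedicated visualizer and piping them into ffmpeg over stdin. A sketch, assuming gource (a tool that does exactly this kind of git-history animation) is installed:

```shell
# gource writes a PPM image stream to stdout; ffmpeg encodes it to H.264.
gource -1280x720 -o - | \
  ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
         -c:v libx264 -pix_fmt yuv420p gource.mp4
```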
I'm keen on building something like that and extending it to other use cases, like embedding photographs, milestones, and other major events involving our business unit.
> support LibreSSL (via libtls)
Wow, libtls! Nice.
- In normal mode, it calculates a (weighted) measure of the variance in pixel values.
- In diff mode, it calculates a (weighted) measure of the variance in the differences of pixel counts between neighbouring values (if 800 pixels have value 112 and 1400 pixels have value 113, the (abs) difference is 600).
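The diff-mode arithmetic can be sanity-checked on a toy histogram of value/count pairs - 800 pixels at value 112 and 1400 at value 113 give |1400 - 800| = 600:

```shell
# Sum the absolute differences between counts of neighbouring pixel values.
printf '112 800\n113 1400\n' |
  awk 'NR > 1 { d = $2 - c; s += (d < 0 ? -d : d) } { c = $2 } END { print s }'
```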
Someone posted a brilliant script in one of these ffmpeg posts but I can't find it for the life of me. I used it to create "trailers" of my media collection.
I wrote a script that cuts out clips of every sentence spoken, and builds them into example sentences to learn Chinese.
https://www.youtube.com/playlist?list=PLhIooD7mFhphhT5nDdhK0...
I would really like to test AV1 with it.
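Assuming a build with libaom, something along these lines should work (libaom-av1 was still marked experimental in 4.0, hence the -strict flag; expect very slow encodes at this point):

```shell
# Encode to AV1 with libaom in constant-quality mode.
ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -strict experimental output.mkv
```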