also at the metalevel, gnu parallel
Care to share yours?
SET filters="fps=%4,scale=%3:-1:flags=lanczos"
ffmpeg -v warning -i %1 -vf "%filters%,palettegen" -y palette.png
ffmpeg -v warning -i %1 -i palette.png -lavfi "%filters% [x]; [x][1:v] paletteuse" -y %2
DEL palette.png
togif.bat <input.mp4> <output.gif> <width> <fps>
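For Unix shells, here is a hedged bash equivalent of the batch script above (`togif.sh` is a hypothetical name; the filter-building helper is split out only so the filter string is easy to inspect):

```shell
#!/usr/bin/env bash
# togif.sh input.mp4 output.gif [width] [fps]  (hypothetical name)
# Same two-pass palettegen/paletteuse approach as the .bat script above.

build_filters() {
  # $1 = width, $2 = fps; returns the shared filter chain
  printf 'fps=%s,scale=%s:-1:flags=lanczos' "$2" "$1"
}

if [ "$#" -ge 2 ]; then
  filters=$(build_filters "${3:-480}" "${4:-15}")   # assumed defaults
  ffmpeg -v warning -i "$1" -vf "$filters,palettegen" -y palette.png
  ffmpeg -v warning -i "$1" -i palette.png -lavfi "$filters [x]; [x][1:v] paletteuse" -y "$2"
  rm -f palette.png
fi
```

Usage: `./togif.sh input.mp4 output.gif 480 15`.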
Extract audio from a video ffmpeg -i "path\to\my_input_video_file.mp4" "path\to\my_output_audio_only.wav"
Extract specific video and audio stream ffmpeg -i "path\to\my_input_video_file.mp4" -map 0:0 -c copy video.mp4 -map 0:1 -c copy audio0.m4a -map 0:2 -c copy audio1.m4a
Concatenate two or more video clips (echo file 'first file.mp4' & echo file 'second file.mp4' )>list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4
Convert 10-bit H.265 to 10-bit H.264 ffmpeg -i input -c:v libx264 -crf 18 -c:a copy output.mkv
Convert 10-bit H.265 to 8-bit H.265 ffmpeg -i input -c:v libx265 -vf format=yuv420p -c:a copy output.mkv
Convert 10-bit H.265 to 8-bit H.264 ffmpeg -i input -c:v libx264 -crf 18 -vf format=yuv420p -c:a copy output.mkv
If you use `-vn` and `-acodec copy` (I use both, although I'm not sure `-vn` is strictly necessary), you can demux the audio from the video in the same format it is already in. Of course, if you're extracting to WAV you're not transcoding anyway, but copying may be faster and use less space.
In my opinion, `ffmpeg -i filename.mp4 filename.wav` is one of the greatest known examples of powerful functionality with a simple interface.
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -vf "hwdownload,format=nv12,scale=(iw*sar)*max(1280/(iw*sar)\,720/ih):ih*max(1280/(iw*sar)\,720/ih), crop=1280:720, subtitles=input.srt:force_style='FontName='TiresiasScreenfont',Fontsize=32,Outline=2,MarginL=50,MarginV=30,Alignment=1',minterpolate=fps=60:mi_mode=blend" -c:v h264_nvenc -preset slow -b:v 8M -f matroska - | vlc -
What it does: streams content to a Chromecast via VLC, with subtitles rendered the way I like. I want subtitles in the Tiresias Screenfont used by Finnish YLE in the 90s, always aligned to the left on the second row so the starting point is always the same. Center alignment is bad because you have to re-adjust your eyes to wherever the subtitles start; left alignment keeps the first character in the same place.
* `-hwaccel cuvid -c:v h264_cuvid` - hardware-accelerated decoding (H.264 only)
* `-vf` - video filter
* `hwdownload,format=nv12` - downloads the hardware-accelerated frame to memory for the video filter (required by cuvid)
* `scale=(iw*sar)*max(1280/(iw*sar)\,720/ih):ih*max(1280/(iw*sar)\,720/ih), crop=1280:720` - scales and crops the video to 1280x720 (extremely high impact on performance!). Use the cuvid crop and resize options below for better performance.
* `subtitles=input.srt:force_style='FontName='TiresiasScreenfont',Fontsize=32,Outline=2,MarginL=50,MarginV=30,Alignment=1'` - subtitles
* `-c:v h264_nvenc -preset slow -b:v 8M` - hardware-accelerated encoding at 8000 kb/s for 720p/60 (for 1080p use `16M`)
* `-f matroska - | vlc -` - output as Matroska, piped to VLC
* `minterpolate=fps=60:mi_mode=blend` - 60 fps output with blend interpolation (this can cause problems; sometimes best avoided)
Crop and resize with cuvid example:
* `-hwaccel cuvid -crop 0x0x200x200` - faster way to crop top x bottom x left x right
* `-hwaccel cuvid -resize 1200x300` - resize (forces the size)
Seek example:
Seek to 30 minutes (60*30 = 1800)
ffmpeg -ss 1800 ... -vf "hwdownload,format=nv12,setpts=PTS+1800/TB,subtitles='...',setpts=PTS-STARTPTS"
Source: https://yle.fi/aihe/artikkeli/2012/01/27/televisiokanavien-t...
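The 60*30 arithmetic above, parameterized as a small shell sketch (`MIN` is an assumed variable name; the same value must feed both `-ss` and the `setpts` offset):

```shell
# Seek to MIN minutes; ffmpeg's -ss takes seconds.
MIN=30
SS=$(( MIN * 60 ))
echo "$SS"   # -> 1800
# ffmpeg -ss "$SS" ... -vf "...,setpts=PTS+${SS}/TB,subtitles='...',setpts=PTS-STARTPTS"
```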
Print subtitles (useful for old TVs that can't select a subtitle from streams or files; I use this to get my kids to watch movies in English):
-vf "ass=subtitle.ass"
or with .srt and in a huge yellow font -vf "subtitles=subtitles.srt:force_style='Fontsize=36,PrimaryColour=&H0000FFFF'"
Extract 1 second of video every 90 seconds (if you have very long footage of a trip from a dashcam and don't know what to do with it, this makes for a much shorter "souvenir"): -vf "select='lt(mod(t,90),1)',setpts=N/FRAME_RATE/TB" -af "aselect='lt(mod(t,90),1)',asetpts=N/SR/TB"
h264:
-c:v libx264 -preset medium -crf 22
h265:
-c:v libx265 -preset medium -crf 26
no recompress:
-c copy
presets:
ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow
deinterlace:
-vf yadif
target size:
-s 1920x1080
aspect ratio without recompressing:
-aspect 16:9
rotate video:
-vf "transpose=1"
0 = 90° counterclockwise and vertical flip (default)
1 = 90° clockwise
2 = 90° counterclockwise
3 = 90° clockwise and vertical flip
rotate without recompressing:
-metadata:s:v rotate="90"
audio aac two channels:
-c:a aac -b:a 160k -ac 2
web fast start:
-movflags +faststart
autoscale to WxH with black bands:
-vf "scale=W:H:force_original_aspect_ratio=decrease,pad=W:H:(ow-iw)/2:(oh-ih)/2"
get jpeg snapshot:
-vframes 1 -q:v 2 dest.jpg
concatenate mp4 without recompressing:
-f concat -safe 0 -i "files.txt" -c copy -movflags +faststart
files.txt format:
file 'filepath'
ffprobe get videoinfo:
ffprobe -v quiet -print_format xml -show_format -show_streams "filepath" > file.xml
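When you only need one field, ffprobe's `-show_entries` with CSV output is leaner than parsing XML; the `secs_to_hms` helper below is a hypothetical addition for pretty-printing the result:

```shell
# Duration in seconds, nothing else:
#   ffprobe -v error -show_entries format=duration -of csv=p=0 "filepath"

# Pretty-print a seconds value as HH:MM:SS.mmm (pure awk, no ffprobe needed)
secs_to_hms() {
  awk -v s="$1" 'BEGIN { printf "%02d:%02d:%06.3f", s/3600, (s%3600)/60, s%60 }'
}

secs_to_hms 4425.33   # -> 01:13:45.330
```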
to override which subtitle track is the default, use "-default_mode infer_no_subs"
clear disposition (default sub):
-disposition:s 0
default or forced disposition:
-disposition:s forced
track metadata (audio):
-metadata:s:a title="xx"
track metadata (video):
-metadata:s:v title="xx"
global metadata:
-metadata title="xx"
-metadata description="xx"
-metadata comment="xx"
extract sound from video to mp3:
ffmpeg -i source_video.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 sound.mp3
skip time (place after input file):
-ss 00:05:00
stop after:
-t 00:05:00
Approx fast seek (place before input file):
-ss 00:05:00 -noaccurate_seek -i ....
ffmpeg -i 'input.mkv' -filter_complex '[0:a:1]volume=0.1[l];[0:a:0][l]amerge=inputs=2[a]' -map '0:v:0' -map '[a]' -c:v copy -c:a libmp3lame -q:a 3 -ac 2 'output.mp4'
[0] https://blog.nytsoi.net/2017/12/31/ffmpeg-combining-audio-tr...
ffmpeg -hide_banner -loglevel error -nostdin -y -i "$f" -map 0:0 -c:v copy -map 0:a:0? -c:a:0 aac -b:a:0 160k -filter:a:0 "pan=stereo|FL=1.414*FC+0.707*FL+0.5*FLC+0.5*BL+0.5*SL+0.5*LFE|FR=1.414*FC+0.707*FR+0.5*FRC+0.5*BR+0.5*SR+0.5*LFE,acompressor=ratio=4" -metadata:s:a:0 title="NightMixed" -metadata:s:a:0 language=eng -disposition:a:0 default -map 0:a:0? -c:a:1 copy -disposition:a:1 none -map 0:a:1? -c:a:2 copy -disposition:a:2 none -map 0:a:2? -c:a:3 copy -disposition:a:3 none -map 0:a:3? -c:a:4 copy -disposition:a:4 none -map 0:a:4? -c:a:5 copy -disposition:a:5 none -map 0:a:5? -c:a:6 copy -disposition:a:6 none -map 0:a:6? -c:a:7 copy -disposition:a:7 none -map 0:a:7? -c:a:8 copy -disposition:a:8 none -map 0:a:8? -c:a:9 copy -disposition:a:9 none -map 0:s:0? -c:s:0 copy -map 0:s:1? -c:s:1 copy -map 0:s:2? -c:s:2 copy -map 0:s:3? -c:s:3 copy -map 0:s:4? -c:s:4 copy -map 0:s:5? -c:s:5 copy -map 0:s:6? -c:s:6 copy -map 0:s:7? -c:s:7 copy -map 0:s:8? -c:s:8 copy -map 0:s:9? -c:s:9 copy "/home/pi/media/remixed_in_progress/$f"
I mean, it's never done in a way that doesn't produce quiet dialog that makes you raise the volume and loud sounds that make your eardrums bleed. Even basic audio normalization would produce half-decent results, but instead we get crazy dynamic contrast by default and constant volume switching.
ffmpeg -i source.avc -c copy -bsf:v h264_metadata=colour_primaries=1:matrix_coefficients=1 output.h264
https://ffmpeg.org/ffmpeg-bitstream-filters.html#h264_005fme...
https://www.itu.int/rec/T-REC-H.264-201906-I/en
function grab-screen-lossless {
ffmpeg -an -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 -c:v h264_nvenc -preset llhq -tune zerolatency -crf 0 -qp 0 "${1}"
}
function grab-screen {
ffmpeg -an -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 -c:v h264_nvenc -preset llhq -tune zerolatency -qp 10 "${1}"
}
function vid_compress {
ffmpeg -i "${1}" -codec:v libx264 -preset:v fast -pix_fmt yuv420p "${2}"
}
Unfortunately both these commands create a somewhat jerky video on my RX 550 on Linux Mint and drive the CPU crazy (Ryzen 3500).
I guess it's my underpowered graphics card, but is there any way to tweak your grab-screen-lossless for a smoother capture?
# 1. Record your device using QuickTime
# (File->New Movie Recording->Select your phone)
# 2. Run `$ app-preview your-recording.mov`
function app-preview() {
echo "name $1"
ffmpeg -i "$1" -vf scale=1080:1920,setsar=1 -c:a copy "out_$1"
}
#!/bin/bash
#Creates a Directory
mkdir -p titles
#loop for files in folder
for f in *.MOV; do
text="${f%.MOV}"
# Uses drawtext
ffmpeg -i "$f" -vf drawtext="fontfile=/usr/share/fonts/opentype/NotoSansCJK-Regular.ttc : \
text=$text : fontcolor=white: fontsize=82: box=1: boxcolor=black@0.5: \
boxborderw=20: x=(w-text_w)/2:y=h-th-40 :enable='between(t,0,5)'" -codec:a copy "./titles/$f"
done
# Other options:
# Bottom center: x=(w-text_w)/2:y=h-th (with 10 px padding: x=(w-text_w)/2:y=h-th-20)
# centered: x=(w-text_w)/2: y=(h-text_h)/2
This is my trim tool to cut beginnings and/or endings of files
#!/bin/bash
mkdir -p trimmed
duration=$(ffmpeg -i "$1" 2>&1 | grep "Duration"| cut -d ' ' -f 4 | sed s/,//)
length=$(echo "$duration" | awk '{ split($1, A, ":"); print 3600*A[1] + 60*A[2] + A[3] }' )
trim_start="$2"
trim_end=$(echo "$length" - "$3" - "$trim_start" | bc)
ffmpeg -ss "$trim_start" -i "$1" -c copy -map 0 -t "$trim_end" "./trimmed/$1"
# Example # >trim example.MOV 0.2 10
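The duration parsing in the trim script can be checked in isolation; the awk expression converts ffmpeg's HH:MM:SS.cc timestamp into seconds (`hms_to_secs` is a hypothetical helper name wrapping the same awk):

```shell
hms_to_secs() {
  # Mirrors the awk in the trim script: HH:MM:SS(.fraction) -> seconds
  echo "$1" | awk -F: '{ print 3600*$1 + 60*$2 + $3 }'
}

hms_to_secs 00:05:30.50   # -> 330.5
```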
# Save a RTSP stream to file
ffmpeg -i rtsp://someserver/somevideo -c copy out.mp4
# encoding quality settings examples
# H264
ffmpeg -i input.mp4 -c:a copy -c:v libx264 -crf 22 -preset slower -tune film -profile:v high -level 4.1 output.mp4
# H264 10bit (must switch to a different libx264 version)
LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/x264-10bit ffmpeg -i input.mp4 -c:a copy -c:v libx264 -crf 22 -preset slower -tune film -pix_fmt yuv420p10le -profile:v high10 -level 5.0 output.mp4
# resize
ffmpeg -i in.mp4 -s 480x270 -c:v libx264 out.mp4
# change source fps
ffmpeg -r 10 -i in.mp4 out.mp4
# resample fps
ffmpeg -i in.mp4 -r 10 out.mp4
# extract the audio from a video
ffmpeg -i inputfile.mp4 -vn -codec:a copy outputfile.m4a
# merge audio and video files
ffmpeg -i inputfile.mp4 -i inputfile.m4a -codec copy outputfile.mp4
# cut (reencoding)
# set the start time and duration as HH:MM:SS
ffmpeg -ss 00:00:40.000 -i input.mp4 -t 00:00:10 -c:v libx264 output.mp4
# set the start time and duration as seconds
ffmpeg -ss 40.0 -i input.mp4 -t 10.0 -c:v libx264 output.mp4
# skip an exact number of frames at the start (100)
ffmpeg -i input.mp4 -vf 'select=gte(n\,100)' -c:v libx264 output.mp4
# cut (w/o reencoding - cut times will be approximate)
ffmpeg -ss 40.0 -i input.mp4 -t 10.0 -c copy output.mp4
# save all keyframes to images
ffmpeg -i video.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr video-%03d.png
# encode video from images (image numbering must be sequential)
ffmpeg -r 25 -i image_%04d.jpg -vcodec libx264 timelapse.mp4
# flip image horizontally
ffmpeg -i input.mp4 -vf hflip -c:v libx264 output.mp4
# crop and concatenate 3 videos vertically
ffmpeg -i cam_4.mp4 -i cam_5.mp4 -i cam_6.mp4 -filter_complex "[0:v]crop=1296:432:0:200[c0];[1:v]crop=1296:432:0:200[c1];[2:v]crop=1296:432:0:230[c2];[c0][c1][c2]vstack=inputs=3[out]" -map "[out]" out.mp4
# 2x2 mosaic (all inputs must be same size)
ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -i video4.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[row1];[2:v][3:v]hstack=inputs=2[row2];[row1][row2]vstack=inputs=2[out]" -map "[out]" out.mp4
# picture-in-picture
ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "[1:v]scale=iw/3:ih/3[pip];[0:v][pip]overlay=main_w-overlay_w-20:main_h-overlay_h-20[out]" -map "[out]" out.mp4
# print framerate of every file in dir
for f in *.mp4; do echo $f; mediainfo $f|grep "Frame rate"; done
# print selected info of every file in dir in CSV format
for f in *.mp4; do echo -n $f,; mediainfo --Inform="Video;%Duration%,%FrameCount%,%FrameRate%" $f; done
Wow, I didn't know there was a "keyframes" feature in FFmpeg. This is awesome, thanks for sharing.
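If mediainfo isn't installed, ffprobe can report the framerate too; it prints it as a ratio like `30000/1001`, which a small awk helper (hypothetical name) can turn into a decimal:

```shell
# Per-file framerate via ffprobe (value only):
#   ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 "$f"

fps_from_ratio() {
  echo "$1" | awk -F/ '{ printf "%.3f", $1 / $2 }'
}

fps_from_ratio 30000/1001   # -> 29.970
```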
fd -t f -e flac . "$@" -x ffmpeg -loglevel 16 -i "{}" -qscale:a 0 "{.}.mp3"
ffmpeg -f lavfi -i testsrc=d=60:s=1920x1080:r=24,format=yuv420p -f lavfi -i sine=f=440:b=4 -b:v 1M -b:a 192k -shortest output-testsrc.mp4
ffmpeg -f lavfi -i testsrc2=d=60:s=1920x1080:r=24,format=yuv420p -f lavfi -i sine=f=440:b=4 -b:v 1M -b:a 192k -shortest output-testsrc2.mp4
ffmpeg -f lavfi -i smptebars=d=60:s=1920x1080:r=24,format=yuv420p -f lavfi -i sine=f=440:b=4 -b:v 1M -b:a 192k -shortest output-smptebars.mp4
$ cat ~/bin/ffmpeg_grab_desktop.sh
#!/bin/bash
size=${1:-"1920x1080"}
offset=${2:-"0,0"}
name=${3:-"video"}
ffmpeg -video_size $size -framerate 25 -f x11grab -i :0.0+$offset -c:v libx264 -crf 0 -preset ultrafast "$name.mkv"
ffmpeg -i "$name.mkv" -movflags faststart -pix_fmt yuv420p "$name.mp4"
# H264
$ ffmpeg -i input.avi -c copy -c:v libx264 -crf 18 -preset slow output.mkv
# H265 -c:v libx265 -crf 22
Append Subtitles $ ffmpeg -i input.mkv -i input.srt -map 0 -map 1 -c copy output.mkv
Concat Videos $ ls *.mkv > mylist.txt
$ vim mylist.txt # append each line with "file "
$ ffmpeg -f concat -i mylist.txt -c copy merged-pre-subbed.mkv
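The manual vim step above can be automated; a sed sketch that wraps each filename in the `file '...'` form the concat demuxer expects (`wrap_concat` is a hypothetical helper; filenames containing single quotes would need extra escaping):

```shell
# Instead of hand-editing mylist.txt in vim:
#   ls *.mkv | wrap_concat > mylist.txt
wrap_concat() { sed "s/^/file '/; s/\$/'/"; }

printf 'a.mkv\nb.mkv\n' | wrap_concat
# -> file 'a.mkv'
#    file 'b.mkv'
```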
Probe size issues? $ ffmpeg -analyzeduration 2147483647 -probesize 2147483647 ...
Deinterlace -vf yadif
Metadata Examples ffmpeg -i foo.avi -i commentary.mp3 -map 0 -map 1 -c:a copy -c:v libx264 -crf 18 -preset fast -metadata:s:a:1 title="Commentary" -metadata:s:a:0 language=eng funk.mkv
Autocrop $ ffplay -i foo.mkv -vf "cropdetect=24:16:0"
<let it run for a minute>
[Parsed_cropdetect_0 @ 0x7ff093c65c00] x1:0 x2:1919 y1:132 y2:947 w:1920 h:816 x:0 y:132 pts:30155 t:30.155000 crop=1920:816:0:132
-filter:v:0 "crop=1920:816:0:132"
# Example output file.
f=/tmp/output.mp4
# Example video resolution.
g=1920x1080
# Example capture framerate
fr=4
# Example X11 display
d=$DISPLAY
# Simple screencast.
#
# Try adjusting the libx264 CRF from 15 to some greater number, as long
# as there is no visible effect on video quality.
#
# If increasing the capture framerate, you may also wish to use a
# faster preset.
ffmpeg -probesize 50M -f x11grab -video_size "$g" -framerate "$fr" -i "$d" \
-c:v libx264 -crf 15 -preset veryslow "$f"
# Simple screencast without drawing the pointer/cursor.
ffmpeg -probesize 50M -f x11grab -video_size "$g" -framerate "$fr" -draw_mouse 0 -i "$d" \
-c:v libx264 -crf 15 -preset veryslow "$f"
# See what devices are available for capturing sound.
arecord -l
# Select a device.
audioCaptureDevice=hw:0
# List some permitted parameters associated with device 0.
#
# We are interested in the "FORMAT", "CHANNELS", and "RATE" parameters
# for use in the ffmpeg command.
arecord --dump-hw-params -D "$audioCaptureDevice"
audioSampleFormat=pcm_s32le
audioNumChannels=2
audioRate=44100
f=/tmp/output.mp3
# Record sound.
ffmpeg -thread_queue_size 8192 -f alsa -channels "$audioNumChannels" -sample_rate "$audioRate" \
-c:a "$audioSampleFormat" -ar "$audioRate" -i "$audioCaptureDevice" "$f"
f=/tmp/output.mkv
# Screencast with sound.
#
# Note that there seems to be an FFmpeg bug where the audio in the last
# 15 seconds of the video is cut off. The workaround is to record for
# 15 extra seconds, and then cut the extra video.
ffmpeg -probesize 50M -f x11grab -video_size "$g" -framerate "$fr" -i "$d" \
-thread_queue_size 8192 -f alsa -channels "$audioNumChannels" -sample_rate "$audioRate" \
-c:a "$audioSampleFormat" -ar "$audioRate" -i "$audioCaptureDevice" \
-c:a flac -c:v libx264 -crf 17 "$f"
Convert AVI to MP4
----------------------------
ffmpeg -i input.avi -c:v libx264 -crf 19 -preset slow -c:a libvo_aacenc -b:a 192k -ac 2 out.mp4
Convert MP4 to GIF
---------------------------
mkdir frames
ffmpeg -i video.mp4 -r 5 'frames/frame-%03d.jpg'
cd frames
convert -delay 20 -loop 0 -layers Optimize *.jpg myimage.gif
ffmpeg -i snow.mp4 -c:v libx265 -preset slow -crf 28 -tag:v hvc1 -c:a aac -ac 2 -b:a 128k -vf scale=1280:-1 snow.mov
# Merge video from one file, audio from another, trimmed
ffmpeg -i video.mp4 -i audio.mp4 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -to 00:00:41 merged.mp4
# Trim a video
ffmpeg -ss 00:06:58 -to 00:08:26 -i TeSiJmLoJd0.webm bbf.mp4
ffmpeg -i input.mp4 -vf "drawtext=fontfile=font.ttf: text=%{n}: x=48: y=48: fontcolor=white: box=1: boxcolor=0x00000099: fontsize=72" -y output.mp4
Remove audio track from video: ffmpeg -i input.mp4 -c copy -an output.mp4
ffmpeg -i $1 -vf scale=320:-1:flags=lanczos,fps=10 /tmp/frames/ffout%03d.png
convert -loop 0 /tmp/frames/ffout*.png $1.gif
flac2mp3:
parallel-moreutils -i -j$(nproc) ffmpeg -i {} -qscale:a 0 {}.mp3 -- ./*.flac
rename .flac.mp3 .mp3 ./*.mp3
screencast:
TIMESTAMP=`date +"%Y-%m-%d_%H"`
TMP_AVI=$(mktemp /tmp/outXXXXXXXXXX.avi)
ffcast -s % ffmpeg -y -f x11grab -show_region 1 -framerate 15 \
-video_size %s -i %D+%c -codec:v huffyuv \
-vf crop="iw-mod(iw\\,2):ih-mod(ih\\,2)" $TMP_AVI \
&& convert -set delay 10 -layers Optimize $TMP_AVI /home/nemo/Desktop/GIFs/$TIMESTAMP.gif
split-audio-by-chapters[0] to split an audiobook from Audible into multiple files by chapter.
split-by-audible[1] to split an audiobook from Audible into multiple files by chapter, using the timestamps copied from the Audible Web Player.
split-by-silence[2] to split audiobooks by silence instead.
audiobook2video[3] to generate a video file from an audiobook. Puts up a nice cover.
[0]:https://github.com/captn3m0/Scripts/blob/master/split-audio-...
[1]: https://github.com/captn3m0/Scripts/blob/master/split-by-aud...
[2]: https://github.com/captn3m0/Scripts/blob/master/split-by-sil...
[3]: https://github.com/captn3m0/Scripts/blob/master/audiobook2vi...
Example: Start at 52 seconds, take 10 minutes after that:
ffmpeg -ss 52 -i input.mp4 -c copy -t 00:10:00.0 output.mp4
ffmpeg -i foo.mp4 -c:v h264_videotoolbox -b:v 1600k foo_out.mp4
On macOS, this uses hardware acceleration to re-encode a video at a lower bitrate. My MacBook is from 2012, so this makes a notable difference. There's also "hevc_videotoolbox" for H.265 if your machine supports it.
To capture a screenshot every 600 frames:
ffmpeg -i my_vid.mp4 -vf "select=not(mod(n\,600))" -vsync vfr -q:v 15 img_%03d.jpg
To make a montage that's 1024 px wide:
for d in *
do
(montage -mode concatenate -tile 4x -resize 1024x "$d"/* "$d".jpg)
done
ffmpeg -r 1 -loop 1 -i 1.png -i 1.wav -c:v libvpx -c:a libvorbis -b:a 64k -shortest out.webm (clearly this is old)
-max_muxing_queue_size 2048 (magically fixes some errors and microscopically increases quality, a no-brainer on machines with more than token amounts of RAM)
Every damn time I have to export from after effects and need a file I can actually open.
The original audio files of the dub were at this point still available on archive.org. The problem is that the second audio file is not meant to play directly after the first - this was back in the days of CD players, so halfway through the movie the dub instructs you to begin playing the second CD once the next scene starts. The other problem is that the second file is louder than the first.
Most sources I saw online said to insert a gap of three seconds to account for the delay, and didn’t have a solution for the difference in volume. I wanted to be more precise.
First, I found the exact start time of the scene where the second audio track begins:
ffprobe hp.mkv
...
Chapter #0:18: start 4428.882000, end 4817.521000
Metadata:
title : Chapter 19
...
Then I compared this with the duration of the first audio track:
ffprobe wiz1.mp3
...
Duration: 01:13:45.33, start: 0.000000, bitrate: 66 kb/s
...
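The subtraction can be scripted with the values copied from the ffprobe output above (note that the printed duration is rounded to centiseconds, so the scripted result differs slightly from the more precise timestamps the author used):

```shell
chapter_start=4428.882   # Chapter 19 start, from ffprobe hp.mkv
# Convert the printed HH:MM:SS.cc duration to seconds
dur=$(echo 01:13:45.33 | awk -F: '{ printf "%.2f", 3600*$1 + 60*$2 + $3 }')
gap=$(awk -v a="$chapter_start" -v b="$dur" 'BEGIN { printf "%.3f", a - b }')
echo "$gap"   # -> 3.552 with the rounded duration shown above
```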
The difference between these time stamps gave the actual delay of 3.582 seconds.
I then compared the maximum audio levels of the two tracks to determine how much to increase the first track's volume (there are more advanced features in FFmpeg for volume normalization, but I just wanted to remove the potential for eardrum damage at the start of Chapter 19 and keep things otherwise as similar as possible):
ffmpeg -i wiz1.mp3 -af volumedetect -f null -
[Parsed_volumedetect_0 @ 000001ecf871c1c0] max_volume: -11.1 dB
ffmpeg -i wiz2.mp3 -af volumedetect -f null -
[Parsed_volumedetect_0 @ 000001858b881880] max_volume: -3.6 dB
This gave me a volume increase of 7.5 dB for the first track.
Once I had these numbers, it was time for the one-liner to adjust the first track's volume, concatenate the two tracks with the gap of silence, and mux them with the video from the movie:
ffmpeg -i hp.mkv -i wiz1.mp3 -i wiz2.mp3 -filter_complex "[1]volume=7.5dB[wiz1];aevalsrc=0:duration=3.582:sample_rate=22050[gap];[wiz1][gap][2]concat=n=3:v=0:a=1,apad[wiz]" -map 0 -map "[wiz]" -shortest -c:v copy -c:a flac -sample_fmt s16 -f matroska wiz.mkv
for i in *.webm ; do echo "$i"; ffmpeg -i "$i" -acodec copy -vn "${i%.*}.opus" ; done
Nothing fancy.
ffmpeg -i input.mov -vf "fps=10,scale=600:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 output.gif
ffmpeg -i ${1} -sn -an -c:v libx264 -r 24 -filter:v "setpts=10*PTS" -crf 19 ${1}-10x.mkv
reverse video
ffmpeg -i $IFILE -s 3840x2160 -f image2 -q:v 1 -vf format=yuvj420p $tmpdir/p%08d.jpg
cat $(ls -t $tmpdir/p*.jpg) | ffmpeg -f image2pipe -c:v mjpeg -r 30 -i - -c:v libx264 -crf 18 -vf format=yuv420p -preset slow -f mp4 $OFILE
loop a video:
loopcount=$1
ffmpeg -stream_loop "$loopcount" -i "$2" -c copy "$3".mkv
> LEGO Racers ALP (.tun & .pcm) muxer
Maybe that just shows off my ignorance, but reading the changelogs (current and past), I never realized that ffmpeg contains so many "niche" features.
While working with ffmpeg over the years, I always thought that FFmpeg should have a simple-to-use UI. I have recently started working on a desktop (Electron) tool that wraps ffmpeg scripts in an easy-to-use interface.
Here is a video showing the tool in action https://www.youtube.com/watch?v=qqqiK6YnJu8
Assuming that you actually meant video content, I think your question may be a bit misguided on the nuances and goals of video encoding. Video encoding can be both lossy and lossless. Lossless video encoding isn't particularly interesting in most cases, but I do believe that HEVC (H.265) will usually come out slightly smaller. However anything to do with encoding will always vary based on the actual source content. So partial answer to your question would probably be x265, but it depends. Based on the source you could construct theoretical content that could be better tuned to one or the other's encoding strengths.
Where it gets interesting is in lossy encoding. With lossy encoding you seek to retain visual acuity to a certain standard while minimizing size and/or processing requirements. Both codecs do an excellent job at removing the right amount of information to effectively fool the human observer. With lossy encoding there isn't really a filesize difference, as you tune your filesize to whatever you want given your source and your desired output constraints. The big feature of AV1 is that it is open and unencumbered by patents and royalties, and will hopefully therefore become THE industry standard in the coming years. Its openness also makes it more likely to eventually be ubiquitous, as it should be implementable and playable on most new video platforms and hardware, and hopefully the mythical one format that just works everywhere.
AV1 100 MiB
H.265 500 MiB
H.264 1.5 GiB
At this point, the audio codecs and track count start making a difference, so this isn't really a fair comparison. And BTW, in terms of video quality in the files above, AV1 > H.265 > H.264.
In non-animated content, the difference is less impressive, but comes out to about 20% in favor of AV1 vs HEVC, in my subjective-and-not-rigorously-benchmarked opinion. But "video quality" is subjective anyway.
https://trac.ffmpeg.org/ticket/7037
I really hoped they could complete this behemoth of a task.
But what to do with such a person if you can't/won't ban them from posting? Delete his comments? Don't engage until he tires himself out?
Microsoft Paint (MSP) version 2 decoder
Microsoft Paint (MSP) demuxer
Just in time!
Does this need specific hardware, or can any VDPAU-capable GPU now decode HEVC?
Video Hub App - https://videohubapp.com/
MIT open source too: https://github.com/whyboris/Video-Hub-App
Do I need to wait on the Brew people to upgrade or did I make a mistake?
Also, I have no idea which codecs have native encoding and decoding on an M1 chip, but I hope H.265 is included.