fuss:ffmpeg [2025/06/11 02:17] (current) – office
</code>

Note that this preserves the FLAC files and does not delete them after converting.

====== Lowering the Quality of Movies ======
where ''

Sometimes this will not work because the width and height are not divisible by ''2''. In that case, the dimensions can be forced to even values:
<code bash>
ffmpeg -i input.mp4 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
</code>
which will round the width and the height down to the nearest even values.

Another alternative is to use ''-2'' for one of the ''scale'' dimensions, which makes ffmpeg pick an even value for that dimension automatically:
<code bash>
ffmpeg -i input.mp4 -vf scale=iw/2:-2 output.mp4
</code>
and it should make sure that the output is divisible by 2 (provided that the width expression itself yields an even number).
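The even-rounding that expressions such as ''trunc(iw/2)*2'' perform inside a ''scale'' filter is plain integer arithmetic, which can be sketched in shell (the ''even'' helper and the sample dimensions below are made up for illustration):

<code bash>
# Round a pixel dimension down to the nearest even number, mirroring
# what trunc(n/2)*2 computes in an ffmpeg scale expression.
even() { echo $(( $1 / 2 * 2 )); }

even 1279   # odd width   -> prints 1278
even 720    # even height -> prints 720
</code>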

====== Speed-up Movies ======
  * ''
  * ''

====== Lowest Common Denominator Settings Compatible with All Sites ======

<code bash>
ffmpeg \
-i $INPUT_FILE \
-c:v libx264 -crf 23 -profile:v baseline -level 3.0 -pix_fmt yuv420p \
-c:a aac -ac 2 -b:a 128k \
-movflags faststart \
-tune zerolatency \
output.mp4
</code>

Or, with Intel QSV acceleration:
<code bash>
ffmpeg \
-hwaccel qsv \
-hwaccel_output_format qsv \
-i $INPUT_FILE \
-c:v h264_qsv -profile:v baseline -level 3.0 -pix_fmt nv12 \
-c:a aac -ac 2 -b:a 128k \
-movflags +faststart+separate_moof \
output.mp4
</code>

====== Capture WebCam to Framebuffer ======

The following command will capture video from ''/dev/video0'' and display it directly on the framebuffer device ''/dev/fb0'':
<code bash>
ffmpeg -f v4l2 -video_size 320x240 -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0
</code>

The command can be used, for example, to display a webcam on the screen directly, without needing to install the X window system.

====== Adding Subtitles to Videos ======

  * as optional subtitles that the player can toggle on and off:
<code bash>
ffmpeg -i "movie.mp4" -i "movie.srt" -c copy -c:s mov_text "output.mp4"
</code>
where:
  * ''movie.mp4'' is the input video file,
  * ''movie.srt'' is the subtitle file,
  * ''output.mp4'' is the resulting video with the subtitles muxed in as a separate stream;
  * as subtitles burnt into the video (requires ''libass'' support in ffmpeg):
<code bash>
ffmpeg -i movie.mp4 -vf subtitles=movie.srt output.mp4
</code>
or with the ''ass'' filter, after first converting the subtitles:
<code bash>
ffmpeg -i movie.srt movie.ass
ffmpeg -i movie.mp4 -vf ass=movie.ass output.mp4
</code>
where:
  * ''movie.srt'' is the original subtitle file,
  * ''movie.ass'' is the converted subtitle file,
  * ''output.mp4'' is the resulting video with the subtitles drawn onto the frames.

====== Concatenating or Merging Multiple Files ======

Given several files such as:
  * ''1.mkv''
  * ''2.mkv''
  * ''3.mkv''

the files can be concatenated together into one large merged movie by following these steps:
  * create a list of files with the format specific to the ''concat'' demuxer:
<code bash>
for i in $(find . -name \*.mkv); do echo "file '$i'" >> list.txt; done
</code>
  * merge the files together with ''ffmpeg'':
<code bash>
ffmpeg -loglevel info -f concat -safe 0 -i list.txt -c copy "output.mkv"
</code>
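For reference, the list consumed by the ''concat'' demuxer is just a text file with one ''file'' directive per line. A quick sketch with empty placeholder files (the file names are made up) shows the format the loop above produces:

<code bash>
# Create empty placeholder files just to demonstrate the list format.
mkdir -p demo && cd demo
touch 1.mkv 2.mkv 3.mkv
# Build the list in the format expected by the concat demuxer.
for i in $(find . -name \*.mkv | sort); do echo "file '$i'" >> list.txt; done
cat list.txt
</code>

The resulting ''list.txt'' contains one line per clip, in order; relative paths such as these are why ''-safe 0'' is passed to ffmpeg above.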

====== Normalizing the Size of Video Clips ======

Sometimes it is necessary to normalize the size of multiple video clips, for example, when clips extracted from various sources have to end up with exactly the same dimensions.

The following command:
<code bash>
for i in *.mp4; do ffmpeg -i "$i" -vf "crop=ih*16/9:ih,scale=1280:720" "converted/$i"; done
</code>
will batch-change the size of all MP4 files in the current directory and store the results inside a ''converted/'' subdirectory, which must exist beforehand (the crop and scale parameters are examples and should be adjusted to the desired target size).

This method is called crop-and-scale, meaning that the video clip is first cropped and then scaled to a fixed size. The only drawback in doing this is that, whilst the size of the video clips will be the same for all the clips, the content might be trimmed or distorted depending on the original video clip.
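As a rough illustration of the crop-and-scale arithmetic, assuming a hypothetical 16:9 target: a crop filter of the form ''crop=ih*16/9:ih'' derives the crop-window width from the input height, which plain shell arithmetic can preview:

<code bash>
# Hypothetical example: width of a 16:9 crop window for an input
# that is 800 pixels tall (as crop=ih*16/9:ih would compute).
IH=800
CROP_W=$(( IH * 16 / 9 ))
echo "$CROP_W"   # prints 1422 (integer arithmetic truncates)
</code>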

====== Determining if A Video File has Fast Start Enabled ======

Issue:
<code bash>
ffmpeg -v trace -i FILE 2>&1 | grep -e "type:'mdat'" -e "type:'moov'"
</code>
where:
  * ''FILE'' is the path to the video file to check.

This will yield output similar to the following:
<code>
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x...] type:'moov' parent:'root' sz: ...
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x...] type:'mdat' parent:'root' sz: ...
</code>

If ''moov'' appears before ''mdat'', then the file has fast start enabled; otherwise the ''moov'' atom sits at the end of the file and playback cannot begin before the whole file has been retrieved.
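The moov-before-mdat ordering can also be checked by comparing raw byte offsets. The following sketch uses GNU ''grep'' on a synthetic file (''fake.bin'' and its byte layout are fabricated purely for the demonstration; a real MP4 would be scanned the same way):

<code bash>
# Fabricate a file where the 'moov' marker precedes the 'mdat' marker.
printf 'ftypxxxxmoovyyyyyyyymdatzzzz' > fake.bin
# grep -b reports the byte offset of each match; take the first of each.
MOOV=$(grep -abo moov fake.bin | head -n 1 | cut -d: -f1)
MDAT=$(grep -abo mdat fake.bin | head -n 1 | cut -d: -f1)
if [ "$MOOV" -lt "$MDAT" ]; then echo "fast start enabled"; else echo "fast start disabled"; fi
</code>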

====== Windows 7 Compatible Builds ======

It seems that FFMpeg version 6.1.1 is a version that is still suitable for Windows 7, whereas higher version numbers might crash with a memory access violation error.

====== Increase Input Buffering When Reading from a Pipeline ======

When ffmpeg is reading from a pipeline, for instance by using the ''http'' protocol to read a remote MPEG-TS stream, a command such as:
<code bash>
ffmpeg -y -i http://... output.mp4
</code>

results in several small errors along the lines of:
<code>
frame= 1015 fps= 11 q=21.0 size=    4352kB time=00:
[mpegts @ 0x55f2c471b2c0] Packet corrupt (stream = 1, dts = 4597918858).
pipe:: corrupt input packet in stream 1
    Last message repeated 2 times
[mp2 @ 0x55f2c4755ec0] Header missing
Error while decoding stream #0:1: Invalid data found when processing input
[mpeg2video @ 0x55f2c4754300] ac-tex damaged at 30 23
[mpeg2video @ 0x55f2c4754300] Warning MVs not available
[mpeg2video @ 0x55f2c4754300] concealing 585 DC, 585 AC, 585 MV errors in I frame
pipe:: corrupt decoded frame in stream 0
frame= 1226 fps= 13 q=19.0 size=    4864kB time=00:
</code>
being printed to the console output.

Intuitively, these are buffer-underrun errors that are due to the small internal ffmpeg buffer size. In order to fix these issues, specify a larger queue size on the command line:
<code bash>
ffmpeg -thread_queue_size 8192 -y -i http://... output.mp4
</code>
where:
  * ''8192'' is the number of packets that can be queued for the input; increase the value if the warnings persist.

Fortunately, ffmpeg prints a warning whenever the queue is still too small:
<code>
Thread message queue blocking; consider raising the thread_queue_size option (current value: 8192)
</code>

such that on the next invocation the ffmpeg command parameters can be adjusted and the queue increased.

====== Strategies for Recording Live Webcam Streams ======

Webcams are cheap equipment these days, with various performance issues and quirks for every producer out there. There are some general guidelines that should be minded when recording live webcam streams.

===== Transcoding =====

The immediate *nix reflex is to jump onto "transcoding" the stream; however, in most cases no transcoding is needed at all, because the stream can simply be copied without re-encoding.

For instance, running the following command records the stream without touching either the video or the audio codec:
<code bash>
ffmpeg -i rtsp://... -c:v copy -c:a copy out.mkv
</code>
Or, perhaps with some small streaming optimizations that affect neither CPU nor GPU load, because they just change the way markers are added to the saved video files:
<code bash>
ffmpeg -i rtsp://... -c:v copy -c:a copy -movflags +faststart+separate_moof -tune zerolatency out.mkv
</code>

The former commands will generate zero CPU or GPU overhead, whilst recording an RTSP stream that is already provided with universal codecs. When in doubt, ''-c:v copy -c:a copy'' should be the first thing to try.

===== Annotations =====

Video editing tools typically have the ability to draw on top of the video. Even "ffmpeg" can draw text on top of the video, by using the ''drawtext'' filter:
<code bash>
ffmpeg -i rtsp://... -c:v libx264 -c:a copy -vf "drawtext=text='annotation':x=10:y=10:fontcolor=white" out.mkv
</code>
and now "ffmpeg" has to re-encode the stream with ''libx264'' just to draw some text, losing the cheap stream-copy recording (the ''drawtext'' parameters above are only an example).

However, if just annotations are needed, subtitles could be used instead, such that a subtitle file is generated in parallel to the recording of the live stream and then read automatically when the recorded file is loaded.

Not only are subtitles more efficient, but it also seems fairly canonical to have subtitles (or annotations) separate from the video recording instead of just drawing the text onto the video. If this strategy is adopted, the files are loaded together by any media player, but they can also be read separately.
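A minimal sketch of that strategy (the file names, the 5-second cue duration, and the ''annotate'' helper are all assumptions, not existing tooling): a small shell function appends a timed SRT cue for every annotation, relative to the moment the recording started:

<code bash>
#!/bin/sh
# Hypothetical helper: append one SRT cue to movie.srt, timed against
# the moment this script (and, presumably, the recording) started.
START=$(date +%s)
INDEX=0

annotate() {
    NOW=$(( $(date +%s) - START ))
    END=$(( NOW + 5 ))                  # assumed cue duration: 5 seconds
    INDEX=$(( INDEX + 1 ))
    printf '%d\n%02d:%02d:%02d,000 --> %02d:%02d:%02d,000\n%s\n\n' \
        "$INDEX" \
        $(( NOW / 3600 )) $(( NOW % 3600 / 60 )) $(( NOW % 60 )) \
        $(( END / 3600 )) $(( END % 3600 / 60 )) $(( END % 60 )) \
        "$1" >> movie.srt
}

annotate "recording started"
</code>

If the recording is saved as ''movie.mkv'', most media players will pick up ''movie.srt'' automatically because the base names match.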

===== Node-Red Flow =====

{{fuss:

The following flow uses two nodes, one used to start and the other to stop the recording.

<code json>
[{"