  
Note that this preserves the FLAC files; they are not deleted after conversion.
  
====== Lowering the Quality of Movies ======
where ''iw'' stands for ''input width'' which is then divided by ''4''. The ''-1'' indicates that ''ffmpeg'' should preserve the aspect ratio and scale the height accordingly.
  
Sometimes this will not work because the width and height are not divisible by ''2'', as required by the YUV 4:2:0 chroma subsampling used in [[/fuss/ffmpeg#lowest_common_denominator_settings_compatible_with_all_sites|the lowest common denominator conversion template]]. To avoid that, you can issue:
<code bash>
ffmpeg -i input.mp4 -vf "scale=720:trunc(ow/a/2)*2" output.mp4
  
which will scale the width to ''720'' and adjust the height according to that value.

Another alternative is to use ''-2'' instead of ''-1'', as in:
<code bash>
ffmpeg -i input.mp4 -vf scale=iw/4:-2 output.mp4
</code>
which ensures that the output dimensions are divisible by ''2''.
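Recent versions of ''ffmpeg'' also expose a ''force_divisible_by'' option on the ''scale'' filter itself; as a sketch, assuming a build recent enough to include the option:
<code bash>
# keep -1 for the automatic dimension but force the result to be divisible by 2
ffmpeg -i input.mp4 -vf "scale=iw/4:-1:force_divisible_by=2" output.mp4
</code>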

====== Speed-up Movies ======
  * ''fps=8'' - attempt to set the framerate to $8$ frames per second,
  * ''setpts=0.35*PTS'' - speed up the movie by scaling the presentation timestamps to $0.35$ of their original values.

====== Lowest Common Denominator Settings Compatible with All Sites ======
  -c:v libx264 -crf 23 -profile:v baseline -level 3.0 -pix_fmt yuv420p \
  -c:a aac -ac 2 -b:a 128k \
  -movflags +faststart+separate_moof \
  -tune zerolatency \
  output.mp4
</code>

Or, with Intel QSV acceleration:
<code bash>
ffmpeg \
  -hwaccel qsv \
  -hwaccel_output_format qsv \
  -i $INPUT_FILE \
  -c:v h264_qsv -profile:v baseline -level 3.0 -pix_fmt nv12 \
  -c:a aac -ac 2 -b:a 128k \
  -movflags +faststart+separate_moof \
  output.mp4
</code>

====== Capture WebCam to Framebuffer ======

The following command will capture video from ''/dev/video0'' at ''320x240'' resolution and send it to the Linux framebuffer:
<code bash>
ffmpeg -f v4l2 -video_size 320x240 -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0
</code>

The command can be used, for example, to display a webcam feed directly on screen without needing to install the X window system.
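Conversely, ''ffmpeg'' can also play a video file back to the framebuffer; a sketch, assuming a hypothetical file ''movie.mp4'' and the same 32-bit framebuffer layout as above:
<code bash>
# -re reads the input at its native frame rate such that playback runs in real time
ffmpeg -re -i movie.mp4 -pix_fmt bgra -f fbdev /dev/fb0
</code>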

====== Adding Subtitles to Videos ======

  * as optional (soft) subtitles:
<code bash>
ffmpeg -i "movie.mp4" -i "movie.srt" -c copy -c:s mov_text output.mp4
</code>
where:
    * ''movie.mp4'' is the movie,
    * ''movie.srt'' is the subtitle file in SRT format,
    * ''output.mp4'' is the output movie
  * as subtitles burnt into the video (requires ''ffmpeg'' compiled with ''--enable-libass''):
<code bash>
ffmpeg -i movie.mp4 -vf subtitles=movie.srt output.mp4
</code>
or with ''libass'' directly, by first converting the subtitles to ASS format:
<code bash>
ffmpeg -i movie.srt movie.ass
ffmpeg -i movie.mp4 -vf ass=movie.ass output.mp4
</code>
where:
    * ''movie.mp4'' is the movie,
    * ''movie.srt'' is the subtitle file in SRT format,
    * ''output.mp4'' is the output movie

====== Concatenating or Merging Multiple Files ======

Given several files such as:
  * ''a.mkv''
  * ''b.mkv''
  * ''c.mkv''

the files can be concatenated into one large merged movie by following these steps:
  * create a list of files in the format expected by the ''ffmpeg'' concat demuxer, assuming that the files are named sequentially:
<code bash>
for i in $(find . -name \*.mkv | sort); do echo "file '$(realpath "$i")'"; done >> list.txt
</code>
  * merge the files together with ''ffmpeg'':
<code bash>
ffmpeg -loglevel info -f concat -safe 0 -i list.txt -c copy "Merged Movie.mkv"
</code>

====== Normalizing the Size of Video Clips ======

Sometimes it is necessary to normalize the size of multiple video clips. For example, the clips extracted for [[/fuss/ham_radio#morse_cw_learning_method|learning morse code]] ended up having different sizes, which made the video player change size every time a letter was displayed.

The following command:
<code bash>
for i in *.mp4; do ffmpeg -i "$i" -vf "crop=w='420':h='420',scale=420:420,setsar=1" conv/"$i"; done
</code>
will batch-change the size of all MP4 files in the current directory and store the results inside a ''conv'' sub-directory.

This method is called crop-and-scale, meaning that the video clip is first cropped and then scaled to a fixed size. The only drawback is that whilst all the clips will end up the same size, the content might be distorted depending on the original video clip.
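If distortion is a concern, a sketch of an alternative that scales while preserving the aspect ratio and then pads up to the fixed size (the black padding color being an assumption):
<code bash>
# scale to fit within 420x420 preserving the aspect ratio, then pad to exactly 420x420
for i in *.mp4; do ffmpeg -i "$i" -vf "scale=420:420:force_original_aspect_ratio=decrease,pad=420:420:(ow-iw)/2:(oh-ih)/2:black,setsar=1" conv/"$i"; done
</code>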

====== Determining if a Video File has Fast Start Enabled ======

Issue:
<code bash>
ffmpeg -v trace -i FILE 2>&1 | grep -e "type:'mdat'" -e "type:'moov'"
</code>
where:
  * ''FILE'' is a video file

This will yield output similar to the following:
<code>
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5578f2e712c0] type:'moov' parent:'root' sz: 2586269 40 344228747
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5578f2e712c0] type:'mdat' parent:'root' sz: 341642438 2586317 344228747
</code>

If ''moov'' appears before ''mdat'' then the ''faststart'' flag is set on the video file.
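If ''mdat'' appears before ''moov'', the flag can be set without re-encoding by remuxing the file; a minimal sketch:
<code bash>
# copy all streams and move the moov atom to the front of the output file
ffmpeg -i FILE -c copy -movflags +faststart output.mp4
</code>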

====== Windows 7 Compatible Builds ======

It seems that FFMpeg version 6.1.1 is suitable for Windows 7, while higher version numbers might crash with a memory access violation error ''0xc0000005''.

====== Increase Input Buffering When Reading from a Pipeline ======

When ffmpeg is reading from a pipeline, for instance by using the ''-i pipe:'' flag, the input has a set buffer that ffmpeg fills before performing any transcoding. It might happen that the set buffer is insufficient, which leads to conversion failures, spurious errors and a resulting file containing distortions. For example, using ffmpeg with an HDHomeRun input stream, as in:
<code bash>
ffmpeg -y -i http://192.168.1.10:5004/tuner3/ch390000000-10 -c:v libx264 -c:a aac -ac 2 -b:a 128k -movflags +faststart -tune zerolatency /output.mp4
</code>

results in several small errors along the lines of:
<code>
frame= 1015 fps= 11 q=21.0 size=    4352kB time=00:00:40.56 bitrate= 879.0kbits/frame= 1022 fps= 11 q=21.0 size=    4608kB time=00:00:40.84 bitrate= 924.3kbits/[mpegts @ 0x55f2c471b2c0] PES packet size mismatch
[mpegts @ 0x55f2c471b2c0] Packet corrupt (stream = 1, dts = 4597918858).
pipe:: corrupt input packet in stream 1
    Last message repeated 2 times
[mp2 @ 0x55f2c4755ec0] Header missing
Error while decoding stream #0:1: Invalid data found when processing input
[mpeg2video @ 0x55f2c4754300] ac-tex damaged at 30 23
[mpeg2video @ 0x55f2c4754300] Warning MVs not available
[mpeg2video @ 0x55f2c4754300] concealing 585 DC, 585 AC, 585 MV errors in I frame
pipe:: corrupt decoded frame in stream 0
frame= 1226 fps= 13 q=19.0 size=    4864kB time=00:00:49.00 bitrate= 813.2kbits/frame= 1227 fps= 12 q=25.0 size=    4864kB time=00:00:49.04 bitrate= 812.5kbits/frame= 1231 fps= 12 q=25.0 size=    4864kB time=00:00:49.20 bitrate= 809.9kbits/
</code>
being printed to the console output.

Intuitively, these are buffer-underrun errors due to the small internal ffmpeg queue size. In order to fix these issues, specify a larger queue size on the command line:
<code bash>
ffmpeg -thread_queue_size 8192 -y -i http://192.168.1.10:5004/tuner3/ch390000000-10 -c:v libx264 -c:a aac -ac 2 -b:a 128k -movflags +faststart -tune zerolatency /output.mp4
</code>
where:
  * ''-thread_queue_size'' specifies the maximum number of packets queued internally

Fortunately, if the queue is still too small and buffer underruns are detected, ffmpeg will print out a warning:
<code>
Thread message queue blocking; consider raising the thread_queue_size option (current value: 8192)
</code>

such that on the next invocation the command parameters can be adjusted and the queue increased.

====== Strategies for Recording Live Webcam Streams ======

Webcams are cheap equipment these days, with various performance issues and quirks for every producer out there. There are some general guidelines that should be minded when recording live webcam streams.

===== Transcoding =====

The immediate *nix reflex is to jump onto ''ffmpeg'' and start generating a long command line to access the live stream, transcode it to a desired format and then store it on the drive. Unfortunately, for complex operations such as transcoding, ''ffmpeg'' will eat a whole lot of CPU and/or GPU power. Whilst commodity hardware keeps getting cheaper, sometimes there is simply no need to perform a transcoding task because the camera already provides a stream in an optimized and universal format.

For instance, running the command ''<nowiki>ffprobe -i rtsp://...</nowiki>'' against a TP-Link Tapo camera will return a video stream codec of ''h264'' and audio encoded as ''pcm_alaw'', both of which can just be dumped directly to an MKV container file without any processing:
<code bash>
ffmpeg -i rtsp://... -c:v copy -c:a copy out.mkv
</code>
Or, with some small streaming optimizations that affect neither CPU nor GPU load because they only change the way markers are added to the saved video file:
<code bash>
ffmpeg -i rtsp://... -c:v copy -c:a copy -movflags +faststart+separate_moof -tune zerolatency out.mkv
</code>

The former commands generate zero CPU or GPU overhead whilst recording an RTSP stream that is already provided with universal codecs. When in doubt, ''ffprobe'' should first be run against the camera to make sure that the camera does not already provide something suitable, such that transcoding is not needed.
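As a sketch, a compact way to print just the codec of every stream using standard ''ffprobe'' options, with the RTSP URL left elided as above:
<code bash>
# print the type and codec name of every stream, one pair per line
ffprobe -v error -show_entries stream=codec_type,codec_name -of default=noprint_wrappers=1 rtsp://...
</code>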

===== Annotations =====

Video editing tools typically have the ability to draw on top of the video. Even ''ffmpeg'', despite being driven from the command line with no GUI, has some editing filters such as ''drawtext'' that allow text to be overlaid on top of the video. Unfortunately, following the example in the previous section, and regardless of whether the output will still be H264, transcoding will now be needed and the command changes to:
<code bash>
ffmpeg -i rtsp://... -c:v libx264 -c:a copy -vf "drawtext..." -movflags +faststart+separate_moof -tune zerolatency out.mkv
</code>
such that ''ffmpeg'' will start using the CPU and GPU to both overlay the text and transcode to the final result.
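As an illustration of what the elided ''drawtext...'' part might look like, a minimal sketch that burns the current time into the top-left corner, with the font path being an assumption that depends on the system:
<code bash>
# the font path is an assumption; adjust to a font available on the system
ffmpeg -i rtsp://... -c:v libx264 -c:a copy \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{localtime}':x=10:y=10:fontsize=24:fontcolor=white" \
  out.mkv
</code>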

However, if just annotations are needed, subtitles could be used instead, such that a subtitle file is generated in parallel with the recording of the live stream and then read automatically when the recorded file is loaded. For instance, the script snippet in the [[/fuss/bash#snippet_that_generates_subtitles_by_reading_a_file|bash section]] is capable of generating a subtitle file in real time by also polling a file in real time.

Not only are subtitles more efficient, but it also seems fairly canonical to keep subtitles (or annotations) separate from the video recording instead of drawing the text onto the video. With this strategy, the files are loaded together by any media player, but they can also be read separately.

===== Node-Red Flow =====

{{fuss:fuss_ffmpeg_node-red_strategies_for_recording_live_webcam_streams_flow.png?512}}

The following flow uses two ''exec'' nodes: one launches ''ffmpeg'' and records the RTSP stream to a local file by copying the video and audio codecs, without transcoding and without hogging the CPU or GPU, whilst the other ''exec'' node launches a background process that reads a file and generates a subtitle file. Both the video and the subtitle file share the same basename, with only the extension varying from ''.mkv'' to ''.sub'', such that opening the MKV video file with any player should make the player automatically load the subtitle file.

<code json>
[{"id":"def4582d2e2d067d","type":"group","z":"01bf6772c1feb7f4","g":"ab8078c26ea86566","name":"Recording","style":{"label":true},"nodes":["a69c011e5ebab4da","58d245ffd8d671f1","8423cd253b595171","ce0d715d97c16df1","1918f6c6249a9122","49fe07f39d609066","8708c7e8198f347a","d3d1499402365cce","bde51c6e8d3fc3dd","94568eddcaaee36f","83574efd30e1f858"],"x":54,"y":419,"w":552,"h":302},{"id":"a69c011e5ebab4da","type":"function","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"record","func":"msg={}\nmsg.outputFile=`/projects/cameras/external/auto/${moment().format('YYYYMMDDHHmmss')}.mkv`\nmsg.payload=`ffmpeg -y -i rtsp://.../stream2 -c:v copy -c:a copy -movflags faststart -movflags separate_moof -tune zerolatency ${msg.outputFile}\"`\nreturn msg\n","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[{"var":"moment","module":"moment"},{"var":"crypto","module":"crypto"}],"x":310,"y":540,"wires":[["8423cd253b595171","49fe07f39d609066"]]},{"id":"58d245ffd8d671f1","type":"function","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"stop","func":"msg = {}\nmsg.kill = \"SIGHUP\"\nreturn msg;\n","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[],"x":310,"y":660,"wires":[["49fe07f39d609066","94568eddcaaee36f"]]},{"id":"8423cd253b595171","type":"debug","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"debug 64","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":500,"y":460,"wires":[]},{"id":"ce0d715d97c16df1","type":"inject","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":160,"y":540,"wires":[["a69c011e5ebab4da","83574efd30e1f858"]]},{"id":"1918f6c6249a9122","type":"inject","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":160,"y":600,"wires":[["58d245ffd8d671f1"]]},{"id":"49fe07f39d609066","type":"exec","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","command":"","addpay":"payload","append":"","useSpawn":"true","timer":"","winHide":true,"oldrc":false,"name":"","x":490,"y":540,"wires":[["8423cd253b595171"],["8423cd253b595171"],["8423cd253b595171"]]},{"id":"8708c7e8198f347a","type":"link in","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"link in 23","links":["437c598d34158341"],"x":195,"y":480,"wires":[["a69c011e5ebab4da","83574efd30e1f858"]]},{"id":"d3d1499402365cce","type":"link in","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"link in 24","links":["2b231636c40b06f9"],"x":205,"y":680,"wires":[["58d245ffd8d671f1"]]},{"id":"bde51c6e8d3fc3dd","type":"debug","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"debug 80","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":500,"y":680,"wires":[]},{"id":"94568eddcaaee36f","type":"exec","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","command":"","addpay":"payload","append":"","useSpawn":"false","timer":"","winHide":false,"oldrc":false,"name":"","x":490,"y":620,"wires":[["bde51c6e8d3fc3dd"],["bde51c6e8d3fc3dd"],["bde51c6e8d3fc3dd"]]},{"id":"83574efd30e1f858","type":"function","z":"01bf6772c1feb7f4","g":"def4582d2e2d067d","name":"subtitle","func":"msg = {}\nmsg.outputFile = `/projects/cameras/external/auto/${moment().format('YYYYMMDDHHmmss')}.sub`\nmsg.payload = `COUNT=0 && while [ 1 ]; do while read i; do echo -e \"$COUNT\\n$(date -d@$COUNT -u +%H:%M:%S,000) --> $(date -d@$((COUNT+1)) -u +%H:%M:%S,000)\\n$i\\n\" && COUNT=$((COUNT+1)) && sleep 1; done </projects/sensor-cocktail/actual/noise.txt; done >> ${msg.outputFile}`\nreturn msg\n","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[{"var":"moment","module":"moment"}],"x":320,"y":600,"wires":[["94568eddcaaee36f"]]}]
</code>

====== Fixing a Broken Video Header ======

Without re-encoding the video, which would take far more time than it is worth, the canonical way to correct a header error is to have ''ffmpeg'' act as a copier from the input stream to the output stream, preserving both the audio and video codecs while skipping errors via the ''-err_detect'' parameter:
<code bash>
ffmpeg -err_detect ignore_err -i A.B -acodec copy -vcodec copy C.B
</code>
where:
  * ''A'' is the input file name,
  * ''B'' is the input file name extension,
  * ''C'' is the output file name

As a side-comment, if the goal is to skip the time needed to re-encode a video so that it is readable by Premiere Pro, the command above will not bring much relief, given that Premiere Pro takes a long time to read in older-format files, which makes a full conversion the more time-effective solution.

====== Merging Sequences of Transparent Images and Specifying Background Color ======

When merging sprites spread over multiple files that have transparency, for example sequential PNG files, it is useful to specify a background color, in particular when creating animations, because the background color will become the "chroma key" color that allows the background to be eliminated in order to overlay the sprite onto a scene.

Thusly, the following ffmpeg command, via the ''lavfi'' virtual input device, takes as input a sequence of files found under the directory ''folder/'' named sequentially from ''image_01.png'' through to ''image_09.png'' and merges them together into the resulting GIF file ''merged.gif'':

<code bash>
ffmpeg \
    -f lavfi \
    -i color=0000FF \
    -i folder/image_0%d.png \
    -filter_complex "[0:v][1:v]overlay=shortest=1[out]" \
    -map "[out]" \
    -r 10 \
    merged.gif
</code>
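Later, when overlaying the sprite onto a scene, the solid background can be keyed out again with the ''chromakey'' filter; a sketch, where ''scene.mp4'' is a hypothetical file and the similarity and blend values are assumptions to be tuned:
<code bash>
# key out the blue background and overlay the sprite onto the scene
ffmpeg \
    -i scene.mp4 \
    -i merged.gif \
    -filter_complex "[1:v]chromakey=0x0000FF:0.1:0.1[keyed];[0:v][keyed]overlay[out]" \
    -map "[out]" \
    composited.mp4
</code>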

====== Generating a Palette Image File from a Series of Files ======

When creating animations, one of the earliest problems encountered is determining which color the background should be, such that when a "chroma key" is applied the background is removed without removing the main subject in the image, in order for the cutout to fit perfectly when overlaid onto a scene.

Perhaps the most deterministic way to accomplish this is to create an index palette of all the colors used within the image, or even within multiple images if they have to be joined together, in order to pick a background color that will need less fuzzy matching when the "chroma key" is applied.

Using ffmpeg, a color palette can be generated from multiple files:
<code bash>
ffmpeg \
  -i img_0%d.png \
  -vf palettegen,scale=512:512 \
  -sws_flags neighbor \
  -sws_dither none \
  -f image2 \
  palette.png
</code>
where:
  * ''img_0%d.png'' will match the input files ''img_00.png'' through to ''img_09.png'' in the current directory that the command is run in,
  * ''palette.png'' is the output palette file

When opening up ''palette.png'', the result will be an image containing only the colors used in the input images. As an example, the following image is the result of a test using some single-color red, green and blue images.

{{fuss:fuss_ffmpeg_generating_palette_from_multiple_files.png?512}}

The palette file ''palette.png'' is displayed in the top-most window and the three windows below represent the input images ''img_01.png'', ''img_02.png'' and ''img_03.png''. As can be observed, the palette file contains the red, green and blue colors (the bottom-right black block is, in fact, transparent).
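Beyond inspection, the generated palette can also be fed back through the ''paletteuse'' filter when encoding a GIF, which typically improves the output colors; a minimal sketch:
<code bash>
# encode the GIF using the previously generated palette
ffmpeg -i img_0%d.png -i palette.png -filter_complex "[0:v][1:v]paletteuse" out.gif
</code>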

====== Reducing the Length of Screencasting Videos or Repairing Videos ======

Typically, in casting videos such as PC / desktop casting, the presenter will spend a lot of time idling, such that the recording will contain many duplicate frames that can be dropped in order to reduce the overall video size and length.

In order to do so, the ''mpdecimate'' filter can be used by invoking ffmpeg with the parameter ''-vf mpdecimate'', as shown below. The ''mpdecimate'' filter removes duplicate frames and, without any further parameters, it removes only identical frames. How much the video can be shortened greatly depends on the type of recording, with screencasts benefiting the most due to the idle time during presentations.

The ''mpdecimate'' filter can also be used to repair videos with dropped frames, simply by combining ''mpdecimate'' with ''-r'' to set a framerate and ''-vsync vfr'' which makes sure to remove frames and also re-sync the audio.
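A minimal sketch of the basic duplicate-dropping invocation described above, with the file names being assumptions:
<code bash>
# drop duplicate frames and allow variable frame timestamps
ffmpeg -i input.mp4 -vf mpdecimate -vsync vfr output.mp4
</code>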

====== Custom Windows Builds ======

These builds are particular in the sense that they are compiled for specific hardware accelerators matching specific operating systems.

^ OS        ^ Hardware Encoder ^ Source ^ Local Mirror ^ Example Invocation ^
| Windows 7 x64 | CUDA 9.0         | [[https://archive.org/details/zeranoe|zeranoe]] | {{fuss:ffmpeg-4.3-win64-static.zip}}  | <code>ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i a.mp4 -c:v h264_nvenc -c:a aac -ac 2 -b:a 128k -movflags +faststart+separate_moof b.mp4</code> |

====== Remove Black Side-Bars or Other Borders with FFMpeg ======

FFMpeg is great at detecting and removing black borders, regardless of whether they are bars on the sides of the video or black top and bottom bars.

In order to remove black borders, first run the video through ''ffplay'' with the ''cropdetect'' filter:
<code bash>
ffplay -i VIDEO -vf cropdetect
</code>
where:
  * ''VIDEO'' is a video file

The ''cropdetect'' filter will play the video and print a crop parameter to the console while the video plays. For example:
<code>
[Parsed_cropdetect_0 @ 000000001300dd00] x1:210 x2:443 y1:0 y2:359 w:224 h:352 x:216 y:4 pts:137137 t:4.571233 crop=224:352:216:4
</code>

The ''crop=224:352:216:4'' parameter can then be passed as a video filter to ffmpeg in order to remove the black borders. For example:
<code bash>
ffmpeg -i INPUT -vf crop=224:352:216:4 OUTPUT
</code>
where:
  * ''INPUT'' is an input video file,
  * ''OUTPUT'' is an output video file

and ''crop=224:352:216:4'' is the parameter computed by ''ffplay''.
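The two steps can also be glued together in shell such that ''ffmpeg'' computes the crop parameter in a first pass and applies it in a second; a sketch, assuming the last printed crop value is representative of the whole video:
<code bash>
# first pass: detect the crop area; second pass: apply the last suggested value
CROP=$(ffmpeg -i INPUT -vf cropdetect -f null - 2>&1 | grep -o 'crop=[0-9:]*' | tail -1)
ffmpeg -i INPUT -vf "$CROP" OUTPUT
</code>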

====== Reading the RMS Noise Level from a Video Clip ======

The following script reads the RMS noise level of a video clip and generates subtitles for the video that update and display the noise level every second.

<code bash>
#!/usr/bin/env bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2025 - License: MIT            ##
##  Please see: https://opensource.org/license/mit/ for legal details,   ##
##  rights of fair usage, the disclaimer and warranty conditions.        ##
###########################################################################
# Given an input video file and a path parameter, the script will run     #
# through the video and read out the RMS value whilst writing it to a     #
# subtitle file specified as the output file.                             #
#                                                                         #
# When the video and the subtitle is loaded, the RMS level will be shown  #
# as a subtitle that will update every second.                            #
###########################################################################

if [[ -z "$1" ]] || [[ -z "$2" ]]; then
    echo "SYNTAX: $0 <INPUT FILE> <SUBTITLE FILE>"
    exit 1
fi

# retrieve the audio sample rate so that astats can be reset every second
AUDIO_RATE=$(ffprobe -v error -select_streams a -of default=noprint_wrappers=1:nokey=1 -show_entries stream=sample_rate "$1")

# measure the RMS level once per second and emit SRT-formatted subtitle entries
INC=0
ffmpeg \
    -i "$1" \
    -af asetnsamples="$AUDIO_RATE",astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level:file=- -f null - 2>&1 | \
    awk -F"RMS_level=" '{ print $2 }' | \
    perl -ne '/([\-0-9\.]+)/ and print $ENV{INC} . "\n" . sprintf("%02d:%02d:%02d,000", $ENV{INC}/3600, $ENV{INC}/60%60, $ENV{INC}%60) . " --> " . sprintf("%02d:%02d:%02d,000", ($ENV{INC} + 1)/3600, ($ENV{INC} + 1)/60%60, ($ENV{INC} + 1)%60) . "\n" . sprintf("%.1f RMS", $1) . "\n\n" and $ENV{INC}++' > "$2"
</code>

A useful invocation of this script is to call it with the first parameter set to a video file and the output parameter set to the name of the video file with the extension replaced by ''.sub''. In doing so, when video players load the video clip, they will automatically load the subtitles as well.
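For example, assuming the script was saved as ''rms-subtitles.sh'' (a hypothetical name):
<code bash>
# generate video.sub next to video.mp4 such that players pick the subtitles up automatically
./rms-subtitles.sh video.mp4 video.sub
</code>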
  

fuss/ffmpeg.1546629866.txt.gz · Last modified: 2019/01/04 19:24 by office

Wizardry and Steamworks

© 2025 Wizardry and Steamworks

Access website using Tor Access website using i2p Wizardry and Steamworks PGP Key


For the contact, copyright, license, warranty and privacy terms for the usage of this website please see the contact, license, privacy, copyright.