My question originates from my problem in getting ffmpeg started. I have installed ffmpeg and it is displayed as installed:

whereis ffmpeg
ffmpeg: /usr/bin/ffmpeg /usr/bin/X11/ffmpeg /usr/share/ffmpeg /usr/share/man/man1/ffmpeg.1.gz

Later, I figured out that some programs depend on libraries that do not come with the installation itself, so I checked with the ldd command what is missing:

# ldd /usr/bin/ffmpeg
linux-vdso.so.1 => (0x00007fff71fe9000)
libavfilter.so.0 => not found
libpostproc.so.51 => not found
libswscale.so.0 => not found
libavdevice.so.52 => not found
libavformat.so.52 => not found
libavcodec.so.52 => not found
libavutil.so.49 => not found
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5f20bdf000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5f209c0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5f205fb000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5f20f09000)

As it turns out, my ffmpeg is missing 7 libraries it needs to work. I first thought that each of those libraries had to be installed, but then I realized that some or all of them might already be installed, with their location unknown to ffmpeg. I read that /etc/ld.so.conf and /etc/ld.so.cache contain the paths to the libraries, but I was confused, because there was only one line in /etc/ld.so.conf:

cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf

but a very long /etc/ld.so.cache. I am now at a point where I feel lost about how to investigate further. It might be a helpful next step to figure out how I can determine whether a given library is indeed installed even if its location is unknown to ffmpeg.
---------Output---of----apt-cache-policy-----request--------- apt-cache policy Package files: 100 /var/lib/dpkg/status release a=now 500 http://archive.canonical.com/ubuntu/ trusty/partner Translation-en 500 http://archive.canonical.com/ubuntu/ trusty/partner i386 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://archive.canonical.com/ubuntu/ trusty/partner amd64 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/main Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ 
trusty-security/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin 
archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 700 http://extra.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://extra.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin 
packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com Pinned packages:
Look in /usr/lib and /usr/lib64 for those libraries. If you find one of the ones ffmpeg is missing, symlink it so it exists in the other directory. You can also run a find for 'libm.so.6' and see where that file is. There is a good chance ffmpeg is looking in the same directory for the missing ones; symlink them over there once you find them. If they don't exist on your server, install the package that includes them. If they are included in the ffmpeg package but you don't see them, try reinstalling ffmpeg.
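More generally, a quick way to check whether a shared library is known to the system is to query the dynamic linker cache that /etc/ld.so.cache holds: ldconfig -p prints every library path the loader has indexed. A small sketch, using libavcodec as the example (any of the missing names works):

```shell
# Query the dynamic linker cache for one of the missing libraries.
# ldconfig may live in /sbin, which is not always on a user's PATH.
found=$( (ldconfig -p 2>/dev/null || /sbin/ldconfig -p 2>/dev/null) | grep libavcodec )
if [ -n "$found" ]; then
  echo "$found"
else
  echo "libavcodec not found in the linker cache"
fi
```

If the library shows up here but ldd still reports "not found", the cache is stale and running ldconfig as root rebuilds it from the paths in /etc/ld.so.conf.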
How to check if a shared library is installed?
I have a bash script looping through the results of a find and performing an ffmpeg encoding of some FLV files. While the script is running, the ffmpeg output seems to be interrupted and outputs some strange-looking errors like the one below. I've no idea what is going on here. Can anyone point me in the right direction? It's as though the loop is still running when it shouldn't be and interrupting the ffmpeg process. The specific error is:

frame= 68 fps= 67 q=28.0 00000000000000000000000000001000size= 22kB time=00:00:00.50 bitrate= 363.2kbits/s dup=1 drop=0
Enter command: <target> <time> <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'om/pt_br/nx/R3T4N2_HD3D_demoCheckedOut.flv'

Some more details from the ffmpeg output:

[buffer @ 0xa30e1e0] w:800 h:600 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param:flags=2
[libx264 @ 0xa333240] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
[libx264 @ 0xa333240] profile High, level 3.1
[libx264 @ 0xa333240] 264 - core 122 r2184 5c85e0a - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=umh subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=cbr mbtree=1 bitrate=500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=500 vbv_bufsize=1000 nal_hrd=none ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to './mp4s/pt_br/teamcenter/tc8_interactive/videos/8_SRM_EN.mp4':
  Metadata:
    audiodelay : 0
    canSeekToEnd : true
    encoder : Lavf54.3.100
  Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 800x600, q=-1--1, 500 kb/s, 30k tbn, 29.97 tbc
  Stream #0:1: Audio: aac (@[0][0][0] / 0x0040), 44100 Hz, mono, s16, 128 kb/s
Stream mapping:
  Stream #0:1 -> #0:0 (vp6f -> libx264)
  Stream #0:0 -> #0:1 (mp3 -> libfaac)
Press [q] to stop, [?] for help
error parsing debug value0 00000000000000000000000000000000size= 13kB time=00:00:00.-3 bitrate=-3165.5kbits/s dup=1 drop=0 debug=0
frame= 68 fps= 67 q=28.0 00000000000000000000000000001000size= 22kB time=00:00:00.50 bitrate= 363.2kbits/s dup=1 drop=0
Enter command: <target> <time> <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'om/pt_br/nx/R3T4N2_HD3D_demoCheckedOut.flv'

The script is as follows:

#!/bin/bash
LOGFILE=encodemp4ize.log
echo '' > $LOGFILE
STARTTIME=date
echo "Started at `$STARTTIME`" >> $LOGFILE
rsync -avz flvs/ mp4s/ --exclude '*.flv'
#find flvs/ -name "*.flv" > flv-files
# The loop
find flvs/ -name "*.flv" | while read f
do
    FILENAME=`echo $f | sed 's#flvs/##'`
    MP4FILENAME=`echo $FILENAME | sed 's#.flv#.mp4#'`
    ffmpeg -i "$f" -vcodec libx264 -vprofile high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -acodec libfaac -ab 128k "./mp4s/$MP4FILENAME"
    echo "$f MP4 done" >> $LOGFILE
done
Your question is actually Bash FAQ #89: just add </dev/null to prevent ffmpeg from reading its standard input.

I've taken the liberty of fixing up your script for you, because it contains a lot of potential errors. A few of the important points:

Filenames are tricky to handle, because most filesystems allow them to contain all sorts of unprintable characters that normal people would see as garbage. Making simplifying assumptions like "file names contain only 'normal' characters" tends to result in fragile shell scripts that appear to work on "normal" file names and then break the day they run into a particularly nasty file name that doesn't follow the script's assumptions. On the other hand, correctly handling file names can be such a bother that you may find it not worth the effort if the chance of encountering a weird file name is expected to be near zero (i.e. you only use the script on your own files and you give your own files "simple" names). Sometimes it is possible to avoid this decision altogether by not parsing file names at all. Fortunately, that is possible with find(1)'s -exec option: just put {} in the argument to -exec and you don't have to worry about parsing find output.

Using sed or other external processes to do simple string operations like stripping extensions and prefixes is inefficient. Instead, use parameter expansions, which are part of the shell (no external process means it will be faster). Some helpful articles on the subject: Bash FAQ 73 (parameter expansions) and Bash FAQ 100 (string manipulations).

Use $( ) instead of backticks: Bash FAQ 82.

Avoid using UPPERCASE variable names. That namespace is generally reserved by the shell for special purposes (like PATH), so using it for your own variables is a bad idea.

And now, without further ado, here's a cleaned up script for you:

#!/bin/sh

logfile=encodemp4ize.log
export logfile   # so the sh -c child process below can see it
echo "Started at $(date)." > "$logfile"

rsync -avz --exclude '*.flv' flvs/ mp4s/

find flvs/ -type f -name '*.flv' -exec sh -c '
    for flvsfile; do
        file=${flvsfile#flvs/}
        < /dev/null ffmpeg -i "$flvsfile" -vcodec libx264 -vprofile high \
            -preset slow -b:v 500k -maxrate 500k -bufsize 1000k \
            -threads 0 -acodec libfaac -ab 128k \
            "mp4s/${file%flv}"mp4
        printf %s\\n "$flvsfile MP4 done." >> "$logfile"
    done
' _ {} +

Note: I used POSIX sh because you didn't use or need any bash-specific features in your original.
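The failure mode is easy to reproduce without ffmpeg at all. In this sketch, cat stands in for ffmpeg, since both read standard input when it is available; without </dev/null the inner command swallows the rest of the file list, so the loop runs only once:

```shell
# Hypothetical stand-in demo: cat plays the role of ffmpeg.
printf 'one\ntwo\nthree\n' > /tmp/files.txt

broken=0
while read -r f; do
  cat >/dev/null                # eats the rest of /tmp/files.txt
  broken=$((broken + 1))
done < /tmp/files.txt

fixed=0
while read -r f; do
  cat >/dev/null </dev/null     # stdin redirected: loop sees every line
  fixed=$((fixed + 1))
done < /tmp/files.txt

echo "broken=$broken fixed=$fixed"   # broken=1 fixed=3
```

The find ... -exec form above sidesteps the issue entirely, since no file list ever flows through the loop's standard input.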
Strange errors when using ffmpeg in a loop
I want to convert all *.flac files to *.mp3 in a specific folder. This is what I've tried, but it does not work:

# change to the home directory
cd ~/music
# convert all *.flac files
ffmpeg -i *.flac -acodec libmp3lame *.mp3
# (optional: check whether there are any errors printed on the terminal)
sleep 60

How can I reach my goal?
Try this:

for i in *.flac ; do
    ffmpeg -i "$i" -acodec libmp3lame "$(basename "${i/.flac}")".mp3
    sleep 60
done
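If the files sit in the current directory, the basename/substitution dance can be avoided with a plain parameter expansion: ${i%.flac} strips the extension using the shell alone. A sketch (the -qscale:a 2 setting is my addition, a commonly used high-quality VBR level for libmp3lame, not part of the answer above):

```shell
for i in *.flac ; do
  [ -e "$i" ] || continue   # skip the literal pattern when nothing matches
  ffmpeg -i "$i" -acodec libmp3lame -qscale:a 2 "${i%.flac}.mp3"
done
```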
Bash script to convert all *.flac to *.mp3 with FFmpeg?
Someone suggested I direct a copy of the unmodified X display to a file and afterwards convert that file to a general-purpose video file. What commands would I use to do this on a Kubuntu system? (Edit: He said something about attaching a display port to a file.) If that is not possible, what is my best option for an excellent-quality screen recording that does not depend on fast hardware?

Background: I tried using avconv with -f x11grab and some GUI programs. However, no matter what I try, the resulting video either has artifacts/blurriness or is choppy (missing frames). This is probably due to CPU/memory constraints.

Goals: Video quality must not be noticeably different from seeing the session directly on a screen, because the purpose is to demonstrate an animated application. The final video must be in a common format that can be sent to Windows users and used on the web; I think H.264 MP4 should work. The solution should not presume much prior knowledge: I am familiar with the command line and basic Linux commands, but I am still learning Linux and do not know much about video codecs.

What I already tried: Best command so far: ffmpeg -f x11grab -s xga -r 30 -i :0.0 -qscale 0.1 -vcodec huffyuv grab.avi, then convert to mp4 with ffmpeg -i grab.avi -sameq -vcodec mpeg4 grab.mp4. The picture quality is great, but on my test system it lags the computer. On a faster target system it does not lag, but frames are obviously skipped, making the video not very smooth. I am still trying to figure out how to save the grab.avi file to SHM to see if that helps.
Using the Istanbul and RecordMyDesktop GUI recorders.
Simple command: avconv -f x11grab -s xga -r 25 -i :0.0 simple.mpg, using avconv version 0.8.3-4:0.8.3-0ubuntu0.12.04.1.
Adding -codec:copy (fails with: Requested output format 'x11grab' is not a suitable output format).
Adding -same_quant (results in great quality, but is very choppy/missing many frames).
Adding -vpre lossless_ultrafast (fails with: Unrecognized option 'vpre', Failed to set value 'lossless_ultrafast' for option 'vpre').
Adding various values of -qscale.
Adding various values of -b.
Adding -vcodec h264 (outputs repeatedly: Error while decoding stream #0:0, [h264 @ 0x8300980] no frame!). Note: h264 is listed in avconv -formats output as DE h264 raw H.264 video format.
If your HDD allows, you can try to do it this way. First write an uncompressed file:

ffmpeg -f x11grab -s SZ -r 30 -i :0.0 -qscale 0 -vcodec huffyuv grab.avi

where SZ is your display size (e.g. 1920x1080). After that you can compress it at any time you want:

ffmpeg -i grab.avi grab.mkv

Of course, you can change the compression, select the codec and so on.
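If you don't want to type the display size by hand, it can usually be pulled from xdpyinfo. A sketch (assumes a running X session; the awk pattern matches the "dimensions: WxH pixels" line of xdpyinfo's output):

```shell
# Detect the display size, then record lossless as above.
SZ=$(xdpyinfo 2>/dev/null | awk '/dimensions:/ {print $2; exit}')
echo "display size: ${SZ:-unknown}"
if [ -n "$SZ" ] && command -v ffmpeg >/dev/null; then
  ffmpeg -f x11grab -s "$SZ" -r 30 -i :0.0 -qscale 0 -vcodec huffyuv grab.avi
fi
```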
How to get near-perfect screen recording quality?
Both ALAC and FLAC are lossless audio formats and files will usually have more or less the same size when converted from one format to the other. I use ffmpeg -i track.flac track.m4a to convert between these two formats but I notice that the resulting ALAC files are much smaller than the original ones. When using a converter software like the MediaHuman Audio Converter, the size of the ALACs will remain around the same size as the FLACs so I guess I'm missing some flags here that are causing ffmpeg to downsample the signal.
OK, I was probably a little quick to ask here, but for the sake of future reference, here is the answer. Pass the flag -acodec alac to ffmpeg for a lossless conversion from FLAC to ALAC:

ffmpeg -i track.flac -acodec alac track.m4a

Without that flag, ffmpeg encodes .m4a output with its default (lossy) AAC encoder, which is why the resulting files were much smaller.
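A batch version for a whole folder might look like this (a sketch; the ${f%.flac} expansion just swaps the extension):

```shell
if command -v ffmpeg >/dev/null; then
  for f in *.flac; do
    [ -e "$f" ] || continue   # skip the literal pattern when nothing matches
    ffmpeg -i "$f" -acodec alac "${f%.flac}.m4a"
  done
fi
```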
Lossless audio conversion from FLAC to ALAC using ffmpeg
I've tried to figure this out myself, but the myriad of options just baffles me. I want to use ideally either ffmpeg or mencoder (or something else, but those two I know I have working) to convert any incoming video to a fixed screen size. If the video is too wide or too short for it, then centre-crop the video. If it's then not the right size, resize up or down to make it exactly the fixed screen size. The exact final thing I need is 720x480 in an XviD AVI with an MP3 audio track. I've found lots of pages showing how to resize to a maximum resolution, but I need the video to be exactly that resolution (with extra parts cropped off, no black bars). Can anyone tell me the command line to run, or at least get me some/most of the way there? If it needs to be multiple command lines (run X to get the resolution, do this calculation and then run Y with the output of that calculation), I can script that.
I'm no ffmpeg guru, but this should do the trick. First of all, you can get the size of the input video like this:

ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width in.mp4

With a reasonably recent ffmpeg, you can resize your video with these options:

ffmpeg -i in.mp4 -vf scale=720:480 out.mp4

You can set the width or height to -1 in order to let ffmpeg resize the video keeping the aspect ratio. Actually, -2 is a better choice since the computed value should be even. So you could type:

ffmpeg -i in.mp4 -vf scale=720:-2 out.mp4

Once you get the video, it may be bigger than the expected 720x480 since you let ffmpeg compute the height, so you'll have to crop it. This can be done like this:

ffmpeg -i in.mp4 -filter:v "crop=in_w:480" out.mp4

Finally, you could write a script like this (it can easily be optimized, but I kept it simple for legibility):

#!/bin/bash

FILE="/tmp/test.mp4"
TMP="/tmp/tmp.mp4"
OUT="/tmp/out.mp4"

OUT_WIDTH=720
OUT_HEIGHT=480

# Get the size of input video:
eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${FILE})
IN_WIDTH=${streams_stream_0_width}
IN_HEIGHT=${streams_stream_0_height}

# Get the difference between actual and desired size
W_DIFF=$[ ${OUT_WIDTH} - ${IN_WIDTH} ]
H_DIFF=$[ ${OUT_HEIGHT} - ${IN_HEIGHT} ]

# Let's take the shorter side, so the video will be at least as big
# as the desired size:
CROP_SIDE="n"
if [ ${W_DIFF} -lt ${H_DIFF} ] ; then
  SCALE="-2:${OUT_HEIGHT}"
  CROP_SIDE="w"
else
  SCALE="${OUT_WIDTH}:-2"
  CROP_SIDE="h"
fi

# Then perform a first resizing
ffmpeg -i ${FILE} -vf scale=${SCALE} ${TMP}

# Now get the temporary video size
eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${TMP})
IN_WIDTH=${streams_stream_0_width}
IN_HEIGHT=${streams_stream_0_height}

# Calculate how much we should crop
if [ "z${CROP_SIDE}" = "zh" ] ; then
  DIFF=$[ ${IN_HEIGHT} - ${OUT_HEIGHT} ]
  CROP="in_w:in_h-${DIFF}"
elif [ "z${CROP_SIDE}" = "zw" ] ; then
  DIFF=$[ ${IN_WIDTH} - ${OUT_WIDTH} ]
  CROP="in_w-${DIFF}:in_h"
fi

# Then crop...
ffmpeg -i ${TMP} -filter:v "crop=${CROP}" ${OUT}
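For what it's worth, a reasonably recent ffmpeg can do the whole cover-then-crop in a single pass using the scale filter's force_original_aspect_ratio option (my suggestion, not part of the answer above): scale so the frame at least covers 720x480, then crop the overflow from the centre.

```shell
# One-pass sketch: build the filter chain, then run it if ffmpeg and an
# input file are available.
W=720 H=480
FILTER="scale=${W}:${H}:force_original_aspect_ratio=increase,crop=${W}:${H}"
echo "$FILTER"
if command -v ffmpeg >/dev/null && [ -f in.mp4 ]; then
  ffmpeg -i in.mp4 -vf "$FILTER" out.mp4
fi
```

This avoids the intermediate file and the second ffprobe round trip entirely.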
Convert a video to a fixed screen size by cropping and resizing
I made a recording with:

ffmpeg -f alsa -ac 2 -i plughw:0,0 /tmp/audio.mp4

I then moved /tmp/audio.mp4 to another directory (/root/audio.mp4) without stopping ffmpeg, leading to a broken .mp4 file:

ffplay /root/audio.mp4
[...]
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f3524000b80] moov atom not found
audio.mp4: Invalid data found when processing input

How can I recover and read my .mp4 file?
You can try Untrunc to fix the file. It restores a damaged (truncated) mp4, m4v, mov, or 3gp video, provided you have a similar, unbroken video from the same source. You may need to compile it from source, but another option is to use a Docker container, binding the folder with the file into the container and fixing it that way. You can use the included Dockerfile to build and execute the package as a container:

git clone https://github.com/ponchio/untrunc.git
cd untrunc
docker build -t untrunc .
docker run -v ~/Desktop/:/files untrunc /files/filea /files/fileb
How to recover a broken mp4 file: moov atom not found
I have two *.avi files: sequence1.avi sequence2.avi How do I merge these two files using a command-line or GUI?
There's a dedicated tool to do this, avimerge:

avimerge -o cd.avi -i cd1.avi cd2.avi

If it is not installed, install the transcode package; avimerge is part of it:

https://manpages.debian.org/jessie/transcode/avimerge.1.en.html
http://manpages.ubuntu.com/manpages/bionic/man1/avimerge.1.html
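If you'd rather stay with ffmpeg, its concat demuxer can do the same join without re-encoding, provided both files share the same codecs and stream parameters. A sketch:

```shell
# Build the file list the concat demuxer expects, then stream-copy the join.
printf "file '%s'\n" sequence1.avi sequence2.avi > list.txt
cat list.txt
if command -v ffmpeg >/dev/null && [ -f sequence1.avi ] && [ -f sequence2.avi ]; then
  ffmpeg -f concat -safe 0 -i list.txt -c copy merged.avi
fi
```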
How do I merge two *.avi files into one
Currently we're using this command within a shell script to remove silence from audio files: ffmpeg -i $INFILE -af silenceremove=0:0:0:-1:1:${NOISE_TOLERANCE}dB -ac 1 $SILENCED_FILE -y This works fine except that it removes all the silence, causing the remaining audio to be squeezed together. How can this be done while leaving two or three seconds between each piece of audio? The solution needs to be very efficient as we'll be processing a lot of audio and should use a tool that can be fairly easily installed on both Linux and OSX, such as ffmpeg or sox.
The best way that I've seen is to add the -l flag to the silence effect, as follows:

sox in.wav out6.wav silence -l 1 0.1 1% -1 2.0 1%

I copied this command from Example 6 of a very useful blog post called The Sox of Silence.
Remove silence from audio files while leaving gaps
Subtitle files come in a variety of formats, from .srt to .sub to .ass and so on and so forth. Is there a way to tell mpv to search for subtitle files along with the media files and, if it finds one, to start playing the file automatically? Currently I have to do something like this, which can be pretty long depending on the filename:

[$] mpv --list-options | grep sub-file
(null) requires an argument
--sub-file String list (default: ) [file]

Look forward to answers.

Update 1 - A typical movie which has .srt (or subscript):

[$] mpv Winter.Sleep.\(Kis.Uykusu\).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv
(null) requires an argument
Playing: Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.mkv
(+) Video --vid=1 (*) (hevc)
(+) Audio --aid=1 (aac)
(+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external)
[vo/opengl] Could not create EGL context!
[sub] Using subtitle charset: UTF-8-BROKEN
AO: [alsa] 48000Hz stereo 2ch float
VO: [opengl] 1280x536 yuv420p
AV: 00:02:14 / 03:16:45 (1%) A-V: 0.000

The most interesting line is this one:

(+) Subs --sid=1 'Winter.Sleep.(Kis.Uykusu).2014.720p.BrRip.2CH.x265.HEVC.Megablast.srt' (subrip) (external)

Now if the file were a .ass or .sub with the same filename, it wouldn't work. I have tried it on many media files which have those extensions, and each time mpv loads the video, the audio and the protocols, but not the external subtitle files.

Update 2 - The .ass script part is listed as a bug at mpv's bug tracker: https://github.com/mpv-player/mpv/issues/2846

Update 3 - Have been trying to debug with the help of upstream; filed https://github.com/mpv-player/mpv/issues/3091 for that. It seems though that it's not mpv which is responsible but ffmpeg (and libavformat), which is supposed to decode the subtitles. Hence I have added ffmpeg to it too.
As seen in man mpv:

--sub-auto=<no|exact|fuzzy|all>, --no-sub-auto
    Load additional subtitle files matching the video filename. The parameter
    specifies how external subtitle files are matched. exact is enabled by
    default.
    no     Don't automatically load external subtitle files.
    exact  Load the media filename with subtitle file extension (default).
    fuzzy  Load all subs containing media filename.
    all    Load all subs in the current and --sub-paths directories.

exact would seem like the appropriate choice, but since it's the default and it doesn't load files like [video name minus extension].srt, fuzzy is the next best bet, and it works on my system. So just:

echo "sub-auto=fuzzy" >> ~/.config/mpv/mpv.conf
Play subtitles automatically with mpv
I would like to copy the file creation date of an mp4 file into the file's metadata. I'm pretty sure this can be done with ffmpeg and some nifty Linux commands.
You can set metadata with FFmpeg via the -metadata parameter. MP4s support the year attribute according to this, but I only got it to work with the "date" field, which is shown in VLC (if it is only a year) and in MPlayer and Winamp without problems as a full date. I found the date attribute by setting the year via VLC and dumping the metadata with FFmpeg.

To set the date to the time of the last modification (as a complete date like 2014-11-13), use something like:

ffmpeg -i inputfile.mp4 -metadata date="$(stat --printf='%y' inputfile.mp4 | cut -d ' ' -f1)" -codec copy outputfile.mp4

The last-modified detection could most definitely be done more nicely, and I am not sure how widespread the usage of the date metadata field is, but it worked in my case.
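A batch version of the same idea might look like this (a sketch; GNU stat assumed, where %y is the last-modification time, and cut keeps just the YYYY-MM-DD part; the dated_ output prefix is my own naming):

```shell
for f in *.mp4; do
  [ -e "$f" ] || continue   # skip the literal pattern when nothing matches
  d=$(stat --printf='%y' "$f" | cut -d ' ' -f1)
  if command -v ffmpeg >/dev/null; then
    ffmpeg -i "$f" -metadata date="$d" -codec copy "dated_$f"
  fi
done
```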
Copy file creation date to metadata in ffmpeg
Here is a Linux audio MP3 puzzle that has been bugging me for a while: how do I trim the first few seconds off an MP3 audio file? (I can't get ffmpeg -ss to work with either the 00:01 or the 1.000 format.) So far, to do what I want, I resort to doing it in a GUI, which is maybe slower for a single file, and definitely slower for a batch of files.
For editing MP3s under Linux, I'd recommend sox. It has a simple-to-use trim effect that will do what you ask for (see man sox for details; search (press /) for "trim start"). Example:

sox input.mp3 output.mp3 trim 1 5

You didn't mention it, but if your aim is just to remove the silence at the beginning of files, you will find the silence effect much more useful (man sox, search for "above-periods").
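For the record, the ffmpeg invocation the question was aiming at would look something like this (a sketch; placing -ss before -i seeks the input, and -acodec copy avoids re-encoding):

```shell
# Trim the first 3 seconds off an MP3 without re-encoding.
cmd='ffmpeg -ss 3 -i input.mp3 -acodec copy output.mp3'
echo "$cmd"
if command -v ffmpeg >/dev/null && [ -f input.mp3 ]; then
  $cmd
fi
```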
How can you trim mp3 files using `ffmpeg`?
I have a bunch of videos which I want to check are complete or not. Some of them may be partially downloaded, but they are not faulty. How can I efficiently check whether these videos are completely downloaded? If I had the links, I would have checked their sizes, but I don't. I tried to use ffprobe and mediainfo. ffprobe reports minor problems on partially downloaded files, but it also reports similar problems with some completely downloaded files. Should I use ffmpeg to read the whole files and compare the lengths of the videos to check if they are downloaded? Is there a better solution?
ffmpeg is an OS-agnostic tool that is capable of determining if a video file has been completely downloaded. The command below instructs ffmpeg to read the input video and encode the video to nothing. During the encoding process, any errors such as missing frames are output to test.log:

ffmpeg -v error -i FILENAME.mp4 -f null - 2>test.log

If a video file is not totally downloaded, there will be many lines in the test.log file. For example, 0.1 MB missing from a video file produced 71 lines of errors. If the video is fully downloaded and hasn't been corrupted, no errors are found and no lines are printed to test.log.

Edit: In the example I gave above, I tested the entire file because the test video I downloaded was a torrent, which can have missing chunks throughout the file. Adding -sseof -60 to the list of arguments will check only the last 60 seconds of the file, which is considerably faster:

ffmpeg -v error -sseof -60 -i FILENAME.mp4 -f null - 2>test.log

You'll need a newer version of ffmpeg; 2.8 was missing the -sseof flag, so I used 3.0.
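Wrapped into a small helper, the check reduces to "did the error log stay empty?" (a sketch; check_video is my name, not a standard tool):

```shell
check_video() {
  # Decode the last 60 seconds to nowhere; any decode errors land in the log.
  ffmpeg -v error -sseof -60 -i "$1" -f null - 2>"$2"
  [ ! -s "$2" ]   # succeed only when the log stayed empty
}

if command -v ffmpeg >/dev/null && [ -f FILENAME.mp4 ]; then
  if check_video FILENAME.mp4 test.log; then
    echo "looks complete"
  else
    echo "possible problems; see test.log"
  fi
fi
```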
How to check if a video is completely downloaded?
I'd like to use the new codec x265 (libx265) to encode my video collection. For this I created a lovely bash script under Linux, which in general works very well! But something is strange: I suppress the output of ffmpeg in my own way. With x264 (the "old" one) everything works fine, but as soon as I use x265 I always get this kind of output on my terminal:

x265 [info]: HEVC encoder version 1.7
x265 [info]: build info [Linux][GCC 5.1.0][64 bit] 8bpp
x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
x265 [info]: Main profile, Level-2.1 (Main tier)
x265 [info]: Thread pool created using 2 threads
x265 [info]: frame threads / pool features : 1 / wpp(5 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut : 25 / 250 / 40
x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 3
x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 1.0 / 64 / 1
x265 [info]: Rate Control / qCompress : CRF-28.0 / 0.60
x265 [info]: tools: rd=3 psy-rd=0.30 signhide tmvp strong-intra-smoothing
x265 [info]: tools: deblock sao

This is the way I encode my video with ffmpeg:

ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet /output/file.mp4 <>/dev/null 2>&1

I thought that the <>/dev/null 2>&1 and the -loglevel quiet would take care of this, but apparently I'm mistaken. How can I solve this problem? Thanks for your help!
Solution

You need to add an additional parameter, -x265-params log-level=xxxxx, as in:

ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet -x265-params log-level=quiet \
    /output/file.mp4 <>/dev/null 2>&1

Note that, while the FFmpeg option is -loglevel, the x265 option is log-level, with a - between log and level; see the x265 Command Line Options documentation.

Explanation

The FFmpeg command you wrote should have worked (see the ffmpeg documentation); however, it looks like FFmpeg doesn't tell the x265 encoder to use the loglevel you're telling FFmpeg to use. So, assuming you want the whole FFmpeg command to run quietly (i.e., suppress the messages from both the main FFmpeg program and the x265 encoder), you need to explicitly set the log level options for both of them.

Analogously, if you have an FFmpeg command that looks like this:

ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params parameter1=value:parameter2=value outputfile.xyz

you can add the log-level=error option to the list of x265-params like this:

ffmpeg -loglevel error -stats -i "inputfile.xyz" -c:v libx265 -x265-params log-level=error:parameter1=value:parameter2=value …
bash: ffmpeg libx265 prevent output
1,326,231,956,000
I just watched the trailer for The Hobbit, and a trailer for The Avengers, which both feature an increased frame rate. A lot of the comments state that this isn't "true" 60 fps since it was not shot at 60 fps, but actually a lower frame rate that has been interpolated. If this is the case, is there any way that I can convert some of my existing media in Linux with ffmpeg or avconv in the same way in order to create this "illusion"? I can understand that higher frame rates are not to others' tastes, but that's not the point of this post.
You can try

ffmpeg -i source.mp4 -filter:v tblend -r 120 result.mp4

or this, from https://superuser.com/users/114058/mulvya:

ffmpeg -i source.mp4 -filter:v minterpolate -r 120 result.mp4

There are also filters for motion blur (tblend, for instance, blends consecutive frames).
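The minterpolate filter also accepts the target rate directly as a filter option, so the command for an arbitrary frame rate can be parameterized. A sketch: interp_cmd and the file names are illustrative, and it only builds the command string (it assumes an ffmpeg build whose minterpolate filter supports the fps option).

```shell
#!/bin/sh
# Build the interpolation command for an arbitrary target frame rate.
# interp_cmd and the file names are illustrative; the fps option is set
# inside the filter instead of via -r.
interp_cmd() {
    # $1 = input, $2 = target fps, $3 = output
    printf 'ffmpeg -i %s -filter:v minterpolate=fps=%s %s' "$1" "$2" "$3"
}

interp_cmd source.mp4 120 result.mp4
echo
```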
FFMPEG - Interpolate frames or add motion blur
1,326,231,956,000
When I use this, the quality doesn't look bad:

ffmpeg -same_quant -i video.mp4 video.avi

The ffmpeg documentation says: "Note that this is NOT SAME QUALITY. Do not use this option unless you know you need it." Do I get the best quality with -same_quant, or is there an option that gives the same quality as the input and is more recommended?
(adapted from comments above)

Depending on the codecs used (some codecs are incompatible with some containers), you could always simply copy the streams (-codec copy). That is the best way to avoid quality changes, as you're not re-encoding the streams, just repackaging them in a different container.

When dealing with audio/video files, it is important to keep in mind that containers are mostly independent of the codecs used. It is common to see people referring to files as "AVI video" or "MP4 video", but those are containers and tell us little about whether a player will be able to play the streams, as, apart from technical limitations (for example, AVI may have issues with H.264 and Ogg Vorbis), you could use any codec.

-same_quant seems to be a way to tell ffmpeg to try to achieve a similar quality, but as soon as you re-encode the video (at least with lossy codecs), you have no way to get the same quality. If you're concerned with quality, a good rule of thumb is to avoid re-encoding the streams when possible.

So, in order to copy the streams with ffmpeg, you'd do:

ffmpeg -i video.mp4 -codec copy video.avi

(As @Peter.O mentioned, option order is important, so that's where -codec copy must go. You could still keep -same_quant, but it won't have any effect as you're not re-encoding the streams.)
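The option-order point can be made concrete with a tiny builder that always places -codec copy after the input. This is an illustrative sketch (remux_cmd and the file names are made up), and it only constructs the command string rather than invoking ffmpeg.

```shell
#!/bin/sh
# Sketch of a lossless repackaging step: streams are copied, never
# re-encoded. remux_cmd and the file names are illustrative.
remux_cmd() {
    # $1 = input, $2 = output; -codec copy comes after -i, since it
    # applies to the output (option order matters in ffmpeg)
    printf 'ffmpeg -i %s -codec copy %s' "$1" "$2"
}

remux_cmd video.mp4 video.avi
echo
```

Whether the remux actually succeeds still depends on the target container supporting the source codecs, as discussed above.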
How get the best quality when converting from mp4 to avi with ffmpeg?
1,326,231,956,000
I recently needed a single webcam to be shared simultaneously by 3 applications (a web browser, a videoconferencing app, and ffmpeg to save the stream). It's not possible to simply share the /dev/video* stream, because as soon as one application is using it, the others cannot, and anything else will get a "device or resource busy" error or equivalent. So I turned to v4l2loopback with the intention of mirroring the webcam to 3 loopbacks.

Using 3 loopbacks does work as expected, but what has really surprised me is that it turns out I don't actually need 3 loopbacks, but only 1. If I create a single loopback and feed it with ffmpeg, then the single mirrored loopback can be used by all 3 applications at the same time, with no "device or resource busy" issue.

So this is even better than I planned, and there is no practical problem I need help with. But my question is: how is this possible with the loopback, and why is the same not possible with the original source directly?

Example command to create the single loopback:

sudo modprobe v4l2loopback video_nr=30 exclusive_caps=1 card_label="loopback cam"

Example command using ffmpeg to mirror /dev/video5 to the loopback (/dev/video30). This will default to raw, but recent builds of ffmpeg can use an alternative stream like MJPEG; the behaviour is the same regardless:

ffmpeg -f v4l2 -i /dev/video5 -codec copy -f v4l2 /dev/video30

After doing this, try to access /dev/video30 with multiple applications; here are some examples:

ffmpeg -f v4l2 -i /dev/video30 -codec libx264 recordstream.mp4
ffplay -f video4linux2 -i /dev/video30

System info in case it's relevant:

Ubuntu 20.04
Kernel: 5.4.0-31-generic
package: v4l2loopback-dkms 0.12.3-1
It's by design.

First, let it be said that multiple processes can open the /dev/video0 device, but only one of them will be able to issue certain controls (ioctl()) until streaming starts. Those V4L2 controls define things like bitrate. After you start streaming, the kernel will not let you change them and returns EBUSY (Device or resource busy) if you try. See this note in the kernel source. That effectively blocks other consumers, since you have to set those controls before you start streaming.

What does v4l2loopback do differently? It adds logic and data structures for multiple openers, and by default it will not try to apply new controls, providing its own setter instead. Note that v4l2loopback needs multiple openers to be useful at all: at least two, one writer and one reader.
Why can multiple consumers access a *single* v4l2-loopback stream from a webcam
1,326,231,956,000
I have two commands, one that lets me record my screen to an AVI video file, and another which lets me stream a video file as a (fake) "webcam". This is really useful in apps that don't support selecting one screen to share (I'm looking at you, Slack).

Command #1 (https://askubuntu.com/a/892683/721238):

ffmpeg -y -f alsa -i hw:0 -f x11grab -framerate 30 -video_size 1920x1080 -i :0.0+1920,0 -c:v libx264 -pix_fmt yuv420p -qp 0 -preset ultrafast screenStream.avi

Command #2 (https://unix.stackexchange.com/a/466683/253391):

ffmpeg -re -i screenStream.avi -map 0:v -f v4l2 /dev/video1

Why can't I just run both of these in parallel? Well, the second command starts streaming from the beginning of the file whenever I use my "webcam", so I have to time it really closely, otherwise there is latency. I've tried lots and lots of solutions (including solutions with gstreamer instead of ffmpeg) and can't get anything to work. This is my last hope. How can I stream my desktop/screen to /dev/video1 as a (fake) "webcam" on Ubuntu?
Solved. Steps to solve:

Unload the previous v4l2loopback:

sudo modprobe -r v4l2loopback

Build the module from source:

git clone https://github.com/umlaeute/v4l2loopback/
make && sudo make install

(If you're using Secure Boot, you'll need to sign it first: https://ubuntu.com/blog/how-to-sign-things-for-secure-boot)

sudo depmod -a

Load the videodev drivers:

sudo modprobe videodev
sudo insmod ./v4l2loopback.ko devices=1 video_nr=2 exclusive_caps=1

Change video_nr based on how many cams you already have (zero-indexed):

ls -al /dev/video*

Use /dev/video[video_nr] with ffmpeg:

sudo ffmpeg -f x11grab -r 60 -s 1920x1080 -i :0.0+1920,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'hflip,scale=640:360' /dev/video2

Go to https://webcamtests.com and test your dummy cam. Profit!

If you want this to persist between boots, https://askubuntu.com/a/1024786/721238 should do it.
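The persist-between-boots step usually comes down to writing two small config files: one that autoloads the module and one that carries its options. This is a sketch under assumptions (standard systemd locations on Ubuntu; write_persist_conf is an illustrative name, and the target directory is a parameter so it can be dry-run without root):

```shell
#!/bin/sh
# Sketch: persist the v4l2loopback device across boots by writing the
# module autoload file and the module options file. The /etc-style base
# directory is a parameter so the sketch can be exercised without root.
write_persist_conf() {
    etc="$1"   # normally /etc
    mkdir -p "$etc/modules-load.d" "$etc/modprobe.d"
    # load the module at boot
    echo 'v4l2loopback' > "$etc/modules-load.d/v4l2loopback.conf"
    # same options as the insmod command above
    echo 'options v4l2loopback devices=1 video_nr=2 exclusive_caps=1' \
        > "$etc/modprobe.d/v4l2loopback.conf"
}

# real use would be: run as root with write_persist_conf /etc
```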
How can I stream my desktop/screen to /dev/video1 as a (fake) "webcam" on Linux?
1,326,231,956,000
Every now and then ffmpeg tells me

myfile.avi: Protocol not found
Did you mean file:myfile.avi

if executing ffmpeg -i myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4. Using ffmpeg -i file:myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4 then fails:

file:myfile.avi: Protocol not found
Did you mean file:file:myfile.avi

Executing the same (out of bash history) after a reboot (normally I just go into pm-suspend-hybrid) works, converting from .avi to .mp4 as expected. Any ideas what might be the reason and how to fix this?

Complete in-/output:

$ ffmpeg -i myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4
ffmpeg version 3.2.9-1~deb9u1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (Debian 6.3.0-18) 20170516
configuration: --prefix=/usr --extra-version='1~deb9u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
WARNING: library configuration mismatch
avutil configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages
--disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' --enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avcodec configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' 
--enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avformat configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' --enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avdevice configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine 
--enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu avfilter configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 
--enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu avresample configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu swscale configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio 
--enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu swresample configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx 
--enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu postproc configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu libavutil 55. 34.101 / 55. 33.100 libavcodec 57. 64.101 / 57. 
63.103
libavformat 57. 56.101 / 57. 55.100
libavdevice 57. 1.100 / 57. 6.100
libavfilter 6. 65.100 / 6. 82.100
libavresample 3. 1. 0 / 3. 5. 0
libswscale 4. 2.100 / 4. 6.100
libswresample 2. 3.100 / 2. 7.100
libpostproc 54. 1.100 / 54. 5.100
myfile.avi: Protocol not found
Did you mean file:myfile.avi?

Mulvya asked for the following:

$ ffmpeg -protocols -v 0
Supported file protocols:
Input: async bluray cache concat crypto data ffrtmpcrypt ffrtmphttp file ftp gopher hls http httpproxy https mmsh mmst pipe rtmp rtmpe rtmps rtmpt rtmpte rtmpts rtp sctp srtp subfile tcp tls udp udplite unix
Output: crypto ffrtmpcrypt ffrtmphttp file ftp gopher http httpproxy https icecast md5 pipe prompeg rtmp rtmpe rtmps rtmpt rtmpte rtmpts rtp sctp srtp tee tcp tls udp udplite unix

Comparing the ldd output (suggested by andcoz) gives a difference of

$ diff ldd-not-working-libs-only ldd-working-libs-only
2d1
< /usr/lib/chromium-browser/libffmpeg.so
16d14
< librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1
88a87
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1

Full output:

$ cat ldd-not-working
linux-vdso.so.1 (0x00007fff303e5000)
/usr/lib/chromium-browser/libffmpeg.so (0x00007f8fb7d89000)
libavdevice.so.57 => /usr/lib/x86_64-linux-gnu/libavdevice.so.57 (0x00007f8fb7b5b000)
libavfilter.so.6 => /usr/lib/x86_64-linux-gnu/libavfilter.so.6 (0x00007f8fb76da000)
libavformat.so.57 => /usr/lib/x86_64-linux-gnu/libavformat.so.57 (0x00007f8fb7295000)
libavcodec.so.57 => /usr/lib/x86_64-linux-gnu/libavcodec.so.57 (0x00007f8fb5b6e000)
libavresample.so.3 => /usr/lib/x86_64-linux-gnu/libavresample.so.3 (0x00007f8fb594c000)
libpostproc.so.54 => /usr/lib/x86_64-linux-gnu/libpostproc.so.54 (0x00007f8fb572e000)
libswresample.so.2 => /usr/lib/x86_64-linux-gnu/libswresample.so.2 (0x00007f8fb550f000)
libswscale.so.4 => /usr/lib/x86_64-linux-gnu/libswscale.so.4 (0x00007f8fb527e000)
libavutil.so.55 => /usr/lib/x86_64-linux-gnu/libavutil.so.55 (0x00007f8fb4ff5000)
libva.so.1 => /usr/lib/x86_64-linux-gnu/libva.so.1
(0x00007f8fb4dd5000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8fb4ad1000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8fb48b4000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8fb4515000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8fb430d000) libXv.so.1 => /usr/lib/x86_64-linux-gnu/libXv.so.1 (0x00007f8fb4108000) libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f8fb3dc8000) libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f8fb3bb6000) libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f8fb398e000) libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f8fb378a000) libxcb-xfixes.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-xfixes.so.0 (0x00007f8fb3582000) libxcb-shape.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shape.so.0 (0x00007f8fb337e000) libcdio_paranoia.so.1 => /usr/lib/x86_64-linux-gnu/libcdio_paranoia.so.1 (0x00007f8fb3176000) libcdio_cdda.so.1 => /usr/lib/x86_64-linux-gnu/libcdio_cdda.so.1 (0x00007f8fb2f6e000) libsndio.so.6.1 => /usr/lib/x86_64-linux-gnu/libsndio.so.6.1 (0x00007f8fb2d5e000) libjack.so.0 => /usr/lib/x86_64-linux-gnu/libjack.so.0 (0x00007f8fb2b17000) libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f8fb280a000) libSDL2-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0 (0x00007f8fb24ee000) libdc1394.so.22 => /usr/lib/x86_64-linux-gnu/libdc1394.so.22 (0x00007f8fb2277000) libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f8fb2005000) libpulse.so.0 => /usr/lib/x86_64-linux-gnu/libpulse.so.0 (0x00007f8fb1db4000) libcaca.so.0 => /usr/lib/x86_64-linux-gnu/libcaca.so.0 (0x00007f8fb1aeb000) libraw1394.so.11 => /usr/lib/x86_64-linux-gnu/libraw1394.so.11 (0x00007f8fb18db000) libavc1394.so.0 => /usr/lib/x86_64-linux-gnu/libavc1394.so.0 (0x00007f8fb16d6000) librom1394.so.0 => /usr/lib/x86_64-linux-gnu/librom1394.so.0 (0x00007f8fb14d1000) libiec61883.so.0 => /usr/lib/x86_64-linux-gnu/libiec61883.so.0 (0x00007f8fb12c4000) 
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8fb10c0000) libvidstab.so.1.0 => /usr/lib/libvidstab.so.1.0 (0x00007f8fb0eab000) libtesseract.so.3 => /usr/lib/x86_64-linux-gnu/libtesseract.so.3 (0x00007f8fb0708000) librubberband.so.2 => /usr/lib/x86_64-linux-gnu/librubberband.so.2 (0x00007f8fb04d2000) libmfx.so.0 => /usr/lib/x86_64-linux-gnu/libmfx.so.0 (0x00007f8fb02be000) libfribidi.so.0 => /usr/lib/x86_64-linux-gnu/libfribidi.so.0 (0x00007f8fb00a7000) libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f8fafdf8000) libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f8fafbba000) libbs2b.so.0 => /usr/lib/x86_64-linux-gnu/libbs2b.so.0 (0x00007f8faf9b4000) libass.so.9 => /usr/lib/x86_64-linux-gnu/libass.so.9 (0x00007f8faf783000) libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007f8faf474000) libopenmpt.so.0 => /usr/lib/x86_64-linux-gnu/libopenmpt.so.0 (0x00007f8faf0e5000) libgme.so.0 => /usr/lib/x86_64-linux-gnu/libgme.so.0 (0x00007f8faee98000) libbluray.so.2 => /usr/lib/x86_64-linux-gnu/libbluray.so.2 (0x00007f8faec4b000) libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30 (0x00007f8fae8b2000) libchromaprint.so.1 => /usr/lib/x86_64-linux-gnu/libchromaprint.so.1 (0x00007f8fae69a000) libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007f8fae48a000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8fae270000) libzvbi.so.0 => /usr/lib/x86_64-linux-gnu/libzvbi.so.0 (0x00007f8fadfe3000) libxvidcore.so.4 => /usr/lib/x86_64-linux-gnu/libxvidcore.so.4 (0x00007f8fadccf000) libx265.so.116 => /usr/lib/x86_64-linux-gnu/libx265.so.116 (0x00007f8fad649000) libx264.so.150 => /usr/lib/x86_64-linux-gnu/libx264.so.150 (0x00007f8fad2ce000) libvpx.so.4 => /usr/lib/x86_64-linux-gnu/libvpx.so.4 (0x00007f8face91000) libvorbisenc.so.2 => /usr/lib/x86_64-linux-gnu/libvorbisenc.so.2 (0x00007f8facbe8000) libvorbis.so.0 => /usr/lib/x86_64-linux-gnu/libvorbis.so.0 (0x00007f8fac9bc000) libvo-amrwbenc.so.0 => 
/usr/lib/x86_64-linux-gnu/libvo-amrwbenc.so.0 (0x00007f8fac7a2000) libtheoraenc.so.1 => /usr/lib/x86_64-linux-gnu/libtheoraenc.so.1 (0x00007f8fac563000) libtheoradec.so.1 => /usr/lib/x86_64-linux-gnu/libtheoradec.so.1 (0x00007f8fac345000) libspeex.so.1 => /usr/lib/x86_64-linux-gnu/libspeex.so.1 (0x00007f8fac12c000) libsnappy.so.1 => /usr/lib/x86_64-linux-gnu/libsnappy.so.1 (0x00007f8fabf24000) libshine.so.3 => /usr/lib/x86_64-linux-gnu/libshine.so.3 (0x00007f8fabd17000) libopus.so.0 => /usr/lib/x86_64-linux-gnu/libopus.so.0 (0x00007f8fabac8000) libopenjp2.so.7 => /usr/lib/x86_64-linux-gnu/libopenjp2.so.7 (0x00007f8fab88d000) libopenh264.so.2 => /usr/lib/x86_64-linux-gnu/libopenh264.so.2 (0x00007f8fab598000) libopencore-amrwb.so.0 => /usr/lib/x86_64-linux-gnu/libopencore-amrwb.so.0 (0x00007f8fab384000) libopencore-amrnb.so.0 => /usr/lib/x86_64-linux-gnu/libopencore-amrnb.so.0 (0x00007f8fab15a000) libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f8faaec3000) libkvazaar.so.3 => /usr/lib/x86_64-linux-gnu/libkvazaar.so.3 (0x00007f8faac3d000) libilbc.so.2 => /usr/lib/x86_64-linux-gnu/libilbc.so.2 (0x00007f8faaa26000) libgsm.so.1 => /usr/lib/x86_64-linux-gnu/libgsm.so.1 (0x00007f8faa819000) libfdk-aac.so.1 => /usr/lib/x86_64-linux-gnu/libfdk-aac.so.1 (0x00007f8faa561000) libcrystalhd.so.3 => /usr/lib/x86_64-linux-gnu/libcrystalhd.so.3 (0x00007f8faa346000) liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f8faa120000) libsoxr.so.0 => /usr/lib/x86_64-linux-gnu/libsoxr.so.0 (0x00007f8fa9ebd000) libvdpau.so.1 => /usr/lib/x86_64-linux-gnu/libvdpau.so.1 (0x00007f8fa9cb9000) libva-x11.so.1 => /usr/lib/x86_64-linux-gnu/libva-x11.so.1 (0x00007f8fa9ab3000) libva-drm.so.1 => /usr/lib/x86_64-linux-gnu/libva-drm.so.1 (0x00007f8fa98b0000) /lib64/ld-linux-x86-64.so.2 (0x00007f8fb861a000) libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f8fa96ac000) libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f8fa94a6000) libcdio.so.13 => 
/usr/lib/x86_64-linux-gnu/libcdio.so.13 (0x00007f8fa9281000) libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f8fa906b000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f8fa8ce9000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8fa8ad2000) libpulse-simple.so.0 => /usr/lib/x86_64-linux-gnu/libpulse-simple.so.0 (0x00007f8fa88cd000) libXcursor.so.1 => /usr/lib/x86_64-linux-gnu/libXcursor.so.1 (0x00007f8fa86c2000) libXinerama.so.1 => /usr/lib/x86_64-linux-gnu/libXinerama.so.1 (0x00007f8fa84bf000) libXi.so.6 => /usr/lib/x86_64-linux-gnu/libXi.so.6 (0x00007f8fa82af000) libXrandr.so.2 => /usr/lib/x86_64-linux-gnu/libXrandr.so.2 (0x00007f8fa80a4000) libXss.so.1 => /usr/lib/x86_64-linux-gnu/libXss.so.1 (0x00007f8fa7ea1000) libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f8fa7c9b000) libwayland-egl.so.1 => /usr/lib/x86_64-linux-gnu/libwayland-egl.so.1 (0x00007f8fa7a99000) libwayland-client.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-client.so.0 (0x00007f8fa788a000) libwayland-cursor.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-cursor.so.0 (0x00007f8fa7682000) libxkbcommon.so.0 => /usr/lib/x86_64-linux-gnu/libxkbcommon.so.0 (0x00007f8fa7442000) libusb-1.0.so.0 => /lib/x86_64-linux-gnu/libusb-1.0.so.0 (0x00007f8fa7229000) libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f8fa6fff000) libxcb-dri3.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0 (0x00007f8fa6dfc000) libxcb-present.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-present.so.0 (0x00007f8fa6bf9000) libxcb-sync.so.1 => /usr/lib/x86_64-linux-gnu/libxcb-sync.so.1 (0x00007f8fa69f2000) libxshmfence.so.1 => /usr/lib/x86_64-linux-gnu/libxshmfence.so.1 (0x00007f8fa67f0000) libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f8fa65c1000) libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f8fa63be000) libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f8fa61b8000) libX11-xcb.so.1 => 
/usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f8fa5fb6000) libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f8fa5d9b000) libxcb-dri2.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-dri2.so.0 (0x00007f8fa5b96000) libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f8fa5986000) libpulsecommon-10.0.so => /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-10.0.so (0x00007f8fa5703000) libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f8fa54b3000) libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007f8fa52ad000) libslang.so.2 => /lib/x86_64-linux-gnu/libslang.so.2 (0x00007f8fa4dc5000) libncursesw.so.5 => /lib/x86_64-linux-gnu/libncursesw.so.5 (0x00007f8fa4b95000) libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f8fa496b000) liblept.so.5 => /usr/lib/x86_64-linux-gnu/liblept.so.5 (0x00007f8fa44fb000) libsamplerate.so.0 => /usr/lib/x86_64-linux-gnu/libsamplerate.so.0 (0x00007f8fa418f000) libfftw3.so.3 => /usr/lib/x86_64-linux-gnu/libfftw3.so.3 (0x00007f8fa3d92000) libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x00007f8fa3b5f000) libharfbuzz.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0 (0x00007f8fa38ca000) libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007f8fa36b6000) libmpg123.so.0 => /usr/lib/x86_64-linux-gnu/libmpg123.so.0 (0x00007f8fa3457000) libvorbisfile.so.3 => /usr/lib/x86_64-linux-gnu/libvorbisfile.so.3 (0x00007f8fa324e000) libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007f8fa2ee7000) libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007f8fa2c82000) libidn.so.11 => /lib/x86_64-linux-gnu/libidn.so.11 (0x00007f8fa2a4e000) libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6 (0x00007f8fa283b000) libnettle.so.6 => /usr/lib/x86_64-linux-gnu/libnettle.so.6 (0x00007f8fa2604000) libhogweed.so.4 => /usr/lib/x86_64-linux-gnu/libhogweed.so.4 (0x00007f8fa23cf000) libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f8fa214c000) libnuma.so.1 
=> /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f8fa1f41000) libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007f8fa1d38000) libcairo.so.2 => /usr/lib/x86_64-linux-gnu/libcairo.so.2 (0x00007f8fa1a24000) libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f8fa17f7000) libXrender.so.1 => /usr/lib/x86_64-linux-gnu/libXrender.so.1 (0x00007f8fa15ed000) libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f8fa13e4000) libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007f8fb87a0000) libICE.so.6 => /usr/lib/x86_64-linux-gnu/libICE.so.6 (0x00007f8fa11c7000) libSM.so.6 => /usr/lib/x86_64-linux-gnu/libSM.so.6 (0x00007f8fa0fbf000) libXtst.so.6 => /usr/lib/x86_64-linux-gnu/libXtst.so.6 (0x00007f8fa0db9000) libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x00007f8fb8714000) libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007f8fa0baf000) libsndfile.so.1 => /usr/lib/x86_64-linux-gnu/libsndfile.so.1 (0x00007f8fa0937000) libasyncns.so.0 => /usr/lib/x86_64-linux-gnu/libasyncns.so.0 (0x00007f8fa0731000) libjpeg.so.62 => /usr/lib/x86_64-linux-gnu/libjpeg.so.62 (0x00007f8fa04c6000) libgif.so.7 => /usr/lib/x86_64-linux-gnu/libgif.so.7 (0x00007f8fa02bc000) libtiff.so.5 => /usr/lib/x86_64-linux-gnu/libtiff.so.5 (0x00007f8fa0045000) libwebp.so.6 => /usr/lib/x86_64-linux-gnu/libwebp.so.6 (0x00007f8f9fde4000) libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f8f9fad0000) libgraphite2.so.3 => /usr/lib/x86_64-linux-gnu/libgraphite2.so.3 (0x00007f8f9f8a3000) libpixman-1.so.0 => /usr/lib/x86_64-linux-gnu/libpixman-1.so.0 (0x00007f8f9f5fc000) libxcb-render.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-render.so.0 (0x00007f8f9f3ee000) libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f8f9f1e9000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f8f9efc1000) liblz4.so.1 => /usr/lib/x86_64-linux-gnu/liblz4.so.1 (0x00007f8f9edaf000) libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f8f9eb97000) 
libFLAC.so.8 => /usr/lib/x86_64-linux-gnu/libFLAC.so.8 (0x00007f8f9e920000) libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f8f9e709000) libjbig.so.0 => /usr/lib/x86_64-linux-gnu/libjbig.so.0 (0x00007f8f9e4fb000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f8f9e288000)
The comment of @andcoz led me onto the right track: the problem exists as long as /usr/lib/chromium-browser/libffmpeg.so is loaded. In my case this happens in one special use case: libffmpeg.so is used by the Vivaldi browser for playing proprietary video formats (H.264). If I download a file via Vivaldi, then go to its download panel, right-click on the downloaded file and select 'Show in file manager', my file manager opens. It runs in the 'context' of the browser, so when I go a step further and open a terminal out of that file manager session, this terminal also runs in the context of Vivaldi. This means that a few environment variables are set which are not present if I start a normal terminal session (not out of the Vivaldi context). These environment variables include (among others) LD_PRELOAD=/usr/lib/chromium-browser/libffmpeg.so. As that causes libffmpeg.so to be loaded when running my conversion with ffmpeg, I tried an unset LD_PRELOAD in the terminal session, and after that my conversion ran completely normally again. So this is where the 'then and now' problem came from; I just was not aware that I was in a different environment-variable context when starting my program chain out of Vivaldi. Thanks everyone for helping out.
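A quick way to check for this situation in any terminal is to inspect and clear the variable before running a conversion. A minimal sketch (the library path is the one from the setup above; the export line only simulates the inherited environment):

```shell
# Show whether an LD_PRELOAD was inherited from the launching application
echo "LD_PRELOAD=${LD_PRELOAD:-<not set>}"

# Simulate the Vivaldi-launched environment, then clear it again
export LD_PRELOAD=/usr/lib/chromium-browser/libffmpeg.so
unset LD_PRELOAD

# After the unset, ffmpeg started from this shell loads its own libraries again
echo "LD_PRELOAD=${LD_PRELOAD:-<not set>}"
```

Note that `unset` only affects the current shell and its children; other terminals started out of the browser context keep the preload.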
ffmpeg: 'Protocol not found' for normal file name
Currently, ffmpeg is missing from the APT packages in the stable versions of Debian and Ubuntu. There are numerous resources (an example from SuperUser, another from Debian's documentation and one from AskUbuntu) which explain how to install it in a different (and more complex) way than a simple apt-get install ffmpeg. What I wonder is why the package is not there in the first place. From what I understood, avconv is a fork of ffmpeg and is the de facto standard on Debian and similar distributions. Meanwhile, ffmpeg is not abandoned: the website mentions no intention to close the project in favour of avconv, despite the fact that the leader of ffmpeg left the project. So: Why was ffmpeg plainly removed from the APT packages, instead of keeping it and simply adding avconv? Is there a reason (other than the fact that it became more difficult to install ffmpeg) to stop using it?
Why was ffmpeg not available in the repo?

For some time there was a so-called "ffmpeg" available after Debian switched to Libav, but it was not from FFmpeg. This can probably be explained best with a rough timeline of what happened:

1. Libav split from FFmpeg and kept the ffmpeg binary name (it also kept the names of the libraries, and the "libav" name was already being used by FFmpeg as a collective noun for the libraries: libavcodec, libavformat, libavutil, etc.).
2. The Debian ffmpeg package maintainer at the time, a member of the Libav fork, switched Debian to use Libav.
3. Libav eventually deprecated/renamed their ffmpeg to avconv, then eventually removed the counterfeit "ffmpeg", but some downstream users such as Ubuntu kept the old, fake "ffmpeg" for "compatibility and transitional" reasons for some time.
4. Debian/Ubuntu eventually removed the buggy, old, dead, fake "ffmpeg".
5. FFmpeg returns in Debian stable (jessie-backports) and Ubuntu Vivid 15.04.
6. Debian/Ubuntu drops Libav.

You're currently between steps 4 and 5. Updating to a newer release of your distro will allow you to install the real ffmpeg from the repository.

Is there a reason to stop using ffmpeg?

FFmpeg development is very active, and now that Libav has lost its major downstream users I think you can ask this question about avconv instead.

Other stuff

...the leader of the FFmpeg left the project.

Michael Niedermayer is still quite active. He just got tired of some of the admin duties and politics and resigned as leader. It was also a gesture towards the Libav developers, as a potential step for reunification.
Why was ffmpeg removed from Debian?
I am trying to convert a VOB file copied from a DVD into an avi in Ubuntu 13.10. I tried dvdrip, which failed due to a frame count error or something. I tried acidrip as well, but it always chose an audio track I did not want to use. I would prefer a command-line solution that creates an avi with the following features:

- usable with mplayer (i.e. using the step functions)
- with the subtitles as required (or none at all)
- with the correct audio track
- with video and audio in sync

I tried some ffmpeg and avconv commands and managed to create an avi, but the video and audio were completely out of sync. So what options should I use to have the VOB file converted to an avi file? I would appreciate some explanations on the suggested options!

Additional information: The ffmpeg command gives the following output for the input file:

Input #0, mpeg, from 'Videos/Test/VIDEO_TS/VTS_01_1.VOB':
  Duration: 00:04:53.32, start: 0.045500, bitrate: 29284 kb/s
    Stream #0.0[0x1e0]: Video: mpeg2video (Main), yuv420p, 720x480 [PAR 8:9 DAR 4:3], 7500 kb/s, 27.68 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0.1[0x82]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
    Stream #0.2[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
    Stream #0.3[0x81]: Audio: ac3, 0 channels
[buffer @ 0x15ca6e0] w:720 h:480 pixfmt:yuv420p

The following command

ffmpeg -i Videos/Test/VIDEO_TS/VTS_01_1.VOB -ss 589 -t 274 -sameq -acodec copy -ab 320k output.avi

for example resulted in a crash of my Ubuntu session.
The following command

avconv -i Videos/Test/VIDEO_TS/VTS_01_1.VOB -acodec copy -vcodec copy output.avi

for example resulted in the following error:

Application provided invalid, non monotonically increasing dts to muxer in stream 1: 374 >= 374
av_interleaved_write_frame(): Invalid argument

The following command

avconv -i Videos/Test/VIDEO_TS/VTS_01_1.VOB -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k output.avi

for example resulted in the following error:

Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height

The following command

avconv -i Videos/Test/VIDEO_TS/VTS_01_1.VOB -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a ac3 -b:a 128k output.avi

for example seemed to work for some frames. But very soon I encountered many errors of the form

[ac3 @ 0x120d480] frame sync error
Error while decoding stream #0:1
frame CRC mismatch

The following command

mencoder Videos/Test/VIDEO_TS/VTS_01_1.VOB -oac copy -ovc x264 -x264encopts bitrate=2500 -o output.avi

did some converting, but (i) used subtitles although I did not want them, (ii) got the audio wrong (audio and video are terribly misplaced) and (iii) seems to be slower than the movie actually plays (it might take 2 hours for a 90-minute movie). I tried the command given here (third post from Xeratul), but it stopped with the error

FATAL: Cannot initialize video driver.

I tried the suggestion made below to look at the mencoder page. This page suggests using two passes: the first reads information about the movie, the second uses that information to encode. But neither is it explained which information to extract, nor how to use it in the second pass.
So I used the following command: mencoder Videos/Test/VIDEO_TS/VTS_01_1.VOB -nosound -ovc x264 \ -x264encopts direct=auto:pass=2:bitrate=900:frameref=5:bframes=1:\ me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 \ -vf scale=-1:-10,harddup -o video.avi which did convert the video, but with an unwanted subtitle. It is not clear at all how I can avoid using a subtitle.
To get rid of the subtitles I believe you can add the -nosub switch, right after the .VOB file's name.

Example

$ mencoder Videos/Test/VIDEO_TS/VTS_01_1.VOB -nosub -nosound -ovc x264 \
-x264encopts direct=auto:pass=2:bitrate=900:frameref=5:bframes=1:\
me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 \
-vf scale=-1:-10,harddup -o video.avi

Details

These incantations are often very dense, so let's break this one down a bit:

input file: Videos/Test/VIDEO_TS/VTS_01_1.VOB
output file: -o video.avi
no subtitles: -nosub
don't encode sound: -nosound
encode with given codec: -ovc x264

list of other codecs:

$ mencoder -ovc help
MEncoder SVN-r36171-4.8.1 (C) 2000-2013 MPlayer Team
Available codecs:
   copy     - frame copy, without re-encoding. Doesn't work with filters.
   frameno  - special audio-only file for 3-pass encoding, see DOCS.
   raw      - uncompressed video. Use fourcc option to set format explicitly.
   nuv      - nuppel video
   lavc     - libavcodec codecs - best quality!
   libdv    - DV encoding with libdv v0.9.5
   xvid     - XviD encoding
   x264     - H.264 encoding

x264 encode options: -x264encopts

set mode for direct motion vectors: direct=auto
number of passes: pass=2
target encoding bitrate: bitrate=900
previous frames used as predictors in B- and P-frames (def: 3): frameref=5
concurrent # of B-frames: bframes=1
fullpixel motion estimation alg.: me=umh
  NOTE: umh - uneven multi-hexagon search (slow)
enable all macroblock types: partitions=all
rate-distortion optimal quantization: trellis=1
  NOTE: 2 - enabled during all mode decisions (slow, requires subq>=6)
quantizer increment/decrement value: qp_step=4
  NOTE: maximum value by which the quantizer may be incremented/decremented between frames (default: 4)
quantizer compression (default: 0.6): qcomp=0.7
motion prediction for macroblocks in B-frames: direct_pred=auto
maximum interval between keyframes in frames: keyint=300

options after this are video filters: -vf
  NOTE: For the video filter switches, it's important that you use harddup as the last filter: it will force MEncoder to write every frame (even duplicate ones) in the output. Also, it is necessary to use scale=$WIDTH:-10 with $WIDTH as -1 to keep the original width, or a new, usually smaller, width: this is necessary since the H.264 codec uses square pixels while DVDs use rectangular pixels. Hence: scale=-1:-10,harddup
How to convert a VOB file to avi?
I'm told it's possible to embed subtitles (.srt) into video files (.avi) using ffmpeg, but I can't find any mention of it in the man page. Is this possible? What command do I use?
From man ffmpeg:

Subtitle options:

-scodec codec
    Force subtitle codec ('copy' to copy stream).
-newsubtitle
    Add a new subtitle stream to the current output stream.
-slang code
    Set the ISO 639 language code (3 letters) of the current subtitle stream.

So:

ffmpeg -newsubtitle subtitles.srt -i video.avi ...
How can I embed subtitles into videos with ffmpeg?
I want to reduce the size of a video to be able to send it via email and such. I looked at this question: How can I reduce a video's size with ffmpeg? where I got good advice on how to reduce it. Problem is that I need to manually calculate the bitrate. And also when doing this I had to finish off with manual trial and error. Is there any good way to use ffmpeg (or another tool) to reduce the size of a video to a target size? Note due to a comment from frostschutz: Worrying too much about size and going for a fixed filesize regardless of video length / resolution / framerate / content will not give satisfactory results most of the time... Upload it somewhere, email the link. If you must encode it, set a quality level, leave the bitrate dynamic. If it's obvious that size will not be met, cancel the encode and adapt quality level accordingly. Rinse and repeat. Good advice in general, but it does not suit my use case. Uploading to an external link is not an option due to technical limitations, one of them being that the receiver does not have http access. I should also clarify that I'm NOT looking for exactly the size I'm specifying. It's enough if it is reasonably close to what I want (maybe up to 5 or 10% lower or so), but it should be guaranteed that it will not exceed my target limit.
@philip-couling's answer is close but missing several pieces:

- The desired size in MB needs to be multiplied by 8 to get the bit rate.
- For bit rates, 1 Mb/s = 1000 kb/s = 1000000 b/s; the multipliers aren't 1024 (there is another prefix, KiB/s, which is 1024 B/s, but ffmpeg doesn't appear to use that).
- Truncating or rounding down a non-integer length (instead of using it precisely or rounding up) will result in the file size slightly exceeding the target size.
- The -b flag specifies the average bit rate: in practice the encoder will let the bit rate jump around a bit, and the result could overshoot your target size. You can however specify a max bit rate by using -maxrate and -bufsize.

For my use case I also wanted a hardcoded audio bitrate and to adjust the video bitrate accordingly (this seems safer too; I'm not 100% certain whether ffmpeg's -b flag specifies the bitrate for the video and audio streams together or not). Taking all these into account and limiting to strictly 25MB:

file="input.mp4"
target_size_mb=25  # 25MB target size
target_size=$(( $target_size_mb * 1000 * 1000 * 8 )) # target size in bits
length=`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$file"`
length_round_up=$(( ${length%.*} + 1 ))
total_bitrate=$(( $target_size / $length_round_up ))
audio_bitrate=$(( 128 * 1000 )) # 128k bit rate
video_bitrate=$(( $total_bitrate - $audio_bitrate ))
ffmpeg -i "$file" -b:v $video_bitrate -maxrate:v $video_bitrate -bufsize:v $(( $target_size / 20 )) -b:a $audio_bitrate "${file}-${target_size_mb}mb.mp4"

Note that when -maxrate and -bufsize limit the max bit rate, by necessity the average bit rate will be lower, so the video will undershoot the target size by as much as 5-10% in my tests (on a 20s video at various target sizes). The value of -bufsize is important, and the calculated value used above (based on target size) is my best guess.
Too small and it will vastly lower quality and undershoot the target size by as much as 50%, but too large and I think it could potentially overshoot the target size. To give the encoder more flexibility when you don't have a strict maximum file size, removing -maxrate and -bufsize will result in better quality, but can cause the video to overshoot the target size by 5% in my tests. More info in the docs, where you will see this warning: Note: Constraining the bitrate might result in low quality output if the video is hard to encode. In most cases (such as storing a file for archival), the better option is constant quality (CRF-based) encoding, which lets the encoder choose the proper bitrate. You can use the following code to declare a reusable bash function:

ffmpeg_resize () {
    file=$1
    target_size_mb=$2 # target size in MB
    target_size=$(( $target_size_mb * 1000 * 1000 * 8 )) # target size in bits
    length=`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$file"`
    length_round_up=$(( ${length%.*} + 1 ))
    total_bitrate=$(( $target_size / $length_round_up ))
    audio_bitrate=$(( 128 * 1000 )) # 128k bit rate
    video_bitrate=$(( $total_bitrate - $audio_bitrate ))
    ffmpeg -i "$file" -b:v $video_bitrate -maxrate:v $video_bitrate -bufsize:v $(( $target_size / 20 )) -b:a $audio_bitrate "${file}-${target_size_mb}mb.mp4"
}

ffmpeg_resize file1.mp4 25   # resize `file1.mp4` to 25 MB
ffmpeg_resize file2.mp4 64   # resize `file2.mp4` to 64 MB
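The arithmetic in the script can be checked in isolation. For a hypothetical 25 MB target and a 60-second input, the numbers work out like this (same decimal prefixes and round-up as above; no ffmpeg or ffprobe involved, the duration is hardcoded for illustration):

```shell
target_size_mb=25
length=60.0                                          # what ffprobe would report
target_size=$(( target_size_mb * 1000 * 1000 * 8 ))  # 200,000,000 bits
length_round_up=$(( ${length%.*} + 1 ))              # 61 s, rounded up
total_bitrate=$(( target_size / length_round_up ))   # total average bit rate in b/s
audio_bitrate=$(( 128 * 1000 ))                      # fixed 128k audio
video_bitrate=$(( total_bitrate - audio_bitrate ))
echo "video: $video_bitrate b/s"
```

Rounding the duration up rather than down is what keeps the result at or under the target: dividing by a slightly larger length can only make the bit-rate budget smaller.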
How to reduce the size of a video to a target size?
I read the following thread, and the solution works if I want to split a video into pieces every two minutes: Split video file into pieces with ffmpeg The problem is that I want to split a video into pieces every 15 seconds. When I use the following command: for i in {00..49} do ffmpeg -i fff.avi -sameq -ss 00:$[ i*2 ]:00 -t 00:00:15 output_$i.avi done it will output 15-second video sequences, but not in order. It will skip parts of the video, so I end up with a few 15 second clips, but not all the clips. I want to be able to use ffmpeg to split any video I throw at it into many pieces based on the time I give it.
I found the answer. It turns out I was having problems because I didn't have the proper FFmpeg installed, but a fork of ffmpeg. This code works for me: ffmpeg -i fff.avi -acodec copy -f segment -segment_time 10 -vcodec copy -reset_timestamps 1 -map 0 fff%d.avi fff.avi is the name of the source clip. Change -segment_time 10 to the general duration you want for each segment. If you want each clip to be about 40 seconds, then use -segment_time 40. Use -an after -map 0 if you don't want the resulting clips to have sound.
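For planning, you can estimate how many files a given -segment_time will produce with ceiling division. Treat this only as an approximation: with -vcodec copy, ffmpeg can only cut at keyframes, so the real segment boundaries (and therefore the count) may drift slightly. A sketch with hypothetical numbers:

```shell
duration=95   # total clip length in seconds (e.g. from ffprobe)
seg_len=10    # value passed to -segment_time

# Ceiling division: a 95 s clip at 10 s per segment needs 10 files
segments=$(( (duration + seg_len - 1) / seg_len ))
echo "$segments segments expected"
```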
FFmpeg - Split video multiple parts
My problem is this. I have access to a server that hosts many video files, most of them are very large and not well compressed. I intend to make a reduced quality smaller size copy of these on my local machine for better access. The problem is that the server does not have ftp access. I can scp the files to my machine and then use ffmpeg to reduce the size, but I'll run out of space if I copy all of the files locally. I am looking for a way to directly input a network file to ffmpeg, that way I'll be able to write a script that will overnight get me all the videos in reduced size.
You can use sshfs to make the remote files appear in a directory on the local machine. You don't say what distro you're using on your client, but this is cribbed from the Ubuntu sshfs documentation: Install the sshfs package (aptitude install sshfs) Add your user to the fuse group (sudo gpasswd -a username fuse) Mount the filesystem using the sshfs command To use sshfs, make yourself a directory (we'll call this /mountpoint), and do sshfs -o idmap=user remote_user@remote_server:/remote/directory /mountpoint The remote files will now appear in /mountpoint, but are in fact still on the remote server. Any changes you make will be made remotely and not locally. To unmount the directory, do fusermount -u /mountpoint
How to input a network file to ffmpeg
I got a new drive and I can copy files fine with simple cp on the drive. However, for some weird reason I get Permission denied with ffmpeg. Permissions seem fine unless I'm missing something:

> ll /media/manos/6TB/
drwxrwxrwx 13 manos      4096 Apr 16 00:56 ./
drwxr-x---+ 6 manos      4096 Apr 16 00:49 ..
-rwxrwxrwx  1 manos 250900209 Apr 15 17:28 test.mp4*
..

But ffmpeg keeps complaining:

> ffmpeg -i test.mp4 test.mov
ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
  configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-avisynth --enable-cuda --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
test.mp4: Permission denied

Simply copying like below works fine:

> cp test.mp4 test.mp4.bak
'test.mp4' -> 'test.mp4.bak'

Any ideas on what is going on? This is pretty annoying. Note ffmpeg is installed at /snap/bin/ffmpeg
So after a lot of digging I figured out that the issue is with the snap package manager. Apparently, by default snap packages can't access removable media, so we need to grant that permission manually. Check whether ffmpeg has access to removable-media like below:

> snap connections | grep ffmpeg
desktop        ffmpeg:desktop        :desktop        -
home           ffmpeg:home           :home           -
network        ffmpeg:network        :network        -
network-bind   ffmpeg:network-bind   :network-bind   -
opengl         ffmpeg:opengl         :opengl         -
optical-drive  ffmpeg:optical-drive  :optical-drive  -
pulseaudio     ffmpeg:pulseaudio     :pulseaudio     -
wayland        ffmpeg:wayland        :wayland        -
x11            ffmpeg:x11            :x11            -

Add that permission if it's missing:

sudo snap connect ffmpeg:removable-media
"Permission denied" with ffmpeg (via snap) on external drive
How to concatenate (join) multiple MP4 video files into one file interactively? There are a lot of programs that do this for two files from the command line. For instance: ffmpeg avconv MP4Box But we frequently need a solution to do this interactively.
I use MP4Box as the work base. The script I suggest reads the input files one by one, verifying each one (whether it's an ordinary file), and asks the user for the output filename to create.

#!/bin/bash
printf "### Concatenate Media files ###\n"
fInputCount=0

# Reading input files
IFS=''
while (true)
do
    let currentNumber=$fInputCount+1
    printf "File n°%s (\"ok\" to finish): " $currentNumber
    read inputFile
    [ "$inputFile" == "ok" ] && break
    [ ! -e "$inputFile" ] || [ ! -f "$inputFile" ] && printf "\"%s\" : Invalid filename. Skipped !\n" "$inputFile" && continue
    ((fInputCount++))
    inputFileList[$fInputCount]=$inputFile
    printf "\"%s\" : Added to queue !\n" "$inputFile"
done

[ "$fInputCount" == "0" ] || [ "$fInputCount" == "1" ] && echo "Not enough input data. BYE !" && exit

# Listing the input file list
for ((i=1;i<=$fInputCount;i++))
do
    printf "%2d : %s\n" $i ${inputFileList[$i]}
done

# Reading the output filename
while (true)
do
    printf "Output file without extension (\"none\" to dismiss) : "
    read outputRead
    [ "$outputRead" == "none" ] && echo "Dismissed. BYE !" && exit
    [ "$outputRead" == "" ] && echo "Try again !" && continue
    [ -e "$outputRead" ] && echo "\"$outputRead\" exists. Try again !" && continue
    outputFile=$outputRead.mp4
    echo "Output to \"$outputFile\". Go !" && break
done

# Creating a random temporary filename
tmpOutFile="/tmp/concatMedia"`date +"%s%N" | sha1sum | awk '{print $1}'`".mp4"

# Joining the first two input files
MP4Box -cat "${inputFileList[1]}" -cat "${inputFileList[2]}" $tmpOutFile

# Adding all other files
for ((i=3;i<=$fInputCount;i++))
do
    tmpIntermediateFile=$tmpOutFile
    tmpOutFile="/tmp/concatMedia"`date +"%s%N" | sha1sum | awk '{print $1}'`".mp4"
    MP4Box -cat $tmpIntermediateFile -cat "${inputFileList[$i]}" $tmpOutFile
    rm $tmpIntermediateFile
done

mv $tmpOutFile "$outputFile"

# Finished
echo "\"$outputFile\" Saved !"
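As a design note: the date +%s%N | sha1sum trick above builds a unique temporary name by hand; GNU mktemp does the same job in one call and also creates the file atomically, which avoids a race between picking the name and using it. A minimal sketch (the --suffix option is GNU coreutils; adjust on other systems):

```shell
# mktemp picks a unique name matching the template and creates the file atomically
tmpOutFile=$(mktemp --suffix=.mp4 /tmp/concatMediaXXXXXX)
echo "using $tmpOutFile"
rm -f "$tmpOutFile"
```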
Interactively concatenate video files
I've recorded a .gif of the screen with ffmpeg. I've used gifsicle and imagemagick to compress it a bit, but it's still to big. My intent is making it small by removing, say, a frame every 2 frames, so that the total count of frames will be halved. I couldn't find a way to do it, neither with gifsicle nor with imagemagick. man pages didn't help. How can I remove a frame from a .gif animation every n frames?
There is probably a better way to do it, but here is what I would do. First, split your animation into frames:

convert animation.gif +adjoin temp_%02d.gif

Then, select one out of every n frames with a small for-loop: loop over all the frames, check whether the frame index is divisible by n (here 2) and, if so, copy it to a new temporary file.

j=0
for i in $(ls temp_*gif); do
    if [ $(( $j%2 )) -eq 0 ]; then
        cp $i sel_`printf %02d $j`.gif
    fi
    j=$(echo "$j+1" | bc)
done

If you prefer to keep all the non-divisible indices (that is, if you want to delete rather than keep every nth frame), replace -eq by -ne. And once that is done, create your new animation from the selected frames:

convert -delay 20 $( ls sel_*) new_animation.gif

You can easily make a small script convert.sh, which would be something like this:

#!/bin/bash
animtoconvert=$1
nframe=$2
fps=$3

# Split in frames
convert $animtoconvert +adjoin temp_%02d.gif

# Select the frames for the new animation
j=0
for i in $(ls temp_*gif); do
    if [ $(( $j%${nframe} )) -eq 0 ]; then
        cp $i sel_`printf %02d $j`.gif
    fi
    j=$(echo "$j+1" | bc)
done

# Create the new animation & clean up everything
convert -delay $fps $( ls sel_*) new_animation.gif
rm temp_* sel_*

And then just call, for example:

$ convert.sh youranimation.gif 2 20
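The selection logic is just an index-modulo test, which you can sanity-check without any image files. With n=2, indices 0 through 5 reduce to the even ones, so a 6-frame animation keeps 3 frames:

```shell
n=2
kept=""
for j in 0 1 2 3 4 5; do
  if [ $(( j % n )) -eq 0 ]; then
    kept="$kept$j "      # this index would be copied to sel_*.gif
  fi
done
echo "kept frames: $kept"
```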
Remove nth frames of a gif (remove a frame every n frames)
I have a dilemma.. I've had a script for a while now that downloads pictures from a webcam every few minutes. The naming convention just does new_image((len(files(dir))+1) + '.jpg') and that's all fine.. until today.. I've had a Python script that loads the images based on creation date and renders them, and I take the OpenGL data and dump it into a movie, which isn't hard and it's all ok.. Except now I have a few thousand images and it's quite inefficient to go on about it this way (even though it's cool because I can build my own GUI and overlay etc).. anyway, I'm using ffmpeg to combine the images into a slideshow, like so:

ffmpeg -f image2 -r 25 -pattern_type glob -i '*.jpg' -qscale 3 -s 1920x1080 -c:v wmv1 video.wmv

The ffmpeg works fine, except that -pattern_type glob takes the images in naming order, which doesn't work because the way the files get fed is similar if not the same as ls, which looks like:

user@host:/storage/photos$ ls
0.jpg 1362.jpg 1724.jpg 2086.jpg 2448.jpg 280.jpg 3171.jpg 3533.jpg 3896.jpg 4257.jpg 4619.jpg 4981.jpg 5342.jpg 5704.jpg 6066.jpg 650.jpg
1000.jpg 1363.jpg 1725.jpg 2087.jpg 2449.jpg 2810.jpg 3172.jpg 3534.jpg 3897.jpg 4258.jpg 461.jpg 4982.jpg 5343.jpg 5705.jpg 6067.jpg 651.jpg

0, 1000, 1, 2000, 2... this is the logic of ls, so the image sequence (timelapse) will be all f-ed up.. Any ideas on how to use ffmpeg to load the images in a more sorted manner?
ffmpeg concat same file types You could use a command like this to concatenate the list of files any way you want: ffmpeg -f concat -i <( for f in *.wav; do echo "file '$(pwd)/$f'"; done ) \ output.wav The above can only work if the files you're concatenating are all the same codecs. So they'd all have to be .wav or .mpg for example. NOTE: You need to have ffmpeg v1.1 or higher to use the concat demuxer, you can read more about the above example and also how to concatenate different codecs using this technique on the ffmpeg website. ffmpeg using printf formatters Ffmpeg can also take input using the printf formatters such as %d. This matches digits starting at 0 and going up from there in order. If the numbers were structured like this, 000 - 099, you could use this formatter, %03d, which means a series of 3 digits, zero padded. So you could do something like this: ffmpeg -r 25 -i %d.png -qscale 3 -s 1920x1080 -c:v wmv1 video.wmv The above didn't quite work for me, ffmpeg was complaining about the option -c:v. I simply omitted that option and this version of the command worked as expected. ffmpeg -r 25 -i %d.png -qscale 3 -s 1920x1080 video.wmv
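You can see why the plain glob misbehaves by comparing lexicographic and numeric order on a few of the names; zero-padding makes the two orders agree, which is why a %03d-style naming scheme fixes the sequence (LC_ALL=C pins down a byte-wise sort so the demonstration is locale-independent):

```shell
nums="0 2 10 100"

# How a glob / plain ls orders the names (string comparison)
lex=$(for n in $nums; do echo "$n.jpg"; done | LC_ALL=C sort | tr '\n' ' ')

# The order you actually want (numeric comparison)
num=$(for n in $nums; do echo "$n.jpg"; done | LC_ALL=C sort -n | tr '\n' ' ')

# Zero-padded names: string order and numeric order coincide
pad=$(for n in $nums; do printf '%03d.jpg\n' "$n"; done | LC_ALL=C sort | tr '\n' ' ')

echo "lexicographic: $lex"
echo "numeric:       $num"
echo "zero-padded:   $pad"
```

So a one-off rename of the existing files to zero-padded names would also make the glob input work in the right order.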
ffmpeg -pattern_type glob -- not loading files in correct order
First part of my question: I read on the ffmpeg documentation (section 3.2 How do I encode single pictures into movies?) the following: To encode single pictures into movies, run the command: ffmpeg -f image2 -i img%d.jpg movie.mpg Notice that `%d' is replaced by the image number: img%03d.jpg means the sequence img001.jpg, img002.jpg, etc... My question is: Who is doing the translation between img%03d.jpg and img001.jpg, img002.jpg, etc? Is it the shell or ffmpeg? Second part: I would like to ask ffmpeg to encode a sequence of images into a video. However, my sequences often start with an index different from 1 (e.g. we can call it start_index) and end on an index that we can call end_index. Moreover, the sequence uses increments of value increment, e.g.: img_0025.png, img_0030.png, img_0035.png, ... img_0100.png where start_index was 25, end_index was 100, and increment was 5. I would like to feed an image sequence like the above to ffmpeg without having to rename the sequence first. The documentation explains how to do this with symbolic links, but I was wondering if there is a way to avoid them altogether, maybe using advanced globbing in zsh.
Part 1: % is not a special character, so the img%d.jpg argument is passed as is to ffmpeg which “does the job” itself. Part 2: Looking at ffmpeg documentation, I don't think there is another way to provide input files, so you may have to use symlinks or wait for the “fix”: If the pattern contains "%d" or "%0Nd", the first filename of the file list specified by the pattern must contain a number inclusively contained between 0 and 4, all the following numbers must be sequential. This limitation may be hopefully fixed.
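Two small checks related to both parts. First, the shell really does pass % through untouched, so ffmpeg receives the literal pattern. Second, for the symlink workaround, a stepped sequence like img_0025.png … img_0100.png can be generated with seq -f (GNU seq; %04g zero-pads to four digits). This sketch only builds and counts the names, it does not touch any files:

```shell
# The shell does no %-expansion: the pattern reaches ffmpeg verbatim
pattern='img%03d.jpg'
echo "$pattern"

# Names for start_index=25, increment=5, end_index=100
names=$(seq -f 'img_%04g.png' 25 5 100)
count=$(echo "$names" | wc -l)
echo "first: $(echo "$names" | head -n 1), total: $count"
```

Each generated name could then be symlinked to a contiguous 0-based sequence that ffmpeg's %d pattern accepts.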
Who is doing the job: ffmpeg or the shell?
How do I convert an hdmv_pgs_subtitle (which is image based) to a text-based subtitle in an MKV file? I have tried

ffmpeg -i in.mkv -c:v copy -c:a copy -c:s mov_text out.mkv

but that results in the following error:

Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
  Stream #0:2 -> #0:2 (hdmv_pgs_subtitle (pgssub) -> mov_text (native))
Error while opening encoder for output stream #0:2 - maybe incorrect parameters such as bit_rate, rate, width or height
Converting image based subtitles to text is a nontrivial process, as you will need some kind of OCR system to interpret the bitmaps and figure out what the corresponding text is. ffmpeg alone will not do that for you. I am not aware of any app that will do the whole process in one go, for Linux/UNIX. However, this process should work:

1. Extract the subtitles with mkvextract or ffmpeg
2. Convert the PGS subtitles to DVD SUB format with BDSup2Sub
3. OCR the subtitles into SRT format with VobSub2SRT
4. Mux the subtitles back into an mkv file with mkvmerge or ffmpeg
Convert image based subtitle to text based subtitle inside MKV file
How to send an ffmpeg stream to the framebuffer /dev/fb0? For instance, how to send the webcam output to the framebuffer? I am looking for an equivalent of this mplayer command, but using ffmpeg exclusively:

mplayer -vo fbdev2 -tv driver=v4l2 device=/dev/video0 tv://

P. S.: I don't want to pipe the output of ffmpeg to mplayer
There is a lot of misinformation on the web about this not being possible, however, it most definitely is possible. Note, you may need to adjust the -i and -pix_fmt a bit for your situation. ffmpeg -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0 Also note, the user executing this must have privileges to write to the framebuffer (i.e. root).
How to send ffmpeg output to framebuffer?
1,482,854,250,000
I'm trying to convert all m4a to mp3; my code looks like this: find . -name '*.m4a' -print0 | while read -d '' -r file; do ffmpeg -i "$file" -n -acodec libmp3lame -ab 128k "${file%.m4a}.mp3"; done but it only works for the first file; for the next ones it shows an error: Parse error, at least 3 arguments were expected, only 1 given in string '<All files in one line>' Enter command: <target>|all <time>|-1 <command>[ <argument>] The files contain spaces, ampersands and parentheses.
When reading a file line by line, if a command inside the loop also reads stdin, it can exhaust the input file. Continue reading here: Bash FAQ 89 So the code should look like this: find . -name '*.m4a' -print0 | while read -d '' -r file; do ffmpeg -i "$file" -n -acodec libmp3lame -ab 128k "${file%.m4a}.mp3" < /dev/null done
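The effect is easy to reproduce without ffmpeg; here cat stands in for any command that reads stdin inside the loop:

```shell
# Without the guard, cat swallows the rest of the list and the
# loop body runs only once:
printf 'a\nb\nc\n' | while read -r line; do
    echo "got: $line"
    cat > /dev/null                 # stand-in for ffmpeg
done
# prints only "got: a"

# With stdin redirected, the loop sees every line:
printf 'a\nb\nc\n' | while read -r line; do
    echo "got: $line"
    cat > /dev/null < /dev/null
done
# prints "got: a", "got: b", "got: c"
```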
Convert all found m4a to mp3
1,482,854,250,000
I'm running Kubuntu. I don't want to install any Windows applications in wine. I would like a (relatively simple) command to convert a flash animation (.SWF file) to an animated GIF. The input .SWF file is only 14.5 KiB and I want to convert the entire thing at best quality. I'm hoping the GIF will be of similar size. Here's the info on the ffmpeg I have installed: ffmpeg version 0.10.12-7:0.10.12-1~precise1 Copyright (c) 2000-2014 the FFmpeg developers built on Apr 26 2014 09:49:36 with gcc 4.6.3 configuration: --arch=amd64 --disable-stripping --enable-pthreads --enable-runtime-cpudetect --extra-version='7:0.10.12-1~precise1' --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --enable-bzlib --enable-libdc1394 --enable-libfreetype --enable-frei0r --enable-gnutls --enable-libgsm --enable-libmp3lame --enable-librtmp --enable-libopencv --enable-libopenjpeg --enable-libpulse --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvorbis --enable-libvpx --enable-zlib --enable-gpl --enable-postproc --enable-libcdio --enable-x11grab --enable-libx264 --shlibdir=/usr/lib/x86_64-linux-gnu --enable-shared --disable-static libavutil 51. 35.100 / 51. 35.100 libavcodec 53. 61.100 / 53. 61.100 libavformat 53. 32.100 / 53. 32.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 61.100 / 2. 61.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100
You don't mention in your post if you've tried the most basic command that should accomplish this: ffmpeg -i input.swf output.gif Assuming that works there are going to be quality problems with it, because GIF is a 256-color format. (Imgur recently extended the file format for GIFV which uses WebM video, but that's a separate topic) If that didn't work, it's because you don't have a SWF decoder or a GIF encoder. You can run this command to see what codecs/formats are supported by your version of FFMpeg: ffmpeg -formats The output of that is pretty verbose (it will list everything) and you can use grep to cut it down for you: ffmpeg -formats | grep -i GIF ffmpeg -formats | grep -i SWF For me I get this: DE gif GIF Animation E avm2 SWF (ShockWave Flash) (AVM2) DE swf SWF (ShockWave Flash) This shows that my version of FFMpeg supports decoding and encoding GIF and SWF. You may also want to test converting it to an AVI before converting it to GIF, to see the quality before any GIF problems: ffmpeg -i input.swf -sameq output.avi It may get mad about trying to use the -sameq flag because SWF doesn't have a "quality". You could also try -b:v 900k to set the video bitrate to pretty high. Update The source you linked to can easily be compiled on any Linux system that has GCC and the "zlib" library (almost everything has that) Here is how I compiled it: sudo apt-get install build-essential zlib1g-dev wget -O main.c "http://svn.perian.org/ffmpeg/tools/cws2fws.c" gcc main.c -lz (Note it's a capital -O: lowercase -o would write wget's log there instead of the download.) You can now run the tool to convert like this: ./a.out input.swf decompressed.swf Cheers
Command line for converting .SWF to animated GIF
1,482,854,250,000
When running the following command line to convert multiple .mkv files into .mp4 files, things go awry. It's as if the script tries to run all the commands at once or they get entangled in each other somehow, and the output gets messy. I don't know exactly. When I ran this just now, only the first file was converted. Other times, I seem to get unexpected or random results. ls -1 *.mkv | while read f; do ffmpeg -i "$f" -codec copy "${f%.*}.mp4"; done However, if I do the following, things work as expected: ls -1 *.mkv | while read f; do echo ffmpeg -i \"$f\" -codec copy \"${f%.*}.mp4\"; done > script; bash script But why? What is it with ffmpeg? This doesn't happen with bash loops and other commands than ffmpeg. I'd like to do it a simple way without escaping quotes, piping to a script file, etc.
ffmpeg reads from standard input as well, consuming data meant for read. Redirect its input from /dev/null: ... | while read f; do ffmpeg -i "$f" -codec copy "${f%.*}.mp4" < /dev/null; done That said, don't read the output from ls like this. A simple for loop will suffice (and remove the original problem). for f in *.mkv; do ffmpeg -i "$f" -codec copy "${f%.*}.mp4" done
Problem with ffmpeg in bash loop
1,482,854,250,000
I like mencoder for converting video files because it shows me how much time remains until the end of the conversion (unlike ffmpeg). mencoder file.wmv -ofps 23.976 -ovc lavc -oac copy -o file.avi The problem is that the resulting file is not the same quality as the input file. With ffmpeg there is the sameq option. Is there a similar option for mencoder to achieve the same quality? Is this only my feeling, or is mencoder faster than ffmpeg when I achieve the same quality? thank you
Ok, first things first... ffmpeg's sameq option doesn't mean same quality, and in many cases will not do anything relevant. This is from the ffmpeg man page: -sameq Use same quantizer as source (implies VBR). This is actually from the ffmpeg online documentation ‘-same_quant’ Use same quantizer as source (implies VBR). Note that this is NOT SAME QUALITY. Do not use this option unless you know you need it. You can check it here http://ffmpeg.org/ffmpeg.html, search for same_quant (it's the same as sameq). Ok, now that's out of the way, I don't know exactly what you are trying to achieve with your conversion, but there are 2 possibilities, which I will enumerate and try to help: You are only trying to change the Container and not the codec of the video. The container is the bit of data that's there to tell the software that's trying to play the video how things are organized. It has the info on the FPS, video codec, audio codecs, how the data is stored. If your video were water, the container could be a glass bottle, a carton, or a plastic bottle. In this case containers are wmv, avi, mp4, mov, etc. Containers are important because not all of them support all codecs, and not all software supports all containers. Keep in mind that you could still get the container right, but if the video inside is the wrong codec you're still out of luck. So if you only want to change containers you can use: mencoder file.wmv -ofps 23.976 -ovc copy -oac copy -o file.avi You're trying to change the Codec. The Codec is the algorithm or set of specs that actually says how a video is encoded; different codecs have different quality outputs, sizes, parameters, etc.
Now things get a bit tricky: while changing the container you can actually copy the video and not lose any quality (because you only change the container), when changing codecs you will most certainly lose some quality, but most times if both codecs are good and the right parameters are configured you won't notice much after just one conversion (this is inherent in the way codecs compress video). The different quality between ffmpeg and mencoder I can only guess at, but one of two things must be happening: You only need to change containers, and ffmpeg is detecting that and only copying the video. (if this is the case use -ovc copy on mencoder) You need to change codecs and ffmpeg is choosing better codecs/parameters to recompress the video. In this case try to check what codec ffmpeg is using (if you can't, post them here and I will take a look) or try a different codec with mencoder (right now you're not specifying anything), for example: mencoder file.wmv -ofps 23.976 -ovc lavc -lavcopts vcodec=mpeg4 -oac copy -o file.avi This would use the ISO standard MPEG-4 (DivX 5, XVID compatible); of course you could still need to change the bitrate that the reconversion is using, and looking at the source bitrate would be a good indicator. Also mencoder supports a lot of different libs, some multi-codec like lavc, others not; try mencoder -ovc help to get a list of supported codecs. In mine I got: MEncoder SVN-r33094-4.5.3 (C) 2000-2011 MPlayer Team Available codecs: copy - frame copy, without re-encoding. Doesn't work with filters. frameno - special audio-only file for 3-pass encoding, see DOCS. raw - uncompressed video. Use fourcc option to set format explicitly. nuv - nuppel video lavc - libavcodec codecs - best quality!
libdv - DV encoding with libdv v0.9.5 xvid - XviD encoding x264 - H.264 encoding So I could do: mencoder file.wmv -ofps 23.976 -ovc xvid -oac copy -o file.avi or mencoder file.wmv -ofps 23.976 -ovc x264 -oac copy -o file.avi For a smaller file size and looooonger compress time
Mencoder with same quality output
1,482,854,250,000
I have found a ffmpeg command to record area of my screen: ffmpeg -video_size 2000x1600 -framerate 25 -f x11grab -i :0.0+2140,280 output.mp4 But to find the correct area, I had to do multiple trial/error runs and it was tedious. Is there some possibility to select area by mouse, and have it recorded by ffmpeg ? If there is no ready made solution, how would I hack together something that works in this way? in another post, somebody mentioned xrectsel, which prints coordinates of a rectangle selected by mouse. Is there some easier way to do this?
slop (an application that queries the user for a selection and prints the region to stdout) appears to be the easiest tool for exactly this purpose, since its readme includes a screen-capture example using ffmpeg. ;-) slop can be used to create a video recording script in only three lines of code. #!/bin/bash slop=$(slop -f "%x %y %w %h %g %i") || exit 1 read -r X Y W H G ID <<< "$slop" ffmpeg -f x11grab -s "$W"x"$H" -i :0.0+$X,$Y -f alsa -i pulse ~/myfile.webm
ffmpeg: record screen area selected by mouse
1,482,854,250,000
Before upgrade to the latest Ubuntu I used the following command to extract mp3 from an avi file: $ for x in *.avi; do ffmpeg -vol 100 -ab 160k -ar 44100 -i "$x" "`basename "$x" .avi`.mp3"; done This worked. But since Ubuntu 12.04 I get Option sample_rate not found. when I try to execute this command. If I omit -ar 44100 (or -sample_rate 44100) it does extract the mp3, but in most cases the length of the extracted mp3 doesn't fit anymore. That means that they've got a length of 43min or something although the avi has a length of only 5min. It's the same problem with both ffmpeg and avconv. Any ideas how to solve these problems?
Try with this instead: ffmpeg -i "$x" -vol 100 -ab 160k -ar 44100 "`basename "$x" .avi`.mp3" Before the options were in front of the input file, but it looks like you want to set them for the output file.
mp3 extraction with avconv or ffmpeg doesn't work properly anymore
1,482,854,250,000
So far I have been using the following command to split an audio file from S to E: ffmpeg -i input.mp3 -ss S -to E -c copy output.mp3 Due to the fact that I need to split an audio file into multiple segments, each having a different length, I have to use the above command multiple times for only a single file. So, is there a way to split a single audio file into multiple segments using ffmpeg, but using only a single command? Note that the segments are not of the same length.
Sure, just give it more output files: ffmpeg -i input.mp3 -ss S -to E -c copy output1.mp3 -ss S -to E -c copy output2.mp3 … Options after the input file actually pertain to the output file (so the -c, -ss and -to options are for the output file). And you can have multiple output files. (Unlike the segments muxer, you can have overlapping output this way if you want. Or different codecs, or metadata. But @Gyan's answer with the segment muxer is easier if its restrictions are OK for you).
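If the cut points live in a file, the repeated option groups can be generated rather than typed; a sketch (segments.txt with one "start end" pair per line is an assumed format):

```shell
# Turn a list of "start end" pairs into one ffmpeg invocation with
# one output file per pair.
printf '0 10\n10 25\n25 60\n' > segments.txt   # example cut list
set --                                          # reset positional args
i=0
while read -r start end; do
    i=$((i + 1))
    set -- "$@" -ss "$start" -to "$end" -c copy "part$i.mp3"
done < segments.txt
echo ffmpeg -i input.mp3 "$@"   # drop the echo to actually run it
```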
Trim an audio file into multiple segments using `ffmpeg` with a single command
1,482,854,250,000
When I run the ./configure command in ffmpeg source directory I get this error: gcc is unable to create an executable file. If gcc is a cross-compiler, use the --enable-cross-compile option. Only do this if you know what cross compiling means. C compiler test failed. If you think configure made a mistake, make sure you are using the latest version from Git. If the latest version fails, report the problem to the [email protected] mailing list or IRC #ffmpeg on irc.freenode.net. Include the log file "config.log" produced by configure as this will help solving the problem. in config.log: check_ld cc check_cc BEGIN /tmp/ffconf.xECiIX7z.c 1 int main(void){ return 0; } END /tmp/ffconf.xECiIX7z.c gcc -c -o /tmp/ffconf.xsCaoMWN.o /tmp/ffconf.xECiIX7z.c gcc -o /tmp/ffconf.ApzYq7NQ /tmp/ffconf.xsCaoMWN.o /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/crt1.o: In function `_start': (.text+0x12): undefined reference to `__libc_csu_fini' /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/crt1.o: In function `_start': (.text+0x19): undefined reference to `__libc_csu_init' collect2: ld returned 1 exit status C compiler test failed. What's wrong? What should I do?
Your libc installation is incomplete or broken somehow. You should say what OS you use... the easiest fix is probably to reinstall the packages that comprise the libc. Or if you are really interested in finding out exactly which part of it is broken, here are some tips: On a typical glibc installation, the references to __libc_csu_init and __libc_csu_fini will be resolved by finding them in /usr/lib/libc_nonshared.a which you can check as follows: $ nm /usr/lib/libc_nonshared.a | egrep '__libc_csu_(init|fini)' 0000000000000000 T __libc_csu_fini 0000000000000010 T __libc_csu_init The use of /usr/lib/libc_nonshared.a will be triggered by linking to /usr/lib/libc.so (which is a text file, not an actual shared object). You can check that like this: There may be some other stuff in it too. You can check that with $ less /usr/lib/libc.so [...] GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a ) /usr/lib/libc.so will be used by the linker to satisfy the -lc requirement, which you can check like this: $ ld --verbose -lc [... lots of stuff ...] opened script file /usr/lib64/libc.so attempt to open /lib/libc.so.6 succeeded /lib/libc.so.6 attempt to open /usr/lib/libc_nonshared.a succeeded
error while compiling ffmpeg: gcc is unable to create an executable file
1,482,854,250,000
A standard Unixy command line approach to burning a video file to DVD is as follows (assuming an NTSC target): # 1. First use ffmpeg to convert to an mpg file. ffmpeg -i input.m4v -target ntsc-dvd output.mpg # 2. now do the authoring dvdauthor --title -o dvd -f output.mpg dvdauthor -o dvd -T NOTE: --title sets the title of the DVD, -T sets the table of contents. In both above commands the -o switch is referencing a directory, NOT the actual dvd. # 3. roll the .mpg file into an ISO file genisoimage -dvd-video -o dvdimage.iso dvd Finally, burn the resulting iso file to a DVD blank. I use Brasero, which is reliable. However, this method does not work well with non-standard aspect ratios. The DVD format is quite rigid about specifying what aspect ratios are accepted. You'll know if you have this problem if dvdauthor says something similar to WARN: unknown mpeg2 aspect ratio 1 What is a good way to modify this method to handle non-standard aspect ratios? UPDATE: Thanks to Anthony for the very thorough and clear answer. I hope this will be useful for people trying to find an answer for this vexing issue. I don't know of any other clear explanation of this on the net.
The basic approach is to add black borders to your video to make it fit in one of DVD's allowed aspect ratios. TLDR: skip down to In Conclusion. A few definitions First, though, I need to define a couple of different things: An aspect ratio is simply the width of something divided by the height, typically expressed as a fraction. Often the traditional slash is replaced with a colon: we write 4:3 instead of 4⁄3. Sometimes, these are expressed in decimal (1.333…). You could also call it 8:6, 12:9, 16:12, etc, as those are all equal. Or even 1.333:1 (equal, if only you could write enough 3s). A display aspect ratio (DAR) is the aspect ratio of an actual display (e.g., a TV). Common displays are nominally 4:3 or 16:9. A storage aspect ratio (SAR) is the ratio of width to height (in pixels) of the stored image or video. For example, NTSC DVD Video maxes out at 720 by 480 ("Full D1"), which has a SAR of 1.5:1. A pixel aspect ratio is the aspect ratio of a single pixel in a stored image. In video, pixels are not always square. When non-square, they are usually narrower than they are tall. There is a simple mathematical relationship between the three above: SAR × PAR = DAR. As an example, 720:480 * 8:9 = 4:3. That'd be video for a 4:3 display put on a DVD at full resolution. Another complexity Analog television does not have pixels. Instead, it has a continuously varying signal. A certain portion of that signal is supposed to be projected onto each line of the display, then there is an inactive time to allow for, e.g., moving to the next line. Different TVs would display slightly different amounts of each line, though, and that could vary over the lifetime of the TV, or even as the TV warmed up. DVD says not to use the leftmost and rightmost 8 pixels. So out of 720, 704 are supposed to be used. The 8 pixels on each side are supposed to be filled with black. The specified PAR is thus 10:11.
Of course, people used to digital equipment find this silly (to use words suitable for polite company). And many commercial DVD releases actually use all 720 pixels, some expecting a 10:11 PAR and some expecting 8:9. [Or similar for a 16:9 DAR]. Most hardware DVD players use 10:11, though of course it'll also depend on the TV settings. Summary If you're starting with video where you want every pixel displayed, you probably want to fit it in to 704x480 (SAR 22:15), with the 8px black bars. If the sides being cut off is OK, then you can use the full 720x480. Either way, you need to scale your video as required to get a PAR of either 10:11 (DAR of 4:3) or 40:33 (DAR of 16:9) and, if less than the full 720x480 frame, add black bars. Actually doing it Thankfully, ffmpeg can do this. On the unofficial ffmpeg support forum, ks_kalvan offers an ffmpeg video filter chain for targeting a 16:9 DAR: -filter:v "scale='w=min(720,trunc((480*33/40*dar)/2+0.5)*2):h=min(480,trunc((720*40/33/dar)/2+0.5)*2)',pad='w=720:h=480:x=(ow-iw)/2:y=(oh-ih)/2',setsar='r=40/33'" How does that work‽ Note there are three filters in there: scale, pad, and setsar. We'll take each in turn, from the end, as the end ones are the simplest. The last filter is confusing until you check the documentation (`man ffmpeg-filter) and find this expletive removed line: "the "setsar" filter sets the Sample (aka Pixel) Aspect Ratio for the filter output video." So that is actually setting the PAR to 40:33, which is the value we said we wanted to use, above. The pad filter adds black borders. The documentation tells us that ow is the output width (i.e., 720, from the w=720 part), oh the output height (i.e., 480, from the h=480 part). iw and ih are the input width and height, respectively. ow-iw is thus the number of pixels we're adding to the width (and similarly oh-ih for the height); dividing by 2 puts half of that on each side of the picture. In other words, we're centering the picture. 
The scale filter resizes the video. The w= option specifies the new/output width and h= option specifies the height. Again, it's an expression, but more complicated. The width and the height formulas are the same, just with the height and width swapped. Let's examine the width (w=) formula: Functions and operators are documented in man ffmpeg-util. For positive numbers, trunc(x+0.5) is trick to get round-to-nearest-integer, something which ffmpeg doesn't have. Building on that trunc(x+0.5) trick, we have trunc(x/2+0.5)*2. x/2 gives us half x, of course; the truncate then rounds it to the nearest integer. Doubling it then gives us the nearest even number. I'm going to use «W» where the actual command like uses 720. That's the final output width in pixels. Similarly, «H» instead of 480 (final output height). Instead of 40/33, «PAR», as that's the target pixel aspect ratio. And PAR⁻¹ is the reciprocal of PAR, i.e., 33/40. dar is a ffmpeg variable. It is the DAR of the input video. A key thing to understand this is the calculation in the middle, where we have a calculation that is «H» × dar ÷ «PAR». Remember that dar is the display w/h of the original video. So multiplying the target width (in pixels) by the dar gives us how wide it'd be if we scaled the video to have the target height «H» and square pixels as well. Dividing by the PAR then converts that into width in the non-square pixels we're actually using. min(«W», # take the minimum [lesser of] target width and... trunc( # the truncation of (round to integer towards 0) («H»*«PAR⁻¹»*dar) # calculate the wanted output width, see note above / 2 + 0.5) * 2 # and get the nearest even number to that )) Example: Say you take a 1280 by 720 pixel video with a 16:9 display aspect ratio. Starting from the middle: «H»*«PAR⁻¹»*dar = 480*33/40*16/9 = 704. 704 / 2 = 352. 
trunc(352 + 0.5) = 352 352 * 2 = 704 min(720, 704) = 704 Which is the full usable width (that is, excluding the 8px unusable areas), as we'd expect of a 16:9 video on a 16:9 display. And when we do it on height, we get 490, which thanks to the min() is held to 480. But actually, that shows that we probably wanted to use 704 there (the usable width) instead of 720, which comes to precisely 480. So it'd appear that there is a minor bug there, causing a tiny (and probably imperceptible) distortion. That's fixed below, I believe. In conclusion ffmpeg -i YOUR-FILE-HERE \ -filter:v "scale='w=min(720,trunc((480*33/40*dar)/2+0.5)*2):h=min(480,trunc((704*40/33/dar)/2+0.5)*2)',pad='w=720:h=480:x=(ow-iw)/2:y=(oh-ih)/2',setsar='r=40/33'" \ -target ntsc-dvd YOUR-OUTPUT-HERE.mpg NOTE: Older ffmpeg's don't have -filter:v. You can try -vf instead (untested), or better yet download a new static build from ffmpeg.org.
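The arithmetic in the filter chain can be checked outside ffmpeg; this awk one-liner redoes the width/height computation (using 704 as the usable width, matching the corrected command above):

```shell
# Recompute what the scale filter will produce for a given input DAR.
dar="16/9"   # input display aspect ratio, e.g. 16/9 or 4/3
awk -v dar="$dar" 'BEGIN {
    split(dar, a, "/"); d = a[1] / a[2]
    w = 480 * 33 / 40 * d           # width wanted at full height
    w = int(w / 2 + 0.5) * 2        # round to nearest even number
    if (w > 720) w = 720
    h = 704 * 40 / 33 / d           # height wanted at usable width
    h = int(h / 2 + 0.5) * 2
    if (h > 480) h = 480
    print w "x" h
}'
# -> 704x480 for a 16:9 source, as worked out in the example above
```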
Burning a video file with non-standard aspect ratio to DVD
1,482,854,250,000
I have a set of videos in the folder. I would like to get the info about this videos using ffmpeg -i command and save the output to file. So I wrote the line: find . -type f -exec ffmpeg -i {} \; > log.txt But surprisingly, the log is empty! What am I missing here?
I think what might be happening here is that ffmpeg is sending its output to stderr, in which case what you want is just: find . -type f -exec ffmpeg -i {} \; 2>log.txt I don't have ffmpeg in my distro to test with, but this has been verified with avconv from libav (should still be the same in this respect).
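You can see the same stdout-vs-stderr behaviour with any command that writes to stderr; a quick sketch:

```shell
# ffmpeg-style behaviour: diagnostics go to stderr, so plain `>`
# captures nothing while `2>` captures everything.
err() { echo "stream info here" >&2; }
err >  log_stdout.txt    # file ends up empty
err 2> log_stderr.txt    # file gets the text
```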
Save find -exec output to text file
1,482,854,250,000
The way I understand man avconv (version 9.16-6:9.16-0ubuntu0.14.04.1), the following command should convert input.ogg to output.mp3 and carry over metadata: avconv -i input.ogg -map_metadata 0 output.mp3 It does not, however; ogginfo clearly shows the information (artist, album, title, ...) in input.ogg and id3info confirms that output.mp3 has empty (ID3) tags. The same happens when converting ogg to flac, or (presumably) any combination of the formats. Is my understanding of -map_metadata wrong? Is there a way to convert between formats and keep tags (without hardcoding like this)?
Following this answer on Stack Overflow, I tinkered around and found out that the correct parameter depends on the combination of input and output format/codec. These combinations work as intended: OGG → MP3: -map_metadata 0:s:0 FLAC → MP3: -map_metadata 0:g:0 FLAC → OGG: -map_metadata 0 Using -codec libvorbis. In case your FLACs contains covers (as stream), add -vn to drop that stream (all video streams, really); the result is otherwise a broken file¹. See here for ways to add cover images back in later. Since avconv is officially dead now, I'll note that the same options seem to work with ffmpeg (at least up to 3.4.8). According to some players, anyway. easyTag would log, "Ogg bitstream contains unknown data", and Android 12 would refuse to play the file, but VLC would see nothing wrong. So YMMV.
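For a whole directory, the working OGG-to-MP3 combination can be wrapped in a loop; a sketch (the FFMPEG variable is just a testing hook, not part of any real API):

```shell
# Batch OGG -> MP3 keeping tags, per the 0:s:0 mapping above.
# FFMPEG is overridable so the loop can be dry-run with `echo`.
FFMPEG=${FFMPEG:-ffmpeg}
for f in *.ogg; do
    [ -e "$f" ] || continue          # glob matched nothing
    "$FFMPEG" -i "$f" -map_metadata 0:s:0 "${f%.ogg}.mp3"
done
```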
Mapping metadata with avconv does not work
1,482,854,250,000
I'm trying to extract certain info from ffmpeg output. Sample ffmpeg output: configuration: --enable-memalign-hack --enable-mp3lame --enable-gpl --disable-vhook --disable-ffplay --disable-ffserver --enable-a52 --enable-xvid --enable-faac --enable-faad --enable-amr_nb --enable-amr_wb --enable-pthreads --enable-x264 libavutil version: 49.0.0 libavcodec version: 51.9.0 libavformat version: 50.4.0 built on Apr 15 2006 04:58:19, gcc: 4.0.1 (Apple Computer, Inc. build 5250) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'file.mov': Duration: 00:01:32.0, start: 0.000000, bitrate: 63489 kb/s Stream #0.0(eng): Audio: pcm_s16le, 48000 Hz, stereo, 1536 kb/s Stream #0.1(eng), 29.97 fps(r): Video: Apple ProRes 422, 1280x720 Must supply at least one output file I want to get back a string with just Duration, frame rate, codec, and size, e.g.: [00:01:32_29.97_Apple ProRes 422_1280x720] I tried starting with this (from another hint): ffmpeg -i file.mov 2>&1 | sed -n 's/Duration: \(.*\), start/\1/gp' to get the Duration, but that just 'removed' the Duration and , start, i.e.: 00:01:32.0: 0.000000, bitrate: 63489 kb/s PS: I'd also like to remove the Apple from Apple ProRes 422 :-) Thanks! Update: I was able to extract the codec with sed -n "s/.*\Video: \(.*\),.*/\1/p" but I don't know how to (a) get the size and framerate, and (b) combine the searches on one line...
awk: It's like magic, but better. #!/usr/bin/awk -f /Duration/ {sub(/,/, "", $2); fields["dur"] = $2} /fps/ { fields["fps"] = $3 } /Video/ { sub(/.*Video:/, "", $0); sub(/\W*Apple\W*/, "", $0); split($0, arr, ", ") fields["codec"] = arr[1]; fields["res"] = arr[2]; } END { printf "[%s_%s_%s_%s]\n", fields["dur"], fields["fps"], fields["codec"], fields["res"] }
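The program can be exercised without ffmpeg by piping in the two relevant sample lines from the question (with one tweak: a literal character class stands in for \W, which is a GNU awk extension):

```shell
# Sanity-check the awk program against the sample ffmpeg output.
printf '%s\n' \
  'Duration: 00:01:32.0, start: 0.000000, bitrate: 63489 kb/s' \
  'Stream #0.1(eng), 29.97 fps(r): Video: Apple ProRes 422, 1280x720' |
awk '/Duration/ {sub(/,/, "", $2); fields["dur"] = $2}
     /fps/ { fields["fps"] = $3 }
     /Video/ { sub(/.*Video:/, "", $0)
               sub(/ *Apple */, "", $0)   # portable stand-in for \W*Apple\W*
               split($0, arr, ", ")
               fields["codec"] = arr[1]; fields["res"] = arr[2] }
     END { printf "[%s_%s_%s_%s]\n", fields["dur"], fields["fps"],
                  fields["codec"], fields["res"] }'
# -> [00:01:32.0_29.97_ProRes 422_1280x720]
```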
extracting certain info from output
1,482,854,250,000
I have the following script to convert a big bunch of .MOD and .XM files into Wave format: #!/bin/bash for f in ./XM.* ./MOD.* do xmp $f -d wav -o - | ffmpeg -i - -acodec libmp3lame -ab 320k "$f.mp3" done But it doesn't work as expected. The program just hangs. It creates the .wav file but nothing more (it doesn't write to it). Even the -vvv switch doesn't give any information. The strange thing is: if I prepend "strace", it works fine. Any ideas/workarounds?
Perhaps xmp gets confused because stdin is not a tty? You could try: xmp $f -d wav -o - </dev/null | ffmpeg -i - -acodec libmp3lame -ab 320k "$f.mp3" Also, I would imagine that the order of arguments needs to be xmp -d wav -o - "$f" </dev/null | ffmpeg -i - -acodec libmp3lame -ab 320k "$f.mp3" On Ubuntu 14.04 with xmp 4.0.6 and avconv instead of ffmpeg, the order needs to be with the -d wav option later, or raw gets used xmp -o - -d wav "$f" | avconv -i - -b 320k "$f.mp3"
Batch conversion of Module files with XMP
1,482,854,250,000
When I try to play audio in chromium (e.g. in google translate or any icecast radio station) it gives this error in terminal [69:97:0711/164208.951104:ERROR:render_media_log.cc(30)] MediaEvent: MEDIA_ERROR_LOG_ENTRY {"error":"FFmpegDemuxer: open context failed"} [69:69:0711/164208.951234:ERROR:render_media_log.cc(30)] MediaEvent: PIPELINE_ERROR DEMUXER_ERROR_COULD_NOT_OPEN Sound in video (e.g. youtube) still works without problems, so it happens only with audio tag. My chromium version: Chromium 67.0.3396.87 built on Debian 9.4, running on Debian 9.5 My ffmpeg version: ffmpeg version 3.2.10-1~deb9u1 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 6.3.0 (Debian 6.3.0-18) 20170516 configuration: --prefix=/usr --extra-version='1~deb9u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared libavutil 55. 34.101 / 55. 34.101 libavcodec 57. 64.101 / 57. 64.101 libavformat 57. 56.101 / 57. 56.101 libavdevice 57. 1.100 / 57. 1.100 libavfilter 6. 65.100 / 6. 65.100 libavresample 3. 1. 0 / 3. 1. 0 libswscale 4. 2.100 / 4. 
2.100 libswresample 2. 3.100 / 2. 3.100 libpostproc 54. 1.100 / 54. 1.100 I still can download radio stations .m3u files and VLC (which also uses libav libraries) plays them without trouble. Playing audio via firefox works too.
I had the same problem. It happened after the last chromium upgrade. This is the email which described things changed in ~deb9u1. livewire98801 suggests installing a different version of chromium. However installing packages from testing into stable is not advised: Don't make a FrankenDebian Debian Stable should not be combined with other releases. Alternatively one can install an older version from debian stable. It contains the security patches but videos work with it: ➜ ~ apt list chromium -a Listing... Done chromium/stable,now 67.0.3396.87-1~deb9u1 amd64 [installed] chromium/stable 62.0.3202.89-1~deb9u1 amd64 Running apt install chromium=62.0.3202.89-1~deb9u1 installs the older version from stable (stretch).
Chromium "FFmpegDemuxer: open context failed" when playing audio
1,482,854,250,000
There are many audio and video players but I prefer to use one tool for many purposes. And so I thought of using ffplay as both audio and video player. To play a file the command is like this. ffplay path_to_audio_file.mp3 Fine, but how do I play a list of audio files or a list of videos? I tried to use: ffplay *.mp3 but to no avail. It gives me the following error: Argument 'audiofileB.mp3' provided as input filename, but 'audiofileA.mp3' was already specified.
ffplay appears to only support a single input file, so you'll need to use code to loop over a list of input files (and possibly to shuffle them); wildly assuming coreutils (for shuf), perhaps something like: find musicdir -type f -name "*.mp3" | shuf | while read f; do ffplay -autoexit -- "$f"; done This will of course break on filenames containing newlines or backslashes (plain read mangles both); spaces inside a name survive, since read puts the whole line in f. (My current music player is fairly similar, find ~/music -type f -name "*.mp3" | mpg123 --shuffle -Z --list -)
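A variant that survives any filename uses NUL delimiters end to end (assuming GNU shuf for -z and bash for read -d ''); the play function here is just a hook so the loop can be tried with something other than ffplay:

```shell
#!/bin/bash
# NUL-delimited shuffle-and-play loop, safe for newlines and
# backslashes in names.
play() { ffplay -autoexit -nodisp -- "$1"; }
find musicdir -type f -name '*.mp3' -print0 |
shuf -z |
while IFS= read -r -d '' f; do
    play "$f"
done
```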
How do I use ffplay to play a list of audio files
1,482,854,250,000
I tried to enable libass for ffmpeg by passing --enable-libass on the command line, but it didn't recognise the option. Does anyone know how to enable libass on Linux?
First you need to make sure that your particular version of ffmpeg was built with and supports that switch. You'll also likely need to make sure that the library libass is installed as well. You don't specify your Linux distro, but I did notice that libass is available in my stock Fedora 19 repository, so it's trivial to install.

$ sudo yum install libass

Now back to ffmpeg's support of libass. You can confirm how it was built by simply running it without any arguments.

$ ~/ffmpeg |& grep libass
configuration: --prefix=/root/ffmpeg-static/64bit --extra-cflags='-I/root/ffmpeg-static/64bit/include -static' --extra-ldflags='-L/root/ffmpeg-static/64bit/lib -static' --extra-libs='-lxml2 -lexpat -lfreetype' --enable-static --disable-shared --disable-ffserver --disable-doc --enable-bzlib --enable-zlib --enable-postproc --enable-runtime-cpudetect --enable-libx264 --enable-gpl --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-gray --enable-libass --enable-libfreetype --enable-libopenjpeg --enable-libspeex --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-version3 --enable-libvpx

So the version that I have does include this support, --enable-libass. If your version of ffmpeg doesn't support it, you can simply download a static build: https://johnvansickle.com/ffmpeg/
How to enable libass on Linux?
1,482,854,250,000
What is the ffmpeg command to record the screen and internal audio (on Ubuntu 18.04)? I'll omit the many things I tried that did not work and skip to something close to what I am looking for:

V="$(xdpyinfo | grep dimensions | perl -pe 's/.* ([0-9]+x[0-9]+) .*/$1/g')"
A="$(pacmd list-sources | grep -PB 1 "analog.*monitor>" | head -n 1 | perl -pe 's/.* //g')"
F="$(date --iso-8601=minutes).mkv"
ffmpeg -video_size "$V" -framerate 10 -f x11grab -i :0.0 -f pulse -ac 2 -i "$A" "$F"

I can get video but no audio.

parecord -d alsa_output.pci-0000_00_1b.0.analog-stereo.monitor example.wav # index: 1

will get audio.
The framerate option applied to both streams, but since ffmpeg documentation examples are scattered, I'll leave an answer here.

A="$(pacmd list-sources | grep -PB 1 "analog.*monitor>" | head -n 1 | perl -pe 's/.* //g')"
F="$(date --iso-8601=minutes | perl -pe 's/[^0-9]+//g').mkv"
V="$(xdpyinfo | grep dimensions | perl -pe 's/.* ([0-9]+x[0-9]+) .*/$1/g')"
ffmpeg -loglevel error -video_size "$V" -f x11grab -i :0.0 -f pulse -i "$A" -f pulse -i default -filter_complex amerge -ac 2 -preset veryfast "$F"

where, for example:

#A=1
#F=2018121711440500.mkv
#V=2560x1440

ffmpeg : the tool
-loglevel error : only print errors
-video_size "$V" : resolution of your screen (or less if you only want a subsection recorded)
-f x11grab : record the screen (screen recordings may not be possible on Wayland?)
-i :0.0 : the X11 screen ID (can also add +x,y for offset)
-f pulse : the audio driver
-i "$A" : the id of the audio stream
-f pulse : the audio driver again (maybe not needed?)
-i default : normally the system microphone
-filter_complex amerge : merge the 2 audio streams
-ac 2 : convert the 4 audio channels to 2
-preset veryfast : go light on video encoding to avoid stuttering
"$F" : the output file

Remember that the parameter order matters, and pavucontrol can re-map audio only while ffmpeg is running.
record screen and internal audio with ffmpeg
1,482,854,250,000
Today I learned of a Windows-only product called the Overwolf Replay HUD, which lets the user press a key to replay the last 20 seconds of happenings on their screen. It's meant for people playing or spectating fast-paced videogames who want to quickly review a hectic moment. I'm trying to duplicate that behaviour on Linux. So far, I figure I could easily start ffmpeg (with -f x11grab) capture to a file in /tmp (which is memory-mapped), then use sxhkd to bind a keyboard shortcut to launch mpv to play the last 20 seconds of that file. However, the rest of the recording would still be stored, and I'd eventually run out of RAM. How could I go about keeping only the last 20 seconds?
The segment muxer will work.

Step 1:

ffmpeg -i input -force_key_frames expr:gte(t,n_forced*4) -c:v libx264 -c:a aac -f segment -segment_time 4 -segment_wrap 6 -segment_list list.m3u8 -segment_list_size 6 seg%d.ts

This will save the recording in segments of 4 seconds. Once 6 segments have been written, the next segment will overwrite the first file. The playlist is updated accordingly.

Step 2:

ffmpeg -i list.m3u8 -c copy video.mp4

or

ffplay list.m3u8

The duration of the preserved footage is 20 < duration < 24.
How can I record my desktop into a "replay buffer", so I can replay the last x seconds?
1,482,854,250,000
As noted here, we could speed up audio with ffmpeg using:

$ ffmpeg -i input.mkv -filter:a "atempo=2.0" -vn output.mkv

But on Ubuntu, ffmpeg is replaced by avconv, and the atempo filter is not available in avconv. My question is: Are there any alternatives to the atempo filter to use with avconv, or how to speed up audio AND video using avconv? How to speed up a video file (if you have better ideas)?
It is possible to circumvent the fact that the atempo filter is not available for avconv (yet the setpts video filter is). Simply use another tool like sox to do the audio part (adjust the mapping depending on your streams):

avconv -i input.mkv -c copy -map 0:0 video.raw   # copy raw video stream
avconv -i input.mkv -c copy -map 0:1 audio.raw   # copy raw audio stream
sox audio.raw audioq.raw speed 4                 # speed up (4x) audio & pitch
sox audio.raw audioq.raw tempo 4                 # or, to preserve pitch
avconv -i video.raw -filter:v "setpts=0.25*PTS" output.raw   # speed up (4x) video
avconv -i output.raw -i audioq.raw final.mkv     # combine outputs to .mkv

There is surely a simpler way to do this, but I've tried with some random .mkv file and it worked.
Is there any equivalent to the atempo ffmpeg audio filter but for avconv to speed up video & audio?
1,482,854,250,000
When you want to extract two minutes of a video file from minute 2 to minute 4, you will use the command

ffmpeg -i input.mpg -sameq -ss 00:02:00 -t 00:02:00 output.mpg

I'd like to split the video into two-minute chunks. I have an idea to use a bash script, but there are a few errors.

for i in seq 50; do ffmpeg -i input.mpg -sameq -ss 00:`expr $i \* 2 - 2`:00 -t 00:02:00 output.mpg; done

Do you know how to correct the errors in the script where I use the $i expression? Or is there some better solution?
You forgot to use backticks - or better: a $( ) subshell - in the seq invocation. This works:

for i in $( seq 50 ); do ffmpeg -i input.mpg -sameq -ss 00:`expr $i \* 2 - 2`:00 -t 00:02:00 output.mpg; done

Another thing is that you probably don't want output.mpg to be overwritten in each run, do you? :) Use $i in the output filename as well. Apart from that: In bash, you can just use $(( )) or $[ ] instead of expr - it also looks more clear (in my opinion). Also, there is no need for seq - brace expansion is all you need to get a sequence. Here's an example:

for i in {1..50}
do
    ffmpeg -i input.mpg -sameq -ss 00:$[ i*2 - 2 ]:00 -t 00:02:00 output_$i.mpg
done

Another good thing about braces is that you can have leading zeros in the names (very useful for file sorting in the future). Note that zero-padded values like 08 must be prefixed with 10# in arithmetic, or bash rejects them as invalid octal numbers:

for i in {01..50}
do
    ffmpeg -i input.mpg -sameq -ss 00:$[ 10#$i * 2 - 2 ]:00 -t 00:02:00 output_$i.mpg
done

Notice as well, that i*2 - 2 can be easily simplified to i*2 if you just change the range:

for i in {00..49}
do
    ffmpeg -i input.mpg -sameq -ss 00:$[ 10#$i * 2 ]:00 -t 00:02:00 output_$i.mpg
done
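The 10# prefix matters whenever brace expansion produces zero-padded numbers; a quick standalone check in plain bash:

```shell
# zero-padded strings are parsed as octal in arithmetic unless forced to base 10
i=08
echo $(( 10#$i * 2 ))   # prints 16; plain $(( i * 2 )) would abort with "value too great for base"
```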
Split video file into pieces with ffmpeg
1,482,854,250,000
I'm using avconv for trimming and converting videos. Let's say I want to drop the first 7 and last 2.5 seconds of the video stream and one audio stream of a one-hour mts file:

avconv -i input.mts -map 0:0 -map 0:3 -ss 0:0:07 -t 0:59:50.5 out.mov

This works so far, but now I want to add two seconds of fading in and out at the beginning and the end by adding:

-vf fade=type=in:start_frame=350:nb_frames=100 -vf fade=type=out:start_frame=178750:nb_frames=100

Those frames are calculated with the 50 fps that avconv reports for the video source. But there is neither fading in nor out.

1) What goes wrong with the video fading and how to do it right?
2) How to add audio fading? There seems to be an -afade option, but I don't find it documented.

Alternatively, you can propose a different tool for this goal (trim and fade video and audio), preferably available as a package for Debian 8.
I finally found the time to try the answer suggested by @Mario G., but it seemed extremely cumbersome. I need to do this many dozens of times. I read the documentation of ffmpeg and found it much more powerful than avconv, including fading for audio and video, so the solution is ffmpeg -i input.mts -map 0:0 -map 0:3 -ss 0:0:07 -to 0:59:57.5 -vf 'fade=t=in:st=7:d=2,fade=t=out:st=3595.5:d=2,crop=out_h=692' -af 'afade=t=in:st=7:d=2,afade=t=out:st=3595.5:d=2' out.mov So the st= and d= parameters for the fade take times in seconds, no need for converting to frames. I also discovered the -to option to take the end time directly instead of calculating the length. This command does all steps channel selection with -map, trimming with -ss and -to, video fading with -vf option fade=t=in and fade=t=out, audio fading with -af option afade=t=in and afade=t=out and cropping with -vf option crop= in a single step.
trim and fade in/out video and audio with avconv (or different tool)
1,482,854,250,000
How can I determine whether the internal microphone of my laptop is used or not in one single command? Example: The output of the command should be different once a ffmpeg capture is run: ffmpeg -f alsa -ac 2 -i plughw:0,0 recording.mp4
Look into /proc/asound/card0/pcm0c/sub0/status (the c suffix marks the capture device; pcm0p would be the playback side); it's either "closed" or something like:

state: RUNNING
owner_pid   : 6371
trigger_time: 51690.093652120
tstamp      : 0.000000000
delay       : 51156
avail       : 210988
avail_max   : 229376
-----
hw_ptr      : 79916
appl_ptr    : 131072
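That check can be collapsed into a single shell snippet; the card and device numbers in the path below are assumptions, so adjust them for your hardware:

```shell
# report whether the capture substream is active; path varies per sound card
status_file=/proc/asound/card0/pcm0c/sub0/status   # assumption: card 0, device 0
if [ -r "$status_file" ] && grep -q '^state: RUNNING' "$status_file"; then
    echo "microphone in use"
else
    echo "microphone not in use (or status file absent)"
fi
```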
How to know if my microphone is used or not?
1,482,854,250,000
While searching through U&L I noticed a fair amount of questions asking how to script the generation of ffmpeg command lines like these: ffmpeg -i video.mp4 -ss 00:00:00 -t 00:10:00 -c copy 01.mp4 ffmpeg -i video.mp4 -ss 00:10:00 -t 00:10:00 -c copy 02.mp4 ffmpeg -i video.mp4 -ss 00:20:00 -t 00:10:00 -c copy 03.mp4 In researching solutions for this I stumbled across this ticket in the ffmpeg issue tracker, titled: Split an input video into multiple output video chunks. This ticket highlights a patch that would allow you to finally be able to provide a list of time points to cut a video into smaller sections with a single command line like this: $ ffmpeg -i input.avi -f segment -segment_times 10,20,40,50,90,120,180 \ -vcodec copy output02%d.avi The patch appears to have been released in this revision of the code repository: commit 2058b52cf8a4eea9bf046f72b98e89fe9b36d3e3 Author: Stefano Sabatini <[email protected]> Date: Sat Jan 28 22:36:38 2012 +0100 lavf/segment: add -segment_times option Address trac ticket #1504. I downloaded this statically built version of ffmpeg, ffmpeg.static.64bit.2013-10-05.tar.gz from the ffmpeg site, but it apparently didn't include that switch. $ ./ffmpeg --help |& grep segment $ Has anyone been able to get this new switch working?
It's probably because of the missing -map option, which designates the input streams as a source for the output file. I'm using ffmpeg 2.6.1 and am able to split the video with -segment_times:

ffmpeg -i foo.mp4 -segment_times 10,20,30,40 -c copy -map 0 -f segment %03d.mp4

If you need more elaborate splitting, you can use OpenCV to read the original video and write the desired frames to the new splits. Check these out: "Creating a video with OpenCV" and "Split a video to multiple chunks according to multiple starting and ending frame indices"
Anyone gotten the switch '-segment_times' working with ffmpeg?
1,482,854,250,000
I am trying to get the duration of a bunch of files in the current directory and its sub-directories, all ending in .mp4. I'm trying to use ffprobe. To make things simple, I am not using find, but ls. If it only lists durations in seconds, it will be fine, since summing them all up would probably be beyond me! My current effort is like this:

ls -a * | grep .mp4$ | xargs ffprobe

This shows:

Argument 'SomeVideo.mp4' provided as input filename, but 'DifferentVideo.mp4' was already specified.

How can I make this work? Can ffprobe not just process one file at a time passed via pipe?
With exiftool:

$ exiftool -ext mp4 -r -n -q -p '${Duration;$_ = ConvertDuration(our $total += $_)}' . | tail -n 1
4 days 22:14:59

The main difference from @don_cristi's answer to a very similar question is the -r -ext mp4 options, which let exiftool look for files with that extension recursively.
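If you would rather sum raw per-file seconds yourself (for instance, collected with repeated ffprobe -show_entries format=duration calls), awk keeps a running total; inline sample numbers stand in for the ffprobe output here:

```shell
# sum a list of durations (seconds, one per line) and print the total
printf '%s\n' 12.5 30 17.5 | awk '{ total += $1 } END { print total " seconds total" }'
```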
How to get the total duration of all video files in current folder and sub-directories [duplicate]
1,482,854,250,000
I want to cut a video into about 10 minute parts like this:

ffmpeg -i video.mp4 -ss 00:00:00 -t 00:10:00 -c copy 01.mp4
ffmpeg -i video.mp4 -ss 00:10:00 -t 00:10:00 -c copy 02.mp4
ffmpeg -i video.mp4 -ss 00:20:00 -t 00:10:00 -c copy 03.mp4

With for it will be like this:

for i in `seq 10`; do ffmpeg -i video.mp4 -ss 00:${i}0:00 -t 00:10:00 -c copy ${i}.mp4; done;

But it works only if the duration is under an hour. How can I convert a number to time format in bash shell?
BASH isn't that bad for this problem. You just need to use the very powerful, but underused date command.

for i in {1..10}; do
    hrmin=$(date -u -d@$(($i * 10 * 60)) +"%H:%M")
    outfile=${hrmin/:/-}.mp4
    ffmpeg -i video.mp4 -ss ${hrmin}:00 -t 00:10:00 -c copy ${outfile}
done

date Command Explained

date with the -d flag allows you to set which date you want displayed (instead of the current date and time, which is the default). In this case, I am setting it to a UNIX time by prepending the @ symbol before an integer. The integer in this case is the time in ten-minute increments (calculated by the BASH built-in calculator: $((...))). The + symbol tells date that you would like to specify a format for displaying the results. In our case, we care only about the hour (%H) and the minutes (%M). And finally, the -u is to display as UTC time instead of local. This is important in this case because we specified the time as UTC when we gave it the UNIX time (UNIX time is always UTC). The numbers would most likely not start from 0 if you didn't specify -u.

BASH Variable Substitution Explained

The date command gave us just what we needed. But colons in a file name might be problematic/non-standard. So, we substitute the ':' for a '-'. This can be done by the sed or cut or tr command, but because this is such a simple task, why spawn a new subshell when BASH can do it? Here we use BASH's simple expression substitution. To do this, the variable must be contained within curly braces (${hrmin}) and then use the standard forward slash notation. The first string after the first slash is the search pattern. The second string after the second slash is the substitution. BASH variable substitution and more can be found at http://tldp.org/LDP/abs/html/parameter-substitution.html.
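If you want to avoid spawning date entirely, printf plus shell arithmetic produces the same zero-padded names; this sketch echoes the names instead of running ffmpeg:

```shell
# derive HH-MM style names from minute offsets with pure bash arithmetic
for i in {1..10}; do
    total=$(( i * 10 ))                                   # minutes from the start
    printf -v hrmin '%02d-%02d' $(( total / 60 )) $(( total % 60 ))
    echo "$hrmin.mp4"
done
```

The %02d conversions handle the zero padding, so no string substitution step is needed.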
how to convert number to time format in shell script?
1,482,854,250,000
I love VLC's option of slowing down audio playback. Now I want to take my mp3-files to a portable player and play them there. Unfortunately the player does not have a slow down option. How can I convert mp3s so they sound like being played slowly using VLC? I will prefer a command line tool, but will accept other free software tools for GNU/Linux.
I recommend using SoX like this: sox <input> <output> tempo 0.5 This slows down <input>'s tempo by a factor of 2 and record the result to <output>. You can add option --show-progress to display relevant information and progression percentage. Note that if <input> is for instance normal.wav and <output> is half-tempo.ogg, SoX will detect the different audio encoding by itself (for more control on that part, read man sox). The tempo algorithm should give similar results to VLC's scaletempo module. However you can try the alternative stretch algorithm: sox <input> <output> stretch 2 The result is expected to be more synthetic (again, read man sox for details) and be aware that the parameter is the inverse of the one given to tempo (2 instead of 0.5 in this example). SoX offers even more possibilities of time manipulation through speed, pitch and bend options that can be easily explored. To install SoX using apt-get: sudo apt-get install sox To enable extra codecs (including MP3), add this library: sudo apt-get install libsox-fmt-all As a final note, I would come back to VLC since you can play your file slowed down from command line this way: cvlc --rate 0.5 <input> So there may be a way to ask VLC to save the result to some file, or to output audio to JACK and then use a JACK compatible recorder.
Slow down mp3 audio
1,482,854,250,000
An ffmpeg webcam capture is running in the background:

ffmpeg -f video4linux2 -s vga -i /dev/video0 capture.mp4    (1)

I am therefore unable to read it with ffplay since the device /dev/video0 is in use:

ffplay -f video4linux2 -s vga -i /dev/video0    (2)
[...]
/dev/video0: Device or resource busy

How to read the webcam with ffplay without stopping the background capture?

PS: The background capture command should not be modified. I am aware that this can be done by modifying command (1) with a fifo.
Do this:

sudo modprobe v4l2loopback devices=1

If you get a similar error like modprobe: FATAL: Module v4l2loopback not found in directory /lib/modules/4.6.0-kali1-amd64, just install v4l2loopback-dkms first, e.g.:

sudo apt-get install v4l2loopback-dkms

Now run this first (note that it can't be run as a background process with a trailing &):

ffmpeg -f video4linux2 -i /dev/video0 -codec copy -f v4l2 /dev/video1

Without stopping the process above, in other bash session(s), you should be able to run your two commands, i.e.

ffmpeg -f video4linux2 -s vga -i /dev/video1 capture.mp4

and

ffplay -f video4linux2 -s vga -i /dev/video1

(change it to /dev/video1) at the same time.

If you set it to 2:

sudo modprobe v4l2loopback devices=2

Then you can do ffmpeg -f video4linux2 -i /dev/video0 -codec copy -f v4l2 /dev/video1 -codec copy -f v4l2 /dev/video2, which allows you to use both /dev/video1 and /dev/video2 at the same time.
How to read a webcam that is already used by a background capture?
1,482,854,250,000
I have a video and I want to extract every 10th frame, as I am getting way too many images with:

ffmpeg -i out1.avi -r 1 -f image2 image-%3d.jpeg

How to extract images from the video file?
If you want 1/10 of what you have now (when you use -r 1), then use -r 0.1. It will get 1 frame every 10 seconds instead of 1 frame every second.

ffmpeg -i out1.avi -r 0.1 -f image2 image-%3d.jpeg

EDIT: If you really want every 10th frame from the video, then you can use select with modulo 10:

ffmpeg -i out1.mp4 -vf "select=not(mod(n\,10))" -vsync vfr image_%03d.jpg

but it may give more images than before. If the video has 25 fps, then -r 1 gives an image every 25th frame, and if the video has 60 fps, it gives an image every 60th frame. So it gives fewer images than this code, which gets an image every 10th frame.
How to extract every 10th frame from a video?
1,487,619,657,000
I'm compiling FFMPEG from source using the guide for Ubuntu which I've used before with success. I'm compiling on a Vagrant virtual machine in VirtualBox on Ubuntu server 14.04. You can clone the project and vagrant up, and you'll have FFMPEG installed in the box: github.com/rfkrocktk/vagrant-ffmpeg. I'm using Ansible to automate the compilation, so you can see all of the steps here:

---
- hosts: all
  tasks:
    # version control
    - apt: name=git
    - apt: name=mercurial

    # build tools
    - apt: name=build-essential
    - apt: name=autoconf
    - apt: name=automake
    - apt: name=cmake
    - apt: name=pkg-config
    - apt: name=texinfo
    - apt: name=yasm

    # libraries
    - apt: name=libass-dev
    - apt: name=libfreetype6-dev
    - apt: name=libsdl1.2-dev
    - apt: name=libtheora-dev
    - apt: name=libtool
    - apt: name=libva-dev
    - apt: name=libvdpau-dev
    - apt: name=libvorbis-dev
    - apt: name=libxcb1-dev
    - apt: name=libxcb-shm0-dev
    - apt: name=libxcb-xfixes0-dev
    - apt: name=zlib1g-dev
    - apt: name=libopus-dev
    - apt: name=libmp3lame-dev
    - apt: name=libx264-dev

    # dependent libraries
    # libx265
    - name: clone libx265
      command: hg clone https://bitbucket.org/multicoreware/x265 /usr/src/x265
      args:
        creates: /usr/src/x265

    - name: configure libx265
      command: cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLED_SHARED:bool=off ../../source
      args:
        chdir: /usr/src/x265/build/linux
        creates: /usr/src/x265/build/linux/Makefile

    - name: compile libx265
      command: make -j2
      args:
        chdir: /usr/src/x265/build/linux
        creates: /usr/src/x265/build/linux/x265

    - name: install libx265
      command: make install
      args:
        chdir: /usr/src/x265/build/linux
        creates: /usr/local/bin/x265

    # libfdk-aac
    - name: clone libfdk-aac
      command: git clone https://github.com/mstorsjo/fdk-aac.git /usr/src/libfdk-aac
      args:
        creates: /usr/src/libfdk-aac

    - name: autoconf libfdk-aac
      command: autoreconf -fiv
      args:
        chdir: /usr/src/libfdk-aac
        creates: /usr/src/libfdk-aac/configure

    - name: configure libfdk-aac
      command: /usr/src/libfdk-aac/configure --prefix=/usr/local --disable-shared
      args:
        chdir: /usr/src/libfdk-aac
        creates: /usr/src/libfdk-aac/libtool

    - name: compile libfdk-aac
      command: make -j2
      args:
        chdir: /usr/src/libfdk-aac
        creates: /usr/src/libfdk-aac/libFDK/src/FDK_core.o

    - name: install libfdk-aac
      command: make install
      args:
        chdir: /usr/src/libfdk-aac
        creates: /usr/local/lib/libfdk-aac.a

    # libvpx
    - name: download libvpx
      shell: wget -O - https://storage.googleapis.com/downloads.webmproject.org/releases/webm/libvpx-1.4.0.tar.bz2 | tar xjvf -
      args:
        chdir: /usr/src
        creates: /usr/src/libvpx-1.4.0

    - name: configure libvpx
      command: ./configure --prefix=/usr/local --disable-examples --disable-unit-tests
      args:
        chdir: /usr/src/libvpx-1.4.0
        creates: /usr/src/libvpx-1.4.0/Makefile

    - name: compile libvpx
      command: make -j2
      args:
        chdir: /usr/src/libvpx-1.4.0
        creates: /usr/src/libvpx-1.4.0/libvpx.a

    - name: install libvpx
      command: make install
      args:
        chdir: /usr/src/libvpx-1.4.0
        creates: /usr/local/lib/libvpx.a

    # ffmpeg itself
    - name: download ffmpeg
      shell: wget -O - "https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2" | tar xjvf -
      args:
        chdir: /usr/src
        creates: /usr/src/ffmpeg

    - name: configure ffmpeg
      shell: PKG_CONFIG_PATH=/usr/local/lib/pkgconfig /usr/src/ffmpeg/configure \
               --prefix=/usr/local \
               --pkg-config-flags='--static' \
               --bindir=/usr/local/bin \
               --enable-gpl --enable-version3 --enable-nonfree \
               --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame \
               --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx \
               --enable-libx264 --enable-libx265
      args:
        chdir: /usr/src/ffmpeg
        creates: /usr/src/ffmpeg/config.asm

    - name: compile ffmpeg
      command: make -j2
      args:
        chdir: /usr/src/ffmpeg
        creates: /usr/src/ffmpeg/ffmpeg

    - name: install ffmpeg
      command: make install
      args:
        chdir: /usr/src/ffmpeg
        creates: /usr/local/bin/ffmpeg

Hopefully, even if you don't know Ansible, it should be clear what this is doing.
The problem I'm having is that even after all of this running successfully, when I run ffmpeg from within the machine, I get the following error:

ffmpeg: error while loading shared libraries: libx265.so.77: cannot open shared object file: No such file or directory

I can clearly find the file:

$ find /usr -iname libx265.so.77
/usr/local/lib/libx265.so.77

Why is this not being found? Am I missing something in the compilation guide? I'd like my binaries to be as portable as humanly possible.

Edit

Output of ldd $(which ffmpeg):

linux-vdso.so.1 => (0x00007fff552ea000)
libva.so.1 => /usr/lib/x86_64-linux-gnu/libva.so.1 (0x00007f2fb3b45000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f2fb3926000)
libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f2fb3722000)
libxcb-xfixes.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-xfixes.so.0 (0x00007f2fb351b000)
libxcb-shape.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shape.so.0 (0x00007f2fb3317000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f2fb2fe1000)
libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f2fb2cf1000)
libSDL-1.2.so.0 => /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0 (0x00007f2fb2a5b000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fb2754000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2fb2536000)
libx265.so.77 => not found
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f2fb232d000)
libx264.so.142 => /usr/lib/x86_64-linux-gnu/libx264.so.142 (0x00007f2fb1f97000)
libvorbisenc.so.2 => /usr/lib/x86_64-linux-gnu/libvorbisenc.so.2 (0x00007f2fb1ac8000)
libvorbis.so.0 => /usr/lib/x86_64-linux-gnu/libvorbis.so.0 (0x00007f2fb189a000)
libtheoraenc.so.1 => /usr/lib/x86_64-linux-gnu/libtheoraenc.so.1 (0x00007f2fb165a000)
libtheoradec.so.1 => /usr/lib/x86_64-linux-gnu/libtheoradec.so.1 (0x00007f2fb1441000)
libopus.so.0 => /usr/lib/x86_64-linux-gnu/libopus.so.0 (0x00007f2fb11f8000)
libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f2fb0f6b000)
libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f2fb0cc8000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fb0aae000)
libass.so.4 => /usr/lib/x86_64-linux-gnu/libass.so.4 (0x00007f2fb088a000)
libvdpau.so.1 => /usr/lib/x86_64-linux-gnu/libvdpau.so.1 (0x00007f2fb0686000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fb02bf000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2fb00bb000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f2fafeb7000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f2fafcb0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2fb3d66000)
libpulse-simple.so.0 => /usr/lib/x86_64-linux-gnu/libpulse-simple.so.0 (0x00007f2fafaac000)
libpulse.so.0 => /usr/lib/x86_64-linux-gnu/libpulse.so.0 (0x00007f2faf862000)
libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f2faf650000)
libcaca.so.0 => /usr/lib/x86_64-linux-gnu/libcaca.so.0 (0x00007f2faf383000)
libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007f2faf179000)
libpng12.so.0 => /lib/x86_64-linux-gnu/libpng12.so.0 (0x00007f2faef53000)
libfribidi.so.0 => /usr/lib/x86_64-linux-gnu/libfribidi.so.0 (0x00007f2faed3b000)
libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f2faeaff000)
libenca.so.0 => /usr/lib/x86_64-linux-gnu/libenca.so.0 (0x00007f2fae8cc000)
libpulsecommon-4.0.so => /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so (0x00007f2fae664000)
libjson-c.so.2 => /lib/x86_64-linux-gnu/libjson-c.so.2 (0x00007f2fae45a000)
libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f2fae214000)
libslang.so.2 => /lib/x86_64-linux-gnu/libslang.so.2 (0x00007f2fade84000)
libncursesw.so.5 => /lib/x86_64-linux-gnu/libncursesw.so.5 (0x00007f2fadc50000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f2fada26000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fad7fc000)
libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007f2fad5f1000)
libsndfile.so.1 => /usr/lib/x86_64-linux-gnu/libsndfile.so.1 (0x00007f2fad389000)
libasyncns.so.0 => /usr/lib/x86_64-linux-gnu/libasyncns.so.0 (0x00007f2fad183000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f2facf68000)
libFLAC.so.8 => /usr/lib/x86_64-linux-gnu/libFLAC.so.8 (0x00007f2facd37000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f2facb1b000)

Output of file /usr/local/lib/libx265.so.77:

/usr/local/lib/libx265.so.77: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=f91388281cc2dba1dfe37797324dc6b3898d8d1b, not stripped

LD_LIBRARY_PATH is undefined in environment variables. Also this:

$ grep -r local /etc/ld.so.conf.d/
/etc/ld.so.conf.d/libc.conf:/usr/local/lib

My working theory is that because the Ansible command module strips environment variables, everything breaks. I'm thinking that there's something wrong with the FFMPEG or x265 build. I removed --enable-libx265 from the FFMPEG configure command and I now have a working FFMPEG.
My problem lay in the fact that I never ran ldconfig after installing everything. In order for shared object libraries to be found on Debian, you must run sudo ldconfig to rebuild the shared library cache.
Compiling FFMPEG from source: cannot find shared library
1,487,619,657,000
I listen to an mp3 that is extracted from some TV dramas as an English exercise. In that mp3 there are a lot of silent parts in which no one speaks and there is no background music. Is there a way to remove those silent parts using ffmpeg or some other program from the command line?
You could use SoX - Sound eXchange. To remove silence from a file:

sox input.mp3 output.mp3 silence 1 0.1 1% -1 0.1 1%

Also there is a good tutorial: The SoX of Silence | digitalcardboard.com
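ffmpeg itself also ships a silenceremove audio filter that can do this in one pass. A hedged sketch: the threshold and duration values below are assumptions to tune per recording, and the command is built as a string and echoed so you can inspect it before running it:

```shell
# drop leading silence plus every interior silence longer than 1s below -50 dB
cmd="ffmpeg -i input.mp3 -af silenceremove=start_periods=1:stop_periods=-1:stop_duration=1:stop_threshold=-50dB output.mp3"
echo "$cmd"   # remove the echo indirection to actually run it
```

stop_periods=-1 tells the filter to trim every silent stretch it finds, not just the trailing one.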
How to remove silence part from mp3 that is extracted from tv drama
1,487,619,657,000
I have a local video and I want to stream it to a dummy webcam video device from v4l2loopback. I created the dummy device at /dev/video1. How can I use ffmpeg to stream that local video trial_video.mp4 to the dummy device?
Based on this AU Q&A titled: Is there any way ffmpeg send video to /dev/video0 on Ubuntu? you can do the following: $ ffmpeg -re -i trial_video.mp4 -map 0:v -f v4l2 /dev/video1
How to stream a local video to webcam using ffmpeg?
1,487,619,657,000
I'm processing a variety of audio files in a bunch of different formats and I'd like to unify their format and configuration using FFMPEG and SoX. There are two steps to my process:

1. Convert the file, whatever it may originally be, to a PCM 16-bit little-endian WAV file:

ffmpeg -i input.wav -c:a pcm_s16le output.wav

2. Process the file in SoX to make it conform to the sample rate and channel count that we need:

sox input.wav output.flac channels 2 rate 44.1k

I'd ideally like to pipe these two commands together so as to avoid creating an unnecessary file. I'm having a lot of trouble actually getting the format to work properly, though. SoX complains that it needs to explicitly know the format of the incoming audio, which is something that I don't even know at execution time. I know the format of the PCM audio, but I'm not sure of the channel count nor the sample rate of the incoming audio. Is there a way to pipe these two commands together, or better, to only have to use one tool for the job?

The reason I've used two tools rather than just trying to do it with one:

FFMPEG:
- Not sure if there's a way to safely convert a mono audio stream to a stereo audio stream by duplicating the channels. (SoX does this natively.)
- Not sure how to change the sample rate. (SoX does this natively.)
- Not sure how to output to FLAC using the best compression rate.

SoX:
- Not able to do audio format detection as well as FFMPEG does. If I have a file without an extension, SoX asks me to manually specify the format, which doesn't work at all for my application.
I think sox needs to seek its input if it is to determine the input format from the file's header, and that's incompatible with a pipe. I think ffmpeg can do all you want, though I'm not completely sure. I'm unfamiliar with it and the documentation is clear as mud.

ffmpeg -i "$input" -compression_level 9 -ac 2 -ar 44100 output.flac

(-ar sets the output sample rate; -ab would set a bitrate, which makes no sense for lossless FLAC.)

Alternatively, mencoder should be able to do a similar job.

mencoder "$input" -oac lavc -lavcopts=acodec=flac:abitrate=44.1:o=compression_level=9 -af channels=2 output.flac
Piping Sox and FFMPEG together
1,487,619,657,000
I want to extract frames as images from video and I want each image to be named as InputFileName_number.bmp. How can I do this? I tried the following command:

ffmpeg -i clip.mp4 fr1/$filename%d.jpg -hide_banner

but it is not working as I want. I want to get, for example, clip_1.bmp, but what I get is 1.bmp. I am trying to use it with GNU parallel to extract images of multiple videos and I am new to both, so I want some kind of dynamic file naming: input -> input_number.bmp.
$filename is handled as a shell variable. What about ffmpeg -i clip.mp4 fr1/clip_%d.jpg -hide_banner or mp4filename=clip ffmpeg -i ${mp4filename}.mp4 fr1/${mp4filename}_%d.jpg -hide_banner ? Update: For use with gnu parallel, you can use parallel's -i option: -i Normally the command is passed the argument at the end of its command line. With this option, any instances of "{}" in the command are replaced with the argument. The resulting command line could be as simple as parallel -i ffmpeg -i {} fr1/{}_%d.jpg -hide_banner -- *.mp4 if you can live with the extension in the output files. Be aware that you may not actually want to run this in parallel on a traditional hard-disk as the concurrent i/o will slow it down. Edit: Fixed variable reference as pointed out by @DonHolgo.
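The -i/-- form quoted above looks like the older moreutils parallel. With GNU parallel specifically, the replacement string {.} expands to the argument with its extension stripped, which gives clip_1.jpg-style names directly; a sketch, assuming the videos sit in the current directory:

```shell
# {} expands to each .mp4 filename, {.} to the same name minus its
# extension; ::: feeds the argument list:
parallel ffmpeg -i {} fr1/{.}_%d.jpg -hide_banner ::: *.mp4
```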
How to include input file name in output file name in ffmpeg
1,487,619,657,000
I created an mp4 file from video, audio and an .srt file. ffmpeg -i video.mp4 -i audio.m4a -i sub.srt -map 0:0 -metadata:s:v:0:0 language=eng -map 1:0 -metadata:s:a:0:0 language=eng -map 2:0 -metadata:s:s:0:0 language=eng -c copy -scodec mov_text out.mp4 It works fine, but the subtitle is not shown by default. Is there a way to show the subtitle by default?
You can use MP4Box 0.6.2-DEV-rev453 (May 2016) or higher to do this: mp4box -add xr.mp4 -add xr.en.srt:txtflags=0xC0000000 -new ya.mp4 This will mark the subtitle stream in the output file as forced. However, this mark will only be recognized starting with VLC 3.0.0-20161101 (Nov 2016). I have seen mentions of this post on the FFmpeg mailing list about a patch that implements disposition for FFmpeg: ffmpeg -i xr.mp4 -i xr.en.srt -c copy -c:s mov_text -disposition:s forced ya.mp4 However, after having tried it with both "forced" and "default", the subtitles marked by FFmpeg are not recognized as forced by VLC. To respond to the comment, here is a test with MP4Box 0.7.0 (Apr 2017) and VLC 3.0.0-20170926 (Sep 2017). Note more recent VLC versions are having a crashing problem, even on videos without subtitles. Using this file: youtube-dl --write-auto-sub --format 18 --output xr.mp4 kcs82HnguGc ffmpeg -i xr.en.vtt xr.en.srt mp4box -add xr.mp4 -add xr.en.srt:txtflags=0xC0000000 -new ya.mp4 Result:
How to set default subtitle with ffmpeg
1,487,619,657,000
I recently installed spotdl (Spotify Downloader) by following the instructions given at GitHub's Installation page. However, when I try to run the tool and execute the commands provided at the project page, the terminal responds with the following message: spotdl: command not found. I am rather an inexperienced user with Linux, but I think I might have made an error in the installation process. See, the command that I ran to install the tool was pip install -U spotdl (according to the source number one). However, the source number two — that is the project page — says to run a different command, pip3 install spotdl, emphasizing that the tool only works with Python 3. Well, before noticing the latter, I already had followed the steps provided by the former. And that's not all. While running pip install -U spotdl, I received the following message: XXXXs@pop-os:~$ pip install -U spotdl Command 'pip' not found, but can be installed with: sudo apt install python-pip Well, I decided to follow the advice and run the command above, resulting: XXXXs@pop-os:~$ sudo apt install python-pip [sudo] password for XXXXs: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libllvm7 libllvm7:i386 libvala-0.40-0 valac-0.40-vapi Use 'sudo apt autoremove' to remove them. The following additional packages will be installed: libpython-all-dev libpython-dev libpython2.7-dev python-all python-all-dev python-dev python-pip-whl python-setuptools python-wheel python-xdg python2.7-dev Suggested packages: python-setuptools-doc The following NEW packages will be installed: libpython-all-dev libpython-dev libpython2.7-dev python-all python-all-dev python-dev python-pip python-pip-whl python-setuptools python-wheel python-xdg python2.7-dev 0 upgraded, 12 newly installed, 0 to remove and 0 not upgraded. Need to get 30.8 MB of archives.
After this operation, 46.2 MB of additional disk space will be used. Do you want to continue? [Y/n] Y Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libpython2.7-dev amd64 2.7.15-4ubuntu4~18.04 [28.3 MB] Get:2 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libpython-dev amd64 2.7.15~rc1-1 [7,684 B] Get:3 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libpython-all-dev amd64 2.7.15~rc1-1 [1,092 B] Get:4 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python-all amd64 2.7.15~rc1-1 [1,076 B] Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python2.7-dev amd64 2.7.15-4ubuntu4~18.04 [278 kB] Get:6 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python-dev amd64 2.7.15~rc1-1 [1,256 B] Get:7 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python-all-dev amd64 2.7.15~rc1-1 [1,100 B] Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 python-pip-whl all 9.0.1-2.3~ubuntu1.18.04.1 [1,653 kB] Get:9 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 python-pip all 9.0.1-2.3~ubuntu1.18.04.1 [151 kB] Get:10 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 python-setuptools all 39.0.1-2 [329 kB] Get:11 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 python-wheel all 0.30.0-0.2 [36.4 kB] Get:12 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 python-xdg all 0.25-4ubuntu1 [31.3 kB] Fetched 30.8 MB in 50s (614 kB/s) Selecting previously unselected package libpython2.7-dev:amd64. (Reading database ... 257811 files and directories currently installed.) Preparing to unpack .../00-libpython2.7-dev_2.7.15-4ubuntu4~18.04_amd64.deb ... Unpacking libpython2.7-dev:amd64 (2.7.15-4ubuntu4~18.04) ... Selecting previously unselected package libpython-dev:amd64. Preparing to unpack .../01-libpython-dev_2.7.15~rc1-1_amd64.deb ... Unpacking libpython-dev:amd64 (2.7.15~rc1-1) ... Selecting previously unselected package libpython-all-dev:amd64. 
Preparing to unpack .../02-libpython-all-dev_2.7.15~rc1-1_amd64.deb ... Unpacking libpython-all-dev:amd64 (2.7.15~rc1-1) ... Selecting previously unselected package python-all. Preparing to unpack .../03-python-all_2.7.15~rc1-1_amd64.deb ... Unpacking python-all (2.7.15~rc1-1) ... Selecting previously unselected package python2.7-dev. Preparing to unpack .../04-python2.7-dev_2.7.15-4ubuntu4~18.04_amd64.deb ... Unpacking python2.7-dev (2.7.15-4ubuntu4~18.04) ... Selecting previously unselected package python-dev. Preparing to unpack .../05-python-dev_2.7.15~rc1-1_amd64.deb ... Unpacking python-dev (2.7.15~rc1-1) ... Selecting previously unselected package python-all-dev. Preparing to unpack .../06-python-all-dev_2.7.15~rc1-1_amd64.deb ... Unpacking python-all-dev (2.7.15~rc1-1) ... Selecting previously unselected package python-pip-whl. Preparing to unpack .../07-python-pip-whl_9.0.1-2.3~ubuntu1.18.04.1_all.deb ... Unpacking python-pip-whl (9.0.1-2.3~ubuntu1.18.04.1) ... Selecting previously unselected package python-pip. Preparing to unpack .../08-python-pip_9.0.1-2.3~ubuntu1.18.04.1_all.deb ... Unpacking python-pip (9.0.1-2.3~ubuntu1.18.04.1) ... Selecting previously unselected package python-setuptools. Preparing to unpack .../09-python-setuptools_39.0.1-2_all.deb ... Unpacking python-setuptools (39.0.1-2) ... Selecting previously unselected package python-wheel. Preparing to unpack .../10-python-wheel_0.30.0-0.2_all.deb ... Unpacking python-wheel (0.30.0-0.2) ... Selecting previously unselected package python-xdg. Preparing to unpack .../11-python-xdg_0.25-4ubuntu1_all.deb ... Unpacking python-xdg (0.25-4ubuntu1) ... Setting up python-pip-whl (9.0.1-2.3~ubuntu1.18.04.1) ... Setting up python-setuptools (39.0.1-2) ... Setting up python-wheel (0.30.0-0.2) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Setting up libpython2.7-dev:amd64 (2.7.15-4ubuntu4~18.04) ... Setting up python-pip (9.0.1-2.3~ubuntu1.18.04.1) ... 
Setting up python2.7-dev (2.7.15-4ubuntu4~18.04) ... Setting up python-all (2.7.15~rc1-1) ... Setting up python-xdg (0.25-4ubuntu1) ... Setting up libpython-dev:amd64 (2.7.15~rc1-1) ... Setting up python-dev (2.7.15~rc1-1) ... Setting up libpython-all-dev:amd64 (2.7.15~rc1-1) ... Setting up python-all-dev (2.7.15~rc1-1) ... XXXXs@pop-os:~$ pip install -U spotdl Now, when I run pip -V in Terminal, I'll get the following: XXXXs@pop-os:~$ pip -V pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7) XXXXs@pop-os:~$ So this means I have indeed Python 2 installed on my system, doesn't it? What should I do in this case? Remove Python 2 from the system — to avoid bloating the system, if for nothing else — and install Python 3? But I don't know how to :-( Your kind help would be deeply appreciated! Otherwise, I already have the newest version of ffmpeg: @pop-os:~$ sudo apt-get install ffmpeg Reading package lists... Done Building dependency tree Reading state information... Done ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1). The following packages were automatically installed and are no longer required: libllvm7 libllvm7:i386 libvala-0.40-0 valac-0.40-vapi Use 'sudo apt autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
XXXXs@pop-os:~$ Oh, and this is the installation log of spotdl, just in case: XXXXs@pop-os:~$ pip install -U spotdl Collecting spotdl Downloading https://files.pythonhosted.org/packages/9a/6d/b66a58f08890965f4afb94bc3738624407328fc12c081697ac18537d0446/spotdl-0.9.3.tar.gz Collecting PyYAML>=3.12 (from spotdl) Downloading https://files.pythonhosted.org/packages/a3/65/837fefac7475963d1eccf4aa684c23b95aa6c1d033a2c5965ccb11e22623/PyYAML-5.1.1.tar.gz (274kB) 100% |████████████████████████████████| 276kB 944kB/s Collecting beautifulsoup4>=4.6.0 (from spotdl) Downloading https://files.pythonhosted.org/packages/8b/0e/048a2f88bc4be5e3697df9dc1f7b9d5c9c75be62676feeeb91d2e896c5ea/beautifulsoup4-4.7.1-py2-none-any.whl (94kB) 100% |████████████████████████████████| 102kB 507kB/s Collecting logzero>=1.3.1 (from spotdl) Downloading https://files.pythonhosted.org/packages/97/24/27295d318ea8976b12cf9cc51d82e7c7129220f6a3cc9e3443df3be8afdb/logzero-1.5.0-py2.py3-none-any.whl Collecting lyricwikia>=0.1.8 (from spotdl) Downloading https://files.pythonhosted.org/packages/36/82/dfce4509b6097cdacfab4510a401ef007e8314a2d1d179267efd873d1a55/lyricwikia-0.1.9.tar.gz Collecting mutagen>=1.37 (from spotdl) Downloading https://files.pythonhosted.org/packages/30/4c/5ad1a6e1ccbcfaf6462db727989c302d9d721beedd9b09c11e6f0c7065b0/mutagen-1.42.0.tar.gz (925kB) 100% |████████████████████████████████| 931kB 717kB/s Collecting pafy>=0.5.3.1 (from spotdl) Downloading https://files.pythonhosted.org/packages/b0/e8/3516f761558525b00d3eaf73744eed5c267db20650b7b660674547e3e506/pafy-0.5.4-py2.py3-none-any.whl Collecting pathlib>=1.0.1 (from spotdl) Downloading https://files.pythonhosted.org/packages/ac/aa/9b065a76b9af472437a0059f77e8f962fe350438b927cb80184c32f075eb/pathlib-1.0.1.tar.gz (49kB) 100% |████████████████████████████████| 51kB 948kB/s Collecting spotipy>=2.4.4 (from spotdl) Downloading
https://files.pythonhosted.org/packages/59/46/3c957255c96910a8a0e2d9c25db1de51a8676ebba01d7966bedc6e748822/spotipy-2.4.4.tar.gz Collecting titlecase>=0.10.0 (from spotdl) Downloading https://files.pythonhosted.org/packages/3b/78/5b9faa7b9288c9fa5a4fdb6989f5e675744511ab6cff0489a0c7744a4f6b/titlecase-0.12.0.tar.gz Collecting unicode-slugify>=0.1.3 (from spotdl) Downloading https://files.pythonhosted.org/packages/8c/ba/1a05f61c7fd72df85ae4dc1c7967a3e5a4b6c61f016e794bc7f09b2597c0/unicode-slugify-0.1.3.tar.gz Collecting youtube_dl>=2017.5.1 (from spotdl) Downloading https://files.pythonhosted.org/packages/f6/05/908331f41e7ed52a3510c8927177056ffc7d26c3692ab87e3fad78081a05/youtube_dl-2019.6.21-py2.py3-none-any.whl (1.8MB) 100% |████████████████████████████████| 1.8MB 613kB/s Collecting soupsieve>=1.2 (from beautifulsoup4>=4.6.0->spotdl) Downloading https://files.pythonhosted.org/packages/b9/a5/7ea40d0f8676bde6e464a6435a48bc5db09b1a8f4f06d41dd997b8f3c616/soupsieve-1.9.1-py2.py3-none-any.whl Collecting requests (from lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl (57kB) 100% |████████████████████████████████| 61kB 1.8MB/s Collecting six (from lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl Collecting unidecode (from unicode-slugify>=0.1.3->spotdl) Downloading https://files.pythonhosted.org/packages/d0/42/d9edfed04228bacea2d824904cae367ee9efd05e6cce7ceaaedd0b0ad964/Unidecode-1.1.1-py2.py3-none-any.whl (238kB) 100% |████████████████████████████████| 245kB 1.2MB/s Collecting backports.functools-lru-cache; python_version < "3" (from soupsieve>=1.2->beautifulsoup4>=4.6.0->spotdl) Downloading
https://files.pythonhosted.org/packages/03/8e/2424c0e65c4a066e28f539364deee49b6451f8fcd4f718fefa50cc3dcf48/backports.functools_lru_cache-1.5-py2.py3-none-any.whl Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/e6/60/247f23a7121ae632d62811ba7f273d0e58972d75e58a94d329d51550a47d/urllib3-1.25.3-py2.py3-none-any.whl (150kB) 100% |████████████████████████████████| 153kB 1.4MB/s Collecting certifi>=2017.4.17 (from requests->lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/69/1b/b853c7a9d4f6a6d00749e94eb6f3a041e342a885b87340b79c1ef73e3a78/certifi-2019.6.16-py2.py3-none-any.whl (157kB) 100% |████████████████████████████████| 163kB 771kB/s Collecting chardet<3.1.0,>=3.0.2 (from requests->lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB) 100% |████████████████████████████████| 143kB 1.0MB/s Collecting idna<2.9,>=2.5 (from requests->lyricwikia>=0.1.8->spotdl) Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB) 100% |████████████████████████████████| 61kB 2.1MB/s Building wheels for collected packages: spotdl, PyYAML, lyricwikia, mutagen, pathlib, spotipy, titlecase, unicode-slugify Running setup.py bdist_wheel for spotdl ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/27/9b/65/5cd2c56c23f5566ace8fc31393943251124de819bd069f2d2c Running setup.py bdist_wheel for PyYAML ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/16/27/a1/775c62ddea7bfa62324fd1f65847ed31c55dadb6051481ba3f Running setup.py bdist_wheel for lyricwikia ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/5e/7d/5d/b77975b5cabfc8848a795a851b07b3fde7fd685b27e501d055 Running setup.py bdist_wheel for mutagen ...
done Stored in directory: /home/XXXXs/.cache/pip/wheels/33/4c/c3/6189a75038a7b00a8bc77fcb4dbdc38de335c55443f6680b13 Running setup.py bdist_wheel for pathlib ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/f9/b2/4a/68efdfe5093638a9918bd1bb734af625526e849487200aa171 Running setup.py bdist_wheel for spotipy ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/76/28/19/a86ca9bb0e32dbd4a4f580870250f5aeef852870578e0427e6 Running setup.py bdist_wheel for titlecase ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/9f/fb/8f/4d61939e2447b1b8c13f6ceeca035383c14d4228e88b174402 Running setup.py bdist_wheel for unicode-slugify ... done Stored in directory: /home/XXXXs/.cache/pip/wheels/00/86/80/77ea75d401d5d6550a79179f76c6b26fe1280d40fb447ea4f3 Successfully built spotdl PyYAML lyricwikia mutagen pathlib spotipy titlecase unicode-slugify Installing collected packages: PyYAML, backports.functools-lru-cache, soupsieve, beautifulsoup4, logzero, urllib3, certifi, chardet, idna, requests, six, lyricwikia, mutagen, pafy, pathlib, spotipy, titlecase, unidecode, unicode-slugify, youtube-dl, spotdl Successfully installed PyYAML-5.1.1 backports.functools-lru-cache-1.5 beautifulsoup4-4.7.1 certifi-2019.6.16 chardet-3.0.4 idna-2.8 logzero-1.5.0 lyricwikia-0.1.9 mutagen-1.42.0 pafy-0.5.4 pathlib-1.0.1 requests-2.22.0 six-1.12.0 soupsieve-1.9.1 spotdl-0.9.3 spotipy-2.4.4 titlecase-0.12.0 unicode-slugify-0.1.3 unidecode-1.1.1 urllib3-1.25.3 youtube-dl-2019.6.21 XXXXs@pop-os:~$ I hope I haven't messed up anything entirely. Thanks a lot for your help in advance, and happy Midsummer's Day!
Python 2 and Python 3 can be installed on the same system without conflict. You can install the Python 3 version of pip similar to how you installed the Python 2 version: sudo apt install python3-pip Now, you should be able to use pip3 to install spotdl: pip3 install spotdl I just tried this on my own system and the installation failed when running the command as a regular user*; I had to install it as a superuser (using sudo) which successfully installed the program to /usr/local/bin/spotdl. Removing Python 2 pip and packages installed by it If you’re really constrained by resources, What is the easiest way to remove all packages installed by pip? shows how to remove the packages installed by pip2. pip2 freeze | xargs pip2 uninstall -y To remove the pip2 Debian package, run sudo apt-get remove --purge python-pip followed by sudo apt-get autoremove to remove dependencies that are no longer required. * This could be due to too much messing around with installing pip and pip3 from different sources years ago when I first installed my current system (Ubuntu 16.04).
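A quick way to confirm which interpreter each pip front-end is bound to, before and after the install (no assumptions here beyond pip and pip3 being on the PATH):

```shell
pip --version      # reports the Python 2 interpreter it installs into
pip3 --version     # reports the Python 3 interpreter
pip3 install spotdl
command -v spotdl  # shows where the entry point landed, e.g. /usr/local/bin or ~/.local/bin
```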
Pop!_OS: Spotify Downloader (spotdl: command not found)
1,487,619,657,000
I want to convert a bunch of Webp images to individual PDFs. I'm able to do it with this command: parallel convert '{} {.}.pdf' ::: *.webp or I can use this dwebp command (dwebp is part of libwebp, not FFmpeg): find ./ -name "*.webp" -exec dwebp {} -o {}.pdf \; However during the process of conversion the Webp files are decoded and the resultant PDFs have a much bigger file size. When I use the above commands for JPG-to-PDF conversion the PDF size is reasonably close to the JPG image size. This command works fine with JPGs, but the program img2pdf doesn't work with the Webp format: find ./ -name "*.jpg" -exec img2pdf {} -o {}.pdf \; I also tried Webp-to-PDF conversion with this online service, but the PDF was huge. How can I keep the PDF size down to the Webp file size?
Why not just use ImageMagick and Ghostscript? convert img.webp img.pdf gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dPDFSETTINGS=/ebook -sOutputFile=img-small.pdf img.pdf In my test with your sample file I got a PDF result of about 3.2 MB. EDIT You could follow these instructions on Ubuntu to make sure that ImageMagick was built with webp. Install this package for Windows or on macOS do this: brew install webp brew install imagemagick
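Applied to a whole directory, the two steps above can be looped; ${f%.webp} is plain POSIX parameter expansion that strips the suffix, and the file names are illustrative:

```shell
# Convert each .webp to PDF, then let Ghostscript recompress it:
for f in *.webp; do
  [ -e "$f" ] || continue   # skip the literal pattern when no .webp files exist
  convert "$f" "${f%.webp}.pdf"
  gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dPDFSETTINGS=/ebook \
     -sOutputFile="${f%.webp}-small.pdf" "${f%.webp}.pdf"
done
```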
Convert Webp to PDF
1,487,619,657,000
I have some .mkv files that contain 6.1 audio in FLAC format. mediainfo reports the audio track in these files as: Audio ID : 2 Format : FLAC Format/Info : Free Lossless Audio Codec Codec ID : A_FLAC Duration : 2mn 29s Bit rate mode : Variable Channel(s) : 7 channels Channel positions : Front: L C R, Side: L R, Back: C, LFE Sampling rate : 48.0 KHz Bit depth : 24 bits Delay relative to video : 14ms Writing library : libFLAC 1.3.0 (UTC 2013-05-26) Language : English Default : Yes Forced : No I also have a "Home Theater" 6.1 amp (Sony STR-DE895, if anyone cares) that accepts digital audio natively through an S/PDIF (optical and coax) connection in the following formats: PCM (limited to 2 channels on S/PDIF) DTS (5.1) DTS-ES (6.1) NEO6 (6.1) Dolby Digital (5.1) DIGITAL-EX (6.1) PRO LOGIC II I'd like to have these .mkv files driving all the 6.1 speakers from the amp, but if I convert the .mkv file with a command like this: ffmpeg -i Input.FLAC.6.1.mkv -c:s copy -c:v copy -c:a ac3 Output.AC3.6.1.mkv Then I get 5.1 audio, i.e. I lose the center back channel. 
Per mediainfo: Audio ID : 2 Format : AC-3 Format/Info : Audio Coding 3 Mode extension : CM (complete main) Format settings, Endianness : Big Codec ID : A_AC3 Duration : 2mn 29s Bit rate mode : Constant Bit rate : 448 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 KHz Bit depth : 16 bits Compression mode : Lossy Delay relative to video : 9ms Stream size : 8.00 MiB (9%) Writing library : Lavc57.107.100 ac3 Language : English Default : Yes Forced : No DURATION : 00:02:29.768000000 NUMBER_OF_FRAMES : 1755 NUMBER_OF_BYTES : 56974307 _STATISTICS_WRITING_APP : mkvmerge v8.2.0 ('World of Adventure') 64bit _STATISTICS_WRITING_DATE_UTC : 2015-08-01 13:29:10 _STATISTICS_TAGS : BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES Notice how it changed from: Channel(s) : 7 channels Channel positions : Front: L C R, Side: L R, Back: C, LFE To: Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE If I try to force the number of channels with -ac 7 I get: [ac3 @ 0x43f2a40] Specified channel layout '6.1' is not supported Trying to convert to DTS has the exact same result. I.e. 
replacing: -c:a ac3 With: -strict experimental -c:a dts Results in a mediainfo of: Audio ID : 2 Format : DTS Format/Info : Digital Theater Systems Mode : 16 Format settings, Endianness : Big Codec ID : A_DTS Duration : 2mn 29s Bit rate mode : Constant Bit rate : 1 413 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 KHz Bit depth : 16 bits Compression mode : Lossy Delay relative to video : 14ms Stream size : 25.2 MiB (23%) Writing library : Lavc57.107.100 dca Language : English Default : Yes Forced : No DURATION : 00:02:29.774000000 NUMBER_OF_FRAMES : 1755 NUMBER_OF_BYTES : 56974307 _STATISTICS_WRITING_APP : mkvmerge v8.2.0 ('World of Adventure') 64bit _STATISTICS_WRITING_DATE_UTC : 2015-08-01 13:29:10 _STATISTICS_TAGS : BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES And trying to force 6.1 with -ac 7 causes the same '6.1' is not supported error as above. For what is worth, the ffmpeg used in the tests above was: $ ffmpeg -version ffmpeg version 3.4.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers built with gcc 6.4.0 (Debian 6.4.0-10) 20171112 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg libavutil 55. 78.100 / 55. 78.100 libavcodec 57.107.100 / 57.107.100 libavformat 57. 83.100 / 57. 83.100 libavdevice 57. 10.100 / 57. 10.100 libavfilter 6.107.100 / 6.107.100 libswscale 4. 
8.100 / 4. 8.100 libswresample 2. 9.100 / 2. 9.100 libpostproc 54. 7.100 / 54. 7.100 So, how can I convert the audio in the .mkv file to a format supported by my system, while preserving the 6.1 channel format?
Partial answer (untested): So the main problem seems to be that you are stuck with an optical/coax S/PDIF connection for some reason, which doesn't have enough bandwidth (actually, as you say, it doesn't even have enough bandwidth for more than two uncompressed audio channels; the 5.1 variant is already compressed). I can confirm that ffmpeg doesn't support encoding more than 6 channels by looking at the code for both DTS and AC3. If ffmpeg doesn't support it, my guess is that no ready-made tools for Linux exist which do support it. Looking at how DTS-ES and Dolby Digital EX work, one can see that all of them don't give you an additional independent channel either, but instead mix (or "matrix") the back center channel onto the other channels in some way, and set a special flag for 6.1 mode in the digital data stream. The decoder then has to separate the channels again, which (because of loss of information) is not always possible, and can lead to sound artifacts, depending on the source material. (The possible exception is "DTS-ES Discrete 6.1", which claims to have a real separate channel in addition to the matrix encoding, but it's not clear how this channel is encoded, and how it is supposed to fit the limited S/PDIF bandwidth if transported via S/PDIF, so it's quite likely that the separation only exists in the source material, and is lost on S/PDIF, anyway). So there are two problems: How to enable the 6.1 flag in the data stream, and how to mix the extra channel onto the existing channels. Fortunately, your Sony STR-DE895 seems to have a SB DEC [MATRIX] mode (manual page 32), which ignores the flag and always applies the Dolby Digital EX decoder matrix regardless of the flag. So that solves the first problem without having to modify e.g. ffmpeg source code.
I couldn't find exact information about the coefficients of this matrix, but as it is "similar in practice to Dolby's earlier Pro-Logic format", which simply adds the center channel to both left and right after decreasing it by 3 dB (a power factor of 0.5, i.e. an amplitude factor of about 0.707), in first approximation I'd try the same for the back channels using the ffmpeg pan filter, encode this as ac3, and see if the result is acceptable. Assuming this works, a longer term solution would be to hack the ALSA A52 plugin to support this kind of mixing internally, so you'd have a true 6.1 channel ALSA device. You can then use this to play a 6.1 source in any format, without having to go through the contortions of re-encoding the source material. Another, completely different approach (and I'd recommend trying this, and making a listening comparison to get an idea both about the difference in quality, and possible presence of sound artifacts) is to use your Multi Ch In 1 field on the Sony, together with a good analog 7.1 soundcard (if you have one, or can borrow one). This will provide true channel separation, but of course will now use the D/A converters of the soundcard, and not of the Sony.
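A hedged sketch of that pan-filter idea: the 0.707 (-3 dB amplitude) coefficient is an assumption borrowed from the Pro-Logic matrix, and the channel names follow ffmpeg's 6.1 input layout (FL FR FC LFE BC SL SR):

```shell
# Fold the back-center channel (BC) onto both surrounds at -3 dB and
# encode the resulting matrixed 5.1 stream as AC-3:
ffmpeg -i Input.FLAC.6.1.mkv -c:v copy -c:s copy \
  -af "pan=5.1|FL=FL|FR=FR|FC=FC|LFE=LFE|BL=SL+0.707*BC|BR=SR+0.707*BC" \
  -c:a ac3 Output.AC3.mkv
```

Whether the receiver's matrix decoder recovers a convincing back channel from this is something only a listening test can settle.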
Convert audio inside MKV to AC3 or DTS preserving 6.1 channels
1,487,619,657,000
I have Slackware installed on my computer, and I install a lot of software from source. Now I want to install ffmpeg from source just to recompile it with some more options. But I already have ffmpeg installed on my computer, so what's going to happen? Is it going to overwrite my old install or is it going to create new files, and if so, how can I differentiate between the two installed versions? Also, if there is a better way to recompile programs on Slackware, let me know, because I'm very interested.
If you use the configure, make, make install routine to install software under any Linux distro, then the new version will usually overwrite the previous. The only caveat is that if the newer version happens to change the install location or names of certain files then you may end up with the old version or parts of the old-version remaining on your computer. For this reason, it is not advised to install programs in this way on Slackware. The recommended practice is to create a .txz or .tgz package which can be installed with the standard Slackware package installer installpkg. This also means that you can cleanly uninstall the package with removepkg or upgrade to a new version with upgradepkg. Many scripts for compiling and creating packages, including one for ffmpeg, can be found at SlackBuilds. Running the provided script with the sources in the same directory will compile and produce a .txz. Most Slackware users make heavy use of Slackbuilds to install non-official software.
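As a sketch of that SlackBuilds routine (file and version names below are illustrative, not exact):

```shell
# Unpack the SlackBuild script directory, drop the ffmpeg source
# tarball beside the script, build the package, then install it
# with the standard Slackware tools:
tar xvf ffmpeg.tar.gz
cd ffmpeg
sh ffmpeg.SlackBuild            # writes a package under /tmp, e.g. ffmpeg-<ver>-<arch>-1_SBo.tgz
upgradepkg --install-new /tmp/ffmpeg-*_SBo.tgz
```

upgradepkg --install-new handles both cases: it upgrades an existing ffmpeg package or installs it fresh if none is present.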
New source install over existing one
1,487,619,657,000
I am emulating a screen with Xvfb, and I want to capture it with ffmpeg. In order to prevent frame dropping or duplication, I would like to capture at the exact refresh rate of the screen, but it seems that Xvfb does not … have this? xrandr confirms that it's 0.0: $ Xvfb :123 -ac -nolisten tcp -screen 0 1920x1080x24 & $ DISPLAY=:123 xrandr xrandr: Failed to get size of gamma for output screen Screen 0: minimum 1 x 1, current 1920 x 1080, maximum 1920 x 1080 screen connected 1920x1080+0+0 0mm x 0mm 1920x1080 0.00* At which rate are the images then drawn to the screen? ffmpeg shows 29.97 fps for this input when using x11grab: $ ffmpeg -f x11grab -i :123 ffmpeg version 4.3.5-0+deb11u1 Copyright (c) 2000-2022 the FFmpeg developers built with gcc 10 (Debian 10.2.1-6) configuration: --prefix=/usr --extra-version=0+deb11u1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 
--enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared libavutil 56. 51.100 / 56. 51.100 libavcodec 58. 91.100 / 58. 91.100 libavformat 58. 45.100 / 58. 45.100 libavdevice 58. 10.100 / 58. 10.100 libavfilter 7. 85.100 / 7. 85.100 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 7.100 / 5. 7.100 libswresample 3. 7.100 / 3. 7.100 libpostproc 55. 7.100 / 55. 7.100 [x11grab @ 0xaaaadaa0c2a0] Stream #0: not enough frames to estimate rate; consider increasing probesize Input #0, x11grab, from ':123': Duration: N/A, start: 1669816070.219332, bitrate: 1988667 kb/s Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 1988667 kb/s, 29.97 fps, 1000k tbr, 1000k tbn, 1000k tbc But when I encode using the "native" framerate at realtime, I get more like ~17-21 fps. $ ffmpeg -progress - -nostats -f x11grab -re -i :123 -f null /dev/null … frame=198 fps=17.80 stream_0_0_q=-0.0 bitrate=N/A total_size=N/A out_time_us=11111100 out_time_ms=11111100 out_time=00:00:11.111100 dup_frames=0 drop_frames=0 speed=0.999x progress=end It appears to me that the display does not even have a native frame rate, rather letting me sample as often as I want from it. So, my question is: is that correct? If so, does it go up to "infinite" fps as long as the processing power is large enough?
Frame rate comes from movie projectors, where you have an image in a box ("frame") and it is flashed on the screen for some time, and then blanked, and another frame flashed up for some time, and your persistence of vision makes this look like objects moving in a picture. The first graphical video displays for computers used oscilloscopes to do vector drawing. The 'scope screen had a hold time, over which the vector lines drawn on it would slowly fade. The driver could draw and redraw different lines at different frequencies and rates to vary brightness, so some portion of the display would be refreshed at some interval, but other parts at other intervals and rates to get a different brightness. Those didn't particularly have a frame rate. Later, they created raster screens, where the same electron beam in the 'scope would scan the screen, but now in a rectangular pattern, lines left to right and then lines below them, either sequential or interlaced, turning the beam on and off (or varying the intensity) to give a black and white image. If the image was interlaced with a factor of 2, you had a 'half frame' rate, where half of the lines on the screen were drawn, and then the other half in between were drawn (every other line in the same half frame). This interlaced half frame rate doesn't exactly translate to a frame rate either, as motion could occur differently on the two half frames. Early TV's had lines, but not really pixels, as it is analog within the line, but when this was brought to computers, the line became pixels, and video cards had pixel rates and refresh rates, where refresh rate is how long it takes to paint two half frames...which eventually looks like a frame rate. Especially when you get higher pixel rates so that you can paint what was two half frames in the same time as a single half frame, and then move from interlaced to progressive sequential lines.
Then we went from tube monitors to LCD monitors -- which suddenly no longer have a refresh rate, as they are not using the hold time of phosphor anymore, and there is no electron beam, and so no scan rate. But the video signal format design still uses the old pixel scan format. So do LCDs have a frame rate? Sort of. They have a max pixel rate at which they can accept a stream of sequential pixels to put on the screen, which limits the speed of updating the entire screen, and thus how fast a moving object can be updated.

Now, you have LCD screens and other format screens that no longer need a pseudo-analog signal to accept pixels -- they use a fully digital format containing compression and complex commands. So a screen can be told to just refresh a small region rather than the whole screen, and now frame rate is gone again.

And getting back to your question, does Xvfb have a frame rate? It's a fully virtual device with no hardware -- so there is nothing to constrain the rate it can "display" things. And the X11 protocol supports complete drawing commands rather than just raw pixel raster updates, so there's no frame rate there either.

So what about the source? The source of the image can send drawing commands limited only by the speed of your cpu and gpu, so any frame rate there can be variable, depending on those speeds plus the complexity of what it is trying to update.

So some games have a frame rate, because they want to render an entire scene and then give you the whole picture at once, rather than letting you see the scene be drawn in pieces, so typically a game will double buffer -- displaying one buffer while drawing on the hidden buffer, and then swap. This creates a frame rate just like movie films. The game could just render that as fast as the hardware allows, but again, that can vary quite a bit. Many games might want to synchronize internal game state with what is displayed, so they set a frame rate and artificially limit the hardware to that rate.
(Or, if the hardware can't keep up, you skip frames and get something lower but still synchronized.)

Xvfb doesn't have a natural frame rate. It may have a maximum frame rate. Your recording software may have a maximum frame rate limited by the pipeline from capturing the image, processing it, and writing it to disk.

Ideally, you would be able to get hints from Xvfb and only save a frame after changes have been made and fully drawn, but this isn't a frame rate; this would be frames with variable timing. Or just save the sequence of drawing commands with no frames at all.

Presumably, you want a frame rate because you picked a video codec to record with that uses a fixed frame rate rather than frames with variable times or drawing primitives. Your best bet might be to just pick an arbitrary frame rate corresponding to the output file size you want, that isn't so slow that it looks jumpy.
What is the "native" frame rate of an Xvfb screen?
1,487,619,657,000
I have a bunch of audio files from each of which I need to cut out the last 1 second. The audio files are of arbitrary lengths. What is the best tool/way to do this? I am somewhat familiar with ffmpeg, but AFAIK, it does not have a direct option to do this (the starting and ending times have to be provided with -ss and -to).
Inspired by answers to other questions on this site, I was also able to make something up. This command can be passed to find as an exec option:

    ffmpeg -i FILE1.wma -ss 0 -to $(echo $(ffprobe -i FILE1.wma -show_entries format=duration -v quiet -of csv="p=0") - 1 | bc) -c copy FILE2.wma

The part in the middle,

    $(echo $(ffprobe -i FILE1.wma -show_entries format=duration -v quiet -of csv="p=0") - 1 | bc)

first gets the desired duration. The piping to bc is to compute in floating point. The output of this is then passed as the argument to -to in the main command.

To break down the long single-line command above, for ease of comprehension:

    ffmpeg -i FILE1.wma -ss 0 \
        -to $(echo $(ffprobe -i FILE1.wma \
                 -show_entries format=duration -v quiet -of csv="p=0") - 1 | bc ) \
        -c copy FILE2.wma
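The arithmetic inside that command substitution is simply "full duration minus one second". As an illustration, here is the same computation in Python; `end_timestamp` is a hypothetical helper name and the duration string is an invented example, not output from a real file:

```python
def end_timestamp(duration: str, trim_seconds: float = 1.0) -> str:
    """Compute a -to value from an ffprobe duration string.

    `duration` is the kind of value `ffprobe ... -of csv="p=0"` prints,
    e.g. "183.485714" (seconds with a fractional part).
    """
    return f"{float(duration) - trim_seconds:.6f}"

print(end_timestamp("183.485714"))  # 182.485714
```

The same subtraction could then be fed to ffmpeg's `-to` option exactly as the bc pipeline does in the shell version.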
Trim last x seconds of audio files
1,487,619,657,000
I have a directory with a bunch of CD-quality (16-bit 44100 Hz) wave files. How can I batch-convert those into different formats (let's say FLAC, OGG and MP3) using ffmpeg?

Update: here are the commands one by one, as suggested by @StephenHarris:

    ffmpeg -i input.wav output.ogg
    ffmpeg -i input.wav output.mp3
    ffmpeg -i input.wav output.flac
ffmpeg accepts multiple output files. Give the input file with -i, followed by the output files:

    ffmpeg -i input.wav output.ogg output.mp3 output.flac

Batch conversion, as a simple one-liner that puts each format in a separate folder:

    mkdir mp3 ogg flac
    for i in *.wav; do
        ffmpeg -i "$i" -b:a 320000 "./mp3/${i%.*}.mp3" \
                       -b:a 320000 "./ogg/${i%.*}.ogg" \
                                   "./flac/${i%.*}.flac"
    done

Convert all into one folder:

    for i in *.wav; do
        ffmpeg -i "$i" -b:a 320000 "${i%.*}.mp3" -b:a 320000 "${i%.*}.ogg" "${i%.*}.flac"
    done

-b:a 320000 sets the bitrate for the mp3 and ogg encodes and can be adjusted (the bitrate is measured in bits/s, so 320 kbit/s equals 320000).

Thanks to https://stackoverflow.com/a/33766147 for the parameter expansion.
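The `${i%.*}` parameter expansion strips the last extension before the new one is appended. For illustration only, here is the same filename logic sketched in Python (`output_names` and the sample filename are hypothetical, just mirroring the shell loop):

```python
from pathlib import Path

def output_names(wav_name: str) -> list[str]:
    # Same idea as "${i%.*}" in the shell loop: drop ".wav", keep the stem.
    stem = Path(wav_name).stem
    return [f"mp3/{stem}.mp3", f"ogg/{stem}.ogg", f"flac/{stem}.flac"]

print(output_names("track 01.wav"))
# ['mp3/track 01.mp3', 'ogg/track 01.ogg', 'flac/track 01.flac']
```

Quoting `"$i"` and `"${i%.*}"` in the shell version matters for the same reason the example uses a filename with a space: unquoted expansions would split it into two arguments.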
Batch convert (decode) audio into multiple formats with ffmpeg
1,487,619,657,000
I have a folder with files named like 0001.mp4, 0002.mp4, ... I want to merge all these files into a combined.mp4, and I want it to have internal chapter marks. That is convenient when playing, for example in vlc, as you can see the chapter names in the timeline. How can I make such a combined.mp4? Preferably a command-line script using ffmpeg, with no additional dependencies. There is a similar question, but the asker wants to use handbrake.
Chapter info is stored in file metadata. ffmpeg allows you to export metadata to a file and load metadata from a file. Documentation is here: ffmpeg metadata

Now we need to prepare a metadata file that we will use for combined.mp4. And we can do this in one go, without the need to first make a combined file from all files and then another file with the injected metadata, so we save storage space.

I have made a python script multiple_mp4_to_single_mp4_with_chapters.py that does the job:

    import subprocess
    import os
    import re


    def make_chapters_metadata(list_mp4: list):
        print(f"Making metadata source file")
        chapters = {}
        for single_mp4 in list_mp4:
            number = single_mp4.removesuffix(".mp4")
            duration_in_microseconds = int((subprocess.run(f"ffprobe -v quiet -of csv=p=0 -show_entries format=duration {folder}/{single_mp4}", shell=True, capture_output=True).stdout.decode().strip().replace(".", "")))
            chapters[number] = {"duration": duration_in_microseconds}

        chapters["0001"]["start"] = 0
        for n in range(1, len(chapters)):
            chapter = f"{n:04d}"
            next_chapter = f"{n + 1:04d}"
            chapters[chapter]["end"] = chapters[chapter]["start"] + chapters[chapter]["duration"]
            chapters[next_chapter]["start"] = chapters[chapter]["end"] + 1
        last_chapter = f"{len(chapters):04d}"
        chapters[last_chapter]["end"] = chapters[last_chapter]["start"] + chapters[last_chapter]["duration"]

        metadatafile = f"{folder}/combined.metadata.txt"
        with open(metadatafile, "w+") as m:
            m.writelines(";FFMETADATA1\n")
            for chapter in chapters:
                ch_meta = """
    [CHAPTER]
    TIMEBASE=1/1000000
    START={}
    END={}
    title={}
    """.format(chapters[chapter]["start"], chapters[chapter]["end"], chapter)
                m.writelines(ch_meta)


    def concatenate_all_to_one_with_chapters():
        print(f"Concatenating list of mp4 to combined.mp4")
        metadatafile = f"{folder}/combined.metadata.txt"
        os.system(f"ffmpeg -hide_banner -loglevel error -y -f concat -i list_mp4.txt -i {metadatafile} -map_metadata 1 combined.mp4")


    if __name__ == '__main__':
        folder = "."  # Specify folder where the files 0001.mp4... are
        list_mp4 = [f for f in os.listdir(folder) if re.match(r'[0-9]{4}\.mp4', f)]
        list_mp4.sort()

        # Make the list of mp4 in ffmpeg format
        if os.path.isfile("list_mp4.txt"):
            os.remove("list_mp4.txt")
        for filename_mp4 in list_mp4:
            with open("list_mp4.txt", "a") as f:
                line = f"file '{filename_mp4}'\n"
                f.write(line)

        make_chapters_metadata(list_mp4)
        concatenate_all_to_one_with_chapters()

Now you can place it in the folder with your mp4 files (or edit the folder variable in the script), and run it:

    $ ls
    0001.mp4  0002.mp4  0003.mp4  0004.mp4  multiple_mp4_to_single_mp4_with_chapters.py
    $ python multiple_mp4_to_single_mp4_with_chapters.py

Now you will have combined.mp4, and when it's opened in vlc, you will see chapter marks.

I saw a bash script that does it with mp4box and mp4chaps: this gist
Also there is a bash version without such dependencies: this gist
And another version in python, but it creates the merged file twice: this gist
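The timing logic in make_chapters_metadata boils down to: each chapter ends at its cumulative duration, and the next one starts one microsecond later. A stripped-down sketch of just that computation (`chapter_ranges` is a hypothetical name and the durations are invented example values):

```python
def chapter_ranges(durations_us: list[int]) -> list[tuple[int, int]]:
    """Turn per-file durations (microseconds) into (START, END) chapter pairs."""
    ranges, start = [], 0
    for d in durations_us:
        end = start + d
        ranges.append((start, end))
        start = end + 1  # next chapter begins 1 microsecond after this one ends
    return ranges

print(chapter_ranges([2_000_000, 3_000_000]))
# [(0, 2000000), (2000001, 5000001)]
```

Each resulting pair maps directly onto a `[CHAPTER]` block with `TIMEBASE=1/1000000` in the FFMETADATA file.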
How to merge multiple mp4 files as chapters in a final mp4?
1,487,619,657,000
I combine multiple audio files with FFMPEG as

    ffmpeg -i 1.mp3 -i 2.mp3 -i 3.mp3 \
        -filter_complex '[0:0][1:0][2:0]concat=n=3:v=0:a=1[out]' -map '[out]' out.mp4

Then, I add a still image to the created video as

    ffmpeg -loop 1 -framerate 1 -i photo.jpg -i out.mp4 -tune stillimage -shortest out2.mp4

How can I add the still image in the first command, to make the video in one single process?

Disclaimer: I deleted my previous question as it was unclear and asked a new one.
Use ffmpeg -loop 1 -framerate 1 -i photo.jpg -i 1.mp3 -i 2.mp3 -i 3.mp3 \ -filter_complex 'concat=n=3:v=0:a=1' -tune stillimage -shortest out.mp4
How to add an still image while combining multiple audio files in FFMPEG
1,487,619,657,000
I looked at the following link: Trim audio file using start and stop times

But this doesn't completely answer my question. My problem is: I have an audio file such as abc.mp3 or abc.wav. I also have a text file containing start and end timestamps:

    0.0 1.0 silence
    1.0 5.0 music
    6.0 8.0 speech

I want to split the audio into three parts using Python and sox/ffmpeg, thus resulting in three separate audio files. How do I achieve this using either sox or ffmpeg?

Later I want to compute the MFCC corresponding to those portions using librosa. I have Python 2.7, ffmpeg, and sox on an Ubuntu Linux 16.04 installation.
I've just had a quick go at it, with very little in the way of testing, so maybe it'll be of help. The below relies on ffmpeg-python, but it wouldn't be a challenge to write with subprocess anyway. At the moment the time input file is just treated as pairs of times, start and end, and then an output name. Missing names are replaced as linecount.wav. (Note: make_time catches ValueError as well as IndexError, so that plain fractional seconds like the 0.0 in your listing parse correctly.)

    import ffmpeg
    from sys import argv

    """
    split_wav `audio file` `time listing`

    `audio file` is any file known by local FFmpeg
    `time listing` is a file containing multiple lines of format:
        `start time` `end time` output name
    times can be either MM:SS or S*
    """

    _in_file = argv[1]


    def make_time(elem):
        # allow user to enter times on CLI
        t = elem.split(':')
        try:
            # will fail if no ':' in time, otherwise add together for total seconds
            return int(t[0]) * 60 + float(t[1])
        except (IndexError, ValueError):
            return float(t[0])


    def collect_from_file():
        """user can save times in a file, with start and end time on a line"""
        time_pairs = []
        with open(argv[2]) as in_times:
            for l, line in enumerate(in_times):
                tp = line.split()
                tp[0] = make_time(tp[0])
                tp[1] = make_time(tp[1]) - tp[0]
                # if no name given, append line count
                if len(tp) < 3:
                    tp.append(str(l) + '.wav')
                time_pairs.append(tp)
        return time_pairs


    def main():
        for i, tp in enumerate(collect_from_file()):
            # open a file, from `ss`, for duration `t`
            stream = ffmpeg.input(_in_file, ss=tp[0], t=tp[1])
            # output to named file
            stream = ffmpeg.output(stream, tp[2])
            # this was to make trial and error easier
            stream = ffmpeg.overwrite_output(stream)
            # and actually run
            ffmpeg.run(stream)


    if __name__ == '__main__':
        main()
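The make_time helper accepts either MM:SS or plain seconds. A standalone check of that parsing rule, reproduced here so it can be run on its own (this variant additionally catches ValueError so fractional second-only values like "8.0" are accepted, an assumption beyond the original which handled only integer seconds in that branch):

```python
def make_time(elem: str) -> float:
    """Parse "MM:SS" or a plain seconds value into total seconds."""
    t = elem.split(':')
    try:
        # Fails with IndexError if there is no ':' and the value is an int,
        # or with ValueError if it is a bare float like "8.0".
        return int(t[0]) * 60 + float(t[1])
    except (IndexError, ValueError):
        return float(t[0])

print(make_time("1:30"), make_time("8.0"))  # 90.0 8.0
```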
Split audio into several pieces based on timestamps from a text file with sox or ffmpeg
1,487,619,657,000
If I download a video using youtube-downloader I can watch the part file while downloading (in my case using mpv). Suppose I cannot or don't want to select a format containing both video and audio then the audio in the part file is missing because it is downloaded and merged after the video download is complete. Is there any fast way to get audio and video merged during the download, such that I can watch the part file including audio. I already asked a similar question on github and learned that I could use the --downloader ffmpeg option. This works, but is very slow, so I am looking for a faster way to do it. The problem occurs if I download very large video (for example 10 hours long) in a high quality. However downloading audio is much much faster. So suppose I have the audio file already and I am downloading the video file. Is there an indirect way (workaround) using for example ffmpeg to merge the audio continuously into the video while the file is downloading.
Option 1: You can select a video download format that contains a mixed/muxed stream of both video and audio. For example,

    yt-dlp -F https://youtu.be/3QnD2c4Xovk

will list formats to choose, and something like

    yt-dlp -f 18 https://youtu.be/3QnD2c4Xovk

will choose that format. The partial file contains video and audio if the format supports it.

Option 2: You can also choose to download two formats, one each for audio and video, which will then be muxed by yt-dlp:

    yt-dlp -f 251,244 https://youtu.be/3QnD2c4Xovk

The format I specified first (here, 251) was downloaded first in my tests and I could listen right away by playing the partial file.

For completeness, the above currently outputs:

    $ yt-dlp -F https://youtu.be/3QnD2c4Xovk
    [youtube] Extracting URL: https://youtu.be/3QnD2c4Xovk
    [youtube] 3QnD2c4Xovk: Downloading webpage
    [youtube] 3QnD2c4Xovk: Downloading ios player API JSON
    [youtube] 3QnD2c4Xovk: Downloading android player API JSON
    [youtube] 3QnD2c4Xovk: Downloading m3u8 information
    [info] Available formats for 3QnD2c4Xovk:
    ID  EXT  RESOLUTION FPS CH │ FILESIZE   TBR PROTO │ VCODEC        VBR ACODEC    ABR ASR  MORE INFO
    ───────────────────────────────────────────────────────────────────────────────────────────────────
    sb2 mhtml 48x27     0     │                mhtml │ images                               storyboard
    sb1 mhtml 67x45     0     │                mhtml │ images                               storyboard
    sb0 mhtml 135x90    0     │                mhtml │ images                               storyboard
    233 mp4  audio only       │                m3u8  │ audio only        unknown            [en] Default
    234 mp4  audio only       │                m3u8  │ audio only        unknown            [en] Default
    139 m4a  audio only     2 │   1.84MiB  48k https │ audio only        mp4a.40.5 48k  22k [en] low, m4a_dash
    249 webm audio only     2 │   2.22MiB  57k https │ audio only        opus      57k  48k [en] low, webm_dash
    250 webm audio only     2 │   3.02MiB  78k https │ audio only        opus      78k  48k [en] low, webm_dash
    140 m4a  audio only     2 │   4.91MiB 127k https │ audio only        mp4a.40.2 127k 44k [en] medium, m4a_dash
    251 webm audio only     2 │   5.82MiB 151k https │ audio only        opus      151k 48k [en] medium, webm_dash
    17  3gp  176x144    12  1 │   2.17MiB  56k https │ mp4v.20.3         mp4a.40.2      22k [en] 144p
    394 mp4  216x144    24    │   1.26MiB  33k https │ av01.0.00M.08 33k video only         144p, mp4_dash
    269 mp4  216x144    24    │ ~ 4.53MiB 115k m3u8  │ avc1.4D400C  115k video only
    160 mp4  216x144    24    │ 717.16KiB  18k https │ avc1.4D400C   18k video only         144p, mp4_dash
    603 mp4  216x144    24    │ ~ 5.39MiB 136k m3u8  │ vp09.00.11.08 136k video only
    278 webm 216x144    24    │   1.34MiB  35k https │ vp09.00.11.08 35k video only         144p, webm_dash
    395 mp4  360x240    24    │   1.41MiB  37k https │ av01.0.00M.08 37k video only         240p, mp4_dash
    229 mp4  360x240    24    │ ~ 6.73MiB 170k m3u8  │ avc1.4D400D  170k video only
    133 mp4  360x240    24    │   1.11MiB  29k https │ avc1.4D400D   29k video only         240p, mp4_dash
    604 mp4  360x240    24    │ ~ 9.56MiB 242k m3u8  │ vp09.00.20.08 242k video only
    242 webm 360x240    24    │   1.58MiB  41k https │ vp09.00.20.08 41k video only         240p, webm_dash
    396 mp4  540x360    24    │   2.13MiB  55k https │ av01.0.01M.08 55k video only         360p, mp4_dash
    230 mp4  540x360    24    │ ~16.81MiB 425k m3u8  │ avc1.4D4015  425k video only
    134 mp4  540x360    24    │   2.31MiB  60k https │ avc1.4D4015   60k video only         360p, mp4_dash
    18  mp4  540x360    24  2 │ ≈ 7.36MiB 186k https │ avc1.42001E       mp4a.40.2      44k [en] 360p
    605 mp4  540x360    24    │ ~19.08MiB 482k m3u8  │ vp09.00.21.08 482k video only
    243 webm 540x360    24    │   2.66MiB  69k https │ vp09.00.21.08 69k video only         360p, webm_dash
    397 mp4  720x480    24    │   3.21MiB  83k https │ av01.0.04M.08 83k video only         480p, mp4_dash
    231 mp4  720x480    24    │ ~29.80MiB 753k m3u8  │ avc1.4D401E  753k video only
    135 mp4  720x480    24    │   4.36MiB 113k https │ avc1.4D401E  113k video only         480p, mp4_dash
    606 mp4  720x480    24    │ ~28.21MiB 713k m3u8  │ vp09.00.30.08 713k video only
    244 webm 720x480    24    │   4.21MiB 109k https │ vp09.00.30.08 109k video only        480p, webm_dash

and you can see the "audio only" and "video only" descriptions printed by the yt-dlp tool.
How to watch video while still downloading including audio?
1,487,619,657,000
I was reading Cut part from video file from start position to end position with FFmpeg a few days back, and I tried the following and it worked. While the video part was good, I am not sure whether the audio could be made better or not. This is how I went about it:

    $ ffmpeg -ss 00:11:50 -i input.mkv -t 165 -c:v libx264 -preset slower -crf 22 -c:a copy output.mkv

I was cutting a portion of a video for my own requirements -- a clip from a movie, or an audio/video piece that is nice, quirky, etc. Is there a better way? I am using ffmpeg 4.2.2 on Debian testing, which will eventually become Debian Bullseye (11.0).
No, there is no better way. When you copy the audio stream with -c:a copy, you are not doing any re-encoding, so you get exactly the same quality as the source. The only way to improve the quality beyond what the source provides is to use sound-editing software to remove noise or apply other processing.

You didn't mention why you re-encoded the video; in case you only want to cut the video, a better way would be to skip the re-encoding and do

    $ ffmpeg -ss 00:11:50 -i input.mkv -t 165 -c copy output.mkv
ffmpeg: extract audio only from a media file using the best presets (2.0)
1,487,619,657,000
I want to output a video using two videos as input, where these two videos fade (or dissolve) into each other in a smooth and repetitive manner, every second or so. I'm assuming a combination of ffmpeg with melt, mkvmerge, or another similar tool might produce the effect I'm after. Basically, I want to use ffmpeg to cut up video A according to a specific interval, discarding every second cut up (automatically). Likewise for video B, but in this case inverting the process to retain the discarded parts. I wish to then interweave these parts. The file names should be correctly formatted so that I can then concatenate the result using a wild card command argument or batch processing list, as per one of the aforementioned tools. The transition effect (e.g. a "lapse dissolve") isn't absolutely necessary, but it would be great if there were a filter to achieve that too. Lastly, it would also be great if this process could be done with little to no re-encoding, to preserve the video quality. I've read through this thread and the Melt Framework documentation, in addition to the ffmpeg manual.
Assuming both videos have the same resolution and sample aspect ratio, you can use the blend filter in ffmpeg. A couple of examples:

    ffmpeg -i videoA -i videoB -filter_complex \
    "[0][1]blend=all_expr=if(mod(trunc(T),2),A,B);\
     [0]volume=0:enable='mod(trunc(t+1),2)'[a]; [1]volume=0:enable='mod(trunc(t),2)'[b];\
     [a][b]amix" out.mp4

Straight cuts. Output:

    time, in seconds,
    [0,1) -> videoB
    [1,2) -> videoA
    [2,3) -> videoB
    ...
    [2N  ,2N+1) -> videoB
    [2N+1,2N+2) -> videoA

    ffmpeg -i videoA -i videoB -filter_complex \
    "[0][1]blend=all_expr='if(mod(trunc(T/2),2),min(1,2*(T-2*trunc(T/2))),max(0,1-2*(T-2*trunc(T/2))))*A+if(mod(trunc(T/2),2),max(0,1-2*(T-2*trunc(T/2))),min(1,2*(T-2*trunc(T/2))))*B';\
     [0]volume='if(mod(trunc(t/2),2),min(1,2*(t-2*trunc(t/2))),max(0,1-2*(t-2*trunc(t/2))))':eval=frame[a];\
     [1]volume='if(mod(trunc(t/2),2),max(0,1-2*(t-2*trunc(t/2))),min(1,2*(t-2*trunc(t/2))))':eval=frame[b];\
     [a][b]amix" out.mp4

Each input's video/audio for 2 seconds with a 0.5 second transition. Output:

    time, in seconds,
    [0,0.5)   -> videoA fades out 1 to 0 + videoB fades in from 0 to 1
    [0.5,2)   -> videoB
    [2,2.5)   -> videoB fades out 1 to 0 + videoA fades in from 0 to 1
    [2.5,4)   -> videoA
    [4,4.5)   -> videoA fades out 1 to 0 + videoB fades in from 0 to 1
    [4.5,6)   -> videoB
    [6,6.5)   -> videoB fades out 1 to 0 + videoA fades in from 0 to 1
    [6.5,8)   -> videoA
    ...
    [4N    ,4N+0.5) -> videoA fades out 1 to 0 + videoB fades in from 0 to 1
    [4N+0.5,4N+2)   -> videoB
    [4N+2  ,4N+2.5) -> videoB fades out 1 to 0 + videoA fades in from 0 to 1
    [4N+2.5,4N+4)   -> videoA
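To see what the second blend expression does, here is the coefficient applied to input A transcribed into Python for illustration (`weight_A` is a hypothetical name; the logic is a direct translation of the `if(mod(trunc(T/2),2),...)` expression):

```python
from math import trunc

def weight_A(T: float) -> float:
    """Blend weight of input A at time T, per the second ffmpeg example."""
    p = T - 2 * trunc(T / 2)      # position within the current 2-second block
    if trunc(T / 2) % 2:          # odd block: A fades in over 0.5 s, then holds at 1
        return min(1, 2 * p)
    return max(0, 1 - 2 * p)      # even block: A fades out over 0.5 s, then holds at 0

for T in (0, 0.25, 0.5, 1.9, 2.0, 2.25, 2.5, 3.9):
    print(T, weight_A(T))
```

Evaluating it at a few times reproduces the schedule above: A holds weight 0 throughout [0.5, 2), ramps back to 1 over [2, 2.5), and holds 1 through [2.5, 4).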
How to transition smoothly and repeatedly between two videos using command line tools?
1,487,619,657,000
When I run ffmpeg I get the following error:

    /usr/local/bin/ffmpeg: error while loading shared libraries: libvpx.so.1: cannot open shared object file: No such file or directory

Output of ls -l /usr/lib/libvpx*:

    lrwxrwxrwx 1 root root     15 Nov  2 14:10 /usr/lib/libvpx.so.0 -> libvpx.so.0.0.0
    lrwxrwxrwx 1 root root     15 Nov  2 14:10 /usr/lib/libvpx.so.0.0 -> libvpx.so.0.0.0
    -rwxr-xr-x 1 root root 409800 Jun 25  2011 /usr/lib/libvpx.so.0.0.0

What should I do?
The path is /usr/local, so it looks like you compiled and installed ffmpeg manually instead of through the package manager. The problem is that your ffmpeg binary requires a newer major version of libvpx (libvpx.so.1, while only libvpx.so.0 is installed); recompiling ffmpeg against the libvpx you have installed will solve this issue.
ffmpeg and libvpx: error while loading shared libraries
1,487,619,657,000
I have a bunch of mp3 files in a folder. They are recorded from cassette tape, and the individual tracks need to be separated out. This is one of the filenames:

    Gobbolino the Witch's Cat, 10:52 The Hare & Tortoise, 14:52 The Shoe Tree, 24:22 The Emperors New Clothes, 34:11 The Red Nightcaps, 37:07 Aldo in Arcadia (1), 40:37 The Forest Troll.mp3

You can see it has timestamps within the filename, indicating the start of each track. The first track is not timestamped since it always starts at 00:00. And the last track should always go to the end of the mp3.

Somehow, I want to extract these timestamps in order to create separate files. If the file above were split correctly the output would be:

    Gobbolino the Witch's Cat.mp3
    The Hare & Tortoise.mp3
    The Shoe Tree.mp3
    The Emperors New Clothes.mp3
    The Red Nightcaps.mp3
    Aldo in Arcadia (1).mp3
    The Forest Troll.mp3

I know how to loop through files, and how to cut a file with ffmpeg, but I don't know how to extract the timestamps and track names from the filename. I'm using zsh, and here is my current code:

    for file in *; do
        if [[ -f "$file" ]]; then
            # extract timestamps and loop thru, for each timestamped section
            ffmpeg -ss TIMESTAMP_START -to TIMESTAMP_END -i "$file" -acodec copy TRACK_NAME.mp3
        fi
    done

UPDATE

My spec for this problem has changed. The filename looks like this:

    Tape 1 - Gobbolino the Witch's Cat, 11-06 The Hare & Tortoise, 14-25 The Shoe Tree, 24-06 The Emperors New Clothes, 34-27 The Red Nightcaps, 37-29 Aldo in Arcadia (1), 40-40 The Forest Troll.mp3

i.e. it has an album name at the beginning, and there are hyphens in the timestamps instead of colons (filenames can't have colons in macOS). Also, I wanted to insert some mp3 tags into the files, and put the tracks for each album in its own album folder. I based my solution on Gilles' one below.
The script looks like this:

    setopt interactive_comments
    for file in *(.); do
      extension=$file:e
      rest=$file:r
      timestamp_start=0:00
      timestamp_duration=$(ffprobe -i "$file" -show_entries format=duration -v quiet -of csv="p=0" -sexagesimal)
      timestamp_duration=${timestamp_duration%.*}
      tracknum=1
      while [[ $rest =~ ,\ *([0-9:]+-[0-9][0-9])\ * ]]; do
        track_name="$rest[1,$MBEGIN-1]"
        if [[ "$track_name" == *"Tape "* ]]; then
          albumname="${track_name%% - *}"
          track_name="${track_name#* - }"
          echo "\n\nALBUM NAME $albumname\n"
          mkdir $albumname
        fi
        rest=$rest[$MEND+1,-1]
        timestamp_end=$match[1]
        timestamp_end="${timestamp_end//-/:}"
        # echo "$timestamp_start $timestamp_end $track_name.$extension"
        ffmpeg -ss $timestamp_start -to $timestamp_end -i $file -acodec libmp3lame -ac 2 -ab 256k -ar 44100 -metadata album="$albumname" -metadata title="$track_name" -metadata track="$tracknum" $track_name.$extension
        mv $track_name.$extension $albumname
        timestamp_start=$timestamp_end
        tracknum=$((tracknum+1))
        last_track_name="$rest:r"
      done
      if [[ -n $timestamp_end ]]; then
        # echo "$timestamp_start $timestamp_duration $last_track_name.$extension"
        ffmpeg -ss $timestamp_start -to $timestamp_duration -i $file -acodec libmp3lame -ac 2 -ab 256k -ar 44100 -metadata album="$albumname" -metadata title="$last_track_name" -metadata track="$tracknum" $last_track_name.$extension
        mv $last_track_name.$extension $albumname
      fi
    done
Use a loop over the file name to match track delimiters, using regular expression matching with the =~ conditional expression operator. The regular expression ,\ *([0-9:]+:[0-9][0-9])\ * matches a comma followed by a timestamp with optional spaces around it. $file:e and $file:r extract the file extension and root through history modifiers. Instead of looping over all files and then matching just the regular files, use a glob qualifier to match just regular files.

    for file in *(.); do
      extension=$file:e
      rest=$file:r
      timestamp_start=0:00
      timestamp_end=
      while [[ $rest =~ ,\ *([0-9:]+:[0-9][0-9])\ * ]]; do
        track_name=$rest[1,$MBEGIN-1]
        rest=$rest[$MEND+1,-1]
        timestamp_end=$match[1]
        ffmpeg -ss $timestamp_start -to $timestamp_end -i $file -acodec copy $track_name.$extension
        timestamp_start=$timestamp_end
      done
      if [[ -n $timestamp_end ]]; then
        ffmpeg -ss $timestamp_end -i $file -acodec copy $rest.$extension
      else
        : # If you want special processing for single-track files, it goes here.
      fi
    done
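The splitting idea -- every ", MM:SS " delimiter separates a track name from the next track's start time -- can also be illustrated with the equivalent regular expression in Python (a hypothetical sketch using a shortened version of the sample filename from the question):

```python
import re

# A comma, optional spaces, a timestamp like 10:52, then spaces:
# the same delimiter the zsh pattern ",\ *([0-9:]+:[0-9][0-9])\ *" matches.
delim = re.compile(r',\s*([0-9:]+:[0-9]{2})\s*')

name = ("Gobbolino the Witch's Cat, 10:52 The Hare & Tortoise, "
        "14:52 The Shoe Tree")
parts = delim.split(name)
print(parts[::2])   # track names
print(parts[1::2])  # start times of the 2nd, 3rd, ... tracks
```

Because the timestamp is a capture group, re.split interleaves the captured times with the track names, which pairs each track with the start time of the next one, exactly as the zsh loop does with $MBEGIN and $match[1].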
Extrating timestamps from filename to split mp3s
1,487,619,657,000
I want to record my screen with audio for practice. I saw recommendations to use this command line:

    ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 25 -s 800x600 -i :0.0+200,100 -c:a pcm_s16le -c:v libx264 -preset ultrafast -crf 0 -threads 4 output.mkv

But ffmpeg doesn't recognize the "alsa" format or the "pulse" file. If I remove both of those, it can capture the specified region of the screen just fine, but silently.

I'm using Linux Mint 17 Mate edition; I know I'm using ALSA. My testing audio source is VLC (which I thought used PulseAudio) playing an Ogg Vorbis file from the system tray.

This is my ffmpeg configuration:

    ffmpeg version 2.2.4 Copyright (c) 2000-2014 the FFmpeg developers
      built on Jul  6 2014 09:48:53 with Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
      configuration: --cc=clang --extra-libs=-ldl --disable-shared --disable-ffserver --enable-ffplay --disable-doc --enable-bzlib --enable-zlib --enable-libx264 --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-libass --enable-libfaac --enable-libvpx --enable-libopus --enable-x11grab --enable-nonfree --enable-gpl
      libavutil      52. 66.100 / 52. 66.100
      libavcodec     55. 52.102 / 55. 52.102
      libavformat    55. 33.100 / 55. 33.100
      libavdevice    55. 10.100 / 55. 10.100
      libavfilter     4.  2.100 /  4.  2.100
      libswscale      2.  5.102 /  2.  5.102
      libswresample   0. 18.100 /  0. 18.100
      libpostproc    52.  3.100 / 52.  3.100
ALSA support isn't inherently built into ffmpeg. You need to have the ALSA development files installed at ./configure time when building ffmpeg. The ffmpeg configure script looks for alsa/asoundlib.h and libasound. If either is missing, it simply won't build ALSA support into the program. This contrasts with other features of ffmpeg which you can enable with configure script flags. That is to say, you can't ask it not to build ALSA support in, if it finds the header and library files.
FFmpeg doesn't recognize my audio sources
1,487,619,657,000
I have put together this script for recording the microphone, the desktop audio and the screen using ffmpeg:

    DATE=`which date`
    RESO=2560x1440
    FPS=30
    PRESET=ultrafast
    DIRECTORY=$HOME/Video/
    FILENAME=videocast`$DATE +%d%m%Y_%H.%M.%S`.mkv

    ffmpeg -y -vsync 1 \
        -f pulse -ac 2 -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
        -f pulse -ac 1 -ar 25000 -i alsa_input.usb-0d8c_C-Media_USB_Headphone_Set-00-Set.analog-mono \
        -filter_complex aresample=async=1,amix=duration=shortest,apad \
        -f x11grab -r $FPS -s $RESO -i :0.0 \
        -acodec libvorbis \
        -vcodec libx264 -pix_fmt yuv420p -preset $PRESET -threads 0 \
        $DIRECTORY$FILENAME

Everything is recorded, and between the screen and the microphone sound there are no issues whatsoever. However, the desktop audio falls badly behind: it begins in sync but gets worse over time during playback, also in ffplay. It does not matter what application is playing sound: YouTube videos in the browser, desktop sounds, and Rhythmbox (playing a couple of seconds of a song, then stopping, waiting and repeating) all get out of sync.

The terminal output complains about lines like

    ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred22.73 bitrate=10384.5kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred

and similar, but I do not know what that means.
Full terminal output here:

    ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
      built on Aug 11 2013 14:52:28 with gcc 4.8.1 (GCC) 20130725 (prerelease)
      configuration: --prefix=/usr --disable-debug --disable-static --enable-avresample --enable-dxva2 --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-pic --enable-postproc --enable-runtime-cpudetect --enable-shared --enable-swresample --enable-vdpau --enable-version3 --enable-x11grab
      libavutil      52. 38.100 / 52. 38.100
      libavcodec     55. 18.102 / 55. 18.102
      libavformat    55. 12.100 / 55. 12.100
      libavdevice    55.  3.100 / 55.  3.100
      libavfilter     3. 79.101 /  3. 79.101
      libavresample   1.  1.  0 /  1.  1.  0
      libswscale      2.  3.100 /  2.  3.100
      libswresample   0. 17.102 /  0. 17.102
      libpostproc    52.  3.100 / 52.  3.100
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, pulse, from 'alsa_output.pci-0000_00_1b.0.analog-stereo.monitor':
      Duration: N/A, start: 0.014093, bitrate: 1536 kb/s
        Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
    Guessed Channel Layout for Input Stream #1.0 : mono
    Input #1, pulse, from 'alsa_input.usb-0d8c_C-Media_USB_Headphone_Set-00-Set.analog-mono':
      Duration: N/A, start: 0.006172, bitrate: 400 kb/s
        Stream #1:0: Audio: pcm_s16le, 25000 Hz, mono, s16, 400 kb/s
    [x11grab @ 0x218a6e0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 2560 height: 1440
    [x11grab @ 0x218a6e0] shared memory extension found
    Input #2, x11grab, from ':0.0':
      Duration: N/A, start: 1379021580.184321, bitrate: N/A
        Stream #2:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 2560x1440, -2147483 kb/s, 30 tbr, 1000k tbn, 30 tbc
    [libx264 @ 0x21ae560] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x21ae560] profile Constrained Baseline, level 5.0
    [libx264 @ 0x21ae560] 264 - core 133 r2339 585324f - H.264/MPEG-4 AVC codec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
    Output #0, matroska, to '/home/anders/Video/videocast12092013_23.33.00.mkv':
      Metadata:
        encoder         : Lavf55.12.100
        Stream #0:0: Audio: vorbis (libvorbis) (oV[0][0] / 0x566F), 25000 Hz, mono, fltp
        Stream #0:1: Video: h264 (libx264) (H264 / 0x34363248), yuv420p, 2560x1440, q=-1--1, 1k tbn, 30 tbc
    Stream mapping:
      Stream #0:0 (pcm_s16le) -> aresample (graph 0)
      Stream #1:0 (pcm_s16le) -> amix:input1 (graph 0)
      amix (graph 0) -> Stream #0:0 (libvorbis)
      Stream #2:0 -> #0:1 (rawvideo -> libx264)
    Press [q] to stop, [?] for help
    ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred22.73 bitrate=10384.5kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred3.22 bitrate=10423.3kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred25.25 bitrate=11011.0kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred5.76 bitrate=11013.7kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred27.25 bitrate=11175.4kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred7.76 bitrate=11168.7kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred8.24 bitrate=11176.4kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) overrun occurred55.48 bitrate=11243.8kbits/s
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred
    ALSA lib pcm.c:7843:(snd_pcm_recover) underrun occurred
    frame=12871 fps= 30 q=-1.0 Lsize=  542369kB time=00:07:09.31 bitrate=10349.3kbits/s
    video:539762kB audio:2363kB subtitle:0 global headers:3kB muxing overhead 0.044476%
    [libx264 @ 0x21ae560] frame I:52    Avg QP:15.46  size:725888
    [libx264 @ 0x21ae560] frame P:12819 Avg QP:18.26  size: 40172
    [libx264 @ 0x21ae560] mb I  I16..4: 100.0%  0.0%  0.0%
    [libx264 @ 0x21ae560] mb P  I16..4:  2.6%  0.0%  0.0%  P16..4: 18.1%  0.0%  0.0%  0.0%  0.0%  skip:79.3%
    [libx264 @ 0x21ae560] coded y,uvDC,uvAC intra: 57.8% 49.8% 25.3% inter: 8.9% 8.7% 2.2%
    [libx264 @ 0x21ae560] i16 v,h,dc,p: 23% 29% 32% 16%
    [libx264 @ 0x21ae560] i8c dc,h,v,p: 45% 28% 18%  9%
    [libx264 @ 0x21ae560] kb/s:10306.26

Please help me, I am really close to getting this working!

UPDATE: The desktop audio is out of sync even when skipping filter_complex and the microphone, but by a smaller amount. Using copy instead of libvorbis does not change anything either.
Not sure if this will fix it for you, but I have a script that I haven't had problems with. Comparing our two scripts, the only differences I can see are: my filter_complex is just amerge; I force the use of 4 threads; my audio codec is mp3lame. I'm thinking the audio codec change is the most relevant difference. I think that some audio codecs get interleaved with the video somehow so they can't get out of sync. Unfortunately I'm no video engineer so I can't be so sure. Here is my script:

#!/usr/bin/bash

# video information
INRES="1920x1080"
OUTRES="1280x720"
FPS="24"
QUAL="fast"
FILE_OUT="$1"

# audio information
PULSE_IN="alsa_input.pci-0000_00_1b.0.analog-stereo"
PULSE_OUT="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"

ffmpeg -f x11grab -s "$INRES" -r "$FPS" -i :0.0 \
    -f pulse -i "$PULSE_IN" -f pulse -i "$PULSE_OUT" \
    -filter_complex amerge \
    -vcodec libx264 -crf 30 -preset "$QUAL" -s "$OUTRES" \
    -acodec libmp3lame -ab 96k -ar 44100 -threads 4 -pix_fmt yuv420p \
    -f flv "$FILE_OUT"
Desktop audio falls behind when recording microphone + desktop audio + screen using ffmpeg
1,487,619,657,000
I have an album of 11 .flac audio files. (edit: since this issue has been resolved, it's now clear that the precise names and content of the files are irrelevant, so I've renamed them):

> find . -name "*.flac"
./two.flac
./ten.flac
./nine.flac
./eight.flac
./seven.flac
./three.flac
./four.flac
./five.flac
./one.flac
./eleven.flac
./six.flac

I was converting these to .wav files with a specific sample rate and bit depth. I used ffmpeg in a Bash shell to do this. A command like this one works perfectly for each of the 11 files if I call it manually:

ffmpeg -i "./six.flac" -sample_fmt s16 -ar 44100 "./six.wav"

However, when I wrote a while loop to run this command for each file:

find . -name "*.flac" | sed "s/\.flac$//" | while read f; do ffmpeg -i "$f.flac" -sample_fmt s16 -ar 44100 "$f.wav"; done

This worked, but only for 8/11 of the files, with ffmpeg giving the following error messages for the three failures:

/ten.flac: No such file or directory
ree.flac: No such file or directory
ix.flac: No such file or directory

For ./ten.flac, the leading . in the relative file path was truncated, resulting in an absolute path, and the other two, ./three.flac and ./six.flac, lose even more characters, including from the start of their basenames. Some of the other eight files had ./ truncated, but this resulted in the correct relative path, so ffmpeg was able to continue in these cases.

If I use the same loop structure, but with echo "$f" instead of calling ffmpeg, I see no problems with the output:

> find . -name "*.flac" | sed "s/\.flac$//" | while read f; do echo "$f"; done
./two
./ten
./nine
./seven
./three
./four
./five
./one
./eleven
./six

So I'm convinced the structure of the loop is fine, with "$f" expanding how I expect it to at each iteration. Somehow, when passing "$f.flac" into ffmpeg, part of the string is truncated, but only for some of the files. I just want to understand why my first attempt exhibited this behaviour.
I'm not looking for an alternative way to loop over or convert the files (my second attempt, with a different kind of loop, was successful for all files). edit: I accidentally discovered that piping yes | into ffmpeg seems to fix this issue. I added this so I wouldn't be prompted to overwrite existing .wav files. edit: Thanks @roaima for the explanation! Turns out that ffmpeg and read both inherit the same stdin handle, so ffmpeg could consume characters from the start of each line before read got a chance, thus mangling some of the file paths. This explains why yes | ffmpeg ... worked, since it gave ffmpeg a different stdin handle. The original loop works fine with ffmpeg -nostdin .... See http://mywiki.wooledge.org/BashFAQ/089.
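The stdin-sharing effect is easy to reproduce without ffmpeg at all; in this minimal sketch, an inner read plays the role of a command that silently consumes lines from the loop's shared stdin (ffmpeg consumed individual characters rather than whole lines, but the mechanism is the same):

```shell
# `read f` and the inner `read stolen` share one stdin, so every other
# line intended for the loop variable is silently eaten.
printf '%s\n' one.flac two.flac three.flac four.flac | while read f; do
    echo "outer read: $f"
    read stolen   # consumes the next line meant for the outer `read`
done
```

This prints only "outer read: one.flac" and "outer read: three.flac"; the other two filenames never reach the loop body, just as some of the flac paths never reached ffmpeg intact.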
Unless you have a directory tree of files (not mentioned, and not shown in your example dataset), you can use a simple loop to process the files

for f in *.flac
do
    w="${f%.flac}.wav"
    ffmpeg -i "$f" -sample_fmt s16 -ar 44100 "$w"
done

If you really do have a hierarchy of files you can incorporate this into a find search

find -type f -name '*.flac' -exec sh -c 'for f in "$@"; do w="${f%.flac}.wav"; ffmpeg -i "$f" -sample_fmt s16 -ar 44100 "$w"; done' _ {} +

For efficiency you might want to skip processing of any flac that already has a corresponding wav. After the w= assignment, add [ -s "$w" ] && continue. If you really don't want that then you can further optimise the command thus,

find -type f -name '*.flac' -exec sh -c 'ffmpeg -i "$1" -sample_fmt s16 -ar 44100 "${1%.flac}.wav"' _ {} \;

For pre-run testing, prefix ffmpeg with echo to see what would get executed without it actually doing so. (Quotes won't be shown, though, so bear that in mind.)

It turns out that the question is actually about ffmpeg chewing up the filenames it's supposed to be processing. This is because ffmpeg reads commands from stdin, and the list of files has been presented as a pipe to a while read … loop (also using stdin). Solution:

ffmpeg -nostdin …

or

ffmpeg … </dev/null

See Strange errors when using ffmpeg in a loop
http://mywiki.wooledge.org/BashFAQ/089
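As a side note, the find … -exec sh -c '…' _ {} + idiom above hands find's matches to the inner shell as positional parameters. A toy run, with echo standing in for ffmpeg and a throwaway directory, shows what the inner loop sees:

```shell
# Demonstrate the find/sh -c batching with dummy files; echo stands in
# for ffmpeg, so nothing is actually converted.
tmpdir=$(mktemp -d)
touch "$tmpdir/one.flac" "$tmpdir/two.flac"
find "$tmpdir" -type f -name '*.flac' -exec sh -c '
    for f in "$@"; do
        w="${f%.flac}.wav"
        echo "would convert: $f -> $w"
    done' _ {} + | sort
rm -r "$tmpdir"
```

Each matched .flac path appears once, already paired with its .wav name, confirming the ${f%.flac} suffix stripping works before ffmpeg is ever invoked.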
Bash variable truncated when passed into ffmpeg
1,487,619,657,000
I want to do the following conversion:

for f in *.m4a; do
    ( ffmpeg -i "$f" -f wav - | opusenc --bitrate 38 - "${f%.m4a}.opus" ) &
done

I know I could use ffmpeg directly to convert to opus, but I want to use opusenc in this case, since it's a newer version. When I run opusenc after the ffmpeg it works fine, but when I try to run the above I just get a bunch of Stopped and nothing happens.
If you use GNU Parallel then this works:

parallel 'ffmpeg -i {} -f wav - | opusenc --bitrate 38 - {.}.opus' ::: *m4a

Maybe that is good enough? It has the added benefit that it only runs 1 job per cpu thread, so if you have 1000 files you will not overload your machine.
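Incidentally, parallel's {.} replacement string is the input filename with its extension removed, which corresponds to the ${f%.m4a} expansion used in the question's loop. A quick illustration with a hypothetical filename:

```shell
# {.} in GNU Parallel ~ "${f%.ext}" in the shell: both drop the extension.
f="some track.m4a"
echo "${f%.m4a}.opus"   # prints: some track.opus
```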
How do I run an on-the-fly ffmpeg (pipe) conversion in parallel?
1,495,379,997,000
I have an IP cam which uploads JPEG pictures to an FTP folder on my Arch Linux box whenever it detects movement in the room it is watching. It uploads a JPEG every second until all motion activity stops. It names the JPEG files in the following way. Dissected, it means:

name-of-camera(inside)_1_YEAR-MONTH-DATE-TIME-SECONDS_imagenumber.jpg

I want a script that can make a 1 frame-per-second video from them (easy with ffmpeg, I know), BUT it must be clever enough to only make the video from the images that are within 2 seconds of each other, then delete those JPEGs that it used. I say "2 seconds of each other" in case of network latency where it misses one frame. Any future images that are within 2 seconds of being taken should become their own video. So it should basically be able to make videos from each 'event' of motion the camera saw. I know programs like zoneminder and motion can do this, but I want to design a script instead. Any ideas much appreciated.
You could generate a time stamp from the date and check the span between each file. One issue, as already mentioned in comments, is daylight savings – assuming the date/times are locale specific. Using stat instead of filenames as base could help on this. But that gain depends on how the files are uploaded (if timestamps are preserved etc.) As a starting point (this became much longer than intended) you could try something like this:

#!/bin/bash

declare -a fa_tmp=()  # Array holding tmp files with jpg collections.
declare dd=""         # Date extracted from file name.
declare -i ts=0       # Time stamp from date.
declare -i pre=0      # Previous time stamp.
declare -i lim=2      # Limit in seconds triggering new collection.

fmt_base='+%F-%H_%M_%S'  # Format for date to generate video file name.

# Perhaps better using date from file-name:
# export TZ=UTC
# stat --printf=%Y $f

# Loop all jpg files
for f in *.jpg; do
    # Extract date, optionally use mktime() with gawk.
    # This assumes XX_XX_DATETIME_XXX... by split on underscore.
    dd=$(printf "$f" | tr '_' ' ' | awk '{
        printf("%d-%02d-%02d %02d:%02d:%02d",
            substr($3, 1, 4), substr($3, 5, 2), substr($3, 7, 2),
            substr($3, 9, 2), substr($3, 11, 2), substr($3, 13, 2))
    }')
    # Create time stamp from date.
    ts=$(date +%s -d "$dd")
    # If duration is greater than lim, create new tmp file.
    if ((ts - pre > lim)); then
        f_tmp="$(mktemp)"
        fa_tmp+=("$f_tmp")
        # First line in tmp file is first time stamp.
        printf "%s\n" "$ts" >> "$f_tmp"
    fi
    # Add filename to current tmp file.
    printf "%s\n" "$f" >> "$f_tmp"
    # Previous is current.
    pre="$ts"
done

declare -i i=1

# Loop tmp files.
for f_tmp in "${fa_tmp[@]}"; do
    printf "PROCESSING: %s\n---------------------------\n" "$f_tmp"
    base=""
    i=1
    # Rename files.
    while read -r img; do
        # First line is time stamp and is used as base for name.
        if [[ "$base" == "" ]]; then
            base=$(date "$fmt_base" -d "@$img")
            continue
        fi
        # New image name.
        iname=$(printf "%s-%04d.jpg" "$base" "$i")
        echo "mv '$img' => '$iname'"
        mv "$img" "$iname"
        ((++i))
    done <"$f_tmp"
    # Generate video.
    if ffmpeg -f image2 \
        -framerate 3 \
        -pattern_type sequence \
        -start_number 1 \
        -i "$base-%04d.jpg" \
        -vcodec mpeg4 \
        -r 6 \
        "$base.mp4"; then
        # Iff success, move jpg's to backup folder.
        mkdir "$base"
        mv $base-*.jpg "$base"
    else
        printf "FAILED:\n" >&2
        ls $base-*.jpg >&2
    fi
    # Remove tmp file.
    rm "$f_tmp"
done
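The date-to-timestamp step used by the script can be checked in isolation. With a hypothetical 14-digit DATETIME field and TZ pinned to UTC (to sidestep the daylight-saving issue mentioned above), bash substring expansion gives the same result as the awk call:

```shell
# Convert a hypothetical YYYYMMDDHHMMSS field to an epoch timestamp (bash).
export TZ=UTC                       # pin the zone so the result is reproducible
raw="20240131120503"                # hypothetical DATETIME portion of a filename
dd="${raw:0:4}-${raw:4:2}-${raw:6:2} ${raw:8:2}:${raw:10:2}:${raw:12:2}"
ts=$(date +%s -d "$dd")             # GNU date
echo "$dd -> $ts"
```

This prints "2024-01-31 12:05:03 -> 1706702703", and consecutive timestamps can then be compared against the 2-second limit exactly as the script does.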
Need script: latest JPEGs to video, then delete old JPEGs
1,495,379,997,000
When I want to run ffmpeg I get the following error:

ffmpeg: error while loading shared libraries: libtheoraenc.so.1: cannot open shared object file: No such file or directory

This is output of ls \usr\lib -l | grep libtheora:

-rw-r--r-- 1 root root 419238 Jan  5  2010 libtheora.a
-rw-r--r-- 1 root root    935 Jan  5  2010 libtheora.la
lrwxrwxrwx 1 root root     19 Jan  5  2010 libtheora.so -> libtheora.so.0.3.10
-rw-r--r-- 1 root root 145636 Jan  5  2010 libtheoradec.a
-rw-r--r-- 1 root root    948 Jan  5  2010 libtheoradec.la
lrwxrwxrwx 1 root root     21 Jan  5  2010 libtheoradec.so -> libtheoradec.so.1.1.4
-rw-r--r-- 1 root root 334696 Jan  5  2010 libtheoraenc.a
-rw-r--r-- 1 root root    954 Jan  5  2010 libtheoraenc.la
lrwxrwxrwx 1 root root     21 Jan  5  2010 libtheoraenc.so -> libtheoraenc.so.1.1.2

What should I do to solve the problem?

Edit: in the line libtheoraenc.so -> libtheoraenc.so.1.1.2 I can't find libtheoraenc.so.1.1.2 in /usr/lib (packages libtheora and libtheora-dev are installed). Output of locate libtheoraenc.so.1.1.2 is:

/usr/lib/libtheoraenc.so.1
/usr/lib/libtheoraenc.so.1.1.2

But I can't find these files in /usr/lib!
I think I'd recommend a reinstall of libtheora0:

sudo apt-get install --reinstall libtheora0

And since you have some non-unixey backslashes in your original question, let's be explicit about looking for the libraries:

ls -l /usr/lib/libtheoraenc*
ffmpeg: error while loading shared libraries: libtheoraenc.so.1
1,495,379,997,000
I'm trying to record some videos using a USB camera, but I'm having some issues when using ffmpeg. If I run

ffmpeg -f video4linux2 -t 00:00:10 -i /dev/video0 out.mpg

the program tries to record at 640x480 resolution and ffmpeg hangs. However, if I add the -s option to ffmpeg and record at a resolution lower than 640x480 (e.g., 320x240), the video is recorded successfully. After a hang, if I hit CTRL+C, ffmpeg resumes, yielding a file of 0 kB in size. Using strace I can see that an ioctl call to the device returns -EINVAL and subsequent ioctls keep returning -EAGAIN:

ioctl(3, VIDIOC_G_STD, 0xbe84dfb0) = -1 EINVAL (Invalid argument)
ioctl(3, VIDIOC_DQBUF, {type=V4L2_BUF_TYPE_VIDEO_CAPTURE}) = -1 EAGAIN (Resource temporarily unavailable)

Any ideas why this happens? I'm using ffmpeg version 2.8.7, built through busybox 1.25. The host architecture is an ARM processor running kernel 3.2. I also tried compiling the most recent version from source, and the problem persists...
Figured out the reason: transcoding and raw data volume. Using the command line mentioned in my question I was reading from a raw format (yuv422), and transcoding it to mpeg-1, which was the default encoding for my version of ffmpeg. The data amount being streamed from the camera was simply too much for the processor, causing the hang. The camera I was using was capable of streaming in a compressed format as well (mjpeg). By switching to this format, ffmpeg no longer hanged, and was capable of recording at 15 fps. However, there was a transcoding step, from mjpeg to mpeg-1. I was able to reach a higher fps count by telling ffmpeg to copy the stream, removing the last transcoding step.
ffmpeg hangs when trying to record video on higher resolutions
1,495,379,997,000
Context
I recently updated my nVidia drivers to 375.26 and recompiled FFmpeg N-83180-gcf3affa and OBS 17.0.2-5-g43e4a2e (sorry if these numbers don't mean anything, I'm not quite sure what version numbers are significant) on my Debian machine. Doing a suspend to RAM will cause OBS to stop working, with its only fix being to reboot the machine.

How to reproduce
- Run OBS
- Output Configuration: Set output to NVENC H.264 and .mp4
  - Use CBR
  - Bitrate = 200K
  - Kf interval = 0
  - Low latency, High quality preset, main, auto
  - 2 pass encoding enabled
  - GPU = 0
  - B-frames = 0
- Start recording and stop to confirm that it works
- Go to login actions and click suspend
- Turn on and login again
- Start recording, OBS fails with this error:

[h264_nvenc @ 0x3fdd1e0] Failed creating CUDA context for NVENC: 0x3e7
[h264_nvenc @ 0x3fdd1e0] No NVENC capable devices found

System info
- Drivers/Software versions listed above
- GPU: MSI GTX 970
- uname -a: Linux version 3.16.0-4-amd64, #1 SMP Debian 3.16.39-1 (2016-12-30)
- OS: Debian 8.7 Jessie
- I use XFCE 4.10 if that makes a difference to how the action buttons work.

Question
Is there any way, short of rebooting every time, to avoid getting this error after waking the computer?

Edit 1
I know for a fact that OBS is the source of this problem.

Test case 1:
- Start computer, use ffmpeg's h264_nvenc encoder to output a video file
- Suspend to RAM
- Login, successfully repeat step 1

Test case 2:
- Start computer, use OBS to record a video with h264_nvenc
- Quit OBS
- Suspend to RAM
- Login, successfully repeat step 2

Test case 3:
- Start computer, use OBS to record video with h264_nvenc
- Suspend to RAM
- Login, fails with Cannot init CUDA

My guess is that OBS does not close its streams when a recording is stopped; it is probably persisted for performance (?) reasons until you exit the program? I have no clue how to fix this. Restarting OBS has no effect once the error shows up, you must reboot the system.
It appears that the GPU is completely fine at handling everything else; glxinfo, nvidia-smi, and nvidia-settings all confirm that the GPU is indeed being utilized to process other tasks. It seems NVENC is the only thing that has trouble after the suspend to RAM.

Edit 2
Here are the dmesg logs: https://www.diffchecker.com/wto7KPJZ
Tabbed "original" entries are what changed after doing the suspend; tabbed "changed" entries are what changed after doing the fix that I suggested. Full dmesg output: https://0paste.com/10601#hl
FFmpeg only locks up on CUDA init if an h264_nvenc stream was started (and stopped, but this is not needed) before putting the system into suspend. If OBS never records anything with the h264_nvenc encoder before suspend, it will work fine when you login again. If OBS locks up after logging in, it will become usable by:

- Exit OBS
- Run in terminal: sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm
- Open OBS again
- ??? Profit

If unloading nvidia_uvm doesn't work, the DRM and modeset modules may need to be reloaded as well, although I've never had this problem.
CUDA Context for NVENC not found after system suspend
1,495,379,997,000
I have some files

Joapira___BERLINA_DEL_HIERRO.mp4
Joapira___EL_BAILE_DEL_VIVO.mp4
Joapira___EL_CONDE_CABRA.mp4
Joapira___FLAIRE.mp4
Joapira___MAZULKA_DEL_HIERRO.mp4
Joapira___MEDA_A_MANOLITO_DIAZ_ARTESANO_TALLISTA.mp4

that I want to convert to some other formats with ffmpeg and GNU parallel. For example, to convert them to flac I do

parallel --bar ffmpeg -i "{}" -map_metadata 0 "{/.}.flac" ::: *

or to convert them to mp3 I do

parallel --bar ffmpeg -i "{}" -vn -ar 44100 -ab 128k -map_metadata 0 "{/.}.mp3" ::: $@

but the process continues forever and the first file is always missing. Why?

Info
I am on Fedora 22 using GNU parallel 20160222 and ffmpeg version N-80953-gd4c8e93-static from http://johnvansickle.com/ffmpeg/

Update
Fascinating: I tried it with ffmpeg version 2.6.8 (which comes with Fedora) and it works!! And even with the most recent static build from git it does not. :-(

Update 2
When I run ps auxwww and search for ffmpeg I see all the jobs in state Rl, except for the command for the file that is missing, which is in state T. GNU parallel is in state S+, but sometimes during the processing of the working files it changes to R+. The man page of ps says the following about the states:

D    uninterruptible sleep (usually IO)
R    running or runnable (on run queue)
S    interruptible sleep (waiting for an event to complete)
T    stopped by job control signal
t    stopped by debugger during the tracing
W    paging (not valid since the 2.6.xx kernel)
X    dead (should never be seen)
Z    defunct ("zombie") process, terminated but not reaped by its parent
<    high-priority (not nice to other users)
N    low-priority (nice to other users)
L    has pages locked into memory (for real-time and custom IO)
s    is a session leader
l    is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+    is in the foreground process group

Maybe this helps to understand the problem.
The solution, as suggested by @OleTange in a comment, is to update to a newer version of parallel, i.e. GNU parallel 20161122. Everything works again. And it is better to protect the commands from shell interaction with single quotes, i.e.:

parallel --bar 'ffmpeg -i {} -map_metadata 0 {/.}.flac' ::: *

and

parallel --bar 'ffmpeg -i {} -vn -ar 44100 -ab 128k -map_metadata 0 {/.}.mp3' ::: $@
gnu parallel with ffmpeg does not process first file
1,495,379,997,000
In short: I tried to reinstall ffmpeg with sudo apt install ffmpeg:

Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ffmpeg : Depends: libavcodec58 (= 7:4.1.6-1~deb10u1)
          Depends: libavdevice58 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libavfilter7 (= 7:4.1.6-1~deb10u1)
          [...]
E: Unable to correct problems, you have held broken packages.

How to resolve those missing dependencies?

Full question: When I ran

ffmpeg -i "./input.mp4" -vcodec libvpx-vp9 -acodec libvorbis -b:v 9M "./output.webm"

the resulting file did not have a bitrate of 9M but a much smaller one, even though the bitrate of the input videos that I tried this with was either exactly 9M or larger. I'm interested in why this occurred, and nothing helped to solve this problem except using -crf instead. With that I could get the output file's bitrate to match the input file's bitrate by trial and error. This problem may or may not be related to the following problem. Maybe I should create a separate question for it.

Because of the problem described above I tried to reinstall ffmpeg. It looks like I had ffmpeg installed from Basil Gello's Kodi repository (the Kodi version in Debian's main repos is very outdated). At first I tried reinstalling ffmpeg from the main repositories by removing the repo and running sudo apt-get update && sudo apt-get --reinstall install ffmpeg, by which I got:

Reinstallation of ffmpeg is not possible, it cannot be downloaded.

so I added that repo again and ran sudo apt-get --reinstall install ffmpeg/buster-backports. This was the output:

Reading package lists... Done
Building dependency tree
Reading state information...
Done Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'ffmpeg' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavcodec58' because of 'ffmpeg' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavutil56' because of 'libavcodec58' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libswresample3' because of 'libavcodec58' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavdevice58' because of 'ffmpeg' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavfilter7' because of 'libavdevice58' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavformat58' because of 'libavfilter7' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libpostproc55' because of 'libavfilter7' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libswscale5' because of 'libavfilter7' Selected version '7:4.3.1-6~bpo10+1' (kodi-nightly-debian-repo:1.0/stable-backports [amd64]) for 'libavresample4' because of 'ffmpeg' The following packages were automatically installed and are no longer required: i965-va-driver:i386 intel-media-va-driver:i386 libgomp1:i386 libigdgmm5:i386 libsoxr0:i386 libva-drm2:i386 libva-x11-2:i386 libva2:i386 libvdpau-va-gl1:i386 libvdpau1:i386 mesa-va-drivers:i386 mesa-vdpau-drivers:i386 ocl-icd-libopencl1:i386 va-driver-all:i386 vdpau-driver-all:i386 Use 'sudo apt autoremove' to remove them. 
The following additional packages will be installed: libavcodec58 libavdevice58 libavfilter7 libavformat58 libavresample4 libavutil56 libpostproc55 libswresample3 libswscale5 Suggested packages: ffmpeg-doc The following packages will be REMOVED: libavcodec-dev libavfilter-dev libavformat-dev libavresample-dev libavutil-dev libavutil56:i386 libpostproc-dev libswresample-dev libswresample3:i386 libswscale-dev The following packages will be upgraded: ffmpeg libavcodec58 libavdevice58 libavfilter7 libavformat58 libavresample4 libavutil56 libpostproc55 libswresample3 libswscale5 10 upgraded, 0 newly installed, 10 to remove and 0 not upgraded. Need to get 9,752 kB of archives. After this operation, 39.0 MB disk space will be freed. Do you want to continue? [Y/n] y Get:1 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 ffmpeg amd64 7:4.3.1-6~bpo10+1 [1,584 kB] Get:2 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavdevice58 amd64 7:4.3.1-6~bpo10+1 [114 kB] Get:3 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavfilter7 amd64 7:4.3.1-6~bpo10+1 [1,281 kB] Get:4 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libswscale5 amd64 7:4.3.1-6~bpo10+1 [195 kB] Get:5 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavformat58 amd64 7:4.3.1-6~bpo10+1 [1,037 kB] Get:6 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavcodec58 amd64 7:4.3.1-6~bpo10+1 [4,942 kB] Get:7 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libswresample3 amd64 7:4.3.1-6~bpo10+1 [95.0 kB] Get:8 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libpostproc55 amd64 7:4.3.1-6~bpo10+1 [91.0 kB] Get:9 https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavresample4 amd64 7:4.3.1-6~bpo10+1 [92.0 kB] Get:10 
https://basilgello.github.io/kodi-nightly-debian-repo buster-backports/main amd64 libavutil56 amd64 7:4.3.1-6~bpo10+1 [320 kB] Fetched 9,752 kB in 8s (1,242 kB/s) Reading changelogs... Done apt-listchanges: Do you want to continue? [Y/n] y apt-listchanges: Mailing root: apt-listchanges: changelogs for hostname(Reading database ... 427402 files and directories currently installed.) Removing libavfilter-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libavformat-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libavcodec-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libavresample-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libswscale-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libswresample-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libswresample3:i386 (7:4.3.1-5.1~bpo10+1) ... Removing libavutil56:i386 (7:4.3.1-5.1~bpo10+1) ... Removing libpostproc-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... Removing libavutil-dev:amd64 (7:4.3.1-5.1~bpo10+1) ... (Reading database ... 427211 files and directories currently installed.) Preparing to unpack .../0-ffmpeg_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking ffmpeg (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../1-libavdevice58_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavdevice58:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../2-libavfilter7_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavfilter7:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../3-libswscale5_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libswscale5:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../4-libavformat58_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavformat58:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../5-libavcodec58_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavcodec58:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../6-libswresample3_7%3a4.3.1-6~bpo10+1_amd64.deb ... 
Unpacking libswresample3:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../7-libpostproc55_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libpostproc55:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../8-libavresample4_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavresample4:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Preparing to unpack .../9-libavutil56_7%3a4.3.1-6~bpo10+1_amd64.deb ... Unpacking libavutil56:amd64 (7:4.3.1-6~bpo10+1) over (7:4.3.1-5.1~bpo10+1) ... Setting up libavutil56:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libpostproc55:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libswscale5:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libswresample3:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libavresample4:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libavcodec58:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libavformat58:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libavfilter7:amd64 (7:4.3.1-6~bpo10+1) ... Setting up libavdevice58:amd64 (7:4.3.1-6~bpo10+1) ... Setting up ffmpeg (7:4.3.1-6~bpo10+1) ... Processing triggers for man-db (2.8.5-2) ... Processing triggers for libc-bin (2.28-10) ... [ Rootkit Hunter version 1.4.6 ] File updated: searched for 181 files, found 146 Scanning processes... Scanning candidates... Scanning processor microcode... Scanning linux images... Running kernel seems to be up-to-date. The processor microcode seems to be up-to-date. No services need to be restarted. No containers need to be restarted. User sessions running outdated binaries: [...] Now the ouput of sudo apt install ffmpeg always is: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. 
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ffmpeg : Depends: libavcodec58 (= 7:4.1.6-1~deb10u1)
          Depends: libavdevice58 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libavfilter7 (= 7:4.1.6-1~deb10u1)
          Depends: libavformat58 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libavresample4 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libavutil56 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libpostproc55 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libswresample3 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
          Depends: libswscale5 (= 7:4.1.6-1~deb10u1) but 7:4.3.1-6~bpo10+1 is to be installed
E: Unable to correct problems, you have held broken packages.

How to resolve those missing dependencies? I can't just reinstall those; many other core packages depend on these. I also tried running

sudo apt-get clean ; sudo apt-get update ; sudo apt-get check ; sudo apt-get purge ffmpeg* -y ; sudo apt-get autoremove -y ; sudo apt-get -f satisfy ffmpeg -y

This did not solve the problem. I'm running Debian 10 with KDE. Any help is appreciated.
It looks like someone requested libavdevice58 version 7:4.3.1-6~bpo10+1 to be installed. This clashes with Debian's vanilla ffmpeg, which depends on libavdevice58 version 7:4.1.6-1~deb10u1. You can explicitly request the old version by specifying it on the command line:

sudo apt install ffmpeg libavdevice58=7:4.1.6-1~deb10u1

As for the encoding results, you should open a second question. As the guide points out, there are a couple of parameters playing together. Your command actually puts the encoder in an average bitrate mode, not constant bitrate mode. Keep in mind that the encoder may always choose to stay below the targeted bitrate if the source material is easy enough to compress.
Can't reinstall ffmpeg due to unmet dependencies in Debian / why did ffmpeg not use the specified bitrate in video conversions?
1,495,379,997,000
I'd like to use the output of ffmpeg in order to encrypt the video with openssl. I tried to use a named pipe without success. With the commands:

mkfifo myfifo
ffmpeg -f alsa -ac 2 -i plughw:0,0 -f video4linux2 -s vga -i /dev/video0 myfifo

I get the error

[NULL @ 0x563c02ce5c00] Unable to find a suitable output format for 'myfifo'
myfifo: Invalid argument

The idea is to later encrypt the stdout of ffmpeg with

dd if=myfifo | openssl enc -des3 -out video.mp4

How can I pipe the output of ffmpeg to openssl?

PS: I know that encryption with ffmpeg is possible but I prefer to use openssl with a pipe.
ffmpeg tries to guess the video format based on the filename extension. Either "set options for output format and such" as @alex-stragies states, or use a filename extension for your fifo that ffmpeg knows about. If openssl is to be run detached, also give it the encrypting password on the command line.

When using a pipe or fifo as output, ffmpeg can't go back and forth in the output file, so the chosen format has to be something that doesn't need random access while writing. For example, if you try to create an mp4 with x264 video and aac audio (ffmpeg -c:v libx264 -c:a aac), ffmpeg will die with:

    [mp4 @ 0xc83d00] muxer does not support non seekable output.

A working fifo setup:

    ( umask 066 ; echo password >/tmp/myfilepasswd )
    mkfifo /tmp/schproutz-vid
    openssl enc -des3 -out video.enc \
        -in /tmp/schproutz-vid \
        -pass file:/tmp/myfilepasswd &
    sleep 1
    ffmpeg -f alsa -ac 2 -i plughw:0,0 \
        -f video4linux2 \
        -s vga -i /dev/video0 \
        -f ogg /tmp/schproutz-vid

Once you get this to work, you can easily remove the fifo and use a pipe between ffmpeg and openssl:

    ffmpeg -f alsa -ac 2 -i plughw:0,0 \
        -f video4linux2 \
        -s vga -i /dev/video0 \
        -f ogg - | openssl enc -des3 \
        -pass file:/tmp/myfilepasswd \
        > outputfile.enc
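To play the result back you reverse the pipe: decrypt with openssl and feed the stream to a player. The commands below are a sketch reusing the password file and output names from the answer; the round-trip check at the end needs only openssl, so you can verify the enc/dec plumbing before involving ffmpeg at all:

```shell
# Decrypt and play (ffplay reads the ogg stream from stdin):
#   openssl enc -des3 -d -pass file:/tmp/myfilepasswd -in outputfile.enc | ffplay -

# Round-trip sanity check of the openssl side alone:
( umask 066 ; echo password > /tmp/myfilepasswd )
printf 'test-payload' > /tmp/clear
openssl enc -des3 -pass file:/tmp/myfilepasswd -in /tmp/clear -out /tmp/enc
openssl enc -des3 -d -pass file:/tmp/myfilepasswd -in /tmp/enc -out /tmp/roundtrip
cmp /tmp/clear /tmp/roundtrip && echo 'round trip OK'
```

Recent openssl releases may warn here about the legacy key derivation; adding -pbkdf2 (OpenSSL 1.1.1+) to both the encrypting and decrypting command silences it, as long as both sides use it.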
How to pipe the output of ffmpeg?
1,495,379,997,000
I have a MOD video file. My LG TV doesn't support MOD. How can I convert a MOD video file, without losing quality, to a modern video format so I can watch it on different modern TVs?
A typical video file is a container for video data. The video data itself is compressed by some codec. You therefore have to distinguish between the container format and the video codec. For a correct answer you should tell us the codec of the video data in your file. You can find lots of information about your video file with:

    ffprobe yourfile.MOD

If the codec is one that is understood by your TV, you can repack your video data into another container format without re-encoding:

    ffmpeg -i yourfile.MOD -c copy out.mp4

or whatever container format you desire. On the other hand, if the codec of the MOD file is not understood, you must re-encode your video data, which necessarily incurs some loss of video quality. For that case, refer to any tutorial about transcoding.
how to convert MOD video file without losing quality to modern popular format for TV
1,495,379,997,000
When you convert and create a 320 kbps mp3 file, you may execute:

    ffmpeg -i original.wav -b:a 320K out.mp3

But why can -b:a designate the bit rate? I've read man ffmpeg and the official ffmpeg documentation, but -b:a and even -b aren't described at all, though some examples using them can be seen. Also, it seems the default bit rate for mp3 is 128 kbps, but this is not mentioned either. Can anyone confirm that the -b:a option is valid? What do b and a mean? Bit rate and audio?
FFmpeg is made up of multiple libraries, each dedicated to certain parts of the media processing pipeline, and tools, like the ffmpeg binary, which sets up the pipeline and manages its execution. The doc page you linked to relates to the ffmpeg binary. However, bitrate is an option related to encoding, and it is documented on the libavcodec page at https://ffmpeg.org/ffmpeg-codecs.html#Codec-Options

In the token -b:a, the portion before the colon identifies the option, in this case, bitrate. The string after the (first) colon is the stream specifier and is used to identify the target of the option. So, -b:a:2 sets the bitrate for the third audio stream in the output.
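As a sketch of the specifier at work, the demo below builds a one-second clip with two audio streams and then sets a different bitrate per stream. The sine test source and the native aac encoder are assumptions about the ffmpeg build (both are present in default builds), and the file name is arbitrary:

```shell
command -v ffmpeg >/dev/null || exit 0   # skip gracefully where ffmpeg is absent

ffmpeg -y -v error \
    -f lavfi -i sine=frequency=440:duration=1 \
    -f lavfi -i sine=frequency=880:duration=1 \
    -map 0:a -map 1:a -c:a aac \
    -b:a 128k \
    -b:a:0 64k \
    two_streams.m4a
# -b:a 128k  -> matches every audio stream in the output
# -b:a:0 64k -> matches only the first audio stream; being the later of
#               the two matching options, it wins for that stream
```

Running ffprobe -show_entries stream=bit_rate two_streams.m4a afterwards lets you confirm the two streams ended up with different rates.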
How to parse ffmpeg's -b:a option?
1,495,379,997,000
I've got a side project going that requires files which are just the samples from uncompressed PCM audio. My plan was to first convert everything to WAV, then use my own scripts to strip off all the segments other than the samples. It occurs to me that converting directly to CDDA ("Red Book" audio) is the same as what I'm getting but with one less step involved, and better-tested code, as long as I'm fine with forcing everything to CD-quality audio (I am). However, looking through the documentation, it seems that ffmpeg can't actually output CDDA? Which is somewhat confusing, given how many less-common things it can output and the fact that the audio would be decoded through that form in any other audio-file conversion. Am I missing some method of getting what I'm looking for, or is this an unexpected missing component in ffmpeg? Can some other common tool do it?
If you just need a raw bitstream, use:

    ffmpeg -i in.mp3 -ar 44100 -ac 2 -f s16le out.pcm

where the output has no headers or other metadata. It is a raw bitstream. The two channels are interleaved, i.e. {sample-ch1 sample-ch2 sample-ch1 ...}
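Reading the headerless file back requires re-supplying the same parameters, since the stream records nothing about itself. A sketch, using a second of generated silence as a stand-in for a real conversion (anullsrc is a lavfi test source, present in default builds):

```shell
command -v ffmpeg >/dev/null || exit 0   # skip gracefully where ffmpeg is absent

# stand-in for the conversion in the answer:
ffmpeg -y -v error -f lavfi -i anullsrc=r=44100:cl=stereo -t 1 -f s16le out.pcm

# re-wrap the raw samples into a WAV (or play them with the same flags):
ffmpeg -y -v error -f s16le -ar 44100 -ac 2 -i out.pcm out.wav
# ffplay -f s16le -ar 44100 -ac 2 out.pcm
```

A handy cross-check: one second of this stream is 44100 samples × 2 channels × 2 bytes = 176400 bytes, exactly the CDDA data rate.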
Use ffmpeg to get PCM/Red Book/CDDA without WAV headers?
1,495,379,997,000
I have a Bash script of a thousand lines each containing an ffmpeg command. I start this with source script and it runs just fine. However, when I try to take control of this script in various ways, things go completely awry: If I do Ctrl + Z to pause the whole thing, only the current ffmpeg command is paused, and the next one is started! Not what I wanted! If I do Ctrl + C to stop everything, the script jumps to the next ffmpeg command, and I have to press once for every line in the script to finally stop everything. Sheer hell. I tried using ps -ef from another shell to locate the source command to pause/kill it from there, but it does not exist in the list. So how can I pause/stop the parent script the way I wish? Or possibly, how can I execute the script in a different way to begin with that gives me the proper control over it?
Try running the script as a script instead of sourcing it:

    $ bash <scriptname>

When you source it, every ffmpeg command is executed by your interactive shell itself, so Ctrl+Z and Ctrl+C only hit the ffmpeg process currently in the foreground, and the shell simply carries on with the next line, which is exactly what you observed. Run as a child process instead, the script and whatever it is currently executing form a single job that you can suspend, resume, or kill as a unit, and it shows up in ps.
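A sketch of what running it as a child process buys you (the script name is a stand-in for your real one): the run has a single PID you can see in ps and signal from another terminal, and in the foreground Ctrl+Z / Ctrl+C act on the whole job.

```shell
# Stand-in for the real thousand-line script:
printf '%s\n' 'for i in 1 2 3; do sleep 1; done' > encode_all.sh

bash encode_all.sh &   # child process: one PID owns the whole run
pid=$!
kill -STOP "$pid"      # pause: the script will not launch anything new
kill -CONT "$pid"      # resume where it left off
kill -TERM "$pid"      # stop the run for good
```

Note that kill -STOP on the script's PID pauses only the script itself; an ffmpeg it already started keeps running until that file finishes. Suspending with Ctrl+Z in the foreground stops the whole process group, current ffmpeg included.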
Job control over a Bash script
1,495,379,997,000
I have a bash script that looks for files recursively under multiple subfolders, then runs ffmpeg on each file. While debugging, it works great if I use echo ffmpeg. The thing is, in real use, where it actually needs to run ffmpeg and wait half an hour for each video to finish, the bash script cannot successfully run ffmpeg on each file. My snippet is below:

    find * \( -name '*.mkv' -o -name '*avi' -o -name '*mp4' -o -name '*flv' -o -name '*ogg' -o -name '*mov' ! -name '*-[900p].mkv' \) -print |
    while IFS= read file    ## IFS= prevents "read" stripping whitespace
    do
        ffmpeg -i "$file" "$target" && rm -rf "$file"
    done

The thing is, because ffmpeg takes so long to respond, the while loop just keeps listing and executing. So I'm hoping to make sure the loop only moves on once each command inside it is done. Can somebody please help? Thanks!

Geez, I do not know why this was tagged as a possible duplicate of another question of mine: Converting `for file in` to `find` so that my script can apply recursively. Here I am asking for a loop that waits for each command inside it to finish before iterating to the next item. That is very far from asking how to find files recursively. Thanks!
Instead of piping find into the loop, you could go through an intermediate file. That will ensure that the finding step and the looping step happen in sequence, with no overlap. Something like the following (I altered your expression for brevity):

    find \( -name '*.mkv' -o -name '*avi' \) > files
    while IFS= read -r file
    do
        ffmpeg -nostdin -i "$file" "$target" && rm -f "$file"
    done < files

The redirection goes after done, so the whole loop reads from the file, and -nostdin keeps ffmpeg from swallowing the remaining lines of the list. Note that this still isn't completely safe; using find -exec would be simpler if your requirements didn't include running find strictly before the first call to ffmpeg. With this requirement, using find -print0 along with xargs or GNU parallel could do the job.
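For the find -print0 route, a hedged sketch (the files0 list name and the output naming are my own inventions; adjust them to your real target). NUL separators survive any file name, -nostdin keeps ffmpeg away from the list, and -r tells GNU xargs to do nothing when the list is empty:

```shell
find . \( -name '*.mkv' -o -name '*.avi' \) -print0 > files0
xargs -0 -r -n 1 sh -c 'ffmpeg -nostdin -i "$1" "${1%.*}-out.mkv" && rm -f "$1"' _ < files0
```

The `sh -c '…' _` idiom passes each file name as $1, so quoting stays intact even for names with spaces or newlines.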
replace `while` with a more suitable construct so the loop waits for each process to end
1,495,379,997,000
I have a large mp3 file; I want to split it into 480 mp3 files at different time points. I want to know if there is an easy way to do it other than splitting one by one. Can I provide the different splitting times in a text file and just execute one command in FFmpeg?
Use this script:

    #! /bin/bash
    x="00:00:00"
    z=0
    filename=$(basename -- "$2")
    ext="${filename##*.}"
    filename="${filename%.*}"
    initcmd="ffmpeg -nostdin -hide_banner -loglevel error -i $2"
    while read y
    do
        initcmd+=" -ss $x -to $y -c copy $filename$z.$ext"
        let "z=z+1"
        x=$y
    done < $1
    $initcmd

Make it executable with chmod +x, and make your time configuration file like this, one timestamp per line:

    00:02:30
    00:05:40

Remember to press enter after the last entry. And then the syntax is:

    ./splitter.sh ./time ./Big_Buck_Bunny.mp4

It will produce two files: Big_Buck_Bunny0.mp4, from 00:00:00 to 00:02:30, and Big_Buck_Bunny1.mp4, from 00:02:30 to 00:05:40. If you want the rest of the file in a final output, run this command:

    ffprobe -v error -show_entries format=duration -sexagesimal -of default=noprint_wrappers=1:nokey=1 Big_Buck_Bunny.mp4 >> ./time

(This gets the length of the file and adds it to your time config.)

Explanation:

    x="00:00:00"  sets the initial start time
    z=0           sets the output file name number
    filename=, ext=  extract the base name and extension (from a SO post)
    initcmd=      a string with the initial parameters:
                  -nostdin stops ffmpeg from eating characters meant for the loop
                  -hide_banner -loglevel error suppress ffmpeg messages
                  -i sets the input file name
    while loop    reads the time file line by line (from SO)
    initcmd+=     adds the per-segment parameters and output name:
                  -ss start time, -to end time
                  -c copy uses the same codec as the input: no quality loss, and fastest
    let "z=z+1"   increments the file name number (from an AskUbuntu post)
    x=$y          sets the new start point

Edit: Now it can read a line-separated file. Instead of repeatedly calling ffmpeg, everything is done in the last line with multiple outputs.
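Since the question mentions hundreds of cut points, the time file itself can be generated rather than typed. A sketch for evenly spaced cuts (the 30-minute step and the input.mp3 name are assumptions; the ffprobe call matches the one above, minus -sexagesimal since the loop does its own formatting):

```shell
# total duration in whole seconds (ffprobe prints e.g. 7213.509000)
total=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp3)
total=${total%.*}                  # truncate the fractional part
total=${total:-0}                  # fall back to 0 if ffprobe failed
step=$((30 * 60))                  # one cut every 30 minutes
for ((t = step; t <= total; t += step)); do
    printf '%02d:%02d:%02d\n' $((t / 3600)) $((t % 3600 / 60)) $((t % 60))
done > time
```

As in the answer, append the full duration afterwards if the remainder should become a final segment.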
Automating the splitting of a large mp3 file with FFmpeg into multiple files in different time points provided in a text file
1,495,379,997,000
I have a Debian Linux server (Debian GNU/Linux 8 (Jessie)) and want to install FFmpeg. I do the following steps:

    apt update
    apt install ffmpeg

I then get the following error:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package ffmpeg is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package 'ffmpeg' has no installation candidate

What am I doing wrong?
Add the following line to your /etc/apt/sources.list:

    deb http://archive.debian.org/debian jessie-backports main

Then run:

    sudo apt update
    sudo apt install -t jessie-backports ffmpeg
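A quick way to confirm the new source took effect before installing (a sketch; the codename check is just a sanity guard, since the jessie-backports line only makes sense on jessie):

```shell
. /etc/os-release && echo "running: $PRETTY_NAME"
# apt-cache policy ffmpeg   # the candidate should now come from jessie-backports
```

If apt update complains that the Release file has expired, which is common with archive.debian.org, the usual workaround is sudo apt -o Acquire::Check-Valid-Until=false update.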
Install FFmpeg on Debian with 'apt'
1,495,379,997,000
I want to export all frames from a lot of video files, automatically, in a jenkins build job using ffmpeg. This script runs fine when I ssh into the slave and execute it in the same folder:

    find . -name "*.mp4" -exec ffmpeg -i {} -qscale:v 1 -vf fps=6 {}_exportedFrame_%d.jpg \;

It should find all mp4 files and run ffmpeg on them. It fails with this message when jenkins runs it (execute shell plugin):

    08:51:32 find: ffmpeg: No such file or directory
    08:51:32 find: ffmpeg: No such file or directory
    08:51:32 find: ffmpeg: No such file or directory
    08:51:32 find: ffmpeg: No such file or directory
    ...many more lines of the same error

Output from terminal (it's running fine):

    bash-3.2$ find . -name "*.mp4" -exec ffmpeg -i {} -qscale:v 1 -vf fps=6 {}_exportedFrame_%d.jpg \;
    ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
      built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
      configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gpl --enable-libmp3lame --enable-libopus --enable-libsnappy --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-opencl --enable-videotoolbox
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      ...

The build slave is running the latest version of macOS. ffmpeg is installed.

edit: I've added ffmpeg to the paths file:

    bash-3.2$ cat /etc/paths
    /usr/local/bin
    /usr/bin
    /bin
    /usr/sbin
    /sbin
    /usr/local/bin/ffmpeg

    bash-3.2$ type ffmpeg
    ffmpeg is /usr/local/bin/ffmpeg

I'm still getting the same error.
Probably ffmpeg is not in the PATH of the jenkins job. Run type ffmpeg in your terminal to see where ffmpeg is located, and echo $PATH in your jenkins job, then compare. Note that /etc/paths is a list of directories, not binaries, so the /usr/local/bin/ffmpeg entry does nothing; /usr/local/bin already covers it. Also, a jenkins agent started as a daemon typically doesn't go through the login-shell machinery that reads /etc/paths at all.
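Two common fixes, sketched below (the /usr/local/bin location is an assumption taken from the type ffmpeg output in the question): extend PATH at the top of the jenkins "execute shell" step, or bypass PATH entirely with an absolute path.

```shell
# Option 1: prepend the directory that `type ffmpeg` reported:
export PATH="/usr/local/bin:$PATH"
find . -name '*.mp4' -exec ffmpeg -i {} -qscale:v 1 -vf fps=6 {}_exportedFrame_%d.jpg \;

# Option 2: call the binary by absolute path so PATH does not matter:
find . -name '*.mp4' -exec /usr/local/bin/ffmpeg -i {} -qscale:v 1 -vf fps=6 {}_exportedFrame_%d.jpg \;
```

Either way, an echo $PATH line at the top of the build step makes the difference between the ssh session and the jenkins environment visible in the console log.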
find command fails in jenkins, but not in terminal