| command | description | name | synopsis | options | examples |
|---|---|---|---|---|---|
| dvd2concat | null | null | null | null | null |
| rsaperf | null | null | null | null | null |
autoconf
|
Generate a configuration script from a TEMPLATE-FILE if given, or 'configure.ac' if present, or else 'configure.in'. Output is sent to the standard output if TEMPLATE-FILE is given, else into 'configure'. Operation modes: -h, --help print this help, then exit -V, --version print version number, then exit -v, --verbose verbosely report processing -d, --debug don't remove temporary files -f, --force consider all files obsolete -o, --output=FILE save output in FILE (stdout is the default) -W, --warnings=CATEGORY report the warnings falling in CATEGORY (comma-separated list accepted) Warning categories are: cross cross compilation issues gnu GNU coding standards (default in gnu and gnits modes) obsolete obsolete features or constructions (default) override user redefinitions of Automake rules or variables portability portability issues (default in gnu and gnits modes) portability-recursive nested Make variables (default with -Wportability) extra-portability extra portability issues related to obscure tools syntax dubious syntactic constructs (default) unsupported unsupported or incomplete features (default) -W also understands: all turn on all the warnings none turn off all the warnings no-CATEGORY turn off warnings in CATEGORY error treat all enabled warnings as errors Library directories: -B, --prepend-include=DIR prepend directory DIR to search path -I, --include=DIR append directory DIR to search path Tracing: -t, --trace=MACRO[:FORMAT] report the list of calls to MACRO -i, --initialization also trace Autoconf's initialization process In tracing mode, no configuration script is created. FORMAT defaults to '$f:$l:$n:$%'; see 'autom4te --help' for information about FORMAT. AUTHOR Written by David J. MacKenzie and Akim Demaille. REPORTING BUGS Report bugs to <bug-autoconf@gnu.org>, or via Savannah: <https://savannah.gnu.org/support/?group=autoconf>. COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. 
License GPLv3+/Autoconf: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>, <https://gnu.org/licenses/exceptions.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO autoconf(1), automake(1), autoreconf(1), autoupdate(1), autoheader(1), autoscan(1), config.guess(1), config.sub(1), ifnames(1), libtool(1). The full documentation for Autoconf is maintained as a Texinfo manual. To read the manual locally, use the command info autoconf You can also consult the Web version of the manual at <https://gnu.org/software/autoconf/manual/>. GNU Autoconf 2.72 December 2023 AUTOCONF(1)
|
autoconf - Generate configuration scripts
|
autoconf [OPTION]... [TEMPLATE-FILE]
| null | null |
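The autoconf entry above can be exercised with a minimal project. This is a hedged sketch: the project name `demo` and the /tmp paths are invented for illustration, and the autoconf step only runs if the tool happens to be installed.

```shell
demo=/tmp/autoconf_demo
mkdir -p "$demo" && cd "$demo"
if command -v autoconf >/dev/null 2>&1; then
    # Minimal configure.ac using the standard AC_INIT / AC_OUTPUT macros.
    printf 'AC_INIT([demo], [0.1])\nAC_OUTPUT\n' > configure.ac
    # -o saves output in FILE instead of the default 'configure'.
    autoconf -o configure configure.ac
fi
touch "$demo/.ran"   # marker: the sketch completed
```

With a TEMPLATE-FILE argument and no -o, output would go to standard output instead, as the description above notes.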
| grpc_php_plugin | null | null | null | null | null |
sndfile-deinterleave
|
sndfile-interleave creates a multi-channel file taking audio data from two or more mono files as individual channels. The format of the output file is determined by its filename suffix. The audio parameters of the output file will be made so that the format can accommodate each of the mono inputs; for example, the samplerate will be the maximal samplerate occurring in the inputs. The output file will be overwritten if it already exists. sndfile-deinterleave creates two or more mono files from a multi-channel audio file, containing data from the individual channels. The names of the resulting mono files are of the form “name_XY.suf” where name and suf are the basename and suffix of the original file. If any file of such name already exists, it will be overwritten. Apart from the number of channels, the audio format of the resulting mono files is the same as that of the original file. EXIT STATUS The sndfile-interleave utility exits 0 on success, and >0 if an error occurs.
|
sndfile-interleave, sndfile-deinterleave – convert mono files into a multi-channel file and vice versa
|
sndfile-interleave input1 input2 ... -o output sndfile-deinterleave file
| null |
Merge a mono OGG file and a mono FLAC file into a stereo WAV file: $ sndfile-interleave left.ogg right.flac -o stereo.wav Split a multi-channel file into individual mono files: $ sndfile-deinterleave multi.wav Input file : multi Output files : multi_00.wav multi_01.wav multi_02.wav multi_03.wav SEE ALSO http://libsndfile.github.io/libsndfile/ AUTHORS Erik de Castro Lopo <erikd@mega-nerd.com> macOS 14.5 November 2, 2014
|
lz4c
|
lz4 is an extremely fast lossless compression algorithm, based on the byte-aligned LZ77 family of compression schemes. lz4 offers compression speeds > 500 MB/s per core, linearly scalable with multi-core CPUs. It features an extremely fast decoder, offering speed in multiple GB/s per core, typically reaching RAM speed limit on multi-core systems. The native file format is the .lz4 format. Difference between lz4 and gzip lz4 supports a command line syntax similar but not identical to gzip(1). Differences are: • lz4 compresses a single file by default (see -m for multiple files) • lz4 file1 file2 means: compress file1 into file2 • lz4 file.lz4 will default to decompression (use -z to force compression) • lz4 preserves original files (see --rm to erase source file on completion) • lz4 shows real-time notification statistics during compression or decompression of a single file (use -q to silence them) • When no destination is specified, result is sent on implicit output, which depends on stdout status. When stdout is not the console, it becomes the implicit output. Otherwise, if stdout is the console, the implicit output is filename.lz4. • It is considered bad practice to rely on implicit output in scripts, because the script's environment may change. Always use explicit output in scripts. -c ensures that output will be stdout. Conversely, providing a destination name, or using -m ensures that the output will be either the specified name, or filename.lz4 respectively. Default behaviors can be modified by opt-in commands, detailed below. • lz4 -m makes it possible to provide multiple input filenames, which will be compressed into files using suffix .lz4. Progress notifications become disabled by default (use -v to enable them). This mode has a behavior which more closely mimics gzip command line, with the main remaining difference being that source files are preserved by default. • Similarly, lz4 -m -d can decompress multiple *.lz4 files.
• It's possible to opt-in to erase source files on successful compression or decompression, using the --rm command. • Consequently, lz4 -m --rm behaves the same as gzip. Concatenation of .lz4 files It is possible to concatenate .lz4 files as is. lz4 will decompress such files as if they were a single .lz4 file. For example: lz4 file1 > foo.lz4 lz4 file2 >> foo.lz4 Then lz4cat foo.lz4 is equivalent to cat file1 file2.
|
lz4, unlz4, lz4cat - Compress or decompress .lz4 files
|
lz4 [OPTIONS] [-|INPUT-FILE] OUTPUT-FILE unlz4 is equivalent to lz4 -d lz4cat is equivalent to lz4 -dcfm When writing scripts that need to decompress files, it is recommended to always use the name lz4 with appropriate arguments (lz4 -d or lz4 -dc) instead of the names unlz4 and lz4cat.
|
Short commands concatenation In some cases, some options can be expressed using short command -x or long command --long-word. Short commands can be concatenated together. For example, -d -c is equivalent to -dc. Long commands cannot be concatenated. They must be clearly separated by a space. Multiple commands When multiple contradictory commands are issued on the same command line, only the latest one will be applied. Operation mode -z --compress Compress. This is the default operation mode when no operation mode option is specified, no other operation mode is implied from the command name (for example, unlz4 implies --decompress), nor from the input file name (for example, a file extension .lz4 implies --decompress by default). -z can also be used to force compression of an already compressed .lz4 file. -d --decompress --uncompress Decompress. --decompress is also the default operation when the input filename has an .lz4 extension. -t --test Test the integrity of compressed .lz4 files. The decompressed data is discarded. No files are created nor removed. -b# Benchmark mode, using # compression level. --list List information about .lz4 files. Note: the current implementation is limited to single-frame .lz4 files. Operation modifiers -# Compression level, with # being any value from 1 to 12. Higher values trade compression speed for compression ratio. Values above 12 are considered the same as 12. Recommended values are 1 for fast compression (default), and 9 for high compression. Speed/compression trade-off will vary depending on data to compress. Decompression speed remains fast at all settings. --fast[=#] Switch to ultra-fast compression levels. The higher the value, the faster the compression speed, at the cost of some compression ratio. If =# is not present, it defaults to 1. This setting overrides compression level if one was set previously. Similarly, if a compression level is set after --fast, it overrides it. --best Set highest compression level. Same as -12.
--favor-decSpeed Generate compressed data optimized for decompression speed. Compressed data will be larger as a consequence (typically by ~0.5%), while decompression speed will be improved by 5-20%, depending on use cases. This option only works in combination with very high compression levels (>=10). -D dictionaryName Compress, decompress or benchmark using dictionary dictionaryName. Compression and decompression must use the same dictionary to be compatible. Using a different dictionary during decompression will either abort due to decompression error, or generate a checksum error. -f --[no-]force This option has several effects: If the target file already exists, overwrite it without prompting. When used with --decompress and lz4 cannot recognize the type of the source file, copy the source file as is to standard output. This allows lz4cat --force to be used like cat(1) for files that have not been compressed with lz4. -c --stdout --to-stdout Force write to standard output, even if it is the console. -m --multiple Multiple input files. Compressed file names will have a .lz4 suffix appended. This mode also reduces notification level. Can also be used to list multiple files. lz4 -m has a behavior equivalent to gzip -k (it preserves source files by default). -r Operate recursively on directories. This mode also sets -m (multiple input files). -B# Block size [4-7] (default: 7) -B4= 64KB ; -B5= 256KB ; -B6= 1MB ; -B7= 4MB -BI Produce independent blocks (default) -BD Blocks depend on predecessors (improves compression ratio, more noticeable on small blocks) -BX Generate block checksums (default: disabled) --[no-]frame-crc Select frame checksum (default: enabled) --no-crc Disable both frame and block checksums --[no-]content-size Header includes original size (default: not present) Note: this option can only be activated when the original size can be determined, hence for a file. It won't work with unknown source size, such as stdin or pipe.
--[no-]sparse Sparse mode support (default:enabled on file, disabled on stdout) -l Use Legacy format (typically for Linux Kernel compression) Note : -l is not compatible with -m (--multiple) nor -r Other options -v --verbose Verbose mode -q --quiet Suppress warnings and real-time statistics; specify twice to suppress errors too -h -H --help Display help/long help and exit -V --version Display Version number and exit -k --keep Preserve source files (default behavior) --rm Delete source files on successful compression or decompression -- Treat all subsequent arguments as files Benchmark mode -b# Benchmark file(s), using # compression level -e# Benchmark multiple compression levels, from b# to e# (included) -i# Minimum evaluation time in seconds [1-9] (default : 3) BUGS Report bugs at: https://github.com/lz4/lz4/issues AUTHOR Yann Collet lz4 v1.9.4 August 2022 LZ4(1)
| null |
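The frame-concatenation behaviour described in this entry can be checked with a short script. A hedged sketch: the file names are invented, and the lz4 steps are skipped when the tool is not installed.

```shell
demo=/tmp/lz4_demo
mkdir -p "$demo" && cd "$demo"
printf 'hello\n' > file1
printf 'world\n' > file2
if command -v lz4 >/dev/null 2>&1; then
    lz4 -q file1 >  foo.lz4        # first .lz4 frame
    lz4 -q file2 >> foo.lz4        # second frame, appended as-is
    lz4 -q -d -c foo.lz4 > joined  # decodes both frames in sequence
    cat file1 file2 > expected
    cmp -s joined expected && touch "$demo/.ok"
else
    touch "$demo/.ok"              # lz4 not available; nothing to check
fi
```

Because stdout is redirected in each step, lz4's implicit output is stdout, which is what makes the `>>` append idiom work.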
| pip3.10 | null | null | null | null | null |
view
|
Vim is a text editor that is upwards compatible to Vi. It can be used to edit all kinds of plain text. It is especially useful for editing programs. There are a lot of enhancements above Vi: multi level undo, multi windows and buffers, syntax highlighting, command line editing, filename completion, on-line help, visual selection, etc.. See ":help vi_diff.txt" for a summary of the differences between Vim and Vi. While running Vim a lot of help can be obtained from the on-line help system, with the ":help" command. See the ON-LINE HELP section below. Most often Vim is started to edit a single file with the command vim file More generally Vim is started with: vim [options] [filelist] If the filelist is missing, the editor will start with an empty buffer. Otherwise exactly one out of the following four may be used to choose one or more files to be edited. file .. A list of filenames. The first one will be the current file and read into the buffer. The cursor will be positioned on the first line of the buffer. You can get to the other files with the ":next" command. To edit a file that starts with a dash, precede the filelist with "--". - The file to edit is read from stdin. Commands are read from stderr, which should be a tty. -t {tag} The file to edit and the initial cursor position depends on a "tag", a sort of goto label. {tag} is looked up in the tags file, the associated file becomes the current file and the associated command is executed. Mostly this is used for C programs, in which case {tag} could be a function name. The effect is that the file containing that function becomes the current file and the cursor is positioned on the start of the function. See ":help tag-commands". -q [errorfile] Start in quickFix mode. The file [errorfile] is read and the first error is displayed. If [errorfile] is omitted, the filename is obtained from the 'errorfile' option (defaults to "AztecC.Err" for the Amiga, "errors.err" on other systems). 
Further errors can be jumped to with the ":cn" command. See ":help quickfix". Vim behaves differently, depending on the name of the command (the executable may still be the same file). vim The "normal" way, everything is default. ex Start in Ex mode. Go to Normal mode with the ":vi" command. Can also be done with the "-e" argument. view Start in read-only mode. You will be protected from writing the files. Can also be done with the "-R" argument. gvim gview The GUI version. Starts a new window. Can also be done with the "-g" argument. evim eview The GUI version in easy mode. Starts a new window. Can also be done with the "-y" argument. rvim rview rgvim rgview Like the above, but with restrictions. It will not be possible to start shell commands, or suspend Vim. Can also be done with the "-Z" argument.
|
vim - Vi IMproved, a programmer's text editor
|
vim [options] [file ..] vim [options] - vim [options] -t tag vim [options] -q [errorfile] ex view gvim gview evim eview rvim rview rgvim rgview
|
The options may be given in any order, before or after filenames. Options without an argument can be combined after a single dash. +[num] For the first file the cursor will be positioned on line "num". If "num" is missing, the cursor will be positioned on the last line. +/{pat} For the first file the cursor will be positioned in the line with the first occurrence of {pat}. See ":help search-pattern" for the available search patterns. +{command} -c {command} {command} will be executed after the first file has been read. {command} is interpreted as an Ex command. If the {command} contains spaces it must be enclosed in double quotes (this depends on the shell that is used). Example: vim "+set si" main.c Note: You can use up to 10 "+" or "-c" commands. -S {file} {file} will be sourced after the first file has been read. This is equivalent to -c "source {file}". {file} cannot start with '-'. If {file} is omitted "Session.vim" is used (only works when -S is the last argument). --cmd {command} Like using "-c", but the command is executed just before processing any vimrc file. You can use up to 10 of these commands, independently from "-c" commands. -A If Vim has been compiled with ARABIC support for editing right-to-left oriented files and Arabic keyboard mapping, this option starts Vim in Arabic mode, i.e. 'arabic' is set. Otherwise an error message is given and Vim aborts. -b Binary mode. A few options will be set that make it possible to edit a binary or executable file. -C Compatible. Set the 'compatible' option. This will make Vim behave mostly like Vi, even though a .vimrc file exists. -d Start in diff mode. There should be two to eight file name arguments. Vim will open all the files and show differences between them. Works like vimdiff(1). -d {device}, -dev {device} Open {device} for use as a terminal. Only on the Amiga. Example: "-d con:20/30/600/150". -D Debugging. Go to debugging mode when executing the first command from a script.
-e Start Vim in Ex mode, just like the executable was called "ex". -E Start Vim in improved Ex mode, just like the executable was called "exim". -f Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. On the Amiga, Vim is not restarted to open a new window. This option should be used when Vim is executed by a program that will wait for the edit session to finish (e.g. mail). On the Amiga the ":sh" and ":!" commands will not work. --nofork Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. -F If Vim has been compiled with FKMAP support for editing right-to-left oriented files and Farsi keyboard mapping, this option starts Vim in Farsi mode, i.e. 'fkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -g If Vim has been compiled with GUI support, this option enables the GUI. If no GUI support was compiled in, an error message is given and Vim aborts. --gui-dialog-file {name} When using the GUI, instead of showing a dialog, write the title and message of the dialog to file {name}. The file is created or appended to. Only useful for testing, to avoid that the test gets stuck on a dialog that can't be seen. Without the GUI the argument is ignored. --help, -h, -? Give a bit of help about the command line arguments and options. After this Vim exits. -H If Vim has been compiled with RIGHTLEFT support for editing right-to-left oriented files and Hebrew keyboard mapping, this option starts Vim in Hebrew mode, i.e. 'hkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -i {viminfo} Specifies the filename to use when reading or writing the viminfo file, instead of the default "~/.viminfo". This can also be used to skip the use of the .viminfo file, by giving the name "NONE". -L Same as -r. -l Lisp mode. Sets the 'lisp' and 'showmatch' options on. -m Modifying files is disabled. Resets the 'write' option. 
You can still modify the buffer, but writing a file is not possible. -M Modifications not allowed. The 'modifiable' and 'write' options will be unset, so that changes are not allowed and files can not be written. Note that these options can be set to enable making modifications. -N No-compatible mode. Resets the 'compatible' option. This will make Vim behave a bit better, but less Vi compatible, even though a .vimrc file does not exist. -n No swap file will be used. Recovery after a crash will be impossible. Handy if you want to edit a file on a very slow medium (e.g. floppy). Can also be done with ":set uc=0". Can be undone with ":set uc=200". -nb Become an editor server for NetBeans. See the docs for details. -o[N] Open N windows stacked. When N is omitted, open one window for each file. -O[N] Open N windows side by side. When N is omitted, open one window for each file. -p[N] Open N tab pages. When N is omitted, open one tab page for each file. -P {parent-title} Win32 GUI only: Specify the title of the parent application. When possible, Vim will run in an MDI window inside the application. {parent-title} must appear in the window title of the parent application. Make sure that it is specific enough. Note that the implementation is still primitive. It won't work with all applications and the menu doesn't work. -R Read-only mode. The 'readonly' option will be set. You can still edit the buffer, but will be prevented from accidentally overwriting a file. If you do want to overwrite a file, add an exclamation mark to the Ex command, as in ":w!". The -R option also implies the -n option (see above). The 'readonly' option can be reset with ":set noro". See ":help 'readonly'". -r List swap files, with information about using them for recovery. -r {file} Recovery mode. The swap file is used to recover a crashed editing session. The swap file is a file with the same filename as the text file with ".swp" appended. See ":help recovery". -s Silent mode. 
Only when started as "Ex" or when the "-e" option was given before the "-s" option. -s {scriptin} The script file {scriptin} is read. The characters in the file are interpreted as if you had typed them. The same can be done with the command ":source! {scriptin}". If the end of the file is reached before the editor exits, further characters are read from the keyboard. -T {terminal} Tells Vim the name of the terminal you are using. Only required when the automatic way doesn't work. Should be a terminal known to Vim (builtin) or defined in the termcap or terminfo file. --not-a-term Tells Vim that the user knows that the input and/or output is not connected to a terminal. This will avoid the warning and the two second delay that would happen. --ttyfail When stdin or stdout is not a terminal (tty) then exit right away. -u {vimrc} Use the commands in the file {vimrc} for initializations. All the other initializations are skipped. Use this to edit a special kind of files. It can also be used to skip all initializations by giving the name "NONE". See ":help initialization" within vim for more details. -U {gvimrc} Use the commands in the file {gvimrc} for GUI initializations. All the other GUI initializations are skipped. It can also be used to skip all GUI initializations by giving the name "NONE". See ":help gui-init" within vim for more details. -V[N] Verbose. Give messages about which files are sourced and for reading and writing a viminfo file. The optional number N is the value for 'verbose'. Default is 10. -V[N](unknown) Like -V and set 'verbosefile' to (unknown). The result is that messages are not displayed but written to the file (unknown). (unknown) must not start with a digit. --log (unknown) If Vim has been compiled with eval and channel feature, start logging and write entries to (unknown). This works like calling ch_logfile((unknown), 'ao') very early during startup. -v Start Vim in Vi mode, just like the executable was called "vi".
This only has effect when the executable is called "ex". -w{number} Set the 'window' option to {number}. -w {scriptout} All the characters that you type are recorded in the file {scriptout}, until you exit Vim. This is useful if you want to create a script file to be used with "vim -s" or ":source!". If the {scriptout} file exists, characters are appended. -W {scriptout} Like -w, but an existing file is overwritten. -x Use encryption when writing files. Will prompt for a crypt key. -X Don't connect to the X server. Shortens startup time in a terminal, but the window title and clipboard will not be used. -y Start Vim in easy mode, just like the executable was called "evim" or "eview". Makes Vim behave like a click-and-type editor. -Z Restricted mode. Works like the executable starts with "r". -- Denotes the end of the options. Arguments after this will be handled as a file name. This can be used to edit a filename that starts with a '-'. --clean Do not use any personal configuration (vimrc, plugins, etc.). Useful to see if a problem reproduces with a clean Vim setup. --echo-wid GTK GUI only: Echo the Window ID on stdout. --literal Take file name arguments literally, do not expand wildcards. This has no effect on Unix where the shell expands wildcards. --noplugin Skip loading plugins. Implied by -u NONE. --remote Connect to a Vim server and make it edit the files given in the rest of the arguments. If no server is found a warning is given and the files are edited in the current Vim. --remote-expr {expr} Connect to a Vim server, evaluate {expr} in it and print the result on stdout. --remote-send {keys} Connect to a Vim server and send {keys} to it. --remote-silent As --remote, but without the warning when no server is found. --remote-wait As --remote, but Vim does not exit until the files have been edited. --remote-wait-silent As --remote-wait, but without the warning when no server is found. --serverlist List the names of all Vim servers that can be found. 
--servername {name} Use {name} as the server name. Used for the current Vim, unless used with a --remote argument, then it's the name of the server to connect to. --socketid {id} GTK GUI only: Use the GtkPlug mechanism to run gvim in another window. --startuptime {fname} During startup write timing messages to the file {fname}. --version Print version information and exit. --windowid {id} Win32 GUI only: Make gvim try to use the window {id} as a parent, so that it runs inside that window. ON-LINE HELP Type ":help" in Vim to get started. Type ":help subject" to get help on a specific subject. For example: ":help ZZ" to get help for the "ZZ" command. Use <Tab> and CTRL-D to complete subjects (":help cmdline-completion"). Tags are present to jump from one place to another (sort of hypertext links, see ":help"). All documentation files can be viewed in this way, for example ":help syntax.txt". FILES /usr/local/share/vim/vim??/doc/*.txt The Vim documentation files. Use ":help doc-file-list" to get the complete list. vim?? is short version number, like vim91 for Vim 9.1 /usr/local/share/vim/vim??/doc/tags The tags file used for finding information in the documentation files. /usr/local/share/vim/vim??/syntax/syntax.vim System wide syntax initializations. /usr/local/share/vim/vim??/syntax/*.vim Syntax files for various languages. /usr/local/share/vim/vimrc System wide Vim initializations. ~/.vimrc, ~/.vim/vimrc, $XDG_CONFIG_HOME/vim/vimrc Your personal Vim initializations (first one found is used). /usr/local/share/vim/gvimrc System wide gvim initializations. ~/.gvimrc, ~/.vim/gvimrc, $XDG_CONFIG_HOME/vim/gvimrc Your personal gvim initializations (first one found is used). /usr/local/share/vim/vim??/optwin.vim Script used for the ":options" command, a nice way to view and set options. /usr/local/share/vim/vim??/menu.vim System wide menu initializations for gvim. /usr/local/share/vim/vim??/bugreport.vim Script to generate a bug report. See ":help bugs".
/usr/local/share/vim/vim??/filetype.vim Script to detect the type of a file by its name. See ":help 'filetype'". /usr/local/share/vim/vim??/scripts.vim Script to detect the type of a file by its contents. See ":help 'filetype'". /usr/local/share/vim/vim??/print/*.ps Files used for PostScript printing. For recent info read the VIM home page: <URL:http://www.vim.org/> SEE ALSO vimtutor(1) AUTHOR Most of Vim was made by Bram Moolenaar, with a lot of help from others. See ":help credits" in Vim. Vim is based on Stevie, worked on by: Tim Thompson, Tony Andrews and G.R. (Fred) Walter. Although hardly any of the original code remains. BUGS Probably. See ":help todo" for a list of known problems. Note that a number of things that may be regarded as bugs by some, are in fact caused by a too-faithful reproduction of Vi's behaviour. And if you think other things are bugs "because Vi does it differently", you should take a closer look at the vi_diff.txt file (or type :help vi_diff.txt when in Vim). Also have a look at the 'compatible' and 'cpoptions' options. 2024 Jun 04 VIM(1)
| null |
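The -e and -s options described in this entry combine into a non-interactive batch mode. A hedged sketch, assuming vim is installed (the step is skipped otherwise); the file name and substitution are invented:

```shell
demo=/tmp/vim_demo
mkdir -p "$demo" && cd "$demo"
printf 'hello world\n' > note.txt
if command -v vim >/dev/null 2>&1; then
    # -e -s: silent Ex mode; -u NONE: skip initializations; -c: Ex commands.
    vim -e -s -u NONE -c '%s/world/vim/' -c 'wq' note.txt < /dev/null
    grep -q 'hello vim' note.txt && touch "$demo/.ok"
else
    touch "$demo/.ok"            # vim not available; nothing to check
fi
```

Per the man text, -s is silent mode only because -e precedes it; the same edit could be scripted via ex directly.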
| aarch64-apple-darwin23-gfortran-13 | null | null | null | null | null |
| k6 | null | null | null | null | null |
gifbuild
| null |
gifbuild - dump GIF data in a textual format, or undump it to a GIF
|
gifbuild [-v] [-a] [-d] [-t translation-table] [-h] [gif-file]
|
A program to convert a series of editable text GIF icon specifications and named GIF files into a multi-image GIF, usable as a graphic resource file. It can also dump existing GIFs in this format. When dumping a GIF, certain sanity checks are performed which may result in a warning emitted to standard error. If no GIF file is given, gifbuild will try to read a text input from stdin. SPECIFICATION SYNTAX Here is a syntax summary in informal BNF. The token `NL' represents a required newline. <gif-spec> ::= <header-block> <image-block>... <header-block> ::= <header-declaration>... <header-declaration> ::= | screen width <digits> NL | screen height <digits> NL | screen colors <digits> NL | screen background <digits> NL | pixel aspect byte <digits> NL | screen map <color-table> NL <color-table> ::= <color-declaration>... end NL <color-declaration> ::= rgb <digits> <digits> <digits> [ is <key>] NL | sort flag {on|off} NL <image-block> ::= include <file-name> NL | image NL <image-declaration>... <raster-picture> [ <extension> ] <image-declarations> ::= image top <digits> NL | image left <digits> NL | image interlaced NL | image map <color-table> NL | image bits <digits> by <digits> [hex|ascii] NL <raster-block> <extension> ::= <comment> NL <extension-block> NL end NL | <plaintext> NL <extension-block> NL end NL | graphics control NL <GCB-part> NL end NL | netscape loop <digits> NL | extension <hex-digits> NL <extension-block> NL end NL <GCB-part> ::= disposal mode <digits> NL | user input flag {on|off} NL | delay <digits> NL | transparent index <digits> NL If the data types of the “screen height”, “screen width”, “screen background”, “image top”, and “image left” declarations aren't obvious to you, what are you doing with this software? The “pixel aspect byte” declaration sets an integer denominator for a fraction expressing the pixel aspect ratio. See the GIF standard for details; this field is actually long obsolete.
A color table declares color indices (in ascending order from 0) and may associate them with key characters (these associations are absent when the map is more than 94 colors long and raster blocks using it must use hex pairs). These characters can later be used in raster blocks. As these must be printable and non-whitespace, you can only specify 94 colors per icon. Life is like that sometimes. A color table declaration can also set the table's sort flag with "sort flag on" or "sort flag off" on any line before the end. An “ascii” raster block is just a block of key characters (used for a color map of 94 or fewer colors). A “hex” raster block uses hex digit pairs instead (used for a color map with more than 94 colors). The default is ASCII. It should be sized correctly for the “image bits” declaration that leads it. Raster blocks from interlaced GIFs are dumped with the lines in non-interlaced order. The “comment”, “plaintext” or “graphics control” keywords lead defined GIF89 extension record data. The final GIF89 type, graphics control and application block, are not yet supported, but the code does recognize a Netscape loop block. You can also say “extension” followed by a hexadecimal record type. All of these extension declarations must be followed by an extension block, which is terminated by the keyword “end” on its own line. An extension block is a series of text lines, each interpreted as a string of bytes to fill an extension block (the terminating newline is stripped). Text may include standard C-style octal and hex escapes preceded by a backslash. A graphics control block declaration creates a graphics control extension block; for the field semantics see the GIF89 standard, part 23. A netscape loop declaration creates an application extension block containing a NETSCAPE 2.0 animation loop control with a specified repeat count (repeat count 0 means loop forever). This must be immediately after the header declaration.
These loop blocks are interpreted by the Netscape/Mozilla/Firefox line of browsers. All <digits> tokens are interpreted as decimal numerals; <hex-digits> tokens are interpreted as two hex digits (a byte). All coordinates are zero-origin with the top left corner at (0,0). Range checking is weak and signedness checking nonexistent; caveat hacker! In general, the amount of whitespace and the order of declarations within a header or image block is not significant, except that a raster picture must immediately follow its “image bits” declaration. The “include” declaration includes a named GIF as the next image. The global color maps of included GIFs are merged with the base table defined by any “screen colors” declaration. All images of an included multi-image GIF will be included in order. Comments (preceded with “#”) will be ignored. -v Verbose mode (show progress). Enables printout of running scan lines. -d Dump the input GIF file(s) into the text form described above. -t Specify name characters to use when dumping raster blocks. Only valid with the -d option. -h Print one line of command line help, similar to Usage above. BUGS Error checking is rudimentary. EXAMPLE A sample icon file called sample.ico is included in the pic directory of the GIFLIB source distribution. AUTHOR Eric S. Raymond <esr@thyrsus.com> GIFLIB 2 May 2012 GIFBUILD(1)
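A minimal specification in the syntax above might look like the following (a hypothetical 4x4 two-color icon; the keys "." and "O" are arbitrary printable characters):

```
screen width 4
screen height 4
screen colors 2
screen map
	rgb 0 0 0 is .
	rgb 255 255 255 is O
end
image
	image top 0
	image left 0
	image bits 4 by 4
.OO.
O..O
O..O
.OO.
```

Feeding this text to gifbuild on standard input would produce a single-image GIF; dumping that GIF with -d should reproduce an equivalent specification.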
| null |
aead_demo
| null | null | null | null | null |
aviocat
| null | null | null | null | null |
gdircolors
|
Output commands to set the LS_COLORS environment variable. Determine format of output: -b, --sh, --bourne-shell output Bourne shell code to set LS_COLORS -c, --csh, --c-shell output C shell code to set LS_COLORS -p, --print-database output defaults --print-ls-colors output fully escaped colors for display --help display this help and exit --version output version information and exit If FILE is specified, read it to determine which colors to use for which file types and extensions. Otherwise, a precompiled database is used. For details on the format of these files, run 'dircolors --print-database'. AUTHOR Written by H. Peter Anvin. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/dircolors> or available locally via: info '(coreutils) dircolors invocation' GNU coreutils 9.3 April 2023 DIRCOLORS(1)
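A typical use from a shell startup file is to evaluate the emitted Bourne-shell code, which exports LS_COLORS into the current shell (a sketch; on some systems the GNU tool is installed as gdircolors rather than dircolors):

```shell
# Emit Bourne-shell code that sets LS_COLORS, and evaluate it in this shell
eval "$(dircolors -b)"

# The variable is now populated from the precompiled database
printf '%.40s\n' "$LS_COLORS"
```

The -c variant emits C-shell syntax instead, for use in csh/tcsh startup files.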
|
dircolors - color setup for ls
|
dircolors [OPTION]... [FILE]
| null | null |
aspell
|
aspell is a utility program that connects to the Aspell library so that it can function as an ispell -a replacement, as an independent spell checker, as a test utility to test out Aspell library features, and as a utility for managing dictionaries used by the library. The Aspell library contains an interface allowing other programs direct access to its functions and therefore reducing the complex task of spell checking to simple library calls. The default library does not contain dictionary word lists. To add language dictionaries, please check your distro first for modified dictionaries, otherwise look here for base language dictionaries <http://aspell.net>. The following information describes the commands and options used by the Aspell Utility. This manual page is maintained separately from the official documentation so it may be out of date or incomplete. The official documentation is maintained as a Texinfo manual. See the `aspell' entry in info for more complete documentation. COMMANDS <command> is one of: usage, -? Send a brief Aspell Utility usage message to standard output. This is a short summary listing more common spell-check commands and options. help Send a detailed Aspell Utility help message to standard output. This is a complete list showing all commands, options, filters and dictionaries. version, -v Print version number of Aspell Library and Utility to standard output. check <file>, -c <file> Spell-check a single file. pipe, -a Run Aspell in ispell -a compatibility mode. list Produce a list of misspelled words from standard input. [dump] config Dump all current configuration options to standard output. config <key> Send the current value of <key> to standard output. soundslike Output the soundslike equivalent of each word entered. munch Generate possible root words and affixes from an input list of words. expand [1–4] Expands the affix flags of each affix compressed word entered. 
clean [strict] Cleans an input word list so that every line is a valid word. munch-list [simple] [single|multi] [keep] Reduce the size of a word list via affix compression. conv <from> <to> [<norm-form>] Converts <from> one encoding <to> another. norm (<norm-map>|<from> <norm-map> <to>) [<norm-form>] Perform Unicode normalization. [dump] dicts|filters|modes Lists available dictionaries, filters, or modes. dump|create|merge master|personal|repl <wordlist> dump, create, or merge a master, personal, or replacement word list. DICTIONARY OPTIONS The following options may be used to control which dictionaries to use and how they behave. --master=<name>, -d <name> Base name of the dictionary to use. If this option is specified then Aspell will either use this dictionary or die. --dict-dir=<directory> Location of the main dictionary word list. --lang=<string>, -l <string> Language to use. It follows the same format of the LANG environmental variable on most systems. It consists of the two letter ISO 639 language code and an optional two letter ISO 3166 country code after a dash or underscore. The default value is based on the value of the LC_MESSAGES locale. --size=<string> The preferred size of the dictionary word list. This consists of a two char digit code describing the size of the list, with typical values of: 10=tiny, 20=really small, 30=small, 40=med- small, 50=med, 60=med-large, 70=large, 80=huge, 90=insane. --variety=<string> Any extra information to distinguish this variety of dictionary from other dictionaries which may have the same lang and size. --jargon=<string> Please use the variety option since it replaces jargon as a better choice. jargon will be removed in the future. --word-list-path=<list of directories> Search path for word list information files. --personal=<file>, -p <file> Personal word list file name. --repl=<file> Replacements list file name. --extra-dicts=<list> Extra dictionaries to use. --ignore-accents This option is not yet implemented. 
CHECKER OPTIONS These options control the behavior of Aspell when checking documents. --ignore=<integer>, -W <integer> Ignore words <= <integer> characters in length. --ignore-case, --dont-ignore-case Ignore case when checking words. --ignore-repl, --dont-ignore-repl Ignore commands to store replacement pairs. --save-repl, --dont-save-repl Save the replacement word list on save all. --sug-mode=<mode> Suggestion <mode> = ultra|fast|normal|bad-spellers FILTER OPTIONS These options modify the behavior of the various filters. --add-filter=<list>, --rem-filter=<list> Add or remove a filter. --add-filter-path=<paths>, --rem-filter-path=<paths> Add or remove paths searched for filters. --mode=<string>, -e, -H, -t, -n Sets the filter mode. Mode is one of none, url, email, html, tex or nroff. The alternative shortcut options are '-e' for email, '-H' for Html/Sgml, '-t' for Tex or '-n' for Nroff. --encoding=<string> encoding the document is expected to be in. The default depends on the current locale. --add-email-quote=<list>, --rem-email-quote=<list> Add or remove a list of email quote characters. --email-margin=<integer> Number of chars that can appear before the quote char. --add-html-check=<list>, --rem-html-check=<list> Add or remove a list of HTML attributes to always check. For example, look inside alt= attributes. --add-html-skip=<list>, --rem-html-skip=<list> Add or remove a list of HTML tags to always skip while spell checking. --add-sgml-check=<list>, --rem-sgml-check=<list> Add or remove a list of SGML attributes to always check for spelling. --add-sgml-skip=<list>, --rem-sgml-skip=<list> Add or remove a list of SGML tags to always skip while spell checking. --sgml-extension=<list> SGML file extensions. --tex-check-comments, --dont-tex-check-comments Check TeX comments. --add-tex-command=<list>, --rem-tex-command=<list> Add or remove a list of TeX commands. --add-texinfo-ignore=<list>, --rem-texinfo-ignore=<list> Add or remove a list of Texinfo commands. 
--add-texinfo-ignore-env=<list>, --rem-texinfo-ignore-env=<list> Add or remove a list of Texinfo environments to ignore. --context-visible-first, --dont-context-visible-first Switch the context which should be visible to Aspell. --add-context-delimiters=<list>, --rem-context-delimiters=<list> Add or remove pairs of delimiters. RUN-TOGETHER WORD OPTIONS These may be used to control the behavior of run-together words. --run-together, --dont-run-together, -C, -B Consider run-together words valid. --run-together-limit=<integer> Maximum number of words that can be strung together. --run-together-min=<integer> Minimal length of interior words. MISC OPTIONS Miscellaneous options that don't fall under any other category. --conf=<file name> Main configuration file. This file overrides Aspell's global defaults. --conf-dir=<directory> Location of main configuration file. --data-dir=<directory> Location of language data files. --keyboard=<keyboard> Use this keyboard layout for suggesting possible words. These spelling errors happen if a user accidentally presses a key next to the intended correct key. --local-data-dir=<directory> Alternative location of language data files. This directory is searched before data-dir. --home-dir=<directory> Directory Location for personal wordlist files. --per-conf=<file name> Personal configuration file. This file overrides options found in the global config file. ASPELL UTILITY OPTIONS These options are part of the aspell Utility and work independently of the library. --backup, --dont-backup, -b, -x The aspell utility creates a backup file by making a copy and appending .bak to file name. This only applies when the command is check <file> and the backup file is only created if any spelling modifications take place. --byte-offsets, --dont-byte-offsets Use byte offsets instead of character offsets. --guess, --dont-guess, -m, -P Create missing root/affix combinations not in the dictionary in pipe mode. 
--keymapping=aspell, --keymapping=ispell The keymapping to use, either aspell for the default mapping or ispell to use the same mapping that the Ispell utility uses. --reverse, --dont-reverse Reverse the order of the suggestions list in pipe mode. --suggest, --dont-suggest Suggest possible replacements in pipe mode. If false, Aspell will simply report the misspelling and make no attempt at suggestions or possible corrections. --time, --dont-time Time the load time and suggest a time in pipe mode. In addition Aspell will try to make sense out of Ispell's command line options so that it can function as a drop in replacement for Ispell. If Aspell is run without any command line options it will display a brief help screen and quit. CONFIGURATION Aspell can accept options via global or personal configuration files so that you do not need to specify them each time at the command line. The default global configuration file is /etc/aspell.conf or another file specified by option --conf and is checked first. The default per user configuration file ~/.aspell.conf located in the $HOME directory (or another file specified by option --per-conf) is checked next and overrides options set in the global config file. Options specified at either the command line or via an environmental variable override those specified by either configuration file. Each line of the configuration file has the format: option [value] where option is any one of the standard library options above without the leading dashes. For example the following line will set the default language to Swiss German: lang de_CH There may be any number of spaces between the option and the value, however it can only be spaces, i.e. there is no '=' between the option name and the value. Comments may also be included by preceding them with a '#' as anything from a '#' to a newline is ignored. Blank lines are also allowed. 
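A small personal configuration file in this format might look like the following (a hypothetical example; the option names are the library options listed above without the leading dashes, and the personal word list path is illustrative):

```
# ~/.aspell.conf -- per-user Aspell configuration
# Format: option [value]; no '=' sign; '#' starts a comment

lang en_US
sug-mode normal
ignore 2
personal /home/user/.aspell.en.pws
```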
The /etc/aspell.conf file is a good example of how to set these options and the Aspell Manual has more detailed info. SEE ALSO aspell-import(1), prezip-bin(1), run-with-aspell(1), word-list-compress(1) Aspell is fully documented in its Texinfo manual. See the `aspell' entry in info for more complete documentation. SUPPORT Support for Aspell can be found on the Aspell mailing lists. Instructions for joining the various mailing lists (and an archive of them) can be found off the Aspell home page at <http://aspell.net>. Bug reports should be submitted via GitHub Issues rather than being posted to the mailing lists. AUTHOR This manual page was written by Brian Nelson <pyro@debian.org> based on the Aspell User's Manual, Copyright © 2002 Kevin Atkinson. Updated Nov 2006 by Jose Da Silva <digital@joescat.com>, and Dec 2006 by Kevin Atkinson <kevina@gnu.org>. GNU 2006-12-10 ASPELL(1)
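As a quick sketch of the list command described above (assuming aspell and an English dictionary are installed), misspelled words from standard input are printed one per line:

```shell
# Print the misspelled words found on standard input
printf 'This sentnce has a mistak in it\n' | aspell --lang=en list
```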
|
aspell - interactive spell checker
|
aspell [options] <command>
| null | null |
cloudflared
|
cloudflared creates a persistent connection between a local service and the Cloudflare network. Once the daemon is running and the Tunnel has been configured, the local service can be locked down to only allow connections from Cloudflare. 2023.7.3 2023-07-25T20:51:49Z man(1)
|
cloudflared - creates a connection to the cloudflare edge network
| null | null | null |
msgcat
|
Concatenates and merges the specified PO files. Find messages which are common to two or more of the specified PO files. By using the --more-than option, greater commonality may be requested before messages are printed. Conversely, the --less-than option may be used to specify less commonality before messages are printed (i.e. --less-than=2 will only print the unique messages). Translations, comments, extracted comments, and file positions will be cumulated, except that if --use-first is specified, they will be taken from the first PO file to define them. Mandatory arguments to long options are mandatory for short options too. Input file location: INPUTFILE ... input files -f, --files-from=FILE get list of input files from FILE -D, --directory=DIRECTORY add DIRECTORY to list for input files search If input file is -, standard input is read. Output file location: -o, --output-file=FILE write output to specified file The results are written to standard output if no output file is specified or if it is -. Message selection: -<, --less-than=NUMBER print messages with less than this many definitions, defaults to infinite if not set ->, --more-than=NUMBER print messages with more than this many definitions, defaults to 0 if not set -u, --unique shorthand for --less-than=2, requests that only unique messages be printed Input file syntax: -P, --properties-input input files are in Java .properties syntax --stringtable-input input files are in NeXTstep/GNUstep .strings syntax Output details: -t, --to-code=NAME encoding for output --use-first use first available translation for each message, don't merge several translations --lang=CATALOGNAME set 'Language' field in the header entry --color use colors and other text attributes always --color=WHEN use colors and other text attributes if WHEN. WHEN may be 'always', 'never', 'auto', or 'html'. 
--style=STYLEFILE specify CSS style rule file for --color -e, --no-escape do not use C escapes in output (default) -E, --escape use C escapes in output, no extended chars --force-po write PO file even if empty -i, --indent write the .po file using indented style --no-location do not write '#: filename:line' lines -n, --add-location generate '#: filename:line' lines (default) --strict write out strict Uniforum conforming .po file -p, --properties-output write out a Java .properties file --stringtable-output write out a NeXTstep/GNUstep .strings file -w, --width=NUMBER set output page width --no-wrap do not break long message lines, longer than the output page width, into several lines -s, --sort-output generate sorted output -F, --sort-by-file sort output by file location Informative output: -h, --help display this help and exit -V, --version output version information and exit AUTHOR Written by Bruno Haible. REPORTING BUGS Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT Copyright © 2001-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO The full documentation for msgcat is maintained as a Texinfo manual. If the info and msgcat programs are properly installed at your site, the command info msgcat should give you access to the complete manual. GNU gettext-tools 0.22.5 February 2024 MSGCAT(1)
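The --more-than selection can be sketched with two tiny catalogs (assuming GNU gettext tools are installed; the file names and messages are illustrative, and the header-less PO files may draw a warning):

```shell
tmp=$(mktemp -d)

cat > "$tmp/a.po" <<'EOF'
msgid "Hello"
msgstr "Bonjour"
EOF

cat > "$tmp/b.po" <<'EOF'
msgid "Hello"
msgstr "Bonjour"

msgid "Goodbye"
msgstr "Au revoir"
EOF

# Keep only messages defined in more than one input file
msgcat --more-than=1 -o "$tmp/common.po" "$tmp/a.po" "$tmp/b.po"

# "Hello" appears in both inputs, "Goodbye" in only one
grep -q 'msgid "Hello"' "$tmp/common.po" && echo "Hello is common"
```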
|
msgcat - combines several message catalogs
|
msgcat [OPTION] [INPUTFILE]...
| null | null |
aomenc
| null | null | null | null | null |
rav1e
| null | null | null | null | null |
zstdcat
|
zstd is a fast lossless compression algorithm and data compression tool, with command line syntax similar to gzip(1) and xz(1). It is based on the LZ77 family, with further FSE & huff0 entropy stages. zstd offers highly configurable compression speed, from fast modes at > 200 MB/s per core, to strong modes with excellent compression ratios. It also features a very fast decoder, with speeds > 500 MB/s per core, which remains roughly stable at all compression settings. zstd command line syntax is generally similar to gzip, but features the following few differences: • Source files are preserved by default. It's possible to remove them automatically by using the --rm command. • When compressing a single file, zstd displays progress notifications and result summary by default. Use -q to turn them off. • zstd displays a short help page when command line is an error. Use -q to turn it off. • zstd does not accept input from console, though it does accept stdin when it's not the console. • zstd does not store the input's filename or attributes, only its contents. zstd processes each file according to the selected operation mode. If no files are given or file is -, zstd reads from standard input and writes the processed data to standard output. zstd will refuse to write compressed data to standard output if it is a terminal: it will display an error message and skip the file. Similarly, zstd will refuse to read compressed data from standard input if it is a terminal. Unless --stdout or -o is specified, files are written to a new file whose name is derived from the source file name: • When compressing, the suffix .zst is appended to the source filename to get the target filename. • When decompressing, the .zst suffix is removed from the source filename to get the target filename. Concatenation with .zst Files It is possible to concatenate multiple .zst files. zstd will decompress such an agglomerated file as if it was a single .zst file.
|
zstd - zstd, zstdmt, unzstd, zstdcat - Compress or decompress .zst files
|
zstd [OPTIONS] [-|INPUT-FILE] [-o OUTPUT-FILE] zstdmt is equivalent to zstd -T0 unzstd is equivalent to zstd -d zstdcat is equivalent to zstd -dcf
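A minimal round trip with the synopsis above might look like this (a sketch; the file names are illustrative, and it assumes zstd and zstdcat are installed):

```shell
# Create a small sample file in a scratch directory
tmpdir=$(mktemp -d)
printf 'hello zstd\n' > "$tmpdir/sample.txt"

# Compress: writes sample.txt.zst next to the source, which is preserved
zstd -q "$tmpdir/sample.txt"

# zstdcat (i.e. zstd -dcf) decompresses to standard output
zstdcat "$tmpdir/sample.txt.zst" > "$tmpdir/roundtrip.txt"

cmp -s "$tmpdir/sample.txt" "$tmpdir/roundtrip.txt" && echo "round-trip OK"
```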
|
Integer Suffixes and Special Values In most places where an integer argument is expected, an optional suffix is supported to easily indicate large integers. There must be no space between the integer and the suffix. KiB Multiply the integer by 1,024 (2^10). Ki, K, and KB are accepted as synonyms for KiB. MiB Multiply the integer by 1,048,576 (2^20). Mi, M, and MB are accepted as synonyms for MiB. Operation Mode If multiple operation mode options are given, the last one takes effect. -z, --compress Compress. This is the default operation mode when no operation mode option is specified and no other operation mode is implied from the command name (for example, unzstd implies --decompress). -d, --decompress, --uncompress Decompress. -t, --test Test the integrity of compressed files. This option is equivalent to --decompress --stdout > /dev/null, decompressed data is discarded and checksummed for errors. No files are created or removed. -b# Benchmark file(s) using compression level #. See BENCHMARK below for a description of this operation. --train FILES Use FILES as a training set to create a dictionary. The training set should contain a lot of small files (> 100). See DICTIONARY BUILDER below for a description of this operation. -l, --list Display information related to a zstd compressed file, such as size, ratio, and checksum. Some of these fields may not be available. This command's output can be augmented with the -v modifier. Operation Modifiers • -#: selects # compression level [1-19] (default: 3). Higher compression levels generally produce higher compression ratio at the expense of speed and memory. A rough rule of thumb is that compression speed is expected to be divided by 2 every 2 levels. Technically, each level is mapped to a set of advanced parameters (that can also be modified individually, see below). Because the compressor's behavior highly depends on the content to compress, there's no guarantee of a smooth progression from one level to another. 
• --ultra: unlocks high compression levels 20+ (maximum 22), using a lot more memory. Note that decompression will also require more memory when using these levels. • --fast[=#]: switch to ultra-fast compression levels. If =# is not present, it defaults to 1. The higher the value, the faster the compression speed, at the cost of some compression ratio. This setting overwrites compression level if one was set previously. Similarly, if a compression level is set after --fast, it overrides it. • -T#, --threads=#: Compress using # working threads (default: 1). If # is 0, attempt to detect and use the number of physical CPU cores. In all cases, the nb of threads is capped to ZSTDMT_NBWORKERS_MAX, which is either 64 in 32-bit mode, or 256 for 64-bit environments. This modifier does nothing if zstd is compiled without multithread support. • --single-thread: Use a single thread for both I/O and compression. As compression is serialized with I/O, this can be slightly slower. Single-thread mode features significantly lower memory usage, which can be useful for systems with limited amount of memory, such as 32-bit systems. Note 1: this mode is the only available one when multithread support is disabled. Note 2: this mode is different from -T1, which spawns 1 compression thread in parallel with I/O. Final compressed result is also slightly different from -T1. • --auto-threads={physical,logical} (default: physical): When using a default amount of threads via -T0, choose the default based on the number of detected physical or logical cores. • --adapt[=min=#,max=#]: zstd will dynamically adapt compression level to perceived I/O conditions. Compression level adaptation can be observed live by using command -v. Adaptation can be constrained between supplied min and max levels. The feature works when combined with multi-threading and --long mode. It does not work with --single-thread. It sets window size to 8 MiB by default (can be changed manually, see wlog). 
Due to the chaotic nature of dynamic adaptation, compressed result is not reproducible. Note: at the time of this writing, --adapt can remain stuck at low speed when combined with multiple worker threads (>=2). • --long[=#]: enables long distance matching with # windowLog, if # is not present it defaults to 27. This increases the window size (windowLog) and memory usage for both the compressor and decompressor. This setting is designed to improve the compression ratio for files with long matches at a large distance. Note: If windowLog is set to larger than 27, --long=windowLog or --memory=windowSize needs to be passed to the decompressor. • -D DICT: use DICT as Dictionary to compress or decompress FILE(s) • --patch-from FILE: Specify the file to be used as a reference point for zstd's diff engine. This is effectively dictionary compression with some convenient parameter selection, namely that windowSize > srcSize. Note: cannot use both this and -D together. Note: --long mode will be automatically activated if chainLog < fileLog (fileLog being the windowLog required to cover the whole file). You can also manually force it. Note: for all levels, you can use --patch-from in --single-thread mode to improve compression ratio at the cost of speed. Note: for level 19, you can get increased compression ratio at the cost of speed by specifying --zstd=targetLength= to be something large (e.g. 4096), and by setting a large --zstd=chainLog=. • --rsyncable: zstd will periodically synchronize the compression state to make the compressed file more rsync-friendly. There is a negligible impact to compression ratio, and a potential impact to compression speed, perceptible at higher speeds, for example when combining --rsyncable with many parallel worker threads. This feature does not work with --single-thread. You probably don't want to use it with long range mode, since it will decrease the effectiveness of the synchronization points, but your mileage may vary. 
• -C, --[no-]check: add integrity check computed from uncompressed data (default: enabled) • --[no-]content-size: enable / disable whether or not the original size of the file is placed in the header of the compressed file. The default option is --content-size (meaning that the original size will be placed in the header). • --no-dictID: do not store dictionary ID within frame header (dictionary compression). The decoder will have to rely on implicit knowledge about which dictionary to use, it won't be able to check if it's correct. • -M#, --memory=#: Set a memory usage limit. By default, zstd uses 128 MiB for decompression as the maximum amount of memory the decompressor is allowed to use, but you can override this manually if need be in either direction (i.e. you can increase or decrease it). This is also used during compression when using --patch-from=. In this case, this parameter overrides the maximum size allowed for a dictionary (128 MiB). Additionally, this can be used to limit memory for dictionary training. This parameter overrides the default limit of 2 GiB. zstd will load training samples up to the memory limit and ignore the rest. • --stream-size=#: Sets the pledged source size of input coming from a stream. This value must be exact, as it will be included in the produced frame header. Incorrect stream sizes will cause an error. This information will be used to better optimize compression parameters, resulting in better and potentially faster compression, especially for smaller source sizes. • --size-hint=#: When handling input from a stream, zstd must guess how large the source size will be when optimizing compression parameters. If the stream size is relatively small, this guess may be a poor one, resulting in a higher compression ratio than expected. This feature allows for controlling the guess when needed. Exact guesses result in better compression ratios. 
Overestimates result in slightly degraded compression ratios, while underestimates may result in significant degradation. • --target-compressed-block-size=#: Attempt to produce compressed blocks of approximately this size. This will split larger blocks in order to approach this target. This feature is notably useful for improved latency, when the receiver can leverage receiving early incomplete data. This parameter defines a loose target: compressed blocks will target this size "on average", but individual blocks can still be larger or smaller. Enabling this feature can decrease compression speed by up to ~10% at level 1. Higher levels will see smaller relative speed regression, becoming invisible at higher settings. • -f, --force: disable input and output checks. Allows overwriting existing files, input from console, output to stdout, operating on links, block devices, etc. During decompression and when the output destination is stdout, pass-through unrecognized formats as-is. • -c, --stdout: write to standard output (even if it is the console); keep original files (disable --rm). • -o FILE: save result into FILE. Note that this operation is in conflict with -c. If both operations are present on the command line, the last expressed one wins. • --[no-]sparse: enable / disable sparse FS support, to make files with many zeroes smaller on disk. Creating sparse files may save disk space and speed up decompression by reducing the amount of disk I/O. default: enabled when output is into a file, and disabled when output is stdout. This setting overrides default and can force sparse mode over stdout. • --[no-]pass-through enable / disable passing through uncompressed files as-is. During decompression when pass-through is enabled, unrecognized formats will be copied as-is from the input to the output. By default, pass-through will occur when the output destination is stdout and the force (-f) option is set. 
• --rm: remove source file(s) after successful compression or decompression. This command is silently ignored if output is stdout. If used in combination with -o, triggers a confirmation prompt (which can be silenced with -f), as this is a destructive operation. • -k, --keep: keep source file(s) after successful compression or decompression. This is the default behavior. • -r: operate recursively on directories. It selects all files in the named directory and all its subdirectories. This can be useful both to reduce command line typing, and to circumvent shell expansion limitations, when there are a lot of files and naming breaks the maximum size of a command line. • --filelist FILE read a list of files to process as content from FILE. Format is compatible with ls output, with one file per line. • --output-dir-flat DIR: resulting files are stored into target DIR directory, instead of same directory as origin file. Be aware that this command can introduce name collision issues, if multiple files, from different directories, end up having the same name. Collision resolution ensures first file with a given name will be present in DIR, while in combination with -f, the last file will be present instead. • --output-dir-mirror DIR: similar to --output-dir-flat, the output files are stored underneath target DIR directory, but this option will replicate input directory hierarchy into output DIR. If input directory contains "..", the files in this directory will be ignored. If input directory is an absolute directory (i.e. "/var/tmp/abc"), it will be stored into the "output-dir/var/tmp/abc". If there are multiple input files or directories, name collision resolution will follow the same rules as --output-dir-flat. • --format=FORMAT: compress and decompress in other formats. If compiled with support, zstd can compress to or decompress from other compression algorithm formats. Possibly available options are zstd, gzip, xz, lzma, and lz4. 
If no such format is provided, zstd is the default. • -h/-H, --help: display help/long help and exit • -V, --version: display version number and immediately exit. note that, since it exits, flags specified after -V are effectively ignored. Advanced: -vV also displays supported formats. -vvV also displays POSIX support. -qV will only display the version number, suitable for machine reading. • -v, --verbose: verbose mode, display more information • -q, --quiet: suppress warnings, interactivity, and notifications. specify twice to suppress errors too. • --no-progress: do not display the progress bar, but keep all other messages. • --show-default-cparams: shows the default compression parameters that will be used for a particular input file, based on the provided compression level and the input size. If the provided file is not a regular file (e.g. a pipe), this flag will output the parameters used for inputs of unknown size. • --exclude-compressed: only compress files that are not already compressed. • --: All arguments after -- are treated as files gzip Operation Modifiers When invoked via a gzip symlink, zstd will support further options that intend to mimic the gzip behavior: -n, --no-name do not store the original filename and timestamps when compressing a file. This is the default behavior and hence a no-op. --best alias to the option -9. Environment Variables Employing environment variables to set parameters has security implications. Therefore, this avenue is intentionally limited. Only ZSTD_CLEVEL and ZSTD_NBTHREADS are currently supported. They set the default compression level and number of threads to use during compression, respectively. ZSTD_CLEVEL can be used to set the level between 1 and 19 (the "normal" range). If the value of ZSTD_CLEVEL is not a valid integer, it will be ignored with a warning message. ZSTD_CLEVEL just replaces the default compression level (3). 
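As a sketch of the ZSTD_CLEVEL behavior (assuming zstd is installed), the environment variable and the corresponding flag select the same level, so for the same input and zstd version the outputs should match byte for byte:

```shell
tmp=$(mktemp -d)
seq 1 2000 > "$tmp/data"

# Default level replaced by ZSTD_CLEVEL ...
ZSTD_CLEVEL=19 zstd -q "$tmp/data" -o "$tmp/env.zst"

# ... which is equivalent to passing -19 on the command line
zstd -q -19 "$tmp/data" -o "$tmp/cli.zst"

cmp -s "$tmp/env.zst" "$tmp/cli.zst" && echo "same output"
```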
ZSTD_NBTHREADS can be used to set the number of threads zstd will attempt to use during compression. If the value of ZSTD_NBTHREADS is not a valid unsigned integer, it will be ignored with a warning message. ZSTD_NBTHREADS has a default value of 1, and is capped at ZSTDMT_NBWORKERS_MAX==200. zstd must be compiled with multithread support for this variable to have any effect. They can both be overridden by corresponding command line arguments: -# for compression level and -T# for number of compression threads. ADVANCED COMPRESSION OPTIONS zstd provides 22 predefined regular compression levels plus the fast levels. A compression level is translated internally into multiple advanced parameters that control the behavior of the compressor (one can observe the result of this translation with --show-default-cparams). These advanced parameters can be overridden using advanced compression options. --zstd[=options]: The options are provided as a comma-separated list. You may specify only the options you want to change and the rest will be taken from the selected or default compression level. The list of available options: strategy=strat, strat=strat Specify a strategy used by a match finder. There are 9 strategies numbered from 1 to 9, from fastest to strongest: 1=ZSTD_fast, 2=ZSTD_dfast, 3=ZSTD_greedy, 4=ZSTD_lazy, 5=ZSTD_lazy2, 6=ZSTD_btlazy2, 7=ZSTD_btopt, 8=ZSTD_btultra, 9=ZSTD_btultra2. windowLog=wlog, wlog=wlog Specify the maximum number of bits for a match distance. A higher number of bits increases the chance to find a match, which usually improves compression ratio. It also increases memory requirements for the compressor and decompressor. The minimum wlog is 10 (1 KiB) and the maximum is 30 (1 GiB) on 32-bit platforms and 31 (2 GiB) on 64-bit platforms. Note: If windowLog is set to larger than 27, --long=windowLog or --memory=windowSize needs to be passed to the decompressor. hashLog=hlog, hlog=hlog Specify the maximum number of bits for a hash table. 
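The precedence rules above can be sketched as follows (the sample file name is illustrative):

```shell
echo "some sample data" > sample.txt

# ZSTD_CLEVEL sets the default level; an explicit -# flag wins.
ZSTD_CLEVEL=19 zstd -q -f sample.txt        # compresses at level 19
ZSTD_CLEVEL=19 zstd -q -f -5 sample.txt     # -5 overrides the variable

# Likewise -T# overrides ZSTD_NBTHREADS (needs multithread support).
ZSTD_NBTHREADS=4 zstd -q -f -T1 sample.txt
```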
Bigger hash tables cause fewer collisions, which usually makes compression faster but requires more memory during compression. The minimum hlog is 6 (64 entries / 256 B) and the maximum is 30 (1B entries / 4 GiB). chainLog=clog, clog=clog Specify the maximum number of bits for the secondary search structure, whose form depends on the selected strategy. A higher number of bits increases the chance to find a match, which usually improves compression ratio. It also slows down compression speed and increases memory requirements for compression. This option is ignored for the ZSTD_fast strategy, which only has the primary hash table. The minimum clog is 6 (64 entries / 256 B) and the maximum is 29 (512M entries / 2 GiB) on 32-bit platforms and 30 (1B entries / 4 GiB) on 64-bit platforms. searchLog=slog, slog=slog Specify the maximum number of searches in a hash chain or a binary tree using a logarithmic scale. More searches increase the chance to find a match, which usually increases compression ratio but decreases compression speed. The minimum slog is 1 and the maximum is 'windowLog' - 1. minMatch=mml, mml=mml Specify the minimum searched length of a match in a hash table. Larger search lengths usually decrease compression ratio but improve decompression speed. The minimum mml is 3 and the maximum is 7. targetLength=tlen, tlen=tlen The impact of this field varies depending on the selected strategy. For ZSTD_btopt, ZSTD_btultra and ZSTD_btultra2, it specifies the minimum match length that causes the match finder to stop searching. A larger targetLength usually improves compression ratio but decreases compression speed. For ZSTD_fast, it triggers ultra-fast mode when > 0. The value represents the amount of data skipped between match sampling. Impact is reversed: a larger targetLength increases compression speed but decreases compression ratio. For all other strategies, this field has no impact. The minimum tlen is 0 and the maximum is 128 KiB. 
overlapLog=ovlog, ovlog=ovlog Determine overlapSize, the amount of data reloaded from the previous job. This parameter is only available when multithreading is enabled. Reloading more data improves compression ratio, but decreases speed. The minimum ovlog is 0, and the maximum is 9. 1 means "no overlap", hence completely independent jobs. 9 means "full overlap", meaning up to windowSize is reloaded from the previous job. Reducing ovlog by 1 reduces the reloaded amount by a factor of 2. For example, 8 means "windowSize/2", and 6 means "windowSize/8". Value 0 is special and means "default": ovlog is automatically determined by zstd. In that case, ovlog will range from 6 to 9, depending on the selected strat. ldmHashLog=lhlog, lhlog=lhlog Specify the maximum size for a hash table used for long distance matching. This option is ignored unless long distance matching is enabled. Bigger hash tables usually improve compression ratio at the expense of more memory during compression and a decrease in compression speed. The minimum lhlog is 6 and the maximum is 30 (default: 20). ldmMinMatch=lmml, lmml=lmml Specify the minimum searched length of a match for long distance matching. This option is ignored unless long distance matching is enabled. Larger or very small values usually decrease compression ratio. The minimum lmml is 4 and the maximum is 4096 (default: 64). ldmBucketSizeLog=lblog, lblog=lblog Specify the size of each bucket for the hash table used for long distance matching. This option is ignored unless long distance matching is enabled. Larger bucket sizes improve collision resolution but decrease compression speed. The minimum lblog is 1 and the maximum is 8 (default: 3). ldmHashRateLog=lhrlog, lhrlog=lhrlog Specify the frequency of inserting entries into the long distance matching hash table. This option is ignored unless long distance matching is enabled. Larger values will improve compression speed. 
Deviating far from the default value will likely result in a decrease in compression ratio. The default value is wlog - lhlog. Example The following parameters set advanced compression options to something similar to predefined level 19 for files bigger than 256 KB: --zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6 -B#: Specify the size of each compression job. This parameter is only available when multi-threading is enabled. Each compression job is run in parallel, so this value indirectly impacts the number of active threads. Default job size varies depending on compression level (generally 4 * windowSize). -B# makes it possible to manually select a custom size. Note that job size must respect a minimum value which is enforced transparently. This minimum is either 512 KB, or overlapSize, whichever is largest. Different job sizes will lead to non-identical compressed frames. DICTIONARY BUILDER zstd offers dictionary compression, which greatly improves efficiency on small files and messages. It's possible to train zstd with a set of samples, the result of which is saved into a file called a dictionary. Then, during compression and decompression, reference the same dictionary, using command -D dictionaryFileName. Compression of small files similar to the sample set will be greatly improved. --train FILEs Use FILEs as a training set to create a dictionary. The training set should ideally contain a lot of samples (> 100), and weigh typically 100x the target dictionary size (for example, ~10 MB for a 100 KB dictionary). --train can be combined with -r to indicate a directory rather than listing all the files, which can be useful to circumvent shell expansion limits. Since dictionary compression is mostly effective for small files, the expectation is that the training set will only contain small files. In the case where some samples happen to be large, only the first 128 KiB of these samples will be used for training. 
--train supports multithreading if zstd is compiled with threading support (default). Additional advanced parameters can be specified with --train-fastcover. The legacy dictionary builder can be accessed with --train-legacy. The slower cover dictionary builder can be accessed with --train-cover. Default --train is equivalent to --train-fastcover=d=8,steps=4. -o FILE Dictionary saved into FILE (default name: dictionary). --maxdict=# Limit dictionary to specified size (default: 112640 bytes). As usual, quantities are expressed in bytes by default, and it's possible to employ suffixes (like KB or MB) to specify larger values. -# Use # compression level during training (optional). Will generate statistics more tuned for selected compression level, resulting in a small compression ratio improvement for this level. -B# Split input files into blocks of size # (default: no split) -M#, --memory=# Limit the amount of sample data loaded for training (default: 2 GB). Note that the default (2 GB) is also the maximum. This parameter can be useful in situations where the training set size is not well controlled and could be potentially very large. Since speed of the training process is directly correlated to the size of the training sample set, a smaller sample set leads to faster training. In situations where the training set is larger than maximum memory, the CLI will randomly select samples among the available ones, up to the maximum allowed memory budget. This is meant to improve dictionary relevance by mitigating the potential impact of clustering, such as selecting only files from the beginning of a list sorted by modification date, or sorted by alphabetical order. The randomization process is deterministic, so training of the same list of files with the same parameters will lead to the creation of the same dictionary. --dictID=# A dictionary ID is a locally unique ID. The decoder will use this value to verify it is using the right dictionary. 
By default, zstd will create a 4-byte random number ID. It's possible to provide an explicit number ID instead. It's up to the dictionary manager not to assign the same ID twice to two different dictionaries. Note that short numbers have an advantage: an ID < 256 will only need 1 byte in the compressed frame header, and an ID < 65536 will only need 2 bytes. This compares favorably to the default of 4 bytes. Note that RFC8878 reserves IDs less than 32768 and greater than or equal to 2^31, so they should not be used in public. --train-cover[=k#,d=#,steps=#,split=#,shrink[=#]] Select parameters for the default dictionary builder algorithm named cover. If d is not specified, then it tries d = 6 and d = 8. If k is not specified, then it tries steps values in the range [50, 2000]. If steps is not specified, then the default value of 40 is used. If split is not specified or split <= 0, then the default value of 100 is used. Requires that d <= k. If the shrink flag is not used, then the default value for shrinkDict of 0 is used. If shrink is not specified, then the default value for shrinkDictMaxRegression of 1 is used. Selects segments of size k with the highest score to put in the dictionary. The score of a segment is computed by the sum of the frequencies of all the subsegments of size d. Generally d should be in the range [6, 8], occasionally up to 16, but the algorithm will run faster with d <= 8. Good values for k vary widely based on the input data, but a safe range is [2 * d, 2000]. If split is 100, all input samples are used for both training and testing to find the optimal d and k to build the dictionary. Supports multithreading if zstd is compiled with threading support. Having shrink enabled takes a truncated dictionary of minimum size and doubles it in size until the compression ratio of the truncated dictionary is at most shrinkDictMaxRegression% worse than the compression ratio of the largest dictionary. 
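An end-to-end sketch of the training and usage workflow described above, assuming generated sample files and an explicit small --maxdict so the tiny sample set is sufficient (all names are illustrative):

```shell
# Generate 200 small, structurally similar samples to train on.
mkdir -p samples
for i in $(seq 1 200); do
  { for j in $(seq 1 30); do
      printf 'record %s.%s: status=OK latency=%sms\n' "$i" "$j" "$((i + j))"
    done; } > samples/s$i.txt
done

# Train a dictionary from the samples (kept small relative to the set),
# then reference the same dictionary for compression and decompression.
zstd -q --train samples/* --maxdict=16384 -o dict
zstd -q -D dict samples/s1.txt -o s1.zst
zstd -q -d -D dict s1.zst -o s1.out
cmp samples/s1.txt s1.out   # round trip is lossless
```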
Examples: zstd --train-cover FILEs zstd --train-cover=k=50,d=8 FILEs zstd --train-cover=d=8,steps=500 FILEs zstd --train-cover=k=50 FILEs zstd --train-cover=k=50,split=60 FILEs zstd --train-cover=shrink FILEs zstd --train-cover=shrink=2 FILEs --train-fastcover[=k#,d=#,f=#,steps=#,split=#,accel=#] Same as cover but with extra parameters f and accel, and a different default value of split. If split is not specified, then it tries split = 75. If f is not specified, then it tries f = 20. Requires that 0 < f < 32. If accel is not specified, then it tries accel = 1. Requires that 0 < accel <= 10. Requires that d = 6 or d = 8. f is the log of the size of the array that keeps track of the frequency of subsegments of size d. The subsegment is hashed to an index in the range [0, 2^f - 1]. It is possible that 2 different subsegments are hashed to the same index, and they are considered the same subsegment when computing frequency. Using a higher f reduces collisions but takes longer. Examples: zstd --train-fastcover FILEs zstd --train-fastcover=d=8,f=15,accel=2 FILEs --train-legacy[=selectivity=#] Use the legacy dictionary builder algorithm with the given dictionary selectivity (default: 9). The smaller the selectivity value, the denser the dictionary, improving its efficiency but reducing its achievable maximum size. --train-legacy=s=# is also accepted. Examples: zstd --train-legacy FILEs zstd --train-legacy=selectivity=8 FILEs BENCHMARK The zstd CLI provides a benchmarking mode that can be used to easily find suitable compression parameters, or alternatively to benchmark a computer's performance. Note that the results are highly dependent on the content being compressed. 
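A quick benchmark session using the flags documented below might look like this (data.bin is an illustrative file name):

```shell
# Benchmark a single level, then a range of levels, on one file.
zstd -b3 -i1 data.bin          # level 3 only, ~1s minimum per measurement
zstd -b1 -e19 -i1 data.bin     # levels 1 through 19 inclusive

# Benchmark decompression speed of an already compressed frame.
zstd -d -b1 data.bin.zst
```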
-b# benchmark file(s) using compression level # -e# benchmark file(s) using multiple compression levels, from -b# to -e# (inclusive) -d benchmark decompression speed only (requires providing an already zstd-compressed content) -i# minimum evaluation time, in seconds (default: 3s), benchmark mode only -B#, --block-size=# cut file(s) into independent chunks of size # (default: no chunking) --priority=rt set process priority to real-time (Windows) Output Format: CompressionLevel#Filename: InputSize -> OutputSize (CompressionRatio), CompressionSpeed, DecompressionSpeed Methodology: For both compression and decompression speed, the entire input is compressed/decompressed in-memory to measure speed. A run lasts at least 1 sec, so when files are small, they are compressed/decompressed several times per run, in order to improve measurement accuracy. SEE ALSO zstdgrep(1), zstdless(1), gzip(1), xz(1) The zstandard format is specified in Y. Collet, "Zstandard Compression and the 'application/zstd' Media Type", https://www.ietf.org/rfc/rfc8878.txt, Internet RFC 8878 (February 2021). BUGS Report bugs at: https://github.com/facebook/zstd/issues AUTHOR Yann Collet zstd 1.5.6 March 2024 ZSTD(1)
| null |
autoheader
|
Create a template file of C '#define' statements for 'configure' to use. To this end, scan TEMPLATE-FILE, or 'configure.ac' if present, or else 'configure.in'. -h, --help print this help, then exit -V, --version print version number, then exit -v, --verbose verbosely report processing -d, --debug don't remove temporary files -f, --force consider all files obsolete -W, --warnings=CATEGORY report the warnings falling in CATEGORY (comma-separated list accepted) Warning categories are: cross cross compilation issues gnu GNU coding standards (default in gnu and gnits modes) obsolete obsolete features or constructions (default) override user redefinitions of Automake rules or variables portability portability issues (default in gnu and gnits modes) portability-recursive nested Make variables (default with -Wportability) extra-portability extra portability issues related to obscure tools syntax dubious syntactic constructs (default) unsupported unsupported or incomplete features (default) -W also understands: all turn on all the warnings none turn off all the warnings no-CATEGORY turn off warnings in CATEGORY error treat all enabled warnings as errors Library directories: -B, --prepend-include=DIR prepend directory DIR to search path -I, --include=DIR append directory DIR to search path AUTHOR Written by Roland McGrath and Akim Demaille. REPORTING BUGS Report bugs to <bug-autoconf@gnu.org>, or via Savannah: <https://savannah.gnu.org/support/?group=autoconf>. COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+/Autoconf: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>, <https://gnu.org/licenses/exceptions.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO autoconf(1), automake(1), autoreconf(1), autoupdate(1), autoheader(1), autoscan(1), config.guess(1), config.sub(1), ifnames(1), libtool(1). 
The full documentation for Autoconf is maintained as a Texinfo manual. To read the manual locally, use the command info autoconf You can also consult the Web version of the manual at <https://gnu.org/software/autoconf/manual/>. GNU Autoconf 2.72 December 2023 AUTOHEADER(1)
|
autoheader - Create a template header for configure
|
autoheader [OPTION]... [TEMPLATE-FILE]
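A minimal sketch of autoheader in action, assuming a hypothetical configure.ac (the project name and checked header are illustrative):

```shell
# Minimal configure.ac that gives autoheader something to template:
# AC_CONFIG_HEADERS names the header, AC_CHECK_HEADERS contributes
# a HAVE_UNISTD_H #define template.
cat > configure.ac <<'EOF'
AC_INIT([demo], [1.0])
AC_CONFIG_HEADERS([config.h])
AC_CHECK_HEADERS([unistd.h])
AC_OUTPUT
EOF

autoheader                       # writes config.h.in
grep HAVE_UNISTD_H config.h.in   # the generated '#undef' template
```

At configure time, the generated config.h.in is rewritten into config.h with each '#undef' resolved to a concrete '#define' (or left commented out).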
| null | null |
protoc
| null | null | null | null | null |
gpg-error-config
|
gpg-error-config is a tool that is used to determine the compiler and linker flags that should be used to compile and link programs that use Libgpg-error. This tool is now obsolete. Instead, please use pkg-config with gpg-error.pc for your new program, or use gpg-error.m4 which uses gpgrt-config and gpg-error.pc internally.
|
gpg-error-config - Script to get information about the installed version of libgpg-error
|
gpg-error-config [options]
|
gpg-error-config accepts the following options: --mt Provide output appropriate for multithreaded programs. --mt is only useful when combined with other options, and must be the first option if present. --version Print the currently installed version of Libgpg-error on the standard output. --libs Print the linker flags that are necessary to link a program using Libgpg-error. --cflags Print the compiler flags that are necessary to compile a program using Libgpg-error. --prefix=prefix If specified, use prefix instead of the installation prefix that Libgpg-error was built with when computing the output for the --cflags and --libs options. This option is also used for the exec prefix if --exec-prefix was not specified. This option must be specified before any --libs or --cflags options. --exec-prefix=prefix If specified, use prefix instead of the installation exec prefix that Libgpg-error was built with when computing the output for the --cflags and --libs options. This option must be specified before any --libs or --cflags options. Libgpg-error 1.49 2024-04-25 GPG-ERROR-CONFIG(1)
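Typical usage might look like this; prog.c is a hypothetical source file, and the pkg-config line shows the recommended modern equivalent:

```shell
# Query the installed version and flags (output varies by system).
gpg-error-config --version
gpg-error-config --cflags
gpg-error-config --libs

# Typical compile line for a hypothetical prog.c using Libgpg-error:
cc $(gpg-error-config --cflags) -o prog prog.c $(gpg-error-config --libs)

# Recommended modern equivalent via pkg-config and gpg-error.pc:
cc $(pkg-config --cflags gpg-error) -o prog prog.c $(pkg-config --libs gpg-error)
```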
| null |
pq_to_hlg
| null | null | null | null | null |
cipher_aead_demo
| null | null | null | null | null |
echo_supervisord_conf
| null | null | null | null | null |
ginstall
|
This install program copies files (often just compiled) into destination locations you choose. If you want to download and install a ready-to-use package on a GNU/Linux system, you should instead be using a package manager like yum(1) or apt-get(1). In the first three forms, copy SOURCE to DEST or multiple SOURCE(s) to the existing DIRECTORY, while setting permission modes and owner/group. In the 4th form, create all components of the given DIRECTORY(ies). Mandatory arguments to long options are mandatory for short options too. --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument -c (ignored) -C, --compare compare content of source and destination files, and if no change to content, ownership, and permissions, do not modify the destination at all -d, --directory treat all arguments as directory names; create all components of the specified directories -D create all leading components of DEST except the last, or all components of --target-directory, then copy SOURCE to DEST --debug explain how a file is copied. 
Implies -v -g, --group=GROUP set group ownership, instead of process' current group -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x -o, --owner=OWNER set ownership (super-user only) -p, --preserve-timestamps apply access/modification times of SOURCE files to corresponding destination files -s, --strip strip symbol tables --strip-program=PROGRAM program used to strip binaries -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file -v, --verbose print the name of each created file or directory --preserve-context preserve SELinux security context -Z set SELinux security context of destination file and each created directory to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups AUTHOR Written by David MacKenzie. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. 
SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/install> or available locally via: info '(coreutils) install invocation' GNU coreutils 9.3 April 2023 INSTALL(1)
|
install - copy files and set attributes
|
install [OPTION]... [-T] SOURCE DEST install [OPTION]... SOURCE... DIRECTORY install [OPTION]... -t DIRECTORY SOURCE... install [OPTION]... -d DIRECTORY...
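A sketch of the first, third, and fourth synopsis forms together, staging a build artifact into a local tree (all paths and file names are illustrative):

```shell
# A stand-in for a freshly built binary.
printf '#!/bin/sh\necho hello\n' > myprog

# 4th form: create all components of the directory with an explicit mode.
install -d -m 755 stage/usr/local/bin

# 1st form: copy one file, setting its permission mode to rwxr-xr-x.
install -m 755 myprog stage/usr/local/bin/

# -t form: copy several files into one target directory.
install -d stage/usr/local/share/doc
touch README NEWS
install -m 644 -t stage/usr/local/share/doc README NEWS
```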
| null | null |
cws2fws
| null | null | null | null | null |
gif2webp
|
This manual page documents the gif2webp command. gif2webp converts a GIF image to a WebP image.
|
gif2webp - Convert a GIF image to WebP
|
gif2webp [options] input_file.gif -o output_file.webp
|
The basic options are: -o string Specify the name of the output WebP file. If omitted, gif2webp will perform conversion but only report statistics. Using "-" as output name will direct output to 'stdout'. -- string Explicitly specify the input file. This option is useful if the input file starts with a '-', for instance. This option must appear last. Any other options afterward will be ignored. If the input file is "-", the data will be read from stdin instead of a file. -h, -help Usage information. -version Print the version number (as major.minor.revision) and exit. -lossy Encode the image using lossy compression. -mixed Mixed compression mode: optimize compression of the image by picking either lossy or lossless compression for each frame heuristically. -q float Specify the compression factor for RGB channels between 0 and 100. The default is 75. In case of lossless compression (default), a small factor enables faster compression speed, but produces a larger file. Maximum compression is achieved by using a value of 100. In case of lossy compression (specified by the -lossy option), a small factor produces a smaller file with lower quality. Best quality is achieved by using a value of 100. -m int Specify the compression method to use. This parameter controls the trade-off between encoding speed and the compressed file size and quality. Possible values range from 0 to 6. Default value is 4. When higher values are used, the encoder will spend more time inspecting additional encoding possibilities and decide on the quality gain. Lower values can result in faster processing time at the expense of larger file size and lower compression quality. -min_size Encode image to achieve smallest size. This disables key frame insertion and picks the dispose method resulting in the smallest output for each frame. It uses lossless compression by default, but can be combined with -q, -m, -lossy or -mixed options. 
-kmin int -kmax int Specify the minimum and maximum distance between consecutive key frames (independently decodable frames) in the output animation. The tool will insert some key frames into the output animation as needed so that this criterion is satisfied. A 'kmax' value of 0 will turn off insertion of key frames. A 'kmax' value of 1 will result in all frames being key frames. The 'kmin' value is not taken into account in both these special cases. Typical values are in the range 3 to 30. Default values are kmin = 9, kmax = 17 for lossless compression and kmin = 3, kmax = 5 for lossy compression. These two options are relevant only for animated images with a large number of frames (>50). When lower values are used, more frames will be converted to key frames. This may lead to a smaller number of frames required to decode a frame on average, thereby improving the decoding performance. But this may lead to slightly bigger file sizes. Higher values may lead to worse decoding performance, but smaller file sizes. Some restrictions: (i) kmin < kmax, (ii) kmin >= kmax / 2 + 1 and (iii) kmax - kmin <= 30. If any of these restrictions are not met, they will be enforced automatically. -metadata string A comma-separated list of metadata to copy from the input to the output if present. Valid values: all, none, icc, xmp. The default is xmp. -f int For lossy encoding only (specified by the -lossy option). Specify the strength of the deblocking filter, between 0 (no filtering) and 100 (maximum filtering). A value of 0 will turn off any filtering. Higher values will increase the strength of the filtering process applied after decoding the picture. The higher the value the smoother the picture will appear. Typical values are usually in the range of 20 to 50. -mt Use multi-threading for encoding, if possible. -loop_compatibility If enabled, handle the loop information in a compatible fashion for Chrome versions prior to M62 (inclusive) and Firefox. -v Print extra information. 
-quiet Do not print anything. BUGS Please report all bugs to the issue tracker: https://bugs.chromium.org/p/webp Patches welcome! See this page to get started: https://www.webmproject.org/code/contribute/submitting-patches/
|
gif2webp picture.gif -o picture.webp gif2webp -q 70 picture.gif -o picture.webp gif2webp -lossy -m 3 picture.gif -o picture_lossy.webp gif2webp -lossy -f 50 picture.gif -o picture.webp gif2webp -q 70 -o picture.webp -- ---picture.gif cat picture.gif | gif2webp -o - -- - > output.webp AUTHORS gif2webp is a part of libwebp and was written by the WebP team. The latest source tree is available at https://chromium.googlesource.com/webm/libwebp This manual page was written by Urvang Joshi <urvang@google.com>, for the Debian project (and may be used by others). SEE ALSO cwebp(1), dwebp(1), webpmux(1) Please refer to https://developers.google.com/speed/webp/ for additional information. November 17, 2021 GIF2WEBP(1)
|
pyrsa-encrypt
| null | null | null | null | null |
psicc
|
lcms is a standalone CMM engine, which deals with color management. It implements a fast transformation between ICC profiles. psicc is a little cms PostScript converter.
|
psicc - little cms PostScript converter.
|
psicc [options]
|
-b Black point compensation (CRD only). -c precision Precision (0=LowRes, 1=Normal, 2=Hi-res) (CRD only) [defaults to 1]. -i profile Input profile: Generates Color Space Array (CSA). -n gridpoints Alternate way to set precision, number of CLUT points (CRD only). -o profile Output profile: Generates Color Rendering Dictionary (CRD). -t intent Intent (0=Perceptual, 1=Colorimetric, 2=Saturation, 3=Absolute) [defaults to 0]. -u Do NOT generate resource name on CRD. NOTES For suggestions, comments, bug reports etc. send mail to info@littlecms.com. SEE ALSO jpgicc(1), linkicc(1), tificc(1), transicc(1) AUTHOR This manual page was written by Shiju p. Nair <shiju.p@gmail.com>, for the Debian project. September 30, 2004 PSICC(1)
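A sketch of the two generation modes, assuming hypothetical profile file names (sRGB.icc and printer.icc are illustrative, not shipped with the tool):

```shell
# Emit a Color Space Array (CSA) for an input profile.
psicc -i sRGB.icc > srgb_csa.ps

# Emit a hi-res Color Rendering Dictionary (CRD) with black point
# compensation for an output profile, using the default intent.
psicc -o printer.icc -c 2 -b > printer_crd.ps
```

The resulting PostScript fragments can then be embedded in a print job so the interpreter performs the color transform.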
| null |
djxl_fuzzer_corpus
| null | null | null | null | null |
gsha224sum
|
Print or check SHA224 (224-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in RFC 3874. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha224sum> or available locally via: info '(coreutils) sha2 utilities' GNU coreutils 9.3 April 2023 SHA224SUM(1)
|
sha224sum - compute and check SHA224 message digest
|
sha224sum [OPTION]... [FILE]...
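A compute-then-verify round trip might look like this (file names are illustrative):

```shell
# Compute checksums, save them, then verify the files later.
printf 'hello\n' > a.txt
printf 'world\n' > b.txt
sha224sum a.txt b.txt > SHA224SUMS
sha224sum --check SHA224SUMS     # prints "a.txt: OK" and "b.txt: OK"

# Checksum data arriving on standard input.
printf 'hello\n' | sha224sum
```

In scripts, --check --status suppresses all output and reports success or failure purely through the exit code.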
| null | null |
p11tool
|
Program that allows operations on PKCS #11 smart cards and security modules. To use PKCS #11 tokens with GnuTLS the p11-kit configuration files need to be set up. That is, create a .module file in /etc/pkcs11/modules with the contents 'module: /path/to/pkcs11.so'. Alternatively the configuration file /etc/gnutls/pkcs11.conf has to exist and contain a number of lines of the form 'load=/usr/lib/opensc-pkcs11.so'. You can provide the PIN to be used for the PKCS #11 operations with the environment variables GNUTLS_PIN and GNUTLS_SO_PIN.
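The setup described above can be sketched as follows; treat this as a configuration sketch rather than a ready-made script, since the module path and PIN values are illustrative and writing to these locations requires root:

```shell
# Option 1: register the module with p11-kit (module path is illustrative).
mkdir -p /etc/pkcs11/modules
echo 'module: /usr/lib/opensc-pkcs11.so' > /etc/pkcs11/modules/opensc.module

# Option 2: legacy GnuTLS configuration file.
echo 'load=/usr/lib/opensc-pkcs11.so' >> /etc/gnutls/pkcs11.conf

# Supply PINs non-interactively through the environment, then verify
# the token is visible.
export GNUTLS_PIN=1234
export GNUTLS_SO_PIN=5678
p11tool --list-tokens
```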
|
p11tool - GnuTLS PKCS #11 tool
|
p11tool [-flags] [-flag [value]] [--option-name[[=| ]value]] [url] Operands and options may be intermixed. They will be reordered.
|
Tokens --list-tokens List all available tokens. --list-token-urls List the URLs of the available tokens. This is a more compact version of --list-tokens. --list-mechanisms List all available mechanisms in a token. --initialize Initializes a PKCS #11 token. --initialize-pin Initializes/Resets a PKCS #11 token user PIN. --initialize-so-pin Initializes/Resets a PKCS #11 token security officer PIN. This initializes the security officer's PIN. When used non-interactively use the GNUTLS_NEW_SO_PIN environment variable to initialize the SO's PIN. --set-pin=str Specify the PIN to use on token operations. Alternatively the GNUTLS_PIN environment variable may be used. --set-so-pin=str Specify the Security Officer's PIN to use on token initialization. Alternatively the GNUTLS_SO_PIN environment variable may be used. Object listing --list-all List all available objects in a token. All objects available in the token will be listed. That includes objects which are potentially inaccessible using this tool. --list-all-certs List all available certificates in a token. That option will also provide more information on the certificates, for example, expand the attached extensions in a trust token (like p11-kit-trust). --list-certs List all certificates that have an associated private key. That option will only display certificates which have a private key associated with them (share the same ID). --list-all-privkeys List all available private keys in a token. Lists all the private keys in a token that match the specified URL. --list-privkeys This is an alias for the --list-all-privkeys option. --list-keys This is an alias for the --list-all-privkeys option. --list-all-trusted List all available certificates marked as trusted. --export Export the object specified by the URL. This option must not appear in combination with any of the following options: export-stapled, export-chain, export-pubkey. --export-stapled Export the certificate object specified by the URL. 
This option must not appear in combination with any of the following options: export, export-chain, export-pubkey. Exports the certificate specified by the URL while including any attached extensions to it. Since attached extensions are a p11-kit extension, this option is only available on p11-kit registered trust modules. --export-chain Export the certificate specified by the URL and its chain of trust. This option must not appear in combination with any of the following options: export-stapled, export, export-pubkey. Exports the certificate specified by the URL and generates its chain of trust based on the stored certificates in the module. --export-pubkey Export the public key for a private key. This option must not appear in combination with any of the following options: export-stapled, export, export-chain. Exports the public key for the specified private key. --info List information on an available object in a token. --trusted This is an alias for the --mark-trusted option. --distrusted This is an alias for the --mark-distrusted option. Key generation --generate-privkey=str Generate a private-public key pair of the given type. Generates a private-public key pair in the specified token. Acceptable types are RSA, ECDSA, Ed25519, and DSA. Should be combined with --sec-param or --bits. --generate-rsa Generate an RSA private-public key pair. Generates an RSA private-public key pair on the specified token. Should be combined with --sec-param or --bits. NOTE: THIS OPTION IS DEPRECATED --generate-dsa Generate a DSA private-public key pair. Generates a DSA private-public key pair on the specified token. Should be combined with --sec-param or --bits. NOTE: THIS OPTION IS DEPRECATED --generate-ecc Generate an ECDSA private-public key pair. Generates an ECDSA private-public key pair on the specified token. Should be combined with --curve, --sec-param or --bits. NOTE: THIS OPTION IS DEPRECATED --bits=num Specify the number of bits for the key to generate. 
This option takes an integer number as its argument. For applications which have no key-size restrictions the --sec-param option is recommended, as the sec-param levels will adapt to the acceptable security levels with new versions of gnutls. --curve=str Specify the curve used for EC key generation. Supported values are secp192r1, secp224r1, secp256r1, secp384r1 and secp521r1. --sec-param=security parameter Specify the security level. This is an alternative to the --bits option. Available options are [low, legacy, medium, high, ultra]. Writing objects --set-id=str Set the CKA_ID (in hex) for the object specified by the URL. This option must not appear in combination with any of the following options: write. Modifies or sets the CKA_ID in the object specified by the URL. The ID should be specified in hexadecimal format without a '0x' prefix. --set-label=str Set the CKA_LABEL for the object specified by the URL. This option must not appear in combination with any of the following options: write, set-id. Modifies or sets the CKA_LABEL in the object specified by the URL. --write Writes the loaded objects to a PKCS #11 token. It can be used to write private keys, public keys, certificates or secret keys to a token. Must be combined with one of the --load-privkey, --load-pubkey, or --load-certificate options. When writing a certificate object, its CKA_ID is set to the same CKA_ID of the corresponding public key, if it exists on the token; otherwise it will be derived from the X.509 Subject Key Identifier of the certificate. If this behavior is undesired, write the public key to the token beforehand. --delete Deletes the objects matching the given PKCS #11 URL. --label=str Sets a label for the write operation. --id=str Sets an ID for the write operation. Sets the CKA_ID to be set by the write operation. The ID should be specified in hexadecimal format without a '0x' prefix. --mark-wrap, --no-mark-wrap Marks the generated key to be a wrapping key. 
The no-mark-wrap form will disable the option. Marks the generated key with the CKA_WRAP flag. --mark-trusted, --no-mark-trusted Marks the object to be written as trusted. This option must not appear in combination with any of the following options: mark-distrusted. The no-mark-trusted form will disable the option. Marks the object to be generated/written with the CKA_TRUST flag. --mark-distrusted When retrieving objects, it requires the objects to be distrusted. This option must not appear in combination with any of the following options: mark-trusted. Ensures that the objects retrieved have the CKA_X_TRUST flag. This is a p11-kit trust module extension, thus this flag is only valid with p11-kit registered trust modules. --mark-decrypt, --no-mark-decrypt Marks the object to be written for decryption. The no-mark-decrypt form will disable the option. Marks the object to be generated/written with the CKA_DECRYPT flag set to true. --mark-sign, --no-mark-sign Marks the object to be written for signature generation. The no-mark-sign form will disable the option. Marks the object to be generated/written with the CKA_SIGN flag set to true. --mark-ca, --no-mark-ca Marks the object to be written as a CA. The no-mark-ca form will disable the option. Marks the object to be generated/written with the CKA_CERTIFICATE_CATEGORY as CA. --mark-private, --no-mark-private Marks the object to be written as private. The no-mark-private form will disable the option. Marks the object to be generated/written with the CKA_PRIVATE flag. The written object will require a PIN to be used. --ca This is an alias for the --mark-ca option. --private This is an alias for the --mark-private option. --mark-always-authenticate, --no-mark-always-authenticate Marks the object to be written as always authenticate. The no-mark-always-authenticate form will disable the option. Marks the object to be generated/written with the CKA_ALWAYS_AUTHENTICATE flag. 
The written object will require authentication (PIN entry) before every operation. --secret-key=str Provide a hex encoded secret key. This secret key will be written to the module if --write is specified. --load-privkey=file Private key file to use. --load-pubkey=file Public key file to use. --load-certificate=file Certificate file to use. Other options -d num, --debug=num Enable debugging. This option takes an integer number as its argument. The value of num is constrained to be in the range 0 through 9999. Specifies the debug level. --outfile=str Output file. --login, --no-login Force (user) login to token. The no-login form will disable the option. --so-login, --no-so-login Force security officer login to token. The no-so-login form will disable the option. Forces login to the token as security officer (admin). --admin-login This is an alias for the --so-login option. --test-sign Tests the signature operation of the provided object. It can be used to test the correct operation of the signature operation. If both a private and a public key are available this operation will sign and verify the signed data. --sign-params=str Sign with a specific signature algorithm. This option can be combined with --test-sign, to sign with a specific signature algorithm variant. The only option supported is 'RSA-PSS', and should be specified in order to use RSA-PSS signature on RSA keys. --hash=str Hash algorithm to use for signing. This option can be combined with --test-sign. Available hash functions are SHA1, RMD160, SHA256, SHA384, SHA512, SHA3-224, SHA3-256, SHA3-384, SHA3-512. --generate-random=num Generate random data. This option takes an integer number as its argument. Asks the token to generate the given number of random bytes. -8, --pkcs8 Use PKCS #8 format for private keys. --inder, --no-inder Use DER/RAW format for input. The no-inder form will disable the option. Use DER/RAW format for input certificates and private keys. 
--inraw This is an alias for the --inder option. --outder, --no-outder Use DER format for output certificates, private keys, and DH parameters. The no-outder form will disable the option. The output will be in DER or RAW format. --outraw This is an alias for the --outder option. --provider=file Specify the PKCS #11 provider library. This will override the default options in /etc/gnutls/pkcs11.conf --provider-opts=str Specify parameters for the PKCS #11 provider library. This is a PKCS#11 internal option used by a few modules, mainly for testing PKCS#11 modules. NOTE: THIS OPTION IS DEPRECATED --detailed-url, --no-detailed-url Print detailed URLs. The no-detailed-url form will disable the option. --only-urls Print a compact listing using only the URLs. --batch Disable all interaction with the tool. In batch mode there will be no prompts; all parameters need to be specified on the command line. -v arg, --version=arg Output version of program and exit. The default mode is `v', a simple version. The `c' mode will print copyright information and `n' will print the full copyright notice. -h, --help Display usage information and exit. -!, --more-help Pass the extended usage information through a pager.
|
To view all tokens in your system use: $ p11tool --list-tokens To view all objects in a token use: $ p11tool --login --list-all "pkcs11:TOKEN-URL" To store a private key and a certificate in a token run: $ p11tool --login --write "pkcs11:URL" --load-privkey key.pem --label "Mykey" $ p11tool --login --write "pkcs11:URL" --load-certificate cert.pem --label "Mykey" Note that some tokens require the same label to be used for the certificate and its corresponding private key. To generate an RSA private key inside the token use: $ p11tool --login --generate-privkey rsa --bits 1024 --label "MyNewKey" --outfile MyNewKey.pub "pkcs11:TOKEN-URL" The bits parameter in the above example is explicitly set because some tokens only support limited choices in the bit length. The output file is the corresponding public key. This key can be used to generate a certificate request with certtool: certtool --generate-request --load-privkey "pkcs11:KEY-URL" --load-pubkey MyNewKey.pub --outfile request.pem EXIT STATUS One of the following exit values will be returned: 0 (EXIT_SUCCESS) Successful program execution. 1 (EXIT_FAILURE) The operation failed or the command syntax was not valid. SEE ALSO certtool (1) AUTHORS COPYRIGHT Copyright (C) 2020-2023 Free Software Foundation, and others; all rights reserved. This program is released under the terms of the GNU General Public License, version 3 or later. BUGS Please send bug reports to: bugs@gnutls.org 3.8.4 19 Mar 2024 p11tool(1)
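The --set-id and --id options expect the CKA_ID as plain hexadecimal with no '0x' prefix. A small illustrative shell helper (not part of p11tool) that checks a candidate string meets that format before passing it on:

```shell
# Return success if $1 is a plain hex string (no '0x' prefix, even number
# of hex digits), as p11tool expects for --set-id / --id.
valid_cka_id() {
  case "$1" in
    0x*|0X*|"") return 1 ;;
  esac
  printf '%s' "$1" | grep -Eq '^([0-9a-fA-F]{2})+$'
}
```

For example, `valid_cka_id deadbeef && p11tool --login --set-id deadbeef "pkcs11:URL"` (URL hypothetical).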
|
recode-sr-latin
|
Recode Serbian text from Cyrillic to Latin script. The input text is read from standard input. The converted text is output to standard output. Informative output: -h, --help display this help and exit -V, --version output version information and exit AUTHOR Written by Danilo Segan and Bruno Haible. REPORTING BUGS Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT Copyright © 2006-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO The full documentation for recode-sr-latin is maintained as a Texinfo manual. If the info and recode-sr-latin programs are properly installed at your site, the command info recode-sr-latin should give you access to the complete manual. GNU gettext-tools 0.22.5 February 2024 RECODE-SR-LATIN(1)
|
recode-sr-latin - convert Serbian text from Cyrillic to Latin script
|
recode-sr-latin [OPTION]
| null | null |
speexenc
|
Encodes input_file using Speex. It can read WAV or raw files. input_file can be: filename.wav WAV file filename.* Raw PCM file (any extension other than .wav) - stdin output_file can be: filename.spx Speex file - stdout
|
speexenc - The reference implementation Speex encoder.
|
speexenc [options] input_file output_file
|
-n, --narrowband Narrowband (8 kHz) input file -w, --wideband Wideband (16 kHz) input file -u, --ultra-wideband "Ultra-wideband" (32 kHz) input file --quality n Encoding quality (0-10), default 8 --bitrate n Encoding bit-rate (use bit-rate n or lower) --vbr Enable variable bit-rate (VBR) --abr rate Enable average bit-rate (ABR) at rate bps --vad Enable voice activity detection (VAD) --dtx Enable file-based discontinuous transmission (DTX) --comp n Set encoding complexity (0-10), default 3 --nframes n Number of frames per Ogg packet (1-10), default 1 --comment Add the given string as an extra comment. This may be used multiple times --author Author of this track --title Title for this track -h, --help This help -v, --version Version information -V Verbose mode (show bit-rate) Raw input options: --rate n Sampling rate for raw input --stereo Consider raw input as stereo --le Raw input is little-endian --be Raw input is big-endian --8bit Raw input is 8-bit unsigned --16bit Raw input is 16-bit signed Default raw PCM input is 16-bit, little-endian, mono More information is available from the Speex site: http://www.speex.org Please report bugs to the mailing list `speex-dev@xiph.org'. COPYRIGHT Copyright © 2002 Jean-Marc Valin speexenc version 1.1 September 2003 SPEEXENC(1)
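The default raw input format is 16-bit, little-endian, mono PCM. As a sketch, a tiny raw input file can be built by hand with printf (filenames hypothetical; the four samples are 0, 1000, -1000, 0 encoded as little-endian signed 16-bit values):

```shell
# 1000  = 0x03E8 -> bytes E8 03 (octal \350 \003)
# -1000 = 0xFC18 -> bytes 18 FC (octal \030 \374)
printf '\000\000\350\003\030\374\000\000' > input.raw
```

Such a file might then be encoded with `speexenc --rate 8000 input.raw tone.spx`, since raw input requires the sampling rate to be given explicitly.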
| null |
glib-gettextize
| null | null | null | null | null |
xzcmp
|
xzcmp and xzdiff compare uncompressed contents of two files. Uncompressed data and options are passed to cmp(1) or diff(1) unless --help or --version is specified. If both file1 and file2 are specified, they can be uncompressed files or files in formats that xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) can decompress. The required decompression commands are determined from the filename suffixes of file1 and file2. A file with an unknown suffix is assumed to be either uncompressed or in a format that xz(1) can decompress. If only one filename is provided, file1 must have a suffix of a supported compression format and the name for file2 is assumed to be file1 with the compression format suffix removed. The commands lzcmp and lzdiff are provided for backward compatibility with LZMA Utils. EXIT STATUS If a decompression error occurs, the exit status is 2. Otherwise the exit status of cmp(1) or diff(1) is used. SEE ALSO cmp(1), diff(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1) Tukaani 2024-02-13 XZDIFF(1)
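The single-filename behavior described above can be sketched as a shell function (the suffix list is assumed from the decompressors named; the real scripts inspect the suffix in a comparable way):

```shell
# Derive the name of file2 from file1 by stripping a known
# compression suffix, as xzcmp/xzdiff do when given one filename.
derive_file2() {
  case "$1" in
    *.xz)   printf '%s\n' "${1%.xz}" ;;
    *.lzma) printf '%s\n' "${1%.lzma}" ;;
    *.lz4)  printf '%s\n' "${1%.lz4}" ;;
    *.lz)   printf '%s\n' "${1%.lz}" ;;
    *.gz)   printf '%s\n' "${1%.gz}" ;;
    *.bz2)  printf '%s\n' "${1%.bz2}" ;;
    *.zst)  printf '%s\n' "${1%.zst}" ;;
    *)      echo "unsupported suffix: $1" >&2; return 1 ;;
  esac
}
```

So `xzcmp log.txt.xz` behaves like `xzcmp log.txt.xz log.txt`.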
|
xzcmp, xzdiff, lzcmp, lzdiff - compare compressed files
|
xzcmp [option...] file1 [file2] xzdiff ... lzcmp ... lzdiff ...
| null | null |
shred
|
Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data. If FILE is -, shred standard output. Mandatory arguments to long options are mandatory for short options too. -f, --force change permissions to allow writing if necessary -n, --iterations=N overwrite N times instead of the default (3) --random-source=FILE get random bytes from FILE -s, --size=N shred this many bytes (suffixes like K, M, G accepted) -u deallocate and remove file after overwriting --remove[=HOW] like -u but give control on HOW to delete; See below -v, --verbose show progress -x, --exact do not round file sizes up to the next full block; this is the default for non-regular files -z, --zero add a final overwrite with zeros to hide shredding --help display this help and exit --version output version information and exit Delete FILE(s) if --remove (-u) is specified. The default is not to remove the files because it is common to operate on device files like /dev/hda, and those files usually should not be removed. The optional HOW parameter indicates how to remove a directory entry: 'unlink' => use a standard unlink call. 'wipe' => also first obfuscate bytes in the name. 'wipesync' => also sync each obfuscated byte to the device. The default mode is 'wipesync', but note it can be expensive. CAUTION: shred assumes the file system and hardware overwrite data in place. Although this is common, many platforms operate otherwise. Also, backups and mirrors may contain unremovable copies that will let a shredded file be recovered later. See the GNU coreutils manual for details. AUTHOR Written by Colin Plumb. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. 
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/shred> or available locally via: info '(coreutils) shred invocation' GNU coreutils 9.3 April 2023 SHRED(1)
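A minimal demonstration of the overwrite-then-remove workflow (assumes GNU shred is installed; uses a scratch directory so no real data is touched):

```shell
cd "$(mktemp -d)"
printf 'secret data\n' > note.txt
# 5 overwrite passes, a final pass of zeros, then deallocate and remove
shred -n 5 -z -u note.txt
```

After this, note.txt no longer exists; remember the CAUTION above about filesystems that do not overwrite in place.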
|
shred - overwrite a file to hide its contents, and optionally delete it
|
shred [OPTION]... FILE...
| null | null |
pdftops
|
Pdftops converts Portable Document Format (PDF) files to PostScript so they can be printed. Pdftops reads the PDF file, PDF-file, and writes a PostScript file, PS-file. If PS-file is not specified, pdftops converts file.pdf to file.ps (or file.eps with the -eps option). If PS-file is '-', the PostScript is sent to stdout. If PDF-file is '-', pdftops reads the PDF file from stdin.
|
pdftops - Portable Document Format (PDF) to PostScript converter (version 3.03)
|
pdftops [options] <PDF-file> [<PS-file>]
|
-f number Specifies the first page to print. -l number Specifies the last page to print. -level1 Generate Level 1 PostScript. The resulting PostScript files will be significantly larger (if they contain images), but will print on Level 1 printers. This also converts all images to black and white. No more than one of the PostScript level options (-level1, -level1sep, -level2, -level2sep, -level3, -level3sep) may be given. -level1sep Generate Level 1 separable PostScript. All colors are converted to CMYK. Images are written with separate stream data for the four components. -level2 Generate Level 2 PostScript. Level 2 supports color images and image compression. This is the default setting. -level2sep Generate Level 2 separable PostScript. All colors are converted to CMYK. The PostScript separation convention operators are used to handle custom (spot) colors. -level3 Generate Level 3 PostScript. This enables all Level 2 features plus CID font embedding. -level3sep Generate Level 3 separable PostScript. The separation handling is the same as for -level2sep. -eps Generate an Encapsulated PostScript (EPS) file. An EPS file contains a single image, so if you use this option with a multi- page PDF file, you must use -f and -l to specify a single page. No more than one of the mode options (-eps, -form) may be given. -form Generate a PostScript form which can be imported by software that understands forms. A form contains a single page, so if you use this option with a multi-page PDF file, you must use -f and -l to specify a single page. The -level1 option cannot be used with -form. No more than one of the mode options (-eps, -form) may be given. -opi Generate OPI comments for all images and forms which have OPI information. (This option is only available if pdftops was compiled with OPI support.) -binary Write binary data in Level 1 PostScript. By default, pdftops writes hex-encoded data in Level 1 PostScript. 
Binary data is non-standard in Level 1 PostScript but reduces the file size and can be useful when Level 1 PostScript is required only for its restricted use of PostScript operators. -r number Set the resolution in DPI when pdftops rasterizes images with transparencies or, for Level 1 PostScript, when pdftops rasterizes images with color masks. By default, pdftops rasterizes images to 300 DPI. -noembt1 By default, any Type 1 fonts which are embedded in the PDF file are copied into the PostScript file. This option causes pdftops to substitute base fonts instead. Embedded fonts make PostScript files larger, but may be necessary for readable output. -noembtt By default, any TrueType fonts which are embedded in the PDF file are copied into the PostScript file. This option causes pdftops to substitute base fonts instead. Embedded fonts make PostScript files larger, but may be necessary for readable output. Also, some PostScript interpreters do not have TrueType rasterizers. -noembcidps By default, any CID PostScript fonts which are embedded in the PDF file are copied into the PostScript file. This option disables that embedding. No attempt is made to substitute for non-embedded CID PostScript fonts. -noembcidtt By default, any CID TrueType fonts which are embedded in the PDF file are copied into the PostScript file. This option disables that embedding. No attempt is made to substitute for non- embedded CID TrueType fonts. -passfonts By default, references to non-embedded 8-bit fonts in the PDF file are substituted with the closest "Helvetica", "Times- Roman", or "Courier" font. This option passes references to non-embedded fonts through to the PostScript file. -aaRaster yes | no Enable or disable raster anti-aliasing. This defaults to "no". pdftops may need to rasterize transparencies and pattern image masks in the PDF. If the PostScript will be printed, leave -aaRaster disabled and set -r to the resolution of the printer. 
If the PostScript will be viewed, enabling -aaRaster may make rasterized text easier to read. -rasterize always | never | whenneeded By default, pdftops rasterizes pages as needed, for example, if they contain transparencies. To force rasterization, set -rasterize to "always". Use this to eliminate fonts. To prevent rasterization, set -rasterize to "never". This may produce files that display incorrectly. -processcolorformat MONO8 | CMYK8 | RGB8 Sets the process color format as it is used during rasterization and transparency reduction. The default depends on the other settings: For -level1 the default is MONO8, for -level{1,2,3}sep or -overprint the default is CMYK8, and in all other cases RGB8 is the default. If -processcolorprofile is given then -processcolorformat is inferred from the specified ICC profile. -processcolorprofile filename Sets the ICC profile that is assumed during rasterization and transparency reduction. -defaultgrayprofile defaultgrayprofilefile If poppler is compiled with colour management support, this option sets the DefaultGray color space to the ICC profile stored in defaultgrayprofilefile. -defaultrgbprofile defaultrgbprofilefile If poppler is compiled with colour management support, this option sets the DefaultRGB color space to the ICC profile stored in defaultrgbprofilefile. -defaultcmykprofile defaultcmykprofilefile If poppler is compiled with colour management support, this option sets the DefaultCMYK color space to the ICC profile stored in defaultcmykprofilefile. -optimizecolorspace By default, bitmap images in the PDF pass through to the output PostScript in their original color space, which produces predictable results. This option converts RGB and CMYK images into Gray images if every pixel of the image has equal components. This can fix problems when doing color separations of PDFs that contain embedded black and white images encoded as RGB. 
-preload preload images and forms -paper size Set the paper size to one of "letter", "legal", "A4", or "A3". This can also be set to "match", which will set the paper size of each page to match the size specified in the PDF file. If none of the -paper, -paperw, or -paperh options is specified, the default is to match the paper size. -paperw size Set the paper width, in points. -paperh size Set the paper height, in points. -origpagesizes This option is the same as "-paper match". -nocrop By default, output is cropped to the CropBox specified in the PDF file. This option disables cropping. -expand Expand PDF pages smaller than the paper to fill the paper. By default, these pages are not scaled. -noshrink Don't scale PDF pages which are larger than the paper. By default, pages larger than the paper are shrunk to fit. -nocenter By default, PDF pages smaller than the paper (after any scaling) are centered on the paper. This option causes them to be aligned to the lower-left corner of the paper instead. -duplex Set the Duplex pagedevice entry in the PostScript file. This tells duplex-capable printers to enable duplexing. -opw password Specify the owner password for the PDF file. Providing this will bypass all security restrictions. -upw password Specify the user password for the PDF file. -overprint Enable overprint emulation during rasterization. For -processcolorformat being CMYK8 and the language level being higher than 2, this option is set to true by default. Note: This option requires -processcolorformat to be CMYK8. -q Don't print any messages or errors. -v Print copyright and version information. -h Print usage information. (-help and --help are equivalent.) EXIT CODES The Xpdf tools use the following exit codes: 0 No error. 1 Error opening a PDF file. 2 Error opening an output file. 3 Error related to PDF permissions. 99 Other error. AUTHOR The pdftops software and documentation are copyright 1996-2011 Glyph & Cog, LLC. 
SEE ALSO pdfdetach(1), pdffonts(1), pdfimages(1), pdfinfo(1), pdftocairo(1), pdftohtml(1), pdftoppm(1), pdftotext(1) pdfseparate(1), pdfsig(1), pdfunite(1) 15 August 2011 pdftops(1)
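The exit codes listed above can be turned into readable messages with a small hypothetical wrapper function (illustrative only, not part of pdftops):

```shell
# Map pdftops exit codes (0/1/2/3/99, per the EXIT CODES section)
# to human-readable messages.
check_pdftops_status() {
  case "$1" in
    0)  echo "no error" ;;
    1)  echo "error opening a PDF file" ;;
    2)  echo "error opening an output file" ;;
    3)  echo "error related to PDF permissions" ;;
    99) echo "other error" ;;
    *)  echo "unknown status: $1" ;;
  esac
}
```

Typical use: `pdftops in.pdf out.ps; check_pdftops_status $?` (filenames hypothetical).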
| null |
cert_app
| null | null | null | null | null |
giftool
|
A filter for transforming GIFs. With no options, it's an expensive copy of a GIF in standard input to standard output. Options specify filtering operations and are performed in the order specified on the command line. The -n option selects images, allowing the tool to act on a subset of images in a multi-image GIF. This option takes a comma-separated list of decimal integers which are interpreted as 1-origin image indices; these are the images that will be acted on. If no -n option is specified, the tool will select and transform all images. The -b option takes a decimal integer argument and uses it to set the (0-origin) screen background color index. The -f option accepts a printf-style format string and substitutes into it the values of image-descriptor and graphics-control fields. The string is formatted and output once for each selected image. Normal C-style escapes \b, \f, \n, \r, \t, \v, and \xNN are interpreted; also \e produces ESC (ASCII 0x1b). The following format cookies are substituted: %a Pixel aspect byte. %b Screen background color. %d Image delay time %h Image height (y dimension) %n Image index %p Image position (as an x,y pair) %s Screen size (as an x,y pair) %t Image transparent-color index %u Image user-input flag (boolean) %v GIF version string %w Image width (x dimension) %x Image GIF89 disposal mode %z Image's color table sort flag (boolean, false if no local color map) Boolean substitutions may take a prefix to modify how they are displayed: 1 "1" or "0" o "on" or "off" t "t" or "f" y "yes" or "no" Thus, for example, "%oz" displays image sort flags using the strings "on" and "off". The default with no prefix is numeric. The -a option takes an unsigned decimal integer argument and uses it to set the aspect-ratio byte in the logical screen descriptor block. The -b option takes an unsigned decimal integer argument and uses it to set the background color index in the logical screen descriptor block. 
The -d option takes a decimal integer argument and uses it to set a delay time, in hundredths of a second, on selected images. The -i option sets or clears interlacing in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f". The -p option takes a (0-origin) x,y coordinate-pair and sets it as the preferred upper-left-corner coordinates of selected images. The -s option takes a (0-origin) x,y coordinate-pair and sets it as the expected display screen size. The -t option takes a decimal integer argument and uses it to set the (0-origin) index of the transparency color in selected images. The -u option sets or clears the user-input flag in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f". The -x option takes a decimal integer argument and uses it to set the GIF89 disposal mode in selected images. The -z option sets or clears the color-table sort flag in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f". Note that the -a, -b, -p, -s, and -z options are included to complete the ability to modify all fields defined in the GIF standard, but should have no effect on how an image renders on browsers or modern viewers. AUTHOR Eric S. Raymond. GIFLIB 3 June 2012 GIFTOOL(1)
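The boolean display prefixes described above can be sketched as a shell function (illustrative only, not giftool's actual implementation):

```shell
# Render a boolean cookie value the way giftool's prefixes do:
# $1 = 0/1 flag, $2 = optional prefix (1, o, t, y); default is numeric.
render_bool() {
  case "${2:-1}" in
    1) t=1;   f=0 ;;
    o) t=on;  f=off ;;
    t) t=t;   f=f ;;
    y) t=yes; f=no ;;
  esac
  if [ "$1" -ne 0 ]; then printf '%s\n' "$t"; else printf '%s\n' "$f"; fi
}
```

So a format of "%oz" would render a set sort flag as "on", and plain "%z" as "1".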
|
giftool - GIF transformation tool
|
giftool [-a aspect] [-b bgcolor] [-d delaytime] [-f format] [-i interlacing] [-n imagelist] [-p left,top] [-s width,height] [-t transcolor] [-u user-input-flag] [-x disposal] [-z sort-flag]
| null | null |
ovc
| null | null | null | null | null |
evaluate-cli
| null | null | null | null | null |
qprofdiff
| null | null | null | null | null |
lzcat
|
xz is a general-purpose data compression tool with command line syntax similar to gzip(1) and bzip2(1). The native file format is the .xz format, but the legacy .lzma format used by LZMA Utils and raw compressed streams with no container format headers are also supported. In addition, decompression of the .lz format used by lzip is supported. xz compresses or decompresses each file according to the selected operation mode. If no files are given or file is -, xz reads from standard input and writes the processed data to standard output. xz will refuse (display an error and skip the file) to write compressed data to standard output if it is a terminal. Similarly, xz will refuse to read compressed data from standard input if it is a terminal. Unless --stdout is specified, files other than - are written to a new file whose name is derived from the source file name: • When compressing, the suffix of the target file format (.xz or .lzma) is appended to the source filename to get the target filename. • When decompressing, the .xz, .lzma, or .lz suffix is removed from the filename to get the target filename. xz also recognizes the suffixes .txz and .tlz, and replaces them with the .tar suffix. If the target file already exists, an error is displayed and the file is skipped. Unless writing to standard output, xz will display a warning and skip the file if any of the following applies: • File is not a regular file. Symbolic links are not followed, and thus they are not considered to be regular files. • File has more than one hard link. • File has setuid, setgid, or sticky bit set. • The operation mode is set to compress and the file already has a suffix of the target file format (.xz or .txz when compressing to the .xz format, and .lzma or .tlz when compressing to the .lzma format). • The operation mode is set to decompress and the file doesn't have a suffix of any of the supported file formats (.xz, .txz, .lzma, .tlz, or .lz). 
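The target-filename rules above can be sketched as a shell function (a simplification; the real xz also honors options such as --suffix):

```shell
# Derive the target filename the way xz does:
# $1 = source name, $2 = compress|decompress.
xz_target_name() {
  if [ "$2" = compress ]; then
    printf '%s\n' "$1.xz"
    return
  fi
  case "$1" in
    *.txz)  printf '%s\n' "${1%.txz}.tar" ;;   # .txz -> .tar
    *.tlz)  printf '%s\n' "${1%.tlz}.tar" ;;   # .tlz -> .tar
    *.xz)   printf '%s\n' "${1%.xz}" ;;
    *.lzma) printf '%s\n' "${1%.lzma}" ;;
    *.lz)   printf '%s\n' "${1%.lz}" ;;
    *)      echo "no supported suffix: $1" >&2; return 1 ;;
  esac
}
```

Note the *.txz and *.tlz patterns must be tried before the plain suffixes, since a .txz name also matches *.xz.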
After successfully compressing or decompressing the file, xz copies the owner, group, permissions, access time, and modification time from the source file to the target file. If copying the group fails, the permissions are modified so that the target file doesn't become accessible to users who didn't have permission to access the source file. xz doesn't support copying other metadata like access control lists or extended attributes yet. Once the target file has been successfully closed, the source file is removed unless --keep was specified. The source file is never removed if the output is written to standard output or if an error occurs. Sending SIGINFO or SIGUSR1 to the xz process makes it print progress information to standard error. This has only limited use since when standard error is a terminal, using --verbose will display an automatically updating progress indicator. Memory usage The memory usage of xz varies from a few hundred kilobytes to several gigabytes depending on the compression settings. The settings used when compressing a file determine the memory requirements of the decompressor. Typically the decompressor needs 5 % to 20 % of the amount of memory that the compressor needed when creating the file. For example, decompressing a file created with xz -9 currently requires 65 MiB of memory. Still, it is possible to have .xz files that require several gigabytes of memory to decompress. Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. While some operating systems provide ways to limit the memory usage of processes, relying on it wasn't deemed to be flexible enough (for example, using ulimit(1) to limit virtual memory tends to cripple mmap(2)). The memory usage limiter can be enabled with the command line option --memlimit=limit. 
Often it is more convenient to enable the limiter by default by setting the environment variable XZ_DEFAULTS, for example, XZ_DEFAULTS=--memlimit=150MiB. It is possible to set the limits separately for compression and decompression by using --memlimit-compress=limit and --memlimit-decompress=limit. Using these two options outside XZ_DEFAULTS is rarely useful because a single run of xz cannot do both compression and decompression and --memlimit=limit (or -M limit) is shorter to type on the command line. If the specified memory usage limit is exceeded when decompressing, xz will display an error and decompressing the file will fail. If the limit is exceeded when compressing, xz will try to scale the settings down so that the limit is no longer exceeded (except when using --format=raw or --no-adjust). This way the operation won't fail unless the limit is very small. The scaling of the settings is done in steps that don't match the compression level presets, for example, if the limit is only slightly less than the amount required for xz -9, the settings will be scaled down only a little, not all the way down to xz -8. Concatenation and padding with .xz files It is possible to concatenate .xz files as is. xz will decompress such files as if they were a single .xz file. It is possible to insert padding between the concatenated parts or after the last part. The padding must consist of null bytes and the size of the padding must be a multiple of four bytes. This can be useful, for example, if the .xz file is stored on a medium that measures file sizes in 512-byte blocks. Concatenation and padding are not allowed with .lzma files or raw streams.
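The concatenation behaviour can be sketched as follows (file names are made up):

```shell
printf 'part one\n' | xz > a.xz
printf 'part two\n' | xz > b.xz
cat a.xz b.xz > joined.xz   # two .xz streams concatenated as is
xz -dc joined.xz            # decompressed as if it were a single .xz file
```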
xz, unxz, xzcat, lzma, unlzma, lzcat - compress or decompress .xz and .lzma files
xz [option...] [file...] COMMAND ALIASES unxz is equivalent to xz --decompress. xzcat is equivalent to xz --decompress --stdout. lzma is equivalent to xz --format=lzma. unlzma is equivalent to xz --format=lzma --decompress. lzcat is equivalent to xz --format=lzma --decompress --stdout. When writing scripts that need to decompress files, it is recommended to always use the name xz with appropriate arguments (xz -d or xz -dc) instead of the names unxz and xzcat.
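The alias equivalences above can be checked directly (a sketch; f.xz is a made-up name):

```shell
printf 'abc\n' | xz > f.xz
xzcat f.xz     # alias for: xz --decompress --stdout
xz -dc f.xz    # the spelling recommended for scripts
```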
Integer suffixes and special values In most places where an integer argument is expected, an optional suffix is supported to easily indicate large integers. There must be no space between the integer and the suffix. KiB Multiply the integer by 1,024 (2^10). Ki, k, kB, K, and KB are accepted as synonyms for KiB. MiB Multiply the integer by 1,048,576 (2^20). Mi, m, M, and MB are accepted as synonyms for MiB. GiB Multiply the integer by 1,073,741,824 (2^30). Gi, g, G, and GB are accepted as synonyms for GiB. The special value max can be used to indicate the maximum integer value supported by the option. Operation mode If multiple operation mode options are given, the last one takes effect. -z, --compress Compress. This is the default operation mode when no operation mode option is specified and no other operation mode is implied from the command name (for example, unxz implies --decompress). -d, --decompress, --uncompress Decompress. -t, --test Test the integrity of compressed files. This option is equivalent to --decompress --stdout except that the decompressed data is discarded instead of being written to standard output. No files are created or removed. -l, --list Print information about compressed files. No uncompressed output is produced, and no files are created or removed. In list mode, the program cannot read the compressed data from standard input or from other unseekable sources. The default listing shows basic information about files, one file per line. To get more detailed information, use also the --verbose option. For even more information, use --verbose twice, but note that this may be slow, because getting all the extra information requires many seeks. The width of verbose output exceeds 80 characters, so piping the output to, for example, less -S may be convenient if the terminal isn't wide enough. The exact output may vary between xz versions and different locales. For machine-readable output, --robot --list should be used. 
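The --test and --list modes can be sketched like this (t.xz is an illustrative name):

```shell
printf 'payload\n' | xz > t.xz
xz -t t.xz && echo 'integrity OK'
xz -l t.xz    # one-line summary per file; use -lvv for full details
```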
Operation modifiers -k, --keep Don't delete the input files. Since xz 5.2.6, this option also makes xz compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file. In earlier versions this was only done with --force. -f, --force This option has several effects: • If the target file already exists, delete it before compressing or decompressing. • Compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file. • When used with --decompress --stdout and xz cannot recognize the type of the source file, copy the source file as is to standard output. This allows xzcat --force to be used like cat(1) for files that have not been compressed with xz. Note that in future, xz might support new compressed file formats, which may make xz decompress more types of files instead of copying them as is to standard output. --format=format can be used to restrict xz to decompress only a single file format. -c, --stdout, --to-stdout Write the compressed or decompressed data to standard output instead of a file. This implies --keep. --single-stream Decompress only the first .xz stream, and silently ignore possible remaining input data following the stream. Normally such trailing garbage makes xz display an error. xz never decompresses more than one stream from .lzma files or raw streams, but this option still makes xz ignore the possible trailing data after the .lzma file or raw stream. This option has no effect if the operation mode is not --decompress or --test. --no-sparse Disable creation of sparse files. By default, if decompressing into a regular file, xz tries to make the file sparse if the decompressed data contains long sequences of binary zeros. 
It also works when writing to standard output as long as standard output is connected to a regular file and certain additional conditions are met to make it safe. Creating sparse files may save disk space and speed up the decompression by reducing the amount of disk I/O. -S .suf, --suffix=.suf When compressing, use .suf as the suffix for the target file instead of .xz or .lzma. If not writing to standard output and the source file already has the suffix .suf, a warning is displayed and the file is skipped. When decompressing, recognize files with the suffix .suf in addition to files with the .xz, .txz, .lzma, .tlz, or .lz suffix. If the source file has the suffix .suf, the suffix is removed to get the target filename. When compressing or decompressing raw streams (--format=raw), the suffix must always be specified unless writing to standard output, because there is no default suffix for raw streams. --files[=file] Read the filenames to process from file; if file is omitted, filenames are read from standard input. Filenames must be terminated with the newline character. A dash (-) is taken as a regular filename; it doesn't mean standard input. If filenames are given also as command line arguments, they are processed before the filenames read from file. --files0[=file] This is identical to --files[=file] except that each filename must be terminated with the null character. Basic file format and compression options -F format, --format=format Specify the file format to compress or decompress: auto This is the default. When compressing, auto is equivalent to xz. When decompressing, the format of the input file is automatically detected. Note that raw streams (created with --format=raw) cannot be auto- detected. xz Compress to the .xz file format, or accept only .xz files when decompressing. lzma, alone Compress to the legacy .lzma file format, or accept only .lzma files when decompressing. 
The alternative name alone is provided for backwards compatibility with LZMA Utils.

lzip Accept only .lz files when decompressing. Compression is not supported. The .lz format version 0 and the unextended version 1 are supported. Version 0 files were produced by lzip 1.3 and older. Such files aren't common but may be found in file archives, as a few source packages were released in this format. People might have old personal files in this format too. Decompression support for the format version 0 was removed in lzip 1.18. lzip 1.4 and later create files in the format version 1. The sync flush marker extension to the format version 1 was added in lzip 1.6. This extension is rarely used and isn't supported by xz (diagnosed as corrupt input).

raw Compress or uncompress a raw stream (no headers). This is meant for advanced users only. To decode raw streams, you need to use --format=raw and explicitly specify the filter chain, which normally would have been stored in the container headers.

-C check, --check=check Specify the type of the integrity check. The check is calculated from the uncompressed data and stored in the .xz file. This option has an effect only when compressing into the .xz format; the .lzma format doesn't support integrity checks. The integrity check (if any) is verified when the .xz file is decompressed. Supported check types:

none Don't calculate an integrity check at all. This is usually a bad idea. This can be useful when integrity of the data is verified by other means anyway.

crc32 Calculate CRC32 using the polynomial from IEEE-802.3 (Ethernet).

crc64 Calculate CRC64 using the polynomial from ECMA-182. This is the default, since it is slightly better than CRC32 at detecting damaged files and the speed difference is negligible.

sha256 Calculate SHA-256. This is somewhat slower than CRC32 and CRC64.

Integrity of the .xz headers is always verified with CRC32. It is not possible to change or disable it.
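Raw streams can be exercised as follows. Because there are no headers, the same filter chain must be given when decoding as when encoding (a hedged sketch; -q silences the raw-stream warning and the preset value is arbitrary):

```shell
printf 'raw demo\n' | xz -q --format=raw --lzma2=preset=6 > d.raw
xz -q -dc --format=raw --lzma2=preset=6 < d.raw
```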
--ignore-check Don't verify the integrity check of the compressed data when decompressing. The CRC32 values in the .xz headers will still be verified normally. Do not use this option unless you know what you are doing. Possible reasons to use this option: • Trying to recover data from a corrupt .xz file. • Speeding up decompression. This matters mostly with SHA-256 or with files that have compressed extremely well. It's recommended to not use this option for this purpose unless the file integrity is verified externally in some other way. -0 ... -9 Select a compression preset level. The default is -6. If multiple preset levels are specified, the last one takes effect. If a custom filter chain was already specified, setting a compression preset level clears the custom filter chain. The differences between the presets are more significant than with gzip(1) and bzip2(1). The selected compression settings determine the memory requirements of the decompressor, thus using a too high preset level might make it painful to decompress the file on an old system with little RAM. Specifically, it's not a good idea to blindly use -9 for everything like it often is with gzip(1) and bzip2(1). -0 ... -3 These are somewhat fast presets. -0 is sometimes faster than gzip -9 while compressing much better. The higher ones often have speed comparable to bzip2(1) with comparable or better compression ratio, although the results depend a lot on the type of data being compressed. -4 ... -6 Good to very good compression while keeping decompressor memory usage reasonable even for old systems. -6 is the default, which is usually a good choice for distributing files that need to be decompressible even on systems with only 16 MiB RAM. (-5e or -6e may be worth considering too. See --extreme.) -7 ... -9 These are like -6 but with higher compressor and decompressor memory requirements. These are useful only when compressing files bigger than 8 MiB, 16 MiB, and 32 MiB, respectively. 
On the same hardware, the decompression speed is approximately a constant number of bytes of compressed data per second. In other words, the better the compression, the faster the decompression will usually be. This also means that the amount of uncompressed output produced per second can vary a lot.

The following table summarises the features of the presets:

    Preset   DictSize   CompCPU   CompMem   DecMem
      -0     256 KiB       0        3 MiB    1 MiB
      -1       1 MiB       1        9 MiB    2 MiB
      -2       2 MiB       2       17 MiB    3 MiB
      -3       4 MiB       3       32 MiB    5 MiB
      -4       4 MiB       4       48 MiB    5 MiB
      -5       8 MiB       5       94 MiB    9 MiB
      -6       8 MiB       6       94 MiB    9 MiB
      -7      16 MiB       6      186 MiB   17 MiB
      -8      32 MiB       6      370 MiB   33 MiB
      -9      64 MiB       6      674 MiB   65 MiB

Column descriptions:

• DictSize is the LZMA2 dictionary size. It is a waste of memory to use a dictionary bigger than the size of the uncompressed file. This is why it is good to avoid using the presets -7 ... -9 when there's no real need for them. At -6 and lower, the amount of memory wasted is usually low enough to not matter.

• CompCPU is a simplified representation of the LZMA2 settings that affect compression speed. The dictionary size affects speed too, so while CompCPU is the same for levels -6 ... -9, higher levels still tend to be a little slower. To get even slower and thus possibly better compression, see --extreme.

• CompMem contains the compressor memory requirements in the single-threaded mode. It may vary slightly between xz versions.

• DecMem contains the decompressor memory requirements. That is, the compression settings determine the memory requirements of the decompressor. The exact decompressor memory usage is slightly more than the LZMA2 dictionary size, but the values in the table have been rounded up to the next full MiB.

Memory requirements of the multi-threaded mode are significantly higher than those of the single-threaded mode. With the default value of --block-size, each thread needs 3*3*DictSize plus CompMem or DecMem. For example, four threads with preset -6 need 660–670 MiB of memory.
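The preset trade-offs can be observed directly. Exact sizes vary by xz version and input, so none are claimed here (sample.txt is a made-up file):

```shell
# Generate some compressible sample data.
yes 'a fairly repetitive line of text' | head -n 20000 > sample.txt
xz -0 -c sample.txt | wc -c    # fast preset
xz -9 -c sample.txt | wc -c    # slow preset with a bigger dictionary
```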
-e, --extreme Use a slower variant of the selected compression preset level (-0 ... -9) to hopefully get a little bit better compression ratio, but with bad luck this can also make it worse. Decompressor memory usage is not affected, but compressor memory usage increases a little at preset levels -0 ... -3. Since there are two presets with dictionary sizes 4 MiB and 8 MiB, the presets -3e and -5e use slightly faster settings (lower CompCPU) than -4e and -6e, respectively. That way no two presets are identical.

    Preset   DictSize   CompCPU   CompMem   DecMem
     -0e     256 KiB       8        4 MiB    1 MiB
     -1e       1 MiB       8       13 MiB    2 MiB
     -2e       2 MiB       8       25 MiB    3 MiB
     -3e       4 MiB       7       48 MiB    5 MiB
     -4e       4 MiB       8       48 MiB    5 MiB
     -5e       8 MiB       7       94 MiB    9 MiB
     -6e       8 MiB       8       94 MiB    9 MiB
     -7e      16 MiB       8      186 MiB   17 MiB
     -8e      32 MiB       8      370 MiB   33 MiB
     -9e      64 MiB       8      674 MiB   65 MiB

For example, there are a total of four presets that use an 8 MiB dictionary, whose order from the fastest to the slowest is -5, -6, -5e, and -6e.

--fast --best These are somewhat misleading aliases for -0 and -9, respectively. These are provided only for backwards compatibility with LZMA Utils. Avoid using these options.

--block-size=size When compressing to the .xz format, split the input data into blocks of size bytes. The blocks are compressed independently from each other, which helps with multi-threading and makes limited random-access decompression possible. This option is typically used to override the default block size in multi-threaded mode, but this option can be used in single-threaded mode too. In multi-threaded mode about three times size bytes will be allocated in each thread for buffering input and output. The default size is three times the LZMA2 dictionary size or 1 MiB, whichever is more. Typically a good value is 2–4 times the size of the LZMA2 dictionary or at least 1 MiB. Using a size less than the LZMA2 dictionary size is a waste of RAM because then the LZMA2 dictionary buffer will never get fully used.
In multi-threaded mode, the sizes of the blocks are stored in the block headers. This size information is required for multi-threaded decompression. In single-threaded mode no block splitting is done by default. Setting this option doesn't affect memory usage. No size information is stored in block headers, thus files created in single-threaded mode won't be identical to files created in multi-threaded mode. The lack of size information also means that xz won't be able to decompress the files in multi-threaded mode.

--block-list=items When compressing to the .xz format, start a new block with an optional custom filter chain after the given intervals of uncompressed data. The items are a comma-separated list. Each item consists of an optional filter chain number between 0 and 9 followed by a colon (:) and a required size of uncompressed data. Omitting an item (two or more consecutive commas) is a shorthand to use the size and filters of the previous item. If the input file is bigger than the sum of the sizes in items, the last item is repeated until the end of the file. A special value of 0 may be used as the last size to indicate that the rest of the file should be encoded as a single block. An alternative filter chain for each block can be specified in combination with the --filters1=filters ... --filters9=filters options. These options define filter chains with an identifier between 1–9. Filter chain 0 can be used to refer to the default filter chain, which is the same as not specifying a filter chain. The filter chain identifier can be used before the uncompressed size, followed by a colon (:).
For example, if one specifies --block-list=1:2MiB,3:2MiB,2:4MiB,,2MiB,0:4MiB then blocks will be created using:

• The filter chain specified by --filters1 and 2 MiB input
• The filter chain specified by --filters3 and 2 MiB input
• The filter chain specified by --filters2 and 4 MiB input
• The filter chain specified by --filters2 and 4 MiB input
• The default filter chain and 2 MiB input
• The default filter chain and 4 MiB input for every block until end of input

If one specifies a size that exceeds the encoder's block size (either the default value in threaded mode or the value specified with --block-size=size), the encoder will create additional blocks while keeping the boundaries specified in items. For example, if one specifies --block-size=10MiB --block-list=5MiB,10MiB,8MiB,12MiB,24MiB and the input file is 80 MiB, one will get 11 blocks: 5, 10, 8, 10, 2, 10, 10, 4, 10, 10, and 1 MiB.

In multi-threaded mode the sizes of the blocks are stored in the block headers. This isn't done in single-threaded mode, so the encoded output won't be identical to that of the multi-threaded mode.

--flush-timeout=timeout When compressing, if more than timeout milliseconds (a positive integer) has passed since the previous flush and reading more input would block, all the pending input data is flushed from the encoder and made available in the output stream. This can be useful if xz is used to compress data that is streamed over a network. Small timeout values make the data available at the receiving end with a small delay, but large timeout values give a better compression ratio. This feature is disabled by default. If this option is specified more than once, the last one takes effect. The special timeout value of 0 can be used to explicitly disable this feature. This feature is not available on non-POSIX systems. This feature is still experimental. Currently xz is unsuitable for decompressing the stream in real time due to how xz does buffering.
--memlimit-compress=limit Set a memory usage limit for compression. If this option is specified multiple times, the last one takes effect. If the compression settings exceed the limit, xz will attempt to adjust the settings downwards so that the limit is no longer exceeded and display a notice that automatic adjustment was done. The adjustments are done in this order: reducing the number of threads, switching to single-threaded mode if even one thread in multi-threaded mode exceeds the limit, and finally reducing the LZMA2 dictionary size. When compressing with --format=raw or if --no-adjust has been specified, only the number of threads may be reduced since it can be done without affecting the compressed output. If the limit cannot be met even with the adjustments described above, an error is displayed and xz will exit with exit status 1. The limit can be specified in multiple ways: • The limit can be an absolute value in bytes. Using an integer suffix like MiB can be useful. Example: --memlimit-compress=80MiB • The limit can be specified as a percentage of total physical memory (RAM). This can be useful especially when setting the XZ_DEFAULTS environment variable in a shell initialization script that is shared between different computers. That way the limit is automatically bigger on systems with more memory. Example: --memlimit-compress=70% • The limit can be reset back to its default value by setting it to 0. This is currently equivalent to setting the limit to max (no memory usage limit). For 32-bit xz there is a special case: if the limit would be over 4020 MiB, the limit is set to 4020 MiB. On MIPS32 2000 MiB is used instead. (The values 0 and max aren't affected by this. A similar feature doesn't exist for decompression.) This can be helpful when a 32-bit executable has access to 4 GiB address space (2 GiB on MIPS32) while hopefully doing no harm in other situations. See also the section Memory usage. 
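The limit syntaxes above can be sketched as follows (m.xz is an illustrative name and the limit values are arbitrary):

```shell
printf 'limited\n' | xz --memlimit-compress=80MiB > m.xz   # absolute limit in bytes
xz -dc --memlimit-decompress=50% m.xz                      # percentage of physical RAM
```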
--memlimit-decompress=limit Set a memory usage limit for decompression. This also affects the --list mode. If the operation is not possible without exceeding the limit, xz will display an error and decompressing the file will fail. See --memlimit-compress=limit for possible ways to specify the limit. --memlimit-mt-decompress=limit Set a memory usage limit for multi-threaded decompression. This can only affect the number of threads; this will never make xz refuse to decompress a file. If limit is too low to allow any multi-threading, the limit is ignored and xz will continue in single-threaded mode. Note that if also --memlimit-decompress is used, it will always apply to both single-threaded and multi- threaded modes, and so the effective limit for multi-threading will never be higher than the limit set with --memlimit-decompress. In contrast to the other memory usage limit options, --memlimit-mt-decompress=limit has a system-specific default limit. xz --info-memory can be used to see the current value. This option and its default value exist because without any limit the threaded decompressor could end up allocating an insane amount of memory with some input files. If the default limit is too low on your system, feel free to increase the limit but never set it to a value larger than the amount of usable RAM as with appropriate input files xz will attempt to use that amount of memory even with a low number of threads. Running out of memory or swapping will not improve decompression performance. See --memlimit-compress=limit for possible ways to specify the limit. Setting limit to 0 resets the limit to the default system-specific value. -M limit, --memlimit=limit, --memory=limit This is equivalent to specifying --memlimit-compress=limit --memlimit-decompress=limit --memlimit-mt-decompress=limit. --no-adjust Display an error and exit if the memory usage limit cannot be met without adjusting settings that affect the compressed output. 
That is, this prevents xz from switching the encoder from multi-threaded mode to single-threaded mode and from reducing the LZMA2 dictionary size. Even when this option is used, the number of threads may be reduced to meet the memory usage limit, as that won't affect the compressed output. Automatic adjusting is always disabled when creating raw streams (--format=raw).

-T threads, --threads=threads Specify the number of worker threads to use. Setting threads to the special value 0 makes xz use up to as many threads as the processor(s) on the system support. The actual number of threads can be fewer than threads if the input file is not big enough for threading with the given settings or if using more threads would exceed the memory usage limit. The single-threaded and multi-threaded compressors produce different output. The single-threaded compressor will give the smallest file size, but only the output from the multi-threaded compressor can be decompressed using multiple threads. Setting threads to 1 will use the single-threaded mode. Setting threads to any other value, including 0, will use the multi-threaded compressor even if the system supports only one hardware thread. (xz 5.2.x used single-threaded mode in this situation.) To use multi-threaded mode with only one thread, set threads to +1. The + prefix has no effect with values other than 1. A memory usage limit can still make xz switch to single-threaded mode unless --no-adjust is used. Support for the + prefix was added in xz 5.4.0. If an automatic number of threads has been requested and no memory usage limit has been specified, then a system-specific default soft limit will be used to possibly limit the number of threads. It is a soft limit in the sense that it is ignored if the number of threads becomes one, thus a soft limit will never stop xz from compressing or decompressing. This default soft limit will not make xz switch from multi-threaded mode to single-threaded mode.
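A sketch of trying out -T; with small inputs the actual thread count may still be one, as noted above (sample.bin is a made-up name):

```shell
head -c 1048576 /dev/urandom > sample.bin   # 1 MiB of sample data
xz -T0 -kc sample.bin > sample.bin.xz       # 0 = autodetect thread count
xz -t sample.bin.xz && echo OK
```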
The active limits can be seen with xz --info-memory. Currently the only threading method is to split the input into blocks and compress them independently from each other. The default block size depends on the compression level and can be overridden with the --block-size=size option. Threaded decompression only works on files that contain multiple blocks with size information in block headers. All large enough files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size has been used. The default value for threads is 0. In xz 5.4.x and older the default is 1. Custom compressor filter chains A custom filter chain allows specifying the compression settings in detail instead of relying on the settings associated to the presets. When a custom filter chain is specified, preset options (-0 ... -9 and --extreme) earlier on the command line are forgotten. If a preset option is specified after one or more custom filter chain options, the new preset takes effect and the custom filter chain options specified earlier are forgotten. A filter chain is comparable to piping on the command line. When compressing, the uncompressed input goes to the first filter, whose output goes to the next filter (if any). The output of the last filter gets written to the compressed file. The maximum number of filters in the chain is four, but typically a filter chain has only one or two filters. Many filters have limitations on where they can be in the filter chain: some filters can work only as the last filter in the chain, some only as a non-last filter, and some work in any position in the chain. Depending on the filter, this limitation is either inherent to the filter design or exists to prevent security issues. A custom filter chain can be specified in two different ways. The options --filters=filters and --filters1=filters ... 
--filters9=filters allow specifying an entire filter chain in one option using the liblzma filter string syntax. Alternatively, a filter chain can be specified by using one or more individual filter options in the order they are wanted in the filter chain. That is, the order of the individual filter options is significant! When decoding raw streams (--format=raw), the filter chain must be specified in the same order as it was specified when compressing. Any individual filter or preset options specified before the full chain option (--filters=filters) will be forgotten. Individual filters specified after the full chain option will reset the filter chain. Both the full and individual filter options take filter-specific options as a comma-separated list. Extra commas in options are ignored. Every option has a default value, so specify those you want to change. To see the whole filter chain and options, use xz -vv (that is, use --verbose twice). This works also for viewing the filter chain options used by presets. --filters=filters Specify the full filter chain or a preset in a single option. Each filter can be separated by spaces or two dashes (--). filters may need to be quoted on the shell command line so it is parsed as a single option. To denote options, use : or =. A preset can be prefixed with a - and followed with zero or more flags. The only supported flag is e to apply the same options as --extreme. --filters1=filters ... --filters9=filters Specify up to nine additional filter chains that can be used with --block-list. For example, when compressing an archive with executable files followed by text files, the executable part could use a filter chain with a BCJ filter and the text part only the LZMA2 filter. --filters-help Display a help message describing how to specify presets and custom filter chains in the --filters and --filters1=filters ... --filters9=filters options, and exit successfully. 
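The individual filter option form can be sketched like this; the option values are illustrative, not recommendations:

```shell
# -vv prints the resulting filter chain and options to standard error.
printf 'hello chain\n' | xz -vv --lzma2=preset=6,dict=1MiB > c.xz
xz -dc c.xz
```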
--lzma1[=options] --lzma2[=options] Add the LZMA1 or LZMA2 filter to the filter chain. These filters can be used only as the last filter in the chain. LZMA1 is a legacy filter, which is supported almost solely due to the legacy .lzma file format, which supports only LZMA1. LZMA2 is an updated version of LZMA1 that fixes some practical issues of LZMA1. The .xz format uses LZMA2 and doesn't support LZMA1 at all. Compression speed and ratios of LZMA1 and LZMA2 are practically the same. LZMA1 and LZMA2 share the same set of options:

preset=preset Reset all LZMA1 or LZMA2 options to preset. A preset consists of an integer, which may be followed by single-letter preset modifiers. The integer can be from 0 to 9, matching the command line options -0 ... -9. The only supported modifier is currently e, which matches --extreme. If no preset is specified, the default values of LZMA1 or LZMA2 options are taken from the preset 6.

dict=size Dictionary (history buffer) size indicates how many bytes of the recently processed uncompressed data are kept in memory. The algorithm tries to find repeating byte sequences (matches) in the uncompressed data, and replace them with references to the data currently in the dictionary. The bigger the dictionary, the higher the chance of finding a match. Thus, increasing dictionary size usually improves compression ratio, but a dictionary bigger than the uncompressed file is a waste of memory. Typical dictionary size is from 64 KiB to 64 MiB. The minimum is 4 KiB. The maximum for compression is currently 1.5 GiB (1536 MiB). The decompressor already supports dictionaries up to one byte less than 4 GiB, which is the maximum for the LZMA1 and LZMA2 stream formats. Dictionary size and match finder (mf) together determine the memory usage of the LZMA1 or LZMA2 encoder. The same (or bigger) dictionary size is required for decompressing that was used when compressing, thus the memory usage of the decoder is determined by the dictionary size used when compressing.
The .xz headers store the dictionary size either as 2^n or 2^n + 2^(n-1), so these sizes are somewhat preferred for compression. Other sizes will get rounded up when stored in the .xz headers. lc=lc Specify the number of literal context bits. The minimum is 0 and the maximum is 4; the default is 3. In addition, the sum of lc and lp must not exceed 4. All bytes that cannot be encoded as matches are encoded as literals. That is, literals are simply 8-bit bytes that are encoded one at a time. The literal coding makes an assumption that the highest lc bits of the previous uncompressed byte correlate with the next byte. For example, in typical English text, an upper-case letter is often followed by a lower-case letter, and a lower-case letter is usually followed by another lower-case letter. In the US-ASCII character set, the highest three bits are 010 for upper-case letters and 011 for lower-case letters. When lc is at least 3, the literal coding can take advantage of this property in the uncompressed data. The default value (3) is usually good. If you want maximum compression, test lc=4. Sometimes it helps a little, and sometimes it makes compression worse. If it makes it worse, test lc=2 too. lp=lp Specify the number of literal position bits. The minimum is 0 and the maximum is 4; the default is 0. Lp affects what kind of alignment in the uncompressed data is assumed when encoding literals. See pb below for more information about alignment. pb=pb Specify the number of position bits. The minimum is 0 and the maximum is 4; the default is 2. Pb affects what kind of alignment in the uncompressed data is assumed in general. The default means four-byte alignment (2^pb=2^2=4), which is often a good choice when there's no better guess. When the alignment is known, setting pb accordingly may reduce the file size a little. For example, with text files having one-byte alignment (US-ASCII, ISO-8859-*, UTF-8), setting pb=0 can improve compression slightly. 
        For UTF-16 text, pb=1 is a good choice. If the alignment is an odd number like 3 bytes, pb=0 might be the best choice.

        Even though the assumed alignment can be adjusted with pb and lp, LZMA1 and LZMA2 still slightly favor 16-byte alignment. It might be worth taking into account when designing file formats that are likely to be often compressed with LZMA1 or LZMA2.

    mf=mf
        Match finder has a major effect on encoder speed, memory usage, and compression ratio. Usually Hash Chain match finders are faster than Binary Tree match finders. The default depends on the preset: 0 uses hc3, 1–3 use hc4, and the rest use bt4.

        The following match finders are supported. The memory usage formulas below are rough approximations, which are closest to the reality when dict is a power of two.

        hc3   Hash Chain with 2- and 3-byte hashing
              Minimum value for nice: 3
              Memory usage:
                  dict * 7.5 (if dict <= 16 MiB);
                  dict * 5.5 + 64 MiB (if dict > 16 MiB)

        hc4   Hash Chain with 2-, 3-, and 4-byte hashing
              Minimum value for nice: 4
              Memory usage:
                  dict * 7.5 (if dict <= 32 MiB);
                  dict * 6.5 (if dict > 32 MiB)

        bt2   Binary Tree with 2-byte hashing
              Minimum value for nice: 2
              Memory usage: dict * 9.5

        bt3   Binary Tree with 2- and 3-byte hashing
              Minimum value for nice: 3
              Memory usage:
                  dict * 11.5 (if dict <= 16 MiB);
                  dict * 9.5 + 64 MiB (if dict > 16 MiB)

        bt4   Binary Tree with 2-, 3-, and 4-byte hashing
              Minimum value for nice: 4
              Memory usage:
                  dict * 11.5 (if dict <= 32 MiB);
                  dict * 10.5 (if dict > 32 MiB)

    mode=mode
        Compression mode specifies the method to analyze the data produced by the match finder. Supported modes are fast and normal. The default is fast for presets 0–3 and normal for presets 4–9. Usually fast is used with Hash Chain match finders and normal with Binary Tree match finders. This is also what the presets do.

    nice=nice
        Specify what is considered to be a nice length for a match. Once a match of at least nice bytes is found, the algorithm stops looking for possibly better matches.
        Nice can be 2–273 bytes. Higher values tend to give a better compression ratio at the expense of speed. The default depends on the preset.

    depth=depth
        Specify the maximum search depth in the match finder. The default is the special value of 0, which makes the compressor determine a reasonable depth from mf and nice.

        A reasonable depth is 4–100 for Hash Chains and 16–1000 for Binary Trees. Using very high values for depth can make the encoder extremely slow with some files. Avoid setting the depth over 1000 unless you are prepared to interrupt the compression in case it is taking far too long.

    When decoding raw streams (--format=raw), LZMA2 needs only the dictionary size. LZMA1 also needs lc, lp, and pb.

--x86[=options]
--arm[=options]
--armthumb[=options]
--arm64[=options]
--powerpc[=options]
--ia64[=options]
--sparc[=options]
--riscv[=options]
    Add a branch/call/jump (BCJ) filter to the filter chain. These filters can be used only as a non-last filter in the filter chain.

    A BCJ filter converts relative addresses in the machine code to their absolute counterparts. This doesn't change the size of the data, but it increases redundancy, which can help LZMA2 produce a 0–15 % smaller .xz file. The BCJ filters are always reversible, so using a BCJ filter for the wrong type of data doesn't cause any data loss, although it may make the compression ratio slightly worse. The BCJ filters are very fast and use an insignificant amount of memory.

    These BCJ filters have known problems related to the compression ratio:

    • Some types of files containing executable code (for example, object files, static libraries, and Linux kernel modules) have the addresses in the instructions filled with filler values. These BCJ filters will still do the address conversion, which will make the compression worse with these files.

    • If a BCJ filter is applied on an archive, it is possible that it makes the compression ratio worse than not using a BCJ filter.
      For example, if there are similar or even identical executables, filtering will likely make the files less similar and thus compression is worse. The contents of non-executable files in the same archive can matter too. In practice one has to try with and without a BCJ filter to see which is better in each situation.

    Different instruction sets have different alignment: the executable file must be aligned to a multiple of this value in the input data to make the filter work.

        Filter      Alignment   Notes
        x86         1           32-bit or 64-bit x86
        ARM         4
        ARM-Thumb   2
        ARM64       4           4096-byte alignment is best
        PowerPC     4           Big endian only
        IA-64       16          Itanium
        SPARC       4
        RISC-V      2

    Since the BCJ-filtered data is usually compressed with LZMA2, the compression ratio may be improved slightly if the LZMA2 options are set to match the alignment of the selected BCJ filter. Examples:

    • The IA-64 filter has 16-byte alignment, so pb=4,lp=4,lc=0 is good with LZMA2 (2^4=16).

    • RISC-V code has 2-byte or 4-byte alignment depending on whether the file contains 16-bit compressed instructions (the C extension). When 16-bit instructions are used, pb=2,lp=1,lc=3 or pb=1,lp=1,lc=3 is good. When 16-bit instructions aren't present, pb=2,lp=2,lc=2 is the best. readelf -h can be used to check if "RVC" appears on the "Flags" line.

    • ARM64 is always 4-byte aligned, so pb=2,lp=2,lc=2 is the best.

    • The x86 filter is an exception. It's usually good to stick to LZMA2's defaults (pb=2,lp=0,lc=3) when compressing x86 executables.

    All BCJ filters support the same options:

    start=offset
        Specify the start offset that is used when converting between relative and absolute addresses. The offset must be a multiple of the alignment of the filter (see the table above). The default is zero. In practice, the default is good; specifying a custom offset is almost never useful.

--delta[=options]
    Add the Delta filter to the filter chain. The Delta filter can only be used as a non-last filter in the filter chain.
    Currently only simple byte-wise delta calculation is supported. It can be useful when compressing, for example, uncompressed bitmap images or uncompressed PCM audio. However, special purpose algorithms may give significantly better results than Delta + LZMA2. This is true especially with audio, which compresses faster and better, for example, with flac(1).

    Supported options:

    dist=distance
        Specify the distance of the delta calculation in bytes. distance must be 1–256. The default is 1.

        For example, with dist=2 and eight-byte input A1 B1 A2 B3 A3 B5 A4 B7, the output will be A1 B1 01 02 01 02 01 02.

Other options

-q, --quiet
    Suppress warnings and notices. Specify this twice to suppress errors too. This option has no effect on the exit status. That is, even if a warning was suppressed, the exit status to indicate a warning is still used.

-v, --verbose
    Be verbose. If standard error is connected to a terminal, xz will display a progress indicator. Specifying --verbose twice will give even more verbose output.

    The progress indicator shows the following information:

    • Completion percentage is shown if the size of the input file is known. That is, the percentage cannot be shown in pipes.

    • Amount of compressed data produced (compressing) or consumed (decompressing).

    • Amount of uncompressed data consumed (compressing) or produced (decompressing).

    • Compression ratio, which is calculated by dividing the amount of compressed data processed so far by the amount of uncompressed data processed so far.

    • Compression or decompression speed. This is measured as the amount of uncompressed data consumed (compression) or produced (decompression) per second. It is shown after a few seconds have passed since xz started processing the file.

    • Elapsed time in the format M:SS or H:MM:SS.

    • Estimated remaining time is shown only when the size of the input file is known and a couple of seconds have already passed since xz started processing the file.
      The time is shown in a less precise format which never has any colons, for example, 2 min 30 s.

    When standard error is not a terminal, --verbose will make xz print the filename, compressed size, uncompressed size, compression ratio, and possibly also the speed and elapsed time on a single line to standard error after compressing or decompressing the file. The speed and elapsed time are included only when the operation took at least a few seconds. If the operation didn't finish, for example, due to user interruption, also the completion percentage is printed if the size of the input file is known.

-Q, --no-warn
    Don't set the exit status to 2 even if a condition worth a warning was detected. This option doesn't affect the verbosity level, thus both --quiet and --no-warn have to be used to not display warnings and to not alter the exit status.

--robot
    Print messages in a machine-parsable format. This is intended to ease writing frontends that want to use xz instead of liblzma, which may be the case with various scripts. The output with this option enabled is meant to be stable across xz releases. See the section ROBOT MODE for details.

--info-memory
    Display, in human-readable format, how much physical memory (RAM) and how many processor threads xz thinks the system has and the memory usage limits for compression and decompression, and exit successfully.

-h, --help
    Display a help message describing the most commonly used options, and exit successfully.

-H, --long-help
    Display a help message describing all features of xz, and exit successfully.

-V, --version
    Display the version number of xz and liblzma in human readable format. To get machine-parsable output, specify --robot before --version.

ROBOT MODE
    The robot mode is activated with the --robot option. It makes the output of xz easier to parse by other programs. Currently --robot is supported only together with --list, --filters-help, --info-memory, and --version.
    It will be supported for compression and decompression in the future.

List mode
    xz --robot --list uses tab-separated output. The first column of every line has a string that indicates the type of the information found on that line:

    name     This is always the first line when starting to list a file. The second column on the line is the filename.

    file     This line contains overall information about the .xz file. This line is always printed after the name line.

    stream   This line type is used only when --verbose was specified. There are as many stream lines as there are streams in the .xz file.

    block    This line type is used only when --verbose was specified. There are as many block lines as there are blocks in the .xz file. The block lines are shown after all the stream lines; different line types are not interleaved.

    summary  This line type is used only when --verbose was specified twice. This line is printed after all block lines. Like the file line, the summary line contains overall information about the .xz file.

    totals   This line is always the very last line of the list output. It shows the total counts and sizes.

    The columns of the file lines:
        2. Number of streams in the file
        3. Total number of blocks in the stream(s)
        4. Compressed size of the file
        5. Uncompressed size of the file
        6. Compression ratio, for example, 0.123. If the ratio is over 9.999, three dashes (---) are displayed instead of the ratio.
        7. Comma-separated list of integrity check names. The following strings are used for the known check types: None, CRC32, CRC64, and SHA-256. For unknown check types, Unknown-N is used, where N is the Check ID as a decimal number (one or two digits).
        8. Total size of stream padding in the file

    The columns of the stream lines:
        2. Stream number (the first stream is 1)
        3. Number of blocks in the stream
        4. Compressed start offset
        5. Uncompressed start offset
        6. Compressed size (does not include stream padding)
        7. Uncompressed size
        8. Compression ratio
        9. Name of the integrity check
        10. Size of stream padding

    The columns of the block lines:
        2. Number of the stream containing this block
        3. Block number relative to the beginning of the stream (the first block is 1)
        4. Block number relative to the beginning of the file
        5. Compressed start offset relative to the beginning of the file
        6. Uncompressed start offset relative to the beginning of the file
        7. Total compressed size of the block (includes headers)
        8. Uncompressed size
        9. Compression ratio
        10. Name of the integrity check

    If --verbose was specified twice, additional columns are included on the block lines. These are not displayed with a single --verbose, because getting this information requires many seeks and can thus be slow:
        11. Value of the integrity check in hexadecimal
        12. Block header size
        13. Block flags: c indicates that compressed size is present, and u indicates that uncompressed size is present. If a flag is not set, a dash (-) is shown instead to keep the string length fixed. New flags may be added to the end of the string in the future.
        14. Size of the actual compressed data in the block (this excludes the block header, block padding, and check fields)
        15. Amount of memory (in bytes) required to decompress this block with this xz version
        16. Filter chain. Note that most of the options used at compression time cannot be known, because only the options that are needed for decompression are stored in the .xz headers.

    The columns of the summary lines:
        2. Amount of memory (in bytes) required to decompress this file with this xz version
        3. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them

        Since xz 5.1.2alpha:
        4. Minimum xz version required to decompress the file

    The columns of the totals line:
        2. Number of streams
        3. Number of blocks
        4. Compressed size
        5. Uncompressed size
        6. Average compression ratio
        7. Comma-separated list of integrity check names that were present in the files
        8. Stream padding size
        9. Number of files. This is here to keep the order of the earlier columns the same as on file lines.

    If --verbose was specified twice, additional columns are included on the totals line:
        10. Maximum amount of memory (in bytes) required to decompress the files with this xz version
        11. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them

        Since xz 5.1.2alpha:
        12. Minimum xz version required to decompress the file

    Future versions may add new line types and new columns can be added to the existing line types, but the existing columns won't be changed.

Filters help
    xz --robot --filters-help prints the supported filters in the following format:

        filter:option=<value>,option=<value>...

    filter   Name of the filter
    option   Name of a filter specific option
    value    Numeric value ranges appear as <min-max>. String value choices are shown within < > and separated by a | character.

    Each filter is printed on its own line.

Memory limit information
    xz --robot --info-memory prints a single line with multiple tab-separated columns:

    1. Total amount of physical memory (RAM) in bytes.

    2. Memory usage limit for compression in bytes (--memlimit-compress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.

    3. Memory usage limit for decompression in bytes (--memlimit-decompress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.

    4. Since xz 5.3.4alpha: Memory usage for multi-threaded decompression in bytes (--memlimit-mt-decompress). This is never zero because a system-specific default value shown in column 5 is used if no limit has been specified explicitly. This is also never greater than the value in column 3 even if a larger value has been specified with --memlimit-mt-decompress.

    5.
      Since xz 5.3.4alpha: A system-specific default memory usage limit that is used to limit the number of threads when compressing with an automatic number of threads (--threads=0) and no memory usage limit has been specified (--memlimit-compress). This is also used as the default value for --memlimit-mt-decompress.

    6. Since xz 5.3.4alpha: Number of available processor threads.

    In the future, the output of xz --robot --info-memory may have more columns, but never more than a single line.

Version
    xz --robot --version prints the version number of xz and liblzma in the following format:

        XZ_VERSION=XYYYZZZS
        LIBLZMA_VERSION=XYYYZZZS

    X     Major version.
    YYY   Minor version. Even numbers are stable. Odd numbers are alpha or beta versions.
    ZZZ   Patch level for stable releases or just a counter for development releases.
    S     Stability. 0 is alpha, 1 is beta, and 2 is stable. S should always be 2 when YYY is even.

    XYYYZZZS are the same on both lines if xz and liblzma are from the same XZ Utils release.

    Examples: 4.999.9beta is 49990091 and 5.0.0 is 50000002.

EXIT STATUS
    0   All is good.
    1   An error occurred.
    2   Something worth a warning occurred, but no actual errors occurred.

    Notices (not warnings or errors) printed on standard error don't affect the exit status.

ENVIRONMENT
    xz parses space-separated lists of options from the environment variables XZ_DEFAULTS and XZ_OPT, in this order, before parsing the options from the command line. Note that only options are parsed from the environment variables; all non-options are silently ignored. Parsing is done with getopt_long(3) which is used also for the command line arguments.

    XZ_DEFAULTS
        User-specific or system-wide default options. Typically this is set in a shell initialization script to enable xz's memory usage limiter by default. Excluding shell initialization scripts and similar special cases, scripts must never set or unset XZ_DEFAULTS.
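The XYYYZZZS encoding described under Version above can be unpacked with plain shell arithmetic. A minimal sketch (not part of xz itself; variable names are illustrative), using the sample value 50000002 for 5.0.0 from the examples:

```shell
# Decode xz's robot-mode XYYYZZZS version integer into its parts.
v=50000002
major=$((v / 10000000))        # X
minor=$((v / 10000 % 1000))    # YYY
patch=$((v / 10 % 1000))       # ZZZ
stability=$((v % 10))          # S: 0=alpha, 1=beta, 2=stable
echo "$major.$minor.$patch (stability $stability)"   # prints "5.0.0 (stability 2)"
```

The same arithmetic also explains the other example: 49990091 splits into 4, 999, 009, and 1, that is, 4.999.9beta.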
    XZ_OPT
        This is for passing options to xz when it is not possible to set the options directly on the xz command line. This is the case when xz is run by a script or tool, for example, GNU tar(1):

            XZ_OPT=-2v tar caf foo.tar.xz foo

        Scripts may use XZ_OPT, for example, to set script-specific default compression options. It is still recommended to allow users to override XZ_OPT if that is reasonable. For example, in sh(1) scripts one may use something like this:

            XZ_OPT=${XZ_OPT-"-7e"}
            export XZ_OPT

LZMA UTILS COMPATIBILITY
    The command line syntax of xz is practically a superset of lzma, unlzma, and lzcat as found from LZMA Utils 4.32.x. In most cases, it is possible to replace LZMA Utils with XZ Utils without breaking existing scripts. There are some incompatibilities though, which may sometimes cause problems.

Compression preset levels
    The numbering of the compression level presets is not identical in xz and LZMA Utils. The most important difference is how dictionary sizes are mapped to different presets. Dictionary size is roughly equal to the decompressor memory usage.

        Level   xz         LZMA Utils
        -0      256 KiB    N/A
        -1      1 MiB      64 KiB
        -2      2 MiB      1 MiB
        -3      4 MiB      512 KiB
        -4      4 MiB      1 MiB
        -5      8 MiB      2 MiB
        -6      8 MiB      4 MiB
        -7      16 MiB     8 MiB
        -8      32 MiB     16 MiB
        -9      64 MiB     32 MiB

    The dictionary size differences affect the compressor memory usage too, but there are some other differences between LZMA Utils and XZ Utils, which make the difference even bigger:

        Level   xz         LZMA Utils 4.32.x
        -0      3 MiB      N/A
        -1      9 MiB      2 MiB
        -2      17 MiB     12 MiB
        -3      32 MiB     12 MiB
        -4      48 MiB     16 MiB
        -5      94 MiB     26 MiB
        -6      94 MiB     45 MiB
        -7      186 MiB    83 MiB
        -8      370 MiB    159 MiB
        -9      674 MiB    311 MiB

    The default preset level in LZMA Utils is -7 while in XZ Utils it is -6, so both use an 8 MiB dictionary by default.

Streamed vs. non-streamed .lzma files
    The uncompressed size of the file can be stored in the .lzma header. LZMA Utils does that when compressing regular files.
    The alternative is to mark that the uncompressed size is unknown and use an end-of-payload marker to indicate where the decompressor should stop. LZMA Utils uses this method when the uncompressed size isn't known, which is the case, for example, in pipes.

    xz supports decompressing .lzma files with or without an end-of-payload marker, but all .lzma files created by xz will use the end-of-payload marker and have the uncompressed size marked as unknown in the .lzma header. This may be a problem in some uncommon situations. For example, a .lzma decompressor in an embedded device might work only with files that have a known uncompressed size. If you hit this problem, you need to use LZMA Utils or LZMA SDK to create .lzma files with a known uncompressed size.

Unsupported .lzma files
    The .lzma format allows lc values up to 8, and lp values up to 4. LZMA Utils can decompress files with any lc and lp, but always creates files with lc=3 and lp=0. Creating files with other lc and lp is possible with xz and with LZMA SDK.

    The implementation of the LZMA1 filter in liblzma requires that the sum of lc and lp must not exceed 4. Thus, .lzma files that exceed this limitation cannot be decompressed with xz.

    LZMA Utils creates only .lzma files which have a dictionary size of 2^n (a power of 2) but accepts files with any dictionary size. liblzma accepts only .lzma files which have a dictionary size of 2^n or 2^n + 2^(n-1). This is to decrease false positives when detecting .lzma files.

    These limitations shouldn't be a problem in practice, since practically all .lzma files have been compressed with settings that liblzma will accept.

Trailing garbage
    When decompressing, LZMA Utils silently ignore everything after the first .lzma stream. In most situations, this is a bug. This also means that LZMA Utils don't support decompressing concatenated .lzma files.

    If there is data left after the first .lzma stream, xz considers the file to be corrupt unless --single-stream was used.
    This may break obscure scripts which have assumed that trailing garbage is ignored.

NOTES

Compressed output may vary
    The exact compressed output produced from the same uncompressed input file may vary between XZ Utils versions even if compression options are identical. This is because the encoder can be improved (faster or better compression) without affecting the file format. The output can vary even between different builds of the same XZ Utils version, if different build options are used.

    The above means that once --rsyncable has been implemented, the resulting files won't necessarily be rsyncable unless both old and new files have been compressed with the same xz version. This problem can be fixed if a part of the encoder implementation is frozen to keep rsyncable output stable across xz versions.

Embedded .xz decompressors
    Embedded .xz decompressor implementations like XZ Embedded don't necessarily support files created with integrity check types other than none and crc32. Since the default is --check=crc64, you must use --check=none or --check=crc32 when creating files for embedded systems.

    Outside embedded systems, all .xz format decompressors support all the check types, or at least are able to decompress the file without verifying the integrity check if the particular check is not supported.

    XZ Embedded supports BCJ filters, but only with the default start offset.
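As a concrete illustration of the check-type advice above, the following sketch creates a file with --check=crc32 and confirms the check name via the robot-mode listing described earlier (it assumes xz is installed; the file name is arbitrary):

```shell
# Create an embedded-friendly .xz file and verify its integrity check type.
tmp=$(mktemp -d)
printf 'sample data for an embedded target\n' > "$tmp/demo.txt"
xz --check=crc32 -k -f "$tmp/demo.txt"
# Column 7 of the robot-mode "file" line is the list of check names.
check=$(xz --robot --list "$tmp/demo.txt.xz" | grep '^file' | cut -f7)
echo "$check"
```

The printed check name should be CRC32 rather than the default CRC64, confirming that the file is suitable for XZ Embedded.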
Basics
    Compress the file foo into foo.xz using the default compression level (-6), and remove foo if compression is successful:

        xz foo

    Decompress bar.xz into bar and don't remove bar.xz even if decompression is successful:

        xz -dk bar.xz

    Create baz.tar.xz with the preset -4e (-4 --extreme), which is slower than the default -6, but needs less memory for compression and decompression (48 MiB and 5 MiB, respectively):

        tar cf - baz | xz -4e > baz.tar.xz

    A mix of compressed and uncompressed files can be decompressed to standard output with a single command:

        xz -dcf a.txt b.txt.xz c.txt d.txt.lzma > abcd.txt

Parallel compression of many files
    On GNU and *BSD, find(1) and xargs(1) can be used to parallelize compression of many files:

        find . -type f \! -name '*.xz' -print0 \
            | xargs -0r -P4 -n16 xz -T1

    The -P option to xargs(1) sets the number of parallel xz processes. The best value for the -n option depends on how many files there are to be compressed. If there are only a couple of files, the value should probably be 1; with tens of thousands of files, 100 or even more may be appropriate to reduce the number of xz processes that xargs(1) will eventually create.

    The option -T1 for xz is there to force it to single-threaded mode, because xargs(1) is used to control the amount of parallelization.

Robot mode
    Calculate how many bytes have been saved in total after compressing multiple files:

        xz --robot --list *.xz | awk '/^totals/{print $5-$4}'

    A script may want to know that it is using new enough xz. The following sh(1) script checks that the version number of the xz tool is at least 5.0.0. This method is compatible with old beta versions, which didn't support the --robot option:

        if ! eval "$(xz --robot --version 2> /dev/null)" ||
                [ "$XZ_VERSION" -lt 50000002 ]; then
            echo "Your xz is too old."
        fi
        unset XZ_VERSION LIBLZMA_VERSION

    Set a memory usage limit for decompression using XZ_OPT, but if a limit has already been set, don't increase it:

        NEWLIM=$((123 << 20))  # 123 MiB
        OLDLIM=$(xz --robot --info-memory | cut -f3)
        if [ $OLDLIM -eq 0 -o $OLDLIM -gt $NEWLIM ]; then
            XZ_OPT="$XZ_OPT --memlimit-decompress=$NEWLIM"
            export XZ_OPT
        fi

Custom compressor filter chains
    The simplest use for custom filter chains is customizing a LZMA2 preset. This can be useful, because the presets cover only a subset of the potentially useful combinations of compression settings.

    The CompCPU columns of the tables from the descriptions of the options -0 ... -9 and --extreme are useful when customizing LZMA2 presets. Here are the relevant parts collected from those two tables:

        Preset   CompCPU
        -0       0
        -1       1
        -2       2
        -3       3
        -4       4
        -5       5
        -6       6
        -5e      7
        -6e      8

    If you know that a file requires a somewhat big dictionary (for example, 32 MiB) to compress well, but you want to compress it quicker than xz -8 would do, a preset with a low CompCPU value (for example, 1) can be modified to use a bigger dictionary:

        xz --lzma2=preset=1,dict=32MiB foo.tar

    With certain files, the above command may be faster than xz -6 while compressing significantly better. However, it must be emphasized that only some files benefit from a big dictionary while keeping the CompCPU value low. The most obvious situation, where a big dictionary can help a lot, is an archive containing very similar files of at least a few megabytes each. The dictionary size has to be significantly bigger than any individual file to allow LZMA2 to take full advantage of the similarities between consecutive files.
    If very high compressor and decompressor memory usage is fine, and the file being compressed is at least several hundred megabytes, it may be useful to use an even bigger dictionary than the 64 MiB that xz -9 would use:

        xz -vv --lzma2=dict=192MiB big_foo.tar

    Using -vv (--verbose --verbose) like in the above example can be useful to see the memory requirements of the compressor and decompressor. Remember that using a dictionary bigger than the size of the uncompressed file is a waste of memory, so the above command isn't useful for small files.

    Sometimes the compression time doesn't matter, but the decompressor memory usage has to be kept low, for example, to make it possible to decompress the file on an embedded system. The following command uses -6e (-6 --extreme) as a base and sets the dictionary to only 64 KiB. The resulting file can be decompressed with XZ Embedded (that's why there is --check=crc32) using about 100 KiB of memory.

        xz --check=crc32 --lzma2=preset=6e,dict=64KiB foo

    If you want to squeeze out as many bytes as possible, adjusting the number of literal context bits (lc) and number of position bits (pb) can sometimes help. Adjusting the number of literal position bits (lp) might help too, but usually lc and pb are more important. For example, a source code archive contains mostly US-ASCII text, so something like the following might give a slightly (like 0.1 %) smaller file than xz -6e (try also without lc=4):

        xz --lzma2=preset=6e,pb=0,lc=4 source_code.tar

    Using another filter together with LZMA2 can improve compression with certain file types. For example, to compress an x86-32 or x86-64 shared library using the x86 BCJ filter:

        xz --x86 --lzma2 libfoo.so

    Note that the order of the filter options is significant. If --x86 is specified after --lzma2, xz will give an error, because there cannot be any filter after LZMA2, and also because the x86 BCJ filter cannot be used as the last filter in the chain.
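The ordering rule can be checked quickly from a script: with the filters reversed, xz refuses to run. A small sketch assuming xz is installed (the empty input is just a placeholder):

```shell
# LZMA2 must be the last filter, so --lzma2 followed by --x86 is invalid,
# while --x86 followed by --lzma2 is a valid chain.
if xz --lzma2 --x86 -c /dev/null > /dev/null 2>&1; then
    echo "unexpectedly accepted"
else
    echo "rejected: no filter may follow LZMA2"
fi
```

This also illustrates why scripts that build filter chains dynamically should always append the LZMA2 options last.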
    The Delta filter together with LZMA2 can give good results with bitmap images. It should usually beat PNG, which has a few more advanced filters than simple delta but uses Deflate for the actual compression.

    The image has to be saved in uncompressed format, for example, as uncompressed TIFF. The distance parameter of the Delta filter is set to match the number of bytes per pixel in the image. For example, a 24-bit RGB bitmap needs dist=3, and it is also good to pass pb=0 to LZMA2 to accommodate the three-byte alignment:

        xz --delta=dist=3 --lzma2=pb=0 foo.tiff

    If multiple images have been put into a single archive (for example, .tar), the Delta filter will work on that too as long as all images have the same number of bytes per pixel.

SEE ALSO
    xzdec(1), xzdiff(1), xzgrep(1), xzless(1), xzmore(1), gzip(1), bzip2(1), 7z(1)

    XZ Utils: <https://tukaani.org/xz/>
    XZ Embedded: <https://tukaani.org/xz/embedded.html>
    LZMA SDK: <https://7-zip.org/sdk.html>

Tukaani                         2024-04-08                         XZ(1)
gpg-connect-agent
The gpg-connect-agent is a utility to communicate with a running gpg-agent. It is useful to check out the commands gpg-agent provides using the Assuan interface. It might also be useful for scripting simple applications. Input is expected at stdin and output gets printed to stdout. It is very similar to running gpg-agent in server mode; but here we connect to a running instance.

The following options may be used:

--dirmngr
    Connect to a running directory manager (keyserver client) instead of to the gpg-agent. If a dirmngr is not running, start it.

--keyboxd
    Connect to a running keybox daemon instead of to the gpg-agent. If a keyboxd is not running, start it.

-S
--raw-socket name
    Connect to socket name assuming this is an Assuan style server. Do not run any special initializations or environment checks. This may be used to directly connect to any Assuan style socket server.

-E
--exec
    Take the rest of the command line as a program and its arguments and execute it as an Assuan server. Here is how you would run gpgsm:

        gpg-connect-agent --exec gpgsm --server

    Note that you may not use options on the command line in this case.

-v
--verbose
    Output additional information while running.

-q
--quiet
    Try to be as quiet as possible.

--homedir dir
    Set the name of the home directory to dir. If this option is not used, the home directory defaults to '~/.gnupg'. It is only recognized when given on the command line. It also overrides any home directory stated through the environment variable 'GNUPGHOME' or (on Windows systems) by means of the Registry entry HKCU\Software\GNU\GnuPG:HomeDir.

    On Windows systems it is possible to install GnuPG as a portable application. In this case only this command line option is considered, all other ways to set a home directory are ignored.

--chuid uid
    Change the current user to uid which may either be a number or a name. This can be used from the root account to run gpg-connect-agent for another user.
    If uid is not the current UID a standard PATH is set and the envvar GNUPGHOME is unset. To override the latter the option --homedir can be used. This option has only an effect when used on the command line. This option has currently no effect at all on Windows.

--no-ext-connect
    When using -S or --exec, gpg-connect-agent connects to the Assuan server in extended mode to allow descriptor passing. This option makes it use the old mode.

--no-autostart
    Do not start the gpg-agent or the dirmngr if it has not yet been started.

--no-history
    In interactive mode the command line history is usually saved and restored to and from a file below the GnuPG home directory. This option inhibits the use of that file.

--agent-program file
    Specify the agent program to be started if none is running. The default value is determined by running gpgconf with the option --list-dirs. Note that the pipe symbol (|) is used for a regression test suite hack and may thus not be used in the file name.

--dirmngr-program file
    Specify the directory manager (keyserver client) program to be started if none is running. This has only an effect if used together with the option --dirmngr.

--keyboxd-program file
    Specify the keybox daemon program to be started if none is running. This has only an effect if used together with the option --keyboxd.

-r file
--run file
    Run the commands from file at startup and then continue with the regular input method. Note that commands given on the command line are executed after this file.

-s
--subst
    Run the command /subst at startup.

--hex
    Print data lines in a hex format and the ASCII representation of non-control characters.

--decode
    Decode data lines. That is, remove percent escapes but make sure that a new line always starts with a D and a space.

-u
--unbuffered
    Set stdin and stdout into unbuffered I/O mode. This is sometimes useful for scripting.

CONTROL COMMANDS
    While reading Assuan commands, gpg-agent also allows a few special commands to control its operation.
These control commands all start with a slash (/). /echo args Just print args. /let name value Set the variable name to value. Variables are only substituted on the input if the /subst command has been used. Variables are referenced by prefixing the name with a dollar sign; the name may optionally be enclosed in curly braces. The rules for a valid name are identical to those of the standard Bourne shell. This is not yet enforced but may be in the future. When used with curly braces no leading or trailing white space is allowed. If a variable is not found, it is searched for in the environment and, if found, copied to the table of variables. Variable functions are available: The name of the function must be followed by at least one space and at least one argument. The following functions are available: get Return a value described by the argument. Available arguments are: cwd The current working directory. homedir The gnupg homedir. sysconfdir GnuPG's system configuration directory. bindir GnuPG's binary directory. libdir GnuPG's library directory. libexecdir GnuPG's library directory for executable files. datadir GnuPG's data directory. serverpid The PID of the current server. Command /serverpid must have been given to return a useful value. unescape args Remove C-style escapes from args. Note that \0 and \x00 terminate the returned string implicitly. The string to be converted consists of the entire arguments right behind the delimiting space of the function name. unpercent args unpercent+ args Remove percent style escaping from args. Note that %00 terminates the string implicitly. The string to be converted consists of the entire arguments right behind the delimiting space of the function name. unpercent+ also maps plus signs to spaces. percent args percent+ args Escape the args using percent style escaping. Tabs, formfeeds, linefeeds, carriage returns and colons are escaped. percent+ also maps spaces to plus signs. 
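The percent style escaping performed by the percent function can be approximated in plain shell. The sketch below is illustrative only; the helper name is an assumption and this is not gpg-connect-agent's own code:

```shell
# Rough shell approximation of the "percent" variable function: escape
# tab, formfeed, linefeed, carriage return and colon as %XX sequences.
percent_escape() {
  printf '%s' "$1" | awk '
    NR > 1 { printf "%%0A" }        # a linefeed separated the records
    {
      n = length($0)
      for (i = 1; i <= n; i++) {
        c = substr($0, i, 1)
        if      (c == ":")  printf "%%3A"
        else if (c == "\t") printf "%%09"
        else if (c == "\r") printf "%%0D"
        else if (c == "\f") printf "%%0C"
        else                printf "%s", c
      }
    }
    END { printf "\n" }'
}

percent_escape 'key:value'   # -> key%3Avalue
```

The percent+ variant would additionally map each space to a plus sign before escaping.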
errcode arg errsource arg errstring arg Assume arg is an integer and evaluate it using strtol. Return the gpg-error error code, error source or a formatted string with the error code and error source. + - * / % Evaluate all arguments as long integers using strtol and apply this operator. A division by zero yields an empty string. ! | & Evaluate all arguments as long integers using strtol and apply the logical operators NOT, OR or AND. The NOT operator works on the last argument only. /definq name var Use the content of the variable var for inquiries with name. name may be an asterisk (*) to match any inquiry. /definqfile name file Use the content of file for inquiries with name. name may be an asterisk (*) to match any inquiry. /definqprog name prog Run prog for inquiries matching name and pass the entire line to it as command line arguments. /datafile name Write all data lines from the server to the file name. The file is opened for writing and created if it does not exist. An existing file is first truncated to 0. The data written to the file is fully decoded. Using a single dash for name writes to stdout. The file is kept open until a new file is set using this command or this command is used without an argument. /showdef Print all definitions. /cleardef Delete all definitions. /sendfd file mode Open file in mode (which needs to be a valid fopen mode string) and send the file descriptor to the server. This is usually followed by a command like INPUT FD to set the input source for other commands. /recvfd Not yet implemented. /open var file [mode] Open file and assign the file descriptor to var. Warning: This command is experimental and might change in future versions. /close fd Close the file descriptor fd. Warning: This command is experimental and might change in future versions. /showopen Show a list of open files. /serverpid Send the Assuan command GETINFO pid to the server and store the returned PID for internal purposes. /sleep Sleep for a second. 
/hex /nohex Same as the command line option --hex. /decode /nodecode Same as the command line option --decode. /subst /nosubst Enable and disable variable substitution. It defaults to disabled unless the command line option --subst has been used. If /subst has been enabled once, leading whitespace is removed from input lines which makes scripts easier to read. /while condition /end These commands provide a way of executing loops. All lines between the while and the corresponding end are executed as long as the evaluation of condition yields a non-zero value or is the string true or yes. The evaluation is done by passing condition to the strtol function. Example: /subst /let i 3 /while $i /echo loop counter is $i /let i ${- $i 1} /end /if condition /end These commands provide a way of conditional execution. All lines between the if and the corresponding end are executed only if the evaluation of condition yields a non-zero value or is the string true or yes. The evaluation is done by passing condition to the strtol function. /run file Run commands from file. /history --clear Clear the command history. /bye Terminate the connection and the program. /help Print a list of available control commands. SEE ALSO gpg-agent(1), scdaemon(1) The full documentation for this tool is maintained as a Texinfo manual. If GnuPG and the info program are properly installed at your site, the command info gnupg should give you access to the complete manual including a menu structure and an index. GnuPG 2.4.5 2024-03-04 GPG-CONNECT-AGENT(1)
|
gpg-connect-agent - Communicate with a running agent
|
gpg-connect-agent [options][commands]
| null | null |
zeroize
| null | null | null | null | null |
psa_constant_names
| null | null | null | null | null |
jpgicc
|
lcms is a standalone CMM engine which deals with color management. It implements a fast transformation between ICC profiles. jpgicc is a little cms ICC profile applier for JPEG.
|
jpgicc - little cms ICC profile applier for JPEG.
|
jpgicc [options] input.jpg output.jpg
|
-b Black point compensation. -c NUM Precalculates transform (0=Off, 1=Normal, 2=Hi-res, 3=LoRes) [defaults to 1]. -d NUM Observer adaptation state (abs.col. only), (0..1.0, float value) [defaults to 0.0]. -e Embed destination profile. -g Marks out-of-gamut colors on softproof. -h NUM Show summary of options and examples (0=help, 1=Examples, 2=Built-in profiles, 3=Contact information) -i profile Input profile (defaults to sRGB). -l link TODO: explain this option. -m NUM SoftProof intent (0,1,2,3) [defaults to 0]. -n Ignore embedded profile. -o profile Output profile (defaults to sRGB). -p profile Soft proof profile. -q NUM Output JPEG quality, (0..100) [defaults to 75]. -s newprofile Save embedded profile as newprofile. -t NUM Rendering intent 0=Perceptual [default] 1=Relative colorimetric 2=Saturation 3=Absolute colorimetric 10=Perceptual preserving black ink 11=Relative colorimetric preserving black ink 12=Saturation preserving black ink 13=Perceptual preserving black plane 14=Relative colorimetric preserving black plane 15=Saturation preserving black plane -v Verbose. -! NUM,NUM,NUM Out-of-gamut marker channel values (r,g,b) [defaults: 128,128,128]. BUILT-IN PROFILES *Lab2 -- D50-based v2 CIEL*a*b *Lab4 -- D50-based v4 CIEL*a*b *Lab -- D50-based v4 CIEL*a*b *XYZ -- CIE XYZ (PCS) *sRGB -- sRGB color space *Gray22 - Monochrome of Gamma 2.2 *Gray30 - Monochrome of Gamma 3.0 *null - Monochrome black for all input *Lin2222- CMYK linearization of gamma 2.2 on each channel
|
To color correct from scanner to sRGB: jpgicc -iscanner.icm in.jpg out.jpg To convert from monitor1 to monitor2: jpgicc -imon1.icm -omon2.icm in.jpg out.jpg To make a CMYK separation: jpgicc -oprinter.icm inrgb.jpg outcmyk.jpg To recover sRGB from a CMYK separation: jpgicc -iprinter.icm incmyk.jpg outrgb.jpg To convert from CIELab ITU/Fax JPEG to sRGB jpgicc -iitufax.icm in.jpg out.jpg To convert from CIELab ITU/Fax JPEG to sRGB jpgicc in.jpg out.jpg NOTES For suggestions, comments, bug reports etc. send mail to info@littlecms.com. SEE ALSO linkicc(1), psicc(1), tificc(1), transicc(1) AUTHOR This manual page was written by Shiju p. Nair <shiju.p@gmail.com>, for the Debian project. September 30, 2004 JPGICC(1)
|
target_dec_fate.sh
| null | null | null | null | null |
gpgconf
|
gpgconf is a utility to automatically and reasonably safely query and modify configuration files in the ‘.gnupg’ home directory. It is designed not to be invoked manually by the user, but automatically by graphical user interfaces (GUIs). (Please note that currently no locking is done, so concurrent access should be avoided. There are some precautions to avoid corruption with concurrent usage, but results may be inconsistent and some changes may get lost. The stateless design makes it difficult to provide more guarantees.) gpgconf provides access to the configuration of one or more components of the GnuPG system. These components correspond more or less to the programs that exist in the GnuPG framework, like GPG, GPGSM, DirMngr, etc. But this is not a strict one-to-one relationship. Not all configuration options are available through gpgconf. gpgconf provides a generic and abstract method to access the most important configuration options that can feasibly be controlled via such a mechanism. gpgconf can be used to gather and change the options available in each component, and can also provide their default values. gpgconf will give detailed type information that can be used to restrict the user's input without making an attempt to commit the changes. gpgconf provides the backend of a configuration editor. The configuration editor would usually be a graphical user interface program that displays the current options, their default values, and allows the user to make changes to the options. These changes can then be made active with gpgconf again. Such a program that uses gpgconf in this way will be called GUI throughout this section. COMMANDS One of the following commands must be given: --list-components List all components. This is the default command used if none is specified. --check-programs List all available backend programs and test whether they are runnable. --list-options component List all options of the component component. 
--change-options component Change the options of the component component. --check-options component Check the options for the component component. --apply-profile file Apply the configuration settings listed in file to the configuration files. If file has no suffix and no slashes the command first tries to read a file with the suffix .prf from the data directory (gpgconf --list-dirs datadir) before it reads the file verbatim. A profile is divided into sections using the bracketed component name. Each section then lists the option which shall go into the respective configuration file. --apply-defaults Update all configuration files with values taken from the global configuration file (usually ‘/etc/gnupg/gpgconf.conf’). Note: This is a legacy mechanism. Please use global configuration files instead. --list-dirs [names] -L Lists the directories used by gpgconf. One directory is listed per line, and each line consists of a colon-separated list where the first field names the directory type (for example sysconfdir) and the second field contains the percent-escaped directory. Although they are not directories, the socket file names used by gpg-agent and dirmngr are printed as well. Note that the socket file names and the homedir lines are the default names and they may be overridden by command line switches. If names are given only the directories or file names specified by the list names are printed without any escaping. --list-config [filename] List the global configuration file in a colon separated format. If filename is given, check that file instead. --check-config [filename] Run a syntax check on the global configuration file. If filename is given, check that file instead. --query-swdb package_name [version_string] Returns the current version for package_name and if version_string is given also an indicator on whether an update is available. The actual file with the software version is automatically downloaded and checked by dirmngr. 
dirmngr uses a threshold to avoid downloading the file too often and it does this by default only if it can be done via Tor. To force an update of that file this command can be used: gpg-connect-agent --dirmngr 'loadswdb --force' /bye --reload [component] -R Reload all or the given component. This is basically the same as sending a SIGHUP to the component. Components which don't support reloading are ignored. Without component or by using "all" for component all components which are daemons are reloaded. --launch [component] If the component is not already running, start it. component must be a daemon. This is in general not required because the system starts these daemons as needed. However, external software making direct use of gpg-agent or dirmngr may use this command to ensure that they are started. Using "all" for component launches all components which are daemons. --kill [component] -K Kill the given component that runs as a daemon, including gpg-agent, dirmngr, and scdaemon. A component which does not run as a daemon will be ignored. Using "all" for component kills all components running as daemons. Note that as of now reload and kill have the same effect for scdaemon. --create-socketdir Create a directory for sockets below /run/user or /var/run/user. This command is only required if a non-default home directory is used and the /run based sockets shall be used. For the default home directory GnuPG creates a directory on the fly. --remove-socketdir Remove a directory created with the command --create-socketdir. --unlock name --lock name Remove a stale lock file held for name. The file is expected in the current GnuPG home directory. This command is usually not required because GnuPG is able to detect and remove stale lock files. Before using the command make sure that the file protected by the lock file is actually not in use. The lock command may be used to lock an accidentally removed lock file. 
Note that the commands have no effect on Windows because the mere existence of a lock file does not mean that the lock is active. description The string in this field contains a human-readable description of the component. It can be displayed to the user of the GUI for informational purposes. It is percent-escaped and localized. pgmname The string in this field contains the absolute name of the program's file. It can be used to unambiguously invoke that program. It is percent-escaped. Example: $ gpgconf --list-components gpg:GPG for OpenPGP:/usr/local/bin/gpg2: gpg-agent:GPG Agent:/usr/local/bin/gpg-agent: scdaemon:Smartcard Daemon:/usr/local/bin/scdaemon: gpgsm:GPG for S/MIME:/usr/local/bin/gpgsm: dirmngr:Directory Manager:/usr/local/bin/dirmngr: Checking programs The command --check-programs is similar to --list-components but works on backend programs and not on components. It runs each program to test whether it is installed and runnable. This also includes a syntax check of all config file options of the program. The command --check-programs lists all available programs, one per line. The format of each line is: name:description:pgmname:avail:okay:cfgfile:line:error: name This field contains a name tag of the program which is identical to the name of the component. The name tag is to be used verbatim. It is thus not in any escaped format. This field may be empty to indicate a continuation of error descriptions for the last name. The description and pgmname fields are then also empty. description The string in this field contains a human-readable description of the component. It can be displayed to the user of the GUI for informational purposes. It is percent-escaped and localized. pgmname The string in this field contains the absolute name of the program's file. It can be used to unambiguously invoke that program. It is percent-escaped. avail The boolean value in this field indicates whether the program is installed and runnable. 
okay The boolean value in this field indicates whether the program's config file is syntactically okay. cfgfile If an error occurred in the configuration file (as indicated by a false value in the field okay), this field has the name of the failing configuration file. It is percent-escaped. line If an error occurred in the configuration file, this field has the line number of the failing statement in the configuration file. It is an unsigned number. error If an error occurred in the configuration file, this field has the error text of the failing statement in the configuration file. It is percent-escaped and localized. In the following example the dirmngr is not runnable and the configuration file of scdaemon is not okay. $ gpgconf --check-programs gpg:GPG for OpenPGP:/usr/local/bin/gpg2:1:1: gpg-agent:GPG Agent:/usr/local/bin/gpg-agent:1:1: scdaemon:Smartcard Daemon:/usr/local/bin/scdaemon:1:0: gpgsm:GPG for S/MIME:/usr/local/bin/gpgsm:1:1: dirmngr:Directory Manager:/usr/local/bin/dirmngr:0:0: The command --check-options component checks the configuration file in the same manner as --check-programs, but only for the component component. Listing options Every component contains one or more options. Options may be gathered into option groups to allow the GUI to give visual hints to the user about which options are related. The command --list-options component lists all options (and the groups they belong to) in the component component, one per line. component must be the string in the field name in the output of the --list-components command. There is one line for each option and each group. First come all options that are not in any group. Then comes a line describing a group. Then come all options that belong to that group. Then comes the next group and so on. There does not need to be any group (and in this case the output will stop after the last non-grouped option). 
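These colon-separated listings are easy to post-process with standard tools. As an illustrative sketch (using the sample --check-programs output shown above as canned input rather than a live gpgconf run), the avail field can be checked like this:

```shell
# Report backend programs whose "avail" field (4th) is 0, i.e. programs
# that are not installed or not runnable. The input is the sample output
# quoted in the text above, not a live `gpgconf --check-programs` run.
sample='gpg:GPG for OpenPGP:/usr/local/bin/gpg2:1:1:
gpg-agent:GPG Agent:/usr/local/bin/gpg-agent:1:1:
scdaemon:Smartcard Daemon:/usr/local/bin/scdaemon:1:0:
gpgsm:GPG for S/MIME:/usr/local/bin/gpgsm:1:1:
dirmngr:Directory Manager:/usr/local/bin/dirmngr:0:0:'

printf '%s\n' "$sample" | awk -F: '$4 == "0" { print $1 }'   # -> dirmngr
```

With a live installation one would pipe `gpgconf --check-programs` into the same awk filter.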
The format of each line is: name:flags:level:description:type:alt-type:argname:default:argdef:value name This field contains a name tag for the group or option. The name tag is used to specify the group or option in all communication with gpgconf. The name tag is to be used verbatim. It is thus not in any escaped format. flags The flags field contains an unsigned number. Its value is the OR-wise combination of the following flag values: group (1) If this flag is set, this is a line describing a group and not an option. The following flag values are only defined for options (that is, if the group flag is not used). optional arg (2) If this flag is set, the argument is optional. This is never set for type 0 (none) options. list (4) If this flag is set, the option can be given multiple times. runtime (8) If this flag is set, the option can be changed at runtime. default (16) If this flag is set, a default value is available. default desc (32) If this flag is set, a (runtime) default is available. This and the default flag are mutually exclusive. no arg desc (64) If this flag is set, and the optional arg flag is set, then the option has a special meaning if no argument is given. no change (128) If this flag is set, gpgconf ignores requests to change the value. GUI frontends should grey out this option. Note, that manual changes of the configuration files are still possible. level This field is defined for options and for groups. It contains an unsigned number that specifies the expert level under which this group or option should be displayed. The following expert levels are defined for options (they have analogous meaning for groups): basic (0) This option should always be offered to the user. advanced (1) This option may be offered to advanced users. expert (2) This option should only be offered to expert users. invisible (3) This option should normally never be displayed, not even to expert users. internal (4) This option is for internal use only. Ignore it. 
The level of a group will always be the lowest level of all options it contains. description This field is defined for options and groups. The string in this field contains a human-readable description of the option or group. It can be displayed to the user of the GUI for informational purposes. It is percent-escaped and localized. type This field is only defined for options. It contains an unsigned number that specifies the type of the option's argument, if any. The following types are defined: Basic types: none (0) No argument allowed. string (1) An unformatted string. int32 (2) A signed number. uint32 (3) An unsigned number. Complex types: pathname (32) A string that describes the pathname of a file. The file does not necessarily need to exist. ldap server (33) A string that describes an LDAP server in the format: hostname:port:username:password:base_dn key fingerprint (34) A string with a 40 digit fingerprint specifying a certificate. pub key (35) A string that describes a certificate by user ID, key ID or fingerprint. sec key (36) A string that describes a certificate with a key by user ID, key ID or fingerprint. alias list (37) A string that describes an alias list, like the one used with gpg's group option. The list consists of a key, an equal sign and space separated values. More types will be added in the future. Please see the alt-type field for information on how to cope with unknown types. alt-type This field is identical to type, except that only the types 0 to 31 are allowed. The GUI is expected to present the user the option in the format specified by type. But if the argument type type is not supported by the GUI, it can still display the option in the more generic basic type alt-type. The GUI must support all the defined basic types to be able to display all options. More basic types may be added in future versions. If the GUI encounters a basic type it doesn't support, it should report an error and abort the operation. 
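The flags field is a plain OR-wise bitmask, so it can be decoded with shell arithmetic. A minimal sketch (the helper name is an assumption; the bit values are the ones listed above):

```shell
# Decode the OR-wise combined flags value of a --list-options line into
# the symbolic flag names listed in the table above.
decode_flags() {
  f=$1 out=""
  [ $((f & 1))   -ne 0 ] && out="$out group"
  [ $((f & 2))   -ne 0 ] && out="$out optional-arg"
  [ $((f & 4))   -ne 0 ] && out="$out list"
  [ $((f & 8))   -ne 0 ] && out="$out runtime"
  [ $((f & 16))  -ne 0 ] && out="$out default"
  [ $((f & 32))  -ne 0 ] && out="$out default-desc"
  [ $((f & 64))  -ne 0 ] && out="$out no-arg-desc"
  [ $((f & 128)) -ne 0 ] && out="$out no-change"
  printf '%s\n' "${out# }"
}

decode_flags 12   # -> list runtime
```

A GUI would apply the same decoding to the second field of each option line before deciding how to present the option.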
argname This field is only defined for options with an argument type type that is not 0. In this case it may contain a percent-escaped and localized string that gives a short name for the argument. The field may also be empty, though, in which case a short name is not known. default This field is defined only for options for which the default or default desc flag is set. If the default flag is set, its format is that of an option argument (see: [Format conventions], for details). If the default value is empty, then no default is known. Otherwise, the value specifies the default value for this option. If the default desc flag is set, the field is either empty or contains a description of the effect if the option is not given. argdef This field is defined only for options for which the optional arg flag is set. If the no arg desc flag is not set, its format is that of an option argument (see: [Format conventions], for details). If the default value is empty, then no default is known. Otherwise, the value specifies the default argument for this option. If the no arg desc flag is set, the field is either empty or contains a description of the effect of this option if no argument is given. value This field is defined only for options. Its format is that of an option argument. If it is empty, then the option is not explicitly set in the current configuration, and the default applies (if any). Otherwise, it contains the current value of the option. Note that this field is also meaningful if the option itself does not take a real argument (in this case, it contains the number of times the option appears). Changing options The command --change-options component is used to change the options of the component component to the specified values. component must be the string in the field name in the output of the --list-components command. You have to provide the options that shall be changed in the following format on standard input: name:flags:new-value name This is the name of the option to change. 
name must be the string in the field name in the output of the --list-options command. flags The flags field contains an unsigned number. Its value is the OR-wise combination of the following flag values: default (16) If this flag is set, the option is deleted and the default value is used instead (if applicable). new-value The new value for the option. This field is only defined if the default flag is not set. The format is that of an option argument. If it is empty (or the field is omitted), the default argument is used (only allowed if the argument is optional for this option). Otherwise, the option will be set to the specified value. The output of the command is the same as that of --check-options for the modified configuration file. Examples: To set the force option, which is of basic type none (0): $ echo 'force:0:1' | gpgconf --change-options dirmngr To delete the force option: $ echo 'force:16:' | gpgconf --change-options dirmngr The --runtime option can influence when the changes take effect. Listing global options Some legacy applications look at the global configuration file for the gpgconf tool itself; this is the file ‘gpgconf.conf’. Modern applications should not use it but use per component global configuration files which are more flexible than the ‘gpgconf.conf’. Using both files is not suggested. The colon separated listing format is record oriented and uses the first field to identify the record type: k This describes a key record to start the definition of a new ruleset for a user/group. The format of a key record is: k:user:group: user This is the user field of the key. It is percent escaped. See the definition of the gpgconf.conf format for details. group This is the group field of the key. It is percent escaped. r This describes a rule record. All rule records up to the next key record make up a rule set for that key. The format of a rule record is: r:::component:option:flag:value: component This is the component part of a rule. 
It is a plain string. option This is the option part of a rule. It is a plain string. flag This is the flags part of a rule. There may be only one flag per rule but by using the same component and option, several flags may be assigned to an option. It is a plain string. value This is the optional value for the option. It is a percent escaped string with a single quotation mark to indicate a string. The quotation mark is only required to distinguish between no value specified and an empty string. Unknown record types should be ignored. Note that there is intentionally no feature to change the global option file through gpgconf. Get and compare software versions. The GnuPG Project operates a server to query the current versions of software packages related to GnuPG. gpgconf can be used to access this online database. To allow for offline operations, this feature works by having dirmngr download a file from https://versions.gnupg.org, checking the signature of that file and storing the file in the GnuPG home directory. If gpgconf is used and dirmngr is running, it may ask dirmngr to refresh that file before itself uses the file. The command --query-swdb returns information for the given package in a colon delimited format: name This is the name of the package as requested. Note that "gnupg" is a special name which is replaced by the actual package implementing this version of GnuPG. For this name it is also not required to specify a version because gpgconf takes its own version in this case. iversion The currently installed version or an empty string. The value is taken from the command line argument but may be provided by gpg if not given. status The status of the software package according to this table: - No information available. This is either because no current version has been specified or due to an error. ? The given name is not known in the online database. u An update of the software is available. c The installed version of the software is current. 
n The installed version is already newer than the released version. urgency If the value (the empty string should be considered as zero) is greater than zero an important update is available. error This returns a gpg-error error code to distinguish between various failure modes. filedate This gives the date of the file with the version numbers in standard ISO format (yyyymmddThhmmss). The date has been extracted by dirmngr from the signature of the file. verified This gives the date in ISO format the file was downloaded. This value can be used to evaluate the freshness of the information. version This returns the version string for the requested software from the file. reldate This returns the release date in ISO format. size This returns the size of the package as a decimal number of bytes. hash This returns a hexified SHA-2 hash of the package. More fields may be added to the output in the future. FILES gpgconf.ctl Under Unix ‘gpgconf.ctl’ may be used to change some of the compiled-in directories where the GnuPG components are expected. This file is expected in the same directory as ‘gpgconf’. The physical installation directories are evaluated, not symlinks. Blank lines and lines starting with a pound sign are ignored in the file. The keywords must be followed by optional white space, an equal sign, optional white space, and the value. Environment variables are substituted in standard shell manner; the final value must start with a slash, and trailing slashes are stripped. Valid keywords are rootdir, sysconfdir, socketdir, and .enable. No errors are printed for unknown keywords. The .enable keyword is special: if the keyword is used and its value evaluates to true the entire file is ignored. Under Windows this file is used to install GnuPG as a portable application. An empty file named ‘gpgconf.ctl’ is expected in the same directory as the tool ‘gpgconf.exe’. 
The root of the installation is then that directory; or, if ‘gpgconf.exe’ has been installed directly below a directory named ‘bin’, its parent directory. You also need to make sure that the following directories exist and are writable: ‘ROOT/home’ for the GnuPG home and ‘ROOT/var/cache/gnupg’ for internal cache files. /etc/gnupg/gpgconf.conf If this file exists, it is processed as a global configuration file. This is a legacy mechanism which should not be used together with the modern global per component configuration files. A commented example can be found in the ‘examples’ directory of the distribution. GNUPGHOME/swdb.lst A file with current software versions. dirmngr creates this file on demand from an online resource. SEE ALSO gpg(1), gpgsm(1), gpg-agent(1), scdaemon(1), dirmngr(1) The full documentation for this tool is maintained as a Texinfo manual. If GnuPG and the info program are properly installed at your site, the command info gnupg should give you access to the complete manual including a menu structure and an index. GnuPG 2.4.5 2024-03-04 GPGCONF(1)
|
gpgconf - Modify .gnupg home directories
|
gpgconf [options] --list-components gpgconf [options] --list-options component gpgconf [options] --change-options component
|
The following options may be used: -o file --output file Write output to file. Default is to write to stdout. -v --verbose Outputs additional information while running. Specifically, this extends numerical field values by human-readable descriptions. -q --quiet Try to be as quiet as possible. --homedir dir Set the name of the home directory to dir. If this option is not used, the home directory defaults to ‘~/.gnupg’. It is only recognized when given on the command line. It also overrides any home directory stated through the environment variable ‘GNUPGHOME’ or (on Windows systems) by means of the Registry entry HKCU\Software\GNU\GnuPG:HomeDir. On Windows systems it is possible to install GnuPG as a portable application. In this case only this command line option is considered, all other ways to set a home directory are ignored. --chuid uid Change the current user to uid which may either be a number or a name. This can be used from the root account to get information on the GnuPG environment of the specified user or to start or kill daemons. If uid is not the current UID a standard PATH is set and the envvar GNUPGHOME is unset. To override the latter the option --homedir can be used. This option has currently no effect on Windows. -n --dry-run Do not actually change anything. This is currently only implemented for --change-options and can be used for testing purposes. -r --runtime Only used together with --change-options. If one of the modified options can be changed in a running daemon process, signal the running daemon to ask it to reparse its configuration file after changing. This means that the changes will take effect at run-time, as far as this is possible. Otherwise, they will take effect at the next start of the respective backend programs. --status-fd n Write special status strings to the file descriptor n. 
This program returns the status messages SUCCESS or FAILURE which are helpful when the caller uses a double fork approach and can't easily get the return code of the process. USAGE The command --list-components will list all components that can be configured with gpgconf. Usually, one component will correspond to one GnuPG-related program and contain the options of that program's configuration file that can be modified using gpgconf. However, this is not necessarily the case. A component might also be a group of selected options from several programs, or contain entirely virtual options that have a special effect rather than changing exactly one option in one configuration file. A component is a set of configuration options that semantically belong together. Furthermore, several changes to a component can be made in an atomic way with a single operation. The GUI could for example provide a menu with one entry for each component, or a window with one tabulator sheet per component. The command --list-components lists all available components, one per line. The format of each line is: name:description:pgmname: name This field contains a name tag of the component. The name tag is used to specify the component in all communication with gpgconf. The name tag is to be used verbatim. It is thus not in any escaped format.
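The colon-delimited component listing is easy to split in a script. A minimal sketch, using an illustrative sample line rather than real gpgconf output (field contents vary by installation):

```shell
# Parse "name:description:pgmname:" lines in the format emitted by
# `gpgconf --list-components`. A real script would pipe the command's
# output; here a made-up sample line stands in for it.
sample='gpg:OpenPGP:/usr/bin/gpg:'
printf '%s\n' "$sample" |
  awk -F: '{ printf "component=%s program=%s\n", $1, $3 }'
# → component=gpg program=/usr/bin/gpg
```

Because the name tag is not escaped, splitting on `:` with awk or `cut` is sufficient.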
| null |
nettle-hash
| null | null | null | null | null |
parcat
|
GNU parcat reads files or fifos in parallel. It writes full lines so there will be no problem with mixed half-lines which you risk if you use: (cat file1 & cat file2 &) | ... It is faster than doing: parallel -j0 --lb cat ::: file* Arguments can be given on the command line or passed in on stdin (standard input).
|
parcat - cat files or fifos in parallel
|
parcat [--rm] [-#] file(s) [-#] file(s)
|
-# Arguments following this will be sent to the file descriptor #. E.g. parcat -1 stdout1 stdout2 -2 stderr1 stderr2 will send stdout1 and stdout2 to stdout (standard output = file descriptor 1), and send stderr1 and stderr2 to stderr (standard error = file descriptor 2). --rm Remove files after opening. As soon as the files are opened, unlink the files.
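To illustrate what the -# routing does, here is a plain-shell (sequential, non-parallel) equivalent of sending one file to stdout and one to stderr; the file names are made up for the example:

```shell
# Create two sample files (names are arbitrary).
printf 'out line\n' > stdout1
printf 'err line\n' > stderr1

# parcat -1 stdout1 -2 stderr1 would read these in parallel;
# the sequential plain-shell equivalent of the fd routing is:
cat stdout1 >&1   # file descriptor 1 = standard output
cat stderr1 >&2   # file descriptor 2 = standard error
```

Unlike the shell version, parcat opens and drains all the files concurrently while still keeping lines whole.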
|
Simple line buffered output: traceroute will often print half a line. If run in parallel, two instances may mix half-lines of their output. This can be avoided by saving the output to a fifo and then using parcat to read the two fifos in parallel: mkfifo freenetproject.org.fifo tange.dk.fifo traceroute freenetproject.org > freenetproject.org.fifo & traceroute tange.dk > tange.dk.fifo & parcat --rm *fifo REPORTING BUGS GNU parcat is part of GNU parallel. Report bugs to <bug-parallel@gnu.org>. AUTHOR Copyright (C) 2016-2024 Ole Tange, http://ole.tange.dk and Free Software Foundation, Inc. LICENSE This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or at your option any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Documentation license I Permission is granted to copy, distribute and/or modify this documentation under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the file LICENSES/GFDL-1.3-or-later.txt. Documentation license II You are free: to Share to copy, distribute and transmit the work to Remix to adapt the work Under the following conditions: Attribution You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). 
Share Alike If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license. With the understanding that: Waiver Any of the above conditions can be waived if you get permission from the copyright holder. Public Domain Where the work or any of its elements is in the public domain under applicable law, that status is in no way affected by the license. Other Rights In no way are any of the following rights affected by the license: • Your fair dealing or fair use rights, or other applicable copyright exceptions and limitations; • The author's moral rights; • Rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights. Notice For any reuse or distribution, you must make clear to others the license terms of this work. A copy of the full license is included in the file as LICENCES/CC-BY-SA-4.0.txt DEPENDENCIES GNU parcat uses Perl. SEE ALSO cat(1), parallel(1) 20240522 2024-06-22 PARCAT(1)
|
grpc_cli
| null | null | null | null | null |
hwloc-diff
|
hwloc-diff computes the difference between two XML topologies and stores the result into <output.xml> if any, or dumps it to stdout otherwise. The output difference may later be applied to another topology with hwloc-patch. hwloc-compress-dir may be used for computing the diffs between all XML files in a directory. NOTE: If some application-specific userdata have been exported to the input XMLs, they will be ignored and discarded from the output because hwloc has no way to understand and compare them. NOTE: It is highly recommended that you read the hwloc(7) overview page before reading this man page. Most of the concepts described in hwloc(7) directly apply to the hwloc-diff utility.
|
hwloc-diff - Compute differences between two XML topologies
|
hwloc-diff [options] <input1.xml> <input2.xml> hwloc-diff [options] <input1.xml> <input2.xml> <output.xml>
|
--refname <name> Use <name> as the identifier for the reference topology in the output XML difference. It is meant to tell which topology should be used when applying the resulting difference. hwloc-patch may use that name to automatically load the relevant reference topology XML. By default, <input1.xml> is used without its full path. --version Report version and exit. -h --help Display help message and exit.
|
hwloc-diff's operation is best described through several examples. Compute the difference between two XML topologies and output it to stdout: $ hwloc-diff fourmi023.xml fourmi024.xml Found 11 differences, exporting to stdout <?xml version="1.0" encoding="UTF-8"?> ... Output the difference to file diff.xml instead: $ hwloc-diff fourmi023.xml fourmi024.xml diff.xml Found 11 differences, exporting to diff.xml When the difference is too complex to be represented: $ hwloc-diff fourmi023.xml avakas-frontend1.xml Found 1 differences, including 1 too complex ones. Cannot export differences to stdout Directly compute the difference between two topologies and apply it to another one: $ hwloc-diff fourmi023.xml fourmi024.xml | hwloc-patch fourmi025.xml - RETURN VALUE Upon successful execution, hwloc-diff outputs the difference. The return value is 0. If the difference is too complex to be represented, an error is returned and the output is not generated. hwloc-diff also returns nonzero if any kind of error occurs, such as (but not limited to) failure to parse the command line. SEE ALSO hwloc(7), lstopo(1), hwloc-patch(1), hwloc-compress-dir(1) 2.10.0 December 4, 2023 HWLOC-DIFF(1)
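The return-value convention lends itself to simple scripting. A hypothetical wrapper (the function name and messages are made up for illustration):

```shell
# Hypothetical wrapper around hwloc-diff's exit status: 0 means the
# difference was computed and exported; nonzero means the difference
# was too complex to represent, or another error occurred.
diff_topologies() {
    if hwloc-diff "$1" "$2" "$3" >/dev/null 2>&1; then
        echo "difference written to $3"
    else
        echo "no usable diff (too complex, or an error occurred)"
    fi
}
```

Usage might look like `diff_topologies fourmi023.xml fourmi024.xml diff.xml`.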
|
qt-faststart
| null | null | null | null | null |
dirmngr-client
|
The dirmngr-client is a simple tool to contact a running dirmngr and test whether a certificate has been revoked, either by being listed in the corresponding CRL or by running the OCSP protocol. If no dirmngr is running, a new instance will be started, but this is in general not a good idea due to the huge performance overhead. The usual way to run this tool is either: dirmngr-client acert or dirmngr-client <acert Where acert is a DER encoded (binary) X.509 certificate to be tested. RETURN VALUE dirmngr-client returns these values: 0 The certificate under question is valid; i.e. there is a valid CRL available and it is not listed there, or the OCSP request returned that the certificate is valid. 1 The certificate has been revoked. 2 (and other values) There was a problem checking the revocation state of the certificate. A message to stderr has given more detailed information. Most likely this is due to a missing or expired CRL or due to a network problem.
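A script can branch on these exit codes directly. A minimal sketch; the helper function name and messages are invented for illustration, and the commented invocation assumes a running dirmngr plus a DER-encoded certificate file:

```shell
# Hypothetical helper mapping dirmngr-client's exit status
# (0, 1, or anything else) to a human-readable verdict.
classify_revocation() {
    case "$1" in
        0) echo "valid" ;;
        1) echo "revoked" ;;
        *) echo "check failed (missing/expired CRL or network problem?)" ;;
    esac
}

# Usage (requires a running dirmngr and a DER certificate 'acert'):
#   dirmngr-client acert; classify_revocation $?
```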
|
dirmngr-client - Tool to access the Dirmngr services
|
dirmngr-client [options] [certfile|pattern]
|
dirmngr-client may be called with the following options: --version Print the program version and licensing information. Note that you cannot abbreviate this command. --help, -h Print a usage message summarizing the most useful command-line options. Note that you cannot abbreviate this command. --quiet, -q Make the output extra brief by suppressing any informational messages. -v --verbose Outputs additional information while running. You can increase the verbosity by giving several verbose commands to dirmngr, such as ‘-vv’. --pem Assume that the given certificate is in PEM (armored) format. --ocsp Do the check using the OCSP protocol and ignore any CRLs. --force-default-responder When checking using the OCSP protocol, force the use of the default OCSP responder. That is, do not use the Responder as given by the certificate. --ping Check whether the dirmngr daemon is up and running. --cache-cert Put the given certificate into the cache of a running dirmngr. This is mainly useful for debugging. --validate Validate the given certificate using dirmngr's internal validation code. This is mainly useful for debugging. --load-crl This command expects a list of filenames with DER encoded CRL files. With the option --url URLs are expected in place of filenames and they are loaded directly from the given location. All CRLs will be validated and then loaded into dirmngr's cache. --lookup Take the remaining arguments and run a lookup command on each of them. The results are Base-64 encoded outputs (without header lines). This may be used to retrieve certificates from a server. However the output format is not very well suited if more than one certificate is returned. --url -u Modify the lookup and load-crl commands to take a URL. --local -l Let the lookup command only search the local cache. --squid-mode Run dirmngr-client in a mode suitable as a helper program for Squid's external_acl_type option. 
SEE ALSO dirmngr(8), gpgsm(1) The full documentation for this tool is maintained as a Texinfo manual. If GnuPG and the info program are properly installed at your site, the command info gnupg should give you access to the complete manual including a menu structure and an index. GnuPG 2.4.5 2024-03-04 DIRMNGR-CLIENT(1)
| null |
dh_genprime
| null | null | null | null | null |
sndfile-info
|
sndfile-info displays basic information about sound files such as format, number of channels, samplerate, and length. The following options are recognized: --broadcast Display broadcast (BWF) info. --cart Display the cart chunk of a WAV (or related) file. --channel-map Display channel map. --instrument Display instrument info: a base note, gain, velocity, key, and loop points. SEE ALSO http://libsndfile.github.io/libsndfile/ AUTHORS Erik de Castro Lopo <erikd@mega-nerd.com>. macOS 14.5 November 2, 2014 macOS 14.5
|
sndfile-info – display information about sound files
|
sndfile-info [--broadcast] [--cart] [--channel-map] [--instrument] file ...
| null | null |
sha384sum
|
Print or check SHA384 (384-bit) checksums. With no FILE, or when FILE is -, read standard input. -b, --binary read in binary mode -c, --check read checksums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --help display this help and exit --version output version information and exit The sums are computed as described in FIPS-180-2. When checking, the input should be a former output of this program. The default mode is to print a line with: checksum, a space, a character indicating input mode ('*' for binary, ' ' for text or where binary is insignificant), and name for each FILE. Note: There is no difference between binary mode and text mode on GNU systems. AUTHOR Written by Ulrich Drepper, Scott Miller, and David Madore. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO cksum(1) Full documentation <https://www.gnu.org/software/coreutils/sha384sum> or available locally via: info '(coreutils) sha2 utilities' GNU coreutils 9.3 April 2023 SHA384SUM(1)
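A typical create-then-verify round trip looks like this (the file names are arbitrary):

```shell
# Record a file's checksum, then verify it later with --check.
printf 'abc' > data.bin
sha384sum data.bin > data.sha384   # "checksum  filename" line
sha384sum --check data.sha384      # prints "data.bin: OK"
```

If data.bin is modified afterwards, the --check run reports FAILED and exits nonzero, which makes it convenient in scripts and CI jobs.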
|
sha384sum - compute and check SHA384 message digest
|
sha384sum [OPTION]... [FILE]...
| null | null |
h2xs
|
h2xs builds a Perl extension from C header files. The extension will include functions which can be used to retrieve the value of any #define statement which was in the C header files. The module_name will be used for the name of the extension. If module_name is not supplied then the name of the first header file will be used, with the first character capitalized. If the extension might need extra libraries, they should be included here. The extension Makefile.PL will take care of checking whether the libraries actually exist and how they should be loaded. The extra libraries should be specified in the form -lm -lposix, etc, just as on the cc command line. By default, the Makefile.PL will search through the library path determined by Configure. That path can be augmented by including arguments of the form -L/another/library/path in the extra-libraries argument. In spite of its name, h2xs may also be used to create a skeleton pure Perl module. See the -X option.
|
h2xs - convert .h C header files to Perl extensions
|
h2xs [OPTIONS ...] [headerfile ... [extra_libraries]] h2xs -h|-?|--help
|
-A, --omit-autoload Omit all autoload facilities. This is the same as -c but also removes the "use AutoLoader" statement from the .pm file. -B, --beta-version Use an alpha/beta style version number. Causes version number to be "0.00_01" unless -v is specified. -C, --omit-changes Omits creation of the Changes file, and adds a HISTORY section to the POD template. -F, --cpp-flags=addflags Additional flags to specify to C preprocessor when scanning header for function declarations. Writes these options in the generated Makefile.PL too. -M, --func-mask=regular expression selects functions/macros to process. -O, --overwrite-ok Allows a pre-existing extension directory to be overwritten. -P, --omit-pod Omit the autogenerated stub POD section. -X, --omit-XS Omit the XS portion. Used to generate a skeleton pure Perl module. "-c" and "-f" are implicitly enabled. -a, --gen-accessors Generate an accessor method for each element of structs and unions. The generated methods are named after the element name; will return the current value of the element if called without additional arguments; and will set the element to the supplied value (and return the new value) if called with an additional argument. Embedded structures and unions are returned as a pointer rather than the complete structure, to facilitate chained calls. These methods all apply to the Ptr type for the structure; additionally two methods are constructed for the structure type itself, "_to_ptr" which returns a Ptr type pointing to the same structure, and a "new" method to construct and return a new structure, initialised to zeroes. -b, --compat-version=version Generates a .pm file which is backwards compatible with the specified perl version. For versions < 5.6.0, the changes are: - no use of 'our' (uses 'use vars' instead) - no 'use warnings' Specifying a compatibility version higher than the version of perl you are using to run h2xs will have no effect. 
If unspecified h2xs will default to compatibility with the version of perl you are using to run h2xs. -c, --omit-constant Omit constant() from the .xs file and corresponding specialised "AUTOLOAD" from the .pm file. -d, --debugging Turn on debugging messages. -e, --omit-enums=[regular expression] If regular expression is not given, skip all constants that are defined in a C enumeration. Otherwise skip only those constants that are defined in an enum whose name matches regular expression. Since regular expression is optional, make sure that this switch is followed by at least one other switch if you omit regular expression and have some pending arguments such as header-file names. This is ok: h2xs -e -n Module::Foo foo.h This is not ok: h2xs -n Module::Foo -e foo.h In the latter, foo.h is taken as regular expression. -f, --force Allows an extension to be created for a header even if that header is not found in standard include directories. -g, --global Include code for safely storing static data in the .xs file. Extensions that do not make use of static data can ignore this option. -h, -?, --help Print the usage, help and version for this h2xs and exit. -k, --omit-const-func For function arguments declared as "const", omit the const attribute in the generated XS code. -m, --gen-tied-var Experimental: for each variable declared in the header file(s), declare a perl variable of the same name magically tied to the C variable. -n, --name=module_name Specifies a name to be used for the extension, e.g., -n RPC::DCE -o, --opaque-re=regular expression Use "opaque" data type for the C types matched by the regular expression, even if these types are "typedef"-equivalent to types from typemaps. Should not be used without -x. This may be useful since, say, types which are "typedef"-equivalent to integers may represent OS-related handles, and one may want to work with these handles in OO-way, as in "$handle->do_something()". Use "-o ." 
if you want to handle all the "typedef"ed types as opaque types. The type-to-match is whitewashed (except for commas, which have no whitespace before them, and multiple "*" which have no whitespace between them). -p, --remove-prefix=prefix Specify a prefix which should be removed from the Perl function names, e.g., -p sec_rgy_ This sets up the XS PREFIX keyword and removes the prefix from functions that are autoloaded via the constant() mechanism. -s, --const-subs=sub1,sub2 Create a perl subroutine for the specified macros rather than autoload with the constant() subroutine. These macros are assumed to have a return type of char *, e.g., -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid. -t, --default-type=type Specify the internal type that the constant() mechanism uses for macros. The default is IV (signed integer). Currently all macros found during the header scanning process will be assumed to have this type. Future versions of "h2xs" may gain the ability to make educated guesses. --use-new-tests When --compat-version (-b) is present the generated tests will use "Test::More" rather than "Test" which is the default for versions before 5.6.2. "Test::More" will be added to PREREQ_PM in the generated "Makefile.PL". --use-old-tests Will force the generation of test code that uses the older "Test" module. --skip-exporter Do not use "Exporter" and/or export any symbol. --skip-ppport Do not use "Devel::PPPort": no portability to older version. --skip-autoloader Do not use the module "AutoLoader"; but keep the constant() function and "sub AUTOLOAD" for constants. --skip-strict Do not use the pragma "strict". --skip-warnings Do not use the pragma "warnings". -v, --version=version Specify a version number for this extension. This version number is added to the templates. The default is 0.01, or 0.00_01 if "-B" is specified. The version specified should be numeric. -x, --autogen-xsubs Automatically generate XSUBs based on function declarations in the header file. 
The package "C::Scan" should be installed. If this option is specified, the name of the header file may look like "NAME1,NAME2". In this case NAME1 is used instead of the specified string, but XSUBs are emitted only for the declarations included from file NAME2. Note that some types of arguments/return-values for functions may result in XSUB-declarations/typemap-entries which need hand-editing. Such may be objects which cannot be converted from/to a pointer (like "long long"), pointers to functions, or arrays. See also the section on "LIMITATIONS of -x".
|
# Default behavior, extension is Rusers h2xs rpcsvc/rusers # Same, but extension is RUSERS h2xs -n RUSERS rpcsvc/rusers # Extension is rpcsvc::rusers. Still finds <rpcsvc/rusers.h> h2xs rpcsvc::rusers # Extension is ONC::RPC. Still finds <rpcsvc/rusers.h> h2xs -n ONC::RPC rpcsvc/rusers # Without constant() or AUTOLOAD h2xs -c rpcsvc/rusers # Creates templates for an extension named RPC h2xs -cfn RPC # Extension is ONC::RPC. h2xs -cfn ONC::RPC # Extension is a pure Perl module with no XS code. h2xs -X My::Module # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines and enums h2xs can find # in foo.h. h2xs -b 5.5.3 -n Lib::Foo foo.h # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines but only for enums # whose names do not start with 'bar_'. h2xs -b 5.5.3 -e '^bar_' -n Lib::Foo foo.h # Makefile.PL will look for library -lrpc in # additional directory /opt/net/lib h2xs rpcsvc/rusers -L/opt/net/lib -lrpc # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names h2xs -n DCE::rgynbase -p sec_rgy_ dce/rgynbase # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names # subroutines are created for sec_rgy_wildcard_name and # sec_rgy_wildcard_sid h2xs -n DCE::rgynbase -p sec_rgy_ \ -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid dce/rgynbase # Make XS without defines in perl.h, but with function declarations # visible from perl.h. Name of the extension is perl1. # When scanning perl.h, define -DEXT=extern -DdEXT= -DINIT(x)= # Extra backslashes below because the string is passed to shell. # Note that a directory with perl header files would # be added automatically to include path. h2xs -xAn perl1 -F "-DEXT=extern -DdEXT= -DINIT\(x\)=" perl.h # Same with function declaration in proto.h as visible from perl.h. 
h2xs -xAn perl2 perl.h,proto.h # Same but select only functions which match /^av_/ h2xs -M '^av_' -xAn perl2 perl.h,proto.h # Same but treat SV* etc as "opaque" types h2xs -o '^[S]V \*$' -M '^av_' -xAn perl2 perl.h,proto.h Extension based on .h and .c files Suppose that you have some C files implementing some functionality, and the corresponding header files. How to create an extension which makes this functionality accessible in Perl? The example below assumes that the header files are interface_simple.h and interface_hairy.h, and you want the perl module be named as "Ext::Ension". If you need some preprocessor directives and/or linking with external libraries, see the flags "-F", "-L" and "-l" in "OPTIONS". Find the directory name Start with a dummy run of h2xs: h2xs -Afn Ext::Ension The only purpose of this step is to create the needed directories, and let you know the names of these directories. From the output you can see that the directory for the extension is Ext/Ension. Copy C files Copy your header files and C files to this directory Ext/Ension. Create the extension Run h2xs, overwriting older autogenerated files: h2xs -Oxan Ext::Ension interface_simple.h interface_hairy.h h2xs looks for header files after changing to the extension directory, so it will find your header files OK. Archive and test As usual, run cd Ext/Ension perl Makefile.PL make dist make make test Hints It is important to do "make dist" as early as possible. This way you can easily merge(1) your changes to autogenerated files if you decide to edit your ".h" files and rerun h2xs. Do not forget to edit the documentation in the generated .pm file. Consider the autogenerated files as skeletons only, you may invent better interfaces than what h2xs could guess. Consider this section as a guideline only, some other options of h2xs may better suit your needs. ENVIRONMENT No environment variables are used. AUTHOR Larry Wall and others SEE ALSO perl, perlxstut, ExtUtils::MakeMaker, and AutoLoader. 
DIAGNOSTICS The usual warnings if it cannot read or write the files involved. LIMITATIONS of -x h2xs would not distinguish whether an argument to a C function which is of the form, say, "int *", is an input, output, or input/output parameter. In particular, argument declarations of the form int foo(n) int *n should be better rewritten as int foo(n) int &n if "n" is an input parameter. Additionally, h2xs has no facilities to intuit that a function int foo(addr,l) char *addr int l takes a pair of address and length of data at this address, so it is better to rewrite this function as int foo(sv) SV *addr PREINIT: STRLEN len; char *s; CODE: s = SvPV(sv,len); RETVAL = foo(s, len); OUTPUT: RETVAL or alternately static int my_foo(SV *sv) { STRLEN len; char *s = SvPV(sv,len); return foo(s, len); } MODULE = foo PACKAGE = foo PREFIX = my_ int foo(sv) SV *sv See perlxs and perlxstut for additional details. perl v5.38.2 2023-11-28 H2XS(1)
|
nonspr10
| null | null | null | null | null |
lstopo-no-graphics
|
lstopo and lstopo-no-graphics are capable of displaying a topological map of the system in a variety of different output formats. The only difference between lstopo and lstopo-no-graphics is that graphical outputs are only supported by lstopo, to reduce dependencies on external libraries. hwloc-ls is identical to lstopo-no-graphics. The filename specified directly implies the output format that will be used; see the OUTPUT FORMATS section, below. Output formats that support color will indicate specific characteristics about individual CPUs by their color; see the COLORS section, below. OUTPUT FORMATS By default, if no output filename is specified, the output is sent to a graphical window if possible in the current environment (DISPLAY environment variable set on Unix, etc.). Otherwise, a text summary is displayed in the console. The console is also used when the program runs from a terminal and the output is redirected to a pipe or file. These default behaviors may be changed by passing --of console to force console mode or --of window for graphical window. The filename on the command line usually determines the format of the output. There are a few filenames that indicate specific output formats and devices (e.g., a filename of "-" will output a text summary to stdout), but most filenames indicate the desired output format by their suffix (e.g., "topo.png" will output a PNG-format file). The format of the output may also be changed with "--of". For instance, "--of pdf" will generate a PDF-format file on the standard output, while "--of fig toto" will output a Xfig-format file named "toto". The list of currently supported formats is given below. Any of them may be used with "--of" or as a filename suffix. default Send the output to a window or to the console depending on the environment. window Send the output to a graphical window. console Send a text summary to stdout. 
Binding or unallowed processors are only annotated in this mode if verbose; see the COLORS section, below. ascii Output an ASCII art representation of the map (formerly called txt). If outputting to stdout and if colors are supported on the terminal, the output will be colorized. tikz or tex Output a LaTeX tikzpicture representation of the map that can be compiled with a LaTeX compiler. fig Output a representation of the map that can be loaded in Xfig. svg Output a SVG representation of the map, using Cairo (by default, if supported) or a native SVG backend (fallback, always supported). See cairosvg and nativesvg below. cairosvg or svg(cairo) If lstopo was compiled with the proper support, output a SVG representation of the map using Cairo. nativesvg or svg(native) Output a SVG representation of the map using the native SVG backend. It may be less pretty than the Cairo output, but it is always supported, and SVG objects have attributes for identifying and manipulating them. See dynamic_SVG_example.html for an example. pdf If lstopo was compiled with the proper support, lstopo outputs a PDF representation of the map. ps If lstopo was compiled with the proper support, lstopo outputs a Postscript representation of the map. png If lstopo was compiled with the proper support, lstopo outputs a PNG representation of the map. synthetic If the topology is symmetric (which requires that the root object has its symmetric_subtree field set), lstopo outputs a synthetic description string. This output may be reused as an input synthetic topology description later. See also the Synthetic topologies section in the documentation. Note that Misc and I/O devices are ignored during this export. xml lstopo outputs an XML representation of the map. It may be reused later, even on another machine, with lstopo --input, the HWLOC_XMLFILE environment variable, or the hwloc_topology_set_xml() function. The following special names may be used: - Send a text summary to stdout. 
/dev/stdout Send a text summary to stdout. It is effectively the same as specifying "-". -.<format> If the entire filename is "-.<format>", lstopo behaves as if "--of <format> -" was given, which means a file of the given format is sent to the standard output. See the output of "lstopo --help" for a specific list of what graphical output formats are supported in your hwloc installation. GRAPHICAL OUTPUT The graphical output is made of nested boxes representing the inclusion of objects in the hierarchy of resources. Usually a Machine box contains one or several Package boxes, that contain multiple Core boxes, with one or several PUs each. Caches Caches are displayed in a slightly different manner because they do not actually include computing resources such as cores. For instance, a L2 Cache shared by a pair of Cores is drawn as a Cache box on top of two Core boxes (instead of having Core boxes inside the Cache box). NUMA nodes and Memory-side Caches By default, NUMA nodes boxes are drawn on top of their local computing resources. For instance, a processor Package containing one NUMA node and four Cores is displayed as a Package box containing the NUMA node box above four Core boxes. If a NUMA node is local to the L3 Cache, the NUMA node is displayed above that Cache box. All this specific drawing strategy for memory objects may be disabled by passing command- line option --children-order plain. If multiple NUMA nodes are attached to the same parent object, they are displayed inside an additional unnamed memory box. If some Memory-side Caches exist in front of some NUMA nodes, they are drawn as boxes immediately above them. PCI bridges, PCI devices and OS devices The PCI hierarchy is not drawn as a set of included boxes but rather as a tree of bridges (that may actually be switches) with links between them. The tree starts with a small square on the left for the hostbridge or root complex. It ends with PCI device boxes on the right. 
Intermediate PCI bridges/switches may appear as additional small squares in the middle. PCI devices on the right of the tree are boxes containing their PCI bus ID (such as 00:02.3). They may also contain sub-boxes for OS device objects such as a network interface eth0 or a CUDA GPU cuda0. When there is a single link (horizontal line) on the right of a PCI bridge, it means that a single device or bridge is connected on the secondary PCI bus behind that bridge. When there is a vertical line, it means that multiple devices and/or bridges are connected to the same secondary PCI bus. The datarate of a PCI link may be written (in GB/s) right below its drawn line (if the operating system and/or libraries are able to report that information). This datarate is the currently configured speed of the entire PCI link (sum of the bandwidth of all PCI lanes in that link). It may change during execution since some devices are able to slow their PCI links down when idle. LAYOUT In its graphical output, lstopo uses simple rectangular heuristics to try to achieve a 4/3 ratio between width and height. Although the hierarchy of resources is properly reflected, the exact physical organization (NUMA distances, rings, complete graphs, etc.) is currently ignored. The layout of a level may be changed with --vert, --horiz, and --rect to force a parent object to arrange its children in vertical, horizontal or rectangular manners respectively. The position of Memory, I/O and Misc children with respect to other children objects may be changed using --children-order. This effectively divides children into multiple sections. The layout of children is first computed inside each section, before sections are placed inside (or below) the parent box. The vertical/horizontal/rectangular layout of these additional sections may also be configured through --children-order. 
COLORS Individual CPUs and NUMA nodes are colored in the graphical output formats to indicate different characteristics: Green The topology is reported as seen by a specific process (see --pid), and the given CPU or NUMA node is in this process's CPU or Memory binding mask. White The CPU or NUMA node is in the allowed set (see below). If the topology is reported as seen by a specific process (see --pid), the object is also not in this process's binding mask. Red The CPU or NUMA node is not in the allowed set (see below). The "allowed set" is the set of CPUs or NUMA nodes to which the current process is allowed to bind. The allowed set is usually either inherited from the parent process or set by administrative policies on the system. Linux cpusets are one example of limiting the allowed set for a process and its children to be less than the full set of CPUs or NUMA nodes on the system. Different processes may therefore have different CPUs or NUMA nodes in the allowed set. Hence, invoking lstopo in different contexts and/or as different users may display different colors for the same individual CPUs (e.g., running lstopo in one context may show a specific CPU as red, but running lstopo in a different context may show the same CPU as white). Some lstopo output modes, e.g. the console mode (default non-graphical output), do not support colors at all. The console mode displays the above characteristics by appending text to each PU line if verbose messages are enabled. CUSTOM COLORS The colors of different kinds of boxes may be configured with --palette. The color of each object in the graphical output may also be enforced by specifying a "lstopoStyle" info attribute in that object. Its value should be a semicolon-separated list of "<attribute>=#rrggbb" where rr, gg and bb are the RGB components of a color, each between 0 and 255, in hexadecimal (00 to ff). <attribute> may be Background Sets the background color of the main object box. 
Text Sets the color of the text showing the object name, type, index, etc. Text2 Sets the color of the additional text near the object, for instance the link speed behind a PCI bridge. The "lstopoStyle" info may be added to a temporarily-saved XML topology with hwloc-annotate, or with hwloc_obj_add_info(). For instance, to display all core objects in blue (with white names): lstopo save.xml hwloc-annotate save.xml save.xml core:all info lstopoStyle "Background=#0000ff;Text=#ffffff" lstopo -i save.xml
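The "lstopoStyle" value format described above ("<attribute>=#rrggbb" entries separated by semicolons) can be decoded mechanically. A minimal Python sketch of such a parser (the parse_lstopo_style helper is hypothetical and not part of hwloc):

```python
def parse_lstopo_style(value):
    """Parse a semicolon-separated list of <attribute>=#rrggbb entries
    into a dict mapping attribute names to (r, g, b) tuples."""
    styles = {}
    for entry in value.split(";"):
        attr, _, color = entry.partition("=")
        if not color.startswith("#") or len(color) != 7:
            raise ValueError("expected #rrggbb, got %r" % color)
        # Each pair of hex digits is one 0-255 RGB component.
        r, g, b = (int(color[i:i + 2], 16) for i in (1, 3, 5))
        styles[attr] = (r, g, b)
    return styles

# The value used in the hwloc-annotate example above:
print(parse_lstopo_style("Background=#0000ff;Text=#ffffff"))
```

Applied to the example above, this yields pure blue for the box background and pure white for the object name text.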
|
lstopo, lstopo-no-graphics, hwloc-ls - Show the topology of the system
|
lstopo [ options ]... [ filename ] lstopo-no-graphics [ options ]... [ filename ] hwloc-ls [ options ]... [ filename ] Note that hwloc(7) provides a detailed explanation of the hwloc system; it should be read before reading this man page.
|
--of <format>, --output-format <format> Enforce the output in the given format. See the OUTPUT FORMATS section below. -i <path>, --input <path> Read the topology from <path> instead of discovering the topology of the local machine. If <path> is a file, it may be an XML file exported by a previous hwloc program. If <path> is "-", the standard input may be used as an XML file. On Linux, <path> may be a directory containing the topology files gathered from another machine with hwloc-gather-topology. On x86, <path> may be a directory containing a cpuid dump gathered with hwloc-gather-cpuid. When the archivemount program is available, <path> may also be a tarball containing such Linux or x86 topology files. -i <specification>, --input <specification> Simulate a fake hierarchy (instead of discovering the topology on the local machine). If <specification> is "node:2 pu:3", the topology will contain two NUMA nodes with 3 processing units in each of them. The <specification> string must end with a number of PUs. --if <format>, --input-format <format> Enforce the input in the given format, among xml, fsroot, cpuid and synthetic. --export-xml-flags <flags> Enforce flags when exporting to the XML format. Flags may be given as numeric values or as a comma-separated list of flag names that are passed to hwloc_topology_export_xml(). Those names may be substrings of actual flag names as long as a single one matches. A value of 1 (or v1) reverts to the format of hwloc v1.x. The default is 0 (or none). --export-synthetic-flags <flags> Enforce flags when exporting to the synthetic format. Flags may be given as numeric values or as a comma-separated list of flag names that are passed to hwloc_topology_export_synthetic(). Those names may be substrings of actual flag names as long as a single one matches. A value of 2 (or no_attr) reverts to the format of hwloc v1.9. A value of 3 (or no_ext,no_attr) reverts to the original minimalistic format (before v1.9). 
The default is 0 (or none). -v --verbose Include additional detail. The hwloc-info tool may be used to display even more information about specific objects. -q --quiet -s --silent Reduce the amount of details to show. --distances Only display distance matrices. --distances-transform <links|merge-switch-ports|transitive-closure> Try applying a transformation to distances structures before displaying them. See hwloc_distances_transform() for details. More transformations may be applied using hwloc-annotate(1) (and it may save their output to XML). --memattrs Only display memory attributes. All of them are displayed (while the default textual output selects memory attribute details depending on the verbosity level). --cpukinds Only display CPU kinds. CPU kinds are displayed in order, starting from the most energy efficient ones up to the rather higher performance and power hungry ones. --windows-processor-groups On Windows, only show information about processor groups. All of them are displayed, while the default verbose output only shows them if there are more than one. -f --force If the destination file already exists, overwrite it. -l --logical Display hwloc logical indexes of all objects, with prefix "L#". By default, both logical and physical/OS indexes are displayed for PUs and NUMA nodes, logical only for cores, dies and packages, and no index for other types. -p --physical Display OS/physical indexes of all objects, with prefix "P#". By default, both logical and physical/OS indexes are displayed for PUs and NUMA nodes, logical only for cores, dies and packages, and no index for other types. --logical-index-prefix <prefix> Replace " L#" with the given prefix for logical indexes. --os-index-prefix <prefix> Replace " P#" with the given prefix for physical/OS indexes. -c --cpuset Display the cpuset of each object. -C --cpuset-only Only display the cpuset of each object; do not display anything else about the object. 
--taskset Show CPU set strings in the format recognized by the taskset command-line program instead of hwloc-specific CPU set string format. This option should be combined with --cpuset or --cpuset-only, otherwise it will imply --cpuset. --only <type> Only show objects of the given type in the textual output. <type> may contain a filter to select specific objects among the type. For instance --only NUMA[HBM] only shows NUMA nodes marked with subtype "HBM", while --only "numa[mcdram]" only shows MCDRAM NUMA nodes on KNL. --filter <type>:<kind>, --filter <type> Filter objects of type <type>, or of any type if <type> is "all". "io", "cache" and "icache" are also supported. <kind> specifies the filtering behavior. If "none" or not specified, all objects of the given type are removed. If "all", all objects are kept as usual. If "structure", objects are kept when they bring structure to the topology. If "important" (only applicable to I/O), only important objects are kept. See hwloc_topology_set_type_filter() for more details. hwloc supports filtering any type except PUs and NUMA nodes. lstopo also offers PU and NUMA node filtering by hiding them in the graphical and textual outputs, but any object included in them (for instance Misc) will be hidden as well. Note that PUs and NUMA nodes may not be ignored in the XML output. Note also that the top-level object type cannot be ignored (usually Machine or System). --ignore <type> This is the old way to specify --filter <type>:none. --no-smt Ignore PUs. This is identical to --filter PU:none. --no-caches Do not show caches. This is identical to --filter cache:none. --no-useless-caches This is identical to --filter cache:structure. --no-icaches This is identical to --filter icache:none. --disallowed Include objects disallowed by administrative limitations (e.g Cgroups on Linux). Offline PUs and NUMA nodes are still ignored. 
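The two CPU set string formats that --taskset switches between differ mainly in layout: hwloc typically prints bitmaps as comma-separated 32-bit hexadecimal words (most significant first), while taskset accepts a single plain hexadecimal mask. A rough Python sketch of that conversion (the helper name is made up, and the exact hwloc word layout is an assumption based on hwloc's bitmap string routines, not something this page specifies):

```python
def hwloc_to_taskset(cpuset):
    """Convert a comma-separated hwloc bitmap string such as
    "0x00000001,0xffffffff" into a single taskset-style hex mask."""
    # Strip the 0x prefix from each 32-bit word, then join them:
    # the words are assumed most-significant-first.
    words = [w[2:] if w.startswith("0x") else w for w in cpuset.split(",")]
    value = int("".join(words), 16)
    # taskset takes one plain hexadecimal mask.
    return hex(value)

print(hwloc_to_taskset("0x00000001,0xffffffff"))  # a 33-bit mask
```

This is only an illustration of the formats; lstopo performs the conversion itself when --taskset is given.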
--allow <all|local|0xff|nodeset=0xf0> Include objects disallowed by administrative limitations (implies --disallowed) and also change the set of allowed ones. If local is given, only objects available to the current process are allowed (default behavior when loading from the native operating system backend). It may be useful if the topology was created by another process (with different administrative restrictions such as Linux Cgroups) and loaded here from XML or a synthetic description. This case implies --thissystem. If all, all objects are allowed. If a bitmap is given as a hexadecimal string, it is used as the set of allowed PUs. If a bitmap is given after prefix nodeset=, it is the set of allowed NUMA nodes. --flags <flags> Enforce topology flags. Flags may be given as numeric values or as a comma-separated list of flag names that are passed to hwloc_topology_set_flags(). Those names may be substrings of actual flag names as long as a single one matches, for instance disallowed,thissystem_allowed. The default is 8 (or import). --merge Do not show levels that do not have a hierarchical impact. This sets HWLOC_TYPE_FILTER_KEEP_STRUCTURE for all object types. This is identical to --filter all:structure. --no-factorize --no-factorize=<type> Never factorize identical objects in the graphical output. If an object type is given, only factorizing of these objects is disabled. This only applies to normal CPU-side objects; it is independent from PCI collapsing. --factorize --factorize=[<type>,]<N>[,<L>[,<F>]] Factorize identical children in the graphical output (enabled by default). If <N> is specified (4 by default), factorizing only occurs when there are strictly more than N identical children. If <L> and <F> are specified, they set the numbers of first and last children to keep after factorizing. If an object type is given, only factorizing of these objects is configured. This only applies to normal CPU-side objects; it is independent from PCI collapsing. 
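The hexadecimal bitmaps accepted by options such as --allow (and --restrict in the EXAMPLES section) are ordinary bitmasks in which bit N stands for PU N. A small Python sketch decoding one (the cpuset_bits helper is hypothetical, not hwloc API):

```python
def cpuset_bits(mask):
    """Return the PU indexes whose bits are set in a hex cpuset mask."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

# 0x33 is binary 110011: the mask used in the EXAMPLES section,
# which selects physical processors 0, 1, 4 and 5.
print(cpuset_bits("0x33"))
```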
--no-collapse Do not collapse identical PCI devices. By default, identical sibling PCI devices (such as many virtual functions inside a single physical device) are collapsed. --no-cpukinds Do not show different kinds of CPUs in the graphical output. By default, when supported, different types of lines, thickness and bold font may be used to display PU boxes of different kinds. --restrict <cpuset> Restrict the topology to the given cpuset. This removes some PUs and their now-child-less parents. Beware that restricting the PUs in a topology may change the logical indexes of many objects, including NUMA nodes. --restrict nodeset=<nodeset> Restrict the topology to the given nodeset. (unless --restrict-flags specifies something different). This removes some NUMA nodes and their now-child-less parents. Beware that restricting the NUMA nodes in a topology may change the logical indexes of many objects, including PUs. --restrict binding Restrict the topology to the current process binding. This option requires the use of the actual current machine topology (or any other topology with --thissystem or with HWLOC_THISSYSTEM set to 1 in the environment). Beware that restricting the topology may change the logical indexes of many objects, including PUs and NUMA nodes. --restrict-flags <flags> Enforce flags when restricting the topology. Flags may be given as numeric values or as a comma-separated list of flag names that are passed to hwloc_topology_restrict(). Those names may be substrings of actual flag names as long as a single one matches, for instance bynodeset,memless. The default is 0 (or none). --no-io Do not show any I/O device or bridge. This is identical to --filter io:none. By default, common devices (GPUs, NICs, block devices, ...) and interesting bridges/switches are shown. --no-bridges Do not show any I/O bridge except hostbridges. This is identical to --filter bridge:none. By default, common devices (GPUs, NICs, block devices, ...) 
and interesting bridges/switches are shown. --whole-io Show all I/O devices and bridges. This is identical to --filter io:all. By default, only common devices (GPUs, NICs, block devices, ...) and interesting bridges/switches are shown. --thissystem Assume that the selected backend provides the topology for the system on which we are running. This is useful when loading a custom topology such as an XML file and using --restrict binding or --allow all. --pid <pid> Detect topology as seen by process <pid>, i.e. as if process <pid> did the discovery itself. Note that this can for instance change the set of allowed processors. Also show this process current CPU and Memory binding by marking the corresponding PUs and NUMA nodes (in Green in the graphical output, see the COLORS section below, or by appending (binding) to the verbose text output). If 0 is given as pid, the current binding for the lstopo process will be shown. --ps --top Show existing processes as misc objects in the output. To avoid uselessly cluttering the output, only processes that are restricted to some part of the machine are shown. On Linux, kernel threads are not shown. If many processes appear, the output may become hard to read anyway, making the hwloc-ps program more practical. See --misc-from for a customizable variant using hwloc-ps. --misc-from <file> Add Misc objects as described in <file> containing entries such as: name=myMisc1 cpuset=0x5 name=myMisc2 cpuset=0x7 subtype=myOptionalSubtype This is useful for combining with hwloc-ps --lstopo-misc (see EXAMPLES below) because hwloc-ps is far more customizable than lstopo's --top option. --children-order <order> Change the order of the different kinds of children with respect to their parent in the graphical output. <order> may be a comma-separated list of keywords among: memory:above displays memory children above other children (and above the parent if it is a cache). PUs are therefore below their local NUMA nodes, like hwloc 1.x did. 
io:right and misc:right place I/O or Misc children on the right of CPU children. io:below and misc:below place I/O or Misc children below CPU children. plain places everything not specified together with normal CPU children. If only plain is specified, lstopo displays the topology in a basic manner that strictly matches the actual tree: Memory, I/O and Misc children are listed below their parent just like any other child. PUs are therefore on the side of their local NUMA nodes, below a common ancestor. This output may result in strange layouts since the size of Memory, CPU and I/O children may be very different, causing the placement algorithm to poorly arrange them in rows. The default order is memory:above,io:right,misc:right which means Memory children are above CPU children while I/O and Misc are together on the right. Up to hwloc 2.5, the default was rather to memory:above,plain. Additionally, memory:above, io:right, io:below, misc:right and misc:below may be suffixed with :horiz, :vert or :rect to force the horizontal, vertical or rectangular layout of children inside these sections. See also the GRAPHICAL OUTPUT and LAYOUT sections below. --fontsize <size> Set the size of text font in the graphical output. The default is 10. Boxes are scaled according to the text size. The LSTOPO_TEXT_XSCALE environment variable may be used to further scale the width of boxes (its default value is 1.0). The --fontsize option is ignored in the ASCII backend. --gridsize <size> Set the margin between elements in the graphical output. The default is 7. It was 10 prior to hwloc 2.1. This option is ignored in the ASCII backend. --linespacing <size> Set the spacing between lines of text in the graphical output. The default is 4. The option was included in --gridsize prior to hwloc 2.1 (and its default was 10). This option is ignored in the ASCII backend. --thickness <size> Set the thickness of lines and boxes in the graphical output. The default is 1. 
This option is ignored in the ASCII backend. --horiz, --horiz=<type1,...> Force a horizontal graphical layout instead of nearly 4/3 ratio in the graphical output. If a comma-separated list of object types is given, the layout only applies to the corresponding container objects. Ignored for bridges since their children are always vertically aligned. --vert, --vert=<type1,...> Force a vertical graphical layout instead of nearly 4/3 ratio in the graphical output. If a comma-separated list of object types is given, the layout only applies to the corresponding container objects. --rect, --rect=<type1,...> Force a rectangular graphical layout with nearly 4/3 ratio in the graphical output. If a comma-separated list of object types is given, the layout only applies to the corresponding container objects. Ignored for bridges since their children are always vertically aligned. --no-text, --no-text=<type1,...> Do not display any text in boxes in the graphical output. If a comma-separated list of object types is given, text is disabled for the corresponding objects. This is mostly useful for removing text from Group objects. --text, --text=<type1,...> Display text in boxes in the graphical output (default). If a comma-separated list of object types is given, text is reenabled for the corresponding objects (if it was previously disabled with --no-text). --no-index, --no-index=<type1,...> Do not show object indexes in the graphical output. If a comma- separated list of object types is given, indexes are disabled for the corresponding objects. --index, --index=<type1,...> Show object indexes in the graphical output (default). If a comma-separated list of object types is given, indexes are reenabled for the corresponding objects (if they were previously disabled with --no-index). --no-attrs, --no-attrs=<type1,...> Do not show object attributes (such as memory size, cache size, PCI bus ID, PCI link speed, etc.) in the graphical output. 
If a comma-separated list of object types is given, attributes are disabled for the corresponding objects. --attrs, --attrs=<type1,...> Show object attributes (such as memory size, cache size, PCI bus ID, PCI link speed, etc.) in the graphical output (default). If a comma-separated list of object types is given, attributes are reenabled for the corresponding objects (if they were previously disabled with --no-attrs). --no-legend Remove all text legend lines at the bottom of the graphical output. --no-default-legend Remove default text legend lines at the bottom of the graphical output. User-added legend lines with --append-legend or the "lstopoLegend" info are still displayed if any. --append-legend <line> Append the line of text to the bottom of the legend in the graphical output. If adding multiple lines, each line should be given separately by passing this option multiple times. Additional legend lines may also be specified inside the topology using the "lstopoLegend" info attributes on the topology root object. --grey, --greyscale Use greyscale instead of colors in the graphical output. --palette <grey|greyscale|default|colors|white|none> Change the color palette. Passing grey or greyscale is identical to passing --grey or --greyscale. Passing white or none uses white instead of colors for all box backgrounds. Passing default or colors reverts back to the default color palette. --palette type=#rrggbb Replace the color of the given box type with the given 3x8bit hexadecimal RGB combination (e.g. #ff0000 is red). Existing types are machine, group, package, group_in_package, die, core, pu, numanode, memories (box containing multiple memory children), cache, pcidev, osdev, bridge, and misc. See also the CUSTOM COLORS section for customizing individual objects. --binding-color <none|#rrggbb> Do not colorize PUs and NUMA nodes according to the binding in the graphical output. Or change the color to the given 3x8bit hexadecimal RGB combination (e.g. #ff0000 is red). 
--disallowed-color <none|#rrggbb> Do not colorize disallowed PUs and NUMA nodes in the graphical output. Or change the color to the given 3x8bit hexadecimal RGB combination (e.g. #00ff00 is green). --top-color <none|#rrggbb> Do not colorize task objects in the graphical output when --top is given. Or change the color to the given 3x8bit hexadecimal RGB combination (e.g. #0000ff is blue). This is actually applied to Misc objects of subtype Process or Thread. --version Report version and exit. -h --help Display help message and exit.
|
To display the machine topology in textual mode: lstopo-no-graphics To display the machine topology in ascii-art mode: lstopo-no-graphics -.ascii To display in graphical mode (assuming that the DISPLAY environment variable is set to a relevant value): lstopo To export the topology to a PNG file: lstopo file.png To export an XML file on a machine and later display the corresponding graphical output on another machine: machine1$ lstopo file.xml <transfer file.xml from machine1 to machine2> machine2$ lstopo --input file.xml To save the current machine topology to XML and later reload it faster while still considering it as the current machine: $ lstopo file.xml <...> $ lstopo --input file.xml --thissystem To restrict an XML topology to only physical processors 0, 1, 4 and 5: lstopo --input file.xml --restrict 0x33 newfile.xml To restrict an XML topology to only the NUMA node whose logical index is 1: lstopo --input file.xml --restrict $(hwloc-calc --input file.xml node:1) newfile.xml To display a summary of the topology: lstopo -s To get more details about the topology: lstopo -v To only show cores: lstopo --only core To show cpusets: lstopo --cpuset To only show the cpusets of packages: lstopo --only package --cpuset-only To simulate a fake hierarchy, here with 2 NUMA nodes of 2 processor units each: lstopo --input "node:2 2" To count the number of logical processors in the system: lstopo --only pu | wc -l To append the kernel release and version to the graphical legend: lstopo --append-legend "Kernel release: $(uname -r)" --append-legend "Kernel version: $(uname -v)" To show where a process and its children are bound by combining with hwloc-ps: hwloc-ps --pid-children 23 --lstopo-misc - | lstopo --misc-from - NOTES lstopo displays memory and cache sizes with units such as kB (1 kilobyte = 1000 bytes) or GB (1 gigabyte = 1000*1000*1000 bytes) while it actually means KiB (1 kibibyte = 1024 bytes) or GiB (1 gibibyte = 1024*1024*1024 bytes).
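The unit discrepancy described in NOTES is easy to quantify; a short Python sketch comparing the decimal units lstopo prints with the binary units it actually means:

```python
# lstopo prints "kB"/"GB" (decimal) but means KiB/GiB (binary).
KB, GB = 10**3, 10**9          # what the printed label says
KIB, GIB = 2**10, 2**30        # what lstopo actually means

print(KIB - KB)                           # a KiB is 24 bytes larger than a kB
print(round(100 * (GIB / GB - 1), 1))     # a GiB exceeds a GB by about 7.4%
```

So a cache shown as "32MB" is really 32 MiB, roughly 4.9% more bytes than the decimal label suggests.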
SEE ALSO hwloc(7), hwloc-info(1), hwloc-bind(1), hwloc-annotate(1), hwloc-ps(1), hwloc-gather-topology(1), hwloc-gather-cpuid(1) 2.10.0 December 4, 2023 LSTOPO(1)
|
myisamlog
|
myisamlog processes the contents of a MyISAM log file. To create such a file, start the server with a --log-isam=log_file option. Invoke myisamlog like this: myisamlog [options] [file_name [tbl_name] ...] The default operation is update (-u). If a recovery is done (-r), all writes and possibly updates and deletes are done and errors are only counted. The default log file name is myisam.log if no log_file argument is given. If tables are named on the command line, only those tables are updated. myisamlog supports the following options: • -?, -I Display a help message and exit. • -c N Execute only N commands. • -f N Specify the maximum number of open files. • -F filepath/ Specify the file path with a trailing slash. • -i Display extra information before exiting. • -o offset Specify the starting offset. • -p N Remove N components from path. • -r Perform a recovery operation. • -R record_pos_file record_pos Specify record position file and record position. • -u Perform an update operation. • -v Verbose mode. Print more output about what the program does. This option can be given multiple times to produce more and more output. • -w write_file Specify the write file. • -V Display version information. COPYRIGHT Copyright © 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. 
SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYISAMLOG(1)
|
myisamlog - display MyISAM log file contents
|
myisamlog [options] [log_file [tbl_name] ...]
| null | null |
dtls_server
| null | null | null | null | null |
fc-cache
| null | null | null | null | null |
unlz4
|
lz4 is an extremely fast lossless compression algorithm, based on the byte-aligned LZ77 family of compression schemes. lz4 offers compression speeds > 500 MB/s per core, linearly scalable with multi-core CPUs. It features an extremely fast decoder, offering speed in multiple GB/s per core, typically reaching the RAM speed limit on multi-core systems. The native file format is the .lz4 format. Difference between lz4 and gzip lz4 supports a command line syntax similar but not identical to gzip(1). Differences are : • lz4 compresses a single file by default (see -m for multiple files) • lz4 file1 file2 means : compress file1 into file2 • lz4 file.lz4 will default to decompression (use -z to force compression) • lz4 preserves original files (see --rm to erase source file on completion) • lz4 shows real-time notification statistics during compression or decompression of a single file (use -q to silence them) • When no destination is specified, the result is sent to an implicit output, which depends on stdout status. When stdout is not the console, it becomes the implicit output. Otherwise, if stdout is the console, the implicit output is filename.lz4. • It is considered bad practice to rely on implicit output in scripts, because the script's environment may change. Always use explicit output in scripts. -c ensures that output will be stdout. Conversely, providing a destination name, or using -m, ensures that the output will be either the specified name, or filename.lz4 respectively. Default behaviors can be modified by opt-in commands, detailed below. • lz4 -m makes it possible to provide multiple input filenames, which will be compressed into files using suffix .lz4. Progress notifications become disabled by default (use -v to enable them). This mode has a behavior which more closely mimics the gzip command line, with the main remaining difference being that source files are preserved by default. • Similarly, lz4 -m -d can decompress multiple *.lz4 files. 
• It's possible to opt in to erasing source files on successful compression or decompression, using the --rm command. • Consequently, lz4 -m --rm behaves the same as gzip. Concatenation of .lz4 files It is possible to concatenate .lz4 files as is. lz4 will decompress such files as if they were a single .lz4 file. For example: lz4 file1 > foo.lz4 lz4 file2 >> foo.lz4 Then lz4cat foo.lz4 is equivalent to cat file1 file2.
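The frame-concatenation property shown above is the same one gzip has for its members. Since the lz4 codec is not in Python's standard library, here is a sketch using the stdlib gzip module to illustrate the analogous behavior (this demonstrates the concatenation concept, not lz4 itself):

```python
import gzip

# Two independently compressed streams, concatenated byte-for-byte,
# just like "lz4 file1 > foo.lz4; lz4 file2 >> foo.lz4" above...
blob = gzip.compress(b"file1 contents\n") + gzip.compress(b"file2 contents\n")

# ...decompress as if they were a single stream, exactly how
# "lz4cat foo.lz4" behaves on concatenated .lz4 frames.
print(gzip.decompress(blob).decode(), end="")
```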
|
lz4 - lz4, unlz4, lz4cat - Compress or decompress .lz4 files
|
lz4 [OPTIONS] [-|INPUT-FILE] OUTPUT-FILE unlz4 is equivalent to lz4 -d lz4cat is equivalent to lz4 -dcfm When writing scripts that need to decompress files, it is recommended to always use the name lz4 with appropriate arguments (lz4 -d or lz4 -dc) instead of the names unlz4 and lz4cat.
|
Short commands concatenation In some cases, some options can be expressed using short command -x or long command --long-word. Short commands can be concatenated together. For example, -d -c is equivalent to -dc. Long commands cannot be concatenated. They must be clearly separated by a space. Multiple commands When multiple contradictory commands are issued on a same command line, only the latest one will be applied. Operation mode -z --compress Compress. This is the default operation mode when no operation mode option is specified, no other operation mode is implied from the command name (for example, unlz4 implies --decompress), nor from the input file name (for example, a file extension .lz4 implies --decompress by default). -z can also be used to force compression of an already compressed .lz4 file. -d --decompress --uncompress Decompress. --decompress is also the default operation when the input filename has an .lz4 extension. -t --test Test the integrity of compressed .lz4 files. The decompressed data is discarded. No files are created nor removed. -b# Benchmark mode, using # compression level. --list List information about .lz4 files. note : current implementation is limited to single-frame .lz4 files. Operation modifiers -# Compression level, with # being any value from 1 to 12. Higher values trade compression speed for compression ratio. Values above 12 are considered the same as 12. Recommended values are 1 for fast compression (default), and 9 for high compression. Speed/compression trade-off will vary depending on data to compress. Decompression speed remains fast at all settings. --fast[=#] Switch to ultra-fast compression levels. The higher the value, the faster the compression speed, at the cost of some compression ratio. If =# is not present, it defaults to 1. This setting overrides compression level if one was set previously. Similarly, if a compression level is set after --fast, it overrides it. --best Set highest compression level. Same as -12. 
--favor-decSpeed Generate compressed data optimized for decompression speed. Compressed data will be larger as a consequence (typically by ~0.5%), while decompression speed will be improved by 5-20%, depending on use cases. This option only works in combination with very high compression levels (>=10). -D dictionaryName Compress, decompress or benchmark using dictionary dictionaryName. Compression and decompression must use the same dictionary to be compatible. Using a different dictionary during decompression will either abort due to a decompression error, or generate a checksum error. -f --[no-]force This option has several effects: If the target file already exists, overwrite it without prompting. When used with --decompress and lz4 cannot recognize the type of the source file, copy the source file as is to standard output. This allows lz4cat --force to be used like cat (1) for files that have not been compressed with lz4. -c --stdout --to-stdout Force write to standard output, even if it is the console. -m --multiple Multiple input files. Compressed file names will have a .lz4 suffix appended. This mode also reduces notification level. Can also be used to list multiple files. lz4 -m has a behavior equivalent to gzip -k (it preserves source files by default). -r Operate recursively on directories. This mode also sets -m (multiple input files). -B# Block size [4-7] (default : 7) -B4= 64KB ; -B5= 256KB ; -B6= 1MB ; -B7= 4MB -BI Produce independent blocks (default) -BD Blocks depend on predecessors (improves compression ratio, more noticeable on small blocks) -BX Generate block checksums (default:disabled) --[no-]frame-crc Select frame checksum (default:enabled) --no-crc Disable both frame and block checksums --[no-]content-size Header includes original size (default:not present) Note : this option can only be activated when the original size can be determined, hence for a file. It won't work with unknown source size, such as stdin or pipe. 
--[no-]sparse Sparse mode support (default:enabled on file, disabled on stdout) -l Use Legacy format (typically for Linux Kernel compression) Note : -l is not compatible with -m (--multiple) nor -r Other options -v --verbose Verbose mode -q --quiet Suppress warnings and real-time statistics; specify twice to suppress errors too -h -H --help Display help/long help and exit -V --version Display Version number and exit -k --keep Preserve source files (default behavior) --rm Delete source files on successful compression or decompression -- Treat all subsequent arguments as files Benchmark mode -b# Benchmark file(s), using # compression level -e# Benchmark multiple compression levels, from b# to e# (included) -i# Minimum evaluation time in seconds [1-9] (default : 3) BUGS Report bugs at: https://github.com/lz4/lz4/issues AUTHOR Yann Collet lz4 v1.9.4 August 2022 LZ4(1)
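A minimal round-trip sketch of the modes above (compress at a chosen level, decompress, verify). The file names are illustrative, and the snippet assumes an lz4 binary is on PATH; it is skipped quietly when lz4 is not installed:

```shell
#!/bin/sh
# Hedged sketch: lz4 compress/decompress round-trip; skipped if lz4 is absent.
if command -v lz4 >/dev/null 2>&1; then
  printf 'hello lz4\n' > /tmp/lz4_sample.txt
  # -9: high compression level; -f: overwrite the target without prompting
  lz4 -9 -f /tmp/lz4_sample.txt /tmp/lz4_sample.txt.lz4
  # -d: decompress (also the default operation for a .lz4 input name)
  lz4 -d -f /tmp/lz4_sample.txt.lz4 /tmp/lz4_restored.txt
  # cmp exits 0 only if the restored file matches the original byte-for-byte
  cmp /tmp/lz4_sample.txt /tmp/lz4_restored.txt && echo 'round-trip OK'
fi
```

The same pair of files can then be checked with lz4 -t, which decompresses and discards the data without creating any output file.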
| null |
trasher
| null | null | null | null | null |
encodeinttest
| null | null | null | null | null |
gcomm
|
Compare sorted files FILE1 and FILE2 line by line. When FILE1 or FILE2 (not both) is -, read standard input. With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files. -1 suppress column 1 (lines unique to FILE1) -2 suppress column 2 (lines unique to FILE2) -3 suppress column 3 (lines that appear in both files) --check-order check that the input is correctly sorted, even if all input lines are pairable --nocheck-order do not check that the input is correctly sorted --output-delimiter=STR separate columns with STR --total output a summary -z, --zero-terminated line delimiter is NUL, not newline --help display this help and exit --version output version information and exit Note, comparisons honor the rules specified by 'LC_COLLATE'.
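The three-column behaviour can be sketched with two small sorted lists (the temporary file names are illustrative):

```shell
#!/bin/sh
# comm requires sorted input; unsorted input triggers an order warning.
printf 'apple\nbanana\ncherry\n' > /tmp/list1
printf 'banana\ncherry\ndate\n'  > /tmp/list2
comm /tmp/list1 /tmp/list2      # three columns: only-in-1, only-in-2, common
comm -12 /tmp/list1 /tmp/list2  # suppress columns 1 and 2: common lines only
comm -23 /tmp/list1 /tmp/list2  # lines unique to the first file
```

Here comm -12 prints banana and cherry (present in both files), and comm -23 prints only apple.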
|
comm - compare two sorted files line by line
|
comm [OPTION]... FILE1 FILE2
| null |
comm -12 file1 file2 Print only lines present in both file1 and file2. comm -3 file1 file2 Print lines in file1 not in file2, and vice versa. AUTHOR Written by Richard M. Stallman and David MacKenzie. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO join(1), uniq(1) Full documentation <https://www.gnu.org/software/coreutils/comm> or available locally via: info '(coreutils) comm invocation' GNU coreutils 9.3 April 2023 COMM(1)
|
selfserv
| null | null | null | null | null |
tflite_convert
| null | null | null | null | null |
gtty
|
Print the file name of the terminal connected to standard input. -s, --silent, --quiet print nothing, only return an exit status --help display this help and exit --version output version information and exit AUTHOR Written by David MacKenzie. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/tty> or available locally via: info '(coreutils) tty invocation' GNU coreutils 9.3 April 2023 TTY(1)
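In scripts the exit status is usually what matters; with -s nothing is printed and only the status reports whether standard input is a terminal:

```shell
#!/bin/sh
# Branch on whether stdin is a terminal (tty -s exits 0) or not (non-zero).
if tty -s; then
  echo "stdin is a terminal: $(tty)"
else
  echo "stdin is not a terminal"
fi
# Redirecting stdin away from the terminal makes tty report failure:
tty -s < /dev/null || echo "redirected stdin is not a terminal"
```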
|
tty - print the file name of the terminal connected to standard input
|
tty [OPTION]...
| null | null |
gcksum
|
Print or verify checksums. By default use the 32 bit CRC algorithm. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -a, --algorithm=TYPE select the digest type to use. See DIGEST below. -b, --base64 emit base64-encoded digests, not hexadecimal -c, --check read checksums from the FILEs and check them -l, --length=BITS digest length in bits; must not exceed the max for the blake2 algorithm and must be a multiple of 8 --raw emit a raw binary digest, not hexadecimal --tag create a BSD-style checksum (the default) --untagged create a reversed style checksum, without digest type -z, --zero end each output line with NUL, not newline, and disable file name escaping The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files --quiet don't print OK for each successfully verified file --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines --debug indicate which implementation used --help display this help and exit --version output version information and exit DIGEST determines the digest algorithm and default output format: sysv (equivalent to sum -s) bsd (equivalent to sum -r) crc (equivalent to cksum) md5 (equivalent to md5sum) sha1 (equivalent to sha1sum) sha224 (equivalent to sha224sum) sha256 (equivalent to sha256sum) sha384 (equivalent to sha384sum) sha512 (equivalent to sha512sum) blake2b (equivalent to b2sum) sm3 (only available through cksum) When checking, the input should be a former output of this program, or equivalent standalone program. AUTHOR Written by Padraig Brady and Q. Frank Xia. 
REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/cksum> or available locally via: info '(coreutils) cksum invocation' GNU coreutils 9.3 April 2023 CKSUM(1)
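A small sketch of the default CRC mode (file names are illustrative); identical content always yields an identical CRC and byte count:

```shell
#!/bin/sh
# Default mode: print CRC checksum and byte count for each file argument.
printf 'some data\n' > /tmp/cksum_a
cp /tmp/cksum_a /tmp/cksum_b
cksum /tmp/cksum_a /tmp/cksum_b
# Reading from stdin omits the file name field, which eases comparisons:
cksum < /tmp/cksum_a
```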
|
cksum - compute and verify file checksums
|
cksum [OPTION]... [FILE]...
| null | null |
hwloc-ps
|
By default, hwloc-ps lists only those currently-running processes that are bound. If -t is given, processes that are not bound but contain at least one bound thread are also displayed, as well as all their threads. hwloc-ps displays the process identifier, command line and binding. The binding may be reported as objects or cpusets. By default, process bindings are restricted to the currently available topology. If some processes are bound to processors that are not available to the current process, they are ignored unless --disallowed is given. The output is a plain list. If you wish to annotate the hierarchical topology with processes so as to see how they are actually distributed on the machine, you might want to use lstopo --ps instead (which also only shows processes that are bound). The -a switch can be used to show all processes, if desired.
|
hwloc-ps - List currently-running processes or threads that are bound
|
hwloc-ps [options]
|
-a List all processes, even those that are not bound to any specific part of the machine. --pid <pid> Only show process of PID <pid>, even if it is not bound to any specific part of the machine. --children-of-pid <pid> Only show process of PID <pid> and its hierarchy of children, even if they are not bound to any specific part of the machine. --name <name> Only show processes whose name contains <name>, even if they are not bound to any specific part of the machine. This is not supported on all operating systems. --uid <uid> Only show processes of the user whose UID is <uid>, or processes of all users if all is given. By default, only processes of the current user are displayed. This is currently only supported on Linux. -p --physical Report OS/physical indexes instead of logical indexes -l --logical Report logical indexes instead of physical/OS indexes (default) -c --cpuset Show process bindings as cpusets instead of objects. -t --threads Show threads inside processes. If -a is given as well, list all threads within each process. Otherwise, show all threads inside each process where at least one thread is bound. This is currently only supported on Linux. --single-ancestor When the object is bound to different objects, report their common ancestor (even if it may be larger than the actual binding). -e --get-last-cpu-location Report the last processors where the process/thread ran. Note that the result may already be outdated when reported since the operating system may move the tasks to other processors at any time according to the binding. --disallowed Include objects disallowed by administrative limitations. --pid-cmd <cmd> Append the output of the given command to each PID line. For each displayed process ID, execute the command <cmd> <pid> and append the first line of its output to the regular hwloc- ps line. --pid-cmd env=<name> On Linux, try to read the value of environment variable name in each process and display it at the end of the line. 
--pid-cmd mpirank On Linux, try to find the process MPI rank (by querying some widespread environment variables) and display it at the end of the line. --lstopo-misc <file> Output a file that may be given to lstopo --misc-from for displaying processes/threads as Misc objects. See EXAMPLES below. --json-server Run the tool as a JSON server that waits for other process' requests on a port and sends back binding information. See contrib/hwloc-ps.www/ for details. --json-port <port> Use the given port number instead of the default 8888. -v --verbose Increase verbosity of the JSON server. --short-name Show only the process short name instead of the path. --version Report version and exit. -h --help Display help message and exit.
|
If a process is bound, it appears in the default output: $ hwloc-ps 4759 Core:0 myprogram If a process is bound on two cores of a larger package, the output will show these cores. Option --single-ancestor will rather return the package even if it is actually larger than the binding here (the process is not bound to Core:0 of Package:0): $ hwloc-ps 4863 Core:1 Core:2 myprogram $ hwloc-ps --single-ancestor 4863 Package:0 myprogram If a process is not bound but 3 of its 4 threads are bound, it only appears in the thread-aware output (or if explicitly selected): $ hwloc-ps $ hwloc-ps -t 4759 Machine:0 myprogram 4759 Machine:0 4761 PU:0 4762 PU:2 4765 PU:1 $ hwloc-ps --pid 4759 4759 Machine:0 myprogram The output may be a file that lstopo uses for adding Misc objects (a more flexible version of lstopo --top): $ hwloc-ps --lstopo-misc foo $ cat foo name=12444 myprogram cpuset=0x000000f0 subtype=Process name=12444 mythread1 cpuset=0x00000050 subtype=Thread name=12444 mythread2 cpuset=0x000000a0 subtype=Thread This may be directly given to lstopo: $ hwloc-ps --lstopo-misc - | lstopo --misc-from - On Linux, hwloc-ps may also display some process-specific environment variable at the end of the line. This is for instance useful for identifying MPI ranks among processes: $ hwloc-ps --pid-cmd env=OMPI_COMM_WORLD_RANK 29093 PU:0 myprogram OMPI_COMM_WORLD_RANK=0 29094 PU:2 myprogram OMPI_COMM_WORLD_RANK=1 29095 PU:1 myprogram OMPI_COMM_WORLD_RANK=2 29096 PU:3 myprogram OMPI_COMM_WORLD_RANK=3 Some widespread MPI-specific environment variables (OMPI_COMM_WORLD_RANK, PMIX_RANK, PMI_RANK and SLURM_PROCID) are directly recognized by hwloc-ps when requesting the mpirank command: $ hwloc-ps --pid-cmd mpirank 29093 PU:0 myprogram PMIX_RANK=0 29094 PU:2 myprogram PMIX_RANK=1 29095 PU:1 myprogram PMIX_RANK=2 29096 PU:3 myprogram PMIX_RANK=3 Besides reading environment variables, hwloc-ps may also append the output of a custom program. 
Again, for reading the Open MPI process rank: $ hwloc-ps --pid-cmd myscript 29093 PU:0 myprogram OMPI_COMM_WORLD_RANK=0 29094 PU:2 myprogram OMPI_COMM_WORLD_RANK=1 29095 PU:1 myprogram OMPI_COMM_WORLD_RANK=2 29096 PU:3 myprogram OMPI_COMM_WORLD_RANK=3 where myscript is a shell script doing: #!/bin/sh cat /proc/$1/environ 2>/dev/null | xargs --null --max-args=1 echo | grep OMPI_COMM_WORLD_RANK SEE ALSO hwloc(7), lstopo(1), hwloc-calc(1), hwloc-distrib(1), and hwloc- ps.www/README 2.10.0 December 4, 2023 HWLOC-PS(1)
|
gdbus
| null | null | null | null | null |
ddgs
| null | null | null | null | null |
trust
| null | null | null | null | null |
zstdmt
| null | null | null | null | null |
pk_decrypt
| null | null | null | null | null |
rsa_encrypt
| null | null | null | null | null |
cmsutil
| null | null | null | null | null |
youtube-dl
|
youtube-dl is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
|
youtube-dl - download videos from youtube.com or other video platforms
|
youtube-dl [OPTIONS] URL [URL...]
|
-h, --help Print this help text and exit --version Print program version and exit -U, --update Update this program to latest version. Make sure that you have sufficient permissions (run with sudo if needed) -i, --ignore-errors Continue on download errors, for example to skip unavailable videos in a playlist --abort-on-error Abort downloading of further videos (in the playlist or the command line) if an error occurs --dump-user-agent Display the current browser identification --list-extractors List all supported extractors --extractor-descriptions Output descriptions of all supported extractors --force-generic-extractor Force extraction to use the generic extractor --default-search PREFIX Use this prefix for unqualified URLs. For example "gvsearch2:" downloads two videos from google videos for youtube- dl "large apple". Use the value "auto" to let youtube-dl guess ("auto_warning" to emit a warning when guessing). "error" just throws an error. The default value "fixup_error" repairs broken URLs, but emits an error if this is not possible instead of searching. --ignore-config Do not read configuration files. When given in the global configuration file /etc/youtube-dl.conf: Do not read the user configuration in ~/.config/youtube-dl/config (%APPDATA%/youtube-dl/config.txt on Windows) --config-location PATH Location of the configuration file; either the path to the config or its containing directory. --flat-playlist Do not extract the videos of a playlist, only list them. --mark-watched Mark videos watched (YouTube only) --no-mark-watched Do not mark videos watched (YouTube only) --no-color Do not emit color codes in output Network Options: --proxy URL Use the specified HTTP/HTTPS/SOCKS proxy. To enable SOCKS proxy, specify a proper scheme. For example socks5://127.0.0.1:1080/. 
Pass in an empty string (--proxy "") for direct connection --socket-timeout SECONDS Time to wait before giving up, in seconds --source-address IP Client-side IP address to bind to -4, --force-ipv4 Make all connections via IPv4 -6, --force-ipv6 Make all connections via IPv6 Geo Restriction: --geo-verification-proxy URL Use this proxy to verify the IP address for some geo-restricted sites. The default proxy specified by --proxy (or none, if the option is not present) is used for the actual downloading. --geo-bypass Bypass geographic restriction via faking X-Forwarded-For HTTP header --no-geo-bypass Do not bypass geographic restriction via faking X-Forwarded-For HTTP header --geo-bypass-country CODE Force bypass geographic restriction with explicitly provided two-letter ISO 3166-2 country code --geo-bypass-ip-block IP_BLOCK Force bypass geographic restriction with explicitly provided IP block in CIDR notation Video Selection: --playlist-start NUMBER Playlist video to start at (default is 1) --playlist-end NUMBER Playlist video to end at (default is last) --playlist-items ITEM_SPEC Playlist video items to download. Specify indices of the videos in the playlist separated by commas like: "-- playlist-items 1,2,5,8" if you want to download videos indexed 1, 2, 5, 8 in the playlist. You can specify range: " --playlist-items 1-3,7,10-13", it will download the videos at index 1, 2, 3, 7, 10, 11, 12 and 13. --match-title REGEX Download only matching titles (regex or caseless sub-string) --reject-title REGEX Skip download for matching titles (regex or caseless sub-string) --max-downloads NUMBER Abort after downloading NUMBER files --min-filesize SIZE Do not download any videos smaller than SIZE (e.g. 50k or 44.6m) --max-filesize SIZE Do not download any videos larger than SIZE (e.g. 50k or 44.6m) --date DATE Download only videos uploaded in this date --datebefore DATE Download only videos uploaded on or before this date (i.e. 
inclusive) --dateafter DATE Download only videos uploaded on or after this date (i.e. inclusive) --min-views COUNT Do not download any videos with less than COUNT views --max-views COUNT Do not download any videos with more than COUNT views --match-filter FILTER Generic video filter. Specify any key (see the "OUTPUT TEMPLATE" for a list of available keys) to match if the key is present, !key to check if the key is not present, key > NUMBER (like "comment_count > 12", also works with >=, <, <=, !=, =) to compare against a number, key = 'LITERAL' (like "uploader = 'Mike Smith'", also works with !=) to match against a string literal and & to require multiple matches. Values which are not known are excluded unless you put a question mark (?) after the operator. For example, to only match videos that have been liked more than 100 times and disliked less than 50 times (or the dislike functionality is not available at the given service), but who also have a description, use --match-filter "like_count > 100 & dislike_count <? 50 & description" . --no-playlist Download only the video, if the URL refers to a video and a playlist. --yes-playlist Download the playlist, if the URL refers to a video and a playlist. --age-limit YEARS Download only videos suitable for the given age --download-archive FILE Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it. --include-ads Download advertisements as well (experimental) Download Options: -r, --limit-rate RATE Maximum download rate in bytes per second (e.g. 50K or 4.2M) -R, --retries RETRIES Number of retries (default is 10), or "infinite". 
--fragment-retries RETRIES Number of retries for a fragment (default is 10), or "infinite" (DASH, hlsnative and ISM) --skip-unavailable-fragments Skip unavailable fragments (DASH, hlsnative and ISM) --abort-on-unavailable-fragment Abort downloading when some fragment is not available --keep-fragments Keep downloaded fragments on disk after downloading is finished; fragments are erased by default --buffer-size SIZE Size of download buffer (e.g. 1024 or 16K) (default is 1024) --no-resize-buffer Do not automatically adjust the buffer size. By default, the buffer size is automatically resized from an initial value of SIZE. --http-chunk-size SIZE Size of a chunk for chunk-based HTTP downloading (e.g. 10485760 or 10M) (default is disabled). May be useful for bypassing bandwidth throttling imposed by a webserver (experimental) --playlist-reverse Download playlist videos in reverse order --playlist-random Download playlist videos in random order --xattr-set-filesize Set file xattribute ytdl.filesize with expected file size --hls-prefer-native Use the native HLS downloader instead of ffmpeg --hls-prefer-ffmpeg Use ffmpeg instead of the native HLS downloader --hls-use-mpegts Use the mpegts container for HLS videos, allowing the video to be played while downloading (some players may not be able to play it) --external-downloader COMMAND Use the specified external downloader. Currently supports aria2c,avconv,axel,curl,ffmpeg,httpie,wget --external-downloader-args ARGS Give these arguments to the external downloader Filesystem Options: -a, --batch-file FILE File containing URLs to download ('-' for stdin), one URL per line. Lines starting with '#', ';' or ']' are considered as comments and ignored. 
--id Use only video ID in file name -o, --output TEMPLATE Output filename template, see the "OUTPUT TEMPLATE" for all the info --output-na-placeholder PLACEHOLDER Placeholder value for unavailable meta fields in output filename template (default is "NA") --autonumber-start NUMBER Specify the start value for %(autonumber)s (default is 1) --restrict-filenames Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames -w, --no-overwrites Do not overwrite files -c, --continue Force resume of partially downloaded files. By default, youtube-dl will resume downloads if possible. --no-continue Do not resume partially downloaded files (restart from beginning) --no-part Do not use .part files - write directly into output file --no-mtime Do not use the Last-modified header to set the file modification time --write-description Write video description to a .description file --write-info-json Write video metadata to a .info.json file --write-annotations Write video annotations to a .annotations.xml file --load-info-json FILE JSON file containing the video information (created with the "--write- info-json" option) --cookies FILE File to read cookies from and dump cookie jar in --cache-dir DIR Location in the filesystem where youtube-dl can store some downloaded information permanently. By default $XDG_CACHE_HOME/youtube-dl or ~/.cache/youtube-dl . At the moment, only YouTube player files (for videos with obfuscated signatures) are cached, but that may change. 
--no-cache-dir Disable filesystem caching --rm-cache-dir Delete all filesystem cache files Thumbnail Options: --write-thumbnail Write thumbnail image to disk --write-all-thumbnails Write all thumbnail image formats to disk --list-thumbnails Simulate and list all available thumbnail formats Verbosity / Simulation Options: -q, --quiet Activate quiet mode --no-warnings Ignore warnings -s, --simulate Do not download the video and do not write anything to disk --skip-download Do not download the video -g, --get-url Simulate, quiet but print URL -e, --get-title Simulate, quiet but print title --get-id Simulate, quiet but print id --get-thumbnail Simulate, quiet but print thumbnail URL --get-description Simulate, quiet but print video description --get-duration Simulate, quiet but print video length --get-filename Simulate, quiet but print output filename --get-format Simulate, quiet but print output format -j, --dump-json Simulate, quiet but print JSON information. See the "OUTPUT TEMPLATE" for a description of available keys. -J, --dump-single-json Simulate, quiet but print JSON information for each command-line argument. If the URL refers to a playlist, dump the whole playlist information in a single line. --print-json Be quiet and print the video information as JSON (video is still being downloaded). 
--newline Output progress bar as new lines --no-progress Do not print progress bar --console-title Display progress in console titlebar -v, --verbose Print various debugging information --dump-pages Print downloaded pages encoded using base64 to debug problems (very verbose) --write-pages Write downloaded intermediary pages to files in the current directory to debug problems --print-traffic Display sent and read HTTP traffic -C, --call-home Contact the youtube-dl server for debugging --no-call-home Do NOT contact the youtube-dl server for debugging Workarounds: --encoding ENCODING Force the specified encoding (experimental) --no-check-certificate Suppress HTTPS certificate validation --prefer-insecure Use an unencrypted connection to retrieve information about the video. (Currently supported only for YouTube) --user-agent UA Specify a custom user agent --referer URL Specify a custom referer, use if the video access is restricted to one domain --add-header FIELD:VALUE Specify a custom HTTP header and its value, separated by a colon ':'. You can use this option multiple times --bidi-workaround Work around terminals that lack bidirectional text support. Requires bidiv or fribidi executable in PATH --sleep-interval SECONDS Number of seconds to sleep before each download when used alone or a lower bound of a range for randomized sleep before each download (minimum possible number of seconds to sleep) when used along with --max-sleep-interval. --max-sleep-interval SECONDS Upper bound of a range for randomized sleep before each download (maximum possible number of seconds to sleep). Must only be used along with --min- sleep-interval. 
Video Format Options: -f, --format FORMAT Video format code, see the "FORMAT SELECTION" for all the info --all-formats Download all available video formats --prefer-free-formats Prefer free video formats unless a specific one is requested -F, --list-formats List all available formats of requested videos --youtube-skip-dash-manifest Do not download the DASH manifests and related data on YouTube videos --merge-output-format FORMAT If a merge is required (e.g. bestvideo+bestaudio), output to given container format. One of mkv, mp4, ogg, webm, flv. Ignored if no merge is required Subtitle Options: --write-sub Write subtitle file --write-auto-sub Write automatically generated subtitle file (YouTube only) --all-subs Download all the available subtitles of the video --list-subs List all available subtitles for the video --sub-format FORMAT Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best" --sub-lang LANGS Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags Authentication Options: -u, --username USERNAME Login with this account ID -p, --password PASSWORD Account password. If this option is left out, youtube-dl will ask interactively. -2, --twofactor TWOFACTOR Two-factor authentication code -n, --netrc Use .netrc authentication data --video-password PASSWORD Video password (vimeo, youku) Adobe Pass Options: --ap-mso MSO Adobe Pass multiple-system operator (TV provider) identifier, use --ap-list-mso for a list of available MSOs --ap-username USERNAME Multiple-system operator account login --ap-password PASSWORD Multiple-system operator account password. If this option is left out, youtube-dl will ask interactively. 
--ap-list-mso List all supported multiple-system operators Post-processing Options: -x, --extract-audio Convert video files to audio-only files (requires ffmpeg/avconv and ffprobe/avprobe) --audio-format FORMAT Specify audio format: "best", "aac", "flac", "mp3", "m4a", "opus", "vorbis", or "wav"; "best" by default; No effect without -x --audio-quality QUALITY Specify ffmpeg/avconv audio quality, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like 128K (default 5) --recode-video FORMAT Encode the video to another format if necessary (currently supported: mp4|flv|ogg|webm|mkv|avi) --postprocessor-args ARGS Give these arguments to the postprocessor -k, --keep-video Keep the video file on disk after the post-processing; the video is erased by default --no-post-overwrites Do not overwrite post-processed files; the post-processed files are overwritten by default --embed-subs Embed subtitles in the video (only for mp4, webm and mkv videos) --embed-thumbnail Embed thumbnail in the audio as cover art --add-metadata Write metadata to the video file --metadata-from-title FORMAT Parse additional metadata like song title / artist from the video title. The format syntax is the same as --output. Regular expression with named capture groups may also be used. The parsed parameters replace existing values. Example: --metadata-from-title "%(artist)s - %(title)s" matches a title like "Coldplay - Paradise". Example (regex): --metadata-from-title "(?P<artist>.+?) - (?P<title>.+)" --xattrs Write metadata to the video file's xattrs (using dublin core and xdg standards) --fixup POLICY Automatically correct known faults of the file. 
One of never (do nothing), warn (only emit a warning), detect_or_warn (the default; fix file if we can, warn otherwise) --prefer-avconv Prefer avconv over ffmpeg for running the postprocessors --prefer-ffmpeg Prefer ffmpeg over avconv for running the postprocessors (default) --ffmpeg-location PATH Location of the ffmpeg/avconv binary; either the path to the binary or its containing directory. --exec CMD Execute a command on the file after downloading and post-processing, similar to find's -exec syntax. Example: --exec 'adb push {} /sdcard/Music/ && rm {}' --convert-subs FORMAT Convert the subtitles to other format (currently supported: srt|ass|vtt|lrc) CONFIGURATION You can configure youtube-dl by placing any supported command line option to a configuration file. On Linux and macOS, the system wide configuration file is located at /etc/youtube-dl.conf and the user wide configuration file at ~/.config/youtube-dl/config. On Windows, the user wide configuration file locations are %APPDATA%\youtube-dl\config.txt or C:\Users\<user name>\youtube-dl.conf. Note that by default configuration file may not exist so you may need to create it yourself. For example, with the following configuration file youtube-dl will always extract the audio, not copy the mtime, use a proxy and save all videos under Movies directory in your home directory: # Lines starting with # are comments # Always extract audio -x # Do not copy the mtime --no-mtime # Use this proxy --proxy 127.0.0.1:3128 # Save all videos under Movies directory in your home directory -o ~/Movies/%(title)s.%(ext)s Note that options in configuration file are just the same options aka switches used in regular command line calls thus there must be no whitespace after - or --, e.g. -o or --proxy but not - o or -- proxy. You can use --ignore-config if you want to disable the configuration file for a particular youtube-dl run. 
You can also use --config-location if you want to use a custom configuration file for a particular youtube-dl run.

Authentication with .netrc file

You may also want to configure automatic credentials storage for extractors that support authentication (by providing login and password with --username and --password), so that you don't have to pass credentials as command line arguments on every youtube-dl execution and don't leave plain-text passwords in the shell command history. You can achieve this using a .netrc file (https://stackoverflow.com/tags/.netrc/info) on a per-extractor basis. To do so, create a .netrc file in your $HOME and restrict its permissions so that only you can read and write it:

    touch $HOME/.netrc
    chmod a-rwx,u+rw $HOME/.netrc

After that you can add credentials for an extractor in the following format, where extractor is the name of the extractor in lowercase:

    machine <extractor> login <login> password <password>

For example:

    machine youtube login myaccount@gmail.com password my_youtube_password
    machine twitch login my_twitch_account_name password my_twitch_password

To activate authentication with the .netrc file you should pass --netrc to youtube-dl or place it in the configuration file. On Windows you may also need to set up the %HOME% environment variable manually. For example:

    set HOME=%USERPROFILE%

OUTPUT TEMPLATE

The -o option allows users to indicate a template for the output file names.

tl;dr: navigate me to examples.

The basic usage is not to set any template arguments when downloading a single file, like in youtube-dl -o funny_video.flv "https://some/video". However, it may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to Python string formatting operations (https://docs.python.org/2/library/stdtypes.html#string-formatting), for example %(NAME)s or %(NAME)05d. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations.
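Because these are ordinary Python %-style format strings, you can preview how a template expands with a short Python sketch (the metadata values below are made up for illustration, not fetched from any site):

```python
# youtube-dl output templates are plain Python %-style format strings.
# The metadata dict below is illustrative only.
info = {"title": "youtube-dl test video", "id": "BaW_jenozKcj",
        "ext": "mp4", "view_count": 42}

print("%(title)s-%(id)s.%(ext)s" % info)   # the default template
print("%(view_count)05d views" % info)     # zero-padded numeric sequence
print("100%% complete" % info)             # %% yields a literal percent sign
```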
Allowed names along with sequence type are:

• id (string): Video identifier
• title (string): Video title
• url (string): Video URL
• ext (string): Video filename extension
• alt_title (string): A secondary title of the video
• display_id (string): An alternative identifier for the video
• uploader (string): Full name of the video uploader
• license (string): License name the video is licensed under
• creator (string): The creator of the video
• release_date (string): The date (YYYYMMDD) when the video was released
• timestamp (numeric): UNIX timestamp of the moment the video became available
• upload_date (string): Video upload date (YYYYMMDD)
• uploader_id (string): Nickname or id of the video uploader
• channel (string): Full name of the channel the video is uploaded on
• channel_id (string): Id of the channel
• location (string): Physical location where the video was filmed
• duration (numeric): Length of the video in seconds
• view_count (numeric): How many users have watched the video on the platform
• like_count (numeric): Number of positive ratings of the video
• dislike_count (numeric): Number of negative ratings of the video
• repost_count (numeric): Number of reposts of the video
• average_rating (numeric): Average rating given by users; the scale used depends on the webpage
• comment_count (numeric): Number of comments on the video
• age_limit (numeric): Age restriction for the video (years)
• is_live (boolean): Whether this video is a live stream or a fixed-length video
• start_time (numeric): Time in seconds where the reproduction should start, as specified in the URL
• end_time (numeric): Time in seconds where the reproduction should end, as specified in the URL
• format (string): A human-readable description of the format
• format_id (string): Format code specified by --format
• format_note (string): Additional info about the format
• width (numeric): Width of the video
• height (numeric): Height of the video
• resolution (string): Textual description of width and height
• tbr (numeric): Average bitrate of audio and video in KBit/s
• abr (numeric): Average audio bitrate in KBit/s
• acodec (string): Name of the audio codec in use
• asr (numeric): Audio sampling rate in Hertz
• vbr (numeric): Average video bitrate in KBit/s
• fps (numeric): Frame rate
• vcodec (string): Name of the video codec in use
• container (string): Name of the container format
• filesize (numeric): The number of bytes, if known in advance
• filesize_approx (numeric): An estimate for the number of bytes
• protocol (string): The protocol that will be used for the actual download
• extractor (string): Name of the extractor
• extractor_key (string): Key name of the extractor
• epoch (numeric): Unix epoch when creating the file
• autonumber (numeric): Number that will be increased with each download, starting at --autonumber-start
• playlist (string): Name or id of the playlist that contains the video
• playlist_index (numeric): Index of the video in the playlist padded with leading zeros according to the total length of the playlist
• playlist_id (string): Playlist identifier
• playlist_title (string): Playlist title
• playlist_uploader (string): Full name of the playlist uploader
• playlist_uploader_id (string): Nickname or id of the playlist uploader

Available for the video that belongs to some logical chapter or section:

• chapter (string): Name or title of the chapter the video belongs to
• chapter_number (numeric): Number of the chapter the video belongs to
• chapter_id (string): Id of the chapter the video belongs to

Available for the video that is an episode of some series or programme:

• series (string): Title of the series or programme the video episode belongs to
• season (string): Title of the season the video episode belongs to
• season_number (numeric): Number of the season the video episode belongs to
• season_id (string): Id of the season the video episode belongs to
• episode (string): Title of the video episode
• episode_number (numeric): Number of the video episode within a season
• episode_id (string): Id of the video episode

Available for the media that is a track or a part of a music album:

• track (string): Title of the track
• track_number (numeric): Number of the track within an album or a disc
• track_id (string): Id of the track
• artist (string): Artist(s) of the track
• genre (string): Genre(s) of the track
• album (string): Title of the album the track belongs to
• album_type (string): Type of the album
• album_artist (string): List of all artists who appeared on the album
• disc_number (numeric): Number of the disc or other physical medium the track belongs to
• release_year (numeric): Year (YYYY) when the album was released

Each of the aforementioned sequences, when referenced in an output template, will be replaced by the actual value corresponding to the sequence name. Note that some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with the placeholder value provided with --output-na-placeholder (NA by default).

For example, for -o %(title)s-%(id)s.%(ext)s and an mp4 video with title youtube-dl test video and id BaW_jenozKcj, this will result in a youtube-dl test video-BaW_jenozKcj.mp4 file created in the current directory.

For numeric sequences you can use numeric-related formatting; for example, %(view_count)05d will result in a string with the view count padded with zeros up to 5 characters, like in 00042.

Output templates can also contain arbitrary hierarchical paths, e.g. -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s', which will result in downloading each video into a directory corresponding to this path template. Any missing directory will be automatically created for you.

To use percent literals in an output template use %%. To output to stdout use -o -.

The current default template is %(title)s-%(id)s.%(ext)s.
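The NA fallback for missing sequences can be illustrated in Python terms (the metadata values are made up, and the defaultdict merely stands in for youtube-dl's internal handling of absent fields):

```python
import collections

# Sequences missing from the metadata fall back to a placeholder
# (youtube-dl uses --output-na-placeholder, "NA" by default).
info = collections.defaultdict(lambda: "NA",
                               {"title": "youtube-dl test video", "ext": "mp4"})

# "id" is absent here, so the placeholder is substituted for it:
print("%(title)s-%(id)s.%(ext)s" % info)
```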
In some cases, you don't want special characters such as 中, spaces, or &, such as when transferring the downloaded file name to a Windows system or sending the file name through an 8-bit-unsafe channel. In these cases, add the --restrict-filenames flag to get a shorter title.

Output template and Windows batch files

If you are using an output template inside a Windows batch file then you must escape plain percent characters (%) by doubling them, so that -o "%(title)s-%(id)s.%(ext)s" should become -o "%%(title)s-%%(id)s.%%(ext)s". However you should not touch %'s that are not plain characters, e.g. environment variables for expansion should stay intact: -o "C:\%HOMEPATH%\Desktop\%%(title)s.%%(ext)s".

Output template examples

Note that on Windows you may need to use double quotes instead of single.

    $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc
    youtube-dl test video ''_ä↭𝕐.mp4    # All kinds of weird characters

    $ youtube-dl --get-filename -o '%(title)s.%(ext)s' BaW_jenozKc --restrict-filenames
    youtube-dl_test_video_.mp4          # A simple file name

    # Download YouTube playlist videos in separate directory indexed by video order in a playlist
    $ youtube-dl -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re

    # Download all playlists of YouTube channel/user keeping each playlist in separate directory:
    $ youtube-dl -o '%(uploader)s/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/user/TheLinuxFoundation/playlists

    # Download Udemy course keeping each chapter in separate directory under MyVideos directory in your home
    $ youtube-dl -u user -p password -o '~/MyVideos/%(playlist)s/%(chapter_number)s - %(chapter)s/%(title)s.%(ext)s' https://www.udemy.com/java-tutorial/

    # Download entire series season keeping each series and each season in separate directory under C:/MyVideos
    $ youtube-dl -o "C:/MyVideos/%(series)s/%(season_number)s - %(season)s/%(episode_number)s - %(episode)s.%(ext)s"
https://videomore.ru/kino_v_detalayah/5_sezon/367617

    # Stream the video being downloaded to stdout
    $ youtube-dl -o - BaW_jenozKc

FORMAT SELECTION

By default youtube-dl tries to download the best available quality, i.e. if you want the best quality you don't need to pass any special options; youtube-dl will guess it for you by default.

But sometimes you may want to download in a different format, for example when you are on a slow or intermittent connection. The key mechanism for achieving this is so-called format selection, with which you can explicitly specify the desired format, select formats based on some criterion or criteria, set up precedence, and much more.

The general syntax for format selection is --format FORMAT or, shorter, -f FORMAT, where FORMAT is a selector expression, i.e. an expression that describes the format or formats you would like to download.

tl;dr: navigate me to examples.

The simplest case is requesting a specific format, for example with -f 22 you can download the format with format code equal to 22. You can get the list of available format codes for a particular video using --list-formats or -F. Note that these format codes are extractor specific.

You can also use a file extension (currently 3gp, aac, flv, m4a, mp3, mp4, ogg, wav, webm are supported) to download the best quality format of a particular file extension served as a single file, e.g. -f webm will download the best quality format with the webm extension served as a single file.

You can also use special names to select particular edge-case formats:

• best: Select the best quality format represented by a single file with video and audio.
• worst: Select the worst quality format represented by a single file with video and audio.
• bestvideo: Select the best quality video-only format (e.g. DASH video). May not be available.
• worstvideo: Select the worst quality video-only format. May not be available.
• bestaudio: Select the best quality audio-only format. May not be available.
• worstaudio: Select the worst quality audio-only format. May not be available.

For example, to download the worst quality video-only format you can use -f worstvideo.

If you want to download multiple videos and they don't have the same formats available, you can specify the order of preference using slashes. Note that the slash is left-associative, i.e. formats on the left hand side are preferred; for example, -f 22/17/18 will download format 22 if it's available, otherwise format 17 if it's available, otherwise format 18 if it's available, and otherwise it will complain that no suitable formats are available for download.

If you want to download several formats of the same video use a comma as a separator, e.g. -f 22,17,18 will download all three of these formats, of course if they are available. Or a more sophisticated example combined with the precedence feature: -f 136/137/mp4/bestvideo,140/m4a/bestaudio.

You can also filter the video formats by putting a condition in brackets, as in -f "best[height=720]" (or -f "[filesize>10M]").
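The slash and comma semantics described above can be sketched in a few lines of Python. This is a toy model, not youtube-dl's actual selector parser; the format codes and availability map are made up:

```python
# Toy model of -f "A/B,C/D" precedence (illustration only).
# `available` maps format codes to True when the video offers that format.
def select(selector, available):
    picks = []
    for group in selector.split(","):      # commas: download several formats
        for alt in group.split("/"):       # slashes: left-associative fallback
            if available.get(alt):
                picks.append(alt)
                break                      # first available alternative wins
    return picks

# A video that offers formats 17, 18 and 140, but not 22 or 136:
offered = {"17": True, "18": True, "140": True}
print(select("22/17/18", offered))         # 22 unavailable, so 17 is chosen
print(select("136/17,140", offered))       # one pick per comma-separated group
```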
The following numeric meta fields can be used with the comparisons <, <=, >, >=, = (equals), != (not equals):

• filesize: The number of bytes, if known in advance
• width: Width of the video, if known
• height: Height of the video, if known
• tbr: Average bitrate of audio and video in KBit/s
• abr: Average audio bitrate in KBit/s
• vbr: Average video bitrate in KBit/s
• asr: Audio sampling rate in Hertz
• fps: Frame rate

Filtering also works with the string comparisons = (equals), ^= (starts with), $= (ends with), *= (contains) on the following string meta fields:

• ext: File extension
• acodec: Name of the audio codec in use
• vcodec: Name of the video codec in use
• container: Name of the container format
• protocol: The protocol that will be used for the actual download, lower-case (http, https, rtsp, rtmp, rtmpe, mms, f4m, ism, http_dash_segments, m3u8, or m3u8_native)
• format_id: A short description of the format
• language: Language code

Any string comparison may be prefixed with the negation ! in order to produce the opposite comparison, e.g. !*= (does not contain).

Note that none of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by the particular extractor, i.e. the metadata offered by the video hoster.

Formats for which the value is not known are excluded unless you put a question mark (?) after the operator. You can combine format filters, so -f "[height <=? 720][tbr>500]" selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s.

You can merge the video and audio of two formats into a single file using -f <video-format>+<audio-format> (requires ffmpeg or avconv to be installed); for example, -f bestvideo+bestaudio will download the best video-only format and the best audio-only format, and mux them together with ffmpeg/avconv.
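The effect of the question-mark modifier on unknown fields can likewise be sketched as a toy model (not youtube-dl's real filter parser; the format entries below are made up):

```python
# Toy model of a numeric filter like [height<=?720] (illustration only).
def passes(fmt, field, limit, optional):
    value = fmt.get(field)
    if value is None:
        return optional          # '?' keeps formats whose field is unknown
    return value <= limit

formats = [{"format_id": "a", "height": 480},
           {"format_id": "b", "height": 1080},
           {"format_id": "c"}]                 # height unknown

# [height<=720]  -> excludes the unknown-height format:
print([f["format_id"] for f in formats if passes(f, "height", 720, False)])
# [height<=?720] -> keeps it:
print([f["format_id"] for f in formats if passes(f, "height", 720, True)])
```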
Format selectors can also be grouped using parentheses, for example if you want to download the best mp4 and webm formats with a height lower than 480 you can use -f '(mp4,webm)[height<480]'. Since the end of April 2015 and version 2015.04.26, youtube-dl uses -f bestvideo+bestaudio/best as the default format selection (see #5447 (https://github.com/ytdl-org/youtube-dl/issues/5447), #5456 (https://github.com/ytdl-org/youtube-dl/issues/5456)). If ffmpeg or avconv are installed this results in downloading bestvideo and bestaudio separately and muxing them together into a single file giving the best overall quality available. Otherwise it falls back to best and results in downloading the best available quality served as a single file. best is also needed for videos that don't come from YouTube because they don't provide the audio and video in two different files. If you want to only download some DASH formats (for example if you are not interested in getting videos with a resolution higher than 1080p), you can add -f bestvideo[height<=?1080]+bestaudio/best to your configuration file. Note that if you use youtube-dl to stream to stdout (and most likely to pipe it to your media player then), i.e. you explicitly specify output template as -o -, youtube-dl still uses -f best format selection in order to start content delivery immediately to your player and not to wait until bestvideo and bestaudio are downloaded and muxed. If you want to preserve the old format selection behavior (prior to youtube-dl 2015.04.26), i.e. you want to download the best available quality media served as a single file, you should explicitly specify your choice with -f best. You may want to add it to the configuration file in order not to type it every time you run youtube-dl. Format selection examples Note that on Windows you may need to use double quotes instead of single. 
    # Download best mp4 format available or any other best if no mp4 available
    $ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best'

    # Download best format available but no better than 480p
    $ youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]'

    # Download best video only format but no bigger than 50 MB
    $ youtube-dl -f 'best[filesize<50M]'

    # Download best format available via direct link over HTTP/HTTPS protocol
    $ youtube-dl -f '(bestvideo+bestaudio/best)[protocol^=http]'

    # Download the best video format and the best audio format without merging them
    $ youtube-dl -f 'bestvideo,bestaudio' -o '%(title)s.f%(format_id)s.%(ext)s'

Note that in the last example, an output template is recommended as bestvideo and bestaudio may have the same file name.

VIDEO SELECTION

Videos can be filtered by their upload date using the options --date, --datebefore or --dateafter. They accept dates in two formats:

• Absolute dates: Dates in the format YYYYMMDD.
• Relative dates: Dates in the format (now|today)[+-][0-9](day|week|month|year)(s)?

Examples:

    # Download only the videos uploaded in the last 6 months
    $ youtube-dl --dateafter now-6months

    # Download only the videos uploaded on January 1, 1970
    $ youtube-dl --date 19700101

    # Download only the videos uploaded in the 200x decade
    $ youtube-dl --dateafter 20000101 --datebefore 20091231

FAQ

How do I update youtube-dl?

If you've followed our manual installation instructions (https://ytdl-org.github.io/youtube-dl/download.html), you can simply run youtube-dl -U (or, on Linux, sudo youtube-dl -U).

If you have used pip, a simple sudo pip install -U youtube-dl is sufficient to update.

If you have installed youtube-dl using a package manager like apt-get or yum, use the standard system update mechanism to update. Note that distribution packages are often outdated. As a rule of thumb, youtube-dl releases at least once a month, and often weekly or even daily.
Simply go to https://yt-dl.org to find out the current version. Unfortunately, there is nothing we youtube-dl developers can do if your distribution serves a really outdated version. You can (and should) complain to your distribution in their bugtracker or support forum.

As a last resort, you can also uninstall the version installed by your package manager and follow our manual installation instructions. For that, remove the distribution's package, with a line like

    sudo apt-get remove -y youtube-dl

Afterwards, simply follow our manual installation instructions (https://ytdl-org.github.io/youtube-dl/download.html):

    sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
    sudo chmod a+rx /usr/local/bin/youtube-dl
    hash -r

Again, from then on you'll be able to update with sudo youtube-dl -U.

youtube-dl is extremely slow to start on Windows

Add a file exclusion for youtube-dl.exe in Windows Defender settings.

I'm getting an error Unable to extract OpenGraph title on YouTube playlists

YouTube changed their playlist format in March 2014 and later on, so you'll need at least youtube-dl 2014.07.25 to download all YouTube videos.

If you have installed youtube-dl with a package manager, pip, setup.py or a tarball, please use that to update. Note that Ubuntu packages do not seem to get updated anymore. Since we are not affiliated with Ubuntu, there is little we can do. Feel free to report bugs (https://bugs.launchpad.net/ubuntu/+source/youtube-dl/+filebug) to the Ubuntu packaging people (mailto:ubuntu-motu@lists.ubuntu.com?subject=outdated%20version%20of%20youtube-dl) - all they have to do is update the package to a somewhat recent version. See above for a way to update.

I'm getting an error when trying to use output template: error: using output template conflicts with using title, video ID or auto number

Make sure you are not using -o together with any of the options -t, --title, --id, -A or --auto-number set in the command line or in a configuration file.
Remove the latter if any.

Do I always have to pass -citw?

By default, youtube-dl intends to have the best options (incidentally, if you have a convincing case that these should be different, please file an issue where you explain that (https://yt-dl.org/bug)). Therefore, it is unnecessary and sometimes harmful to copy long option strings from webpages. In particular, the only option out of -citw that is regularly useful is -i.

Can you please put the -b option back?

Most people asking this question are not aware that youtube-dl now defaults to downloading the highest available quality as reported by YouTube, which will be 1080p or 720p in some cases, so you no longer need the -b option. For some specific videos, maybe YouTube does not report them to be available in a specific high quality format you're interested in. In that case, simply request it with the -f option and youtube-dl will try to download it.

I get HTTP error 402 when trying to download a video. What's this?

Apparently YouTube requires you to pass a CAPTCHA test if you download too much. We're considering providing a way to let you solve the CAPTCHA (https://github.com/ytdl-org/youtube-dl/issues/154), but at the moment, your best course of action is pointing a web browser at the YouTube URL, solving the CAPTCHA, and restarting youtube-dl.

Do I need any other programs?

youtube-dl works fine on its own on most sites. However, if you want to convert video/audio, you'll need avconv (https://libav.org/) or ffmpeg (https://www.ffmpeg.org/). On some sites - most notably YouTube - videos can be retrieved in a higher quality format without sound. youtube-dl will detect whether avconv/ffmpeg is present and automatically pick the best option.

Videos or video formats streamed via the RTMP protocol can only be downloaded when rtmpdump (https://rtmpdump.mplayerhq.hu/) is installed. Downloading MMS and RTSP videos requires either mplayer (https://mplayerhq.hu/) or mpv (https://mpv.io/) to be installed.
I have downloaded a video but how can I play it? Once the video is fully downloaded, use any video player, such as mpv (https://mpv.io/), vlc (https://www.videolan.org/) or mplayer (https://www.mplayerhq.hu/). I extracted a video URL with -g, but it does not play on another machine / in my web browser. It depends a lot on the service. In many cases, requests for the video (to download/play it) must come from the same IP address and with the same cookies and/or HTTP headers. Use the --cookies option to write the required cookies into a file, and advise your downloader to read cookies from that file. Some sites also require a common user agent to be used, use --dump-user-agent to see the one in use by youtube-dl. You can also get necessary cookies and HTTP headers from JSON output obtained with --dump-json. It may be beneficial to use IPv6; in some cases, the restrictions are only applied to IPv4. Some services (sometimes only for a subset of videos) do not restrict the video URL by IP address, cookie, or user-agent, but these are the exception rather than the rule. Please bear in mind that some URL protocols are not supported by browsers out of the box, including RTMP. If you are using -g, your own downloader must support these as well. If you want to play the video on a machine that is not running youtube-dl, you can relay the video content from the machine that runs youtube-dl. You can use -o - to let youtube-dl stream a video to stdout, or simply allow the player to download the files written by youtube-dl in turn. ERROR: no fmt_url_map or conn information found in video info YouTube has switched to a new video info format in July 2011 which is not supported by old versions of youtube-dl. See above for how to update youtube-dl. ERROR: unable to download video YouTube requires an additional signature since September 2012 which is not supported by old versions of youtube-dl. See above for how to update youtube-dl. 
Video URL contains an ampersand and I'm getting some strange output [1] 2839 or 'v' is not recognized as an internal or external command

That's actually output from your shell. Since the ampersand is one of the special shell characters, it's interpreted by the shell, preventing you from passing the whole URL to youtube-dl. To keep your shell from interpreting the ampersands (or any other special characters) you have to either put the whole URL in quotes or escape them with a backslash (which approach works depends on your shell).

For example, if your URL is https://www.youtube.com/watch?t=4&v=BaW_jenozKc you should end up with the following command:

    youtube-dl 'https://www.youtube.com/watch?t=4&v=BaW_jenozKc'

or

    youtube-dl https://www.youtube.com/watch?t=4\&v=BaW_jenozKc

For Windows you have to use double quotes:

    youtube-dl "https://www.youtube.com/watch?t=4&v=BaW_jenozKc"

ExtractorError: Could not find JS function u'OF'

In February 2015, the new YouTube player contained a character sequence in a string that was misinterpreted by old versions of youtube-dl. See above for how to update youtube-dl.

HTTP Error 429: Too Many Requests or 402: Payment Required

These two error codes indicate that the service is blocking your IP address because of overuse. Usually this is a soft block, meaning that you can gain access again after solving a CAPTCHA. Just open a browser, solve the CAPTCHA the service suggests, and after that pass the cookies to youtube-dl. Note that if your machine has multiple external IPs then you should also pass exactly the same IP you've used for solving the CAPTCHA with --source-address. You may also need to pass a User-Agent HTTP header of your browser with --user-agent.

If this is not the case (no CAPTCHA suggested to solve by the service) then you can contact the service and ask them to unblock your IP address, or - if you have acquired a whitelisted IP address already - use the --proxy or --source-address options to select another IP address.
SyntaxError: Non-ASCII character

The error

    File "youtube-dl", line 2
    SyntaxError: Non-ASCII character '\x93' ...

means you're using an outdated version of Python. Please update to Python 2.6 or 2.7.

What is this binary file? Where has the code gone?

Since June 2012 (#342 (https://github.com/ytdl-org/youtube-dl/issues/342)) youtube-dl is packed as an executable zipfile; simply unzip it (you might need to rename it to youtube-dl.zip first on some systems) or clone the git repository, as laid out above. If you modify the code, you can run it by executing the __main__.py file. To recompile the executable, run make youtube-dl.

The exe throws an error due to missing MSVCR100.dll

To run the exe you first need to install the Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package (x86) (https://download.microsoft.com/download/1/6/5/165255E7-1014-4D0A-B094-B6A430A6BFFC/vcredist_x86.exe).

On Windows, how should I set up ffmpeg and youtube-dl? Where should I put the exe files?

If you put youtube-dl and ffmpeg in the same directory that you're running the command from, it will work, but that's rather cumbersome.

To make a different directory work - either for ffmpeg, or for youtube-dl, or for both - simply create the directory (say, C:\bin, or C:\Users\<User name>\bin), put all the executables directly in there, and then set your PATH environment variable (https://www.java.com/en/download/help/path.xml) to include that directory.

From then on, after restarting your shell, you will be able to access both youtube-dl and ffmpeg (and youtube-dl will be able to find ffmpeg) by simply typing youtube-dl or ffmpeg, no matter what directory you're in.

How do I put downloads into a specific folder?

Use the -o option to specify an output template, for example -o "/home/user/videos/%(title)s-%(id)s.%(ext)s". If you want this for all of your downloads, put the option into your configuration file.

How do I download a video starting with a -?
Either prepend https://www.youtube.com/watch?v= or separate the ID from the options with --:

    youtube-dl -- -wNyEUrxzFU
    youtube-dl "https://www.youtube.com/watch?v=-wNyEUrxzFU"

How do I pass cookies to youtube-dl?

Use the --cookies option, for example --cookies /path/to/cookies/file.txt.

In order to extract cookies from a browser, use any conforming browser extension for exporting cookies. For example, Get cookies.txt (https://chrome.google.com/webstore/detail/get-cookiestxt/bgaddhkoddajcdgocldbbfleckgcbcid/) (for Chrome) or cookies.txt (https://addons.mozilla.org/en-US/firefox/addon/cookies-txt/) (for Firefox).

Note that the cookies file must be in Mozilla/Netscape format and the first line of the cookies file must be either # HTTP Cookie File or # Netscape HTTP Cookie File. Make sure you have the correct newline format (https://en.wikipedia.org/wiki/Newline) in the cookies file and convert newlines if necessary to correspond with your OS, namely CRLF (\r\n) for Windows and LF (\n) for Unix and Unix-like systems (Linux, macOS, etc.). HTTP Error 400: Bad Request when using --cookies is a good sign of an invalid newline format.

Passing cookies to youtube-dl is a good way to work around login when a particular extractor does not implement it explicitly. Another use case is working around CAPTCHA (https://en.wikipedia.org/wiki/CAPTCHA) some websites require you to solve in particular cases in order to get access (e.g. YouTube, CloudFlare).

How do I stream directly to media player?

You will first need to tell youtube-dl to stream media to stdout with -o -, and also tell your media player to read from stdin (it must be capable of this for streaming), and then pipe the former to the latter. For example, streaming to vlc (https://www.videolan.org/) can be achieved with:

    youtube-dl -o - "https://www.youtube.com/watch?v=BaW_jenozKcj" | vlc -

How do I download only new videos from a playlist?

Use the download-archive feature.
With this feature you should initially download the complete playlist with --download-archive /path/to/download/archive/file.txt, which will record the identifiers of all the videos in a special file. Each subsequent run with the same --download-archive will download only new videos and skip all videos that have been downloaded before. Note that only successful downloads are recorded in the file.

For example, at first,

    youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"

will download the complete PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re playlist and create a file archive.txt. Each subsequent run will only download new videos, if any:

    youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"

Should I add --hls-prefer-native into my config?

When youtube-dl detects an HLS video, it can download it either with the built-in downloader or ffmpeg. Since many HLS streams are slightly invalid and ffmpeg/youtube-dl each handle some invalid cases better than the other, there is an option to switch the downloader if needed.

When youtube-dl knows that one particular downloader works better for a given website, that downloader will be picked. Otherwise, youtube-dl will pick the best downloader for general compatibility, which at the moment happens to be ffmpeg. This choice may change in future versions of youtube-dl, with improvements of the built-in downloader and/or ffmpeg.

In particular, the generic extractor (used when your website is not in the list of supported sites by youtube-dl (https://ytdl-org.github.io/youtube-dl/supportedsites.html)) cannot mandate one specific downloader.

If you put either --hls-prefer-native or --hls-prefer-ffmpeg into your configuration, a different subset of videos will fail to download correctly.
Instead, it is much better to file an issue (https://yt-dl.org/bug) or a pull request which details why the native or the ffmpeg HLS downloader is a better choice for your use case.

Can you add support for this anime video site, or a site which shows current movies for free?

As a matter of policy (as well as legality), youtube-dl does not include support for services that specialize in infringing copyright. As a rule of thumb, if you cannot easily find a video that the service is quite obviously allowed to distribute (i.e. one that has been uploaded by the creator, the creator's distributor, or is published under a free license), the service is probably unfit for inclusion in youtube-dl.

A note on the service saying that they don't host the infringing content, but just link to those who do, is evidence that the service should not be included in youtube-dl. The same goes for any DMCA note when the whole front page of the service is filled with videos they are not allowed to distribute. A "fair use" note is equally unconvincing if the service shows copyright-protected videos in full without authorization.

Support requests for services that do purchase the rights to distribute their content are perfectly fine though. If in doubt, you can simply include a source that mentions the legitimate purchase of content.

How can I speed up work on my issue? (Also known as: Help, my important issue is not being solved!)

The youtube-dl core developer team is quite small. While we do our best to solve as many issues as possible, sometimes that can take quite a while. To speed up your issue, here's what you can do:

First of all, please do report the issue at our issue tracker (https://yt-dl.org/bugs). That allows us to coordinate all efforts by users and developers, and serves as a unified point. Unfortunately, the youtube-dl project has grown too large to use personal email as an effective communication channel.

Please read the bug reporting instructions below.
A lot of bugs lack all the necessary information. If you can, offer proxy, VPN, or shell access to the youtube-dl developers. If you are able to, test the issue from multiple computers in multiple countries to exclude local censorship or misconfiguration issues.

If nobody is interested in solving your issue, you are welcome to take matters into your own hands and submit a pull request (or coerce/pay somebody else to do so).

Feel free to bump the issue from time to time by writing a small comment ("Issue is still present in youtube-dl version ... from France, but fixed from Belgium"), but please not more than once a month. Please do not declare your issue as important or urgent.

How can I detect whether a given URL is supported by youtube-dl?

For one, have a look at the list of supported sites (docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from https://example.com/video/1234567 to https://example.com/v/1234567) and youtube-dl reports a URL of a service in that list as unsupported. In that case, simply report a bug.

It is not possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches all URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that is hosting it itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.

If you want to find out whether a given URL is supported, simply call youtube-dl with it. If you get no videos back, chances are the URL is either not referring to a video or unsupported. You can find out which by examining the output (if you run youtube-dl on the console) or by catching an UnsupportedError exception if you run it from a Python program.
Why do I need to go through that much red tape when filing bugs?

Before we had the issue template, despite our extensive bug reporting instructions, about 80% of the issue reports we got were useless, for instance because people used ancient versions hundreds of releases old, because of simple syntactic errors (not in youtube-dl but in general shell usage), because the problem was already reported multiple times before, because people did not actually read an error message, even if it said "please install ffmpeg", because people did not mention the URL they were trying to download, and many more simple, easy-to-avoid problems, many of which were totally unrelated to youtube-dl.

youtube-dl is an open-source project manned by too few volunteers, so we'd rather spend time fixing bugs where we are certain none of those simple problems apply, and where we can be reasonably confident of being able to reproduce the issue without asking the reporter repeatedly. As such, the output of youtube-dl -v YOUR_URL_HERE is really all that's required to file an issue. The issue template also guides you through some basic steps you can do, such as checking that your version of youtube-dl is current.

DEVELOPER INSTRUCTIONS

Most users do not need to build youtube-dl and can download the builds (https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.

To run youtube-dl as a developer, you don't need to build anything either. Simply execute

    python -m youtube_dl

To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests

See item 6 of the new extractor tutorial for how to run extractor-specific test cases.
If you want to create a build of youtube-dl yourself, you'll need

    • python
    • make (only GNU make is supported)
    • pandoc
    • zip
    • nosetests

Adding support for a new site

If you want to add support for a new site, first of all make sure this site is not dedicated to copyright infringement (README.md#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free). youtube-dl does not support such sites, thus pull requests adding support for them will be rejected.

After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called yourextractor):

1. Fork this repository (https://github.com/ytdl-org/youtube-dl/fork)

2. Check out the source code with:

    git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git

3. Start a new git branch with

    cd youtube-dl
    git checkout -b yourextractor

4. Start with this simple template and save it to youtube_dl/extractor/yourextractor.py:

    # coding: utf-8
    from __future__ import unicode_literals

    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TEST = {
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type (for example int or float)
            }
        }

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(
                    r'<div[^>]+id="uploader"[^>]*>([^<]+)<',
                    webpage, 'uploader', fatal=False),
                # TODO more properties (see youtube_dl/extractor/common.py)
            }

5. Add an import in youtube_dl/extractor/extractors.py (https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).

6. Run python test/test_download.py TestDownload.test_YourExtractor. This should fail at first, but you can continually re-run it until you're done. If you decide to add more than one test, then rename _TEST to _TESTS and make it into a list of dictionaries. The tests will then be named TestDownload.test_YourExtractor, TestDownload.test_YourExtractor_1, TestDownload.test_YourExtractor_2, etc. Note that tests with an only_matching key in the test's dict are not counted.

7. Have a look at youtube_dl/extractor/common.py (https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/common.py) for possible helper methods and a detailed description of what your extractor should and may return (https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303). Add tests and code for as many as you want.

8. Make sure your code follows the youtube-dl coding conventions and check the code with flake8 (https://flake8.pycqa.org/en/latest/index.html#quickstart):

    $ flake8 youtube_dl/extractor/yourextractor.py

9. Make sure your code works under all Python (https://www.python.org/) versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.

10.
When the tests pass, add (https://git-scm.com/docs/git-add) the new files, commit (https://git-scm.com/docs/git-commit) them, and push (https://git-scm.com/docs/git-push) the result, like this:

    $ git add youtube_dl/extractor/extractors.py
    $ git add youtube_dl/extractor/yourextractor.py
    $ git commit -m '[yourextractor] Add new extractor'
    $ git push origin yourextractor

11. Finally, create a pull request (https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.

In any case, thank you very much for your contributions!

youtube-dl coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are very fragile by nature since they depend on the layout of the source data provided by third-party media hosters, which is out of your control and tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly, but also to minimize dependency on the source's layout and even to make the code anticipate potential future changes and be ready for them. This is important because it will keep the extractor from breaking on minor layout changes, thus keeping old youtube-dl versions working.

Even though this breakage issue is easily fixed by releasing a new version of youtube-dl with a fix incorporated, all the previous versions become broken in all repositories and distros' packages, which may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.

Mandatory and optional metafields

For extraction to work youtube-dl relies on the metadata your extractor extracts and provides to youtube-dl, expressed as an information dictionary (https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303), or simply info dict.
Only the following meta fields in the info dict are considered mandatory for a successful extraction process by youtube-dl:

    • id (media identifier)
    • title (media title)
    • url (media download URL) or formats

In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media, the extraction does not make any sense). But by convention youtube-dl also treats id and title as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make any sense, and if any of them fail to be extracted the extractor is considered completely broken.

Any field (https://github.com/ytdl-org/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303) apart from the aforementioned ones is considered optional. That means that extraction should be tolerant of situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and future-proof in order not to break the extraction of the general-purpose mandatory fields.

Example

Say you have some source dictionary meta that you've fetched as JSON with an HTTP request and it has a key summary:

    meta = self._download_json(url, video_id)

Assume at this point meta's layout is:

    {
        ...
        "summary": "some fancy summary text",
        ...
    }

Assume you want to extract summary and put it into the resulting info dict as description. Since description is an optional meta field you should be prepared for this key to be missing from the meta dict, so you should extract it like:

    description = meta.get('summary')  # correct

and not like:

    description = meta['summary']  # incorrect

The latter will break the extraction process with a KeyError if summary disappears from meta at some later time, while with the former approach extraction will just go ahead with description set to None, which is perfectly fine (remember None is equivalent to the absence of data).
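The mandatory-field convention above can be expressed as a small validity check. This is an illustrative sketch only; the helper name check_info_dict is hypothetical and is not part of youtube-dl:

```python
def check_info_dict(info):
    """Return a list of problems with an extractor's info dict.

    Mirrors the convention described above: id and title are mandatory,
    and either url or formats must be present.
    """
    problems = []
    for field in ('id', 'title'):
        if not info.get(field):
            problems.append('missing mandatory field: %s' % field)
    if not info.get('url') and not info.get('formats'):
        problems.append('need either url or formats')
    return problems


# A dict with all critical data passes; optional fields like description
# may be absent without making the extractor broken.
print(check_info_dict({'id': '42', 'title': 'Video', 'url': 'https://example.com/v.mp4'}))
# A dict missing title is considered completely broken.
print(check_info_dict({'id': '42', 'formats': [{}]}))
```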
Similarly, you should pass fatal=False when extracting optional data from a webpage with _search_regex, _html_search_regex or similar methods, for instance:

    description = self._search_regex(
        r'<span[^>]+id="title"[^>]*>([^<]+)<',
        webpage, 'description', fatal=False)

With fatal set to False, if _search_regex fails to extract description it will emit a warning and continue extraction.

You can also pass default=<some fallback value>, for example:

    description = self._search_regex(
        r'<span[^>]+id="title"[^>]*>([^<]+)<',
        webpage, 'description', default=None)

On failure this code will silently continue the extraction with description set to None. That is useful for metafields that may or may not be present.

Provide fallbacks

When extracting metadata try to do so from multiple sources. For example, if title is present in several places, try extracting from at least some of them. This makes the extractor more future-proof in case some of the sources become unavailable.

Example

Say meta from the previous example has a title and you are about to extract it. Since title is a mandatory meta field you should end up with something like:

    title = meta['title']

If title disappears from meta in future due to some changes on the hoster's side the extraction would fail, since title is mandatory. That's expected.

Assume that you have another source you can extract title from, for example the og:title HTML meta tag of the webpage. In this case you can provide a fallback scenario:

    title = meta.get('title') or self._og_search_title(webpage)

This code will try to extract from meta first, and if that fails it will try extracting og:title from the webpage.

Regular expressions

Don't capture groups you don't use

A capturing group must be an indication that it is used somewhere in the code. Any group that is not used must be non-capturing.

Example

Don't capture the id attribute name here, since you can't use it for anything anyway.
Correct:

    r'(?:id|ID)=(?P<id>\d+)'

Incorrect:

    r'(id|ID)=(?P<id>\d+)'

Make regular expressions relaxed and flexible

When using regular expressions try to write them fuzzy, relaxed and flexible, skipping insignificant parts that are more likely to change, allowing both single and double quotes for quoted values, and so on.

Example

Say you need to extract title from the following HTML code:

    <span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">some fancy title</span>

The code for that task should look similar to:

    title = self._search_regex(
        r'<span[^>]+class="title"[^>]*>([^<]+)', webpage, 'title')

Or even better:

    title = self._search_regex(
        r'<span[^>]+class=(["\'])title\1[^>]*>(?P<title>[^<]+)',
        webpage, 'title', group='title')

Note how you tolerate potential changes in the style attribute's value or a switch from double quotes to single quotes for the class attribute.

The code definitely should not look like:

    title = self._search_regex(
        r'<span style="position: absolute; left: 910px; width: 90px; float: right; z-index: 9999;" class="title">(.*?)</span>',
        webpage, 'title', group='title')

Long lines policy

There is a soft limit to keep lines of code under 80 characters long. This means it should be respected if possible and if it does not make readability and code maintenance worse. For example, you should never split long string literals like URLs or other often-copied entities over multiple lines to fit this limit:

Correct:

    'https://www.youtube.com/watch?v=FqZTN594JQw&list=PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'

Incorrect:

    'https://www.youtube.com/watch?v=FqZTN594JQw&list='
    'PLMYEtVRpaqY00V9W81Cwmzp6N6vZqfUKD4'

Inline values

Extracting variables is acceptable for reducing code duplication and improving readability of complex expressions. However, you should avoid extracting variables used only once and moving them to opposite parts of the extractor file, which makes reading the linear flow difficult.
Example

Correct:

    title = self._html_search_regex(r'<title>([^<]+)</title>', webpage, 'title')

Incorrect:

    TITLE_RE = r'<title>([^<]+)</title>'
    # ...some lines of code...
    title = self._html_search_regex(TITLE_RE, webpage, 'title')

Collapse fallbacks

Multiple fallback values can quickly become unwieldy. Collapse multiple fallback values into a single expression via a list of patterns.

Example

Good:

    description = self._html_search_meta(
        ['og:description', 'description', 'twitter:description'],
        webpage, 'description', default=None)

Unwieldy:

    description = (
        self._og_search_description(webpage, default=None)
        or self._html_search_meta('description', webpage, default=None)
        or self._html_search_meta('twitter:description', webpage, default=None))

Methods supporting a list of patterns are: _search_regex, _html_search_regex, _og_search_property, _html_search_meta.

Trailing parentheses

Always move trailing parentheses after the last argument.

Example

Correct:

    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list)

Incorrect:

    lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
    list,
    )

Use convenience conversion and parsing functions

Wrap all extracted numeric data in safe functions from youtube_dl/utils.py (https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py): int_or_none, float_or_none. Use them for string-to-number conversions as well.

Use url_or_none for safe URL processing.

Use try_get for safe metadata extraction from parsed JSON.

Use unified_strdate for uniform upload_date or any YYYYMMDD meta field extraction, unified_timestamp for uniform timestamp extraction, parse_filesize for filesize extraction, parse_count for count meta field extraction, parse_resolution, parse_duration for duration extraction, parse_age_limit for age_limit extraction.

Explore youtube_dl/utils.py (https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/utils.py) for more useful convenience functions.
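The behavior of try_get mentioned above can be illustrated with a simplified stand-in. This is an approximation for illustration only; the real implementation lives in youtube_dl/utils.py and may differ in detail:

```python
def try_get_sketch(src, getter, expected_type=None):
    # Apply the getter and swallow the lookup errors that missing or
    # malformed JSON data typically raises, returning None instead.
    try:
        value = getter(src)
    except (AttributeError, KeyError, TypeError, IndexError):
        return None
    # Optionally discard values of an unexpected type.
    if expected_type is not None and not isinstance(value, expected_type):
        return None
    return value


response = {'result': {'video': [{'summary': 'some fancy summary text'}]}}

# Present and of the expected type: the value comes through.
print(try_get_sketch(response, lambda x: x['result']['video'][0]['summary'], str))

# Missing key: no KeyError, extraction can continue with None.
print(try_get_sketch(response, lambda x: x['result']['clip'][0]['summary'], str))
```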
More examples

Safely extract an optional description from parsed JSON:

    description = try_get(response, lambda x: x['result']['video'][0]['summary'], compat_str)

Safely extract more optional metadata:

    video = try_get(response, lambda x: x['result']['video'][0], dict) or {}
    description = video.get('summary')
    duration = float_or_none(video.get('durationMs'), scale=1000)
    view_count = int_or_none(video.get('views'))

EMBEDDING YOUTUBE-DL

youtube-dl makes the best effort to be a good command-line program, and thus should be callable from any programming language. If you encounter any problems parsing its output, feel free to create a report (https://github.com/ytdl-org/youtube-dl/issues/new).

From a Python program, you can embed youtube-dl in a more powerful fashion, like this:

    from __future__ import unicode_literals
    import youtube_dl

    ydl_opts = {}
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])

Most likely, you'll want to use various options. For a list of options available, have a look at youtube_dl/YoutubeDL.py (https://github.com/ytdl-org/youtube-dl/blob/3e4cedf9e8cd3157df2457df7274d0c842421945/youtube_dl/YoutubeDL.py#L137-L312). For a start, if you want to intercept youtube-dl's output, set a logger object.
Here's a more complete example of a program that outputs only errors (and a short message after the download is finished), and downloads/converts the video to an mp3 file:

    from __future__ import unicode_literals
    import youtube_dl


    class MyLogger(object):
        def debug(self, msg):
            pass

        def warning(self, msg):
            pass

        def error(self, msg):
            print(msg)


    def my_hook(d):
        if d['status'] == 'finished':
            print('Done downloading, now converting ...')


    ydl_opts = {
        'format': 'bestaudio/best',
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
        'logger': MyLogger(),
        'progress_hooks': [my_hook],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])

BUGS

Bugs and suggestions should be reported at: <https://github.com/ytdl-org/youtube-dl/issues>. Unless you were prompted to or there is another pertinent reason (e.g. GitHub fails to accept the bug report), please do not send bug reports via personal email. For discussions, join us in the IRC channel #youtube-dl (irc://chat.freenode.net/#youtube-dl) on freenode (webchat (https://webchat.freenode.net/?randomnick=1&channels=youtube-dl)).

Please include the full output of youtube-dl when run with -v, i.e. add the -v flag to your command line, copy the whole output and post it in the issue body wrapped in ``` for better formatting. It should look similar to this:

    $ youtube-dl -v <your command line>
    [debug] System config: []
    [debug] User config: []
    [debug] Command-line args: [u'-v', u'https://www.youtube.com/watch?v=BaW_jenozKcj']
    [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
    [debug] youtube-dl version 2015.12.06
    [debug] Git HEAD: 135392e
    [debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
    [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
    [debug] Proxy map: {}
    ...

Do not post screenshots of verbose logs; only plain text is acceptable.
The output (including the first lines) contains important debugging information. Issues without the full output are often not reproducible and therefore do not get solved in short order, if ever.

Please re-read your issue once again to avoid a couple of common mistakes (you can and should use this as a checklist):

Is the description of the issue itself sufficient?

We often get issue reports that we cannot really decipher. While in most cases we eventually get the required information after asking back multiple times, this poses an unnecessary drain on our resources. Many contributors, including myself, are also not native speakers, so we may misread some parts.

So please elaborate on what feature you are requesting, or what bug you want to be fixed. Make sure that it's obvious

    • What the problem is
    • How it could be fixed
    • What your proposed solution would look like

If your report is shorter than two lines, it is almost certainly missing some of these, which makes it hard for us to respond to it. We're often too polite to close the issue outright, but the missing info makes misinterpretation likely. As a committer myself, I often get frustrated by these issues, since the only possible way for me to move forward on them is to ask for clarification over and over.

For bug reports, this means that your report should contain the complete output of youtube-dl when called with the -v flag. The error message you get for (most) bugs even says so, but you would not believe how many of our bug reports do not contain this information.

If your server has multiple IPs or you suspect censorship, adding --call-home may be a good idea to get more diagnostics. If the error is ERROR: Unable to extract ...
and you cannot reproduce it from multiple countries, add --dump-pages (warning: this will yield a rather large output; redirect it to the file log.txt by adding >log.txt 2>&1 to your command line) or upload the .dump files you get when you add --write-pages somewhere (https://gist.github.com/).

Site support requests must contain an example URL. An example URL is a URL you might want to download, like https://www.youtube.com/watch?v=BaW_jenozKc. There should be an obvious video present. Except under very special circumstances, the main page of a video service (e.g. https://www.youtube.com/) is not an example URL.

Are you using the latest version?

Before reporting any issue, type youtube-dl -U. This should report that you're up-to-date. About 20% of the reports we receive concern issues that are already fixed, but people are using outdated versions. This goes for feature requests as well.

Is the issue already documented?

Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the GitHub Issues (https://github.com/ytdl-org/youtube-dl/search?type=Issues) of this repository. If there is an issue, feel free to write something along the lines of "This affects me as well, with version 2015.01.01. Here is some more information on the issue: ...". While some issues may be old, a new post into them often spurs rapid activity.

Why are existing options not enough?

Before requesting a new feature, please have a quick peek at the list of supported options (https://github.com/ytdl-org/youtube-dl/blob/master/README.md#options). Many feature requests are for features that actually exist already! Please, absolutely do show off your work in the issue report and detail how the existing similar options do not solve your problem.

Is there enough context in your bug report?

People want to solve problems, and often think they do us a favor by breaking down their larger problems (e.g.
wanting to skip already downloaded files) into a specific request (e.g. requesting us to look whether the file exists before downloading the info page). However, what often happens is that they break down the problem into two steps: one simple, and one impossible (or extremely complicated).

We are then presented with a very complicated request when the original problem could be solved far more easily, e.g. by recording the downloaded video IDs in a separate file. To avoid this, you must include the greater context where it is non-obvious. In particular, every feature request that does not consist of adding support for a new site should contain a use case scenario that explains in what situation the missing feature would be useful.

Does the issue involve one problem, and one problem only?

Some of our users seem to think there is a limit of issues they can or should open. There is no limit of issues they can or should open. While it may seem appealing to be able to dump all your issues into one ticket, that means that someone who solves one of your issues cannot mark the issue as closed. Typically, reporting a bunch of issues leads to the ticket lingering since nobody wants to attack that behemoth, until someone mercifully splits the issue into multiple ones.

In particular, every site support request issue should only pertain to services at one site (generally under a common domain, but always using the same backend technology). Do not request support for vimeo user videos, White House podcasts, and Google Plus pages in the same issue. Also, make sure that you don't post bug reports alongside feature requests. As a rule of thumb, a feature request does not include outputs of youtube-dl that are not immediately related to the feature at hand. Do not post reports of a network error alongside the request for a new video service.

Is anyone going to need the feature?

Only post features that you (or an incapacitated friend you can personally talk to) require.
Do not post features because they seem like a good idea. If they are really useful, they will be requested by someone who requires them.

Is your question about youtube-dl?

It may sound strange, but some bug reports we receive are completely unrelated to youtube-dl and relate to a different, or even the reporter's own, application. Please make sure that you are actually using youtube-dl. If you are using a UI for youtube-dl, report the bug to the maintainer of the actual application providing the UI. On the other hand, if your UI for youtube-dl fails in some way you believe is related to youtube-dl, by all means, go ahead and report the bug.

COPYRIGHT

youtube-dl is released into the public domain by the copyright holders.

This README file was originally written by Daniel Bolton (https://github.com/dbbolton) and is likewise released into the public domain.

YOUTUBE-DL(1)
EXPR(1)

NAME
       expr - evaluate expressions

SYNOPSIS
       expr EXPRESSION
       expr OPTION

DESCRIPTION
       --help display this help and exit

       --version
              output version information and exit

       Print the value of EXPRESSION to standard output. A blank line below
       separates increasing precedence groups. EXPRESSION may be:

       ARG1 | ARG2
              ARG1 if it is neither null nor 0, otherwise ARG2

       ARG1 & ARG2
              ARG1 if neither argument is null or 0, otherwise 0

       ARG1 < ARG2
              ARG1 is less than ARG2

       ARG1 <= ARG2
              ARG1 is less than or equal to ARG2

       ARG1 = ARG2
              ARG1 is equal to ARG2

       ARG1 != ARG2
              ARG1 is unequal to ARG2

       ARG1 >= ARG2
              ARG1 is greater than or equal to ARG2

       ARG1 > ARG2
              ARG1 is greater than ARG2

       ARG1 + ARG2
              arithmetic sum of ARG1 and ARG2

       ARG1 - ARG2
              arithmetic difference of ARG1 and ARG2

       ARG1 * ARG2
              arithmetic product of ARG1 and ARG2

       ARG1 / ARG2
              arithmetic quotient of ARG1 divided by ARG2

       ARG1 % ARG2
              arithmetic remainder of ARG1 divided by ARG2

       STRING : REGEXP
              anchored pattern match of REGEXP in STRING

       match STRING REGEXP
              same as STRING : REGEXP

       substr STRING POS LENGTH
              substring of STRING, POS counted from 1

       index STRING CHARS
              index in STRING where any CHARS is found, or 0

       length STRING
              length of STRING

       + TOKEN
              interpret TOKEN as a string, even if it is a keyword like
              'match' or an operator like '/'

       ( EXPRESSION )
              value of EXPRESSION

       Beware that many operators need to be escaped or quoted for shells.
       Comparisons are arithmetic if both ARGs are numbers, else
       lexicographical. Pattern matches return the string matched between
       \( and \) or null; if \( and \) are not used, they return the number
       of characters matched or 0.

       Exit status is 0 if EXPRESSION is neither null nor 0, 1 if
       EXPRESSION is null or 0, 2 if EXPRESSION is syntactically invalid,
       and 3 if an error occurred.

AUTHOR
       Written by Mike Parker, James Youngman, and Paul Eggert.

REPORTING BUGS
       GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
       Report any translation bugs to <https://translationproject.org/team/>

COPYRIGHT
       Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU
       GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.
       There is NO WARRANTY, to the extent permitted by law.

SEE ALSO
       Full documentation <https://www.gnu.org/software/coreutils/expr> or
       available locally via: info '(coreutils) expr invocation'

GNU coreutils 9.3                 April 2023                        EXPR(1)
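The operator table above translates into invocations like the following. Note the quoting needed so the shell does not interpret * or parentheses itself:

```shell
# Arithmetic; * must be escaped so the shell does not glob-expand it.
expr 3 \* 4          # prints 12
expr 7 % 3           # prints 1

# Anchored pattern match: with \( and \) the matched substring is returned.
expr abcdef : 'ab\(cd\)'   # prints cd

# Without \( and \), the number of characters matched is returned.
expr abcdef : 'abc'        # prints 3
```

Remember the exit status rules: all of the above exit 0 because each result is neither null nor 0.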
BASE64(1)

NAME
       base64 - base64 encode/decode data and print to standard output

SYNOPSIS
       base64 [OPTION]... [FILE]

DESCRIPTION
       Base64 encode or decode FILE, or standard input, to standard output.

       With no FILE, or when FILE is -, read standard input.

       Mandatory arguments to long options are mandatory for short options
       too.

       -d, --decode
              decode data

       -i, --ignore-garbage
              when decoding, ignore non-alphabet characters

       -w, --wrap=COLS
              wrap encoded lines after COLS character (default 76). Use 0
              to disable line wrapping

       --help display this help and exit

       --version
              output version information and exit

       The data are encoded as described for the base64 alphabet in RFC
       4648. When decoding, the input may contain newlines in addition to
       the bytes of the formal base64 alphabet. Use --ignore-garbage to
       attempt to recover from any other non-alphabet bytes in the encoded
       stream.

AUTHOR
       Written by Simon Josefsson.

REPORTING BUGS
       GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
       Report any translation bugs to <https://translationproject.org/team/>

COPYRIGHT
       Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU
       GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.
       There is NO WARRANTY, to the extent permitted by law.

SEE ALSO
       basenc(1)

       Full documentation <https://www.gnu.org/software/coreutils/base64>
       or available locally via: info '(coreutils) base64 invocation'

GNU coreutils 9.3                 April 2023                      BASE64(1)
dumpsexp
| null | null | null | null | null |
fc-cat
| null | null | null | null | null |
gwc
|
Print newline, word, and byte counts for each FILE, and a total line if more than one FILE is specified. A word is a non-zero-length sequence of printable characters delimited by white space. With no FILE, or when FILE is -, read standard input. The options below may be used to select which counts are printed, always in the following order: newline, word, character, byte, maximum line length. -c, --bytes print the byte counts -m, --chars print the character counts -l, --lines print the newline counts --files0-from=F read input from the files specified by NUL-terminated names in file F; If F is - then read names from standard input -L, --max-line-length print the maximum display width -w, --words print the word counts --total=WHEN when to print a line with total counts; WHEN can be: auto, always, only, never --help display this help and exit --version output version information and exit AUTHOR Written by Paul Rubin and David MacKenzie. REPORTING BUGS GNU coreutils online help: <https://www.gnu.org/software/coreutils/> Report any translation bugs to <https://translationproject.org/team/> COPYRIGHT Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO Full documentation <https://www.gnu.org/software/coreutils/wc> or available locally via: info '(coreutils) wc invocation' GNU coreutils 9.3 April 2023 WC(1)
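The fixed output order described above (newline, word, character, byte, maximum line length) can be seen by selecting counts individually (assumes GNU coreutils wc):

```shell
printf 'one two\nthree four five\n' | wc      # prints newlines=2, words=5, bytes=24
printf 'one two\nthree four five\n' | wc -l   # newline count: 2
printf 'one two\nthree four five\n' | wc -w   # word count: 5
printf 'one two\nthree four five\n' | wc -c   # byte count: 24
```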
|
wc - print newline, word, and byte counts for each file
|
wc [OPTION]... [FILE]... wc [OPTION]... --files0-from=F
| null | null |
pod2man
|
pod2man is a wrapper script around the Pod::Man module, using it to generate *roff input from POD source. The resulting *roff code is suitable for display on a terminal using nroff(1), normally via man(1), or printing using troff(1). By default (on non-EBCDIC systems), pod2man outputs UTF-8 manual pages. Its output should work with the man program on systems that use groff (most Linux distributions) or mandoc (most BSD variants), but may result in mangled output on older UNIX systems. To choose a different, possibly more backward-compatible output mangling on such systems, use "--encoding=roff" (the default in earlier Pod::Man versions). See the --encoding option and "ENCODING" in Pod::Man for more details. input is the file to read for POD source (the POD can be embedded in code). If input isn't given, it defaults to "STDIN". output, if given, is the file to which to write the formatted output. If output isn't given, the formatted output is written to "STDOUT". Several POD files can be processed in the same pod2man invocation (saving module load and compile times) by providing multiple pairs of input and output files on the command line. --section, --release, --center, --date, and --official can be used to set the headers and footers to use. If not given, Pod::Man will assume various defaults. See below for details.
|
pod2man - Convert POD data to formatted *roff input
|
pod2man [--center=string] [--date=string] [--encoding=encoding] [--errors=style] [--fixed=font] [--fixedbold=font] [--fixeditalic=font] [--fixedbolditalic=font] [--guesswork=rule[,rule...]] [--name=name] [--nourls] [--official] [--release=version] [--section=manext] [--quotes=quotes] [--lquote=quote] [--rquote=quote] [--stderr] [--utf8] [--verbose] [input [output] ...] pod2man --help
|
Each option is annotated with the version of podlators in which that option was added with its current meaning. -c string, --center=string [1.00] Sets the centered page header for the ".TH" macro to string. The default is "User Contributed Perl Documentation", but also see --official below. -d string, --date=string [4.00] Set the left-hand footer string for the ".TH" macro to string. By default, the first of POD_MAN_DATE, SOURCE_DATE_EPOCH, the modification date of the input file, or the current date (if input comes from "STDIN") will be used, and the date will be in UTC. See "CLASS METHODS" in Pod::Man for more details. -e encoding, --encoding=encoding [5.00] Specifies the encoding of the output. encoding must be an encoding recognized by the Encode module (see Encode::Supported). The default on non-EBCDIC systems is UTF-8. If the output contains characters that cannot be represented in this encoding, that is an error that will be reported as configured by the --errors option. If error handling is other than "die", the unrepresentable character will be replaced with the Encode substitution character (normally "?"). If the "encoding" option is set to the special value "groff" (the default on EBCDIC systems), or if the Encode module is not available and the encoding is set to anything other than "roff" (see below), Pod::Man will translate all non-ASCII characters to "\[uNNNN]" Unicode escapes. These are not traditionally part of the *roff language, but are supported by groff and mandoc and thus by the majority of manual page processors in use today. If encoding is set to the special value "roff", pod2man will do its historic transformation of (some) ISO 8859-1 characters into *roff escapes that may be adequate in troff and may be readable (if ugly) in nroff. This was the default behavior of versions of pod2man before 5.00. With this encoding, all other non-ASCII characters will be replaced with "X". 
It may be required for very old troff and nroff implementations that do not support UTF-8, but its representation of any non-ASCII character is very poor and often specific to European languages. Its use is discouraged. WARNING: The input encoding of the POD source is independent from the output encoding, and setting this option does not affect the interpretation of the POD input. Unless your POD source is US-ASCII, its encoding should be declared with the "=encoding" command in the source. If this is not done, Pod::Simple will attempt to guess the encoding and may be successful if it's Latin-1 or UTF-8, but it will produce warnings. See perlpod(1) for more information. --errors=style [2.5.0] Set the error handling style. "die" says to throw an exception on any POD formatting error. "stderr" says to report errors on standard error, but not to throw an exception. "pod" says to include a POD ERRORS section in the resulting documentation summarizing the errors. "none" ignores POD errors entirely, as much as possible. The default is "die". --fixed=font [1.0] The fixed-width font to use for verbatim text and code. Defaults to "CW". Some systems may want "CR" instead. Only matters for troff output. --fixedbold=font [1.0] Bold version of the fixed-width font. Defaults to "CB". Only matters for troff output. --fixeditalic=font [1.0] Italic version of the fixed-width font (something of a misnomer, since most fixed-width fonts only have an oblique version, not an italic version). Defaults to "CI". Only matters for troff output. --fixedbolditalic=font [1.0] Bold italic (in theory, probably oblique in practice) version of the fixed-width font. Pod::Man doesn't assume you have this, and defaults to "CB". Some systems (such as Solaris) have this font available as "CX". Only matters for troff output. --guesswork=rule[,rule...] 
[5.00] By default, pod2man applies some default formatting rules based on guesswork and regular expressions that are intended to make writing Perl documentation easier and require less explicit markup. These rules may not always be appropriate, particularly for documentation that isn't about Perl. This option allows turning all or some of it off. The special rule "all" enables all guesswork. This is also the default for backward compatibility reasons. The special rule "none" disables all guesswork. Otherwise, the value of this option should be a comma-separated list of one or more of the following keywords: functions Convert function references like foo() to bold even if they have no markup. The function name accepts valid Perl characters for function names (including ":"), and the trailing parentheses must be present and empty. manref Make the first part (before the parentheses) of man page references like foo(1) bold even if they have no markup. The section must be a single number optionally followed by lowercase letters. quoting If no guesswork is enabled, any text enclosed in C<> is surrounded by double quotes in nroff (terminal) output unless the contents are already quoted. When this guesswork is enabled, quote marks will also be suppressed for Perl variables, function names, function calls, numbers, and hex constants. variables Convert Perl variable names to a fixed-width font even if they have no markup. This transformation will only be apparent in troff output, or some other output format (unlike nroff terminal output) that supports fixed-width fonts. Any unknown guesswork name is silently ignored (for potential future compatibility), so be careful about spelling. -h, --help [1.00] Print out usage information. -l, --lax [1.00] No longer used. pod2man used to check its input for validity as a manual page, but this should now be done by podchecker(1) instead. Accepted for backward compatibility; this option no longer does anything. 
--language=language [5.00] Add commands telling groff that the input file is in the given language. The value of this setting must be a language abbreviation for which groff provides supplemental configuration, such as "ja" (for Japanese) or "zh" (for Chinese). This adds: .mso <language>.tmac .hla <language> to the start of the file, which configure correct line breaking for the specified language. Without these commands, groff may not know how to add proper line breaks for Chinese and Japanese text if the man page is installed into the normal man page directory, such as /usr/share/man. On many systems, this will be done automatically if the man page is installed into a language-specific man page directory, such as /usr/share/man/zh_CN. In that case, this option is not required. Unfortunately, the commands added with this option are specific to groff and will not work with other troff and nroff implementations. --lquote=quote --rquote=quote [4.08] Sets the quote marks used to surround C<> text. --lquote sets the left quote mark and --rquote sets the right quote mark. Either may also be set to the special value "none", in which case no quote mark is added on that side of C<> text (but the font is still changed for troff output). Also see the --quotes option, which can be used to set both quotes at once. If both --quotes and one of the other options is set, --lquote or --rquote overrides --quotes. -n name, --name=name [4.08] Set the name of the manual page for the ".TH" macro to name. Without this option, the manual name is set to the uppercased base name of the file being converted unless the manual section is 3, in which case the path is parsed to see if it is a Perl module path. If it is, a path like ".../lib/Pod/Man.pm" is converted into a name like "Pod::Man". This option, if given, overrides any automatic determination of the name. 
Although one does not have to follow this convention, be aware that the convention for UNIX manual pages is for the title to be in all- uppercase, even if the command isn't. (Perl modules traditionally use mixed case for the manual page title, however.) This option is probably not useful when converting multiple POD files at once. When converting POD source from standard input, the name will be set to "STDIN" if this option is not provided. Providing this option is strongly recommended to set a meaningful manual page name. --nourls [2.5.0] Normally, L<> formatting codes with a URL but anchor text are formatted to show both the anchor text and the URL. In other words: L<foo|http://example.com/> is formatted as: foo <http://example.com/> This flag, if given, suppresses the URL when anchor text is given, so this example would be formatted as just "foo". This can produce less cluttered output in cases where the URLs are not particularly important. -o, --official [1.00] Set the default header to indicate that this page is part of the standard Perl release, if --center is not also given. -q quotes, --quotes=quotes [4.00] Sets the quote marks used to surround C<> text to quotes. If quotes is a single character, it is used as both the left and right quote. Otherwise, it is split in half, and the first half of the string is used as the left quote and the second is used as the right quote. quotes may also be set to the special value "none", in which case no quote marks are added around C<> text (but the font is still changed for troff output). Also see the --lquote and --rquote options, which can be used to set the left and right quotes independently. If both --quotes and one of the other options is set, --lquote or --rquote overrides --quotes. -r version, --release=version [1.00] Set the centered footer for the ".TH" macro to version. By default, this is set to the version of Perl you run pod2man under. 
Setting this to the empty string will cause some *roff implementations to use the system default value. Note that some system "an" macro sets assume that the centered footer will be a modification date and will prepend something like "Last modified: ". If this is the case for your target system, you may want to set --release to the last modified date and --date to the version number. -s string, --section=string [1.00] Set the section for the ".TH" macro. The standard section numbering convention is to use 1 for user commands, 2 for system calls, 3 for functions, 4 for devices, 5 for file formats, 6 for games, 7 for miscellaneous information, and 8 for administrator commands. There is a lot of variation here, however; some systems (like Solaris) use 4 for file formats, 5 for miscellaneous information, and 7 for devices. Still others use 1m instead of 8, or some mix of both. About the only section numbers that are reliably consistent are 1, 2, and 3. By default, section 1 will be used unless the file ends in ".pm", in which case section 3 will be selected. --stderr [2.1.3] By default, pod2man dies if any errors are detected in the POD input. If --stderr is given and no --errors flag is present, errors are sent to standard error, but pod2man does not abort. This is equivalent to "--errors=stderr" and is supported for backward compatibility. -u, --utf8 [2.1.0] This option used to tell pod2man to produce UTF-8 output. Since this is now the default as of version 5.00, it is ignored and does nothing. -v, --verbose [1.11] Print out the name of each output file as it is being generated. EXIT STATUS As long as all documents processed result in some output, even if that output includes errata (a "POD ERRORS" section generated with "--errors=pod"), pod2man will exit with status 0. If any of the documents being processed do not result in an output document, pod2man will exit with status 1. 
If there are syntax errors in a POD document being processed and the error handling style is set to the default of "die", pod2man will abort immediately with exit status 255. DIAGNOSTICS If pod2man fails with errors, see Pod::Man and Pod::Simple for information about what those errors might mean.
|
pod2man program > program.1 pod2man SomeModule.pm /usr/perl/man/man3/SomeModule.3 pod2man --section=7 note.pod > note.7 If you would like to print out a lot of man pages continuously, you probably want to set the C and D registers to set contiguous page numbering and even/odd paging, at least on some versions of man(7). troff -man -rC1 -rD1 perl.1 perldata.1 perlsyn.1 ... To get index entries on "STDERR", turn on the F register, as in: troff -man -rF1 perl.1 The indexing merely outputs messages via ".tm" for each major page, section, subsection, item, and any "X<>" directives. AUTHOR Russ Allbery <rra@cpan.org>, based on the original pod2man by Larry Wall and Tom Christiansen. COPYRIGHT AND LICENSE Copyright 1999-2001, 2004, 2006, 2008, 2010, 2012-2019, 2022 Russ Allbery <rra@cpan.org> This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Pod::Man, Pod::Simple, man(1), nroff(1), perlpod(1), podchecker(1), perlpodstyle(1), troff(1), man(7) The man page documenting the an macro set may be man(5) instead of man(7) on your system. The current version of this script is always available from its web site at <https://www.eyrie.org/~eagle/software/podlators/>. It is also part of the Perl core distribution as of 5.6.0. perl v5.38.2 2023-11-28 POD2MAN(1)
|
pdftoppm
|
Pdftoppm converts Portable Document Format (PDF) files to color image files in Portable Pixmap (PPM) format, grayscale image files in Portable Graymap (PGM) format, or monochrome image files in Portable Bitmap (PBM) format. Pdftoppm reads the PDF file, PDF-file, and writes one PPM file for each page, PPM-root-number.ppm, where number is the page number. If PDF-file is '-', it reads the PDF file from stdin.
|
pdftoppm - Portable Document Format (PDF) to Portable Pixmap (PPM) converter (version 3.03)
|
pdftoppm [options] PDF-file PPM-root
|
-f number Specifies the first page to convert. -l number Specifies the last page to convert. -o Generates only the odd numbered pages. -e Generates only the even numbered pages. -singlefile Writes only the first page and does not add digits. -r number Specifies the X and Y resolution, in DPI. The default is 150 DPI. -rx number Specifies the X resolution, in DPI. The default is 150 DPI. -ry number Specifies the Y resolution, in DPI. The default is 150 DPI. -scale-to number Scales the long side of each page (width for landscape pages, height for portrait pages) to fit in scale-to pixels. The size of the short side will be determined by the aspect ratio of the page. -scale-to-x number Scales each page horizontally to fit in scale-to-x pixels. If scale-to-y is set to -1, the vertical size will be determined by the aspect ratio of the page. -scale-to-y number Scales each page vertically to fit in scale-to-y pixels. If scale-to-x is set to -1, the horizontal size will be determined by the aspect ratio of the page. -scale-dimension-before-rotation Swaps horizontal and vertical size for a rotated (landscape) pdf before scaling instead of after. -x number Specifies the x-coordinate of the crop area top left corner -y number Specifies the y-coordinate of the crop area top left corner -W number Specifies the width of crop area in pixels (default is 0) -H number Specifies the height of crop area in pixels (default is 0) -sz number Specifies the size of crop square in pixels (sets W and H) -cropbox Uses the crop box rather than media box when generating the files -hide-annotations Do not show annotations -mono Generate a monochrome PBM file (instead of a color PPM file). -gray Generate a grayscale PGM file (instead of a color PPM file). -displayprofile displayprofilefile If poppler is compiled with colour management support, this option sets the display profile to the ICC profile stored in displayprofilefile. 
-defaultgrayprofile defaultgrayprofilefile If poppler is compiled with colour management support, this option sets the DefaultGray color space to the ICC profile stored in defaultgrayprofilefile. -defaultrgbprofile defaultrgbprofilefile If poppler is compiled with colour management support, this option sets the DefaultRGB color space to the ICC profile stored in defaultrgbprofilefile. -defaultcmykprofile defaultcmykprofilefile If poppler is compiled with colour management support, this option sets the DefaultCMYK color space to the ICC profile stored in defaultcmykprofilefile. -png Generates a PNG file instead of a PPM file. -jpeg Generates a JPEG file instead of a PPM file. -jpegopt jpeg-options When used with -jpeg, takes a list of options to control the jpeg compression. See JPEG OPTIONS for the available options. -tiff Generates a TIFF file instead of a PPM file. -tiffcompression none | packbits | jpeg | lzw | deflate Specifies the TIFF compression type. This defaults to "none". -freetype yes | no Enable or disable FreeType (a TrueType / Type 1 font rasterizer). This defaults to "yes". -thinlinemode none | solid | shape Specifies the thin line mode. This defaults to "none". "solid": adjust lines with a width less than one pixel to pixel boundary and paint it with a width of one pixel. "shape": adjust lines with a width less than one pixel to pixel boundary and paint it with a width of one pixel but with a shape in proportion to its width. -aa yes | no Enable or disable font anti-aliasing. This defaults to "yes". -aaVector yes | no Enable or disable vector anti-aliasing. This defaults to "yes". -opw password Specify the owner password for the PDF file. Providing this will bypass all security restrictions. -upw password Specify the user password for the PDF file. -q Don't print any messages or errors. -progress Print progress info as each page is generated. 
Three space-separated fields are printed to STDERR: the number of the current page, the number of the last page that will be generated, and the path to the file written to. -sep char Specify single character separator between name and page number, default '-'. -forcenum Force page number even if there is only one page. -v Print copyright and version information. -h Print usage information. (-help and --help are equivalent.) EXIT CODES The Xpdf tools use the following exit codes: 0 No error. 1 Error opening a PDF file. 2 Error opening an output file. 3 Error related to PDF permissions. 99 Other error. JPEG OPTIONS When JPEG output is specified, the -jpegopt option can be used to control the JPEG compression parameters. It takes a string of the form "<opt>=<val>[,<opt>=<val>]". Currently the available options are: quality Selects the JPEG quality value. The value must be an integer between 0 and 100. progressive Select progressive JPEG output. The possible values are "y", "n", indicating progressive (yes) or non-progressive (no), respectively. optimize Sets whether to compute optimal Huffman coding tables for the JPEG output, which will create smaller files but make an extra pass over the data. The value must be "y" or "n", with "y" performing optimization, otherwise the default Huffman tables are used. AUTHOR The pdftoppm software and documentation are copyright 1996-2011 Glyph & Cog, LLC. SEE ALSO pdfdetach(1), pdffonts(1), pdfimages(1), pdfinfo(1), pdftocairo(1), pdftohtml(1), pdftops(1), pdftotext(1) pdfseparate(1), pdfsig(1), pdfunite(1) 15 August 2011 pdftoppm(1)
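A typical invocation combining the page-range, resolution, and format options above is sketched below. The pdftoppm call is guarded so the script is a no-op when poppler-utils is absent, and doc.pdf / page are placeholder names; the trailing printf just illustrates how the output filename is assembled from the PPM root, the -sep character, and the page number:

```shell
# Render pages 1-3 of doc.pdf at 300 DPI as PNG files page-1.png .. page-3.png,
# only if pdftoppm is installed and the input file exists.
if command -v pdftoppm >/dev/null 2>&1 && [ -f doc.pdf ]; then
    pdftoppm -r 300 -f 1 -l 3 -png doc.pdf page
fi

# Output naming scheme: <PPM-root><sep><number>.<ext>
root=page; sep=-; num=3
printf '%s%s%s.png\n' "$root" "$sep" "$num"    # prints page-3.png
```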
| null |
graph2dot
| null | null | null | null | null |
enc2xs
|
enc2xs builds a Perl extension for use by Encode from either Unicode Character Mapping files (.ucm) or Tcl Encoding Files (.enc). Besides being used internally during the build process of the Encode module, you can use enc2xs to add your own encoding to perl. No knowledge of XS is necessary. Quick Guide If you want to know as little about Perl as possible but need to add a new encoding, just read this chapter and forget the rest. 0. Have a .ucm file ready. You can get it from somewhere or you can write your own from scratch or you can grab one from the Encode distribution and customize it. For the UCM format, see the next Chapter. In the example below, I'll call my theoretical encoding myascii, defined in my.ucm. "$" is a shell prompt. $ ls -F my.ucm 1. Issue a command as follows; $ enc2xs -M My my.ucm generating Makefile.PL generating My.pm generating README generating Changes Now take a look at your current directory. It should look like this. $ ls -F Makefile.PL My.pm my.ucm t/ The following files were created. Makefile.PL - MakeMaker script My.pm - Encode submodule t/My.t - test file 1.1. If you want *.ucm installed together with the modules, do as follows; $ mkdir Encode $ mv *.ucm Encode $ enc2xs -M My Encode/*ucm 2. Edit the files generated. You don't have to if you have no time AND no intention to give it to someone else. But it is a good idea to edit the pod and to add more tests. 3. Now issue a command all Perl Mongers love: $ perl Makefile.PL Writing Makefile for Encode::My 4. Now all you have to do is make. $ make cp My.pm blib/lib/Encode/My.pm /usr/local/bin/perl /usr/local/bin/enc2xs -Q -O \ -o encode_t.c -f encode_t.fnm Reading myascii (myascii) Writing compiled form 128 bytes in string tables 384 bytes (75%) saved spotting duplicates 1 bytes (0.775%) saved using substrings .... chmod 644 blib/arch/auto/Encode/My/My.bs $ The time it takes varies depending on how fast your machine is and how large your encoding is. 
Unless you are working on something big like euc-tw, it won't take too long. 5. You can "make install" already but you should test first. $ make test PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib \ -e 'use Test::Harness qw(&runtests $verbose); \ $verbose=0; runtests @ARGV;' t/*.t t/My....ok All tests successful. Files=1, Tests=2, 0 wallclock secs ( 0.09 cusr + 0.01 csys = 0.09 CPU) 6. If you are content with the test result, just "make install" 7. If you want to add your encoding to Encode's demand-loading list (so you don't have to "use Encode::YourEncoding"), run enc2xs -C to update Encode::ConfigLocal, a module that controls local settings. After that, "use Encode;" is enough to load your encodings on demand. The Unicode Character Map Encode uses the Unicode Character Map (UCM) format for source character mappings. This format is used by IBM's ICU package and was adopted by Nick Ing-Simmons for use with the Encode module. Since UCM is more flexible than Tcl's Encoding Map and far more user-friendly, this is the recommended format for Encode now. A UCM file looks like this. # # Comments # <code_set_name> "US-ascii" # Required <code_set_alias> "ascii" # Optional <mb_cur_min> 1 # Required; usually 1 <mb_cur_max> 1 # Max. # of bytes/char <subchar> \x3F # Substitution char # CHARMAP <U0000> \x00 |0 # <control> <U0001> \x01 |0 # <control> <U0002> \x02 |0 # <control> .... <U007C> \x7C |0 # VERTICAL LINE <U007D> \x7D |0 # RIGHT CURLY BRACKET <U007E> \x7E |0 # TILDE <U007F> \x7F |0 # <control> END CHARMAP • Anything that follows "#" is treated as a comment. • The header section continues until a line containing the word CHARMAP. This section has a form of <keyword> value, one pair per line. Strings used as values must be quoted. Barewords are treated as numbers. \xXX represents a byte. Most of the keywords are self-explanatory. subchar means substitution character, not subcharacter. 
When you decode a Unicode sequence to this encoding but no matching character is found, the byte sequence defined here will be used. For most cases, the value here is \x3F; in ASCII, this is a question mark. • CHARMAP starts the character map section. Each line has a form as follows: <UXXXX> \xXX.. |0 # comment ^ ^ ^ | | +- Fallback flag | +-------- Encoded byte sequence +-------------- Unicode Character ID in hex The format is roughly the same as a header section except for the fallback flag: | followed by 0..3. The meaning of the possible values is as follows: |0 Round trip safe. A character decoded to Unicode encodes back to the same byte sequence. Most characters have this flag. |1 Fallback for unicode -> encoding. When seen, enc2xs adds this character for the encode map only. |2 Skip sub-char mapping should there be no code point. |3 Fallback for encoding -> unicode. When seen, enc2xs adds this character for the decode map only. • And finally, END OF CHARMAP ends the section. When you are manually creating a UCM file, you should copy ascii.ucm or an existing encoding which is close to yours, rather than write your own from scratch. When you do so, make sure you leave at least U0000 to U0020 as is, unless your environment is EBCDIC. CAVEAT: not all features in UCM are implemented. For example, icu:state is not used. Because of that, you need to write a perl module if you want to support algorithmical encodings, notably the ISO-2022 series. Such modules include Encode::JP::2022_JP, Encode::KR::2022_KR, and Encode::TW::HZ. Coping with duplicate mappings When you create a map, you SHOULD make your mappings round-trip safe. That is, encode('your-encoding', decode('your-encoding', $data)) eq $data stands for all characters that are marked as "|0". Here is how to make sure: • Sort your map in Unicode order. • When you have a duplicate entry, mark either one with '|1' or '|3'. • And make sure the '|1' or '|3' entry FOLLOWS the '|0' entry. 
Here is an example from big5-eten. <U2550> \xF9\xF9 |0 <U2550> \xA2\xA4 |3 Internally Encoding -> Unicode and Unicode -> Encoding Map looks like this; E to U U to E -------------------------------------- \xF9\xF9 => U2550 U2550 => \xF9\xF9 \xA2\xA4 => U2550 So it is round-trip safe for \xF9\xF9. But if the line above is upside down, here is what happens. E to U U to E -------------------------------------- \xA2\xA4 => U2550 U2550 => \xF9\xF9 (\xF9\xF9 => U2550 is now overwritten!) The Encode package comes with ucmlint, a crude but sufficient utility to check the integrity of a UCM file. Check under the Encode/bin directory for this. When in doubt, you can use ucmsort, yet another utility under Encode/bin directory. Bookmarks • ICU Home Page <http://www.icu-project.org/> • ICU Character Mapping Tables <http://site.icu-project.org/charts/charset> • ICU:Conversion Data <http://www.icu-project.org/userguide/conversion-data.html> SEE ALSO Encode, perlmod, perlpod perl v5.38.2 2023-11-28 ENC2XS(1)
|
enc2xs -- Perl Encode Module Generator
|
enc2xs -[options] enc2xs -M ModName mapfiles... enc2xs -C
| null | null |
git-cvsserver
|
This application is a CVS emulation layer for Git. It is highly functional. However, not all methods are implemented, and for those methods that are implemented, not all switches are implemented. Testing has been done using both the CLI CVS client, and the Eclipse CVS plugin. Most functionality works fine with both of these clients.
|
git-cvsserver - A CVS server emulator for Git
|
SSH: export CVS_SERVER="git cvsserver" cvs -d :ext:user@server/path/repo.git co <HEAD_name> pserver (/etc/inetd.conf): cvspserver stream tcp nowait nobody /usr/bin/git-cvsserver git-cvsserver pserver Usage: git-cvsserver [<options>] [pserver|server] [<directory> ...]
|
All these options obviously only make sense if enforced by the server side. They have been implemented to resemble the git-daemon(1) options as closely as possible. --base-path <path> Prepend path to requested CVSROOT --strict-paths Don’t allow recursing into subdirectories --export-all Don’t check for gitcvs.enabled in config. You also have to specify a list of allowed directories (see below) if you want to use this option. -V, --version Print version information and exit -h, -H, --help Print usage information and exit <directory> The remaining arguments provide a list of directories. If no directories are given, then all are allowed. Repositories within these directories still require the gitcvs.enabled config option, unless --export-all is specified. LIMITATIONS CVS clients cannot tag, branch or perform Git merges. git-cvsserver maps Git branches to CVS modules. This is very different from what most CVS users would expect since in CVS modules usually represent one or more directories. INSTALLATION 1. If you are going to offer CVS access via pserver, add a line in /etc/inetd.conf like cvspserver stream tcp nowait nobody git-cvsserver pserver Note: Some inetd servers let you specify the name of the executable independently of the value of argv[0] (i.e. the name the program assumes it was executed with). In this case the correct line in /etc/inetd.conf looks like cvspserver stream tcp nowait nobody /usr/bin/git-cvsserver git-cvsserver pserver Only anonymous access is provided by pserver by default. 
To commit you will have to create pserver accounts; simply add a gitcvs.authdb setting in the config file of the repositories you want the cvsserver to allow writes to, for example: [gitcvs] authdb = /etc/cvsserver/passwd The format of these files is username followed by the encrypted password, for example: myuser:sqkNi8zPf01HI myuser:$1$9K7FzU28$VfF6EoPYCJEYcVQwATgOP/ myuser:$5$.NqmNH1vwfzGpV8B$znZIcumu1tNLATgV2l6e1/mY8RzhUDHMOaVOeL1cxV3 You can use the htpasswd facility that comes with Apache to make these files, but only with the -d option (or -B if your system supports it). Preferably use the system specific utility that manages password hash creation in your platform (e.g. mkpasswd in Linux, encrypt in OpenBSD or pwhash in NetBSD) and paste it in the right location. Then provide your password via the pserver method, for example: cvs -d:pserver:someuser:somepassword@server:/path/repo.git co <HEAD_name> No special setup is needed for SSH access, other than having Git tools in the PATH. If you have clients that do not accept the CVS_SERVER environment variable, you can rename git-cvsserver to cvs. Note: Newer CVS versions (>= 1.12.11) also support specifying CVS_SERVER directly in CVSROOT like cvs -d ":ext;CVS_SERVER=git cvsserver:user@server/path/repo.git" co <HEAD_name> This has the advantage that it will be saved in your CVS/Root files and you don’t need to worry about always setting the correct environment variable. SSH users restricted to git-shell don’t need to override the default with CVS_SERVER (and shouldn’t) as git-shell understands cvs to mean git-cvsserver and pretends that the other end runs the real cvs better. 2. For each repo that you want accessible from CVS you need to edit config in the repo and add the following section. [gitcvs] enabled=1 # optional for debugging logFile=/path/to/logfile Note: you need to ensure each user that is going to invoke git-cvsserver has write access to the log file and to the database (see Database Backend). 
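The per-repository [gitcvs] section from step 2 can also be written with git config instead of editing the file by hand (a sketch; the repository path and log file location are placeholders created in a temporary directory here):

```shell
tmp=$(mktemp -d)
git init -q --bare "$tmp/project.git"               # cvs commit requires a bare repository
git -C "$tmp/project.git" config gitcvs.enabled 1
git -C "$tmp/project.git" config gitcvs.logFile "$tmp/gitcvs.log"
git -C "$tmp/project.git" config gitcvs.enabled     # prints 1
```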
If you want to offer write access over SSH, the users of course also need write access to the Git repository itself.

You also need to ensure that each repository is "bare" (without a Git index file) for cvs commit to work. See gitcvs-migration(7).

All configuration variables can also be overridden for a specific method of access. Valid method names are "ext" (for SSH access) and "pserver". The following example configuration would disable pserver access while still allowing access over SSH.

    [gitcvs]
        enabled=0

    [gitcvs "ext"]
        enabled=1

3. If you didn’t specify the CVSROOT/CVS_SERVER directly in the checkout command, automatically saving it in your CVS/Root files, then you need to set them explicitly in your environment. CVSROOT should be set as per normal, but the directory should point at the appropriate Git repo. As above, for SSH clients not restricted to git-shell, CVS_SERVER should be set to git-cvsserver.

    export CVSROOT=:ext:user@server:/var/git/project.git
    export CVS_SERVER="git cvsserver"

4. For SSH clients that will make commits, make sure their server-side .ssh/environment files (or .bashrc, etc., according to their specific shell) export appropriate values for GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, GIT_COMMITTER_NAME, and GIT_COMMITTER_EMAIL. For SSH clients whose login shell is bash, .bashrc may be a reasonable alternative.

5. Clients should now be able to check out the project. Use the CVS module name to indicate what Git head you want to check out. This also sets the name of your newly checked-out directory, unless you tell it otherwise with -d <dir_name>. For example, this checks out the master branch into the project-master directory:

    cvs co -d project-master master

DATABASE BACKEND

git-cvsserver uses one database per Git head (i.e. CVS module) to store information about the repository to maintain consistent CVS revision numbers. The database needs to be updated (i.e. written to) after every commit.
If the commit is done directly by using git (as opposed to using git-cvsserver), the update will need to happen on the next repository access by git-cvsserver, independent of access method and requested operation.

That means that even if you offer only read access (e.g. by using the pserver method), git-cvsserver should have write access to the database to work reliably (otherwise you need to make sure that the database is up to date any time git-cvsserver is executed).

By default it uses SQLite databases in the Git directory, named gitcvs.<module_name>.sqlite. Note that the SQLite backend creates temporary files in the same directory as the database file on write, so it might not be enough to grant the users using git-cvsserver write access to the database file without granting them write access to the directory, too.

The database cannot be reliably regenerated in a consistent form after the branch it is tracking has changed. Example: for merged branches, git-cvsserver only tracks one branch of development, and after a git merge an incrementally updated database may track a different branch than a database regenerated from scratch, causing inconsistent CVS revision numbers. git-cvsserver has no way of knowing which branch it would have picked if it had been run incrementally pre-merge. So if you have to fully or partially (from an old backup) regenerate the database, you should be suspicious of pre-existing CVS sandboxes.

You can configure the database backend with the following configuration variables:

Configuring database backend

git-cvsserver uses the Perl DBI module. Please also read its documentation if changing these variables, especially about DBI->connect().

gitcvs.dbName
    Database name. The exact meaning depends on the selected database driver; for SQLite this is a filename. Supports variable substitution (see below). May not contain semicolons (;). Default: %Ggitcvs.%m.sqlite

gitcvs.dbDriver
    Used DBI driver.
You can specify any available driver for this here, but it might not work. cvsserver is tested with DBD::SQLite, reported to work with DBD::Pg, and reported not to work with DBD::mysql. Please regard this as an experimental feature. May not contain colons (:). Default: SQLite

gitcvs.dbUser
    Database user. Only useful if setting dbDriver, since SQLite has no concept of database users. Supports variable substitution (see below).

gitcvs.dbPass
    Database password. Only useful if setting dbDriver, since SQLite has no concept of database passwords.

gitcvs.dbTableNamePrefix
    Database table name prefix. Supports variable substitution (see below). Any non-alphabetic characters will be replaced with underscores.

All variables can also be set per access method; see above.

Variable substitution

In dbDriver and dbUser you can use the following variables:

%G
    Git directory name

%g
    Git directory name, where all characters except for alphanumeric ones, ., and - are replaced with _ (this should make it easier to use the directory name in a filename if wanted)

%m
    CVS module/Git head name

%a
    access method (one of "ext" or "pserver")

%u
    Name of the user running git-cvsserver. If no name can be determined, the numeric uid is used.

ENVIRONMENT

These variables obviate the need for command-line options in some circumstances, allowing easier restricted usage through git-shell.

GIT_CVSSERVER_BASE_PATH
    This variable replaces the argument to --base-path.

GIT_CVSSERVER_ROOT
    This variable specifies a single directory, replacing the <directory>... argument list. The repository still requires the gitcvs.enabled config option, unless --export-all is specified.

When these environment variables are set, the corresponding command-line arguments may not be used.

ECLIPSE CVS CLIENT NOTES

To get a checkout with the Eclipse CVS client:

1. Select "Create a new project → From CVS checkout".

2. Create a new location. See the notes below for details on how to choose the right protocol.

3.
Browse the modules available. This gives you a list of the heads in the repository. You will not be able to browse the tree from there; only the heads are shown.

4. Pick HEAD when it asks what branch/tag to check out. Untick the "launch commit wizard" option to avoid committing the .project file.

Protocol notes: If you are using anonymous access via pserver, just select that. Those using SSH access should choose the ext protocol, and configure ext access on the Preferences→Team→CVS→ExtConnection pane. Set CVS_SERVER to "git cvsserver". Note that password support is not good when using ext; you will definitely want to have SSH keys set up. Alternatively, you can just use the non-standard extssh protocol that Eclipse offers. In that case CVS_SERVER is ignored, and you will have to replace the cvs utility on the server with git-cvsserver, or manipulate your .bashrc so that calling cvs effectively calls git-cvsserver.

CLIENTS KNOWN TO WORK

• CVS 1.12.9 on Debian
• CVS 1.11.17 on MacOSX (from the Fink package)
• Eclipse 3.0, 3.1.2 on MacOSX (see Eclipse CVS Client Notes)
• TortoiseCVS

OPERATIONS SUPPORTED

All the operations required for normal use are supported, including checkout, diff, status, update, log, add, remove, and commit. Most CVS command arguments that read CVS tags or revision numbers (typically -r) work, and also support any git refspec (tag, branch, commit ID, etc.). However, CVS revision numbers for non-default branches are not well emulated, and cvs log does not show tags or branches at all. (Non-main-branch CVS revision numbers superficially resemble CVS revision numbers, but they actually encode a git commit ID directly, rather than representing the number of revisions since the branch point.)

Note that there are two ways to check out a particular branch. As described elsewhere on this page, the "module" parameter of cvs checkout is interpreted as a branch name, and it becomes the main branch.
It remains the main branch for a given sandbox even if you temporarily make another branch sticky with cvs update -r. Alternatively, the -r argument can indicate some other branch to actually check out, even though the module is still the "main" branch. Tradeoffs (as currently implemented): each new "module" creates a new database on disk with a history for the given module, and after the database is created, operations against that main branch are fast. Alternatively, -r doesn’t take any extra disk space, but may be significantly slower for many operations, like cvs update.

If you want to refer to a git refspec that has characters that are not allowed by CVS, you have two options. First, it may just work to supply the git refspec directly to the appropriate CVS -r argument; some CVS clients don’t seem to do much sanity checking of the argument. Second, if that fails, you can use a special character escape mechanism that only uses characters that are valid in CVS tags. A sequence of 4 or 5 characters of the form (underscore ("_"), dash ("-"), one or two characters, and dash ("-")) can encode various characters based on the one or two letters: "s" for slash ("/"), "p" for period ("."), "u" for underscore ("_"), or two hexadecimal digits for any byte value at all (typically an ASCII number, or perhaps a part of a UTF-8 encoded character).

Legacy monitoring operations are not supported (edit, watch and related). Exports and tagging (tags and branches) are not supported at this stage.

CRLF Line Ending Conversions

By default the server leaves the -k mode blank for all files, which causes the CVS client to treat them as text files, subject to end-of-line conversion on some platforms. You can make the server use the end-of-line conversion attributes to set the -k modes for files by setting the gitcvs.usecrlfattr config variable. See gitattributes(5) for more information about end-of-line conversion.
Alternatively, if the gitcvs.usecrlfattr config is not enabled or the attributes do not allow automatic detection for a filename, then the server uses the gitcvs.allBinary config for the default setting. If gitcvs.allBinary is set, then files not otherwise specified will default to -kb mode; otherwise the -k mode is left blank. But if gitcvs.allBinary is set to "guess", then the correct -k mode will be guessed based on the contents of the file.

For best consistency with cvs, it is probably best to override the defaults by setting gitcvs.usecrlfattr to true, and gitcvs.allBinary to "guess".

DEPENDENCIES

git-cvsserver depends on DBD::SQLite.

GIT

Part of the git(1) suite

Git 2.41.0                       2023-06-01                  GIT-CVSSERVER(1)
odbc_config
NAME
    odbc_config - generates compiler information intended for use when developing a unixODBC client

SYNOPSIS
    odbc_config [--prefix] [--exec-prefix] [--include-prefix] [--lib-prefix] [--bin-prefix] [--version] [--libs] [--static-libs] [--libtool-libs] [--cflags] [--odbcversion] [--odbcini] [--odbcinstini] [--header] [--ulen]

DESCRIPTION
    odbc_config provides information about how unixODBC was compiled for your system and architecture. The information generated is useful for building unixODBC clients and similar programs.

OPTIONS
    --prefix
        Prefix for architecture-independent files.
    --exec-prefix
        Prefix for architecture-dependent files.
    --include-prefix
        Directory containing C header files for unixODBC.
    --lib-prefix
        Directory containing unixODBC shared libraries.
    --bin-prefix
        Directory containing unixODBC utilities.
    --version
        Current version of unixODBC.
    --libs
        Compiler flags for linking dynamic libraries.
    --static-libs
        Absolute file name of the unixODBC static library (libodbc.a).
    --libtool-libs
        Absolute file name of the unixODBC libtool library (libodbc.la).
    --cflags
        Outputs compiler flags to find header files, as well as critical compiler flags and defines used when compiling the libodbc library.
    --odbcversion
        Version of the ODBC specification used by unixODBC.
    --odbcini
        Absolute file name of the system-wide DSN configuration file odbc.ini.
    --odbcinstini
        Absolute file name of the unixODBC driver configuration file odbcinst.ini.
    --header
        Definitions of C preprocessor constants used by unixODBC. Generated output can be piped into a C header file.
    --ulen
        Compiler flag that defines SIZEOF_SQLULEN.

SEE ALSO
    unixODBC(7), odbcinst.ini(5), odbc.ini(5), "The unixODBC Administrator Manual (HTML)"

AUTHORS
    The authors of unixODBC are Peter Harvey <pharvey@codebydesign.com> and Nick Gorham <nick@lurcher.org>. For a full list of contributors, refer to the AUTHORS file.

COPYRIGHT
    unixODBC is licensed under the GNU Lesser General Public License. For details about the license, see the COPYING file.

version 2.3.12                Thu 07 Jan 2021                  odbc_config(1)
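As an illustration, a client build can pull its compile and link flags from odbc_config. This is a sketch: the source file name myclient.c is hypothetical, and the fallback flags are only used when odbc_config is not on the PATH.

```shell
# Compose a compile command from odbc_config output. If odbc_config
# is unavailable, fall back to conventional flags so the command is
# still printed for inspection.
CFLAGS=$(odbc_config --cflags 2>/dev/null || echo '-I/usr/local/include')
LIBS=$(odbc_config --libs 2>/dev/null || echo '-lodbc')

# Print the command rather than running it, so the flags can be checked.
echo cc $CFLAGS -o myclient myclient.c $LIBS
```

The same pattern works in a Makefile, e.g. `CFLAGS += $(shell odbc_config --cflags)`.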
lzdiff
NAME
    xzcmp, xzdiff, lzcmp, lzdiff - compare compressed files

SYNOPSIS
    xzcmp [option...] file1 [file2]
    xzdiff ...
    lzcmp ...
    lzdiff ...

DESCRIPTION
    xzcmp and xzdiff compare the uncompressed contents of two files. Uncompressed data and options are passed to cmp(1) or diff(1) unless --help or --version is specified.

    If both file1 and file2 are specified, they can be uncompressed files or files in formats that xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) can decompress. The required decompression commands are determined from the filename suffixes of file1 and file2. A file with an unknown suffix is assumed to be either uncompressed or in a format that xz(1) can decompress.

    If only one filename is provided, file1 must have a suffix of a supported compression format, and the name for file2 is assumed to be file1 with the compression format suffix removed.

    The commands lzcmp and lzdiff are provided for backward compatibility with LZMA Utils.

EXIT STATUS
    If a decompression error occurs, the exit status is 2. Otherwise the exit status of cmp(1) or diff(1) is used.

SEE ALSO
    cmp(1), diff(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1)

Tukaani                          2024-02-13                        XZDIFF(1)
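The one-argument form can be reproduced by hand: for file1 = foo.txt.xz, the .xz suffix selects xz(1) for decompression, and file2 defaults to foo.txt. The following sketch (using a temporary directory and assuming xz(1) is installed) shows the pipeline that `xzcmp foo.txt.xz` effectively runs:

```shell
# Set up a sample file and its xz-compressed copy.
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/foo.txt"
xz -k "$tmp/foo.txt"          # -k keeps foo.txt and writes foo.txt.xz

# Decompress to stdout and compare against the implied file2.
xz -dc "$tmp/foo.txt.xz" | cmp -s - "$tmp/foo.txt" && echo identical

rm -rf "$tmp"
```

Because cmp -s reports equality here, the exit status is 0, just as `xzcmp foo.txt.xz` would report for an intact compressed copy.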