XZDIFF(1)

NAME
    xzcmp, xzdiff, lzcmp, lzdiff - compare compressed files

SYNOPSIS
    xzcmp [option...] file1 [file2]
    xzdiff ...
    lzcmp ...
    lzdiff ...

DESCRIPTION
    xzcmp and xzdiff compare the uncompressed contents of two files. Uncompressed data and options are passed to cmp(1) or diff(1) unless --help or --version is specified.

    If both file1 and file2 are specified, they can be uncompressed files or files in formats that xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) can decompress. The required decompression commands are determined from the filename suffixes of file1 and file2. A file with an unknown suffix is assumed to be either uncompressed or in a format that xz(1) can decompress.

    If only one filename is provided, file1 must have a suffix of a supported compression format and the name for file2 is assumed to be file1 with the compression format suffix removed.

    The commands lzcmp and lzdiff are provided for backward compatibility with LZMA Utils.

EXIT STATUS
    If a decompression error occurs, the exit status is 2. Otherwise the exit status of cmp(1) or diff(1) is used.

SEE ALSO
    cmp(1), diff(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1)

Tukaani                          2024-02-13                         XZDIFF(1)
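The suffix-driven dispatch described above can be sketched with Python's standard-library compression modules. This is an illustrative approximation, not the actual xzcmp shell script; lzop(1), zstd(1), and lz4(1) are omitted because the standard library has no codecs for them, and the helper names are hypothetical.

```python
import bz2
import gzip
import lzma
from pathlib import Path

# Map filename suffixes to decompression openers, as xzcmp does.
# (Subset only: lzop/zstd/lz4 have no stdlib support.)
OPENERS = {".xz": lzma.open, ".lzma": lzma.open,
           ".gz": gzip.open, ".bz2": bz2.open}

def uncompressed_bytes(path):
    """Return the uncompressed contents of path, choosing the
    decompressor from the filename suffix."""
    opener = OPENERS.get(Path(path).suffix, lzma.open)  # unknown suffix: try xz
    try:
        with opener(path, "rb") as f:
            return f.read()
    except (lzma.LZMAError, OSError):
        # Unknown suffix and not xz-decompressible: assume uncompressed.
        with open(path, "rb") as f:
            return f.read()

def cmp_compressed(file1, file2):
    """Return 0 if the uncompressed contents match, 1 otherwise,
    mirroring cmp(1) exit statuses."""
    return 0 if uncompressed_bytes(file1) == uncompressed_bytes(file2) else 1
```

Like the real tool, equality is judged on the decompressed data, so an .xz file and a .gz file with identical contents compare equal.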
GIFTOOL(1)

NAME
    giftool - GIF transformation tool

SYNOPSIS
    giftool [-a aspect] [-b bgcolor] [-d delaytime] [-f format] [-i interlacing] [-n imagelist] [-p left,top] [-s width,height] [-t transcolor] [-u user-input-flag] [-x disposal] [-z sort-flag]

DESCRIPTION
    A filter for transforming GIFs. With no options, it is an expensive way to copy a GIF from standard input to standard output. Options specify filtering operations and are performed in the order given on the command line.

    The -n option selects images, allowing the tool to act on a subset of images in a multi-image GIF. It takes a comma-separated list of decimal integers, interpreted as 1-origin image indices; these are the images that will be acted on. If no -n option is specified, the tool will select and transform all images.

    The -f option accepts a printf-style format string and substitutes into it the values of image-descriptor and graphics-control fields. The string is formatted and output once for each selected image. Normal C-style escapes \b, \f, \n, \r, \t, \v, and \xNN are interpreted; also \e produces ESC (ASCII 0x1b). The following format cookies are substituted:

    %a    Pixel aspect byte.
    %b    Screen background color.
    %d    Image delay time.
    %h    Image height (y dimension).
    %n    Image index.
    %p    Image position (as an x,y pair).
    %s    Screen size (as an x,y pair).
    %t    Image transparent-color index.
    %u    Image user-input flag (boolean).
    %v    GIF version string.
    %w    Image width (x dimension).
    %x    Image GIF89 disposal mode.
    %z    Image's color table sort flag (boolean, false if no local color map).

    Boolean substitutions may take a prefix to modify how they are displayed:

    1    "1" or "0"
    o    "on" or "off"
    t    "t" or "f"
    y    "yes" or "no"

    Thus, for example, "%oz" displays image sort flags using the strings "on" and "off". The default with no prefix is numeric.

    The -a option takes an unsigned decimal integer argument and uses it to set the aspect-ratio byte in the logical screen descriptor block.

    The -b option takes an unsigned decimal integer argument and uses it to set the (0-origin) background color index in the logical screen descriptor block.

    The -d option takes a decimal integer argument and uses it to set a delay time, in hundredths of a second, on selected images.

    The -i option sets or clears interlacing in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f".

    The -p option takes a (0-origin) x,y coordinate pair and sets it as the preferred upper-left-corner coordinates of selected images.

    The -s option takes a (0-origin) x,y coordinate pair and sets it as the expected display screen size.

    The -t option takes a decimal integer argument and uses it to set the (0-origin) index of the transparency color in selected images.

    The -u option sets or clears the user-input flag in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f".

    The -x option takes a decimal integer argument and uses it to set the GIF89 disposal mode in selected images.

    The -z option sets or clears the color-table sort flag in selected images. Acceptable arguments are "1", "0", "yes", "no", "on", "off", "t", "f".

    Note that the -a, -b, -p, -s, and -z options are included to complete the ability to modify all fields defined in the GIF standard, but should have no effect on how an image renders in browsers or modern viewers.

AUTHOR
    Eric S. Raymond.

GIFLIB                          3 June 2012                        GIFTOOL(1)
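The logical screen descriptor fields that -a, -b, -s, and -z manipulate sit at fixed offsets in the GIF header, so they can be read with a few lines of Python. This is an illustrative parser sketch, not part of giftool; the function name is hypothetical.

```python
import struct

def screen_descriptor(gif_bytes):
    """Parse the GIF logical screen descriptor: the block whose fields
    giftool's -a, -b, -s, and -z options modify."""
    sig = gif_bytes[:6]
    assert sig in (b"GIF87a", b"GIF89a"), "not a GIF"
    # 6 bytes signature, then: u16 width, u16 height, packed byte,
    # background color index, pixel aspect byte (all little-endian).
    width, height, packed, bgcolor, aspect = struct.unpack("<HHBBB", gif_bytes[6:13])
    return {
        "version": sig[3:].decode(),        # what %v reports
        "screen_size": (width, height),     # what -s sets, %s reports
        "background_index": bgcolor,        # what -b sets, %b reports
        "aspect_byte": aspect,              # what -a sets, %a reports
        "sort_flag": bool(packed & 0x08),   # sort flag bit (-z / %z)
    }
```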
XZ(1)

DESCRIPTION
    xz is a general-purpose data compression tool with command line syntax similar to gzip(1) and bzip2(1). The native file format is the .xz format, but the legacy .lzma format used by LZMA Utils and raw compressed streams with no container format headers are also supported. In addition, decompression of the .lz format used by lzip is supported.

    xz compresses or decompresses each file according to the selected operation mode. If no files are given or file is -, xz reads from standard input and writes the processed data to standard output. xz will refuse (display an error and skip the file) to write compressed data to standard output if it is a terminal. Similarly, xz will refuse to read compressed data from standard input if it is a terminal.

    Unless --stdout is specified, files other than - are written to a new file whose name is derived from the source file name:

    β€’ When compressing, the suffix of the target file format (.xz or .lzma) is appended to the source filename to get the target filename.

    β€’ When decompressing, the .xz, .lzma, or .lz suffix is removed from the filename to get the target filename. xz also recognizes the suffixes .txz and .tlz, and replaces them with the .tar suffix.

    If the target file already exists, an error is displayed and the file is skipped. Unless writing to standard output, xz will display a warning and skip the file if any of the following applies:

    β€’ File is not a regular file. Symbolic links are not followed, and thus they are not considered to be regular files.

    β€’ File has more than one hard link.

    β€’ File has the setuid, setgid, or sticky bit set.

    β€’ The operation mode is set to compress and the file already has a suffix of the target file format (.xz or .txz when compressing to the .xz format, and .lzma or .tlz when compressing to the .lzma format).

    β€’ The operation mode is set to decompress and the file doesn't have a suffix of any of the supported file formats (.xz, .txz, .lzma, .tlz, or .lz).
    After successfully compressing or decompressing the file, xz copies the owner, group, permissions, access time, and modification time from the source file to the target file. If copying the group fails, the permissions are modified so that the target file doesn't become accessible to users who didn't have permission to access the source file. xz doesn't support copying other metadata like access control lists or extended attributes yet.

    Once the target file has been successfully closed, the source file is removed unless --keep was specified. The source file is never removed if the output is written to standard output or if an error occurs.

    Sending SIGINFO or SIGUSR1 to the xz process makes it print progress information to standard error. This has only limited use since when standard error is a terminal, using --verbose will display an automatically updating progress indicator.

Memory usage

    The memory usage of xz varies from a few hundred kilobytes to several gigabytes depending on the compression settings. The settings used when compressing a file determine the memory requirements of the decompressor. Typically the decompressor needs 5 % to 20 % of the amount of memory that the compressor needed when creating the file. For example, decompressing a file created with xz -9 currently requires 65 MiB of memory. Still, it is possible to have .xz files that require several gigabytes of memory to decompress.

    Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. While some operating systems provide ways to limit the memory usage of processes, relying on it wasn't deemed to be flexible enough (for example, using ulimit(1) to limit virtual memory tends to cripple mmap(2)). The memory usage limiter can be enabled with the command line option --memlimit=limit.
    Often it is more convenient to enable the limiter by default by setting the environment variable XZ_DEFAULTS, for example, XZ_DEFAULTS=--memlimit=150MiB.

    It is possible to set the limits separately for compression and decompression by using --memlimit-compress=limit and --memlimit-decompress=limit. Using these two options outside XZ_DEFAULTS is rarely useful because a single run of xz cannot do both compression and decompression and --memlimit=limit (or -M limit) is shorter to type on the command line.

    If the specified memory usage limit is exceeded when decompressing, xz will display an error and decompressing the file will fail. If the limit is exceeded when compressing, xz will try to scale the settings down so that the limit is no longer exceeded (except when using --format=raw or --no-adjust). This way the operation won't fail unless the limit is very small. The scaling of the settings is done in steps that don't match the compression level presets, for example, if the limit is only slightly less than the amount required for xz -9, the settings will be scaled down only a little, not all the way down to xz -8.

Concatenation and padding with .xz files

    It is possible to concatenate .xz files as is. xz will decompress such files as if they were a single .xz file.

    It is possible to insert padding between the concatenated parts or after the last part. The padding must consist of null bytes and the size of the padding must be a multiple of four bytes. This can be useful, for example, if the .xz file is stored on a medium that measures file sizes in 512-byte blocks.

    Concatenation and padding are not allowed with .lzma files or raw streams.
NAME
    xz, unxz, xzcat, lzma, unlzma, lzcat - Compress or decompress .xz and .lzma files
SYNOPSIS
    xz [option...] [file...]

COMMAND ALIASES
    unxz is equivalent to xz --decompress.
    xzcat is equivalent to xz --decompress --stdout.
    lzma is equivalent to xz --format=lzma.
    unlzma is equivalent to xz --format=lzma --decompress.
    lzcat is equivalent to xz --format=lzma --decompress --stdout.

    When writing scripts that need to decompress files, it is recommended to always use the name xz with appropriate arguments (xz -d or xz -dc) instead of the names unxz and xzcat.
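The container formats behind these aliases can be illustrated with Python's standard-library lzma module. This is an approximation of the CLI behaviour, not the tools themselves:

```python
import lzma

data = b"example data"

# Default `xz` behaviour: the .xz container format.
xz_blob = lzma.compress(data, format=lzma.FORMAT_XZ)
assert xz_blob.startswith(b"\xfd7zXZ\x00")   # .xz magic bytes

# `lzma`, `unlzma`, and `lzcat` correspond to --format=lzma,
# i.e. the legacy .lzma ("alone") container.
alone_blob = lzma.compress(data, format=lzma.FORMAT_ALONE)

# Decompression auto-detects either container, like `xz -d`.
assert lzma.decompress(xz_blob) == data
assert lzma.decompress(alone_blob) == data
```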
Integer suffixes and special values

    In most places where an integer argument is expected, an optional suffix is supported to easily indicate large integers. There must be no space between the integer and the suffix.

    KiB    Multiply the integer by 1,024 (2^10). Ki, k, kB, K, and KB are accepted as synonyms for KiB.

    MiB    Multiply the integer by 1,048,576 (2^20). Mi, m, M, and MB are accepted as synonyms for MiB.

    GiB    Multiply the integer by 1,073,741,824 (2^30). Gi, g, G, and GB are accepted as synonyms for GiB.

    The special value max can be used to indicate the maximum integer value supported by the option.

Operation mode

    If multiple operation mode options are given, the last one takes effect.

    -z, --compress
        Compress. This is the default operation mode when no operation mode option is specified and no other operation mode is implied from the command name (for example, unxz implies --decompress).

    -d, --decompress, --uncompress
        Decompress.

    -t, --test
        Test the integrity of compressed files. This option is equivalent to --decompress --stdout except that the decompressed data is discarded instead of being written to standard output. No files are created or removed.

    -l, --list
        Print information about compressed files. No uncompressed output is produced, and no files are created or removed. In list mode, the program cannot read the compressed data from standard input or from other unseekable sources.

        The default listing shows basic information about files, one file per line. To get more detailed information, use also the --verbose option. For even more information, use --verbose twice, but note that this may be slow, because getting all the extra information requires many seeks. The width of verbose output exceeds 80 characters, so piping the output to, for example, less -S may be convenient if the terminal isn't wide enough. The exact output may vary between xz versions and different locales. For machine-readable output, --robot --list should be used.
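The suffix table above can be mirrored in a small hypothetical helper; this sketch covers only the listed synonyms, not every detail of xz's parser:

```python
# Suffix multipliers from the xz man page's table of synonyms.
SUFFIXES = {
    "KiB": 2**10, "Ki": 2**10, "k": 2**10, "kB": 2**10, "K": 2**10, "KB": 2**10,
    "MiB": 2**20, "Mi": 2**20, "m": 2**20, "M": 2**20, "MB": 2**20,
    "GiB": 2**30, "Gi": 2**30, "g": 2**30, "G": 2**30, "GB": 2**30,
}

def parse_size(text):
    """Parse an integer with an optional xz-style size suffix,
    e.g. '150MiB' -> 157286400. No space is allowed before the suffix."""
    # Try the longest suffixes first so 'MiB' wins over 'M'.
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if text.endswith(suf):
            return int(text[:-len(suf)]) * SUFFIXES[suf]
    return int(text)
```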
Operation modifiers

    -k, --keep
        Don't delete the input files. Since xz 5.2.6, this option also makes xz compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file. In earlier versions this was only done with --force.

    -f, --force
        This option has several effects:

        β€’ If the target file already exists, delete it before compressing or decompressing.

        β€’ Compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file.

        β€’ When used with --decompress --stdout and xz cannot recognize the type of the source file, copy the source file as is to standard output. This allows xzcat --force to be used like cat(1) for files that have not been compressed with xz. Note that in the future, xz might support new compressed file formats, which may make xz decompress more types of files instead of copying them as is to standard output. --format=format can be used to restrict xz to decompress only a single file format.

    -c, --stdout, --to-stdout
        Write the compressed or decompressed data to standard output instead of a file. This implies --keep.

    --single-stream
        Decompress only the first .xz stream, and silently ignore possible remaining input data following the stream. Normally such trailing garbage makes xz display an error. xz never decompresses more than one stream from .lzma files or raw streams, but this option still makes xz ignore the possible trailing data after the .lzma file or raw stream. This option has no effect if the operation mode is not --decompress or --test.

    --no-sparse
        Disable creation of sparse files. By default, if decompressing into a regular file, xz tries to make the file sparse if the decompressed data contains long sequences of binary zeros. It also works when writing to standard output as long as standard output is connected to a regular file and certain additional conditions are met to make it safe. Creating sparse files may save disk space and speed up the decompression by reducing the amount of disk I/O.

    -S .suf, --suffix=.suf
        When compressing, use .suf as the suffix for the target file instead of .xz or .lzma. If not writing to standard output and the source file already has the suffix .suf, a warning is displayed and the file is skipped.

        When decompressing, recognize files with the suffix .suf in addition to files with the .xz, .txz, .lzma, .tlz, or .lz suffix. If the source file has the suffix .suf, the suffix is removed to get the target filename.

        When compressing or decompressing raw streams (--format=raw), the suffix must always be specified unless writing to standard output, because there is no default suffix for raw streams.

    --files[=file]
        Read the filenames to process from file; if file is omitted, filenames are read from standard input. Filenames must be terminated with the newline character. A dash (-) is taken as a regular filename; it doesn't mean standard input. If filenames are given also as command line arguments, they are processed before the filenames read from file.

    --files0[=file]
        This is identical to --files[=file] except that each filename must be terminated with the null character.

Basic file format and compression options

    -F format, --format=format
        Specify the file format to compress or decompress:

        auto
            This is the default. When compressing, auto is equivalent to xz. When decompressing, the format of the input file is automatically detected. Note that raw streams (created with --format=raw) cannot be auto-detected.

        xz
            Compress to the .xz file format, or accept only .xz files when decompressing.

        lzma, alone
            Compress to the legacy .lzma file format, or accept only .lzma files when decompressing.
            The alternative name alone is provided for backwards compatibility with LZMA Utils.

        lzip
            Accept only .lz files when decompressing. Compression is not supported.

            The .lz format version 0 and the unextended version 1 are supported. Version 0 files were produced by lzip 1.3 and older. Such files aren't common but may be found in file archives, as a few source packages were released in this format. People might have old personal files in this format too. Decompression support for the format version 0 was removed in lzip 1.18.

            lzip 1.4 and later create files in the format version 1. The sync flush marker extension to the format version 1 was added in lzip 1.6. This extension is rarely used and isn't supported by xz (diagnosed as corrupt input).

        raw
            Compress or uncompress a raw stream (no headers). This is meant for advanced users only. To decode raw streams, you need to use --format=raw and explicitly specify the filter chain, which normally would have been stored in the container headers.

    -C check, --check=check
        Specify the type of the integrity check. The check is calculated from the uncompressed data and stored in the .xz file. This option has an effect only when compressing into the .xz format; the .lzma format doesn't support integrity checks. The integrity check (if any) is verified when the .xz file is decompressed.

        Supported check types:

        none
            Don't calculate an integrity check at all. This is usually a bad idea. This can be useful when integrity of the data is verified by other means anyway.

        crc32
            Calculate CRC32 using the polynomial from IEEE-802.3 (Ethernet).

        crc64
            Calculate CRC64 using the polynomial from ECMA-182. This is the default, since it is slightly better than CRC32 at detecting damaged files and the speed difference is negligible.

        sha256
            Calculate SHA-256. This is somewhat slower than CRC32 and CRC64.

        Integrity of the .xz headers is always verified with CRC32. It is not possible to change or disable it.
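Python's standard-library lzma module exposes the same four check types as constants, which makes it easy to confirm that each round-trips. An illustrative sketch; the constant names are the stdlib's, not xz command line options:

```python
import lzma

data = b"integrity-checked payload " * 100

# The four check types from the list above, as stdlib constants.
checks = (lzma.CHECK_NONE, lzma.CHECK_CRC32,
          lzma.CHECK_CRC64, lzma.CHECK_SHA256)

for check in checks:
    blob = lzma.compress(data, format=lzma.FORMAT_XZ, check=check)
    # The stored check (if any) is verified during decompression.
    assert lzma.decompress(blob) == data
```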
    --ignore-check
        Don't verify the integrity check of the compressed data when decompressing. The CRC32 values in the .xz headers will still be verified normally.

        Do not use this option unless you know what you are doing. Possible reasons to use this option:

        β€’ Trying to recover data from a corrupt .xz file.

        β€’ Speeding up decompression. This matters mostly with SHA-256 or with files that have compressed extremely well. It's recommended to not use this option for this purpose unless the file integrity is verified externally in some other way.

    -0 ... -9
        Select a compression preset level. The default is -6. If multiple preset levels are specified, the last one takes effect. If a custom filter chain was already specified, setting a compression preset level clears the custom filter chain.

        The differences between the presets are more significant than with gzip(1) and bzip2(1). The selected compression settings determine the memory requirements of the decompressor, thus using a too high preset level might make it painful to decompress the file on an old system with little RAM. Specifically, it's not a good idea to blindly use -9 for everything like it often is with gzip(1) and bzip2(1).

        -0 ... -3
            These are somewhat fast presets. -0 is sometimes faster than gzip -9 while compressing much better. The higher ones often have speed comparable to bzip2(1) with comparable or better compression ratio, although the results depend a lot on the type of data being compressed.

        -4 ... -6
            Good to very good compression while keeping decompressor memory usage reasonable even for old systems. -6 is the default, which is usually a good choice for distributing files that need to be decompressible even on systems with only 16 MiB RAM. (-5e or -6e may be worth considering too. See --extreme.)

        -7 ... -9
            These are like -6 but with higher compressor and decompressor memory requirements. These are useful only when compressing files bigger than 8 MiB, 16 MiB, and 32 MiB, respectively.
        On the same hardware, the decompression speed is approximately a constant number of bytes of compressed data per second. In other words, the better the compression, the faster the decompression will usually be. This also means that the amount of uncompressed output produced per second can vary a lot.

        The following table summarises the features of the presets:

            Preset   DictSize   CompCPU   CompMem   DecMem
            -0       256 KiB    0         3 MiB     1 MiB
            -1       1 MiB      1         9 MiB     2 MiB
            -2       2 MiB      2         17 MiB    3 MiB
            -3       4 MiB      3         32 MiB    5 MiB
            -4       4 MiB      4         48 MiB    5 MiB
            -5       8 MiB      5         94 MiB    9 MiB
            -6       8 MiB      6         94 MiB    9 MiB
            -7       16 MiB     6         186 MiB   17 MiB
            -8       32 MiB     6         370 MiB   33 MiB
            -9       64 MiB     6         674 MiB   65 MiB

        Column descriptions:

        β€’ DictSize is the LZMA2 dictionary size. It is a waste of memory to use a dictionary bigger than the size of the uncompressed file. This is why it is good to avoid using the presets -7 ... -9 when there's no real need for them. At -6 and lower, the amount of memory wasted is usually low enough to not matter.

        β€’ CompCPU is a simplified representation of the LZMA2 settings that affect compression speed. The dictionary size affects speed too, so while CompCPU is the same for levels -6 ... -9, higher levels still tend to be a little slower. To get even slower and thus possibly better compression, see --extreme.

        β€’ CompMem contains the compressor memory requirements in the single-threaded mode. It may vary slightly between xz versions.

        β€’ DecMem contains the decompressor memory requirements. That is, the compression settings determine the memory requirements of the decompressor. The exact decompressor memory usage is slightly more than the LZMA2 dictionary size, but the values in the table have been rounded up to the next full MiB.

        Memory requirements of the multi-threaded mode are significantly higher than those of the single-threaded mode. With the default value of --block-size, each thread needs 3*3*DictSize plus CompMem or DecMem. For example, four threads with preset -6 need 660–670 MiB of memory.
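The preset trade-off can be tried through the stdlib lzma module's preset argument, which maps to these same levels. A rough sketch; exact sizes depend on the data and the liblzma version:

```python
import lzma

data = bytes(range(256)) * 4096          # ~1 MiB of compressible input

fast = lzma.compress(data, preset=0)     # like xz -0: 256 KiB dictionary
high = lzma.compress(data, preset=6)     # like xz -6: the default level

# Both round-trip; the preset affects only size and speed, never correctness.
assert lzma.decompress(fast) == data
assert lzma.decompress(high) == data
assert len(fast) < len(data)             # even -0 beats the raw size here
```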
    -e, --extreme
        Use a slower variant of the selected compression preset level (-0 ... -9) to hopefully get a little bit better compression ratio, but with bad luck this can also make it worse. Decompressor memory usage is not affected, but compressor memory usage increases a little at preset levels -0 ... -3.

        Since there are two presets with dictionary sizes 4 MiB and 8 MiB, the presets -3e and -5e use slightly faster settings (lower CompCPU) than -4e and -6e, respectively. That way no two presets are identical.

            Preset   DictSize   CompCPU   CompMem   DecMem
            -0e      256 KiB    8         4 MiB     1 MiB
            -1e      1 MiB      8         13 MiB    2 MiB
            -2e      2 MiB      8         25 MiB    3 MiB
            -3e      4 MiB      7         48 MiB    5 MiB
            -4e      4 MiB      8         48 MiB    5 MiB
            -5e      8 MiB      7         94 MiB    9 MiB
            -6e      8 MiB      8         94 MiB    9 MiB
            -7e      16 MiB     8         186 MiB   17 MiB
            -8e      32 MiB     8         370 MiB   33 MiB
            -9e      64 MiB     8         674 MiB   65 MiB

        For example, there are a total of four presets that use an 8 MiB dictionary, whose order from the fastest to the slowest is -5, -6, -5e, and -6e.

    --fast
    --best
        These are somewhat misleading aliases for -0 and -9, respectively. These are provided only for backwards compatibility with LZMA Utils. Avoid using these options.

    --block-size=size
        When compressing to the .xz format, split the input data into blocks of size bytes. The blocks are compressed independently from each other, which helps with multi-threading and makes limited random-access decompression possible. This option is typically used to override the default block size in multi-threaded mode, but this option can be used in single-threaded mode too.

        In multi-threaded mode about three times size bytes will be allocated in each thread for buffering input and output. The default size is three times the LZMA2 dictionary size or 1 MiB, whichever is more. Typically a good value is 2–4 times the size of the LZMA2 dictionary or at least 1 MiB. Using a size less than the LZMA2 dictionary size is a waste of RAM because then the LZMA2 dictionary buffer will never get fully used.
        In multi-threaded mode, the sizes of the blocks are stored in the block headers. This size information is required for multi-threaded decompression.

        In single-threaded mode no block splitting is done by default. Setting this option doesn't affect memory usage. No size information is stored in block headers, thus files created in single-threaded mode won't be identical to files created in multi-threaded mode. The lack of size information also means that xz won't be able to decompress the files in multi-threaded mode.

    --block-list=items
        When compressing to the .xz format, start a new block with an optional custom filter chain after the given intervals of uncompressed data.

        The items are a comma-separated list. Each item consists of an optional filter chain number between 0 and 9 followed by a colon (:) and a required size of uncompressed data. Omitting an item (two or more consecutive commas) is a shorthand to use the size and filters of the previous item.

        If the input file is bigger than the sum of the sizes in items, the last item is repeated until the end of the file. A special value of 0 may be used as the last size to indicate that the rest of the file should be encoded as a single block.

        An alternative filter chain for each block can be specified in combination with the --filters1=filters ... --filters9=filters options. These options define filter chains with an identifier between 1–9. Filter chain 0 can be used to refer to the default filter chain, which is the same as not specifying a filter chain. The filter chain identifier can be used before the uncompressed size, followed by a colon (:).
        For example, if one specifies --block-list=1:2MiB,3:2MiB,2:4MiB,,2MiB,0:4MiB then blocks will be created using:

        β€’ The filter chain specified by --filters1 and 2 MiB input

        β€’ The filter chain specified by --filters3 and 2 MiB input

        β€’ The filter chain specified by --filters2 and 4 MiB input

        β€’ The filter chain specified by --filters2 and 4 MiB input

        β€’ The default filter chain and 2 MiB input

        β€’ The default filter chain and 4 MiB input for every block until end of input.

        If one specifies a size that exceeds the encoder's block size (either the default value in threaded mode or the value specified with --block-size=size), the encoder will create additional blocks while keeping the boundaries specified in items. For example, if one specifies --block-size=10MiB --block-list=5MiB,10MiB,8MiB,12MiB,24MiB and the input file is 80 MiB, one will get 11 blocks: 5, 10, 8, 10, 2, 10, 10, 4, 10, 10, and 1 MiB.

        In multi-threaded mode the sizes of the blocks are stored in the block headers. This isn't done in single-threaded mode, so the encoded output won't be identical to that of the multi-threaded mode.

    --flush-timeout=timeout
        When compressing, if more than timeout milliseconds (a positive integer) has passed since the previous flush and reading more input would block, all the pending input data is flushed from the encoder and made available in the output stream. This can be useful if xz is used to compress data that is streamed over a network. Small timeout values make the data available at the receiving end with a small delay, but large timeout values give a better compression ratio.

        This feature is disabled by default. If this option is specified more than once, the last one takes effect. The special timeout value of 0 can be used to explicitly disable this feature. This feature is not available on non-POSIX systems.

        This feature is still experimental. Currently xz is unsuitable for decompressing the stream in real time due to how xz does buffering.
    --memlimit-compress=limit
        Set a memory usage limit for compression. If this option is specified multiple times, the last one takes effect.

        If the compression settings exceed the limit, xz will attempt to adjust the settings downwards so that the limit is no longer exceeded and display a notice that automatic adjustment was done. The adjustments are done in this order: reducing the number of threads, switching to single-threaded mode if even one thread in multi-threaded mode exceeds the limit, and finally reducing the LZMA2 dictionary size.

        When compressing with --format=raw or if --no-adjust has been specified, only the number of threads may be reduced since it can be done without affecting the compressed output.

        If the limit cannot be met even with the adjustments described above, an error is displayed and xz will exit with exit status 1.

        The limit can be specified in multiple ways:

        β€’ The limit can be an absolute value in bytes. Using an integer suffix like MiB can be useful. Example: --memlimit-compress=80MiB

        β€’ The limit can be specified as a percentage of total physical memory (RAM). This can be useful especially when setting the XZ_DEFAULTS environment variable in a shell initialization script that is shared between different computers. That way the limit is automatically bigger on systems with more memory. Example: --memlimit-compress=70%

        β€’ The limit can be reset back to its default value by setting it to 0. This is currently equivalent to setting the limit to max (no memory usage limit).

        For 32-bit xz there is a special case: if the limit would be over 4020 MiB, the limit is set to 4020 MiB. On MIPS32, 2000 MiB is used instead. (The values 0 and max aren't affected by this. A similar feature doesn't exist for decompression.) This can be helpful when a 32-bit executable has access to a 4 GiB address space (2 GiB on MIPS32) while hopefully doing no harm in other situations.

        See also the section Memory usage.
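The decompression side of the limiter can be simulated with the stdlib lzma module's memlimit argument; the limit semantics come from liblzma, which both the module and the xz tool use. An illustrative sketch:

```python
import lzma

# Preset 6 output needs roughly 9 MiB of decompressor memory.
blob = lzma.compress(b"payload " * 1000, preset=6)

# A limit below the decompressor's requirement fails, as when
# --memlimit-decompress is set too low.
try:
    lzma.decompress(blob, memlimit=1 << 20)   # 1 MiB: too low
    raise AssertionError("expected LZMAError from the memory limit")
except lzma.LZMAError:
    pass

# A sufficient limit succeeds.
assert lzma.decompress(blob, memlimit=1 << 30) == b"payload " * 1000
```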
    --memlimit-decompress=limit
        Set a memory usage limit for decompression. This also affects the --list mode. If the operation is not possible without exceeding the limit, xz will display an error and decompressing the file will fail. See --memlimit-compress=limit for possible ways to specify the limit.

    --memlimit-mt-decompress=limit
        Set a memory usage limit for multi-threaded decompression. This can only affect the number of threads; it will never make xz refuse to decompress a file. If limit is too low to allow any multi-threading, the limit is ignored and xz will continue in single-threaded mode. Note that if --memlimit-decompress is also used, it will always apply to both single-threaded and multi-threaded modes, and so the effective limit for multi-threading will never be higher than the limit set with --memlimit-decompress.

        In contrast to the other memory usage limit options, --memlimit-mt-decompress=limit has a system-specific default limit. xz --info-memory can be used to see the current value.

        This option and its default value exist because without any limit the threaded decompressor could end up allocating an insane amount of memory with some input files. If the default limit is too low on your system, feel free to increase the limit, but never set it to a value larger than the amount of usable RAM, as with appropriate input files xz will attempt to use that amount of memory even with a low number of threads. Running out of memory or swapping will not improve decompression performance.

        See --memlimit-compress=limit for possible ways to specify the limit. Setting limit to 0 resets the limit to the default system-specific value.

    -M limit, --memlimit=limit, --memory=limit
        This is equivalent to specifying --memlimit-compress=limit --memlimit-decompress=limit --memlimit-mt-decompress=limit.

    --no-adjust
        Display an error and exit if the memory usage limit cannot be met without adjusting settings that affect the compressed output.
That is, this prevents xz from switching the encoder from multi-threaded mode to single-threaded mode and from reducing the LZMA2 dictionary size. Even when this option is used the number of threads may be reduced to meet the memory usage limit as that won't affect the compressed output. Automatic adjusting is always disabled when creating raw streams (--format=raw). -T threads, --threads=threads Specify the number of worker threads to use. Setting threads to a special value 0 makes xz use up to as many threads as the processor(s) on the system support. The actual number of threads can be fewer than threads if the input file is not big enough for threading with the given settings or if using more threads would exceed the memory usage limit. The single-threaded and multi-threaded compressors produce different output. The single-threaded compressor will give the smallest file size but only the output from the multi-threaded compressor can be decompressed using multiple threads. Setting threads to 1 will use the single-threaded mode. Setting threads to any other value, including 0, will use the multi-threaded compressor even if the system supports only one hardware thread. (xz 5.2.x used single-threaded mode in this situation.) To use multi-threaded mode with only one thread, set threads to +1. The + prefix has no effect with values other than 1. A memory usage limit can still make xz switch to single-threaded mode unless --no-adjust is used. Support for the + prefix was added in xz 5.4.0. If an automatic number of threads has been requested and no memory usage limit has been specified, then a system-specific default soft limit will be used to possibly limit the number of threads. It is a soft limit in the sense that it is ignored if the number of threads becomes one, thus a soft limit will never stop xz from compressing or decompressing. This default soft limit will not make xz switch from multi-threaded mode to single-threaded mode. 
The active limits can be seen with xz --info-memory. Currently the only threading method is to split the input into blocks and compress them independently from each other. The default block size depends on the compression level and can be overridden with the --block-size=size option. Threaded decompression only works on files that contain multiple blocks with size information in block headers. All large enough files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size has been used. The default value for threads is 0. In xz 5.4.x and older the default is 1. Custom compressor filter chains A custom filter chain allows specifying the compression settings in detail instead of relying on the settings associated to the presets. When a custom filter chain is specified, preset options (-0 ... -9 and --extreme) earlier on the command line are forgotten. If a preset option is specified after one or more custom filter chain options, the new preset takes effect and the custom filter chain options specified earlier are forgotten. A filter chain is comparable to piping on the command line. When compressing, the uncompressed input goes to the first filter, whose output goes to the next filter (if any). The output of the last filter gets written to the compressed file. The maximum number of filters in the chain is four, but typically a filter chain has only one or two filters. Many filters have limitations on where they can be in the filter chain: some filters can work only as the last filter in the chain, some only as a non-last filter, and some work in any position in the chain. Depending on the filter, this limitation is either inherent to the filter design or exists to prevent security issues. A custom filter chain can be specified in two different ways. The options --filters=filters and --filters1=filters ... 
--filters9=filters allow specifying an entire filter chain in one option using the liblzma filter string syntax. Alternatively, a filter chain can be specified by using one or more individual filter options in the order they are wanted in the filter chain. That is, the order of the individual filter options is significant! When decoding raw streams (--format=raw), the filter chain must be specified in the same order as it was specified when compressing. Any individual filter or preset options specified before the full chain option (--filters=filters) will be forgotten. Individual filters specified after the full chain option will reset the filter chain. Both the full and individual filter options take filter-specific options as a comma-separated list. Extra commas in options are ignored. Every option has a default value, so specify those you want to change. To see the whole filter chain and options, use xz -vv (that is, use --verbose twice). This works also for viewing the filter chain options used by presets. --filters=filters Specify the full filter chain or a preset in a single option. Each filter can be separated by spaces or two dashes (--). filters may need to be quoted on the shell command line so it is parsed as a single option. To denote options, use : or =. A preset can be prefixed with a - and followed with zero or more flags. The only supported flag is e to apply the same options as --extreme. --filters1=filters ... --filters9=filters Specify up to nine additional filter chains that can be used with --block-list. For example, when compressing an archive with executable files followed by text files, the executable part could use a filter chain with a BCJ filter and the text part only the LZMA2 filter. --filters-help Display a help message describing how to specify presets and custom filter chains in the --filters and --filters1=filters ... --filters9=filters options, and exit successfully. 
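Since the order of individual filter options is significant, a chain can be sketched like this (the data and path are illustrative): the non-last filter is listed before the last one, and decompression needs no filter options because the chain is stored in the .xz headers.

```shell
dir=$(mktemp -d)
# Delta (a non-last filter) followed by LZMA2 (the last filter);
# the order on the command line is the order in the filter chain.
printf 'AAAABBBBAAAABBBB' | xz --delta=dist=4 --lzma2=preset=0 \
    > "$dir/chain.xz"
# The .xz headers record the chain, so plain decompression works.
xz --decompress --stdout "$dir/chain.xz"
```

Swapping the two options would fail, because LZMA2 cannot be a non-last filter and Delta cannot be the last one.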
--lzma1[=options] --lzma2[=options] Add LZMA1 or LZMA2 filter to the filter chain. These filters can be used only as the last filter in the chain. LZMA1 is a legacy filter, which is supported almost solely due to the legacy .lzma file format, which supports only LZMA1. LZMA2 is an updated version of LZMA1 to fix some practical issues of LZMA1. The .xz format uses LZMA2 and doesn't support LZMA1 at all. Compression speed and ratios of LZMA1 and LZMA2 are practically the same. LZMA1 and LZMA2 share the same set of options: preset=preset Reset all LZMA1 or LZMA2 options to preset. A preset consists of an integer, which may be followed by single-letter preset modifiers. The integer can be from 0 to 9, matching the command line options -0 ... -9. The only supported modifier is currently e, which matches --extreme. If no preset is specified, the default values of LZMA1 or LZMA2 options are taken from the preset 6. dict=size Dictionary (history buffer) size indicates how many bytes of the recently processed uncompressed data is kept in memory. The algorithm tries to find repeating byte sequences (matches) in the uncompressed data, and replace them with references to the data currently in the dictionary. The bigger the dictionary, the higher is the chance to find a match. Thus, increasing dictionary size usually improves compression ratio, but a dictionary bigger than the uncompressed file is a waste of memory. Typical dictionary size is from 64 KiB to 64 MiB. The minimum is 4 KiB. The maximum for compression is currently 1.5 GiB (1536 MiB). The decompressor already supports dictionaries up to one byte less than 4 GiB, which is the maximum for the LZMA1 and LZMA2 stream formats. Dictionary size and match finder (mf) together determine the memory usage of the LZMA1 or LZMA2 encoder. The same (or a bigger) dictionary size that was used when compressing is required for decompressing, thus the memory usage of the decoder is determined by the dictionary size used when compressing. 
The .xz headers store the dictionary size either as 2^n or 2^n + 2^(n-1), so these sizes are somewhat preferred for compression. Other sizes will get rounded up when stored in the .xz headers. lc=lc Specify the number of literal context bits. The minimum is 0 and the maximum is 4; the default is 3. In addition, the sum of lc and lp must not exceed 4. All bytes that cannot be encoded as matches are encoded as literals. That is, literals are simply 8-bit bytes that are encoded one at a time. The literal coding makes an assumption that the highest lc bits of the previous uncompressed byte correlate with the next byte. For example, in typical English text, an upper-case letter is often followed by a lower-case letter, and a lower-case letter is usually followed by another lower-case letter. In the US-ASCII character set, the highest three bits are 010 for upper-case letters and 011 for lower-case letters. When lc is at least 3, the literal coding can take advantage of this property in the uncompressed data. The default value (3) is usually good. If you want maximum compression, test lc=4. Sometimes it helps a little, and sometimes it makes compression worse. If it makes it worse, test lc=2 too. lp=lp Specify the number of literal position bits. The minimum is 0 and the maximum is 4; the default is 0. Lp affects what kind of alignment in the uncompressed data is assumed when encoding literals. See pb below for more information about alignment. pb=pb Specify the number of position bits. The minimum is 0 and the maximum is 4; the default is 2. Pb affects what kind of alignment in the uncompressed data is assumed in general. The default means four-byte alignment (2^pb=2^2=4), which is often a good choice when there's no better guess. When the alignment is known, setting pb accordingly may reduce the file size a little. For example, with text files having one-byte alignment (US-ASCII, ISO-8859-*, UTF-8), setting pb=0 can improve compression slightly. 
For UTF-16 text, pb=1 is a good choice. If the alignment is an odd number like 3 bytes, pb=0 might be the best choice. Even though the assumed alignment can be adjusted with pb and lp, LZMA1 and LZMA2 still slightly favor 16-byte alignment. It might be worth taking into account when designing file formats that are likely to be often compressed with LZMA1 or LZMA2. mf=mf Match finder has a major effect on encoder speed, memory usage, and compression ratio. Usually Hash Chain match finders are faster than Binary Tree match finders. The default depends on the preset: 0 uses hc3, 1–3 use hc4, and the rest use bt4. The following match finders are supported. The memory usage formulas below are rough approximations, which are closest to the reality when dict is a power of two.

    hc3    Hash Chain with 2- and 3-byte hashing
           Minimum value for nice: 3
           Memory usage:
               dict * 7.5 (if dict <= 16 MiB);
               dict * 5.5 + 64 MiB (if dict > 16 MiB)

    hc4    Hash Chain with 2-, 3-, and 4-byte hashing
           Minimum value for nice: 4
           Memory usage:
               dict * 7.5 (if dict <= 32 MiB);
               dict * 6.5 (if dict > 32 MiB)

    bt2    Binary Tree with 2-byte hashing
           Minimum value for nice: 2
           Memory usage: dict * 9.5

    bt3    Binary Tree with 2- and 3-byte hashing
           Minimum value for nice: 3
           Memory usage:
               dict * 11.5 (if dict <= 16 MiB);
               dict * 9.5 + 64 MiB (if dict > 16 MiB)

    bt4    Binary Tree with 2-, 3-, and 4-byte hashing
           Minimum value for nice: 4
           Memory usage:
               dict * 11.5 (if dict <= 32 MiB);
               dict * 10.5 (if dict > 32 MiB)

mode=mode Compression mode specifies the method to analyze the data produced by the match finder. Supported modes are fast and normal. The default is fast for presets 0–3 and normal for presets 4–9. Usually fast is used with Hash Chain match finders and normal with Binary Tree match finders. This is also what the presets do. nice=nice Specify what is considered to be a nice length for a match. Once a match of at least nice bytes is found, the algorithm stops looking for possibly better matches. 
Nice can be 2–273 bytes. Higher values tend to give better compression ratio at the expense of speed. The default depends on the preset. depth=depth Specify the maximum search depth in the match finder. The default is the special value of 0, which makes the compressor determine a reasonable depth from mf and nice. Reasonable depth for Hash Chains is 4–100 and 16–1000 for Binary Trees. Using very high values for depth can make the encoder extremely slow with some files. Avoid setting the depth over 1000 unless you are prepared to interrupt the compression in case it is taking far too long. When decoding raw streams (--format=raw), LZMA2 needs only the dictionary size. LZMA1 needs also lc, lp, and pb. --x86[=options] --arm[=options] --armthumb[=options] --arm64[=options] --powerpc[=options] --ia64[=options] --sparc[=options] --riscv[=options] Add a branch/call/jump (BCJ) filter to the filter chain. These filters can be used only as a non-last filter in the filter chain. A BCJ filter converts relative addresses in the machine code to their absolute counterparts. This doesn't change the size of the data but it increases redundancy, which can help LZMA2 to produce 0–15 % smaller .xz file. The BCJ filters are always reversible, so using a BCJ filter for wrong type of data doesn't cause any data loss, although it may make the compression ratio slightly worse. The BCJ filters are very fast and use an insignificant amount of memory. These BCJ filters have known problems related to the compression ratio: β€’ Some types of files containing executable code (for example, object files, static libraries, and Linux kernel modules) have the addresses in the instructions filled with filler values. These BCJ filters will still do the address conversion, which will make the compression worse with these files. β€’ If a BCJ filter is applied on an archive, it is possible that it makes the compression ratio worse than not using a BCJ filter. 
For example, if there are similar or even identical executables then filtering will likely make the files less similar and thus compression is worse. The contents of non-executable files in the same archive can matter too. In practice one has to try with and without a BCJ filter to see which is better in each situation. Different instruction sets have different alignment: the executable file must be aligned to a multiple of this value in the input data to make the filter work.

    Filter      Alignment   Notes
    x86         1           32-bit or 64-bit x86
    ARM         4
    ARM-Thumb   2
    ARM64       4           4096-byte alignment is best
    PowerPC     4           Big endian only
    IA-64       16          Itanium
    SPARC       4
    RISC-V      2

Since the BCJ-filtered data is usually compressed with LZMA2, the compression ratio may be improved slightly if the LZMA2 options are set to match the alignment of the selected BCJ filter. Examples:
β€’ IA-64 filter has 16-byte alignment so pb=4,lp=4,lc=0 is good with LZMA2 (2^4=16).
β€’ RISC-V code has 2-byte or 4-byte alignment depending on whether the file contains 16-bit compressed instructions (the C extension). When 16-bit instructions are used, pb=2,lp=1,lc=3 or pb=1,lp=1,lc=3 is good. When 16-bit instructions aren't present, pb=2,lp=2,lc=2 is the best. readelf -h can be used to check if "RVC" appears on the "Flags" line.
β€’ ARM64 is always 4-byte aligned so pb=2,lp=2,lc=2 is the best.
β€’ The x86 filter is an exception. It's usually good to stick to LZMA2's defaults (pb=2,lp=0,lc=3) when compressing x86 executables.
All BCJ filters support the same options: start=offset Specify the start offset that is used when converting between relative and absolute addresses. The offset must be a multiple of the alignment of the filter (see the table above). The default is zero. In practice, the default is good; specifying a custom offset is almost never useful. --delta[=options] Add the Delta filter to the filter chain. The Delta filter can be only used as a non-last filter in the filter chain. 
Currently only simple byte-wise delta calculation is supported. It can be useful when compressing, for example, uncompressed bitmap images or uncompressed PCM audio. However, special purpose algorithms may give significantly better results than Delta + LZMA2. This is true especially with audio, which compresses faster and better, for example, with flac(1). Supported options: dist=distance Specify the distance of the delta calculation in bytes. distance must be 1–256. The default is 1. For example, with dist=2 and eight-byte input A1 B1 A2 B3 A3 B5 A4 B7, the output will be A1 B1 01 02 01 02 01 02. Other options -q, --quiet Suppress warnings and notices. Specify this twice to suppress errors too. This option has no effect on the exit status. That is, even if a warning was suppressed, the exit status to indicate a warning is still used. -v, --verbose Be verbose. If standard error is connected to a terminal, xz will display a progress indicator. Specifying --verbose twice will give even more verbose output. The progress indicator shows the following information: β€’ Completion percentage is shown if the size of the input file is known. That is, the percentage cannot be shown in pipes. β€’ Amount of compressed data produced (compressing) or consumed (decompressing). β€’ Amount of uncompressed data consumed (compressing) or produced (decompressing). β€’ Compression ratio, which is calculated by dividing the amount of compressed data processed so far by the amount of uncompressed data processed so far. β€’ Compression or decompression speed. This is measured as the amount of uncompressed data consumed (compression) or produced (decompression) per second. It is shown after a few seconds have passed since xz started processing the file. β€’ Elapsed time in the format M:SS or H:MM:SS. β€’ Estimated remaining time is shown only when the size of the input file is known and a couple of seconds have already passed since xz started processing the file. 
The time is shown in a less precise format which never has any colons, for example, 2 min 30 s. When standard error is not a terminal, --verbose will make xz print the filename, compressed size, uncompressed size, compression ratio, and possibly also the speed and elapsed time on a single line to standard error after compressing or decompressing the file. The speed and elapsed time are included only when the operation took at least a few seconds. If the operation didn't finish, for example, due to user interruption, also the completion percentage is printed if the size of the input file is known. -Q, --no-warn Don't set the exit status to 2 even if a condition worth a warning was detected. This option doesn't affect the verbosity level, thus both --quiet and --no-warn have to be used to not display warnings and to not alter the exit status. --robot Print messages in a machine-parsable format. This is intended to ease writing frontends that want to use xz instead of liblzma, which may be the case with various scripts. The output with this option enabled is meant to be stable across xz releases. See the section ROBOT MODE for details. --info-memory Display, in human-readable format, how much physical memory (RAM) and how many processor threads xz thinks the system has and the memory usage limits for compression and decompression, and exit successfully. -h, --help Display a help message describing the most commonly used options, and exit successfully. -H, --long-help Display a help message describing all features of xz, and exit successfully. -V, --version Display the version number of xz and liblzma in human-readable format. To get machine-parsable output, specify --robot before --version. ROBOT MODE The robot mode is activated with the --robot option. It makes the output of xz easier to parse by other programs. Currently --robot is supported only together with --list, --filters-help, --info-memory, and --version. 
It will be supported for compression and decompression in the future.

List mode
xz --robot --list uses tab-separated output. The first column of every line has a string that indicates the type of the information found on that line:

name     This is always the first line when starting to list a file. The second column on the line is the filename.
file     This line contains overall information about the .xz file. This line is always printed after the name line.
stream   This line type is used only when --verbose was specified. There are as many stream lines as there are streams in the .xz file.
block    This line type is used only when --verbose was specified. There are as many block lines as there are blocks in the .xz file. The block lines are shown after all the stream lines; different line types are not interleaved.
summary  This line type is used only when --verbose was specified twice. This line is printed after all block lines. Like the file line, the summary line contains overall information about the .xz file.
totals   This line is always the very last line of the list output. It shows the total counts and sizes.

The columns of the file lines:
    2. Number of streams in the file
    3. Total number of blocks in the stream(s)
    4. Compressed size of the file
    5. Uncompressed size of the file
    6. Compression ratio, for example, 0.123. If ratio is over 9.999, three dashes (---) are displayed instead of the ratio.
    7. Comma-separated list of integrity check names. The following strings are used for the known check types: None, CRC32, CRC64, and SHA-256. For unknown check types, Unknown-N is used, where N is the Check ID as a decimal number (one or two digits).
    8. Total size of stream padding in the file

The columns of the stream lines:
    2. Stream number (the first stream is 1)
    3. Number of blocks in the stream
    4. Compressed start offset
    5. Uncompressed start offset
    6. Compressed size (does not include stream padding)
    7. Uncompressed size
    8. Compression ratio
    9. Name of the integrity check
    10. Size of stream padding

The columns of the block lines:
    2. Number of the stream containing this block
    3. Block number relative to the beginning of the stream (the first block is 1)
    4. Block number relative to the beginning of the file
    5. Compressed start offset relative to the beginning of the file
    6. Uncompressed start offset relative to the beginning of the file
    7. Total compressed size of the block (includes headers)
    8. Uncompressed size
    9. Compression ratio
    10. Name of the integrity check

If --verbose was specified twice, additional columns are included on the block lines. These are not displayed with a single --verbose, because getting this information requires many seeks and can thus be slow:
    11. Value of the integrity check in hexadecimal
    12. Block header size
    13. Block flags: c indicates that compressed size is present, and u indicates that uncompressed size is present. If the flag is not set, a dash (-) is shown instead to keep the string length fixed. New flags may be added to the end of the string in the future.
    14. Size of the actual compressed data in the block (this excludes the block header, block padding, and check fields)
    15. Amount of memory (in bytes) required to decompress this block with this xz version
    16. Filter chain. Note that most of the options used at compression time cannot be known, because only the options that are needed for decompression are stored in the .xz headers.

The columns of the summary lines:
    2. Amount of memory (in bytes) required to decompress this file with this xz version
    3. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them
    Since xz 5.1.2alpha:
    4. Minimum xz version required to decompress the file

The columns of the totals line:
    2. Number of streams
    3. Number of blocks
    4. Compressed size
    5. Uncompressed size
    6. Average compression ratio
    7. Comma-separated list of integrity check names that were present in the files
    8. Stream padding size
    9. Number of files. This is here to keep the order of the earlier columns the same as on file lines.

If --verbose was specified twice, additional columns are included on the totals line:
    10. Maximum amount of memory (in bytes) required to decompress the files with this xz version
    11. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them
    Since xz 5.1.2alpha:
    12. Minimum xz version required to decompress the file

Future versions may add new line types and new columns can be added to the existing line types, but the existing columns won't be changed.

Filters help
xz --robot --filters-help prints the supported filters in the following format:

    filter:option=<value>,option=<value>...

filter   Name of the filter
option   Name of a filter specific option
value    Numeric value ranges appear as <min-max>. String value choices are shown within < > and separated by a | character.

Each filter is printed on its own line.

Memory limit information
xz --robot --info-memory prints a single line with multiple tab-separated columns:
    1. Total amount of physical memory (RAM) in bytes.
    2. Memory usage limit for compression in bytes (--memlimit-compress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.
    3. Memory usage limit for decompression in bytes (--memlimit-decompress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.
    4. Since xz 5.3.4alpha: Memory usage for multi-threaded decompression in bytes (--memlimit-mt-decompress). This is never zero because a system-specific default value shown in the column 5 is used if no limit has been specified explicitly. This is also never greater than the value in the column 3 even if a larger value has been specified with --memlimit-mt-decompress.
    5. Since xz 5.3.4alpha: A system-specific default memory usage limit that is used to limit the number of threads when compressing with an automatic number of threads (--threads=0) and no memory usage limit has been specified (--memlimit-compress). This is also used as the default value for --memlimit-mt-decompress.
    6. Since xz 5.3.4alpha: Number of available processor threads.

In the future, the output of xz --robot --info-memory may have more columns, but never more than a single line.

Version
xz --robot --version prints the version number of xz and liblzma in the following format:

    XZ_VERSION=XYYYZZZS
    LIBLZMA_VERSION=XYYYZZZS

X     Major version.
YYY   Minor version. Even numbers are stable. Odd numbers are alpha or beta versions.
ZZZ   Patch level for stable releases or just a counter for development releases.
S     Stability. 0 is alpha, 1 is beta, and 2 is stable. S should be always 2 when YYY is even.

XYYYZZZS are the same on both lines if xz and liblzma are from the same XZ Utils release. Examples: 4.999.9beta is 49990091 and 5.0.0 is 50000002.

EXIT STATUS
0   All is good.
1   An error occurred.
2   Something worth a warning occurred, but no actual errors occurred.

Notices (not warnings or errors) printed on standard error don't affect the exit status.

ENVIRONMENT
xz parses space-separated lists of options from the environment variables XZ_DEFAULTS and XZ_OPT, in this order, before parsing the options from the command line. Note that only options are parsed from the environment variables; all non-options are silently ignored. Parsing is done with getopt_long(3) which is used also for the command line arguments. XZ_DEFAULTS User-specific or system-wide default options. Typically this is set in a shell initialization script to enable xz's memory usage limiter by default. Excluding shell initialization scripts and similar special cases, scripts must never set or unset XZ_DEFAULTS. 
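As a sketch of how environment options are picked up (the limit value is illustrative), the same mechanism a shell initialization script would use can be exercised inline:

```shell
# Options in XZ_DEFAULTS are parsed before the command line arguments.
# Here only the compressing xz sees the variable; the percentage form
# scales the limit to the machine's RAM.
out=$(printf 'hello\n' | XZ_DEFAULTS=--memlimit-compress=70% xz -1 | xz -d)
echo "$out"
```

In a real setup the variable would instead be exported once from a shell initialization script, for example with XZ_DEFAULTS="--memlimit=70%".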
XZ_OPT This is for passing options to xz when it is not possible to set the options directly on the xz command line. This is the case when xz is run by a script or tool, for example, GNU tar(1):

    XZ_OPT=-2v tar caf foo.tar.xz foo

Scripts may use XZ_OPT, for example, to set script-specific default compression options. It is still recommended to allow users to override XZ_OPT if that is reasonable. For example, in sh(1) scripts one may use something like this:

    XZ_OPT=${XZ_OPT-"-7e"}
    export XZ_OPT

LZMA UTILS COMPATIBILITY
The command line syntax of xz is practically a superset of lzma, unlzma, and lzcat as found in LZMA Utils 4.32.x. In most cases, it is possible to replace LZMA Utils with XZ Utils without breaking existing scripts. There are some incompatibilities though, which may sometimes cause problems.

Compression preset levels
The numbering of the compression level presets is not identical in xz and LZMA Utils. The most important difference is how dictionary sizes are mapped to different presets. Dictionary size is roughly equal to the decompressor memory usage.

    Level   xz        LZMA Utils
     -0     256 KiB   N/A
     -1     1 MiB     64 KiB
     -2     2 MiB     1 MiB
     -3     4 MiB     512 KiB
     -4     4 MiB     1 MiB
     -5     8 MiB     2 MiB
     -6     8 MiB     4 MiB
     -7     16 MiB    8 MiB
     -8     32 MiB    16 MiB
     -9     64 MiB    32 MiB

The dictionary size differences affect the compressor memory usage too, but there are some other differences between LZMA Utils and XZ Utils, which make the difference even bigger:

    Level   xz        LZMA Utils 4.32.x
     -0     3 MiB     N/A
     -1     9 MiB     2 MiB
     -2     17 MiB    12 MiB
     -3     32 MiB    12 MiB
     -4     48 MiB    16 MiB
     -5     94 MiB    26 MiB
     -6     94 MiB    45 MiB
     -7     186 MiB   83 MiB
     -8     370 MiB   159 MiB
     -9     674 MiB   311 MiB

The default preset level in LZMA Utils is -7 while in XZ Utils it is -6, so both use an 8 MiB dictionary by default.

Streamed vs. non-streamed .lzma files
The uncompressed size of the file can be stored in the .lzma header. LZMA Utils does that when compressing regular files. 
The alternative is to mark that uncompressed size is unknown and use end-of-payload marker to indicate where the decompressor should stop. LZMA Utils uses this method when uncompressed size isn't known, which is the case, for example, in pipes. xz supports decompressing .lzma files with or without end-of-payload marker, but all .lzma files created by xz will use end-of-payload marker and have uncompressed size marked as unknown in the .lzma header. This may be a problem in some uncommon situations. For example, a .lzma decompressor in an embedded device might work only with files that have known uncompressed size. If you hit this problem, you need to use LZMA Utils or LZMA SDK to create .lzma files with known uncompressed size. Unsupported .lzma files The .lzma format allows lc values up to 8, and lp values up to 4. LZMA Utils can decompress files with any lc and lp, but always creates files with lc=3 and lp=0. Creating files with other lc and lp is possible with xz and with LZMA SDK. The implementation of the LZMA1 filter in liblzma requires that the sum of lc and lp must not exceed 4. Thus, .lzma files, which exceed this limitation, cannot be decompressed with xz. LZMA Utils creates only .lzma files which have a dictionary size of 2^n (a power of 2) but accepts files with any dictionary size. liblzma accepts only .lzma files which have a dictionary size of 2^n or 2^n + 2^(n-1). This is to decrease false positives when detecting .lzma files. These limitations shouldn't be a problem in practice, since practically all .lzma files have been compressed with settings that liblzma will accept. Trailing garbage When decompressing, LZMA Utils silently ignore everything after the first .lzma stream. In most situations, this is a bug. This also means that LZMA Utils don't support decompressing concatenated .lzma files. If there is data left after the first .lzma stream, xz considers the file to be corrupt unless --single-stream was used. 
This may break obscure scripts which have assumed that trailing garbage is ignored. NOTES Compressed output may vary The exact compressed output produced from the same uncompressed input file may vary between XZ Utils versions even if compression options are identical. This is because the encoder can be improved (faster or better compression) without affecting the file format. The output can vary even between different builds of the same XZ Utils version, if different build options are used. The above means that once --rsyncable has been implemented, the resulting files won't necessarily be rsyncable unless both old and new files have been compressed with the same xz version. This problem can be fixed if a part of the encoder implementation is frozen to keep rsyncable output stable across xz versions. Embedded .xz decompressors Embedded .xz decompressor implementations like XZ Embedded don't necessarily support files created with integrity check types other than none and crc32. Since the default is --check=crc64, you must use --check=none or --check=crc32 when creating files for embedded systems. Outside embedded systems, all .xz format decompressors support all the check types, or at least are able to decompress the file without verifying the integrity check if the particular check is not supported. XZ Embedded supports BCJ filters, but only with the default start offset.
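A hedged sketch of preparing a file for such a decompressor (file names are placeholders):

```shell
dir=$(mktemp -d)
printf 'firmware payload\n' > "$dir/fw.bin"
# CRC32 instead of the default CRC64 keeps the file usable by XZ Embedded;
# --check=none would work too.
xz --keep --check=crc32 "$dir/fw.bin"
# The chosen check type appears in the robot-mode listing.
xz --robot --list "$dir/fw.bin.xz" | grep CRC32
```

Either check type can be verified the same way before shipping the file to the embedded target.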
Basics Compress the file foo into foo.xz using the default compression level (-6), and remove foo if compression is successful: xz foo Decompress bar.xz into bar and don't remove bar.xz even if decompression is successful: xz -dk bar.xz Create baz.tar.xz with the preset -4e (-4 --extreme), which is slower than the default -6, but needs less memory for compression and decompression (48 MiB and 5 MiB, respectively): tar cf - baz | xz -4e > baz.tar.xz A mix of compressed and uncompressed files can be decompressed to standard output with a single command: xz -dcf a.txt b.txt.xz c.txt d.txt.lzma > abcd.txt Parallel compression of many files On GNU and *BSD, find(1) and xargs(1) can be used to parallelize compression of many files: find . -type f \! -name '*.xz' -print0 \ | xargs -0r -P4 -n16 xz -T1 The -P option to xargs(1) sets the number of parallel xz processes. The best value for the -n option depends on how many files there are to be compressed. If there are only a couple of files, the value should probably be 1; with tens of thousands of files, 100 or even more may be appropriate to reduce the number of xz processes that xargs(1) will eventually create. The option -T1 for xz is there to force it to single-threaded mode, because xargs(1) is used to control the amount of parallelization. Robot mode Calculate how many bytes have been saved in total after compressing multiple files: xz --robot --list *.xz | awk '/^totals/{print $5-$4}' A script may want to know that it is using new enough xz. The following sh(1) script checks that the version number of the xz tool is at least 5.0.0. This method is compatible with old beta versions, which didn't support the --robot option: if ! eval "$(xz --robot --version 2> /dev/null)" || [ "$XZ_VERSION" -lt 50000002 ]; then echo "Your xz is too old." 
fi unset XZ_VERSION LIBLZMA_VERSION Set a memory usage limit for decompression using XZ_OPT, but if a limit has already been set, don't increase it: NEWLIM=$((123 << 20)) # 123 MiB OLDLIM=$(xz --robot --info-memory | cut -f3) if [ $OLDLIM -eq 0 -o $OLDLIM -gt $NEWLIM ]; then XZ_OPT="$XZ_OPT --memlimit-decompress=$NEWLIM" export XZ_OPT fi Custom compressor filter chains The simplest use for custom filter chains is customizing a LZMA2 preset. This can be useful, because the presets cover only a subset of the potentially useful combinations of compression settings. The CompCPU columns of the tables from the descriptions of the options -0 ... -9 and --extreme are useful when customizing LZMA2 presets. Here are the relevant parts collected from those two tables: Preset CompCPU -0 0 -1 1 -2 2 -3 3 -4 4 -5 5 -6 6 -5e 7 -6e 8 If you know that a file requires a somewhat big dictionary (for example, 32 MiB) to compress well, but you want to compress it quicker than xz -8 would do, a preset with a low CompCPU value (for example, 1) can be modified to use a bigger dictionary: xz --lzma2=preset=1,dict=32MiB foo.tar With certain files, the above command may be faster than xz -6 while compressing significantly better. However, it must be emphasized that only some files benefit from a big dictionary while keeping the CompCPU value low. The most obvious situation where a big dictionary can help a lot is an archive containing very similar files of at least a few megabytes each. The dictionary size has to be significantly bigger than any individual file to allow LZMA2 to take full advantage of the similarities between consecutive files.
If very high compressor and decompressor memory usage is fine, and the file being compressed is at least several hundred megabytes, it may be useful to use an even bigger dictionary than the 64 MiB that xz -9 would use: xz -vv --lzma2=dict=192MiB big_foo.tar Using -vv (--verbose --verbose) like in the above example can be useful to see the memory requirements of the compressor and decompressor. Remember that using a dictionary bigger than the size of the uncompressed file is a waste of memory, so the above command isn't useful for small files. Sometimes the compression time doesn't matter, but the decompressor memory usage has to be kept low, for example, to make it possible to decompress the file on an embedded system. The following command uses -6e (-6 --extreme) as a base and sets the dictionary to only 64 KiB. The resulting file can be decompressed with XZ Embedded (that's why there is --check=crc32) using about 100 KiB of memory. xz --check=crc32 --lzma2=preset=6e,dict=64KiB foo If you want to squeeze out as many bytes as possible, adjusting the number of literal context bits (lc) and the number of position bits (pb) can sometimes help. Adjusting the number of literal position bits (lp) might help too, but usually lc and pb are more important. For example, a source code archive contains mostly US-ASCII text, so something like the following might give a slightly (like 0.1 %) smaller file than xz -6e (try also without lc=4): xz --lzma2=preset=6e,pb=0,lc=4 source_code.tar Using another filter together with LZMA2 can improve compression with certain file types. For example, to compress an x86-32 or x86-64 shared library using the x86 BCJ filter: xz --x86 --lzma2 libfoo.so Note that the order of the filter options is significant. If --x86 is specified after --lzma2, xz will give an error, because there cannot be any filter after LZMA2, and also because the x86 BCJ filter cannot be used as the last filter in the chain.
The Delta filter together with LZMA2 can give good results with bitmap images. It should usually beat PNG, which has a few more advanced filters than simple delta but uses Deflate for the actual compression. The image has to be saved in an uncompressed format, for example, as uncompressed TIFF. The distance parameter of the Delta filter is set to match the number of bytes per pixel in the image. For example, a 24-bit RGB bitmap needs dist=3, and it is also good to pass pb=0 to LZMA2 to accommodate the three-byte alignment: xz --delta=dist=3 --lzma2=pb=0 foo.tiff If multiple images have been put into a single archive (for example, .tar), the Delta filter will work on them too as long as all images have the same number of bytes per pixel. SEE ALSO xzdec(1), xzdiff(1), xzgrep(1), xzless(1), xzmore(1), gzip(1), bzip2(1), 7z(1) XZ Utils: <https://tukaani.org/xz/> XZ Embedded: <https://tukaani.org/xz/embedded.html> LZMA SDK: <https://7-zip.org/sdk.html> Tukaani 2024-04-08 XZ(1)
tkconch
null
null
null
null
null
as
The as command translates assembly code in the named files to object code. If no files are specified, as reads from stdin. All undefined symbols in the assembly are treated as global. The output of the assembly is left in the file a.out by default. The program /usr/bin/as is actually a driver that executes assemblers for specific target architectures. If no target architecture is specified, it defaults to the architecture of the host it is running on.
as - Mac OS X Mach-O GNU-based assemblers
as [ option ... ] [ file ... ]
-o name Name the output file name instead of a.out. -arch arch_type Specifies the target architecture, arch_type, of the assembler to be executed. The target assemblers for each architecture are in /usr/libexec/gcc/darwin/arch_type/as or /usr/local/libexec/gcc/darwin/arch_type/as. There is only one assembler for an architecture family. If the specified target architecture is a machine-specific implementation, the assembler for that architecture family is executed (e.g., /usr/libexec/gcc/darwin/ppc/as for -arch ppc604e). See arch(3) for the currently known arch_types. -arch_multiple Precede any displayed messages with a line stating the program name (as) and the architecture (from the -arch arch_type flag), to distinguish which architecture the error messages refer to. When the cc(1) driver program is run with multiple -arch flags, it invokes as with the -arch_multiple option. -force_cpusubtype_ALL By default, the assembler will produce the CPU subtype ALL for the object file it is assembling if it finds no implementation-specific instructions. Also by default, the assembler will allow implementation-specific instructions and will combine the CPU subtype for those specific implementations. The combining of specific implementations is architecture-dependent; if some combination of instructions is not allowed, an error is generated. With the optional -force_cpusubtype_ALL flag, all instructions are allowed and the object file's CPU subtype will be the ALL subtype. If the target architecture specified is a machine-specific implementation (e.g., -arch ppc603, -arch i486), the assembler will flag as errors instructions that are not supported on that architecture, and it will produce an object file with the CPU subtype for that specific implementation (even if no implementation-specific instructions are used). The -force_cpusubtype_ALL flag is the default for all x86 and x86_64 architectures. -dynamic Enables dynamic linking features. This is the default.
-static Causes the assembler to treat as an error any features for dynamic linking. Also causes the .text directive to not include the pure_instructions section attribute. -- Use stdin for the assembly source input. -n Instructs the assembler not to assume that the assembly file starts with a .text directive. Use this option when an output file is not to contain a (__TEXT,__text) section or this section is not to be the first one in the output file. -f Fast; no need for the assembler preprocessor (``app''). The assembler preprocessor can also be turned off by starting the assembly file with "#NO_APP\n". This is intended for use by compilers which produce assembly code in a strict "clean" format that specifies exactly where whitespace can go. The assembler preprocessor needs to be run on hand-written assembly files and/or files that have been preprocessed by the C preprocessor cpp. This is typically needed when assembler files are assembled through the use of the cc(1) command, which automatically runs the C preprocessor on assembly source files. The assembler preprocessor strips out excess spaces, turns single-quoted characters into decimal constants, and turns # <number> <filename> <level> into .line <number>;.file <filename> pairs. When the assembler preprocessor has been turned off by a "#NO_APP\n" at the start of a file, it can be turned back on and off again with pairs of "#APP\n" and "#NO_APP\n" at the beginnings of lines. This is used by the compiler to wrap assembly statements produced from asm() statements. -g Produce debugging information for the symbolic debugger gdb(1) so that the assembly source can be debugged symbolically. The debugger depends on correct use of the C preprocessor's #include directive or the assembler's .include directive: Any include file that produces instructions in the (__TEXT,__text) section must be included while a .text directive is in effect.
In other words, there must be a .text directive before the include, and the .text directive must still be in effect at the end of the include file. Otherwise, the debugger will get confused when in that assembly file. -v Display the version of the assembler (both the Mac OS X version and the GNU version it is based on). -V Print the path and the command line of the assembler the assembler driver is using. -Idir Add the directory dir to the list of directories to search for files included with the .include directive. The default place to search is the current directory. -W Suppress warnings. -L Save non-global defined labels beginning with an 'L'; these labels are normally discarded to save space in the resultant symbol table. The compiler generates such temporary labels. -q Use the clang(1) integrated assembler instead of the GNU-based system assembler. This is the default for the x86 and arm architectures. -Q Use the GNU-based system assembler. Note that Apple's built-in system assemblers are deprecated; programs that rely on these assemblers should move to the clang(1) integrated assembler instead, using the -q flag. Assembler options for the PowerPC processors -static_branch_prediction_Y_bit Treat a single trailing '+' or '-' after a conditional PowerPC branch instruction as a static branch prediction that sets the Y-bit in the opcode. Pairs of trailing "++" or "--" always set the AT-bits. This is the default for Mac OS X. -static_branch_prediction_AT_bits Treat a single trailing '+' or '-' after a conditional PowerPC branch instruction as a static branch prediction that sets the AT-bits in the opcode. Pairs of trailing "++" or "--" always set the AT-bits, but with this option a warning is issued if this syntax is used. With this flag the assembler behaves like the IBM tools. -no_ppc601 Treat any PowerPC 601 instructions as an error. FILES a.out output file SEE ALSO cc(1), ld(1), nm(1), otool(1), arch(3), Mach-O(5) Apple Inc. June 23, 2020 AS(1)
null
jpgicc
lcms is a standalone CMM engine, which deals with color management. It implements a fast transformation between ICC profiles. jpgicc is a little cms ICC profile applier for JPEG.
jpgicc - little cms ICC profile applier for JPEG.
jpgicc [options] input.jpg output.jpg
-b Black point compensation. -c NUM Precalculates transform (0=Off, 1=Normal, 2=Hi-res, 3=LoRes) [defaults to 1]. -d NUM Observer adaptation state (abs.col. only), (0..1.0, float value) [defaults to 0.0]. -e Embed destination profile. -g Marks out-of-gamut colors on softproof. -h NUM Show summary of options and examples (0=help, 1=Examples, 2=Built-in profiles, 3=Contact information) -i profile Input profile (defaults to sRGB). -l link TODO: explain this option. -m NUM SoftProof intent (0,1,2,3) [defaults to 0]. -n Ignore embedded profile. -o profile Output profile (defaults to sRGB). -p profile Soft proof profile. -q NUM Output JPEG quality, (0..100) [defaults to 75]. -s newprofile Save embedded profile as newprofile. -t NUM Rendering intent 0=Perceptual [default] 1=Relative colorimetric 2=Saturation 3=Absolute colorimetric 10=Perceptual preserving black ink 11=Relative colorimetric preserving black ink 12=Saturation preserving black ink 13=Perceptual preserving black plane 14=Relative colorimetric preserving black plane 15=Saturation preserving black plane -v Verbose. -! NUM,NUM,NUM Out-of-gamut marker channel values (r,g,b) [defaults: 128,128,128]. BUILT-IN PROFILES *Lab2 -- D50-based v2 CIEL*a*b *Lab4 -- D50-based v4 CIEL*a*b *Lab -- D50-based v4 CIEL*a*b *XYZ -- CIE XYZ (PCS) *sRGB -- sRGB color space *Gray22 - Monochrome of Gamma 2.2 *Gray30 - Monochrome of Gamma 3.0 *null - Monochrome black for all input *Lin2222- CMYK linearization of gamma 2.2 on each channel
To color correct from scanner to sRGB: jpgicc -iscanner.icm in.jpg out.jpg To convert from monitor1 to monitor2: jpgicc -imon1.icm -omon2.icm in.jpg out.jpg To make a CMYK separation: jpgicc -oprinter.icm inrgb.jpg outcmyk.jpg To recover sRGB from a CMYK separation: jpgicc -iprinter.icm incmyk.jpg outrgb.jpg To convert from CIELab ITU/Fax JPEG to sRGB jpgicc -iitufax.icm in.jpg out.jpg To convert from CIELab ITU/Fax JPEG to sRGB jpgicc in.jpg out.jpg NOTES For suggestions, comments, bug reports etc. send mail to info@littlecms.com. SEE ALSO linkicc(1), psicc(1), tificc(1), transicc(1) AUTHOR This manual page was written by Shiju p. Nair <shiju.p@gmail.com>, for the Debian project. September 30, 2004 JPGICC(1)
arm64-apple-darwin20.0.0-ctf_insert
ctf_insert inserts CTF (Compact C Type Format) data into a mach_kernel binary, storing the data in a newly created (__CTF,__ctf) section. This section must not be present in the input file. ctf_insert(1) must be passed one -arch argument for each architecture in a universal file, or exactly one -arch for a thin file. input specifies the input mach_kernel. -o output specifies the output file. -arch arch file specifies a file of CTF data to be used for the specified arch in a Mach-O or universal file. The file's content will be stored in a newly created (__CTF,__ctf) section. Apple, Inc. December 13, 2018 CTF_INSERT(1)
ctf_insert - insert Compact C Type Format data into a mach_kernel file
ctf_insert input [ -arch arch file ]... -o output
null
null
mtoc
null
null
null
null
null
ldid
null
null
null
null
null
myisamlog
myisamlog processes the contents of a MyISAM log file. To create such a file, start the server with a --log-isam=log_file option. Invoke myisamlog like this: myisamlog [options] [file_name [tbl_name] ...] The default operation is update (-u). If a recovery is done (-r), all writes and possibly updates and deletes are done and errors are only counted. The default log file name is myisam.log if no log_file argument is given. If tables are named on the command line, only those tables are updated. myisamlog supports the following options: β€’ -?, -I Display a help message and exit. β€’ -c N Execute only N commands. β€’ -f N Specify the maximum number of open files. β€’ -F filepath/ Specify the file path with a trailing slash. β€’ -i Display extra information before exiting. β€’ -o offset Specify the starting offset. β€’ -p N Remove N components from path. β€’ -r Perform a recovery operation. β€’ -R record_pos_file record_pos Specify record position file and record position. β€’ -u Perform an update operation. β€’ -v Verbose mode. Print more output about what the program does. This option can be given multiple times to produce more and more output. β€’ -w write_file Specify the write file. β€’ -V Display version information. COPYRIGHT Copyright Β© 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. 
SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYISAMLOG(1)
myisamlog - display MyISAM log file contents
myisamlog [options] [log_file [tbl_name] ...]
null
null
unlz4
lz4 is an extremely fast lossless compression algorithm, based on the byte-aligned LZ77 family of compression schemes. lz4 offers compression speeds > 500 MB/s per core, linearly scalable with multi-core CPUs. It features an extremely fast decoder, offering speed in multiple GB/s per core, typically reaching RAM speed limit on multi-core systems. The native file format is the .lz4 format. Difference between lz4 and gzip lz4 supports a command line syntax similar but not identical to gzip(1). Differences are: β€’ lz4 compresses a single file by default (see -m for multiple files) β€’ lz4 file1 file2 means: compress file1 into file2 β€’ lz4 file.lz4 will default to decompression (use -z to force compression) β€’ lz4 preserves original files (see --rm to erase source file on completion) β€’ lz4 shows real-time notification statistics during compression or decompression of a single file (use -q to silence them) β€’ When no destination is specified, the result is sent to the implicit output, which depends on stdout status. When stdout is not the console, it becomes the implicit output. Otherwise, if stdout is the console, the implicit output is filename.lz4. β€’ It is considered bad practice to rely on implicit output in scripts, because the script's environment may change. Always use explicit output in scripts. -c ensures that output will be stdout. Conversely, providing a destination name, or using -m, ensures that the output will be either the specified name, or filename.lz4, respectively. Default behaviors can be modified by opt-in commands, detailed below. β€’ lz4 -m makes it possible to provide multiple input filenames, which will be compressed into files using suffix .lz4. Progress notifications become disabled by default (use -v to enable them). This mode has a behavior which more closely mimics the gzip command line, with the main remaining difference being that source files are preserved by default. β€’ Similarly, lz4 -m -d can decompress multiple *.lz4 files.
β€’ It's possible to opt-in to erase source files on successful compression or decompression, using the --rm command. β€’ Consequently, lz4 -m --rm behaves the same as gzip. Concatenation of .lz4 files It is possible to concatenate .lz4 files as is. lz4 will decompress such files as if they were a single .lz4 file. For example: lz4 file1 > foo.lz4 lz4 file2 >> foo.lz4 Then lz4cat foo.lz4 is equivalent to cat file1 file2.
lz4 - lz4, unlz4, lz4cat - Compress or decompress .lz4 files
lz4 [OPTIONS] [-|INPUT-FILE] OUTPUT-FILE unlz4 is equivalent to lz4 -d lz4cat is equivalent to lz4 -dcfm When writing scripts that need to decompress files, it is recommended to always use the name lz4 with appropriate arguments (lz4 -d or lz4 -dc) instead of the names unlz4 and lz4cat.
Short commands concatenation In some cases, some options can be expressed using short command -x or long command --long-word. Short commands can be concatenated together. For example, -d -c is equivalent to -dc. Long commands cannot be concatenated. They must be clearly separated by a space. Multiple commands When multiple contradictory commands are issued on a same command line, only the latest one will be applied. Operation mode -z --compress Compress. This is the default operation mode when no operation mode option is specified, no other operation mode is implied from the command name (for example, unlz4 implies --decompress), nor from the input file name (for example, a file extension .lz4 implies --decompress by default). -z can also be used to force compression of an already compressed .lz4 file. -d --decompress --uncompress Decompress. --decompress is also the default operation when the input filename has an .lz4 extension. -t --test Test the integrity of compressed .lz4 files. The decompressed data is discarded. No files are created nor removed. -b# Benchmark mode, using # compression level. --list List information about .lz4 files. note : current implementation is limited to single-frame .lz4 files. Operation modifiers -# Compression level, with # being any value from 1 to 12. Higher values trade compression speed for compression ratio. Values above 12 are considered the same as 12. Recommended values are 1 for fast compression (default), and 9 for high compression. Speed/compression trade-off will vary depending on data to compress. Decompression speed remains fast at all settings. --fast[=#] Switch to ultra-fast compression levels. The higher the value, the faster the compression speed, at the cost of some compression ratio. If =# is not present, it defaults to 1. This setting overrides compression level if one was set previously. Similarly, if a compression level is set after --fast, it overrides it. --best Set highest compression level. Same as -12. 
--favor-decSpeed Generate compressed data optimized for decompression speed. Compressed data will be larger as a consequence (typically by ~0.5%), while decompression speed will be improved by 5-20%, depending on use cases. This option only works in combination with very high compression levels (>=10). -D dictionaryName Compress, decompress or benchmark using dictionary dictionaryName. Compression and decompression must use the same dictionary to be compatible. Using a different dictionary during decompression will either abort due to decompression error, or generate a checksum error. -f --[no-]force This option has several effects: If the target file already exists, overwrite it without prompting. When used with --decompress and lz4 cannot recognize the type of the source file, copy the source file as is to standard output. This allows lz4cat --force to be used like cat(1) for files that have not been compressed with lz4. -c --stdout --to-stdout Force write to standard output, even if it is the console. -m --multiple Multiple input files. Compressed file names will be appended a .lz4 suffix. This mode also reduces notification level. Can also be used to list multiple files. lz4 -m has a behavior equivalent to gzip -k (it preserves source files by default). -r Operate recursively on directories. This mode also sets -m (multiple input files). -B# Block size [4-7](default : 7) -B4= 64KB ; -B5= 256KB ; -B6= 1MB ; -B7= 4MB -BI Produce independent blocks (default) -BD Blocks depend on predecessors (improves compression ratio, more noticeable on small blocks) -BX Generate block checksums (default:disabled) --[no-]frame-crc Select frame checksum (default:enabled) --no-crc Disable both frame and block checksums --[no-]content-size Header includes original size (default:not present) Note : this option can only be activated when the original size can be determined, hence for a file. It won't work with unknown source size, such as stdin or pipe.
--[no-]sparse Sparse mode support (default:enabled on file, disabled on stdout) -l Use Legacy format (typically for Linux Kernel compression) Note : -l is not compatible with -m (--multiple) nor -r Other options -v --verbose Verbose mode -q --quiet Suppress warnings and real-time statistics; specify twice to suppress errors too -h -H --help Display help/long help and exit -V --version Display Version number and exit -k --keep Preserve source files (default behavior) --rm Delete source files on successful compression or decompression -- Treat all subsequent arguments as files Benchmark mode -b# Benchmark file(s), using # compression level -e# Benchmark multiple compression levels, from b# to e# (included) -i# Minimum evaluation time in seconds [1-9] (default : 3) BUGS Report bugs at: https://github.com/lz4/lz4/issues AUTHOR Yann Collet lz4 v1.9.4 August 2022 LZ4(1)
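The -m and --rm modifiers described above combine into a gzip-style workflow. A minimal sketch, assuming lz4(1) is installed (the file names are made up for illustration):

```shell
# Sketch: gzip-style batch compression with -m / --rm.
cd "$(mktemp -d)"
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt
lz4 -m --rm a.txt b.txt          # compress to a.txt.lz4 and b.txt.lz4, remove the sources
lz4 -m -d a.txt.lz4 b.txt.lz4    # restore a.txt and b.txt, keeping the .lz4 files
```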
null
mergesolv
The mergesolv tool reads all solv files specified on the command line, and writes a merged version to standard output. -X Autoexpand SUSE pattern and product provides into packages. AUTHOR Michael Schroeder <mls@suse.de> libsolv 09/14/2018 MERGESOLV(1)
mergesolv - merge multiple files in solv format into a single one
mergesolv [OPTIONS] FILE1.solv FILE2.solv ...
null
null
fits2bitmap
null
null
null
null
null
qlalr
null
null
null
null
null
navigator-updater
null
null
null
null
null
gdbus
null
null
null
null
null
zstdmt
null
null
null
null
null
uvicorn
null
null
null
null
null
fpack
null
null
null
null
null
marker_single
null
null
null
null
null
verba
null
null
null
null
null
odbc_config
odbc_config provides information about how unixODBC was compiled for your system and architecture. The information generated is useful for building unixODBC clients and similar programs.
odbc_config - Generates compiler information intended for use when developing a unixODBC client
odbc_config [--prefix] [--exec-prefix] [--include-prefix] [--lib-prefix] [--bin-prefix] [--version] [--libs] [--static-libs] [--libtool-libs] [--cflags] [--odbcversion] [--odbcini] [--odbcinstini] [--header] [--ulen]
--prefix Prefix for architecture-independent files. --exec-prefix Prefix for architecture-dependent files. --include-prefix Directory containing C header files for unixODBC. --lib-prefix Directory containing unixODBC shared libraries. --bin-prefix Directory containing unixODBC utilities. --version Current version of unixODBC. --libs Compiler flags for linking dynamic libraries. --static-libs Absolute file name of the unixODBC static library (libodbc.a). --libtool-libs Absolute file name of the unixODBC libtool library (libodbc.la). --cflags Outputs compiler flags to find header files, as well as critical compiler flags and defines used when compiling the libodbc library. --odbcversion Version of the ODBC specification used by unixODBC. --odbcini Absolute file name of the system-wide DSN configuration file odbc.ini. --odbcinstini Absolute file name of the unixODBC driver configuration file odbcinst.ini. --header Definitions of C preprocessor constants used by unixODBC. Generated output can be piped into a C header file. --ulen Compiler flag that defines SIZEOF_SQLULEN. SEE ALSO unixODBC(7), odbcinst.ini(5), odbc.ini(5) "The unixODBC Administrator Manual (HTML)" AUTHORS The authors of unixODBC are Peter Harvey <pharvey@codebydesign.com> and Nick Gorham <nick@lurcher.org>. For a full list of contributors, refer to the AUTHORS file. COPYRIGHT unixODBC is licensed under the GNU Lesser General Public License. For details about the license, see the COPYING file. version 2.3.12 Thu 07 Jan 2021 odbc_config(1)
null
compile_et
Compile_et converts a table listing error-code names and associated messages into a C source file suitable for use with the com_err(3) library. The source file name must end with a suffix of ``.et''; the file consists of a declaration supplying the name (up to four characters long) of the error-code table: error_table name followed by up to 256 entries of the form: error_code name, " string " and a final end to indicate the end of the table. The name of the table is used to construct the name of a subroutine initialize_XXXX_error_table which must be called in order for the com_err library to recognize the error table. The various error codes defined are assigned sequentially increasing numbers (starting with a large number computed as a hash function of the name of the table); thus for compatibility it is suggested that new codes be added only to the end of an existing table, and that no codes be removed from tables. The names defined in the table are placed into a C header file with preprocessor directives defining them as integer constants of up to 32 bits in magnitude. A C source file is also generated which should be compiled and linked with the object files which reference these error codes; it contains the text of the messages and the initialization subroutine. Both C files have names derived from that of the original source file, with the ``.et'' suffix replaced by ``.c'' and ``.h''. A ``#'' in the source file is treated as a comment character, and all remaining text to the end of the source line will be ignored. If a text domain is provided with the --textdomain option, error messages will be looked up in the specified domain with gettext. If a locale directory is also provided with the --localedir option, the text domain will be bound to the specified locale directory with bindtextdomain when the error table is initialized. BUGS Since compile_et uses a very simple parser based on yacc(1), its error recovery leaves much to be desired. SEE ALSO com_err (3). 
Ken Raeburn, "A Common Error Description Library for UNIX". SIPB 22 Nov 1988 COMPILE_ET(1)
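A minimal example error table, following the syntax described above (the table and code names here are hypothetical):

```
error_table foo
error_code FOO_NOT_FOUND,  "Object not found"
error_code FOO_NO_ACCESS,  "Permission denied"
end
```

Running compile_et on foo.et would generate foo.h (defining FOO_NOT_FOUND and FOO_NO_ACCESS as integer constants) and foo.c (containing the message text and the initialize_foo_error_table subroutine).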
compile_et - error table compiler
compile_et [ --textdomain domain [ --localedir dir ] ] file
null
null
qscxmlc
null
null
null
null
null
numba
null
null
null
null
null
lzdiff
xzcmp and xzdiff compare uncompressed contents of two files. Uncompressed data and options are passed to cmp(1) or diff(1) unless --help or --version is specified. If both file1 and file2 are specified, they can be uncompressed files or files in formats that xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) can decompress. The required decompression commands are determined from the filename suffixes of file1 and file2. A file with an unknown suffix is assumed to be either uncompressed or in a format that xz(1) can decompress. If only one filename is provided, file1 must have a suffix of a supported compression format and the name for file2 is assumed to be file1 with the compression format suffix removed. The commands lzcmp and lzdiff are provided for backward compatibility with LZMA Utils. EXIT STATUS If a decompression error occurs, the exit status is 2. Otherwise the exit status of cmp(1) or diff(1) is used. SEE ALSO cmp(1), diff(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1) Tukaani 2024-02-13 XZDIFF(1)
xzcmp, xzdiff, lzcmp, lzdiff - compare compressed files
xzcmp [option...] file1 [file2] xzdiff ... lzcmp ... lzdiff ...
null
null
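A minimal shell sketch of the suffix-driven comparison described above, checking a file against its own .xz copy; it skips quietly when the xz tools are not installed:

```shell
# Compare a file with its xz-compressed copy; identical contents give exit status 0.
command -v xz >/dev/null 2>&1 && command -v xzcmp >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
printf 'hello\n' > "$tmp/a.txt"
xz -k "$tmp/a.txt"                  # produces a.txt.xz, keeping a.txt
xzcmp "$tmp/a.txt.xz" "$tmp/a.txt"  # the .xz suffix selects the decompressor
status=$?
echo "xzcmp exit status: $status"
```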
h5redeploy
null
null
null
null
null
sclient
sclient is a sample application, primarily useful for testing purposes. It contacts a sample server sserver(8) and authenticates to it using Kerberos version 5 tickets, then displays the server's response. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. SEE ALSO kinit(1), sserver(8), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 SCLIENT(1)
sclient - sample Kerberos version 5 client
sclient remotehost
null
null
blackd
null
null
null
null
null
activate
null
null
null
null
null
msgexec
Applies a command to all translations of a translation catalog. The COMMAND can be any program that reads a translation from standard input. It is invoked once for each translation. Its output becomes msgexec's output. msgexec's return code is the maximum return code across all invocations. A special builtin command called '0' outputs the translation, followed by a null byte. The output of "msgexec 0" is suitable as input for "xargs -0". Command input: --newline add newline at the end of input Mandatory arguments to long options are mandatory for short options too. Input file location: -i, --input=INPUTFILE input PO file -D, --directory=DIRECTORY add DIRECTORY to list for input files search If no input file is given or if it is -, standard input is read. Input file syntax: -P, --properties-input input file is in Java .properties syntax --stringtable-input input file is in NeXTstep/GNUstep .strings syntax Informative output: -h, --help display this help and exit -V, --version output version information and exit AUTHOR Written by Bruno Haible. REPORTING BUGS Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT Copyright Β© 2001-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO The full documentation for msgexec is maintained as a Texinfo manual. If the info and msgexec programs are properly installed at your site, the command info msgexec should give you access to the complete manual. GNU gettext-tools 0.22.5 February 2024 MSGEXEC(1)
msgexec - process translations of message catalog
msgexec [OPTION] COMMAND [COMMAND-OPTION]
null
null
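A small sketch of the builtin '0' command: each translation of a throwaway PO file (the catalog content is invented) is printed NUL-terminated, then split back into lines with tr. The script skips quietly if msgexec is not installed:

```shell
command -v msgexec >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
cat > "$tmp/tiny.po" <<'EOF'
msgid ""
msgstr "Content-Type: text/plain; charset=UTF-8\n"

msgid "Hello"
msgstr "Bonjour"

msgid "World"
msgstr "Monde"
EOF
# The builtin '0' emits each translation followed by a NUL byte.
out=$(msgexec -i "$tmp/tiny.po" 0 | tr '\0' '\n')
echo "$out"
```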
jupyter-console
null
null
null
null
null
rst2xml.py
null
null
null
null
null
pyuic5
null
null
null
null
null
glib-genmarshal
null
null
null
null
null
pcre-config
null
null
null
null
null
ngettext
The ngettext program translates a natural language message into the user's language, by looking up the translation in a message catalog, and chooses the appropriate plural form, which depends on the number COUNT and the language of the message catalog where the translation was found. Display native language translation of a textual message whose grammatical form depends on a number. -d, --domain=TEXTDOMAIN retrieve translated message from TEXTDOMAIN -c, --context=CONTEXT specify context for MSGID -e enable expansion of some escape sequences -E (ignored for compatibility) [TEXTDOMAIN] retrieve translated message from TEXTDOMAIN MSGID MSGID-PLURAL translate MSGID (singular) / MSGID-PLURAL (plural) COUNT choose singular/plural form based on this value Informative output: -h, --help display this help and exit -V, --version display version information and exit If the TEXTDOMAIN parameter is not given, the domain is determined from the environment variable TEXTDOMAIN. If the message catalog is not found in the regular directory, another location can be specified with the environment variable TEXTDOMAINDIR. Standard search directory: /opt/homebrew/Cellar/gettext/0.22.5/share/locale AUTHOR Written by Ulrich Drepper. REPORTING BUGS Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT Copyright Β© 1995-1997, 2000-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO The full documentation for ngettext is maintained as a Texinfo manual. If the info and ngettext programs are properly installed at your site, the command info ngettext should give you access to the complete manual. GNU gettext-runtime 0.22.5 February 2024 NGETTEXT(1)
ngettext - translate message and choose plural form
ngettext [OPTION] [TEXTDOMAIN] MSGID MSGID-PLURAL COUNT
null
null
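When no catalog exists for the domain, ngettext falls back to the MSGID/MSGID-PLURAL arguments using the built-in rule (singular only when COUNT is 1). A quick sketch, assuming the gettext runtime is installed (it is skipped otherwise); the domain name is invented so that no catalog is found:

```shell
command -v ngettext >/dev/null 2>&1 || exit 0
# Point TEXTDOMAIN at a domain with no catalog so the fallback rule is used.
one=$(TEXTDOMAIN=demo-no-such-domain ngettext "one file" "many files" 1)
many=$(TEXTDOMAIN=demo-no-such-domain ngettext "one file" "many files" 3)
echo "$one / $many"
```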
imcopy
null
null
null
null
null
marker_api
null
null
null
null
null
rst2odt.py
null
null
null
null
null
pyct
null
null
null
null
null
h5debug
null
null
null
null
null
seg_hack
null
null
null
null
null
qvkgen
null
null
null
null
null
qgltf
null
null
null
null
null
dwebp
This manual page documents the dwebp command. dwebp decompresses WebP files into PNG, PAM, PPM or PGM images. Note: Animated WebP files are not supported.
dwebp - decompress a WebP file to an image file
dwebp [options] input_file.webp
The basic options are: -h Print usage summary. -version Print the version number (as major.minor.revision) and exit. -o string Specify the name of the output file (as PNG format by default). Using "-" as output name will direct output to 'stdout'. -- string Explicitly specify the input file. This option is useful if the input file starts with an '-' for instance. This option must appear last. Any other options afterward will be ignored. If the input file is "-", the data will be read from stdin instead of a file. -bmp Change the output format to uncompressed BMP. -tiff Change the output format to uncompressed TIFF. -pam Change the output format to PAM (retains alpha). -ppm Change the output format to PPM (discards alpha). -pgm Change the output format to PGM. The output consists of luma/chroma samples instead of RGB, using the IMC4 layout. This option is mainly for verification and debugging purposes. -yuv Change the output format to raw YUV. The output consists of luma/chroma-U/chroma-V samples instead of RGB, saved sequentially as individual planes. This option is mainly for verification and debugging purposes. -nofancy Don't use the fancy upscaler for YUV420. This may lead to jaggy edges (especially the red ones), but should be faster. -nofilter Don't use the in-loop filtering process even if it is required by the bitstream. This may produce visible blocks on the non- compliant output, but it will make the decoding faster. -dither strength Specify a dithering strength between 0 and 100. Dithering is a post-processing effect applied to chroma components in lossy compression. It helps by smoothing gradients and avoiding banding artifacts. -alpha_dither If the compressed file contains a transparency plane that was quantized during compression, this flag will allow dithering the reconstructed plane in order to generate smoother transparency gradients. -nodither Disable all dithering (default). -mt Use multi-threading for decoding, if possible. 
-crop x_position y_position width height Crop the decoded picture to a rectangle with top-left corner at coordinates (x_position, y_position) and size width x height. This cropping area must be fully contained within the source rectangle. The top-left corner will be snapped to even coordinates if needed. This option is meant to reduce the memory needed for cropping large images. Note: the cropping is applied before any scaling. -flip Flip decoded image vertically (can be useful for OpenGL textures for instance). -resize, -scale width height Rescale the decoded picture to dimension width x height. This option is mostly intended to reduce the memory needed to decode large images, when only a small version is needed (thumbnail, preview, etc.). Note: scaling is applied after cropping. If either (but not both) of the width or height parameters is 0, the value will be calculated preserving the aspect-ratio. -quiet Do not print anything. -v Print extra information (decoding time in particular). -noasm Disable all assembly optimizations. BUGS Please report all bugs to the issue tracker: https://bugs.chromium.org/p/webp Patches welcome! See this page to get started: https://www.webmproject.org/code/contribute/submitting-patches/
dwebp picture.webp -o output.png dwebp picture.webp -ppm -o output.ppm dwebp -o output.ppm -- ---picture.webp cat picture.webp | dwebp -o - -- - > output.ppm AUTHORS dwebp is a part of libwebp and was written by the WebP team. The latest source tree is available at https://chromium.googlesource.com/webm/libwebp This manual page was written by Pascal Massimino <pascal.massimino@gmail.com>, for the Debian project (and may be used by others). SEE ALSO cwebp(1), gif2webp(1), webpmux(1) Please refer to https://developers.google.com/speed/webp/ for additional information. Output file format details PAM: http://netpbm.sourceforge.net/doc/pam.html PGM: http://netpbm.sourceforge.net/doc/pgm.html PPM: http://netpbm.sourceforge.net/doc/ppm.html PNG: http://www.libpng.org/pub/png/png-sitemap.html#info November 17, 2021 DWEBP(1)
convert-caffe2-to-onnx
null
null
null
null
null
jupyter-events
null
null
null
null
null
plasma_store
null
null
null
null
null
ftfy
null
null
null
null
null
arm64-apple-darwin20.0.0-codesign_allocate
codesign_allocate sets up a Mach-O file used by the dynamic linker so space for code signing data of the specified size for the specified architecture is embedded in the Mach-O file. The program must be passed one -a argument or one -A argument for each architecture in a universal file, or exactly one -a or -A for a thin file. -i oldfile specifies the input file as oldfile. -o newfile specifies the output file as newfile. -a arch size specifies for the architecture arch that the size of the code signing data is to be size. The value of size must be a multiple of 16. -A cputype cpusubtype size specifies for the architecture as a pair of decimal integers for the cputype and cpusubtype that the size of the code signing data is to be size. The value of size must be a multiple of 16. -r remove the code signature data and the LC_CODE_SIGNATURE load command. This is the same as specifying the -a or -A option with a size of zero. -p page align the code signature data by padding the string table and changing its size. This is not the default as codesign(1) currently can't use this option. Apple, Inc. April 17, 2017 CODESIGN_ALLOCATE(1)
codesign_allocate - add code signing data to a Mach-O file
codesign_allocate -i oldfile [ -a arch size ]... [ -A cputype cpusubtype size ]... -o newfile
null
null
replace
The replace utility program changes strings in place in files or on the standard input. Note The replace utility is deprecated as of MySQL 5.7.18 and is removed in MySQL 8.0. Invoke replace in one of the following ways: shell> replace from to [from to] ... -- file_name [file_name] ... shell> replace from to [from to] ... < file_name from represents a string to look for and to represents its replacement. There can be one or more pairs of strings. Use the -- option to indicate where the string-replacement list ends and the file names begin. In this case, any file named on the command line is modified in place, so you may want to make a copy of the original before converting it. replace prints a message indicating which of the input files it actually modifies. If the -- option is not given, replace reads the standard input and writes to the standard output. replace uses a finite state machine to match longer strings first. It can be used to swap strings. For example, the following command swaps a and b in the given files, file1 and file2: shell> replace a b b a -- file1 file2 ... replace supports the following options. β€’ -?, -I Display a help message and exit. β€’ -#debug_options Enable debugging. β€’ -s Silent mode. Print less information about what the program does. β€’ -v Verbose mode. Print more information about what the program does. β€’ -V Display version information and exit. COPYRIGHT Copyright Β© 1997, 2018, Oracle and/or its affiliates. All rights reserved. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 5.7 10/03/2018 REPLACE(1)
replace - a string-replacement utility
replace arguments
null
null
coloredlogs
null
null
null
null
null
gst-stats-1.0
gst-stats-1.0 is a tool that analyses information collected from a log file based on GStreamer tracer messages.
gst-stats-1.0 - print info gathered from a GStreamer log file
gst-stats-1.0 [OPTION...] FILE
gst-stats-1.0 accepts the following arguments and options: FILE Name of a file -h, --help Print help synopsis and available FLAGS --gst-help-all Show all help options --gst-help-gst Show GStreamer options SEE ALSO gst-launch-1.0(1) AUTHOR The GStreamer team at http://gstreamer.freedesktop.org/ April 2017 GStreamer(1)
null
size
Size (without the -m option) prints the (decimal) number of bytes required by the __TEXT, __DATA and __OBJC segments. All other segments are totaled and that size is listed in the `others' column. The final two columns are the sum in decimal and hexadecimal. If no file is specified, a.out is used. The options to size(1) are: - Treat the remaining arguments as names of object files, not options to size(1). -m Print the sizes of the Mach-O segments and sections as well as the total sizes of the sections in each segment and the total size of the segments in the file. -l When used with the -m option, also print the addresses and offsets of the sections and segments. -x When used with the -m option, print the values in hexadecimal (with leading 0x's) rather than decimal. -arch arch_type Specifies the architecture, arch_type, of the file for size(1) to operate on when the file is a universal file. (See arch(3) for the currently known arch_types.) The arch_type can be "all" to operate on all architectures in the file. The default is to display only the host architecture, if the file contains it; otherwise, all architectures in the file are shown. SEE ALSO otool(1) BUGS The size of common symbols can't be reflected in any of the numbers for relocatable object files. Apple Computer, Inc. July 28, 2005 SIZE(1)
size - print the size of the sections in an object file
size [ option ... ] [ object ... ]
null
null
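A quick sanity-check invocation. Note that the column layout above describes Apple's size(1); GNU binutils' size prints different columns, so this sketch only checks that the tool runs and produces a report:

```shell
command -v size >/dev/null 2>&1 || exit 0
out=$(size /bin/ls)   # any installed binary works as input
echo "$out"
```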
qdbus
null
null
null
null
null
qdbuscpp2xml
null
null
null
null
null
dask-worker
null
null
null
null
null
pt2to3
null
null
null
null
null
kswitch
kswitch makes the specified credential cache the primary cache for the collection, if a cache collection is available.
kswitch - switch primary ticket cache
kswitch {-c cachename|-p principal}
-c cachename Directly specifies the credential cache to be made primary. -p principal Causes the cache collection to be searched for a cache containing credentials for principal. If one is found, that cache is made primary. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. FILES KCM: Default location of Kerberos 5 credentials cache SEE ALSO kinit(1), kdestroy(1), klist(1), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KSWITCH(1)
null
jupyter-serverextension
null
null
null
null
null
normalizer
null
null
null
null
null
twist
null
null
null
null
null
lzmadec
xzdec is a liblzma-based decompression-only tool for .xz (and only .xz) files. xzdec is intended to work as a drop-in replacement for xz(1) in the most common situations where a script has been written to use xz --decompress --stdout (and possibly a few other commonly used options) to decompress .xz files. lzmadec is identical to xzdec except that lzmadec supports .lzma files instead of .xz files. To reduce the size of the executable, xzdec doesn't support multithreading or localization, and doesn't read options from XZ_DEFAULTS and XZ_OPT environment variables. xzdec doesn't support displaying intermediate progress information: sending SIGINFO to xzdec does nothing, but sending SIGUSR1 terminates the process instead of displaying progress information.
xzdec, lzmadec - Small .xz and .lzma decompressors
xzdec [option...] [file...] lzmadec [option...] [file...]
-d, --decompress, --uncompress Ignored for xz(1) compatibility. xzdec supports only decompression. -k, --keep Ignored for xz(1) compatibility. xzdec never creates or removes any files. -c, --stdout, --to-stdout Ignored for xz(1) compatibility. xzdec always writes the decompressed data to standard output. -q, --quiet Specifying this once does nothing since xzdec never displays any warnings or notices. Specify this twice to suppress errors. -Q, --no-warn Ignored for xz(1) compatibility. xzdec never uses the exit status 2. -h, --help Display a help message and exit successfully. -V, --version Display the version number of xzdec and liblzma. EXIT STATUS 0 All was good. 1 An error occurred. xzdec doesn't have any warning messages like xz(1) has, thus the exit status 2 is not used by xzdec. NOTES Use xz(1) instead of xzdec or lzmadec for normal everyday use. xzdec or lzmadec are meant only for situations where it is important to have a smaller decompressor than the full-featured xz(1). xzdec and lzmadec are not really that small. The size can be reduced further by dropping features from liblzma at compile time, but that shouldn't usually be done for executables distributed in typical non- embedded operating system distributions. If you need a truly small .xz decompressor, consider using XZ Embedded. SEE ALSO xz(1) XZ Embedded: <https://tukaani.org/xz/embedded.html> Tukaani 2024-04-08 XZDEC(1)
null
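The drop-in role for `xz --decompress --stdout` can be sketched as a round trip through a pipeline; the script skips quietly if the tools are absent:

```shell
command -v xz >/dev/null 2>&1 && command -v xzdec >/dev/null 2>&1 || exit 0
out=$(printf 'round trip\n' | xz | xzdec)   # xzdec reads stdin, writes stdout
echo "$out"
```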
fax2ps
null
null
null
null
null
genbrk
genbrk reads the break (boundary) rule source code from rule-file and creates a break iteration data file. Normally this data file has the .brk extension. The details of the rule syntax can be found in ICU's User Guide.
genbrk - Compiles ICU break iteration rules source files into binary data files
genbrk [ -h, -?, --help ] [ -V, --version ] [ -c, --copyright ] [ -v, --verbose ] [ -d, --destdir destination ] [ -i, --icudatadir directory ] -r, --rules rule-file -o, --out output-file
-h, -?, --help Print help about usage and exit. -V, --version Print the version of genbrk and exit. -c, --copyright Embeds the standard ICU copyright into the output-file. -v, --verbose Display extra informative messages during execution. -d, --destdir destination Set the destination directory of the output-file to destination. -i, --icudatadir directory Look for any necessary ICU data files in directory. For example, the file pnames.icu must be located when ICU's data is not built as a shared library. The default ICU data directory is specified by the environment variable ICU_DATA. Most configurations of ICU do not require this argument. -r, --rules rule-file The source file to read. -o, --out output-file The output data file to write. CAVEATS When the rule-file contains a byte order mark (BOM) at the beginning of the file, which is the Unicode character U+FEFF, then the rule-file is interpreted as Unicode. Without the BOM, the file is interpreted in the current operating system default codepage. In order to eliminate any ambiguity of the encoding for how the rule-file was written, it is recommended that you write this file in UTF-8 with the BOM. ENVIRONMENT ICU_DATA Specifies the directory containing ICU data. Defaults to ${prefix}/share/icu/68.1/. Some tools in ICU depend on the presence of the trailing slash. It is thus important to make sure that it is present if ICU_DATA is set. AUTHORS George Rhoten Andy Heninger VERSION 1.0 COPYRIGHT Copyright (C) 2005 International Business Machines Corporation and others SEE ALSO http://www.icu-project.org/userguide/boundaryAnalysis.html ICU MANPAGE 2 December 2005 GENBRK(1)
null
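As a rough sketch, a rule-file uses ICU's rule-based break iterator (RBBI) syntax; the rules below are hypothetical and only hint at the grammar described in the User Guide:

```
# Treat a run of letters or a run of decimal digits as one unit.
# The numbers in braces are arbitrary rule-status tags reported
# through the break iterator.
$Letter = [\p{Letter}];
$Digit = [\p{Nd}];
$Letter+ {200};
$Digit+ {100};
```

Such a file would then be compiled with, for example, genbrk -r myrules.txt -o myrules.brk.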
surya_layout
null
null
null
null
null
h5repart
null
null
null
null
null
lzgrep
xzgrep invokes grep(1) on uncompressed contents of files. The formats of the files are determined from the filename suffixes. Any file with a suffix supported by xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) will be decompressed; all other files are assumed to be uncompressed. If no files are specified or file is - then standard input is read. When reading from standard input, only files supported by xz(1) are decompressed. Other files are assumed to be in uncompressed form already. Most options of grep(1) are supported. However, the following options are not supported: -r, --recursive -R, --dereference-recursive -d, --directories=action -Z, --null -z, --null-data --include=glob --exclude=glob --exclude-from=file --exclude-dir=glob xzegrep is an alias for xzgrep -E. xzfgrep is an alias for xzgrep -F. The commands lzgrep, lzegrep, and lzfgrep are provided for backward compatibility with LZMA Utils. EXIT STATUS 0 At least one match was found from at least one of the input files. No errors occurred. 1 No matches were found from any of the input files. No errors occurred. >1 One or more errors occurred. It is unknown if matches were found. ENVIRONMENT GREP If GREP is set to a non-empty value, it is used instead of grep, grep -E, or grep -F. SEE ALSO grep(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1), zgrep(1) Tukaani 2024-02-13 XZGREP(1)
xzgrep - search possibly-compressed files for patterns
xzgrep [option...] [pattern_list] [file...] xzegrep ... xzfgrep ... lzgrep ... lzegrep ... lzfgrep ...
null
null
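Because the decompressor is chosen from the filename suffix, a .gz file is handed to gzip(1) before grep(1) ever sees the text. A minimal sketch, skipped quietly if the tools are absent:

```shell
command -v xzgrep >/dev/null 2>&1 && command -v gzip >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
printf 'alpha\nbeta\n' | gzip > "$tmp/words.gz"
match=$(xzgrep beta "$tmp/words.gz")   # the .gz suffix selects gzip for decompression
echo "$match"
```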
seg_addr_table
null
null
null
null
null
msgunfmt
Convert binary message catalog to Uniforum style .po file. Mandatory arguments to long options are mandatory for short options too. Operation mode: -j, --java Java mode: input is a Java ResourceBundle class --csharp C# mode: input is a .NET .dll file --csharp-resources C# resources mode: input is a .NET .resources file --tcl Tcl mode: input is a tcl/msgcat .msg file Input file location: FILE ... input .mo files If no input file is given or if it is -, standard input is read. Input file location in Java mode: -r, --resource=RESOURCE resource name -l, --locale=LOCALE locale name, either language or language_COUNTRY The class name is determined by appending the locale name to the resource name, separated with an underscore. The class is located using the CLASSPATH. Input file location in C# mode: -r, --resource=RESOURCE resource name -l, --locale=LOCALE locale name, either language or language_COUNTRY -d DIRECTORY base directory for locale dependent .dll files The -l and -d options are mandatory. The .dll file is located in a subdirectory of the specified directory whose name depends on the locale. Input file location in Tcl mode: -l, --locale=LOCALE locale name, either language or language_COUNTRY -d DIRECTORY base directory of .msg message catalogs The -l and -d options are mandatory. The .msg file is located in the specified directory. Output file location: -o, --output-file=FILE write output to specified file The results are written to standard output if no output file is specified or if it is -. Output details: --color use colors and other text attributes always --color=WHEN use colors and other text attributes if WHEN. WHEN may be 'always', 'never', 'auto', or 'html'. 
--style=STYLEFILE specify CSS style rule file for --color -e, --no-escape do not use C escapes in output (default) -E, --escape use C escapes in output, no extended chars --force-po write PO file even if empty -i, --indent write indented output style --strict write strict uniforum style -p, --properties-output write out a Java .properties file --stringtable-output write out a NeXTstep/GNUstep .strings file -w, --width=NUMBER set output page width --no-wrap do not break long message lines, longer than the output page width, into several lines -s, --sort-output generate sorted output Informative output: -h, --help display this help and exit -V, --version output version information and exit -v, --verbose increase verbosity level AUTHOR Written by Ulrich Drepper. REPORTING BUGS Report bugs in the bug tracker at <https://savannah.gnu.org/projects/gettext> or by email to <bug-gettext@gnu.org>. COPYRIGHT Copyright Β© 1995-2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. SEE ALSO The full documentation for msgunfmt is maintained as a Texinfo manual. If the info and msgunfmt programs are properly installed at your site, the command info msgunfmt should give you access to the complete manual. GNU gettext-tools 0.22.5 February 2024 MSGUNFMT(1)
msgunfmt - uncompile message catalog from binary format
msgunfmt [OPTION] [FILE]...
null
null
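The decompile direction is easiest to see as a round trip: msgfmt compiles a throwaway PO file (catalog content invented) to a binary .mo, and msgunfmt recovers the PO text. Skipped quietly if gettext-tools are not installed:

```shell
command -v msgfmt >/dev/null 2>&1 && command -v msgunfmt >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
cat > "$tmp/de.po" <<'EOF'
msgid ""
msgstr "Content-Type: text/plain; charset=UTF-8\n"

msgid "Yes"
msgstr "Ja"
EOF
msgfmt -o "$tmp/de.mo" "$tmp/de.po"        # compile to binary catalog
msgunfmt -o "$tmp/roundtrip.po" "$tmp/de.mo"  # decompile back to PO
grep 'msgstr "Ja"' "$tmp/roundtrip.po"
```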
lprodump
null
null
null
null
null
marker
null
null
null
null
null
h5import
null
null
null
null
null
tiffcp
tiffcp combines one or more files created according to the Tag Image File Format, Revision 6.0 into a single TIFF file. Because the output file may be compressed using a different algorithm than the input files, tiffcp is most often used to convert between different compression schemes. By default, tiffcp will copy all the understood tags in a TIFF directory of an input file to the associated directory in the output file. tiffcp can be used to reorganize the storage characteristics of data in a file, but it is explicitly intended to not alter or convert the image data content in any way.
tiffcp - copy (and possibly convert) a TIFF file
tiffcp [ options ] src1.tif … srcN.tif dst.tif
-a Append to an existing output file instead of overwriting it. -b image subtract the following monochrome image from all others processed. This can be used to remove a noise bias from a set of images. This bias image is typically an image of noise the camera saw with its shutter closed. -B Force output to be written with Big-Endian byte order. This option only has an effect when the output file is created or overwritten and not when it is appended to. -C Suppress the use of "strip chopping" when reading images that have a single strip/tile of uncompressed data. -c Specify the compression to use for data written to the output file: -c none for no compression, -c packbits for PackBits compression, -c lzw for Lempel-Ziv & Welch compression, -c zip for Deflate compression, -c lzma for LZMA2 compression, -c jpeg for baseline JPEG compression, -c g3 for CCITT Group 3 (T.4) compression, -c g4 for CCITT Group 4 (T.6) compression, or -c sgilog for SGILOG compression. By default tiffcp will compress data according to the value of the Compression tag found in the source file. The CCITT Group 3 and Group 4 compression algorithms can only be used with bilevel data. Group 3 compression can be specified together with several T.4-specific options: β€’ 1d for 1-dimensional encoding, β€’ 2d for 2-dimensional encoding, and β€’ fill to force each encoded scanline to be zero-filled so that the terminating EOL code lies on a byte boundary. Group 3-specific options are specified by appending a :-separated list to the g3 option; e.g. -c g3:2d:fill to get 2D-encoded data with byte-aligned EOL codes. LZW, Deflate and LZMA2 compression can be specified together with a predictor value. A predictor value of 2 causes each scanline of the output image to undergo horizontal differencing before it is encoded; a value of 1 forces each scanline to be encoded without differencing. A value 3 is for floating point predictor which you can use if the encoded data are in floating point format. 
LZW-specific options are specified by appending a :-separated list to the lzw option; e.g. -c lzw:2 for LZW compression with horizontal differencing. Deflate and LZMA2 encoders support various compression levels (or encoder presets) set as character p and a preset number. p1 is the fastest one with the worst compression ratio and p9 is the slowest but with the best possible ratio; e.g. -c zip:3:p9 for Deflate encoding with maximum compression level and floating point predictor. For the Deflate codec, and in a libtiff build with libdeflate enabled, p12 is actually the maximum level. For the Deflate codec, and in a libtiff build with libdeflate enabled, s0 can be used to require zlib to be used, and s1 for libdeflate (defaults to libdeflate when it is available). -f fillorder Specify the bit fill order to use in writing output data. By default, tiffcp will create a new file with the same fill order as the original. Specifying -f lsb2msb will force data to be written with the FillOrder tag set to LSB2MSB, while -f msb2lsb will force data to be written with the FillOrder tag set to MSB2LSB. -i Ignore non-fatal read errors and continue processing of the input file. -l Specify the length of a tile (in pixels). tiffcp attempts to set the tile dimensions so that no more than 8 kilobytes of data appear in a tile. -L Force output to be written with Little-Endian byte order. This option only has an effect when the output file is created or overwritten and not when it is appended to. -M Suppress the use of memory-mapped files when reading images. -o offset Set initial directory offset. -p Specify the planar configuration to use in writing image data that has one 8-bit sample per pixel. By default, tiffcp will create a new file with the same planar configuration as the original. Specifying -p contig will force data to be written with multi-sample data packed together, while -p separate will force samples to be written in separate planes. 
-r Specify the number of rows (scanlines) in each strip of data written to the output file. By default (or when value 0 is specified), tiffcp attempts to set the rows/strip so that no more than 8 kilobytes of data appear in a strip. If the special value -1 is specified, the number of rows per strip is unbounded and the entire image is written as a single strip. -s Force the output file to be written with data organized in strips (rather than tiles). -t Force the output file to be written with data organized in tiles (rather than strips). The -s and -t options can be used to force the resultant image to be written as strips or tiles of data, respectively. -w Specify the width of a tile (in pixels). tiffcp attempts to set the tile dimensions so that no more than 8 kilobytes of data appear in a tile. -x Force the output file to be written with PAGENUMBER value in sequence. -8 Write BigTIFF instead of classic TIFF format. -,= character substitute character for , in parsing image directory indices in files. This is necessary if filenames contain commas. Note that -,= with whitespace immediately following will disable the special meaning of the , entirely. See examples. -m size Set maximum memory allocation size (in MiB). The default is 256MiB. Set to 0 to disable the limit.
The following concatenates two files and writes the result using LZW encoding: tiffcp -c lzw a.tif b.tif result.tif To convert a G3 1d-encoded TIFF to a single strip of G4-encoded data the following might be used: tiffcp -c g4 -r 10000 g3.tif g4.tif (10000 is just a number that is larger than the number of rows in the source file.) To extract a selected set of images from a multi-image TIFF file, the file name may be immediately followed by a , separated list of image directory indices. The first image is always in directory 0. Thus, to copy the 1st and 3rd images of image file album.tif to result.tif: tiffcp album.tif,0,2 result.tif A trailing comma denotes remaining images in sequence. The following command copies all images except the first one: tiffcp album.tif,1, result.tif Given file CCD.tif whose first image is a noise bias followed by images which include that bias, subtract the noise from all those images following it (while decompressing) with the command: tiffcp -c none -b CCD.tif CCD.tif,1, result.tif If the file above were named CCD,X.tif, the -,= option would be required to correctly parse this filename with image numbers, as follows: tiffcp -c none -,=% -b CCD,X.tif CCD,X%1%.tif result.tif SEE ALSO tiffinfo (1), tiffdump (1), tiffsplit (1), libtiff (3tiff) AUTHOR LibTIFF contributors COPYRIGHT 1988-2022, LibTIFF contributors 4.6 September 8, 2023 TIFFCP(1)
myisam_ftdump
myisam_ftdump displays information about FULLTEXT indexes in MyISAM tables. It reads the MyISAM index file directly, so it must be run on the server host where the table is located. Before using myisam_ftdump, be sure to issue a FLUSH TABLES statement first if the server is running. myisam_ftdump scans and dumps the entire index, which is not particularly fast. On the other hand, the distribution of words changes infrequently, so it need not be run often. Invoke myisam_ftdump like this: myisam_ftdump [options] tbl_name index_num The tbl_name argument should be the name of a MyISAM table. You can also specify a table by naming its index file (the file with the .MYI suffix). If you do not invoke myisam_ftdump in the directory where the table files are located, the table or index file name must be preceded by the path name to the table's database directory. Index numbers begin with 0. Example: Suppose that the test database contains a table named mytexttable that has the following definition: CREATE TABLE mytexttable ( id INT NOT NULL, txt TEXT NOT NULL, PRIMARY KEY (id), FULLTEXT (txt) ) ENGINE=MyISAM; The index on id is index 0 and the FULLTEXT index on txt is index 1. If your working directory is the test database directory, invoke myisam_ftdump as follows: myisam_ftdump mytexttable 1 If the path name to the test database directory is /usr/local/mysql/data/test, you can also specify the table name argument using that path name. This is useful if you do not invoke myisam_ftdump in the database directory: myisam_ftdump /usr/local/mysql/data/test/mytexttable 1 You can use myisam_ftdump to generate a list of index entries in order of frequency of occurrence like this on Unix-like systems: myisam_ftdump -c mytexttable 1 | sort -r On Windows, use: myisam_ftdump -c mytexttable 1 | sort /R myisam_ftdump supports the following options: β€’ --help, -h -? 
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --help β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Display a help message and exit. β€’ --count, -c β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --count β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Calculate per-word statistics (counts and global weights). β€’ --dump, -d β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --dump β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Dump the index, including data offsets and word weights. β€’ --length, -l β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --length β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Report the length distribution. β€’ --stats, -s β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --stats β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Report global index statistics. This is the default operation if no other operation is specified. β€’ --verbose, -v β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --verbose β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Verbose mode. Print more output about what the program does. COPYRIGHT Copyright Β© 1997, 2023, Oracle and/or its affiliates. 
This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYISAM_FTDUMP(1)
myisam_ftdump - display full-text index information
myisam_ftdump [options] tbl_name index_num
null
null
lzmore
xzmore displays text from compressed files to a terminal using more(1). Files supported by xz(1) are decompressed; other files are assumed to be in uncompressed form already. If no files are given, xzmore reads from standard input. See the more(1) manual for the keyboard commands. Note that scrolling backwards might not be possible depending on the implementation of more(1). This is because xzmore uses a pipe to pass the decompressed data to more(1). xzless(1) uses less(1) which provides more advanced features. The command lzmore is provided for backward compatibility with LZMA Utils. ENVIRONMENT PAGER If PAGER is set, its value is used as the pager instead of more(1). SEE ALSO more(1), xz(1), xzless(1), zmore(1) Tukaani 2024-02-12 XZMORE(1)
xzmore, lzmore - view xz or lzma compressed (text) files
xzmore [file...] lzmore [file...]
null
null
conda-metapackage
null
null
null
null
null
innochecksum
innochecksum prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page, compares the calculated checksum to the stored checksum, and reports mismatches, which indicate damaged pages. It was originally developed to speed up verifying the integrity of tablespace files after power outages but can also be used after file copies. Because checksum mismatches cause InnoDB to deliberately shut down a running server, it may be preferable to use this tool rather than waiting for an in-production server to encounter the damaged pages. innochecksum cannot be used on tablespace files that the server already has open. For such files, you should use CHECK TABLE to check tables within the tablespace. Attempting to run innochecksum on a tablespace that the server already has open results in an Unable to lock file error. If checksum mismatches are found, restore the tablespace from backup or start the server and attempt to use mysqldump to make a backup of the tables within the tablespace. Invoke innochecksum like this: innochecksum [options] file_name innochecksum Options innochecksum supports the following options. For options that refer to page numbers, the numbers are zero-based. β€’ --help, -? β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --help β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Displays command line help. 
Example usage: innochecksum --help β€’ --info, -I β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --info β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Synonym for --help. Displays command line help. Example usage: innochecksum --info β€’ --version, -V β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --version β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Displays version information. Example usage: innochecksum --version β€’ --verbose, -v β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --verbose β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Verbose mode; prints a progress indicator to the log file every five seconds. In order for the progress indicator to be printed, the log file must be specified using the --log option. 
To turn on verbose mode, run: innochecksum --verbose To turn off verbose mode, run: innochecksum --verbose=FALSE The --verbose option and --log option can be specified at the same time. For example: innochecksum --verbose --log=/var/lib/mysql/test/logtest.txt To locate the progress indicator information in the log file, you can perform the following search: cat ./logtest.txt | grep -i "okay" The progress indicator information in the log file appears similar to the following: page 1663 okay: 2.863% done page 8447 okay: 14.537% done page 13695 okay: 23.568% done page 18815 okay: 32.379% done page 23039 okay: 39.648% done page 28351 okay: 48.789% done page 33023 okay: 56.828% done page 37951 okay: 65.308% done page 44095 okay: 75.881% done page 49407 okay: 85.022% done page 54463 okay: 93.722% done ... β€’ --count, -c β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --count β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Base name β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Print a count of the number of pages in the file and exit. 
Example usage: innochecksum --count ../data/test/tab1.ibd β€’ --start-page=num, -s num β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --start-page=# β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Numeric β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Start at this page number. Example usage: innochecksum --start-page=600 ../data/test/tab1.ibd or: innochecksum -s 600 ../data/test/tab1.ibd β€’ --end-page=num, -e num β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --end-page=# β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Numeric β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ 0 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Minimum Value β”‚ 0 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Maximum Value β”‚ 18446744073709551615 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ End at this page number. 
Example usage: innochecksum --end-page=700 ../data/test/tab1.ibd or: innochecksum -e 700 ../data/test/tab1.ibd β€’ --page=num, -p num β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --page=# β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Integer β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Check only this page number. Example usage: innochecksum --page=701 ../data/test/tab1.ibd β€’ --strict-check, -C β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --strict-check=algorithm β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Enumeration β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ crc32 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Valid Values β”‚ innodb crc32 none β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Specify a strict checksum algorithm. Options include innodb, crc32, and none.
In this example, the innodb checksum algorithm is specified: innochecksum --strict-check=innodb ../data/test/tab1.ibd In this example, the crc32 checksum algorithm is specified: innochecksum -C crc32 ../data/test/tab1.ibd The following conditions apply: β€’ If you do not specify the --strict-check option, innochecksum validates against innodb, crc32 and none. β€’ If you specify the none option, only checksums generated by none are allowed. β€’ If you specify the innodb option, only checksums generated by innodb are allowed. β€’ If you specify the crc32 option, only checksums generated by crc32 are allowed. β€’ --no-check, -n β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --no-check β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Ignore the checksum verification when rewriting a checksum. This option may only be used with the innochecksum --write option. If the --write option is not specified, innochecksum terminates. 
In this example, an innodb checksum is rewritten to replace an invalid checksum: innochecksum --no-check --write innodb ../data/test/tab1.ibd β€’ --allow-mismatches, -a β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --allow-mismatches=# β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Integer β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ 0 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Minimum Value β”‚ 0 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Maximum Value β”‚ 18446744073709551615 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The maximum number of checksum mismatches allowed before innochecksum terminates. The default setting is 0. If --allow-mismatches=N, where N>=0, N mismatches are permitted and innochecksum terminates at N+1. When --allow-mismatches is set to 0, innochecksum terminates on the first checksum mismatch. In this example, an existing innodb checksum is rewritten to set --allow-mismatches to 1. innochecksum --allow-mismatches=1 --write innodb ../data/test/tab1.ibd With --allow-mismatches set to 1, if there is a mismatch at page 600 and another at page 700 on a file with 1000 pages, the checksum is updated for pages 0-599 and 601-699. Because --allow-mismatches is set to 1, the checksum tolerates the first mismatch and terminates on the second mismatch, leaving page 600 and pages 700-999 unchanged. 
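The --allow-mismatches termination rule above (tolerate N mismatches, terminate at mismatch N+1) can be sketched in Python. The page layout here is invented purely for illustration; real InnoDB pages are 16 KiB by default with a different header/trailer format, and make_page and check are hypothetical names, not part of innochecksum.

```python
import zlib

PAGE_SIZE = 64  # toy page size for the sketch, NOT the real InnoDB page size

def make_page(payload: bytes) -> bytes:
    """Build a toy 'page': payload padded to size, with a CRC-32 of the
    body stored in the last 4 bytes (an invented layout)."""
    body = payload.ljust(PAGE_SIZE - 4, b"\0")
    return body + zlib.crc32(body).to_bytes(4, "big")

def check(pages: bytes, allow_mismatches: int = 0):
    """Verify each page's stored checksum, stopping at mismatch N+1,
    mirroring the --allow-mismatches rule described above."""
    mismatches = []
    for n in range(len(pages) // PAGE_SIZE):
        page = pages[n * PAGE_SIZE:(n + 1) * PAGE_SIZE]
        body, stored = page[:-4], int.from_bytes(page[-4:], "big")
        if zlib.crc32(body) != stored:
            mismatches.append(n)
            if len(mismatches) > allow_mismatches:
                break  # terminate at mismatch N+1
    return mismatches

good = make_page(b"hello")
bad = bytearray(make_page(b"world")); bad[0] ^= 0xFF  # corrupt one body byte
tablespace = good + bytes(bad) + good + bytes(bad)    # pages 0..3
print(check(tablespace))                      # stops at first bad page: [1]
print(check(tablespace, allow_mismatches=1))  # tolerates one mismatch: [1, 3]
```

With allow_mismatches=1, the second run reports page 1, continues past it, and terminates on the second mismatch at page 3, matching the page-600/page-700 example in the text.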
β€’ --write=name, -w num β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --write=algorithm β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Enumeration β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ crc32 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Valid Values β”‚ innodb crc32 β”‚ β”‚ β”‚ none β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Rewrite a checksum. When rewriting an invalid checksum, the --no-check option must be used together with the --write option. The --no-check option tells innochecksum to ignore verification of the invalid checksum. You do not have to specify the --no-check option if the current checksum is valid. An algorithm must be specified when using the --write option. Possible values for the --write option are: β€’ innodb: A checksum calculated in software, using the original algorithm from InnoDB. β€’ crc32: A checksum calculated using the crc32 algorithm, possibly done with a hardware assist. β€’ none: A constant number. The --write option rewrites entire pages to disk. If the new checksum is identical to the existing checksum, the new checksum is not written to disk in order to minimize I/O. innochecksum obtains an exclusive lock when the --write option is used. 
In this example, a crc32 checksum is written for tab1.ibd: innochecksum -w crc32 ../data/test/tab1.ibd In this example, a crc32 checksum is rewritten to replace an invalid crc32 checksum: innochecksum --no-check --write crc32 ../data/test/tab1.ibd β€’ --page-type-summary, -S β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --page-type-summary β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ Boolean β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Display a count of each page type in a tablespace. Example usage: innochecksum --page-type-summary ../data/test/tab1.ibd Sample output for --page-type-summary: File::../data/test/tab1.ibd ================PAGE TYPE SUMMARY============== #PAGE_COUNT PAGE_TYPE =============================================== 2 Index page 0 Undo log page 1 Inode page 0 Insert buffer free list page 2 Freshly allocated page 1 Insert buffer bitmap 0 System page 0 Transaction system page 1 File Space Header 0 Extent descriptor page 0 BLOB page 0 Compressed BLOB page 0 Other type of page =============================================== Additional information: Undo page type: 0 insert, 0 update, 0 other Undo page state: 0 active, 0 cached, 0 to_free, 0 to_purge, 0 prepared, 0 other β€’ --page-type-dump, -D β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --page-type-dump=name β”‚ 
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ String β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ [none] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Dump the page type information for each page in a tablespace to stderr or stdout. Example usage: innochecksum --page-type-dump=/tmp/a.txt ../data/test/tab1.ibd β€’ --log, -l β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚Command-Line Format β”‚ --log=path β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Type β”‚ File name β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚Default Value β”‚ [none] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Log output for the innochecksum tool. A log file name must be provided. Log output contains checksum values for each tablespace page. For uncompressed tables, LSN values are also provided. Example usage: innochecksum --log=/tmp/log.txt ../data/test/tab1.ibd or: innochecksum -l /tmp/log.txt ../data/test/tab1.ibd β€’ - option. Specify the - option to read from standard input. If the - option is missing when β€œread from standard in” is expected, innochecksum prints innochecksum usage information indicating that the β€œ-” option was omitted. Example usages: cat t1.ibd | innochecksum - In this example, innochecksum writes the crc32 checksum algorithm to a.ibd without changing the original t1.ibd file. 
cat t1.ibd | innochecksum --write=crc32 - > a.ibd Running innochecksum on Multiple User-defined Tablespace Files The following examples demonstrate how to run innochecksum on multiple user-defined tablespace files (.ibd files). Run innochecksum for all tablespace (.ibd) files in the β€œtest” database: innochecksum ./data/test/*.ibd Run innochecksum for all tablespace files (.ibd files) that have a file name starting with β€œt”: innochecksum ./data/test/t*.ibd Run innochecksum for all tablespace files (.ibd files) in the data directory: innochecksum ./data/*/*.ibd Note Running innochecksum on multiple user-defined tablespace files is not supported on Windows operating systems, as Windows shells such as cmd.exe do not support glob pattern expansion. On Windows systems, innochecksum must be run separately for each user-defined tablespace file. For example: innochecksum.exe t1.ibd innochecksum.exe t2.ibd innochecksum.exe t3.ibd Running innochecksum on Multiple System Tablespace Files By default, there is only one InnoDB system tablespace file (ibdata1) but multiple files for the system tablespace can be defined using the innodb_data_file_path option. In the following example, three files for the system tablespace are defined using the innodb_data_file_path option: ibdata1, ibdata2, and ibdata3. ./bin/mysqld --no-defaults --innodb-data-file-path="ibdata1:10M;ibdata2:10M;ibdata3:10M:autoextend" The three files (ibdata1, ibdata2, and ibdata3) form one logical system tablespace. To run innochecksum on multiple files that form one logical system tablespace, innochecksum requires the - option to read tablespace files in from standard input, which is equivalent to concatenating multiple files to create one single file. For the example provided above, the following innochecksum command would be used: cat ibdata* | innochecksum - Refer to the innochecksum options information for more information about the β€œ-” option. 
Note Running innochecksum on multiple files in the same tablespace is not supported on Windows operating systems, as Windows shells such as cmd.exe do not support glob pattern expansion. On Windows systems, innochecksum must be run separately for each system tablespace file. For example: innochecksum.exe ibdata1 innochecksum.exe ibdata2 innochecksum.exe ibdata3 COPYRIGHT Copyright Β© 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 INNOCHECKSUM(1)
innochecksum - offline InnoDB file checksum utility
innochecksum [options] file_name
null
null
jupyter-trust
null
null
null
null
null
jsonpointer
null
null
null
null
null
h5fc
null
null
null
null
null
jsonpatch
null
null
null
null
null
jq
null
jq - Command-line JSON processor
jq [options...] filter [files...] jq can transform JSON in various ways, by selecting, iterating, reducing and otherwise mangling JSON documents. For instance, running the command jq 'map(.price) | add' will take an array of JSON objects as input and return the sum of their "price" fields. jq can accept text input as well, but by default, jq reads a stream of JSON entities (including numbers and other literals) from stdin. Whitespace is only needed to separate entities such as 1 and 2, and true and false. One or more files may be specified, in which case jq will read input from those instead. The options are described in the INVOKING JQ section; they mostly concern input and output formatting. The filter is written in the jq language and specifies how to transform the input file or document. FILTERS A jq program is a "filter": it takes an input, and produces an output. There are a lot of builtin filters for extracting a particular field of an object, or converting a number to a string, or various other standard tasks. Filters can be combined in various ways - you can pipe the output of one filter into another filter, or collect the output of a filter into an array. Some filters produce multiple results, for instance there's one that produces all the elements of its input array. Piping that filter into a second runs the second filter for each element of the array. Generally, things that would be done with loops and iteration in other languages are just done by gluing filters together in jq. It's important to remember that every filter has an input and an output. Even literals like "hello" or 42 are filters - they take an input but always produce the same literal as output. Operations that combine two filters, like addition, generally feed the same input to both and combine the results. So, you can implement an averaging filter as add / length - feeding the input array both to the add filter and the length filter and then performing the division.
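As a cross-check of the two one-liners above, here are the same computations in plain Python (not jq itself); the sample data is invented for illustration:

```python
import json

# jq 'map(.price) | add' over an array of JSON objects:
items = json.loads('[{"price": 10}, {"price": 4}, {"price": 6}]')
total = sum(item["price"] for item in items)
print(total)  # 20

# jq 'add / length' as an averaging filter: the SAME input array is fed
# to both sub-filters (add and length), then the two results are divided.
nums = [1.0, 2.0, 6.0]
average = sum(nums) / len(nums)
print(average)  # 3.0
```

This mirrors the point in the text: a binary operation like / feeds one input to both operand filters and combines their outputs.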
But that's getting ahead of ourselves. :) Let's start with something simpler: INVOKING JQ jq filters run on a stream of JSON data. The input to jq is parsed as a sequence of whitespace-separated JSON values which are passed through the provided filter one at a time. The output(s) of the filter are written to standard out, again as a sequence of whitespace-separated JSON data. Note: it is important to mind the shell's quoting rules. As a general rule it's best to always quote (with single-quote characters) the jq program, as too many characters with special meaning to jq are also shell meta-characters. For example, jq "foo" will fail on most Unix shells because that will be the same as jq foo, which will generally fail because foo is not defined. When using the Windows command shell (cmd.exe) it's best to use double quotes around your jq program when given on the command-line (instead of the -f program-file option), but then double-quotes in the jq program need backslash escaping. You can affect how jq reads and writes its input and output using some command-line options: β€’ --version: Output the jq version and exit with zero. β€’ --seq: Use the application/json-seq MIME type scheme for separating JSON texts in jq's input and output. This means that an ASCII RS (record separator) character is printed before each value on output and an ASCII LF (line feed) is printed after every output. Input JSON texts that fail to parse are ignored (but warned about), discarding all subsequent input until the next RS. This mode also parses the output of jq without the --seq option. β€’ --stream: Parse the input in streaming fashion, outputting arrays of path and leaf values (scalars and empty arrays or empty objects). For example, "a" becomes [[],"a"], and [[],"a",["b"]] becomes [[0],[]], [[1],"a"], and [[1,0],"b"]. This is useful for processing very large inputs. Use this in conjunction with filtering and the reduce and foreach syntax to reduce large inputs incrementally.
β€’ --slurp/-s: Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once. β€’ --raw-input/-R: Don't parse the input as JSON. Instead, each line of text is passed to the filter as a string. If combined with --slurp, then the entire input is passed to the filter as a single long string. β€’ --null-input/-n: Don't read any input at all! Instead, the filter is run once using null as the input. This is useful when using jq as a simple calculator or to construct JSON data from scratch. β€’ --compact-output / -c: By default, jq pretty-prints JSON output. Using this option will result in more compact output by instead putting each JSON object on a single line. β€’ --tab: Use a tab for each indentation level instead of two spaces. β€’ --indent n: Use the given number of spaces (no more than 8) for indentation. β€’ --color-output / -C and --monochrome-output / -M: By default, jq outputs colored JSON if writing to a terminal. You can force it to produce color even if writing to a pipe or a file using -C, and disable color with -M. Colors can be configured with the JQ_COLORS environment variable (see below). β€’ --ascii-output / -a: jq usually outputs non-ASCII Unicode codepoints as UTF-8, even if the input specified them as escape sequences (like "\u03bc"). Using this option, you can force jq to produce pure ASCII output with every non-ASCII character replaced with the equivalent escape sequence. β€’ --unbuffered Flush the output after each JSON object is printed (useful if you're piping a slow data source into jq and piping jq's output elsewhere). β€’ --sort-keys / -S: Output the fields of each object with the keys in sorted order. β€’ --raw-output / -r: With this option, if the filter's result is a string then it will be written directly to standard output rather than being formatted as a JSON string with quotes. This can be useful for making jq filters talk to non-JSON-based systems.
β€’ --join-output / -j: Like -r but jq won't print a newline after each output.

β€’ -f filename / --from-file filename: Read filter from the file rather than from a command line, like awk's -f option. You can also use '#' to make comments.

β€’ -Ldirectory / -L directory: Prepend directory to the search list for modules. If this option is used then no builtin search list is used. See the section on modules below.

β€’ -e / --exit-status: Sets the exit status of jq to 0 if the last output value was neither false nor null, 1 if the last output value was either false or null, or 4 if no valid result was ever produced. Normally jq exits with 2 if there was any usage problem or system error, 3 if there was a jq program compile error, or 0 if the jq program ran. Another way to set the exit status is with the halt_error builtin function.

β€’ --arg name value: This option passes a value to the jq program as a predefined variable. If you run jq with --arg foo bar, then $foo is available in the program and has the value "bar". Note that value will be treated as a string, so --arg foo 123 will bind $foo to "123". Named arguments are also available to the jq program as $ARGS.named.

β€’ --argjson name JSON-text: This option passes a JSON-encoded value to the jq program as a predefined variable. If you run jq with --argjson foo 123, then $foo is available in the program and has the value 123.

β€’ --slurpfile variable-name filename: This option reads all the JSON texts in the named file and binds an array of the parsed JSON values to the given global variable. If you run jq with --slurpfile foo bar, then $foo is available in the program and has an array whose elements correspond to the texts in the file named bar.

β€’ --argfile variable-name filename: Do not use. Use --slurpfile instead. (This option is like --slurpfile, but when the file has just one text, then that is used, else an array of texts is used as in --slurpfile.)
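--arg and --argjson can feed shell data into a filter without reading any input at all; a sketch, assuming jq is installed on PATH:

```shell
# --arg always binds a string; --argjson parses its value as JSON first.
jq -nc --arg user alice --argjson count 3 '{user: $user, count: $count}'
# => {"user":"alice","count":3}
```

The same values are also visible inside the program as $ARGS.named.user and $ARGS.named.count.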
β€’ --args: Remaining arguments are positional string arguments. These are available to the jq program as $ARGS.positional[]. β€’ --jsonargs: Remaining arguments are positional JSON text arguments. These are available to the jq program as $ARGS.positional[]. β€’ --run-tests [filename]: Runs the tests in the given file or standard input. This must be the last option given and does not honor all preceding options. The input consists of comment lines, empty lines, and program lines followed by one input line, as many lines of output as are expected (one per output), and a terminating empty line. Compilation failure tests start with a line containing only "%%FAIL", then a line containing the program to compile, then a line containing an error message to compare to the actual. Be warned that this option can change backwards-incompatibly. BASIC FILTERS Identity: . The absolute simplest filter is . . This is a filter that takes its input and produces it unchanged as output. That is, this is the identity operator. Since jq by default pretty-prints all output, this trivial program can be a useful way of formatting JSON output from, say, curl. jq Β΄.Β΄ "Hello, world!" => "Hello, world!" Object Identifier-Index: .foo, .foo.bar The simplest useful filter is .foo. When given a JSON object (aka dictionary or hash) as input, it produces the value at the key "foo", or null if thereΒ΄s none present. A filter of the form .foo.bar is equivalent to .foo|.bar. This syntax only works for simple, identifier-like keys, that is, keys that are all made of alphanumeric characters and underscore, and which do not start with a digit. If the key contains special characters, you need to surround it with double quotes like this: ."foo$", or else .["foo$"]. For example .["foo::bar"] and .["foo.bar"] work while .foo::bar does not, and .foo.bar means .["foo"].["bar"]. 
jq Β΄.fooΒ΄ {"foo": 42, "bar": "less interesting data"} => 42 jq Β΄.fooΒ΄ {"notfoo": true, "alsonotfoo": false} => null jq Β΄.["foo"]Β΄ {"foo": 42} => 42 Optional Object Identifier-Index: .foo? Just like .foo, but does not output even an error when . is not an array or an object. jq Β΄.foo?Β΄ {"foo": 42, "bar": "less interesting data"} => 42 jq Β΄.foo?Β΄ {"notfoo": true, "alsonotfoo": false} => null jq Β΄.["foo"]?Β΄ {"foo": 42} => 42 jq Β΄[.foo?]Β΄ [1,2] => [] Generic Object Index: .[<string>] You can also look up fields of an object using syntax like .["foo"] (.foo above is a shorthand version of this, but only for identifier-like strings). Array Index: .[2] When the index value is an integer, .[<value>] can index arrays. Arrays are zero-based, so .[2] returns the third element. Negative indices are allowed, with -1 referring to the last element, -2 referring to the next to last element, and so on. jq Β΄.[0]Β΄ [{"name":"JSON", "good":true}, {"name":"XML", "good":false}] => {"name":"JSON", "good":true} jq Β΄.[2]Β΄ [{"name":"JSON", "good":true}, {"name":"XML", "good":false}] => null jq Β΄.[-2]Β΄ [1,2,3] => 2 Array/String Slice: .[10:15] The .[10:15] syntax can be used to return a subarray of an array or substring of a string. The array returned by .[10:15] will be of length 5, containing the elements from index 10 (inclusive) to index 15 (exclusive). Either index may be negative (in which case it counts backwards from the end of the array), or omitted (in which case it refers to the start or end of the array). jq Β΄.[2:4]Β΄ ["a","b","c","d","e"] => ["c", "d"] jq Β΄.[2:4]Β΄ "abcdefghi" => "cd" jq Β΄.[:3]Β΄ ["a","b","c","d","e"] => ["a", "b", "c"] jq Β΄.[-2:]Β΄ ["a","b","c","d","e"] => ["d", "e"] Array/Object Value Iterator: .[] If you use the .[index] syntax, but omit the index entirely, it will return all of the elements of an array. Running .[] with the input [1,2,3] will produce the numbers as three separate results, rather than as a single array. 
You can also use this on an object, and it will return all the values of the object.

jq '.[]' [{"name":"JSON", "good":true}, {"name":"XML", "good":false}] => {"name":"JSON", "good":true}, {"name":"XML", "good":false}
jq '.[]' [] =>
jq '.[]' {"a": 1, "b": 1} => 1, 1

.[]?
Like .[], but no errors will be output if . is not an array or object.

Comma: ,
If two filters are separated by a comma, then the same input will be fed into both and the two filters' output value streams will be concatenated in order: first, all of the outputs produced by the left expression, and then all of the outputs produced by the right. For instance, the filter .foo, .bar produces both the "foo" fields and "bar" fields as separate outputs.

jq '.foo, .bar' {"foo": 42, "bar": "something else", "baz": true} => 42, "something else"
jq '.user, .projects[]' {"user":"stedolan", "projects": ["jq", "wikiflow"]} => "stedolan", "jq", "wikiflow"
jq '.[4,2]' ["a","b","c","d","e"] => "e", "c"

Pipe: |
The | operator combines two filters by feeding the output(s) of the one on the left into the input of the one on the right. It's pretty much the same as the Unix shell's pipe, if you're used to that. If the one on the left produces multiple results, the one on the right will be run for each of those results. So, the expression .[] | .foo retrieves the "foo" field of each element of the input array. Note that .a.b.c is the same as .a | .b | .c. Note too that . is the input value at the particular stage in a "pipeline", specifically: where the . expression appears. Thus .a | . | .b is the same as .a.b, as the . in the middle refers to whatever value .a produced.

jq '.[] | .name' [{"name":"JSON", "good":true}, {"name":"XML", "good":false}] => "JSON", "XML"

Parentheses
Parentheses work as a grouping operator just as in any typical programming language.

jq '(. + 2) * 5' 1 => 15

TYPES AND VALUES
jq supports the same set of datatypes as JSON - numbers, strings, booleans, arrays, objects (which in JSON-speak are hashes with only string keys), and "null". Booleans, null, strings and numbers are written the same way as in JavaScript. Just like everything else in jq, these simple values take an input and produce an output - 42 is a valid jq expression that takes an input, ignores it, and returns 42 instead.

Array construction: []
As in JSON, [] is used to construct arrays, as in [1,2,3]. The elements of the arrays can be any jq expression, including a pipeline. All of the results produced by all of the expressions are collected into one big array. You can use it to construct an array out of a known quantity of values (as in [.foo, .bar, .baz]) or to "collect" all the results of a filter into an array (as in [.items[].name]).

Once you understand the "," operator, you can look at jq's array syntax in a different light: the expression [1,2,3] is not using a built-in syntax for comma-separated arrays, but is instead applying the [] operator (collect results) to the expression 1,2,3 (which produces three different results). If you have a filter X that produces four results, then the expression [X] will produce a single result, an array of four elements.

jq '[.user, .projects[]]' {"user":"stedolan", "projects": ["jq", "wikiflow"]} => ["stedolan", "jq", "wikiflow"]
jq '[ .[] | . * 2]' [1, 2, 3] => [2, 4, 6]

Object Construction: {}
Like JSON, {} is for constructing objects (aka dictionaries or hashes), as in: {"a": 42, "b": 17}. If the keys are "identifier-like", then the quotes can be left off, as in {a:42, b:17}. Keys generated by expressions need to be parenthesized, e.g., {("a"+"b"):59}. The value can be any expression (although you may need to wrap it in parentheses if it's a complicated one), which gets applied to the {} expression's input (remember, all filters have an input and an output).
{foo: .bar} will produce the JSON object {"foo": 42} if given the JSON object {"bar":42, "baz":43} as its input. You can use this to select particular fields of an object: if the input is an object with "user", "title", "id", and "content" fields and you just want "user" and "title", you can write {user: .user, title: .title}. Because that is so common, there's a shortcut syntax for it: {user, title}.

If one of the expressions produces multiple results, multiple dictionaries will be produced. If the input's {"user":"stedolan","titles":["JQ Primer", "More JQ"]} then the expression {user, title: .titles[]} will produce two outputs:

{"user":"stedolan", "title": "JQ Primer"}
{"user":"stedolan", "title": "More JQ"}

Putting parentheses around the key means it will be evaluated as an expression. With the same input as above, {(.user): .titles} produces {"stedolan": ["JQ Primer", "More JQ"]}.

jq '{user, title: .titles[]}' {"user":"stedolan","titles":["JQ Primer", "More JQ"]} => {"user":"stedolan", "title": "JQ Primer"}, {"user":"stedolan", "title": "More JQ"}
jq '{(.user): .titles}' {"user":"stedolan","titles":["JQ Primer", "More JQ"]} => {"stedolan": ["JQ Primer", "More JQ"]}

Recursive Descent: ..
Recursively descends ., producing every value. This is the same as the zero-argument recurse builtin (see below). This is intended to resemble the XPath // operator. Note that ..a does not work; use ..|.a instead. In the example below we use ..|.a? to find all the values of object keys "a" in any object found "below" .. This is particularly useful in conjunction with path(EXP) (also see below) and the ? operator.

jq '..|.a?' [[{"a":1}]] => 1

BUILTIN OPERATORS AND FUNCTIONS
Some jq operators (for instance, +) do different things depending on the type of their arguments (arrays, numbers, etc.). However, jq never does implicit type conversions. If you try to add a string to an object you'll get an error message and no result.
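The no-implicit-conversion rule is easy to check from the shell (assuming jq is installed): mixing types is a runtime error unless you convert explicitly with tostring or tonumber.

```shell
# Adding a string and a number is an error, not a coercion.
jq -n '"a" + 1' 2>/dev/null || echo 'error, as expected'

# Converting explicitly makes both the intent and the result unambiguous.
jq -rn '"a" + (1 | tostring)'
# => a1
```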
Addition: + The operator + takes two filters, applies them both to the same input, and adds the results together. What "adding" means depends on the types involved: β€’ Numbers are added by normal arithmetic. β€’ Arrays are added by being concatenated into a larger array. β€’ Strings are added by being joined into a larger string. β€’ Objects are added by merging, that is, inserting all the key-value pairs from both objects into a single combined object. If both objects contain a value for the same key, the object on the right of the + wins. (For recursive merge use the * operator.) null can be added to any value, and returns the other value unchanged. jq Β΄.a + 1Β΄ {"a": 7} => 8 jq Β΄.a + .bΒ΄ {"a": [1,2], "b": [3,4]} => [1,2,3,4] jq Β΄.a + nullΒ΄ {"a": 1} => 1 jq Β΄.a + 1Β΄ {} => 1 jq Β΄{a: 1} + {b: 2} + {c: 3} + {a: 42}Β΄ null => {"a": 42, "b": 2, "c": 3} Subtraction: - As well as normal arithmetic subtraction on numbers, the - operator can be used on arrays to remove all occurrences of the second arrayΒ΄s elements from the first array. jq Β΄4 - .aΒ΄ {"a":3} => 1 jq Β΄. - ["xml", "yaml"]Β΄ ["xml", "yaml", "json"] => ["json"] Multiplication, division, modulo: *, /, and % These infix operators behave as expected when given two numbers. Division by zero raises an error. x % y computes x modulo y. Multiplying a string by a number produces the concatenation of that string that many times. "x" * 0 produces null. Dividing a string by another splits the first using the second as separators. Multiplying two objects will merge them recursively: this works like addition but if both objects contain a value for the same key, and the values are objects, the two are merged with the same strategy. jq Β΄10 / . * 3Β΄ 5 => 6 jq Β΄. 
/ ", "Β΄ "a, b,c,d, e" => ["a","b,c,d","e"] jq Β΄{"k": {"a": 1, "b": 2}} * {"k": {"a": 0,"c": 3}}Β΄ null => {"k": {"a": 0, "b": 2, "c": 3}} jq Β΄.[] | (1 / .)?Β΄ [1,0,-1] => 1, -1 length The builtin function length gets the length of various different types of value: β€’ The length of a string is the number of Unicode codepoints it contains (which will be the same as its JSON-encoded length in bytes if itΒ΄s pure ASCII). β€’ The length of an array is the number of elements. β€’ The length of an object is the number of key-value pairs. β€’ The length of null is zero. jq Β΄.[] | lengthΒ΄ [[1,2], "string", {"a":2}, null] => 2, 6, 1, 0 utf8bytelength The builtin function utf8bytelength outputs the number of bytes used to encode a string in UTF-8. jq Β΄utf8bytelengthΒ΄ "\u03bc" => 2 keys, keys_unsorted The builtin function keys, when given an object, returns its keys in an array. The keys are sorted "alphabetically", by unicode codepoint order. This is not an order that makes particular sense in any particular language, but you can count on it being the same for any two objects with the same set of keys, regardless of locale settings. When keys is given an array, it returns the valid indices for that array: the integers from 0 to length-1. The keys_unsorted function is just like keys, but if the input is an object then the keys will not be sorted, instead the keys will roughly be in insertion order. jq Β΄keysΒ΄ {"abc": 1, "abcd": 2, "Foo": 3} => ["Foo", "abc", "abcd"] jq Β΄keysΒ΄ [42,3,35] => [0,1,2] has(key) The builtin function has returns whether the input object has the given key, or the input array has an element at the given index. has($key) has the same effect as checking whether $key is a member of the array returned by keys, although has will be faster. 
jq Β΄map(has("foo"))Β΄ [{"foo": 42}, {}] => [true, false] jq Β΄map(has(2))Β΄ [[0,1], ["a","b","c"]] => [false, true] in The builtin function in returns whether or not the input key is in the given object, or the input index corresponds to an element in the given array. It is, essentially, an inversed version of has. jq Β΄.[] | in({"foo": 42})Β΄ ["foo", "bar"] => true, false jq Β΄map(in([0,1]))Β΄ [2, 0] => [false, true] map(x), map_values(x) For any filter x, map(x) will run that filter for each element of the input array, and return the outputs in a new array. map(.+1) will increment each element of an array of numbers. Similarly, map_values(x) will run that filter for each element, but it will return an object when an object is passed. map(x) is equivalent to [.[] | x]. In fact, this is how itΒ΄s defined. Similarly, map_values(x) is defined as .[] |= x. jq Β΄map(.+1)Β΄ [1,2,3] => [2,3,4] jq Β΄map_values(.+1)Β΄ {"a": 1, "b": 2, "c": 3} => {"a": 2, "b": 3, "c": 4} path(path_expression) Outputs array representations of the given path expression in .. The outputs are arrays of strings (object keys) and/or numbers (array indices). Path expressions are jq expressions like .a, but also .[]. There are two types of path expressions: ones that can match exactly, and ones that cannot. For example, .a.b.c is an exact match path expression, while .a[].b is not. path(exact_path_expression) will produce the array representation of the path expression even if it does not exist in ., if . is null or an array or an object. path(pattern) will produce array representations of the paths matching pattern if the paths exist in .. Note that the path expressions are not different from normal expressions. The expression path(..|select(type=="boolean")) outputs all the paths to boolean values in ., and only those paths. 
jq Β΄path(.a[0].b)Β΄ null => ["a",0,"b"] jq Β΄[path(..)]Β΄ {"a":[{"b":1}]} => [[],["a"],["a",0],["a",0,"b"]] del(path_expression) The builtin function del removes a key and its corresponding value from an object. jq Β΄del(.foo)Β΄ {"foo": 42, "bar": 9001, "baz": 42} => {"bar": 9001, "baz": 42} jq Β΄del(.[1, 2])Β΄ ["foo", "bar", "baz"] => ["foo"] getpath(PATHS) The builtin function getpath outputs the values in . found at each path in PATHS. jq Β΄getpath(["a","b"])Β΄ null => null jq Β΄[getpath(["a","b"], ["a","c"])]Β΄ {"a":{"b":0, "c":1}} => [0, 1] setpath(PATHS; VALUE) The builtin function setpath sets the PATHS in . to VALUE. jq Β΄setpath(["a","b"]; 1)Β΄ null => {"a": {"b": 1}} jq Β΄setpath(["a","b"]; 1)Β΄ {"a":{"b":0}} => {"a": {"b": 1}} jq Β΄setpath([0,"a"]; 1)Β΄ null => [{"a":1}] delpaths(PATHS) The builtin function delpaths sets the PATHS in .. PATHS must be an array of paths, where each path is an array of strings and numbers. jq Β΄delpaths([["a","b"]])Β΄ {"a":{"b":1},"x":{"y":2}} => {"a":{},"x":{"y":2}} to_entries, from_entries, with_entries These functions convert between an object and an array of key-value pairs. If to_entries is passed an object, then for each k: v entry in the input, the output array includes {"key": k, "value": v}. from_entries does the opposite conversion, and with_entries(foo) is a shorthand for to_entries | map(foo) | from_entries, useful for doing some operation to all keys and values of an object. from_entries accepts key, Key, name, Name, value and Value as keys. jq Β΄to_entriesΒ΄ {"a": 1, "b": 2} => [{"key":"a", "value":1}, {"key":"b", "value":2}] jq Β΄from_entriesΒ΄ [{"key":"a", "value":1}, {"key":"b", "value":2}] => {"a": 1, "b": 2} jq Β΄with_entries(.key |= "KEY_" + .)Β΄ {"a": 1, "b": 2} => {"KEY_a": 1, "KEY_b": 2} select(boolean_expression) The function select(foo) produces its input unchanged if foo returns true for that input, and produces no output otherwise. ItΒ΄s useful for filtering lists: [1,2,3] | map(select(. 
>= 2)) will give you [2,3].

jq 'map(select(. >= 2))' [1,5,3,0,7] => [5,3,7]
jq '.[] | select(.id == "second")' [{"id": "first", "val": 1}, {"id": "second", "val": 2}] => {"id": "second", "val": 2}

arrays, objects, iterables, booleans, numbers, normals, finites, strings, nulls, values, scalars
These built-ins select only inputs that are arrays, objects, iterables (arrays or objects), booleans, numbers, normal numbers, finite numbers, strings, null, non-null values, and non-iterables, respectively.

jq '.[]|numbers' [[],{},1,"foo",null,true,false] => 1

empty
empty returns no results. None at all. Not even null. It's useful on occasion. You'll know if you need it :)

jq '1, empty, 2' null => 1, 2
jq '[1,2,empty,3]' null => [1,2,3]

error(message)
Produces an error, just like .a applied to values other than null and objects would, but with the given message as the error's value. Errors can be caught with try/catch; see below.

halt
Stops the jq program with no further outputs. jq will exit with exit status 0.

halt_error, halt_error(exit_code)
Stops the jq program with no further outputs. The input will be printed on stderr as raw output (i.e., strings will not have double quotes) with no decoration, not even a newline. The given exit_code (defaulting to 5) will be jq's exit status. For example, "Error: something went wrong\n"|halt_error(1).

$__loc__
Produces an object with a "file" key and a "line" key, with the filename and line number where $__loc__ occurs, as values.

jq 'try error("\($__loc__)") catch .' null => "{\"file\":\"<top-level>\",\"line\":1}"

paths, paths(node_filter), leaf_paths
paths outputs the paths to all the elements in its input (except it does not output the empty list, representing . itself). paths(f) outputs the paths to any values for which f is true. That is, paths(numbers) outputs the paths to all numeric values. leaf_paths is an alias of paths(scalars); leaf_paths is deprecated and will be removed in the next major release.
jq Β΄[paths]Β΄ [1,[[],{"a":2}]] => [[0],[1],[1,0],[1,1],[1,1,"a"]] jq Β΄[paths(scalars)]Β΄ [1,[[],{"a":2}]] => [[0],[1,1,"a"]] add The filter add takes as input an array, and produces as output the elements of the array added together. This might mean summed, concatenated or merged depending on the types of the elements of the input array - the rules are the same as those for the + operator (described above). If the input is an empty array, add returns null. jq Β΄addΒ΄ ["a","b","c"] => "abc" jq Β΄addΒ΄ [1, 2, 3] => 6 jq Β΄addΒ΄ [] => null any, any(condition), any(generator; condition) The filter any takes as input an array of boolean values, and produces true as output if any of the elements of the array are true. If the input is an empty array, any returns false. The any(condition) form applies the given condition to the elements of the input array. The any(generator; condition) form applies the given condition to all the outputs of the given generator. jq Β΄anyΒ΄ [true, false] => true jq Β΄anyΒ΄ [false, false] => false jq Β΄anyΒ΄ [] => false all, all(condition), all(generator; condition) The filter all takes as input an array of boolean values, and produces true as output if all of the elements of the array are true. The all(condition) form applies the given condition to the elements of the input array. The all(generator; condition) form applies the given condition to all the outputs of the given generator. If the input is an empty array, all returns true. jq Β΄allΒ΄ [true, false] => false jq Β΄allΒ΄ [true, true] => true jq Β΄allΒ΄ [] => true flatten, flatten(depth) The filter flatten takes as input an array of nested arrays, and produces a flat array in which all arrays inside the original array have been recursively replaced by their values. You can pass an argument to it to specify how many levels of nesting to flatten. flatten(2) is like flatten, but going only up to two levels deep. 
jq 'flatten' [1, [2], [[3]]] => [1, 2, 3]
jq 'flatten(1)' [1, [2], [[3]]] => [1, 2, [3]]
jq 'flatten' [[]] => []
jq 'flatten' [{"foo": "bar"}, [{"foo": "baz"}]] => [{"foo": "bar"}, {"foo": "baz"}]

range(upto), range(from;upto), range(from;upto;by)
The range function produces a range of numbers. range(4;10) produces 6 numbers, from 4 (inclusive) to 10 (exclusive). The numbers are produced as separate outputs. Use [range(4;10)] to get a range as an array. The one-argument form generates numbers from 0 to the given number, with an increment of 1. The two-argument form generates numbers from from to upto with an increment of 1. The three-argument form generates numbers from from to upto with an increment of by.

jq 'range(2;4)' null => 2, 3
jq '[range(2;4)]' null => [2,3]
jq '[range(4)]' null => [0,1,2,3]
jq '[range(0;10;3)]' null => [0,3,6,9]
jq '[range(0;10;-1)]' null => []
jq '[range(0;-5;-1)]' null => [0,-1,-2,-3,-4]

floor
The floor function returns the floor of its numeric input.

jq 'floor' 3.14159 => 3

sqrt
The sqrt function returns the square root of its numeric input.

jq 'sqrt' 9 => 3

tonumber
The tonumber function parses its input as a number. It will convert correctly-formatted strings to their numeric equivalent, leave numbers alone, and give an error on all other input.

jq '.[] | tonumber' [1, "1"] => 1, 1

tostring
The tostring function prints its input as a string. Strings are left unchanged, and all other values are JSON-encoded.

jq '.[] | tostring' [1, "1", [1]] => "1", "1", "[1]"

type
The type function returns the type of its argument as a string, which is one of null, boolean, number, string, array or object.

jq 'map(type)' [0, false, [], {}, null, "hello"] => ["number", "boolean", "array", "object", "null", "string"]

infinite, nan, isinfinite, isnan, isfinite, isnormal
Some arithmetic operations can yield infinities and "not a number" (NaN) values. The isinfinite builtin returns true if its input is infinite.
The isnan builtin returns true if its input is a NaN. The infinite builtin returns a positive infinite value. The nan builtin returns a NaN. The isnormal builtin returns true if its input is a normal number. Note that division by zero raises an error. Currently most arithmetic operations operating on infinities, NaNs, and sub-normals do not raise errors.

jq '.[] | (infinite * .) < 0' [-1, 1] => true, false
jq 'infinite, nan | type' null => "number", "number"

sort, sort_by(path_expression)
The sort function sorts its input, which must be an array. Values are sorted in the following order:

β€’ null
β€’ false
β€’ true
β€’ numbers
β€’ strings, in alphabetical order (by unicode codepoint value)
β€’ arrays, in lexical order
β€’ objects

The ordering for objects is a little complex: first they're compared by comparing their sets of keys (as arrays in sorted order), and if their keys are equal then the values are compared key by key. sort may be used to sort by a particular field of an object, or by applying any jq filter. sort_by(foo) compares two elements by comparing the result of foo on each element.

jq 'sort' [8,3,null,6] => [null,3,6,8]
jq 'sort_by(.foo)' [{"foo":4, "bar":10}, {"foo":3, "bar":100}, {"foo":2, "bar":1}] => [{"foo":2, "bar":1}, {"foo":3, "bar":100}, {"foo":4, "bar":10}]

group_by(path_expression)
group_by(.foo) takes as input an array, groups the elements having the same .foo field into separate arrays, and produces all of these arrays as elements of a larger array, sorted by the value of the .foo field. Any jq expression, not just a field access, may be used in place of .foo. The sorting order is the same as described in the sort function above.

jq 'group_by(.foo)' [{"foo":1, "bar":10}, {"foo":3, "bar":100}, {"foo":1, "bar":1}] => [[{"foo":1, "bar":10}, {"foo":1, "bar":1}], [{"foo":3, "bar":100}]]

min, max, min_by(path_exp), max_by(path_exp)
Find the minimum or maximum element of the input array.
The min_by(path_exp) and max_by(path_exp) functions allow you to specify a particular field or property to examine, e.g. min_by(.foo) finds the object with the smallest foo field.

jq 'min' [5,4,2,7] => 2
jq 'max_by(.foo)' [{"foo":1, "bar":14}, {"foo":2, "bar":3}] => {"foo":2, "bar":3}

unique, unique_by(path_exp)
The unique function takes as input an array and produces an array of the same elements, in sorted order, with duplicates removed. The unique_by(path_exp) function will keep only one element for each value obtained by applying the argument. Think of it as making an array by taking one element out of every group produced by group_by.

jq 'unique' [1,2,5,3,5,3,1,3] => [1,2,3,5]
jq 'unique_by(.foo)' [{"foo": 1, "bar": 2}, {"foo": 1, "bar": 3}, {"foo": 4, "bar": 5}] => [{"foo": 1, "bar": 2}, {"foo": 4, "bar": 5}]
jq 'unique_by(length)' ["chunky", "bacon", "kitten", "cicada", "asparagus"] => ["bacon", "chunky", "asparagus"]

reverse
This function reverses an array.

jq 'reverse' [1,2,3,4] => [4,3,2,1]

contains(element)
The filter contains(b) will produce true if b is completely contained within the input. A string B is contained in a string A if B is a substring of A. An array B is contained in an array A if all elements in B are contained in any element in A. An object B is contained in object A if all of the values in B are contained in the value in A with the same key. All other types are assumed to be contained in each other if they are equal.

jq 'contains("bar")' "foobar" => true
jq 'contains(["baz", "bar"])' ["foobar", "foobaz", "blarp"] => true
jq 'contains(["bazzzzz", "bar"])' ["foobar", "foobaz", "blarp"] => false
jq 'contains({foo: 12, bar: [{barp: 12}]})' {"foo": 12, "bar":[1,2,{"barp":12, "blip":13}]} => true
jq 'contains({foo: 12, bar: [{barp: 15}]})' {"foo": 12, "bar":[1,2,{"barp":12, "blip":13}]} => false

indices(s)
Outputs an array containing the indices in . where s occurs.
The input may be an array, in which case if s is an array then the indices output will be those where all elements in . match those of s. jq Β΄indices(", ")Β΄ "a,b, cd, efg, hijk" => [3,7,12] jq Β΄indices(1)Β΄ [0,1,2,1,3,1,4] => [1,3,5] jq Β΄indices([1,2])Β΄ [0,1,2,3,1,4,2,5,1,2,6,7] => [1,8] index(s), rindex(s) Outputs the index of the first (index) or last (rindex) occurrence of s in the input. jq Β΄index(", ")Β΄ "a,b, cd, efg, hijk" => 3 jq Β΄rindex(", ")Β΄ "a,b, cd, efg, hijk" => 12 inside The filter inside(b) will produce true if the input is completely contained within b. It is, essentially, an inversed version of contains. jq Β΄inside("foobar")Β΄ "bar" => true jq Β΄inside(["foobar", "foobaz", "blarp"])Β΄ ["baz", "bar"] => true jq Β΄inside(["foobar", "foobaz", "blarp"])Β΄ ["bazzzzz", "bar"] => false jq Β΄inside({"foo": 12, "bar":[1,2,{"barp":12, "blip":13}]})Β΄ {"foo": 12, "bar": [{"barp": 12}]} => true jq Β΄inside({"foo": 12, "bar":[1,2,{"barp":12, "blip":13}]})Β΄ {"foo": 12, "bar": [{"barp": 15}]} => false startswith(str) Outputs true if . starts with the given string argument. jq Β΄[.[]|startswith("foo")]Β΄ ["fo", "foo", "barfoo", "foobar", "barfoob"] => [false, true, false, true, false] endswith(str) Outputs true if . ends with the given string argument. jq Β΄[.[]|endswith("foo")]Β΄ ["foobar", "barfoo"] => [false, true] combinations, combinations(n) Outputs all combinations of the elements of the arrays in the input array. If given an argument n, it outputs all combinations of n repetitions of the input array. jq Β΄combinationsΒ΄ [[1,2], [3, 4]] => [1, 3], [1, 4], [2, 3], [2, 4] jq Β΄combinations(2)Β΄ [0, 1] => [0, 0], [0, 1], [1, 0], [1, 1] ltrimstr(str) Outputs its input with the given prefix string removed, if it starts with it. jq Β΄[.[]|ltrimstr("foo")]Β΄ ["fo", "foo", "barfoo", "foobar", "afoo"] => ["fo","","barfoo","bar","afoo"] rtrimstr(str) Outputs its input with the given suffix string removed, if it ends with it. 
jq Β΄[.[]|rtrimstr("foo")]Β΄ ["fo", "foo", "barfoo", "foobar", "foob"] => ["fo","","bar","foobar","foob"] explode Converts an input string into an array of the stringΒ΄s codepoint numbers. jq Β΄explodeΒ΄ "foobar" => [102,111,111,98,97,114] implode The inverse of explode. jq Β΄implodeΒ΄ [65, 66, 67] => "ABC" split(str) Splits an input string on the separator argument. jq Β΄split(", ")Β΄ "a, b,c,d, e, " => ["a","b,c,d","e",""] join(str) Joins the array of elements given as input, using the argument as separator. It is the inverse of split: that is, running split("foo") | join("foo") over any input string returns said input string. Numbers and booleans in the input are converted to strings. Null values are treated as empty strings. Arrays and objects in the input are not supported. jq Β΄join(", ")Β΄ ["a","b,c,d","e"] => "a, b,c,d, e" jq Β΄join(" ")Β΄ ["a",1,2.3,true,null,false] => "a 1 2.3 true false" ascii_downcase, ascii_upcase Emit a copy of the input string with its alphabetic characters (a-z and A-Z) converted to the specified case. while(cond; update) The while(cond; update) function allows you to repeatedly apply an update to . until cond is false. Note that while(cond; update) is internally defined as a recursive jq function. Recursive calls within while will not consume additional memory if update produces at most one output for each input. See advanced topics below. jq Β΄[while(.<100; .*2)]Β΄ 1 => [1,2,4,8,16,32,64] until(cond; next) The until(cond; next) function allows you to repeatedly apply the expression next, initially to . then to its own output, until cond is true. For example, this can be used to implement a factorial function (see below). Note that until(cond; next) is internally defined as a recursive jq function. Recursive calls within until() will not consume additional memory if next produces at most one output for each input. See advanced topics below. 
jq '[.,1]|until(.[0] < 1; [.[0] - 1, .[1] * .[0]])|.[1]' 4 => 24

recurse(f), recurse, recurse(f; condition), recurse_down
The recurse(f) function allows you to search through a recursive structure, and extract interesting data from all levels. Suppose your input represents a filesystem:

{"name": "/", "children": [
  {"name": "/bin", "children": [
    {"name": "/bin/ls", "children": []},
    {"name": "/bin/sh", "children": []}]},
  {"name": "/home", "children": [
    {"name": "/home/stephen", "children": [
      {"name": "/home/stephen/jq", "children": []}]}]}]}

Now suppose you want to extract all of the filenames present. You need to retrieve .name, .children[].name, .children[].children[].name, and so on. You can do this with:

recurse(.children[]) | .name

When called without an argument, recurse is equivalent to recurse(.[]?). recurse(f) is identical to recurse(f; . != null) and can be used without concerns about recursion depth. recurse(f; condition) is a generator which begins by emitting . and then emits in turn .|f, .|f|f, .|f|f|f, ... so long as the computed value satisfies the condition. For example, to generate all the integers, at least in principle, one could write recurse(.+1; true).

For legacy reasons, recurse_down exists as an alias to calling recurse without arguments. This alias is considered deprecated and will be removed in the next major release.

The recursive calls in recurse will not consume additional memory whenever f produces at most a single output for each input.

jq 'recurse(.foo[])' {"foo":[{"foo": []}, {"foo":[{"foo":[]}]}]} => {"foo":[{"foo":[]},{"foo":[{"foo":[]}]}]}, {"foo":[]}, {"foo":[{"foo":[]}]}, {"foo":[]}
jq 'recurse' {"a":0,"b":[1]} => {"a":0,"b":[1]}, 0, [1], 1
jq 'recurse(. * .; . < 20)' 2 => 2, 4, 16

walk(f)
The walk(f) function applies f recursively to every component of the input entity.
When an array is encountered, f is first applied to its elements and then to the array itself; when an object is encountered, f is first applied to all the values and then to the object. In practice, f will usually test the type of its input, as illustrated in the following examples. The first example highlights the usefulness of processing the elements of an array of arrays before processing the array itself. The second example shows how all the keys of all the objects within the input can be considered for alteration.

jq 'walk(if type == "array" then sort else . end)' [[4, 1, 7], [8, 5, 2], [3, 6, 9]] => [[1,4,7],[2,5,8],[3,6,9]]
jq 'walk( if type == "object" then with_entries( .key |= sub( "^_+"; "") ) else . end )' [ { "_a": { "__b": 2 } } ] => [{"a":{"b":2}}]

$ENV, env
$ENV is an object representing the environment variables as set when the jq program started. env outputs an object representing jq's current environment. At the moment there is no builtin for setting environment variables.

jq '$ENV.PAGER' null => "less"
jq 'env.PAGER' null => "less"

transpose
Transpose a possibly jagged matrix (an array of arrays). Rows are padded with nulls so the result is always rectangular.

jq 'transpose' [[1], [2,3]] => [[1,2],[null,3]]

bsearch(x)
bsearch(x) conducts a binary search for x in the input array. If the input is sorted and contains x, then bsearch(x) will return its index in the array; otherwise, if the array is sorted, it will return (-1 - ix) where ix is an insertion point such that the array would still be sorted after the insertion of x at ix. If the array is not sorted, bsearch(x) will return an integer that is probably of no interest.

jq 'bsearch(0)' [0,1] => 0
jq 'bsearch(0)' [1,2,3] => -1
jq 'bsearch(4) as $ix | if $ix < 0 then .[-(1+$ix)] = 4 else . end' [1,2,3] => [1,2,3,4]

String interpolation - \(foo)
Inside a string, you can put an expression inside parens after a backslash.
Whatever the expression returns will be interpolated into the string.

jq '"The input was \(.), which is one less than \(.+1)"' 42 => "The input was 42, which is one less than 43"

Convert to/from JSON
The tojson and fromjson builtins dump values as JSON texts or parse JSON texts into values, respectively. The tojson builtin differs from tostring in that tostring returns strings unmodified, while tojson encodes strings as JSON strings.

jq '[.[]|tostring]' [1, "foo", ["foo"]] => ["1","foo","[\"foo\"]"]
jq '[.[]|tojson]' [1, "foo", ["foo"]] => ["1","\"foo\"","[\"foo\"]"]
jq '[.[]|tojson|fromjson]' [1, "foo", ["foo"]] => [1,"foo",["foo"]]

Format strings and escaping
The @foo syntax is used to format and escape strings, which is useful for building URLs, documents in a language like HTML or XML, and so forth. @foo can be used as a filter on its own; the possible escapings are:

@text: Calls tostring, see that function for details.

@json: Serializes the input as JSON.

@html: Applies HTML/XML escaping, by mapping the characters <>&'" to their entity equivalents &lt;, &gt;, &amp;, &apos;, &quot;.

@uri: Applies percent-encoding, by mapping all reserved URI characters to a %XX sequence.

@csv: The input must be an array, and it is rendered as CSV with double quotes for strings, and quotes escaped by repetition.

@tsv: The input must be an array, and it is rendered as TSV (tab-separated values). Each input array will be printed as a single line. Fields are separated by a single tab (ascii 0x09). Input characters line-feed (ascii 0x0a), carriage-return (ascii 0x0d), tab (ascii 0x09) and backslash (ascii 0x5c) will be output as escape sequences \n, \r, \t, \\ respectively.

@sh: The input is escaped suitable for use in a command-line for a POSIX shell. If the input is an array, the output will be a series of space-separated strings.

@base64: The input is converted to base64 as specified by RFC 4648.
@base64d: The inverse of @base64; input is decoded as specified by RFC 4648. Note: If the decoded string is not UTF-8, the results are undefined.

This syntax can be combined with string interpolation in a useful way. You can follow a @foo token with a string literal. The contents of the string literal will not be escaped. However, all interpolations made inside that string literal will be escaped. For instance,

@uri "https://www.google.com/search?q=\(.search)"

will produce the following output for the input {"search":"what is jq?"}:

"https://www.google.com/search?q=what%20is%20jq%3F"

Note that the slashes, question mark, etc. in the URL are not escaped, as they were part of the string literal.

jq '@html' "This works if x < y" => "This works if x &lt; y"
jq '@sh "echo \(.)"' "O'Hara's Ale" => "echo 'O'\\''Hara'\\''s Ale'"
jq '@base64' "This is a message" => "VGhpcyBpcyBhIG1lc3NhZ2U="
jq '@base64d' "VGhpcyBpcyBhIG1lc3NhZ2U=" => "This is a message"

Dates
jq provides some basic date handling functionality, with some high-level and low-level builtins. In all cases these builtins deal exclusively with time in UTC.

The fromdateiso8601 builtin parses datetimes in the ISO 8601 format to a number of seconds since the Unix epoch (1970-01-01T00:00:00Z). The todateiso8601 builtin does the inverse.

The fromdate builtin parses datetime strings. Currently fromdate only supports ISO 8601 datetime strings, but in the future it will attempt to parse datetime strings in more formats.

The todate builtin is an alias for todateiso8601.

The now builtin outputs the current time, in seconds since the Unix epoch.

Low-level jq interfaces to the C-library time functions are also provided: strptime, strftime, strflocaltime, mktime, gmtime, and localtime. Refer to your host operating system's documentation for the format strings used by strptime and strftime. Note: these are not necessarily stable interfaces in jq, particularly as to their localization functionality.
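As a quick sanity check of the high-level date builtins, the following shell sketch (assuming a jq binary is available on PATH) shows that todate and fromdate are inverses:

```shell
# Epoch seconds -> ISO 8601 string (UTC) and back again.
jq -n '1425599507 | todate'
# => "2015-03-05T23:51:47Z"

jq -n '"2015-03-05T23:51:47Z" | fromdate'
# => 1425599507
```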
The gmtime builtin consumes a number of seconds since the Unix epoch and outputs a "broken down time" representation of Greenwich Mean Time as an array of numbers representing (in this order): the year, the month (zero-based), the day of the month (one-based), the hour of the day, the minute of the hour, the second of the minute, the day of the week, and the day of the year -- all zero-based unless otherwise stated. The day of the week number may be wrong on some systems for dates before March 1st 1900, or after December 31 2099.

The localtime builtin works like the gmtime builtin, but using the local timezone setting.

The mktime builtin consumes "broken down time" representations of time output by gmtime and strptime.

The strptime(fmt) builtin parses input strings matching the fmt argument. The output is in the "broken down time" representation consumed by gmtime and output by mktime.

The strftime(fmt) builtin formats a time (GMT) with the given format. The strflocaltime does the same, but using the local timezone setting.

The format strings for strptime and strftime are described in typical C library documentation. The format string for an ISO 8601 datetime is "%Y-%m-%dT%H:%M:%SZ".

jq may not support some or all of this date functionality on some systems. In particular, the %u and %j specifiers for strptime(fmt) are not supported on macOS.

jq 'fromdate' "2015-03-05T23:51:47Z" => 1425599507
jq 'strptime("%Y-%m-%dT%H:%M:%SZ")' "2015-03-05T23:51:47Z" => [2015,2,5,23,51,47,4,63]
jq 'strptime("%Y-%m-%dT%H:%M:%SZ")|mktime' "2015-03-05T23:51:47Z" => 1425599507

SQL-Style Operators
jq provides a few SQL-style operators.

INDEX(stream; index_expression): This builtin produces an object whose keys are computed by the given index expression applied to each value from the given stream.

JOIN($idx; stream; idx_expr; join_expr): This builtin joins the values from the given stream to the given index.
The index's keys are computed by applying the given index expression to each value from the given stream. An array of the value in the stream and the corresponding value from the index is fed to the given join expression to produce each result.

JOIN($idx; stream; idx_expr): Same as JOIN($idx; stream; idx_expr; .).

JOIN($idx; idx_expr): This builtin joins the input . to the given index, applying the given index expression to . to compute the index key. The join operation is as described above.

IN(s): This builtin outputs true if . appears in the given stream, otherwise it outputs false.

IN(source; s): This builtin outputs true if any value in the source stream appears in the second stream, otherwise it outputs false.

builtins
Returns a list of all builtin functions in the format name/arity. Since functions with the same name but different arities are considered separate functions, all/0, all/1, and all/2 would all be present in the list.

CONDITIONALS AND COMPARISONS

==, !=
The expression 'a == b' will produce 'true' if the results of a and b are equal (that is, if they represent equivalent JSON documents) and 'false' otherwise. In particular, strings are never considered equal to numbers. If you're coming from JavaScript, jq's == is like JavaScript's === - considering values equal only when they have the same type as well as the same value. != is "not equal", and 'a != b' returns the opposite value of 'a == b'.

jq '.[] == 1' [1, 1.0, "1", "banana"] => true, true, false, false

if-then-else
if A then B else C end will act the same as B if A produces a value other than false or null, but act the same as C otherwise.

Checking for false or null is a simpler notion of "truthiness" than is found in JavaScript or Python, but it means that you'll sometimes have to be more explicit about the condition you want: you can't test whether, e.g.
a string is empty using if .name then A else B end; you'll need something more like if (.name | length) > 0 then A else B end instead.

If the condition A produces multiple results, then B is evaluated once for each result that is not false or null, and C is evaluated once for each false or null.

More cases can be added to an if using elif A then B syntax.

jq 'if . == 0 then "zero" elif . == 1 then "one" else "many" end' 2 => "many"

>, >=, <=, <
The comparison operators >, >=, <=, < return whether their left argument is greater than, greater than or equal to, less than or equal to, or less than their right argument (respectively). The ordering is the same as that described for sort, above.

jq '. < 5' 2 => true

and/or/not
jq supports the normal Boolean operators and/or/not. They have the same standard of truth as if expressions - false and null are considered "false values", and anything else is a "true value".

If an operand of one of these operators produces multiple results, the operator itself will produce a result for each input.

not is in fact a builtin function rather than an operator, so it is called as a filter to which things can be piped rather than with special syntax, as in .foo and .bar | not.

These three only produce the values "true" and "false", and so are only useful for genuine Boolean operations, rather than the common Perl/Python/Ruby idiom of "value_that_may_be_null or default". If you want to use this form of "or", picking between two values rather than evaluating a condition, see the "//" operator below.

jq '42 and "a string"' null => true
jq '(true, false) or false' null => true, false
jq '(true, true) and (true, false)' null => true, false, true, false
jq '[true, false | not]' null => [false, true]

Alternative operator: //
A filter of the form a // b produces the same results as a, if a produces results other than false and null. Otherwise, a // b produces the same results as b.
This is useful for providing defaults: .foo // 1 will evaluate to 1 if there's no .foo element in the input. It's similar to how or is sometimes used in Python (jq's or operator is reserved for strictly Boolean operations).

jq '.foo // 42' {"foo": 19} => 19
jq '.foo // 42' {} => 42

try-catch
Errors can be caught by using try EXP catch EXP. The first expression is executed, and if it fails then the second is executed with the error message. The output of the handler, if any, is output as if it had been the output of the expression to try. The try EXP form uses empty as the exception handler.

jq 'try .a catch ". is not an object"' true => ". is not an object"
jq '[.[]|try .a]' [{}, true, {"a":1}] => [null, 1]
jq 'try error("some exception") catch .' true => "some exception"

Breaking out of control structures
A convenient use of try/catch is to break out of control structures like reduce, foreach, while, and so on. For example:

# Repeat an expression until it raises "break" as an
# error, then stop repeating without re-raising the error.
# But if the error caught is not "break" then re-raise it.
try repeat(exp) catch if . == "break" then empty else error end;

jq has a syntax for named lexical labels to "break" or "go (back) to":

label $out | ... break $out ...

The break $label_name expression will cause the program to act as though the nearest (to the left) label $label_name produced empty.

The relationship between the break and corresponding label is lexical: the label has to be "visible" from the break. To break out of a reduce, for example:

label $out | reduce .[] as $item (null; if .==false then break $out else ... end)

The following jq program produces a syntax error:

break $out

because no label $out is visible.

Error Suppression / Optional Operator: ?
The ? operator, used as EXP?, is shorthand for try EXP.
jq '[.[]|(.a)?]' [{}, true, {"a":1}] => [null, 1]

REGULAR EXPRESSIONS (PCRE)
jq uses the Oniguruma regular expression library, as do PHP, Ruby, TextMate, Sublime Text, etc., so the description here will focus on jq specifics.

The jq regex filters are defined so that they can be used using one of these patterns:

STRING | FILTER( REGEX )
STRING | FILTER( REGEX; FLAGS )
STRING | FILTER( [REGEX] )
STRING | FILTER( [REGEX, FLAGS] )

where:

* STRING, REGEX and FLAGS are jq strings and subject to jq string interpolation;
* REGEX, after string interpolation, should be a valid PCRE regex;
* FILTER is one of test, match, or capture, as described below.

FLAGS is a string consisting of one or more of the supported flags:

β€’ g - Global search (find all matches, not just the first)
β€’ i - Case insensitive search
β€’ m - Multi line mode ('.' will match newlines)
β€’ n - Ignore empty matches
β€’ p - Both s and m modes are enabled
β€’ s - Single line mode ('^' -> '\A', '$' -> '\Z')
β€’ l - Find longest possible matches
β€’ x - Extended regex format (ignore whitespace and comments)

To match whitespace in an x pattern use an escape such as \s, e.g.

β€’ test( "a\sb"; "x" ).

Note that certain flags may also be specified within REGEX, e.g.

β€’ jq -n '("test", "TEst", "teST", "TEST") | test( "(?i)te(?-i)st" )'

evaluates to: true, true, false, false.

test(val), test(regex; flags)
Like match, but does not return match objects, only true or false for whether or not the regex matches the input.

jq 'test("foo")' "foo" => true
jq '.[] | test("a b c # spaces are ignored"; "ix")' ["xabcd", "ABC"] => true, true

match(val), match(regex; flags)
match outputs an object for each match it finds. Matches have the following fields:

β€’ offset - offset in UTF-8 codepoints from the beginning of the input
β€’ length - length in UTF-8 codepoints of the match
β€’ string - the string that it matched
β€’ captures - an array of objects representing capturing groups.
Capturing group objects have the following fields:

β€’ offset - offset in UTF-8 codepoints from the beginning of the input
β€’ length - length in UTF-8 codepoints of this capturing group
β€’ string - the string that was captured
β€’ name - the name of the capturing group (or null if it was unnamed)

Capturing groups that did not match anything return an offset of -1.

jq 'match("(abc)+"; "g")' "abc abc" => {"offset": 0, "length": 3, "string": "abc", "captures": [{"offset": 0, "length": 3, "string": "abc", "name": null}]}, {"offset": 4, "length": 3, "string": "abc", "captures": [{"offset": 4, "length": 3, "string": "abc", "name": null}]}
jq 'match("foo")' "foo bar foo" => {"offset": 0, "length": 3, "string": "foo", "captures": []}
jq 'match(["foo", "ig"])' "foo bar FOO" => {"offset": 0, "length": 3, "string": "foo", "captures": []}, {"offset": 8, "length": 3, "string": "FOO", "captures": []}
jq 'match("foo (?<bar123>bar)? foo"; "ig")' "foo bar foo foo foo" => {"offset": 0, "length": 11, "string": "foo bar foo", "captures": [{"offset": 4, "length": 3, "string": "bar", "name": "bar123"}]}, {"offset": 12, "length": 8, "string": "foo foo", "captures": [{"offset": -1, "length": 0, "string": null, "name": "bar123"}]}
jq '[ match("."; "g")] | length' "abc" => 3

capture(val), capture(regex; flags)
Collects the named captures in a JSON object, with the name of each capture as the key, and the matched string as the corresponding value.

jq 'capture("(?<a>[a-z]+)-(?<n>[0-9]+)")' "xyzzy-14" => { "a": "xyzzy", "n": "14" }

scan(regex), scan(regex; flags)
Emit a stream of the non-overlapping substrings of the input that match the regex in accordance with the flags, if any have been specified. If there is no match, the stream is empty. To capture all the matches for each input string, use the idiom [ expr ], e.g. [ scan(regex) ].

split(regex; flags)
For backwards compatibility, split splits on a string, not a regex.
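The [ scan(regex) ] idiom above can be tried from the shell; this sketch assumes a jq binary is on PATH:

```shell
# scan emits one output per non-overlapping match;
# wrapping it in [...] collects the matches for each input string.
echo '"one two three"' | jq -c '[scan("[a-z]+")]'
# => ["one","two","three"]
```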
splits(regex), splits(regex; flags)
These provide the same results as their split counterparts, but as a stream instead of an array.

sub(regex; tostring), sub(regex; string; flags)
Emit the string obtained by replacing the first match of regex in the input string with tostring, after interpolation. tostring should be a jq string, and may contain references to named captures. The named captures are, in effect, presented as a JSON object (as constructed by capture) to tostring, so a reference to a captured variable named "x" would take the form: "\(.x)".

gsub(regex; string), gsub(regex; string; flags)
gsub is like sub but all the non-overlapping occurrences of the regex are replaced by the string, after interpolation.

ADVANCED FEATURES
Variables are an absolute necessity in most programming languages, but they're relegated to an "advanced feature" in jq.

In most languages, variables are the only means of passing around data. If you calculate a value, and you want to use it more than once, you'll need to store it in a variable. To pass a value to another part of the program, you'll need that part of the program to define a variable (as a function parameter, object member, or whatever) in which to place the data.

It is also possible to define functions in jq, although this is a feature whose biggest use is defining jq's standard library (many jq functions such as map and find are in fact written in jq).

jq has reduction operators, which are very powerful but a bit tricky. Again, these are mostly used internally, to define some useful bits of jq's standard library.

It may not be obvious at first, but jq is all about generators (yes, as often found in other languages). Some utilities are provided to help deal with generators.

Some minimal I/O support (besides reading JSON from standard input, and writing JSON to standard output) is available.

Finally, there is a module/library system.

Variable / Symbolic Binding Operator: ... as $identifier | ...
In jq, all filters have an input and an output, so manual plumbing is not necessary to pass a value from one part of a program to the next. Many expressions, for instance a + b, pass their input to two distinct subexpressions (here a and b are both passed the same input), so variables aren't usually necessary in order to use a value twice.

For instance, calculating the average value of an array of numbers requires a few variables in most languages - at least one to hold the array, perhaps one for each element or for a loop counter. In jq, it's simply add / length - the add expression is given the array and produces its sum, and the length expression is given the array and produces its length.

So, there's generally a cleaner way to solve most problems in jq than defining variables. Still, sometimes they do make things easier, so jq lets you define variables using expression as $variable. All variable names start with $. Here's a slightly uglier version of the array-averaging example:

length as $array_length | add / $array_length

We'll need a more complicated problem to find a situation where using variables actually makes our lives easier.

Suppose we have an array of blog posts, with "author" and "title" fields, and another object which is used to map author usernames to real names. Our input looks like:

{"posts": [{"title": "Frist psot", "author": "anon"},
           {"title": "A well-written article", "author": "person1"}],
 "realnames": {"anon": "Anonymous Coward",
               "person1": "Person McPherson"}}

We want to produce the posts with the author field containing a real name, as in:

{"title": "Frist psot", "author": "Anonymous Coward"}
{"title": "A well-written article", "author": "Person McPherson"}

We use a variable, $names, to store the realnames object, so that we can refer to it later when looking up author usernames:

.realnames as $names | .posts[] | {title, author: $names[.author]}

The expression exp as $x | ...
means: for each value of expression exp, run the rest of the pipeline with the entire original input, and with $x set to that value. Thus, as functions as something of a foreach loop.

Just as {foo} is a handy way of writing {foo: .foo}, so {$foo} is a handy way of writing {foo: $foo}.

Multiple variables may be declared using a single as expression by providing a pattern that matches the structure of the input (this is known as "destructuring"):

. as {realnames: $names, posts: [$first, $second]} | ...

The variable declarations in array patterns (e.g., . as [$first, $second]) bind to the elements of the array, from the element at index zero on up, in order. When there is no value at the index for an array pattern element, null is bound to that variable.

Variables are scoped over the rest of the expression that defines them, so

.realnames as $names | (.posts[] | {title, author: $names[.author]})

will work, but

(.realnames as $names | .posts[]) | {title, author: $names[.author]}

won't.

For programming language theorists, it's more accurate to say that jq variables are lexically-scoped bindings. In particular there's no way to change the value of a binding; one can only set up a new binding with the same name, but which will not be visible where the old one was.

jq '.bar as $x | .foo | . + $x' {"foo":10, "bar":200} => 210
jq '. as $i|[(.*2|. as $i| $i), $i]' 5 => [10,5]
jq '. as [$a, $b, {c: $c}] | $a + $b + $c' [2, 3, {"c": 4, "d": 5}] => 9
jq '.[] as [$a, $b] | {a: $a, b: $b}' [[0], [0, 1], [2, 1, 0]] => {"a":0,"b":null}, {"a":0,"b":1}, {"a":2,"b":1}

Defining Functions
You can give a filter a name using "def" syntax:

def increment: . + 1;

From then on, increment is usable as a filter just like a builtin function (in fact, this is how many of the builtins are defined). A function may take arguments:

def map(f): [.[] | f];

Arguments are passed as filters (functions with no arguments), not as values.
The same argument may be referenced multiple times with different inputs (here f is run for each element of the input array). Arguments to a function work more like callbacks than like value arguments. This is important to understand. Consider:

def foo(f): f|f;
5|foo(.*2)

The result will be 20 because f is .*2, and during the first invocation of f . will be 5, and the second time it will be 10 (5 * 2), so the result will be 20.

Function arguments are filters, and filters expect an input when invoked. If you want the value-argument behaviour for defining simple functions, you can just use a variable:

def addvalue(f): f as $f | map(. + $f);

Or use the short-hand:

def addvalue($f): ...;

With either definition, addvalue(.foo) will add the current input's .foo field to each element of the array. Do note that calling addvalue(.[]) will cause the map(. + $f) part to be evaluated once per value in the value of . at the call site.

Multiple definitions using the same function name are allowed. Each re-definition replaces the previous one for the same number of function arguments, but only for references from functions (or main program) subsequent to the re-definition. See also the section below on scoping.

jq 'def addvalue(f): . + [f]; map(addvalue(.[0]))' [[1,2],[10,20]] => [[1,2,1], [10,20,10]]
jq 'def addvalue(f): f as $x | map(. + $x); addvalue(.[0])' [[1,2],[10,20]] => [[1,2,1,2], [10,20,1,2]]

Scoping
There are two types of symbols in jq: value bindings (a.k.a., "variables"), and functions. Both are scoped lexically, with expressions being able to refer only to symbols that have been defined "to the left" of them. The only exception to this rule is that functions can refer to themselves so as to be able to create recursive functions.

For example, in the following expression there is a binding which is visible "to the right" of it, ... | .*3 as $times_three | [. + $times_three] | ..., but not "to the left". Consider this expression now, ...
| (.*3 as $times_three | [. + $times_three]) | ...: here the binding $times_three is not visible past the closing parenthesis.

Reduce
The reduce syntax in jq allows you to combine all of the results of an expression by accumulating them into a single answer. As an example, we'll pass [3,2,1] to this expression:

reduce .[] as $item (0; . + $item)

For each result that .[] produces, . + $item is run to accumulate a running total, starting from 0. In this example, .[] produces the results 3, 2, and 1, so the effect is similar to running something like this:

0 | (3 as $item | . + $item) |
    (2 as $item | . + $item) |
    (1 as $item | . + $item)

jq 'reduce .[] as $item (0; . + $item)' [10,2,5,3] => 20

isempty(exp)
Returns true if exp produces no outputs, false otherwise.

jq 'isempty(empty)' null => true

limit(n; exp)
The limit function extracts up to n outputs from exp.

jq '[limit(3;.[])]' [0,1,2,3,4,5,6,7,8,9] => [0,1,2]

first(expr), last(expr), nth(n; expr)
The first(expr) and last(expr) functions extract the first and last values from expr, respectively.

The nth(n; expr) function extracts the nth value output by expr. This can be defined as def nth(n; expr): last(limit(n + 1; expr));. Note that nth(n; expr) doesn't support negative values of n.

jq '[first(range(.)), last(range(.)), nth(./2; range(.))]' 10 => [0,9,5]

first, last, nth(n)
The first and last functions extract the first and last values from any array at ".". The nth(n) function extracts the nth value of any array at ".".

jq '[range(.)]|[first, last, nth(5)]' 10 => [0,9,5]

foreach
The foreach syntax is similar to reduce, but intended to allow the construction of limit and reducers that produce intermediate results (see example).

The form is foreach EXP as $var (INIT; UPDATE; EXTRACT). Like reduce, INIT is evaluated once to produce a state value, then each output of EXP is bound to $var, and UPDATE is evaluated for each output of EXP with the current state and with $var visible.
Each value output by UPDATE replaces the previous state. Finally, EXTRACT is evaluated for each new state to extract an output of foreach.

This is mostly useful only for constructing reduce- and limit-like functions. But it is much more general, as it allows for partial reductions (see the example below).

jq '[foreach .[] as $item ([[],[]]; if $item == null then [[],.[0]] else [(.[0] + [$item]),[]] end; if $item == null then .[1] else empty end)]' [1,2,3,4,null,"a","b",null] => [[1,2,3,4],["a","b"]]

Recursion
As described above, recurse uses recursion, and any jq function can be recursive. The while builtin is also implemented in terms of recursion.

Tail calls are optimized whenever the expression to the left of the recursive call outputs its last value. In practice this means that the expression to the left of the recursive call should not produce more than one output for each input. For example:

def recurse(f): def r: ., (f | select(. != null) | r); r;

def while(cond; update):
  def _while: if cond then ., (update | _while) else empty end;
  _while;

def repeat(exp):
  def _repeat: exp, _repeat;
  _repeat;

Generators and iterators
Some jq operators and functions are actually generators in that they can produce zero, one, or more values for each input, just as one might expect in other programming languages that have generators. For example, .[] generates all the values in its input (which must be an array or an object), range(0; 10) generates the integers between 0 and 10, and so on.

Even the comma operator is a generator, generating first the values generated by the expression to the left of the comma, then for each of those, the values generated by the expression on the right of the comma.

The empty builtin is the generator that produces zero outputs. The empty builtin backtracks to the preceding generator expression.

All jq functions can be generators just by using builtin generators.
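A small shell sketch (assuming a jq binary on PATH) makes the generator behaviour concrete: the comma operator produces several outputs, and limit truncates an otherwise infinite stream from repeat:

```shell
# The comma operator is a generator; collecting its outputs
# into an array shows each value produced in turn.
jq -n -c '[1, (2,3), 4]'
# => [1,2,3,4]

# repeat("y") generates "y" forever; limit takes only the first 3.
jq -n -c '[limit(3; repeat("y"))]'
# => ["y","y","y"]
```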
It is also possible to define new generators using only recursion and the comma operator. If the recursive call(s) is(are) "in tail position" then the generator will be efficient. In the example below the recursive call by _range to itself is in tail position. The example shows off three advanced topics: tail recursion, generator construction, and sub-functions.

jq 'def range(init; upto; by): def _range: if (by > 0 and . < upto) or (by < 0 and . > upto) then ., ((.+by)|_range) else . end; if by == 0 then init else init|_range end | select((by > 0 and . < upto) or (by < 0 and . > upto)); range(0; 10; 3)' null => 0, 3, 6, 9
jq 'def while(cond; update): def _while: if cond then ., (update | _while) else empty end; _while; [while(.<100; .*2)]' 1 => [1,2,4,8,16,32,64]

MATH
jq currently only has IEEE754 double-precision (64-bit) floating point number support.

Besides simple arithmetic operators such as +, jq also has most standard math functions from the C math library. C math functions that take a single input argument (e.g., sin()) are available as zero-argument jq functions. C math functions that take two input arguments (e.g., pow()) are available as two-argument jq functions that ignore ".". C math functions that take three input arguments are available as three-argument jq functions that ignore ".".

Availability of standard math functions depends on the availability of the corresponding math functions in your operating system and C math library. Unavailable math functions will be defined but will raise an error.

One-input C math functions: acos acosh asin asinh atan atanh cbrt ceil cos cosh erf erfc exp exp10 exp2 expm1 fabs floor gamma j0 j1 lgamma log log10 log1p log2 logb nearbyint pow10 rint round significand sin sinh sqrt tan tanh tgamma trunc y0 y1.

Two-input C math functions: atan2 copysign drem fdim fmax fmin fmod frexp hypot jn ldexp modf nextafter nexttoward pow remainder scalb scalbln yn.

Three-input C math functions: fma.
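To illustrate the calling convention, here is a shell sketch (assuming a jq binary on PATH and that these functions exist in its C library): one-input functions take their input from ".", while two-input functions ignore "." and take both values as arguments:

```shell
# One-input function: the value is piped in through ".".
jq -n '3.7 | floor'
# => 3

# Two-input function: "." is ignored; both inputs are arguments.
jq -n 'pow(2; 10)'
# => 1024
```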
See your system's manual for more information on each of these.

I/O
At this time jq has minimal support for I/O, mostly in the form of control over when inputs are read. Two builtin functions are provided for this, input and inputs, that read from the same sources (e.g., stdin, files named on the command line) as jq itself. These two builtins, and jq's own reading actions, can be interleaved with each other.

Two builtins provide minimal output capabilities, debug and stderr. (Recall that a jq program's output values are always output as JSON texts on stdout.) The debug builtin can have application-specific behavior, such as for executables that use the libjq C API but aren't the jq executable itself. The stderr builtin outputs its input in raw mode to stderr with no additional decoration, not even a newline.

Most jq builtins are referentially transparent, and yield constant and repeatable value streams when applied to constant inputs. This is not true of I/O builtins.

input
Outputs one new input.

inputs
Outputs all remaining inputs, one by one. This is primarily useful for reductions over a program's inputs.

debug
Causes a debug message based on the input value to be produced. The jq executable wraps the input value with ["DEBUG:", <input-value>] and prints that and a newline on stderr, compactly. This may change in the future.

stderr
Prints its input in raw and compact mode to stderr with no additional decoration, not even a newline.

input_filename
Returns the name of the file whose input is currently being filtered. Note that this will not work well unless jq is running in a UTF-8 locale.

input_line_number
Returns the line number of the input currently being filtered.

STREAMING
With the --stream option jq can parse input texts in a streaming fashion, allowing jq programs to start processing large JSON texts immediately rather than after the parse completes.
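The input and inputs builtins described above pair naturally with the -n option, which suppresses jq's own reading of the first input (assumes jq is installed):

```shell
# Sum every JSON number on stdin; -n starts from null, so `inputs`
# sees all of the values rather than all but the first.
printf '1\n2\n3\n' | jq -n 'reduce inputs as $x (0; . + $x)'
# => 6
```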
If you have a single JSON text that is 1GB in size, streaming it will allow you to process it much more quickly. However, streaming isn't easy to deal with as the jq program will have [<path>, <leaf-value>] (and a few other forms) as inputs. Several builtins are provided to make handling streams easier.

The examples below use the streamed form of [0,[1]], which is [[0],0],[[1,0],1],[[1,0]],[[1]].

Streaming forms include [<path>, <leaf-value>] (to indicate any scalar value, empty array, or empty object), and [<path>] (to indicate the end of an array or object). Future versions of jq run with --stream and --seq may output additional forms such as ["error message"] when an input text fails to parse.

truncate_stream(stream_expression)
Consumes a number as input and truncates the corresponding number of path elements from the left of the outputs of the given streaming expression.

jq '[1|truncate_stream([[0],1],[[1,0],2],[[1,0]],[[1]])]'
   1
=> [[[0],2],[[0]]]

fromstream(stream_expression)
Outputs values corresponding to the stream expression's outputs.

jq 'fromstream(1|truncate_stream([[0],1],[[1,0],2],[[1,0]],[[1]]))'
   null
=> [2]

tostream
The tostream builtin outputs the streamed form of its input.

jq '. as $dot|fromstream($dot|tostream)|.==$dot'
   [0,[1,{"a":1},{"b":2}]]
=> true

ASSIGNMENT
Assignment works a little differently in jq than in most programming languages. jq doesn't distinguish between references to and copies of something - two objects or arrays are either equal or not equal, without any further notion of being "the same object" or "not the same object".

If an object has two fields which are arrays, .foo and .bar, and you append something to .foo, then .bar will not get bigger, even if you've previously set .bar = .foo. If you're used to programming in languages like Python, Java, Ruby, Javascript, etc.
then you can think of it as though jq does a full deep copy of every object before it does the assignment (for performance it doesn't actually do that, but that's the general idea). This means that it's impossible to build circular values in jq (such as an array whose first element is itself). This is quite intentional, and ensures that anything a jq program can produce can be represented in JSON.

All the assignment operators in jq have path expressions on the left-hand side (LHS). The right-hand side (RHS) provides values to set to the paths named by the LHS path expressions.

Values in jq are always immutable. Internally, assignment works by using a reduction to compute new, replacement values for . that have had all the desired assignments applied to ., then outputting the modified value. This might be made clear by this example: {a:{b:{c:1}}} | (.a.b|=3), .. This will output {"a":{"b":3}} and {"a":{"b":{"c":1}}} because the last sub-expression, ., sees the original value, not the modified value.

Most users will want to use modification assignment operators, such as |= or +=, rather than =.

Note that the LHS of assignment operators refers to a value in .. Thus $var.foo = 1 won't work as expected ($var.foo is not a valid or useful path expression in .); use $var | .foo = 1 instead.

Note too that .a,.b=0 does not set .a and .b, but (.a,.b)=0 sets both.

Update-assignment: |=
This is the "update" operator '|='. It takes a filter on the right-hand side and works out the new value for the property of . being assigned to by running the old value through this expression. For instance, (.foo, .bar) |= .+1 will build an object with the "foo" field set to the input's "foo" plus 1, and the "bar" field set to the input's "bar" plus 1.

The left-hand side can be any general path expression; see path().

Note that the left-hand side of '|=' refers to a value in .. Thus $var.foo |= .
+ 1 won't work as expected ($var.foo is not a valid or useful path expression in .); use $var | .foo |= . + 1 instead.

If the right-hand side outputs no values (i.e., empty), then the left-hand side path will be deleted, as with del(path). If the right-hand side outputs multiple values, only the first one will be used (COMPATIBILITY NOTE: in jq 1.5 and earlier releases, it used to be that only the last one was used).

jq '(..|select(type=="boolean")) |= if . then 1 else 0 end'
   [true,false,[5,true,[true,[false]],false]]
=> [1,0,[5,1,[1,[0]],0]]

Arithmetic update-assignment: +=, -=, *=, /=, %=, //=
jq has a few operators of the form a op= b, which are all equivalent to a |= . op b. So, += 1 can be used to increment values, being the same as |= . + 1.

jq '.foo += 1'
   {"foo": 42}
=> {"foo": 43}

Plain assignment: =
This is the plain assignment operator. Unlike the others, the input to the right-hand-side (RHS) is the same as the input to the left-hand-side (LHS) rather than the value at the LHS path, and all values output by the RHS will be used (as shown below).

If the RHS of '=' produces multiple values, then for each such value jq will set the paths on the left-hand side to the value and then it will output the modified .. For example, (.a,.b)=range(2) outputs {"a":0,"b":0}, then {"a":1,"b":1}. The "update" assignment forms (see above) do not do this.

This example should show the difference between '=' and '|=':

Provide input '{"a": {"b": 10}, "b": 20}' to the programs:

.a = .b
.a |= .b

The former will set the "a" field of the input to the "b" field of the input, and produce the output {"a": 20, "b": 20}. The latter will set the "a" field of the input to the "a" field's "b" field, producing {"a": 10, "b": 20}.

Another example of the difference between '=' and '|=': null|(.a,.b)=range(3) outputs '{"a":0,"b":0}', '{"a":1,"b":1}', and '{"a":2,"b":2}', while null|(.a,.b)|=range(3) outputs just '{"a":0,"b":0}'.
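The '=' versus '|=' difference described above can be replayed from the shell (assumes jq is installed):

```shell
# Plain '=': the RHS sees the whole input, so .a becomes the input's .b.
echo '{"a": {"b": 10}, "b": 20}' | jq -c '.a = .b'
# => {"a":20,"b":20}

# Update '|=': the RHS sees the old value at the path, so .a becomes .a.b.
echo '{"a": {"b": 10}, "b": 20}' | jq -c '.a |= .b'
# => {"a":10,"b":20}
```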
Complex assignments
Lots more things are allowed on the left-hand side of a jq assignment than in most languages. We've already seen simple field accesses on the left hand side, and it's no surprise that array accesses work just as well:

.posts[0].title = "JQ Manual"

What may come as a surprise is that the expression on the left may produce multiple results, referring to different points in the input document:

.posts[].comments |= . + ["this is great"]

That example appends the string "this is great" to the "comments" array of each post in the input (where the input is an object with a field "posts" which is an array of posts).

When jq encounters an assignment like 'a = b', it records the "path" taken to select a part of the input document while executing a. This path is then used to find which part of the input to change while executing the assignment. Any filter may be used on the left-hand side of an equals - whichever paths it selects from the input will be where the assignment is performed.

This is a very powerful operation. Suppose we wanted to add a comment to blog posts, using the same "blog" input above. This time, we only want to comment on the posts written by "stedolan". We can find those posts using the "select" function described earlier:

.posts[] | select(.author == "stedolan")

The paths provided by this operation point to each of the posts that "stedolan" wrote, and we can comment on each of them in the same way that we did before:

(.posts[] | select(.author == "stedolan") | .comments) |= . + ["terrible."]

MODULES
jq has a library/module system. Modules are files whose names end in .jq.

Modules imported by a program are searched for in a default search path (see below). The import and include directives allow the importer to alter this path.

Paths in the search path are subject to various substitutions.

For paths starting with "~/", the user's home directory is substituted for "~".
For paths starting with "$ORIGIN/", the path of the jq executable is substituted for "$ORIGIN".

For paths starting with "./" or paths that are ".", the path of the including file is substituted for ".". For top-level programs given on the command-line, the current directory is used.

Import directives can optionally specify a search path to which the default is appended.

The default search path is the search path given to the -L command-line option, else ["~/.jq", "$ORIGIN/../lib/jq", "$ORIGIN/../lib"].

Null and empty string path elements terminate search path processing. A dependency with relative path "foo/bar" would be searched for in "foo/bar.jq" and "foo/bar/bar.jq" in the given search path. This is intended to allow modules to be placed in a directory along with, for example, version control files, README files, and so on, but also to allow for single-file modules.

Consecutive components with the same name are not allowed to avoid ambiguities (e.g., "foo/foo").

For example, with -L$HOME/.jq a module foo can be found in $HOME/.jq/foo.jq and $HOME/.jq/foo/foo.jq. If "$HOME/.jq" is a file, it is sourced into the main program.

import RelativePathString as NAME [<metadata>];
Imports a module found at the given path relative to a directory in a search path. A ".jq" suffix will be added to the relative path string. The module's symbols are prefixed with "NAME::".

The optional metadata must be a constant jq expression. It should be an object with keys like "homepage" and so on. At this time jq only uses the "search" key/value of the metadata. The metadata is also made available to users via the modulemeta builtin.

The "search" key in the metadata, if present, should have a string or array value (array of strings); this is the search path to be prefixed to the top-level search path.

include RelativePathString [<metadata>];
Imports a module found at the given path relative to a directory in a search path as if it were included in place.
A ".jq" suffix will be added to the relative path string. The moduleΒ΄s symbols are imported into the callerΒ΄s namespace as if the moduleΒ΄s content had been included directly. The optional metadata must be a constant jq expression. It should be an object with keys like "homepage" and so on. At this time jq only uses the "search" key/value of the metadata. The metadata is also made available to users via the modulemeta builtin. import RelativePathString as $NAME [<metadata>]; Imports a JSON file found at the given path relative to a directory in a search path. A ".json" suffix will be added to the relative path string. The fileΒ΄s data will be available as $NAME::NAME. The optional metadata must be a constant jq expression. It should be an object with keys like "homepage" and so on. At this time jq only uses the "search" key/value of the metadata. The metadata is also made available to users via the modulemeta builtin. The "search" key in the metadata, if present, should have a string or array value (array of strings); this is the search path to be prefixed to the top-level search path. module <metadata>; This directive is entirely optional. ItΒ΄s not required for proper operation. It serves only the purpose of providing metadata that can be read with the modulemeta builtin. The metadata must be a constant jq expression. It should be an object with keys like "homepage". At this time jq doesnΒ΄t use this metadata, but it is made available to users via the modulemeta builtin. modulemeta Takes a module name as input and outputs the moduleΒ΄s metadata as an object, with the moduleΒ΄s imports (including metadata) as an array value for the "deps" key. Programs can use this to query a moduleΒ΄s metadata, which they could then use to, for example, search for, download, and install missing dependencies. 
COLORS To configure alternative colors just set the JQ_COLORS environment variable to colon-delimited list of partial terminal escape sequences like "1;31", in this order: β€’ color for null β€’ color for false β€’ color for true β€’ color for numbers β€’ color for strings β€’ color for arrays β€’ color for objects The default color scheme is the same as setting "JQ_COLORS=1;30:0;39:0;39:0;39:0;32:1;39:1;39". This is not a manual for VT100/ANSI escapes. However, each of these color specifications should consist of two numbers separated by a semi-colon, where the first number is one of these: β€’ 1 (bright) β€’ 2 (dim) β€’ 4 (underscore) β€’ 5 (blink) β€’ 7 (reverse) β€’ 8 (hidden) and the second is one of these: β€’ 30 (black) β€’ 31 (red) β€’ 32 (green) β€’ 33 (yellow) β€’ 34 (blue) β€’ 35 (magenta) β€’ 36 (cyan) β€’ 37 (white) BUGS Presumably. Report them or discuss them at: https://github.com/stedolan/jq/issues AUTHOR Stephen Dolan <mu@netsoc.tcd.ie> December 2017 JQ(1)
null
null
testsolv
The testsolv tool can be used to run a testcase. Testcases can either be manually created to test specific features, or they can be written by libsolv's testcase_write function. This is useful to evaluate bug reports about the solver.

-v
Increase the debug level of the solver. This option can be specified multiple times to further increase the amount of debug data.

-r
Write the output in testcase format instead of human readable text. The output can then be used in the result section of the test case. If the -r option is given twice, the output is formatted for verbatim inclusion.

-l PKGSPEC
Instead of running the solver, list packages in the repositories.

-s SOLUTIONSPEC
This is used in the solver test suite to test the calculated solutions to encountered problems.

AUTHOR
Michael Schroeder <mls@suse.de>

libsolv 09/14/2018 TESTSOLV(1)
testsolv - run a libsolv testcase through the solver
testsolv [OPTIONS] TESTCASE
null
null
ranlib
The libtool command takes the specified input object files and creates a library for use with the link editor, ld(1). The library's name is specified by output (the argument to the -o flag). The input object files may be in any correct format that contains object files (``universal'' files, archives, object files). Libtool will not put any non-object input file into the output library (unlike ranlib, which allows this in the archives it operates on). When producing a ``universal'' file from objects of the same CPU type and differing CPU subtypes, libtool and ranlib create at most one library for each CPU type, rather than a separate library in a universal file for each of the unique pairings of CPU type and CPU subtype. Thus, the resulting CPU subtype for each library is the _ALL CPU subtype for that CPU type. This strategy strongly encourages the implementor of a library to create one library that chooses optimum code to run at run time, rather than at link time. Libtool can create either dynamically linked shared libraries, with -dynamic, or statically linked (archive) libraries, with -static.

DYNAMICALLY LINKED SHARED LIBRARIES
Dynamically linked libraries, unlike statically linked libraries, are Mach-O format files and not ar(5) format files. Dynamically linked libraries have two restrictions: No symbol may be defined in more than one object file and no common symbol can be used. To maximize sharing of a dynamically linked shared library the objects should be compiled with the -dynamic flag of cc(1) to produce indirect undefined references and position-independent code. To build a dynamically linked library, libtool runs the link editor, ld(1), with -dylib once for each architecture present in the input objects and then lipo(1) to create a universal file if needed.

ARCHIVE (or statically linked) LIBRARIES
Libtool with -static is intended to replace ar(5) and ranlib. For backward compatibility, ranlib is still available, and it supports universal files.
Ranlib adds or updates the table of contents to each archive so it can be linked by the link editor, ld(1). The table of contents is an archive member at the beginning of the archive that indicates which symbols are defined in which library members. Because ranlib rewrites the archive, sufficient temporary file space must be available in the file system that contains the current directory. Ranlib takes all correct forms of libraries (universal files containing archives, and simple archives) and updates the table of contents for all archives in the file. Ranlib also takes one common incorrect form of archive, an archive whose members are universal object files, adding or updating the table of contents and producing the library in correct form (a universal file containing multiple archives). The archive member name for a table of contents begins with ``__.SYMDEF''. Currently, there are two types of table of contents produced by libtool -static and ranlib and understood by the link editor, ld(1). These are explained below, under the -s and -a options.
libtool - create libraries ranlib - add or update the table of contents of archive libraries
libtool -static -o output [ -sacLTD ] [ - ] [ -arch_only arch_type ] [ -no_warning_for_no_symbols ] file... [-filelist listfile[,dirname]] libtool -dynamic -o output [ -install_name name ] [ -compatibility_version number ] [ -current_version number ] [ link editor flags ] [ -v ] [ -noall_load ] [ - ] [ -arch_only arch_type ] [ -V ] file... [-filelist listfile[,dirname]] ranlib [ -sactfqLT ] [ - ] archive...
The following options pertain to libtool only.

@file
Arguments beginning with @ are replaced by arguments read from the specified file, as an alternative to listing those arguments on the command line. The files simply contain libtool options and files separated by whitespace: spaces, tabs, and newlines. Characters can be escaped with a backslash (\), including whitespace characters and other backslashes. Also, arguments that include whitespace can be enclosed, wholly or in part, by single- or double-quote characters. These files may contain @file references to additional files, although libtool will error on include cycles. If a file cannot be found, the original @file argument will remain in the argument list.

-static
Produce a statically linked (archive) library from the input files. This is the default.

-dynamic
Produce a dynamically linked shared library from the input files.

-install_name name
For a dynamic shared library, this specifies the file name the library will be installed in for programs that use it. If this is not specified the name specified by the -o output option will be used.

-compatibility_version number
For a dynamic shared library, this specifies the compatibility version number of the library. When a library is used the compatibility version is checked and if the user's version is greater than the library's version, an error message is printed and the using program exits. The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If this is not specified then it has a value of 0 and no checking is done when the library is used.

-current_version number
For dynamic shared library files this specifies the current version number of the library. The program using the library can obtain the current version of the library programmatically to determine exactly which version of the library it is using.
The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If this is not specified then it has a value of 0.

-noall_load
For dynamic shared library files this specifies that the default behavior of loading all members of archives on the command line is not to be done. This option is used by the GNU compiler driver, cc(1), when used with its -dynamiclib option. This is done to allow selective loading of the GNU compiler's runtime support library, libcc_dynamic.a.

link editor flags
For a dynamic shared library the following ld(1) flags are accepted and passed through: -lx, -weak-lx, -search_paths_first -weak_library, -Ldir, -ysym, -usym, -initsym, -idefinition:indirect, -seg1addr, -segs_read_only_addr, -segs_read_write_addr, -seg_addr_table, -seg_addr_table_filename, -segprot, -segalign, -sectcreate, -sectorder, -sectorder_detail, -sectalign, -undefined, -read_only_relocs, -prebind, -prebind_all_twolevel_modules, -prebind_allow_overlap, -noprebind, -framework, -weak_framework, -umbrella, -allowable_client, -sub_umbrella, -sub_library, -F, -U, -Y, -Sn, -Si, -Sp, -S, -X, -x, -whyload, -all_load, -arch_errors_fatal, -dylib_file, -run_init_lazily, -final_output, -macosx_version_min, -multiply_defined, -multiply_defined_unused, -twolevel_namespace, -twolevel_namespace_hints, -flat_namespace, -nomultidefs, -headerpad, -headerpad_max_install_names, -weak_reference_mismatches, -M, -t, -no_arch_warnings, -single_module, -multi_module, -exported_symbols_list, -unexported_symbols_list, -m, -dead_strip, -no_dead_strip_inits_and_terms, -executable_path, -syslibroot, -no_uuid. See the ld(1) man page for details on these flags. The flag -image_base is a synonym for -seg1addr.

-v
Verbose mode, which prints the ld(1) commands and lipo(1) commands executed.

-V
Print the version of libtool.
-filelist listfile[,dirname]
The listfile contains a list of file names and is an alternative way of specifying file names on the command line. The file names are listed one per line separated only by newlines (spaces and tabs are assumed to be part of the file name). If the optional directory name, dirname is specified then it is prepended to each name in the list file.

-arch_only arch_type
This option causes libtool to build a library only for the specified arch_type and ignores all other architectures in the input files. When building a dynamic library, if this is specified with a specific cpusubtype other than the family cpusubtype then libtool does not use the ld(1) -force_cpusubtype_ALL flag and passes the -arch_only argument to ld(1) as the -arch flag so that the output is tagged with that cpusubtype.

The following options pertain to the table of contents for an archive library, and apply to both libtool -static and ranlib:

-s
Produce the preferred type of table of contents, which results in faster link editing when linking with the archive. The order of the table of contents is sorted by symbol name. The library member name of this type of table of contents is ``__.SYMDEF SORTED''. This type of table of contents can only be produced when the library does not have multiple members that define the same symbol. This is the default.

-a
Produce the original type of table of contents, whose order is based on the order of the members in the archive. The library member name of this type of table of contents is ``__.SYMDEF''. This type of table of contents must be used when the library has multiple members that define the same symbol.

-c
Include common symbols as definitions with respect to the table of contents. This is seldom the intended behavior for linking from a library, as it forces the linking of a library member just because it uses an uninitialized global that is undefined at that point in the linking.
This option is included only because this was the original behavior of ranlib. This option is not the default.

-L
Use the 4.4bsd archive extended format #1, which allows archive member names to be longer than 16 characters and have spaces in their names. This option is the default.

-T
Truncate archive member names to 16 characters and don't use the 4.4bsd extended format #1. This option is not the default.

-f
Warns when the output archive is universal and ar(1) will no longer be able to operate on it.

-q
Do nothing if a universal file would be created.

-D
When building a static library, set archive contents' user ids, group ids, dates, and file modes to reasonable defaults. This allows libraries created with identical input to be identical to each other, regardless of time of day, user, group, umask, and other aspects of the environment.

For compatibility, the following ranlib option is accepted (but ignored):

-t
This option used to request that ranlib only ``touch'' the archives instead of modifying them. The option is now ignored, and the table of contents is rebuilt.

Other options applying to both libtool and ranlib:

-
Treat all remaining arguments as names of files (or archives) and not as options.

-no_warning_for_no_symbols
Don't warn about files that have no symbols.

-dependency_info path
Write an Xcode dependency info file describing a successful build operation. This file describes the inputs directly or indirectly used to create the library or dylib.

SEE ALSO
ld(1), ar(1), otool(1), make(1), redo_prebinding(1), ar(5)

BUGS
With the way libraries used to be created, errors were possible if the library was modified with ar(1) and the table of contents was not updated by rerunning ranlib(1). So previously the link editor, ld(1), generated an error when the modification date of a library was more recent than the creation date of its table of contents. Unfortunately, this meant that you got the error even if you only copy the library.
Since this error was found to be too much of a nuisance it was removed. So now it is possible again to get link errors if the library is modified and the table of contents is not updated. Apple Inc. June 23, 2020 LIBTOOL(1)
null
jupyter-nbclassic-extension
null
null
null
null
null
tput
The tput utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or return the long name of the requested terminal type. The result depends upon the capability's type: string tput writes the string to the standard output. No trailing newline is supplied. integer tput writes the decimal value to the standard output, with a trailing newline. boolean tput simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output. Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5). -Ttype indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. capname indicates the capability from the terminfo database. When termcap support is compiled in, the termcap name for the capability is also accepted. parms If the capability is a string that takes parameters, the arguments parms will be instantiated into the string. Most parameters are numbers. Only a few terminfo capabilities require string parameters; tput uses a table to decide which to pass as strings. Normally tput uses tparm (3X) to perform the substitution. If no parameters are given for the capability, tput writes the string without performing the substitution. -S allows more than one capability per invocation of tput. The capabilities must be passed to tput from the standard input instead of from the command line (see example). Only one capname is allowed per line. 
The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Again, tput uses a table and the presence of parameters in its input to decide whether to use tparm(3X), and how to interpret the parameters.

-V
reports the version of ncurses which was used in this program, and exits.

init
If the terminfo database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur: (1) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization, (2) any delays (e.g., newline) specified in the entry will be set in the tty driver, (3) tabs expansion will be turned on or off according to the specification in the entry, and (4) if tabs are not expanded, standard tabs will be set (every 8 spaces). If an entry does not contain the information needed for any of the four above activities, that activity will silently be skipped.

reset
Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output. Otherwise, reset acts identically to init.

longname
If the terminfo database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)].

If tput is invoked by a link named reset, this has the same effect as tput reset. See tset(1) for comparison, which has similar behavior.
tput, reset - initialize a terminal or query terminfo database
tput [-Ttype] capname [parms ... ] tput [-Ttype] init tput [-Ttype] reset tput [-Ttype] longname tput -S << tput -V
null
tput init
Initialize the terminal according to the type of terminal in the environmental variable TERM. This command should be included in everyone's .profile after the environmental variable TERM has been exported, as illustrated on the profile(5) manual page.

tput -T5620 reset
Reset an AT&T 5620 terminal, overriding the type of terminal in the environmental variable TERM.

tput cup 0 0
Send the sequence to move the cursor to row 0, column 0 (the upper left corner of the screen, usually known as the "home" cursor position).

tput clear
Echo the clear-screen sequence for the current terminal.

tput cols
Print the number of columns for the current terminal.

tput -T450 cols
Print the number of columns for the 450 terminal.

bold=`tput smso` offbold=`tput rmso`
Set the shell variables bold, to begin stand-out mode sequence, and offbold, to end standout mode sequence, for the current terminal. This might be followed by a prompt: echo "${bold}Please type in your name: ${offbold}\c"

tput hc
Set exit code to indicate if the current terminal is a hard copy terminal.

tput cup 23 4
Send the sequence to move the cursor to row 23, column 4.

tput cup
Send the terminfo string for cursor-movement, with no parameters substituted.

tput longname
Print the long name from the terminfo database for the type of terminal specified in the environmental variable TERM.

tput -S <<!
> clear
> cup 10 10
> bold
> !

This example shows tput processing several capabilities in one invocation. It clears the screen, moves the cursor to position 10, 10 and turns on bold (extra bright) mode. The list is terminated by an exclamation mark (!) on a line by itself.
FILES /usr/share/terminfo compiled terminal description database /usr/share/tabset/* tab settings for some terminals, in a format appropriate to be output to the terminal (escape sequences that set margins and tabs); for more information, see the "Tabs and Initialization" section of terminfo(5) EXIT CODES If the -S option is used, tput checks for errors from each line, and if any errors are found, will set the exit code to 4 plus the number of lines with errors. If no errors are found, the exit code is 0. No indication of which line failed can be given so exit code 1 will never appear. Exit codes 2, 3, and 4 retain their usual interpretation. If the -S option is not used, the exit code depends on the type of capname: boolean a value of 0 is set for TRUE and 1 for FALSE. string a value of 0 is set if the capname is defined for this terminal type (the value of capname is returned on standard output); a value of 1 is set if capname is not defined for this terminal type (nothing is written to standard output). integer a value of 0 is always set, whether or not capname is defined for this terminal type. To determine if capname is defined for this terminal type, the user must test the value written to standard output. A value of -1 means that capname is not defined for this terminal type. other reset or init may fail to find their respective files. In that case, the exit code is set to 4 + errno. Any other exit code indicates an error; see the DIAGNOSTICS section. DIAGNOSTICS tput prints the following error messages and sets the corresponding exit codes. exit code error message ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0 (capname is a numeric variable that is not specified in the terminfo(5) database for this terminal type, e.g. tput -T450 lines and tput -T2621 xmc) 1 no error message is printed, see the EXIT CODES section. 
2 usage error 3 unknown terminal type or no terminfo database 4 unknown terminfo capability capname >4 error occurred in -S ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ PORTABILITY The longname and -S options, and the parameter-substitution features used in the cup example, are not supported in BSD curses or in AT&T/USL curses before SVr4. X/Open documents only the operands for clear, init and reset. In this implementation, clear is part of the capname support. Other implementations of tput on SVr4-based systems such as Solaris, IRIX64 and HPUX as well as others such as AIX and Tru64 provide support for capname operands. A few platforms such as FreeBSD and NetBSD recognize termcap names rather than terminfo capability names in their respective tput commands. Most implementations which provide support for capname operands use the tparm function to expand parameters in it. That function expects a mixture of numeric and string parameters, requiring tput to know which type to use. This implementation uses a table to determine that for the standard capname operands, and an internal library function to analyze nonstandard capname operands. Other implementations may simply guess that an operand containing only digits is intended to be a number. SEE ALSO clear(1), stty(1), tabs(1), terminfo(5), curs_termcap(3X). This describes ncurses version 5.7 (patch 20081102). tput(1)
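The exit-code conventions above can be sketched in a short shell fragment. This is an illustrative example only; it assumes an xterm entry is installed in the local terminfo database.

```shell
# Boolean capname: only the exit status carries the answer
# (hc = "hard copy" capability, which xterm does not have).
if tput -Txterm hc; then
    echo "hard-copy terminal"
else
    echo "not a hard-copy terminal"
fi

# Integer capname: the exit status is always 0; the value (or -1 when
# the capability is absent) appears on standard output instead.
echo "columns: $(tput -Txterm cols)"
```

Because a boolean capname never writes to standard output, scripts must branch on the exit status for booleans but parse standard output for integers.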
virtualenv
null
null
null
null
null
sqlite3_analyzer
null
null
null
null
null
conda-token
null
null
null
null
null
holoviews
null
null
null
null
null
qtdiag
null
null
null
null
null
syncqt.pl
null
null
null
null
null
openssl
OpenSSL is a cryptography toolkit implementing the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) network protocols and related cryptography standards required by them. The openssl program is a command line program for using the various cryptography functions of OpenSSL's crypto library from the shell. It can be used for o Creation and management of private keys, public keys and parameters o Public key cryptographic operations o Creation of X.509 certificates, CSRs and CRLs o Calculation of Message Digests and Message Authentication Codes o Encryption and Decryption with Ciphers o SSL/TLS Client and Server Tests o Handling of S/MIME signed or encrypted mail o Timestamp requests, generation and verification COMMAND SUMMARY The openssl program provides a rich variety of commands (command in the "SYNOPSIS" above). Each command can have many options and argument parameters, shown above as options and parameters. Detailed documentation and use cases for most standard subcommands are available (e.g., openssl-x509(1)). The subcommand openssl-list(1) may be used to list subcommands. The command no-XXX tests whether a command of the specified name is available. If no command named XXX exists, it returns 0 (success) and prints no-XXX; otherwise it returns 1 and prints XXX. In both cases, the output goes to stdout and nothing is printed to stderr. Additional command line arguments are always ignored. Since for each cipher there is a command of the same name, this provides an easy way for shell scripts to test for the availability of ciphers in the openssl program. (no-XXX is not able to detect pseudo-commands such as quit, list, or no-XXX itself.) Configuration Option Many commands use an external configuration file for some or all of their arguments and have a -config option to specify that file. 
The default name of the file is openssl.cnf in the default certificate storage area, which can be determined from the openssl-version(1) command using the -d or -a option. The environment variable OPENSSL_CONF can be used to specify a different file location or to disable loading a configuration (using the empty string). Among others, the configuration file can be used to load modules and to specify parameters for generating certificates and random numbers. See config(5) for details. Standard Commands asn1parse Parse an ASN.1 sequence. ca Certificate Authority (CA) Management. ciphers Cipher Suite Description Determination. cms CMS (Cryptographic Message Syntax) command. crl Certificate Revocation List (CRL) Management. crl2pkcs7 CRL to PKCS#7 Conversion. dgst Message Digest calculation. MAC calculations are superseded by openssl-mac(1). dhparam Generation and Management of Diffie-Hellman Parameters. Superseded by openssl-genpkey(1) and openssl-pkeyparam(1). dsa DSA Data Management. dsaparam DSA Parameter Generation and Management. Superseded by openssl-genpkey(1) and openssl-pkeyparam(1). ec EC (Elliptic curve) key processing. ecparam EC parameter manipulation and generation. enc Encryption, decryption, and encoding. engine Engine (loadable module) information and manipulation. errstr Error Number to Error String Conversion. fipsinstall FIPS configuration installation. gendsa Generation of DSA Private Key from Parameters. Superseded by openssl-genpkey(1) and openssl-pkey(1). genpkey Generation of Private Key or Parameters. genrsa Generation of RSA Private Key. Superseded by openssl-genpkey(1). help Display information about a command's options. info Display diverse information built into the OpenSSL libraries. kdf Key Derivation Functions. list List algorithms and features. mac Message Authentication Code Calculation. nseq Create or examine a Netscape certificate sequence. ocsp Online Certificate Status Protocol command. passwd Generation of hashed passwords. 
pkcs12 PKCS#12 Data Management. pkcs7 PKCS#7 Data Management. pkcs8 PKCS#8 format private key conversion command. pkey Public and private key management. pkeyparam Public key algorithm parameter management. pkeyutl Public key algorithm cryptographic operation command. prime Compute prime numbers. rand Generate pseudo-random bytes. rehash Create symbolic links to certificate and CRL files named by the hash values. req PKCS#10 X.509 Certificate Signing Request (CSR) Management. rsa RSA key management. rsautl RSA command for signing, verification, encryption, and decryption. Superseded by openssl-pkeyutl(1). s_client This implements a generic SSL/TLS client which can establish a transparent connection to a remote server speaking SSL/TLS. It's intended for testing purposes only and provides only rudimentary interface functionality but internally uses mostly all functionality of the OpenSSL ssl library. s_server This implements a generic SSL/TLS server which accepts connections from remote clients speaking SSL/TLS. It's intended for testing purposes only and provides only rudimentary interface functionality but internally uses mostly all functionality of the OpenSSL ssl library. It provides both an own command line oriented protocol for testing SSL functions and a simple HTTP response facility to emulate an SSL/TLS-aware webserver. s_time SSL Connection Timer. sess_id SSL Session Data Management. smime S/MIME mail processing. speed Algorithm Speed Measurement. spkac SPKAC printing and generating command. srp Maintain SRP password file. This command is deprecated. storeutl Command to list and display certificates, keys, CRLs, etc. ts Time Stamping Authority command. verify X.509 Certificate Verification. See also the openssl-verification-options(1) manual page. version OpenSSL Version Information. x509 X.509 Certificate Data Management. 
Message Digest Commands blake2b512 BLAKE2b-512 Digest blake2s256 BLAKE2s-256 Digest md2 MD2 Digest md4 MD4 Digest md5 MD5 Digest mdc2 MDC2 Digest rmd160 RMD-160 Digest sha1 SHA-1 Digest sha224 SHA-2 224 Digest sha256 SHA-2 256 Digest sha384 SHA-2 384 Digest sha512 SHA-2 512 Digest sha3-224 SHA-3 224 Digest sha3-256 SHA-3 256 Digest sha3-384 SHA-3 384 Digest sha3-512 SHA-3 512 Digest keccak-224 KECCAK 224 Digest keccak-256 KECCAK 256 Digest keccak-384 KECCAK 384 Digest keccak-512 KECCAK 512 Digest shake128 SHA-3 SHAKE128 Digest shake256 SHA-3 SHAKE256 Digest sm3 SM3 Digest Encryption, Decryption, and Encoding Commands The following aliases provide convenient access to the most used encodings and ciphers. Depending on how OpenSSL was configured and built, not all ciphers listed here may be present. See openssl-enc(1) for more information. aes128, aes-128-cbc, aes-128-cfb, aes-128-ctr, aes-128-ecb, aes-128-ofb AES-128 Cipher aes192, aes-192-cbc, aes-192-cfb, aes-192-ctr, aes-192-ecb, aes-192-ofb AES-192 Cipher aes256, aes-256-cbc, aes-256-cfb, aes-256-ctr, aes-256-ecb, aes-256-ofb AES-256 Cipher aria128, aria-128-cbc, aria-128-cfb, aria-128-ctr, aria-128-ecb, aria-128-ofb Aria-128 Cipher aria192, aria-192-cbc, aria-192-cfb, aria-192-ctr, aria-192-ecb, aria-192-ofb Aria-192 Cipher aria256, aria-256-cbc, aria-256-cfb, aria-256-ctr, aria-256-ecb, aria-256-ofb Aria-256 Cipher base64 Base64 Encoding bf, bf-cbc, bf-cfb, bf-ecb, bf-ofb Blowfish Cipher camellia128, camellia-128-cbc, camellia-128-cfb, camellia-128-ctr, camellia-128-ecb, camellia-128-ofb Camellia-128 Cipher camellia192, camellia-192-cbc, camellia-192-cfb, camellia-192-ctr, camellia-192-ecb, camellia-192-ofb Camellia-192 Cipher camellia256, camellia-256-cbc, camellia-256-cfb, camellia-256-ctr, camellia-256-ecb, camellia-256-ofb Camellia-256 Cipher cast, cast-cbc CAST Cipher cast5-cbc, cast5-cfb, cast5-ecb, cast5-ofb CAST5 Cipher chacha20 Chacha20 Cipher des, des-cbc, des-cfb, des-ecb, des-ede, des-ede-cbc, 
des-ede-cfb, des-ede-ofb, des-ofb DES Cipher des3, desx, des-ede3, des-ede3-cbc, des-ede3-cfb, des-ede3-ofb Triple-DES Cipher idea, idea-cbc, idea-cfb, idea-ecb, idea-ofb IDEA Cipher rc2, rc2-cbc, rc2-cfb, rc2-ecb, rc2-ofb RC2 Cipher rc4 RC4 Cipher rc5, rc5-cbc, rc5-cfb, rc5-ecb, rc5-ofb RC5 Cipher seed, seed-cbc, seed-cfb, seed-ecb, seed-ofb SEED Cipher sm4, sm4-cbc, sm4-cfb, sm4-ctr, sm4-ecb, sm4-ofb SM4 Cipher
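The no-XXX convention described under COMMAND SUMMARY gives shell scripts a simple way to probe which of the cipher commands above are available in a given build. A sketch (the cipher names are just illustrative choices):

```shell
# Probe cipher availability: "openssl no-XXX" exits 1 if a command
# named XXX exists, and 0 if it does not.
for alg in aes-256-cbc idea-cbc; do
    if openssl no-"$alg" >/dev/null 2>&1; then
        echo "$alg: not available"
    else
        echo "$alg: available"
    fi
done
```

aes-256-cbc is present in every standard build; idea-cbc may be absent depending on how OpenSSL was configured, which is exactly the case this idiom is meant to detect.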
openssl - OpenSSL command line program
openssl command [ options ... ] [ parameters ... ] openssl no-XXX [ options ] openssl -help | -version
Details of which options are available depend on the specific command. This section describes some common options with common behavior. Program Options These options can be specified without a command to get help or version information. -help Provides a terse summary of all options. For more detailed information, each command supports a -help option. Accepts --help as well. -version Provides a terse summary of the openssl program version. For more detailed information see openssl-version(1). Accepts --version as well. Common Options -help Provides a terse summary of all options; if an option takes an argument, the "type" of argument is also given. -- This terminates the list of options. It is mostly useful if any filename parameters start with a minus sign: openssl verify [flags...] -- -cert1.pem... Format Options See the openssl-format-options(1) manual page. Pass Phrase Options See the openssl-passphrase-options(1) manual page. Random State Options Prior to OpenSSL 1.1.1, it was common for applications to store information about the state of the random-number generator in a file that was loaded at startup and rewritten upon exit. On modern operating systems, this is generally no longer necessary as OpenSSL will seed itself from a trusted entropy source provided by the operating system. These flags are still supported for special platforms or circumstances that might require them. It is generally an error to use the same seed file more than once and every use of -rand should be paired with -writerand. -rand files A file or files containing random data used to seed the random number generator. Multiple files can be specified separated by an OS-dependent character. The separator is ";" for MS-Windows, "," for OpenVMS, and ":" for all others. Another way to specify multiple files is to repeat this flag with different filenames. -writerand file Writes the seed data to the specified file upon exit. This file can be used in a subsequent command invocation. 
Certificate Verification Options See the openssl-verification-options(1) manual page. Name Format Options See the openssl-namedisplay-options(1) manual page. TLS Version Options Several commands use SSL, TLS, or DTLS. By default, the commands use TLS and clients will offer the lowest and highest protocol version they support, and servers will pick the highest version that the client offers that is also supported by the server. The options below can be used to limit which protocol versions are used, and whether TCP (SSL and TLS) or UDP (DTLS) is used. Note that not all protocols and flags may be available, depending on how OpenSSL was built. -ssl3, -tls1, -tls1_1, -tls1_2, -tls1_3, -no_ssl3, -no_tls1, -no_tls1_1, -no_tls1_2, -no_tls1_3 These options require or disable the use of the specified SSL or TLS protocols. When a specific TLS version is required, only that version will be offered or accepted. Only one specific protocol can be given and it cannot be combined with any of the no_ options. The no_* options do not work with s_time and ciphers commands but work with s_client and s_server commands. -dtls, -dtls1, -dtls1_2 These options specify to use DTLS instead of TLS. With -dtls, clients will negotiate any supported DTLS protocol version. Use the -dtls1 or -dtls1_2 options to support only DTLS1.0 or DTLS1.2, respectively. Engine Options -engine id Load the engine identified by id and use all the methods it implements (algorithms, key storage, etc.), unless specified otherwise in the command-specific documentation or it is configured to do so, as described in "Engine Configuration" in config(5). The engine will be used for key ids specified with -key and similar options when an option like -keyform engine is given. A special case is the "loader_attic" engine, which is meant just for internal OpenSSL testing purposes and supports loading keys, parameters, certificates, and CRLs from files. 
When this engine is used, files with such credentials are read via this engine. Using the "file:" schema is optional; a plain file (path) name will do. Options specifying keys, like -key and similar, can use the generic OpenSSL engine key loading URI scheme "org.openssl.engine:" to retrieve private keys and public keys. The URI syntax is as follows, in simplified form: org.openssl.engine:{engineid}:{keyid} Where "{engineid}" is the identity/name of the engine, and "{keyid}" is a key identifier that's acceptable by that engine. For example, when using an engine that interfaces against a PKCS#11 implementation, the generic key URI would be something like this (this happens to be an example for the PKCS#11 engine that's part of OpenSC): -key org.openssl.engine:pkcs11:label_some-private-key As a third possibility, for engines and providers that have implemented their own OSSL_STORE_LOADER(3), "org.openssl.engine:" should not be necessary. For a PKCS#11 implementation that has implemented such a loader, the PKCS#11 URI as defined in RFC 7512 should be possible to use directly: -key pkcs11:object=some-private-key;pin-value=1234 Provider Options -provider name Load and initialize the provider identified by name. The name can be also a path to the provider module. In that case the provider name will be the specified path and not just the provider module name. Interpretation of relative paths is platform specific. The configured "MODULESDIR" path, OPENSSL_MODULES environment variable, or the path specified by -provider-path is prepended to relative paths. See provider(7) for a more detailed description. -provider-path path Specifies the search path that is to be used for looking for providers. Equivalently, the OPENSSL_MODULES environment variable may be set. -propquery propq Specifies the property query clause to be used when fetching algorithms from the loaded providers. See property(7) for a more detailed description. 
ENVIRONMENT The OpenSSL library can take some configuration parameters from the environment. Some of these variables are listed below. For information about specific commands, see openssl-engine(1), openssl-rehash(1), and tsget(1). For information about the use of environment variables in configuration, see "ENVIRONMENT" in config(5). For information about querying or specifying CPU architecture flags, see OPENSSL_ia32cap(3), and OPENSSL_s390xcap(3). For information about all environment variables used by the OpenSSL libraries, see openssl-env(7). OPENSSL_TRACE=name[,...] Enable tracing output of OpenSSL library, by name. This output will only make sense if you know OpenSSL internals well. Also, it might not give you any output at all if OpenSSL was built without tracing support. The value is a comma separated list of names, with the following available: TRACE Traces the OpenSSL trace API itself. INIT Traces OpenSSL library initialization and cleanup. TLS Traces the TLS/SSL protocol. TLS_CIPHER Traces the ciphers used by the TLS/SSL protocol. CONF Show details about provider and engine configuration. ENGINE_TABLE The function that is used by RSA, DSA (etc) code to select registered ENGINEs, cache defaults and functional references (etc), will generate debugging summaries. ENGINE_REF_COUNT Reference counts in the ENGINE structure will be monitored with a line of output generated for each change. PKCS5V2 Traces PKCS#5 v2 key generation. PKCS12_KEYGEN Traces PKCS#12 key generation. PKCS12_DECRYPT Traces PKCS#12 decryption. X509V3_POLICY Generates the complete policy tree at various points during X.509 v3 policy evaluation. BN_CTX Traces BIGNUM context operations. CMP Traces CMP client and server activity. STORE Traces STORE operations. DECODER Traces decoder operations. ENCODER Traces encoder operations. REF_COUNT Traces decrementing certain ASN.1 structure references. HTTP Traces the HTTP client and server, such as messages being sent and received. 
SEE ALSO openssl-asn1parse(1), openssl-ca(1), openssl-ciphers(1), openssl-cms(1), openssl-crl(1), openssl-crl2pkcs7(1), openssl-dgst(1), openssl-dhparam(1), openssl-dsa(1), openssl-dsaparam(1), openssl-ec(1), openssl-ecparam(1), openssl-enc(1), openssl-engine(1), openssl-errstr(1), openssl-gendsa(1), openssl-genpkey(1), openssl-genrsa(1), openssl-kdf(1), openssl-list(1), openssl-mac(1), openssl-nseq(1), openssl-ocsp(1), openssl-passwd(1), openssl-pkcs12(1), openssl-pkcs7(1), openssl-pkcs8(1), openssl-pkey(1), openssl-pkeyparam(1), openssl-pkeyutl(1), openssl-prime(1), openssl-rand(1), openssl-rehash(1), openssl-req(1), openssl-rsa(1), openssl-rsautl(1), openssl-s_client(1), openssl-s_server(1), openssl-s_time(1), openssl-sess_id(1), openssl-smime(1), openssl-speed(1), openssl-spkac(1), openssl-srp(1), openssl-storeutl(1), openssl-ts(1), openssl-verify(1), openssl-version(1), openssl-x509(1), config(5), x509v3_config(5), crypto(7), openssl-env(7), ssl(7). HISTORY The list -XXX-algorithms options were added in OpenSSL 1.0.0. For notes on the availability of other commands, see their individual manual pages. The -issuer_checks option is deprecated as of OpenSSL 1.1.0 and is silently ignored. The -xcertform and -xkeyform options are obsolete since OpenSSL 3.0 and have no effect. The interactive mode, which could be invoked by running "openssl" with no further arguments, was removed in OpenSSL 3.0, and running that program with no arguments is now equivalent to "openssl help". COPYRIGHT Copyright 2000-2023 The OpenSSL Project Authors. All Rights Reserved. Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <https://www.openssl.org/source/license.html>. 3.3.1 2024-06-04 OPENSSL(1ssl)
null
derb
derb reads the compiled resource bundle files passed on the command line and writes them back in text form. The resulting text files have a .txt extension while compiled resource bundle source files typically have a .res extension. It is customary to name the resource bundles by their locale name, i.e. to use a locale identifier for the bundle filename, e.g. ja_JP.res for Japanese (Japan) data, or root.res for the root bundle. This is especially important for derb since the locale name is not accessible directly from the compiled resource bundle, and to know which locale to ask for when opening the bundle. derb will produce a file whose base name is the base name of the compiled resource file itself. If the --to-stdout, -c option is used, however, the text will be written on the standard output.
derb - disassemble a resource bundle
derb [ -h, -?, --help ] [ -V, --version ] [ -v, --verbose ] [ -e, --encoding encoding ] [ --bom ] [ -t, --truncate [ size ] ] [ -s, --sourcedir source ] [ -d, --destdir destination ] [ -i, --icudatadir directory ] [ -c, --to-stdout ] bundle ...
-h, -?, --help Print help about usage and exit. -V, --version Print the version of derb and exit. -v, --verbose Display extra informative messages during execution. -A, --suppressAliases Don't follow aliases when producing output. -e, --encoding encoding Set the encoding used to write output files to encoding. The default encoding is the invariant (subset of ASCII or EBCDIC) codepage for the system (see section INVARIANT CHARACTERS). The choice of the encoding does not affect the data, just their representation. Characters that cannot be represented in the encoding will be represented using \uhhhh escape sequences. --bom Write a byte order mark (BOM) at the beginning of the file. -l, --locale locale Set the locale for the resource bundle, which is used both in the generated text and as the base name of the output file. -t, --truncate [ size ] Truncate individual resources (strings or binary data) to size bytes. The default if size is not specified is 80 bytes. -s, --sourcedir source Set the source directory to source. The default source directory is the current directory. If - is passed for source, then the bundle will be looked for in its default location, specified by the ICU_DATA environment variable (or defaulting to the location set when ICU was built if ICU_DATA is not set). -d, --destdir destination Set the destination directory to destination. The default destination directory is specified by the environment variable ICU_DATA or is the location set when ICU was built if ICU_DATA is not set. -i, --icudatadir directory Look for any necessary ICU data files in directory. For example, when processing collation overrides, the file ucadata.dat must be located. The default ICU data directory is specified by the environment variable ICU_DATA. -c, --to-stdout Write the disassembled bundle on standard output instead of into a file. 
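Typical invocations might look like the following sketch (the bundle name root.res and the out/ directory are hypothetical):

```shell
# Disassemble the compiled bundle root.res into out/root.txt,
# writing UTF-8 output and truncating long values to 100 bytes.
derb -e UTF-8 -t 100 -d out/ root.res

# Or write the text form to standard output instead of a file:
derb -c root.res
```
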
CAVEATS When the option --bom is used, the character U+FEFF is written in the destination encoding regardless of whether it is a Unicode transformation format (UTF) or not. This option should only be used with a UTF encoding, as byte order marks are not meaningful for other encodings. INVARIANT CHARACTERS The invariant character set consists of the following set of characters, expressed as a standard POSIX regular expression: [a-z]|[A-Z]|[0-9]|_| |+|-|*|/. This is the set which is guaranteed to be available regardless of code page. ENVIRONMENT ICU_DATA Specifies the directory containing ICU data. Defaults to ${prefix}/share/icu/68.1/. Some tools in ICU depend on the presence of the trailing slash. It is thus important to make sure that it is present if ICU_DATA is set. AUTHORS Vladimir Weinstein Yves Arrouye VERSION 1.0 COPYRIGHT Copyright (C) 2002 IBM, Inc. and others. SEE ALSO genrb(1) ICU MANPAGE 7 Mar 2014 DERB(1)
null
arm64-apple-darwin20.0.0-vtool
null
null
null
null
null
zopflipng
null
null
null
null
null