strip
strip removes or modifies the symbol table attached to the output of the assembler and link editor. This is useful to save space after a program has been debugged and to limit dynamically bound symbols. strip no longer removes relocation entries under any condition. Instead, it updates the external relocation entries (and indirect symbol table entries) to reflect the resulting symbol table. strip prints an error message for those symbols not in the resulting symbol table that are needed by an external relocation entry or an indirect symbol table. The link editor ld(1) is the only program that can strip relocation entries and know if it is safe to do so. When strip is used with no options on an executable file, it checks that file to see if it uses the dynamic link editor. If it does, the effect of the strip command is the same as using the -u and -r options. If the file does not use the dynamic link editor (e.g. -preload or -static), the effect of strip without any options is to completely remove the symbol table. The options -S, -x, and -X have the same effect as the ld(1) options. The options to strip(1) can be combined to trim the symbol table to just what is desired. You should trim the symbol table of files used with dynamic linking so that only those symbols intended to be external interfaces are saved. Files used with dynamic linking include executables, objects that are loaded (usually bundles), and dynamic shared libraries. Only global symbols are used by the dynamic linking process. You should strip all non-global symbols. When an executable is built with all its dependent dynamic shared libraries, it is typically stripped with: % strip -u -r executable which saves all undefined symbols (usually defined in the dynamic shared libraries) and all global symbols defined in the executable referenced by the dynamic libraries (as marked by the static link editor when the executable was built). 
This is the maximum level of stripping for an executable that will still allow the program to run correctly with its libraries. If the executable loads objects, however, the global symbols that the objects reference from the executable also must not be stripped. In this case, when linking the executable you should use the `-exported_symbols_list` option of the link editor ld(1) to limit which symbols can be referenced by the objects. Then you only need to strip local and debug symbols, like this: % strip -x -S executable For objects that will be loaded into an executable, you should trim the symbol table to limit the global symbols the executable will see. This would be done with: % strip -s interface_symbols -u object which would leave only the undefined symbols and symbols listed in the file interface_symbols in the object file. In this case, strip(1) has updated the relocation entries and indirect symbol table to reflect the new symbol table. For dynamic shared libraries, the maximum level of stripping is usually -x (to remove all non-global symbols). STRIPPING FILES FOR USE WITH RUNTIME LOADED CODE Trimming the symbol table for programs that load code at runtime allows you to control the interface that the executable wants to provide to the objects that it will load; it will not have to publish symbols that are not part of its interface. For example, an executable that wishes to allow only a subset of its global symbols but all of the statically linked shared library's globals to be used would be stripped with: % strip -s interface_symbols -A executable where the file interface_symbols would contain only those symbols from the executable that it wishes the code loaded at runtime to have access to. 
Another example is an object made up of a number of other objects that will be loaded into an executable; it would be built and then stripped with: % ld -o relocatable.o -r a.o b.o c.o % strip -s interface_symbols -u relocatable.o which would leave only the undefined symbols and symbols listed in the file interface_symbols in the object file. In this case strip(1) has updated the relocation entries to reflect the new symbol table.
strip - remove symbols
strip [ option ] name ...
The first set of options indicates symbols that are to be saved in the resulting output file. -u Save all undefined symbols. This is intended for use with relocatable objects to save symbols referred to by external relocation entries. Note that common symbols are also referred to by external relocation entries and this flag does not save those symbols. -r Save all symbols referenced dynamically. -s filename Save the symbol table entries for the global symbols listed in filename. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. -R filename Remove the symbol table entries for the global symbols listed in filename. This file has the same format as the -s filename option above. This option is usually used in combination with other options that save some symbols, -S, -x, etc. -i Ignore symbols listed in the -s filename or -R filename options that are not in the files to be stripped (this is normally an error). -d filename Save the debugging symbol table entries for each source file name listed in filename. The source file names listed in filename must be one per line, with no other white space in the file except the newlines at the end of each line, and they must be just the base name of the source file without any leading directories. This option works only with the stab(5) debugging format; it has no effect when using the DWARF debugging format. -A Save all global absolute symbols except those with a value of zero, and save Objective C class symbols. This is intended for use with programs that load code at runtime and want the loaded code to use symbols from the shared libraries (this is only used with NEXTSTEP 3.3 and earlier releases). -n Save all N_SECT global symbols. 
This is intended for use with executable programs in combination with -A to remove the symbols needed for correct static link editing which are not needed for use with runtime loading interfaces, where using the -s filename option would be too much trouble (this is only used with NEXTSTEP 3.3 and earlier releases). These options specify symbols to be removed from the resulting output file. -S Remove the debugging symbol table entries (those created by the -g option to cc(1) and other compilers). -X Remove the local symbols whose names begin with `L'. -T The intent of this flag is to remove Swift symbols from the Mach-O symbol table. It removes symbols whose names begin with `_$S' or `_$s', but only when it finds an __objc_imageinfo section with a non-zero Swift version. In the future the implementation of this flag may change to match the intent. When used together with -R or -s files, the Swift symbols will also be removed from the global symbol lists used by dyld. -N In binaries that use the dynamic linker, remove all nlist symbols and the string table. Setting the environment variable STRIP_NLISTS has the same effect. -x Remove all local symbols (saving only global symbols). -c Remove the section contents of a dynamic library, creating a stub library output file. The remaining options are: - Treat all remaining arguments as file names and not options. -D When stripping a static library, set the archive's SYMDEF file's user id, group id, date, and file mode to reasonable defaults. See the libtool(1) documentation for -D for more information. -o output Write the result into the file output. -v Print the arguments passed to other tools run by strip(1) when processing object files. -no_uuid Remove any LC_UUID load commands. -no_split_info Remove the LC_SEGMENT_SPLIT_INFO load command and its payload. -no_atom_info Remove the LC_ATOM_INFO load command and its payload. -no_code_signature_warning Don't warn when the code signature would be invalid in the output. 
-arch arch_type Specifies the architecture, arch_type, of the file for strip(1) to operate on when the file is a universal file. (See arch(3) for the currently known arch_types.) The arch_type can be "all" to operate on all architectures in the file, which is the default. SEE ALSO ld(1), libtool(1), cc(1)
When creating a stub library the -c and -x options are typically used: strip -x -c libfoo -o libfoo.stripped LIMITATIONS Not every layout of a Mach-O file can be stripped by this program. But all layouts produced by the Apple compiler system can be stripped. Apple Inc. June 23, 2023 STRIP(1)
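As a concrete illustration of the -x and -S options described above, the sketch below builds a small program containing a local (static) symbol plus debug info, strips it, and inspects the result with nm(1). This is a minimal sketch assuming a cc/strip/nm toolchain on the PATH; GNU binutils' strip happens to accept -x and -S with essentially the same meaning as the Apple tool documented here.

```shell
# Build a test program: 'helper' is a local symbol, 'main' is global.
cat > demo.c <<'EOF'
static int helper(void) { return 42; }
int main(void) { return helper() - 42; }
EOF
cc -g -o demo demo.c

# Remove local symbols (-x) and debugging entries (-S).
strip -x -S demo

# The global 'main' survives; the static 'helper' is gone.
nm demo | grep ' T main'
```

Running nm before and after the strip makes the effect visible: the local `t helper` entry and the stab/debug entries disappear, while global symbols remain.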
gst-typefind-1.0
gst-typefind-1.0 uses the GStreamer type finding system to determine the relevant GStreamer plugin needed to parse or decode a file, and the corresponding media type.
gst-typefind-1.0 - print media type of file
gst-typefind-1.0 <file>
gst-typefind-1.0 accepts the following options: --help Print help synopsis and available FLAGS --gst-info-mask=FLAGS GStreamer info flags to set (list with --help) --gst-debug-mask=FLAGS GStreamer debugging flags to set (list with --help) --gst-mask=FLAGS GStreamer info and debugging flags to set (list with --help) --gst-plugin-spew Enable printout of errors while loading GStreamer plugins --gst-plugin-path=PATH Add directories separated with ':' to the plugin search path SEE ALSO gst-inspect-1.0(1), gst-launch-1.0(1) AUTHOR The GStreamer team at http://gstreamer.freedesktop.org/ May 2003 GStreamer(1)
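A typical invocation is simply `gst-typefind-1.0 song.mp3`, which prints the detected media type. Where GStreamer is not installed, the same idea (content-based media-type detection rather than trusting the filename) can be illustrated with the unrelated file(1) utility; this substitute sketch is only an analogue, not gst-typefind itself:

```shell
# gst-typefind-1.0 inspects file contents, not the extension;
# file(1) does the same kind of content sniffing for MIME types.
printf 'hello world\n' > sample.txt
file --mime-type -b sample.txt
```

For plain ASCII content like the sample above, file(1) reports a text MIME type regardless of the file's extension.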
arm64-apple-darwin20.0.0-indr
Indr builds the output file, output, by translating each symbol name listed in the file list to the same name with an underbar prepended to it, in all the objects in the input file, input. This is used in building the ANSI C library and ``hiding'' non-ANSI C library symbols that are used by the ANSI C routines. The input file can be either an object file or an archive, and the output file will be the same type as the input file. If the input file is an archive and the -n flag is not specified, indr also creates an object for each of these symbols with an indirect symbol (n_type == N_INDR) for the symbol name with an underbar, and adds that to the output archive. Some or all of the following options may be specified: -n Suppress creating the indirect objects when the output file is an archive. This is assumed when the output file is an object file. -arch arch_type Specifies the architecture, arch_type, of the file for indr(1) to process when the file is a universal file (see arch(3) for the currently known arch_types). The arch_type can be all to process all architectures in the file. The default is to process only the host architecture, if it is contained in the file; otherwise, if only one architecture is present in the file, that architecture is processed. In all other cases the architecture(s) to process must be specified with this flag. -arch_indr arch_type list Same as above, but specifies a symbol list that applies only to this architecture. SEE ALSO Mach-O(5), arch(3) Apple Computer, Inc. July 28, 2005 INDR(1)
indr - add indirection to symbols in object files
indr [-n] [[-arch arch_type] ...] list input output indr [-n] [[-arch_indr arch_type list] ...] input output
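indr itself is an Apple cctools program, but the underbar-prepending transformation it performs can be illustrated on an ELF toolchain with objcopy's --redefine-syms option. This is a substitute sketch of the idea (rename each listed symbol to its underbar form), not indr itself, and it does not create the N_INDR indirect stubs:

```shell
# An object file exporting the symbol 'secret'.
cat > lib.c <<'EOF'
int secret(void) { return 7; }
EOF
cc -c -o lib.o lib.c

# Like indr's list file, map each listed symbol to its underbar form.
printf 'secret _secret\n' > rename.map
objcopy --redefine-syms=rename.map lib.o hidden.o

# The object now defines '_secret' instead of 'secret'.
nm hidden.o
```

After the rename, callers that referenced `secret` would no longer link against this object, which is exactly the "hiding" effect indr uses for the ANSI C library.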
bitcode_strip
bitcode_strip strips Mach-O files and Universal files containing LLVM bitcode, either by removing the bitcode or by removing the native executable code. If the Mach-O file or an architecture in a Universal file does not have a bitcode segment, it is left essentially unchanged. By default bitcode_strip will remove the code signature load commands from the output file, as the code signature is no longer valid.
bitcode_strip - remove or leave the bitcode segment in a Mach-O file
bitcode_strip input [ -r | -m | -l ] [ -v ] [ -keep_cs ] -o output
-r Remove the __LLVM bitcode segment entirely. -m Remove the bitcode from the __LLVM segment, leaving behind a marker. -l Remove all of the native executable code, leaving the LLVM bitcode behind. In this case, bitcode_strip will take care to preserve the (__TEXT,__info_plist) section while removing the rest of the __TEXT segment. -v Print the arguments passed to other tools run by bitcode_strip when processing bitcode files. -keep_cs Preserve the codesign load commands in the output binary, even though the code signature is no longer valid. This can be useful when using codesign(1) -preserve-metadata to resign the binary. -o output specifies the output file as output. input specifies the input Mach-O or Universal file to operate on. SEE ALSO codesign(1), otool(1). Apple, Inc. June 23, 2020 BITCODE_STRIP(1)
resolveip
The resolveip utility resolves host names to IP addresses and vice versa. Invoke resolveip like this: shell> resolveip [options] {host_name|ip-addr} ... resolveip supports the following options. • --help, --info, -?, -I Display a help message and exit. • --silent, -s Silent mode. Produce less output. • --version, -V Display version information and exit. COPYRIGHT Copyright © 1997, 2018, Oracle and/or its affiliates. All rights reserved. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 5.7 10/03/2018 RESOLVEIP(1)
resolveip - resolve host name to IP address or vice versa
resolveip [options] {host_name|ip-addr} ...
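resolveip ships with the MySQL distribution. Where it is not installed, a forward lookup of the kind it performs can be approximated with the standard getent(1) utility; this is only an analogue of `resolveip host_name`, not resolveip itself:

```shell
# Forward lookup: name -> address, similar to 'resolveip localhost'.
getent hosts localhost
```

getent consults the system resolver (nsswitch), so on most systems this prints the loopback address alongside the name.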
opj_dump
opj_dump - This program reads in a jpeg2000 image and dumps the contents to stdout. It is part of the OpenJPEG library. Valid input image extensions are .j2k, .jp2, .jpt
opj_dump -i infile.j2k opj_dump -ImgDir images/ Dump all files in images/ opj_dump -h Print help message and exit
-i name (jpeg2000 input file name) -ImgDir directory_name (directory containing jpeg2000 input files) AUTHORS Copyright (c) 2010, Mathieu Malaterre SEE ALSO opj_compress(1) opj_decompress(1) opj_dump Version 2.1.1 opj_dump(1)
bsdcpio
cpio copies files between archives and directories. This implementation can extract from tar, pax, cpio, zip, jar, ar, and ISO 9660 cdrom images and can create tar, pax, cpio, ar, and shar archives. The first option to cpio is a mode indicator from the following list: -i Input. Read an archive from standard input (unless overridden) and extract the contents to disk or (if the -t option is specified) list the contents to standard output. If one or more file patterns are specified, only files matching one of the patterns will be extracted. -o Output. Read a list of filenames from standard input and produce a new archive on standard output (unless overridden) containing the specified items. -p Pass-through. Read a list of filenames from standard input and copy the files to the specified directory.
cpio – copy files to and from archives
cpio -i [options] [pattern ...] [< archive] cpio -o [options] < name-list [> archive] cpio -p [options] dest-dir < name-list
Unless specifically stated otherwise, options are applicable in all operating modes. -0, --null Read filenames separated by NUL characters instead of newlines. This is necessary if any of the filenames being read might contain newlines. -6, --pwb When reading a binary format archive, assume it's the earlier one, from the PWB variant of 6th Edition UNIX. When writing a cpio archive, use the PWB format. -7, --binary (o mode only) When writing a cpio archive, use the (newer, non- PWB) binary format. -A (o mode only) Append to the specified archive. (Not yet implemented.) -a (o and p modes) Reset access times on files after they are read. -B (o mode only) Block output to records of 5120 bytes. -C size (o mode only) Block output to records of size bytes. -c (o mode only) Use the old POSIX portable character format. Equivalent to --format odc. -d, --make-directories (i and p modes) Create directories as necessary. -E file (i mode only) Read list of file name patterns from file to list and extract. -F file, --file file Read archive from or write archive to file. -f pattern (i mode only) Ignore files that match pattern. -H format, --format format (o mode only) Produce the output archive in the specified format. Supported formats include: cpio Synonym for odc. newc The SVR4 portable cpio format. odc The old POSIX.1 portable octet-oriented cpio format. pax The POSIX.1 pax format, an extension of the ustar format. ustar The POSIX.1 tar format. The default format is odc. See libarchive-formats(5) for more complete information about the formats currently supported by the underlying libarchive(3) library. -h, --help Print usage information. -I file Read archive from file. -i, --extract Input mode. See above for description. --insecure (i and p mode only) Disable security checks during extraction or copying. This allows extraction via symbolic links, absolute paths, and path names containing ‘..’ in the name. 
-J, --xz (o mode only) Compress the file with xz-compatible compression before writing it. In input mode, this option is ignored; xz compression is recognized automatically on input. -j Synonym for -y. -L (o and p modes) All symbolic links will be followed. Normally, symbolic links are archived and copied as symbolic links. With this option, the target of the link will be archived or copied instead. -l, --link (p mode only) Create links from the target directory to the original files, instead of copying. --lrzip (o mode only) Compress the resulting archive with lrzip(1). In input mode, this option is ignored. --lz4 (o mode only) Compress the archive with lz4-compatible compression before writing it. In input mode, this option is ignored; lz4 compression is recognized automatically on input. --zstd (o mode only) Compress the archive with zstd-compatible compression before writing it. In input mode, this option is ignored; zstd compression is recognized automatically on input. --lzma (o mode only) Compress the file with lzma-compatible compression before writing it. In input mode, this option is ignored; lzma compression is recognized automatically on input. --lzop (o mode only) Compress the resulting archive with lzop(1). In input mode, this option is ignored. --passphrase passphrase The passphrase is used to extract or create an encrypted archive. Currently, zip is the only format for which cpio can handle encrypted archives. You shouldn't use this option unless you realize how insecure it is. -m, --preserve-modification-time (i and p modes) Set file modification time on created files to match those in the source. -n, --numeric-uid-gid (i mode, only with -t) Display numeric uid and gid. By default, cpio displays the user and group names when they are provided in the archive, or looks up the user and group names in the system password database. --no-preserve-owner (i mode only) Do not attempt to restore file ownership. 
This is the default when run by non-root users. -O file Write archive to file. -o, --create Output mode. See above for description. -p, --pass-through Pass-through mode. See above for description. --preserve-owner (i mode only) Restore file ownership. This is the default when run by the root user. --quiet Suppress unnecessary messages. -R [user][:][group], --owner [user][:][group] Set the owner and/or group on files in the output. If group is specified with no user (for example, -R :wheel) then the group will be set but not the user. If the user is specified with a trailing colon and no group (for example, -R root:) then the group will be set to the user's default group. If the user is specified with no trailing colon, then the user will be set but not the group. In -i and -p modes, this option can only be used by the super-user. (For compatibility, a period can be used in place of the colon.) -r (All modes.) Rename files interactively. For each file, a prompt is written to /dev/tty containing the name of the file and a line is read from /dev/tty. If the line read is blank, the file is skipped. If the line contains a single period, the file is processed normally. Otherwise, the line is taken to be the new name of the file. -t, --list (i mode only) List the contents of the archive to stdout; do not restore the contents to disk. -u, --unconditional (i and p modes) Unconditionally overwrite existing files. Ordinarily, an older file will not overwrite a newer file on disk. -V, --dot Print a dot to stderr for each file as it is processed. Superseded by -v. -v, --verbose Print the name of each file to stderr as it is processed. With -t, provide a detailed listing of each file. --version Print the program version information and exit. -y (o mode only) Compress the archive with bzip2-compatible compression before writing it. In input mode, this option is ignored; bzip2 compression is recognized automatically on input. 
-Z (o mode only) Compress the archive with compress-compatible compression before writing it. In input mode, this option is ignored; compression is recognized automatically on input. -z (o mode only) Compress the archive with gzip-compatible compression before writing it. In input mode, this option is ignored; gzip compression is recognized automatically on input. EXIT STATUS The cpio utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT The following environment variables affect the execution of cpio: LANG The locale to use. See environ(7) for more information. TZ The timezone to use when displaying dates. See environ(7) for more information.
The cpio command is traditionally used to copy file hierarchies in conjunction with the find(1) command. The first example here simply copies all files from src to dest: find src | cpio -pmud dest By carefully selecting options to the find(1) command and combining it with other standard utilities, it is possible to exercise very fine control over which files are copied. This next example copies files from src to dest that are more than 2 days old and whose names match a particular pattern: find src -mtime +2 | grep foo[bar] | cpio -pdmu dest This example copies files from src to dest that are more than 2 days old and which contain the word “foobar”: find src -mtime +2 | xargs grep -l foobar | cpio -pdmu dest COMPATIBILITY The mode options i, o, and p and the options a, B, c, d, f, l, m, r, t, u, and v comply with SUSv2. The old POSIX.1 standard specified that only -i, -o, and -p were interpreted as command-line options. Each took a single argument of a list of modifier characters. For example, the standard syntax allows -imu but does not support -miu or -i -m -u, since m and u are only modifiers to -i, they are not command-line options in their own right. The syntax supported by this implementation is backwards-compatible with the standard. For best compatibility, scripts should limit themselves to the standard syntax. SEE ALSO bzip2(1), gzip(1), mt(1), pax(1), tar(1), libarchive(3), cpio(5), libarchive-formats(5), tar(5) STANDARDS There is no current POSIX standard for the cpio command; it appeared in ISO/IEC 9945-1:1996 (“POSIX.1”) but was dropped from IEEE Std 1003.1-2001 (“POSIX.1”). The cpio, ustar, and pax interchange file formats are defined by IEEE Std 1003.1-2001 (“POSIX.1”) for the pax command. HISTORY The original cpio and find utilities were written by Dick Haight while working in AT&T's Unix Support Group. They first appeared in 1977 in PWB/UNIX 1.0, the “Programmer's Work Bench” system developed for use within AT&T. 
They were first released outside of AT&T as part of System III Unix in 1981. As a result, cpio actually predates tar, even though it was not well-known outside of AT&T until some time later. This is a complete re-implementation based on the libarchive(3) library. BUGS The cpio archive format has several basic limitations: It does not store user and group names, only numbers. As a result, it cannot be reliably used to transfer files between systems with dissimilar user and group numbering. Older cpio formats limit the user and group numbers to 16 or 18 bits, which is insufficient for modern systems. The cpio archive formats cannot support files over 4 gigabytes, except for the “odc” variant, which can support files up to 8 gigabytes. macOS 14.5 September 16, 2014 macOS 14.5
icu-config
icu-config simplifies the task of building and linking against ICU as compared to manually configuring user makefiles or equivalent. Because icu-config is an executable script, it also solves the problem of locating the ICU libraries and headers, by allowing the system PATH to locate it.
icu-config - output ICU build options
icu-config [ --bindir ] [ --cc ] [ --cflags ] [ --cppflags ] [ --cppflags-searchpath ] [ --cxx ] [ --cxxflags ] [ --detect-prefix ] [ --exec-prefix ] [ --exists ] [ --help, -?, --usage ] [ --icudata ] [ --icudata-install-dir ] [ --icudata-mode ] [ --icudatadir ] [ --invoke ] [ --invoke=prog ] [ --ldflags ] [ --ldflags-libsonly ] [ --ldflags-searchpath ] [ --ldflags-system ] [ --ldflags-icuio ] [ --mandir ] [ --prefix ] [ --prefix=prefix ] [ --sbindir ] [ --shared-datadir ] [ --sysconfdir ] [ --unicode-version ] [ --version ] [ --incfile ]
--bindir Prints the binary (executable) directory path. Normally equivalent to 'bin'. ICU user-executable applications and scripts are found here. --cc Print the C compiler used. Equivalent to the $(CC) Makefile variable. --cflags Print the C compiler flags. Equivalent to the $(CFLAGS) Makefile variable. Does NOT include preprocessor directives such as include path or defined symbols. Examples include debugging (-g) and optimization flags --cppflags Print the C preprocessor flags. Equivalent to the $(CPPFLAGS) Makefile variable. Examples are -I include paths and -D define directives. --cppflags-searchpath Print the C preprocessor flags, as above but only -I search paths. --cxx Print the C++ compiler. Equivalent to the $(CXX) Makefile variable. --cxxflags Print the C++ compiler flags. Equivalent to the $(CXXFLAGS) Makefile variable. --detect-prefix If ICU has been moved from its installed location, prepending this flag to other icu-config calls will attempt to locate ICU relative to where the icu-config script has been located. Can be used as a last-chance effort if the ICU install has been damaged. --exec-prefix Print the prefix used for executable program directories (such as bin, sbin, etc). Normally the same as the prefix. --exists Script will return with a successful (0) status if ICU seems to be installed and located correctly, otherwise an error message and nonzero status will be displayed. --help, -?,--usage Print a help and usage message. --icudata Print the shortname of the ICU data file. This does not include any suffix such as .dat, .dll, .so, .lib, .a, etc nor does it include prefixes such as 'lib'. It may be in the form icudt21b --icudata-install-dir Print the directory where ICU packaged data should be installed. Can use as pkgdata(1)'s --install option. --icudata-mode Print the default ICU pkgdata mode, such as dll or common. Can use as pkgdata(1)'s --mode option. --icudatadir Print the path to packaged archive data. 
(should be where $ICU_DATA or equivalent default path points.) Will NOT point to the libdir. --invoke If ICU is not installed in a location where the operating system will locate its shared libraries, this option will print out commands so as to set the appropriate environment variables to load ICU's shared libraries. For example, on many systems a variable named LD_LIBRARY_PATH or equivalent must be set. --invoke=prog Same as the --invoke option, except includes options for invoking a program named prog. If prog is the name of an ICU tool, such as genrb(1), then icu-config will also include the full path to that tool. --ldflags Print any flags which should be passed to the linker. These may include -L for library search paths, and -l for including ICU libraries. By default, this option will attempt to link in the "common" (libicuuc) and "i18n" (libicui18n) libraries, as well as the data library. If additional libraries are required, any of the following two flags may be added in conjunction with this one, for example "--ldflags --ldflags-icuio" if the icuio library is required in addition to the standard ICU libraries. Equivalent to the $(LDFLAGS) Makefile variable. --ldflags-layout Prints the link option for the ICU layout library. --ldflags-icuio Prints the link option to add the ICU I/O package --ldflags-libsonly Similar to --ldflags but only includes the -l options. --ldflags-searchpath Similar to --ldflags but only includes the -L search path options. --ldflags-system Similar to --ldflags but only includes system libraries (such as pthreads) --mandir Prints the location of the installed ICU man pages. Normally (man) --prefix Prints the prefix (base directory) under which the installed ICU resides. --prefix=prefix Sets the ICU prefix to prefix for the remainder of this command line. Does test whether the new prefix is valid. 
--sbindir Prints the location of ICU system binaries, normally (sbin) --shared-datadir Prints the location of ICU shared data, normally (share) --sysconfdir Prints the location of ICU system configuration data, normally (etc) --unicode-version Prints the Version of the Unicode Standard which the current ICU uses. --version Prints the current version of ICU. --incfile Prints the 'Makefile.inc' path, suitable for use with pkgdata(1)'s -O option. AUTHORS Steven Loomis VERSION 68.1 COPYRIGHT Copyright (C) 2002-2004 IBM, Inc. and others. ICU MANPAGE 17 May 2004 ICU-CONFIG(1)
icu-config can be used without a makefile. The command line below is sufficient for building a single-file c++ program against ICU. (For example, icu/source/samples/props/props.cpp) `icu-config --cxx --cxxflags --cppflags --ldflags` -o props props.cpp More commonly, icu-config will be called from within a makefile, and used to set up variables. The following example also builds the props example. CC=$(shell icu-config --cc) CXX=$(shell icu-config --cxx) CPPFLAGS=$(shell icu-config --cppflags) CXXFLAGS=$(shell icu-config --cxxflags) LDFLAGS =$(shell icu-config --ldflags) all: props props.o: props.cpp make(1) will automatically use the above variables.
panel
Panels are curses(3X) windows with the added feature of depth. Panel functions allow the use of stacked windows and ensure the proper portions of each window and the curses stdscr window are hidden or displayed when panels are added, moved, modified or removed. The set of currently visible panels is the stack of panels. The stdscr window is beneath all panels, and is not considered part of the stack. A window is associated with every panel. The panel routines enable you to create, move, hide, and show panels, as well as position a panel at any desired location in the stack. Panel routines are a functional layer added to curses(3X), make only high-level curses calls, and work anywhere terminfo curses does. FUNCTIONS new_panel(win) allocates a PANEL structure, associates it with win, places the panel on the top of the stack (causes it to be displayed above any other panel) and returns a pointer to the new panel. update_panels() refreshes the virtual screen to reflect the relations between the panels in the stack, but does not call doupdate() to refresh the physical screen. Use this function and not wrefresh or wnoutrefresh. update_panels may be called more than once before a call to doupdate(), but doupdate() is the function responsible for updating the physical screen. del_panel(pan) removes the given panel from the stack and deallocates the PANEL structure (but not its associated window). hide_panel(pan) removes the given panel from the panel stack and thus hides it from view. The PANEL structure is not lost, merely removed from the stack. panel_hidden(pan) returns TRUE if the panel is in the panel stack, FALSE if it is not. If the panel is a null pointer, return ERR. show_panel(pan) makes a hidden panel visible by placing it on top of the panels in the panel stack. See COMPATIBILITY below. top_panel(pan) puts the given visible panel on top of all panels in the stack. See COMPATIBILITY below. bottom_panel(pan) puts panel at the bottom of all panels. 
move_panel(pan,starty,startx) moves the given panel window so that its upper-left corner is at starty, startx. It does not change the position of the panel in the stack. Be sure to use this function, not mvwin(), to move a panel window. replace_panel(pan,window) replaces the current window of panel with window (useful, for example if you want to resize a panel; if you're using ncurses, you can call replace_panel on the output of wresize(3X)). It does not change the position of the panel in the stack. panel_above(pan) returns a pointer to the panel above pan. If the panel argument is (PANEL *)0, it returns a pointer to the bottom panel in the stack. panel_below(pan) returns a pointer to the panel just below pan. If the panel argument is (PANEL *)0, it returns a pointer to the top panel in the stack. set_panel_userptr(pan,ptr) sets the panel's user pointer. panel_userptr(pan) returns the user pointer for a given panel. panel_window(pan) returns a pointer to the window of the given panel. DIAGNOSTICS Each routine that returns a pointer returns NULL if an error occurs. Each routine that returns an int value returns OK if it executes successfully and ERR if not. COMPATIBILITY Reasonable care has been taken to ensure compatibility with the native panel facility introduced in SVr3.2 (inspection of the SVr4 manual pages suggests the programming interface is unchanged). The PANEL data structures are merely similar. The programmer is cautioned not to directly use PANEL fields. The functions show_panel() and top_panel() are identical in this implementation, and work equally well with displayed or hidden panels. In the native System V implementation, show_panel() is intended for making a hidden panel visible (at the top of the stack) and top_panel() is intended for making an already-visible panel move to the top of the stack. You are cautioned to use the correct function to ensure compatibility with native panel libraries. 
NOTE In your library list, libpanel.a should come before libncurses.a; that is, you want to say `-lpanel -lncurses', not the other way around (which would usually produce a link error). FILES panel.h interface for the panels library libpanel.a the panels library itself SEE ALSO curses(3X), curs_variables(3X). This describes ncurses version 5.7 (patch 20081102). AUTHOR Originally written by Warren Tucker <wht@n4hgf.mt-park.ga.us>, primarily to assist in porting u386mon to systems without a native panels library. Repackaged for ncurses by Zeyd ben-Halim. panel(3X)
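The link order in the NOTE matters because the linker resolves symbols left to right. A minimal sketch, assuming a cc compiler is on PATH (the file and program names are illustrative; the compile step is skipped or tolerated failing where the panels library is not installed):

```shell
# Empty program that links against the panels library (illustrative name).
cat > panel_demo.c <<'EOF'
#include <panel.h>
int main(void) { return 0; }
EOF
# -lpanel must come before -lncurses so that panel's own references
# to curses symbols can still be resolved by the later library:
if command -v cc >/dev/null 2>&1; then
    cc panel_demo.c -lpanel -lncurses -o panel_demo 2>/dev/null \
        || echo "panels library not available"
fi
```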
panel - panel stack extension for curses
#include <panel.h> cc [flags] sourcefiles -lpanel -lncurses PANEL *new_panel(WINDOW *win) int bottom_panel(PANEL *pan) int top_panel(PANEL *pan) int show_panel(PANEL *pan) void update_panels(); int hide_panel(PANEL *pan) WINDOW *panel_window(const PANEL *pan) int replace_panel(PANEL *pan, WINDOW *window) int move_panel(PANEL *pan, int starty, int startx) int panel_hidden(const PANEL *pan) PANEL *panel_above(const PANEL *pan) PANEL *panel_below(const PANEL *pan) int set_panel_userptr(PANEL *pan, const void *ptr) const void *panel_userptr(const PANEL *pan) int del_panel(PANEL *pan)
null
null
python3
Python is an interpreted, interactive, object-oriented programming language that combines remarkable power with very clear syntax. For an introduction to programming in Python, see the Python Tutorial. The Python Library Reference documents built-in and standard types, constants, functions and modules. Finally, the Python Reference Manual describes the syntax and semantics of the core language in (perhaps too) much detail. (These documents may be located via the INTERNET RESOURCES below; they may be installed on your system as well.) Python's basic power can be extended with your own modules written in C or C++. On most systems such modules may be dynamically loaded. Python is also adaptable as an extension language for existing applications. See the internal documentation for hints. Documentation for installed Python modules and packages can be viewed by running the pydoc program. COMMAND LINE OPTIONS -B Don't write .pyc files on import. See also PYTHONDONTWRITEBYTECODE. -b Issue warnings about str(bytes_instance), str(bytearray_instance) and comparing bytes/bytearray with str. (-bb: issue errors) -c command Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command). --check-hash-based-pycs mode Configure how Python evaluates the up-to-dateness of hash-based .pyc files. -d Turn on parser debugging output (for experts only, depending on compilation options). -E Ignore environment variables like PYTHONPATH and PYTHONHOME that modify the behavior of the interpreter. -h, -?, --help Prints the usage for the interpreter executable and exits. --help-env Prints help about Python-specific environment variables and exits. --help-xoptions Prints help about implementation-specific -X options and exits. --help-all Prints complete usage information and exits. -i When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command. 
It does not read the $PYTHONSTARTUP file. This can be useful to inspect global variables or a stack trace when a script raises an exception. -I Run Python in isolated mode. This also implies -E, -P and -s. In isolated mode sys.path contains neither the script's directory nor the user's site-packages directory. All PYTHON* environment variables are ignored, too. Further restrictions may be imposed to prevent the user from injecting malicious code. -m module-name Searches sys.path for the named module and runs the corresponding .py file as a script. This terminates the option list (following options are passed as arguments to the module). -O Remove assert statements and any code conditional on the value of __debug__; augment the filename for compiled (bytecode) files by adding .opt-1 before the .pyc extension. -OO Do -O and also discard docstrings; change the filename for compiled (bytecode) files by adding .opt-2 before the .pyc extension. -P Don't automatically prepend a potentially unsafe path to sys.path such as the current directory, the script's directory or an empty string. See also the PYTHONSAFEPATH environment variable. -q Do not print the version and copyright messages. These messages are also suppressed in non-interactive mode. -s Don't add user site directory to sys.path. -S Disable the import of the module site and the site-dependent manipulations of sys.path that it entails. Also disable these manipulations if site is explicitly imported later. -u Force the stdout and stderr streams to be unbuffered. This option has no effect on the stdin stream. -v Print a message each time a module is initialized, showing the place (filename or built-in module) from which it is loaded. When given twice, print a message for each file that is checked for when searching for a module. Also provides information on module cleanup at exit. -V , --version Prints the Python version number of the executable and exits. When given twice, print more information about the build. 
-W argument Warning control. Python's warning machinery by default prints warning messages to sys.stderr. The simplest settings apply a particular action unconditionally to all warnings emitted by a process (even those that are otherwise ignored by default):

-Wdefault  # Warn once per call location
-Werror    # Convert to exceptions
-Walways   # Warn every time
-Wmodule   # Warn once per calling module
-Wonce     # Warn once per Python process
-Wignore   # Never warn

The action names can be abbreviated as desired and the interpreter will resolve them to the appropriate action name. For example, -Wi is the same as -Wignore. The full form of argument is: action:message:category:module:lineno Empty fields match all values; trailing empty fields may be omitted. For example, -W ignore::DeprecationWarning ignores all DeprecationWarning warnings. The action field is as explained above but only applies to warnings that match the remaining fields. The message field must match the whole printed warning message; this match is case-insensitive. The category field matches the warning category (e.g. "DeprecationWarning"). This must be a class name; the match tests whether the actual warning category of the message is a subclass of the specified warning category. The module field matches the (fully-qualified) module name; this match is case-sensitive. The lineno field matches the line number, where zero matches all line numbers and is thus equivalent to an omitted line number. Multiple -W options can be given; when a warning matches more than one option, the action for the last matching option is performed. Invalid -W options are ignored (though a warning message is printed about invalid options when the first warning is issued). Warnings can also be controlled using the PYTHONWARNINGS environment variable and from within a Python program using the warnings module. For example, the warnings.filterwarnings() function can be used to apply a regular expression to the warning message. 
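As a quick sketch of the action and category fields in practice (assuming a python3 interpreter on PATH), the following converts DeprecationWarning into an exception:

```shell
# error::DeprecationWarning applies the "error" action only to
# warnings whose category is DeprecationWarning (or a subclass).
python3 -W error::DeprecationWarning -c '
import warnings
try:
    warnings.warn("old API", DeprecationWarning)
except DeprecationWarning:
    print("raised")
'
# prints: raised
```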
-X option Set implementation-specific option. The following options are available:

-X faulthandler: enable faulthandler

-X showrefcount: output the total reference count and number of used memory blocks when the program finishes or after each statement in the interactive interpreter. This only works on debug builds

-X tracemalloc: start tracing Python memory allocations using the tracemalloc module. By default, only the most recent frame is stored in a traceback of a trace. Use -X tracemalloc=NFRAME to start tracing with a traceback limit of NFRAME frames

-X importtime: show how long each import takes. It shows module name, cumulative time (including nested imports) and self time (excluding nested imports). Note that its output may be broken in multi-threaded applications. Typical usage is python3 -X importtime -c 'import asyncio'

-X dev: enable CPython's "development mode", introducing additional runtime checks which are too expensive to be enabled by default. It will not be more verbose than the default if the code is correct: new warnings are only emitted when an issue is detected. Effects of the developer mode: * Add default warning filter, as -W default * Install debug hooks on memory allocators: see the PyMem_SetupDebugHooks() C function * Enable the faulthandler module to dump the Python traceback on a crash * Enable asyncio debug mode * Set the dev_mode attribute of sys.flags to True * io.IOBase destructor logs close() exceptions

-X utf8: enable UTF-8 mode for operating system interfaces, overriding the default locale-aware mode. -X utf8=0 explicitly disables UTF-8 mode (even when it would otherwise activate automatically). See PYTHONUTF8 for more details

-X pycache_prefix=PATH: enable writing .pyc files to a parallel tree rooted at the given directory instead of to the code tree. 
-X warn_default_encoding: enable opt-in EncodingWarning for 'encoding=None'

-X no_debug_ranges: disable the inclusion of the tables mapping extra location information (end line, start column offset and end column offset) to every instruction in code objects. This is useful when smaller code objects and pyc files are desired, as well as for suppressing the extra visual location indicators when the interpreter displays tracebacks.

-X frozen_modules=[on|off]: whether or not frozen modules should be used. The default is "on" (or "off" if you are running a local build).

-X int_max_str_digits=number: limit the size of int<->str conversions. This helps avoid denial of service attacks when parsing untrusted data. The default is sys.int_info.default_max_str_digits. 0 disables.

-x Skip the first line of the source. This is intended for a DOS-specific hack only. Warning: the line numbers in error messages will be off by one! INTERPRETER INTERFACE The interpreter interface resembles that of the UNIX shell: when called with standard input connected to a tty device, it prompts for commands and executes them until an EOF is read; when called with a file name argument or with a file as standard input, it reads and executes a script from that file; when called with -c command, it executes the Python statement(s) given as command. Here command may contain multiple statements separated by newlines. Leading whitespace is significant in Python statements! In non-interactive mode, the entire input is parsed before it is executed. If available, the script name and additional arguments thereafter are passed to the script in the Python variable sys.argv, which is a list of strings (you must first import sys to be able to access it). If no script name is given, sys.argv[0] is an empty string; if -c is used, sys.argv[0] contains the string '-c'. Note that options interpreted by the Python interpreter itself are not placed in sys.argv. 
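The sys.argv behavior described above can be sketched as follows (assuming a python3 interpreter on PATH):

```shell
# With -c, sys.argv[0] is the literal string '-c' and any further
# command-line arguments follow it.
python3 -c 'import sys; print(sys.argv)' foo bar
# prints: ['-c', 'foo', 'bar']
```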
In interactive mode, the primary prompt is `>>>'; the second prompt (which appears when a command is not complete) is `...'. The prompts can be changed by assignment to sys.ps1 or sys.ps2. The interpreter quits when it reads an EOF at a prompt. When an unhandled exception occurs, a stack trace is printed and control returns to the primary prompt; in non-interactive mode, the interpreter exits after printing the stack trace. The interrupt signal raises the KeyboardInterrupt exception; other UNIX signals are not caught (except that SIGPIPE is sometimes ignored, in favor of the IOError exception). Error messages are written to stderr. FILES AND DIRECTORIES These are subject to difference depending on local installation conventions; ${prefix} and ${exec_prefix} are installation-dependent and should be interpreted as for GNU software; they may be the same. The default for both is /usr/local. ${exec_prefix}/bin/python Recommended location of the interpreter. ${prefix}/lib/python<version> ${exec_prefix}/lib/python<version> Recommended locations of the directories containing the standard modules. ${prefix}/include/python<version> ${exec_prefix}/include/python<version> Recommended locations of the directories containing the include files needed for developing Python extensions and embedding the interpreter. ENVIRONMENT VARIABLES PYTHONSAFEPATH If this is set to a non-empty string, don't automatically prepend a potentially unsafe path to sys.path such as the current directory, the script's directory or an empty string. See also the -P option. PYTHONHOME Change the location of the standard Python libraries. By default, the libraries are searched in ${prefix}/lib/python<version> and ${exec_prefix}/lib/python<version>, where ${prefix} and ${exec_prefix} are installation-dependent directories, both defaulting to /usr/local. When $PYTHONHOME is set to a single directory, its value replaces both ${prefix} and ${exec_prefix}. 
To specify different values for these, set $PYTHONHOME to ${prefix}:${exec_prefix}. PYTHONPATH Augments the default search path for module files. The format is the same as the shell's $PATH: one or more directory pathnames separated by colons. Non-existent directories are silently ignored. The default search path is installation dependent, but generally begins with ${prefix}/lib/python<version> (see PYTHONHOME above). The default search path is always appended to $PYTHONPATH. If a script argument is given, the directory containing the script is inserted in the path in front of $PYTHONPATH. The search path can be manipulated from within a Python program as the variable sys.path. PYTHONPLATLIBDIR Override sys.platlibdir. PYTHONSTARTUP If this is the name of a readable file, the Python commands in that file are executed before the first prompt is displayed in interactive mode. The file is executed in the same name space where interactive commands are executed so that objects defined or imported in it can be used without qualification in the interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file. PYTHONOPTIMIZE If this is set to a non-empty string it is equivalent to specifying the -O option. If set to an integer, it is equivalent to specifying -O multiple times. PYTHONDEBUG If this is set to a non-empty string it is equivalent to specifying the -d option. If set to an integer, it is equivalent to specifying -d multiple times. PYTHONDONTWRITEBYTECODE If this is set to a non-empty string it is equivalent to specifying the -B option (don't try to write .pyc files). PYTHONINSPECT If this is set to a non-empty string it is equivalent to specifying the -i option. PYTHONIOENCODING If this is set before running the interpreter, it overrides the encoding used for stdin/stdout/stderr, in the syntax encodingname:errorhandler The errorhandler part is optional and has the same meaning as in str.encode. 
For stderr, the errorhandler part is ignored; the handler will always be 'backslashreplace'. PYTHONNOUSERSITE If this is set to a non-empty string it is equivalent to specifying the -s option (don't add the user site directory to sys.path). PYTHONUNBUFFERED If this is set to a non-empty string it is equivalent to specifying the -u option. PYTHONVERBOSE If this is set to a non-empty string it is equivalent to specifying the -v option. If set to an integer, it is equivalent to specifying -v multiple times. PYTHONWARNINGS If this is set to a comma-separated string it is equivalent to specifying the -W option for each separate value. PYTHONHASHSEED If this variable is set to "random", a random value is used to seed the hashes of str and bytes objects. If PYTHONHASHSEED is set to an integer value, it is used as a fixed seed for generating the hash() of the types covered by the hash randomization. Its purpose is to allow repeatable hashing, such as for selftests for the interpreter itself, or to allow a cluster of python processes to share hash values. The integer must be a decimal number in the range [0,4294967295]. Specifying the value 0 will disable hash randomization. PYTHONINTMAXSTRDIGITS Limit the maximum digit characters in an int value when converting from a string and when converting an int back to a str. A value of 0 disables the limit. Conversions to or from bases 2, 4, 8, 16, and 32 are never limited. PYTHONMALLOC Set the Python memory allocators and/or install debug hooks. The available memory allocators are malloc and pymalloc. The available debug hooks are debug, malloc_debug, and pymalloc_debug. When Python is compiled in debug mode, the default is pymalloc_debug and the debug hooks are automatically used. Otherwise, the default is pymalloc. PYTHONMALLOCSTATS If set to a non-empty string, Python will print statistics of the pymalloc memory allocator every time a new pymalloc object arena is created, and on shutdown. 
This variable is ignored if the $PYTHONMALLOC environment variable is used to force the malloc(3) allocator of the C library, or if Python is configured without pymalloc support. PYTHONASYNCIODEBUG If this environment variable is set to a non-empty string, enable the debug mode of the asyncio module. PYTHONTRACEMALLOC If this environment variable is set to a non-empty string, start tracing Python memory allocations using the tracemalloc module. The value of the variable is the maximum number of frames stored in a traceback of a trace. For example, PYTHONTRACEMALLOC=1 stores only the most recent frame. PYTHONFAULTHANDLER If this environment variable is set to a non-empty string, faulthandler.enable() is called at startup: install a handler for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals to dump the Python traceback. This is equivalent to the -X faulthandler option. PYTHONEXECUTABLE If this environment variable is set, sys.argv[0] will be set to its value instead of the value obtained through the C runtime. Only works on Mac OS X. PYTHONUSERBASE Defines the user base directory, which is used to compute the path of the user site-packages directory and installation paths for python -m pip install --user. PYTHONPROFILEIMPORTTIME If this environment variable is set to a non-empty string, Python will show how long each import takes. This is exactly equivalent to setting -X importtime on the command line. PYTHONBREAKPOINT If this environment variable is set to 0, it disables the default debugger. It can be set to the callable of your debugger of choice. Debug-mode variables Setting these variables only has an effect in a debug build of Python, that is, if Python was configured with the --with-pydebug build option. PYTHONDUMPREFS If this environment variable is set, Python will dump objects and reference counts still alive after shutting down the interpreter. 
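The repeatable hashing that PYTHONHASHSEED enables can be sketched as follows (assuming a python3 interpreter on PATH):

```shell
# Two separate interpreter processes seeded identically produce the
# same str hash; with randomization enabled they would usually differ.
a=$(PYTHONHASHSEED=0 python3 -c 'print(hash("spam"))')
b=$(PYTHONHASHSEED=0 python3 -c 'print(hash("spam"))')
[ "$a" = "$b" ] && echo "repeatable"
# prints: repeatable
```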
AUTHOR The Python Software Foundation: https://www.python.org/psf/ INTERNET RESOURCES Main website: https://www.python.org/ Documentation: https://docs.python.org/ Developer resources: https://devguide.python.org/ Downloads: https://www.python.org/downloads/ Module repository: https://pypi.org/ Newsgroups: comp.lang.python, comp.lang.python.announce LICENSING Python is distributed under an Open Source license. See the file "LICENSE" in the Python source distribution for information on terms & conditions for accessing and otherwise using Python and for a DISCLAIMER OF ALL WARRANTIES. PYTHON(1)
python - an interpreted, interactive, object-oriented programming language
python [ -B ] [ -b ] [ -d ] [ -E ] [ -h ] [ -i ] [ -I ] [ -m module-name ] [ -q ] [ -O ] [ -OO ] [ -P ] [ -s ] [ -S ] [ -u ] [ -v ] [ -V ] [ -W argument ] [ -x ] [ -X option ] [ -? ] [ --check-hash-based-pycs default | always | never ] [ --help ] [ --help-env ] [ --help-xoptions ] [ --help-all ] [ -c command | script | - ] [ arguments ]
null
null
gencfu
gencfu reads confusable character definitions from input files: plain text files containing confusable character definitions in the format defined by Unicode UAX #39 for the files confusables.txt and confusablesWholeScript.txt. This source (.txt) format is also accepted by ICU spoof detectors. The files must be encoded in UTF-8, with or without a BOM. Normally the output data file has the .cfu extension.
gencfu - Generates Unicode Confusable data files
gencfu [ -h, -?, --help ] [ -V, --version ] [ -c, --copyright ] [ -v, --verbose ] [ -d, --destdir destination ] [ -i, --icudatadir directory ] -r, --rules rule-file -w, --wsrules whole-script-rule-file -o, --out output-file
-h, -?, --help Print help about usage and exit. -V, --version Print the version of gencfu and exit. -c, --copyright Embeds the standard ICU copyright into the output-file. -v, --verbose Display extra informative messages during execution. -d, --destdir destination Set the destination directory of the output-file to destination. -i, --icudatadir directory Look for any necessary ICU data files in directory. For example, the file pnames.icu must be located when ICU's data is not built as a shared library. The default ICU data directory is specified by the environment variable ICU_DATA. Most configurations of ICU do not require this argument. -r, --rules rule-file The source file to read. -w, --wsrules whole-script-rule-file The whole script source file to read. -o, --out output-file The output data file to write. VERSION 1.0 COPYRIGHT Copyright (C) 2009 International Business Machines Corporation and others ICU MANPAGE 24 May 2009 GENCFU(1)
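A typical invocation, using the UAX #39 file names mentioned above, is sketched below; the run is guarded because gencfu may not be installed, and the input files are assumed to exist:

```shell
# Build the spoof-checker data file from the two UAX #39 sources.
cmd='gencfu -r confusables.txt -w confusablesWholeScript.txt -o confusables.cfu'
if command -v gencfu >/dev/null 2>&1; then
    $cmd
else
    echo "$cmd"   # gencfu not on PATH; showing the intended invocation
fi
```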
null
arm64-apple-darwin20.0.0-bitcode_strip
bitcode_strip edits the input Mach-O file: with the -r option it removes the bitcode segment (the segment named __LLVM and its section); with the -m option it removes the bitcode segment but leaves a marker; with the -l option it leaves only the bitcode segment and the (__TEXT,__info_plist) section in the Mach-O file. If the Mach-O file, or a slice of a universal file, does not have a bitcode segment, it is left essentially unchanged. input specifies the input Mach-O file to operate on. -o output specifies the output file as output. -r specifies that the bitcode segment is to be removed. -m specifies that the bitcode segment is to be removed and a marker left in its place. -l specifies that only the bitcode segment and the (__TEXT,__info_plist) section are to be left in the Mach-O file. Apple, Inc. July 12, 2016 BITCODE_STRIP(1)
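For example, stripping the bitcode segment from a binary might look like the sketch below (the file names are illustrative, and the run is guarded because the tool ships only with Apple's toolchain):

```shell
# Remove the __LLVM segment from MyApp, writing the result to MyApp.thin.
cmd='bitcode_strip MyApp -r -o MyApp.thin'
if command -v bitcode_strip >/dev/null 2>&1; then
    $cmd
else
    echo "$cmd"   # bitcode_strip not on PATH; showing the intended invocation
fi
```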
bitcode_strip - remove or leave the bitcode segment in a Mach-O file
bitcode_strip input [ -r | -m | -l ] -o output
null
null
wish8.6
null
null
null
null
null
spyder
null
null
null
null
null
grpc_python_plugin
null
null
null
null
null
png-fix-itxt
null
null
null
null
null
dyldinfo
null
null
null
null
null
gst-play-1.0
null
null
null
null
null
rst2latex.py
null
null
null
null
null
pybabel
null
null
null
null
null
xsltproc
xsltproc is a command line tool for applying XSLT stylesheets to XML documents. It is part of libxslt(3), the XSLT C library for GNOME. While it was developed as part of the GNOME project, it can operate independently of the GNOME desktop. xsltproc is invoked from the command line with the name of the stylesheet to be used followed by the name of the file or files to which the stylesheet is to be applied. It reads from standard input when a given filename is -. If a stylesheet is included in an XML document with a Stylesheet Processing Instruction, no stylesheet needs to be named on the command line. xsltproc will automatically detect the included stylesheet and use it. By default, output goes to stdout. You can specify a file for output using the -o or --output option.
xsltproc - command line XSLT processor
xsltproc [[-V | --version] [-v | --verbose] [{-o | --output} {FILE | DIRECTORY}] | --timing | --repeat | --debug | --novalid | --noout | --maxdepth VALUE | --maxvars VALUE | --maxparserdepth VALUE | --huge | --seed-rand VALUE | --html | --encoding ENCODING | --param PARAMNAME PARAMVALUE | --stringparam PARAMNAME PARAMVALUE | --nonet | --path "PATH(S)" | --load-trace | --catalogs | --xinclude | --xincludestyle | [--profile | --norman] | --dumpextensions | --nowrite | --nomkdir | --writesubtree PATH | --nodtdattr] [STYLESHEET] {XML-FILE... | -}
xsltproc accepts the following options (in alphabetical order): --catalogs Use the SGML catalog specified in SGML_CATALOG_FILES to resolve the location of external entities. By default, xsltproc looks for the catalog specified in XML_CATALOG_FILES. If that is not specified, it uses /etc/xml/catalog. --debug Output an XML tree of the transformed document for debugging purposes. --dumpextensions Dumps the list of all registered extensions on stdout. --html The input document is an HTML file. --load-trace Display all the documents loaded during the processing to stderr. --maxdepth VALUE Adjust the maximum depth of the template stack before libxslt(3) concludes it is in an infinite loop. The default is 3000. --maxvars VALUE Maximum number of variables. The default is 15000. --maxparserdepth VALUE Maximum element nesting level of parsed XML documents. The default is 256. --huge Relax hardcoded limits of the XML parser by setting the XML_PARSE_HUGE parser option. --seed-rand VALUE Initialize the pseudo-random number generator with a specific seed. --nodtdattr Do not apply default attributes from the document's DTD. --nomkdir Refuses to create directories. --nonet Do not use the Internet to fetch DTDs, entities or documents. --noout Do not output the result. --novalid Skip loading the document's DTD. --nowrite Refuses to write to any file or resource. -o or --output FILE | DIRECTORY Direct output to the given FILE. Using the option with a DIRECTORY directs the output files to the specified directory. This can be useful for multiple outputs (also known as "chunking") or manpage processing. Important The given directory must already exist. Note Make sure that FILE and DIRECTORY follow the “URI reference computation” described in RFC 2396 and later. This means that, for example, -o directory may not work, but -o directory/ will. --encoding ENCODING Specify the encoding for the input. 
--param PARAMNAME PARAMVALUE Pass a parameter of name PARAMNAME and value PARAMVALUE to the stylesheet. You may pass multiple name/value pairs up to a maximum of 32. If the value being passed is a string, you can use --stringparam instead, to avoid additional quote characters that appear in string expressions. Note: the XPath expression must be UTF-8 encoded. --path "PATH(S)" Use the (space- or colon-separated) list of filesystem paths specified by PATHS to load DTDs, entities or documents. Enclose space-separated lists by quotation marks. --profile or --norman Output profiling information detailing the amount of time spent in each part of the stylesheet. This is useful in optimizing stylesheet performance. --repeat Run the transformation 20 times. Used for timing tests. --stringparam PARAMNAME PARAMVALUE Pass a parameter of name PARAMNAME and value PARAMVALUE where PARAMVALUE is a string rather than a node identifier. Note: The string must be UTF-8 encoded. --timing Display the time used for parsing the stylesheet, parsing the document and applying the stylesheet and saving the result. Displayed in milliseconds. -v or --verbose Output each step taken by xsltproc in processing the stylesheet and the document. -V or --version Show the version of libxml(3) and libxslt(3) used. --writesubtree PATH Allow file write only within the PATH subtree. --xinclude Process the input document using the XInclude specification. More details on this can be found in the XInclude specification: http://www.w3.org/TR/xinclude/ --xincludestyle Process the stylesheet with XInclude. ENVIRONMENT SGML_CATALOG_FILES SGML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. This can be done by setting the SGML_CATALOG_FILES environment variable to a list of catalogs. An empty one should deactivate loading the default /etc/sgml/catalog catalog. XML_CATALOG_FILES XML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. 
This can be done by setting the XML_CATALOG_FILES environment variable to a list of catalogs. An empty one should deactivate loading the default /etc/xml/catalog catalog. DIAGNOSTICS xsltproc return codes provide information that can be used when calling it from scripts. 0 No error (normal operation) 1 No argument 2 Too many parameters 3 Unknown option 4 Failed to parse the stylesheet 5 Error in the stylesheet 6 Error in one of the documents 7 Unsupported xsl:output method 8 String parameter contains both quote and double-quotes 9 Internal processing error 10 Processing was stopped by a terminating message 11 Could not write the result to the output file SEE ALSO libxml(3), libxslt(3) More information can be found at • libxml(3) web page https://gitlab.gnome.org/GNOME/libxslt • W3C XSLT page http://www.w3.org/TR/xslt AUTHOR John Fleck <jfleck@inkstain.net> Author. COPYRIGHT Copyright © 2001, 2002 libxslt 08/17/2022 XSLTPROC(1)
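The difference between --param and --stringparam can be sketched with a tiny stylesheet; the file names are illustrative, and the transform only runs when xsltproc is installed:

```shell
cat > hello.xml <<'EOF'
<doc/>
EOF
cat > hello.xsl <<'EOF'
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="who" select="'nobody'"/>
  <xsl:output method="text"/>
  <xsl:template match="/"><xsl:value-of select="$who"/></xsl:template>
</xsl:stylesheet>
EOF
# --stringparam passes the value as a string; with --param it would be
# evaluated as an XPath expression and need extra quoting.
if command -v xsltproc >/dev/null 2>&1; then
    xsltproc --stringparam who world hello.xsl hello.xml
fi
```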
null
pep8
null
null
null
null
null
arm64-apple-darwin20.0.0-otool
null
null
null
null
null
pyrsa-priv2pub
null
null
null
null
null
ctf_insert
ctf_insert inserts CTF (Compact C Type Format) data into a mach_kernel binary, storing the data in a newly created (__CTF,__ctf) section. This section must not be present in the input file. ctf_insert(1) must be passed one -arch argument for each architecture in a universal file, or exactly one -arch for a thin file. input specifies the input mach_kernel. -o output specifies the output file. -arch arch file specifies a file of CTF data to be used for the specified arch in a Mach-O or universal file. The file's content will be stored in a newly created (__CTF,__ctf) section. SEE ALSO otool(1), segedit(1). Apple, Inc. June 23, 2020 CTF_INSERT(1)
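An invocation for a thin x86_64 kernel might be sketched as follows (all file names are illustrative, and the run is guarded because ctf_insert ships only with Apple's toolchain):

```shell
# Insert CTF data for one architecture into a thin mach_kernel.
cmd='ctf_insert mach_kernel -arch x86_64 ctf.x86_64 -o mach_kernel.ctf'
if command -v ctf_insert >/dev/null 2>&1; then
    $cmd
else
    echo "$cmd"   # ctf_insert not on PATH; showing the intended invocation
fi
```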
ctf_insert - insert Compact C Type Format data into a mach_kernel file
ctf_insert input [ -arch arch file ]... -o output
null
null
timezone-dump
null
null
null
null
null
mysql_install_db
Note mysql_install_db is deprecated as of MySQL 5.7.6 because its functionality has been integrated into mysqld, the MySQL server. To initialize a MySQL installation, invoke mysqld with the --initialize or --initialize-insecure option. For more information, see Section 2.10.1.1, “Initializing the Data Directory Manually Using mysqld”. mysql_install_db will be removed in a future MySQL release. mysql_install_db handles initialization tasks that must be performed before the MySQL server, mysqld, is ready to use: • It initializes the MySQL data directory and creates the system tables that it contains. • It initializes the system tablespace and related data structures needed to manage InnoDB tables. • It loads the server-side help tables. • It installs the sys schema. • It creates an administrative account. Older versions of mysql_install_db may create anonymous-user accounts. Before MySQL 5.7.5, mysql_install_db is a Perl script and requires that Perl be installed. As of 5.7.5, mysql_install_db is written in C++ and supplied in binary distributions as an executable binary. In addition, a number of new options were added and old options removed. If you find that an option does not work as you expect, be sure to check which options apply in your version of mysql_install_db (invoke it with the --help option). Secure-by-Default Deployment. Current versions of mysql_install_db produce a MySQL deployment that is secure by default. It is recommended that you use mysql_install_db from MySQL 5.7.5 or up for best security, but version-dependent information about security characteristics is included here for completeness (secure-by-default deployment was introduced in stages in MySQL 5.7). MySQL 5.7.5 and up is secure by default, with these characteristics: • A single administrative account named 'root'@'localhost' is created with a randomly generated password, which is marked expired. • No anonymous-user accounts are created. 
• No test database accessible by all users is created. • --admin-xxx options are available to control characteristics of the administrative account. • The --random-password-file option is available to control where the random password is written. • The --insecure option is available to suppress random password generation. MySQL 5.7.4 is secure by default, with these characteristics: • A single administrative account named 'root'@'localhost' is created with a randomly generated password, which is marked expired. • No anonymous-user accounts are created. • No test database accessible by all users is created. • The --skip-random-passwords option is available to suppress random password generation, and to create a test database. MySQL 5.7.3 and earlier are not secure by default, with these characteristics: • Multiple administrative root accounts are created with no password. • Anonymous-user accounts are created. • A test database accessible by all users is created. • The --random-passwords option is available to generate random passwords for administrative accounts and mark them expired, and to not create anonymous-user accounts. If mysql_install_db generates a random administrative password, it writes the password to a file and displays the file name. The password entry includes a timestamp to indicate when it was written. By default, the file is .mysql_secret in the home directory of the effective user running the script. .mysql_secret is created with mode 600 to be accessible only to the system user for whom it is created. Important When mysql_install_db generates a random password for the administrative account, it is necessary after mysql_install_db has been run to start the server, connect using the administrative account with the password written to the .mysql_secret file, and specify a new administrative password. Until this is done, the administrative account cannot be used for anything else. 
To change the password, you can use the SET PASSWORD statement (for example, with the mysql or mysqladmin client). After resetting the password, remove the .mysql_secret file; otherwise, if you run mysql_secure_installation, that command may see the file and expire the root password again as part of ensuring secure deployment.
Invocation Syntax
Several changes to mysql_install_db were made in MySQL 5.7.5 that affect the invocation syntax. Change location to the MySQL installation directory and use the command appropriate to your version of MySQL: • Invocation syntax for MySQL 5.7.5 and up: shell> bin/mysql_install_db --datadir=path/to/datadir [other_options] The --datadir option is mandatory. mysql_install_db creates the data directory, which must not already exist: • If the data directory does already exist, you are performing an upgrade operation (not an install operation) and should run mysql_upgrade, not mysql_install_db. See mysql_upgrade(1). • If the data directory does not exist but mysql_install_db fails, you must remove any partially created data directory before running mysql_install_db again. • Invocation syntax before MySQL 5.7.5: shell> scripts/mysql_install_db [options] Because the MySQL server, mysqld, must access the data directory when it runs later, you should either run mysql_install_db from the same system account that will be used for running mysqld, or run it as root and specify the --user option to indicate the user name that mysqld will run as. It might be necessary to specify other options such as --basedir if mysql_install_db does not use the correct location for the installation directory. For example: shell> bin/mysql_install_db --user=mysql \ --basedir=/opt/mysql/mysql \ --datadir=/opt/mysql/mysql/data Note After mysql_install_db sets up the InnoDB system tablespace, changes to some tablespace characteristics require setting up a whole new instance. 
This includes the file name of the first file in the system tablespace and the number of undo logs. If you do not want to use the default values, make sure that the settings for the innodb_data_file_path and innodb_log_file_size configuration parameters are in place in the MySQL configuration file before running mysql_install_db. Also make sure to specify as necessary other parameters that affect the creation and location of InnoDB files, such as innodb_data_home_dir and innodb_log_group_home_dir. If those options are in your configuration file but that file is not in a location that MySQL reads by default, specify the file location using the --defaults-extra-file option when you run mysql_install_db. Note If you have set a custom TMPDIR environment variable when performing the installation, and the specified directory is not accessible, mysql_install_db may fail. If so, unset TMPDIR or set TMPDIR to point to the system temporary directory (usually /tmp).
Administrative Account Creation
mysql_install_db creates an administrative account named 'root'@'localhost' by default. (Before MySQL 5.7.4, mysql_install_db creates additional root accounts, such as 'root'@'127.0.0.1'. This is no longer done.) As of MySQL 5.7.5, mysql_install_db provides options that enable you to control several aspects of the administrative account: • To change the user or host parts of the account name, use --login-path, or --admin-user and --admin-host. • --insecure suppresses generation of a random password. • --admin-auth-plugin specifies the authentication plugin. • --admin-require-ssl specifies whether the account must use SSL connections. For more information, see the descriptions of those options. mysql_install_db assigns user table rows a nonempty plugin column value to set the authentication plugin. The default value is mysql_native_password. 
The value can be changed using the --admin-auth-plugin option in MySQL 5.7.5 and up (as noted previously), or by setting the default_authentication_plugin system variable in MySQL 5.7.2 to 5.7.4.
Default my.cnf File
As of MySQL 5.7.5, mysql_install_db creates no default my.cnf file. Before MySQL 5.7.5, mysql_install_db creates a default option file named my.cnf in the base installation directory. This file is created from a template included in the distribution package named my-default.cnf. You can find the template in or under the base installation directory. When started using mysqld_safe, the server uses the my.cnf file by default. If my.cnf already exists, mysql_install_db assumes it to be in use and writes a new file named my-new.cnf instead. Note As of MySQL 5.7.18, my-default.cnf is no longer included in or installed by distribution packages. With one exception, the settings in the default option file are commented and have no effect. The exception is that the file sets the sql_mode system variable to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This setting produces a server configuration that results in errors rather than warnings for bad data in operations that modify transactional tables. See Section 5.1.10, “Server SQL Modes”.
Command Options
mysql_install_db supports the following options, which can be specified on the command line or in the [mysql_install_db] group of an option file. For information about option files used by MySQL programs, see Section 4.2.6, “Using Option Files”. Before MySQL 5.7.5, mysql_install_db passes unrecognized options to mysqld. • --help, -? Display a help message and exit. The -? form of this option was added in MySQL 5.7.5. • --admin-auth-plugin=plugin_name The authentication plugin to use for the administrative account. The default is mysql_native_password. This option was added in MySQL 5.7.5. • --admin-host=host_name The host part to use for the administrative account name. The default is localhost. 
This option is ignored if --login-path is also specified. This option was added in MySQL 5.7.5. • --admin-require-ssl Whether to require SSL for the administrative account. The default is not to require it. With this option enabled, the statement that mysql_install_db uses to create the account includes a REQUIRE SSL clause. As a result, the administrative account must use secure connections when connecting to the server. This option was added in MySQL 5.7.5. • --admin-user=user_name The user part to use for the administrative account name. The default is root. This option is ignored if --login-path is also specified. This option was added in MySQL 5.7.5. • --basedir=dir_name The path to the MySQL installation directory. • --builddir=dir_name For use with --srcdir and out-of-source builds. Set this to the location of the directory where the built files reside. • --cross-bootstrap For internal use. This option is used for building system tables on one host intended for another. This option was removed in MySQL 5.7.5. • --datadir=dir_name The path to the MySQL data directory. Only the last component of the path name is created if it does not exist; the parent directory must already exist or an error occurs. Note As of MySQL 5.7.5, the --datadir option is mandatory and the data directory must not already exist. (It remains true that the parent directory must exist.) • --defaults This option causes mysql_install_db to invoke mysqld in such a way that it reads option files from the default locations. If given as --no-defaults, and --defaults-file or --defaults-extra-file is not also specified, mysql_install_db passes --no-defaults to mysqld, to prevent option files from being read. This may help if program startup fails due to reading unknown options from an option file. This option was added in MySQL 5.7.5. (Before 5.7.5, only the --no-defaults variant was supported.) 
• --defaults-extra-file=file_name Read this option file after the global option file but (on Unix) before the user option file. If the file does not exist or is otherwise inaccessible, an error occurs. file_name is interpreted relative to the current directory if given as a relative path name rather than a full path name. This option is passed by mysql_install_db to mysqld. • --defaults-file=file_name Use only the given option file. If the file does not exist or is otherwise inaccessible, an error occurs. file_name is interpreted relative to the current directory if given as a relative path name rather than a full path name. This option is passed by mysql_install_db to mysqld. • --extra-sql-file=file_name, -f file_name This option names a file containing additional SQL statements to be executed after the standard bootstrapping statements. Accepted statement syntax in the file is like that of the mysql command-line client, including support for multiple-line C-style comments and delimiter handling to enable definition of stored programs. This option was added in MySQL 5.7.5. • --force Cause mysql_install_db to run even if DNS does not work. Grant table entries normally created using host names will use IP addresses instead. This option was removed in MySQL 5.7.5. • --insecure Do not generate a random password for the administrative account. Note The --insecure option was added in MySQL 5.7.5, replacing the --skip-random-passwords option. If --insecure is not given, it is necessary after mysql_install_db has been run to start the server, connect using the administrative account with the password written to the .mysql_secret file, and specify a new administrative password. Until this is done, the administrative account cannot be used for anything else. To change the password, you can use the SET PASSWORD statement (for example, with the mysql or mysqladmin client). 
After resetting the password, remove the .mysql_secret file; otherwise, if you run mysql_secure_installation, that command may see the file and expire the root password again as part of ensuring secure deployment. • --keep-my-cnf Tell mysql_install_db to preserve any existing my.cnf file and not create a new default my.cnf file. This option was added in MySQL 5.7.4 and removed in 5.7.5. As of 5.7.5, mysql_install_db does not create a default my.cnf file. • --lc-messages=name The locale to use for error messages. The default is en_US. The argument is converted to a language name and combined with the value of --lc-messages-dir to produce the location for the error message file. See Section 10.11, “Setting the Error Message Language”. This option was added in MySQL 5.7.5. • --lc-messages-dir=dir_name The directory where error messages are located. The value is used together with the value of --lc-messages to produce the location for the error message file. See Section 10.11, “Setting the Error Message Language”. This option was added in MySQL 5.7.5. • --ldata=dir_name A synonym for --datadir. This option was removed in MySQL 5.7.5. • --login-file=file_name The file from which to read the login path if the --login-path=name option is specified. The default file is .mylogin.cnf. This option was added in MySQL 5.7.5. • --login-path=name Read options from the named login path in the .mylogin.cnf login path file. The default login path is client. (To read a different file, use the --login-file=file_name option.) A “login path” is an option group containing options that specify which MySQL server to connect to and which account to authenticate as. To create or modify a login path file, use the mysql_config_editor utility. See mysql_config_editor(1). If the --login-path option is specified, the user, host, and password values are taken from the login path and used to create the administrative account. 
The password must be defined in the login path or an error occurs, unless the --insecure option is also specified. In addition, with --login-path, any --admin-host and --admin-user options are ignored. This option was added in MySQL 5.7.5. • --mysqld-file=file_name The path name of the mysqld binary to execute. The option value must be an absolute path name or an error occurs. If this option is not given, mysql_install_db searches for mysqld in these locations: • In the bin directory under the --basedir option value, if that option was given. • In the bin directory under the --srcdir option value, if that option was given. • In the bin directory under the --builddir option value, if that option was given. • In the local directory and in the bin and sbin directories under the local directory. • In /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin, /opt/local/bin, /opt/local/sbin. This option was added in MySQL 5.7.5. • --no-defaults Before MySQL 5.7.5, do not read any option files. If program startup fails due to reading unknown options from an option file, --no-defaults can be used to prevent them from being read. For behavior of this option as of MySQL 5.7.5, see the description of --defaults. • --random-password-file=file_name The path name of the file in which to write the randomly generated password for the administrative account. The option value must be an absolute path name or an error occurs. The default is $HOME/.mysql_secret. This option was added in MySQL 5.7.5. • --random-passwords Note This option was removed in MySQL 5.7.4 and replaced with --skip-random-passwords, which was in turn removed in MySQL 5.7.5 and replaced with --insecure. On Unix platforms, this option provides for more secure MySQL installation. 
Invoking mysql_install_db with --random-passwords causes it to perform the following actions in addition to its normal operation: • The installation process creates a random password, assigns it to the initial MySQL root accounts, and marks the password expired for those accounts. • The initial random root password is written to the .mysql_secret file in the directory named by the HOME environment variable. Depending on operating system, using a command such as sudo may cause the value of HOME to refer to the home directory of the root system user. .mysql_secret is created with mode 600 to be accessible only to the system user for whom it is created. If .mysql_secret already exists, the new password information is appended to it. Each password entry includes a timestamp to indicate when it was written. • No anonymous-user MySQL accounts are created. As a result of these actions, it is necessary after installation to start the server, connect as root using the password written to the .mysql_secret file, and specify a new root password. Until this is done, root cannot do anything else. This must be done for each root account you intend to use. To change the password, you can use the SET PASSWORD statement (for example, with the mysql client). You can also use mysqladmin or mysql_secure_installation. New install operations (not upgrades) using RPM packages and Solaris PKG packages invoke mysql_install_db with the --random-passwords option. (Install operations using RPMs for Unbreakable Linux Network are unaffected because they do not use mysql_install_db.) For install operations using a binary .tar.gz distribution or a source distribution, you can invoke mysql_install_db with the --random-passwords option manually to make your MySQL installation more secure. This is recommended, particularly for sites with sensitive data. • --rpm For internal use. This option is used during the MySQL installation process for install operations performed using RPM packages. 
This option was removed in MySQL 5.7.5. • --skip-name-resolve Use IP addresses rather than host names when creating grant table entries. This option can be useful if your DNS does not work. This option was removed in MySQL 5.7.5. • --skip-random-passwords Note The --skip-random-passwords option was added in MySQL 5.7.4, replacing the --random-passwords option. --skip-random-passwords was in turn removed in MySQL 5.7.5 and replaced with --insecure. As of MySQL 5.7.4, MySQL deployments produced using mysql_install_db are secure by default. When invoked without the --skip-random-passwords option, mysql_install_db uses these default deployment characteristics: • The installation process creates a single root account, 'root'@'localhost', automatically generates a random password for this account, and marks the password expired. • The initial random root password is written to the .mysql_secret file in the home directory of the effective user running the script. .mysql_secret is created with mode 600 to be accessible only to the system user for whom it is created. If .mysql_secret already exists, the new password information is appended to it. Each password entry includes a timestamp to indicate when it was written. • No anonymous-user MySQL accounts are created. • No test database is created. As a result of these actions, it is necessary after installation to start the server, connect as root using the password written to the .mysql_secret file, and specify a new root password. Until this is done, the administrative account cannot be used for anything else. To change the password, you can use the SET PASSWORD statement (for example, with the mysql client). You can also use mysqladmin or mysql_secure_installation. To produce a MySQL deployment that is not secure by default, you must explicitly specify the --skip-random-passwords option when you invoke mysql_install_db. 
With this option, mysql_install_db performs the following actions: • No random password is generated for the 'root'@'localhost' account. • A test database is created that is accessible by any user. • --skip-sys-schema As of MySQL 5.7.7, mysql_install_db installs the sys schema. The --skip-sys-schema option suppresses this behavior. This option was added in MySQL 5.7.7. • --srcdir=dir_name For internal use. This option specifies the directory under which mysql_install_db looks for support files such as the error message file and the file for populating the help tables. • --user=user_name, -u user_name The system (login) user name to use for running mysqld. Files and directories created by mysqld will be owned by this user. You must be the system root user to use this option. By default, mysqld runs using your current login name and files and directories that it creates will be owned by you. The -u form of this option was added in MySQL 5.7.5. • --verbose, -v Verbose mode. Print more information about what the program does. You can use this option to see the mysqld command that mysql_install_db invokes to start the server in bootstrap mode. The -v form of this option was added in MySQL 5.7.5. • --version, -V Display version information and exit. This option was added in MySQL 5.7.5. • --windows For internal use. This option is used for creating Windows distributions. It is a deprecated alias for --cross-bootstrap. This option was removed in MySQL 5.7.5. COPYRIGHT Copyright © 1997, 2018, Oracle and/or its affiliates. All rights reserved. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 5.7 10/03/2018 MYSQL_INSTALL_DB(1)
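The .mysql_secret behavior described above (a timestamped entry, restrictive file mode 600, read once by the administrator) can be sketched with ordinary shell tools. This is a mock-up for illustration: the file contents and the password below are invented, not produced by a real server.

```shell
# Create a mock .mysql_secret the way the manual describes:
# mode 600 and a timestamped password entry.
secret_dir=$(mktemp -d)
secret_file="$secret_dir/.mysql_secret"
umask 077   # files created from now on get mode 600
printf '# Password set for user root@localhost at %s\n%s\n' \
    "$(date '+%Y-%m-%d %H:%M:%S')" 'Fx3_examplePw' > "$secret_file"

# An administrator (or wrapper script) reads the password back,
# skipping the timestamp comment line:
password=$(grep -v '^#' "$secret_file" | head -n 1)
perms=$(ls -l "$secret_file" | cut -c1-10)
echo "$perms $password"
```

After resetting the administrative password with SET PASSWORD, the manual recommends removing this file (rm "$secret_file") so mysql_secure_installation does not expire the password again.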
mysql_install_db - initialize MySQL data directory
mysql_install_db [options]
null
null
zstdless
zstdless runs less(1) on files, or on standard input if no file argument is given, after decompressing them with zstdcat(1). SEE ALSO zstd(1) zstd 1.5.6 March 2024 ZSTDLESS(1)
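Assuming the zstd tools are installed, the effect of zstdless can be reproduced by hand with zstdcat(1); the interactive less(1) stage is replaced by a plain variable capture here so the example is non-interactive:

```shell
# Compress a sample file, then decompress it the way zstdless would
# before handing the text to less(1).
tmp=$(mktemp -d)
printf 'hello zstd\n' > "$tmp/sample.txt"
zstd -q "$tmp/sample.txt" -o "$tmp/sample.txt.zst"
out=$(zstdcat "$tmp/sample.txt.zst")   # zstdless would pipe this into less
echo "$out"
```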
zstdless - view zstandard-compressed files
zstdless [flags] [file ...]
null
null
iconv
The iconv program converts text from one encoding to another encoding. More precisely, it converts from the encoding given for the -f option to the encoding given for the -t option. Either of these encodings defaults to the encoding of the current locale. All the inputfiles are read and converted in turn; if no inputfile is given, the standard input is used. The converted text is printed to standard output. The encodings permitted are system dependent. For the libiconv implementation, they are listed in the iconv_open(3) manual page. Options controlling the input and output format: -f encoding, --from-code=encoding Specifies the encoding of the input. -t encoding, --to-code=encoding Specifies the encoding of the output. Options controlling conversion problems: -c When this option is given, characters that cannot be converted are silently discarded, instead of leading to a conversion error. --unicode-subst=formatstring When this option is given, Unicode characters that cannot be represented in the target encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the Unicode code point. The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument. --byte-subst=formatstring When this option is given, bytes in the input that are not valid in the source encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the byte's value. The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument. --widechar-subst=formatstring When this option is given, wide characters in the input that are not valid in the source encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the byte's value. 
The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument. Options controlling error output: -s, --silent When this option is given, error messages about invalid or unconvertible characters are omitted, but the actual converted text is unaffected. The iconv -l or iconv --list command lists the names of the supported encodings, in a system dependent format. For the libiconv implementation, the names are printed in upper case, separated by whitespace, and alias names of an encoding are listed on the same line as the encoding itself.
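For example, the -f and -t options described above convert a Latin-1 byte stream to UTF-8; octal escapes are used here so the sample input is reproducible in any shell:

```shell
# 'café' with an ISO-8859-1 e-acute: the single byte 0xE9 (octal \351).
latin1=$(printf 'caf\351')
utf8=$(printf '%s' "$latin1" | iconv -f ISO-8859-1 -t UTF-8)
echo "$utf8"   # the e-acute is now the two-byte UTF-8 sequence 0xC3 0xA9
```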
iconv - character set conversion
iconv [OPTION...] [-f encoding] [-t encoding] [inputfile ...] iconv -l
null
iconv -f ISO-8859-1 -t UTF-8 converts input from the old West-European encoding ISO-8859-1 to Unicode. iconv -f KOI8-R --byte-subst="<0x%x>" --unicode-subst="<U+%04X>" converts input from the old Russian encoding KOI8-R to the locale encoding, substituting an angle bracket notation with hexadecimal numbers for invalid bytes and for valid but unconvertible characters. iconv --list lists the supported encodings. CONFORMING TO POSIX:2001 SEE ALSO iconv_open(3), locale(7) GNU March 31, 2007 ICONV(1)
captoinfo
captoinfo looks in each given text file for termcap descriptions. For each one found, an equivalent terminfo description is written to standard output. Termcap tc capabilities are translated directly to terminfo use capabilities. If no file is given, then the environment variable TERMCAP is used for the filename or entry. If TERMCAP is a full pathname to a file, only the terminal whose name is specified in the environment variable TERM is extracted from that file. If the environment variable TERMCAP is not set, then the file /etc/termcap is read. -v print out tracing information on standard error as the program runs. -V print out the version of the program in use on standard error and exit. -1 cause the fields to print out one to a line. Otherwise, the fields will be printed several to a line to a maximum width of 60 characters. -w width change the output to width characters. FILES /usr/share/terminfo Compiled terminal description database. TRANSLATIONS FROM NONSTANDARD CAPABILITIES Some obsolete nonstandard capabilities will automatically be translated into standard (SVr4/XSI Curses) terminfo capabilities by captoinfo. Whenever one of these automatic translations is done, the program will issue a notification to stderr, inviting the user to check that it has not mistakenly translated a completely unknown and random capability and/or syntax error. 
Nonstd   Std      From    Terminfo
name     name             capability
───────────────────────────────────────────────
BO       mr       AT&T    enter_reverse_mode
CI       vi       AT&T    cursor_invisible
CV       ve       AT&T    cursor_normal
DS       mh       AT&T    enter_dim_mode
EE       me       AT&T    exit_attribute_mode
FE       LF       AT&T    label_on
FL       LO       AT&T    label_off
XS       mk       AT&T    enter_secure_mode
EN       @7       XENIX   key_end
GE       ae       XENIX   exit_alt_charset_mode
GS       as       XENIX   enter_alt_charset_mode
HM       kh       XENIX   key_home
LD       kL       XENIX   key_dl
PD       kN       XENIX   key_npage
PN       po       XENIX   prtr_off
PS       pf       XENIX   prtr_on
PU       kP       XENIX   key_ppage
RT       @8       XENIX   kent
UP       ku       XENIX   kcuu1
KA       k;       Tek     key_f10
KB       F1       Tek     key_f11
KC       F2       Tek     key_f12
KD       F3       Tek     key_f13
KE       F4       Tek     key_f14
KF       F5       Tek     key_f15
BC       Sb       Tek     set_background
FC       Sf       Tek     set_foreground
HS       mh       Iris    enter_dim_mode
XENIX termcap also used to have a set of extension capabilities for forms drawing, designed to take advantage of the IBM PC high-half graphics. They were as follows:
Cap   Graphic
─────────────────────────────
G2    upper left
G3    lower left
G1    upper right
G4    lower right
GR    pointing right
GL    pointing left
GU    pointing up
GD    pointing down
GH    horizontal line
GV    vertical line
GC    intersection
G6    upper left
G7    lower left
G5    upper right
G8    lower right
Gr    tee pointing right
Gl    tee pointing left
Gu    tee pointing up
Gd    tee pointing down
Gh    horizontal line
Gv    vertical line
Gc    intersection
GG    acs magic cookie count
If the single-line capabilities occur in an entry, they will automatically be composed into an acsc string. The double-line capabilities and GG are discarded with a warning message. IBM's AIX has a terminfo facility descended from SVr1 terminfo but incompatible with the SVr4 format. The following AIX extensions are automatically translated:
IBM      XSI
─────────────
ksel     kslt
kbtab    kcbt
font0    s0ds
font1    s1ds
font2    s2ds
font3    s3ds
Additionally, the AIX box1 capability will be automatically translated to an acsc string. Hewlett-Packard's terminfo library supports two nonstandard terminfo capabilities meml (memory lock) and memu (memory unlock). 
These will be discarded with a warning message. NOTES This utility is actually a link to tic(1M), running in -I mode. You can use other tic options such as -f and -x. The trace option is not identical to SVr4's. Under SVr4, instead of following the -v with a trace level n, you repeat it n times. SEE ALSO infocmp(1M), curses(3X), terminfo(5) This describes ncurses version 5.7 (patch 20081102). AUTHOR Eric S. Raymond <esr@snark.thyrsus.com> and Thomas E. Dickey <dickey@invisible-island.net> captoinfo(1M)
captoinfo - convert a termcap description into a terminfo description
captoinfo [-vn width] [-V] [-1] [-w width] file . . .
null
null
opj_decompress
null
opj_decompress - This program reads in a jpeg2000 image and converts it to another image type. It is part of the OpenJPEG library. Valid input image extensions are .j2k, .jp2, .j2c, .jpt. Valid output image extensions are .bmp, .pgm, .pgx, .png, .pnm, .ppm, .raw, .tga, .tif. PNG output requires libpng; TIF output requires libtiff.
opj_decompress -i infile.j2k -o outfile.png opj_decompress -ImgDir images/ -OutFor bmp opj_decompress -h Print help message and exit See JPWL OPTIONS for special options
-i name (jpeg2000 input file name) -l n (n is the maximum number of quality layers to decode. See LAYERS below) -o name (output file name with extension) -r n (n is the highest resolution level to be discarded. See REDUCTION below) -x name (use name as index file and fill it) -ImgDir directory_name (directory containing input files) -OutFor ext (extension for output files) JPIP OPTIONS Options usable only if the library has been compiled with BUILD_JPIP -jpip Embed index table box into the output JP2 file (compulsory for JPIP) -TP R Partition a tile into tile parts of different resolution levels (compulsory for JPT-stream) JPWL OPTIONS Options usable only if the library has been compiled with BUILD_JPWL -W c[=Nc] (Nc is the number of expected components in the codestream; default:3) -W t[=Nt] (Nt is the maximum number of tiles in the codestream; default:8192) -W c[=Nc], t[=Nt] (same as above) REDUCTION Set the number of highest resolution levels to be discarded. The image resolution is effectively divided by 2 to the power of the number of discarded levels. The reduce factor is limited by the smallest total number of decomposition levels among tiles. LAYERS Set the maximum number of quality layers to decode. If there are fewer quality layers than the specified number, all the quality layers are decoded. AUTHORS Copyright (c) 2002-2014, Universite catholique de Louvain (UCL), Belgium Copyright (c) 2002-2014, Professor Benoit Macq Copyright (c) 2001-2003, David Janssens Copyright (c) 2002-2003, Yannick Verschueren Copyright (c) 2003-2007, Francois-Olivier Devaux and Antonin Descampe Copyright (c) 2005, Herve Drolon, FreeImage Team Copyright (c) 2006-2007, Parvatha Elangovan SEE ALSO opj_compress(1) opj_dump(1) opj_decompress Version 2.1.1 opj_decompress(1)
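The REDUCTION rule above is plain arithmetic: discarding n resolution levels divides each image dimension by 2^n. The numbers can be checked without any codec; the 4096-pixel width is a made-up example:

```shell
# Width of a hypothetical 4096-pixel-wide image after `-r n`:
# each discarded level halves the remaining resolution.
width=4096
for n in 0 1 2 3; do
    echo "-r $n -> $(( width >> n )) px"
done
```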
null
isort-identify-imports
null
null
null
null
null
tiffdump
tiffdump displays directory information from files created according to the Tag Image File Format, Revision 6.0. The header of each TIFF file (magic number, version, and first directory offset) is displayed, followed by the tag contents of each directory in the file. For each tag, the name, data type, count, and value(s) are displayed. When the symbolic name for a tag or data type is known, the symbolic name is displayed followed by its numeric (decimal) value. Tag values are displayed enclosed in <> characters immediately preceded by the value of the count field. For example, an ImageWidth tag might be displayed as ImageWidth (256) SHORT (3) 1<800>. tiffdump is particularly useful for investigating the contents of TIFF files that libtiff does not understand.
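The header fields listed above (magic number, version, first directory offset) occupy the first eight bytes of a TIFF file. A minimal little-endian header can be built and inspected with standard tools; note this is only a bare header for illustration, not a complete valid TIFF:

```shell
tmp=$(mktemp -d)
# 'II' (0x49 0x49) = little-endian magic, 42 = TIFF version,
# 8 = byte offset of the first IFD (immediately after the header).
printf 'II\052\000\010\000\000\000' > "$tmp/header.tif"
bytes=$(od -An -tx1 "$tmp/header.tif" | tr -s ' \n' ' ')
echo "$bytes"
```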
tiffdump - print verbatim information about TIFF files
tiffdump [ options ] name …
-h Force numeric data to be printed in hexadecimal rather than the default decimal. -m items Change the number of indirect data items that are printed. The default is 24. -o offset Dump the contents of the IFD at a particular file offset. The file offset may be specified using the usual C-style syntax; i.e. a leading 0x for hexadecimal and a leading 0 for octal. SEE ALSO tiffinfo (1), libtiff (3tiff) AUTHOR LibTIFF contributors COPYRIGHT 1988-2022, LibTIFF contributors 4.6 September 8, 2023 TIFFDUMP(1)
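The header fields tiffdump prints first (byte order, the magic version 42, and the first directory offset) can be parsed in a few lines; this is an illustrative sketch, not tiffdump's own code:

```python
import struct

def read_tiff_header(data: bytes):
    """Parse the 8-byte classic TIFF header that tiffdump displays:
    byte order ('II' little-endian or 'MM' big-endian), version (42),
    and the file offset of the first IFD (directory)."""
    order = data[:2]
    if order == b"II":
        fmt = "<HI"   # little-endian unsigned short + unsigned int
    elif order == b"MM":
        fmt = ">HI"   # big-endian
    else:
        raise ValueError("not a TIFF file")
    version, ifd_offset = struct.unpack(fmt, data[2:8])
    return order.decode(), version, ifd_offset

# Minimal little-endian header: magic 42, first IFD at byte 8.
print(read_tiff_header(b"II" + struct.pack("<HI", 42, 8)))  # ('II', 42, 8)
```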
null
jlpm
null
null
null
null
null
nghttpx
null
null
null
null
null
mailmail
null
null
null
null
null
ttx
ttx is a tool for manipulating TrueType and OpenType fonts. It can convert TrueType and OpenType fonts to and from an XML-based format called TTX. TTX files have a ‘.ttx’ extension. For each file argument it is given, ttx detects whether it is a ‘.ttf’, ‘.otf’ or ‘.ttx’ file and acts accordingly: if it is a ‘.ttf’ or ‘.otf’ file, it generates a ‘.ttx’ file; if it is a ‘.ttx’ file, it generates a ‘.ttf’ or ‘.otf’ file. By default, every output file is created in the same directory as the corresponding input file and with the same name except for the extension, which is substituted appropriately. ttx never overwrites existing files; if necessary, it appends a suffix to the output file name before the extension, as in Arial#1.ttf. General options -h Display usage information. -d dir Write the output files to directory dir instead of writing every output file to the same directory as the corresponding input file. -o file Write the output to file instead of writing it to the same directory as the corresponding input file. -v Be verbose. Write more messages to the standard output describing what is being done. -a Allow virtual glyph IDs on compile or decompile. Dump options The following options control the process of dumping font files (TrueType or OpenType) to TTX files. -l List table information. Instead of dumping the font to a TTX file, display minimal information about each table. -t table Dump table table. This option may be given multiple times to dump several tables at once. When not specified, all tables are dumped. -x table Exclude table table from the list of tables to dump. This option may be given multiple times to exclude several tables from the dump. The -t and -x options are mutually exclusive. -s Split tables. Dump each table to a separate TTX file and write (under the name that would have been used for the output file if the -s option had not been given) one small TTX file containing references to the individual table dump files.
This file can be used as input to ttx as long as the referenced files can be found in the same directory. -i Don't disassemble TrueType instructions. When this option is specified, all TrueType programs (glyph programs, the font program and the pre-program) are written to the TTX file as hexadecimal data instead of assembly. This saves some time and results in smaller TTX files. -y n When decompiling a TrueType Collection (TTC) file, decompile font number n, starting from 0. Compilation options The following options control the process of compiling TTX files into font files (TrueType or OpenType): -m fontfile Merge the input TTX file with fontfile. No more than one file argument can be specified when this option is used. -b Don't recalculate glyph bounding boxes. Use the values in the TTX file as is. THE TTX FILE FORMAT You can find some information about the TTX file format in documentation.html. In particular, you will find in that file the list of tables understood by ttx and the relations between TrueType GlyphIDs and the glyph names used in TTX files.
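The no-overwrite naming rule described above (Arial#1.ttf) can be sketched roughly as follows; the helper name is hypothetical and this is not fontTools' actual implementation:

```python
import os

def unique_output_name(path):
    """Mimic ttx's no-overwrite rule as described in the man page:
    if path exists, insert '#1', '#2', ... before the extension until
    a free name is found (illustrative sketch only)."""
    if not os.path.exists(path):
        return path
    base, ext = os.path.splitext(path)
    n = 1
    while os.path.exists(f"{base}#{n}{ext}"):
        n += 1
    return f"{base}#{n}{ext}"
```

For example, if Arial.ttf already exists, the helper returns Arial#1.ttf, matching the behavior described above.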
ttx – tool for manipulating TrueType and OpenType fonts
ttx [option ...] file ...
null
In the following examples, all files are read from and written to the current directory. Additionally, the name given for the output file assumes in every case that it did not exist before ttx was invoked. Dump the TrueType font contained in FreeSans.ttf to FreeSans.ttx: ttx FreeSans.ttf Compile MyFont.ttx into a TrueType or OpenType font file: ttx MyFont.ttx List the tables in FreeSans.ttf along with some information: ttx -l FreeSans.ttf Dump the ‘cmap’ table from FreeSans.ttf to FreeSans.ttx: ttx -t cmap FreeSans.ttf NOTES On MS-Windows and MacOS, ttx is available as a graphical application to which files can be dropped. SEE ALSO documentation.html fontforge(1), ftinfo(1), gfontview(1), xmbdfed(1), Font::TTF(3pm) AUTHORS ttx was written by Just van Rossum ⟨just@letterror.com⟩. This manual page was written by Florent Rougon ⟨f.rougon@free.fr⟩ for the Debian GNU/Linux system based on the existing FontTools documentation. It may be freely used, modified and distributed without restrictions. macOS 14.5 May 18, 2004
pytest
null
null
null
null
null
dask
null
null
null
null
null
h5unjam
null
null
null
null
null
typer
null
null
null
null
null
inout
null
null
null
null
null
mtor
null
null
null
null
null
my_print_defaults
my_print_defaults displays the options that are present in option groups of option files. The output indicates what options are used by programs that read the specified option groups. For example, the mysqlcheck program reads the [mysqlcheck] and [client] option groups. To see what options are present in those groups in the standard option files, invoke my_print_defaults like this: $> my_print_defaults mysqlcheck client --user=myusername --password=password --host=localhost The output consists of options, one per line, in the form that they would be specified on the command line. my_print_defaults supports the following options. • --help, -? Display a help message and exit. • --config-file=file_name, --defaults-file=file_name, -c file_name Read only the given option file. • --debug=debug_options, -# debug_options Write a debugging log. A typical debug_options string is d:t:o,file_name. The default is d:t:o,/tmp/my_print_defaults.trace. • --defaults-extra-file=file_name, --extra-file=file_name, -e file_name Read this option file after the global option file but (on Unix) before the user option file. For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”. • --defaults-group-suffix=suffix, -g suffix In addition to the groups named on the command line, read groups that have the given suffix. For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”. • --login-path=name, -l name Read options from the named login path in the .mylogin.cnf login path file. A “login path” is an option group containing options that specify which MySQL server to connect to and which account to authenticate as. To create or modify a login path file, use the mysql_config_editor utility. See mysql_config_editor(1). 
For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”. • --no-login-paths Skips reading options from the login path file. See --login-path for related information. For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”. • --no-defaults, -n Return an empty string. For additional information about this and other option-file options, see Section 4.2.2.3, “Command-Line Options that Affect Option-File Handling”. • --show, -s my_print_defaults masks passwords by default. Use this option to display passwords as cleartext. • --verbose, -v Verbose mode. Print more information about what the program does. • --version, -V Display version information and exit. COPYRIGHT Copyright © 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MY_PRINT_DEFAULTS(1)
my_print_defaults - display options from option files
my_print_defaults [options] option_group ...
null
null
sip-install
null
null
null
null
null
pydoc
null
null
null
null
null
python
null
null
null
null
null
resolve_stack_dump
resolve_stack_dump resolves a numeric stack dump to symbols. Invoke resolve_stack_dump like this: shell> resolve_stack_dump [options] symbols_file [numeric_dump_file] The symbols file should include the output from the nm --numeric-sort mysqld command. The numeric dump file should contain a numeric stack trace from mysqld. If no numeric dump file is named on the command line, the stack trace is read from the standard input. resolve_stack_dump supports the following options. • --help, -h Display a help message and exit. • --numeric-dump-file=file_name, -n file_name Read the stack trace from the given file. • --symbols-file=file_name, -s file_name Use the given symbols file. • --version, -V Display version information and exit. For more information, see Section 28.5.1.5, “Using a Stack Trace”. COPYRIGHT Copyright © 1997, 2018, Oracle and/or its affiliates. All rights reserved. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 5.7 10/03/2018 RESOLVE_STACK_DUMP(1)
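Conceptually, resolve_stack_dump maps each numeric address to the nearest symbol at or below it in the numerically sorted symbols file. A minimal sketch of that lookup (illustrative only, with hypothetical helper names):

```python
import bisect

def load_symbols(nm_output):
    """Parse `nm --numeric-sort mysqld`-style lines ("address type name")
    into parallel, address-sorted lists."""
    addrs, names = [], []
    for line in nm_output.splitlines():
        parts = line.split()
        if len(parts) == 3:
            addrs.append(int(parts[0], 16))
            names.append(parts[2])
    return addrs, names

def resolve(addr, addrs, names):
    """Map a numeric stack address to the nearest symbol at or below it."""
    i = bisect.bisect_right(addrs, addr) - 1
    if i < 0:
        return None
    return f"{names[i]} + {addr - addrs[i]:#x}"

nm = "0000000000401000 T main\n0000000000401200 T handle_query\n"
addrs, names = load_symbols(nm)
print(resolve(0x401250, addrs, names))  # handle_query + 0x50
```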
resolve_stack_dump - resolve numeric stack trace dump to symbols
resolve_stack_dump [options] symbols_file [numeric_dump_file]
null
null
conda
null
null
null
null
null
flake8
null
null
null
null
null
unxz
xz is a general-purpose data compression tool with command line syntax similar to gzip(1) and bzip2(1). The native file format is the .xz format, but the legacy .lzma format used by LZMA Utils and raw compressed streams with no container format headers are also supported. In addition, decompression of the .lz format used by lzip is supported. xz compresses or decompresses each file according to the selected operation mode. If no files are given or file is -, xz reads from standard input and writes the processed data to standard output. xz will refuse (display an error and skip the file) to write compressed data to standard output if it is a terminal. Similarly, xz will refuse to read compressed data from standard input if it is a terminal. Unless --stdout is specified, files other than - are written to a new file whose name is derived from the source file name: • When compressing, the suffix of the target file format (.xz or .lzma) is appended to the source filename to get the target filename. • When decompressing, the .xz, .lzma, or .lz suffix is removed from the filename to get the target filename. xz also recognizes the suffixes .txz and .tlz, and replaces them with the .tar suffix. If the target file already exists, an error is displayed and the file is skipped. Unless writing to standard output, xz will display a warning and skip the file if any of the following applies: • File is not a regular file. Symbolic links are not followed, and thus they are not considered to be regular files. • File has more than one hard link. • File has setuid, setgid, or sticky bit set. • The operation mode is set to compress and the file already has a suffix of the target file format (.xz or .txz when compressing to the .xz format, and .lzma or .tlz when compressing to the .lzma format). • The operation mode is set to decompress and the file doesn't have a suffix of any of the supported file formats (.xz, .txz, .lzma, .tlz, or .lz). 
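The target-filename rules listed above can be sketched as follows (an illustrative helper; the real xz also honors --suffix and handles more corner cases):

```python
def xz_target_name(name, mode="compress", fmt="xz"):
    """Derive the target filename per the rules above:
    compressing appends .xz or .lzma; decompressing strips a known
    suffix, mapping .txz and .tlz to .tar (illustrative sketch)."""
    if mode == "compress":
        return name + "." + fmt
    strip = {".xz": "", ".lzma": "", ".lz": "", ".txz": ".tar", ".tlz": ".tar"}
    for suf, repl in strip.items():
        if name.endswith(suf):
            return name[: -len(suf)] + repl
    raise ValueError("unknown suffix: " + name)

print(xz_target_name("foo.tar", "compress"))    # foo.tar.xz
print(xz_target_name("foo.txz", "decompress"))  # foo.tar
```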
After successfully compressing or decompressing the file, xz copies the owner, group, permissions, access time, and modification time from the source file to the target file. If copying the group fails, the permissions are modified so that the target file doesn't become accessible to users who didn't have permission to access the source file. xz doesn't support copying other metadata like access control lists or extended attributes yet. Once the target file has been successfully closed, the source file is removed unless --keep was specified. The source file is never removed if the output is written to standard output or if an error occurs. Sending SIGINFO or SIGUSR1 to the xz process makes it print progress information to standard error. This has only limited use since when standard error is a terminal, using --verbose will display an automatically updating progress indicator. Memory usage The memory usage of xz varies from a few hundred kilobytes to several gigabytes depending on the compression settings. The settings used when compressing a file determine the memory requirements of the decompressor. Typically the decompressor needs 5 % to 20 % of the amount of memory that the compressor needed when creating the file. For example, decompressing a file created with xz -9 currently requires 65 MiB of memory. Still, it is possible to have .xz files that require several gigabytes of memory to decompress. Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. While some operating systems provide ways to limit the memory usage of processes, relying on it wasn't deemed to be flexible enough (for example, using ulimit(1) to limit virtual memory tends to cripple mmap(2)). The memory usage limiter can be enabled with the command line option --memlimit=limit. 
Often it is more convenient to enable the limiter by default by setting the environment variable XZ_DEFAULTS, for example, XZ_DEFAULTS=--memlimit=150MiB. It is possible to set the limits separately for compression and decompression by using --memlimit-compress=limit and --memlimit-decompress=limit. Using these two options outside XZ_DEFAULTS is rarely useful because a single run of xz cannot do both compression and decompression and --memlimit=limit (or -M limit) is shorter to type on the command line. If the specified memory usage limit is exceeded when decompressing, xz will display an error and decompressing the file will fail. If the limit is exceeded when compressing, xz will try to scale the settings down so that the limit is no longer exceeded (except when using --format=raw or --no-adjust). This way the operation won't fail unless the limit is very small. The scaling of the settings is done in steps that don't match the compression level presets, for example, if the limit is only slightly less than the amount required for xz -9, the settings will be scaled down only a little, not all the way down to xz -8. Concatenation and padding with .xz files It is possible to concatenate .xz files as is. xz will decompress such files as if they were a single .xz file. It is possible to insert padding between the concatenated parts or after the last part. The padding must consist of null bytes and the size of the padding must be a multiple of four bytes. This can be useful, for example, if the .xz file is stored on a medium that measures file sizes in 512-byte blocks. Concatenation and padding are not allowed with .lzma files or raw streams.
xz, unxz, xzcat, lzma, unlzma, lzcat - Compress or decompress .xz and .lzma files
xz [option...] [file...] COMMAND ALIASES unxz is equivalent to xz --decompress. xzcat is equivalent to xz --decompress --stdout. lzma is equivalent to xz --format=lzma. unlzma is equivalent to xz --format=lzma --decompress. lzcat is equivalent to xz --format=lzma --decompress --stdout. When writing scripts that need to decompress files, it is recommended to always use the name xz with appropriate arguments (xz -d or xz -dc) instead of the names unxz and xzcat.
Integer suffixes and special values In most places where an integer argument is expected, an optional suffix is supported to easily indicate large integers. There must be no space between the integer and the suffix. KiB Multiply the integer by 1,024 (2^10). Ki, k, kB, K, and KB are accepted as synonyms for KiB. MiB Multiply the integer by 1,048,576 (2^20). Mi, m, M, and MB are accepted as synonyms for MiB. GiB Multiply the integer by 1,073,741,824 (2^30). Gi, g, G, and GB are accepted as synonyms for GiB. The special value max can be used to indicate the maximum integer value supported by the option. Operation mode If multiple operation mode options are given, the last one takes effect. -z, --compress Compress. This is the default operation mode when no operation mode option is specified and no other operation mode is implied from the command name (for example, unxz implies --decompress). -d, --decompress, --uncompress Decompress. -t, --test Test the integrity of compressed files. This option is equivalent to --decompress --stdout except that the decompressed data is discarded instead of being written to standard output. No files are created or removed. -l, --list Print information about compressed files. No uncompressed output is produced, and no files are created or removed. In list mode, the program cannot read the compressed data from standard input or from other unseekable sources. The default listing shows basic information about files, one file per line. To get more detailed information, use also the --verbose option. For even more information, use --verbose twice, but note that this may be slow, because getting all the extra information requires many seeks. The width of verbose output exceeds 80 characters, so piping the output to, for example, less -S may be convenient if the terminal isn't wide enough. The exact output may vary between xz versions and different locales. For machine-readable output, --robot --list should be used. 
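The suffix parsing described above can be sketched in a few lines (illustrative only; the value returned for max here is arbitrary, since the real maximum depends on the option):

```python
def parse_size(arg):
    """Parse an integer with the suffixes described above: KiB/MiB/GiB
    and their synonyms, plus the special value 'max' (modeled here as
    2**63 - 1 purely for illustration)."""
    if arg == "max":
        return 2**63 - 1
    multipliers = {}
    for syns, mult in ((("KiB", "Ki", "k", "kB", "K", "KB"), 2**10),
                       (("MiB", "Mi", "m", "M", "MB"), 2**20),
                       (("GiB", "Gi", "g", "G", "GB"), 2**30)):
        for s in syns:
            multipliers[s] = mult
    # Try longer suffixes first so "MiB" isn't misread as "B" or "i".
    for suf in sorted(multipliers, key=len, reverse=True):
        if arg.endswith(suf):
            return int(arg[: -len(suf)]) * multipliers[suf]
    return int(arg)

print(parse_size("150MiB"))  # 157286400
print(parse_size("2k"))      # 2048
```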
Operation modifiers -k, --keep Don't delete the input files. Since xz 5.2.6, this option also makes xz compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file. In earlier versions this was only done with --force. -f, --force This option has several effects: • If the target file already exists, delete it before compressing or decompressing. • Compress or decompress even if the input is a symbolic link to a regular file, has more than one hard link, or has the setuid, setgid, or sticky bit set. The setuid, setgid, and sticky bits are not copied to the target file. • When used with --decompress --stdout and xz cannot recognize the type of the source file, copy the source file as is to standard output. This allows xzcat --force to be used like cat(1) for files that have not been compressed with xz. Note that in future, xz might support new compressed file formats, which may make xz decompress more types of files instead of copying them as is to standard output. --format=format can be used to restrict xz to decompress only a single file format. -c, --stdout, --to-stdout Write the compressed or decompressed data to standard output instead of a file. This implies --keep. --single-stream Decompress only the first .xz stream, and silently ignore possible remaining input data following the stream. Normally such trailing garbage makes xz display an error. xz never decompresses more than one stream from .lzma files or raw streams, but this option still makes xz ignore the possible trailing data after the .lzma file or raw stream. This option has no effect if the operation mode is not --decompress or --test. --no-sparse Disable creation of sparse files. By default, if decompressing into a regular file, xz tries to make the file sparse if the decompressed data contains long sequences of binary zeros. 
It also works when writing to standard output as long as standard output is connected to a regular file and certain additional conditions are met to make it safe. Creating sparse files may save disk space and speed up the decompression by reducing the amount of disk I/O. -S .suf, --suffix=.suf When compressing, use .suf as the suffix for the target file instead of .xz or .lzma. If not writing to standard output and the source file already has the suffix .suf, a warning is displayed and the file is skipped. When decompressing, recognize files with the suffix .suf in addition to files with the .xz, .txz, .lzma, .tlz, or .lz suffix. If the source file has the suffix .suf, the suffix is removed to get the target filename. When compressing or decompressing raw streams (--format=raw), the suffix must always be specified unless writing to standard output, because there is no default suffix for raw streams. --files[=file] Read the filenames to process from file; if file is omitted, filenames are read from standard input. Filenames must be terminated with the newline character. A dash (-) is taken as a regular filename; it doesn't mean standard input. If filenames are given also as command line arguments, they are processed before the filenames read from file. --files0[=file] This is identical to --files[=file] except that each filename must be terminated with the null character. Basic file format and compression options -F format, --format=format Specify the file format to compress or decompress: auto This is the default. When compressing, auto is equivalent to xz. When decompressing, the format of the input file is automatically detected. Note that raw streams (created with --format=raw) cannot be auto- detected. xz Compress to the .xz file format, or accept only .xz files when decompressing. lzma, alone Compress to the legacy .lzma file format, or accept only .lzma files when decompressing. 
The alternative name alone is provided for backwards compatibility with LZMA Utils. lzip Accept only .lz files when decompressing. Compression is not supported. The .lz format version 0 and the unextended version 1 are supported. Version 0 files were produced by lzip 1.3 and older. Such files aren't common but may be found in file archives as a few source packages were released in this format. People might have old personal files in this format too. Decompression support for the format version 0 was removed in lzip 1.18. lzip 1.4 and later create files in the format version 1. The sync flush marker extension to the format version 1 was added in lzip 1.6. This extension is rarely used and isn't supported by xz (diagnosed as corrupt input). raw Compress or uncompress a raw stream (no headers). This is meant for advanced users only. To decode raw streams, you need to use --format=raw and explicitly specify the filter chain, which normally would have been stored in the container headers. -C check, --check=check Specify the type of the integrity check. The check is calculated from the uncompressed data and stored in the .xz file. This option has an effect only when compressing into the .xz format; the .lzma format doesn't support integrity checks. The integrity check (if any) is verified when the .xz file is decompressed. Supported check types: none Don't calculate an integrity check at all. This is usually a bad idea. This can be useful when integrity of the data is verified by other means anyway. crc32 Calculate CRC32 using the polynomial from IEEE-802.3 (Ethernet). crc64 Calculate CRC64 using the polynomial from ECMA-182. This is the default, since it is slightly better than CRC32 at detecting damaged files and the speed difference is negligible. sha256 Calculate SHA-256. This is somewhat slower than CRC32 and CRC64. Integrity of the .xz headers is always verified with CRC32. It is not possible to change or disable it.
--ignore-check Don't verify the integrity check of the compressed data when decompressing. The CRC32 values in the .xz headers will still be verified normally. Do not use this option unless you know what you are doing. Possible reasons to use this option: • Trying to recover data from a corrupt .xz file. • Speeding up decompression. This matters mostly with SHA-256 or with files that have compressed extremely well. It's recommended to not use this option for this purpose unless the file integrity is verified externally in some other way. -0 ... -9 Select a compression preset level. The default is -6. If multiple preset levels are specified, the last one takes effect. If a custom filter chain was already specified, setting a compression preset level clears the custom filter chain. The differences between the presets are more significant than with gzip(1) and bzip2(1). The selected compression settings determine the memory requirements of the decompressor, thus using too high a preset level might make it painful to decompress the file on an old system with little RAM. Specifically, it's not a good idea to blindly use -9 for everything like it often is with gzip(1) and bzip2(1). -0 ... -3 These are somewhat fast presets. -0 is sometimes faster than gzip -9 while compressing much better. The higher ones often have speed comparable to bzip2(1) with comparable or better compression ratio, although the results depend a lot on the type of data being compressed. -4 ... -6 Good to very good compression while keeping decompressor memory usage reasonable even for old systems. -6 is the default, which is usually a good choice for distributing files that need to be decompressible even on systems with only 16 MiB RAM. (-5e or -6e may be worth considering too. See --extreme.) -7 ... -9 These are like -6 but with higher compressor and decompressor memory requirements. These are useful only when compressing files bigger than 8 MiB, 16 MiB, and 32 MiB, respectively.
On the same hardware, the decompression speed is approximately a constant number of bytes of compressed data per second. In other words, the better the compression, the faster the decompression will usually be. This also means that the amount of uncompressed output produced per second can vary a lot. The following table summarises the features of the presets:

    Preset   DictSize   CompCPU   CompMem   DecMem
      -0      256 KiB      0        3 MiB    1 MiB
      -1        1 MiB      1        9 MiB    2 MiB
      -2        2 MiB      2       17 MiB    3 MiB
      -3        4 MiB      3       32 MiB    5 MiB
      -4        4 MiB      4       48 MiB    5 MiB
      -5        8 MiB      5       94 MiB    9 MiB
      -6        8 MiB      6       94 MiB    9 MiB
      -7       16 MiB      6      186 MiB   17 MiB
      -8       32 MiB      6      370 MiB   33 MiB
      -9       64 MiB      6      674 MiB   65 MiB

Column descriptions: • DictSize is the LZMA2 dictionary size. It is a waste of memory to use a dictionary bigger than the size of the uncompressed file. This is why it is good to avoid using the presets -7 ... -9 when there's no real need for them. At -6 and lower, the amount of memory wasted is usually low enough to not matter. • CompCPU is a simplified representation of the LZMA2 settings that affect compression speed. The dictionary size affects speed too, so while CompCPU is the same for levels -6 ... -9, higher levels still tend to be a little slower. To get even slower and thus possibly better compression, see --extreme. • CompMem contains the compressor memory requirements in the single-threaded mode. It may vary slightly between xz versions. • DecMem contains the decompressor memory requirements. That is, the compression settings determine the memory requirements of the decompressor. The exact decompressor memory usage is slightly more than the LZMA2 dictionary size, but the values in the table have been rounded up to the next full MiB. Memory requirements of the multi-threaded mode are significantly higher than that of the single-threaded mode. With the default value of --block-size, each thread needs 3*3*DictSize plus CompMem or DecMem. For example, four threads with preset -6 need 660–670 MiB of memory.
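The multi-threaded memory rule quoted above (each thread needs 3*3*DictSize plus CompMem) can be checked against the -6 example with a few lines of Python (a rough model, not xz's exact accounting):

```python
# Preset -6 values taken from the table above (sizes in MiB).
PRESETS = {6: {"dict_mib": 8, "comp_mem_mib": 94}}

def mt_compress_mem_mib(threads, preset=6):
    """Estimate multi-threaded compressor memory per the rule above:
    with the default --block-size, each thread needs 3*3*DictSize plus
    CompMem (rough model only)."""
    p = PRESETS[preset]
    return threads * (3 * 3 * p["dict_mib"] + p["comp_mem_mib"])

print(mt_compress_mem_mib(4))  # 664, within the text's "660-670 MiB"
```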
-e, --extreme Use a slower variant of the selected compression preset level (-0 ... -9) to hopefully get a little bit better compression ratio, but with bad luck this can also make it worse. Decompressor memory usage is not affected, but compressor memory usage increases a little at preset levels -0 ... -3. Since there are two presets with dictionary sizes 4 MiB and 8 MiB, the presets -3e and -5e use slightly faster settings (lower CompCPU) than -4e and -6e, respectively. That way no two presets are identical.

    Preset   DictSize   CompCPU   CompMem   DecMem
     -0e      256 KiB      8        4 MiB    1 MiB
     -1e        1 MiB      8       13 MiB    2 MiB
     -2e        2 MiB      8       25 MiB    3 MiB
     -3e        4 MiB      7       48 MiB    5 MiB
     -4e        4 MiB      8       48 MiB    5 MiB
     -5e        8 MiB      7       94 MiB    9 MiB
     -6e        8 MiB      8       94 MiB    9 MiB
     -7e       16 MiB      8      186 MiB   17 MiB
     -8e       32 MiB      8      370 MiB   33 MiB
     -9e       64 MiB      8      674 MiB   65 MiB

For example, there are a total of four presets that use an 8 MiB dictionary, whose order from the fastest to the slowest is -5, -6, -5e, and -6e. --fast --best These are somewhat misleading aliases for -0 and -9, respectively. These are provided only for backwards compatibility with LZMA Utils. Avoid using these options. --block-size=size When compressing to the .xz format, split the input data into blocks of size bytes. The blocks are compressed independently from each other, which helps with multi-threading and makes limited random-access decompression possible. This option is typically used to override the default block size in multi-threaded mode, but this option can be used in single-threaded mode too. In multi-threaded mode about three times size bytes will be allocated in each thread for buffering input and output. The default size is three times the LZMA2 dictionary size or 1 MiB, whichever is more. Typically a good value is 2–4 times the size of the LZMA2 dictionary or at least 1 MiB. Using a size less than the LZMA2 dictionary size is a waste of RAM because then the LZMA2 dictionary buffer will never get fully used.
In multi-threaded mode, the sizes of the blocks are stored in the block headers. This size information is required for multi-threaded decompression. In single-threaded mode no block splitting is done by default. Setting this option doesn't affect memory usage. No size information is stored in block headers, thus files created in single-threaded mode won't be identical to files created in multi-threaded mode. The lack of size information also means that xz won't be able to decompress the files in multi-threaded mode. --block-list=items When compressing to the .xz format, start a new block with an optional custom filter chain after the given intervals of uncompressed data. The items are a comma-separated list. Each item consists of an optional filter chain number between 0 and 9 followed by a colon (:) and a required size of uncompressed data. Omitting an item (two or more consecutive commas) is a shorthand to use the size and filters of the previous item. If the input file is bigger than the sum of the sizes in items, the last item is repeated until the end of the file. A special value of 0 may be used as the last size to indicate that the rest of the file should be encoded as a single block. An alternative filter chain for each block can be specified in combination with the --filters1=filters ... --filters9=filters options. These options define filter chains with an identifier between 1–9. Filter chain 0 can be used to refer to the default filter chain, which is the same as not specifying a filter chain. The filter chain identifier can be used before the uncompressed size, followed by a colon (:).
For example, if one specifies --block-list=1:2MiB,3:2MiB,2:4MiB,,2MiB,0:4MiB then blocks will be created using:

    • The filter chain specified by --filters1 and 2 MiB input
    • The filter chain specified by --filters3 and 2 MiB input
    • The filter chain specified by --filters2 and 4 MiB input
    • The filter chain specified by --filters2 and 4 MiB input
    • The default filter chain and 2 MiB input
    • The default filter chain and 4 MiB input for every block until end of input

If one specifies a size that exceeds the encoder's block size (either the default value in threaded mode or the value specified with --block-size=size), the encoder will create additional blocks while keeping the boundaries specified in items. For example, if one specifies --block-size=10MiB --block-list=5MiB,10MiB,8MiB,12MiB,24MiB and the input file is 80 MiB, one will get 11 blocks: 5, 10, 8, 10, 2, 10, 10, 4, 10, 10, and 1 MiB.

In multi-threaded mode the sizes of the blocks are stored in the block headers. This isn't done in single-threaded mode, so the encoded output won't be identical to that of the multi-threaded mode.

--flush-timeout=timeout
    When compressing, if more than timeout milliseconds (a positive integer) has passed since the previous flush and reading more input would block, all the pending input data is flushed from the encoder and made available in the output stream. This can be useful if xz is used to compress data that is streamed over a network. Small timeout values make the data available at the receiving end with a small delay, but large timeout values give a better compression ratio.

    This feature is disabled by default. If this option is specified more than once, the last one takes effect. The special timeout value of 0 can be used to explicitly disable this feature. This feature is not available on non-POSIX systems.

    This feature is still experimental. Currently xz is unsuitable for decompressing the stream in real time due to how xz does buffering.
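As a sketch of streamed use (assuming a POSIX shell and a reasonably recent xz in PATH; the file name is arbitrary):

```shell
# Compress a stream, flushing pending data whenever input stalls for
# more than 500 ms; useful when the receiving end needs data promptly.
seq 1 100000 | xz --flush-timeout=500 > stream.xz

# Flushing does not change the format: the result is a normal .xz
# stream that round-trips losslessly.
xz -dc stream.xz | tail -n 1
```

With a fast local pipe the timeout may never trigger; the option only matters when input arrives slowly, as over a network.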
--memlimit-compress=limit
    Set a memory usage limit for compression. If this option is specified multiple times, the last one takes effect.

    If the compression settings exceed the limit, xz will attempt to adjust the settings downwards so that the limit is no longer exceeded and display a notice that automatic adjustment was done. The adjustments are done in this order: reducing the number of threads, switching to single-threaded mode if even one thread in multi-threaded mode exceeds the limit, and finally reducing the LZMA2 dictionary size.

    When compressing with --format=raw or if --no-adjust has been specified, only the number of threads may be reduced since it can be done without affecting the compressed output.

    If the limit cannot be met even with the adjustments described above, an error is displayed and xz will exit with exit status 1.

    The limit can be specified in multiple ways:

    • The limit can be an absolute value in bytes. Using an integer suffix like MiB can be useful. Example: --memlimit-compress=80MiB

    • The limit can be specified as a percentage of total physical memory (RAM). This can be useful especially when setting the XZ_DEFAULTS environment variable in a shell initialization script that is shared between different computers. That way the limit is automatically bigger on systems with more memory. Example: --memlimit-compress=70%

    • The limit can be reset back to its default value by setting it to 0. This is currently equivalent to setting the limit to max (no memory usage limit).

    For 32-bit xz there is a special case: if the limit would be over 4020 MiB, the limit is set to 4020 MiB. On MIPS32 2000 MiB is used instead. (The values 0 and max aren't affected by this. A similar feature doesn't exist for decompression.) This can be helpful when a 32-bit executable has access to 4 GiB address space (2 GiB on MIPS32) while hopefully doing no harm in other situations.

    See also the section Memory usage.
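For illustration, a sketch of both forms of the limit (the limit values here are arbitrary; -Q keeps the exit status at 0 in case the adjustment is reported as a warning on some xz versions):

```shell
# Ask for -9 (normally ~674 MiB of encoder memory) under a 100 MiB cap:
# xz scales the LZMA2 dictionary down and reports the adjustment on
# standard error, but still produces valid output.
seq 1 50000 | xz -9 -Q --memlimit-compress=100MiB > capped.xz

# A percentage of total RAM also works, which adapts across machines:
seq 1 50000 | xz -Q --memlimit-compress=50% > capped2.xz
```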
--memlimit-decompress=limit
    Set a memory usage limit for decompression. This also affects the --list mode. If the operation is not possible without exceeding the limit, xz will display an error and decompressing the file will fail. See --memlimit-compress=limit for possible ways to specify the limit.

--memlimit-mt-decompress=limit
    Set a memory usage limit for multi-threaded decompression. This can only affect the number of threads; it will never make xz refuse to decompress a file. If limit is too low to allow any multi-threading, the limit is ignored and xz will continue in single-threaded mode. Note that if --memlimit-decompress is also used, it will always apply to both single-threaded and multi-threaded modes, and so the effective limit for multi-threading will never be higher than the limit set with --memlimit-decompress.

    In contrast to the other memory usage limit options, --memlimit-mt-decompress=limit has a system-specific default limit. xz --info-memory can be used to see the current value.

    This option and its default value exist because without any limit the threaded decompressor could end up allocating an insane amount of memory with some input files. If the default limit is too low on your system, feel free to increase the limit, but never set it to a value larger than the amount of usable RAM: with appropriate input files xz will attempt to use that amount of memory even with a low number of threads. Running out of memory or swapping will not improve decompression performance.

    See --memlimit-compress=limit for possible ways to specify the limit. Setting limit to 0 resets the limit to the default system-specific value.

-M limit, --memlimit=limit, --memory=limit
    This is equivalent to specifying --memlimit-compress=limit --memlimit-decompress=limit --memlimit-mt-decompress=limit.

--no-adjust
    Display an error and exit if the memory usage limit cannot be met without adjusting settings that affect the compressed output.
That is, this prevents xz from switching the encoder from multi-threaded mode to single-threaded mode and from reducing the LZMA2 dictionary size. Even when this option is used, the number of threads may be reduced to meet the memory usage limit, as that won't affect the compressed output.

Automatic adjusting is always disabled when creating raw streams (--format=raw).

-T threads, --threads=threads
    Specify the number of worker threads to use. Setting threads to the special value 0 makes xz use up to as many threads as the processor(s) on the system support. The actual number of threads can be fewer than threads if the input file is not big enough for threading with the given settings or if using more threads would exceed the memory usage limit.

    The single-threaded and multi-threaded compressors produce different output. The single-threaded compressor will give the smallest file size, but only the output from the multi-threaded compressor can be decompressed using multiple threads. Setting threads to 1 will use the single-threaded mode. Setting threads to any other value, including 0, will use the multi-threaded compressor even if the system supports only one hardware thread. (xz 5.2.x used single-threaded mode in this situation.)

    To use multi-threaded mode with only one thread, set threads to +1. The + prefix has no effect with values other than 1. A memory usage limit can still make xz switch to single-threaded mode unless --no-adjust is used. Support for the + prefix was added in xz 5.4.0.

    If an automatic number of threads has been requested and no memory usage limit has been specified, then a system-specific default soft limit will be used to possibly limit the number of threads. It is a soft limit in the sense that it is ignored if the number of threads becomes one, thus a soft limit will never stop xz from compressing or decompressing. This default soft limit will not make xz switch from multi-threaded mode to single-threaded mode.
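A minimal illustration of threaded use (the output file name is arbitrary):

```shell
# Let xz choose the worker thread count from the available processors.
# The multi-threaded format stores block sizes in the headers, so the
# result can also be decompressed with multiple threads.
seq 1 300000 | xz -T0 > mt.xz

# Threaded decompression of the same file (older xz versions without
# threaded decompression accept -T but decompress with one thread):
xz -dc -T0 mt.xz | tail -n 1
```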
The active limits can be seen with xz --info-memory.

Currently the only threading method is to split the input into blocks and compress them independently from each other. The default block size depends on the compression level and can be overridden with the --block-size=size option.

Threaded decompression only works on files that contain multiple blocks with size information in block headers. All large enough files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't, even if --block-size=size has been used.

The default value for threads is 0. In xz 5.4.x and older the default is 1.

Custom compressor filter chains

A custom filter chain allows specifying the compression settings in detail instead of relying on the settings associated with the presets. When a custom filter chain is specified, preset options (-0 ... -9 and --extreme) earlier on the command line are forgotten. If a preset option is specified after one or more custom filter chain options, the new preset takes effect and the custom filter chain options specified earlier are forgotten.

A filter chain is comparable to piping on the command line. When compressing, the uncompressed input goes to the first filter, whose output goes to the next filter (if any). The output of the last filter gets written to the compressed file. The maximum number of filters in the chain is four, but typically a filter chain has only one or two filters.

Many filters have limitations on where they can be in the filter chain: some filters can work only as the last filter in the chain, some only as a non-last filter, and some work in any position in the chain. Depending on the filter, this limitation is either inherent to the filter design or exists to prevent security issues.

A custom filter chain can be specified in two different ways. The options --filters=filters and --filters1=filters ...
--filters9=filters allow specifying an entire filter chain in one option using the liblzma filter string syntax. Alternatively, a filter chain can be specified by using one or more individual filter options in the order they are wanted in the filter chain. That is, the order of the individual filter options is significant! When decoding raw streams (--format=raw), the filter chain must be specified in the same order as it was specified when compressing.

Any individual filter or preset options specified before the full chain option (--filters=filters) will be forgotten. Individual filters specified after the full chain option will reset the filter chain.

Both the full and individual filter options take filter-specific options as a comma-separated list. Extra commas in options are ignored. Every option has a default value, so specify only those you want to change.

To see the whole filter chain and options, use xz -vv (that is, use --verbose twice). This also works for viewing the filter chain options used by presets.

--filters=filters
    Specify the full filter chain or a preset in a single option. Each filter can be separated by spaces or two dashes (--). filters may need to be quoted on the shell command line so it is parsed as a single option. To denote options, use : or =. A preset can be prefixed with a - and followed by zero or more flags. The only supported flag is e to apply the same options as --extreme.

--filters1=filters ... --filters9=filters
    Specify up to nine additional filter chains that can be used with --block-list.

    For example, when compressing an archive with executable files followed by text files, the executable part could use a filter chain with a BCJ filter and the text part only the LZMA2 filter.

--filters-help
    Display a help message describing how to specify presets and custom filter chains in the --filters and --filters1=filters ... --filters9=filters options, and exit successfully.
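As a sketch, the two styles can express the same chain; the copied binary is just convenient sample machine code, and the file names are arbitrary:

```shell
# Copy some native machine code to work on (any executable will do).
cp "$(command -v ls)" exe

# Individual filter options build the chain in order: the x86 BCJ
# filter first, then LZMA2 as the required last filter.
xz -kf --x86 --lzma2=preset=6,dict=16MiB exe

# On xz versions that support --filters, the equivalent single-option
# form uses the liblzma string syntax (quoted as one shell word):
#   xz -kf --filters='x86 lzma2:preset=6,dict=16MiB' exe

xz --list exe.xz
```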
--lzma1[=options]
--lzma2[=options]
    Add the LZMA1 or LZMA2 filter to the filter chain. These filters can be used only as the last filter in the chain.

    LZMA1 is a legacy filter, which is supported almost solely due to the legacy .lzma file format, which supports only LZMA1. LZMA2 is an updated version of LZMA1 that fixes some practical issues of LZMA1. The .xz format uses LZMA2 and doesn't support LZMA1 at all. Compression speed and ratios of LZMA1 and LZMA2 are practically the same.

    LZMA1 and LZMA2 share the same set of options:

    preset=preset
        Reset all LZMA1 or LZMA2 options to preset. A preset consists of an integer, which may be followed by single-letter preset modifiers. The integer can be from 0 to 9, matching the command line options -0 ... -9. The only supported modifier is currently e, which matches --extreme. If no preset is specified, the default values of the LZMA1 or LZMA2 options are taken from the preset 6.

    dict=size
        Dictionary (history buffer) size indicates how many bytes of the recently processed uncompressed data are kept in memory. The algorithm tries to find repeating byte sequences (matches) in the uncompressed data and replace them with references to the data currently in the dictionary. The bigger the dictionary, the higher the chance to find a match. Thus, increasing the dictionary size usually improves the compression ratio, but a dictionary bigger than the uncompressed file is a waste of memory.

        Typical dictionary size is from 64 KiB to 64 MiB. The minimum is 4 KiB. The maximum for compression is currently 1.5 GiB (1536 MiB). The decompressor already supports dictionaries up to one byte less than 4 GiB, which is the maximum for the LZMA1 and LZMA2 stream formats.

        Dictionary size and the match finder (mf) together determine the memory usage of the LZMA1 or LZMA2 encoder. The same (or bigger) dictionary size is required for decompressing as was used when compressing, thus the memory usage of the decoder is determined by the dictionary size used when compressing.
The .xz headers store the dictionary size either as 2^n or 2^n + 2^(n-1), so these sizes are somewhat preferred for compression. Other sizes will get rounded up when stored in the .xz headers.

lc=lc
    Specify the number of literal context bits. The minimum is 0 and the maximum is 4; the default is 3. In addition, the sum of lc and lp must not exceed 4.

    All bytes that cannot be encoded as matches are encoded as literals. That is, literals are simply 8-bit bytes that are encoded one at a time.

    The literal coding makes an assumption that the highest lc bits of the previous uncompressed byte correlate with the next byte. For example, in typical English text, an upper-case letter is often followed by a lower-case letter, and a lower-case letter is usually followed by another lower-case letter. In the US-ASCII character set, the highest three bits are 010 for upper-case letters and 011 for lower-case letters. When lc is at least 3, the literal coding can take advantage of this property in the uncompressed data.

    The default value (3) is usually good. If you want maximum compression, test lc=4. Sometimes it helps a little, and sometimes it makes compression worse. If it makes it worse, test lc=2 too.

lp=lp
    Specify the number of literal position bits. The minimum is 0 and the maximum is 4; the default is 0. Lp affects what kind of alignment in the uncompressed data is assumed when encoding literals. See pb below for more information about alignment.

pb=pb
    Specify the number of position bits. The minimum is 0 and the maximum is 4; the default is 2. Pb affects what kind of alignment in the uncompressed data is assumed in general. The default means four-byte alignment (2^pb=2^2=4), which is often a good choice when there's no better guess.

    When the alignment is known, setting pb accordingly may reduce the file size a little. For example, with text files having one-byte alignment (US-ASCII, ISO-8859-*, UTF-8), setting pb=0 can improve compression slightly.
For UTF-16 text, pb=1 is a good choice. If the alignment is an odd number like 3 bytes, pb=0 might be the best choice.

Even though the assumed alignment can be adjusted with pb and lp, LZMA1 and LZMA2 still slightly favor 16-byte alignment. It might be worth taking into account when designing file formats that are likely to be often compressed with LZMA1 or LZMA2.

mf=mf
    Match finder has a major effect on encoder speed, memory usage, and compression ratio. Usually Hash Chain match finders are faster than Binary Tree match finders. The default depends on the preset: 0 uses hc3, 1–3 use hc4, and the rest use bt4.

    The following match finders are supported. The memory usage formulas below are rough approximations, which are closest to the reality when dict is a power of two.

    hc3    Hash Chain with 2- and 3-byte hashing
           Minimum value for nice: 3
           Memory usage:
               dict * 7.5 (if dict <= 16 MiB);
               dict * 5.5 + 64 MiB (if dict > 16 MiB)

    hc4    Hash Chain with 2-, 3-, and 4-byte hashing
           Minimum value for nice: 4
           Memory usage:
               dict * 7.5 (if dict <= 32 MiB);
               dict * 6.5 (if dict > 32 MiB)

    bt2    Binary Tree with 2-byte hashing
           Minimum value for nice: 2
           Memory usage: dict * 9.5

    bt3    Binary Tree with 2- and 3-byte hashing
           Minimum value for nice: 3
           Memory usage:
               dict * 11.5 (if dict <= 16 MiB);
               dict * 9.5 + 64 MiB (if dict > 16 MiB)

    bt4    Binary Tree with 2-, 3-, and 4-byte hashing
           Minimum value for nice: 4
           Memory usage:
               dict * 11.5 (if dict <= 32 MiB);
               dict * 10.5 (if dict > 32 MiB)

mode=mode
    Compression mode specifies the method to analyze the data produced by the match finder. Supported modes are fast and normal. The default is fast for presets 0–3 and normal for presets 4–9. Usually fast is used with Hash Chain match finders and normal with Binary Tree match finders. This is also what the presets do.

nice=nice
    Specify what is considered to be a nice length for a match. Once a match of at least nice bytes is found, the algorithm stops looking for possibly better matches.
Nice can be 2–273 bytes. Higher values tend to give a better compression ratio at the expense of speed. The default depends on the preset.

depth=depth
    Specify the maximum search depth in the match finder. The default is the special value of 0, which makes the compressor determine a reasonable depth from mf and nice.

    Reasonable depth for Hash Chains is 4–100 and 16–1000 for Binary Trees. Using very high values for depth can make the encoder extremely slow with some files. Avoid setting the depth over 1000 unless you are prepared to interrupt the compression in case it is taking far too long.

When decoding raw streams (--format=raw), LZMA2 needs only the dictionary size. LZMA1 also needs lc, lp, and pb.

--x86[=options]
--arm[=options]
--armthumb[=options]
--arm64[=options]
--powerpc[=options]
--ia64[=options]
--sparc[=options]
--riscv[=options]
    Add a branch/call/jump (BCJ) filter to the filter chain. These filters can be used only as a non-last filter in the filter chain.

    A BCJ filter converts relative addresses in the machine code to their absolute counterparts. This doesn't change the size of the data, but it increases redundancy, which can help LZMA2 to produce a 0–15 % smaller .xz file. The BCJ filters are always reversible, so using a BCJ filter for the wrong type of data doesn't cause any data loss, although it may make the compression ratio slightly worse. The BCJ filters are very fast and use an insignificant amount of memory.

    These BCJ filters have known problems related to the compression ratio:

    • Some types of files containing executable code (for example, object files, static libraries, and Linux kernel modules) have the addresses in the instructions filled with filler values. These BCJ filters will still do the address conversion, which will make the compression worse with these files.

    • If a BCJ filter is applied on an archive, it is possible that it makes the compression ratio worse than not using a BCJ filter.
For example, if there are similar or even identical executables, then filtering will likely make the files less similar and thus compression is worse. The contents of non-executable files in the same archive can matter too. In practice one has to try with and without a BCJ filter to see which is better in each situation.

Different instruction sets have different alignment: the executable file must be aligned to a multiple of this value in the input data to make the filter work.

    Filter      Alignment   Notes
    x86         1           32-bit or 64-bit x86
    ARM         4
    ARM-Thumb   2
    ARM64       4           4096-byte alignment is best
    PowerPC     4           Big endian only
    IA-64       16          Itanium
    SPARC       4
    RISC-V      2

Since the BCJ-filtered data is usually compressed with LZMA2, the compression ratio may be improved slightly if the LZMA2 options are set to match the alignment of the selected BCJ filter. Examples:

• The IA-64 filter has 16-byte alignment, so pb=4,lp=4,lc=0 is good with LZMA2 (2^4=16).

• RISC-V code has 2-byte or 4-byte alignment depending on whether the file contains 16-bit compressed instructions (the C extension). When 16-bit instructions are used, pb=2,lp=1,lc=3 or pb=1,lp=1,lc=3 is good. When 16-bit instructions aren't present, pb=2,lp=2,lc=2 is the best. readelf -h can be used to check if "RVC" appears on the "Flags" line.

• ARM64 is always 4-byte aligned, so pb=2,lp=2,lc=2 is the best.

• The x86 filter is an exception. It's usually good to stick to LZMA2's defaults (pb=2,lp=0,lc=3) when compressing x86 executables.

All BCJ filters support the same options:

start=offset
    Specify the start offset that is used when converting between relative and absolute addresses. The offset must be a multiple of the alignment of the filter (see the table above). The default is zero. In practice, the default is good; specifying a custom offset is almost never useful.

--delta[=options]
    Add the Delta filter to the filter chain. The Delta filter can only be used as a non-last filter in the filter chain.
Currently only simple byte-wise delta calculation is supported. It can be useful when compressing, for example, uncompressed bitmap images or uncompressed PCM audio. However, special purpose algorithms may give significantly better results than Delta + LZMA2. This is true especially with audio, which compresses faster and better, for example, with flac(1).

Supported options:

dist=distance
    Specify the distance of the delta calculation in bytes. distance must be 1–256. The default is 1.

    For example, with dist=2 and eight-byte input A1 B1 A2 B3 A3 B5 A4 B7, the output will be A1 B1 01 02 01 02 01 02.

Other options

-q, --quiet
    Suppress warnings and notices. Specify this twice to suppress errors too. This option has no effect on the exit status. That is, even if a warning was suppressed, the exit status to indicate a warning is still used.

-v, --verbose
    Be verbose. If standard error is connected to a terminal, xz will display a progress indicator. Specifying --verbose twice will give even more verbose output.

    The progress indicator shows the following information:

    • Completion percentage is shown if the size of the input file is known. That is, the percentage cannot be shown in pipes.

    • Amount of compressed data produced (compressing) or consumed (decompressing).

    • Amount of uncompressed data consumed (compressing) or produced (decompressing).

    • Compression ratio, which is calculated by dividing the amount of compressed data processed so far by the amount of uncompressed data processed so far.

    • Compression or decompression speed. This is measured as the amount of uncompressed data consumed (compression) or produced (decompression) per second. It is shown after a few seconds have passed since xz started processing the file.

    • Elapsed time in the format M:SS or H:MM:SS.

    • Estimated remaining time is shown only when the size of the input file is known and a couple of seconds have already passed since xz started processing the file.
The time is shown in a less precise format which never has any colons, for example, 2 min 30 s.

When standard error is not a terminal, --verbose will make xz print the filename, compressed size, uncompressed size, compression ratio, and possibly also the speed and elapsed time on a single line to standard error after compressing or decompressing the file. The speed and elapsed time are included only when the operation took at least a few seconds. If the operation didn't finish, for example, due to user interruption, the completion percentage is also printed if the size of the input file is known.

-Q, --no-warn
    Don't set the exit status to 2 even if a condition worth a warning was detected. This option doesn't affect the verbosity level, thus both --quiet and --no-warn have to be used to not display warnings and to not alter the exit status.

--robot
    Print messages in a machine-parsable format. This is intended to ease writing frontends that want to use xz instead of liblzma, which may be the case with various scripts. The output with this option enabled is meant to be stable across xz releases. See the section ROBOT MODE for details.

--info-memory
    Display, in human-readable format, how much physical memory (RAM) and how many processor threads xz thinks the system has and the memory usage limits for compression and decompression, and exit successfully.

-h, --help
    Display a help message describing the most commonly used options, and exit successfully.

-H, --long-help
    Display a help message describing all features of xz, and exit successfully.

-V, --version
    Display the version number of xz and liblzma in human-readable format. To get machine-parsable output, specify --robot before --version.

ROBOT MODE

The robot mode is activated with the --robot option. It makes the output of xz easier to parse by other programs. Currently --robot is supported only together with --list, --filters-help, --info-memory, and --version.
It will be supported for compression and decompression in the future.

List mode

xz --robot --list uses tab-separated output. The first column of every line has a string that indicates the type of the information found on that line:

name
    This is always the first line when starting to list a file. The second column on the line is the filename.

file
    This line contains overall information about the .xz file. This line is always printed after the name line.

stream
    This line type is used only when --verbose was specified. There are as many stream lines as there are streams in the .xz file.

block
    This line type is used only when --verbose was specified. There are as many block lines as there are blocks in the .xz file. The block lines are shown after all the stream lines; different line types are not interleaved.

summary
    This line type is used only when --verbose was specified twice. This line is printed after all block lines. Like the file line, the summary line contains overall information about the .xz file.

totals
    This line is always the very last line of the list output. It shows the total counts and sizes.

The columns of the file lines:
    2. Number of streams in the file
    3. Total number of blocks in the stream(s)
    4. Compressed size of the file
    5. Uncompressed size of the file
    6. Compression ratio, for example, 0.123. If the ratio is over 9.999, three dashes (---) are displayed instead of the ratio.
    7. Comma-separated list of integrity check names. The following strings are used for the known check types: None, CRC32, CRC64, and SHA-256. For unknown check types, Unknown-N is used, where N is the Check ID as a decimal number (one or two digits).
    8. Total size of stream padding in the file

The columns of the stream lines:
    2. Stream number (the first stream is 1)
    3. Number of blocks in the stream
    4. Compressed start offset
    5. Uncompressed start offset
    6. Compressed size (does not include stream padding)
    7. Uncompressed size
    8. Compression ratio
    9. Name of the integrity check
    10. Size of stream padding

The columns of the block lines:
    2. Number of the stream containing this block
    3. Block number relative to the beginning of the stream (the first block is 1)
    4. Block number relative to the beginning of the file
    5. Compressed start offset relative to the beginning of the file
    6. Uncompressed start offset relative to the beginning of the file
    7. Total compressed size of the block (includes headers)
    8. Uncompressed size
    9. Compression ratio
    10. Name of the integrity check

If --verbose was specified twice, additional columns are included on the block lines. These are not displayed with a single --verbose, because getting this information requires many seeks and can thus be slow:

    11. Value of the integrity check in hexadecimal
    12. Block header size
    13. Block flags: c indicates that compressed size is present, and u indicates that uncompressed size is present. If a flag is not set, a dash (-) is shown instead to keep the string length fixed. New flags may be added to the end of the string in the future.
    14. Size of the actual compressed data in the block (this excludes the block header, block padding, and check fields)
    15. Amount of memory (in bytes) required to decompress this block with this xz version
    16. Filter chain. Note that most of the options used at compression time cannot be known, because only the options that are needed for decompression are stored in the .xz headers.

The columns of the summary lines:
    2. Amount of memory (in bytes) required to decompress this file with this xz version
    3. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them

    Since xz 5.1.2alpha:
    4. Minimum xz version required to decompress the file

The columns of the totals line:
    2. Number of streams
    3. Number of blocks
    4. Compressed size
    5. Uncompressed size
    6. Average compression ratio
    7. Comma-separated list of integrity check names that were present in the files
    8. Stream padding size
    9. Number of files. This is here to keep the order of the earlier columns the same as on file lines.

If --verbose was specified twice, additional columns are included on the totals line:

    10. Maximum amount of memory (in bytes) required to decompress the files with this xz version
    11. yes or no indicating if all block headers have both compressed size and uncompressed size stored in them

    Since xz 5.1.2alpha:
    12. Minimum xz version required to decompress the file

Future versions may add new line types and new columns can be added to the existing line types, but the existing columns won't be changed.

Filters help

xz --robot --filters-help prints the supported filters in the following format:

    filter:option=<value>,option=<value>...

filter
    Name of the filter

option
    Name of a filter specific option

value
    Numeric value ranges appear as <min-max>. String value choices are shown within < > and separated by a | character.

Each filter is printed on its own line.

Memory limit information

xz --robot --info-memory prints a single line with multiple tab-separated columns:

    1. Total amount of physical memory (RAM) in bytes.
    2. Memory usage limit for compression in bytes (--memlimit-compress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.
    3. Memory usage limit for decompression in bytes (--memlimit-decompress). A special value of 0 indicates the default setting which for single-threaded mode is the same as no limit.
    4. Since xz 5.3.4alpha: Memory usage for multi-threaded decompression in bytes (--memlimit-mt-decompress). This is never zero because a system-specific default value shown in column 5 is used if no limit has been specified explicitly. This is also never greater than the value in column 3 even if a larger value has been specified with --memlimit-mt-decompress.
    5. Since xz 5.3.4alpha: A system-specific default memory usage limit that is used to limit the number of threads when compressing with an automatic number of threads (--threads=0) and no memory usage limit has been specified (--memlimit-compress). This is also used as the default value for --memlimit-mt-decompress.
    6. Since xz 5.3.4alpha: Number of available processor threads.

In the future, the output of xz --robot --info-memory may have more columns, but never more than a single line.

Version

xz --robot --version prints the version number of xz and liblzma in the following format:

    XZ_VERSION=XYYYZZZS
    LIBLZMA_VERSION=XYYYZZZS

X       Major version.
YYY     Minor version. Even numbers are stable. Odd numbers are alpha or beta versions.
ZZZ     Patch level for stable releases or just a counter for development releases.
S       Stability. 0 is alpha, 1 is beta, and 2 is stable. S should always be 2 when YYY is even.

XYYYZZZS are the same on both lines if xz and liblzma are from the same XZ Utils release. Examples: 4.999.9beta is 49990091 and 5.0.0 is 50000002.

EXIT STATUS

0       All is good.
1       An error occurred.
2       Something worth a warning occurred, but no actual errors occurred.

Notices (not warnings or errors) printed on standard error don't affect the exit status.

ENVIRONMENT

xz parses space-separated lists of options from the environment variables XZ_DEFAULTS and XZ_OPT, in this order, before parsing the options from the command line. Note that only options are parsed from the environment variables; all non-options are silently ignored. Parsing is done with getopt_long(3), which is also used for the command line arguments.

XZ_DEFAULTS
    User-specific or system-wide default options. Typically this is set in a shell initialization script to enable xz's memory usage limiter by default. Excluding shell initialization scripts and similar special cases, scripts must never set or unset XZ_DEFAULTS.
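A typical sketch for a shell initialization script (the limit values are only examples):

```shell
# Enable the memory usage limiter for every xz run from this shell.
# Percentages adapt automatically to machines with different RAM sizes.
export XZ_DEFAULTS="--memlimit-compress=50% --memlimit-decompress=80%"

# Later invocations pick the limits up without any extra flags:
echo hello | xz | xz -d    # prints "hello"
```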
XZ_OPT This is for passing options to xz when it is not possible to set the options directly on the xz command line. This is the case when xz is run by a script or tool, for example, GNU tar(1):

        XZ_OPT=-2v tar caf foo.tar.xz foo

    Scripts may use XZ_OPT, for example, to set script-specific default compression options. It is still recommended to allow users to override XZ_OPT if that is reasonable. For example, in sh(1) scripts one may use something like this:

        XZ_OPT=${XZ_OPT-"-7e"}
        export XZ_OPT

LZMA UTILS COMPATIBILITY
    The command line syntax of xz is practically a superset of lzma, unlzma, and lzcat as found from LZMA Utils 4.32.x. In most cases, it is possible to replace LZMA Utils with XZ Utils without breaking existing scripts. There are some incompatibilities though, which may sometimes cause problems.

Compression preset levels
    The numbering of the compression level presets is not identical in xz and LZMA Utils. The most important difference is how dictionary sizes are mapped to different presets. Dictionary size is roughly equal to the decompressor memory usage.

        Level   xz          LZMA Utils
        -0      256 KiB     N/A
        -1      1 MiB       64 KiB
        -2      2 MiB       1 MiB
        -3      4 MiB       512 KiB
        -4      4 MiB       1 MiB
        -5      8 MiB       2 MiB
        -6      8 MiB       4 MiB
        -7      16 MiB      8 MiB
        -8      32 MiB      16 MiB
        -9      64 MiB      32 MiB

    The dictionary size differences affect the compressor memory usage too, but there are some other differences between LZMA Utils and XZ Utils, which make the difference even bigger:

        Level   xz          LZMA Utils 4.32.x
        -0      3 MiB       N/A
        -1      9 MiB       2 MiB
        -2      17 MiB      12 MiB
        -3      32 MiB      12 MiB
        -4      48 MiB      16 MiB
        -5      94 MiB      26 MiB
        -6      94 MiB      45 MiB
        -7      186 MiB     83 MiB
        -8      370 MiB     159 MiB
        -9      674 MiB     311 MiB

    The default preset level in LZMA Utils is -7 while in XZ Utils it is -6, so both use an 8 MiB dictionary by default.

Streamed vs. non-streamed .lzma files
    The uncompressed size of the file can be stored in the .lzma header. LZMA Utils does that when compressing regular files.
    The alternative is to mark the uncompressed size as unknown and use an end-of-payload marker to indicate where the decompressor should stop. LZMA Utils uses this method when the uncompressed size isn't known, which is the case, for example, in pipes.

    xz supports decompressing .lzma files with or without an end-of-payload marker, but all .lzma files created by xz will use the end-of-payload marker and have the uncompressed size marked as unknown in the .lzma header. This may be a problem in some uncommon situations. For example, a .lzma decompressor in an embedded device might work only with files that have a known uncompressed size. If you hit this problem, you need to use LZMA Utils or LZMA SDK to create .lzma files with a known uncompressed size.

Unsupported .lzma files
    The .lzma format allows lc values up to 8, and lp values up to 4. LZMA Utils can decompress files with any lc and lp, but always creates files with lc=3 and lp=0. Creating files with other lc and lp is possible with xz and with LZMA SDK.

    The implementation of the LZMA1 filter in liblzma requires that the sum of lc and lp must not exceed 4. Thus, .lzma files that exceed this limitation cannot be decompressed with xz.

    LZMA Utils creates only .lzma files which have a dictionary size of 2^n (a power of 2) but accepts files with any dictionary size. liblzma accepts only .lzma files which have a dictionary size of 2^n or 2^n + 2^(n-1). This is to decrease false positives when detecting .lzma files. These limitations shouldn't be a problem in practice, since practically all .lzma files have been compressed with settings that liblzma will accept.

Trailing garbage
    When decompressing, LZMA Utils silently ignore everything after the first .lzma stream. In most situations, this is a bug. This also means that LZMA Utils don't support decompressing concatenated .lzma files.

    If there is data left after the first .lzma stream, xz considers the file to be corrupt unless --single-stream was used.
This may break obscure scripts which have assumed that trailing garbage is ignored. NOTES Compressed output may vary The exact compressed output produced from the same uncompressed input file may vary between XZ Utils versions even if compression options are identical. This is because the encoder can be improved (faster or better compression) without affecting the file format. The output can vary even between different builds of the same XZ Utils version, if different build options are used. The above means that once --rsyncable has been implemented, the resulting files won't necessarily be rsyncable unless both old and new files have been compressed with the same xz version. This problem can be fixed if a part of the encoder implementation is frozen to keep rsyncable output stable across xz versions. Embedded .xz decompressors Embedded .xz decompressor implementations like XZ Embedded don't necessarily support files created with integrity check types other than none and crc32. Since the default is --check=crc64, you must use --check=none or --check=crc32 when creating files for embedded systems. Outside embedded systems, all .xz format decompressors support all the check types, or at least are able to decompress the file without verifying the integrity check if the particular check is not supported. XZ Embedded supports BCJ filters, but only with the default start offset.
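The XYYYZZZS integers printed by xz --robot --version (described under the Version heading above) can be unpacked with plain sh(1) arithmetic. A small sketch using the documented example value for xz 5.0.0:

```shell
# Decode an XZ_VERSION/LIBLZMA_VERSION integer (format XYYYZZZS).
v=50000002                       # documented encoding of xz 5.0.0
major=$((v / 10000000))          # X
minor=$((v / 10000 % 1000))      # YYY
patch=$((v / 10 % 1000))         # ZZZ
stability=$((v % 10))            # S: 0=alpha, 1=beta, 2=stable
echo "$major.$minor.$patch stability=$stability"   # prints: 5.0.0 stability=2
```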
EXAMPLES
Basics
    Compress the file foo into foo.xz using the default compression level (-6), and remove foo if compression is successful:

        xz foo

    Decompress bar.xz into bar and don't remove bar.xz even if decompression is successful:

        xz -dk bar.xz

    Create baz.tar.xz with the preset -4e (-4 --extreme), which is slower than the default -6, but needs less memory for compression and decompression (48 MiB and 5 MiB, respectively):

        tar cf - baz | xz -4e > baz.tar.xz

    A mix of compressed and uncompressed files can be decompressed to standard output with a single command:

        xz -dcf a.txt b.txt.xz c.txt d.txt.lzma > abcd.txt

Parallel compression of many files
    On GNU and *BSD, find(1) and xargs(1) can be used to parallelize compression of many files:

        find . -type f \! -name '*.xz' -print0 \
            | xargs -0r -P4 -n16 xz -T1

    The -P option to xargs(1) sets the number of parallel xz processes. The best value for the -n option depends on how many files there are to be compressed. If there are only a couple of files, the value should probably be 1; with tens of thousands of files, 100 or even more may be appropriate to reduce the number of xz processes that xargs(1) will eventually create.

    The option -T1 for xz is there to force it to single-threaded mode, because xargs(1) is used to control the amount of parallelization.

Robot mode
    Calculate how many bytes have been saved in total after compressing multiple files:

        xz --robot --list *.xz | awk '/^totals/{print $5-$4}'

    A script may want to know that it is using new enough xz. The following sh(1) script checks that the version number of the xz tool is at least 5.0.0. This method is compatible with old beta versions, which didn't support the --robot option:

        if ! eval "$(xz --robot --version 2> /dev/null)" ||
                [ "$XZ_VERSION" -lt 50000002 ]; then
            echo "Your xz is too old."
        fi
        unset XZ_VERSION LIBLZMA_VERSION

    Set a memory usage limit for decompression using XZ_OPT, but if a limit has already been set, don't increase it:

        NEWLIM=$((123 << 20))    # 123 MiB
        OLDLIM=$(xz --robot --info-memory | cut -f3)
        if [ $OLDLIM -eq 0 -o $OLDLIM -gt $NEWLIM ]; then
            XZ_OPT="$XZ_OPT --memlimit-decompress=$NEWLIM"
            export XZ_OPT
        fi

Custom compressor filter chains
    The simplest use for custom filter chains is customizing an LZMA2 preset. This can be useful, because the presets cover only a subset of the potentially useful combinations of compression settings.

    The CompCPU columns of the tables from the descriptions of the options -0 ... -9 and --extreme are useful when customizing LZMA2 presets. Here are the relevant parts collected from those two tables:

        Preset   CompCPU
        -0       0
        -1       1
        -2       2
        -3       3
        -4       4
        -5       5
        -6       6
        -5e      7
        -6e      8

    If you know that a file requires a somewhat big dictionary (for example, 32 MiB) to compress well, but you want to compress it more quickly than xz -8 would, a preset with a low CompCPU value (for example, 1) can be modified to use a bigger dictionary:

        xz --lzma2=preset=1,dict=32MiB foo.tar

    With certain files, the above command may be faster than xz -6 while compressing significantly better. However, it must be emphasized that only some files benefit from a big dictionary while keeping the CompCPU value low. The most obvious situation, where a big dictionary can help a lot, is an archive containing very similar files of at least a few megabytes each. The dictionary size has to be significantly bigger than any individual file to allow LZMA2 to take full advantage of the similarities between consecutive files.
If very high compressor and decompressor memory usage is fine, and the file being compressed is at least several hundred megabytes, it may be useful to use an even bigger dictionary than the 64 MiB that xz -9 would use: xz -vv --lzma2=dict=192MiB big_foo.tar Using -vv (--verbose --verbose) like in the above example can be useful to see the memory requirements of the compressor and decompressor. Remember that using a dictionary bigger than the size of the uncompressed file is waste of memory, so the above command isn't useful for small files. Sometimes the compression time doesn't matter, but the decompressor memory usage has to be kept low, for example, to make it possible to decompress the file on an embedded system. The following command uses -6e (-6 --extreme) as a base and sets the dictionary to only 64 KiB. The resulting file can be decompressed with XZ Embedded (that's why there is --check=crc32) using about 100 KiB of memory. xz --check=crc32 --lzma2=preset=6e,dict=64KiB foo If you want to squeeze out as many bytes as possible, adjusting the number of literal context bits (lc) and number of position bits (pb) can sometimes help. Adjusting the number of literal position bits (lp) might help too, but usually lc and pb are more important. For example, a source code archive contains mostly US-ASCII text, so something like the following might give slightly (like 0.1 %) smaller file than xz -6e (try also without lc=4): xz --lzma2=preset=6e,pb=0,lc=4 source_code.tar Using another filter together with LZMA2 can improve compression with certain file types. For example, to compress a x86-32 or x86-64 shared library using the x86 BCJ filter: xz --x86 --lzma2 libfoo.so Note that the order of the filter options is significant. If --x86 is specified after --lzma2, xz will give an error, because there cannot be any filter after LZMA2, and also because the x86 BCJ filter cannot be used as the last filter in the chain. 
The Delta filter together with LZMA2 can give good results with bitmap images. It should usually beat PNG, which has a few more advanced filters than simple delta but uses Deflate for the actual compression. The image has to be saved in uncompressed format, for example, as uncompressed TIFF. The distance parameter of the Delta filter is set to match the number of bytes per pixel in the image. For example, 24-bit RGB bitmap needs dist=3, and it is also good to pass pb=0 to LZMA2 to accommodate the three-byte alignment: xz --delta=dist=3 --lzma2=pb=0 foo.tiff If multiple images have been put into a single archive (for example, .tar), the Delta filter will work on that too as long as all images have the same number of bytes per pixel. SEE ALSO xzdec(1), xzdiff(1), xzgrep(1), xzless(1), xzmore(1), gzip(1), bzip2(1), 7z(1) XZ Utils: <https://tukaani.org/xz/> XZ Embedded: <https://tukaani.org/xz/embedded.html> LZMA SDK: <https://7-zip.org/sdk.html> Tukaani 2024-04-08 XZ(1)
bunzip2
bzip2 compresses files using the Burrows-Wheeler block sorting text compression algorithm, and Huffman coding. Compression is generally considerably better than that achieved by more conventional LZ77/LZ78-based compressors, and approaches the performance of the PPM family of statistical compressors.

The command-line options are deliberately very similar to those of GNU gzip, but they are not identical.

bzip2 expects a list of file names to accompany the command-line flags. Each file is replaced by a compressed version of itself, with the name "original_name.bz2". Each compressed file has the same modification date, permissions, and, when possible, ownership as the corresponding original, so that these properties can be correctly restored at decompression time. File name handling is naive in the sense that there is no mechanism for preserving original file names, permissions, ownerships or dates in filesystems which lack these concepts, or have serious file name length restrictions, such as MS-DOS.

bzip2 and bunzip2 will by default not overwrite existing files. If you want this to happen, specify the -f flag.

If no file names are specified, bzip2 compresses from standard input to standard output. In this case, bzip2 will decline to write compressed output to a terminal, as this would be entirely incomprehensible and therefore pointless.

bunzip2 (or bzip2 -d) decompresses all specified files. Files which were not created by bzip2 will be detected and ignored, and a warning issued. bzip2 attempts to guess the filename for the decompressed file from that of the compressed file as follows:

    filename.bz2    becomes   filename
    filename.bz     becomes   filename
    filename.tbz2   becomes   filename.tar
    filename.tbz    becomes   filename.tar
    anyothername    becomes   anyothername.out

If the file does not end in one of the recognised endings, .bz2, .bz, .tbz2 or .tbz, bzip2 complains that it cannot guess the name of the original file, and uses the original name with .out appended.
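The renaming rules above can be sketched as a small sh(1) case statement (guess is a hypothetical helper, not part of bzip2; note that .tbz2 must be matched before .bz2):

```shell
# Sketch of bzip2's name-guessing rules for decompression output.
guess() {
    case "$1" in
        *.tbz2) echo "${1%.tbz2}.tar" ;;   # filename.tbz2 -> filename.tar
        *.tbz)  echo "${1%.tbz}.tar" ;;    # filename.tbz  -> filename.tar
        *.bz2)  echo "${1%.bz2}" ;;        # filename.bz2  -> filename
        *.bz)   echo "${1%.bz}" ;;         # filename.bz   -> filename
        *)      echo "$1.out" ;;           # anything else -> name.out
    esac
}
guess archive.tbz2   # prints: archive.tar
guess notes.bz2      # prints: notes
guess mystery        # prints: mystery.out
```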
As with compression, supplying no filenames causes decompression from standard input to standard output.

bunzip2 will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing (-t) of concatenated compressed files is also supported.

You can also compress or decompress files to the standard output by giving the -c flag. Multiple files may be compressed and decompressed like this. The resulting outputs are fed sequentially to stdout. Compression of multiple files in this manner generates a stream containing multiple compressed file representations. Such a stream can be decompressed correctly only by bzip2 version 0.9.0 or later. Earlier versions of bzip2 will stop after decompressing the first file in the stream.

bzcat (or bzip2 -dc) decompresses all specified files to the standard output.

bzip2 will read arguments from the environment variables BZIP2 and BZIP, in that order, and will process them before any arguments read from the command line. This gives a convenient way to supply default arguments.

Compression is always performed, even if the compressed file is slightly larger than the original. Files of less than about one hundred bytes tend to get larger, since the compression mechanism has a constant overhead in the region of 50 bytes. Random data (including the output of most file compressors) is coded at about 8.05 bits per byte, giving an expansion of around 0.5%.

As a self-check for your protection, bzip2 uses 32-bit CRCs to make sure that the decompressed version of a file is identical to the original. This guards against corruption of the compressed data, and against undetected bugs in bzip2 (hopefully very unlikely). The chances of data corruption going undetected are microscopic, about one chance in four billion for each file processed.
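The "one chance in four billion" figure is simply the size of the 32-bit CRC space: a random corruption slips through only if the damaged data happens to produce the same checksum, with probability 1 in 2^32:

```shell
# 32-bit CRC: 2^32 possible checksum values, i.e. about four billion.
echo $((1 << 32))    # prints: 4294967296
```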
Be aware, though, that the check occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data. You can use bzip2recover to try to recover data from damaged files. Return values: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, &c), 2 to indicate a corrupt compressed file, 3 for an internal consistency error (eg, bug) which caused bzip2 to panic.
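A calling script can dispatch on these return values. A minimal sketch, with the status hard-coded where a real script would use $? after running bzip2:

```shell
# Map bzip2's documented exit codes to messages.
status=2    # stand-in for $? from a real bzip2 invocation
case $status in
    0) msg="normal exit" ;;
    1) msg="environmental problem" ;;
    2) msg="corrupt compressed file" ;;
    3) msg="internal consistency error" ;;
esac
echo "$msg"    # prints: corrupt compressed file
```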
bzip2, bunzip2 - a block-sorting file compressor, v1.0.8 bzcat - decompresses files to stdout bzip2recover - recovers data from damaged bzip2 files
bzip2 [ -cdfkqstvzVL123456789 ] [ filenames ... ] bunzip2 [ -fkvsVL ] [ filenames ... ] bzcat [ -s ] [ filenames ... ] bzip2recover filename
-c --stdout Compress or decompress to standard output. -d --decompress Force decompression. bzip2, bunzip2 and bzcat are really the same program, and the decision about what actions to take is done on the basis of which name is used. This flag overrides that mechanism, and forces bzip2 to decompress. -z --compress The complement to -d: forces compression, regardless of the invocation name. -t --test Check integrity of the specified file(s), but don't decompress them. This really performs a trial decompression and throws away the result. -f --force Force overwrite of output files. Normally, bzip2 will not overwrite existing output files. Also forces bzip2 to break hard links to files, which it otherwise wouldn't do. bzip2 normally declines to decompress files which don't have the correct magic header bytes. If forced (-f), however, it will pass such files through unmodified. This is how GNU gzip behaves. -k --keep Keep (don't delete) input files during compression or decompression. -s --small Reduce memory usage, for compression, decompression and testing. Files are decompressed and tested using a modified algorithm which only requires 2.5 bytes per block byte. This means any file can be decompressed in 2300k of memory, albeit at about half the normal speed. During compression, -s selects a block size of 200k, which limits memory use to around the same figure, at the expense of your compression ratio. In short, if your machine is low on memory (8 megabytes or less), use -s for everything. See MEMORY MANAGEMENT below. -q --quiet Suppress non-essential warning messages. Messages pertaining to I/O errors and other critical events will not be suppressed. -v --verbose Verbose mode -- show the compression ratio for each file processed. Further -v's increase the verbosity level, spewing out lots of information which is primarily of interest for diagnostic purposes. -L --license -V --version Display the software version, license terms and conditions. 
-1 (or --fast) to -9 (or --best) Set the block size to 100 k, 200 k .. 900 k when compressing. Has no effect when decompressing. See MEMORY MANAGEMENT below. The --fast and --best aliases are primarily for GNU gzip compatibility. In particular, --fast doesn't make things significantly faster. And --best merely selects the default behaviour. -- Treats all subsequent arguments as file names, even if they start with a dash. This is so you can handle files with names beginning with a dash, for example: bzip2 -- -myfilename. --repetitive-fast --repetitive-best These flags are redundant in versions 0.9.5 and above. They provided some coarse control over the behaviour of the sorting algorithm in earlier versions, which was sometimes useful. 0.9.5 and above have an improved algorithm which renders these flags irrelevant. MEMORY MANAGEMENT bzip2 compresses large files in blocks. The block size affects both the compression ratio achieved, and the amount of memory needed for compression and decompression. The flags -1 through -9 specify the block size to be 100,000 bytes through 900,000 bytes (the default) respectively. At decompression time, the block size used for compression is read from the header of the compressed file, and bunzip2 then allocates itself just enough memory to decompress the file. Since block sizes are stored in compressed files, it follows that the flags -1 to -9 are irrelevant to and so ignored during decompression. Compression and decompression requirements, in bytes, can be estimated as: Compression: 400k + ( 8 x block size ) Decompression: 100k + ( 4 x block size ), or 100k + ( 2.5 x block size ) Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size. 
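The estimation formulas above can be evaluated directly with shell arithmetic; for the default -9 block size the results match the figures quoted in this section:

```shell
# Estimate bzip2 memory use (in kbytes) from the block size flag (-1 .. -9).
flag=9
block=$((flag * 100))                             # block size in kbytes
echo "compress:      $((400 + 8 * block))k"       # prints: compress:      7600k
echo "decompress:    $((100 + 4 * block))k"       # prints: decompress:    3700k
echo "decompress -s: $((100 + 5 * block / 2))k"   # 2.5 bytes/byte -> 2350k
```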
For files compressed with the default 900k block size, bunzip2 will require about 3700 kbytes to decompress. To support decompression of any file on a 4 megabyte machine, bunzip2 has an option to decompress using approximately half this amount of memory, about 2300 kbytes. Decompression speed is also halved, so you should use this option only where necessary. The relevant flag is -s.

In general, try and use the largest block size memory constraints allow, since that maximises the compression achieved. Compression and decompression speed are virtually unaffected by block size.

Another significant point applies to files which fit in a single block -- that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag -9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.

Here is a table which summarises the maximum memory usage for different block sizes. Also recorded is the total compressed size for 14 files of the Calgary Text Compression Corpus totalling 3,141,622 bytes. This column gives some feel for how compression varies with block size. These figures tend to understate the advantage of larger block sizes for larger files, since the Corpus is dominated by smaller files.

            Compress   Decompress   Decompress   Corpus
    Flag    usage      usage        -s usage     Size

     -1     1200k       500k         350k        914704
     -2     2000k       900k         600k        877703
     -3     2800k      1300k         850k        860338
     -4     3600k      1700k        1100k        846899
     -5     4400k      2100k        1350k        845160
     -6     5200k      2500k        1600k        838626
     -7     6100k      2900k        1850k        834096
     -8     6800k      3300k        2100k        828642
     -9     7600k      3700k        2350k        828642

RECOVERING DATA FROM DAMAGED FILES
    bzip2 compresses files in blocks, usually 900kbytes long. Each block is handled independently.
If a media or transmission error causes a multi-block .bz2 file to become damaged, it may be possible to recover data from the undamaged blocks in the file. The compressed representation of each block is delimited by a 48-bit pattern, which makes it possible to find the block boundaries with reasonable certainty. Each block also carries its own 32-bit CRC, so damaged blocks can be distinguished from undamaged ones. bzip2recover is a simple program whose purpose is to search for blocks in .bz2 files, and write each block out into its own .bz2 file. You can then use bzip2 -t to test the integrity of the resulting files, and decompress those which are undamaged. bzip2recover takes a single argument, the name of the damaged file, and writes a number of files "rec00001file.bz2", "rec00002file.bz2", etc, containing the extracted blocks. The output filenames are designed so that the use of wildcards in subsequent processing -- for example, "bzip2 -dc rec*file.bz2 > recovered_data" -- processes the files in the correct order. bzip2recover should be of most use dealing with large .bz2 files, as these will contain many blocks. It is clearly futile to use it on damaged single-block files, since a damaged block cannot be recovered. If you wish to minimise any potential data loss through media or transmission errors, you might consider compressing with a smaller block size. PERFORMANCE NOTES The sorting phase of compression gathers together similar strings in the file. Because of this, files containing very long runs of repeated symbols, like "aabaabaabaab ..." (repeated several hundred times) may compress more slowly than normal. Versions 0.9.5 and above fare much better than previous versions in this respect. The ratio between worst-case and average-case compression time is in the region of 10:1. For previous versions, this figure was more like 100:1. You can use the -vvvv option to monitor progress in great detail, if you want. 
Decompression speed is unaffected by these phenomena. bzip2 usually allocates several megabytes of memory to operate in, and then charges all over it in a fairly random fashion. This means that performance, both for compressing and decompressing, is largely determined by the speed at which your machine can service cache misses. Because of this, small changes to the code to reduce the miss rate have been observed to give disproportionately large performance improvements. I imagine bzip2 will perform best on machines with very large caches. CAVEATS I/O error messages are not as helpful as they could be. bzip2 tries hard to detect I/O errors and exit cleanly, but the details of what the problem is sometimes seem rather misleading. This manual page pertains to version 1.0.8 of bzip2. Compressed data created by this version is entirely forwards and backwards compatible with the previous public releases, versions 0.1pl2, 0.9.0, 0.9.5, 1.0.0, 1.0.1, 1.0.2 and above, but with the following exception: 0.9.0 and above can correctly decompress multiple concatenated compressed files. 0.1pl2 cannot do this; it will stop after decompressing just the first file in the stream. bzip2recover versions prior to 1.0.2 used 32-bit integers to represent bit positions in compressed files, so they could not handle compressed files more than 512 megabytes long. Versions 1.0.2 and above use 64-bit ints on some platforms which support them (GNU supported targets, and Windows). To establish whether or not bzip2recover was built with such a limitation, run it without arguments. In any event you can build yourself an unlimited version if you can recompile it with MaybeUInt64 set to be an unsigned 64-bit integer. AUTHOR Julian Seward, jseward@acm.org. 
https://sourceware.org/bzip2/ The ideas embodied in bzip2 are due to (at least) the following people: Michael Burrows and David Wheeler (for the block sorting transformation), David Wheeler (again, for the Huffman coder), Peter Fenwick (for the structured coding model in the original bzip, and many refinements), and Alistair Moffat, Radford Neal and Ian Witten (for the arithmetic coder in the original bzip). I am much indebted for their help, support and advice. See the manual in the source distribution for pointers to sources of documentation. Christian von Roques encouraged me to look for faster sorting algorithms, so as to speed up compression. Bela Lubkin encouraged me to improve the worst-case compression performance. Donna Robinson XMLised the documentation. The bz* scripts are derived from those of GNU gzip. Many people sent patches, helped with portability problems, lent machines, gave advice and were generally helpful. bzip2(1)
krb5-config
krb5-config tells the application programmer what flags to use to compile and link programs against the installed Kerberos libraries.
krb5-config - tool for linking against MIT Kerberos libraries
krb5-config [--help | --all | --version | --vendor | --prefix | --exec-prefix | --defccname | --defktname | --defcktname | --cflags | --libs [libraries]]
--help prints a usage message. This is the default behavior when no options are specified. --all prints the version, vendor, prefix, and exec-prefix. --version prints the version number of the Kerberos installation. --vendor prints the name of the vendor of the Kerberos installation. --prefix prints the prefix for which the Kerberos installation was built. --exec-prefix prints the prefix for executables for which the Kerberos installation was built. --defccname prints the built-in default credentials cache location. --defktname prints the built-in default keytab location. --defcktname prints the built-in default client (initiator) keytab location. --cflags prints the compilation flags used to build the Kerberos installation. --libs [library] prints the compiler options needed to link against library. Allowed values for library are: ┌────────────┬──────────────────────────┐ │krb5 │ Kerberos 5 applications │ │ │ (default) │ ├────────────┼──────────────────────────┤ │gssapi │ GSSAPI applications with │ │ │ Kerberos 5 bindings │ ├────────────┼──────────────────────────┤ │kadm-client │ Kadmin client │ ├────────────┼──────────────────────────┤ │kadm-server │ Kadmin server │ ├────────────┼──────────────────────────┤ │kdb │ Applications that access │ │ │ the Kerberos database │ └────────────┴──────────────────────────┘
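A typical use is command substitution in a compile line; a sketch, where hello.c and the choice of the gssapi library are illustrative:

```shell
# Compile and link a hypothetical GSSAPI program against the installed
# Kerberos libraries (hello.c is a placeholder source file).
cc $(krb5-config --cflags) -o hello hello.c $(krb5-config --libs gssapi)
```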
krb5-config is particularly useful for compiling against a Kerberos installation that was installed in a non-standard location. For example, a Kerberos installation that is installed in /opt/krb5/ but uses libraries in /usr/local/lib/ for text localization would produce the following output: shell% krb5-config --libs krb5 -L/opt/krb5/lib -Wl,-rpath -Wl,/opt/krb5/lib -L/usr/local/lib -lkrb5 -lk5crypto -lcom_err SEE ALSO kerberos(7), cc(1) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KRB5-CONFIG(1)
mysqldumpslow
The MySQL slow query log contains information about queries that take a long time to execute (see Section 5.4.5, “The Slow Query Log”). mysqldumpslow parses MySQL slow query log files and summarizes their contents.

Normally, mysqldumpslow groups queries that are similar except for the particular values of number and string data values. It “abstracts” these values to N and 'S' when displaying summary output. To modify value abstracting behavior, use the -a and -n options.

Invoke mysqldumpslow like this:

    mysqldumpslow [options] [log_file ...]

Example output with no options given:

    Reading mysql slow query log from /usr/local/mysql/data/mysqld83-slow.log
    Count: 1  Time=4.32s (4s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@localhost
      insert into t2 select * from t1
    Count: 3  Time=2.53s (7s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@localhost
      insert into t2 select * from t1 limit N
    Count: 3  Time=2.13s (6s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@localhost
      insert into t1 select * from t1

mysqldumpslow supports the following options.

• --help

  ┌────────────────────┬────────┐
  │Command-Line Format │ --help │
  └────────────────────┴────────┘

  Display a help message and exit.

• -a

  Do not abstract all numbers to N and strings to 'S'.

• --debug, -d

  ┌────────────────────┬─────────┐
  │Command-Line Format │ --debug │
  └────────────────────┴─────────┘

  Run in debug mode. This option is available only if MySQL was built using WITH_DEBUG. MySQL release binaries provided by Oracle are not built using this option.

• -g pattern

  ┌─────┬────────┐
  │Type │ String │
  └─────┴────────┘

  Consider only queries that match the (grep-style) pattern.

• -h host_name

  ┌──────────────┬────────┐
  │Type          │ String │
  ├──────────────┼────────┤
  │Default Value │ *      │
  └──────────────┴────────┘

  Host name of MySQL server for *-slow.log file name. The value can contain a wildcard. The default is * (match all).
• -i name ┌─────┬────────┐ │Type │ String │ └─────┴────────┘ Name of server instance (if using mysql.server startup script). • -l Do not subtract lock time from total time. • -n N ┌─────┬─────────┐ │Type │ Numeric │ └─────┴─────────┘ Abstract numbers with at least N digits within names. • -r Reverse the sort order. • -s sort_type ┌──────────────┬────────┐ │Type │ String │ ├──────────────┼────────┤ │Default Value │ at │ └──────────────┴────────┘ How to sort the output. The value of sort_type should be chosen from the following list: • t, at: Sort by query time or average query time • l, al: Sort by lock time or average lock time • r, ar: Sort by rows sent or average rows sent • c: Sort by count By default, mysqldumpslow sorts by average query time (equivalent to -s at). • -t N ┌─────┬─────────┐ │Type │ Numeric │ └─────┴─────────┘ Display only the first N queries in the output. • --verbose, -v ┌────────────────────┬───────────┐ │Command-Line Format │ --verbose │ └────────────────────┴───────────┘ Verbose mode. Print more information about what the program does. COPYRIGHT Copyright © 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. 
SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYSQLDUMPSLOW(1)
mysqldumpslow - Summarize slow query log files
mysqldumpslow [options] [log_file ...]
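The value abstraction described above can be approximated in plain shell. This is a sketch, not mysqldumpslow's actual implementation: the sed expressions below replace quoted strings with 'S' and free-standing numbers with N; the sample query is made up.

```shell
# Rough approximation of mysqldumpslow's value abstraction (not its real code):
# quoted strings become 'S', free-standing numbers become N.
query="select * from t1 where id = 42 and name = 'alice'"
abstracted=$(printf '%s' "$query" | sed -e "s/'[^']*'/'S'/g" -e 's/ [0-9][0-9]*/ N/g')
echo "$abstracted"
```

Note that the real tool also leaves digits embedded in identifiers (such as t1) alone unless -n is given; this sketch mimics that only crudely, by requiring a leading space before the number.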
mysqld
mysqld, also known as MySQL Server, is a single multithreaded program that does most of the work in a MySQL installation. It does not spawn additional processes. MySQL Server manages access to the MySQL data directory that contains databases and tables. The data directory is also the default location for other information such as log files and status files. Note Some installation packages contain a debugging version of the server named mysqld-debug. Invoke this version instead of mysqld for debugging support, memory allocation checking, and trace file support (see Section 5.9.1.2, “Creating Trace Files”). When MySQL server starts, it listens for network connections from client programs and manages access to databases on behalf of those clients. The mysqld program has many options that can be specified at startup. For a complete list of options, run this command: mysqld --verbose --help MySQL Server also has a set of system variables that affect its operation as it runs. System variables can be set at server startup, and many of them can be changed at runtime to effect dynamic server reconfiguration. MySQL Server also has a set of status variables that provide information about its operation. You can monitor these status variables to access runtime performance characteristics. For a full description of MySQL Server command options, system variables, and status variables, see Section 5.1, “The MySQL Server”. For information about installing MySQL and setting up the initial configuration, see Chapter 2, Installing and Upgrading MySQL. COPYRIGHT Copyright © 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. 
This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYSQLD(8)
mysqld - the MySQL server
mysqld [options]
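As a quick sketch of the inspection commands mentioned above (the variable name in the comment is illustrative):

```shell
# List all supported options and their current default values, then exit
# without serving clients.
mysqld --verbose --help

# Many system variables can also be changed at runtime from a client session:
#   SET GLOBAL max_connections = 200;
```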
freetype-config
freetype-config returns information needed for compiling and linking programs with the FreeType library, such as linker flags and compilation parameters. Alternatively, it can be used to query information about the FreeType library version installed on the system, such as the installation (directory path) prefix or the FreeType version number. If pkg-config(1) is found in the path, freetype-config acts as a wrapper for pkg-config. This program is part of the FreeType package.
freetype-config - Get information about a libfreetype installation
freetype-config [options]
There are two types of options: output/display selection options, and path override options. Output selection options Only one of the output selection options should be given at each program invocation. --prefix Return the prefix value of the installed FreeType library (the default prefix will be `/usr' in most cases for distribution-installed packages). --exec-prefix Return the executable prefix value of the installed FreeType library (will often be the same as the prefix value). --ftversion Return the FreeType version number, directly derived from file `freetype.h'. --version Return the `libtool version' of the FreeType library. --libtool Return the library name for linking with libtool. --libs Return compiler flags for linking with the installed FreeType library. --cflags Return compiler flags for compiling against the installed FreeType library. --static Make command line options display flags for static linking. --help Show help and exit. Path override options These affect any selected output option, except the libtool version returned by --version. --prefix=PREFIX Override --prefix value with PREFIX. This also sets --exec-prefix=PREFIX if option --exec-prefix is not explicitly given. --exec-prefix=EPREFIX Override --exec-prefix value with EPREFIX. BUGS In case the libraries FreeType links to are located in non-standard directories, and pkg-config(1) is not available, the output from option --libs might be incomplete. It is thus recommended to use the pkg-config(1) interface instead, which is able to correctly resolve all dependencies. Setting --exec-prefix (either explicitly or implicitly) might return incorrect results if combined with option --static. The same problem can occur if you set the SYSROOT environment variable. AUTHOR This manual page was contributed by Nis Martensen <nis.martensen@web.de>, with further refinements from the FreeType team. FreeType 2.13.2 August 2023 FREETYPE-CONFIG(1)
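A hedged example of the typical build use described above (`demo.c` is a placeholder source file):

```shell
# Compile and link a program against FreeType using freetype-config.
cc demo.c $(freetype-config --cflags) $(freetype-config --libs) -o demo

# Equivalent pkg-config form, which the man page recommends when available:
cc demo.c $(pkg-config --cflags --libs freetype2) -o demo
```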
ld
The ld command combines several object files and libraries, resolves references, and produces an output file. ld can produce a final linked image (executable, dylib, or bundle), or with the -r option, produce another object file. If the -o option is not used, the output file produced is named "a.out". Universal The linker accepts universal (multiple-architecture) input files, but always creates a "thin" (single-architecture), standard Mach-O output file. The architecture for the output file is specified using the -arch option. If this option is not used, ld attempts to determine the output architecture by examining the object files in command line order. The first "thin" architecture determines that of the output file. If no input object file is a "thin" file, the native 32-bit architecture for the host is used. Usually, ld is not used directly. Instead, the compiler driver invokes ld. The compiler driver can be passed multiple -arch options and it will create a universal final linked image by invoking ld multiple times and then running lipo(1) to merge the outputs into a universal file. Layout The object files are loaded in the order in which they are specified on the command line. The segments and the sections in those segments will appear in the output file in the order they are encountered in the object files being linked. All zero fill sections will appear after all non-zero fill sections in their segments. Libraries A static library (aka static archive) is a collection of .o files with a table of contents that lists the global symbols in the .o files. ld will only pull .o files out of a static library if needed to resolve some symbol reference. Unlike traditional linkers, ld will continually search a static library while linking. There is no need to specify a static library multiple times on the command line. A dynamic library (aka dylib or framework) is a final linked image. 
Putting a dynamic library on the command line causes two things: 1) The generated final linked image will have encoded that it depends on that dynamic library. 2) Exported symbols from the dynamic library are used to resolve references. Both dynamic and static libraries are searched as they appear on the command line. Search paths ld maintains a list of directories to search for a library or framework to use. The default library search path is /usr/lib then /usr/local/lib. The -L option will add a new library search path. The default framework search path is /Library/Frameworks then /System/Library/Frameworks. (Note: previously, /Network/Library/Frameworks was at the end of the default path. If you need that functionality, you need to explicitly add -F/Network/Library/Frameworks). The -F option will add a new framework search path. The -Z option will remove the standard search paths. The -syslibroot option will prepend a prefix to all search paths. Two-level namespace By default all references resolved to a dynamic library record the library to which they were resolved. At runtime, dyld uses that information to directly resolve symbols. The alternative is to use the -flat_namespace option. With flat namespace, the library is not recorded. At runtime, dyld will search each dynamic library in load order when resolving symbols. This is slower, but more like how other operating systems resolve symbols. Indirect dynamic libraries If the command line specifies to link against dylib A, and when dylib A was built it linked against dylib B, then B is considered an indirect dylib. When linking for two-level namespace, ld does not look at indirect dylibs, except when re-exported by a direct dylib. On the other hand when linking for flat namespace, ld does load all indirect dylibs and uses them to resolve references. Even though indirect dylibs are specified via a full path, ld first uses the specified search paths to locate each indirect dylib. 
If one cannot be found using the search paths, the full path is used. Dynamic libraries undefines When linking for two-level namespace, ld does not verify that undefines in dylibs actually exist. But when linking for flat namespace, ld does check that all undefines from all loaded dylibs have a matching definition. This is sometimes used to force selected functions to be loaded from a static library.
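As a sketch of the driver-mediated flow described above (file names are illustrative; normally the compiler driver, not the user, invokes ld directly):

```shell
# Build each architecture and let the driver merge them with lipo(1).
clang -arch i386 -arch x86_64 main.o util.o -o tool

# Inspect which architectures ended up in the universal output file:
lipo -info tool
```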
ld – linker
ld files... [options] [-o outputfile]
Options that control the kind of output -execute The default. Produce a mach-o main executable that has file type MH_EXECUTE. -dylib Produce a mach-o shared library that has file type MH_DYLIB. -bundle Produce a mach-o bundle that has file type MH_BUNDLE. -r Merges object files to produce another mach-o object file with file type MH_OBJECT. -dylinker Produce a mach-o dylinker that has file type MH_DYLINKER. Only used when building dyld. -dynamic The default. Implied by -dylib, -bundle, or -execute. -static Produces a mach-o file that does not use dyld. Only used when building the kernel. -preload Produces a mach-o file in which the mach_header, load commands, and symbol table are not in any segment. This output type is used for firmware or embedded development where the segments are copied out of the mach-o into ROM/Flash. -arch arch_name Specifies which architecture (e.g. ppc, ppc64, i386, x86_64) the output file should be. -o path Specifies the name and location of the output file. If not specified, `a.out' is used. Options that control libraries -lx This option tells the linker to search for libx.dylib or libx.a in the library search path. If string x is of the form y.o, then that file is searched for in the same places, but without prepending `lib' or appending `.a' or `.dylib' to the filename. -needed-lx This is the same as the -lx but means to really link with the dylib even if no symbols are used from it. Thus, it can be used to suppress warnings about unused dylibs. -reexport-lx This is the same as the -lx but specifies that all symbols in library x should be available to clients linking to the library being created. This was previously done with a separate -sub_library option. -upward-lx This is the same as the -lx but specifies that the dylib is an upward dependency. -hidden-lx This is the same as the -lx for locating a static library, but treats all global symbols from the static library as if they are visibility hidden. 
Useful when building a dynamic library that uses a static library but does not want to export anything from that static library. -weak-lx This is the same as the -lx but forces the library and all references to it to be marked as weak imports. That is, the library is allowed to be missing at runtime. -needed_library path_to_dylib This is the same as placing path_to_dylib on the link line but means to really link with the dylib even if no symbols are used from it. Thus, it can be used to suppress warnings about unused dylibs. -reexport_library path_to_library This is the same as listing a file name path to a library on the link line and it specifies that all symbols in library path should be available to clients linking to the library being created. This was previously done with a separate -sub_library option. -upward_library path_to_library This is the same as listing a file name path to a library on the link line but also marks the dylib as an upward dependency. -weak_library path_to_library This is the same as listing a file name path to a library on the link line except that it forces the library and all references to it to be marked as weak imports. -Ldir Add dir to the list of directories in which to search for libraries. Directories specified with -L are searched in the order they appear on the command line and before the default search path. In Xcode4 and later, there can be a space between the -L and directory. -Z Do not search the standard directories when searching for libraries and frameworks. -syslibroot rootdir Prepend rootdir to all search paths when searching for libraries or frameworks. -search_paths_first This is now the default (in Xcode4 tools). When processing -lx the linker now searches each directory in its library search paths for `libx.dylib' then `libx.a' before moving on to the next path in the library search path. -search_dylibs_first Changes the searching behavior for libraries. 
The default is that when processing -lx the linker searches each directory in its library search paths for `libx.dylib' then `libx.a'. This option changes the behavior to first search for a file of the form `libx.dylib' in each directory in the library search path, then a file of the form `libx.a' is searched for in the library search paths. This option restores the search behavior of the linker prior to Xcode4. -framework name[,suffix] This option tells the linker to search for `name.framework/name' in the framework search path. If the optional suffix is specified the framework is first searched for the name with the suffix and then without (e.g. look for `name.framework/name_suffix' first, if not there try `name.framework/name'). -needed_framework name[,suffix] This is the same as the -framework name[,suffix] but means to really link with the framework even if no symbols are used from it. Thus, it can be used to suppress warnings about unused dylibs. -weak_framework name[,suffix] This is the same as the -framework name[,suffix] but forces the framework and all references to it to be marked as weak imports. Note: due to clang optimizations, if functions are not marked weak, the compiler will optimize out any checks of whether the function address is NULL. -reexport_framework name[,suffix] This is the same as the -framework name[,suffix] but also specifies that all symbols in that framework should be available to clients linking to the library being created. This was previously done with a separate -sub_umbrella option. -upward_framework name[,suffix] This is the same as the -framework name[,suffix] but also specifies that the framework is an upward dependency. -Fdir Add dir to the list of directories in which to search for frameworks. Directories specified with -F are searched in the order they appear on the command line and before the default search path. In Xcode4 and later, there can be a space between the -F and directory. 
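A hedged example of the search-path options above (directories and library/framework names are illustrative):

```shell
# Search /opt/vendor/lib for libvendor.dylib or libvendor.a, and
# /opt/frameworks for MyKit.framework/MyKit, before the default paths.
ld main.o -o main -lSystem \
   -L/opt/vendor/lib -lvendor \
   -F/opt/frameworks -framework MyKit
```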
-all_load Loads all members of static archive libraries. -ObjC Loads all members of static archive libraries that implement an Objective-C class or category. -force_load path_to_archive Loads all members of the specified static archive library. Note: -all_load forces all members of all archives to be loaded. This option allows you to target a specific archive. -load_hidden path_to_archive Uses the specified static library as usual, but treats all global symbols from the static library as if they are visibility hidden. Useful when building a dynamic library that uses a static library but does not want to export anything from that static library. -image_suffix suffix Search for libraries and frameworks with suffix and then without. Options that control additional content -sectcreate segname sectname file The section sectname in the segment segname is created from the contents of file file. If there's a section (segname,sectname) from any other input, the linker will append the content from the file to that section. -add_empty_section segname sectname Creates an empty section named sectname in the segment segname. If any of the inputs contains a section (segname,sectname), that section will be included in the output, and this option will be ignored. -add_ast_path file The linker will add an N_AST stab symbol to the output file where the string is the path pointed to by file and its value is the modification time of the file. -filelist file[,dirname] Specifies that the linker should link the files listed in file. This is an alternative to listing the files on the command line. The file names are listed one per line separated only by newlines. (Spaces and tabs are assumed to be part of the file name.) If the optional directory name, dirname is specified, it is prepended to each name in the list file. -dtrace file Enables dtrace static probes when producing a final linked image. The file file must be a DTrace script which declares the static probes. 
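The content options above can be sketched as follows (all file names are illustrative, and a real invocation may need additional platform options):

```shell
# Link the objects listed in objects.txt and embed settings.plist as the
# (__DATA,__config) section of the output.
printf '%s\n' main.o util.o > objects.txt
ld -o app -lSystem -filelist objects.txt \
   -sectcreate __DATA __config settings.plist
```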
Options that control optimizations -dead_strip Remove functions and data that are unreachable by the entry point or exported symbols. -order_file file Alters the order in which functions and data are laid out. For each section in the output file, any symbols in that section that are specified in the order file file are moved to the start of its section and laid out in the same order as in the order file file. Order files are text files with one symbol name per line. Lines starting with a # are comments. A symbol name may be optionally preceded with its object file leaf name and a colon (e.g. foo.o:_foo). This is useful for static functions/data that occur in multiple files. A symbol name may also be optionally preceded with the architecture (e.g. ppc:_foo or ppc:foo.o:_foo). This enables you to have one order file that works for multiple architectures. Literal c-strings may be ordered by quoting the string (e.g. "Hello, world\n") in the order file. -no_order_inits When the -order_file option is not used, the linker lays out functions in object file order and it moves all initializer routines to the start of the __text section and terminator routines to the end. Use this option to disable the automatic rearrangement of initializers and terminators. -platform_version platform min_version sdk_version This is set to indicate the platform, the oldest supported version of that platform that the output is to be used on, and the SDK that the output was built against. platform is a numeric value as defined in <mach-o/loader.h>, or it may be one of the following strings: • macos • ios • tvos • watchos • bridgeos • visionos • xros • mac-catalyst • ios-simulator • tvos-simulator • watchos-simulator • visionos-simulator • xros-simulator • driverkit Specifying a newer min or SDK version enables the linker to assume features of that OS or SDK in the output file. 
The format of min_version and sdk_version is a version number such as 10.13 or 10.14. -macos_version_min version This is set to indicate the oldest macOS version that the output is to be used on. Specifying a later version enables the linker to assume features of that OS in the output file. The format of version is a macOS version number such as 10.9 or 10.14. -ios_version_min version This is set to indicate the oldest iOS version that the output is to be used on. Specifying a later version enables the linker to assume features of that OS in the output file. The format of version is an iOS version number such as 3.1 or 4.0. -image_base address Specifies the preferred load address for a dylib or bundle. The argument address is a hexadecimal number with an optional leading 0x. By choosing non-overlapping addresses for all dylibs and bundles that a program loads, launch time can be improved because dyld will not need to "rebase" the image (that is, adjust pointers within the image to work at the loaded address). It is often easier to not use this option, but instead use the rebase(1) tool, and give it a list of dylibs. It will then choose non-overlapping addresses for the list and rebase them all. When building a position independent executable, this option will be ignored. This option is also called -seg1addr for compatibility. -no_implicit_dylibs When creating a two-level namespace final linked image, normally the linker will hoist up public dylibs that are implicitly linked to make the two-level namespace encoding more efficient for dyld. For example, Cocoa re-exports AppKit and AppKit re-exports Foundation. If you link with -framework Cocoa and use a symbol from Foundation, the linker will implicitly add a load command to load Foundation and encode the symbol as coming from Foundation. If you use this option, the linker will not add a load command for Foundation and encode the symbol as coming from Cocoa. 
Then at runtime dyld will have to search Cocoa and AppKit before finding the symbol in Foundation. -no_zero_fill_sections By default the linker moves all zero fill sections to the end of the __DATA segment and configures them to use no space on disk. This option suppresses that optimization, so zero-filled data occupies space on disk in a final linked image. -merge_zero_fill_sections Causes all zero-fill sections in the __DATA segment to be merged into one __zerofill section. -no_branch_islands Disables linker creation of branch islands which allows images to be created that are larger than the maximum branch distance. Useful with -preload when code is in multiple sections but all are within the branch range. -O0 Disables certain optimizations and layout algorithms to optimize build time. This option should be used with debug builds to speed up incremental development. The exact implementation might change to match the intent. -reproducible By default output content will be deterministic, but small changes in input files such as a compilation time might affect certain data structures in the linked binary. This option instructs ld to create a reproducible output binary by ignoring certain input properties or using alternative algorithms. Options when creating a dynamic library (dylib) -install_name name Sets an internal "install path" (LC_ID_DYLIB) in a dynamic library. Any clients linked against the library will record that path as the way dyld should locate this library. If this option is not specified, then the -o path will be used. This option is also called -dylib_install_name for compatibility. -compatibility_version number Specifies the compatibility version number of the library. When a library is loaded by dyld, the compatibility version is checked and if the program's version is greater that the library's version, it is an error. 
The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If the compatibility version number is not specified, it has a value of 0 and no checking is done when the library is used. This option is also called -dylib_compatibility_version for compatibility. -current_version number Specifies the current version number of the library. The current version of the library can be obtained programmatically by the user of the library so it can determine exactly which version of the library it is using. The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If the version number is not specified, it has a value of 0. This option is also called -dylib_current_version for compatibility. Options when creating a main executable -pie This makes a special kind of main executable that is position independent (PIE). On Mac OS X 10.5 and later, the OS will load a PIE at a random address each time it is executed. You cannot create a PIE from .o files compiled with -mdynamic-no-pic. That means the codegen is less optimal, but the address randomization adds some security. When targeting Mac OS X 10.7 or later, PIE is the default for main executables. -no_pie Do not make a position independent executable (PIE). This is the default when targeting 10.6 and earlier. -pagezero_size size By default the linker creates an unreadable segment starting at address zero named __PAGEZERO. Its existence will cause a bus error if a NULL pointer is dereferenced. The argument size is a hexadecimal number with an optional leading 0x. If size is zero, the linker will not generate a page zero segment. By default on 32-bit architectures the page zero size is 4KB. 
On 64-bit architectures, the default size is 4GB. -stack_size size Specifies the maximum stack size for the main thread in a program. Without this option a program has an 8MB stack. The argument size is a hexadecimal number with an optional leading 0x. The size should be a multiple of the architecture's page size (4KB or 16KB). -allow_stack_execute Marks the executable so that all stacks in the task will be given stack execution privilege. This includes pthread stacks. This option is only valid when targeting architectures that support stack execution (i.e. Intel). -export_dynamic Preserves all global symbols in main executables during LTO. Without this option, Link Time Optimization is allowed to inline and remove global functions. This option is used when a main executable may load a plug-in which requires certain symbols from the main executable. Options when creating a bundle -bundle_loader executable This specifies the executable that will be loading the bundle output file being linked. Undefined symbols from the bundle are checked against the specified executable as if it were one of the dynamic libraries the bundle was linked with. Options when creating an object file -keep_private_externs Don't turn private external (aka visibility=hidden) symbols into static symbols, but rather leave them as private external in the resulting object file. -d Force definition of common symbols. That is, transform tentative definitions into real definitions. Options that control symbol resolution -exported_symbols_list filename The specified filename contains a list of global symbol names that will remain as global symbols in the output file. All other global symbols will be treated as if they were marked as __private_extern__ (aka visibility=hidden) and will not be global in the output file. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. 
Lines starting with # are ignored, as are lines with only white space. Some wildcards (similar to shell file matching) are supported. The * matches zero or more characters. The ? matches one character. [abc] matches one character which must be an 'a', 'b', or 'c'. [a-z] matches any single lower case letter from 'a' to 'z'. -exported_symbol symbol The specified symbol is added to the list of global symbol names that will remain as global symbols in the output file. This option can be used multiple times. For short lists, this can be more convenient than creating a file and using -exported_symbols_list. -no_exported_symbols Useful for main executables that don't have plugins and thus need no symbol exports. -unexported_symbols_list file The specified filename contains a list of global symbol names that will not remain as global symbols in the output file. The symbols will be treated as if they were marked as __private_extern__ (aka visibility=hidden) and will not be global in the output file. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. Some wildcards (similar to shell file matching) are supported. The * matches zero or more characters. The ? matches one character. [abc] matches one character which must be an 'a', 'b', or 'c'. [a-z] matches any single lower case letter from 'a' to 'z'. -unexported_symbol symbol The specified symbol is added to the list of global symbol names that will not remain as global symbols in the output file. This option can be used multiple times. For short lists, this can be more convenient than creating a file and using -unexported_symbols_list. -reexported_symbols_list file The specified filename contains a list of symbol names that are implemented in a dependent dylib and should be re-exported through the dylib being created. 
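The wildcard matching used by -exported_symbols_list resembles shell globbing, as noted above. A small, self-contained sketch (the symbol names are made up) of which globals a pattern such as _api_* would keep exported:

```shell
# Simulate an export list containing the single pattern `_api_*`:
# names matching the pattern stay global, everything else is hidden.
kept=""
for sym in _api_init _api_free _internal_helper; do
  case "$sym" in
    _api_*) kept="$kept $sym" ;;
  esac
done
echo "exported:$kept"
```

The real option reads patterns from a file, one per line, e.g. `echo '_api_*' > exports.txt` followed by `ld ... -exported_symbols_list exports.txt`.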
-alias symbol_name alternate_symbol_name Create an alias named alternate_symbol_name for the symbol symbol_name. By default the alias symbol has global visibility. This option was previously the -idef:indir option. -alias_list filename The specified filename contains a list of aliases. The symbol name and its alias are on one line, separated by whitespace. Lines starting with # are ignored. -flat_namespace Alters how symbols are resolved at build time and runtime. With -two_levelnamespace (the default), the linker only searches dylibs on the command line for symbols, and records in which dylib they were found. With -flat_namespace, the linker searches all dylibs on the command line and all dylibs those original dylibs depend on. The linker does not record which dylib an external symbol came from, so at runtime dyld again searches all images and uses the first definition it finds. In addition, any undefines in loaded flat_namespace dylibs must be resolvable at build time. -u symbol_name Specifies that symbol symbol_name must be defined for the link to succeed. This is useful to force selected functions to be loaded from a static library. -U symbol_name Specifies that it is ok for symbol_name to have no definition. With -two_levelnamespace, the resulting symbol will be marked dynamic_lookup which means dyld will search all loaded images. -undefined treatment Specifies how undefined symbols are to be treated. Options are: error, warning, suppress, or dynamic_lookup. The default is error. Note: dynamic_lookup that depends on lazy binding will not work with chained fixups. -rpath path Add path to the runpath search path list for the image being created. At runtime, dyld uses the runpath when searching for dylibs whose load path begins with @rpath/. -commons treatment Specifies how commons (aka tentative definitions) are resolved with respect to dylibs. Options are: ignore_dylibs, error. 
The default is ignore_dylibs which means the linker will turn a tentative definition in an object file into a real definition and not even check dylibs for conflicts. The error option means the linker should issue an error whenever a tentative definition in an object file conflicts with an external symbol in a linked dylib. See also -warn_commons. Options for introspecting the linker -why_load Log why each object file in a static library is loaded. That is, what symbol was needed. Also called -whyload for compatibility. -why_live symbol_name Logs a chain of references to symbol_name. Only applicable with -dead_strip. It can help debug why something that you think should be removed by dead stripping is not removed. See -exported_symbols_list for syntax and use of wildcards. -print_statistics Logs information about the amount of memory and time the linker used. -t Logs each file (object, archive, or dylib) the linker loads. Useful for debugging problems with search paths where the wrong library is loaded. -order_file_statistics Logs information about the processing of a -order_file. -map map_file_path Writes a map file to the specified path which details all symbols and their addresses in the output image. Options for controlling symbol table optimizations -S Do not put debug information (STABS or DWARF) in the output file. -x Do not put non-global symbols in the output file's symbol table. Non-global symbols are useful when debugging and getting symbol names in back traces, but are not used at runtime. If -x is used with -r, non-global symbol names are not removed, but instead replaced with a unique, dummy name that will be automatically removed when linked into a final linked image. This allows dead code stripping, which uses symbols to break up code and data, to work properly and provides the security of having source symbol names removed. 
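A hedged sketch combining the symbol-table options above with the map-file introspection option, driven through clang (file names are illustrative):

```shell
# Strip debug info (-S) and non-global symbols (-x) at link time, and write
# a map file showing where every remaining symbol landed in the output.
clang -o demo main.o util.o -Wl,-S -Wl,-x -Wl,-map,demo.map
```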
-non_global_symbols_strip_list filename
The specified filename contains a list of non-global symbol names that should be removed from the output file's symbol table. All other non-global symbol names will remain in the output file's symbol table. See -exported_symbols_list for syntax and use of wildcards.

-non_global_symbols_no_strip_list filename
The specified filename contains a list of non-global symbol names that should remain in the output file's symbol table. All other symbol names will be removed from the output file's symbol table. See -exported_symbols_list for syntax and use of wildcards.

-oso_prefix prefix-path
When generating the debug map, the linker will remove the specified prefix-path from the paths in OSO symbols. This can be used to help build servers generate identical binaries. If '.' is passed as the argument, the linker will expand it to the current working directory.

Options for Bitcode build flow

-bitcode_bundle
Generates an embedded bitcode bundle in the output binary. The bitcode bundle is embedded in the __LLVM,__bundle section. This option requires that all the object files, static libraries, and user frameworks/dylibs contain bitcode. Note: not all linker options are supported in combination with -bitcode_bundle.

-bitcode_hide_symbols
Use this option together with -bitcode_bundle to hide all non-exported symbols from the output bitcode bundle. The symbol hiding process might not be reversible. To obtain a reverse mapping file to recover all the symbols, use the -bitcode_symbol_map option.

-bitcode_symbol_map path
Specifies the output for the bitcode symbol reverse mapping (.bcsymbolmap). If path is an existing directory, UUID.bcsymbolmap will be written to that directory. Otherwise, the reverse map will be written to a file at path.

Rarely used Options

@response_file_path
Inserts the contents of the file at response_file_path into the arguments. This allows linker command line arguments to be stored in a file.
Note: ld is normally invoked through clang, and clang also interprets @file on the command line. To have clang ignore the @file and pass it through to ld, use -Wl,@file.

-v
Prints the version of the linker.

-adhoc_codesign
Directs the linker to add an ad-hoc code signature to the output file. The default for Apple Silicon binaries is to be ad-hoc code signed.

-no_adhoc_codesign
Directs the linker to not add an ad-hoc code signature to the output file, even for Apple Silicon binaries.

-data_const
By default the linker moves some data sections into __DATA_CONST if it knows the target OS version supports that. This option overrides the default behavior and forces the use of __DATA_CONST.

-no_data_const
By default the linker moves some data sections into __DATA_CONST if it knows the target OS version supports that. This option overrides the default behavior and forces the linker to never move sections to __DATA_CONST.

-const_selrefs
By default the linker moves the __objc_selrefs section into __DATA_CONST if it knows the target OS version supports that. This option overrides the default behavior and forces __objc_selrefs to be placed in __DATA_CONST. Note this only applies if the __DATA_CONST segment is enabled. See -data_const for more information.

-no_const_selrefs
By default the linker moves the __objc_selrefs section into __DATA_CONST if it knows the target OS version supports that. This option overrides the default behavior and keeps the __objc_selrefs section in __DATA.

-version_details
Prints version information about the linker in JSON format.

-no_weak_imports
Error if any symbols are weak imports (i.e. allowed to be unresolved (NULL) at runtime). Useful for configuration-based projects that assume they are built and run on the same OS version.

-no_deduplicate
Do not run the deduplication pass in the linker.

-verbose_deduplicate
Prints the names of functions that are eliminated by deduplication and the total code savings.
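The response-file mechanism above can be sketched as follows (the file path and arguments are hypothetical; the actual link commands are shown only as comments since they require a real project):

```shell
# Store linker arguments, one per line, in a response file.
printf '%s\n' '-dead_strip' '-lz' '-L/tmp/libs' > /tmp/link.rsp

# ld would read them directly:
#   ld main.o @/tmp/link.rsp -o main
# When driving the link through clang, escape the @file so clang
# passes it through to ld instead of expanding it itself:
#   clang main.o -Wl,@/tmp/link.rsp -o main
cat /tmp/link.rsp
```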
-no_inits
Error if the output contains any static initializers.

-no_warn_inits
Do not warn if the output contains any static initializers.

-warn_duplicate_libraries
Warn if the input contains duplicate library options.

-no_warn_duplicate_libraries
Do not warn if the input contains duplicate library options.

-debug_variant
Do not warn about issues that are only problems for binaries shipping to customers.

-unaligned_pointers treatment
Specifies how unaligned pointers in __DATA segments should be handled. Options are: 'warning', 'error', or 'suppress'. The default for arm64e is 'error' and for all other architectures it is 'suppress'.

-dirty_data_list filename
Specifies a file containing the names of data symbols likely to be dirtied. If the linker is creating a __DATA_DIRTY segment, those symbols will be moved to that segment.

-max_default_common_align value
Any common symbols (aka tentative definitions, or uninitialized (zeroed) variables) that have no explicit alignment are normally aligned to the next power of two of their size (e.g. a 240-byte array is aligned to 256 bytes). This option lets you reduce the maximum alignment. For instance, a value of 0x40 would reduce the alignment for a 240-byte array to 64 bytes (instead of 256). The value specified must be a hexadecimal power of two. If -max_default_common_align is not used, the default alignment is already limited to 0x8 (2^3) bytes for -preload and 0x8000 (2^15) for all other output types.

-move_to_rw_segment segment_name filename
Moves data symbols to another segment. The command line option specifies the target segment name and a path to a file containing a list of symbols to move. Comments can be added to the symbol file by starting a line with a #. If there are multiple instances of a symbol name (for instance a "static int foo=5;" in multiple files), the symbol name in the symbol list file can be prefixed with the object file name (e.g. "init.o:_foo") to move a specific instance.
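The common-symbol alignment rule above can be sketched with a little shell arithmetic (this is an illustration of the arithmetic, not linker code): the alignment is the next power of two of the symbol's size, clamped to the -max_default_common_align value.

```shell
# Smallest power of two >= the given size.
next_pow2() { p=1; while [ "$p" -lt "$1" ]; do p=$((p * 2)); done; echo "$p"; }

# Alignment actually used: next power of two, clamped to the max.
common_align() { a=$(next_pow2 "$1"); [ "$a" -gt "$2" ] && a=$2; echo "$a"; }

common_align 240 $((0x8000))  # default max for non-preload: prints 256
common_align 240 $((0x40))    # with -max_default_common_align 0x40: prints 64
```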
-move_to_ro_segment segment_name filename
Moves code symbols to another segment. The command line option specifies the target segment name and a path to a file containing a list of symbols to move. Comments can be added to the symbol file by starting a line with a #. If there are multiple instances of a symbol name (for instance a "static int foo() {}" in multiple files), the symbol name in the symbol list file can be prefixed with the object file name (e.g. "init.o:_foo") to move a specific instance.

-rename_section orgSegment orgSection newSegment newSection
Renames section orgSegment/orgSection to newSegment/newSection.

-rename_segment orgSegment newSegment
Renames all sections with the orgSegment segment name to have the newSegment segment name.

-trace_symbol_layout
For use in debugging -rename_section, -rename_segment, -move_to_ro_segment, and -move_to_rw_segment. This option prints a line showing where and why each symbol was moved. Note: these options chain. For each symbol, the linker first checks -move_to_ro_segment and -move_to_rw_segment. Next it applies any -rename_section options, and lastly any -rename_segment options.

-section_order segname colon_separated_section_list
Only for use with -preload. Specifies the order in which sections within the specified segment should be laid out. For example: "-section_order __ROM __text:__const:__cstring".

-segment_order colon_separated_segment_list
Only for use with -preload. Specifies the order in which segments should be laid out. For example: "-segment_order __ROM:__ROM2:__RAM".

-allow_heap_execute
Normally i386 main executables will be marked so that the Mac OS X 10.7 and later kernel will only allow pages with the x-bit to execute instructions. This option overrides that behavior and allows instructions on any page to be executed.

-application_extension
Specifies that the code is being linked for use in an application extension.
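A sketch of the symbol list file format used by -move_to_rw_segment and -move_to_ro_segment (the segment name, file path, and symbols are hypothetical):

```shell
# Hypothetical list for:  ld ... -move_to_rw_segment __MYDATA /tmp/move_syms.txt
cat > /tmp/move_syms.txt <<'EOF'
# globals to relocate into the target segment
_cache_buffer
# only the instance of _foo defined in init.o:
init.o:_foo
EOF
cat /tmp/move_syms.txt
```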
The linker will then validate that any dynamic libraries linked against are safe for use in application extensions.

-no_application_extension
Specifies that the code being linked is not safe for use in an application extension. For instance, this can be used when creating a framework that should not be used in an application extension.

-fatal_warnings
Causes the linker to exit with a non-zero value if any warnings were emitted.

-no_eh_labels
Normally in -r mode, the linker produces .eh labels on all FDEs in the __eh_frame section. This option suppresses those labels. Those labels are not needed by the Mac OS X 10.6 linker but are needed by earlier linker tools.

-warn_compact_unwind
When producing a final linked image, the linker processes the __eh_frame section and produces an __unwind_info section. Most FDE entries in the __eh_frame section can be represented by a 32-bit value in the __unwind_info section. This option issues a warning for any function whose FDE cannot be expressed in the compact unwind format.

-warn_weak_exports
Issue a warning if the resulting final linked image contains weak external symbols. Such symbols require dyld to do extra work at launch time to coalesce those symbols.

-no_weak_exports
Issue an error if the resulting final linked image contains weak external symbols. Such symbols require dyld to do extra work at launch time to coalesce those symbols.

-warn_unused_dylibs
Warn about dylibs that are linked against but from which no symbols are used.

-no_warn_unused_dylibs
Do not warn about dylibs that are linked against but from which no symbols are used.

-dead_strip_dylibs
Remove dylibs that are unreachable by the entry point or exported symbols. That is, suppresses the generation of load commands for dylibs which supplied no symbols during the link. This option should not be used when linking against a dylib which is required at runtime for some indirect reason, such as the dylib having an important initializer.
-allow_sub_type_mismatches
Normally the linker considers different cpu-subtypes for ARM (e.g. armv4t and armv6) to be different architectures that cannot be mixed at build time. This option relaxes that requirement, allowing you to mix object files compiled for different ARM subtypes.

-no_uuid
Do not generate an LC_UUID load command in the output file. Be warned that binaries without UUIDs may cause the debugger and crash reporting tools to be unable to track and inspect the binary.

-random_uuid
Generate a random LC_UUID load command in the output file. By default the linker generates the UUID of the output file based on a hash of the output file's content. But for very large output files, the hash can slow down the link. Using a hash-based UUID is important for reproducible builds, but if you are just doing rapid debug builds, using -random_uuid may improve turnaround time.

-root_safe
Sets the MH_ROOT_SAFE bit in the mach header of the output file.

-setuid_safe
Sets the MH_SETUID_SAFE bit in the mach header of the output file.

-interposable
Indirects access to all exported symbols when creating a dynamic library.

-init symbol_name
The specified symbol_name will be run as the first initializer. Only used when creating a dynamic library.

-sub_library library_name
The specified dylib will be re-exported. For example, the library_name for /usr/lib/libobjc_profile.A.dylib would be libobjc. Only used when creating a dynamic library.

-sub_umbrella framework_name
The specified framework will be re-exported. Only used when creating a dynamic library.

-allowable_client name
Restricts what can link against the dynamic library being created. By default any code can link against any dylib. But if a dylib is supposed to be private to a small set of clients, you can formalize that by adding an -allowable_client for each client. If a client is libfoo.1.dylib, its -allowable_client name would be "foo".
If a client is Foo.framework, its -allowable_client name would be "Foo". For the degenerate case where you want no one to ever link against a dylib, you can set the -allowable_client to "!".

-client_name name
Enables a bundle to link against a dylib that was built with -allowable_client. The name specified must match one of the -allowable_client names specified when the dylib was created.

-umbrella framework_name
Specifies that the dylib being linked is re-exported through an umbrella framework of the specified name.

-headerpad size
Specifies the minimum space for future expansion of the load commands. Only useful if you intend to run install_name_tool to alter the load commands later. Size is a hexadecimal number.

-headerpad_max_install_names
Automatically adds space for future expansion of load commands such that all paths could expand to MAXPATHLEN. Only useful if you intend to run install_name_tool to alter the load commands later.

-bind_at_load
Sets a bit in the mach header of the resulting binary which tells dyld to bind all symbols when the binary is loaded, rather than lazily.

-force_flat_namespace
Sets a bit in the mach header of the resulting binary which tells dyld to not only use flat namespace for the binary, but to force flat namespace binding on all dylibs and bundles loaded in the process. Can only be used when linking main executables.

-sectalign segname sectname value
The section named sectname in the segment segname will have its alignment set to value, where value is a hexadecimal number that must be an integral power of 2.

-stack_addr address
Specifies the initial address of the stack pointer, where address is a hexadecimal number rounded to a page boundary.

-segprot segname max_prot init_prot
Specifies the maximum and initial virtual memory protection of the named segment to be max_prot and init_prot, respectively. The values for max_prot and init_prot are any combination of the characters `r' (for read), `w' (for write), `x' (for execute) and `-' (no access).
-seg_addr_table filename
Specifies a file containing base addresses for dynamic libraries. Each line of the file is a hexadecimal base address followed by whitespace and then the install name of the corresponding dylib. The # character denotes a comment.

-segs_read_write_addr address
Allows a dynamic library to be built where the read-only and read-write segments are not contiguous. The address specified is a hexadecimal number that indicates the base address for the read-write segments.

-segs_read_only_addr address
Allows a dynamic library to be built where the read-only and read-write segments are not contiguous. The address specified is a hexadecimal number that indicates the base address for the read-only segments.

-segaddr name address
Specifies the starting address of the segment named name to be address. The address must be a hexadecimal number that is a multiple of the 4K page size.

-seg_page_size name size
Specifies the page size used by the specified segment. By default the page size is 4096 for all segments. The linker will lay out segments such that the size of a segment is always an even multiple of its page size.

-dylib_file install_name:file_name
Specifies that a dynamic shared library is in a different location than its standard location. Use this option when you link with a library that is dependent on a dynamic library, and the dynamic library is in a location other than its default location. install_name specifies the path where the library normally resides. file_name specifies the path of the library you want to use instead. For example, if you link to a library that depends upon the dynamic library libsys and you have libsys installed in a nondefault location, you would use this option: -dylib_file /lib/libsys_s.A.dylib:/me/lib/libsys_s.A.dylib.

-prebind
The created output file will be in the prebound format. This was used in Mac OS X 10.3 and earlier to improve launch performance.
-weak_reference_mismatches treatment
Specifies what to do if a symbol is weak-imported in one object file but not weak-imported in another. The valid treatments are: error, weak, or non-weak. The default is non-weak.

-read_only_relocs treatment
Enables the use of relocations which will cause dyld to modify (copy-on-write) read-only pages. The compiler will normally never generate such code.

-force_cpusubtype_ALL
This option is only applicable with -arch ppc. It tells the linker to ignore the PowerPC cpu requirements (e.g. G3, G4 or G5) encoded in the object files and mark the resulting binary as runnable on any PowerPC cpu.

-dylinker_install_name path
Only used when building dyld.

-no_arch_warnings
Suppresses warning messages about files that have the wrong architecture for the -arch flag.

-arch_errors_fatal
Turns warnings about files that have the wrong architecture for the -arch flag into errors.

-e symbol_name
Specifies the entry point of a main executable. By default the entry name is "start", which is found in crt1.o, which contains the glue code needed to set up and call main().

-w
Suppress all warning messages.

-final_output name
Specifies the install name of a dylib if -install_name is not used. This option is used by the compiler driver when it is invoked with multiple -arch arguments.

-arch_multiple
Specifies that the linker should augment error and warning messages with the architecture name. This option is used by the compiler driver when it is invoked with multiple -arch arguments.

-twolevel_namespace_hints
Specifies that hints should be added to the resulting binary that can help speed up runtime binding by dyld as long as the libraries being linked against have not changed.

-dot path
Create a file at the specified path containing a graph of symbol dependencies. The .dot file can be viewed in GraphViz.

-keep_relocs
Add section-based relocation records to a final linked image. These relocations are ignored at runtime by dyld.
-warn_stabs
Print a warning when the linker cannot do a BINCL/EINCL optimization because the compiler put a bad stab symbol inside a BINCL/EINCL range.

-warn_commons
Print a warning whenever a tentative definition in an object file is found and an external symbol by the same name is also found in a linked dylib. This often means that the extern keyword is missing from a variable declaration in a header file.

-read_only_stubs
[i386 only] Makes the __IMPORT segment of a final linked image read-only. This option makes a program slightly more secure in that the JMP instructions in the i386 fast stubs cannot be easily overwritten by malicious code. The downside is that dyld must use mprotect() to temporarily make the segment writable while it is binding the stubs.

-slow_stubs
[i386 only] Instead of using single JMP instruction stubs, the linker creates code in the __TEXT segment which calls through a lazy pointer in the __DATA segment.

-interposable_list filename
The specified filename contains a list of global symbol names that should always be accessed indirectly. For instance, if libSystem.dylib is linked such that _malloc is interposable, then calls to malloc() from within libSystem will go through a dyld stub and could potentially be redirected to an alternate malloc. If libSystem.dylib were built without making _malloc interposable, then if _malloc were interposed at runtime, calls to malloc from within libSystem would be missed (not interposed) because they would be direct calls.

-no_function_starts
By default the linker creates a compressed table of function start addresses in the LINKEDIT of a final linked image. This option disables that behavior.

-no_objc_category_merging
By default when producing a final linked image, the linker will optimize Objective-C classes by merging any categories on a class into the class. Both the class and its categories must be defined in the image being linked for the optimization to occur. Using this option disables that behavior.
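A sketch of the -interposable_list file format (the file path is hypothetical; _malloc and _free follow the libSystem example above):

```shell
# Hypothetical -interposable_list file: one global symbol name per line.
cat > /tmp/interpose.txt <<'EOF'
_malloc
_free
EOF
# It would be used when building the dylib, e.g.:
#   ld -dylib ... -interposable_list /tmp/interpose.txt ...
cat /tmp/interpose.txt
```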
-objc_relative_method_lists
By default when producing a final linked image, if targeting a new enough OS version, the linker will rewrite ObjC method lists from the traditional three pointers to three read-only delta pointers. This option allows you to force the use of relative method lists even though the OS version is too low.

-no_objc_relative_method_lists
By default when producing a final linked image, if targeting a new enough OS version, the linker will rewrite ObjC method lists from the traditional three pointers to three read-only delta pointers. This option allows you to force the use of traditional three-pointer method lists.

-object_path_lto filename
When performing Link Time Optimization (LTO) and a temporary mach-o object file is needed, if this option is used, the temporary file will be stored at the specified path and remain after the link is complete. Without the option, the linker picks a path and deletes the object file before the linker tool completes, so tools such as the debugger or dsymutil will not be able to access the DWARF debug info in the temporary object file.

-lto_library path
When performing Link Time Optimization (LTO), the linker normally loads libLTO.dylib relative to the linker binary (../lib/libLTO.dylib). This option allows the user to specify the path to a specific libLTO.dylib to load instead.

-cache_path_lto path
When performing Incremental Link Time Optimization (LTO), use this directory as a cache for incremental rebuilds.

-prune_interval_lto seconds
When performing Incremental Link Time Optimization (LTO), the cache will be pruned after the specified interval. A value of 0 will force pruning to occur, and a value of -1 will disable pruning.

-prune_after_lto seconds
When pruning the cache for Incremental Link Time Optimization (LTO), cache entries are removed after the specified interval.
-max_relative_cache_size_lto percent
When performing Incremental Link Time Optimization (LTO), the cache will be pruned so as not to exceed this percentage of the free disk space. That is, a value of 100 would indicate that the cache may fill the disk, and a value of 50 would indicate that the cache size will be kept under half of the free disk space.

-fixup_chains_section
For use with -static or -preload when -pie is used. Tells the linker to add a __TEXT,__chain_starts section which starts with a dyld_chained_starts_offsets struct which specifies the pointer format and the offsets to the start of every fixup chain.

-fixup_chains_section_vm
Same as -fixup_chains_section but fixes a bug. The offsets in the __chain_starts section are vm-offsets from the __TEXT segment, and the rebase targets in the chains are vm-offsets.

-threaded_starts_section
For arm64e only. For use with -static or -preload when -pie is used. Tells the linker to add a __TEXT,__thread_starts section which starts with a 32-bit flag field, followed by an array of 32-bit values. Each value is the offset to the start of a fixup chain. This option is deprecated.

-page_align_data_atoms
During development, this option can be used to space out all global variables so each is on a separate page. This is useful when analyzing dirty and resident pages. The information can then be used to create an order file to cluster commonly used/dirty globals onto the same page(s).

-not_for_dyld_shared_cache
Normally, the linker will add extra info to dylibs with an -install_name starting with /usr/lib or /System/Library/ that allows the dylib to be placed into the dyld shared cache. Adding this option tells the linker to not add that extra info.

-search_in_sparse_frameworks
For use when linking against versioned frameworks that do not have a normal variant. By default when -framework Foo,_suffix is used, the linker will follow Foo.framework/Foo if it is a symbolic link, append _suffix, and search for a file with that path.
When this option is used, the linker will also search for Foo.framework/Versions/Current/Foo_suffix.

-ld_classic
Override the choice of linker, and force the use of ld-classic to link the binary. This is incompatible with options such as -merge*, used to build/merge libraries.

-ld_new
Override the choice of linker, and force the use of ld to link the binary. This is incompatible with older architectures such as armv7k and i386.

Mergeable Library Options

-make_mergeable
Adds additional metadata to a dylib which makes it a mergeable library. It can still be used as a dylib, or can be merged into other binaries when they link it with a -merge* option.

-merge-lx
This is the same as the -lx option but means to merge the contents of the library x into this binary.

-merge_library path_to_library
This is the same as listing a file name path to a library on the link line but also merges the contents of the library into this binary.

-merge_framework name[,suffix]
This is the same as -framework name[,suffix] but means that the contents of the framework should be merged into this binary.

-no_merged_libraries_hook
When using mergeable libraries, ld automatically adds a hook to redirect bundle resource lookups from mergeable frameworks into the merged binary. Use this option to disable the hook. The hook requires a minimum deployment version of iOS 12; you can use this option to disable the hook with a lower deployment target if your frameworks don't require bundle resource lookups. Disabling the hook might also improve launch time performance, so it's good to disable it regardless of the deployment target if it's not required.

Obsolete Options

-segalign value
All segments must be page aligned.

-seglinkedit
Object files (MH_OBJECT) with a LINKEDIT segment are no longer supported. This option is obsolete.

-noseglinkedit
This is the default. This option is obsolete.

-fvmlib
Fixed VM shared libraries (MH_FVMLIB) are no longer supported. This option is obsolete.
-sectobjectsymbols segname sectname
Adding a local label at a section start is no longer supported. This option is obsolete.

-nofixprebinding
The MH_NOFIXPREBINDING bit of mach_headers has been ignored since Mac OS X 10.3.9. This option is obsolete.

-noprebind_all_twolevel_modules
Multi-modules in dynamic libraries have been ignored at runtime since Mac OS X 10.4.0. This option is obsolete.

-prebind_all_twolevel_modules
Multi-modules in dynamic libraries have been ignored at runtime since Mac OS X 10.4.0. This option is obsolete.

-prebind_allow_overlap
When using -prebind, the linker allows overlapping by default, so this option is obsolete.

-noprebind
LD_PREBIND is no longer supported as a way to force on prebinding, so there no longer needs to be a command line way to override LD_PREBIND. This option is obsolete.

-sect_diff_relocs treatment
This option was an attempt to warn about linking .o files compiled without -mdynamic-no-pic into a main executable, but the false positive rate generated too much noise to make the option useful. This option is obsolete.

-run_init_lazily
This option was removed in Mac OS X 10.2.

-single_module
This is now the default so does not need to be specified.

-multi_module
Multi-modules in dynamic libraries have been ignored at runtime since Mac OS X 10.4.0. This option is obsolete.

-no_dead_strip_inits_and_terms
The linker never dead strips initialization and termination routines. They are considered "roots" of the dead strip graph.

-A basefile
Obsolete incremental load format. This option is obsolete.

-b
Used with the -A option to strip the base file's symbols. This option is obsolete.

Obsolete option to produce a load map. Use the -map option instead.

-Sn
Don't strip any symbols. This is the default. This option is obsolete.

-Si
Optimize stabs debug symbols to remove duplicates. This is the default. This option is obsolete.

-Sp
Write minimal stabs which cause the debugger to open and read the original .o file for full stabs.
This style of debugging is obsolete in Mac OS X 10.5. This option is obsolete.

-X
Strip local symbols that begin with 'L'. This is the default. This option is obsolete.

-s
Completely strip the output, including removing the symbol table. This file format variant is no longer supported. This option is obsolete.

-m
Don't treat multiple definitions as an error. This is no longer supported. This option is obsolete.

-y symbol
Display each file in which symbol is used. This was previously used to debug where an undefined symbol was used, but the linker now automatically prints out all usages. The -why_live option can also be used to display what kept a symbol from being dead stripped. This option is obsolete.

-Y number
Used to control how many occurrences of each symbol specified with -y would be shown. This option is obsolete.

-nomultidefs
Only used when linking an umbrella framework. Sets the MH_NOMULTIDEFS bit in the mach_header. The MH_NOMULTIDEFS bit has been obsolete since Mac OS X 10.4. This option is obsolete.

-multiply_defined_unused treatment
Previously provided a way to warn or error if any of the symbol definitions in the output file matched any definitions in a dynamic library being linked. This option is obsolete.

-multiply_defined treatment
Previously provided a way to warn or error if any of the symbols used from a dynamic library were also available in another linked dynamic library. This option is obsolete.

-private_bundle
Previously prevented errors when -flat_namespace, -bundle, and -bundle_loader were used and the bundle contained a definition that conflicted with a symbol in the main executable. The linker no longer errors on such conflicts. This option is obsolete.

-noall_load
This is the default. This option is obsolete.

-seg_addr_table_filename path
Use path instead of the install name of the library for matching an entry in the seg_addr_table. This option is obsolete.

-sectorder segname sectname orderfile
Replaced by the more general -order_file option.
-sectorder_detail
Produced extra logging about which sectorder entries were used. Replaced by -order_file_statistics. This option is obsolete.

-lazy_framework name[,suffix]
This is the same as -framework name[,suffix] except that the linker will construct glue code so that the framework is not loaded until the first function in it is called. You cannot directly access data or Objective-C classes in a framework linked this way. This option is deprecated.

-lazy-lx
This is the same as -lx but it is only for shared libraries, and the linker will construct glue code so that the shared library is not loaded until the first function in it is called. This option is deprecated.

-lazy_library path_to_library
This is the same as listing a file name path to a shared library on the link line except that the linker will construct glue code so that the shared library is not loaded until the first function in it is called. This option is deprecated.

SEE ALSO
ld-classic(1), as(1), ar(1), cc(1), dyld_info(1), nm(1), otool(1), lipo(1), arch(3), dyld(3), Mach-O(5), strip(1), rebase(1)

Darwin June 21, 2023
pcre2test
If pcre2test is given two filename arguments, it reads from the first and writes to the second. If the first name is "-", input is taken from the standard input. If pcre2test is given only one argument, it reads from that file and writes to stdout. Otherwise, it reads from stdin and writes to stdout.

When pcre2test is built, a configuration option can specify that it should be linked with the libreadline or libedit library. When this is done, if the input is from a terminal, it is read using the readline() function. This provides line-editing and history facilities. The output from the -help option states whether or not readline() will be used.

The program handles any number of tests, each of which consists of a set of input lines. Each set starts with a regular expression pattern, followed by any number of subject lines to be matched against that pattern. In between sets of test data, command lines that begin with # may appear. This file format, with some restrictions, can also be processed by the perltest.sh script that is distributed with PCRE2 as a means of checking that the behaviour of PCRE2 and Perl is the same. For a specification of perltest.sh, see the comments near its beginning. See also the #perltest command below.

When the input is a terminal, pcre2test prompts for each line of input, using "re>" to prompt for regular expression patterns, and "data>" to prompt for subject lines. Command lines starting with # can be entered only in response to the "re>" prompt.

Each subject line is matched separately and independently. If you want to do multi-line matches, you have to use the \n escape sequence (or \r or \r\n, etc., depending on the newline setting) in a single line of input to encode the newline sequences. There is no limit on the length of subject lines; the input buffer is automatically extended if it is too small.
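The test structure described above (a pattern followed by subject lines, terminated by an empty line) can be sketched as a minimal input file; the file path and the pattern here are hypothetical examples:

```shell
# A minimal pcre2test input file: one pattern, then subject lines,
# terminated by an empty line.
cat > /tmp/t1.txt <<'EOF'
/ab+c/
    abc
    abbbc
    xyz

EOF
# It would then be run as:  pcre2test /tmp/t1.txt /tmp/t1.out
cat /tmp/t1.txt
```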
There are replication features that make it possible to generate long repetitive pattern or subject lines without having to supply them explicitly.

An empty line or the end of the file signals the end of the subject lines for a test, at which point a new pattern or command line is expected if there is still input to be read.

COMMAND LINES

In between sets of test data, a line that begins with # is interpreted as a command line. If the first character is followed by white space or an exclamation mark, the line is treated as a comment, and ignored. Otherwise, the following commands are recognized:

    #forbid_utf

Subsequent patterns automatically have the PCRE2_NEVER_UTF and PCRE2_NEVER_UCP options set, which locks out the use of the PCRE2_UTF and PCRE2_UCP options and the use of (*UTF) and (*UCP) at the start of patterns. This command also forces an error if a subsequent pattern contains any occurrences of \P, \p, or \X, which are still supported when PCRE2_UTF is not set, but which require Unicode property support to be included in the library. This is a trigger guard that is used in test files to ensure that UTF or Unicode property tests are not accidentally added to files that are used when Unicode support is not included in the library. Setting PCRE2_NEVER_UTF and PCRE2_NEVER_UCP as a default can also be obtained by the use of #pattern; the difference is that #forbid_utf cannot be unset, and the automatic options are not displayed in pattern information, to avoid cluttering up test output.

    #load <filename>

This command is used to load a set of precompiled patterns from a file, as described in the section entitled "Saving and restoring compiled patterns" below.

    #loadtables <filename>

This command is used to load a set of binary character tables that can be accessed by the tables=3 qualifier. Such tables can be created by the pcre2_dftables program with the -b option.

    #newline_default [<newline-list>]

When PCRE2 is built, a default newline convention can be specified.
This determines which characters and/or character pairs are recognized as indicating a newline in a pattern or subject string. The default can be overridden when a pattern is compiled. The standard test files contain tests of various newline conventions, but the majority of the tests expect a single linefeed to be recognized as a newline by default. Without special action the tests would fail when PCRE2 is compiled with either CR or CRLF as the default newline.

The #newline_default command specifies a list of newline types that are acceptable as the default. The types must be one of CR, LF, CRLF, ANYCRLF, ANY, or NUL (in upper or lower case), for example:

    #newline_default LF Any anyCRLF

If the default newline is in the list, this command has no effect. Otherwise, except when testing the POSIX API, a newline modifier that specifies the first newline convention in the list (LF in the above example) is added to any pattern that does not already have a newline modifier. If the newline list is empty, the feature is turned off. This command is present in a number of the standard test input files.

When the POSIX API is being tested there is no way to override the default newline convention, though it is possible to set the newline convention from within the pattern. A warning is given if the posix or posix_nosub modifier is used when #newline_default would set a default for the non-POSIX API.

    #pattern <modifier-list>

This command sets a default modifier list that applies to all subsequent patterns. Modifiers on a pattern can change these settings.

    #perltest

This line is used in test files that can also be processed by perltest.sh to confirm that Perl gives the same results as PCRE2. Subsequent tests are checked for the use of pcre2test features that are incompatible with the perltest.sh script. Patterns must use '/' as their delimiter, and only certain modifiers are supported.
Comment lines, #pattern commands, and #subject commands that set or unset "mark" are recognized and acted on. The #perltest, #forbid_utf, and #newline_default commands, which are needed in the relevant pcre2test files, are silently ignored. All other command lines are ignored, but give a warning message. The #perltest command helps detect tests that are accidentally put in the wrong file or use the wrong delimiter. For more details of the perltest.sh script see the comments it contains.

    #pop [<modifiers>]
    #popcopy [<modifiers>]

These commands are used to manipulate the stack of compiled patterns, as described in the section entitled "Saving and restoring compiled patterns" below.

    #save <filename>

This command is used to save a set of compiled patterns to a file, as described in the section entitled "Saving and restoring compiled patterns" below.

    #subject <modifier-list>

This command sets a default modifier list that applies to all subsequent subject lines. Modifiers on a subject line can change these settings.

MODIFIER SYNTAX

Modifier lists are used with both pattern and subject lines. Items in a list are separated by commas followed by optional white space. Trailing whitespace in a modifier list is ignored. Some modifiers may be given for both patterns and subject lines, whereas others are valid only for one or the other. Each modifier has a long name, for example "anchored", and some of them must be followed by an equals sign and a value, for example, "offset=12". Values cannot contain comma characters, but may contain spaces. Modifiers that do not take values may be preceded by a minus sign to turn off a previous setting.

A few of the more common modifiers can also be specified as single letters, for example "i" for "caseless". In documentation, following the Perl convention, these are written with a slash ("the /i modifier") for clarity. Abbreviated modifiers must all be concatenated in the first item of a modifier list.
If the first item is not recognized as a long modifier name, it is interpreted as a sequence of these abbreviations. For example:

    /abc/ig,newline=cr,jit=3

This is a pattern line whose modifier list starts with two one-letter modifiers (/i and /g). The lower-case abbreviated modifiers are the same as used in Perl.

PATTERN SYNTAX

A pattern line must start with one of the following characters (common symbols, excluding pattern meta-characters):

    / ! " ' ` - = _ : ; , % & @ ~

This is interpreted as the pattern's delimiter. A regular expression may be continued over several input lines, in which case the newline characters are included within it. It is possible to include the delimiter as a literal within the pattern by escaping it with a backslash, for example

    /abc\/def/

If you do this, the escape and the delimiter form part of the pattern, but since the delimiters are all non-alphanumeric, the inclusion of the backslash does not affect the pattern's interpretation. Note, however, that this trick does not work within \Q...\E literal bracketing because the backslash will itself be interpreted as a literal. If the terminating delimiter is immediately followed by a backslash, for example,

    /abc/\

a backslash is added to the end of the pattern. This is done to provide a way of testing the error condition that arises if a pattern finishes with a backslash, because

    /abc\/

is interpreted as the first line of a pattern that starts with "abc/", causing pcre2test to read the next line as a continuation of the regular expression. A pattern can be followed by a modifier list (details below).

SUBJECT LINE SYNTAX

Before each subject line is passed to pcre2_match(), pcre2_dfa_match(), or pcre2_jit_match(), leading and trailing white space is removed, and the line is scanned for backslash escapes, unless the subject_literal modifier was set for the pattern.
The following provide a means of encoding non-printing characters in a visible way:

    \a          alarm (BEL, \x07)
    \b          backspace (\x08)
    \e          escape (\x1b)
    \f          form feed (\x0c)
    \n          newline (\x0a)
    \r          carriage return (\x0d)
    \t          tab (\x09)
    \v          vertical tab (\x0b)
    \nnn        octal character (up to 3 octal digits); always
                  a byte unless > 255 in UTF-8 or 16-bit or 32-bit mode
    \o{dd...}   octal character (any number of octal digits)
    \xhh        hexadecimal byte (up to 2 hex digits)
    \x{hh...}   hexadecimal character (any number of hex digits)

The use of \x{hh...} is not dependent on the use of the utf modifier on the pattern. It is recognized always. There may be any number of hexadecimal digits inside the braces; invalid values provoke error messages.

Note that \xhh specifies one byte rather than one character in UTF-8 mode; this makes it possible to construct invalid UTF-8 sequences for testing purposes. On the other hand, \x{hh} is interpreted as a UTF-8 character in UTF-8 mode, generating more than one byte if the value is greater than 127. When testing the 8-bit library not in UTF-8 mode, \x{hh} generates one byte for values less than 256, and causes an error for greater values.

In UTF-16 mode, all 4-digit \x{hhhh} values are accepted. This makes it possible to construct invalid UTF-16 sequences for testing purposes. In UTF-32 mode, all 4- to 8-digit \x{...} values are accepted. This makes it possible to construct invalid UTF-32 sequences for testing purposes.

There is a special backslash sequence that specifies replication of one or more characters:

    \[<characters>]{<count>}

This makes it possible to test long strings without having to provide them as part of the file. For example:

    \[abc]{4}

is converted to "abcabcabcabc". This feature does not support nesting. To include a closing square bracket in the characters, code it as \x5D.

A backslash followed by an equals sign marks the end of the subject string and the start of a modifier list.
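Escape and replication sequences can be combined in one subject line. This hypothetical test encodes "abcabcabc" followed by a newline; the output shown is typical:

```
re> /(?:abc)+/
data> \[abc]{3}\n
 0: abcabcabc
```

The trailing \n is not part of the match because the pattern does not match a newline. A \= modifier list may follow such escaped text in the same way as for any other subject line.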
For example:

    abc\=notbol,notempty

If the subject string is empty and \= is followed by whitespace, the line is treated as a comment line, and is not used for matching. For example:

    \= This is a comment.
    abc\= This is an invalid modifier list.

A backslash followed by any other non-alphanumeric character just escapes that character. A backslash followed by anything else causes an error. However, if the very last character in the line is a backslash (and there is no modifier list), it is ignored. This gives a way of passing an empty line as data, since a real empty line terminates the data input.

If the subject_literal modifier is set for a pattern, all subject lines that follow are treated as literals, with no special treatment of backslashes. No replication is possible, and any subject modifiers must be set as defaults by a #subject command.

PATTERN MODIFIERS

There are several types of modifier that can appear in pattern lines. Except where noted below, they may also be used in #pattern commands. A pattern's modifier list can add to or override default modifiers that were set by a previous #pattern command.

Setting compilation options

The following modifiers set options for pcre2_compile(). Most of them set bits in the options argument of that function, but those whose names start with PCRE2_EXTRA are additional options that are set in the compile context. Some of these options have single-letter abbreviations. There is special handling for /x: if a second x is present, PCRE2_EXTENDED is converted into PCRE2_EXTENDED_MORE as in Perl. A third appearance adds PCRE2_EXTENDED as well, though this makes no difference to the way pcre2_compile() behaves. See pcre2api for a description of the effects of these options.
        allow_empty_class         set PCRE2_ALLOW_EMPTY_CLASS
        allow_lookaround_bsk      set PCRE2_EXTRA_ALLOW_LOOKAROUND_BSK
        allow_surrogate_escapes   set PCRE2_EXTRA_ALLOW_SURROGATE_ESCAPES
        alt_bsux                  set PCRE2_ALT_BSUX
        alt_circumflex            set PCRE2_ALT_CIRCUMFLEX
        alt_verbnames             set PCRE2_ALT_VERBNAMES
        anchored                  set PCRE2_ANCHORED
    /a  ascii_all                 set all ASCII options
        ascii_bsd                 set PCRE2_EXTRA_ASCII_BSD
        ascii_bss                 set PCRE2_EXTRA_ASCII_BSS
        ascii_bsw                 set PCRE2_EXTRA_ASCII_BSW
        ascii_digit               set PCRE2_EXTRA_ASCII_DIGIT
        ascii_posix               set PCRE2_EXTRA_ASCII_POSIX
        auto_callout              set PCRE2_AUTO_CALLOUT
        bad_escape_is_literal     set PCRE2_EXTRA_BAD_ESCAPE_IS_LITERAL
    /i  caseless                  set PCRE2_CASELESS
    /r  caseless_restrict         set PCRE2_EXTRA_CASELESS_RESTRICT
        dollar_endonly            set PCRE2_DOLLAR_ENDONLY
    /s  dotall                    set PCRE2_DOTALL
        dupnames                  set PCRE2_DUPNAMES
        endanchored               set PCRE2_ENDANCHORED
        escaped_cr_is_lf          set PCRE2_EXTRA_ESCAPED_CR_IS_LF
    /x  extended                  set PCRE2_EXTENDED
    /xx extended_more             set PCRE2_EXTENDED_MORE
        extra_alt_bsux            set PCRE2_EXTRA_ALT_BSUX
        firstline                 set PCRE2_FIRSTLINE
        literal                   set PCRE2_LITERAL
        match_line                set PCRE2_EXTRA_MATCH_LINE
        match_invalid_utf         set PCRE2_MATCH_INVALID_UTF
        match_unset_backref       set PCRE2_MATCH_UNSET_BACKREF
        match_word                set PCRE2_EXTRA_MATCH_WORD
    /m  multiline                 set PCRE2_MULTILINE
        never_backslash_c         set PCRE2_NEVER_BACKSLASH_C
        never_ucp                 set PCRE2_NEVER_UCP
        never_utf                 set PCRE2_NEVER_UTF
    /n  no_auto_capture           set PCRE2_NO_AUTO_CAPTURE
        no_auto_possess           set PCRE2_NO_AUTO_POSSESS
        no_dotstar_anchor         set PCRE2_NO_DOTSTAR_ANCHOR
        no_start_optimize         set PCRE2_NO_START_OPTIMIZE
        no_utf_check              set PCRE2_NO_UTF_CHECK
        ucp                       set PCRE2_UCP
        ungreedy                  set PCRE2_UNGREEDY
        use_offset_limit          set PCRE2_USE_OFFSET_LIMIT
        utf                       set PCRE2_UTF

As well as turning on the PCRE2_UTF option, the utf modifier causes all non-printing characters in output strings to be printed using the \x{hh...} notation. Otherwise, those less than 0x100 are output in hex without the curly brackets.
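As a sketch of the difference this makes in the 8-bit library, the same character (U+00FC, non-printing for output purposes) is shown with and without utf; output is indicative:

```
re> /./
data> \xfc
 0: \xfc

re> /./utf
data> \x{fc}
 0: \x{fc}
```

In the first test the subject is a single byte; in the second it is the two-byte UTF-8 encoding of U+00FC.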
Setting utf in 16-bit or 32-bit mode also causes pattern and subject strings to be translated to UTF-16 or UTF-32, respectively, before being passed to library functions.

Setting compilation controls

The following modifiers affect the compilation process or request information about the pattern. There are single-letter abbreviations for some that are heavily used in the test files.

        bsr=[anycrlf|unicode]     specify \R handling
    /B  bincode                   show binary code without lengths
        callout_info              show callout information
        convert=<options>         request foreign pattern conversion
        convert_glob_escape=c     set glob escape character
        convert_glob_separator=c  set glob separator character
        convert_length            set convert buffer length
        debug                     same as info,fullbincode
        framesize                 show matching frame size
        fullbincode               show binary code with lengths
    /I  info                      show info about compiled pattern
        hex                       unquoted characters are hexadecimal
        jit[=<number>]            use JIT
        jitfast                   use JIT fast path
        jitverify                 verify JIT use
        locale=<name>             use this locale
        max_pattern_compiled_length=<n>  set maximum compiled pattern length (bytes)
        max_pattern_length=<n>    set maximum pattern length (code units)
        max_varlookbehind=<n>     set maximum variable lookbehind length
        memory                    show memory used
        newline=<type>            set newline type
        null_context              compile with a NULL context
        null_pattern              pass pattern as NULL
        parens_nest_limit=<n>     set maximum parentheses depth
        posix                     use the POSIX API
        posix_nosub               use the POSIX API with REG_NOSUB
        push                      push compiled pattern onto the stack
        pushcopy                  push a copy onto the stack
        stackguard=<number>       test the stackguard feature
        subject_literal           treat all subject lines as literal
        tables=[0|1|2|3]          select internal tables
        use_length                do not zero-terminate the pattern
        utf8_input                treat input as UTF-8

The effects of these modifiers are described in the following sections.

Newline and \R handling

The bsr modifier specifies what \R in a pattern should match. If it is set to "anycrlf", \R matches CR, LF, or CRLF only.
If it is set to "unicode", \R matches any Unicode newline sequence. The default can be specified when PCRE2 is built; if it is not, the default is set to Unicode.

The newline modifier specifies which characters are to be interpreted as newlines, both in the pattern and in subject lines. The type must be one of CR, LF, CRLF, ANYCRLF, ANY, or NUL (in upper or lower case).

Information about a pattern

The debug modifier is a shorthand for info,fullbincode, requesting all available information.

The bincode modifier causes a representation of the compiled code to be output after compilation. This information does not contain length and offset values, which ensures that the same output is generated for different internal link sizes and different code unit widths. By using bincode, the same regression tests can be used in different environments.

The fullbincode modifier, by contrast, does include length and offset values. This is used in a few special tests that run only for specific code unit widths and link sizes, and is also useful for one-off tests.

The info modifier requests information about the compiled pattern (whether it is anchored, has a fixed first character, and so on). The information is obtained from the pcre2_pattern_info() function. Here are some typical examples:

    re> /(?i)(^a|^b)/m,info
    Capture group count = 1
    Compile options: multiline
    Overall options: caseless multiline
    First code unit at start or follows newline
    Subject length lower bound = 1

    re> /(?i)abc/info
    Capture group count = 0
    Compile options: <none>
    Overall options: caseless
    First code unit = 'a' (caseless)
    Last code unit = 'c' (caseless)
    Subject length lower bound = 3

"Compile options" are those specified by modifiers; "overall options" have added options that are taken or deduced from the pattern. If both sets of options are the same, just a single "options" line is output; if there are no options, the line is omitted.
"First code unit" is where any match must start; if there is more than one they are listed as "starting code units". "Last code unit" is the last literal code unit that must be present in any match. This is not necessarily the last character. These lines are omitted if no starting or ending code units are recorded. The subject length line is omitted when no_start_optimize is set because the minimum length is not calculated when it can never be used.

The framesize modifier shows the size, in bytes, of each storage frame used by pcre2_match() for handling backtracking. The size depends on the number of capturing parentheses in the pattern. A vector of these frames is used at matching time; its overall size is shown when the heapframes_size subject modifier is set.

The callout_info modifier requests information about all the callouts in the pattern. A list of them is output at the end of any other information that is requested. For each callout, either its number or string is given, followed by the item that follows it in the pattern.

Passing a NULL context

Normally, pcre2test passes a context block to pcre2_compile(). If the null_context modifier is set, however, NULL is passed. This is for testing that pcre2_compile() behaves correctly in this case (it uses default values).

Passing a NULL pattern

The null_pattern modifier is for testing the behaviour of pcre2_compile() when the pattern argument is NULL. The length value passed is the default PCRE2_ZERO_TERMINATED unless use_length is set. Any length other than zero causes an error.

Specifying pattern characters in hexadecimal

The hex modifier specifies that the characters of the pattern, except for substrings enclosed in single or double quotes, are to be interpreted as pairs of hexadecimal digits. This feature is provided as a way of creating patterns that contain binary zeros and other non-printing characters. White space is permitted between pairs of digits.
For example, this pattern contains three characters:

    /ab 32 59/hex

Parts of such a pattern are taken literally if quoted. This pattern contains nine characters, only two of which are specified in hexadecimal:

    /ab "literal" 32/hex

Either single or double quotes may be used. There is no way of including the delimiter within a substring. The hex and expand modifiers are mutually exclusive.

Specifying the pattern's length

By default, patterns are passed to the compiling functions as zero-terminated strings but can be passed by length instead of being zero-terminated. The use_length modifier causes this to happen. Using a length happens automatically (whether or not use_length is set) when hex is set, because patterns specified in hexadecimal may contain binary zeros.

If hex or use_length is used with the POSIX wrapper API (see "Using the POSIX wrapper API" below), the REG_PEND extension is used to pass the pattern's length.

Specifying a maximum for variable lookbehinds

Variable lookbehind assertions are supported only if, for each one, there is a maximum length (in characters) that it can match. There is a limit on this, whose default can be set at build time, with an ultimate default of 255. The max_varlookbehind modifier uses the pcre2_set_max_varlookbehind() function to change the limit. Lookbehinds whose branches each match a fixed length are limited to 65535 characters per branch.

Specifying wide characters in 16-bit and 32-bit modes

In 16-bit and 32-bit modes, all input is automatically treated as UTF-8 and translated to UTF-16 or UTF-32 when the utf modifier is set. For testing the 16-bit and 32-bit libraries in non-UTF mode, the utf8_input modifier can be used. It is mutually exclusive with utf. Input lines are interpreted as UTF-8 as a means of specifying wide characters. More details are given in "Input encoding" above.

Generating long repetitive patterns

Some tests use long patterns that are very repetitive.
Instead of creating a very long input line for such a pattern, you can use a special repetition feature, similar to the one described for subject lines above. If the expand modifier is present on a pattern, parts of the pattern that have the form

    \[<characters>]{<count>}

are expanded before the pattern is passed to pcre2_compile(). For example,

    \[AB]{6000}

is expanded to "ABAB..." 6000 times. This construction cannot be nested. An initial "\[" sequence is recognized only if "]{" followed by decimal digits and "}" is found later in the pattern. If not, the characters remain in the pattern unaltered. The expand and hex modifiers are mutually exclusive.

If part of an expanded pattern looks like an expansion, but is really part of the actual pattern, unwanted expansion can be avoided by giving two values in the quantifier. For example,

    \[AB]{6000,6000}

is not recognized as an expansion item. If the info modifier is set on an expanded pattern, the result of the expansion is included in the information that is output.

JIT compilation

Just-in-time (JIT) compiling is a heavyweight optimization that can greatly speed up pattern matching. See the pcre2jit documentation for details. JIT compiling happens, optionally, after a pattern has been successfully compiled into an internal form. The JIT compiler converts this to optimized machine code. It needs to know whether the match-time options PCRE2_PARTIAL_HARD and PCRE2_PARTIAL_SOFT are going to be used, because different code is generated for the different cases. See the partial modifier in "Subject Modifiers" below for details of how these options are specified for each match attempt.

JIT compilation is requested by the jit pattern modifier, which may optionally be followed by an equals sign and a number in the range 0 to 7.
The three bits that make up the number specify which of the three JIT operating modes are to be compiled:

    1  compile JIT code for non-partial matching
    2  compile JIT code for soft partial matching
    4  compile JIT code for hard partial matching

The possible values for the jit modifier are therefore:

    0  disable JIT
    1  normal matching only
    2  soft partial matching only
    3  normal and soft partial matching
    4  hard partial matching only
    6  soft and hard partial matching only
    7  all three modes

If no number is given, 7 is assumed. The phrase "partial matching" means a call to pcre2_match() with either the PCRE2_PARTIAL_SOFT or the PCRE2_PARTIAL_HARD option set. Note that such a call may return a complete match; the options enable the possibility of a partial match, but do not require it. Note also that if you request JIT compilation only for partial matching (for example, jit=2) but do not set the partial modifier on a subject line, that match will not use JIT code because none was compiled for non-partial matching.

If JIT compilation is successful, the compiled JIT code will automatically be used when an appropriate type of match is run, except when incompatible run-time options are specified. For more details, see the pcre2jit documentation. See also the jitstack modifier below for a way of setting the size of the JIT stack.

If the jitfast modifier is specified, matching is done using the JIT "fast path" interface, pcre2_jit_match(), which skips some of the sanity checks that are done by pcre2_match(), and of course does not work when JIT is not supported. If jitfast is specified without jit, jit=7 is assumed.

If the jitverify modifier is specified, information about the compiled pattern shows whether JIT compilation was or was not successful. If jitverify is specified without jit, jit=7 is assumed.
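For example, a test that compiles JIT code only for soft partial matching and then makes a soft partial match might look like this (ps is the abbreviation for partial_soft; output is typical):

```
re> /abcd/jit=2
data> abc\=ps
Partial match: abc
```

A non-partial match attempt against the same pattern would not use the JIT code, because none was compiled for that mode.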
If JIT compilation is successful when jitverify is set, the text "(JIT)" is added to the first output line after a match or non match when JIT-compiled code was actually used in the match.

Setting a locale

The locale modifier must specify the name of a locale, for example:

    /pattern/locale=fr_FR

The given locale is set, pcre2_maketables() is called to build a set of character tables for the locale, and this is then passed to pcre2_compile() when compiling the regular expression. The same tables are used when matching the following subject lines. The locale modifier applies only to the pattern on which it appears, but can be given in a #pattern command if a default is needed. Setting a locale and alternate character tables are mutually exclusive.

Showing pattern memory

The memory modifier causes the size in bytes of the memory used to hold the compiled pattern to be output. This does not include the size of the pcre2_code block; it is just the actual compiled data. If the pattern is subsequently passed to the JIT compiler, the size of the JIT compiled code is also output. Here is an example:

    re> /a(b)c/jit,memory
    Memory allocation (code space): 21
    Memory allocation (JIT code): 1910

Limiting nested parentheses

The parens_nest_limit modifier sets a limit on the depth of nested parentheses in a pattern. Breaching the limit causes a compilation error. The default for the library is set when PCRE2 is built, but pcre2test sets its own default of 220, which is required for running the standard test suite.

Limiting the pattern length

The max_pattern_length modifier sets a limit, in code units, to the length of pattern that pcre2_compile() will accept. Breaching the limit causes a compilation error. The default is the largest number a PCRE2_SIZE variable can hold (essentially unlimited).

Limiting the size of a compiled pattern

The max_pattern_compiled_length modifier sets a limit, in bytes, to the amount of memory used by a compiled pattern.
Breaching the limit causes a compilation error. The default is the largest number a PCRE2_SIZE variable can hold (essentially unlimited).

Using the POSIX wrapper API

The posix and posix_nosub modifiers cause pcre2test to call PCRE2 via the POSIX wrapper API rather than its native API. When posix_nosub is used, the POSIX option REG_NOSUB is passed to regcomp(). The POSIX wrapper supports only the 8-bit library. Note that it does not imply POSIX matching semantics; for more detail see the pcre2posix documentation. The following pattern modifiers set options for the regcomp() function:

    caseless    REG_ICASE
    multiline   REG_NEWLINE
    dotall      REG_DOTALL    )
    ungreedy    REG_UNGREEDY  ) These options are not part of
    ucp         REG_UCP       ) the POSIX standard
    utf         REG_UTF8      )

The regerror_buffsize modifier specifies a size for the error buffer that is passed to regerror() in the event of a compilation error. For example:

    /abc/posix,regerror_buffsize=20

This provides a means of testing the behaviour of regerror() when the buffer is too small for the error message. If this modifier has not been set, a large buffer is used.

The aftertext and allaftertext subject modifiers work as described below. All other modifiers are either ignored, with a warning message, or cause an error.

The pattern is passed to regcomp() as a zero-terminated string by default, but if the use_length or hex modifiers are set, the REG_PEND extension is used to pass it by length.

Testing the stack guard feature

The stackguard modifier is used to test the use of pcre2_set_compile_recursion_guard(), a function that is provided to enable stack availability to be checked during compilation (see the pcre2api documentation for details). If the number specified by the modifier is greater than zero, pcre2_set_compile_recursion_guard() is called to set up callback from pcre2_compile() to a local function.
The argument it receives is the current nesting parenthesis depth; if this is greater than the value given by the modifier, non-zero is returned, causing the compilation to be aborted.

Using alternative character tables

The value specified for the tables modifier must be one of the digits 0, 1, 2, or 3. It causes a specific set of built-in character tables to be passed to pcre2_compile(). This is used in the PCRE2 tests to check behaviour with different character tables. The digit specifies the tables as follows:

    0  do not pass any special character tables
    1  the default ASCII tables, as distributed in pcre2_chartables.c.dist
    2  a set of tables defining ISO 8859 characters
    3  a set of tables loaded by the #loadtables command

In table 2, some characters whose codes are greater than 128 are identified as letters, digits, spaces, etc. Tables 3 can be used only after a #loadtables command has loaded them from a binary file. Setting alternate character tables and a locale are mutually exclusive.

Setting certain match controls

The following modifiers are really subject modifiers, and are described under "Subject Modifiers" below. However, they may be included in a pattern's modifier list, in which case they are applied to every subject line that is processed with that pattern. These modifiers do not affect the compilation process.
        aftertext                    show text after match
        allaftertext                 show text after captures
        allcaptures                  show all captures
        allvector                    show the entire ovector
        allusedtext                  show all consulted text
        altglobal                    alternative global matching
    /g  global                       global matching
        heapframes_size              show match data heapframes size
        jitstack=<n>                 set size of JIT stack
        mark                         show mark values
        replace=<string>             specify a replacement string
        startchar                    show starting character when relevant
        substitute_callout           use substitution callouts
        substitute_extended          use PCRE2_SUBSTITUTE_EXTENDED
        substitute_literal           use PCRE2_SUBSTITUTE_LITERAL
        substitute_matched           use PCRE2_SUBSTITUTE_MATCHED
        substitute_overflow_length   use PCRE2_SUBSTITUTE_OVERFLOW_LENGTH
        substitute_replacement_only  use PCRE2_SUBSTITUTE_REPLACEMENT_ONLY
        substitute_skip=<n>          skip substitution <n>
        substitute_stop=<n>          skip substitution <n> and following
        substitute_unknown_unset     use PCRE2_SUBSTITUTE_UNKNOWN_UNSET
        substitute_unset_empty       use PCRE2_SUBSTITUTE_UNSET_EMPTY

These modifiers may not appear in a #pattern command. If you want them as defaults, set them in a #subject command.

Specifying literal subject lines

If the subject_literal modifier is present on a pattern, all the subject lines that it matches are taken as literal strings, with no interpretation of backslashes. It is not possible to set subject modifiers on such lines, but any that are set as defaults by a #subject command are recognized.

Saving a compiled pattern

When a pattern with the push modifier is successfully compiled, it is pushed onto a stack of compiled patterns, and pcre2test expects the next line to contain a new pattern (or a command) instead of a subject line. This facility is used when saving compiled patterns to a file, as described in the section entitled "Saving and restoring compiled patterns" below. If pushcopy is used instead of push, a copy of the compiled pattern is stacked, leaving the original as current, ready to match the following input lines.
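The push modifier and the #pop command can also be combined directly, without saving to a file; this is a sketch, with indicative output for a trivial pattern:

```
re> /abc/push
re> #pop info
Capture group count = 0
First code unit = 'a'
Last code unit = 'c'
Subject length lower bound = 3
data> xabcx
 0: abc
```

(The pushcopy modifier, described above, stacks a copy instead, leaving the original pattern current.)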
Using pushcopy in this way provides a means of testing the pcre2_code_copy() function. The push and pushcopy modifiers are incompatible with compilation modifiers such as global that act at match time. Any that are specified are ignored (for the stacked copy), with a warning message, except for replace, which causes an error. Note that jitverify, which is allowed, does not carry through to any subsequent matching that uses a stacked pattern.

Testing foreign pattern conversion

The experimental foreign pattern conversion functions in PCRE2 can be tested by setting the convert modifier. Its argument is a colon-separated list of options, which set the equivalent option for the pcre2_pattern_convert() function:

    glob                    PCRE2_CONVERT_GLOB
    glob_no_starstar        PCRE2_CONVERT_GLOB_NO_STARSTAR
    glob_no_wild_separator  PCRE2_CONVERT_GLOB_NO_WILD_SEPARATOR
    posix_basic             PCRE2_CONVERT_POSIX_BASIC
    posix_extended          PCRE2_CONVERT_POSIX_EXTENDED
    unset                   Unset all options

The "unset" value is useful for turning off a default that has been set by a #pattern command. When one of these options is set, the input pattern is passed to pcre2_pattern_convert(). If the conversion is successful, the result is reflected in the output and then passed to pcre2_compile(). The normal utf and no_utf_check options, if set, cause the PCRE2_CONVERT_UTF and PCRE2_CONVERT_NO_UTF_CHECK options to be passed to pcre2_pattern_convert().

By default, the conversion function is allowed to allocate a buffer for its output. However, if the convert_length modifier is set to a value greater than zero, pcre2test passes a buffer of the given length. This makes it possible to test the length check.

The convert_glob_escape and convert_glob_separator modifiers can be used to specify the escape and separator characters for glob processing, overriding the defaults, which are operating-system dependent.

SUBJECT MODIFIERS

The modifiers that can appear in subject lines and the #subject command are of two types.
Setting match options

The following modifiers set options for pcre2_match() or pcre2_dfa_match(). See pcreapi for a description of their effects.

  anchored                  set PCRE2_ANCHORED
  endanchored               set PCRE2_ENDANCHORED
  dfa_restart               set PCRE2_DFA_RESTART
  dfa_shortest              set PCRE2_DFA_SHORTEST
  disable_recurseloop_check set PCRE2_DISABLE_RECURSELOOP_CHECK
  no_jit                    set PCRE2_NO_JIT
  no_utf_check              set PCRE2_NO_UTF_CHECK
  notbol                    set PCRE2_NOTBOL
  notempty                  set PCRE2_NOTEMPTY
  notempty_atstart          set PCRE2_NOTEMPTY_ATSTART
  noteol                    set PCRE2_NOTEOL
  partial_hard (or ph)      set PCRE2_PARTIAL_HARD
  partial_soft (or ps)      set PCRE2_PARTIAL_SOFT

The partial matching modifiers are provided with abbreviations because they appear frequently in tests. If the posix or posix_nosub modifier was present on the pattern, causing the POSIX wrapper API to be used, the only option-setting modifiers that have any effect are notbol, notempty, and noteol, causing REG_NOTBOL, REG_NOTEMPTY, and REG_NOTEOL, respectively, to be passed to regexec(). The other modifiers are ignored, with a warning message.

There is one additional modifier that can be used with the POSIX wrapper. It is ignored (with a warning) if used for non-POSIX matching.

  posix_startend=<n>[:<m>]

This causes the subject string to be passed to regexec() using the REG_STARTEND option, which uses offsets to specify which part of the string is searched. If only one number is given, the end offset is passed as the end of the subject string. For more detail of REG_STARTEND, see the pcre2posix documentation. If the subject string contains binary zeros (coded as escapes such as \x{00} because pcre2test does not support actual binary zeros in its input), you must use posix_startend to specify its length.

Setting match controls

The following modifiers affect the matching process or request additional information.
Some of them may also be specified on a pattern line (see above), in which case they apply to every subject line that is matched against that pattern, but can be overridden by modifiers on the subject.

  aftertext                   show text after match
  allaftertext                show text after captures
  allcaptures                 show all captures
  allvector                   show the entire ovector
  allusedtext                 show all consulted text (non-JIT only)
  altglobal                   alternative global matching
  callout_capture             show captures at callout time
  callout_data=<n>            set a value to pass via callouts
  callout_error=<n>[:<m>]     control callout error
  callout_extra               show extra callout information
  callout_fail=<n>[:<m>]      control callout failure
  callout_no_where            do not show position of a callout
  callout_none                do not supply a callout function
  copy=<number or name>       copy captured substring
  depth_limit=<n>             set a depth limit
  dfa                         use pcre2_dfa_match()
  find_limits                 find heap, match and depth limits
  find_limits_noheap          find match and depth limits
  get=<number or name>        extract captured substring
  getall                      extract all captured substrings
  /g  global                  global matching
  heapframes_size             show match data heapframes size
  heap_limit=<n>              set a limit on heap memory (Kbytes)
  jitstack=<n>                set size of JIT stack
  mark                        show mark values
  match_limit=<n>             set a match limit
  memory                      show heap memory usage
  null_context                match with a NULL context
  null_replacement            substitute with NULL replacement
  null_subject                match with NULL subject
  offset=<n>                  set starting offset
  offset_limit=<n>            set offset limit
  ovector=<n>                 set size of output vector
  recursion_limit=<n>         obsolete synonym for depth_limit
  replace=<string>            specify a replacement string
  startchar                   show startchar when relevant
  startoffset=<n>             same as offset=<n>
  substitute_callout          use substitution callouts
  substitute_extended         use PCRE2_SUBSTITUTE_EXTENDED
  substitute_literal          use PCRE2_SUBSTITUTE_LITERAL
  substitute_matched          use PCRE2_SUBSTITUTE_MATCHED
  substitute_overflow_length  use PCRE2_SUBSTITUTE_OVERFLOW_LENGTH
  substitute_replacement_only use PCRE2_SUBSTITUTE_REPLACEMENT_ONLY
  substitute_skip=<n>         skip substitution number n
  substitute_stop=<n>         skip substitution number n and greater
  substitute_unknown_unset    use PCRE2_SUBSTITUTE_UNKNOWN_UNSET
  substitute_unset_empty      use PCRE2_SUBSTITUTE_UNSET_EMPTY
  zero_terminate              pass the subject as zero-terminated

The effects of these modifiers are described in the following sections. When matching via the POSIX wrapper API, the aftertext, allaftertext, and ovector subject modifiers work as described below. All other modifiers are either ignored, with a warning message, or cause an error.

Showing more text

The aftertext modifier requests that as well as outputting the part of the subject string that matched the entire pattern, pcre2test should in addition output the remainder of the subject string. This is useful for tests where the subject contains multiple copies of the same substring. The allaftertext modifier requests the same action for captured substrings as well as the main matched substring. In each case the remainder is output on the following line with a plus character following the capture number.

The allusedtext modifier requests that all the text that was consulted during a successful pattern match by the interpreter should be shown, for both full and partial matches. This feature is not supported for JIT matching, and if requested with JIT it is ignored (with a warning message). Setting this modifier affects the output if there is a lookbehind at the start of a match, or, for a complete match, a lookahead at the end, or if \K is used in the pattern. Characters that precede or follow the start and end of the actual match are indicated in the output by '<' or '>' characters underneath them.
Here is an example:

    re> /(?<=pqr)abc(?=xyz)/
  data> 123pqrabcxyz456\=allusedtext
   0: pqrabcxyz
      <<<   >>>
  data> 123pqrabcxy\=ph,allusedtext
  Partial match: pqrabcxy
                 <<<

The first, complete match shows that the matched string is "abc", with the preceding and following strings "pqr" and "xyz" having been consulted during the match (when processing the assertions). The partial match can indicate only the preceding string.

The startchar modifier requests that the starting character for the match be indicated, if it is different to the start of the matched string. The only time when this occurs is when \K has been processed as part of the match. In this situation, the output for the matched string is displayed from the starting character instead of from the match point, with circumflex characters under the earlier characters. For example:

    re> /abc\Kxyz/
  data> abcxyz\=startchar
   0: abcxyz
      ^^^

Unlike allusedtext, the startchar modifier can be used with JIT. However, these two modifiers are mutually exclusive.

Showing the value of all capture groups

The allcaptures modifier requests that the values of all potential captured parentheses be output after a match. By default, only those up to the highest one actually used in the match are output (corresponding to the return code from pcre2_match()). Groups that did not take part in the match are output as "<unset>". This modifier is not relevant for DFA matching (which does no capturing) and does not apply when replace is specified; it is ignored, with a warning message, if present.

Showing the entire ovector, for all outcomes

The allvector modifier requests that the entire ovector be shown, whatever the outcome of the match. Compare allcaptures, which shows only up to the maximum number of capture groups for the pattern, and then only for a successful complete non-DFA match.
This modifier, which acts after any match result, and also for DFA matching, provides a means of checking that there are no unexpected modifications to ovector fields. Before each match attempt, the ovector is filled with a special value, and if this is found in both elements of a capturing pair, "<unchanged>" is output. After a successful match, this applies to all groups after the maximum capture group for the pattern. In other cases it applies to the entire ovector. After a partial match, the first two elements are the only ones that should be set. After a DFA match, the amount of ovector that is used depends on the number of matches that were found. Testing pattern callouts A callout function is supplied when pcre2test calls the library matching functions, unless callout_none is specified. Its behaviour can be controlled by various modifiers listed above whose names begin with callout_. Details are given in the section entitled "Callouts" below. Testing callouts from pcre2_substitute() is described separately in "Testing the substitution function" below. Finding all matches in a string Searching for all possible matches within a subject can be requested by the global or altglobal modifier. After finding a match, the matching function is called again to search the remainder of the subject. The difference between global and altglobal is that the former uses the start_offset argument to pcre2_match() or pcre2_dfa_match() to start searching at a new point within the entire string (which is what Perl does), whereas the latter passes over a shortened subject. This makes a difference to the matching process if the pattern begins with a lookbehind assertion (including \b or \B). If an empty string is matched, the next match is done with the PCRE2_NOTEMPTY_ATSTART and PCRE2_ANCHORED flags set, in order to search for another, non-empty, match at the same point in the subject. If this match fails, the start offset is advanced, and the normal match is retried. 
This imitates the way Perl handles such cases when using the /g modifier or the split() function. Normally, the start offset is advanced by one character, but if the newline convention recognizes CRLF as a newline, and the current character is CR followed by LF, an advance of two characters occurs.

Testing substring extraction functions

The copy and get modifiers can be used to test the pcre2_substring_copy_xxx() and pcre2_substring_get_xxx() functions. They can be given more than once, and each can specify a capture group name or number, for example:

  abcd\=copy=1,copy=3,get=G1

If the #subject command is used to set default copy and/or get lists, these can be unset by specifying a negative number to cancel all numbered groups and an empty name to cancel all named groups. The getall modifier tests pcre2_substring_list_get(), which extracts all captured substrings.

If the subject line is successfully matched, the substrings extracted by the convenience functions are output with C, G, or L after the string number instead of a colon. This is in addition to the normal full list. The string length (that is, the return from the extraction function) is given in parentheses after each substring, followed by the name when the extraction was by name.

Testing the substitution function

If the replace modifier is set, the pcre2_substitute() function is called instead of one of the matching functions (or after one call of pcre2_match() in the case of PCRE2_SUBSTITUTE_MATCHED). Note that replacement strings cannot contain commas, because a comma signifies the end of a modifier. This is not thought to be an issue in a test program.

Specifying a completely empty replacement string disables this modifier. However, it is possible to specify an empty replacement by providing a buffer length, as described below, for an otherwise empty replacement. Unlike subject strings, pcre2test does not process replacement strings for escape sequences.
In UTF mode, a replacement string is checked to see if it is a valid UTF-8 string. If so, it is correctly converted to a UTF string of the appropriate code unit width. If it is not a valid UTF-8 string, the individual code units are copied directly. This provides a means of passing an invalid UTF-8 string for testing purposes.

The following modifiers set options (in addition to the normal match options) for pcre2_substitute():

  global                      PCRE2_SUBSTITUTE_GLOBAL
  substitute_extended         PCRE2_SUBSTITUTE_EXTENDED
  substitute_literal          PCRE2_SUBSTITUTE_LITERAL
  substitute_matched          PCRE2_SUBSTITUTE_MATCHED
  substitute_overflow_length  PCRE2_SUBSTITUTE_OVERFLOW_LENGTH
  substitute_replacement_only PCRE2_SUBSTITUTE_REPLACEMENT_ONLY
  substitute_unknown_unset    PCRE2_SUBSTITUTE_UNKNOWN_UNSET
  substitute_unset_empty      PCRE2_SUBSTITUTE_UNSET_EMPTY

See the pcre2api documentation for details of these options. After a successful substitution, the modified string is output, preceded by the number of replacements. This may be zero if there were no matches. Here is a simple example of a substitution test:

  /abc/replace=xxx
  =abc=abc=
   1: =xxx=abc=
  =abc=abc=\=global
   2: =xxx=xxx=

Subject and replacement strings should be kept relatively short (fewer than 256 characters) for substitution tests, as fixed-size buffers are used. To make it easy to test for buffer overflow, if the replacement string starts with a number in square brackets, that number is passed to pcre2_substitute() as the size of the output buffer, with the replacement string starting at the next character. Here is an example that tests the edge case:

  /abc/
  123abc123\=replace=[10]XYZ
   1: 123XYZ123
  123abc123\=replace=[9]XYZ
  Failed: error -47: no more memory

The default action of pcre2_substitute() is to return PCRE2_ERROR_NOMEMORY when the output buffer is too small.
However, if the PCRE2_SUBSTITUTE_OVERFLOW_LENGTH option is set (by using the substitute_overflow_length modifier), pcre2_substitute() continues to go through the motions of matching and substituting (but not doing any callouts), in order to compute the size of buffer that is required. When this happens, pcre2test shows the required buffer length (which includes space for the trailing zero) as part of the error message. For example:

  /abc/substitute_overflow_length
  123abc123\=replace=[9]XYZ
  Failed: error -47: no more memory: 10 code units are needed

A replacement string is ignored with POSIX and DFA matching. Specifying partial matching provokes an error return ("bad option value") from pcre2_substitute().

Testing substitute callouts

If the substitute_callout modifier is set, a substitution callout function is set up. The null_context modifier must not be set, because the address of the callout function is passed in a match context. When the callout function is called (after each substitution), details of the input and output strings are output. For example:

  /abc/g,replace=<$0>,substitute_callout
  abcdefabcpqr
   1(1) Old 0 3 "abc" New 0 5 "<abc>"
   2(1) Old 6 9 "abc" New 8 13 "<abc>"
   2: <abc>def<abc>pqr

The first number on each callout line is the count of matches. The parenthesized number is the number of pairs that are set in the ovector (that is, one more than the number of capturing groups that were set). Then come the offsets of the old substring and its contents, followed by the same information for the replacement.

By default, the substitution callout function returns zero, which accepts the replacement and causes matching to continue if /g was used. Two further modifiers can be used to test other return values. If substitute_skip is set to a value greater than zero, the callout function returns +1 for the match of that number, and similarly substitute_stop returns -1. These cause the replacement to be rejected, and -1 causes no further matching to take place.
If either of them is set, substitute_callout is assumed. For example:

  /abc/g,replace=<$0>,substitute_skip=1
  abcdefabcpqr
   1(1) Old 0 3 "abc" New 0 5 "<abc> SKIPPED"
   2(1) Old 6 9 "abc" New 6 11 "<abc>"
   2: abcdef<abc>pqr
  abcdefabcpqr\=substitute_stop=1
   1(1) Old 0 3 "abc" New 0 5 "<abc> STOPPED"
   1: abcdefabcpqr

If both are set for the same number, stop takes precedence. Only a single skip or stop is supported, which is sufficient for testing that the feature works.

Setting the JIT stack size

The jitstack modifier provides a way of setting the maximum stack size that is used by the just-in-time optimization code. It is ignored if JIT optimization is not being used. The value is a number of kibibytes (units of 1024 bytes). Setting zero reverts to the default of 32KiB. Providing a stack that is larger than the default is necessary only for very complicated patterns. If jitstack is set non-zero on a subject line, it overrides any value that was set on the pattern.

Setting heap, match, and depth limits

The heap_limit, match_limit, and depth_limit modifiers set the appropriate limits in the match context. These values are ignored when the find_limits or find_limits_noheap modifier is specified.

Finding minimum limits

If the find_limits modifier is present on a subject line, pcre2test calls the relevant matching function several times, setting different values in the match context via pcre2_set_heap_limit(), pcre2_set_match_limit(), or pcre2_set_depth_limit() until it finds the smallest value for each parameter that allows the match to complete without a "limit exceeded" error. The match itself may succeed or fail. An alternative modifier, find_limits_noheap, omits the heap limit. This is used in the standard tests, because the minimum heap limit varies between systems.

If JIT is being used, only the match limit is relevant, and the other two are automatically omitted. When using this modifier, the pattern should not contain any limit settings such as (*LIMIT_MATCH=...)
within it. If such a setting is present and is lower than the minimum matching value, the minimum value cannot be found because pcre2_set_match_limit() etc. are only able to reduce the value of an in-pattern limit; they cannot increase it. For non-DFA matching, the minimum depth_limit number is a measure of how much nested backtracking happens (that is, how deeply the pattern's tree is searched). In the case of DFA matching, depth_limit controls the depth of recursive calls of the internal function that is used for handling pattern recursion, lookaround assertions, and atomic groups. For non-DFA matching, the match_limit number is a measure of the amount of backtracking that takes place, and learning the minimum value can be instructive. For most simple matches, the number is quite small, but for patterns with very large numbers of matching possibilities, it can become large very quickly with increasing length of subject string. In the case of DFA matching, match_limit controls the total number of calls, both recursive and non-recursive, to the internal matching function, thus controlling the overall amount of computing resource that is used. For both kinds of matching, the heap_limit number, which is in kibibytes (units of 1024 bytes), limits the amount of heap memory used for matching. Showing MARK names The mark modifier causes the names from backtracking control verbs that are returned from calls to pcre2_match() to be displayed. If a mark is returned for a match, non-match, or partial match, pcre2test shows it. For a match, it is on a line by itself, tagged with "MK:". Otherwise, it is added to the non-match message. Showing memory usage The memory modifier causes pcre2test to log the sizes of all heap memory allocation and freeing calls that occur during a call to pcre2_match() or pcre2_dfa_match(). 
In the latter case, heap memory is used only when a match requires more internal workspace than the default allocation on the stack, so in many cases there will be no output. No heap memory is allocated during matching with JIT. For this modifier to work, the null_context modifier must not be set on both the pattern and the subject, though it can be set on one or the other.

Showing the heap frame overall vector size

The heapframes_size modifier is relevant for matches using pcre2_match() without JIT. After a match has run (whether successful or not) the size, in bytes, of the allocated heap frames vector that is left attached to the match data block is shown. If the matching action involved several calls to pcre2_match() (for example, global matching or for timing) only the final value is shown. This modifier is ignored, with a warning, for POSIX or DFA matching. JIT matching does not use the heap frames vector, so the size is always zero, unless there was a previous non-JIT match. Note that specifying a size of zero for the output vector (see below) causes pcre2test to free its match data block (and associated heap frames vector) and allocate a new one.

Setting a starting offset

The offset modifier sets an offset in the subject string at which matching starts. Its value is a number of code units, not characters.

Setting an offset limit

The offset_limit modifier sets a limit for unanchored matches. If a match cannot be found starting at or before this offset in the subject, a "no match" return is given. The data value is a number of code units, not characters. When this modifier is used, the use_offset_limit modifier must have been set for the pattern; if not, an error is generated.

Setting the size of the output vector

The ovector modifier applies only to the subject line in which it appears, though of course it can also be used to set a default in a #subject command. It specifies the number of pairs of offsets that are available for storing matching information.
The default is 15. A value of zero is useful when testing the POSIX API because it causes regexec() to be called with a NULL capture vector. When not testing the POSIX API, a value of zero is used to cause pcre2_match_data_create_from_pattern() to be called, in order to create a new match block of exactly the right size for the pattern. (It is not possible to create a match block with a zero-length ovector; there is always at least one pair of offsets.) The old match data block is freed. Passing the subject as zero-terminated By default, the subject string is passed to a native API matching function with its correct length. In order to test the facility for passing a zero-terminated string, the zero_terminate modifier is provided. It causes the length to be passed as PCRE2_ZERO_TERMINATED. When matching via the POSIX interface, this modifier is ignored, with a warning. When testing pcre2_substitute(), this modifier also has the effect of passing the replacement string as zero-terminated. Passing a NULL context, subject, or replacement Normally, pcre2test passes a context block to pcre2_match(), pcre2_dfa_match(), pcre2_jit_match() or pcre2_substitute(). If the null_context modifier is set, however, NULL is passed. This is for testing that the matching and substitution functions behave correctly in this case (they use default values). This modifier cannot be used with the find_limits, find_limits_noheap, or substitute_callout modifiers. Similarly, for testing purposes, if the null_subject or null_replacement modifier is set, the subject or replacement string pointers are passed as NULL, respectively, to the relevant functions. THE ALTERNATIVE MATCHING FUNCTION By default, pcre2test uses the standard PCRE2 matching function, pcre2_match() to match each subject line. PCRE2 also supports an alternative matching function, pcre2_dfa_match(), which operates in a different way, and has some restrictions. 
The differences between the two functions are described in the pcre2matching documentation. If the dfa modifier is set, the alternative matching function is used. This function finds all possible matches at a given point in the subject. If, however, the dfa_shortest modifier is set, processing stops after the first match is found. This is always the shortest possible match.

DEFAULT OUTPUT FROM pcre2test

This section describes the output when the normal matching function, pcre2_match(), is being used. When a match succeeds, pcre2test outputs the list of captured substrings, starting with number 0 for the string that matched the whole pattern. Otherwise, it outputs "No match" when the return is PCRE2_ERROR_NOMATCH, or "Partial match:" followed by the partially matching substring when the return is PCRE2_ERROR_PARTIAL. (Note that this is the entire substring that was inspected during the partial match; it may include characters before the actual match start if a lookbehind assertion, \K, \b, or \B was involved.) For any other return, pcre2test outputs the PCRE2 negative error number and a short descriptive phrase. If the error is a failed UTF string check, the code unit offset of the start of the failing character is also output. Here is an example of an interactive pcre2test run.

  $ pcre2test
  PCRE2 version 10.22 2016-07-29

    re> /^abc(\d+)/
  data> abc123
   0: abc123
   1: 123
  data> xyz
  No match

Unset capturing substrings that are not followed by one that is set are not shown by pcre2test unless the allcaptures modifier is specified. In the following example, there are two capturing substrings, but when the first data line is matched, the second, unset substring is not shown. An "internal" unset substring is shown as "<unset>", as for the second data line.

    re> /(a)|(b)/
  data> a
   0: a
   1: a
  data> b
   0: b
   1: <unset>
   2: b

If the strings contain any non-printing characters, they are output as \xhh escapes if the value is less than 256 and UTF mode is not set.
Otherwise they are output as \x{hh...} escapes. See below for the definition of non-printing characters. If the aftertext modifier is set, the output for substring 0 is followed by the rest of the subject string, identified by "0+" like this:

    re> /cat/aftertext
  data> cataract
   0: cat
   0+ aract

If global matching is requested, the results of successive matching attempts are output in sequence, like this:

    re> /\Bi(\w\w)/g
  data> Mississippi
   0: iss
   1: ss
   0: iss
   1: ss
   0: ipp
   1: pp

"No match" is output only if the first match attempt fails. Here is an example of a failure message (the offset 4 that is specified by the offset modifier is past the end of the subject string):

    re> /xyz/
  data> xyz\=offset=4
  Error -24 (bad offset value)

Note that whereas patterns can be continued over several lines (a plain ">" prompt is used for continuations), subject lines may not. However, newlines can be included in a subject by means of the \n escape (or \r, \r\n, etc., depending on the newline sequence setting).

OUTPUT FROM THE ALTERNATIVE MATCHING FUNCTION

When the alternative matching function, pcre2_dfa_match(), is used, the output consists of a list of all the matches that start at the first point in the subject where there is at least one match. For example:

    re> /(tang|tangerine|tan)/
  data> yellow tangerine\=dfa
   0: tangerine
   1: tang
   2: tan

Using the normal matching function on this data finds only "tang". The longest matching string is always given first (and numbered zero). After a PCRE2_ERROR_PARTIAL return, the output is "Partial match:", followed by the partially matching substring. Note that this is the entire substring that was inspected during the partial match; it may include characters before the actual match start if a lookbehind assertion, \b, or \B was involved. (\K is not supported for DFA matching.) If global matching is requested, the search for further matches resumes at the end of the longest match.
For example:

    re> /(tang|tangerine|tan)/g
  data> yellow tangerine and tangy sultana\=dfa
   0: tangerine
   1: tang
   2: tan
   0: tang
   1: tan
   0: tan

The alternative matching function does not support substring capture, so the modifiers that are concerned with captured substrings are not relevant.

RESTARTING AFTER A PARTIAL MATCH

When the alternative matching function has given the PCRE2_ERROR_PARTIAL return, indicating that the subject partially matched the pattern, you can restart the match with additional subject data by means of the dfa_restart modifier. For example:

    re> /^\d?\d(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\d\d$/
  data> 23ja\=ps,dfa
  Partial match: 23ja
  data> n05\=dfa,dfa_restart
   0: n05

For further information about partial matching, see the pcre2partial documentation.

CALLOUTS

If the pattern contains any callout requests, pcre2test's callout function is called during matching unless callout_none is specified. This works with both matching functions, and with JIT, though there are some differences in behaviour. The output for callouts with numerical arguments and those with string arguments is slightly different.

Callouts with numerical arguments

By default, the callout function displays the callout number, the start and current positions in the subject text at the callout time, and the next pattern item to be tested. For example:

  --->pqrabcdef
    0    ^  ^     \d

This output indicates that callout number 0 occurred for a match attempt starting at the fourth character of the subject string, when the pointer was at the seventh character, and when the next pattern item was \d. Just one circumflex is output if the start and current positions are the same, or if the current position precedes the start position, which can happen if the callout is in a lookbehind assertion. Callouts numbered 255 are assumed to be automatic callouts, inserted as a result of the auto_callout pattern modifier.
In this case, instead of showing the callout number, the offset in the pattern, preceded by a plus, is output. For example:

    re> /\d?[A-E]\*/auto_callout
  data> E*
  --->E*
   +0 ^        \d?
   +3 ^        [A-E]
   +8 ^^       \*
  +10 ^ ^
   0: E*

If a pattern contains (*MARK) items, an additional line is output whenever a change of latest mark is passed to the callout function. For example:

    re> /a(*MARK:X)bc/auto_callout
  data> abc
  --->abc
   +0 ^        a
   +1 ^^       (*MARK:X)
  +10 ^^       b
  Latest Mark: X
  +11 ^ ^      c
  +12 ^  ^
   0: abc

The mark changes between matching "a" and "b", but stays the same for the rest of the match, so nothing more is output. If, as a result of backtracking, the mark reverts to being unset, the text "<unset>" is output.

Callouts with string arguments

The output for a callout with a string argument is similar, except that instead of outputting a callout number before the position indicators, the callout string and its offset in the pattern string are output before the reflection of the subject string, and the subject string is reflected for each callout. For example:

    re> /^ab(?C'first')cd(?C"second")ef/
  data> abcdefg
  Callout (7): 'first'
  --->abcdefg
      ^ ^         c
  Callout (20): "second"
  --->abcdefg
      ^   ^       e
   0: abcdef

Callout modifiers

The callout function in pcre2test returns zero (carry on matching) by default, but you can use a callout_fail modifier in a subject line to change this and other parameters of the callout (see below). If the callout_capture modifier is set, the current captured groups are output when a callout occurs. This is useful only for non-DFA matching, as pcre2_dfa_match() does not support capturing, so no captures are ever shown.

The normal callout output, showing the callout number or pattern offset (as described above), is suppressed if the callout_no_where modifier is set. When using the interpretive matching function pcre2_match() without JIT, setting the callout_extra modifier causes additional output from pcre2test's callout function to be generated.
For the first callout in a match attempt at a new starting position in the subject, "New match attempt" is output. If there has been a backtrack since the last callout (or start of matching if this is the first callout), "Backtrack" is output, followed by "No other matching paths" if the backtrack ended the previous match attempt. For example:

    re> /(a+)b/auto_callout,no_start_optimize,no_auto_possess
  data> aac\=callout_extra
  New match attempt
  --->aac
   +0 ^       (
   +1 ^       a+
   +3 ^ ^     )
   +4 ^ ^     b
  Backtrack
  --->aac
   +3 ^^      )
   +4 ^^      b
  Backtrack
  No other matching paths
  New match attempt
  --->aac
   +0  ^      (
   +1  ^      a+
   +3  ^^     )
   +4  ^^     b
  Backtrack
  No other matching paths
  New match attempt
  --->aac
   +0   ^     (
   +1   ^     a+
  Backtrack
  No other matching paths
  New match attempt
  --->aac
   +0    ^    (
   +1    ^    a+
  No match

Notice that various optimizations must be turned off if you want all possible matching paths to be scanned. If no_start_optimize is not used, there is an immediate "no match", without any callouts, because the starting optimization fails to find "b" in the subject, which it knows must be present for any match. If no_auto_possess is not used, the "a+" item is turned into "a++", which reduces the number of backtracks. The callout_extra modifier has no effect if used with the DFA matching function, or with JIT.

Return values from callouts

The default return from the callout function is zero, which allows matching to continue. The callout_fail modifier can be given one or two numbers. If there is only one number, 1 is returned instead of 0 (causing matching to backtrack) when a callout of that number is reached. If two numbers (<n>:<m>) are given, 1 is returned when callout <n> is reached and there have been at least <m> callouts. The callout_error modifier is similar, except that PCRE2_ERROR_CALLOUT is returned, causing the entire matching process to be aborted. If both these modifiers are set for the same callout number, callout_error takes precedence.
Note that callouts with string arguments are always given the number zero. The callout_data modifier can be given an unsigned or a negative number. This is set as the "user data" that is passed to the matching function, and passed back when the callout function is invoked. Any value other than zero is used as a return from pcre2test's callout function. Inserting callouts can be helpful when using pcre2test to check complicated regular expressions. For further information about callouts, see the pcre2callout documentation. NON-PRINTING CHARACTERS When pcre2test is outputting text in the compiled version of a pattern, bytes other than 32-126 are always treated as non-printing characters and are therefore shown as hex escapes. When pcre2test is outputting text that is a matched part of a subject string, it behaves in the same way, unless a different locale has been set for the pattern (using the locale modifier). In this case, the isprint() function is used to distinguish printing and non-printing characters. SAVING AND RESTORING COMPILED PATTERNS It is possible to save compiled patterns on disc or elsewhere, and reload them later, subject to a number of restrictions. JIT data cannot be saved. The host on which the patterns are reloaded must be running the same version of PCRE2, with the same code unit width, and must also have the same endianness, pointer width and PCRE2_SIZE type. Before compiled patterns can be saved they must be serialized, that is, converted to a stream of bytes. A single byte stream may contain any number of compiled patterns, but they must all use the same character tables. A single copy of the tables is included in the byte stream (its size is 1088 bytes). The functions whose names begin with pcre2_serialize_ are used for serializing and de-serializing. They are described in the pcre2serialize documentation. In this section we describe the features of pcre2test that can be used to test these functions. 
Note that "serialization" in PCRE2 does not convert compiled patterns to an abstract format like Java or .NET. It just makes a reloadable byte code stream. Hence the restrictions on reloading mentioned above. In pcre2test, when a pattern with the push modifier is successfully compiled, it is pushed onto a stack of compiled patterns, and pcre2test expects the next line to contain a new pattern (or command) instead of a subject line. By contrast, the pushcopy modifier causes a copy of the compiled pattern to be stacked, leaving the original available for immediate matching. By using push and/or pushcopy, a number of patterns can be compiled and retained. These modifiers are incompatible with posix, and control modifiers that act at match time are ignored (with a message) for the stacked patterns. The jitverify modifier applies only at compile time. The command #save <filename> causes all the stacked patterns to be serialized and the result written to the named file. Afterwards, all the stacked patterns are freed. The command #load <filename> reads the data in the file, and then arranges for it to be de-serialized, with the resulting compiled patterns added to the pattern stack. The pattern on the top of the stack can be retrieved by the #pop command, which must be followed by lines of subjects that are to be matched with the pattern, terminated as usual by an empty line or end of file. This command may be followed by a modifier list containing only control modifiers that act after a pattern has been compiled. In particular, hex, posix, posix_nosub, push, and pushcopy are not allowed, nor are any option-setting modifiers. The JIT modifiers are, however, permitted. Here is an example that saves and reloads two patterns.

  /abc/push
  /xyz/push
  #save tempfile
  #load tempfile
  #pop info
  xyz
  #pop jit,bincode
  abc

If jitverify is used with #pop, it does not automatically imply jit, which is different behaviour from when it is used on a pattern.
The #popcopy command is analogous to the pushcopy modifier in that it makes a copy of the topmost stack pattern current, leaving the original still on the stack. SEE ALSO pcre2(3), pcre2api(3), pcre2callout(3), pcre2jit(3), pcre2matching(3), pcre2partial(3), pcre2pattern(3), pcre2serialize(3). AUTHOR Philip Hazel Retired from University Computing Service Cambridge, England. REVISION Last updated: 24 April 2024 Copyright (c) 1997-2024 University of Cambridge. PCRE 10.44 24 April 2024 PCRE2TEST(1)
pcre2test - a program for testing Perl-compatible regular expressions.
pcre2test [options] [input file [output file]] pcre2test is a test program for the PCRE2 regular expression libraries, but it can also be used for experimenting with regular expressions. This document describes the features of the test program; for details of the regular expressions themselves, see the pcre2pattern documentation. For details of the PCRE2 library function calls and their options, see the pcre2api documentation. The input for pcre2test is a sequence of regular expression patterns and subject strings to be matched. There are also command lines for setting defaults and controlling some special actions. The output shows the result of each match attempt. Modifiers on external or internal command lines, the patterns, and the subject lines specify PCRE2 function options, control how the subject is processed, and what output is produced. There are many obscure modifiers, some of which are specifically designed for use in conjunction with the test script and data files that are distributed as part of PCRE2. All the modifiers are documented here, some without much justification, but many of them are unlikely to be of use except when testing the libraries. PCRE2's 8-BIT, 16-BIT AND 32-BIT LIBRARIES Different versions of the PCRE2 library can be built to support character strings that are encoded in 8-bit, 16-bit, or 32-bit code units. One, two, or all three of these libraries may be simultaneously installed. The pcre2test program can be used to test all the libraries. However, its own input and output are always in 8-bit format. When testing the 16-bit or 32-bit libraries, patterns and subject strings are converted to 16-bit or 32-bit format before being passed to the library functions. Results are converted back to 8-bit code units for output. In the rest of this document, the names of library functions and structures are given in generic form, for example, pcre2_compile(). The actual names used in the libraries have a suffix _8, _16, or _32, as appropriate. 
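The input flow described above — a pattern line followed by subject lines — can be sketched with a minimal input file (a hypothetical example, not taken from the distributed test files):

```
/ab+c/i
    ABBC
    xyz
```

The pattern is given between delimiters (here slashes), optionally followed by modifiers such as i; the indented lines that follow are subjects, and an empty line or end of file ends the list. Here the first subject matches case-insensitively and the second does not.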
INPUT ENCODING Input to pcre2test is processed line by line, either by calling the C library's fgets() function, or via the libreadline or libedit library. In some Windows environments character 26 (hex 1A) causes an immediate end of file, and no further data is read, so this character should be avoided unless you really want that action. The input is processed using C's string functions, so must not contain binary zeros, even though in Unix-like environments, fgets() treats any bytes other than newline as data characters. An error is generated if a binary zero is encountered. By default subject lines are processed for backslash escapes, which makes it possible to include any data value in strings that are passed to the library for matching. For patterns, there is a facility for specifying some or all of the 8-bit input characters as hexadecimal pairs, which makes it possible to include binary zeros. Input for the 16-bit and 32-bit libraries When testing the 16-bit or 32-bit libraries, there is a need to be able to generate character code points greater than 255 in the strings that are passed to the library. For subject lines, backslash escapes can be used. In addition, when the utf modifier (see "Setting compilation options" below) is set, the pattern and any following subject lines are interpreted as UTF-8 strings and translated to UTF-16 or UTF-32 as appropriate. For non-UTF testing of wide characters, the utf8_input modifier can be used. This is mutually exclusive with utf, and is allowed only in 16-bit or 32-bit mode. It causes the pattern and following subject lines to be treated as UTF-8 according to the original definition (RFC 2279), which allows for character values up to 0x7fffffff. Each character is placed in one 16-bit or 32-bit code unit (in the 16-bit case, values greater than 0xffff cause an error to occur). 
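The original UTF-8 definition (RFC 2279) referred to above allows up to six bytes per character, covering values up to 0x7fffffff — more than modern UTF-8, which is why a built-in codec cannot produce these sequences. A minimal Python sketch of that encoding, for illustration only:

```python
def utf8_orig_encode(cp):
    """Encode a code point using original-definition UTF-8 (RFC 2279),
    which allows up to 6 bytes and values up to 0x7fffffff."""
    if cp < 0:
        raise ValueError("negative code point")
    if cp < 0x80:
        return bytes([cp])
    # (byte count, exclusive upper limit, leading-byte bits)
    for nbytes, limit, lead in ((2, 0x800, 0xC0), (3, 0x10000, 0xE0),
                                (4, 0x200000, 0xF0), (5, 0x4000000, 0xF8),
                                (6, 0x80000000, 0xFC)):
        if cp < limit:
            tail = []
            for _ in range(nbytes - 1):
                tail.append(0x80 | (cp & 0x3F))  # continuation byte
                cp >>= 6
            return bytes([lead | cp] + tail[::-1])
    raise ValueError("value exceeds 0x7fffffff")
```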
UTF-8 (in its original definition) is not capable of encoding values greater than 0x7fffffff, but such values can be handled by the 32-bit library. When testing this library in non-UTF mode with utf8_input set, if any character is preceded by the byte 0xff (which is an invalid byte in UTF-8) 0x80000000 is added to the character's value. This is the only way of passing such code points in a pattern string. For subject strings, using an escape sequence is preferable. COMMAND LINE OPTIONS -8 If the 8-bit library has been built, this option causes it to be used (this is the default). If the 8-bit library has not been built, this option causes an error. -16 If the 16-bit library has been built, this option causes it to be used. If the 8-bit library has not been built, this is the default. If the 16-bit library has not been built, this option causes an error. -32 If the 32-bit library has been built, this option causes it to be used. If no other library has been built, this is the default. If the 32-bit library has not been built, this option causes an error. -ac Behave as if each pattern has the auto_callout modifier, that is, insert automatic callouts into every pattern that is compiled. -AC As for -ac, but in addition behave as if each subject line has the callout_extra modifier, that is, show additional information from callouts. -b Behave as if each pattern has the fullbincode modifier; the full internal binary form of the pattern is output after compilation. -C Output the version number of the PCRE2 library, and all available information about the optional features that are included, and then exit with zero exit code. All other options are ignored. If both -C and -LM are present, whichever is first is recognized. -C option Output information about a specific build-time option, then exit. This functionality is intended for use in scripts such as RunTest. 
The following options output the value and set the exit code as indicated:

  ebcdic-nl  the code for LF (= NL) in an EBCDIC environment:
               0x15 or 0x25
               0 if used in an ASCII environment
               exit code is always 0
  linksize   the configured internal link size (2, 3, or 4)
               exit code is set to the link size
  newline    the default newline setting:
               CR, LF, CRLF, ANYCRLF, ANY, or NUL
               exit code is always 0
  bsr        the default setting for what \R matches:
               ANYCRLF or ANY
               exit code is always 0

The following options output 1 for true or 0 for false, and set the exit code to the same value:

  backslash-C  \C is supported (not locked out)
  ebcdic       compiled for an EBCDIC environment
  jit          just-in-time support is available
  pcre2-16     the 16-bit library was built
  pcre2-32     the 32-bit library was built
  pcre2-8      the 8-bit library was built
  unicode      Unicode support is available

If an unknown option is given, an error message is output; the exit code is 0. -d Behave as if each pattern has the debug modifier; the internal form and information about the compiled pattern is output after compilation; -d is equivalent to -b -i. -dfa Behave as if each subject line has the dfa modifier; matching is done using the pcre2_dfa_match() function instead of the default pcre2_match(). -error number[,number,...] Call pcre2_get_error_message() for each of the error numbers in the comma-separated list, display the resulting messages on the standard output, then exit with zero exit code. The numbers may be positive or negative. This is a convenience facility for PCRE2 maintainers. -help Output a brief summary of these options and then exit. -i Behave as if each pattern has the info modifier; information about the compiled pattern is given after compilation. -jit Behave as if each pattern line has the jit modifier; after successful compilation, each pattern is passed to the just-in-time compiler, if available.
-jitfast Behave as if each pattern line has the jitfast modifier; after successful compilation, each pattern is passed to the just-in-time compiler, if available, and each subject line is passed directly to the JIT matcher via its "fast path". -jitverify Behave as if each pattern line has the jitverify modifier; after successful compilation, each pattern is passed to the just-in-time compiler, if available, and the use of JIT for matching is verified. -LM List modifiers: write a list of available pattern and subject modifiers to the standard output, then exit with zero exit code. All other options are ignored. If both -C and any -Lx options are present, whichever is first is recognized. -LP List properties: write a list of recognized Unicode properties to the standard output, then exit with zero exit code. All other options are ignored. If both -C and any -Lx options are present, whichever is first is recognized. -LS List scripts: write a list of recognized Unicode script names to the standard output, then exit with zero exit code. All other options are ignored. If both -C and any -Lx options are present, whichever is first is recognized. -pattern modifier-list Behave as if each pattern line contains the given modifiers. -q Do not output the version number of pcre2test at the start of execution. -S size On Unix-like systems, set the size of the run-time stack to size mebibytes (units of 1024*1024 bytes). -subject modifier-list Behave as if each subject line contains the given modifiers. -t Run each compile and match many times with a timer, and output the resulting times per compile or match. When JIT is used, separate times are given for the initial compile and the JIT compile. You can control the number of iterations that are used for timing by following -t with a number (as a separate item on the command line). For example, "-t 1000" iterates 1000 times. The default is to iterate 500,000 times. 
-tm This is like -t except that it times only the matching phase, not the compile phase. -T -TM These behave like -t and -tm, but in addition, at the end of a run, the total times for all compiles and matches are output. -version Output the PCRE2 version number and then exit.
mysql_config
mysql_config provides you with useful information for compiling your MySQL client and connecting it to MySQL. It is a shell script, so it is available only on Unix and Unix-like systems. Note pkg-config can be used as an alternative to mysql_config for obtaining information such as compiler flags or link libraries required to compile MySQL applications. For more information, see Building C API Client Programs Using pkg-config[1]. mysql_config supports the following options. • --cflags C Compiler flags to find include files and critical compiler flags and defines used when compiling the libmysqlclient library. The options returned are tied to the specific compiler that was used when the library was created and might clash with the settings for your own compiler. Use --include for more portable options that contain only include paths. • --cxxflags Like --cflags, but for C++ compiler flags. • --include Compiler options to find MySQL include files. • --libs Libraries and options required to link with the MySQL client library. • --libs_r Libraries and options required to link with the thread-safe MySQL client library. In MySQL 8.3, all client libraries are thread-safe, so this option need not be used. The --libs option can be used in all cases. • --plugindir The default plugin directory path name, defined when configuring MySQL. • --port The default TCP/IP port number, defined when configuring MySQL. • --socket The default Unix socket file, defined when configuring MySQL. • --variable=var_name Display the value of the named configuration variable. Permitted var_name values are pkgincludedir (the header file directory), pkglibdir (the library directory), and plugindir (the plugin directory). • --version Version number for the MySQL distribution. 
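In a build system, the same information is typically captured once with make's $(shell ...) function rather than with per-command backticks. A hypothetical Makefile fragment (flags and paths depend on your installation):

```
# Hypothetical Makefile fragment; the output of mysql_config varies by install.
CFLAGS += $(shell mysql_config --cflags)
LDLIBS += $(shell mysql_config --libs)

progname: progname.o
	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)
```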
If you invoke mysql_config with no options, it displays a list of all options that it supports, and their values:

$> mysql_config
Usage: /usr/local/mysql/bin/mysql_config [options]
Options:
        --cflags         [-I/usr/local/mysql/include/mysql -mcpu=pentiumpro]
        --cxxflags       [-I/usr/local/mysql/include/mysql -mcpu=pentiumpro]
        --include        [-I/usr/local/mysql/include/mysql]
        --libs           [-L/usr/local/mysql/lib/mysql -lmysqlclient
                          -lpthread -lm -lrt -lssl -lcrypto -ldl]
        --libs_r         [-L/usr/local/mysql/lib/mysql -lmysqlclient_r
                          -lpthread -lm -lrt -lssl -lcrypto -ldl]
        --plugindir      [/usr/local/mysql/lib/plugin]
        --socket         [/tmp/mysql.sock]
        --port           [3306]
        --version        [5.8.0-m17]
        --variable=VAR   VAR is one of:
                pkgincludedir [/usr/local/mysql/include]
                pkglibdir     [/usr/local/mysql/lib]
                plugindir     [/usr/local/mysql/lib/plugin]

You can use mysql_config within a command line using backticks to include the output that it produces for particular options. For example, to compile and link a MySQL client program, use mysql_config as follows:

gcc -c `mysql_config --cflags` progname.c
gcc -o progname progname.o `mysql_config --libs`

COPYRIGHT Copyright © 1997, 2023, Oracle and/or its affiliates. This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/. NOTES 1.
Building C API Client Programs Using pkg-config https://dev.mysql.com/doc/c-api/8.2/en/c-api-building-clients-pkg-config.html SEE ALSO For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/. AUTHOR Oracle Corporation (http://dev.mysql.com/). MySQL 8.3 11/23/2023 MYSQL_CONFIG(1)
mysql_config - display options for compiling clients
mysql_config options
cjpeg
cjpeg compresses the named image file, or the standard input if no file is named, and produces a JPEG/JFIF file on the standard output. The currently supported input file formats are: PPM (PBMPLUS color format), PGM (PBMPLUS grayscale format), BMP, GIF, and Targa.
cjpeg - compress an image file to a JPEG file
cjpeg [ options ] [ filename ]
All switch names may be abbreviated; for example, -grayscale may be written -gray or -gr. Most of the "basic" switches can be abbreviated to as little as one letter. Upper and lower case are equivalent (thus -BMP is the same as -bmp). British spellings are also accepted (e.g., -greyscale), though for brevity these are not mentioned below. The basic switches are: -quality N[,...] Scale quantization tables to adjust image quality. Quality is 0 (worst) to 100 (best); default is 75. (See below for more info.) -grayscale Create monochrome JPEG file from color input. By saying -grayscale, you'll get a smaller JPEG file that takes less time to process. -rgb Create RGB JPEG file. Using this switch suppresses the conversion from RGB colorspace input to the default YCbCr JPEG colorspace. -optimize Perform optimization of entropy encoding parameters. Without this, default encoding parameters are used. -optimize usually makes the JPEG file a little smaller, but cjpeg runs somewhat slower and needs much more memory. Image quality and speed of decompression are unaffected by -optimize. -progressive Create progressive JPEG file (see below). -targa Input file is Targa format. Targa files that contain an "identification" field will not be automatically recognized by cjpeg; for such files you must specify -targa to make cjpeg treat the input as Targa format. For most Targa files, you won't need this switch. The -quality switch lets you trade off compressed file size against quality of the reconstructed image: the higher the quality setting, the larger the JPEG file, and the closer the output image will be to the original input. Normally you want to use the lowest quality setting (smallest file) that decompresses into something visually indistinguishable from the original image. For this purpose the quality setting should generally be between 50 and 95 (the default is 75) for photographic images. 
If you see defects at -quality 75, then go up 5 or 10 counts at a time until you are happy with the output image. (The optimal setting will vary from one image to another.) -quality 100 will generate a quantization table of all 1's, minimizing loss in the quantization step (but there is still information loss in subsampling, as well as roundoff error.) For most images, specifying a quality value above about 95 will increase the size of the compressed file dramatically, and while the quality gain from these higher quality values is measurable (using metrics such as PSNR or SSIM), it is rarely perceivable by human vision. In the other direction, quality values below 50 will produce very small files of low image quality. Settings around 5 to 10 might be useful in preparing an index of a large image library, for example. Try -quality 2 (or so) for some amusing Cubist effects. (Note: quality values below about 25 generate 2-byte quantization tables, which are considered optional in the JPEG standard. cjpeg emits a warning message when you give such a quality value, because some other JPEG programs may be unable to decode the resulting file. Use -baseline if you need to ensure compatibility at low quality values.) The -quality option has been extended in this version of cjpeg to support separate quality settings for luminance and chrominance (or, in general, separate settings for every quantization table slot.) The principle is the same as chrominance subsampling: since the human eye is more sensitive to spatial changes in brightness than spatial changes in color, the chrominance components can be quantized more than the luminance components without incurring any visible image quality loss. However, unlike subsampling, this feature reduces data in the frequency domain instead of the spatial domain, which allows for more fine- grained control. This option is useful in quality-sensitive applications, for which the artifacts generated by subsampling may be unacceptable. 
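The quality-to-quantization behaviour described above follows the classic IJG scaling (as implemented in libjpeg's jcparam.c). A Python sketch of the arithmetic, for illustration:

```python
def jpeg_quality_scaling(quality):
    # Map quality 1..100 to the percentage by which the sample
    # quantization tables are scaled (IJG convention).
    quality = max(1, min(100, quality))
    if quality < 50:
        return 5000 // quality
    return 200 - quality * 2

def scale_qtable_entry(base, quality, force_baseline=False):
    # Scale one quantization coefficient; -baseline clamps values to 255
    # (8 bits), otherwise 2-byte entries up to 32767 are allowed.
    val = (base * jpeg_quality_scaling(quality) + 50) // 100
    return max(1, min(255 if force_baseline else 32767, val))
```

At quality 100 the scale factor is 0, so every entry clamps to 1 (the all-1's table mentioned above); roughly below quality 25 the scaling pushes the largest default entries past 255, which is what forces the 2-byte quantization tables that trigger cjpeg's compatibility warning.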
The -quality option accepts a comma-separated list of parameters, which respectively refer to the quality levels that should be assigned to the quantization table slots. If there are more q-table slots than parameters, then the last parameter is replicated. Thus, if only one quality parameter is given, this is used for both luminance and chrominance (slots 0 and 1, respectively), preserving the legacy behavior of cjpeg v6b and prior. More (or customized) quantization tables can be set with the -qtables option and assigned to components with the -qslots option (see the "wizard" switches below.) JPEG files generated with separate luminance and chrominance quality are fully compliant with standard JPEG decoders. CAUTION: For this setting to be useful, be sure to pass an argument of -sample 1x1 to cjpeg to disable chrominance subsampling. Otherwise, the default subsampling level (2x2, AKA "4:2:0") will be used. The -progressive switch creates a "progressive JPEG" file. In this type of JPEG file, the data is stored in multiple scans of increasing quality. If the file is being transmitted over a slow communications link, the decoder can use the first scan to display a low-quality image very quickly, and can then improve the display with each subsequent scan. The final image is exactly equivalent to a standard JPEG file of the same quality setting, and the total file size is about the same --- often a little smaller. Switches for advanced users: -precision N Create JPEG file with N-bit data precision. N is 8, 12, or 16; default is 8. If N is 16, then -lossless must also be specified. Caution: 12-bit and 16-bit JPEG is not yet widely implemented, so many decoders will be unable to view a 12-bit or 16-bit JPEG file at all. -lossless psv[,Pt] Create a lossless JPEG file using the specified predictor selection value (1 through 7) and optional point transform (0 through precision - 1, where precision is the JPEG data precision in bits). 
A point transform value of 0 (the default) is necessary in order to create a fully lossless JPEG file. (A non-zero point transform value right-shifts the input samples by the specified number of bits, which is effectively a form of lossy color quantization.) Caution: lossless JPEG is not yet widely implemented, so many decoders will be unable to view a lossless JPEG file at all. In most cases, compressing and decompressing a lossless JPEG file is considerably slower than compressing and decompressing a lossy JPEG file, and lossless JPEG files are much larger than lossy JPEG files. Also note that the following features will be unavailable when compressing or decompressing a lossless JPEG file: - Quality/quantization table selection - Color space conversion (the JPEG image will use the same color space as the input image) - Color quantization - DCT/IDCT algorithm selection - Smoothing - Downsampling/upsampling - IDCT scaling - Partial image decompression - Transformations using jpegtran Any switches used to enable or configure those features will be ignored. -arithmetic Use arithmetic coding. Caution: arithmetic coded JPEG is not yet widely implemented, so many decoders will be unable to view an arithmetic coded JPEG file at all. -dct int Use accurate integer DCT method (default). -dct fast Use less accurate integer DCT method [legacy feature]. When the Independent JPEG Group's software was first released in 1991, the compression time for a 1-megapixel JPEG image on a mainstream PC was measured in minutes. Thus, the fast integer DCT algorithm provided noticeable performance benefits. On modern CPUs running libjpeg-turbo, however, the compression time for a 1-megapixel JPEG image is measured in milliseconds, and thus the performance benefits of the fast algorithm are much less noticeable. On modern x86/x86-64 CPUs that support AVX2 instructions, the fast and int methods have similar performance. 
On other types of CPUs, the fast method is generally about 5-15% faster than the int method. For quality levels of 90 and below, there should be little or no perceptible quality difference between the two algorithms. For quality levels above 90, however, the difference between the fast and int methods becomes more pronounced. With quality=97, for instance, the fast method incurs generally about a 1-3 dB loss in PSNR relative to the int method, but this can be larger for some images. Do not use the fast method with quality levels above 97. The algorithm often degenerates at quality=98 and above and can actually produce a more lossy image than if lower quality levels had been used. Also, in libjpeg-turbo, the fast method is not fully accelerated for quality levels above 97, so it will be slower than the int method. -dct float Use floating-point DCT method [legacy feature]. The float method does not produce significantly more accurate results than the int method, and it is much slower. The float method may also give different results on different machines due to varying roundoff behavior, whereas the integer methods should give the same results on all machines. -icc file Embed ICC color management profile contained in the specified file. -restart N Emit a JPEG restart marker every N MCU rows, or every N MCU blocks (samples in lossless mode) if "B" is attached to the number. -restart 0 (the default) means no restart markers. -smooth N Smooth the input image to eliminate dithering noise. N, ranging from 1 to 100, indicates the strength of smoothing. 0 (the default) means no smoothing. -maxmemory N Set limit for amount of memory to use in processing large images. Value is in thousands of bytes, or millions of bytes if "M" is attached to the number. For example, -max 4m selects 4000000 bytes. If more space is needed, an error will occur. -outfile name Send output image to the named file, not to standard output. -memdst Compress to memory instead of a file. 
This feature was implemented mainly as a way of testing the in-memory destination manager (jpeg_mem_dest()), but it is also useful for benchmarking, since it reduces the I/O overhead. -report Report compression progress. -strict Treat all warnings as fatal. Enabling this option will cause the compressor to abort if an LZW-compressed GIF input image contains incomplete or corrupt image data. -verbose Enable debug printout. More -v's give more output. Also, version information is printed at startup. -debug Same as -verbose. -version Print version information and exit. The -restart option inserts extra markers that allow a JPEG decoder to resynchronize after a transmission error. Without restart markers, any damage to a compressed file will usually ruin the image from the point of the error to the end of the image; with restart markers, the damage is usually confined to the portion of the image up to the next restart marker. Of course, the restart markers occupy extra space. We recommend -restart 1 for images that will be transmitted across unreliable networks such as Usenet. The -smooth option filters the input to eliminate fine-scale noise. This is often useful when converting dithered images to JPEG: a moderate smoothing factor of 10 to 50 gets rid of dithering patterns in the input file, resulting in a smaller JPEG file and a better-looking image. Too large a smoothing factor will visibly blur the image, however. Switches for wizards: -baseline Force baseline-compatible quantization tables to be generated. This clamps quantization values to 8 bits even at low quality settings. (This switch is poorly named, since it does not ensure that the output is actually baseline JPEG. For example, you can use -baseline and -progressive together.) -qtables file Use the quantization tables given in the specified text file. -qslots N[,...] Select which quantization table to use for each color component. -sample HxV[,...] Set JPEG sampling factors for each color component. 
-scans file Use the scan script given in the specified text file. The "wizard" switches are intended for experimentation with JPEG. If you don't know what you are doing, don't use them. These switches are documented further in the file wizard.txt.
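The restart markers discussed above (RST0 through RST7, the byte pairs 0xFF 0xD0 .. 0xFF 0xD7) are the markers that appear inside entropy-coded data, which makes them easy to locate. A rough Python sketch, assuming a well-formed JPEG stream in which 0xFF inside compressed data is otherwise followed by a 0x00 stuffing byte:

```python
def restart_marker_positions(data):
    # Return byte offsets of JPEG restart markers (0xFF 0xD0 .. 0xFF 0xD7).
    # Rough sketch: it scans the raw stream rather than parsing segments.
    return [i for i in range(len(data) - 1)
            if data[i] == 0xFF and 0xD0 <= data[i + 1] <= 0xD7]
```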
This example compresses the PPM file foo.ppm with a quality factor of 60 and saves the output as foo.jpg: cjpeg -quality 60 foo.ppm > foo.jpg HINTS Color GIF files are not the ideal input for JPEG; JPEG is really intended for compressing full-color (24-bit) images. In particular, don't try to convert cartoons, line drawings, and other images that have only a few distinct colors. GIF works great on these, JPEG does not. If you want to convert a GIF to JPEG, you should experiment with cjpeg's -quality and -smooth options to get a satisfactory conversion. -smooth 10 or so is often helpful. Avoid running an image through a series of JPEG compression/decompression cycles. Image quality loss will accumulate; after ten or so cycles the image may be noticeably worse than it was after one cycle. It's best to use a lossless format while manipulating an image, then convert to JPEG format when you are ready to file the image away. The -optimize option to cjpeg is worth using when you are making a "final" version for posting or archiving. It's also a win when you are using low quality settings to make very small JPEG files; the percentage improvement is often a lot more than it is on larger files. (At present, -optimize mode is always selected when generating progressive JPEG files.) ENVIRONMENT JPEGMEM If this environment variable is set, its value is the default memory limit. The value is specified as described for the -maxmemory switch. JPEGMEM overrides the default value specified when the program was compiled, and itself is overridden by an explicit -maxmemory. SEE ALSO djpeg(1), jpegtran(1), rdjpgcom(1), wrjpgcom(1) ppm(5), pgm(5) Wallace, Gregory K. "The JPEG Still Picture Compression Standard", Communications of the ACM, April 1991 (vol. 34, no. 4), pp. 30-44. 
AUTHOR Independent JPEG Group This file was modified by The libjpeg-turbo Project to include only information relevant to libjpeg-turbo, to wordsmith certain sections, and to describe features not present in libjpeg. ISSUES Not all variants of BMP and Targa file formats are supported. The -targa switch is not a bug, it's a feature. (It would be a bug if the Targa format designers had not been clueless.) 14 Dec 2023 CJPEG(1)
ipython
An interactive Python shell with automatic history (input and output), dynamic object introspection, easier configuration, command completion, access to the system shell, integration with numerical and scientific computing tools, web notebook, Qt console, and more. For more information on how to use IPython, see 'ipython --help', or 'ipython --help-all' for all available command‐line options. ENVIRONMENT VARIABLES IPYTHONDIR This is the location where IPython stores all its configuration files. The default is $HOME/.ipython if IPYTHONDIR is not defined. You can see the computed value of IPYTHONDIR with `ipython locate`. FILES IPython uses various configuration files stored in profiles within IPYTHONDIR. To generate the default configuration files and start configuring IPython, do 'ipython profile create', and edit '*_config.py' files located in IPYTHONDIR/profile_default. AUTHORS IPython is written by the IPython Development Team <https://github.com/ipython/ipython>. July 15, 2011 IPYTHON(1)
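The IPYTHONDIR fallback described above follows ordinary shell default-expansion rules; the sketch below uses a hypothetical demo_home value in place of $HOME:

```shell
# Resolution rule: IPYTHONDIR wins when set, otherwise $HOME/.ipython.
demo_home=/home/alice                       # hypothetical home directory
IPYTHONDIR=""
echo "${IPYTHONDIR:-$demo_home/.ipython}"   # -> /home/alice/.ipython
IPYTHONDIR=/custom/ipython
echo "${IPYTHONDIR:-$demo_home/.ipython}"   # -> /custom/ipython
```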
ipython - Tools for Interactive Computing in Python.
ipython [options] files... ipython subcommand [options]...
null
null
arm64-apple-darwin20.0.0-redo_prebinding
Redo_prebinding is used to redo the prebinding of an executable or dynamic library when one of the dependent dynamic libraries changes. The input file, executable or dynamic library, must have initially been prebound for this program to redo the prebinding. All dependent libraries must also have their prebinding up to date, so when redoing the prebinding for a set of libraries, they must be processed in dependency order. Likewise, when building executables or dynamic libraries that are to be prebound (with the -prebind option to ld(1) or libtool(1)), the dependent libraries must have their prebinding up to date or the result will not be prebound. The options allow for different types of checking for use in shell scripts. Only one of -c, -p or -d can be used at a time. If redo_prebinding redoes the prebinding on an input file, it will run /usr/bin/objcunique, if it exists, on the result.
redo_prebinding - redo the prebinding of an executable or dynamic library
redo_prebinding [-c | -p | -d] [-i] [-z] [-u] [-r rootdir] [-e executable_path] [-seg_addr_table table_file_name] [-seg_addr_table_filename pathname] [-seg1addr address] [-o output_file] [-s] input_file
-c only check if the file needs to have its prebinding redone and return status. An exit status of 0 means the file's prebinding is up to date, 1 means it needs to be redone, and 2 means it could not be checked for reasons like a dependent library being missing (an error message is printed in these cases). -p check only for prebound files and return status. An exit status of 1 means the file is a Mach-O that could have been prebound and is not; otherwise the exit status is 0. -d check only for dynamic shared library files and return status. An exit status of 0 means the file is a dynamic shared library, 1 means the file is not, and 2 means there is some mix in the architectures. -i ignore non-prebound files (useful when running on all types of files). -z zero out the prebind check sum in the output if it has one. -u unprebind, rather than reprebind (-c, -p, -d, -e ignored). Resets or removes prebinding-specific information from the input file. As unprebinding is intended to produce a canonical Mach-O binary, bundles and non-prebound executables and dylibs are acceptable as input. For these files, the unprebind operation will zero library time stamps and version numbers and zero entries in the two-level hints table. -e executable_path replace any dependent library's "@executable_path" prefix with the executable_path argument. -seg_addr_table table_file_name The -seg_addr_table option is used when the input is a dynamic library; if specified, the table entry for the install_name of the dynamic library is used for checking and as the address to relocate the library to as its preferred address. -seg_addr_table_filename pathname Use pathname instead of the install name of the library for matching an entry in the segment address table. -seg1addr address Move the input library to the base address address. This option does not apply when -u, -seg_addr_table or -seg_addr_table_filename is specified. -r rootdir prepend the rootdir argument to all dependent libraries. 
-o output_file write the updated file into output_file rather than back into the input file. -s write the updated file to standard output. DIAGNOSTICS With no -c, -p or -d, an exit status of 0 means success and 2 means it could not be done for reasons like a missing dependent library (an error message is printed in these cases). An exit status of 3 is for the specific case when the dependent libraries are out of date with respect to each other. Apple Computer, Inc. March 29, 2004 REDO_PREBINDING(1)
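The -c exit codes lend themselves to a shell case statement; in this sketch, check is a hypothetical stand-in for `redo_prebinding -c input_file`, hard-wired here to return 1 (prebinding needs to be redone):

```shell
check() { return 1; }          # stand-in for: redo_prebinding -c input_file
check || status=$?
case "${status:-0}" in
  0) echo "prebinding up to date" ;;
  1) echo "prebinding needs to be redone" ;;
  2) echo "could not be checked" ;;
esac
```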
null
unidecode
null
null
null
null
null
h5stat
null
null
null
null
null
jpegtran
jpegtran performs various useful transformations of JPEG files. It can translate the coded representation from one variant of JPEG to another, for example from baseline JPEG to progressive JPEG or vice versa. It can also perform some rearrangements of the image data, for example turning an image from landscape to portrait format by rotation. For EXIF files and JPEG files containing Exif data, you may prefer to use exiftran instead. jpegtran works by rearranging the compressed data (DCT coefficients), without ever fully decoding the image. Therefore, its transformations are lossless: there is no image degradation at all, which would not be true if you used djpeg followed by cjpeg to accomplish the same conversion. But by the same token, jpegtran cannot perform lossy operations such as changing the image quality. However, while the image data is losslessly transformed, metadata can be removed. See the -copy option for specifics. jpegtran reads the named JPEG/JFIF file, or the standard input if no file is named, and produces a JPEG/JFIF file on the standard output.
jpegtran - lossless transformation of JPEG files
jpegtran [ options ] [ filename ]
All switch names may be abbreviated; for example, -optimize may be written -opt or -o. Upper and lower case are equivalent. British spellings are also accepted (e.g., -optimise), though for brevity these are not mentioned below. To specify the coded JPEG representation used in the output file, jpegtran accepts a subset of the switches recognized by cjpeg: -optimize Perform optimization of entropy encoding parameters. -progressive Create progressive JPEG file. -restart N Emit a JPEG restart marker every N MCU rows, or every N MCU blocks if "B" is attached to the number. -arithmetic Use arithmetic coding. -scans file Use the scan script given in the specified text file. See cjpeg(1) for more details about these switches. If you specify none of these switches, you get a plain baseline-JPEG output file. The quality setting and so forth are determined by the input file. The image can be losslessly transformed by giving one of these switches: -flip horizontal Mirror image horizontally (left-right). -flip vertical Mirror image vertically (top-bottom). -rotate 90 Rotate image 90 degrees clockwise. -rotate 180 Rotate image 180 degrees. -rotate 270 Rotate image 270 degrees clockwise (or 90 ccw). -transpose Transpose image (across UL-to-LR axis). -transverse Transverse transpose (across UR-to-LL axis). The transpose transformation has no restrictions regarding image dimensions. The other transformations operate rather oddly if the image dimensions are not a multiple of the iMCU size (usually 8 or 16 pixels), because they can only transform complete blocks of DCT coefficient data in the desired way. jpegtran's default behavior when transforming an odd-size image is designed to preserve exact reversibility and mathematical consistency of the transformation set. As stated, transpose is able to flip the entire image area. Horizontal mirroring leaves any partial iMCU column at the right edge untouched, but is able to flip all rows of the image. 
Similarly, vertical mirroring leaves any partial iMCU row at the bottom edge untouched, but is able to flip all columns. The other transforms can be built up as sequences of transpose and flip operations; for consistency, their actions on edge pixels are defined to be the same as the end result of the corresponding transpose-and-flip sequence. For practical use, you may prefer to discard any untransformable edge pixels rather than having a strange-looking strip along the right and/or bottom edges of a transformed image. To do this, add the -trim switch: -trim Drop non-transformable edge blocks. Obviously, a transformation with -trim is not reversible, so strictly speaking jpegtran with this switch is not lossless. Also, the expected mathematical equivalences between the transformations no longer hold. For example, -rot 270 -trim trims only the bottom edge, but -rot 90 -trim followed by -rot 180 -trim trims both edges. -perfect If you are only interested in perfect transformations, add the -perfect switch. This causes jpegtran to fail with an error if the transformation is not perfect. For example, you may want to do (jpegtran -rot 90 -perfect foo.jpg || djpeg foo.jpg | pnmflip -r90 | cjpeg) to do a perfect rotation, if available, or an approximated one if not. This version of jpegtran also offers a lossless crop option, which discards data outside of a given image region but losslessly preserves what is inside. Like the rotate and flip transforms, lossless crop is restricted by the current JPEG format; the upper left corner of the selected region must fall on an iMCU boundary. If it doesn't, then it is silently moved up and/or left to the nearest iMCU boundary (the lower right corner is unchanged.) Thus, the output image covers at least the requested region, but it may cover more. 
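The -perfect fallback idiom shown above works because jpegtran exits with an error when the transformation is not perfect; the same control flow, sketched with hypothetical stand-in functions so the shell logic is visible on its own:

```shell
# lossless stands in for `jpegtran -rot 90 -perfect foo.jpg`; a nonzero exit
# (simulated here) means the rotation could not be done perfectly.
lossless() { return 1; }
# lossy stands in for the `djpeg | pnmflip -r90 | cjpeg` fallback pipeline.
lossy() { echo "approximated rotation"; }
lossless || lossy
```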
The image can be losslessly cropped by giving the switch: -crop WxH+X+Y Crop the image to a rectangular region of width W and height H, starting at point X,Y. If W or H is larger than the width/height of the input image, then the output image is expanded in size, and the expanded region is filled in with zeros (neutral gray). Attaching an 'f' character ("flatten") to the width number will cause each block in the expanded region to be filled in with the DC coefficient of the nearest block in the input image rather than grayed out. Attaching an 'r' character ("reflect") to the width number will cause the expanded region to be filled in with repeated reflections of the input image rather than grayed out. A complementary lossless wipe option is provided to discard (gray out) data inside a given image region while losslessly preserving what is outside: -wipe WxH+X+Y Wipe (gray out) a rectangular region of width W and height H from the input image, starting at point X,Y. Attaching an 'f' character ("flatten") to the width number will cause the region to be filled with the average of adjacent blocks rather than grayed out. If the wipe region and the region outside the wipe region, when adjusted to the nearest iMCU boundary, form two horizontally adjacent rectangles, then attaching an 'r' character ("reflect") to the width number will cause the wipe region to be filled with repeated reflections of the outside region rather than grayed out. 
A lossless drop option is also provided, which allows another JPEG image to be inserted ("dropped") into the input image data at a given position, replacing the existing image data at that position: -drop +X+Y filename Drop (insert) another image at point X,Y. Both the input image and the drop image must have the same subsampling level. It is best if they also have the same quantization (quality). Otherwise, the quantization of the output image will be adapted to accommodate the higher of the input image quality and the drop image quality. The trim option can be used with the drop option to requantize the drop image to match the input image. Note that a grayscale image can be dropped into a full-color image or vice versa, as long as the full-color image has no vertical subsampling. If the input image is grayscale and the drop image is full-color, then the chrominance channels from the drop image will be discarded. Other not-strictly-lossless transformation switches are: -grayscale Force grayscale output. This option discards the chrominance channels if the input image is YCbCr (i.e., a standard color JPEG), resulting in a grayscale JPEG file. The luminance channel is preserved exactly, so this is a better method of reducing to grayscale than decompression, conversion, and recompression. This switch is particularly handy for fixing a monochrome picture that was mistakenly encoded as a color JPEG. (In such a case, the space savings from getting rid of the near-empty chroma channels won't be large; but the decoding time for a grayscale JPEG is substantially less than that for a color JPEG.) jpegtran also recognizes these switches that control what to do with "extra" markers, such as comment blocks: -copy none Copy no extra markers from source file. This setting suppresses all comments and other metadata in the source file. -copy comments Copy only comment markers. This setting copies comments from the source file but discards any other metadata. 
-copy icc Copy only ICC profile markers. This setting copies the ICC profile from the source file but discards any other metadata. -copy all Copy all extra markers. This setting preserves miscellaneous markers found in the source file, such as JFIF thumbnails, Exif data, and Photoshop settings. In some files, these extra markers can be sizable. Note that this option will copy thumbnails as-is; they will not be transformed. The default behavior is -copy comments. (Note: in IJG releases v6 and v6a, jpegtran always did the equivalent of -copy none.) Additional switches recognized by jpegtran are: -icc file Embed ICC color management profile contained in the specified file. Note that this will cause jpegtran to ignore any APP2 markers in the input file, even if -copy all or -copy icc is specified. -maxmemory N Set limit for amount of memory to use in processing large images. Value is in thousands of bytes, or millions of bytes if "M" is attached to the number. For example, -max 4m selects 4000000 bytes. If more space is needed, an error will occur. -maxscans N Abort if the input image contains more than N scans. This feature demonstrates a method by which applications can guard against denial-of-service attacks instigated by specially- crafted malformed JPEG images containing numerous scans with missing image data or image data consisting only of "EOB runs" (a feature of progressive JPEG images that allows potentially hundreds of thousands of adjoining zero-value pixels to be represented using only a few bytes.) Attempting to transform such malformed JPEG images can cause excessive CPU activity, since the decompressor must fully process each scan (even if the scan is corrupt) before it can proceed to the next scan. -outfile name Send output image to the named file, not to standard output. -report Report transformation progress. -strict Treat all warnings as fatal. 
This feature also demonstrates a method by which applications can guard against attacks instigated by specially-crafted malformed JPEG images. Enabling this option will cause the decompressor to abort if the input image contains incomplete or corrupt image data. -verbose Enable debug printout. More -v's give more output. Also, version information is printed at startup. -debug Same as -verbose. -version Print version information and exit.
This example converts a baseline JPEG file to progressive form: jpegtran -progressive foo.jpg > fooprog.jpg This example rotates an image 90 degrees clockwise, discarding any unrotatable edge pixels: jpegtran -rot 90 -trim foo.jpg > foo90.jpg ENVIRONMENT JPEGMEM If this environment variable is set, its value is the default memory limit. The value is specified as described for the -maxmemory switch. JPEGMEM overrides the default value specified when the program was compiled, and itself is overridden by an explicit -maxmemory. SEE ALSO cjpeg(1), djpeg(1), rdjpgcom(1), wrjpgcom(1) Wallace, Gregory K. "The JPEG Still Picture Compression Standard", Communications of the ACM, April 1991 (vol. 34, no. 4), pp. 30-44. AUTHOR Independent JPEG Group This file was modified by The libjpeg-turbo Project to include only information relevant to libjpeg-turbo and to wordsmith certain sections. BUGS The transform options can't transform odd-size images perfectly. Use -trim or -perfect if you don't like the results. The entire image is read into memory and then written out again, even in cases where this isn't really necessary. Expect swapping on large images, especially when using the more complex transform options. 13 July 2021 JPEGTRAN(1)
arm64-apple-darwin20.0.0-checksyms
Checksyms performs checks on a Mach-O production binary to see that it follows the proper conventions. If the binary follows the conventions, or is not a binary, checksyms(1) exits with zero status. A non-zero exit status indicates the binary does not follow the conventions for a production binary. The conventions include: proper stripping of symbols for the file type (and whether it is using the dynamic linker), if the file is a dynamic library then it checks for the preferred linked address, setting of compatibility and current versions, proper prebinding and objcunique if it uses the dynamic linker, and proper code generation if a dynamic binary so that there are no relocation entries in read-only sections. Some or all of the following options may be specified: -r The binary is expected to do runtime loading of bundles and is allowed to have static symbols. This is to aid in the debugging of bundles using the production binary so that back traces in the debugger show symbol names for static routines. This is the default for dynamic libraries but not the default for executables. -d Print the detail of why the binary fails the checks. -v Print a single string token for each of the checks the binary fails. This is used by the verification tools to allow exceptions to specific checks. -t Allow all symbols except debugging symbols (those created by the compiler's -g option). -b Check the binaries that use the dynamic linker to see that they do not have relocation entries in read-only sections, are prebound and have had objcunique run on them. This is now the default. - All arguments following this flag are treated as filenames of binaries to check. -arch arch_type Specifies the architecture, arch_type, in the file to check. More than one -arch arch_type can be specified. The default is -arch all which checks all architectures in the file. See arch(3) for the currently known arch_types. -dylib_table filename This option has been removed. 
In previous versions of checksyms it specified the filename of the table of dynamic libraries and their addresses. If not specified the default was ~rc/Data/DylibTable. The format of the table is lines of a dynamic library name and an address in hex separated by spaces and/or tabs. -seg_addr_table filename This option has been removed. In previous versions of checksyms it specified the filename of the segment address table of dynamic library install names and their addresses. The entries in the table are lines containing either a single hex address and an install name, or two hex addresses and an install name. The first form, with a single hex address, is used for ``flat'' libraries; the address is the -seg1addr. The second form is used for ``split'' libraries; the first address is the -segs_read_only_addr address and the second address is the -segs_read_write_addr address. If the -seg_addr_table option is specified this is used instead of the table specified with the -dylib_table option. If the environment variable LD_SEG_ADDR_TABLE is set it is also used instead of the table specified with the -dylib_table option. -seg_addr_table_filename pathname This option has been removed. In previous versions it instructed checksyms to use pathname instead of the install name of the library for matching an entry in the segment address table. Apple Computer, Inc. February 28, 2019 CHECKSYMS(1)
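Since a zero exit status means the binary follows the production conventions, checksyms slots naturally into a verification script; checksyms_stub below is a hypothetical stand-in for `checksyms -d binary`:

```shell
checksyms_stub() { return 0; }   # stand-in for: checksyms -d binary
if checksyms_stub; then
  echo "binary follows production conventions"
else
  echo "convention violation (see -d output for details)"
fi
```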
checksyms - check Mach-O production binary for proper conventions
checksyms [-r] [-d] [-t] [-b] [-v] [-] [-arch arch_flag] [-dylib_table filename] [-seg_addr_table filename] [-seg_addr_table_filename pathname] file [...]
null
null
h5ls
null
null
null
null
null
transformers-cli
null
null
null
null
null
sshtunnel
null
null
null
null
null
qt.conf
null
null
null
null
null
xslt-config
null
null
null
null
null
pdf2txt.py
null
null
null
null
null
libpng16-config
null
null
null
null
null
nltk
null
null
null
null
null
imageio_remove_bin
null
null
null
null
null
xzegrep
xzgrep invokes grep(1) on uncompressed contents of files. The formats of the files are determined from the filename suffixes. Any file with a suffix supported by xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), or lz4(1) will be decompressed; all other files are assumed to be uncompressed. If no files are specified or file is - then standard input is read. When reading from standard input, only files supported by xz(1) are decompressed. Other files are assumed to be in uncompressed form already. Most options of grep(1) are supported. However, the following options are not supported: -r, --recursive -R, --dereference-recursive -d, --directories=action -Z, --null -z, --null-data --include=glob --exclude=glob --exclude-from=file --exclude-dir=glob xzegrep is an alias for xzgrep -E. xzfgrep is an alias for xzgrep -F. The commands lzgrep, lzegrep, and lzfgrep are provided for backward compatibility with LZMA Utils. EXIT STATUS 0 At least one match was found from at least one of the input files. No errors occurred. 1 No matches were found from any of the input files. No errors occurred. >1 One or more errors occurred. It is unknown if matches were found. ENVIRONMENT GREP If GREP is set to a non-empty value, it is used instead of grep, grep -E, or grep -F. SEE ALSO grep(1), xz(1), gzip(1), bzip2(1), lzop(1), zstd(1), lz4(1), zgrep(1) Tukaani 2024-02-13 XZGREP(1)
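Because xzgrep mirrors grep's exit-status convention, scripts can branch on it exactly as they would with grep; the sketch below demonstrates the 0 and 1 statuses with plain grep on an uncompressed file (the temporary path is illustrative):

```shell
printf 'needle\n' > /tmp/xzgrep_demo.txt
# status 0: at least one match was found, no errors
grep -q needle /tmp/xzgrep_demo.txt && echo "found (status 0)"
# status 1: no matches were found, no errors
grep -q missing /tmp/xzgrep_demo.txt || echo "not found (status 1)"
```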
xzgrep - search possibly-compressed files for patterns
xzgrep [option...] [pattern_list] [file...] xzegrep ... xzfgrep ... lzgrep ... lzegrep ... lzfgrep ...
null
null
qmltyperegistrar
null
null
null
null
null
gsettings
null
null
null
null
null
bzcmp
Bzcmp and bzdiff are used to invoke the cmp or the diff program on bzip2 compressed files. All options specified are passed directly to cmp or diff. If only one file is specified, then the files compared are file1 and an uncompressed file1.bz2. If two files are specified, then they are uncompressed if necessary and fed to cmp or diff. The exit status from cmp or diff is preserved. SEE ALSO cmp(1), diff(1), bzmore(1), bzless(1), bzgrep(1), bzip2(1) BUGS Messages from the cmp or diff programs refer to temporary filenames instead of those specified. BZDIFF(1)
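Because the exit status from cmp or diff is preserved, bzcmp can drive the same conditionals as cmp itself; the sketch below shows the convention with plain cmp on uncompressed files (the temporary paths are illustrative):

```shell
printf 'a\n' > /tmp/bz_f1; printf 'a\n' > /tmp/bz_f2; printf 'b\n' > /tmp/bz_f3
# identical contents: status 0 (same as bzcmp on matching .bz2 files)
cmp -s /tmp/bz_f1 /tmp/bz_f2 && echo "identical (status 0)"
# differing contents: status 1
cmp -s /tmp/bz_f1 /tmp/bz_f3 || echo "different (status 1)"
```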
bzcmp, bzdiff - compare bzip2 compressed files
bzcmp [ cmp_options ] file1 [ file2 ] bzdiff [ diff_options ] file1 [ file2 ]
null
null
pngfix
null
null
null
null
null