shazam
The shazam utility offers commands for creating signatures as well as processing custom catalogs to build better audio experiences with ShazamKit. Current operations supported for a custom catalog include:

- Creating a custom catalog from a signature asset file and a CSV of metadata.
- Updating an existing custom catalog with a signature asset file and a CSV of metadata.
- Exporting the signature and metadata contents of a custom catalog file to disk.
- Removing a signature and/or media item entry from a custom catalog file.
- Displaying the contents of an existing custom catalog file.

The latest ShazamKit documentation can be viewed at https://developer.apple.com/shazamkit.

SHAZAM COMMANDS
The commands supported by shazam are divided into Signature and Custom Catalog commands.

Signature Commands
shazam signature
        Generate a signature file from a media input. See shazam-signature(1) for more information.

Custom Catalog Commands
shazam custom-catalog create
        Generate a custom catalog file from a signature asset file and a media items CSV file. See shazam-custom-catalog-create(1) for more information.
shazam custom-catalog display
        Display the contents of a custom catalog file. See shazam-custom-catalog-display(1) for more information.
shazam custom-catalog update
        Update an existing custom catalog file with a signature asset file and a media items CSV file. See shazam-custom-catalog-update(1) for more information.
shazam custom-catalog export
        Export the contents of a custom catalog file to disk. See shazam-custom-catalog-export(1) for more information.
shazam custom-catalog remove
        Remove a signature and/or media item entry from a custom catalog file. See shazam-custom-catalog-remove(1) for more information.
shazam – A utility for ShazamKit
shazam <subcommand> [--help]
-h, --help
        Show help information about the command.

SEE ALSO
shazam-signature(1), shazam-custom-catalog-create(1), shazam-custom-catalog-update(1), shazam-custom-catalog-export(1), shazam-custom-catalog-display(1), shazam-custom-catalog-remove(1)

NOTES
1. ShazamKit Documentation
   https://developer.apple.com/shazamkit

macOS 14.5                        August 10, 2024
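Each subcommand prints detailed usage with --help, which is the quickest way to discover its options. A session might look like the following (output elided; the tool is available only on macOS with the developer tools installed):

```shell
# Top-level help, listing the Signature and Custom Catalog command groups:
shazam --help
# Help for an individual subcommand, e.g. custom catalog creation:
shazam custom-catalog create --help
```

These invocations use only the subcommand names documented above; see the per-command man pages for the full option lists.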
productsign
productsign adds a digital signature to a product archive previously created with productbuild(1). Although you can add a digital signature at the time you run productbuild(1), you may wish to add a signature later, once the product archive has been tested and is ready to deploy. If you run productsign on a product archive that was previously signed, the existing signature will be replaced.

To sign a product archive, you will need to have a certificate and corresponding private key -- together called an “identity” -- in one of your accessible keychains. To add a signature, specify the name of the identity using the --sign option. The identity's name is the same as the “Common Name” of the certificate. If you want to search for the identity in a specific keychain, specify the path to the keychain file using the --keychain option. Otherwise, the default keychain search path is used.

productsign will embed the signing certificate in the product archive, as well as any intermediate certificates that are found in the keychain. If you need to embed additional certificates to form a chain of trust between the signing certificate and a trusted root certificate on the system, use the --cert option to give the Common Name of the intermediate certificate. Multiple --cert options may be used to embed multiple intermediate certificates.

The signature can optionally include a trusted timestamp. This is enabled by default when signing with a Developer ID identity, but it can be enabled explicitly using the --timestamp option. A timestamp server must be contacted to embed a trusted timestamp. If you aren't connected to the Internet, you can use --timestamp=none to disable timestamps, even for a Developer ID identity.

ARGUMENTS AND OPTIONS
--sign identity-name
        The name of the identity to use for signing the product archive.
--keychain keychain-path
        Search a specific keychain for the signing identity.
--cert certificate-name
        Specify an intermediate certificate to be embedded in the product archive.
--timestamp
        Include a trusted timestamp with the signature.
--timestamp=none
        Disable the trusted timestamp, regardless of identity.
input-product-path
        The product archive to be signed.
output-product-path
        The path to which the signed product archive will be written. Must not be the same as input-product-path. The output path should be a package; if the package already exists, it will be overwritten.

SEE ALSO
productbuild(1)

macOS                            September 15, 2010
productsign – Sign a macOS Installer product archive
productsign [options] --sign identity input-product-path.pkg output-product-path.pkg
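A typical invocation combining the options above might look like this (the identity name and package paths are illustrative placeholders; running it requires macOS with a suitable signing identity in an accessible keychain):

```shell
# Sign tested.pkg with a Developer ID identity, embedding a trusted timestamp,
# and write the signed archive to a new path:
productsign --sign "Developer ID Installer: Example Corp" \
            --timestamp \
            tested.pkg tested-signed.pkg
```

Note that the output path differs from the input path, as the documentation requires.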
mklocale
The mklocale utility reads a LC_CTYPE source file from standard input and produces a LC_CTYPE binary file on standard output suitable for placement in /usr/share/locale/language/LC_CTYPE. The format of src-file is quite simple. It consists of a series of lines which start with a keyword and have associated data following. C style comments are used to place comments in the file.

The following options are available:

-d      Turns on debugging messages.
-o      Specify the output file.

Besides the keywords which will be listed below, the following are valid tokens in src-file:

RUNE    A RUNE may be any of the following:
        'x'          The ASCII character x.
        '\x'         The ANSI C character \x where \x is one of \a, \b, \f, \n, \r, \t, or \v.
        0x[0-9a-z]*  A hexadecimal number representing a rune code.
        0[0-7]*      An octal number representing a rune code.
        [1-9][0-9]*  A decimal number representing a rune code.

STRING  A string enclosed in double quotes (").

THRU    Either ... or -. Used to indicate ranges.

literal The following characters are taken literally:
        <([  Used to start a mapping. All are equivalent.
        >)]  Used to end a mapping. All are equivalent.
        :    Used as a delimiter in mappings.

Keywords which should only appear once are:

ENCODING
        Followed by a STRING which indicates the encoding mechanism to be used for this locale. The current encodings are:
        ASCII    American Standard Code for Information Interchange.
        BIG5     The “Big5” encoding of Chinese.
        EUC      EUC encoding as used by several vendors of UNIX systems.
        GB18030  PRC national standard for encoding of Chinese text.
        GB2312   Older PRC national standard for encoding Chinese text.
        GBK      A widely used encoding method for Chinese text, backwards compatible with GB 2312-1980.
        MSKanji  The method of encoding Japanese used by Microsoft, loosely based on JIS. Also known as “Shift JIS” and “SJIS”.
        NONE     No translation and the default.
        UTF-8    The UTF-8 transformation format of ISO 10646 as defined by RFC 2279.
VARIABLE
        This keyword must be followed by a single tab or space character, after which encoding specific data is placed. Currently only the EUC encoding requires variable data. See euc(5) for further details.
INVALID
        (obsolete) A single RUNE follows and is used as the invalid rune for this locale.

The following keywords may appear multiple times and have the following format for data:

<RUNE1 RUNE2>
        RUNE1 is mapped to RUNE2.
<RUNE1 THRU RUNEn : RUNE2>
        Runes RUNE1 through RUNEn are mapped to RUNE2 through RUNE2 + n-1.

MAPLOWER
        Defines the tolower mappings. RUNE2 is the lower case representation of RUNE1.
MAPUPPER
        Defines the toupper mappings. RUNE2 is the upper case representation of RUNE1.
TODIGIT
        Defines a map from runes to their digit value. RUNE2 is the integer value represented by RUNE1. For example, the ASCII character ‘0’ would map to the decimal value 0. Only values up to 255 are allowed.

The following keywords may appear multiple times and have the following format for data:

RUNE    This rune has the property defined by the keyword.
RUNE1 THRU RUNEn
        All the runes between and including RUNE1 and RUNEn have the property defined by the keyword.

ALPHA    Defines runes which are alphabetic, printable and graphic.
CONTROL  Defines runes which are control characters.
DIGIT    Defines runes which are decimal digits, printable and graphic.
GRAPH    Defines runes which are graphic and printable.
LOWER    Defines runes which are lower case, printable and graphic.
PUNCT    Defines runes which are punctuation, printable and graphic.
SPACE    Defines runes which are spaces.
UPPER    Defines runes which are upper case, printable and graphic.
XDIGIT   Defines runes which are hexadecimal digits, printable and graphic.
BLANK    Defines runes which are blank.
PRINT    Defines runes which are printable.
IDEOGRAM Defines runes which are ideograms, printable and graphic.
SPECIAL  Defines runes which are special characters, printable and graphic.
PHONOGRAM
         Defines runes which are phonograms, printable and graphic.
SWIDTH0  Defines runes with display width 0.
SWIDTH1  Defines runes with display width 1.
SWIDTH2  Defines runes with display width 2.
SWIDTH3  Defines runes with display width 3.

If no display width is explicitly defined, a width of 1 is assumed for printable runes by default.

SEE ALSO
colldef(1), localedef(1), setlocale(3), wcwidth(3), big5(5), euc(5), gb18030(5), gb2312(5), gbk(5), mskanji(5), utf8(5)

HISTORY
The mklocale utility first appeared in 4.4BSD.

BUGS
The mklocale utility is overly simplistic.

macOS 14.5                        April 18, 2016
mklocale – make LC_CTYPE locale files
mklocale [-d] < src-file > language/LC_CTYPE
mklocale [-d] -o language/LC_CTYPE src-file
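A minimal src-file sketch using the tokens and keywords described above (ranges use the ... form of THRU; this fragment covers only basic ASCII character classes and case mappings, and is illustrative rather than a complete locale):

```
/* Illustrative LC_CTYPE source for a plain ASCII locale. */
ENCODING "ASCII"
UPPER 'A' ... 'Z'
LOWER 'a' ... 'z'
DIGIT '0' ... '9'
SPACE ' '
MAPLOWER < 'A' ... 'Z' : 'a' >
MAPUPPER < 'a' ... 'z' : 'A' >
```

It would be compiled with the first synopsis form, reading the file on standard input and writing the binary LC_CTYPE on standard output.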
defaults
Defaults allows users to read, write, and delete Mac OS X user defaults from a command-line shell. Mac OS X applications and other programs use the defaults system to record user preferences and other information that must be maintained when the applications aren't running (such as the default font for new documents, or the position of an Info panel). Much of this information is accessible through an application's Preferences panel, but some of it isn't, such as the position of the Info panel. You can access this information with defaults.

Note: Since applications do access the defaults system while they're running, you shouldn't modify the defaults of a running application. If you change a default in a domain that belongs to a running application, the application won't see the change and might even overwrite the default.

User defaults belong to domains, which typically correspond to individual applications. Each domain has a dictionary of keys and values representing its defaults; for example, "Default Font" = "Helvetica". Keys are always strings, but values can be complex data structures comprising arrays, dictionaries, strings, and binary data. These data structures are stored as XML Property Lists.

Though all applications, system services, and other programs have their own domains, they also share a domain named NSGlobalDomain. If a default isn't specified in the application's domain, but is specified in NSGlobalDomain, then the application uses the value in that domain.

The commands are as follows:

read    Prints all of the user's defaults, for every domain, to standard output.
read domain
        Prints all of the user's defaults for domain to standard output.
read domain key
        Prints the value for the default of domain identified by key.
read-type domain key
        Prints the plist type for the given domain identified by key.
write domain key 'value'
        Writes value as the value for key in domain. value must be a property list, and must be enclosed in single quotes.
        For example:

            defaults write com.companyname.appname "Default Color" '(255, 0, 0)'

        sets the value for Default Color to an array containing the strings 255, 0, 0 (the red, green, and blue components). Note that the key is enclosed in quotation marks because it contains a space.
write domain 'plist'
        Overwrites the defaults information in domain with that given as plist. plist must be a property list representation of a dictionary, and must be enclosed in single quotes. For example:

            defaults write com.companyname.appname '{ "Default Color" = (255, 0, 0); "Default Font" = Helvetica; }'

        erases any previous defaults for com.companyname.appname and writes the values for the two names into the defaults system.
delete domain
        Removes all default information for domain.
delete domain key
        Removes the default named key from domain.
delete-all domain
        Removes all default information for domain in all known containers.
delete-all domain key
        Removes the default named key from domain in all known containers.
domains
        Prints the names of all domains in the user's defaults system.
find word
        Searches for word in the domain names, keys, and values of the user's defaults, and prints out a list of matches.
help    Prints a list of possible command formats.
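On disk, a dictionary such as the one in the write example above is stored as an XML Property List. A sketch of the resulting file (the exact formatting and header may vary by OS version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Default Color</key>
	<array>
		<string>255</string>
		<string>0</string>
		<string>0</string>
	</array>
	<key>Default Font</key>
	<string>Helvetica</string>
</dict>
</plist>
```

The array elements are stored as strings, matching the behavior described above when no type flag is given.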
defaults – access the Mac OS X user defaults system
defaults [-currentHost | -host hostname] read [domain [key]]
defaults [-currentHost | -host hostname] read-type domain key
defaults [-currentHost | -host hostname] write domain { 'plist' | key 'value' }
defaults [-currentHost | -host hostname] rename domain old_key new_key
defaults [-currentHost | -host hostname] delete [domain [key]]
defaults [-currentHost | -host hostname] { domains | find word | help }
Specifying domains:

domain  If no flag is specified, domain is a domain name of the form com.companyname.appname. Example:

            defaults read com.apple.TextEdit

-app application
        The name of an application may be provided instead of a domain using the -app flag. Example:

            defaults read -app TextEdit

filepath
        Domains may also be specified as a path to an arbitrary plist file, with or without the '.plist' extension. For example:

            defaults read ~/Library/Containers/com.apple.TextEdit/Data/Library/Preferences/com.apple.TextEdit.plist

        normally gives the same result as the two previous examples. As another example:

            defaults write ~/Desktop/TestFile foo bar

        will write the key 'foo' with the value 'bar' into the plist file 'TestFile.plist' that is on the user's desktop. If the file does not exist, it will be created. If it does exist, the key-value pair will be added, overwriting the value of 'foo' if it already existed.

        WARNING: The defaults command will be changed in an upcoming major release to only operate on preferences domains. General plist manipulation utilities will be folded into a different command-line program.

-g | -globalDomain | NSGlobalDomain
        Specify the global domain. '-g' and '-globalDomain' may be used as synonyms for NSGlobalDomain.

Specifying value types for preference keys:

If no type flag is provided, defaults will assume the value is a string. For best results, use one of the type flags, listed below.

-string
        Allows the user to specify a string as the value for the given preference key.
-data
        Allows the user to specify raw data bytes as the value for the given preference key. The data must be provided in hexadecimal.
-int[eger]
        Allows the user to specify an integer as the value for the given preference key.
-float
        Allows the user to specify a floating point number as the value for the given preference key.
-bool[ean]
        Allows the user to specify a boolean as the value for the given preference key. Value must be TRUE, FALSE, YES, or NO.
-date
        Allows the user to specify a date as the value for the given preference key.
-array
        Allows the user to specify an array as the value for the given preference key:

            defaults write somedomain preferenceKey -array element1 element2 element3

        The specified array overwrites the value of the key if the key was present at the time of the write. If the key was not present, it is created with the new value.
-array-add
        Allows the user to add new elements to the end of an array for a key which has an array as its value. Usage is the same as -array above. If the key was not present, it is created with the specified array as its value.
-dict
        Allows the user to add a dictionary to the defaults database for a domain. Keys and values are specified in order:

            defaults write somedomain preferenceKey -dict key1 value1 key2 value2

        The specified dictionary overwrites the value of the key if the key was present at the time of the write. If the key was not present, it is created with the new value.
-dict-add
        Allows the user to add new key/value pairs to a dictionary for a key which has a dictionary as its value. Usage is the same as -dict above. If the key was not present, it is created with the specified dictionary as its value.

Specifying a host for preferences:

Operations on the defaults database normally apply to any host the user may log in on, but may be restricted to apply only to a specific host.

-currentHost
        Restricts preferences operations to the host the user is currently logged in on.
-host hostname
        Restricts preferences operations to hostname.

BUGS
Defaults can be structured in very complex ways, making it difficult for the user to enter them with this command.

HISTORY
First appeared in NeXTStep.

Mac OS X                        November 3, 2003
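The typed flags above can be exercised against a throwaway domain (com.example.demo is a made-up name used only for illustration; the defaults command exists only on macOS):

```shell
defaults write com.example.demo WindowWidth -int 640
defaults write com.example.demo Scale -float 1.5
defaults write com.example.demo Colors -array red green blue
defaults write com.example.demo Colors -array-add purple
defaults read-type com.example.demo WindowWidth
defaults read com.example.demo
defaults delete com.example.demo
```

read-type should report an integer type for WindowWidth, the second Colors write appends rather than replaces, and the final delete removes the whole test domain.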
tic
The tic command translates a terminfo file from source format into compiled format. The compiled format is necessary for use with the library routines in ncurses(3X). As described in term(5), the database may be either a directory tree (one file per terminal entry) or a hashed database (one record per entry). The tic command writes only one type of entry, depending on how it was built: • For directory trees, the top-level directory, e.g., /usr/share/terminfo, specifies the location of the database. • For hashed databases, a filename is needed. If the given file is not found by that name, but can be found by adding the suffix ".db", then that is used. The default name for the hashed database is the same as the default directory name (only adding a ".db" suffix). In either case (directory or hashed database), tic will create the container if it does not exist. For a directory, this would be the "terminfo" leaf, versus a "terminfo.db" file. The results are normally placed in the system terminfo database /usr/share/terminfo. The compiled terminal description can be placed in a different terminfo database. There are two ways to achieve this: • First, you may override the system default either by using the -o option, or by setting the variable TERMINFO in your shell environment to a valid database location. • Secondly, if tic cannot write in /usr/share/terminfo or the location specified using your TERMINFO variable, it looks for the directory $HOME/.terminfo (or hashed database $HOME/.terminfo.db); if that location exists, the entry is placed there. Libraries that read terminfo entries are expected to check in succession • a location specified with the TERMINFO environment variable, • $HOME/.terminfo, • directories listed in the TERMINFO_DIRS environment variable, • a compiled-in list of directories (@TERMINFO_DIRS@), and • the system terminfo database (/usr/share/terminfo).
tic – the terminfo entry-description compiler
tic [-01CDGIKLNTUVacfgrstx] [-e names] [-o dir] [-R subset] [-v[n]] [-w[n]] file
-0   restricts the output to a single line

-1   restricts the output to a single column

-a   tells tic to retain commented-out capabilities rather than discarding them. Capabilities are commented by prefixing them with a period. This sets the -x option, because it treats the commented-out entries as user-defined names. If the source is termcap, accept the 2-character names required by version 6. Otherwise these are ignored.

-C   Force source translation to termcap format. Note: this differs from the -C option of infocmp(1M) in that it does not merely translate capability names, but also translates terminfo strings to termcap format. Capabilities that are not translatable are left in the entry under their terminfo names but commented out with two preceding dots. The actual format used incorporates some improvements for escaped characters from terminfo format. For a stricter BSD-compatible translation, add the -K option.

     If this is combined with -c, tic makes additional checks to report cases where the terminfo values do not have an exact equivalent in termcap form. For example:

     •   sgr usually will not convert, because termcap lacks the ability to work with more than two parameters, and because termcap lacks many of the arithmetic/logical operators used in terminfo.

     •   capabilities with more than one delay or with delays before the end of the string will not convert completely.

-c   tells tic to only check file for errors, including syntax problems and bad use links. If you specify -C (-I) with this option, the code will print warnings about entries which, after use resolution, are more than 1023 (4096) bytes long. Due to a fixed buffer length in older termcap libraries, as well as buggy checking for the buffer length (and a documented limit in terminfo), these entries may cause core dumps with other implementations. tic checks string capabilities to ensure that those with parameters will be valid expressions. It does this check only for the predefined string capabilities; those which are defined with the -x option are ignored.

-D   tells tic to print the database locations that it knows about, and exit. The first location shown is the one to which it would write compiled terminal descriptions. If tic is not able to find a writable database location according to the rules summarized above, it will print a diagnostic and exit with an error rather than printing a list of database locations.

-e names
     Limit writes and translations to the following comma-separated list of terminals. If any name or alias of a terminal matches one of the names in the list, the entry will be written or translated as normal. Otherwise no output will be generated for it. The option value is interpreted as a file containing the list if it contains a '/'. (Note: depending on how tic was compiled, this option may require -I or -C.)

-f   Display complex terminfo strings which contain if/then/else/endif expressions indented for readability.

-G   Display constant literals in decimal form rather than their character equivalents.

-g   Display constant character literals in quoted form rather than their decimal equivalents.

-I   Force source translation to terminfo format.

-K   Suppress some longstanding ncurses extensions to termcap format, e.g., "\s" for space.

-L   Force source translation to terminfo format using the long C variable names listed in <term.h>

-N   Disable smart defaults. Normally, when translating from termcap to terminfo, the compiler makes a number of assumptions about the defaults of string capabilities reset1_string, carriage_return, cursor_left, cursor_down, scroll_forward, tab, newline, key_backspace, key_left, and key_down, then attempts to use obsolete termcap capabilities to deduce correct values. It also normally suppresses output of obsolete termcap capabilities such as bs. This option forces a more literal translation that also preserves the obsolete capabilities.

-odir
     Write compiled entries to given database location. Overrides the TERMINFO environment variable.

-Rsubset
     Restrict output to a given subset. This option is for use with archaic versions of terminfo like those on SVr1, Ultrix, or HP/UX that do not support the full set of SVR4/XSI Curses terminfo; and outright broken ports like AIX 3.x that have their own extensions incompatible with SVr4/XSI. Available subsets are "SVr1", "Ultrix", "HP", "BSD" and "AIX"; see terminfo(5) for details.

-r   Force entry resolution (so there are no remaining tc capabilities) even when doing translation to termcap format. This may be needed if you are preparing a termcap file for a termcap library (such as GNU termcap through version 1.3 or BSD termcap through 4.3BSD) that does not handle multiple tc capabilities per entry.

-s   Summarize the compile by showing the database location into which entries are written, and the number of entries which are compiled.

-T   eliminates size-restrictions on the generated text. This is mainly useful for testing and analysis, since the compiled descriptions are limited (e.g., 1023 for termcap, 4096 for terminfo).

-t   tells tic to discard commented-out capabilities. Normally when translating from terminfo to termcap, untranslatable capabilities are commented-out.

-U   tells tic to not post-process the data after parsing the source file. Normally, it infers data which is commonly missing in older terminfo data, or in termcaps.

-V   reports the version of ncurses which was used in this program, and exits.

-vn  specifies that (verbose) output be written to standard error trace information showing tic's progress. The optional parameter n is a number from 1 to 10, inclusive, indicating the desired level of detail of information. If n is omitted, the default level is 1. If n is specified and greater than 1, the level of detail is increased.
The debug flag levels are as follows:

     1   Names of files created and linked
     2   Information related to the “use” facility
     3   Statistics from the hashing algorithm
     5   String-table memory allocations
     7   Entries into the string-table
     8   List of tokens encountered by scanner
     9   All values computed in construction of the hash table

If the debug level n is not given, it is taken to be one.

-wn  specifies the width of the output. The parameter is optional. If it is omitted, it defaults to 60.

-x   Treat unknown capabilities as user-defined. That is, if you supply a capability name which tic does not recognize, it will infer its type (boolean, number or string) from the syntax and make an extended table entry for that. User-defined capability strings whose name begins with “k” are treated as function keys.

PARAMETERS
file    contains one or more terminfo terminal descriptions in source format [see terminfo(5)]. Each description in the file describes the capabilities of a particular terminal. If file is “-”, then the data is read from the standard input. The file parameter may also be the path of a character-device.

PROCESSING
All but one of the capabilities recognized by tic are documented in terminfo(5). The exception is the use capability. When a use=entry-name field is discovered in a terminal entry currently being compiled, tic reads in the binary from /usr/share/terminfo to complete the entry. (Entries created from file will be used first.) tic duplicates the capabilities in entry-name for the current entry, with the exception of those capabilities that explicitly are defined in the current entry.

When an entry, e.g., entry_name_1, contains a use=entry_name_2 field, any canceled capabilities in entry_name_2 must also appear in entry_name_1 before use= for these capabilities to be canceled in entry_name_1.

Total compiled entries cannot exceed 4096 bytes. The name field cannot exceed 512 bytes.
Terminal names exceeding the maximum alias length (32 characters on systems with long filenames, 14 characters otherwise) will be truncated to the maximum alias length and a warning message will be printed.

COMPATIBILITY
There is some evidence that historic tic implementations treated description fields with no whitespace in them as additional aliases or short names. This tic does not do that, but it does warn when description fields may be treated that way and check them for dangerous characters.

EXTENSIONS
Unlike the SVr4 tic command, this implementation can actually compile termcap sources. In fact, entries in terminfo and termcap syntax can be mixed in a single source file. See terminfo(5) for the list of termcap names taken to be equivalent to terminfo names.

The SVr4 manual pages are not clear on the resolution rules for use capabilities. This implementation of tic will find use targets anywhere in the source file, or anywhere in the file tree rooted at TERMINFO (if TERMINFO is defined), or in the user's $HOME/.terminfo database (if it exists), or (finally) anywhere in the system's file tree of compiled entries.

The error messages from this tic have the same format as GNU C error messages, and can be parsed by GNU Emacs's compile facility.

The -0, -1, -C, -G, -I, -N, -R, -T, -V, -a, -e, -f, -g, -o, -r, -s, -t and -x options are not supported under SVr4. The SVr4 -c mode does not report bad use links.

System V does not compile entries to or read entries from your $HOME/.terminfo database unless TERMINFO is explicitly set to it.

FILES
/usr/share/terminfo/?/*
        Compiled terminal description database.

SEE ALSO
infocmp(1M), captoinfo(1M), infotocap(1M), toe(1M), curses(3X), term(5), terminfo(5).

This describes ncurses version 5.7 (patch 20081102).

AUTHOR
Eric S. Raymond <esr@snark.thyrsus.com> and Thomas E. Dickey <dickey@invisible-island.net>

tic(1M)
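The compile-to-a-private-database flow described above can be exercised end to end. The entry name demo and the ./tidb directory are arbitrary choices for this example; the capabilities used are standard ones from terminfo(5), and infocmp is assumed to be installed alongside tic:

```shell
# Write a minimal terminfo source entry (continuation lines must be indented).
cat > demo.ti <<'EOF'
demo|minimal demonstration terminal,
	am, cols#80, lines#24,
	bel=^G, cr=\r, cud1=\n, ind=\n,
EOF
# Compile it into a private database location instead of /usr/share/terminfo.
mkdir -p ./tidb
tic -o ./tidb demo.ti
# Read the compiled entry back by pointing TERMINFO at the same database.
TERMINFO=./tidb infocmp demo
```

The round trip through infocmp, the companion decompiler, confirms the entry was written where -o pointed rather than to the system database.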
zip
zip is a compression and file packaging utility for Unix, VMS, MSDOS, OS/2, Windows 9x/NT/XP, Minix, Atari, Macintosh, Amiga, and Acorn RISC OS. It is analogous to a combination of the Unix commands tar(1) and compress(1) and is compatible with PKZIP (Phil Katz's ZIP for MSDOS systems).

A companion program (unzip(1L)) unpacks zip archives. The zip and unzip(1L) programs can work with archives produced by PKZIP (supporting most PKZIP features up to PKZIP version 4.6), and PKZIP and PKUNZIP can work with archives produced by zip (with some exceptions, notably streamed archives, but recent changes in the zip file standard may facilitate better compatibility). zip version 3.0 is compatible with PKZIP 2.04 and also supports the Zip64 extensions of PKZIP 4.5 which allow archives as well as files to exceed the previous 2 GB limit (4 GB in some cases). zip also now supports bzip2 compression if the bzip2 library is included when zip is compiled. Note that PKUNZIP 1.10 cannot extract files produced by PKZIP 2.04 or zip 3.0. You must use PKUNZIP 2.04g or unzip 5.0p1 (or later versions) to extract them.

See the EXAMPLES section at the bottom of this page for examples of some typical uses of zip.

Large Archives and Zip64. zip automatically uses the Zip64 extensions when files larger than 4 GB are added to an archive, an archive containing Zip64 entries is updated (if the resulting archive still needs Zip64), the size of the archive will exceed 4 GB, or when the number of entries in the archive will exceed about 64K. Zip64 is also used for archives streamed from standard input as the size of such archives is not known in advance, but the option -fz- can be used to force zip to create PKZIP 2 compatible archives (as long as Zip64 extensions are not needed). You must use a PKZIP 4.5 compatible unzip, such as unzip 6.0 or later, to extract files using the Zip64 extensions.
In addition, streamed archives, entries encrypted with standard encryption, or split archives created with the pause option may not be compatible with PKZIP as data descriptors are used and PKZIP at the time of this writing does not support data descriptors (but recent changes in the PKWare published zip standard now include some support for the data descriptor format zip uses). Mac OS X. Though previous Mac versions had their own zip port, zip supports Mac OS X as part of the Unix port and most Unix features apply. References to "MacOS" below generally refer to MacOS versions older than OS X. Support for some Mac OS features in the Unix Mac OS X port, such as resource forks, is expected in the next zip release. For a brief help on zip and unzip, run each without specifying any parameters on the command line. USE The program is useful for packaging a set of files for distribution; for archiving files; and for saving disk space by temporarily compressing unused files or directories. The zip program puts one or more compressed files into a single zip archive, along with information about the files (name, path, date, time of last modification, protection, and check information to verify file integrity). An entire directory structure can be packed into a zip archive with a single command. Compression ratios of 2:1 to 3:1 are common for text files. zip has one compression method (deflation) and can also store files without compression. (If bzip2 support is added, zip can also compress using bzip2 compression, but such entries require a reasonably modern unzip to decompress. When bzip2 compression is selected, it replaces deflation as the default method.) zip automatically chooses the better of the two (deflation or store or, if bzip2 is selected, bzip2 or store) for each file to be compressed. Command format. The basic command format is zip options archive inpath inpath ... 
where archive is a new or existing zip archive and inpath is a directory or file path optionally including wildcards. When given the name of an existing zip archive, zip will replace identically named entries in the zip archive (matching the relative names as stored in the archive) or add entries for new names. For example, if foo.zip exists and contains foo/file1 and foo/file2, and the directory foo contains the files foo/file1 and foo/file3, then: zip -r foo.zip foo or more concisely zip -r foo foo will replace foo/file1 in foo.zip and add foo/file3 to foo.zip. After this, foo.zip contains foo/file1, foo/file2, and foo/file3, with foo/file2 unchanged from before. So if before the zip command is executed foo.zip has: foo/file1 foo/file2 and directory foo has: file1 file3 then foo.zip will have: foo/file1 foo/file2 foo/file3 where foo/file1 is replaced and foo/file3 is new. -@ file lists. If a file list is specified as -@ [Not on MacOS], zip takes the list of input files from standard input instead of from the command line. For example, zip -@ foo will store the files listed one per line on stdin in foo.zip. Under Unix, this option can be used to powerful effect in conjunction with the find (1) command. For example, to archive all the C source files in the current directory and its subdirectories: find . -name "*.[ch]" -print | zip source -@ (note that the pattern must be quoted to keep the shell from expanding it). Streaming input and output. zip will also accept a single dash ("-") as the zip file name, in which case it will write the zip file to standard output, allowing the output to be piped to another program. For example: zip -r - . | dd of=/dev/nrst0 obs=16k would write the zip output directly to a tape with the specified block size for the purpose of backing up the current directory. 
zip also accepts a single dash ("-") as the name of a file to be compressed, in which case it will read the file from standard input, allowing zip to take input from another program. For example: tar cf - . | zip backup - would compress the output of the tar command for the purpose of backing up the current directory. This generally produces better compression than the previous example using the -r option because zip can take advantage of redundancy between files. The backup can be restored using the command unzip -p backup | tar xf - When no zip file name is given and stdout is not a terminal, zip acts as a filter, compressing standard input to standard output. For example, tar cf - . | zip | dd of=/dev/nrst0 obs=16k is equivalent to tar cf - . | zip - - | dd of=/dev/nrst0 obs=16k zip archives created in this manner can be extracted with the program funzip which is provided in the unzip package, or by gunzip which is provided in the gzip package (but some gunzip may not support this if zip used the Zip64 extensions). For example: dd if=/dev/nrst0 ibs=16k | funzip | tar xvf - The stream can also be saved to a file and extracted with unzip. If Zip64 support for large files and archives is enabled and zip is used as a filter, zip creates a Zip64 archive that requires a PKZIP 4.5 or later compatible unzip to read it. This is to avoid ambiguities in the zip file structure as defined in the current zip standard (PKWARE AppNote) where the decision to use Zip64 needs to be made before data is written for the entry, but for a stream the size of the data is not known at that point. If the data is known to be smaller than 4 GB, the option -fz- can be used to prevent use of Zip64, but zip will exit with an error if Zip64 was in fact needed. zip 3 and unzip 6 and later can read archives with Zip64 entries. Also, zip removes the Zip64 extensions if not needed when archive entries are copied (see the -U (--copy) option).
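A minimal sketch of saving the stream to a file and extracting it with unzip (file names here are illustrative):

```shell
# "-" as the archive name sends the zip stream to standard output;
# redirect it to a file and the result is an ordinary archive that
# unzip can extract.
echo "hello" > note.txt
zip - note.txt > notes.zip
unzip -o notes.zip
```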
When directing the output to another file, note that all options should be before the redirection including -x. For example: zip archive "*.h" "*.c" -x donotinclude.h orthis.h > tofile Zip files. When changing an existing zip archive, zip will write a temporary file with the new contents, and only replace the old one when the process of creating the new version has been completed without error. If the name of the zip archive does not contain an extension, the extension .zip is added. If the name already contains an extension other than .zip, the existing extension is kept unchanged. However, split archives (archives split over multiple files) require the .zip extension on the last split. Scanning and reading files. When zip starts, it scans for files to process (if needed). If this scan takes longer than about 5 seconds, zip will display a "Scanning files" message and start displaying progress dots every 2 seconds or every so many entries processed, whichever takes longer. If more than 2 seconds pass between dots, it could indicate that finding each file is taking time, for example because of a slow network connection. (Actually the initial file scan is a two-step process where the directory scan is followed by a sort and these two steps are separated with a space in the dots. If updating an existing archive, a space also appears between the existing file scan and the new file scan.) The scanning files dots are not controlled by the -ds dot size option, but the dots are turned off by the -q quiet option. The -sf show files option can be used to scan for files and get the list of files scanned without actually processing them. If zip is not able to read a file, it issues a warning but continues. See the -MM option below for more on how zip handles patterns that are not matched and files that are not readable. If some files were skipped, a warning is issued at the end of the zip operation noting how many files were read and how many skipped. Command modes.
zip now supports two distinct types of command modes, external and internal. The external modes (add, update, and freshen) read files from the file system (as well as from an existing archive) while the internal modes (delete and copy) operate exclusively on entries in an existing archive. add Update existing entries and add new files. If the archive does not exist, create it. This is the default mode. update (-u) Update existing entries if newer on the file system and add new files. If the archive does not exist, issue a warning and then create a new archive. freshen (-f) Update existing entries of an archive if newer on the file system. Does not add new files to the archive. delete (-d) Select entries in an existing archive and delete them. copy (-U) Select entries in an existing archive and copy them to a new archive. This new mode is similar to update but command line patterns select entries in the existing archive rather than files from the file system and it uses the --out option to write the resulting archive to a new file rather than update the existing archive, leaving the original archive unchanged. The new File Sync option (-FS) is also considered a new mode, though it is similar to update. This mode synchronizes the archive with the files on the OS, only replacing files in the archive if the file time or size of the OS file is different, adding new files, and deleting entries from the archive where there is no matching file. As this mode can delete entries from the archive, consider making a backup copy of the archive. Also see -DF for creating difference archives. See each option description below for details and the EXAMPLES section below for examples. Split archives. zip version 3.0 and later can create split archives. A split archive is a standard zip archive split over multiple files. (Note that split archives are not just archives split into pieces, as the offsets of entries are now based on the start of each split.
Concatenating the pieces together will invalidate these offsets, but unzip can usually deal with it. zip will usually refuse to process such a spliced archive unless the -FF fix option is used to fix the offsets.) One use of split archives is storing a large archive on multiple removable media. For a split archive with 20 split files the files are typically named (replace ARCHIVE with the name of your archive) ARCHIVE.z01, ARCHIVE.z02, ..., ARCHIVE.z19, ARCHIVE.zip. Note that the last file is the .zip file. In contrast, spanned archives are the original multi-disk archive generally requiring floppy disks and using volume labels to store disk numbers. zip supports split archives but not spanned archives, though a procedure exists for converting split archives of the right size to spanned archives. The reverse is also true, where each file of a spanned archive can be copied in order to files with the above names to create a split archive. Use -s to set the split size and create a split archive. The size is given as a number followed optionally by one of k (kB), m (MB), g (GB), or t (TB) (the default is m). The -sp option can be used to pause zip between splits to allow changing removable media, for example, but read the descriptions and warnings for both -s and -sp below. Though zip does not update split archives, zip provides the new option -O (--output-file or --out) to allow split archives to be updated and saved in a new archive. For example, zip inarchive.zip foo.c bar.c --out outarchive.zip reads archive inarchive.zip, even if split, adds the files foo.c and bar.c, and writes the resulting archive to outarchive.zip. If inarchive.zip is split then outarchive.zip defaults to the same split size. Be aware that if outarchive.zip and any split files that are created with it already exist, these are always overwritten as needed without warning. This may be changed in the future. Unicode. 
Though the zip standard requires storing paths in an archive using a specific character set, in practice zips have stored paths in archives in whatever the local character set is. This creates problems when an archive is created or updated on a system using one character set and then extracted on another system using a different character set. When compiled with Unicode support enabled on platforms that support wide characters, zip now stores, in addition to the standard local path for backward compatibility, the UTF-8 translation of the path. This provides a common universal character set for storing paths that allows these paths to be fully extracted on other systems that support Unicode and to match as close as possible on systems that don't. On Win32 systems where paths are internally stored as Unicode but represented in the local character set, it's possible that some paths will be skipped during a local character set directory scan. zip with Unicode support now can read and store these paths. Note that Win 9x systems and FAT file systems don't fully support Unicode. Be aware that console windows on Win32 and Unix, for example, sometimes don't accurately show all characters due to how each operating system switches in character sets for display. However, directory navigation tools should show the correct paths if the needed fonts are loaded. Command line format. This version of zip has updated command line processing and support for long options. Short options take the form -s[-][s[-]...][value][=value][ value] where s is a one or two character short option. A short option that takes a value is last in an argument and anything after it is taken as the value. If the option can be negated and "-" immediately follows the option, the option is negated. Short options can also be given as separate arguments -s[-][value][=value][ value] -s[-][value][=value][ value] ... 
Short options in general take values either as part of the same argument or as the following argument. An optional = is also supported. So -ttmmddyyyy and -tt=mmddyyyy and -tt mmddyyyy all work. The -x and -i options accept lists of values and use a slightly different format described below. See the -x and -i options. Long options take the form --longoption[-][=value][ value] where the option starts with --, has a multicharacter name, can include a trailing dash to negate the option (if the option supports it), and can have a value (option argument) specified by preceding it with = (no spaces). Values can also follow the argument. So --before-date=mmddyyyy and --before-date mmddyyyy both work. Long option names can be shortened to the shortest unique abbreviation. See the option descriptions below to determine which options support long forms. To avoid confusion, avoid abbreviating a negatable option with an embedded dash ("-") at the dash if you plan to negate it (the parser would consider a trailing dash, such as for the option --some-option using --some- as the option, as part of the name rather than a negating dash). This may be changed to force the last dash in --some- to be negating in the future.
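As an illustration of the forms above, the following two commands are equivalent (archive and directory names are hypothetical); --recurse-paths and --quiet are the long forms of -r and -q:

```shell
# Short option form
zip -r -q archive.zip mydir

# Equivalent long option form
zip --recurse-paths --quiet archive.zip mydir
```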
zip - package and compress (archive) files
zip [-aABcdDeEfFghjklLmoqrRSTuvVwXyz!@$] [--longoption ...] [-b path] [-n suffixes] [-t date] [-tt date] [zipfile [file ...]] [-xi list] zipcloak (see separate man page) zipnote (see separate man page) zipsplit (see separate man page) Note: Command line processing in zip has been changed to support long options and handle all options and arguments more consistently. Some old command lines that depend on command line inconsistencies may no longer work.
-a --ascii [Systems using EBCDIC] Translate file to ASCII format. -A --adjust-sfx Adjust self-extracting executable archive. A self-extracting executable archive is created by prepending the SFX stub to an existing archive. The -A option tells zip to adjust the entry offsets stored in the archive to take into account this "preamble" data. Note: self-extracting archives for the Amiga are a special case. At present, only the Amiga port of zip is capable of adjusting or updating these without corrupting them. -J can be used to remove the SFX stub if other updates need to be made. -AC --archive-clear [WIN32] Once archive is created (and tested if -T is used, which is recommended), clear the archive bits of files processed. WARNING: Once the bits are cleared they are cleared. You may want to use the -sf show files option to store the list of files processed in case the archive operation must be repeated. Also consider using the -MM must match option. Be sure to check out -DF as a possibly better way to do incremental backups. -AS --archive-set [WIN32] Only include files that have the archive bit set. Directories are not stored when -AS is used, though by default the paths of entries, including directories, are stored as usual and can be used by most unzips to recreate directories. The archive bit is set by the operating system when a file is modified and, if used with -AC, -AS can provide an incremental backup capability. However, other applications can modify the archive bit and it may not be a reliable indicator of which files have changed since the last archive operation. Alternative ways to create incremental backups are using -t to use file dates, though this won't catch old files copied to directories being archived, and -DF to create a differential archive. -B --binary [VM/CMS and MVS] force file to be read binary (default is text). 
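A sketch of the -A workflow described above, assuming the unzipsfx stub shipped with the unzip package is available; file names are hypothetical:

```shell
# Prepend the SFX stub to an existing archive, then let zip -A fix
# the entry offsets to account for the prepended "preamble" data.
cat unzipsfx archive.zip > archive_sfx
zip -A archive_sfx
chmod 755 archive_sfx   # make the result executable
```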
-Bn [TANDEM] set Edit/Enscribe formatting options with n defined as bit 0: Don't add delimiter (Edit/Enscribe) bit 1: Use LF rather than CR/LF as delimiter (Edit/Enscribe) bit 2: Space fill record to maximum record length (Enscribe) bit 3: Trim trailing space (Enscribe) bit 8: Force 30K (Expand) large read for unstructured files -b path --temp-path path Use the specified path for the temporary zip archive. For example: zip -b /tmp stuff * will put the temporary zip archive in the directory /tmp, copying over stuff.zip to the current directory when done. This option is useful when updating an existing archive and the file system containing this old archive does not have enough space to hold both old and new archives at the same time. It may also be useful when streaming in some cases to avoid the need for data descriptors. Note that using this option may require zip take additional time to copy the archive file when done to the destination file system. -c --entry-comments Add one-line comments for each file. File operations (adding, updating) are done first, and the user is then prompted for a one-line comment for each file. Enter the comment followed by return, or just return for no comment. -C --preserve-case [VMS] Preserve case all on VMS. Negating this option (-C-) downcases. -C2 --preserve-case-2 [VMS] Preserve case ODS2 on VMS. Negating this option (-C2-) downcases. -C5 --preserve-case-5 [VMS] Preserve case ODS5 on VMS. Negating this option (-C5-) downcases. -d --delete Remove (delete) entries from a zip archive. For example: zip -d foo foo/tom/junk foo/harry/\* \*.o will remove the entry foo/tom/junk, all of the files that start with foo/harry/, and all of the files that end with .o (in any path). Note that shell pathname expansion has been inhibited with backslashes, so that zip can see the asterisks, enabling zip to match on the contents of the zip archive instead of the contents of the current directory. 
(The backslashes are not used on MSDOS-based platforms.) Can also use quotes to escape the asterisks as in zip -d foo foo/tom/junk "foo/harry/*" "*.o" Not escaping the asterisks on a system where the shell expands wildcards could result in the asterisks being converted to a list of files in the current directory and that list used to delete entries from the archive. Under MSDOS, -d is case sensitive when it matches names in the zip archive. This requires that file names be entered in upper case if they were zipped by PKZIP on an MSDOS system. (We considered making this case insensitive on systems where paths were case insensitive, but it is possible the archive came from a system where case does matter and the archive could include both Bar and bar as separate files in the archive.) But see the new option -ic to ignore case in the archive. -db --display-bytes Display running byte counts showing the bytes zipped and the bytes to go. -dc --display-counts Display running count of entries zipped and entries to go. -dd --display-dots Display dots while each entry is zipped (except on ports that have their own progress indicator). See -ds below for setting dot size. The default is a dot every 10 MB of input file processed. The -v option also displays dots (previously at a much higher rate than this but now -v also defaults to 10 MB) and this rate is also controlled by -ds. -df --datafork [MacOS] Include only the data fork of files zipped into the archive. Good for exporting files to foreign operating systems. Resource forks are ignored entirely. -dg --display-globaldots Display progress dots for the archive instead of for each file. The command zip -qdgds 10m will turn off most output except dots every 10 MB. -ds size --dot-size size Set amount of input file processed for each dot displayed. See -dd to enable displaying dots. Setting this option implies -dd. Size is in the format nm where n is a number and m is a multiplier.
Currently m can be k (KB), m (MB), g (GB), or t (TB), so if n is 100 and m is k, size would be 100k which is 100 KB. The default is 10 MB. The -v option also displays dots and now defaults to 10 MB also. This rate is also controlled by this option. A size of 0 turns dots off. This option does not control the dots from the "Scanning files" message as zip scans for input files. The dot size for that is fixed at 2 seconds or a fixed number of entries, whichever is longer. -du --display-usize Display the uncompressed size of each entry. -dv --display-volume Display the volume (disk) number each entry is being read from, if reading an existing archive, and being written to. -D --no-dir-entries Do not create entries in the zip archive for directories. Directory entries are created by default so that their attributes can be saved in the zip archive. The environment variable ZIPOPT can be used to change the default options. For example under Unix with sh: ZIPOPT="-D"; export ZIPOPT (The variable ZIPOPT can be used for any option, including -i and -x using a new option format detailed below, and can include several options.) The option -D is a shorthand for -x "*/" but the latter previously could not be set as default in the ZIPOPT environment variable as the contents of ZIPOPT gets inserted near the beginning of the command line and the file list had to end at the end of the line. This version of zip does allow -x and -i options in ZIPOPT if the form -x file file ... @ is used, where the @ (an argument that is just @) terminates the list. -DF --difference-archive Create an archive that contains all new and changed files since the original archive was created. For this to work, the input file list and current directory must be the same as during the original zip operation. For example, if the existing archive was created using zip -r foofull . from the bar directory, then the command zip -r foofull . 
-DF --out foonew also from the bar directory creates the archive foonew with just the files not in foofull and the files where the size or file time of the files do not match those in foofull. Note that the timezone environment variable TZ should be set according to the local timezone in order for this option to work correctly. A change in timezone since the original archive was created could result in no times matching and all files being included. A possible approach to backing up a directory might be to create a normal archive of the contents of the directory as a full backup, then use this option to create incremental backups. -e --encrypt Encrypt the contents of the zip archive using a password which is entered on the terminal in response to a prompt (this will not be echoed; if standard error is not a tty, zip will exit with an error). The password prompt is repeated to save the user from typing errors. -E --longnames [OS/2] Use the .LONGNAME Extended Attribute (if found) as filename. -f --freshen Replace (freshen) an existing entry in the zip archive only if it has been modified more recently than the version already in the zip archive; unlike the update option (-u) this will not add files that are not already in the zip archive. For example: zip -f foo This command should be run from the same directory from which the original zip command was run, since paths stored in zip archives are always relative. Note that the timezone environment variable TZ should be set according to the local timezone in order for the -f, -u and -o options to work correctly. The reasons behind this are somewhat subtle but have to do with the differences between the Unix-format file times (always in GMT) and most of the other operating systems (always local time) and the necessity to compare the two. A typical TZ value is ``MET-1MEST'' (Middle European time with automatic adjustment for ``summertime'' or Daylight Savings Time). 
The format is TTThhDDD, where TTT is the time zone such as MET, hh is the difference between GMT and local time such as -1 above, and DDD is the time zone when daylight savings time is in effect. Leave off the DDD if there is no daylight savings time. For the US Eastern time zone, use EST5EDT. -F --fix -FF --fixfix Fix the zip archive. The -F option can be used if some portions of the archive are missing, but requires a reasonably intact central directory. The input archive is scanned as usual, but zip will ignore some problems. The resulting archive should be valid, but any inconsistent entries will be left out. When doubled as in -FF, the archive is scanned from the beginning and zip scans for special signatures to identify the limits between the archive members. The single -F is more reliable if the archive is not too badly damaged, so try this option first. If the archive is too damaged or the end has been truncated, you must use -FF. This is a change from zip 2.32, where the -F option was able to read a truncated archive. The -F option now more reliably fixes archives with minor damage and the -FF option is needed to fix archives where -F might have been sufficient before. Neither option will recover archives that have been incorrectly transferred in ASCII mode instead of binary. After the repair, the -t option of unzip may show that some files have a bad CRC. Such files cannot be recovered; you can remove them from the archive using the -d option of zip. Note that -FF may have trouble fixing archives that include an embedded zip archive that was stored (without compression) in the archive and, depending on the damage, it may find the entries in the embedded archive rather than the archive itself. Try -F first as it does not have this problem. The format of the fix commands has changed. For example, to fix the damaged archive foo.zip, zip -F foo --out foofix tries to read the entries normally, copying good entries to the new archive foofix.zip.
If this doesn't work, as when the archive is truncated, or if some entries you know are in the archive are missed, then try zip -FF foo --out foofixfix and compare the resulting archive to the archive created by -F. The -FF option may create an inconsistent archive. Depending on what is damaged, you can then use the -F option to fix that archive. A split archive with missing split files can be fixed using -F if you have the last split of the archive (the .zip file). If this file is missing, you must use -FF to fix the archive, which will prompt you for the splits you have. Currently the fix options can't recover entries that have a bad checksum or are otherwise damaged. -FI --fifo [Unix] Normally zip skips reading any FIFOs (named pipes) encountered, as zip can hang if the FIFO is not being fed. This option tells zip to read the contents of any FIFO it finds. -FS --filesync Synchronize the contents of an archive with the files on the OS. Normally when an archive is updated, new files are added and changed files are updated but files that no longer exist on the OS are not deleted from the archive. This option enables a new mode that checks entries in the archive against the file system. If the file time and file size of the entry matches that of the OS file, the entry is copied from the old archive instead of being read from the file system and compressed. If the OS file has changed, the entry is read and compressed as usual. If the entry in the archive does not match a file on the OS, the entry is deleted. Enabling this option should create archives that are the same as new archives, but since existing entries are copied instead of compressed, updating an existing archive with -FS can be much faster than creating a new archive. Also consider using -u for updating an archive. For this option to work, the archive should be updated from the same directory it was created in so the relative paths match. 
If few files are being copied from the old archive, it may be faster to create a new archive instead. Note that the timezone environment variable TZ should be set according to the local timezone in order for this option to work correctly. A change in timezone since the original archive was created could result in no times matching and recompression of all files. This option deletes files from the archive. If you need to preserve the original archive, make a copy of the archive first or use the --out option to output the updated archive to a new file. Even though it may be slower, creating a new archive with a new archive name is safer, avoids mismatches between archive and OS paths, and is preferred. -g --grow Grow (append to) the specified zip archive, instead of creating a new one. If this operation fails, zip attempts to restore the archive to its original state. If the restoration fails, the archive might become corrupted. This option is ignored when there's no existing archive or when at least one archive member must be updated or deleted. -h -? --help Display the zip help information (this also appears if zip is run with no arguments). -h2 --more-help Display extended help including more on command line format, pattern matching, and more obscure options. -i files --include files Include only the specified files, as in: zip -r foo . -i \*.c which will include only the files that end in .c in the current directory and its subdirectories. (Note for PKZIP users: the equivalent command is pkzip -rP foo *.c PKZIP does not allow recursion in directories other than the current one.) The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels. [This is for Unix and other systems where \ escapes the next character. For other systems where the shell does not process * do not use \ and the above is zip -r foo . -i *.c Examples are for Unix unless otherwise specified.] 
So to include dir, a directory directly under the current directory, use zip -r foo . -i dir/\* or zip -r foo . -i "dir/*" to match paths such as dir/a and dir/b/file.c [on ports without wildcard expansion in the shell such as MSDOS and Windows zip -r foo . -i dir/* is used.] Note that currently the trailing / is needed for directories (as in zip -r foo . -i dir/ to include directory dir). The long option form of the first example is zip -r foo . --include \*.c and does the same thing as the short option form. Though the command syntax used to require -i at the end of the command line, this version actually allows -i (or --include) anywhere. The list of files terminates at the next argument starting with -, the end of the command line, or the list terminator @ (an argument that is just @). So the above can be given as zip -i \*.c @ -r foo . for example. There must be a space between the option and the first file of a list. For just one file you can use the single value form zip -i\*.c -r foo . (no space between option and value) or zip --include=\*.c -r foo . as additional examples. The single value forms are not recommended because they can be confusing and, in particular, the -ifile format can cause problems if the first letter of file combines with i to form a two-letter option starting with i. Use -sc to see how your command line will be parsed. Also possible: zip -r foo . -i@include.lst which will only include the files in the current directory and its subdirectories that match the patterns in the file include.lst. Files to -i and -x are patterns matching internal archive paths. See -R for more on patterns. -I --no-image [Acorn RISC OS] Don't scan through Image files. When used, zip will not consider Image files (eg. DOS partitions or Spark archives when SparkFS is loaded) as directories but will store them as single files. 
For example, if you have SparkFS loaded, zipping a Spark archive will result in a zipfile containing a directory (and its content) while using the 'I' option will result in a zipfile containing a Spark archive. Obviously this second case will also be obtained (without the 'I' option) if SparkFS isn't loaded. -ic --ignore-case [VMS, WIN32] Ignore case when matching archive entries. This option is only available on systems where the case of files is ignored. On systems with case-insensitive file systems, case is normally ignored when matching files on the file system but is not ignored for -f (freshen), -d (delete), -U (copy), and similar modes when matching against archive entries (currently -f ignores case on VMS) because archive entries can be from systems where case does matter and names that are the same except for case can exist in an archive. The -ic option makes all matching case insensitive. This can result in multiple archive entries matching a command line pattern. -j --junk-paths Store just the name of a saved file (junk the path), and do not store directory names. By default, zip will store the full path (relative to the current directory). -jj --absolute-path [MacOS] record Fullpath (+ Volname). The complete path including volume will be stored. By default the relative path will be stored. -J --junk-sfx Strip any prepended data (e.g. a SFX stub) from the archive. -k --DOS-names Attempt to convert the names and paths to conform to MSDOS, store only the MSDOS attribute (just the user write attribute from Unix), and mark the entry as made under MSDOS (even though it was not); for compatibility with PKUNZIP under MSDOS which cannot handle certain names such as those with two dots. -l --to-crlf Translate the Unix end-of-line character LF into the MSDOS convention CR LF. This option should not be used on binary files. This option can be used on Unix if the zip file is intended for PKUNZIP under MSDOS. 
If the input files already contain CR LF, this option adds an extra CR. This is to ensure that unzip -a on Unix will get back an exact copy of the original file, to undo the effect of zip -l. See -ll for how binary files are handled. -la --log-append Append to existing logfile. Default is to overwrite. -lf logfilepath --logfile-path logfilepath Open a logfile at the given path. By default any existing file at that location is overwritten, but the -la option will result in an existing file being opened and the new log information appended to any existing information. Only warnings and errors are written to the log unless the -li option is also given, then all information messages are also written to the log. -li --log-info Include information messages, such as file names being zipped, in the log. The default is to only include the command line, any warnings and errors, and the final status. -ll --from-crlf Translate the MSDOS end-of-line CR LF into Unix LF. This option should not be used on binary files. This option can be used on MSDOS if the zip file is intended for unzip under Unix. If the file is converted and the file is later determined to be binary a warning is issued and the file is probably corrupted. In this release if -ll detects binary in the first buffer read from a file, zip now issues a warning and skips line end conversion on the file. This check seems to catch all binary files tested, but the original check remains and if a converted file is later determined to be binary that warning is still issued. A new algorithm is now being used for binary detection that should allow line end conversion of text files in UTF-8 and similar encodings. -L --license Display the zip license. -m --move Move the specified files into the zip archive; actually, this deletes the target directories/files after making the specified zip archive. If a directory becomes empty after removal of the files, the directory is also removed. 
No deletions are done until zip has created the archive without error. This is useful for conserving disk space, but is potentially dangerous so it is recommended to use it in combination with -T to test the archive before removing all input files. -MM --must-match All input patterns must match at least one file and all input files found must be readable. Normally when an input pattern does not match a file the "name not matched" warning is issued and when an input file has been found but later is missing or not readable a missing or not readable warning is issued. In either case zip continues creating the archive, with missing or unreadable new files being skipped and files already in the archive remaining unchanged. After the archive is created, if any files were not readable zip returns the OPEN error code (18 on most systems) instead of the normal success return (0 on most systems). With -MM set, zip exits as soon as an input pattern is not matched (whenever the "name not matched" warning would be issued) or when an input file is not readable. In either case zip exits with an OPEN error and no archive is created. This option is useful when a known list of files is to be zipped so any missing or unreadable files will result in an error. It is less useful when used with wildcards, but zip will still exit with an error if any input pattern doesn't match at least one file and if any matched files are unreadable. If you want to create the archive anyway and only need to know if files were skipped, don't use -MM and just check the return code. Also -lf could be useful. -n suffixes --suffixes suffixes Do not attempt to compress files named with the given suffixes. Such files are simply stored (0% compression) in the output zip file, so that zip doesn't waste its time trying to compress them. The suffixes are separated by either colons or semicolons. 
For example: zip -rn .Z:.zip:.tiff:.gif:.snd foo foo will copy everything from foo into foo.zip, but will store any files that end in .Z, .zip, .tiff, .gif, or .snd without trying to compress them (image and sound files often have their own specialized compression methods). By default, zip does not compress files with extensions in the list .Z:.zip:.zoo:.arc:.lzh:.arj. Such files are stored directly in the output archive. The environment variable ZIPOPT can be used to change the default options. For example under Unix with csh: setenv ZIPOPT "-n .gif:.zip" To attempt compression on all files, use: zip -n : foo The maximum compression option -9 also attempts compression on all files regardless of extension. On Acorn RISC OS systems the suffixes are actually filetypes (3 hex digit format). By default, zip does not compress files with filetypes in the list DDC:D96:68E (i.e. Archives, CFS files and PackDir files). -nw --no-wild Do not perform internal wildcard processing (shell processing of wildcards is still done by the shell unless the arguments are escaped). Useful if a list of paths is being read and no wildcard substitution is desired. -N --notes [Amiga, MacOS] Save Amiga or MacOS filenotes as zipfile comments. They can be restored by using the -N option of unzip. If -c is used also, you are prompted for comments only for those files that do not have filenotes. -o --latest-time Set the "last modified" time of the zip archive to the latest (most recent) "last modified" time found among the entries in the zip archive. This can be used without any other operations, if desired. For example: zip -o foo will change the last modified time of foo.zip to the latest time of the entries in foo.zip. -O output-file --output-file output-file Process the archive changes as usual, but instead of updating the existing archive, output the new archive to output-file. 
Useful for updating an archive without changing the existing archive; note that the input archive must be a different file than the output archive. This option can be used to create updated split archives. It can also be used with -U to copy entries from an existing archive to a new archive. See the EXAMPLES section below. Another use is converting zip files from one split size to another. For instance, to convert an archive with 700 MB CD splits to one with 2 GB DVD splits, one can use: zip -s 2g cd-split.zip --out dvd-split.zip which uses copy mode. See -U below. Also: zip -s 0 split.zip --out unsplit.zip will convert a split archive to a single-file archive. Copy mode will convert stream entries (which use data descriptors and should be compatible with most unzips) to normal entries (which should be compatible with all unzips), except if standard encryption was used. For archives with encrypted entries, zipcloak will decrypt the entries and convert them to normal entries. -p --paths Include relative file paths as part of the names of files stored in the archive. This is the default. The -j option junks the paths and just stores the names of the files. -P password --password password Use password to encrypt zipfile entries (if any). THIS IS INSECURE! Many multi-user operating systems provide ways for any user to see the current command line of any other user; even on stand-alone systems there is always the threat of over-the-shoulder peeking. Storing the plaintext password as part of a command line in an automated script is even worse. Whenever possible, use the non-echoing, interactive prompt to enter passwords. (And where security is truly important, use strong encryption such as Pretty Good Privacy instead of the relatively weak standard encryption provided by zipfile utilities.) -q --quiet Quiet mode; eliminate informational messages and comment prompts. (Useful, for example, in shell scripts and background tasks). 
-Qn --Q-flag n [QDOS] store information about the file in the file header with n defined as bit 0: Don't add headers for any file bit 1: Add headers for all files bit 2: Don't wait for interactive key press on exit -r --recurse-paths Travel the directory structure recursively; for example: zip -r foo.zip foo or more concisely zip -r foo foo In this case, all the files and directories in foo are saved in a zip archive named foo.zip, including files with names starting with ".", since the recursion does not use the shell's file-name substitution mechanism. If you wish to include only a specific subset of the files in directory foo and its subdirectories, use the -i option to specify the pattern of files to be included. You should not use -r with the name ".*", since that matches ".." which will attempt to zip up the parent directory (probably not what was intended). Multiple source directories are allowed as in zip -r foo foo1 foo2 which first zips up foo1 and then foo2, going down each directory. Note that while wildcards to -r are typically resolved while recursing down directories in the file system, any -R, -x, and -i wildcards are applied to internal archive pathnames once the directories are scanned. To have wildcards apply to files in subdirectories when recursing on Unix and similar systems where the shell does wildcard substitution, either escape all wildcards or put all arguments with wildcards in quotes. This lets zip see the wildcards and match files in subdirectories using them as it recurses. -R --recurse-patterns Travel the directory structure recursively starting at the current directory; for example: zip -R foo "*.c" In this case, all the files matching *.c in the tree starting at the current directory are stored into a zip archive named foo.zip. Note that *.c will match file.c, a/file.c and a/b/.c. More than one pattern can be listed as separate arguments. 
Note for PKZIP users: the equivalent command is pkzip -rP foo *.c Patterns are relative file paths as they appear in the archive, or will after zipping, and can have optional wildcards in them. For example, given the current directory is foo and under it are directories foo1 and foo2 and in foo1 is the file bar.c, zip -R foo/* will zip up foo, foo/foo1, foo/foo1/bar.c, and foo/foo2. zip -R */bar.c will zip up foo/foo1/bar.c. See the note for -r on escaping wildcards. -RE --regex [WIN32] Before zip 3.0, regular expression list matching was enabled by default on Windows platforms. Because of confusion resulting from the need to escape "[" and "]" in names, it is now off by default for Windows so "[" and "]" are just normal characters in names. This option enables [] matching again. -s splitsize --split-size splitsize Enable creating a split archive and set the split size. A split archive is an archive that may be split over multiple files. As the archive is created, if the size of the archive reaches the specified split size, that split is closed and the next split opened. In general all splits but the last will be the split size and the last will be whatever is left. If the entire archive is smaller than the split size a single-file archive is created. Split archives are stored in numbered files. For example, if the output archive is named archive and three splits are required, the resulting archive will be in the three files archive.z01, archive.z02, and archive.zip. Do not change the numbering of these files or the archive will not be readable as these are used to determine the order the splits are read. Split size is a number optionally followed by a multiplier. Currently the number must be an integer. The multiplier can currently be one of k (kilobytes), m (megabytes), g (gigabytes), or t (terabytes). Numbers without multipliers default to megabytes. The minimum split size is 64k. 
For example, to create a split archive called foo with the contents of the bar directory with splits of 670 MB that might be useful for burning on CDs, the command: zip -s 670m -r foo bar could be used. Currently the old splits of a split archive are not excluded from a new archive, but they can be specifically excluded. If possible, keep the input and output archives out of the path being zipped when creating split archives. Using -s without -sp as above creates all the splits where foo is being written, in this case the current directory. This split mode updates the splits as the archive is being created, requiring all splits to remain writable, but creates split archives that are readable by any unzip that supports split archives. See -sp below for enabling split pause mode which allows splits to be written directly to removable media. The option -sv can be used to enable verbose splitting and provide details of how the splitting is being done. The -sb option can be used to ring the bell when zip pauses for the next split destination. Split archives cannot be updated, but see the -O (--out) option for how a split archive can be updated as it is copied to a new archive. A split archive can also be converted into a single-file archive using a split size of 0 or negating the -s option: zip -s 0 split.zip --out single.zip Also see -U (--copy) for more on using copy mode. -sb --split-bell If splitting and using split pause mode, ring the bell when zip pauses for each split destination. -sc --show-command Show the command line starting zip as processed and exit. The new command parser permutes the arguments, putting all options and any values associated with them before any non-option arguments. This allows an option to appear anywhere in the command line as long as any values that go with the option go with it. This option displays the command line as zip sees it, including any arguments from the environment such as from the ZIPOPT variable. 
Where allowed, options later in the command line can override options earlier in the command line. -sf --show-files Show the files that would be operated on, then exit. For instance, if creating a new archive, this will list the files that would be added. If the option is negated (-sf-), output goes only to an open log file. Screen display is not recommended for large lists. -so --show-options Show all available options supported by zip as compiled on the current system. As this command reads the option table, it should include all options. Each line includes the short option (if defined), the long option (if defined), the format of any value that goes with the option, whether the option can be negated, and a small description. The value format can be no value, required value, optional value, single character value, number value, or a list of values. The output of this option is not intended to show how to use any option but only show what options are available. -sp --split-pause If splitting is enabled with -s, enable split pause mode. This creates split archives as -s does, but stream writing is used so each split can be closed as soon as it is written and zip will pause between each split to allow changing split destination or media. Though this split mode allows writing splits directly to removable media, it uses stream archive format that may not be readable by some unzips. Before relying on splits created with -sp, test a split archive with the unzip you will be using. To convert a stream split archive (created with -sp) to a standard archive see the --out option. -su --show-unicode As -sf, but also show the Unicode version of the path if it exists. -sU --show-just-unicode As -sf, but only show the Unicode version of the path if it exists, otherwise show the standard version of the path. -sv --split-verbose Enable various verbose messages while splitting, showing how the splitting is being done. -S --system-hidden [MSDOS, OS/2, WIN32 and ATARI] Include system and hidden files. 
[MacOS] Includes finder invisible files, which are ignored otherwise. -t mmddyyyy --from-date mmddyyyy Do not operate on files modified prior to the specified date, where mm is the month (01-12), dd is the day of the month (01-31), and yyyy is the year. The ISO 8601 date format yyyy-mm-dd is also accepted. For example: zip -rt 12071991 infamy foo zip -rt 1991-12-07 infamy foo will add all the files in foo and its subdirectories that were last modified on or after 7 December 1991 to the zip archive infamy.zip. -tt mmddyyyy --before-date mmddyyyy Do not operate on files modified after or at the specified date, where mm is the month (01-12), dd is the day of the month (01-31), and yyyy is the year. The ISO 8601 date format yyyy-mm-dd is also accepted. For example: zip -rtt 11301995 infamy foo zip -rtt 1995-11-30 infamy foo will add all the files in foo and its subdirectories that were last modified before 30 November 1995 to the zip archive infamy.zip. -T --test Test the integrity of the new zip file. If the check fails, the old zip file is unchanged and (with the -m option) no input files are removed. -TT cmd --unzip-command cmd Use command cmd instead of 'unzip -tqq' to test an archive when the -T option is used. On Unix, to use a copy of unzip in the current directory instead of the standard system unzip, you could use: zip archive file1 file2 -T -TT "./unzip -tqq" In cmd, {} is replaced by the name of the temporary archive, otherwise the name of the archive is appended to the end of the command. The return code is checked for success (0 on Unix). -u --update Replace (update) an existing entry in the zip archive only if it has been modified more recently than the version already in the zip archive. For example: zip -u stuff * will add any new files in the current directory, and update any files which have been modified since the zip archive stuff.zip was last created/modified (note that zip will not try to pack stuff.zip into itself when you do this). 
Note that the -u option with no input file arguments acts like the -f (freshen) option. -U --copy-entries Copy entries from one archive to another. Requires the --out option to specify a different output file than the input archive. Copy mode is the reverse of -d delete. When delete is being used with --out, the selected entries are deleted from the archive and all other entries are copied to the new archive, while copy mode selects the files to include in the new archive. Unlike -u update, input patterns on the command line are matched against archive entries only and not the file system files. For instance, zip inarchive "*.c" --copy --out outarchive copies entries with names ending in .c from inarchive to outarchive. The wildcard must be escaped on some systems to prevent the shell from substituting names of files from the file system which may have no relevance to the entries in the archive. If no input files appear on the command line and --out is used, copy mode is assumed: zip inarchive --out outarchive This is useful for changing split size for instance. Encrypting and decrypting entries is not yet supported using copy mode. Use zipcloak for that. -UN v --unicode v Determine what zip should do with Unicode file names. zip 3.0, in addition to the standard file path, now includes the UTF-8 translation of the path if the entry path is not entirely 7-bit ASCII. When an entry is missing the Unicode path, zip reverts to the standard file path. The problem with using the standard path is that this path is in the local character set of the zip that created the entry, which may contain characters that are not valid in the character set being used by the unzip. When zip is reading an archive, if an entry also has a Unicode path, zip now defaults to using the Unicode path to recreate the standard path using the current local character set. 
This option can be used to determine what zip should do with this path if there is a mismatch between the stored standard path and the stored UTF-8 path (which can happen if the standard path was updated). In all cases, if there is a mismatch it is assumed that the standard path is more current and zip uses that. Values for v are q - quit if paths do not match w - warn, continue with standard path i - ignore, continue with standard path n - no Unicode, do not use Unicode paths The default is to warn and continue. Characters that are not valid in the current character set are escaped as #Uxxxx and #Lxxxxxx, where x is an ASCII character for a hex digit. The first is used if a 16-bit character number is sufficient to represent the Unicode character and the second if the character needs more than 16 bits to represent its Unicode character code. Setting -UN to e - escape as in zip archive -sU -UN=e forces zip to escape all characters that are not printable 7-bit ASCII. Normally zip stores UTF-8 directly in the standard path field on systems where UTF-8 is the current character set and stores the UTF-8 in the new extra fields otherwise. The option u - UTF-8 as in zip archive dir -r -UN=UTF8 forces zip to store UTF-8 as native in the archive. Note that storing UTF-8 directly is the default on Unix systems that support it. This option could be useful on Windows systems where the escaped path is too large to be a valid path and the UTF-8 version of the path is smaller, but native UTF-8 is not backward compatible on Windows systems. -v --verbose Verbose mode or print diagnostic version info. Normally, when applied to real operations, this option enables the display of a progress indicator during compression (see -dd for more on dots) and requests verbose diagnostic info about zipfile structure oddities. However, when -v is the only command line argument a diagnostic screen is printed instead. 
This should now work even if stdout is redirected to a file, allowing easy saving of the information for sending with bug reports to Info-ZIP. The version screen provides the help screen header with program name, version, and release date, some pointers to the Info-ZIP home and distribution sites, and shows information about the target environment (compiler type and version, OS version, compilation date and the enabled optional features used to create the zip executable). -V --VMS-portable [VMS] Save VMS file attributes. (Files are truncated at EOF.) When a -V archive is unpacked on a non-VMS system, some file types (notably Stream_LF text files and pure binary files like fixed-512) should be extracted intact. Indexed files and file types with embedded record sizes (notably variable-length record types) will probably be seen as corrupt elsewhere. -VV --VMS-specific [VMS] Save VMS file attributes, and all allocated blocks in a file, including any data beyond EOF. Useful for moving ill-formed files among VMS systems. When a -VV archive is unpacked on a non-VMS system, almost all files will appear corrupt. -w --VMS-versions [VMS] Append the version number of the files to the name, including multiple versions of files. Default is to use only the most recent version of a specified file. -ww --VMS-dot-versions [VMS] Append the version number of the files to the name, including multiple versions of files, using the .nnn format. Default is to use only the most recent version of a specified file. -ws --wild-stop-dirs Wildcards match only at a directory level. Normally zip handles paths as strings and given the paths /foo/bar/dir/file1.c /foo/bar/file2.c an input pattern such as /foo/bar/* normally would match both paths, the * matching dir/file1.c and file2.c. Note that in the first case a directory boundary (/) was crossed in the match. With -ws no directory bounds will be included in the match, making wildcards local to a specific directory level. 
So, with -ws enabled, only the second path would be matched. When using -ws, use ** to match across directory boundaries as * does normally. -x files --exclude files Explicitly exclude the specified files, as in: zip -r foo foo -x \*.o which will include the contents of foo in foo.zip while excluding all the files that end in .o. The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels. Also possible: zip -r foo foo -x@exclude.lst which will include the contents of foo in foo.zip while excluding all the files that match the patterns in the file exclude.lst. The long option forms of the above are zip -r foo foo --exclude \*.o and zip -r foo foo --exclude @exclude.lst Multiple patterns can be specified, as in: zip -r foo foo -x \*.o \*.c If there is no space between -x and the pattern, just one value is assumed (no list): zip -r foo foo -x\*.o See -i for more on include and exclude. -X --no-extra Do not save extra file attributes (Extended Attributes on OS/2, uid/gid and file times on Unix). The zip format uses extra fields to include additional information for each entry. Some extra fields are specific to particular systems while others are applicable to all systems. Normally when zip reads entries from an existing archive, it reads the extra fields it knows, strips the rest, and adds the extra fields applicable to that system. With -X, zip strips all old fields and only includes the Unicode and Zip64 extra fields (currently these two extra fields cannot be disabled). Negating this option, -X-, includes all the default extra fields, but also copies over any unrecognized extra fields. -y --symlinks For UNIX and VMS (V8.3 and later), store symbolic links as such in the zip archive, instead of compressing and storing the file referred to by the link. This can avoid multiple copies of files being included in the archive as zip recurses the directory trees and accesses files directly and by links. 
-z --archive-comment Prompt for a multi-line comment for the entire zip archive. The comment is ended by a line containing just a period, or an end of file condition (^D on Unix, ^Z on MSDOS, OS/2, and VMS). The comment can be taken from a file: zip -z foo < foowhat -Z cm --compression-method cm Set the default compression method. Currently the main methods supported by zip are store and deflate. Compression method can be set to: store - Setting the compression method to store forces zip to store entries with no compression. This is generally faster than compressing entries, but results in no space savings. This is the same as using -0 (compression level zero). deflate - This is the default method for zip. If zip determines that storing is better than deflation, the entry will be stored instead. bzip2 - If bzip2 support is compiled in, this compression method also becomes available. Only some modern unzips currently support the bzip2 compression method, so test the unzip you will be using before relying on archives using this method (compression method 12). For example, to add bar.c to archive foo using bzip2 compression: zip -Z bzip2 foo bar.c The compression method can be abbreviated: zip -Zb foo bar.c -# (-0, -1, -2, -3, -4, -5, -6, -7, -8, -9) Regulate the speed of compression using the specified digit #, where -0 indicates no compression (store all files), -1 indicates the fastest compression speed (less compression) and -9 indicates the slowest compression speed (optimal compression, ignores the suffix list). The default compression level is -6. Though still being worked on, the intention is that this setting will control compression speed for all compression methods. Currently only deflation is controlled. -! --use-privileges [WIN32] Use privileges (if granted) to obtain all aspects of WinNT security. -@ --names-stdin Take the list of input files from standard input. Only one filename per line. 
-$ --volume-label [MSDOS, OS/2, WIN32] Include the volume label for the drive holding the first file to be compressed. If you want to include only the volume label or to force a specific drive, use the drive name as first file name, as in: zip -$ foo a: c:bar
The simplest example: zip stuff * creates the archive stuff.zip (assuming it does not exist) and puts all the files in the current directory in it, in compressed form (the .zip suffix is added automatically, unless the archive name contains a dot already; this allows the explicit specification of other suffixes). Because of the way the shell on Unix does filename substitution, files starting with "." are not included; to include these as well: zip stuff .* * Even this will not include any subdirectories from the current directory. To zip up an entire directory, the command: zip -r foo foo creates the archive foo.zip, containing all the files and directories in the directory foo that is contained within the current directory. You may want to make a zip archive that contains the files in foo, without recording the directory name, foo. You can use the -j option to leave off the paths, as in: zip -j foo foo/* If you are short on disk space, you might not have enough room to hold both the original directory and the corresponding compressed zip archive. In this case, you can create the archive in steps using the -m option. If foo contains the subdirectories tom, dick, and harry, you can: zip -rm foo foo/tom zip -rm foo foo/dick zip -rm foo foo/harry where the first command creates foo.zip, and the next two add to it. At the completion of each zip command, the last created archive is deleted, making room for the next zip command to function. Use -s to set the split size and create a split archive. The size is given as a number followed optionally by one of k (kB), m (MB), g (GB), or t (TB). The command zip -s 2g -r split.zip foo creates a split archive of the directory foo with splits no bigger than 2 GB each. If foo contained 5 GB of contents and the contents were stored in the split archive without compression (to make this example simple), this would create three splits, split.z01 at 2 GB, split.z02 at 2 GB, and split.zip at a little over 1 GB. 
The -sp option can be used to pause zip between splits to allow changing removable media, for example, but read the descriptions and warnings for both -s and -sp below. Though zip does not update split archives, zip provides the new option -O (--output-file) to allow split archives to be updated and saved in a new archive. For example, zip inarchive.zip foo.c bar.c --out outarchive.zip reads archive inarchive.zip, even if split, adds the files foo.c and bar.c, and writes the resulting archive to outarchive.zip. If inarchive.zip is split then outarchive.zip defaults to the same split size. Be aware that outarchive.zip and any split files that are created with it are always overwritten without warning. This may be changed in the future. PATTERN MATCHING This section applies only to Unix. Watch this space for details on MSDOS and VMS operation. However, the special wildcard characters * and [] below apply to at least MSDOS also. The Unix shells (sh, csh, bash, and others) normally do filename substitution (also called "globbing") on command arguments. Generally the special characters are: ? match any single character * match any number of characters (including none) [] match any character in the range indicated within the brackets (example: [a-f], [0-9]). This form of wildcard matching allows a user to specify a list of characters between square brackets and if any of the characters match the expression matches. For example: zip archive "*.[hc]" would archive all files in the current directory that end in .h or .c. Ranges of characters are supported: zip archive "[a-f]*" would add to the archive all files starting with "a" through "f". Negation is also supported, where any character in that position not in the list matches. Negation is supported by adding ! or ^ to the beginning of the list: zip archive "*.[!o]" matches files that don't end in ".o". 
On WIN32, [] matching needs to be turned on with the -RE option to avoid the confusion that names with [ or ] have caused. When these characters are encountered (without being escaped with a backslash or quotes), the shell will look for files relative to the current path that match the pattern, and replace the argument with a list of the names that matched. The zip program can do the same matching on names that are in the zip archive being modified or, in the case of the -x (exclude) or -i (include) options, on the list of files to be operated on, by using backslashes or quotes to tell the shell not to do the name expansion. In general, when zip encounters a name in the list of files to do, it first looks for the name in the file system. If it finds it, it then adds it to the list of files to do. If it does not find it, it looks for the name in the zip archive being modified (if it exists), using the pattern matching characters described above, if present. For each match, it will add that name to the list of files to be processed, unless this name matches one given with the -x option, or does not match any name given with the -i option. The pattern matching includes the path, and so patterns like \*.o match names that end in ".o", no matter what the path prefix is. Note that the backslash must precede every special character (i.e. ?*[]), or the entire argument must be enclosed in double quotes (""). In general, use backslashes or double quotes for paths that have wildcards to make zip do the pattern matching for file paths, and always for paths and strings that have spaces or wildcards for -i, -x, -R, -d, and -U and anywhere zip needs to process the wildcards. ENVIRONMENT The following environment variables are read and used by zip as described. ZIPOPT contains default options that will be used when running zip. The contents of this environment variable will get added to the command line just after the zip command. 
ZIP [Not on RISC OS and VMS] see ZIPOPT Zip$Options [RISC OS] see ZIPOPT Zip$Exts [RISC OS] contains extensions separated by a : that will cause native filenames with one of the specified extensions to be added to the zip file with basename and extension swapped. ZIP_OPTS [VMS] see ZIPOPT SEE ALSO compress(1), shar(1L), tar(1), unzip(1L), gzip(1L) DIAGNOSTICS The exit status (or error level) approximates the exit codes defined by PKWARE and takes on the following values, except under VMS: 0 normal; no errors or warnings detected. 2 unexpected end of zip file. 3 a generic error in the zipfile format was detected. Processing may have completed successfully anyway; some broken zipfiles created by other archivers have simple work-arounds. 4 zip was unable to allocate memory for one or more buffers during program initialization. 5 a severe error in the zipfile format was detected. Processing probably failed immediately. 6 entry too large to be processed (such as input files larger than 2 GB when not using Zip64 or trying to read an existing archive that is too large) or entry too large to be split with zipsplit 7 invalid comment format 8 zip -T failed or out of memory 9 the user aborted zip prematurely with control-C (or similar) 10 zip encountered an error while using a temp file 11 read or seek error 12 zip has nothing to do 13 missing or empty zip file 14 error writing to a file 15 zip was unable to create a file to write to 16 bad command line parameters 18 zip could not open a specified file to read 19 zip was compiled with options not supported on this system VMS interprets standard Unix (or PC) return values as other, scarier- looking things, so zip instead maps them into VMS-style status codes. In general, zip sets VMS Facility = 1955 (0x07A3), Code = 2* Unix_status, and an appropriate Severity (as specified in ziperr.h). More details are included in the VMS-specific documentation. See [.vms]NOTES.TXT and [.vms]vms_msg_gen.c. 
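A wrapper script might map the more common exit codes listed above to short messages. This helper is hypothetical and covers only a few of the values:

```shell
# Translate a zip exit status (as listed above) into a short message.
zip_status_msg() {
  case "$1" in
    0)  echo "normal; no errors or warnings detected" ;;
    12) echo "zip has nothing to do" ;;
    13) echo "missing or empty zip file" ;;
    16) echo "bad command line parameters" ;;
    *)  echo "zip failed with status $1" ;;
  esac
}

zip_status_msg 12
```

In a real wrapper you would run zip, capture $?, and pass that value to the helper.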
BUGS zip 3.0 is not compatible with PKUNZIP 1.10. Use zip 1.1 to produce zip files which can be extracted by PKUNZIP 1.10. zip files produced by zip 3.0 must not be updated by zip 1.1 or PKZIP 1.10, if they contain encrypted members or if they have been produced in a pipe or on a non-seekable device. The old versions of zip or PKZIP would create an archive with an incorrect format. The old versions can list the contents of the zip file but cannot extract it anyway (because of the new compression algorithm). If you do not use encryption and use regular disk files, you do not have to care about this problem. Under VMS, not all of the odd file formats are treated properly. Only stream-LF format zip files are expected to work with zip. Others can be converted using Rahul Dhesi's BILF program. This version of zip handles some of the conversion internally. When using Kermit to transfer zip files from VMS to MSDOS, type "set file type block" on VMS. When transferring from MSDOS to VMS, type "set file type fixed" on VMS. In both cases, type "set file type binary" on MSDOS. Under some older VMS versions, zip may hang for file specifications that use DECnet syntax foo::*.*. On OS/2, zip cannot match some names, such as those including an exclamation mark or a hash sign. This is a bug in OS/2 itself: the 32-bit DosFindFirst/Next don't find such names. Other programs such as GNU tar are also affected by this bug. Under OS/2, the amount of Extended Attributes displayed by DIR is (for compatibility) the amount returned by the 16-bit version of DosQueryPathInfo(). Otherwise OS/2 1.3 and 2.0 would report different EA sizes when DIRing a file. However, the structure layout returned by the 32-bit DosQueryPathInfo() is a bit different: it uses extra padding bytes and link pointers (it's a linked list) to have all fields on 4-byte boundaries for portability to future RISC OS/2 versions.
Therefore the value reported by zip (which uses this 32-bit-mode size) differs from that reported by DIR. zip stores the 32-bit format for portability, even the 16-bit MS-C-compiled version running on OS/2 1.3, so even this one shows the 32-bit-mode size. AUTHORS Copyright (C) 1997-2008 Info-ZIP. Currently distributed under the Info-ZIP license. Copyright (C) 1990-1997 Mark Adler, Richard B. Wales, Jean-loup Gailly, Onno van der Linden, Kai Uwe Rommel, Igor Mandrichenko, John Bush and Paul Kienitz. Original copyright: Permission is granted to any individual or institution to use, copy, or redistribute this software so long as all of the original files are included, that it is not sold for profit, and that this copyright notice is retained. LIKE ANYTHING ELSE THAT'S FREE, ZIP AND ITS ASSOCIATED UTILITIES ARE PROVIDED AS IS AND COME WITH NO WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED. IN NO EVENT WILL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DAMAGES RESULTING FROM THE USE OF THIS SOFTWARE. Please send bug reports and comments using the web page at: www.info-zip.org. For bug reports, please include the version of zip (see zip -h), the make options used to compile it (see zip -v), the machine and operating system in use, and as much additional information as possible. ACKNOWLEDGEMENTS Thanks to R. P.
Byrne for his Shrink.Pas program, which inspired this project, and from which the shrink algorithm was stolen; to Phil Katz for placing in the public domain the zip file format, compression format, and .ZIP filename extension, and for accepting minor changes to the file format; to Steve Burg for clarifications on the deflate format; to Haruhiko Okumura and Leonid Broukhis for providing some useful ideas for the compression algorithm; to Keith Petersen, Rich Wales, Hunter Goatley and Mark Adler for providing a mailing list and ftp site for the Info-ZIP group to use; and most importantly, to the Info-ZIP group itself (listed in the file infozip.who) without whose tireless testing and bug-fixing efforts a portable zip would not have been possible. Finally we should thank (blame) the first Info-ZIP moderator, David Kirschbaum, for getting us into this mess in the first place. The manual page was rewritten for Unix by R. P. C. Rodgers and updated by E. Gordon for zip 3.0. Info-ZIP 16 June 2008 (v3.0) ZIP(1L)
uulog
The uulog program can be used to display entries in the UUCP log file. It can select the entries for a particular system or a particular user. You can use it to see what has happened to your queued jobs in the past. Different options may be used to select which parts of the file to display.
uulog - display entries in the UUCP log file.
uulog [-#] [-n lines] [-sf system] [-u user] [-DSF] [--lines lines] [--system system] [--user user] [--debuglog] [--statslog] [--follow] [--follow=system]
-#, -n lines, --lines lines Here '#' is a number; e.g., `-10'. The specified number of lines is displayed from the end of the log file. The default is to display the entire log file, unless the -f, -F, or --follow options are used, in which case the default is to display 10 lines. -s system, --system system Display only log entries pertaining to the specified system. -u user, --user user Display only log entries pertaining to the specified user. -D, --debuglog Display the debugging log file. -S, --statslog Display the statistics log file. -F, --follow Keep displaying the log file forever, printing new lines as they are appended to the log file. -f system, --follow=system Keep displaying the log file forever, displaying only log entries pertaining to the specified system. Standard UUCP options: -X type, --debug type Turn on particular debugging types. The following types are recognized: abnormal, chat, handshake, uucp-proto, proto, port, config, spooldir, execute, incoming, outgoing. The --debug option may appear multiple times. A number may also be given, which will turn on that many types from the foregoing list; for example, --debug 2 is equivalent to --debug abnormal,chat. -I file, --config file Set the configuration file to use. -v, --version Report version information and exit. --help Print a help message and exit. SEE ALSO uucp(1) AUTHOR Ian Lance Taylor <ian@airs.com>. Text for this Manpage comes from Taylor UUCP, version 1.07 Info documentation. Taylor UUCP 1.07 uulog(1)
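The selection options above can be approximated on any plain-text log with standard tools, which is handy where uulog itself is not installed. The log format below is made up for the sketch and is not Taylor UUCP's real format:

```shell
# A fake three-line log; -s system is approximated with grep,
# -n lines with tail.
log=$(mktemp)
cat >"$log" <<'EOF'
uucico bantam - (2024-01-01 10:00) Calling system bantam
uucico airs - (2024-01-01 11:00) Login successful
uucico bantam - (2024-01-01 12:00) Call complete
EOF

bantam=$(grep -c ' bantam ' "$log")   # roughly: uulog -s bantam | wc -l
last=$(tail -n 1 "$log")              # roughly: uulog -1
rm -f "$log"
echo "$bantam"
```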
pl2pm
pl2pm is a tool to aid in the conversion of Perl4-style .pl library files to Perl5-style library modules. Usually, your old .pl file will still work fine and you should only use this tool if you plan to update your library to use some of the newer Perl 5 features, such as AutoLoading. LIMITATIONS It's just a first step, but it's usually a good first step. AUTHOR Larry Wall <larry@wall.org> perl v5.34.1 2024-04-13 PL2PM(1)
pl2pm - Rough tool to translate Perl4 .pl files to Perl5 .pm modules.
pl2pm files
atrm
The at and batch utilities read commands from standard input or a specified file which are to be executed at a later time, using sh(1). at executes commands at a specified time; atq lists the user's pending jobs, unless the user is the superuser; in that case, everybody's jobs are listed; atrm deletes jobs; batch executes commands when system load levels permit; in other words, when the load average drops below 1.5 times the number of active CPUs, or the value specified in the invocation of atrun. The at utility allows some moderately complex time specifications. It accepts times of the form HHMM or HH:MM to run a job at a specific time of day. (If that time is already past, the next day is assumed.) As an alternative, the following keywords may be specified: midnight, noon, or teatime (4pm) and time-of-day may be suffixed with AM, PM, or UTC for running in the morning, the evening, or in UTC time. The day on which the job is to be run may also be specified by giving a date in the form month-name day with an optional year, or giving a date of the forms DD.MM.YYYY, DD.MM.YY, MM/DD/YYYY, MM/DD/YY, MMDDYYYY, or MMDDYY. The specification of a date must follow the specification of the time of day. Time can also be specified as: [now] + count time-units, where the time-units can be minutes, hours, days, weeks, months or years and at may be told to run the job today by suffixing the time with today and to run the job tomorrow by suffixing the time with tomorrow. The shortcut next can be used instead of + 1. For example, to run a job at 4pm three days from now, use at 4pm + 3 days, to run a job at 10:00am on July 31, use at 10am Jul 31 and to run a job at 1am tomorrow, use at 1am tomorrow. The at utility also supports the POSIX time format (see -t option). For both at and batch, commands are read from standard input or the file specified with the -f option and executed. 
The working directory, the environment (except for the variables TERM, TERMCAP, DISPLAY and _) and the umask are retained from the time of invocation. An at or batch command invoked from a su(1) shell will retain the current userid. The user will be mailed standard error and standard output from his commands, if any. Mail will be sent using the command sendmail(8). If at is executed from a su(1) shell, the owner of the login shell will receive the mail. The superuser may use these commands in any case. For other users, permission to use at is determined by the files /usr/lib/cron/at.allow and /usr/lib/cron/at.deny. If the file /usr/lib/cron/at.allow exists, only usernames mentioned in it are allowed to use at. In these two files, a user is considered to be listed only if the user name has no blank or other characters before it on its line and a newline character immediately after the name, even at the end of the file. Other lines are ignored and may be used for comments. If /usr/lib/cron/at.allow does not exist, /usr/lib/cron/at.deny is checked; every username not mentioned in it is then allowed to use at. If neither exists, only the superuser is allowed use of at. This is the default configuration. IMPLEMENTATION NOTES Note that at is implemented through the launchd(8) daemon periodically invoking atrun(8), which is disabled by default. See atrun(8) for information about enabling atrun.
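The at.allow/at.deny rules above reduce to a short decision procedure. The following helper is a sketch of that logic only, not the system's actual implementation:

```shell
# may_use_at USER ALLOWFILE DENYFILE
# The allow file (if present) wins; otherwise the deny file is
# consulted; if neither exists, only the superuser may use at.
may_use_at() {
  if [ -f "$2" ]; then
    grep -qx "$1" "$2"
  elif [ -f "$3" ]; then
    ! grep -qx "$1" "$3"
  else
    [ "$1" = root ]
  fi
}

allow=$(mktemp); echo alice >"$allow"
deny=$(mktemp);  echo mallory >"$deny"
may_use_at alice "$allow" "$deny" && a=yes || a=no
may_use_at bob   "$allow" "$deny" && b=yes || b=no   # allow file exists, bob not in it
rm -f "$allow" "$deny"
echo "$a $b"
```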
at, batch, atq, atrm – queue, examine or delete jobs for later execution
at [-q queue] [-f file] [-mldbv] time at [-q queue] [-f file] [-mldbv] -t [[CC]YY]MMDDhhmm[.SS] at -c job [job ...] at -l [job ...] at -l -q queue at -r job [job ...] atq [-q queue] [-v] atrm job [job ...] batch [-q queue] [-f file] [-mv] [time]
-q queue Use the specified queue. A queue designation consists of a single letter; valid queue designations range from a to z and A to Z. The a queue is the default for at and the b queue for batch. Queues with higher letters run with increased niceness. If a job is submitted to a queue designated with an uppercase letter, it is treated as if it had been submitted to batch at that time. If atq is given a specific queue, it will only show jobs pending in that queue. -m Send mail to the user when the job has completed even if there was no output. -f file Read the job from file rather than standard input. -l With no arguments, list all jobs for the invoking user. If one or more job numbers are given, list only those jobs. -d Is an alias for atrm (this option is deprecated; use -r instead). -b Is an alias for batch. -v For atq, shows completed but not yet deleted jobs in the queue; otherwise shows the time the job will be executed. -c Cat the jobs listed on the command line to standard output. -r Remove the specified jobs. -t Specify the job time using the POSIX time format. The argument should be in the form [[CC]YY]MMDDhhmm[.SS] where each pair of letters represents the following: CC The first two digits of the year (the century). YY The second two digits of the year. MM The month of the year, from 1 to 12. DD the day of the month, from 1 to 31. hh The hour of the day, from 0 to 23. mm The minute of the hour, from 0 to 59. SS The second of the minute, from 0 to 60. If the CC and YY letter pairs are not specified, the values default to the current year. If the SS letter pair is not specified, the value defaults to 0. 
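A timestamp in the [[CC]YY]MMDDhhmm form that -t expects can be produced with date(1); the at invocation in the comment is illustrative only:

```shell
# 12 digits: 4-digit year, then month, day, hour, minute.
stamp=$(date +%Y%m%d%H%M)
echo "$stamp"
# e.g. at -t "$stamp" would name the current minute
```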
FILES /usr/lib/cron/jobs directory containing job files /usr/lib/cron/spool directory containing output spool files /var/run/utmpx login records /usr/lib/cron/at.allow allow permission control /usr/lib/cron/at.deny deny permission control /usr/lib/cron/jobs/.lockfile job-creation lock file SEE ALSO nice(1), sh(1), umask(2), compat(5), atrun(8), cron(8), launchd(8), sendmail(8) AUTHORS At was mostly written by Thomas Koenig <ig25@rz.uni-karlsruhe.de>. The time parsing routines are by David Parsons <orc@pell.chi.il.us>, with minor enhancements by Joe Halpin <joe.halpin@attbi.com>. BUGS If the file /var/run/utmpx is not available or corrupted, or if the user is not logged on at the time at is invoked, the mail is sent to the userid found in the environment variable LOGNAME. If that is undefined or empty, the current userid is assumed. The at and batch utilities as presently implemented are not suitable when users are competing for resources. If this is the case, another batch system such as nqs may be more suitable. Specifying a date past 2038 may not work on some systems. macOS 14.5 August 11, 2018 macOS 14.5
curl
curl is a tool for transferring data from or to a server using URLs. It supports these protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS. curl is powered by libcurl for all transfer-related features. See libcurl(3) for details. URL The URL syntax is protocol-dependent. You find a detailed description in RFC 3986. If you provide a URL without a leading protocol:// scheme, curl guesses what protocol you want. It then defaults to HTTP but assumes others based on often-used host name prefixes. For example, for host names starting with "ftp." curl assumes you want FTP. You can specify any amount of URLs on the command line. They are fetched in a sequential manner in the specified order unless you use -Z, --parallel. You can specify command line options and URLs mixed and in any order on the command line. curl attempts to reuse connections when doing multiple transfers, so that getting many files from the same server do not use multiple connects and setup handshakes. This improves speed. Connection reuse can only be done for URLs specified for a single command line invocation and cannot be performed between separate curl runs. Provide an IPv6 zone id in the URL with an escaped percentage sign. Like in "http://[fe80::3%25eth0]/" Everything provided on the command line that is not a command line option or its argument, curl assumes is a URL and treats it as such. GLOBBING You can specify multiple URLs or parts of URLs by writing lists within braces or ranges within brackets. We call this "globbing". 
Provide a list with three different names like this: "http://site.{one,two,three}.com" Do sequences of alphanumeric series by using [] as in: "ftp://ftp.example.com/file[1-100].txt" With leading zeroes: "ftp://ftp.example.com/file[001-100].txt" With letters through the alphabet: "ftp://ftp.example.com/file[a-z].txt" Nested sequences are not supported, but you can use several ones next to each other: "http://example.com/archive[1996-1999]/vol[1-4]/part{a,b,c}.html" You can specify a step counter for the ranges to get every Nth number or letter: "http://example.com/file[1-100:10].txt" "http://example.com/file[a-z:2].txt" When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'. Switch off globbing with -g, --globoff. VARIABLES curl supports command line variables (added in 8.3.0). Set variables with --variable name=content or --variable name@file (where "file" can be stdin if set to a single dash (-)). Variable contents can be expanded in option parameters using "{{name}}" (without the quotes) if the option name is prefixed with "--expand-". This gets the contents of the variable "name" inserted, or a blank if the name does not exist as a variable. Insert "{{" verbatim in the string by prefixing it with a backslash, like "\{{". You can access and expand environment variables by first importing them. You can select to either require the environment variable to be set or you can provide a default value in case it is not already set. Plain --variable %name imports the variable called 'name' but exits with an error if that environment variable is not already set. To provide a default value if it is not set, use --variable %name=content or --variable %name@content. Example. 
Get the USER environment variable into the URL, fail if USER is not set: --variable '%USER' --expand-url "https://example.com/api/{{USER}}/method" When expanding variables, curl supports a set of functions that can make the variable contents more convenient to use. It can trim leading and trailing white space with trim, it can output the contents as a JSON quoted string with json, URL encode the string with url or base64 encode it with b64. To apply functions to a variable expansion, add them colon separated to the right side of the variable. Variable content holding null bytes that are not encoded when expanded causes an error. Example: get the contents of a file called $HOME/.secret into a variable called "fix". Make sure that the content is trimmed and percent-encoded when sent as POST data: --variable %HOME --expand-variable fix@{{HOME}}/.secret --expand-data "{{fix:trim:url}}" https://example.com/ Command line variables and expansions were added in 8.3.0. OUTPUT If not told otherwise, curl writes the received data to stdout. It can be instructed to instead save that data into a local file, using the -o, --output or -O, --remote-name options. If curl is given multiple URLs to transfer on the command line, it similarly needs multiple options for where to save them. curl does not parse or otherwise "understand" the content it gets or writes as output. It does no encoding or decoding, unless explicitly asked to with dedicated command line options. PROTOCOLS curl supports numerous protocols, or put in URL terms: schemes. Your particular build may not support them all. DICT Lets you lookup words using online dictionaries. FILE Read or write local files. curl does not support accessing file:// URL remotely, but when running on Microsoft Windows using the native UNC approach works. FTP(S) curl supports the File Transfer Protocol with a lot of tweaks and levers. With or without using TLS. GOPHER(S) Retrieve files. 
HTTP(S) curl supports HTTP with numerous options and variations. It can speak HTTP version 0.9, 1.0, 1.1, 2 and 3 depending on build options and the correct command line options. IMAP(S) Using the mail reading protocol, curl can "download" emails for you. With or without using TLS. LDAP(S) curl can do directory lookups for you, with or without TLS. MQTT curl supports MQTT version 3. Downloading over MQTT equals "subscribe" to a topic while uploading/posting equals "publish" on a topic. MQTT over TLS is not supported (yet). POP3(S) Downloading from a pop3 server means getting a mail. With or without using TLS. RTMP(S) The Realtime Messaging Protocol is primarily used to serve streaming media and curl can download it. RTSP curl supports RTSP 1.0 downloads. SCP curl supports SSH version 2 scp transfers. SFTP curl supports SFTP (draft 5) done over SSH version 2. SMB(S) curl supports SMB version 1 for upload and download. SMTP(S) Uploading contents to an SMTP server means sending an email. With or without TLS. TELNET Telling curl to fetch a telnet URL starts an interactive session where it sends what it reads on stdin and outputs what the server sends it. TFTP curl can do TFTP downloads and uploads. PROGRESS METER curl normally displays a progress meter during operations, indicating the amount of transferred data, transfer speeds and estimated time left, etc. The progress meter displays the transfer rate in bytes per second. The suffixes (k, M, G, T, P) are 1024 based. For example 1k is 1024 bytes. 1M is 1048576 bytes. curl displays this data to the terminal by default, so if you invoke curl to do an operation and it is about to write data to the terminal, it disables the progress meter as otherwise it would mess up the output mixing progress meter and response data. If you want a progress meter for HTTP POST or PUT requests, you need to redirect the response output to a file, using shell redirect (>), -o, --output or similar. 
This does not apply to FTP upload as that operation does not spit out any response data to the terminal. If you prefer a progress "bar" instead of the regular meter, -#, --progress-bar is your friend. You can also disable the progress meter completely with the -s, --silent option. VERSION This man page describes curl %VERSION. If you use a later version, chances are this man page does not fully document it. If you use an earlier version, this document tries to include version information about which specific version that introduced changes. You can always learn which the latest curl version is by running curl https://curl.se/info The online version of this man page is always showing the latest incarnation: https://curl.se/docs/manpage.html
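The numeric ranges with a step counter described under GLOBBING expand predictably. This sketch reproduces the expansion of file[1-100:10].txt with seq, without contacting any server (example.com as in the examples above):

```shell
# Expand [1-100:10] -> 1, 11, 21, ..., 91 and build the URLs curl would fetch.
urls=$(seq 1 10 100 | sed 's|.*|http://example.com/file&.txt|')
count=$(printf '%s\n' "$urls" | wc -l)
first=$(printf '%s\n' "$urls" | head -n 1)
echo "$count"
```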
curl - transfer a URL
curl [options / URLs]
Options start with one or two dashes. Many of the options require an additional value next to them. If provided text does not start with a dash, it is presumed to be and treated as a URL. The short "single-dash" form of the options, -d for example, may be used with or without a space between it and its value, although a space is a recommended separator. The long "double-dash" form, -d, --data for example, requires a space between it and its value. Short version options that do not need any additional values can be used immediately next to each other, like for example you can specify all the options -O, -L and -v at once as -OLv. In general, all boolean options are enabled with --option and yet again disabled with --no-option. That is, you use the same option name but prefix it with "no-". However, in this list we mostly only list and show the --option version of them. When -:, --next is used, it resets the parser state and you start again with a clean option state, except for the options that are "global". Global options retain their values and meaning even after -:, --next. The following options are global: --fail-early, --libcurl, --parallel-immediate, -Z, --parallel, -#, --progress-bar, --rate, -S, --show-error, --stderr, --styled-output, --trace-ascii, --trace-config, --trace-ids, --trace-time, --trace and -v, --verbose. --abstract-unix-socket <path> (HTTP) Connect through an abstract Unix domain socket, instead of using the network. Note: netstat shows the path of an abstract socket prefixed with '@', however the <path> argument should not have this leading character. If --abstract-unix-socket is provided several times, the last set value is used. Example: curl --abstract-unix-socket socketpath https://example.com See also --unix-socket. Added in 7.53.0. --alt-svc <file name> (HTTPS) This option enables the alt-svc parser in curl. If the file name points to an existing alt-svc cache file, that gets used. 
After a completed transfer, the cache is saved to the file name again if it has been modified. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle the cache in memory. If this option is used several times, curl loads contents from all the files but the last one is used for saving. --alt-svc can be used several times in a command line Example: curl --alt-svc svc.txt https://example.com See also --resolve and --connect-to. Added in 7.64.1. --anyauth (HTTP) Tells curl to figure out authentication method by itself, and use the most secure one the remote site claims to support. This is done by first doing a request and checking the response-headers, thus possibly inducing an extra network round-trip. This is used instead of setting a specific authentication method, which you can do with --basic, --digest, --ntlm, and --negotiate. Using --anyauth is not recommended if you do uploads from stdin, since it may require data to be sent twice and then the client must be able to rewind. If the need should arise when uploading from stdin, the upload operation fails. Used together with -u, --user. Providing --anyauth multiple times has no extra effect. Example: curl --anyauth --user me:pwd https://example.com See also --proxy-anyauth, --basic and --digest. -a, --append (FTP SFTP) When used in an upload, this option makes curl append to the target file instead of overwriting it. If the remote file does not exist, it is created. Note that this flag is ignored by some SFTP servers (including OpenSSH). Providing --append multiple times has no extra effect. Disable it again with --no-append. Example: curl --upload-file local --append ftp://example.com/ See also -r, --range and -C, --continue-at. --aws-sigv4 <provider1[:provider2[:region[:service]]]> (HTTP) Use AWS V4 signature authentication in the transfer. The provider argument is a string that is used by the algorithm when creating outgoing authentication headers. 
The region argument is a string that points to a geographic area of a resources collection (region-code) when the region name is omitted from the endpoint. The service argument is a string that points to a function provided by a cloud (service-code) when the service name is omitted from the endpoint. If --aws-sigv4 is provided several times, the last set value is used. Example: curl --aws-sigv4 "aws:amz:us-east-2:es" --user "key:secret" https://example.com See also --basic and -u, --user. Added in 7.75.0. --basic (HTTP) Tells curl to use HTTP Basic authentication with the remote host. This is the default and this option is usually pointless, unless you use it to override a previously set option that sets a different authentication method (such as --ntlm, --digest, or --negotiate). Used together with -u, --user. Providing --basic multiple times has no extra effect. Example: curl -u name:password --basic https://example.com See also --proxy-basic. --ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the peer. By default, curl otherwise uses a CA store provided in a single file or directory, but when using this option it interfaces the operating system's own vault. This option works for curl on Windows when built to use OpenSSL, wolfSSL (added in 8.3.0) or GnuTLS (added in 8.5.0). When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store. Providing --ca-native multiple times has no extra effect. Disable it again with --no-ca-native. Example: curl --ca-native https://example.com See also --cacert, --capath and -k, --insecure. Added in 8.2.0. --cacert <file> (TLS) Tells curl to use the specified certificate file to verify the peer. The file may contain multiple CA certificates. The certificate(s) must be in PEM format. Normally curl is built to use a default file for this, so this option is typically used to alter that default file. 
curl recognizes the environment variable named 'CURL_CA_BUNDLE' if it is set and the TLS backend is not Schannel, and uses the given path as a path to a CA cert bundle. This option overrides that variable. The windows version of curl automatically looks for a CA certs file named 'curl-ca-bundle.crt', either in the same directory as curl.exe, or in the Current Working Directory, or in any folder along your PATH. (iOS and macOS only) If curl is built against Secure Transport, then this option is supported for backward compatibility with other SSL engines, but it should not be set. If the option is not set, then curl uses the certificates in the system and user Keychain to verify the peer, which is the preferred method of verifying the peer's certificate chain. (Schannel only) This option is supported for Schannel in Windows 7 or later (added in 7.60.0). This option is supported for backward compatibility with other SSL engines; instead it is recommended to use Windows' store of root certificates (the default for Schannel). If --cacert is provided several times, the last set value is used. Example: curl --cacert CA-file.txt https://example.com See also --capath and -k, --insecure. --capath <dir> (TLS) Tells curl to use the specified certificate directory to verify the peer. Multiple paths can be provided by separating them with ":" (e.g. "path1:path2:path3"). The certificates must be in PEM format, and if curl is built against OpenSSL, the directory must have been processed using the c_rehash utility supplied with OpenSSL. Using --capath can allow OpenSSL-powered curl to make SSL-connections much more efficiently than using --cacert if the --cacert file contains many CA certificates. If this option is set, the default capath value is ignored. If --capath is provided several times, the last set value is used. Example: curl --capath /local/directory https://example.com See also --cacert and -k, --insecure. 
--cert-status (TLS) Tells curl to verify the status of the server certificate by using the Certificate Status Request (aka. OCSP stapling) TLS extension. If this option is enabled and the server sends an invalid (e.g. expired) response, if the response suggests that the server certificate has been revoked, or no response at all is received, the verification fails. This is currently only implemented in the OpenSSL and GnuTLS backends. Providing --cert-status multiple times has no extra effect. Disable it again with --no-cert-status. Example: curl --cert-status https://example.com See also --pinnedpubkey. --cert-type <type> (TLS) Tells curl what type the provided client certificate is using. PEM, DER, ENG and P12 are recognized types. The default type depends on the TLS backend and is usually PEM, however for Secure Transport and Schannel it is P12. If -E, --cert is a pkcs11: URI then ENG is the default type. If --cert-type is provided several times, the last set value is used. Example: curl --cert-type PEM --cert file https://example.com See also -E, --cert, --key and --key-type. -E, --cert <certificate[:password]> (TLS) Tells curl to use the specified client certificate file when getting a file with HTTPS, FTPS or another SSL-based protocol. The certificate must be in PKCS#12 format if using Secure Transport, or PEM format if using any other engine. If the optional password is not specified, it is queried for on the terminal. Note that this option assumes a certificate file that is the private key and the client certificate concatenated. See -E, --cert and --key to specify them independently. In the <certificate> portion of the argument, you must escape the character ":" as "\:" so that it is not recognized as the password delimiter. Similarly, you must escape the double quote character as \" so that it is not recognized as an escape character. 
If curl is built against the OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a certificate located in a PKCS#11 device. A string beginning with "pkcs11:" is interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option is set as "pkcs11" if none was provided and the --cert-type option is set as "ENG" if none was provided. (iOS and macOS only) If curl is built against Secure Transport, then the certificate string can either be the name of a certificate/private key in the system or user keychain, or the path to a PKCS#12-encoded certificate and private key. If you want to use a file from the current directory, precede it with a "./" prefix in order to avoid confusion with a nickname. (Schannel only) Client certificates must be specified by a path expression to a certificate store. (Loading PFX is not supported; you can import it to a store first). You can use "<store location>\<store name>\<thumbprint>" to refer to a certificate in the system certificates store, for example, "CurrentUser\MY\934a7ac6f8a5d579285a74fa61e19f23ddfe8d7a". The thumbprint is usually a SHA-1 hex string which you can see in certificate details. The following store locations are supported: CurrentUser, LocalMachine, CurrentService, Services, CurrentUserGroupPolicy, LocalMachineGroupPolicy and LocalMachineEnterprise. If --cert is provided several times, the last set value is used. Example: curl --cert certfile --key keyfile https://example.com See also --cert-type, --key and --key-type. --ciphers <list of ciphers> (TLS) Specifies which ciphers to use in the connection. The list of ciphers must specify valid ciphers. Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html If --ciphers is provided several times, the last set value is used. Example: curl --ciphers ECDHE-ECDSA-AES256-CCM8 https://example.com See also --tlsv1.3, --tls13-ciphers and --proxy-ciphers. 
--compressed-ssh (SCP SFTP) Enables built-in SSH compression. This is a request, not an order; the server may or may not do it. Providing --compressed-ssh multiple times has no extra effect. Disable it again with --no-compressed-ssh. Example: curl --compressed-ssh sftp://example.com/ See also --compressed. Added in 7.56.0. --compressed (HTTP) Request a compressed response using one of the algorithms curl supports, and automatically decompress the content. Response headers are not modified when saved, so if they are "interpreted" separately again at a later point they might appear to be saying that the content is (still) compressed; while in fact it has already been decompressed. If this option is used and the server sends an unsupported encoding, curl reports an error. This is a request, not an order; the server may or may not deliver data compressed. Providing --compressed multiple times has no extra effect. Disable it again with --no-compressed. Example: curl --compressed https://example.com See also --compressed-ssh. -K, --config <file> Specify a text file to read curl arguments from. The command line arguments found in the text file are used as if they were provided on the command line. Options and their parameters must be specified on the same line in the file, separated by whitespace, colon, or the equals sign. Long option names can optionally be given in the config file without the initial double dashes and if so, the colon or equals characters can be used as separators. If the option is specified with one or two dashes, there can be no colon or equals character between the option and its parameter. If the parameter contains whitespace or starts with a colon (:) or equals sign (=), it must be specified enclosed within double quotes ("). Within double quotes the following escape sequences are available: \\, \", \t, \n, \r and \v. A backslash preceding any other letter is ignored. 
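The separator and quoting rules described for -K, --config can be exercised offline. The sketch below (hypothetical filenames; a file:// URL is used so no network access is needed) writes a small config file and has curl read it:

```shell
#!/bin/sh
set -e
# Hypothetical scratch file for this sketch
printf 'hello from a file\n' > source.txt

# One option per line; '=' (or ':') separates option and parameter,
# and long option names may omit the leading dashes.
cat > demo.cfg <<EOF
# fetch a local file so the demo works without network access
url = "file://$PWD/source.txt"
output = "copy.txt"
silent
EOF

curl -q -K demo.cfg   # -q first, so no default .curlrc interferes
cat copy.txt          # prints: hello from a file
```

The quoted url parameter survives even if the path contains whitespace, per the double-quote rules above.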
If the first non-blank column of a config line is a '#' character, that line is treated as a comment. Only write one option per physical line in the config file. A single line is required to be no more than 10 megabytes (since 8.2.0). Specify the filename to -K, --config as '-' to make curl read the file from stdin. Note that to be able to specify a URL in the config file, you need to specify it using the --url option, and not by simply writing the URL on its own line. So, it could look similar to this: url = "https://curl.se/docs/" # --- Example file --- # this is a comment url = "example.com" output = "curlhere.html" user-agent = "superagent/1.0" # and fetch another URL too url = "example.com/docs/manpage.html" -O referer = "http://nowhereatall.example.com/" # --- End of example file --- When curl is invoked, it (unless -q, --disable is used) checks for a default config file and uses it if found, even when -K, --config is used. The default config file is checked for in the following places in this order: 1) "$CURL_HOME/.curlrc" 2) "$XDG_CONFIG_HOME/curlrc" (Added in 7.73.0) 3) "$HOME/.curlrc" 4) Windows: "%USERPROFILE%\.curlrc" 5) Windows: "%APPDATA%\.curlrc" 6) Windows: "%USERPROFILE%\Application Data\.curlrc" 7) Non-Windows: use getpwuid to find the home directory 8) On Windows, if it finds no .curlrc file in the sequence described above, it checks for one in the same dir the curl executable is placed. On Windows two filenames are checked per location: .curlrc and _curlrc, preferring the former. Older versions on Windows checked for _curlrc only. --config can be used several times in a command line Example: curl --config file.txt https://example.com See also -q, --disable. --connect-timeout <fractional seconds> Maximum time in seconds that you allow curl's connection to take. This only limits the connection phase, so if curl connects within the given period it continues - if not it exits. This option accepts decimal values. 
The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. The connection phase is considered complete when the DNS lookup and requested TCP, TLS or QUIC handshakes are done. If --connect-timeout is provided several times, the last set value is used. Examples: curl --connect-timeout 20 https://example.com curl --connect-timeout 3.14 https://example.com See also -m, --max-time. --connect-to <HOST1:PORT1:HOST2:PORT2> For a request to the given "HOST1:PORT1" pair, connect to "HOST2:PORT2" instead. This option is suitable to direct requests at a specific server, e.g. at a specific cluster node in a cluster of servers. This option is only used to establish the network connection. It does NOT affect the hostname/port that is used for TLS/SSL (e.g. SNI, certificate verification) or for the application protocols. "HOST1" and "PORT1" may be the empty string, meaning "any host/port". "HOST2" and "PORT2" may also be the empty string, meaning "use the request's original host/port". A hostname specified to this option is compared as a string, so it needs to match the name used in the request URL. It can be either numerical such as "127.0.0.1" or the full host name such as "example.org". --connect-to can be used several times in a command line Example: curl --connect-to example.com:443:example.net:8443 https://example.com See also --resolve and -H, --header. -C, --continue-at <offset> Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that are skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE is not used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out. If --continue-at is provided several times, the last set value is used. 
Examples: curl -C - https://example.com curl -C 400 https://example.com See also -r, --range. -c, --cookie-jar <filename> (HTTP) Specify to which file you want curl to write all cookies after a completed operation. Curl writes all cookies from its in-memory cookie storage to the given file at the end of operations. If no cookies are known, no data is written. The file is created using the Netscape cookie file format. If you set the file name to a single dash, "-", the cookies are written to stdout. The file specified with -c, --cookie-jar is only used for output. No cookies are read from the file. To read cookies, use the -b, --cookie option. Both options can specify the same file. This command line option activates the cookie engine that makes curl record and use cookies. The -b, --cookie option also activates it. If the cookie jar cannot be created or written to, the whole curl operation does not fail or even report an error clearly. Using -v, --verbose gets a warning displayed, but that is the only visible feedback you get about this possibly lethal situation. If --cookie-jar is provided several times, the last set value is used. Examples: curl -c store-here.txt https://example.com curl -c store-here.txt -b read-these https://example.com See also -b, --cookie. -b, --cookie <data|filename> (HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1; NAME2=VALUE2". This makes curl use the cookie header with this content explicitly in all outgoing request(s). If multiple requests are done due to authentication, followed redirects or similar, they all get this cookie passed on. If no '=' symbol is used in the argument, it is instead treated as a filename to read previously stored cookies from. 
This option also activates the cookie engine which makes curl record incoming cookies, which may be handy if you are using this in combination with the -L, --location option or do multiple URL transfers on the same invoke. If the file name is exactly a minus ("-"), curl instead reads the contents from stdin. If the file name is an empty string ("") and is the only cookie input, curl will activate the cookie engine without any cookies. The file format of the file to read cookies from should be plain HTTP headers (Set-Cookie style) or the Netscape/Mozilla cookie file format. The file specified with -b, --cookie is only used as input. No cookies are written to the file. To store cookies, use the -c, --cookie-jar option. If you use the Set-Cookie file format and do not specify a domain then the cookie is not sent since the domain never matches. To address this, set a domain in Set-Cookie line (doing that includes subdomains) or preferably: use the Netscape format. Users often want to both read cookies from a file and write updated cookies back to a file, so using both -b, --cookie and -c, --cookie-jar in the same command line is common. If curl is built with PSL (Public Suffix List) support, it detects and discards cookies that are specified for such suffix domains that should not be allowed to have cookies. If curl is not built with PSL support, it has no ability to stop super cookies. --cookie can be used several times in a command line Examples: curl -b "" https://example.com curl -b cookiefile https://example.com curl -b cookiefile -c cookiefile https://example.com See also -c, --cookie-jar and -j, --junk-session-cookies. --create-dirs When used in conjunction with the -o, --output option, curl creates the necessary local directory hierarchy as needed. This option creates the directories mentioned with the -o, --output option combined with the path possibly set with --output-dir. 
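The directory creation done by --create-dirs can be observed offline in a hedged sketch (hypothetical paths; a file:// URL avoids any network access):

```shell
#!/bin/sh
set -e
printf 'payload\n' > src.txt

# local/dir/ does not exist yet; --create-dirs creates the hierarchy
curl -s --create-dirs --output local/dir/file.txt "file://$PWD/src.txt"

ls local/dir/file.txt
```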
If the combined output file name uses no directory, or if the directories it mentions already exist, no directories are created. Created directories are made with mode 0750 on unix style file systems. To create remote directories when using FTP or SFTP, try --ftp-create-dirs. Providing --create-dirs multiple times has no extra effect. Disable it again with --no-create-dirs. Example: curl --create-dirs --output local/dir/file https://example.com See also --ftp-create-dirs and --output-dir. --create-file-mode <mode> (SFTP SCP FILE) When curl is used to create files remotely using one of the supported protocols, this option allows the user to set which 'mode' to set on the file at creation time, instead of the default 0644. This option takes an octal number as argument. If --create-file-mode is provided several times, the last set value is used. Example: curl --create-file-mode 0777 -T localfile sftp://example.com/new See also --ftp-create-dirs. Added in 7.75.0. --crlf (FTP SMTP) Convert line feeds to carriage return plus line feeds in upload. Useful for MVS (OS/390). (SMTP added in 7.40.0) Providing --crlf multiple times has no extra effect. Disable it again with --no-crlf. Example: curl --crlf -T file ftp://example.com/ See also -B, --use-ascii. --crlfile <file> (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If --crlfile is provided several times, the last set value is used. Example: curl --crlfile rejects.txt https://example.com See also --cacert and --capath. --curves <algorithm list> (TLS) Tells curl to request specific curves to use during SSL session establishment according to RFC 8422, 5.1. Multiple algorithms can be provided by separating them with ":" (e.g. "X25519:P-521"). The parameter is available identically in the OpenSSL "s_client" and "s_server" utilities. 
--curves allows an OpenSSL-powered curl to make SSL-connections with exactly the (EC) curve requested by the client, avoiding nontransparent client/server negotiations. If this option is set, the default curves list built into OpenSSL is ignored. If --curves is provided several times, the last set value is used. Example: curl --curves X25519 https://example.com See also --ciphers. Added in 7.73.0. --data-ascii <data> (HTTP) This is just an alias for -d, --data. --data-ascii can be used several times in a command line Example: curl --data-ascii @file https://example.com See also --data-binary, --data-raw and --data-urlencode. --data-binary <data> (HTTP) This posts data exactly as specified with no extra processing whatsoever. If you start the data with the letter @, the rest should be a filename. Data is posted in a similar manner as -d, --data does, except that newlines and carriage returns are preserved and conversions are never done. Like -d, --data the default content-type sent to the server is application/x-www-form-urlencoded. If you want the data to be treated as arbitrary binary data by the server then set the content-type to octet-stream: -H "Content-Type: application/octet-stream". If this option is used several times, the ones following the first append data as described in -d, --data. --data-binary can be used several times in a command line Example: curl --data-binary @filename https://example.com See also --data-ascii. --data-raw <data> (HTTP) This posts data similarly to -d, --data but without the special interpretation of the @ character. --data-raw can be used several times in a command line Examples: curl --data-raw "hello" https://example.com curl --data-raw "@at@at@" https://example.com See also -d, --data. --data-urlencode <data> (HTTP) This posts data, similar to the other -d, --data options with the exception that this performs URL-encoding. 
To be CGI-compliant, the <data> part should begin with a name followed by a separator and a content specification. The <data> part can be passed to curl using one of the following syntaxes: content This makes curl URL-encode the content and pass that on. Just be careful so that the content does not contain any = or @ symbols, as that makes the syntax match one of the other cases below! =content This makes curl URL-encode the content and pass that on. The preceding = symbol is not included in the data. name=content This makes curl URL-encode the content part and pass that on. Note that the name part is expected to be URL-encoded already. @filename This makes curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. name@filename This makes curl load data from the given file (including any newlines), URL-encode that data and pass it on in the POST. The name part gets an equal sign appended, resulting in name=urlencoded-file-content. Note that the name is expected to be URL-encoded already. --data-urlencode can be used several times in a command line Examples: curl --data-urlencode name=val https://example.com curl --data-urlencode =encodethis https://example.com curl --data-urlencode name@file https://example.com curl --data-urlencode @fileonly https://example.com See also -d, --data and --data-raw. -d, --data <data> (HTTP MQTT) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This makes curl pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form. --data-raw is almost the same but does not have a special interpretation of the @ character. To post data purely binary, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode. 
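Recent curl versions percent-encode every byte outside the RFC 3986 unreserved set (letters, digits, "-", ".", "_", "~"). Python's urllib.parse.quote with safe='' uses the same set, so it can approximate offline what --data-urlencode produces for a given content part (an illustration under that assumption, not curl itself):

```shell
# Approximate the content part sent by: --data-urlencode "name=v l!"
# using Python's RFC 3986 percent-encoding (same unreserved set)
python3 -c "from urllib.parse import quote; print('name=' + quote('v l!', safe=''))"
# → name=v%20l%21
```

Note the space becomes %20, not '+', matching curl's encoding rather than HTML form encoding.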
If any of these options is used more than once on the same command line, the data pieces specified are merged with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'. If you start the data with the letter @, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with -d, --data @foobar. When -d, --data is told to read from a file like that, carriage returns and newlines are stripped out. If you do not want the @ character to have a special interpretation use --data-raw instead. The data for this option is passed on to the server exactly as provided on the command line. curl does not convert, change or improve it. It is up to the user to provide the data in the correct form. --data can be used several times in a command line Examples: curl -d "name=curl" https://example.com curl -d "name=curl" -d "tool=cmdline" https://example.com curl -d @filename https://example.com See also --data-binary, --data-urlencode and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. --delegation <LEVEL> (GSS/kerberos) Set LEVEL to tell the server what it is allowed to delegate when it comes to user credentials. none Do not allow any delegation. policy Delegates if and only if the OK-AS-DELEGATE flag is set in the Kerberos service ticket, which is a matter of realm policy. always Unconditionally allow the server to delegate. If --delegation is provided several times, the last set value is used. Example: curl --delegation "none" https://example.com See also -k, --insecure and --ssl. --digest (HTTP) Enables HTTP Digest authentication. This is an authentication scheme that prevents the password from being sent over the wire in clear text. Use this in combination with the normal -u, --user option to set user name and password. 
Providing --digest multiple times has no extra effect. Disable it again with --no-digest. Example: curl -u name:password --digest https://example.com See also -u, --user, --proxy-digest and --anyauth. This option is mutually exclusive to --basic and --ntlm and --negotiate. --disable-eprt (FTP) Tell curl to disable the use of the EPRT and LPRT commands when doing active FTP transfers. Curl normally first attempts to use EPRT before using PORT, but with this option, it uses PORT right away. EPRT is an extension to the original FTP protocol, and does not work on all servers, but enables more functionality in a better way than the traditional PORT command. --eprt can be used to explicitly enable EPRT again and --no-eprt is an alias for --disable-eprt. If the server is accessed using IPv6, this option has no effect as EPRT is necessary then. Disabling EPRT only changes the active behavior. If you want to switch to passive mode you need to not use -P, --ftp-port or force it with --ftp-pasv. Providing --disable-eprt multiple times has no extra effect. Disable it again with --no-disable-eprt. Example: curl --disable-eprt ftp://example.com/ See also --disable-epsv and -P, --ftp-port. --disable-epsv (FTP) Tell curl to disable the use of the EPSV command when doing passive FTP transfers. Curl normally first attempts to use EPSV before PASV, but with this option, it does not try EPSV. --epsv can be used to explicitly enable EPSV again and --no-epsv is an alias for --disable-epsv. If the server is an IPv6 host, this option has no effect as EPSV is necessary then. Disabling EPSV only changes the passive behavior. If you want to switch to active mode you need to use -P, --ftp-port. Providing --disable-epsv multiple times has no extra effect. Disable it again with --no-disable-epsv. Example: curl --disable-epsv ftp://example.com/ See also --disable-eprt and -P, --ftp-port. -q, --disable If used as the first parameter on the command line, the curlrc config file is not read or used. 
See the -K, --config for details on the default config file search path. Prior to 7.50.0 curl supported the short option name q but not the long option name disable. Providing --disable multiple times has no extra effect. Disable it again with --no-disable. Example: curl -q https://example.com See also -K, --config. --disallow-username-in-url This tells curl to exit if passed a URL containing a username. This is probably most useful when the URL is being provided at runtime or similar. Providing --disallow-username-in-url multiple times has no extra effect. Disable it again with --no-disallow-username-in-url. Example: curl --disallow-username-in-url https://example.com See also --proto. Added in 7.61.0. --dns-interface <interface> (DNS) Tell curl to send outgoing DNS requests through <interface>. This option is a counterpart to --interface (which does not affect DNS). The supplied string must be an interface name (not an address). If --dns-interface is provided several times, the last set value is used. Example: curl --dns-interface eth0 https://example.com See also --dns-ipv4-addr and --dns-ipv6-addr. --dns-interface requires that the underlying libcurl was built to support c-ares. --dns-ipv4-addr <address> (DNS) Tell curl to bind to a specific IP address when making IPv4 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv4 address. If --dns-ipv4-addr is provided several times, the last set value is used. Example: curl --dns-ipv4-addr 10.1.2.3 https://example.com See also --dns-interface and --dns-ipv6-addr. --dns-ipv4-addr requires that the underlying libcurl was built to support c-ares. --dns-ipv6-addr <address> (DNS) Tell curl to bind to a specific IP address when making IPv6 DNS requests, so that the DNS requests originate from this address. The argument should be a single IPv6 address. If --dns-ipv6-addr is provided several times, the last set value is used. 
Example: curl --dns-ipv6-addr 2a04:4e42::561 https://example.com See also --dns-interface and --dns-ipv4-addr. --dns-ipv6-addr requires that the underlying libcurl was built to support c-ares. --dns-servers <addresses> (DNS) Set the list of DNS servers to be used instead of the system default. The list of IP addresses should be separated with commas. Port numbers may also optionally be given as :<port-number> after each IP address. If --dns-servers is provided several times, the last set value is used. Example: curl --dns-servers 192.168.0.1,192.168.0.2 https://example.com See also --dns-interface and --dns-ipv4-addr. --dns-servers requires that the underlying libcurl was built to support c-ares. --doh-cert-status Same as --cert-status but used for DoH (DNS-over-HTTPS). Providing --doh-cert-status multiple times has no extra effect. Disable it again with --no-doh-cert-status. Example: curl --doh-cert-status --doh-url https://doh.example https://example.com See also --doh-insecure. Added in 7.76.0. --doh-insecure Same as -k, --insecure but used for DoH (DNS-over-HTTPS). Providing --doh-insecure multiple times has no extra effect. Disable it again with --no-doh-insecure. Example: curl --doh-insecure --doh-url https://doh.example https://example.com See also --doh-url. Added in 7.76.0. --doh-url <URL> Specifies which DNS-over-HTTPS (DoH) server to use to resolve hostnames, instead of using the default name resolver mechanism. The URL must be HTTPS. Some SSL options that you set for your transfer also apply to DoH since the name lookups take place over SSL. However, the certificate verification settings are not inherited but are controlled separately via --doh-insecure and --doh-cert-status. This option is unset if an empty string "" is used as the URL. (Added in 7.85.0) If --doh-url is provided several times, the last set value is used. Example: curl --doh-url https://doh.example https://example.com See also --doh-insecure. Added in 7.62.0. 
-D, --dump-header <filename> (HTTP FTP) Write the received protocol headers to the specified file. If no headers are received, the use of this option creates an empty file. When used in FTP, the FTP server response lines are considered being "headers" and thus are saved there. Having multiple transfers in one set of operations (i.e. the URLs in one -:, --next clause), appends them to the same file, separated by a blank line. If --dump-header is provided several times, the last set value is used. Example: curl --dump-header store.txt https://example.com See also -o, --output. --egd-file <file> (TLS) Deprecated option (added in 7.84.0). Prior to that it only had an effect on curl if built to use old versions of OpenSSL. Specify the path name to the Entropy Gathering Daemon socket. The socket is used to seed the random engine for SSL connections. If --egd-file is provided several times, the last set value is used. Example: curl --egd-file /random/here https://example.com See also --random-file. --engine <name> (TLS) Select the OpenSSL crypto engine to use for cipher operations. Use --engine list to print a list of build-time supported engines. Note that not all (and possibly none) of the engines may be available at runtime. If --engine is provided several times, the last set value is used. Example: curl --engine flavor https://example.com See also --ciphers and --curves. --etag-compare <file> (HTTP) This option makes a conditional HTTP request for the specific ETag read from the given file by sending a custom If-None-Match header using the stored ETag. For correct results, make sure that the specified file contains only a single line with the desired ETag. An empty file is parsed as an empty ETag. Use the option --etag-save to first save the ETag from a response, and then use this option to compare against the saved ETag in a subsequent request. If --etag-compare is provided several times, the last set value is used. 
Example: curl --etag-compare etag.txt https://example.com See also --etag-save and -z, --time-cond. Added in 7.68.0. --etag-save <file> (HTTP) This option saves an HTTP ETag to the specified file. An ETag is a caching related header, usually returned in a response. If no ETag is sent by the server, an empty file is created. If --etag-save is provided several times, the last set value is used. Example: curl --etag-save storetag.txt https://example.com See also --etag-compare. Added in 7.68.0. --expect100-timeout <seconds> (HTTP) Maximum time in seconds that you allow curl to wait for a 100-continue response when curl emits an Expect: 100-continue header in its request. By default curl waits one second. This option accepts decimal values! When curl stops waiting, it continues as if the response has been received. The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. If --expect100-timeout is provided several times, the last set value is used. Example: curl --expect100-timeout 2.5 -T file https://example.com See also --connect-timeout. --fail-early Fail and exit on the first detected transfer error. When curl is used to do multiple transfers on the command line, it attempts to operate on each given URL, one by one. By default, it ignores errors if there are more URLs given and the last URL's success determines the error code curl returns. So early failures are "hidden" by subsequent successful transfers. Using this option, curl instead returns an error on the first transfer that fails, independent of the number of URLs that are given on the command line. This way, no transfer failures go undetected by scripts and similar. This option does not imply -f, --fail, which causes transfers to fail due to the server's HTTP status code. You can combine the two options, however note -f, --fail is not global and is therefore contained by -:, --next. 
This option is global and does not need to be specified for each use of --next. Providing --fail-early multiple times has no extra effect. Disable it again with --no-fail-early. Example: curl --fail-early https://example.com https://two.example See also -f, --fail and --fail-with-body. Added in 7.52.0. --fail-with-body (HTTP) Return an error on server errors where the HTTP response code is 400 or greater. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag allows curl to output and save that content but also to return error 22. This is an alternative option to -f, --fail which makes curl fail for the same circumstances but without saving the content. Providing --fail-with-body multiple times has no extra effect. Disable it again with --no-fail-with-body. Example: curl --fail-with-body https://example.com See also -f, --fail and --fail-early. This option is mutually exclusive to -f, --fail. Added in 7.76.0. -f, --fail (HTTP) Fail fast with no output at all on server errors. This is useful to enable scripts and users to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag prevents curl from outputting that and makes it return error 22. This method is not fail-safe and there are occasions where non-successful response codes slip through, especially when authentication is involved (response codes 401 and 407). Providing --fail multiple times has no extra effect. Disable it again with --no-fail. Example: curl --fail https://example.com See also --fail-with-body and --fail-early. This option is mutually exclusive to --fail-with-body. --false-start (TLS) Tells curl to use false start during the TLS handshake. 
False start is a mode where a TLS client starts sending application data before verifying the server's Finished message, thus saving a round trip when performing a full handshake. This is currently only implemented in the Secure Transport (on iOS 7.0 or later, or OS X 10.9 or later) backend. Providing --false-start multiple times has no extra effect. Disable it again with --no-false-start. Example: curl --false-start https://example.com See also --tcp-fastopen. --form-escape (HTTP) Tells curl to pass on names of multipart form fields and files using backslash-escaping instead of percent-encoding. If --form-escape is provided several times, the last set value is used. Example: curl --form-escape -F 'field\name=curl' -F 'file=@load"this' https://example.com See also -F, --form. Added in 7.81.0. --form-string <name=string> (HTTP SMTP IMAP) Similar to -F, --form except that the value string for the named parameter is used literally. Leading '@' and '<' characters, and the ';type=' string in the value have no special meaning. Use this in preference to -F, --form if there is any possibility that the string value may accidentally trigger the '@' or '<' features of -F, --form. --form-string can be used several times in a command line Example: curl --form-string "data" https://example.com See also -F, --form. -F, --form <name=content> (HTTP SMTP IMAP) For HTTP protocol family, this lets curl emulate a filled-in form in which a user has pressed the submit button. This causes curl to POST data using the Content-Type multipart/form-data according to RFC 2388. For SMTP and IMAP protocols, this is the means to compose a multipart mail message to transmit. This enables uploading of binary files etc. To force the 'content' part to be a file, prefix the file name with an @ sign. To just get the content part from a file, prefix the file name with the symbol <. 
The difference between @ and < is then that @ makes a file get attached in the post as a file upload, while the < makes a text field and just gets the contents for that text field from a file. Tell curl to read content from stdin instead of a file by using - as filename. This goes for both @ and < constructs. When stdin is used, the contents are buffered in memory first by curl to determine its size and allow a possible resend. Defining a part's data from a named non-regular file (such as a named pipe or similar) is not subject to buffering and is instead read at transmission time; since the full size is unknown before the transfer starts, such data is sent as chunks by HTTP and rejected by IMAP. Example: send an image to an HTTP server, where 'profile' is the name of the form-field to which the file portrait.jpg is the input: curl -F profile=@portrait.jpg https://example.com/upload.cgi Example: send your name and shoe size in two text fields to the server: curl -F name=John -F shoesize=11 https://example.com/ Example: send your essay in a text field to the server. Send it as a plain text field, but get the contents for it from a local file: curl -F "story=<hugefile.txt" https://example.com/ You can also tell curl what Content-Type to use by using 'type=', in a manner similar to: curl -F "web=@index.html;type=text/html" example.com or curl -F "name=daniel;type=text/foo" example.com You can also explicitly change the name field of a file upload part by setting filename=, like this: curl -F "file=@localfile;filename=nameinpost" example.com If filename/path contains ',' or ';', it must be quoted by double-quotes like: curl -F "file=@\"local,file\";filename=\"name;in;post\"" example.com or curl -F 'file=@"local,file";filename="name;in;post"' example.com Note that if a filename/path is quoted by double-quotes, any double-quote or backslash within the filename must be escaped by backslash.
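The @ versus < distinction above can be illustrated with a local file. The upload URL below is a placeholder, so the two command lines are only printed rather than run:

```shell
# A local file to use as form data
printf 'contents of the essay' > story.txt

# @ attaches story.txt as a file upload part:
echo "curl -F 'story=@story.txt' https://example.com/upload"

# < fills a plain text field with the file's contents instead:
echo "curl -F 'story=<story.txt' https://example.com/upload"
```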
Quoting must also be applied to non-file data if it contains semicolons, leading/trailing spaces or leading double quotes: curl -F 'colors="red; green; blue";type=text/x-myapp' example.com You can add custom headers to the field by setting headers=, like curl -F "submit=OK;headers=\"X-submit-type: OK\"" example.com or curl -F "submit=OK;headers=@headerfile" example.com The headers= keyword may appear more than once, and the above notes about quoting apply. When headers are read from a file, empty lines and lines starting with '#' are comments and ignored; each header can be folded by splitting between two words and starting the continuation line with a space; embedded carriage-returns and trailing spaces are stripped. Here is an example of a header file's contents: # This file contains two headers. X-header-1: this is a header # The following header is folded. X-header-2: this is another header To support sending multipart mail messages, the syntax is extended as follows: - name can be omitted: the equal sign is the first character of the argument, - if data starts with '(', this signals to start a new multipart: it can be followed by a content type specification. - a multipart can be terminated with a '=)' argument. Example: the following command sends an SMTP mime email consisting of an inline part in two alternative formats: plain text and HTML. It attaches a text file: curl -F '=(;type=multipart/alternative' \ -F '=plain text message' \ -F '= <body>HTML message</body>;type=text/html' \ -F '=)' -F '=@textfile.txt' ... smtp://example.com Data can be encoded for transfer using encoder=. Available encodings are binary and 8bit, which do nothing more than add the corresponding Content-Transfer-Encoding header; 7bit, which only rejects 8-bit characters with a transfer error; and quoted-printable and base64, which encode data according to the corresponding schemes, limiting line length to 76 characters.
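A header file for the headers=@headerfile form can be written as described above, with comments, plain headers, and a folded header whose continuation line starts with a space. The form field and URL below are placeholders, so the curl command is only printed:

```shell
# Create a header file in the format described above
cat > headerfile <<'EOF'
# This file contains two headers.
X-header-1: this is a header
# The following header is folded.
X-header-2: this is another
 header
EOF

# Hypothetical use (field name and URL are placeholders):
echo "curl -F 'submit=OK;headers=@headerfile' https://example.com"
```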
Example: send multipart mail with a quoted-printable text message and a base64 attached file: curl -F '=text message;encoder=quoted-printable' \ -F '=@localfile;encoder=base64' ... smtp://example.com See further examples and details in the MANUAL. --form can be used several times in a command line Example: curl --form "name=curl" --form "file=@loadthis" https://example.com See also -d, --data, --form-string and --form-escape. This option is mutually exclusive to -d, --data and -I, --head and -T, --upload-file. --ftp-account <data> (FTP) When an FTP server asks for "account data" after user name and password have been provided, this data is sent off using the ACCT command. If --ftp-account is provided several times, the last set value is used. Example: curl --ftp-account "mr.robot" ftp://example.com/ See also -u, --user. --ftp-alternative-to-user <command> (FTP) If authenticating with the USER and PASS commands fails, send this command. When connecting to Tumbleweed's Secure Transport server over FTPS using a client certificate, using "SITE AUTH" tells the server to retrieve the username from the certificate. If --ftp-alternative-to-user is provided several times, the last set value is used. Example: curl --ftp-alternative-to-user "U53r" ftp://example.com See also --ftp-account and -u, --user. --ftp-create-dirs (FTP SFTP) When an FTP or SFTP URL/operation uses a path that does not currently exist on the server, the standard behavior of curl is to fail. Using this option, curl instead attempts to create missing directories. Providing --ftp-create-dirs multiple times has no extra effect. Disable it again with --no-ftp-create-dirs. Example: curl --ftp-create-dirs -T file ftp://example.com/remote/path/file See also --create-dirs. --ftp-method <method> (FTP) Control what method curl should use to reach a file on an FTP(S) server. The method argument should be one of the following alternatives: multicwd curl does a single CWD operation for each path part in the given URL.
For deep hierarchies this means many commands. This is how RFC 1738 says it should be done. This is the default but the slowest behavior. nocwd curl does no CWD at all. curl does SIZE, RETR, STOR etc and gives a full path to the server for all these commands. This is the fastest behavior. singlecwd curl does one CWD with the full target directory and then operates on the file "normally" (like in the multicwd case). This is somewhat more standards compliant than 'nocwd' but without the full penalty of 'multicwd'. If --ftp-method is provided several times, the last set value is used. Examples: curl --ftp-method multicwd ftp://example.com/dir1/dir2/file curl --ftp-method nocwd ftp://example.com/dir1/dir2/file curl --ftp-method singlecwd ftp://example.com/dir1/dir2/file See also -l, --list-only. --ftp-pasv (FTP) Use passive mode for the data connection. Passive is the internal default behavior, but this option can be used to override a previous -P, --ftp-port option. Reversing an enforced passive mode is not doable; you must instead enforce the correct -P, --ftp-port again. Passive mode means that curl tries the EPSV command first and then PASV, unless --disable-epsv is used. Providing --ftp-pasv multiple times has no extra effect. Disable it again with --no-ftp-pasv. Example: curl --ftp-pasv ftp://example.com/ See also --disable-epsv. -P, --ftp-port <address> (FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode. curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to set up an IP address and port for it to connect to. <address> should be one of: interface e.g. eth0 to specify which interface's IP address you want to use (Unix only) IP address e.g. 192.168.10.1 to specify the exact IP address host name e.g.
my.host.domain to specify the machine - make curl pick the same IP address that is already used for the control connection. This is the recommended choice. Disable the use of PORT with --ftp-pasv. Disable the attempt to use the EPRT command instead of PORT by using --disable-eprt. EPRT is really PORT++. You can also append ":[start]-[end]" to the right of the address, to tell curl what TCP port range to use. That means you specify a port range, from a lower to a higher number. A single number works as well, but do note that it increases the risk of failure since the port may not be available. If --ftp-port is provided several times, the last set value is used. Examples: curl -P - ftp://example.com curl -P eth0 ftp://example.com curl -P 192.168.0.2 ftp://example.com See also --ftp-pasv and --disable-eprt. --ftp-pret (FTP) Tell curl to send a PRET command before PASV (and EPSV). Certain FTP servers, mainly drftpd, require this non-standard command for directory listings as well as uploads and downloads in PASV mode. Providing --ftp-pret multiple times has no extra effect. Disable it again with --no-ftp-pret. Example: curl --ftp-pret ftp://example.com/ See also -P, --ftp-port and --ftp-pasv. --ftp-skip-pasv-ip (FTP) Tell curl to not use the IP address the server suggests in its response to curl's PASV command when curl connects to the data connection. Instead curl reuses the same IP address it already uses for the control connection. This option is enabled by default (added in 7.74.0). This option has no effect if PORT, EPRT or EPSV is used instead of PASV. Providing --ftp-skip-pasv-ip multiple times has no extra effect. Disable it again with --no-ftp-skip-pasv-ip. Example: curl --ftp-skip-pasv-ip ftp://example.com/ See also --ftp-pasv. --ftp-ssl-ccc-mode <active/passive> (FTP) Sets the CCC mode. The passive mode does not initiate the shutdown, but instead waits for the server to do it, and does not reply to the shutdown from the server.
The active mode initiates the shutdown and waits for a reply from the server. Providing --ftp-ssl-ccc-mode multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc-mode. Example: curl --ftp-ssl-ccc-mode active --ftp-ssl-ccc ftps://example.com/ See also --ftp-ssl-ccc. --ftp-ssl-ccc (FTP) Use CCC (Clear Command Channel). This shuts down the SSL/TLS layer after authenticating. The rest of the control channel communication is unencrypted. This allows NAT routers to follow the FTP transaction. The default mode is passive. Providing --ftp-ssl-ccc multiple times has no extra effect. Disable it again with --no-ftp-ssl-ccc. Example: curl --ftp-ssl-ccc ftps://example.com/ See also --ssl and --ftp-ssl-ccc-mode. --ftp-ssl-control (FTP) Require SSL/TLS for the FTP login, clear for transfer. Allows secure authentication, but non-encrypted data transfers for efficiency. Fails the transfer if the server does not support SSL/TLS. Providing --ftp-ssl-control multiple times has no extra effect. Disable it again with --no-ftp-ssl-control. Example: curl --ftp-ssl-control ftp://example.com See also --ssl. -G, --get (HTTP) When used, this option makes all data specified with -d, --data, --data-binary or --data-urlencode be used in an HTTP GET request instead of the POST request that otherwise would be used. The data is appended to the URL with a '?' separator. If used in combination with -I, --head, the POST data is instead appended to the URL with a HEAD request. Providing --get multiple times has no extra effect. Disable it again with --no-get. Examples: curl --get https://example.com curl --get -d "tool=curl" -d "age=old" https://example.com curl --get -I -d "tool=curl" https://example.com See also -d, --data and -X, --request. -g, --globoff This option switches off the "URL globbing parser". When you set this option, you can specify URLs that contain the letters {}[] without having curl itself interpret them.
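The -G conversion described above (the -d pieces joined with '&' and appended after a '?' separator) can be sketched locally; the URL is a placeholder:

```shell
# What "curl --get -d tool=curl -d age=old https://example.com/path" requests:
base="https://example.com/path"
query="tool=curl&age=old"   # the -d pieces, joined with '&'
url="${base}?${query}"      # -G appends them after a '?' separator
echo "$url"                 # prints: https://example.com/path?tool=curl&age=old
```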
Note that these letters are not normal legal URL contents but they should be encoded according to the URI standard. Providing --globoff multiple times has no extra effect. Disable it again with --no-globoff. Example: curl -g "https://example.com/{[]}}}}" See also -K, --config and -q, --disable. --happy-eyeballs-timeout-ms <milliseconds> Happy Eyeballs is an algorithm that attempts to connect to both IPv4 and IPv6 addresses for dual-stack hosts, giving IPv6 a head-start of the specified number of milliseconds. If the IPv6 address cannot be connected to within that time, then a connection attempt is made to the IPv4 address in parallel. The first connection to be established is the one that is used. The range of suggested useful values is limited. Happy Eyeballs RFC 6555 says "It is RECOMMENDED that connection attempts be paced 150-250 ms apart to balance human factors against network load." libcurl currently defaults to 200 ms. Firefox and Chrome currently default to 300 ms. If --happy-eyeballs-timeout-ms is provided several times, the last set value is used. Example: curl --happy-eyeballs-timeout-ms 500 https://example.com See also -m, --max-time and --connect-timeout. Added in 7.59.0. --haproxy-clientip <IP address> (HTTP) Sets a client IP in the HAProxy PROXY protocol v1 header at the beginning of the connection. For valid requests, IPv4 addresses must be indicated as a series of exactly 4 integers in the range [0..255] inclusive written in decimal representation separated by exactly one dot between each other. Leading zeroes are not permitted in front of numbers in order to avoid any possible confusion with octal numbers. IPv6 addresses must be indicated as a series of 4 hexadecimal digits (upper or lower case) delimited by colons between each other, with the acceptance of one double colon sequence to replace the largest acceptable range of consecutive zeroes. The total number of decoded bits must be exactly 128.
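The IPv4 rule above (exactly four decimal integers in 0-255, with no leading zeroes) can be checked with a sketch like this; the regular expression is an illustration of the rule, not curl's own parser:

```shell
# One octet: 0, or 1-99 without a leading zero, or 100-199, 200-249, 250-255
octet='(0|[1-9][0-9]?|1[0-9]{2}|2[0-4][0-9]|25[0-5])'

ip="192.168.10.1"
if echo "$ip" | grep -Eq "^$octet\.$octet\.$octet\.$octet$"; then
  result="valid"
else
  result="invalid"
fi
echo "$ip is $result"
```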
Otherwise, any string can be accepted for the client IP and get sent. It replaces --haproxy-protocol if used; it is not necessary to specify both flags. If --haproxy-clientip is provided several times, the last set value is used. Example: curl --haproxy-clientip $IP See also -x, --proxy. Added in 8.2.0. --haproxy-protocol (HTTP) Send a HAProxy PROXY protocol v1 header at the beginning of the connection. This is used by some load balancers and reverse proxies to indicate the client's true IP address and port. This option is primarily useful when sending test requests to a service that expects this header. Providing --haproxy-protocol multiple times has no extra effect. Disable it again with --no-haproxy-protocol. Example: curl --haproxy-protocol https://example.com See also -x, --proxy. Added in 7.60.0. -I, --head (HTTP FTP FILE) Fetch the headers only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only. Providing --head multiple times has no extra effect. Disable it again with --no-head. Example: curl -I https://example.com See also -G, --get, -v, --verbose and --trace-ascii. -H, --header <header/@file> (HTTP IMAP SMTP) Extra header to include in information sent. When used within an HTTP request, it is added to the regular request headers. For an IMAP or SMTP MIME uploaded mail built with -F, --form options, it is prepended to the resulting MIME document, effectively including it at the mail global level. It does not affect raw uploaded mails (Added in 7.56.0). You may specify any number of extra headers. Note that if you should add a custom header that has the same name as one of the internal ones curl would use, your externally set header is used instead of the internal one. This allows you to make even trickier stuff than curl would normally do.
You should not replace internally set headers without knowing perfectly well what you are doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in: -H "Host:". If you send a custom header with no value, then the header must be terminated with a semicolon, such as -H "X-Custom-Header;" to send "X-Custom-Header:". curl makes sure that each header you add/replace is sent with the proper end-of-line marker; you should thus not add that as part of the header content: do not add newlines or carriage returns, they only mess things up for you. curl passes on the verbatim string you give it without any filter or other safeguards. That includes white space and control characters. This option can take an argument in @filename style, which then adds a header for each line in the input file. Using @- makes curl read the header file from stdin. Added in 7.55.0. Please note that most anti-spam utilities check the presence and value of several MIME mail headers: these are "From:", "To:", "Date:" and "Subject:" among others and should be added with this option. You need --proxy-header to send custom headers intended for an HTTP proxy. Added in 7.37.0. Passing on a "Transfer-Encoding: chunked" header when doing an HTTP request with a request body, makes curl send the data using chunked encoding. WARNING: headers set with this option are set in all HTTP requests - even after redirects are followed, like when told with -L, --location. This can lead to the header being sent to other hosts than the original host, so sensitive headers should be used with caution combined with following redirects. --header can be used several times in a command line Examples: curl -H "X-First-Name: Joe" https://example.com curl -H "User-Agent: yes-please/2000" https://example.com curl -H "Host:" https://example.com curl -H @headers.txt https://example.com See also -A, --user-agent and -e, --referer. -h, --help <category> Usage help.
This lists all curl command line options within the given category. If no argument is provided, curl displays only the most important command line arguments. For category all, curl displays help for all options. If an invalid category is provided, curl displays all available help categories. Example: curl --help all See also -v, --verbose. --hostpubmd5 <md5> (SFTP SCP) Pass a string containing 32 hexadecimal digits. The string should be the 128-bit MD5 checksum of the remote host's public key; curl refuses the connection with the host unless the checksums match. If --hostpubmd5 is provided several times, the last set value is used. Example: curl --hostpubmd5 e5c1c49020640a5ab0f2034854c321a8 sftp://example.com/ See also --hostpubsha256. --hostpubsha256 <sha256> (SFTP SCP) Pass a string containing a Base64-encoded SHA256 hash of the remote host's public key. Curl refuses the connection with the host unless the hashes match. This feature requires libcurl to be built with libssh2 and does not work with other SSH backends. If --hostpubsha256 is provided several times, the last set value is used. Example: curl --hostpubsha256 NDVkMTQxMGQ1ODdmMjQ3MjczYjAyOTY5MmRkMjVmNDQ= sftp://example.com/ See also --hostpubmd5. Added in 7.80.0. --hsts <file name> (HTTPS) This option enables HSTS for the transfer. If the file name points to an existing HSTS cache file, that is used. After a completed transfer, the cache is saved to the file name again if it has been modified. If curl is told to use HTTP:// for a transfer involving a host name that exists in the HSTS cache, it upgrades the transfer to use HTTPS. Each HSTS cache entry has an individual life time after which the upgrade is no longer performed. Specify a "" file name (zero length) to avoid loading/saving and make curl just handle HSTS in memory. If this option is used several times, curl loads contents from all the files but the last one is used for saving.
--hsts can be used several times in a command line Example: curl --hsts cache.txt https://example.com See also --proto. Added in 7.74.0. --http0.9 (HTTP) Tells curl to be fine with HTTP version 0.9 response. HTTP/0.9 is a response without headers and therefore you can also connect with this to non-HTTP servers and still get a response since curl simply transparently downgrades - if allowed. HTTP/0.9 is disabled by default (added in 7.66.0) Providing --http0.9 multiple times has no extra effect. Disable it again with --no-http0.9. Example: curl --http0.9 https://example.com See also --http1.1, --http2 and --http3. Added in 7.64.0. -0, --http1.0 (HTTP) Tells curl to use HTTP version 1.0 instead of using its internally preferred HTTP version. Providing --http1.0 multiple times has no extra effect. Example: curl --http1.0 https://example.com See also --http0.9 and --http1.1. This option is mutually exclusive to --http1.1 and --http2 and --http2-prior-knowledge and --http3. --http1.1 (HTTP) Tells curl to use HTTP version 1.1. Providing --http1.1 multiple times has no extra effect. Example: curl --http1.1 https://example.com See also -0, --http1.0 and --http0.9. This option is mutually exclusive to -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. --http2-prior-knowledge (HTTP) Tells curl to issue its non-TLS HTTP requests using HTTP/2 without HTTP/1.1 Upgrade. It requires prior knowledge that the server supports HTTP/2 straight away. HTTPS requests still do HTTP/2 the standard way with negotiated protocol version in the TLS handshake. Providing --http2-prior-knowledge multiple times has no extra effect. Disable it again with --no-http2-prior-knowledge. Example: curl --http2-prior-knowledge https://example.com See also --http2 and --http3. --http2-prior-knowledge requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http3. 
--http2 (HTTP) Tells curl to use HTTP version 2. For HTTPS, this means curl negotiates HTTP/2 in the TLS handshake. curl does this by default. For HTTP, this means curl attempts to upgrade the request to HTTP/2 using the Upgrade: request header. When curl uses HTTP/2 over HTTPS, it does not itself insist on TLS 1.2 or higher even though that is required by the specification. A user can add this version requirement with --tlsv1.2. Providing --http2 multiple times has no extra effect. Example: curl --http2 https://example.com See also --http1.1, --http3 and --no-alpn. --http2 requires that the underlying libcurl was built to support HTTP/2. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2-prior-knowledge and --http3. --http3-only (HTTP) Instructs curl to use HTTP/3 to the host in the URL, with no fallback to earlier HTTP versions. HTTP/3 can only be used for HTTPS and not for HTTP URLs. For HTTP, this option triggers an error. This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port. This option makes curl fail if a QUIC connection cannot be established, it does not attempt any other HTTP versions on its own. Use --http3 for similar functionality with a fallback. Providing --http3-only multiple times has no extra effect. Example: curl --http3-only https://example.com See also --http1.1, --http2 and --http3. --http3-only requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3. Added in 7.88.0. --http3 (HTTP) Tells curl to try HTTP/3 to the host in the URL, but fallback to earlier HTTP versions if the HTTP/3 connection establishment fails. HTTP/3 is only available for HTTPS and not for HTTP URLs. 
This option allows a user to avoid using the Alt-Svc method of upgrading to HTTP/3 when you know that the target speaks HTTP/3 on the given host and port. When asked to use HTTP/3, curl issues a separate attempt to use older HTTP versions with a slight delay, so if the HTTP/3 transfer fails or is slow, curl still tries to proceed with an older HTTP version. Use --http3-only for similar functionality without a fallback. Providing --http3 multiple times has no extra effect. Example: curl --http3 https://example.com See also --http1.1 and --http2. --http3 requires that the underlying libcurl was built to support HTTP/3. This option is mutually exclusive to --http1.1 and -0, --http1.0 and --http2 and --http2-prior-knowledge and --http3-only. Added in 7.66.0. --ignore-content-length (FTP HTTP) For HTTP, ignore the Content-Length header. This is particularly useful for servers running Apache 1.x, which reports incorrect Content-Length for files larger than 2 gigabytes. For FTP, this makes curl skip the SIZE command to figure out the size before downloading a file. This option does not work for HTTP if libcurl was built to use hyper. Providing --ignore-content-length multiple times has no extra effect. Disable it again with --no-ignore-content-length. Example: curl --ignore-content-length https://example.com See also --ftp-skip-pasv-ip. -i, --include (HTTP FTP) Include response headers in the output. HTTP response headers can include things like server name, cookies, date of the document, HTTP version and more... With non-HTTP protocols, the "headers" are other server communication. To view the request headers, consider the -v, --verbose option. Prior to 7.75.0 curl did not print the headers if -f, --fail was used in combination with this option and there was an error reported by the server. Providing --include multiple times has no extra effect. Disable it again with --no-include. Example: curl -i https://example.com See also -v, --verbose.
-k, --insecure (TLS SFTP SCP) By default, every secure connection curl makes is verified to be secure before the transfer takes place. This option makes curl skip the verification step and proceed without checking. When this option is not used for protocols using TLS, curl verifies the server's TLS certificate before it continues: that the certificate contains the right name which matches the host name used in the URL and that the certificate has been signed by a CA certificate present in the cert store. See this online resource for further details: https://curl.se/docs/sslcerts.html For SFTP and SCP, this option makes curl skip the known_hosts verification. known_hosts is a file normally stored in the user's home directory in the ".ssh" subdirectory, which contains host names and their public keys. WARNING: using this option makes the transfer insecure. When curl uses secure protocols it trusts responses and allows for example HSTS and Alt-Svc information to be stored and used subsequently. Using -k, --insecure can make curl trust and use such information from malicious servers. Providing --insecure multiple times has no extra effect. Disable it again with --no-insecure. Example: curl --insecure https://example.com See also --proxy-insecure, --cacert and --capath. --interface <name> Perform an operation using a specified interface. You can enter interface name, IP address or host name. An example could look like: curl --interface eth0:1 https://www.example.com/ On Linux it can be used to specify a VRF, but the binary needs to either have CAP_NET_RAW or to be run as root. More information about Linux VRF: https://www.kernel.org/doc/Documentation/networking/vrf.txt If --interface is provided several times, the last set value is used. Example: curl --interface eth0 https://example.com See also --dns-interface. --ipfs-gateway <URL> (IPFS) Specify which gateway to use for IPFS and IPNS URLs. 
Not specifying this will instead make curl check if the IPFS_GATEWAY environment variable is set, or if a "~/.ipfs/gateway" file holding the gateway URL exists. If you run a local IPFS node, this gateway is by default available under "http://localhost:8080". A full example URL would look like: curl --ipfs-gateway http://localhost:8080 ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi There are many public IPFS gateways. See for example: https://ipfs.github.io/public-gateway-checker/ WARNING: If you opt to go for a remote gateway you should be aware that you completely trust the gateway. This is fine for local gateways, as you host them yourself. With remote gateways, there could potentially be a malicious actor returning data that does not match the request you made, or inspecting or even interfering with the request. You will not notice this when using curl. A mitigation could be to go for a "trustless" gateway. This means you locally verify the data. Consult the docs page on trusted vs trustless: https://docs.ipfs.tech/reference/http/gateway/#trusted-vs-trustless If --ipfs-gateway is provided several times, the last set value is used. Example: curl --ipfs-gateway https://example.com ipfs:// See also -h, --help and -M, --manual. Added in 8.4.0. -4, --ipv4 This option tells curl to use IPv4 addresses only when resolving host names, and not for example try IPv6. Providing --ipv4 multiple times has no extra effect. Example: curl --ipv4 https://example.com See also --http1.1 and --http2. This option is mutually exclusive to -6, --ipv6. -6, --ipv6 This option tells curl to use IPv6 addresses only when resolving host names, and not for example try IPv4. Providing --ipv6 multiple times has no extra effect. Example: curl --ipv6 https://example.com See also --http1.1 and --http2. This option is mutually exclusive to -4, --ipv4. --json <data> (HTTP) Sends the specified JSON data in a POST request to the HTTP server.
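A sketch of what --json does: it sends the data as a POST with JSON Content-Type and Accept headers, so the two printed command lines are equivalent. The URL is a placeholder, so the commands are only printed, not run:

```shell
json='{"drink": "coffee"}'
url="https://example.com"   # placeholder

shorthand="curl --json '$json' $url"
expanded="curl --data '$json' -H 'Content-Type: application/json' -H 'Accept: application/json' $url"

echo "$shorthand"
echo "$expanded"
```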
--json works as a shortcut for passing on these three options: --data [arg] --header "Content-Type: application/json" --header "Accept: application/json" There is no verification that the passed in data is actual JSON or that the syntax is correct. If you start the data with the letter @, the rest should be a file name to read the data from, or a single dash (-) if you want curl to read the data from stdin. Posting data from a file named 'foobar' would thus be done with --json @foobar and to instead read the data from stdin, use --json @-. If this option is used more than once on the same command line, the additional data pieces are concatenated to the previous before sending. The headers this option sets can be overridden with -H, --header as usual. --json can be used several times in a command line Examples: curl --json '{ "drink": "coffee" }' https://example.com curl --json '{ "drink":' --json ' "coffee" }' https://example.com curl --json @prepared https://example.com curl --json @- https://example.com < json.txt See also --data-binary and --data-raw. This option is mutually exclusive to -F, --form and -I, --head and -T, --upload-file. Added in 7.82.0. -j, --junk-session-cookies (HTTP) When curl is told to read cookies from a given file, this option makes it discard all "session cookies". This has the same effect as if a new session is started. Typical browsers discard session cookies when they are closed down. Providing --junk-session-cookies multiple times has no extra effect. Disable it again with --no-junk-session-cookies. Example: curl --junk-session-cookies -b cookies.txt https://example.com See also -b, --cookie and -c, --cookie-jar. --keepalive-time <seconds> This option sets the time a connection needs to remain idle before sending keepalive probes and the time between individual keepalive probes. It is currently effective on operating systems offering the "TCP_KEEPIDLE" and "TCP_KEEPINTVL" socket options (meaning Linux, recent AIX, HP-UX and more).
Keepalive is used by the TCP stack to detect broken networks on idle connections. The number of missed keepalive probes before declaring the connection down is OS dependent and is commonly 9 or 10. This option has no effect if --no-keepalive is used. If unspecified, the option defaults to 60 seconds. If --keepalive-time is provided several times, the last set value is used. Example: curl --keepalive-time 20 https://example.com See also --no-keepalive and -m, --max-time. --key-type <type> (TLS) Private key file type. Specify which type your --key provided private key is. DER, PEM, and ENG are supported. If not specified, PEM is assumed. If --key-type is provided several times, the last set value is used. Example: curl --key-type DER --key here https://example.com See also --key. --key <key> (TLS SSH) Private key file name. Allows you to provide your private key in this separate file. For SSH, if not specified, curl tries the following candidates in order: "~/.ssh/id_rsa", "~/.ssh/id_dsa", "./id_rsa", "./id_dsa". If curl is built against OpenSSL library, and the engine pkcs11 is available, then a PKCS#11 URI (RFC 7512) can be used to specify a private key located in a PKCS#11 device. A string beginning with "pkcs11:" is interpreted as a PKCS#11 URI. If a PKCS#11 URI is provided, then the --engine option is set as "pkcs11" if none was provided and the --key-type option is set as "ENG" if none was provided. If curl is built against Secure Transport or Schannel then this option is ignored for TLS protocols (HTTPS, etc). Those backends expect the private key to be already present in the keychain or PKCS#12 file containing the certificate. If --key is provided several times, the last set value is used. Example: curl --cert certificate --key here https://example.com See also --key-type and -E, --cert. --krb <level> (FTP) Enable Kerberos authentication and use. The level must be entered and should be one of 'clear', 'safe', 'confidential', or 'private'. 
Should you use a level that is not one of these, 'private' is used. If --krb is provided several times, the last set value is used. Example: curl --krb clear ftp://example.com/ See also --delegation and --ssl. --krb requires that the underlying libcurl was built to support Kerberos. --libcurl <file> Append this option to any ordinary curl command line, and you get libcurl-using C source code written to the file that does the equivalent of what your command-line operation does! This option is global and does not need to be specified for each use of --next. If --libcurl is provided several times, the last set value is used. Example: curl --libcurl client.c https://example.com See also -v, --verbose. --limit-rate <speed> Specify the maximum transfer rate you want curl to use - for both downloads and uploads. This feature is useful if you have a limited pipe and you would like your transfer not to use your entire bandwidth, making it slower than it otherwise would be. The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K' counts the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. The suffixes (k, M, G, T, P) are 1024 based. For example 1k is 1024. Examples: 200K, 3m and 1G. The rate limiting logic works on averaging the transfer speed to no more than the set threshold over a period of multiple seconds. If you also use the -Y, --speed-limit option, that option takes precedence and might cripple the rate-limiting slightly, to help keep the speed-limit logic working. If --limit-rate is provided several times, the last set value is used. Examples: curl --limit-rate 100K https://example.com curl --limit-rate 1000 https://example.com curl --limit-rate 10M https://example.com See also --rate, -Y, --speed-limit and -y, --speed-time. -l, --list-only (FTP POP3 SFTP) (FTP) When listing an FTP directory, this switch forces a name-only view. 
This is especially useful if the user wants to machine-parse the contents of an FTP directory since the normal directory view does not use a standard look or format. When used like this, the option causes an NLST command to be sent to the server instead of LIST. Note: Some FTP servers list only files in their response to NLST; they do not include sub-directories and symbolic links. (SFTP) When listing an SFTP directory, this switch forces a name-only view, one per line. This is especially useful if the user wants to machine-parse the contents of an SFTP directory since the normal directory view provides more information than just file names. (POP3) When retrieving a specific email from POP3, this switch forces a LIST command to be performed instead of RETR. This is particularly useful if the user wants to see if a specific message-id exists on the server and what size it is. Note: When combined with -X, --request, this option can be used to send a UIDL command instead, so the user may use the email's unique identifier rather than its message-id to make the request. Providing --list-only multiple times has no extra effect. Disable it again with --no-list-only. Example: curl --list-only ftp://example.com/dir/ See also -Q, --quote and -X, --request. --local-port <num/range> Set a preferred single number or range (FROM-TO) of local port numbers to use for the connection(s). Note that port numbers by nature are a scarce resource so setting this range to something too narrow might cause unnecessary connection setup failures. If --local-port is provided several times, the last set value is used. Example: curl --local-port 1000-3000 https://example.com See also -g, --globoff. --location-trusted (HTTP) Like -L, --location, but allows sending the name + password to all hosts that the site may redirect to. 
This may or may not introduce a security breach if the site redirects you to a site to which you send your authentication info (which is clear-text in the case of HTTP Basic authentication). Providing --location-trusted multiple times has no extra effect. Disable it again with --no-location-trusted. Example: curl --location-trusted -u user:password https://example.com See also -u, --user. -L, --location (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option makes curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages are shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, the user + password is not passed on. See also --location-trusted on how to change this. Limit the number of redirects to follow by using the --max-redirs option. When curl follows a redirect and if the request is a POST, it sends the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl resends the following request using the same unmodified method. You can tell curl to not change POST requests to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303. The method set with -X, --request overrides the method curl would otherwise select to use. Providing --location multiple times has no extra effect. Disable it again with --no-location. Example: curl -L https://example.com See also --resolve and --alt-svc. --login-options <options> (IMAP LDAP POP3 SMTP) Specify the login options to use during server authentication. You can use login options to specify protocol-specific options that may be used during authentication. At present only IMAP, POP3 and SMTP support login options. 
For more information about login options please see RFC 2384, RFC 5092 and the IETF draft https://datatracker.ietf.org/doc/html/draft-earhart-url-smtp-00 Since 8.2.0, IMAP supports the login option "AUTH=+LOGIN". With this option, curl uses the plain (not SASL) IMAP "LOGIN" command even if the server advertises SASL authentication. Care should be taken in using this option, as it sends your password over the network in plain text. This does not work if the IMAP server disables the plain "LOGIN" (e.g. to prevent password snooping). If --login-options is provided several times, the last set value is used. Example: curl --login-options 'AUTH=*' imap://example.com See also -u, --user. --mail-auth <address> (SMTP) Specify a single address. This is used to specify the authentication address (identity) of a submitted message that is being relayed to another server. If --mail-auth is provided several times, the last set value is used. Example: curl --mail-auth user@example.com -T mail smtp://example.com/ See also --mail-rcpt and --mail-from. --mail-from <address> (SMTP) Specify a single address that the given mail should get sent from. If --mail-from is provided several times, the last set value is used. Example: curl --mail-from user@example.com -T mail smtp://example.com/ See also --mail-rcpt and --mail-auth. --mail-rcpt-allowfails (SMTP) When sending data to multiple recipients, by default curl aborts the SMTP conversation if at least one of the recipients causes the RCPT TO command to return an error. The default behavior can be changed by passing the --mail-rcpt-allowfails command-line option, which makes curl ignore errors and proceed with the remaining valid recipients. If all recipients trigger RCPT TO failures and this flag is specified, curl still aborts the SMTP conversation and returns the error received from the last RCPT TO command. Providing --mail-rcpt-allowfails multiple times has no extra effect. Disable it again with --no-mail-rcpt-allowfails. 
Example: curl --mail-rcpt-allowfails --mail-rcpt dest@example.com smtp://example.com See also --mail-rcpt. Added in 7.69.0. --mail-rcpt <address> (SMTP) Specify a single email address, user name or mailing list name. Repeat this option several times to send to multiple recipients. When performing an address verification (VRFY command), the recipient should be specified as the user name or user name and domain (as per Section 3.5 of RFC 5321). When performing a mailing list expand (EXPN command), the recipient should be specified using the mailing list name, such as "Friends" or "London-Office". --mail-rcpt can be used several times in a command line. Example: curl --mail-rcpt user@example.net smtp://example.com See also --mail-rcpt-allowfails. -M, --manual Manual. Display the huge help text. Example: curl --manual See also -v, --verbose, --libcurl and --trace. --max-filesize <bytes> (FTP HTTP MQTT) Specify the maximum size (in bytes) of a file to download. If the file requested is larger than this value, the transfer does not start and curl returns with exit code 63. A size modifier may be used. Appending 'k' or 'K' counts the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and 1G. (Added in 7.58.0) NOTE: before curl 8.4.0, this option had no effect for files whose size was not known prior to download, even if the transfer ended up being larger than the given limit. Starting with curl 8.4.0, this option aborts the transfer if it reaches the threshold during transfer. If --max-filesize is provided several times, the last set value is used. Example: curl --max-filesize 100K https://example.com See also --limit-rate. --max-redirs <num> (HTTP) Set the maximum number of redirections to follow. When -L, --location is used, the limit defaults to 50 redirects, which prevents curl from following redirects endlessly. Set this option to -1 to make it unlimited. 
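The method-conversion rules that -L, --location applies after a 3xx response (described above, together with --post301, --post302 and --post303) can be sketched as a small decision function. This is an illustration in Python, not curl's actual source:

```python
def next_request(method, status, post301=False, post302=False, post303=False):
    """Return the method used for the request that follows a 3xx response,
    per the -L rules: POST becomes GET on 301/302/303 unless the matching
    --post30x option is set; any other method, or any other 3xx code,
    is resent unmodified."""
    if method != "POST":
        return method
    if status == 301 and not post301:
        return "GET"
    if status == 302 and not post302:
        return "GET"
    if status == 303 and not post303:
        return "GET"
    return "POST"

assert next_request("POST", 301) == "GET"
assert next_request("POST", 302, post302=True) == "POST"
assert next_request("POST", 307) == "POST"   # other 3xx: method unchanged
assert next_request("HEAD", 301) == "HEAD"
```

Note that a method forced with -X, --request overrides this selection entirely.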
If --max-redirs is provided several times, the last set value is used. Example: curl --max-redirs 3 --location https://example.com See also -L, --location. -m, --max-time <fractional seconds> Maximum time in seconds that you allow each transfer to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. This option accepts decimal values. If you enable retrying the transfer (--retry) then the maximum time counter is reset each time the transfer is retried. You can use --retry-max-time to limit the retry time. The decimal value needs to be provided using a dot (.) as decimal separator - not the local version even if it might be using another separator. If --max-time is provided several times, the last set value is used. Examples: curl --max-time 10 https://example.com curl --max-time 2.92 https://example.com See also --connect-timeout and --retry-max-time. --metalink This option was previously used to specify a Metalink resource. Metalink support is disabled in curl for security reasons (added in 7.78.0). If --metalink is provided several times, the last set value is used. Example: curl --metalink file https://example.com See also -Z, --parallel. --negotiate (HTTP) Enables Negotiate (SPNEGO) authentication. This option requires a library built with GSS-API or SSPI support. Use -V, --version to see if your curl supports GSS-API/SSPI or SPNEGO. When using this option, you must also provide a fake -u, --user option to activate the authentication code properly. Sending a '-u :' is enough as the user name and password from the -u, --user option are not actually used. Providing --negotiate multiple times has no extra effect. Example: curl --negotiate -u : https://example.com See also --basic, --ntlm, --anyauth and --proxy-negotiate. --netrc-file <filename> This option is similar to -n, --netrc, except that you provide the path (absolute or relative) to the netrc file that curl should use. 
You can only specify one netrc file per invocation. It abides by --netrc-optional if specified. If --netrc-file is provided several times, the last set value is used. Example: curl --netrc-file netrc https://example.com See also -n, --netrc, -u, --user and -K, --config. This option is mutually exclusive to -n, --netrc. --netrc-optional Similar to -n, --netrc, but this option makes the .netrc usage optional and not mandatory as the -n, --netrc option does. Providing --netrc-optional multiple times has no extra effect. Disable it again with --no-netrc-optional. Example: curl --netrc-optional https://example.com See also --netrc-file. This option is mutually exclusive to -n, --netrc. -n, --netrc Makes curl scan the .netrc file in the user's home directory for login name and password. This is typically used for FTP on Unix. If used with HTTP, curl enables user authentication. See netrc(5) and ftp(1) for details on the file format. Curl does not complain if that file does not have the right permissions (it should be neither world- nor group-readable). The environment variable "HOME" is used to find the home directory. On Windows two filenames in the home directory are checked: .netrc and _netrc, preferring the former. Older versions on Windows checked for _netrc only. A quick and simple example of how to setup a .netrc to allow curl to FTP to the machine host.domain.com with user name 'myself' and password 'secret' could look similar to: machine host.domain.com login myself password secret Providing --netrc multiple times has no extra effect. Disable it again with --no-netrc. Example: curl --netrc https://example.com See also --netrc-file, -K, --config and -u, --user. This option is mutually exclusive to --netrc-file and --netrc-optional. -:, --next Tells curl to use a separate operation for the following URL and associated options. 
This allows you to send several URL requests, each with their own specific options, for example, such as different user names or custom requests for each. -:, --next resets all local options and only global ones have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early. For example, you can do both a GET and a POST in a single command line: curl www1.example.com --next -d postthis www2.example.com --next can be used several times in a command line Examples: curl https://example.com --next -d postthis www2.example.com curl -I https://example.com --next https://example.net/ See also -Z, --parallel and -K, --config. --no-alpn (HTTPS) Disable the ALPN TLS extension. ALPN is enabled by default if libcurl was built with an SSL library that supports ALPN. ALPN is used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. Note that this is the negated option name documented. You can use --alpn to enable ALPN. Providing --no-alpn multiple times has no extra effect. Disable it again with --alpn. Example: curl --no-alpn https://example.com See also --no-npn and --http2. --no-alpn requires that the underlying libcurl was built to support TLS. -N, --no-buffer Disables the buffering of the output stream. In normal work situations, curl uses a standard buffered output stream that has the effect that it outputs the data in chunks, not necessarily exactly when the data arrives. Using this option disables that buffering. Note that this is the negated option name documented. You can use --buffer to enable buffering again. Providing --no-buffer multiple times has no extra effect. Disable it again with --buffer. Example: curl --no-buffer https://example.com See also -#, --progress-bar. 
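The buffering that -N, --no-buffer disables is generic stream buffering: written data sits in an in-memory buffer until the buffer fills or is flushed, so a downstream consumer sees it late. A minimal stdlib illustration of that effect (this models buffered output in general, not curl's internals):

```python
import io

class Sink(io.RawIOBase):
    """Stands in for the real output (a file or pipe); records what reaches it."""
    def __init__(self):
        self.received = b""
    def writable(self):
        return True
    def write(self, b):
        self.received += bytes(b)
        return len(b)

sink = Sink()
buf = io.BufferedWriter(sink, buffer_size=4096)

buf.write(b"partial chunk")        # smaller than the buffer...
assert sink.received == b""        # ...so nothing has reached the sink yet

buf.flush()                        # -N is the moral equivalent of flushing
assert sink.received == b"partial chunk"   # each write immediately
```

This is why -N matters when piping curl's output into a consumer that should react to data as it arrives.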
--no-clobber When used in conjunction with the -o, --output, -J, --remote-header-name, -O, --remote-name, or --remote-name-all options, curl avoids overwriting files that already exist. Instead, a dot and a number get appended to the name of the file that would be created, up to filename.100, after which it does not create any file. Note that this is the negated option name documented. You can thus use --clobber to enforce the clobbering, even if -J, --remote-header-name is specified. Providing --no-clobber multiple times has no extra effect. Disable it again with --clobber. Example: curl --no-clobber --output local/dir/file https://example.com See also -o, --output and -O, --remote-name. Added in 7.83.0. --no-keepalive Disables the use of keepalive messages on the TCP connection. curl otherwise enables them by default. Note that this is the negated option name documented. You can thus use --keepalive to enforce keepalive. Providing --no-keepalive multiple times has no extra effect. Disable it again with --keepalive. Example: curl --no-keepalive https://example.com See also --keepalive-time. --no-npn (HTTPS) curl never uses NPN; this option has no effect (as of 7.86.0). Disable the NPN TLS extension. NPN is enabled by default if libcurl was built with an SSL library that supports NPN. NPN was used by a libcurl that supports HTTP/2 to negotiate HTTP/2 support with the server during https sessions. Providing --no-npn multiple times has no extra effect. Disable it again with --npn. Example: curl --no-npn https://example.com See also --no-alpn and --http2. --no-npn requires that the underlying libcurl was built to support TLS. --no-progress-meter Option to switch off the progress meter output without muting or otherwise affecting warning and informational messages like -s, --silent does. Note that this is the negated option name documented. You can thus use --progress-meter to enable the progress meter again. 
Providing --no-progress-meter multiple times has no extra effect. Disable it again with --progress-meter. Example: curl --no-progress-meter -o store https://example.com See also -v, --verbose and -s, --silent. Added in 7.67.0. --no-sessionid (TLS) Disable curl's use of SSL session-ID caching. By default all transfers are done using the cache. Note that while nothing should ever get hurt by attempting to reuse SSL session-IDs, there seem to be broken SSL implementations in the wild that may require you to disable this in order to succeed. Note that this is the negated option name documented. You can thus use --sessionid to enforce session-ID caching. Providing --no-sessionid multiple times has no extra effect. Disable it again with --sessionid. Example: curl --no-sessionid https://example.com See also -k, --insecure. --noproxy <no-proxy-list> Comma-separated list of hosts for which not to use a proxy, if one is specified. The only wildcard is a single "*" character, which matches all hosts, and effectively disables the proxy. Each name in this list is matched as either a domain which contains the hostname, or the hostname itself. For example, "local.com" would match "local.com", "local.com:80", and "www.local.com", but not "www.notlocal.com". This option overrides the environment variables that disable the proxy ("no_proxy" and "NO_PROXY") (added in 7.53.0). If there is an environment variable disabling a proxy, you can set the no proxy list to "" to override it. IP addresses specified to this option can be provided using CIDR notation (added in 7.86.0): an appended slash and number specifies the number of network bits out of the address to use in the comparison. For example "192.168.0.0/16" would match all addresses starting with "192.168". If --noproxy is provided several times, the last set value is used. Example: curl --noproxy "www.example" https://example.com See also -x, --proxy. 
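The --noproxy matching rules above (exact name, domain suffix, "*" wildcard, and CIDR entries) can be sketched with the Python ipaddress module. This is an illustrative model, not curl's source:

```python
import ipaddress

def bypass_proxy(host, no_proxy):
    """Sketch of --noproxy matching: `no_proxy` is the comma-separated
    list; `host` is the bare hostname or IP (no port)."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if entry == "*":                 # wildcard disables the proxy entirely
            return True
        if "/" in entry:                 # CIDR entry (added in 7.86.0)
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry, strict=False):
                    return True
            except ValueError:           # host is not an IP address
                continue
        elif host == entry or host.endswith("." + entry):
            return True                  # exact host, or domain containing it
    return False

assert bypass_proxy("www.local.com", "local.com")
assert not bypass_proxy("www.notlocal.com", "local.com")
assert bypass_proxy("192.168.4.2", "10.0.0.0/8, 192.168.0.0/16")
```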
--ntlm-wb (HTTP) Enables NTLM much in the style --ntlm does, but hands over the authentication to the separate "ntlm_auth" helper binary that is executed when needed. Providing --ntlm-wb multiple times has no extra effect. Example: curl --ntlm-wb -u user:password https://example.com See also --ntlm and --proxy-ntlm. --ntlm (HTTP) Enables NTLM authentication. The NTLM authentication method was designed by Microsoft and is used by IIS web servers. It is a proprietary protocol, reverse-engineered by clever people and implemented in curl based on their efforts. This kind of behavior should not be endorsed; you should encourage everyone who uses NTLM to switch to a public and documented authentication method instead, such as Digest. If you want to enable NTLM for your proxy authentication, then use --proxy-ntlm. Providing --ntlm multiple times has no extra effect. Example: curl --ntlm -u user:password https://example.com See also --proxy-ntlm. --ntlm requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --basic and --negotiate and --digest and --anyauth. --oauth2-bearer <token> (IMAP LDAP POP3 SMTP HTTP) Specify the Bearer Token for OAuth 2.0 server authentication. The Bearer Token is used in conjunction with the user name which can be specified as part of the --url or -u, --user options. The Bearer Token and user name are formatted according to RFC 6750. If --oauth2-bearer is provided several times, the last set value is used. Example: curl --oauth2-bearer "mF_9.B5f-4.1JqM" https://example.com See also --basic, --ntlm and --digest. --output-dir <dir> This option specifies the directory in which files should be stored, when -O, --remote-name or -o, --output are used. The given output directory is used for all URLs and output options on the command line, up until the first -:, --next. If the specified target directory does not exist, the operation fails unless --create-dirs is also used. 
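The interaction between --output-dir and --create-dirs described above can be modeled with a small helper. `resolve_output` is a hypothetical name for illustration, not a curl function:

```python
import os
import tempfile

def resolve_output(output_dir, filename, create_dirs=False):
    """Sketch of --output-dir semantics: fail when the directory is
    missing, unless --create-dirs (create_dirs=True) was also given."""
    if not os.path.isdir(output_dir):
        if not create_dirs:
            raise FileNotFoundError(output_dir)
        os.makedirs(output_dir)          # --create-dirs makes missing dirs
    return os.path.join(output_dir, filename)

base = tempfile.mkdtemp()
target = os.path.join(base, "tmp")       # directory that does not exist yet
try:
    resolve_output(target, "index.html")
    raise AssertionError("expected failure without create_dirs")
except FileNotFoundError:
    pass                                 # operation fails, as documented
path = resolve_output(target, "index.html", create_dirs=True)
assert os.path.isdir(target)
```

The returned path is where a subsequent -O or -o write would land.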
If --output-dir is provided several times, the last set value is used. Example: curl --output-dir "tmp" -O https://example.com See also -O, --remote-name and -J, --remote-header-name. Added in 7.73.0. -o, --output <file> Write output to <file> instead of stdout. If you are using {} or [] to fetch multiple documents, you should quote the URL and you can use '#' followed by a number in the <file> specifier. That variable is replaced with the current string for the URL being fetched. Like in: curl "http://{one,two}.example.com" -o "file_#1.txt" or use several variables like: curl "http://{site,host}.host[1-5].example" -o "#1_#2" You may use this option as many times as the number of URLs you have. For example, if you specify two URLs on the same command line, you can use it like this: curl -o aa example.com -o bb example.net and the order of the -o options and the URLs does not matter, just that the first -o is for the first URL and so on, so the above command line can also be written as curl example.com example.net -o aa -o bb See also the --create-dirs option to create the local directories dynamically. Specifying the output as '-' (a single dash) passes the output to stdout. To suppress response bodies, you can redirect output to /dev/null: curl example.com -o /dev/null Or for Windows: curl example.com -o nul --output can be used several times in a command line Examples: curl -o file https://example.com curl "http://{one,two}.example.com" -o "file_#1.txt" curl "http://{site,host}.host[1-5].example" -o "#1_#2" curl -o file https://example.com -o file2 https://example.net See also -O, --remote-name, --remote-name-all and -J, --remote-header-name. --parallel-immediate When doing parallel transfers, this option instructs curl that it should rather prefer opening up more connections in parallel at once rather than waiting to see if new transfers can be added as multiplexed streams on another connection. 
This option is global and does not need to be specified for each use of --next. Providing --parallel-immediate multiple times has no extra effect. Disable it again with --no-parallel-immediate. Example: curl --parallel-immediate -Z https://example.com -o file1 https://example.com -o file2 See also -Z, --parallel and --parallel-max. Added in 7.68.0. --parallel-max <num> When asked to do parallel transfers, using -Z, --parallel, this option controls the maximum amount of transfers to do simultaneously. This option is global and does not need to be specified for each use of -:, --next. The default is 50. If --parallel-max is provided several times, the last set value is used. Example: curl --parallel-max 100 -Z https://example.com ftp://example.com/ See also -Z, --parallel. Added in 7.66.0. -Z, --parallel Makes curl perform its transfers in parallel as compared to the regular serial manner. This option is global and does not need to be specified for each use of --next. Providing --parallel multiple times has no extra effect. Disable it again with --no-parallel. Example: curl --parallel https://example.com -o file1 https://example.com -o file2 See also -:, --next and -v, --verbose. Added in 7.66.0. --pass <phrase> (SSH TLS) Passphrase for the private key. If --pass is provided several times, the last set value is used. Example: curl --pass secret --key file https://example.com See also --key and -u, --user. --path-as-is Tell curl to not handle sequences of /../ or /./ in the given URL path. Normally curl squashes or merges them according to standards but with this option set you tell it not to do that. Providing --path-as-is multiple times has no extra effect. Disable it again with --no-path-as-is. Example: curl --path-as-is https://example.com/../../etc/passwd See also --request-target. --pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the peer. 
This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl aborts the connection before sending or receiving any data. This option is independent of option -k, --insecure. If you use both options together then the peer is still verified by public key. PEM/DER support: OpenSSL and GnuTLS, wolfSSL (added in 7.43.0), mbedTLS, Secure Transport macOS 10.7+/iOS 10+ (7.54.1), Schannel (7.58.1). sha256 support: OpenSSL, GnuTLS and wolfSSL, mbedTLS (added in 7.47.0), Secure Transport macOS 10.7+/iOS 10+ (7.54.1), Schannel (7.58.1). Other SSL backends are not supported. If --pinnedpubkey is provided several times, the last set value is used. Examples: curl --pinnedpubkey keyfile https://example.com curl --pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --hostpubsha256. --post301 (HTTP) Tells curl to respect RFC 7231/6.4.2 and not convert POST requests into GET requests when following a 301 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post301 multiple times has no extra effect. Disable it again with --no-post301. Example: curl --post301 --location -d "data" https://example.com See also --post302, --post303 and -L, --location. --post302 (HTTP) Tells curl to respect RFC 7231/6.4.3 and not convert POST requests into GET requests when following a 302 redirection. The non-RFC behavior is ubiquitous in web browsers, so curl does the conversion by default to maintain consistency. 
However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when using -L, --location. Providing --post302 multiple times has no extra effect. Disable it again with --no-post302. Example: curl --post302 --location -d "data" https://example.com See also --post301, --post303 and -L, --location. --post303 (HTTP) Tells curl to violate RFC 7231/6.4.4 and not convert POST requests into GET requests when following 303 redirections. A server may require a POST to remain a POST after a 303 redirection. This option is meaningful only when using -L, --location. Providing --post303 multiple times has no extra effect. Disable it again with --no-post303. Example: curl --post303 --location -d "data" https://example.com See also --post302, --post301 and -L, --location. --preproxy [protocol://]host[:port] Use the specified SOCKS proxy before connecting to an HTTP or HTTPS -x, --proxy. In such a case curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. Hence pre proxy. The pre proxy string should be specified with a protocol:// prefix to specify alternative proxy protocols. Use socks4://, socks4a://, socks5:// or socks5h:// to request the specific SOCKS version to be used. No protocol specified makes curl default to SOCKS4. If the port number is not specified in the proxy string, it is assumed to be 1080. User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a. If --preproxy is provided several times, the last set value is used. Example: curl --preproxy socks5://proxy.example -x http://http.example https://example.com See also -x, --proxy and --socks5. Added in 7.52.0. -#, --progress-bar Make curl display transfer progress as a simple progress bar instead of the standard, more informational, meter. 
This progress bar draws a single line of '#' characters across the screen and shows a percentage if the transfer size is known. For transfers without a known size, there is a space ship (-=o=-) that moves back and forth but only while data is being transferred, with a set of flying hash sign symbols on top. This option is global and does not need to be specified for each use of --next. Providing --progress-bar multiple times has no extra effect. Disable it again with --no-progress-bar. Example: curl -# -O https://example.com See also --styled-output. --proto-default <protocol> Tells curl to use protocol for any URL missing a scheme name. An unknown or unsupported protocol causes error CURLE_UNSUPPORTED_PROTOCOL (1). This option does not change the default proxy protocol (http). Without this option set, curl guesses protocol based on the host name, see --url for details. If --proto-default is provided several times, the last set value is used. Example: curl --proto-default https ftp.example.com See also --proto and --proto-redir. --proto-redir <protocols> Tells curl to limit what protocols it may use on redirect. Protocols denied by --proto are not overridden by this option. See --proto for how protocols are represented. Example, allow only HTTP and HTTPS on redirect: curl --proto-redir -all,http,https http://example.com By default curl only allows HTTP, HTTPS, FTP and FTPS on redirects (added in 7.65.2). Specifying all or +all enables all protocols on redirects, which is not good for security. If --proto-redir is provided several times, the last set value is used. Example: curl --proto-redir =http,https https://example.com See also --proto. --proto <protocols> Tells curl to limit what protocols it may use for transfers. Protocols are evaluated left to right, are comma separated, and are each a protocol name or 'all', optionally prefixed by zero or more modifiers. 
Available modifiers are: + Permit this protocol in addition to protocols already permitted (this is the default if no modifier is used). - Deny this protocol, removing it from the list of protocols already permitted. = Permit only this protocol (ignoring the list already permitted), though subject to later modification by subsequent entries in the comma separated list. For example: --proto -ftps uses the default protocols, but disables ftps --proto -all,https,+http only enables http and https --proto =http,https also only enables http and https Unknown and disabled protocols produce a warning. This allows scripts to safely rely on being able to disable potentially dangerous protocols, without relying upon support for that protocol being built into curl to avoid an error. This option can be used multiple times, in which case the effect is the same as concatenating the protocols into one instance of the option. Example: curl --proto =http,https,sftp https://example.com See also --proto-redir and --proto-default. --proxy-anyauth Tells curl to pick a suitable authentication method when communicating with the given HTTP proxy. This might cause an extra request/response round-trip. Providing --proxy-anyauth multiple times has no extra effect. Example: curl --proxy-anyauth --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-basic and --proxy-digest. --proxy-basic Tells curl to use HTTP Basic authentication when communicating with the given proxy. Use --basic for enabling HTTP Basic with a remote host. Basic is the default authentication method curl uses with proxies. Providing --proxy-basic multiple times has no extra effect. Example: curl --proxy-basic --proxy-user user:passwd -x proxy https://example.com See also -x, --proxy, --proxy-anyauth and --proxy-digest. --proxy-ca-native (TLS) Tells curl to use the CA store from the native operating system to verify the HTTPS proxy. 
By default, curl uses a CA store provided in a single file or directory, but when using this option it interfaces the operating system's own vault.

This option works for curl on Windows when built to use OpenSSL, wolfSSL (added in 8.3.0) or GnuTLS (added in 8.5.0). When curl on Windows is built to use Schannel, this feature is implied and curl then only uses the native CA store.

Providing --proxy-ca-native multiple times has no extra effect. Disable it again with --no-proxy-ca-native.

Example:
 curl --proxy-ca-native -x https://proxy https://example.com

See also --cacert, --capath and -k, --insecure. Added in 8.2.0.

--proxy-cacert <file>
Same as --cacert but used in HTTPS proxy context.

If --proxy-cacert is provided several times, the last set value is used.

Example:
 curl --proxy-cacert CA-file.txt -x https://proxy https://example.com

See also --proxy-capath, --cacert, --capath and -x, --proxy. Added in 7.52.0.

--proxy-capath <dir>
Same as --capath but used in HTTPS proxy context.

If --proxy-capath is provided several times, the last set value is used.

Example:
 curl --proxy-capath /local/directory -x https://proxy https://example.com

See also --proxy-cacert, -x, --proxy and --capath. Added in 7.52.0.

--proxy-cert-type <type>
Same as --cert-type but used in HTTPS proxy context.

If --proxy-cert-type is provided several times, the last set value is used.

Example:
 curl --proxy-cert-type PEM --proxy-cert file -x https://proxy https://example.com

See also --proxy-cert. Added in 7.52.0.

--proxy-cert <cert[:passwd]>
Same as -E, --cert but used in HTTPS proxy context.

If --proxy-cert is provided several times, the last set value is used.

Example:
 curl --proxy-cert file -x https://proxy https://example.com

See also --proxy-cert-type. Added in 7.52.0.

--proxy-ciphers <list>
Same as --ciphers but used in HTTPS proxy context. Specifies which ciphers to use in the connection to the HTTPS proxy. The list of ciphers must specify valid ciphers.
Read up on SSL cipher list details on this URL: https://curl.se/docs/ssl-ciphers.html

If --proxy-ciphers is provided several times, the last set value is used.

Example:
 curl --proxy-ciphers ECDHE-ECDSA-AES256-CCM8 -x https://proxy https://example.com

See also --ciphers, --curves and -x, --proxy. Added in 7.52.0.

--proxy-crlfile <file>
Same as --crlfile but used in HTTPS proxy context.

If --proxy-crlfile is provided several times, the last set value is used.

Example:
 curl --proxy-crlfile rejects.txt -x https://proxy https://example.com

See also --crlfile and -x, --proxy. Added in 7.52.0.

--proxy-digest
Tells curl to use HTTP Digest authentication when communicating with the given proxy. Use --digest for enabling HTTP Digest with a remote host.

Providing --proxy-digest multiple times has no extra effect.

Example:
 curl --proxy-digest --proxy-user user:passwd -x proxy https://example.com

See also -x, --proxy, --proxy-anyauth and --proxy-basic.

--proxy-header <header/@file>
(HTTP) Extra header to include in the request when sending HTTP to a proxy. You may specify any number of extra headers. This is the equivalent option to -H, --header but is for proxy communication only, such as in CONNECT requests when you want a separate header sent to the proxy from what is sent to the actual remote host.

curl makes sure that each header you add/replace is sent with the proper end-of-line marker. You should thus not add that as a part of the header content: do not add newlines or carriage returns, they only mess things up for you.

Headers specified with this option are not included in requests that curl knows are not sent to a proxy.

This option can take an argument in @filename style, which then adds a header for each line in the input file (added in 7.55.0). Using @- makes curl read the headers from stdin.

This option can be used multiple times to add/replace/remove multiple headers.
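As an illustration of the @filename form, here is a minimal shell sketch that builds a header file and counts its entries; the proxy host in the comment is hypothetical, so the curl invocation is shown but not executed:

```shell
# Build a header file for use with --proxy-header @file; each line in the
# file becomes one extra header sent only to the proxy.
hdrs=$(mktemp)
printf 'X-Trace-Id: abc123\nProxy-Connection: keep-alive\n' > "$hdrs"
count=$(wc -l < "$hdrs" | tr -d ' ')
echo "$count proxy headers prepared"
# Hypothetical usage (proxy.internal is a placeholder, not a real host):
#   curl --proxy-header @"$hdrs" -x http://proxy.internal https://example.com
rm -f "$hdrs"
```

Keeping proxy-only headers in a file this way also keeps them out of shell history and process listings.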
--proxy-header can be used several times in a command line.

Examples:
 curl --proxy-header "X-First-Name: Joe" -x http://proxy https://example.com
 curl --proxy-header "User-Agent: surprise" -x http://proxy https://example.com
 curl --proxy-header "Host:" -x http://proxy https://example.com

See also -x, --proxy.

--proxy-http2
(HTTP) Tells curl to try to negotiate HTTP version 2 with an HTTPS proxy. The proxy might still only offer HTTP/1 and then curl sticks to using that version.

This has no effect for any other kinds of proxies.

Providing --proxy-http2 multiple times has no extra effect. Disable it again with --no-proxy-http2.

Example:
 curl --proxy-http2 -x proxy https://example.com

See also -x, --proxy. --proxy-http2 requires that the underlying libcurl was built to support HTTP/2. Added in 8.1.0.

--proxy-insecure
Same as -k, --insecure but used in HTTPS proxy context.

Providing --proxy-insecure multiple times has no extra effect. Disable it again with --no-proxy-insecure.

Example:
 curl --proxy-insecure -x https://proxy https://example.com

See also -x, --proxy and -k, --insecure. Added in 7.52.0.

--proxy-key-type <type>
Same as --key-type but used in HTTPS proxy context.

If --proxy-key-type is provided several times, the last set value is used.

Example:
 curl --proxy-key-type DER --proxy-key here -x https://proxy https://example.com

See also --proxy-key and -x, --proxy. Added in 7.52.0.

--proxy-key <key>
Same as --key but used in HTTPS proxy context.

If --proxy-key is provided several times, the last set value is used.

Example:
 curl --proxy-key here -x https://proxy https://example.com

See also --proxy-key-type and -x, --proxy. Added in 7.52.0.

--proxy-negotiate
Tells curl to use HTTP Negotiate (SPNEGO) authentication when communicating with the given proxy. Use --negotiate for enabling HTTP Negotiate (SPNEGO) with a remote host.

Providing --proxy-negotiate multiple times has no extra effect.
Example: curl --proxy-negotiate --proxy-user user:passwd -x proxy https://example.com See also --proxy-anyauth and --proxy-basic. --proxy-ntlm Tells curl to use HTTP NTLM authentication when communicating with the given proxy. Use --ntlm for enabling NTLM with a remote host. Providing --proxy-ntlm multiple times has no extra effect. Example: curl --proxy-ntlm --proxy-user user:passwd -x http://proxy https://example.com See also --proxy-negotiate and --proxy-anyauth. --proxy-pass <phrase> Same as --pass but used in HTTPS proxy context. If --proxy-pass is provided several times, the last set value is used. Example: curl --proxy-pass secret --proxy-key here -x https://proxy https://example.com See also -x, --proxy and --proxy-key. Added in 7.52.0. --proxy-pinnedpubkey <hashes> (TLS) Tells curl to use the specified public key file (or hashes) to verify the proxy. This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by 'sha256//' and separated by ';'. When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key provided to this option, curl aborts the connection before sending or receiving any data. If --proxy-pinnedpubkey is provided several times, the last set value is used. Examples: curl --proxy-pinnedpubkey keyfile https://example.com curl --proxy-pinnedpubkey 'sha256//ce118b51897f4452dc' https://example.com See also --pinnedpubkey and -x, --proxy. Added in 7.59.0. --proxy-service-name <name> This option allows you to change the service name for proxy negotiation. If --proxy-service-name is provided several times, the last set value is used. Example: curl --proxy-service-name "shrubbery" -x proxy https://example.com See also --service-name and -x, --proxy. --proxy-ssl-allow-beast Same as --ssl-allow-beast but used in HTTPS proxy context. 
Providing --proxy-ssl-allow-beast multiple times has no extra effect. Disable it again with --no-proxy-ssl-allow-beast.

Example:
 curl --proxy-ssl-allow-beast -x https://proxy https://example.com

See also --ssl-allow-beast and -x, --proxy. Added in 7.52.0.

--proxy-ssl-auto-client-cert
Same as --ssl-auto-client-cert but used in HTTPS proxy context.

Providing --proxy-ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-proxy-ssl-auto-client-cert.

Example:
 curl --proxy-ssl-auto-client-cert -x https://proxy https://example.com

See also --ssl-auto-client-cert and -x, --proxy. Added in 7.77.0.

--proxy-tls13-ciphers <ciphersuite list>
(TLS) Specifies which cipher suites to use in the connection to your HTTPS proxy when it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html

This option is currently used only when curl is built to use OpenSSL 1.1.1 or later. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --proxy-ciphers option.

If --proxy-tls13-ciphers is provided several times, the last set value is used.

Example:
 curl --proxy-tls13-ciphers TLS_AES_128_GCM_SHA256 -x proxy https://example.com

See also --tls13-ciphers, --curves and --proxy-ciphers. Added in 7.61.0.

--proxy-tlsauthtype <type>
Same as --tlsauthtype but used in HTTPS proxy context.

If --proxy-tlsauthtype is provided several times, the last set value is used.

Example:
 curl --proxy-tlsauthtype SRP -x https://proxy https://example.com

See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0.

--proxy-tlspassword <string>
Same as --tlspassword but used in HTTPS proxy context.

If --proxy-tlspassword is provided several times, the last set value is used.

Example:
 curl --proxy-tlspassword passwd -x https://proxy https://example.com

See also -x, --proxy and --proxy-tlsuser. Added in 7.52.0.
--proxy-tlsuser <name>
Same as --tlsuser but used in HTTPS proxy context.

If --proxy-tlsuser is provided several times, the last set value is used.

Example:
 curl --proxy-tlsuser smith -x https://proxy https://example.com

See also -x, --proxy and --proxy-tlspassword. Added in 7.52.0.

--proxy-tlsv1
Same as -1, --tlsv1 but used in HTTPS proxy context.

Providing --proxy-tlsv1 multiple times has no extra effect.

Example:
 curl --proxy-tlsv1 -x https://proxy https://example.com

See also -x, --proxy. Added in 7.52.0.

-U, --proxy-user <user:password>
Specify the user name and password to use for proxy authentication.

If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-U :".

On systems where it works, curl hides the given option argument from process listings. This is not enough to protect credentials from possibly getting seen by other users on the same system, as they are still visible for a moment before being cleared. Such sensitive data should be retrieved from a file instead, or similar, and never used in clear text in a command line.

If --proxy-user is provided several times, the last set value is used.

Example:
 curl --proxy-user name:pwd -x proxy https://example.com

See also --proxy-pass.

-x, --proxy [protocol://]host[:port]
Use the specified proxy.

The proxy string can be specified with a protocol:// prefix. If no protocol is specified, or http:// is used, the proxy is treated as an HTTP proxy. Use socks4://, socks4a://, socks5:// or socks5h:// to request a specific SOCKS version to be used.

Unix domain sockets are supported for SOCKS proxies. Set localhost for the host part. e.g. socks5h://localhost/path/to/socket.sock

HTTPS proxy support is enabled with the https:// protocol prefix for OpenSSL and GnuTLS (added in 7.52.0). It also works for BearSSL, mbedTLS, rustls, Schannel, Secure Transport and wolfSSL (added in 7.87.0).
Unrecognized and unsupported proxy protocols cause an error (added in 7.52.0). Ancient curl versions ignored unknown schemes and used http:// instead.

If the port number is not specified in the proxy string, it is assumed to be 1080.

This option overrides existing environment variables that set the proxy to use. If there is an environment variable setting a proxy, you can set proxy to "" to override it.

All operations that are performed over an HTTP proxy are transparently converted to HTTP. It means that certain protocol specific operations might not be available. This is not the case if you can tunnel through the proxy, as done with the -p, --proxytunnel option.

User and password that might be provided in the proxy string are URL decoded by curl. This allows you to pass in special characters such as @ by using %40 or pass in a colon with %3a.

The proxy host can be specified the same way as the proxy environment variables, including the protocol prefix (http://) and the embedded user + password.

When a proxy is used, the active FTP mode as set with -P, --ftp-port, cannot be used.

If --proxy is provided several times, the last set value is used.

Example:
 curl --proxy http://proxy.example https://example.com

See also --socks5 and --proxy-basic.

--proxy1.0 <host[:port]>
Use the specified HTTP 1.0 proxy. If the port number is not specified, it is assumed to be 1080.

The only difference between this and the HTTP proxy option -x, --proxy, is that attempts to use CONNECT through the proxy use the HTTP 1.0 protocol instead of the default HTTP 1.1.

Providing --proxy1.0 multiple times has no extra effect.

Example:
 curl --proxy1.0 -x http://proxy https://example.com

See also -x, --proxy, --socks5 and --preproxy.

-p, --proxytunnel
When an HTTP proxy is used (-x, --proxy), this option makes curl tunnel the traffic through the proxy.
The tunnel approach is made with the HTTP proxy CONNECT request and requires that the proxy allows direct connect to the remote port number curl wants to tunnel through to. To suppress proxy CONNECT response headers when curl is set to output headers use --suppress-connect-headers. Providing --proxytunnel multiple times has no extra effect. Disable it again with --no-proxytunnel. Example: curl --proxytunnel -x http://proxy https://example.com See also -x, --proxy. --pubkey <key> (SFTP SCP) Public key file name. Allows you to provide your public key in this separate file. curl attempts to automatically extract the public key from the private key file, so passing this option is generally not required. Note that this public key extraction requires libcurl to be linked against a copy of libssh2 1.2.8 or higher that is itself linked against OpenSSL. If --pubkey is provided several times, the last set value is used. Example: curl --pubkey file.pub sftp://example.com/ See also --pass. -Q, --quote <command> (FTP SFTP) Send an arbitrary command to the remote FTP or SFTP server. Quote commands are sent BEFORE the transfer takes place (just after the initial PWD command in an FTP transfer, to be exact). To make commands take place after a successful transfer, prefix them with a dash '-'. (FTP only) To make commands be sent after curl has changed the working directory, just before the file transfer command(s), prefix the command with a '+'. This is not performed when a directory listing is performed. You may specify any number of commands. By default curl stops at first failure. To make curl continue even if the command fails, prefix the command with an asterisk (*). Otherwise, if the server returns failure for one of the commands, the entire operation is aborted. You must send syntactically correct FTP commands as RFC 959 defines to FTP servers, or one of the commands listed below to SFTP servers. SFTP is a binary protocol. 
Unlike for FTP, curl interprets SFTP quote commands itself before sending them to the server. File names may be quoted shell-style to embed spaces or special characters. Following is the list of all supported SFTP quote commands:

atime date file
  The atime command sets the last access time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0)

chgrp group file
  The chgrp command sets the group ID of the file named by the file operand to the group ID specified by the group operand. The group operand is a decimal integer group ID.

chmod mode file
  The chmod command modifies the file mode bits of the specified file. The mode operand is an octal integer mode number.

chown user file
  The chown command sets the owner of the file named by the file operand to the user ID specified by the user operand. The user operand is a decimal integer user ID.

ln source_file target_file
  The ln and symlink commands create a symbolic link at the target_file location pointing to the source_file location.

mkdir directory_name
  The mkdir command creates the directory named by the directory_name operand.

mtime date file
  The mtime command sets the last modification time of the file named by the file operand. The <date expression> can be all sorts of date strings, see the curl_getdate(3) man page for date expression details. (Added in 7.73.0)

pwd
  The pwd command returns the absolute path name of the current working directory.

rename source target
  The rename command renames the file or directory named by the source operand to the destination path named by the target operand.

rm file
  The rm command removes the file specified by the file operand.

rmdir directory
  The rmdir command removes the directory entry specified by the directory operand, provided it is empty.

symlink source_file target_file
  See ln.
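The '-', '+' and '*' prefixes described for -Q, --quote can be summarized with a small shell sketch. This is illustrative only, not curl's own implementation:

```shell
# Classify a quote command by its prefix, per the rules described above.
# Illustrative sketch only; curl implements this internally in C.
phase() {
  case $1 in
    \**) echo "tolerate-failure" ;;   # '*' prefix: a failure does not abort
    -*)  echo "after-transfer" ;;     # '-' prefix: sent after a successful transfer
    +*)  echo "after-chdir" ;;        # '+' prefix (FTP): sent after changing directory
    *)   echo "before-transfer" ;;    # default: sent before the transfer
  esac
}
phase 'rm old.log'            # before-transfer
phase '-rename a.tmp a.dat'   # after-transfer
phase '*mkdir incoming'       # tolerate-failure
```

Note that the prefix is stripped before the command is sent, so '*mkdir incoming' still sends a plain mkdir to the server.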
--quote can be used several times in a command line.

Example:
 curl --quote "DELE file" ftp://example.com/foo

See also -X, --request.

--random-file <file>
Deprecated option. This option is ignored (added in 7.84.0). Prior to that it only had an effect on curl if built to use old versions of OpenSSL.

Specify the path name to a file containing random data. The data may be used to seed the random engine for SSL connections.

If --random-file is provided several times, the last set value is used.

Example:
 curl --random-file rubbish https://example.com

See also --egd-file.

-r, --range <range>
(HTTP FTP SFTP FILE) Retrieve a byte range (i.e. a partial document) from an HTTP/1.1, FTP or SFTP server or a local FILE. Ranges can be specified in a number of ways.

0-499           specifies the first 500 bytes
500-999         specifies the second 500 bytes
-500            specifies the last 500 bytes
9500-           specifies the bytes from offset 9500 and forward
0-0,-1          specifies the first and last byte only (*) (HTTP)
100-199,500-599 specifies two separate 100-byte ranges (*) (HTTP)

(*) = NOTE that these make the server reply with a multipart response, which is returned as-is by curl! Parsing or otherwise transforming this response is the responsibility of the caller.

Only digit characters (0-9) are valid in the 'start' and 'stop' fields of the 'start-stop' range syntax. If a non-digit character is given in the range, the server's response is unspecified, depending on the server's configuration.

Many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, curl instead gets the whole document.

FTP and SFTP range downloads only support the simple 'start-stop' syntax (optionally with one of the numbers omitted). FTP use depends on the extended FTP command SIZE.

If --range is provided several times, the last set value is used.

Example:
 curl --range 22-44 https://example.com

See also -C, --continue-at and -a, --append.
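The inclusive 'start-stop' arithmetic can be checked locally without any server; the sketch below reproduces two of the ranges listed for -r, --range against a plain 1000-byte file:

```shell
# Reproduce two byte ranges against a local file (no server needed).
printf '%01000d' 0 > doc.bin                                          # 1000-byte file
first=$(dd if=doc.bin bs=1 count=500 2>/dev/null | wc -c | tr -d ' ') # like 0-499
last=$(tail -c 500 doc.bin | wc -c | tr -d ' ')                       # like -500
echo "first=$first last=$last"
rm -f doc.bin
```

Both ranges cover exactly 500 bytes, since both endpoints of a 'start-stop' range are inclusive.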
--rate <max request rate>
Specify the maximum transfer frequency you allow curl to use - in number of transfer starts per time unit (sometimes called request rate). Without this option, curl starts the next transfer as fast as possible.

If given several URLs and a transfer completes faster than the allowed rate, curl waits before the next transfer is started, to maintain the requested rate. This option has no effect when -Z, --parallel is used.

The request rate is provided as "N/U" where N is an integer number and U is a time unit. Supported units are 's' (second), 'm' (minute), 'h' (hour) and 'd' (day, as in a 24 hour unit). The default time unit, if no "/U" is provided, is number of transfers per hour.

If curl is told to allow 10 requests per minute, it does not start the next request until 6 seconds have elapsed since the previous transfer was started.

This function uses millisecond resolution. If the allowed frequency is set to more than 1000 per second, it instead runs unrestricted.

When retrying transfers, enabled with --retry, the separate retry delay logic is used and not this setting.

This option is global and does not need to be specified for each use of --next.

If --rate is provided several times, the last set value is used.

Examples:
 curl --rate 2/s https://example.com ...
 curl --rate 3/h https://example.com ...
 curl --rate 14/m https://example.com ...

See also --limit-rate and --retry-delay. Added in 7.84.0.

--raw
(HTTP) When used, it disables all internal HTTP decoding of content or transfer encodings and instead makes curl pass them on unaltered, raw.

Providing --raw multiple times has no extra effect. Disable it again with --no-raw.

Example:
 curl --raw https://example.com

See also --tr-encoding.

-e, --referer <URL>
(HTTP) Sends the "Referrer Page" information to the HTTP server. This can also be set with the -H, --header flag of course.
When used with -L, --location you can append ";auto" to the -e, --referer URL to make curl automatically set the previous URL when it follows a Location: header. The ";auto" string can be used alone, even if you do not set an initial -e, --referer. If --referer is provided several times, the last set value is used. Examples: curl --referer "https://fake.example" https://example.com curl --referer "https://fake.example;auto" -L https://example.com curl --referer ";auto" -L https://example.com See also -A, --user-agent and -H, --header. -J, --remote-header-name (HTTP) This option tells the -O, --remote-name option to use the server-specified Content-Disposition filename instead of extracting a filename from the URL. If the server-provided file name contains a path, that is stripped off before the file name is used. The file is saved in the current directory, or in the directory specified with --output-dir. If the server specifies a file name and a file with that name already exists in the destination directory, it is not overwritten and an error occurs - unless you allow it by using the --clobber option. If the server does not specify a file name then this option has no effect. There is no attempt to decode %-sequences (yet) in the provided file name, so this option may provide you with rather unexpected file names. This feature uses the name from the "filename" field, it does not yet support the "filename*" field (filenames with explicit character sets). WARNING: Exercise judicious use of this option, especially on Windows. A rogue server could send you the name of a DLL or other file that could be loaded automatically by Windows or some third party software. Providing --remote-header-name multiple times has no extra effect. Disable it again with --no-remote-header-name. Example: curl -OJ https://example.com/file See also -O, --remote-name. 
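The path-stripping behavior of -J, --remote-header-name can be sketched in shell; this is a simplified illustration, not curl's actual (stricter) Content-Disposition parser:

```shell
# Simplified sketch: only the final path component of a server-supplied
# filename is kept, which is the -J behavior described above.
cd_value='attachment; filename="../../tmp/report.pdf"'
name=${cd_value#*filename=}      # strip everything up to filename=
name=${name%\"}; name=${name#\"} # strip surrounding quotes
safe=${name##*/}                 # drop any leading path components
echo "$safe"                     # report.pdf
```

Stripping the path is what prevents a hostile server from writing outside the download directory; the DLL-planting warning above is about the file name itself, which this stripping does not address.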
--remote-name-all This option changes the default action for all given URLs to be dealt with as if -O, --remote-name were used for each one. So if you want to disable that for a specific URL after --remote-name-all has been used, you must use "-o -" or --no-remote-name. Providing --remote-name-all multiple times has no extra effect. Disable it again with --no-remote-name-all. Example: curl --remote-name-all ftp://example.com/file1 ftp://example.com/file2 See also -O, --remote-name. -O, --remote-name Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.) The file is saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before invoking curl with this option or use --output-dir. The remote file name to use for saving is extracted from the given URL, nothing else, and if it already exists it is overwritten. If you want the server to be able to choose the file name refer to -J, --remote-header-name which can be used in addition to this option. If the server chooses a file name and that name already exists it is not overwritten. There is no URL decoding done on the file name. If it has %20 or other URL encoded parts of the name, they end up as-is as file name. You may use this option as many times as the number of URLs you have. --remote-name can be used several times in a command line Example: curl -O https://example.com/filename See also --remote-name-all, --output-dir and -J, --remote-header-name. -R, --remote-time Makes curl attempt to figure out the timestamp of the remote file that is getting downloaded, and if that is available make the local file get that same timestamp. Providing --remote-time multiple times has no extra effect. Disable it again with --no-remote-time. Example: curl --remote-time -o foo https://example.com See also -O, --remote-name and -z, --time-cond. 
--remove-on-error
When curl returns an error when told to save output in a local file, this option removes that saved file before exiting. This prevents curl from leaving a partial file in the case of an error during transfer.

If the output is not a regular file, this option has no effect.

Providing --remove-on-error multiple times has no extra effect. Disable it again with --no-remove-on-error.

Example:
 curl --remove-on-error -o output https://example.com

See also -f, --fail. Added in 7.83.0.

--request-target <path>
(HTTP) Tells curl to use an alternative "target" (path) instead of using the path as provided in the URL. Particularly useful when wanting to issue HTTP requests without leading slash or other data that does not follow the regular URL pattern, like "OPTIONS *".

curl passes on the verbatim string you give it in the request without any filter or other safeguards. That includes white space and control characters.

If --request-target is provided several times, the last set value is used.

Example:
 curl --request-target "*" -X OPTIONS https://example.com

See also -X, --request. Added in 7.55.0.

-X, --request <method>
Change the method to use when starting the transfer.

curl passes on the verbatim string you give it in the request without any filter or other safeguards. That includes white space and control characters.

HTTP
Specifies a custom request method to use when communicating with the HTTP server. The specified request method is used instead of the method otherwise used (which defaults to GET). Read the HTTP 1.1 specification for details and explanations. Common additional HTTP requests include PUT and DELETE, while related technologies like WebDAV offer PROPFIND, COPY, MOVE and more.

Normally you do not need this option. All sorts of GET, HEAD, POST and PUT requests are rather invoked by using dedicated command line options.

This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves.
So for example if you want to make a proper HEAD request, using -X HEAD does not suffice. You need to use the -I, --head option.

The method string you set with -X, --request is used for all requests, which, if you for example use -L, --location, may cause unintended side-effects when curl does not change request method according to the HTTP 30x response codes - and similar.

FTP
Specifies a custom FTP command to use instead of LIST when doing file lists with FTP.

POP3
Specifies a custom POP3 command to use instead of LIST or RETR.

IMAP
Specifies a custom IMAP command to use instead of LIST.

SMTP
Specifies a custom SMTP command to use instead of HELP or VRFY.

If --request is provided several times, the last set value is used.

Examples:
 curl -X "DELETE" https://example.com
 curl -X NLST ftp://example.com/

See also --request-target.

--resolve <[+]host:port:addr[,addr]...>
Provide a custom address for a specific host and port pair. Using this, you can make the curl request(s) use a specified address and prevent the otherwise normally resolved address from being used. Consider it a sort of /etc/hosts alternative provided on the command line.

The port number should be the number used for the specific protocol the host is used for. It means you need several entries if you want to provide addresses for the same host but different ports.

By specifying '*' as host you can tell curl to resolve any host and specific port pair to the specified address. Wildcard is resolved last, so any --resolve with a specific host and port is used first.

The provided address set by this option is used even if -4, --ipv4 or -6, --ipv6 is set to make curl use another IP version.

By prefixing the host with a '+' you can make the entry time out after curl's default timeout (1 minute). Note that this only makes sense for long running parallel transfers with a lot of files. In such cases, if this option is used curl tries to resolve the host as it normally would once the timeout has expired.
Support for providing the IP address within [brackets] was added in 7.57.0. Support for providing multiple IP addresses per entry was added in 7.59.0. Support for resolving with wildcard was added in 7.64.0. Support for the '+' prefix was added in 7.75.0.

--resolve can be used several times in a command line.

Example:
 curl --resolve example.com:443:127.0.0.1 https://example.com

See also --connect-to and --alt-svc.

--retry-all-errors
Retry on any error. This option is used together with --retry.

This option is the "sledgehammer" of retrying. Do not use this option by default (for example in your curlrc), there may be unintended consequences such as sending or receiving duplicate data. Do not use with redirected input or output. You'd be much better off handling your unique problems in shell script. Please read the example below.

WARNING: For server compatibility curl attempts to retry failed flaky transfers as close as possible to how they were started, but this is not possible with redirected input or output. For example, before retrying it removes output data from a failed partial transfer that was written to an output file. However this is not true of data redirected to a | pipe or > file, which are not reset. We strongly suggest you do not parse or record output via redirect in combination with this option, since you may receive duplicate data.

By default curl does not return error for transfers with an HTTP response code that indicates an HTTP error, if the transfer was successful. For example, if a server replies 404 Not Found and the reply is fully received then that is not an error. When --retry is used then curl retries on some HTTP response codes that indicate transient HTTP errors, but that does not include most 4xx response codes such as 404. If you want to retry on all response codes that indicate HTTP errors (4xx and 5xx) then combine with -f, --fail.

Providing --retry-all-errors multiple times has no extra effect.
Disable it again with --no-retry-all-errors.

Example:
 curl --retry 5 --retry-all-errors https://example.com

See also --retry. Added in 7.71.0.

--retry-connrefused
In addition to the other conditions, consider ECONNREFUSED as a transient error too for --retry. This option is used together with --retry.

Providing --retry-connrefused multiple times has no extra effect. Disable it again with --no-retry-connrefused.

Example:
 curl --retry-connrefused --retry 7 https://example.com

See also --retry and --retry-all-errors. Added in 7.52.0.

--retry-delay <seconds>
Make curl sleep this amount of time before each retry when a transfer has failed with a transient error (it changes the default backoff time algorithm between retries). This option is only interesting if --retry is also used. Setting this delay to zero makes curl use the default backoff time.

If --retry-delay is provided several times, the last set value is used.

Example:
 curl --retry-delay 5 --retry 7 https://example.com

See also --retry.

--retry-max-time <seconds>
The retry timer is reset before the first transfer attempt. Retries are done as usual (see --retry) as long as the timer has not reached this given limit. Notice that if the timer has not reached the limit, the request is made and, while performing, it may take longer than this given time period. To limit a single request's maximum time, use -m, --max-time. Set this option to zero to never time out retries.

If --retry-max-time is provided several times, the last set value is used.

Example:
 curl --retry-max-time 30 --retry 10 https://example.com

See also --retry.

--retry <num>
If a transient error is returned when curl tries to perform a transfer, it retries this number of times before giving up. Setting the number to 0 makes curl do no retries (which is the default). Transient error means either: a timeout, an FTP 4xx response code or an HTTP 408, 429, 500, 502, 503 or 504 response code.
When curl is about to retry a transfer, it first waits one second and then for all forthcoming retries it doubles the waiting time until it reaches 10 minutes, which then remains the delay for the rest of the retries. By using --retry-delay you disable this exponential backoff algorithm. See also --retry-max-time to limit the total time allowed for retries. curl complies with the Retry-After: response header if one was present to know when to issue the next retry (added in 7.66.0). If --retry is provided several times, the last set value is used. Example: curl --retry 7 https://example.com See also --retry-max-time. --sasl-authzid <identity> Use this authorization identity (authzid), during SASL PLAIN authentication, in addition to the authentication identity (authcid) as specified by -u, --user. If the option is not specified, the server derives the authzid from the authcid, but if specified, and depending on the server implementation, it may be used to access another user's inbox, that the user has been granted access to, or a shared mailbox for example. If --sasl-authzid is provided several times, the last set value is used. Example: curl --sasl-authzid zid imap://example.com/ See also --login-options. Added in 7.66.0. --sasl-ir Enable initial response in SASL authentication. Providing --sasl-ir multiple times has no extra effect. Disable it again with --no-sasl-ir. Example: curl --sasl-ir imap://example.com/ See also --sasl-authzid. --service-name <name> This option allows you to change the service name for SPNEGO. If --service-name is provided several times, the last set value is used. Example: curl --service-name sockd/server https://example.com See also --negotiate and --proxy-service-name. -S, --show-error When used with -s, --silent, it makes curl show an error message if it fails. This option is global and does not need to be specified for each use of --next. Providing --show-error multiple times has no extra effect. Disable it again with --no-show-error.
Example: curl --show-error --silent https://example.com See also --no-progress-meter. -s, --silent Silent or quiet mode. Do not show progress meter or error messages. Makes curl mute. It still outputs the data you ask for, potentially even to the terminal/stdout unless you redirect it. Use -S, --show-error in addition to this option to disable progress meter but still show error messages. Providing --silent multiple times has no extra effect. Disable it again with --no-silent. Example: curl -s https://example.com See also -v, --verbose, --stderr and --no-progress-meter. --socks4 <host[:port]> Use the specified SOCKS4 proxy. If the port number is not specified, it is assumed to be 1080. Using this socket type makes curl resolve the host name and pass the address on to the proxy. To specify proxy on a unix domain socket, use localhost for host, e.g. "socks4://localhost/path/to/socket.sock" This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks4 proxy with -x, --proxy using a socks4:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4 is provided several times, the last set value is used. Example: curl --socks4 hostname:4096 https://example.com See also --socks4a, --socks5 and --socks5-hostname. --socks4a <host[:port]> Use the specified SOCKS4a proxy. If the port number is not specified, it is assumed to be 1080. This asks the proxy to resolve the host name. To specify proxy on a unix domain socket, use localhost for host, e.g. "socks4a://localhost/path/to/socket.sock" This option overrides any previous use of -x, --proxy, as they are mutually exclusive.
This option is superfluous since you can specify a socks4a proxy with -x, --proxy using a socks4a:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks4a is provided several times, the last set value is used. Example: curl --socks4a hostname:4096 https://example.com See also --socks4, --socks5 and --socks5-hostname. --socks5-basic Tells curl to use username/password authentication when connecting to a SOCKS5 proxy. The username/password authentication is enabled by default. Use --socks5-gssapi to force GSS-API authentication to SOCKS5 proxies. Providing --socks5-basic multiple times has no extra effect. Example: curl --socks5-basic --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0. --socks5-gssapi-nec As part of the GSS-API negotiation a protection mode is negotiated. RFC 1961 says in section 4.3/4.4 it should be protected, but the NEC reference implementation does not. The option --socks5-gssapi-nec allows the unprotected exchange of the protection mode negotiation. Providing --socks5-gssapi-nec multiple times has no extra effect. Disable it again with --no-socks5-gssapi-nec. Example: curl --socks5-gssapi-nec --socks5 hostname:4096 https://example.com See also --socks5. --socks5-gssapi-service <name> The default service name for a socks server is rcmd/server-fqdn. This option allows you to change it. If --socks5-gssapi-service is provided several times, the last set value is used. Example: curl --socks5-gssapi-service sockd --socks5 hostname:4096 https://example.com See also --socks5. --socks5-gssapi Tells curl to use GSS-API authentication when connecting to a SOCKS5 proxy. The GSS-API authentication is enabled by default (if curl is compiled with GSS-API support). 
Use --socks5-basic to force username/password authentication to SOCKS5 proxies. Providing --socks5-gssapi multiple times has no extra effect. Disable it again with --no-socks5-gssapi. Example: curl --socks5-gssapi --socks5 hostname:4096 https://example.com See also --socks5. Added in 7.55.0. --socks5-hostname <host[:port]> Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If the port number is not specified, it is assumed to be 1080. To specify proxy on a unix domain socket, use localhost for host, e.g. "socks5h://localhost/path/to/socket.sock" This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 hostname proxy with -x, --proxy using a socks5h:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy. If --socks5-hostname is provided several times, the last set value is used. Example: curl --socks5-hostname proxy.example:7000 https://example.com See also --socks5 and --socks4a. --socks5 <host[:port]> Use the specified SOCKS5 proxy - but resolve the host name locally. If the port number is not specified, it is assumed to be 1080. To specify proxy on a unix domain socket, use localhost for host, e.g. "socks5://localhost/path/to/socket.sock" This option overrides any previous use of -x, --proxy, as they are mutually exclusive. This option is superfluous since you can specify a socks5 proxy with -x, --proxy using a socks5:// protocol prefix. --preproxy can be used to specify a SOCKS proxy at the same time -x, --proxy is used with an HTTP/HTTPS proxy (added in 7.52.0). In such a case, curl first connects to the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS proxy.
This option (as well as --socks4) does not work with IPv6, FTPS or LDAP. If --socks5 is provided several times, the last set value is used. Example: curl --socks5 proxy.example:7000 https://example.com See also --socks5-hostname and --socks4a. -Y, --speed-limit <speed> If a transfer is slower than this set speed (in bytes per second) for a given number of seconds, it gets aborted. The time period is set with -y, --speed-time and is 30 seconds by default. If --speed-limit is provided several times, the last set value is used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -y, --speed-time, --limit-rate and -m, --max-time. -y, --speed-time <seconds> If a transfer runs slower than speed-limit bytes per second during a speed-time period, the transfer is aborted. If speed-time is used, the default speed-limit is 1 unless set with -Y, --speed-limit. This option controls transfers (in both directions) but does not affect slow connects etc. If this is a concern for you, try the --connect-timeout option. If --speed-time is provided several times, the last set value is used. Example: curl --speed-limit 300 --speed-time 10 https://example.com See also -Y, --speed-limit and --limit-rate. --ssl-allow-beast (TLS) This option tells curl to not work around a security flaw in the SSL3 and TLS1.0 protocols known as BEAST. If this option is not used, the SSL layer may use workarounds known to cause interoperability problems with some older SSL implementations. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-allow-beast multiple times has no extra effect. Disable it again with --no-ssl-allow-beast. Example: curl --ssl-allow-beast https://example.com See also --proxy-ssl-allow-beast and -k, --insecure. --ssl-auto-client-cert (TLS) (Schannel) Tell libcurl to automatically locate and use a client certificate for authentication, when requested by the server.
Since the server can request any certificate that supports client authentication in the OS certificate store it could be a privacy violation and unexpected. Providing --ssl-auto-client-cert multiple times has no extra effect. Disable it again with --no-ssl-auto-client-cert. Example: curl --ssl-auto-client-cert https://example.com See also --proxy-ssl-auto-client-cert. Added in 7.77.0. --ssl-no-revoke (TLS) (Schannel) This option tells curl to disable certificate revocation checks. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that. Providing --ssl-no-revoke multiple times has no extra effect. Disable it again with --no-ssl-no-revoke. Example: curl --ssl-no-revoke https://example.com See also --crlfile. --ssl-reqd (FTP IMAP POP3 SMTP LDAP) Require SSL/TLS for the connection. Terminates the connection if the transfer cannot be upgraded to use SSL/TLS. This option is handled in LDAP (added in 7.81.0). It is fully supported by the OpenLDAP backend and rejected by the generic ldap backend if explicit TLS is required. This option is unnecessary if you use a URL scheme that in itself implies immediate and implicit use of TLS, like for FTPS, IMAPS, POP3S, SMTPS and LDAPS. Such a transfer always fails if the TLS handshake does not work. This option was formerly known as --ftp-ssl-reqd. Providing --ssl-reqd multiple times has no extra effect. Disable it again with --no-ssl-reqd. Example: curl --ssl-reqd ftp://example.com See also --ssl and -k, --insecure. --ssl-revoke-best-effort (TLS) (Schannel) This option tells curl to ignore certificate revocation checks when they failed due to missing/offline distribution points for the revocation check lists. Providing --ssl-revoke-best-effort multiple times has no extra effect. Disable it again with --no-ssl-revoke-best-effort. Example: curl --ssl-revoke-best-effort https://example.com See also --crlfile and -k, --insecure. Added in 7.70.0. 
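For scripted mail transfers, the upgrade requirement described under --ssl-reqd above can also be made the default via a config file. A hypothetical ~/.curlrc fragment (not taken from this manual; config files accept long option names without leading dashes):

```
# ~/.curlrc (sketch): always require the connection to upgrade to TLS,
# and fail on HTTP error responses
ssl-reqd
fail
```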
--ssl (FTP IMAP POP3 SMTP LDAP) Warning: this is considered an insecure option. Consider using --ssl-reqd instead to be sure curl upgrades to a secure connection. Try to use SSL/TLS for the connection. Reverts to a non-secure connection if the server does not support SSL/TLS. See also --ftp-ssl-control and --ssl-reqd for different levels of encryption required. This option is handled in LDAP (added in 7.81.0). It is fully supported by the OpenLDAP backend and ignored by the generic ldap backend. Please note that a server may close the connection if the negotiation does not succeed. This option was formerly known as --ftp-ssl. That option name can still be used but might be removed in a future version. Providing --ssl multiple times has no extra effect. Disable it again with --no-ssl. Example: curl --ssl pop3://example.com/ See also --ssl-reqd, -k, --insecure and --ciphers. -2, --sslv2 (SSL) This option previously asked curl to use SSLv2, but is now ignored (added in 7.77.0). SSLv2 is widely considered insecure (see RFC 6176). Providing --sslv2 multiple times has no extra effect. Example: curl --sslv2 https://example.com See also --http1.1 and --http2. -2, --sslv2 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -3, --sslv3 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. -3, --sslv3 (SSL) This option previously asked curl to use SSLv3, but is now ignored (added in 7.77.0). SSLv3 is widely considered insecure (see RFC 7568). Providing --sslv3 multiple times has no extra effect. Example: curl --sslv3 https://example.com See also --http1.1 and --http2. -3, --sslv3 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to -2, --sslv2 and -1, --tlsv1 and --tlsv1.1 and --tlsv1.2. --stderr <file> Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it is instead written to stdout. 
This option is global and does not need to be specified for each use of --next. If --stderr is provided several times, the last set value is used. Example: curl --stderr output.txt https://example.com See also -v, --verbose and -s, --silent. --styled-output Enables the automatic use of bold font styles when writing HTTP headers to the terminal. Use --no-styled-output to switch them off. Styled output requires a terminal that supports bold fonts. This feature is not present on curl for Windows due to lack of this capability. This option is global and does not need to be specified for each use of --next. Providing --styled-output multiple times has no extra effect. Disable it again with --no-styled-output. Example: curl --styled-output -I https://example.com See also -I, --head and -v, --verbose. Added in 7.61.0. --suppress-connect-headers When -p, --proxytunnel is used and a CONNECT request is made do not output proxy CONNECT response headers. This option is meant to be used with -D, --dump-header or -i, --include which are used to show protocol headers in the output. It has no effect on debug options such as -v, --verbose or --trace, or any statistics. Providing --suppress-connect-headers multiple times has no extra effect. Disable it again with --no-suppress-connect-headers. Example: curl --suppress-connect-headers --include -x proxy https://example.com See also -D, --dump-header, -i, --include and -p, --proxytunnel. Added in 7.54.0. --tcp-fastopen Enable use of TCP Fast Open (RFC 7413). TCP Fast Open is a TCP extension that allows data to get sent earlier over the connection (before the final handshake ACK) if the client and server have been connected previously. Providing --tcp-fastopen multiple times has no extra effect. Disable it again with --no-tcp-fastopen. Example: curl --tcp-fastopen https://example.com See also --false-start. --tcp-nodelay Turn on the TCP_NODELAY option. See the curl_easy_setopt(3) man page for details about this option. 
curl sets this option by default and you need to explicitly switch it off if you do not want it on (added in 7.50.2). Providing --tcp-nodelay multiple times has no extra effect. Disable it again with --no-tcp-nodelay. Example: curl --tcp-nodelay https://example.com See also -N, --no-buffer. -t, --telnet-option <opt=val> Pass options to the telnet protocol. Supported options are: `TTYPE=<term>` Sets the terminal type. `XDISPLOC=<X display>` Sets the X display location. `NEW_ENV=<var,val>` Sets an environment variable. --telnet-option can be used several times in a command line Example: curl -t TTYPE=vt100 telnet://example.com/ See also -K, --config. --tftp-blksize <value> (TFTP) Set the TFTP BLKSIZE option (must be >512). This is the block size that curl tries to use when transferring data to or from a TFTP server. By default 512 bytes are used. If --tftp-blksize is provided several times, the last set value is used. Example: curl --tftp-blksize 1024 tftp://example.com/file See also --tftp-no-options. --tftp-no-options (TFTP) Tells curl not to send TFTP options requests. This option improves interop with some legacy servers that do not acknowledge or properly implement TFTP options. When this option is used --tftp-blksize is ignored. Providing --tftp-no-options multiple times has no extra effect. Disable it again with --no-tftp-no-options. Example: curl --tftp-no-options tftp://192.168.0.1/ See also --tftp-blksize. -z, --time-cond <time> (HTTP FTP) Request a file that has been modified later than the given time and date, or one that has been modified before that time. The <date expression> can be all sorts of date strings or, if it does not match any internal ones, it is taken as a filename and curl tries to get the modification date (mtime) from that file instead. See the curl_getdate(3) man pages for date expression details.
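For instance, a shell sketch that builds a curl_getdate-compatible timestamp and requests a file only if it changed in the last 24 hours (GNU date's -d option is assumed; the URL is a placeholder):

```shell
# Build a date string in one of the formats curl_getdate(3) accepts,
# e.g. "01 Sep 2021 12:18:00". Requires GNU date; on BSD/macOS use
# something like: date -u -v-24H '+%d %b %Y %H:%M:%S'
since=$(date -u -d '24 hours ago' '+%d %b %Y %H:%M:%S')

# Echoed rather than executed here, so the command can be inspected
# before running it against a real server:
echo curl -z "$since" -O https://example.com/file.html
```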
Start the date expression with a dash (-) to request a document that is older than the given date/time; the default is a document that is newer than the specified date/time. If provided a non-existing file, curl outputs a warning about that fact and proceeds to do the transfer without a time condition. If --time-cond is provided several times, the last set value is used. Examples: curl -z "Wed 01 Sep 2021 12:18:00" https://example.com curl -z "-Wed 01 Sep 2021 12:18:00" https://example.com curl -z file https://example.com See also --etag-compare and -R, --remote-time. --tls-max <VERSION> (TLS) VERSION defines maximum supported TLS version. The minimum acceptable version is set by tlsv1.0, tlsv1.1, tlsv1.2 or tlsv1.3. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. default Use up to recommended TLS version. 1.0 Use up to TLSv1.0. 1.1 Use up to TLSv1.1. 1.2 Use up to TLSv1.2. 1.3 Use up to TLSv1.3. If --tls-max is provided several times, the last set value is used. Examples: curl --tls-max 1.2 https://example.com curl --tls-max 1.3 --tlsv1.2 https://example.com See also --tlsv1.0, --tlsv1.1, --tlsv1.2 and --tlsv1.3. --tls-max requires that the underlying libcurl was built to support TLS. Added in 7.54.0. --tls13-ciphers <ciphersuite list> (TLS) Specifies which cipher suites to use in the connection if it negotiates TLS 1.3. The list of cipher suites must specify valid ciphers. Read up on TLS 1.3 cipher suite details on this URL: https://curl.se/docs/ssl-ciphers.html This option is currently used only when curl is built to use OpenSSL 1.1.1 or later, or Schannel. If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the --ciphers option. If --tls13-ciphers is provided several times, the last set value is used. Example: curl --tls13-ciphers TLS_AES_128_GCM_SHA256 https://example.com See also --ciphers, --curves and --proxy-tls13-ciphers. Added in 7.61.0.
--tlsauthtype <type> (TLS) Set TLS authentication type. Currently, the only supported option is "SRP", for TLS-SRP (RFC 5054). If --tlsuser and --tlspassword are specified but --tlsauthtype is not, then this option defaults to "SRP". This option works only if the underlying libcurl is built with TLS-SRP support, which requires OpenSSL or GnuTLS with TLS-SRP support. If --tlsauthtype is provided several times, the last set value is used. Example: curl --tlsauthtype SRP https://example.com See also --tlsuser. --tlspassword <string> (TLS) Set password for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlsuser also be set. This option does not work with TLS 1.3. If --tlspassword is provided several times, the last set value is used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlsuser. --tlsuser <name> (TLS) Set username for use with the TLS authentication method specified with --tlsauthtype. Requires that --tlspassword also be set. This option does not work with TLS 1.3. If --tlsuser is provided several times, the last set value is used. Example: curl --tlspassword pwd --tlsuser user https://example.com See also --tlspassword. --tlsv1.0 (TLS) Forces curl to use TLS version 1.0 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.0. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.0 multiple times has no extra effect. Example: curl --tlsv1.0 https://example.com See also --tlsv1.3. --tlsv1.1 (TLS) Forces curl to use TLS version 1.1 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.1. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.1 multiple times has no extra effect.
Example: curl --tlsv1.1 https://example.com See also --tlsv1.3 and --tls-max. --tlsv1.2 (TLS) Forces curl to use TLS version 1.2 or later when connecting to a remote TLS server. In old versions of curl this option was documented to allow _only_ TLS 1.2. That behavior was inconsistent depending on the TLS library. Use --tls-max if you want to set a maximum TLS version. Providing --tlsv1.2 multiple times has no extra effect. Example: curl --tlsv1.2 https://example.com See also --tlsv1.3 and --tls-max. --tlsv1.3 (TLS) Forces curl to use TLS version 1.3 or later when connecting to a remote TLS server. If the connection is done without TLS, this option has no effect. This includes QUIC-using (HTTP/3) transfers. Note that TLS 1.3 is not supported by all TLS backends. Providing --tlsv1.3 multiple times has no extra effect. Example: curl --tlsv1.3 https://example.com See also --tlsv1.2 and --tls-max. Added in 7.52.0. -1, --tlsv1 (TLS) Tells curl to use at least TLS version 1.x when negotiating with a remote TLS server. That means TLS version 1.0 or higher. Providing --tlsv1 multiple times has no extra effect. Example: curl --tlsv1 https://example.com See also --http1.1 and --http2. -1, --tlsv1 requires that the underlying libcurl was built to support TLS. This option is mutually exclusive to --tlsv1.1 and --tlsv1.2 and --tlsv1.3. --tr-encoding (HTTP) Request a compressed Transfer-Encoding response using one of the algorithms curl supports, and uncompress the data while receiving it. Providing --tr-encoding multiple times has no extra effect. Disable it again with --no-tr-encoding. Example: curl --tr-encoding https://example.com See also --compressed. --trace-ascii <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. This is similar to --trace, but leaves out the hex part and only shows the ASCII part of the dump.
It makes smaller output that might be easier to read for untrained humans. Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. If --trace-ascii is provided several times, the last set value is used. Example: curl --trace-ascii log.txt https://example.com See also -v, --verbose and --trace. This option is mutually exclusive to --trace and -v, --verbose. --trace-config <string> Set configuration for trace output. A comma-separated list of components for which detailed output is made available. Names are case-insensitive. Specify 'all' to enable all trace components. In addition to trace component names, specify "ids" and "time" to avoid extra --trace-ids or --trace-time parameters. See the curl_global_trace(3) man page for more details. This option is global and does not need to be specified for each use of --next. --trace-config can be used several times in a command line Example: curl --trace-config ids,http/2 https://example.com See also -v, --verbose and --trace. This option is mutually exclusive to --trace and -v, --verbose. Added in 8.3.0. --trace-ids Prepends the transfer and connection identifiers to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-ids multiple times has no extra effect. Disable it again with --no-trace-ids. Example: curl --trace-ids --trace-ascii output https://example.com See also --trace and -v, --verbose. Added in 8.2.0. --trace-time Prepends a time stamp to each trace or verbose line that curl displays. This option is global and does not need to be specified for each use of --next. Providing --trace-time multiple times has no extra effect. Disable it again with --no-trace-time.
Example: curl --trace-time --trace-ascii output https://example.com See also --trace and -v, --verbose. --trace <file> Enables a full trace dump of all incoming and outgoing data, including descriptive information, to the given output file. Use "-" as filename to have the output sent to stdout. Use "%" as filename to have the output sent to stderr. Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. If --trace is provided several times, the last set value is used. Example: curl --trace log.txt https://example.com See also --trace-ascii, --trace-config, --trace-ids and --trace-time. This option is mutually exclusive to -v, --verbose and --trace-ascii. --unix-socket <path> (HTTP) Connect through this Unix domain socket, instead of using the network. If --unix-socket is provided several times, the last set value is used. Example: curl --unix-socket socket-path https://example.com See also --abstract-unix-socket. -T, --upload-file <file> This transfers the specified local file to the remote URL. If there is no file part in the specified URL, curl appends the local file name to the end of the URL before the operation starts. You must use a trailing slash (/) on the last directory to prove to curl that there is no file name or curl thinks that your last directory name is the remote file name to use. When putting the local file name at the end of the URL, curl ignores what is on the left side of any slash (/) or backslash (\) used in the file name and only appends what is on the right side of the rightmost such character. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." 
(a single period) may be specified instead of "-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. If this option is used with an HTTP(S) URL, the PUT method is used. You can specify one -T, --upload-file for each URL on the command line. Each -T, --upload-file + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T, --upload-file argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL. When uploading to an SMTP server: the uploaded data is assumed to be RFC 5322 formatted. It has to feature the necessary set of headers and mail body formatted correctly by the user as curl does not transcode nor encode it further in any way. --upload-file can be used several times in a command line Examples: curl -T file https://example.com curl -T "img[1-1000].png" ftp://ftp.example.com/ curl --upload-file "{file1,file2}" https://example.com See also -G, --get, -I, --head, -X, --request and -d, --data. --url-query <data> (all) This option adds a piece of data, usually a name + value pair, to the end of the URL query part. The syntax is identical to that used for --data-urlencode with one extension: If the argument starts with a '+' (plus), the rest of the string is provided as-is unencoded. The query part of a URL is the one following the question mark on the right end. --url-query can be used several times in a command line Examples: curl --url-query name=val https://example.com curl --url-query =encodethis http://example.net/foo curl --url-query name@file https://example.com curl --url-query @fileonly https://example.com curl --url-query "+name=%20foo" https://example.com See also --data-urlencode and -G, --get. Added in 7.87.0. --url <url> Specify a URL to fetch. This option is mostly handy when you want to specify URL(s) in a config file.
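The --url-query encoding rule described above can be mirrored in plain shell to preview what curl appends. A sketch (the `url_query_arg` helper is hypothetical, not part of curl, and assumes a simple name=value argument with ASCII content):

```shell
# Sketch of --url-query's handling of its argument: a leading '+'
# passes the rest through unencoded; otherwise the part after the
# first '=' is percent-encoded, leaving unreserved characters as-is.
url_query_arg() {
  arg=$1
  case $arg in
    +*) printf '%s\n' "${arg#+}"; return ;;   # '+' prefix: as-is
  esac
  name=${arg%%=*}
  value=${arg#*=}
  enc=
  while [ -n "$value" ]; do
    ch=${value%"${value#?}"}                  # first character
    value=${value#?}
    case $ch in
      [A-Za-z0-9._~-]) enc=$enc$ch ;;
      *) enc=$enc$(printf '%%%02X' "'$ch") ;; # hex code of the char
    esac
  done
  printf '%s=%s\n' "$name" "$enc"
}
```

For example, `url_query_arg 'name=a b'` prints `name=a%20b`, while `url_query_arg '+name=%20foo'` prints `name=%20foo` untouched.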
If the given URL is missing a scheme name (such as "http://" or "ftp://" etc) then curl makes a guess based on the host. If the outermost subdomain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then that protocol is used, otherwise HTTP is used. Guessing can be avoided by providing a full URL including the scheme, or disabled by setting a default protocol (added in 7.45.0), see --proto-default for details. To control where this URL is written, use the -o, --output or the -O, --remote-name options. WARNING: On Windows, particular file:// accesses can be converted to network accesses by the operating system. Beware! --url can be used several times in a command line Example: curl --url https://example.com See also -:, --next and -K, --config. -B, --use-ascii (FTP LDAP) Enable ASCII transfer. For FTP, this can also be enforced by using a URL that ends with ";type=A". This option causes data sent to stdout to be in text mode for win32 systems. Providing --use-ascii multiple times has no extra effect. Disable it again with --no-use-ascii. Example: curl -B ftp://example.com/README See also --crlf and --data-ascii. -A, --user-agent <name> (HTTP) Specify the User-Agent string to send to the HTTP server. To encode blanks in the string, surround the string with single quote marks. This header can also be set with the -H, --header or the --proxy-header options. If you give an empty argument to -A, --user-agent (""), it removes the header completely from the request. If you prefer a blank header, you can set it to a single space (" "). If --user-agent is provided several times, the last set value is used. Example: curl -A "Agent 007" https://example.com See also -H, --header and --proxy-header. -u, --user <user:password> Specify the user name and password to use for server authentication. Overrides -n, --netrc and --netrc-optional. If you simply specify the user name, curl prompts for a password. 
The user name and password are split up on the first colon, which makes it impossible to use a colon in the user name with this option. The password can still contain a colon. On systems where it works, curl hides the given option argument from process listings. This is not enough to protect credentials from possibly getting seen by other users on the same system as they are still visible for a moment before being cleared. Such sensitive data should instead be retrieved from a file or similar and never used in clear text on a command line. When using Kerberos V5 with a Windows-based server you should include the Windows domain name in the user name, in order for the server to successfully obtain a Kerberos Ticket. If you do not, then the initial authentication handshake may fail. When using NTLM, the user name can be specified simply as the user name, without the domain, if there is a single domain and forest in your setup for example. To specify the domain name use either Down-Level Logon Name or UPN (User Principal Name) formats. For example, EXAMPLE\user and user@example.com respectively. If you use a Windows SSPI-enabled curl binary and perform Kerberos V5, Negotiate, NTLM or Digest authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: "-u :". If --user is provided several times, the last set value is used. Example: curl -u user:secret https://example.com See also -n, --netrc and -K, --config. --variable <[%]name=text/@file> Set a variable with "name=content" or "name@file" (where "file" can be stdin if set to a single dash (-)). The name is a case-sensitive identifier that may consist only of the characters a-z, A-Z, 0-9 and underscore. The specified content is then associated with this identifier. Setting the same variable name again overwrites the old contents with the new.
The contents of a variable can be referenced in a later command line option when that option name is prefixed with "--expand-", and the name is used as "{{name}}" (without the quotes). --variable can import environment variables into the name space. Opt to either require the environment variable to be set or provide a default value for the variable in case it is not already set. --variable %name imports the variable called 'name' but exits with an error if that environment variable is not already set. To provide a default value if the environment variable is not set, use --variable %name=content or --variable %name@content. Note that on some systems - but not all - environment variables are case insensitive. When expanding variables, curl supports a set of functions that can make the variable contents more convenient to use. You apply a function to a variable expansion by adding a colon and then listing the desired functions in a comma-separated list that is evaluated in left-to-right order. Variable content holding null bytes that is not encoded when expanded causes an error. Available functions: trim removes all leading and trailing white space. json outputs the content using JSON string quoting rules. url shows the content URL (percent) encoded. b64 expands the variable base64 encoded. --variable can be used several times in a command line Example: curl --variable name=smith https://example.com See also -K, --config. Added in 8.3.0. -v, --verbose Makes curl verbose during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl. If you only want HTTP headers in the output, -i, --include or -D, --dump-header might be more suitable options. 
If you think this option still does not give you enough details, consider using --trace or --trace-ascii instead. Note that verbose output of curl activities and network traffic might contain sensitive data, including user names, credentials or secret data content. Be aware and be careful when sharing trace logs with others. This option is global and does not need to be specified for each use of --next. Providing --verbose multiple times has no extra effect. Disable it again with --no-verbose. Example: curl --verbose https://example.com See also -i, --include, -s, --silent, --trace and --trace-ascii. This option is mutually exclusive to --trace and --trace-ascii. -V, --version Displays information about curl and the libcurl version it uses. The first line includes the full version of curl, libcurl and other 3rd party libraries linked with the executable. The second line (starts with "Release-Date:") shows the release date. The third line (starts with "Protocols:") shows all protocols that libcurl reports to support. The fourth line (starts with "Features:") shows specific features libcurl reports to offer. Available features include: `alt-svc` Support for the Alt-Svc: header is provided. `AsynchDNS` This curl uses asynchronous name resolves. Asynchronous name resolves can be done using either the c-ares or the threaded resolver backends. `brotli` Support for automatic brotli compression over HTTP(S). `CharConv` curl was built with support for character set conversions (like EBCDIC) `Debug` This curl uses a libcurl built with Debug. This enables more error-tracking and memory debugging etc. For curl-developers only! `gsasl` The built-in SASL authentication includes extensions to support SCRAM because libcurl was built with libgsasl. `GSS-API` GSS-API is supported. `HSTS` HSTS support is present. `HTTP2` HTTP/2 support has been built-in. `HTTP3` HTTP/3 support has been built-in. `HTTPS-proxy` This curl is built to support HTTPS proxy. 
`IDN` This curl supports IDN - international domain names. `IPv6` You can use IPv6 with this. `Kerberos` Kerberos V5 authentication is supported. `Largefile` This curl supports transfers of large files, files larger than 2GB. `libz` Automatic decompression (via gzip, deflate) of compressed files over HTTP is supported. `MultiSSL` This curl supports multiple TLS backends. `NTLM` NTLM authentication is supported. `NTLM_WB` NTLM delegation to winbind helper is supported. `PSL` PSL is short for Public Suffix List and means that this curl has been built with knowledge about "public suffixes". `SPNEGO` SPNEGO authentication is supported. `SSL` SSL versions of various protocols are supported, such as HTTPS, FTPS, POP3S and so on. `SSPI` SSPI is supported. `TLS-SRP` SRP (Secure Remote Password) authentication is supported for TLS. `TrackMemory` Debug memory tracking is supported. `Unicode` Unicode support on Windows. `UnixSockets` Unix sockets support is provided. `zstd` Automatic decompression (via zstd) of compressed files over HTTP is supported. Example: curl --version See also -h, --help and -M, --manual. -w, --write-out <format> Make curl display information on stdout after a completed transfer. The format is a string that may contain plain text mixed with any number of variables. The format can be specified as a literal "string", or you can have curl read the format from a file with "@filename" and to tell curl to read the format from stdin you write "@-". The variables present in the output format are substituted by the value or text that curl thinks fit, as described below. All variables are specified as %{variable_name} and to output a normal % you just write them as %%. You can output a newline by using \n, a carriage return with \r and a tab space with \t. The output is by default written to standard output, but can be changed with %{stderr} and %output{}. 
Output HTTP headers from the most recent request by using %header{name} where name is the case insensitive name of the header (without the trailing colon). The header contents are exactly as sent over the network, with leading and trailing whitespace trimmed (added in 7.84.0). Select a specific target destination file to write the output to, by using %output{name} (added in curl 8.3.0) where name is the full file name. The output following that instruction is then written to that file. More than one %output{} instruction can be specified in the same write-out argument. If the file name cannot be created, curl leaves the output destination to the one used prior to the %output{} instruction. Use %output{>>name} to append data to an existing file. NOTE: In Windows the %-symbol is a special symbol used to expand environment variables. In batch files all occurrences of % must be doubled when using this option to properly escape. If this option is used at the command prompt then the % cannot be escaped and unintended expansion is possible. The variables available are: `certs` Output the certificate chain with details. Supported only by the OpenSSL, GnuTLS, Schannel and Secure Transport backends. (Added in 7.88.0) `content_type` The Content-Type of the requested document, if there was any. `errormsg` The error message. (Added in 7.75.0) `exitcode` The numerical exit code of the transfer. (Added in 7.75.0) `filename_effective` The ultimate filename that curl writes out to. This is only meaningful if curl is told to write to a file with the -O, --remote-name or -o, --output option. It's most useful in combination with the -J, --remote-header-name option. `ftp_entry_path` The initial path curl ended up in when logging on to the remote FTP server. `header_json` A JSON object with all HTTP response headers from the recent transfer. Values are provided as arrays, since in the case of multiple headers there can be multiple values. 
(Added in 7.83.0) The header names are provided in lowercase, listed in order of appearance over the wire. Duplicated headers are an exception: they are grouped on the first occurrence of that header, and each value is presented in the JSON array. `http_code` The numerical response code that was found in the last retrieved HTTP(S) or FTP(S) transfer. `http_connect` The numerical code that was found in the last response (from a proxy) to a curl CONNECT request. `http_version` The http version that was effectively used. (Added in 7.50.0) `json` A JSON object with all available keys. (Added in 7.70.0) `local_ip` The IP address of the local end of the most recently done connection - can be either IPv4 or IPv6. `local_port` The local port number of the most recently done connection. `method` The http method used in the most recent HTTP request. (Added in 7.72.0) `num_certs` Number of server certificates received in the TLS handshake. Supported only by the OpenSSL, GnuTLS, Schannel and Secure Transport backends. (Added in 7.88.0) `num_connects` Number of new connects made in the recent transfer. `num_headers` The number of response headers in the most recent request (restarted at each redirect). Note that the status line IS NOT a header. (Added in 7.73.0) `num_redirects` Number of redirects that were followed in the request. `onerror` The rest of the output is only shown if the transfer returned a non-zero error. (Added in 7.75.0) `proxy_ssl_verify_result` The result of the HTTPS proxy's SSL peer certificate verification that was requested. 0 means the verification was successful. (Added in 7.52.0) `redirect_url` When an HTTP request was made without -L, --location to follow redirects (or when --max-redirs is met), this variable shows the actual URL a redirect would have gone to. `referer` The Referer: header, if there was any. (Added in 7.76.0) `remote_ip` The remote IP address of the most recently done connection - can be either IPv4 or IPv6. 
`remote_port` The remote port number of the most recently done connection. `response_code` The numerical response code that was found in the last transfer (formerly known as "http_code"). `scheme` The URL scheme (sometimes called protocol) that was effectively used. (Added in 7.52.0) `size_download` The total amount of bytes that were downloaded. This is the size of the body/data that was transferred, excluding headers. `size_header` The total amount of bytes of the downloaded headers. `size_request` The total amount of bytes that were sent in the HTTP request. `size_upload` The total amount of bytes that were uploaded. This is the size of the body/data that was transferred, excluding headers. `speed_download` The average download speed that curl measured for the complete download. Bytes per second. `speed_upload` The average upload speed that curl measured for the complete upload. Bytes per second. `ssl_verify_result` The result of the SSL peer certificate verification that was requested. 0 means the verification was successful. `stderr` From this point on, the -w, --write-out output is written to standard error. (Added in 7.63.0) `stdout` From this point on, the -w, --write-out output is written to standard output. This is the default, but can be used to switch back after switching to stderr. (Added in 7.63.0) `time_appconnect` The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed. `time_connect` The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed. `time_namelookup` The time, in seconds, it took from the start until the name resolving was completed. `time_pretransfer` The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved. 
`time_redirect` The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. "time_redirect" shows the complete execution time for multiple redirections. `time_starttransfer` The time, in seconds, it took from the start until the first byte is received. This includes time_pretransfer and also the time the server needed to calculate the result. `time_total` The total time, in seconds, that the full operation lasted. `url` The URL that was fetched. (Added in 7.75.0) `url.scheme` The scheme part of the URL that was fetched. (Added in 8.1.0) `url.user` The user part of the URL that was fetched. (Added in 8.1.0) `url.password` The password part of the URL that was fetched. (Added in 8.1.0) `url.options` The options part of the URL that was fetched. (Added in 8.1.0) `url.host` The host part of the URL that was fetched. (Added in 8.1.0) `url.port` The port number of the URL that was fetched. If no port number was specified and the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) `url.path` The path part of the URL that was fetched. (Added in 8.1.0) `url.query` The query part of the URL that was fetched. (Added in 8.1.0) `url.fragment` The fragment part of the URL that was fetched. (Added in 8.1.0) `url.zoneid` The zone id part of the URL that was fetched. (Added in 8.1.0) `urle.scheme` The scheme part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.user` The user part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.password` The password part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.options` The options part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.host` The host part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.port` The port number of the effective (last) URL that was fetched. 
If no port number was specified, but the URL scheme is known, that scheme's default port number is shown. (Added in 8.1.0) `urle.path` The path part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.query` The query part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.fragment` The fragment part of the effective (last) URL that was fetched. (Added in 8.1.0) `urle.zoneid` The zone id part of the effective (last) URL that was fetched. (Added in 8.1.0) `urlnum` The URL index number of this transfer, 0-indexed. Unglobbed URLs share the same index number as the origin globbed URL. (Added in 7.75.0) `url_effective` The URL that was fetched last. This is most meaningful if you have told curl to follow location: headers. If --write-out is provided several times, the last set value is used. Example: curl -w '%{response_code}\n' https://example.com See also -v, --verbose and -I, --head. --xattr When saving output to a file, this option tells curl to store certain file metadata in extended file attributes. Currently, the URL is stored in the "xdg.origin.url" attribute and, for HTTP, the content type is stored in the "mime_type" attribute. If the file system does not support extended attributes, a warning is issued. Providing --xattr multiple times has no extra effect. Disable it again with --no-xattr. Example: curl --xattr -o storage https://example.com See also -R, --remote-time, -w, --write-out and -v, --verbose. FILES ~/.curlrc Default config file, see -K, --config for details. ENVIRONMENT The environment variables can be specified in lower case or upper case. The lower case version has precedence. "http_proxy" is an exception as it is only available in lower case. Using an environment variable to set the proxy has the same effect as using the -x, --proxy option. `http_proxy` [protocol://]<host>[:port] Sets the proxy server to use for HTTP. `HTTPS_PROXY` [protocol://]<host>[:port] Sets the proxy server to use for HTTPS. 
`[url-protocol]_PROXY` [protocol://]<host>[:port] Sets the proxy server to use for [url-protocol], where the protocol is a protocol that curl supports and as specified in a URL. FTP, FTPS, POP3, IMAP, SMTP, LDAP, etc. `ALL_PROXY` [protocol://]<host>[:port] Sets the proxy server to use if no protocol-specific proxy is set. `NO_PROXY` <comma-separated list of hosts/domains> A list of host names that should not go through any proxy. If set to an asterisk '*' only, it matches all hosts. Each name in this list is matched as either a domain name which contains the hostname, or the hostname itself. This environment variable disables use of the proxy even when specified with the -x, --proxy option. That is NO_PROXY=direct.example.com curl -x http://proxy.example.com http://direct.example.com accesses the target URL directly, and NO_PROXY=direct.example.com curl -x http://proxy.example.com http://somewhere.example.com accesses the target URL through the proxy. The list of host names can also include numerical IP addresses, and IPv6 versions should then be given without enclosing brackets. IP addresses can be specified using CIDR notation: an appended slash and number specifies the number of "network bits" out of the address to use in the comparison (added in 7.86.0). For example "192.168.0.0/16" would match all addresses starting with "192.168". `APPDATA` <dir> On Windows, this variable is used when trying to find the home directory, if the primary home variables are all unset. `COLUMNS` <terminal width> If set, the specified number of characters is used as the terminal width when the alternative progress-bar is shown. If not set, curl tries to figure it out using other ways. `CURL_CA_BUNDLE` <file> If set, it is used as the --cacert value. This environment variable is ignored if Schannel is used as the TLS backend. `CURL_HOME` <dir> If set, this is the first variable curl checks when trying to find its home directory. 
If not set, it continues to check XDG_CONFIG_HOME. `CURL_SSL_BACKEND` <TLS backend> If curl was built with support for "MultiSSL", meaning that it has built-in support for more than one TLS backend, this environment variable can be set to the case insensitive name of the particular backend to use when curl is invoked. Setting a name that is not a built-in alternative makes curl stay with the default. SSL backend names (case-insensitive): bearssl, gnutls, mbedtls, openssl, rustls, schannel, secure-transport, wolfssl `HOME` <dir> If set, this is used to find the home directory when that is needed, like when looking for the default .curlrc. CURL_HOME and XDG_CONFIG_HOME have preference. `QLOGDIR` <directory name> If curl was built with HTTP/3 support, setting this environment variable to a local directory makes curl produce qlogs in that directory, using file names named after the destination connection id (in hex). Do note that these files can become rather large. Works with the ngtcp2 and quiche QUIC backends. `SHELL` Used on VMS when trying to detect if using a DCL or a unix shell. `SSL_CERT_DIR` <dir> If set, it is used as the --capath value. This environment variable is ignored if Schannel is used as the TLS backend. `SSL_CERT_FILE` <path> If set, it is used as the --cacert value. This environment variable is ignored if Schannel is used as the TLS backend. `SSLKEYLOGFILE` <file name> If you set this environment variable to a file name, curl stores TLS secrets from its connections in that file when invoked, to enable you to analyze the TLS traffic in real time using network analyzing tools such as Wireshark. This works with the following TLS backends: OpenSSL, libressl, BoringSSL, GnuTLS and wolfSSL. `USERPROFILE` <dir> On Windows, this variable is used when trying to find the home directory, if the other, primary, variables are all unset. If set, curl uses the path "$USERPROFILE\Application Data". 
`XDG_CONFIG_HOME` <dir> If CURL_HOME is not set, this variable is checked when looking for a default .curlrc file. PROXY PROTOCOL PREFIXES The proxy string may be specified with a protocol:// prefix to specify alternative proxy protocols. If no protocol is specified in the proxy string or if the string does not match a supported one, the proxy is treated as an HTTP proxy. The supported proxy protocol prefixes are as follows: http:// Makes it use it as an HTTP proxy. The default if no scheme prefix is used. https:// Makes it treated as an HTTPS proxy. socks4:// Makes it the equivalent of --socks4 socks4a:// Makes it the equivalent of --socks4a socks5:// Makes it the equivalent of --socks5 socks5h:// Makes it the equivalent of --socks5-hostname EXIT CODES There are a bunch of different error codes and their corresponding error messages that may appear under error conditions. At the time of this writing, the exit codes are: 0 Success. The operation completed successfully according to the instructions. 1 Unsupported protocol. This build of curl has no support for this protocol. 2 Failed to initialize. 3 URL malformed. The syntax was not correct. 4 A feature or option that was needed to perform the desired request was not enabled or was explicitly disabled at build-time. To make curl able to do this, you probably need another build of libcurl. 5 Could not resolve proxy. The given proxy host could not be resolved. 6 Could not resolve host. The given remote host could not be resolved. 7 Failed to connect to host. 8 Weird server reply. The server sent data curl could not parse. 9 FTP access denied. The server denied login or denied access to the particular resource or directory you wanted to reach. Most often you tried to change to a directory that does not exist on the server. 10 FTP accept failed. While waiting for the server to connect back when an active FTP session is used, an error code was sent over the control connection or similar. 11 FTP weird PASS reply. 
Curl could not parse the reply sent to the PASS request. 12 During an active FTP session while waiting for the server to connect back to curl, the timeout expired. 13 FTP weird PASV reply, Curl could not parse the reply sent to the PASV request. 14 FTP weird 227 format. Curl could not parse the 227-line the server sent. 15 FTP cannot use host. Could not resolve the host IP we got in the 227-line. 16 HTTP/2 error. A problem was detected in the HTTP2 framing layer. This is somewhat generic and can be one out of several problems, see the error message for details. 17 FTP could not set binary. Could not change transfer method to binary. 18 Partial file. Only a part of the file was transferred. 19 FTP could not download/access the given file, the RETR (or similar) command failed. 21 FTP quote error. A quote command returned error from the server. 22 HTTP page not retrieved. The requested URL was not found or returned another error with the HTTP error code being 400 or above. This return code only appears if -f, --fail is used. 23 Write error. Curl could not write data to a local filesystem or similar. 25 Failed starting the upload. For FTP, the server typically denied the STOR command. 26 Read error. Various reading problems. 27 Out of memory. A memory allocation request failed. 28 Operation timeout. The specified time-out period was reached according to the conditions. 30 FTP PORT failed. The PORT command failed. Not all FTP servers support the PORT command, try doing a transfer using PASV instead. 31 FTP could not use REST. The REST command failed. This command is used for resumed FTP transfers. 33 HTTP range error. The range "command" did not work. 34 HTTP post error. Internal post-request generation error. 35 SSL connect error. The SSL handshaking failed. 36 Bad download resume. Could not continue an earlier aborted download. 37 FILE could not read file. Failed to open the file. Permissions? 38 LDAP cannot bind. LDAP bind operation failed. 39 LDAP search failed. 
41 Function not found. A required LDAP function was not found. 42 Aborted by callback. An application told curl to abort the operation. 43 Internal error. A function was called with a bad parameter. 45 Interface error. A specified outgoing interface could not be used. 47 Too many redirects. When following redirects, curl hit the maximum amount. 48 Unknown option specified to libcurl. This indicates that you passed a weird option to curl that was passed on to libcurl and rejected. Read up in the manual! 49 Malformed telnet option. 52 The server did not reply anything, which here is considered an error. 53 SSL crypto engine not found. 54 Cannot set SSL crypto engine as default. 55 Failed sending network data. 56 Failure in receiving network data. 58 Problem with the local certificate. 59 Could not use specified SSL cipher. 60 Peer certificate cannot be authenticated with known CA certificates. 61 Unrecognized transfer encoding. 63 Maximum file size exceeded. 64 Requested FTP SSL level failed. 65 Sending the data requires a rewind that failed. 66 Failed to initialize SSL Engine. 67 The user name, password, or similar was not accepted and curl failed to log in. 68 File not found on TFTP server. 69 Permission problem on TFTP server. 70 Out of disk space on TFTP server. 71 Illegal TFTP operation. 72 Unknown TFTP transfer ID. 73 File already exists (TFTP). 74 No such user (TFTP). 77 Problem reading the SSL CA cert (path? access rights?). 78 The resource referenced in the URL does not exist. 79 An unspecified error occurred during the SSH session. 80 Failed to shut down the SSL connection. 82 Could not load CRL file, missing or wrong format. 83 Issuer check failed. 84 The FTP PRET command failed. 85 Mismatch of RTSP CSeq numbers. 86 Mismatch of RTSP Session Identifiers. 87 Unable to parse FTP file list. 88 FTP chunk callback reported error. 89 No connection available, the session is queued. 90 SSL public key does not match pinned public key. 
91 Invalid SSL certificate status. 92 Stream error in HTTP/2 framing layer. 93 An API function was called from inside a callback. 94 An authentication function returned an error. 95 A problem was detected in the HTTP/3 layer. This is somewhat generic and can be one out of several problems, see the error message for details. 96 QUIC connection error. This error may be caused by an SSL library error. QUIC is the protocol used for HTTP/3 transfers. 97 Proxy handshake error. 98 A client-side certificate is required to complete the TLS handshake. 99 Poll or select returned fatal error. XX More error codes might appear here in future releases. The existing ones are meant to never change. BUGS If you experience any problems with curl, submit an issue in the project's bug tracker on GitHub: https://github.com/curl/curl/issues AUTHORS Daniel Stenberg is the main author, but the whole list of contributors is found in the separate THANKS file. WWW https://curl.se SEE ALSO ftp (1), wget (1) curl 8.6.0 March 12 2024 curl(1)
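The --write-out variables, exit codes and config-file credential handling described above can be exercised locally with a file:// URL, so no network access is needed. This is a sketch, not curl's own documentation: it assumes /etc/hosts exists, that curl is 7.75.0 or later for %{exitcode} (older builds simply leave it empty), and the "alice:s3cret" credentials are made up for illustration.

```shell
# Transfer a local file and report metadata via --write-out.
curl -s -o /dev/null -w 'bytes=%{size_download} exit=%{exitcode}\n' file:///etc/hosts

# A URL with no host part triggers exit code 3 (URL malformed).
curl -s -o /dev/null 'http://' 2>/dev/null || echo "curl exited with $?"

# Keep credentials out of process listings by feeding a config on stdin
# (-K -); the user line is parsed but goes unused for a file:// transfer.
printf 'user = "alice:s3cret"\nurl = "file:///etc/hosts"\nsilent\noutput = "/dev/null"\n' | curl -K -
```

Reading the config from stdin demonstrates the advice under -u, --user: the password never appears as a command-line argument, so it is never visible in a process listing.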
wdutil
wdutil provides the functionality of the Wireless Diagnostics application in command line form. COMMANDS The following commands are available: diagnose [-f outputDirectoryPath] Equivalent to running the Wireless Diagnostics application, without UI. The default outputDirectoryPath is /var/tmp. Requires sudo. info Displays all of the content included in the Wi-Fi Info utility view. log [{+|-} {dhcp|od|dns|eapol|wifi}]+ Enables or disables logging for DHCP, OpenDirectory, DNS, EAPOL, and Wi-Fi. Requires sudo. dump Dumps the temporary Wi-Fi log buffer to /tmp/wifi-XXXXXX.log
wdutil – Wireless Diagnostics command line utility.
wdutil [command]
Run all diagnostic tests: sudo wdutil diagnose Gather information about the current wireless environment: wdutil info Enable DHCP and OpenDirectory logging: sudo wdutil log +dhcp +od Disable EAPOL logging: sudo wdutil log -eapol AUTHOR This program and document are maintained by Apple Inc. <wifi- diags@group.apple.com>. HISTORY The wdutil command first appeared in Mac OS X Version 10.9. macOS 14.5 12/8/12 macOS 14.5
agentxtrap
agentxtrap issues an AgentX NotifyPDU to a master agent. One or more object identifiers (OIDs) can be given as arguments on the command line. A type and a value must accompany each object identifier. Each variable name is given in the format specified in variables(5).
agentxtrap - send an AgentX NotifyPDU to an AgentX master agent
agentxtrap [OPTIONS] trap-oid [OID TYPE VALUE...]
-c contextName if the -c option is present then the notification is sent in the given non-default context. -U uptime if the -U option is present then that value, parsed as centiseconds, is taken to be the sysUpTime field of the application. -x ADDRESS if the -x option is present then contact the AgentX master at ADDRESS and not the default one. Additionally all the options described in snmpcmd(1) under the MIB PARSING OPTIONS, LOGGING OPTIONS and INPUT OPTIONS headers as well as the -d, -D, -m and -M options are supported. In OID TYPE VALUE the parsing of the VALUE field is controlled by the TYPE field. The TYPE field is one of the following characters: = Let OID decide how VALUE should be interpreted i INTEGER u Unsigned c Counter32 s OCTET STRING of characters x OCTET STRING, entered as a sequence of optionally space separated hexadecimal digit pairs d OCTET STRING, entered as a sequence of space separated decimal digits in the range 0 - 255 n NULL o OBJECT IDENTIFIER t TimeTicks a IpAddress b BITS These type characters are handled in the same way as by the snmpset command.
To send a generic linkUp trap to the manager for interface 1 the following command can be used: agentxtrap netSnmp.0.3 ifindex.1 i 1 SEE ALSO snmpcmd(1), snmpset(1), variables(5), RFC 2741 V5.6.2.1 20 Dec 2009 AGENTXTRAP(1)
db_verify
The db_verify utility verifies the structure of one or more files and the databases they contain. The options are as follows: -h Specify a home directory for the database environment; by default, the current working directory is used. -o Skip the database checks for btree and duplicate sort order and for hashing. If the file being verified contains databases with non-default comparison or hashing configurations, calling the db_verify utility without the -o flag will usually return failure. The -o flag causes db_verify to ignore database sort or hash ordering and allows db_verify to be used on these files. To fully verify these files, verify them explicitly using the DB->verify method, after configuring the correct comparison or hashing functions. -N Do not acquire shared region mutexes while running. Other problems, such as potentially fatal errors in Berkeley DB, will be ignored as well. This option is intended only for debugging errors, and should not be used under any other circumstances. -P Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments. -q Suppress the printing of any error descriptions, simply exit success or failure. -V Write the library version number to the standard output, and exit. The db_verify utility does not perform any locking, even in Berkeley DB environments that are configured with a locking subsystem. As such, it should only be used on files that are not being modified by another thread of control. The db_verify utility may be used with a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). 
In order to avoid environment corruption when using a Berkeley DB environment, db_verify should always be given the chance to detach from the environment and exit gracefully. To cause db_verify to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). The db_verify utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open. SEE ALSO db_archive(1), db_checkpoint(1), db_deadlock(1), db_dump(1), db_load(1), db_printlog(1), db_recover(1), db_stat(1), db_upgrade(1) Darwin December 3, 2003 Darwin
db_verify
db_verify [-NoqV] [-h home] [-P password] file ...
newgrp
The newgrp utility creates a new shell execution environment with modified real and effective group IDs. The options are as follows: -l Simulate a full login. The environment and umask are set to what would be expected if the user actually logged in again. If the group operand is present, a new shell is started with the specified effective and real group IDs. The user will be prompted for a password if they are not a member of the specified group. Otherwise, the real, effective and supplementary group IDs are restored to those from the current user's password database entry. EXIT STATUS The newgrp utility attempts to start the shell regardless of whether group IDs were successfully changed. If an error occurs and the shell cannot be started, newgrp exits >0. Otherwise, the exit status of newgrp is the exit status of the shell. SEE ALSO csh(1), groups(1), login(1), sh(1), su(1), umask(1), group(5), passwd(5), environ(7) STANDARDS The newgrp utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”). HISTORY A newgrp utility appeared in Version 6 AT&T UNIX. BUGS Group passwords are inherently insecure as there is no way to stop users obtaining the password hash from the group database. Their use is discouraged. Instead, users should simply be added to the necessary groups. macOS 14.5 February 8, 2013 macOS 14.5
newgrp – change to a new group
newgrp [-l] [group]
null
null
pod2usage5.34
pod2usage will read the given input file looking for pod documentation and will print the corresponding usage message. If no input file is specified then standard input is read. pod2usage invokes the pod2usage() function in the Pod::Usage module. Please see "pod2usage()" in Pod::Usage. SEE ALSO Pod::Usage, pod2text, Pod::Text, Pod::Text::Termcap, perldoc AUTHOR Please report bugs using <http://rt.cpan.org>. Brad Appleton <bradapp@enteract.com> Based on code for pod2text(1) written by Tom Christiansen <tchrist@mox.perl.com> perl v5.34.1 2024-04-13 POD2USAGE(1)
pod2usage - print usage messages from embedded pod docs in files
pod2usage [-help] [-man] [-exit exitval] [-output outfile] [-verbose level] [-pathlist dirlist] [-formatter module] [-utf8] file OPTIONS AND ARGUMENTS -help Print a brief help message and exit. -man Print this command's manual page and exit. -exit exitval The exit status value to return. -output outfile The output file to print to. If the special names "-" or ">&1" or ">&STDOUT" are used then standard output is used. If ">&2" or ">&STDERR" is used then standard error is used. -verbose level The desired level of verbosity to use: 1 : print SYNOPSIS only 2 : print SYNOPSIS sections and any OPTIONS/ARGUMENTS sections 3 : print the entire manpage (similar to running pod2text) -pathlist dirlist Specifies one or more directories to search for the input file if it was not supplied with an absolute path. Each directory path in the given list should be separated by a ':' on Unix (';' on MSWin32 and DOS). -formatter module Which text formatter to use. Default is Pod::Text, or for very old Perl versions Pod::PlainText. An alternative would be e.g. Pod::Text::Termcap. -utf8 This option assumes that the formatter (see above) understands the option "utf8". It turns on generation of utf8 output. file The pathname of a file containing pod documentation to be output in usage message format. If omitted, standard input is read - but the output is then formatted with Pod::Text only - unless a specific formatter has been specified with -formatter.
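The -pathlist separator rule (':' on Unix, ';' on MSWin32 and DOS) amounts to a small splitting helper. This is an illustrative model, not Pod::Usage's code:

```python
def split_pathlist(dirlist, platform="unix"):
    # ':' separates directories on Unix; ';' on MSWin32 and DOS,
    # per the -pathlist description above.
    sep = ";" if platform in ("MSWin32", "dos") else ":"
    return [d for d in dirlist.split(sep) if d]

assert split_pathlist("/usr/share/doc:/usr/local/doc") == [
    "/usr/share/doc", "/usr/local/doc"]
assert split_pathlist(r"C:\docs;D:\docs", platform="MSWin32") == [
    r"C:\docs", r"D:\docs"]
```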
null
null
timerfires
The timerfires utility lists timers as they fire. The options are as follows: -t timeout Run only for timeout seconds; then exit. -p pid Analyze only timers from the process with process ID pid. -n name Analyze only timers from processes with name name. It is an error to specify both -p and -n. -s Show call stacks for "sleep"-type timers. SAMPLE USAGE timerfires -n MyApp -s -t 10 timerfires will run for ten seconds, displaying timer data for all instances of processes named "MyApp", including stacks. OS X May 20, 2013 OS X
timerfires – analyze timers as they fire
timerfires [-t timeout] [-p pid | -n name] [-s]
null
null
rpcgen
rpcgen is a tool that generates C code to implement an RPC protocol. The input to rpcgen is a language similar to C known as RPC Language (Remote Procedure Call Language). rpcgen is normally used as in the first synopsis where it takes an input file and generates up to four output files. If the infile is named proto.x, then rpcgen will generate a header file in proto.h, XDR routines in proto_xdr.c, server-side stubs in proto_svc.c, and client-side stubs in proto_clnt.c. With the -T option, it will also generate the RPC dispatch table in proto_tbl.i. With the -Sc option, it will also generate sample code which would illustrate how to use the remote procedures on the client side. This code would be created in proto_client.c. With the -Ss option, it will also generate sample server code which would illustrate how to write the remote procedures. This code would be created in proto_server.c. The server created can be started either by the port monitors (for example, inetd or listen) or by itself. When it is started by a port monitor, it creates servers only for the transport for which the file descriptor 0 was passed. The name of the transport must be specified by setting up the environment variable PM_TRANSPORT. When the server generated by rpcgen is executed, it creates server handles for all the transports specified in the NETPATH environment variable, or if it is unset, it creates server handles for all the visible transports from the /etc/netconfig file. Note: the transports are chosen at run time and not at compile time. When the server is self-started, it backgrounds itself by default. A special define symbol RPC_SVC_FG can be used to run the server process in the foreground. The second synopsis provides special features which allow for the creation of more sophisticated RPC servers. These features include support for user-provided #defines and RPC dispatch tables. 
The entries in the RPC dispatch table contain: + pointers to the service routine corresponding to that procedure, + a pointer to the input and output arguments, + the size of these arguments. A server can use the dispatch table to check authorization and then to execute the service routine; a client library may use it to deal with the details of storage management and XDR data conversion. The other three synopses shown above are used when one does not want to generate all the output files, but only a particular one. Some examples of their usage are described in the EXAMPLE section below. When rpcgen is executed with the -s option, it creates servers for that particular class of transports. When executed with the -n option, it creates a server for the transport specified by netid. If infile is not specified, rpcgen accepts the standard input. The C preprocessor, cpp(1), is run on the input file before it is actually interpreted by rpcgen. For each type of output file, rpcgen defines a special preprocessor symbol for use by the rpcgen programmer: RPC_HDR defined when compiling into header files RPC_XDR defined when compiling into XDR routines RPC_SVC defined when compiling into server-side stubs RPC_CLNT defined when compiling into client-side stubs RPC_TBL defined when compiling into RPC dispatch tables Any line beginning with ‘%’ is passed directly into the output file, uninterpreted by rpcgen. For every data type referred to in infile, rpcgen assumes that there exists a routine with the string “xdr_” prepended to the name of the data type. If this routine does not exist in the RPC/XDR library, it must be provided. Providing an undefined data type allows customization of XDR routines.
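The output-file naming convention described above (proto.x yielding proto.h, proto_xdr.c, and so on) can be sketched mechanically. This is an illustrative model of the naming scheme only; rpcgen itself is a C program:

```python
def rpcgen_outputs(infile, dispatch_table=False):
    # Model the naming described above: proto.x yields proto.h,
    # proto_xdr.c, proto_svc.c, proto_clnt.c, and, with -T,
    # proto_tbl.i.
    stem = infile[:-2] if infile.endswith(".x") else infile
    outputs = [stem + ".h", stem + "_xdr.c", stem + "_svc.c",
               stem + "_clnt.c"]
    if dispatch_table:
        outputs.append(stem + "_tbl.i")
    return outputs

assert rpcgen_outputs("proto.x") == [
    "proto.h", "proto_xdr.c", "proto_svc.c", "proto_clnt.c"]
assert rpcgen_outputs("proto.x", dispatch_table=True)[-1] == "proto_tbl.i"
```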
rpcgen – Remote Procedure Call (RPC) protocol compiler
rpcgen infile rpcgen [-D [name=value]] [-A] [-M] [-T] [-K secs] infile rpcgen [-L] -c | -h | -l | -m | -t | -Sc | -Ss [-o outfile] [infile] rpcgen -s nettype [-o outfile] [infile] rpcgen -n netid [-o outfile] [infile]
-a Generate all the files including sample code for client and server side. -b This generates code for the SunOS 4.1 style of RPC. This is the default. -C Generate code in ANSI C. This option also generates code that could be compiled with the C++ compiler. -c Compile into XDR routines. -D name[=value] Define a symbol name. Equivalent to the #define directive in the source. If no value is given, value is defined as 1. This option may be specified more than once. -h Compile into C data-definitions (a header file). The -T option can be used in conjunction to produce a header file which supports RPC dispatch tables. -K secs By default, services created using rpcgen wait 120 seconds after servicing a request before exiting. That interval can be changed using the -K flag. To create a server that exits immediately upon servicing a request, “-K 0” can be used. To create a server that never exits, the appropriate argument is “-K -1”. When monitoring for a server, some port monitors, like the AT&T System V Release 4 UNIX utility listen(1), always spawn a new process in response to a service request. If it is known that a server will be used with such a monitor, the server should exit immediately on completion. For such servers, rpcgen should be used with “-K -1”. -L Server errors will be sent to syslog instead of stderr. -l Compile into client-side stubs. -m Compile into server-side stubs, but do not generate a main() routine. This option is useful for doing callback-routines and for users who need to write their own main() routine to do initialization. -N Use the newstyle of rpcgen. This allows procedures to have multiple arguments. It also uses the style of parameter passing that closely resembles C. So, when passing an argument to a remote procedure you do not have to pass a pointer to the argument but the argument itself. This behaviour is different from the oldstyle of rpcgen generated code. The newstyle is not the default case because of backward compatibility. 
-n netid Compile into server-side stubs for the transport specified by netid. There should be an entry for netid in the netconfig database. This option may be specified more than once, so as to compile a server that serves multiple transports. -o outfile Specify the name of the output file. If none is specified, standard output is used (-c, -h, -l, -m, -n and -s modes only). -Sc Generate sample code to show the use of remote procedures and how to bind to the server before calling the client-side stubs generated by rpcgen. -Ss Generate skeleton code for the remote procedures on the server side. You would need to fill in the actual code for the remote procedures. -s nettype Compile into server-side stubs for all the transports belonging to the class nettype. The supported classes are netpath, visible, circuit_n, circuit_v, datagram_n, datagram_v, tcp, and udp [see rpc(3) for the meanings associated with these classes. Note: BSD currently supports only the tcp and udp classes]. This option may be specified more than once. Note: the transports are chosen at run time and not at compile time. -T Generate the code to support RPC dispatch tables. -t Compile into RPC dispatch table. The options -c, -h, -l, -m, -s, and -t are used exclusively to generate a particular type of file, while the options -D and -T are global and can be used with the other options. NOTES The RPC Language does not support nesting of structures. As a workaround, structures can be declared at the top level, and their names used inside other structures in order to achieve the same effect. Name clashes can occur when using program definitions, since the apparent scoping does not really apply. Most of these can be avoided by giving unique names for programs, versions, procedures and types. The server code generated with the -n option refers to the transport indicated by netid and hence is very site-specific. 
EXAMPLE The command $ rpcgen -T prot.x generates the five files: prot.h, prot_clnt.c, prot_svc.c, prot_xdr.c and prot_tbl.i. The following example sends the C data-definitions (header file) to standard output: $ rpcgen -h prot.x To send server-side stubs for all the transports belonging to the class datagram_n to standard output, with the symbol TEST defined (-DTEST), use: $ rpcgen -s datagram_n -DTEST prot.x To create the server-side stubs for the transport indicated by the netid tcp, use: $ rpcgen -n tcp -o prot_svc.c prot.x SEE ALSO cpp(1) June 11, 1995
null
swcutil
null
null
null
null
null
perlthanks
This program is designed to help you generate bug reports (and thank-you notes) about perl5 and the modules which ship with it. In most cases, you can just run it interactively from a command line without any special arguments and follow the prompts. If you have found a bug with a non-standard port (one that was not part of the standard distribution), a binary distribution, or a non-core module (such as Tk, DBI, etc), then please see the documentation that came with that distribution to determine the correct place to report bugs. Bug reports should be submitted to the GitHub issue tracker at <https://github.com/Perl/perl5/issues>. The perlbug@perl.org address no longer automatically opens tickets. You can use this tool to compose your report and save it to a file which you can then submit to the issue tracker. In extreme cases, perlbug may not work well enough on your system to guide you through composing a bug report. In those cases, you may be able to use perlbug -d or perl -V to get system configuration information to include in your issue report. When reporting a bug, please run through this checklist: What version of Perl are you running? Type "perl -v" at the command line to find out. Are you running the latest released version of perl? Look at <http://www.perl.org/> to find out. If you are not using the latest released version, please try to replicate your bug on the latest stable release. Note that reports about bugs in old versions of Perl, especially those which indicate you haven't also tested the current stable release of Perl, are likely to receive less attention from the volunteers who build and maintain Perl than reports about bugs in the current release. Are you sure what you have is a bug? A significant number of the bug reports we get turn out to be documented features in Perl. Make sure the issue you've run into isn't intentional by glancing through the documentation that comes with the Perl distribution. 
Given the sheer volume of Perl documentation, this isn't a trivial undertaking, but if you can point to documentation that suggests the behaviour you're seeing is wrong, your issue is likely to receive more attention. You may want to start with perldoc perltrap for pointers to common traps that new (and experienced) Perl programmers run into. If you're unsure of the meaning of an error message you've run across, see perldoc perldiag for an explanation. If the message isn't in perldiag, it probably isn't generated by Perl. You may have luck consulting your operating system documentation instead. If you are on a non-UNIX platform, see perldoc perlport, as some features may be unimplemented or work differently. You may be able to figure out what's going wrong using the Perl debugger. For information about how to use the debugger, see perldoc perldebug. Do you have a proper test case? The easier it is to reproduce your bug, the more likely it will be fixed -- if nobody can duplicate your problem, it probably won't be addressed. A good test case has most of these attributes: short, simple code; few dependencies on external commands, modules, or libraries; no platform-dependent code (unless it's a platform-specific bug); clear, simple documentation. A good test case is almost always a good candidate to be included in Perl's test suite. If you have the time, consider writing your test case so that it can be easily included in the standard test suite. Have you included all relevant information? Be sure to include the exact error messages, if any. "Perl gave an error" is not an exact error message. If you get a core dump (or equivalent), you may use a debugger (dbx, gdb, etc) to produce a stack trace to include in the bug report. NOTE: unless your Perl has been compiled with debug info (often -g), the stack trace is likely to be somewhat hard to use because it will most probably contain only the function names and not their arguments. 
If possible, recompile your Perl with debug info and reproduce the crash and the stack trace. Can you describe the bug in plain English? The easier it is to understand a reproducible bug, the more likely it will be fixed. Any insight you can provide into the problem will help a great deal. In other words, try to analyze the problem (to the extent you can) and report your discoveries. Can you fix the bug yourself? If so, that's great news; bug reports with patches are likely to receive significantly more attention and interest than those without patches. Please submit your patch via the GitHub Pull Request workflow as described in perldoc perlhack. You may also send patches to perl5-porters@perl.org. When sending a patch, create it using "git format-patch" if possible, though a unified diff created with "diff -pu" will do nearly as well. Your patch may be returned with requests for changes, or requests for more detailed explanations about your fix. Here are a few hints for creating high-quality patches: Make sure the patch is not reversed (the first argument to diff is typically the original file, the second argument your changed file). Make sure you test your patch by applying it with "git am" or the "patch" program before you send it on its way. Try to follow the same style as the code you are trying to patch. Make sure your patch really does work ("make test", if the thing you're patching is covered by Perl's test suite). Can you use "perlbug" to submit a thank-you note? Yes, you can do this by either using the "-T" option, or by invoking the program as "perlthanks". Thank-you notes are good. It makes people smile. Please make your issue title informative. "a bug" is not informative. Neither is "perl crashes" nor is "HELP!!!". These don't help. A compact description of what's wrong is fine. Having done your bit, please be prepared to wait, to be told the bug is in your code, or possibly to get no reply at all. 
The volunteers who maintain Perl are busy folks, so if your problem is an obvious bug in your own code, is difficult to understand or is a duplicate of an existing report, you may not receive a personal reply. If it is important to you that your bug be fixed, do monitor the issue tracker (you will be subscribed to notifications for issues you submit or comment on) and the commit logs to development versions of Perl, and encourage the maintainers with kind words or offers of frosty beverages. (Please do be kind to the maintainers. Harassing or flaming them is likely to have the opposite effect of the one you want.) Feel free to update the ticket about your bug on <https://github.com/Perl/perl5/issues> if a new version of Perl is released and your bug is still present.
perlbug - how to submit bug reports on Perl
perlbug perlbug [ -v ] [ -a address ] [ -s subject ] [ -b body | -f inputfile ] [ -F outputfile ] [ -r returnaddress ] [ -e editor ] [ -c adminaddress | -C ] [ -S ] [ -t ] [ -d ] [ -h ] [ -T ] perlbug [ -v ] [ -r returnaddress ] [ -ok | -okay | -nok | -nokay ] perlthanks
-a Address to send the report to instead of saving to a file. -b Body of the report. If not included on the command line, or in a file with -f, you will get a chance to edit the report. -C Don't send copy to administrator when sending report by mail. -c Address to send copy of report to when sending report by mail. Defaults to the address of the local perl administrator (recorded when perl was built). -d Data mode (the default if you redirect or pipe output). This prints out your configuration data, without saving or mailing anything. You can use this with -v to get more complete data. -e Editor to use. -f File containing the body of the report. Use this to quickly send a prepared report. -F File to output the results to. Defaults to perlbug.rep. -h Prints a brief summary of the options. -ok Report successful build on this system to perl porters. Forces -S and -C. Forces and supplies values for -s and -b. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old. -okay As -ok except it will report on older systems. -nok Report unsuccessful build on this system. Forces -C. Forces and supplies a value for -s, then requires you to edit the report and say what went wrong. Alternatively, a prepared report may be supplied using -f. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old. -nokay As -nok except it will report on older systems. -p The names of one or more patch files or other text attachments to be included with the report. Multiple files must be separated with commas. -r Your return address. The program will ask you to confirm its default if you don't use this option. 
-S Save or send the report without asking for confirmation. -s Subject to include with the report. You will be prompted if you don't supply one on the command line. -t Test mode. Makes it possible to command perlbug from a pipe or file, for testing purposes. -T Send a thank-you note instead of a bug report. -v Include verbose configuration data in the report. AUTHORS Kenneth Albanowski (<kjahds@kjahds.com>), subsequently doctored by Gurusamy Sarathy (<gsar@activestate.com>), Tom Christiansen (<tchrist@perl.com>), Nathan Torkington (<gnat@frii.com>), Charles F. Randall (<cfr@pobox.com>), Mike Guy (<mjtg@cam.ac.uk>), Dominic Dunlop (<domo@computer.org>), Hugo van der Sanden (<hv@crypt.org>), Jarkko Hietaniemi (<jhi@iki.fi>), Chris Nandor (<pudge@pobox.com>), Jon Orwant (<orwant@media.mit.edu>), Richard Foley (<richard.foley@rfi.net>), Jesse Vincent (<jesse@bestpractical.com>), and Craig A. Berry (<craigberry@mac.com>). SEE ALSO perl(1), perldebug(1), perldiag(1), perlport(1), perltrap(1), diff(1), patch(1), dbx(1), gdb(1) BUGS None known (guess what must have been used to report them?) perl v5.38.2 2023-11-28 PERLBUG(1)
null
dbiprof5.34
This tool is a command-line client for the DBI::ProfileData module. It allows you to analyze the profile data files produced by DBI::ProfileDumper and produce various useful reports.
dbiprof - command-line client for DBI::ProfileData
See a report of the ten queries with the longest total runtime in the profile dump file prof1.out: dbiprof prof1.out See the top 10 most frequently run queries in the profile file dbi.prof (the default): dbiprof --sort count See the same report with 15 entries: dbiprof --sort count --number 15
This program accepts the following options: --number N Produce this many items in the report. Defaults to 10. If set to "all" then all results are shown. --sort field Sort results by the given field. Sorting by multiple fields isn't currently supported (patches welcome). The available sort fields are: total Sorts by total run time across all runs. This is the default sort. longest Sorts by the longest single run. count Sorts by total number of runs. first Sorts by the time taken in the first run. shortest Sorts by the shortest single run. key1 Sorts by the value of the first element in the Path, which should be numeric. You can also sort by "key2" and "key3". --reverse Reverses the selected sort. For example, to see a report of the shortest overall time: dbiprof --sort total --reverse --match keyN=value Consider only items where the specified key matches the given value. Keys are numbered from 1. For example, let's say you used a DBI::Profile Path of: [ DBIprofile_Statement, DBIprofile_Methodname ] And called dbiprof as in: dbiprof --match key2=execute Your report would only show execute queries, leaving out prepares, fetches, etc. If the value given starts and ends with slashes ("/") then it will be treated as a regular expression. For example, to only include SELECT queries where key1 is the statement: dbiprof --match key1=/^SELECT/ By default the match expression is matched case-insensitively, but this can be changed with the --case-sensitive option. --exclude keyN=value Remove items where the specified key matches the given value. For example, to exclude all prepare entries where key2 is the method name: dbiprof --exclude key2=prepare Like "--match", if the value given starts and ends with slashes ("/") then it will be treated as a regular expression. 
For example, to exclude UPDATE queries where key1 is the statement: dbiprof --exclude key1=/^UPDATE/ By default the exclude expression is matched case-insensitively, but this can be changed with the --case-sensitive option. --case-sensitive Using this option causes --match and --exclude to work case-sensitively. Defaults to off. --delete Sets the "DeleteFiles" option to DBI::ProfileData which causes the files to be deleted after reading. See DBI::ProfileData for more details. --dumpnodes Print the list of nodes in the form of a perl data structure. Use the "--sort" option if you want the list sorted. --version Print the dbiprof version number and exit. AUTHOR Sam Tregar <sam@tregar.com> COPYRIGHT AND LICENSE Copyright (C) 2002 Sam Tregar This program is free software; you can redistribute it and/or modify it under the same terms as Perl 5 itself. SEE ALSO DBI::ProfileDumper, DBI::Profile, DBI. perl v5.34.0 2024-04-13 DBIPROF(1)
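The value-matching rules above (slash-delimited values become regular expressions, everything else is a case-insensitive literal unless --case-sensitive is given) can be sketched as a small matcher. This is an illustrative model, not dbiprof's Perl source:

```python
import re

def make_matcher(value, case_sensitive=False):
    # A value that starts and ends with slashes ("/") is treated as a
    # regular expression; otherwise it is a literal comparison.
    # Matching is case-insensitive unless case_sensitive is set.
    if len(value) > 2 and value.startswith("/") and value.endswith("/"):
        flags = 0 if case_sensitive else re.IGNORECASE
        pattern = re.compile(value[1:-1], flags)
        return lambda s: bool(pattern.search(s))
    if case_sensitive:
        return lambda s: s == value
    return lambda s: s.lower() == value.lower()

select_only = make_matcher("/^SELECT/")
assert select_only("SELECT * FROM t")
assert not select_only("UPDATE t SET x = 1")
assert select_only("select * from t")  # case-insensitive by default
```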
null
parldyn5.30
null
null
null
null
null
snmpnetstat
The snmpnetstat command symbolically displays the values of various network-related information retrieved from a remote system using the SNMP protocol. There are a number of output formats, depending on the options for the information presented. The first form of the command displays a list of active sockets. The second form presents the values of other network-related information according to the option selected. Using the third form, with an interval specified, snmpnetstat will continuously display the information regarding packet traffic on the configured network interfaces. The fourth form displays statistics about the named protocol. snmpnetstat will issue GETBULK requests to query for information if at least protocol version v2 is used. AGENT identifies a target SNMP agent, which is instrumented to monitor the given objects. At its simplest, the AGENT specification will consist of a hostname or an IPv4 address. In this situation, the command will attempt communication with the agent, using UDP/IPv4 to port 161 of the given target host. See snmpcmd(1) for a full list of the possible formats for AGENT.
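The simplest AGENT forms described above (a bare hostname or IPv4 address, defaulting to UDP port 161) can be parsed as follows. This sketch handles only a host with an optional trailing port; transport prefixes, IPv6 addresses, and the other formats documented in snmpcmd(1) are out of scope:

```python
def parse_agent(agent, default_port=161):
    # Parse only the simplest AGENT form: host[:port]. Port defaults
    # to 161, the standard SNMP agent port mentioned above.
    if ":" in agent:
        host, port = agent.rsplit(":", 1)
        return host, int(port)
    return agent, default_port

assert parse_agent("testhost") == ("testhost", 161)
assert parse_agent("10.6.9.1:1161") == ("10.6.9.1", 1161)
```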
snmpnetstat - display networking status and configuration information from a network entity via SNMP
snmpnetstat [COMMON OPTIONS] [-Ca] [-Cn] AGENT snmpnetstat [COMMON OPTIONS] [-Ci] [-Co] [-Cr] [-Cn] [-Cs] AGENT snmpnetstat [COMMON OPTIONS] [-Ci] [-Cn] [-CI interface] AGENT [interval] snmpnetstat [COMMON OPTIONS] [-Ca] [-Cn] [-Cs] [-Cp protocol] AGENT
The options have the following meaning: COMMON OPTIONS Please see snmpcmd(1) for a list of possible values for common options as well as their descriptions. -Ca With the default display, show the state of all sockets; normally sockets used by server processes are not shown. -Ci Show the state of all of the network interfaces. The interface display provides a table of cumulative statistics regarding packets transferred, errors, and collisions. The network addresses of the interface and the maximum transmission unit (``mtu'') are also displayed. -Co Show an abbreviated interface status, giving octets in place of packets. This is useful when querying virtual interfaces (such as Frame-Relay circuits) on a router. -CI interface Show information only about this interface; used with an interval as described below. -Cn Show network addresses as numbers (normally snmpnetstat interprets addresses and attempts to display them symbolically). This option may be used with any of the display formats. -Cp protocol Show statistics about protocol, which is either a well-known name for a protocol or an alias for it. Some protocol names and aliases are listed in the file /etc/protocols. A null response typically means that there are no interesting numbers to report. The program will complain if protocol is unknown or if there is no statistics routine for it. -Cs Show per-protocol statistics. When used with the -Cr option, show routing statistics instead. -Cr Show the routing tables. When -Cs is also present, show per-protocol routing statistics instead of the routing tables. -CR repeaters For GETBULK requests, repeaters specifies the max-repeaters value to use. When snmpnetstat is invoked with an interval argument, it displays a running count of statistics related to network interfaces. interval is the number of seconds between reporting of statistics. 
The Active Sockets Display (default) The default display, for active sockets, shows the local and remote addresses, protocol, and the internal state of the protocol. Address formats are of the form ``host.port'' or ``network.port'' if a socket's address specifies a network but no specific host address. When known, the host and network addresses are displayed symbolically according to the data bases /etc/hosts and /etc/networks, respectively. If a symbolic name for an address is unknown, or if the -Cn option is specified, the address is printed numerically, according to the address family. For more information regarding the Internet ``dot format,'' refer to inet(3N). Unspecified, or ``wildcard'', addresses and ports appear as ``*''. The Interface Display The interface display provides a table of cumulative statistics regarding packets transferred, errors, and collisions. The network addresses of the interface and the maximum transmission unit (``mtu'') are also displayed. The Routing Table Display The routing table display indicates the available routes and their status. Each route consists of a destination host or network and a gateway to use in forwarding packets. The flags field shows the state of the route (``U'' if ``up''), whether the route is to a gateway (``G''), whether the route was created dynamically by a redirect (``D''), and whether the route has been modified by a redirect (``M''). Direct routes are created for each interface attached to the local host; the gateway field for such entries shows the address of the outgoing interface. The interface entry indicates the network interface utilized for the route. The Interface Display with an Interval When snmpnetstat is invoked with an interval argument, it displays a running count of statistics related to network interfaces. This display consists of a column for the primary interface and a column summarizing information for all interfaces. 
The primary interface may be replaced with another interface with the -CI option. The first line of each screen of information contains a summary since the system was last rebooted. Subsequent lines of output show values accumulated over the preceding interval. The Active Sockets Display for a Single Protocol When a protocol is specified with the -Cp option, the information displayed is similar to that in the default display for active sockets, except the display is limited to the given protocol.
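The routing-table flag letters described earlier (``U'', ``G'', ``D'', ``M'') decode mechanically. An illustrative sketch:

```python
ROUTE_FLAGS = {
    # Flag letters from the routing table display described above.
    "U": "route is up",
    "G": "route is to a gateway",
    "D": "created dynamically by a redirect",
    "M": "modified by a redirect",
}

def decode_route_flags(flags):
    # Expand a flags field such as "UG" into its meanings; unknown
    # letters are ignored.
    return [ROUTE_FLAGS[f] for f in flags if f in ROUTE_FLAGS]

assert decode_route_flags("UG") == ["route is up", "route is to a gateway"]
```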
Example of using snmpnetstat to display active sockets (default): % snmpnetstat -v 2c -c public -Ca testhost Active Internet (tcp) Connections (including servers) Proto Local Address Foreign Address (state) tcp *.echo *.* LISTEN tcp *.discard *.* LISTEN tcp *.daytime *.* LISTEN tcp *.chargen *.* LISTEN tcp *.ftp *.* LISTEN tcp *.telnet *.* LISTEN tcp *.smtp *.* LISTEN ... Active Internet (udp) Connections Proto Local Address udp *.echo udp *.discard udp *.daytime udp *.chargen udp *.time ... % snmpnetstat -v 2c -c public -Ci testhost Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Queue eri0 1500 10.6.9/24 testhost 170548881 245601 687976 0 0 lo0 8232 127 localhost 7530982 0 7530982 0 0 Example of using snmpnetstat to show statistics about a specific protocol: % snmpnetstat -v 2c -c public -Cp tcp testhost Active Internet (tcp) Connections Proto Local Address Foreign Address (state) tcp *.echo *.* LISTEN tcp *.discard *.* LISTEN tcp *.daytime *.* LISTEN tcp *.chargen *.* LISTEN tcp *.ftp *.* LISTEN tcp *.telnet *.* LISTEN tcp *.smtp *.* LISTEN ... SEE ALSO snmpcmd(1), iostat(1), vmstat(1), hosts(5), networks(5), protocols(5), services(5). BUGS The notion of errors is ill-defined. V5.6.2.1 20 Jan 2010 SNMPNETSTAT(1)
package-stash-conflicts5.34
null
package-stash-conflicts VERSION version 0.38 SUPPORT Bugs may be submitted through the RT bug tracker <https://rt.cpan.org/Public/Dist/Display.html?Name=Package-Stash> (or bug-Package-Stash@rt.cpan.org <mailto:bug-Package-Stash@rt.cpan.org>). AUTHOR Jesse Luehrs <doy@tozt.net> COPYRIGHT AND LICENSE This software is copyright (c) 2018 by Jesse Luehrs. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. perl v5.34.0 2018-12-31 PACKAGE-STASH-CONFLICTS(1)
null
null
null
lastcomm
lastcomm gives information on previously executed commands. With no arguments, lastcomm prints information about all the commands recorded during the current accounting file's lifetime. The options are as follows: -f file Read from file rather than the default accounting file. -w Use as many columns as needed to print the output instead of limiting it to 80. This is the default behavior on Apple systems. If called with arguments, only accounting entries with a matching command name, user name, or terminal name are printed. So, for example: lastcomm a.out root ttyd0 would produce a listing of all the executions of commands named a.out by user root on the terminal ttyd0. For each process entry, the following are printed. • The name of the user who ran the process. • Flags, as accumulated by the accounting facilities in the system. • The command name under which the process was called. • The amount of CPU time used by the process (in seconds). • The time the process started. • The elapsed time of the process. The flags are encoded as follows: “S” indicates the command was executed by the super-user, “F” indicates the command ran after a fork, but without a following exec(3), “C” indicates the command was run in PDP-11 compatibility mode (VAX only), “D” indicates the command terminated with the generation of a core file, and “X” indicates the command was terminated with a signal. FILES /var/account/acct Default accounting file. SEE ALSO last(1), sigaction(2), acct(5), core(5) HISTORY The lastcomm command appeared in 3.0BSD. macOS 14.5 January 31, 2012 macOS 14.5
lastcomm – show last commands executed in reverse order
lastcomm [-w] [-f file] [command ...] [user ...] [terminal ...]
null
null
getconf
The getconf utility prints the values of POSIX or X/Open path or system configuration variables to the standard output. If a variable is undefined, the string “undefined” is output. The first form of the command displays all of the path or system configuration variables to standard output. If file is provided, all path configuration variables are reported for file using pathconf(2). Otherwise, all system configuration variables are reported using confstr(3) and sysconf(3). The second form of the command, with two mandatory arguments, retrieves file- and file system-specific configuration variables using pathconf(2). The third form, with a single argument, retrieves system configuration variables using confstr(3) and sysconf(3), depending on the type of variable. As an extension, the second form can also be used to query static limits from <limits.h>. All sysconf(3) and pathconf(2) variables use the same name as the manifest constants defined in the relevant standard C-language bindings, including any leading underscore or prefix. That is to say, system_var might be ARG_MAX or _POSIX_VERSION, as opposed to the sysconf(3) names _SC_ARG_MAX or _SC_POSIX_VERSION. Variables retrieved from confstr(3) have the leading ‘_CS_’ stripped off; thus, _CS_PATH is queried by a system_var of “PATH”. Programming Environments The -v environment option specifies an IEEE Std 1003.1-2001 (“POSIX.1”) programming environment under which the values are to be queried. This option currently does nothing, but may in the future be used to select between 32-bit and 64-bit execution environments on platforms which support both. Specifying an environment which is not supported on the current execution platform gives undefined results. The standard programming environments are as follows: POSIX_V6_ILP32_OFF32 Exactly 32-bit integer, long, pointer, and file offset. Supported platforms: None. POSIX_V6_ILP32_OFFBIG Exactly 32-bit integer, long, and pointer; at least 64-bit file offset. 
Supported platforms: IA32, PowerPC. POSIX_V6_LP64_OFF64 Exactly 32-bit integer; exactly 64-bit long, pointer, and file offset. Supported platforms: AMD64, SPARC64. POSIX_V6_LPBIG_OFFBIG At least 32-bit integer; at least 64-bit long, pointer, and file offset. Supported platforms: None. The command: getconf POSIX_V6_WIDTH_RESTRICTED_ENVS returns a newline-separated list of environments in which the width of certain fundamental types is no greater than the width of the native C type long. At present, all programming environments supported by FreeBSD have this property. Several of the confstr(3) variables provide information on the necessary compiler and linker flags to use the standard programming environments described above. Many of these values are also available through the sysctl(8) mechanism. EXIT STATUS The getconf utility exits 0 on success, and >0 if an error occurs.
getconf – retrieve standard configuration variables
getconf -a [file] getconf [-v environment] path_var file getconf [-v environment] system_var
null
The command: getconf PATH will display the system default setting for the PATH environment variable. The command: getconf NAME_MAX /tmp will display the maximum length of a filename in the /tmp directory. The command: getconf -v POSIX_V6_LPBIG_OFFBIG LONG_MAX will display the maximum value of the C type long in the POSIX_V6_LPBIG_OFFBIG programming environment, if the system supports that environment. DIAGNOSTICS Use of a system_var or path_var which is completely unrecognized is considered an error, causing a diagnostic message to be written to standard error. One which is known but merely undefined does not result in an error indication. The getconf utility recognizes all of the variables defined for IEEE Std 1003.1-2001 (“POSIX.1”), including those which are not currently implemented. SEE ALSO pathconf(2), confstr(3), sysconf(3), sysctl(8) STANDARDS The getconf utility is expected to be compliant with IEEE Std 1003.1-2001 (“POSIX.1”). HISTORY The getconf utility first appeared in FreeBSD 5.0. AUTHORS Garrett A. Wollman <wollman@lcs.mit.edu> macOS 14.5 September 15, 2017 macOS 14.5
kpasswd
The kpasswd command is used to change a Kerberos principal's password. kpasswd first prompts for the current Kerberos password, then prompts the user twice for the new password, and the password is changed. If the principal is governed by a policy that specifies the length and/or number of character classes required in the new password, the new password must conform to the policy. (The five character classes are lower case, upper case, numbers, punctuation, and all other characters.)
kpasswd - change a user's Kerberos password
kpasswd [principal]
principal Change the password for the Kerberos principal principal. Otherwise, kpasswd uses the principal name from an existing ccache if there is one; if not, the principal is derived from the identity of the user invoking the kpasswd command. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. SEE ALSO kadmin(1), kadmind(8), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KPASSWD(1)
null
dist_package_tool
null
null
null
null
null
vis
vis is a filter for converting non-printable characters into a visual representation. It differs from ‘cat -v’ in that the form is unique and invertible. By default, all non-graphic characters except space, tab, and newline are encoded. A detailed description of the various visual formats is given in vis(3). The options are as follows: -b Turns off prepending of backslash before up-arrow control sequences and meta characters, and disables the doubling of backslashes. This produces output which is neither invertible nor precise, but does represent a minimum of change to the input. It is similar to “cat -v”. (VIS_NOSLASH) -c Request a format which displays a small subset of the non-printable characters using C-style backslash sequences. (VIS_CSTYLE) -e extra Also encode characters in extra, per svis(3). -F foldwidth Causes vis to fold output lines to foldwidth columns (default 80), like fold(1), except that a hidden newline sequence is used (which is removed when inverting the file back to its original form with unvis(1)). If the last character in the encoded file does not end in a newline, a hidden newline sequence is appended to the output. This makes the output usable with various editors and other utilities which typically don't work with partial lines. -f Same as -F. -h Encode using the URI encoding from RFC 1808. (VIS_HTTPSTYLE) -l Mark newlines with the visible sequence ‘\$’, followed by the newline. -M Encode all shell meta characters (implies -S, -w, -g) (VIS_META) -m Encode using the MIME Quoted-Printable encoding from RFC 2045. (VIS_MIMESTYLE) -N Turn on the VIS_NOLOCALE flag which encodes using the “C” locale, removing any encoding dependencies caused by the current locale settings specified in the environment. -n Turns off any encoding, except for the fact that backslashes are still doubled and hidden newline sequences inserted if -f or -F is selected. When combined with the -f flag, vis becomes like an invertible version of the fold(1) utility. 
That is, the output can be unfolded by running the output through unvis(1). -o Request a format which displays non-printable characters as an octal number, \ddd. (VIS_OCTAL) -S Encode shell meta-characters that are non-white space or glob. (VIS_SHELL) -s Only characters considered unsafe to send to a terminal are encoded. This flag allows backspace, bell, and carriage return in addition to the default space, tab and newline. (VIS_SAFE) -t Tabs are also encoded. (VIS_TAB) -w White space (space-tab-newline) is also encoded. (VIS_WHITE) MULTIBYTE CHARACTER SUPPORT vis supports multibyte character input. The encoding conversion is influenced by the setting of the LC_CTYPE environment variable which defines the set of characters that can be copied without encoding. When 8-bit data is present in the input, LC_CTYPE must be set to the correct locale or to the C locale. If the locales of the data and the conversion are mismatched, multibyte character recognition may fail and encoding will be performed byte-by-byte instead. ENVIRONMENT LC_CTYPE Specify the locale of the input data. Set to C if the input data locale is unknown.
vis – display non-printable characters in a visual format
vis [-bcfhlMmNnoSstw] [-e extra] [-F foldwidth] [file ...]
null
Visualize characters encoding white spaces and tabs: $ echo -e "\x10\n\t" | vis -w -t \^P\012\011\012 Same as above but using `\$' for newline followed by an actual newline: $ echo -e "\x10\n\t" | vis -w -t -l \^P\$ \011\$ Visualize string using URI encoding: $ echo http://www.freebsd.org | vis -h http%3a%2f%2fwww.freebsd.org%0a SEE ALSO unvis(1), svis(3), vis(3) HISTORY The vis command appeared in 4.4BSD. Multibyte character support was added in NetBSD 7.0 and FreeBSD 9.2. macOS 14.5 February 18, 2021 macOS 14.5
dbiproxy
This tool is just a front end for the DBI::ProxyServer package. All it does is pick options from the command line and call DBI::ProxyServer::main(). See DBI::ProxyServer for details. Available options include: --chroot=dir (UNIX only) After doing a bind(), change root directory to the given directory by doing a chroot(). This is useful for security, but it restricts the environment a lot. For example, you need to load DBI drivers in the config file or you have to create hard links to Unix sockets, if your drivers are using them. For example, with MySQL, a config file might contain the following lines: my $rootdir = '/var/dbiproxy'; my $unixsockdir = '/tmp'; my $unixsockfile = 'mysql.sock'; foreach $dir ($rootdir, "$rootdir$unixsockdir") { mkdir $dir, 0755; } link("$unixsockdir/$unixsockfile", "$rootdir$unixsockdir/$unixsockfile"); require DBD::mysql; { 'chroot' => $rootdir, ... } If you don't know chroot(), think of an FTP server where you can see a certain directory tree only after logging in. See also the --group and --user options. --configfile=file Config files are assumed to return a single hash ref that overrides the arguments of the new method. However, command line arguments in turn take precedence over the config file. See the "CONFIGURATION FILE" section in the DBI::ProxyServer documentation for details on the config file. --debug Turn debugging mode on. Mainly this asserts that logging messages of level "debug" are created. --facility=mode (UNIX only) Facility to use for Sys::Syslog. The default is daemon. --group=gid After doing a bind(), change the real and effective GID to the given value. This is useful, if you want your server to bind to a privileged port (<1024), but don't want the server to execute as root. See also the --user option. GIDs can be passed as group names or numeric values. --localaddr=ip By default a daemon is listening to any IP number that a machine has. This attribute allows one to restrict the server to the given IP number. 
--localport=port This attribute sets the port on which the daemon is listening. It must be given somehow, as there's no default. --logfile=file By default, logging messages will be written to the syslog (Unix) or to the event log (Windows NT). On other operating systems you need to specify a log file. The special value "STDERR" forces logging to stderr. See Net::Daemon::Log for details. --mode=modename The server can run in three different modes, depending on the environment. If you are running Perl 5.005 and did compile it for threads, then the server will create a new thread for each connection. The thread will execute the server's Run() method and then terminate. This mode is the default; you can force it with "--mode=threads". If threads are not available, but you have a working fork(), then the server will behave similarly by creating a new process for each connection. This mode will be used automatically in the absence of threads or if you use the "--mode=fork" option. Finally there's a single-connection mode: If the server has accepted a connection, it will enter the Run() method. No other connections are accepted until the Run() method returns (if the client disconnects). This operation mode is useful if you have neither threads nor fork(), for example on the Macintosh. For debugging purposes you can force this mode with "--mode=single". --pidfile=file (UNIX only) If this option is present, a PID file will be created at the given location. Default is to not create a pidfile. --user=uid After doing a bind(), change the real and effective UID to the given value. This is useful, if you want your server to bind to a privileged port (<1024), but don't want the server to execute as root. See also the --group and the --chroot options. UIDs can be passed as user names or numeric values. --version Suppresses startup of the server; instead the version string will be printed and the program exits immediately. 
AUTHOR Copyright (c) 1997 Jochen Wiedmann Am Eisteich 9 72555 Metzingen Germany Email: joe@ispsoft.de Phone: +49 7123 14881 The DBI::ProxyServer module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. In particular permission is granted to Tim Bunce for distributing this as a part of the DBI. SEE ALSO DBI::ProxyServer, DBD::Proxy, DBI perl v5.34.0 2024-04-13 DBIPROXY(1)
dbiproxy - A proxy server for the DBD::Proxy driver
dbiproxy <options> --localport=<port>
null
null
ipcrm
The ipcrm utility removes the specified message queues, semaphores and shared memory segments. These System V IPC objects can be specified by their creation ID or any associated key. The following options are used to specify which IPC objects will be removed. Any number and combination of these options can be used: -q msqid Remove the message queue associated with the ID msqid from the system. -m shmid Mark the shared memory segment associated with ID shmid for removal. This marked segment will be destroyed after the last detach. -s semid Remove the semaphore set associated with ID semid from the system. -Q msgkey Remove the message queue associated with key msgkey from the system. -M shmkey Mark the shared memory segment associated with key shmkey for removal. This marked segment will be destroyed after the last detach. -S semkey Remove the semaphore set associated with key semkey from the system. The identifiers and keys associated with these System V IPC objects can be determined by using ipcs(1). SEE ALSO ipcs(1) AUTHORS The original author was Adam Glass. The wiping of all System V IPC objects was thought up by Callum Gibson and extended and implemented by Edwin Groothuis. macOS 14.5 December 12, 2007 macOS 14.5
ipcrm – remove the specified message queues, semaphore sets, and shared segments
ipcrm [-q msqid] [-m shmid] [-s semid] [-Q msgkey] [-M shmkey] [-S semkey] ...
null
null
bputil
This utility is not meant for normal users or even sysadmins. It provides unabstracted access to capabilities which are normally handled for the user automatically when changing the security policy through GUIs such as the Startup Security Utility in macOS Recovery (“recoveryOS”). It is possible to make your system security much weaker and therefore easier to compromise using this tool. This tool is not to be used in production environments. It is possible to render your system unbootable with this tool. It should only be used to understand how the security of Apple Silicon Macs works. Use at your own risk! bputil performs actions by calling the BootPolicy library. This modifies the security configuration of the system, which is stored in a file called the LocalPolicy. This file is digitally signed by the Secure Enclave Processor (SEP). The private key which is used to sign the LocalPolicy is protected by a separate key which is only accessible when a user has put in their password as part of a successful authentication. This is why this tool must either have a username and password specified on the command line, or via the interactive prompt. macOS 12 Monterey introduced a new concept of “paired recoveryOS”, and a new set of restrictions related to it. Every installation of macOS 12 has its own paired recoveryOS with matching version stored on the same APFS volume group. Installations of macOS 11 Big Sur are paired to a single recoveryOS stored on a separate APFS volume group called “system recoveryOS”. By design, the SEP application which is responsible for making changes to the LocalPolicy will inspect the boot state of the main Application Processor (AP), and the pairing status between the booted OS and the target LocalPolicy. It will only allow the below security-downgrading operations if it detects that the AP is in the intended boot state, and the OS pairing status is valid. 
When System Integrity Protection (SIP) was first introduced to Macs, it was decided that requiring a reboot to recoveryOS would provide intentional friction which would make it harder for malicious software to downgrade the system. That precedent is extended here to detect the special boot to recoveryOS via holding the power key at boot time. We refer to this as One True Recovery (1TR), and most of the below downgrade options will only work when booted into 1TR, not when called from normal macOS or any other OS environment. This helps ensure that only a physically-present user, not malicious software running in macOS, can permanently downgrade the security settings. The below CLI options specify what boot environments a downgrade can be performed from. The pairing restrictions are enforced as follows: - All installations of macOS 11 are paired to the system recoveryOS. If a macOS 11 installation is selected to boot by default, then the system recoveryOS will be booted by holding down the power key at boot time. The system recoveryOS can downgrade security settings of any macOS 11 installations, but not any installations of macOS 12. - Every installation of macOS 12 is paired to a recoveryOS stored on the corresponding APFS volume group. If a macOS 12 installation is selected to boot by default, then its paired recoveryOS will be booted by holding down the power key at boot time. The paired recoveryOS can downgrade security settings for the paired macOS installation, but not any other macOS installation. The SEP-signed LocalPolicy is evaluated at boot time by iBoot. Configurations within the LocalPolicy change iBoot's behavior, such as whether it will require that all boot objects must be signed with metadata specific to the particular machine (a “personalized” signature, which is the default, and the always-required policy on iOS), or whether it will accept “global” signatures which are valid for all units of a specific hardware model. 
The LocalPolicy can also influence other boot or OS security behavior as described in the below options. -u, --username username Used to specify the username for a user with access to the signing key to authenticate the change. If this is specified the below password option is required too. If this is not specified, an interactive prompt will request the username. -p, --password password Used to specify the password for a user with access to the signing key to authenticate the change. If this is specified the above username option is required too. If this is not specified, an interactive prompt will request the password. -v, --vuid AABBCCDD-EEFF-0011-2233-445566778899 Specify the APFS Volume Group UUID of the OS intended to have its policy changed. If no option is specified, and there are multiple OS installations detected, an interactive prompt will request the UUID. The Volume Group UUID for a given OS can be found with 'diskutil apfs listVolumeGroups'. -l, --debug-logging Enables verbose logging to assist in debugging any issues associated with changing the policy. -d, --display-policy Display the detailed contents of the LocalPolicy. This will show specific 4-character “tags” in the Apple Image4 data structure which is used to capture the customer-specified security policy. More details about the displayed entries are available in the “Apple Platform Security” website documentation. If the system has multiple bootable OSes, an interactive prompt will ask to select an OS volume to display the policy for. -e, --display-all-policies Display the detailed contents of the LocalPolicy for every bootable OS installation. -j, --json Switch display mode to JSON. Can only be combined with --display-policy and --display-all-policies. -r, --remove AABBCCDD-EEFF-0011-2233-445566778899 Remove macOS and paired recoveryOS local policies for a given Volume Group UUID. Boot environment requirements: software-launched recoveryOS or 1TR. Pairing requirements: None. 
-f, --full-security Changes security mode to Full Security. This option is mutually exclusive with all options below which cause security downgrades. Full Security is effectively a LocalPolicy which is in its default state, lacking all available security downgrades. Full Security also performs an online check at software install and upgrade time to ensure that only the latest version of software can be installed. This prevents accidentally installing old software which has known vulnerabilities in it. If the security is downgraded away from Full Security, and then re-upgraded to Full Security, the online check will be performed, and if the software is no longer the latest available, it will not be possible to set it to Full Security again. Because online checks are only performed at software installation, upgrade, and Full Security policy setting time, it is possible for an OS to report that it is Full Security despite not being the latest software version. Full Security only indicates the state as of the latest install or upgrade. Boot environment requirements: None. Pairing requirements: None. -g, --reduced-security Selecting this option will make your system easier to compromise! This changes the security mode to Reduced Security. Reduced Security will use the “global” digital signature for macOS, in order to allow running software which is not the latest version. Anything other than the latest software may therefore have security vulnerabilities. At a high level, Reduced Security does not necessarily require the latest software, but it does still require all software be digitally signed by Apple or 3rd party software developers. Passing this option will explicitly recreate the LocalPolicy from scratch (i.e. it does not preserve any existing security policy options), and only the options specified via this tool will exist in the output local policy. Boot environment requirements: software-launched recoveryOS or 1TR. Pairing requirements: None. 
-n, --permissive-security Selecting this option will make your system easier to compromise! This changes the security mode to Permissive Security. Permissive Security uses the same “global” digital signature for macOS as the above Reduced Security option, in order to allow running software which is not the latest version. Anything other than the latest software may therefore have security vulnerabilities. At a high level, Permissive Security allows configuration options to be set to not require all software to be digitally signed. This can allow users who are not part of the Apple Developer program to still be able to introduce their own software into their system. Additionally, especially dangerous security downgrades may be restricted to Permissive Security, and only available via CLI tools for power users rather than GUIs. Passing this option will explicitly recreate the LocalPolicy from scratch (i.e. it does not preserve any existing security policy options), and only the options specified via this tool will exist in the output local policy. Boot environment requirements: 1TR. Pairing requirements: Paired only. -k, --enable-kexts Because this option automatically downgrades to Reduced Security mode if not already true, selecting this option will make your system easier to compromise! The AuxiliaryKernelCache is a SEP-signed boot object which can be verified and loaded into kernel memory before that memory is restricted to being non-writable by a “Configurable Text Read-only Region” (CTRR) hardware register. Introducing 3rd party kernel extensions can introduce architectural or implementation flaws into the kernel, which can lead to system compromise. In order to achieve iOS-like security properties, 3rd party kexts must be denied by default, and only loadable if the customer consciously opts in to lowering their security from 1TR. Boot environment requirements: 1TR. Pairing requirements: Paired only. 
-c, --disable-kernel-ctrr Because this option automatically downgrades to Permissive Security mode if not already true, selecting this option will make your system easier to compromise! This disables the enforcement of the “Configurable Text Read-only Region” (CTRR) hardware register that marks kernel memory as non-writable. This is sometimes required for performing actions such as using dynamic DTrace code hooks to profile kernel behavior or perform 3rd party kernel extension debugging. However, the lack of CTRR enforcement makes it much easier for an attacker to modify the kernel with exploits. Boot environment requirements: 1TR. Pairing requirements: Paired only. -a, --disable-boot-args-restriction Because this option automatically downgrades to Permissive Security mode if not already true, selecting this option will make your system easier to compromise! The macOS kernel accepts a variety of configuration options via an nvram variable named “boot-args”. However, some of these options direct the kernel to reduce some security enforcement. In order to achieve iOS-like security properties, this security-downgrading behavior needs to be denied by default, and only available if the customer consciously opts in to lowering their security from 1TR. Boot environment requirements: 1TR. Pairing requirements: Paired only. -s, --disable-ssv Because this option automatically downgrades to Permissive Security mode if not already true, selecting this option will make your system easier to compromise! The Signed System Volume is a mechanism to digitally sign and verify all data from the System volume (where the primary macOS software is stored). The result is that malware cannot directly manipulate executables there in order to achieve persistent execution, or manipulate the data stored there in order to try to exploit programs. This option disables Signed System Volume integrity enforcement, to allow customers to modify the System volume. 
SSV cannot be disabled while FileVault is enabled. Customer modifications to the System volume are not expected to persist across software updates. Boot environment requirements: 1TR. Pairing requirements: Paired only. -m, --enable-mdm Because this option automatically downgrades to Reduced Security mode if not already true, selecting this option will make your system easier to compromise! Enables remote MDM management of software updates & kernel extensions. After this option is set, the MDM can install older software with known vulnerabilities, or 3rd party kernel extensions with architectural or implementation flaws which can lead to kernel compromise. Therefore this requires a person to explicitly approve this capability for the MDM. Boot environment requirements: 1TR. Pairing requirements: Paired only. HISTORY bputil first appeared in macOS 11 for Apple Silicon Macs. Darwin September 1, 2020 Darwin
bputil – Utility to precisely modify the security settings on Apple Silicon Macs.
bputil [-ldejfgnmkcas] [-u username] [-p password] [-v APFS Volume Group UUID] [-r APFS Volume Group UUID]
null
null
net-snmp-config
The net-snmp-config shell script is designed to retrieve the configuration information about the libraries and binaries dealing with the Simple Network Management Protocol (SNMP), built from the net-snmp source package. The information is particularly useful for applications that need to link against the SNMP libraries and hence must know about any other libraries that must be linked in as well.
net-snmp-config - returns information about installed net-snmp libraries and binaries
net-snmp-config [OPTIONS]
--version displays the net-snmp version number --indent-options displays the indent options from the Coding Style --debug-tokens displays an example command line to search the source code for a list of available debug tokens SNMP Setup commands: --create-snmpv3-user [-ro] [-a authpass] [-x privpass] [-X DES|AES] [-A MD5|SHA] [username] These options produce the various compilation flags needed when building external SNMP applications: --base-cflags lists additional compilation flags needed for external applications (excludes -I. and extra developer warning flags, if any) --cflags lists additional compilation flags needed --libs lists libraries needed for building applications --agent-libs lists libraries needed for building subagents --netsnmp-libs lists netsnmp specific libraries --external-libs lists libraries needed by netsnmp libs --netsnmp-agent-libs lists netsnmp specific agent libraries --external-agent-libs lists libraries needed by netsnmp agent libs Automated subagent building (produces an OUTPUTNAME binary file): [This feature has not been extensively tested, use at your own risk.] --compile-subagent OUTPUTNAME [--norm] [--cflags flags] [--ldflags flags] mibmodule1.c [...] --norm leave the generated .c file around to read. --cflags flags extra cflags to use (e.g. -I...). --ldflags flags extra ld flags to use (e.g. -L... -l...). Details on how the net-snmp package was compiled: --configure-options Display original configure arguments --snmpd-module-list Display the modules compiled into the agent --prefix Display the installation prefix V5.6.2.1 16 Nov 2006 net-snmp-config(1)
null
otool
null
null
null
null
null
lwp-dump5.30
The lwp-dump program will get the resource identified by the URL and then dump the response object to STDOUT. This will display the headers returned and the initial part of the content, escaped so that it's safe to display even binary content. The escape syntax used is the same as for Perl's double quoted strings. If there is no content, the string "(no content)" is shown in its place. The following options are recognized: --agent string Override the user agent string passed to the server. --keep-client-headers LWP internally generates various "Client-*" headers that are stripped by lwp-dump in order to show the headers exactly as the server provided them. This option suppresses that stripping. --max-length n How much of the content to show. The default is 512. Set this to 0 for unlimited. If the content is longer, the string is chopped at the limit and "...\n(### more bytes not shown)" is appended. --method string Use the given method for the request instead of the default "GET". --parse-head By default lwp-dump will not try to initialize headers by looking at the head section of HTML documents. This option enables this. This corresponds to "parse_head" in LWP::UserAgent. --request Also dump the request sent. SEE ALSO lwp-request, LWP, "dump" in HTTP::Message perl v5.30.3 2020-04-14 LWP-DUMP(1)
lwp-dump - See what headers and content are returned for a URL
lwp-dump [ options ] URL
null
null
killall
The killall utility kills processes selected by name, as opposed to the selection by PID as done by kill(1). By default, it will send a TERM signal to all processes with a real UID identical to the caller of killall that match the name procname. The super-user is allowed to kill any process.

The options are as follows:

-d      Be more verbose about what will be done, but do not send any signal. The total number of user processes and the real user ID is shown. A list of the processes that will be sent the signal will be printed, or a message indicating that no matching processes have been found.

-e      Use the effective user ID instead of the (default) real user ID for matching processes specified with the -u option.

-help   Print help on the command usage and exit.

-I      Request confirmation before attempting to signal each process.

-l      List the names of the available signals and exit, as in kill(1).

-m      Match the argument procname as a (case sensitive) regular expression against the names of processes found. CAUTION! This is dangerous; a single dot will match any process running under the real UID of the caller.

-v      Be verbose about what will be done.

-s      Same as -v, but do not send any signal.

-SIGNAL Send a different signal instead of the default TERM. The signal may be specified either as a name (with or without a leading “SIG”), or numerically.

-u user Limit potentially matching processes to those belonging to the specified user.

-t tty  Limit potentially matching processes to those running on the specified tty.

-c procname
        Limit potentially matching processes to those matching the specified procname.

-q      Suppress the error message if no processes are matched.

-z      Do not skip zombies. This should not have any effect except to print a few error messages if there are zombie processes that match the specified pattern.

ALL PROCESSES
Sending a signal to all processes with the given UID is already supported by kill(1). So use kill(1) for this job (e.g. 
“kill -TERM -1” or, as root, “echo kill -TERM -1 | su -m <user>”).

IMPLEMENTATION NOTES
This FreeBSD implementation of killall has completely different semantics from the traditional UNIX System V behavior of killall. The latter kills all processes that the current user is able to kill, and is intended to be used only by the system shutdown process.

EXIT STATUS
The killall utility exits 0 if some processes have been found and signalled successfully. Otherwise, a status of 1 will be returned.
killall – kill processes by name
killall [-delmsvqz] [-help] [-I] [-u user] [-t tty] [-c procname] [-SIGNAL] [procname ...]
null
Send SIGTERM to all firefox processes:
      killall firefox

Send SIGTERM to firefox processes belonging to USER:
      killall -u ${USER} firefox

Stop all firefox processes:
      killall -SIGSTOP firefox

Resume firefox processes:
      killall -SIGCONT firefox

Show what would be done to firefox processes, but do not actually signal them:
      killall -s firefox

Send SIGTERM to all processes matching the provided pattern (like vim and vimdiff):
      killall -m 'vim*'

DIAGNOSTICS
Diagnostic messages will only be printed if the -d flag is used.

SEE ALSO
kill(1), pkill(1), sysctl(3)

HISTORY
The killall command appeared in FreeBSD 2.1. It has been modeled after the killall command as available on other platforms.

AUTHORS
The killall program was originally written in Perl and was contributed by Wolfram Schneider; this manual page was written by Jörg Wunsch. The current version of killall was rewritten in C by Peter Wemm using sysctl(3).

macOS 14.5 June 27, 2020 macOS 14.5
patch
patch takes a patch file patchfile containing a difference listing produced by the diff program and applies those differences to one or more original files, producing patched versions. Normally the patched versions are put in place of the originals. Backups can be made; see the -b or --backup option. The names of the files to be patched are usually taken from the patch file, but if there's just one file to be patched it can be specified on the command line as originalfile. Upon startup, patch attempts to determine the type of the diff listing, unless overruled by a -c (--context), -e (--ed), -n (--normal), or -u (--unified) option. Context diffs (old-style, new-style, and unified) and normal diffs are applied by the patch program itself, while ed diffs are simply fed to the ed(1) editor via a pipe. patch tries to skip any leading garbage, apply the diff, and then skip any trailing garbage. Thus you could feed an article or message containing a diff listing to patch, and it should work. If the entire diff is indented by a consistent amount, if lines end in CRLF, or if a diff is encapsulated one or more times by prepending "- " to lines starting with "-" as specified by Internet RFC 934, this is taken into account. After removing indenting or encapsulation, lines beginning with # are ignored, as they are considered to be comments. With context diffs, and to a lesser extent with normal diffs, patch can detect when the line numbers mentioned in the patch are incorrect, and attempts to find the correct place to apply each hunk of the patch. As a first guess, it takes the line number mentioned for the hunk, plus or minus any offset used in applying the previous hunk. If that is not the correct place, patch scans both forwards and backwards for a set of lines matching the context given in the hunk. First patch looks for a place where all lines of the context match. 
If no such place is found, and it's a context diff, and the maximum fuzz factor is set to 1 or more, then another scan takes place ignoring the first and last line of context. If that fails, and the maximum fuzz factor is set to 2 or more, the first two and last two lines of context are ignored, and another scan is made. (The default maximum fuzz factor is 2.) Hunks with less prefix context than suffix context (after applying fuzz) must apply at the start of the file if their first line number is 1. Hunks with more prefix context than suffix context (after applying fuzz) must apply at the end of the file. If patch cannot find a place to install that hunk of the patch, it puts the hunk out to a reject file, which normally is the name of the output file plus a .rej suffix, or # if .rej would generate a file name that is too long (if even appending the single character # makes the file name too long, then # replaces the file name's last character). The rejected hunk comes out in unified or context diff format. If the input was a normal diff, many of the contexts are simply null. The line numbers on the hunks in the reject file may be different than in the patch file: they reflect the approximate location patch thinks the failed hunks belong in the new file rather than the old one. As each hunk is completed, you are told if the hunk failed, and if so which line (in the new file) patch thought the hunk should go on. If the hunk is installed at a different line from the line number specified in the diff, you are told the offset. A single large offset may indicate that a hunk was installed in the wrong place. You are also told if a fuzz factor was used to make the match, in which case you should also be slightly suspicious. If the --verbose option is given, you are also told about hunks that match exactly. 
If no original file origfile is specified on the command line, patch tries to figure out from the leading garbage what the name of the file to edit is, using the following rules. First, patch takes an ordered list of candidate file names as follows: • If the header is that of a context diff, patch takes the old and new file names in the header. A name is ignored if it does not have enough slashes to satisfy the -pnum or --strip=num option. The name /dev/null is also ignored. • If there is an Index: line in the leading garbage and if either the old and new names are both absent or if patch is conforming to POSIX, patch takes the name in the Index: line. • For the purpose of the following rules, the candidate file names are considered to be in the order (old, new, index), regardless of the order that they appear in the header. Then patch selects a file name from the candidate list as follows: • If some of the named files exist, patch selects the first name if conforming to POSIX, and the best name otherwise. • If patch is not ignoring RCS, ClearCase, Perforce, and SCCS (see the -g num or --get=num option), and no named files exist but an RCS, ClearCase, Perforce, or SCCS master is found, patch selects the first named file with an RCS, ClearCase, Perforce, or SCCS master. • If no named files exist, no RCS, ClearCase, Perforce, or SCCS master was found, some names are given, patch is not conforming to POSIX, and the patch appears to create a file, patch selects the best name requiring the creation of the fewest directories. • If no file name results from the above heuristics, you are asked for the name of the file to patch, and patch selects that name. To determine the best of a nonempty list of file names, patch first takes all the names with the fewest path name components; of those, it then takes all the names with the shortest basename; of those, it then takes all the shortest names; finally, it takes the first remaining name. 
Additionally, if the leading garbage contains a Prereq: line, patch takes the first word from the prerequisites line (normally a version number) and checks the original file to see if that word can be found. If not, patch asks for confirmation before proceeding. The upshot of all this is that you should be able to say, while in a news interface, something like the following: | patch -d /usr/src/local/blurfl and patch a file in the blurfl directory directly from the article containing the patch. If the patch file contains more than one patch, patch tries to apply each of them as if they came from separate patch files. This means, among other things, that it is assumed that the name of the file to patch must be determined for each diff listing, and that the garbage before each diff listing contains interesting things such as file names and revision level, as mentioned previously.
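The typical workflow described above can be sketched end to end with a small shell session (file and directory names here are illustrative):

```shell
# Create an "old" and a "new" tree with a file that differs between them.
mkdir -p old new work
printf 'hello\nworld\n' > old/greet.txt
printf 'hello\nthere\nworld\n' > new/greet.txt

# Generate a unified diff in the recommended form.
# diff exits with status 1 when the files differ, so ignore that status.
diff -Naur old new > greet.patch || true

# Apply the patch to a copy of the old file; -p1 strips the leading
# "old/"/"new/" path component, and -d selects the working directory.
cp old/greet.txt work/
patch -d work -p1 < greet.patch
```

After applying, work/greet.txt matches new/greet.txt.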
patch - apply a diff file to an original
patch [options] [originalfile [patchfile]] but usually just patch -pnum <patchfile
-b or --backup
      Make backup files. That is, when patching a file, rename or copy the original instead of removing it. When backing up a file that does not exist, an empty, unreadable backup file is created as a placeholder to represent the nonexistent file. See the -V or --version-control option for details about how backup file names are determined.

--backup-if-mismatch
      Back up a file if the patch does not match the file exactly and if backups are not otherwise requested. This is the default unless patch is conforming to POSIX.

--no-backup-if-mismatch
      Do not back up a file if the patch does not match the file exactly and if backups are not otherwise requested. This is the default if patch is conforming to POSIX.

-B pref or --prefix=pref
      Use the simple method to determine backup file names (see the -V method or --version-control method option), and append pref to a file name when generating its backup file name. For example, with -B /junk/ the simple backup file name for src/patch/util.c is /junk/src/patch/util.c.

--binary
      Write all files in binary mode, except for standard output and /dev/tty. When reading, disable the heuristic for transforming CRLF line endings into LF line endings. This option is needed on POSIX systems when applying patches generated on non-POSIX systems to non-POSIX files. (On POSIX systems, file reads and writes never transform line endings. On Windows, reads and writes do transform line endings by default, and patches should be generated by diff --binary when line endings are significant.)

-c or --context
      Interpret the patch file as an ordinary context diff.

-d dir or --directory=dir
      Change to the directory dir immediately, before doing anything else.

-D define or --ifdef=define
      Use the #ifdef ... #endif construct to mark changes, with define as the differentiating symbol.

--dry-run
      Print the results of applying the patches without actually changing any files.

-e or --ed
      Interpret the patch file as an ed script. 
-E or --remove-empty-files Remove output files that are empty after the patches have been applied. Normally this option is unnecessary, since patch can examine the time stamps on the header to determine whether a file should exist after patching. However, if the input is not a context diff or if patch is conforming to POSIX, patch does not remove empty patched files unless this option is given. When patch removes a file, it also attempts to remove any empty ancestor directories. -f or --force Assume that the user knows exactly what he or she is doing, and do not ask any questions. Skip patches whose headers do not say which file is to be patched; patch files even though they have the wrong version for the Prereq: line in the patch; and assume that patches are not reversed even if they look like they are. This option does not suppress commentary; use -s for that. -F num or --fuzz=num Set the maximum fuzz factor. This option only applies to diffs that have context, and causes patch to ignore up to that many lines of context in looking for places to install a hunk. Note that a larger fuzz factor increases the odds of a faulty patch. The default fuzz factor is 2. A fuzz factor greater than or equal to the number of lines of context in the context diff, ordinarily 3, ignores all context. -g num or --get=num This option controls patch's actions when a file is under RCS or SCCS control, and does not exist or is read-only and matches the default version, or when a file is under ClearCase or Perforce control and does not exist. If num is positive, patch gets (or checks out) the file from the revision control system; if zero, patch ignores RCS, ClearCase, Perforce, and SCCS and does not get the file; and if negative, patch asks the user whether to get the file. The default value of this option is given by the value of the PATCH_GET environment variable if it is set; if not, the default value is zero. --help Print a summary of options and exit. 
-i patchfile or --input=patchfile Read the patch from patchfile. If patchfile is -, read from standard input, the default. -l or --ignore-whitespace Match patterns loosely, in case tabs or spaces have been munged in your files. Any sequence of one or more blanks in the patch file matches any sequence in the original file, and sequences of blanks at the ends of lines are ignored. Normal characters must still match exactly. Each line of the context must still match a line in the original file. --merge or --merge=merge or --merge=diff3 Merge a patch file into the original files similar to diff3(1) or merge(1). If a conflict is found, patch outputs a warning and brackets the conflict with <<<<<<< and >>>>>>> lines. A typical conflict will look like this: <<<<<<< lines from the original file ||||||| original lines from the patch ======= new lines from the patch >>>>>>> The optional argument of --merge determines the output format for conflicts: the diff3 format shows the ||||||| section with the original lines from the patch; in the merge format, this section is missing. The merge format is the default. This option implies --forward and does not take the --fuzz=num option into account. -n or --normal Interpret the patch file as a normal diff. -N or --forward When a patch does not apply, patch usually checks if the patch looks like it has been applied already by trying to reverse-apply the first hunk. The --forward option prevents that. See also -R. -o outfile or --output=outfile Send output to outfile instead of patching files in place. Do not use this option if outfile is one of the files to be patched. When outfile is -, send output to standard output, and send any messages that would usually go to standard output to standard error. -pnum or --strip=num Strip the smallest prefix containing num leading slashes from each file name found in the patch file. A sequence of one or more adjacent slashes is counted as a single slash. 
This controls how file names found in the patch file are treated, in case you keep your files in a different directory than the person who sent out the patch. For example, supposing the file name in the patch file was /u/howard/src/blurfl/blurfl.c setting -p0 gives the entire file name unmodified, -p1 gives u/howard/src/blurfl/blurfl.c without the leading slash, -p4 gives blurfl/blurfl.c and not specifying -p at all just gives you blurfl.c. Whatever you end up with is looked for either in the current directory, or the directory specified by the -d option. --posix Conform more strictly to the POSIX standard, as follows. • Take the first existing file from the list (old, new, index) when intuiting file names from diff headers. • Do not remove files that are empty after patching. • Do not ask whether to get files from RCS, ClearCase, Perforce, or SCCS. • Require that all options precede the files in the command line. • Do not backup files when there is a mismatch. --quoting-style=word Use style word to quote output names. The word should be one of the following: literal Output names as-is. shell Quote names for the shell if they contain shell metacharacters or would cause ambiguous output. shell-always Quote names for the shell, even if they would normally not require quoting. c Quote names as for a C language string. escape Quote as with c except omit the surrounding double-quote characters. You can specify the default value of the --quoting-style option with the environment variable QUOTING_STYLE. If that environment variable is not set, the default value is shell. -r rejectfile or --reject-file=rejectfile Put rejects into rejectfile instead of the default .rej file. When rejectfile is -, discard rejects. -R or --reverse Assume that this patch was created with the old and new files swapped. (Yes, I'm afraid that does happen occasionally, human nature being what it is.) patch attempts to swap each hunk around before applying it. 
Rejects come out in the swapped format. The -R option does not work with ed diff scripts because there is too little information to reconstruct the reverse operation. If the first hunk of a patch fails, patch reverses the hunk to see if it can be applied that way. If it can, you are asked if you want to have the -R option set. If it can't, the patch continues to be applied normally. (Note: this method cannot detect a reversed patch if it is a normal diff and if the first command is an append (i.e. it should have been a delete) since appends always succeed, due to the fact that a null context matches anywhere. Luckily, most patches add or change lines rather than delete them, so most reversed normal diffs begin with a delete, which fails, triggering the heuristic.) --read-only=behavior Behave as requested when trying to modify a read-only file: ignore the potential problem, warn about it (the default), or fail. --reject-format=format Produce reject files in the specified format (either context or unified). Without this option, rejected hunks come out in unified diff format if the input patch was of that format, otherwise in ordinary context diff form. -s or --silent or --quiet Work silently, unless an error occurs. --follow-symlinks When looking for input files, follow symbolic links. Replaces the symbolic links, instead of modifying the files the symbolic links point to. Git-style patches to symbolic links will no longer apply. This option exists for backwards compatibility with previous versions of patch; its use is discouraged. -t or --batch Suppress questions like -f, but make some different assumptions: skip patches whose headers do not contain file names (the same as -f); skip patches for which the file has the wrong version for the Prereq: line in the patch; and assume that patches are reversed if they look like they are. -T or --set-time Set the modification and access times of patched files from time stamps given in context diff headers. 
Unless specified in the time stamps, assume that the context diff headers use local time. Use of this option with time stamps that do not include time zones is not recommended, because patches using local time cannot easily be used by people in other time zones, and because local time stamps are ambiguous when local clocks move backwards during daylight-saving time adjustments. Make sure that time stamps include time zones, or generate patches with UTC and use the -Z or --set-utc option instead.

-u or --unified
      Interpret the patch file as a unified context diff.

-v or --version
      Print out patch's revision header and patch level, and exit.

-V method or --version-control=method
      Use method to determine backup file names. The method can also be given by the PATCH_VERSION_CONTROL (or, if that's not set, the VERSION_CONTROL) environment variable, which is overridden by this option. The method does not affect whether backup files are made; it affects only the names of any backup files that are made. The value of method is like the GNU Emacs `version-control' variable; patch also recognizes synonyms that are more descriptive. The valid values for method are (unique abbreviations are accepted):

      existing or nil
              Make numbered backups of files that already have them, otherwise simple backups. This is the default.

      numbered or t
              Make numbered backups. The numbered backup file name for F is F.~N~ where N is the version number.

      simple or never
              Make simple backups. The -B or --prefix, -Y or --basename-prefix, and -z or --suffix options specify the simple backup file name. If none of these options are given, then a simple backup suffix is used; it is the value of the SIMPLE_BACKUP_SUFFIX environment variable if set, and is .orig otherwise.

      With numbered or simple backups, if the backup file name is too long, the backup suffix ~ is used instead; if even appending ~ would make the name too long, then ~ replaces the last character of the file name. 
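A brief runnable sketch of the simple backup behaviour (file names here are illustrative): with -b and the default .orig suffix, the pre-patch contents survive next to the patched file:

```shell
# A file and a patched version of it.
printf 'one\n' > note.txt
printf 'two\n' > note.new
diff -u note.txt note.new > note.patch || true  # diff exits 1 on differences

# -b requests a backup; -V simple forces simple backups, whose default
# suffix is .orig (overridable via SIMPLE_BACKUP_SUFFIX or -z).
patch -b -V simple note.txt < note.patch
```

Afterwards note.txt holds the patched contents and note.txt.orig the originals.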
--verbose Output extra information about the work being done. -x num or --debug=num Set internal debugging flags of interest only to patch patchers. -Y pref or --basename-prefix=pref Use the simple method to determine backup file names (see the -V method or --version-control method option), and prefix pref to the basename of a file name when generating its backup file name. For example, with -Y .del/ the simple backup file name for src/patch/util.c is src/patch/.del/util.c. -z suffix or --suffix=suffix Use the simple method to determine backup file names (see the -V method or --version-control method option), and use suffix as the suffix. For example, with -z - the backup file name for src/patch/util.c is src/patch/util.c-. -Z or --set-utc Set the modification and access times of patched files from time stamps given in context diff headers. Unless specified in the time stamps, assume that the context diff headers use Coordinated Universal Time (UTC, often known as GMT). Also see the -T or --set-time option. The -Z or --set-utc and -T or --set-time options normally refrain from setting a file's time if the file's original time does not match the time given in the patch header, or if its contents do not match the patch exactly. However, if the -f or --force option is given, the file time is set regardless. Due to the limitations of diff output format, these options cannot update the times of files whose contents have not changed. Also, if you use these options, you should remove (e.g. with make clean) all files that depend on the patched files, so that later invocations of make do not get confused by the patched files' times. ENVIRONMENT PATCH_GET This specifies whether patch gets missing or read-only files from RCS, ClearCase, Perforce, or SCCS by default; see the -g or --get option. POSIXLY_CORRECT If set, patch conforms more strictly to the POSIX standard by default: see the --posix option. QUOTING_STYLE Default value of the --quoting-style option. 
SIMPLE_BACKUP_SUFFIX Extension to use for simple backup file names instead of .orig.

TMPDIR, TMP, TEMP Directory to put temporary files in; patch uses the first environment variable in this list that is set. If none are set, the default is system-dependent; it is normally /tmp on Unix hosts.

VERSION_CONTROL or PATCH_VERSION_CONTROL Selects version control style; see the -V or --version-control option.

FILES
$TMPDIR/p* temporary files
/dev/tty controlling terminal; used to get answers to questions asked of the user

SEE ALSO
diff(1), ed(1), merge(1).
Marshall T. Rose and Einar A. Stefferud, Proposed Standard for Message Encapsulation, Internet RFC 934 <URL:ftp://ftp.isi.edu/in-notes/rfc934.txt> (1985-01).

NOTES FOR PATCH SENDERS
There are several things you should bear in mind if you are going to be sending out patches. Create your patch systematically. A good method is the command

      diff -Naur old new

where old and new identify the old and new directories. The names old and new should not contain any slashes. The diff command's headers should have dates and times in Universal Time using traditional Unix format, so that patch recipients can use the -Z or --set-utc option. Here is an example command, using Bourne shell syntax:

      LC_ALL=C TZ=UTC0 diff -Naur gcc-2.7 gcc-2.8

Tell your recipients how to apply the patch by telling them which directory to cd to, and which patch options to use. The option string -Np1 is recommended. Test your procedure by pretending to be a recipient and applying your patch to a copy of the original files.

You can save people a lot of grief by keeping a patchlevel.h file which is patched to increment the patch level as the first diff in the patch file you send out. If you put a Prereq: line in with the patch, it won't let them apply patches out of order without some warning.

You can create a file by sending out a diff that compares /dev/null or an empty file dated the Epoch (1970-01-01 00:00:00 UTC) to the file you want to create. 
This only works if the file you want to create doesn't exist already in the target directory. Conversely, you can remove a file by sending out a context diff that compares the file to be deleted with an empty file dated the Epoch. The file will be removed unless patch is conforming to POSIX and the -E or --remove-empty-files option is not given. An easy way to generate patches that create and remove files is to use GNU diff's -N or --new-file option. If the recipient is supposed to use the -pN option, do not send output that looks like this: diff -Naur v2.0.29/prog/README prog/README --- v2.0.29/prog/README Mon Mar 10 15:13:12 1997 +++ prog/README Mon Mar 17 14:58:22 1997 because the two file names have different numbers of slashes, and different versions of patch interpret the file names differently. To avoid confusion, send output that looks like this instead: diff -Naur v2.0.29/prog/README v2.0.30/prog/README --- v2.0.29/prog/README Mon Mar 10 15:13:12 1997 +++ v2.0.30/prog/README Mon Mar 17 14:58:22 1997 Avoid sending patches that compare backup file names like README.orig, since this might confuse patch into patching a backup file instead of the real file. Instead, send patches that compare the same base file names in different directories, e.g. old/README and new/README. Take care not to send out reversed patches, since it makes people wonder whether they already applied the patch. Try not to have your patch modify derived files (e.g. the file configure where there is a line configure: configure.in in your makefile), since the recipient should be able to regenerate the derived files anyway. If you must send diffs of derived files, generate the diffs using UTC, have the recipients apply the patch with the -Z or --set-utc option, and have them remove any unpatched files that depend on patched files (e.g. with make clean). 
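The file-creation technique described above can be sketched concretely (names here are illustrative): a unified diff against /dev/null, applied in a directory where the file does not yet exist, creates it:

```shell
# Generate a patch that creates hello.txt from nothing.
printf 'created by patch\n' > hello.txt
diff -u /dev/null hello.txt > create.patch || true  # diff exits 1 on differences

# Apply it in a fresh directory; the /dev/null name is ignored, the
# other name (hello.txt) is selected, and the file is created there.
mkdir -p fresh
patch -d fresh -p0 < create.patch
```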
While you may be able to get away with putting 582 diff listings into one file, it may be wiser to group related patches into separate files in case something goes haywire. DIAGNOSTICS Diagnostics generally indicate that patch couldn't parse your patch file. If the --verbose option is given, the message Hmm... indicates that there is unprocessed text in the patch file and that patch is attempting to intuit whether there is a patch in that text and, if so, what kind of patch it is. patch's exit status is 0 if all hunks are applied successfully, 1 if some hunks cannot be applied or there were merge conflicts, and 2 if there is more serious trouble. When applying a set of patches in a loop it behooves you to check this exit status so you don't apply a later patch to a partially patched file. CAVEATS Context diffs cannot reliably represent the creation or deletion of empty files, empty directories, or special files such as symbolic links. Nor can they represent changes to file metadata like ownership, permissions, or whether one file is a hard link to another. If changes like these are also required, separate instructions (e.g. a shell script) to accomplish them should accompany the patch. patch cannot tell if the line numbers are off in an ed script, and can detect bad line numbers in a normal diff only when it finds a change or deletion. A context diff using fuzz factor 3 may have the same problem. You should probably do a context diff in these cases to see if the changes made sense. Of course, compiling without errors is a pretty good indication that the patch worked, but not always. patch usually produces the correct results, even when it has to do a lot of guessing. However, the results are guaranteed to be correct only when the patch is applied to exactly the same version of the file that the patch was generated from. COMPATIBILITY ISSUES The POSIX standard specifies behavior that differs from patch's traditional behavior. 
You should be aware of these differences if you must interoperate with patch versions 2.1 and earlier, which do not conform to POSIX.

• In traditional patch, the -p option's operand was optional, and a bare -p was equivalent to -p0. The -p option now requires an operand, and -p 0 is now equivalent to -p0. For maximum compatibility, use options like -p0 and -p1. Also, traditional patch simply counted slashes when stripping path prefixes; patch now counts pathname components. That is, a sequence of one or more adjacent slashes now counts as a single slash. For maximum portability, avoid sending patches containing // in file names.

• In traditional patch, backups were enabled by default. This behavior is now enabled with the -b or --backup option. Conversely, in POSIX patch, backups are never made, even when there is a mismatch. In GNU patch, this behavior is enabled with the --no-backup-if-mismatch option, or by conforming to POSIX with the --posix option or by setting the POSIXLY_CORRECT environment variable. The -b suffix option of traditional patch is equivalent to the -b -z suffix options of GNU patch.

• Traditional patch used a complicated (and incompletely documented) method to intuit the name of the file to be patched from the patch header. This method did not conform to POSIX, and had a few gotchas. Now patch uses a different, equally complicated (but better documented) method that is optionally POSIX-conforming; we hope it has fewer gotchas. The two methods are compatible if the file names in the context diff header and the Index: line are all identical after prefix-stripping. Your patch is normally compatible if each header's file names all contain the same number of slashes.

• When traditional patch asked the user a question, it sent the question to standard error and looked for an answer from the first file in the following list that was a terminal: standard error, standard output, /dev/tty, and standard input. 
Now patch sends questions to standard output and gets answers from /dev/tty. Defaults for some answers have been changed so that patch never goes into an infinite loop when using default answers. • Traditional patch exited with a status value that counted the number of bad hunks, or with status 1 if there was real trouble. Now patch exits with status 1 if some hunks failed, or with 2 if there was real trouble. • Limit yourself to the following options when sending instructions meant to be executed by anyone running GNU patch, traditional patch, or a patch that conforms to POSIX. Spaces are significant in the following list, and operands are required. -c -d dir -D define -e -l -n -N -o outfile -pnum -R -r rejectfile BUGS Please report bugs via email to <bug-patch@gnu.org>. If code has been duplicated (for instance with #ifdef OLDCODE ... #else ... #endif), patch is incapable of patching both versions, and, if it works at all, will likely patch the wrong one, and tell you that it succeeded to boot. If you apply a patch you've already applied, patch thinks it is a reversed patch, and offers to un-apply the patch. This could be construed as a feature. Computing how to merge a hunk is significantly harder than using the standard fuzzy algorithm. Bigger hunks, more context, a bigger offset from the original location, and a worse match all slow the algorithm down. COPYING Copyright (C) 1984, 1985, 1986, 1988 Larry Wall. Copyright (C) 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2009 Free Software Foundation, Inc. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. 
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be included in translations approved by the copyright holders instead of in the original English. AUTHORS Larry Wall wrote the original version of patch. Paul Eggert removed patch's arbitrary limits; added support for binary files, setting file times, and deleting files; and made it conform better to POSIX. Other contributors include Wayne Davison, who added unidiff support, and David MacKenzie, who added configuration and backup support. Andreas Grünbacher added support for merging. GNU PATCH(1)
tailspin
tailspin configures the system to continuously sample callstacks of processes and select kdebug events in the kernel trace buffer. When tailspin data is recorded to a file, the tailspin file will contain information about the system state from about 20s prior to the save. The tailspind daemon is a helper daemon for the tailspin feature and should not be run manually. COLLECTING TAILSPIN DATA When tailspin is enabled, data can be collected using the keychord Shift-Control-Option-Command-Comma. When the command is completed, a Finder window will pop up with the saved tailspin file. SUBCOMMANDS tailspin uses a subcommand syntax to separate different functionality into logical groups. Each subcommand takes its own set of options. info Print information about the current configuration of tailspin. enable Enable tailspin collection. Enablement persists across reboots and upgrade installs. disable Stop tailspin collection. Disablement persists across reboots and upgrade installs. tailspin can be enabled again after it has been disabled, using the same configuration. set Configure the four tunable parameters of tailspin. Any change applied will persist across reboots and upgrade installs. buffer-size buffer-size-mb Set the size of the kernel trace buffer to buffer-size-mb megabytes. ktrace-filter-descriptor (add:|remove:)filter-desc Apply the filter-desc to the tailspin configuration, thereby controlling which events are traced by tailspin. See FILTER DESCRIPTIONS for the syntax of a filter. The filter may be prefixed with "add:" or "remove:" to modify an existing filter rather than replace it entirely. oncore-sampling-period period-in-ns Set up a timer in the tailspin configuration to sample the threads that are on the CPU when the timer fires every period-in-ns. The minimum period allowed is 1 ms. "disabled" may be used to disable the oncore sampling timer.
full-system-sampling-period period-in-ns Set up a timer in the tailspin configuration to sample all threads of all processes when the timer fires every period-in-ns. The minimum period allowed is 10 ms. "disabled" may be used to disable the full sampling timer. sampling-option (add:|remove:)options Apply the sampling options specified by options to the tailspin configuration, thereby controlling what sampling is enabled by tailspin. See SAMPLING OPTIONS for the syntax of the sampling options. reset [buffer-size-mb|ktrace-filter-descriptor|oncore-sampling-period|full-system-sampling-period] Remove all custom configuration of tailspin and reset to the system default, or reset a specific setting to the system default. save [-r reason-string] [-l num-seconds] [-n] [path-to-file] Save the current contents of the kernel trace buffer containing tailspin data to path-to-file. -r reason-string Include a key in the tailspin file indicating why it was saved. This reason can be viewed with tailspin stat. -l num-seconds Limit the data in the tailspin file to the last num-seconds. -n Save the tailspin file without symbolicating. augment [-d] [-s] [-l] [-L path-to-log-archive] path-to-file Augment the tailspin report at path-to-file with additional information like symbols, os logs and os signposts. If not used with -d, needs to be run on the same device and build on which the tailspin file was saved. stat [-v] [-s] path-to-file Print aggregate information about the data in the tailspin file. -v Print layout information of the tailspin file. -s Sort ktrace statistics by frequency of trace class/subclass. Default sorting is by class/subclass code. FILTER DESCRIPTIONS A filter description is a comma-separated list of class and subclass specifiers that indicate which events should be traced. A class specifier starts with ‘C’ followed by a number between 0 and 255 inclusive, specified in either decimal or hex (when prepended with "0x"). A subclass specifier starts with ‘S’ and takes two bytes.
The high byte is the class and the low byte is the subclass of that class. For example, this filter description would enable classes 0x1 and 0x25 and the subclasses 0x21 and 0x23 of class 0x5: ‘C1,C0x25,S0x0521,S0x0523’. The ‘ALL’ filter description enables events from all classes. SAMPLING OPTIONS Sampling options are specified via a comma-separated list of recognized names that indicate what sampling should be enabled/disabled. The names that are recognized are: ‘cswitch-sampling’ and ‘syscall-sampling’. VIEWING TAILSPIN DATA tailspin data can be viewed with ktrace(1), spindump(8) and fs_usage(1). DIAGNOSTICS The tailspin utility exits 0 on success, and >0 if an error occurs. SEE ALSO ktrace(1), fs_usage(1), spindump(8) Darwin 22 June 2016 Darwin
tailspin – configure, save and print tailspin output
tailspin info tailspin enable tailspin disable tailspin set buffer-size buffer-size-mb ktrace-filter-descriptor (add:|remove:)filter-desc oncore-sampling-period period-in-ns|disabled full-system-sampling-period period-in-ns|disabled sampling-option (add:|remove:)options tailspin reset [buffer-size-mb|ktrace-filter-descriptor|oncore-sampling-period|full-system-sampling-period] tailspin save [-r reason-string] [-l num-seconds] [-n] path-to-file tailspin augment [-s] [-d] [-a] [-l] [-L path-to-log-archive] path-to-file tailspin stat [-v] [-s] path-to-file
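The subclass-specifier encoding described under FILTER DESCRIPTIONS (high byte = class, low byte = subclass) can be sketched in shell; the class and subclass values below are arbitrary illustrations, not tailspin defaults:

```shell
# Build a subclass specifier for class 0x05, subclass 0x21.
class=5
subclass=33
spec=$(printf 'S0x%02X%02X' "$class" "$subclass")
echo "$spec"               # S0x0521

# Combine with class specifiers into a full filter description:
filter="C1,C0x25,$spec"
echo "$filter"             # C1,C0x25,S0x0521

# It could then be applied like this (requires root; macOS only):
#   sudo tailspin set ktrace-filter-descriptor "add:$filter"
```

The final `tailspin set` invocation is shown only as a comment since it modifies system state and requires the macOS tailspin daemon.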
su
The su utility requests appropriate user credentials via PAM and switches to that user ID (the default user is the superuser). A shell is then executed. PAM is used to set the policy su(1) will use. In particular, by default only users in the “admin” or “wheel” groups can switch to UID 0 (“root”). This group requirement may be changed by modifying the “pam_group” section of /etc/pam.d/su. See pam_group(8) for details on how to modify this setting. By default, the environment is unmodified with the exception of USER, HOME, and SHELL. HOME and SHELL are set to the target login's default values. USER is set to the target login, unless the target login has a user ID of 0, in which case it is unmodified. The invoked shell is the one belonging to the target login. This is the traditional behavior of su. The options are as follows: -f If the invoked shell is csh(1), this option prevents it from reading the “.cshrc” file. -l Simulate a full login. The environment is discarded except for HOME, SHELL, PATH, TERM, and USER. HOME and SHELL are modified as above. USER is set to the target login. PATH is set to “/bin:/usr/bin”. TERM is imported from your current environment. The invoked shell is the target login's, and su will change directory to the target login's home directory. - (no letter) The same as -l. -m Leave the environment unmodified. The invoked shell is your login shell, and no directory changes are made. As a security precaution, if the target user's shell is a non-standard shell (as defined by getusershell(3)) and the caller's real uid is non-zero, su will fail. The -l (or -) and -m options are mutually exclusive; the last one specified overrides any previous ones. If the optional args are provided on the command line, they are passed to the login shell of the target login. Note that all command line arguments before the target login name are processed by su itself; everything after the target login name gets passed to the login shell.
By default (unless the prompt is reset by a startup file) the super-user prompt is set to “#” to remind one of its awesome power. ENVIRONMENT Environment variables used by su: HOME Default home directory of real user ID unless modified as specified above. PATH Default search path of real user ID unless modified as specified above. TERM Provides terminal type which may be retained for the substituted user ID. USER The user ID is always the effective ID (the target user ID) after an su unless the user ID is 0 (root). FILES /etc/pam.d/su PAM configuration for su.
su – substitute user identity
su [-] [-flm] [login [args]]
su -m operator -c poweroff Starts a shell as user operator, and runs the command poweroff. You will be asked for operator's password unless your real UID is 0. Note that the -m option is required since user “operator” does not have a valid shell by default. In this example, -c is passed to the shell of the user “operator”, and is not interpreted as an argument to su. su -m operator -c 'shutdown -p now' Same as above, but the target command consists of more than a single word and hence is quoted for use with the -c option being passed to the shell. (Most shells expect the argument to -c to be a single word). su -l foo Simulate a login for user foo. su - foo Same as above. su - Simulate a login for root. SEE ALSO csh(1), sh(1), group(5), passwd(5), environ(7), pam_group(8) HISTORY A su command appeared in Version 1 AT&T UNIX. macOS 14.5 March 26, 2020 macOS 14.5
ul
The ul utility reads the named files (or standard input if none are given) and translates occurrences of underscores to the sequence which indicates underlining for the terminal in use, as specified by the environment variable TERM. The file /etc/termcap is read to determine the appropriate sequences for underlining. If the terminal is incapable of underlining, but is capable of a standout mode then that is used instead. If the terminal can overstrike, or handles underlining automatically, ul degenerates to cat(1). If the terminal cannot underline, underlining is ignored. The following options are available: -i Underlining is indicated by a separate line containing appropriate dashes ‘-’. -t terminal Overrides the terminal type specified in the environment with terminal. ENVIRONMENT The LANG, LC_ALL, LC_CTYPE and TERM environment variables affect the execution of ul as described in environ(7). EXIT STATUS The ul utility exits 0 on success, and >0 if an error occurs. SEE ALSO man(1), mandoc(1) HISTORY The ul command appeared in 3.0BSD. macOS 14.5 October 7, 2020 macOS 14.5
ul – do underlining
ul [-i] [-t terminal] [file ...]
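The overstrike sequence ul translates is underscore, backspace, character (the classic nroff underline convention). A quick way to see the fallback behavior described above is to feed such input to ul with a terminal type that cannot underline; the "dumb" terminal type is used here as an example:

```shell
# '_' backspace 'b' marks 'b' as underlined; on a terminal with no
# underline or standout capability ul ignores the underlining and
# emits only the plain text.
printf 'a_\bb\n' | ul -t dumb
```

On a capable terminal type (e.g. your own TERM), the same input would instead be wrapped in the terminal's underline escape sequences.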
compression_tool
compression_tool encodes (compresses) or decodes (uncompresses) files using the Compression library.
compression_tool – encode/decode files using the Compression library.
compression_tool -encode | -decode [-a algorithm] [-A algorithm] [-i input_file] [-o output_file] [-v] [-h]
-encode Encode (compress) the input -decode Decode (uncompress) the input -a algorithm Set the compression algorithm, valid options are zlib, lzma, lzfse, lz4, lz4_raw. Default is lzfse. - zlib raw DEFLATE payload, as defined in IETF RFC 1951, encoder is zlib level 5, - lzma LZMA2 payload inside a XZ container, encoder is LZMA2 preset 6, - lz4 raw LZ4 payload inside a simple frame format (described in compression.h), - lz4_raw raw LZ4 payload, - lzfse LZFSE payload. -A algorithm Enable block compression, and set compression algorithm, valid options are zlib, lzma, lzfse, lz4. Default is lzfse. -b block_size Set block size for block compression. The integer value can be followed by m or k or b. -t thread_count Set the number of worker threads to use for block compression/decompression. Default is the number of logical threads on the machine. -i input_file Input file. If omitted, read from standard input. -o output_file Output file. If omitted, write to standard output. -v Increase verbosity. Default is silent operation. -h Print usage and exit. BLOCK COMPRESSION FILE FORMAT The file starts with a 4-byte header 'p','b','z',<algo>, where <algo> indicates the algorithm used to compress data. The header is followed by the 64-bit block size in bytes. Then for each block, we have 64-bit uncompressed size (will match the block size, except possibly for the last block), 64-bit compressed size, and the compressed payload. If both uncompressed and compressed sizes for a block are equal, the payload is stored uncompressed. All 64-bit values are stored big-endian. Valid values for <algo> are: 'z' for zlib, 'x' for lzma, '4' for lz4, and 'e' for lzfse. macOS January 4, 2023 macOS
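The 12-byte prefix of the block-compression format described above can be illustrated in shell. This writes the magic and block size by hand as a sketch of the layout; it is not output produced by compression_tool itself, and the 1 MiB block size is an arbitrary choice:

```shell
# Magic: 'p','b','z',<algo>; 'e' selects lzfse. The magic is followed
# by the block size as a big-endian 64-bit integer.
out=$(mktemp)
printf 'pbze' > "$out"
blocksize=1048576
# Emit the size one byte at a time, most significant byte first.
for shift in 56 48 40 32 24 16 8 0; do
  printf "\\$(printf '%03o' $(( (blocksize >> shift) & 255 )))"
done >> "$out"
od -An -tx1 "$out"   # 70 62 7a 65 00 00 00 00 00 10 00 00
```

After this prefix, a real file would continue with per-block records: 64-bit uncompressed size, 64-bit compressed size, then the payload.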
dbiproxy
This tool is just a front end for the DBI::ProxyServer package. All it does is pick options from the command line and call DBI::ProxyServer::main(). See DBI::ProxyServer for details. Available options include: --chroot=dir (UNIX only) After doing a bind(), change root directory to the given directory by doing a chroot(). This is useful for security, but it restricts the environment a lot. For example, you need to load DBI drivers in the config file or you have to create hard links to Unix sockets, if your drivers are using them. For example, with MySQL, a config file might contain the following lines: my $rootdir = '/var/dbiproxy'; my $unixsockdir = '/tmp'; my $unixsockfile = 'mysql.sock'; foreach $dir ($rootdir, "$rootdir$unixsockdir") { mkdir $dir, 0755; } link("$unixsockdir/$unixsockfile", "$rootdir$unixsockdir/$unixsockfile"); require DBD::mysql; { 'chroot' => $rootdir, ... } If you don't know chroot(), think of an FTP server where you can see a certain directory tree only after logging in. See also the --group and --user options. --configfile=file Config files are assumed to return a single hash ref that overrides the arguments of the new method. However, command line arguments in turn take precedence over the config file. See the "CONFIGURATION FILE" section in the DBI::ProxyServer documentation for details on the config file. --debug Turn debugging mode on. Mainly this asserts that logging messages of level "debug" are created. --facility=mode (UNIX only) Facility to use for Sys::Syslog. The default is daemon. --group=gid After doing a bind(), change the real and effective GID to the given GID. This is useful if you want your server to bind to a privileged port (<1024), but don't want the server to execute as root. See also the --user option. GID's can be passed as group names or numeric values. --localaddr=ip By default the daemon listens on any IP number that a machine has. This attribute allows one to restrict the server to the given IP number.
--localport=port This attribute sets the port on which the daemon is listening. It must be given somehow, as there's no default. --logfile=file By default, logging messages will be written to the syslog (Unix) or to the event log (Windows NT). On other operating systems you need to specify a log file. The special value "STDERR" forces logging to stderr. See Net::Daemon::Log for details. --mode=modename The server can run in three different modes, depending on the environment. If you are running Perl 5.005 and did compile it for threads, then the server will create a new thread for each connection. The thread will execute the server's Run() method and then terminate. This mode is the default; you can force it with "--mode=threads". If threads are not available, but you have a working fork(), then the server will behave similarly by creating a new process for each connection. This mode will be used automatically in the absence of threads or if you use the "--mode=fork" option. Finally there's a single-connection mode: if the server has accepted a connection, it will enter the Run() method. No other connections are accepted until the Run() method returns (when the client disconnects). This operation mode is useful if you have neither threads nor fork(), for example on the Macintosh. For debugging purposes you can force this mode with "--mode=single". --pidfile=file (UNIX only) If this option is present, a PID file will be created at the given location. Default is to not create a pidfile. --user=uid After doing a bind(), change the real and effective UID to the given UID. This is useful if you want your server to bind to a privileged port (<1024), but don't want the server to execute as root. See also the --group and the --chroot options. UID's can be passed as user names or numeric values. --version Suppresses startup of the server; instead the version string will be printed and the program exits immediately.
AUTHOR Copyright (c) 1997 Jochen Wiedmann Am Eisteich 9 72555 Metzingen Germany Email: joe@ispsoft.de Phone: +49 7123 14881 The DBI::ProxyServer module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. In particular permission is granted to Tim Bunce for distributing this as a part of the DBI. SEE ALSO DBI::ProxyServer, DBD::Proxy, DBI perl v5.30.3 2024-04-13 DBIPROXY(1)
dbiproxy - A proxy server for the DBD::Proxy driver
dbiproxy <options> --localport=<port>
tkcon
TkCon is a replacement for the standard console that comes with Tk (on Windows/Mac, but also works on Unix). The console itself provides many more features than the standard console. TkCon works on all platforms where Tcl/Tk is available. It is meant primarily to aid one when working with the little details inside Tcl and Tk, giving Unix users the GUI console provided by default in the Mac and Windows Tk.
tkcon - Tk console replacement
tkcon [{option value | tcl_script} ...]
Except for -rcfile, command line arguments are handled after the TkCon resource file is sourced, but before the slave interpreter or the TkCon user interface is initialized. -rcfile is handled right before it would be sourced, allowing you to specify any alternate file. Command line arguments are passed to each new console and will be evaluated by each. To prevent this from happening, you have to say tkcon main set argv {}; tkcon main set argc 0 For these options, any unique substring is allowed. -argv (also --) Causes TkCon to stop evaluating arguments and set the remaining args to be argv/argc (with -- prepended). This carries over for any further consoles. This is meant only for wrapping TkCon around programs that require their own arguments. -color-<color> color Sets the requested color type to the specified color for tkcon. See tkconrc(5) for the recognized <color> names. -eval tcl_script (also -main or -e) A Tcl script to eval in each main interpreter. This is evaluated after the resource file is loaded and the slave interpreter is created. Multiple -eval switches will be recognized (in order). -exec slavename Sets the named slave that tkcon operates in. In general, this is only useful to set to "" (empty), indicating to tkcon to avoid the multi-interpreter model and operate in the main environment. When this is empty, any further arguments will be only used in the first tkcon console and not passed onto further new consoles. This is useful when using tkcon as a console for extended wish executables that don't load their commands into slave interpreters. -font font Sets the font that tkcon uses for its text windows. If this isn't a fixed width font, tkcon will override it. -nontcl TCL_BOOLEAN Sets ::tkcon::OPT(nontcl) to TCL_BOOLEAN (see tkconrc(5)). Needed when attaching to non-Tcl interpreters. -package package_name (also -load) Packages to automatically load into the slave interpreters (i.e. "Tk").
-rcfile filename Specify an alternate tkcon resource file name. -root widgetname Makes the named widget the root name of all consoles (i.e. .tkcon). -slave tcl_script A Tcl script to eval in each slave interpreter. This will append the one specified in the tkcon resource file, if any. KEY BINDINGS Most of the bindings are the same as for the text widget. Some have been modified to make sure that the integrity of the console is maintained. Others have been added to enhance the usefulness of the console. Only the modified or new bindings are listed here. Control-x or Cut (on Sparc5 keyboards) Cut. Control-c or Copy (on Sparc5 keyboards) Copy. Control-v or Paste (on Sparc5 keyboards) Paste. Insert Insert (duh). Up Goes up one level in the command line history when the cursor is on the prompt line, otherwise it moves through the buffer. Down Goes down one level in the command line history when the cursor is on the last line of the buffer, otherwise it moves through the buffer. Control-p Goes up one level in the command line history. Control-n Goes down one level in the command line history. Tab Tries to expand file path names, then variable names, then proc names. Escape Tries to expand file path names. Control-P Tries to expand procedure names. The procedure names will be those that are actually in the attached interpreter (unless nontcl is specified, in which case it always does the lookup in the default slave interpreter). Control-V Tries to expand variable names (those returned by [info vars]). Its search behavior is like that for procedure names. Return or Enter Evaluates the current command line if it is a complete command, otherwise it just goes to a new line. Control-a Go to the beginning of the current command line. Control-l Clear the entire console buffer. Control-r Searches backwards in the history for any command that contains the string in the current command line. Repeatable to search farther back. The matching substring of the found command will blink.
Control-s As above, but searches forward (only useful if you searched too far back). Control-t Transposes characters. Control-u Clears the current command line. Control-z Saves current command line in a buffer that can be retrieved with another Control-z. If the current command line is empty, then any saved command is retrieved without being overwritten, otherwise the current contents get swapped with what's in the saved command buffer. Control-Key-1 Attaches console to the console's slave interpreter. Control-Key-2 Attaches console to the console's master interpreter. Control-Key-3 Attaches console to main TkCon interpreter. Control-A Pops up the "About" dialog. Control-N Creates a new console. Each console has separate state, including its own widget hierarchy (it's a slave interpreter). Control-q Close the current console OR Quit the program (depends on the value of ::tkcon::TKCON(slaveexit)). Control-w Closes the current console. Closing the main console will exit the program (something has to control all the slaves...). TkCon also has electric bracing (similar to that in emacs). It will highlight matching pairs of {}'s, []'s, ()'s and ""'s. For the first three, if there is no matching left element for the right, then it blinks the entire current command line. For the double quote, if there is no proper match then it just blinks the current double quote character. It does properly recognize most escaping (except escaped escapes), but does not look for commenting (why would you interactively put comments in?). COMMANDS There are several new procedures introduced in TkCon to improve productivity and/or account for lost functionality in the Tcl environment that users are used to in native environments. There are also some redefined procedures. Here is a non-comprehensive list: alias ?sourceCmd targetCmd ?arg arg ...?? Simple alias mechanism. It will overwrite existing commands. When called without args, it returns current aliases.
Note that TkCon makes some aliases for you (in slaves). Don't delete those. clear ?percentage? Clears the text widget. Same as the <Control-l> binding, except this will accept a percentage of the buffer to clear (1-100, 100 default). dir ?-all? ?-full? ?-long? ?pattern pattern ...? Cheap way to get directory listings. Uses glob style pattern matching. dump type ?-nocomplain? ?-filter pattern? ?--? pattern ?pattern ...? The dump command provides a way for the user to spit out state information about the interpreter in a Tcl readable (and human readable) form. See dump(n) for details. echo ?arg arg ...? Concatenates the args and spits the result to the console (stdout). edit ?-type type? ?-find str? ?-attach interp? arg Opens an editor with the data from arg. The optional type argument can be one of: proc, var or file. For proc or var, the arg may be a pattern. idebug command ?args? Interactive debugging command. See idebug(n) for details. lremove ?-all? ?-regexp -glob? list items Removes one or more items from a list and returns the new list. If -all is specified, it removes all instances of each item in the list. If -regexp or -glob is specified, it interprets each item in the items list as a regexp or glob pattern to match against. less Aliased to edit. ls Aliased to dir -full. more Aliased to edit. observe type ?args? This command provides passive runtime debugging output for variables and commands. See observe(n) for details. puts (same options as always) Redefined to put the output into TkCon. tkcon method ?args? Multi-purpose command. See tkcon(n) for details. tclindex ?-extensions patternlist? ?-index TCL_BOOLEAN? ?-package TCL_BOOLEAN? ?dir1 dir2 ...? Convenience proc to update the "tclIndex" (controlled by -index switch) and/or "pkgIndex.tcl" (controlled by -package switch) file in the named directories based on the given pattern for files. It defaults to creating the "tclIndex" but not the "pkgIndex.tcl" file, with the directory defaulting to [pwd]. 
The extension defaults to *.tcl, with *.[info sharelibextension] added when -package is true. unalias cmd Removes the alias for cmd. what string The what command will identify the word given in string in the Tcl environment and return a list of types that it was recognized as. Possible types are: alias, procedure, command, array variable, scalar variable, directory, file, widget, and executable. Used by procedures dump and which. which command Like the which command of Unix shells, this will tell you if a particular command is known, and if so, whether it is internal or external to the interpreter. If it is an internal command and there is a slot in auto_index for it, it tells you the file that auto_index would load. This does not necessarily mean that that is where the file came from, but if it were not in the interpreter previously, then that is where the command was found. There are several procedures that I use as helpers that some may find useful in their own code (e.g. expanding pathnames). Feel free to lift them from the code (but do assign proper attribution). EXAMPLES Some examples of tkcon command line startup situations: megawish /usr/bin/tkcon -exec "" -root .tkcon mainfile.tcl Use tkcon as a console for your megawish application. You can avoid starting the line with megawish if that is the default wish that TkCon would use. The -root ensures that tkcon will not conflict with the application root window. tkcon -font "Courier 12" -load Tk Use the courier font for TkCon and always load Tk in slave interpreters at startup. tkcon -rcfile ~/.wishrc -color-bg white Use the ~/.wishrc file as the resource file, and a white background for TkCon's text widgets. FILES TkCon will search for a resource file in "~/.tkconrc". TkCon never sources the "~/.wishrc" file. The resource file is sourced by each new instance of the console. An example resource file is provided in tkconrc(5).
SEE ALSO dump(n), idebug(n), observe(n), text(n), tkcon(n), tkconrc(5) KEYWORDS Tk, console COPYRIGHT Copyright (c) Jeffrey Hobbs (jeff at hobbs.org) TkCon 2.5 tkcon(1)
ssh
ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and UNIX-domain sockets can also be forwarded over the secure channel. ssh connects and logs into the specified destination, which may be specified as either [user@]hostname or a URI of the form ssh://[user@]hostname[:port]. The user must prove their identity to the remote machine using one of several methods (see below). If a command is specified, it will be executed on the remote host instead of a login shell. A complete command line may be specified as command, or it may have additional arguments. If supplied, the arguments will be appended to the command, separated by spaces, before it is sent to the server to be executed. The options are as follows: -4 Forces ssh to use IPv4 addresses only. -6 Forces ssh to use IPv6 addresses only. -A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file. Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J). -a Disables forwarding of the authentication agent connection. -B bind_interface Bind to the address of bind_interface before attempting to connect to the destination host. This is only useful on systems with more than one address. -b bind_address Use bind_address on the local machine as the source address of the connection. Only useful on systems with more than one address. 
-C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11, TCP and UNIX-domain connections). The compression algorithm is the same used by gzip(1). Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by-host basis in the configuration files; see the Compression option in ssh_config(5). -c cipher_spec Selects the cipher specification for encrypting the session. cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information. -D [bind_address:]port Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file. IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces. -E log_file Append debug logs to log_file instead of standard error. -e escape_char Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. 
The escape character followed by a dot (‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any escapes and makes the session fully transparent. -F configfile Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config. If set to “none”, no configuration files will be read. -f Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm. If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background. Refer to the description of ForkAfterAuthentication in ssh_config(5) for details. -G Causes ssh to print its configuration after evaluating Host and Match blocks and exit. -g Allows remote hosts to connect to local forwarded ports. If used on a multiplexed connection, then this option must be specified on the master process. -I pkcs11 Specify the PKCS#11 shared library ssh should use to communicate with a PKCS#11 token providing keys for user authentication. Use of this option will disable UseKeychain. -i identity_file Selects a file from which the identity (private key) for public key authentication is read. You can also specify a public key file to use the corresponding private key that is loaded in ssh-agent(1) when the private key file is not present locally. The default is ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk and ~/.ssh/id_dsa. 
Identity files may also be specified on a per- host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). If no certificates have been explicitly specified by the CertificateFile directive, ssh will also try to load certificate information from the filename obtained by appending -cert.pub to identity filenames. -J destination Connect to the target host by first making an ssh connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. Note that configuration directives supplied on the command-line generally apply to the destination host and not any specified jump hosts. Use ~/.ssh/config to specify configuration for jump hosts. -K Enables GSSAPI-based authentication and forwarding (delegation) of GSSAPI credentials to the server. -k Disables forwarding (delegation) of GSSAPI credentials to the server. -L [bind_address:]port:host:hostport -L [bind_address:]port:remote_socket -L local_socket:host:hostport -L local_socket:remote_socket Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port, or Unix socket, on the remote side. This works by allocating a socket to listen to either a TCP port on the local side, optionally bound to the specified bind_address, or to a Unix socket. Whenever a connection is made to the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host port hostport, or the Unix socket remote_socket, from the remote machine. Port forwardings can also be specified in the configuration file. Only the superuser can forward privileged ports. IPv6 addresses can be specified by enclosing the address in square brackets. 
By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces. -l login_name Specifies the user to log in as on the remote machine. This also may be specified on a per-host basis in the configuration file. -M Places the ssh client into “master” mode for connection sharing. Multiple -M options place ssh into “master” mode but with confirmation required using ssh-askpass(1) before each operation that changes the multiplexing state (e.g. opening a new session). Refer to the description of ControlMaster in ssh_config(5) for details. -m mac_spec A comma-separated list of MAC (message authentication code) algorithms, specified in order of preference. See the MACs keyword in ssh_config(5) for more information. -N Do not execute a remote command. This is useful for just forwarding ports. Refer to the description of SessionType in ssh_config(5) for details. -n Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.) Refer to the description of StdinNull in ssh_config(5) for details. -O ctl_cmd Control an active connection multiplexing master process. When the -O option is specified, the ctl_cmd argument is interpreted and passed to the master process. 
Valid commands are: “check” (check that the master process is running), “forward” (request forwardings without command execution), “cancel” (cancel forwardings), “exit” (request the master to exit), and “stop” (request the master to stop accepting further multiplexing requests). -o option Can be used to give options in the format used in the configuration file. This is useful for specifying options for which there is no separate command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). AddKeysToAgent AddressFamily BatchMode BindAddress CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers ClearAllForwardings Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist DynamicForward EnableEscapeCommandline EscapeChar ExitOnForwardFailure FingerprintHash ForkAfterAuthentication ForwardAgent ForwardX11 ForwardX11Timeout ForwardX11Trusted GatewayPorts GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LocalCommand LocalForward LogLevel MACs Match NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PermitLocalCommand PermitRemoteOpen PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump ProxyUseFdpass PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RemoteCommand RemoteForward RequestTTY RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SessionType SetEnv StdinNull StreamLocalBindMask StreamLocalBindUnlink StrictHostKeyChecking TCPKeepAlive Tunnel TunnelDevice UpdateHostKeys UseKeychain User UserKnownHostsFile VerifyHostKeyDNS VisualHostKey XAuthLocation -P 
tag Specify a tag name that may be used to select configuration in ssh_config(5). Refer to the Tag and Match keywords in ssh_config(5) for more information. -p port Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file. -Q query_option Queries for the algorithms supported by one of the following features: cipher (supported symmetric ciphers), cipher-auth (supported symmetric ciphers that support authenticated encryption), help (supported query terms for use with the -Q flag), mac (supported message integrity codes), kex (key exchange algorithms), key (key types), key-ca-sign (valid CA signature algorithms for certificates), key-cert (certificate key types), key-plain (non-certificate key types), key-sig (all key types and signature algorithms), protocol-version (supported SSH protocol versions), and sig (supported signature algorithms). Alternatively, any keyword from ssh_config(5) or sshd_config(5) that takes an algorithm list may be used as an alias for the corresponding query_option. -q Quiet mode. Causes most warning and diagnostic messages to be suppressed. -R [bind_address:]port:host:hostport -R [bind_address:]port:local_socket -R remote_socket:host:hostport -R remote_socket:local_socket -R [bind_address:]port Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side. This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client. Port forwardings can also be specified in the configuration file. 
Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6 addresses can be specified by enclosing the address in square brackets. By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)). If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward, the allocated port will be printed to the standard output. -S ctl_path Specifies the location of a control socket for connection sharing, or the string “none” to disable connection sharing. Refer to the description of ControlPath and ControlMaster in ssh_config(5) for details. -s May be used to request invocation of a subsystem on the remote system. Subsystems facilitate the use of SSH as a secure transport for other applications (e.g. sftp(1)). The subsystem is specified as the remote command. Refer to the description of SessionType in ssh_config(5) for details. -T Disable pseudo-terminal allocation. -t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty. -V Display the version number and exit. -v Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3. -W host:port Requests that standard input and output on the client be forwarded to host on port over the secure channel. 
Implies -N, -T, ExitOnForwardFailure and ClearAllForwardings, though these can be overridden in the configuration file or using -o command line options. -w local_tun[:remote_tun] Requests tunnel device forwarding with the specified tun(4) devices between the client (local_tun) and the server (remote_tun). The devices may be specified by numerical ID or the keyword “any”, which uses the next available tunnel device. If remote_tun is not specified, it defaults to “any”. See also the Tunnel and TunnelDevice directives in ssh_config(5). If the Tunnel directive is unset, it will be set to the default tunnel mode, which is “point-to-point”. If a different Tunnel forwarding mode is desired, then it should be specified before -w. -X Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file. X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information. -x Disables X11 forwarding. -Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls. -y Send log information using the syslog(3) system module. By default this information is sent to stderr. ssh may additionally obtain configuration data from a per-user configuration file and a system-wide configuration file. The file format and configuration options are described in ssh_config(5). AUTHENTICATION The OpenSSH SSH client supports SSH protocol 2. 
The methods available for authentication are: GSSAPI-based authentication, host-based authentication, public key authentication, keyboard-interactive authentication, and password authentication. Authentication methods are tried in the order specified above, though PreferredAuthentications can be used to change the default order. Host-based authentication works as follows: If the machine the user logs in from is listed in /etc/hosts.equiv or /etc/shosts.equiv on the remote machine, the user is non-root and the user names are the same on both sides, or if the files ~/.rhosts or ~/.shosts exist in the user's home directory on the remote machine and contain a line containing the name of the client machine and the name of the user on that machine, the user is considered for login. Additionally, the server must be able to verify the client's host key (see the description of /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts, below) for login to be permitted. This authentication method closes security holes due to IP spoofing, DNS spoofing, and routing spoofing. [Note to the administrator: /etc/hosts.equiv, ~/.rhosts, and the rlogin/rsh protocol in general, are inherently insecure and should be disabled if security is desired.] Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are done using separate keys, and it is unfeasible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms. The HISTORY section of ssl(8) contains a brief discussion of the DSA and RSA algorithms. The file ~/.ssh/authorized_keys lists the public keys that are permitted for logging in. 
When the user logs in, the ssh program tells the server which key pair it would like to use for authentication. The client proves that it has access to the private key and the server checks that the corresponding public key is authorized to accept the account. The server may inform the client of errors that prevented public key authentication from succeeding after authentication completes using a different method. These may be viewed by increasing the LogLevel to DEBUG or higher (e.g. by using the -v flag). The user creates their key pair by running ssh-keygen(1). This stores the private key in ~/.ssh/id_dsa (DSA), ~/.ssh/id_ecdsa (ECDSA), ~/.ssh/id_ecdsa_sk (authenticator-hosted ECDSA), ~/.ssh/id_ed25519 (Ed25519), ~/.ssh/id_ed25519_sk (authenticator-hosted Ed25519), or ~/.ssh/id_rsa (RSA) and stores the public key in ~/.ssh/id_dsa.pub (DSA), ~/.ssh/id_ecdsa.pub (ECDSA), ~/.ssh/id_ecdsa_sk.pub (authenticator-hosted ECDSA), ~/.ssh/id_ed25519.pub (Ed25519), ~/.ssh/id_ed25519_sk.pub (authenticator-hosted Ed25519), or ~/.ssh/id_rsa.pub (RSA) in the user's home directory. The user should then copy the public key to ~/.ssh/authorized_keys in their home directory on the remote machine. The authorized_keys file corresponds to the conventional ~/.rhosts file, and has one key per line, though the lines can be very long. After this, the user can log in without giving the password. A variation on public key authentication is available in the form of certificate authentication: instead of a set of public/private keys, signed certificates are used. This has the advantage that a single trusted certification authority can be used in place of many public/private keys. See the CERTIFICATES section of ssh-keygen(1) for more information. The most convenient way to use public key or certificate authentication may be with an authentication agent. See ssh-agent(1) and (optionally) the AddKeysToAgent directive in ssh_config(5) for more information. 
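The key-generation and installation workflow described above can be sketched as follows, assuming ssh-keygen(1) is available; the key type, file path, and remote host name are illustrative, and the final installation step (shown commented out) would normally be done with ssh-copy-id(1) or a manual append to authorized_keys:

```shell
# Generate a throwaway Ed25519 key pair (illustrative path; -N ''
# sets an empty passphrase, acceptable for a demo but not for real keys).
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_id_ed25519

# The public half is a single line: key type, base64 key, comment.
cat /tmp/demo_id_ed25519.pub

# Installing it on a remote host appends that line to
# ~/.ssh/authorized_keys there, e.g. (host name is a placeholder):
#   ssh-copy-id -i /tmp/demo_id_ed25519.pub user@server.example.com
```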
Keyboard-interactive authentication works as follows: The server sends an arbitrary "challenge" text and prompts for a response, possibly multiple times. Examples of keyboard-interactive authentication include BSD Authentication (see login.conf(5)) and PAM (some non-OpenBSD systems). Finally, if other authentication methods fail, ssh prompts the user for a password. The password is sent to the remote host for checking; however, since all communications are encrypted, the password cannot be seen by someone listening on the network. ssh automatically maintains and checks a database containing identification for all hosts it has ever been used with. Host keys are stored in ~/.ssh/known_hosts in the user's home directory. Additionally, the file /etc/ssh/ssh_known_hosts is automatically checked for known hosts. Any new hosts are automatically added to the user's file. If a host's identification ever changes, ssh warns about this and disables password authentication to prevent server spoofing or man-in-the-middle attacks, which could otherwise be used to circumvent the encryption. The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed. When the user's identity has been accepted by the server, the server either executes the given command in a non-interactive session or, if no command has been specified, logs into the machine and gives the user a normal shell as an interactive session. All communication with the remote command or shell will be automatically encrypted. If an interactive session is requested, ssh by default will only request a pseudo-terminal (pty) for interactive sessions when the client has one. The flags -T and -t can be used to override this behaviour. If a pseudo-terminal has been allocated, the user may use the escape characters noted below. If no pseudo-terminal has been allocated, the session is transparent and can be used to reliably transfer binary data. 
On most systems, setting the escape character to “none” will also make the session transparent even if a tty is used. The session terminates when the command or shell on the remote machine exits and all X11 and TCP connections have been closed. ESCAPE CHARACTERS When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character. A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option. The supported escapes (assuming the default ‘~’) are: ~. Disconnect. ~^Z Background ssh. ~# List forwarded connections. ~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate. ~? Display a list of escape characters. ~B Send a BREAK to the remote system (only useful if the peer supports it). ~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and -KD[bind_address:]port for dynamic port-forwardings. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option. ~R Request rekeying of the connection (only useful if the peer supports it). ~V Decrease the verbosity (LogLevel) when errors are being written to stderr. ~v Increase the verbosity (LogLevel) when errors are being written to stderr. TCP FORWARDING Forwarding of arbitrary TCP connections over a secure channel can be specified either on the command line or in a configuration file. 
One possible application of TCP forwarding is a secure connection to a mail server; another is going through firewalls. In the example below, we look at encrypting communication for an IRC client, even though the IRC server it connects to does not directly support encrypted communication. This works as follows: the user connects to the remote host using ssh, specifying the ports to be used to forward the connection. After that it is possible to start the program locally, and ssh will encrypt and forward the connection to the remote server. The following example tunnels an IRC session from the client to an IRC server at “server.example.com”, joining channel “#users”, nickname “pinky”, using the standard IRC port, 6667: $ ssh -f -L 6667:localhost:6667 server.example.com sleep 10 $ irc -c '#users' pinky IRC/127.0.0.1 The -f option backgrounds ssh and the remote command “sleep 10” is specified to allow an amount of time (10 seconds, in the example) to start the program which is going to use the tunnel. If no connections are made within the time specified, ssh will exit. X11 FORWARDING If the ForwardX11 variable is set to “yes” (or see the description of the -X, -x, and -Y options above) and the user is using X11 (the DISPLAY environment variable is set), the connection to the X11 display is automatically forwarded to the remote side in such a way that any X11 programs started from the shell (or command) will go through the encrypted channel, and the connection to the real X server will be made from the local machine. The user should not manually set DISPLAY. Forwarding of X11 connections can be configured on the command line or in configuration files. The DISPLAY value set by ssh will point to the server machine, but with a display number greater than zero. This is normal, and happens because ssh creates a “proxy” X server on the server machine for forwarding the connections over the encrypted channel. 
ssh will also automatically set up Xauthority data on the server machine. For this purpose, it will generate a random authorization cookie, store it in Xauthority on the server, and verify that any forwarded connections carry this cookie and replace it by the real cookie when the connection is opened. The real authentication cookie is never sent to the server machine (and no cookies are sent in the plain). If the ForwardAgent variable is set to “yes” (or see the description of the -A and -a options above) and the user is using an authentication agent, the connection to the agent is automatically forwarded to the remote side. VERIFYING HOST KEYS When connecting to a server for the first time, a fingerprint of the server's public key is presented to the user (unless the option StrictHostKeyChecking has been disabled). Fingerprints can be determined using ssh-keygen(1): $ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key If the fingerprint is already known, it can be matched and the key can be accepted or rejected. If only legacy (MD5) fingerprints for the server are available, the ssh-keygen(1) -E option may be used to downgrade the fingerprint algorithm to match. Because of the difficulty of comparing host keys just by looking at fingerprint strings, there is also support to compare host keys visually, using random art. By setting the VisualHostKey option to “yes”, a small ASCII graphic gets displayed on every login to a server, no matter if the session itself is interactive or not. By learning the pattern a known server produces, a user can easily find out that the host key has changed when a completely different pattern is displayed. Because these patterns are not unambiguous however, a pattern that looks similar to the pattern remembered only gives a good probability that the host key is the same, not guaranteed proof. 
To get a listing of the fingerprints along with their random art for all known hosts, the following command line can be used: $ ssh-keygen -lv -f ~/.ssh/known_hosts If the fingerprint is unknown, an alternative method of verification is available: SSH fingerprints verified by DNS. An additional resource record (RR), SSHFP, is added to a zonefile and the connecting client is able to match the fingerprint with that of the key presented. In this example, we are connecting a client to a server, “host.example.com”. The SSHFP resource records should first be added to the zonefile for host.example.com: $ ssh-keygen -r host.example.com. The output lines will have to be added to the zonefile. To check that the zone is answering fingerprint queries: $ dig -t SSHFP host.example.com Finally the client connects: $ ssh -o "VerifyHostKeyDNS ask" host.example.com [...] Matching host key fingerprint found in DNS. Are you sure you want to continue connecting (yes/no)? See the VerifyHostKeyDNS option in ssh_config(5) for more information. SSH-BASED VIRTUAL PRIVATE NETWORKS ssh contains support for Virtual Private Network (VPN) tunnelling using the tun(4) network pseudo-device, allowing two networks to be joined securely. The sshd_config(5) configuration option PermitTunnel controls whether the server supports this, and at what level (layer 2 or 3 traffic). The following example would connect client network 10.0.50.0/24 with remote network 10.0.99.0/24 using a point-to-point connection from 10.1.1.1 to 10.1.1.2, provided that the SSH server running on the gateway to the remote network, at 192.168.1.15, allows it. On the client: # ssh -f -w 0:1 192.168.1.15 true # ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252 # route add 10.0.99.0/24 10.1.1.2 On the server: # ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252 # route add 10.0.50.0/24 10.1.1.1 Client access may be more finely tuned via the /root/.ssh/authorized_keys file (see below) and the PermitRootLogin server option. 
The following entry would permit connections on tun(4) device 1 from user “jane” and on tun device 2 from user “john”, if PermitRootLogin is set to “forced-commands-only”: tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john Since an SSH-based setup entails a fair amount of overhead, it may be more suited to temporary setups, such as for wireless VPNs. More permanent VPNs are better provided by tools such as ipsecctl(8) and isakmpd(8). ENVIRONMENT ssh will normally set the following environment variables: DISPLAY The DISPLAY variable indicates the location of the X11 server. It is automatically set by ssh to point to a value of the form “hostname:n”, where “hostname” indicates the host where the shell runs, and ‘n’ is an integer ≥ 1. ssh uses this special value to forward X11 connections over the secure channel. The user should normally not set DISPLAY explicitly, as that will render the X11 connection insecure (and will require the user to manually copy any required authorization cookies). HOME Set to the path of the user's home directory. LOGNAME Synonym for USER; set for compatibility with systems that use this variable. MAIL Set to the path of the user's mailbox. PATH Set to the default PATH, as specified when compiling ssh. SSH_ASKPASS If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.) SSH_ASKPASS_REQUIRE Allows further control over the use of an askpass program. If this variable is set to “never” then ssh will never attempt to use one. 
If it is set to “prefer”, then ssh will prefer to use the askpass program instead of the TTY when requesting passwords. Finally, if the variable is set to “force”, then the askpass program will be used for all passphrase input regardless of whether DISPLAY is set. SSH_AUTH_SOCK Identifies the path of a UNIX-domain socket used to communicate with the agent. SSH_CONNECTION Identifies the client and server ends of the connection. The variable contains four space- separated values: client IP address, client port number, server IP address, and server port number. SSH_ORIGINAL_COMMAND This variable contains the original command line if a forced command is executed. It can be used to extract the original arguments. SSH_TTY This is set to the name of the tty (path to the device) associated with the current shell or command. If the current session has no tty, this variable is not set. SSH_TUNNEL Optionally set by sshd(8) to contain the interface names assigned if tunnel forwarding was requested by the client. SSH_USER_AUTH Optionally set by sshd(8), this variable may contain a pathname to a file that lists the authentication methods successfully used when the session was established, including any public keys that were used. TZ This variable is set to indicate the present time zone if it was set when the daemon was started (i.e. the daemon passes the value on to new connections). USER Set to the name of the user logging in. Additionally, ssh reads ~/.ssh/environment, and adds lines of the format “VARNAME=value” to the environment if the file exists and users are allowed to change their environment. For more information, see the PermitUserEnvironment option in sshd_config(5). FILES ~/.rhosts This file is used for host-based authentication (see above). On some machines this file may need to be world-readable if the user's home directory is on an NFS partition, because sshd(8) reads it as root. 
Additionally, this file must be owned by the user, and must not have write permissions for anyone else. The recommended permission for most machines is read/write for the user, and not accessible by others. ~/.shosts This file is used in exactly the same way as .rhosts, but allows host-based authentication without permitting login with rlogin/rsh. ~/.ssh/ This directory is the default location for all user-specific configuration and authentication information. There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others. ~/.ssh/authorized_keys Lists the public keys (DSA, ECDSA, Ed25519, RSA) that can be used for logging in as this user. The format of this file is described in the sshd(8) manual page. This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. ~/.ssh/config This is the per-user configuration file. The file format and configuration options are described in ssh_config(5). Because of the potential for abuse, this file must have strict permissions: read/write for the user, and not writable by others. ~/.ssh/environment Contains additional definitions for environment variables; see ENVIRONMENT, above. ~/.ssh/id_dsa ~/.ssh/id_ecdsa ~/.ssh/id_ecdsa_sk ~/.ssh/id_ed25519 ~/.ssh/id_ed25519_sk ~/.ssh/id_rsa Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key which will be used to encrypt the sensitive part of this file using AES-128. ~/.ssh/id_dsa.pub ~/.ssh/id_ecdsa.pub ~/.ssh/id_ecdsa_sk.pub ~/.ssh/id_ed25519.pub ~/.ssh/id_ed25519_sk.pub ~/.ssh/id_rsa.pub Contains the public key for authentication. 
These files are not sensitive and can (but need not) be readable by anyone. ~/.ssh/known_hosts Contains a list of host keys for all hosts the user has logged into that are not already in the systemwide list of known host keys. See sshd(8) for further details of the format of this file. ~/.ssh/rc Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information. /etc/hosts.equiv This file is for host-based authentication (see above). It should only be writable by root. /etc/shosts.equiv This file is used in exactly the same way as hosts.equiv, but allows host-based authentication without permitting login with rlogin/rsh. /etc/ssh/ssh_config Systemwide configuration file. The file format and configuration options are described in ssh_config(5). /etc/ssh/ssh_host_key /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_ed25519_key /etc/ssh/ssh_host_rsa_key These files contain the private parts of the host keys and are used for host-based authentication. /etc/ssh/ssh_known_hosts Systemwide list of known host keys. This file should be prepared by the system administrator to contain the public host keys of all machines in the organization. It should be world-readable. See sshd(8) for further details of the format of this file. /etc/ssh/sshrc Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information. EXIT STATUS ssh exits with the exit status of the remote command or with 255 if an error occurred. SEE ALSO scp(1), sftp(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh-keyscan(1), tun(4), ssh_config(5), ssh-keysign(8), sshd(8) STANDARDS S. Lehtinen and C. Lonvick, The Secure Shell (SSH) Protocol Assigned Numbers, RFC 4250, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Protocol Architecture, RFC 4251, January 2006. T. Ylonen and C. 
Lonvick, The Secure Shell (SSH) Authentication Protocol, RFC 4252, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Transport Layer Protocol, RFC 4253, January 2006. T. Ylonen and C. Lonvick, The Secure Shell (SSH) Connection Protocol, RFC 4254, January 2006. J. Schlyter and W. Griffin, Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, January 2006. F. Cusack and M. Forssen, Generic Message Exchange Authentication for the Secure Shell Protocol (SSH), RFC 4256, January 2006. J. Galbraith and P. Remaker, The Secure Shell (SSH) Session Channel Break Extension, RFC 4335, January 2006. M. Bellare, T. Kohno, and C. Namprempre, The Secure Shell (SSH) Transport Layer Encryption Modes, RFC 4344, January 2006. B. Harris, Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol, RFC 4345, January 2006. M. Friedl, N. Provos, and W. Simpson, Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol, RFC 4419, March 2006. J. Galbraith and R. Thayer, The Secure Shell (SSH) Public Key File Format, RFC 4716, November 2006. D. Stebila and J. Green, Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer, RFC 5656, December 2009. A. Perrig and D. Song, Hash Visualization: a New Technique to improve Real-World Security, 1999, International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC '99). AUTHORS OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. macOS 14.5 October 11, 2023 macOS 14.5
ssh – OpenSSH remote login client
ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port] [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11] [-i identity_file] [-J destination] [-L address] [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-P tag] [-p port] [-R address] [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] destination [command [argument ...]] ssh [-Q query_option]
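The per-user and system-wide files listed in the FILES section are ordinary paths, so their presence and readability can be checked directly. A minimal, guarded sketch (it only inspects the standard locations named above; ssh itself is not invoked, and output varies by system):

```shell
# Report which of the standard ssh client files exist and are readable.
for f in /etc/ssh/ssh_config /etc/ssh/ssh_known_hosts \
         "$HOME/.ssh/known_hosts" "$HOME/.ssh/rc"; do
  if [ -r "$f" ]; then
    echo "readable: $f"
  else
    echo "absent or unreadable: $f"
  fi
done
```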
kdestroy
The kdestroy utility destroys the user's active Kerberos authorization tickets by overwriting and deleting the credentials cache that contains them. If the credentials cache is not specified, the default credentials cache is destroyed.
kdestroy - destroy Kerberos tickets
kdestroy [-A] [-q] [-c cache_name] [-p princ_name]
-A Destroys all caches in the collection, if a cache collection is available. May be used with the -c option to specify the collection to be destroyed. -q Run quietly. Normally kdestroy beeps if it fails to destroy the user's tickets. The -q flag suppresses this behavior. -c cache_name Use cache_name as the credentials (ticket) cache name and location; if this option is not used, the default cache name and location are used. The default credentials cache may vary between systems. If the KRB5CCNAME environment variable is set, its value is used to name the default ticket cache. -p princ_name If a cache collection is available, destroy the cache for princ_name instead of the primary cache. May be used with the -c option to specify the collection to be searched. NOTE Most installations recommend that you place the kdestroy command in your .logout file, so that your tickets are destroyed automatically when you log out. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. FILES KCM: Default location of Kerberos 5 credentials cache SEE ALSO kinit(1), klist(1), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KDESTROY(1)
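The cache that kdestroy acts on follows the resolution order described above: an explicit -c value wins, then KRB5CCNAME, then the build-time default. A pure-shell sketch of that order (the cache names are hypothetical examples; kdestroy performs this lookup internally):

```shell
# Resolve the credentials cache name as described above: the -c argument
# wins, then KRB5CCNAME, then the platform default (KCM: here, per FILES).
resolve_cache() {
  if [ -n "$1" ]; then
    echo "$1"                      # explicit -c cache_name
  elif [ -n "${KRB5CCNAME:-}" ]; then
    echo "$KRB5CCNAME"             # environment override
  else
    echo "KCM:"                    # default credentials cache
  fi
}
KRB5CCNAME=FILE:/tmp/krb5cc_demo
resolve_cache ""                   # prints FILE:/tmp/krb5cc_demo
unset KRB5CCNAME
resolve_cache ""                   # prints KCM:
```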
yapp
yapp is a frontend to the Parse::Yapp module, which lets you compile Parse::Yapp grammar input files into Perl LALR(1) OO parser modules.
yapp - A Perl frontend to the Parse::Yapp module SYNOPSIS yapp [options] grammar[.yp] yapp -V yapp -h
Options, as of today, are all optional :-) -v Creates a file grammar.output describing your parser. It will show you a summary of conflicts, rules, the DFA (Deterministic Finite Automaton) states and overall usage of the parser. -s Creates a standalone module in which the driver is included. Note that if you have more than one parser module called from a program, to make it standalone you need this option for only one of your parser modules. -n Disable source file line numbering embedded in your parser module. I don't know why one should need it, but it's there. -m module Gives your parser module the package name (or name space or module name or class name or whatever-you-call-it) of module. It defaults to grammar. -o outfile The compiled output file will be named outfile for your parser module. It defaults to grammar.pm or, if you specified the option -m A::Module::Name (see above), to Name.pm, in the current working directory. -t filename The -t filename option allows you to specify a file which should be used as a template for generating the parser output. The default is to use the internal template defined in Parse::Yapp::Output.pm. For how to write your own template and which substitutions are available, have a look at the module Parse::Yapp::Output.pm: it should be obvious. -b shebang If you work on systems that understand so-called shebangs, and your generated parser is directly an executable script, you can specify one with the -b option, e.g.: yapp -b '/usr/local/bin/perl -w' -o myscript.pl myscript.yp This will output a file called myscript.pl whose very first line is: #!/usr/local/bin/perl -w The argument is mandatory, but if you specify an empty string, the value of $Config{perlpath} will be used instead. grammar The input grammar file. If no suffix is given, and the file does not exist, an attempt to open the file with a suffix of .yp is made before exiting. -V Display the current version of Parse::Yapp and exit gracefully. -h Display the usage screen. 
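The -o default described above (grammar.pm, or Name.pm when -m A::Module::Name is given) can be sketched in shell. The function and file names here are illustrative only; yapp computes this itself:

```shell
# Compute yapp's default output file name: strip .yp and append .pm, unless
# a module name was given, in which case its last :: component + .pm is used.
default_outfile() {
  grammar="$1"
  module="$2"          # value of -m, may be empty
  if [ -n "$module" ]; then
    echo "${module##*:}.pm"        # last component after the final '::'
  else
    echo "${grammar%.yp}.pm"
  fi
}
default_outfile Calc.yp ""               # prints Calc.pm
default_outfile Calc.yp My::App::Parser  # prints Parser.pm
```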
BUGS None known now :-) AUTHOR William N. Braswell, Jr. <wbraswell_cpan@NOSPAM.nym.hush.com> (Remove "NOSPAM".) COPYRIGHT Copyright © 1998, 1999, 2000, 2001, Francois Desarmenien. Copyright © 2017 William N. Braswell, Jr. See Parse::Yapp(3) for legal use and distribution rights SEE ALSO Parse::Yapp(3) Perl(1) yacc(1) bison(1) perl v5.30.3 2017-08-04 YAPP(1)
snmpgetnext
snmpgetnext is an SNMP application that uses the SNMP GETNEXT request to query for information on a network entity. One or more object identifiers (OIDs) may be given as arguments on the command line. Each variable name is given in the format specified in variables(5). For each one, the variable that is lexicographically "next" in the remote entity's MIB will be returned. For example: snmpgetnext -c public zeus interfaces.ifTable.ifEntry.ifType.1 will retrieve the variable interfaces.ifTable.ifEntry.ifType.2: interfaces.ifTable.ifEntry.ifType.2 = softwareLoopback(24) If the network entity has an error processing the request packet, an error message will be shown, helping to pinpoint in what way the request was malformed.
snmpgetnext - communicates with a network entity using SNMP GETNEXT requests
snmpgetnext [COMMON OPTIONS] [-Cf] AGENT OID [OID]...
-Cf If -Cf is not specified, some applications (snmpdelta, snmpget, snmpgetnext and snmpstatus) will try to fix errors returned by the agent that you were talking to and resend the request. The only time this is really useful is if you specified an OID that didn't exist in your request and you're using SNMPv1, which requires "all or nothing" kinds of requests. In addition to this option, snmpgetnext takes the common options described in the snmpcmd(1) manual page. Note that snmpgetnext REQUIRES an argument specifying the agent to query and at least one OID argument, as described there. SEE ALSO snmpcmd(1), snmpget(1), variables(5). V5.6.2.1 04 Mar 2002 SNMPGETNEXT(1)
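The "lexicographically next" semantics described above can be illustrated without an agent: given an ordered list of OIDs (a toy stand-in for the remote MIB), GETNEXT returns the first entry strictly after the queried OID. Note that plain string comparison, used here for brevity, only approximates true OID ordering, which compares arc by arc numerically:

```shell
# Toy GETNEXT: print the first OID strictly greater than the query q.
printf '%s\n' \
  1.3.6.1.2.1.2.2.1.3.1 \
  1.3.6.1.2.1.2.2.1.3.2 \
  1.3.6.1.2.1.2.2.1.4.1 |
awk -v q=1.3.6.1.2.1.2.2.1.3.1 '$0 > q { print; exit }'
# prints 1.3.6.1.2.1.2.2.1.3.2
```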
type
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last. If a command specified to the shell contains a slash ‘/’, the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying “echo” causes a builtin command to be executed under shells that support the echo builtin command, specifying “/bin/echo” or “./echo” does not. While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands “built-in commands” and some of them “reserved words”. Users of other shells may need to consult an info(1) page or other sources of documentation. Commands marked “No**” under External do exist externally, but are implemented as scripts using a builtin command of the same name.

     Command      External    csh(1)    sh(1)
     !            No          No        Yes
     %            No          Yes       No
     .            No          No        Yes
     :            No          Yes       Yes
     @            No          Yes       Yes
     [            Yes         No        Yes
     {            No          No        Yes
     }            No          No        Yes
     alias        No**        Yes       Yes
     alloc        No          Yes       No
     bg           No**        Yes       Yes
     bind         No          No        Yes
     bindkey      No          Yes       No
     break        No          Yes       Yes
     breaksw      No          Yes       No
     builtin      No          No        Yes
     builtins     No          Yes       No
     case         No          Yes       Yes
     cd           No**        Yes       Yes
     chdir        No          Yes       Yes
     command      No**        No        Yes
     complete     No          Yes       No
     continue     No          Yes       Yes
     default      No          Yes       No
     dirs         No          Yes       No
     do           No          No        Yes
     done         No          No        Yes
     echo         Yes         Yes       Yes
     echotc       No          Yes       No
     elif         No          No        Yes
     else         No          Yes       Yes
     end          No          Yes       No
     endif        No          Yes       No
     endsw        No          Yes       No
     esac         No          No        Yes
     eval         No          Yes       Yes
     exec         No          Yes       Yes
     exit         No          Yes       Yes
     export       No          No        Yes
     false        Yes         No        Yes
     fc           No**        No        Yes
     fg           No**        Yes       Yes
     filetest     No          Yes       No
     fi           No          No        Yes
     for          No          No        Yes
     foreach      No          Yes       No
     getopts      No**        No        Yes
     glob         No          Yes       No
     goto         No          Yes       No
     hash         No**        No        Yes
     hashstat     No          Yes       No
     history      No          Yes       No
     hup          No          Yes       No
     if           No          Yes       Yes
     jobid        No          No        Yes
     jobs         No**        Yes       Yes
     kill         Yes         Yes       Yes
     limit        No          Yes       No
     local        No          No        Yes
     log          No          Yes       No
     login        Yes         Yes       No
     logout       No          Yes       No
     ls-F         No          Yes       No
     nice         Yes         Yes       No
     nohup        Yes         Yes       No
     notify       No          Yes       No
     onintr       No          Yes       No
     popd         No          Yes       No
     printenv     Yes         Yes       No
     printf       Yes         No        Yes
     pushd        No          Yes       No
     pwd          Yes         No        Yes
     read         No**        No        Yes
     readonly     No          No        Yes
     rehash       No          Yes       No
     repeat       No          Yes       No
     return       No          No        Yes
     sched        No          Yes       No
     set          No          Yes       Yes
     setenv       No          Yes       No
     settc        No          Yes       No
     setty        No          Yes       No
     setvar       No          No        Yes
     shift        No          Yes       Yes
     source       No          Yes       No
     stop         No          Yes       No
     suspend      No          Yes       No
     switch       No          Yes       No
     telltc       No          Yes       No
     test         Yes         No        Yes
     then         No          No        Yes
     time         Yes         Yes       No
     times        No          No        Yes
     trap         No          No        Yes
     true         Yes         No        Yes
     type         No**        No        Yes
     ulimit       No**        No        Yes
     umask        No**        Yes       Yes
     unalias      No**        Yes       Yes
     uncomplete   No          Yes       No
     unhash       No          Yes       No
     unlimit      No          Yes       No
     unset        No          Yes       Yes
     unsetenv     No          Yes       No
     until        No          No        Yes
     wait         No**        Yes       Yes
     where        No          Yes       No
     which        Yes         Yes       No
     while        No          Yes       Yes

SEE ALSO csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1) HISTORY The builtin manual page first appeared in FreeBSD 3.4. 
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
See the built-in command description in the appropriate shell manual page.
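The slash rule described above is easy to observe with type and command -v (shown for an sh-family shell; the exact wording of type's output varies by shell):

```shell
# A bare name can resolve to a builtin; a name containing '/' never does.
type cd                 # reports cd as a shell builtin
command -v cd           # prints just: cd
command -v /bin/echo    # a path bypasses the builtin: /bin/echo
```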
xsubpp
This compiler is typically run by the makefiles created by ExtUtils::MakeMaker or by Module::Build or other Perl module build tools. xsubpp will compile XS code into C code by embedding the constructs necessary to let C functions manipulate Perl values and creates the glue necessary to let Perl access those functions. The compiler uses typemaps to determine how to map C function parameters and variables to Perl values. The compiler will search for typemap files called typemap. It will use the following search path to find default typemaps, with the rightmost typemap taking precedence. ../../../typemap:../../typemap:../typemap:typemap It will also use a default typemap installed as "ExtUtils::typemap".
xsubpp - compiler to convert Perl XS code into C code
xsubpp [-v] [-except] [-s pattern] [-prototypes] [-noversioncheck] [-nolinenumbers] [-nooptimize] [-typemap typemap] [-output filename]... file.xs
Note that the "XSOPT" MakeMaker option may be used to add these options to any makefiles generated by MakeMaker. -hiertype Retains '::' in type names so that C++ hierarchical types can be mapped. -except Adds exception handling stubs to the C code. -typemap typemap Indicates that a user-supplied typemap should take precedence over the default typemaps. This option may be used multiple times, with the last typemap having the highest precedence. -output filename Specifies the name of the output file to generate. If no file is specified, output will be written to standard output. -v Prints the xsubpp version number to standard output, then exits. -prototypes By default xsubpp will not automatically generate prototype code for all xsubs. This flag will enable prototypes. -noversioncheck Disables the run time test that determines if the object file (derived from the ".xs" file) and the ".pm" files have the same version number. -nolinenumbers Prevents the inclusion of '#line' directives in the output. -nooptimize Disables certain optimizations. The only optimization that is currently affected is the use of targets by the output C code (see perlguts). This may significantly slow down the generated code, but this is the way xsubpp of 5.005 and earlier operated. -noinout Disable recognition of "IN", "OUT_LIST" and "INOUT_LIST" declarations. -noargtypes Disable recognition of ANSI-like descriptions of function signature. -C++ Currently doesn't do anything at all. This flag has been a no-op for many versions of perl, at least as far back as perl5.003_07. It's allowed here for backwards compatibility. -s=... or -strip=... This option is obscure and discouraged. If specified, the given string will be stripped off from the beginning of the C function name in the generated XS functions (if it starts with that prefix). This only applies to XSUBs without "CODE" or "PPCODE" blocks. 
For example, the XS: void foo_bar(int i); when "xsubpp" is invoked with "-s foo_" will install a "foo_bar" function in Perl, but really call bar(i) in C. Most of the time, this is the opposite of what you want and failure modes are somewhat obscure, so please avoid this option where possible. ENVIRONMENT No environment variables are used. AUTHOR Originally by Larry Wall. Turned into the "ExtUtils::ParseXS" module by Ken Williams. MODIFICATION HISTORY See the file Changes. SEE ALSO perl(1), perlxs(1), perlxstut(1), ExtUtils::ParseXS perl v5.38.2 2023-11-28 XSUBPP(1)
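The prefix-stripping rule of -s can be sketched as follows. This is only a stand-in illustration of the string transformation; xsubpp applies the rule to C function names while generating the XS glue:

```shell
# Strip a prefix from a function name only when the name starts with it,
# mirroring the -s/-strip rule quoted above.
strip_prefix() {
  name="$1"
  prefix="$2"
  case "$name" in
    "$prefix"*) echo "${name#"$prefix"}" ;;
    *)          echo "$name" ;;
  esac
}
strip_prefix foo_bar foo_   # prints bar
strip_prefix baz_qux foo_   # prints baz_qux (prefix does not match)
```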
tidy
Tidy reads HTML, XHTML, and XML files and writes cleaned-up markup. For HTML variants, it detects, reports, and corrects many common coding errors and strives to produce visually equivalent markup that is both conformant to the HTML specifications and that works in most browsers. A common use of Tidy is to convert plain HTML to XHTML. For generic XML files, Tidy is limited to correcting basic well-formedness errors and pretty printing. If no input file is specified, Tidy reads the standard input. If no output file is specified, Tidy writes the tidied markup to the standard output. If no error file is specified, Tidy writes messages to the standard error.
tidy - check, correct, and pretty-print HTML(5) files
tidy [options] [file ...] [options] [file ...] ...
Tidy supports two different kinds of options. Purely command-line options, starting with a single dash '-', can only be used on the command-line, not in configuration files. They are listed in the first part of this section. Configuration options, on the other hand, can either be passed on the command line, starting with two dashes --, or specified in a configuration file, using the option name, followed by a colon :, plus the value, without the starting dashes. They are listed in the second part of this section, with a sample config file. For command-line options that expect a numerical argument, a default is assumed if no meaningful value can be found. On the other hand, configuration options cannot be used without a value; a configuration option without a value is simply discarded and reported as an error. Using a command-line option is sometimes equivalent to setting the value of a configuration option. The equivalent option and value are shown in parentheses in the list below, as they would appear in a configuration file. For example, -quiet, -q (quiet: yes) means that using the command-line option -quiet or -q is equivalent to setting the configuration option quiet to yes. Single-letter command-line options without an associated value can be combined; for example '-i', '-m' and '-u' may be combined as '-imu'. File manipulation -output <file>, -o <file> (output-file: <file>) write output to the specified <file> -config <file> set configuration options from the specified <file> -file <file>, -f <file> (error-file: <file>) write errors and warnings to the specified <file> -modify, -m (write-back: yes) modify the original input files Processing directives -indent, -i (indent: auto) indent element content -wrap <column>, -w <column> (wrap: <column>) wrap text at the specified <column>. 0 is assumed if <column> is missing. When this option is omitted, the default of the configuration option 'wrap' applies. 
-upper, -u (uppercase-tags: yes) force tags to upper case -clean, -c (clean: yes) replace FONT, NOBR and CENTER tags with CSS -bare, -b (bare: yes) strip out smart quotes and em dashes, etc. -gdoc, -g (gdoc: yes) produce clean version of html exported by Google Docs -numeric, -n (numeric-entities: yes) output numeric rather than named entities -errors, -e (markup: no) show only errors and warnings -quiet, -q (quiet: yes) suppress nonessential output -omit (omit-optional-tags: yes) omit optional start tags and end tags -xml (input-xml: yes) specify the input is well formed XML -asxml, -asxhtml (output-xhtml: yes) convert HTML to well formed XHTML -ashtml (output-html: yes) force XHTML to well formed HTML -access <level> (accessibility-check: <level>) do additional accessibility checks (<level> = 0, 1, 2, 3). 0 is assumed if <level> is missing. Character encodings -raw output values above 127 without conversion to entities -ascii use ISO-8859-1 for input, US-ASCII for output -latin0 use ISO-8859-15 for input, US-ASCII for output -latin1 use ISO-8859-1 for both input and output -iso2022 use ISO-2022 for both input and output -utf8 use UTF-8 for both input and output -mac use MacRoman for input, US-ASCII for output -win1252 use Windows-1252 for input, US-ASCII for output -ibm858 use IBM-858 (CP850+Euro) for input, US-ASCII for output -utf16le use UTF-16LE for both input and output -utf16be use UTF-16BE for both input and output -utf16 use UTF-16 for both input and output -big5 use Big5 for both input and output -shiftjis use Shift_JIS for both input and output Miscellaneous -version, -v show the version of Tidy -help, -h, -? 
list the command line options -help-config list all configuration options -help-env show information about the environment and runtime configuration -show-config list the current configuration settings -export-config list the current configuration settings, suitable for a config file -export-default-config list the default configuration settings, suitable for a config file -help-option <option> show a description of the <option> -language <lang> (language: <lang>) set Tidy's output language to <lang>. Specify '-language help' for more help. Use before output-causing arguments to ensure the language takes effect, e.g., `tidy -lang es -lang help`. XML -xml-help list the command line options in XML format -xml-config list all configuration options in XML format -xml-strings output all of Tidy's strings in XML format -xml-error-strings output error constants and strings in XML format -xml-options-strings output option descriptions in XML format Configuration Options General Configuration options can be specified by preceding each option with -- at the command line, followed by its desired value, OR by placing the options and values in a configuration file, and telling tidy to read that file with the -config option: tidy --option1 value1 --option2 value2 ... tidy -config config-file ... Configuration options can be conveniently grouped in a single config file. A Tidy configuration file is simply a text file, where each option is listed on a separate line in the form

     option1: value1
     option2: value2
     etc.

The permissible values for a given option depend on the option's Type. There are five Types: Boolean, AutoBool, DocType, Enum, and String. Boolean Types allow any of yes/no, y/n, true/false, t/f, 1/0. AutoBools allow auto in addition to the values allowed by Booleans. Integer Types take non-negative integers. String Types generally have no defaults, and you should provide them in non-quoted form (unless you wish the output to contain the literal quotes). 
Enum, Encoding, and DocType Types have a fixed repertoire of items, which are listed in the Supported values sections below. You only need to provide options and values for those whose defaults you wish to override, although you may wish to include some already-defaulted options and values for the sake of documentation and explicitness. Here is a sample config file, with at least one example of each of the five Types:

     // sample Tidy configuration options
     output-xhtml: yes
     add-xml-decl: no
     doctype: strict
     char-encoding: ascii
     indent: auto
     wrap: 76
     repeated-attributes: keep-last
     error-file: errs.txt

Below is a summary and brief description of each of the options. They are listed alphabetically within each category. Document Display options --gnu-emacs Boolean (no if unset) This option specifies that Tidy should change the format for reporting errors and warnings to a format that is more easily parsed by GNU Emacs or some other program. It changes them from the default line <line number> column <column number> - (Error|Warning): <message> to a form which includes the input filename: <filename>:<line number>:<column number>: (Error|Warning): <message> See also: --show-filename --markup Boolean (yes if unset) This option specifies if Tidy should generate a pretty printed version of the markup. Note that Tidy won't generate a pretty printed version if it finds significant errors (see force-output). --mute String Use this option to prevent Tidy from displaying certain types of report output, for example, for conditions that you wish to ignore. This option takes a list of one or more keys indicating the message type to mute. You can discover these message keys by using the mute-id configuration option and examining Tidy's output. See also: --mute-id --mute-id Boolean (no if unset) This option indicates whether or not Tidy should display message ID's with each of its error reports. 
This could be useful if you wanted to use the mute configuration option in order to filter out certain report messages. See also: --mute --quiet Boolean (no if unset) When enabled, this option limits Tidy's non-document output to report only document warnings and errors. --show-body-only Enum (no if unset) Supported values: no, yes, auto This option specifies if Tidy should print only the contents of the body tag as an HTML fragment. If set to auto, this is performed only if the body tag has been inferred. Useful for incorporating existing whole pages as a portion of another page. This option has no effect if XML output is requested. --show-errors Integer (6 if unset) This option specifies the number Tidy uses to determine if further errors should be shown. If set to 0, then no errors are shown. --show-filename Boolean (no if unset) This option specifies if Tidy should show the filename in messages. eg: tidy -q -e --show-filename yes index.html index.html: line 43 column 3 - Warning: replacing invalid UTF-8 bytes (char. code U+00A9) See also: --gnu-emacs --show-info Boolean (yes if unset) This option specifies if Tidy should display info-level messages. --show-warnings Boolean (yes if unset) This option specifies if Tidy should suppress warnings. This can be useful when a few errors are hidden in a flurry of warnings. Document In and Out options --add-meta-charset Boolean (no if unset) This option, when enabled, adds a <meta> element and sets the charset attribute to the encoding of the document. Set this option to yes to enable it. --add-xml-decl Boolean (no if unset) This option specifies if Tidy should add the XML declaration when outputting XML or XHTML. Note that if the input already includes an <?xml ... ?> declaration then this option will be ignored. If the encoding for the output is different from ascii, one of the utf* encodings, or raw, then the declaration is always added as required by the XML standard. 
See also: --char-encoding, --output-encoding --add-xml-space Boolean (no if unset) This option specifies if Tidy should add xml:space="preserve" to elements such as <pre>, <style> and <script> when generating XML. This is needed if the whitespace in such elements is to be parsed appropriately without having access to the DTD. --doctype String (auto if unset) This option specifies the DOCTYPE declaration generated by Tidy. If set to omit, the output won't contain a DOCTYPE declaration. Note that this also implies numeric-entities is set to yes. If set to html5, the DOCTYPE is set to <!DOCTYPE html>. If set to auto (the default), Tidy will use an educated guess based upon the contents of the document. Note that selecting this option will not change the current document's DOCTYPE on output. If set to strict, Tidy will set the DOCTYPE to the HTML4 or XHTML1 strict DTD. If set to loose, the DOCTYPE is set to the HTML4 or XHTML1 loose (transitional) DTD. Alternatively, you can supply a string for the formal public identifier (FPI). For example: doctype: "-//ACME//DTD HTML 3.14159//EN" If you specify the FPI for an XHTML document, Tidy will set the system identifier to an empty string. For an HTML document, Tidy adds a system identifier only if one was already present in order to preserve the processing mode of some browsers. Tidy leaves the DOCTYPE for generic XML documents unchanged. This option does not offer a validation of document conformance. --input-xml Boolean (no if unset) This option specifies if Tidy should use the XML parser rather than the error correcting HTML parser. --output-html Boolean (no if unset) This option specifies if Tidy should generate pretty printed output, writing it as HTML. --output-xhtml Boolean (no if unset) This option specifies if Tidy should generate pretty printed output, writing it as extensible HTML. 
This option causes Tidy to set the DOCTYPE and default namespace as appropriate to XHTML, and will use the corrected value in output regardless of other sources. For XHTML, entities can be written as named or numeric entities according to the setting of numeric-entities. The original case of tags and attributes will be preserved, regardless of other options. --output-xml Boolean (no if unset) This option specifies if Tidy should pretty print output, writing it as well-formed XML. Any entities not defined in XML 1.0 will be written as numeric entities to allow them to be parsed by an XML parser. The original case of tags and attributes will be preserved, regardless of other options. File Input-Output options --error-file String This option specifies the error file Tidy uses for errors and warnings. Normally errors and warnings are output to stderr. See also: --output-file --keep-time Boolean (no if unset) This option specifies if Tidy should keep the original modification time of files that Tidy modifies in place. Setting the option to yes allows you to tidy files without changing the file modification date, which may be useful with certain tools that use the modification date for things such as automatic server deployment. Note this feature is not supported on some platforms. --output-file String This option specifies the output file Tidy uses for markup. Normally markup is written to stdout. See also: --error-file --write-back Boolean (no if unset) This option specifies if Tidy should write back the tidied markup to the same file it read from. You are advised to keep copies of important files before tidying them, as on rare occasions the result may not be what you expect. Diagnostics options --accessibility-check Enum (0 (Tidy Classic) if unset) Supported values: 0 (Tidy Classic), 1 (Priority 1 Checks), 2 (Priority 2 Checks), 3 (Priority 3 Checks) This option specifies what level of accessibility checking, if any, that Tidy should perform. 
Level 0 (Tidy Classic) performs no additional accessibility checking. Level 1 (Priority 1 Checks) performs the Priority Level 1 checks. Level 2 (Priority 2 Checks) performs the Priority Level 1 and 2 checks. Level 3 (Priority 3 Checks) performs the Priority Level 1, 2, and 3 checks. For more information on Tidy's accessibility checking, including the specific checks that are made for each Priority Level, please visit Tidy's Accessibility Page at http://www.html-tidy.org/accessibility/. --force-output Boolean (no if unset) This option specifies if Tidy should produce output even if errors are encountered. Use this option with care; if Tidy reports an error, this means Tidy was not able to (or is not sure how to) fix the error, so the resulting output may not reflect your intention. --show-meta-change Boolean (no if unset) This option enables a message whenever Tidy changes the content attribute of a meta charset declaration to match the encoding of the document. Set this option to yes to enable it. --warn-proprietary-attributes Boolean (yes if unset) This option specifies if Tidy should warn on proprietary attributes. Encoding options --char-encoding Encoding (utf8 if unset) Supported values: raw, ascii, latin0, latin1, utf8, iso2022, mac, win1252, ibm858, utf16le, utf16be, utf16, big5, shiftjis This option specifies the character encoding Tidy uses for input, and when set, automatically chooses an appropriate character encoding to be used for output. The output encoding Tidy chooses may be different from the input encoding. For ascii, latin0, ibm858, mac, and win1252 input encodings, the output-encoding option will automatically be set to ascii. You can set output-encoding manually to override this. For other input encodings, the output-encoding option will automatically be set to the same value. Regardless of the preset value, you can set output-encoding manually to override this. Tidy is not an encoding converter. 
Although the Latin and UTF encodings can be mixed freely, it is not possible to convert Asian encodings to Latin encodings with Tidy. See also: --input-encoding, --output-encoding --input-encoding Encoding (utf8 if unset) Supported values: raw, ascii, latin0, latin1, utf8, iso2022, mac, win1252, ibm858, utf16le, utf16be, utf16, big5, shiftjis This option specifies the character encoding Tidy uses for input. Tidy makes certain assumptions about some of the input encodings. For ascii, Tidy will accept Latin-1 (ISO-8859-1) character values and convert them to entities as necessary. For raw, Tidy will make no assumptions about the character values and will pass them unchanged to output. For mac and win1252, vendor-specific character values will be accepted and converted to entities as necessary. Asian encodings such as iso2022 will be handled appropriately assuming the corresponding output-encoding is also specified. Tidy is not an encoding converter. Although the Latin and UTF encodings can be mixed freely, it is not possible to convert Asian encodings to Latin encodings with Tidy. See also: --char-encoding --newline Enum (LF if unset) Supported values: LF, CRLF, CR The default is appropriate to the current platform. Generally CRLF on PC-DOS, Windows and OS/2; CR on Classic Mac OS; and LF everywhere else (Linux, macOS, and Unix). --output-bom Enum (auto if unset) Supported values: no, yes, auto This option specifies if Tidy should write a Unicode Byte Order Mark character (BOM; also known as Zero Width No-Break Space; has value of U+FEFF) to the beginning of the output, and only applies to UTF-8 and UTF-16 output encodings. If set to auto this option causes Tidy to write a BOM to the output only if a BOM was present at the beginning of the input. A BOM is always written for XML/XHTML output using UTF-16 output encodings. 
--output-encoding Encoding (utf8 if unset) Supported values: raw, ascii, latin0, latin1, utf8, iso2022, mac, win1252, ibm858, utf16le, utf16be, utf16, big5, shiftjis This option specifies the character encoding Tidy uses for output. Some of the output encodings affect whether or not some characters are translated to entities, although in all cases, some entities will be written according to other Tidy configuration options. For ascii, mac, and win1252 output encodings, entities will be used for all characters with values over 127. For raw output, Tidy will write values above 127 without translating them to entities. Output using latin1 will cause Tidy to write character values higher than 255 as entities. The UTF family such as utf8 will write output in the respective UTF encoding. Asian output encodings such as iso2022 will write output in the specified encoding, assuming a corresponding input-encoding was specified. Tidy is not an encoding converter. Although the Latin and UTF encodings can be mixed freely, it is not possible to convert Asian encodings to Latin encodings with Tidy. See also: --char-encoding Cleanup options --bare Boolean (no if unset) This option specifies if Tidy should replace smart quotes and em dashes with ASCII, and output spaces rather than non-breaking spaces, where they exist in the input. --clean Boolean (no if unset) This option specifies if Tidy should perform cleaning of some legacy presentational tags (currently <i>, <b>, <center> when enclosed within appropriate inline tags, and <font>). If set to yes, then the legacy tags will be replaced with CSS <style> tags and structural markup as appropriate. --drop-empty-elements Boolean (yes if unset) This option specifies if Tidy should discard empty elements. --drop-empty-paras Boolean (yes if unset) This option specifies if Tidy should discard empty paragraphs. 
--drop-proprietary-attributes Boolean (no if unset)
This option specifies if Tidy should strip out proprietary attributes, such as Microsoft data binding attributes. Additionally, attributes that aren't permitted in the output version of HTML will be dropped if used with strict-tags-attributes.

--gdoc Boolean (no if unset)
This option specifies if Tidy should enable specific behavior for cleaning up HTML exported from Google Docs.

--logical-emphasis Boolean (no if unset)
This option specifies if Tidy should replace any occurrence of <i> with <em> and any occurrence of <b> with <strong>. Any attributes are preserved unchanged. This option can be set independently of the clean option.

--merge-divs Enum (auto if unset)
Supported values: no, yes, auto

This option can be used to modify the behavior of clean when set to yes. This option specifies if Tidy should merge nested <div> such as <div><div>...</div></div>. If set to auto the attributes of the inner <div> are moved to the outer one. Nested <div> with id attributes are not merged. If set to yes the attributes of the inner <div> are discarded with the exception of class and style.

See also: --clean, --merge-spans

--merge-spans Enum (auto if unset)
Supported values: no, yes, auto

This option can be used to modify the behavior of clean when set to yes. This option specifies if Tidy should merge nested <span> such as <span><span>...</span></span>. The algorithm is identical to the one used by merge-divs.

See also: --clean, --merge-divs

--word-2000 Boolean (no if unset)
This option specifies if Tidy should go to great pains to strip out all the surplus stuff Microsoft Word 2000 inserts when you save Word documents as "Web pages". It doesn't handle embedded images or VML. You should consider saving using Word's Save As... and choosing Web Page, Filtered.

Entities options

--ascii-chars Boolean (no if unset)
Can be used to modify behavior of the clean option when set to yes.
If set to yes when using clean, &emdash;, &rdquo;, and other named character entities are downgraded to their closest ASCII equivalents. See also: --clean --ncr Boolean (yes if unset) This option specifies if Tidy should allow numeric character references. --numeric-entities Boolean (no if unset) This option specifies if Tidy should output entities other than the built-in HTML entities (&amp;, &lt;, &gt;, and &quot;) in the numeric rather than the named entity form. Only entities compatible with the DOCTYPE declaration generated are used. Entities that can be represented in the output encoding are translated correspondingly. See also: --doctype, --preserve-entities --preserve-entities Boolean (no if unset) This option specifies if Tidy should preserve well-formed entities as found in the input. --quote-ampersand Boolean (yes if unset) This option specifies if Tidy should output unadorned & characters as &amp;, in legacy doctypes only. --quote-marks Boolean (no if unset) This option specifies if Tidy should output " characters as &quot; as is preferred by some editing environments. The apostrophe character ' is written out as &#39; since many web browsers don't yet support &apos;. --quote-nbsp Boolean (yes if unset) This option specifies if Tidy should output non-breaking space characters as entities, rather than as the Unicode character value 160 (decimal). Repair options --alt-text String This option specifies the default alt= text Tidy uses for <img> attributes when the alt= attribute is missing. Use with care, as it is your responsibility to make your documents accessible to people who cannot see the images. --anchor-as-name Boolean (yes if unset) This option controls the deletion or addition of the name attribute in elements where it can serve as anchor. If set to yes a name attribute, if not already existing, is added along an existing id attribute if the DTD allows it. 
If set to no any existing name attribute is removed if an id attribute exists or has been added.

--assume-xml-procins Boolean (no if unset)
This option specifies if Tidy should change the parsing of processing instructions to require ?> as the terminator rather than >. This option is automatically set if the input is in XML.

--coerce-endtags Boolean (yes if unset)
This option specifies if Tidy should coerce a start tag into an end tag in cases where it looks like an end tag was probably intended; for example, given <span>foo <b>bar<b> baz</span> Tidy will output <span>foo <b>bar</b> baz</span>

--css-prefix String (c if unset)
This option specifies the prefix that Tidy uses for style rules. By default, c will be used.

--custom-tags Enum (no if unset)
Supported values: no, blocklevel, empty, inline, pre

This option enables the use of tags for autonomous custom elements, e.g. <flag-icon> with Tidy. Custom tags are disabled if this value is no. Other settings - blocklevel, empty, inline, and pre - will treat all detected custom tags accordingly. The use of new-blocklevel-tags, new-empty-tags, new-inline-tags, or new-pre-tags will override the treatment of custom tags by this configuration option. This may be useful if you have different types of custom tags. When enabled these tags are determined during the processing of your document using opening tags; matching closing tags will be recognized accordingly, and unknown closing tags will be discarded.

See also: --new-blocklevel-tags, --new-empty-tags, --new-inline-tags, --new-pre-tags

--enclose-block-text Boolean (no if unset)
This option specifies if Tidy should insert a <p> element to enclose any text it finds in any element that allows mixed content for HTML transitional but not HTML strict.

--enclose-text Boolean (no if unset)
This option specifies if Tidy should enclose any text it finds in the body element within a <p> element. This is useful when you want to take existing HTML and use it with a style sheet.

--escape-scripts Boolean (yes if unset)
This option causes items that look like closing tags, like </g to be escaped to <\/g. Set this option to no if you do not want this.

--fix-backslash Boolean (yes if unset)
This option specifies if Tidy should replace backslash characters \ in URLs with forward slashes /.

--fix-bad-comments Enum (auto if unset)
Supported values: no, yes, auto

This option specifies if Tidy should replace unexpected hyphens with = characters when it comes across adjacent hyphens. The default is auto, which will act as no for HTML5 document types, and yes for all other document types. HTML has abandoned SGML comment syntax, and allows adjacent hyphens for all versions of HTML, although XML and XHTML do not. If you plan to support older browsers that require SGML comment syntax, then consider setting this value to yes.

--fix-style-tags Boolean (yes if unset)
This option specifies if Tidy should move all style tags to the head of the document.

--fix-uri Boolean (yes if unset)
This option specifies if Tidy should check attribute values that carry URIs for illegal characters and if such are found, escape them as HTML4 recommends.

--literal-attributes Boolean (no if unset)
This option specifies how Tidy deals with whitespace characters within attribute values. If the value is no Tidy normalizes attribute values by replacing any newline or tab with a single space, and further by replacing any contiguous whitespace with a single space. To force Tidy to preserve the original, literal values of all attributes and ensure that whitespace within attribute values is passed through unchanged, set this option to yes.

--lower-literals Boolean (yes if unset)
This option specifies if Tidy should convert the value of an attribute that takes a list of predefined values to lower case. This is required for XHTML documents.

--repeated-attributes Enum (keep-last if unset)
Supported values: keep-first, keep-last

This option specifies if Tidy should keep the first or last attribute if an attribute is repeated, e.g. has two align attributes.

See also: --join-classes, --join-styles

--skip-nested Boolean (yes if unset)
This option specifies that Tidy should skip nested tags when parsing script and style data.

--strict-tags-attributes Boolean (no if unset)
This option ensures that tags and attributes are applicable for the version of HTML that Tidy outputs. When set to yes and the output document type is a strict doctype, then Tidy will report errors. If the output document type is a loose or transitional doctype, then Tidy will report warnings. Additionally if drop-proprietary-attributes is enabled, then not applicable attributes will be dropped, too. When set to no, these checks are not performed.

--uppercase-attributes Enum (no if unset)
Supported values: no, yes, preserve

This option specifies if Tidy should output attribute names in upper case. When set to no, attribute names will be written in lower case. Specifying yes will output attribute names in upper case, and preserve can be used to leave attribute names untouched. When using XML input, the original case is always preserved.

--uppercase-tags Boolean (no if unset)
This option specifies if Tidy should output tag names in upper case. The default is no which results in lower case tag names, except for XML input where the original case is preserved.

Transformation options

--decorate-inferred-ul Boolean (no if unset)
This option specifies if Tidy should decorate inferred <ul> elements with some CSS markup to avoid indentation to the right.

--escape-cdata Boolean (no if unset)
This option specifies if Tidy should convert <![CDATA[]]> sections to normal text.

--hide-comments Boolean (no if unset)
This option specifies if Tidy should not print out comments.
--join-classes Boolean (no if unset)
This option specifies if Tidy should combine class names to generate a single, new class name if multiple class assignments are detected on an element.

--join-styles Boolean (yes if unset)
This option specifies if Tidy should combine styles to generate a single, new style if multiple style values are detected on an element.

--merge-emphasis Boolean (yes if unset)
This option specifies if Tidy should merge nested <b> and <i> elements; for example, for the case <b class="rtop-2">foo <b class="r2-2">bar</b> baz</b>, Tidy will output <b class="rtop-2">foo bar baz</b>.

--replace-color Boolean (no if unset)
This option specifies if Tidy should replace numeric values in color attributes with HTML/XHTML color names where defined, e.g. replace #ffffff with white.

Teaching Tidy options

--new-blocklevel-tags Tag Names
Supported values: tagX, tagY, ...

This option specifies new block-level tags. This option takes a space or comma separated list of tag names. Unless you declare new tags, Tidy will refuse to generate a tidied file if the input includes previously unknown tags. Note you can't change the content model for elements such as <table>, <ul>, <ol> and <dl>. This option is ignored in XML mode.

See also: --new-empty-tags, --new-inline-tags, --new-pre-tags, --custom-tags

--new-empty-tags Tag Names
Supported values: tagX, tagY, ...

This option specifies new empty inline tags. This option takes a space or comma separated list of tag names. Unless you declare new tags, Tidy will refuse to generate a tidied file if the input includes previously unknown tags. Remember to also declare empty tags as either inline or blocklevel. This option is ignored in XML mode.

See also: --new-blocklevel-tags, --new-inline-tags, --new-pre-tags, --custom-tags

--new-inline-tags Tag Names
Supported values: tagX, tagY, ...

This option specifies new non-empty inline tags. This option takes a space or comma separated list of tag names. Unless you declare new tags, Tidy will refuse to generate a tidied file if the input includes previously unknown tags. This option is ignored in XML mode.

See also: --new-blocklevel-tags, --new-empty-tags, --new-pre-tags, --custom-tags

--new-pre-tags Tag Names
Supported values: tagX, tagY, ...

This option specifies new tags that are to be processed in exactly the same way as HTML's <pre> element. This option takes a space or comma separated list of tag names. Unless you declare new tags, Tidy will refuse to generate a tidied file if the input includes previously unknown tags. Note you cannot as yet add new CDATA elements. This option is ignored in XML mode.

See also: --new-blocklevel-tags, --new-empty-tags, --new-inline-tags, --custom-tags

Pretty Print options

--break-before-br Boolean (no if unset)
This option specifies if Tidy should output a line break before each <br> element.

--indent Enum (no if unset)
Supported values: no, yes, auto

This option specifies if Tidy should indent block-level tags. If set to auto Tidy will decide whether or not to indent the content of tags such as <title>, <h1>-<h6>, <li>, <td>, or <p> based on the content including a block-level element. Setting indent to yes can expose layout bugs in some browsers. Use the option indent-spaces to control the number of spaces or tabs output per level of indent, and indent-with-tabs to specify whether spaces or tabs are used.

See also: --indent-spaces

--indent-attributes Boolean (no if unset)
This option specifies if Tidy should begin each attribute on a new line.

--indent-cdata Boolean (no if unset)
This option specifies if Tidy should indent <![CDATA[]]> sections.

--indent-spaces Integer (2 if unset)
This option specifies the number of spaces or tabs that Tidy uses to indent content when indent is enabled. Note that the default value for this option is dependent upon the value of indent-with-tabs (see also).

See also: --indent

--indent-with-tabs Boolean (no if unset)
This option specifies if Tidy should indent with tabs instead of spaces, assuming indent is yes. Set it to yes to indent using tabs instead of the default spaces. Use the option indent-spaces to control the number of tabs output per level of indent. Note that when indent-with-tabs is enabled the default value of indent-spaces is reset to 1. Note tab-size controls converting input tabs to spaces. Set it to zero to retain input tabs.

--keep-tabs Boolean (no if unset)
With the default no Tidy will replace all source tabs with spaces, controlled by the option tab-size and the current line offset. Of course, except in the special blocks/elements enumerated below, this will later be reduced to just one space. If set to yes this option specifies Tidy should keep certain tabs found in the source, but only in preformatted blocks like <pre>, other CDATA elements like <script> and <style>, and pseudo elements like <?php ... ?>. As always, all other tabs, or sequences of tabs, in the source will continue to be replaced with a space.

--omit-optional-tags Boolean (no if unset)
This option specifies if Tidy should omit optional start tags and end tags when generating output. Setting this option causes all tags for the <html>, <head>, and <body> elements to be omitted from output, as well as such end tags as </p>, </li>, </dt>, </dd>, </option>, </tr>, </td>, and </th>. This option is ignored for XML output.

--priority-attributes Attribute Names
Supported values: attributeX, attributeY, ...

This option allows prioritizing the writing of attributes in tidied documents, allowing them to be written before the other attributes of an element. For example, you might specify that id and name are written before every other attribute. This option takes a space or comma separated list of attribute names.

--punctuation-wrap Boolean (no if unset)
This option specifies if Tidy should line wrap after some Unicode or Chinese punctuation characters.

--sort-attributes Enum (none if unset)
Supported values: none, alpha

This option specifies that Tidy should sort attributes within an element using the specified sort algorithm. If set to alpha, the algorithm is an ascending alphabetic sort. When used while sorting with priority-attributes, any attribute sorting will take place after the priority attributes have been output.

See also: --priority-attributes

--tab-size Integer (8 if unset)
This option specifies the number of columns that Tidy uses between successive tab stops. It is used to map tabs to spaces when reading the input.

--tidy-mark Boolean (yes if unset)
This option specifies if Tidy should add a meta element to the document head to indicate that the document has been tidied. Tidy won't add a meta element if one is already present.

--vertical-space Enum (no if unset)
Supported values: no, yes, auto

This option specifies if Tidy should add some extra empty lines for readability. The default is no. If set to auto Tidy will eliminate nearly all newline characters.

--wrap Integer (68 if unset)
This option specifies the right margin Tidy uses for line wrapping. Tidy tries to wrap lines so that they do not exceed this length. Set wrap to 0 (zero) if you want to disable line wrapping.

--wrap-asp Boolean (yes if unset)
This option specifies if Tidy should line wrap text contained within ASP pseudo elements, which look like: <% ... %>.

--wrap-attributes Boolean (no if unset)
This option specifies if Tidy should line-wrap attribute values, meaning that if the value of an attribute causes a line to exceed the width specified by wrap, Tidy will add one or more line breaks to the value, causing it to be wrapped into multiple lines. Note that this option can be set independently of wrap-script-literals. By default Tidy replaces any newline or tab with a single space and replaces any sequences of whitespace with a single space. To force Tidy to preserve the original, literal values of all attributes, and ensure that whitespace characters within attribute values are passed through unchanged, set literal-attributes to yes.

See also: --wrap-script-literals, --literal-attributes

--wrap-jste Boolean (yes if unset)
This option specifies if Tidy should line wrap text contained within JSTE pseudo elements, which look like: <# ... #>.

--wrap-php Boolean (no if unset)
This option specifies if Tidy should add a new line after PHP pseudo elements, which look like: <?php ... ?>.

--wrap-script-literals Boolean (no if unset)
This option specifies if Tidy should line wrap string literals assigned to element event handler attributes, such as element.onmouseover().

See also: --wrap-attributes

--wrap-sections Boolean (yes if unset)
This option specifies if Tidy should line wrap text contained within <![ ... ]> section tags.

ENVIRONMENT
HTML_TIDY Name of the default configuration file. This should be an absolute path, since you will probably invoke tidy from different directories. The value of HTML_TIDY will be parsed after the compiled-in default (defined with -DTIDY_CONFIG_FILE), but before any of the files specified using -config.

RUNTIME CONFIGURATION FILES
You can also specify runtime configuration files from which tidy will attempt to load a configuration automatically. The system runtime configuration file (/etc/tidy.conf), if it exists, will be loaded and applied first, followed by the user runtime configuration file (~/.tidyrc). Subsequent usage of a specific option will override any previous usage. Note that if you use the HTML_TIDY environment variable, then the user runtime configuration file will not be used. This is a feature, not a bug.

EXIT STATUS
0 All input files were processed successfully.
1 There were warnings.
2 There were errors.
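As a sketch of the runtime configuration mechanism described above, the following writes a small configuration file using options documented in this page. The /tmp path is only for illustration; tidy normally reads ~/.tidyrc or the file named by HTML_TIDY, and tidy itself must be installed to run the commented invocation.

```shell
# Write an example configuration file.  Options use "name: value" syntax,
# one per line; comments begin with //.
cat <<'EOF' > /tmp/tidyrc-example
// example tidy runtime configuration
indent: yes
indent-spaces: 4
wrap: 0
char-encoding: utf8
tidy-mark: no
EOF

# To apply it explicitly rather than via ~/.tidyrc or HTML_TIDY:
#   tidy -config /tmp/tidyrc-example -o clean.html messy.html
grep -c ':' /tmp/tidyrc-example
```

Remember that later sources override earlier ones: /etc/tidy.conf first, then ~/.tidyrc, then any -config file given on the command line.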
SEE ALSO For more information about HTML Tidy: http://www.html-tidy.org/ For more information on HTML: HTML: Edition for Web Authors (the latest HTML specification) http://dev.w3.org/html5/spec-author-view HTML: The Markup Language (an HTML language reference) http://dev.w3.org/html5/markup/ For bug reports and comments: https://github.com/htacg/tidy-html5/issues/ Or send questions and comments to public-htacg@w3.org. Validate your HTML documents using the W3C Nu Markup Validator: http://validator.w3.org/nu/ AUTHOR Tidy was written by Dave Raggett <dsr@w3.org>, and subsequently maintained by a team at http://tidy.sourceforge.net/, and now maintained by HTACG (http://www.htacg.org). The sources for HTML Tidy are available at https://github.com/htacg/tidy-html5/ under the MIT Licence. HTML Tidy 5.8.0 TIDY(1)
null
tftp
The tftp utility is the user interface to the Internet TFTP (Trivial File Transfer Protocol), which allows users to transfer files to and from a remote machine. The remote host may be specified on the command line, in which case tftp uses host as the default host for future transfers (see the connect command below). The optional -e argument sets the blocksize to the largest supported value and enables the TFTP timeout option as if the tout command had been given. In previous versions of tftp, it also enabled binary mode and the TFTP blksize option; these are now on by default. COMMANDS Once tftp is running, it issues the prompt “tftp>” and recognizes the following commands: ? command-name ... Print help information. ascii Shorthand for "mode ascii" binary Shorthand for "mode binary" blocksize [size] Sets the TFTP blksize option in TFTP Read Request or Write Request packets to [size] as specified in RFC 2348. Valid values are between 8 and 65464. If no blocksize is specified, then by default a blocksize of 512 bytes will be used. blocksize2 [size] Sets the TFTP blksize2 option in TFTP Read Request or Write Request packets to [size]. Values are restricted to powers of 2 between 8 and 32768. This is a non-standard TFTP option. connect host [port] Set the host (and optionally port) for transfers. Note that the TFTP protocol, unlike the FTP protocol, does not maintain connections between transfers; thus, the connect command does not actually create a connection, but merely remembers what host is to be used for transfers. You do not have to use the connect command; the remote host can be specified as part of the get or put commands. debug level Enable or disable debugging levels during verbose output. The value of level can be one of packet, simple, options, or access. get [host:]file [localname] get [host1:]file1 [host2:]file2 ... [hostN:]fileN Get one or more files from the remote host. 
When using the host argument, the host will be used as the default host for future transfers. If localname is specified, the file is stored locally as localname, otherwise the original filename is used. Note that it is not possible to download exactly two files in one command: two arguments are always interpreted as file and localname, so only one file, or three or more files, can be fetched at a time. To specify an IPv6 numeric address for a host, wrap it using square brackets like “[3ffe:2900:e00c:ffee::1234]:file” to disambiguate the colons used in the IPv6 address from the colon separating the host and the filename.

mode transfer-mode
Set the mode for transfers; transfer-mode may be one of ascii or binary. The default is binary.

packetdrop [arg]
Randomly drop arg out of 100 packets during a transfer. This is a debugging feature.

put file [[host:]remotename]
put file1 file2 ... fileN [[host:]remote-directory]
Put a file or set of files to the remote host. When remotename is specified, the file is stored remotely as remotename, otherwise the original filename is used. If the remote-directory argument is used, the remote host is assumed to be a UNIX machine. To specify an IPv6 numeric address for a host, see the example under the get command.

options [arg]
Enable or disable support for TFTP options. The valid values of arg are on (enable RFC 2347 options), off (disable RFC 2347 options), and extra (toggle support for non-RFC defined options).

quit
Exit tftp. An end of file also exits.

rexmt retransmission-timeout
Set the per-packet retransmission timeout, in seconds.

rollover [arg]
Specify the rollover option in TFTP Read Request or Write Request packets. After 65535 packets have been transmitted, set the block counter to arg. Valid values of arg are 0 and 1. This is a non-standard TFTP option.

status
Show current status.

timeout total-transmission-timeout
Set the total transmission timeout, in seconds.

trace
Toggle packet tracing.

verbose
Toggle verbose mode.
windowsize [size] Sets the TFTP windowsize option in TFTP Read Request or Write Request packets to [size] blocks as specified in RFC 7440. Valid values are between 1 and 65535. If no windowsize is specified, then the default windowsize of 1 block will be used. SEE ALSO tftpd(8) The following RFC's are supported: RFC 1350: The TFTP Protocol (Revision 2). RFC 2347: TFTP Option Extension. RFC 2348: TFTP Blocksize Option. RFC 2349: TFTP Timeout Interval and Transfer Size Options. RFC 3617: Uniform Resource Identifier (URI) Scheme and Applicability Statement for the Trivial File Transfer Protocol (TFTP). RFC 7440: TFTP Windowsize Option. The non-standard rollover and blksize2 TFTP options are mentioned here: Extending TFTP, https://www.compuphase.com/tftp.htm. HISTORY The tftp command appeared in 4.3BSD. Edwin Groothuis <edwin@FreeBSD.org> performed a major rewrite of the tftpd(8) and tftp code to support RFC2348. NOTES Because there is no user-login or validation within the TFTP protocol, the remote site will probably have some sort of file-access restrictions in place. The exact methods are specific to each site and therefore difficult to document here. Files larger than 33488896 octets (65535 blocks) cannot be transferred without client and server supporting the TFTP blocksize option (RFC2348), or the non-standard TFTP rollover option. macOS 14.5 November 16, 2022 macOS 14.5
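Because tftp reads its commands from standard input, a transfer can be scripted non-interactively. A sketch follows; the host 192.0.2.10 and the file names are hypothetical, and a reachable TFTP server is required to actually run the commented transfer.

```shell
# Build a command script for tftp; each line is one of the commands
# documented above (binary mode, a larger RFC 2348 blocksize, one get).
cat <<'EOF' > /tmp/tftp-session.txt
binary
blocksize 1428
get firmware.bin /tmp/firmware.bin
quit
EOF

# To execute the session against a server (hypothetical host):
#   tftp 192.0.2.10 < /tmp/tftp-session.txt
cat /tmp/tftp-session.txt
```

A blocksize of 1428 keeps each data packet within a typical 1500-byte Ethernet MTU; any value from 8 to 65464 is accepted per RFC 2348.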
tftp – trivial file transfer program
tftp [host [port]]
null
null
assetutil
assetutil processes a .car file generated from an asset catalog, removing requested scale factors, device idioms, subtypes, and performance, memory, and graphics classes. When thinning, the scale, idiom, subtype, performance, memory, and graphics class fallbacks can be given multiple times; the resulting file will contain all of the assets that match all of the parameters given. If scale, idiom, subtype, and graphics class are given in one set, the same parameters must all be present in each subsequent set of parameters (i.e., the counts must match).

A list of flags and their descriptions:

-V Print version information for assetutil.

-I Produce a JSON description of the asset catalog object with the given name, written to the --output directory if given, or to stdout if no output path is given. If no name is provided, report on the contents of the entire car file.

-i Keep all assets that have an idiom that is given on the command line.

-s Keep all assets that have a scale factor that is given on the command line; present scale factors will not be removed if there is no fallback available.

-p Keep all assets that have the display gamut that is given on the command line; present display gamuts will not be removed if there is no fallback available.

-M Keep all assets that have a memory class that is given on the command line; present memory classes will not be removed if there is no fallback available.

-g Keep all assets that have a graphics class that is given on the command line. The present graphics class will not be removed if there is no fallback available.

-h Process the hosted idioms list. This is a list of the idioms that must always be preserved in the car file. The list cannot contain universal, and the different idioms should be given in a comma-separated list.

-i Idiom to keep. Can be one of universal/phone/pad.

-t Subtype to keep (integer).

-c Main Assets.car file used to supply the names of the assets to the -I (--info) option and the dump options -d (--dump) and -D (--dump-stack).

-o Output file name; if no output file is given, the input file is overwritten.

-T Compare thinning attributes. For example, 'scale=2:idiom=phone:memory=2:graphicsclass=MTL1,2/scale=2:idiom=phone:memory=1:graphicsclass=MTL2,2' prints to stdout whether, if the file were thinned with each of the given sets of thinning attributes, the same asset file would result in both cases.

-n Given a comma-separated list of names, any assets in the car file that match one of the given names are preserved. Names are compared case-insensitively (Name == name). Uses the -o (--output) file to save the resulting .car file.

-Z Do an integrity check of the input file.

Darwin November 13, 2019 Darwin
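The flags above combine as in the following sketch. The file names are hypothetical, and assetutil itself (shipped with macOS) must be present to actually run the commands, so they are only assembled and printed here.

```shell
# Report on the entire catalog as JSON (no name after -I means the
# whole car file is described):
info_cmd="assetutil -I Assets.car"

# Thin to 2x iPhone assets only, writing the result to a new file
# with -o so the input is not overwritten:
thin_cmd="assetutil -i phone -s 2 -o Assets-thinned.car Assets.car"

echo "$info_cmd"
echo "$thin_cmd"
```

Running the -I form on the thinned output is a quick way to confirm which scale factors and idioms survived the thinning.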
assetutil process asset catalog .car files
assetutil [-ViotshMgpTZ] inputfile
null
null
AssetCacheLocatorUtil
AssetCacheLocatorUtil reports information related to macOS Content Caches running on the computer or on the local network. Some of the information that AssetCacheLocatorUtil reports depends on the current network configuration, and on the user running it. It might produce different results for different users, on different client devices, or on different networks. Applications that use content caches might choose ones other than the ones AssetCacheLocatorUtil reports due to factors beyond its knowledge, such as iCloud affinity. AssetCacheLocatorUtil reports the following information separately for system daemons and for the current user: Availability hint The system can temporarily save a hint about whether or not there might be content caches on the computer or on the local network. AssetCacheLocatorUtil prints that saved hint if it is available. Saved content caches The system can temporarily save information about content caches it has previously found on the computer or on the local network. AssetCacheLocatorUtil prints that saved information if it is available. Refreshed content caches AssetCacheLocatorUtil forces the system to search for content caches on the computer and on the local network and to refresh the saved information above. It then prints the results. Saved and refreshed public IP address ranges If your network administrator has configured public IP address ranges in DNS, which the system uses when looking up content caches, AssetCacheLocatorUtil prints saved and refreshed information about those ranges. Saved and refreshed favored server ranges If your network administrator has configured favored server ranges in DNS, which the system uses when looking up content caches, AssetCacheLocatorUtil prints saved and refreshed information about those ranges. AssetCacheLocatorUtil then reports the reachability status of all of the content caches it found. 
If the computer cannot communicate with a content cache over the local network then it cannot request files from that content cache. However, just because the computer can "ping" a content cache does not imply that that content cache will serve requests sent from this computer.

The --json option prints the results in machine-parseable JSON format to stdout.

WARNINGS
AssetCacheLocatorUtil also reports warnings about potential issues it discovers.

The Apple cloud service with which content caches register limits the number of content caches on a network. This limit can change at any time. If a larger number of content caches are available on a network than the cloud allows, client devices might not always choose the "best" content cache. AssetCacheLocatorUtil warns when it detects this possibility. The number of content caches available on a network can be reduced by changing the settings of some of the content caches, using System Settings > Sharing > Content Caching > press the option key > Advanced Options... > Clients > Cache content for:.

AssetCacheLocatorUtil warns when it detects content caches with different ranks. The exact value and meaning of each rank is defined by the Apple cloud service with which content caches register, and can change at any time, but client devices use only the content caches with the lowest-numbered rank available to them. A content cache's rank can be changed by adjusting its settings, using System Settings > Sharing > Content Caching > press the option key > Advanced Options... > Clients > Cache content for:. A content cache on the same computer as the client always has the lowest-numbered rank. Having content caches in different ranks can be intentional or accidental, depending on your organization. AssetCacheLocatorUtil warns about mixed ranks in case it is accidental.
An example of an intentional use of mixed ranks is when a school has a content cache that caches content for devices using the same local networks and the school's district office has another content cache that caches content for devices using the same public IP address. Client devices in the school use the school's content cache. Client devices in a different school in the same district use the district's content cache. Every content cache must have a unique GUID. AssetCacheLocatorUtil warns when it finds content caches in your organization with duplicate GUIDs. A content cache's GUID can be changed by stopping the content cache, running the following command in Terminal as an admin user, and then restarting it: sudo -u _assetcache defaults write /Library/Preferences/com.apple.AssetCache.plist ServerGUID `uuidgen` When public IP address ranges are configured but the client device's public IP address is not in the configured ranges, this could prevent the device from using your organization's content caches. AssetCacheLocatorUtil warns about this condition. To configure custom public IP address ranges use System Settings > Sharing > Content Caching > press the option key > Advanced Options... > Clients > My local networks: and set DNS TXT records appropriately. Your network administrator can designate some content caches as "favored." AssetCacheLocatorUtil warns when it finds content caches that are not favored, with the exception of a content cache on the same computer as the client. Client devices use only favored content caches when any are available. The system can temporarily mark content caches as "unhealthy" after attempts to use a content cache fail due to either HTTP error responses or network errors. Each client device maintains its own health records for each content cache. Client devices use only healthy content caches. AssetCacheLocatorUtil warns when any of the content caches it finds are unhealthy. 
Note that when AssetCacheLocatorUtil refreshes the list of content caches, it also resets the health of every content cache it finds to "healthy." SEE ALSO System Settings > Sharing > Content Caching, AssetCacheManagerUtil(8) macOS 8/1/19 macOS
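The public IP address range check described in the warnings above can be reproduced offline. The following is a minimal sketch, not part of AssetCacheLocatorUtil, using Python's ipaddress module; the CIDR range values are hypothetical placeholders standing in for whatever your administrator published in DNS:

```python
import ipaddress

def in_configured_ranges(public_ip: str, ranges: list[str]) -> bool:
    """True if public_ip falls inside any configured CIDR range."""
    addr = ipaddress.ip_address(public_ip)
    return any(addr in ipaddress.ip_network(r) for r in ranges)

# Hypothetical configured ranges, for illustration only.
configured = ["203.0.113.0/24", "198.51.100.0/25"]

print(in_configured_ranges("203.0.113.42", configured))  # True
print(in_configured_ranges("192.0.2.7", configured))     # False
```

A device whose public address falls outside every configured range corresponds to the warning condition described above.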
AssetCacheLocatorUtil – Utility for reporting information about macOS Content Caches
AssetCacheLocatorUtil [-j|--json]
null
null
orbd
null
null
null
null
null
usbcfwflasher
null
null
null
null
null
streamzip
This program will read data from "stdin", compress it into a zip container and, by default, write a streamed zip file to "stdout". No temporary files are created. The zip container written to "stdout" is, by necessity, written in streaming format. Most programs that read Zip files can cope with a streamed zip file, but if interoperability is important, and your workflow allows you to write the zip file directly to disk, you can create a non-streamed zip file using the "zipfile" option.
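The streamed-versus-seekable distinction is not specific to this Perl program. As a rough illustration (not streamzip's actual implementation), Python's zipfile module likewise falls back to a streaming layout with data descriptors when its output object reports itself as non-seekable, such as a pipe:

```python
import io
import zipfile

class Unseekable(io.RawIOBase):
    """Wrap a buffer but report it as non-seekable, like a pipe."""
    def __init__(self):
        self.buf = io.BytesIO()
    def writable(self):
        return True
    def write(self, b):
        return self.buf.write(b)
    def seekable(self):
        return False

out = Unseekable()
with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
    # "-" mirrors streamzip's default member name.
    zf.writestr("-", b"Lorem ipsum dolor sit")

# The streamed archive is still readable by ordinary zip tooling.
data = out.buf.getvalue()
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    print(zf.read("-"))  # b'Lorem ipsum dolor sit'
```

As the description notes for streamzip itself, most readers cope with this streamed layout; write to a seekable file when maximum interoperability matters.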
streamzip - create a zip file from stdin
producer | streamzip [opts] | consumer producer | streamzip [opts] -zipfile=output.zip
-zip64 Create a Zip64-compliant zip container. Use this option if the input is greater than 4 GB. Default is disabled. -zipfile=F Write zip container to the filename "F". Use the "Stream" option to force the creation of a streamed zip file. -member-name=M This option is used to name the "file" in the zip container. Default is '-'. -stream Ignored when writing to "stdout". If the "zipfile" option is specified, including this option will trigger the creation of a streamed zip file. Default: Always enabled when writing to "stdout", otherwise disabled. -method=M Compress using method "M". Valid method names are * store Store without compression * deflate Use Deflate compression [Default] * bzip2 Use Bzip2 compression * lzma Use LZMA compression * xz Use xz compression * zstd Use Zstandard compression Note that LZMA compression needs "IO::Compress::Lzma" to be installed. Note that Zstandard compression needs "IO::Compress::Zstd" to be installed. Default is "deflate". -0, -1, -2, -3, -4, -5, -6, -7, -8, -9 Sets the compression level for "deflate". Ignored for all other compression methods. -0 means no compression and -9 maximum compression. Default is 6. -version Display version number. -help Display help.
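The -method names correspond to standard zip compression methods. As an illustration in another language (not streamzip itself), the overlapping subset maps onto constants in Python's zipfile module; xz and zstd have no stdlib constant, so they are omitted here:

```python
import io
import zipfile

# Analogue of streamzip's -method option for the methods the Python
# stdlib supports; "lzma" is the generic LZMA zip method, not xz.
METHODS = {
    "store": zipfile.ZIP_STORED,
    "deflate": zipfile.ZIP_DEFLATED,  # the default, as in streamzip
    "bzip2": zipfile.ZIP_BZIP2,
    "lzma": zipfile.ZIP_LZMA,
}

def compress(data: bytes, method: str = "deflate", level: int = 6) -> bytes:
    """Pack data into a single-member zip, like streamzip's default '-'."""
    buf = io.BytesIO()
    # Only deflate honors the 0-9 level knob, mirroring -0 ... -9.
    kwargs = {"compresslevel": level} if method == "deflate" else {}
    with zipfile.ZipFile(buf, "w", METHODS[method], **kwargs) as zf:
        zf.writestr("-", data)
    return buf.getvalue()

payload = b"Lorem ipsum dolor sit " * 100
# Repetitive input shrinks under deflate but not under store.
assert len(compress(payload, "deflate")) < len(compress(payload, "store"))
```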
Create a zip file by reading data from stdin $ echo Lorem ipsum dolor sit | perl ./bin/streamzip >abcd.zip Check the contents of "abcd.zip" with the standard "unzip" utility Archive: abcd.zip Length Date Time Name --------- ---------- ----- ---- 22 2021-01-08 19:45 - --------- ------- 22 1 file Notice how the "Name" is set to "-". That is the default used by a few zip utilities where the member name is not given. If you want to explicitly name the file, use the "-member-name" option as follows $ echo Lorem ipsum dolor sit | perl ./bin/streamzip -member-name latin >abcd.zip $ unzip -l abcd.zip Archive: abcd.zip Length Date Time Name --------- ---------- ----- ---- 22 2021-01-08 19:47 latin --------- ------- 22 1 file When to write a Streamed Zip File A Streamed Zip File is useful in situations where you cannot seek backwards/forwards in the file. A good example is when you are serving dynamic content from a Web Server straight into a socket without needing to create a temporary zip file in the filesystem. Similarly, if your workflow uses pipelined Linux commands. SUPPORT General feedback/questions/bug reports should be sent to <https://github.com/pmqs/IO-Compress/issues> (preferred) or <https://rt.cpan.org/Public/Dist/Display.html?Name=IO-Compress>. AUTHOR Paul Marquess pmqs@cpan.org. COPYRIGHT Copyright (c) 2019-2022 Paul Marquess. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.38.2 2023-11-28 STREAMZIP(1)
networkQuality
networkQuality allows for measuring the different aspects of Network Quality, including: Maximal capacity (often described as speed) The responsiveness of the connection. Responsiveness measures the quality of your network by the number of roundtrips completed per minute (RPM) under working conditions. See https://support.apple.com/kb/HT212313 Other aspects of the connection that affect the quality of experience. NOTE: This tool will connect to the Internet to perform its tests. This will use data on your Internet service plan. For more details about the RPM score and the methodology around the testing, see https://datatracker.ietf.org/doc/draft-cpaasch-ippm-responsiveness/ The following options are available: -C configuration URL/path Use custom configuration URL or path (with scheme file://). See https://github.com/network-quality/server for server implementation details. -f <option[,<option>,...]> Force usage of a specific protocol selection: h1: To enforce HTTP/1.1 h2: To enforce HTTP/2 h3: To enforce HTTP/3 (QUIC) L4S: To force-enable L4S noL4S: To force-disable L4S -I interface Bind test to interface (e.g., en0, pdp_ip0,...) If not specified, the default interface will be used. -r host Connect to host or IP, overriding DNS for initial config request -S port Create a networkQuality server-instance running locally on the specified port. It will display the URL of the config-file that can be passed on to a client-instance with option -C. Note that the certificate is self-signed and thus option -k needs to be used on the client-instance. -c Produce computer-readable output. -d Do not run a download test (implies -s) -h Show help -k Disable verification of the server identity via TLS. -p Use iCloud Private Relay. -s Run tests sequentially instead of parallel upload/download. -u Do not run an upload test (implies -s) -v Verbose output. 
COMPUTER OUTPUT FIELD DESCRIPTION The -c option will produce JSON output with the following fields: base_rt The calculated idle latency of the test run (in milliseconds). dl_flows Number of download flows initiated. dl_responsiveness The downlink responsiveness score (in RPM) (only available when -s is specified). dl_throughput The measured downlink throughput (in bytes per second). end_date Time when test run was completed (in local time). il_h2_req_resp The idle-latency Request/Response times for HTTP/2 (in milliseconds). il_tcp_handshake_443 The idle-latency TCP-handshake times (in milliseconds). il_tls_handshake The idle-latency TLS-handshake times (in milliseconds). interface_name Name of the interface against which the test ran. lud_foreign_dl_h2_req_resp Download latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is specified). lud_foreign_dl_tcp_handshake_443 Download latency-under-load for TCP-handshake times (in milliseconds). (only available when -s is specified). lud_foreign_dl_tls_handshake Download latency-under-load for TLS-handshake times (in milliseconds). (only available when -s is specified). lud_foreign_h2_req_resp Combined upload/download latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is not specified). lud_foreign_tcp_handshake_443 Combined upload/download latency-under-load for TCP-handshake times (in milliseconds). (only available when -s is not specified). lud_foreign_tls_handshake Combined foreign upload/download latency-under-load for TLS-handshake times (in milliseconds). (only available when -s is not specified). lud_foreign_ul_h2_req_resp Foreign upload latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is specified). lud_foreign_ul_tcp_handshake_443 Foreign upload latency-under-load for TCP-handshake times (in milliseconds). (only available when -s is specified). 
lud_foreign_ul_tls_handshake Upload latency-under-load for TLS-handshake times (in milliseconds). (only available when -s is specified). lud_self_dl_h2_req_resp Self download latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is specified). lud_self_h2_req_resp Combined self upload/download latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is not specified). lud_self_ul_h2_req_resp Self upload latency-under-load request/response times for HTTP/2 (in milliseconds). (only available when -s is specified). os_version The version of the OS the test was run on. responsiveness The responsiveness score (in RPM) (the combined value if -s is not specified). start_date Time when test run was started (in local time). ul_flows Number of upload flows created. ul_responsiveness The uplink responsiveness score (in RPM) (only available when -s is specified). ul_throughput The measured uplink throughput (in bytes per second). SEE ALSO ping(8), traceroute(8) Darwin 9/22/20 Darwin
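The latency fields above are reported in milliseconds while responsiveness is reported in RPM; under the methodology referenced earlier, responsiveness counts round trips completed per minute under working conditions, so a latency of L milliseconds corresponds to roughly 60000 / L RPM. A sketch of that conversion (an illustrative helper, not part of networkQuality):

```python
def rpm_from_latency_ms(latency_ms: float) -> float:
    """Round trips per minute for a given per-round-trip latency."""
    return 60_000.0 / latency_ms

# 50 ms per round trip corresponds to 1200 round trips per minute.
print(rpm_from_latency_ms(50.0))   # 1200.0
# A loaded connection at 300 ms per round trip drops to 200 RPM.
print(rpm_from_latency_ms(300.0))  # 200.0
```

This reciprocal relationship is why a connection that stays fast under load (small latency-under-load fields) earns a high responsiveness score.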
networkQuality – Network quality testing tool
networkQuality [-cdhksuv] [-C configuration URL] [-f protocol selection] [-I interface] [-r host] [-S port]
null
null
ktrace
ktrace can configure the system to trace events, or record them to a file, and print a human-readable representation of the events. SUBCOMMANDS ktrace uses a subcommand syntax to separate different functionality into logical groups. Each subcommand takes its own set of options, though a few options may be used in multiple subcommands. info Print information about the current configuration of kernel trace. trace [-ACNnrSstu] [-R path | -E] [-C codes-path [...]] [-T timeout] [-f filter-desc] [-b buffer-size-mb] [-x pid-or-process-name [...] | -p pid-or-process-name [...]] [--json | --csv | --ndjson | --json-64] [-c command [...]] [--only-named-events] [--no-default-codes-files] [--continuous] [--disable-coprocessors] Print events to stdout(4) in a human-readable format, automatically providing wall clock time, process names, and event names for each event. Without the -R or -E options, ktrace initializes the trace buffers to a reasonable size and enables tracing until it terminates. -R path Print events from the trace file at path. -E Use an existing configuration, instead of creating a new configuration. This is necessary to use the trace subcommand with other ktrace subcommands, like init and setopt. -N Don't display names of events. -C Print timestamps in continuous time. -n Display thread names. -r Just configure and start trace running in windowed or ring buffer mode -- do not print the events. ktrace trace -E can later be used to read the in-memory events. -S Print arguments as strings for tracepoints known to include strings. -s Attempt to symbolicate addresses found in arguments to symbols. -t Print times as Mach absolute timestamps, instead of the default local wall clock time. -A Print times as seconds since the start of trace. -u Attempt to symbolicate addresses to uuid-offset tuples. -C codes-path Use a custom codes file to provide event ID to name mappings. See trace(1) for more details on the format of codes files. 
-b buffer-size-mb Set a custom buffer size in megabytes. -f filter-desc Apply a filter description to the trace session, controlling which events are traced. See FILTER DESCRIPTIONS for details on the syntax of a filter. If no filter description is provided, all events will be traced. -T timeout End tracing after timeout has elapsed. Suffixes like ns or ms are supported, but seconds are the default if just a number is supplied. -x pid-or-process-name [...] | -p pid-or-process-name [...] Restrict the processes that can trace events. Either exclude (-x) or only trace events (-p) from the provided processes by name or pid. These options are mutually exclusive. Processes that cannot be attached to are always excluded on release kernels. Similarly, events in the Mach scheduling subclass are included, regardless of this option, to allow tools to maintain thread scheduling state machines. --json Print events as an array of JSON objects. --csv Print events as CSV entries. --ndjson Print events as a stream of newline-delimited JSON objects. --json-64 Print events as JSON objects, with 64-bit numbers. -c command [...] Run the command specified by command and stop tracing when it exits. All arguments after this option are passed to the command. dump Write trace to a file at path for later inspection with ktrace trace -R. If no path is specified, the tool writes to a new, numbered file in the working directory, starting with trace001.ktrace. The command continues to write events until ktrace is terminated, the optional timeout triggers, or the trace buffers fill up when using an existing configuration with wrapping disabled. If a compression level is specified, the file is compressed as it is written. Using non-default values for this option may increase the overhead of collecting events. -E Use an existing configuration, instead of creating a new configuration. -f filter-desc Apply a filter description to events written to the file, controlling which events are traced. 
See FILTER DESCRIPTIONS for details on the syntax of a filter. If no filter description is provided, all events will be traced. -p pid-or-process-name Only record events that occur for the process identified by pid or process-name. Only the first 16 characters of the name are observed, due to a kernel limitation. -T timeout End tracing after timeout has elapsed. Suffixes like ns or ms are supported, but seconds are the default if just a number is supplied. --stackshot-flags extra-flags Pass the provided extra-flags integer as additional flags when recording stackshots. --notify-tracing-started key Post a notification on key after tracing has started. init -b buffer-size-mb | -n n-events Initialize trace to allocate buffer-size-mb megabytes of space or n-events events for its trace buffers. This subcommand must be provided before using the setopt, enable, or disable subcommands initially or after using the remove subcommand. setopt [-f filter-desc] [-w] [-x pid-or-process-name [...] | -p pid-or-process-name [...]] Set options on the existing trace configuration. The trace configuration must already be initialized. -f filter-desc Apply a filter description to the current configuration, controlling which events are traced. See FILTER DESCRIPTIONS for details on the syntax of a filter. If no filter description is provided, all events will be traced. -w Configure trace to operate in “windowed” mode, where the trace buffer acts as a ring buffer, removing old events to make room for new ones. By default, tracing ends when the buffer runs out of space for new events. -x pid-or-process-name [...] | -p pid-or-process-name [...] Restrict the processes that can trace events. Either exclude (-x) or only trace events (-p) from the provided processes by name or pid. These options are mutually exclusive. 
Processes that cannot be attached to are always excluded on release kernels. Similarly, events in the Mach scheduling subclass are included, regardless of this option, to allow tools to maintain thread scheduling state machines. enable Start tracing events. disable Stop tracing events. Tracing can be started again after it has been disabled, using the same configuration. remove Remove the current trace configuration and free the memory associated with tracing. reset Reset tracing and associated subsystems, including kperf, to their default state. decode debugid [debugid [...]] Print the components that make up the provided debugids. emit debugid [arg1 [arg2 [arg3 [arg4]]]] Emit an event into the trace stream with the provided debugid and arguments. symbolicate path Symbolicate the trace file located at path. config Print the current system's trace configuration. machine Print the current system's machine information. compress [-l fast|balanced|small] path Compress the trace file located at path using the small compression level, unless otherwise specified with the -l option. artrace [-nr] [-t timeout] [-i interval] [-o filename] [-b buffer-size-mb] [-f filter-desc] [-F filter-desc] [-p pid-or-process-name] [--remote[=device-name]] [--type=full|profile|lite|morelite|none] [--kperf=sampler-name,sampler-name@timer-period|timer-frequency|kdebug-filter-desc] [-d group] [-e group] [--stackshot-flags extra-flags] [--disable-coprocessors] [-c command [...]] Profile the system, writing trace events to an automatically named file. By default, this measures scheduler, VM, and system call usage, and samples threads on-core periodically. -o path Specify the name of the file to be created. -f filter-desc Trace the classes and subclasses specified by the filter description. See FILTER DESCRIPTIONS for details on the syntax of a filter. -F filter-desc Exclude events from the default set. Use this option with care, since analysis tools may rely on certain events being present. 
-t timeout Stop tracing and exit after timeout has elapsed. The timeout value may have us, ms, or s appended to indicate the time units. -i interval Set the interval that the profiling timer fires (supports the same time suffixes as -t). -n Disable the profiling timer entirely. -b buffer-size-mb Set the trace buffer size. -r Configure tracing and leave it running in ring buffer mode. -p pid-or-process-name Only record events that occur for the process identified by pid or process-name. Only the first 16 characters of the name are observed, due to a kernel limitation. -d group Disable the group named group. See GROUPS for a list of groups. -e group Enable the group named group. See GROUPS for a list of groups. --remote[=device-name] Also trace on the provided device-name or the local bridge if not specified. --type=full|profile|lite|morelite|none Trace using the specified type. full is the default, while profile just enables the profiling timer, but does not closely track scheduling events. The lite and morelite trace types are meant for long-running, low overhead analysis and prioritize analyzing threads that are blocked for relatively long periods of time, at the cost of biasing samples towards threads that cause a CPU to come out of idle. The ‘lite’ modes work by lazily sampling threads as they are unblocked, and only those threads that block for more than a set threshold. Further, the typical profiling timer is disabled, in lieu of sampling the CPUs opportunistically, on other interrupts. The morelite mode has a more restrictive typefilter than lite. none mode acts like ktrace dump. --stackshot-flags extra-flags Pass the provided extra-flags integer as additional flags when recording stackshots. -c command [...] Run the command specified by command and stop tracing when it exits. All arguments after this option are passed to the command. 
--kperf=sampler-name[,sampler-name]@timer-period|timer-frequency|kdebug-filter-desc Sample using kperf according to the given sampling description. For the syntax of sampling descriptions, see SAMPLING DESCRIPTIONS. FILTER DESCRIPTIONS A filter description is a comma-separated list of class and subclass specifiers that indicate which events should be traced. A class specifier starts with ‘C’ and contains a single byte, specified in either decimal or hex. A subclass specifier starts with ‘S’ and takes two bytes. The high byte is the class and the low byte is the subclass of that class. For example, this filter description would enable classes 1 and 37 and the subclasses 33 and 35 of class 5: ‘C1,C0x25,S0x0521,S0x0523’. The ‘ALL’ filter description enables events from all classes. SAMPLING DESCRIPTIONS A sampling description is similar to a filter description, but it configures sampling. It's composed of two parts: a samplers section and a trigger section, separated by @. The overall form is sampler-name[,sampler-name]@timer-period|timer-frequency|kdebug-filter-desc. The valid names of samplers are ‘ustack’, ‘kstack’, ‘thinfo’, ‘thsnapshot’, ‘meminfo’, ‘thsched’, ‘thdispatch’, ‘tksnapshot’, ‘sysmem’, and ‘thinstrscycles’. For example, to sample user stacks every 10 milliseconds, use ‘ustack@10ms’. To sample thread scheduling information and system memory every time the ‘0xfeedfac0’ event is emitted, use ‘thsched,sysmem@D0xfeedfac0’. GROUPS syscall-sampling Sample backtraces on system calls. fault-sampling Sample backtraces on page faults. graphics Include graphics events. EXIT STATUS The ktrace utility exits 0 on success, and >0 if an error occurs. CAVEATS Once trace has been initialized with the init subcommand (or the trace and artrace subcommands with the -r flag), it remains in use until the space is reclaimed with the remove subcommand. This prevents background diagnostic tools from making use of trace. 
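The filter description encoding can be unpacked mechanically: a ‘C’ specifier carries one class byte, and an ‘S’ specifier carries the class in its high byte and the subclass in its low byte. A sketch of that decoding (a hypothetical helper for illustration, not part of ktrace; the special ‘ALL’ description is not handled):

```python
def parse_filter_desc(desc: str):
    """Decode C/S specifiers into a set of classes and a set of
    (class, subclass) pairs, per the FILTER DESCRIPTIONS rules."""
    classes, subclasses = set(), set()
    for spec in desc.split(","):
        value = int(spec[1:], 0)  # base 0 accepts decimal and 0x-prefixed hex
        if spec[0] == "C":
            classes.add(value)
        elif spec[0] == "S":
            # High byte is the class, low byte the subclass.
            subclasses.add((value >> 8, value & 0xFF))
        else:
            raise ValueError(f"bad specifier: {spec}")
    return classes, subclasses

# The example from the text: classes 1 and 37, subclasses 33 and 35 of class 5.
print(parse_filter_desc("C1,C0x25,S0x0521,S0x0523"))
```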
SEE ALSO fs_usage(1), notify(3), ktrace(5), and trace(1) Darwin June 1, 2022 Darwin
ktrace – record kernel trace files
ktrace info ktrace trace [-ACNnrSstu] [-R path | -E] [-C codes-path [...]] [-T timeout] [-f filter-desc] [-b buffer-size-mb] [-x pid-or-process-name [...] | -p pid-or-process-name [...]] [--json | --csv | --ndjson | --json-64] [-c command [...]] [--only-named-events] [--no-default-codes-files] [--continuous] [--disable-coprocessors] ktrace dump [-E] [-f filter-desc] [-l compression-level] [-T timeout] [-b buffer-size-mb] [-p pid-or-process-name] [--stackshot-flags extra-flags] [--include-log-content] [--disable-coprocessors] [--notify-tracing-started key] [path] ktrace init -b buffer-size-mb | -n n-events ktrace setopt [-f filter-desc] [-w] [-x pid-or-process-name [...] | -p pid-or-process-name [...]] ktrace enable ktrace disable ktrace remove ktrace reset ktrace decode debugid [debugid [...]] ktrace emit debugid [arg1 [arg2 [arg3 [arg4]]]] ktrace symbolicate path ktrace machine ktrace config ktrace compress [-l fast|balanced|small] path ktrace artrace [-nr] [-t timeout] [-i interval] [-o filename] [-b buffer-size-mb] [-f filter-desc] [-F filter-desc] [-p pid-or-process-name] [--kperf=sampler-name[,sampler-name]@timer-period|timer-frequency|kdebug-filter-desc] [--remote[=remote-device]] [--type=full|profile|lite|morelite|none] [--stackshot-flags extra-flags] [--notify-tracing-started key] [-c command [...]]
null
null
DeRez
Tools supporting Carbon development, including DeRez, were deprecated with Xcode 6. The DeRez tool decompiles the resource fork of resourceFile according to the type declarations supplied by the type declaration files. The resource description produced by this decompilation contains the resource definitions (resource and data statements) associated with these type declarations. If for some reason it cannot reproduce the appropriate resource statements, DeRez generates hexadecimal data statements instead. A type declaration file is a file of type declarations used by the resource compiler, Rez. The type declarations for the standard Macintosh resources are contained in the Carbon.r resource header file, contained in the Carbon framework. You may use the ${RIncludes} shell environment variable to define a default path to resource header files. If you do not specify any type declaration files, DeRez produces data statements in hexadecimal form. This same process works backward to recompile the resource fork. If you use the output of DeRez and the appropriate type declaration files as input to Rez, it produces the original resource fork of resourceFile. INPUT An input file containing resources in its resource fork. DeRez does not read standard input. You can also specify resource description files containing type declarations. For each type declaration file on the command line, DeRez applies the following search rules: 1. DeRez tries to open the file with the name specified as is. 2. If rule 1 fails and the filename contains no colons or begins with a colon, DeRez appends the filename to each of the pathnames specified by the {RIncludes} environment variable and tries to open the file. OUTPUT Standard output. DeRez writes a resource description to standard output consisting of resource and data statements that can be understood by Rez. If you omit the typeDeclFile1 [ typeDeclFile2 ]... parameter, DeRez generates hexadecimal data statements instead. 
Errors and warnings are written to diagnostic output. ALIAS RESOLUTION This command resolves Finder aliases on all input file specifications. Finder aliases are also resolved in the pathnames of any files included by specified resource definition files. You can optionally suppress the resolution of leaf aliases for the input resource file (with the -noResolve option). STATUS DeRez can return the following status codes: 0 no errors 1 error in parameters 2 syntax error in resourceFile 3 I/O or program error PARAMETERS resourceFile Specifies a file containing the resource fork to be decompiled. typeDeclFile1 [ typeDeclFile2 ]... Specifies one or more files containing type declarations. These type declarations are the templates associated with the information in the resource description. In addition to using those in the ${RIncludes} folder, you can also specify your own type declaration files. Note The DeRez tool ignores any include (but not #include), read, data, change, delete, and resource statements found in these files. However, it still checks these statements for correct syntax.
DeRez - decompiles resources (DEPRECATED) SYNTAX DeRez resourceFile [ typeDeclFile1 [ typeDeclFile2 ] ... ] [ -c[ompatible] ] [ -d[efine] macro [ = data ] ] [ -e[scape] ] [ -i directoryPath ] [ -is[ysroot] sdkPath ] [ -m[axstringsize] n ] [ -noResolve ] [ -only typeExpr [ (idExpr1 [:idExpr2 ] | resourceName) ] ] [ -only type ] [ -p ] [ -rd ] [ -script Roman | Japanese | Korean | SimpChinese | TradChinese ] [ -s[kip] typeExpr [ (idExpr1 [:idExpr2 ] | resourceName) ] ] [ -s[kip] type ] [ -u[ndef] macro ] [ -useDF ]
null
-c[ompatible] Generates output that is backward-compatible with Rez 1.0. -d[efine] macro [ = data ] Defines the macro variable macro as having the value data. You can use this option more than once on a command line. macro Specifies the macro variable to be defined. data Specifies the value of macro. This is the same as writing #define macro [ data ] at the beginning of the resource file. If you do not specify data, DeRez sets the value of data to the null string. Note that this still defines the macro. -e[scape] Prints characters that are normally escaped, such as \0xff, as extended Macintosh characters. By default, characters with values between $20 and $FF are printed as Macintosh characters. With this option, however, DeRez prints all characters (except null, newline, tab, backspace, form feed, vertical tab, and rubout) as characters, not as escape sequences. Note Not all fonts have all the characters defined. -i directoryPath Specifies the directory to search for #include files. You may specify this option more than once. Directory paths are searched in the order in which they appear on the command line. -is[ysroot] sdkPath Specifies the system SDK in which to search for include files and frameworks. If omitted, the system root ("/") is assumed. -m[axstringsize] n Sets the maximum output string width to n, where n must be in the range 2-120. -noResolve Suppresses leaf alias resolution of the file or pathname for the input resource file thus allowing the resource fork of a Finder alias file to be decompiled. Finder aliases are still resolved on all resource definition file paths and on any files they may include. -only typeExpr [ (idExpr1[:idExpr2] | resourceName) ] Reads only resources of the type indicated by typeExpr. An ID (idExpr1), range of IDs (idExpr1:idExpr2), or resource name can also be supplied to further specify which resources to read. If you provide this additional information, DeRez reads only the specified resources. 
This option can be repeated multiple times. Note that this option cannot be specified in conjunction with the -skip option. Note The typeExpr parameter is an expression and must be enclosed in single quotation marks. If you also specify an ID, range of IDs, or resource name, you must place double quotation marks around the entire option parameter, as in these examples: -only "'MENU' (1:128)" -only "'MENU' ("'"Edit"'")" -only type Reads only resources of the specified type. It is not necessary to place quotation marks around the type as long as it starts with a letter and contains no spaces or special characters. For example, this specification doesn't require quotation marks: -only MENU Escape characters are not allowed. This option can be repeated multiple times. -p Writes progress and summary information to standard output. -rd Suppresses warning messages emitted when a resource type is redeclared. -script Roman | Japanese | Korean | SimpChinese | TradChinese Enables the recognition of any of several 2-byte character script systems to use when compiling and decompiling files. This option ensures that 2-byte characters in strings are handled as indivisible entities. The default language is Roman and specifies 1-byte character sets. -s[kip] typeExpr [ (idExpr1 [:idExpr2 ] | resourceName) ] Skips resources of the type indicated by typeExpr. For example, it is very useful to be able to skip 'CODE' resources. An ID (idExpr1), range of IDs (idExpr1:idExpr2), or resource name can also be supplied to further specify which resources to skip. If you provide this additional information, DeRez skips only the specified resources. You can repeat this option multiple times. Note that this option cannot be used in conjunction with the -only option. Note The typeExpr parameter is an expression and must be enclosed in single quotation marks. 
If you also specify an ID, range of IDs, or resource name, you must place double quotation marks around the entire option parameter, as in these examples: -skip "'MENU' (1:128)" -skip "'MENU' ("'"Edit"'")" -s[kip] type Skips only resources of the specified type. It is not necessary to place quotation marks around the type as long as it starts with a letter and does not contain spaces or special characters. For example, this specification doesn't require quotation marks: -skip CODE Escape characters are not allowed. This option can be repeated multiple times. -u[ndef] macro Undefines the preset macro variable macro. This is the same as writing #undef macro at the beginning of the resource file. This option can be repeated more than once on a command line. -useDF Reads and writes resource information from the files' data forks, instead of their resource forks.
The following command line displays the 'cfrg' resources in the CarbonLib library. The type declaration for 'cfrg' resources is found in the CarbonCore.r framework umbrella resource header file.

     /Developer/Tools/DeRez -I /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/ /System/Library/CFMSupport/CarbonLib CarbonCore.r

In the following example, DeRez decompiles the 'itl1' resource ID 0 in the data-fork-based localized resource file in the HIToolbox framework.

     $ export RIncludes=/System/Library/Frameworks/Carbon.framework/Headers/
     $ /Developer/Tools/DeRez -only 'itl1' /System/Library/Frameworks/Carbon.framework/Frameworks/HIToolbox.framework/Resources/English.lproj/Localized.rsrc Carbon.r -useDF

SEE ALSO
     Rez

Mac OS X July 24, 2000 DeRez(1)
textutil
textutil can be used to manipulate text files of various formats, using the mechanisms provided by the Cocoa text system.

The first argument indicates the operation to perform, one of:

-help        Show the usage information for the command and exit. This is the default command option if none is specified.

-info        Display information about the specified files.

-convert fmt Convert the specified files to the indicated format and write each one back to the file system.

-cat fmt     Read the specified files, concatenate them, and write the result out as a single file in the indicated format.

fmt is one of: txt, html, rtf, rtfd, doc, docx, wordml, odt, or webarchive.

There are some additional options for general use:

-extension ext
     Specify an extension to be used for output files (by default, the extension will be determined from the format).

-output path
     Specify the file name to be used for the first output file.

-stdin
     Specify that input should be read from stdin rather than from files.

-stdout
     Specify that the first output file should go to stdout.

-encoding IANA_name | NSStringEncoding
     Specify the encoding to be used for plain text or HTML output files (by default, the output encoding will be UTF-8). NSStringEncoding refers to one of the numeric values recognized by NSString. IANA_name refers to an IANA character set name as understood by CFString. The operation will fail if the file cannot be converted to the specified encoding.

-inputencoding IANA_name | NSStringEncoding
     Force all plain text input files to be interpreted using the specified encoding (by default, a file's encoding will be determined from its BOM). The operation will fail if the file cannot be interpreted using the specified encoding.

-format fmt
     Force all input files to be interpreted using the indicated format (by default, a file's format will be determined from its contents).

-font font
     Specify the name of the font to be used for converting plain text to rich text.

-fontsize size
     Specify the size in points of the font to be used for converting plain text to rich text.

--   Specify that all further arguments are file names.

There are some additional options for HTML and WebArchive files:

-noload
     Do not load subsidiary resources.

-nostore
     Do not write out subsidiary resources.

-baseurl url
     Specify a base URL to be used for relative URLs.

-timeout t
     Specify the time in seconds to wait for resources to load.

-textsizemultiplier x
     Specify a numeric factor by which to multiply font sizes.

-excludedelements (tag1, tag2, ...)
     Specify which HTML elements should not be used in generated HTML (the list should be a single argument, and so will usually need to be quoted in a shell context).

-prefixspaces n
     Specify the number of spaces by which to indent nested elements in generated HTML (default is 2).

There are some additional options for treating metadata:

-strip
     Do not copy metadata from input files to output files.

-title val
     Specify the title metadata attribute for output files.

-author val
     Specify the author metadata attribute for output files.

-subject val
     Specify the subject metadata attribute for output files.

-keywords (val1, val2, ...)
     Specify the keywords metadata attribute for output files (the list should be a single argument, and so will usually need to be quoted in a shell context).

-comment val
     Specify the comment metadata attribute for output files.

-editor val
     Specify the editor metadata attribute for output files.

-company val
     Specify the company metadata attribute for output files.

-creationtime yyyy-mm-ddThh:mm:ssZ
     Specify the creation time metadata attribute for output files.

-modificationtime yyyy-mm-ddThh:mm:ssZ
     Specify the modification time metadata attribute for output files.
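As a sketch of how these options combine (the file names and metadata values here are hypothetical), a plain text file could be converted to RTF with an explicit input encoding, font, and title metadata:

     textutil -convert rtf -inputencoding UTF-8 -font Helvetica -fontsize 12 -title "Draft Notes" -output notes.rtf draft.txt

This reads draft.txt as UTF-8, converts it to RTF set in 12-point Helvetica, sets the title metadata attribute, and writes the result to notes.rtf.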
textutil – text utility
textutil [command_option] [other_options] file ...
textutil -info foo.rtf
     displays information about foo.rtf.

textutil -convert html foo.rtf
     converts foo.rtf into foo.html.

textutil -convert rtf -font Times -fontsize 10 foo.txt
     converts foo.txt into foo.rtf, using Times 10 for the font.

textutil -cat html -title "Several Files" -output index.html *.rtf
     loads all RTF files in the current directory, concatenates their contents, and writes the result out as index.html with the HTML title set to "Several Files".

DIAGNOSTICS
     The textutil command exits 0 on success, and 1 on failure.

CAUTIONS
     Some options may require a connection to the window server.

HISTORY
     The textutil command first appeared in Mac OS X 10.4.

macOS September 9, 2004 macOS
xgettext.pl
This program extracts translatable strings from given input files, or from STDIN if none are given. Please see Locale::Maketext::Extract for a list of supported input file formats.
xgettext.pl - Extract translatable strings from source

VERSION
     version 1.00
xgettext.pl [OPTION] [INPUTFILE]...
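By way of illustration (the module path and output file here are hypothetical), a typical invocation extracting strings from a Perl module into a single PO file might look like:

     xgettext.pl -P perl -o po/messages.po lib/MyApp.pm

Adding -v would also list which files were processed and which plugin extracted strings from each.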
Mandatory arguments to long options are mandatory for short options too. Similarly for optional arguments.

Input file location:

INPUTFILE...
     Files to extract messages from. If not specified, STDIN is assumed.

-f, --files-from=FILE
     Get list of input files from FILE.

-D, --directory=DIRECTORY
     Add DIRECTORY to the list of directories searched for input files.

Input file format:

-u, --use-gettext-style
     Specifies that the source program uses the Gettext style (e.g. %1) instead of the Maketext style (e.g. "[_1]") in its localization calls.

Output file location:

-d, --default-domain=NAME
     Use NAME.po for output, instead of "messages.po".

-o, --output=FILE
     PO file name to be written or incrementally updated; "-" means writing to STDOUT.

-p, --output-dir=DIR
     Output files will be placed in directory DIR.

Output details:

-g, --gnu-gettext
     Enables GNU gettext interoperability by printing "#, perl-maketext-format" before each entry that has "%" variables.

-W, --wrap
     If wrap is enabled, then, for entries with multiple file locations, each location is listed on a separate line. The default is to put them all on a single line. Other comments are not affected.

Plugins:

By default, all builtin parser plugins are enabled for all file types, with warnings turned off. If any plugin is specified on the command line, then warnings are turned on by default - you can turn them off with "-now".

-P|--plugin pluginname
     Use the specified plugin for the default file types recognised by that plugin.

-P|--plugin 'pluginname=*'
     Use the specified plugin for all file types.

-P|--plugin pluginname=ext,ext2
     Use the specified plugin for any files ending in .ext or .ext2.

-P|--plugin My::Module::Name='*'
     Use your custom plugin module for all file types.

Multiple plugins can be specified on the command line.

Available plugins:

"perl" : Locale::Maketext::Extract::Plugin::Perl
     For a slightly more accurate but much slower Perl parser, you can use the PPI plugin.
     This does not have a short name, but must be specified in full, e.g.:

         xgettext.pl -P Locale::Maketext::Extract::Plugin::PPI

"tt2" : Locale::Maketext::Extract::Plugin::TT2
"yaml" : Locale::Maketext::Extract::Plugin::YAML
"formfu" : Locale::Maketext::Extract::Plugin::FormFu
"mason" : Locale::Maketext::Extract::Plugin::Mason
"text" : Locale::Maketext::Extract::Plugin::TextTemplate
"generic" : Locale::Maketext::Extract::Plugin::Generic

Warnings:

If a parser plugin encounters a syntax error while parsing, it will abort parsing and hand over to the next parser plugin. If warnings are turned on then the error will be echoed to STDERR. Off by default, unless any plugin has been specified on the command line.

-w|--warnings
-now|--nowarnings

Verbose:

If you would like to see which files have been processed, which plugins were used, and which strings were extracted, then enable "verbose". If no acceptable plugin was found, or no strings were extracted, then the file is not listed:

-v|--verbose
     Lists processed files.

-v -v|--verbose --verbose
     Lists processed files and which plugins managed to extract strings.

-v -v -v|--verbose --verbose --verbose
     Lists processed files, which plugins managed to extract strings, and the extracted strings, the line where they were found, and any variables.

SEE ALSO
     Locale::Maketext::Extract, Locale::Maketext::Lexicon::Gettext, Locale::Maketext, Locale::Maketext::Extract::Plugin::Perl, Locale::Maketext::Extract::Plugin::PPI, Locale::Maketext::Extract::Plugin::TT2, Locale::Maketext::Extract::Plugin::YAML, Locale::Maketext::Extract::Plugin::FormFu, Locale::Maketext::Extract::Plugin::Mason, Locale::Maketext::Extract::Plugin::TextTemplate, Locale::Maketext::Extract::Plugin::Generic

AUTHORS
     Audrey Tang <cpan@audreyt.org>

COPYRIGHT
     Copyright 2002-2008 by Audrey Tang <cpan@audreyt.org>. This software is released under the MIT license cited below.
The "MIT" License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. AUTHORS • Clinton Gormley <drtech@cpan.org> • Audrey Tang <cpan@audreyt.org> COPYRIGHT AND LICENSE This software is Copyright (c) 2014 by Audrey Tang. This is free software, licensed under: The MIT (X11) License perl v5.34.0 2014-03-06 XGETTEXT(1)
automator
automator runs the specified workflow. To create or edit a workflow, use the Automator application.

The following options are available:

-D name=value
     Set variable name to value for this run of workflow.

-i input
     Set input as the input to workflow. If input is - then the contents of standard input are used. Each newline (\n)-terminated line is treated as one text input item.

-v   Run in verbose mode.

Mac OS X April 1, 2007 Mac OS X
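For example (the workflow name and variable here are hypothetical), a workflow could be run verbosely with a variable override and input read from standard input:

     echo "/Users/me/Pictures/photo.jpg" | automator -v -D outputFolder=/tmp/resized -i - Resize.workflow

Each line piped in becomes one text input item passed to the workflow.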
automator – Runs an Automator workflow.
automator [-v] [-i input] [-D name=value ...] workflow
stat
The stat utility displays information about the file pointed to by file. Read, write, or execute permissions of the named file are not required, but all directories listed in the pathname leading to the file must be searchable. If no argument is given, stat displays information about the file descriptor for standard input.

When invoked as readlink, only the target of the symbolic link is printed. If the given argument is not a symbolic link and the -f option is not specified, readlink will print nothing and exit with an error. If the -f option is specified, the output is canonicalized by following every symlink in every component of the given path recursively. readlink will resolve both absolute and relative paths, and return the absolute pathname corresponding to file. In this case, the argument does not need to be a symbolic link.

The information displayed is obtained by calling lstat(2) with the given argument and evaluating the returned structure. The default format displays the st_dev, st_ino, st_mode, st_nlink, st_uid, st_gid, st_rdev, st_size, st_atime, st_mtime, st_ctime, st_birthtime, st_blksize, st_blocks, and st_flags fields, in that order.

The options are as follows:

-F      As in ls(1), display a slash (‘/’) immediately after each pathname that is a directory, an asterisk (‘*’) after each that is executable, an at sign (‘@’) after each symbolic link, a percent sign (‘%’) after each whiteout, an equal sign (‘=’) after each socket, and a vertical bar (‘|’) after each that is a FIFO. The use of -F implies -l.

-L      Use stat(2) instead of lstat(2). The information reported by stat will refer to the target of file, if file is a symbolic link, and not to file itself. If the link is broken or the target does not exist, fall back on lstat(2) and report information about the link.

-f format
        Display information using the specified format. See the Formats section for a description of valid formats.

-l      Display output in ls -lT format.

-n      Do not force a newline to appear at the end of each piece of output.

-q      Suppress failure messages if calls to stat(2) or lstat(2) fail. When run as readlink, error messages are automatically suppressed.

-r      Display raw information. That is, for all the fields in the stat structure, display the raw, numerical value (for example, times in seconds since the epoch, etc.).

-s      Display information in “shell output” format, suitable for initializing variables.

-t timefmt
        Display timestamps using the specified format. This format is passed directly to strftime(3).

-x      Display information in a more verbose way as known from some Linux distributions.

Formats

Format strings are similar to printf(3) formats in that they start with %, are then followed by a sequence of formatting characters, and end in a character that selects the field of the struct stat which is to be formatted. If the % is immediately followed by one of n, t, %, or @, then a newline character, a tab character, a percent character, or the current file number is printed; otherwise the string is examined for the following:

Any of the following optional flags:

#       Selects an alternate output form for octal and hexadecimal output. Non-zero octal output will have a leading zero, and non-zero hexadecimal output will have “0x” prepended to it.

+       Asserts that a sign indicating whether a number is positive or negative should always be printed. Non-negative numbers are not usually printed with a sign.

-       Aligns string output to the left of the field, instead of to the right.

0       Sets the fill character for left padding to the ‘0’ character, instead of a space.

space   Reserves a space at the front of non-negative signed output fields. A ‘+’ overrides a space if both are used.

Then the following fields:

size    An optional decimal digit string specifying the minimum field width.
prec    An optional precision composed of a decimal point ‘.’ and a decimal digit string that indicates the maximum string length, the number of digits to appear after the decimal point in floating point output, or the minimum number of digits to appear in numeric output.

fmt     An optional output format specifier which is one of D, O, U, X, F, or S. These represent signed decimal output, octal output, unsigned decimal output, hexadecimal output, floating point output, and string output, respectively. Some output formats do not apply to all fields. Floating point output only applies to timespec fields (the a, m, and c fields). The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with:

        amc     Display date in strftime(3) format.
        dr      Display actual device name.
        f       Display the flags of file as in ls -lTdo.
        gu      Display group or user name.
        p       Display the mode of file as in ls -lTd.
        N       Displays the name of file.
        T       Displays the type of file.
        Y       Insert a “ -> ” into the output. Note that the default output format for Y is a string, but if specified explicitly, these four characters are prepended.

sub     An optional sub field specifier (high, middle, low). Only applies to the p, d, r, and T output formats. It can be one of the following:

        H       “High” — specifies the major number for devices from r or d, the “user” bits for permissions from the string form of p, the file “type” bits from the numeric forms of p, and the long output form of T.

        L       “Low” — specifies the minor number for devices from r or d, the “other” bits for permissions from the string form of p, the “user”, “group”, and “other” bits from the numeric forms of p, and the ls -F style output character for file type when used with T (the use of L for this is optional).

        M       “Middle” — specifies the “group” bits for permissions from the string output form of p, or the “suid”, “sgid”, and “sticky” bits for the numeric forms of p.
datum   A required field specifier, being one of the following:

        d       Device upon which file resides (st_dev).
        i       file's inode number (st_ino).
        p       File type and permissions (st_mode).
        l       Number of hard links to file (st_nlink).
        u, g    User ID and group ID of file's owner (st_uid, st_gid).
        r       Device number for character and block device special files (st_rdev).
        a, m, c, B
                The time file was last accessed or modified, or when the inode was last changed, or the birth time of the inode (st_atime, st_mtime, st_ctime, st_birthtime).
        z       The size of file in bytes (st_size).
        b       Number of blocks allocated for file (st_blocks).
        k       Optimal file system I/O operation block size (st_blksize).
        f       User defined flags for file.
        v       Inode generation number (st_gen).

        The following five field specifiers are not drawn directly from the data in struct stat, but are:

        N       The name of the file.
        R       The absolute pathname corresponding to the file.
        T       The file type, either as in ls -F or in a more descriptive form if the sub field specifier H is given.
        Y       The target of a symbolic link.
        Z       Expands to “major,minor” from the rdev field for character or block special devices and gives size output for all others.

Only the % and the field specifier are required. Most field specifiers default to U as an output form, with the exception of p which defaults to O; a, m, and c which default to D; and Y, T, and N which default to S.

EXIT STATUS
     The stat and readlink utilities exit 0 on success, and >0 if an error occurs.
stat, readlink – display file status
stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file ...] readlink [-fn] [file ...]
If no options are specified, the default format is "%d %i %Sp %l %Su %Sg %r %z \"%Sa\" \"%Sm\" \"%Sc\" \"%SB\" %k %b %#Xf %N".

     > stat /tmp/bar
     0 78852 -rw-r--r-- 1 root wheel 0 0 "Jul 8 10:26:03 2004" "Jul 8 10:26:03 2004" "Jul 8 10:28:13 2004" "Jan 1 09:00:00 1970" 16384 0 0 /tmp/bar

Given a symbolic link “foo” that points from /tmp/foo to /, you would use stat as follows:

     > stat -F /tmp/foo
     lrwxrwxrwx 1 jschauma cs 1 Apr 24 16:37:28 2002 /tmp/foo@ -> /

     > stat -LF /tmp/foo
     drwxr-xr-x 16 root wheel 512 Apr 19 10:57:54 2002 /tmp/foo/

To initialize some shell variables, you could use the -s flag as follows:

     > csh
     % eval set `stat -s .cshrc`
     % echo $st_size $st_mtimespec
     1148 1015432481

     > sh
     $ eval $(stat -s .profile)
     $ echo $st_size $st_mtimespec
     1148 1015432481

In order to get a list of file types including files pointed to if the file is a symbolic link, you could use the following format:

     $ stat -f "%N: %HT%SY" /tmp/*
     /tmp/bar: Symbolic Link -> /tmp/foo
     /tmp/output25568: Regular File
     /tmp/blah: Directory
     /tmp/foo: Symbolic Link -> /

In order to get a list of the devices, their types and the major and minor device numbers, formatted with tabs and linebreaks, you could use the following format:

     stat -f "Name: %N%n%tType: %HT%n%tMajor: %Hr%n%tMinor: %Lr%n%n" /dev/*
     [...]
     Name: /dev/wt8
             Type: Block Device
             Major: 3
             Minor: 8
     Name: /dev/zero
             Type: Character Device
             Major: 2
             Minor: 12

In order to determine the permissions set on a file separately, you could use the following format:

     > stat -f "%Sp -> owner=%SHp group=%SMp other=%SLp" .
     drwxr-xr-x -> owner=rwx group=r-x other=r-x

In order to determine the three files that have been modified most recently, you could use the following format:

     > stat -f "%m%t%Sm %N" /tmp/* | sort -rn | head -3 | cut -f2-
     Apr 25 11:47:00 2002 /tmp/blah
     Apr 25 10:36:34 2002 /tmp/bar
     Apr 24 16:47:35 2002 /tmp/foo

To display a file's modification time:

     > stat -f %m /tmp/foo
     1177697733

To display the same modification time in a readable format:

     > stat -f %Sm /tmp/foo
     Apr 27 11:15:33 2007

To display the same modification time in a readable and sortable format:

     > stat -f %Sm -t %Y%m%d%H%M%S /tmp/foo
     20070427111533

To display the same in UTC:

     > sh
     $ TZ= stat -f %Sm -t %Y%m%d%H%M%S /tmp/foo
     20070427181533

SEE ALSO
     file(1), ls(1), lstat(2), readlink(2), stat(2), printf(3), strftime(3)

HISTORY
     The stat utility appeared in NetBSD 1.6 and FreeBSD 4.10.

AUTHORS
     The stat utility was written by Andrew Brown <atatat@NetBSD.org>. This man page was written by Jan Schaumann <jschauma@NetBSD.org>.

macOS 14.5 June 22, 2017 macOS 14.5
perldoc
perldoc looks up documentation in .pod format that is embedded in the perl installation tree or in a perl script, and displays it using a variety of formatters. This is primarily used for the documentation for the perl library modules. Your system may also have man pages installed for those modules, in which case you can probably just use the man(1) command. If you are looking for a table of contents to the Perl library modules documentation, see the perltoc page.
perldoc - Look up Perl documentation in Pod format.
perldoc [-h] [-D] [-t] [-u] [-m] [-l] [-U] [-F] [-i] [-V] [-T] [-r] [-d destination_file] [-o formatname] [-M FormatterClassName] [-w formatteroption:value] [-n nroff-replacement] [-X] [-L language_code] PageName|ModuleName|ProgramName|URL Examples: perldoc -f BuiltinFunction perldoc -L it -f BuiltinFunction perldoc -q FAQ Keyword perldoc -L fr -q FAQ Keyword perldoc -v PerlVariable perldoc -a PerlAPI See below for more description of the switches.
-h      Prints out a brief help message.

-D      Describes search for the item in detail.

-t      Display docs using plain text converter, instead of nroff. This may be faster, but it probably won't look as nice.

-u      Skip the real Pod formatting, and just show the raw Pod source (Unformatted).

-m module
        Display the entire module: both code and unformatted pod documentation. This may be useful if the docs don't explain a function in the detail you need, and you'd like to inspect the code directly; perldoc will find the file for you and simply hand it off for display.

-l      Display only the file name of the module found.

-U      When running as the superuser, don't attempt to drop privileges for security. This option is implied with -F. NOTE: Please see the heading SECURITY below for more information.

-F      Consider arguments as file names; no search in directories will be performed. Implies -U if run as the superuser.

-f perlfunc
        The -f option followed by the name of a perl built-in function will extract the documentation of this function from perlfunc. Example: perldoc -f sprintf

-q perlfaq-search-regexp
        The -q option takes a regular expression as an argument. It will search the question headings in perlfaq[1-9] and print the entries matching the regular expression. Example: perldoc -q shuffle

-a perlapifunc
        The -a option followed by the name of a perl api function will extract the documentation of this function from perlapi. Example: perldoc -a newHV

-v perlvar
        The -v option followed by the name of a Perl predefined variable will extract the documentation of this variable from perlvar. Examples:

            perldoc -v '$"'
            perldoc -v @+
            perldoc -v DATA

-T      This specifies that the output is not to be sent to a pager, but is to be sent directly to STDOUT.

-d destination-filename
        This specifies that the output is to be sent neither to a pager nor to STDOUT, but is to be saved to the specified filename.
        Example: "perldoc -oLaTeX -dtextwrapdocs.tex Text::Wrap"

-o output-formatname
        This specifies that you want Perldoc to try using a Pod-formatting class for the output format that you specify. For example: "-oman". This is actually just a wrapper around the "-M" switch; using "-oformatname" just looks for a loadable class by adding that format name (with different capitalizations) to the end of different classname prefixes. For example, "-oLaTeX" currently tries all of the following classes: Pod::Perldoc::ToLaTeX, Pod::Perldoc::Tolatex, Pod::Perldoc::ToLatex, Pod::Perldoc::ToLATEX, Pod::Simple::LaTeX, Pod::Simple::latex, Pod::Simple::Latex, Pod::Simple::LATEX, Pod::LaTeX, Pod::latex, Pod::Latex, Pod::LATEX.

-M module-name
        This specifies the module that you want to try using for formatting the pod. The class must at least provide a "parse_from_file" method. For example: "perldoc -MPod::Perldoc::ToChecker". You can specify several classes to try by joining them with commas or semicolons, as in "-MTk::SuperPod;Tk::Pod".

-w option:value or -w option
        This specifies an option to call the formatter with. For example, "-w textsize:15" will call "$formatter->textsize(15)" on the formatter object before it is used to format the object. For this to be valid, the formatter class must provide such a method, and the value you pass should be valid. (So if "textsize" expects an integer, and you do "-w textsize:big", expect trouble.) You can use "-w optionname" (without a value) as shorthand for "-w optionname:TRUE". This is presumably useful in cases of on/off features like: "-w page_numbering". You can use an "=" instead of the ":", as in: "-w textsize=15". This might be more (or less) convenient, depending on what shell you use.

-X      Use an index if it is present. The -X option looks for an entry whose basename matches the name given on the command line in the file "$Config{archlib}/pod.idx". The pod.idx file should contain fully qualified filenames, one per line.
-L language_code
        This allows one to specify the language code for the desired language translation. If the "POD2::<language_code>" package isn't installed in your system, the switch is ignored. All available translation packages are to be found under the "POD2::" namespace. See POD2::IT (or POD2::FR) to see how to create new localized "POD2::*" documentation packages and integrate them into Pod::Perldoc.

PageName|ModuleName|ProgramName|URL
        The item you want to look up. Nested modules (such as "File::Basename") are specified either as "File::Basename" or "File/Basename". You may also give a descriptive name of a page, such as "perlfunc". For URLs, HTTP and HTTPS are the only kinds currently supported. For simple names like 'foo', when the normal search fails to find a matching page, a search with the "perl" prefix is tried as well. So "perldoc intro" is enough to find/render "perlintro.pod".

-n some-formatter
        Specify a replacement for groff.

-r      Recursive search.

-i      Ignore case.

-V      Displays the version of perldoc you're running.

SECURITY

Because perldoc does not run properly tainted, and is known to have security issues, when run as the superuser it will attempt to drop privileges by setting the effective and real IDs to nobody's or nouser's account, or -2 if unavailable. If it cannot relinquish its privileges, it will not run. See the "-U" option if you do not want this behavior, but beware that there are significant security risks if you choose to use "-U". Since 3.26, using "-F" as the superuser also implies "-U", as opening most files and traversing directories requires privileges that are above the nobody/nogroup level.

ENVIRONMENT

Any switches in the "PERLDOC" environment variable will be used before the command line arguments. Useful values for "PERLDOC" include "-oterm", "-otext", "-ortf", "-oxml", and so on, depending on what modules you have on hand; or the formatter class may be specified exactly with "-MPod::Perldoc::ToTerm" or the like.

"perldoc" also searches directories specified by the "PERL5LIB" (or "PERLLIB" if "PERL5LIB" is not defined) and "PATH" environment variables. (The latter is so that embedded pods for executables, such as "perldoc" itself, are available.) In directories where either "Makefile.PL" or "Build.PL" exist, "perldoc" will add "." and "lib" first to its search path, and as long as you're not the superuser will add "blib" too. This is really helpful if you're working inside of a build directory and want to read through the docs even if you have a version of a module previously installed.

"perldoc" will use, in order of preference, the pager defined in "PERLDOC_PAGER", "MANPAGER", or "PAGER" before trying to find a pager on its own. ("MANPAGER" is not used if "perldoc" was told to display plain text or unformatted pod.)

When using perldoc in its "-m" mode (display module source code), "perldoc" will attempt to use the pager set in "PERLDOC_SRC_PAGER". A useful setting for this command is your favorite editor, as in "/usr/bin/nano". (Don't judge me.)

One useful value for "PERLDOC_PAGER" is "less -+C -E".

Having PERLDOCDEBUG set to a positive integer will make perldoc emit even more descriptive output than the "-D" switch does; the higher the number, the more it emits.

CHANGES

Up to 3.14_05, the switch -v was used to produce verbose messages of perldoc operation; that is now enabled by -D.

SEE ALSO
     perlpod, Pod::Perldoc

AUTHOR
     Current maintainer: Mark Allen "<mallen@cpan.org>"

     Past contributors are: brian d foy "<bdfoy@cpan.org>", Adriano R. Ferreira "<ferreira@cpan.org>", Sean M. Burke "<sburke@cpan.org>", Kenneth Albanowski "<kjahds@kjahds.com>", Andy Dougherty "<doughera@lafcol.lafayette.edu>", and many others.

perl v5.38.2 2023-11-28 PERLDOC(1)
javac
The javac command reads source files that contain module, package and type declarations written in the Java programming language, and compiles them into class files that run on the Java Virtual Machine. The javac command can also process annotations in Java source files and classes. Source files must have a file name extension of .java. Class files have a file name extension of .class. Both source and class files normally have file names that identify the contents. For example, a class called Shape would be declared in a source file called Shape.java, and compiled into a class file called Shape.class. There are two ways to specify source files to javac: • For a small number of source files, you can list their file names on the command line. • For a large number of source files, you can use the @filename option on the command line to specify an argument file that lists their file names. See Standard Options for a description of the option and Command-Line Argument Files for a description of javac argument files. The order of source files specified on the command line or in an argument file is not important. javac will compile the files together, as a group, and will automatically resolve any dependencies between the declarations in the various source files. javac expects that source files are arranged in one or more directory hierarchies on the file system, described in Arrangement of Source Code. To compile a source file, javac needs to find the declaration of every class or interface that is used, extended, or implemented by the code in the source file. This lets javac check that the code has the right to access those classes and interfaces. Rather than specifying the source files of those classes and interfaces explicitly, you can use command-line options to tell javac where to search for their source files. If you have compiled those source files previously, you can use options to tell javac where to search for the corresponding class files. 
The options, which all have names ending in "path", are described in Standard Options, and further described in Configuring a Compilation and Searching for Module, Package and Type Declarations.

By default, javac compiles each source file to a class file in the same directory as the source file. However, it is recommended to specify a separate destination directory with the -d option.

Command-line options and environment variables also control how javac performs various tasks:

• Compiling code to run on earlier releases of the JDK.
• Compiling code to run under a debugger.
• Checking for stylistic issues in Java source code.
• Checking for problems in javadoc comments (/** ... */).
• Processing annotations in source files and class files.
• Upgrading and patching modules in the compile-time environment.

javac supports Compiling for Earlier Releases of the Platform and can also be invoked from Java code using one of a number of APIs.
javac - read Java declarations and compile them into class files
javac [options] [sourcefiles-or-classnames]
options
       Command-line options.

sourcefiles-or-classnames
       Source files to be compiled (for example, Shape.java) or the names of previously compiled classes to be processed for annotations (for example, geometry.MyShape).

javac provides standard options, and extra options that are either non-standard or are for advanced use.

Some options take one or more arguments. If an argument contains spaces or other whitespace characters, the value should be quoted according to the conventions of the environment being used to invoke javac. If the option begins with a single dash (-), the argument should either directly follow the option name, or should be separated with a colon (:) or whitespace, depending on the option. If the option begins with a double dash (--), the argument may be separated either by whitespace or by an equals (=) character with no additional whitespace. For example,

       -Aname="J. Duke"
       -proc:only
       -d myDirectory
       --module-version 3
       --module-version=3

In the following lists of options, an argument of path represents a search path, composed of a list of file system locations separated by the platform path separator character (semicolon ; on Windows, or colon : on other systems). Depending on the option, the file system locations may be directories, JAR files or JMOD files.

Standard Options

@filename
       Reads options and file names from a file. To shorten or simplify the javac command, you can specify one or more files that contain arguments to the javac command (except -J options). This lets you create javac commands of any length on any operating system. See Command-Line Argument Files.

-Akey[=value]
       Specifies options to pass to annotation processors. These options are not interpreted by javac directly, but are made available for use by individual processors. The key value should be one or more identifiers separated by a dot (.).
--add-modules module,module
       Specifies root modules to resolve in addition to the initial modules, or all modules on the module path if module is ALL-MODULE-PATH.

--boot-class-path path or -bootclasspath path
       Overrides the location of the bootstrap class files.
       Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. For JDK 9 or later, see --system.

--class-path path, -classpath path, or -cp path
       Specifies where to find user class files and annotation processors. This class path overrides the user class path in the CLASSPATH environment variable.
       • If --class-path, -classpath, or -cp are not specified, then the user class path is the value of the CLASSPATH environment variable, if that is set, or else the current directory.
       • If not compiling code for modules, and the --source-path or -sourcepath option is not specified, then the user class path is also searched for source files.
       • If the -processorpath option is not specified, then the class path is also searched for annotation processors.

-d directory
       Sets the destination directory (or class output directory) for class files. If a class is part of a package, then javac puts the class file in a subdirectory that reflects the module name (if appropriate) and package name. The directory, and any necessary subdirectories, will be created if they do not already exist.
       If the -d option is not specified, then javac puts each class file in the same directory as the source file from which it was generated.
       Except when compiling code for multiple modules, the contents of the class output directory will be organized in a package hierarchy. When compiling code for multiple modules, the contents of the output directory will be organized in a module hierarchy, with the contents of each module in a separate subdirectory, each organized as a package hierarchy.
Note: When compiling code for one or more modules, the class output directory will automatically be checked when searching for previously compiled classes. When not compiling for modules, for backwards compatibility, the directory is not automatically checked for previously compiled classes, and so it is recommended to specify the class output directory as one of the locations on the user class path, using the --class-path option or one of its alternate forms. -deprecation Shows a description of each use or override of a deprecated member or class. Without the -deprecation option, javac shows a summary of the source files that use or override deprecated members or classes. The -deprecation option is shorthand for -Xlint:deprecation. --enable-preview Enables preview language features. Used in conjunction with either -source or --release. -encoding encoding Specifies character encoding used by source files, such as EUC- JP and UTF-8. If the -encoding option is not specified, then the platform default converter is used. -endorseddirs directories Overrides the location of the endorsed standards path. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. -extdirs directories Overrides the location of the installed extensions. directories is a list of directories, separated by the platform path separator (; on Windows, and : otherwise). Each JAR file in the specified directories is searched for class files. All JAR files found become part of the class path. If you are compiling for a release of the platform that supports the Extension Mechanism, then this option specifies the directories that contain the extension classes. See [Compiling for Other Releases of the Platform]. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. 
-g Generates all debugging information, including local variables. By default, only line number and source file information is generated. -g:[lines, vars, source] Generates only the kinds of debugging information specified by the comma-separated list of keywords. Valid keywords are: lines Line number debugging information. vars Local variable debugging information. source Source file debugging information. -g:none Does not generate debugging information. -h directory Specifies where to place generated native header files. When you specify this option, a native header file is generated for each class that contains native methods or that has one or more constants annotated with the java.lang.annotation.Native annotation. If the class is part of a package, then the compiler puts the native header file in a subdirectory that reflects the module name (if appropriate) and package name. The directory, and any necessary subdirectories, will be created if they do not already exist. --help, -help or -? Prints a synopsis of the standard options. --help-extra or -X Prints a synopsis of the set of extra options. --help-lint Prints the supported keys for the -Xlint option. -implicit:[none, class] Specifies whether or not to generate class files for implicitly referenced files: • -implicit:class --- Automatically generates class files. • -implicit:none --- Suppresses class file generation. If this option is not specified, then the default automatically generates class files. In this case, the compiler issues a warning if any class files are generated when also doing annotation processing. The warning is not issued when the -implicit option is explicitly set. See Searching for Module, Package and Type Declarations. -Joption Passes option to the runtime system, where option is one of the Java options described on java command. For example, -J-Xms48m sets the startup memory to 48 MB. 
Note: The CLASSPATH environment variable, -classpath option, -bootclasspath option, and -extdirs option do not specify the classes used to run javac. Trying to customize the compiler implementation with these options and variables is risky and often does not accomplish what you want. If you must customize the compiler implementation, then use the -J option to pass options through to the underlying Java launcher. --limit-modules module,module* Limits the universe of observable modules. --module module-name (,module-name)* or -m module-name (,module-name)* Compiles those source files in the named modules that are newer than the corresponding files in the output directory. --module-path path or -p path Specifies where to find application modules. --module-source-path module-source-path Specifies where to find source files when compiling code in multiple modules. See [Compilation Modes] and The Module Source Path Option. --module-version version Specifies the version of modules that are being compiled. -nowarn Disables warning messages. This option operates the same as the -Xlint:none option. -parameters Generates metadata for reflection on method parameters. Stores formal parameter names of constructors and methods in the generated class file so that the method java.lang.reflect.Executable.getParameters from the Reflection API can retrieve them. -proc:[none, only, full] Controls whether annotation processing and compilation are done. -proc:none means that compilation takes place without annotation processing. -proc:only means that only annotation processing is done, without any subsequent compilation. If this option is not used, or -proc:full is specified, annotation processing and compilation are done. -processor class1[,class2,class3...] Names of the annotation processors to run. This bypasses the default discovery process. --processor-module-path path Specifies the module path used for finding annotation processors. 
--processor-path path or -processorpath path Specifies where to find annotation processors. If this option is not used, then the class path is searched for processors. -profile profile Checks that the API used is available in the specified profile. This option is deprecated and may be removed in a future release. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. --release release Compiles source code according to the rules of the Java programming language for the specified Java SE release, generating class files which target that release. Source code is compiled against the combined Java SE and JDK API for the specified release. The supported values of release are the current Java SE release and a limited number of previous releases, detailed in the command-line help. For the current release, the Java SE API consists of the java.*, javax.*, and org.* packages that are exported by the Java SE modules in the release; the JDK API consists of the com.* and jdk.* packages that are exported by the JDK modules in the release, plus the javax.* packages that are exported by standard, but non-Java SE, modules in the release. For previous releases, the Java SE API and the JDK API are as defined in that release. Note: When using --release, you cannot also use the --source/-source or --target/-target options. Note: When using --release to specify a release that supports the Java Platform Module System, the --add-exports option cannot be used to enlarge the set of packages exported by the Java SE, JDK, and standard modules in the specified release. -s directory Specifies the directory used to place the generated source files. If a class is part of a package, then the compiler puts the source file in a subdirectory that reflects the module name (if appropriate) and package name. The directory, and any necessary subdirectories, will be created if they do not already exist. 
Except when compiling code for multiple modules, the contents of the source output directory will be organized in a package hierarchy. When compiling code for multiple modules, the contents of the source output directory will be organized in a module hierarchy, with the contents of each module in a separate subdirectory, each organized as a package hierarchy. --source release or -source release Compiles source code according to the rules of the Java programming language for the specified Java SE release. The supported values of release are the current Java SE release and a limited number of previous releases, detailed in the command- line help. If the option is not specified, the default is to compile source code according to the rules of the Java programming language for the current Java SE release. --source-path path or -sourcepath path Specifies where to find source files. Except when compiling multiple modules together, this is the source code path used to search for class or interface definitions. Note: Classes found through the class path might be recompiled when their source files are also found. See Searching for Module, Package and Type Declarations. --system jdk | none Overrides the location of system modules. --target release or -target release Generates class files suitable for the specified Java SE release. The supported values of release are the current Java SE release and a limited number of previous releases, detailed in the command-line help. Note: The target release must be equal to or higher than the source release. (See --source.) --upgrade-module-path path Overrides the location of upgradeable modules. -verbose Outputs messages about what the compiler is doing. Messages include information about each class loaded and each source file compiled. --version or -version Prints version information. -Werror Terminates compilation when warnings occur. 
Extra Options --add-exports module/package=other-module(,other-module)* Specifies a package to be considered as exported from its defining module to additional modules or to all unnamed modules when the value of other-module is ALL-UNNAMED. --add-reads module=other-module(,other-module)* Specifies additional modules to be considered as required by a given module. --default-module-for-created-files module-name Specifies the fallback target module for files created by annotation processors, if none is specified or inferred. -Djava.endorsed.dirs=dirs Overrides the location of the endorsed standards path. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. -Djava.ext.dirs=dirs Overrides the location of installed extensions. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. --patch-module module=path Overrides or augments a module with classes and resources in JAR files or directories. -Xbootclasspath:path Overrides the location of the bootstrap class files. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. -Xbootclasspath/a:path Adds a suffix to the bootstrap class path. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. -Xbootclasspath/p:path Adds a prefix to the bootstrap class path. Note: This can only be used when compiling for releases prior to JDK 9. As applicable, see the descriptions in --release, -source, or -target for details. -Xdiags:[compact, verbose] Selects a diagnostic mode. -Xdoclint Enables recommended checks for problems in documentation comments. 
-Xdoclint:(all|none|[-]group)[/access] Enables or disables specific groups of checks in documentation comments. group can have one of the following values: accessibility, html, missing, reference, syntax. The variable access specifies the minimum visibility level of classes and members that the -Xdoclint option checks. It can have one of the following values (in order of most to least visible): public, protected, package, private. The default access level is private. When prefixed by doclint:, the group names and all can be used with @SuppressWarnings to suppress warnings about documentation comments in parts of the code being compiled. For more information about these groups of checks, see the DocLint section of the javadoc command documentation. The -Xdoclint option is disabled by default in the javac command. For example, the following option checks classes and members (with all groups of checks) that have the access level of protected and higher (which includes protected and public): -Xdoclint:all/protected The following option enables all groups of checks for all access levels, except it will not check for HTML errors for classes and members that have the access level of package and higher (which includes package, protected and public): -Xdoclint:all,-html/package -Xdoclint/package:[-]packages(,[-]package)* Enables or disables checks in specific packages. Each package is either the qualified name of a package or a package name prefix followed by .*, which expands to all sub-packages of the given package. Each package can be prefixed with a hyphen (-) to disable checks for a specified package or packages. For more information, see the DocLint section of the javadoc command documentation. -Xlint Enables all recommended warnings. In this release, enabling all available warnings is recommended. -Xlint:[-]key(,[-]key)* Supplies warnings to enable or disable, separated by comma. Precede a key by a hyphen (-) to disable the specified warning. 
Supported values for key are: • all: Enables all warnings. • auxiliaryclass: Warns about an auxiliary class that is hidden in a source file, and is used from other files. • cast: Warns about the use of unnecessary casts. • classfile: Warns about the issues related to classfile contents. • deprecation: Warns about the use of deprecated items. • dep-ann: Warns about the items marked as deprecated in javadoc but without the @Deprecated annotation. • divzero: Warns about the division by the constant integer 0. • empty: Warns about an empty statement after if. • exports: Warns about the issues regarding module exports. • fallthrough: Warns about the falling through from one case of a switch statement to the next. • finally: Warns about finally clauses that do not terminate normally. • incubating: Warns about the use of incubating modules. • lossy-conversions: Warns about possible lossy conversions in compound assignment. • missing-explicit-ctor: Warns about missing explicit constructors in public and protected classes in exported packages. • module: Warns about the module system-related issues. • opens: Warns about the issues related to module opens. • options: Warns about the issues relating to use of command line options. • output-file-clash: Warns if any output file is overwritten during compilation. This can occur, for example, on case- insensitive filesystems. • overloads: Warns about the issues related to method overloads. • overrides: Warns about the issues related to method overrides. • path: Warns about the invalid path elements on the command line. • preview: Warns about the use of preview language features. • processing: Warns about the issues related to annotation processing. • rawtypes: Warns about the use of raw types. • removal: Warns about the use of an API that has been marked for removal. • restricted: Warns about the use of restricted methods. • requires-automatic: Warns developers about the use of automatic modules in requires clauses. 
• requires-transitive-automatic: Warns about automatic modules in requires transitive. • serial: Warns about the serializable classes that do not provide a serial version ID. Also warns about access to non- public members from a serializable element. • static: Warns about the accessing a static member using an instance. • strictfp: Warns about unnecessary use of the strictfp modifier. • synchronization: Warns about synchronization attempts on instances of value-based classes. • text-blocks: Warns about inconsistent white space characters in text block indentation. • this-escape: Warns about constructors leaking this prior to subclass initialization. • try: Warns about the issues relating to the use of try blocks (that is, try-with-resources). • unchecked: Warns about the unchecked operations. • varargs: Warns about the potentially unsafe vararg methods. • none: Disables all warnings. With the exception of all and none, the keys can be used with the @SuppressWarnings annotation to suppress warnings in a part of the source code being compiled. See Examples of Using -Xlint keys. -Xmaxerrs number Sets the maximum number of errors to print. -Xmaxwarns number Sets the maximum number of warnings to print. -Xpkginfo:[always, legacy, nonempty] Specifies when and how the javac command generates package-info.class files from package-info.java files using one of the following options: always Generates a package-info.class file for every package-info.java file. This option may be useful if you use a build system such as Ant, which checks that each .java file has a corresponding .class file. legacy Generates a package-info.class file only if package-info.java contains annotations. This option does not generate a package-info.class file if package-info.java contains only comments. Note: A package-info.class file might be generated but be empty if all the annotations in the package-info.java file have RetentionPolicy.SOURCE. 
nonempty
       Generates a package-info.class file only if package-info.java contains annotations with RetentionPolicy.CLASS or RetentionPolicy.RUNTIME.

-Xplugin:name args
       Specifies the name and optional arguments for a plug-in to be run. If args are provided, name and args should be quoted, or the whitespace characters between the name and the arguments should otherwise be escaped. For details on the API for a plugin, see the API documentation for jdk.compiler/com.sun.source.util.Plugin.

-Xprefer:[source, newer]
       Specifies which file to read when both a source file and class file are found for an implicitly compiled class, using one of the following options. See Searching for Module, Package and Type Declarations.
       • -Xprefer:newer: Reads the newer of the source or class files for a type (default).
       • -Xprefer:source: Reads the source file. Use -Xprefer:source when you want to be sure that any annotation processors can access annotations declared with a retention policy of SOURCE.

-Xprint
       Prints a textual representation of specified types for debugging purposes. This does not perform annotation processing or compilation. The format of the output could change.

-XprintProcessorInfo
       Prints information about which annotations a processor is asked to process.

-XprintRounds
       Prints information about initial and subsequent annotation processing rounds.

-Xstdout filename
       Sends compiler messages to the named file. By default, compiler messages go to System.err.

ENVIRONMENT VARIABLES

CLASSPATH
       If the --class-path option or any of its alternate forms are not specified, the class path will default to the value of the CLASSPATH environment variable if it is set. However, it is recommended that this environment variable should not be set, and that the --class-path option should be used to provide an explicit value for the class path when one is required.
JDK_JAVAC_OPTIONS
       The content of the JDK_JAVAC_OPTIONS environment variable, separated by white-space characters (space, \n, \t, \r, or \f), is prepended to the command-line arguments passed to javac as a list of arguments. The encoding requirement for the environment variable is the same as for the javac command line on the system. JDK_JAVAC_OPTIONS environment variable content is treated in the same manner as that specified on the command line.
       Single quotes (') or double quotes (") can be used to enclose arguments that contain whitespace characters. All content between the open quote and the first matching close quote is preserved by simply removing the pair of quotes. In case a matching quote is not found, the launcher will abort with an error message. @files are supported as they are specified on the command line. However, as in @files, use of a wildcard is not supported.
       Examples of quoting arguments containing white spaces:

       export JDK_JAVAC_OPTIONS='@"C:\white spaces\argfile"'
       export JDK_JAVAC_OPTIONS='"@C:\white spaces\argfile"'
       export JDK_JAVAC_OPTIONS='@C:\"white spaces"\argfile'

COMMAND-LINE ARGUMENT FILES

An argument file can include command-line options and source file names in any combination. The arguments within a file can be separated by spaces or new line characters. If a file name contains embedded spaces, then put the whole file name in double quotation marks.

File names within an argument file are relative to the current directory, not to the location of the argument file. Wildcards (*) are not allowed in these lists (such as for specifying *.java). Use of the at sign (@) to recursively interpret files is not supported. The -J options are not supported because they're passed to the launcher, which does not support argument files.

When executing the javac command, pass in the path and name of each argument file with the at sign (@) leading character.
When the javac command encounters an argument beginning with the at sign (@), it expands the contents of that file into the argument list.

Examples of Using javac @filename

Single Argument File
       You could use a single argument file named argfile to hold all javac arguments:

       javac @argfile

       This argument file could contain the contents of both files shown in the following Two Argument Files example.

Two Argument Files
       You can create two argument files: one for the javac options and the other for the source file names. Note that the following lists have no line-continuation characters.
       Create a file named options that contains the following:

       Linux and macOS:

       -d classes
       -g
       -sourcepath /java/pubs/ws/1.3/src/share/classes

       Windows:

       -d classes
       -g
       -sourcepath C:\java\pubs\ws\1.3\src\share\classes

       Create a file named sources that contains the following:

       MyClass1.java
       MyClass2.java
       MyClass3.java

       Then, run the javac command as follows:

       javac @options @sources

Argument Files with Paths
       The argument files can have paths, but any file names inside the files are relative to the current working directory (not path1 or path2):

       javac @path1/options @path2/sources

ARRANGEMENT OF SOURCE CODE

In the Java language, classes and interfaces can be organized into packages, and packages can be organized into modules. javac expects that the physical arrangement of source files in directories of the file system will mirror the organization of classes into packages, and packages into modules.

It is a widely adopted convention that module names and package names begin with a lower-case letter, and that class names begin with an upper-case letter.

Arrangement of Source Code for a Package

When classes and interfaces are organized into a package, the package is represented as a directory, and any subpackages are represented as subdirectories. For example:

• The package p is represented as a directory called p.
• The package p.q -- that is, the subpackage q of package p -- is represented as the subdirectory q of directory p. The directory tree representing package p.q is therefore p\q on Windows, and p/q on other systems. • The package p.q.r is represented as the directory tree p\q\r (on Windows) or p/q/r (on other systems). Within a directory or subdirectory, .java files represent classes and interfaces in the corresponding package or subpackage. For example: • The class X declared in package p is represented by the file X.java in the p directory. • The class Y declared in package p.q is represented by the file Y.java in the q subdirectory of directory p. • The class Z declared in package p.q.r is represented by the file Z.java in the r subdirectory of p\q (on Windows) or p/q (on other systems). In some situations, it is convenient to split the code into separate directories, each structured as described above, and the aggregate list of directories specified to javac. Arrangement of Source Code for a Module In the Java language, a module is a set of packages designed for reuse. In addition to .java files for classes and interfaces, each module has a source file called module-info.java which: 1. declares the module's name; 2. lists the packages exported by the module (to allow reuse by other modules); 3. lists other modules required by the module (to reuse their exported packages). When packages are organized into a module, the module is represented by one or more directories representing the packages in the module, one of which contains the module-info.java file. It may be convenient, but it is not required, to use a single directory, named after the module, to contain the module-info.java file alongside the directory tree which represents the packages in the module (i.e., the package hierarchy described above). The exact arrangement of source code for a module is typically dictated by the conventions adopted by a development environment (IDE) or build system. 
For example:

• The module a.b.c may be represented by the directory a.b.c, on all systems.
• The module's declaration is represented by the file module-info.java in the a.b.c directory.
• If the module contains package p.q.r, then the a.b.c directory contains the directory tree p\q\r (on Windows) or p/q/r (on other systems).

The development environment may prescribe some directory hierarchy between the directory named for the module and the source files to be read by javac. For example:

• The module a.b.c may be represented by the directory a.b.c
• The module's declaration and the module's packages may be in some subdirectory of a.b.c, such as src\main\java (on Windows) or src/main/java (on other systems).

CONFIGURING A COMPILATION

This section describes how to configure javac to perform a basic compilation. See Configuring the Module System for additional details for use when compiling for a release of the platform that supports modules.

Source Files

• Specify the source files to be compiled on the command line.

If there are no compilation errors, the corresponding class files will be placed in the output directory.

Some systems may limit the amount you can put on a command line; to work around those limits, you can use argument files.

When compiling code for modules, you can also specify source files indirectly, by using the --module or -m option.

Output Directory

• Use the -d option to specify an output directory in which to put the compiled class files.

This will normally be organized in a package hierarchy, unless you are compiling source code from multiple modules, in which case it will be organized as a module hierarchy.

When the compilation has been completed, if you are compiling one or more modules, you can place the output directory on the module path for the Java launcher; otherwise, you can place the output directory on the class path for the Java launcher.
Precompiled Code The code to be compiled may refer to libraries beyond what is provided by the platform. If so, you must place these libraries on the class path or module path. If the library code is not in a module, place it on the class path; if it is in a module, place it on the module path. • Use the --class-path option to specify libraries to be placed on the class path. Locations on the class path should be organized in a package hierarchy. You can also use alternate forms of the option: -classpath or -cp. • Use the --module-path option to specify libraries to be placed on the module path. Locations on the module path should either be modules or directories of modules. You can also use an alternate form of the option: -p. See Configuring the Module System for details on how to modify the default configuration of library modules. Note: the options for the class path and module path are not mutually exclusive, although it is not common to specify the class path when compiling code for one or more modules. Additional Source Files The code to be compiled may refer to types in additional source files that are not specified on the command line. If so, you must put those source files on either the source path or module path. You can only specify one of these options: if you are not compiling code for a module, or if you are only compiling code for a single module, use the source path; if you are compiling code for multiple modules, use the module source path. • Use the --source-path option to specify the locations of additional source files that may be read by javac. Locations on the source path should be organized in a package hierarchy. You can also use an alternate form of the option: -sourcepath. • Use the --module-source-path option one or more times to specify the location of additional source files in different modules that may be read by javac, or when compiling source files in multiple modules. 
You can either specify the locations for each module individually, or you can organize the source files so that you can specify the locations all together. For more details, see The Module Source Path Option.

If you want to be able to refer to types in additional source files but do not want them to be compiled, use the -implicit option.

Note: if you are compiling code for multiple modules, you must always specify a module source path, and all source files specified on the command line must be in one of the directories on the module source path, or in a subdirectory thereof.

Example of Compiling Multiple Source Files

This example compiles the Aloha.java, GutenTag.java, Hello.java, and Hi.java source files in the greetings package.

Linux and macOS:

    % javac greetings/*.java
    % ls greetings
    Aloha.class    GutenTag.class    Hello.class    Hi.class
    Aloha.java     GutenTag.java     Hello.java     Hi.java

Windows:

    C:\>javac greetings\*.java
    C:\>dir greetings
    Aloha.class    GutenTag.class    Hello.class    Hi.class
    Aloha.java     GutenTag.java     Hello.java     Hi.java

Example of Specifying a User Class Path

After changing one of the source files in the previous example, recompile it:

Linux and macOS:

    % pwd
    /examples
    % javac greetings/Hi.java

Windows:

    C:\>cd \examples
    C:\>javac greetings\Hi.java

Because greetings.Hi refers to other classes in the greetings package, the compiler needs to find these other classes. The previous example works because the default user class path is the directory that contains the package directory. If you want to recompile this file without concern for which directory you are in, then add the examples directory to the user class path by setting CLASSPATH. This example uses the -classpath option.

Linux and macOS:

    % javac -classpath /examples /examples/greetings/Hi.java

Windows:

    C:\>javac -classpath \examples \examples\greetings\Hi.java

If you change greetings.Hi to use a banner utility, then that utility also needs to be accessible through the user class path.
Linux and macOS:

    % javac -classpath /examples:/lib/Banners.jar \
          /examples/greetings/Hi.java

Windows:

    C:\>javac -classpath \examples;\lib\Banners.jar ^
          \examples\greetings\Hi.java

To execute a class in the greetings package, the program needs access to the greetings package, and to the classes that the greetings classes use.

Linux and macOS:

    % java -classpath /examples:/lib/Banners.jar greetings.Hi

Windows:

    C:\>java -classpath \examples;\lib\Banners.jar greetings.Hi

CONFIGURING THE MODULE SYSTEM

If you want to include additional modules in your compilation, use the --add-modules option. This may be necessary when you are compiling code that is not in a module, or which is in an automatic module, and the code refers to API in the additional modules.

If you want to restrict the set of modules in your compilation, use the --limit-modules option. This may be useful if you want to ensure that the code you are compiling is capable of running on a system with a limited set of modules installed.

If you want to break encapsulation and specify that additional packages should be considered as exported from a module, use the --add-exports option. This may be useful when performing white-box testing; relying on access to internal API in production code is strongly discouraged.

If you want to specify that additional packages should be considered as required by a module, use the --add-reads option. This may be useful when performing white-box testing; relying on access to internal API in production code is strongly discouraged.

You can patch additional content into any module using the --patch-module option. See Patching Modules for more details.

SEARCHING FOR MODULE, PACKAGE AND TYPE DECLARATIONS

To compile a source file, the compiler often needs information about a module or type, but the declaration is not in the source files specified on the command line. javac needs type information for every class or interface used, extended, or implemented in the source file.
This includes classes and interfaces not explicitly mentioned in the source file, but that provide information through inheritance. For example, when you create a subclass of java.awt.Window, you are also using the ancestor classes of Window: java.awt.Container, java.awt.Component, and java.lang.Object.

When compiling code for a module, the compiler also needs to have available the declaration of that module.

A successful search may produce a class file, a source file, or both. If both are found, then you can use the -Xprefer option to instruct the compiler which to use.

If a search finds and uses a source file, then by default javac compiles that source file. This behavior can be altered with -implicit.

The compiler might not discover the need for some type information until after annotation processing completes. When the type information is found in a source file and no -implicit option is specified, the compiler gives a warning that the file is being compiled without being subject to annotation processing. To disable the warning, either specify the file on the command line (so that it will be subject to annotation processing) or use the -implicit option to specify whether or not class files should be generated for such source files.

The way that javac locates the declarations of those types depends on whether the reference exists within code for a module or not.

Searching Package Oriented Paths

When searching for a source or class file on a path composed of package oriented locations, javac will check each location on the path in turn for the possible presence of the file. The first occurrence of a particular file shadows (hides) any subsequent occurrences of like-named files. This shadowing does not affect any search for any files with a different name. This can be convenient when searching for source files, which may be grouped in different locations, such as shared code, platform-specific code and generated code.
It can also be useful when injecting alternate versions of a class file into a package, for debugging or other instrumentation reasons. But it can also be dangerous, such as when putting incompatible versions of a library on the class path.

Searching Module Oriented Paths

Prior to scanning any module paths for any package or type declarations, javac will lazily scan the following paths and locations to determine the modules that will be used in the compilation.

• The module source path (see the --module-source-path option)

• The path for upgradeable modules (see the --upgrade-module-path option)

• The system modules (see the --system option)

• The user module path (see the --module-path option)

For any module, the first occurrence of the module during the scan completely shadows (hides) any subsequent appearance of a like-named module.

While locating the modules, javac is able to determine the packages exported by the module and to associate with each module a package oriented path for the contents of the module. For any previously compiled module, this path will typically be a single entry for either a directory or a file that provides an internal directory-like hierarchy, such as a JAR file. Thus, when searching for a type that is in a package that is known to be exported by a module, javac can locate the declaration directly and efficiently.

Searching for the Declaration of a Module

If the module has been previously compiled, the module declaration is located in a file named module-info.class in the root of the package hierarchy for the content of the module.

If the module is one of those currently being compiled, the module declaration will be either the file named module-info.class in the root of the package hierarchy for the module in the class output directory, or the file named module-info.java in one of the locations on the source path or on the module source path for the module.
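The location of a compiled module declaration can be seen in a small sketch. The module name and directory layout below are hypothetical; the sketch compiles one module through a module source path (in the module-pattern form, where * marks the position of the module name) and then checks that module-info.class sits at the root of that module's package hierarchy in the (module-hierarchy) output directory:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.spi.ToolProvider;

public class ModuleDeclSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical module source hierarchy: <root>/my.mod/src/...
        Path root = Files.createTempDirectory("msp-demo");
        Path src = Files.createDirectories(root.resolve("my.mod").resolve("src"));
        Files.writeString(src.resolve("module-info.java"), "module my.mod { }");
        Path pkg = Files.createDirectories(src.resolve("com").resolve("example"));
        Files.writeString(pkg.resolve("A.java"),
                "package com.example; public class A { }");

        // Module-pattern form: the asterisk marks where the module name goes.
        String pattern = root + File.separator + "*" + File.separator + "src";
        Path out = root.resolve("out");
        int rc = ToolProvider.findFirst("javac").orElseThrow()
                .run(System.out, System.err,
                     "--module-source-path", pattern,
                     "-d", out.toString(),
                     "-m", "my.mod");

        // The compiled module declaration lives at the root of the module's
        // package hierarchy inside the module-hierarchy output directory.
        System.out.println(rc == 0
                && Files.exists(out.resolve("my.mod").resolve("module-info.class")));
    }
}
```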
Searching for the Declaration of a Type When the Reference is not in a Module

When searching for a type that is referenced in code that is not in a module, javac will look in the following places:

• The platform classes (or the types in exported packages of the platform modules) (This is for compiled class files only.)

• Types in exported packages of any modules on the module path, if applicable. (This is for compiled class files only.)

• Types in packages on the class path and/or source path:

  • If both are specified, javac looks for compiled class files on the class path and for source files on the source path.

  • If the class path is specified, but not the source path, javac looks for both compiled class files and source files on the class path.

  • If the class path is not specified, it defaults to the current directory.

When looking for a type on the class path and/or source path, if both a compiled class file and a source file are found, the most recently modified file will be used by default. If the source file is newer, it will be compiled and may override any previously compiled version of the file. You can use the -Xprefer option to override the default behavior.

Searching for the Declaration of a Type When the Reference is in a Module

When searching for a type that is referenced in code in a module, javac will examine the declaration of the enclosing module to determine if the type is in a package that is exported from another module that is readable by the enclosing module. If so, javac will simply and directly go to the definition of that module to find the definition of the required type. Unless the module is another of the modules being compiled, javac will only look for compiled class files. In other words, javac will not look for source files in platform modules or modules on the module path.
If the type being referenced is not in some other readable module, javac will examine the module being compiled to try to find the declaration of the type. javac will look for the declaration of the type as follows:

• Source files specified on the command line or on the source path or module source path.

• Previously compiled files in the output directory.

DIRECTORY HIERARCHIES

javac generally assumes that source files and compiled class files will be organized in a file system directory hierarchy or in a type of file that provides an internal directory hierarchy, such as a JAR file. Three different kinds of hierarchy are supported: a package hierarchy, a module hierarchy, and a module source hierarchy.

While javac is fairly relaxed about the organization of source code, beyond the expectation that source will be organized in one or more package hierarchies, and can generally accommodate organizations prescribed by development environments and build tools, Java tools in general, and javac and the Java launcher in particular, are more stringent regarding the organization of compiled class files, which must be organized in package hierarchies or module hierarchies, as appropriate.

The locations of these hierarchies are specified to javac with command-line options, whose names typically end in "path", like --source-path or --class-path. Also as a general rule, path options whose name includes the word module, like --module-path, are used to specify module hierarchies, although some module-related path options allow a package hierarchy to be specified on a per-module basis. All other path options are used to specify package hierarchies.

Package Hierarchy

In a package hierarchy, directories and subdirectories are used to represent the component parts of the package name, with the source file or compiled class file for a type being stored as a file with an extension of .java or .class in the most nested directory.
For example, in a package hierarchy, the source file for a class com.example.MyClass will be stored in the file com/example/MyClass.java.

Module Hierarchy

In a module hierarchy, the first level of directories are named for the modules in the hierarchy; within each of those directories the contents of the module are organized in package hierarchies.

For example, in a module hierarchy, the compiled class file for a type called com.example.MyClass in a module called my.library will be stored in my.library/com/example/MyClass.class.

The various output directories used by javac (the class output directory, the source output directory, and native header output directory) will all be organized in a module hierarchy when multiple modules are being compiled.

Module Source Hierarchy

Although the source for each individual module should always be organized in a package hierarchy, it may be convenient to group those hierarchies into a module source hierarchy. This is similar to a module hierarchy, except that there may be intervening directories between the directory for the module and the directory that is the root of the package hierarchy for the source code of the module.

For example, in a module source hierarchy, the source file for a type called com.example.MyClass in a module called my.library may be stored in a file such as my.library/src/main/java/com/example/MyClass.java.

THE MODULE SOURCE PATH OPTION

The --module-source-path option has two forms: a module-specific form, in which a package path is given for each module containing code to be compiled, and a module-pattern form, in which the source path for each module is specified by a pattern. The module-specific form is generally simpler to use when only a small number of modules are involved; the module-pattern form may be more convenient when the number of modules is large and the modules are organized in a regular manner that can be described by a pattern.
Multiple instances of the --module-source-path option may be given, each one using either the module-pattern form or the module-specific form, subject to the following limitations:

• the module-pattern form may be used at most once

• the module-specific form may be used at most once for any given module

If the module-specific form is used for any module, the associated search path overrides any path that might otherwise have been inferred from the module-pattern form.

Module-specific form

The module-specific form allows an explicit search path to be given for any specific module. This form is:

• --module-source-path module-name=file-path (path-separator file-path)*

The path separator character is ; on Windows, and : otherwise.

Note: this is similar to the form used for the --patch-module option.

Module-pattern form

The module-pattern form allows a concise specification of the module source path for any number of modules organized in a regular manner.

• --module-source-path pattern

The pattern is defined by the following rules, which are applied in order:

• The argument is considered to be a series of segments separated by the path separator character (; on Windows, and : otherwise).

• Each segment containing curly braces of the form

      string1{alt1(,alt2)*}string2

  is considered to be replaced by a series of segments formed by "expanding" the braces:

      string1alt1string2
      string1alt2string2
      and so on...

  The braces may be nested. This rule is applied for all such usages of braces.

• Each segment must have at most one asterisk (*). If a segment does not contain an asterisk, it is considered to be as though the file separator character and an asterisk are appended.

For any module M, the source path for that module is formed from the series of segments obtained by substituting the module name M for the asterisk in each segment.

Note: in this context, the asterisk is just used as a special marker, to denote the position in the path of the module name.
It should not be confused with the use of * as a file name wildcard character, as found on most operating systems.

PATCHING MODULES

javac allows any content, whether in source or compiled form, to be patched into any module using the --patch-module option. You may want to do this to compile alternative implementations of a class to be patched at runtime into a JVM, or to inject additional classes into the module, such as when testing.

The form of the option is:

• --patch-module module-name=file-path (path-separator file-path)*

The path separator character is ; on Windows, and : otherwise. The paths given for the module must specify the root of a package hierarchy for the contents of the module.

The option may be given at most once for any given module. Any content on the path will hide any like-named content later in the path and in the patched module.

When patching source code into more than one module, the --module-source-path option must also be used, so that the output directory is organized in a module hierarchy, and is capable of holding the compiled class files for the modules being compiled.

ANNOTATION PROCESSING

The javac command provides direct support for annotation processing. The API for annotation processors is defined in the javax.annotation.processing and javax.lang.model packages and subpackages.

How Annotation Processing Works

Unless annotation processing is disabled with the -proc:none option, the compiler searches for any annotation processors that are available. The search path can be specified with the -processorpath option. If no path is specified, then the user class path is used. Processors are located by means of service provider-configuration files named META-INF/services/javax.annotation.processing.Processor on the search path. Such files should contain the names of any annotation processors to be used, listed one per line. Alternatively, processors can be specified explicitly, using the -processor option.
After scanning the source files and classes on the command line to determine what annotations are present, the compiler queries the processors to determine what annotations they process. When a match is found, the processor is called. A processor can claim the annotations it processes, in which case no further attempt is made to find any processors for those annotations. After all of the annotations are claimed, the compiler does not search for additional processors.

If any processors generate new source files, then another round of annotation processing occurs: any newly generated source files are scanned, and the annotations processed as before. Any processors called on previous rounds are also called on all subsequent rounds. This continues until no new source files are generated.

After a round occurs where no new source files are generated, the annotation processors are called one last time, to give them a chance to complete any remaining work. Finally, unless the -proc:only option is used, the compiler compiles the original and all generated source files.

If you use an annotation processor that generates additional source files to be included in the compilation, you can specify a default module to be used for the newly generated files, for use when a module declaration is not also generated. In this case, use the --default-module-for-created-files option.

Compilation Environment and Runtime Environment

The declarations in source files and previously compiled class files are analyzed by javac in a compilation environment that is distinct from the runtime environment used to execute javac itself. Although there is a deliberate similarity between many javac options and like-named options for the Java launcher, such as --class-path, --module-path and so on, it is important to understand that in general the javac options just affect the environment in which the source files are compiled, and do not affect the operation of javac itself.
The distinction between the compilation environment and runtime environment is significant when it comes to using annotation processors. Although annotation processors process elements (declarations) that exist in the compilation environment, the annotation processor itself is executed in the runtime environment.

If an annotation processor has dependencies on libraries that are not in modules, the libraries can be placed, along with the annotation processor itself, on the processor path. (See the --processor-path option.) If the annotation processor and its dependencies are in modules, you should use the processor module path instead. (See the --processor-module-path option.)

When those are insufficient, it may be necessary to provide further configuration of the runtime environment. This can be done in two ways:

1. If javac is invoked from the command line, options can be passed to the underlying runtime by prefixing the option with -J.

2. You can start an instance of a Java Virtual Machine directly and use command-line options and API to configure an environment in which javac can be invoked via one of its APIs.

COMPILING FOR EARLIER RELEASES OF THE PLATFORM

javac can compile code that is to be used on other releases of the platform, using either the --release option, or the --source/-source and --target/-target options, together with additional options to specify the platform classes.

Depending on the desired platform release, there are some restrictions on some of the options that can be used.

• When compiling for JDK 8 and earlier releases, you cannot use any option that is intended for use with the module system.
This includes all of the following options:

      --module-source-path, --upgrade-module-path, --system, --module-path, --add-modules, --add-exports, --add-opens, --add-reads, --limit-modules, --patch-module

  If you use the --source/-source or --target/-target options, you should also set the appropriate platform classes using the boot class path family of options.

• When compiling for JDK 9 and later releases, you cannot use any option that is intended to configure the boot class path. This includes all of the following options:

      -Xbootclasspath/p:, -Xbootclasspath, -Xbootclasspath/a:, -endorseddirs, -Djava.endorsed.dirs, -extdirs, -Djava.ext.dirs, -profile

  If you use the --source/-source or --target/-target options, you should also set the appropriate platform classes using the --system option to give the location of an appropriate installed release of the JDK.

When using the --release option, only the supported documented API for that release may be used; you cannot use any options to break encapsulation to access any internal classes.

APIS

The javac compiler can be invoked using an API in three different ways:

The Java Compiler API
    This provides the most flexible way to invoke the compiler, including the ability to compile source files provided in memory buffers or other non-standard file systems.

The ToolProvider API
    A ToolProvider for javac can be obtained by calling ToolProvider.findFirst("javac"). This returns an object with the equivalent functionality of the command-line tool. Note: this API should not be confused with the like-named API in the javax.tools package.

The javac Legacy API
    This API is retained for backward compatibility only. All new code should use either the Java Compiler API or the ToolProvider API.

Note: All other classes and methods found in a package with names that start with com.sun.tools.javac (subpackages of com.sun.tools.javac) are strictly internal and subject to change at any time.
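As a brief sketch of the ToolProvider API mentioned above (java.util.spi.ToolProvider, available since JDK 9), the following locates javac and runs it exactly as if from the command line; the class name here is hypothetical:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.spi.ToolProvider;

public class ToolProviderSketch {
    public static void main(String[] args) {
        // findFirst returns an empty Optional on a runtime image
        // that does not include the jdk.compiler module.
        ToolProvider javac = ToolProvider.findFirst("javac").orElseThrow();

        // Run the equivalent of "javac --version", capturing the output.
        StringWriter out = new StringWriter();
        int rc = javac.run(new PrintWriter(out), new PrintWriter(System.err),
                           "--version");

        // A zero exit code indicates success, as with the command-line tool.
        System.out.println(rc == 0 && out.toString().startsWith("javac"));
    }
}
```

For finer-grained control (for example, compiling sources held in memory buffers), the Java Compiler API in javax.tools is the more flexible choice, as noted above.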
EXAMPLES OF USING -XLINT KEYS

cast
    Warns about unnecessary and redundant casts, for example:

        String s = (String) "Hello!";

classfile
    Warns about issues related to class file contents.

deprecation
    Warns about the use of deprecated items. For example:

        java.util.Date myDate = new java.util.Date();
        int currentDay = myDate.getDay();

    The method java.util.Date.getDay has been deprecated since JDK 1.1.

dep-ann
    Warns about items that are documented with the @deprecated Javadoc comment, but do not have the @Deprecated annotation, for example:

        /**
         * @deprecated As of Java SE 7, replaced by {@link #newMethod()}
         */
        public static void deprecatedMethod() { }

        public static void newMethod() { }

divzero
    Warns about division by the constant integer 0, for example:

        int divideByZero = 42 / 0;

empty
    Warns about empty statements after if statements, for example:

        class E {
            void m() {
                if (true) ;
            }
        }

fallthrough
    Checks the switch blocks for fall-through cases and provides a warning message for any that are found. Fall-through cases are cases in a switch block, other than the last case in the block, whose code does not include a break statement, allowing code execution to fall through from that case to the next case. For example, the code following the case 1 label in this switch block does not end with a break statement:

        switch (x) {
        case 1:
            System.out.println("1");
            // No break statement here.
        case 2:
            System.out.println("2");
        }

    If the -Xlint:fallthrough option was used when compiling this code, then the compiler emits a warning about possible fall-through into a case, with the line number of the case in question.

finally
    Warns about finally clauses that cannot be completed normally, for example:

        public static int m() {
            try {
                throw new NullPointerException();
            } catch (NullPointerException e) {
                System.err.println("Caught NullPointerException.");
                return 1;
            } finally {
                return 0;
            }
        }

    The compiler generates a warning for the finally block in this example.
When the method m is called, it returns a value of 0. A finally block executes when the try block exits. In this example, when control is transferred to the catch block, the method exits. However, the finally block must execute, so it's executed, even though control was transferred outside the method.

options
    Warns about issues related to the use of command-line options. See Compiling for Earlier Releases of the Platform.

overrides
    Warns about issues related to method overrides. For example, consider the following two classes:

        public class ClassWithVarargsMethod {
            void varargsMethod(String... s) { }
        }

        public class ClassWithOverridingMethod extends ClassWithVarargsMethod {
            @Override
            void varargsMethod(String[] s) { }
        }

    The compiler generates a warning similar to the following:

        warning: [override] varargsMethod(String[]) in ClassWithOverridingMethod
        overrides varargsMethod(String...) in ClassWithVarargsMethod;
        overriding method is missing '...'

    When the compiler encounters a varargs method, it translates the varargs formal parameter into an array. In the method ClassWithVarargsMethod.varargsMethod, the compiler translates the varargs formal parameter String... s to the formal parameter String[] s, an array that matches the formal parameter of the method ClassWithOverridingMethod.varargsMethod. Consequently, this example compiles.

path
    Warns about invalid path elements and nonexistent path directories on the command line (with regard to the class path, the source path, and other paths). Such warnings cannot be suppressed with the @SuppressWarnings annotation. For example:

    • Linux and macOS:

        javac -Xlint:path -classpath /nonexistentpath Example.java

    • Windows:

        javac -Xlint:path -classpath C:\nonexistentpath Example.java

processing
    Warns about issues related to annotation processing. The compiler generates this warning when you have a class that has an annotation, and you use an annotation processor that cannot handle that type of annotation.
For example, the following is a simple annotation processor:

Source file AnnoProc.java:

    import java.util.*;
    import javax.annotation.processing.*;
    import javax.lang.model.*;
    import javax.lang.model.element.*;

    @SupportedAnnotationTypes("NotAnno")
    public class AnnoProc extends AbstractProcessor {
        public boolean process(Set<? extends TypeElement> elems, RoundEnvironment renv) {
            return true;
        }

        public SourceVersion getSupportedSourceVersion() {
            return SourceVersion.latest();
        }
    }

Source file AnnosWithoutProcessors.java:

    @interface Anno { }

    @Anno
    class AnnosWithoutProcessors { }

The following commands compile the annotation processor AnnoProc, then run this annotation processor against the source file AnnosWithoutProcessors.java:

    javac AnnoProc.java
    javac -cp . -Xlint:processing -processor AnnoProc -proc:only AnnosWithoutProcessors.java

When the compiler runs the annotation processor against the source file AnnosWithoutProcessors.java, it generates the following warning:

    warning: [processing] No processor claimed any of these annotations: Anno

To resolve this issue, you can rename the annotation defined and used in the class AnnosWithoutProcessors from Anno to NotAnno.

rawtypes
    Warns about unchecked operations on raw types. The following statement generates a rawtypes warning:

        void countElements(List l) { ... }

    The following example does not generate a rawtypes warning:

        void countElements(List<?> l) { ... }

    List is a raw type. However, List<?> is an unbounded wildcard parameterized type. Because List is a parameterized interface, always specify its type argument. In this example, the List formal argument is specified with an unbounded wildcard (?) as its formal type parameter, which means that the countElements method can accept any instantiation of the List interface.

serial
    Warns about missing serialVersionUID definitions on serializable classes.
For example:

    import java.io.Serializable;
    import java.util.Calendar;
    import java.util.Date;

    public class PersistentTime implements Serializable {
        private Date time;

        public PersistentTime() {
            time = Calendar.getInstance().getTime();
        }

        public Date getTime() {
            return time;
        }
    }

The compiler generates the following warning:

    warning: [serial] serializable class PersistentTime has no definition of serialVersionUID

If a serializable class does not explicitly declare a field named serialVersionUID, then the serialization runtime environment calculates a default serialVersionUID value for that class based on various aspects of the class, as described in the Java Object Serialization Specification. However, it's strongly recommended that all serializable classes explicitly declare serialVersionUID values because the default process of computing serialVersionUID values is highly sensitive to class details that can vary depending on compiler implementations. As a result, an unexpected InvalidClassException might occur during deserialization. To guarantee a consistent serialVersionUID value across different Java compiler implementations, a serializable class must declare an explicit serialVersionUID value.

static
    Warns about issues relating to the use of static variables, for example:

        class XLintStatic {
            static void m1() { }
            void m2() { this.m1(); }
        }

    The compiler generates the following warning:

        warning: [static] static method should be qualified by type name, XLintStatic, instead of by an expression

    To resolve this issue, you can call the static method m1 as follows:

        XLintStatic.m1();

    Alternately, you can remove the static keyword from the declaration of the method m1.

this-escape
    Warns about constructors leaking this prior to subclass initialization.
For example, this class:

    public class MyClass {
        public MyClass() {
            System.out.println(this.hashCode());
        }
    }

generates the following warning:

    MyClass.java:3: warning: [this-escape] possible 'this' escape before subclass is fully initialized
            System.out.println(this.hashCode());
                               ^

A 'this' escape warning is generated when a constructor does something that might result in a subclass method being invoked before the constructor returns. In such cases the subclass method would be operating on an incompletely initialized instance. In the above example, a subclass of MyClass that overrides hashCode() to incorporate its own fields would likely produce an incorrect result when invoked as shown.

Warnings are only generated if a subclass could exist that is outside of the current module (or package, if no module) being compiled. So, for example, constructors in final and non-public classes do not generate warnings.

try
    Warns about issues relating to the use of try blocks, including try-with-resources statements. For example, a warning is generated for the following statement because the resource ac declared in the try block is not used:

        try (AutoCloseable ac = getResource()) {
            // do nothing
        }

unchecked
    Gives more detail for unchecked conversion warnings that are mandated by the Java Language Specification, for example:

        List l = new ArrayList<Number>();
        List<String> ls = l;       // unchecked warning

    During type erasure, the types ArrayList<Number> and List<String> become ArrayList and List, respectively. The variable ls has the parameterized type List<String>. When the List referenced by l is assigned to ls, the compiler generates an unchecked warning. At compile time, the compiler and JVM cannot determine whether l refers to a List<String> type. In this case, l does not refer to a List<String> type. As a result, heap pollution occurs.
A heap pollution situation occurs when the List object l, whose static type is List<Number>, is assigned to another List object, ls, that has a different static type, List<String>. However, the compiler still allows this assignment. It must allow this assignment to preserve backward compatibility with releases of Java SE that do not support generics. Because of type erasure, List<Number> and List<String> both become List. Consequently, the compiler allows the assignment of the object l, which has a raw type of List, to the object ls.

varargs Warns about unsafe use of variable arguments (varargs) methods, in particular, those that contain non-reifiable arguments, for example:

    public class ArrayBuilder {
        public static <T> void addToList(List<T> listArg, T... elements) {
            for (T x : elements) {
                listArg.add(x);
            }
        }
    }

A non-reifiable type is a type whose type information is not fully available at runtime. The compiler generates the following warning for the definition of the method ArrayBuilder.addToList:

    warning: [varargs] Possible heap pollution from parameterized vararg type T

When the compiler encounters a varargs method, it translates the varargs formal parameter into an array. However, the Java programming language does not permit the creation of arrays of parameterized types. In the method ArrayBuilder.addToList, the compiler translates the varargs formal parameter T... elements to the formal parameter T[] elements, an array. However, because of type erasure, the compiler converts the varargs formal parameter to Object[] elements. Consequently, there's a possibility of heap pollution.

JDK 22                          2024                          JAVAC(1)
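When a varargs method such as ArrayBuilder.addToList only reads its parameter array and never exposes it, the declaration can be annotated with @SafeVarargs to assert that no heap pollution occurs, which suppresses the warning at both the declaration and call sites. A sketch (the class name is adapted from the example above):

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayBuilderExample {
    // @SafeVarargs is permitted here because the method is static and
    // only reads elements; it never stores or returns the array itself.
    @SafeVarargs
    public static <T> void addToList(List<T> listArg, T... elements) {
        for (T x : elements) {
            listArg.add(x);
        }
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        addToList(names, "a", "b", "c");
        System.out.println(names); // prints [a, b, c]
    }
}
```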
mdfind
The mdfind command consults the central metadata store and returns a list of files that match the given metadata query. The query can be a string or a query expression.

The following options are available:

-0            Prints an ASCII NUL character after each result path. This is useful when used in conjunction with xargs -0.

-live         Causes the mdfind command to provide live-updates to the number of files matching the query. When an update causes the query results to change, the number of matches is updated. The find can be cancelled by typing ctrl-C.

-count        Causes the mdfind command to output the total number of matches, instead of the path to the matching items.

-onlyin dir   Limit the scope of the search to the directory specified.

-name fileName
              Searches for matching file names only.

-literal      Force the provided query string to be taken as a literal query string, without interpretation.

-interpret    Force the provided query string to be interpreted as if the user had typed the string into the Spotlight menu. For example, the string "search" would produce the following query string:

              (* = search* cdw || kMDItemTextContent = search* cdw)
mdfind – finds files matching a given query
mdfind [-live] [-count] [-onlyin directory] [-name fileName] query
The following examples are shown as given to the shell.

This returns all files with any metadata attribute value matching the string "image":

    mdfind image

This returns all files that contain "MyFavoriteAuthor" in the kMDItemAuthors metadata attribute:

    mdfind "kMDItemAuthors == '*MyFavoriteAuthor*'"

This returns all files with any metadata attribute value matching the string "skateboard". The find continues to run after gathering the initial results, providing a count of the number of files that match the query:

    mdfind -live skateboard

To get a list of the available attributes for use in constructing queries, see mdimport(1), particularly the -X switch.

SEE ALSO
mdimport(1), mdls(1), mdutil(1), xargs(1)

Mac OS X June 10, 2004 Mac OS X
mdls
The mdls command prints the values of all the metadata attributes associated with the files provided as an argument.

The following options are available:

-name         Print only the matching metadata attribute value. Can be used multiple times.

-raw          Print raw attribute data in the order that was requested. Fields will be separated with an ASCII NUL character, suitable for piping to xargs(1) -0.

-nullMarker   Sets a marker string to be used when a requested attribute is null. Only used in -raw mode. Default is "(null)".

SEE ALSO
mdfind(1), mdutil(1), xargs(1)

Mac OS X June 3, 2004 Mac OS X
mdls – lists the metadata attributes for the specified file
mdls [-name attributeName] [-raw [-nullMarker markerString]] file ...
codesign_allocate
codesign_allocate sets up a Mach-O file used by the dynamic linker so that space for code signing data of the specified size for the specified architecture is embedded in the Mach-O file. The program must be passed one -a argument or one -A argument for each architecture in a universal file, or exactly one -a or -A for a thin file.

-i oldfile    specifies the input file as oldfile.

-o newfile    specifies the output file as newfile.

-a arch size  specifies for the architecture arch that the size of the code signing data is to be size. The value of size must be a multiple of 16.

-A cputype cpusubtype size
              specifies the architecture as a pair of decimal integers for the cputype and cpusubtype, and that the size of the code signing data is to be size. The value of size must be a multiple of 16.

-r            remove the code signature data and the LC_CODE_SIGNATURE load command. This is the same as specifying the -a or -A option with a size of zero.

-p            page align the code signature data by padding the string table and changing its size. This is not the default as codesign(1) currently can't use this option.

Apple, Inc. April 17, 2017 CODESIGN_ALLOCATE(1)
codesign_allocate - add code signing data to a Mach-O file
codesign_allocate -i oldfile [ -a arch size ]... [ -A cputype cpusubtype size ]... -o newfile
mddiagnose
The mddiagnose tool gathers system and Spotlight information in order to assist Apple when investigating issues related to Spotlight. A great deal of information is harvested, spanning system state, system and Spotlight details. What mddiagnose Collects: • A spindump of the system • Several seconds of top output • Individual samples of mds and mdworker • Paths for all files used by Spotlight to contain its database on every volume • All system logs • All spin and crash reports for the following processes: mds mdworker • Recent spin and crash reports for the following processes, for all local users: mds mdworker • The query text and result quality statistics for recent Spotlight searches. The actual results of the searches are not gathered • General information about disks and mounted network shares • The path of the last file indexed by each mdworker process on behalf of Spotlight. The path of the last file which resulted in a crash of an mdworker process. In each case, only the path is gathered, not the contents of the file • Spotlight configuration for each volume currently mounted on your system. This includes the path and size of all files used internally by Spotlight as well as a listing of paths that are user excluded from indexing • Comprehensive information about internal state of Spotlight What mddiagnose Doesn't Collect: • No user data is harvested from any volume • No paths or files found by any search are returned • No database storage or user files are returned • No authentication credentials are harvested from the system
mddiagnose – gather information to aid in diagnosing Spotlight issues
mddiagnose -h mddiagnose [-f path] [-e path] [-p path] [-n]
-h       Print full usage.

-m       Gather a subset of the normal report.

-f path  Write the diagnostic to the specified path.

-p path  Gather Spotlight permissions and filter information. Run as owner of that file.

EXIT STATUS
mddiagnose exits with status 0 if there were no internal errors encountered during the diagnostic, or >0 when an error unrelated to external state occurs or unusable input is provided by the user.

Mac OS X 15 March 2011 Mac OS X
snmpbulkwalk
snmpbulkwalk is an SNMP application that uses SNMP GETBULK requests to query a network entity efficiently for a tree of information. An object identifier (OID) may be given on the command line. This OID specifies which portion of the object identifier space will be searched using GETBULK requests. All variables in the subtree below the given OID are queried and their values presented to the user. Each variable name is given in the format specified in variables(5). If no OID argument is present, snmpbulkwalk will search MIB-2. If the network entity has an error processing the request packet, an error packet will be returned and a message will be shown, helping to pinpoint why the request was malformed. If the tree search causes attempts to search beyond the end of the MIB, the message "End of MIB" will be displayed.
snmpbulkwalk - retrieve a subtree of management values using SNMP GETBULK requests
snmpbulkwalk [APPLICATION OPTIONS] [COMMON OPTIONS] AGENT [OID]
-Cc       Do not check whether the returned OIDs are increasing. Some agents (LaserJets are an example) return OIDs out of order, but can complete the walk anyway. Other agents return OIDs that are out of order and can cause snmpbulkwalk to loop indefinitely. By default, snmpbulkwalk tries to detect this behavior and warns you when it hits an agent acting illegally. Use -Cc to turn off this behaviour.

-Ci       Include the given OID in the search range. Normally snmpbulkwalk uses GETBULK requests starting with the OID you specified and returns all results in the MIB tree after that OID. Sometimes, you may wish to include the OID specified on the command line in the printed results if it is a valid OID in the tree itself. This option lets you do this.

-Cn<NUM>  Set the non-repeaters field in the GETBULK PDUs. This specifies the number of supplied variables that should not be iterated over. The default is 0.

-Cp       Upon completion of the walk, print the number of variables found.

-Cr<NUM>  Set the max-repetitions field in the GETBULK PDUs. This specifies the maximum number of iterations over the repeating variables. The default is 10.

In addition to these options, snmpbulkwalk takes the common options described in the snmpcmd(1) manual page. Note that snmpbulkwalk REQUIRES an argument specifying the agent to query and at most one OID argument, as described above.

EXAMPLE
The command:

    snmpbulkwalk -v2c -Os -c public zeus system

will retrieve all of the variables under system:

    sysDescr.0 = STRING: "SunOS zeus.net.cmu.edu 4.1.3_U1 1 sun4m"
    sysObjectID.0 = OID: enterprises.hp.nm.hpsystem.10.1.1
    sysUpTime.0 = Timeticks: (155274552) 17 days, 23:19:05
    sysContact.0 = STRING: ""
    sysName.0 = STRING: "zeus.net.cmu.edu"
    sysLocation.0 = STRING: ""
    sysServices.0 = INTEGER: 72

In contrast to snmpwalk, this information will typically be gathered in a single transaction with the agent, rather than one transaction per variable found.
snmpbulkwalk is thus more efficient in terms of network utilisation, which may be especially important when retrieving large tables. NOTE As the name implies, snmpbulkwalk utilizes the SNMP GETBULK message, which is not available in SNMP v1. SEE ALSO snmpcmd(1), variables(5). V5.6.2.1 01 May 2002 SNMPBULKWALK(1)
keytool
The keytool command is a key and certificate management utility. It enables users to administer their own public/private key pairs and associated certificates for use in self-authentication (where a user authenticates themselves to other users and services) or data integrity and authentication services, by using digital signatures. The keytool command also enables users to cache the public keys (in the form of certificates) of their communicating peers. A certificate is a digitally signed statement from one entity (person, company, and so on), which says that the public key (and some other information) of some other entity has a particular value. When data is digitally signed, the signature can be verified to check the data integrity and authenticity. Integrity means that the data hasn't been modified or tampered with, and authenticity means that the data comes from the individual who claims to have created and signed it. The keytool command also enables users to administer secret keys and passphrases used in symmetric encryption and decryption (Data Encryption Standard). It can also display other security-related information. The keytool command stores the keys and certificates in a keystore. The keytool command uses the jdk.certpath.disabledAlgorithms and jdk.security.legacyAlgorithms security properties to determine which algorithms are considered a security risk. It emits warnings when disabled or legacy algorithms are being used. The jdk.certpath.disabledAlgorithms and jdk.security.legacyAlgorithms security properties are defined in the java.security file (located in the JDK's $JAVA_HOME/conf/security directory). COMMAND AND OPTION NOTES The following notes apply to the descriptions in Commands and Options: • All command and option names are preceded by a hyphen sign (-). • Only one command can be provided. • Options for each command can be provided in any order. • There are two kinds of options, one is single-valued which should be only provided once. 
If a single-valued option is provided multiple times, the value of the last one is used. The other type is multi- valued, which can be provided multiple times and all values are used. The only multi-valued option currently supported is the -ext option used to generate X.509v3 certificate extensions. • All items not italicized or in braces ({ }) or brackets ([ ]) are required to appear as is. • Braces surrounding an option signify that a default value is used when the option isn't specified on the command line. Braces are also used around the -v, -rfc, and -J options, which have meaning only when they appear on the command line. They don't have any default values. • Brackets surrounding an option signify that the user is prompted for the values when the option isn't specified on the command line. For the -keypass option, if you don't specify the option on the command line, then the keytool command first attempts to use the keystore password to recover the private/secret key. If this attempt fails, then the keytool command prompts you for the private/secret key password. • Items in italics (option values) represent the actual values that must be supplied. For example, here is the format of the -printcert command: keytool -printcert {-file cert_file} {-v} When you specify a -printcert command, replace cert_file with the actual file name, as follows: keytool -printcert -file VScert.cer • Option values must be enclosed in quotation marks when they contain a blank (space). COMMANDS AND OPTIONS The keytool commands and their options can be grouped by the tasks that they perform. 
Commands for Creating or Adding Data to the Keystore: • -gencert • -genkeypair • -genseckey • -importcert • -importpass Commands for Importing Contents from Another Keystore: • -importkeystore Commands for Generating a Certificate Request: • -certreq Commands for Exporting Data: • -exportcert Commands for Displaying Data: • -list • -printcert • -printcertreq • -printcrl Commands for Managing the Keystore: • -storepasswd • -keypasswd • -delete • -changealias Commands for Displaying Security-related Information: • -showinfo Commands for Displaying Program Version: • -version COMMANDS FOR CREATING OR ADDING DATA TO THE KEYSTORE -gencert The following are the available options for the -gencert command: • {-rfc}: Output in RFC (Request For Comment) style • {-infile infile}: Input file name • {-outfile outfile}: Output file name • {-alias alias}: Alias name of the entry to process • {-sigalg sigalg}: Signature algorithm name • {-dname dname}: Distinguished name • {-startdate startdate}: Certificate validity start date and time • {-ext ext}*: X.509 extension • {-validity days}: Validity number of days • [-keypass arg]: Key password • {-keystore keystore}: Keystore name • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Adds a security provider by name (such as SunPKCS11) with an optional configure argument. The value of the security provider is the name of a security provider that is defined in a module. For example, keytool -addprovider SunPKCS11 -providerarg some.cfg ... Note: For compatibility reasons, the SunPKCS11 provider can still be loaded with -providerclass sun.security.pkcs11.SunPKCS11 even if it is now defined in a module. This is the only module included in the JDK that needs a configuration, and therefore the most widely used with the -providerclass option. 
For legacy security providers located on classpath and loaded by reflection, -providerclass should still be used. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. For example, if MyProvider is a legacy provider loaded via reflection, keytool -providerclass com.example.MyProvider ... • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -gencert command to generate a certificate as a response to a certificate request file (which can be created by the keytool -certreq command). The command reads the request either from infile or, if omitted, from the standard input, signs it by using the alias's private key, and outputs the X.509 certificate into either outfile or, if omitted, to the standard output. When -rfc is specified, the output format is Base64-encoded PEM; otherwise, a binary DER is created. The -sigalg value specifies the algorithm that should be used to sign the certificate. The startdate argument is the start time and date that the certificate is valid. The days argument tells the number of days for which the certificate should be considered valid. When dname is provided, it is used as the subject of the generated certificate. Otherwise, the one from the certificate request is used. The -ext value shows what X.509 extensions will be embedded in the certificate. Read Common Command Options for the grammar of -ext. The -gencert option enables you to create certificate chains. The following example creates a certificate, e1, that contains three certificates in its certificate chain. 
The following commands create four key pairs named ca, ca1, ca2, and e1:

    keytool -alias ca -dname CN=CA -genkeypair -keyalg rsa
    keytool -alias ca1 -dname CN=CA -genkeypair -keyalg rsa
    keytool -alias ca2 -dname CN=CA -genkeypair -keyalg rsa
    keytool -alias e1 -dname CN=E1 -genkeypair -keyalg rsa

The following two commands create a chain of signed certificates; ca signs ca1 and ca1 signs ca2, all of which are self-issued:

    keytool -alias ca1 -certreq |
        keytool -alias ca -gencert -ext san=dns:ca1 |
        keytool -alias ca1 -importcert
    keytool -alias ca2 -certreq |
        keytool -alias ca1 -gencert -ext san=dns:ca2 |
        keytool -alias ca2 -importcert

The following command creates the certificate e1 and stores it in the e1.cert file, which is signed by ca2. As a result, e1 should contain ca, ca1, and ca2 in its certificate chain:

    keytool -alias e1 -certreq | keytool -alias ca2 -gencert > e1.cert

-genkeypair The following are the available options for the -genkeypair command: • {-alias alias}: Alias name of the entry to process • -keyalg alg: Key algorithm name • {-keysize size}: Key bit size • {-groupname name}: Group name. For example, an Elliptic Curve name. • {-sigalg alg}: Signature algorithm name • {-signer alias}: Signer alias • [-signerkeypass arg]: Signer key password • [-dname name]: Distinguished name • {-startdate date}: Certificate validity start date and time • {-ext value}*: X.509 extension • {-validity days}: Validity number of days • [-keypass arg]: Key password • {-keystore keystore}: Keystore name • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. 
• {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -genkeypair command to generate a key pair (a public key and associated private key). When the -signer option is not specified, the public key is wrapped in an X.509 v3 self-signed certificate and stored as a single-element certificate chain. When the -signer option is specified, a new certificate is generated and signed by the designated signer and stored as a multiple-element certificate chain (containing the generated certificate itself, and the signer's certificate chain). The certificate chain and private key are stored in a new keystore entry that is identified by its alias. The -keyalg value specifies the algorithm to be used to generate the key pair. The -keysize value specifies the size of each key to be generated. The -groupname value specifies the named group (for example, the standard or predefined name of an Elliptic Curve) of the key to be generated. When a -keysize value is provided, it will be used to initialize a KeyPairGenerator object using the initialize(int keysize) method. When a -groupname value is provided, it will be used to initialize a KeyPairGenerator object using the initialize(AlgorithmParameterSpec params) method where params is new NamedParameterSpec(groupname). Only one of -groupname and -keysize can be specified. If an algorithm has multiple named groups that have the same key size, the -groupname option should usually be used. In this case, if -keysize is specified, it's up to the security provider to determine which named group is chosen when generating a key pair. The -sigalg value specifies the algorithm that should be used to sign the certificate. This algorithm must be compatible with the -keyalg value. The -signer value specifies the alias of a PrivateKeyEntry for the signer that already exists in the keystore. This option is used to sign the certificate with the signer's private key. 
This is especially useful for key agreement algorithms (i.e. the -keyalg value is XDH, X25519, X448, or DH) as these keys cannot be used for digital signatures, and therefore a self- signed certificate cannot be created. The -signerkeypass value specifies the password of the signer's private key. It can be specified if the private key of the signer entry is protected by a password different from the store password. The -dname value specifies the X.500 Distinguished Name to be associated with the value of -alias. If the -signer option is not specified, the issuer and subject fields of the self-signed certificate are populated with the specified distinguished name. If the -signer option is specified, the subject field of the certificate is populated with the specified distinguished name and the issuer field is populated with the subject field of the signer's certificate. If a distinguished name is not provided at the command line, then the user is prompted for one. The value of -keypass is a password used to protect the private key of the generated key pair. If a password is not provided, then the user is prompted for it. If you press the Return key at the prompt, then the key password is set to the same password as the keystore password. The -keypass value must have at least six characters. The value of -startdate specifies the issue time of the certificate, also known as the "Not Before" value of the X.509 certificate's Validity field. The option value can be set in one of these two forms: ([+-]nnn[ymdHMS])+ [yyyy/mm/dd] [HH:MM:SS] With the first form, the issue time is shifted by the specified value from the current time. The value is a concatenation of a sequence of subvalues. Inside each subvalue, the plus sign (+) means shift forward, and the minus sign (-) means shift backward. The time to be shifted is nnn units of years, months, days, hours, minutes, or seconds (denoted by a single character of y, m, d, H, M, or S respectively). 
The exact value of the issue time is calculated by using the java.util.GregorianCalendar.add(int field, int amount) method on each subvalue, from left to right. For example, the issue time can be specified by: Calendar c = new GregorianCalendar(); c.add(Calendar.YEAR, -1); c.add(Calendar.MONTH, 1); c.add(Calendar.DATE, -1); return c.getTime() With the second form, the user sets the exact issue time in two parts, year/month/day and hour:minute:second (using the local time zone). The user can provide only one part, which means the other part is the same as the current date (or time). The user must provide the exact number of digits shown in the format definition (padding with 0 when shorter). When both date and time are provided, there is one (and only one) space character between the two parts. The hour should always be provided in 24-hour format. When the option isn't provided, the start date is the current time. The option can only be provided one time. The value of date specifies the number of days (starting at the date specified by -startdate, or the current date when -startdate isn't specified) for which the certificate should be considered valid. -genseckey The following are the available options for the -genseckey command: • {-alias alias}: Alias name of the entry to process • [-keypass arg]: Key password • -keyalg alg: Key algorithm name • {-keysize size}: Key bit size • {-keystore keystore}: Keystore name • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. 
• {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -genseckey command to generate a secret key and store it in a new KeyStore.SecretKeyEntry identified by alias. The value of -keyalg specifies the algorithm to be used to generate the secret key, and the value of -keysize specifies the size of the key that is generated. The -keypass value is a password that protects the secret key. If a password is not provided, then the user is prompted for it. If you press the Return key at the prompt, then the key password is set to the same password that is used for the -keystore. The -keypass value must contain at least six characters. -importcert The following are the available options for the -importcert command: • {-noprompt}: Do not prompt • {-trustcacerts}: Trust certificates from cacerts • {-protected}: Password is provided through protected mechanism • {-alias alias}: Alias name of the entry to process • {-file file}: Input file name • [-keypass arg]: Key password • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output Use the -importcert command to read the certificate or certificate chain (where the latter is supplied in a PKCS#7 formatted reply or in a sequence of X.509 certificates) from -file file, and store it in the keystore entry identified by -alias. If -file file is not specified, then the certificate or certificate chain is read from stdin. 
The keytool command can import X.509 v1, v2, and v3 certificates, and PKCS#7 formatted certificate chains consisting of certificates of that type. The data to be imported must be provided either in binary encoding format or in printable encoding format (also known as Base64 encoding) as defined by the Internet RFC 1421 standard. In the latter case, the encoding must be bounded at the beginning by a string that starts with -----BEGIN, and bounded at the end by a string that starts with -----END. You import a certificate for two reasons: To add it to the list of trusted certificates, and to import a certificate reply received from a certificate authority (CA) as the result of submitting a Certificate Signing Request (CSR) to that CA. See the -certreq command in Commands for Generating a Certificate Request. The type of import is indicated by the value of the -alias option. If the alias doesn't point to a key entry, then the keytool command assumes you are adding a trusted certificate entry. In this case, the alias shouldn't already exist in the keystore. If the alias does exist, then the keytool command outputs an error because a trusted certificate already exists for that alias, and doesn't import the certificate. If -alias points to a key entry, then the keytool command assumes that you're importing a certificate reply. -importpass The following are the available options for the -importpass command: • {-alias alias}: Alias name of the entry to process • [-keypass arg]: Key password • {-keyalg alg}: Key algorithm name • {-keysize size}: Key bit size • {-keystore keystore}: Keystore name • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. 
• {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -importpass command to import a passphrase and store it in a new KeyStore.SecretKeyEntry identified by -alias. The passphrase may be supplied via the standard input stream; otherwise the user is prompted for it. The -keypass option provides a password to protect the imported passphrase. If a password is not provided, then the user is prompted for it. If you press the Return key at the prompt, then the key password is set to the same password as that used for the keystore. The -keypass value must contain at least six characters. COMMANDS FOR IMPORTING CONTENTS FROM ANOTHER KEYSTORE -importkeystore The following are the available options for the -importkeystore command: • -srckeystore keystore: Source keystore name • {-destkeystore keystore}: Destination keystore name • {-srcstoretype type}: Source keystore type • {-deststoretype type}: Destination keystore type • [-srcstorepass arg]: Source keystore password • [-deststorepass arg]: Destination keystore password • {-srcprotected}: Source keystore password protected • {-destprotected}: Destination keystore password protected • {-srcprovidername name}: Source keystore provider name • {-destprovidername name}: Destination keystore provider name • {-srcalias alias}: Source alias • {-destalias alias}: Destination alias • [-srckeypass arg]: Source key password • [-destkeypass arg]: Destination key password • {-noprompt}: Do not prompt • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. 
• {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument • {-providerpath list}: Provider classpath • {-v}: Verbose output Note: This is the first line of all options: -srckeystore keystore -destkeystore keystore Use the -importkeystore command to import a single entry or all entries from a source keystore to a destination keystore. Note: If you do not specify -destkeystore when using the keytool -importkeystore command, then the default keystore used is $HOME/.keystore. When the -srcalias option is provided, the command imports the single entry identified by the alias to the destination keystore. If a destination alias isn't provided with -destalias, then -srcalias is used as the destination alias. If the source entry is protected by a password, then -srckeypass is used to recover the entry. If -srckeypass isn't provided, then the keytool command attempts to use -srcstorepass to recover the entry. If -srcstorepass is not provided or is incorrect, then the user is prompted for a password. The destination entry is protected with -destkeypass. If -destkeypass isn't provided, then the destination entry is protected with the source entry password. For example, most third-party tools require storepass and keypass in a PKCS #12 keystore to be the same. To create a PKCS#12 keystore for these tools, always specify a -destkeypass that is the same as -deststorepass. If the -srcalias option isn't provided, then all entries in the source keystore are imported into the destination keystore. Each destination entry is stored under the alias from the source entry. If the source entry is protected by a password, then -srcstorepass is used to recover the entry. If -srcstorepass is not provided or is incorrect, then the user is prompted for a password. 
If a source keystore entry type isn't supported in the destination keystore, or if an error occurs while storing an entry into the destination keystore, then the user is prompted either to skip the entry and continue or to quit. The destination entry is protected with the source entry password. If the destination alias already exists in the destination keystore, then the user is prompted either to overwrite the entry or to create a new entry under a different alias name. If the -noprompt option is provided, then the user isn't prompted for a new destination alias. Existing entries are overwritten with the destination alias name. Entries that can't be imported are skipped and a warning is displayed. COMMANDS FOR GENERATING A CERTIFICATE REQUEST -certreq The following are the available options for the -certreq command: • {-alias alias}: Alias name of the entry to process • {-sigalg alg}: Signature algorithm name • {-file file}: Output file name • [ -keypass arg]: Key password • {-keystore keystore}: Keystore name • {-dname name}: Distinguished name • {-ext value}: X.509 extension • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -certreq command to generate a Certificate Signing Request (CSR) using the PKCS #10 format. A CSR is intended to be sent to a CA. The CA authenticates the certificate requestor (usually offline) and returns a certificate or certificate chain to replace the existing certificate chain (initially a self-signed certificate) in the keystore. 
The private key associated with alias is used to create the PKCS #10 certificate request. To access the private key, the correct password must be provided. If -keypass isn't provided at the command line and is different from the password used to protect the integrity of the keystore, then the user is prompted for it. If -dname is provided, then it is used as the subject in the CSR. Otherwise, the X.500 Distinguished Name associated with alias is used. The -sigalg value specifies the algorithm that should be used to sign the CSR. The CSR is stored in the -file file. If a file is not specified, then the CSR is output to stdout. Use the -importcert command to import the response from the CA. COMMANDS FOR EXPORTING DATA -exportcert The following are the available options for the -exportcert command: • {-rfc}: Output in RFC style • {-alias alias}: Alias name of the entry to process • {-file file}: Output file name • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -exportcert command to read a certificate from the keystore that is associated with -alias alias and store it in the -file file. When a file is not specified, the certificate is output to stdout. By default, the certificate is output in binary encoding. If the -rfc option is specified, then the output is in the printable encoding format defined by the Internet RFC 1421 Certificate Encoding Standard. If -alias refers to a trusted certificate, then that certificate is output. 
Otherwise, -alias refers to a key entry with an associated certificate chain. In that case, the first certificate in the chain is returned. This certificate authenticates the public key of the entity addressed by -alias. COMMANDS FOR DISPLAYING DATA -list The following are the available options for the -list command: • {-rfc}: Output in RFC style • {-alias alias}: Alias name of the entry to process • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg] }: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -list command to print the contents of the keystore entry identified by -alias to stdout. If -alias alias is not specified, then the contents of the entire keystore are printed. By default, this command prints the SHA-256 fingerprint of a certificate. If the -v option is specified, then the certificate is printed in human-readable format, with additional information such as the owner, issuer, serial number, and any extensions. If the -rfc option is specified, then the certificate contents are printed by using the printable encoding format, as defined by the Internet RFC 1421 Certificate Encoding Standard. Note: You can't specify both -v and -rfc in the same command. Otherwise, an error is reported. 
-printcert The following are the available options for the -printcert command: • {-rfc}: Output in RFC style • {-file cert_file}: Input file name • {-sslserver server[:port]}: Secure Sockets Layer (SSL) server host and port • {-jarfile JAR_file}: Signed .jar file • {-keystore keystore}: Keystore name • {-trustcacerts}: Trust certificates from cacerts • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-protected}: Password is provided through protected mechanism • {-v}: Verbose output Use the -printcert command to read and print the certificate from -file cert_file, the SSL server located at -sslserver server[:port], or the signed JAR file specified by -jarfile JAR_file. It prints its contents in a human-readable format. When a port is not specified, the standard HTTPS port 443 is assumed. Note: The -sslserver and -file options can't be provided in the same command. Otherwise, an error is reported. If you don't specify either option, then the certificate is read from stdin. When -rfc is specified, the keytool command prints the certificate in PEM mode as defined by the Internet RFC 1421 Certificate Encoding standard. If the certificate is read from a file or stdin, then it might be either binary encoded or in printable encoding format, as defined by the RFC 1421 Certificate Encoding standard. If the SSL server is behind a firewall, then the -J-Dhttps.proxyHost=proxyhost and -J-Dhttps.proxyPort=proxyport options can be specified on the command line for proxy tunneling. Note: This command can be used independently of a keystore. 
This command does not check for the weakness of a certificate's signature algorithm if it is a trusted certificate in the user keystore (specified by -keystore) or in the cacerts keystore (if -trustcacerts is specified). -printcertreq The following are the available options for the -printcertreq command: • {-file file}: Input file name • {-v}: Verbose output Use the -printcertreq command to print the contents of a PKCS #10 format certificate request, which can be generated by the keytool -certreq command. The command reads the request from file. If there is no file, then the request is read from the standard input. -printcrl The following are the available options for the -printcrl command: • {-file crl}: Input file name • {-keystore keystore}: Keystore name • {-trustcacerts}: Trust certificates from cacerts • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-protected}: Password is provided through protected mechanism • {-v}: Verbose output Use the -printcrl command to read the Certificate Revocation List (CRL) from -file crl . A CRL is a list of the digital certificates that were revoked by the CA that issued them. The CA generates the crl file. Note: This command can be used independently of a keystore. This command attempts to verify the CRL using a certificate from the user keystore (specified by -keystore) or the cacerts keystore (if -trustcacerts is specified), and will print out a warning if it cannot be verified. 
COMMANDS FOR MANAGING THE KEYSTORE -storepasswd The following are the available options for the -storepasswd command: • [-new arg]: New password • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output Use the -storepasswd command to change the password used to protect the integrity of the keystore contents. The new password is set by -new arg and must contain at least six characters. -keypasswd The following are the available options for the -keypasswd command: • {-alias alias}: Alias name of the entry to process • [-keypass old_keypass]: Key password • [-new new_keypass]: New password • {-keystore keystore}: Keystore name • {-storepass arg}: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output Use the -keypasswd command to change the password (under which private/secret keys identified by -alias are protected) from -keypass old_keypass to -new new_keypass. The password value must contain at least six characters. If the -keypass option isn't provided at the command line and the -keypass password is different from the keystore password (-storepass arg), then the user is prompted for it. 
If the -new option isn't provided at the command line, then the user is prompted for it. -delete The following are the available options for the -delete command: • [-alias alias]: Alias name of the entry to process • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -delete command to delete the -alias alias entry from the keystore. When not provided at the command line, the user is prompted for the alias. -changealias The following are the available options for the -changealias command: • {-alias alias}: Alias name of the entry to process • [-destalias alias]: Destination alias • [-keypass arg]: Key password • {-keystore keystore}: Keystore name • {-cacerts}: Access the cacerts keystore • [-storepass arg]: Keystore password • {-storetype type}: Keystore type • {-providername name}: Provider name • {-addprovider name [-providerarg arg]}: Add security provider by name (such as SunPKCS11) with an optional configure argument. • {-providerclass class [-providerarg arg]}: Add security provider by fully qualified class name with an optional configure argument. • {-providerpath list}: Provider classpath • {-v}: Verbose output • {-protected}: Password provided through a protected mechanism Use the -changealias command to move an existing keystore entry from -alias alias to a new -destalias alias. If a destination alias is not provided, then the command prompts you for one. 
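The -changealias and -delete commands can be sketched together, assuming a JDK keytool on the PATH (aliases and password are illustrative):

```shell
# Minimal -changealias / -delete sketch. Assumes a JDK keytool on PATH;
# aliases and password are illustrative.
DIR=$(mktemp -d)
for a in first second; do
  keytool -genkeypair -alias "$a" -keyalg ec -dname "cn=$a" \
    -keystore "$DIR/ks.p12" -storetype pkcs12 -storepass changeit
done
# Move the entry "first" to a new alias; with PKCS #12 the entry
# password equals the store password, so -storepass is enough.
keytool -changealias -alias first -destalias renamed \
  -keystore "$DIR/ks.p12" -storepass changeit
# Remove the entry "second" entirely.
keytool -delete -alias second -keystore "$DIR/ks.p12" -storepass changeit
keytool -list -keystore "$DIR/ks.p12" -storepass changeit
```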
If the original entry is protected with an entry password, then the password can be supplied with the -keypass option. If a key password is not provided, then the -storepass (if provided) is attempted first. If the attempt fails, then the user is prompted for a password. COMMANDS FOR DISPLAYING SECURITY-RELATED INFORMATION -showinfo The following are the available options for the -showinfo command: • {-tls}: Displays TLS configuration information • {-v}: Verbose output Use the -showinfo command to display various security-related information. The -tls option displays TLS configurations, such as the list of enabled protocols and cipher suites. COMMANDS FOR DISPLAYING PROGRAM VERSION You can use -version to print the program version of keytool. COMMANDS FOR DISPLAYING HELP INFORMATION You can use --help to display a list of keytool commands or to display help information about a specific keytool command. • To display a list of keytool commands, enter: keytool --help • To display help information about a specific keytool command, enter: keytool -<command> --help COMMON COMMAND OPTIONS The -v option can appear for all commands except --help. When the -v option appears, it signifies verbose mode, which means that more information is provided in the output. The -Joption argument can appear for any command. When -Joption is used, the specified option string is passed directly to the Java interpreter. The option string must not contain any spaces. It's useful for adjusting the execution environment or memory usage. For a list of possible interpreter options, enter java -h or java -X at the command line. These options can appear for all commands operating on a keystore: -storetype storetype This qualifier specifies the type of keystore to be instantiated. -keystore keystore The keystore location. If the JKS storetype is used and a keystore file doesn't yet exist, then certain keytool commands can result in a new keystore file being created. 
For example, if keytool -genkeypair is called and the -keystore option isn't specified, the default keystore file named .keystore is created in the user's home directory if it doesn't already exist. Similarly, if the -keystore ks_file option is specified but ks_file doesn't exist, then it is created. For more information on the JKS storetype, see the KeyStore Implementation section in KeyStore aliases. Note that the input stream from the -keystore option is passed to the KeyStore.load method. If NONE is specified as the URL, then a null stream is passed to the KeyStore.load method. NONE should be specified if the keystore isn't file-based. For example, when the keystore resides on a hardware token device. -cacerts cacerts Operates on the cacerts keystore . This option is equivalent to -keystore path_to_cacerts -storetype type_of_cacerts. An error is reported if the -keystore or -storetype option is used with the -cacerts option. -storepass [:env | :file ] argument The password that is used to protect the integrity of the keystore. If the modifier env or file isn't specified, then the password has the value argument, which must contain at least six characters. Otherwise, the password is retrieved as follows: • env: Retrieve the password from the environment variable named argument. • file: Retrieve the password from the file named argument. Note: All other options that require passwords, such as -keypass, -srckeypass, -destkeypass, -srcstorepass, and -deststorepass, accept the env and file modifiers. Remember to separate the password option and the modifier with a colon (:). The password must be provided to all commands that access the keystore contents. For such commands, when the -storepass option isn't provided at the command line, the user is prompted for it. When retrieving information from the keystore, the password is optional. If a password is not specified, then the integrity of the retrieved information can't be verified and a warning is displayed. 
-providername name Used to identify a cryptographic service provider's name when listed in the security properties file. -addprovider name Used to add a security provider by name (such as SunPKCS11). -providerclass class Used to specify the name of a cryptographic service provider's master class file when the service provider isn't listed in the security properties file. -providerpath list Used to specify the provider classpath. -providerarg arg Used with the -addprovider or -providerclass option to represent an optional string input argument for the constructor of class name. -protected=true|false Specify this value as true when a password must be specified by way of a protected authentication path, such as a dedicated PIN reader. Because there are two keystores involved in the -importkeystore command, the following two options, -srcprotected and -destprotected, are provided for the source keystore and the destination keystore respectively. -ext {name{:critical} {=value}} Denotes an X.509 certificate extension. The option can be used in -genkeypair and -gencert to embed extensions into the generated certificate, or in -certreq to show what extensions are requested in the certificate request. The option can appear multiple times. The name argument can be a supported extension name (see Supported Named Extensions) or an arbitrary OID number. The value argument, when provided, denotes the argument for the extension. When value is omitted, either the default value of the extension is used or the extension requires no argument. The :critical modifier, when provided, means the extension's isCritical attribute is true; otherwise, it is false. You can use :c in place of :critical. -conf file Specifies a pre-configured options file. PRE-CONFIGURED OPTIONS FILE A pre-configured options file is a Java properties file that can be specified with the -conf option. Each property represents the default option(s) for a keytool command using "keytool.command_name" as the property name. 
A special property named "keytool.all" represents the default option(s) applied to all commands. A property value can include ${prop} which will be expanded to the system property associated with it. If an option value includes white spaces inside, it should be surrounded by quotation marks (" or '). All property names must be in lower case. When keytool is launched with a pre-configured options file, the value for "keytool.all" (if it exists) is prepended to the keytool command line first, followed by the value for the command name (if it exists), and then the existing options on the command line last. For a single-valued option, this allows the property for a specific command to override the "keytool.all" value, and the value specified on the command line to override both. For multiple-valued options, all of them will be used by keytool. For example, given the following file named preconfig:

    # A tiny pre-configured options file
    keytool.all = -keystore ${user.home}/ks
    keytool.list = -v
    keytool.genkeypair = -keyalg rsa

keytool -conf preconfig -list is identical to

    keytool -keystore ~/ks -v -list

keytool -conf preconfig -genkeypair -alias me is identical to

    keytool -keystore ~/ks -keyalg rsa -genkeypair -alias me

keytool -conf preconfig -genkeypair -alias you -keyalg ec is identical to

    keytool -keystore ~/ks -keyalg rsa -genkeypair -alias you -keyalg ec

which is equivalent to

    keytool -keystore ~/ks -genkeypair -alias you -keyalg ec

because -keyalg is a single-valued option and the ec value specified on the command line overrides the preconfigured options file. 
EXAMPLES OF OPTION VALUES The following examples show the defaults for various option values:

-alias "mykey"

-keysize
    2048 (when using -genkeypair and -keyalg is "DSA")
    3072 (when using -genkeypair and -keyalg is "RSA", "RSASSA-PSS", or "DH")
    384 (when using -genkeypair and -keyalg is "EC")
    56 (when using -genseckey and -keyalg is "DES")
    168 (when using -genseckey and -keyalg is "DESede")

-groupname
    ed25519 (when using -genkeypair and -keyalg is "EdDSA", key size is 255)
    x25519 (when using -genkeypair and -keyalg is "XDH", key size is 255)

-validity 90

-keystore <the file named .keystore in the user's home directory>

-destkeystore <the file named .keystore in the user's home directory>

-storetype <the value of the "keystore.type" property in the security properties file, which is returned by the static getDefaultType method in java.security.KeyStore>

-file
    stdin (if reading)
    stdout (if writing)

-protected false

When generating a certificate or a certificate request, the default signature algorithm (-sigalg option) is derived from the algorithm of the underlying private key to provide an appropriate level of security strength as follows:

Default Signature Algorithms

keyalg      key size    default sigalg
───────────────────────────────────────────
DSA         any size    SHA256withDSA
RSA         < 624       SHA256withRSA (key size is too small for using SHA-384)
            <= 7680     SHA384withRSA
            > 7680      SHA512withRSA
EC          < 512       SHA384withECDSA
            >= 512      SHA512withECDSA
RSASSA-PSS  < 624       RSASSA-PSS (with SHA-256, key size is too small for using SHA-384)
            <= 7680     RSASSA-PSS (with SHA-384)
            > 7680      RSASSA-PSS (with SHA-512)
EdDSA       255         Ed25519
            448         Ed448
Ed25519     255         Ed25519
Ed448       448         Ed448

• The key size, measured in bits, corresponds to the size of the private key. This size is determined by the value of the -keysize or -groupname options or the value derived from a default setting. • An RSASSA-PSS signature algorithm uses a MessageDigest algorithm as its hash and MGF1 algorithms. 
• If neither a default -keysize nor -groupname is defined for an algorithm, the security provider will choose a default setting. Note: To improve out-of-the-box security, default keysize, groupname, and signature algorithm names are periodically updated to stronger values with each release of the JDK. If interoperability with older releases of the JDK is important, make sure that the defaults are supported by those releases. Alternatively, you can use the -keysize, -groupname, or -sigalg options to override the default values at your own risk. SUPPORTED NAMED EXTENSIONS The keytool command supports these named extensions. The names aren't case-sensitive. BC or BasicConstraints Values: The full form is ca:{true|false}[,pathlen:len] or len, which is short for ca:true,pathlen:len. When len is omitted, the resulting value is ca:true. KU or KeyUsage Values: usage(, usage)* usage can be one of the following: • digitalSignature • nonRepudiation (contentCommitment) • keyEncipherment • dataEncipherment • keyAgreement • keyCertSign • cRLSign • encipherOnly • decipherOnly Provided there is no ambiguity, the usage argument can be abbreviated with the first few letters (such as dig for digitalSignature) or in camel-case style (such as dS for digitalSignature or cRLS for cRLSign). The usage values are case-sensitive. EKU or ExtendedKeyUsage Values: usage(, usage)* usage can be one of the following: • anyExtendedKeyUsage • serverAuth • clientAuth • codeSigning • emailProtection • timeStamping • OCSPSigning • Any OID string Provided there is no ambiguity, the usage argument can be abbreviated with the first few letters or in camel-case style. The usage values are case-sensitive. SAN or SubjectAlternativeName Values: type:value(, type:value)* type can be one of the following: • EMAIL • URI • DNS • IP • OID The value argument is the string format value for the type. IAN or IssuerAlternativeName Values: Same as SAN or SubjectAlternativeName. 
SIA or SubjectInfoAccess Values: method:location-type:location-value(, method:location-type:location-value)* method can be one of the following: • timeStamping • caRepository • Any OID The location-type and location-value arguments can be any type:value supported by the SubjectAlternativeName extension. AIA or AuthorityInfoAccess Values: Same as SIA or SubjectInfoAccess. The method argument can be one of the following: • ocsp • caIssuers • Any OID When name is OID, the value is the hexadecimal dumped Definite Encoding Rules (DER) encoding of the extnValue for the extension excluding the OCTET STRING type and length bytes. Other than standard hexadecimal numbers (0-9, a-f, A-F), any extra characters are ignored in the HEX string. Therefore, both 01:02:03:04 and 01020304 are accepted as identical values. When there is no value, the extension has an empty value field. A special name honored, used only in -gencert, denotes how the extensions included in the certificate request should be honored. The value for this name is a comma-separated list of all (all requested extensions are honored), name{:[critical|non-critical]} (the named extension is honored, but it uses a different isCritical attribute), and -name (used with all, denotes an exception). Requested extensions aren't honored by default. If, besides the -ext honored option, another named or OID -ext option is provided, this extension is added to those already honored. However, if this name (or OID) also appears in the honored value, then its value and criticality override that in the request. If an extension of the same type is provided multiple times through either a name or an OID, only the last extension is used. The subjectKeyIdentifier extension is always created. For non-self-signed certificates, the authorityKeyIdentifier is created. CAUTION: Users should be aware that some combinations of extensions (and other certificate fields) may not conform to the Internet standard. 
See Certificate Conformance Warning. EXAMPLES OF TASKS IN CREATING A KEYSTORE The following examples describe the sequence of actions in creating a keystore for managing public/private key pairs and certificates from trusted entities. • Generating the Key Pair • Requesting a Signed Certificate from a CA • Importing a Certificate for the CA • Importing the Certificate Reply from the CA • Exporting a Certificate That Authenticates the Public Key • Importing the Keystore • Generating Certificates for an SSL Server GENERATING THE KEY PAIR Create a keystore and then generate the key pair. You can enter the command as a single line such as the following: keytool -genkeypair -dname "cn=myname, ou=mygroup, o=mycompany, c=mycountry" -alias business -keyalg rsa -keypass password -keystore /working/mykeystore -storepass password -validity 180 The command creates the keystore named mykeystore in the working directory (provided it doesn't already exist), and assigns it the password specified by -storepass. It generates a public/private key pair for the entity whose distinguished name is myname, mygroup, mycompany, and a two-letter country code of mycountry. It uses the RSA key generation algorithm to create the keys; both are 3072 bits. The command uses the default SHA384withRSA signature algorithm to create a self-signed certificate that includes the public key and the distinguished name information. The certificate is valid for 180 days, and is associated with the private key in a keystore entry referred to by -alias business. The private key is assigned the password specified by -keypass. The command is significantly shorter when the option defaults are accepted. In this case, only -keyalg is required, and the defaults are used for unspecified options that have default values. You are prompted for any required values. 
You could have the following: keytool -genkeypair -keyalg rsa In this case, a keystore entry with the alias mykey is created, with a newly generated key pair and a certificate that is valid for 90 days. This entry is placed in your home directory in a keystore named .keystore, which is created if it doesn't already exist. You are prompted for the distinguished name information, the keystore password, and the private key password. Note: The rest of the examples assume that you responded to the prompts with values equal to those specified in the first -genkeypair command. For example, a distinguished name of cn=myname, ou=mygroup, o=mycompany, c=mycountry. REQUESTING A SIGNED CERTIFICATE FROM A CA Note: Generating the key pair created a self-signed certificate; however, a certificate is more likely to be trusted by others when it is signed by a CA. To get a CA signature, complete the following process: 1. Generate a CSR: keytool -certreq -file myname.csr This creates a CSR for the entity identified by the default alias mykey and puts the request in the file named myname.csr. 2. Submit myname.csr to a CA, such as DigiCert. The CA authenticates you, the requestor (usually offline), and returns a certificate, signed by them, authenticating your public key. In some cases, the CA returns a chain of certificates, each one authenticating the public key of the signer of the previous certificate in the chain. IMPORTING A CERTIFICATE FOR THE CA To import a certificate for the CA, complete the following process: 1. Before you import the certificate reply from a CA, you need one or more trusted certificates either in your keystore or in the cacerts keystore file. See -importcert in Commands. • If the certificate reply is a certificate chain, then you need the top certificate of the chain. The root CA certificate that authenticates the public key of the CA. 
• If the certificate reply is a single certificate, then you need a certificate for the issuing CA (the one that signed it). If that certificate isn't self-signed, then you need a certificate for its signer, and so on, up to a self-signed root CA certificate. The cacerts keystore ships with a set of root certificates issued by the CAs of the Oracle Java Root Certificate program [http://www.oracle.com/technetwork/java/javase/javasecarootcertsprogram-1876540.html]. If you request a signed certificate from a CA, and a certificate authenticating that CA's public key hasn't been added to cacerts, then you must import a certificate from that CA as a trusted certificate. A certificate from a CA is usually self-signed or signed by another CA. If it is signed by another CA, you need a certificate that authenticates that CA's public key. For example, suppose you have obtained an X.cer file from a company that is a CA and the file is supposed to be a self-signed certificate that authenticates that CA's public key. Before you import it as a trusted certificate, you should ensure that the certificate is valid by: 1. Viewing it with the keytool -printcert command or the keytool -importcert command without using the -noprompt option. Make sure that the displayed certificate fingerprints match the expected fingerprints. 2. Calling the person who sent the certificate, and comparing the fingerprints that you see with the ones that they show or that a secure public key repository shows. Only when the fingerprints are equal is it assured that the certificate wasn't replaced in transit with somebody else's certificate (such as an attacker's certificate). If such an attack takes place, and you didn't check the certificate before you imported it, then you would be trusting anything that the attacker signed. 2. 
Replace the self-signed certificate with a certificate chain, where each certificate in the chain authenticates the public key of the signer of the previous certificate in the chain, up to a root CA. If you trust that the certificate is valid, then you can add it to your keystore by entering the following command: keytool -importcert -alias alias -file X.cer This command creates a trusted certificate entry in the keystore from the data in the CA certificate file and assigns the value of the alias to the entry. IMPORTING THE CERTIFICATE REPLY FROM THE CA After you import a certificate that authenticates the public key of the CA that you submitted your certificate signing request to (or there is already such a certificate in the cacerts file), you can import the certificate reply and replace your self-signed certificate with a certificate chain. The certificate chain is one of the following: • Returned by the CA when the CA reply is a chain. • Constructed when the CA reply is a single certificate. This certificate chain is constructed by using the certificate reply and trusted certificates available either in the keystore where you import the reply or in the cacerts keystore file. For example, if you sent your certificate signing request to DigiCert, then you can import their reply by entering the following command: Note: In this example, the returned certificate is named DCmyname.cer. keytool -importcert -trustcacerts -file DCmyname.cer EXPORTING A CERTIFICATE THAT AUTHENTICATES THE PUBLIC KEY Note: If you used the jarsigner command to sign a Java Archive (JAR) file, then clients that use the file will want to authenticate your signature. One way that clients can authenticate you is by importing your public key certificate into their keystore as a trusted entry. You can then export the certificate and supply it to your clients. For example: 1. 
Copy your certificate to a file named myname.cer by entering the following command: Note: In this example, the entry has an alias of mykey. keytool -exportcert -alias mykey -file myname.cer 2. With the certificate and the signed JAR file, a client can use the jarsigner command to authenticate your signature. IMPORTING THE KEYSTORE Use the importkeystore command to import an entire keystore into another keystore. This imports all entries from the source keystore, including keys and certificates, to the destination keystore with a single command. You can use this command to import entries from a different type of keystore. During the import, all new entries in the destination keystore will have the same alias names and protection passwords (for secret keys and private keys). If the keytool command can't recover the private keys or secret keys from the source keystore, then it prompts you for a password. If it detects alias duplication, then it asks you for a new alias, and you can specify a new alias or simply allow the keytool command to overwrite the existing one. For example, import entries from a typical JKS type keystore key.jks into a PKCS #11 type hardware-based keystore, by entering the following command: keytool -importkeystore -srckeystore key.jks -destkeystore NONE -srcstoretype JKS -deststoretype PKCS11 -srcstorepass password -deststorepass password The importkeystore command can also be used to import a single entry from a source keystore to a destination keystore. In this case, besides the options you used in the previous example, you need to specify the alias you want to import. 
With the -srcalias option specified, you can also specify the destination alias name, protection password for a secret or private key, and the destination protection password you want as follows: keytool -importkeystore -srckeystore key.jks -destkeystore NONE -srcstoretype JKS -deststoretype PKCS11 -srcstorepass password -deststorepass password -srcalias myprivatekey -destalias myoldprivatekey -srckeypass password -destkeypass password -noprompt GENERATING CERTIFICATES FOR AN SSL SERVER The following are keytool commands used to generate key pairs and certificates for three entities: • Root CA (root) • Intermediate CA (ca) • SSL server (server) Ensure that you store all the certificates in the same keystore. keytool -genkeypair -keystore root.jks -alias root -ext bc:c -keyalg rsa keytool -genkeypair -keystore ca.jks -alias ca -ext bc:c -keyalg rsa keytool -genkeypair -keystore server.jks -alias server -keyalg rsa keytool -keystore root.jks -alias root -exportcert -rfc > root.pem keytool -storepass password -keystore ca.jks -certreq -alias ca | keytool -storepass password -keystore root.jks -gencert -alias root -ext BC=0 -rfc > ca.pem keytool -keystore ca.jks -importcert -alias ca -file ca.pem keytool -storepass password -keystore server.jks -certreq -alias server | keytool -storepass password -keystore ca.jks -gencert -alias ca -ext ku:c=dig,kE -rfc > server.pem cat root.pem ca.pem server.pem | keytool -keystore server.jks -importcert -alias server TERMS Keystore A keystore is a storage facility for cryptographic keys and certificates. Keystore entries Keystores can have different types of entries. The two most applicable entry types for the keytool command include the following: Key entries: Each entry holds very sensitive cryptographic key information, which is stored in a protected format to prevent unauthorized access. 
Typically, a key stored in this type of entry is a secret key, or a private key accompanied by the certificate chain for the corresponding public key. See Certificate Chains. The keytool command can handle both types of entries, while the jarsigner tool only handles the latter type of entry, that is private keys and their associated certificate chains. Trusted certificate entries: Each entry contains a single public key certificate that belongs to another party. The entry is called a trusted certificate because the keystore owner trusts that the public key in the certificate belongs to the identity identified by the subject (owner) of the certificate. The issuer of the certificate vouches for this, by signing the certificate. Keystore aliases All keystore entries (key and trusted certificate entries) are accessed by way of unique aliases. An alias is specified when you add an entity to the keystore with the -genseckey command to generate a secret key, the -genkeypair command to generate a key pair (public and private key), or the -importcert command to add a certificate or certificate chain to the list of trusted certificates. Subsequent keytool commands must use this same alias to refer to the entity. For example, you can use the alias duke to generate a new public/private key pair and wrap the public key into a self-signed certificate with the following command. See Certificate Chains. keytool -genkeypair -alias duke -keyalg rsa -keypass passwd This example specifies an initial passwd required by subsequent commands to access the private key associated with the alias duke. If you later want to change Duke's private key password, use a command such as the following: keytool -keypasswd -alias duke -keypass passwd -new newpasswd This changes the initial passwd to newpasswd. A password shouldn't be specified on a command line or in a script unless it is for testing purposes, or you are on a secure system. 
If you don't specify a required password option on a command line, then you are prompted for it. Keystore implementation The KeyStore class provided in the java.security package supplies well-defined interfaces to access and modify the information in a keystore. There can be multiple concrete implementations, each for a particular type of keystore. Currently, two command-line tools (keytool and jarsigner) make use of keystore implementations. Because the KeyStore class is public, users can write additional security applications that use it. In JDK 9 and later, the default keystore implementation is PKCS12. This is a cross-platform keystore based on the RSA PKCS12 Personal Information Exchange Syntax Standard. This standard is primarily meant for storing or transporting a user's private keys, certificates, and miscellaneous secrets. There is another built-in implementation, provided by Oracle. It implements the keystore as a file with a proprietary keystore type (format) named JKS. It protects each private key with its individual password, and also protects the integrity of the entire keystore with a (possibly different) password. Keystore implementations are provider-based. More specifically, the application interfaces supplied by KeyStore are implemented in terms of a Service Provider Interface (SPI). That is, there is a corresponding abstract KeystoreSpi class, also in the java.security package, which defines the Service Provider Interface methods that providers must implement. The term provider refers to a package or a set of packages that supply a concrete implementation of a subset of services that can be accessed by the Java Security API. To provide a keystore implementation, clients must implement a provider and supply a KeystoreSpi subclass implementation, as described in Steps to Implement and Integrate a Provider. 
Applications can choose different types of keystore implementations from different providers, using the getInstance factory method supplied in the KeyStore class. A keystore type defines the storage and data format of the keystore information, and the algorithms used to protect private/secret keys in the keystore and the integrity of the keystore. Keystore implementations of different types aren't compatible. The keytool command works on any file-based keystore implementation. It treats the keystore location that is passed to it at the command line as a file name and converts it to a FileInputStream, from which it loads the keystore information. The jarsigner command can read a keystore from any location that can be specified with a URL. For keytool and jarsigner, you can specify a keystore type at the command line, with the -storetype option. If you don't explicitly specify a keystore type, then the tools choose a keystore implementation based on the value of the keystore.type property specified in the security properties file. The security properties file is called java.security, and resides in the security properties directory: • Linux and macOS: java.home/lib/security • Windows: java.home\lib\security Each tool gets the keystore.type value and then examines all the currently installed providers until it finds one that implements keystores of that type. It then uses the keystore implementation from that provider. The KeyStore class defines a static method named getDefaultType that lets applications retrieve the value of the keystore.type property. The following line of code creates an instance of the default keystore type as specified in the keystore.type property: KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType()); The default keystore type is pkcs12, which is a cross-platform keystore based on the RSA PKCS12 Personal Information Exchange Syntax Standard. 
This is specified by the following line in the security properties file: keystore.type=pkcs12 To have the tools utilize a keystore implementation other than the default, you can change that line to specify a different keystore type. For example, if you want to use Oracle's jks keystore implementation, then change the line to the following: keystore.type=jks Note: Case doesn't matter in keystore type designations. For example, JKS would be considered the same as jks. Certificate A certificate (or public-key certificate) is a digitally signed statement from one entity (the issuer), saying that the public key and some other information of another entity (the subject) has some specific value. The following terms are related to certificates: • Public Keys: These are numbers associated with a particular entity, and are intended to be known to everyone who needs to have trusted interactions with that entity. Public keys are used to verify signatures. • Digitally Signed: If some data is digitally signed, then it is stored with the identity of an entity and a signature that proves that entity knows about the data. The data is rendered unforgeable by signing with the entity's private key. • Identity: A known way of addressing an entity. In some systems, the identity is the public key, and in others it can be anything from an Oracle Solaris UID to an email address to an X.509 distinguished name. • Signature: A signature is computed over some data using the private key of an entity (the signer, which in the case of a certificate is also known as the issuer). • Private Keys: These are numbers, each of which is supposed to be known only to the particular entity whose private key it is (that is, it is supposed to be kept secret). Private and public keys exist in pairs in all public key cryptography systems (also referred to as public key crypto systems). In a typical public key crypto system, such as DSA, a private key corresponds to exactly one public key. 
Private keys are used to compute signatures. • Entity: An entity is a person, organization, program, computer, business, bank, or something else you are trusting to some degree. Public key cryptography requires access to users' public keys. In a large-scale networked environment, it is impossible to guarantee that prior relationships between communicating entities were established or that a trusted repository exists with all used public keys. Certificates were invented as a solution to this public key distribution problem. Now a Certification Authority (CA) can act as a trusted third party. CAs are entities such as businesses that are trusted to sign (issue) certificates for other entities. It is assumed that CAs only create valid and reliable certificates because they are bound by legal agreements. There are many public Certification Authorities, such as DigiCert, Comodo, Entrust, and so on. You can also run your own Certification Authority using products such as Microsoft Certificate Server or the Entrust CA product for your organization. With the keytool command, it is possible to display, import, and export certificates. It is also possible to generate self-signed certificates. The keytool command currently handles X.509 certificates. X.509 Certificates The X.509 standard defines what information can go into a certificate and describes how to write it down (the data format). All the data in a certificate is encoded with two related standards called ASN.1/DER. Abstract Syntax Notation 1 describes data. The Definite Encoding Rules describe a single way to store and transfer that data. All X.509 certificates have the following data, in addition to the signature: • Version: This identifies which version of the X.509 standard applies to this certificate, which affects what information can be specified in it. Thus far, three versions are defined. The keytool command can import and export v1, v2, and v3 certificates. It generates v3 certificates. 
• X.509 Version 1 has been available since 1988, is widely deployed, and is the most generic. • X.509 Version 2 introduced the concept of subject and issuer unique identifiers to handle the possibility of reuse of subject or issuer names over time. Most certificate profile documents strongly recommend that names not be reused and that certificates shouldn't make use of unique identifiers. Version 2 certificates aren't widely used. • X.509 Version 3 is the most recent (1996) and supports the notion of extensions where anyone can define an extension and include it in the certificate. Some common extensions are: KeyUsage (limits the use of the keys to particular purposes such as signing-only) and AlternativeNames (allows other identities to also be associated with this public key, for example, DNS names, email addresses, IP addresses). Extensions can be marked critical to indicate that the extension should be checked and enforced or used. For example, if a certificate has the KeyUsage extension marked critical and set to keyCertSign, then when this certificate is presented during SSL communication, it should be rejected because the certificate extension indicates that the associated private key should only be used for signing certificates and not for SSL use. • Serial number: The entity that created the certificate is responsible for assigning it a serial number to distinguish it from other certificates it issues. This information is used in numerous ways. For example, when a certificate is revoked, its serial number is placed in a Certificate Revocation List (CRL). • Signature algorithm identifier: This identifies the algorithm used by the CA to sign the certificate. • Issuer name: The X.500 Distinguished Name of the entity that signed the certificate. This is typically a CA. Using this certificate implies trusting the entity that signed this certificate. In some cases, such as root or top-level CA certificates, the issuer signs its own certificate. 
• Validity period: Each certificate is valid only for a limited amount of time. This period is described by a start date and time and an end date and time, and can be as short as a few seconds or almost as long as a century. The validity period chosen depends on a number of factors, such as the strength of the private key used to sign the certificate, or the amount one is willing to pay for a certificate. This is the expected period that entities can rely on the public value, when the associated private key has not been compromised. • Subject name: The name of the entity whose public key the certificate identifies. This name uses the X.500 standard, so it is intended to be unique across the Internet. This is the X.500 Distinguished Name (DN) of the entity. For example, CN=Java Duke, OU=Java Software Division, O=Oracle Corporation, C=US These refer to the subject's common name (CN), organizational unit (OU), organization (O), and country (C). • Subject public key information: This is the public key of the entity being named with an algorithm identifier that specifies which public key crypto system this key belongs to and any associated key parameters. Certificate Chains The keytool command can create and manage keystore key entries that each contain a private key and an associated certificate chain. The first certificate in the chain contains the public key that corresponds to the private key. When keys are first generated, the chain usually starts off containing a single element, a self-signed certificate. See -genkeypair in Commands. A self-signed certificate is one for which the issuer (signer) is the same as the subject. The subject is the entity whose public key is being authenticated by the certificate. When the -genkeypair command is called to generate a new public/private key pair, it also wraps the public key into a self-signed certificate (unless the -signer option is specified). 
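This initial single-element, self-signed chain can be inspected directly. A sketch, using hypothetical alias, keystore, and file names, assuming keytool is on the PATH:

```shell
# Generate a new key pair; keytool wraps the public key in a
# self-signed certificate, so the chain starts with one element.
keytool -genkeypair -keystore chain.ks -storepass changeit \
  -keypass changeit -alias duke -keyalg rsa \
  -dname "CN=Duke, O=Example, C=US"

# -list -v shows the entry's certificate chain, which at this
# point contains a single certificate...
keytool -list -v -keystore chain.ks -storepass changeit -alias duke

# ...and because that certificate is self-signed, the Owner and
# Issuer fields printed by -printcert are identical.
keytool -exportcert -keystore chain.ks -storepass changeit \
  -alias duke -file duke.cer
keytool -printcert -file duke.cer
```

After a CA reply is imported for the alias, repeating the -list -v command would show the longer chain described next.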
Later, after a Certificate Signing Request (CSR) was generated with the -certreq command and sent to a Certification Authority (CA), the response from the CA is imported with -importcert, and the self-signed certificate is replaced by a chain of certificates. At the bottom of the chain is the certificate (reply) issued by the CA authenticating the subject's public key. The next certificate in the chain is one that authenticates the CA's public key. In many cases, this is a self-signed certificate, which is a certificate from the CA authenticating its own public key, and the last certificate in the chain. In other cases, the CA might return a chain of certificates. In this case, the bottom certificate in the chain is the same (a certificate signed by the CA, authenticating the public key of the key entry), but the second certificate in the chain is a certificate signed by a different CA that authenticates the public key of the CA you sent the CSR to. The next certificate in the chain is a certificate that authenticates the second CA's key, and so on, until a self-signed root certificate is reached. Each certificate in the chain (after the first) authenticates the public key of the signer of the previous certificate in the chain. Many CAs only return the issued certificate, with no supporting chain, especially when there is a flat hierarchy (no intermediate CAs). In this case, the certificate chain must be established from trusted certificate information already stored in the keystore. A different reply format (defined by the PKCS #7 standard) includes the supporting certificate chain in addition to the issued certificate. Both reply formats can be handled by the keytool command. The top-level (root) CA certificate is self-signed. However, the trust in the root's public key doesn't come from the root certificate itself, but from other sources such as a newspaper. 
This is because anybody could generate a self-signed certificate with the distinguished name of, for example, the DigiCert root CA. The root CA public key is widely known. The only reason it is stored in a certificate is because this is the format understood by most tools, so the certificate in this case is only used as a vehicle to transport the root CA's public key. Before you add the root CA certificate to your keystore, you should view it with the -printcert option and compare the displayed fingerprint with the well-known fingerprint obtained from a newspaper, the root CA's Web page, and so on. cacerts Certificates File A certificates file named cacerts resides in the security properties directory: • Linux and macOS: JAVA_HOME/lib/security • Windows: JAVA_HOME\lib\security The cacerts file represents a system-wide keystore with CA certificates. System administrators can configure and manage that file with the keytool command by specifying jks as the keystore type. The cacerts keystore file ships with a default set of root CA certificates. For Linux, macOS, and Windows, you can list the default certificates with the following command: keytool -list -cacerts The initial password of the cacerts keystore file is changeit. System administrators should change that password and the default access permission of that file upon installing the SDK. Note: It is important to verify your cacerts file. Because you trust the CAs in the cacerts file as entities for signing and issuing certificates to other entities, you must manage the cacerts file carefully. The cacerts file should contain only certificates of the CAs you trust. It is your responsibility to verify the trusted root CA certificates bundled in the cacerts file and make your own trust decisions. To remove an untrusted CA certificate from the cacerts file, use the -delete option of the keytool command. You can find the cacerts file in the JDK's $JAVA_HOME/lib/security directory. 
Contact your system administrator if you don't have permission to edit this file. Internet RFC 1421 Certificate Encoding Standard Certificates are often stored using the printable encoding format defined by the Internet RFC 1421 standard, instead of their binary encoding. This certificate format, also known as Base64 encoding, makes it easy to export certificates to other applications by email or through some other mechanism. Certificates read by the -importcert and -printcert commands can be in either this format or binary encoded. The -exportcert command by default outputs a certificate in binary encoding, but will instead output a certificate in the printable encoding format, when the -rfc option is specified. The -list command by default prints the SHA-256 fingerprint of a certificate. If the -v option is specified, then the certificate is printed in human-readable format. If the -rfc option is specified, then the certificate is output in the printable encoding format. In its printable encoding format, the encoded certificate is bounded at the beginning and end by the following text: -----BEGIN CERTIFICATE----- encoded certificate goes here. -----END CERTIFICATE----- X.500 Distinguished Names X.500 Distinguished Names are used to identify entities, such as those that are named by the subject and issuer (signer) fields of X.509 certificates. The keytool command supports the following subparts: • commonName: The common name of a person such as Susan Jones. • organizationUnit: The small organization (such as department or division) name. For example, Purchasing. • localityName: The locality (city) name, for example, Palo Alto. • stateName: State or province name, for example, California. • country: Two-letter country code, for example, CH. 
When you supply a distinguished name string as the value of a -dname option, such as for the -genkeypair command, the string must be in the following format: CN=cName, OU=orgUnit, O=org, L=city, S=state, C=countryCode Here cName, orgUnit, org, city, state, and countryCode represent actual values, and the keywords are abbreviations for the following: CN=commonName OU=organizationUnit O=organizationName L=localityName S=stateName C=country A sample distinguished name string is: CN=Mark Smith, OU=Java, O=Oracle, L=Cupertino, S=California, C=US A sample command using such a string is: keytool -genkeypair -dname "CN=Mark Smith, OU=Java, O=Oracle, L=Cupertino, S=California, C=US" -alias mark -keyalg rsa Case doesn't matter for the keyword abbreviations. For example, CN, cn, and Cn are all treated the same. Order matters; each subcomponent must appear in the designated order. However, it isn't necessary to have all the subcomponents. You can use a subset, for example: CN=Smith, OU=Java, O=Oracle, C=US If a distinguished name string value contains a comma, then the comma must be escaped by a backslash (\) character when you specify the string on a command line, as in: cn=Jack, ou=Java\, Product Development, o=Oracle, c=US It is never necessary to specify a distinguished name string on a command line. When the distinguished name is needed for a command, but not supplied on the command line, the user is prompted for each of the subcomponents. In this case, a comma doesn't need to be escaped by a backslash (\). WARNINGS IMPORTING TRUSTED CERTIFICATES WARNING Important: Be sure to check a certificate very carefully before importing it as a trusted certificate. Windows Example: View the certificate first with the -printcert command or the -importcert command without the -noprompt option. Ensure that the displayed certificate fingerprints match the expected ones. For example, suppose someone sends or emails you a certificate that you put in a file named \tmp\cert. 
Before you consider adding the certificate to your list of trusted certificates, you can execute a -printcert command to view its fingerprints, as follows: keytool -printcert -file \tmp\cert Owner: CN=ll, OU=ll, O=ll, L=ll, S=ll, C=ll Issuer: CN=ll, OU=ll, O=ll, L=ll, S=ll, C=ll Serial Number: 59092b34 Valid from: Thu Jun 24 18:01:13 PDT 2016 until: Wed Jun 23 17:01:13 PST 2016 Certificate Fingerprints: SHA-1: 20:B6:17:FA:EF:E5:55:8A:D0:71:1F:E8:D6:9D:C0:37:13:0E:5E:FE SHA-256: 90:7B:70:0A:EA:DC:16:79:92:99:41:FF:8A:FE:EB:90: 17:75:E0:90:B2:24:4D:3A:2A:16:A6:E4:11:0F:67:A4 Linux Example: View the certificate first with the -printcert command or the -importcert command without the -noprompt option. Ensure that the displayed certificate fingerprints match the expected ones. For example, suppose someone sends or emails you a certificate that you put in a file named /tmp/cert. Before you consider adding the certificate to your list of trusted certificates, you can execute a -printcert command to view its fingerprints, as follows: keytool -printcert -file /tmp/cert Owner: CN=ll, OU=ll, O=ll, L=ll, S=ll, C=ll Issuer: CN=ll, OU=ll, O=ll, L=ll, S=ll, C=ll Serial Number: 59092b34 Valid from: Thu Jun 24 18:01:13 PDT 2016 until: Wed Jun 23 17:01:13 PST 2016 Certificate Fingerprints: SHA-1: 20:B6:17:FA:EF:E5:55:8A:D0:71:1F:E8:D6:9D:C0:37:13:0E:5E:FE SHA-256: 90:7B:70:0A:EA:DC:16:79:92:99:41:FF:8A:FE:EB:90: 17:75:E0:90:B2:24:4D:3A:2A:16:A6:E4:11:0F:67:A4 Then call or otherwise contact the person who sent the certificate and compare the fingerprints that you see with the ones that they show. Only when the fingerprints are equal is it guaranteed that the certificate wasn't replaced in transit with somebody else's certificate such as an attacker's certificate. If such an attack took place, and you didn't check the certificate before you imported it, then you would be trusting anything the attacker signed, for example, a JAR file with malicious class files inside. 
Note: It isn't required that you execute a -printcert command before importing a certificate. This is because before you add a certificate to the list of trusted certificates in the keystore, the -importcert command prints out the certificate information and prompts you to verify it. You can then stop the import operation. However, you can do this only when you call the -importcert command without the -noprompt option. If the -noprompt option is specified, then there is no interaction with the user. PASSWORDS WARNING Most commands that operate on a keystore require the store password. Some commands require a private/secret key password. Passwords can be specified on the command line in the -storepass and -keypass options. However, a password shouldn't be specified on a command line or in a script unless it is for testing, or you are on a secure system. When you don't specify a required password option on a command line, you are prompted for it. CERTIFICATE CONFORMANCE WARNING Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile [https://tools.ietf.org/rfc/rfc5280.txt] defines a profile for conforming X.509 certificates, which includes what values and value combinations are valid for certificate fields and extensions. The keytool command doesn't enforce all of these rules so it can generate certificates that don't conform to the standard, such as self-signed certificates that would be used for internal testing purposes. Certificates that don't conform to the standard might be rejected by the JDK or other applications. Users should ensure that they provide the correct options for -dname, -ext, and so on. IMPORT A NEW TRUSTED CERTIFICATE Before you add the certificate to the keystore, the keytool command verifies it by attempting to construct a chain of trust from that certificate to a self-signed certificate (belonging to a root CA), using trusted certificates that are already available in the keystore. 
If the -trustcacerts option was specified, then additional certificates are considered for the chain of trust, namely the certificates in a file named cacerts. If the keytool command fails to establish a trust path from the certificate to be imported up to a self-signed certificate (either from the keystore or the cacerts file), then the certificate information is printed, and the user is prompted to verify it by comparing the displayed certificate fingerprints with the fingerprints obtained from some other (trusted) source of information, which might be the certificate owner. Be very careful to ensure the certificate is valid before importing it as a trusted certificate. The user then has the option of stopping the import operation. If the -noprompt option is specified, then there is no interaction with the user. IMPORT A CERTIFICATE REPLY When you import a certificate reply, the certificate reply is validated with trusted certificates from the keystore, and optionally, the certificates configured in the cacerts keystore file when the -trustcacerts option is specified. The methods of determining whether the certificate reply is trusted are as follows: • If the reply is a single X.509 certificate, then the keytool command attempts to establish a trust chain, starting at the certificate reply and ending at a self-signed certificate (belonging to a root CA). The certificate reply and the hierarchy of certificates used to authenticate it form the new certificate chain of the alias. If a trust chain can't be established, then the certificate reply isn't imported. In this case, the keytool command doesn't print the certificate and prompt the user to verify it, because it is very difficult for a user to determine the authenticity of the certificate reply. • If the reply is a PKCS #7 formatted certificate chain or a sequence of X.509 certificates, then the chain is ordered with the user certificate first followed by zero or more CA certificates. 
If the chain ends with a self-signed root CA certificate and the -trustcacerts option was specified, the keytool command attempts to match it with any of the trusted certificates in the keystore or the cacerts keystore file. If the chain doesn't end with a self-signed root CA certificate and the -trustcacerts option was specified, the keytool command tries to find one from the trusted certificates in the keystore or the cacerts keystore file and add it to the end of the chain. If the certificate isn't found and the -noprompt option isn't specified, the information of the last certificate in the chain is printed, and the user is prompted to verify it. If the public key in the certificate reply matches the user's public key already stored with alias, then the old certificate chain is replaced with the new certificate chain in the reply. The old chain can only be replaced with a valid keypass, and so the password used to protect the private key of the entry is supplied. If no password is provided, and the private key password is different from the keystore password, the user is prompted for it. This command was named -import in earlier releases. This old name is still supported in this release. The new name, -importcert, is preferred. JDK 22 2024 KEYTOOL(1)
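The import behaviors described above can be exercised with invocations like the following sketch; the keystore, alias, and file names are illustrative, not taken from this page.

```shell
# Import a CA certificate as a trusted entry. Unless -noprompt is
# given, keytool prints the certificate and asks for confirmation.
keytool -importcert -trustcacerts -alias rootca -file rootca.pem \
    -keystore keystore.p12 -storepass changeit

# Import a certificate reply (a single certificate or a PKCS #7 chain)
# for the key pair already stored under the same alias.
keytool -importcert -trustcacerts -alias mykey -file reply.p7b \
    -keystore keystore.p12 -storepass changeit
```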
keytool - a key and certificate management utility
keytool [commands] commands Commands for keytool include the following: • -certreq: Generates a certificate request • -changealias: Changes an entry's alias • -delete: Deletes an entry • -exportcert: Exports certificate • -genkeypair: Generates a key pair • -genseckey: Generates a secret key • -gencert: Generates a certificate from a certificate request • -importcert: Imports a certificate or a certificate chain • -importpass: Imports a password • -importkeystore: Imports one or all entries from another keystore • -keypasswd: Changes the key password of an entry • -list: Lists entries in a keystore • -printcert: Prints the content of a certificate • -printcertreq: Prints the content of a certificate request • -printcrl: Prints the content of a Certificate Revocation List (CRL) file • -storepasswd: Changes the store password of a keystore • -showinfo: Displays security-related information • -version: Prints the program version See Commands and Options for a description of these commands with their options.
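As a sketch of how several of these commands combine in a typical workflow (the alias, distinguished name, and file names are illustrative):

```shell
# Generate an RSA key pair, create a certificate signing request,
# then list the keystore contents.
keytool -genkeypair -alias mykey -keyalg RSA -keysize 2048 \
    -dname "CN=example.com, O=Example, C=US" \
    -keystore keystore.p12 -storepass changeit
keytool -certreq -alias mykey -file mykey.csr \
    -keystore keystore.p12 -storepass changeit
keytool -list -v -keystore keystore.p12 -storepass changeit
```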
tidy_changelog
Takes a changelog file, parses it using CPAN::Changes, and prints out the resulting output. If a file is not given, the program will see if there is one file in the current directory beginning with 'change' (case- insensitive) and, if so, assume it to be the changelog. ARGUMENTS --next If provided, assumes that there is a placeholder header for an upcoming next release. The placeholder token is given via --token. --token Regular expression used to detect the token for an upcoming release if --next is used. If not explicitly given, defaults to "\{\{\$NEXT\}\}". --headers If given, only print out the release header lines, without any of the changes. --reverse Prints the releases in reverse order (from oldest to latest). --check Only check whether the changelog is formatted properly, using the changes_file_ok function of Test::CPAN::Changes. --help Display this help message. perl v5.34.0 2014-10-10 TIDY_CHANGELOG(1)
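For example (the file name Changes is illustrative; with no argument the changelog is auto-detected as described above):

```shell
# Reformat the changelog found in the current directory.
tidy_changelog

# Only verify formatting, treating the {{$NEXT}} placeholder
# as the header of an upcoming release.
tidy_changelog --check --next Changes
```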
tidy_changelog - command-line tool for CPAN::Changes
$ tidy_changelog Changelog
fileproviderctl
fileproviderctl allows you to control the fileproviderd daemon and enumerate and manipulate files. GENERAL OPTIONS The following commands take parameters of the following forms: <provider> a (partial) provider identifier <bookmark> a string of the format “fileprovider:<provider bundle identifier>/<domain identifier>/<item identifier>” <item> a file URL, path or bookmark <item id> a simple item identifier (for commands where the provider is already otherwise specified) -h, --help Display a friendly help message. enumerate, ls <provider> Runs an interactive enumeration of the specified provider. You can press Ctrl-C to enter an interactive command line which allows you to execute commands on enumerated items. materialize <item> Causes the specified item to be written on disk, and lists the resulting contents. validate <provider> Runs the validation suite against the specified provider. dump Dumps the state of the file provider subsystem, the sync engine and all providers. SEE ALSO fpck(1) fileproviderctl(1)
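A few representative invocations, sketched with an illustrative provider identifier and path:

```shell
# Interactively enumerate a provider (partial identifiers are accepted).
fileproviderctl enumerate com.example.provider

# Force an item to be materialized on disk and list the result.
fileproviderctl materialize ~/Library/CloudStorage/Example/notes.txt

# Dump the state of the sync engine and all providers.
fileproviderctl dump
```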
fileproviderctl - introspect file provider extensions
fileproviderctl <command> [command-options and arguments]
binhex.pl
Each file is converted to file.hqx. WARNINGS Largely untested. AUTHOR Paul J. Schinder (NASA/GSFC) mostly, though Eryq can't seem to keep his grubby paws off anything... perl v5.34.0 2015-11-15 BINHEX(1)
binhex.pl - use Convert::BinHex to encode files as BinHex

Usage: binhex.pl [options] file ...

Where the options are:

    -o dir   Output in the given directory (default outputs in the file's directory)
    -v       Verbose output (normally just one line per file is shown)
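For example (file names are illustrative):

```shell
# Encode one file; the .hqx output is written to the input file's directory.
binhex.pl picture.tiff

# Encode several files verbosely into a separate output directory.
binhex.pl -v -o encoded report.doc archive.sit
```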
IOSDebug
dtruss
dtruss prints details on process system calls. It is like a DTrace version of truss, and has been designed to be less intrusive than truss. Of particular interest are the elapsed times and on-CPU times, which can identify both system calls that are slow to complete, and those which are consuming CPU cycles. Since this uses DTrace, only users with root privileges can run this command.
dtruss - process syscall details. Uses DTrace.
dtruss [-acdeflhoLs] [-t syscall] { -p PID | -n name | command }
-a print all details -b bufsize dynamic variable buffer size. Increase this if you notice dynamic variable drop errors. The default is "4m" for 4 megabytes per CPU. -c print system call counts -d print relative timestamps, us -e print elapsed times, us -f follow children as they are forked -l force printing of pid/lwpid per line -L don't print pid/lwpid per line -n name examine processes with this name -W name wait for a process matching this name -o print on-cpu times, us -s print stack backtraces -p PID examine this PID -t syscall examine this syscall only
run and examine the "df -h" command # dtruss df -h examine PID 1871 # dtruss -p 1871 examine all processes called "tar" # dtruss -n tar run test.sh and follow children # dtruss -f test.sh run the "date" command and print elapsed and on-CPU times # dtruss -eo date FIELDS PID/LWPID Process ID / Lightweight Process ID RELATIVE relative timestamps to the start of the thread, us (microseconds) ELAPSD elapsed time for this system call, us CPU on-cpu time for this system call, us SYSCALL(args) system call name, with arguments (some may be evaluated) DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT dtruss runs until Ctrl-C is pressed; if a command was executed, dtruss finishes when the command ends. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO procsystime(1M), dtrace(1M), truss(1) version 0.80 June 17, 2005 dtruss(1m)
dserr
dserr prints a description for an error code. Mac OS X Server 13 April 2005
dserr – prints a description for an error code.
dserr errcode
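For example, to look up a Directory Services error code (the code shown is illustrative):

```shell
dserr -14090
```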
tiff2icns
tiff2icns can be used to convert TIFF images to 'icns' files used by Icon Services. It searches for images 48, 32, 16, 128, 256, 512, and 1024 pixels wide (in that order) and converts them to icons. It doesn't do any scaling and doesn't generate mini (12x12) icons. If no 32x32 image was found, and the -noLarge flag was not passed, one is generated by scaling the image. If no output file is specified, output is written to a file whose name is the input file name, with the original extension replaced by '.icns'.
tiff2icns – converts TIFF to icns format
tiff2icns [-noLarge] infile [outfile]
-noLarge Do not create the large 32x32 icon, which tiff2icns otherwise always creates. macOS August 28, 2002
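For example (file names are illustrative):

```shell
# Convert icon.tiff; output is written to icon.icns.
tiff2icns icon.tiff

# Skip the 32x32 icon and name the output file explicitly.
tiff2icns -noLarge icon.tiff MyIcon.icns
```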
mandoc
The mandoc utility formats manual pages for display. By default, mandoc reads mdoc(7) or man(7) text from stdin and produces -T locale output. The options are as follows: -a If the standard output is a terminal device and -c is not specified, use less(1) to paginate the output, just like man(1) would. -c Copy the formatted manual pages to the standard output without using less(1) to paginate them. This is the default. It can be specified to override -a. -I os=name Override the default operating system name for the mdoc(7) Os and for the man(7) TH macro. -K encoding Specify the input encoding. The supported encoding arguments are us-ascii, iso-8859-1, and utf-8. If not specified, autodetection uses the first match in the following list: 1. If the first three bytes of the input file are the UTF-8 byte order mark (BOM, 0xefbbbf), input is interpreted as utf-8. 2. If the first or second line of the input file matches the emacs mode line format .\" -*- [...;] coding: encoding; -*- then input is interpreted according to encoding. 3. If the first non-ASCII byte in the file introduces a valid UTF-8 sequence, input is interpreted as utf-8. 4. Otherwise, input is interpreted as iso-8859-1. -mdoc | -man With -mdoc, all input files are interpreted as mdoc(7). With -man, all input files are interpreted as man(7). By default, the input language is automatically detected for each file: if the first macro is Dd or Dt, the mdoc(7) parser is used; otherwise, the man(7) parser is used. With other arguments, -m is silently ignored. -O options Comma-separated output options. See the descriptions of the individual output formats for supported options. -T output Select the output format. Supported values for the output argument are ascii, html, the default of locale, man, markdown, pdf, ps, tree, and utf8. The special -T lint mode only parses the input and produces no output. 
It implies -W all and redirects parser messages, which usually appear on standard error output, to standard output. -W level Specify the minimum message level to be reported on the standard error output and to affect the exit status. The level can be base, style, warning, error, or unsupp. The base level automatically derives the operating system from the contents of the Os macro, from the -Ios command line option, or from the uname(3) return value. The levels openbsd and netbsd are variants of base that bypass autodetection and request validation of base system conventions for a particular operating system. The level all is an alias for base. By default, mandoc is silent. See EXIT STATUS and DIAGNOSTICS for details. The special option -W stop tells mandoc to exit after parsing a file that causes warnings or errors of at least the requested level. No formatted output will be produced from that file. If both a level and stop are requested, they can be joined with a comma, for example -W error,stop. file Read from the given input file. If multiple files are specified, they are processed in the given order. If unspecified, mandoc reads from standard input. The options -fhklw are also supported and are documented in man(1). In -f and -k mode, mandoc also supports the options -CMmOSs described in the apropos(1) manual. The options -fkl are mutually exclusive and override each other. ASCII Output Use -T ascii to force text output in 7-bit ASCII character encoding documented in the ascii(7) manual page, ignoring the locale(1) set in the environment. Font styles are applied by using back-spaced encoding such that an underlined character ‘c’ is rendered as ‘_\[bs]c’, where ‘\[bs]’ is the back-space character number 8. Emboldened characters are rendered as ‘c\[bs]c’. This markup is typically converted to appropriate terminal sequences by the pager or ul(1). To remove the markup, pipe the output to col(1) -b instead. 
The special characters documented in mandoc_char(7) are rendered best-effort in an ASCII equivalent. In particular, opening and closing ‘single quotes’ are represented as characters number 0x60 and 0x27, respectively, which agrees with all ASCII standards from 1965 to the latest revision (2012) and which matches the traditional way in which roff(7) formatters represent single quotes in ASCII output. This correct ASCII rendering may look strange with modern Unicode-compatible fonts because contrary to ASCII, Unicode uses the code point U+0060 for the grave accent only, never for an opening quote. The following -O arguments are accepted: indent=indent The left margin for normal text is set to indent blank characters instead of the default of five for mdoc(7) and seven for man(7). Increasing this is not recommended; it may result in degraded formatting, for example overfull lines or ugly line breaks. When output is to a pager on a terminal that is less than 66 columns wide, the default is reduced to three columns. mdoc Format man(7) input files in mdoc(7) output style. This prints the operating system name rather than the page title on the right side of the footer line, and it implies -O indent=5. One useful application is for checking that -T man output formats in the same way as the mdoc(7) source it was generated from. tag[=term] If the formatted manual page is opened in a pager, go to the definition of the term rather than showing the manual page from the beginning. If no term is specified, reuse the first command line argument that is not a section number. If that argument is in apropos(1) key=val format, only the val is used rather than the argument as a whole. This is useful for commands like ‘man -akO tag Ic=ulimit’ to search for a keyword and jump right to its definition in the matching manual pages. width=width The output width is set to width instead of the default of 78. 
When output is to a pager on a terminal that is less than 79 columns wide, the default is reduced to one less than the terminal width. In any case, lines that are output in literal mode are never wrapped and may exceed the output width. HTML Output Output produced by -T html conforms to HTML5 using optional self-closing tags. Default styles use only CSS1. Equations rendered from eqn(7) blocks use MathML. The file /usr/share/misc/mandoc.css documents style-sheet classes available for customising output. If a style-sheet is not specified with -O style, -T html defaults to simple output (via an embedded style-sheet) readable in any graphical or text-based web browser. Non-ASCII characters are rendered as hexadecimal Unicode character references. The following -O arguments are accepted: fragment Omit the <!DOCTYPE> declaration and the <html>, <head>, and <body> elements and only emit the subtree below the <body> element. The style argument will be ignored. This is useful when embedding manual content within existing documents. includes=fmt The string fmt, for example, ../src/%I.html, is used as a template for linked header files (usually via the In macro). Instances of ‘%I’ are replaced with the include filename. The default is not to present a hyperlink. man=fmt[;fmt] The string fmt, for example, ../html%S/%N.%S.html, is used as a template for linked manuals (usually via the Xr macro). Instances of ‘%N’ and ‘%S’ are replaced with the linked manual's name and section, respectively. If no section is included, section 1 is assumed. The default is not to present a hyperlink. If two formats are given and a file %N.%S exists in the current directory, the first format is used; otherwise, the second format is used. style=style.css The file style.css is used for an external style-sheet. This must be a valid absolute or relative URI. tag[=term] Same syntax and semantics as for ASCII Output. 
This is implemented by passing a file:// URI ending in a fragment identifier to the pager rather than passing merely a file name. When using this argument, use a pager supporting such URIs, for example:

    MANPAGER='lynx -force_html' man -T html -O tag=MANPAGER man
    MANPAGER='w3m -T text/html' man -T html -O tag=toc mandoc

Consequently, for HTML output, this argument does not work with more(1) or less(1). For example, ‘MANPAGER=less man -T html -O tag=toc mandoc’ does not work because less(1) does not support file:// URIs. toc If an input file contains at least two non-standard sections, print a table of contents near the beginning of the output. Locale Output By default, mandoc automatically selects UTF-8 or ASCII output according to the current locale(1). If any of the environment variables LC_ALL, LC_CTYPE, or LANG are set and the first one that is set selects the UTF-8 character encoding, it produces UTF-8 Output; otherwise, it falls back to ASCII Output. This output mode can also be selected explicitly with -T locale. Man Output Use -T man to translate mdoc(7) input into man(7) output format. This is useful for distributing manual sources to legacy systems lacking mdoc(7) formatters. Embedded eqn(7) and tbl(7) code is not supported. If the input format of a file is man(7), the input is copied to the output. The parser is also run, and as usual, the -W level controls which DIAGNOSTICS are displayed before copying the input to the output. Markdown Output Use -T markdown to translate mdoc(7) input to the markdown format conforming to John Gruber's 2004 specification: http://daringfireball.net/projects/markdown/syntax.text. The output also almost conforms to the CommonMark: http://commonmark.org/ specification. The character set used for the markdown output is ASCII. Non-ASCII characters are encoded as HTML entities. 
Since that is not possible in literal font contexts, because these are rendered as code spans and code blocks in the markdown output, non-ASCII characters are transliterated to ASCII approximations in these contexts. Markdown is a very weak markup language, so all semantic markup is lost, and even part of the presentational markup may be lost. Do not use this as an intermediate step in converting to HTML; instead, use -T html directly. The man(7), tbl(7), and eqn(7) input languages are not supported by -T markdown output mode. PDF Output PDF-1.1 output may be generated by -T pdf. See PostScript Output for -O arguments and defaults. PostScript Output PostScript "Adobe-3.0" Level-2 pages may be generated by -T ps. Output pages default to letter sized and are rendered in the Times font family, 11-point. Margins are calculated as 1/9 the page length and width. Line-height is 1.4m. Special characters are rendered as in ASCII Output. The following -O arguments are accepted: paper=name The paper size name may be one of a3, a4, a5, legal, or letter. You may also manually specify dimensions as NNxNN, width by height in millimetres. If an unknown value is encountered, letter is used. UTF-8 Output Use -T utf8 to force text output in UTF-8 multi-byte character encoding, ignoring the locale(1) settings in the environment. See ASCII Output regarding font styles and -O arguments. On operating systems lacking locale or wide character support, and on those where the internal character representation is not UCS-4, mandoc always falls back to ASCII Output. Syntax tree output Use -T tree to show a human readable representation of the syntax tree. It is useful for debugging the source code of manual pages. The exact format is subject to change, so don't write parsers for it. The first paragraph shows meta data found in the mdoc(7) prologue, on the man(7) TH line, or the fallbacks used. In the tree dump, each output line shows one syntax tree node. 
Child nodes are indented with respect to their parent node. The columns are: 1. For macro nodes, the macro name; for text and tbl(7) nodes, the content. There is a special format for eqn(7) nodes. 2. Node type (text, elem, block, head, body, body-end, tail, tbl, eqn). 3. Flags: - An opening parenthesis if the node is an opening delimiter. - An asterisk if the node starts a new input line. - The input line number (starting at one). - A colon. - The input column number (starting at one). - A closing parenthesis if the node is a closing delimiter. - A full stop if the node ends a sentence. - BROKEN if the node is a block broken by another block. - NOSRC if the node is not in the input file, but automatically generated from macros. - NOPRT if the node is not supposed to generate output for any output format. The following -O argument is accepted: noval Skip validation and show the unvalidated syntax tree. This can help to find out whether a given behaviour is caused by the parser or by the validator. Meta data is not available in this case. ENVIRONMENT LC_CTYPE The character encoding locale(1). When Locale Output is selected, it decides whether to use ASCII or UTF-8 output format. It never affects the interpretation of input files. MANPAGER Any non-empty value of the environment variable MANPAGER is used instead of the standard pagination program, less(1); see man(1) for details. Only used if -a or -l is specified. PAGER Specifies the pagination program to use when MANPAGER is not defined. If neither PAGER nor MANPAGER is defined, less(1) is used. Only used if -a or -l is specified. EXIT STATUS The mandoc utility exits with one of the following values, controlled by the message level associated with the -W option: 0 No base system convention violations, style suggestions, warnings, or errors occurred, or those that did were ignored because they were lower than the requested level. 
1 At least one base system convention violation or style suggestion occurred, but no warning or error, and -W base or -W style was specified. 2 At least one warning occurred, but no error, and -W warning or a lower level was requested. 3 At least one parsing error occurred, but no unsupported feature was encountered, and -W error or a lower level was requested. 4 At least one unsupported feature was encountered, and -W unsupp or a lower level was requested. 5 Invalid command line arguments were specified. No input files have been read. 6 An operating system error occurred, for example exhaustion of memory, file descriptors, or process table entries. Such errors may cause mandoc to exit at once, possibly in the middle of parsing or formatting a file. Note that selecting -T lint output mode implies -W all.
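The interplay between -W and the exit status can be sketched as follows (foo.1 is an illustrative file name):

```shell
# Parse only, reporting all message levels on standard output.
mandoc -T lint foo.1

# Render to UTF-8 text, reporting warnings and worse; the exit
# status then reflects the worst message level encountered
# (e.g. 2 if only warnings occurred).
mandoc -T utf8 -W warning foo.1 > foo.txt; echo "exit status: $?"
```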
mandoc – format manual pages
mandoc [-ac] [-I os=name] [-K encoding] [-mdoc | -man] [-O options] [-T output] [-W level] [file ...]
To page manuals to the terminal: $ mandoc -l mandoc.1 man.1 apropos.1 makewhatis.8 To produce HTML manuals with /usr/share/misc/mandoc.css as the style- sheet: $ mandoc -T html -O style=/usr/share/misc/mandoc.css mdoc.7 > mdoc.7.html To check over a large set of manuals: $ mandoc -T lint `find /usr/src -name \*\.[1-9]` To produce a series of PostScript manuals for A4 paper: $ mandoc -T ps -O paper=a4 mdoc.7 man.7 > manuals.ps Convert a modern mdoc(7) manual to the older man(7) format, for use on systems lacking an mdoc(7) parser: $ mandoc -T man foo.mdoc > foo.man DIAGNOSTICS Messages displayed by mandoc follow this format: mandoc: file:line:column: level: message: macro arguments (os) The first three fields identify the file name, line number, and column number of the input file where the message was triggered. The line and column numbers start at 1. Both are omitted for messages referring to an input file as a whole. All level and message strings are explained below. The name of the macro triggering the message and its arguments are omitted where meaningless. The os operating system specifier is omitted for messages that are relevant for all operating systems. Fatal messages about invalid command line arguments or operating system errors, for example when memory is exhausted, may also omit the file and level fields. Message levels have the following meanings: syserr An operating system error occurred. There isn't necessarily anything wrong with the input files. Output may all the same be missing or incomplete. badarg Invalid command line arguments were specified. No input files have been read and no output is produced. unsupp An input file uses unsupported low-level roff(7) features. The output may be incomplete and/or misformatted, so using GNU troff instead of mandoc to process the file may be preferable. error Indicates a risk of information loss or severe misformatting, in most cases caused by serious syntax errors. 
warning Indicates a risk that the information shown or its formatting may mismatch the author's intent in minor ways. Additionally, syntax errors are classified at least as warnings, even if they do not usually cause misformatting. style An input file uses dubious or discouraged style. This is not a complaint about the syntax, and probably neither formatting nor portability are in danger. While great care is taken to avoid false positives on the higher message levels, the style level tries to reduce the probability that issues go unnoticed, so it may occasionally issue bogus suggestions. Please use your good judgement to decide whether any particular style suggestion really justifies a change to the input file. base A convention used in the base system of a specific operating system is not adhered to. These are not markup mistakes, and neither the quality of formatting nor portability are in danger. Messages of the base level are printed with the more intuitive style level tag. Messages of the base, style, warning, error, and unsupp levels are hidden unless their level, or a lower level, is requested using a -W option or -T lint output mode. As indicated below, all base and some style checks are only performed if a specific operating system name occurs in the arguments of the -W command line option, of the Os macro, of the -Ios command line option, or, if neither are present, in the return value of the uname(3) function. Conventions for base system manuals Mdocdate found (mdoc, NetBSD) The Dd macro uses CVS Mdocdate keyword substitution, which is not supported by the NetBSD base system. Consider using the conventional “Month dd, yyyy” format instead. Mdocdate missing (mdoc, OpenBSD) The Dd macro does not use CVS Mdocdate keyword substitution, but using it is conventionally expected in the OpenBSD base system. unknown architecture (mdoc, OpenBSD, NetBSD) The third argument of the Dt macro does not match any of the architectures this operating system is running on. 
operating system explicitly specified (mdoc, OpenBSD, NetBSD) The Os macro has an argument. In the base system, it is conventionally left blank. RCS id missing (OpenBSD, NetBSD) The manual page lacks the comment line with the RCS identifier generated by CVS OpenBSD or NetBSD keyword substitution as conventionally used in these operating systems. Style suggestions legacy man(7) date format (mdoc) The Dd macro uses the legacy man(7) date format “yyyy-dd-mm”. Consider using the conventional mdoc(7) date format “Month dd, yyyy” instead. normalizing date format to: ... (mdoc, man) The Dd or TH macro provides an abbreviated month name or a day number with a leading zero. In the formatted output, the month name is written out in full and the leading zero is omitted. lower case character in document title (mdoc, man) The title is still used as given in the Dt or TH macro. duplicate RCS id A single manual page contains two copies of the RCS identifier for the same operating system. Consider deleting the later instance and moving the first one up to the top of the page. possible typo in section name (mdoc) Fuzzy string matching revealed that the argument of an Sh macro is similar, but not identical to a standard section name. unterminated quoted argument (roff) Macro arguments can be enclosed in double quote characters such that space characters and macro names contained in the quoted argument need not be escaped. The closing quote of the last argument of a macro can be omitted. However, omitting it is not recommended because it makes the code harder to read. useless macro (mdoc) A Bt, Tn, or Ud macro was found. Simply delete it: it serves no useful purpose. consider using OS macro (mdoc) A string was found in plain text or in a Bx macro that could be represented using Ox, Nx, Fx, or Dx. errnos out of order (mdoc, NetBSD) The Er items in a Bl list are not in alphabetical order. 
duplicate errno (mdoc, NetBSD) A Bl list contains two consecutive It entries describing the same Er number. referenced manual not found (mdoc) An Xr macro references a manual page that was not found. When running with -W base, the search is restricted to the base system, by default to /usr/share/man:/usr/X11R6/man. This path can be configured at compile time using the MANPATH_BASE preprocessor macro. When running with -W style, the search is done along the full search path as described in the man(1) manual page, respecting the -m and -M command line options, the MANPATH environment variable, the man.conf(5) file and falling back to the default of /usr/share/man:/usr/X11R6/man:/usr/local/man, also configurable at compile time using the MANPATH_DEFAULT preprocessor macro. trailing delimiter (mdoc) The last argument of an Ex, Fo, Nd, Nm, Os, Sh, Ss, St, or Sx macro ends with a trailing delimiter. This is usually bad style and often indicates typos. Most likely, the delimiter can be removed. no blank before trailing delimiter (mdoc) The last argument of a macro that supports trailing delimiter arguments is longer than one byte and ends with a trailing delimiter. Consider inserting a blank such that the delimiter becomes a separate argument, thus moving it out of the scope of the macro. fill mode already enabled, skipping (man) A fi request occurs even though the document is still in fill mode, or already switched back to fill mode. It has no effect. fill mode already disabled, skipping (man) An nf request occurs even though the document already switched to no-fill mode and did not switch back to fill mode yet. It has no effect. input text line longer than 80 bytes Consider breaking the input text line at one of the blank characters before column 80. verbatim "--", maybe consider using \(em (mdoc) Even though the ASCII output device renders an em-dash as "--", that is not a good way to write it in an input file because it renders poorly on all other output devices. 
function name without markup (mdoc) A word followed by an empty pair of parentheses occurs on a text line. Consider using an Fn or Xr macro. whitespace at end of input line (mdoc, man, roff) Whitespace at the end of input lines is almost never semantically significant — but in the odd case where it might be, it is extremely confusing when reviewing and maintaining documents. bad comment style (roff) Comment lines start with a dot, a backslash, and a double-quote character. The mandoc utility treats the line as a comment line even without the backslash, but leaving out the backslash might not be portable. Warnings related to the document prologue missing manual title, using UNTITLED (mdoc) A Dt macro has no arguments, or there is no Dt macro before the first non-prologue macro. missing manual title, using "" (man) There is no TH macro, or it has no arguments. missing manual section, using "" (mdoc, man) A Dt or TH macro lacks the mandatory section argument. unknown manual section (mdoc) The section number in a Dt line is invalid, but still used. filename/section mismatch (mdoc, man) The name of the input file being processed is known and its file name extension starts with a non-zero digit, but the Dt or TH macro contains a section argument that starts with a different non-zero digit. The section argument is used as provided anyway. Consider checking whether the file name or the argument need a correction. missing date, using "" (mdoc, man) The document was parsed as mdoc(7) and it has no Dd macro, or the Dd macro has no arguments or only empty arguments; or the document was parsed as man(7) and it has no TH macro, or the TH macro has less than three arguments or its third argument is empty. cannot parse date, using it verbatim (mdoc, man) The date given in a Dd or TH macro does not follow the conventional format. date in the future, using it anyway (mdoc, man) The date given in a Dd or TH macro is more than a day ahead of the current system time(3). 
missing Os macro, using "" (mdoc) The default or current system is not shown in this case. late prologue macro (mdoc) A Dd or Os macro occurs after some non-prologue macro, but still takes effect. prologue macros out of order (mdoc) The prologue macros are not given in the conventional order Dd, Dt, Os. All three macros are used even when given in another order. Warnings regarding document structure .so is fragile, better use ln(1) (roff) Including files only works when the parser program runs with the correct current working directory. no document body (mdoc, man) The document body contains neither text nor macros. An empty document is shown, consisting only of a header and a footer line. content before first section header (mdoc, man) Some macros or text precede the first Sh or SH section header. The offending macros and text are parsed and added to the top level of the syntax tree, outside any section block. first section is not NAME (mdoc) The argument of the first Sh macro is not ‘NAME’. This may confuse makewhatis(8) and apropos(1). NAME section without Nm before Nd (mdoc) The NAME section does not contain any Nm child macro before the first Nd macro. NAME section without description (mdoc) The NAME section lacks the mandatory Nd child macro. description not at the end of NAME (mdoc) The NAME section does contain an Nd child macro, but other content follows it. bad NAME section content (mdoc) The NAME section contains plain text or macros other than Nm and Nd. missing comma before name (mdoc) The NAME section contains an Nm macro that is neither the first one nor preceded by a comma. missing description line, using "" (mdoc) The Nd macro lacks the required argument. The title line of the manual will end after the dash. description line outside NAME section (mdoc) An Nd macro appears outside the NAME section. The arguments are printed anyway and the following text is used for apropos(1), but none of that behaviour is portable. 
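For reference, a minimal mdoc(7) prologue and NAME section that avoids the prologue and NAME warnings described above could look like the following sketch (page name, section, and date are hypothetical):

```roff
.\" Conventional prologue order: Dd, Dt, Os.
.Dd August 14, 2021
.Dt EXAMPLE 1
.Os
.Sh NAME
.Nm example
.Nd brief one-line description
```

The Nm macro precedes Nd, and Nd ends the NAME section, satisfying the NAME-section checks listed above.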
sections out of conventional order (mdoc) A standard section occurs after another section it usually precedes. All section titles are used as given, and the order of sections is not changed. duplicate section title (mdoc) The same standard section title occurs more than once. unexpected section (mdoc) A standard section header occurs in a section of the manual where it normally isn't useful. cross reference to self (mdoc) An Xr macro refers to a name and section matching the section of the present manual page and a name mentioned in an Nm macro in the NAME or SYNOPSIS section, or in an Fn or Fo macro in the SYNOPSIS. Consider using Nm or Fn instead of Xr. unusual Xr order (mdoc) In the SEE ALSO section, an Xr macro with a lower section number follows one with a higher number, or two Xr macros referring to the same section are out of alphabetical order. unusual Xr punctuation (mdoc) In the SEE ALSO section, punctuation between two Xr macros differs from a single comma, or there is trailing punctuation after the last Xr macro. AUTHORS section without An macro (mdoc) An AUTHORS section contains no An macros, or only empty ones. Probably, there are author names lacking markup. Warnings related to macros and nesting obsolete macro (mdoc) See the mdoc(7) manual for replacements. macro neither callable nor escaped (mdoc) The name of a macro that is not callable appears on a macro line. It is printed verbatim. If the intention is to call it, move it to its own input line; otherwise, escape it by prepending ‘\&’. skipping paragraph macro In mdoc(7) documents, this happens - at the beginning and end of sections and subsections - right before non-compact lists and displays - at the end of items in non-column, non-compact lists - and for multiple consecutive paragraph macros. 
In man(7) documents, it happens - for empty P, PP, and LP macros - for IP macros having neither head nor body arguments - for br or sp right after SH or SS moving paragraph macro out of list (mdoc) A list item in a Bl list contains a trailing paragraph macro. The paragraph macro is moved after the end of the list. skipping no-space macro (mdoc) An input line begins with an Ns macro, or the next argument after an Ns macro is an isolated closing delimiter. The macro is ignored. blocks badly nested (mdoc) If two blocks intersect, one should completely contain the other. Otherwise, rendered output is likely to look strange in any output format, and rendering in SGML-based output formats is likely to be outright wrong because such languages do not support badly nested blocks at all. Typical examples of badly nested blocks are "Ao Bo Ac Bc" and "Ao Bq Ac". In these examples, Ac breaks Bo and Bq, respectively. nested displays are not portable (mdoc) A Bd, D1, or Dl display occurs nested inside another Bd display. This works with mandoc, but fails with most other implementations. moving content out of list (mdoc) A Bl list block contains text or macros before the first It macro. The offending children are moved before the beginning of the list. first macro on line Inside a Bl -column list, a Ta macro occurs as the first macro on a line, which is not portable. line scope broken (man) While parsing the next-line scope of the previous macro, another macro is found that prematurely terminates the previous one. The previous, interrupted macro is deleted from the parse tree. Warnings related to missing arguments skipping empty request (roff, eqn) The macro name is missing from a macro definition request, or an eqn(7) control statement or operation keyword lacks its required argument. conditional request controls empty scope (roff) A conditional request is only useful if any of the following follows it on the same logical input line: - The ‘\{’ keyword to open a multi-line scope. 
- A request or macro or some text, resulting in a single-line scope. - The immediate end of the logical line without any intervening whitespace, resulting in next-line scope. Here, a conditional request is followed by trailing whitespace only, and there is no other content on its logical input line. Note that it doesn't matter whether the logical input line is split across multiple physical input lines using ‘\’ line continuation characters. This is one of the rare cases where trailing whitespace is syntactically significant. The conditional request controls a scope containing whitespace only, so it is unlikely to have a significant effect, except that it may control a following el clause. skipping empty macro (mdoc) The indicated macro has no arguments and hence no effect. empty block (mdoc, man) A Bd, Bk, Bl, D1, Dl, MT, RS, or UR block contains nothing in its body and will produce no output. empty argument, using 0n (mdoc) The required width is missing after Bd or Bl -offset or -width. missing display type, using -ragged (mdoc) The Bd macro is invoked without the required display type. list type is not the first argument (mdoc) In a Bl macro, at least one other argument precedes the type argument. The mandoc utility copes with any argument order, but some other mdoc(7) implementations do not. missing -width in -tag list, using 8n (mdoc) Every Bl macro having the -tag argument requires -width, too. missing utility name, using "" (mdoc) The Ex -std macro is called without an argument before Nm has first been called with an argument. missing function name, using "" (mdoc) The Fo macro is called without an argument. No function name is printed. empty head in list item (mdoc) In a Bl -diag, -hang, -inset, -ohang, or -tag list, an It macro lacks the required argument. The item head is left empty. empty list item (mdoc) In a Bl -bullet, -dash, -enum, or -hyphen list, an It block is empty. An empty list item is shown. 
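Several of the Bl diagnostics above concern missing list types and missing -width arguments; a hypothetical -tag list that satisfies both checks:

```roff
.\" -tag lists require -width; "Ds" is a common width token.
.Bl -tag -width Ds
.It Fl v
Increase verbosity.
.It Fl q
Suppress all output.
.El
```

The list type (-tag) is given as the first argument for portability to other mdoc(7) implementations, as noted above.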
missing argument, using next line (mdoc) An It macro in a Bl -column list has no arguments. While mandoc uses the text or macros of the following line, if any, for the cell, other formatters may misformat the list. missing font type, using \fR (mdoc) A Bf macro has no argument. It switches to the default font. unknown font type, using \fR (mdoc) The Bf argument is invalid. The default font is used instead. nothing follows prefix (mdoc) A Pf macro has no argument, or only one argument and no macro follows on the same input line. This defeats its purpose; in particular, spacing is not suppressed before the text or macros following on the next input line. empty reference block (mdoc) An Rs macro is immediately followed by an Re macro on the next input line. Such an empty block does not produce any output. missing section argument (mdoc) An Xr macro lacks its second, section number argument. The first argument, i.e. the name, is printed, but without subsequent parentheses. missing -std argument, adding it (mdoc) An Ex or Rv macro lacks the required -std argument. The mandoc utility assumes -std even when it is not specified, but other implementations may not. missing option string, using "" (man) The OP macro is invoked without any argument. An empty pair of square brackets is shown. missing resource identifier, using "" (man) The MT or UR macro is invoked without any argument. An empty pair of angle brackets is shown. missing eqn box, using "" (eqn) A diacritic mark or a binary operator is found, but there is nothing to the left of it. An empty box is inserted. Warnings related to bad macro arguments duplicate argument (mdoc) A Bd or Bl macro has more than one -compact, more than one -offset, or more than one -width argument. All but the last instances of these arguments are ignored. skipping duplicate argument (mdoc) An An macro has more than one -split or -nosplit argument. All but the first of these arguments are ignored. 
skipping duplicate display type (mdoc) A Bd macro has more than one type argument; the first one is used. skipping duplicate list type (mdoc) A Bl macro has more than one type argument; the first one is used. skipping -width argument (mdoc) A Bl -column, -diag, -ohang, -inset, or -item list has a -width argument. That has no effect. wrong number of cells In a line of a Bl -column list, the number of tabs or Ta macros is less than the number expected from the list header line or exceeds the expected number by more than one. Missing cells remain empty, and all cells exceeding the number of columns are joined into one single cell. unknown AT&T UNIX version (mdoc) An At macro has an invalid argument. It is used verbatim, with "AT&T UNIX " prefixed to it. comma in function argument (mdoc) An argument of an Fa or Fn macro contains a comma; it should probably be split into two arguments. parenthesis in function name (mdoc) The first argument of an Fc or Fn macro contains an opening or closing parenthesis; that's probably wrong, parentheses are added automatically. unknown library name (mdoc, not on OpenBSD) An Lb macro has an unknown name argument and will be rendered as "library “name”". invalid content in Rs block (mdoc) An Rs block contains plain text or non-% macros. The bogus content is left in the syntax tree. Formatting may be poor. invalid Boolean argument (mdoc) An Sm macro has an argument other than on or off. The invalid argument is moved out of the macro, which leaves the macro empty, causing it to toggle the spacing mode. argument contains two font escapes (roff) The second argument of a char request contains more than one font escape sequence. A wrong font may remain active after using the character. unknown font, skipping request (man, tbl) A roff(7) ft request or a tbl(7) f layout modifier has an unknown font argument. odd number of characters in request (roff) A tr request contains an odd number of characters. 
The last character is mapped to the blank character. Warnings related to plain text blank line in fill mode, using .sp (mdoc) The meaning of blank input lines is only well-defined in non-fill mode: In fill mode, line breaks of text input lines are not supposed to be significant. However, for compatibility with groff, blank lines in fill mode are formatted like sp requests. To request a paragraph break, use Pp instead of a blank line. tab in filled text (mdoc, man) The meaning of tab characters is only well-defined in non-fill mode: In fill mode, whitespace is not supposed to be significant on text input lines. As an implementation dependent choice, tab characters on text lines are passed through to the formatters in any case. Given that the text before the tab character will be filled, it is hard to predict which tab stop position the tab will advance to. new sentence, new line (mdoc) A new sentence starts in the middle of a text line. Start it on a new input line to help formatters produce correct spacing. invalid escape sequence (roff) An escape sequence has an invalid opening argument delimiter, lacks the closing argument delimiter, the argument is of an invalid form, or it is a character escape sequence with an invalid name. If the argument is incomplete, \* and \n expand to an empty string, \B to the digit ‘0’, and \w to the length of the incomplete argument. All other invalid escape sequences are ignored. undefined escape, printing literally (roff) In an escape sequence, the first character right after the leading backslash is invalid. That character is printed literally, which is equivalent to ignoring the backslash. undefined string, using "" (roff) If a string is used without being defined before, its value is implicitly set to the empty string. However, defining strings explicitly before use keeps the code more readable. Warnings related to tables tbl line starts with span (tbl) The first cell in a table layout line is a horizontal span (‘s’). 
Data provided for this cell is ignored, and nothing is printed in the cell. tbl column starts with span (tbl) The first line of a table layout specification requests a vertical span (‘^’). Data provided for this cell is ignored, and nothing is printed in the cell. skipping vertical bar in tbl layout (tbl) A table layout specification contains more than two consecutive vertical bars. A double bar is printed, all additional bars are discarded. Errors related to tables non-alphabetic character in tbl options (tbl) The table options line contains a character other than a letter, blank, or comma where the beginning of an option name is expected. The character is ignored. skipping unknown tbl option (tbl) The table options line contains a string of letters that does not match any known option name. The word is ignored. missing tbl option argument (tbl) A table option that requires an argument is not followed by an opening parenthesis, or the opening parenthesis is immediately followed by a closing parenthesis. The option is ignored. wrong tbl option argument size (tbl) A table option argument contains an invalid number of characters. Both the option and the argument are ignored. empty tbl layout (tbl) A table layout specification is completely empty, specifying zero lines and zero columns. As a fallback, a single left-justified column is used. invalid character in tbl layout (tbl) A table layout specification contains a character that can neither be interpreted as a layout key character nor as a layout modifier, or a modifier precedes the first key. The invalid character is discarded. unmatched parenthesis in tbl layout (tbl) A table layout specification contains an opening parenthesis, but no matching closing parenthesis. The rest of the input line, starting from the parenthesis, has no effect. ignoring excessive spacing in tbl layout (tbl) A spacing modifier in a table layout is unreasonably large. The default spacing of 3n is used instead. 
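As a point of reference for the tbl diagnostics above, a minimal well-formed table with an options line, a layout specification, and matching data lines (all names hypothetical):

```roff
.TS
box tab(:);
l l
l n.
Fruit:Count
apples:3
pears:12
.TE
```

The options line ends with a semicolon, the layout block ends with a period, and each data line supplies exactly the number of cells declared in the layout, avoiding the "wrong number of cells" and option-parsing messages described here.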
tbl without any data cells (tbl) A table does not contain any data cells. It will probably produce no output. ignoring data in spanned tbl cell (tbl) A table cell is marked as a horizontal span (‘s’) or vertical span (‘^’) in the table layout, but it contains data. The data is ignored. ignoring extra tbl data cells (tbl) A data line contains more cells than the corresponding layout line. The data in the extra cells is ignored. data block open at end of tbl (tbl) A data block is opened with T{, but never closed with a matching T}. The remaining data lines of the table are all put into one cell, and any remaining cells stay empty. Errors related to roff, mdoc, and man code duplicate prologue macro (mdoc) One of the prologue macros occurs more than once. The last instance overrides all previous ones. skipping late title macro (mdoc) The Dt macro appears after the first non-prologue macro. Traditional formatters cannot handle this because they write the page header before parsing the document body. Even though this technical restriction does not apply to mandoc, traditional semantics is preserved. The late macro is discarded including its arguments. input stack limit exceeded, infinite loop? (roff) Explicit recursion limits are implemented for the following features, in order to prevent infinite loops: - expansion of nested escape sequences including expansion of strings and number registers, - expansion of nested user-defined macros, - and so file inclusion. When a limit is hit, the output is incorrect, typically losing some content, but the parser can continue. skipping bad character (mdoc, man, roff) The input file contains a byte that is not a printable ascii(7) character. The message mentions the character number. The offending byte is replaced with a question mark (‘?’). Consider editing the input file to replace the byte with an ASCII transliteration of the intended character. 
skipping unknown macro (mdoc, man, roff) The first identifier on a request or macro line is neither recognized as a roff(7) request, nor as a user-defined macro, nor, respectively, as an mdoc(7) or man(7) macro. It may be mistyped or unsupported. The request or macro is discarded including its arguments. skipping request outside macro (roff) A shift or return request occurs outside any macro definition and has no effect. skipping insecure request (roff) An input file attempted to run a shell command or to read or write an external file. Such attempts are denied for security reasons. skipping item outside list (mdoc, eqn) An It macro occurs outside any Bl list, or an eqn(7) above delimiter occurs outside any pile. It is discarded including its arguments. skipping column outside column list (mdoc) A Ta macro occurs outside any Bl -column block. It is discarded including its arguments. skipping end of block that is not open (mdoc, man, eqn, tbl, roff) Various syntax elements can only be used to explicitly close blocks that have previously been opened. An mdoc(7) block closing macro, a man(7) ME, RE or UE macro, an eqn(7) right delimiter or closing brace, or the end of an equation, table, or roff(7) conditional request is encountered but no matching block is open. The offending request or macro is discarded. fewer RS blocks open, skipping (man) The RE macro is invoked with an argument, but less than the specified number of RS blocks is open. The RE macro is discarded. inserting missing end of block (mdoc, tbl) Various mdoc(7) macros as well as tables require explicit closing by dedicated macros. A block that doesn't support bad nesting ends before all of its children are properly closed. The open child nodes are closed implicitly. appending missing end of block (mdoc, man, eqn, tbl, roff) At the end of the document, an explicit mdoc(7) block, a man(7) next-line scope or MT, RS or UR block, an equation, table, or roff(7) conditional or ignore block is still open. 
The open block is closed implicitly. escaped character not allowed in a name (roff) Macro, string and register identifiers consist of printable, non-whitespace ASCII characters. Escape sequences and characters and strings expressed in terms of them cannot form part of a name. The first argument of an am, as, de, ds, nr, or rr request, or any argument of an rm request, or the name of a request or user defined macro being called, is terminated by an escape sequence. In the cases of as, ds, and nr, the request has no effect at all. In the cases of am, de, rr, and rm, what was parsed up to this point is used as the arguments to the request, and the rest of the input line is discarded including the escape sequence. When parsing for a request or a user-defined macro name to be called, only the escape sequence is discarded. The characters preceding it are used as the request or macro name, the characters following it are used as the arguments to the request or macro. using macro argument outside macro (roff) The escape sequence \$ occurs outside any macro definition and expands to the empty string. argument number is not numeric (roff) The argument of the escape sequence \$ is not a digit; the escape sequence expands to the empty string. NOT IMPLEMENTED: Bd -file (mdoc) For security reasons, the Bd macro does not support the -file argument. By requesting the inclusion of a sensitive file, a malicious document might otherwise trick a privileged user into inadvertently displaying the file on the screen, revealing the file content to bystanders. The argument is ignored including the file name following it. skipping display without arguments (mdoc) A Bd block macro does not have any arguments. The block is discarded, and the block content is displayed in whatever mode was active before the block. missing list type, using -item (mdoc) A Bl macro fails to specify the list type. argument is not numeric, using 1 (roff) The argument of a ce request is not a number. 
argument is not a character (roff) The first argument of a char request is neither a single ASCII character nor a single character escape sequence. The request is ignored including all its arguments. missing manual name, using "" (mdoc) The first call to Nm, or any call in the NAME section, lacks the required argument. uname(3) system call failed, using UNKNOWN (mdoc) The Os macro is called without arguments, and the uname(3) system call failed. As a workaround, mandoc can be compiled with -DOSNAME="\"string\"". unknown standard specifier (mdoc) An St macro has an unknown argument and is discarded. skipping request without numeric argument (roff, eqn) An it request or an eqn(7) size or gsize statement has a non-numeric or negative argument or no argument at all. The invalid request or statement is ignored. excessive shift (roff) The argument of a shift request is larger than the number of arguments of the macro that is currently being executed. All macro arguments are deleted and \n(.$ is set to zero. NOT IMPLEMENTED: .so with absolute path or ".." (roff) For security reasons, mandoc allows so file inclusion requests only with relative paths and only without ascending to any parent directory. By requesting the inclusion of a sensitive file, a malicious document might otherwise trick a privileged user into inadvertently displaying the file on the screen, revealing the file content to bystanders. mandoc only shows the path as it appears behind so. .so request failed (roff) Servicing a so request requires reading an external file, but the file could not be opened. mandoc only shows the path as it appears behind so. skipping all arguments (mdoc, man, eqn, roff) An mdoc(7) Bt, Ed, Ef, Ek, El, Lp, Pp, Re, Rs, or Ud macro, an It macro in a list that doesn't support item heads, a man(7) LP, P, or PP macro, an eqn(7) EQ or EN macro, or a roff(7) br, fi, or nf request or ‘..’ block closing request is invoked with at least one argument. All arguments are ignored. 
skipping excess arguments (mdoc, man, roff) A macro or request is invoked with too many arguments: - Fo, MT, PD, RS, UR, ft, or sp with more than one argument - An with another argument after -split or -nosplit - RE with more than one argument or with a non-integer argument - OP or a request of the de family with more than two arguments - Dt with more than three arguments - TH with more than five arguments - Bd, Bk, or Bl with invalid arguments The excess arguments are ignored. Unsupported features input too large (mdoc, man) Currently, mandoc cannot handle input files larger than its arbitrary size limit of 2^31 bytes (2 Gigabytes). Since useful manuals are always small, this is not a problem in practice. Parsing is aborted as soon as the condition is detected. unsupported control character (roff) An ASCII control character supported by other roff(7) implementations but not by mandoc was found in an input file. It is replaced by a question mark. unsupported escape sequence (roff) An input file contains an escape sequence supported by GNU troff or Heirloom troff but not by mandoc, and it is likely that this will cause information loss or considerable misformatting. unsupported roff request (roff) An input file contains a roff(7) request supported by GNU troff or Heirloom troff but not by mandoc, and it is likely that this will cause information loss or considerable misformatting. eqn delim option in tbl (eqn, tbl) The options line of a table defines equation delimiters. Any equation source code contained in the table will be printed unformatted. unsupported table layout modifier (tbl) A table layout specification contains an ‘m’ modifier. The modifier is discarded. ignoring macro in table (tbl, mdoc, man) A table contains an invocation of an mdoc(7) or man(7) macro or of an undefined macro. The macro is ignored, and its arguments are handled as if they were a text line. skipping tbl in -Tman mode (mdoc, tbl) An input file contains the TS macro. 
This message is only generated in -T man output mode, where tbl(7) input is not supported. skipping eqn in -Tman mode (mdoc, eqn) An input file contains the EQ macro. This message is only generated in -T man output mode, where eqn(7) input is not supported. Bad command line arguments bad command line argument The argument following one of the -IKMmOTW command line options is invalid, or a file given as a command line argument cannot be opened. duplicate command line argument The -I command line option was specified twice. option has a superfluous value An argument to the -O option has a value but does not accept one. missing option value An argument to the -O option has no argument but requires one. bad option value An argument to the -O indent or width option has an invalid value. duplicate option value The same -O option is specified more than once. no such tag The -O tag option was specified but the tag was not found in any of the displayed manual pages. -Tmarkdown unsupported for man(7) input (man) The -T markdown option was specified but an input file uses the man(7) language. No output is produced for that input file. SEE ALSO apropos(1), man(1), eqn(7), man(7), mandoc_char(7), mdoc(7), roff(7), tbl(7) HISTORY The mandoc utility first appeared in OpenBSD 4.8. The option -I appeared in OpenBSD 5.2, and -aCcfhKklMSsw in OpenBSD 5.7. AUTHORS The mandoc utility was written by Kristaps Dzonsons <kristaps@bsd.lv> and is maintained by Ingo Schwarze <schwarze@openbsd.org>. macOS 14.5 August 14, 2021 macOS 14.5
pbpaste
pbcopy takes the standard input and places it in the specified pasteboard. If no pasteboard is specified, the general pasteboard will be used by default. The input is placed in the pasteboard as plain text data unless it begins with the Encapsulated PostScript (EPS) file header or the Rich Text Format (RTF) file header, in which case it is placed in the pasteboard as one of those data types. pbpaste removes the data from the pasteboard and writes it to the standard output. It normally looks first for plain text data in the pasteboard and writes that to the standard output; if no plain text data is in the pasteboard it looks for Encapsulated PostScript; if no EPS is present it looks for Rich Text. If none of those types is present in the pasteboard, pbpaste produces no output. * Encoding: pbcopy and pbpaste use locale environment variables to determine the encoding to be used for input and output. For example, absent other locale settings, setting the environment variable LANG=en_US.UTF-8 will cause pbcopy and pbpaste to use UTF-8 for input and output. If an encoding cannot be determined from the locale, the standard C encoding will be used. Use of UTF-8 is recommended. Note that by default the Terminal application uses the UTF-8 encoding and automatically sets the appropriate locale environment variable.
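The behavior described above composes naturally in pipelines. A brief sketch of typical invocations (macOS-only commands; the file name is hypothetical):

```shell
# Copy a file's contents to the general pasteboard as UTF-8.
export LANG=en_US.UTF-8
pbcopy < notes.txt

# Filter the pasteboard in place: paste, transform, copy back.
pbpaste | sort | pbcopy

# Read from the find pasteboard instead of the general one.
pbpaste -pboard find
```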
pbcopy, pbpaste - provide copying and pasting to the pasteboard (the Clipboard) from command line
pbcopy [-help] [-pboard {general | ruler | find | font}] pbpaste [-help] [-pboard {general | ruler | find | font}] [-Prefer {txt | rtf | ps}]
-pboard {general | ruler | find | font} specifies which pasteboard to copy to or paste from. If no pasteboard is given, the general pasteboard will be used by default. -Prefer {txt | rtf | ps} tells pbpaste what type of data to look for in the pasteboard first. As stated above, pbpaste normally looks first for plain text data; however, by specifying -Prefer ps you can tell pbpaste to look first for Encapsulated PostScript. If you specify -Prefer rtf, pbpaste looks first for Rich Text format. In any case, pbpaste looks for the other formats if the preferred one is not found. The txt option replaces the deprecated ascii option, which continues to function as before. Both indicate a preference for plain text. SEE ALSO ADC Reference Library: Cocoa > Interapplication Communication > Copying and Pasting Carbon > Interapplication Communication > Pasteboard Manager Programming Guide Carbon > Interapplication Communication > Pasteboard Manager Reference BUGS There is no way to tell pbpaste to get only a specified data type. Apple Computer, Inc. January 12, 2005 PBCOPY(1)
actool
actool verifies, updates, and prints the contents of an asset catalog, generating its output in standard plist format. The tool follows a "read", "modify", "write", "print" order of operations.
actool - compiles, prints, updates, and verifies asset catalogs.
actool [options] document
Specifying Output: --output-format format By default, actool provides output in the form of an XML property list. Specifying binary1 will instruct actool to output a binary property list. Similarly, xml1 specifies an XML property list, and human-readable-text specifies human readable text. Compiling: --compile path Compiles document and writes the output to the specified directory path. The name of the CAR file will be Assets.car. The compile option instructs actool to convert an asset catalog to files optimized for runtime. Additionally, --warnings, --errors, and --output-format are three other options that are commonly combined with --compile. --warnings Include document warning messages in actool's plist output. Warnings will appear under the key com.apple.actool.document.warnings, with messages listed under the subkey message and warning types under the subkey type. --errors Include document error messages in actool's plist output. Errors will appear under the key com.apple.actool.document.errors, with messages listed under the subkey message and error types under the subkey type. --notices Include document notice messages in actool's plist output. Notices will appear under the key com.apple.actool.document.notices, with messages listed under the subkey message and error types under the subkey type. --output-partial-info-plist path Emit a plist to path that contains keys and values to include in an application's info plist. path is the full path to the info plist, and should have the path extension .plist specified. The plist is populated with information gathered while compiling the CAR file, and currently contains information about the app icon and launch images used by the project. During builds, the information produced here will be merged into the target bundle's Info.plist. --app-icon name Can be combined with --compile to select a primary app icon. 
The app icon will either be copied into the output directory specified by --compile, or into the generated CAR file, depending on the value of --minimum-deployment-target. Deploying to macOS 10.13 or iOS 11.0 and later will cause the app icon to be included in the generated CAR file. A partially defined image is still generated into the output path, but this behavior may go away in the future. This flag also causes actool to declare the app icon in the partial info plist component specified by --output-partial-info-plist. --include-all-app-icons When compiling, causes all app icon assets from all named catalogs to be included in the compiled CAR file. The value for --app-icon will be the primary app icon, and the additional icon names will be added to the partial info plist. --alternate-app-icon name Specifies an additional app icon set name to include in the compiled CAR file and list in the partial info plist. Can be specified multiple times. This is an alternative to --include-all-app-icons, providing more detailed control. --launch-image name Can be combined with --compile to select a launch image to compile to the output directory, for most platforms. On tvOS, the launch image is compiled into the resulting CAR file. This flag also causes actool to declare the launch image in the partial info plist component specified by --output-partial-info-plist. --accent-color name Selects a named color to use as the target's primary accent or tint color. A warning is generated if the referenced color is missing. When the color is present and has a non-empty value, the NSAccentColorName key is added to the partial info plist file specified by --output-partial-info-plist. --widget-background-color name Selects a named color to use for the background color, if the target is a widget. A warning is generated if the referenced color is missing. 
When the color is present and has a non-empty value, the color name is added to the partial info plist file specified by --output-partial-info-plist.

--skip-app-store-deployment
       Skips App Store-specific behaviors such as validations. For example, building for an iOS or watchOS app will warn if a 1024 App Store icon is not present, but only when compiling for App Store deployment. You might want to pass --skip-app-store-deployment for targets that are not intended to be submitted to the App Store.

--include-partial-info-plist-localizations yes|no
       When enabled, includes the localization information of the selected assets in the generated partial Info.plist file under the CFBundleLocalizations key. This will allow the assets to be used at runtime in the absence of a corresponding lproj directory in the bundle. The default value is YES.

--platform platform-name
       Specifies the target platform to compile for. This option influences warnings, validation, and which images are included in the built product.

--minimum-deployment-target version
       Specifies the minimum deployment target to compile for. This option influences warnings, validation, and which images are included in the built product.

--standalone-icon-behavior default|all|none
       Controls whether loose PNG or ICNS files are created for the app icon, in addition to including the content in the Assets.car file. By default, a small subset of sizes is included as loose files, allowing external management tools to display a representative icon without reading the CAR file. This can be set to all or none to include more or fewer icon sizes as loose files.

--target-device device-name
       Specifies the target device to compile for, and may be passed multiple times. This option influences warnings, validation, and which images are included in the built product.

--compress-pngs
       PNGs copied into iOS targets will be processed using pngcrush to optimize reading the images on iOS devices. 
This has no effect for images that wind up in the compiled CAR file, as it only affects PNG images copied into the output bundle.

--filter-for-device-model device
       Causes actool to filter the files put into the CAR file by device. This simulates how the App Store will thin the developer's application. For example, if you pass iPhone9,1, actool will only include images appropriate to iPhone 7. This is useful for testing to make sure thinned applications will work properly. During build time, this is driven by the TARGET_DEVICE_MODEL build setting, and is selected by choosing the active run destination in the scheme pop-up. When the argument is not present, no thinning will occur.

--filter-for-device-os-version os_version
       Causes actool to filter the files put into the CAR file by OS version. This simulates how the App Store will thin the developer's application based on the final target OS of the app. For example, if you pass 11.0, actool will only include images appropriate to iOS 11.0, but not previous versions. This is useful for testing to make sure thinned applications will work properly.

Sticker Packs:

--include-sticker-content
       Include sticker pack content from the input asset catalogs.

--stickers-icon-role role
       Pass app-host or extension to select the appropriate icon sizes for the target when using a Messages-style app icon.

--sticker-pack-identifier-prefix prefix
       Sets the default prefix used to identify your sticker packs. This should be a valid domain type identifier. For example: com.mycompany.

--sticker-pack-strings-file strings_file
       Specifies a strings file that maps the sticker names to localized translations.

--product-type product-type
       Deprecated; use the --include-sticker-content and --stickers-icon-role options instead. Sets the type of the product that's being built. In Xcode, all targets have a product type, and certain product types will cause slightly different behaviors in actool. 
These behaviors are currently centered around how stickers generate their content, as sticker packs have special requirements for where and how content should be formatted. actool currently recognizes two special product types: com.apple.product-type.app-extension.messages-sticker-pack and com.apple.product-type.app-extension.messages.

On Demand Resources (ODR):

--enable-on-demand-resources
       Tells actool to process on-demand resources. This may result in multiple CAR files being produced. Without this option, actool ignores ODR tags found in the asset catalog.

--asset-pack-output-specifications filename
       Tells actool where to write the information about ODR resources found in the asset catalog. The emitted file will be a plist.

Listing Content:

--print-contents
       Include a listing of the catalog's content in the output.

Version Information:

--version
       Print the version of actool. The version information is output under the key com.apple.actool.version with the subkeys bundle-version and short-bundle-version.
       actool --compile /tmp MyApp.xcassets

actool will compile MyApp.xcassets and produce /tmp/Assets.car.

SEE ALSO
     plist(1)

Apple Inc. Mar 9 2018 actool(1)
xpath
xpath uses the XML::XPath perl module to make XPath queries to any XML document. The XML::XPath module aims to comply exactly to the XPath specification at "http://www.w3.org/TR/xpath" and yet allows extensions to be added in the form of functions. The script takes any number of XPath pointers and tries to apply them to each XML document given on the command line. If no file arguments are given, the query is done using "STDIN" as an XML document. When multiple queries exist, the result of the last query is used as context for the next query and only the result of the last one is output. The context of the first query is always the root of the current document.
xpath - a script to query XPath statements in XML documents.
xpath [-s suffix] [-p prefix] [-n] [-q] -e query [-e query] ... [file] ...
-q     Be quiet. Output only errors (and no separator) on stderr.

-n     Never use an external DTD, i.e. instantiate the XML::Parser module with 'ParseParamEnt => 0'.

-s suffix
       Place "suffix" at the end of each entry. Default is a linefeed.

-p prefix
       Place "prefix" preceding each entry. Default is nothing.

BUGS
The author of this man page is not very fluent in English. Please send him (fabien@tzone.org) any corrections concerning this text.

SEE ALSO
XML::XPath

LICENSE AND COPYRIGHT
This module is copyright 2000 AxKit.com Ltd. This is free software, and as such comes with NO WARRANTY. No dates are used in this module. You may distribute this module under the terms of either the Gnu GPL, or the Artistic License (the same terms as Perl itself). For support, please subscribe to the Perl-XML mailing list at <http://listserv.activestate.com/mailman/listinfo/perl-xml>.

perl v5.34.0 2017-07-27 XPATH(1)
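As a small illustration of the kind of query xpath evaluates, the same style of expression can be run against a document with Python's xml.etree, which implements a limited subset of XPath. The document and expression here are invented for the example:

```python
import xml.etree.ElementTree as ET

# A tiny in-memory document standing in for a file argument.
doc = ET.fromstring(
    "<library>"
    "<book><title>First</title></book>"
    "<book><title>Second</title></book>"
    "</library>"
)

# Comparable in spirit to: xpath -e '//book/title' file.xml
titles = [t.text for t in doc.findall(".//book/title")]
print(titles)   # ['First', 'Second']
```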
null
heap
heap lists the objects currently allocated on the heap of the specified process, as well as summary data. Objects are categorized by class name, type (Objective-C, C++, or CFType), and binary image. C++ objects are identified by the vtable referenced from the start of the object, so with multiple inheritance this may not give the precise class of the object.

If the target process is running with MallocStackLogging, then heap attempts to identify the types of "non-object" allocations, using the form "<allocation-call> in <caller>". The <caller> is determined by walking up the allocation stack backtrace (if available) to find the symbol name of the function that called an "allocation function", such as malloc, calloc, realloc, C++ "operator new", strndup, various internal functions of libc++.1.dylib, etc. If <caller> is a C++ function, the type name is created by simplifying the demangled symbol name by removing the return type, the "__1::" substrings from use of llvm's libc++abi.dylib, and standard arguments such as "std::__1::allocator<T>". For example, this type information:

       malloc in std::basic_string<char>::basic_string<std::nullptr_t>(char const*)   C++   Metal

is determined from the backtrace:

       _malloc_zone_malloc (in libsystem_malloc.dylib)
       operator new(unsigned long) (in libc++abi.dylib)
       std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string<std::nullptr_t>(char const*) (in Metal)
       ...

The binary image identified for a class is the image which implements the class. For types derived from allocation backtraces, the binary image is that of <caller>.
heap – List all the malloc-allocated buffers in the process's heap
heap [-s | -sortBySize] [-z | -zones] [-guessNonObjects] [-sumObjectFields] [-showSizes] [-addresses all | <classes-pattern>] [-noContent] pid | partial-executable-name | memory-graph-file
heap requires one argument -- either the process ID or the full or partial executable name of the process to examine, or the pathname of a memory graph file generated by leaks or the Xcode Memory Graph Debugger.

The following options are available:

-s | -sortBySize
       Sort output by total size of class instances, rather than by count.

-z | -zones
       Show the output separated out into the different malloc zones, instead of an aggregated summary of all zones.

-guessNonObjects
       Look through the memory contents of each Objective-C object to find pointers to malloc'ed blocks (non-objects), such as the variable array hanging from an NSArray. These referenced blocks of memory are identified as their offset from the start of the object (say "__NSCFArray[12]"). The count, number of bytes, and average size of memory blocks referenced from each different object offset location are listed in the output.

-sumObjectFields
       Do the same analysis as with the -guessNonObjects option, but add the sizes of those referenced non-object fields into the entries for the corresponding objects.

-showSizes
       Show the distribution of each malloc size for each object, instead of summing and averaging the sizes in a single entry.

-diffFrom=<memgraph>
       Show only the new objects since the specified memgraph.

-addresses all | <classes-pattern>
       Print the addresses of all malloc blocks found on the heap in ascending address order, or the addresses of those objects whose full class name is matched by the regular expression <classes-pattern>. The string "all" indicates that the addresses of all blocks should be printed. The <classes-pattern> regular expression is interpreted as an extended (modern) regular expression as described by the re_format(7) manual page. "malloc" or "non-object" can be used to refer to blocks that are not of any specific type. 
Examples of valid classes-patterns include:

       __NSCFString
       'NS.*'
       '__NSCFString|__NSCFArray'
       '.*(String|Array)'
       malloc
       non-object
       malloc|.*String

The <classes-pattern> pattern can be followed by an optional allocation size specifier, which can be one of the following forms. The square brackets are required. The size can include a 'k' suffix for kilobytes, or an 'm' suffix for megabytes:

       [size]
       [lowerBound-upperBound]
       [lowerBound+]
       [-upperBound]

Examples of <classes-pattern> with size specifications include:

       malloc[2048]   // all malloc blocks of size 2048
       malloc[1k-8k]  // all malloc blocks between 1k and 8k
       CFData[50k+]   // all CFData objects 50k or larger
       [-1024]        // all allocations 1024 bytes or less

-noContent
       Do not show object content in -addresses mode.

SEE ALSO
     malloc(3), leaks(1), malloc_history(1), stringdups(1), vmmap(1), DevToolsSecurity(1)

The Xcode developer tools also include Instruments, a graphical application that can give information similar to that provided by heap. The Allocations instrument graphically displays dynamic, real-time information about the object and memory use in an application, including backtraces of where the allocations occurred. The Leaks instrument performs memory leak analysis.

macOS 14.5 Feb. 8, 2022 macOS 14.5
null
sum
The cksum utility writes to the standard output three whitespace separated fields for each input file. These fields are a checksum CRC, the total number of octets in the file and the file name. If no file name is specified, the standard input is used and no file name is written.

The sum utility is identical to the cksum utility, except that it defaults to using historic algorithm 1, as described below. It is provided for compatibility only.

The options are as follows:

-o     Use historic algorithms instead of the (superior) default one.

       Algorithm 1 is the algorithm used by historic BSD systems as the sum(1) algorithm and by historic AT&T System V UNIX systems as the sum(1) algorithm when using the -r option. This is a 16-bit checksum, with a right rotation before each addition; overflow is discarded.

       Algorithm 2 is the algorithm used by historic AT&T System V UNIX systems as the default sum(1) algorithm. This is a 32-bit checksum, and is defined as follows:

              s = sum of all bytes;
              r = s % 2^16 + (s % 2^32) / 2^16;
              cksum = (r % 2^16) + r / 2^16;

       Algorithm 3 is what is commonly called the ‘32bit CRC’ algorithm. This is a 32-bit checksum.

       Both algorithm 1 and 2 write to the standard output the same fields as the default algorithm except that the size of the file in bytes is replaced with the size of the file in blocks. For historic reasons, the block size is 1024 for algorithm 1 and 512 for algorithm 2. Partial blocks are rounded up.

The default CRC used is based on the polynomial used for CRC error checking in the networking standard ISO 8802-3: 1989. The CRC checksum encoding is defined by the generating polynomial:

       G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Mathematically, the CRC value corresponding to a given file is defined by the following procedure: The n bits to be evaluated are considered to be the coefficients of a mod 2 polynomial M(x) of degree n-1. 
These n bits are the bits from the file, with the most significant bit being the most significant bit of the first octet of the file and the last bit being the least significant bit of the last octet, padded with zero bits (if necessary) to achieve an integral number of octets, followed by one or more octets representing the length of the file as a binary value, least significant octet first. The smallest number of octets capable of representing this integer are used.

M(x) is multiplied by x^32 (i.e., shifted left 32 bits) and divided by G(x) using mod 2 division, producing a remainder R(x) of degree <= 31.

The coefficients of R(x) are considered to be a 32-bit sequence.

The bit sequence is complemented and the result is the CRC.

EXIT STATUS
     The cksum and sum utilities exit 0 on success, and >0 if an error occurs.

SEE ALSO
     md5(1)

     The default calculation is identical to that given in pseudo-code in the following ACM article. Dilip V. Sarwate, “Computation of Cyclic Redundancy Checks Via Table Lookup”, Communications of the ACM, August 1988.

STANDARDS
     The cksum utility is expected to conform to IEEE Std 1003.2-1992 (“POSIX.2”).

HISTORY
     The cksum utility appeared in 4.4BSD.

macOS 14.5 April 28, 1995 macOS 14.5
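The three algorithms can be sketched directly from the definitions above. This is a reading of this page's description in Python, not the utility's actual implementation:

```python
def sum_algorithm_1(data: bytes) -> int:
    """Historic BSD sum: 16-bit checksum, right rotation before each add."""
    s = 0
    for b in data:
        s = ((s >> 1) | ((s & 1) << 15)) + b   # rotate right, then add
        s &= 0xFFFF                            # overflow is discarded
    return s

def sum_algorithm_2(data: bytes) -> int:
    """Historic AT&T System V sum: 32-bit sum folded down to 16 bits."""
    s = sum(data)
    r = (s % 2**16) + (s % 2**32) // 2**16
    return (r % 2**16) + r // 2**16

def cksum_crc(data: bytes) -> int:
    """Default cksum CRC (algorithm 3), following the procedure above."""
    poly = 0x04C11DB7      # G(x) from the man page, x^32 term implicit
    crc = 0
    def feed(octet: int) -> None:
        nonlocal crc
        crc ^= octet << 24
        for _ in range(8):  # mod 2 division, one bit at a time
            crc = ((crc << 1) ^ poly if crc & 0x80000000 else crc << 1) & 0xFFFFFFFF
    for b in data:
        feed(b)
    n = len(data)           # append the length, least significant octet first
    while True:
        feed(n & 0xFF)
        n >>= 8
        if n == 0:
            break
    return crc ^ 0xFFFFFFFF  # complement the remainder

print(cksum_crc(b""))   # 4294967295
```

For an empty input the remainder is zero and its complement is 4294967295, which matches the first field of `cksum /dev/null`.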
cksum, sum – display file checksums and block counts
cksum [-o 1 | 2 | 3] [file ...] sum [file ...]
null
null
lskq
The lskq command enumerates kqueues and registered kevents of running processes.
lskq – display process kqueue state
lskq [-vhe] [-p <pid> | -a]
-p <pid>
       Show kqueues of process <pid>.

-a     Show kqueues for all running processes. Requires root.

-v     Verbose: show opaque user data and filter-specific extension fields.

-e     Ignore empty kqueues.

-r     Print fields in raw hex.

-h     Show help and exit.

OUTPUT
lskq prints one line of output for each registered kevent, consisting of process, kqueue, and kevent information. For kqueues with no registered kevents, a single line is output with an ident of `-'. See kevent(2) for field semantics. The output columns are:

command
       shortened process name.

pid    process identifier.

kq     file descriptor corresponding to kqueue, or ``wq'' for the special workq kqueue.

kqst   kqueue status bitmask.
       k         kqueue is in a kevent*() wait set (KQ_SLEEP).
       s         kqueue is in a select() wait set (KQ_SEL).
       3, 6, q   Type of kevents on this kqueue: KEV32, KEV64, or KEV_QOS, respectively.

ident  kevent identifier. The meaning depends on the kevent filter specified. Where possible, lskq prints both numeric and symbolic names.

filter kevent filter type (EVFILT_*).

fdtype file descriptor type, for filters operating on file descriptors.

fflags kevent filter flags bitmask. The meaning of each field depends on the filter type.

       EVFILT_READ:
              l   NOTE_LOWAT

       EVFILT_MACHPORT:
              r   MACH_RCV_MSG

       EVFILT_VNODE:
              d   NOTE_DELETE
              w   NOTE_WRITE
              e   NOTE_EXTEND
              a   NOTE_ATTRIB
              l   NOTE_LINK
              r   NOTE_RENAME
              v   NOTE_REVOKE
              u   NOTE_FUNLOCK

       EVFILT_PROC:
              x   NOTE_EXIT
              t   NOTE_EXITSTATUS
              d   NOTE_EXIT_DETAIL
              f   NOTE_FORK
              e   NOTE_EXEC
              s   NOTE_SIGNAL
              r   NOTE_REAP

       EVFILT_TIMER:
              s u n m   NOTE_SECONDS, NOTE_USECONDS, NOTE_NSECONDS, NOTE_MACHTIME
              a A       NOTE_ABSOLUTE, NOTE_MACH_CONTINUOUS_TIME
              c         NOTE_CRITICAL
              b         NOTE_BACKGROUND
              l         NOTE_LEEWAY

       EVFILT_USER:
              t   NOTE_TRIGGER
              a   NOTE_FFAND
              o   NOTE_FFOR

       EVFILT_WORKLOOP:
              t w i   NOTE_WL_THREAD_REQUEST, NOTE_WL_SYNC_WAIT, NOTE_WL_SYNC_IPC
              W       NOTE_WL_SYNC_WAKE
              q       NOTE_WL_UPDATE_QOS
              o       NOTE_WL_DISCOVER_OWNER
              e       NOTE_WL_IGNORE_ESTALE
              R       POLICY_RR
              F       POLICY_FIFO
              P       Priority configured on workloop

flags  kevent generic flags bitmask. 
       a   EV_ADD
       n   EV_ENABLE
       d   EV_DISABLE
       x   EV_DELETE
       r   EV_RECEIPT
       1   EV_ONESHOT
       c   EV_CLEAR
       s   EV_DISPATCH
       u   EV_UDATA_SPECIFIC
       p   EV_FLAG0 (EV_POLL)
       b   EV_FLAG1 (EV_OOBAND)
       o   EV_EOF
       e   EV_ERROR

evst   kevent status bitmask.
       a   KN_ACTIVE (event has triggered)
       q   KN_QUEUED (event has been added to the active list)
       d   KN_DISABLED (knote is disabled)
       p   KN_SUPPRESSED (event delivery is in flight)
       s   KN_STAYACTIVE (event is marked as always-enqueued on the active list)
       d   KN_DROPPING (knote is about to be dropped)
       l   KN_LOCKED (knote is locked)
       P   KN_POSTING (knote is being posted)
       m   KN_MERGE_QOS (knote is in override saturating mode)
       D   KN_DEFERDELETE (knote is waiting for deferred-delete ack)
       v   KN_REQVANISH
       n   KN_VANISHED

qos    The QoS requested for the knote.

data   Filter-specific data. If the -v (verbose) option is specified, the opaque user-data field and further filter-specific extension fields are printed in raw hexadecimal.

NOTES
The output of lskq is not an atomic snapshot of system state. In cases where lskq is able to detect an inconsistency, a warning will be printed. Not all flags are symbolicated. Use -r (raw mode) to inspect additional flags.

SEE ALSO
     ddt(1), lsmp(1), kevent(2), kqueue(2), lsof(8)

macOS April 20, 2015 macOS
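The flags letters above map one-to-one onto kevent flag bits. As a sketch of how such a column could be decoded (flag values taken from <sys/event.h>; the letter ordering is simply the order of the list above, not necessarily lskq's):

```python
# (bit mask, lskq letter) pairs for the generic kevent flags.
EV_FLAGS = [
    (0x0001, "a"),  # EV_ADD
    (0x0004, "n"),  # EV_ENABLE
    (0x0008, "d"),  # EV_DISABLE
    (0x0002, "x"),  # EV_DELETE
    (0x0040, "r"),  # EV_RECEIPT
    (0x0010, "1"),  # EV_ONESHOT
    (0x0020, "c"),  # EV_CLEAR
    (0x0080, "s"),  # EV_DISPATCH
    (0x0100, "u"),  # EV_UDATA_SPECIFIC
    (0x1000, "p"),  # EV_FLAG0 (EV_POLL)
    (0x2000, "b"),  # EV_FLAG1 (EV_OOBAND)
    (0x8000, "o"),  # EV_EOF
    (0x4000, "e"),  # EV_ERROR
]

def decode_flags(bits: int) -> str:
    """Render a kevent flags bitmask as a string of lskq-style letters."""
    return "".join(letter for mask, letter in EV_FLAGS if bits & mask)

# EV_ADD | EV_CLEAR | EV_EOF
print(decode_flags(0x0001 | 0x0020 | 0x8000))   # aco
```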
null
macbinary
applesingle, binhex, macbinary are implemented as a single tool with multiple names. All invocations support the three verbs encode, decode, and probe. If multiple files are passed to probe, the exit status will be non-zero only if all files contain data in the specified encoding.
applesingle, binhex, macbinary – encode and decode files
<tool> probe file ... <tool> [decode] [-c] [-fv] [-C dir] [-o outfile] [file ...] <tool> -h | -V applesingle encode [-cfv] [-s suf] [-C dir] [-o outfile] file ... binhex encode [-R] [-cfv] [-s suf] [-C dir] [-o outfile] file ... macbinary encode [-t 1-3] [-cfv] [-s suf] [-C dir] [-o outfile] file ...
-f, --force
       perform the operation even if the output file already exists

-h, --help
       display version and usage, then quit

-v, --verbose
       be verbose

-V, --version
       display version, then quit

-c, --pipe, --from-stdin, --to-stdout
       For decode, read encoded data from the standard input. For encode, write encoded data to the standard output. Currently, "plain" data must be written to and from specified filenames (see also mount_fdesc(8)).

-C, --directory dir
       create output files in dir

-o, --rename name
       Use name for output, overriding any stored or default name. For encode, the appropriate suffix will be added to name. -o implies only one file to be encoded or decoded.

-s, --suffix .suf
       override the default suffix for the given encoding

-R, --no-runlength-encoding
       don't use BinHex runlength compression when encoding

-t, --type 1-3
       Specify MacBinary encoding type. Type 1 is undesirable because it has neither a checksum nor a signature and is thus difficult to recognize.

DIAGNOSTICS
In general, the tool returns a non-zero exit status if it fails.

Darwin 14 November 2005 Darwin
null
ncdestroy
ncctl controls the caller's kernel Kerberos credentials for any of the specified path's associated NFS mounts. If no paths are specified then all the caller's associated credentials for all NFS file systems are acted upon by the command given.

When an NFS file system is mounted using Kerberos through the “sec=” option or by the export specified on the server, the resulting session context is stored in a table for each mount. If the user decides to finish his or her session or chooses to use a different credential, then ncctl can be called to invalidate or change those credentials in the kernel.

ncctl supports the following commands:

init, set
       Set the mount or mounts to obtain credentials from the associated principal. Any current credential is unset.

destroy, unset
       Unset the current credentials on the mount or mounts.

list, get
       List the principal(s) set on the mount or mounts for this session. If no principal was set, then display “Default credential” followed by “[from ⟨principal name⟩]” if the access succeeded and “[kinit needed]” if not. If there has been no access to the file system then display “Credentials are not set”.

Note the second synopsis is equivalent to

       ncctl [-Pv] {init | set} [-F] -p principal

The third synopsis is equivalent to

       ncctl [-Pv] {destroy | unset}

And the last synopsis is equivalent to

       ncctl [-Pv] {list | get}

Kerberos keeps a collection of credentials which can be seen by using klist -A. The current default credential can be seen with klist without any arguments. kswitch can be used to switch the default to a different Kerberos credential. kdestroy can be used to remove all or a particular Kerberos credential. New Kerberos credentials can be obtained and added to the collection by calling kinit, and those credentials can be used when accessing the mount. See kinit(1), klist(1), kswitch(1), and kdestroy(1).

ncctl can set any principal from the associated Kerberos credentials or can destroy and unset credentials currently on the mount. 
When accessing a Kerberos mounted NFS file system, if no principal is set on the mount, when the kernel needs credentials it will make an up call to the gssd daemon and whatever default credentials are available at the time will be used.

The options are as follows:

-h, --help
       Print a help summary of the command and then exit.

-v, --verbose
       Be verbose and show what file system is being operated on and any resulting errors.

-P, --nofollow
       If the trailing component resolves to a symbolic link do not resolve the link but use the current path to determine any associated NFS file system.

-p, --principal ⟨principal⟩
       For the init, set and ncinit commands set the principal to ⟨principal⟩. This option is required for these commands. This option is not valid for other commands.

-F, --force
       For the init, set and ncinit commands, do not check the presence of the required principal in the Kerberos cache collection. This may be useful if Kerberos credentials will be obtained later. WARNING: If the credential is incorrectly set it may not work and no access to the file system will ever be allowed until another set or unset operation takes place. This option is not valid for other commands.
ncctl – Control NFS kernel credentials
ncctl [-Pvh] {{init | set} [-F] -p principal | {destroy | unset} | {list | get}} [path ...] ncinit [-PvhF] -p principal [path ...] ncdestroy [-Pvh] [path ...] nclist [-Pvh] [path ...]
null
If leaving for the day:

       $ kdestroy -A
       $ ncdestroy

Let's say a user does

       $ kinit user@FOO.COM

and through the automounter accesses a path /Network/Servers/someserver/Sources/foo/bar where the mount of /Network/Servers/someserver/Sources/foo was done with user@FOO.COM.

       $ cat /Network/Servers/someserver/Sources/foo/bar
       cat: /Network/Servers/someserver/Sources/foo/bar: Permission denied

The user realizes that in order to have access on the server his identity should be user2@BAR.COM. So:

       $ kinit user2@BAR.COM
       $ ncctl set -p user2@BAR.COM

Now the local user can access bar. To see your credentials:

       $ nclist
       /Network/Servers/someserver/Sources/foo: user2@BAR.COM

If the user destroys his credentials and then acquires new ones:

       $ ncdestroy
       $ nclist -v
       /private/tmp/mp : No credentials are set.
       /Network/Servers/xs1/release : NFS mount is not using Kerberos.
       $ kinit user
       user@FOO.COM's password: ******
       $ klist
       Credentials cache: API:648E3003-0A6B-4BB3-8447-1D5034F98EAE
               Principal: user@FOO.COM

         Issued                Expires               Principal
       Dec 15 13:57:57 2014  Dec 15 23:57:57 2014  krbtgt/FOO.COM@FOO.COM
       $ ls /private/tmp/mp
       filesystemui.socket=   sysdiagnose.tar.gz   x
       mtrecorder/            systemstats/         z
       $ nclist /private/tmp/mp
       /private/tmp/mp : Default credential [from user@FOO.COM]

NOTES
As mentioned above, credentials are per session, so the console session's credential cache collection is separate from the collection of credentials obtained in an ssh session, even by the same user. Kerberos will set the default credential with klist or kswitch. However, the default credential can change without the user's knowledge, because of renewals, or because some other script or program in the user's session does a kswitch (krb5_cc_set_default_name()) or kinit on the user's behalf. kinit may not prompt for a password if the Kerberos password for the principal is in the user's keychain.

ncctl with the set command will allow a user to change the mapping of the local user identity to a different one on the server. 
It is up to the user to decide which identity will be used. Previous versions of the gssd daemon would attempt to select credentials if they were not set, by choosing credentials in the same realm as the server. This was imperfect and has been removed. There may be multiple credentials in the same realm, or a user may prefer a cross realm principal.

It is highly recommended that after accessing a mount (typically through the automounter), a user with access to multiple credentials set the credential on the mount that they want to use. The current default credential will be used by the automounter on first mount. If you do not explicitly set the credentials to use, then if the server expires the credential, the client will use the current default credential at the time of renewal, and that may be a different identity.

If using mount directly, a user can select what credential to use for the mount and subsequently thereafter (at least until a new ncctl set command is run) by using the principal=⟨principal⟩ option. It is also possible to select the realm to use with the realm=⟨realm⟩ option. The latter can be useful to administrators in automounter maps.

There is currently no way to remember what the chosen identity is for a given mount after it has been unmounted. So for automounted mounts a reference is taken on the mount point, so unmounts will not happen until all credentials on a mount with a set principal have been destroyed. Forced unmounts will not be affected.

nclist or ncctl get can be used to see what credentials are actually being used, and ncdestroy or ncctl unset can be used to destroy that session's credential. Accessing the mount after its credentials have been destroyed will cause the default credential to be used until the next ncinit or ncctl set.

Default credentials for an automounted NFS mount will not prevent the unmounting of the file system. 
DIAGNOSTICS The ncctl command will exit with 1 if any of the supplied paths doesn't exist or there is an error returned for any path tried. If all paths exist and no errors are returned the exit status will be 0. SEE ALSO kdestroy(1), kinit(1), klist(1), kswitch(1), mount_nfs(8) BUGS There should be an option to kdestroy to destroy cached NFS contexts. macOS 14.5 January 14, 2015 macOS 14.5
nc
The nc (or netcat) utility is used for just about anything under the sun involving TCP or UDP. It can open TCP connections, send UDP packets, listen on arbitrary TCP and UDP ports, do port scanning, and deal with both IPv4 and IPv6. Unlike telnet(1), nc scripts nicely, and separates error messages onto standard error instead of sending them to standard output, as telnet(1) does with some.

Common uses include:

       •   simple TCP proxies
       •   shell-script based HTTP clients and servers
       •   network daemon testing
       •   a SOCKS or HTTP ProxyCommand for ssh(1)
       •   and much, much more

The options are as follows:

-4     Forces nc to use IPv4 addresses only.

-6     Forces nc to use IPv6 addresses only.

-A     Set SO_RECV_ANYIF on socket.

-b boundif
       Specifies the interface to bind the socket to.

-c     Send CRLF as line-ending.

-D     Enable debugging on the socket.

-C     Forces nc not to use cellular data context.

-d     Do not attempt to read from stdin.

-h     Prints out nc help.

-i interval
       Specifies a delay time interval between lines of text sent and received. Also causes a delay time between connections to multiple ports.

-G conntimeout
       TCP connection timeout in seconds.

-H keepidle
       Initial TCP keep alive timeout in seconds.

-I keepintvl
       Interval for repeating TCP keep alive timeouts in seconds.

-J keepcnt
       Number of times to repeat TCP keep alive packets.

-k     Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option.

-l     Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.

-L num_probes
       Number of probes to send to the peer before declaring that the peer is not reachable and generating an adaptive timeout read/write event.

-n     Do not do any DNS or service lookups on any specified addresses, hostnames or ports. 
-p source_port
       Specifies the source port nc should use, subject to privilege restrictions and availability. It is an error to use this option in conjunction with the -l option.

-r     Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in the order that the system assigns them.

-s source_ip_address
       Specifies the IP of the interface which is used to send the packets. It is an error to use this option in conjunction with the -l option.

-t     Causes nc to send RFC 854 DON'T and WON'T responses to RFC 854 DO and WILL requests. This makes it possible to use nc to script telnet sessions.

-U     Specifies to use Unix Domain Sockets.

-u     Use UDP instead of the default option of TCP.

-v     Have nc give more verbose output.

-w timeout
       If a connection and stdin are idle for more than timeout seconds, then the connection is silently closed. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.

-X proxy_version
       Requests that nc should use the specified protocol when talking to the proxy server. Supported protocols are “4” (SOCKS v.4), “5” (SOCKS v.5) and “connect” (HTTPS proxy). If the protocol is not specified, SOCKS version 5 is used.

-x proxy_address[:port]
       Requests that nc should connect to hostname using a proxy at proxy_address and port. If port is not specified, the well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).

-z     Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.

--apple-delegate-pid
       Requests that nc should delegate the socket for the specified PID. It is an error to use this option in conjunction with the --apple-delegate-uuid option.

--apple-delegate-uuid
       Requests that nc should delegate the socket for the specified UUID. 
It is an error to use this option in conjunction with the --apple-delegate-pid option.

--apple-ext-bk-idle
       Requests that nc marks its socket for extended background idle time when the process becomes suspended.

--apple-nowakefromsleep n
       When the parameter n is greater than 0, requests that nc marks its socket to exclude the local port from the list of opened ports that is queried by drivers when the system goes to sleep. When n is greater than 1, set the socket option that generates the KEV_SOCKET_CLOSED kernel event when the socket gets closed.

--apple-ecn mode
       Requests that nc use the socket option TCP_ECN_MODE to set the ECN mode (default, enable, disable).

hostname can be a numerical IP address or a symbolic hostname (unless the -n option is given). In general, a hostname must be specified, unless the -l option is given (in which case the local host is used).

port[s] can be single integers or ranges. Ranges are in the form nn-mm. In general, a destination port must be specified, unless the -U option is given (in which case a socket must be specified).

CLIENT/SERVER MODEL
It is quite simple to build a very basic client/server model using nc. On one console, start nc listening on a specific port for a connection. For example:

       $ nc -l 1234

nc is now listening on port 1234 for a connection. On a second console (or a second machine), connect to the machine and port being listened on:

       $ nc 127.0.0.1 1234

There should now be a connection between the ports. Anything typed at the second console will be concatenated to the first, and vice-versa. After the connection has been set up, nc does not really care which side is being used as a ‘server’ and which side is being used as a ‘client’. The connection may be terminated using an EOF (‘^D’).

DATA TRANSFER
The example in the previous section can be expanded to build a basic data transfer model. 
Any information input into one end of the connection will be output to the other end, and input and output can be easily captured in order to emulate file transfer. Start by using nc to listen on a specific port, with output captured into a file:

$ nc -l 1234 > filename.out

Using a second machine, connect to the listening nc process, feeding it the file which is to be transferred:

$ nc host.example.com 1234 < filename.in

After the file has been transferred, the connection will close automatically. TALKING TO SERVERS It is sometimes useful to talk to servers “by hand” rather than through a user interface. It can aid in troubleshooting, when it might be necessary to verify what data a server is sending in response to commands issued by the client. For example, to retrieve the home page of a web site:

$ echo -n "GET / HTTP/1.0\r\n\r\n" | nc host.example.com 80

Note that this also displays the headers sent by the web server. They can be filtered, using a tool such as sed(1), if necessary. More complicated examples can be built up when the user knows the format of requests required by the server. As another example, an email may be submitted to an SMTP server using:

$ nc localhost 25 << EOF
HELO host.example.com
MAIL FROM: <user@host.example.com>
RCPT TO: <user2@host.example.com>
DATA
Body of email.
.
QUIT
EOF

PORT SCANNING It may be useful to know which ports are open and running services on a target machine. The -z flag can be used to tell nc to report open ports, rather than initiate a connection. For example:

$ nc -z host.example.com 20-30
Connection to host.example.com 22 port [tcp/ssh] succeeded!
Connection to host.example.com 25 port [tcp/smtp] succeeded!

The port range was specified to limit the search to ports 20 - 30. Alternatively, it might be useful to know which server software is running, and which versions. This information is often contained within the greeting banners.
In order to retrieve these, it is necessary to first make a connection, and then break the connection when the banner has been retrieved. This can be accomplished by specifying a small timeout with the -w flag, or perhaps by issuing a "QUIT" command to the server:

$ echo "QUIT" | nc host.example.com 20-30
SSH-1.99-OpenSSH_3.6.1p2
Protocol mismatch.
220 host.example.com IMS SMTP Receiver Version 0.84 Ready
nc – arbitrary TCP and UDP connections and listens
nc [-46AcDCdhklnrtUuvz] [-b boundif] [-i interval] [-p source_port] [-s source_ip_address] [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]] [--apple-delegate-pid pid] [--apple-delegate-uuid uuid] [--apple-ext-bk-idle] [--apple-nowakefromsleep n] [--apple-ecn mode] [hostname] [port[s]]
Open a TCP connection to port 42 of host.example.com, using port 31337 as the source port, with a timeout of 5 seconds: $ nc -p 31337 -w 5 host.example.com 42 Open a UDP connection to port 53 of host.example.com: $ nc -u host.example.com 53 Open a TCP connection to port 42 of host.example.com using 10.1.2.3 as the IP for the local end of the connection: $ nc -s 10.1.2.3 host.example.com 42 Create and listen on a Unix Domain Socket: $ nc -lU /var/tmp/dsocket Connect to port 42 of host.example.com via an HTTP proxy at 10.2.3.4, port 8080. This example could also be used by ssh(1); see the ProxyCommand directive in ssh_config(5) for more information. $ nc -x10.2.3.4:8080 -Xconnect host.example.com 42 SEE ALSO cat(1), ssh(1) AUTHORS Original implementation by *Hobbit* ⟨hobbit@avian.org⟩. Rewritten with IPv6 support by Eric Jackson ⟨ericj@monkey.org⟩. CAVEATS UDP port scans will always succeed (i.e. report the port as open), rendering the -uz combination of flags relatively useless. macOS 14.5 June 25, 2001 macOS 14.5
json_pp
json_pp converts between some input and output formats (one of them is JSON). This program was copied from json_xs and modified. The default input format is json and the default output format is json with the pretty option.
json_pp - JSON::PP command utility
json_pp [-v] [-f from_format] [-t to_format] [-json_opt options_to_json1[,options_to_json2[,...]]]
-f from_format Reads data in the given format from STDIN. Format types: json as JSON eval as Perl code -t to_format Writes data in the given format to STDOUT. Format types: null no action. json as JSON dumper as Data::Dumper -json_opt options to JSON::PP Acceptable options are: ascii latin1 utf8 pretty indent space_before space_after relaxed canonical allow_nonref allow_singlequote allow_barekey allow_bignum loose escape_slash indent_length Multiple options must be separated by commas: Right: -json_opt pretty,canonical Wrong: -json_opt pretty -json_opt canonical -v Verbose option; currently it has no effect. -V Prints version and exits.
$ perl -e'print q|{"foo":"あい","bar":1234567890000000000000000}|' |\
   json_pp -f json -t dumper -json_opt pretty,utf8,allow_bignum
$VAR1 = {
          'bar' => bless( {
                     'value' => [ '0000000', '0000000', '5678900', '1234' ],
                     'sign' => '+'
                   }, 'Math::BigInt' ),
          'foo' => "\x{3042}\x{3044}"
        };

$ perl -e'print q|{"foo":"あい","bar":1234567890000000000000000}|' |\
   json_pp -f json -t dumper -json_opt pretty
$VAR1 = {
          'bar' => '1234567890000000000000000',
          'foo' => "\x{e3}\x{81}\x{82}\x{e3}\x{81}\x{84}"
        };

SEE ALSO JSON::PP, json_xs AUTHOR Makamaka Hannyaharamitu, <makamaka[at]cpan.org> COPYRIGHT AND LICENSE Copyright 2010 by Makamaka Hannyaharamitu This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.34.1 2024-04-13 JSON_PP(1)
snmptrap
snmptrap is an SNMP application that uses the SNMP TRAP operation to send information to a network manager. One or more object identifiers (OIDs) can be given as arguments on the command line. A type and a value must accompany each object identifier. Each variable name is given in the format specified in variables(5). When invoked as snmpinform, or when -Ci is added to the command line flags of snmptrap, it sends an INFORM-PDU, expecting a response from the trap receiver, retransmitting if required. Otherwise it sends a TRAP-PDU or TRAP2-PDU. If any of the required version 1 parameters, enterprise-oid, agent, and uptime are specified as empty, it defaults to 1.3.6.1.4.1.3.1.1 (enterprises.cmu.1.1), hostname, and host-uptime respectively. The TYPE is a single character, one of:
i INTEGER
u UNSIGNED
c COUNTER32
s STRING
x HEX STRING
d DECIMAL STRING
n NULLOBJ
o OBJID
t TIMETICKS
a IPADDRESS
b BITS
which are handled in the same way as the snmpset command. For example:

snmptrap -v 1 -c public manager enterprises.spider test-hub 3 0 '' interfaces.iftable.ifentry.ifindex.1 i 1

will send a generic linkUp trap to manager, for interface 1.
snmptrap, snmpinform - sends an SNMP notification to a manager
snmptrap -v 1 [COMMON OPTIONS] AGENT enterprise-oid agent generic-trap specific-trap uptime [OID TYPE VALUE]... snmptrap -v [2c|3] [COMMON OPTIONS] [-Ci] AGENT uptime trap-oid [OID TYPE VALUE]... snmpinform -v [2c|3] [COMMON OPTIONS] AGENT uptime trap-oid [OID TYPE VALUE]...
snmptrap takes the common options described in the snmpcmd(1) manual page in addition to the -Ci option described above. Note that snmptrap REQUIRES an argument specifying the agent to query as described there. SEE ALSO snmpcmd(1), snmpset(1), variables(5). V5.6.2.1 19 Jun 2003 SNMPTRAP(1)
tsort
The tsort utility takes a list of pairs of node names representing directed arcs in a graph and prints the nodes in topological order on standard output. Input is taken from the named file, or from standard input if no file is given. There must be an even number of nodes in the input. Node names specified on the same line should be white space separated. Presence of a node in a graph can be represented by an arc from the node to itself. This is useful when a node is not connected to any other nodes. If the graph contains a cycle (and therefore cannot be properly sorted), one of the arcs in the cycle is ignored and the sort continues. Cycles are reported on standard error. The options are as follows: -d Turn on debugging. -l Search for and display the longest cycle. Can take a very long time. -q Do not display informational messages about cycles. This is primarily intended for building libraries, where optimal ordering is not critical, and cycles occur often.
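The pair-per-arc input described above can also be fed from a pipeline instead of a named file. The sketch below uses hypothetical library names to derive a link order from three dependency arcs:

```shell
# Each pair is a directed arc: libA must precede libB, libB must precede
# libC, and libA must precede libC. tsort prints one node per line in an
# order consistent with every arc; here only one order is possible, so
# the output is libA, libB, libC.
printf 'libA libB\nlibB libC\nlibA libC\n' | tsort
```

Because the arcs form a strict chain, the result is deterministic; with looser constraints, any order satisfying all arcs may be printed.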
tsort – topological sort of a directed graph
tsort [-dlq] [file]
Assuming a file named dag with the following contents representing a directed acyclic graph:
A B
A F
B C
B D
D E
Sort the nodes of the graph:
$ tsort dag
A
F
B
D
C
E
White spaces and new line characters are considered equal. This file for example is considered equal to the one we defined before:
$ cat dga
A B A F B C B D D E
Assume we add a new directed arc from D to A creating a cycle:
A B
A F
B C
B D
D E
D A
Ordering the graph detects the cycle:
$ tsort dag
tsort: cycle in data
tsort: A
tsort: B
tsort: D
D
E
A
F
B
C
Same as above but silencing the warning about the cycle:
$ tsort -q dag
D
E
A
F
B
C
SEE ALSO ar(1) HISTORY The tsort command appeared in Version 7 AT&T UNIX. This tsort command and manual page are derived from sources contributed to Berkeley by Michael Rendell of Memorial University of Newfoundland. BUGS The tsort utility does not recognize multibyte characters. macOS 14.5 August 30, 2020 macOS 14.5
uuencode
The uuencode and uudecode utilities are used to transmit binary files over transmission mediums that do not support other than simple ASCII data. The b64encode utility is synonymous with uuencode with the -m flag specified. The b64decode utility is synonymous with uudecode with the -m flag specified. The base64 utility acts as a base64 decoder when passed the --decode (or -d) flag and as a base64 encoder otherwise. As a decoder it only accepts raw base64 input and as an encoder it does not produce the framing lines. base64 reads standard input or file if it is provided and writes to standard output. Options --wrap (or -w) and --ignore-garbage (or -i) are accepted for compatibility with GNU base64, but the latter is unimplemented and silently ignored. The uuencode utility reads file (or by default the standard input) and writes an encoded version to the standard output, or output_file if one has been specified. The encoding uses only printing ASCII characters and includes the mode of the file and the operand name for use by uudecode. The uudecode utility transforms uuencoded files (or by default, the standard input) into the original form. The resulting file is named either name or (depending on options passed to uudecode) output_file and will have the mode of the original file except that setuid and execute bits are not retained. The uudecode utility ignores any leading and trailing lines. The following options are available for uuencode: -m Use the Base64 method of encoding, rather than the traditional uuencode algorithm. -r Produce raw output by excluding the initial and final framing lines. -o output_file Output to output_file instead of standard output. The following options are available for uudecode: -c Decode more than one uuencoded file from file if possible. -i Do not overwrite files. -m When used with the -r flag, decode Base64 input instead of traditional uuencode input. Without -r it has no effect. 
-o output_file Output to output_file instead of any pathname contained in the input data. -p Decode file and write output to standard output. -r Decode raw (or broken) input, which is missing the initial and possibly the final framing lines. The input is assumed to be in the traditional uuencode encoding, but if the -m flag is used, or if the utility is invoked as b64decode, then the input is assumed to be in Base64 format. -s Do not strip output pathname to base filename. By default uudecode deletes any prefix ending with the last slash '/' for security reasons. Additionally, b64encode accepts the following option: -w column Wrap encoded output after column. The following options are available for base64: -b count, --break=count Insert line breaks every count characters. The default is 0, which generates an unbroken stream. -d, -D, --decode Decode incoming Base64 stream into binary data. -h, --help Print usage summary and exit. -i input_file, --input=input_file Read input from input_file. The default is stdin; passing “-” also represents stdin. -o output_file, --output=output_file Write output to output_file. The default is stdout; passing “-” also represents stdout. bintrans is a generic utility that can run any of the aforementioned encoders and decoders. It can also run algorithms that are not available through a dedicated program: qp is a quoted-printable converter and accepts the following options: -u Decode. -o output_file Output to output_file instead of standard output.
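As a sanity check of the encoder/decoder pairing described above, a Base64 round trip can be sketched as follows (this assumes a base64 that accepts -d, as documented here):

```shell
# Encode a short string, then decode it again; base64 -d must restore
# the original bytes exactly, so the pipeline prints "hello, world".
printf 'hello, world' | base64 | base64 -d
```

The same round trip works with uuencode -m | uudecode -m -r -p on systems that ship those utilities, since -m selects the same Base64 method.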
bintrans, base64, uuencode, uudecode – encode/decode a binary file
bintrans [algorithm] [...] uuencode [-m] [-r] [-o output_file] [file] name uudecode [-cimprs] [file ...] uudecode [-i] -o output_file b64encode [-r] [-w column] [-o output_file] [file] name b64decode [-cimprs] [file ...] b64decode [-i] -o output_file [file] base64 [-h | -D | -d] [-b count] [-i input_file] [-o output_file]
The following example packages up a source tree, compresses it, uuencodes it and mails it to a user on another system. When uudecode is run on the target system, the file ``src_tree.tar.Z'' will be created which may then be uncompressed and extracted into the original tree. tar cf - src_tree | compress | uuencode src_tree.tar.Z | mail user@example.com The following example unpacks all uuencoded files from your mailbox into your current working directory. uudecode -c < $MAIL The following example extracts a compressed tar archive from your mailbox uudecode -o /dev/stdout < $MAIL | zcat | tar xfv - SEE ALSO basename(1), compress(1), mail(1), uucp(1) (ports/net/freebsd-uucp), uuencode(5) HISTORY The uudecode and uuencode utilities appeared in 4.0BSD. BUGS Files encoded using the traditional algorithm are expanded by 35% (3 bytes become 4 plus control information). macOS 14.5 April 18, 2022 macOS 14.5
swift
Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. The goal of the Swift project is to create the best available language for uses ranging from systems programming, to mobile and desktop apps, scaling up to cloud services. Most importantly, Swift is designed to make writing and maintaining correct programs easier for the developer. To achieve this goal, we believe that the most obvious way to write Swift code must also be: Safe. The most obvious way to write code should also behave in a safe manner. Undefined behavior is the enemy of safety, and developer mistakes should be caught before software is in production. Opting for safety sometimes means Swift will feel strict, but we believe that clarity saves time in the long run. Fast. Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). As such, Swift must be comparable to those languages in performance for most tasks. Performance must also be predictable and consistent, not just fast in short bursts that require clean-up later. There are lots of languages with novel features - being fast is rare. Expressive. Swift benefits from decades of advancement in computer science to offer syntax that is a joy to use, with modern features developers expect. But Swift is never done. We will monitor language advancements and embrace what works, continually evolving to make Swift even better. BUGS Reporting bugs is a great way for anyone to help improve Swift. The issue tracker for Swift, an open-source project, is located at <https://github.com/apple/swift/issues>. If a bug can be reproduced only within an Xcode project or a playground, or if the bug is associated with an Apple NDA, please file a report to Apple's Feedback Assistant at <https://feedbackassistant.apple.com> instead. 
SEE ALSO HOME PAGE <https://swift.org> APPLE DEVELOPER RESOURCES <https://developer.apple.com/swift/resources> CODE REPOSITORIES Swift Programming Language at <https://github.com/apple/swift> Swift Package Manager at <https://github.com/apple/swift-package-manager> swift 5.10 2023-08-17 swift(1)
swift -- Safe, fast, and expressive general-purpose programming language
To invoke the Swift REPL (Read-Eval-Print-Loop): swift repl To execute a Swift program: swift program.swift <arguments> To work with the Swift Package Manager: swift build|package|run|test [options] <inputs> To invoke the Swift compiler: swiftc [options] <inputs> A list of supported options is available through the "-help" option: swift -help swift build -help swiftc -help