afida
afida compares a reference audio file with a distorted version and estimates the perceivable spatial image distortions in terms of image shift and width.
afida – Audio File Image Distortion Analyzer
afida [option] reference_audiofile test_audiofile
-h  Print help text.

Darwin, 3/14/13
null
pgrep
The pgrep command searches the process table on the running system and prints the process IDs of all processes that match the criteria given on the command line. The pkill command searches the process table on the running system and signals all processes that match the criteria given on the command line.

The following options are available:

-F pidfile  Restrict matches to a process whose PID is stored in the pidfile file.
-G gid      Restrict matches to processes with a real group ID in the comma-separated list gid.
-I          Request confirmation before attempting to signal each process.
-L          The pidfile file given for the -F option must be locked with the flock(2) syscall or created with pidfile(3).
-P ppid     Restrict matches to processes with a parent process ID in the comma-separated list ppid.
-U uid      Restrict matches to processes with a real user ID in the comma-separated list uid.
-d delim    Specify a delimiter to be printed between each process ID. The default is a newline. This option can only be used with the pgrep command.
-a          Include process ancestors in the match list. By default, the current pgrep or pkill process and all of its ancestors are excluded (unless -v is used).
-f          Match against full argument lists. The default is to match against process names.
-g pgrp     Restrict matches to processes with a process group ID in the comma-separated list pgrp. The value zero is taken to mean the process group ID of the running pgrep or pkill command.
-i          Ignore case distinctions in both the process table and the supplied pattern.
-l          Long output. For pgrep, print the process name in addition to the process ID for each matching process. If used in conjunction with -f, print the process ID and the full argument list for each matching process. For pkill, display the kill command used for each process killed.
-n          Select only the newest (most recently started) of the matching processes.
-o          Select only the oldest (least recently started) of the matching processes.
-q          For pgrep, do not write anything to standard output.
-t tty      Restrict matches to processes associated with a terminal in the comma-separated list tty. Terminal names may be of the form ttyxx or the shortened form xx. A single dash (‘-’) matches processes not associated with a terminal.
-u euid     Restrict matches to processes with an effective user ID in the comma-separated list euid.
-v          Reverse the sense of the matching; display processes that do not match the given criteria.
-x          Require an exact match of the process name, or argument list if -f is given. The default is to match any substring.
-signal     A non-negative decimal number or symbolic signal name specifying the signal to be sent instead of the default TERM. This option is valid only when given as the first argument to pkill.

If any pattern operands are specified, they are used as extended regular expressions to match the command name or full argument list of each process. Note that a running pgrep or pkill process will never consider itself as a potential match.

EXIT STATUS
The pgrep and pkill utilities return one of the following values upon exit:
0  One or more processes were matched.
1  No processes were matched.
2  Invalid options were specified on the command line.
3  An internal error occurred.
pgrep, pkill – find or signal processes by name
pgrep [-Lafilnoqvx] [-F pidfile] [-G gid] [-P ppid] [-U uid] [-d delim] [-g pgrp] [-t tty] [-u euid] pattern ... pkill [-signal] [-ILafilnovx] [-F pidfile] [-G gid] [-P ppid] [-U uid] [-g pgrp] [-t tty] [-u euid] pattern ...
null
Show the pid of the process holding the /tmp/.X0-lock pid file:
    $ pgrep -F /tmp/.X0-lock
    1211

Show long output for firefox processes:
    $ pgrep -l firefox
    1312 firefox
    1309 firefox
    1288 firefox
    1280 firefox
    1279 firefox
    1278 firefox
    1277 firefox
    1264 firefox

Same as above but just showing the pid of the most recent process:
    $ pgrep -n firefox
    1312

Look for vim processes. Match against the full argument list:
    $ pgrep -f vim
    44968
    30790

Same as above but matching against the ‘list’ word and showing the full argument list:
    $ pgrep -f -l list
    30790 vim list.txt

Send SIGSTOP signal to processes that are an exact match:
    $ pkill -SIGSTOP -f -x "vim list.txt"

Without -f, names over 19 characters will silently fail to match:
    $ vim this_is_a_very_long_file_name &
    [1] 36689
    $
    [1]+  Stopped  vim this_is_a_very_long_file_name
    $ pgrep "vim this"
    $

Same as above using the -f flag:
    $ pgrep -f "vim this"
    36689

SEE ALSO
kill(1), killall(1), ps(1), flock(2), kill(2), sigaction(2), pidfile(3), re_format(7)

HISTORY
The pkill and pgrep utilities first appeared in NetBSD 1.6. They are modelled after utilities of the same name that appeared in Sun Solaris 7. They made their first appearance in FreeBSD 5.3.

AUTHORS
Andrew Doran <ad@NetBSD.org>

macOS 14.5, October 5, 2020
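The exit-status table can drive shell scripts directly. A minimal sketch, assuming a hypothetical daemon name myserverd (not from the man page):

```shell
#!/bin/sh
# Sketch: acting on pgrep's documented exit status.
# "myserverd" is an illustrative daemon name.
if pgrep -x myserverd >/dev/null; then
    echo "myserverd is running"
else
    rc=$?
    case $rc in
        1) echo "myserverd is not running" ;;
        2) echo "invalid options" ;;
        3) echo "internal pgrep error" ;;
    esac
fi
```

Redirecting stdout keeps the check silent; -x avoids accidental substring matches against longer process names.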
yes
The yes utility outputs expletive, or, by default, “y”, forever.

SEE ALSO
jot(1), seq(1)

HISTORY
The yes command appeared in Version 7 AT&T UNIX.

macOS 14.5, June 4, 2014
yes – be repetitively affirmative
yes [expletive]
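In practice yes is piped into a command that consumes the stream and terminates it; a short sketch:

```shell
# head stops the endless stream after three lines (prints "y" three times):
yes | head -n 3

# Repeat an arbitrary "expletive" instead of the default "y":
yes please | head -n 2
```

When head exits, yes receives SIGPIPE and terminates, so the pipeline ends cleanly.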
null
null
lessecho
lessecho is a program that simply echoes its arguments on standard output, except that any metacharacter in the output is preceded by an "escape" character, which by default is a backslash.
lessecho - expand metacharacters
lessecho [-ox] [-cx] [-pn] [-dn] [-mx] [-nn] [-ex] [-a] file ...
A summary of options is included below.

-ex  Specifies "x", rather than backslash, to be the escape char for metachars. If x is "-", no escape char is used and arguments containing metachars are surrounded by quotes instead.
-ox  Specifies "x", rather than double-quote, to be the open quote character, which is used if the -e- option is specified.
-cx  Specifies "x" to be the close quote character.
-pn  Specifies "n" to be the open quote character, as an integer.
-dn  Specifies "n" to be the close quote character, as an integer.
-mx  Specifies "x" to be a metachar. By default, no characters are considered metachars.
-nn  Specifies "n" to be a metachar, as an integer.
-fn  Specifies "n" to be the escape char for metachars, as an integer.
-a   Specifies that all arguments are to be quoted. The default is that only arguments containing metacharacters are quoted.

SEE ALSO
less(1)

AUTHOR
This manual page was written by Thomas Schoepf <schoepf@debian.org>, for the Debian GNU/Linux system (but may be used by others). Report bugs at https://github.com/gwsw/less/issues.

Version 581.2: 28 Apr 2021  LESSECHO(1)
null
sdiff
sdiff displays two files side by side, with any differences between the two highlighted as follows: new lines are marked with ‘>’; deleted lines are marked with ‘<’; and changed lines are marked with ‘|’. sdiff can also be used to interactively merge two files, prompting at each set of differences. See the -o option for an explanation.

The options are:

-l --left-column
    Only print the left column for identical lines.
-o --output outfile
    Interactively merge file1 and file2 into outfile. In this mode, the user is prompted for each set of differences. See EDITOR and VISUAL, below, for details of which editor, if any, is invoked. The commands are as follows:
    l | 1  Choose left set of diffs.
    r | 2  Choose right set of diffs.
    s      Silent mode – identical lines are not printed.
    v      Verbose mode – identical lines are printed.
    e      Start editing an empty file, which will be merged into outfile upon exiting the editor.
    e l    Start editing file with left set of diffs.
    e r    Start editing file with right set of diffs.
    e b    Start editing file with both sets of diffs.
    q      Quit sdiff.
-s --suppress-common-lines
    Skip identical lines.
-w --width width
    Print a maximum of width characters on each line. The default is 130 characters.

Options passed to diff(1) are:

-a --text
    Treat file1 and file2 as text files.
-b --ignore-space-change
    Ignore trailing blank spaces.
-d --minimal
    Minimize diff size.
-I --ignore-matching-lines regexp
    Ignore line changes matching regexp. All lines in the change must match regexp for the change to be ignored.
-i --ignore-case
    Do a case-insensitive comparison.
-t --expand-tabs
    Expand tabs to spaces.
-W --ignore-all-space
    Ignore all spaces.
-B --ignore-blank-lines
    Ignore blank lines.
-E --ignore-tab-expansion
    Treat tabs and eight spaces as the same.
-t --ignore-tabs
    Ignore tabs.
-H --speed-large-files
    Assume scattered small changes in a large file.
--ignore-file-name-case
    Ignore the case of file names.
--no-ignore-file-name-case
    Do not ignore file name case.
--strip-trailing-cr
    Strip trailing carriage returns.
--tabsize NUM
    Change the size of tabs (default is 8).
--diff-program PROGRAM
    Use PROGRAM to compare files.

ENVIRONMENT
EDITOR, VISUAL
    Specifies an editor to use with the -o option. If both EDITOR and VISUAL are set, VISUAL takes precedence. If neither EDITOR nor VISUAL are set, the default is vi(1).
TMPDIR
    Specifies a directory for temporary files to be created. The default is /tmp.

SEE ALSO
cmp(1), diff(1), diff3(1), vi(1), re_format(7)

AUTHORS
sdiff was written from scratch for the public domain by Ray Lai ⟨ray@cyth.net⟩.

CAVEATS
Tabs are treated as anywhere from one to eight characters wide, depending on the current column. Terminals that treat tabs as eight characters wide will look best.

macOS 14.5, April 8, 2017
sdiff – side-by-side diff
sdiff [-abdilstHW] [-I regexp] [-o outfile] [-w width] file1 file2
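A quick non-interactive sketch of the side-by-side output (the file names and contents are illustrative):

```shell
# Two small files that differ on one line:
printf 'alpha\nbeta\ngamma\n' > old.txt
printf 'alpha\nBETA\ngamma\n' > new.txt

# -s suppresses identical lines, so only the changed pair is printed,
# joined by the '|' change marker; like diff(1), sdiff exits 1 on differences.
sdiff -s -w 40 old.txt new.txt || echo "exit status $? (files differ)"
```

Like diff(1), an exit status of 0 means the files are identical and 1 means they differ, so the command composes naturally with shell conditionals.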
null
null
afscexpand
The afscexpand command is used to decompress files compressed with HFS+ compression. Paths specified are recursively traversed (while remaining on the starting filesystem) and all encountered files are decompressed. If the -c option is specified, the encountered files will not be decompressed, but their contents will be printed to standard output.

HISTORY
afscexpand first appeared in Mac OS X 10.6.

Mac OS X, February 7, 2009
afscexpand – decompress files compressed with HFS+ compression
afscexpand [-c] path [...]
null
null
spfquery5.30
spfquery checks if a given set of e-mail parameters (e.g., the SMTP sender's IP address) matches the responsible domain's Sender Policy Framework (SPF) policy. For more information on SPF see <http://www.openspf.org>.

Preferred Usage
The following usage forms are preferred over the legacy forms used by older spfquery versions:

The --identity form checks if the given ip-address is an authorized SMTP sender for the given "helo" hostname, "mfrom" envelope sender e-mail address, or "pra" (so-called purported responsible address) e-mail address, depending on the value of the --scope option (which defaults to mfrom if omitted).

The --file form reads "ip-address identity [helo-identity]" tuples from the file with the specified filename, or from standard input if filename is -, and checks them against the specified scope (mfrom by default).

Both forms support an optional --versions option, which specifies a comma-separated list of the SPF version numbers of SPF records that may be used. 1 means that "v=spf1" records should be used. 2 means that "spf2.0" records should be used. Defaults to 1,2, i.e., uses any SPF records that are available. Records of a higher version are preferred.

Legacy Usage
spfquery versions before 2.500 featured the following usage forms, which are discouraged but still supported for backwards compatibility:

The --helo form checks if the given ip-address is an authorized SMTP sender for the "HELO" hostname given as the identity (so-called "HELO" check).

The --mfrom form checks if the given ip-address is an authorized SMTP sender for the envelope sender email-address (or domain) given as the identity (so-called "MAIL FROM" check). If a domain is given instead of an e-mail address, "postmaster" will be substituted for the localpart.

The --pra form checks if the given ip-address is an authorized SMTP sender for the PRA (Purported Responsible Address) e-mail address given as the identity.
Other Usage The --version form prints version information of spfquery. The --help form prints usage information for spfquery.
spfquery - (Mail::SPF) - Checks if a given set of e-mail parameters matches a domain's SPF policy VERSION 2.501
Preferred usage: spfquery [--versions|-v 1|2|1,2] [--scope|-s helo|mfrom|pra] --identity|--id identity --ip-address|--ip ip-address [--helo-identity|--helo-id helo-identity] [OPTIONS] spfquery [--versions|-v 1|2|1,2] [--scope|-s helo|mfrom|pra] --file|-f filename|- [OPTIONS] Legacy usage: spfquery --helo helo-identity --ip-address|--ip ip-address [OPTIONS] spfquery --mfrom mfrom-identity --ip-address|--ip ip-address [--helo helo-identity] [OPTIONS] spfquery --pra pra-identity --ip-address|--ip ip-address [OPTIONS] Other usage: spfquery --version|-V spfquery --help
Standard Options
The preferred and legacy forms optionally take any of the following OPTIONS:

--default-explanation string
--def-exp string
    Use the specified string as the default explanation if the authority domain does not specify an explanation string of its own.
--hostname hostname
    Use hostname as the host name of the local system instead of auto-detecting it.
--keep-comments
--no-keep-comments
    Do (not) print any comments found when reading from a file or from standard input.
--sanitize (currently ignored)
--no-sanitize (currently ignored)
    Do (not) sanitize the output by condensing consecutive white-space into a single space and replacing non-printable characters with question marks. Enabled by default.
--debug (currently ignored)
    Print out debug information.

Black Magic Options
Several options that were supported by earlier versions of spfquery are considered black magic (i.e., potentially dangerous for the innocent user) and are thus disabled by default. If the Mail::SPF::BlackMagic Perl module is installed, they may be enabled by specifying --enable-black-magic.

--max-dns-interactive-terms n
    Evaluate a maximum of n DNS-interactive mechanisms and modifiers per SPF check. Defaults to 10. Do not override the default unless you know what you are doing!
--max-name-lookups-per-term n
    Perform a maximum of n DNS name look-ups per mechanism or modifier. Defaults to 10. Do not override the default unless you know what you are doing!
--authorize-mxes-for email-address|domain,...
    Consider all the MXes of the comma-separated list of email-addresses and domains as inherently authorized.
--tfwl
    Perform "trusted-forwarder.org" accreditation checking.
--guess spf-terms
    Use spf-terms as a default record if no SPF record is found.
--local spf-terms
    Process spf-terms as local policy before resorting to a default result (the implicit or explicit "all" mechanism at the end of the domain's SPF record). For example, this could be used for whitelisting one's secondary MXes: "mx:mydomain.example.org".
--override domain=spf-record
--fallback domain=spf-record
    Set overrides and fallbacks. Each option can be specified multiple times. For example:
        --override example.org='v=spf1 -all'
        --override '*.example.net'='v=spf1 a mx -all'
        --fallback example.com='v=spf1 -all'

RESULT CODES
pass
    The specified IP address is an authorized SMTP sender for the identity.
fail
    The specified IP address is not an authorized SMTP sender for the identity.
softfail
    The specified IP address is not an authorized SMTP sender for the identity, however the authority domain is still testing out its SPF policy.
neutral
    The identity's authority domain makes no assertion about the status of the IP address.
permerror
    A permanent error occurred while evaluating the authority domain's policy (e.g., a syntax error in the SPF record). Manual intervention is required from the authority domain.
temperror
    A temporary error occurred while evaluating the authority domain's policy (e.g., a DNS error). Try again later.
none
    There is no applicable SPF policy for the identity domain.

EXIT CODES
    Result     | Exit code
    -----------+----------
    pass       | 0
    fail       | 1
    softfail   | 2
    neutral    | 3
    permerror  | 4
    temperror  | 5
    none       | 6
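The exit-code table maps cleanly into a wrapper script. A sketch, assuming spfquery is installed (the spf_result helper is illustrative, and a real spfquery run needs DNS access, so the invocation is shown commented):

```shell
# Map spfquery's documented exit codes back to SPF result names.
spf_result() {
    case "$1" in
        0) echo pass ;;
        1) echo fail ;;
        2) echo softfail ;;
        3) echo neutral ;;
        4) echo permerror ;;
        5) echo temperror ;;
        6) echo none ;;
        *) echo unknown ;;
    esac
}

# Typical use (requires network/DNS; identity and IP are illustrative):
#   spfquery --scope mfrom --id user@example.com --ip 1.2.3.4
#   spf_result $?
spf_result 2   # prints "softfail"
```

Factoring the mapping into a function keeps mail-filter scripts readable when they have to branch on more than pass/fail.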
spfquery --scope mfrom --id user@example.com --ip 1.2.3.4
spfquery --file test_data
echo "127.0.0.1 user@example.com helohost.example.com" | spfquery -f -

COMPATIBILITY
spfquery has undergone the following interface changes compared to earlier versions:

2.500
• A new preferred usage style for performing individual SPF checks has been introduced. The new style accepts a unified --identity option and an optional --scope option that specifies the type (scope) of the identity. In contrast, the legacy usage style requires a separate usage form for every supported scope. See "Preferred usage" and "Legacy usage" for details.
• The former "unknown" and "error" result codes have been renamed to "permerror" and "temperror", respectively, in order to comply with RFC 4408 terminology.
• SPF checks with an empty identity are no longer supported. In the case of an empty "MAIL FROM" SMTP transaction parameter, perform a check with the "helo" scope directly.
• The --debug and --(no-)sanitize options are currently ignored by this version of spfquery. They will again be supported in the future.
• Several features that were supported by earlier versions of spfquery are considered black magic and thus are now disabled by default. See "Black Magic Options".
• Several option names have been deprecated. This is a list of them and their preferred synonyms:

    Deprecated options   | Preferred options
    ---------------------+-----------------------------
    --sender, -s         | --mfrom
    --ipv4, -i           | --ip-address, --ip
    --name               | --hostname
    --max-lookup-count,  | --max-dns-interactive-terms
      --max-lookup       |
    --rcpt-to, -r        | --authorize-mxes-for
    --trusted            | --tfwl

SEE ALSO
Mail::SPF, spfd(8), <http://tools.ietf.org/html/rfc4408>

AUTHORS
This version of spfquery is a complete rewrite by Julian Mehnle <julian@mehnle.net>, based on an earlier version written by Meng Weng Wong <mengwong+spf@pobox.com> and Wayne Schlitt <wayne@schlitt.net>.

perl v5.30.3, 2024-04-13  SPFQUERY(1)
db_deadlock
The db_deadlock utility traverses the database environment lock region, and aborts a lock request each time it detects a deadlock or a lock request that has timed out. By default, in the case of a deadlock, a random lock request is chosen to be aborted. This utility should be run as a background daemon, or the underlying Berkeley DB deadlock detection interfaces should be called in some other way, whenever there are multiple threads or processes accessing a database and at least one of them is modifying it.

The options are as follows:

-a  When a deadlock is detected, abort the locker:
        m  with the greatest number of locks
        n  with the fewest number of locks
        o  with the oldest locker ID
        w  with the fewest number of write locks
        y  with the youngest locker ID
    When lock or transaction timeouts have been specified:
        e  abort any lock request that has timed out
-h  Specify a home directory for the database environment; by default, the current working directory is used.
-L  Log the execution of the db_deadlock utility to the specified file in the following format, where ### is the process ID, and the date is the time the utility was started.
        db_deadlock: ### Wed Jun 15 01:23:45 EDT 1995
    This file will be removed if the db_deadlock utility exits gracefully.
-t  Check the database environment every sec seconds plus usec microseconds to see if a process has been forced to wait for a lock; if one has, review the database environment lock structures.
-V  Write the library version number to the standard output, and exit.
-v  Run in verbose mode, generating messages each time the detector runs.

If the -t option is not specified, db_deadlock will run once and exit.

The db_deadlock utility uses a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_deadlock should always be given the chance to detach from the environment and exit gracefully. To cause db_deadlock to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT).

The db_deadlock utility does not attempt to create the Berkeley DB shared memory regions if they do not already exist. The application which creates the region should be started first, and then, once the region is created, the db_deadlock utility should be started.

The DB_ENV->lock_detect method is the underlying method used by the db_deadlock utility. See the db_deadlock utility source code for an example of using DB_ENV->lock_detect in an IEEE/ANSI Std 1003.1 (POSIX) environment.

The db_deadlock utility exits 0 on success, and >0 if an error occurs.

ENVIRONMENT
DB_HOME
    If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open.

SEE ALSO
db_archive(1), db_checkpoint(1), db_dump(1), db_load(1), db_printlog(1), db_recover(1), db_stat(1), db_upgrade(1), db_verify(1)

Darwin, December 3, 2003
db_deadlock
db_deadlock [-Vv] [-a e | m | n | o | w | y] [-h home] [-L file] [-t sec.usec]
null
null
syspolicy_check
syspolicy_check is used to check whether macOS application bundles are ready for upload to the Apple notary service, or ready for distribution to users outside of the Mac App Store. syspolicy_check combines checks from multiple parts of macOS, including the frameworks and subsystems exposed by the existing codesign, spctl, and stapler commands.

syspolicy_check requires exactly one subcommand option to determine what action is to be performed. You can also pass -v or --verbose up to 3 times to get increasingly verbose output. --json outputs any errors to stdout in JSON format.

SUBCOMMANDS
The subcommands are as follows:

notary-submission
    Runs the same checks on an app as the Apple notary service, to make sure the app is ready for uploading to the service. Notarization gives users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components. Notarization is not App Review. The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly. If there are no issues, the notary service generates a ticket for you to staple to your software; the notary service also publishes that ticket online where Gatekeeper can find it.
    For more information on Notarization, please see: https://developer.apple.com/documentation/security/notarizing_macos_software_before_distribution
    For more information on triaging common Notarization failures, please see: https://developer.apple.com/documentation/security/notarizing_macos_software_before_distribution/resolving_common_notarization_issues

distribution
    Runs the same checks on your app as macOS does when determining if your app can be executed. This includes Gatekeeper checks, XProtect checks, provisioning profile checks, and more. For more information on how macOS determines if an app can run, along with tips for debugging any launch time issues, please see: https://developer.apple.com/forums/thread/706442.

SEE ALSO
codesign(1), spctl(8), stapler(1)

macOS 14.5, February 7, 2023
syspolicy_check – Check if macOS app is ready for notarization or distribution
syspolicy_check notary-submission path [--verbose] [--json] syspolicy_check distribution path [--verbose] [--json] syspolicy_check [notary-submission | distribution] --help
null
null
ldapcompare
ldapcompare is a shell-accessible interface to the ldap_compare_ext(3) library call. ldapcompare opens a connection to an LDAP server, binds, and performs a compare using the specified parameters. The DN should be a distinguished name in the directory. attr should be a known attribute. If followed by one colon, the assertion value is provided as a string; if followed by two colons, the value is provided in base64 encoding. The result code of the compare is provided as the exit code and, unless run with -z, the program prints TRUE, FALSE, or UNDEFINED on standard output.
ldapcompare - LDAP compare tool
ldapcompare [-n] [-v] [-z] [-M[M]] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-P {2|3}] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] DN {attr:value | attr::b64value}
-n  Show what would be done, but don't actually perform the compare. Useful for debugging in conjunction with -v.
-v  Run in verbose mode, with many diagnostics written to standard output.
-z  Run in quiet mode; no output is written. You must check the return status. Useful in shell scripts.
-M[M]
    Enable manage DSA IT control. -MM makes the control critical.
-d debuglevel
    Set the LDAP debugging level to debuglevel. ldapcompare must be compiled with LDAP_DEBUG defined for this option to have any effect.
-x  Use simple authentication instead of SASL.
-D binddn
    Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value.
-W  Prompt for simple authentication. This is used instead of specifying the password on the command line.
-w passwd
    Use passwd as the password for simple authentication.
-y passwdfile
    Use complete contents of passwdfile as the password for simple authentication. Note that complete means that any leading or trailing whitespace, including newlines, will be considered part of the password and, unlike other software, will not be stripped. As a consequence, passwords stored in files by commands like echo(1) will not behave as expected, since echo(1) by default appends a trailing newline to the echoed string. The recommended portable way to store a cleartext password in a file for use with this option is to use slappasswd(8) with {CLEARTEXT} as hash and the option -n.
-H ldapuri
    Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected.
-h ldaphost
    Specify an alternate host on which the ldap server is running. Deprecated in favor of -H.
-p ldapport
    Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H.
-P {2|3}
    Specify the LDAP protocol version to use.
-e [!]ext[=extparam]
-E [!]ext[=extparam]
    Specify general extensions with -e and search extensions with -E. '!' indicates criticality.
    General extensions:
        [!]assert=<filter>     (an RFC 4515 Filter)
        [!]authzid=<authzid>   ("dn:<dn>" or "u:<user>")
        [!]manageDSAit
        [!]noop
        ppolicy
        [!]postread[=<attrs>]  (a comma-separated attribute list)
        [!]preread[=<attrs>]   (a comma-separated attribute list)
        abandon, cancel        (SIGINT sends abandon/cancel; not really controls)
    Search extensions:
        [!]domainScope                  (domain scope)
        [!]mv=<filter>                  (matched values filter)
        [!]pr=<size>[/prompt|noprompt]  (paged results/prompt)
        [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]  (server side sorting)
        [!]subentries[=true|false]      (subentries)
        [!]sync=ro[/<cookie>]           (LDAP Sync refreshOnly)
              rp[/<cookie>][/<slimit>]  (LDAP Sync refreshAndPersist)
-O security-properties
    Specify SASL security properties.
-I  Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed.
-Q  Enable SASL Quiet mode. Never prompt.
-U authcid
    Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used.
-R realm
    Specify the realm of the authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used.
-X authzid
    Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username>
-Y mech
    Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows.
-Z[Z]
    Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful.
ldapcompare "uid=babs,dc=example,dc=com" sn:Jensen
ldapcompare "uid=babs,dc=example,dc=com" sn::SmVuc2Vu

The two commands are equivalent: "SmVuc2Vu" is the base64 encoding of "Jensen".

LIMITATIONS
Requiring the value to be passed on the command line is limiting and introduces some security concerns. The command should support a mechanism to specify the location (file name or URL) to read the value from.

SEE ALSO
ldap.conf(5), ldif(5), ldap(3), ldap_compare_ext(3)

AUTHOR
The OpenLDAP Project <http://www.openldap.org/>

ACKNOWLEDGEMENTS
OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release.

OpenLDAP 2.4.28, 2011/11/24  LDAPCOMPARE(1)
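The attr::b64value form carries the assertion value base64-encoded, which is how the two example invocations coincide. A sketch computing the encoding (the ldapcompare calls themselves need a reachable server, so they are shown commented, with a hypothetical host):

```shell
# The double-colon form takes the assertion value base64-encoded.
# Reproduce the encoding used in the example above:
printf '%s' 'Jensen' | base64    # prints "SmVuc2Vu"

# Against a live server (host and entry are illustrative), these two
# calls would be equivalent; the exit code carries the LDAP result,
# e.g. compareTrue (6) or compareFalse (5):
#   ldapcompare -x -H ldap://ldap.example.com "uid=babs,dc=example,dc=com" sn:Jensen
#   ldapcompare -x -H ldap://ldap.example.com "uid=babs,dc=example,dc=com" sn::SmVuc2Vu
```

Note that printf (not echo) is used so no trailing newline is folded into the encoded value, mirroring the -y caveat in the options above.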
unexpand
The expand utility processes the named files or the standard input writing the standard output with tabs changed into blanks. Backspace characters are preserved into the output and decrement the column count for tab calculations. The expand utility is useful for pre-processing character files (before sorting, looking at specific columns, etc.) that contain tabs.

The unexpand utility puts tabs back into the data from the standard input or the named files and writes the result on the standard output.

The following options are available:

-a  (unexpand only.) By default, only leading blanks and tabs are reconverted to maximal strings of tabs. If the -a option is given, then tabs are inserted whenever they would compress the resultant file by replacing two or more characters.
-t tab1,tab2,...,tabn
    Set tab stops at column positions tab1, tab2, ..., tabn. If only a single number is given, tab stops are set that number of column positions apart instead of the default number of 8.

ENVIRONMENT
The LANG, LC_ALL and LC_CTYPE environment variables affect the execution of expand and unexpand as described in environ(7).

EXIT STATUS
The expand and unexpand utilities exit 0 on success, and >0 if an error occurs.

STANDARDS
The expand and unexpand utilities conform to IEEE Std 1003.1-2001 (“POSIX.1”).

HISTORY
The expand utility first appeared in 1BSD.

macOS 14.5, June 6, 2015
expand, unexpand – expand tabs to spaces, and vice versa
expand [-t tab1,tab2,...,tabn] [file ...] unexpand [-a | -t tab1,tab2,...,tabn] [file ...]
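A round-trip sketch of the two utilities with a 4-column tab stop:

```shell
# Convert a tab to spaces, tab stops every 4 columns:
printf 'a\tb\n' | expand -t 4        # prints "a   b" (three spaces)

# Reverse it: -a converts internal runs of blanks back to tabs
# (with GNU unexpand, -t alone also implies -a):
printf 'a   b\n' | unexpand -a -t 4  # prints "a<TAB>b"
```

Here "a" sits in column 0, so the tab advances to the next stop at column 4, which expands to exactly three spaces; unexpand -a collapses that run back into a single tab because doing so replaces two or more characters.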
null
null
prove5.30
null
prove - Run tests through a TAP harness. USAGE prove [options] [files or directories]
null
Boolean options:

-v, --verbose      Print all test lines.
-l, --lib          Add 'lib' to the path for your tests (-Ilib).
-b, --blib         Add 'blib/lib' and 'blib/arch' to the path for your tests.
-s, --shuffle      Run the tests in random order.
-c, --color        Colored test output (default).
--nocolor          Do not color test output.
--count            Show the X/Y test count when not verbose (default).
--nocount          Disable the X/Y test count.
-D, --dry          Dry run. Show tests that would have run.
-f, --failures     Show failed tests.
-o, --comments     Show comments.
--ignore-exit      Ignore exit status from test scripts.
-m, --merge        Merge test scripts' STDERR with their STDOUT.
-r, --recurse      Recursively descend into directories.
--reverse          Run the tests in reverse order.
-q, --quiet        Suppress some test output while running tests.
-Q, --QUIET        Only print summary results.
-p, --parse        Show full list of TAP parse errors, if any.
--directives       Only show results with TODO or SKIP directives.
--timer            Print elapsed time after each test.
--trap             Trap Ctrl-C and print summary on interrupt.
--normalize        Normalize TAP output in verbose output.
-T                 Enable tainting checks.
-t                 Enable tainting warnings.
-W                 Enable fatal warnings.
-w                 Enable warnings.
-h, --help         Display this help.
-?,                Display this help.
-V, --version      Display the version.
-H, --man          Longer manpage for prove.
--norc             Don't process default .proverc.

Options that take arguments:

-I                 Library paths to include.
-P                 Load plugin (searches App::Prove::Plugin::*.)
-M                 Load a module.
-e, --exec         Interpreter to run the tests ('' for compiled tests.)
--ext              Set the extension for tests (default '.t')
--harness          Define test harness to use. See TAP::Harness.
--formatter        Result formatter to use. See FORMATTERS.
--source           Load and/or configure a SourceHandler. See SOURCE HANDLERS.
-a, --archive out.tgz
                   Store the resulting TAP in an archive file.
-j, --jobs N       Run N test jobs in parallel (try 9.)
--state=opts       Control prove's persistent state.
--statefile=file Use `file` instead of `.prove` for state --rc=rcfile Process options from rcfile --rules Rules for parallel vs sequential processing. NOTES .proverc If ~/.proverc or ./.proverc exist they will be read and any options they contain processed before the command line options. Options in .proverc are specified in the same way as command line options: # .proverc --state=hot,fast,save -j9 Additional option files may be specified with the "--rc" option. Default option file processing is disabled by the "--norc" option. Under Windows and VMS the option file is named _proverc rather than .proverc and is sought only in the current directory. Reading from "STDIN" If you have a list of tests (or URLs, or anything else you want to test) in a file, you can add them to your tests by using a '-': prove - < my_list_of_things_to_test.txt See the "README" in the "examples" directory of this distribution. Default Test Directory If no files or directories are supplied, "prove" looks for all files matching the pattern "t/*.t". Colored Test Output Colored test output using TAP::Formatter::Color is the default, but if output is not to a terminal, color is disabled. You can override this by adding the "--color" switch. Color support requires Term::ANSIColor and, on windows platforms, also Win32::Console::ANSI. If the necessary module(s) are not installed colored output will not be available. Exit Code If the tests fail "prove" will exit with non-zero status. Arguments to Tests It is possible to supply arguments to tests. To do so separate them from prove's own arguments with the arisdottle, '::'. For example prove -v t/mytest.t :: --url http://example.com would run t/mytest.t with the options '--url http://example.com'. When running multiple tests they will each receive the same arguments. "--exec" Normally you can just pass a list of Perl tests and the harness will know how to execute them. 
However, if your tests are not written in Perl or if you want all tests invoked exactly the same way, use the "-e", or "--exec" switch: prove --exec '/usr/bin/ruby -w' t/ prove --exec '/usr/bin/perl -Tw -mstrict -Ilib' t/ prove --exec '/path/to/my/customer/exec' "--merge" If you need to make sure your diagnostics are displayed in the correct order relative to test results you can use the "--merge" option to merge the test scripts' STDERR into their STDOUT. This guarantees that STDOUT (where the test results appear) and STDERR (where the diagnostics appear) will stay in sync. The harness will display any diagnostics your tests emit on STDERR. Caveat: this is a bit of a kludge. In particular note that if anything that appears on STDERR looks like a test result the test harness will get confused. Use this option only if you understand the consequences and can live with the risk. "--trap" The "--trap" option will attempt to trap SIGINT (Ctrl-C) during a test run and display the test summary even if the run is interrupted "--state" You can ask "prove" to remember the state of previous test runs and select and/or order the tests to be run based on that saved state. The "--state" switch requires an argument which must be a comma separated list of one or more of the following options. "last" Run the same tests as the last time the state was saved. This makes it possible, for example, to recreate the ordering of a shuffled test. # Run all tests in random order $ prove -b --state=save --shuffle # Run them again in the same order $ prove -b --state=last "failed" Run only the tests that failed on the last run. # Run all tests $ prove -b --state=save # Run failures $ prove -b --state=failed If you also specify the "save" option newly passing tests will be excluded from subsequent runs. # Repeat until no more failures $ prove -b --state=failed,save "passed" Run only the passed tests from last time. Useful to make sure that no new problems have been introduced. 
"all" Run all tests in normal order. Multiple options may be specified, so to run all tests with the failures from last time first: $ prove -b --state=failed,all,save "hot" Run the tests that most recently failed first. The last failure time of each test is stored. The "hot" option causes tests to be run in most-recent-failure order. $ prove -b --state=hot,save Tests that have never failed will not be selected. To run all tests with the most recently failed first use $ prove -b --state=hot,all,save This combination of options may also be specified thus $ prove -b --state=adrian "todo" Run any tests with todos. "slow" Run the tests in slowest to fastest order. This is useful in conjunction with the "-j" parallel testing switch to ensure that your slowest tests start running first. $ prove -b --state=slow -j9 "fast" Run the tests in fastest to slowest order. "new" Run the tests in newest to oldest order based on the modification times of the test scripts. "old" Run the tests in oldest to newest order. "fresh" Run those test scripts that have been modified since the last test run. "save" Save the state on exit. The state is stored in a file called .prove (_prove on Windows and VMS) in the current directory. The "--state" switch may be used more than once. $ prove -b --state=hot --state=all,save --rules The "--rules" option is used to control which tests are run sequentially and which are run in parallel, if the "--jobs" option is specified. The option may be specified multiple times, and the order matters. The most practical use is likely to specify that some tests are not "parallel-ready". Since mentioning a file with --rules doesn't cause it to be selected to run as a test, you can "set and forget" some rules preferences in your .proverc file. Then you'll be able to take maximum advantage of the performance benefits of parallel testing, while some exceptions are still run in sequence. 
--rules examples # All tests are allowed to run in parallel, except those starting with "p" --rules='seq=t/p*.t' --rules='par=**' # All tests must run in sequence except those starting with "p", which should be run in parallel --rules='par=t/p*.t' --rules resolution • By default, all tests are eligible to be run in parallel. Specifying any of your own rules removes this one. • "First match wins". The first rule that matches a test will be the one that applies. • Any test which does not match a rule will be run in sequence at the end of the run. • The existence of a rule does not imply selecting a test. You must still specify the tests to run. • Specifying a rule to allow tests to run in parallel does not make them run in parallel. You still need to specify the number of parallel "jobs" in your Harness object. --rules Glob-style pattern matching We implement our own glob-style pattern matching for --rules. Here are the supported patterns: ** is any number of characters, including /, within a pathname * is zero or more characters within a filename/directory name ? is exactly one character within a filename/directory name {foo,bar,baz} is any of foo, bar or baz. \ is an escape character More advanced specifications for parallel vs sequence run rules If you need more advanced management of what runs in parallel vs in sequence, see the associated 'rules' documentation in TAP::Harness and TAP::Parser::Scheduler. If what's possible directly through "prove" is not sufficient, you can write your own harness to access these features directly. @INC prove introduces a separation between "options passed to the perl which runs prove" and "options passed to the perl which runs tests"; this distinction is by design. Thus the perl which is running a test starts with the default @INC. Additional library directories can be added via the "PERL5LIB" environment variable, via -Ifoo in "PERL5OPT" or via the "-Ilib" option to prove. 
Taint Mode Normally when a Perl program is run in taint mode the contents of the "PERL5LIB" environment variable do not appear in @INC. Because "PERL5LIB" is often used during testing to add build directories to @INC prove passes the names of any directories found in "PERL5LIB" as -I switches. The net effect of this is that "PERL5LIB" is honoured even when prove is run in taint mode. FORMATTERS You can load a custom TAP::Parser::Formatter: prove --formatter MyFormatter SOURCE HANDLERS You can load custom TAP::Parser::SourceHandlers, to change the way the parser interprets particular sources of TAP. prove --source MyHandler --source YetAnother t If you want to provide config to the source you can use: prove --source MyCustom \ --source Perl --perl-option 'foo=bar baz' --perl-option avg=0.278 \ --source File --file-option extensions=.txt --file-option extensions=.tmp t --source pgTAP --pgtap-option pset=format=html --pgtap-option pset=border=2 Each "--$source-option" option must specify a key/value pair separated by an "=". If an option can take multiple values, just specify it multiple times, as with the "extensions=" examples above. If the option should be a hash reference, specify the value as a second pair separated by a "=", as in the "pset=" examples above (escape "=" with a backslash). All "--sources" are combined into a hash, and passed to "new" in TAP::Harness's "sources" parameter. See TAP::Parser::IteratorFactory for more details on how configuration is passed to SourceHandlers. PLUGINS Plugins can be loaded using the "-Pplugin" syntax, eg: prove -PMyPlugin This will search for a module named "App::Prove::Plugin::MyPlugin", or failing that, "MyPlugin". If the plugin can't be found, "prove" will complain & exit. You can pass arguments to your plugin by appending "=arg1,arg2,etc" to the plugin name: prove -PMyPlugin=fou,du,fafa Please check individual plugin documentation for more details. 
Available Plugins For an up-to-date list of plugins available, please check CPAN: <http://search.cpan.org/search?query=App%3A%3AProve+Plugin> Writing Plugins Please see "PLUGINS" in App::Prove. perl v5.30.3 2024-04-13 PROVE(1)
perlivp5.30
The perlivp program is set up at Perl source code build time to test the Perl version it was built under. It can be used after running: make install (or your platform's equivalent procedure) to verify that perl and its libraries have been installed correctly. A correct installation is verified by output that looks like: ok 1 ok 2 etc.
perlivp - Perl Installation Verification Procedure
perlivp [-p] [-v] [-h]
-h help Prints out a brief help message. -p print preface Gives a description of each test prior to performing it. -v verbose Gives more detailed information about each test, after it has been performed. Note that any failed tests ought to print out some extra information whether or not -v is thrown. DIAGNOSTICS • print "# Perl binary '$perlpath' does not appear executable.\n"; Likely to occur for a perl binary that was not properly installed. Correct by conducting a proper installation. • print "# Perl version '$]' installed, expected $ivp_VERSION.\n"; Likely to occur for a perl that was not properly installed. Correct by conducting a proper installation. • print "# Perl \@INC directory '$_' does not appear to exist.\n"; Likely to occur for a perl library tree that was not properly installed. Correct by conducting a proper installation. • print "# Needed module '$_' does not appear to be properly installed.\n"; One of the two modules that is used by perlivp was not present in the installation. This is a serious error since it adversely affects perlivp's ability to function. You may be able to correct this by performing a proper perl installation. • print "# Required module '$_' does not appear to be properly installed.\n"; An attempt to "eval "require $module"" failed, even though the list of extensions indicated that it should succeed. Correct by conducting a proper installation. • print "# Unnecessary module 'bLuRfle' appears to be installed.\n"; This test not coming out ok could indicate that you have in fact installed a bLuRfle.pm module or that the "eval " require \"$module_name.pm\"; "" test may give misleading results with your installation of perl. If yours is the latter case then please let the author know. • print "# file",+($#missing == 0) ? '' : 's'," missing from installation:\n"; One or more files turned up missing according to a run of "ExtUtils::Installed -> validate()" over your installation. Correct by conducting a proper installation. 
For further information on how to conduct a proper installation consult the INSTALL file that comes with the perl source and the README file for your platform. AUTHOR Peter Prymmer perl v5.30.3 2024-04-13 PERLIVP(1)
git-upload-pack
Invoked by git fetch-pack, learns what objects the other side is missing, and sends them after packing. This command is usually not invoked directly by the end user. The UI for the protocol is on the git fetch-pack side, and the program pair is meant to be used to pull updates from a remote repository. For push operations, see git send-pack.
git-upload-pack - Send objects packed back to git-fetch-pack
git-upload-pack [--[no-]strict] [--timeout=<n>] [--stateless-rpc] [--advertise-refs] <directory>
--[no-]strict Do not try <directory>/.git/ if <directory> is no Git directory. --timeout=<n> Interrupt transfer after <n> seconds of inactivity. --stateless-rpc Perform only a single read-write cycle with stdin and stdout. This fits with the HTTP POST request processing model where a program may read the request, write a response, and must exit. --http-backend-info-refs Used by git-http-backend(1) to serve up $GIT_URL/info/refs?service=git-upload-pack requests. See "Smart Clients" in gitprotocol-http(5) and "HTTP Transport" in the gitprotocol-v2(5) documentation. Also understood by git-receive- pack(1). <directory> The repository to sync from. ENVIRONMENT GIT_PROTOCOL Internal variable used for handshaking the wire protocol. Server admins may need to configure some transports to allow this variable to be passed. See the discussion in git(1). SEE ALSO gitnamespaces(7) GIT Part of the git(1) suite Git 2.41.0 2023-06-01 GIT-UPLOAD-PACK(1)
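End users normally trigger this program indirectly through git fetch or git clone, but the ref advertisement can be inspected by hand. A sketch, assuming git is installed; the /tmp path and commit identity are illustrative:

```shell
# Make a small repository, then ask git-upload-pack to advertise its refs,
# which is the first thing a fetch client sees on the wire.
git init -q /tmp/upload-pack-demo
git -C /tmp/upload-pack-demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git upload-pack --advertise-refs /tmp/upload-pack-demo
```

The output is pkt-line framed: each line carries an object ID and a ref name (HEAD first), with the server's capability list appended to the first entry.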
perldoc5.34
perldoc looks up documentation in .pod format that is embedded in the perl installation tree or in a perl script, and displays it using a variety of formatters. This is primarily used for the documentation for the perl library modules. Your system may also have man pages installed for those modules, in which case you can probably just use the man(1) command. If you are looking for a table of contents to the Perl library modules documentation, see the perltoc page.
perldoc - Look up Perl documentation in Pod format.
perldoc [-h] [-D] [-t] [-u] [-m] [-l] [-U] [-F] [-i] [-V] [-T] [-r] [-d destination_file] [-o formatname] [-M FormatterClassName] [-w formatteroption:value] [-n nroff-replacement] [-X] [-L language_code] PageName|ModuleName|ProgramName|URL Examples: perldoc -f BuiltinFunction perldoc -L it -f BuiltinFunction perldoc -q FAQ Keyword perldoc -L fr -q FAQ Keyword perldoc -v PerlVariable perldoc -a PerlAPI See below for more description of the switches.
-h Prints out a brief help message. -D Describes search for the item in detail. -t Display docs using plain text converter, instead of nroff. This may be faster, but it probably won't look as nice. -u Skip the real Pod formatting, and just show the raw Pod source (Unformatted) -m module Display the entire module: both code and unformatted pod documentation. This may be useful if the docs don't explain a function in the detail you need, and you'd like to inspect the code directly; perldoc will find the file for you and simply hand it off for display. -l Display only the file name of the module found. -U When running as the superuser, don't attempt to drop privileges for security. This option is implied with -F. NOTE: Please see the heading SECURITY below for more information. -F Consider arguments as file names; no search in directories will be performed. Implies -U if run as the superuser. -f perlfunc The -f option followed by the name of a perl built-in function will extract the documentation of this function from perlfunc. Example: perldoc -f sprintf -q perlfaq-search-regexp The -q option takes a regular expression as an argument. It will search the question headings in perlfaq[1-9] and print the entries matching the regular expression. Example: perldoc -q shuffle -a perlapifunc The -a option followed by the name of a perl api function will extract the documentation of this function from perlapi. Example: perldoc -a newHV -v perlvar The -v option followed by the name of a Perl predefined variable will extract the documentation of this variable from perlvar. Examples: perldoc -v '$"' perldoc -v @+ perldoc -v DATA -T This specifies that the output is not to be sent to a pager, but is to be sent directly to STDOUT. -d destination-filename This specifies that the output is to be sent neither to a pager nor to STDOUT, but is to be saved to the specified filename. 
Example: "perldoc -oLaTeX -dtextwrapdocs.tex Text::Wrap" -o output-formatname This specifies that you want Perldoc to try using a Pod-formatting class for the output format that you specify. For example: "-oman". This is actually just a wrapper around the "-M" switch; using "-oformatname" just looks for a loadable class by adding that format name (with different capitalizations) to the end of different classname prefixes. For example, "-oLaTeX" currently tries all of the following classes: Pod::Perldoc::ToLaTeX Pod::Perldoc::Tolatex Pod::Perldoc::ToLatex Pod::Perldoc::ToLATEX Pod::Simple::LaTeX Pod::Simple::latex Pod::Simple::Latex Pod::Simple::LATEX Pod::LaTeX Pod::latex Pod::Latex Pod::LATEX. -M module-name This specifies the module that you want to try using for formatting the pod. The class must at least provide a "parse_from_file" method. For example: "perldoc -MPod::Perldoc::ToChecker". You can specify several classes to try by joining them with commas or semicolons, as in "-MTk::SuperPod;Tk::Pod". -w option:value or -w option This specifies an option to call the formatter with. For example, "-w textsize:15" will call "$formatter->textsize(15)" on the formatter object before it is used to format the object. For this to be valid, the formatter class must provide such a method, and the value you pass should be valid. (So if "textsize" expects an integer, and you do "-w textsize:big", expect trouble.) You can use "-w optionname" (without a value) as shorthand for "-w optionname:TRUE". This is presumably useful in cases of on/off features like: "-w page_numbering". You can use an "=" instead of the ":", as in: "-w textsize=15". This might be more (or less) convenient, depending on what shell you use. -X Use an index if it is present. The -X option looks for an entry whose basename matches the name given on the command line in the file "$Config{archlib}/pod.idx". The pod.idx file should contain fully qualified filenames, one per line. 
-L language_code This allows one to specify the language code for the desired language translation. If the "POD2::<language_code>" package isn't installed in your system, the switch is ignored. All available translation packages are to be found under the "POD2::" namespace. See POD2::IT (or POD2::FR) to see how to create new localized "POD2::*" documentation packages and integrate them into Pod::Perldoc. PageName|ModuleName|ProgramName|URL The item you want to look up. Nested modules (such as "File::Basename") are specified either as "File::Basename" or "File/Basename". You may also give a descriptive name of a page, such as "perlfunc". For URLs, HTTP and HTTPS are the only kind currently supported. For simple names like 'foo', when the normal search fails to find a matching page, a search with the "perl" prefix is tried as well. So "perldoc intro" is enough to find/render "perlintro.pod". -n some-formatter Specify replacement for groff -r Recursive search. -i Ignore case. -V Displays the version of perldoc you're running. SECURITY Because perldoc does not run properly tainted, and is known to have security issues, when run as the superuser it will attempt to drop privileges by setting the effective and real IDs to nobody's or nouser's account, or -2 if unavailable. If it cannot relinquish its privileges, it will not run. See the "-U" option if you do not want this behavior but beware that there are significant security risks if you choose to use "-U". Since 3.26, using "-F" as the superuser also implies "-U" as opening most files and traversing directories requires privileges that are above the nobody/nogroup level. ENVIRONMENT Any switches in the "PERLDOC" environment variable will be used before the command line arguments. Useful values for "PERLDOC" include "-oterm", "-otext", "-ortf", "-oxml", and so on, depending on what modules you have on hand; or the formatter class may be specified exactly with "-MPod::Perldoc::ToTerm" or the like. 
"perldoc" also searches directories specified by the "PERL5LIB" (or "PERLLIB" if "PERL5LIB" is not defined) and "PATH" environment variables. (The latter is so that embedded pods for executables, such as "perldoc" itself, are available.) In directories where either "Makefile.PL" or "Build.PL" exist, "perldoc" will add "." and "lib" first to its search path, and as long as you're not the superuser will add "blib" too. This is really helpful if you're working inside of a build directory and want to read through the docs even if you have a version of a module previously installed. "perldoc" will use, in order of preference, the pager defined in "PERLDOC_PAGER", "MANPAGER", or "PAGER" before trying to find a pager on its own. ("MANPAGER" is not used if "perldoc" was told to display plain text or unformatted pod.) When using perldoc in its "-m" mode (display module source code), "perldoc" will attempt to use the pager set in "PERLDOC_SRC_PAGER". A useful setting for this command is your favorite editor as in "/usr/bin/nano". (Don't judge me.) One useful value for "PERLDOC_PAGER" is "less -+C -E". Having PERLDOCDEBUG set to a positive integer will make perldoc emit even more descriptive output than the "-D" switch does; the higher the number, the more it emits. CHANGES Up to 3.14_05, the switch -v was used to produce verbose messages of perldoc operation, which is now enabled by -D. SEE ALSO perlpod, Pod::Perldoc AUTHOR Current maintainer: Mark Allen "<mallen@cpan.org>" Past contributors are: brian d foy "<bdfoy@cpan.org>" Adriano R. Ferreira "<ferreira@cpan.org>", Sean M. Burke "<sburke@cpan.org>", Kenneth Albanowski "<kjahds@kjahds.com>", Andy Dougherty "<doughera@lafcol.lafayette.edu>", and many others. perl v5.34.1 2022-02-19 PERLDOC(1)
ssh-copy-id
ssh-copy-id is a script that uses ssh(1) to log into a remote machine (presumably using a login password, so password authentication should be enabled, unless you've done some clever use of multiple identities). It assembles a list of one or more fingerprints (as described below) and tries to log in with each key, to see if any of them are already installed (of course, if you are not using ssh-agent(1) this may result in you being repeatedly prompted for pass-phrases). It then assembles a list of those that failed to log in and, using ssh(1), enables logins with those keys on the remote server. By default it adds the keys by appending them to the remote user's ~/.ssh/authorized_keys (creating the file, and directory, if necessary). It is also capable of detecting if the remote system is a NetScreen, and using its ‘set ssh pka-dsa key ...’ command instead. The options are as follows: -i identity_file Use only the key(s) contained in identity_file (rather than looking for identities via ssh-add(1) or in the default_ID_file). If the filename does not end in .pub this is added. If the filename is omitted, the default_ID_file is used. Note that this can be used to ensure that the keys copied have the comment one prefers and/or extra options applied, by ensuring that the key file has these set as preferred before the copy is attempted. -f Forced mode: doesn't check if the keys are present on the remote server. This means that it does not need the private key. Of course, this can result in more than one copy of the key being installed on the remote system. -n do a dry-run. Instead of installing keys on the remote system simply prints the key(s) that would have been installed. -s SFTP mode: usually the public keys are installed by executing commands on the remote side. With this option the user's ~/.ssh/authorized_keys file will be downloaded, modified locally and uploaded with sftp. 
This option is useful if the server has restrictions on commands which can be used on the remote side. -t target_path the path on the target system where the keys should be added (defaults to ".ssh/authorized_keys") -p port, -o ssh_option These two options are simply passed through untouched, along with their argument, to allow one to set the port or other ssh(1) options, respectively. Rather than specifying these as command line options, it is often better to use (per-host) settings in ssh(1)'s configuration file: ssh_config(5). -x This option is for debugging the ssh-copy-id script itself. It sets the shell's -x flag, so that you can see the commands being run. -h, -? Print Usage summary Default behaviour without -i, is to check if ‘ssh-add -L’ provides any output, and if so those keys are used. Note that this results in the comment on the key being the filename that was given to ssh-add(1) when the key was loaded into your ssh-agent(1) rather than the comment contained in that file, which is a bit of a shame. Otherwise, if ssh-add(1) provides no keys contents of the default_ID_file will be used. The default_ID_file is the most recent file that matches: ~/.ssh/id*.pub, (excluding those that match ~/.ssh/*-cert.pub) so if you create a key that is not the one you want ssh-copy-id to use, just use touch(1) on your preferred key's .pub file to reinstate it as the most recent.
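A first-time setup might look like the following sketch. The key path is illustrative, user and host.example are placeholders, and the install step itself is shown commented out because it needs a reachable server:

```shell
# Generate a dedicated ed25519 key (empty passphrase here for illustration only).
rm -f /tmp/id_demo /tmp/id_demo.pub
ssh-keygen -q -t ed25519 -f /tmp/id_demo -N '' -C 'demo key'
cat /tmp/id_demo.pub    # this is the line ssh-copy-id appends to authorized_keys
# ssh-copy-id -i /tmp/id_demo user@host.example
# ssh -i /tmp/id_demo user@host.example   # afterwards, login should be passwordless
```

Passing -i with the key's path (rather than relying on ssh-agent) also keeps the comment from the .pub file, as discussed below.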
ssh-copy-id – use locally available keys to authorise logins on a remote machine
ssh-copy-id [-f] [-n] [-s] [-x] [-i [identity_file]] [-p port] [-o ssh_option] [-t target_path] [user@]hostname ssh-copy-id -h | -?
If you have already installed keys from one system on a lot of remote hosts, and you then create a new key, on a new client machine, say, it can be difficult to keep track of which systems on which you've installed the new key. One way of dealing with this is to load both the new key and old key(s) into your ssh-agent(1). Load the new key first, without the -c option, then load one or more old keys into the agent, possibly by ssh-ing to the client machine that has that old key, using the -A option to allow agent forwarding: user@newclient$ ssh-add user@newclient$ ssh -A old.client user@old$ ssh-add -c ... prompt for pass-phrase ... user@old$ logoff user@newclient$ ssh someserver now, if the new key is installed on the server, you'll be allowed in unprompted, whereas if you only have the old key(s) enabled, you'll be asked for confirmation, which is your cue to log back out and run user@newclient$ ssh-copy-id -i someserver The reason you might want to specify the -i option in this case is to ensure that the comment on the installed key is the one from the .pub file, rather than just the filename that was loaded into your agent. It also ensures that only the id you intended is installed, rather than all the keys that you have in your ssh-agent(1). Of course, you can specify another id, or use the contents of the ssh-agent(1) as you prefer. Having mentioned ssh-add(1)'s -c option, you might consider using this whenever using agent forwarding to avoid your key being hijacked, but it is much better to instead use ssh(1)'s ProxyCommand and -W option, to bounce through remote servers while always doing direct end-to-end authentication. This way the middle hop(s) don't get access to your ssh-agent(1). A web search for ‘ssh proxycommand nc’ should prove enlightening (NB the modern approach is to use the -W option, rather than nc(1)). SEE ALSO ssh(1), ssh-agent(1), sshd(8) macOS 14.5 June 17, 2010 macOS 14.5
fixproc
Fixes a process named "proc" by performing the specified action. The actions can be check, kill, restart, exist, or fix. The action is specified on the command line or is read from a default database, which describes the default action to take for each process. The database format and the meaning of each action are described below.
fixproc - Fixes a process by performing the specified action.
fixproc [-min n] [-max n] [-check | -kill | -restart | -exist | -fix] proc ...
-min n minimum number of processes that should be running, defaults to 1 -max n maximum number of processes that should be running, defaults to 1 -check check process against database /local/etc/fixproc.conf. -kill kill process, wait 5 seconds, kill -9 if it still exists -restart kill process, wait 5 seconds, kill -9 if it still exists, then start again -exist checks if proc exists in ps && (min <= num. of processes <= max) -fix check process against database /local/etc/fixproc.conf. Perform the defined action if the check fails. V5.6.2.1 16 Nov 2006 fixproc(1)
debinhex.pl
Each file is expected to be a BinHex file. By default, the output file is given the name that the BinHex file dictates, regardless of the name of the BinHex file. WARNINGS Largely untested. AUTHORS Paul J. Schinder (NASA/GSFC) mostly, though Eryq can't seem to keep his grubby paws off anything... Sören M. Andersen (somian), made it actually work under Perl 5.8.7 on MSWin32. perl v5.34.0 2015-11-15 DEBINHEX(1)
debinhex.pl - use Convert::BinHex to decode BinHex files USAGE Usage: debinhex.pl [options] file ... file Where the options are: -o dir Output in given directory (default outputs in file's directory) -v Verbose output (normally just one line per file is shown)
lorder
The lorder utility uses nm(1) to determine interdependencies in the list of object files specified on the command line. lorder outputs a list of file names where the first file contains a symbol which is defined by the second file. The output is normally used with tsort(1) when a library is created to determine the optimum ordering of the object modules so that all references may be resolved in a single pass of the loader.
lorder – list dependencies for object files
lorder file ...
ar cr library.a `lorder ${OBJS} | tsort` SEE ALSO ar(1), ld(1), nm(1), ranlib(1), tsort(1) HISTORY An lorder utility appeared in Version 7 AT&T UNIX. macOS 14.5 April 28, 1995 macOS 14.5
ranlib
The libtool command takes the specified input object files and creates a library for use with the link editor, ld(1). The library's name is specified by output (the argument to the -o flag). The input object files may be in any correct format that contains object files (``universal'' files, archives, object files). Libtool will not put any non-object input file into the output library (unlike ranlib, which allows this in the archives it operates on). When producing a ``universal'' file from objects of the same CPU type and differing CPU subtypes, libtool and ranlib create at most one library for each CPU type, rather than a separate library in a universal file for each of the unique pairings of CPU type and CPU subtype. Thus, the resulting CPU subtype for each library is the _ALL CPU subtype for that CPU type. This strategy strongly encourages the implementor of a library to create one library that chooses optimum code to run at run time, rather than at link time. Libtool can create either dynamically linked shared libraries, with -dynamic, or statically linked (archive) libraries, with -static. DYNAMICALLY LINKED SHARED LIBRARIES Dynamically linked libraries, unlike statically linked libraries, are Mach-O format files and not ar(5) format files. Dynamically linked libraries have two restrictions: No symbol may be defined in more than one object file and no common symbol can be used. To maximize sharing of a dynamically linked shared library the objects should be compiled with the -dynamic flag of cc(1) to produce indirect undefined references and position-independent code. To build a dynamically linked library, libtool, runs the link editor, ld(1), with -dylib once for each architecture present in the input objects and then lipo(1) to create a universal file if needed. ARCHIVE (or statically linked) LIBRARIES Libtool with -static is intended to replace ar(5) and ranlib. For backward compatibility, ranlib is still available, and it supports universal files. 
Ranlib adds or updates the table of contents to each archive so it can be linked by the link editor, ld(1). The table of contents is an archive member at the beginning of the archive that indicates which symbols are defined in which library members. Because ranlib rewrites the archive, sufficient temporary file space must be available in the file system that contains the current directory. Ranlib takes all correct forms of libraries (universal files containing archives, and simple archives) and updates the table of contents for all archives in the file. Ranlib also takes one common incorrect form of archive, an archive whose members are universal object files, adding or updating the table of contents and producing the library in correct form (a universal file containing multiple archives). The archive member name for a table of contents begins with ``__.SYMDEF''. Currently, there are two types of table of contents produced by libtool -static and ranlib and understood by the link editor, ld(1). These are explained below, under the -s and -a options.
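A minimal sketch of the archive workflow, using GNU/BSD ar and ranlib as stand-ins. The member here is a placeholder file; a real archive would hold compiled .o files:

```shell
# Build an archive and (re)generate its table of contents.
printf '\n' > demo.o            # placeholder standing in for a compiled object
ar rc libdemo.a demo.o          # create (or update) the archive
ranlib libdemo.a                # add or update the table of contents
ls libdemo.a
```

After ranlib runs, the archive can be passed directly to ld(1) without a separate indexing step.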
libtool - create libraries
ranlib - add or update the table of contents of archive libraries
libtool -static -o output [ -sacLTD ] [ - ] [ -arch_only arch_type ] [ -no_warning_for_no_symbols ] file... [-filelist listfile[,dirname]]
libtool -dynamic -o output [ -install_name name ] [ -compatibility_version number ] [ -current_version number ] [ link editor flags ] [ -v ] [ -noall_load ] [ - ] [ -arch_only arch_type ] [ -V ] file... [-filelist listfile[,dirname]]
ranlib [ -sactfqLT ] [ - ] archive...
The following options pertain to libtool only. @file Arguments beginning with @ are replaced by arguments read from the specified file, as an alternative to listing those arguments on the command line. The files simply contain libtool options and files separated by whitespace: spaces, tabs, and newlines. Characters can be escaped with a backslash (\), including whitespace characters and other backslashes. Also, arguments that include whitespace can be enclosed, wholly or in part, by single- or double-quote characters. These files may contain @file references to additional files, although libtool will error on include cycles. If a file cannot be found, the original @file argument will remain in the argument list. -static Produce a statically linked (archive) library from the input files. This is the default. -dynamic Produce a dynamically linked shared library from the input files. -install_name name For a dynamic shared library, this specifies the file name the library will be installed in for programs that use it. If this is not specified the name specified by the -o output option will be used. -compatibility_version number For a dynamic shared library, this specifies the compatibility version number of the library. When a library is used the compatibility version is checked and if the user's version is greater than the library's version, an error message is printed and the using program exits. The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If this is not specified then it has a value of 0 and no checking is done when the library is used. -current_version number For dynamic shared library files this specifies the current version number of the library. The program using the library can obtain the current version of the library programmatically to determine exactly which version of the library it is using.
The format of number is X[.Y[.Z]] where X must be a positive non-zero number less than or equal to 65535, and .Y and .Z are optional and if present must be non-negative numbers less than or equal to 255. If this is not specified then it has a value of 0. -noall_load For dynamic shared library files this specifies that the default behavior of loading all members of archives on the command line is not to be done. This option is used by the GNU compiler driver, cc(1), when used with its -dynamiclib option. This is done to allow selective loading of the GNU compiler's runtime support library, libcc_dynamic.a. link editor flags For a dynamic shared library the following ld(1) flags are accepted and passed through: -lx, -weak-lx, -search_paths_first, -weak_library, -Ldir, -ysym, -usym, -initsym, -idefinition:indirect, -seg1addr, -segs_read_only_addr, -segs_read_write_addr, -seg_addr_table, -seg_addr_table_filename, -segprot, -segalign, -sectcreate, -sectorder, -sectorder_detail, -sectalign, -undefined, -read_only_relocs, -prebind, -prebind_all_twolevel_modules, -prebind_allow_overlap, -noprebind, -framework, -weak_framework, -umbrella, -allowable_client, -sub_umbrella, -sub_library, -F, -U, -Y, -Sn, -Si, -Sp, -S, -X, -x, -whyload, -all_load, -arch_errors_fatal, -dylib_file, -run_init_lazily, -final_output, -macosx_version_min, -multiply_defined, -multiply_defined_unused, -twolevel_namespace, -twolevel_namespace_hints, -flat_namespace, -nomultidefs, -headerpad, -headerpad_max_install_names, -weak_reference_mismatches, -M, -t, -no_arch_warnings, -single_module, -multi_module, -exported_symbols_list, -unexported_symbols_list, -m, -dead_strip, -no_dead_strip_inits_and_terms, -executable_path, -syslibroot, -no_uuid. See the ld(1) man page for details on these flags. The flag -image_base is a synonym for -seg1addr. -v Verbose mode, which prints the ld(1) commands and lipo(1) commands executed. -V Print the version of libtool.
-filelist listfile[,dirname] The listfile contains a list of file names and is an alternative way of specifying file names on the command line. The file names are listed one per line separated only by newlines (spaces and tabs are assumed to be part of the file name). If the optional directory name, dirname, is specified then it is prepended to each name in the list file. -arch_only arch_type This option causes libtool to build a library only for the specified arch_type and ignores all other architectures in the input files. When building a dynamic library, if this is specified with a specific cpusubtype other than the family cpusubtype then libtool does not use the ld(1) -force_cpusubtype_ALL flag and passes the -arch_only argument to ld(1) as the -arch flag so that the output is tagged with that cpusubtype. The following options pertain to the table of contents for an archive library, and apply to both libtool -static and ranlib: -s Produce the preferred type of table of contents, which results in faster link editing when linking with the archive. The order of the table of contents is sorted by symbol name. The library member name of this type of table of contents is ``__.SYMDEF SORTED''. This type of table of contents can only be produced when the library does not have multiple members that define the same symbol. This is the default. -a Produce the original type of table of contents, whose order is based on the order of the members in the archive. The library member name of this type of table of contents is ``__.SYMDEF''. This type of table of contents must be used when the library has multiple members that define the same symbol. -c Include common symbols as definitions with respect to the table of contents. This is seldom the intended behavior for linking from a library, as it forces the linking of a library member just because it uses an uninitialized global that is undefined at that point in the linking.
This option is included only because this was the original behavior of ranlib. This option is not the default. -L Use the 4.4bsd archive extended format #1, which allows archive member names to be longer than 16 characters and have spaces in their names. This option is the default. -T Truncate archive member names to 16 characters and don't use the 4.4bsd extended format #1. This option is not the default. -f Warns when the output archive is universal and ar(1) will no longer be able to operate on it. -q Do nothing if a universal file would be created. -D When building a static library, set archive contents' user ids, group ids, dates, and file modes to reasonable defaults. This allows libraries created with identical input to be identical to each other, regardless of time of day, user, group, umask, and other aspects of the environment. For compatibility, the following ranlib option is accepted (but ignored): -t This option used to request that ranlib only ``touch'' the archives instead of modifying them. The option is now ignored, and the table of contents is rebuilt. Other options applying to both libtool and ranlib: - Treat all remaining arguments as names of files (or archives) and not as options. -no_warning_for_no_symbols Don't warn about files that have no symbols. -dependency_info path Write an Xcode dependency info file describing a successful build operation. This file describes the inputs directly or indirectly used to create the library or dylib.

SEE ALSO
ld(1), ar(1), otool(1), make(1), redo_prebinding(1), ar(5)

BUGS
With the way libraries used to be created, errors were possible if the library was modified with ar(1) and the table of contents was not updated by rerunning ranlib(1). So previously the link editor, ld(1), generated an error when the modification date of a library was more recent than the creation date of its table of contents. Unfortunately, this meant that you got the error even if you only copied the library.
Since this error was found to be too much of a nuisance, it was removed. So now it is again possible to get link errors if the library is modified and the table of contents is not updated. Apple Inc. June 23, 2020 LIBTOOL(1)
ptardiff
ptardiff is a small program that diffs an extracted archive against an unextracted one, using the perl module Archive::Tar. This effectively lets you view changes made to an archive's contents. Provide the program with an ARCHIVE_FILE and it will look up all the files within the archive, scan the current working directory for files with the same names, and diff them against the contents of the archive.
ptardiff - program that diffs an extracted archive against an unextracted one
ptardiff ARCHIVE_FILE ptardiff -h $ tar -xzf Acme-Buffy-1.3.tar.gz $ vi Acme-Buffy-1.3/README [...] $ ptardiff Acme-Buffy-1.3.tar.gz > README.patch
-h Prints this help message

SEE ALSO
tar(1), Archive::Tar.

perl v5.34.1 2024-04-13 PTARDIFF(1)
javadoc
The javadoc tool parses the declarations and documentation comments in a set of Java source files and produces corresponding HTML pages that describe (by default) the public and protected classes, nested and implicitly declared classes (but not anonymous inner classes), interfaces, constructors, methods, and fields. You can use the javadoc tool to generate the API documentation or the implementation documentation for a set of source files. You can run the javadoc tool on entire packages, individual source files, or both. When documenting entire packages, you can use the -subpackages option either to recursively traverse a directory and its subdirectories, or to pass in an explicit list of package names. When you document individual source files, pass in a list of Java source file names.

Conformance
The Standard Doclet does not validate the content of documentation comments for conformance, nor does it attempt to correct any errors in documentation comments. Anyone running javadoc is advised to be aware of the problems that may arise when generating non-conformant output or output containing executable content, such as JavaScript. The Standard Doclet does provide the DocLint feature to help developers detect common problems in documentation comments; but it is also recommended to check the generated output with any appropriate conformance and other checking tools. For more details on the conformance requirements for HTML5 documents, see Conformance requirements [https://www.w3.org/TR/html5/infrastructure.html#conformance- requirements] in the HTML5 Specification. For more details on security issues related to web pages, see the Open Web Application Security Project (OWASP) [https://www.owasp.org] page.
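The two ways of selecting input described above can be sketched as follows. The directory and package names are hypothetical, and the commands are written to a script for illustration rather than executed, since javadoc may not be installed:

```shell
# Sketch: documenting whole packages vs. individual source files.
cat > run-javadoc.sh <<'EOF'
javadoc -d docs -sourcepath src -subpackages com.example              # entire package tree
javadoc -d docs src/com/example/Main.java src/com/example/Util.java   # individual files
EOF
sh -n run-javadoc.sh && echo "syntax ok"   # check the script parses
```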
javadoc - generate HTML pages of API documentation from Java source files
javadoc [options] [packagenames] [sourcefiles] [@files]
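The @files form in the synopsis can be sketched as follows: options, package names, and source file names are collected in a plain-text argument file (the package name here is hypothetical):

```shell
# Build an argument file; each line is one option, package, or file name.
cat > argfile <<'EOF'
-d docs
-quiet
com.example.app
EOF
# javadoc @argfile    # equivalent to passing the three lines on the command line
wc -l < argfile
```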
options Specifies command-line options, separated by spaces. See Standard javadoc Options, Extra javadoc Options, Standard Options for the Standard Doclet, and Extra Options for the Standard Doclet. packagenames Specifies names of packages that you want to document, separated by spaces, for example java.lang java.lang.reflect java.awt. If you want to also document the subpackages, then use the -subpackages option to specify the packages. By default, javadoc looks for the specified packages in the current directory and subdirectories. Use the -sourcepath option to specify the list of directories where to look for packages. sourcefiles Specifies names of Java source files that you want to document, separated by spaces, for example Class.java Object.java Button.java. By default, javadoc looks for the specified classes in the current directory. However, you can specify the full path to the class file and use wildcard characters, for example /home/src/java/awt/Graphics*.java. You can also specify the path relative to the current directory. @files Specifies names of files that contain a list of javadoc tool options, package names, and source file names in any order. javadoc supports command-line options for both the main javadoc tool and the currently selected doclet. The Standard Doclet is used if no other doclet is specified. GNU-style options (that is, those beginning with --) can use an equal sign (=) instead of whitespace characters to separate the name of an option from its value.

Standard javadoc Options
The following core javadoc options are equivalent to corresponding javac options.
See Standard Options in javac for the detailed descriptions of using these options: • --add-modules • -bootclasspath • --class-path, -classpath, or -cp • --enable-preview • -encoding • -extdirs • --limit-modules • --module • --module-path or -p • --module-source-path • --release • --source or -source • --source-path or -sourcepath • --system • --upgrade-module-path The following options are the core javadoc options that are not equivalent to a corresponding javac option: -breakiterator Computes the first sentence with BreakIterator. The first sentence is copied to the package, class, or member summary and to the alphabetic index. The BreakIterator class is used to determine the end of a sentence for all languages except for English. • English default sentence-break algorithm --- Stops at a period followed by a space or an HTML block tag, such as <P>. • Breakiterator sentence-break algorithm --- Stops at a period, question mark, or exclamation point followed by a space when the next word starts with a capital letter. This is meant to handle most abbreviations (such as "The serial no. is valid", but will not handle "Mr. Smith"). The -breakiterator option doesn't stop at HTML tags or sentences that begin with numbers or symbols. The algorithm stops at the last period in ../filename, even when embedded in an HTML tag. -doclet class Generates output by using an alternate doclet. Use the fully qualified name. This doclet defines the content and formats the output. If the -doclet option isn't used, then the javadoc tool uses the standard doclet for generating the default HTML format. This class must contain the start(Root) method. The path to this starting class is defined by the -docletpath option. -docletpath path Specifies where to find doclet class files (specified with the -doclet option) and any JAR files it depends on. If the starting class file is in a JAR file, then this option specifies the path to that JAR file. 
You can specify an absolute path or a path relative to the current directory. If classpathlist contains multiple paths or JAR files, then they should be separated with a colon (:) on Linux and a semicolon (;) on Windows. This option isn't necessary when the doclet starting class is already in the search path. -exclude pkglist Unconditionally excludes the specified packages and their subpackages from the list formed by -subpackages. It excludes those packages even when they would otherwise be included by some earlier or later -subpackages option. The following example would include java.io, java.util, and java.math (among others), but would exclude packages rooted at java.net and java.lang. Notice that these examples exclude java.lang.ref, which is a subpackage of java.lang. • Linux and macOS: javadoc -sourcepath /home/user/src -subpackages java -exclude java.net:java.lang • Windows: javadoc -sourcepath \user\src -subpackages java -exclude java.net:java.lang --expand-requires value Instructs the javadoc tool to expand the set of modules to be documented. By default, only the modules given explicitly on the command line are documented. Supports the following values: • transitive: additionally includes all the required transitive dependencies of those modules. • all: includes all dependencies. --help, -help, -h, or -? Prints a synopsis of the standard options. --help-extra or -X Prints a synopsis of the set of extra options. -Jflag Passes flag directly to the Java Runtime Environment (JRE) that runs the javadoc tool. For example, if you must ensure that the system sets aside 32 MB of memory in which to process the generated documentation, then you would call the -Xmx option as follows: javadoc -J-Xmx32m -J-Xms32m com.mypackage. Be aware that -Xms is optional because it only sets the size of initial memory, which is useful when you know the minimum amount of memory required. There is no space between the J and the flag.
Use the -version option to report the version of the JRE being used to run the javadoc tool. javadoc -J-version java version "17" 2021-09-14 LTS Java(TM) SE Runtime Environment (build 17+35-LTS-2724) Java HotSpot(TM) 64-Bit Server VM (build 17+35-LTS-2724, mixed mode, sharing) -locale name Specifies the locale that the javadoc tool uses when it generates documentation. The argument is the name of the locale, as described in java.util.Locale documentation, such as en_US (English, United States) or en_US_WIN (Windows variant). Specifying a locale causes the javadoc tool to choose the resource files of that locale for messages such as strings in the navigation bar, headings for lists and tables, help file contents, comments in the stylesheet.css file, and so on. It also specifies the sorting order for lists sorted alphabetically, and the sentence separator to determine the end of the first sentence. The -locale option doesn't determine the locale of the documentation comment text specified in the source files of the documented classes. -package Shows only package, protected, and public classes and members. -private Shows all classes and members. -protected Shows only protected and public classes and members. This is the default. -public Shows only the public classes and members. -quiet Shuts off messages so that only warnings and errors appear, to make them easier to view. It also suppresses the version string. --show-members value Specifies which members (fields or methods) are documented, where value can be any of the following: • public --- shows only public members • protected --- shows public and protected members; this is the default • package --- shows public, protected, and package members • private --- shows all members --show-module-contents value Specifies the documentation granularity of module declarations, where value can be api or all. --show-packages value Specifies which module packages are documented, where value can be exported or all packages.
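The legacy visibility flags above correspond to --show-* values; a sketch with a hypothetical package name, again written to a script rather than run since javadoc may be absent:

```shell
# -protected is the default; the GNU-style options accept = as a separator.
cat > visibility.sh <<'EOF'
javadoc -d docs -protected com.example.app                                    # the default
javadoc -d docs --show-members=private --show-types=private com.example.app   # like -private
EOF
sh -n visibility.sh && echo "syntax ok"
```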
--show-types value Specifies which types (classes, interfaces, etc.) are documented, where value can be any of the following: • public --- shows only public types • protected --- shows public and protected types; this is the default • package --- shows public, protected, and package types • private --- shows all types -subpackages subpkglist Generates documentation from source files in the specified packages and recursively in their subpackages. This option is useful when adding new subpackages to the source code because they are automatically included. Each package argument is any top-level subpackage (such as java) or fully qualified package (such as javax.swing) that doesn't need to contain source files. Arguments are separated by colons on all operating systems. Wild cards aren't allowed. Use -sourcepath to specify where to find the packages. This option doesn't process source files that are in the source tree but don't belong to the packages. For example, the following commands generate documentation for packages named java and javax.swing and all of their subpackages. • Linux and macOS: javadoc -d docs -sourcepath /home/user/src -subpackages java:javax.swing • Windows: javadoc -d docs -sourcepath \user\src -subpackages java:javax.swing -verbose Provides more detailed messages while the javadoc tool runs. Without the -verbose option, messages appear for loading the source files, generating the documentation (one message per source file), and sorting. The -verbose option causes the printing of additional messages that specify the number of milliseconds to parse each Java source file. --version Prints version information. -Werror Reports an error if any warnings occur. Note that if a Java source file contains an implicitly declared class, then that class and its public, protected, and package members will be documented regardless of the options such as --show-types, --show-members, -private, -protected, -package, and -public.
If --show-members is specified with value private or if -private is used then all private members of an implicitly declared class will be documented too.

Extra javadoc Options
Note: The additional options for javadoc are subject to change without notice. The following additional javadoc options are equivalent to corresponding javac options. See Extra Options in javac for the detailed descriptions of using these options: • --add-exports • --add-reads • --patch-module • -Xmaxerrs • -Xmaxwarns

Standard Options for the Standard Doclet
The following options are provided by the standard doclet. --add-script file Adds file as an additional JavaScript file to the generated documentation. This option can be used one or more times to specify additional script files. Command-line example: javadoc --add-script first_script.js --add-script second_script.js pkg_foo --add-stylesheet file Adds file as an additional stylesheet file to the generated documentation. This option can be used one or more times to specify additional stylesheets included in the documentation. Command-line example: javadoc --add-stylesheet new_stylesheet_1.css --add-stylesheet new_stylesheet_2.css pkg_foo --allow-script-in-comments Allow JavaScript in options and comments. -author Includes the @author text in the generated docs. -bottom html-code Specifies the text to be placed at the bottom of each output file. The text is placed at the bottom of the page, underneath the lower navigation bar. The text can contain HTML tags and white space, but when it does, the text must be enclosed in quotation marks. Use escape characters for any internal quotation marks within text. -charset name Specifies the HTML character set for this document. The name should be a preferred MIME name as specified in the IANA Registry, Character Sets [http://www.iana.org/assignments/character-sets].
For example: javadoc -charset "iso-8859-1" mypackage This command inserts the following line in the head of every generated page: <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> The meta tag is described in the HTML standard (4197265 and 4137321), HTML Document Representation [http://www.w3.org/TR/REC-html40/charset.html#h-5.2.2]. -d directory Specifies the destination directory where the javadoc tool saves the generated HTML files. If you omit the -d option, then the files are saved to the current directory. The directory value can be absolute or relative to the current working directory. The destination directory is automatically created when the javadoc tool runs. • Linux and macOS: For example, the following command generates the documentation for the package com.mypackage and saves the results in the /user/doc/ directory: javadoc -d /user/doc/ com.mypackage • Windows: For example, the following command generates the documentation for the package com.mypackage and saves the results in the \user\doc\ directory: javadoc -d \user\doc\ com.mypackage -docencoding name Specifies the encoding of the generated HTML files. The name should be a preferred MIME name as specified in the IANA Registry, Character Sets [http://www.iana.org/assignments/character-sets]. Three options are available for use in a javadoc encoding command. The -encoding option is used for encoding the files read by the javadoc tool, while the -docencoding and -charset options are used for encoding the files written by the tool. Of the three available options, at most, only the input and an output encoding option are used in a single encoding command. If you specify both input and output encoding options in a command, they must be the same value. If you specify neither output option, it defaults to the input encoding. For example: javadoc -docencoding "iso-8859-1" mypackage -docfilessubdirs Recursively copies doc-file subdirectories. 
Enables deep copying of doc-files directories. Subdirectories and all contents are recursively copied to the destination. For example, the directory doc-files/example/images and all of its contents are copied. The -excludedocfilessubdir option can be used to exclude specific subdirectories. -doctitle html-code Specifies the title to place near the top of the overview summary file. The text specified in the title tag is placed as a centered, level-one heading directly beneath the top navigation bar. The title tag can contain HTML tags and white space, but when it does, you must enclose the title in quotation marks. Additional quotation marks within the title tag must be escaped. For example, javadoc -doctitle "<b>My Library</b><br>v1.0" com.mypackage. -excludedocfilessubdir name1,name2... Excludes any subdirectories with the given names when recursively copying doc-file subdirectories. See -docfilessubdirs. For historical reasons, : can be used anywhere in the argument as a separator instead of ,. -footer html-code Specifies the footer text to be placed at the bottom of each output file. The html-code value is placed to the right of the lower navigation bar. The html-code value can contain HTML tags and white space, but when it does, the html-code value must be enclosed in quotation marks. Use escape characters for any internal quotation marks within a footer. -group name p1,p2... Group the specified packages together in the Overview page. For historical reasons, : can be used as a separator anywhere in the argument instead of ,. -header html-code Specifies the header text to be placed at the top of each output file. The header is placed to the right of the upper navigation bar. The header can contain HTML tags and white space, but when it does, the header must be enclosed in quotation marks. Use escape characters for internal quotation marks within a header. For example, javadoc -header "<b>My Library</b><br>v1.0" com.mypackage.
-helpfile filename Includes the file that links to the HELP link in the top and bottom navigation bars. Without this option, the javadoc tool creates a help file help-doc.html that is hard-coded in the javadoc tool. This option lets you override the default. The filename can be any name and isn't restricted to help-doc.html. The javadoc tool adjusts the links in the navigation bar accordingly. For example: • Linux and macOS: javadoc -helpfile /home/user/myhelp.html java.awt • Windows: javadoc -helpfile C:\user\myhelp.html java.awt -html5 This option is a no-op and is just retained for backwards compatibility. --javafx or -javafx Enables JavaFX functionality. This option is enabled by default if the JavaFX library classes are detected on the module path. -keywords Adds HTML keyword <meta> tags to the generated file for each class. These tags can help search engines that look for <meta> tags find the pages. Most search engines that search the entire Internet don't look at <meta> tags, because pages can misuse them. Search engines offered by companies that confine their searches to their own website can benefit by looking at <meta> tags. The <meta> tags include the fully qualified name of the class and the unqualified names of the fields and methods. Constructors aren't included because they are identical to the class name. For example, the class String starts with these keywords: <meta name="keywords" content="java.lang.String class"> <meta name="keywords" content="CASE_INSENSITIVE_ORDER"> <meta name="keywords" content="length()"> <meta name="keywords" content="charAt()"> -link url Creates links to existing javadoc generated documentation of externally referenced classes. The url argument is the absolute or relative URL of the directory that contains the external javadoc generated documentation. You can specify multiple -link options in a specified javadoc tool run to link to multiple documents.
Either a package-list or an element-list file must be in this url directory (otherwise, use the -linkoffline option). Note: The package-list and element-list files are generated by the javadoc tool when generating the API documentation and should not be modified by the user. When you use the javadoc tool to document packages, it uses the package-list file to determine the packages declared in an API. When you generate API documents for modules, the javadoc tool uses the element-list file to determine the modules and packages declared in an API. The javadoc tool reads the names from the appropriate list file and then links to the packages or modules at that URL. When the javadoc tool runs, the url value is copied into the <A HREF> links that are created. Therefore, url must be the URL to the directory and not to a file. You can use an absolute link for url to enable your documents to link to a document on any web site, or you can use a relative link to link only to a relative location. If you use a relative link, then the value you pass in should be the relative path from the destination directory (specified with the -d option) to the directory containing the packages being linked to. When you specify an absolute link, you usually use an HTTP link. However, if you want to link to a file system that has no web server, then you can use a file link. Use a file link only when everyone who wants to access the generated documentation shares the same file system. In all cases, and on all operating systems, use a slash as the separator, whether the URL is absolute or relative, and https:, http:, or file: as specified in the URL Memo: Uniform Resource Locators [http://www.ietf.org/rfc/rfc1738.txt]. 
-link https://<host>/<directory>/<directory>/.../<name> -link http://<host>/<directory>/<directory>/.../<name> -link file://<host>/<directory>/<directory>/.../<name> -link <directory>/<directory>/.../<name> --link-modularity-mismatch (warn|info) Specifies whether external documentation with wrong modularity (e.g. non-modular documentation for a modular library, or the reverse case) should be reported as a warning (warn) or just a message (info). The default behavior is to report a warning. -linkoffline url1 url2 This option is a variation of the -link option. They both create links to javadoc generated documentation for externally referenced classes. You can specify multiple -linkoffline options in a specified javadoc tool run. Use the -linkoffline option when: • Linking to a document on the web that the javadoc tool can't access through a web connection • The package-list or element-list file of the external document either isn't accessible or doesn't exist at the URL location, but does exist at a different location and can be specified by either the package-list or element-list file (typically local). Note: The package-list and element-list files are generated by the javadoc tool when generating the API documentation and should not be modified by the user. If url1 is accessible only on the World Wide Web, then the -linkoffline option removes the constraint that the javadoc tool must have a web connection to generate documentation. Another use of the -linkoffline option is as a work-around to update documents. After you have run the javadoc tool on a full set of packages or modules, you can run the javadoc tool again on a smaller set of changed packages or modules, so that the updated files can be inserted back into the original set. For example, the -linkoffline option takes two arguments. The first is for the string to be embedded in the <a href> links, and the second tells the javadoc tool where to find either the package-list or element-list file. 
The url1 or url2 value is the absolute or relative URL of the directory that contains the external javadoc generated documentation that you want to link to. When relative, the value should be the relative path from the destination directory (specified with the -d option) to the root of the packages being linked to. See url in the -link option.
--link-platform-properties url Specifies a properties file used to configure links to platform documentation. The url argument is expected to point to a properties file containing one or more entries with the following format, where <version> is the platform version as passed to the --release or --source option and <url> is the base URL of the corresponding platform API documentation:
doclet.platform.docs.<version>=<url>
For instance, a properties file containing URLs for releases 15 to 17 might contain the following lines:
doclet.platform.docs.15=https://example.com/api/15/
doclet.platform.docs.16=https://example.com/api/16/
doclet.platform.docs.17=https://example.com/api/17/
If the properties file does not contain an entry for a particular release, no platform links are generated.
-linksource Creates an HTML version of each source file (with line numbers) and adds links to them from the standard HTML documentation. Links are created for classes, interfaces, constructors, methods, and fields whose declarations are in a source file. Otherwise, links aren't created, such as for default constructors and generated classes. This option exposes all private implementation details in the included source files, including private classes, private fields, and the bodies of private methods, regardless of the -public, -package, -protected, and -private options. Unless you also use the -private option, not all private classes or interfaces are accessible through links. Each link appears on the name of the identifier in its declaration. 
For example, the link to the source code of the Button class would be on the word Button:
public class Button extends Component implements Accessible
The link to the source code of the getLabel method in the Button class is on the word getLabel:
public String getLabel()
--main-stylesheet file or -stylesheetfile file Specifies the path of an alternate stylesheet file that contains the definitions for the CSS styles used in the generated documentation. This option lets you override the default. If you do not specify the option, the javadoc tool will create and use a default stylesheet. The file name can be any name and isn't restricted to stylesheet.css. The --main-stylesheet option is the preferred form. Command-line example:
javadoc --main-stylesheet main_stylesheet.css pkg_foo
-nocomment Suppresses the entire comment body, including the main description and all tags, and generates only declarations. This option lets you reuse source files that were originally intended for a different purpose so that you can produce skeleton HTML documentation during the early stages of a new project.
-nodeprecated Prevents the generation of any deprecated API in the documentation. This does what the -nodeprecatedlist option does, and it doesn't generate any deprecated API throughout the rest of the documentation. This is useful when writing code and you don't want to be distracted by the deprecated code.
-nodeprecatedlist Prevents the generation of the file that contains the list of deprecated APIs (deprecated-list.html) and the link in the navigation bar to that page. The javadoc tool continues to generate the deprecated API throughout the rest of the document. This is useful when your source code contains no deprecated APIs, and you want to make the navigation bar cleaner.
-nohelp Omits the HELP link in the navigation bar at the top of each page of output.
-noindex Omits the index from the generated documents. The index is produced by default. 
-nonavbar Prevents the generation of the navigation bar, header, and footer that are usually found at the top and bottom of the generated pages. The -nonavbar option has no effect on the -bottom option. The -nonavbar option is useful when you are interested only in the content and have no need for navigation, such as when you are converting the files to PostScript or PDF for printing only.
--no-platform-links Prevents the generation of links to platform documentation. These links are generated by default.
-noqualifier name1,name2... Excludes the list of qualifiers from the output. The package name is removed from places where class or interface names appear. For historical reasons, : can be used anywhere in the argument as a separator instead of ,. The following example omits all package qualifiers: -noqualifier all. The following example omits java.lang and java.io package qualifiers: -noqualifier java.lang:java.io. The following example omits package qualifiers starting with java and com.sun subpackages, but not javax: -noqualifier java.*:com.sun.*. Where a package qualifier would appear due to the previous behavior, the name can be suitably shortened. This rule is in effect whether or not the -noqualifier option is used.
-nosince Omits from the generated documents the Since sections associated with the @since tags.
-notimestamp Suppresses the time stamp, which is hidden in an HTML comment in the generated HTML near the top of each page. The -notimestamp option is useful when you want to run the javadoc tool on two source bases and diff them, because it prevents time stamps from causing a difference on every page. The time stamp includes the javadoc tool release number.
-notree Omits the class and interface hierarchy pages from the generated documents. These are the pages you reach using the Tree button in the navigation bar. The hierarchy is produced by default. 
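As a concrete, hedged sketch, the suppression options above can be exercised on a throwaway one-class package. The package and class names (demo, Hello) are invented for illustration, and the whole run is skipped cleanly when no JDK is on the PATH:

```shell
#!/bin/sh
# Sketch only: demonstrates -notimestamp, -noindex and -notree together
# on an invented demo package. Skipped when javadoc is not installed.
if command -v javadoc >/dev/null 2>&1; then
    src=$(mktemp -d); out=$(mktemp -d)
    mkdir -p "$src/demo"
    cat > "$src/demo/Hello.java" <<'EOF'
package demo;
/** A trivial class used only to exercise the options. */
public class Hello {}
EOF
    # -notimestamp makes the output diff-friendly; -noindex and -notree
    # drop the index and hierarchy pages from the generated set.
    if javadoc -quiet -notimestamp -noindex -notree \
               -d "$out" -sourcepath "$src" demo >/dev/null 2>&1; then
        result="generated"
    else
        result="javadoc failed"
    fi
    rm -rf "$src" "$out"
else
    result="skipped: no JDK"
fi
echo "$result"
```

Running it twice and diffing the two output trees should show no timestamp-only differences, which is the point of -notimestamp.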
--override-methods (detail|summary) Documents overridden methods in the detail or summary sections. The default is detail. -overview filename Specifies that the javadoc tool should retrieve the text for the overview documentation from the source file specified by filename and place it on the Overview page (overview-summary.html). A relative path specified with the file name is relative to the current working directory. While you can use any name you want for the filename value and place it anywhere you want for the path, it is typical to name it overview.html and place it in the source tree at the directory that contains the topmost package directories. In this location, no path is needed when documenting packages, because the -sourcepath option points to this file. • Linux and macOS: For example, if the source tree for the java.lang package is src/classes/java/lang/, then you could place the overview file at src/classes/overview.html. • Windows: For example, if the source tree for the java.lang package is src\classes\java\lang\, then you could place the overview file at src\classes\overview.html The overview page is created only when you pass two or more package names to the javadoc tool. The title on the overview page is set by -doctitle. -serialwarn Generates compile-time warnings for missing @serial tags. By default, Javadoc generates no serial warnings. Use this option to display the serial warnings, which helps to properly document default serializable fields and writeExternal methods. --since release(,release)* Generates documentation for APIs that were added or newly deprecated in the specified releases. If the @since tag in the javadoc comment of an element in the documented source code matches a release passed as option argument, information about the element and the release it was added in is included in a "New API" page. 
If the "Deprecated API" page is generated and the since element of the java.lang.Deprecated annotation of a documented element matches a release in the option arguments, information about the release the element was deprecated in is added to the "Deprecated API" page. Releases are compared using case-sensitive string comparison. --since-label text Specifies the text to use in the heading of the "New API" page. This may contain information about the releases covered in the page, e.g. "New API in release 2.0", or "New API since release 1". --snippet-path snippetpathlist Specifies the search paths for finding files for external snippets. The snippetpathlist can contain multiple paths by separating them with the platform path separator (; on Windows; : on other platforms.) The Standard Doclet first searches the snippet-files subdirectory in the package containing the snippet, and then searches all the directories in the given list. -sourcetab tab-length Specifies the number of spaces each tab uses in the source. --spec-base-url url Specifies the base URL for relative URLs in @spec tags, to be used when generating links to any external specifications. It can either be an absolute URL, or a relative URL, in which case it is evaluated relative to the base directory of the generated output files. The default value is equivalent to {@docRoot}/../specs. -splitindex Splits the index file into multiple files, alphabetically, one file per letter, plus a file for any index entries that start with non-alphabetical symbols. -tag name:locations:header Specifies single argument custom tags. For the javadoc tool to spell-check tag names, it is important to include a -tag option for every custom tag that is present in the source code, disabling (with X) those that aren't being output in the current run. The colon (:) is always the separator. To include a colon in the tag name, escape it with a backward slash (\). 
The -tag option outputs the tag heading, header, in bold, followed on the next line by the text from its single argument. Similar to any block tag, the argument text can contain inline tags, which are also interpreted. The output is similar to standard one- argument tags, such as the @return and @author tags. Omitting a header value causes the name to be the heading. locations is a list of characters specifying the kinds of declarations in which the tag may be used. The following characters may be used, in either uppercase or lowercase: • A: all declarations • C: constructors • F: fields • M: methods • O: the overview page and other documentation files in doc-files subdirectories • P: packages • S: modules • T: types (classes and interfaces) • X: nowhere: the tag is disabled, and will be ignored The order in which tags are given on the command line will be used as the order in which the tags appear in the generated output. You can include standard tags in the order given on the command line by using the -tag option with no locations or header. -taglet class Specifies the fully qualified name of the taglet used in generating the documentation for that tag. Use the fully qualified name for the class value. This taglet also defines the number of text arguments that the custom tag has. The taglet accepts those arguments, processes them, and generates the output. Taglets are useful for block or inline tags. They can have any number of arguments and implement custom behavior, such as making text bold, formatting bullets, writing out the text to a file, or starting other processes. Taglets can only determine where a tag should appear and in what form. All other decisions are made by the doclet. A taglet can't do things such as remove a class name from the list of included classes. However, it can execute side effects, such as printing the tag's text to a file or triggering another process. Use the -tagletpath option to specify the path to the taglet. 
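A hedged end-to-end sketch of the -tag option described above: it registers an invented custom @todo block tag, valid on all declarations (location a), with the heading "To Do:". The package and class names are made up, and the run is skipped when no JDK is available:

```shell
#!/bin/sh
# Sketch only: -tag name:locations:header with an invented @todo tag.
if command -v javadoc >/dev/null 2>&1; then
    src=$(mktemp -d); out=$(mktemp -d)
    mkdir -p "$src/demo"
    cat > "$src/demo/Task.java" <<'EOF'
package demo;
/**
 * Demo class.
 * @todo Replace this placeholder.
 */
public class Task {}
EOF
    # "a" allows the tag on all declarations; "To Do:" is the heading
    # shown in the generated output.
    if javadoc -quiet -tag 'todo:a:To Do:' \
               -d "$out" -sourcepath "$src" demo >/dev/null 2>&1; then
        result="custom tag accepted"
    else
        result="javadoc failed"
    fi
    rm -rf "$src" "$out"
else
    result="skipped: no JDK"
fi
echo "$result"
```

Without the -tag option, the same run would warn about the unknown @todo tag instead of rendering it.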
The following example inserts the To Do taglet after Parameters and ahead of Throws in the generated pages:
-taglet com.sun.tools.doclets.ToDoTaglet
-tagletpath /home/taglets
-tag return
-tag param
-tag todo
-tag throws
-tag see
Alternately, you can use the -taglet option in place of its -tag option, but that might be difficult to read.
-tagletpath tagletpathlist Specifies the search paths for finding taglet class files. The tagletpathlist can contain multiple paths by separating them with the platform path separator (; on Windows; : on other platforms.) The javadoc tool searches all subdirectories of the specified paths.
-top html-code Specifies the text to be placed at the top of each output file.
-use Creates class and package usage pages. Includes one Use page for each documented class and package. The page describes what packages, classes, methods, constructors and fields use any API of the specified class or package. Given class C, things that use class C would include subclasses of C, fields declared as C, methods that return C, and methods and constructors with parameters of type C. For example, you can look at the Use page for the String type. Because the getName method in the java.awt.Font class returns type String, the getName method uses String, and so the getName method appears on the Use page for String. This documents only uses of the API, not the implementation. When a method uses String in its implementation, but doesn't take a string as an argument or return a string, that isn't considered a use of String. To access the generated Use page, go to the class or package and click the Use link in the navigation bar.
-version Includes the version text in the generated docs. This text is omitted by default. To find out what version of the javadoc tool you are using, use the -J-version option.
-windowtitle title Specifies the title to be placed in the HTML <title> tag. 
The text specified in the title tag appears in the window title and in any browser bookmarks (favorite places) that someone creates for this page. This title should not contain any HTML tags because a browser will not interpret them correctly. Use escape characters on any internal quotation marks within the title tag. If the -windowtitle option is omitted, then the javadoc tool uses the value of the -doctitle option for the -windowtitle option. For example:
javadoc -windowtitle "My Library" com.mypackage
Extra Options for the Standard Doclet
The following are additional options provided by the Standard Doclet and are subject to change without notice. Additional options are less commonly used or are otherwise regarded as advanced.
--date date-and-time Specifies the value to be used to timestamp the generated pages, in ISO 8601 [https://www.iso.org/iso-8601-date-and-time-format.html] format. The specified value must be within 10 years of the current date and time. It is an error to specify both -notimestamp and --date. Using a specific value means the generated documentation can be part of a reproducible build [https://reproducible-builds.org/]. If the option is not given, the default value is the current date and time. For example:
javadoc --date 2022-02-01T17:41:59-08:00 mypackage
--legal-notices (default|none|directory) Specifies the location from which to copy legal files to the generated documentation. If the option is not specified or is used with the value default, the files are copied from the default location. If the argument is used with the value none, no files are copied. Every other argument is interpreted as a directory from which to copy the legal files.
--no-frames This option is a no-op and is retained only for backwards compatibility.
-Xdoclint Enables recommended checks for problems in documentation comments. By default, the -Xdoclint option is enabled. Disable it with the option -Xdoclint:none. For more details, see DocLint.
-Xdoclint:flag,flag,... 
Enable or disable specific checks for different kinds of issues in documentation comments. Each flag can be one of all, none, or [-]group where group has one of the following values: accessibility, html, missing, reference, syntax. For more details on these values, see DocLint Groups. When specifying two or more flags, you can either use a single -Xdoclint:... option, listing all the desired flags, or you can use multiple options giving one or more flag in each option. For example, use either of the following commands to check for the HTML, syntax, and accessibility issues in the file MyFile.java. javadoc -Xdoclint:html -Xdoclint:syntax -Xdoclint:accessibility MyFile.java javadoc -Xdoclint:html,syntax,accessibility MyFile.java The following examples illustrate how to change what DocLint reports: • -Xdoclint:none --- disables all checks • -Xdoclint:group --- enables group checks • -Xdoclint:all --- enables all groups of checks • -Xdoclint:all,-group --- enables all checks except group checks For more details, see DocLint. -Xdoclint/package:[-]packages Enables or disables checks in specific packages. packages is a comma separated list of package specifiers. A package specifier is either a qualified name of a package or a package name prefix followed by *, which expands to all subpackages of the given package. Prefix the package specifier with - to disable checks for the specified packages. For more details, see DocLint. -Xdocrootparent url Replaces all @docRoot items followed by /.. in documentation comments with url. DOCLINT DocLint provides the ability to check for possible problems in documentation comments. Problems may be reported as warnings or errors, depending on their severity. For example, a missing comment may be bad style that deserves a warning, but a link to an unknown Java declaration is more serious and deserves an error. Problems are organized into groups, and options can be used to enable or disable messages in one or more groups. 
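The same group flags are honored when DocLint runs inside javac. A minimal hedged sketch (the class name is invented, and the block is skipped when no JDK is on the PATH):

```shell
#!/bin/sh
# Sketch only: enable DocLint's "missing" group via javac and show
# whatever diagnostics it emits for a deliberately undocumented method.
if command -v javac >/dev/null 2>&1; then
    dir=$(mktemp -d)
    cat > "$dir/Sparse.java" <<'EOF'
/** Utility with deliberately sparse comments. */
public class Sparse {
    public int twice(int n) { return 2 * n; }   // undocumented on purpose
}
EOF
    # DocLint reports missing tags/comments as warnings, so the
    # compilation itself still succeeds.
    diag=$(javac -Xdoclint:missing -d "$dir" "$dir/Sparse.java" 2>&1)
    result="diagnostics: ${diag:-none}"
    rm -rf "$dir"
else
    result="skipped: no JDK"
fi
echo "$result"
```

The exact diagnostic text varies by JDK release; the point is that the group flag alone controls whether the "missing" checks run at all.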
Within the source code, messages in one or more groups can be suppressed by using @SuppressWarnings annotations. When invoked from javadoc, by default DocLint checks all comments that are used in the generated documentation. It thus relies on other command-line options to determine which declarations, and which corresponding documentation comments will be included. Note: this may mean that even comments on some private members of serializable classes will also be checked, if the members need to be documented in the generated Serialized Forms page. In contrast, when DocLint is invoked from javac, DocLint solely relies on the various -Xdoclint... options to determine which documentation comments to check. DocLint doesn't attempt to fix invalid input, it just reports it. Note: DocLint doesn't guarantee the completeness of these checks. In particular, it isn't a full HTML compliance checker. The goal is to just report common errors in a convenient manner. Groups The checks performed by DocLint are organized into groups. The warnings and errors in each group can be enabled or disabled with command-line options, or suppressed with @SuppressWarnings annotations. The groups are as follows: • accessibility --- Checks for issues related to accessibility. For example, no alt attribute specified in an <img> element, or no caption or summary attributes specified in a <table> element. Issues are reported as errors if a downstream validation tool might be expected to report an error in the files generated by javadoc. For reference, see the Web Content Accessibility Guidelines [https://www.w3.org/WAI/standards-guidelines/wcag/]. • html --- Detects common high-level HTML issues. For example, putting block elements inside inline elements, or not closing elements that require an end tag. Issues are reported as errors if a downstream validation tool might be expected to report an error in the files generated by javadoc. 
For reference, see the HTML Living Standard [https://html.spec.whatwg.org/multipage/]. • missing --- Checks for missing documentation comments or tags. For example, a missing comment on a class declaration, or a missing @param or @return tag in the comment for a method declaration. Issues related to missing items are typically reported as warnings because they are unlikely to be reported as errors by downstream validation tools that may be used to check the output generated by javadoc. • reference --- Checks for issues relating to the references to Java API elements from documentation comment tags. For example, the reference in @see or {@link ...} cannot be found, or a bad name is given for @param or @throws. Issues are typically reported as errors because while the issue may not cause problems in the generated files, the author has likely made a mistake that will lead to incorrect or unexpected documentation. • syntax --- Checks for low-level syntactic issues in documentation comments. For example, unescaped angle brackets (< and >) and ampersands (&) and invalid documentation comment tags. Issues are typically reported as errors because the issues may lead to incorrect or unexpected documentation. Suppressing Messages DocLint checks for and recognizes two strings that may be present in the arguments for an @SuppressWarnings annotation. • doclint • doclint:LIST where LIST is a comma-separated list of one or more of accessibility, html, missing, syntax, reference. The names in LIST are the same group names supported by the command- line -Xdoclint option for javac and javadoc. (This is the same convention honored by the javac -Xlint option and the corresponding names supported by @SuppressWarnings.) The names in LIST can equivalently be specified in separate arguments of the annotation. 
For example, the following are equivalent: • @SuppressWarnings("doclint:accessibility,missing") • @SuppressWarnings("doclint:accessibility", "doclint:missing") When DocLint detects an issue in a documentation comment, it checks for the presence of @SuppressWarnings on the associated declaration and on all lexically enclosing declarations. The issue will be ignored if any such annotation is found containing the simple string doclint or the longer form doclint:LIST where LIST contains the name of the group for the issue. Note: as with other uses of @SuppressWarnings, using the annotation on a module or package declaration only affects that declaration; it does not affect the contents of the module or package in other source files. All messages related to an issue are suppressed by the presence of an appropriate @SuppressWarnings annotation: this includes errors as well as warnings. Note: It is only possible to suppress messages. If an annotation of @SuppressWarnings("doclint") is given on a top-level declaration, all DocLint messages for that declaration and any enclosed declarations will be suppressed; it is not possible to selectively re-enable messages for issues in enclosed declarations. Comparison with downstream validation tools DocLint is a utility built into javac and javadoc that checks the content of documentation comments, as found in source files. In contrast, downstream validation tools can be used to validate the output generated from those documentation comments by javadoc and the Standard Doclet. Although there is some overlap in functionality, the two mechanisms are different and each has its own strengths and weaknesses. • Downstream validation tools can check the end result of any generated documentation, as it will be seen by the end user. This includes content from all sources, including documentation comments, the Standard Doclet itself, user-provided taglets, and content supplied via command-line options. 
Because such tools are analyzing complete HTML pages, they can do more complete checks than can DocLint. However, when a problem is found in the generated pages, it can be harder to track down exactly where in the build pipeline the problem needs to be fixed. • DocLint checks the content of documentation comments, in source files. This makes it very easy to identify the exact position of any issues that may be found. DocLint can also detect some semantic errors in documentation comments that downstream tools cannot detect, such as missing comments, using an @return tag in a method returning void, or an @param tag describing a non-existent parameter. But by its nature, DocLint cannot report on problems such as missing links, or errors in user-provided custom taglets, or problems in the Standard Doclet itself. It also cannot reliably detect errors in documentation comments at the boundaries between content in a documentation comment and content generated by a custom taglet. JDK 22 2024 JAVADOC(1)
tput
The tput utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or return the long name of the requested terminal type. The result depends upon the capability's type: string tput writes the string to the standard output. No trailing newline is supplied. integer tput writes the decimal value to the standard output, with a trailing newline. boolean tput simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output. Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5). -Ttype indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. capname indicates the capability from the terminfo database. When termcap support is compiled in, the termcap name for the capability is also accepted. parms If the capability is a string that takes parameters, the arguments parms will be instantiated into the string. Most parameters are numbers. Only a few terminfo capabilities require string parameters; tput uses a table to decide which to pass as strings. Normally tput uses tparm (3X) to perform the substitution. If no parameters are given for the capability, tput writes the string without performing the substitution. -S allows more than one capability per invocation of tput. The capabilities must be passed to tput from the standard input instead of from the command line (see example). Only one capname is allowed per line. 
The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Again, tput uses a table and the presence of parameters in its input to decide whether to use tparm(3X), and how to interpret the parameters.
-V reports the version of ncurses which was used in this program, and exits.
init If the terminfo database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur: (1) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization, (2) any delays (e.g., newline) specified in the entry will be set in the tty driver, (3) tabs expansion will be turned on or off according to the specification in the entry, and (4) if tabs are not expanded, standard tabs will be set (every 8 spaces). If an entry does not contain the information needed for any of the four above activities, that activity will silently be skipped.
reset Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output. Otherwise, reset acts identically to init.
longname If the terminfo database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)].
If tput is invoked by a link named reset, this has the same effect as tput reset. See tset(1) for comparison, which has similar behavior.
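The "test the exit code before using the value" advice above can be sketched in shell. This assumes an xterm terminfo entry is installed (hence the explicit -Txterm, which also avoids needing a tty); the fallback value 80 is the script's own assumption, not something tput provides:

```shell
#!/bin/sh
# Sketch: capture an integer capability's value, then check the exit
# code before trusting the captured output.
cols=$(tput -Txterm cols 2>/dev/null)
if [ $? -eq 0 ] && [ -n "$cols" ]; then
    result="columns: $cols"
else
    # Unknown terminal type, missing terminfo database, etc.
    result="columns: 80 (fallback)"
fi
echo "$result"
```

For a plain assignment, $? after the assignment is the exit status of the command substitution, which is what makes this pattern work.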
tput, reset - initialize a terminal or query terminfo database
tput [-Ttype] capname [parms ... ]
tput [-Ttype] init
tput [-Ttype] reset
tput [-Ttype] longname
tput -S <<
tput -V
tput init
Initialize the terminal according to the type of terminal in the environmental variable TERM. This command should be included in everyone's .profile after the environmental variable TERM has been exported, as illustrated on the profile(5) manual page.
tput -T5620 reset
Reset an AT&T 5620 terminal, overriding the type of terminal in the environmental variable TERM.
tput cup 0 0
Send the sequence to move the cursor to row 0, column 0 (the upper left corner of the screen, usually known as the "home" cursor position).
tput clear
Echo the clear-screen sequence for the current terminal.
tput cols
Print the number of columns for the current terminal.
tput -T450 cols
Print the number of columns for the 450 terminal.
bold=`tput smso` offbold=`tput rmso`
Set the shell variables bold, to begin stand-out mode sequence, and offbold, to end standout mode sequence, for the current terminal. This might be followed by a prompt:
echo "${bold}Please type in your name: ${offbold}\c"
tput hc
Set exit code to indicate if the current terminal is a hard copy terminal.
tput cup 23 4
Send the sequence to move the cursor to row 23, column 4.
tput cup
Send the terminfo string for cursor-movement, with no parameters substituted.
tput longname
Print the long name from the terminfo database for the type of terminal specified in the environmental variable TERM.
tput -S <<!
> clear
> cup 10 10
> bold
> !
This example shows tput processing several capabilities in one invocation. It clears the screen, moves the cursor to position 10, 10 and turns on bold (extra bright) mode. The list is terminated by an exclamation mark (!) on a line by itself. 
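A runnable, hedged variant of the -S example above: it feeds several capabilities on standard input, uses an explicit -Txterm so no tty is required, and tolerates failure on systems without that terminfo entry:

```shell
#!/bin/sh
# Sketch: one tput invocation processing several capnames via -S.
# Output is discarded; we only care whether tput accepted the list.
if tput -Txterm -S >/dev/null 2>&1 <<'EOF'
clear
cup 10 10
bold
EOF
then
    result="capabilities emitted"
else
    result="tput -S failed (no xterm terminfo entry?)"
fi
echo "$result"
```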
FILES
/usr/share/terminfo compiled terminal description database
/usr/share/tabset/* tab settings for some terminals, in a format appropriate to be output to the terminal (escape sequences that set margins and tabs); for more information, see the "Tabs and Initialization" section of terminfo(5)
EXIT CODES
If the -S option is used, tput checks for errors from each line, and if any errors are found, will set the exit code to 4 plus the number of lines with errors. If no errors are found, the exit code is 0. No indication of which line failed can be given, so exit code 1 will never appear. Exit codes 2, 3, and 4 retain their usual interpretation. If the -S option is not used, the exit code depends on the type of capname:
boolean a value of 0 is set for TRUE and 1 for FALSE.
string a value of 0 is set if the capname is defined for this terminal type (the value of capname is returned on standard output); a value of 1 is set if capname is not defined for this terminal type (nothing is written to standard output).
integer a value of 0 is always set, whether or not capname is defined for this terminal type. To determine if capname is defined for this terminal type, the user must test the value written to standard output. A value of -1 means that capname is not defined for this terminal type.
other reset or init may fail to find their respective files. In that case, the exit code is set to 4 + errno.
Any other exit code indicates an error; see the DIAGNOSTICS section.
DIAGNOSTICS
tput prints the following error messages and sets the corresponding exit codes.
exit code   error message
0   (capname is a numeric variable that is not specified in the terminfo(5) database for this terminal type, e.g. tput -T450 lines and tput -T2621 xmc)
1   no error message is printed, see the EXIT CODES section. 
2   usage error
3   unknown terminal type or no terminfo database
4   unknown terminfo capability capname
>4  error occurred in -S
PORTABILITY
The longname and -S options, and the parameter-substitution features used in the cup example, are not supported in BSD curses or in AT&T/USL curses before SVr4. X/Open documents only the operands for clear, init and reset. In this implementation, clear is part of the capname support. Other implementations of tput on SVr4-based systems such as Solaris, IRIX64 and HPUX, as well as others such as AIX and Tru64, provide support for capname operands. A few platforms such as FreeBSD and NetBSD recognize termcap names rather than terminfo capability names in their respective tput commands. Most implementations which provide support for capname operands use the tparm function to expand parameters in it. That function expects a mixture of numeric and string parameters, requiring tput to know which type to use. This implementation uses a table to determine that for the standard capname operands, and an internal library function to analyze nonstandard capname operands. Other implementations may simply guess that an operand containing only digits is intended to be a number.
SEE ALSO
clear(1), stty(1), tabs(1), terminfo(5), curs_termcap(3X).
This describes ncurses version 5.7 (patch 20081102).
tput(1)
fgrep
The grep utility searches any given input files, selecting lines that match one or more patterns. By default, a pattern matches an input line if the regular expression (RE) in the pattern matches the input line without its trailing newline. An empty expression matches every line. Each input line that matches at least one of the patterns is written to the standard output. grep is used for simple patterns and basic regular expressions (BREs); egrep can handle extended regular expressions (EREs). See re_format(7) for more information on regular expressions. fgrep is quicker than both grep and egrep, but can only handle fixed patterns (i.e., it does not interpret regular expressions). Patterns may consist of one or more lines, allowing any of the pattern lines to match a portion of the input. zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utilities. bzgrep, bzegrep, and bzfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the bzip2(1) compression utility. The following options are available: -A num, --after-context=num Print num lines of trailing context after each match. See also the -B and -C options. -a, --text Treat all files as ASCII text. Normally grep will simply print “Binary file ... matches” if files contain binary characters. Use of this option forces grep to output lines matching the specified pattern. -B num, --before-context=num Print num lines of leading context before each match. See also the -A and -C options. -b, --byte-offset The offset in bytes of a matched pattern is displayed in front of the respective matched line. -C num, --context=num Print num lines of leading and trailing context surrounding each match. See also the -A and -B options. -c, --count Only a count of selected lines is written to standard output. 
--colour=[when], --color=[when] Mark up the matching text with the expression stored in the GREP_COLOR environment variable. The possible values of when are “never”, “always” and “auto”. -D action, --devices=action Specify the demanded action for devices, FIFOs and sockets. The default action is “read”, which means, that they are read as if they were normal files. If the action is set to “skip”, devices are silently skipped. -d action, --directories=action Specify the demanded action for directories. It is “read” by default, which means that the directories are read in the same manner as normal files. Other possible values are “skip” to silently ignore the directories, and “recurse” to read them recursively, which has the same effect as the -R and -r option. -E, --extended-regexp Interpret pattern as an extended regular expression (i.e., force grep to behave as egrep). -e pattern, --regexp=pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’). --exclude pattern If specified, it excludes files matching the given filename pattern from the search. Note that --exclude and --include patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include pattern is specified, all files are searched that are not excluded. Patterns are matched to the full path specified, not only to the filename component. --exclude-dir pattern If -R is specified, it excludes directories matching the given filename pattern from the search. Note that --exclude-dir and --include-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include-dir pattern is specified, all directories are searched that are not excluded. 
-F, --fixed-strings Interpret pattern as a set of fixed strings (i.e., force grep to behave as fgrep). -f file, --file=file Read one or more newline separated patterns from file. Empty pattern lines match every input line. Newlines are not considered part of a pattern. If file is empty, nothing is matched. -G, --basic-regexp Interpret pattern as a basic regular expression (i.e., force grep to behave as traditional grep). -H Always print filename headers with output lines. -h, --no-filename Never print filename headers (i.e., filenames) with output lines. --help Print a brief help message. -I Ignore binary files. This option is equivalent to the “--binary-files=without-match” option. -i, --ignore-case Perform case insensitive matching. By default, grep is case sensitive. --include pattern If specified, only files matching the given filename pattern are searched. Note that --include and --exclude patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. Patterns are matched to the full path specified, not only to the filename component. --include-dir pattern If -R is specified, only directories matching the given filename pattern are searched. Note that --include-dir and --exclude-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. -J, --bz2decompress Decompress the bzip2(1) compressed file before looking for the text. -L, --files-without-match Only the names of files not containing selected lines are written to standard output. Pathnames are listed once per file searched. If the standard input is searched, the string “(standard input)” is written unless a --label is specified. -l, --files-with-matches Only the names of files containing selected lines are written to standard output. grep will only search a file until a match has been found, making searches potentially less expensive. Pathnames are listed once per file searched. 
If the standard input is searched, the string “(standard input)” is written unless a --label is specified. --label Label to use in place of “(standard input)” for a file name where a file name would normally be printed. This option applies to -H, -L, and -l. --mmap Use mmap(2) instead of read(2) to read input, which can result in better performance under some circumstances but can cause undefined behaviour. -M, --lzma Decompress the LZMA compressed file before looking for the text. -m num, --max-count=num Stop reading the file after num matches. -n, --line-number Each output line is preceded by its relative line number in the file, starting at line 1. The line number counter is reset for each file processed. This option is ignored if -c, -L, -l, or -q is specified. --null Prints a zero-byte after the file name. -O If -R is specified, follow symbolic links only if they were explicitly listed on the command line. The default is not to follow symbolic links. -o, --only-matching Prints only the matching part of the lines. -p If -R is specified, no symbolic links are followed. This is the default. -q, --quiet, --silent Quiet mode: suppress normal output. grep will only search a file until a match has been found, making searches potentially less expensive. -R, -r, --recursive Recursively search subdirectories listed. (i.e., force grep to behave as rgrep). -S If -R is specified, all symbolic links are followed. The default is not to follow symbolic links. -s, --no-messages Silent mode. Nonexistent and unreadable files are ignored (i.e., their error messages are suppressed). -U, --binary Search binary files, but do not attempt to print them. -u This option has no effect and is provided only for compatibility with GNU grep. -V, --version Display version information and exit. -v, --invert-match Selected lines are those not matching any of the specified patterns. 
-w, --word-regexp The expression is searched for as a word (as if surrounded by ‘[[:<:]]’ and ‘[[:>:]]’; see re_format(7)). This option has no effect if -x is also specified. -x, --line-regexp Only input lines selected against an entire fixed string or regular expression are considered to be matching lines. -y Equivalent to -i. Obsoleted. -z, --null-data Treat input and output data as sequences of lines terminated by a zero-byte instead of a newline. -X, --xz Decompress the xz(1) compressed file before looking for the text. -Z, --decompress Force grep to behave as zgrep. --binary-files=value Controls searching and printing of binary files. Options are: binary (default) Search binary files but do not print them. without-match Do not search binary files. text Treat all files as text. --line-buffered Force output to be line buffered. By default, output is line buffered when standard output is a terminal and block buffered otherwise. If no file arguments are specified, the standard input is used. Additionally, “-” may be used in place of a file name, anywhere that a file name is accepted, to read from standard input. This includes both -f and file arguments. ENVIRONMENT GREP_OPTIONS May be used to specify default options that will be placed at the beginning of the argument list. Backslash-escaping is not supported, unlike the behavior in GNU grep. EXIT STATUS The grep utility exits with one of the following values: 0 One or more lines were selected. 1 No lines were selected. >1 An error occurred.
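Several of the options above are easiest to understand side by side. A small sketch using throwaway files (file names and contents are invented for the example; any POSIX shell will do):

```shell
tmp=$(mktemp)
printf '%s\n' '1.5' '145' 'foo foobar' > "$tmp"

grep '1.5' "$tmp"        # BRE: '.' matches any character -> "1.5" and "145"
grep -F '1.5' "$tmp"     # fixed string (fgrep behaviour) -> "1.5" only

grep -w foo "$tmp"       # word match -> "foo foobar" only
grep -o foo "$tmp"       # print just the matched parts, one per line
grep -c foo "$tmp"       # count of matching lines -> 1

# The exit status drives shell logic: 0 = match, 1 = no match.
grep -qx foo "$tmp" || echo 'no line is exactly "foo"'

# --include restricts a recursive search to matching file names.
dir=$(mktemp -d); mkdir "$dir/sub"
echo 'TODO' > "$dir/notes.txt"
echo 'TODO' > "$dir/sub/code.c"
grep -R --include='*.c' TODO "$dir"   # only sub/code.c is searched

rm -rf "$tmp" "$dir"
```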
grep, egrep, fgrep, rgrep, bzgrep, bzegrep, bzfgrep, zgrep, zegrep, zfgrep – file pattern searcher
grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C num] [-e pattern] [-f file] [--binary-files=value] [--color[=when]] [--colour[=when]] [--context=num] [--label] [--line-buffered] [--null] [pattern] [file ...]
null
- Find all occurrences of the pattern ‘patricia’ in a file:

      $ grep 'patricia' myfile

- Same as above but looking only for complete words:

      $ grep -w 'patricia' myfile

- Count occurrences of the exact pattern ‘FOO’:

      $ grep -c FOO myfile

- Same as above but ignoring case:

      $ grep -c -i FOO myfile

- Find all occurrences of the pattern ‘.Pp’ at the beginning of a line:

      $ grep '^\.Pp' myfile

  The apostrophes ensure the entire expression is evaluated by grep instead of by the user's shell. The caret ‘^’ matches the null string at the beginning of a line, and the ‘\’ escapes the ‘.’, which would otherwise match any character.

- Find all lines in a file which do not contain the words ‘foo’ or ‘bar’:

      $ grep -v -e 'foo' -e 'bar' myfile

- Peruse the file ‘calendar’ looking for either 19, 20, or 25 using extended regular expressions:

      $ egrep '19|20|25' calendar

- Show matching lines and the name of the ‘*.h’ files which contain the pattern ‘FIXME’. Do the search recursively from the /usr/src/sys/arm directory:

      $ grep -H -R FIXME --include="*.h" /usr/src/sys/arm/

- Same as above but show only the name of the matching file:

      $ grep -l -R FIXME --include="*.h" /usr/src/sys/arm/

- Show lines containing the text ‘foo’. The matching part of the output is colored and every line is prefixed with the line number and the offset in the file for those lines that matched:
      $ grep -b --colour -n foo myfile

- Show lines that match the extended regular expression patterns read from the standard input:

      $ echo -e 'Free\nBSD\nAll.*reserved' | grep -E -f - myfile

- Show lines from the output of the pciconf(8) command matching the specified extended regular expression along with three lines of leading context and one line of trailing context:

      $ pciconf -lv | grep -B3 -A1 -E 'class.*=.*storage'

- Suppress any output and use the exit status to show an appropriate message:

      $ grep -q foo myfile && echo File matches

SEE ALSO

bzip2(1), compress(1), ed(1), ex(1), gzip(1), sed(1), xz(1), zgrep(1), re_format(7)

STANDARDS

The grep utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-AaBbCDdGHhILmopRSUVw] are extensions to that specification, and the behaviour of the -f flag when used with an empty pattern file is left undefined. All long options are provided for compatibility with GNU versions of this utility. Historic versions of the grep utility also supported the flags [-ruy]. This implementation supports those options; however, their use is strongly discouraged.

HISTORY

The grep command first appeared in Version 6 AT&T UNIX.

BUGS

The grep utility does not normalize Unicode input, so a pattern containing composed characters will not match decomposed input, and vice versa.

macOS 14.5 November 10, 2021 macOS 14.5
par.pl
This stand-alone command offers roughly the same feature as "perl -MPAR", except that it takes the pre-loaded .par files via "-Afoo.par" instead of "-MPAR=foo.par". Additionally, it lets you convert a CPAN distribution to a PAR distribution, as well as manipulate such distributions. For more information about PAR distributions, see PAR::Dist. Binary PAR loader (parl) If you have a C compiler, or a pre-built binary package of PAR is available for your platform, a binary version of par.pl will also be automatically installed as parl. You can use it to run .par files: # runs script/run.pl in archive, uses its lib/* as libraries % parl myapp.par run.pl # runs run.pl or script/run.pl in myapp.par % parl otherapp.pl # also runs normal perl scripts However, if the .par archive contains either main.pl or script/main.pl, it is used instead: % parl myapp.par run.pl # runs main.pl, with 'run.pl' as @ARGV Finally, the "-O" option makes a stand-alone binary executable from a PAR file: % parl -B -Omyapp myapp.par % ./myapp # run it anywhere without perl binaries With the "--par-options" flag, generated binaries can act as "parl" to pack new binaries: % ./myapp --par-options -Omyap2 myapp.par # identical to ./myapp % ./myapp --par-options -Omyap3 myap3.par # now with different PAR Stand-alone executable format The format for the stand-alone executable is simply concatenating the following elements: • The executable itself Either in plain-text (par.pl) or native executable format (parl or parl.exe). • Any number of embedded files These are typically used for bootstrapping PAR's various XS dependencies. Each section contains: The magic string ""FILE"" Length of file name in "pack('N')" format plus 9 8 bytes of hex-encoded CRC32 of file content A single slash (""/"") The file name (without path) File length in "pack('N')" format The file's content (not compressed) • One PAR file This is just a zip file beginning with the magic string ""PK\003\004"". 
• Ending section The pre-computed cache name. A pack('Z40') string of the value of -T (--tempcache) or the hash of the file, followed by "\0CACHE". The hash of the file is calculated with Digest::SHA. A pack('N') number of the total length of FILE and PAR sections, followed by a 8-bytes magic string: ""\012PAR.pm\012"". SEE ALSO PAR, PAR::Dist, parl, pp AUTHORS Audrey Tang <cpan@audreyt.org>, Steffen Mueller <smueller@cpan.org> You can write to the mailing list at <par@perl.org>, or send an empty mail to <par-subscribe@perl.org> to participate in the discussion. Please submit bug reports to <bug-par-packer@rt.cpan.org>. COPYRIGHT Copyright 2002-2009 by Audrey Tang <cpan@audreyt.org>. Neither this program nor the associated parl program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.34.0 2020-03-08 PAR(1)
par.pl - Make and run Perl Archives
(Please see pp for convenient ways to make self-contained executables, scripts or PAR archives from perl programs.)

To make a PAR distribution from a CPAN module distribution:

    % par.pl -p                 # make a PAR dist under the current path
    % par.pl -p Foo-0.01        # assume unpacked CPAN dist in Foo-0.01/

To manipulate a PAR distribution:

    % par.pl -i Foo-0.01-i386-freebsd-5.8.0.par   # install
    % par.pl -i http://foo.com/Foo-0.01           # auto-appends archname + perlver
    % par.pl -i cpan://AUTRIJUS/PAR-0.74          # uses CPAN author directory
    % par.pl -u Foo-0.01-i386-freebsd-5.8.0.par   # uninstall
    % par.pl -s Foo-0.01-i386-freebsd-5.8.0.par   # sign
    % par.pl -v Foo-0.01-i386-freebsd-5.8.0.par   # verify

To use Hello.pm from ./foo.par:

    % par.pl -A./foo.par -MHello
    % par.pl -A./foo -MHello      # the .par part is optional

Same thing, but search for foo.par in @INC:

    % par.pl -Ifoo.par -MHello
    % par.pl -Ifoo -MHello        # ditto

Run test.pl or script/test.pl from foo.par:

    % par.pl foo.par test.pl      # looks for 'main.pl' by default,
                                  # otherwise runs 'test.pl'

To make a self-contained script containing a PAR file:

    % par.pl -O./foo.pl foo.par
    % ./foo.pl test.pl            # same as above

To embed the necessary non-core modules and shared objects for PAR's execution (like "Zlib", "IO", "Cwd", etc.), use the -b flag:

    % par.pl -b -O./foo.pl foo.par
    % ./foo.pl test.pl            # runs anywhere with core modules installed

If you also wish to embed core modules, use the -B flag instead:

    % par.pl -B -O./foo.pl foo.par
    % ./foo.pl test.pl            # runs anywhere with the perl interpreter

This is particularly useful when making stand-alone binary executables; see pp for details.
null
null
sigdist.d
This is a simple DTrace script that prints the number of signals received, by process and signal number. This script is also available as /usr/demo/dtrace/sig.d, where it originates. Since this uses DTrace, only users with root privileges can run this command.
sigdist.d - signal distribution by process. Uses DTrace.
sigdist.d
null
This samples until Ctrl-C is hit. # sigdist.d FIELDS SENDER process name of sender RECIPIENT process name of target SIG signal number, see signal(3head) COUNT number of signals sent BASED ON /usr/demo/dtrace/sig.d DOCUMENTATION DTrace Guide "proc Provider" chapter (docs.sun.com) See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT sigdist.d will sample until Ctrl-C is hit. SEE ALSO kill.d(1M), dtrace(1M) version 1.00 June 9, 2005 sigdist.d(1m)
ppdhtml
ppdhtml reads a driver information file and produces an HTML summary page that lists all of the drivers in a file and the supported options. This program is deprecated and will be removed in a future release of CUPS.
ppdhtml - cups html summary generator (deprecated)
ppdhtml [ -D name[=value] ] [ -I include-directory ] source-file
ppdhtml supports the following options: -D name[=value] Sets the named variable for use in the source file. It is equivalent to using the #define directive in the source file. -I include-directory Specifies an alternate include directory. Multiple -I options can be supplied to add additional directories. NOTES PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO ppdc(1), ppdcfile(5), ppdi(1), ppdmerge(1), ppdpo(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS ppdhtml(1)
null
auval
AUValidation tests a specified AudioUnit for API and behavioural conformance. Returns: OK: 0, malformed execution: 1, unit not conformant: -1
auval – AudioUnit validation
auval [--32] [-s TYPE] [-a] [-v|vt TYPE SUBT MANU [-w] [-de] [-dw]] [-f file]
-32 must be specified first. run in 32 bit mode. If not specified, runs as 64 bit native architecture -h print help text -a lists all available AudioUnits of any type -s TYPE lists all available AudioUnits of type 'TYPE' -v TYPE SUBT MANU opens the AudioUnit specified by the TYPE SUBT MANU component ID's and tests that unit. -vt TYPE MANU iterates through all of the AU's of specified TYPE and MANU -de execution is terminated when first error is encountered -dw execution is terminated when first warning is encountered -c continue validating when an error occurs in batch mode. -q quiet - does no printing except for errors or warnings -qp doesn't print parameter or Factory Presets information -o only runs a basic open and initialize test. good for debugging basic functionality -r N repeat the whole process of validation N times. good for catching open/init bugs. -w wait after finished - good for profiling memory usage see 'man leaks' -vers The version is printed to stdout. -versh The version is printed to stdout in hexadecimal. -f FILENAME Each line in the file should contain one complete command. Darwin February 13, 2006 Darwin
null
erb
erb is a command-line front-end for the ERB library, which is an implementation of eRuby. ERB provides an easy-to-use but powerful templating system for Ruby. Using ERB, actual Ruby code can be added to any plain text document for the purposes of generating document information details and/or flow control. erb is a part of Ruby.
erb – Ruby Templating
erb [--version] [-UPdnvx] [-E ext[:int]] [-S level] [-T mode] [-r library] [--] [file ...]
--version
    Prints the version of erb.
-E external[:internal], --encoding external[:internal]
    Specifies the default external and internal encodings, separated by a colon (:). The internal encoding may be omitted, in which case the value (Encoding.default_internal) will be nil.
-P
    Disables ruby code evaluation for lines beginning with %.
-S level
    Specifies the safe level in which the eRuby script will run.
-T mode
    Specifies trim mode (default 0). mode can be one of
        0   EOL remains after the embedded ruby script is evaluated.
        1   EOL is removed if the line ends with %>.
        2   EOL is removed if the line starts with <% and ends with %>.
        -   EOL is removed if the line ends with -%>, and leading whitespace is removed if the erb directive starts with <%-.
-r library
    Loads the named library.
-U
    Sets the default value for internal encodings (Encoding.default_internal) to UTF-8.
-d, --debug
    Turns on debug mode. $DEBUG will be set to true.
-h, --help
    Prints a summary of the options.
-n
    Used with -x. Prepends the line number to each line in the output.
-v
    Enables verbose mode. $VERBOSE will be set to true.
-x
    Converts the eRuby script into a Ruby script and prints it without line numbers.
Here is an eRuby script <?xml version="1.0" ?> <% require 'prime' -%> <erb-example> <calc><%= 1+1 %></calc> <var><%= __FILE__ %></var> <library><%= Prime.each(10).to_a.join(", ") %></library> </erb-example> Command % erb -T - example.erb prints <?xml version="1.0" ?> <erb-example> <calc>2</calc> <var>example.erb</var> <library>2, 3, 5, 7</library> </erb-example> SEE ALSO ruby(1). And see ri(1) documentation for ERB class. REPORTING BUGS • Security vulnerabilities should be reported via an email to security@ruby-lang.org. Reported problems will be published after being fixed. • Other bugs and feature requests can be reported via the Ruby Issue Tracking System (https://bugs.ruby-lang.org/). Do not report security vulnerabilities via this system because it publishes the vulnerabilities immediately. AUTHORS Written by Masatoshi SEKI. UNIX December 16, 2018 UNIX
dappprof
dappprof prints details on user and library function call times for processes as a summary-style aggregation. By default the user functions are traced; options can be used to trace library activity. Output can include function counts, elapsed times and on-CPU times. The elapsed times are interesting, helping to identify functions that take some time to complete (during which the process may have slept). CPU time helps us identify functions that are consuming CPU cycles to run. Since this uses DTrace, only users with root privileges can run this command.
dappprof - profile user and lib function usage. Uses DTrace.
dappprof [-acehoTU] [-u lib] { -p PID | command }
-a print all data -c print function counts -e print elapsed times, ns -o print CPU times, ns -T print totals -p PID examine this PID -u lib trace this library instead -U trace all library and user functions
run and examine the "df -h" command, # dappprof df -h print elapsed times, on-cpu times and counts for "df -h", # dappprof -ceo df -h print elapsed times for PID 1871, # dappprof -p 1871 print all data for PID 1871, # dappprof -ap 1871 FIELDS CALL Function call name ELAPSED Total elapsed time, nanoseconds CPU Total on-cpu time, nanoseconds COUNT Number of occurrences DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT dappprof will sample until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO dapptrace(1M), dtrace(1M), apptrace(1) version 1.10 May 14, 2005 dappprof(1m)
openssl
OpenSSL is a cryptography toolkit implementing the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) network protocols and related cryptography standards required by them. The openssl program is a command line program for using the various cryptography functions of OpenSSL's crypto library from the shell. It can be used for o Creation and management of private keys, public keys and parameters o Public key cryptographic operations o Creation of X.509 certificates, CSRs and CRLs o Calculation of Message Digests and Message Authentication Codes o Encryption and Decryption with Ciphers o SSL/TLS Client and Server Tests o Handling of S/MIME signed or encrypted mail o Timestamp requests, generation and verification COMMAND SUMMARY The openssl program provides a rich variety of commands (command in the "SYNOPSIS" above). Each command can have many options and argument parameters, shown above as options and parameters. Detailed documentation and use cases for most standard subcommands are available (e.g., openssl-x509(1)). The subcommand openssl-list(1) may be used to list subcommands. The command no-XXX tests whether a command of the specified name is available. If no command named XXX exists, it returns 0 (success) and prints no-XXX; otherwise it returns 1 and prints XXX. In both cases, the output goes to stdout and nothing is printed to stderr. Additional command line arguments are always ignored. Since for each cipher there is a command of the same name, this provides an easy way for shell scripts to test for the availability of ciphers in the openssl program. (no-XXX is not able to detect pseudo-commands such as quit, list, or no-XXX itself.) Configuration Option Many commands use an external configuration file for some or all of their arguments and have a -config option to specify that file. 
The default name of the file is openssl.cnf in the default certificate storage area, which can be determined from the openssl-version(1) command using the -d or -a option. The environment variable OPENSSL_CONF can be used to specify a different file location or to disable loading a configuration (using the empty string). Among others, the configuration file can be used to load modules and to specify parameters for generating certificates and random numbers. See config(5) for details. Standard Commands asn1parse Parse an ASN.1 sequence. ca Certificate Authority (CA) Management. ciphers Cipher Suite Description Determination. cms CMS (Cryptographic Message Syntax) command. crl Certificate Revocation List (CRL) Management. crl2pkcs7 CRL to PKCS#7 Conversion. dgst Message Digest calculation. MAC calculations are superseded by openssl-mac(1). dhparam Generation and Management of Diffie-Hellman Parameters. Superseded by openssl-genpkey(1) and openssl-pkeyparam(1). dsa DSA Data Management. dsaparam DSA Parameter Generation and Management. Superseded by openssl-genpkey(1) and openssl-pkeyparam(1). ec EC (Elliptic curve) key processing. ecparam EC parameter manipulation and generation. enc Encryption, decryption, and encoding. engine Engine (loadable module) information and manipulation. errstr Error Number to Error String Conversion. fipsinstall FIPS configuration installation. gendsa Generation of DSA Private Key from Parameters. Superseded by openssl-genpkey(1) and openssl-pkey(1). genpkey Generation of Private Key or Parameters. genrsa Generation of RSA Private Key. Superseded by openssl-genpkey(1). help Display information about a command's options. info Display diverse information built into the OpenSSL libraries. kdf Key Derivation Functions. list List algorithms and features. mac Message Authentication Code Calculation. nseq Create or examine a Netscape certificate sequence. ocsp Online Certificate Status Protocol command. passwd Generation of hashed passwords. 
pkcs12 PKCS#12 Data Management. pkcs7 PKCS#7 Data Management. pkcs8 PKCS#8 format private key conversion command. pkey Public and private key management. pkeyparam Public key algorithm parameter management. pkeyutl Public key algorithm cryptographic operation command. prime Compute prime numbers. rand Generate pseudo-random bytes. rehash Create symbolic links to certificate and CRL files named by the hash values. req PKCS#10 X.509 Certificate Signing Request (CSR) Management. rsa RSA key management. rsautl RSA command for signing, verification, encryption, and decryption. Superseded by openssl-pkeyutl(1). s_client This implements a generic SSL/TLS client which can establish a transparent connection to a remote server speaking SSL/TLS. It's intended for testing purposes only and provides only rudimentary interface functionality but internally uses mostly all functionality of the OpenSSL ssl library. s_server This implements a generic SSL/TLS server which accepts connections from remote clients speaking SSL/TLS. It's intended for testing purposes only and provides only rudimentary interface functionality but internally uses mostly all functionality of the OpenSSL ssl library. It provides both an own command line oriented protocol for testing SSL functions and a simple HTTP response facility to emulate an SSL/TLS-aware webserver. s_time SSL Connection Timer. sess_id SSL Session Data Management. smime S/MIME mail processing. speed Algorithm Speed Measurement. spkac SPKAC printing and generating command. srp Maintain SRP password file. This command is deprecated. storeutl Command to list and display certificates, keys, CRLs, etc. ts Time Stamping Authority command. verify X.509 Certificate Verification. See also the openssl-verification-options(1) manual page. version OpenSSL Version Information. x509 X.509 Certificate Data Management. 
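The no-XXX convention described under COMMAND SUMMARY makes availability checks easy to script. A sketch (the cipher name is just an example; what is present depends on how OpenSSL was built):

```shell
# no-XXX prints "XXX" and exits 1 when the command exists,
# prints "no-XXX" and exits 0 when it does not.
if openssl no-aes-128-cbc >/dev/null; then
    echo 'aes-128-cbc: not available in this build'
else
    echo 'aes-128-cbc: available'
fi
```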
Message Digest Commands blake2b512 BLAKE2b-512 Digest blake2s256 BLAKE2s-256 Digest md2 MD2 Digest md4 MD4 Digest md5 MD5 Digest mdc2 MDC2 Digest rmd160 RMD-160 Digest sha1 SHA-1 Digest sha224 SHA-2 224 Digest sha256 SHA-2 256 Digest sha384 SHA-2 384 Digest sha512 SHA-2 512 Digest sha3-224 SHA-3 224 Digest sha3-256 SHA-3 256 Digest sha3-384 SHA-3 384 Digest sha3-512 SHA-3 512 Digest keccak-224 KECCAK 224 Digest keccak-256 KECCAK 256 Digest keccak-384 KECCAK 384 Digest keccak-512 KECCAK 512 Digest shake128 SHA-3 SHAKE128 Digest shake256 SHA-3 SHAKE256 Digest sm3 SM3 Digest Encryption, Decryption, and Encoding Commands The following aliases provide convenient access to the most used encodings and ciphers. Depending on how OpenSSL was configured and built, not all ciphers listed here may be present. See openssl-enc(1) for more information. aes128, aes-128-cbc, aes-128-cfb, aes-128-ctr, aes-128-ecb, aes-128-ofb AES-128 Cipher aes192, aes-192-cbc, aes-192-cfb, aes-192-ctr, aes-192-ecb, aes-192-ofb AES-192 Cipher aes256, aes-256-cbc, aes-256-cfb, aes-256-ctr, aes-256-ecb, aes-256-ofb AES-256 Cipher aria128, aria-128-cbc, aria-128-cfb, aria-128-ctr, aria-128-ecb, aria-128-ofb Aria-128 Cipher aria192, aria-192-cbc, aria-192-cfb, aria-192-ctr, aria-192-ecb, aria-192-ofb Aria-192 Cipher aria256, aria-256-cbc, aria-256-cfb, aria-256-ctr, aria-256-ecb, aria-256-ofb Aria-256 Cipher base64 Base64 Encoding bf, bf-cbc, bf-cfb, bf-ecb, bf-ofb Blowfish Cipher camellia128, camellia-128-cbc, camellia-128-cfb, camellia-128-ctr, camellia-128-ecb, camellia-128-ofb Camellia-128 Cipher camellia192, camellia-192-cbc, camellia-192-cfb, camellia-192-ctr, camellia-192-ecb, camellia-192-ofb Camellia-192 Cipher camellia256, camellia-256-cbc, camellia-256-cfb, camellia-256-ctr, camellia-256-ecb, camellia-256-ofb Camellia-256 Cipher cast, cast-cbc CAST Cipher cast5-cbc, cast5-cfb, cast5-ecb, cast5-ofb CAST5 Cipher chacha20 Chacha20 Cipher des, des-cbc, des-cfb, des-ecb, des-ede, des-ede-cbc, 
des-ede-cfb, des-ede-ofb, des-ofb DES Cipher des3, desx, des-ede3, des-ede3-cbc, des-ede3-cfb, des-ede3-ofb Triple-DES Cipher idea, idea-cbc, idea-cfb, idea-ecb, idea-ofb IDEA Cipher rc2, rc2-cbc, rc2-cfb, rc2-ecb, rc2-ofb RC2 Cipher rc4 RC4 Cipher rc5, rc5-cbc, rc5-cfb, rc5-ecb, rc5-ofb RC5 Cipher seed, seed-cbc, seed-cfb, seed-ecb, seed-ofb SEED Cipher sm4, sm4-cbc, sm4-cfb, sm4-ctr, sm4-ecb, sm4-ofb SM4 Cipher
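A minimal round-trip sketch using one of the AES aliases above (the password and file names are placeholders; the -pbkdf2 key-derivation flag assumes OpenSSL 1.1.1 or later):

```shell
# Sketch only: encrypt and decrypt a file with aes-256-cbc using a
# password-derived key. "example" and the file names are placeholders.
echo 'secret data' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example \
    -in plain.txt -out cipher.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example -in cipher.bin
```

Without -pbkdf2, enc falls back to a legacy key-derivation scheme and prints a warning on recent releases.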
openssl - OpenSSL command line program
openssl command [ options ... ] [ parameters ... ] openssl no-XXX [ options ] openssl -help | -version
Details of which options are available depend on the specific command. This section describes some common options with common behavior. Program Options These options can be specified without a command to get help or version information. -help Provides a terse summary of all options. For more detailed information, each command supports a -help option. Accepts --help as well. -version Provides a terse summary of the openssl program version. For more detailed information see openssl-version(1). Accepts --version as well. Common Options -help If an option takes an argument, the "type" of argument is also given. -- This terminates the list of options. It is mostly useful if any filename parameters start with a minus sign: openssl verify [flags...] -- -cert1.pem... Format Options See the openssl-format-options(1) manual page. Pass Phrase Options See the openssl-passphrase-options(1) manual page. Random State Options Prior to OpenSSL 1.1.1, it was common for applications to store information about the state of the random-number generator in a file that was loaded at startup and rewritten upon exit. On modern operating systems, this is generally no longer necessary as OpenSSL will seed itself from a trusted entropy source provided by the operating system. These flags are still supported for special platforms or circumstances that might require them. It is generally an error to use the same seed file more than once and every use of -rand should be paired with -writerand. -rand files A file or files containing random data used to seed the random number generator. Multiple files can be specified separated by an OS-dependent character. The separator is ";" for MS-Windows, "," for OpenVMS, and ":" for all others. Another way to specify multiple files is to repeat this flag with different filenames. -writerand file Writes the seed data to the specified file upon exit. This file can be used in a subsequent command invocation. 
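The -rand/-writerand pairing described above can be sketched as follows (seed.bin is a hypothetical seed file; as noted, on modern systems the OS entropy source makes this unnecessary):

```shell
# Sketch only: seed the generator from a file and persist fresh seed
# material on exit. seed.bin is a placeholder name.
dd if=/dev/urandom of=seed.bin bs=32 count=1 2>/dev/null
openssl rand -rand seed.bin -writerand seed.bin -hex 16
```

Because -writerand rewrites the file, the same file can be reused across invocations without violating the "same seed file more than once" caveat.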
Certificate Verification Options See the openssl-verification-options(1) manual page. Name Format Options See the openssl-namedisplay-options(1) manual page. TLS Version Options Several commands use SSL, TLS, or DTLS. By default, the commands use TLS and clients will offer the lowest and highest protocol version they support, and servers will pick the highest version that the client offers that is also supported by the server. The options below can be used to limit which protocol versions are used, and whether TCP (SSL and TLS) or UDP (DTLS) is used. Note that not all protocols and flags may be available, depending on how OpenSSL was built. -ssl3, -tls1, -tls1_1, -tls1_2, -tls1_3, -no_ssl3, -no_tls1, -no_tls1_1, -no_tls1_2, -no_tls1_3 These options require or disable the use of the specified SSL or TLS protocols. When a specific TLS version is required, only that version will be offered or accepted. Only one specific protocol can be given and it cannot be combined with any of the no_ options. The no_* options do not work with s_time and ciphers commands but work with s_client and s_server commands. -dtls, -dtls1, -dtls1_2 These options specify to use DTLS instead of TLS. With -dtls, clients will negotiate any supported DTLS protocol version. Use the -dtls1 or -dtls1_2 options to support only DTLS1.0 or DTLS1.2, respectively. Engine Options -engine id Load the engine identified by id and use all the methods it implements (algorithms, key storage, etc.), unless specified otherwise in the command-specific documentation or it is configured to do so, as described in "Engine Configuration" in config(5). The engine will be used for key ids specified with -key and similar options when an option like -keyform engine is given. A special case is the "loader_attic" engine, which is meant just for internal OpenSSL testing purposes and supports loading keys, parameters, certificates, and CRLs from files. 
When this engine is used, files with such credentials are read via this engine. Using the "file:" schema is optional; a plain file (path) name will do. Options specifying keys, like -key and similar, can use the generic OpenSSL engine key loading URI scheme "org.openssl.engine:" to retrieve private keys and public keys. The URI syntax is as follows, in simplified form: org.openssl.engine:{engineid}:{keyid} Where "{engineid}" is the identity/name of the engine, and "{keyid}" is a key identifier that's acceptable by that engine. For example, when using an engine that interfaces against a PKCS#11 implementation, the generic key URI would be something like this (this happens to be an example for the PKCS#11 engine that's part of OpenSC): -key org.openssl.engine:pkcs11:label_some-private-key As a third possibility, for engines and providers that have implemented their own OSSL_STORE_LOADER(3), "org.openssl.engine:" should not be necessary. For a PKCS#11 implementation that has implemented such a loader, the PKCS#11 URI as defined in RFC 7512 should be possible to use directly: -key pkcs11:object=some-private-key;pin-value=1234 Provider Options -provider name Load and initialize the provider identified by name. The name can be also a path to the provider module. In that case the provider name will be the specified path and not just the provider module name. Interpretation of relative paths is platform specific. The configured "MODULESDIR" path, OPENSSL_MODULES environment variable, or the path specified by -provider-path is prepended to relative paths. See provider(7) for a more detailed description. -provider-path path Specifies the search path that is to be used for looking for providers. Equivalently, the OPENSSL_MODULES environment variable may be set. -propquery propq Specifies the property query clause to be used when fetching algorithms from the loaded providers. See property(7) for a more detailed description. 
ENVIRONMENT The OpenSSL library can take some configuration parameters from the environment. Some of these variables are listed below. For information about specific commands, see openssl-engine(1), openssl-rehash(1), and tsget(1). For information about the use of environment variables in configuration, see "ENVIRONMENT" in config(5). For information about querying or specifying CPU architecture flags, see OPENSSL_ia32cap(3) and OPENSSL_s390xcap(3). For information about all environment variables used by the OpenSSL libraries, see openssl-env(7). OPENSSL_TRACE=name[,...] Enable tracing output of the OpenSSL library, by name. This output will only make sense if you know OpenSSL internals well. Also, it might not give you any output at all if OpenSSL was built without tracing support. The value is a comma-separated list of names, with the following available: TRACE Traces the OpenSSL trace API itself. INIT Traces OpenSSL library initialization and cleanup. TLS Traces the TLS/SSL protocol. TLS_CIPHER Traces the ciphers used by the TLS/SSL protocol. CONF Show details about provider and engine configuration. ENGINE_TABLE The function that is used by RSA, DSA (etc) code to select registered ENGINEs, cache defaults and functional references (etc), will generate debugging summaries. ENGINE_REF_COUNT Reference counts in the ENGINE structure will be monitored with a line of output generated for each change. PKCS5V2 Traces PKCS#5 v2 key generation. PKCS12_KEYGEN Traces PKCS#12 key generation. PKCS12_DECRYPT Traces PKCS#12 decryption. X509V3_POLICY Generates the complete policy tree at various points during X.509 v3 policy evaluation. BN_CTX Traces BIGNUM context operations. CMP Traces CMP client and server activity. STORE Traces STORE operations. DECODER Traces decoder operations. ENCODER Traces encoder operations. REF_COUNT Traces decrementing certain ASN.1 structure references. HTTP Traces the HTTP client and server, such as messages being sent and received. 
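A minimal sketch of enabling a trace channel for a single invocation (as noted above, extra output appears only if OpenSSL was built with tracing support; the channel name INIT is just one of the names listed):

```shell
# Sketch only: request INIT tracing for one command. On builds without
# tracing support this behaves exactly like plain "openssl version".
OPENSSL_TRACE=INIT openssl version
```

Multiple channels may be combined, e.g. OPENSSL_TRACE=TLS,TLS_CIPHER, per the comma-separated list syntax above.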
SEE ALSO openssl-asn1parse(1), openssl-ca(1), openssl-ciphers(1), openssl-cms(1), openssl-crl(1), openssl-crl2pkcs7(1), openssl-dgst(1), openssl-dhparam(1), openssl-dsa(1), openssl-dsaparam(1), openssl-ec(1), openssl-ecparam(1), openssl-enc(1), openssl-engine(1), openssl-errstr(1), openssl-gendsa(1), openssl-genpkey(1), openssl-genrsa(1), openssl-kdf(1), openssl-list(1), openssl-mac(1), openssl-nseq(1), openssl-ocsp(1), openssl-passwd(1), openssl-pkcs12(1), openssl-pkcs7(1), openssl-pkcs8(1), openssl-pkey(1), openssl-pkeyparam(1), openssl-pkeyutl(1), openssl-prime(1), openssl-rand(1), openssl-rehash(1), openssl-req(1), openssl-rsa(1), openssl-rsautl(1), openssl-s_client(1), openssl-s_server(1), openssl-s_time(1), openssl-sess_id(1), openssl-smime(1), openssl-speed(1), openssl-spkac(1), openssl-srp(1), openssl-storeutl(1), openssl-ts(1), openssl-verify(1), openssl-version(1), openssl-x509(1), config(5), crypto(7), openssl-env(7), ssl(7), x509v3_config(5). HISTORY The list -XXX-algorithms options were added in OpenSSL 1.0.0. For notes on the availability of other commands, see their individual manual pages. The -issuer_checks option is deprecated as of OpenSSL 1.1.0 and is silently ignored. The -xcertform and -xkeyform options are obsolete since OpenSSL 3.0 and have no effect. The interactive mode, which could be invoked by running "openssl" with no further arguments, was removed in OpenSSL 3.0, and running that program with no arguments is now equivalent to "openssl help". COPYRIGHT Copyright 2000-2023 The OpenSSL Project Authors. All Rights Reserved. Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <https://www.openssl.org/source/license.html>. 3.3.1 2024-06-04 OPENSSL(1ssl)
drutil
drutil uses the DiscRecording framework to interact with attached burning devices. Common verbs include burn, erase, eject, help, info, list, status, and tray. The rest of the verbs are: bulkerase, cdtext, discinfo, dumpiso, dumpudf, filename, getconfig, poll, size, subchannel, trackinfo, and version. VERBS Each verb is listed with its description and individual arguments. Drive selection arguments must appear before individual arguments. Drive selection and argument descriptions can be found after the verb descriptions in the Drive Selection Criteria section. -drive drive(s) Lets you specify a drive or drives, per the output of list, for those verbs that can operate on one or more drives. See the Drive Selection Criteria section for more info. help verb Display the usage information for the specified verb. atip Displays the Absolute Time in Pre-Groove (ATIP) for inserted CD-R/RW media. bulkerase type Starts bulk erase mode, in which the drive will continually erase inserted -RW media, eject it, and prompt for another disc until terminated. Types of erase: quick Performs a quick erase, doing the minimal amount of work to make the disc appear blank. This operation typically takes only a minute or two. full Performs a complete erase, erasing every block on the disk. This operation is slow (on the order of 30 minutes) to complete. burn options path Burns a valid directory or image file to disc. The default is to burn the specified directory to a new filesystem. The -audio option creates an audio CD (redbook) in which any valid QuickTime audio file present in the path is converted to a track (in alphabetical order). If a file is specified (valid image files only: .dmg, .iso, .cue/bin, and .toc) the contents of the image file are burned. Pre-burn and post-burn options, and filesystem exclusions can be specified for enhanced functionality. Last option takes precedence. Invalid commands are ignored. path A valid path to a directory or file. 
options Specify an arbitrary valid burn option(s): -test, -appendable, -erase, -mount, -noverify, -nohfsplus, -noiso9660, -nojoliet, -noudf, -nofs, -audio, -speed, -pregap. Or specify a default burn option: -notest, -noappendable, -noerase, -allfs, -hfsplus, -iso9660, -joliet, -udf, -data, -eject, -verify. cdtext Reads and displays any CD-Text information reported by the drive. The drive must contain an audio CD, and be capable of reading CD-Text. discinfo [-xml] Displays detailed information about present media. From the MMC command of the same name. dumpiso device block [format] Tool to inspect and interpret ISO-9660 and Joliet structures on the media. device Disk node, e.g. /dev/disk1, /dev/disk1s1, /dev/rdisk1. block Block number to dump (in decimal or 0x hex notation). Blocks are assumed to be 2048-byte blocks. format How to interpret the block. If format is not specified, dumpiso will attempt to guess. If present, this argument should be one of the following: None, Boot, BootCat, PVD, SVD, VPD, VDST, MPath, LPath, Dir, HFSPlusVH. dumpudf device block Tool to inspect and interpret UDF structures on the media. device Disk node, e.g. /dev/disk1, /dev/disk1s1, /dev/rdisk1. block Block number to dump (in decimal or 0x hex notation). Blocks are assumed to be 2048-byte blocks. eject Synonym for drutil tray eject. erase type Erases -RW media in the drive(s) and ejects it. Types of erase: quick Performs a quick erase, doing the minimal amount of work to make the disc appear blank. This operation typically takes only a minute or two. full Performs a complete erase, erasing every block on the disk. This operation is slow (on the order of 30 minutes) to complete. filename name Shows how the specified filename will be modified to comply with the naming rules of the filesystems that DiscRecording generates. getconfig type Displays device feature and profile list. Types of config information: current Displays current features and profiles for a drive. 
supported Displays all supported features and profiles for a drive. info [-xml] Displays various pieces of information for each drive, including how it's connected to the computer and a summary of capabilities. list [-xml] Lists all burning devices connected to the machine. poll Displays device and media notifications until terminated. size options path Estimates the size of a valid directory or image file (in blocks). The default is to estimate the size of the specified path as a hybrid filesystem. The -audio option calculates the contents of the directory as an audio CD (redbook) (for applicable files). If a file is specified (valid image files only: .dmg, .iso, .cue/bin, and .toc) the contents of the image file will be calculated. Filesystem exclusions can be specified for enhanced functionality. Calculated size will be compared against blank media that is found unless the -nodrive argument is specified. Last option takes precedence. Invalid commands are ignored. path A valid path to a directory or file. options Specify an arbitrary valid burn option(s): -nodrive, -nohfsplus, -noiso9660, -nojoliet, -noudf, -nofs, -audio, -pregap. Or specify a default burn option: -allfs, -hfsplus, -iso9660, -joliet, -udf, -data. status [-xml] Displays detailed media-specific information. subchannel Displays information from the subchannels on CD media. This prints the MCN (media catalog number) for the disc, and the ISRC (international standard recording code) for all tracks. This command only works when CD media is present. From the MMC command of the same name. toc Displays table of contents (TOC) of inserted media. trackinfo [-xml] Displays detailed information about all tracks present on the media. From the MMC command of the same name. tray command Performs a tray/media related command. Note that some drives do not have trays, and some have trays but may lack motorized eject or inject capability. 
Tray commands: open Opens a drive's tray, if no media is present and the drive has a tray capable of motorized eject. close Closes a drive's tray, if the drive has a tray capable of motorized inject. eject Ejects media from the drive, if the drive has a tray capable of motorized eject. If no media is present, this is equivalent to open. If media is present and can be unmounted, it will be unmounted and then ejected. If media is present but cannot be unmounted, the eject will fail. version Displays operating system and DiscRecording framework version numbers.
drutil – interact with CD/DVD burners
drutil verb [options]
-xml When specified (valid options only: discinfo, info, list, status, and trackinfo) the output for the specified verb will be shown in xml format. DRIVE SELECTION CRITERIA Some functions of drutil operate on a specific drive. Since any number of drives may be available, and they may come and go at any time, the device selection arguments provide a method for selecting among them. The candidate list starts out as a list of all attached drives. One or more arguments of the form -drive drive(s) may be specified. Each argument has the effect of narrowing the candidate list, depending on what drive(s) is. It may be: • A positive decimal number, assumed to be a 1-based index into the candidate list. The candidate list is trimmed to just that device. • One of the following keywords: internal, external, usb, firewire, atapi, scsi. The candidate list is trimmed to devices which match the specified location / bus. Case is ignored in this comparison. • Any other string, assumed to be a vendor/product name. The candidate list is trimmed to devices whose vendor or product strings exactly match the argument. Case (but not whitespace) is ignored in this comparison. Multiple -drive arguments may be specified; each argument narrows the candidate list further. After all the -drive arguments have been processed, the candidate list is considered. If it contains exactly one item, that drive is used. If it contains zero items, drutil prints an error message and exits. If it contains more than one item, the selected function is executed on all drives remaining in the list.
Simple verbs with no drive commands drutil help status Displays help for the verb "status". drutil list Displays a list of attached devices. drutil info Displays miscellaneous information for all attached devices. drutil status Displays media-specific information for all attached devices. drutil -drive internal burn -noverify -eject -speed 24 ~/Documents Burns the Documents directory to the internal drive without verifying, then ejects the disc. drutil -drive internal info -xml > driveInfo.xml Creates an XML file containing info about internal drives. Examples of drive selection drutil -drive 1 tray close Closes the tray of the first burning device seen, if possible. drutil -drive external info Lists drive-specific information for all externally connected burning devices. drutil -drive firewire status Lists media-specific information for media present in attached firewire burning devices. drutil -drive VENDOR tray open Opens the tray of all burning devices whose vendor id is VENDOR, if possible. drutil -drive 'CD-RW CDW827ES' getconfig supported Lists supported features and profiles for attached devices whose product id is 'CD-RW CDW827ES'. HISTORY drutil first appeared in Mac OS X 10.3. SEE ALSO diskutil(1), hdiutil(1), /usr/sbin/disktool (run with no args for usage). Mac OS X May 18, 2004
jinfo
The jinfo command prints Java configuration information for a specified Java process. The configuration information includes Java system properties and JVM command-line flags. If the specified process is running on a 64-bit JVM, then you might need to specify the -J-d64 option, for example: jinfo -J-d64 -sysprops pid This command is unsupported and might not be available in future releases of the JDK. On Windows systems where dbgeng.dll is not present, the Debugging Tools for Windows must be installed for these tools to work. The PATH environment variable should contain the location of the jvm.dll that's used by the target process or the location from which the core dump file was produced. OPTIONS FOR THE JINFO COMMAND Note: If none of the following options are used, both the command-line flags and the system property name-value pairs are printed. -flag name Prints the name and value of the specified command-line flag. -flag [+|-]name Enables or disables the specified Boolean command-line flag. -flag name=value Sets the specified command-line flag to the specified value. -flags Prints command-line flags passed to the JVM. -sysprops Prints Java system properties as name-value pairs. -h or -help Prints a help message. JDK 22 2024 JINFO(1)
jinfo - generate Java configuration information for a specified Java process
Note: This command is experimental and unsupported. jinfo [option] pid option This represents the jinfo command-line options. See Options for the jinfo Command. pid The process ID for which the configuration information is to be printed. The process must be a Java process. To get a list of Java processes running on a machine, use either the ps command or, if the JVM processes are not running in a separate docker instance, the jps command.
lwp-download
The lwp-download program will save the file at url to a local file. If local path is not specified, then the current directory is assumed. If local path is a directory, then the last segment of the path of the url is appended to form a local filename. If the url path ends with a slash, the name "index" is used. With the -s option, the last segment of the filename is picked up from server-provided sources like the Content-Disposition header or any redirect URLs. A file extension to match the server-reported Content-Type might also be appended. If a file with the produced filename already exists, then lwp-download will prompt before it overwrites and will fail if its standard input is not a terminal. This form of invocation will also fail if no acceptable filename can be derived from the sources mentioned above. If local path is not a directory, then it is simply used as the path to save into. If the file already exists it's overwritten. The lwp-download program is implemented using the libwww-perl library. It is better suited to download big files than the lwp-request program because it does not store the file in memory. Another benefit is that it will keep you updated about its progress and that you don't have many options to worry about. Use the "-a" option to save the file in text (ASCII) mode. Might make a difference on DOSish systems. EXAMPLE Fetch the newest and greatest perl version: $ lwp-download http://www.perl.com/CPAN/src/latest.tar.gz Saving to 'latest.tar.gz'... 11.4 MB received in 8 seconds (1.43 MB/sec) AUTHOR Gisle Aas <gisle@aas.no> perl v5.30.3 2020-04-14 LWP-DOWNLOAD(1)
lwp-download - Fetch large files from the web
lwp-download [-a] [-s] <url> [<local path>] Options: -a save the file in ASCII mode -s use HTTP headers to guess output filename
splain
The "diagnostics" Pragma This module extends the terse diagnostics normally emitted by both the perl compiler and the perl interpreter (from running perl with a -w switch or "use warnings"), augmenting them with the more explicative and endearing descriptions found in perldiag. Like the other pragmata, it affects the compilation phase of your program rather than merely the execution phase. To use in your program as a pragma, merely invoke use diagnostics; at the start (or near the start) of your program. (Note that this does enable perl's -w flag.) Your whole compilation will then be subject(ed :-) to the enhanced diagnostics. These still go out STDERR. Due to the interaction between runtime and compiletime issues, and because it's probably not a very good idea anyway, you may not use "no diagnostics" to turn them off at compiletime. However, you may control their behaviour at runtime using the disable() and enable() methods to turn them off and on respectively. The -verbose flag first prints out the perldiag introduction before any other diagnostics. The $diagnostics::PRETTY variable can generate nicer escape sequences for pagers. Warnings dispatched from perl itself (or more accurately, those that match descriptions found in perldiag) are only displayed once (no duplicate descriptions). User code generated warnings a la warn() are unaffected, allowing duplicate user messages to be displayed. This module also adds a stack trace to the error message when perl dies. This is useful for pinpointing what caused the death. The -traceonly (or just -t) flag turns off the explanations of warning messages leaving just the stack traces. So if your script is dieing, run it again with perl -Mdiagnostics=-traceonly my_bad_script to see the call stack at the time of death. By supplying the -warntrace (or just -w) flag, any warnings emitted will also come with a stack trace. 
The splain Program Another program, splain is actually nothing more than a link to the (executable) diagnostics.pm module, as well as a link to the diagnostics.pod documentation. The -v flag is like the "use diagnostics -verbose" directive. The -p flag is like the $diagnostics::PRETTY variable. Since you're post-processing with splain, there's no sense in being able to enable() or disable() processing. Output from splain is directed to STDOUT, unlike the pragma.
diagnostics, splain - produce verbose warning diagnostics
Using the "diagnostics" pragma: use diagnostics; use diagnostics -verbose; enable diagnostics; disable diagnostics; Using the "splain" standalone filter program: perl program 2>diag.out splain [-v] [-p] diag.out Using diagnostics to get stack traces from a misbehaving script: perl -Mdiagnostics=-traceonly my_script.pl
The following file is certain to trigger a few errors at both runtime and compiletime: use diagnostics; print NOWHERE "nothing\n"; print STDERR "\n\tThis message should be unadorned.\n"; warn "\tThis is a user warning"; print "\nDIAGNOSTIC TESTER: Please enter a <CR> here: "; my $a, $b = scalar <STDIN>; print "\n"; print $x/$y; If you prefer to run your program first and look at its problem afterwards, do this: perl -w test.pl 2>test.out ./splain < test.out Note that this is not in general possible in shells of more dubious heritage, as the theoretical (perl -w test.pl >/dev/tty) >& test.out ./splain < test.out Because you just moved the existing stdout to somewhere else. If you don't want to modify your source code, but still have on-the-fly warnings, do this: exec 3>&1; perl -w test.pl 2>&1 1>&3 3>&- | splain 1>&2 3>&- Nifty, eh? If you want to control warnings on the fly, do something like this. Make sure you do the "use" first, or you won't be able to get at the enable() or disable() methods. use diagnostics; # checks entire compilation phase print "\ntime for 1st bogus diags: SQUAWKINGS\n"; print BOGUS1 'nada'; print "done with 1st bogus\n"; disable diagnostics; # only turns off runtime warnings print "\ntime for 2nd bogus: (squelched)\n"; print BOGUS2 'nada'; print "done with 2nd bogus\n"; enable diagnostics; # turns back on runtime warnings print "\ntime for 3rd bogus: SQUAWKINGS\n"; print BOGUS3 'nada'; print "done with 3rd bogus\n"; disable diagnostics; print "\ntime for 4th bogus: (squelched)\n"; print BOGUS4 'nada'; print "done with 4th bogus\n"; INTERNALS Diagnostic messages derive from the perldiag.pod file when available at runtime. Otherwise, they may be embedded in the file itself when the splain package is built. See the Makefile for details. 
If an extant $SIG{__WARN__} handler is discovered, it will continue to be honored, but only after the diagnostics::splainthis() function (the module's $SIG{__WARN__} interceptor) has had its way with your warnings. There is a $diagnostics::DEBUG variable you may set if you're desperately curious what sorts of things are being intercepted. BEGIN { $diagnostics::DEBUG = 1 } BUGS Not being able to say "no diagnostics" is annoying, but may not be insurmountable. The "-pretty" directive is called too late to affect matters. You have to do this instead, and before you load the module. BEGIN { $diagnostics::PRETTY = 1 } I could start up faster by delaying compilation until it should be needed, but this gets a "panic: top_level" when using the pragma form in Perl 5.001e. While it's true that this documentation is somewhat subserious, if you use a program named splain, you should expect a bit of whimsy. AUTHOR Tom Christiansen <tchrist@mox.perl.com>, 25 June 1995. perl v5.38.2 2023-11-28 SPLAIN(1)
pmset
pmset manages power management settings such as idle sleep timing, wake on administrative access, automatic restart on power loss, etc. Note that processes may dynamically override these power management settings by using I/O Kit power assertions. Whenever processes override any system power settings, pmset will list those processes and their power assertions in -g and -g assertions. See caffeinate(8). SETTING pmset can modify the values of any of the power management settings defined below. You may specify one or more setting & value pairs on the command-line invocation of pmset. The -a, -b, -c, -u flags determine whether the settings apply to battery ( -b ), charger (wall power) ( -c ), UPS ( -u ) or all ( -a ). Use a minutes argument of 0 to set the idle time to never for sleep, disksleep, and displaysleep. pmset must be run as root in order to modify any settings. SETTINGS displaysleep - display sleep timer; replaces 'dim' argument in 10.4 (value in minutes, or 0 to disable) disksleep - disk spindown timer; replaces 'spindown' argument in 10.4 (value in minutes, or 0 to disable) sleep - system sleep timer (value in minutes, or 0 to disable) womp - wake on ethernet magic packet (value = 0/1). Same as "Wake for network access" in System Settings. ring - wake on modem ring (value = 0/1) powernap - enable/disable Power Nap on supported machines (value = 0/1) proximitywake - On supported systems, this option controls system wake from sleep based on proximity of devices using the same iCloud id. 
(value = 0/1) autorestart - automatic restart on power loss (value = 0/1) lidwake - wake the machine when the laptop lid (or clamshell) is opened (value = 0/1) acwake - wake the machine when power source (AC/battery) is changed (value = 0/1) lessbright - slightly turn down display brightness when switching to this power source (value = 0/1) halfdim - display sleep will use an intermediate half-brightness state between full brightness and fully off (value = 0/1) sms - use Sudden Motion Sensor to park disk heads on sudden changes in G force (value = 0/1) hibernatemode - change hibernation mode. Please use caution. (value = integer) hibernatefile - change hibernation image file location. Image may only be located on the root volume. Please use caution. (value = path) ttyskeepawake - prevent idle system sleep when any tty (e.g. remote login session) is 'active'. A tty is 'inactive' only when its idle time exceeds the system sleep timer. (value = 0/1) networkoversleep - this setting affects how OS X networking presents shared network services during system sleep. This setting is not used by all platforms; changing its value is unsupported. destroyfvkeyonstandby - Destroy File Vault Key when going to standby mode. By default File Vault keys are retained even when the system goes to standby. If the keys are destroyed, the user will be prompted to enter the password while coming out of standby mode. (value: 1 - Destroy, 0 - Retain) GETTING -g (with no argument) will display the settings currently in use. -g live displays the settings currently in use. -g custom displays custom settings for all power sources. -g cap displays which power management features the machine supports. -g sched displays scheduled startup/wake and shutdown/sleep events. -g ups displays UPS emergency thresholds. -g ps / batt displays status of batteries and UPSs. -g pslog displays an ongoing log of power source (battery and UPS) state. 
-g rawlog displays an ongoing log of battery state as read directly from battery. -g therm shows thermal conditions that affect CPU speed. Not available on all platforms. -g thermlog shows a log of thermal notifications that affect CPU speed. Not available on all platforms. -g assertions displays a summary of power assertions. Assertions may prevent system sleep or display sleep. Available 10.6 and later. -g assertionslog shows a log of assertion creations and releases. Available 10.6 and later. -g sysload displays the "system load advisory" - a summary of system activity available from the IOGetSystemLoadAdvisory API. Available 10.6 and later. -g sysloadlog displays an ongoing log of live changes to the system load advisory. Available 10.6 and later. -g ac / adapter will display details about an attached AC power adapter. Only supported for MacBook and MacBook Pro. -g log displays a history of sleeps, wakes, and other power management events. This log is for admin & debugging purposes. -g uuid displays the currently active sleep/wake UUID; used within OS X to correlate sleep/wake activity within one sleep cycle. -g uuidlog displays the currently active sleep/wake UUID, and prints a new UUID as they're set by the system. -g history is a debugging tool. Prints a timeline of system sleep/wake UUIDs, when enabled with boot-arg io=0x3000000. -g historydetailed Prints driver-level timings for a sleep/wake. Pass a UUID as an argument. -g powerstate [class names] Prints the current power states for I/O Kit drivers. Caller may provide one or more I/O Kit class names (separated by spaces) as an argument. If no classes are provided, it will print all drivers' power states. -g powerstatelog [-i interval] [class names] Periodically prints the power state residency times for some drivers. Caller may provide one or more I/O Kit class names (separated by spaces). If no classes are provided, it will log the IOPower plane's root registry entry. 
Caller may specify a polling interval, in seconds, with -i <polling interval>; otherwise it defaults to 5 seconds. -g stats Prints the counts for the number of sleeps and wakes the system has gone through since boot. -g systemstate Prints the current power state of the system and available capabilities. -g everything Prints output from every argument under the GETTING header. This is useful for quickly collecting all the output that pmset provides. Available in 10.8. SAFE SLEEP ARGUMENTS hibernatemode supports values of 0, 3, or 25. Whether or not a hibernation image gets written is also dependent on the values of standby and autopoweroff. For example, on desktops that support standby, a hibernation image will be written after the specified standbydelay time. To disable hibernation images completely, ensure hibernatemode, standby, and autopoweroff are all set to 0. hibernatemode = 0 by default on desktops. The system will not back memory up to persistent storage. The system must wake from the contents of memory; the system will lose context on power loss. This is, historically, plain old sleep. hibernatemode = 3 by default on portables. The system will store a copy of memory to persistent storage (the disk), and will power memory during sleep. The system will wake from memory, unless a power loss forces it to restore from the hibernate image. hibernatemode = 25 is only settable via pmset. The system will store a copy of memory to persistent storage (the disk), and will remove power to memory. The system will restore from disk image. If you want "hibernation" - slower sleeps, slower wakes, and better battery life - you should use this setting. Please note that hibernatefile may only point to a file located on the root volume. STANDBY ARGUMENTS standby causes kernel power management to automatically hibernate a machine after it has slept for a specified time period. This saves power while asleep. This setting defaults to ON for supported hardware. 
The setting standby will be visible in pmset -g if the feature is supported on this machine. standbydelayhigh and standbydelaylow specify the delay, in seconds, before writing the hibernation image to disk and powering off memory for Standby. standbydelayhigh is used when the remaining battery capacity is above highstandbythreshold, and standbydelaylow is used when the remaining battery capacity is below highstandbythreshold. highstandbythreshold has a default value of 50 percent. autopoweroff is enabled by default on supported platforms as an implementation of Lot 6 to the European Energy-related Products Directive. After sleeping for <autopoweroffdelay> seconds, the system will write a hibernation image and go into a lower power chipset sleep. Wakeups from this state will take longer than wakeups from regular sleep. autopoweroffdelay specifies the delay, in seconds, before entering autopoweroff mode. UPS SPECIFIC ARGUMENTS UPS-specific arguments are only valid following the -u option. UPS settings also have an on/off value. Use a -1 argument instead of percent or minutes to turn any of these settings off. If multiple halt conditions are specified, the system will halt on the first condition that occurs in a low power situation. haltlevel - when draining UPS battery, battery level at which to trigger an emergency shutdown (value in %) haltafter - when draining UPS battery, trigger emergency shutdown after this long running on UPS power (value in minutes, or 0 to disable) haltremain - when draining UPS battery, trigger emergency shutdown when this much time remaining on UPS power is estimated (value in minutes, or 0 to disable) Note: None of these settings are observed on a system with support for an internal battery, such as a laptop. UPS emergency shutdown settings are for desktop and server only. SCHEDULED EVENT ARGUMENTS pmset allows you to schedule system sleep, shutdown, wakeup and/or power on. 
"schedule" is for setting up one-time power events, and "repeat" is for setting up daily/weekly power on and power off events. Note that you may only have one pair of repeating events scheduled - a "power on" event and a "power off" event. For sleep cycling applications, pmset can schedule a "relative" wakeup or poweron to occur in seconds from the end of system sleep/shutdown, but this event cannot be cancelled and is inherently imprecise. type - one of sleep, wake, poweron, shutdown, wakeorpoweron date/time - "MM/dd/yy HH:mm:ss" (in 24 hour format; must be in quotes) time - HH:mm:ss weekdays - a subset of MTWRFSU ("M" and "MTWRF" are valid strings) owner - a string describing the person or program who is scheduling this one-time power event (optional) POWER SOURCE ARGUMENTS -g with a 'batt' or 'ps' argument will show the state of all attached power sources. -g with a 'pslog' or 'rawlog' argument is normally used for debugging, such as isolating a problem with an aging battery. OTHER ARGUMENTS boot - tell the kernel that system boot is complete (normally LoginWindow does this). May be useful to Darwin users. touch - PM re-reads existing settings from disk. noidle - pmset prevents idle sleep by creating a PM assertion to prevent idle sleep(while running; hit ctrl-c to cancel). This argument is deprecated in favor of caffeinate(8). Please use caffeinate(8) instead. sleepnow - causes an immediate system sleep. restoredefaults - Restores power management settings to their default values. displaysleepnow - causes display to go to sleep immediately. resetdisplayambientparams - resets the ambient light parameters for certain Apple displays. dim - deprecated in 10.4 in favor of 'displaysleep'. 'dim' will continue to work. spindown - deprecated in 10.4 in favor of 'disksleep'. 'spindown' will continue to work.
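The standbydelayhigh and standbydelaylow settings above take values in seconds. A minimal shell sketch of deriving those values with arithmetic expansion; the durations here are illustrative, and the final (commented) pmset line requires root on macOS:

```shell
# Convert human-friendly durations into the seconds pmset expects.
high=$((24 * 60 * 60))   # 24 hours before hibernating on a fuller battery
low=$((10 * 60))         # 10 minutes once below highstandbythreshold
echo "standbydelayhigh=$high standbydelaylow=$low"
# sudo pmset -a standbydelayhigh "$high" standbydelaylow "$low"
```

Verify the values actually in effect afterwards with pmset -g.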
pmset – manipulate power management settings
pmset [-a | -b | -c | -u] [setting value] [...] pmset -u [haltlevel percent] [haltafter minutes] [haltremain minutes] pmset -g [option] pmset schedule [cancel | cancelall] type date+time [owner] pmset repeat cancel pmset repeat type weekdays time pmset relative [wake | poweron] seconds pmset [touch | sleepnow | displaysleepnow | boot]
This command sets displaysleep to a 5 minute timer on battery power, leaving other settings on battery power and other power sources unperturbed. pmset -b displaysleep 5 Sets displaysleep to 10, disksleep to 10, system sleep to 30, and turns on WakeOnMagicPacket for ALL power sources (AC, Battery, and UPS) as appropriate. pmset -a displaysleep 10 disksleep 10 sleep 30 womp 1 For a system with an attached and supported UPS, this instructs the system to perform an emergency shutdown when UPS battery drains to below 40%. pmset -u haltlevel 40 For a system with an attached and supported UPS, this instructs the system to perform an emergency shutdown when UPS battery drains to below 25%, or when the UPS estimates it has less than 30 minutes remaining runtime. The system shuts down as soon as either of these conditions is met. pmset -u haltlevel 25 haltremain 30 For a system with an attached and supported UPS, this instructs the system to perform an emergency shutdown after 2 minutes of running on UPS battery power. pmset -u haltafter 2 Schedules the system to automatically wake from sleep on July 4, 2016, at 8PM. pmset schedule wake "07/04/16 20:00:00" Schedules a repeating shutdown to occur each day, Tuesday through Saturday, at 11AM. pmset repeat shutdown TWRFS 11:00:00 Schedules a repeating wake or power on event every Tuesday at 12:00 noon, and a repeating sleep event every night at 8:00 PM. pmset repeat wakeorpoweron T 12:00:00 sleep MTWRFSU 20:00:00 Cancels all repeating system sleep, shutdown, wake, and power on events. pmset repeat cancel Prints the power management settings in use by the system. pmset -g Prints a snapshot of battery/power source state at the moment. pmset -g batt If your system suddenly sleeps on battery power with 20-50% of capacity remaining, leave this command running in a Terminal window. 
When you see the problem and later power on and wake the computer, you'll be able to detect sudden discontinuities (like a jump from 30% to 0%) indicative of an aging battery. pmset -g pslog SEE ALSO caffeinate(8) FILES All changes made through pmset are saved in a persistent preferences file (per-system, not per-user) at /Library/Preferences/SystemConfiguration/com.apple.PowerManagement.plist Scheduled power on/off events are stored separately in /Library/Preferences/SystemConfiguration/com.apple.AutoWake.plist pmset modifies the same file that System Settings modifies. Darwin November 9, 2012 Darwin
xmlcatalog
xmlcatalog is a command line application allowing users to monitor and manipulate XML and SGML catalogs. It is included in libxml(3). Its functions can be invoked from a single command from the command line, or it can perform multiple functions in interactive mode. It can operate on both XML and SGML files.
xmlcatalog - Command line tool to parse and manipulate XML or SGML catalog files.
xmlcatalog [--sgml | --shell | --create | --del VALUE(S) | [ --add TYPE ORIG REPLACE | --add FILENAME] | --noout | --no-super-update | [-v | --verbose]] {CATALOGFILE} {ENTITIES...}
xmlcatalog accepts the following options (in alphabetical order): --add TYPE ORIG REPLACE Add an entry to CATALOGFILE. TYPE indicates the type of entry. Possible types are: public, system, uri, rewriteSystem, rewriteURI, delegatePublic, delegateSystem, delegateURI, nextCatalog. ORIG is the original reference to be replaced, and REPLACE is the URI of the replacement entity to be used. The --add option will not overwrite CATALOGFILE, outputting to stdout, unless --noout is used. The --add option always takes three parameters even though some of the XML catalog constructs take only a single argument. --add FILENAME If the --add option is used following the --sgml option, only a single argument, a FILENAME, is used. This is used to add the name of a catalog file to an SGML supercatalog, a file that contains references to other included SGML catalog files. --create Create a new XML catalog. Outputs to stdout, ignoring filename unless --noout is used, in which case it creates a new catalog file filename. --del VALUE(S) Remove entries from CATALOGFILE matching VALUE(S). The --del option will not overwrite CATALOGFILE, outputting to stdout, unless --noout is used. --noout Save output to the named file rather than outputting to stdout. --no-super-update Do not update the SGML super catalog. --shell Run a shell allowing interactive queries on catalog file CATALOGFILE. For the set of available commands see the section called “SHELL COMMANDS”. --sgml Uses SGML super catalogs for --add and --del options. -v, --verbose Output debugging information. Invoking xmlcatalog non-interactively without a designated action (imposed with options like --add) will result in a lookup of the catalog entry for ENTITIES in the catalog denoted with CATALOGFILE. The corresponding entries will be output to the command line. This mode of operation, together with --shell mode and non-modifying (i.e. 
without --noout) direct actions, allows for a special shortcut of the void CATALOGFILE specification (possibly expressed as "" in the shell environment) appointing the default system catalog. That simplifies the handling when its exact location is irrelevant but the respective built-in still needs to be consulted. SHELL COMMANDS Invoking xmlcatalog with the --shell CATALOGFILE option opens a command line shell allowing interactive access to the catalog file identified by CATALOGFILE. Invoking the shell provides a command line prompt after which the following commands (described in alphabetical order) can be entered. add TYPE ORIG REPLACE Add an entry to the catalog file. TYPE indicates the type of entry. Possible types are: public, system, uri, rewriteSystem, rewriteURI, delegatePublic, delegateSystem, delegateURI, nextCatalog. ORIG is the original reference to be replaced, and REPLACE is the URI of the replacement entity to be used. The add command will not overwrite CATALOGFILE, outputting to stdout, unless --noout is used. The add command always takes three parameters even though some of the XML catalog constructs take only a single argument. debug Print debugging statements showing the steps xmlcatalog is executing. del VALUE(S) Remove the catalog entry corresponding to VALUE(S). dump Print the current catalog. exit Quit the shell. public PUBLIC-ID Execute a Formal Public Identifier lookup of the catalog entry for PUBLIC-ID. The corresponding entry will be output to the command line. quiet Stop printing debugging statements. system SYSTEM-ID Execute a Formal System Identifier lookup of the catalog entry for SYSTEM-ID. The corresponding entry will be output to the command line. ENVIRONMENT XML_CATALOG_FILES XML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. This can be done by setting the XML_CATALOG_FILES environment variable to a space-separated list of catalogs. 
Use percent-encoding to escape spaces or other characters. An empty variable should deactivate loading the default /etc/xml/catalog catalog. DIAGNOSTICS xmlcatalog return codes provide information that can be used when calling it from scripts. 0 No error 1 Failed to remove an entry from the catalog 2 Failed to save to the catalog, check file permissions 3 Failed to add an entry to the catalog 4 Failed to look up an entry in the catalog SEE ALSO libxml(3) More information can be found at • libxml(3) web page https://gitlab.gnome.org/GNOME/libxml2 • libxml(3) catalog support web page at https://gitlab.gnome.org/GNOME/libxml2/-/wikis/Catalog-support • James Clark's SGML catalog page http://www.jclark.com/sp/catalog.htm • OASIS XML catalog specification http://www.oasis- open.org/committees/entity/spec.html AUTHOR John Fleck <jfleck@inkstain.net> Author. COPYRIGHT Copyright © 2001, 2004 libxml2 02/19/2022 XMLCATALOG(1)
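As a sketch, the XML_CATALOG_FILES convention described above can be exercised from any POSIX shell; the second catalog path below is purely illustrative:

```shell
# Entries are separated by spaces, so a space *inside* a path must be
# percent-encoded (%20).
XML_CATALOG_FILES='/etc/xml/catalog /opt/my%20catalogs/catalog.xml'
export XML_CATALOG_FILES
echo "$XML_CATALOG_FILES"
```

Setting the variable to an empty string, as noted above, disables loading of the default /etc/xml/catalog.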
zcmp
zcmp and zdiff are filters that invoke cmp(1) or diff(1) respectively to compare compressed files. Any options that are specified are passed to cmp(1) or diff(1). If only file1 is specified, it is compared against a file with the same name, but with the extension removed. When both file1 and file2 are specified, either file may be compressed. Extensions handled by gzip(1): • z, Z, • gz, • taz, • tgz. Extensions handled by bzip2(1): • bz, • bz2, • tbz, • tbz2. Extensions handled by xz(1): • lzma, • xz, • tlz, • txz. ENVIRONMENT TMPDIR Directory in which to place temporary files. If unset, /tmp is used. FILES /tmp/zcmp.XXXXXXXXXX Temporary file for zcmp. /tmp/zdiff.XXXXXXXXXX Temporary file for zdiff. SEE ALSO bzip2(1), cmp(1), diff(1), gzip(1), xz(1) CAVEATS zcmp and zdiff rely solely on the file extension to determine what is, or is not, a compressed file. Consequently, the following are not supported as arguments: - directories - device special files - filenames indicating the standard input (“-”) macOS 14.5 May 23, 2011 macOS 14.5
zcmp, zdiff – compare compressed files
zcmp [options] file1 [file2] zdiff [options] file1 [file2]
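A brief sketch of the single-argument form described above, in which the extension is stripped to find the comparison target; the filenames are illustrative, and gzip(1) with zcmp is assumed to be installed:

```shell
tmp=$(mktemp -d)
printf 'hello world\n' > "$tmp/data.txt"
gzip -c "$tmp/data.txt" > "$tmp/data.txt.gz"   # compressed copy, original kept
if zcmp "$tmp/data.txt.gz"; then               # compares data.txt.gz with data.txt
    result=identical
else
    result=different
fi
echo "$result"
rm -r "$tmp"
```

With two arguments, e.g. zcmp old.txt.gz new.txt, either file may be compressed.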
rwsnoop
rwsnoop measures reads and writes at the application level, matching the read, write, pread, and pwrite syscalls. Since it uses DTrace, only users with root privileges can run this command.
rwsnoop - snoop read/write events. Uses DTrace.
rwsnoop [-jPtvZ] [-n name] [-p PID]
-j print project ID -P print parent process ID -t print timestamp, us -v print time, string -Z print zone ID -n name process name to track -p PID PID to track
Default output, # rwsnoop Print zone ID, # rwsnoop -Z Monitor processes named "bash", # rwsnoop -n bash FIELDS TIME timestamp, us TIMESTR time, string ZONE zone ID PROJ project ID UID user ID PID process ID PPID parent process ID CMD command name for the process D direction, Read or Write BYTES total bytes during sample FILE filename, if file based. Reads and writes that are not file based, for example with sockets, will print "<unknown>" as the filename. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT rwsnoop will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO rwtop(1M), dtrace(1M) version 0.70 July 24, 2005 rwsnoop(1m)
jstatd
The jstatd command is an RMI server application that monitors for the creation and termination of instrumented Java HotSpot VMs and provides an interface to enable remote monitoring tools, jstat and jps, to attach to JVMs that are running on the local host and collect information about the JVM process. The jstatd server requires an RMI registry on the local host. The jstatd server attempts to attach to the RMI registry on the default port, or on the port you specify with the -p port option. If an RMI registry is not found, then one is created within the jstatd application that's bound to the port that's indicated by the -p port option or to the default RMI registry port when the -p port option is omitted. You can stop the creation of an internal RMI registry by specifying the -nr option. OPTIONS FOR THE JSTATD COMMAND -nr This option does not attempt to create an internal RMI registry within the jstatd process when an existing RMI registry isn't found. -p port This option sets the port number where the RMI registry is expected to be found, or when not found, created if the -nr option isn't specified. -r rmiport This option sets the port number to which the RMI connector is bound. If not specified a random available port is used. -n rminame This option sets the name to which the remote RMI object is bound in the RMI registry. The default name is JStatRemoteHost. If multiple jstatd servers are started on the same host, then the name of the exported RMI object for each server can be made unique by specifying this option. However, doing so requires that the unique server name be included in the monitoring client's hostid and vmid strings. -Joption This option passes a Java option to the JVM, where the option is one of those described on the reference page for the Java application launcher. For example, -J-Xms48m sets the startup memory to 48 MB. See java. SECURITY The jstatd server can monitor only JVMs for which it has the appropriate native access permissions. 
Therefore, the jstatd process must be running with the same user credentials as the target JVMs. Some user credentials, such as the root user in Linux and macOS operating systems, have permission to access the instrumentation exported by any JVM on the system. A jstatd process running with such credentials can monitor any JVM on the system, but introduces additional security concerns. The jstatd server doesn't provide any authentication of remote clients. Therefore, running a jstatd server process exposes the instrumentation exported by all JVMs for which the jstatd process has access permissions to any user on the network. This exposure might be undesirable in your environment, and therefore, local security policies should be considered before you start the jstatd process, particularly in production environments or on networks that aren't secure. For security purposes, the jstatd server uses an RMI ObjectInputFilter to allow only essential classes to be deserialized. If your security concerns can't be addressed, then the safest action is to not run the jstatd server and use the jstat and jps tools locally. However, when using jps to get a list of instrumented JVMs, the list will not include any JVMs running in docker containers. REMOTE INTERFACE The interface exported by the jstatd process is proprietary and guaranteed to change. Users and developers are discouraged from writing to this interface.
jstatd - monitor the creation and termination of instrumented Java HotSpot VMs
Note: This command is experimental and unsupported. jstatd [options]
This represents the jstatd command-line options. See Options for the jstatd Command.
The following are examples of the jstatd command. The jstatd scripts automatically start the server in the background. INTERNAL RMI REGISTRY This example shows how to start a jstatd session with an internal RMI registry. This example assumes that no other server is bound to the default RMI registry port (port 1099). jstatd EXTERNAL RMI REGISTRY This example starts a jstatd session with an external RMI registry. rmiregistry& jstatd This example starts a jstatd session with an external RMI registry server on port 2020. rmiregistry 2020& jstatd -p 2020 This example starts a jstatd session with an external RMI registry server on port 2020 and an RMI connector bound to port 2021. rmiregistry 2020& jstatd -p 2020 -r 2021 This example starts a jstatd session with an external RMI registry on port 2020 that's bound to AlternateJstatdServerName. rmiregistry 2020& jstatd -p 2020 -n AlternateJstatdServerName STOP THE CREATION OF AN IN-PROCESS RMI REGISTRY This example starts a jstatd session that doesn't create an RMI registry when one isn't found. This example assumes an RMI registry is already running. If an RMI registry isn't running, then an error message is displayed. jstatd -nr ENABLE RMI LOGGING This example starts a jstatd session with RMI logging capabilities enabled. This technique is useful as a troubleshooting aid or for monitoring server activities. jstatd -J-Djava.rmi.server.logCalls=true JDK 22 2024 JSTATD(1)
lpr
lpr submits files for printing. Files named on the command line are sent to the named printer or the default destination if no destination is specified. If no files are listed on the command-line, lpr reads the print file from the standard input. THE DEFAULT DESTINATION CUPS provides many ways to set the default destination. The LPDEST and PRINTER environment variables are consulted first. If neither are set, the current default set using the lpoptions(1) command is used, followed by the default set using the lpadmin(8) command.
lpr - print files
lpr [ -E ] [ -H server[:port] ] [ -U username ] [ -P destination[/instance] ] [ -# num-copies ] [ -h ] [ -l ] [ -m ] [ -o option[=value] ] [ -p ] [ -q ] [ -r ] [ -C title ] [ -J title ] [ -T title ] [ file(s) ]
The following options are recognized by lpr: -E Forces encryption when connecting to the server. -H server[:port] Specifies an alternate server. -C "name" -J "name" -T "name" Sets the job name/title. -P destination[/instance] Prints files to the named printer. -U username Specifies an alternate username. -# copies Sets the number of copies to print. -h Disables banner printing. This option is equivalent to -o job-sheets=none. -l Specifies that the print file is already formatted for the destination and should be sent without filtering. This option is equivalent to -o raw. -m Send an email on job completion. -o option[=value] Sets a job option. See "COMMON JOB OPTIONS" below. -p Specifies that the print file should be formatted with a shaded header with the date, time, job name, and page number. This option is equivalent to -o prettyprint and is only useful when printing text files. -q Hold job for printing. -r Specifies that the named print files should be deleted after submitting them. COMMON JOB OPTIONS Aside from the printer-specific options reported by the lpoptions(1) command, the following generic options are available: -o job-sheets=name Prints a cover page (banner) with the document. The "name" can be "classified", "confidential", "secret", "standard", "topsecret", or "unclassified". -o media=size Sets the page size to size. Most printers support at least the size names "a4", "letter", and "legal". -o number-up={2|4|6|9|16} Prints 2, 4, 6, 9, or 16 document (input) pages on each output page. -o orientation-requested=4 Prints the job in landscape (rotated 90 degrees counterclockwise). -o orientation-requested=5 Prints the job in landscape (rotated 90 degrees clockwise). -o orientation-requested=6 Prints the job in reverse portrait (rotated 180 degrees). -o print-quality=3 -o print-quality=4 -o print-quality=5 Specifies the output quality - draft (3), normal (4), or best (5). -o sides=one-sided Prints on one side of the paper. 
-o sides=two-sided-long-edge Prints on both sides of the paper for portrait output. -o sides=two-sided-short-edge Prints on both sides of the paper for landscape output. NOTES The -c, -d, -f, -g, -i, -n, -t, -v, and -w options are not supported by CUPS and produce a warning message if used.
Print two copies of a document to the default printer: lpr -# 2 filename Print a double-sided legal document to a printer called "foo": lpr -P foo -o media=legal -o sides=two-sided-long-edge filename Print a presentation document 2-up to a printer called "foo": lpr -P foo -o number-up=2 filename SEE ALSO cancel(1), lp(1), lpadmin(8), lpoptions(1), lpq(1), lprm(1), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS lpr(1)
layerutil
Creates a compiled layered image stack (lcr) file from a layered input file source, such as an lsr or suitably structured Photoshop (psd) file. If the psd file's basename ends with @Yx, Y will be treated as the scale factor of the psd file. If GPU compression is not specified, then lossy compression is used.
layerutil – create compiled layered image stack
layerutil [-Vlhogspf] inputfile
The following options are available: -c Convert to lcr format. -f s, --flattened-image Saves the flattened image as a JPEG to the output path given by the -o flag. If the output filename doesn't end with .jpeg or .jpg, the given file extension will be removed and .jpeg will be appended. If the file that gets written out is a JPEG image, the resulting image will be compressed with the default compression options. -g s, --gpu-compression=s Sets the program to use GPU-optimized compression. You can choose either best or smallest. GPU compression is only supported on iOS 10.0/AppleTV 10.0 or greater. -l n, --lossy-compression=n Set the lossy compression factor used for image content to a value between 0 and 1.0; the default is 0.75. The smaller the value, the smaller the compressed file size. A value of 1.0 creates a lossless image. -s n, --scale=v Used to specify the scale factor used in the generated lcr file. When used with a PSD file, it indicates the scale factor of the PSD file. When used with an LSR file, it indicates which scale factor of images should be kept. If scale is not specified, 1 is assumed. -p n, --display-gamut=v v can be one of srgb/p3. Selecting p3 processes the image and, if it contains wide gamut data, it will be treated as such. -g s, --palette-image Turn on palette image compression (defaults to off). -o, --output Output file name. If you are converting an input lsr/psd file and no output file is given, the basename of the input file is used as the output file name with an appended .lcr extension. -h, --help Prints out usage information. -V, --version Prints out version information. Darwin November 13, 2017 Darwin
dbicadmin5.34
dbicadmin - utility for administrating DBIx::Class schemata
dbicadmin: [-I] [long options...] deploy a schema to a database dbicadmin --schema=MyApp::Schema \ --connect='["dbi:SQLite:my.db", "", ""]' \ --deploy update an existing record dbicadmin --schema=MyApp::Schema --class=Employee \ --connect='["dbi:SQLite:my.db", "", ""]' \ --op=update --set='{ "name": "New_Employee" }'
Actions --create Create version diffs; needs --preversion --upgrade Upgrade the database to the current schema --install Install the schema version tables to an existing database --deploy Deploy the schema to the database --select Select data from the schema --insert Insert data into the schema --update Update data in the schema --delete Delete data from the schema --op compatibility option; all of the above can be supplied as --op=<action> --help display this help Arguments --config-file or --config Supply the config file for parsing by Config::Any --connect-info Supply the connect info as trailing options e.g. --connect-info dsn=<dsn> user=<user> password=<pass> --connect Supply the connect info as a JSON-encoded structure, e.g. --connect=["dsn","user","pass"] --schema-class The class of the schema to load --config-stanza Where in the config to find the connection_info, supply in form MyApp::Model::DB --resultset or --resultset-class or --class The resultset to operate on for data manipulation --sql-dir The directory where sql diffs will be created --sql-type The RDBMS flavour you wish to use --version Supply a version to install --preversion The previous version to diff against --set JSON data used to perform data operations --attrs JSON string to be used for the second argument for search --where JSON string to be used for the where clause of search --force Be forceful with some operations --trace Turn on DBIx::Class trace output --quiet Be less verbose -I Same as perl's -I, prepended to current @INC AUTHORS See "AUTHORS" in DBIx::Class LICENSE You may distribute this code under the same terms as Perl itself perl v5.34.0 2018-01-29 DBICADMIN(1)
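The --set, --where, and --attrs options above take JSON strings, which needs care with shell quoting. A small sketch (the option values are illustrative): single-quoting the whole argument preserves the JSON's embedded double quotes.

```shell
# Single-quote each option so the JSON's double quotes survive the shell.
set_arg='--set={ "name": "New_Employee" }'
where_arg='--where={ "name": "Old_Employee" }'
printf '%s\n' "$set_arg" "$where_arg"
```

These strings can then be passed to dbicadmin exactly as shown in the synopsis examples.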
snmptest
snmptest is a flexible SNMP application that can monitor and manage information on a network entity. After invoking the program, a command line interpreter proceeds to accept commands. This interpreter enables the user to send different types of SNMP requests to target agents. AGENT identifies a target SNMP agent, which is instrumented to monitor the given objects. At its simplest, the AGENT specification will consist of a hostname or an IPv4 address. In this situation, the command will attempt communication with the agent, using UDP/IPv4 to port 161 of the given target host. See snmpcmd(1) for a full list of the possible formats for AGENT. Once snmptest is invoked, the command line interpreter will prompt with: Variable: At this point you can enter one or more variable names, one per line. A blank line ends the parameter input and will send the request (variables entered) in a single packet, to the remote entity. Each variable name is given in the format specified in variables(5). For example: snmptest -c public -v 1 zeus Variable: system.sysDescr.0 Variable: will return some information about the request and reply packets, as well as the information: requestid 0x5992478A errstat 0x0 errindex 0x0 system.sysDescr.0 = STRING: "Unix 4.3BSD" The errstat value shows the error status code for the call. The possible values for errstat are in the header file snmp.h. The errindex value identifies the variable that has the given error. Index values are assigned to all the variables entered at the "Variable:" prompt. The first value is assigned an index of 1. Upon startup, the program defaults to sending a GET request packet. 
The type of request can be changed by typing one of the following commands at the "Variable:" prompt: $G - send a GET request $N - send a GETNEXT request $S - send a SET request $B - send a GETBULK request Note: GETBULK is not available in SNMPv1 $I - send an Inform request $T - send an SNMPv2 Trap request Other values that can be entered at the "Variable:" prompt are: $D - toggle the dumping of each sent and received packet $QP - toggle a quicker, less verbose output form $Q - Quit the program Request Types: GET Request: When in "GET request" mode ($G or default), the user can enter an OID at the "Variable:" prompt. The user can enter multiple OIDs, one per prompt. The user enters a blank line to send the GET request. GETNEXT Request: The "GETNEXT request" mode ($N) is similar to the "GET request" mode, described above. SET Request: When in the "SET request" mode ($S), more information is requested by the prompt for each variable. The prompt: Type [i|u|s|x|d|n|o|t|a]: requests the type of the variable be entered. Depending on the type of value you want to set, you can type one of the following: i - integer u - unsigned integer s - octet string in ASCII x - octet string in hex bytes, separated by whitespace d - octet string as decimal bytes, separated by whitespace a - ip address in dotted IP notation o - object identifier n - null t - timeticks At this point a value will be prompted for: Value: If this is an integer value, just type the integer (in decimal). If it is a decimal string, type in white-space separated decimal numbers, one per byte of the string. Again type a blank line at the prompt for the variable name to send the packet. GETBULK Request: The "GETBULK request" mode ($B) is similar to the "SET request" mode. GETBULK, however, is not available in SNMPv1. Inform Request: The "Inform request" mode ($I) is similar to the "SET request" mode. This type of request, however, is not available in SNMPv1. 
Also, the AGENT specified on the snmptest command should correspond to the target snmptrapd agent. SNMPv2 Trap Request: The "SNMPv2 Trap Request" mode ($T) is similar to the "SET request" mode. This type of request, however, is not available in SNMPv1. Also, the AGENT specified on the snmptest command should correspond to the target snmptrapd agent.
snmptest - communicates with a network entity using SNMP requests
snmptest [COMMON OPTIONS] AGENT
snmptest takes the common options described in the snmpcmd(1) manual page.
The following is an example of sending a GET request for two OIDs: % snmptest -v 2c -c public testhost:9999 Variable: system.sysDescr.0 Variable: system.sysContact.0 Variable: Received Get Response from 128.2.56.220 requestid 0x7D9FCD63 errstat 0x0 errindex 0x0 SNMPv2-MIB::sysDescr.0 = STRING: SunOS testhost 5.9 Generic_112233-02 sun4u SNMPv2-MIB::sysContact.0 = STRING: x1111 The following is an example of sending a GETNEXT request: Variable: SNMPv2-MIB::sysORUpTime Variable: Received Get Response from 128.2.56.220 requestid 0x7D9FCD64 errstat 0x0 errindex 0x0 SNMPv2-MIB::sysORUpTime.1 = Timeticks: (6) 0:00:00.06 Variable: The following is an example of sending a SET request: Variable: $S Request type is Set Request Variable: system.sysLocation.0 Type [i|u|s|x|d|n|o|t|a]: s Value: building 17 Variable: Received Get Response from 128.2.56.220 requestid 0x7D9FCD65 errstat 0x0 errindex 0x0 SNMPv2-MIB::sysLocation.0 = STRING: building A Variable: The following is an example of sending a GETBULK request: Variable: $B Request type is Bulk Request Enter a blank line to terminate the list of non-repeaters and to begin the repeating variables Variable: Now input the repeating variables Variable: system.sysContact.0 Variable: system.sysLocation.0 Variable: What repeat count? 2 Received Get Response from 128.2.56.220 requestid 0x2EA7942A errstat 0x0 errindex 0x0 SNMPv2-MIB::sysName.0 = STRING: testhost SNMPv2-MIB::sysORLastChange.0 = Timeticks: (58) 0:00:00.58 SNMPv2-MIB::sysLocation.0 = STRING: bldg A SNMPv2-MIB::sysORID.1 = OID: IF-MIB::ifMIB Variable: The following is an example of sending an Inform request: snmptest -v 2c -c public snmptrapd_host Variable: $I Request type is Inform Request (Are you sending to the right port?) 
Variable: system.sysContact.0 Type [i|u|s|x|d|n|o|t|a]: s Value: x12345 Variable: Inform Acknowledged Variable: The snmptrapd_host will show: snmptrapd_host [<ip address>]: Trap SNMPv2-MIB::sysContact.0 = STRING: x12345 The following is an example of sending an SNMPv2 Trap request: snmptest -v 2c -c public snmptrapd_host Variable: $T Request type is SNMPv2 Trap Request (Are you sending to the right port?) Variable: system.sysLocation.0 Type [i|u|s|x|d|n|o|t|a]: s Value: building a Variable: The snmptrapd_host will show: snmptrapd_host [<ip address>]: Trap SNMPv2-MIB::sysLocation.0 = STRING: building a SEE ALSO snmpcmd(1), snmpget(1), snmpset(1), variables(5) V5.6.2.1 06 Sep 2003 SNMPTEST(1)
mib2c-update
Use mib2c-update to generate your mib2c code templates, and it will track the original code and the changes you make to the code. If the mib2c template changes (bug fixes, enhancements in later releases), re-running mib2c will update the template and then attempt to re-apply your changes. This can be extremely useful when developing your own mib2c templates. When you first run mib2c-update, it will create several hidden directories and a .mib2c-updaterc file. You must edit the .mib2c-updaterc file to specify two values. The first, UPDATE_OID, is the table name to specify when running mib2c. The second, UPDATE_CONF, is the mib2c configuration file to specify when running mib2c. Additional mib2c options can be specified in UPDATE_MIB2C_OPTS. BUGS mib2c-update has only been tested on individual tables. Specifying a scalar or an entire MIB might not work. V5.6.2.1 07 Apr 2010 mib2c-update(1)
mib2c-update - script to merge custom code into updated mib2c code
mib2c-update
null
null
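The two required .mib2c-updaterc values described above can be sketched as follows; the table name and configuration file are placeholders (ifTable and mib2c.mfd.conf are only common examples, not values mandated by this manual):

```shell
# Sketch of a minimal .mib2c-updaterc; UPDATE_OID and UPDATE_CONF are the
# two values mib2c-update requires, UPDATE_MIB2C_OPTS is optional.
cat > .mib2c-updaterc <<'EOF'
UPDATE_OID=ifTable
UPDATE_CONF=mib2c.mfd.conf
UPDATE_MIB2C_OPTS=""
EOF
# Then, in the same directory (on a system with net-snmp installed):
# mib2c-update
grep UPDATE_OID .mib2c-updaterc   # prints UPDATE_OID=ifTable
```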
creatbyproc.d
creatbyproc.d is a DTrace OneLiner to print file creations as they occur, including the name of the process calling creat(). This matches file creations made via the creat() system call; not all file creation occurs in this way. Some happens through open() with the O_CREAT flag, which this script will not monitor. Docs/oneliners.txt and Docs/Examples/oneliners_examples.txt in the DTraceToolkit contain this as a oneliner that can be cut-n-pasted to run. Since this uses DTrace, only users with root privileges can run this command.
creatbyproc.d - snoop creat()s by process name. Uses DTrace.
creatbyproc.d
null
This prints process names and new pathnames until Ctrl-C is hit. # creatbyproc.d FIELDS CPU The CPU that received the event ID A DTrace probe ID for the event FUNCTION:NAME The DTrace probe name for the event remaining fields The first is the name of the process, the second is the file pathname. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT creatbyproc.d will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO dtrace(1M) version 1.00 June 11, 2005 creatbyproc.d(1m)
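The one-liner form referenced above (Docs/oneliners.txt) can be sketched as follows; the exact probe spelling is an assumption based on the description, and running it requires root on a system with DTrace:

```shell
# Sketch of the creat()-snooping one-liner; the probe spelling is an
# assumption. On a DTrace system, run as root (it runs until Ctrl-C,
# like creatbyproc.d itself):
oneliner='syscall::creat*:entry { printf("%-16s %s\n", execname, copyinstr(arg0)); }'
# dtrace -n "$oneliner"
echo "$oneliner"
```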
fdesetup
fdesetup is used to enable or disable FileVault, to list, add, or remove enabled FileVault users, and to obtain status about the current state of FileVault. Most commands require root access and need to be authenticated with a FileVault password, a personal recovery key (if enabled), or, in some cases, the private key from the installed institutional recovery key. Some status related commands can be run from a non-root session. Certain commands on CoreStorage volumes allow you to authenticate and unlock by providing the -key option followed by the path to a keychain file containing the private key of the installed institutional recovery key. Do not include the certificate in this keychain. By default, when enabling FileVault fdesetup will only return a personal recovery key. Given the proper certificate information, fdesetup can install an institutional recovery key. You can also set it up without creating a personal recovery key using the -norecoverykey option, though this is not recommended unless you are also installing an institutional recovery key. On APFS volumes, if you already have a personal recovery key created from a previous enablement, it will not remove or create a new personal recovery key, allowing you to reuse the existing key. Either type of key can be added or changed at a later time. With the -keychain option, an institutional recovery key can be set up by placing an X.509 asymmetric public certificate in the /Library/Keychains/FileVaultMaster.keychain file. security create-filevaultmaster-keychain can be used to create the keychain. Alternatively a certificate can be passed in by using the -certificate option and entering the path to the DER encoded certificate file. In this case the FileVaultMaster.keychain file will be created using the certificate. With your .cer file, the optional certificate data can be obtained using the base64 tool. 
For example: 'base64 /path/to/mycert.cer > /mynewdata.txt', at which point you would copy the data string contained in the text file and place it into the Certificate <data></data> value area of the property list. The certificate should be self-signed, and the common name must be "FileVault Recovery Key". Because the user password may not be immediately available, read the DEFERRED ENABLEMENT section below for information on how to delay enabling FileVault until the user logs in or out. The status command will indicate if FileVault is On or Off. If a FileVault master keychain is installed into the /Library/Keychains folder it will also report this back. Note that this, by itself, does not indicate whether or not FileVault has been set up with an institutional recovery key. The -extended option will display extended status information, including the time remaining for encrypting or decrypting. The calculation of this remaining time may take a few minutes and is only an approximate value. The list command will display the short names and UUIDs of enabled FileVault users. You can use the -extended option to display a full list of existing user types along with some additional information. This information will include if the recovery key was escrowed, though note that it will show "Yes" even if the information has not yet been successfully sent to the server. You can also use the -offline option to get a list of currently locked and offline CoreStorage FileVault volumes. You can use this information as part of the haspersonalrecoverykey or hasinstitutionalrecoverykey commands. The remove command will remove a user from FileVault given either the user name or the FileVault UUID. The sync command synchronizes Open Directory attributes (e.g. user pictures) with appropriate FileVault users, and removes FileVault users that were removed from Open Directory. In most cases these changes will already be updated in FileVault. sync does not add users to FileVault. 
Use the haspersonalrecoverykey or hasinstitutionalrecoverykey commands to see if FileVault has a personal or institutional recovery key set up. If FileVault is active and the key is set, by default these commands will return "true" or "false". Note that "false" may also be returned if any error occurs, or if FileVault is not yet fully enabled. You can use the device option to specify either a mount path (e.g. /Volumes/myvolume), a bsd name identifier (e.g. disk0), or Logical Volume or Logical Volume Family UUID (obtained using either the list command, or using diskutil(8)). If you specify a device parameter and it finds the institutional recovery key, a hex representation of the public key hash will be returned in lieu of "true". If a user currently has the system unlocked using the recovery key, the usingrecoverykey command will return "true". The changerecovery command changes or adds either the personal or institutional recovery key. You can only have one recovery key of each type, so any associated existing key will be removed. The removerecovery command will remove any existing recovery key of the type specified. It is not recommended that you remove all recovery keys since, if you lose your FileVault password, you may not be able to access your information. On APFS volumes using 10.14 or later, the existing recovery key can be used as authentication to change or remove the personal recovery key. On supported hardware, fdesetup allows restart of a FileVault-enabled system without requiring unlock during the subsequent boot using the authrestart command. WARNING: FileVault protections are reduced during authenticated restarts. In particular, fdesetup deliberately stores at least one additional copy of a permanent FDE (full disk encryption) unlock key in both system memory and (on supported systems) the System Management Controller (SMC). fdesetup must be run as root and itself prompts for a password to unlock the FileVault root volume. 
Use pmset destroyfvkeyonstandby to prevent saving the key across standby modes. Once authrestart is authenticated, it launches shutdown(8) and, upon successful unlock, the unlock key will be removed. You can also use this as an option to the enable command if the system supports this feature. The supportsauthrestart command will check the system to see if it supports the authrestart command option, however you should note that even if this returns true, FileVault must still be enabled for authrestart to work. VERBS Each command verb is listed with its description and individual arguments. help Shows abbreviated help list [-extended] [-offline] [-verbose] List enabled users, or locked volumes. enable [[[-user username ...] [-usertoadd added_username ...]] | [-inputplist]] [-outputplist] [-prompt] [-forcerestart] [-authrestart] [-keychain | [-certificate path_to_cer_file]] [[-defer file_path] [-forceatlogin max_cancel_attempts] [-dontaskatlogout]] [-norecoverykey] [-verbose] Enables FileVault. This command will fail if no recovery partition was found on your disk. Additionally, all Secure Token users must contain valid passwords. disable [-verbose] Disables FileVault. status [-extended] [-verbose] Returns current status about FileVault. On APFS volumes, the -extended option will give continuous updates and estimated completion times during encryption and decryption phases. sync Synchronizes information from Open Directory to FileVault. add -usertoadd added_username ... | -inputplist [-verbose] Adds additional FileVault users. A FileVault user password or recovery key must be used to authenticate. remove -uuid user_uuid | -user username [-verbose] Removes enabled user from FileVault. It will not remove the user if it's the last OS user on the volume. changerecovery -personal | -institutional -user [[-keychain] | [-certificate path_to_cer_file]] [-key path_to_keychain_file] [-inputplist] [-verbose] Adds or updates the current recovery key. 
Either personal and/or institutional options must be specified. When changing the personal recovery key, the updated personal recovery key will be automatically generated. When changing either key, the old value will be removed and replaced. On CoreStorage volumes the -key option can be used to unlock FileVault. More information on this is described elsewhere in this document. removerecovery -personal -user | -institutional [[-key path_to_keychain_file] | [-inputplist]] [-verbose] Removes the current recovery key. Either personal and/or institutional options must be specified. The -key option can be optionally used to unlock FileVault. More information on this is described elsewhere in this document. authrestart [-inputplist] [-delayminutes number_of_minutes_to_delay] [-verbose] If FileVault is enabled on the current volume, it restarts the system, bypassing the initial unlock. The optional -delayminutes option can be used to delay the restart command for a set number of minutes. A value of 0 represents 'immediately', and a value of -1 represents 'never'. The command may not work on all systems. isactive [-verbose] Returns status 0 if FileVault is enabled along with the string "true". Will return status 1 if FileVault is Off, along with "false". haspersonalrecoverykey [-device] [-verbose] Returns the string "true" if FileVault contains a personal recovery key. hasinstitutionalrecoverykey [-device] [-verbose] By default, this will return the string "true" if FileVault contains an institutional recovery key. On CoreStorage volumes specified using the --device option, this will return the hex representation of the public key hash instead of "true". The hash option is not supported for APFS volumes. This will return "false" if there is no institutional recovery key installed. usingrecoverykey [-verbose] Returns the string "true" if FileVault is currently unlocked using the personal recovery key. 
supportsauthrestart Returns the string "true" if the system supports the authenticated restart option. Note that even if true is returned, this does not necessarily mean that authrestart will work since it requires that FileVault be enabled. validaterecovery [-inputplist] [-verbose] Returns the string "true" if the personal recovery key is validated. The validated recovery key must be in the form xxxx-xxxx-xxxx-xxxx-xxxx-xxxx. showdeferralinfo If the defer mode is set, this will show the current settings. version Displays current tool version.
fdesetup – FileVault configuration tool
fdesetup verb [options]
-defer file_path Defer enabling FileVault until the user password is obtained, and recovery key and system information will be written to the file path. -user user_shortname Short user name. -uuid user_uuid User UUID in canonical form: 11111111-2222-3333-4444-555555555555. -usertoadd added_user Additional user(s) to be added to FileVault. -inputplist Acquire configuration information from stdin when enabling or adding users to FileVault. -prompt Always prompt for information. -forcerestart Force a normal restart after FileVault has been successfully configured. Only valid for CoreStorage volumes. -authrestart Do an authenticated restart after a successful enable occurs. -outputplist Outputs the recovery key and additional system information to stdout in a plist dictionary. If the recovery key changes, the dictionary will also contain a Change key and the EnableDate key will contain the date of the change. Where possible, you should avoid writing this file to a persistent location since it may pose additional security risk, and at the very least, securely remove the file as soon as possible. -keychain Use the institutional recovery key stored in /Library/Keychains/FileVaultMaster.keychain. -certificate path_to_cer_file Use the certificate data located at the path. Any existing /Library/Keychains/FileVaultMaster.keychain file will be moved away with the location logged in the system log. Do not set this option if your certificate data is located in the input plist information. The common name of the certificate must be "FileVault Recovery Key" -key path_to_keychain_file Use the keychain file located at the path containing the private key for the currently installed institutional recovery key to unlock and authenticate FileVault. -norecoverykey Do not return a personal recovery key. On APFS volumes, you can use this option to reuse an existing recovery key previously created. 
-forceatlogin max_cancel_attempts When using the -defer option, prompt the designated user at login time to enable FileVault. The user has at most max_cancel_attempts to cancel and bypass enabling FileVault before it will be required to log in. If this value is 0, the user's next login will require that they enable FileVault before being allowed to use their account. Other special values include -1 to ignore this option, and 9999, which means that the user should never be forced to enable FileVault (instead the user will just be prompted each time at login until FileVault is enabled). -dontaskatlogout When using the -defer option, the default action will be to prompt the designated user at user logout time for their password in order to enable FileVault. If this option is used, the logout enablement window is not shown. The assumption is that you are instead using the -forceatlogin option to prompt at user login time to enable FileVault. -extended Return extended output information for certain commands. When using this while checking status on enabling or disabling FileVault on APFS volumes, a rough estimate of the time remaining will be displayed. This value may take a few minutes to initially calculate. Hit Ctrl-C to stop the status display. -offline Display the current offline and locked FileVault volumes. Currently only used for the list command. -device bsd_name_or_mount_path_or_lvf_or_lv_UUID Device location to be applied for the command. This can be in the form "disk1", "/Volumes/MyVolume", or when asking for a CoreStorage recovery user, a UUID for the Logical Volume or Logical Volume Family of a volume. Not all commands can use this option. -delayminutes number_of_minutes_to_delay The integer number of minutes to delay the authenticated restart. If this option is not set or the value is 0, the auth restart will happen immediately. 
A value of -1 will never attempt to automatically restart; instead the auth restart operation will occur whenever the user next restarts. DEFERRED ENABLEMENT The -defer option can be used with the enable command option to delay enabling FileVault until after the current (or next) local user logs in or out, thus avoiding the need to enter a password when the tool is run. Depending on the options set, the user will either be prompted at logout time for the password, or the user will be prompted to enable FileVault when they log in. If the volume is not already a CoreStorage volume, the system may need to be restarted to start the encryption process. Dialogs are automatically dismissed and canceled after 60 seconds if no interaction occurs. The -defer option sets up a single user to be added to FileVault. If there was no user specified (e.g. without the -user option), then the currently logged in user will be added to the configuration and becomes the designated user. If there is no user specified and no users are logged in at the time of configuration, then the next user that logs in will become the designated user. As recovery key information is not generated until the user password is obtained, the -defer option requires a path where this information will be written to. The property list file will be created as a root-only readable file and should be placed in a secure location. You can use the showdeferralinfo command to view the current deferral configuration information. Options that can be used in conjunction with the -defer option include: -keychain, -certificate, -forcerestart, -forceatlogin, -dontaskatlogout, -user, and -norecoverykey. 
Note that if the designated user is being prompted at logout to enable FileVault, and doesn't complete the setup, FileVault will not be enabled, but the configuration will remain and be used again for the designated user's next logout (or login if the -forceatlogin option is enabled), thereby 'nagging' the user to enable FileVault. When using the -forceatlogin option, the user is given a certain number of attempts to enable FileVault, in which they can cancel the operation and continue to use their system without FileVault. When the number of cancel attempts is reached, the user will not be able to log into their account until FileVault is enabled. The current value of the user's remaining attempts can be viewed using the showdeferralinfo command. Special values for the -forceatlogin option include setting it to '0' to force the enablement immediately at next login, a '-1' disables the check entirely, and a special value of '9999' means that the user will never be required to enable FileVault, though it will continually prompt the user until FileVault is enabled. If a personal recovery key is used, the user should probably be warned ahead of time that, upon successful enablement, they will need to write down and keep in a safe place the FileVault recovery key shown on the screen. The designated user must be a local user (or a mobile account user). To remove an active deferred enablement configuration, you can use the disable command, even if FileVault is not currently enabled. Starting with macOS 10.15, when using the -defer option at logout time, fdesetup may not finish the enablement until after the system returns to the login window. If you are displaying the recovery key to the user, it will not appear until the enable operation has completed. 
INPUT PROPERTY LIST <plist> <dict> <key>Username</key> <string>sally</string> <key>Password</key> <string>mypassword</string> <key>AdditionalUsers</key> <array> <dict> <key>Username</key> <string>johnny</string> <key>Password</key> <string>johnnypassword</string> </dict> <dict> <key>Username</key> <string>henry</string> <key>Password</key> <string>henrypassword</string> </dict> (etc) </array> <key>Certificate</key> <data>2v6tJdfabvtofALrDtXAu1w5cUOMCumz ... </data> <key>KeychainPath</key> <string>/privatekey.keychain</string> </dict> </plist> Username Short name of OD user used in enabling FileVault. Password Either password of the user, or in some cases, the personal recovery key. AdditionalUsers An array of dictionaries for each OD user that will be added during enablement. AdditionalUsers/Username The OD short user name for a user to be added to the FileVault user list. Certificate The institutional recovery key asymmetric certificate data. KeychainPath The path to the private key keychain file if you are authenticating to certain commands. Care should be taken with passwords that may be used within files. Precautions should be taken in your scripts to try to pass plist data directly from one tool to another to avoid writing this information to a persistent location. AUTHORIZATION POLICY Starting in macOS 10.15, you cannot use fdesetup to enable FileVault encryption unless one of the following occurs: 1) The responsible application is authorized for "Full Disk Access" in the System Settings Privacy pane. 2) System Integrity Protection (SIP) is disabled. 3) fdesetup was run due to a device configuration profile installation that was either DEP enrolled or MDM user approved. 4) The user has explicitly authorized the enablement of FileVault via a confirmation dialog.
fdesetup enable Enable FileVault after prompting for an OpenDirectory user name and password, and return the personal recovery key. fdesetup enable -keychain -norecoverykey Enables FileVault using an institutional recovery key in the FileVaultMaster.keychain file. No personal recovery key will be created. fdesetup enable -defer /MykeyAndInfo.plist Enables FileVault when the current user logs out and successfully enters their password and then writes the personal recovery key and other relevant information to the file. fdesetup enable -defer /MykeyAndInfo.plist -showrecoverykey -forceatlogin 3 -dontaskatlogout Will prompt to enable FileVault when the user logs in, allowing a maximum of 3 aborted enable attempts before requiring FileVault be enabled. After the 3 attempts, the user will not be able to log in to the client until either FileVault is enabled, or the deferral information is removed (via fdesetup disable). fdesetup enable -certificate /mycertfile.cer Enables FileVault with an institutional recovery key based off the certificate data in the DER encoded file. A FileVaultMaster.keychain file will be created automatically. fdesetup enable -inputplist < /someinfo.plist Enables FileVault using information from the property list read in from stdin. fdesetup changerecovery -institutional -keychain Adds or updates the institutional recovery key from the existing FileVaultMaster.keychain. fdesetup status Shows the current status of FileVault. fdesetup list -extended Lists the current FileVault users, including recovery key records, in an extended format. fdesetup remove -uuid A6C75639-1D98-4F19-ACD5-1892BAE27991 Removes the user with the UUID from the FileVault users list. fdesetup isactive Returns with exit status zero and "true" if FileVault is enabled and active. fdesetup add -usertoadd betty Adds the user betty to the existing FileVault setup. 
fdesetup changerecovery -personal -inputplist < /authinfo.plist Changes the existing recovery key and generates a new recovery key. fdesetup validaterecovery Gets the existing personal recovery key and returns "true" if the recovery key appears to be valid. EXIT STATUS The exit status of the tool is set to indicate whether any error was detected. The values returned are: 0 No error, or successful operation. 1 FileVault is Off. 2 FileVault appears to be On but Busy. 11 Authentication error. 12 Parameter error. 13 Unknown command error. 14 Bad command error. 15 Bad input error. 16 Legacy FileVault error. 17 Added users failed error. 18 Unexpected keychain found error. 19 Keychain error. This usually means the FileVaultMaster keychain could not be moved or replaced. 20 Deferred configuration setup missing or error. 21 Enable failed (Keychain) error. 22 Enable failed (CoreStorage) error. 23 Enable failed (DiskManager) error. 24 Already enabled error. 25 Unable to remove user or disable FileVault. 26 Unable to change recovery key. 27 Unable to remove recovery key. 28 FileVault is either off, busy, or the volume is locked. 29 Did not find FileVault information at the specified location. 30 Unable to add user to FileVault because user record could not be found. 31 Unable to enable FileVault due to management settings. 32 FileVault is already active. 33 Command option is unsupported on this file system. 34 An option or parameter is not supported for APFS volumes. 35 An error occurred during FileVault disablement. 36 This computer does not support enabling FileVault. 37 One or more users have a blank password. FileVault cannot be enabled. 99 Internal error. SEE ALSO security(1), diskutil(8), base64(1), pmset(1), shutdown(8) macOS July 2, 2019 macOS
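The certificate-data preparation described in the DESCRIPTION (base64-encoding a .cer file into the Certificate <data> element of the input plist) can be sketched as follows; the certificate bytes here are dummy data and all file names are placeholders:

```shell
# Sketch of building an input plist with embedded certificate data.
# "mycert.cer" is a placeholder; use your real DER-encoded certificate.
printf 'not-a-real-certificate' > mycert.cer
certdata=$(base64 < mycert.cer)
cat > enable.plist <<EOF
<plist>
<dict>
    <key>Username</key>
    <string>sally</string>
    <key>Certificate</key>
    <data>${certdata}</data>
</dict>
</plist>
EOF
grep -c '<data>' enable.plist   # prints 1
```

On a real system the result would then be fed to the tool as root, e.g. fdesetup enable -inputplist < enable.plist.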
flex
Generates programs that perform pattern-matching on text. Table Compression: -Ca, --align trade off larger tables for better memory alignment -Ce, --ecs construct equivalence classes -Cf do not compress tables; use -f representation -CF do not compress tables; use -F representation -Cm, --meta-ecs construct meta-equivalence classes -Cr, --read use read() instead of stdio for scanner input -f, --full generate fast, large scanner. Same as -Cfr -F, --fast use alternate table representation. Same as -CFr -Cem default compression (same as --ecs --meta-ecs) Debugging: -d, --debug enable debug mode in scanner -b, --backup write backing-up information to lex.backup -p, --perf-report write performance report to stderr -s, --nodefault suppress default rule to ECHO unmatched text -T, --trace flex should run in trace mode -w, --nowarn do not generate warnings -v, --verbose write summary of scanner statistics to stdout --hex use hexadecimal numbers instead of octal in debug outputs Files: -o, --outfile=FILE specify output filename -S, --skel=FILE specify skeleton file -t, --stdout write scanner on stdout instead of lex.yy.c --yyclass=NAME name of C++ class --header-file=FILE create a C header file in addition to the scanner --tables-file[=FILE] write tables to FILE Scanner behavior: -7, --7bit generate 7-bit scanner -8, --8bit generate 8-bit scanner -B, --batch generate batch scanner (opposite of -I) -i, --case-insensitive ignore case in patterns -l, --lex-compat maximal compatibility with original lex -X, --posix-compat maximal compatibility with POSIX lex -I, --interactive generate interactive scanner (opposite of -B) --yylineno track line count in yylineno Generated code: -+, --c++ generate C++ scanner class -Dmacro[=defn] #define macro defn (default defn is '1') -L, --noline suppress #line directives in scanner -P, --prefix=STRING use STRING as prefix instead of "yy" -R, --reentrant generate a reentrant C scanner --bison-bridge scanner for bison pure parser. 
--bison-locations include yylloc support. --stdinit initialize yyin/yyout to stdin/stdout --nounistd do not include <unistd.h> --noFUNCTION do not generate a particular FUNCTION Miscellaneous: -c do-nothing POSIX option -n do-nothing POSIX option -? -h, --help produce this help message -V, --version report flex version SEE ALSO The full documentation for flex is maintained as a Texinfo manual. If the info and flex programs are properly installed at your site, the command info flex should give you access to the complete manual. The Flex Project May 2017 FLEX(1)
flex - the fast lexical analyser generator
flex [OPTIONS] [FILE]...
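A minimal flex input file makes the options above concrete. The snippet below writes the classic word/line-counting scanner; the build step is commented out because it assumes flex and a C compiler are installed, which may not hold everywhere.

```shell
# Write a tiny flex scanner specification (count.l is a name chosen
# for this illustration).
cat > count.l <<'EOF'
%{
int words = 0, lines = 0;
%}
%option noyywrap
%%
[a-zA-Z]+  { words++; }
\n         { lines++; }
.          { /* ignore everything else */ }
%%
int main(void) { yylex(); printf("%d words, %d lines\n", words, lines); return 0; }
EOF
# On a system with flex and a C compiler (assumption):
#   flex -o count.c count.l && cc count.c -o count
#   printf 'hello world\n' | ./count
```

With `-t, --stdout` the generated C would go to standard output instead of count.c; `-v` would print the scanner statistics summary.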
hostinfo
The hostinfo command displays information about the host system on which the command is executing. The output includes a kernel version description, processor configuration data, available physical memory, and various scheduling statistics.
hostinfo – host information
hostinfo
There are no options.

DISPLAY
Mach kernel version:
        The version string compiled into the kernel executing on the host system.
Processor Configuration:
        The maximum possible processors for which the kernel is configured, followed by the number of physical and logical processors available. Note: on Intel architectures, physical processors are referred to as cores, and logical processors are referred to as hardware threads; there may be multiple logical processors per core and multiple cores per processor package. This command does not report the number of processor packages.
Processor type:
        The host's processor type and subtype.
Processor active:
        A list of active processors on the host system. Active processors are members of a processor set and are ready to dispatch threads. On a single-processor system, the active processor is processor 0.
Primary memory available:
        The amount of physical memory that is configured for use on the host system.
Default processor set:
        Displays the number of tasks currently assigned to the host processor set, the number of threads currently assigned to the host processor set, and the number of processors included in the host processor set.
Load average:
        Measures the average number of threads in the run queue.
Mach factor:
        A variant of the load average which measures the processing resources available to a new thread. The Mach factor is based on the number of CPUs divided by (1 + the number of runnable threads), or the number of CPUs minus the number of runnable threads when the number of runnable threads is less than the number of CPUs. The closer the Mach factor value is to zero, the higher the load. On an idle system with a fixed number of active processors, the Mach factor will be equal to the number of CPUs.

SEE ALSO sysctl(8)

macOS October 30, 2003 macOS
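The Mach factor formula described above can be sketched directly. This is an illustration of the documented arithmetic, not hostinfo's actual implementation; `mach_factor` is a name invented here.

```shell
# Mach factor as described in the hostinfo DISPLAY section:
#   runnable < cpus:  cpus - runnable
#   otherwise:        cpus / (1 + runnable)
# Lower values mean higher load; an idle system reports ~cpus.
mach_factor() {
  awk -v c="$1" -v r="$2" 'BEGIN {
    if (r < c) printf "%.2f\n", c - r
    else       printf "%.2f\n", c / (1 + r)
  }'
}
mach_factor 4 0   # idle 4-CPU system
mach_factor 2 7   # heavily loaded 2-CPU system
```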
jcontrol
lpstat
lpstat displays status information about the current classes, jobs, and printers. When run with no arguments, lpstat will list active jobs queued by the current user.
lpstat - print cups status information
lpstat [ -E ] [ -H ] [ -U username ] [ -h hostname[:port] ] [ -l ] [ -W which-jobs ] [ -a [ destination(s) ] ] [ -c [ class(es) ] ] [ -d ] [ -e ] [ -o [ destination(s) ] ] [ -p [ printer(s) ] ] [ -r ] [ -R ] [ -s ] [ -t ] [ -u [ user(s) ] ] [ -v [ printer(s) ] ]
The lpstat command supports the following options:

-E                   Forces encryption when connecting to the server.
-H                   Shows the server hostname and port.
-R                   Shows the ranking of print jobs.
-U username          Specifies an alternate username.
-W which-jobs        Specifies which jobs to show, "completed" or "not-completed" (the default). This option must appear before the -o option and/or any printer names, otherwise the default ("not-completed") value will be used in the request to the scheduler.
-a [printer(s)]      Shows the accepting state of printer queues. If no printers are specified then all printers are listed.
-c [class(es)]       Shows the printer classes and the printers that belong to them. If no classes are specified then all classes are listed.
-d                   Shows the current default destination.
-e                   Shows all available destinations on the local network.
-h server[:port]     Specifies an alternate server.
-l                   Shows a long listing of printers, classes, or jobs.
-o [destination(s)]  Shows the jobs queued on the specified destinations. If no destinations are specified all jobs are shown.
-p [printer(s)]      Shows the printers and whether they are enabled for printing. If no printers are specified then all printers are listed.
-r                   Shows whether the CUPS server is running.
-s                   Shows a status summary, including the default destination, a list of classes and their member printers, and a list of printers and their associated devices. This is equivalent to using the -d, -c, and -v options.
-t                   Shows all status information. This is equivalent to using the -r, -d, -c, -v, -a, -p, and -o options.
-u [user(s)]         Shows a list of print jobs queued by the specified users. If no users are specified, lists the jobs queued by the current user.
-v [printer(s)]      Shows the printers and what device they are attached to. If no printers are specified then all printers are listed.

CONFORMING TO
Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", and "#". 
Also, printer and class names are not case-sensitive. The -h, -e, -E, -U, and -W options are unique to CUPS. The Solaris -f, -P, and -S options are silently ignored. SEE ALSO cancel(1), lp(1), lpq(1), lpr(1), lprm(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS lpstat(1)
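Since lpstat requires a CUPS server, the sketch below works from a captured sample of `lpstat -v` output instead of a live queue. The queue names and URIs are illustrative, not real printers; only the "device for NAME: URI" line format comes from the documentation of -v.

```shell
# Hypothetical sample of `lpstat -v` output (format: "device for NAME: URI").
sample='device for office_laser: ipp://printserver.local/printers/office_laser
device for labelmaker: usb://Dymo/LabelWriter'

# Extract just the queue names, e.g. to feed into lp -d or cancel.
queues() { printf '%s\n' "$1" | sed -n 's/^device for \([^:]*\):.*/\1/p'; }
queues "$sample"
```

On a real system you would pipe `lpstat -v` straight into the same sed expression.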
SetFile
Tools supporting Carbon development, including /usr/bin/SetFile, were deprecated with Xcode 6.

/usr/bin/SetFile is a tool to set the file attributes on files in an HFS+ directory. It attempts to be similar to the setfile command in MPW. It can apply rules to more than one file, with the options applying to all files listed.

Flags:

-P      Acts on a symlink file instead of on the file the symlink resolves to.

-a attributes
        Sets the file attribute bits, where attributes is a string of case-sensitive letters. Each letter corresponds to a file attribute: an uppercase letter indicates that the attribute bit is set (1), a lowercase letter indicates that it is not (0). Note: attributes not specified remain unchanged.
        A | a   Alias file
        B | b   Has bundle
        C | c   Custom icon (allowed on folders)
        D | d   Located on the desktop (allowed on folders)
        E | e   Extension is hidden (allowed on folders)
        I | i   Inited - Finder is aware of this file and has given it a location in a window (allowed on folders)
        L | l   Locked
        M | m   Shared (can run multiple times)
        N | n   File has no INIT resource
        S | s   System file (name locked)
        T | t   "Stationery Pad" file
        V | v   Invisible (allowed on folders)
        Z | z   Busy (allowed on folders)

-c creator
        Specifies the file's creator, where creator can be a string of four MacRoman characters, an empty string ('') designating a null creator, or a binary, decimal, octal, or hexadecimal number in standard notation (e.g. 0x52486368).

-d date
        Sets the creation date, where date is a string of the form: "mm/dd/[yy]yy [hh:mm[:ss] [AM | PM]]". Notes: Enclose the string in quotation marks if it contains spaces. The date must be in the Unix epoch, that is, between 1/1/1970 and 1/18/2038. If the year is provided as a two-digit year, it is assumed to be in the 21st century and must be from 00 (2000) through 38 (2038).

-m date
        Sets the modification date, where date is a string of the form given for -d above.

-t type
        Sets the file type, where type can be a string of four MacRoman characters, an empty string ('') designating a null type, or a binary, decimal, octal, or hexadecimal number in standard notation (e.g. 0x55455955).

RETURN VALUES
0       attributes set
1       syntax error
2       any other error

SEE ALSO GetFileInfo(1)
/usr/bin/SetFile – set attributes of files and directories (DEPRECATED)
/usr/bin/SetFile [-P] [-a attributes] [-c creator] [-d date] [-m date] [-t type] file ...
This command line sets the modification date of "myFile":

        SetFile -m "8/4/2001 16:13" myFile

Mac OS X January 4, 2009 Mac OS X
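Because the -a attribute letters are a fixed, case-sensitive alphabet, a script can sanity-check an attribute string before calling SetFile. The helper below is hypothetical (not part of the tool) and only validates against the letters documented above; SetFile itself runs only on macOS with the command-line tools installed.

```shell
# Hypothetical pre-flight check: accept only strings made of the
# documented SetFile attribute letters (ABCDEILMNSTVZ, either case).
valid_attrs() {
  case "$1" in
    *[!AaBbCcDdEeIiLlMmNnSsTtVvZz]*|'') return 1 ;;
    *) return 0 ;;
  esac
}
# On macOS (assumption: myFile exists):
#   valid_attrs "EV" && SetFile -a EV myFile   # hide extension, make invisible
```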
pod2html
Converts files from pod format (see perlpod) to HTML format.

ARGUMENTS
pod2html takes the following arguments:

--backlink, --nobacklink
        Turn =head1 directives into links pointing to the top of the HTML file. --nobacklink (the default behavior) does not create these backlinks.

--cachedir=name
        Specify which directory is used for storing cache. The default is the current working directory.

--css=URL
        Specify the URL of the cascading style sheet to link from the resulting HTML file. The default is no style sheet.

--flush
        Flush the cache.

--header, --noheader
        Create header and footer blocks containing the text of the "NAME" section. --noheader (the default behavior) does not create header or footer blocks.

--help
        Display the usage message.

--htmldir=name
        Sets the directory to which all cross references in the resulting HTML file will be relative. Not passing this causes all links to be absolute, since this is the value that tells Pod::Html the root of the documentation tree. Do not use this and --htmlroot in the same call to pod2html; they are mutually exclusive.

--htmlroot=URL
        Sets the base URL for the HTML files. When cross-references are made, the HTML root is prepended to the URL. Do not use this if relative links are desired: use --htmldir instead. Do not pass both this and --htmldir to pod2html; they are mutually exclusive.

--index
        Generate an index at the top of the HTML file (default behaviour).

--noindex
        Do not generate an index at the top of the HTML file.

--infile=name
        Specify the pod file to convert. Input is taken from STDIN if no infile is specified.

--outfile=name
        Specify the HTML file to create. Output goes to STDOUT if no outfile is specified.

--poderrors, --nopoderrors
        Include a "POD ERRORS" section in the outfile if there were any POD errors in the infile (default behaviour). --nopoderrors does not create this "POD ERRORS" section.

--podpath=name:...:name
        Specify which subdirectories of the podroot contain pod files whose HTML converted forms can be linked to in cross-references.

--podroot=name
        Specify the base directory for finding library pods.

--quiet, --noquiet
        Don't display mostly harmless warning messages. --noquiet (the default behavior) does display these mostly harmless warning messages (but this is not the same as "verbose" mode).

--recurse, --norecurse
        Recurse into subdirectories specified in podpath (default behaviour). --norecurse does not recurse into these subdirectories.

--title=title
        Specify the title of the resulting HTML file.

--verbose, --noverbose
        Display progress messages. --noverbose (the default behavior) does not display these progress messages.

AUTHOR Tom Christiansen, <tchrist@perl.com>.

BUGS See Pod::Html for a list of known bugs in the translator.

SEE ALSO perlpod, Pod::Html

COPYRIGHT This program is distributed under the Artistic License.

perl v5.38.2 2023-11-28 POD2HTML(1)
pod2html - convert .pod files to .html files
pod2html --help --htmldir=<name> --htmlroot=<URL> --infile=<name> --outfile=<name> --podpath=<name>:...:<name> --podroot=<name> --cachedir=<name> --flush --recurse --norecurse --quiet --noquiet --verbose --noverbose --index --noindex --backlink --nobacklink --header --noheader --poderrors --nopoderrors --css=<URL> --title=<name>
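A minimal .pod file exercises the options above. The snippet writes one (example.pod is a name chosen for this sketch); the conversion line is commented out since it assumes perl's pod2html is on the PATH.

```shell
# Write a tiny pod document with the sections pod2html cares about
# (NAME feeds --header; =head1 directives feed --backlink and the index).
cat > example.pod <<'EOF'
=head1 NAME

example - a tiny pod page

=head1 DESCRIPTION

One paragraph of body text.

=cut
EOF
# Assuming pod2html (bundled with perl) is installed:
#   pod2html --infile=example.pod --outfile=example.html \
#            --title="Example" --noindex --quiet
```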
tar
tar creates and manipulates streaming archive files. This implementation can extract from tar, pax, cpio, zip, jar, ar, xar, rpm, 7-zip, and ISO 9660 cdrom images and can create tar, pax, cpio, ar, zip, 7-zip, and shar archives. The first synopsis form shows a “bundled” option word. This usage is provided for compatibility with historical implementations. See COMPATIBILITY below for details. The other synopsis forms show the preferred usage. The first option to tar is a mode indicator from the following list: -c Create a new archive containing the specified items. The long option form is --create. -r Like -c, but new entries are appended to the archive. Note that this only works on uncompressed archives stored in regular files. The -f option is required. The long option form is --append. -t List archive contents to stdout. The long option form is --list. -u Like -r, but new entries are added only if they have a modification date newer than the corresponding entry in the archive. Note that this only works on uncompressed archives stored in regular files. The -f option is required. The long form is --update. -x Extract to disk from the archive. If a file with the same name appears more than once in the archive, each copy will be extracted, with later copies overwriting (replacing) earlier copies. The long option form is --extract. In -c, -r, or -u mode, each specified file or directory is added to the archive in the order specified on the command line. By default, the contents of each directory are also archived. In extract or list mode, the entire command line is read and parsed before the archive is opened. The pathnames or patterns on the command line indicate which items in the archive should be processed. Patterns are shell-style globbing patterns as documented in tcsh(1).
tar – manipulate tape archives
tar [bundled-flags ⟨args⟩] [⟨file⟩ | ⟨pattern⟩ ...] tar {-c} [options] [files | directories] tar {-r | -u} -f archive-file [options] [files | directories] tar {-t | -x} [options] [patterns]
Unless specifically stated otherwise, options are applicable in all operating modes. @archive (c and r modes only) The specified archive is opened and the entries in it will be appended to the current archive. As a simple example, tar -c -f - newfile @original.tar writes a new archive to standard output containing a file newfile and all of the entries from original.tar. In contrast, tar -c -f - newfile original.tar creates a new archive with only two entries. Similarly, tar -czf - --format pax @- reads an archive from standard input (whose format will be determined automatically) and converts it into a gzip-compressed pax-format archive on stdout. In this way, tar can be used to convert archives from one format to another. -a, --auto-compress (c mode only) Use the archive suffix to choose the format and compression. As a simple example, tar -a -cf archive.tgz source.c source.h creates a new archive with restricted pax format and gzip compression; tar -a -cf archive.tar.bz2.uu source.c source.h creates a new archive with restricted pax format, bzip2 compression, and uuencoding; tar -a -cf archive.zip source.c source.h creates a new archive in zip format; tar -a -jcf archive.tgz source.c source.h ignores the "-j" option and creates a new archive with restricted pax format and gzip compression; and tar -a -jcf archive.xxx source.c source.h, where the suffix is unknown or absent, creates a new archive with restricted pax format and bzip2 compression. --acls (c, r, u, x modes only) Archive or extract POSIX.1e or NFSv4 ACLs. This is the reverse of --no-acls and the default behavior in c, r, and u modes (except on Mac OS X) or if tar is run in x mode as root. On Mac OS X this option translates extended ACLs to NFSv4 ACLs. To store extended ACLs the --mac-metadata option is preferred. -B, --read-full-blocks Ignored for compatibility with other tar(1) implementations. 
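The suffix-driven behavior of -a can be seen directly. Both this (bsdtar) implementation and GNU tar honor --auto-compress, though the exact default archive format may differ between them.

```shell
# Work in a scratch directory so nothing is clobbered.
tmp=$(mktemp -d)
cd "$tmp"
mkdir demo && echo 'hello' > demo/source.c

# The .tar.gz suffix selects gzip compression automatically.
tar -a -cf archive.tar.gz demo/source.c

# -z on listing confirms the archive really is gzip-compressed.
tar -tzf archive.tar.gz
```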
-b blocksize, --block-size blocksize Specify the block size, in 512-byte records, for tape drive I/O. As a rule, this argument is only needed when reading from or writing to tape drives, and usually not even then as the default block size of 20 records (10240 bytes) is very common. -C directory, --cd directory, --directory directory In c and r mode, this changes the directory before adding the following files. In x mode, change directories after opening the archive but before extracting entries from the archive. --chroot (x mode only) chroot() to the current directory after processing any -C options and before extracting any files. --clear-nochange-fflags (x mode only) Before removing file system objects to replace them, clear platform-specific file attributes or file flags that might prevent removal. --exclude pattern Do not process files or directories that match the specified pattern. Note that exclusions take precedence over patterns or filenames specified on the command line. --exclude-vcs Do not process files or directories internally used by the version control systems ‘Arch’, ‘Bazaar’, ‘CVS’, ‘Darcs’, ‘Mercurial’, ‘RCS’, ‘SCCS’, ‘SVN’ and ‘git’. --fflags (c, r, u, x modes only) Archive or extract platform-specific file attributes or file flags. This is the reverse of --no-fflags and the default behavior in c, r, and u modes or if tar is run in x mode as root. --format format (c, r, u mode only) Use the specified format for the created archive. Supported formats include “cpio”, “pax”, “shar”, and “ustar”. Other formats may also be supported; see libarchive-formats(5) for more information about currently- supported formats. In r and u modes, when extending an existing archive, the format specified here must be compatible with the format of the existing archive on disk. -f file, --file file Read the archive from or write the archive to the specified file. The filename can be - for standard input or standard output. 
The default varies by system; on FreeBSD, the default is /dev/sa0; on Linux, the default is /dev/st0. --gid id Use the provided group id number. On extract, this overrides the group id in the archive; the group name in the archive will be ignored. On create, this overrides the group id read from disk; if --gname is not also specified, the group name will be set to match the group id. --gname name Use the provided group name. On extract, this overrides the group name in the archive; if the provided group name does not exist on the system, the group id (from the archive or from the --gid option) will be used instead. On create, this sets the group name that will be stored in the archive; the name will not be verified against the system group database. -H (c and r modes only) Symbolic links named on the command line will be followed; the target of the link will be archived, not the link itself. -h (c and r modes only) Synonym for -L. -I Synonym for -T. --help Show usage. --hfsCompression (x mode only) Mac OS X specific (v10.6 or later). Compress extracted regular files with HFS+ compression. --ignore-zeros An alias of --options read_concatenated_archives for compatibility with GNU tar. --include pattern Process only files or directories that match the specified pattern. Note that exclusions specified with --exclude take precedence over inclusions. If no inclusions are explicitly specified, all entries are processed by default. The --include option is especially useful when filtering archives. For example, the command tar -c -f new.tar --include='*foo*' @old.tgz creates a new archive new.tar containing only the entries from old.tgz containing the string ‘foo’. -J, --xz (c mode only) Compress the resulting archive with xz(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes XZ compression automatically when reading archives. -j, --bzip, --bzip2, --bunzip2 (c mode only) Compress the resulting archive with bzip2(1). 
In extract or list modes, this option is ignored. Note that this tar implementation recognizes bzip2 compression automatically when reading archives. -k, --keep-old-files (x mode only) Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. --keep-newer-files (x mode only) Do not overwrite existing files that are newer than the versions appearing in the archive being extracted. -L, --dereference (c and r modes only) All symbolic links will be followed. Normally, symbolic links are archived as such. With this option, the target of the link will be archived instead. -l, --check-links (c and r modes only) Issue a warning message unless all links to each file are archived. --lrzip (c mode only) Compress the resulting archive with lrzip(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes lrzip compression automatically when reading archives. --lz4 (c mode only) Compress the archive with lz4-compatible compression before writing it. In extract or list modes, this option is ignored. Note that this tar implementation recognizes lz4 compression automatically when reading archives. --zstd (c mode only) Compress the archive with zstd-compatible compression before writing it. In extract or list modes, this option is ignored. Note that this tar implementation recognizes zstd compression automatically when reading archives. --lzma (c mode only) Compress the resulting archive with the original LZMA algorithm. In extract or list modes, this option is ignored. Use of this option is discouraged and new archives should be created with --xz instead. Note that this tar implementation recognizes LZMA compression automatically when reading archives. --lzop (c mode only) Compress the resulting archive with lzop(1). In extract or list modes, this option is ignored. 
Note that this tar implementation recognizes LZO compression automatically when reading archives. -m, --modification-time (x mode only) Do not extract modification time. By default, the modification time is set to the time stored in the archive. --mac-metadata (c, r, u and x mode only) Mac OS X specific. Archive or extract extended ACLs and extended file attributes using copyfile(3) in AppleDouble format. This is the reverse of --no-mac-metadata and the default behavior in c, r, and u modes or if tar is run in x mode as root. -n, --norecurse, --no-recursion Do not operate recursively on the content of directories. --newer date (c, r, u modes only) Only include files and directories newer than the specified date. This compares ctime entries. --newer-mtime date (c, r, u modes only) Like --newer, except it compares mtime entries instead of ctime entries. --newer-than file (c, r, u modes only) Only include files and directories newer than the specified file. This compares ctime entries. --newer-mtime-than file (c, r, u modes only) Like --newer-than, except it compares mtime entries instead of ctime entries. --nodump (c and r modes only) Honor the nodump file flag by skipping this file. --nopreserveHFSCompression (x mode only) Mac OS X specific (v10.6 or later). Do not compress extracted regular files which were compressed with HFS+ compression before being archived. By default, compress the regular files again with HFS+ compression. --null (use with -I or -T) Filenames or patterns are separated by null characters, not by newlines. This is often used to read filenames output by the -print0 option to find(1). --no-acls (c, r, u, x modes only) Do not archive or extract POSIX.1e or NFSv4 ACLs. This is the reverse of --acls and the default behavior if tar is run as non-root in x mode (on Mac OS X as any user in c, r, u and x modes). --no-fflags (c, r, u, x modes only) Do not archive or extract file attributes or file flags. 
This is the reverse of --fflags and the default behavior if tar is run as non-root in x mode. --no-mac-metadata (x mode only) Mac OS X specific. Do not archive or extract ACLs and extended file attributes using copyfile(3) in AppleDouble format. This is the reverse of --mac-metadata and the default behavior if tar is run as non-root in x mode. --no-safe-writes (x mode only) Do not create temporary files and use rename(2) to replace the original ones. This is the reverse of --safe-writes. --no-same-owner (x mode only) Do not extract owner and group IDs. This is the reverse of --same-owner and the default behavior if tar is run as non-root. --no-same-permissions (x mode only) Do not extract full permissions (SGID, SUID, sticky bit, file attributes or file flags, extended file attributes and ACLs). This is the reverse of -p and the default behavior if tar is run as non-root. --no-xattrs (c, r, u, x modes only) Do not archive or extract extended file attributes. This is the reverse of --xattrs and the default behavior if tar is run as non-root in x mode. --numeric-owner This is equivalent to --uname "" --gname "". On extract, it causes user and group names in the archive to be ignored in favor of the numeric user and group ids. On create, it causes user and group names to not be stored in the archive. -O, --to-stdout (x, t modes only) In extract (-x) mode, files will be written to standard out rather than being extracted to disk. In list (-t) mode, the file listing will be written to stderr rather than the usual stdout. -o (x mode) Use the user and group of the user running the program rather than those specified in the archive. Note that this has no significance unless -p is specified, and the program is being run by the root user. In this case, the file modes and flags from the archive will be restored, but ACLs or owner information in the archive will be discarded. 
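The -O behavior in extract mode is handy for inspecting a single archive member without touching the disk, and works the same way in GNU tar and bsdtar:

```shell
# Build a one-file archive in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"
echo 'just one line' > note.txt
tar -cf notes.tar note.txt

# -x -O sends the member's contents to stdout instead of creating a file.
tar -xOf notes.tar note.txt
```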
-o (c, r, u mode) A synonym for --format ustar --older date (c, r, u modes only) Only include files and directories older than the specified date. This compares ctime entries. --older-mtime date (c, r, u modes only) Like --older, except it compares mtime entries instead of ctime entries. --older-than file (c, r, u modes only) Only include files and directories older than the specified file. This compares ctime entries. --older-mtime-than file (c, r, u modes only) Like --older-than, except it compares mtime entries instead of ctime entries. --one-file-system (c, r, and u modes) Do not cross mount points. --options options Select optional behaviors for particular modules. The argument is a text string containing comma-separated keywords and values. These are passed to the modules that handle particular formats to control how those formats will behave. Each option has one of the following forms: key=value The key will be set to the specified value in every module that supports it. Modules that do not support this key will ignore it. key The key will be enabled in every module that supports it. This is equivalent to key=1. !key The key will be disabled in every module that supports it. module:key=value, module:key, module:!key As above, but the corresponding key and value will be provided only to modules whose name matches module. The complete list of supported modules and keys for create and append modes is in archive_write_set_options(3) and for extract and list modes in archive_read_set_options(3). Examples of supported options: iso9660:joliet Support Joliet extensions. This is enabled by default, use !joliet or iso9660:!joliet to disable. iso9660:rockridge Support Rock Ridge extensions. This is enabled by default, use !rockridge or iso9660:!rockridge to disable. gzip:compression-level A decimal integer from 1 to 9 specifying the gzip compression level. gzip:timestamp Store timestamp. This is enabled by default, use !timestamp or gzip:!timestamp to disable. 
lrzip:compression=type Use type as compression method. Supported values are bzip2, gzip, lzo (ultra fast), and zpaq (best, extremely slow). lrzip:compression-level A decimal integer from 1 to 9 specifying the lrzip compression level. lz4:compression-level A decimal integer from 1 to 9 specifying the lzop compression level. lz4:stream-checksum Enable stream checksum. This is by default, use lz4:!stream-checksum to disable. lz4:block-checksum Enable block checksum (Disabled by default). lz4:block-size A decimal integer from 4 to 7 specifying the lz4 compression block size (7 is set by default). lz4:block-dependence Use the previous block of the block being compressed for a compression dictionary to improve compression ratio. zstd:compression-level A decimal integer specifying the zstd compression level. Supported values depend on the library version, common values are from 1 to 22. lzop:compression-level A decimal integer from 1 to 9 specifying the lzop compression level. xz:compression-level A decimal integer from 0 to 9 specifying the xz compression level. mtree:keyword The mtree writer module allows you to specify which mtree keywords will be included in the output. Supported keywords include: cksum, device, flags, gid, gname, indent, link, md5, mode, nlink, rmd160, sha1, sha256, sha384, sha512, size, time, uid, uname. The default is equivalent to: “device, flags, gid, gname, link, mode, nlink, size, time, type, uid, uname”. mtree:all Enables all of the above keywords. You can also use mtree:!all to disable all keywords. mtree:use-set Enable generation of /set lines in the output. mtree:indent Produce human-readable output by indenting options and splitting lines to fit into 80 columns. zip:compression=type Use type as compression method. Supported values are store (uncompressed) and deflate (gzip algorithm). zip:encryption Enable encryption using traditional zip encryption. zip:encryption=type Use type as encryption type. 
Supported values are zipcrypt (traditional zip encryption), aes128 (WinZip AES-128 encryption) and aes256 (WinZip AES-256 encryption). read_concatenated_archives Ignore zeroed blocks in the archive, which occurs when multiple tar archives have been concatenated together. Without this option, only the contents of the first concatenated archive would be read. This option is comparable to the -i, --ignore-zeros option of GNU tar. If a provided option is not supported by any module, that is a fatal error. -P, --absolute-paths Preserve pathnames. By default, absolute pathnames (those that begin with a / character) have the leading slash removed both when creating archives and extracting from them. Also, tar will refuse to extract archive entries whose pathnames contain .. or whose target directory would be altered by a symlink. This option suppresses these behaviors. -p, --insecure, --preserve-permissions (x mode only) Preserve file permissions. Attempt to restore the full permissions, including file modes, file attributes or file flags, extended file attributes and ACLs, if available, for each item extracted from the archive. This is the reverse of --no-same-permissions and the default if tar is being run as root. It can be partially overridden by also specifying --no-acls, --no-fflags, --no-mac-metadata or --no-xattrs. --passphrase passphrase The passphrase is used to extract or create an encrypted archive. Currently, zip is the only supported format that supports encryption. You shouldn't use this option unless you realize how insecure use of this option is. --posix (c, r, u mode only) Synonym for --format pax -q, --fast-read (x and t mode only) Extract or list only the first archive entry that matches each pattern or filename operand. Exit as soon as each specified pattern or filename has been matched. 
By default, the archive is always read to the very end, since there can be multiple entries with the same name and, by convention, later entries overwrite earlier entries. This option is provided as a performance optimization. -S (x mode only) Extract files as sparse files. For every block on disk, check first whether it contains only NULL bytes and, if so, seek over it. This works similarly to the conv=sparse option of dd. -s pattern Modify file or archive member names according to pattern. The pattern has the format /old/new/[ghHprRsS] where old is a basic regular expression, new is the replacement string of the matched part, and the optional trailing letters modify how the replacement is handled. If old is not matched, the pattern is skipped. Within new, ~ is substituted with the match, \1 to \9 with the content of the corresponding captured group. The optional trailing g specifies that matching should continue after the matched part and stop on the first unmatched pattern. The optional trailing s specifies that the pattern applies to the value of symbolic links. The optional trailing p specifies that after a successful substitution the original path name and the new path name should be printed to standard error. Optional trailing H, R, or S characters suppress substitutions for hardlink targets, regular filenames, or symlink targets, respectively. Optional trailing h, r, or s characters enable substitutions for hardlink targets, regular filenames, or symlink targets, respectively. The default is hrs which applies substitutions to all names. In particular, it is never necessary to specify h, r, or s. --safe-writes (x mode only) Extract files atomically. By default tar unlinks the original file with the same name as the extracted file (if it exists), and then creates it immediately under the same name and writes to it. For a short period of time, applications trying to access the file might not find it, or see incomplete results. 
If --safe-writes is enabled, tar first creates a unique temporary file, then writes the new contents to the temporary file, and finally renames the temporary file to its final name atomically using rename(2). This guarantees that an application accessing the file, will either see the old contents or the new contents at all times. --same-owner (x mode only) Extract owner and group IDs. This is the reverse of --no-same-owner and the default behavior if tar is run as root. --strip-components count Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after checking inclusion/exclusion patterns but before security checks. -T filename, --files-from filename In x or t mode, tar will read the list of names to be extracted from filename. In c mode, tar will read names to be archived from filename. The special name “-C” on a line by itself will cause the current directory to be changed to the directory specified on the following line. Names are terminated by newlines unless --null is specified. Note that --null also disables the special handling of lines containing “-C”. Note: If you are generating lists of files using find(1), you probably want to use -n as well. --totals (c, r, u modes only) After archiving all files, print a summary to stderr. -U, --unlink, --unlink-first (x mode only) Unlink files before creating them. This can be a minor performance optimization if most files already exist, but can make things slower if most files do not already exist. This flag also causes tar to remove intervening directory symlinks instead of reporting an error. See the SECURITY section below for more details. --uid id Use the provided user id number and ignore the user name from the archive. On create, if --uname is not also specified, the user name will be set to match the user id. --uname name Use the provided user name. 
On extract, this overrides the user name in the archive; if the provided user name does not exist on the system, it will be ignored and the user id (from the archive or from the --uid option) will be used instead. On create, this sets the user name that will be stored in the archive; the name is not verified against the system user database. --use-compress-program program Pipe the input (in x or t mode) or the output (in c mode) through program instead of using the builtin compression support. -v, --verbose Produce verbose output. In create and extract modes, tar will list each file name as it is read from or written to the archive. In list mode, tar will produce output similar to that of ls(1). An additional -v option will also provide ls-like details in create and extract mode. --version Print version of tar and libarchive, and exit. -w, --confirmation, --interactive Ask for confirmation for every action. -X filename, --exclude-from filename Read a list of exclusion patterns from the specified file. See --exclude for more information about the handling of exclusions. --xattrs (c, r, u, x modes only) Archive or extract extended file attributes. This is the reverse of --no-xattrs and the default behavior in c, r, and u modes or if tar is run in x mode as root. -y (c mode only) Compress the resulting archive with bzip2(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes bzip2 compression automatically when reading archives. -Z, --compress, --uncompress (c mode only) Compress the resulting archive with compress(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes compress compression automatically when reading archives. -z, --gunzip, --gzip (c mode only) Compress the resulting archive with gzip(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes gzip compression automatically when reading archives. 
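The --strip-components behavior described above can be sketched with a throwaway archive (the paths under /tmp are hypothetical; GNU tar accepts the same flag, which is what this sketch assumes):

```shell
# Build a small archive whose entries carry a leading "dist/" element,
# then drop that element on extraction with --strip-components.
mkdir -p /tmp/tar-demo/dist /tmp/tar-demo/out
echo 'hello' > /tmp/tar-demo/dist/readme.txt
tar -cf /tmp/tar-demo/pkg.tar -C /tmp/tar-demo dist
# "dist/readme.txt" is extracted as plain "readme.txt"; the bare "dist"
# entry loses all of its path elements and is silently skipped.
tar -xf /tmp/tar-demo/pkg.tar -C /tmp/tar-demo/out --strip-components 1
ls /tmp/tar-demo/out
```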
ENVIRONMENT
The following environment variables affect the execution of tar:
TAR_READER_OPTIONS
      The default options for format readers and compression readers. The --options option overrides this.
TAR_WRITER_OPTIONS
      The default options for format writers and compression writers. The --options option overrides this.
LANG  The locale to use. See environ(7) for more information.
TAPE  The default device. The -f option overrides this. Please see the description of the -f option above for more details.
TZ    The timezone to use when displaying dates. See environ(7) for more information.
EXIT STATUS
The tar utility exits 0 on success, and >0 if an error occurs.
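The exit-status convention is easy to observe from a shell (the filenames here are hypothetical):

```shell
# Success: archiving an existing file exits 0.
echo 'data' > /tmp/exit-demo.txt
tar -cf /tmp/exit-demo.tar -C /tmp exit-demo.txt
echo "create exited with $?"
# Failure: a nonexistent member yields an exit status greater than 0.
tar -cf /tmp/exit-demo-2.tar /no/such/file 2>/dev/null || echo "error exit was $?"
```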
The following creates a new archive called file.tar.gz that contains two files source.c and source.h:
      tar -czf file.tar.gz source.c source.h
To view a detailed table of contents for this archive:
      tar -tvf file.tar.gz
To extract all entries from the archive on the default tape drive:
      tar -x
To examine the contents of an ISO 9660 cdrom image:
      tar -tf image.iso
To move file hierarchies, invoke tar as
      tar -cf - -C srcdir . | tar -xpf - -C destdir
or more traditionally
      cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
In create mode, the list of files and directories to be archived can also include directory change instructions of the form -Cfoo/baz and archive inclusions of the form @archive-file. For example, the command line
      tar -c -f new.tar foo1 @old.tgz -C/tmp foo2
will create a new archive new.tar. tar will read the file foo1 from the current directory and add it to the output archive. It will then read each entry from old.tgz and add those entries to the output archive. Finally, it will switch to the /tmp directory and add foo2 to the output archive.
An input file in mtree(5) format can be used to create an output archive with arbitrary ownership, permissions, or names that differ from existing data on disk:
      $ cat input.mtree
      #mtree
      usr/bin uid=0 gid=0 mode=0755 type=dir
      usr/bin/ls uid=0 gid=0 mode=0755 type=file content=myls
      $ tar -cvf output.tar @input.mtree
The --newer and --newer-mtime switches accept a variety of common date and time specifications, including “12 Mar 2005 7:14:29pm”, “2005-03-12 19:14”, “5 minutes ago”, and “19:14 PST May 1”.
The --options argument can be used to control various details of archive generation or reading. For example, you can generate mtree output which only contains type, time, and uid keywords:
      tar -cf file.tar --format=mtree --options='!all,type,time,uid' dir
or you can set the compression level used by gzip or xz compression:
      tar -czf file.tar --options='compression-level=9'
For more details, see the explanation of the archive_read_set_options() and archive_write_set_options() API calls that are described in archive_read(3) and archive_write(3). COMPATIBILITY The bundled-arguments format is supported for compatibility with historic implementations. It consists of an initial word (with no leading - character) in which each character indicates an option. Arguments follow as separate words. The order of the arguments must match the order of the corresponding characters in the bundled command word. For example, tar tbf 32 file.tar specifies three flags t, b, and f. The b and f flags both require arguments, so there must be two additional items on the command line. The 32 is the argument to the b flag, and file.tar is the argument to the f flag. The mode options c, r, t, u, and x and the options b, f, l, m, o, v, and w comply with SUSv2. For maximum portability, scripts that invoke tar should use the bundled- argument format above, should limit themselves to the c, t, and x modes, and the b, f, m, v, and w options. Additional long options are provided to improve compatibility with other tar implementations. SECURITY Certain security issues are common to many archiving programs, including tar. In particular, carefully-crafted archives can request that tar extract files to locations outside of the target directory. This can potentially be used to cause unwitting users to overwrite files they did not intend to overwrite. If the archive is being extracted by the superuser, any file on the system can potentially be overwritten. There are three ways this can happen. Although tar has mechanisms to protect against each one, savvy users should be aware of the implications: • Archive entries can have absolute pathnames. By default, tar removes the leading / character from filenames before restoring them to guard against this problem. • Archive entries can have pathnames that include .. components. By default, tar will not extract files containing .. 
components in their pathname. • Archive entries can exploit symbolic links to restore files to other directories. An archive can restore a symbolic link to another directory, then use that link to restore a file into that directory. To guard against this, tar checks each extracted path for symlinks. If the final path element is a symlink, it will be removed and replaced with the archive entry. If -U is specified, any intermediate symlink will also be unconditionally removed. If neither -U nor -P is specified, tar will refuse to extract the entry. To protect yourself, you should be wary of any archives that come from untrusted sources. You should examine the contents of an archive with tar -tf filename before extraction. You should use the -k option to ensure that tar will not overwrite any existing files or the -U option to remove any pre- existing files. You should generally not extract archives while running with super-user privileges. Note that the -P option to tar disables the security checks above and allows you to extract an archive while preserving any absolute pathnames, .. components, or symlinks to other directories. SEE ALSO bzip2(1), compress(1), cpio(1), gzip(1), pax(1), shar(1), xz(1), libarchive(3), libarchive-formats(5), tar(5) STANDARDS There is no current POSIX standard for the tar command; it appeared in ISO/IEC 9945-1:1996 (“POSIX.1”) but was dropped from IEEE Std 1003.1-2001 (“POSIX.1”). The options supported by this implementation were developed by surveying a number of existing tar implementations as well as the old POSIX specification for tar and the current POSIX specification for pax. The ustar and pax interchange file formats are defined by IEEE Std 1003.1-2001 (“POSIX.1”) for the pax command. HISTORY A tar command appeared in Seventh Edition Unix, which was released in January, 1979. There have been numerous other implementations, many of which extended the file format. 
John Gilmore's pdtar public-domain implementation (circa November, 1987) was quite influential, and formed the basis of GNU tar. GNU tar was included as the standard system tar in FreeBSD beginning with FreeBSD 1.0. This is a complete re-implementation based on the libarchive(3) library. It was first released with FreeBSD 5.4 in May, 2005.
BUGS
This program follows ISO/IEC 9945-1:1996 (“POSIX.1”) for the definition of the -l option. Note that GNU tar prior to version 1.15 treated -l as a synonym for the --one-file-system option. The -C dir option may differ from historic implementations. All archive output is written in correctly-sized blocks, even if the output is being compressed. Whether or not the last output block is padded to a full block size varies depending on the format and the output device. For tar and cpio formats, the last block of output is padded to a full block size if the output is being written to standard output or to a character or block device such as a tape drive. If the output is being written to a regular file, the last block will not be padded. Many compressors, including gzip(1) and bzip2(1), complain about the null padding when decompressing an archive created by tar, although they still extract it correctly. The compression and decompression are implemented internally, so there may be insignificant differences between the compressed output generated by tar -czf - file and that generated by tar -cf - file | gzip. The default should be to read and write archives to the standard I/O paths, but tradition (and POSIX) dictates otherwise. The r and u modes require that the archive be uncompressed and located in a regular file on disk. Other archives can be modified using c mode with the @archive-file extension. To archive a file called @foo or -foo you must specify it as ./@foo or ./-foo, respectively. In create mode, a leading ./ is always removed. A leading / is stripped unless the -P option is specified.
There needs to be better support for file selection on both create and extract. There is not yet any support for multi-volume archives. Converting between dissimilar archive formats (such as tar and cpio) using the @- convention can cause hard link information to be lost. (This is a consequence of the incompatible ways that different archive formats store hardlink information.) macOS 14.5 January 31, 2020 macOS 14.5
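As a sketch of the inspect-before-extract practice recommended in the SECURITY section (the archive and paths are hypothetical; note that GNU tar's -k reports an error for an existing file rather than skipping it silently, hence the || true):

```shell
# Examine the archive's contents first, then extract with -k so that
# pre-existing files are never overwritten.
mkdir -p /tmp/sec-demo/src /tmp/sec-demo/out
echo 'from archive' > /tmp/sec-demo/src/notes.txt
tar -cf /tmp/sec-demo/a.tar -C /tmp/sec-demo/src notes.txt
tar -tf /tmp/sec-demo/a.tar                  # inspect before extracting
echo 'pre-existing' > /tmp/sec-demo/out/notes.txt
tar -xkf /tmp/sec-demo/a.tar -C /tmp/sec-demo/out 2>/dev/null || true
cat /tmp/sec-demo/out/notes.txt              # the pre-existing file survives
```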
uptime
The uptime utility displays the current time, the length of time the system has been up, the number of users, and the load average of the system over the last 1, 5, and 15 minutes.
uptime – show how long system has been running
uptime
null
$ uptime
11:23AM up 3:01, 8 users, load averages: 21.09, 15.43, 12.79
SEE ALSO
w(1)
HISTORY
The uptime command appeared in 3.0BSD.
macOS 14.5 August 18, 2020 macOS 14.5
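As a sketch, the three load-average figures can be isolated from uptime output with sed; the pattern below tolerates both the BSD "load averages:" and Linux "load average:" wording:

```shell
# Print just the 1-, 5-, and 15-minute load averages.
uptime | sed 's/.*load average[s]*: *//'
```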
pidpersec.d
This script prints the number of new processes created per second. Since this uses DTrace, only users with root privileges can run this command.
pidpersec.d - print new PIDs per sec. Uses DTrace.
pidpersec.d
null
Print PID statistics per second:
      # pidpersec.d
FIELDS
TIME     time, as a string
LASTPID  last PID created
PID/s    number of processes created per second
DOCUMENTATION
See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output.
EXIT
pidpersec.d will run until Ctrl-C is hit.
AUTHOR
Brendan Gregg [Sydney, Australia]
SEE ALSO
execsnoop(1M), dtrace(1M)
version 0.80 June 9, 2005 pidpersec.d(1m)
avmediainfo
avmediainfo is a tool for parsing and analyzing media files. It displays general information about an asset's tracks, metadata, format extensions, chunks, and samples, and it warns the user of any errors encountered while parsing the media file. <media_file> The media file to be analyzed. <options> Options that select which information about the asset to display.
avmediainfo – media analysis tool
avmediainfo <media_file> <options>
--formatextensions Displays format description extensions for each track. --samples Lists high level sample information for each track (Decode Time(stamp), Presentation Time(stamp), Duration, Offset, Size, Dependency) for QT/ISO movies. --chunks Lists high level chunk information (Index, Offset, Size, Sample Count, Sample Range, Start Time, Chunk Info) for QT/ISO movies. --interleave Shows the interleave by listing the chunks in offset order. --integeroffsets Displays all offsets as integers, instead of hex. --brief Displays a brief description of the movie. --help Show help. --metadata [metadataTypeOptions] Displays metadata information based on options provided. Supported metadataType options: asset Displays metadata for the asset. (Default, if no are provided). track Displays metadata for each track. chapter Displays the chapter metadata. all Displays all available metadata. --mediatype [mediatypeOptions] Restricts which tracks have chunk and/or sample information displayed based on options provided. Supported mediatype options: audio Displays info about audio tracks. video Displays info about video tracks. closedcaption Displays info about closedcaption tracks. metadata Displays info about temporal metadata tracks. subtitle Displays info about subtitle tracks. text Displays info about text tracks. timecode Displays info about timecode tracks.
Display all the samples and chunks for the video tracks in a media file:
      avmediainfo /tmp/myTestMovie.mov --samples --chunks --mediatype video
Show the interleaving order across chunks, and display all the track-level metadata for a media file:
      avmediainfo /tmp/myTestMovie.m4v --interleave --metadata track
HISTORY
The avmediainfo command first appeared in macOS 11.0.
macOS December 18, 2019 macOS
corelist5.30
See Module::CoreList for one.
corelist - a commandline frontend to Module::CoreList
corelist -v corelist [-a|-d] <ModuleName> | /<ModuleRegex>/ [<ModuleVersion>] ... corelist [-v <PerlVersion>] [ <ModuleName> | /<ModuleRegex>/ ] ... corelist [-r <PerlVersion>] ... corelist --utils [-d] <UtilityName> [<UtilityName>] ... corelist --utils -v <PerlVersion> corelist --feature <FeatureName> [<FeatureName>] ... corelist --diff PerlVersion PerlVersion corelist --upstream <ModuleName>
-a lists all versions of the given module (or the matching modules, in case you used a module regexp) in the perls Module::CoreList knows about. corelist -a Unicode Unicode was first released with perl v5.6.2 v5.6.2 3.0.1 v5.8.0 3.2.0 v5.8.1 4.0.0 v5.8.2 4.0.0 v5.8.3 4.0.0 v5.8.4 4.0.1 v5.8.5 4.0.1 v5.8.6 4.0.1 v5.8.7 4.1.0 v5.8.8 4.1.0 v5.8.9 5.1.0 v5.9.0 4.0.0 v5.9.1 4.0.0 v5.9.2 4.0.1 v5.9.3 4.1.0 v5.9.4 4.1.0 v5.9.5 5.0.0 v5.10.0 5.0.0 v5.10.1 5.1.0 v5.11.0 5.1.0 v5.11.1 5.1.0 v5.11.2 5.1.0 v5.11.3 5.2.0 v5.11.4 5.2.0 v5.11.5 5.2.0 v5.12.0 5.2.0 v5.12.1 5.2.0 v5.12.2 5.2.0 v5.12.3 5.2.0 v5.12.4 5.2.0 v5.13.0 5.2.0 v5.13.1 5.2.0 v5.13.2 5.2.0 v5.13.3 5.2.0 v5.13.4 5.2.0 v5.13.5 5.2.0 v5.13.6 5.2.0 v5.13.7 6.0.0 v5.13.8 6.0.0 v5.13.9 6.0.0 v5.13.10 6.0.0 v5.13.11 6.0.0 v5.14.0 6.0.0 v5.14.1 6.0.0 v5.15.0 6.0.0 -d finds the first perl version where a module has been released by date, and not by version number (as is the default). --diff Given two versions of perl, this prints a human-readable table of all module changes between the two. The output format may change in the future, and is meant for humans, not programs. For programs, use the Module::CoreList API. -? or -help help! help! help! to see more help, try --man. -man all of the help -v lists all of the perl release versions we got the CoreList for. If you pass a version argument (value of $], like 5.00503 or 5.008008), you get a list of all the modules and their respective versions. (If you have the "version" module, you can also use new- style version numbers, like 5.8.8.) In module filtering context, it can be used as Perl version filter. -r lists all of the perl releases and when they were released If you pass a perl version you get the release date for that version only. --utils lists the first version of perl each named utility program was released with May be used with -d to modify the first release criteria. 
If used with -v <version> then all utilities released with that version of perl are listed, and any utility programs named on the command line are ignored. --feature, -f lists the first version bundle of each named feature given --upstream, -u Shows if the given module is primarily maintained in perl core or on CPAN and bug tracker URL. As a special case, if you specify the module name "Unicode", you'll get the version number of the Unicode Character Database bundled with the requested perl versions.
$ corelist File::Spec File::Spec was first released with perl 5.005 $ corelist File::Spec 0.83 File::Spec 0.83 was released with perl 5.007003 $ corelist File::Spec 0.89 File::Spec 0.89 was not in CORE (or so I think) $ corelist File::Spec::Aliens File::Spec::Aliens was not in CORE (or so I think) $ corelist /IPC::Open/ IPC::Open2 was first released with perl 5 IPC::Open3 was first released with perl 5 $ corelist /MANIFEST/i ExtUtils::Manifest was first released with perl 5.001 $ corelist /Template/ /Template/ has no match in CORE (or so I think) $ corelist -v 5.8.8 B B 1.09_01 $ corelist -v 5.8.8 /^B::/ B::Asmdata 1.01 B::Assembler 0.07 B::Bblock 1.02_01 B::Bytecode 1.01_01 B::C 1.04_01 B::CC 1.00_01 B::Concise 0.66 B::Debug 1.02_01 B::Deparse 0.71 B::Disassembler 1.05 B::Lint 1.03 B::O 1.00 B::Showlex 1.02 B::Stackobj 1.00 B::Stash 1.00 B::Terse 1.03_01 B::Xref 1.01 COPYRIGHT Copyright (c) 2002-2007 by D.H. aka PodMaster Currently maintained by the perl 5 porters <perl5-porters@perl.org>. This program is distributed under the same terms as perl itself. See http://perl.org/ or http://cpan.org/ for more info on that. perl v5.30.3 2024-04-13 CORELIST(1)
cvcp
cvcp provides a high-speed, multi-threaded copy mechanism for copying directories onto and off of an Xsan volume. The utility uses IO strategies and multi-threading techniques that exploit the Xsan IO model. cvcp can work in several modes: directory-to-directory copies of regular files; directory copies of regular files to a Vtape virtual sub-directory; and single file-to-file copies. In terms of functionality for regular files, cvcp is much like the tar(1) utility. However, when copying a directory to a Vtape virtual directory, cvcp can rename and renumber the source images as they are being transferred. The files in the Source directory must have a decipherable numeric sequence embedded in their names. The cvcp utility was written to provide high-performance data movement; therefore, unlike utilities such as rsync, it does not write data to temporary files or manipulate the target files' modification times to allow recovery of partially-copied files when interrupted. Because of this, cvcp may leave partially-copied files if interrupted by signals such as SIGINT, SIGTERM, or SIGHUP. Partially-copied target files will be of the same size as source files; however, the data will be only partially copied into them.
cvcp - Xsan Copy Utility
cvcp [options] Source Destination
The Source parameter determines whether to copy a single file or use a directory scan. Source must be a directory or file name. Using cvcp for directory copies is best accomplished by cd'ing to the Source directory and using the dot (.) as the Source. This has been shown to improve performance since fewer paths are searched in the directory tree scan. The Destination parameter determines the target file or directory. USAGE -a Archive mode. Preserve the original permissions, owner/group, modification times and links. This is the same as options w, x, y and z. -A If specified, will turn off the pre-allocation feature. This feature looks at the size of the source file and then makes an ALLOCSPACE call to the file system. This pre-allocation is a performance advantage as the file will only contain a single extent. It also promotes volume space savings since files that are dynamically expanded do so in a more coarse manner. Up to 30% savings in physical disk space can be seen using the pre- allocation feature. NOTE: Non-Xsan file systems that do not support pre-allocation will turn pre-allocation off when writing. The default is to have the pre-allocation feature on. -b buffers Set the number of IO buffers to buffers. The default is two times the number of copy threads started(see the -t option). Experimenting with other values between 1 and 2 times the number of copy threads may yield performance improvements. -B When specified, this option disables the bulk create optimization. By default, this optimization is used in certain cases to improve performance. In some circumstances, the use of bulk create can cause cvcp to return errors if the destination file system is StorNext and the FSM process exits ungracefully while a copy is in progress. The use of the -B option avoids this potentiality at the cost of performance. 
The effect on performance will depend on whether bulk create is being disabled for other reasons as well as the size of the files with the impact being more observable when small files are copied. -c When specified, if cvcp fails to copy a file it reports an error and continues. -d Changes directory-to-directory mode to work more like cp -R. Without -d, cvcp copies the files and sub-directories under Source to the Destination directory. With -d, cvcp first creates a sub-directory called Source in the Destination directory, then copies the files and sub-directories under Source to that new sub-directory. -k buffer_size Set the IO buffer size to buffer_size bytes. The default buffer size is 4MB. -l If set, when in directory to directory mode, follow symbolic links instead of copying the symbolic link. -n If set, do not recurse into any sub-directories. -p source_prefix If set, only copy files whose beginning file name characters match source_prefix. The matching test only checks starting at character one. -s The -s option forces allocations to line up on the beginning block modulus of the storage pool. This can help performance in situations where the I/O size perfectly spans the width of the storage pool's disks. -t num_threads Set the number of copy threads to num_threads. The default is 4 copy threads. This option may have a significant impact on speed and resource consumption. The total copy buffer pool size is calculated by multiplying the number of buffers(-b) by the buffer size(-k). Experimenting with the -t option along with the -b and -k options are encouraged. -u Update only. If set, copies only when the source file is newer than the destination file or the destination file does not exist. The file modification time check uses a granularity of one second on Windows and microseconds on other platforms. This makes it possible for a slightly newer source file to not be copied over an older destination file even though -u is used. 
-u cannot be used with tar files. -v Be verbose about the files being copied. May be specified twice for extreme verbosity. -w If set, when in file to file mode, copy a symbolic link instead of following the link. -x If set, ignore umask(1) and retain original permissions from the source file. If the super-user, set sticky and setuid/gid bits as well. -y If set, preserve ownership and group information if possible. -z If set, retain original modification times.
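As a sanity check on the buffer-pool arithmetic above (total pool = number of buffers from -b times the buffer size from -k), the defaults described in the text (4 copy threads, twice as many buffers, 4MB buffers) work out to a 32MB pool:

```shell
# Defaults assumed from the text: -t 4 threads, -b = 2 x threads,
# -k = 4 MB per buffer; the pool is simply buffers x buffer size.
threads=4
buffers=$((2 * threads))              # default -b
buffer_size=$((4 * 1024 * 1024))      # default -k, in bytes
pool=$((buffers * buffer_size))
echo "buffer pool: $pool bytes"       # 33554432 bytes = 32 MB
```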
Copy directory abc and its sub-directories to directory /usr/clips/foo. This copy will use the default number of copy threads and buffers. The total buffer pool size will be 32MB (8 buffers @ 4MB each). Retain all permissions and ownerships. Show all files being copied.
      rock% cvcp -vxy abc /usr/clips/foo
Copy the same directory the same way, but only those files that start with mumblypeg.
      rock# cvcp -vxy -p mumblypeg abc /usr/clips/foo
Copy a single file def to the directory /usr/clips/foo/
      rock# cvcp def /usr/clips/foo
Copy a file sequence in the current directory prefixed with secta. Place the files into the Vtape /usr/clips/n8 yuv sub-directory. Use the verbose option.
      rock% cvcp -v -p secta . /usr/clips/n8/yuv
CVCP TUNING
cvcp can be tuned to improve performance and resource utilization. By adjusting the -t, -k and -b options cvcp can be optimized for any number of different environments.
-t num_threads
      Increasing the number of copy threads will increase the number of concurrent copies. This option is useful when copying large directory structures. Single file copies are not affected by the number of copy threads.
-b buffers
      The number of copy buffers should be set to a number between 1 and 3 times the number of copy threads. Increasing the number of copy buffers increases the amount of work that is queued up waiting for an available copy thread, but also increases resource consumption.
-k buffer_size
      The size of the copy buffer may be tuned to fit the I/O characteristics of a copy. If files smaller than 4MB are being copied, performance may be improved by reducing the size of copy buffers to more closely match the source file sizes.
NOTE: It is important to ensure that the resource consumption of cvcp is tuned to minimize the effects of system memory pressure. On systems with limited available physical memory, performance may be increased by reducing the resource consumption of cvcp.
SEE ALSO
cvfs(8)
Xsan File System May 2021 CVCP(1)
false
The false utility always returns with a non-zero exit code. Some shells may provide a builtin false command which is identical to this utility. Consult the builtin(1) manual page.
SEE ALSO
builtin(1), csh(1), sh(1), true(1)
STANDARDS
The false utility is expected to be IEEE Std 1003.2 (“POSIX.2”) compatible.
macOS 14.5 June 6, 1993 macOS 14.5
false – return false value
false
null
null
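A minimal illustration of false's non-zero exit code from a shell (the exact value is 1 on most systems, but only "non-zero" is guaranteed):

```shell
# false writes nothing; its only effect is a failing exit status,
# which is why the || branch runs here.
false || echo "false exited with status $?"
```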
zegrep
The grep utility searches any given input files, selecting lines that match one or more patterns. By default, a pattern matches an input line if the regular expression (RE) in the pattern matches the input line without its trailing newline. An empty expression matches every line. Each input line that matches at least one of the patterns is written to the standard output. grep is used for simple patterns and basic regular expressions (BREs); egrep can handle extended regular expressions (EREs). See re_format(7) for more information on regular expressions. fgrep is quicker than both grep and egrep, but can only handle fixed patterns (i.e., it does not interpret regular expressions). Patterns may consist of one or more lines, allowing any of the pattern lines to match a portion of the input. zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utilities. bzgrep, bzegrep, and bzfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the bzip2(1) compression utility. The following options are available: -A num, --after-context=num Print num lines of trailing context after each match. See also the -B and -C options. -a, --text Treat all files as ASCII text. Normally grep will simply print “Binary file ... matches” if files contain binary characters. Use of this option forces grep to output lines matching the specified pattern. -B num, --before-context=num Print num lines of leading context before each match. See also the -A and -C options. -b, --byte-offset The offset in bytes of a matched pattern is displayed in front of the respective matched line. -C num, --context=num Print num lines of leading and trailing context surrounding each match. See also the -A and -B options. -c, --count Only a count of selected lines is written to standard output. 
--colour=[when], --color=[when] Mark up the matching text with the expression stored in the GREP_COLOR environment variable. The possible values of when are “never”, “always” and “auto”. -D action, --devices=action Specify the demanded action for devices, FIFOs and sockets. The default action is “read”, which means, that they are read as if they were normal files. If the action is set to “skip”, devices are silently skipped. -d action, --directories=action Specify the demanded action for directories. It is “read” by default, which means that the directories are read in the same manner as normal files. Other possible values are “skip” to silently ignore the directories, and “recurse” to read them recursively, which has the same effect as the -R and -r option. -E, --extended-regexp Interpret pattern as an extended regular expression (i.e., force grep to behave as egrep). -e pattern, --regexp=pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’). --exclude pattern If specified, it excludes files matching the given filename pattern from the search. Note that --exclude and --include patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include pattern is specified, all files are searched that are not excluded. Patterns are matched to the full path specified, not only to the filename component. --exclude-dir pattern If -R is specified, it excludes directories matching the given filename pattern from the search. Note that --exclude-dir and --include-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include-dir pattern is specified, all directories are searched that are not excluded. 
-F, --fixed-strings Interpret pattern as a set of fixed strings (i.e., force grep to behave as fgrep). -f file, --file=file Read one or more newline separated patterns from file. Empty pattern lines match every input line. Newlines are not considered part of a pattern. If file is empty, nothing is matched. -G, --basic-regexp Interpret pattern as a basic regular expression (i.e., force grep to behave as traditional grep). -H Always print filename headers with output lines. -h, --no-filename Never print filename headers (i.e., filenames) with output lines. --help Print a brief help message. -I Ignore binary files. This option is equivalent to the “--binary-files=without-match” option. -i, --ignore-case Perform case insensitive matching. By default, grep is case sensitive. --include pattern If specified, only files matching the given filename pattern are searched. Note that --include and --exclude patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. Patterns are matched to the full path specified, not only to the filename component. --include-dir pattern If -R is specified, only directories matching the given filename pattern are searched. Note that --include-dir and --exclude-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. -J, --bz2decompress Decompress the bzip2(1) compressed file before looking for the text. -L, --files-without-match Only the names of files not containing selected lines are written to standard output. Pathnames are listed once per file searched. If the standard input is searched, the string “(standard input)” is written unless a --label is specified. -l, --files-with-matches Only the names of files containing selected lines are written to standard output. grep will only search a file until a match has been found, making searches potentially less expensive. Pathnames are listed once per file searched. 
If the standard input is searched, the string “(standard input)” is written unless a --label is specified. --label Label to use in place of “(standard input)” for a file name where a file name would normally be printed. This option applies to -H, -L, and -l. --mmap Use mmap(2) instead of read(2) to read input, which can result in better performance under some circumstances but can cause undefined behaviour. -M, --lzma Decompress the LZMA compressed file before looking for the text. -m num, --max-count=num Stop reading the file after num matches. -n, --line-number Each output line is preceded by its relative line number in the file, starting at line 1. The line number counter is reset for each file processed. This option is ignored if -c, -L, -l, or -q is specified. --null Prints a zero-byte after the file name. -O If -R is specified, follow symbolic links only if they were explicitly listed on the command line. The default is not to follow symbolic links. -o, --only-matching Prints only the matching part of the lines. -p If -R is specified, no symbolic links are followed. This is the default. -q, --quiet, --silent Quiet mode: suppress normal output. grep will only search a file until a match has been found, making searches potentially less expensive. -R, -r, --recursive Recursively search subdirectories listed. (i.e., force grep to behave as rgrep). -S If -R is specified, all symbolic links are followed. The default is not to follow symbolic links. -s, --no-messages Silent mode. Nonexistent and unreadable files are ignored (i.e., their error messages are suppressed). -U, --binary Search binary files, but do not attempt to print them. -u This option has no effect and is provided only for compatibility with GNU grep. -V, --version Display version information and exit. -v, --invert-match Selected lines are those not matching any of the specified patterns. 
-w, --word-regexp The expression is searched for as a word (as if surrounded by ‘[[:<:]]’ and ‘[[:>:]]’; see re_format(7)). This option has no effect if -x is also specified. -x, --line-regexp Only input lines selected against an entire fixed string or regular expression are considered to be matching lines. -y Equivalent to -i. Obsoleted. -z, --null-data Treat input and output data as sequences of lines terminated by a zero-byte instead of a newline. -X, --xz Decompress the xz(1) compressed file before looking for the text. -Z, --decompress Force grep to behave as zgrep. --binary-files=value Controls searching and printing of binary files. Options are: binary (default) Search binary files but do not print them. without-match Do not search binary files. text Treat all files as text. --line-buffered Force output to be line buffered. By default, output is line buffered when standard output is a terminal and block buffered otherwise. If no file arguments are specified, the standard input is used. Additionally, “-” may be used in place of a file name, anywhere that a file name is accepted, to read from standard input. This includes both -f and file arguments. ENVIRONMENT GREP_OPTIONS May be used to specify default options that will be placed at the beginning of the argument list. Backslash-escaping is not supported, unlike the behavior in GNU grep. EXIT STATUS The grep utility exits with one of the following values: 0 One or more lines were selected. 1 No lines were selected. >1 An error occurred.
grep, egrep, fgrep, rgrep, bzgrep, bzegrep, bzfgrep, zgrep, zegrep, zfgrep – file pattern searcher
grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C num] [-e pattern] [-f file] [--binary-files=value] [--color[=when]] [--colour[=when]] [--context=num] [--label] [--line-buffered] [--null] [pattern] [file ...]
null
- Find all occurrences of the pattern ‘patricia’ in a file: $ grep 'patricia' myfile - Same as above but looking only for complete words: $ grep -w 'patricia' myfile - Count occurrences of the exact pattern ‘FOO’: $ grep -c FOO myfile - Same as above but ignoring case: $ grep -c -i FOO myfile - Find all occurrences of the pattern ‘.Pp’ at the beginning of a line: $ grep '^\.Pp' myfile The apostrophes ensure the entire expression is evaluated by grep instead of by the user's shell. The caret ‘^’ matches the null string at the beginning of a line, and the ‘\’ escapes the ‘.’, which would otherwise match any character. - Find all lines in a file which do not contain the words ‘foo’ or ‘bar’: $ grep -v -e 'foo' -e 'bar' myfile - Peruse the file ‘calendar’ looking for either 19, 20, or 25 using extended regular expressions: $ egrep '19|20|25' calendar - Show matching lines and the name of the ‘*.h’ files which contain the pattern ‘FIXME’. Do the search recursively from the /usr/src/sys/arm directory: $ grep -H -R FIXME --include="*.h" /usr/src/sys/arm/ - Same as above but show only the name of the matching file: $ grep -l -R FIXME --include="*.h" /usr/src/sys/arm/ - Show lines containing the text ‘foo’. The matching part of the output is colored and every line is prefixed with the line number and the offset in the file for those lines that matched. 
$ grep -b --colour -n foo myfile - Show lines that match the extended regular expression patterns read from the standard input: $ echo -e 'Free\nBSD\nAll.*reserved' | grep -E -f - myfile - Show lines from the output of the pciconf(8) command matching the specified extended regular expression along with three lines of leading context and one line of trailing context: $ pciconf -lv | grep -B3 -A1 -E 'class.*=.*storage' - Suppress any output and use the exit status to show an appropriate message: $ grep -q foo myfile && echo File matches SEE ALSO bzip2(1), compress(1), ed(1), ex(1), gzip(1), sed(1), xz(1), zgrep(1), re_format(7) STANDARDS The grep utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-AaBbCDdGHhILmopRSUVw] are extensions to that specification, and the behaviour of the -f flag when used with an empty pattern file is left undefined. All long options are provided for compatibility with GNU versions of this utility. Historic versions of the grep utility also supported the flags [-ruy]. This implementation supports those options; however, their use is strongly discouraged. HISTORY The grep command first appeared in Version 6 AT&T UNIX. BUGS The grep utility does not normalize Unicode input, so a pattern containing composed characters will not match decomposed input, and vice versa. macOS 14.5 November 10, 2021 macOS 14.5
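The -l and --null options documented above combine naturally with xargs -0 when matching filenames may contain spaces. A minimal sketch (the directory and filenames are made up for illustration; this works with both this grep and GNU grep):

```shell
dir=$(mktemp -d)
printf 'needle\n' > "$dir/file one.txt"   # note the embedded space
printf 'hay\n'    > "$dir/file two.txt"
# -l prints only the names of matching files; --null terminates each
# name with a zero-byte, which xargs -0 splits on, so the embedded
# space survives the pipeline intact.
grep -rl --null needle "$dir" | xargs -0 -n1 basename
# prints: file one.txt
```

Without --null, a filename containing whitespace or a newline would be split into several bogus arguments by xargs.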
memory_pressure
A tool to apply real or simulated memory pressure on the system SEE ALSO vm_stat(1) macOS March 7, 2013 macOS
memory_pressure – Tool to apply real or simulated memory pressure on the system
memory_pressure [-l level] | [-p percent_free] | [-S -l level] | [-s sleep_seconds]
-l <level> Apply real or simulated memory pressure (if specified alongside the simulate argument) on the system until low memory notifications corresponding to <level> are generated. Supported values are "warn" and "critical". -p <percent_free> Allocate memory until the available memory in the system is <percent_free> of total memory. If the percentage of available memory to total memory on the system drops, the tool will free memory until either the desired percentage is achieved or it runs out of memory to free. -S Simulate memory pressure on the system by artificially placing it at the "warn" or "critical" level for <sleep_seconds> duration. -s <sleep_seconds> Duration to wait before allocating or freeing memory if applying real pressure. In case of simulating memory pressure, this is the duration the system will be maintained at an artificial memory level.
null
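The options above combine as follows. A few illustrative invocations (macOS only; the levels, percentage, and duration are arbitrary examples, and the tool may require elevated privileges on some configurations):

```shell
# Simulate a "warn"-level low-memory condition for 10 seconds
# without actually allocating memory (-S selects simulation):
memory_pressure -S -l warn -s 10

# Apply real pressure until "critical" notifications are generated:
memory_pressure -l critical

# Allocate memory until only 20% of total memory remains available:
memory_pressure -p 20
```

vm_stat(1) in another terminal is a convenient way to watch the effect of these runs.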
pp5.30
pp creates standalone executables from Perl programs, using the compressed packager provided by PAR, and dependency detection heuristics offered by Module::ScanDeps. Source files are compressed verbatim without compilation. You may think of pp as "perlcc that works without hassle". :-) A GUI interface is also available as the tkpp command. It does not provide the compilation-step acceleration provided by perlcc (however, see -f below for byte-compiled, source-hiding techniques), but makes up for it with better reliability, smaller executable size, and full retrieval of original source code. When a single input program is specified, the resulting executable will behave identically to that program. However, when multiple programs are packaged, the produced executable will run the one that has the same basename as $0 (i.e. the filename used to invoke it). If nothing matches, it dies with the error "Can't open perl script "$0"".
pp - PAR Packager
pp [ -ABCEFILMPTSVXacdefghilmnoprsuvxz ] [ parfile | scriptfile ]...
Options are available in a short form and a long form. For example, the three lines below are all equivalent: % pp -o output.exe input.pl % pp --output output.exe input.pl % pp --output=output.exe input.pl Since the command lines can become sufficiently long to reach the limits imposed by some shells, it is possible to have pp read some of its options from one or more text files. The basic usage is to just include an argument starting with an 'at' (@) sigil. This argument will be interpreted as a file to read options from. Mixing ordinary options and @file options is possible. This is implemented using the Getopt::ArgvFile module, so read its documentation for advanced usage. -a, --addfile=FILE|DIR Add an extra file into the package. If the file is a directory, recursively add all files inside that directory, with links turned into actual files. By default, files are placed under "/" inside the package with their original names. You may override this by appending the target filename after a ";", like this: % pp -a "old_filename.txt;new_filename.txt" % pp -a "old_dirname;new_dirname" You may specify "-a" multiple times. -A, --addlist=FILE Read a list of file/directory names from FILE, adding them into the package. Each line in FILE is taken as an argument to -a above. You may specify "-A" multiple times. -B, --bundle Bundle core modules in the resulting package. This option is enabled by default, except when "-p" or "-P" is specified. Since PAR version 0.953, this also strips any local paths from the list of module search paths @INC before running the contained script. -C, --clean Clean up temporary files extracted from the application at runtime. By default, these files are cached in the temporary directory; this allows the program to start up faster next time. -c, --compile Run "perl -c inputfile" to determine additional run-time dependencies. -cd, --cachedeps=FILE Use FILE to cache detected dependencies. Creates FILE unless present. 
This will speed up the scanning process on subsequent runs. -d, --dependent Reduce the executable size by not including a copy of perl interpreter. Executables built this way will need a separate perl5x.dll or libperl.so to function correctly. This option is only available if perl is built as a shared library. -e, --eval=STRING Package a one-liner, much the same as "perl -e '...'" -E, --evalfeature=STRING Behaves just like "-e", except that it implicitly enables all optional features (in the main compilation unit) with Perl 5.10 and later. See feature. -x, --execute Run "perl inputfile" to determine additional run-time dependencies. Using this option, pp may be able to detect the use of modules that can't be determined by static analysis of "inputfile". Examples are stuff loaded by run-time loaders like Module::Runtime or "plugin" loaders like Module::Loader. Note that which modules are detected depends on which parts of your program are exercised when running "inputfile". E.g. if your program immediately terminates when run as "perl inputfile" because it lacks mandatory arguments, then this option will probably have no effect. You may use --xargs to supply arguments in this case. --xargs=STRING If -x is given, splits the "STRING" using the function "shellwords" from Text::ParseWords and passes the result as @ARGV when running "perl inputfile". -X, --exclude=MODULE Exclude the given module from the dependency search path and from the package. If the given file is a zip or par or par executable, all the files in the given file (except MANIFEST, META.yml and script/*) will be excluded and the output file will "use" the given file at runtime. -f, --filter=FILTER Filter source script(s) with a PAR::Filter subclass. You may specify multiple such filters. If you wish to hide the source code from casual prying, this will do: % pp -f Bleach source.pl If you are more serious about hiding your source code, you should have a look at Steve Hay's PAR::Filter::Crypto module. 
Make sure you understand the Filter::Crypto caveats! -g, --gui Build an executable that does not have a console window. This option is ignored on non-MSWin32 platforms or when "-p" is specified. -h, --help Show basic usage information. -I, --lib=DIR Add the given directory to the perl module search path. May be specified multiple times. -l, --link=FILE|LIBRARY Add the given shared library (a.k.a. shared object or DLL) into the packed file. Also accepts names under library paths; i.e. "-l ncurses" means the same thing as "-l libncurses.so" or "-l /usr/local/lib/libncurses.so" in most Unixes. May be specified multiple times. -L, --log=FILE Log the output of packaging to a file rather than to stdout. -F, --modfilter=FILTER[=REGEX], Filter included perl module(s) with a PAR::Filter subclass. You may specify multiple such filters. By default, the PodStrip filter is applied. In case that causes trouble, you can turn this off by setting the environment variable "PAR_VERBATIM" to 1. Since PAR 0.958, you can use an optional regular expression (REGEX above) to select the files in the archive which should be filtered. Example: pp -o foo.exe -F Bleach=warnings\.pm$ foo.pl This creates a binary executable foo.exe from foo.pl packaging all files as usual except for files ending in "warnings.pm" which are filtered with PAR::Filter::Bleach. -M, --module=MODULE Add the specified module into the package, along with its dependencies. The following variants may be used to add whole module namespaces: -M Foo::** Add every module in the "Foo" namespace except "Foo" itself, i.e. add "Foo::Bar", "Foo::Bar::Quux" etc up to any depth. -M Foo::* Add every module at level 1 in the "Foo" namespace, i.e. add "Foo::Bar", but neither "Foo::Bar::Quux" nor "Foo". -M Foo:: Shorthand for "-M Foo -M Foo:**": every module in the "Foo" namespace including "Foo" itself. Instead of a module name, MODULE may also be specified as a filename relative to the @INC path, i.e. 
"-M Module/ScanDeps.pm" means the same thing as "-M Module::ScanDeps". If MODULE has an extension that is not ".pm"/".ix"/".al", it will not be scanned for dependencies, and will be placed under "/" instead of "/lib/" inside the PAR file. This use is deprecated -- consider using the -a option instead. You may specify "-M" multiple times. -m, --multiarch Build a multi-architecture PAR file. Implies -p. -n, --noscan Skip the default static scanning altogether, using run-time dependencies from -c or -x exclusively. -N, --namespace=NAMESPACE Add all modules in the namespace into the package, along with their dependencies. If "NAMESPACE" is something like "Foo::Bar" then this will add all modules "Foo/Bar/Quux.pm", "Foo/Bar/Fred/Barnie.pm" etc that can be located in your module search path. It mimics the behaviour of "plugin" loaders like Module::Loader. This is different from using "-M Foo::Bar::", as the latter insists on adding "Foo/Bar.pm" which might not exist in the above "plugin" scenario. You may specify "-N" multiple times. -o, --output=FILE File name for the final packaged executable. -p, --par Create PAR archives only; do not package to a standalone binary. -P, --perlscript Create stand-alone perl script; do not package to a standalone binary. -r, --run Run the resulting packaged script after packaging it. --reusable EXPERIMENTAL Make the packaged executable reusable for running arbitrary, external Perl scripts as if they were part of the package: pp -o myapp --reusable someapp.pl ./myapp --par-options --reuse otherapp.pl The second line will run otherapp.pl instead of someapp.pl. -S, --save Do not delete generated PAR file after packaging. -s, --sign Cryptographically sign the generated PAR or binary file using Module::Signature. -T, --tempcache Set the program unique part of the cache directory name that is used if the program is run without -C. If not set, a hash of the executable is used. 
When the program is run, its contents are extracted to a temporary directory. On Unix systems, this is commonly /tmp/par-USER/cache-XXXXXXX. USER is replaced by the name of the user running the program, but "spelled" in hex. XXXXXXX is either a hash of the executable or the value passed to the "-T" or "--tempcache" switch. -u, --unicode Package Unicode support (essentially utf8_heavy.pl and everything below the directory unicore in your perl library). This option exists because it is impossible to detect using static analysis if your program needs Unicode support at runtime. (Note: If your program contains "use utf8" this does not imply it needs Unicode support. It merely says that your program is written in UTF-8.) If your packed program exits with an error message like Can't locate utf8_heavy.pl in @INC (@INC contains: ...) try to pack it with "-u" (or use "-x"). -v, --verbose[=NUMBER] Increase verbosity of output; NUMBER is an integer from 1 to 3, 3 being the most verbose. Defaults to 1 if specified without an argument. Alternatively, -vv sets verbose level to 2, and -vvv sets it to 3. -V, --version Display the version number and copyrights of this program. -z, --compress=NUMBER Set zip compression level; NUMBER is an integer from 0 to 9, 0 = no compression, 9 = max compression. Defaults to 6 if -z is not used. ENVIRONMENT PP_OPTS Command-line options (switches). Switches in this variable are taken as if they were on every pp command line. NOTES Here are some recipes showing how to utilize pp to bundle source.pl with all its dependencies, on target machines with different expected settings: Stand-alone setup: To make a stand-alone executable, suitable for running on a machine that doesn't have perl installed: % pp -o packed.exe source.pl # makes packed.exe # Now, deploy 'packed.exe' to target machine... 
$ packed.exe # run it Perl interpreter only, without core modules: To make a packed .pl file including core modules, suitable for running on a machine that has a perl interpreter, but where you want to be sure of the versions of the core modules that your program uses: % pp -B -P -o packed.pl source.pl # makes packed.pl # Now, deploy 'packed.pl' to target machine... $ perl packed.pl # run it Perl with core modules installed: To make a packed .pl file without core modules, relying on the target machine's perl interpreter and its core libraries. This produces a significantly smaller file than the previous version: % pp -P -o packed.pl source.pl # makes packed.pl # Now, deploy 'packed.pl' to target machine... $ perl packed.pl # run it Perl with PAR.pm and its dependencies installed: Make a separate archive and executable that uses the archive. This relies upon the perl interpreter and libraries on the target machine. % pp -p source.pl # makes source.par % echo "use PAR 'source.par';" > packed.pl; % cat source.pl >> packed.pl; # makes packed.pl # Now, deploy 'source.par' and 'packed.pl' to target machine... $ perl packed.pl # run it, perl + core modules required Note that even if your perl was built with a shared library, the 'Stand-alone executable' above will not need a separate perl5x.dll or libperl.so to function correctly. But even in this case, the underlying system libraries such as libc must be compatible between the host and target machines. Use "--dependent" if you are willing to ship the shared library with the application, which can significantly reduce the executable size. SEE ALSO tkpp, par.pl, parl, perlcc PAR, PAR::Packer, Module::ScanDeps Getopt::Long, Getopt::ArgvFile ACKNOWLEDGMENTS Simon Cozens, Tom Christiansen and Edward Peschko for writing perlcc; this program tries to mimic its interface as closely as possible, and copies liberally from their code. Jan Dubois for writing the exetype.pl utility, which has been partially adapted into the "-g" flag. 
Mattia Barbon for providing the "myldr" binary loader code. Jeff Goff for suggesting the name pp. AUTHORS Audrey Tang <cpan@audreyt.org>, Steffen Mueller <smueller@cpan.org> Roderich Schupp <rschupp@cpan.org> You can write to the mailing list at <par@perl.org>, or send an empty mail to <par-subscribe@perl.org> to participate in the discussion. Please submit bug reports to <bug-par-packer@rt.cpan.org>. COPYRIGHT Copyright 2002-2009 by Audrey Tang <cpan@audreyt.org>. Neither this program nor the associated parl program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.30.3 2020-03-08 PP(1)
Note: When running on Microsoft Windows, the a.out below will be replaced by a.exe instead. % pp hello.pl # Pack 'hello.pl' into executable 'a.out' % pp -o hello hello.pl # Pack 'hello.pl' into executable 'hello' # (or 'hello.exe' on Win32) % pp -o foo foo.pl bar.pl # Pack 'foo.pl' and 'bar.pl' into 'foo' % ./foo # Run 'foo.pl' inside 'foo' % mv foo bar; ./bar # Run 'bar.pl' inside 'foo' % mv bar baz; ./baz # Error: Can't open perl script "baz" % pp -p file # Creates a PAR file, 'a.par' % pp -o hello a.par # Pack 'a.par' to executable 'hello' % pp -S -o hello file # Combine the two steps above % pp -p -o out.par file # Creates 'out.par' from 'file' % pp -B -p -o out.par file # same as above, but bundles core modules # and removes any local paths from @INC % pp -P -o out.pl file # Creates 'out.pl' from 'file' % pp -B -P -o out.pl file # same as above, but bundles core modules # and removes any local paths from @INC # (-B is assumed when making executables) % pp -e "print 123" # Pack a one-liner into 'a.out' % pp -p -e "print 123" # Creates a PAR file 'a.par' % pp -P -e "print 123" # Creates a perl script 'a.pl' % pp -c hello # Check dependencies from "perl -c hello" % pp -x hello # Check dependencies from "perl hello" % pp -n -x hello # same as above, but skips static scanning % pp -I /foo hello # Extra include paths % pp -M Foo::Bar hello # Extra modules in the include path % pp -M abbrev.pl hello # Extra libraries in the include path % pp -X Foo::Bar hello # Exclude modules % pp -a data.txt hello # Additional data files % pp -r hello # Pack 'hello' into 'a.out', runs 'a.out' % pp -r hello a b c # Pack 'hello' into 'a.out', runs 'a.out' # with arguments 'a b c' % pp hello --log=c # Pack 'hello' into 'a.out', logs # messages into 'c' # Pack 'hello' into a console-less 'out.exe' (Win32 only) % pp --gui -o out.exe hello % pp @file hello.pl # Pack 'hello.pl' but read _additional_ # options from file 'file'
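The @file mechanism described under OPTIONS keeps long pp command lines manageable. A minimal sketch (pp itself is not invoked here; the option file name and the switches stored in it are illustrative):

```shell
# Store frequently used pp switches in a plain text file, one per line.
cat > pp_common_opts <<'EOF'
--compile
--compress=9
EOF
# Any pp argument beginning with '@' names a file to read options from,
# and @file arguments may be freely mixed with ordinary options:
#   pp @pp_common_opts -o hello hello.pl
cat pp_common_opts
```

This is handled by Getopt::ArgvFile, so its documentation covers advanced uses such as nested option files.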
uncompress
The compress utility reduces the size of files using adaptive Lempel-Ziv coding. Each file is renamed to the same name plus the extension .Z. A file argument with a .Z extension will be ignored, except that it will cause an error exit after other arguments are processed. If compression would not reduce the size of a file, the file is ignored. The uncompress utility restores compressed files to their original form, renaming the files by deleting the .Z extensions. A file specification need not include the file's .Z extension. If a file's name in its file system does not have a .Z extension, it will not be uncompressed and it will cause an error exit after other arguments are processed. If renaming the files would cause files to be overwritten and the standard input device is a terminal, the user is prompted (on the standard error output) for confirmation. If prompting is not possible or confirmation is not received, the files are not overwritten. As many of the modification time, access time, file flags, file mode, user ID, and group ID as allowed by permissions are retained in the new file. If no files are specified or a file argument is a single dash (‘-’), the standard input is compressed or uncompressed to the standard output. If either the input or output file is not a regular file, the checks for reduction in size and file overwriting are not performed, the input file is not removed, and the attributes of the input file are not retained in the output file. The options are as follows: -b bits The code size (see below) is limited to bits, which must be in the range 9..16. The default is 16. -c Compressed or uncompressed output is written to the standard output. No files are modified. The -v option is ignored. Compression is attempted even if the results will be larger than the original. -f Files are overwritten without prompting for confirmation. Also, for compress, files are compressed even if they are not actually reduced in size. 
-v Print the percentage reduction of each file. Ignored by uncompress or if the -c option is also used. The compress utility uses a modified Lempel-Ziv algorithm. Common substrings in the file are first replaced by 9-bit codes 257 and up. When code 512 is reached, the algorithm switches to 10-bit codes and continues to use more bits until the limit specified by the -b option or its default is reached. After the limit is reached, compress periodically checks the compression ratio. If it is increasing, compress continues to use the existing code dictionary. However, if the compression ratio decreases, compress discards the table of substrings and rebuilds it from scratch. This allows the algorithm to adapt to the next "block" of the file. The -b option is unavailable for uncompress since the bits parameter specified during compression is encoded within the output, along with a magic number to ensure that neither decompression of random data nor recompression of compressed data is attempted. The amount of compression obtained depends on the size of the input, the number of bits per code, and the distribution of common substrings. Typically, text such as source code or English is reduced by 50-60%. Compression is generally much better than that achieved by Huffman coding (as used in the historical command pack), or adaptive Huffman coding (as used in the historical command compact), and takes less time to compute. If file is a soft or hard link, compress will replace it with a compressed copy of the file pointed to by the link. The link's target file is left uncompressed. EXIT STATUS The compress and uncompress utilities exit 0 on success, and >0 if an error occurs. The compress utility exits 2 if attempting to compress a file would not reduce its size, the -f option was not specified, and no other error occurs.
compress, uncompress – compress and expand data
compress [-fv] [-b bits] [file ...] compress -c [-b bits] [file] uncompress [-fv] [file ...] uncompress -c [file ...]
null
Create a file test_file with a single line of text: echo "This is a test" > test_file Try to reduce the size of the file using a 10-bit code and show the exit status: $ compress -b 10 test_file $ echo $? 2 Try to compress the file and show compression percentage: $ compress -v test_file test_file: file would grow; left unmodified Same as above but forcing compression: $ compress -f -v test_file test_file.Z: 79% expansion Compress and uncompress the string ‘hello’ on the fly: $ echo "hello" | compress | uncompress hello SEE ALSO gunzip(1), gzexe(1), gzip(1), zcat(1), zmore(1), znew(1) Welch, Terry A., “A Technique for High Performance Data Compression”, IEEE Computer, 17:6, pp. 8-19, June, 1984. STANDARDS The compress and uncompress utilities conform to IEEE Std 1003.1-2001 (“POSIX.1”). HISTORY The compress command appeared in 4.3BSD. BUGS The program does not handle links well and has no link-handling options. Some of these might be considered otherwise-undocumented features. compress: If the utility does not compress a file because doing so would not reduce its size, and a file of the same name except with an .Z extension exists, the named file is not really ignored as stated above; it causes a prompt to confirm the overwriting of the file with the extension. If the operation is confirmed, that file is deleted. uncompress: If an empty file is compressed (using -f), the resulting .Z file is also empty. That seems right, but if uncompress is then used on that file, an error will occur. Both utilities: If a ‘-’ argument is used and the utility prompts the user, the standard input is taken as the user's reply to the prompt. Both utilities: If the specified file does not exist, but a similarly- named one with (for compress) or without (for uncompress) a .Z extension does exist, the utility will waste the user's time by not immediately emitting an error message about the missing file and continuing. 
Instead, it first asks for confirmation to overwrite the existing file and then does not overwrite it. macOS 14.5 March 4, 2021 macOS 14.5
lwp-request
This program can be used to send requests to WWW servers and your local file system. The request content for POST and PUT methods is read from stdin. The content of the response is printed on stdout. Error messages are printed on stderr. The program returns a status value indicating the number of URLs that failed. The options are: -m <method> Set which method to use for the request. If this option is not used, then the method is derived from the name of the program. -f Force request through, even if the program believes that the method is illegal. The server might reject the request eventually. -b <uri> This URI will be used as the base URI for resolving all relative URIs given as argument. -t <timeout> Set the timeout value for the requests. The timeout is the amount of time that the program will wait for a response from the remote server before it fails. The default unit for the timeout value is seconds. You might append "m" or "h" to the timeout value to make it minutes or hours, respectively. The default timeout is '3m', i.e. 3 minutes. -i <time> Set the If-Modified-Since header in the request. If time is the name of a file, use the modification timestamp for this file. If time is not a file, it is parsed as a literal date. Take a look at HTTP::Date for recognized formats. -c <content-type> Set the Content-Type for the request. This option is only allowed for requests that take a content, i.e. POST and PUT. You can force methods to take content by using the "-f" option together with "-c". The default Content-Type for POST is "application/x-www-form-urlencoded". The default Content-type for the others is "text/plain". -p <proxy-url> Set the proxy to be used for the requests. The program also loads proxy settings from the environment. You can disable this with the "-P" option. -P Don't load proxy settings from environment. -H <header> Send this HTTP header with each request. 
You can specify several, e.g.: lwp-request \ -H 'Referer: http://other.url/' \ -H 'Host: somehost' \ http://this.url/ -C <username>:<password> Provide credentials for documents that are protected by Basic Authentication. If the document is protected and you did not specify the username and password with this option, then you will be prompted to provide these values. The following options control what is displayed by the program: -u Print request method and absolute URL as requests are made. -U Print request headers in addition to request method and absolute URL. -s Print response status code. This option is always on for HEAD requests. -S Print response status chain. This shows redirect and authorization requests that are handled by the library. -e Print response headers. This option is always on for HEAD requests. -E Print response status chain with full response headers. -d Do not print the content of the response. -o <format> Process HTML content in various ways before printing it. If the content type of the response is not HTML, then this option has no effect. The legal format values are: "text", "ps", "links", "html" and "dump". If you specify the "text" format then the HTML will be formatted as plain "latin1" text. If you specify the "ps" format then it will be formatted as Postscript. The "links" format will output all links found in the HTML document. Relative links will be expanded to absolute ones. The "html" format will reformat the HTML code and the "dump" format will just dump the HTML syntax tree. Note that the "HTML-Tree" distribution needs to be installed for this option to work. In addition the "HTML-Format" distribution needs to be installed for "-o text" or "-o ps" to work. -v Print the version number of the program and quit. -h Print usage message and quit. -a Set text (ASCII) mode for content input and output. If this option is not used, content input and output is done in binary mode. 
Because this program is implemented using the LWP library, it will only support the protocols that LWP supports. SEE ALSO lwp-mirror, LWP COPYRIGHT Copyright 1995-1999 Gisle Aas. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. AUTHOR Gisle Aas <gisle@aas.no> perl v5.34.0 2020-04-14 LWP-REQUEST(1)
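The -t timeout rules above (a bare number of seconds, optionally suffixed with "m" for minutes or "h" for hours) can be sketched as a small helper. parse_timeout is a hypothetical name used for illustration; it is not part of lwp-request:

```python
def parse_timeout(value: str) -> float:
    """Convert a timeout string such as '30', '3m', or '1h' to seconds.

    Mirrors the documented lwp-request -t rules: a bare number means
    seconds, 'm' means minutes, 'h' means hours. Illustrative sketch only.
    """
    units = {"m": 60, "h": 3600}
    if value and value[-1] in units:
        return float(value[:-1]) * units[value[-1]]
    return float(value)

# The documented default of '3m' works out to 180 seconds:
# parse_timeout("3m") -> 180.0
```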
lwp-request - Simple command line user agent
lwp-request [-afPuUsSedvhx] [-m method] [-b base URL] [-t timeout] [-i if-modified-since] [-c content-type] [-C credentials] [-p proxy-url] [-o format] url...
null
null
ncurses5.4-config
This is a shell script which simplifies configuring applications against a particular set of ncurses libraries.
ncurses5.4-config - helper script for ncurses libraries
ncurses5.4-config [options]
--prefix echos the package-prefix of ncurses --exec-prefix echos the executable-prefix of ncurses --cflags echos the C compiler flags needed to compile with ncurses --libs echos the libraries needed to link with ncurses --version echos the release+patchdate version of ncurses --abi-version echos the ABI version of ncurses --mouse-version echos the mouse-interface version of ncurses --bindir echos the directory containing ncurses programs --datadir echos the directory containing ncurses data --includedir echos the directory containing ncurses header files --libdir echos the directory containing ncurses libraries --mandir echos the directory containing ncurses manpages --terminfo echos the $TERMINFO terminfo database path, e.g., /usr/local/share/terminfo --terminfo-dirs echos the $TERMINFO_DIRS directory list, e.g., /usr/local/share/terminfo --termpath echos the $TERMPATH termcap list, if support for termcap is configured. --help prints this message SEE ALSO curses(3X) This describes ncurses version 5.9 (patch 20110404). ncurses5.4-config(1)
null
sampleproc
This program samples which process is on each CPU, at a particular configurable rate. This can be used as an estimate for which process is consuming the most CPU time. Since this uses DTrace, only users with root privileges can run this command.
sampleproc - sample processes on the CPUs. Uses DTrace.
sampleproc [hertz]
null
Sample at 100 hertz, # sampleproc Sample at 400 hertz, # sampleproc 400 FIELDS PID process ID COMMAND command name COUNT number of samples PERCENT percent of CPU usage BASED ON /usr/demo/dtrace/prof.d DOCUMENTATION DTrace Guide "profile Provider" chapter (docs.sun.com) See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT sampleproc will sample until Ctrl-C is hit. SEE ALSO dtrace(1M) version 0.70 June 9, 2005 sampleproc(1m)
db_recover
The db_recover utility must be run after an unexpected application, Berkeley DB, or system failure to restore the database to a consistent state. All committed transactions are guaranteed to appear after db_recover has run, and all uncommitted transactions will be completely undone. The options are as follows: -c Perform catastrophic recovery instead of normal recovery. -e Retain the environment after running recovery. This option will rarely be used unless a DB_CONFIG file is present in the home directory. If a DB_CONFIG file is not present, then the regions will be created with default parameter values. -h Specify a home directory for the database environment; by default, the current working directory is used. -P Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments. -t Recover to the time specified rather than to the most current possible date. The timestamp argument should be in the form [[CC]YY]MMDDhhmm[.SS] where each pair of letters represents the following: CC The first two digits of the year (the century). YY The second two digits of the year. If "YY" is specified, but "CC" is not, a value for "YY" between 69 and 99 results in a "CC" value of 19. Otherwise, a "CC" value of 20 is used. MM The month of the year, from 1 to 12. DD The day of the month, from 1 to 31. hh The hour of the day, from 0 to 23. mm The minute of the hour, from 0 to 59. SS The second of the minute, from 0 to 61. If the "CC" and "YY" letter pairs are not specified, the values default to the current year. If the "SS" letter pair is not specified, the value defaults to 0. -V Write the library version number to the standard output, and exit. -v Run in verbose mode. 
In the case of catastrophic recovery, an archival copy -- or snapshot -- of all database files must be restored along with all of the log files written since the database file snapshot was made. (If disk space is a problem, log files may be referenced by symbolic links). For further information on creating a database snapshot, see Archival Procedures. For further information on performing recovery, see Recovery Procedures. If the failure was not catastrophic, the files present on the system at the time of failure are sufficient to perform recovery. If log files are missing, db_recover will identify the missing log file(s) and fail, in which case the missing log files need to be restored and recovery performed again. The db_recover utility uses a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_recover should always be given the chance to detach from the environment and exit gracefully. To cause db_recover to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). The db_recover utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open. SEE ALSO db_archive(1), db_checkpoint(1), db_deadlock(1), db_dump(1), db_load(1), db_printlog(1), db_stat(1), db_upgrade(1), db_verify(1) Darwin December 3, 2003 Darwin
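The -t timestamp format described above can be sketched in a short helper. parse_db_timestamp is a hypothetical name; this only illustrates the documented [[CC]YY]MMDDhhmm[.SS] rules and is not Berkeley DB's implementation:

```python
import datetime

def parse_db_timestamp(ts, now=None):
    """Parse a db_recover -t timestamp of the form [[CC]YY]MMDDhhmm[.SS].

    Illustrative sketch of the rules in the man page: a two-digit YY in
    69-99 implies century 19, otherwise 20; a missing year defaults to
    the current year; a missing .SS defaults to 0.
    """
    now = now or datetime.datetime.now()
    main, _, ss = ts.partition(".")
    second = int(ss) if ss else 0
    digits = main
    if len(digits) == 12:          # CCYYMMDDhhmm
        year = int(digits[:4])
        digits = digits[4:]
    elif len(digits) == 10:        # YYMMDDhhmm
        yy = int(digits[:2])
        digits = digits[2:]
        year = (1900 if 69 <= yy <= 99 else 2000) + yy
    else:                          # MMDDhhmm: default to the current year
        year = now.year
    month, day = int(digits[0:2]), int(digits[2:4])
    hour, minute = int(digits[4:6]), int(digits[6:8])
    return datetime.datetime(year, month, day, hour, minute, second)
```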
db_recover
db_recover [-ceVv] [-h home] [-P password] [-t [[CC]YY]MMDDhhmm[.SS]]
null
null
dyld_usage
The dyld_usage utility presents an ongoing display of information pertaining to dynamic linker activity within one or more processes. It requires root privileges due to the kernel tracing facility it uses to operate. By default dyld_usage monitors dyld activity in all processes except the running dyld_usage process, Terminal, telnetd, sshd, rlogind, tcsh, csh, and sh. These defaults can be overridden such that output is limited to include or exclude a list of processes specified on the command line. Processes may be specified either by file name or by process id. The output of dyld_usage is formatted according to the size of your window. A narrow window will display fewer columns of data. Use a wide window for maximum data display.
dyld_usage - report dynamic linker activity in real-time
dyld_usage [-e] [-f mode] [-j] [-h] [-t seconds] [-R rawfile [-S start_time] [-E end_time]] [pid | cmd [pid | cmd] ...]
dyld_usage supports the following options: -e Exclude the specified list of pids and commands from the sample, and exclude dyld_usage by default. -j Display output in JSON format. -h Display usage information and exit. -R Specify a raw trace file to process. -t Specify timeout in seconds (for use in automated tools). DISPLAY The data columns displayed are as follows: TIMESTAMP Time of day when call occurred. OPERATION The dyld operation triggered by the process. Typically these operations are triggered by process launch or via a dlopen or dlsym system call. System call entries include both the parameters to the system call and the system call's return code (e.g., 0 on success). TIME INTERVAL The elapsed time spent in the dynamic linker operation or system call. PROCESS NAME The process that generated the dynamic linker activity. If space allows, the thread id will be appended to the process name (i.e., Mail.nnn). SAMPLE USAGE sudo dyld_usage Mail dyld_usage will display dynamic link operations for all instances of processes named Mail. SEE ALSO dyld(1), fs_usage(1) AUTHOR Apple, Inc. COPYRIGHT 2000-2020, Apple, Inc. 2020-04-13 DYLD_USAGE(1)
null
nmedit
Nmedit changes the global symbols not listed in the list_file file of the -s list_file option to static symbols. Undefined symbols and common symbols are not affected and shouldn't be listed in list_file. For dynamic libraries symbols are turned into private extern symbols that are no longer external (rather than static symbols). This is done so that the references between modules of a dynamic library are resolved to the symbols in the dynamic library. Nmedit differs from strip(1) in that it also changes the symbolic debugging information (produced by the -g option to cc(1)) for the global symbols it changes to static symbols so that the resulting object can still be used with the debugger. Nmedit, like strip(1), is useful to limit the symbols for use with later linking. This allows control of the interface that the executable wants to provide to the objects that it will dynamically load, and it will not have to publish symbols that are not part of its interface. For example, an executable that wishes to allow only a subset of its global symbols but all of the shared libraries' globals to be used would have its symbol table edited with: % nmedit -s interface_symbols -A executable where the file interface_symbols would contain only those symbols from the executable that it wishes the objects loaded at runtime to have access to. Another example is an object that is made up of a number of other objects that will be loaded into an executable; it would be built and then have its symbol table edited with: % ld -o relocatable.o -r a.o b.o c.o % nmedit -s interface_symbols relocatable.o which would leave only the symbols listed in the file interface_symbols (and the undefined and common symbols) as global symbols in the object file. One or more of the following options is required by nmedit(1): -s filename Leave the symbol table entries for the global symbols listed in filename global but turn all other global symbols (except undefined and common symbols) into static symbols. 
The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. -R filename Change the symbol table entries for the global symbols listed in filename into static symbols. This file has the same format as the -s filename option above. If the -R filename option is specified without the -s filename option, then all symbols not listed in the -R filename option's filename are left as globals. If both a -R filename and a -s filename are given the symbols listed in the -R filename are basically ignored and only those symbols listed in the -s filename are saved. -p Change symbols to private externs instead of static. This is allowed as the only option to change all defined global symbols to private externs. The options to nmedit(1) are: -A Leave all global absolute symbols except those with a value of zero, and save objective-C class symbols as globals. This is intended for use with programs that load code at runtime and want the loaded code to use symbols from the shared libraries. -D When editing a static library, set the archive's SYMDEF file's user id, group id, date, and file mode to reasonable defaults. See the libtool(1) documentation for -D for more information. - Treat all remaining arguments as file names and not options. -arch arch_type Specifies the architecture, arch_type, of the file for nmedit(1) to process when the file is a universal file (see arch(3) for the currently known arch_types). The arch_type can be all to process all architectures in the file. The default is to process all architectures that are contained in the file. -o output Write the result into the file output. SEE ALSO strip(1), ld(1), libtool(1), arch(3) BUGS The changing of the symbolic debugging information by nmedit is not known to be totally correct and could cause the debugger to crash, get confused or produce incorrect information. Apple Inc. 
May 29, 2007 NMEDIT(1)
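The list_file format used by -s and -R (one symbol per line, surrounding white space ignored, lines starting with # and blank lines skipped) can be sketched as follows. read_symbol_list is a hypothetical helper for illustration only, not Apple's implementation:

```python
def read_symbol_list(text):
    """Parse an nmedit -s/-R list_file into a set of symbol names.

    Per the documented format: one symbol per line, leading and trailing
    white space is not part of the symbol name, '#' comment lines and
    whitespace-only lines are ignored. Illustrative sketch only.
    """
    symbols = set()
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        symbols.add(stripped)
    return symbols
```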
nmedit - change global symbols to local symbols
nmedit -s list_file [-R list_file] [-p] [-A] [-] [[-arch arch_type] ...] object_file ... [-o output]
null
null
filebyproc.d
filebyproc.d is a DTrace OneLiner to print file pathnames as they are opened, including the name of the process calling the open. A line will be printed regardless of whether the open actually succeeds. This is useful to learn which files applications are attempting to open, such as config files, database files, log files, etc. Docs/oneliners.txt and Docs/Examples/oneliners_examples.txt in the DTraceToolkit contain this as a oneliner that can be cut-n-paste to run. Since this uses DTrace, only users with root privileges can run this command.
filebyproc.d - snoop opens by process name. Uses DTrace.
filebyproc.d
null
This prints process names and file pathnames until Ctrl-C is hit. # filebyproc.d FIELDS CPU The CPU that received the event ID A DTrace probe ID for the event FUNCTION:NAME The DTrace probe name for the event remaining fields The first is the name of the process, the second is the file pathname. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT filebyproc.d will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO opensnoop(1M), dtrace(1M), truss(1) version 1.00 May 15, 2005 filebyproc.d(1m)
bison
Bison is a parser generator in the style of yacc(1). It should be upwardly compatible with input files designed for yacc. Input files should follow the yacc convention of ending in .y. Unlike yacc, the generated files do not have fixed names, but instead use the prefix of the input file. Moreover, if you need to put C++ code in the input file, you can end its name with a C++-like extension (.ypp or .y++); bison will then follow your extension to name the output file (.cpp or .c++). For instance, a grammar description file named parse.yxx would produce the generated parser in a file named parse.tab.cxx, instead of yacc's y.tab.c or old Bison version's parse.tab.c. This description of the options that can be given to bison is adapted from the node Invocation in the bison.texinfo manual, which should be taken as authoritative. Bison supports both traditional single-letter options and mnemonic long option names. Long option names are indicated with -- instead of -. Abbreviations for option names are allowed as long as they are unique. When a long option takes an argument, like --file-prefix, connect the option name and the argument with =.
bison - GNU Project parser generator (yacc replacement)
bison [ -b file-prefix ] [ --file-prefix=file-prefix ] [ -d ] [ --defines=defines-file ] [ -g ] [ --graph=graph-file ] [ -k ] [ --token-table ] [ -l ] [ --no-lines ] [ -n ] [ --no-parser ] [ -o outfile ] [ --output-file=outfile ] [ -p prefix ] [ --name-prefix=prefix ] [ -t ] [ --debug ] [ -v ] [ --verbose ] [ -V ] [ --version ] [ -y ] [ --yacc ] [ -h ] [ --help ] [ --fixed-output-files ] file yacc [ similar options and operands ]
-b file-prefix --file-prefix=file-prefix Specify a prefix to use for all bison output file names. The names are chosen as if the input file were named file-prefix.c. -d Write an extra output file containing macro definitions for the token type names defined in the grammar and the semantic value type YYSTYPE, as well as a few extern variable declarations. If the parser output file is named name.c then this file is named name.h. This output file is essential if you wish to put the definition of yylex in a separate source file, because yylex needs to be able to refer to token type codes and the variable yylval. --defines=defines-file The behavior of --defines is the same as that of the -d option. The only difference is that it takes an optional argument naming the output file. -g Output a VCG definition of the LALR(1) grammar automaton computed by Bison. If the grammar file is foo.y, the VCG output file will be foo.vcg. --graph=graph-file The behavior of --graph is the same as that of the -g option. The only difference is that it takes an optional argument naming the output graph file. -k --token-table This switch causes the name.tab.c output to include a list of token names in order by their token numbers; this is defined in the array yytname. Also generated are #defines for YYNTOKENS, YYNNTS, YYNRULES, and YYNSTATES. -l --no-lines Don't put any #line preprocessor commands in the parser file. Ordinarily bison puts them in the parser file so that the C compiler and debuggers will associate errors with your source file, the grammar file. This option causes them to associate errors with the parser file, treating it as an independent source file in its own right. -n --no-parser Do not generate the parser code into the output; generate only declarations. The generated name.tab.c file will have only constant declarations. In addition, a name.act file is generated containing a switch statement body containing all the translated actions. 
-o outfile --output-file=outfile Specify the name outfile for the parser file. The other output files' names are constructed from outfile as described under the -v and -d switches. -p prefix --name-prefix=prefix Rename the external symbols used in the parser so that they start with prefix instead of yy. The precise list of symbols renamed is yyparse, yylex, yyerror, yylval, yychar, and yydebug. For example, if you use -p c, the names become cparse, clex, and so on. -t --debug In the parser file, define the macro YYDEBUG to 1 if it is not already defined, so that the debugging facilities are compiled. -v --verbose Write an extra output file containing verbose descriptions of the parser states and what is done for each type of look-ahead token in that state. This file also describes all the conflicts, both those resolved by operator precedence and the unresolved ones. The file's name is made by removing .tab.c or .c from the parser output file name, and adding .output instead. Therefore, if the input file is foo.y, then the parser file is called foo.tab.c by default. As a consequence, the verbose output file is called foo.output. -V --version Print the version number of bison and exit. -h --help Print a summary of the options to bison and exit. -y --yacc --fixed-output-files Equivalent to -o y.tab.c; the parser output file is called y.tab.c, and the other outputs are called y.output and y.tab.h. The purpose of this switch is to imitate yacc's output file name conventions. Thus, the following shell script can substitute for yacc and is often installed as yacc: bison -y "$@" SEE ALSO yacc(1) The Bison Reference Manual, included as the file bison.texinfo in the bison source distribution. DIAGNOSTICS Self explanatory. local BISON(1)
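The default output-file naming rule described above (the .y part of the extension becomes .tab.c, with any C++-style suffix carried over, so parse.yxx becomes parse.tab.cxx) can be sketched as a small helper. default_parser_name is a hypothetical name used only for illustration:

```python
def default_parser_name(grammar):
    """Derive bison's default parser file name from the grammar file name.

    Sketch of the documented rule: the 'y' of the extension is replaced
    by 'tab.c', and any C++-style remainder (pp, ++, xx) is kept, so
    parse.y -> parse.tab.c and parse.yxx -> parse.tab.cxx.
    """
    stem, dot, ext = grammar.rpartition(".")
    if dot and ext.startswith("y"):
        return "%s.tab.c%s" % (stem, ext[1:])
    return grammar + ".tab.c"
```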
null
tclsh8.5
Tclsh is a shell-like application that reads Tcl commands from its standard input or from a file and evaluates them. If invoked with no arguments then it runs interactively, reading Tcl commands from standard input and printing command results and error messages to standard output. It runs until the exit command is invoked or until it reaches end-of-file on its standard input. If there exists a file .tclshrc (or tclshrc.tcl on the Windows platforms) in the home directory of the user, interactive tclsh evaluates the file as a Tcl script just before reading the first command from standard input. SCRIPT FILES If tclsh is invoked with arguments then the first few arguments specify the name of a script file, and, optionally, the encoding of the text data stored in that script file. Any additional arguments are made available to the script as variables (see below). Instead of reading commands from standard input tclsh will read Tcl commands from the named file; tclsh will exit when it reaches the end of the file. The end of the file may be marked either by the physical end of the medium, or by the character, “\032” (“\u001a”, control-Z). If this character is present in the file, the tclsh application will read text up to but not including the character. An application that requires this character in the file may safely encode it as “\032”, “\x1a”, or “\u001a”; or may generate it by use of commands such as format or binary. There is no automatic evaluation of .tclshrc when the name of a script file is presented on the tclsh command line, but the script file can always source it if desired. If you create a Tcl script in a file whose first line is #!/usr/bin/tclsh then you can invoke the script file directly from your shell if you mark the file as executable. This assumes that tclsh has been installed in the default location in /usr/bin; if it is installed somewhere else then you will have to modify the above line to match. Many UNIX systems do not allow the #! 
line to exceed about 30 characters in length, so be sure that the tclsh executable can be accessed with a short file name. An even better approach is to start your script files with the following three lines: #!/bin/sh # the next line restarts using tclsh \ exec tclsh "$0" "$@" This approach has three advantages over the approach in the previous paragraph. First, the location of the tclsh binary does not have to be hard-wired into the script: it can be anywhere in your shell search path. Second, it gets around the 30-character file name limit in the previous approach. Third, this approach will work even if tclsh is itself a shell script (this is done on some systems in order to handle multiple architectures or operating systems: the tclsh script selects one of several binaries to run). The three lines cause both sh and tclsh to process the script, but the exec is only executed by sh. sh processes the script first; it treats the second line as a comment and executes the third line. The exec statement causes the shell to stop processing and instead to start up tclsh to reprocess the entire script. When tclsh starts up, it treats all three lines as comments, since the backslash at the end of the second line causes the third line to be treated as part of the comment on the second line. You should note that it is also common practice to install tclsh with its version number as part of the name. This has the advantage of allowing multiple versions of Tcl to exist on the same system at once, but also the disadvantage of making it harder to write scripts that start up uniformly across different versions of Tcl. VARIABLES Tclsh sets the following Tcl variables: argc Contains a count of the number of arg arguments (0 if none), not including the name of the script file. argv Contains a Tcl list whose elements are the arg arguments, in order, or an empty string if there are no arg arguments. argv0 Contains fileName if it was specified. 
Otherwise, contains the name by which tclsh was invoked. tcl_interactive Contains 1 if tclsh is running interactively (no fileName was specified and standard input is a terminal-like device), 0 otherwise. PROMPTS When tclsh is invoked interactively it normally prompts for each command with “% ”. You can change the prompt by setting the variables tcl_prompt1 and tcl_prompt2. If variable tcl_prompt1 exists then it must consist of a Tcl script to output a prompt; instead of outputting a prompt tclsh will evaluate the script in tcl_prompt1. The variable tcl_prompt2 is used in a similar way when a newline is typed but the current command is not yet complete; if tcl_prompt2 is not set then no prompt is output for incomplete commands. STANDARD CHANNELS See Tcl_StandardChannels for more explanations. SEE ALSO encoding(n), fconfigure(n), tclvars(n) KEYWORDS argument, interpreter, prompt, script file, shell Tcl tclsh(1)
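The control-Z end-of-file rule described under SCRIPT FILES can be sketched as follows. read_tcl_script is a hypothetical helper illustrating only the documented behavior, not tclsh's implementation:

```python
def read_tcl_script(data):
    """Return the portion of a Tcl script file that tclsh will evaluate.

    As documented, a control-Z byte (0x1a, "\\032") marks end-of-file:
    text up to, but not including, that byte is read. Sketch only.
    """
    end = data.find(b"\x1a")
    return data if end == -1 else data[:end]
```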
tclsh - Simple shell containing Tcl interpreter
tclsh ?-encoding name? ?fileName arg arg ...? ______________________________________________________________________________
null
null
jdeps
The jdeps command shows the package-level or class-level dependencies of Java class files. The input class can be a path name to a .class file, a directory, a JAR file, or it can be a fully qualified class name to analyze all class files. The options determine the output. By default, the jdeps command writes the dependencies to the system output. The command can generate the dependencies in DOT language (see the -dotoutput option). POSSIBLE OPTIONS -? or -h or --help Prints the help message. -dotoutput dir or --dot-output dir Specifies the destination directory for DOT file output. If this option is specified, then the jdeps command generates one .dot file for each analyzed archive named archive-file-name.dot that lists the dependencies, and also a summary file named summary.dot that lists the dependencies among the archive files. -s or -summary Prints a dependency summary only. -v or -verbose Prints all class-level dependencies. This is equivalent to -verbose:class -filter:none -verbose:package Prints package-level dependencies excluding, by default, dependences within the same package. -verbose:class Prints class-level dependencies excluding, by default, dependencies within the same archive. -apionly or --api-only Restricts the analysis to APIs, for example, dependences from the signature of public and protected members of public classes including field type, method parameter types, returned type, and checked exception types. -jdkinternals or --jdk-internals Finds class-level dependences in the JDK internal APIs. By default, this option analyzes all classes specified in the --classpath option and input files unless you specified the -include option. You can't use this option with the -p, -e, and -s options. Warning: The JDK internal APIs are inaccessible. -cp path, -classpath path, or --class-path path Specifies where to find class files. --module-path module-path Specifies the module path. --upgrade-module-path module-path Specifies the upgrade module path. 
--system java-home Specifies an alternate system module path. --add-modules module-name[, module-name...] Adds modules to the root set for analysis. --multi-release version Specifies the version when processing multi-release JAR files. version should be an integer >=9 or base. -q or -quiet Doesn't show missing dependencies from -generate-module-info output. -version or --version Prints version information. MODULE DEPENDENCE ANALYSIS OPTIONS -m module-name or --module module-name Specifies the root module for analysis. --generate-module-info dir Generates module-info.java under the specified directory. The specified JAR files will be analyzed. This option cannot be used with --dot-output or --class-path options. Use the --generate-open-module option for open modules. --generate-open-module dir Generates module-info.java for the specified JAR files under the specified directory as open modules. This option cannot be used with the --dot-output or --class-path options. --check module-name [, module-name...] Analyzes the dependence of the specified modules. It prints the module descriptor, the resulting module dependences after analysis and the graph after transitive reduction. It also identifies any unused qualified exports. --list-deps Lists the module dependences and also the package names of JDK internal APIs (if referenced). This option transitively analyzes libraries on class path and module path if referenced. Use --no-recursive option for non-transitive dependency analysis. --list-reduced-deps Same as --list-deps without listing the implied reads edges from the module graph. If module M1 reads M2, and M2 requires transitive on M3, then M1 reading M3 is implied and is not shown in the graph. --print-module-deps Same as --list-reduced-deps with printing a comma-separated list of module dependences. The output can be used by jlink --add-modules to create a custom image that contains those modules and their transitive dependences. 
--ignore-missing-deps Ignore missing dependences. OPTIONS TO FILTER DEPENDENCES -p pkg_name, -package pkg_name, or --package pkg_name Finds dependences matching the specified package name. You can specify this option multiple times for different packages. The -p and -e options are mutually exclusive. -e regex, -regex regex, or --regex regex Finds dependences matching the specified pattern. The -p and -e options are mutually exclusive. --require module-name Finds dependences matching the given module name (may be given multiple times). The --package, --regex, and --require options are mutually exclusive. -f regex or -filter regex Filters dependences matching the given pattern. If given multiple times, the last one is selected. -filter:package Filters dependences within the same package. This is the default. -filter:archive Filters dependences within the same archive. -filter:module Filters dependences within the same module. -filter:none No -filter:package and -filter:archive filtering. Filtering specified via the -filter option still applies. --missing-deps Finds missing dependences. This option cannot be used with -p, -e and -s options. OPTIONS TO FILTER CLASSES TO BE ANALYZED -include regex Restricts analysis to the classes matching pattern. This option filters the list of classes to be analyzed. It can be used together with -p and -e, which apply the pattern to the dependencies. -R or --recursive Recursively traverses all run-time dependences. The -R option implies -filter:none. If -p, -e, or -f options are specified, only the matching dependences are analyzed. --no-recursive Do not recursively traverse dependences. -I or --inverse Analyzes the dependences per other given options and then finds all artifacts that directly and indirectly depend on the matching nodes. This is equivalent to the inverse of the compile-time view analysis and the print dependency summary. This option must be used with the --require, --package, or --regex options. 
--compile-time Analyzes the compile-time view of transitive dependencies, such as the compile-time view of the -R option. Analyzes the dependences per other specified options. If a dependency is found from a directory, a JAR file or a module, all classes in that containing archive are analyzed. EXAMPLE OF ANALYZING DEPENDENCIES The following example demonstrates analyzing the dependencies of the Notepad.jar file. Linux and macOS: $ jdeps demo/jfc/Notepad/Notepad.jar Notepad.jar -> java.base Notepad.jar -> java.desktop Notepad.jar -> java.logging <unnamed> (Notepad.jar) -> java.awt -> java.awt.event -> java.beans -> java.io -> java.lang -> java.net -> java.util -> java.util.logging -> javax.swing -> javax.swing.border -> javax.swing.event -> javax.swing.text -> javax.swing.tree -> javax.swing.undo Windows: C:\Java\jdk1.9.0>jdeps demo\jfc\Notepad\Notepad.jar Notepad.jar -> java.base Notepad.jar -> java.desktop Notepad.jar -> java.logging <unnamed> (Notepad.jar) -> java.awt -> java.awt.event -> java.beans -> java.io -> java.lang -> java.net -> java.util -> java.util.logging -> javax.swing -> javax.swing.border -> javax.swing.event -> javax.swing.text -> javax.swing.tree -> javax.swing.undo EXAMPLE USING THE --INVERSE OPTION $ jdeps --inverse --require java.xml.bind Inverse transitive dependences on [java.xml.bind] java.xml.bind <- java.se.ee java.xml.bind <- jdk.xml.ws java.xml.bind <- java.xml.ws <- java.se.ee java.xml.bind <- java.xml.ws <- jdk.xml.ws java.xml.bind <- jdk.xml.bind <- jdk.xml.ws JDK 22 2024 JDEPS(1)
jdeps - launch the Java class dependency analyzer
jdeps [options] path ...
Command-line options. For detailed descriptions of the options that can be used, see • Possible Options • Module Dependence Analysis Options • Options to Filter Dependences • Options to Filter Classes to be Analyzed path A pathname to the .class file, directory, or JAR file to analyze.
productbuild
A product archive is a flat file with a .pkg extension. productbuild creates a deployable product archive, which can be used with the OS X Installer, or submitted to the Mac App Store. It has 5 different modes, as shown in the SYNOPSIS above: 1. Create a product archive from a bundle (e.g. for the Mac App Store). If you have a self-contained bundle (e.g. an app) that always gets installed to the same location (e.g. /Applications), specify the bundle and install path using the --component option. You can specify additional requirements using a PRE-INSTALL REQUIREMENTS PROPERTY LIST. When you specify a bundle, productbuild automatically creates a component package, much like pkgbuild(1), and synthesizes a distribution file. 2. Create a product archive for in-app content. Specify in-app content using the --content option. 3. Create a product archive from a destination root. When you use xcodebuild(1) with the install action, the result is a destination root, either under /tmp, or in whatever location you specify with the Xcode DSTROOT setting. Use the productbuild --root option to specify that destination root directory and its install path. You can specify additional requirements using a PRE-INSTALL REQUIREMENTS PROPERTY LIST. When you specify a root, productbuild automatically creates a component package, much like pkgbuild(1), and synthesizes a distribution file. 4. Create a product archive using a distribution file. If you have a distribution file, use the --distribution option to specify the path to it, and the --package-path option to specify the directory where the component packages are found (if they are not in the current working directory). All packages referenced by the distribution will be incorporated into the resulting product archive. 5. Synthesize a distribution for one or more component packages. 
This also synthesizes a distribution (also using an optional PRE-INSTALL REQUIREMENTS PROPERTY LIST), but writes out the resulting distribution instead of incorporating it into a product archive. This can serve as a starting point if a more sophisticated distribution is required. When creating product archives for submission to the Mac App Store, use only the --component mode of productbuild. The other modes will create product archives that are compatible with the OS X Installer, but are not necessarily acceptable for the Mac App Store. ARGUMENTS AND OPTIONS --distribution dist-path Use the distribution file at dist-path to define the presentation, choices and packages to be installed by the product. Each of the package names referenced in the given distribution file must be found in a path specified with the --package-path flag. If --distribution is omitted, a distribution will be synthesized to install all of the bundles given by --component flags, or all of the packages given by --package flags. --package-path search-path productbuild will search in search-path for component packages named in the distribution. You can use multiple --package-path flags if necessary. The current working directory is searched automatically. --resources rsrc-dir productbuild will copy the resources from rsrc-dir into the resulting product archive. rsrc-dir can contain unlocalized resources (such as image files) and/or standard lproj directories (e.g. English.lproj) containing localized resources (such as strings files). --ui interface-type If the distribution has multiple choices-outline elements, you can use --ui to select one for building the product archive: this controls which package references are used. The interface-type should match the value of the “ui” attribute on the desired choices-outline. The default is to use the choices-outline with no ui attribute. 
If used without --distribution, the given interface-type will be used for the choices-outline of the synthesized distribution. --identifier product-identifier The given unique (non-localized) product-identifier will be associated with the product. --version product-version The given product-version string will be associated with the product. --component component-path [install-path] The bundle at component-path is added to the product archive (as its own component package) and to the synthesized distribution. If install-path is specified, it is used as the default install location for the bundle. (If you omit install-path, a location is inferred from the given component-path.) Valid only if --distribution is not specified. --component-compression compression-mode Allows control of compression used for storing any components added via the --component option. This option does not affect the compression used for plugins or scripts. Three compression-mode arguments are supported: • legacy forces a 10.5-compatible compression algorithm for all components. • auto enables productbuild to automatically select newer, more efficient compression algorithms based on properties of the component, such as supported operating system versions. (See os in the PRE-INSTALL REQUIREMENTS PROPERTY LIST section for more details on specifying operating system requirements.) • default provides identical behavior to omitting --component-compression entirely. It is currently equivalent to legacy but may change in future releases of OS X. Note that the Mac App Store may override the specified compression-mode for submitted product archives. Valid with --component only. To control compression of component packages with --distribution or --root use pkgbuild(1) and reference each component package in a distribution file. --content content-path The contents of the directory at content-path are added to the product archive (as its own component package) and to the synthesized distribution. 
Valid only if --distribution is not specified. --root root-path install-path The entire directory tree at root-path is added to the product archive (as its own component package) and to the synthesized distribution. This is typically used for a destination root created by xcodebuild(1). Valid only if --distribution is not specified. --package pkg-path [install-path] The component package at pkg-path is added to the product archive and to the synthesized distribution. If install-path is specified, it is used as the default install location for the package, overriding any default location specified by the component package itself. Valid only if --distribution is not specified. If the package provided was created by the pkgbuild tool with the --large-payload option specified, then its large payload format will be preserved. The generated product's distribution will include a minimum system version requirement of macOS 12.0 or the minimum allowable system version(s) in the requirements property list, whichever is greater. --synthesize Write the synthesized distribution directly instead of incorporating it into a product archive. --product requirements-plist When synthesizing a distribution, use the requirements from requirements-plist. See PRE-INSTALL REQUIREMENTS PROPERTY LIST (this was formerly called the "product definition property list"). --scripts scripts-path The contents of scripts-path are added to the product archive for use by system.run() commands in the distribution. This is valid only for product archives targeted to the OS X Installer application. --plugins plugins-path The contents of plugins-path are added to the product archive for use by the OS X Installer application's plugin mechanism. It will normally contain an InstallerSections.plist file, and one or more plugin bundles. --large-payload By default, packages that are nested inside of products have a per-file size limit associated with them. 
This method instructs productbuild to construct a product where the included payload format supports large files. A large file is defined as any file that is 8 GiB or larger. Note: Opting into --large-payload enforces a distribution requirement that mandates macOS 12.0 or later. --sign identity-name Adds a digital signature to the resulting package. See SIGNED PRODUCT ARCHIVES. --keychain keychain-path Specify a specific keychain to search for the signing identity. See SIGNED PRODUCT ARCHIVES. --cert certificate-name Specify an intermediate certificate to be embedded in the package. See SIGNED PRODUCT ARCHIVES. --timestamp Include a trusted timestamp with the signature. See SIGNED PRODUCT ARCHIVES. --timestamp=none Disable the trusted timestamp, regardless of identity. See SIGNED PRODUCT ARCHIVES. --quiet Inhibits status messages on stdout. Any error messages are still sent to stderr. product-output-path The path to which the product archive will be written. distribution-output-path When --synthesize is used, the path to which the synthesized distribution will be written. PRE-INSTALL REQUIREMENTS PROPERTY LIST When you use productbuild to synthesize a distribution (e.g. with the --component option), you can specify pre-install requirements in a separate property list file, specified with the --product option. (When you use Xcode to create a package for the Mac App Store, you can specify this file using the "Pre-install Requirements Property List" build setting.) At the top level, this property list is a dictionary, with the following keys:

Key                     Description
os                      Minimum allowable OS versions (array of strings)
arch                    Supported architectures (array of strings)
ram                     Minimum required RAM in gigabytes (real)
bundle                  Specific bundles that must exist on the system (array of dictionaries)
all-bundles             Are all of the specified bundles required? (Boolean)
gl-renderer             Required OpenGL capabilities (string)
cl-device               Required OpenCL capabilities (string)
metal-device            Required Metal capabilities (string)
single-graphics-device  Requires that the OpenGL, OpenCL, and Metal requirements be met by a single device. (Boolean)
sysctl-requirements     Additional required hardware properties (string)
home                    Should installation be allowed in the user home directory? (Boolean)

• The os key defines one or more minimum system versions. You might have multiple versions if a certain OS update is required for a given major OS version. For example, if you specify 10.5.4 and 10.6.2, Leopard would be allowed from 10.5.4 up, and Snow Leopard from 10.6.2 up, but 10.6 and 10.6.1 would be rejected. There is no upper bound associated with the highest value given. NOTE: Some of the other requirements imply their own minimum system versions, which may override the values set here. This is noted below where applicable. • The arch key specifies the supported architectures, e.g. x86_64 and/or arm64. Note that i386 implies x86_64, for compatibility reasons. NOTE: On Apple Silicon, the macOS Installer will evaluate the product's distribution under Rosetta 2 unless the arch key includes the arm64 architecture specifier. Some distribution properties may be evaluated differently between Rosetta 2 and native execution, such as the predicate specified by the sysctl-requirements key. If the distribution is evaluated under Rosetta 2, any package scripts inside of the product will be executed with Rosetta 2 at install time. NOTE: Starting on macOS 11.0 (Big Sur), productbuild will automatically specify support for both arm64 and x86_64 unless a custom value for arch is provided. • The ram key specifies the minimum amount of RAM required, in gigabytes. • The gl-renderer key specifies a predicate, against which each of the OpenGL hardware renderers will be checked. 
For the product to be installed, at least one of the renderers must match the requirements of the predicate. The given predicate string must be convertible to an NSPredicate, and can use the following key paths:

Key Path               Description
version                The supported OpenGL version as a double (e.g. major.minor).
extensions             An array of OpenGL extension strings supported.
limits.<gl-parameter>  The integer value of the named GL parameter (see below).
limits.param<value>    The integer value of the GL parameter named by enum <value> (see below).

Note that arbitrary GL parameters can be checked via the limits key, using the same symbolic name #defined by the GL headers. For example:

( version >= 2.0 OR ( ( 'GL_ARB_texture_float' IN extensions OR 'GL_ATI_texture_float' IN extensions ) AND 'GL_ARB_vertex_blend' IN extensions ) ) AND ( limits.GL_MAX_TEXTURE_SIZE >= 1024 AND limits.GL_MAX_TEXTURE_STACK_DEPTH > 8 )

Note that recently-introduced GL parameters may not be recognized by their symbolic names, in which case you can use the alternate form of param<value>, where <value> is the enum (integer) value of the parameter. For example:

limits.param0x0D33 >= 1024

NOTE: The gl-renderer requirement is ignored on versions of Mac OS X before 10.6.8. For this reason, specifying gl-renderer will cause the minimum system version to be raised to 10.6.8. This may override the values set via the os key.

• The cl-device key specifies a predicate, against which each of the OpenCL GPU devices will be checked. For the product to be installed, at least one of the devices must match the requirements of the predicate. The given predicate string must be convertible to an NSPredicate, and can use the following key paths:

Key Path               Description
version                The supported OpenCL version as a double (e.g. major.minor).
extensions             An array of OpenCL extension strings supported.
limits.<cl-parameter>  The integer value of the named CL deviceInfo parameter. 
limits.param<value>    The integer value of the CL parameter named by enum <value>.

NOTE: The cl-device requirement is ignored on versions of Mac OS X before 10.7. For this reason, specifying cl-device will cause the minimum system version to be raised to 10.7. This may override the values set via the os key.

• The metal-device key specifies a predicate, against which each of the Metal GPU devices will be checked. For the product to be installed, at least one of the devices must match the requirements of the predicate. The given predicate string must be convertible to an NSPredicate, and can use the following key paths:

Key Path                    Description
deviceName                  The name of the Metal device that the hardware is using. <string>
supportedFeatureSets        An array of Metal (MTLFeatureSet) feature sets that the device supports. <array<string>>
isRemovable                 The device is considered to be removable. This is useful for requiring an eGPU. <bool>
isHeadless                  The device cannot and does not have any displays attached. <bool>
isLowPowerDevice            Whether the device is the low-power device for automatic graphics switching. <bool>
rasterOrderGroupsSupported  The device supports raster order groups. <bool>
argumentBuffersTier         The argument buffer tier that the device supports. <integer>

NOTE: The metal-device requirement is ignored on versions of macOS before 10.14.4. For this reason, specifying metal-device will cause the minimum system version to be raised to 10.14.4. This may override the value set via the os key. NOTE: An example of an MTLFeatureSet that would go into the supportedFeatureSets array would be MTLFeatureSet_macOS_GPUFamily1_v1; a list of the current feature sets can be found in MTLDevice.h inside of Metal.framework. If gl-renderer, cl-device, and metal-device are all specified, all of the requirements must be satisfied. By default, the requirements are considered met even if one graphics device satisfies the OpenGL requirement and a different one satisfies the OpenCL one (same with Metal). 
If you want to require that a single device satisfies all of the graphics requirements, add the single-graphics-device key with a value of true. NOTE: Setting single-graphics-device to true will only be honored if all three of the graphics requirements are specified (gl-renderer, cl-device, metal-device). However, since legacy packages before 10.14.4 are supported, it can also be used if only gl-renderer and cl-device are specified. • The sysctl-requirements key specifies a predicate, against which additional hardware requirements will be checked. The predicate uses the sysctl(2) facility to obtain hardware properties for the system in use. Note that only a subset of sysctl(2) variables are available, including most of the hw.* tree and kern.ostype, kern.osrelease, kern.osrevision, and kern.version from the kern.* tree. For example: hw.physicalcpu > 1 Or: ( hw.optional.aes == 1 AND hw.memsize >= 4294967296 ) NOTE: The sysctl-requirements predicate is ignored on versions of OS X before 10.10. For this reason, specifying sysctl-requirements will cause the minimum system version to be raised to 10.10. This may override the values set via the os key. • The bundle key specifies one or more bundles that must already exist on the system (possibly at some minimum version) for the product to be installed. For example, this might be appropriate if the product installs a plugin, and you need to ensure that a compatible version of the host application is available. Each object in this array is a dictionary with the following keys:

Key                         Description
id                          The CFBundleIdentifier of the bundle (required)
path                        The default path of the bundle (required)
CFBundleShortVersionString  The minimum short version string of the bundle (optional)
search                      Search for bundle if not found at default path? (Boolean, optional)

The given default path will be checked first. 
Only if the bundle does not exist at that path, and search is given as true, the bundle identifier (id) will be used to find the bundle (this is appropriate for applications which the user might move). If the bundle is found through either method, and its version is greater than or equal to the given CFBundleShortVersionString, the requirement is met. (If CFBundleShortVersionString is omitted, the bundle need only exist.) If you specify multiple bundles, all must exist, unless you specify the all-bundles key with a value of false, in which case only one of the bundles must exist. If the bundle requirement is not met, the Installer must have a localized explanation to display to the user. This should be provided in the InfoPlist.strings resource of your top-level bundle (as specified with --component), under the RequiredBundlesDescription key. • The home key, if set to true, designates that the product can be installed under the user's home directory, as an alternative to installing on the system for all users. This should be enabled only if the entire product can be installed in the home directory and be functional. (Home directory installation is disabled by default.) Note that home directory installation is not supported for the Mac App Store. SIGNED PRODUCT ARCHIVES When creating a product archive, you can optionally add a digital signature to the archive. You will need to have a certificate and corresponding private key -- together called an “identity” -- in one of your accessible keychains. To add a signature, specify the name of the identity using the --sign option. The identity's name is the same as the “Common Name” of the certificate. If you want to search for the identity in a specific keychain, specify the path to the keychain file using the --keychain option. Otherwise, the default keychain search path is used. productbuild will embed the signing certificate in the product archive, as well as any intermediate certificates that are found in the keychain. 
If you need to embed additional certificates to form a chain of trust between the signing certificate and a trusted root certificate on the system, use the --cert option to give the Common Name of the intermediate certificate. Multiple --cert options may be used to embed multiple intermediate certificates. The signature can optionally include a trusted timestamp. This is enabled by default when signing with a Developer ID identity, but it can be enabled explicitly using the --timestamp option. A timestamp server must be contacted to embed a trusted timestamp. If you aren't connected to the Internet, you can use --timestamp=none to disable timestamps, even for a Developer ID identity. Note that component packages do not need to be signed (e.g. with pkgbuild(1)) before adding them to a signed product archive. The signature on the product archive protects the entire product, including the added packages. If you want to postpone signing the product archive until it has been tested and is ready to deploy, you can use productsign(1) when you are ready to add the signature.
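To make the PRE-INSTALL REQUIREMENTS PROPERTY LIST concrete, here is a minimal sketch of such a plist (every value below is hypothetical) that could be passed with --product when synthesizing a distribution:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical: require 10.13 or later -->
    <key>os</key>
    <array>
        <string>10.13</string>
    </array>
    <!-- Hypothetical: name both architectures explicitly -->
    <key>arch</key>
    <array>
        <string>x86_64</string>
        <string>arm64</string>
    </array>
    <!-- Hypothetical: require at least 4 GB of RAM -->
    <key>ram</key>
    <real>4.0</real>
    <!-- Allow installation into the user's home directory -->
    <key>home</key>
    <true/>
</dict>
</plist>
```

A file like this would then be used as, for example: productbuild --product requirements.plist --component build/Release/Sample.app /Applications Product.pkg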
productbuild – Build a product archive for the macOS Installer or the Mac App Store.
productbuild [--product requirements-plist] {--component component-path [install-path]} product-output-path productbuild {--content content-path} product-output-path productbuild [--product requirements-plist] {--root root-path install-path} product-output-path productbuild [options] --distribution dist-path [--package-path search-path] product-output-path productbuild --synthesize [--product requirements-plist] {--package pkg-path} distribution-output-path
productbuild --component build/Release/Sample.app /Applications Product.pkg Build the archive Product.pkg to install Sample.app under /Applications, synthesizing a distribution. This is typical for building a Mac App Store archive. productbuild --product def.plist --component build/Release/Sample.app /Applications Product.pkg Build the archive Product.pkg to install Sample.app under /Applications, synthesizing a distribution with the requirements from def.plist. This is typical for building a Mac App Store archive with pre-install requirements. productbuild --distribution Product.dist --package-path /tmp/Packages Product.pkg Build the archive Product.pkg using Product.dist, searching for packages referenced by that distribution in /tmp/Packages (as well as in CWD). productbuild --distribution Product.dist --resources Resources Product.pkg Build the archive Product.pkg using Product.dist, incorporating the resources found under the Resources directory. productbuild --distribution Product.dist --sign sample-identity Product.pkg Build the archive Product.pkg using Product.dist, and sign the resulting archive using the identity sample-identity. You will be prompted to allow productbuild to access the keychain item, unless Always Allow was chosen previously. productbuild --package /tmp/a.pkg --package /tmp/b.pkg Product.pkg Build the archive Product.pkg with the component packages /tmp/a.pkg and /tmp/b.pkg, synthesizing a distribution. SEE ALSO pkgbuild(1), productsign(1), xcodebuild(1) macOS January 19, 2021 macOS
zipinfo
zipinfo lists technical information about files in a ZIP archive, most commonly found on MS-DOS systems. Such information includes file access permissions, encryption status, type of compression, version and operating system or file system of compressing program, and the like. The default behavior (with no options) is to list single-line entries for each file in the archive, with header and trailer lines providing summary information for the entire archive. The format is a cross between Unix ``ls -l'' and ``unzip -v'' output. See DETAILED DESCRIPTION below. Note that zipinfo is the same program as unzip (under Unix, a link to it); on some systems, however, zipinfo support may have been omitted when unzip was compiled. ARGUMENTS file[.zip] Path of the ZIP archive(s). If the file specification is a wildcard, each matching file is processed in an order determined by the operating system (or file system). Only the filename can be a wildcard; the path itself cannot. Wildcard expressions are similar to Unix egrep(1) (regular) expressions and may contain: * matches a sequence of 0 or more characters ? matches exactly 1 character [...] matches any single character found inside the brackets; ranges are specified by a beginning character, a hyphen, and an ending character. If an exclamation point or a caret (`!' or `^') follows the left bracket, then the range of characters within the brackets is complemented (that is, anything except the characters inside the brackets is considered a match). To specify a verbatim left bracket, the three-character sequence ``[[]'' has to be used. (Be sure to quote any character that might otherwise be interpreted or modified by the operating system, particularly under Unix and VMS.) If no matches are found, the specification is assumed to be a literal filename; and if that also fails, the suffix .zip is appended. 
Note that self-extracting ZIP files are supported, as with any other ZIP archive; just specify the .exe suffix (if any) explicitly. [file(s)] An optional list of archive members to be processed, separated by spaces. (VMS versions compiled with VMSCLI defined must delimit files with commas instead.) Regular expressions (wildcards) may be used to match multiple members; see above. Again, be sure to quote expressions that would otherwise be expanded or modified by the operating system. [-x xfile(s)] An optional list of archive members to be excluded from processing.
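As the ARGUMENTS section stresses, member patterns must be quoted so the shell hands them to zipinfo instead of globbing them itself. A small sketch of the difference, with echo standing in for zipinfo so the effect is visible without an archive (archive.zip is a hypothetical name):

```shell
# Intended usage -- zipinfo receives both patterns verbatim:
#   zipinfo archive.zip '*.c' -x 'src/test/*'
# What the shell does to an unquoted pattern:
mkdir -p /tmp/zi-quote && cd /tmp/zi-quote
touch local.c
echo zipinfo archive.zip *.c      # shell expands the glob first
echo zipinfo archive.zip '*.c'    # pattern passed through verbatim
```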
zipinfo - list detailed information about a ZIP archive
zipinfo [-12smlvhMtTz] file[.zip] [file(s) ...] [-x xfile(s) ...] unzip -Z [-12smlvhMtTz] file[.zip] [file(s) ...] [-x xfile(s) ...]
-1 list filenames only, one per line. This option excludes all others; headers, trailers and zipfile comments are never printed. It is intended for use in Unix shell scripts. -2 list filenames only, one per line, but allow headers (-h), trailers (-t) and zipfile comments (-z), as well. This option may be useful in cases where the stored filenames are particularly long. -s list zipfile info in short Unix ``ls -l'' format. This is the default behavior; see below. -m list zipfile info in medium Unix ``ls -l'' format. Identical to the -s output, except that the compression factor, expressed as a percentage, is also listed. -l list zipfile info in long Unix ``ls -l'' format. As with -m except that the compressed size (in bytes) is printed instead of the compression ratio. -v list zipfile information in verbose, multi-page format. -h list header line. The archive name, actual size (in bytes) and total number of files are printed. -M pipe all output through an internal pager similar to the Unix more(1) command. At the end of a screenful of output, zipinfo pauses with a ``--More--'' prompt; the next screenful may be viewed by pressing the Enter (Return) key or the space bar. zipinfo can be terminated by pressing the ``q'' key and, on some systems, the Enter/Return key. Unlike Unix more(1), there is no forward-searching or editing capability. Also, zipinfo doesn't notice if long lines wrap at the edge of the screen, effectively resulting in the printing of two or more lines and the likelihood that some text will scroll off the top of the screen before being viewed. On some systems the number of available lines on the screen is not detected, in which case zipinfo assumes the height is 24 lines. -t list totals for files listed or for all files. The number of files listed, their uncompressed and compressed total sizes, and their overall compression factor are printed; or, if only the totals line is being printed, the values for the entire archive are given. 
The compressed total size does not include the 12 additional header bytes of each encrypted entry. Note that the total compressed (data) size will never match the actual zipfile size, since the latter includes all of the internal zipfile headers in addition to the compressed data. -T print the file dates and times in a sortable decimal format (yymmdd.hhmmss). The default date format is a more standard, human-readable version with abbreviated month names (see examples below). -U [UNICODE_SUPPORT only] modify or disable UTF-8 handling. When UNICODE_SUPPORT is available, the option -U forces unzip to escape all non-ASCII characters from UTF-8 coded filenames as ``#Uxxxx''. This option is mainly provided for debugging purposes when the fairly new UTF-8 support is suspected of mangling extracted filenames. The option -UU entirely disables the recognition of UTF-8 encoded filenames; the handling of filename codings within unzip then falls back to the behaviour of previous versions. -z include the archive comment (if any) in the listing. DETAILED DESCRIPTION zipinfo has a number of modes, and its behavior can be rather difficult to fathom if one isn't familiar with Unix ls(1) (or even if one is). The default behavior is to list files in the following format:

-rw-rws---  1.9 unx     2802 t- defX 11-Aug-91 13:48 perms.2660

The last three fields are the modification date and time of the file, and its name. The case of the filename is respected; thus files that come from MS-DOS PKZIP are always capitalized. If the file was zipped with a stored directory name, that is also displayed as part of the filename. The second and third fields indicate that the file was zipped under Unix with version 1.9 of zip. Since it comes from Unix, the file permissions at the beginning of the line are printed in Unix format. The uncompressed file-size (2802 in this example) is the fourth field. The fifth field consists of two characters, either of which may take on several values. 
The first character may be either `t' or `b', indicating that zip believes the file to be text or binary, respectively; but if the file is encrypted, zipinfo notes this fact by capitalizing the character (`T' or `B'). The second character may also take on four values, depending on whether there is an extended local header and/or an ``extra field'' associated with the file (fully explained in PKWare's APPNOTE.TXT, but basically analogous to pragmas in ANSI C--i.e., they provide a standard way to include non-standard information in the archive). If neither exists, the character will be a hyphen (`-'); if there is an extended local header but no extra field, `l'; if the reverse, `x'; and if both exist, `X'. Thus the file in this example is (probably) a text file, is not encrypted, and has neither an extra field nor an extended local header associated with it. The example below, on the other hand, is an encrypted binary file with an extra field:

RWD,R,R     0.9 vms      168 Bx shrk  9-Aug-91 19:15 perms.0644

Extra fields are used for various purposes (see discussion of the -v option below) including the storage of VMS file attributes, which is presumably the case here. Note that the file attributes are listed in VMS format. Some other possibilities for the host operating system (which is actually a misnomer--host file system is more correct) include OS/2 or NT with High Performance File System (HPFS), MS-DOS, OS/2 or NT with File Allocation Table (FAT) file system, and Macintosh. These are denoted as follows:

-rw-a--     1.0 hpf     5358 Tl i4:3  4-Dec-91 11:33 longfilename.hpfs
-r--ahs     1.1 fat     4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
--w-------  1.0 mac    17357 bx i8:2  4-May-92 04:02 unzip.macr

File attributes in the first two cases are indicated in a Unix-like format, where the seven subfields indicate whether the file: (1) is a directory, (2) is readable (always true), (3) is writable, (4) is executable (guessed on the basis of the extension--.exe, .com, .bat, .cmd and .btm files are assumed to be so), (5) has its archive bit set, (6) is hidden, and (7) is a system file. Interpretation of Macintosh file attributes is unreliable because some Macintosh archivers don't store any attributes in the archive. Finally, the sixth field indicates the compression method and possible sub-method used. There are six methods known at present: storing (no compression), reducing, shrinking, imploding, tokenizing (never publicly released), and deflating. In addition, there are four levels of reducing (1 through 4); four types of imploding (4K or 8K sliding dictionary, and 2 or 3 Shannon-Fano trees); and four levels of deflating (superfast, fast, normal, maximum compression). zipinfo represents these methods and their sub-methods as follows: stor; re:1, re:2, etc.; shrk; i4:2, i8:3, etc.; tokn; and defS, defF, defN, and defX. The medium and long listings are almost identical to the short format except that they add information on the file's compression. The medium format lists the file's compression factor as a percentage indicating the amount of space that has been ``removed'':

-rw-rws---  1.5 unx     2802 t- 81% defX 11-Aug-91 13:48 perms.2660

In this example, the file has been compressed by more than a factor of five; the compressed data are only 19% of the original size. The long format gives the compressed file's size in bytes, instead:

-rw-rws---  1.5 unx     2802 t- 538 defX 11-Aug-91 13:48 perms.2660

In contrast to the unzip listings, the compressed size figures in this listing format denote the complete size of compressed data, including the 12 extra header bytes in case of encrypted entries. 
Adding the -T option changes the file date and time to decimal format: -rw-rws--- 1.5 unx 2802 t- 538 defX 910811.134804 perms.2660 Note that because of limitations in the MS-DOS format used to store file times, the seconds field is always rounded to the nearest even second. For Unix files this is expected to change in the next major releases of zip(1L) and unzip. In addition to individual file information, a default zipfile listing also includes header and trailer lines: Archive: OS2.zip 5453 bytes 5 files ,,rw, 1.0 hpf 730 b- i4:3 26-Jun-92 23:40 Contents ,,rw, 1.0 hpf 3710 b- i4:3 26-Jun-92 23:33 makefile.os2 ,,rw, 1.0 hpf 8753 b- i8:3 26-Jun-92 15:29 os2unzip.c ,,rw, 1.0 hpf 98 b- stor 21-Aug-91 15:34 unzip.def ,,rw, 1.0 hpf 95 b- stor 21-Aug-91 17:51 zipinfo.def 5 files, 13386 bytes uncompressed, 4951 bytes compressed: 63.0% The header line gives the name of the archive, its total size, and the total number of files; the trailer gives the number of files listed, their total uncompressed size, and their total compressed size (not including any of zip's internal overhead). If, however, one or more file(s) are provided, the header and trailer lines are not listed. This behavior is also similar to that of Unix's ``ls -l''; it may be overridden by specifying the -h and -t options explicitly. In such a case the listing format must also be specified explicitly, since -h or -t (or both) in the absence of other options implies that ONLY the header or trailer line (or both) is listed. See the EXAMPLES section below for a semi-intelligible translation of this nonsense. The verbose listing is mostly self-explanatory. It also lists file comments and the zipfile comment, if any, and the type and number of bytes in any stored extra fields. Currently known types of extra fields include PKWARE's authentication (``AV'') info; OS/2 extended attributes; VMS filesystem info, both PKWARE and Info-ZIP versions; Macintosh resource forks; Acorn/Archimedes SparkFS info; and so on. 
(Note that in the case of OS/2 extended attributes--perhaps the most common use of zipfile extra fields--the size of the stored EAs as reported by zipinfo may not match the number given by OS/2's dir command: OS/2 always reports the number of bytes required in 16-bit format, whereas zipinfo always reports the 32-bit storage.) Again, the compressed size figures of the individual entries include the 12 extra header bytes for encrypted entries. In contrast, the archive total compressed size and the average compression ratio shown in the summary bottom line are calculated without the extra 12 header bytes of encrypted entries. ENVIRONMENT OPTIONS Modifying zipinfo's default behavior via options placed in an environment variable can be a bit complicated to explain, due to zipinfo's attempts to handle various defaults in an intuitive, yet Unix-like, manner. (Try not to laugh.) Nevertheless, there is some underlying logic. In brief, there are three ``priority levels'' of options: the default options; environment options, which can override or add to the defaults; and explicit options given by the user, which can override or add to either of the above. The default listing format, as noted above, corresponds roughly to the "zipinfo -hst" command (except when individual zipfile members are specified). A user who prefers the long-listing format (-l) can make use of zipinfo's environment variable to change this default:
   Unix Bourne shell:          ZIPINFO=-l; export ZIPINFO
   Unix C shell:               setenv ZIPINFO -l
   OS/2 or MS-DOS:             set ZIPINFO=-l
   VMS (quotes for lowercase): define ZIPINFO_OPTS "-l"
If, in addition, the user dislikes the trailer line, zipinfo's concept of ``negative options'' may be used to override the default inclusion of the line. This is accomplished by preceding the undesired option with one or more minuses: e.g., ``-l-t'' or ``--tl'', in this example. The first hyphen is the regular switch character, but the one before the `t' is a minus sign. 
The dual use of hyphens may seem a little awkward, but it's reasonably intuitive nonetheless: simply ignore the first hyphen and go from there. It is also consistent with the behavior of the Unix command nice(1). As suggested above, the default variable names are ZIPINFO_OPTS for VMS (where the symbol used to install zipinfo as a foreign command would otherwise be confused with the environment variable), and ZIPINFO for all other operating systems. For compatibility with zip(1L), ZIPINFOOPT is also accepted (don't ask). If both ZIPINFO and ZIPINFOOPT are defined, however, ZIPINFO takes precedence. unzip's diagnostic option (-v with no zipfile name) can be used to check the values of all four possible unzip and zipinfo environment variables.
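The cancellation rule above can be sketched as a toy parser in POSIX shell. Note that parse_opts is a hypothetical helper written for illustration, not zipinfo code; it only models how a minus inside the option string cancels the letter that follows it.

```shell
# Toy model of zipinfo-style negative options: a '-' inside the option
# string cancels the option letter that follows it. parse_opts is a
# hypothetical illustration of the rule, not part of zipinfo.
parse_opts() (
    opts=${1#-}                     # drop the leading switch character
    neg=0 enabled=""
    while [ -n "$opts" ]; do
        c=${opts%"${opts#?}"}       # first character of $opts
        opts=${opts#?}              # remainder of $opts
        if [ "$c" = "-" ]; then
            neg=1                   # the next letter is cancelled
        elif [ "$neg" -eq 1 ]; then
            neg=0                   # skip the cancelled option
        else
            enabled="$enabled$c"
        fi
    done
    printf '%s\n' "$enabled"
)
parse_opts "-l-t"    # prints "l": t is cancelled
parse_opts "--tl"    # also prints "l"
```

Both spellings from the text, ``-l-t'' and ``--tl'', leave only the l option enabled, matching the "ignore the first hyphen and go from there" reading.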
To get a basic, short-format listing of the complete contents of a ZIP archive storage.zip, with both header and totals lines, use only the archive name as an argument to zipinfo: zipinfo storage To produce a basic, long-format listing (not verbose), including header and totals lines, use -l: zipinfo -l storage To list the complete contents of the archive without header and totals lines, either negate the -h and -t options or else specify the contents explicitly: zipinfo --h-t storage zipinfo storage \* (where the backslash is required only if the shell would otherwise expand the `*' wildcard, as in Unix when globbing is turned on--double quotes around the asterisk would have worked as well). To turn off the totals line by default, use the environment variable (C shell is assumed here): setenv ZIPINFO --t zipinfo storage To get the full, short-format listing of the first example again, given that the environment variable is set as in the previous example, it is necessary to specify the -s option explicitly, since the -t option by itself implies that ONLY the footer line is to be printed: setenv ZIPINFO --t zipinfo -t storage [only totals line] zipinfo -st storage [full listing] The -s option, like -m and -l, includes headers and footers by default, unless otherwise specified. Since the environment variable specified no footers and that has a higher precedence than the default behavior of -s, an explicit -t option was necessary to produce the full listing. Nothing was indicated about the header, however, so the -s option was sufficient. Note that both the -h and -t options, when used by themselves or with each other, override any default listing of member files; only the header and/or footer are printed. This behavior is useful when zipinfo is used with a wildcard zipfile specification; the contents of all zipfiles are then summarized with a single command. 
To list information on a single file within the archive, in medium format, specify the filename explicitly: zipinfo -m storage unshrink.c The specification of any member file, as in this example, will override the default header and totals lines; only the single line of information about the requested file will be printed. This is intuitively what one would expect when requesting information about a single file. For multiple files, it is often useful to know the total compressed and uncompressed size; in such cases -t may be specified explicitly: zipinfo -mt storage "*.[ch]" Mak\* To get maximal information about the ZIP archive, use the verbose option. It is usually wise to pipe the output into a filter such as Unix more(1) if the operating system allows it: zipinfo -v storage | more Finally, to see the most recently modified files in the archive, use the -T option in conjunction with an external sorting utility such as Unix sort(1) (and sed(1) as well, in this example): zipinfo -T storage | sort -nr -k 7 | sed 15q The -nr option to sort(1) tells it to sort numerically in reverse order rather than in textual order, and the -k 7 option tells it to sort on the seventh field. This assumes the default short-listing format; if -m or -l is used, the proper sort(1) option would be -k 8. Older versions of sort(1) do not support the -k option, but you can use the traditional + option instead, e.g., +6 instead of -k 7. The sed(1) command filters out all but the first 15 lines of the listing. Future releases of zipinfo may incorporate date/time and filename sorting as built-in options. TIPS The author finds it convenient to define an alias ii for zipinfo on systems that allow aliases (or, on other systems, copy/rename the executable, create a link or create a command file with the name ii). The ii usage parallels the common ll alias for long listings in Unix, and the similarity between the outputs of the two commands was intentional. 
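The sort-by-date pipeline above can be tried without an archive at hand; the short-format listing lines below are fabricated stand-ins for zipinfo -T output (field 7 is the decimal date/time).

```shell
# Fabricated zipinfo -T style lines; field 7 is the yymmdd.hhmmss stamp.
cat > listing.txt <<'EOF'
-rw-r--r--  3.0 unx     2802 t- defX 910811.134804 perms.2660
-rw-r--r--  3.0 unx     1024 b- defN 920504.040200 unzip.macr
-rw-r--r--  3.0 unx      538 t- defS 900101.120000 oldfile
EOF
# Newest first, keep only the top two entries (cf. "sed 15q" above).
sort -nr -k 7 listing.txt | sed 2q
```

The entry stamped 920504 (the most recently modified) sorts to the top, exactly as in the zipinfo example.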
BUGS As with unzip, zipinfo's -M (``more'') option is overly simplistic in its handling of screen output; as noted above, it fails to detect the wrapping of long lines and may thereby cause lines at the top of the screen to be scrolled off before being read. zipinfo should detect and treat each occurrence of line-wrap as one additional line printed. This requires knowledge of the screen's width as well as its height. In addition, zipinfo should detect the true screen geometry on all systems. zipinfo's listing-format behavior is unnecessarily complex and should be simplified. (This is not to say that it will be.) SEE ALSO ls(1), funzip(1L), unzip(1L), unzipsfx(1L), zip(1L), zipcloak(1L), zipnote(1L), zipsplit(1L) URL The Info-ZIP home page is currently at http://www.info-zip.org/pub/infozip/ or ftp://ftp.info-zip.org/pub/infozip/ . AUTHOR Greg ``Cave Newt'' Roelofs. ZipInfo contains pattern-matching code by Mark Adler and fixes/improvements by many others. Please refer to the CONTRIBS file in the UnZip source distribution for a more complete list. Info-ZIP 20 April 2009 (v3.0) ZIPINFO(1L)
diffstat
This program reads the output of diff and displays a histogram of the insertions, deletions, and modifications per file. Diffstat is useful for reviewing large, complex patch files. It reads from one or more input files which contain output from diff, producing a histogram of the total lines changed for each file referenced. If the input filename ends with “.bz2”, “.gz”, “.lzma”, “.xz”, “.z” or “.Z”, diffstat will read the uncompressed data via a pipe from the corresponding program. It can also infer the compression type from files piped via the standard input. Diffstat recognizes the most popular types of output from diff: unified (preferred by the patch utility), context (best for readability, but not very compact), and the default format (not good for much, but simple to generate). Diffstat detects the lines that are output by diff to tell which files are compared, and then counts the markers in the first column that denote the type of change (insertion, deletion or modification). These are shown in the histogram as "+", "-" and "!" characters. If no filename is given on the command line, diffstat reads the differences from the standard input.
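The marker-counting described above can be approximated with plain diff and grep; the file names and contents below are made up for illustration.

```shell
# Count insertions and deletions the way diffstat does conceptually:
# inspect the first column of a unified diff. Sample files are made up.
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\nTWO\nthree\nfour\n' > new.txt
diff -u old.txt new.txt > changes.diff || true   # diff exits 1 when files differ
ins=$(grep -c '^+[^+]' changes.diff)             # '+' lines, excluding the +++ header
del=$(grep -c '^-[^-]' changes.diff)             # '-' lines, excluding the --- header
echo "insertions: $ins, deletions: $del"
```

This sketch misses added blank lines and, unlike diffstat, makes no attempt to pair deletions with insertions into "modified" lines; it only illustrates the first-column counting idea.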
diffstat - make histogram from diff-output
diffstat [options] [file-specifications]
-b ignore lines matching "Binary files XXX and YYY differ" in the diff. -c prefix each line of output with "#", making it a comment-line for shell scripts. -C add SGR color escape sequences to highlight the histogram. -D destination specify a directory containing files which can be referred to as the result of applying the differences. diffstat will count the lines in the corresponding files (after adjusting the names by the -p option) to obtain the total number of lines in each file. The remainder, after subtracting modified and deleted lines, is shown as "unchanged lines". -d The debug option prints a lot of information. It is normally compiled in, but can be suppressed. -e file redirect standard error to file. -E strip out ANSI escape sequences on each line before parsing the differences. This allows diffstat to be used with colordiff. -f format specify the format of the histogram: 0 for concise, which shows only the value and a single histogram code for each of insert (+), delete (-) or modify (!); 1 for normal output; 2 to fill in the histogram with dots; 4 to print each value with the histogram. Any nonzero value gives a histogram. The dots and individual values can be combined, e.g., -f6 gives both. -h prints the usage message and exits. -k suppress the merging of filenames in the report. -K attempt to improve the annotation of "only" files by looking for a match in the resulting set of files and inferring whether the file was added or removed. This does not currently work in combination with -R because diffstat maintains only the resulting set of files. -l lists only the filenames. No histogram is generated. -m merge insert/delete counts from each "chunk" of the patch file to approximate a count of the modified lines. -n number specify the minimum width used for filenames. If you do not specify this, diffstat uses the length of the longest filename, after stripping common prefixes. -N number specify the maximum width used for filenames. 
Names longer than this limit are truncated on the left. If you do not specify this, diffstat next checks the -n option. -o file redirect standard output to file. -p number override the logic that strips common pathnames, simulating the patch "-p" option. If you do not give a -p option, diffstat examines the differences and strips the common prefix from the pathnames. This is not what patch does. -q suppress the "0 files changed" message for empty diffs. -r code provides optional rounding of the data shown in the histogram, rather than truncating with error adjustments. 0 is the default: no rounding is performed, but accumulated errors are added to following columns. 1 rounds the data; 2 rounds the data and adjusts the histogram to ensure that it displays something if there are any differences, even if those would normally be rounded to zero. -R Assume the patch was created with old and new files swapped. -s show only the summary line, e.g., number of insertions and deletions. -S source this is like the -D option, but specifies a location where the original files (before applying differences) can be found. -t overrides the histogram, generating output of comma-separated values for the number of changed lines found in the differences for each file: inserted, deleted and modified. If the -S or -D options are given, the number of unchanged lines precedes the number of changes. -T prints the numbers that the -t option would show, between the pathname and histogram. The width of the number of changes is determined by the largest value (but is at least 3). The width given in the -w option is separate from the width of these numbers. -u suppress the sorting of filenames in the report. -v show progress, e.g., if the output is redirected to a file, write progress messages to the standard error. -V prints the current version number and exits. -w number specify the maximum width of the histogram. The histogram will never be narrower than 10 columns, just in case the filenames get too long. 
The default is 80 columns, unless the output is to a terminal. In that case, the default width is the terminal's width. ENVIRONMENT Diffstat runs in a POSIX environment. You can override the compiled-in paths of programs used for decompressing input files by setting environment variables corresponding to their name: DIFFSTAT_BZCAT_PATH DIFFSTAT_BZIP2_PATH DIFFSTAT_COMPRESS_PATH DIFFSTAT_GZIP_PATH DIFFSTAT_LZCAT_PATH DIFFSTAT_PCAT_PATH DIFFSTAT_UNCOMPRESS_PATH DIFFSTAT_XZ_PATH DIFFSTAT_ZCAT_PATH However, diffstat assumes that the resulting program uses the same command-line options, e.g., "-c" to decompress to the standard output. FILES Diffstat is a single binary module, which uses no auxiliary files. BUGS Diffstat makes a lot of assumptions about the format of diff's output. There is no way to obtain a filename from the standard diff between two files with no options. Context diffs work, as well as unified diffs. There's no easy way to determine the degree of overlap between the "before" and "after" displays of modified lines. diffstat simply counts the number of inserted and deleted lines to approximate modified lines for the -m option. SEE ALSO diff(1), patch(1). AUTHOR Thomas Dickey <dickey@invisible-island.net>. DIFFSTAT(1)
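The -f values described above combine as bit flags, which can be checked the way the option description implies; this is a sketch of the combination rule, with the numeric values taken from the text.

```shell
# diffstat's -f is effectively a bit mask: 2 = fill the histogram with
# dots, 4 = print each value alongside it; -f6 therefore requests both.
fmt=6
[ $(( fmt & 2 )) -ne 0 ] && echo "fill histogram with dots"
[ $(( fmt & 4 )) -ne 0 ] && echo "print each value with the histogram"
```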
null
yamlpp-load5.34
null
null
null
null
null
login
The login utility logs users (and pseudo-users) into the computer system. If no user is specified, or if a user is specified and authentication of the user fails, login prompts for a user name. Authentication of users is configurable via pam(8). Password authentication is the default. The following options are available: -f When a user name is specified, this option indicates that proper authentication has already been done and that no password need be requested. This option may only be used by the super-user or when an already logged in user is logging in as themselves. With the -f option, an alternate program (and any arguments) may be run instead of the user's default shell. The program and arguments follow the user name. -h Specify the host from which the connection was received. It is used by various daemons such as telnetd(8). This option may only be used by the super-user. -l Tells the program executed by login that this is not a login session (by convention, a login session is signalled to the program with a hyphen as the first character of argv[0]; this option disables that), and prevents it from chdir(2)ing to the user's home directory. The default is to add the hyphen (this is a login session). -p By default, login discards any previous environment. The -p option disables this behavior. -q This forces quiet logins, as if a .hushlogin were present. If the file /etc/nologin exists, login displays its contents to the user and exits. This is used by shutdown(8) to prevent users from logging in when the system is about to go down. Immediately after logging a user in, login displays the system copyright notice, the date and time the user last logged in, the message of the day as well as other information. If the file .hushlogin exists in the user's home directory, all of these messages are suppressed. If -q is specified, all of these messages are suppressed. This is to simplify logins for non-human users, such as uucp(1). 
login then records an entry in utmpx(5) and the like, and executes the user's command interpreter (or the program specified on the command line if -f is specified). The login utility enters information into the environment (see environ(7)) specifying the user's home directory (HOME), command interpreter (SHELL), search path (PATH), terminal type (TERM) and user name (both LOGNAME and USER). Some shells may provide a builtin login command which is similar or identical to this utility. Consult the builtin(1) manual page. The login utility will submit an audit record when login succeeds or fails. Failure to determine the current auditing state will result in an error exit from login. FILES /etc/motd message-of-the-day /etc/nologin disallows logins /var/run/utmpx current logins /var/mail/user system mailboxes .hushlogin makes login quieter /etc/pam.d/login pam(8) configuration file /etc/security/audit_user user flags for auditing /etc/security/audit_control global flags for auditing SEE ALSO builtin(1), chpass(1), csh(1), newgrp(1), passwd(1), rlogin(1), getpass(3), utmpx(5), environ(7) HISTORY A login utility appeared in Version 6 AT&T UNIX. macOS 14.5 July 20, 2019 macOS 14.5
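The argv[0] convention described under -l can be observed from a shell: a login session is signalled by a leading hyphen in the program's name. The check below illustrates the general Unix convention; it is not part of login itself.

```shell
# A program started by login sees a '-' prepended to its argv[0];
# shells inspect $0 this way to decide whether to act as a login shell.
case "$0" in
    -*) echo "started as a login shell" ;;
    *)  echo "not a login shell" ;;
esac
```

Run as an ordinary script this prints "not a login shell", since only login (or `exec -a`-style tricks) prepends the hyphen.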
login – log into the computer
login [-fpq] [-h hostname] [user] login -f [-lpq] [-h hostname] [user [prog [args...]]]
null
null
db_dump
The db_dump utility reads the database file file and writes it to the standard output using a portable flat-text format understood by the db_load utility. The file argument must be a file produced using the Berkeley DB library functions. The options are as follows: -d Dump the specified database in a format helpful for debugging the Berkeley DB library routines. a Display all information. h Display only page headers. r Do not display the free-list or pages on the free list. This mode is used by the recovery tests. The output format of the -d option is not standard and may change, without notice, between releases of the Berkeley DB library. -f Write to the specified file instead of to the standard output. -h Specify a home directory for the database environment; by default, the current working directory is used. -k Dump record numbers from Queue and Recno databases as keys. -l List the databases stored in the file. -N Do not acquire shared region mutexes while running. Other problems, such as potentially fatal errors in Berkeley DB, will be ignored as well. This option is intended only for debugging errors, and should not be used under any other circumstances. -P Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments. -p If characters in either the key or data items are printing characters (as defined by isprint(3)), use printing characters in file to represent them. This option permits users to use standard text editors and tools to modify the contents of databases. Note: different systems may have different notions about what characters are considered printing characters, and databases dumped in this manner may be less portable to external systems. 
-R Aggressively salvage data from a possibly corrupt file. The -R flag differs from the -r option in that it will return all possible data from the file at the risk of also returning already deleted or otherwise nonsensical items. Data dumped in this fashion will almost certainly have to be edited by hand or other means before the data is ready for reload into another database. -r Salvage data from a possibly corrupt file. When used on an uncorrupted database, this option should return equivalent data to a normal dump, but most likely in a different order. -s Specify a single database to dump. If no database is specified, all databases in the database file are dumped. -V Write the library version number to the standard output, and exit. Dumping and reloading Hash databases that use user-defined hash functions will result in new databases that use the default hash function. Although using the default hash function may not be optimal for the new database, it will continue to work correctly. Dumping and reloading Btree databases that use user-defined prefix or comparison functions will result in new databases that use the default prefix and comparison functions. In this case, it is quite likely that the database will be damaged beyond repair, permitting neither record storage nor retrieval. The only available workaround for either case is to modify the sources for the db_load utility to load the database using the correct hash, prefix, and comparison functions. The db_dump utility output format is documented in the Dump Output Formats section of the Berkeley DB Reference Guide. The db_dump utility may be used with a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_dump should always be given the chance to detach from the environment and exit gracefully. 
To cause db_dump to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). Even when using a Berkeley DB database environment, the db_dump utility does not use any kind of database locking if it is invoked with the -d, -R, or -r arguments. If used with one of these arguments, the db_dump utility may only be safely run on databases that are not being modified by any other process; otherwise, the output may be corrupt. The db_dump utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open. SEE ALSO db_archive(1), db_checkpoint(1), db_deadlock(1), db_load(1), db_printlog(1), db_recover(1), db_stat(1), db_upgrade(1), db_verify(1) Darwin December 3, 2003 Darwin
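The portable flat-text format stores keys and data as printable lines, hex-escaping bytes where needed; the exact layout is documented in the Dump Output Formats section. The hex-encoding idea alone can be sketched with od (the value here is made up, and this is not Berkeley DB's exact format):

```shell
# Hex-encode a value the way a portable flat-text dump might (rough
# illustration only; see the "Dump Output Formats" docs for the real
# format). The sample value 'alpha' is fabricated.
printf 'alpha' | od -An -tx1 | tr -d ' \t\n'
echo
```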
db_dump
db_dump [-klNpRrV] [-d ahr] [-f output] [-h home] [-P password] [-s database] file
null
null
cpuctl
The cpuctl command can be used to control and inspect the state of CPUs in the system. The first argument, command, specifies the action to take. Valid commands are: list For each CPU in the system, display the current state and time of the last state change. offline cpu [cpu ...] Set the specified CPUs off line. At least one CPU in the system must remain on line. online cpu [cpu ...] Set the specified CPUs on line.
cpuctl – program to control CPUs
cpuctl command [arguments]
null
To take a CPU offline and verify the result, run cpuctl offline 2 and then cpuctl list; the output should reflect the fact that CPU#2 was taken offline. Darwin March 18, 2019 Darwin
cpu_profiler.d
null
null
null
null
null
which
The which utility takes a list of command names and searches the path for each executable file that would be run had these commands actually been invoked. The following options are available: -a List all instances of executables found (instead of just the first one of each). -s No output, just return 0 if all of the executables are found, or 1 if some were not found. Some shells may provide a builtin which command which is similar or identical to this utility. Consult the builtin(1) manual page.
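The path search which performs can be sketched in POSIX shell. Here mywhich is a hypothetical stand-in covering only the default first-match behaviour, not the real implementation.

```shell
# Walk $PATH and report the first executable regular file matching the
# name: roughly what which does without -a. Hypothetical helper.
mywhich() (
    IFS=:                           # subshell body keeps the IFS change local
    for dir in $PATH; do
        if [ -f "$dir/$1" ] && [ -x "$dir/$1" ]; then
            printf '%s\n' "$dir/$1"
            exit 0                  # found: stop at the first match
        fi
    done
    exit 1                          # not found in any PATH component
)
mywhich sh                          # e.g. /bin/sh on most systems
```

Printing every match instead of stopping at the first would correspond to -a, and returning only the exit status to -s.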
which – locate a program file in the user's path
which [-as] program ...
null
Locate the ls(1) and cp(1) commands: $ /usr/bin/which ls cp /bin/ls /bin/cp Same as above with a specific PATH and showing all occurrences: $ PATH=/bin:/rescue /usr/bin/which -a ls cp /bin/ls /rescue/ls /bin/cp /rescue/cp which will show duplicates if the same executable is found more than once: $ PATH=/bin:/bin /usr/bin/which -a ls /bin/ls /bin/ls Do not show output. Just exit with an appropriate return code: $ /usr/bin/which -s ls cp $ echo $? 0 $ /usr/bin/which -s fakecommand $ echo $? 1 SEE ALSO builtin(1), csh(1), find(1), locate(1), whereis(1) HISTORY The which command first appeared in FreeBSD 2.1. AUTHORS The which utility was originally written in Perl and was contributed by Wolfram Schneider <wosch@FreeBSD.org>. The current version of which was rewritten in C by Daniel Papasian <dpapasia@andrew.cmu.edu>. macOS 14.5 September 24, 2020 macOS 14.5
bundler
null
null
null
null
null
iopattern
This prints details on the I/O access pattern for the disks, such as the percentage of events that were of a random or sequential nature. By default, totals for all disks are printed. An event is considered random when the heads seek. This program prints the percentage of events that are random. The size of the seek is not measured - it's either random or not. Since this uses DTrace, only users with root privileges can run this command.
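The random/sequential classification can be sketched with awk: an event counts as sequential when it starts where the previous one ended, otherwise the heads had to seek. The offset/size pairs below are fabricated sample data, not DTrace output.

```shell
# Classify I/O events (offset size pairs, fabricated sample data):
# sequential if an event starts at the previous event's end, else random.
awk 'NR > 1 { if ($1 == prev_end) seq++; else ran++ }
     { prev_end = $1 + $2 }
     END { printf "%%RAN %d %%SEQ %d\n",
                  int(100 * ran / (ran + seq)),
                  int(100 * seq / (ran + seq)) }' <<'EOF'
0 8
8 8
16 8
4096 8
EOF
```

Of the three transitions above, two are sequential and one (the jump to offset 4096) is random, so the sketch reports %RAN 33 %SEQ 66.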
iopattern - print disk I/O pattern. Uses DTrace.
iopattern [-v] [-d device] [-f filename] [-m mount_point] [interval [count]]
-v print timestamp, string -d device instance name to snoop (eg, dad0) -f filename full pathname of file to snoop -m mount_point mountpoint for filesystem to snoop
Default output, print I/O summary every 1 second: # iopattern Print 10 second samples: # iopattern 10 Print 12 x 5 second samples: # iopattern 5 12 Snoop events on the root filesystem only: # iopattern -m / FIELDS %RAN percentage of events of a random nature %SEQ percentage of events of a sequential nature COUNT number of I/O events MIN minimum I/O event size MAX maximum I/O event size AVG average I/O event size KR total kilobytes read during sample KW total kilobytes written during sample DEVICE device name MOUNT mount point FILE filename (basename) for I/O operation TIME timestamp, string IDEA Ryan Matteson DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT iopattern will run forever until Ctrl-C is hit, or the specified count is reached. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO iosnoop(1M), iotop(1M), dtrace(1M) version 0.70 July 25, 2005 iopattern(1m)
finger
The finger utility displays information about the system users. Options are: -4 Forces finger to use IPv4 addresses only. -6 Forces finger to use IPv6 addresses only. -s Display the user's login name, real name, terminal name and write status (as a ``*'' before the terminal name if write permission is denied), idle time, login time, and either office location and office phone number, or the remote host. If -o is given, the office location and office phone number is printed (the default). If -h is given, the remote host is printed instead. Idle time is in minutes if it is a single integer, hours and minutes if a ``:'' is present, or days if a ``d'' is present. If it is an “*”, the login time indicates the time of last login. Login time is displayed as the day name if less than 6 days, else month, day; hours and minutes, unless more than six months ago, in which case the year is displayed rather than the hours and minutes. Unknown devices as well as nonexistent idle and login times are displayed as single asterisks. -h When used in conjunction with the -s option, the name of the remote host is displayed instead of the office location and office phone. -o When used in conjunction with the -s option, the office location and office phone information is displayed instead of the name of the remote host. -g This option restricts the gecos output to only the users' real name. It also has the side-effect of restricting the output of the remote host when used in conjunction with the -h option. -k Disable all use of the user accounting database. -l Produce a multi-line format displaying all of the information described for the -s option as well as the user's home directory, home phone number, login shell, mail status, and the contents of the files .forward, .plan, .project and .pubkey from the user's home directory. If idle time is at least a minute and less than a day, it is presented in the form ``hh:mm''. Idle times greater than a day are presented as ``d day[s]hh:mm''. 
Phone numbers specified as eleven digits are printed as ``+N-NNN-NNN-NNNN''. Numbers specified as ten or seven digits are printed as the appropriate subset of that string. Numbers specified as five digits are printed as ``xN-NNNN''. Numbers specified as four digits are printed as ``xNNNN''. If write permission is denied to the device, the phrase ``(messages off)'' is appended to the line containing the device name. One entry per user is displayed with the -l option; if a user is logged on multiple times, terminal information is repeated once per login. Mail status is shown as ``No Mail.'' if there is no mail at all, ``Mail last read DDD MMM ## HH:MM YYYY (TZ)'' if the person has looked at their mailbox since new mail arrived, or ``New mail received ...'', ``Unread since ...'' if they have new mail. -p Prevent the -l option of finger from displaying the contents of the .forward, .plan, .project and .pubkey files. -m Prevent matching of user names. User is usually a login name; however, matching will also be done on the users' real names, unless the -m option is supplied. All name matching performed by finger is case insensitive. If no options are specified, finger defaults to the -l style output if operands are provided, otherwise to the -s style. Note that some fields may be missing, in either format, if information is not available for them. If no arguments are specified, finger will print an entry for each user currently logged into the system. The finger utility may be used to look up users on a remote machine. The format is to specify a user as “user@host”, or “@host”, where the default output format for the former is the -l style, and the default output format for the latter is the -s style. The -l option is the only option that may be passed to a remote machine. If the file .nofinger exists in the user's home directory, and the program is not run with superuser privileges, finger behaves as if the user in question does not exist. 
The optional finger.conf(5) configuration file can be used to specify aliases. Since finger is invoked by fingerd(8), aliases will work for both local and network queries. ENVIRONMENT The finger utility utilizes the following environment variable, if it exists: FINGER This variable may be set with favored options to finger. FILES /etc/finger.conf alias definition data base /var/log/utx.lastlogin last login data base SEE ALSO chpass(1), w(1), who(1), finger.conf(5), fingerd(8) D. Zimmerman, The Finger User Information Protocol, RFC 1288, December, 1991. HISTORY The finger command appeared in 3.0BSD. BUGS The finger utility does not recognize multibyte characters. macOS 14.5 January 21, 2010 macOS 14.5
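The phone-number formatting rules described above can be sketched in portable shell. This is an illustrative reimplementation of the documented behavior, not finger's actual code; format_phone is a hypothetical helper name:

```shell
#!/bin/sh
# Sketch of finger(1)'s documented phone-number formatting:
# eleven digits -> +N-NNN-NNN-NNNN; ten or seven digits -> the
# appropriate subset of that string; five -> xN-NNNN; four -> xNNNN.
format_phone() {
  n=$1
  case ${#n} in
    11) printf '+%s-%s-%s-%s\n' \
          "$(printf '%s' "$n" | cut -c1)" \
          "$(printf '%s' "$n" | cut -c2-4)" \
          "$(printf '%s' "$n" | cut -c5-7)" \
          "$(printf '%s' "$n" | cut -c8-11)" ;;
    10) printf '%s-%s-%s\n' \
          "$(printf '%s' "$n" | cut -c1-3)" \
          "$(printf '%s' "$n" | cut -c4-6)" \
          "$(printf '%s' "$n" | cut -c7-10)" ;;
     7) printf '%s-%s\n' \
          "$(printf '%s' "$n" | cut -c1-3)" \
          "$(printf '%s' "$n" | cut -c4-7)" ;;
     5) printf 'x%s-%s\n' \
          "$(printf '%s' "$n" | cut -c1)" \
          "$(printf '%s' "$n" | cut -c2-5)" ;;
     4) printf 'x%s\n' "$n" ;;
     *) printf '%s\n' "$n" ;;   # other lengths are passed through
  esac
}

format_phone 18005551212   # +1-800-555-1212
format_phone 5551212       # 555-1212
```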
finger – user information lookup program
finger [-46gklmpsho] [user ...] [user@host ...]
null
null
mig
The mig command invokes the Mach Interface Generator to generate Remote Procedure Call (RPC) code for client-server style Mach IPC from specification files.
mig - Mach Interface Generator
mig [ option ... ] file
-q/-Q Omit/emit warning messages. -v/-V Verbose mode (on/off); summarizes types and routines as they are processed. -l/-L Controls (off/on) whether or not generated code logs RPC events to system logs. -k/-K Controls (on/off) whether generated code complies with ANSI C standards. -s/-S Controls (on/off) whether generated server-side code includes a generated symbol table. -b/-B Controls (on/off) whether generated code includes bounds-checking annotations, such as __counted_by. -i prefix Specify User file prefix. -user path Specify name of user-side RPC generated source file. -server path Specify name of server-side RPC generated source file. -header path Specify name of user-side generated header file. -sheader path Specify name of server-side generated header file. -iheader path Specify internal header file name. -dheader path Specify defines generated header file. -maxonstack value Specify maximum size of message on stack. -split Use split headers. -arch arch Specify machine architecture for target code. -MD Option is passed to the C compiler for dependency generation. -cpp This option is ignored. -cc path Specify pathname to specific C compiler to use as the preprocessor. -migcom path Specify pathname to specific migcom compiler to use for source code generation. -isysroot path Specify SDK root directory. Additional options provided are passed along to the C compiler unchanged. Apple Computer, Inc. November 20, 2009 MIG(1)
null
sntp
-d Enable debug logging. -g milliseconds Gap between requests in milliseconds. -n number Number of DNS records to use for each host argument. This number need not match the number of records a host lookup returns: numbers smaller than the number of records will fetch only that many records, while numbers larger than the number of records will fetch some records more than once, as required, to reach the number of fetches desired. -r bind(2) the NTP reserved port (123) for source communications. -S Use clock_settime(2) to set the system clock if the offset is greater than 50 milliseconds or -s is not specified. -s Use adjtime(2) to slew the system clock if the offset is less than 50 milliseconds. -t seconds Maximum number of seconds to wait for responses. -z path Path to dump header state to. This is useful for sntpd(8) to read from. Header data is stored on disk in big-endian (NBO) format. USAGE sntp pool.ntp.org SEE ALSO clock_settime(2), adjtime(2), timed(8), sntpd(8) HISTORY This sntp should not be confused with the Network Time Foundation's sntp implementation. This is a simplified client that supports a minimal subset of features for compatibility. This sntp first appeared in macOS 11.0. Darwin 8/4/10 Darwin
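The -n record-selection rule can be illustrated with a small shell sketch. pick_records is a hypothetical helper, not part of sntp; it shows how COUNT fetches are drawn from however many records a lookup returned:

```shell
#!/bin/sh
# Illustrative sketch of sntp's documented -n rule: take COUNT
# addresses from the records a lookup returned, truncating when COUNT
# is smaller than the record count, and cycling through the records in
# order, as required, when COUNT is larger.
pick_records() {
  count=$1; shift
  total=$#
  i=0
  while [ "$i" -lt "$count" ]; do
    # 1-based index into the positional parameters, wrapping around
    idx=$(( i % total + 1 ))
    eval "printf '%s\n' \"\${$idx}\""
    i=$(( i + 1 ))
  done
}

# e.g. a lookup for a pool hostname returned three records,
# but five fetches were requested with -n 5:
pick_records 5 192.0.2.1 192.0.2.2 192.0.2.3
```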
sntp – A very Simple Network Time Protocol client program
sntp [-drSs] [-g milliseconds] [-t seconds] host-name-or-IP
null
null
klist
klist lists the Kerberos principal and Kerberos tickets held in a credentials cache, or the keys held in a keytab file.
klist - list cached Kerberos tickets
klist [-e] [[-c] [-l] [-A] [-f] [-s] [-a [-n]]] [-C] [-k [-i] [-t] [-K]] [-V] [-d] [cache_name|keytab_name]
-e Displays the encryption types of the session key and the ticket for each credential in the credential cache, or each key in the keytab file. -l If a cache collection is available, displays a table summarizing the caches present in the collection. -A If a cache collection is available, displays the contents of all of the caches in the collection. -c List tickets held in a credentials cache. This is the default if neither -c nor -k is specified. -f Shows the flags present in the credentials, using the following abbreviations: F Forwardable f forwarded P Proxiable p proxy D postDateable d postdated R Renewable I Initial i invalid H Hardware authenticated A preAuthenticated T Transit policy checked O Okay as delegate a anonymous -s Causes klist to run silently (produce no output). klist will exit with status 1 if the credentials cache cannot be read or is expired, and with status 0 otherwise. -a Display list of addresses in credentials. -n Show numeric addresses instead of reverse-resolving addresses. -C List configuration data that has been stored in the credentials cache when klist encounters it. By default, configuration data is not listed. -k List keys held in a keytab file. -i In combination with -k, defaults to using the default client keytab instead of the default acceptor keytab, if no name is given. -t Display the time entry timestamps for each keytab entry in the keytab file. -K Display the value of the encryption key in each keytab entry in the keytab file. -d Display the authdata types (if any) for each entry. -V Display the Kerberos version number and exit. If cache_name or keytab_name is not specified, klist will display the credentials in the default credentials cache or keytab file as appropriate. If the KRB5CCNAME environment variable is set, its value is used to locate the default ticket cache. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. 
FILES KCM: Default location of Kerberos 5 credentials cache FILE:/etc/krb5.keytab Default location for the local host's keytab file. SEE ALSO kinit(1), kdestroy(1), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KLIST(1)
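The one-letter flag abbreviations printed by klist -f can be decoded with a small lookup helper. This is an illustrative table built from the list above; flag_name is a hypothetical name, not part of klist:

```shell
#!/bin/sh
# Map klist -f flag letters to their documented meanings.
# Letters are case-sensitive: e.g. F = Forwardable, f = forwarded.
flag_name() {
  case $1 in
    F) echo "Forwardable" ;;        f) echo "forwarded" ;;
    P) echo "Proxiable" ;;          p) echo "proxy" ;;
    D) echo "postDateable" ;;       d) echo "postdated" ;;
    R) echo "Renewable" ;;          I) echo "Initial" ;;
    i) echo "invalid" ;;            H) echo "Hardware authenticated" ;;
    A) echo "preAuthenticated" ;;   T) echo "Transit policy checked" ;;
    O) echo "Okay as delegate" ;;   a) echo "anonymous" ;;
    *) echo "unknown flag: $1" ;;
  esac
}

# Decode a flags string such as "FPRIA" one letter at a time:
printf 'FPRIA\n' | fold -w1 | while read -r c; do flag_name "$c"; done
```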
null
open
The open command opens a file (or a directory or URL), just as if you had double-clicked the file's icon. If no application name is specified, the default application as determined via LaunchServices is used to open the specified files. If the file is in the form of a URL, the file will be opened as a URL. You can specify one or more file names (or pathnames), which are interpreted relative to the shell or Terminal window's current working directory. For example, the following command would open all Word files in the current working directory: open *.doc Opened applications inherit environment variables just as if you had launched the application directly through its full path. This behavior was also present in Tiger. The options are as follows: -a application Specifies the application to use for opening the file -b bundle_identifier Specifies the bundle identifier for the application to use when opening the file -e Causes the file to be opened with the TextEdit application -t Causes the file to be opened with the default text editor, as determined via LaunchServices -f Reads input from standard input and opens the results in the default text editor. End input by sending EOF character (type Control-D). Also useful for piping output to open and having it open in the default text editor. -F Opens the application "fresh," that is, without restoring windows. Saved persistent state is lost, except for Untitled documents. -W Causes open to wait until the applications it opens (or that were already open) have exited. Use with the -n flag to allow open to function as an appropriate app for the $EDITOR environment variable. -R Reveals the file(s) in the Finder instead of opening them. -n Open a new instance of the application(s) even if one is already running. -g Do not bring the application to the foreground. -j Launches the app hidden. 
--arch ARCH Launch with the given cpu architecture type and subtype; ARCH should be one of any, arm, arm64, arm64e, arm64_32, x86_64, x86_64h, i386. Two integers matching the values for cpu_type_t and cpu_subtype_t can be specified as integers separated by a '/' character, like "12/13" for CPU_TYPE_ARM/CPU_SUBTYPE_ARM_V8. -h Searches header locations for a header whose name matches the given string and then opens it. Pass a full header name (such as NSView.h) for increased performance. -s For -h, partial or full SDK name to use; if supplied, only SDKs whose names contain the argument value are searched. Otherwise the highest versioned SDK in each platform is used. -u Opens URL with whatever application claims the url scheme, even if URL also matches a file path --args All remaining arguments are passed to the opened application in the argv parameter to main(). These arguments are not opened or interpreted by the open tool. --env VAR Adds VAR to the environment of the launched application. VAR should be formatted NAME=VALUE or NAME. --stdin PATH Launches the application with stdin connected to PATH. --stdout PATH Launches the application with stdout connected to PATH. --stderr PATH Launches the application with stderr connected to PATH.
open – open files and directories
open [-e] [-t] [-f] [-F] [-W] [-R] [-n] [-g] [-j] [-h] [-u URL] [-s sdk] [-b bundle_identifier] [-a application] [--env VAR] [--stderr PATH] [--stdin PATH] [--stdout PATH] [--arch ARCH] [--args arg1 ...]
null
"open '/Volumes/Macintosh HD/foo.txt'" opens the document in the default application for its type (as determined by LaunchServices). "open '/Volumes/Macintosh HD/Applications/'" opens that directory in the Finder. "open -a /Applications/TextEdit.app '/Volumes/Macintosh HD/foo.txt'" opens the document in the application specified (in this case, TextEdit). "open -b com.apple.TextEdit '/Volumes/Macintosh HD/foo.txt'" opens the document in the application specified (in this case, TextEdit). "open -e '/Volumes/Macintosh HD/foo.txt'" opens the document in TextEdit. "ls | open -f" writes the output of the 'ls' command to a file in /tmp and opens the file in the default text editor (as determined by LaunchServices). "open http://www.apple.com/" opens the URL in the default browser. "open 'file://localhost/Volumes/Macintosh HD/foo.txt'" opens the document in the default application for its type (as determined by LaunchServices). "open 'file://localhost/Volumes/Macintosh HD/Applications/'" opens that directory in the Finder. "open -h NSView" lists headers whose names contain NSView and allows you to choose which ones to open. "open -h NSView.h" immediately opens NSView.h. "open --env MallocStackLogging=YES -b com.apple.TextEdit" launches TextEdit with the environment variable "MallocStackLogging" set to "YES" "open -h NSView -s OSX10.12" lists headers whose names contain NSView in the MacOSX 10.12 SDK and allows you to choose which ones to open. HISTORY First appeared in NextStep. macOS April 14, 2017 macOS
GetFileInfo
Tools supporting Carbon development, including /usr/bin/GetFileInfo, were deprecated with Xcode 6. /usr/bin/GetFileInfo is a tool to get the file attributes. With no flags, GetFileInfo retrieves all information about the file. If exactly one option is provided, GetFileInfo retrieves and displays just that information; supplying more than one is an error. Flags: -P Acts on a symlink file instead of the file the symlink resolves to. -a[<attribute-letter>] Gets a file's attribute bits where <attribute-letter> is one of the following: a Alias file b Has bundle c Custom icon (allowed on folders) d Located on the desktop (allowed on folders) e Extension is hidden (allowed on folders) i Inited - Finder is aware of this file and has given it a location in a window. (allowed on folders) l Locked m Shared (can run multiple times) n File has no INIT resource s System file (name locked) t "Stationery Pad" file v Invisible (allowed on folders) z Busy (allowed on folders) The value of a single attribute is printed as 0 for off or false, 1 for on or true. If no attribute letter is specified, the value of all attributes is returned, with lowercase letters representing off or false, and uppercase representing on or true. -t Gets the file type, a string of exactly four characters. If the type is not set, these will display as an empty pair of quotation marks. Directories do not have types, so the type will be skipped if all information is being displayed; specifying -t for a directory is an error. -c Gets the file's creator, a string of four characters enclosed in quotation marks. If the creator is not set, these will display as an empty pair of quotation marks. Directories do not have creators, so the creator will be skipped if all information is being displayed; specifying -c for a directory is an error. -d Gets the creation date, a string of the form "mm/dd/yyyy hh:mm:ss" in 24-hour clock format. 
-m Gets the modification date, a string of the form "mm/dd/yyyy hh:mm:ss" in 24-hour clock format. RETURN VALUES 0 success 1 syntax error 2 any other error SEE ALSO SetFile(1)
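The case convention in GetFileInfo -a output (uppercase = on/true, lowercase = off/false) can be decoded with a short sketch. decode_attrs is a hypothetical helper for illustration, not part of the tool:

```shell
#!/bin/sh
# Decode a GetFileInfo -a attribute string: uppercase letters denote
# attributes that are on/true, lowercase letters off/false.
decode_attrs() {
  printf '%s\n' "$1" | fold -w1 | while read -r c; do
    case $c in
      [A-Z]) echo "$c: on" ;;
      [a-z]) echo "$(printf '%s' "$c" | tr 'a-z' 'A-Z'): off" ;;
    esac
  done
}

# A hypothetical two-attribute string: Alias bit set, invisible bit clear.
decode_attrs "Av"
```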
/usr/bin/GetFileInfo – get attributes of files and directories (DEPRECATED)
/usr/bin/GetFileInfo [-P -a[<attribute-letter>] | -c | -d | -m | -t] file ...
null
The following command line gets and prints the creator for the "Late Breaking News" file: /Developer/Tools/GetFileInfo -c "Late Breaking News" This command line prints the modification date of "myFile": /Developer/Tools/GetFileInfo -m myFile Mac OS X September 27, 2005 Mac OS X
kmutil
kmutil is a multipurpose tool for managing kernel extensions (kexts) and kext collections on disk. It takes a subcommand and a number of options, some of which are common to multiple commands. kmutil interacts with the KernelManagement subsystem for loading, unloading, and diagnosing kexts. It can also be used for inspecting the contents of a kext collection, interacting with the kernel to query load information, finding kexts and kext dependencies on disk, creating new collections of kexts, and displaying other diagnostic information. COLLECTIONS Starting in macOS 11, kernel extensions are found in 3 different artifacts on disk. Each artifact is loaded exactly once at boot, and a kext must be linked into one of the three artifacts before it can be used. • The boot kext collection contains the kernel and all system kexts necessary for starting and bootstrapping the operating system. It is an immutable artifact in /System/Library/KernelCollections. On Apple Silicon Macs, this artifact is kept exclusively in the Preboot volume. • The system kext collection, if used, contains all remaining system kexts required by the operating system, and is loaded after boot. It is prelinked against the boot kext collection, and is also an immutable artifact in /System/Library/KernelCollections. Note that on Apple Silicon Macs, there is no system kext collection. • The auxiliary kext collection, if built, contains kexts placed in /Library/Extensions and any other third-party kexts installed on the system. It is dynamically built by kernelmanagerd(8) and prelinked against the boot kext collection and, if present, the system kext collection. On Apple Silicon Macs, the auxiliary kext collection is located in the Preboot volume. For more information on installing third-party kexts into the auxiliary kext collection, see INSTALLING. 
INSTALLING As of macOS 11, a kext is only loadable once it has been built into the auxiliary kext collection by kernelmanagerd(8), and the system has rebooted. At boot, kernelmanagerd(8) will load this collection into the kernel, which allows all of the kexts in the collection to match and load. If kmutil load, kextload(8), or any invocation of a KextManager function attempts to load a kext that is not yet loadable, kernelmanagerd(8) will stage the kext into a protected location, validate it, and prompt the user to approve a rebuild of the auxiliary kext collection. If the validation and rebuild are successful, the kext will be available on the next boot. COMMANDS Commands and their specific options are listed below. For other options common to most commands, see OPTIONS. • create: Create a new kext collection according to the options provided. This command should only be used by developers investigating custom kernels or replacing the contents of the boot kext collection or system kext collection. As of macOS 13.0, a KDK is required to create a new boot or system kext collection. To load or unload kexts already in a collection see the load and unload subcommands. -n, --new <boot|sys|aux> Specify one or more of boot, sys, or aux to build one or more collections at a time. -L, --no-system-collection If building an auxiliary collection, don’t look for or generate a system kext collection. -s, --strip Specify none, all, or allkexts (default: none) to strip symbol information after a kext has been built into a collection. -k, --kernel When building the boot kext collection, specify the path to the kernel. If -V is specified, kmutil will append the variant extension. -x, --explicit-only Only consider the bundle identifiers and paths explicitly specified, along with their dependencies. --compress Compress results using the LZFSE algorithm. --img4-encode Encode the collection in an img4 payload. 
• inspect: Inspect & display the contents of a kext collection according to the options provided. --show-mach-header Print the mach header(s) in the collection(s). Use with --verbose to also display contents of inner fileset entries. --show-fileset-entries Only print mach header information present in fileset subentries. This is useful for determining prelink addresses and other load information about a kext in a collection. --show-kext-load-addresses When displaying the default output, include the load addresses of the kexts inline. --show-kext-uuids Include the UUIDs of each kext in the output. --show-kernel-uuid Print the UUID (and version) of the kernel if present in the collection. This will output nothing if a kernel is not found in the specified collection(s). --show-kernel-uuid-only Print the UUID of the kernel if present in the collection, and suppress default kext information. --show-prelink-info Dump the raw __PRELINK_INFO segment of the collection(s). --show-collection-metadata Print the metadata of the collection(s), such as their prelink uuids, the uuids of collections they link against, and the build version that produced the collection. --show-mach-boot-properties Print derived Mach-O boot properties of the collection(s). --json Output the section layout as JSON. • load: Load the extension(s) specified with -b or -p. If the extension is not already in the auxiliary kext collection, the collection will be dynamically rebuilt by kernelmanagerd(8) for use on the next reboot. For more information, see INSTALLING. For kexts already contained in the boot, system, or auxiliary kext collection, the load subcommand will start the kext if it has not already been started. For most kexts, the load subcommand must run as the superuser (root). Kexts installed under /System/ with an OSBundleAllowUserLoad property set to true may be loaded via the load subcommand by non-root users. 
macOS 10.6 introduced C functions for loading kexts, KextManagerLoadKextWithIdentifier() and KextManagerLoadKextWithURL(), which are described in Apple’s developer documentation. These functions continue to be supported as of macOS 11. -P, --personality-name If this kext is already loaded, send the named personality to the catalog. -e, --no-default-repositories Don’t use the default repositories for kexts. If you use this option, you will have to explicitly specify all dependencies of the kext being loaded, or otherwise worked on, using the --repository option. --load-style Control the load style of the request to load extensions. Valid options: • start-and-match: Start the kernel extension and also begin matching on any accompanying personalities. (default) • start-only: Start any specified kernel extensions but do not begin matching against any personalities provided by those extensions (unless matching has already started for them). • match-only: Do not explicitly start any of the given kernel extensions but do begin matching on IOKit personalities provided by them. This is useful to allow extensions that were previously loaded with start-only to now begin matching. • unload: Unload the extension(s) specified with -b or -p. The extension must have been previously linked into a kext collection and loaded by the KernelManagement system. A successful call to the unload subcommand will invoke the kext’s stop function and end the kext’s IOKit lifecycle; however, the kext remains in kernel memory as part of the kext collection from which it was loaded. The extension will not be removed from any collection, including the auxiliary kext collection, and will still be available for loading without requiring a reboot. If another loaded kext has a dependency on the kext being unloaded, the unload will fail. You can determine whether a kext has dependents using the showloaded subcommand. 
-c, --class-name <class-name> Terminate all instances of the IOService class, but do not unload its kext or unload its personalities. -P, --personalities-only Terminate services and remove personalities only; do not unload kexts. • libraries: Search for library kexts in the boot kext collection and the system kext collection (if available) that define symbols needed for linking the specified kexts, printing their bundle identifiers and versions. Information on symbols not found is printed after the library kext information for each architecture. A handy use of the libraries subcommand is to run it with just the --xml flag and pipe the output to pbcopy(1). If the exit status is zero (indicating no undefined or multiply-defined symbols), you can open your kext’s Info.plist file in a text editor and paste the library declarations over the OSBundleLibraries property. You can specify other collections with the libraries subcommand to look for dependencies in other collections as well. --all-symbols List all symbols; found, not found, or found more than once. --onedef-symbols List all symbols found, with the library kext they were found in. --multdef-symbols List all symbols found more than once, with their library kexts. --undef-symbols List all symbols not found in any library. --unsupported Look in unsupported kexts for symbols. -c, --compatible-versions Use library kext compatible versions rather than current versions. --xml Print XML fragment suitable for pasting. • showloaded: Display the status/information of loaded kernel extensions on the system, according to the options provided. By default, the following is shown for each kext: Index The load index of the kext (used to track linkage references). Gaps in the list indicate kexts that have been unloaded. Refs The number of references to this kext by others. If nonzero, the kext cannot be unloaded. Address The address in kernel space where the kext has been loaded. 
Size The number of bytes of kernel memory that the kext occupies. If this is zero, the kext is a built-in part of the kernel that has an entry as a kext for resolving dependencies among kexts. Wired The number of wired bytes of kernel memory that the kext occupies. Architecture The architecture of the kext, displayed only if using the --arch-info option. Name The CFBundleIdentifier of the kext. Version The CFBundleVersion of the kext. <Linked Against> The index numbers of all other kexts that this kext has a reference to. The following options are available for the showloaded command: --show-mach-headers Show the mach headers of the loaded extensions and/or kernel, if --show-kernel is specified. --show <loaded|unloaded|all> Restrict output to a specific load state. --collection <boot|sys|aux|codeless> Restrict the load information to a particular kind. Defaults to all non-codeless kexts if unspecified. To display information about codeless kexts and dexts that the kernel knows about, use --collection codeless --show all. --sort Sort the output by load address of each extension, instead of by index. --list-only Print the list of extensions only, omitting the header on the first line. --arch-info Include the architecture info in output. --no-kernel-components Do not show kernel components in output. --show-kernel Show load information about the kernel in the output. Use with --show-mach-headers to view the kernel mach header. • dumpstate: Display diagnostic information about the state of kernelmanagerd(8). • find: Locate and print paths of kexts (or kexts in collections) matching the filter criteria. For more information on filtering, see FILTERING OPTIONS. Searches are performed using the same kext management logic used elsewhere in kmutil, by which only kexts specified with the repository or bundle options are eligible; this is specifically not an exhaustive, recursive filesystem search. 
• check: Check that load information and/or kext collections on the system are consistent. --collection-linkage Check to see that the collections on the system are properly linked together by inspecting the UUID metadata in the prelink info section of each collection on the system. --load-info Check to see that the load information in the kernel properly mirrors the collections on disk. This is the default action if no other options are specified. --kernel-only If checking load info, just check that the kernel matches, and no other kexts. --collection <boot|sys|aux>: Restrict consistency check to one (or more) of the specified collection types. If unspecified, check all by default. • log: Display logging information about the kext management subsystem. This is a wrapper around the system log(1) command with a pre-defined predicate to show only logs from kernelmanagerd and kmutil. • print-diagnostics: Perform all possible tests on one or more kexts, and indicate whether or not the kext can be successfully built into a collection. If there are issues found with the kext, diagnostic information is reported which can help to isolate and resolve the problem. Note that some tests require root. Note that custom collections, variants, and architectures can be specified with the GENERIC and COLLECTION kmutil options. -p, --bundle-path Print diagnostics for the bundle specified at this path (can be specified more than once). -Z, --no-resolve-dependencies Don’t resolve kext dependencies. -D, --diagnose-dependencies Recursively diagnose all kext dependencies of each kext specified with -p. Ignored when -Z is present. --plugins Diagnose each kext found in the PlugIns directory of kexts specified with -p. --do-staging Perform kext staging to the SIP protected location. This test requires root privileges. • clear-staging: Clear the staging directory managed by kernelmanagerd(8) and kmutil(8). • migrate: System subcommand used during a software update. 
• install: System subcommand used to update the Boot and System kext collections. • rebuild: System subcommand used to attempt an Auxiliary kext collection rebuild. This command evaluates the current Auxiliary kext collection for changes, which may add newly approved third-party kexts and remove kexts that were previously installed and have since been deleted or moved from their installed location. To uninstall a kext from the Auxiliary kext collection: 1. Delete or move the kext bundle(s) to be uninstalled from their installed location. 2. Run “kmutil rebuild” from Terminal and confirm the Auxiliary kext collection changes. 3. Authorize the Auxiliary kext collection rebuild. 4. Reboot the system for the changes to take effect. RECOVERY COMMANDS The following commands can only be run in Recovery Mode. • trigger-panic-medic: Remove the auxiliary kext collection and remove all kext approvals on the next boot. This subcommand can only be used in Recovery Mode. This command can be used to recover the system from a kext that causes a kernel panic. After calling trigger-panic-medic, all previously installed kexts will prompt the user to re-approve them when they are loaded or installed. • configure-boot: Configure a custom boot object policy. This command can be used to install a custom mach-o file from which the system will boot. In order to install custom boot objects, you must first enter Medium Security by using the Startup Disk utility in Recovery Mode. Setting a custom boot object will further lower the system security to Permissive Security, and you will be prompted to confirm this action. -c, --custom-boot-object The Mach-O that the booter will load and start. The file can be optionally compressed and wrapped in an img4. -C, --compress Compress the custom boot object -v, --volume Install the custom boot object for the specified volume --raw Treat custom boot object as a raw file to be installed. 
The object will be installed with custom Mach-O boot properties derived from --lowest-virtual-address and --entry-point. --lowest-virtual-address Lowest virtual memory address of the raw boot object. (iBoot will map the raw boot object at this virtual address) --entry-point Virtual memory address of entry point into the raw boot object
null
kmutil <subcommand> kmutil <load|unload|showloaded> kmutil <find|libraries|print-diagnostics> kmutil <create|inspect|check|log|dumpstate> kmutil <clear-staging|trigger-panic-medic> kmutil -h
GLOBAL OPTIONS The following options are global to most kmutil subcommands. -a, --arch Specify the architecture to use for the extensions or collections specified. Defaults to the current running architecture. -V, --variant-suffix Specify a variant, i.e., development, debug, or kasan, of extensions or collections to prefer instead of the release defaults. -z, --no-authentication Disable staging and validation of extensions when performing an action. -v, --verbose Enable verbose output. -r, --repository Paths to directories containing extensions. If -R is specified, the volume root will be automatically prepended. -R, --volume-root Specify the target volume to operate on. Defaults to /. FILTERING OPTIONS The following options can be used in certain kmutil commands for filtering its input or output. -p, --bundle-path Include the bundle specified at this path in the results. Return an error if not found. -b, --bundle-identifier Search for, and/or include this identifier in the results. Return an error if not found. --optional-identifier Search for, and/or include this identifier in the results, if possible. --elide-identifier Do not include this identifier in the results. -f, --filter Specify a filter, in predicate syntax, which must match against properties of an extension to be included in the input or output. This argument can be overridden by other arguments for specifying and including extensions. -F, --filter-all Specify a filter, in predicate syntax, which must match against properties of an extension to be included in the input or output. This argument can not be overridden by other arguments for specifying and including extensions. --kdk The KDK path to use for discovering kexts when creating a new boot or sys kext collection. --build Use with caution. This specifies the build version number to use when discovering kexts and building kext collections. If no build version is specified, the current system build version number is used. 
     For more information on predicate filter syntax, see the predicate
     programming guide available in the Apple developer documentation.

COLLECTION OPTIONS
     The following options can be used to specify paths and options for
     handling kext collections.  If left unspecified, collection paths will
     default to the default paths for the system kext collections.

     -B, --boot-path
             The path to the boot kext collection.

     -S, --system-path
             The path to the system kext collection.

     -A, --aux-path
             The path to the auxiliary kext collection.

     -M, --allow-missing-collections
             Recover gracefully, where applicable, if a collection is
             missing.
EXAMPLES
     Inspect the contents of system kext collections:

           $ kmutil inspect -v --show-mach-header -B /System/Library/KernelCollections/BootKernelExtensions.kc
           $ kmutil inspect --show-fileset-entries --bundle-identifier com.apple.kernel

     Load and unload kexts:

           $ kmutil load -b com.apple.filesystems.apfs
           $ kmutil load -p /Library/Extensions/foo.kext
           $ kmutil unload -p /System/Library/Extensions/apfs.kext

     Show load information about kexts:

           $ kmutil showloaded --show-mach-headers --bundle-identifier com.example.foo
           $ kmutil showloaded --show-kernel --collection boot
           $ kmutil showloaded --show unloaded --filter "'CFBundleVersion' == '15.2.13'"

     Find dependencies of kexts:

           $ kmutil libraries -p /Library/Extensions/foo.kext --xml | pbcopy

     Create custom kext collections:

           $ kmutil create -n boot -B myboot.kc -k mykernel --elide-identifier com.apple.filesystems.apfs
           $ kmutil create -n boot sys -B myboot.kc -S mysys.kc -V debug
           $ kmutil create -n boot -B myboot.kc -k mykernel
           $ kmutil create -n sys -B myboot.kc -S mysys.kc -F "'OSBundleRequired' == 'Safe Boot'" -s stripkexts
           $ kmutil create -n aux -r /Library/Extensions -L

DIAGNOSTICS
     kmutil exits with a zero status on success.  On error, kmutil prints an
     error message and then exits with a non-zero status.

     Well-known exit codes:

     •   3:  kmutil failed because the kext was missing when trying to unload
         it.
     •   27: kmutil failed because user approval is required.
     •   28: kmutil failed because a reboot is required.

COMPLETIONS
     For frequent users, kmutil can generate a shell completion script by
     invoking:

           $ kmutil --generate-completion-script <shell>

     This option supports zsh(1), bash(1), and fish(1).  If no shell is
     specified, a completion script will be generated for the currently
     running shell.

SEE ALSO
     kernelmanagerd(8)

2023-08-25                                                          KMUTIL(8)
localedef
The localedef utility reads source definitions for one or more locale
categories belonging to the same locale from the file named in the -i option
(if specified) or from standard input.

The name operand identifies the target locale.  The localedef utility
supports the creation of public, or generally accessible locales, as well as
private, or restricted-access locales.

Each category source definition is identified by the corresponding
environment variable name and terminated by an END category-name statement.

LC_CTYPE       Defines character classification and case conversion.

LC_COLLATE     Defines collation rules.

LC_MONETARY    Defines the format and symbols used in formatting of monetary
               information.

LC_NUMERIC     Defines the decimal delimiter, grouping, and grouping symbol
               for non-monetary numeric editing.

LC_TIME        Defines the format and content of date and time information.

LC_MESSAGES    Defines the format and values of affirmative and negative
               responses.
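For illustration, a minimal category source file might look like the sketch below.  The response patterns shown are examples only, not the values of any particular locale, and the comment and escape characters are declared explicitly rather than relying on defaults:

```
comment_char %
escape_char  /

% Affirmative and negative response patterns for this locale.
LC_MESSAGES
yesexpr  "^[yY]"
noexpr   "^[nN]"
END LC_MESSAGES
```

A file such as this could then be compiled with, e.g., `localedef -i example.src ./example` (both pathnames hypothetical); because the name operand contains a slash, the result would be a private locale stored at that path.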
localedef – define locale environment
localedef [-c] [-f charmap] [-i sourcefile] name
The following options are supported:

-c             Create permanent output even if warning messages have been
               issued.

-f charmap     Specify the pathname of a file containing a mapping of
               character symbols and collating element symbols to actual
               character encodings.

-i sourcefile  The pathname of a file containing the source definitions.  If
               this option is not present, source definitions will be read
               from standard input.

OPERANDS
     The following operand is supported:

     name    Identifies the locale.  If the name contains one or more slash
             characters, name will be interpreted as a pathname where the
             created locale definitions will be stored.  If name does not
             contain any slash characters, the locale will be public.  This
             capability is restricted to users with appropriate privileges.
             (As a consequence of specifying one name, although several
             categories can be processed in one execution, only categories
             belonging to the same locale can be processed.)

ENVIRONMENT
     The following environment variables affect the execution of localedef:

     LANG        Provide a default value for the internationalization
                 variables that are unset or null.  If LANG is unset or null,
                 the corresponding value from the implementation-dependent
                 default locale will be used.  If any of the
                 internationalization variables contains an invalid setting,
                 the utility will behave as if none of the variables had been
                 defined.

     LC_ALL      If set to a non-empty string value, override the values of
                 all the other internationalization variables.

     LC_COLLATE  (This variable has no effect on localedef; the POSIX locale
                 will be used for this category.)

     LC_CTYPE    Determine the locale for the interpretation of sequences of
                 bytes of text data as characters (for example, single- as
                 opposed to multi-byte characters in arguments and input
                 files).  This variable has no effect on the processing of
                 localedef input data; the POSIX locale is used for this
                 purpose, regardless of the value of this variable.
     LC_MESSAGES Determine the locale that should be used to affect the
                 format and contents of diagnostic messages written to
                 standard error.

     NLSPATH     Determine the location of message catalogues for the
                 processing of LC_MESSAGES.

EXIT STATUS
     The following exit values are returned:

     0     No errors occurred and the locales were successfully created.

     1     Warnings occurred and the locales were successfully created.

     2     The locale specification exceeded implementation limits, or the
           coded character set or sets used were not supported by the
           implementation, and no locale was created.

     >2    Warnings or errors occurred and no output was created.

Darwin                        September 9, 2004                        Darwin
ruby
Ruby is an interpreted scripting language for quick and easy object-oriented
programming.  It has many features to process text files and to do system
management tasks (as in Perl).  It is simple, straightforward, and
extensible.

If you want a language for easy object-oriented programming, or you don't
like the Perl ugliness, or you do like the concept of LISP but don't like too
many parentheses, Ruby might be your language of choice.

FEATURES
     Ruby's features are as follows:

     Interpretive
           Ruby is an interpreted language, so you don't have to recompile
           programs written in Ruby to execute them.

     Variables have no type (dynamic typing)
           Variables in Ruby can contain data of any type.  You don't have to
           worry about variable typing.  Consequently, it has a weaker
           compile-time check.

     No declaration needed
           You can use variables in your Ruby programs without any
           declarations.  Variable names denote their scope: global, class,
           instance, or local.

     Simple syntax
           Ruby has a simple syntax influenced slightly by Eiffel.

     No user-level memory management
           Ruby has automatic memory management.  Objects no longer
           referenced from anywhere are automatically collected by the
           garbage collector built into the interpreter.

     Everything is an object
           Ruby is a purely object-oriented language, and has been since its
           creation.  Even such basic data as integers are seen as objects.

     Class, inheritance, and methods
           Being an object-oriented language, Ruby naturally has basic
           features like classes, inheritance, and methods.

     Singleton methods
           Ruby has the ability to define methods for individual objects.
           For example, you can define a press-button action for a particular
           widget by defining a singleton method for the button.  Or you can
           build your own prototype-based object system using singleton
           methods, if you want to.

     Mix-in by modules
           Ruby intentionally does not have multiple inheritance, as it is a
           source of confusion.  Instead, Ruby has the ability to share
           implementations across the inheritance tree.  This is often called
           a "Mix-in".

     Iterators
           Ruby has iterators for loop abstraction.

     Closures
           In Ruby, you can objectify procedures.

     Text processing and regular expressions
           Ruby has a bunch of text-processing features as in Perl.

     M17N, character set independent
           Ruby supports multilingualized programming.  It is easy to process
           texts written in many different natural languages and encoded in
           many different character encodings, without dependence on Unicode.

     Bignums
           With built-in bignums, you can, for example, calculate
           factorial(400).

     Reflection and domain-specific languages
           Class is also an instance of the Class class.  Definition of
           classes and methods is an expression, just as 1+1 is, so your
           programs can even write and modify programs.  Thus you can write
           your application in your own programming language on top of Ruby.

     Exception handling
           As in Java(tm).

     Direct access to the OS
           Ruby can use most UNIX system calls, often used in system
           programming.

     Dynamic loading
           On most UNIX systems, you can load object files into the Ruby
           interpreter on-the-fly.

     Rich libraries
           In addition to the "builtin libraries" and "standard libraries"
           bundled with Ruby, a vast number of third-party libraries ("gems")
           are available via the package management system called RubyGems,
           namely the gem(1) command.  Visit RubyGems.org
           (https://rubygems.org/) to find the gems you need, and explore
           GitHub (https://github.com/) to see how they are being developed
           and used.
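Several of the features above (mix-ins, singleton methods, iterators, and closures) can be seen together in a short sketch; the module, class, and variable names here are invented for the example:

```ruby
# Mix-in: a module shares one implementation across classes,
# in place of multiple inheritance.
module Greeter
  def greet
    "Hello, #{name}!"
  end
end

class Person
  include Greeter          # mix the module in
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

matz = Person.new("Matz")

# Singleton method: defined on this one object only; other
# Person instances keep the mixed-in version.
def matz.greet
  "Hi, I wrote Ruby."
end

# Iterator with a closure: the block passed to each captures
# the surrounding local variable `shouts`.
shouts = []
%w[foo bar].each { |word| shouts << word.upcase }
```

Here a fresh Person instance answers with the mixed-in greet, while matz answers with the singleton method defined on that single object.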
ruby – Interpreted object-oriented scripting language
ruby [--copyright] [--version] [-SUacdlnpswvy] [-0[octal]] [-C directory] [-E external[:internal]] [-F[pattern]] [-I directory] [-K[c]] [-T[level]] [-W[level]] [-e command] [-i[extension]] [-r library] [-x[directory]] [--{enable|disable}-FEATURE] [--dump=target] [--verbose] [--] [program_file] [argument ...]
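The -n and -p switches described in the options wrap the script in an implicit loop over the input.  The effect of `echo matz | ruby -p -e '$_.tr! "a-z", "A-Z"'` can be sketched in plain Ruby as follows; the input and output variables are stand-ins for standard input and output:

```ruby
require 'stringio'

input  = StringIO.new("matz\n")   # stands in for standard input
output = StringIO.new             # stands in for standard output

# What -p conceptually expands the script to:
while line = input.gets
  $_ = line                 # -n and -p bind each input record to $_
  $_.tr! "a-z", "A-Z"       # the user's -e script runs here
  output.print $_           # -p prints $_ after each pass (-n does not)
end
```

The -n switch wraps the same loop without the final print, which is why it is typically paired with explicit output in the script.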
The Ruby interpreter accepts the following command-line options (switches).
They are quite similar to those of perl(1).

--copyright      Prints the copyright notice and quits immediately without
                 running any script.

--version        Prints the version of the Ruby interpreter and quits
                 immediately without running any script.

-0[octal]        (The digit "zero".)  Specifies the input record separator
                 ($/) as an octal number.  If no digit is given, the null
                 character is taken as the separator.  Other switches may
                 follow the digits.  -00 turns Ruby into paragraph mode.
                 -0777 makes Ruby read a whole file at once as a single
                 string, since there is no legal character with that value.

-C directory
-X directory     Causes Ruby to switch to the directory.

-E external[:internal]
--encoding external[:internal]
                 Specifies the default value(s) for external encodings and
                 the internal encoding.  Values should be separated with a
                 colon (:).  You can omit the one for internal encodings;
                 then the value (Encoding.default_internal) will be nil.

--external-encoding=encoding
--internal-encoding=encoding
                 Specify the default external or internal character encoding.

-F pattern       Specifies the input field separator ($;).

-I directory     Tells Ruby where to load library scripts.  The directory
                 path will be added to the load-path variable ($:).

-K kcode         Specifies the KANJI (Japanese) encoding.  The default value
                 for script encodings (__ENCODING__) and external encodings
                 (Encoding.default_external) will be the specified one.
                 kcode can be one of:
                       e    EUC-JP
                       s    Windows-31J (CP932)
                       u    UTF-8
                       n    ASCII-8BIT (BINARY)

-S               Makes Ruby use the PATH environment variable to search for
                 the script, unless its name begins with a slash.  This is
                 used to emulate #! on machines that don't support it, in the
                 following manner:

                       #! /usr/local/bin/ruby
                       # This line makes the next one a comment in Ruby \
                         exec /usr/local/bin/ruby -S $0 $*

                 On some systems $0 does not always contain the full
                 pathname, so you need the -S switch to tell Ruby to search
                 for the script if necessary (to handle embedded spaces and
                 such).  A better construct than $* would be ${1+"$@"}, but
                 it does not work if the script is being interpreted by
                 csh(1).

-T[level=1]      Turns on taint checks at the specified level (default 1).

-U               Sets the default value for internal encodings
                 (Encoding.default_internal) to UTF-8.

-W[level=2]      Turns on verbose mode at the specified level without
                 printing the version message at the beginning.  The level
                 can be:
                       0            Verbose mode is "silence".  Sets $VERBOSE
                                    to nil.
                       1            Verbose mode is "medium".  Sets $VERBOSE
                                    to false.
                       2 (default)  Verbose mode is "verbose".  Sets $VERBOSE
                                    to true.  -W2 is the same as -w.

-a               Turns on auto-split mode when used with -n or -p.  In auto-
                 split mode, Ruby executes
                       $F = $_.split
                 at the beginning of each loop.

-c               Causes Ruby to check the syntax of the script and exit
                 without executing.  If there are no syntax errors, Ruby
                 prints "Syntax OK" to the standard output.

-d
--debug          Turns on debug mode.  $DEBUG will be set to true.

-e command       Specifies the script from the command line, while telling
                 Ruby not to search the rest of the arguments for a script
                 file name.

-h
--help           Prints a summary of the options.

-i extension     Specifies in-place-edit mode.  The extension, if specified,
                 is added to the old file name to make a backup copy.  For
                 example:

                       % echo matz > /tmp/junk
                       % cat /tmp/junk
                       matz
                       % ruby -p -i.bak -e '$_.upcase!' /tmp/junk
                       % cat /tmp/junk
                       MATZ
                       % cat /tmp/junk.bak
                       matz

-l               (The lowercase letter "ell".)  Enables automatic line-
                 ending processing: first sets $\ to the value of $/, and
                 then chops every line read using chop!.

-n               Causes Ruby to assume the following loop around your
                 script, which makes it iterate over file name arguments
                 somewhat like sed -n or awk.
                       while gets
                         ...
                       end

-p               Acts mostly the same as the -n switch, but prints the value
                 of the variable $_ at the end of each loop.  For example:

                       % echo matz | ruby -p -e '$_.tr! "a-z", "A-Z"'
                       MATZ

-r library       Causes Ruby to load the library using require.  It is
                 useful when using -n or -p.

-s               Enables some switch parsing for switches after the script
                 name but before any file name arguments (or before a --).
                 Any switches found there are removed from ARGV and set the
                 corresponding variable in the script.  For example:

                       #! /usr/local/bin/ruby -s
                       # prints "true" if invoked with `-xyz' switch.
                       print "true\n" if $xyz

-v               Enables verbose mode.  Ruby prints its version at the
                 beginning and sets the variable $VERBOSE to true.  Some
                 methods print extra messages if this variable is true.  If
                 this switch is given and no other switches are present, Ruby
                 quits after printing its version.

-w               Enables verbose mode without printing the version message
                 at the beginning.  It sets the $VERBOSE variable to true.

-x[directory]    Tells Ruby that the script is embedded in a message.
                 Leading garbage will be discarded until the first line that
                 starts with "#!" and contains the string "ruby".  Any
                 meaningful switches on that line will be applied.  The end
                 of the script must be specified with either EOF, ^D
                 (control-D), ^Z (control-Z), or the reserved word __END__.
                 If the directory name is specified, Ruby will switch to that
                 directory before executing the script.

-y
--yydebug        DO NOT USE.  Turns on compiler debug mode.  Ruby will print
                 a bunch of internal state messages during compilation.  Only
                 specify this switch if you are going to debug the Ruby
                 interpreter.

--disable-FEATURE
--enable-FEATURE
                 Disables (or enables) the specified FEATURE.

                 --disable-gems
                 --enable-gems     Disables (or enables) RubyGems libraries.
                                   By default, Ruby loads the latest version
                                   of each installed gem.  The Gem constant
                                   is true if RubyGems is enabled, false
                                   otherwise.

                 --disable-rubyopt
                 --enable-rubyopt  Ignores (or considers) the RUBYOPT
                                   environment variable.
                                   By default, Ruby considers the variable.

                 --disable-all
                 --enable-all      Disables (or enables) all features.

--dump=target    Dumps some information by printing the specified target.
                 target can be one of:

                       version     Version description; same as --version.
                       usage       Brief usage message; same as -h.
                       help        Long help message; same as --help.
                       syntax      Syntax check; same as -c --yydebug.
                       yydebug     Compiler debug mode; same as --yydebug.
                                   Only specify this if you are going to
                                   debug the Ruby interpreter.
                       parsetree
                       parsetree_with_comment
                                   AST node tree.  Only specify this if you
                                   are going to debug the Ruby interpreter.
                       insns       Disassembled instructions.  Only specify
                                   this if you are going to debug the Ruby
                                   interpreter.

--verbose        Enables verbose mode without printing the version message
                 at the beginning.  It sets the $VERBOSE variable to true.
                 If this switch is given and no script arguments (script file
                 or -e options) are present, Ruby quits immediately.

ENVIRONMENT
     RUBYLIB     A colon-separated list of directories that are added to
                 Ruby's library load path ($:).  Directories from this
                 environment variable are searched before the standard load
                 path is searched.  e.g.:
                       RUBYLIB="$HOME/lib/ruby:$HOME/lib/rubyext"

     RUBYOPT     Additional Ruby options.  e.g.:
                       RUBYOPT="-w -Ke"
                 Note that RUBYOPT can contain only -d, -E, -I, -K, -r, -T,
                 -U, -v, -w, -W, --debug, --disable-FEATURE, and
                 --enable-FEATURE.

     RUBYPATH    A colon-separated list of directories that Ruby searches
                 for Ruby programs when the -S flag is specified.  This
                 variable precedes the PATH environment variable.

     RUBYSHELL   The path to the system shell command.  This environment
                 variable is enabled only for the mswin32, mingw32, and OS/2
                 platforms.  If this variable is not defined, Ruby refers to
                 COMSPEC.

     PATH        Ruby refers to the PATH environment variable when calling
                 Kernel#system.

     Ruby also depends on some RubyGems-related environment variables unless
     RubyGems is disabled.  See the help of gem(1), as below.
           % gem help

GC ENVIRONMENT
     The Ruby garbage collector (GC) tracks objects in fixed-sized slots, but
     each object may have auxiliary memory allocations handled by the malloc
     family of C standard library calls (malloc(3), calloc(3), and
     realloc(3)).  In this documentation, the "heap" refers to the Ruby
     object heap of fixed-sized slots, while "malloc" refers to auxiliary
     allocations commonly referred to as the "process heap".

     Thus there are at least two possible ways to trigger GC:

     1.   Reaching the object limit.
     2.   Reaching the malloc limit.

     In Ruby 2.1, the generational GC was introduced, and the limits are
     divided into young and old generations, providing two additional ways
     to trigger a GC:

     3.   Reaching the old object limit.
     4.   Reaching the old malloc limit.

     There are currently 4 possible areas where the GC may be tuned by the
     following 11 environment variables:

     RUBY_GC_HEAP_INIT_SLOTS
             Initial allocation slots.  Introduced in Ruby 2.1, default:
             10000.

     RUBY_GC_HEAP_FREE_SLOTS
             Prepare at least this number of slots after GC.  Allocate this
             number of slots if there are not enough slots.  Introduced in
             Ruby 2.1, default: 4096.

     RUBY_GC_HEAP_GROWTH_FACTOR
             Increase the allocation rate of heap slots by this factor.
             Introduced in Ruby 2.1, default: 1.8, minimum: 1.0 (no growth).

     RUBY_GC_HEAP_GROWTH_MAX_SLOTS
             The allocation rate is limited to this number of slots,
             preventing excessive allocation due to
             RUBY_GC_HEAP_GROWTH_FACTOR.  Introduced in Ruby 2.1, default: 0
             (no limit).

     RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR
             Perform a full GC when the number of old objects is more than
             R * N, where R is this factor and N is the number of old
             objects after the last full GC.  Introduced in Ruby 2.1.1,
             default: 2.0.

     RUBY_GC_MALLOC_LIMIT
             The initial limit of young-generation allocation from the
             malloc family.  GC will start when this limit is reached.
             Default: 16MB.

     RUBY_GC_MALLOC_LIMIT_MAX
             The maximum limit of young-generation allocation from malloc
             before GC starts.  Prevents excessive malloc growth due to
             RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR.  Introduced in Ruby 2.1,
             default: 32MB.

     RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR
             Increases the limit of young-generation malloc calls, reducing
             GC frequency but increasing malloc growth until
             RUBY_GC_MALLOC_LIMIT_MAX is reached.  Introduced in Ruby 2.1,
             default: 1.4, minimum: 1.0 (no growth).

     RUBY_GC_OLDMALLOC_LIMIT
             The initial limit of old-generation allocation from malloc; a
             full GC will start when this limit is reached.  Introduced in
             Ruby 2.1, default: 16MB.

     RUBY_GC_OLDMALLOC_LIMIT_MAX
             The maximum limit of old-generation allocation from malloc
             before a full GC starts.  Prevents excessive malloc growth due
             to RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR.  Introduced in Ruby
             2.1, default: 128MB.

     RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR
             Increases the limit of old-generation malloc allocation,
             reducing full-GC frequency but increasing malloc growth until
             RUBY_GC_OLDMALLOC_LIMIT_MAX is reached.  Introduced in Ruby
             2.1, default: 1.2, minimum: 1.0 (no growth).

STACK SIZE ENVIRONMENT
     Stack size environment variables are implementation-dependent and
     subject to change with different versions of Ruby.  The VM stack is
     used for pure-Ruby code and managed by the virtual machine.  The
     machine stack is used by the operating system, and its usage depends on
     C extensions as well as C compiler options.  Using lower values for
     these may allow applications to keep more Fibers or Threads running,
     but increases the chance of SystemStackError exceptions and
     segmentation faults (SIGSEGV).  These environment variables are
     available since Ruby 2.0.0.  All values are specified in bytes.

     RUBY_THREAD_VM_STACK_SIZE
             VM stack size used at thread creation.  default: 131072 (32-bit
             CPU) or 262144 (64-bit).

     RUBY_THREAD_MACHINE_STACK_SIZE
             Machine stack size used at thread creation.  default: 524288 or
             1048575.

     RUBY_FIBER_VM_STACK_SIZE
             VM stack size used at fiber creation.
             default: 65536 or 131072.

     RUBY_FIBER_MACHINE_STACK_SIZE
             Machine stack size used at fiber creation.  default: 262144 or
             524288.

SEE ALSO
     https://www.ruby-lang.org/       The official web site.
     https://www.ruby-toolbox.com/    Comprehensive catalog of Ruby
                                      libraries.

REPORTING BUGS
     •   Security vulnerabilities should be reported via an email to
         security@ruby-lang.org.  Reported problems will be published after
         being fixed.

     •   Other bugs and feature requests can be reported via the Ruby Issue
         Tracking System (https://bugs.ruby-lang.org/).  Do not report
         security vulnerabilities via this system, because it publishes
         vulnerabilities immediately.

AUTHORS
     Ruby is designed and implemented by Yukihiro Matsumoto
     ⟨matz@netlab.jp⟩.

     See ⟨https://bugs.ruby-lang.org/projects/ruby/wiki/Contributors⟩ for
     contributors to Ruby.

UNIX                           April 14, 2018                           UNIX
stringdups
stringdups examines the content of malloc blocks in the specified target
process.  For all blocks that have the same content, it shows a line with the
number of such blocks, their total allocated size (the total size in the
malloc heap, not just the specific size of their content), and the average
allocated size.

stringdups requires one argument -- either the process ID or the full or
partial executable name of the process to examine, or the pathname of a
memory graph file generated by leaks.  When generating a memory graph with
leaks for use with stringdups, it is necessary to use the -fullContent
argument to include labels describing the contents of memory.

If the MallocStackLogging environment variable was set when the target
process was launched, stringdups also displays stack backtraces or call trees
showing where all the blocks with a particular grouping of content were
allocated.

stringdups gathers the content of blocks of various types, including:

•   C strings (composed of UTF8 characters, null terminated, of any length)

•   Pascal strings (composed of UTF8 characters with a length byte at the
    start, no longer than 255 characters, not necessarily null terminated)

•   NSStrings of all types (immutable, mutable, UTF8, Unicode).  Malloc
    blocks which are the storage blocks for non-inline or mutable NSStrings
    are listed separately.  The string content is shown for both, but the
    block sizes accurately show what is allocated in the malloc heap for that
    particular chunk of storage.

•   NSDate

•   NSNumber

•   NSPathStore2 (Cocoa's representation of file paths)

•   __NSMallocBlock__.  For these, stringdups shows the symbol name of the
    code block (^) that this storage is associated with.  If debug
    information is available, the source path and line number of the code
    block are also shown.

•   item counts for collection classes such as NSArray, NSSet, and
    NSDictionary
stringdups – Identify duplicate strings or other objects in malloc blocks of a target process
stringdups [-minimumCount count] [-stringsOnly] [-nostacks] [-callTrees] [-invertCallTrees] pid | partial-executable-name | memory-graph-file
-minimumCount count
        Only print information for object descriptions which appear at least
        count times in the target process.  The default minimum count is 2.
        To see all strings in the target process, use 1, or use
        'heap <pid> -addresses all'.

-stringsOnly
        Only print information for objects that have string content, such as
        C or Pascal strings, or NSString.

-nostacks
        Do not print stack backtraces or call trees even if the target
        process has the MallocStackLogging environment variable set.

-callTrees
        If stack backtraces are available, then by default all the object
        descriptions for a particular stack backtrace are consolidated
        together.  If this argument is passed, however, the output is
        consolidated by each particular string, and a call tree is displayed
        showing the allocation backtraces of all occurrences of objects with
        that description.  This output can be very lengthy if minimumCount is
        a low value, because the same call tree may be displayed many times.

-invertCallTrees
        Same as -callTrees, except that the call trees are printed from
        hottest to coldest stack frame, so the leaf malloc call appears
        first.

SEE ALSO
     heap(1), leaks(1), malloc_history(1), vmmap(1), DevToolsSecurity(1)

macOS 14.5                       July 2, 2016                       macOS 14.5