zipcloak
zipcloak encrypts all unencrypted entries in the zipfile. This is the default action. The -d option is used to decrypt encrypted entries in the zipfile. zipcloak uses original zip encryption, which is considered weak. Note: The encryption code of this program is not copyrighted and is put in the public domain. It was originally written in Europe and can be freely distributed from any country, including the U.S.A. (Previously, if this program was imported into the U.S.A., it could not be re-exported from the U.S.A. to another country.) See the file README.CR included in the source distribution for more on this. Otherwise, the Info-ZIP license applies.
zipcloak - encrypt entries in a zipfile
zipcloak [-d] [-b path] [-h] [-v] [-L] zipfile ARGUMENTS zipfile Zipfile to encrypt entries in
-b path --temp-path path Use the directory given by path for the temporary zip file. -d --decrypt Decrypt encrypted entries (copy if given wrong password). -h --help Show a short help. -L --license Show software license. -O path --output-file zipfile Write output to new archive zipfile, leaving original archive as is. -q --quiet Quiet operation. Suppresses some informational messages. -v --version Show version information.
To be added. BUGS Large files (> 2 GB) and large archives not yet supported. Split archives not yet supported. A workaround is to convert the split archive to a single-file archive using zip and then use zipcloak on the single-file archive. If needed, the resulting archive can then be split again using zip. SEE ALSO zip(1), unzip(1) AUTHOR Info-ZIP v3.0 of 8 May 2008 zipcloak(1)
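The upstream EXAMPLES section is still marked "To be added"; based on the options documented above, typical invocations (archive names are illustrative) might look like:

```shell
# Encrypt all unencrypted entries in place; zipcloak prompts for a password.
$ zipcloak archive.zip

# Decrypt previously encrypted entries.
$ zipcloak -d archive.zip

# Write the encrypted result to a new archive, leaving the original untouched.
$ zipcloak -O secured.zip archive.zip
```

Remember that zipcloak uses the original (weak) zip encryption, so this is not suitable for protecting sensitive data.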
git-upload-archive
Invoked by git archive --remote and sends a generated archive to the other end over the Git protocol. This command is usually not invoked directly by the end user. The UI for the protocol is on the git archive side, and the program pair is meant to be used to get an archive from a remote repository. SECURITY In order to protect the privacy of objects that have been removed from history but may not yet have been pruned, git-upload-archive avoids serving archives for commits and trees that are not reachable from the repository’s refs. However, because calculating object reachability is computationally expensive, git-upload-archive implements a stricter but easier-to-check set of rules: 1. Clients may request a commit or tree that is pointed to directly by a ref. E.g., git archive --remote=origin v1.0. 2. Clients may request a sub-tree within a commit or tree using the ref:path syntax. E.g., git archive --remote=origin v1.0:Documentation. 3. Clients may not use other sha1 expressions, even if the end result is reachable. E.g., neither a relative commit like master^ nor a literal sha1 like abcd1234 is allowed, even if the result is reachable from the refs. Note that rule 3 disallows many cases that do not have any privacy implications. These rules are subject to change in future versions of git, and the server accessed by git archive --remote may or may not follow these exact rules. If the config option uploadArchive.allowUnreachable is true, these rules are ignored, and clients may use arbitrary sha1 expressions. This is useful if you do not care about the privacy of unreachable objects, or if your object database is already publicly available for access via non-smart-http.
git-upload-archive - Send archive back to git-archive
git upload-archive <repository>
<repository> The repository to get a tar archive from. GIT Part of the git(1) suite Git 2.41.0 2023-06-01 GIT-UPLOAD-ARCHIVE(1)
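git upload-archive is normally triggered by git archive rather than run directly; a client-side sketch of the interaction described above (the remote URL and tag are illustrative):

```shell
# Fetch a tar archive of the tag v1.0 from a remote; this invokes
# git-upload-archive on the server side (rule 1 above).
$ git archive --remote=git://example.com/project.git v1.0 | tar -xf -

# Request only a sub-tree using the ref:path syntax (rule 2 above).
$ git archive --remote=git://example.com/project.git v1.0:Documentation | tar -xf -

# On the server, allow arbitrary sha1 expressions (disables the rules above).
$ git config uploadArchive.allowUnreachable true
```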
null
xml2-config
xml2-config is a tool used to determine the compiler and linker flags that should be used to compile and link programs that use GNOME-XML.
xml2-config - script to get information about the installed version of GNOME-XML
xml2-config [--prefix[=DIR]] [--libs] [--cflags] [--version] [--help]
xml2-config accepts the following options: --version Print the currently installed version of GNOME-XML on the standard output. --libs Print the linker flags that are necessary to link a GNOME-XML program. Add --dynamic after --libs to print only shared library linking information. --cflags Print the compiler flags that are necessary to compile a GNOME-XML program. --prefix=PREFIX If specified, use PREFIX instead of the installation prefix that GNOME-XML was built with when computing the output for the --cflags and --libs options. This option must be specified before any --libs or --cflags options. AUTHOR This manual page was written by Fredrik Hallenberg <hallon@lysator.liu.se>, for the Debian GNU/Linux system (but may be used by others). Version 3 July 1999 GNOME-XML(1)
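A typical build sketch using the flags described above (source file names are illustrative):

```shell
# Compile a source file that uses GNOME-XML (libxml2), picking up the
# library's include paths from xml2-config.
$ gcc $(xml2-config --cflags) -c parser-demo.c

# Link the object file against the library.
$ gcc parser-demo.o $(xml2-config --libs) -o parser-demo
```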
null
encode_keychange
encode_keychange produces a KeyChange string using the old and new passphrases, as described in Section 5 of RFC 2274 "User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)". The -t option is mandatory and specifies the hash transform type to use. The transform is used to convert a passphrase to the master key for a given user (Ku), to convert the master key to the localized key (Kul), and to hash the old Kul with the random bits. Passphrases are obtained by examining a number of sources until one succeeds (in the order listed): command line options (see the -N and -O options below); the file $HOME/.snmp/passphrase.ek, which should contain only two lines, the old and the new passphrase; standard input, or user input from the terminal.
encode_keychange - produce the KeyChange string for SNMPv3
encode_keychange -t md5|sha1 [OPTIONS]
-E [0x]<engineID> EngineID used for Kul generation. <engineID> is interpreted as a hex string when preceded by 0x, otherwise it is treated as a text string. If no <engineID> is specified, it is constructed from the first IP address for the local host. -f Force passphrases to be read from standard input. -h Display the help message. -N "<new_passphrase>" Passphrase used to generate the new Ku. -O "<old_passphrase>" Passphrase used to generate the old Ku. -P Turn off the prompt for passphrases when getting data from standard input. -v Be verbose. -V Echo passphrases to terminal. SEE ALSO The localized key method is defined in RFC 2274, Sections 2.6 and A.2, and originally documented in U. Blumenthal, N. C. Hien, B. Wijnen, "Key Derivation for Network Management Applications", IEEE Network Magazine, April/May issue, 1997. V5.6.2.1 16 Nov 2006 encode_keychange(1)
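A sketch of the usage described above (passphrases and the engine ID are illustrative placeholders, not real values):

```shell
# Generate a KeyChange string with SHA-1, passing both passphrases on the
# command line via the -O and -N options documented above.
$ encode_keychange -t sha1 -O oldpassphrase -N newpassphrase

# Use a specific engineID for Kul generation, given as a hex string
# (the leading 0x selects hex interpretation).
$ encode_keychange -t md5 -E 0x8000000001020304 -O oldpassphrase -N newpassphrase
```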
null
traptoemail
traptoemail converts SNMP traps into email messages.
traptoemail - snmptrapd handler script to convert snmp traps into emails
traptoemail [-f FROM] [-s SMTPSERVER] ADDRESSES
-f FROM sender address, defaults to "root" -s SMTPSERVER SMTP server, defaults to "localhost" ADDRESSES recipient addresses V5.6.2.1 16 Nov 2006 traptoemail(1)
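traptoemail is normally registered as a trap handler in snmptrapd.conf rather than run by hand; a configuration sketch (the script path, SMTP server, and addresses are illustrative):

```shell
# In snmptrapd.conf: forward every received trap as an email message.
traphandle default /usr/bin/traptoemail -f snmptrapd@example.org -s mail.example.org admin@example.org
```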
smbutil
The smbutil command is used to control the SMB requester and issue various commands. There are two types of options: global, and local to the specified command. Global options are as follows: -h Print a short help message. -v Verbose output. The commands and local options are: help command Print usage information about command. lookup [-w host] [-t node_type] [-e] name Resolve the given name to an IP address. The NetBIOS name server can be directly specified via the -w option. The NetBIOS name type can be specified via the -t option; the default is to look up file servers. For a complete list of name types please see "http://support.microsoft.com/kb/163409". The NetBIOS names will be percent-unescaped if the -e option is specified. status [-ae] hostname Resolve the given hostname (IP address or DNS name) to a NetBIOS workgroup and system name. All NetBIOS names will be displayed if the -a option is specified. All NetBIOS names will be percent-escaped if the -e option is specified. view [-options] //[domain;][user[:password]@]server List resources available on the specified server for the user. The options are as follows: -A authorize only. -N don't prompt for a password. -G allow guest access. -g authorize with guest only. -a authorize with anonymous only. -f don't share session. identity [-N] //[domain;][user[:password]@]server Display the user's identity as known by the server for the authenticated session. Will not prompt for a password if the -N option is specified. dfs smb://[domain;][user[:password]@]server/DfsRoot[/DfsLink] Display the Dfs referrals for this URL for the authenticated session. statshares [-m mount_path | -a] [-f format] If -m is specified, it prints the attributes of the share mounted at mount_path. If -a is specified, it prints the attributes of all mounted shares. You cannot specify both -m and -a, since they are mutually exclusive. -f controls the output format. If -f is not specified, then human-readable format is used.
Supported formats are: Json. multichannel [-m mount_path | -a] [-options] [-f format] If -m is specified, it prints the multichannel attributes of the share mounted at mount_path. If -a is specified, it prints the multichannel attributes of all mounted shares. You cannot specify both -m and -a, since they are mutually exclusive. -f controls the output format. If -f is not specified, then human-readable format is used. Supported formats are: Json. The options are as follows: -i print information about the session. -c print information about the client's interfaces. -s print information about the server's interfaces. -x print information about the established connection. If no option is given, then all options will be shown. snapshot [-m mount_path | -a] [-f format] If -m is specified, it prints out a list of snapshots for the item at the path of mount_path. If -a is specified, it prints the snapshots of all mounted shares. You cannot specify both -m and -a, since they are mutually exclusive. -f controls the output format. If -f is not specified, then human-readable format is used. Supported formats are: Json. smbstat [-f format] path_to_item List various information about the file or directory located at path_to_item. If -f is not specified, then human-readable format is used. Supported formats are: Json. FILES nsmb.conf Keeps static parameters for connections and other information. See man nsmb.conf for details. AUTHORS Boris Popov ⟨bp@butya.kz⟩, ⟨bp@FreeBSD.org⟩ BUGS Please report any bugs to Apple. macOS 14.5 February 14, 2000 macOS 14.5
smbutil – interface to the SMB requester
smbutil [-hv] command [-options] [args]
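Typical invocations of the commands described above (host names, users, and share paths are illustrative):

```shell
# Resolve a NetBIOS name to an IP address.
$ smbutil lookup FILESERVER

# List the resources a server offers to a given user (prompts for a password).
$ smbutil view //WORKGROUP;jdoe@fileserver.example.com

# Print the attributes of all mounted shares in Json format.
$ smbutil statshares -a -f Json
```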
diff
The diff utility compares the contents of file1 and file2 and writes to the standard output the list of changes necessary to convert one file into the other. No output is produced if the files are identical. Output options (mutually exclusive): -C number --context number Like -c but produces a diff with number lines of context. -c Produces a diff with 3 lines of context. With -c the output format is modified slightly: the output begins with identification of the files involved and their creation dates and then each change is separated by a line with fifteen *'s. The lines removed from file1 are marked with ‘- ’; those added to file2 are marked ‘+ ’. Lines which are changed from one file to the other are marked in both files with ‘! ’. Changes which lie within 3 lines of each other are grouped together on output. -D string --ifdef string Creates a merged version of file1 and file2 on the standard output, with C preprocessor controls included so that a compilation of the result without defining string is equivalent to compiling file1, while defining string will yield file2. -e --ed Produces output in a form suitable as input for the editor utility, ed(1), which can then be used to convert file1 into file2. Note that when comparing directories with -e, the resulting file may no longer be interpreted as an ed(1) script. Output is added to indicate which file each set of ed(1) commands applies to. These hunks can be manually extracted to produce an ed(1) script, which can also be applied with patch(1). -f --forward-ed Identical output to that of the -e flag, but in reverse order. It cannot be digested by ed(1). --help This option prints a summary to stdout and exits with status 0. -n Produces a script similar to that of -e, but in the opposite order and with a count of changed lines on each insert or delete command. This is the form used by rcsdiff. -q --brief Just print a line when the files differ. Does not output a list of changes. 
-U number --unified number Like -u but produces a diff with number lines of context. -u Produces a unified diff with 3 lines of context. A unified diff is similar to the context diff produced by the -c option. However, unlike with -c, all lines to be changed (added and/or removed) are present in a single section. --version This option prints a version string to stdout and exits with status 0. -y --side-by-side Output in two columns with a marker between them. The marker can be one of the following: space Corresponding lines are identical. '|' Corresponding lines are different. '<' Files differ and only the first file contains the line. '>' Files differ and only the second file contains the line. Comparison options: -A algo, --algorithm algo Configure the algorithm used when comparing files. diff supports 3 algorithms: myers The Myers diff algorithm performs an O(ND) comparison between the two files. An optimisation is present: when worst-case files are detected, the Myers algorithm bails out and produces correct, but non-optimal, diff output. patience The Patience variant of the Myers algorithm; this variant attempts to create more aesthetically pleasing diff output by logically grouping lines. stone The Stone algorithm looks for the longest common subsequence between compared files. Stone encounters worst-case performance when there are long common subsequences. In large files this can lead to a significant performance impact. The Stone algorithm is maintained for compatibility. The default diff algorithm when this flag is not given is “myers”. diff will fall back to the “stone” algorithm if the “myers” algorithm cannot be supported with the given options and the algorithm has not been set explicitly. The default algorithm is affected by the POSIXLY_CORRECT and POSIX_PEDANTIC environment variables. When either variable is set the default algorithm will be “stone”.
If the diff algorithm is selected, but cannot be supported with the given options, diff will produce an error. -a --text Treat all files as ASCII text. Normally diff will simply print “Binary files ... differ” if files contain binary characters. Use of this option forces diff to produce a diff. -B --ignore-blank-lines Causes chunks that include only blank lines to be ignored. -b --ignore-space-change Causes trailing blanks (spaces and tabs) to be ignored, and other strings of blanks to compare equal. --color=[when] Color the additions green, and removals red, or the value in the DIFFCOLORS environment variable. The possible values of when are “never”, “always” and “auto”. auto will use color if the output is a tty and the COLORTERM environment variable is set to a non-empty string. -d --minimal Try very hard to produce a diff as small as possible. This may consume a lot of processing power and memory when processing large files with many changes. -F pattern, --show-function-line pattern Like -p, but display the last line that matches the provided pattern. -I pattern --ignore-matching-lines pattern Ignores changes, insertions, and deletions whose lines match the extended regular expression pattern. Multiple -I patterns may be specified. All lines in the change must match some pattern for the change to be ignored. See re_format(7) for more information on regular expression patterns. -i --ignore-case Ignores the case of letters. E.g., “A” will compare equal to “a”. -l --paginate Pass the output through pr(1) to paginate it. -L label --label label Print label instead of the first (and second, if this option is specified twice) file name and time in the context or unified diff header. -p --show-c-function With unified and context diffs, show with each change the first 40 characters of the last line before the context beginning with a letter, an underscore or a dollar sign.
For C and Objective-C source code following standard layout conventions, this will show the prototype of the function the change applies to. -T --initial-tab Print a tab rather than a space before the rest of the line for the normal, context or unified output formats. This makes the alignment of tabs in the line consistent. -t --expand-tabs Will expand tabs in output lines. Normal or -c output adds character(s) to the front of each line which may screw up the indentation of the original source lines and make the output listing difficult to interpret. This option will preserve the original source's indentation. -w --ignore-all-blanks Is similar to -b --ignore-space-change but causes whitespace (blanks and tabs) to be totally ignored. E.g., “if ( a == b )” will compare equal to “if(a==b)”. -W number --width number Output at most number columns when using side by side format. The default value is 130. --changed-group-format GFMT Format input groups in the provided format. GFMT is a string with special keywords: %< (lines from FILE1) and %> (lines from FILE2). --ignore-file-name-case ignore case when comparing file names --no-ignore-file-name-case do not ignore case when comparing file names (default) --normal default diff output --speed-large-files stub option for compatibility with GNU diff --strip-trailing-cr strip carriage return on input files --suppress-common-lines Do not output common lines when using the side by side format --tabsize number Number of spaces representing a tab (default 8) Directory comparison options: -N --new-file If a file is found in only one directory, act as if it was found in the other directory too but was of zero size. -P --unidirectional-new-file If a file is found only in dir2, act as if it was found in dir1 too but was of zero size. -r --recursive Causes application of diff recursively to common subdirectories encountered. -S name --starting-file name Re-starts a directory diff in the middle, beginning with file name.
-s --report-identical-files Causes diff to report files which are the same, which are otherwise not mentioned. -X file --exclude-from file Exclude files and subdirectories from comparison whose basenames match lines in file. Multiple -X options may be specified. -x pattern --exclude pattern Exclude files and subdirectories from comparison whose basenames match pattern. Patterns are matched using shell-style globbing via fnmatch(3). Multiple -x options may be specified. If both arguments are directories, diff sorts the contents of the directories by name, and then runs the regular file diff algorithm, producing a change list, on text files which are different. Binary files which differ, common subdirectories, and files which appear in only one directory are described as such. In directory mode only regular files and directories are compared. If a non-regular file such as a device special file or FIFO is encountered, a diagnostic message is printed. If only one of file1 and file2 is a directory, diff is applied to the non-directory file and the file contained in the directory file with a filename that is the same as the last component of the non-directory file. If either file1 or file2 is ‘-’, the standard input is used in its place. Output Style The default (without -e, -c, or -n --rcs options) output contains lines of these forms, where XX, YY, ZZ, QQ are line numbers respective of file order. XXaYY At (the end of) line XX of file1, append the contents of line YY of file2 to make them equal. XXaYY,ZZ Same as above, but append the range of lines, YY through ZZ of file2 to line XX of file1. XXdYY At line XX delete the line. The value YY tells to which line the change would bring file1 in line with file2. XX,YYdZZ Delete the range of lines XX through YY in file1. XXcYY Change the line XX in file1 to the line YY in file2. XX,YYcZZ Replace the range of specified lines with the line ZZ. XX,YYcZZ,QQ Replace the range XX,YY from file1 with the range ZZ,QQ from file2. 
These lines resemble ed(1) subcommands to convert file1 into file2. The line numbers before the action letters pertain to file1; those after pertain to file2. Thus, by exchanging a for d and reading the line in reverse order, one can also determine how to convert file2 into file1. As in ed(1), identical pairs (where num1 = num2) are abbreviated as a single number. ENVIRONMENT DIFFCOLORS The value of this variable is the form add:rm, where add is the ASCII escape sequence for additions and rm is the ASCII escape sequence for deletions. If this is unset, diff uses green for additions and red for removals. FILES /tmp/diff.XXXXXXXX Temporary file used when comparing a device or the standard input. Note that the temporary file is unlinked as soon as it is created so it will not show up in a directory listing. EXIT STATUS The diff utility exits with one of the following values: 0 No differences were found. 1 Differences were found. >1 An error occurred. The --help and --version options exit with a status of 0.
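As a runnable sketch of the -D (--ifdef) merge output described above (file and macro names are illustrative):

```shell
# Build two small input files that differ in one line.
printf 'line one\nold line\n' > file1.txt
printf 'line one\nnew line\n' > file2.txt

# -D merges both versions under C preprocessor guards; diff exits with
# status 1 when the files differ, so ignore that status here.
diff -D NEWVER file1.txt file2.txt > merged.c || true

# merged.c now preprocesses to file1.txt's contents unless NEWVER is
# defined, in which case it yields file2.txt's contents.
cat merged.c
```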
diff – differential file and directory comparator
diff [-aBbdipTtw] [-c | -e | -f | -n | -q | -u | -y] [-A algo | --algorithm algo] [--brief] [--color=when] [--changed-group-format GFMT] [--ed] [--expand-tabs] [--forward-ed] [--ignore-all-space] [--ignore-case] [--ignore-space-change] [--initial-tab] [--minimal] [--no-ignore-file-name-case] [--normal] [--rcs] [--show-c-function] [--starting-file] [--speed-large-files] [--strip-trailing-cr] [--tabsize number] [--text] [--unified] [-I pattern | --ignore-matching-lines pattern] [-F pattern | --show-function-line pattern] [-L label | --label label] file1 file2 diff [-aBbdilpTtw] [-A algo | --algorithm algo] [-I pattern | --ignore-matching-lines pattern] [-F pattern | --show-function-line pattern] [-L label | --label label] [--brief] [--color=when] [--changed-group-format GFMT] [--ed] [--expand-tabs] [--forward-ed] [--ignore-all-space] [--ignore-case] [--ignore-space-change] [--initial-tab] [--minimal] [--no-ignore-file-name-case] [--normal] [--paginate] [--rcs] [--show-c-function] [--speed-large-files] [--starting-file] [--strip-trailing-cr] [--tabsize number] [--text] -C number | --context number file1 file2 diff [-aBbdiltw] [-A algo | --algorithm algo] [-I pattern | --ignore-matching-lines pattern] [--brief] [--color=when] [--changed-group-format GFMT] [--ed] [--expand-tabs] [--forward-ed] [--ignore-all-space] [--ignore-case] [--ignore-space-change] [--initial-tab] [--minimal] [--no-ignore-file-name-case] [--normal] [--paginate] [--rcs] [--show-c-function] [--speed-large-files] [--starting-file] [--strip-trailing-cr] [--tabsize number] [--text] -D string | --ifdef string file1 file2 diff [-aBbdilpTtw] [-A algo | --algorithm algo] [-I pattern | --ignore-matching-lines pattern] [-F pattern | --show-function-line pattern] [-L label | --label label] [--brief] [--color=when] [--changed-group-format GFMT] [--ed] [--expand-tabs] [--forward-ed] [--ignore-all-space] [--ignore-case] [--ignore-space-change] [--initial-tab] [--minimal] [--no-ignore-file-name-case] [--normal] 
[--paginate] [--rcs] [--show-c-function] [--speed-large-files] [--starting-file] [--strip-trailing-cr] [--tabsize number] [--text] -U number | --unified number file1 file2 diff [-aBbdilNPprsTtw] [-c | -e | -f | -n | -q | -u] [-A algo | --algorithm algo] [--brief] [--color=when] [--changed-group-format GFMT] [--context] [--ed] [--expand-tabs] [--forward-ed] [--ignore-all-space] [--ignore-case] [--ignore-space-change] [--initial-tab] [--minimal] [--new-file] [--no-ignore-file-name-case] [--normal] [--paginate] [--rcs] [--recursive] [--report-identical-files] [--show-c-function] [--speed-large-files] [--strip-trailing-cr] [--tabsize number] [--text] [--unidirectional-new-file] [--unified] [-I pattern | --ignore-matching-lines pattern] [-F pattern | --show-function-line pattern] [-L label | --label label] [-S name | --starting-file name] [-X file | --exclude-from file] [-x pattern | --exclude pattern] dir1 dir2 diff [-aBbditwW] [--color=when] [--expand-tabs] [--ignore-all-blanks] [--ignore-blank-lines] [--ignore-case] [--minimal] [--no-ignore-file-name-case] [--strip-trailing-cr] [--suppress-common-lines] [--tabsize number] [--text] [--width] -y | --side-by-side file1 file2 diff [--help] [--version]
Compare old_dir and new_dir recursively, generating a unified diff and treating files found only in one of those directories as new files: $ diff -ruN /path/to/old_dir /path/to/new_dir Same as above but excluding files matching the expressions “*.h” and “*.c”: $ diff -ruN -x '*.h' -x '*.c' /path/to/old_dir /path/to/new_dir Show a single line indicating if the files differ: $ diff -q /boot/loader.conf /boot/defaults/loader.conf Files /boot/loader.conf and /boot/defaults/loader.conf differ Assuming a file named example.txt with the following contents: FreeBSD is an operating system Linux is a kernel OpenBSD is an operating system Compare stdin with example.txt excluding from the comparison those lines containing either "Linux" or "Open": $ echo "FreeBSD is an operating system" | diff -q -I 'Linux|Open' example.txt - LEGACY DESCRIPTION The unified diff format's timestamps are formatted differently in legacy mode. By default, diff does not include nanoseconds or a timezone in unified diff timestamps. In legacy mode, nanoseconds and a timezone are both included. Note that patch(1) may not be able to process timestamps in the legacy format. For more information about legacy mode, see compat(5). SEE ALSO cmp(1), comm(1), diff3(1), ed(1), patch(1), pr(1), sdiff(1), compat(5) James W. Hunt and M. Douglas McIlroy, “An Algorithm for Differential File Comparison”, Computing Science Technical Report, Bell Laboratories 41, June 1976. STANDARDS The diff utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-AaDdIiLlNnPpqSsTtwXxy] are extensions to that specification. HISTORY A diff command appeared in Version 6 AT&T UNIX. libdiff was imported from the Game of Trees version control system and the default algorithm was changed to Myers for FreeBSD 14. macOS 14.5 September 8, 2022 macOS 14.5
ri
ri is a command-line front end for the Ruby API reference. You can search and read the API reference for classes and methods with ri. ri is a part of Ruby. name can be: Class | Module | Module::Class Class::method | Class#method | Class.method | method gem_name: | gem_name:README | gem_name:History All class names may be abbreviated to their minimum unambiguous form. If a name is ambiguous, all valid options will be listed. A ‘.’ matches either class or instance methods, while #method matches only instance and ::method matches only class methods. README and other files may be displayed by prefixing them with the gem name they're contained in. If the gem name is followed by a ‘:’ all files in the gem will be shown. The file name extension may be omitted where it is unambiguous. For example: ri Fil ri File ri File.new ri zip ri rdoc:README Note that shell quoting or escaping may be required for method names containing punctuation: ri 'Array.[]' ri compact\! To see the default directories ri will search, run: ri --list-doc-dirs Specifying the --system, --site, --home, --gems, or --doc-dir options will limit ri to searching only the specified directories. ri options may be set in the RI environment variable. The ri pager can be set with the RI_PAGER environment variable or the PAGER environment variable.
ri – Ruby API reference front end
ri [-ahilTv] [-d DIRNAME] [-f FORMAT] [-w WIDTH] [--[no-]pager] [--server[=PORT]] [--[no-]list-doc-dirs] [--no-standard-docs] [--[no-]{system|site|gems|home}] [--[no-]profile] [--dump=CACHE] [name ...]
-i --[no-]interactive In interactive mode you can repeatedly look up methods with autocomplete. -a --[no-]all Show all documentation for a class or module. -l --[no-]list List classes ri knows about. --[no-]pager Send output to a pager, rather than directly to stdout. -T Synonym for --no-pager. -w WIDTH --width=WIDTH Set the width of the output. --server[=PORT] Run RDoc server on the given port. The default port is 8214. -f FORMAT --format=FORMAT Use the selected formatter. The default formatter is bs for paged output and ansi otherwise. Valid formatters are: ansi, bs, markdown, rdoc. -h --help Show help and exit. -v --version Output version information and exit. Data source options: --[no-]list-doc-dirs List the directories from which ri will source documentation on stdout and exit. -d DIRNAME --doc-dir=DIRNAME List of directories from which to source documentation in addition to the standard directories. May be repeated. --no-standard-docs Do not include documentation from the Ruby standard library, site_lib, installed gems, or ~/.rdoc. Use with --doc-dir. --[no-]system Include documentation from Ruby's standard library. Defaults to true. --[no-]site Include documentation from libraries installed in site_lib. Defaults to true. --[no-]gems Include documentation from RubyGems. Defaults to true. --[no-]home Include documentation stored in ~/.rdoc. Defaults to true. Debug options: --[no-]profile Run with the Ruby profiler. --dump=CACHE Dump data from an ri cache or data file. ENVIRONMENT RI Options to prepend to those specified on the command-line. RI_PAGER PAGER Pager program to use for displaying. HOME USERPROFILE HOMEPATH Path to the user's home directory. FILES ~/.rdoc Path for ri data in the user's home directory. SEE ALSO ruby(1), rdoc(1), gem(1) REPORTING BUGS • Security vulnerabilities should be reported via an email to security@ruby-lang.org. Reported problems will be published after being fixed. 
• Other bugs and feature requests can be reported via the Ruby Issue Tracking System (https://bugs.ruby-lang.org/). Do not report security vulnerabilities via this system because it publishes the vulnerabilities immediately. AUTHORS Written by Dave Thomas ⟨dave@pragmaticprogrammer.com⟩. UNIX April 20, 2017 UNIX
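Some typical lookups using the name forms and options described above:

```shell
# Look up an instance method (quoting protects the '#' from the shell).
$ ri 'Array#compact'

# Render a page as Markdown without paging.
$ ri --no-pager -f markdown String.strip

# Show the directories ri searches for documentation.
$ ri --list-doc-dirs
```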
agvtool
agvtool helps speed up common operations for Xcode projects that use the Apple Generic Versioning system. You enable versioning support by setting up some build settings in your project. Build Settings The settings used by the apple-generic versioning system are as follows: VERSIONING_SYSTEM This must be set to “apple-generic” at the project level to enable versioning. CURRENT_PROJECT_VERSION This setting defines the current version of the project. The value must be an integer or floating point number like 57 or 365.8. DYLIB_CURRENT_VERSION This setting defines the current version of any framework built by the project. Like CURRENT_PROJECT_VERSION, the value must be an integer or floating point number like 57 or 365.8. By default it is set to “$(CURRENT_PROJECT_VERSION)”. VERSION_INFO_PREFIX Used as a prefix for the name of the version info symbol in the generated versioning source file. If you prefix your exported symbols you will probably want to set this to the same prefix. VERSION_INFO_SUFFIX Used as a suffix for the name of the version info symbol in the generated versioning source file. This is rarely used. VERSION_INFO_BUILDER This defines a reference to the user performing a build to be included in the generated stub, and defaults to the value of the USER environment variable. VERSION_INFO_EXPORT_DECL This defines a prefix string for the version info symbol declaration in the generated stub. This can be used, for example, to add an optional ‘export’ keyword to the version symbol declaration. This should rarely be changed. VERSION_INFO_FILE Used to specify a name for the source file that will be generated and compiled into your product. By default this is set to “$(PRODUCT_NAME)_vers.c”. To enable Apple Generic Versioning, then, you must set up at least the VERSIONING_SYSTEM and CURRENT_PROJECT_VERSION project build settings for each project you want to be versioned. 
The target of a versioned project will have two global variables generated and linked into your product. One is of type double and is simply the CURRENT_PROJECT_VERSION. The other is a version string which is formatted to be compatible with what(1). These variables are available for use in your code. Projects with multiple targets are required to have the same CURRENT_PROJECT_VERSION for each target. The easiest way to achieve this is to set CURRENT_PROJECT_VERSION at the project level. Usage agvtool should be invoked with the working directory set to your project directory (the folder containing your .xcodeproj project file). agvtool pays attention to the following defaults for CVS usage: CVSEnabled, CVSSubmitByTag, and CVSToolPath. If CVSEnabled is set to YES then agvtool will perform certain CVS operations like committing modified project files and performing tagging operations. You can set this default by issuing the following command: defaults write agvtool CVSEnabled YES The sense of this default can be overridden by supplying an explicit -noscm (which turns off CVS and Subversion usage), -usecvs (which turns on CVS usage and turns off Subversion usage), or -usesvn (which turns off CVS usage and turns on Subversion usage). If CVSSubmitByTag is set to YES then agvtool will submit your project by CVS tag using the same version as the tag operation. The sense of this default can be overridden by supplying an explicit -bytag or -notbytag argument to the submit operation. You can set this default by issuing the following command: defaults write agvtool CVSSubmitByTag YES Set CVSToolPath to point to the location of the cvs tool to use. If this default is not set then agvtool will use /usr/bin/ocvs if it exists. Otherwise /usr/bin/cvs will be used. You can set this default by issuing the following command: defaults write agvtool CVSToolPath pathToCVS agvtool pays attention to the following defaults for Subversion usage: SVNEnabled, SVNSubmitByTag, and SVNToolPath. 
If SVNEnabled is set to YES then agvtool will perform certain Subversion operations like committing modified project files and performing tagging operations. You can set this default by issuing the following command: defaults write agvtool SVNEnabled YES The sense of this default can be overridden by supplying an explicit -noscm (which turns off CVS and Subversion usage), -usecvs (which turns on CVS usage and turns off Subversion usage), or -usesvn (which turns off CVS usage and turns on Subversion usage). If SVNSubmitByTag is set to YES then agvtool will submit your project by Subversion URL using the same version as that created by the tag operation. The sense of this default can be overridden by supplying an explicit -bytag or -notbytag argument to the submit operation. You can set this default by issuing the following command: defaults write agvtool SVNSubmitByTag YES Set SVNToolPath to point to the location of the svn tool to use. If this default is not set then agvtool will use /usr/local/bin/svn if it exists. You can set this default by issuing the following command: defaults write agvtool SVNToolPath pathToSVN Commands And Options what-version | vers [-terse] Prints out the current version number of the project. The -terse option can be used to limit the output to the version number only. next-version | bump [-all] Increments the version numbers of all versioned targets to the next highest integral value. For example, 54 will change to 55 and 234.6 will change to 235. The CURRENT_PROJECT_VERSION and the DYLIB_CURRENT_VERSION will be updated. The -all option will also update the CFBundleVersion Info.plist key. If CVS support is enabled, the modified project file will be committed. new-version [-all] version Sets the version numbers of all targets to the given version. The CURRENT_PROJECT_VERSION and the DYLIB_CURRENT_VERSION will be updated. The -all option will also update the CFBundleVersion Info.plist key. 
If CVS support is enabled, the modified project file will be committed. tag [-force | -F] [-noupdatecheck | -Q] [-baseurlfortag] Create a new tag projectname-currentversion where projectname is the name of the Xcode project file (without the extension) and currentversion is the CURRENT_PROJECT_VERSION with any ‘.’ transformed into ‘~’ (since CVS does not allow dots in tag names). The -force or -F option will add a -F to the tag operation. The -noupdatecheck or -Q option skips the cvs update usually done prior to tagging to ensure that there are no uncommitted changes. The -baseurlfortag option can be used to provide a URL that points to the directory to place the "tag" in when using Subversion. This overrides the SVNBaseTagURL default. This option is ignored if Subversion is not being used. Note: This command will only function if CVS or Subversion support is enabled. submit [-bytag | -notbytag] [-baseurlfortag] release ... Submits your project to the specified releases in Build & Integration. The -bytag option performs the submission by tag instead of submitting from the project source directly. The -notbytag option submits from the project source. If CVSSubmitByTag is set, -bytag is the default. Otherwise, -notbytag is the default. The -baseurlfortag option can be used to provide a URL that points to the directory to find "tags" in when using Subversion. This overrides the SVNBaseTagURL default. This option is ignored if Subversion is not being used. Note: This command is relevant only for Apple employees. what-marketing-version | mvers [-terse | -terse1] Prints the current marketing version of the project. For native targets, a marketing version is listed for each Info.plist file found. For Jambase targets a marketing version is shown if a common value is found. The marketing version is the CFBundleShortVersionString Info.plist key. This is often a totally different version determined by product marketing folks. 
The -terse option will limit the output to the version number only when displaying version numbers for Jambase targets. The -terse1 option will limit the output to the first version number found, and only display the version number. new-marketing-version version Sets the marketing version numbers of all versioned targets to the given version number. The marketing version is the CFBundleShortVersionString Info.plist key. This is often a totally different version determined by product marketing folks. If CVS support is enabled, the modified project file will be committed. Do not use this command on a project with targets that track different marketing versions. OS X April 11, 2012 OS X
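The tag-name transformation described above (each ‘.’ in CURRENT_PROJECT_VERSION becomes ‘~’ because CVS does not allow dots in tag names) can be sketched with standard shell tools. The project name MyProject and version 365.8 below are hypothetical, chosen only to illustrate the rule:

```shell
# Hypothetical project name and CURRENT_PROJECT_VERSION.
project="MyProject"
version="365.8"
# The tag command builds projectname-currentversion, with every '.'
# replaced by '~' (CVS forbids dots in tag names).
tag="${project}-$(printf '%s' "$version" | tr '.' '~')"
printf '%s\n' "$tag"   # prints MyProject-365~8
```

The same transformation applies whether CVS or Subversion is in use, since agvtool keeps tag names consistent across both back ends.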
agvtool – Apple-generic versioning tool for Xcode projects
agvtool what-version | vers [-terse] agvtool [-noscm | -usecvs | -usesvn] next-version | bump [-all] agvtool [-noscm | -usecvs | -usesvn] new-version [-all] versionNumber agvtool [-noscm | -usecvs | -usesvn] tag [-force | -F] [-noupdatecheck | -Q] [-baseurlfortag] agvtool [-noscm | -usecvs | -usesvn] submit [-bytag | -notbytag] [-baseurlfortag] release ... agvtool what-marketing-version | mvers [-terse | -terse1] agvtool [-noscm | -usecvs | -usesvn] new-marketing-version version
null
null
cksum
The cksum utility writes to the standard output three whitespace separated fields for each input file. These fields are a checksum CRC, the total number of octets in the file and the file name. If no file name is specified, the standard input is used and no file name is written. The sum utility is identical to the cksum utility, except that it defaults to using historic algorithm 1, as described below. It is provided for compatibility only. The options are as follows: -o Use historic algorithms instead of the (superior) default one. Algorithm 1 is the algorithm used by historic BSD systems as the sum(1) algorithm and by historic AT&T System V UNIX systems as the sum(1) algorithm when using the -r option. This is a 16-bit checksum, with a right rotation before each addition; overflow is discarded. Algorithm 2 is the algorithm used by historic AT&T System V UNIX systems as the default sum(1) algorithm. This is a 32-bit checksum, and is defined as follows: s = sum of all bytes; r = s % 2^16 + (s % 2^32) / 2^16; cksum = (r % 2^16) + r / 2^16; Algorithm 3 is what is commonly called the ‘32bit CRC’ algorithm. This is a 32-bit checksum. Both algorithm 1 and 2 write to the standard output the same fields as the default algorithm except that the size of the file in bytes is replaced with the size of the file in blocks. For historic reasons, the block size is 1024 for algorithm 1 and 512 for algorithm 2. Partial blocks are rounded up. The default CRC used is based on the polynomial used for CRC error checking in the networking standard ISO 8802-3: 1989. The CRC checksum encoding is defined by the generating polynomial: G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1 Mathematically, the CRC value corresponding to a given file is defined by the following procedure: The n bits to be evaluated are considered to be the coefficients of a mod 2 polynomial M(x) of degree n-1. 
These n bits are the bits from the file, with the most significant bit being the most significant bit of the first octet of the file and the last bit being the least significant bit of the last octet, padded with zero bits (if necessary) to achieve an integral number of octets, followed by one or more octets representing the length of the file as a binary value, least significant octet first. The smallest number of octets capable of representing this integer is used. M(x) is multiplied by x^32 (i.e., shifted left 32 bits) and divided by G(x) using mod 2 division, producing a remainder R(x) of degree <= 31. The coefficients of R(x) are considered to be a 32-bit sequence. The bit sequence is complemented and the result is the CRC. EXIT STATUS The cksum and sum utilities exit 0 on success, and >0 if an error occurs. SEE ALSO md5(1) The default calculation is identical to that given in pseudo-code in the following ACM article. Dilip V. Sarwate, “Computation of Cyclic Redundancy Checks Via Table Lookup”, Communications of the ACM, August 1988. STANDARDS The cksum utility is expected to conform to IEEE Std 1003.2-1992 (“POSIX.2”). HISTORY The cksum utility appeared in 4.4BSD. macOS 14.5 April 28, 1995 macOS 14.5
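The Algorithm 2 fold defined above can be checked directly with shell arithmetic, and the default CRC can be exercised from the command line. The byte sum 70000 below is an arbitrary example value, not output from any particular file:

```shell
# Fold a byte sum s into a 16-bit checksum per Algorithm 2:
#   r = s % 2^16 + (s % 2^32) / 2^16;  cksum = r % 2^16 + r / 2^16
s=70000
r=$(( s % 65536 + (s % 4294967296) / 65536 ))
printf 'algorithm 2 checksum: %d\n' $(( r % 65536 + r / 65536 ))   # 4465

# The default CRC of empty input is always 4294967295, length 0:
printf '' | cksum
```

For 70000 the fold gives r = 4464 + 1 = 4465, which is already below 2^16, so the second fold leaves it unchanged.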
cksum, sum – display file checksums and block counts
cksum [-o 1 | 2 | 3] [file ...] sum [file ...]
null
null
corelist5.34
See Module::CoreList for one.
corelist - a commandline frontend to Module::CoreList
corelist -v corelist [-a|-d] <ModuleName> | /<ModuleRegex>/ [<ModuleVersion>] ... corelist [-v <PerlVersion>] [ <ModuleName> | /<ModuleRegex>/ ] ... corelist [-r <PerlVersion>] ... corelist --utils [-d] <UtilityName> [<UtilityName>] ... corelist --utils -v <PerlVersion> corelist --feature <FeatureName> [<FeatureName>] ... corelist --diff PerlVersion PerlVersion corelist --upstream <ModuleName>
-a lists all versions of the given module (or the matching modules, in case you used a module regexp) in the perls Module::CoreList knows about. corelist -a Unicode Unicode was first released with perl v5.6.2 v5.6.2 3.0.1 v5.8.0 3.2.0 v5.8.1 4.0.0 v5.8.2 4.0.0 v5.8.3 4.0.0 v5.8.4 4.0.1 v5.8.5 4.0.1 v5.8.6 4.0.1 v5.8.7 4.1.0 v5.8.8 4.1.0 v5.8.9 5.1.0 v5.9.0 4.0.0 v5.9.1 4.0.0 v5.9.2 4.0.1 v5.9.3 4.1.0 v5.9.4 4.1.0 v5.9.5 5.0.0 v5.10.0 5.0.0 v5.10.1 5.1.0 v5.11.0 5.1.0 v5.11.1 5.1.0 v5.11.2 5.1.0 v5.11.3 5.2.0 v5.11.4 5.2.0 v5.11.5 5.2.0 v5.12.0 5.2.0 v5.12.1 5.2.0 v5.12.2 5.2.0 v5.12.3 5.2.0 v5.12.4 5.2.0 v5.13.0 5.2.0 v5.13.1 5.2.0 v5.13.2 5.2.0 v5.13.3 5.2.0 v5.13.4 5.2.0 v5.13.5 5.2.0 v5.13.6 5.2.0 v5.13.7 6.0.0 v5.13.8 6.0.0 v5.13.9 6.0.0 v5.13.10 6.0.0 v5.13.11 6.0.0 v5.14.0 6.0.0 v5.14.1 6.0.0 v5.15.0 6.0.0 -d finds the first perl version where a module has been released by date, and not by version number (as is the default). --diff Given two versions of perl, this prints a human-readable table of all module changes between the two. The output format may change in the future, and is meant for humans, not programs. For programs, use the Module::CoreList API. -? or -help help! help! help! to see more help, try --man. -man all of the help -v lists all of the perl release versions we got the CoreList for. If you pass a version argument (value of $], like 5.00503 or 5.008008), you get a list of all the modules and their respective versions. (If you have the "version" module, you can also use new- style version numbers, like 5.8.8.) In module filtering context, it can be used as Perl version filter. -r lists all of the perl releases and when they were released If you pass a perl version you get the release date for that version only. --utils lists the first version of perl each named utility program was released with May be used with -d to modify the first release criteria. 
If used with -v <version> then all utilities released with that version of perl are listed, and any utility programs named on the command line are ignored. --feature, -f lists the first version bundle of each named feature given --upstream, -u Shows if the given module is primarily maintained in perl core or on CPAN and bug tracker URL. As a special case, if you specify the module name "Unicode", you'll get the version number of the Unicode Character Database bundled with the requested perl versions.
$ corelist File::Spec File::Spec was first released with perl 5.005 $ corelist File::Spec 0.83 File::Spec 0.83 was released with perl 5.007003 $ corelist File::Spec 0.89 File::Spec 0.89 was not in CORE (or so I think) $ corelist File::Spec::Aliens File::Spec::Aliens was not in CORE (or so I think) $ corelist /IPC::Open/ IPC::Open2 was first released with perl 5 IPC::Open3 was first released with perl 5 $ corelist /MANIFEST/i ExtUtils::Manifest was first released with perl 5.001 $ corelist /Template/ /Template/ has no match in CORE (or so I think) $ corelist -v 5.8.8 B B 1.09_01 $ corelist -v 5.8.8 /^B::/ B::Asmdata 1.01 B::Assembler 0.07 B::Bblock 1.02_01 B::Bytecode 1.01_01 B::C 1.04_01 B::CC 1.00_01 B::Concise 0.66 B::Debug 1.02_01 B::Deparse 0.71 B::Disassembler 1.05 B::Lint 1.03 B::O 1.00 B::Showlex 1.02 B::Stackobj 1.00 B::Stash 1.00 B::Terse 1.03_01 B::Xref 1.01 COPYRIGHT Copyright (c) 2002-2007 by D.H. aka PodMaster Currently maintained by the perl 5 porters <perl5-porters@perl.org>. This program is distributed under the same terms as perl itself. See http://perl.org/ or http://cpan.org/ for more info on that. perl v5.34.1 2024-04-13 CORELIST(1)
segedit
segedit extracts or replaces named sections from the input_file. When extracting sections, segedit will write the contents of each requested section into data_file. When replacing sections, segedit will write a new output_file formed from the input_file and the requested replacement section content from data_file. The segment and section names are the same as those given to ld(1) with the -sectcreate option. The segment and section names of an object file can be examined with the -l option to otool(1). Only sections in segments that have no relocation to or from them (i.e., segments marked with the SG_NORELOC flag) can be replaced but all sections can be extracted. The options to segedit(1): -extract seg_name sect_name data_file Extracts each section specified by the segment and section names and places the contents in the specified data_file. If the output file is `-' the section contents will be written to the standard output. -replace seg_name sect_name data_file Replaces each section specified by the segment and section names and takes the new section content from the specified data_file. The -output output_file option must also be specified. The resulting size of the section will be rounded to a multiple of 4 bytes and padded with zero bytes if necessary. -output output_file Specifies the new file to create when replacing sections. SEE ALSO ld(1), otool(1), lipo(1) LIMITATIONS Only Mach-O format files that are laid out in a contiguous address space and with their segments in increasing address order can have their segments replaced by this program. This layout is what ld(1) produces by default. Only sections in segments that have no relocation to or from them (i.e., segments marked with the SG_NORELOC flag) can be replaced. segedit will not extract or replace sections from universal files. If necessary, use lipo(1) to extract the desired Mach-O files from a universal file before running segedit. Apple, Inc. June 25, 2018 SEGEDIT(1)
segedit - extract and replace sections from object files
segedit input_file [-extract seg_name sect_name data_file] ... segedit input_file [-replace seg_name sect_name data_file] ... -output output_file
null
null
zipnote
zipnote writes the comments in a zipfile to stdout. This is the default mode. A second mode allows updating the comments in a zipfile as well as allows changing the names of the files in the zipfile. These modes are described below.
zipnote - write the comments in zipfile to stdout, edit comments and rename files in zipfile
zipnote [-w] [-b path] [-h] [-v] [-L] zipfile ARGUMENTS zipfile Zipfile to read comments from or edit.
-w Write comments to a zipfile from stdin (see below). -b path Use path for the temporary zip file. -h Show a short help. -v Show version information. -L Show software license.
To write all comments in a zipfile to stdout use for example zipnote foo.zip > foo.tmp This writes all comments in the zipfile foo.zip to the file foo.tmp in a specific format. If desired, this file can then be edited to change the comments and then used to update the zipfile. zipnote -w foo.zip < foo.tmp The names of the files in the zipfile can also be changed in this way. This is done by following lines like "@ name" in the created temporary file (called foo.tmp here) with lines like "@=newname" and then using the -w option as above. BUGS The temporary file format is rather specific and zipnote is rather picky about it. It should be easier to change file names in a script. Does not yet support large (> 2 GB) or split archives. SEE ALSO zip(1), unzip(1) AUTHOR Info-ZIP v3.0 of 8 May 2008 zipnote(1)
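Because the rename workflow above is plain text editing of the temporary file, it can be scripted. In this sketch old.txt and new.txt are hypothetical entry names, and only the text-editing step is demonstrated against a simulated zipnote dump; the surrounding zipnote calls are shown as comments:

```shell
# Simulated "zipnote foo.zip" output for one entry named old.txt
# (a real dump also contains comment placeholder lines).
printf '@ old.txt\n' |
# A rename request is an "@=newname" line directly after "@ name":
sed '/^@ old\.txt$/a @=new.txt'
# In a real session:
#   zipnote foo.zip > foo.tmp
#   sed -i '/^@ old\.txt$/a @=new.txt' foo.tmp
#   zipnote -w foo.zip < foo.tmp
```

The one-line `a text` form of sed's append command is a GNU extension; portable scripts would spell the appended line on its own line after `a\`.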
ptar
ptar is a small, tar look-alike program that uses the perl module Archive::Tar to extract, create and list tar archives.
ptar - a tar-like program written in perl
ptar -c [-v] [-z] [-C] [-f ARCHIVE_FILE | -] FILE FILE ... ptar -c [-v] [-z] [-C] [-T index | -] [-f ARCHIVE_FILE | -] ptar -x [-v] [-z] [-f ARCHIVE_FILE | -] ptar -t [-z] [-f ARCHIVE_FILE | -] ptar -h
c Create ARCHIVE_FILE or STDOUT (-) from FILE x Extract from ARCHIVE_FILE or STDIN (-) t List the contents of ARCHIVE_FILE or STDIN (-) f Name of the ARCHIVE_FILE to use. Default is './default.tar' z Read/Write zlib compressed ARCHIVE_FILE (not always available) v Print filenames as they are added or extracted from ARCHIVE_FILE h Prints this help message C CPAN mode - drop 022 from permissions T get names to create from file SEE ALSO tar(1), Archive::Tar. perl v5.38.2 2023-11-28 PTAR(1)
null
printf
The printf utility formats and prints its arguments, after the first, under control of the format. The format is a character string which contains three types of objects: plain characters, which are simply copied to standard output, character escape sequences which are converted and copied to the standard output, and format specifications, each of which causes printing of the next successive argument. The arguments after the first are treated as strings if the corresponding format is either c, b or s; otherwise it is evaluated as a C constant, with the following extensions: • A leading plus or minus sign is allowed. • If the leading character is a single or double quote, the value is the character code of the next character. The format string is reused as often as necessary to satisfy the arguments. Any extra format specifications are evaluated with zero or the null string. Character escape sequences are in backslash notation as defined in the ANSI X3.159-1989 (“ANSI C89”), with extensions. The characters and their meanings are as follows: \a Write a <bell> character. \b Write a <backspace> character. \f Write a <form-feed> character. \n Write a <new-line> character. \r Write a <carriage return> character. \t Write a <tab> character. \v Write a <vertical tab> character. \' Write a <single quote> character. \\ Write a backslash character. \num Write a byte whose value is the 1-, 2-, or 3-digit octal number num. Multibyte characters can be constructed using multiple \num sequences. Each format specification is introduced by the percent character (``%''). The remainder of the format specification includes, in the following order: Zero or more of the following flags: # A `#' character specifying that the value should be printed in an ``alternate form''. For b, c, d, s and u formats, this option has no effect. For the o formats the precision of the number is increased to force the first character of the output string to a zero. 
For the x (X) format, a non-zero result has the string 0x (0X) prepended to it. For a, A, e, E, f, F, g and G formats, the result will always contain a decimal point, even if no digits follow the point (normally, a decimal point only appears in the results of those formats if a digit follows the decimal point). For g and G formats, trailing zeros are not removed from the result as they would otherwise be; - A minus sign `-' which specifies left adjustment of the output in the indicated field; + A `+' character specifying that there should always be a sign placed before the number when using signed formats. ‘ ’ A space specifying that a blank should be left before a positive number for a signed format. A `+' overrides a space if both are used; 0 A zero `0' character indicating that zero-padding should be used rather than blank-padding. A `-' overrides a `0' if both are used; Field Width: An optional digit string specifying a field width; if the output string has fewer bytes than the field width it will be blank-padded on the left (or right, if the left-adjustment indicator has been given) to make up the field width (note that a leading zero is a flag, but an embedded zero is part of a field width); Precision: An optional period, ‘.’, followed by an optional digit string giving a precision which specifies the number of digits to appear after the decimal point, for e and f formats, or the maximum number of bytes to be printed from a string; if the digit string is missing, the precision is treated as zero; Format: A character which indicates the type of format to use (one of diouxXfFeEgGaAcsb). The uppercase formats differ from their lowercase counterparts only in that the output of the former is entirely in uppercase. The floating-point format specifiers (fFeEgGaA) may be prefixed by an L to request that additional precision be used, if available. A field width or precision may be ‘*’ instead of a digit string. 
In this case an argument supplies the field width or precision. The format characters and their meanings are: diouXx The argument is printed as a signed decimal (d or i), unsigned octal, unsigned decimal, or unsigned hexadecimal (X or x), respectively. fF The argument is printed in the style `[-]ddd.ddd' where the number of d's after the decimal point is equal to the precision specification for the argument. If the precision is missing, 6 digits are given; if the precision is explicitly 0, no digits and no decimal point are printed. The values infinity and NaN are printed as ‘inf’ and ‘nan’, respectively. eE The argument is printed in the style e ‘[-d.ddd±dd]’ where there is one digit before the decimal point and the number after is equal to the precision specification for the argument; when the precision is missing, 6 digits are produced. The values infinity and NaN are printed as ‘inf’ and ‘nan’, respectively. gG The argument is printed in style f (F) or in style e (E) whichever gives full precision in minimum space. aA The argument is printed in style ‘[-h.hhh±pd]’ where there is one digit before the hexadecimal point and the number after is equal to the precision specification for the argument; when the precision is missing, enough digits are produced to convey the argument's exact double-precision floating-point representation. The values infinity and NaN are printed as ‘inf’ and ‘nan’, respectively. c The first byte of argument is printed. s Bytes from the string argument are printed until the end is reached or until the number of bytes indicated by the precision specification is reached; however if the precision is 0 or missing, the string is printed entirely. b As for s, but interpret character escapes in backslash notation in the string argument. The permitted escape sequences are slightly different in that octal escapes are \0num instead of \num and that an additional escape sequence \c stops further output from this printf invocation. 
n$ Allows reordering of the output according to argument. % Print a `%'; no argument is used. The decimal point character is defined in the program's locale (category LC_NUMERIC). In no case does a non-existent or small field width cause truncation of a field; padding takes place only if the specified field width exceeds the actual width. Some shells may provide a builtin printf command which is similar or identical to this utility. Consult the builtin(1) manual page. EXIT STATUS The printf utility exits 0 on success, and >0 if an error occurs.
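Several of the pieces described above (the ‘0’ flag, field width, precision, the ‘*’ placeholder, and octal escapes) can be combined in one invocation. These examples use only features documented in this page; note that your shell may run its builtin printf rather than this utility:

```shell
# '0' flag, field width 8, precision 3: zero-padded to 8 characters.
printf '%08.3f\n' 3.14159     # 0003.142

# '*' takes the field width from the next argument (here 6).
printf '[%*d]\n' 6 42         # [    42]

# \101 is octal for 'A'; the b format interprets escapes in the
# argument as well as in the format string.
printf '\101 %b\n' 'tab\there'
```

The last line writes an ‘A’, then the argument with its `\t` expanded to a real tab.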
printf – formatted output
printf format [arguments ...]
null
Print the string "hello": $ printf "%s\n" hello hello Same as above, but notice that the format string is not quoted and hence we do not get the expected behavior: $ printf %s\n hello hellon$ Print arguments forcing sign only for the first argument: $ printf "%+d\n%d\n%d\n" 1 -2 13 +1 -2 13 Same as above, but the single format string will be applied to the three arguments: $ printf "%+d\n" 1 -2 13 +1 -2 +13 Print number using only two digits after the decimal point: $ printf "%.2f\n" 31.7456 31.75 COMPATIBILITY The traditional BSD behavior of converting arguments of numeric formats not beginning with a digit to the ASCII code of the first character is not supported. SEE ALSO builtin(1), echo(1), sh(1), printf(3) STANDARDS The printf command is expected to be compatible with the IEEE Std 1003.2 (“POSIX.2”) specification. HISTORY The printf command appeared in 4.3BSD-Reno. It is modeled after the standard library function, printf(3). CAVEATS ANSI hexadecimal character constants were deliberately not provided. Trying to print a dash ("-") as the first character causes printf to interpret the dash as a program argument. -- must be used before format. If the locale contains multibyte characters (such as UTF-8), the c format and b and s formats with a precision may not operate as expected. BUGS Since the floating point numbers are translated from ASCII to floating-point and then back again, floating-point precision may be lost. (By default, the number is translated to an IEEE-754 double-precision value before being printed. The L modifier may produce additional precision, depending on the hardware platform.) The escape sequence \000 is the string terminator. When present in the argument for the b format, the argument will be truncated at the \000 character. Multibyte characters are not recognized in format strings (this is only a problem if ‘%’ can appear inside a multibyte character). macOS 14.5 July 1, 2020 macOS 14.5
iptab5.34
null
null
null
null
null
AssetCacheTetheratorUtil
iOS and tvOS devices connected to a computer with a USB cable can be "tethered," so that they route their Internet requests through the computer. AssetCacheTetheratorUtil enables a tethered network, disables it, or reports on its status. Tethering requires Content Caching. AssetCacheTetheratorUtil must be run by root, except for the isEnabled and status commands.
AssetCacheTetheratorUtil – control networking of tethered devices
AssetCacheTetheratorUtil [-j|--json] enable AssetCacheTetheratorUtil [-j|--json] disable AssetCacheTetheratorUtil [-j|--json] isEnabled AssetCacheTetheratorUtil [-j|--json] status
-j|--json Print results in machine-parseable JSON format to stdout. SEE ALSO System Settings > Sharing > Content Caching, AssetCacheLocatorUtil(8), AssetCacheManagerUtil(8) macOS 8/1/19 macOS
null
column
The column utility formats its input into multiple columns. Rows are filled before columns. Input is taken from file operands, or, by default, from the standard input. Empty lines are ignored. The options are as follows: -c Output is formatted for a display columns wide. -s Specify a set of characters to be used to delimit columns for the -t option. -t Determine the number of columns the input contains and create a table. Columns are delimited with whitespace, by default, or with the characters supplied using the -s option. Useful for pretty-printing displays. -x Fill columns before filling rows. ENVIRONMENT The COLUMNS, LANG, LC_ALL and LC_CTYPE environment variables affect the execution of column as described in environ(7). EXIT STATUS The column utility exits 0 on success, and >0 if an error occurs.
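The -t and -s options described above can be combined to turn delimited data into an aligned table. The colon-separated sample input below is arbitrary, and the exact amount of padding between columns may vary between implementations:

```shell
# Split each line on ':' and pad every column to the width of its
# longest value, producing a readable table.
printf 'name:size\nkernel:12M\n' | column -t -s:
```

This is the same mechanism the ls-based example below uses, except that there whitespace is the delimiter and no -s is needed.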
column – columnate lists
column [-tx] [-c columns] [-s sep] [file ...]
null
(printf "PERM LINKS OWNER GROUP SIZE MONTH DAY " ; \ printf "HH:MM/YEAR NAME\n" ; \ ls -l | sed 1d) | column -t SEE ALSO colrm(1), ls(1), paste(1), sort(1) HISTORY The column command appeared in 4.3BSD-Reno. BUGS Input lines are limited to LINE_MAX (2048) bytes in length. macOS 14.5 July 29, 2004 macOS 14.5
ldapexop
ldapexop issues the LDAP extended operation specified by oid or one of the special keywords whoami, cancel, or refresh. Additional data for the extended operation can be passed to the server using data or base-64 encoded as b64data in the case of oid, or using the additional parameters in the case of the specially named extended operations above. Please note that ldapexop behaves differently for the same extended operation depending on whether it is given as an OID or as a specially named operation: Calling ldapexop with the OID of the whoami (RFC 4532) extended operation ldapexop [<options>] 1.3.6.1.4.1.4203.1.11.3 yields # extended operation response data:: <base64 encoded response data> while calling it with the keyword whoami ldapexop [<options>] whoami results in dn:<client's identity>
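The "data:: <base64 encoded response data>" line above follows the LDIF convention that a double colon marks a base64-encoded value, so the payload can be recovered with base64 -d. The identity string in this sketch is a made-up example, not real server output:

```shell
# A made-up whoami payload, encoded the way it would appear after
# the "data::" marker in the bare-OID response.
payload='dn:uid=jdoe,dc=example,dc=com'
encoded=$(printf '%s' "$payload" | base64)
# Decode what the server would have returned base64-encoded:
printf '%s' "$encoded" | base64 -d
```

Decoding the bare-OID response this way yields the same authorization identity that the whoami keyword form prints directly.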
ldapexop - issue LDAP extended operations
ldapexop [-d level] [-D binddn] [-e [!]ext[=extparam]] [-f file] [-h host] [-H URI] [-I] [-n] [-N] [-O security-properties] [-o opt[=optparam]] [-p port] [-Q] [-R realm] [-U authcid] [-v] [-V] [-w passwd] [-W] [-x] [-X authzid] [-y file] [-Y mech] [-Z[Z]] {oid | oid:data | oid::b64data | whoami | cancel cancel-id | refresh DN [ttl]}
-d level Set the LDAP debugging level to level. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. -e [!]ext[=extparam] Specify general extensions. '!' indicates criticality. [!]assert=<filter> (RFC 4528; a RFC 4515 Filter string) [!]authzid=<authzid> (RFC 4370; "dn:<dn>" or "u:<user>") [!]chaining[=<resolveBehavior>[/<continuationBehavior>]] one of "chainingPreferred", "chainingRequired", "referralsPreferred", "referralsRequired" [!]manageDSAit (RFC 3296) [!]noop ppolicy [!]postread[=<attrs>] (RFC 4527; comma-separated attr list) [!]preread[=<attrs>] (RFC 4527; comma-separated attr list) [!]relax abandon, cancel, ignore (SIGINT sends abandon/cancel, or ignores response; if critical, doesn't wait for SIGINT. not really controls) -f file Read operations from file. -h host Specify the host on which the ldap server is running. Deprecated in favor of -H. -H URI Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URI, separated by whitespace or commas is expected. -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -n Show what would be done but don't actually do it. Useful for debugging in conjunction with -v. -N Do not use reverse DNS to canonicalize SASL host name. -O security-properties Specify SASL security properties. -o opt[=optparam] Specify general options: nettimeout=<timeout> (in seconds, or "none" or "max") -p port Specify the TCP port where the ldap server is listening. Deprecated in favor of -H. -Q Enable SASL Quiet mode. Never prompt. -R realm Specify the realm of authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -v Run in verbose mode, with many diagnostics written to standard output. -V Print version info and usage message. If -VV is given, only the version information is printed. 
-w passwd Use passwd as the password for simple authentication. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -x Use simple authentication instead of SASL. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -y file Use complete contents of file as the password for simple authentication. -Y mech Specify the SASL mechanism to be used for authentication. Without this option, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. Giving it twice (-ZZ) will require the operation to be successful. DIAGNOSTICS Exit status is zero if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error. SEE ALSO ldap_extended_operation_s(3) AUTHOR This manual page was written by Peter Marschall based on ldapexop's usage message and a few tests with ldapexop. Do not expect it to be complete or absolutely correct. ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. LDAPEXOP(1)
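Extended operations that take binary data use the oid::b64data argument form. A minimal sketch of preparing such an argument; the OID (1.2.3.4), server, and bind DN below are placeholders, not values from this page:

```shell
# Build the base64 form of the request data for the "oid::b64data" syntax.
b64=$(printf 'example-data' | base64)
echo "1.2.3.4::$b64"

# A typical invocation would then look like this (requires a reachable
# LDAP server, so it is shown commented out):
#   ldapexop -H ldap://ldap.example.com -x \
#       -D "cn=admin,dc=example,dc=com" -W "1.2.3.4::$b64"
```

The plain oid:data form works the same way for printable data; base64 avoids any quoting issues with binary payloads.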
null
ippfind
ippfind finds services registered with a DNS server or available through local devices. Its primary purpose is to find IPP printers and show their URIs, show their current status, or run commands. REGISTRATION TYPES ippfind supports the following registration types: _http._tcp HyperText Transport Protocol (HTTP, RFC 2616) _https._tcp Secure HyperText Transport Protocol (HTTPS, RFC 2818) _ipp._tcp Internet Printing Protocol (IPP, RFC 2911) _ipps._tcp Secure Internet Printing Protocol (IPPS, draft) _printer._tcp Line Printer Daemon (LPD, RFC 1179) EXPRESSIONS ippfind supports expressions much like the find(1) utility. However, unlike find(1), ippfind uses POSIX regular expressions instead of shell filename matching patterns. If --exec, -l, --ls, -p, --print, --print-name, -q, --quiet, -s, or -x is not specified, ippfind adds --print to print the service URI of anything it finds. The following expressions are supported: -d regex --domain regex True if the domain matches the given regular expression. --false Always false. -h regex --host regex True if the hostname matches the given regular expression. -l --ls Lists attributes returned by Get-Printer-Attributes for IPP printers and traditional find "-ls" output for HTTP URLs. The result is true if the URI is accessible, false otherwise. --local True if the service is local to this computer. -N name --literal-name name True if the service instance name matches the given name. -n regex --name regex True if the service instance name matches the given regular expression. --path regex True if the URI resource path matches the given regular expression. -P number[-number] --port number[-number] True if the port matches the given number or range. -p --print Prints the URI if the result of previous expressions is true. The result is always true. -q --quiet Quiet mode - just returns the exit codes below. -r --remote True if the service is not local to this computer.
-s --print-name Prints the service instance name if the result of previous expressions is true. The result is always true. --true Always true. -t key --txt key True if the TXT record contains the named key. --txt-key regex True if the TXT record contains the named key and matches the given regular expression. -u regex --uri regex True if the URI matches the given regular expression. -x utility [ argument ... ] ; --exec utility [ argument ... ] ; Executes the specified program if the current result is true. "{foo}" arguments are replaced with the corresponding value - see SUBSTITUTIONS below. Expressions may also contain modifiers: ( expression ) Group the result of expressions. ! expression --not expression Unary NOT of the expression. expression expression expression --and expression Logical AND of expressions. expression --or expression Logical OR of expressions. SUBSTITUTIONS The substitutions for "{foo}" in -e and --exec are: {service_domain} Domain name, e.g., "example.com.", "local.", etc. {service_hostname} Fully-qualified domain name, e.g., "printer.example.com.", "printer.local.", etc. {service_name} Service instance name, e.g., "My Fine Printer". {service_port} Port number for server, typically 631 for IPP and 80 for HTTP. {service_regtype} DNS-SD registration type, e.g., "_ipp._tcp", "_http._tcp", etc. {service_scheme} URI scheme for DNS-SD registration type, e.g., "ipp", "http", etc. {} {service_uri} URI for service, e.g., "ipp://printer.local./ipp/print", "http://printer.local./", etc. {txt_key} Value of TXT record key (lowercase).
ippfind - find internet printing protocol printers
ippfind [ options ] regtype[,subtype][.domain.] ... [ expression ... ] ippfind [ options ] name[.regtype[.domain.]] ... [ expression ... ] ippfind --help ippfind --version
ippfind supports the following options: --help Show program help. --version Show program version. -4 Use IPv4 when listing. -6 Use IPv6 when listing. -T seconds Specify find timeout in seconds. If 1 or less, ippfind stops as soon as it thinks it has found everything. The default timeout is 1 second. -V version Specifies the IPP version when listing. Supported values are "1.1", "2.0", "2.1", and "2.2". EXIT STATUS ippfind returns 0 if the result for all processed expressions is true, 1 if the result of any processed expression is false, 2 if browsing or any query or resolution failed, 3 if an undefined option or invalid expression was specified, and 4 if it ran out of memory. ENVIRONMENT When executing a program, ippfind sets the following environment variables for the matching service registration: IPPFIND_SERVICE_DOMAIN Domain name, e.g., "example.com.", "local.", etc. IPPFIND_SERVICE_HOSTNAME Fully-qualified domain name, e.g., "printer.example.com.", "printer.local.", etc. IPPFIND_SERVICE_NAME Service instance name, e.g., "My Fine Printer". IPPFIND_SERVICE_PORT Port number for server, typically 631 for IPP and 80 for HTTP. IPPFIND_SERVICE_REGTYPE DNS-SD registration type, e.g., "_ipp._tcp", "_http._tcp", etc. IPPFIND_SERVICE_SCHEME URI scheme for DNS-SD registration type, e.g., "ipp", "http", etc. IPPFIND_SERVICE_URI URI for service, e.g., "ipp://printer.local./ipp/print", "http://printer.local./", etc. IPPFIND_TXT_KEY Values of TXT record KEY (uppercase).
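A program run via -x/--exec sees the IPPFIND_* variables listed above. A small simulation of such a handler script; the service name and URI are made-up values, and ippfind itself is not invoked here:

```shell
# Simulate the environment ippfind would provide to an --exec handler.
# In real use, ippfind sets these variables before running the utility.
IPPFIND_SERVICE_NAME="My Fine Printer" \
IPPFIND_SERVICE_URI="ipp://printer.local./ipp/print" \
sh -c 'printf "Found %s at %s\n" "$IPPFIND_SERVICE_NAME" "$IPPFIND_SERVICE_URI"'
```

The real invocation would be something like `ippfind _ipp._tcp --exec ./handler.sh \;`, where handler.sh reads the variables as shown.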
To show the status of all registered IPP printers on your network, run: ippfind --ls Similarly, to send a PostScript test page to every PostScript printer, run: ippfind --txt-pdl application/postscript --exec ipptool -f onepage-letter.ps '{}' print-job.test \; SEE ALSO ipptool(1) COPYRIGHT Copyright © 2013-2019 by Apple Inc. 26 April 2019 ippsample ippfind(1)
syscallbysysc.d
syscallbysysc.d is a DTrace one-liner that reports the number of each type of system call made. This is useful to identify which system call is the most common. Docs/oneliners.txt and Docs/Examples/oneliners_examples.txt in the DTraceToolkit contain this as a one-liner that can be copied and pasted to run. Since this uses DTrace, only users with root privileges can run this command.
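For reference, the underlying one-liner is of this general form (a sketch; see Docs/oneliners.txt in the DTraceToolkit for the canonical version). It requires root privileges and a system with DTrace, so it is not executed here:

```shell
# Count system calls by syscall name; aggregated totals print on Ctrl-C.
dtrace -n 'syscall:::entry { @num[probefunc] = count(); }'
```

The @num aggregation keys on probefunc (the syscall name), which produces the two-column name/count output described under FIELDS below.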
syscallbysysc.d - syscalls by syscall. Uses DTrace.
syscallbysysc.d
null
This samples until Ctrl-C is hit. # syscallbysysc.d FIELDS first field This is the system call type. Most have man pages in section 2. second field This is the count, the number of occurrences for this system call. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT syscallbysysc.d will sample until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO procsystime(1M), dtrace(1M), truss(1) version 1.00 May 15, 2005 syscallbysysc.d(1m)
groups
The groups utility has been obsoleted by the id(1) utility, and is equivalent to “id -Gn [user]”. The command “id -p” is suggested for normal interactive use. The groups utility displays the groups to which you (or the optionally specified user) belong. EXIT STATUS The groups utility exits 0 on success, and >0 if an error occurs.
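Since groups is a thin wrapper over id(1), the stated equivalence can be checked directly (the group list printed depends on the invoking user):

```shell
# With no argument, these two commands print the same group list
# for the current user.
groups
id -Gn
```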
groups – show group memberships
groups [user]
null
Show groups the root user belongs to: $ groups root wheel operator SEE ALSO id(1) macOS 14.5 June 6, 1993 macOS 14.5
dwarfdump
dwarfdump parses DWARF sections in object files, archives, and .dSYM bundles and prints their contents in human-readable form. Only the .debug_info section is printed unless one of the section-specific options or --all is specified. If no input file is specified, a.out is used instead. If - is used as the input file, dwarfdump reads the input from its standard input stream.
dwarfdump - dump and verify DWARF debug information
dwarfdump [options] [filename ...]
-a, --all Dump all supported DWARF sections. --arch=<arch> Dump DWARF debug information for the specified CPU architecture. Architectures may be specified by name or by number. This option can be specified multiple times, once for each desired architecture. All CPU architectures will be printed by default. -c, --show-children Show a debug info entry's children when selectively printing with the =<offset> argument of --debug-info, or options such as --find or --name. --color Use colors in output. -f <name>, --find=<name> Search for the exact text <name> in the accelerator tables and print the matching debug information entries. When there are no accelerator tables, or the name of the DIE you are looking for is not found in the accelerator tables, try using the slower but more complete --name option. -F, --show-form Show DWARF form types after the DWARF attribute types. -h, --help Show help and usage for this command. --help-list Show help and usage for this command without grouping the options into categories. -i, --ignore-case Ignore case distinctions when using --name. -n <name>, --name=<name> Find and print all debug info entries whose name (DW_AT_name attribute) is <name>. --lookup=<address> Look up <address> in the debug information and print out the file, function, block, and line table details. -o <path> Redirect output to a file specified by <path>, where - is the standard output stream. -p, --show-parents Show a debug info entry's parents when selectively printing with the =<offset> argument of --debug-info, or options such as --find or --name. --parent-recurse-depth=<N> When displaying debug info entry parents, only show them to a maximum depth of <N>. --quiet Use with --verify to not emit to STDOUT. -r <N>, --recurse-depth=<N> When displaying debug info entries, only show children to a maximum depth of <N>. --show-section-sizes Show the sizes of all debug sections, expressed in bytes. --show-sources Print all source files mentioned in the debug information.
Absolute paths are given whenever possible. --statistics Collect debug info quality metrics and print the results as machine-readable single-line JSON output. The output format is described in the section below (FORMAT OF STATISTICS OUTPUT). --summarize-types Abbreviate the description of type unit entries. -x, --regex Treat any <name> strings as regular expressions when searching with --name. If --ignore-case is also specified, the regular expression becomes case-insensitive. -u, --uuid Show the UUID for each architecture. --diff Dump the output in a format that is more friendly for comparing DWARF output from two different files. -v, --verbose Display verbose information when dumping. This can help to debug DWARF issues. --verify Verify the structure of the DWARF information by verifying the compile unit chains, DIE relationships graph, address ranges, and more. --version Display the version of the tool. --debug-abbrev, --debug-addr, --debug-aranges, --debug-cu-index, --debug-frame [=<offset>], --debug-gnu-pubnames, --debug-gnu-pubtypes, --debug-info [=<offset>], --debug-line [=<offset>], --debug-line-str, --debug-loc [=<offset>], --debug-loclists [=<offset>], --debug-macro, --debug-names, --debug-pubnames, --debug-pubtypes, --debug-ranges, --debug-rnglists, --debug-str, --debug-str-offsets, --debug-tu-index, --debug-types [=<offset>], --eh-frame [=<offset>], --gdb-index, --apple-names, --apple-types, --apple-namespaces, --apple-objc Dump the specified DWARF section by name. Only the .debug_info section is shown by default. Some entries support adding an =<offset> as a way to provide an optional offset of the exact entry to dump within the respective section. When an offset is provided, only the entry at that offset will be dumped, else the entire section will be dumped. The --debug-macro option prints both the .debug_macro and the .debug_macinfo sections. 
The --debug-frame and --eh-frame options are aliases; in cases where both sections are present, one command outputs both. @<FILE> Read command-line options from <FILE>. FORMAT OF STATISTICS OUTPUT The --statistics option generates single-line JSON output representing quality metrics of the processed debug info. These metrics are useful to compare changes between two compilers, particularly for judging the effect that a change to the compiler has on the debug info quality. The output is formatted as key-value pairs. The first pair contains a version number. The following naming scheme is used for the keys: • variables ==> local variables and parameters • local vars ==> local variables • params ==> formal parameters For aggregated values, the following keys are used: • sum_of_all_variables(...) ==> the sum applied to all variables • #bytes ==> the number of bytes • #variables - entry values ... ==> the number of variables excluding the entry values etc. EXIT STATUS dwarfdump returns 0 if the input files were parsed and dumped successfully. Otherwise, it returns 1. SEE ALSO dsymutil(1) AUTHOR Maintained by the LLVM Team (https://llvm.org/). COPYRIGHT 2003-2024, LLVM Project 11 2024-01-28 DWARFDUMP(1)
null
ppdc
ppdc compiles PPDC source files into one or more PPD files. This program is deprecated and will be removed in a future release of CUPS.
ppdc - cups ppd compiler (deprecated)
ppdc [ -D name[=value] ] [ -I include-directory ] [ -c message-catalog ] [ -d output-directory ] [ -l language(s) ] [ -m ] [ -t ] [ -v ] [ -z ] [ --cr ] [ --crlf ] [ --lf ] source-file
ppdc supports the following options: -D name[=value] Sets the named variable for use in the source file. It is equivalent to using the #define directive in the source file. -I include-directory Specifies an alternate include directory. Multiple -I options can be supplied to add additional directories. -c message-catalog Specifies a single message catalog file in GNU gettext (filename.po) or Apple strings (filename.strings) format to be used for localization. -d output-directory Specifies the output directory for PPD files. The default output directory is "ppd". -l language(s) Specifies one or more languages to use when localizing the PPD file(s). The default language is "en" (English). Separate multiple languages with commas, for example "de_DE,en_UK,es_ES,es_MX,es_US,fr_CA,fr_FR,it_IT" will create PPD files with German, UK English, Spanish (Spain, Mexico, and US), French (France and Canada), and Italian languages in each file. -m Specifies that the output filename should be based on the ModelName value instead of FileName or PCFileName. -t Specifies that PPD files should be tested instead of generated. -v Specifies verbose output, basically a running status of which files are being loaded or written. -z Generates compressed PPD files (filename.ppd.gz). The default is to generate uncompressed PPD files. --cr --crlf --lf Specifies the line ending to use - carriage return, carriage return and line feed, or line feed alone. The default is to use the line feed character alone. NOTES PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO ppdhtml(1), ppdi(1), ppdmerge(1), ppdpo(1), ppdcfile(5), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS ppdc(1)
null
rs
The rs utility reads the standard input, interpreting each line as a row of blank-separated entries in an array, transforms the array according to the options, and writes it on the standard output. With no arguments it transforms stream input into a columnar format convenient for terminal viewing. The shape of the input array is deduced from the number of lines and the number of columns on the first line. If that shape is inconvenient, a more useful one might be obtained by skipping some of the input with the -k option. Other options control interpretation of the input columns. The shape of the output array is influenced by the rows and cols specifications, which should be positive integers. If only one of them is a positive integer, rs computes a value for the other which will accommodate all of the data. When necessary, missing data are supplied in a manner specified by the options and surplus data are deleted. There are options to control presentation of the output columns, including transposition of the rows and columns. The following options are available: -cx Input columns are delimited by the single character x. A missing x is taken to be `^I'. -sx Like -c, but maximal strings of x are delimiters. -Cx Output columns are delimited by the single character x. A missing x is taken to be `^I'. -Sx Like -C, but padded strings of x are delimiters. -t Fill in the rows of the output array using the columns of the input array, that is, transpose the input while honoring any rows and cols specifications. -T Print the pure transpose of the input, ignoring any rows or cols specification. -kN Ignore the first N lines of input. -KN Like -k, but print the ignored lines. -gN The gutter width (inter-column space), normally 2, is taken to be N. -GN The gutter width has N percent of the maximum column width added to it. -e Consider each line of input as an array entry. -n On lines having fewer entries than the first line, use null entries to pad out the line. 
Normally, missing entries are taken from the next line of input. -y If there are too few entries to make up the output dimensions, pad the output by recycling the input from the beginning. Normally, the output is padded with blanks. -h Print the shape of the input array and do nothing else. The shape is just the number of lines and the number of entries on the first line. -H Like -h, but also print the length of each line. -j Right adjust entries within columns. -wN The width of the display, normally 80, is taken to be the positive integer N. -m Do not trim excess delimiters from the ends of the output array. -z Adapt column widths to fit the largest entries appearing in them. With no arguments, rs transposes its input, and assumes one array entry per input line unless the first non-ignored line is longer than the display width. Option letters which take numerical arguments interpret a missing number as zero unless otherwise indicated.
rs – reshape a data array
rs [-[csCS][x] [kKgGw][N] tTeEnyjhHmz] [rows [cols]]
null
The rs utility can be used as a filter to convert the stream output of certain programs (e.g., spell(1), du(1), file(1), look(1), nm(1), who(1), and wc(1)) into a convenient ``window'' format, as in % who | rs This function has been incorporated into the ls(1) program, though for most programs with similar output rs suffices. To convert stream input into vector output and back again, use % rs 1 0 | rs 0 1 A 10 by 10 array of random numbers from 1 to 100 and its transpose can be generated with % jot -r 100 | rs 10 10 | tee array | rs -T > tarray In the editor vi(1), a file consisting of a multi-line vector with 9 elements per line can undergo insertions and deletions, and then be neatly reshaped into 9 columns with :1,$!rs 0 9 Finally, to sort a database by the first line of each 4-line field, try % rs -eC 0 4 | sort | rs -c 0 1 SEE ALSO jot(1), pr(1), sort(1), vi(1) HISTORY The rs utility first appeared in 4.2BSD. AUTHORS John A. Kunze BUGS Handles only two dimensional arrays. The algorithm currently reads the whole file into memory, so files that do not fit in memory will not be reshaped. Fields cannot be defined yet on character positions. Re-ordering of columns is not yet possible. There are too many options. Multibyte characters are not recognized. Lines longer than LINE_MAX (2048) bytes are not processed and result in immediate termination of rs. macOS 14.5 April 7, 2015 macOS 14.5
spfd
spfd is a simple forking Sender Policy Framework (SPF) query proxy server. spfd receives and answers SPF query requests on a TCP/IP or UNIX domain socket. The --port form listens on a TCP/IP socket on the specified port. The default port is 5970. The --socket form listens on a UNIX domain socket that is created with the specified filename. The socket can be assigned specific user and group ownership with the --socket-user and --socket-group options, and specific filesystem permissions with the --socket-perms option. Generally, spfd can be instructed with the --set-user and --set-group options to drop root privileges and change to another user and group before it starts listening for requests. The --help form prints usage information for spfd. REQUEST A request consists of a series of lines delimited by \x0A (LF) characters (or whatever your system considers a newline). Each line must be of the form key=value, where the following keys are required: ip The sender IP address. sender The envelope sender address (from the SMTP "MAIL FROM" command). helo The envelope sender hostname (from the SMTP "HELO" command). RESPONSE spfd responds to query requests with similar series of lines of the form key=value. The most important response keys are: result The result of the SPF query: pass The specified IP address is an authorized mailer for the sender domain/address. fail The specified IP address is not an authorized mailer for the sender domain/address. softfail The specified IP address is not an authorized mailer for the sender domain/address, however the domain is still in the process of transitioning to SPF. neutral The sender domain makes no assertion about the status of the IP address. unknown The sender domain has a syntax error in its SPF record. error A temporary DNS error occurred while resolving the sender policy. Try again later. none There is no SPF record for the sender domain. smtp_comment The text that should be included in the receiver's SMTP response. 
header_comment The text that should be included as a comment in the message's "Received-SPF:" header. spf_record The SPF record of the envelope sender domain. For the description of other response keys see Mail::SPF::Query. For more information on SPF see <http://www.openspf.org>. EXAMPLE A running spfd could be tested using the "netcat" utility like this: $ echo -e "ip=11.22.33.44\nsender=user@pobox.com\nhelo=spammer.example.net\n" | nc localhost 5970 result=neutral smtp_comment=Please see http://spf.pobox.com/why.html?sender=user%40pobox.com&ip=11.22.33.44&receiver=localhost header_comment=localhost: 11.22.33.44 is neither permitted nor denied by domain of user@pobox.com guess=neutral smtp_guess= header_guess= guess_tf=neutral smtp_tf= header_tf= spf_record=v=spf1 ?all SEE ALSO Mail::SPF::Query, <http://www.openspf.org> AUTHORS This version of spfd was written by Meng Weng Wong <mengwong+spf@pobox.com>. Improved argument parsing was added by Julian Mehnle <julian@mehnle.net>. This man-page was written by Julian Mehnle <julian@mehnle.net>. perl v5.34.0 2006-02-07 SPFD(1)
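The request lines described under REQUEST can be assembled with standard tools before being sent to the daemon. A sketch; the IP and addresses are placeholder values, and the final nc step needs a running spfd, so it is shown commented out:

```shell
# Build an spfd request: one key=value pair per LF-terminated line.
request=$(printf 'ip=192.0.2.1\nsender=user@example.com\nhelo=mail.example.com\n')
printf '%s\n' "$request"

# Send it to an spfd listening on the default TCP port (not run here);
# the trailing blank line ends the request:
#   printf '%s\n\n' "$request" | nc localhost 5970
```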
spfd - simple forking daemon to provide SPF query services VERSION 2006-02-07
spfd --port port [--set-user uid|username] [--set-group gid|groupname] spfd --socket filename [--socket-user uid|username] [--socket-group gid|groupname] [--socket-perms octal-perms] [--set-user uid|username] [--set-group gid|groupname] spfd --help
null
null
config_data
The "config_data" tool provides a command-line interface to the configuration of Perl modules. By "configuration", we mean something akin to "user preferences" or "local settings". This is a formalization and abstraction of the systems that people like Andreas Koenig ("CPAN::Config"), Jon Swartz ("HTML::Mason::Config"), Andy Wardley ("Template::Config"), and Larry Wall (perl's own Config.pm) have developed independently. The configuration system employed here was developed in the context of "Module::Build". Under this system, configuration information for a module "Foo", for example, is stored in a module called "Foo::ConfigData" (I would have called it "Foo::Config", but that was taken by all those other systems mentioned in the previous paragraph...). These "...::ConfigData" modules contain the configuration data, as well as publicly accessible methods for querying and setting (yes, actually re-writing) the configuration data. The "config_data" script (whose docs you are currently reading) is merely a front-end for those methods. If you wish, you may create alternate front-ends. The two types of data that may be stored are called "config" values and "feature" values. A "config" value may be any perl scalar, including references to complex data structures. It must, however, be serializable using "Data::Dumper". A "feature" is a boolean (1 or 0) value. USAGE This script functions as a basic getter/setter wrapper around the configuration of a single module. On the command line, specify which module's configuration you're interested in, and pass options to get or set "config" or "feature" values. The following options are supported: module Specifies the name of the module to configure (required). feature When passed the name of a "feature", shows its value. The value will be 1 if the feature is enabled, 0 if the feature is not enabled, or empty if the feature is unknown. When no feature name is supplied, the names and values of all known features will be shown.
config When passed the name of a "config" entry, shows its value. The value will be displayed using "Data::Dumper" (or similar) as perl code. When no config name is supplied, the names and values of all known config entries will be shown. set_feature Sets the given "feature" to the given boolean value. Specify the value as either 1 or 0. set_config Sets the given "config" entry to the given value. eval If the "--eval" option is used, the values in "set_config" will be evaluated as perl code before being stored. This allows moderately complicated data structures to be stored. For really complicated structures, you probably shouldn't use this command-line interface, just use the Perl API instead. help Prints a help message, including a few examples, and exits. AUTHOR Ken Williams, kwilliams@cpan.org COPYRIGHT Copyright (c) 1999, Ken Williams. All rights reserved. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Module::Build(3), perl(1). perl v5.34.0 2024-04-13 CONFIG_DATA(1)
config_data - Query or change configuration of Perl modules
# Get config/feature values config_data --module Foo::Bar --feature bazzable config_data --module Foo::Bar --config magic_number # Set config/feature values config_data --module Foo::Bar --set_feature bazzable=1 config_data --module Foo::Bar --set_config magic_number=42 # Print a usage message config_data --help
null
null
slogin
ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections, arbitrary TCP ports and UNIX-domain sockets can also be forwarded over the secure channel. ssh connects and logs into the specified destination, which may be specified as either [user@]hostname or a URI of the form ssh://[user@]hostname[:port]. The user must prove their identity to the remote machine using one of several methods (see below). If a command is specified, it will be executed on the remote host instead of a login shell. A complete command line may be specified as command, or it may have additional arguments. If supplied, the arguments will be appended to the command, separated by spaces, before it is sent to the server to be executed. The options are as follows: -4 Forces ssh to use IPv4 addresses only. -6 Forces ssh to use IPv6 addresses only. -A Enables forwarding of connections from an authentication agent such as ssh-agent(1). This can also be specified on a per-host basis in a configuration file. Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J). -a Disables forwarding of the authentication agent connection. -B bind_interface Bind to the address of bind_interface before attempting to connect to the destination host. This is only useful on systems with more than one address. -b bind_address Use bind_address on the local machine as the source address of the connection. Only useful on systems with more than one address. 
-C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11, TCP and UNIX-domain connections). The compression algorithm is the same used by gzip(1). Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by-host basis in the configuration files; see the Compression option in ssh_config(5). -c cipher_spec Selects the cipher specification for encrypting the session. cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information. -D [bind_address:]port Specifies a local “dynamic” application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. Whenever a connection is made to this port, the connection is forwarded over the secure channel, and the application protocol is then used to determine where to connect to from the remote machine. Currently the SOCKS4 and SOCKS5 protocols are supported, and ssh will act as a SOCKS server. Only root can forward privileged ports. Dynamic port forwardings can also be specified in the configuration file. IPv6 addresses can be specified by enclosing the address in square brackets. Only the superuser can forward privileged ports. By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces. -E log_file Append debug logs to log_file instead of standard error. -e escape_char Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. 
The escape character followed by a dot (‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any escapes and makes the session fully transparent. -F configfile Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config. If set to “none”, no configuration files will be read. -f Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm. If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background. Refer to the description of ForkAfterAuthentication in ssh_config(5) for details. -G Causes ssh to print its configuration after evaluating Host and Match blocks and exit. -g Allows remote hosts to connect to local forwarded ports. If used on a multiplexed connection, then this option must be specified on the master process. -I pkcs11 Specify the PKCS#11 shared library ssh should use to communicate with a PKCS#11 token providing keys for user authentication. Use of this option will disable UseKeychain. -i identity_file Selects a file from which the identity (private key) for public key authentication is read. You can also specify a public key file to use the corresponding private key that is loaded in ssh-agent(1) when the private key file is not present locally. The default is ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk and ~/.ssh/id_dsa. 
        Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). If no certificates have been explicitly specified by the CertificateFile directive, ssh will also try to load certificate information from the filename obtained by appending -cert.pub to identity filenames.

-J destination
        Connect to the target host by first making an ssh connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. Note that configuration directives supplied on the command line generally apply to the destination host and not any specified jump hosts. Use ~/.ssh/config to specify configuration for jump hosts.

-K      Enables GSSAPI-based authentication and forwarding (delegation) of GSSAPI credentials to the server.

-k      Disables forwarding (delegation) of GSSAPI credentials to the server.

-L [bind_address:]port:host:hostport
-L [bind_address:]port:remote_socket
-L local_socket:host:hostport
-L local_socket:remote_socket
        Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port, or Unix socket, on the remote side. This works by allocating a socket to listen to either a TCP port on the local side, optionally bound to the specified bind_address, or to a Unix socket. Whenever a connection is made to the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host port hostport, or the Unix socket remote_socket, from the remote machine. Port forwardings can also be specified in the configuration file. Only the superuser can forward privileged ports. IPv6 addresses can be specified by enclosing the address in square brackets.
        By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be used to bind the connection to a specific address. The bind_address of “localhost” indicates that the listening port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces.

-l login_name
        Specifies the user to log in as on the remote machine. This also may be specified on a per-host basis in the configuration file.

-M      Places the ssh client into “master” mode for connection sharing. Multiple -M options place ssh into “master” mode but with confirmation required using ssh-askpass(1) before each operation that changes the multiplexing state (e.g. opening a new session). Refer to the description of ControlMaster in ssh_config(5) for details.

-m mac_spec
        A comma-separated list of MAC (message authentication code) algorithms, specified in order of preference. See the MACs keyword in ssh_config(5) for more information.

-N      Do not execute a remote command. This is useful for just forwarding ports. Refer to the description of SessionType in ssh_config(5) for details.

-n      Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.) Refer to the description of StdinNull in ssh_config(5) for details.

-O ctl_cmd
        Control an active connection multiplexing master process. When the -O option is specified, the ctl_cmd argument is interpreted and passed to the master process.
        Valid commands are: “check” (check that the master process is running), “forward” (request forwardings without command execution), “cancel” (cancel forwardings), “exit” (request the master to exit), and “stop” (request the master to stop accepting further multiplexing requests).

-o option
        Can be used to give options in the format used in the configuration file. This is useful for specifying options for which there is no separate command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5).

        AddKeysToAgent AddressFamily BatchMode BindAddress CanonicalDomains
        CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots
        CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile
        CheckHostIP Ciphers ClearAllForwardings Compression ConnectionAttempts
        ConnectTimeout ControlMaster ControlPath ControlPersist DynamicForward
        EnableEscapeCommandline EscapeChar ExitOnForwardFailure FingerprintHash
        ForkAfterAuthentication ForwardAgent ForwardX11 ForwardX11Timeout
        ForwardX11Trusted GatewayPorts GlobalKnownHostsFile GSSAPIAuthentication
        GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms
        HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname
        IdentitiesOnly IdentityAgent IdentityFile IPQoS
        KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms
        KnownHostsCommand LocalCommand LocalForward LogLevel MACs Match
        NoHostAuthenticationForLocalhost NumberOfPasswordPrompts
        PasswordAuthentication PermitLocalCommand PermitRemoteOpen PKCS11Provider
        Port PreferredAuthentications ProxyCommand ProxyJump ProxyUseFdpass
        PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RemoteCommand
        RemoteForward RequestTTY RequiredRSASize SendEnv ServerAliveInterval
        ServerAliveCountMax SessionType SetEnv StdinNull StreamLocalBindMask
        StreamLocalBindUnlink StrictHostKeyChecking TCPKeepAlive Tunnel
        TunnelDevice UpdateHostKeys UseKeychain User UserKnownHostsFile
        VerifyHostKeyDNS VisualHostKey XAuthLocation

-P tag  Specify a tag name that may be used to select configuration in ssh_config(5). Refer to the Tag and Match keywords in ssh_config(5) for more information.

-p port
        Port to connect to on the remote host. This can be specified on a per-host basis in the configuration file.

-Q query_option
        Queries for the algorithms supported by one of the following features: cipher (supported symmetric ciphers), cipher-auth (supported symmetric ciphers that support authenticated encryption), help (supported query terms for use with the -Q flag), mac (supported message integrity codes), kex (key exchange algorithms), key (key types), key-ca-sign (valid CA signature algorithms for certificates), key-cert (certificate key types), key-plain (non-certificate key types), key-sig (all key types and signature algorithms), protocol-version (supported SSH protocol versions), and sig (supported signature algorithms). Alternatively, any keyword from ssh_config(5) or sshd_config(5) that takes an algorithm list may be used as an alias for the corresponding query_option.

-q      Quiet mode. Causes most warning and diagnostic messages to be suppressed.

-R [bind_address:]port:host:hostport
-R [bind_address:]port:local_socket
-R remote_socket:host:hostport
-R remote_socket:local_socket
-R [bind_address:]port
        Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side. This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side. Whenever a connection is made to this port or Unix socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an explicit destination specified by host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client. Port forwardings can also be specified in the configuration file.
        Privileged ports can be forwarded only when logging in as root on the remote machine. IPv6 addresses can be specified by enclosing the address in square brackets. By default, TCP listening sockets on the server will be bound to the loopback interface only. This may be overridden by specifying a bind_address. An empty bind_address, or the address ‘*’, indicates that the remote socket should listen on all interfaces. Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)). If the port argument is ‘0’, the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward, the allocated port will be printed to the standard output.

-S ctl_path
        Specifies the location of a control socket for connection sharing, or the string “none” to disable connection sharing. Refer to the description of ControlPath and ControlMaster in ssh_config(5) for details.

-s      May be used to request invocation of a subsystem on the remote system. Subsystems facilitate the use of SSH as a secure transport for other applications (e.g. sftp(1)). The subsystem is specified as the remote command. Refer to the description of SessionType in ssh_config(5) for details.

-T      Disable pseudo-terminal allocation.

-t      Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

-V      Display the version number and exit.

-v      Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3.

-W host:port
        Requests that standard input and output on the client be forwarded to host on port over the secure channel.
        Implies -N, -T, ExitOnForwardFailure and ClearAllForwardings, though these can be overridden in the configuration file or using -o command line options.

-w local_tun[:remote_tun]
        Requests tunnel device forwarding with the specified tun(4) devices between the client (local_tun) and the server (remote_tun). The devices may be specified by numerical ID or the keyword “any”, which uses the next available tunnel device. If remote_tun is not specified, it defaults to “any”. See also the Tunnel and TunnelDevice directives in ssh_config(5). If the Tunnel directive is unset, it will be set to the default tunnel mode, which is “point-to-point”. If a different Tunnel forwarding mode is desired, then it should be specified before -w.

-X      Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file. X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information.

-x      Disables X11 forwarding.

-Y      Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls.

-y      Send log information using the syslog(3) system module. By default this information is sent to stderr.

ssh may additionally obtain configuration data from a per-user configuration file and a system-wide configuration file. The file format and configuration options are described in ssh_config(5).

AUTHENTICATION
The OpenSSH SSH client supports SSH protocol 2.
The methods available for authentication are: GSSAPI-based authentication, host-based authentication, public key authentication, keyboard-interactive authentication, and password authentication. Authentication methods are tried in the order specified above, though PreferredAuthentications can be used to change the default order.

Host-based authentication works as follows: If the machine the user logs in from is listed in /etc/hosts.equiv or /etc/shosts.equiv on the remote machine, the user is non-root and the user names are the same on both sides, or if the files ~/.rhosts or ~/.shosts exist in the user's home directory on the remote machine and contain a line containing the name of the client machine and the name of the user on that machine, the user is considered for login. Additionally, the server must be able to verify the client's host key (see the description of /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts, below) for login to be permitted. This authentication method closes security holes due to IP spoofing, DNS spoofing, and routing spoofing. [Note to the administrator: /etc/hosts.equiv, ~/.rhosts, and the rlogin/rsh protocol in general, are inherently insecure and should be disabled if security is desired.]

Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are done using separate keys, and it is infeasible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements the public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms. The HISTORY section of ssl(8) contains a brief discussion of the DSA and RSA algorithms.

The file ~/.ssh/authorized_keys lists the public keys that are permitted for logging in.
When the user logs in, the ssh program tells the server which key pair it would like to use for authentication. The client proves that it has access to the private key and the server checks that the corresponding public key is authorized for the account. The server may inform the client of errors that prevented public key authentication from succeeding after authentication completes using a different method. These may be viewed by increasing the LogLevel to DEBUG or higher (e.g. by using the -v flag).

The user creates their key pair by running ssh-keygen(1). This stores the private key in ~/.ssh/id_dsa (DSA), ~/.ssh/id_ecdsa (ECDSA), ~/.ssh/id_ecdsa_sk (authenticator-hosted ECDSA), ~/.ssh/id_ed25519 (Ed25519), ~/.ssh/id_ed25519_sk (authenticator-hosted Ed25519), or ~/.ssh/id_rsa (RSA) and stores the public key in ~/.ssh/id_dsa.pub (DSA), ~/.ssh/id_ecdsa.pub (ECDSA), ~/.ssh/id_ecdsa_sk.pub (authenticator-hosted ECDSA), ~/.ssh/id_ed25519.pub (Ed25519), ~/.ssh/id_ed25519_sk.pub (authenticator-hosted Ed25519), or ~/.ssh/id_rsa.pub (RSA) in the user's home directory. The user should then copy the public key to ~/.ssh/authorized_keys in their home directory on the remote machine. The authorized_keys file corresponds to the conventional ~/.rhosts file, and has one key per line, though the lines can be very long. After this, the user can log in without giving the password.

A variation on public key authentication is available in the form of certificate authentication: instead of a set of public/private keys, signed certificates are used. This has the advantage that a single trusted certification authority can be used in place of many public/private keys. See the CERTIFICATES section of ssh-keygen(1) for more information.

The most convenient way to use public key or certificate authentication may be with an authentication agent. See ssh-agent(1) and (optionally) the AddKeysToAgent directive in ssh_config(5) for more information.
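The key-creation and installation steps described above can be sketched end to end. The following sketch runs entirely locally: a temporary directory stands in for the remote account's ~/.ssh, and on a real system the public key would instead be copied to the remote machine (for example with ssh-copy-id(1)):

```shell
# Sketch of the public key setup described above, run entirely locally.
# A temporary directory stands in for the remote account's ~/.ssh.
tmp=$(mktemp -d)

# Create an Ed25519 key pair. -N '' gives an empty passphrase for this
# illustration only; real keys should normally be passphrase-protected.
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_ed25519"

# "Install" the public key: authorized_keys holds one key per line.
cat "$tmp/id_ed25519.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"

# The file now contains exactly one Ed25519 key.
grep -c '^ssh-ed25519 ' "$tmp/authorized_keys"

rm -rf "$tmp"
```

On the remote machine, ~/.ssh and authorized_keys must carry the permissions described in the FILES section, or sshd(8) may refuse to honor them.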
Keyboard-interactive authentication works as follows: The server sends an arbitrary "challenge" text and prompts for a response, possibly multiple times. Examples of keyboard-interactive authentication include BSD Authentication (see login.conf(5)) and PAM (some non-OpenBSD systems).

Finally, if other authentication methods fail, ssh prompts the user for a password. The password is sent to the remote host for checking; however, since all communications are encrypted, the password cannot be seen by someone listening on the network.

ssh automatically maintains and checks a database containing identification for all hosts it has ever been used with. Host keys are stored in ~/.ssh/known_hosts in the user's home directory. Additionally, the file /etc/ssh/ssh_known_hosts is automatically checked for known hosts. Any new hosts are automatically added to the user's file. If a host's identification ever changes, ssh warns about this and disables password authentication to prevent server spoofing or man-in-the-middle attacks, which could otherwise be used to circumvent the encryption. The StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.

When the user's identity has been accepted by the server, the server either executes the given command in a non-interactive session or, if no command has been specified, logs into the machine and gives the user a normal shell as an interactive session. All communication with the remote command or shell will be automatically encrypted.

If an interactive session is requested, ssh by default will only request a pseudo-terminal (pty) for interactive sessions when the client has one. The flags -T and -t can be used to override this behaviour.

If a pseudo-terminal has been allocated, the user may use the escape characters noted below. If no pseudo-terminal has been allocated, the session is transparent and can be used to reliably transfer binary data.
On most systems, setting the escape character to “none” will also make the session transparent even if a tty is used.

The session terminates when the command or shell on the remote machine exits and all X11 and TCP connections have been closed.

ESCAPE CHARACTERS
When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character. A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option.

The supported escapes (assuming the default ‘~’) are:

~.      Disconnect.

~^Z     Background ssh.

~#      List forwarded connections.

~&      Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.

~?      Display a list of escape characters.

~B      Send a BREAK to the remote system (only useful if the peer supports it).

~C      Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and -KD[bind_address:]port for dynamic port-forwardings. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.

~R      Request rekeying of the connection (only useful if the peer supports it).

~V      Decrease the verbosity (LogLevel) when errors are being written to stderr.

~v      Increase the verbosity (LogLevel) when errors are being written to stderr.

TCP FORWARDING
Forwarding of arbitrary TCP connections over a secure channel can be specified either on the command line or in a configuration file.
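A forwarding specification can be checked before use by letting ssh print its resolved configuration with -G, which evaluates options and exits without opening a connection. A small sketch (the host name is a hypothetical placeholder):

```shell
# Ask ssh how it parsed a -L forwarding specification, without connecting.
# host.example.com is a placeholder; -G performs no network access.
ssh -G -L 8080:localhost:80 host.example.com | grep -i '^localforward'
```

The printed localforward line shows the listen port and the destination exactly as ssh will use them, which makes it easy to spot a mistyped forwarding argument.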
One possible application of TCP forwarding is a secure connection to a mail server; another is going through firewalls.

In the example below, we look at encrypting communication for an IRC client, even though the IRC server it connects to does not directly support encrypted communication. This works as follows: the user connects to the remote host using ssh, specifying the ports to be used to forward the connection. After that it is possible to start the program locally, and ssh will encrypt and forward the connection to the remote server.

The following example tunnels an IRC session from the client to an IRC server at “server.example.com”, joining channel “#users”, nickname “pinky”, using the standard IRC port, 6667:

    $ ssh -f -L 6667:localhost:6667 server.example.com sleep 10
    $ irc -c '#users' pinky IRC/127.0.0.1

The -f option backgrounds ssh and the remote command “sleep 10” is specified to allow an amount of time (10 seconds, in the example) to start the program which is going to use the tunnel. If no connections are made within the time specified, ssh will exit.

X11 FORWARDING
If the ForwardX11 variable is set to “yes” (or see the description of the -X, -x, and -Y options above) and the user is using X11 (the DISPLAY environment variable is set), the connection to the X11 display is automatically forwarded to the remote side in such a way that any X11 programs started from the shell (or command) will go through the encrypted channel, and the connection to the real X server will be made from the local machine. The user should not manually set DISPLAY. Forwarding of X11 connections can be configured on the command line or in configuration files.

The DISPLAY value set by ssh will point to the server machine, but with a display number greater than zero. This is normal, and happens because ssh creates a “proxy” X server on the server machine for forwarding the connections over the encrypted channel.
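Whether X11 forwarding would take effect for a given destination can likewise be checked with -G, which prints the resolved configuration without connecting (the host name below is a hypothetical placeholder):

```shell
# Show the effective ForwardX11 setting for a placeholder host; -G makes
# ssh print its resolved configuration and exit without connecting.
ssh -G -X host.example.com | grep -i '^forwardx11 '
```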
ssh will also automatically set up Xauthority data on the server machine. For this purpose, it will generate a random authorization cookie, store it in Xauthority on the server, and verify that any forwarded connections carry this cookie and replace it by the real cookie when the connection is opened. The real authentication cookie is never sent to the server machine (and no cookies are sent in the plain).

If the ForwardAgent variable is set to “yes” (or see the description of the -A and -a options above) and the user is using an authentication agent, the connection to the agent is automatically forwarded to the remote side.

VERIFYING HOST KEYS
When connecting to a server for the first time, a fingerprint of the server's public key is presented to the user (unless the option StrictHostKeyChecking has been disabled). Fingerprints can be determined using ssh-keygen(1):

    $ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key

If the fingerprint is already known, it can be matched and the key can be accepted or rejected. If only legacy (MD5) fingerprints for the server are available, the ssh-keygen(1) -E option may be used to downgrade the fingerprint algorithm to match.

Because of the difficulty of comparing host keys just by looking at fingerprint strings, there is also support to compare host keys visually, using random art. By setting the VisualHostKey option to “yes”, a small ASCII graphic gets displayed on every login to a server, no matter if the session itself is interactive or not. By learning the pattern a known server produces, a user can easily find out that the host key has changed when a completely different pattern is displayed. Because these patterns are not unambiguous however, a pattern that looks similar to the pattern remembered only gives a good probability that the host key is the same, not guaranteed proof.
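Fingerprints and random art can be produced by ssh-keygen(1) for any public key. As a self-contained illustration, the following generates a throwaway key and displays its fingerprint together with its random art, the same rendering VisualHostKey uses for host keys:

```shell
# Generate a throwaway key and show its fingerprint and ASCII random art.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/demo_key"
ssh-keygen -lv -f "$tmp/demo_key.pub"
rm -rf "$tmp"
```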
To get a listing of the fingerprints along with their random art for all known hosts, the following command line can be used:

    $ ssh-keygen -lv -f ~/.ssh/known_hosts

If the fingerprint is unknown, an alternative method of verification is available: SSH fingerprints verified by DNS. An additional resource record (RR), SSHFP, is added to a zonefile and the connecting client is able to match the fingerprint with that of the key presented.

In this example, we are connecting a client to a server, “host.example.com”. The SSHFP resource records should first be added to the zonefile for host.example.com:

    $ ssh-keygen -r host.example.com.

The output lines will have to be added to the zonefile. To check that the zone is answering fingerprint queries:

    $ dig -t SSHFP host.example.com

Finally the client connects:

    $ ssh -o "VerifyHostKeyDNS ask" host.example.com
    [...]
    Matching host key fingerprint found in DNS.
    Are you sure you want to continue connecting (yes/no)?

See the VerifyHostKeyDNS option in ssh_config(5) for more information.

SSH-BASED VIRTUAL PRIVATE NETWORKS
ssh contains support for Virtual Private Network (VPN) tunnelling using the tun(4) network pseudo-device, allowing two networks to be joined securely. The sshd_config(5) configuration option PermitTunnel controls whether the server supports this, and at what level (layer 2 or 3 traffic).

The following example would connect client network 10.0.50.0/24 with remote network 10.0.99.0/24 using a point-to-point connection from 10.1.1.1 to 10.1.1.2, provided that the SSH server running on the gateway to the remote network, at 192.168.1.15, allows it.

On the client:

    # ssh -f -w 0:1 192.168.1.15 true
    # ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252
    # route add 10.0.99.0/24 10.1.1.2

On the server:

    # ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252
    # route add 10.0.50.0/24 10.1.1.1

Client access may be more finely tuned via the /root/.ssh/authorized_keys file (see below) and the PermitRootLogin server option.
The following entry would permit connections on tun(4) device 1 from user “jane” and on tun device 2 from user “john”, if PermitRootLogin is set to “forced-commands-only”:

    tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane
    tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john

Since an SSH-based setup entails a fair amount of overhead, it may be more suited to temporary setups, such as for wireless VPNs. More permanent VPNs are better provided by tools such as ipsecctl(8) and isakmpd(8).

ENVIRONMENT
ssh will normally set the following environment variables:

DISPLAY
        The DISPLAY variable indicates the location of the X11 server. It is automatically set by ssh to point to a value of the form “hostname:n”, where “hostname” indicates the host where the shell runs, and ‘n’ is an integer ≥ 1. ssh uses this special value to forward X11 connections over the secure channel. The user should normally not set DISPLAY explicitly, as that will render the X11 connection insecure (and will require the user to manually copy any required authorization cookies).

HOME    Set to the path of the user's home directory.

LOGNAME
        Synonym for USER; set for compatibility with systems that use this variable.

MAIL    Set to the path of the user's mailbox.

PATH    Set to the default PATH, as specified when compiling ssh.

SSH_ASKPASS
        If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)

SSH_ASKPASS_REQUIRE
        Allows further control over the use of an askpass program. If this variable is set to “never” then ssh will never attempt to use one.
        If it is set to “prefer”, then ssh will prefer to use the askpass program instead of the TTY when requesting passwords. Finally, if the variable is set to “force”, then the askpass program will be used for all passphrase input regardless of whether DISPLAY is set.

SSH_AUTH_SOCK
        Identifies the path of a UNIX-domain socket used to communicate with the agent.

SSH_CONNECTION
        Identifies the client and server ends of the connection. The variable contains four space-separated values: client IP address, client port number, server IP address, and server port number.

SSH_ORIGINAL_COMMAND
        This variable contains the original command line if a forced command is executed. It can be used to extract the original arguments.

SSH_TTY
        This is set to the name of the tty (path to the device) associated with the current shell or command. If the current session has no tty, this variable is not set.

SSH_TUNNEL
        Optionally set by sshd(8) to contain the interface names assigned if tunnel forwarding was requested by the client.

SSH_USER_AUTH
        Optionally set by sshd(8), this variable may contain a pathname to a file that lists the authentication methods successfully used when the session was established, including any public keys that were used.

TZ      This variable is set to indicate the present time zone if it was set when the daemon was started (i.e. the daemon passes the value on to new connections).

USER    Set to the name of the user logging in.

Additionally, ssh reads ~/.ssh/environment, and adds lines of the format “VARNAME=value” to the environment if the file exists and users are allowed to change their environment. For more information, see the PermitUserEnvironment option in sshd_config(5).

FILES
~/.rhosts
        This file is used for host-based authentication (see above). On some machines this file may need to be world-readable if the user's home directory is on an NFS partition, because sshd(8) reads it as root.
        Additionally, this file must be owned by the user, and must not have write permissions for anyone else. The recommended permission for most machines is read/write for the user, and not accessible by others.

~/.shosts
        This file is used in exactly the same way as .rhosts, but allows host-based authentication without permitting login with rlogin/rsh.

~/.ssh/
        This directory is the default location for all user-specific configuration and authentication information. There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others.

~/.ssh/authorized_keys
        Lists the public keys (DSA, ECDSA, Ed25519, RSA) that can be used for logging in as this user. The format of this file is described in the sshd(8) manual page. This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others.

~/.ssh/config
        This is the per-user configuration file. The file format and configuration options are described in ssh_config(5). Because of the potential for abuse, this file must have strict permissions: read/write for the user, and not writable by others.

~/.ssh/environment
        Contains additional definitions for environment variables; see ENVIRONMENT, above.

~/.ssh/id_dsa
~/.ssh/id_ecdsa
~/.ssh/id_ecdsa_sk
~/.ssh/id_ed25519
~/.ssh/id_ed25519_sk
~/.ssh/id_rsa
        Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key which will be used to encrypt the sensitive part of this file using AES-128.

~/.ssh/id_dsa.pub
~/.ssh/id_ecdsa.pub
~/.ssh/id_ecdsa_sk.pub
~/.ssh/id_ed25519.pub
~/.ssh/id_ed25519_sk.pub
~/.ssh/id_rsa.pub
        Contains the public key for authentication.
        These files are not sensitive and can (but need not) be readable by anyone.

~/.ssh/known_hosts
        Contains a list of host keys for all hosts the user has logged into that are not already in the systemwide list of known host keys. See sshd(8) for further details of the format of this file.

~/.ssh/rc
        Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information.

/etc/hosts.equiv
        This file is for host-based authentication (see above). It should only be writable by root.

/etc/shosts.equiv
        This file is used in exactly the same way as hosts.equiv, but allows host-based authentication without permitting login with rlogin/rsh.

/etc/ssh/ssh_config
        Systemwide configuration file. The file format and configuration options are described in ssh_config(5).

/etc/ssh/ssh_host_key
/etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_ecdsa_key
/etc/ssh/ssh_host_ed25519_key
/etc/ssh/ssh_host_rsa_key
        These files contain the private parts of the host keys and are used for host-based authentication.

/etc/ssh/ssh_known_hosts
        Systemwide list of known host keys. This file should be prepared by the system administrator to contain the public host keys of all machines in the organization. It should be world-readable. See sshd(8) for further details of the format of this file.

/etc/ssh/sshrc
        Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started. See the sshd(8) manual page for more information.

EXIT STATUS
ssh exits with the exit status of the remote command or with 255 if an error occurred.

SEE ALSO
scp(1), sftp(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh-keyscan(1), tun(4), ssh_config(5), ssh-keysign(8), sshd(8)

STANDARDS
S. Lehtinen and C. Lonvick, The Secure Shell (SSH) Protocol Assigned Numbers, RFC 4250, January 2006.
T. Ylonen and C. Lonvick, The Secure Shell (SSH) Protocol Architecture, RFC 4251, January 2006.
T. Ylonen and C. Lonvick, The Secure Shell (SSH) Authentication Protocol, RFC 4252, January 2006.
T. Ylonen and C. Lonvick, The Secure Shell (SSH) Transport Layer Protocol, RFC 4253, January 2006.
T. Ylonen and C. Lonvick, The Secure Shell (SSH) Connection Protocol, RFC 4254, January 2006.
J. Schlyter and W. Griffin, Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, January 2006.
F. Cusack and M. Forssen, Generic Message Exchange Authentication for the Secure Shell Protocol (SSH), RFC 4256, January 2006.
J. Galbraith and P. Remaker, The Secure Shell (SSH) Session Channel Break Extension, RFC 4335, January 2006.
M. Bellare, T. Kohno, and C. Namprempre, The Secure Shell (SSH) Transport Layer Encryption Modes, RFC 4344, January 2006.
B. Harris, Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol, RFC 4345, January 2006.
M. Friedl, N. Provos, and W. Simpson, Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol, RFC 4419, March 2006.
J. Galbraith and R. Thayer, The Secure Shell (SSH) Public Key File Format, RFC 4716, November 2006.
D. Stebila and J. Green, Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer, RFC 5656, December 2009.
A. Perrig and D. Song, Hash Visualization: a New Technique to improve Real-World Security, 1999, International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC '99).

AUTHORS
OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0.

macOS 14.5                      October 11, 2023
ssh – OpenSSH remote login client
ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port] [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11] [-i identity_file] [-J destination] [-L address] [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-P tag] [-p port] [-R address] [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] destination [command [argument ...]] ssh [-Q query_option]
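The exit-status convention described above (255 for an ssh failure, otherwise the remote command's own status) can be acted on in scripts. The sketch below is not from the manual: it simulates the remote side with a local shell so the branching logic runs without a real host, and "example.host" is a purely hypothetical name.

```shell
#!/bin/sh
# Sketch: distinguish an ssh transport failure (exit 255) from the
# remote command's own exit status. The real call would be
#     ssh "$host" "$cmd"
# but here the remote side is simulated with a local shell so the
# logic can run anywhere ("example.host" is a hypothetical name).
run_remote() {
    host=$1; cmd=$2
    if sh -c "$cmd"; then status=0; else status=$?; fi
    if [ "$status" -eq 255 ]; then
        echo "ssh to $host failed (connection or authentication error)"
    else
        echo "remote command exited with status $status"
    fi
    return "$status"
}

run_remote example.host 'exit 3' || true
```

Because 255 is reserved for ssh itself, a remote command that happens to exit with 255 is indistinguishable from a connection failure; scripts that care should have the remote command report status some other way.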
latency
The latency utility provides scheduling and interrupt-latency statistics. Because of the kernel tracing facility it uses to operate, the command requires root privileges.

The arguments are as follows:

-c code_file
    Takes a path to a code file that contains the mappings for the system calls. This option overrides the default location of the system call code file, /usr/share/misc/trace.codes.

-h  Display high-resolution interrupt latencies and write them to latencies.csv (truncating any existing file) upon exit.

-m  Display per-CPU interrupt latency statistics.

-it threshold
    Set the interrupt latency threshold, expressed in microseconds. If the latency exceeds this value, and a log file has been specified, a record of what occurred during this time is recorded.

-l log_file
    Specifies a log file that is written to when either the interrupt or scheduling latency threshold is exceeded.

-n kernel
    By default, latency acts on the default /System/Library/Kernels/kernel.development. This option allows you to specify an alternate booted kernel.

-p priority
    Specifies the priority level to observe scheduler latencies for. The default is realtime (97). A range of priorities to monitor can also be provided, for example 31-47 or 0-127.

-st threshold
    Set the scheduler latency threshold in microseconds. If latency exceeds this, and a log file has been specified, a record of what occurred during this time is recorded.

-R raw_file
    Specifies a raw trace file to use as input.

The data columns displayed are as follows:

SCHEDULER   The number of context switches that fall within the described delay.
INTERRUPTS  The number of interrupts that fall within the described delay.

The latency utility is also SIGWINCH savvy, so adjusting your window geometry will change the list of delay values displayed.

SAMPLE USAGE
    latency -p 97 -st 20000 -it 1000 -l /var/tmp/latency.log

The latency utility will watch threads with priority 97 for scheduling latencies.
The threshold for the scheduler is set to 20000 microseconds. The threshold for interrupts is set to 1000 microseconds. Latencies that exceed these thresholds will be logged in /var/tmp/latency.log.

SEE ALSO
    fs_usage(1), sc_usage(1), top(1)

macOS  March 28, 2000
latency – monitors scheduling and interrupt latency
latency [-p priority] [-h] [-m] [-st threshold] [-it threshold] [-c code_file] [-l log_file] [-R raw_file] [-n kernel]
lsm
The Latent Semantic Mapping framework is a language-independent, Unicode-based technology that builds maps and uses them to classify texts into one of a number of categories. lsm is a tool to create, manipulate, test, and dump Latent Semantic Mapping maps. It is designed to provide access to a large subset of the functionality of the Latent Semantic Mapping API, mainly for rapid prototyping and diagnostic purposes, but possibly also for simple shell-script-based applications of Latent Semantic Mapping.

COMMANDS
lsm provides a variety of commands (lsm_command in the Synopsis), each of which often has a wealth of options (see the Command Options below). Command names may be abbreviated to unambiguous prefixes.

lsm create map_file input_files
    Create a new LSM map from the specified input_files.

lsm update map_file input_files
    Add the specified input_files to an existing LSM map.

lsm evaluate map_file input_files
    Classify the specified input_files into the categories of the LSM map.

lsm cluster [--k-means=N | --agglomerative=N] [--apply] map_file
    Compute clusters for the map and, if the --apply option is specified, transform the map accordingly. Multiple levels of clustering may be applied for faster performance on large maps, e.g.

        lsm cluster --k-means=100 --each --agglomerative=100 --agglomerative=1000 my.map

    first computes 100 clusters using (fast) k-means clustering, computes 100 subclusters for each first-stage cluster using agglomerative clustering, and finally reduces those 10000 clusters to 1000 using agglomerative clustering.

lsm dump map_file [input_files]
    Without input_files, dump all words in the map with their counts. With input_files, dump, for each file, the words that appear in the map, their counts in the map, and their relative frequencies in the input file.

lsm info map_file
    Bypass the Latent Semantic Mapping framework to extract and print information about the file and perform a number of consistency checks on it.
(NOT IMPLEMENTED YET)

COMMAND OPTIONS
This section describes the command_options that are available for the lsm commands. Not all commands support all of these options; each option is only supported for commands where it makes sense. However, when a command has one of these options, you can count on it having the same meaning as in other commands.

--append-categories
    Directs the update command to put the data into new categories appended after the existing ones, instead of adding the data to the existing categories.

--categories count
    Directs the evaluate command to only list the top count categories.

--category-delimiter delimiter
    Specify the delimiter to be used between categories in the input_files passed to the create and update commands.

    group   Categories are separated by a `;' argument.
    file    Each input_file represents a separate category. This is the default if the --category-delimiter option is not given.
    line    Each line represents a separate category.
    string  Categories are separated by the specified string.

--clobber
    When creating a map, overwrite an existing file at the path, even if it's not an LSM map. By default, create will only overwrite an existing file if it's believed to be an LSM map, which guards against frequent operator errors such as:

        lsm create /usr/include/*.h

--dimensions dim
    Direct the create and update commands to use the given number of dimensions for computing the map (defaults to the number of categories). This option is useful to manage the size and computational overhead of maps with a large number of categories.

--discard-counts
    Direct the create and update commands to omit the raw word / token counts when writing the map. This results in a map that is more compact, but cannot be updated any further.

--hash
    Direct the create and update commands to write the map in a format that is not human readable with default file manipulation tools like cat or hexdump.
This is useful in applications such as junk mail filtering, where input data may contain naughty words and where the contents of the map may tip off spammers as to which words to avoid.

--help
    List an overview of the options available for a command. Available for all commands.

--html
    Strip HTML codes from the input_files. Useful for mail and web input. Available for the create, update, evaluate, and dump commands.

--junk-mail
    When parsing the input files, apply heuristics to counteract common methods used by spammers to disguise incriminating words, such as:

    Zer0 1nt3rest l0ans   Substituting letters with digits
    W E A L T H           Adding spaces between letters
    m.o.r.t.g.a.g.e       Adding punctuation between letters

    Available for the create, update, evaluate, and dump commands.

--pairs
    If specified with the create command when building the map, store counts for pairs of words as well as the words themselves. This can increase accuracy for certain classes of problems, but will generate unreasonably large maps unless the vocabulary is fairly limited.

--stop-words stop_word_file
    If specified with the create command, stop_word_file is parsed and all words found are excluded from texts evaluated against the map. This is useful for excluding frequent, semantically meaningless words.

--sweep-cutoff threshold
--sweep-frequency days
    Available for the create and update commands. Every specified number of days (by default 7), scan the map and remove from it any entries that have been in the map for at least 2 previous scans and whose total counts are smaller than threshold. threshold defaults to 0, so by default the map is not scanned.

--text-delimiter delimiter
    Specify the delimiter to be used between texts in the input_files passed to the create, update, evaluate, and dump commands.

    file    Each input_file represents a separate text. This is the default if the --text-delimiter option is not given.
    line    Each line represents a separate text.
    string  Texts are separated by the specified string.
--triplets
    If specified with the create command when building the map, store counts for triplets and pairs of words as well as the words themselves. This can increase accuracy for certain classes of problems, but will generate unreasonably large maps unless the vocabulary is fairly limited.

--weight weight
    Scale counts of input words for the create and update commands by the specified weight, which may be a positive or negative floating point number.

--words
    Directs the evaluate or cluster commands to apply to words, instead of categories.

--words=count
    Directs the evaluate command to list the top count words, instead of categories.
lsm - Latent Semantic Mapping tool
lsm lsm_command [command_options] map_file [input_files]
lsm evaluate --html --junk-mail ~/Library/Mail/V2/MailData/LSMMap2 msg*.txt
    Simulate the Mail.app junk mail filter by evaluating the specified files (assumed to each hold the raw text of one mail message) against the user's junk mail map.

lsm dump ~/Library/Mail/V2/MailData/LSMMap2
    Dump the words accumulated in the junk mail map and their counts.

lsm create --category-delimiter=group c_vs_h *.c ';' *.h
    Create an LSM map trained to distinguish C header files from C source files.

lsm update --weight 2.0 --cat=group c_vs_h ';' ../xy/*.h
    Add some additional header files with an increased weight to the training.

lsm create --help
    List the options available for the lsm create command.

1.0  2023-05-31  LSM(1)
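As a further illustration (a sketch, not from the manual: all file and map names are hypothetical, and lsm ships with macOS only, so the snippet no-ops elsewhere), a map can also be trained with one text per line using --text-delimiter=line:

```shell
#!/bin/sh
# Sketch: train a two-category map where every line of each input file
# is a separate training text. All names here are hypothetical.
command -v lsm >/dev/null 2>&1 || exit 0   # lsm is macOS-only
work=$(mktemp -d)
printf 'cheap pills now\nfree money offer\n' > "$work/spam_lines.txt"
printf 'meeting notes\nquarterly plan\n'     > "$work/ham_lines.txt"
lsm create --category-delimiter=group --text-delimiter=line \
    "$work/spam_vs_ham.map" "$work/spam_lines.txt" ';' "$work/ham_lines.txt"
lsm evaluate "$work/spam_vs_ham.map" "$work/ham_lines.txt"
```

This combines the group category delimiter (the ';' argument separates the two categories) with line-delimited texts, as described under COMMAND OPTIONS above.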
h2ph
h2ph converts any C header files specified to the corresponding Perl header file format. It is most easily run while in /usr/include:

    cd /usr/include; h2ph * sys/*

or

    cd /usr/include; h2ph * sys/* arpa/* netinet/*

or

    cd /usr/include; h2ph -r -l .

The output files are placed in the hierarchy rooted at Perl's architecture-dependent library directory. You can specify a different hierarchy with a -d switch. If run with no arguments, h2ph filters standard input to standard output.
h2ph - convert .h C header files to .ph Perl header files
h2ph [-d destination directory] [-r | -a] [-l] [-h] [-e] [-D] [-Q] [headerfiles]
-d destination_dir
    Put the resulting .ph files beneath destination_dir, instead of beneath the default Perl library location ($Config{'installsitearch'}).
-r  Run recursively; if any of headerfiles are directories, then run h2ph on all files in those directories (and their subdirectories, etc.). -r and -a are mutually exclusive.
-a  Run automagically; convert headerfiles, as well as any .h files which they include. This option will search for .h files in all directories which your C compiler ordinarily uses. -a and -r are mutually exclusive.
-l  Symbolic links will be replicated in the destination directory. If -l is not specified, then links are skipped over.
-h  Put 'hints' in the .ph files which will help in locating problems with h2ph. In those cases when you require a .ph file containing syntax errors, instead of the cryptic

        [ some error condition ] at (eval mmm) line nnn

    you will see the slightly more helpful

        [ some error condition ] at filename.ph line nnn

    However, the .ph files almost double in size when built using -h.
-e  If an error is encountered during conversion, the output file will be removed and a warning emitted instead of terminating the conversion immediately.
-D  Include the code from the .h file as a comment in the .ph file. This is primarily used for debugging h2ph.
-Q  'Quiet' mode; don't print out the names of the files being converted.

ENVIRONMENT
    No environment variables are used.

FILES
    /usr/include/*.h
    /usr/include/sys/*.h
    etc.

AUTHOR
    Larry Wall

SEE ALSO
    perl(1)

DIAGNOSTICS
    The usual warnings if it can't read or write the files involved.

BUGS
    Doesn't construct the %sizeof array for you. It doesn't handle all C constructs, but it does attempt to isolate definitions inside evals so that you can get at the definitions that it can translate. It's only intended as a rough tool. You may need to dicker with the files produced. You have to run this program by hand; it's not run as part of the Perl installation.
Doesn't handle complicated expressions built piecemeal, a la:

    enum {
        FIRST_VALUE,
        SECOND_VALUE,
    #ifdef ABC
        THIRD_VALUE
    #endif
    };

Doesn't necessarily locate all of your C compiler's internally-defined symbols.

perl v5.30.3  2024-04-13  H2PH(1)
xar
The XAR project aims to provide an easily extensible archive format. Important design decisions include an easily extensible XML table of contents (TOC) for random access to archived files, storing the TOC at the beginning of the archive to allow for efficient handling of streamed archives, the ability to handle files of arbitrarily large sizes, the ability to choose independent encodings for individual files in the archive, the ability to store checksums for individual files in both compressed and uncompressed form, and the ability to query the table of contents' rich metadata.

FUNCTIONS
One of the following options must be used:

-c  Creates an archive
-t  Lists the contents of an archive
-x  Extracts an archive

NOTE: all of the above require the use of the -f option (filename) as this release of xar doesn't correctly handle pipes or sockets.

-f  The filename to use for creation, listing or extraction. With extraction, this can be a POSIX regular expression.
xar - eXtensible ARchiver

DEPRECATION WARNING
    xar is no longer under active development by Apple. Clients of xar should pursue alternative archive formats.
xar -[ctx][v] ...
--compression
    Specifies the compression type to use. Valid values: none, gzip, bzip2, lzma (on some systems). Default value: gzip
-C <path>
    On extract, xar will chdir to the specified path before extracting the archive.
-a  Synonym for --compression=lzma
-j  Synonym for --compression=bzip2
-z  Synonym for --compression=gzip
--compression-args=<arguments>
    Specifies arguments to the compression engine selected. gzip, bzip2, and lzma all take a single integer argument between 0 and 9 specifying the compression level to use.
--dump-toc=<filename>
    Has xar dump the xml header into the specified file. "-" can be specified to mean stdout.
--dump-toc-cksum
    Dumps the ToC checksum to stdout along with the algorithm of the ToC.
--dump-header
    Has xar print out the xar binary header information to stdout.
--extract-subdoc=<name>
    Extracts the specified subdocument to a document in cwd named <name>.xml
--list-subdocs
    List the subdocuments in the xml header
--toc-cksum
    Specifies the hashing algorithm to use for xml header verification. Valid values: md5 (on some systems), sha1, sha256, and sha512. Default value: sha1
--file-cksum
    Specifies the hashing algorithm to use for file content verification. Valid values: md5 (on some systems), sha1, sha256, and sha512. Default value: sha1
-l  On archival, stay on the local device.
-P  On extract, set ownership based on uid/gid. If the uid/gid can be set on the extracted file, setuid/setgid bits will also be preserved.
-p  On extract, set ownership based on symbolic names, if possible. If the uid/gid can be set on the extracted file, setuid/setgid bits will also be preserved.
-s <filename>
    On extract, specifies the file to extract subdocuments to. On archival, specifies an xml file to add as a subdocument.
-v  Verbose output
--exclude
    Specifies a POSIX regular expression of files to exclude from adding to the archive during creation or from being extracted during extraction. This option can be specified multiple times.
--rsize
    Specifies a size (in bytes) for the internal libxar read buffer while performing I/O.
--coalesce-heap
    When multiple files in the archive are identical, only store one copy of the data in the heap. This creates smaller archives, but the archives created are not streamable.
--link-same
    When the data sections of multiple files are identical, hardlink them within the archive.
--no-compress
    Specifies a POSIX regular expression of files to archive, but not compress. The archived files will be copied raw into the archive. This can be used to exclude already gzipped files from being gzipped during the archival process.
--prop-include
    Specifies a file property to be included in the archive. When this option is specified, only the specified properties will be included. Anything not specifically included with this option will be omitted. This option can be used multiple times.
--prop-exclude
    Specifies a file property to be excluded from the archive. When this option is specified, all file properties will be included except the specified properties. This option can be used multiple times.
--distribution
    Creates an archive to only contain file properties safe for file distribution. Currently, only name, type, mode, and data are preserved with this option.
--keep-existing
    Does not overwrite existing files during extraction. Keeps any previously existing files while extracting.
-k  Synonym for --keep-existing.
--keep-setuid
    When extracting without the -p or -P options, xar will extract files as the uid/gid of the extracting process. In this situation, xar will strip setuid/setgid bits from the extracted files for security reasons. --keep-setuid will preserve the setuid/setgid bits even though the uid/gid of the extracted file is not the same as the archived file.
xar -cf sample.xar /home/uid
    Create a xar archive of all files in /home/uid

xar -tf sample.xar
    List the contents of the xar archive sample.xar

xar -xf sample.xar
    Extract the contents of sample.xar to the current working directory

BUGS
    Doesn't currently work with pipes or streams. Might be fixed in a future release. Probably one or two more somewhere in there. If you find one please report it to http://code.google.com/p/xar/

AUTHORS
    Rob Braun <bbraun AT synack DOT net>
    Landon Fuller <landonf AT bikemonkey DOT org>
    David Leimbach
    Kevin Van Vechten

version 1.8  June 4, 2015  XAR(1)
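The --exclude option described above composes with the basic -c/-t workflow. The sketch below is illustrative only (the project layout and file names are invented), and it no-ops where xar is not installed:

```shell
#!/bin/sh
# Sketch: archive a tree while excluding object files by POSIX regex,
# then list the result. All paths are scratch names, not a real project.
command -v xar >/dev/null 2>&1 || exit 0   # xar may be absent off macOS
work=$(mktemp -d)
cd "$work"
mkdir project
echo 'source' > project/main.c
echo 'object' > project/main.o
xar -cf project.xar --exclude '\.o$' project
xar -tf project.xar
```

Because --exclude may be given multiple times, additional patterns (editor backups, VCS directories, and so on) can be stacked on the same command line.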
lwp-request
This program can be used to send requests to WWW servers and your local file system. The request content for POST and PUT methods is read from stdin. The content of the response is printed on stdout. Error messages are printed on stderr. The program returns a status value indicating the number of URLs that failed.

The options are:

-m <method>
    Set which method to use for the request. If this option is not used, then the method is derived from the name of the program.
-f  Force request through, even if the program believes that the method is illegal. The server might reject the request eventually.
-b <uri>
    This URI will be used as the base URI for resolving all relative URIs given as argument.
-t <timeout>
    Set the timeout value for the requests. The timeout is the amount of time that the program will wait for a response from the remote server before it fails. The default unit for the timeout value is seconds. You might append "m" or "h" to the timeout value to make it minutes or hours, respectively. The default timeout is '3m', i.e. 3 minutes.
-i <time>
    Set the If-Modified-Since header in the request. If time is the name of a file, use the modification timestamp for this file. If time is not a file, it is parsed as a literal date. Take a look at HTTP::Date for recognized formats.
-c <content-type>
    Set the Content-Type for the request. This option is only allowed for requests that take a content, i.e. POST and PUT. You can force methods to take content by using the "-f" option together with "-c". The default Content-Type for POST is "application/x-www-form-urlencoded". The default Content-Type for the others is "text/plain".
-p <proxy-url>
    Set the proxy to be used for the requests. The program also loads proxy settings from the environment. You can disable this with the "-P" option.
-P  Don't load proxy settings from environment.
-H <header>
    Send this HTTP header with each request. You can specify several, e.g.:

        lwp-request \
            -H 'Referer: http://other.url/' \
            -H 'Host: somehost' \
            http://this.url/

-C <username>:<password>
    Provide credentials for documents that are protected by Basic Authentication. If the document is protected and you did not specify the username and password with this option, then you will be prompted to provide these values.

The following options control what is displayed by the program:

-u  Print request method and absolute URL as requests are made.
-U  Print request headers in addition to request method and absolute URL.
-s  Print response status code. This option is always on for HEAD requests.
-S  Print response status chain. This shows redirect and authorization requests that are handled by the library.
-e  Print response headers. This option is always on for HEAD requests.
-E  Print response status chain with full response headers.
-d  Do not print the content of the response.
-o <format>
    Process HTML content in various ways before printing it. If the content type of the response is not HTML, then this option has no effect. The legal format values are: "text", "ps", "links", "html" and "dump". If you specify the "text" format then the HTML will be formatted as plain "latin1" text. If you specify the "ps" format then it will be formatted as PostScript. The "links" format will output all links found in the HTML document. Relative links will be expanded to absolute ones. The "html" format will reformat the HTML code and the "dump" format will just dump the HTML syntax tree. Note that the "HTML-Tree" distribution needs to be installed for this option to work. In addition the "HTML-Format" distribution needs to be installed for "-o text" or "-o ps" to work.
-v  Print the version number of the program and quit.
-h  Print usage message and quit.
-a  Set text (ascii) mode for content input and output. If this option is not used, content input and output is done in binary mode.
Because this program is implemented using the LWP library, it will only support the protocols that LWP supports. SEE ALSO lwp-mirror, LWP COPYRIGHT Copyright 1995-1999 Gisle Aas. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. AUTHOR Gisle Aas <gisle@aas.no> perl v5.34.0 2020-04-14 LWP-REQUEST(1)
lwp-request - Simple command line user agent
lwp-request [-afPuUsSedvhx] [-m method] [-b base URL] [-t timeout] [-i if-modified-since] [-c content-type] [-C credentials] [-p proxy-url] [-o format] url...
dns-sd
The dns-sd command is a network diagnostic tool, much like ping(8) or traceroute(8). However, unlike those tools, most of its functionality is not implemented in the dns-sd executable itself, but in library code that is available to any application. The library API that dns-sd uses is documented in /usr/include/dns_sd.h. The dns-sd command replaces the older mDNS command.

The dns-sd command is primarily intended for interactive use. Because its command-line arguments and output format are subject to change, invoking it from a shell script will generally be fragile. Additionally, the asynchronous nature of DNS Service Discovery does not lend itself easily to script-oriented programming. For example, calls like "browse" never complete; the action of performing a "browse" sets in motion machinery to notify the client whenever instances of that service type appear or disappear from the network. These notifications continue to be delivered indefinitely, for minutes, hours, or even days, as services come and go, until the client explicitly terminates the call. This style of asynchronous interaction works best with applications that are either multi-threaded, or use a main event-handling loop to receive keystrokes, network data, and other asynchronous event notifications as they happen.

If you wish to perform DNS Service Discovery operations from a scripting language, then the best way to do this is not to execute the dns-sd command and then attempt to decipher the textual output, but instead to directly call the DNS-SD APIs using a binding for your chosen language. For example, if you are programming in Ruby, then you can directly call DNS-SD APIs using the dnssd package documented at <http://rubyforge.org/projects/dnssd/>. Similar bindings for other languages are also in development.

dns-sd -E
    Return a list of domains recommended for registering (advertising) services.

dns-sd -F
    Return a list of domains recommended for browsing services.
Normally, on your home network, the only domain you are likely to see is "local". However, if your network administrator has created Domain Enumeration records, then you may also see other recommended domains for registering and browsing.

dns-sd -R name type domain port [key=value ...]
    Register (advertise) a service in the specified domain with the given name and type as listening (on the current machine) on port. name can be arbitrary unicode text, containing any legal unicode characters (including dots, spaces, slashes, colons, etc. without restriction), up to 63 UTF-8 bytes long. type must be of the form "_app-proto._tcp" or "_app-proto._udp", where "app-proto" is an application protocol name registered at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml. domain is the domain in which to register the service. In current implementations, only the local multicast domain "local" is supported. In the future, registering will be supported in any arbitrary domain that has a working DNS Update server [RFC 2136]. The domain "." is a synonym for "pick a sensible default", which today means "local". port is a number from 0 to 65535, and is the TCP or UDP port number upon which the service is listening. Additional attributes of the service may optionally be described by key/value pairs, which are stored in the advertised service's DNS TXT record. Allowable keys and values are listed with the service registration at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml.

dns-sd -B type domain
    Browse for instances of service type in domain. For valid types, see http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml, as described above. Omitting the domain or using "." means "pick a sensible default."
dns-sd -L name type domain
    Look up and display the information necessary to contact and use the named service: the hostname of the machine where that service is available, the port number on which the service is listening, and (if present) TXT record attributes describing properties of the service. Note that in a typical application, browsing may only happen rarely, while lookup (or "resolving") happens every time the service is used. For example, a user browses the network to pick a default printer fairly rarely, but once a default printer has been picked, that named service is resolved to its current IP address and port number every time the user presses Cmd-P to print.

dns-sd -P name type domain port host IP [key=value ...]
    Create a proxy advertisement for a service running on (offered by) some other machine. The two new options are host, a name for the device, and IP, its address. The service for which you create a proxy advertisement does not necessarily have to be on your local network. You can set up a local proxy for a website on the Internet.

dns-sd -q name rrtype rrclass
    Look up any DNS name, resource record type, and resource record class, not necessarily DNS-SD names and record types. If rrtype is not specified, it queries for the IPv4 address of the name; if rrclass is not specified, IN class is assumed. If the name is not a fully qualified domain name, then search domains may be appended.

dns-sd -Z type domain
    Browse for service instances and display output in zone file format.

dns-sd -G v4/v6/v4v6 name
    Look up the IP address information of the name. If v4 is specified, the IPv4 address of the name is looked up; if v6 is specified, the IPv6 address is looked up. If v4v6 is specified, both the IPv4 and IPv6 addresses are looked up. If the name is not a fully qualified domain name, then search domains may be appended.

dns-sd -V
    Return the version of the currently running daemon/system service.
dns-sd – Multicast DNS (mDNS) & DNS Service Discovery (DNS-SD) Test Tool
dns-sd -E
dns-sd -F
dns-sd -R name type domain port [key=value ...]
dns-sd -B type domain
dns-sd -L name type domain
dns-sd -P name type domain port host IP [key=value ...]
dns-sd -q name rrtype rrclass
dns-sd -Z type domain
dns-sd -G v4/v6/v4v6 name
dns-sd -V
To advertise the existence of LPR printing service on port 515 on this machine, such that it will be discovered by the Mac OS X printing software and other DNS-SD compatible printing clients, use:

    dns-sd -R "My Test" _printer._tcp. . 515 pdl=application/postscript

For this registration to be useful, you need to actually have LPR service available on port 515. Advertising a service that does not exist is not very useful, and will be confusing and annoying to other people on the network. Similarly, to advertise a web page being served by an HTTP server on port 80 on this machine, such that it will show up in the Bonjour list in Safari and other DNS-SD compatible Web clients, use:

    dns-sd -R "My Test" _http._tcp . 80 path=/path-to-page.html

To find the advertised web pages on the local network (the same list that Safari shows), use:

    dns-sd -B _http._tcp

While that command is running, in another window, try the dns-sd -R example given above to advertise a web page, and you should see the "Add" event reported to the dns-sd -B window. Now press Ctrl-C in the dns-sd -R window and you should see the "Remove" event reported to the dns-sd -B window.

In the example below, the www.apple.com web page is advertised as a service called "apple", running on a target host called apple.local, which resolves to 17.149.160.49.

    dns-sd -P apple _http._tcp "" 80 apple.local 17.149.160.49

The Bonjour menu in the Safari web browser will now show "apple". The same IP address can be reached by entering apple.local in the web browser. In either case, the request will be resolved to the IP address and the browser will show contents associated with www.apple.com. If a client wants to be notified of changes in server state, it can initiate a query for the service's particular record and leave it running.
For example, to monitor the status of an iChat user you can use: dns-sd -q someone@ex1._presence._tcp.local txt Every time the status of that user (someone) changes, you will see a new TXT record result reported. You can also query for a unicast name like www.apple.com and monitor its status. dns-sd -q www.apple.com FILES /usr/bin/dns-sd SEE ALSO mDNSResponder(8) BUGS dns-sd bugs are tracked in Apple Radar component "mDNSResponder". HISTORY The dns-sd command first appeared in Mac OS X 10.4 (Tiger). Darwin April 2004 Darwin
tab2space
tab2space expands tab characters into a specified number of spaces. It also normalizes line endings to a single, consistent format.
tab2space - Utility to expand tabs and ensure consistent line endings
tab2space [options] [infile [outfile]] ...
-help or -h display this help message -dos or -crlf set line ends to CRLF (PC-DOS/Windows - default) -mac or -cr set line ends to CR (classic Mac OS) -unix or -lf set line ends to LF (Unix / Mac OS X) -tabs preserve tabs, e.g. for Makefile -t<n> set tabs to <n> spaces (default is 4) SEE ALSO HTML Tidy Project Page at http://tidy.sourceforge.net AUTHOR Dave Raggett <dsr@w3.org> February 6, 2003 TAB2SPACE(1)
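tab2space's two transformations (tab expansion against fixed tab stops, and line-ending normalization) are easy to misread as a simple search-and-replace. The following Python fragment is a sketch of the algorithm the page describes, not the tool's actual source:

```python
# Illustrative sketch of tab2space's behavior: expand tabs to the
# next tab stop and normalize every line ending to one chosen style.

def tab2space(text, tabsize=4, eol="\r\n"):
    out_lines = []
    # splitlines() accepts CRLF, CR and LF uniformly, mirroring the
    # tool's tolerance of any input line-ending style.
    for line in text.splitlines():
        # expandtabs pads each tab to the next multiple of tabsize,
        # the same rule a terminal uses for tab stops.
        out_lines.append(line.expandtabs(tabsize))
    return eol.join(out_lines) + eol
```

Note that a tab does not become a fixed count of spaces: it pads to the next multiple of the tab size, so "a\tb" with -t4 yields "a" followed by three spaces, while "ab\tc" yields only two.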
avconvert
avconvert is a tool that converts source media files to different file types for sharing on the web or loading onto devices. The tool will not allow protected content to be converted. Only one video and one audio track are preserved through the conversion, along with metadata tracks. The tool will never resize the video higher than the source dimensions. If the preset internal dimensions are larger than that of the source, the conversion will maintain the source dimensions. The file extension provided for the output movie will determine the output file type. --source | -s file The source media file to be converted. --output | -o file The output movie file to be created. --preset | -p name Use the specified preset for file conversion. All presets encode using AVC (H.264) encoding unless otherwise specified in the preset name. Use --help to get the full list. Preset640x480 A 480p Standard Definition preset with H.264 video and AAC audio. Preset960x540 A 540p preset with H.264 video and AAC audio. Preset1280x720 A 720p High Definition preset with H.264 video and AAC audio. Preset1920x1080 A 1080p High Definition preset with H.264 video and AAC audio. Preset3840x2160 A 2160p Ultra High Definition preset with H.264 video and AAC audio. PresetAppleM4A An audio-only preset with AAC audio. PresetAppleM4V480pSD A legacy 480p Standard Definition preset with H.264 video and AAC audio suitable for playing on Apple devices. PresetAppleM4V720pHD A legacy 720p High Definition preset with H.264 video and AAC audio suitable for playing on Apple devices. PresetAppleM4V1080pHD A legacy 1080p High Definition preset with H.264 video and AAC audio suitable for playing on Apple devices. PresetAppleM4VAppleTV A legacy preset with H.264 video and AAC audio suitable for playing on older AppleTV models. PresetAppleM4VCellular A legacy, smaller than Standard Definition, preset with H.264 video and AAC audio suitable for playing on Apple devices when streamed over a cellular network.
PresetAppleM4ViPod A legacy Standard Definition preset with H.264 video and AAC audio suitable for playing on an iPod. PresetAppleM4VWiFi A legacy, smaller than Standard Definition, preset with H.264 video and AAC audio suitable for playing on Apple devices when streamed over a WiFi network. PresetAppleProRes422LPCM A preset with Apple ProRes 422 video and LPCM audio. PresetAppleProRes4444LPCM A preset with Apple ProRes 4444 video and LPCM audio. PresetHEVC1920x1080 A 1080p High Definition preset with HEVC video and AAC audio. PresetHEVC1920x1080WithAlpha A 1080p High Definition preset with HEVC alpha video and AAC audio. If a non-alpha source is selected, an error will occur. PresetHEVC3840x2160 A 2160p Ultra High Definition preset with HEVC video and AAC audio. PresetHEVC3840x2160WithAlpha A 2160p Ultra High Definition preset with HEVC alpha video and AAC audio. If a non-alpha source is selected, an error will occur. PresetHEVC7680x4320 An 8K preset with HEVC video and AAC audio. PresetHEVCHighestQuality A high quality preset with HEVC video and AAC audio. PresetHEVCHighestQualityWithAlpha A high quality preset with HEVC alpha video and AAC audio. If a non-alpha source is selected, an error will occur. PresetHighestQuality A high quality preset with H.264 video and AAC audio. PresetLowQuality A low quality, smaller than Standard Definition, preset with H.264 video and AAC audio. PresetMediumQuality A medium quality, smaller than Standard Definition, preset with H.264 video and AAC audio. PresetPassthrough A preset that passes through the video and audio tracks, without conversion.
avconvert – movie conversion tool
avconvert [-hv] -s <source_media> -o <output_movie> -p <preset_name>
--disableFastStart Disable fast-start movie creation. Reduces disk accesses if fast-start is not required. --disableMetadataFilter Disable the metadata filter. Use with caution. This will allow privacy sensitive source metadata to be preserved in the output file. This may include information such as the location of the video, time when the video was recorded, video capture device information, etc. If this option is not specified, the aforementioned source metadata is not present in the output file. --duration num Trim the output movie to num seconds (decimal allowed). Default is end of file. --help | -h Print command usage and list available preset names. --multiPass Perform a higher quality multi-pass encode in the conversion. --progress | -prog Display progress during the conversion (default with -v). --replace Overwrite the output file, if it already exists. --start num Skip the first num seconds (decimal allowed) of the source movie. Default is beginning of file. --verbose | -v Print additional information about the conversion.
Convert the source movie from 4k HEVC to 720p AVC using the 1280x720 encoding preset: avconvert --source 4k_hevc_movie.mov --output 720p_avc_movie.mov --preset Preset1280x720 Convert the source movie from 4k AVC to 4K HEVC using the HEVCHighestQuality encoding preset: avconvert -s 4k_avc_movie.mov -o 4k_hevc_movie.mov -p PresetHEVCHighestQuality Skip the first 3.5 seconds of the source movie and only convert the next 30 seconds: avconvert --source source_movie.mov --output trimmed_movie.mov -p PresetMediumQuality --start 3.5 --duration 30 Convert the source movie from a QuickTime movie file to an MPEG-4 file: avconvert -s source_movie.mov -o output_movie.mp4 -p PresetLowQuality HISTORY avconvert command first appeared in Mac OS X 10.7. 64-bit implementation introduced in Mac OS X 10.15. macOS October 8, 2021 macOS
xsubpp5.30
This compiler is typically run by the makefiles created by ExtUtils::MakeMaker or by Module::Build or other Perl module build tools. xsubpp will compile XS code into C code by embedding the constructs necessary to let C functions manipulate Perl values and creates the glue necessary to let Perl access those functions. The compiler uses typemaps to determine how to map C function parameters and variables to Perl values. The compiler will search for typemap files called typemap. It will use the following search path to find default typemaps, with the rightmost typemap taking precedence. ../../../typemap:../../typemap:../typemap:typemap It will also use a default typemap installed as "ExtUtils::typemap".
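The precedence rule described above (each typemap later in the search order overrides earlier ones, and user-supplied -typemap files override the defaults) amounts to an ordered merge. A minimal Python sketch of that rule; the dict representation and sample entries are illustrative assumptions, not xsubpp's real typemap format:

```python
# Typemaps modeled as dicts from C type to XS type keyword; xsubpp's
# real typemap files are richer, but the precedence rule is the same:
# later files in the search order simply overwrite earlier entries.

DEFAULT_SEARCH_PATH = ["../../../typemap", "../../typemap",
                       "../typemap", "typemap"]   # rightmost wins

def merge_typemaps(*maps):
    merged = {}
    for m in maps:        # applied left to right, so entries from the
        merged.update(m)  # last (rightmost) typemap take precedence
    return merged

# Hypothetical contents: two default typemaps plus one -typemap file.
outer_default = {"char *": "T_PV", "int": "T_IV"}
inner_default = {"int": "T_UV"}          # overrides the outer default
user_typemap = {"char *": "T_MYPV"}      # -typemap: overrides defaults

TYPEMAP = merge_typemaps(outer_default, inner_default, user_typemap)
```

Passing -typemap more than once simply appends more maps to the end of this merge, which is why the last one given has the highest precedence.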
xsubpp - compiler to convert Perl XS code into C code
xsubpp [-v] [-except] [-s pattern] [-prototypes] [-noversioncheck] [-nolinenumbers] [-nooptimize] [-typemap typemap] [-output filename]... file.xs
Note that the "XSOPT" MakeMaker option may be used to add these options to any makefiles generated by MakeMaker. -hiertype Retains '::' in type names so that C++ hierarchical types can be mapped. -except Adds exception handling stubs to the C code. -typemap typemap Indicates that a user-supplied typemap should take precedence over the default typemaps. This option may be used multiple times, with the last typemap having the highest precedence. -output filename Specifies the name of the output file to generate. If no file is specified, output will be written to standard output. -v Prints the xsubpp version number to standard output, then exits. -prototypes By default xsubpp will not automatically generate prototype code for all xsubs. This flag will enable prototypes. -noversioncheck Disables the run time test that determines if the object file (derived from the ".xs" file) and the ".pm" files have the same version number. -nolinenumbers Prevents the inclusion of '#line' directives in the output. -nooptimize Disables certain optimizations. The only optimization that is currently affected is the use of targets by the output C code (see perlguts). This may significantly slow down the generated code, but this is the way xsubpp of 5.005 and earlier operated. -noinout Disable recognition of "IN", "OUT_LIST" and "INOUT_LIST" declarations. -noargtypes Disable recognition of ANSI-like descriptions of function signature. -C++ Currently doesn't do anything at all. This flag has been a no-op for many versions of perl, at least as far back as perl5.003_07. It's allowed here for backwards compatibility. -s=... or -strip=... This option is obscure and discouraged. If specified, the given string will be stripped off from the beginning of the C function name in the generated XS functions (if it starts with that prefix). This only applies to XSUBs without "CODE" or "PPCODE" blocks. 
For example, the XS: void foo_bar(int i); when "xsubpp" is invoked with "-s foo_" will install a "foo_bar" function in Perl, but really call bar(i) in C. Most of the time, this is the opposite of what you want and failure modes are somewhat obscure, so please avoid this option where possible. ENVIRONMENT No environment variables are used. AUTHOR Originally by Larry Wall. Turned into the "ExtUtils::ParseXS" module by Ken Williams. MODIFICATION HISTORY See the file Changes. SEE ALSO perl(1), perlxs(1), perlxstut(1), ExtUtils::ParseXS perl v5.30.3 2024-04-13 XSUBPP(1)
jhsdb
You can use the jhsdb tool to attach to a Java process or to launch a postmortem debugger to analyze the content of a core dump from a crashed Java Virtual Machine (JVM). This command is experimental and unsupported. Note: Attaching the jhsdb tool to a live process will cause the process to hang and the process will probably crash when the debugger detaches. The jhsdb tool can be launched in any one of the following modes: jhsdb clhsdb Starts the interactive command-line debugger. jhsdb hsdb Starts the interactive GUI debugger. jhsdb debugd Starts the remote debug server. jhsdb jstack Prints stack and locks information. jhsdb jmap Prints heap information. jhsdb jinfo Prints basic JVM information. jhsdb jsnap Prints performance counter information. jhsdb command --help Displays the options available for the command. OPTIONS FOR THE DEBUGD MODE --serverid server-id An optional unique ID for this debug server. This is required if multiple debug servers are run on the same machine. --rmiport port Sets the port number to which the RMI connector is bound. If not specified, a random available port is used. --registryport port Sets the RMI registry port. This option overrides the system property 'sun.jvm.hotspot.rmi.port'. If not specified, the system property is used. If the system property is not set, the default port 1099 is used. --hostname hostname Sets the hostname to which the RMI connector is bound. The value could be a hostname or an IPv4/IPv6 address. This option overrides the system property 'java.rmi.server.hostname'. If not specified, the system property is used. If the system property is not set, a system hostname is used. OPTIONS FOR THE JINFO MODE --flags Prints the VM flags. --sysprops Prints the Java system properties. no option Prints the VM flags and the Java system properties. OPTIONS FOR THE JMAP MODE no option Prints the same information as Solaris pmap. --heap Prints the java heap summary. --binaryheap Dumps the java heap in hprof binary format.
--dumpfile name The name of the dumpfile. --histo Prints the histogram of java object heap. --clstats Prints the class loader statistics. --finalizerinfo Prints the information on objects awaiting finalization. OPTIONS FOR THE JSTACK MODE --locks Prints the java.util.concurrent locks information. --mixed Attempts to print both java and native frames if the platform allows it. OPTIONS FOR THE JSNAP MODE --all Prints all performance counters. JDK 22 2024 JHSDB(1)
jhsdb - attach to a Java process or launch a postmortem debugger to analyze the content of a core dump from a crashed Java Virtual Machine (JVM)
jhsdb clhsdb [--pid pid | --exe executable --core coredump] jhsdb hsdb [--pid pid | --exe executable --core coredump] jhsdb debugd (--pid pid | --exe executable --core coredump) [options] jhsdb jstack (--pid pid | --exe executable --core coredump | --connect [server-id@]debugd-host) [options] jhsdb jmap (--pid pid | --exe executable --core coredump | --connect [server-id@]debugd-host) [options] jhsdb jinfo (--pid pid | --exe executable --core coredump | --connect [server-id@]debugd-host) [options] jhsdb jsnap (--pid pid | --exe executable --core coredump | --connect [server-id@]debugd-host) [options] pid The process ID to which the jhsdb tool should attach. The process must be a Java process. To get a list of Java processes running on a machine, use the ps command or, if the JVM processes are not running in a separate docker instance, the jps command. executable The Java executable file from which the core dump was produced. coredump The core file to which the jhsdb tool should attach. [server-id@]debugd-host An optional server ID and the address of the remote debug server (debugd).
The command-line options for a jhsdb mode. See Options for the debugd Mode, Options for the jstack Mode, Options for the jmap Mode, Options for the jinfo Mode, and Options for the jsnap Mode. Note: Either the pid, or the pair of executable and core files, or the [server-id@]debugd-host must be provided for the debugd, jstack, jmap, jinfo and jsnap modes.
unzipsfx
unzipsfx is a modified version of unzip(1L) designed to be prepended to existing ZIP archives in order to form self-extracting archives. Instead of taking its first non-flag argument to be the zipfile(s) to be extracted, unzipsfx seeks itself under the name by which it was invoked and tests or extracts the contents of the appended archive. Because the executable stub adds bulk to the archive (the whole purpose of which is to be as small as possible), a number of the less-vital capabilities in regular unzip have been removed. Among these are the usage (or help) screen, the listing and diagnostic functions (-l and -v), the ability to decompress older compression formats (the ``reduce,'' ``shrink'' and ``implode'' methods). The ability to extract to a directory other than the current one can be selected as a compile-time option, which is now enabled by default since UnZipSFX version 5.5. Similarly, decryption is supported as a compile-time option but should be avoided unless the attached archive contains encrypted files. Starting with release 5.5, another compile-time option adds a simple ``run command after extraction'' feature. This feature is currently incompatible with the ``extract to different directory'' feature and remains disabled by default. Note that self-extracting archives made with unzipsfx are no more (or less) portable across different operating systems than is the unzip executable itself. In general a self-extracting archive made on a particular Unix system, for example, will only self-extract under the same flavor of Unix. Regular unzip may still be used to extract the embedded archive as with any normal zipfile, although it will generate a harmless warning about extra bytes at the beginning of the zipfile. Despite this, however, the self-extracting archive is technically not a valid ZIP archive, and PKUNZIP may be unable to test or extract it. 
This limitation is due to the simplistic manner in which the archive is created; the internal directory structure is not updated to reflect the extra bytes prepended to the original zipfile. ARGUMENTS [file(s)] An optional list of archive members to be processed. Regular expressions (wildcards) similar to those in Unix egrep(1) may be used to match multiple members. These wildcards may contain: * matches a sequence of 0 or more characters ? matches exactly 1 character [...] matches any single character found inside the brackets; ranges are specified by a beginning character, a hyphen, and an ending character. If an exclamation point or a caret (`!' or `^') follows the left bracket, then the range of characters within the brackets is complemented (that is, anything except the characters inside the brackets is considered a match). (Be sure to quote any character that might otherwise be interpreted or modified by the operating system, particularly under Unix and VMS.) [-x xfile(s)] An optional list of archive members to be excluded from processing. Since wildcard characters match directory separators (`/'), this option may be used to exclude any files that are in subdirectories. For example, ``foosfx *.[ch] -x */*'' would extract all C source files in the main directory, but none in any subdirectories. Without the -x option, all C source files in all directories within the zipfile would be extracted. If unzipsfx is compiled with SFX_EXDIR defined, the following option is also enabled: [-d exdir] An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory). The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. 
In particular, ``-d ~'' (tilde) is expanded by Unix C shells into the name of the user's home directory, but ``-d~'' is treated as a literal subdirectory ``~'' of the current directory.
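The wildcard rules above map naturally onto regular expressions. The following Python sketch illustrates the semantics described (including the detail that wildcards match `/`); it is an illustration, not the matcher unzipsfx actually uses, and it skips corner cases such as `]` inside a bracket class:

```python
import re

# Rough sketch of unzip-style wildcard matching. Unlike shell
# globbing, these wildcards also match '/' directory separators.
def wild_to_regex(pattern):
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == "*":
            out.append(".*")                     # 0 or more characters
        elif c == "?":
            out.append(".")                      # exactly 1 character
        elif c == "[":
            j = pattern.index("]", i + 1)
            body = pattern[i + 1:j]
            if body and body[0] in "!^":         # complemented class
                body = "^" + body[1:]
            out.append("[" + body + "]")
            i = j
        else:
            out.append(re.escape(c))
        i += 1
    return "".join(out) + r"\Z"

def matches(pattern, name):
    return re.match(wild_to_regex(pattern), name) is not None
```

With this sketch, matches("*.[ch]", "sub/foo.c") is true because * crosses the directory separator, which is exactly why the manual's example adds -x */* to restrict extraction to the top-level C sources.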
unzipsfx - self-extracting stub for prepending to ZIP archives
<name of unzipsfx+archive combo> [-cfptuz[ajnoqsCLV$]] [file(s) ... [-x xfile(s) ...]]
unzipsfx supports the following unzip(1L) options: -c and -p (extract to standard output/screen), -f and -u (freshen and update existing files upon extraction), -t (test archive) and -z (print archive comment). All normal listing options (-l, -v and -Z) have been removed, but the testing option (-t) may be used as a ``poor man's'' listing. Alternatively, those creating self-extracting archives may wish to include a short listing in the zipfile comment. See unzip(1L) for a more complete description of these options. MODIFIERS unzipsfx currently supports all unzip(1L) modifiers: -a (convert text files), -n (never overwrite), -o (overwrite without prompting), -q (operate quietly), -C (match names case-insensitively), -L (convert uppercase-OS names to lowercase), -j (junk paths) and -V (retain version numbers); plus the following operating-system specific options: -X (restore VMS owner/protection info), -s (convert spaces in filenames to underscores [DOS, OS/2, NT]) and -$ (restore volume label [DOS, OS/2, NT, Amiga]). (Support for regular ASCII text-conversion may be removed in future versions, since it is simple enough for the archive's creator to ensure that text files have the appropriate format for the local OS. EBCDIC conversion will of course continue to be supported since the zipfile format implies ASCII storage of text files.) See unzip(1L) for a more complete description of these modifiers. ENVIRONMENT OPTIONS unzipsfx uses the same environment variables as unzip(1L) does, although this is likely to be an issue only for the person creating and testing the self-extracting archive. See unzip(1L) for details. DECRYPTION Decryption is supported exactly as in unzip(1L); that is, interactively with a non-echoing prompt for the password(s). See unzip(1L) for details. Once again, note that if the archive has no encrypted files there is no reason to use a version of unzipsfx with decryption support; that only adds to the size of the archive. 
AUTORUN COMMAND When unzipsfx was compiled with CHEAP_SFX_AUTORUN defined, a simple ``command autorun'' feature is supported. You may enter a command into the Zip archive comment, using the following format: $AUTORUN$>[command line string] When unzipsfx recognizes the ``$AUTORUN$>'' token at the beginning of the Zip archive comment, the remainder of the first line of the comment (until the first newline character) is passed as a shell command to the operating system using the C rtl ``system'' function. Before executing the command, unzipsfx displays the command on the console and prompts the user for confirmation. When the user has switched off prompting by specifying the -q option, autorun commands are never executed. In case the archive comment contains additional lines of text, the remainder of the archive comment following the first line is displayed normally, unless quiet operation was requested by supplying a -q option.
To create a self-extracting archive letters from a regular zipfile letters.zip and change the new archive's permissions to be world-executable under Unix: cat unzipsfx letters.zip > letters chmod 755 letters zip -A letters To create the same archive under MS-DOS, OS/2 or NT (note the use of the /b [binary] option to the copy command): copy /b unzipsfx.exe+letters.zip letters.exe zip -A letters.exe Under VMS: copy unzipsfx.exe,letters.zip letters.exe letters == "$currentdisk:[currentdir]letters.exe" zip -A letters.exe (The VMS append command may also be used. The second command installs the new program as a ``foreign command'' capable of taking arguments. The third line assumes that Zip is already installed as a foreign command.) Under AmigaDOS: MakeSFX letters letters.zip UnZipSFX (MakeSFX is included with the UnZip source distribution and with Amiga binary distributions. ``zip -A'' doesn't work on Amiga self-extracting archives.) To test (or list) the newly created self-extracting archive: letters -t To test letters quietly, printing only a summary message indicating whether the archive is OK or not: letters -tqq To extract the complete contents into the current directory, recreating all files and subdirectories as necessary: letters To extract all *.txt files (in Unix quote the `*'): letters *.txt To extract everything except the *.txt files: letters -x *.txt To extract only the README file to standard output (the screen): letters -c README To print only the zipfile comment: letters -z LIMITATIONS The principal and fundamental limitation of unzipsfx is that it is not portable across architectures or operating systems, and therefore neither are the resulting archives. For some architectures there is limited portability, however (e.g., between some flavors of Intel-based Unix).
Another problem with the current implementation is that any archive with ``junk'' prepended to the beginning technically is no longer a zipfile (unless zip(1) is used to adjust the zipfile offsets appropriately, as noted above). unzip(1) takes note of the prepended bytes and ignores them since some file-transfer protocols, notably MacBinary, are also known to prepend junk. But PKWARE's archiver suite may not be able to deal with the modified archive unless its offsets have been adjusted. unzipsfx has no knowledge of the user's PATH, so in general an archive must either be in the current directory when it is invoked, or else a full or relative path must be given. If a user attempts to extract the archive from a directory in the PATH other than the current one, unzipsfx will print a warning to the effect, ``can't find myself.'' This is always true under Unix and may be true in some cases under MS-DOS, depending on the compiler used (Microsoft C fully qualifies the program name, but other compilers may not). Under OS/2 and NT there are operating-system calls available that provide the full path name, so the archive may be invoked from anywhere in the user's path. The situation is not known for AmigaDOS, Atari TOS, MacOS, etc. As noted above, a number of the normal unzip(1L) functions have been removed in order to make unzipsfx smaller: usage and diagnostic info, listing functions and extraction to other directories. Also, only stored and deflated files are supported. The latter limitation is mainly relevant to those who create SFX archives, however. VMS users must know how to set up self-extracting archives as foreign commands in order to use any of unzipsfx's options. This is not necessary for simple extraction, but the command to do so then becomes, e.g., ``run letters'' (to continue the examples given above).
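The prepended-junk problem described above can be observed with any zip library that, like unzip, compensates for extra leading bytes by measuring how far the end-of-central-directory record has shifted. A small Python demonstration using a placeholder stub (the real unzipsfx stub is a platform-specific executable, not these bytes):

```python
import io
import zipfile

# Build a small archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("README", "hello")

# Prepend stand-in "stub" bytes, as concatenating unzipsfx would.
# Every offset stored inside the archive now points too early.
sfx = b"#!fake-self-extractor\n" + buf.getvalue()

# Tolerant readers (unzip, Python's zipfile) detect the shift of the
# end-of-central-directory record and correct for it; strict readers
# such as PKUNZIP may reject the file instead.
with zipfile.ZipFile(io.BytesIO(sfx)) as zf:
    recovered = zf.read("README")
```

Running zip -A over such a file rewrites the stored offsets, so the result is once again a fully valid archive rather than one that merely tolerant readers can open.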
unzipsfx on the Amiga requires the use of a special program, MakeSFX, in order to create working self-extracting archives; simple concatenation does not work. (For technically oriented users, the attached archive is defined as a ``debug hunk.'') There may be compatibility problems between the ROM levels of older Amigas and newer ones. All current bugs in unzip(1L) exist in unzipsfx as well. DIAGNOSTICS unzipsfx's exit status (error level) is identical to that of unzip(1L); see the corresponding man page. SEE ALSO funzip(1L), unzip(1L), zip(1L), zipcloak(1L), zipgrep(1L), zipinfo(1L), zipnote(1L), zipsplit(1L) URL The Info-ZIP home page is currently at http://www.info-zip.org/pub/infozip/ or ftp://ftp.info-zip.org/pub/infozip/ . AUTHORS Greg Roelofs was responsible for the basic modifications to UnZip necessary to create UnZipSFX. See unzip(1L) for the current list of Zip-Bugs authors, or the file CONTRIBS in the UnZip source distribution for the full list of Info-ZIP contributors. Info-ZIP 20 April 2009 (v6.0) UNZIPSFX(1L)
treereg
"Treereg" translates a tree grammar specification file (default extension ".trg" describing a set of tree patterns and the actions to modify them using tree-terms like: TIMES(NUM, $x) and { $NUM->{VAL} == 0) => { $NUM } which says that wherever an abstract syntax tree representing the product of a numeric expression with value 0 times any other kind of expression, the "TIMES" tree can be substituted by its left child. The compiler produces a Perl module containing the subroutines implementing those sets of pattern-actions. EXAMPLE Consider the following "eyapp" grammar (see the "Parse::Eyapp" documentation to know more about "Parse::Eyapp" grammars): ---------------------------------------------------------- nereida:~/LEyapp/examples> cat Rule6.yp %{ use Data::Dumper; %} %right '=' %left '-' '+' %left '*' '/' %left NEG %tree %% line: exp { $_[1] } ; exp: %name NUM NUM | %name VAR VAR | %name ASSIGN VAR '=' exp | %name PLUS exp '+' exp | %name MINUS exp '-' exp | %name TIMES exp '*' exp | %name DIV exp '/' exp | %name UMINUS '-' exp %prec NEG | '(' exp ')' { $_[2] } /* Let us simplify a bit the tree */ ; %% sub _Error { die "Syntax error.\n"; } sub _Lexer { my($parser)=shift; $parser->YYData->{INPUT} or $parser->YYData->{INPUT} = <STDIN> or return('',undef); $parser->YYData->{INPUT}=~s/^\s+//; for ($parser->YYData->{INPUT}) { s/^([0-9]+(?:\.[0-9]+)?)// and return('NUM',$1); s/^([A-Za-z][A-Za-z0-9_]*)// and return('VAR',$1); s/^(.)//s and return($1,$1); } } sub Run { my($self)=shift; $self->YYParse( yylex => \&_Lexer, yyerror => \&_Error ); } ---------------------------------------------------------- Compile it using "eyapp": ---------------------------------------------------------- nereida:~/LEyapp/examples> eyapp Rule6.yp nereida:~/LEyapp/examples> ls -ltr | tail -1 -rw-rw---- 1 pl users 4976 2006-09-15 19:56 Rule6.pm ---------------------------------------------------------- Now consider this tree grammar: 
---------------------------------------------------------- nereida:~/LEyapp/examples> cat Transform2.trg %{ my %Op = (PLUS=>'+', MINUS => '-', TIMES=>'*', DIV => '/'); %} fold: 'TIMES|PLUS|DIV|MINUS':bin(NUM($n), NUM($m)) => { my $op = $Op{ref($bin)}; $n->{attr} = eval "$n->{attr} $op $m->{attr}"; $_[0] = $NUM[0]; } zero_times_whatever: TIMES(NUM($x), .) and { $x->{attr} == 0 } => { $_[0] = $NUM } whatever_times_zero: TIMES(., NUM($x)) and { $x->{attr} == 0 } => { $_[0] = $NUM } /* rules related with times */ times_zero = zero_times_whatever whatever_times_zero; ---------------------------------------------------------- Compile it with "treereg": ---------------------------------------------------------- nereida:~/LEyapp/examples> treereg Transform2.trg nereida:~/LEyapp/examples> ls -ltr | tail -1 -rw-rw---- 1 pl users 1948 2006-09-15 19:57 Transform2.pm ---------------------------------------------------------- The following program makes use of both modules "Rule6.pm" and "Transform2.pm": ---------------------------------------------------------- nereida:~/LEyapp/examples> cat foldand0rule6_3.pl #!/usr/bin/perl -w use strict; use Rule6; use Parse::Eyapp::YATW; use Data::Dumper; use Transform2; $Data::Dumper::Indent = 1; my $parser = new Rule6(); my $t = $parser->Run; print "\n***** Before ******\n"; print Dumper($t); $t->s(@Transform2::all); print "\n***** After ******\n"; print Dumper($t); ---------------------------------------------------------- When the program runs with input "b*(2-2)" produces the following output: ---------------------------------------------------------- nereida:~/LEyapp/examples> foldand0rule6_3.pl b*(2-2) ***** Before ****** $VAR1 = bless( { 'children' => [ bless( { 'children' => [ bless( { 'children' => [], 'attr' => 'b', 'token' => 'VAR' }, 'TERMINAL' ) ] }, 'VAR' ), bless( { 'children' => [ bless( { 'children' => [ bless( { 'children' => [], 'attr' => '2', 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ), bless( { 'children' => [ bless( 
{ 'children' => [], 'attr' => '2', 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ) ] }, 'MINUS' ) ] }, 'TIMES' ); ***** After ****** $VAR1 = bless( { 'children' => [ bless( { 'children' => [], 'attr' => 0, 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ); ---------------------------------------------------------- See also the section "Compiling: More Options" in Parse::Eyapp for a more contrived example. SEE ALSO • Parse::Eyapp, • eyapptut • The pdf file in <http://nereida.deioc.ull.es/~pl/perlexamples/Eyapp.pdf> • <http://nereida.deioc.ull.es/~pl/perlexamples/section_eyappts.html> (Spanish), • eyapp, • treereg, • Parse::yapp, • yacc(1), • bison(1), • the classic book "Compilers: Principles, Techniques, and Tools" by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman (Addison-Wesley 1986) • Parse::RecDescent. AUTHOR Casiano Rodriguez-Leon LICENSE AND COPYRIGHT Copyright © 2006, 2007, 2008, 2009, 2010, 2011, 2012 Casiano Rodriguez-Leon. Copyright © 2017 William N. Braswell, Jr. All Rights Reserved. Parse::Yapp is Copyright © 1998, 1999, 2000, 2001, Francois Desarmenien. Parse::Yapp is Copyright © 2017 William N. Braswell, Jr. All Rights Reserved. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available. perl v5.34.0 2017-06-14 TREEREG(1)
treereg - Compiler for Tree Regular Expressions
treereg [-m packagename] [[no]syntax] [[no]numbers] [-severity 0|1|2|3] \ [-p treeprefix] [-o outputfile] [-lib /path/to/library/] -i filename[.trg] treereg [-m packagename] [[no]syntax] [[no]numbers] [-severity 0|1|2|3] \ [-p treeprefix] [-lib /path/to/library/] [-o outputfile] filename[.trg] treereg -v treereg -h
Options can be used with either one dash or a double dash. It is not necessary to write the full name of the option. A disambiguation prefix suffices. • "-i[n] filename" Input file. Extension ".trg" is assumed if no extension is provided. • "-o[ut] filename" Output file. By default is the name of the input file (concatenated with .pm) • "-m[od] packagename" Name of the package containing the generated subroutines. By default is the longest prefix of the input file name that conforms to the classic definition of identifier "[a-z_A-Z]\w*". • "-l[ib] /path/to/library/" Specifies that "/path/to/library/" will be included in @INC. Useful when the "syntax" option is on. Can be inserted as many times as necessary. • "-p[refix] treeprefix" Tree nodes automatically generated using "Parse::Eyapp" are objects blessed into the name of the production. To avoid crashes the programmer may prefix the class names with a given prefix when calling the parser; for example: $self->YYParse( yylex => \&_Lexer, yyerror => \&_Error, yyprefix => __PACKAGE__."::") The "-prefix treeprefix" option simplifies the process of writing the tree grammar so that instead of writing with the full names CLASS::TIMES(CLASS::NUM, $x) and { $NUM->{VAL} == 0 } => { $NUM } it can be written: TIMES(NUM, $x) and { $NUM->{VAL} == 0 } => { $NUM } • "-n[umbers]" Produces "#line" directives. • "-non[umbers]" Disables source file line numbering embedded in your parser. • "-sy[ntax]" Checks that Perl code is syntactically correct. • "-nosy[ntax]" Does not check the syntax of Perl code. • "-se[verity] number" - 0 = Don't check arity (default). Matching does not check the arity. The actual node being visited may have more children. - 1 = Check arity. Matching requires the equality of the number of children of the actual node and the pattern. - 2 = Check arity and give a warning - 3 = Check arity, give a warning and exit • "-v[ersion]" Gives the version • "-u[sage]" Prints the usage info • "-h[elp]" Prints this help
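The fold and times_zero transformations from the example above can be imitated on a toy tuple-based AST to show what the generated pattern-action subroutines do; the node encoding below is an invented illustration, unrelated to the classes Parse::Eyapp actually blesses:

```python
# Toy AST: leaves are ('NUM', value) or ('VAR', name); interior nodes
# are (op, left, right). The real trees are blessed Perl objects.

OPS = {"PLUS": lambda a, b: a + b, "MINUS": lambda a, b: a - b,
       "TIMES": lambda a, b: a * b, "DIV": lambda a, b: a / b}

def transform(node):
    if node[0] in ("NUM", "VAR"):          # leaves pass through
        return node
    op, left, right = node[0], transform(node[1]), transform(node[2])
    # fold: OP(NUM($n), NUM($m)) => NUM($n op $m)
    if left[0] == "NUM" and right[0] == "NUM":
        return ("NUM", OPS[op](left[1], right[1]))
    # zero_times_whatever / whatever_times_zero: TIMES with a 0 child
    if op == "TIMES" and ("NUM", 0) in (left, right):
        return ("NUM", 0)
    return (op, left, right)

# b * (2 - 2): MINUS folds to NUM(0), then the times-zero rule
# collapses the whole tree, matching the man page's "After" dump.
tree = ("TIMES", ("VAR", "b"), ("MINUS", ("NUM", 2), ("NUM", 2)))
folded = transform(tree)                   # ('NUM', 0)
```

Applying the transformations bottom-up, as here, is what lets the constant fold of 2 - 2 feed the times-zero rule in a single pass.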
null
parl
This stand-alone command offers roughly the same features as "perl -MPAR", except that it takes the pre-loaded .par files via "-Afoo.par" instead of "-MPAR=foo.par". Additionally, it lets you convert a CPAN distribution to a PAR distribution, as well as manipulate such distributions. For more information about PAR distributions, see PAR::Dist. You can use it to run .par files: # runs script/run.pl in archive, uses its lib/* as libraries % parl myapp.par run.pl # runs run.pl or script/run.pl in myapp.par % parl otherapp.pl # also runs normal perl scripts However, if the .par archive contains either main.pl or script/main.pl, it is used instead: % parl myapp.par run.pl # runs main.pl, with 'run.pl' as @ARGV Finally, the "-O" option makes a stand-alone binary executable from a PAR file: % parl -B -Omyapp myapp.par % ./myapp # run it anywhere without perl binaries With the "--par-options" flag, generated binaries can act as "parl" to pack new binaries: % ./myapp --par-options -Omyap2 myapp.par # identical to ./myapp % ./myapp --par-options -Omyap3 myap3.par # now with different PAR For an explanation of the stand-alone executable format, please see par.pl. SEE ALSO PAR, PAR::Dist, par.pl, pp AUTHORS Audrey Tang <cpan@audreyt.org> You can write to the mailing list at <par@perl.org>, or send an empty mail to <par-subscribe@perl.org> to participate in the discussion. Please submit bug reports to <bug-par-packer@rt.cpan.org>. COPYRIGHT Copyright 2002-2009 by Audrey Tang <cpan@audreyt.org>. Neither this program nor the associated pp program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. 
Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.30.3 2020-03-08 PARL(1)
parl - Binary PAR Loader
(Please see pp for convenient ways to make self-contained executables, scripts or PAR archives from perl programs.) To make a PAR distribution from a CPAN module distribution: % parl -p # make a PAR dist under the current path % parl -p Foo-0.01 # assume unpacked CPAN dist in Foo-0.01/ To manipulate a PAR distribution: % parl -i Foo-0.01-i386-freebsd-5.8.0.par # install % parl -i http://foo.com/Foo-0.01 # auto-appends archname + perlver % parl -i cpan://AUTRIJUS/PAR-0.74 # uses CPAN author directory % parl -u Foo-0.01-i386-freebsd-5.8.0.par # uninstall % parl -s Foo-0.01-i386-freebsd-5.8.0.par # sign % parl -v Foo-0.01-i386-freebsd-5.8.0.par # verify To use Hello.pm from ./foo.par: % parl -A./foo.par -MHello % parl -A./foo -MHello # the .par part is optional Same thing, but search for foo.par in @INC: % parl -Ifoo.par -MHello % parl -Ifoo -MHello # ditto Run test.pl or script/test.pl from foo.par: % parl foo.par test.pl # looks for 'main.pl' by default, # otherwise run 'test.pl' To make a self-contained executable containing a PAR file: % parl -O./foo foo.par % ./foo test.pl # same as above To embed the necessary non-core modules and shared objects for PAR's execution (like "Zlib", "IO", "Cwd", etc), use the -b flag: % parl -b -O./foo foo.par % ./foo test.pl # runs anywhere with core modules installed If you also wish to embed core modules as well, use the -B flag instead: % parl -B -O./foo foo.par % ./foo test.pl # runs anywhere with the perl interpreter This is particularly useful when making stand-alone binary executables; see pp for details.
null
null
spfd
spfd is a simple forking Sender Policy Framework (SPF) query proxy server. spfd receives and answers SPF query requests on a TCP/IP or UNIX domain socket. The --port form listens on a TCP/IP socket on the specified port. The default port is 5970. The --socket form listens on a UNIX domain socket that is created with the specified filename. The socket can be assigned specific user and group ownership with the --socket-user and --socket-group options, and specific filesystem permissions with the --socket-perms option. Generally, spfd can be instructed with the --set-user and --set-group options to drop root privileges and change to another user and group before it starts listening for requests. The --help form prints usage information for spfd. REQUEST A request consists of a series of lines delimited by \x0A (LF) characters (or whatever your system considers a newline). Each line must be of the form key=value, where the following keys are required: ip The sender IP address. sender The envelope sender address (from the SMTP "MAIL FROM" command). helo The envelope sender hostname (from the SMTP "HELO" command). RESPONSE spfd responds to query requests with a similar series of lines of the form key=value. The most important response keys are: result The result of the SPF query: pass The specified IP address is an authorized mailer for the sender domain/address. fail The specified IP address is not an authorized mailer for the sender domain/address. softfail The specified IP address is not an authorized mailer for the sender domain/address, however the domain is still in the process of transitioning to SPF. neutral The sender domain makes no assertion about the status of the IP address. unknown The sender domain has a syntax error in its SPF record. error A temporary DNS error occurred while resolving the sender policy. Try again later. none There is no SPF record for the sender domain. smtp_comment The text that should be included in the receiver's SMTP response. 
header_comment The text that should be included as a comment in the message's "Received-SPF:" header. spf_record The SPF record of the envelope sender domain. For the description of other response keys see Mail::SPF::Query. For more information on SPF see <http://www.openspf.org>. EXAMPLE A running spfd could be tested using the "netcat" utility like this: $ echo -e "ip=11.22.33.44\nsender=user@pobox.com\nhelo=spammer.example.net\n" | nc localhost 5970 result=neutral smtp_comment=Please see http://spf.pobox.com/why.html?sender=user%40pobox.com&ip=11.22.33.44&receiver=localhost header_comment=localhost: 11.22.33.44 is neither permitted nor denied by domain of user@pobox.com guess=neutral smtp_guess= header_guess= guess_tf=neutral smtp_tf= header_tf= spf_record=v=spf1 ?all SEE ALSO Mail::SPF::Query, <http://www.openspf.org> AUTHORS This version of spfd was written by Meng Weng Wong <mengwong+spf@pobox.com>. Improved argument parsing was added by Julian Mehnle <julian@mehnle.net>. This man-page was written by Julian Mehnle <julian@mehnle.net>. perl v5.34.0 2006-02-07 SPFD(1)
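The REQUEST format described above is just key=value lines. A minimal sketch of building such a request from the shell (the IP and addresses here are placeholder examples, and the final nc invocation assumes a local spfd is actually listening):

```shell
# Build the key=value request lines spfd expects (placeholder values).
request=$(printf 'ip=%s\nsender=%s\nhelo=%s' \
    192.0.2.1 user@example.com mail.example.com)
echo "$request"

# To actually query a running spfd, pipe the request plus a terminating
# blank line to its TCP port, e.g. with netcat:
#   printf '%s\n\n' "$request" | nc localhost 5970
```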
spfd - simple forking daemon to provide SPF query services VERSION 2006-02-07
spfd --port port [--set-user uid|username] [--set-group gid|groupname] spfd --socket filename [--socket-user uid|username] [--socket-group gid|groupname] [--socket-perms octal-perms] [--set-user uid|username] [--set-group gid|groupname] spfd --help
null
null
csreq
The csreq command manipulates Code Signing Requirement data. It reads one requirement from a file or command arguments, converts it into internal form, checks it, and then optionally outputs it in a different form. The options are as follows: -b path Requests that the requirement read be written in binary form to the path given. -r requirement-input Specifies the input requirement. See "specifying requirements" below. This is exactly the same format as is accepted by the -r and -R options of the codesign(1) command. -t Requests that the requirement read be written as text to standard output. -v Increases the verbosity of output. Multiple instances of -v produce increasing levels of commentary output. In the first synopsis form, csreq reads a Code Requirement and writes it to standard output as canonical source text. Note that with text input, this actually compiles the requirement into internal form and then converts it back to text, giving you the system's view of the requirement code. In the second synopsis form, csreq reads a Code Requirement and writes its binary representation to a file. This is the same form produced by the SecRequirementCopyData API, and is readily acceptable as input to Code Signing verification APIs. It can also be used as input to subsequent invocations of csreq by passing the filename to the -r option. SPECIFYING REQUIREMENTS The requirement argument (-r) can be given in various forms. A plain text argument is taken to be a path to a file containing the requirement. This program will accept both binary files containing properly compiled requirements code, and source files that are automatically compiled for use. An argument of "-" requests that the requirement(s) are read from standard input. Again, standard input can contain either binary form or text. Finally, an argument that begins with an equal sign "=" is taken as a literal requirements source text, and is compiled accordingly for use.
csreq – Expert tool for manipulating Code Signing Requirement data
csreq [-v] -r requirement-input -t csreq [-v] -r requirement-input -b outputfile
null
To compile an explicit requirement program and write its binary form to file "output": csreq -r='=identifier com.foo.test' -b output.csreq To display the requirement program embedded at offset 1234 of file "foo": tail -c +1234 foo | csreq -r- -t DIAGNOSTICS The csreq program exits 0 on success or 1 on failure. Errors in arguments yield exit code 2. SEE ALSO codesign(1) HISTORY The csreq command first appeared in Mac OS 10.5.0. macOS 14.5 June 1, 2006 macOS 14.5
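The second example pipes bytes from a fixed offset into csreq. Note that tail addresses bytes with -c (and -c +N starts output at byte N, counting from 1), whereas -b counts 512-byte blocks. A portable sketch of that offset extraction using made-up placeholder data instead of a real signed binary:

```shell
# Simulate a file with a 4-byte header followed by an embedded payload.
printf 'HDR1payload' > blob

# Start output at byte 5 (1-based), i.e. skip the 4-byte header.
tail -c +5 blob    # prints: payload
```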
whatis
The man utility finds and displays online manual documentation pages. If mansect is provided, man restricts the search to the specific section of the manual. The sections of the manual are: 1. General Commands Manual 2. System Calls Manual 3. Library Functions Manual 4. Kernel Interfaces Manual 5. File Formats Manual 6. Games Manual 7. Miscellaneous Information Manual 8. System Manager's Manual 9. Kernel Developer's Manual Options that man understands: -M manpath Forces a specific colon separated manual path instead of the default search path. See manpath(1). Overrides the MANPATH environment variable. -P pager Use specified pager. Defaults to “less -sR” if color support is enabled, or “less -s”. Overrides the MANPAGER environment variable, which in turn overrides the PAGER environment variable. -S mansect Restricts manual sections searched to the specified colon delimited list. Defaults to “1:8:2:3:3lua:n:4:5:6:7:9:l”. Overrides the MANSECT environment variable. -a Display all manual pages instead of just the first found for each page argument. -d Print extra debugging information. Repeat for increased verbosity. Does not display the manual page. -f Emulate whatis(1). Note that only a subset of options will have any effect when man is invoked in this mode. See the below description of whatis options for details. -h Display short help message and exit. -k Emulate apropos(1). Note that only a subset of options will have any effect when man is invoked in this mode. See the below description of apropos options for details. -m arch[:machine] Override the default architecture and machine settings allowing lookup of other platform specific manual pages. This option is accepted, but not implemented, on macOS. -o Force use of non-localized manual pages. See IMPLEMENTATION NOTES for how locale specific searches work. Overrides the LC_ALL, LC_CTYPE, and LANG environment variables. -p [eprtv] Use the list of given preprocessors before running nroff(1) or troff(1). 
Valid preprocessor arguments: e eqn(1) p pic(1) r refer(1) t tbl(1) v vgrind(1) Overrides the MANROFFSEQ environment variable. -t Send manual page source through troff(1) allowing transformation of the manual pages to other formats. -w Display the location of the manual page instead of the contents of the manual page. Options that apropos and whatis understand: -d Same as the -d option for man. -s Same as the -S option for man. When man is operated in apropos or whatis emulation mode, only a subset of its options will be honored. Specifically, -d, -M, -P, and -S have equivalent functionality in the apropos and whatis implementation provided. The MANPATH, MANSECT, and MANPAGER environment variables will similarly be honored. IMPLEMENTATION NOTES Locale Specific Searches The man utility supports manual pages in different locales. The search behavior is dictated by the first of three environment variables with a nonempty string: LC_ALL, LC_CTYPE, or LANG. If set, man will search for locale specific manual pages using the following logic: lang_country.charset lang.charset en.charset For example, if LC_ALL is set to “ja_JP.eucJP”, man will search the following paths when considering section 1 manual pages in /usr/share/man: /usr/share/man/ja_JP.eucJP/man1 /usr/share/man/ja.eucJP/man1 /usr/share/man/en.eucJP/man1 /usr/share/man/man1 Displaying Specific Manual Files The man utility also supports displaying a specific manual page if passed a path to the file as long as it contains a ‘/’ character. ENVIRONMENT The following environment variables affect the execution of man: LC_ALL, LC_CTYPE, LANG Used to find locale specific manual pages. Valid values can be found by running the locale(1) command. See IMPLEMENTATION NOTES for details. Influenced by the -o option. MACHINE_ARCH, MACHINE Used to find platform specific manual pages. If unset, the output of “sysctl hw.machine_arch” and “sysctl hw.machine” is used respectively. See IMPLEMENTATION NOTES for details. 
Corresponds to the -m option. MANPATH The standard search path used by man(1) may be changed by specifying a path in the MANPATH environment variable. Invalid paths, or paths without manual databases, are ignored. Overridden by -M. If MANPATH begins with a colon, it is appended to the default list; if it ends with a colon, it is prepended to the default list; or if it contains two adjacent colons, the standard search path is inserted between the colons. If none of these conditions are met, it overrides the standard search path. MANROFFSEQ Used to determine the preprocessors for the manual source before running nroff(1) or troff(1). If unset, defaults to tbl(1). Corresponds to the -p option. MANSECT Restricts manual sections searched to the specified colon delimited list. Corresponds to the -S option. MANWIDTH If set to a numeric value, it is used as the width at which manual pages are displayed. Otherwise, if set to the special value “tty”, and output is to a terminal, the pages may be displayed over the whole width of the screen. MANCOLOR If set, enables color support. MANPAGER Program used to display files. If unset, and color support is enabled, “less -sR” is used. If unset, and color support is disabled, then PAGER is used. If that has no value either, “less -s” is used. FILES /etc/man.conf System configuration file. /usr/local/etc/man.d/*.conf Local configuration files. EXIT STATUS The man utility exits 0 on success, and >0 if an error occurs.
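The colon rules for MANPATH can be sketched as a small shell function; the DEFAULT list below is a stand-in for the real configured default path, not the actual value man uses:

```shell
# Placeholder for the actual configured default manual path.
DEFAULT="/usr/share/man:/usr/local/share/man"

# Resolve MANPATH against the default list per the rules above.
resolve_manpath() {
    mp=$1
    case $mp in
        '')    echo "$DEFAULT" ;;                        # empty/unset: default only
        :*)    echo "$DEFAULT$mp" ;;                     # leading colon: appended to default
        *:)    echo "$mp$DEFAULT" ;;                     # trailing colon: prepended to default
        *::*)  echo "${mp%%::*}:$DEFAULT:${mp#*::}" ;;   # adjacent colons: default inserted
        *)     echo "$mp" ;;                             # otherwise: overrides the default
    esac
}

resolve_manpath ':/opt/man'    # default first, then /opt/man
resolve_manpath '/opt/man:'    # /opt/man first, then default
resolve_manpath '/a::/b'       # /a, then default, then /b
```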
man, apropos, whatis – display online manual documentation pages
man [-adho] [-t | -w] [-M manpath] [-P pager] [-S mansect] [-m arch[:machine]] [-p [eprtv]] [mansect] page ... man -f [-d] [-M manpath] [-P pager] [-S mansect] keyword ... whatis [-d] [-s mansect] keyword ... man -k [-d] [-M manpath] [-P pager] [-S mansect] keyword ... apropos [-d] [-s mansect] keyword ...
null
Show the manual page for stat(2): $ man 2 stat Show all manual pages for ‘stat’. $ man -a stat List manual pages which match the regular expression either in the title or in the body: $ man -k '\<copy\>.*archive' Show the manual page for ls(1) and use cat(1) as pager: $ man -P cat ls Show the location of the ls(1) manual page: $ man -w ls SEE ALSO apropos(1), intro(1), mandoc(1), manpath(1), whatis(1), intro(2), intro(3), intro(3lua), intro(4), intro(5), man.conf(5), intro(6), intro(7), mdoc(7), intro(8), intro(9) macOS 14.5 January 9, 2021 macOS 14.5
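The locale search order described under IMPLEMENTATION NOTES can be sketched in shell; “ja_JP.eucJP” is the man page's own example value, and section 1 under /usr/share/man is assumed as in that example:

```shell
# Derive the candidate locale directories from an LC_ALL-style value.
locale="ja_JP.eucJP"
lang_country=${locale%%.*}    # ja_JP
charset=${locale#*.}          # eucJP
lang=${lang_country%%_*}      # ja

# Most-specific first, then the unlocalized fallback.
for d in "$lang_country.$charset" "$lang.$charset" "en.$charset" ''; do
    echo "/usr/share/man/${d:+$d/}man1"
done
```

This prints exactly the four search paths listed in the IMPLEMENTATION NOTES example.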
gperf
GNU 'gperf' generates perfect hash functions. If a long option shows an argument as mandatory, then it is mandatory for the equivalent short option also. Output file location: --output-file=FILE Write output to specified file. The results are written to standard output if no output file is specified or if it is -. Input file interpretation: -e, --delimiters=DELIMITER-LIST Allow user to provide a string containing delimiters used to separate keywords from their attributes. Default is ",". -t, --struct-type Allows the user to include a structured type declaration for generated code. Any text before %% is considered part of the type declaration. Key words and additional fields may follow this, one group of fields per line. --ignore-case Consider upper and lower case ASCII characters as equivalent. Note that locale dependent case mappings are ignored. Language for the output code: -L, --language=LANGUAGE-NAME Generates code in the specified language. Languages handled are currently C++, ANSI-C, C, and KR-C. The default is C. Details in the output code: -K, --slot-name=NAME Select name of the keyword component in the keyword structure. -F, --initializer-suffix=INITIALIZERS Initializers for additional components in the keyword structure. -H, --hash-function-name=NAME Specify name of generated hash function. Default is 'hash'. -N, --lookup-function-name=NAME Specify name of generated lookup function. Default name is 'in_word_set'. -Z, --class-name=NAME Specify name of generated C++ class. Default name is 'Perfect_Hash'. -7, --seven-bit Assume 7-bit characters. -l, --compare-lengths Compare key lengths before trying a string comparison. This is necessary if the keywords contain NUL bytes. It also helps cut down on the number of string comparisons made during the lookup. -c, --compare-strncmp Generate comparison code using strncmp rather than strcmp. -C, --readonly-tables Make the contents of generated lookup tables constant, i.e., readonly. 
-E, --enum Define constant values using an enum local to the lookup function rather than with defines. -I, --includes Include the necessary system include file <string.h> at the beginning of the code. -G, --global-table Generate the static table of keywords as a static global variable, rather than hiding it inside of the lookup function (which is the default behavior). -P, --pic Optimize the generated table for inclusion in shared libraries. This reduces the startup time of programs using a shared library containing the generated code. -Q, --string-pool-name=NAME Specify name of string pool generated by option --pic. Default name is 'stringpool'. --null-strings Use NULL strings instead of empty strings for empty keyword table entries. -W, --word-array-name=NAME Specify name of word list array. Default name is 'wordlist'. --length-table-name=NAME Specify name of length table array. Default name is 'lengthtable'. -S, --switch=COUNT Causes the generated C code to use a switch statement scheme, rather than an array lookup table. This can lead to a reduction in both time and space requirements for some keyfiles. The COUNT argument determines how many switch statements are generated. A value of 1 generates 1 switch containing all the elements, a value of 2 generates 2 tables with 1/2 the elements in each table, etc. If COUNT is very large, say 1000000, the generated C code does a binary search. -T, --omit-struct-type Prevents the transfer of the type declaration to the output file. Use this option if the type is already defined elsewhere. --size-type=TYPE Specify the type for length parameters. Default type is 'unsigned int'. Algorithm employed by gperf: -k, --key-positions=KEYS Select the key positions used in the hash function. The allowable choices range between 1-255, inclusive. The positions are separated by commas, ranges may be used, and key positions may occur in any order. 
Also, the meta-character '*' causes the generated hash function to consider ALL key positions, and $ indicates the "final character" of a key, e.g., $,1,2,4,6-10. -D, --duplicates Handle keywords that hash to duplicate values. This is useful for certain highly redundant keyword sets. -m, --multiple-iterations=ITERATIONS Perform multiple choices of the -i and -j values, and choose the best results. This increases the running time by a factor of ITERATIONS but does a good job minimizing the generated table size. -i, --initial-asso=N Provide an initial value for the associate values array. Default is 0. Setting this value larger helps inflate the size of the final table. -j, --jump=JUMP-VALUE Affects the "jump value", i.e., how far to advance the associated character value upon collisions. Must be an odd number, default is 5. -n, --no-strlen Do not include the length of the keyword when computing the hash function. -r, --random Utilizes randomness to initialize the associated values table. -s, --size-multiple=N Affects the size of the generated hash table. The numeric argument N indicates "how many times larger or smaller" the associated value range should be, in relationship to the number of keys, e.g. a value of 3 means "allow the maximum associated value to be about 3 times larger than the number of input keys". Conversely, a value of 1/3 means "make the maximum associated value about 3 times smaller than the number of input keys". A larger table should decrease the time required for an unsuccessful search, at the expense of extra table space. Default value is 1. Informative output: -h, --help Print this message. -v, --version Print the gperf version number. -d, --debug Enables the debugging option (produces verbose output to the standard error). AUTHOR Written by Douglas C. Schmidt and Bruno Haible. REPORTING BUGS Report bugs to <bug-gnu-gperf@gnu.org>. COPYRIGHT Copyright © 1989-1998, 2000-2004, 2006-2007 Free Software Foundation, Inc. 
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. SEE ALSO The full documentation for gperf is maintained as a Texinfo manual. If the info and gperf programs are properly installed at your site, the command info gperf should give you access to the complete manual. GNU gperf 3.0.3 October 2011 GPERF(1)
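The input format gperf consumes can be illustrated with a minimal keyword file; the file name and keyword set are made up for this sketch, and the gperf invocation is guarded because gperf may not be installed:

```shell
# A minimal gperf input: an empty declarations section, then '%%',
# then one keyword per line.
cat > months.gperf <<'EOF'
%%
january
february
march
EOF

# Generate a perfect-hash recognizer in ANSI C (only if gperf is present).
if command -v gperf >/dev/null 2>&1; then
    gperf -L ANSI-C -H month_hash -N is_month months.gperf > months.c
fi
```

The -H and -N names are arbitrary illustrations of the --hash-function-name and --lookup-function-name options described above.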
gperf - manual page for gperf 3.0.3
gperf [OPTION]... [INPUT-FILE]
null
null
iconutil
iconutil converts between '.iconset' and '.icns' files, and can extract icons from '.car' asset catalog files. The tool takes a single source '.icns' file, '.iconset', or a '.car' file and icon name. It converts the input to either a '.icns' or '.iconset', depending on the value of the -c flag's argument. It is possible to specify the name of the output file by passing the file path as the argument to the -o flag. If -o is not set, iconutil will write the converted '.icns' file or '.iconset' to the same directory as the source file, using either the name of the icon in the '.car' or the input file name, with the extension derived from the output type.
iconutil – Utility to convert between '.iconset' and '.icns' files.
iconutil -c {icns | iconset} [-o file] file [icon-name]
-c --convert {icns | iconset} Given the argument iconset, iconutil converts the source '.icns' file to an '.iconset'. The '.iconset' is saved in the same directory as the source '.icns'. It is given a file name derived from the input file name or the input icon name with the '.iconset' file extension. If the argument is icns, iconutil converts the source '.iconset' to an '.icns'. The '.icns' is saved in the same directory as the source '.iconset'. It is given a file name derived from the input file name or the input icon name with the '.icns' file extension. -o --output Overrides the default output file name that iconutil uses to save the converted '.iconset' or '.icns' files. FILES /usr/bin/iconutil Darwin 4/10/12 Darwin
null
zipsplit
zipsplit reads a zipfile and splits it into smaller zipfiles.
zipsplit - split a zipfile into smaller zipfiles
zipsplit [-t] [-i] [-p] [-s] [-n size] [-r room] [-b path] [-h] [-v] [-L] zipfile ARGUMENTS zipfile Zipfile to split.
-t Report how many files it will take, but don't make them. -i Make index (zipsplit.idx) and count its size against first zip file. -n size Make zip files no larger than "size" (default = 36000). -r room Leave room for "room" bytes on the first disk (default = 0). -b path Use path for the output zip files. -p Pause between output zip files. -s Do a sequential split even if it takes more zip files. -h Show a short help. -v Show version information. -L Show software license.
To be filled in. BUGS Does not yet support large (> 2 GB) or split archives. SEE ALSO zip(1), unzip(1) AUTHOR Info-ZIP v3.0 of 8 May 2008 zipsplit(1)
lwp-dump
The lwp-dump program will get the resource identified by the URL and then dump the response object to STDOUT. This will display the headers returned and the initial part of the content, escaped so that it's safe to display even binary content. The escape syntax used is the same as for Perl's double quoted strings. If there is no content, the string "(no content)" is shown in its place. The following options are recognized: --agent string Override the user agent string passed to the server. --keep-client-headers LWP internally generates various "Client-*" headers that are stripped by lwp-dump in order to show the headers exactly as the server provided them. This option suppresses that stripping. --max-length n How much of the content to show. The default is 512. Set this to 0 for unlimited. If the content is longer, the string is chopped at the limit and the string "...\n(### more bytes not shown)" is appended. --method string Use the given method for the request instead of the default "GET". --parse-head By default lwp-dump will not try to initialize headers by looking at the head section of HTML documents. This option enables this. This corresponds to "parse_head" in LWP::UserAgent. --request Also dump the request sent. SEE ALSO lwp-request, LWP, "dump" in HTTP::Message perl v5.34.0 2020-04-14 LWP-DUMP(1)
lwp-dump - See what headers and content are returned for a URL
lwp-dump [ options ] URL
null
null
pathopens.d
This program prints a count of the number of times files have been successfully opened. This is somewhat special in that the full pathname is calculated, even if the file open referred to a relative pathname. Since this uses DTrace, only users with root privileges can run this command.
pathopens.d - full pathnames opened ok count. Uses DTrace.
pathopens.d
null
This samples until Ctrl-C is hit. # pathopens.d FIELDS PATHNAME full pathname COUNT number of successful opens DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT pathopens.d will sample until Ctrl-C is hit. SEE ALSO opensnoop(1M), dtrace(1M) version 0.80 June 28, 2005 pathopens.d(1m)
tail
The tail utility displays the contents of file or, by default, its standard input, to the standard output. The display begins at a byte, line or 512-byte block location in the input. Numbers having a leading plus (‘+’) sign are relative to the beginning of the input, for example, “-c +2” starts the display at the second byte of the input. Numbers having a leading minus (‘-’) sign or no explicit sign are relative to the end of the input, for example, “-n 2” displays the last two lines of the input. The default starting location is “-n 10”, or the last 10 lines of the input. The options are as follows: -b number, --blocks=number The location is number 512-byte blocks. -c number, --bytes=number The location is number bytes. -f The -f option causes tail to not stop when end of file is reached, but rather to wait for additional data to be appended to the input. The -f option is ignored if the standard input is a pipe, but not if it is a FIFO. -F The -F option implies the -f option, but tail will also check to see if the file being followed has been renamed or rotated. The file is closed and reopened when tail detects that the filename being read from has a new inode number. If the file being followed does not (yet) exist or if it is removed, tail will keep looking and will display the file from the beginning if and when it is created. The -F option is the same as the -f option if reading from standard input rather than a file. -n number, --lines=number The location is number lines. -q, --quiet, --silent Suppresses printing of headers when multiple files are being examined. -r The -r option causes the input to be displayed in reverse order, by line. Additionally, this option changes the meaning of the -b, -c and -n options. When the -r option is specified, these options specify the number of bytes, lines or 512-byte blocks to display, instead of the bytes, lines or blocks from the beginning or end of the input from which to begin the display. 
The default for the -r option is to display all of the input. -v, --verbose Prepend each file with a header. If more than a single file is specified, or if the -v option is used, each file is preceded by a header consisting of the string “==> XXX <==” where XXX is the name of the file. The -q flag disables the printing of the header in all cases. All number arguments may also be specified with size suffixes supported by expand_number(3). EXIT STATUS The tail utility exits 0 on success, and >0 if an error occurs.
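The +/- addressing described above can be seen with a small throwaway file; a sketch:

```shell
# Three-line sample input.
printf 'one\ntwo\nthree\n' > sample.txt

tail -n 2 sample.txt     # last two lines: two, three
tail -n +2 sample.txt    # from the second line on: two, three
tail -c +3 sample.txt    # from the third byte on: e, two, three
```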
tail – display the last part of a file
tail [-F | -f | -r] [-qv] [-b number | -c number | -n number] [file ...]
null
To display the last 500 lines of the file foo: $ tail -n 500 foo Keep /var/log/messages open, displaying to the standard output anything appended to the file: $ tail -F /var/log/messages SEE ALSO cat(1), head(1), sed(1), expand_number(3) STANDARDS The tail utility is expected to be a superset of the IEEE Std 1003.2-1992 (“POSIX.2”) specification. In particular, the -F, -b and -r options are extensions to that standard. The historic command line syntax of tail is supported by this implementation. The only difference between this implementation and historic versions of tail, once the command line syntax translation has been done, is that the -b, -c and -n options modify the -r option, i.e., “-r -c 4” displays the last 4 characters of the last line of the input, while the historic tail (using the historic syntax “-4cr”) would ignore the -c option and display the last 4 lines of the input. HISTORY A tail command appeared in PWB UNIX. macOS 14.5 July 12, 2022 macOS 14.5
whois
The whois utility looks up records in the databases maintained by several Network Information Centers (NICs). By default whois starts by querying the Internet Assigned Numbers Authority (IANA) whois server, and follows referrals to whois servers that have more specific details about the query name. The IANA whois server knows about IP address and AS numbers as well as domain names. There are a few special cases where referrals do not work, so whois goes directly to the appropriate server. These include point-of-contact handles for ARIN, nic.at, NORID, and RIPE, and domain names under ac.uk. The options are as follows: -a Use the American Registry for Internet Numbers (ARIN) database. It contains network numbers used in those parts of the world covered neither by APNIC, AfriNIC, LACNIC, nor by RIPE. The query syntax is documented at https://www.arin.net/resources/whoisrws/whois_api.html#nicname -A Use the Asia/Pacific Network Information Center (APNIC) database. It contains network numbers used in East Asia, Australia, New Zealand, and the Pacific islands. Get query syntax documentation using whois -A help -b Use the Network Abuse Clearinghouse database. It contains addresses to which network abuse should be reported, indexed by domain name. -c TLD This is the equivalent of using the -h option with an argument of "TLD.whois-servers.net". This can be helpful for locating country-class TLD whois servers. -f Use the African Network Information Centre (AfriNIC) database. It contains network numbers used in Africa and the islands of the western Indian Ocean. Get query syntax documentation using whois -f help -g Use the US non-military federal government database, which contains points of contact for subdomains of .GOV. -h host Use the specified host instead of the default. Either a host name or an IP address may be specified. -i Use the traditional Network Information Center (InterNIC) (whois.internic.net) database. 
This now contains only registrations for domain names under .COM, .NET, .EDU. You can specify the type of object to search for like whois -i 'type name' where type can be domain, nameserver, registrar. The name can contain * wildcards. -I Use the Internet Assigned Numbers Authority (IANA) database. -k Use the National Internet Development Agency of Korea's (KRNIC) database. It contains network numbers and domain contact information for Korea. -l Use the Latin American and Caribbean IP address Regional Registry (LACNIC) database. It contains network numbers used in much of Latin America and the Caribbean. -m Use the Route Arbiter Database (RADB) database. It contains route policy specifications for a large number of operators' networks. -p port Connect to the whois server on port. If this option is not specified, whois defaults to port 43. -P Use the PeeringDB database of AS numbers. It contains details about presence at internet peering points for many network operators. -Q Do a quick lookup; whois will not attempt to follow referrals to other whois servers. This is the default if a server is explicitly specified using one of the other options or in an environment variable. See also the -R option. -r Use the Réseaux IP Européens (RIPE) database. It contains network numbers and domain contact information for Europe. Get query syntax documentation using whois -r help -R Do a recursive lookup; whois will attempt to follow referrals to other whois servers. This is the default if no server is explicitly specified. See also the -Q option. -S By default whois adjusts simple queries (without spaces) to produce more useful output from certain whois servers, and it suppresses some uninformative output. With the -S option, whois sends the query and prints the output verbatim. The operands specified to whois are treated independently and may be used as queries on different whois servers. ENVIRONMENT WHOIS_SERVER The primary default whois server. 
If this is unset, whois uses the RA_SERVER environment variable. RA_SERVER The secondary default whois server. If this is unset, whois will use whois.iana.org. EXIT STATUS The whois utility exits 0 on success, and >0 if an error occurs.
whois – Internet domain name and network number directory service
whois [-aAbfgiIklmPQrRS] [-c TLD | -h host] [-p port] [--] name ...
To obtain contact information about an administrator located in the Russian TLD domain "RU", use the -c option as shown in the following example, where CONTACT-ID is substituted with the actual contact identifier. whois -c RU CONTACT-ID (Note: This example is specific to the TLD "RU", but other TLDs can be queried by using a similar syntax.) The following example demonstrates how to query a whois server using a non-standard port, where “query-data” is the query to be sent to “whois.example.com” on port “rwhois” (written numerically as 4321). whois -h whois.example.com -p rwhois query-data Some whois servers support complex queries with dash-letter options. You can use the -- option to separate whois command options from whois server query options. A query containing spaces must be quoted as one argument to the whois command. The following example asks the RIPE whois server to return a brief description of its “domain” object type: whois -r -- '-t domain' STANDARDS K. Harrenstien, M. Stahl, and E. Feinler, NICNAME/WHOIS, RFC 954, October 1985. L. Daigle, WHOIS Protocol Specification, RFC 3912, September 2004. HISTORY The whois command appeared in 4.3BSD. macOS 14.5 August 1, 2019 macOS 14.5
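The wire protocol behind all of the above (RFC 954, updated by RFC 3912) is simple enough to sketch directly: open a TCP connection (port 43 by default), send the query text followed by CRLF, and read until the server closes the connection. The following Python sketch is only an illustration of that protocol, not part of the whois utility; it is exercised against a local stub server (whose reply format is invented for the example) so that it does not depend on network access.

```python
import socket
import threading

def whois_query(name, server, port=43, timeout=10):
    """RFC 3912 query: send the query plus CRLF, then read the
    response until the server closes the connection."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(name.encode("utf-8") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Local stub standing in for a real NIC server, so the sketch runs offline.
def _stub_server(listener):
    conn, _ = listener.accept()
    with conn:
        query = conn.recv(1024).rstrip(b"\r\n")
        conn.sendall(b"domain: " + query + b"\nstatus: active\n")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
stub_port = listener.getsockname()[1]
threading.Thread(target=_stub_server, args=(listener,), daemon=True).start()

reply = whois_query("example.com", "127.0.0.1", port=stub_port)
print(reply)
```

Options such as -Q and -R only change how the client reacts to referral hints found in such replies; the per-query exchange is always this single write followed by a read to EOF.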
nsupdate
nsupdate is used to submit Dynamic DNS Update requests as defined in RFC 2136 to a name server. This allows resource records to be added or removed from a zone without manually editing the zone file. A single update request can contain requests to add or remove more than one resource record. Zones that are under dynamic control via nsupdate or a DHCP server should not be edited by hand. Manual edits could conflict with dynamic updates and cause data to be lost. The resource records that are dynamically added or removed with nsupdate have to be in the same zone. Requests are sent to the zone's master server. This is identified by the MNAME field of the zone's SOA record. Transaction signatures can be used to authenticate the Dynamic DNS updates. These use the TSIG resource record type described in RFC 2845 or the SIG(0) record described in RFC 2535 and RFC 2931 or GSS-TSIG as described in RFC 3645. TSIG relies on a shared secret that should only be known to nsupdate and the name server. For instance, suitable key and server statements would be added to /etc/named.conf so that the name server can associate the appropriate secret key and algorithm with the IP address of the client application that will be using TSIG authentication. You can use ddns-confgen to generate suitable configuration fragments. nsupdate uses the -y or -k options to provide the TSIG shared secret. These options are mutually exclusive. SIG(0) uses public key cryptography. To use a SIG(0) key, the public key must be stored in a KEY record in a zone served by the name server. GSS-TSIG uses Kerberos credentials. Standard GSS-TSIG mode is switched on with the -g flag. A non-standards-compliant variant of GSS-TSIG used by Windows 2000 can be switched on with the -o flag.
nsupdate - Dynamic DNS update utility
nsupdate [-d] [-D] [-L level] [[-g] | [-o] | [-l] | [-y [hmac:]keyname:secret] | [-k keyfile]] [-t timeout] [-u udptimeout] [-r udpretries] [-R randomdev] [-v] [-T] [-P] [-V] [filename]
-d Debug mode. This provides tracing information about the update requests that are made and the replies received from the name server. -D Extra debug mode. -k keyfile The file containing the TSIG authentication key. Keyfiles may be in two formats: a single file containing a named.conf-format key statement, which may be generated automatically by ddns-confgen, or a pair of files whose names are of the format K{name}.+157.+{random}.key and K{name}.+157.+{random}.private, which can be generated by dnssec-keygen. The -k may also be used to specify a SIG(0) key used to authenticate Dynamic DNS update requests. In this case, the key specified is not an HMAC-MD5 key. -l Local-host only mode. This sets the server address to localhost (disabling the server so that the server address cannot be overridden). Connections to the local server will use a TSIG key found in /var/run/named/session.key, which is automatically generated by named if any local master zone has set update-policy to local. The location of this key file can be overridden with the -k option. -L level Set the logging debug level. If zero, logging is disabled. -p port Set the port to use for connections to a name server. The default is 53. -P Print the list of private BIND-specific resource record types whose format is understood by nsupdate. See also the -T option. -r udpretries The number of UDP retries. The default is 3. If zero, only one update request will be made. -R randomdev Where to obtain randomness. If the operating system does not provide a /dev/random or equivalent device, the default source of randomness is keyboard input. randomdev specifies the name of a character device or file containing random data to be used instead of the default. The special value keyboard indicates that keyboard input should be used. This option may be specified multiple times. -t timeout The maximum time an update request can take before it is aborted. The default is 300 seconds. Zero can be used to disable the timeout. 
-T Print the list of IANA standard resource record types whose format is understood by nsupdate. nsupdate will exit after the lists are printed. The -T option can be combined with the -P option. Other types can be entered using "TYPEXXXXX" where "XXXXX" is the decimal value of the type with no leading zeros. The rdata, if present, will be parsed using the UNKNOWN rdata format, (<backslash> <hash> <space> <length> <space> <hexstring>). -u udptimeout The UDP retry interval. The default is 3 seconds. If zero, the interval will be computed from the timeout interval and number of UDP retries. -v Use TCP even for small update requests. By default, nsupdate uses UDP to send update requests to the name server unless they are too large to fit in a UDP request in which case TCP will be used. TCP may be preferable when a batch of update requests is made. -V Print the version number and exit. -y [hmac:]keyname:secret Literal TSIG authentication key. keyname is the name of the key, and secret is the base64 encoded shared secret. hmac is the name of the key algorithm; valid choices are hmac-md5, hmac-sha1, hmac-sha224, hmac-sha256, hmac-sha384, or hmac-sha512. If hmac is not specified, the default is hmac-md5 or if MD5 was disabled hmac-sha256. NOTE: Use of the -y option is discouraged because the shared secret is supplied as a command line argument in clear text. This may be visible in the output from ps(1) or in a history file maintained by the user's shell. INPUT FORMAT nsupdate reads input from filename or standard input. Each command is supplied on exactly one line of input. Some commands are for administrative purposes. The others are either update instructions or prerequisite checks on the contents of the zone. These checks set conditions that some name or set of resource records (RRset) either exists or is absent from the zone. These conditions must be met if the entire update request is to succeed. 
Updates will be rejected if the tests for the prerequisite conditions fail. Every update request consists of zero or more prerequisites and zero or more updates. This allows a suitably authenticated update request to proceed if some specified resource records are present or missing from the zone. A blank input line (or the send command) causes the accumulated commands to be sent as one Dynamic DNS update request to the name server. The command formats and their meaning are as follows: server {servername} [port] Sends all dynamic update requests to the name server servername. When no server statement is provided, nsupdate will send updates to the master server of the correct zone. The MNAME field of that zone's SOA record will identify the master server for that zone. port is the port number on servername where the dynamic update requests get sent. If no port number is specified, the default DNS port number of 53 is used. local {address} [port] Sends all dynamic update requests using the local address. When no local statement is provided, nsupdate will send updates using an address and port chosen by the system. port can additionally be used to make requests come from a specific port. If no port number is specified, the system will assign one. zone {zonename} Specifies that all updates are to be made to the zone zonename. If no zone statement is provided, nsupdate will attempt to determine the correct zone to update based on the rest of the input. class {classname} Specify the default class. If no class is specified, the default class is IN. ttl {seconds} Specify the default time to live for records to be added. The value none will clear the default ttl. key [hmac:] {keyname} {secret} Specifies that all updates are to be TSIG-signed using the keyname/secret pair. If hmac is specified, then it sets the signing algorithm in use; the default is hmac-md5 or, if MD5 was disabled, hmac-sha256. The key command overrides any key specified on the command line via -y or -k. 
gsstsig Use GSS-TSIG to sign the updates. This is equivalent to specifying -g on the command line. oldgsstsig Use the Windows 2000 version of GSS-TSIG to sign the updates. This is equivalent to specifying -o on the command line. realm {[realm_name]} When using GSS-TSIG, use realm_name rather than the default realm in krb5.conf. If no realm is specified, the saved realm is cleared. [prereq] nxdomain {domain-name} Requires that no resource record of any type exists with name domain-name. [prereq] yxdomain {domain-name} Requires that domain-name exists (has at least one resource record, of any type). [prereq] nxrrset {domain-name} [class] {type} Requires that no resource record exists of the specified type, class and domain-name. If class is omitted, IN (internet) is assumed. [prereq] yxrrset {domain-name} [class] {type} Requires that a resource record of the specified type, class and domain-name exists. If class is omitted, IN (internet) is assumed. [prereq] yxrrset {domain-name} [class] {type} {data...} The data from each set of prerequisites of this form sharing a common type, class, and domain-name are combined to form a set of RRs. This set of RRs must exactly match the set of RRs existing in the zone at the given type, class, and domain-name. The data are written in the standard text representation of the resource record's RDATA. [update] del[ete] {domain-name} [ttl] [class] [type [data...]] Deletes any resource records named domain-name. If type and data are provided, only matching resource records will be removed. The internet class is assumed if class is not supplied. The ttl is ignored, and is only allowed for compatibility. [update] add {domain-name} {ttl} [class] {type} {data...} Adds a new resource record with the specified ttl, class and data. show Displays the current message, containing all of the prerequisites and updates specified since the last send. send Sends the current message. This is equivalent to entering a blank line. 
answer Displays the answer. debug Turn on debugging. version Print version number. help Print a list of commands. Lines beginning with a semicolon are comments and are ignored.
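Because each request is just lines of text terminated by send (or a blank line), update scripts are easy to generate from a program and feed to nsupdate on standard input. A small hedged sketch in Python; the build_update helper and its argument names are invented for illustration and are not part of BIND:

```python
def build_update(zone, prereqs=(), updates=()):
    """Compose an nsupdate input script: a zone statement, optional
    prerequisite lines, update lines, and a final 'send' so that the
    accumulated commands go out as one Dynamic DNS update request."""
    lines = ["zone %s" % zone]
    lines += ["prereq %s" % p for p in prereqs]
    lines += ["update %s" % u for u in updates]
    lines.append("send")
    return "\n".join(lines) + "\n"

script = build_update(
    "example.com",
    prereqs=["nxdomain nickname.example.com"],
    updates=["add nickname.example.com 86400 CNAME somehost.example.com"],
)
print(script)
```

The resulting text can then be piped into the tool, e.g. with subprocess.run(["nsupdate"], input=script.encode()) or a shell pipeline.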
The examples below show how nsupdate could be used to insert and delete resource records from the example.com zone. Notice that the input in each example contains a trailing blank line so that a group of commands are sent as one dynamic update request to the master name server for example.com. # nsupdate > update delete oldhost.example.com A > update add newhost.example.com 86400 A 172.16.1.1 > send Any A records for oldhost.example.com are deleted. And an A record for newhost.example.com with IP address 172.16.1.1 is added. The newly-added record has a 1 day TTL (86400 seconds). # nsupdate > prereq nxdomain nickname.example.com > update add nickname.example.com 86400 CNAME somehost.example.com > send The prerequisite condition gets the name server to check that there are no resource records of any type for nickname.example.com. If there are, the update request fails. If this name does not exist, a CNAME for it is added. This ensures that when the CNAME is added, it cannot conflict with the long-standing rule in RFC 1034 that a name must not exist as any other record type if it exists as a CNAME. (The rule has been updated for DNSSEC in RFC 2535 to allow CNAMEs to have RRSIG, DNSKEY and NSEC records.) FILES /etc/resolv.conf used to identify default name server /var/run/named/session.key sets the default TSIG key for use in local-only mode K{name}.+157.+{random}.key base-64 encoding of HMAC-MD5 key created by dnssec-keygen(8). K{name}.+157.+{random}.private base-64 encoding of HMAC-MD5 key created by dnssec-keygen(8). SEE ALSO RFC 2136, RFC 3007, RFC 2104, RFC 2845, RFC 1034, RFC 2535, RFC 2931, named(8), ddns-confgen(8), dnssec-keygen(8). BUGS The TSIG key is redundantly stored in two separate files. This is a consequence of nsupdate using the DST library for its cryptographic operations, and may change in future releases. AUTHOR Internet Systems Consortium, Inc. COPYRIGHT Copyright © 2004-2012, 2014-2016 Internet Systems Consortium, Inc. 
("ISC") Copyright © 2000-2003 Internet Software Consortium. ISC 2014-04-18 NSUPDATE(1)
crc32
This package provides a Tcl implementation of the CRC-32 algorithm based upon information provided at http://www.naaccr.org/standard/crc32/document.html If either the critcl package or the Trf package is available then a compiled version may be used internally to accelerate the checksum calculation. COMMANDS ::crc::crc32 ?-format format? ?-seed value? [ -channel chan | -filename file | message ] The command takes either string data or a channel or file name and returns a checksum value calculated using the CRC-32 algorithm. The result is formatted using the format(n) specifier provided. The default is to return the value as an unsigned integer (format %u).
crc32 - Perform a 32bit Cyclic Redundancy Check
package require Tcl 8.2 package require crc32 ?1.3? ::crc::crc32 ?-format format? ?-seed value? [ -channel chan | -filename file | message ] ::crc::Crc32Init ?seed? ::crc::Crc32Update token data ::crc::Crc32Final token ______________________________________________________________________________
-channel name Return a checksum for the data read from a channel. The command will read data from the channel until the eof is true. If you need to be able to process events during this calculation, see the PROGRAMMING INTERFACE section. -filename name This is a convenience option that opens the specified file, sets the encoding to binary and then acts as if the -channel option had been used. The file is closed on completion. -format string Return the checksum using an alternative format template. -seed value Select an alternative seed value for the CRC calculation. The default is 0xffffffff. This can be useful for calculating the CRC for data structures without first converting the whole structure into a string. The CRC of the previous member can be used as the seed for calculating the CRC of the next member. Note that the crc32 algorithm includes a final XOR step. If incremental processing is desired then this must be undone before using the output of the algorithm as the seed for further processing. A simpler alternative is to use the PROGRAMMING INTERFACE which is intended for this mode of operation. PROGRAMMING INTERFACE The CRC-32 package implements the checksum using a context variable to which additional data can be added at any time. This is especially useful in an event based environment such as a Tk application or a web server package. Data to be checksummed may be handled incrementally during a fileevent handler in discrete chunks. This can improve the interactive nature of a GUI application and can help to avoid excessive memory consumption. ::crc::Crc32Init ?seed? Begins a new CRC32 context. Returns a token ID that must be used for the remaining functions. An optional seed may be specified if required. ::crc::Crc32Update token data Add data to the checksum identified by token. Calling Crc32Update $token "abcd" is equivalent to calling Crc32Update $token "ab" followed by Crc32Update $token "cd". See EXAMPLES. 
::crc::Crc32Final token Returns the checksum value and releases any resources held by this token. Once this command completes the token will be invalid. The result is a 32 bit integer value.
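The checksum computed here is the standard CRC-32 (the same polynomial and parameters used by zlib and PKZIP), so results can be cross-checked from other languages. A short Python sketch using the standard library's zlib.crc32 as a reference; note that zlib keeps the final-XOR bookkeeping internal, so incremental chaining works directly, without the manual un-XOR step described under -seed above:

```python
import zlib

# One-shot checksum; matches ::crc::crc32 "Hello, World!".
one_shot = zlib.crc32(b"Hello, World!") & 0xFFFFFFFF
print(one_shot)          # 3964322768

# Incremental use, the analogue of Crc32Init / Crc32Update / Crc32Final:
# pass the running value back in as the second argument.
crc = 0
for chunk in (b"Hello, ", b"World!"):
    crc = zlib.crc32(chunk, crc)
print(crc & 0xFFFFFFFF)  # 3964322768
```

The `& 0xFFFFFFFF` mask keeps the result an unsigned 32-bit value, matching the package's default %u formatting.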
% crc::crc32 "Hello, World!" 3964322768 % crc::crc32 -format 0x%X "Hello, World!" 0xEC4AC3D0 % crc::crc32 -file crc32.tcl 483919716 % set tok [crc::Crc32Init] % crc::Crc32Update $tok "Hello, " % crc::Crc32Update $tok "World!" % crc::Crc32Final $tok 3964322768 AUTHORS Pat Thoyts BUGS, IDEAS, FEEDBACK This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category crc of the Tcllib SF Trackers [http://sourceforge.net/tracker/?group_id=12883]. Please also report any ideas for enhancements you may have for either package and/or documentation. SEE ALSO cksum(n), crc16(n), sum(n) KEYWORDS checksum, cksum, crc, crc32, cyclic redundancy check, data integrity, security CATEGORY Hashes, checksums, and encryption COPYRIGHT Copyright (c) 2002, Pat Thoyts crc 1.3 crc32(n)
ldapdelete
ldapdelete is a shell-accessible interface to the ldap_delete_ext(3) library call. ldapdelete opens a connection to an LDAP server, binds, and deletes one or more entries. If one or more DN arguments are provided, entries with those Distinguished Names are deleted. Each DN should be provided using the LDAPv3 string representation as defined in RFC 4514. If no DN arguments are provided, a list of DNs is read from standard input (or from file if the -f flag is used).
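RFC 4514 requires certain characters inside attribute values to be backslash-escaped before the DN can be written as a string, whether on the command line or in a -f input file. The helper below is a simplified illustration written for this page (it is not part of OpenLDAP and omits rarer cases such as hex escapes), covering the common rules: the specials , + " \ < > ;, a leading # or space, and a trailing space.

```python
_SPECIAL = set(',+"\\<>;')

def escape_rdn_value(value):
    """Escape one attribute value for use in an RFC 4514 DN string."""
    out = []
    last = len(value) - 1
    for i, ch in enumerate(value):
        if ch in _SPECIAL:
            out.append("\\" + ch)           # always-escaped specials
        elif ch == "#" and i == 0:
            out.append("\\#")               # leading hash
        elif ch == " " and i in (0, last):
            out.append("\\ ")               # leading/trailing space
        else:
            out.append(ch)
    return "".join(out)

# Build a DN suitable for ldapdelete or a -f input file:
cn = escape_rdn_value("Smith, John")
dn = "cn=%s,dc=example,dc=com" % cn
print(dn)   # cn=Smith\, John,dc=example,dc=com
```

Without the escape, the comma in the common name would be parsed as an RDN separator and the delete would target the wrong (nonexistent) entry.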
ldapdelete - LDAP delete entry tool
ldapdelete [-n] [-v] [-c] [-M[M]] [-d debuglevel] [-f file] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-P {2|3}] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-p ldapport] [-O security-properties] [-U authcid] [-R realm] [-r] [-x] [-I] [-Q] [-X authzid] [-Y mech] [-z sizelimit] [-Z[Z]] [DN [...]]
-n Show what would be done, but don't actually delete entries. Useful for debugging in conjunction with -v. -v Use verbose mode, with many diagnostics written to standard output. -c Continuous operation mode. Errors are reported, but ldapdelete will continue with deletions. The default is to exit after reporting an error. -M[M] Enable manage DSA IT control. -MM makes control critical. -d debuglevel Set the LDAP debugging level to debuglevel. ldapdelete must be compiled with LDAP_DEBUG defined for this option to have any effect. -f file Read a series of DNs from file, one per line, performing an LDAP delete for each. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password for simple authentication. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -H ldapuri Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected. -h ldaphost Specify an alternate host on which the ldap server is running. Deprecated in favor of -H. -p ldapport Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H. -P {2|3} Specify the LDAP protocol version to use. -e [!]ext[=extparam] -E [!]ext[=extparam] Specify general extensions with -e and search extensions with -E. '!' indicates criticality. 
General extensions: [!]assert=<filter> (an RFC 4515 Filter) [!]authzid=<authzid> ("dn:<dn>" or "u:<user>") [!]manageDSAit [!]noop ppolicy [!]postread[=<attrs>] (a comma-separated attribute list) [!]preread[=<attrs>] (a comma-separated attribute list) abandon, cancel (SIGINT sends abandon/cancel; not really controls) Search extensions: [!]domainScope (domain scope) [!]mv=<filter> (matched values filter) [!]pr=<size>[/prompt|noprompt] (paged results/prompt) [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...] (server side sorting) [!]subentries[=true|false] (subentries) [!]sync=ro[/<cookie>] (LDAP Sync refreshOnly) rp[/<cookie>][/<slimit>] (LDAP Sync refreshAndPersist) -r Do a recursive delete. If the DN specified isn't a leaf, its children, and all their children, are deleted down the tree. No verification is done, so if you add this switch, ldapdelete will happily delete large portions of your tree. Use with care. -z sizelimit Use sizelimit when searching for children DNs to delete, to circumvent any server-side size limit. Only useful in conjunction with -r. -O security-properties Specify SASL security properties. -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -U authcid Specify the authentication ID for SASL bind. The form of the identity depends on the actual SASL mechanism used. -R realm Specify the realm of the authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful. 
EXAMPLE The following command: ldapdelete "cn=Delete Me,dc=example,dc=com" will attempt to delete the entry named "cn=Delete Me,dc=example,dc=com". Of course it would probably be necessary to supply authentication credentials. DIAGNOSTICS Exit status is 0 if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error. SEE ALSO ldap.conf(5), ldapadd(1), ldapmodify(1), ldapmodrdn(1), ldapsearch(1), ldap(3), ldap_delete_ext(3) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPDELETE(1)
grep
The grep utility searches any given input files, selecting lines that match one or more patterns. By default, a pattern matches an input line if the regular expression (RE) in the pattern matches the input line without its trailing newline. An empty expression matches every line. Each input line that matches at least one of the patterns is written to the standard output. grep is used for simple patterns and basic regular expressions (BREs); egrep can handle extended regular expressions (EREs). See re_format(7) for more information on regular expressions. fgrep is quicker than both grep and egrep, but can only handle fixed patterns (i.e., it does not interpret regular expressions). Patterns may consist of one or more lines, allowing any of the pattern lines to match a portion of the input. zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utilities. bzgrep, bzegrep, and bzfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the bzip2(1) compression utility. The following options are available: -A num, --after-context=num Print num lines of trailing context after each match. See also the -B and -C options. -a, --text Treat all files as ASCII text. Normally grep will simply print “Binary file ... matches” if files contain binary characters. Use of this option forces grep to output lines matching the specified pattern. -B num, --before-context=num Print num lines of leading context before each match. See also the -A and -C options. -b, --byte-offset The offset in bytes of a matched pattern is displayed in front of the respective matched line. -C num, --context=num Print num lines of leading and trailing context surrounding each match. See also the -A and -B options. -c, --count Only a count of selected lines is written to standard output. 
--colour=[when], --color=[when] Mark up the matching text with the expression stored in the GREP_COLOR environment variable. The possible values of when are “never”, “always” and “auto”. -D action, --devices=action Specify the demanded action for devices, FIFOs and sockets. The default action is “read”, which means that they are read as if they were normal files. If the action is set to “skip”, devices are silently skipped. -d action, --directories=action Specify the demanded action for directories. It is “read” by default, which means that the directories are read in the same manner as normal files. Other possible values are “skip” to silently ignore the directories, and “recurse” to read them recursively, which has the same effect as the -R and -r options. -E, --extended-regexp Interpret pattern as an extended regular expression (i.e., force grep to behave as egrep). -e pattern, --regexp=pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’). --exclude pattern If specified, it excludes files matching the given filename pattern from the search. Note that --exclude and --include patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include pattern is specified, all files are searched that are not excluded. Patterns are matched to the full path specified, not only to the filename component. --exclude-dir pattern If -R is specified, it excludes directories matching the given filename pattern from the search. Note that --exclude-dir and --include-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include-dir pattern is specified, all directories are searched that are not excluded. 
-F, --fixed-strings Interpret pattern as a set of fixed strings (i.e., force grep to behave as fgrep). -f file, --file=file Read one or more newline separated patterns from file. Empty pattern lines match every input line. Newlines are not considered part of a pattern. If file is empty, nothing is matched. -G, --basic-regexp Interpret pattern as a basic regular expression (i.e., force grep to behave as traditional grep). -H Always print filename headers with output lines. -h, --no-filename Never print filename headers (i.e., filenames) with output lines. --help Print a brief help message. -I Ignore binary files. This option is equivalent to the “--binary-files=without-match” option. -i, --ignore-case Perform case insensitive matching. By default, grep is case sensitive. --include pattern If specified, only files matching the given filename pattern are searched. Note that --include and --exclude patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. Patterns are matched to the full path specified, not only to the filename component. --include-dir pattern If -R is specified, only directories matching the given filename pattern are searched. Note that --include-dir and --exclude-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. -J, --bz2decompress Decompress the bzip2(1) compressed file before looking for the text. -L, --files-without-match Only the names of files not containing selected lines are written to standard output. Pathnames are listed once per file searched. If the standard input is searched, the string “(standard input)” is written unless a --label is specified. -l, --files-with-matches Only the names of files containing selected lines are written to standard output. grep will only search a file until a match has been found, making searches potentially less expensive. Pathnames are listed once per file searched. 
If the standard input is searched, the string “(standard input)” is written unless a --label is specified. --label Label to use in place of “(standard input)” for a file name where a file name would normally be printed. This option applies to -H, -L, and -l. --mmap Use mmap(2) instead of read(2) to read input, which can result in better performance under some circumstances but can cause undefined behaviour. -M, --lzma Decompress the LZMA compressed file before looking for the text. -m num, --max-count=num Stop reading the file after num matches. -n, --line-number Each output line is preceded by its relative line number in the file, starting at line 1. The line number counter is reset for each file processed. This option is ignored if -c, -L, -l, or -q is specified. --null Prints a zero-byte after the file name. -O If -R is specified, follow symbolic links only if they were explicitly listed on the command line. The default is not to follow symbolic links. -o, --only-matching Prints only the matching part of the lines. -p If -R is specified, no symbolic links are followed. This is the default. -q, --quiet, --silent Quiet mode: suppress normal output. grep will only search a file until a match has been found, making searches potentially less expensive. -R, -r, --recursive Recursively search subdirectories listed. (i.e., force grep to behave as rgrep). -S If -R is specified, all symbolic links are followed. The default is not to follow symbolic links. -s, --no-messages Silent mode. Nonexistent and unreadable files are ignored (i.e., their error messages are suppressed). -U, --binary Search binary files, but do not attempt to print them. -u This option has no effect and is provided only for compatibility with GNU grep. -V, --version Display version information and exit. -v, --invert-match Selected lines are those not matching any of the specified patterns. 
-w, --word-regexp The expression is searched for as a word (as if surrounded by ‘[[:<:]]’ and ‘[[:>:]]’; see re_format(7)). This option has no effect if -x is also specified. -x, --line-regexp Only input lines selected against an entire fixed string or regular expression are considered to be matching lines. -y Equivalent to -i. Obsoleted. -z, --null-data Treat input and output data as sequences of lines terminated by a zero-byte instead of a newline. -X, --xz Decompress the xz(1) compressed file before looking for the text. -Z, --decompress Force grep to behave as zgrep. --binary-files=value Controls searching and printing of binary files. Options are: binary (default) Search binary files but do not print them. without-match Do not search binary files. text Treat all files as text. --line-buffered Force output to be line buffered. By default, output is line buffered when standard output is a terminal and block buffered otherwise. If no file arguments are specified, the standard input is used. Additionally, “-” may be used in place of a file name, anywhere that a file name is accepted, to read from standard input. This includes both -f and file arguments. ENVIRONMENT GREP_OPTIONS May be used to specify default options that will be placed at the beginning of the argument list. Backslash-escaping is not supported, unlike the behavior in GNU grep. EXIT STATUS The grep utility exits with one of the following values: 0 One or more lines were selected. 1 No lines were selected. >1 An error occurred.
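The exit-status convention above makes grep usable as a predicate in shell scripts. A minimal sketch (the file and patterns are illustrative, not from the manual):

```shell
#!/bin/sh
# Sketch: using grep's exit status (0 = match, 1 = no match, >1 = error)
# as a shell predicate. -q suppresses output; only the status is used.
tmp=$(mktemp)
printf 'alpha\nbeta\n' > "$tmp"

if grep -q 'alpha' "$tmp"; then
    echo "match"            # grep exited 0: at least one line selected
fi

grep -q 'gamma' "$tmp" || echo "no match"   # grep exited 1: no lines selected

rm -f "$tmp"
```

This is the same pattern the EXAMPLES section uses with `grep -q foo myfile && echo File matches`.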
grep, egrep, fgrep, rgrep, bzgrep, bzegrep, bzfgrep, zgrep, zegrep, zfgrep – file pattern searcher
grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C num] [-e pattern] [-f file] [--binary-files=value] [--color[=when]] [--colour[=when]] [--context=num] [--label] [--line-buffered] [--null] [pattern] [file ...]
null
- Find all occurrences of the pattern ‘patricia’ in a file: $ grep 'patricia' myfile - Same as above but looking only for complete words: $ grep -w 'patricia' myfile - Count occurrences of the exact pattern ‘FOO’ : $ grep -c FOO myfile - Same as above but ignoring case: $ grep -c -i FOO myfile - Find all occurrences of the pattern ‘.Pp’ at the beginning of a line: $ grep '^\.Pp' myfile The apostrophes ensure the entire expression is evaluated by grep instead of by the user's shell. The caret ‘^’ matches the null string at the beginning of a line, and the ‘\’ escapes the ‘.’, which would otherwise match any character. - Find all lines in a file which do not contain the words ‘foo’ or ‘bar’: $ grep -v -e 'foo' -e 'bar' myfile - Peruse the file ‘calendar’ looking for either 19, 20, or 25 using extended regular expressions: $ egrep '19|20|25' calendar - Show matching lines and the name of the ‘*.h’ files which contain the pattern ‘FIXME’. Do the search recursively from the /usr/src/sys/arm directory $ grep -H -R FIXME --include="*.h" /usr/src/sys/arm/ - Same as above but show only the name of the matching file: $ grep -l -R FIXME --include="*.h" /usr/src/sys/arm/ - Show lines containing the text ‘foo’. The matching part of the output is colored and every line is prefixed with the line number and the offset in the file for those lines that matched. 
$ grep -b --colour -n foo myfile - Show lines that match the extended regular expression patterns read from the standard input: $ echo -e 'Free\nBSD\nAll.*reserved' | grep -E -f - myfile - Show lines from the output of the pciconf(8) command matching the specified extended regular expression along with three lines of leading context and one line of trailing context: $ pciconf -lv | grep -B3 -A1 -E 'class.*=.*storage' - Suppress any output and use the exit status to show an appropriate message: $ grep -q foo myfile && echo File matches SEE ALSO bzip2(1), compress(1), ed(1), ex(1), gzip(1), sed(1), xz(1), zgrep(1), re_format(7) STANDARDS The grep utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-AaBbCDdGHhILmopRSUVw] are extensions to that specification, and the behaviour of the -f flag when used with an empty pattern file is left undefined. All long options are provided for compatibility with GNU versions of this utility. Historic versions of the grep utility also supported the flags [-ruy]. This implementation supports those options; however, their use is strongly discouraged. HISTORY The grep command first appeared in Version 6 AT&T UNIX. BUGS The grep utility does not normalize Unicode input, so a pattern containing composed characters will not match decomposed input, and vice versa. macOS 14.5 November 10, 2021 macOS 14.5
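The --null option described above pairs naturally with -l when file names may contain spaces or newlines; the NUL terminator lets xargs -0 split the list safely. A small sketch (file names are illustrative):

```shell
#!/bin/sh
# Sketch: -l prints matching file names; --null terminates each name
# with a zero byte, which xargs -0 then splits on safely even when
# names contain spaces.
d=$(mktemp -d)
cd "$d"
printf 'needle\n' > 'file with spaces.txt'
printf 'hay\n'    > plain.txt

grep -l --null needle -- * | xargs -0 -n1 echo found:
```

Without --null, a file name containing whitespace would be broken apart by the downstream word splitting.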
macerror
The macerror script translates Mac error numbers into their symbolic name and description. SEE ALSO Mac::Errors AUTHOR Chris Nandor, pudge@pobox.com COPYRIGHT Copyright 2002, Chris Nandor, All rights reserved. You may use this under the same terms as Perl itself. perl v5.34.0 2018-06-20 MACERROR(1)
macerror
% macerror -23
null
null
moose-outdated
null
null
null
null
null
rwbytype.d
This program identifies the vnode type of read/write activity - whether that is for regular files, sockets, character special devices, etc. This is measuring at the application level, so file activity may well be cached by the system. Since this uses DTrace, only users with root privileges can run this command.
rwbytype.d - read/write bytes by vnode type. Uses DTrace.
rwbytype.d
null
This samples until Ctrl-C is hit. # rwbytype.d FIELDS PID process ID CMD process name VNODE vnode type (describes I/O type) DIR direction, Read or Write BYTES total bytes DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT rwbytype.d will sample until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO rwbypid.d(1M), rwbbypid.d(1M), dtrace(1M) version 0.70 January 11, 2006 rwbytype.d(1m)
script
The script utility makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1). If the argument file is given, script saves all dialogue in file. If no file name is given, the typescript is saved in the file typescript. If the argument command is given, script will run the specified command with an optional argument vector instead of an interactive shell. The following options are available: -a Append the output to file or typescript, retaining the prior contents. -d When playing back a session with the -p flag, do not sleep between records when playing back a timestamped session. -e Accepted for compatibility with util-linux script. The child command exit status is always the exit status of script. -F Immediately flush output after each write. This will allow a user to create a named pipe using mkfifo(1) and another user may watch the live session using a utility like cat(1). -k Log keys sent to the program as well as output. -p Play back a session recorded with the -r flag in real time. -q Run in quiet mode, omit the start, stop and command status messages. -r Record a session with input, output, and timestamping. -t time Specify the interval at which the script output file will be flushed to disk, in seconds. A value of 0 causes script to flush after every character I/O event. The default interval is 30 seconds. -T fmt Implies -p, but just reports the time-stamp of each output. This is very useful for assessing the timing of events. If fmt does not contain any ‘%’ characters, it indicates that the default format, ‘%n@ %s [%Y-%m-%d %T]%n’, which is useful for both tools and humans to read, should be used. Note that timestamps will only be output when different from the previous one. 
The script ends when the forked shell (or command) exits (a control-D to exit the Bourne shell (sh(1)), and exit, logout or control-D (if ignoreeof is not set) for the C-shell, csh(1)). Certain interactive commands, such as vi(1), create garbage in the typescript file. The script utility works best with commands that do not manipulate the screen. The results are meant to emulate a hardcopy terminal, not an addressable one. ENVIRONMENT The following environment variables are utilized by script: SCRIPT The SCRIPT environment variable is added to the sub-shell. If SCRIPT already existed in the user's environment, its value is overwritten within the sub-shell. The value of SCRIPT is the name of the typescript file. SHELL If the variable SHELL exists, the shell forked by script will be that shell. If SHELL is not set, the Bourne shell is assumed. (Most shells set this variable automatically).
script – make typescript of terminal session
script [-aeFkqr] [-t time] [file [command ...]] script -p [-deq] [-T fmt] [file]
null
Record a simple csh(1) session with no additional details like input, output, and timestamping: $ SHELL=/bin/csh script Script started, output file is typescript % date Tue Jan 5 15:08:10 UTC 2021 % exit exit Script done, output file is typescript Now, replay the session recorded in the previous example: $ cat ./typescript Script started on Tue Jan 5 15:08:08 2021 % date Tue Jan 5 15:08:10 UTC 2021 % exit exit Script done on Tue Jan 5 15:08:13 2021 Record a csh(1) session, but this time with additional details like timestamping: $ SHELL=/bin/csh script -r Script started, output file is typescript % date Tue Jan 5 15:17:11 UTC 2021 % exit exit Script done, output file is typescript In order to replay a session recorded with the -r flag, it is necessary to specify -p (cat(1) will not work because of all the additional information stored in the session file). Also, let us use -d to print the whole session at once: $ script -dp ./typescript Script started on Tue Jan 5 15:17:09 2021 % date Tue Jan 5 15:17:11 UTC 2021 % exit exit Script done on Tue Jan 5 15:17:14 2021 SEE ALSO csh(1) (for the history mechanism) HISTORY The script command appeared in 3.0BSD. The -d, -p and -r options first appeared in NetBSD 2.0 and were ported to FreeBSD 9.2. BUGS The script utility places everything in the log file, including linefeeds and backspaces. This is not what the naive user expects. It is not possible to specify a command without also naming the script file because of argument parsing compatibility issues. When running in -k mode, echo cancelling is far from ideal. The slave terminal mode is checked for ECHO mode to check when to avoid manual echo logging. This does not work when the terminal is in a raw mode where the program being run is doing manual echo. If script reads zero bytes from the terminal, it switches to a mode in which it only attempts to read once a second until there is data to read. 
This prevents script from spinning on zero-byte reads, but might cause a 1-second delay in processing of user input. macOS 14.5 October 26, 2022 macOS 14.5
pod2text
pod2text is a front-end for Pod::Text and its subclasses. It uses them to generate formatted ASCII text from POD source. It can optionally use either termcap sequences or ANSI color escape sequences to format the text. input is the file to read for POD source (the POD can be embedded in code). If input isn't given, it defaults to "STDIN". output, if given, is the file to which to write the formatted output. If output isn't given, the formatted output is written to "STDOUT". Several POD files can be processed in the same pod2text invocation (saving module load and compile times) by providing multiple pairs of input and output files on the command line.
pod2text - Convert POD data to formatted ASCII text
pod2text [-aclostu] [--code] [--errors=style] [-i indent] [-q quotes] [--nourls] [--stderr] [-w width] [input [output] ...] pod2text -h
-a, --alt Use an alternate output format that, among other things, uses a different heading style and marks "=item" entries with a colon in the left margin. --code Include any non-POD text from the input file in the output as well. Useful for viewing code documented with POD blocks with the POD rendered and the code left intact. -c, --color Format the output with ANSI color escape sequences. Using this option requires that Term::ANSIColor be installed on your system. --errors=style Set the error handling style. "die" says to throw an exception on any POD formatting error. "stderr" says to report errors on standard error, but not to throw an exception. "pod" says to include a POD ERRORS section in the resulting documentation summarizing the errors. "none" ignores POD errors entirely, as much as possible. The default is "die". -i indent, --indent=indent Set the number of spaces to indent regular text, and the default indentation for "=over" blocks. Defaults to 4 spaces if this option isn't given. -h, --help Print out usage information and exit. -l, --loose Print a blank line after a "=head1" heading. Normally, no blank line is printed after "=head1", although one is still printed after "=head2", because this is the expected formatting for manual pages; if you're formatting arbitrary text documents, using this option is recommended. -m width, --left-margin=width, --margin=width The width of the left margin in spaces. Defaults to 0. This is the margin for all text, including headings, not the amount by which regular text is indented; for the latter, see the -i option. --nourls Normally, L<> formatting codes with a URL but anchor text are formatted to show both the anchor text and the URL. In other words: L<foo|http://example.com/> is formatted as: foo <http://example.com/> This flag, if given, suppresses the URL when anchor text is given, so this example would be formatted as just "foo". 
This can produce less cluttered output in cases where the URLs are not particularly important. -o, --overstrike Format the output with overstrike printing. Bold text is rendered as character, backspace, character. Italics and file names are rendered as underscore, backspace, character. Many pagers, such as less, know how to convert this to bold or underlined text. -q quotes, --quotes=quotes Sets the quote marks used to surround C<> text to quotes. If quotes is a single character, it is used as both the left and right quote. Otherwise, it is split in half, and the first half of the string is used as the left quote and the second is used as the right quote. quotes may also be set to the special value "none", in which case no quote marks are added around C<> text. -s, --sentence Assume each sentence ends with two spaces and try to preserve that spacing. Without this option, all consecutive whitespace in non- verbatim paragraphs is compressed into a single space. --stderr By default, pod2text dies if any errors are detected in the POD input. If --stderr is given and no --errors flag is present, errors are sent to standard error, but pod2text does not abort. This is equivalent to "--errors=stderr" and is supported for backward compatibility. -t, --termcap Try to determine the width of the screen and the bold and underline sequences for the terminal from termcap, and use that information in formatting the output. Output will be wrapped at two columns less than the width of your terminal device. Using this option requires that your system have a termcap file somewhere where Term::Cap can find it and requires that your system support termios. With this option, the output of pod2text will contain terminal control sequences for your current terminal type. -u, --utf8 By default, pod2text tries to use the same output encoding as its input encoding (to be backward-compatible with older versions). This option says to instead force the output encoding to UTF-8. 
Be aware that, when using this option, the input encoding of your POD source should be properly declared unless it's US-ASCII. Pod::Simple will attempt to guess the encoding and may be successful if it's Latin-1 or UTF-8, but it will warn, which by default results in a pod2text failure. Use the "=encoding" command to declare the encoding. See perlpod(1) for more information. -w width, --width=width, -width The column at which to wrap text on the right-hand side. Defaults to 76, unless -t is given, in which case it's two columns less than the width of your terminal device. EXIT STATUS As long as all documents processed result in some output, even if that output includes errata (a "POD ERRORS" section generated with "--errors=pod"), pod2text will exit with status 0. If any of the documents being processed do not result in an output document, pod2text will exit with status 1. If there are syntax errors in a POD document being processed and the error handling style is set to the default of "die", pod2text will abort immediately with exit status 255. DIAGNOSTICS If pod2text fails with errors, see Pod::Text and Pod::Simple for information about what those errors might mean. Internally, it can also produce the following diagnostics: -c (--color) requires Term::ANSIColor be installed (F) -c or --color were given, but Term::ANSIColor could not be loaded. Unknown option: %s (F) An unknown command line option was given. In addition, other Getopt::Long error messages may result from invalid command-line options. ENVIRONMENT COLUMNS If -t is given, pod2text will take the current width of your screen from this environment variable, if available. It overrides terminal width information in TERMCAP. TERMCAP If -t is given, pod2text will use the contents of this environment variable if available to determine the correct formatting sequences for your current terminal device. AUTHOR Russ Allbery <rra@cpan.org>. 
COPYRIGHT AND LICENSE Copyright 1999-2001, 2004, 2006, 2008, 2010, 2012-2018 Russ Allbery <rra@cpan.org> This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Pod::Text, Pod::Text::Color, Pod::Text::Overstrike, Pod::Text::Termcap, Pod::Simple, perlpod(1) The current version of this script is always available from its web site at <https://www.eyrie.org/~eagle/software/podlators/>. It is also part of the Perl core distribution as of 5.6.0. perl v5.30.3 2024-04-13 POD2TEXT(1)
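The -q/--quotes split-in-half rule described above (a single character is used on both sides; a longer string is halved into left and right quotes) can be sketched in plain shell string handling. This is a sketch of the documented rule only, not pod2text's implementation:

```shell
#!/bin/sh
# Sketch of the --quotes rule: one character is used as both quotes;
# a longer string is split in half into left and right quotes.
split_quotes() {
    q=$1
    n=${#q}
    if [ "$n" -eq 1 ]; then
        left=$q right=$q
    else
        half=$((n / 2))
        left=$(printf '%s' "$q" | cut -c1-"$half")
        right=$(printf '%s' "$q" | cut -c"$((half + 1))"-)
    fi
    printf '%sC<text>%s\n' "$left" "$right"
}

split_quotes '"'     # a single character quotes both sides
split_quotes '<<>>'  # left quote is <<, right quote is >>
```

The special value "none" (no quoting at all) would simply leave the text bare.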
null
vtool
The vtool utility displays and edits build and source version numbers embedded in the Mach-O(5) file format. These version numbers are stored within the Mach-O load commands, as described in the ⟨mach-o/loader.h⟩ header file and in the VERSION LOAD COMMANDS section below. When editing files, a new out_file must be specified using the -output flag; vtool will only ever write to a single output file, and input files are never modified in place. vtool operates in one of three functional modes (in addition to a help mode) depending on the type of arguments specified on the command line: show, set, and remove. All of these modes operate on “universal” (multi- architecture) files as well as ordinary Mach-O files. The -arch flag limits operation to one or more architectures within a universal file. Show Show options include -show, -show-build, -show-source, and -show-space. Only one of these commands may be specified. The version information will be printed in a manner similar to otool(1) or otool-classic(1). Set Set options include -set-build-tool, -set-build-version, -set-source-version, and -set-version-min. Any number of these commands can be combined in a single vtool invocation. You can use these set commands to add a new build version to a Mach-O or to replace an existing version for a specific platform. When used with the -replace option, all existing build versions will be entirely replaced by the new build versions specified on the command line. Remove Remove options include -remove-build-tool, -remove-build-version, and -remove-source-version. Any number of these commands can be combined in a single vtool invocation. Currently vtool only operates on final linked binaries, such as executable files, dynamic libraries, and bundles. 
Because the executable code in Mach-O final linked binaries cannot be moved or resized, and because the load commands reside between the mach header and the executable code, there is only a limited amount of space available for vtool to save changes. Set operations that add or resize load commands may fail if there isn't enough space in the Mach-O file available to hold the new load commands.
vtool – Mach-O version number utility
vtool [-arch ⟨arch⟩] ... ⟨show_command⟩ ... file vtool [-arch ⟨arch⟩] ... ⟨set_command⟩ ... [-replace] -output out_file file vtool [-arch ⟨arch⟩] ... ⟨remove_command⟩ ... -output out_file file vtool -help
-arch ⟨arch⟩ Specifies the architecture, ⟨arch⟩, for vtool to operate on when the file is a universal (multi-architecture) file. See arch(3) for the current list of architectures. More than one architecture can be specified, and by default vtool will operate on all architectures in a universal file. -h, -help Print full usage. -o, -output out_file Commands that create new files write to the out_file file specified by the -output flag. This option is required for all set and remove commands. -r, -replace When used with -set-build-version or -set-version-min, the -replace option instructs vtool to discard all of the existing build versions from the input file. Use this to change a file's platform in a single call to vtool. When used with the -set-build-tool command, vtool will discard all of the existing tool versions from the specified platform's build version. This option has no effect on source versions. -remove-build-tool platform tool Removes tool from the platform build version. A build version for the specified platform must exist in the input file and that build version must be an LC_BUILD_VERSION. Must be used with -output. See VERSION LOAD COMMANDS for more information on platform and tool values. -remove-build-version platform Removes the build version for the specified platform. Must be used with -output. See VERSION LOAD COMMANDS for more information on platform values. -remove-source-version Removes the source version from the Mach-O file. Must be used with -output. -set-build-tool platform tool version Updates the build version load command for platform to include the specified tool, adding a new tool entry if necessary. The build version must be an LC_BUILD_VERSION load command which either already exists within the input file or is newly specified on the command line. The version field takes the format X.Y.Z. Must be used with -output. See VERSION LOAD COMMANDS for more information on platform and tool values. 
-set-build-version platform minos sdk [-tool tool version] Create or update the LC_BUILD_VERSION load command for platform to include the specified minos and sdk version numbers, and zero or more optional tools. The minos, sdk, and tool version all take the format X.Y.Z. Must be used with -output. See VERSION LOAD COMMANDS for more information on platform and tool values. -set-source-version version Create or update the source version load command. version takes the format A.B.C.D.E. Must be used with -output. -set-version-min platform minos sdk Create or update an LC_VERSION_MIN_* load command for platform. This option is included to support older operating systems, and generally one should favor -set-build-version instead. Note that version min load commands do not support tool versions, and not all platforms can be expressed using version min load commands. Must be used with -output. -show, -show-all Display the build and source versions within the specified file. This option cannot be combined with other commands. -show-build Display the build versions within the specified file. This option cannot be combined with other commands. -show-source Display the source version within the specified file. This option cannot be combined with other commands. -show-space Show the space in the file consumed by the mach header and the existing load commands, and measure the amount of additional space available for adding new load commands. - A single dash instructs vtool to stop parsing arguments. This is useful for operating on files whose names would otherwise be interpreted as an option or flag. VERSION LOAD COMMANDS Modern Mach-O files can contain multiple build versions, one for each unique platform represented in the file. A platform is a loosely-defined concept within Mach-O, most often used to identify different Darwin operating systems, such as macOS and iOS. Platforms and tools can be specified either by name (e.g., "macos" or "clang") or by number (e.g., "1"). 
Common platform and tool constants are defined in ⟨mach-o/loader.h⟩ and vtool will display platform and tool names when invoked with -help. Modern Mach-O files store build information in one or more LC_BUILD_VERSION load commands. LC_BUILD_VERSION supports arbitrary platforms and can include version information about the tools used to build the Mach-O file. Older Mach-O files use a “version min” load command, such as LC_VERSION_MIN_MACOSX. While version min commands are appropriate when deploying Mach-O files on older operating systems, be aware that they do not support tool versions, and version min load commands do not exist for all possible platforms. In some cases LC_BUILD_VERSION and LC_VERSION_MIN_* load commands can appear in a single Mach-O file, but many restrictions apply, and vtool may not enforce these restrictions. vtool will prevent you from writing more than one build version load command for the same platform. Source versions are stored in a single LC_SOURCE_VERSION load command. When writing new load commands, vtool will attempt to preserve the order of the load commands as they appear on the command line. No attempt is made to preserve positions relative to other existing load commands. Editing an existing load command may have the side effect of moving the load command to the end of the load command list. SEE ALSO ld(1), lipo(1), otool-classic(1), arch(3), Mach-O(5). HISTORY LC_BUILD_VERSION first appeared in macOS 10.13 in 2017 for use with the bridgeOS platform. LC_BUILD_VERSION became the default build version load command for the macOS, iOS, tvOS, and watchOS platforms in 2018 with macOS 10.14, iOS 12.0, and friends. The list of platforms also grew to include iOSSimulator, tvOSSimulator, and watchOSSimulator. vtool first appeared in macOS 10.15 and iOS 13.0 in 2019. BUGS vtool will write load commands in a different order than ld(1). Currently vtool does not work with object files or archives. Darwin December 31, 2018 Darwin
null
more
Less is a program similar to more(1), but which allows backward movement in the file as well as forward movement. Also, less does not have to read the entire input file before starting, so with large input files it starts up faster than text editors like vi(1). Less uses termcap (or terminfo on some systems), so it can run on a variety of terminals. There is even limited support for hardcopy terminals. (On a hardcopy terminal, lines which should be printed at the top of the screen are prefixed with a caret.) Commands are based on both more and vi. Commands may be preceded by a decimal number, called N in the descriptions below. The number is used by some commands, as indicated. COMMANDS In the following descriptions, ^X means control-X. ESC stands for the ESCAPE key; for example ESC-v means the two character sequence "ESCAPE", then "v". h or H Help: display a summary of these commands. If you forget all the other commands, remember this one. SPACE or ^V or f or ^F Scroll forward N lines, default one window (see option -z below). If N is more than the screen size, only the final screenful is displayed. Warning: some systems use ^V as a special literalization character. z Like SPACE, but if N is specified, it becomes the new window size. ESC-SPACE Like SPACE, but scrolls a full screenful, even if it reaches end-of-file in the process. ENTER or RETURN or ^N or e or ^E or j or ^J Scroll forward N lines, default 1. The entire N lines are displayed, even if N is more than the screen size. d or ^D Scroll forward N lines, default one half of the screen size. If N is specified, it becomes the new default for subsequent d and u commands. b or ^B or ESC-v Scroll backward N lines, default one window (see option -z below). If N is more than the screen size, only the final screenful is displayed. w Like ESC-v, but if N is specified, it becomes the new window size. y or ^Y or ^P or k or ^K Scroll backward N lines, default 1. 
The entire N lines are displayed, even if N is more than the screen size. Warning: some systems use ^Y as a special job control character. u or ^U Scroll backward N lines, default one half of the screen size. If N is specified, it becomes the new default for subsequent d and u commands. J Like j, but continues to scroll beyond the end of the file. K or Y Like k, but continues to scroll beyond the beginning of the file. ESC-) or RIGHTARROW Scroll horizontally right N characters, default half the screen width (see the -# option). If a number N is specified, it becomes the default for future RIGHTARROW and LEFTARROW commands. While the text is scrolled, it acts as though the -S option (chop lines) were in effect. ESC-( or LEFTARROW Scroll horizontally left N characters, default half the screen width (see the -# option). If a number N is specified, it becomes the default for future RIGHTARROW and LEFTARROW commands. ESC-} or ^RIGHTARROW Scroll horizontally right to show the end of the longest displayed line. ESC-{ or ^LEFTARROW Scroll horizontally left back to the first column. r or ^R or ^L Repaint the screen. R Repaint the screen, discarding any buffered input. That is, reload the current file. Useful if the file is changing while it is being viewed. F Scroll forward, and keep trying to read when the end of file is reached. Normally this command would be used when already at the end of the file. It is a way to monitor the tail of a file which is growing while it is being viewed. (The behavior is similar to the "tail -f" command.) To stop waiting for more data, enter the interrupt character (usually ^C). On some systems you can also use ^X. ESC-F Like F, but as soon as a line is found which matches the last search pattern, the terminal bell is rung and forward scrolling stops. g or < or ESC-< Go to line N in the file, default 1 (beginning of file). (Warning: this may be slow if N is large.) G or > or ESC-> Go to line N in the file, default the end of the file. 
(Warning: this may be slow if N is large, or if N is not specified and standard input, rather than a file, is being read.) ESC-G Same as G, except if no number N is specified and the input is standard input, goes to the last line which is currently buffered. p or % Go to a position N percent into the file. N should be between 0 and 100, and may contain a decimal point. P Go to the line containing byte offset N in the file. { If a left curly bracket appears in the top line displayed on the screen, the { command will go to the matching right curly bracket. The matching right curly bracket is positioned on the bottom line of the screen. If there is more than one left curly bracket on the top line, a number N may be used to specify the N-th bracket on the line. } If a right curly bracket appears in the bottom line displayed on the screen, the } command will go to the matching left curly bracket. The matching left curly bracket is positioned on the top line of the screen. If there is more than one right curly bracket on the top line, a number N may be used to specify the N-th bracket on the line. ( Like {, but applies to parentheses rather than curly brackets. ) Like }, but applies to parentheses rather than curly brackets. [ Like {, but applies to square brackets rather than curly brackets. ] Like }, but applies to square brackets rather than curly brackets. ESC-^F Followed by two characters, acts like {, but uses the two characters as open and close brackets, respectively. For example, "ESC ^F < >" could be used to go forward to the > which matches the < in the top displayed line. ESC-^B Followed by two characters, acts like }, but uses the two characters as open and close brackets, respectively. For example, "ESC ^B < >" could be used to go backward to the < which matches the > in the bottom displayed line. m Followed by any lowercase or uppercase letter, marks the first displayed line with that letter. 
If the status column is enabled via the -J option, the status column shows the marked line. M Acts like m, except the last displayed line is marked rather than the first displayed line. ' (Single quote.) Followed by any lowercase or uppercase letter, returns to the position which was previously marked with that letter. Followed by another single quote, returns to the position at which the last "large" movement command was executed. Followed by a ^ or $, jumps to the beginning or end of the file respectively. Marks are preserved when a new file is examined, so the ' command can be used to switch between input files. ^X^X Same as single quote. ESC-m Followed by any lowercase or uppercase letter, clears the mark identified by that letter. /pattern Search forward in the file for the N-th line containing the pattern. N defaults to 1. The pattern is a regular expression, as recognized by the regular expression library supplied by your system. The search starts at the first line displayed (but see the -a and -j options, which change this). Certain characters are special if entered at the beginning of the pattern; they modify the type of search rather than become part of the pattern: ^N or ! Search for lines which do NOT match the pattern. ^E or * Search multiple files. That is, if the search reaches the END of the current file without finding a match, the search continues in the next file in the command line list. ^F or @ Begin the search at the first line of the FIRST file in the command line list, regardless of what is currently displayed on the screen or the settings of the -a or -j options. ^K Highlight any text which matches the pattern on the current screen, but don't move to the first match (KEEP current position). ^R Don't interpret regular expression metacharacters; that is, do a simple textual comparison. ^W WRAP around the current file. 
That is, if the search reaches the end of the current file without finding a match, the search continues from the first line of the current file up to the line where it started. ?pattern Search backward in the file for the N-th line containing the pattern. The search starts at the last line displayed (but see the -a and -j options, which change this). Certain characters are special as in the / command: ^N or ! Search for lines which do NOT match the pattern. ^E or * Search multiple files. That is, if the search reaches the beginning of the current file without finding a match, the search continues in the previous file in the command line list. ^F or @ Begin the search at the last line of the last file in the command line list, regardless of what is currently displayed on the screen or the settings of the -a or -j options. ^K As in forward searches. ^R As in forward searches. ^W WRAP around the current file. That is, if the search reaches the beginning of the current file without finding a match, the search continues from the last line of the current file up to the line where it started. ESC-/pattern Same as "/*". ESC-?pattern Same as "?*". n Repeat previous search, for N-th line containing the last pattern. If the previous search was modified by ^N, the search is made for the N-th line NOT containing the pattern. If the previous search was modified by ^E, the search continues in the next (or previous) file if not satisfied in the current file. If the previous search was modified by ^R, the search is done without using regular expressions. There is no effect if the previous search was modified by ^F or ^K. N Repeat previous search, but in the reverse direction. ESC-n Repeat previous search, but crossing file boundaries. The effect is as if the previous search were modified by *. ESC-N Repeat previous search, but in the reverse direction and crossing file boundaries. ESC-u Undo search highlighting. Turn off highlighting of strings matching the current search pattern. 
If highlighting is already off because of a previous ESC-u command, turn highlighting back on. Any search command will also turn highlighting back on. (Highlighting can also be disabled by toggling the -G option; in that case search commands do not turn highlighting back on.) ESC-U Like ESC-u but also clears the saved search pattern. If the status column is enabled via the -J option, this clears all search matches marked in the status column. &pattern Display only lines which match the pattern; lines which do not match the pattern are not displayed. If pattern is empty (if you type & immediately followed by ENTER), any filtering is turned off, and all lines are displayed. While filtering is in effect, an ampersand is displayed at the beginning of the prompt, as a reminder that some lines in the file may be hidden. Multiple & commands may be entered, in which case only lines which match all of the patterns will be displayed. Certain characters are special as in the / command: ^N or ! Display only lines which do NOT match the pattern. ^R Don't interpret regular expression metacharacters; that is, do a simple textual comparison. :e [filename] Examine a new file. If the filename is missing, the "current" file (see the :n and :p commands below) from the list of files in the command line is re-examined. A percent sign (%) in the filename is replaced by the name of the current file. A pound sign (#) is replaced by the name of the previously examined file. However, two consecutive percent signs are simply replaced with a single percent sign. This allows you to enter a filename that contains a percent sign in the name. Similarly, two consecutive pound signs are replaced with a single pound sign. The filename is inserted into the command line list of files so that it can be seen by subsequent :n and :p commands. If the filename consists of several files, they are all inserted into the list of files and the first one is examined. 
If the filename contains one or more spaces, the entire filename should be enclosed in double quotes (also see the -" option). ^X^V or E Same as :e. Warning: some systems use ^V as a special literalization character. On such systems, you may not be able to use ^V. :n Examine the next file (from the list of files given in the command line). If a number N is specified, the N-th next file is examined. :p Examine the previous file in the command line list. If a number N is specified, the N-th previous file is examined. :x Examine the first file in the command line list. If a number N is specified, the N-th file in the list is examined. :d Remove the current file from the list of files. t Go to the next tag, if there was more than one match for the current tag. See the -t option for more details about tags. T Go to the previous tag, if there was more than one match for the current tag. = or ^G or :f Prints some information about the file being viewed, including its name and the line number and byte offset of the bottom line being displayed. If possible, it also prints the length of the file, the number of lines in the file and the percent of the file above the last displayed line. - Followed by one of the command line option letters (see OPTIONS below), this will change the setting of that option and print a message describing the new setting. If a ^P (CONTROL-P) is entered immediately after the dash, the setting of the option is changed but no message is printed. If the option letter has a numeric value (such as -b or -h), or a string value (such as -P or -t), a new value may be entered after the option letter. If no new value is entered, a message describing the current setting is printed and nothing is changed. -- Like the - command, but takes a long option name (see OPTIONS below) rather than a single option letter. You must press ENTER or RETURN after typing the option name. 
A ^P immediately after the second dash suppresses printing of a message describing the new setting, as in the - command. -+ Followed by one of the command line option letters this will reset the option to its default setting and print a message describing the new setting. (The "-+X" command does the same thing as "-+X" on the command line.) This does not work for string-valued options. --+ Like the -+ command, but takes a long option name rather than a single option letter. -! Followed by one of the command line option letters, this will reset the option to the "opposite" of its default setting and print a message describing the new setting. This does not work for numeric or string-valued options. --! Like the -! command, but takes a long option name rather than a single option letter. _ (Underscore.) Followed by one of the command line option letters, this will print a message describing the current setting of that option. The setting of the option is not changed. __ (Double underscore.) Like the _ (underscore) command, but takes a long option name rather than a single option letter. You must press ENTER or RETURN after typing the option name. +cmd Causes the specified cmd to be executed each time a new file is examined. For example, +G causes less to initially display each file starting at the end rather than the beginning. V Prints the version number of less being run. q or Q or :q or :Q or ZZ Exits less. The following four commands may or may not be valid, depending on your particular installation. v Invokes an editor to edit the current file being viewed. The editor is taken from the environment variable VISUAL if defined, or EDITOR if VISUAL is not defined, or defaults to "vi" if neither VISUAL nor EDITOR is defined. See also the discussion of LESSEDIT under the section on PROMPTS below. ! shell-command Invokes a shell to run the shell-command given. A percent sign (%) in the command is replaced by the name of the current file. 
A pound sign (#) is replaced by the name of the previously examined file. "!!" repeats the last shell command. "!" with no shell command simply invokes a shell. On Unix systems, the shell is taken from the environment variable SHELL, or defaults to "sh". On MS-DOS and OS/2 systems, the shell is the normal command processor. | <m> shell-command <m> represents any mark letter. Pipes a section of the input file to the given shell command. The section of the file to be piped is between the position marked by the letter and the current screen. The entire current screen is included, regardless of whether the marked position is before or after the current screen. <m> may also be ^ or $ to indicate beginning or end of file respectively. If <m> is . or newline, the current screen is piped. s filename Save the input to a file. This only works if the input is a pipe, not an ordinary file.
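Several of the commands above can be primed from the shell through the +cmd startup syntax (see OPTIONS below). A minimal sketch; the file path is hypothetical, and the less invocations are shown as comments because each one starts an interactive session:

```shell
# Create a small sample file so the commands below have input.
printf 'alpha\nbeta\ngamma\n' > /tmp/demo.txt

# Run interactively:
#   less +F /tmp/demo.txt      # start in "follow" mode, like tail -f;
#                              # ^C stops waiting, F resumes following
#   less +/beta /tmp/demo.txt  # open positioned at the first "beta";
#                              # then n repeats the search, N reverses it
```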
less - opposite of more
less -?
less --help
less -V
less --version
less [-[+]aABcCdeEfFgGiIJKLmMnNqQrRsSuUVwWX~]
     [-b space] [-h lines] [-j line] [-k keyfile]
     [-{oO} logfile] [-p pattern] [-P prompt] [-t tag]
     [-T tagsfile] [-x tab,...] [-y lines] [-[z] lines]
     [-# shift] [+[+]cmd] [--] [filename]...
(See the OPTIONS section for alternate option syntax with long option names.)
Command line options are described below. Most options may be changed while less is running, via the "-" command. Most options may be given in one of two forms: either a dash followed by a single letter, or two dashes followed by a long option name. A long option name may be abbreviated as long as the abbreviation is unambiguous. For example, --quit-at-eof may be abbreviated --quit, but not --qui, since both --quit-at-eof and --quiet begin with --qui. Some long option names are in uppercase, such as --QUIT-AT-EOF, as distinct from --quit-at-eof. Such option names need only have their first letter capitalized; the remainder of the name may be in either case. For example, --Quit-at-eof is equivalent to --QUIT-AT-EOF. Options are also taken from the environment variable "LESS". For example, to avoid typing "less -options ..." each time less is invoked, you might tell csh: setenv LESS "-options" or if you use sh: LESS="-options"; export LESS On MS-DOS, you don't need the quotes, but you should replace any percent signs in the options string by double percent signs. The environment variable is parsed before the command line, so command line options override the LESS environment variable. If an option appears in the LESS variable, it can be reset to its default value on the command line by beginning the command line option with "-+". Some options like -k or -D require a string to follow the option letter. The string for that option is considered to end when a dollar sign ($) is found. For example, you can set two -D options on MS-DOS like this: LESS="Dn9.1$Ds4.1" If the --use-backslash option appears earlier in the options, then a dollar sign or backslash may be included literally in an option string by preceding it with a backslash. If the --use-backslash option is not in effect, then backslashes are not treated specially, and there is no way to include a dollar sign in the option string. -? 
or --help This option displays a summary of the commands accepted by less (the same as the h command). (Depending on how your shell interprets the question mark, it may be necessary to quote the question mark, thus: "-\?".) -a or --search-skip-screen By default, forward searches start at the top of the displayed screen and backwards searches start at the bottom of the displayed screen (except for repeated searches invoked by the n or N commands, which start after or before the "target" line respectively; see the -j option for more about the target line). The -a option causes forward searches to instead start at the bottom of the screen and backward searches to start at the top of the screen, thus skipping all lines displayed on the screen. -A or --SEARCH-SKIP-SCREEN Causes all forward searches (not just non-repeated searches) to start just after the target line, and all backward searches to start just before the target line. Thus, forward searches will skip part of the displayed screen (from the first line up to and including the target line). Similarly backwards searches will skip the displayed screen from the last line up to and including the target line. This was the default behavior in less versions prior to 441. -bn or --buffers=n Specifies the amount of buffer space less will use for each file, in units of kilobytes (1024 bytes). By default 64 KB of buffer space is used for each file (unless the file is a pipe; see the -B option). The -b option specifies instead that n kilobytes of buffer space should be used for each file. If n is -1, buffer space is unlimited; that is, the entire file can be read into memory. -B or --auto-buffers By default, when data is read from a pipe, buffers are allocated automatically as needed. If a large amount of data is read from the pipe, this can cause a large amount of memory to be allocated. 
The -B option disables this automatic allocation of buffers for pipes, so that only 64 KB (or the amount of space specified by the -b option) is used for the pipe. Warning: use of -B can result in erroneous display, since only the most recently viewed part of the piped data is kept in memory; any earlier data is lost. -c or --clear-screen Causes full screen repaints to be painted from the top line down. By default, full screen repaints are done by scrolling from the bottom of the screen. -C or --CLEAR-SCREEN Same as -c, for compatibility with older versions of less. -d or --dumb The -d option suppresses the error message normally displayed if the terminal is dumb; that is, lacks some important capability, such as the ability to clear the screen or scroll backward. The -d option does not otherwise change the behavior of less on a dumb terminal. -Dxcolor or --color=xcolor Changes the color of different parts of the displayed text. x is a single character which selects the type of text whose color is being set:
   B  Binary characters.
   C  Control characters.
   E  Errors and informational messages.
   M  Mark letters in the status column.
   N  Line numbers enabled via the -N option.
   P  Prompts.
   R  The rscroll character.
   S  Search results.
   W  The highlight enabled via the -w option.
   d  Bold text.
   k  Blinking text.
   s  Standout text.
   u  Underlined text.
The uppercase letters can be used only when the --use-color option is enabled. When text color is specified by both an uppercase letter and a lowercase letter, the uppercase letter takes precedence. For example, error messages are normally displayed as standout text. So if both "s" and "E" are given a color, the "E" color applies to error messages, and the "s" color applies to other standout text. The "d" and "u" letters refer to bold and underline text formed by overstriking with backspaces (see the -u option), not to text using ANSI escape sequences with the -R option. 
A lowercase letter may be followed by a + to indicate that the normal format change and the specified color should both be used. For example, -Dug displays underlined text as green without underlining; the green color has replaced the usual underline formatting. But -Du+g displays underlined text as both green and in underlined format. color is either a 4-bit color string or an 8-bit color string: A 4-bit color string is zero, one or two characters, where the first character specifies the foreground color and the second specifies the background color as follows:
   b  Blue
   c  Cyan
   g  Green
   k  Black
   m  Magenta
   r  Red
   w  White
   y  Yellow
The corresponding upper-case letter denotes a brighter shade of the color. For example, -DNGk displays line numbers as bright green text on a black background, and -DEbR displays error messages as blue text on a bright red background. If either character is a "-" or is omitted, the corresponding color is set to that of normal text. An 8-bit color string is one or two decimal integers separated by a dot, where the first integer specifies the foreground color and the second specifies the background color. Each integer is a value between 0 and 255 inclusive which selects a "CSI 38;5" color value (see https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters). If either integer is a "-" or is omitted, the corresponding color is set to that of normal text. On MS-DOS versions of less, 8-bit color is not supported; instead, decimal values are interpreted as 4-bit CHAR_INFO.Attributes values (see https://docs.microsoft.com/en-us/windows/console/char-info-str). -e or --quit-at-eof Causes less to automatically exit the second time it reaches end-of-file. By default, the only way to exit less is via the "q" command. -E or --QUIT-AT-EOF Causes less to automatically exit the first time it reaches end-of-file. -f or --force Forces non-regular files to be opened. (A non-regular file is a directory or a device special file.) 
Also suppresses the warning message when a binary file is opened. By default, less will refuse to open non-regular files. Note that some operating systems will not allow directories to be read, even if -f is set. -F or --quit-if-one-screen Causes less to automatically exit if the entire file can be displayed on the first screen. -g or --hilite-search Normally, less will highlight ALL strings which match the last search command. The -g option changes this behavior to highlight only the particular string which was found by the last search command. This can cause less to run somewhat faster than the default. -G or --HILITE-SEARCH The -G option suppresses all highlighting of strings found by search commands. -hn or --max-back-scroll=n Specifies a maximum number of lines to scroll backward. If it is necessary to scroll backward more than n lines, the screen is repainted in a forward direction instead. (If the terminal does not have the ability to scroll backward, -h0 is implied.) -i or --ignore-case Causes searches to ignore case; that is, uppercase and lowercase are considered identical. This option is ignored if any uppercase letters appear in the search pattern; in other words, if a pattern contains uppercase letters, then that search does not ignore case. -I or --IGNORE-CASE Like -i, but searches ignore case even if the pattern contains uppercase letters. -jn or --jump-target=n Specifies a line on the screen where the "target" line is to be positioned. The target line is the line specified by any command to search for a pattern, jump to a line number, jump to a file percentage or jump to a tag. The screen line may be specified by a number: the top line on the screen is 1, the next is 2, and so on. The number may be negative to specify a line relative to the bottom of the screen: the bottom line on the screen is -1, the second to the bottom is -2, and so on. 
Alternately, the screen line may be specified as a fraction of the height of the screen, starting with a decimal point: .5 is in the middle of the screen, .3 is three tenths down from the first line, and so on. If the line is specified as a fraction, the actual line number is recalculated if the terminal window is resized, so that the target line remains at the specified fraction of the screen height. If any form of the -j option is used, repeated forward searches (invoked with "n" or "N") begin at the line immediately after the target line, and repeated backward searches begin at the target line, unless changed by -a or -A. For example, if "-j4" is used, the target line is the fourth line on the screen, so forward searches begin at the fifth line on the screen. However nonrepeated searches (invoked with "/" or "?") always begin at the start or end of the current screen respectively. -J or --status-column Displays a status column at the left edge of the screen. The status column shows the lines that matched the current search, and any lines that are marked (via the m or M command). -kfilename or --lesskey-file=filename Causes less to open and interpret the named file as a lesskey(1) file. Multiple -k options may be specified. If the LESSKEY or LESSKEY_SYSTEM environment variable is set, or if a lesskey file is found in a standard place (see KEY BINDINGS), it is also used as a lesskey file. -K or --quit-on-intr Causes less to exit immediately (with status 2) when an interrupt character (usually ^C) is typed. Normally, an interrupt character causes less to stop whatever it is doing and return to its command prompt. Note that use of this option makes it impossible to return to the command prompt from the "F" command. -L or --no-lessopen Ignore the LESSOPEN environment variable (see the INPUT PREPROCESSOR section below). This option can be set from within less, but it will apply only to files opened subsequently, not to the file which is currently open. 
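As a sketch of the -j forms above (the file name is hypothetical, and the particular values are examples rather than recommendations):

```shell
# Run interactively:
#   less -j.3 notes.txt   # target line 3/10 of the way down the screen;
#                         # the fraction is re-applied on window resize
#   less -j-2 notes.txt   # target is the second line from the bottom

# Or make a fractional target the default for every invocation:
LESS='-j.3'
export LESS
```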
-m or --long-prompt Causes less to prompt verbosely (like more), with the percent into the file. By default, less prompts with a colon. -M or --LONG-PROMPT Causes less to prompt even more verbosely than more. -n or --line-numbers Suppresses line numbers. The default (to use line numbers) may cause less to run more slowly in some cases, especially with a very large input file. Suppressing line numbers with the -n option will avoid this problem. Using line numbers means: the line number will be displayed in the verbose prompt and in the = command, and the v command will pass the current line number to the editor (see also the discussion of LESSEDIT in PROMPTS below). -N or --LINE-NUMBERS Causes a line number to be displayed at the beginning of each line in the display. -ofilename or --log-file=filename Causes less to copy its input to the named file as it is being viewed. This applies only when the input file is a pipe, not an ordinary file. If the file already exists, less will ask for confirmation before overwriting it. -Ofilename or --LOG-FILE=filename The -O option is like -o, but it will overwrite an existing file without asking for confirmation. If no log file has been specified, the -o and -O options can be used from within less to specify a log file. Without a file name, they will simply report the name of the log file. The "s" command is equivalent to specifying -o from within less. -ppattern or --pattern=pattern The -p option on the command line is equivalent to specifying +/pattern; that is, it tells less to start at the first occurrence of pattern in the file. -Pprompt or --prompt=prompt Provides a way to tailor the three prompt styles to your own preference. This option would normally be put in the LESS environment variable, rather than being typed in with each less command. Such an option must either be the last option in the LESS variable, or be terminated by a dollar sign. -Ps followed by a string changes the default (short) prompt to that string. 
-Pm changes the medium (-m) prompt. -PM changes the long (-M) prompt. -Ph changes the prompt for the help screen. -P= changes the message printed by the = command. -Pw changes the message printed while waiting for data (in the F command). All prompt strings consist of a sequence of letters and special escape sequences. See the section on PROMPTS for more details. -q or --quiet or --silent Causes moderately "quiet" operation: the terminal bell is not rung if an attempt is made to scroll past the end of the file or before the beginning of the file. If the terminal has a "visual bell", it is used instead. The bell will be rung on certain other errors, such as typing an invalid character. The default is to ring the terminal bell in all such cases. -Q or --QUIET or --SILENT Causes totally "quiet" operation: the terminal bell is never rung. If the terminal has a "visual bell", it is used in all cases where the terminal bell would have been rung. -r or --raw-control-chars Causes "raw" control characters to be displayed. The default is to display control characters using the caret notation; for example, a control-A (octal 001) is displayed as "^A". Warning: when the -r option is used, less cannot keep track of the actual appearance of the screen (since this depends on how the screen responds to each type of control character). Thus, various display problems may result, such as long lines being split in the wrong place. USE OF THE -r OPTION IS NOT RECOMMENDED. -R or --RAW-CONTROL-CHARS Like -r, but only ANSI "color" escape sequences and OSC 8 hyperlink sequences are output in "raw" form. Unlike -r, the screen appearance is maintained correctly, provided that there are no escape sequences in the file other than these types of escape sequences. Color escape sequences are only supported when the color is changed within one line, not across lines. 
In other words, the beginning of each line is assumed to be normal (non-colored), regardless of any escape sequences in previous lines. For the purpose of keeping track of screen appearance, these escape sequences are assumed to not move the cursor. OSC 8 hyperlinks are sequences of the form: ESC ] 8 ; ... \7 The terminating sequence may be either a BEL character (\7) or the two-character sequence "ESC \". ANSI color escape sequences are sequences of the form: ESC [ ... m where the "..." is zero or more color specification characters. You can make less think that characters other than "m" can end ANSI color escape sequences by setting the environment variable LESSANSIENDCHARS to the list of characters which can end a color escape sequence. And you can make less think that characters other than the standard ones may appear between the ESC and the m by setting the environment variable LESSANSIMIDCHARS to the list of characters which can appear. -s or --squeeze-blank-lines Causes consecutive blank lines to be squeezed into a single blank line. This is useful when viewing nroff output. -S or --chop-long-lines Causes lines longer than the screen width to be chopped (truncated) rather than wrapped. That is, the portion of a long line that does not fit in the screen width is not displayed until you press RIGHT-ARROW. The default is to wrap long lines; that is, display the remainder on the next line. -ttag or --tag=tag The -t option, followed immediately by a TAG, will edit the file containing that tag. For this to work, tag information must be available; for example, there may be a file in the current directory called "tags", which was previously built by ctags(1) or an equivalent command. If the environment variable LESSGLOBALTAGS is set, it is taken to be the name of a command compatible with global(1), and that command is executed to find the tag. (See http://www.gnu.org/software/global/global.html). 
The -t option may also be specified from within less (using the - command) as a way of examining a new file. The command ":t" is equivalent to specifying -t from within less. -Ttagsfile or --tag-file=tagsfile Specifies a tags file to be used instead of "tags". -u or --underline-special Causes backspaces and carriage returns to be treated as printable characters; that is, they are sent to the terminal when they appear in the input. -U or --UNDERLINE-SPECIAL Causes backspaces, tabs, carriage returns and "formatting characters" (as defined by Unicode) to be treated as control characters; that is, they are handled as specified by the -r option. By default, if neither -u nor -U is given, backspaces which appear adjacent to an underscore character are treated specially: the underlined text is displayed using the terminal's hardware underlining capability. Also, backspaces which appear between two identical characters are treated specially: the overstruck text is printed using the terminal's hardware boldface capability. Other backspaces are deleted, along with the preceding character. Carriage returns immediately followed by a newline are deleted. Other carriage returns are handled as specified by the -r option. Unicode formatting characters, such as the Byte Order Mark, are sent to the terminal. Text which is overstruck or underlined can be searched for if neither -u nor -U is in effect. -V or --version Displays the version number of less. -w or --hilite-unread Temporarily highlights the first "new" line after a forward movement of a full page. The first "new" line is the line immediately following the line previously at the bottom of the screen. Also highlights the target line after a g or p command. The highlight is removed at the next command which causes movement. The entire line is highlighted, unless the -J option is in effect, in which case only the status column is highlighted. 
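The -R option described above is the usual way to page colored program output; a sketch (the producing commands are just examples):

```shell
# A file containing a raw ANSI color escape sequence:
printf '\033[31mred\033[0m plain\n' > /tmp/ansi.txt

# Run interactively; with -R the word "red" is shown in color rather
# than as caret-notation control characters:
#   less -R /tmp/ansi.txt
# The same idea works for pipelines from color-capable tools, e.g.:
#   grep --color=always -n main *.c | less -R
```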
-W or --HILITE-UNREAD Like -w, but temporarily highlights the first new line after any forward movement command larger than one line. -xn,... or --tabs=n,... Sets tab stops. If only one n is specified, tab stops are set at multiples of n. If multiple values separated by commas are specified, tab stops are set at those positions, and then continue with the same spacing as the last two. For example, -x9,17 will set tabs at positions 9, 17, 25, 33, etc. The default for n is 8. -X or --no-init Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen. -yn or --max-forw-scroll=n Specifies a maximum number of lines to scroll forward. If it is necessary to scroll forward more than n lines, the screen is repainted instead. The -c or -C option may be used to repaint from the top of the screen if desired. By default, any forward movement causes scrolling. -zn or --window=n or -n Changes the default scrolling window size to n lines. The default is one screenful. The z and w commands can also be used to change the window size. The "z" may be omitted for compatibility with some versions of more. If the number n is negative, it indicates n lines less than the current screen size. For example, if the screen is 24 lines, -z-4 sets the scrolling window to 20 lines. If the screen is resized to 40 lines, the scrolling window automatically changes to 36 lines. -"cc or --quotes=cc Changes the filename quoting character. This may be necessary if you are trying to name a file which contains both spaces and quote characters. Followed by a single character, this changes the quote character to that character. Filenames containing a space should then be surrounded by that character rather than by double quotes. Followed by two characters, changes the open quote to the first character, and the close quote to the second character. 
Filenames containing a space should then be preceded by the open quote character and followed by the close quote character. Note that even after the quote characters are changed, this option remains -" (a dash followed by a double quote). -~ or --tilde Normally lines after end of file are displayed as a single tilde (~). This option causes lines after end of file to be displayed as blank lines. -# or --shift Specifies the default number of positions to scroll horizontally in the RIGHTARROW and LEFTARROW commands. If the number specified is zero, it sets the default number of positions to one half of the screen width. Alternately, the number may be specified as a fraction of the width of the screen, starting with a decimal point: .5 is half of the screen width, .3 is three tenths of the screen width, and so on. If the number is specified as a fraction, the actual number of scroll positions is recalculated if the terminal window is resized, so that the actual scroll remains at the specified fraction of the screen width. --follow-name Normally, if the input file is renamed while an F command is executing, less will continue to display the contents of the original file despite its name change. If --follow-name is specified, during an F command less will periodically attempt to reopen the file by name. If the reopen succeeds and the file is a different file from the original (which means that a new file has been created with the same name as the original (now renamed) file), less will display the contents of that new file. --incsearch Subsequent search commands will be "incremental"; that is, less will advance to the next line containing the search pattern as each character of the pattern is typed in. --line-num-width Sets the minimum width of the line number field when the -N option is in effect. The default is 7 characters. 
--mouse Enables mouse input: scrolling the mouse wheel down moves forward in the file, scrolling the mouse wheel up moves backwards in the file, and clicking the mouse sets the "#" mark to the line where the mouse is clicked. The number of lines to scroll when the wheel is moved can be set by the --wheel-lines option. Mouse input works only on terminals which support X11 mouse reporting, and on the Windows version of less. --MOUSE Like --mouse, except the direction scrolled on mouse wheel movement is reversed. --no-keypad Disables sending the keypad initialization and deinitialization strings to the terminal. This is sometimes useful if the keypad strings make the numeric keypad behave in an undesirable manner. --no-histdups This option changes the behavior so that if a search string or file name is typed in, and the same string is already in the history list, the existing copy is removed from the history list before the new one is added. Thus, a given string will appear only once in the history list. Normally, a string may appear multiple times. --rscroll This option changes the character used to mark truncated lines. It may begin with a two-character attribute indicator like LESSBINFMT does. If there is no attribute indicator, standout is used. If set to "-", truncated lines are not marked. --save-marks Save marks in the history file, so marks are retained across different invocations of less. --status-col-width Sets the width of the status column when the -J option is in effect. The default is 2 characters. --use-backslash This option changes the interpretations of options which follow this one. After the --use-backslash option, any backslash in an option string is removed and the following character is taken literally. This allows a dollar sign to be included in option strings. --use-color Enables the colored text in various places. The -D option can be used to change the colors. 
Colored text works only if the terminal supports ANSI color escape sequences (as defined in ECMA-48 SGR; see https://www.ecma-international.org/publications-and- standards/standards/ecma-48). --wheel-lines=n Set the number of lines to scroll when the mouse wheel is scrolled and the --mouse or --MOUSE option is in effect. The default is 1 line. -- A command line argument of "--" marks the end of option arguments. Any arguments following this are interpreted as filenames. This can be useful when viewing a file whose name begins with a "-" or "+". + If a command line option begins with +, the remainder of that option is taken to be an initial command to less. For example, +G tells less to start at the end of the file rather than the beginning, and +/xyz tells it to start at the first occurrence of "xyz" in the file. As a special case, +<number> acts like +<number>g; that is, it starts the display at the specified line number (however, see the caveat under the "g" command above). If the option starts with ++, the initial command applies to every file being viewed, not just the first one. The + command described previously may also be used to set (or change) an initial command for every file. LINE EDITING When entering a command line at the bottom of the screen (for example, a filename for the :e command, or the pattern for a search command), certain keys can be used to manipulate the command line. Most commands have an alternate form in [ brackets ] which can be used if a key does not exist on a particular keyboard. (Note that the forms beginning with ESC do not work in some MS-DOS and Windows systems because ESC is the line erase character.) Any of these special keys may be entered literally by preceding it with the "literal" character, either ^V or ^A. A backslash itself may also be entered literally by entering two backslashes. LEFTARROW [ ESC-h ] Move the cursor one space to the left. RIGHTARROW [ ESC-l ] Move the cursor one space to the right. 
^LEFTARROW [ ESC-b or ESC-LEFTARROW ] (That is, CONTROL and LEFTARROW simultaneously.) Move the cursor one word to the left.
^RIGHTARROW [ ESC-w or ESC-RIGHTARROW ] (That is, CONTROL and RIGHTARROW simultaneously.) Move the cursor one word to the right.
HOME [ ESC-0 ] Move the cursor to the beginning of the line.
END [ ESC-$ ] Move the cursor to the end of the line.
BACKSPACE Delete the character to the left of the cursor, or cancel the command if the command line is empty.
DELETE or [ ESC-x ] Delete the character under the cursor.
^BACKSPACE [ ESC-BACKSPACE ] (That is, CONTROL and BACKSPACE simultaneously.) Delete the word to the left of the cursor.
^DELETE [ ESC-X or ESC-DELETE ] (That is, CONTROL and DELETE simultaneously.) Delete the word under the cursor.
UPARROW [ ESC-k ] Retrieve the previous command line. If you first enter some text and then press UPARROW, it will retrieve the previous command which begins with that text.
DOWNARROW [ ESC-j ] Retrieve the next command line. If you first enter some text and then press DOWNARROW, it will retrieve the next command which begins with that text.
TAB Complete the partial filename to the left of the cursor. If it matches more than one filename, the first match is entered into the command line. Repeated TABs will cycle thru the other matching filenames. If the completed filename is a directory, a "/" is appended to the filename. (On MS-DOS systems, a "\" is appended.) The environment variable LESSSEPARATOR can be used to specify a different character to append to a directory name.
BACKTAB [ ESC-TAB ] Like TAB, but cycles in the reverse direction thru the matching filenames.
^L Complete the partial filename to the left of the cursor. If it matches more than one filename, all matches are entered into the command line (if they fit).
^U (Unix and OS/2) or ESC (MS-DOS) Delete the entire command line, or cancel the command if the command line is empty. 
If you have changed your line-kill character in Unix to something other than ^U, that character is used instead of ^U.
^G Delete the entire command line and return to the main prompt.

KEY BINDINGS
You may define your own less commands by using the program lesskey(1) to create a lesskey file. This file specifies a set of command keys and an action associated with each key. You may also use lesskey to change the line-editing keys (see LINE EDITING), and to set environment variables. If the environment variable LESSKEY is set, less uses that as the name of the lesskey file. Otherwise, less looks in a standard place for the lesskey file: On Unix systems, less looks for a lesskey file called "$HOME/.less". On MS-DOS and Windows systems, less looks for a lesskey file called "$HOME/_less", and if it is not found there, then looks for a lesskey file called "_less" in any directory specified in the PATH environment variable. On OS/2 systems, less looks for a lesskey file called "$HOME/less.ini", and if it is not found, then looks for a lesskey file called "less.ini" in any directory specified in the INIT environment variable, and if it is not found there, then looks for a lesskey file called "less.ini" in any directory specified in the PATH environment variable. See the lesskey manual page for more details. A system-wide lesskey file may also be set up to provide key bindings. If a key is defined in both a local lesskey file and in the system-wide file, key bindings in the local file take precedence over those in the system-wide file. If the environment variable LESSKEY_SYSTEM is set, less uses that as the name of the system-wide lesskey file. Otherwise, less looks in a standard place for the system-wide lesskey file: On Unix systems, the system-wide lesskey file is /usr/local/etc/sysless. (However, if less was built with a different sysconf directory than /usr/local/etc, that directory is where the sysless file is found.) 
On MS-DOS and Windows systems, the system-wide lesskey file is c:\_sysless. On OS/2 systems, the system-wide lesskey file is c:\sysless.ini. INPUT PREPROCESSOR You may define an "input preprocessor" for less. Before less opens a file, it first gives your input preprocessor a chance to modify the way the contents of the file are displayed. An input preprocessor is simply an executable program (or shell script), which writes the contents of the file to a different file, called the replacement file. The contents of the replacement file are then displayed in place of the contents of the original file. However, it will appear to the user as if the original file is opened; that is, less will display the original filename as the name of the current file. An input preprocessor receives one command line argument, the original filename, as entered by the user. It should create the replacement file, and when finished, print the name of the replacement file to its standard output. If the input preprocessor does not output a replacement filename, less uses the original file, as normal. The input preprocessor is not called when viewing standard input. To set up an input preprocessor, set the LESSOPEN environment variable to a command line which will invoke your input preprocessor. This command line should include one occurrence of the string "%s", which will be replaced by the filename when the input preprocessor command is invoked. When less closes a file opened in such a way, it will call another program, called the input postprocessor, which may perform any desired clean-up action (such as deleting the replacement file created by LESSOPEN). This program receives two command line arguments, the original filename as entered by the user, and the name of the replacement file. To set up an input postprocessor, set the LESSCLOSE environment variable to a command line which will invoke your input postprocessor. 
It may include two occurrences of the string "%s"; the first is replaced with the original name of the file and the second with the name of the replacement file, which was output by LESSOPEN. For example, on many Unix systems, these two scripts will allow you to keep files in compressed format, but still let less view them directly (filenames are quoted so that names containing spaces are handled correctly):

lessopen.sh:
    #! /bin/sh
    case "$1" in
    *.Z)    TEMPFILE=$(mktemp)
            uncompress -c "$1" >"$TEMPFILE" 2>/dev/null
            if [ -s "$TEMPFILE" ]; then
                echo "$TEMPFILE"
            else
                rm -f "$TEMPFILE"
            fi
            ;;
    esac

lessclose.sh:
    #! /bin/sh
    rm "$2"

To use these scripts, put them both where they can be executed and set LESSOPEN="lessopen.sh %s", and LESSCLOSE="lessclose.sh %s %s". More complex LESSOPEN and LESSCLOSE scripts may be written to accept other types of compressed files, and so on.

It is also possible to set up an input preprocessor to pipe the file data directly to less, rather than putting the data into a replacement file. This avoids the need to decompress the entire file before starting to view it. An input preprocessor that works this way is called an input pipe. An input pipe, instead of writing the name of a replacement file on its standard output, writes the entire contents of the replacement file on its standard output. If the input pipe does not write any characters on its standard output, then there is no replacement file and less uses the original file, as normal. To use an input pipe, make the first character in the LESSOPEN environment variable a vertical bar (|) to signify that the input preprocessor is an input pipe. As with non-pipe input preprocessors, the command string must contain one occurrence of %s, which is replaced with the filename of the input file. For example, on many Unix systems, this script will work like the previous example scripts:

lesspipe.sh:
    #! /bin/sh
    case "$1" in
    *.Z)    uncompress -c "$1" 2>/dev/null
            ;;
    *)      exit 1
            ;;
    esac
    exit $?

To use this script, put it where it can be executed and set LESSOPEN="|lesspipe.sh %s". 
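The same input-pipe idea works for other compressors. The sketch below is a hypothetical helper (the name lesspipe_gz and the .gz handling are this sketch's choice, not part of the less distribution), assuming gzip(1) is installed. It is written as a shell function so the matching logic is easy to exercise, and it quotes "$1" so filenames containing spaces survive:

```shell
#! /bin/sh
# Hypothetical input-pipe helper modeled on the lesspipe.sh example.
# Writes the decompressed contents of a .gz file to standard output;
# returns nonzero for any other name, so less falls back to showing
# the original file unmodified.
lesspipe_gz() {
    case "$1" in
        *.gz) gzip -dc "$1" 2>/dev/null ;;
        *)    return 1 ;;
    esac
}
```

Installed as a standalone script (with exit in place of return), it would be wired up the same way as lesspipe.sh, e.g. LESSOPEN="|lesspipe_gz.sh %s".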
Note that a preprocessor cannot output an empty file, since that is interpreted as meaning there is no replacement, and the original file is used. To avoid this, if LESSOPEN starts with two vertical bars, the exit status of the script becomes meaningful. If the exit status is zero, the output is considered to be replacement text, even if it is empty. If the exit status is nonzero, any output is ignored and the original file is used. For compatibility with previous versions of less, if LESSOPEN starts with only one vertical bar, the exit status of the preprocessor is ignored. When an input pipe is used, a LESSCLOSE postprocessor can be used, but it is usually not necessary since there is no replacement file to clean up. In this case, the replacement file name passed to the LESSCLOSE postprocessor is "-". For compatibility with previous versions of less, the input preprocessor or pipe is not used if less is viewing standard input. However, if the first character of LESSOPEN is a dash (-), the input preprocessor is used on standard input as well as other files. In this case, the dash is not considered to be part of the preprocessor command. If standard input is being viewed, the input preprocessor is passed a file name consisting of a single dash. Similarly, if the first two characters of LESSOPEN are vertical bar and dash (|-) or two vertical bars and a dash (||-), the input pipe is used on standard input as well as other files. Again, in this case the dash is not considered to be part of the input pipe command. NATIONAL CHARACTER SETS There are three types of characters in the input file: normal characters can be displayed directly to the screen. control characters should not be displayed directly, but are expected to be found in ordinary text files (such as backspace and tab). binary characters should not be displayed directly and are not expected to be found in text files. 
A "character set" is simply a description of which characters are to be considered normal, control, and binary. The LESSCHARSET environment variable may be used to select a character set. Possible values for LESSCHARSET are:

ascii      BS, TAB, NL, CR, and formfeed are control characters, all chars with values between 32 and 126 are normal, and all others are binary.
iso8859    Selects an ISO 8859 character set. This is the same as ASCII, except characters between 160 and 255 are treated as normal characters.
latin1     Same as iso8859.
latin9     Same as iso8859.
dos        Selects a character set appropriate for MS-DOS.
ebcdic     Selects an EBCDIC character set.
IBM-1047   Selects an EBCDIC character set used by OS/390 Unix Services. This is the EBCDIC analogue of latin1. You get similar results by setting either LESSCHARSET=IBM-1047 or LC_CTYPE=en_US in your environment.
koi8-r     Selects a Russian character set.
next       Selects a character set appropriate for NeXT computers.
utf-8      Selects the UTF-8 encoding of the ISO 10646 character set. UTF-8 is special in that it supports multi-byte characters in the input file. It is the only character set that supports multi-byte characters.
windows    Selects a character set appropriate for Microsoft Windows (cp 1251).

In rare cases, it may be desired to tailor less to use a character set other than the ones definable by LESSCHARSET. In this case, the environment variable LESSCHARDEF can be used to define a character set. It should be set to a string where each character in the string represents one character in the character set. The character "." is used for a normal character, "c" for control, and "b" for binary. A decimal number may be used for repetition. For example, "bccc4b." would mean character 0 is binary, 1, 2 and 3 are control, 4, 5, 6 and 7 are binary, and 8 is normal. All characters after the last are taken to be the same as the last, so characters 9 through 255 would be normal. 
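The repetition syntax can be expanded mechanically. The helper below is a hypothetical sketch (expand_chardef is not part of less); it turns a LESSCHARDEF-style string into one classification letter per character position, so "bccc4b." becomes "bcccbbbb.":

```shell
#! /bin/sh
# expand_chardef: expand a LESSCHARDEF-style string, treating a run
# of digits as a repeat count for the following classification letter
# ("." normal, "c" control, "b" binary).
expand_chardef() {
    def=$1 out= count=
    while [ -n "$def" ]; do
        c=${def%"${def#?}"}            # first character of $def
        def=${def#?}                   # remainder of $def
        case $c in
            [0-9]) count=$count$c ;;   # accumulate a repeat count
            *) out=$out$(printf "%${count:-1}s" | tr ' ' "$c")
               count= ;;
        esac
    done
    printf '%s\n' "$out"
}

expand_chardef "bccc4b."   # one letter per character position 0..8
```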
(This is an example, and does not necessarily represent any real character set.) This table shows the value of LESSCHARDEF which is equivalent to each of the possible values for LESSCHARSET:

    ascii     8bcccbcc18b95.b
    dos       8bcccbcc12bc5b95.b.
    ebcdic    5bc6bcc7bcc41b.9b7.9b5.b..8b6.10b6.b9.7b9.8b8.17b3.3b9.7b9.8b8.6b10.b.b.b.
    IBM-1047  4cbcbc3b9cbccbccbb4c6bcc5b3cbbc4bc4bccbc191.b
    iso8859   8bcccbcc18b95.33b.
    koi8-r    8bcccbcc18b95.b128.
    latin1    8bcccbcc18b95.33b.
    next      8bcccbcc18b95.bb125.bb

If neither LESSCHARSET nor LESSCHARDEF is set, but any of the strings "UTF-8", "UTF8", "utf-8" or "utf8" is found in the LC_ALL, LC_CTYPE or LANG environment variables, then the default character set is utf-8. If that string is not found, but your system supports the setlocale interface, less will use setlocale to determine the character set. setlocale is controlled by setting the LANG or LC_CTYPE environment variables. Finally, if the setlocale interface is also not available, the default character set is latin1. Control and binary characters are displayed in standout (reverse video). Each such character is displayed in caret notation if possible (e.g. ^A for control-A). Caret notation is used only if inverting the 0100 bit results in a normal printable character. Otherwise, the character is displayed as a hex number in angle brackets. This format can be changed by setting the LESSBINFMT environment variable. LESSBINFMT may begin with a "*" and one character to select the display attribute: "*k" is blinking, "*d" is bold, "*u" is underlined, "*s" is standout, and "*n" is normal. If LESSBINFMT does not begin with a "*", normal attribute is assumed. The remainder of LESSBINFMT is a string which may include one printf-style escape sequence (a % followed by x, X, o, d, etc.). For example, if LESSBINFMT is "*u[%x]", binary characters are displayed in underlined hexadecimal surrounded by brackets. The default if no LESSBINFMT is specified is "*s<%02X>". 
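As a concrete configuration, the shell fragment below applies the "*u[%x]" format discussed above; the choice of format is purely illustrative:

```shell
# Display binary characters underlined, as bracketed lowercase hex,
# e.g. a 0x01 byte renders as [1] (underlined) instead of <01>.
LESSBINFMT='*u[%x]'
export LESSBINFMT
```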
Warning: the result of expanding the character via LESSBINFMT must be less than 31 characters. When the character set is utf-8, the LESSUTFBINFMT environment variable acts similarly to LESSBINFMT but it applies to Unicode code points that were successfully decoded but are unsuitable for display (e.g., unassigned code points). Its default value is "<U+%04lX>". Note that LESSUTFBINFMT and LESSBINFMT share their display attribute setting ("*x") so specifying one will affect both; LESSUTFBINFMT is read after LESSBINFMT so its setting, if any, will have priority. Problematic octets in a UTF-8 file (octets of a truncated sequence, octets of a complete but non-shortest form sequence, invalid octets, and stray trailing octets) are displayed individually using LESSBINFMT so as to facilitate diagnostic of how the UTF-8 file is ill-formed. PROMPTS The -P option allows you to tailor the prompt to your preference. The string given to the -P option replaces the specified prompt string. Certain characters in the string are interpreted specially. The prompt mechanism is rather complicated to provide flexibility, but the ordinary user need not understand the details of constructing personalized prompt strings. A percent sign followed by a single character is expanded according to what the following character is: %bX Replaced by the byte offset into the current input file. The b is followed by a single character (shown as X above) which specifies the line whose byte offset is to be used. If the character is a "t", the byte offset of the top line in the display is used, an "m" means use the middle line, a "b" means use the bottom line, a "B" means use the line just after the bottom line, and a "j" means use the "target" line, as specified by the -j option. %B Replaced by the size of the current input file. %c Replaced by the column number of the text appearing in the first column of the screen. %dX Replaced by the page number of a line in the input file. 
The line to be used is determined by the X, as with the %b option. %D Replaced by the number of pages in the input file, or equivalently, the page number of the last line in the input file. %E Replaced by the name of the editor (from the VISUAL environment variable, or the EDITOR environment variable if VISUAL is not defined). See the discussion of the LESSEDIT feature below. %f Replaced by the name of the current input file. %F Replaced by the last component of the name of the current input file. %g Replaced by the shell-escaped name of the current input file. This is useful when the expanded string will be used in a shell command, such as in LESSEDIT. %i Replaced by the index of the current file in the list of input files. %lX Replaced by the line number of a line in the input file. The line to be used is determined by the X, as with the %b option. %L Replaced by the line number of the last line in the input file. %m Replaced by the total number of input files. %pX Replaced by the percent into the current input file, based on byte offsets. The line used is determined by the X as with the %b option. %PX Replaced by the percent into the current input file, based on line numbers. The line used is determined by the X as with the %b option. %s Same as %B. %t Causes any trailing spaces to be removed. Usually used at the end of the string, but may appear anywhere. %T Normally expands to the word "file". However if viewing files via a tags list using the -t option, it expands to the word "tag". %x Replaced by the name of the next input file in the list. If any item is unknown (for example, the file size if input is a pipe), a question mark is printed instead. The format of the prompt string can be changed depending on certain conditions. A question mark followed by a single character acts like an "IF": depending on the following character, a condition is evaluated. 
If the condition is true, any characters following the question mark and condition character, up to a period, are included in the prompt. If the condition is false, such characters are not included. A colon appearing between the question mark and the period can be used to establish an "ELSE": any characters between the colon and the period are included in the string if and only if the IF condition is false. Condition characters (which follow a question mark) may be: ?a True if any characters have been included in the prompt so far. ?bX True if the byte offset of the specified line is known. ?B True if the size of current input file is known. ?c True if the text is horizontally shifted (%c is not zero). ?dX True if the page number of the specified line is known. ?e True if at end-of-file. ?f True if there is an input filename (that is, if input is not a pipe). ?lX True if the line number of the specified line is known. ?L True if the line number of the last line in the file is known. ?m True if there is more than one input file. ?n True if this is the first prompt in a new input file. ?pX True if the percent into the current input file, based on byte offsets, of the specified line is known. ?PX True if the percent into the current input file, based on line numbers, of the specified line is known. ?s Same as "?B". ?x True if there is a next input file (that is, if the current input file is not the last one). Any characters other than the special ones (question mark, colon, period, percent, and backslash) become literally part of the prompt. Any of the special characters may be included in the prompt literally by preceding it with a backslash. Some examples: ?f%f:Standard input. This prompt prints the filename, if known; otherwise the string "Standard input". ?f%f .?ltLine %lt:?pt%pt\%:?btByte %bt:-... This prompt would print the filename, if known. 
The filename is followed by the line number, if known, otherwise the percent if known, otherwise the byte offset if known. Otherwise, a dash is printed. Notice how each question mark has a matching period, and how the % after the %pt is included literally by escaping it with a backslash.

    ?n?f%f .?m(%T %i of %m) ..?e(END) ?x- Next\: %x..%t

This prints the filename if this is the first prompt in a file, followed by the "file N of N" message if there is more than one input file. Then, if we are at end-of-file, the string "(END)" is printed followed by the name of the next file, if there is one. Finally, any trailing spaces are truncated. This is the default prompt. For reference, here are the defaults for the other two prompts (-m and -M respectively). Each is broken into two lines here for readability only.

    ?n?f%f .?m(%T %i of %m) ..?e(END) ?x- Next\: %x.:
        ?pB%pB\%:byte %bB?s/%s...%t

    ?f%f .?n?m(%T %i of %m) ..?ltlines %lt-%lb?L/%L. :
        byte %bB?s/%s. .?e(END) ?x- Next\: %x.:?pB%pB\%..%t

And here is the default message produced by the = command:

    ?f%f .?m(%T %i of %m) .?ltlines %lt-%lb?L/%L. .
        byte %bB?s/%s. ?e(END) :?pB%pB\%..%t

The prompt expansion features are also used for another purpose: if an environment variable LESSEDIT is defined, it is used as the command to be executed when the v command is invoked. The LESSEDIT string is expanded in the same way as the prompt strings. The default value for LESSEDIT is:

    %E ?lm+%lm. %g

Note that this expands to the editor name, followed by a + and the line number, followed by the shell-escaped file name. If your editor does not accept the "+linenumber" syntax, or has other differences in invocation syntax, the LESSEDIT variable can be changed to modify this default.

SECURITY
When the environment variable LESSSECURE is set to 1, less runs in a "secure" mode. This means these features are disabled: ! the shell command | the pipe command :e the examine command. 
v the editing command s -o log files -k use of lesskey files -t use of tags files metacharacters in filenames, such as * filename completion (TAB, ^L) Less can also be compiled to be permanently in "secure" mode. COMPATIBILITY WITH MORE If the environment variable LESS_IS_MORE is set to 1, or if the program is invoked via a file link named "more", less behaves (mostly) in conformance with the POSIX "more" command specification. In this mode, less behaves differently in these ways: The -e option works differently. If the -e option is not set, less behaves as if the -e option were set. If the -e option is set, less behaves as if the -E option were set. The -m option works differently. If the -m option is not set, the medium prompt is used, and it is prefixed with the string "--More--". If the -m option is set, the short prompt is used. The -n option acts like the -z option. The normal behavior of the -n option is unavailable in this mode. The parameter to the -p option is taken to be a less command rather than a search pattern. The LESS environment variable is ignored, and the MORE environment variable is used in its place. ENVIRONMENT VARIABLES Environment variables may be specified either in the system environment as usual, or in a lesskey(1) file. If environment variables are defined in more than one place, variables defined in a local lesskey file take precedence over variables defined in the system environment, which take precedence over variables defined in the system-wide lesskey file. COLUMNS Sets the number of columns on the screen. Takes precedence over the number of columns specified by the TERM variable. (But if you have a windowing system which supports TIOCGWINSZ or WIOCGETD, the window system's idea of the screen size takes precedence over the LINES and COLUMNS environment variables.) EDITOR The name of the editor (used for the v command). HOME Name of the user's home directory (used to find a lesskey file on Unix and OS/2 systems). 
HOMEDRIVE, HOMEPATH Concatenation of the HOMEDRIVE and HOMEPATH environment variables is the name of the user's home directory if the HOME variable is not set (only in the Windows version).
INIT Name of the user's init directory (used to find a lesskey file on OS/2 systems).
LANG Language for determining the character set.
LC_CTYPE Language for determining the character set.
LESS Options which are passed to less automatically.
LESSANSIENDCHARS Characters which may end an ANSI color escape sequence (default "m").
LESSANSIMIDCHARS Characters which may appear between the ESC character and the end character in an ANSI color escape sequence (default "0123456789:;[?!"'#%()*+ ").
LESSBINFMT Format for displaying non-printable, non-control characters.
LESSCHARDEF Defines a character set.
LESSCHARSET Selects a predefined character set.
LESSCLOSE Command line to invoke the (optional) input-postprocessor.
LESSECHO Name of the lessecho program (default "lessecho"). The lessecho program is needed to expand metacharacters, such as * and ?, in filenames on Unix systems.
LESSEDIT Editor prototype string (used for the v command). See discussion under PROMPTS.
LESSGLOBALTAGS Name of the command used by the -t option to find global tags. Normally should be set to "global" if your system has the global(1) command. If not set, global tags are not used.
LESSHISTFILE Name of the history file used to remember search commands and shell commands between invocations of less. If set to "-" or "/dev/null", a history file is not used. The default is "$HOME/.lesshst" on Unix systems, "$HOME/_lesshst" on DOS and Windows systems, or "$HOME/lesshst.ini" or "$INIT/lesshst.ini" on OS/2 systems.
LESSHISTSIZE The maximum number of commands to save in the history file. The default is 100.
LESSKEY Name of the default lesskey(1) file.
LESSKEY_SYSTEM Name of the default system-wide lesskey(1) file.
LESSMETACHARS List of characters which are considered "metacharacters" by the shell. 
LESSMETAESCAPE Prefix which less will add before each metacharacter in a command sent to the shell. If LESSMETAESCAPE is an empty string, commands containing metacharacters will not be passed to the shell. LESSOPEN Command line to invoke the (optional) input-preprocessor. LESSSECURE Runs less in "secure" mode. See discussion under SECURITY. LESSSEPARATOR String to be appended to a directory name in filename completion. LESSUTFBINFMT Format for displaying non-printable Unicode code points. LESS_IS_MORE Emulate the more(1) command. LINES Sets the number of lines on the screen. Takes precedence over the number of lines specified by the TERM variable. (But if you have a windowing system which supports TIOCGWINSZ or WIOCGETD, the window system's idea of the screen size takes precedence over the LINES and COLUMNS environment variables.) MORE Options which are passed to less automatically when running in more compatible mode. PATH User's search path (used to find a lesskey file on MS-DOS and OS/2 systems). SHELL The shell used to execute the ! command, as well as to expand filenames. TERM The type of terminal on which less is being run. VISUAL The name of the editor (used for the v command). COPYRIGHT Copyright (C) 1984-2021 Mark Nudelman less is part of the GNU project and is free software. You can redistribute it and/or modify it under the terms of either (1) the GNU General Public License as published by the Free Software Foundation; or (2) the Less License. See the file README in the less distribution for more details regarding redistribution. You should have received a copy of the GNU General Public License along with the source for less; see the file COPYING. If not, write to the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. You should also have received a copy of the Less License; see the file LICENSE. 
less is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. AUTHOR Mark Nudelman Report bugs at https://github.com/gwsw/less/issues. For more information, see the less homepage at https://greenwoodsoftware.com/less. Version 581.2: 28 Apr 2021 LESS(1)
ldapwhoami
ldapwhoami implements the LDAP "Who Am I?" extended operation. ldapwhoami opens a connection to an LDAP server, binds, and performs a whoami operation.
ldapwhoami - LDAP who am i? tool
ldapwhoami [-n] [-v] [-z] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]]
-n Show what would be done, but don't actually perform the whoami operation. Useful for debugging in conjunction with -v. -v Run in verbose mode, with many diagnostics written to standard output. -d debuglevel Set the LDAP debugging level to debuglevel. ldapwhoami must be compiled with LDAP_DEBUG defined for this option to have any effect. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password for simple authentication. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -H ldapuri Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected. -h ldaphost Specify an alternate host on which the ldap server is running. Deprecated in favor of -H. -p ldapport Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H. -O security-properties Specify SASL security properties. -e [!]ext[=extparam] -E [!]ext[=extparam] Specify general extensions with -e and search extensions with -E. '!' indicates criticality. General extensions: [!]assert=<filter> (an RFC 4515 Filter) [!]authzid=<authzid> ("dn:<dn>" or "u:<user>") [!]manageDSAit [!]noop ppolicy [!]postread[=<attrs>] (a comma-separated attribute list) [!]preread[=<attrs>] (a comma-separated attribute list) abandon, cancel (SIGINT sends abandon/cancel; not really controls) Search extensions: [!]domainScope (domain scope) [!]mv=<filter> (matched values filter) [!]pr=<size>[/prompt|noprompt] (paged results/prompt) [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...] 
(server side sorting) [!]subentries[=true|false] (subentries) [!]sync=ro[/<cookie>] (LDAP Sync refreshOnly) rp[/<cookie>][/<slimit>] (LDAP Sync refreshAndPersist) -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -R realm Specify the realm of the authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful. EXAMPLE ldapwhoami -x -D "cn=Manager,dc=example,dc=com" -W SEE ALSO ldap.conf(5), ldap(3), ldap_extended_operation(3) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPWHOAMI(1)
pl2pm
pl2pm is a tool to aid in the conversion of Perl4-style .pl library files to Perl5-style library modules. Usually, your old .pl file will still work fine and you should only use this tool if you plan to update your library to use some of the newer Perl 5 features, such as AutoLoading. LIMITATIONS It's just a first step, but it's usually a good first step. AUTHOR Larry Wall <larry@wall.org> perl v5.38.2 2023-11-28 PL2PM(1)
pl2pm - Rough tool to translate Perl4 .pl files to Perl5 .pm modules.
pl2pm files
ncinit
ncctl controls the caller's kernel Kerberos credentials for any of the specified path's associated NFS mounts. If no paths are specified then all the caller's associated credentials for all NFS file systems are acted upon by the command given. When an NFS file system is mounted using Kerberos through the “sec=” option or by the export specified on the server, the resulting session context is stored in a table for each mount. If the user decides to finish his or her session or chooses to use a different credential, then ncctl can be called to invalidate or change those credentials in the kernel. ncctl supports the following commands: init, set Set the mount or mounts to obtain credentials from the associated principal. Any current credential is unset. destroy, unset Unset the current credentials on the mount or mounts. list, get List the principal(s) set on the mount or mounts for this session. If no principal was set, then display “Default credential” followed by “[from ⟨principal name⟩]” if the access succeeded and “[kinit needed]” if not. If there has been no access to the file system then display “Credentials are not set”. Note the second synopsis is equivalent to ncctl [-Pv] {init | set} [-F] -p principal The third synopsis is equivalent to ncctl [-Pv] {destroy | unset} And the last synopsis is equivalent to ncctl [-Pv] {list | get} Kerberos keeps a collection of credentials which can be seen by using klist -A. The current default credential can be seen with klist without any arguments. kswitch can be used to switch the default to a different Kerberos credential. kdestroy can be used to remove all or a particular Kerberos credential. New Kerberos credentials can be obtained and added to the collection by calling kinit, and those credentials can be used when accessing the mount. See kinit(1), klist(1), kswitch(1), and kdestroy(1). ncctl can set any principal from the associated Kerberos credentials or can destroy and unset credentials currently on the mount. 
When accessing a Kerberos mounted NFS file system, if no principal is set on the mount, then when the kernel needs credentials it will make an upcall to the gssd daemon, and whatever default credentials are available at the time will be used. The options are as follows: -h, --help Print a help summary of the command and then exit. -v, --verbose Be verbose and show what file system is being operated on and any resulting errors. -P, --nofollow If the trailing component resolves to a symbolic link, do not resolve the link but use the current path to determine any associated NFS file system. -p, --principal ⟨principal⟩ For the init, set and ncinit commands, set the principal to ⟨principal⟩. This option is required for these commands. This option is not valid for other commands. -F, --force For the init, set and ncinit commands, do not check for the presence of the required principal in the Kerberos cache collection. This may be useful if Kerberos credentials will be obtained later. WARNING: If the credential is incorrectly set it may not work, and no access to the file system will ever be allowed until another set or unset operation takes place. This option is not valid for other commands.
ncctl – Control NFS kernel credentials
ncctl [-Pvh] {{init | set} [-F] -p principal | {destroy | unset} | {list | get}} [path ...] ncinit [-PvhF] -p principal [path ...] ncdestroy [-Pvh] [path ...] nclist [-Pvh] [path ...]
If leaving for the day: $ kdestroy -A $ ncdestroy Let's say a user does $ kinit user@FOO.COM and, through the automounter, accesses the path /Network/Servers/someserver/Sources/foo/bar where the mount of /Network/Servers/someserver/Sources/foo was done with user@FOO.COM. $ cat /Network/Servers/someserver/Sources/foo/bar cat: /Network/Servers/someserver/Sources/foo/bar: Permission denied The user realizes that in order to have access on the server his identity should be user2@BAR.COM. So: $ kinit user2@BAR.COM $ ncctl set -p user2@BAR.COM Now the local user can access bar. To see your credentials: $ nclist /Network/Servers/someserver/Sources/foo: user2@BAR.COM If the user destroys his credentials and then acquires new ones: $ ncdestroy $ nclist -v /private/tmp/mp : No credentials are set. /Network/Servers/xs1/release : NFS mount is not using Kerberos. $ kinit user user@FOO.COM's password: ****** $ klist Credentials cache: API:648E3003-0A6B-4BB3-8447-1D5034F98EAE Principal: user@FOO.COM Issued Expires Principal Dec 15 13:57:57 2014 Dec 15 23:57:57 2014 krbtgt/FOO.COM@FOO.COM $ ls /private/tmp/mp filesystemui.socket= sysdiagnose.tar.gz x mtrecorder/ systemstats/ z $ nclist /private/tmp/mp : Default credential [from user@FOO.COM] NOTES As mentioned above, credentials are per session, so the console session's credential cache collection is separate from the collection of credentials obtained in an ssh session, even by the same user. The default credential can be changed with kswitch. However, the default credential can also change without the user's knowledge, because of renewals or because some other script or program in the user's session does a kswitch (krb5_cc_set_default_name()) or kinit on the user's behalf. kinit may not prompt for a password if the Kerberos password for the principal is in the user's keychain. ncctl with the set command will allow a user to change the mapping of the local user identity to a different one on the server. 
It is up to the user to decide which identity will be used. Previous versions of the gssd daemon would attempt to select credentials if they were not set, by choosing credentials in the same realm as the server. This was imperfect and has been removed. There may be multiple credentials in the same realm, or a user may prefer a cross realm principal. It is highly recommended that after accessing a mount (typically through the automounter), a user with access to multiple credentials set the credential on the mount that they want to use. The current default credential will be used by the automounter on first mount. If you do not explicitly set the credentials to use, then if the server expires the credential, the client will use the current default credential at the time of renewal, and that may be a different identity. If using mount directly, a user can select what credential to use for the mount and thereafter (at least until a new ncctl set command is run) by using the principal=⟨principal⟩ option. It is also possible to select the realm to use with the realm=⟨realm⟩ option. The latter can be useful to administrators in automounter maps. There is currently no way to remember what the chosen identity is for a given mount after it has been unmounted. So for automounted mounts, a reference is taken on the mount point so that unmounts will not happen until all credentials on a mount with a set principal have been destroyed. Forced unmounts will not be affected. nclist or ncctl get can be used to see what credentials are actually being used, and ncdestroy or ncctl unset can be used to destroy that session's credential. Accessing the mount after its credentials have been destroyed will cause the default credential to be used until the next ncinit or ncctl set. Default credentials for an automounted NFS mount will not prevent the unmounting of the file system. 
DIAGNOSTICS The ncctl command will exit with 1 if any of the supplied paths doesn't exist or there is an error returned for any path tried. If all paths exist and no errors are returned the exit status will be 0. SEE ALSO kdestroy(1), kinit(1), klist(1), kswitch(1), mount_nfs(8) BUGS There should be an option to kdestroy to destroy cached NFS contexts. macOS 14.5 January 14, 2015 macOS 14.5
tkmib
Simple Network Management Protocol (SNMP) provides a framework for the exchange of management information between agents (servers) and clients. Management Information Bases (MIBs) contain a formal description of the set of network objects that can be managed using SNMP for a particular agent. tkmib is a graphical user interface for browsing MIBs. It is also capable of sending SNMP management information to, or retrieving it from, remote agents interactively. V5.6.2.1 16 Nov 2006 tkmib(1)
tkmib - an interactive graphical MIB browser for SNMP
tkmib
cal
The cal utility displays a simple calendar in traditional format and ncal offers an alternative layout, more options and the date of Easter. The new format is a little cramped but it makes a year fit on a 25x80 terminal. If arguments are not specified, the current month is displayed. The options are as follows: -h Turns off highlighting of today. -J Display Julian Calendar, if combined with the -e option, display date of Easter according to the Julian Calendar. -e Display date of Easter (for western churches). -j Display Julian days (days one-based, numbered from January 1). -m month Display the specified month. If month is specified as a decimal number, it may be followed by the letter ‘f’ or ‘p’ to indicate the following or preceding month of that number, respectively. -o Display date of Orthodox Easter (Greek and Russian Orthodox Churches). -p Print the country codes and switching days from Julian to Gregorian Calendar as they are assumed by ncal. The country code as determined from the local environment is marked with an asterisk. -s country_code Assume the switch from Julian to Gregorian Calendar at the date associated with the country_code. If not specified, ncal tries to guess the switch date from the local environment or falls back to September 2, 1752. This was when Great Britain and her colonies switched to the Gregorian Calendar. -w Print the number of the week below each week column. -y Display a calendar for the specified year. -3 Display the previous, current and next month surrounding today. -A number Display the number of months after the current month. -B number Display the number of months before the current month. -C Switch to cal mode. -N Switch to ncal mode. -d yyyy-mm Use yyyy-mm as the current date (for debugging of date selection). -H yyyy-mm-dd Use yyyy-mm-dd as the current date (for debugging of highlighting). 
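The date of Easter displayed by ncal -e is that of the western (Gregorian) computus. As an illustrative sketch, not part of cal/ncal itself, the same date can be computed with the anonymous Gregorian (Meeus/Jones/Butcher) algorithm:

```python
def gregorian_easter(year):
    """Return (month, day) of western Easter for the given year,
    per the anonymous Gregorian algorithm."""
    a = year % 19                      # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)           # century and year-of-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-based offset
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))  # -> (3, 31), i.e. March 31
```

This matches what ncal -e prints for those years; ncal -o (Orthodox Easter) and ncal -J -e (Julian calendar) use different rules not shown here.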
A single parameter specifies the year (1–9999) to be displayed; note the year must be fully specified: “cal 89” will not display a calendar for 1989. Two parameters denote the month and year; the month is either a number between 1 and 12, or a full or abbreviated name as specified by the current locale. Month and year default to those of the current system clock and time zone (so “cal -m 8” will display a calendar for the month of August in the current year). Not all options can be used together. For example “-3 -A 2 -B 3 -y -m 7” would mean: show me the three months around the seventh month, three before that, two after that and the whole year. ncal will warn about these combinations. A year starts on January 1. Highlighting of dates is disabled if stdout is not a tty. SEE ALSO calendar(3), strftime(3) STANDARDS The cal utility is compliant with the X/Open System Interfaces option of the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-3hyJeopw], as well as the ability to specify a month name as a single argument, are extensions to that specification. The week number computed by -w is compliant with the ISO 8601 specification. HISTORY A cal command appeared in Version 1 AT&T UNIX. The ncal command appeared in FreeBSD 2.2.6. AUTHORS The ncal command and manual were written by Wolfgang Helbig <helbig@FreeBSD.org>. BUGS The assignment of Julian–Gregorian switching dates to country codes is historically naive for many countries. Not all options are compatible and using them in different orders will give varying results. It is not possible to display Monday as the first day of the week with cal. macOS 14.5 March 7, 2019 macOS 14.5
cal, ncal – displays a calendar and the date of Easter
cal [-3hjy] [-A number] [-B number] [[month] year] cal [-3hj] [-A number] [-B number] -m month [year] ncal [-3hjJpwy] [-A number] [-B number] [-s country_code] [[month] year] ncal [-3hJeo] [-A number] [-B number] [year] ncal [-CN] [-H yyyy-mm-dd] [-d yyyy-mm]
afplay
Audio File Play plays an audio file to the default audio output.
afplay – Audio File Play
afplay [-h] audiofile
-h print help text Darwin February 13, 2007 Darwin
time
The time utility executes and times the specified utility. After the utility finishes, time writes to the standard error stream (in seconds): the total time elapsed, the time used to execute the utility process, and the time consumed by system overhead. The following options are available: -a If the -o flag is used, append to the specified file rather than overwriting it. Otherwise, this option has no effect. -h Print times in a human friendly format. Times are printed in minutes, hours, etc. as appropriate. -l The contents of the rusage structure are printed as well. -o file Write the output to file instead of stderr. If file exists and the -a flag is not specified, the file will be overwritten. -p Makes time output POSIX.2 compliant (each time is printed on its own line). Some shells may provide a builtin time command which is similar or identical to this utility. Consult the builtin(1) manual page. If time receives a SIGINFO (see the status argument for stty(1)) signal, the current time the given command is running will be written to the standard output. ENVIRONMENT The PATH environment variable is used to locate the requested utility if the name contains no ‘/’ characters. EXIT STATUS If utility could be timed successfully, its exit status is returned. If utility terminated abnormally, a warning message is output to stderr. If the utility was found but could not be run, the exit status is 126. If no utility could be found at all, the exit status is 127. If time encounters any other error, the exit status is between 1 and 125 inclusive.
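The real/user/sys split that time reports (wall-clock time versus CPU time charged to the child, in user and kernel mode) can be reproduced with Python's standard library. This is an illustrative sketch of what the utility measures, not its implementation:

```python
import resource
import subprocess
import time

def time_command(argv):
    """Run argv and report elapsed (real) and child CPU (user/sys)
    times, in the spirit of time -p: each time on its own line."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run(argv)
    real = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    user = after.ru_utime - before.ru_utime
    sys_time = after.ru_stime - before.ru_stime
    print(f"real {real:.2f}")
    print(f"user {user:.2f}")
    print(f"sys {sys_time:.2f}")
    return real, user, sys_time

time_command(["sleep", "0.2"])
```

For a mostly-sleeping child like this, real is at least the sleep duration while user and sys stay near zero, which is exactly the pattern shown in the EXAMPLES below for sleep(1).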
time – time command execution
time [-al] [-h | -p] [-o file] utility [argument ...]
Time the execution of ls(1) on an empty directory: $ /usr/bin/time ls 0.00 real 0.00 user 0.00 sys Time the execution of the cp(1) command and store the result in the times.txt file. Then execute the command again to make a new copy and add the result to the same file: $ /usr/bin/time -o times.txt cp FreeBSD-12.1-RELEASE-amd64-bootonly.iso copy1.iso $ /usr/bin/time -a -o times.txt cp FreeBSD-12.1-RELEASE-amd64-bootonly.iso copy2.iso The times.txt file will contain the times of both commands: $ cat times.txt 0.68 real 0.00 user 0.22 sys 0.67 real 0.00 user 0.21 sys Time the sleep(1) command and show the results in a human friendly format. Show the contents of the rusage structure too: $ /usr/bin/time -l -h -p sleep 5 real 5.01 user 0.00 sys 0.00 0 maximum resident set size 0 average shared memory size 0 average unshared data size 0 average unshared stack size 80 page reclaims 0 page faults 0 swaps 1 block input operations 0 block output operations 0 messages sent 0 messages received 0 signals received 3 voluntary context switches 0 involuntary context switches 2054316 instructions retired 2445544 cycles elapsed 241664 peak memory footprint SEE ALSO builtin(1), csh(1), getrusage(2), wait(2) STANDARDS The time utility is expected to conform to ISO/IEC 9945-2:1993 (``POSIX''). HISTORY A time utility appeared in Version 3 AT&T UNIX. macOS 14.5 January 15, 2021 macOS 14.5
ssh-agent
ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1). The options are as follows: -a bind_address Bind the agent to the UNIX-domain socket bind_address. The default is $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid>. -c Generate C-shell commands on stdout. This is the default if SHELL looks like it's a csh style of shell. -D Foreground mode. When this option is specified, ssh-agent will not fork. -d Debug mode. When this option is specified, ssh-agent will not fork and will write debug information to standard error. -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: “md5” and “sha256”. The default is “sha256”. -k Kill the current agent (given by the SSH_AGENT_PID environment variable). -O option Specify an option when starting ssh-agent. Currently two options are supported: allow-remote-pkcs11 and no-restrict-websafe. The allow-remote-pkcs11 option allows clients of a forwarded ssh-agent to load PKCS#11 or FIDO provider libraries. By default only local clients may perform this operation. Note that signalling that an ssh-agent client is remote is performed by ssh(1), and use of other tools to forward access to the agent socket may circumvent this restriction. The no-restrict-websafe option instructs ssh-agent to permit signatures using FIDO keys that might be web authentication requests. By default, ssh-agent refuses signature requests for FIDO keys where the key application string does not start with “ssh:” and when the data to be signed does not appear to be a ssh(1) user authentication request or a ssh-keygen(1) signature. The default behaviour prevents forwarded access to a FIDO key from also implicitly forwarding the ability to authenticate to websites. 
-P allowed_providers Specify a pattern-list of acceptable paths for PKCS#11 provider and FIDO authenticator middleware shared libraries that may be used with the -S or -s options to ssh-add(1). Libraries that do not match the pattern list will be refused. See PATTERNS in ssh_config(5) for a description of pattern-list syntax. The default list is “/usr/lib*/*,/usr/local/lib*/*”. -s Generate Bourne shell commands on stdout. This is the default if SHELL does not look like it's a csh style of shell. -t life Set a default value for the maximum lifetime of identities added to the agent. The lifetime may be specified in seconds or in a time format specified in sshd_config(5). A lifetime specified for an identity with ssh-add(1) overrides this value. Without this option the default maximum lifetime is forever. command [arg ...] If a command (and optional arguments) is given, this is executed as a subprocess of the agent. The agent exits automatically when the command given on the command line terminates. There are two main ways to get an agent set up. The first is at the start of an X session, where all other windows or programs are started as children of the ssh-agent program. The agent starts a command under which its environment variables are exported, for example ssh-agent xterm &. When the command terminates, so does the agent. The second method is used for a login session. When ssh-agent is started, it prints the shell commands required to set its environment variables, which in turn can be evaluated in the calling shell, for example eval `ssh-agent -s`. In both cases, ssh(1) looks at these environment variables and uses them to establish a connection to the agent. The agent initially does not have any private keys. Keys are added using ssh-add(1) or by ssh(1) when AddKeysToAgent is set in ssh_config(5). Multiple identities may be stored in ssh-agent concurrently and ssh(1) will automatically use them if present. 
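The Bourne-shell commands printed by ssh-agent -s (and evaluated by eval `ssh-agent -s`) follow a simple NAME=value; export NAME; format. A small Python sketch that extracts the variables from such output; the sample text below is made up for illustration, not captured from a real agent:

```python
import re

# Made-up sample of `ssh-agent -s` output, for illustration only.
SAMPLE = (
    "SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.123; export SSH_AUTH_SOCK;\n"
    "SSH_AGENT_PID=124; export SSH_AGENT_PID;\n"
    "echo Agent pid 124;\n"
)

def parse_agent_env(text):
    """Collect NAME=value assignments from sh-style agent output,
    ignoring non-assignment lines such as the trailing echo."""
    return dict(re.findall(r"^(\w+)=([^;]+);", text, re.MULTILINE))

env = parse_agent_env(SAMPLE)
print(env["SSH_AUTH_SOCK"])
print(env["SSH_AGENT_PID"])  # -> 124
```

These are the two variables described under ENVIRONMENT below; ssh(1) reads them to locate and authenticate to the agent.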
ssh-add(1) is also used to remove keys from ssh-agent and to query the keys that are held in one. Connections to ssh-agent may be forwarded from further remote hosts using the -A option to ssh(1) (but see the caveats documented therein), avoiding the need for authentication data to be stored on other machines. Authentication passphrases and private keys never go over the network: the connection to the agent is forwarded over SSH remote connections and the result is returned to the requester, allowing the user access to their identities anywhere in the network in a secure fashion. ENVIRONMENT SSH_AGENT_PID When ssh-agent starts, it stores the name of the agent's process ID (PID) in this variable. SSH_AUTH_SOCK When ssh-agent starts, it creates a UNIX-domain socket and stores its pathname in this variable. It is accessible only to the current user, but is easily abused by root or another instance of the same user. FILES $TMPDIR/ssh-XXXXXXXXXX/agent.<ppid> UNIX-domain sockets used to contain the connection to the authentication agent. These sockets should only be readable by the owner. The sockets should get automatically removed when the agent exits. SEE ALSO ssh(1), ssh-add(1), ssh-keygen(1), ssh_config(5), sshd(8) AUTHORS OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen. Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH. Markus Friedl contributed the support for SSH protocol versions 1.5 and 2.0. macOS 14.5 August 10, 2023 macOS 14.5
ssh-agent – OpenSSH authentication agent
ssh-agent [-c | -s] [-Dd] [-a bind_address] [-E fingerprint_hash] [-O option] [-P allowed_providers] [-t life] ssh-agent [-a bind_address] [-E fingerprint_hash] [-O option] [-P allowed_providers] [-t life] command [arg ...] ssh-agent [-c | -s] -k
afktool
See afktool [help] for more information. February 11, 2020
afktool – AppleFirmwareKit debug utility
afktool [command] [options]
gcore
The gcore program creates a core file image of the process specified by pid. The resulting core file can be used with a debugger, e.g. lldb(1), to examine the state of the process. The following options are available: -s Suspend the process while the core file is captured. -v Report progress on the dump as it proceeds. -b size Limit the size of the core file to size MiBytes. The following options control the name of the core file: -o path Write the core file to path. -c pathformat Write the core file to pathformat. The pathformat string is treated as a pathname that may contain various special characters which cause the interpolation of strings representing specific attributes of the process into the name. Each special character is introduced by the % character. The format characters and their meanings are: N The name of the program being dumped, as reported by ps(1). U The uid of the process being dumped, converted to a string. P The pid of the process being dumped, converted to a string. T The time when the core file was taken, converted to ISO 8601 format. % Output a percent character. The default file name used by gcore is %N-%P-%T. By default, the core file will be written to a directory whose name is determined from the kern.corefile MIB. This can be printed or modified using sysctl(8). The directory where the core file is to be written must be accessible to the owner of the target process. gcore will not overwrite an existing file, nor will it create missing directories in the path. FILES /cores/%N-%P-%T default pathname for the corefile. EXIT STATUS The gcore utility exits 0 on success, and >0 if an error occurs. SEE ALSO lldb(1), core(5), Mach-O(5), sudo(8), sysctl(8) BUGS With the -b flag, gcore writes out as much data as it can up to the specified limit, even if that results in an incomplete core image. Such a partial core dump may confuse subsequent programs that attempt to parse the contents of such files. Darwin February 10, 2016 Darwin
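The %-interpolation that gcore applies to a -c pathformat string can be sketched as a small expansion routine. This is an illustrative model of the documented format characters, not gcore's own code, and the sample name/uid/pid values are made up:

```python
import datetime

def expand_corefile_format(fmt, name, uid, pid, now=None):
    """Expand gcore-style %N/%U/%P/%T/%% format characters
    (illustrative model of the documented behavior)."""
    now = now or datetime.datetime.now()
    repl = {
        "N": name,                              # program name
        "U": str(uid),                          # uid of the process
        "P": str(pid),                          # pid of the process
        "T": now.strftime("%Y-%m-%dT%H:%M:%S"),  # ISO 8601 timestamp
        "%": "%",                               # literal percent
    }
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == "%" and i + 1 < len(fmt):
            out.append(repl.get(fmt[i + 1], "%" + fmt[i + 1]))
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return "".join(out)

print(expand_corefile_format("%N-%P-%T", "myapp", 501, 1234))
```

With the default format %N-%P-%T this yields names like myapp-1234-2016-02-10T12:00:00, matching the default pathname shown under FILES.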
gcore – get core images of running processes
gcore [-s] [-v] [-b size] [-o path | -c pathformat] pid
javap
The javap command disassembles one or more class files. The output depends on the options used. When no options are used, the javap command prints the protected and public fields, and methods of the classes passed to it. The javap command isn't multirelease JAR aware. Using the class path form of the command results in viewing the base entry in all JAR files, multirelease or not. Using the URL form of an argument, you can specify a specific version of a class to be disassembled. The javap command prints its output to stdout. Note: In tools that support -- style options, the GNU-style options can use the equal sign (=) instead of a white space to separate the name of an option from its value. OPTIONS FOR JAVAP --help, -help, -h, or -? Prints a help message for the javap command. -version Prints release information. -verbose or -v Prints additional information about the selected class. -l Prints line and local variable tables. -public Shows only public classes and members. -protected Shows only protected and public classes and members. -package Shows package/protected/public classes and members (default). -private or -p Shows all classes and members. -c Prints disassembled code, for example, the instructions that comprise the Java bytecodes, for each of the methods in the class. -s Prints internal type signatures. -sysinfo Shows system information (path, size, date, SHA-256 hash) of the class being processed. -constants Shows static final constants. --module module or -m module Specifies the module containing classes to be disassembled. --module-path path Specifies where to find application modules. --system jdk Specifies where to find system modules. --class-path path, -classpath path, or -cp path Specifies the path that the javap command uses to find user class files. It overrides the default or the CLASSPATH environment variable when it's set. -bootclasspath path Overrides the location of bootstrap class files. 
--multi-release version Specifies the version to select in multi-release JAR files. -Joption Passes the specified option to the JVM. For example: javap -J-version javap -J-Djava.security.manager -J-Djava.security.policy=MyPolicy MyClassName See Overview of Java Options in java. JAVAP EXAMPLE Compile the following HelloWorldFrame class: import java.awt.Graphics; import javax.swing.JFrame; import javax.swing.JPanel; public class HelloWorldFrame extends JFrame { String message = "Hello World!"; public HelloWorldFrame(){ setContentPane(new JPanel(){ @Override protected void paintComponent(Graphics g) { g.drawString(message, 15, 30); } }); setSize(100, 100); } public static void main(String[] args) { HelloWorldFrame frame = new HelloWorldFrame(); frame.setVisible(true); } } The output from the javap HelloWorldFrame.class command yields the following: Compiled from "HelloWorldFrame.java" public class HelloWorldFrame extends javax.swing.JFrame { java.lang.String message; public HelloWorldFrame(); public static void main(java.lang.String[]); } The output from the javap -c HelloWorldFrame.class command yields the following: Compiled from "HelloWorldFrame.java" public class HelloWorldFrame extends javax.swing.JFrame { java.lang.String message; public HelloWorldFrame(); Code: 0: aload_0 1: invokespecial #1 // Method javax/swing/JFrame."<init>":()V 4: aload_0 5: ldc #2 // String Hello World! 
7: putfield #3 // Field message:Ljava/lang/String; 10: aload_0 11: new #4 // class HelloWorldFrame$1 14: dup 15: aload_0 16: invokespecial #5 // Method HelloWorldFrame$1."<init>":(LHelloWorldFrame;)V 19: invokevirtual #6 // Method setContentPane:(Ljava/awt/Container;)V 22: aload_0 23: bipush 100 25: bipush 100 27: invokevirtual #7 // Method setSize:(II)V 30: return public static void main(java.lang.String[]); Code: 0: new #8 // class HelloWorldFrame 3: dup 4: invokespecial #9 // Method "<init>":()V 7: astore_1 8: aload_1 9: iconst_1 10: invokevirtual #10 // Method setVisible:(Z)V 13: return } JDK 22 2024 JAVAP(1)
javap - disassemble one or more class files
javap [options] classes...
options Specifies the command-line options. See Options for javap. classes Specifies one or more classes separated by spaces to be processed for annotations. You can specify a class that can be found in the class path by its file name, URL, or by its fully qualified class name. Examples: path/to/MyClass.class jar:file:///path/to/MyJar.jar!/mypkg/MyClass.class java.lang.Object
xed
The xed tool launches the Xcode application and opens the given documents, or opens a new untitled document, optionally with the contents of standard input.
xed – Xcode text editor invocation tool.
xed [-xcwrbhv] [-l lineno] [file ...]
The options for xed are similar to those for the command-line utilities for other text editors: -x, --launch Launches Xcode opening a new empty unsaved file, without reading from standard input. -c, --create Creates any files in the file list that do not already exist. If used without --launch, standard input will be read and piped to the last file created. -w, --wait Wait for the files to be closed before exiting. xed will idle in a run loop waiting for a notification from Xcode when each file is closed, and will only terminate when all are closed. This is useful when invoking it from a script. -l, --line <number> Selects the given line in the last file opened. -b, --background Opens Xcode without activating it; the process that invoked xed remains in front. -h, --help Prints a brief summary of usage. -v, --version Prints the version number of xed [file...] A list of file paths. Existing files will be opened; nonexistent files will be created only if the --create flag is passed. If no files are passed, then standard input will be read and piped into a new untitled document (unless --launch is passed). If --create and at least one nonexistent file name is passed, the last nonexistent file will be created, filled with the standard input, and opened. SEE ALSO xcodebuild(1), xcode-select(1), xcrun(1) HISTORY xed was introduced in Mac OS X 10.5 with Xcode 3.0. Mac OS X March 19, 2015 Mac OS X
uuname
By default, the uuname program simply lists the names of all the remote systems mentioned in the UUCP configuration files. The uuname program may also be used to print the UUCP name of the local system. The uuname program is mainly for use by shell scripts.
uuname - list remote UUCP sites
uuname [-a] [--aliases] uuname [-l] [--local]
-a, --aliases List all aliases for remote systems, as well as their canonical names. Aliases may be specified in the `sys' file. -l, --local Print the UUCP name of the local system, rather than listing the names of all the remote systems. Standard UUCP options: -x type, --debug type Turn on particular debugging types. The following types are recognized: abnormal, chat, handshake, uucp-proto, proto, port, config, spooldir, execute, incoming, outgoing. -I file, --config file Set configuration file to use. -v, --version Report version information and exit. --help Print a help message and exit. SEE ALSO uucp(1) FILES /etc/uucp/sys UUCP system configuration file used to describe all known sites to the local host. AUTHOR Ian Lance Taylor <ian@airs.com>. Text for this Manpage comes from Taylor UUCP, version 1.07 Info documentation. Taylor UUCP 1.07 uuname(1)
ptar
ptar is a small, tar look-alike program that uses the perl module Archive::Tar to extract, create and list tar archives.
ptar - a tar-like program written in perl
ptar -c [-v] [-z] [-C] [-f ARCHIVE_FILE | -] FILE FILE ... ptar -c [-v] [-z] [-C] [-T index | -] [-f ARCHIVE_FILE | -] ptar -x [-v] [-z] [-f ARCHIVE_FILE | -] ptar -t [-z] [-f ARCHIVE_FILE | -] ptar -h
c Create ARCHIVE_FILE or STDOUT (-) from FILE x Extract from ARCHIVE_FILE or STDIN (-) t List the contents of ARCHIVE_FILE or STDIN (-) f Name of the ARCHIVE_FILE to use. Default is './default.tar' z Read/Write zlib compressed ARCHIVE_FILE (not always available) v Print filenames as they are added or extracted from ARCHIVE_FILE h Prints this help message C CPAN mode - drop 022 from permissions T get names to create from file SEE ALSO tar(1), Archive::Tar. perl v5.34.1 2024-04-13 PTAR(1)
findrule
"findrule" mostly borrows the interface from GNU find(1) to provide a command-line interface onto the File::Find::Rule hierarchy of modules. The syntax for expressions is the rule name, preceded by a dash, followed by an optional argument. If the argument is an opening parenthesis it is taken as a list of arguments, terminated by a closing parenthesis. Some examples: find -file -name ( foo bar ) files named "foo" or "bar", below the current directory. find -file -name foo -bar files named "foo", that have pubs (for this is what our fictitious "bar" clause specifies), below the current directory. find -file -name ( -bar ) files named "-bar", below the current directory. In this case if we'd have omitted the parenthesis it would have parsed as a call to name with no arguments, followed by a call to -bar. Supported switches I'm very slack. Please consult the File::Find::Rule manpage for now, and prepend - to the commands that you want. Extra bonus switches findrule automatically loads all of your installed File::Find::Rule::* extension modules, so check the documentation to see what those would be. AUTHOR Richard Clamp <richardc@unixbeard.net> from a suggestion by Tatsuhiko Miyagawa COPYRIGHT Copyright (C) 2002 Richard Clamp. All Rights Reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO File::Find::Rule perl v5.34.0 2015-12-03 FINDRULE(1)
findrule - command line wrapper to File::Find::Rule USAGE findrule [path...] [expression]
env
The env utility executes another utility after modifying the environment as specified on the command line. Each name=value option specifies the setting of an environment variable, name, with a value of value. All such environment variables are set before the utility is executed. The options are as follows: -0 End each output line with NUL, not newline. -i Execute the utility with only those environment variables specified by name=value options. The environment inherited by env is ignored completely. -P altpath Search the set of directories as specified by altpath to locate the specified utility program, instead of using the value of the PATH environment variable. -S string Split apart the given string into multiple strings, and process each of the resulting strings as separate arguments to the env utility. The -S option recognizes some special character escape sequences and also supports environment-variable substitution, as described below. -u name If the environment variable name is in the environment, then remove it before processing the remaining options. This is similar to the unset command in sh(1). The value for name must not include the ‘=’ character. -v Print verbose information for each step of processing done by the env utility. Additional information will be printed if -v is specified multiple times. The above options are only recognized when they are specified before any name=value options. If no utility is specified, env prints out the names and values of the variables in the environment. Each name/value pair is separated by a new line unless -0 is specified, in which case name/value pairs are separated by NUL. Both -0 and utility may not be specified together. Details of -S (split-string) processing The processing of the -S option will split the given string into separate arguments based on any space or <tab> characters found in the string. 
Each of those new arguments will then be treated as if it had been specified as a separate argument on the original env command. Spaces and tabs may be embedded in one of those new arguments by using single (“'”) or double (‘"’) quotes, or backslashes (‘\’). Single quotes will escape all non-single quote characters, up to the matching single quote. Double quotes will escape all non-double quote characters, up to the matching double quote. It is an error if the end of the string is reached before the matching quote character. If -S would create a new argument that starts with the ‘#’ character, then that argument and the remainder of the string will be ignored. The ‘\#’ sequence can be used when you want a new argument to start with a ‘#’ character, without causing the remainder of the string to be skipped. While processing the string value, -S processing will treat certain character combinations as escape sequences which represent some action to take. The character escape sequences are in backslash notation. The characters and their meanings are as follows: \c Ignore the remaining characters in the string. This must not appear inside a double-quoted string. \f Replace with a <form-feed> character. \n Replace with a <new-line> character. \r Replace with a <carriage return> character. \t Replace with a <tab> character. \v Replace with a <vertical tab> character. \# Replace with a ‘#’ character. This would be useful when you need a ‘#’ as the first character in one of the arguments created by splitting apart the given string. \$ Replace with a ‘$’ character. \_ If this is found inside of a double-quoted string, then replace it with a single blank. If this is found outside of a quoted string, then treat this as the separator character between new arguments in the original string. \" Replace with a <double quote> character. \´ Replace with a <single quote> character. \\ Replace with a backslash character. 
The sequences for <single-quote> and backslash are the only sequences which are recognized inside of a single-quoted string. The other sequences have no special meaning inside a single-quoted string. All escape sequences are recognized inside of a double-quoted string. It is an error if a single ‘\’ character is followed by a character other than the ones listed above. The processing of -S also supports substitution of values from environment variables. To do this, the name of the environment variable must be inside of ‘${}’, such as: ${SOMEVAR}. The common shell syntax of $SOMEVAR is not supported. All values substituted will be the values of the environment variables as they were when the env utility was originally invoked. Those values will not be checked for any of the escape sequences as described above. And any settings of name=value will not effect the values used for substitution in -S processing. Also, -S processing cannot reference the value of the special parameters which are defined by most shells. For instance, -S cannot recognize special parameters such as: ‘$*’, ‘$@’, ‘$#’, ‘$?’ or ‘$$’ if they appear inside the given string. Use in shell-scripts The env utility is often used as the interpreter on the first line of interpreted scripts, as described in execve(2). Note that the way the kernel parses the ‘#!’ (first line) of an interpreted script has changed as of FreeBSD 6.0. Prior to that, the FreeBSD kernel would split that first line into separate arguments based on any whitespace (space or <tab> characters) found in the line. So, if a script named /usr/local/bin/someport had a first line of: #!/usr/local/bin/php -n -q -dsafe_mode=0 then the /usr/local/bin/php program would have been started with the arguments of: arg[0] = '/usr/local/bin/php' arg[1] = '-n' arg[2] = '-q' arg[3] = '-dsafe_mode=0' arg[4] = '/usr/local/bin/someport' plus any arguments the user specified when executing someport. 
However, this processing of multiple options on the ‘#!’ line is not the way any other operating system parses the first line of an interpreted script. So after a change which was made for FreeBSD 6.0 release, that script will result in /usr/local/bin/php being started with the arguments of: arg[0] = '/usr/local/bin/php' arg[1] = '-n -q -dsafe_mode=0' arg[2] = '/usr/local/bin/someport' plus any arguments the user specified. This caused a significant change in the behavior of a few scripts. In the case of above script, to have it behave the same way under FreeBSD 6.0 as it did under earlier releases, the first line should be changed to: #!/usr/bin/env -S /usr/local/bin/php -n -q -dsafe_mode=0 The env utility will be started with the entire line as a single argument: arg[1] = '-S /usr/local/bin/php -n -q -dsafe_mode=0' and then -S processing will split that line into separate arguments before executing /usr/local/bin/php. ENVIRONMENT The env utility uses the PATH environment variable to locate the requested utility if the name contains no ‘/’ characters, unless the -P option has been specified. EXIT STATUS The env utility exits 0 on success, and >0 if an error occurs. An exit status of 126 indicates that utility was found, but could not be executed. An exit status of 127 indicates that utility could not be found.
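The split-string behavior can be exercised from an interactive shell as well as from a shebang line. This is a hedged sketch assuming an env that supports -S (FreeBSD/macOS, or GNU coreutils 8.30 and later); the variable name GREETING is illustrative:

```shell
# -S splits one string into separate arguments before execution.
# Without -S, the whole string would be looked up as a single utility name.
env -S 'printf %s\n hello'

# ${} substitution uses the environment as env itself received it:
GREETING=hi env -S 'printf %s\n ${GREETING}'
```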
env – set environment and execute command, or print environment
env [-0iv] [-u name] [name=value ...] env [-iv] [-P altpath] [-S string] [-u name] [name=value ...] utility [argument ...]
Since the env utility is often used as part of the first line of an interpreted script, the following examples show a number of ways that the env utility can be useful in scripts. The kernel processing of an interpreted script does not allow a script to directly reference some other script as its own interpreter. As a way around this, the main difference between #!/usr/local/bin/foo and #!/usr/bin/env /usr/local/bin/foo is that the latter works even if /usr/local/bin/foo is itself an interpreted script. Probably the most common use of env is to find the correct interpreter for a script, when the interpreter may be in different directories on different systems. The following example will find the ‘perl’ interpreter by searching through the directories specified by PATH. #!/usr/bin/env perl One limitation of that example is that it assumes the user's value for PATH is set to a value which will find the interpreter you want to execute. The -P option can be used to make sure a specific list of directories is used in the search for utility. Note that the -S option is also required for this example to work correctly. #!/usr/bin/env -S -P/usr/local/bin:/usr/bin perl The above finds ‘perl’ only if it is in /usr/local/bin or /usr/bin. That could be combined with the present value of PATH, to provide more flexibility. Note that spaces are not required between the -S and -P options: #!/usr/bin/env -S-P/usr/local/bin:/usr/bin:${PATH} perl COMPATIBILITY The env utility accepts the - option as a synonym for -i. SEE ALSO printenv(1), sh(1), execvp(3), login.conf(5), environ(7) STANDARDS The env utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”). The -0, -P, -S, -u and -v options are non-standard extensions supported by FreeBSD, but which may not be available on other operating systems. HISTORY The env command appeared in 4.4BSD. The -P, -S and -v options were added in FreeBSD 6.0. 
BUGS The env utility does not handle values of utility which have an equals sign (‘=’) in their name, for obvious reasons. The env utility does not take multibyte characters into account when processing the -S option, which may lead to incorrect results in some locales. macOS 14.5 March 3, 2021 macOS 14.5
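The -u and -i options described above can be sketched briefly; FOO and BAR are illustrative variable names:

```shell
# -u removes one variable from the environment before running the utility:
FOO=bar env -u FOO sh -c 'echo ${FOO-unset}'

# -i starts from an empty environment; only name=value pairs given on the
# command line reach the utility:
env -i BAR=baz sh -c 'echo $BAR'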
pr
The pr utility is a printing and pagination filter for text files. When multiple input files are specified, each is read, formatted, and written to standard output. By default, the input is separated into 66-line pages, each with • A 5-line header with the page number, date, time, and the pathname of the file. • A 5-line trailer consisting of blank lines. If standard output is associated with a terminal, diagnostic messages are suppressed until the pr utility has completed processing. When multiple column output is specified, text columns are of equal width. By default text columns are separated by at least one <blank>. Input lines that do not fit into a text column are truncated. Lines are not truncated under single column output.
pr – print files
pr [+page] [-column] [-adFfmprt] [[-e] [char] [gap]] [-L locale] [-h header] [[-i] [char] [gap]] [-l lines] [-o offset] [[-s] [char]] [[-n] [char] [width]] [-w width] [-] [file ...]
In the following option descriptions, column, lines, offset, page, and width are positive decimal integers and gap is a nonnegative decimal integer. +page Begin output at page number page of the formatted input. -column Produce output that is columns wide (default is 1) that is written vertically down each column in the order in which the text is received from the input file. The options -e and -i are assumed. This option should not be used with -m. When used with -t, the minimum number of lines is used to display the output. (To columnify and reshape text files more generally and without additional formatting, see the rs(1) utility.) -a Modify the effect of the -column option so that the columns are filled across the page in a round-robin order (e.g., when column is 2, the first input line heads column 1, the second heads column 2, the third is the second line in column 1, etc.). This option requires the use of the -column option. -d Produce output that is double spaced. An extra <newline> character is output following every <newline> found in the input. -e [char][gap] Expand each input <tab> to the next greater column position specified by the formula n*gap+1, where n is an integer > 0. If gap is zero or is omitted the default is 8. All <tab> characters in the input are expanded into the appropriate number of <space>s. If any nondigit character, char, is specified, it is used as the input tab character. -F Use a <form-feed> character for new pages, instead of the default behavior that uses a sequence of <newline> characters. -f Same as -F but pause before beginning the first page if standard output is a terminal. -h header Use the string header to replace the file name in the header line. -i [char][gap] In output, replace multiple <space>s with <tab>s whenever two or more adjacent <space>s reach column positions gap+1, 2*gap+1, etc. If gap is zero or omitted, default <tab> settings at every eighth column position is used. 
If any nondigit character, char, is specified, it is used as the output <tab> character. -L locale Use locale specified as argument instead of one found in environment. Use "C" to reset locale to default. -l lines Override the 66 line default and reset the page length to lines. If lines is not greater than the sum of both the header and trailer depths (in lines), the pr utility suppresses output of both the header and trailer, as if the -t option were in effect. -m Merge the contents of multiple files. One line from each file specified by a file operand is written side by side into text columns of equal fixed widths, in terms of the number of column positions. The number of text columns depends on the number of file operands successfully opened. The maximum number of files merged depends on page width and the per process open file limit. The options -e and -i are assumed. -n [char][width] Provide width digit line numbering. The default for width, if not specified, is 5. The number occupies the first width column positions of each text column or each line of -m output. If char (any nondigit character) is given, it is appended to the line number to separate it from whatever follows. The default for char is a <tab>. Line numbers longer than width columns are truncated. -o offset Each line of output is preceded by offset <spaces>s. If the -o option is not specified, the default is zero. The space taken is in addition to the output line width. -p Pause before each page if the standard output is a terminal. pr will write an alert character to standard error and wait for a carriage return to be read on the terminal. -r Write no diagnostic reports on failure to open a file. -s char Separate text columns by the single character char instead of by the appropriate number of <space>s (default for char is the <tab> character). -t Print neither the five-line identifying header nor the five-line trailer usually supplied for each page. 
Quit printing after the last line of each file without spacing to the end of the page. -w width Set the width of the line to width column positions for multiple text-column output only. If the -w option is not specified and the -s option is not specified, the default width is 72. If the -w option is not specified and the -s option is specified, the default width is 512. file A pathname of a file to be printed. If no file operands are specified, or if a file operand is ‘-’, the standard input is used. The standard input is used only if no file operands are specified, or if a file operand is ‘-’. The -s option does not allow the option letter to be separated from its argument, and the options -e, -i, and -n require that both arguments, if present, not be separated from the option letter. EXIT STATUS The pr utility exits 0 on success, and >0 if an error occurs. DIAGNOSTICS If pr receives an interrupt while printing to a terminal, it flushes all accumulated error messages to the screen before terminating. Error messages are written to standard error during the printing process (if output is redirected) or after all successful file printing is complete (when printing to a terminal). LEGACY DESCRIPTION The last space before the tab stop is replaced with a tab character. In legacy mode, it is not. For more information about legacy mode, see compat(5). SEE ALSO cat(1), more(1), rs(1), compat(5) STANDARDS The pr utility is IEEE Std 1003.1-2001 (“POSIX.1”) compatible. HISTORY A pr command appeared in Version 1 AT&T UNIX. BUGS The pr utility does not recognize multibyte characters. macOS 14.5 July 3, 2004 macOS 14.5
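A minimal sketch of the -t, -n, and -column options, reading standard input so no scratch file is needed:

```shell
# -t suppresses the 5-line header and trailer; -n prefixes line numbers:
printf 'one\ntwo\nthree\nfour\n' | pr -t -n

# Two text columns of equal width, written down column 1 then column 2
# (with -t, only the minimum number of lines is used):
printf 'one\ntwo\nthree\nfour\n' | pr -t -2
```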
h2xs
h2xs builds a Perl extension from C header files. The extension will include functions which can be used to retrieve the value of any #define statement which was in the C header files. The module_name will be used for the name of the extension. If module_name is not supplied then the name of the first header file will be used, with the first character capitalized. If the extension might need extra libraries, they should be included here. The extension Makefile.PL will take care of checking whether the libraries actually exist and how they should be loaded. The extra libraries should be specified in the form -lm -lposix, etc, just as on the cc command line. By default, the Makefile.PL will search through the library path determined by Configure. That path can be augmented by including arguments of the form -L/another/library/path in the extra-libraries argument. In spite of its name, h2xs may also be used to create a skeleton pure Perl module. See the -X option.
h2xs - convert .h C header files to Perl extensions
h2xs [OPTIONS ...] [headerfile ... [extra_libraries]] h2xs -h|-?|--help
-A, --omit-autoload Omit all autoload facilities. This is the same as -c but also removes the "use AutoLoader" statement from the .pm file. -B, --beta-version Use an alpha/beta style version number. Causes version number to be "0.00_01" unless -v is specified. -C, --omit-changes Omits creation of the Changes file, and adds a HISTORY section to the POD template. -F, --cpp-flags=addflags Additional flags to specify to C preprocessor when scanning header for function declarations. Writes these options in the generated Makefile.PL too. -M, --func-mask=regular expression selects functions/macros to process. -O, --overwrite-ok Allows a pre-existing extension directory to be overwritten. -P, --omit-pod Omit the autogenerated stub POD section. -X, --omit-XS Omit the XS portion. Used to generate a skeleton pure Perl module. "-c" and "-f" are implicitly enabled. -a, --gen-accessors Generate an accessor method for each element of structs and unions. The generated methods are named after the element name; will return the current value of the element if called without additional arguments; and will set the element to the supplied value (and return the new value) if called with an additional argument. Embedded structures and unions are returned as a pointer rather than the complete structure, to facilitate chained calls. These methods all apply to the Ptr type for the structure; additionally two methods are constructed for the structure type itself, "_to_ptr" which returns a Ptr type pointing to the same structure, and a "new" method to construct and return a new structure, initialised to zeroes. -b, --compat-version=version Generates a .pm file which is backwards compatible with the specified perl version. For versions < 5.6.0, the changes are. - no use of 'our' (uses 'use vars' instead) - no 'use warnings' Specifying a compatibility version higher than the version of perl you are using to run h2xs will have no effect. 
If unspecified h2xs will default to compatibility with the version of perl you are using to run h2xs. -c, --omit-constant Omit "constant()" from the .xs file and corresponding specialised "AUTOLOAD" from the .pm file. -d, --debugging Turn on debugging messages. -e, --omit-enums=[regular expression] If regular expression is not given, skip all constants that are defined in a C enumeration. Otherwise skip only those constants that are defined in an enum whose name matches regular expression. Since regular expression is optional, make sure that this switch is followed by at least one other switch if you omit regular expression and have some pending arguments such as header-file names. This is ok: h2xs -e -n Module::Foo foo.h This is not ok: h2xs -n Module::Foo -e foo.h In the latter, foo.h is taken as regular expression. -f, --force Allows an extension to be created for a header even if that header is not found in standard include directories. -g, --global Include code for safely storing static data in the .xs file. Extensions that do not make use of static data can ignore this option. -h, -?, --help Print the usage, help and version for this h2xs and exit. -k, --omit-const-func For function arguments declared as "const", omit the const attribute in the generated XS code. -m, --gen-tied-var Experimental: for each variable declared in the header file(s), declare a perl variable of the same name magically tied to the C variable. -n, --name=module_name Specifies a name to be used for the extension, e.g., -n RPC::DCE -o, --opaque-re=regular expression Use "opaque" data type for the C types matched by the regular expression, even if these types are "typedef"-equivalent to types from typemaps. Should not be used without -x. This may be useful since, say, types which are "typedef"-equivalent to integers may represent OS-related handles, and one may want to work with these handles in OO-way, as in "$handle->do_something()". Use "-o ." 
if you want to handle all the "typedef"ed types as opaque types. The type-to-match is whitewashed (except for commas, which have no whitespace before them, and multiple "*" which have no whitespace between them). -p, --remove-prefix=prefix Specify a prefix which should be removed from the Perl function names, e.g., -p sec_rgy_ This sets up the XS PREFIX keyword and removes the prefix from functions that are autoloaded via the "constant()" mechanism. -s, --const-subs=sub1,sub2 Create a perl subroutine for the specified macros rather than autoload with the constant() subroutine. These macros are assumed to have a return type of char *, e.g., -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid. -t, --default-type=type Specify the internal type that the constant() mechanism uses for macros. The default is IV (signed integer). Currently all macros found during the header scanning process will be assumed to have this type. Future versions of "h2xs" may gain the ability to make educated guesses. --use-new-tests When --compat-version (-b) is present the generated tests will use "Test::More" rather than "Test" which is the default for versions before 5.6.2. "Test::More" will be added to PREREQ_PM in the generated "Makefile.PL". --use-old-tests Will force the generation of test code that uses the older "Test" module. --skip-exporter Do not use "Exporter" and/or export any symbol. --skip-ppport Do not use "Devel::PPPort": no portability to older version. --skip-autoloader Do not use the module "AutoLoader"; but keep the constant() function and "sub AUTOLOAD" for constants. --skip-strict Do not use the pragma "strict". --skip-warnings Do not use the pragma "warnings". -v, --version=version Specify a version number for this extension. This version number is added to the templates. The default is 0.01, or 0.00_01 if "-B" is specified. The version specified should be numeric. -x, --autogen-xsubs Automatically generate XSUBs basing on function declarations in the header file. 
The package "C::Scan" should be installed. If this option is specified, the name of the header file may look like "NAME1,NAME2". In this case NAME1 is used instead of the specified string, but XSUBs are emitted only for the declarations included from file NAME2. Note that some types of arguments/return-values for functions may result in XSUB-declarations/typemap-entries which need hand- editing. Such may be objects which cannot be converted from/to a pointer (like "long long"), pointers to functions, or arrays. See also the section on "LIMITATIONS of -x".
# Default behavior, extension is Rusers h2xs rpcsvc/rusers # Same, but extension is RUSERS h2xs -n RUSERS rpcsvc/rusers # Extension is rpcsvc::rusers. Still finds <rpcsvc/rusers.h> h2xs rpcsvc::rusers # Extension is ONC::RPC. Still finds <rpcsvc/rusers.h> h2xs -n ONC::RPC rpcsvc/rusers # Without constant() or AUTOLOAD h2xs -c rpcsvc/rusers # Creates templates for an extension named RPC h2xs -cfn RPC # Extension is ONC::RPC. h2xs -cfn ONC::RPC # Extension is a pure Perl module with no XS code. h2xs -X My::Module # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines and enums h2xs can find # in foo.h. h2xs -b 5.5.3 -n Lib::Foo foo.h # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines but only for enums # whose names do not start with 'bar_'. h2xs -b 5.5.3 -e '^bar_' -n Lib::Foo foo.h # Makefile.PL will look for library -lrpc in # additional directory /opt/net/lib h2xs rpcsvc/rusers -L/opt/net/lib -lrpc # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names h2xs -n DCE::rgynbase -p sec_rgy_ dce/rgynbase # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names # subroutines are created for sec_rgy_wildcard_name and # sec_rgy_wildcard_sid h2xs -n DCE::rgynbase -p sec_rgy_ \ -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid dce/rgynbase # Make XS without defines in perl.h, but with function declarations # visible from perl.h. Name of the extension is perl1. # When scanning perl.h, define -DEXT=extern -DdEXT= -DINIT(x)= # Extra backslashes below because the string is passed to shell. # Note that a directory with perl header files would # be added automatically to include path. h2xs -xAn perl1 -F "-DEXT=extern -DdEXT= -DINIT\(x\)=" perl.h # Same with function declaration in proto.h as visible from perl.h. 
h2xs -xAn perl2 perl.h,proto.h # Same but select only functions which match /^av_/ h2xs -M '^av_' -xAn perl2 perl.h,proto.h # Same but treat SV* etc as "opaque" types h2xs -o '^[S]V \*$' -M '^av_' -xAn perl2 perl.h,proto.h Extension based on .h and .c files Suppose that you have some C files implementing some functionality, and the corresponding header files. How to create an extension which makes this functionality accessible in Perl? The example below assumes that the header files are interface_simple.h and interface_hairy.h, and you want the perl module be named as "Ext::Ension". If you need some preprocessor directives and/or linking with external libraries, see the flags "-F", "-L" and "-l" in "OPTIONS". Find the directory name Start with a dummy run of h2xs: h2xs -Afn Ext::Ension The only purpose of this step is to create the needed directories, and let you know the names of these directories. From the output you can see that the directory for the extension is Ext/Ension. Copy C files Copy your header files and C files to this directory Ext/Ension. Create the extension Run h2xs, overwriting older autogenerated files: h2xs -Oxan Ext::Ension interface_simple.h interface_hairy.h h2xs looks for header files after changing to the extension directory, so it will find your header files OK. Archive and test As usual, run cd Ext/Ension perl Makefile.PL make dist make make test Hints It is important to do "make dist" as early as possible. This way you can easily merge(1) your changes to autogenerated files if you decide to edit your ".h" files and rerun h2xs. Do not forget to edit the documentation in the generated .pm file. Consider the autogenerated files as skeletons only, you may invent better interfaces than what h2xs could guess. Consider this section as a guideline only, some other options of h2xs may better suit your needs. ENVIRONMENT No environment variables are used. AUTHOR Larry Wall and others SEE ALSO perl, perlxstut, ExtUtils::MakeMaker, and AutoLoader. 
DIAGNOSTICS The usual warnings if it cannot read or write the files involved. LIMITATIONS of -x h2xs would not distinguish whether an argument to a C function which is of the form, say, "int *", is an input, output, or input/output parameter. In particular, argument declarations of the form int foo(n) int *n should be better rewritten as int foo(n) int &n if "n" is an input parameter. Additionally, h2xs has no facilities to intuit that a function int foo(addr,l) char *addr int l takes a pair of address and length of data at this address, so it is better to rewrite this function as int foo(sv) SV *addr PREINIT: STRLEN len; char *s; CODE: s = SvPV(sv,len); RETVAL = foo(s, len); OUTPUT: RETVAL or alternately static int my_foo(SV *sv) { STRLEN len; char *s = SvPV(sv,len); return foo(s, len); } MODULE = foo PACKAGE = foo PREFIX = my_ int foo(sv) SV *sv See perlxs and perlxstut for additional details. perl v5.34.1 2024-04-13 H2XS(1)
head
This filter displays the first count lines or bytes of each of the specified files, or of the standard input if no files are specified. If count is omitted, it defaults to 10. The following options are available: -c bytes, --bytes=bytes Print the first bytes bytes of each of the specified files. -n count, --lines=count Print the first count lines of each of the specified files. If more than a single file is specified, each file is preceded by a header consisting of the string “==> XXX <==” where “XXX” is the name of the file. EXIT STATUS The head utility exits 0 on success, and >0 if an error occurs.
head – display first lines of a file
head [-n count | -c bytes] [file ...]
null
To display the first 500 lines of the file foo: $ head -n 500 foo head can be used in conjunction with tail(1) in the following way to, for example, display only line 500 from the file foo: $ head -n 500 foo | tail -n 1 SEE ALSO tail(1) HISTORY The head command appeared in PWB UNIX. macOS 14.5 April 10, 2018 macOS 14.5
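The byte-count form (-c) and the multi-file header are easy to see with throwaway files; the temporary paths below are placeholders for this sketch, not anything from the manual:

```shell
tmp=$(mktemp -d)
printf 'alpha\nbeta\ngamma\n' > "$tmp/foo"
printf 'one\ntwo\n'           > "$tmp/bar"
# First 6 bytes of foo, i.e. "alpha" plus its newline.
head -c 6 "$tmp/foo"
# With two files, each is preceded by a "==> name <==" header.
head -n 1 "$tmp/foo" "$tmp/bar"
rm -rf "$tmp"
```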
dsymutil
dsymutil links the DWARF debug information found in the object files for an executable by using the debug symbol information contained in its symbol table. By default, the linked debug information is placed in a .dSYM bundle with the same name as the executable.
dsymutil - manipulate archived DWARF debug symbol files
dsymutil [options] executable
--accelerator=<accelerator type> Specify the desired type of accelerator table. Valid options are 'Apple', 'Dwarf', 'Default' and 'None'. --arch <arch> Link DWARF debug information only for the specified CPU architecture types. Architectures may be specified by name. This option can be specified multiple times, once for each desired architecture. By default all CPU architectures are linked, and any architecture that cannot be properly linked causes dsymutil to return an error. --dump-debug-map Dump the executable's debug map (the list of the object files containing the debug information) in YAML format and exit. No DWARF link will take place. --fat64 Use a 64-bit header when emitting universal binaries. --flat, -f Produce a flat dSYM file. A .dwarf extension will be appended to the executable name unless the output file is specified using the -o option. --gen-reproducer Generate a reproducer consisting of the input object files. --help, -h Print this help output. --keep-function-for-static Make a static variable keep the enclosing function even if it would have been omitted otherwise. --minimize, -z When creating a dSYM file, this option suppresses the emission of the .debug_inlines, .debug_pubnames, and .debug_pubtypes sections since dsymutil currently has better equivalents: .apple_names and .apple_types. When used in conjunction with the --update option, it causes redundant accelerator tables to be removed. --no-odr Do not use ODR (One Definition Rule) for uniquing C++ types. --no-output Do the link in memory, but do not emit the result file. --no-swiftmodule-timestamp Don't check the timestamp for swiftmodule files. --num-threads <threads>, -j <threads> Specifies the maximum number of simultaneous threads to use when linking multiple architectures.
--object-prefix-map <prefix=remapped> Remap object file paths (but not source paths) before processing. Use this for Clang objects where the module cache location was remapped using -fdebug-prefix-map, to help dsymutil find the Clang module cache. --oso-prepend-path <path> Specifies a path to prepend to all debug symbol object file paths. --out <filename>, -o <filename> Specifies an alternate path to place the dSYM bundle. The default dSYM bundle path is created by appending .dSYM to the executable name. --papertrail When running dsymutil as part of your build system, it can be desirable for warnings to be part of the end product, rather than just being emitted to the output stream. When enabled, warnings are embedded in the linked DWARF debug information. --remarks-drop-without-debug Drop remarks without valid debug locations. Without this flag, all remarks are kept. --remarks-output-format <format> Specify the format to be used when serializing the linked remarks. --remarks-prepend-path <path> Specify a directory to prepend to the paths of the external remark files. --statistics Print statistics about the contribution of each object file to the linked debug info. This prints a table after linking with the object file name, the size of the debug info in the object file (in bytes) and the size contributed (in bytes) to the linked dSYM. The table is sorted by the output size, listing the object files with the largest contribution first. --symbol-map <bcsymbolmap> Update the existing dSYMs in place using the specified symbol map. -s, --symtab Dumps the symbol table found in the executable or object file(s) and exits. -S Output textual assembly instead of a binary dSYM companion file. --toolchain <toolchain> Embed the toolchain in the dSYM bundle's property list. -u, --update Update an existing dSYM file to contain the latest accelerator tables and other DWARF optimizations. This option will rebuild the '.apple_names' and '.apple_types' hashed accelerator tables.
--use-reproducer <path> Use the object files from the given reproducer path. --verbose Display verbose information when linking. --verify Run the DWARF verifier on the linked DWARF debug info. -v, --version Display the version of the tool. -y Treat executable as a YAML debug-map rather than an executable. EXIT STATUS dsymutil returns 0 if the DWARF debug information was linked successfully. Otherwise, it returns 1. SEE ALSO llvm-dwarfdump(1) AUTHOR Maintained by the LLVM Team (https://llvm.org/). COPYRIGHT 2003-2024, LLVM Project 11 2024-01-28 DSYMUTIL(1)
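A typical invocation links the dSYM bundle next to a freshly built binary and then verifies it. The binary name MyApp is a placeholder, and the guard plus fallbacks keep this sketch harmless on systems without the tool:

```shell
# dsymutil ships with the Xcode toolchain; skip gracefully elsewhere.
if command -v dsymutil >/dev/null 2>&1; then
    # Link the DWARF from MyApp's object files into MyApp.dSYM.
    dsymutil MyApp -o MyApp.dSYM || echo "link failed (MyApp is a placeholder)"
    # Verify the linked DWARF and show per-object contributions.
    dsymutil --verify --statistics MyApp || echo "verify skipped"
else
    echo "dsymutil not installed; skipping"
fi
```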
null
clear
clear clears your screen if this is possible, including its scrollback buffer (if the extended "E3" capability is defined). clear looks in the environment for the terminal type and then in the terminfo database to determine how to clear the screen. clear ignores any command-line parameters that may be present. SEE ALSO tput(1), terminfo(5) This describes ncurses version 5.7 (patch 20081102). clear(1)
clear - clear the terminal screen
clear
null
null
manpath
The manpath utility determines the user's manual search path from the user's PATH, and local configuration files. This result is echoed to the standard output. -L Output manual locales list instead of the manual path. -d Print extra debugging information. -q Suppresses warning messages. IMPLEMENTATION NOTES The manpath utility constructs the manual path from two sources: 1. From each component of the user's PATH for the first of: - pathname/man - pathname/MAN - If pathname ends with /bin: pathname/../share/man and pathname/../man 2. The configuration files listed in the FILES section for MANPATH entries. The information from these locations is then concatenated together. If the -L flag is set, the manpath utility will search the configuration files listed in the FILES section for MANLOCALE entries. ENVIRONMENT The following environment variables affect the execution of manpath: MANLOCALES If set with the -L flag, causes the utility to display a warning and the value, overriding any other configuration found on the system. MANPATH If set, causes the utility to display a warning and the value, overriding any other configuration found on the system. PATH Influences the manual path as described in the IMPLEMENTATION NOTES. FILES /etc/man.conf System configuration file. /usr/local/etc/man.d/*.conf Local configuration files. SEE ALSO apropos(1), man(1), whatis(1), man.conf(5) macOS 14.5 March 11, 2017 macOS 14.5
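The PATH-to-manual-path derivation described above can be observed directly. The PATH value below is illustrative, and the guard keeps the sketch safe on systems without manpath:

```shell
if command -v manpath >/dev/null 2>&1; then
    # For a component such as /usr/local/bin, manpath considers
    # /usr/local/bin/man and, because the component ends in /bin,
    # /usr/local/share/man and /usr/local/man as well.
    PATH=/usr/local/bin:/usr/bin manpath -q || true
else
    echo "manpath not installed; skipping"
fi
```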
manpath – display search path for manual pages
manpath [-Ldq]
null
null
apropos
The man utility finds and displays online manual documentation pages. If mansect is provided, man restricts the search to the specific section of the manual. The sections of the manual are: 1. General Commands Manual 2. System Calls Manual 3. Library Functions Manual 4. Kernel Interfaces Manual 5. File Formats Manual 6. Games Manual 7. Miscellaneous Information Manual 8. System Manager's Manual 9. Kernel Developer's Manual Options that man understands: -M manpath Forces a specific colon separated manual path instead of the default search path. See manpath(1). Overrides the MANPATH environment variable. -P pager Use specified pager. Defaults to “less -sR” if color support is enabled, or “less -s”. Overrides the MANPAGER environment variable, which in turn overrides the PAGER environment variable. -S mansect Restricts manual sections searched to the specified colon delimited list. Defaults to “1:8:2:3:3lua:n:4:5:6:7:9:l”. Overrides the MANSECT environment variable. -a Display all manual pages instead of just the first found for each page argument. -d Print extra debugging information. Repeat for increased verbosity. Does not display the manual page. -f Emulate whatis(1). Note that only a subset of options will have any effect when man is invoked in this mode. See the below description of whatis options for details. -h Display short help message and exit. -k Emulate apropos(1). Note that only a subset of options will have any effect when man is invoked in this mode. See the below description of apropos options for details. -m arch[:machine] Override the default architecture and machine settings allowing lookup of other platform specific manual pages. This option is accepted, but not implemented, on macOS. -o Force use of non-localized manual pages. See IMPLEMENTATION NOTES for how locale specific searches work. Overrides the LC_ALL, LC_CTYPE, and LANG environment variables. -p [eprtv] Use the list of given preprocessors before running nroff(1) or troff(1). 
Valid preprocessors arguments: e eqn(1) p pic(1) r refer(1) t tbl(1) v vgrind(1) Overrides the MANROFFSEQ environment variable. -t Send manual page source through troff(1) allowing transformation of the manual pages to other formats. -w Display the location of the manual page instead of the contents of the manual page. Options that apropos and whatis understand: -d Same as the -d option for man. -s Same as the -S option for man. When man is operated in apropos or whatis emulation mode, only a subset of its options will be honored. Specifically, -d, -M, -P, and -S have equivalent functionality in the apropos and whatis implementation provided. The MANPATH, MANSECT, and MANPAGER environment variables will similarly be honored. IMPLEMENTATION NOTES Locale Specific Searches The man utility supports manual pages in different locales. The search behavior is dictated by the first of three environment variables with a nonempty string: LC_ALL, LC_CTYPE, or LANG. If set, man will search for locale specific manual pages using the following logic: lang_country.charset lang.charset en.charset For example, if LC_ALL is set to “ja_JP.eucJP”, man will search the following paths when considering section 1 manual pages in /usr/share/man: /usr/share/man/ja_JP.eucJP/man1 /usr/share/man/ja.eucJP/man1 /usr/share/man/en.eucJP/man1 /usr/share/man/man1 Displaying Specific Manual Files The man utility also supports displaying a specific manual page if passed a path to the file as long as it contains a ‘/’ character. ENVIRONMENT The following environment variables affect the execution of man: LC_ALL, LC_CTYPE, LANG Used to find locale specific manual pages. Valid values can be found by running the locale(1) command. See IMPLEMENTATION NOTES for details. Influenced by the -o option. MACHINE_ARCH, MACHINE Used to find platform specific manual pages. If unset, the output of “sysctl hw.machine_arch” and “sysctl hw.machine” is used respectively. See IMPLEMENTATION NOTES for details. 
Corresponds to the -m option. MANPATH The standard search path used by man(1) may be changed by specifying a path in the MANPATH environment variable. Invalid paths, or paths without manual databases, are ignored. Overridden by -M. If MANPATH begins with a colon, it is appended to the default list; if it ends with a colon, it is prepended to the default list; or if it contains two adjacent colons, the standard search path is inserted between the colons. If none of these conditions are met, it overrides the standard search path. MANROFFSEQ Used to determine the preprocessors for the manual source before running nroff(1) or troff(1). If unset, defaults to tbl(1). Corresponds to the -p option. MANSECT Restricts manual sections searched to the specified colon delimited list. Corresponds to the -S option. MANWIDTH If set to a numeric value, used as the width manpages should be displayed. Otherwise, if set to a special value “tty”, and output is to a terminal, the pages may be displayed over the whole width of the screen. MANCOLOR If set, enables color support. MANPAGER Program used to display files. If unset, and color support is enabled, “less -sR” is used. If unset, and color support is disabled, then PAGER is used. If that has no value either, “less -s” is used. FILES /etc/man.conf System configuration file. /usr/local/etc/man.d/*.conf Local configuration files. EXIT STATUS The man utility exits 0 on success, and >0 if an error occurs.
man, apropos, whatis – display online manual documentation pages
man [-adho] [-t | -w] [-M manpath] [-P pager] [-S mansect] [-m arch[:machine]] [-p [eprtv]] [mansect] page ... man -f [-d] [-M manpath] [-P pager] [-S mansect] keyword ... whatis [-d] [-s mansect] keyword ... man -k [-d] [-M manpath] [-P pager] [-S mansect] keyword ... apropos [-d] [-s mansect] keyword ...
null
Show the manual page for stat(2): $ man 2 stat Show all manual pages for ‘stat’. $ man -a stat List manual pages which match the regular expression either in the title or in the body: $ man -k '\<copy\>.*archive' Show the manual page for ls(1) and use cat(1) as pager: $ man -P cat ls Show the location of the ls(1) manual page: $ man -w ls SEE ALSO apropos(1), intro(1), mandoc(1), manpath(1), whatis(1), intro(2), intro(3), intro(3lua), intro(4), intro(5), man.conf(5), intro(6), intro(7), mdoc(7), intro(8), intro(9) macOS 14.5 January 9, 2021 macOS 14.5
units
The units program converts quantities expressed in various scales to their equivalents in other scales. It can only handle multiplicative or affine scale changes. units can work interactively by prompting the user for input (see EXAMPLES) or non-interactively, providing a conversion for given arguments from and to. The following options are available: -e, --exponential Same as -o %6e (see the description of the -o flag). -f unitsfile, --file unitsfile Specify the name of the units data file to load. This option may be specified multiple times. -H historyfile, --history historyfile Ignored, for compatibility with GNU units. -h, --help Show an overview of options. -o format, --output-format format Select the output format string by which numbers are printed. Defaults to “%.8g”. -q, --quiet Suppress prompting of the user for units and the display of statistics about the number of units loaded. -t, --terse Only print the result. This is used when calling units from other programs for easy to parse results. -U, --unitsfile Print the location of the default unit file if it exists. Otherwise, print an error message. -v, --version Print the version number (which is fixed at “FreeBSD units”), the path to the units data file and exit. -V, --verbose Print the units in the conversion output. Be more verbose in general. from to Allow a single unit conversion to be done directly from the command line. The program will not print prompts. It will print out the result of the single specified conversion. Both arguments, i.e., from and to, can be just a unit (e.g., “cm”), a quantity (e.g., “42”), or a quantity with a unit (e.g., “42 cm”) Mathematical operators - Powers of units can be specified using the “^” character as shown in the example, or by simple concatenation: “cm3” is equivalent to “cm^3”. See the BUGS section for details on the limitations of exponent values. - Multiplication of units can be specified by using spaces (“ ”), a dash (“-”) or an asterisk (“*”). 
- Division of units is indicated by the slash (“/”). - Division of numbers must be indicated using the vertical bar (“|”). Note that multiplication has a higher precedence than division, so “m/s/s” is the same as “m/s^2” or “m/s s”. Units The conversion information is read from a units data file. The default file includes definitions for most familiar units, abbreviations and metric prefixes. Some constants of nature included are: pi ratio of circumference to diameter c speed of light e charge on an electron g acceleration of gravity force same as g mole Avogadro's number water pressure per unit height of water mercury pressure per unit height of mercury au astronomical unit The unit “pound” is a unit of mass. Compound names are run together so “pound force” is a unit of force. The unit “ounce” is also a unit of mass. The fluid ounce is “floz”. British units that differ from their US counterparts are prefixed with “br”, and currency is prefixed with its country name: “belgiumfranc”, “britainpound”. When searching for a unit, if the specified string does not appear exactly as a unit name, then units will try to remove a trailing “s” or a trailing “es” and check again for a match. Units file format To find out what units are available read the standard units file. If you want to add your own units you can supply your own file. A unit is specified on a single line by giving its name and an equivalence. Be careful to define new units in terms of old ones so that a reduction leads to the primitive units which are marked with “!” characters. The units program will not detect infinite loops that could be caused by careless unit definitions. Comments in the unit definition file begin with a “#” or “/” character at the beginning of a line. Prefixes are defined in the same way as standard units, but with a trailing dash (“-”) at the end of the prefix name. If a unit is not found even after removing trailing “s” or “es”, then it will be checked against the list of prefixes. 
Prefixes will be removed until a legal base unit is identified. ENVIRONMENT PATH The colon-separated list of root directories at which units tries to find /usr/share/misc/definitions.units. For example if PATH is set to “/tmp:/:/usr/local”, no -f flags are provided, and /usr/share/misc/definitions.units is missing then units tries to open the following files as the default units file: /tmp/usr/share/misc/definitions.units, /usr/share/misc/definitions.units, and /usr/local/usr/share/misc/definitions.units. FILES /usr/share/misc/definitions.units The standard units file. EXIT STATUS The units utility exits 0 on success, and >0 if an error occurs.
units – conversion calculator
units [-ehqtUVv] [-f unitsfile] [-o format] [from to]
null
Example 1: Simple conversion of units This example shows how to do simple conversions, for example from gigabytes to bytes: $ units -o %0.f -t '4 gigabytes' bytes 4294967296 The -o %0.f part of the command is required to print the result in a non-scientific notation (e.g, 4294967296 instead of 4.29497e+09). Example 2: Interactive usage Here is an example of an interactive session where the user is prompted for units: You have: meters You want: feet * 3.2808399 / 0.3048 You have: cm^3 You want: gallons * 0.00026417205 / 3785.4118 You have: meters/s You want: furlongs/fortnight * 6012.8848 / 0.00016630952 You have: 1|2 inch You want: cm * 1.27 / 0.78740157 You have: 85 degF You want: degC 29.444444 Example 3: Difference between “|” and “/” division The following command shows how to convert half a meter to centimeters. $ units '1|2 meter' cm * 50 / 0.02 units prints the expected result because the division operator for numbers (“|”) was used. Using the division operator for units (“/”) would result in an error: $ units '1/2 meter' cm conformability error 0.5 / m 0.01 m It is because units interprets “1/2 meter” as “0.5/meter”, which is not conformable to “cm”. Example 4: Simple units file Here is an example of a short units file that defines some basic units: m !a! sec !b! micro- 1e-6 minute 60 sec hour 60 min inch 0.0254 m ft 12 inches mile 5280 ft Example 5: Viewing units and conversions of the default units file The following shell one-liner allows the user to view the contents of the default units file: $ less "$(units -U)" DIAGNOSTICS can't find units file '%s' The default units file is not in its default location (see FILES) and it is not present in any file tree starting with their roots at directories from PATH (see ENVIRONMENT). cap_rights_limit() failed See capsicum(4). conformability error It is not possible to reduce the given units to one common unit: they are not conformable. 
Instead of a conversion, units will display the reduced form for each provided unit: You have: ergs/hour You want: fathoms kg^2 / day conformability error 2.7777778e-11 kg m^2 / sec^3 2.1166667e-05 kg^2 m / sec Could not initialize history See editline(3). dupstr strdup(3) failed. memory for prefixes exceeded in line %d Over 100 prefixes were defined. memory for units exceeded in line %d Over 1000 units were defined. memory overflow in unit reduction The requested conversion involves too many units (see BUGS). redefinition of prefix '%s' on line %d ignored redefinition of unit '%s' on line %d ignored unexpected end of prefix on line %d unexpected end of unit on line %d Units data file not found The default units file is missing. unable to enter capability mode See capsicum(4). unable to open units file '%s' One of the user-specified units files cannot be opened. unit reduces to zero unknown unit '%s' The provided unit cannot be found in the units file. WARNING: conversion of non-proportional quantities. units may fail to convert from to to because the units are not proportional. The warning is printed when a quantity is a part of the to argument. It can be illustrated on an example of conversion from Fahrenheit to Celsius: $ units "degF" "degC" (-> x*0.55555556g -17.777778g) (<- y*1.8g 32g) $ units "degF" "1 degC" WARNING: conversion of non-proportional quantities. (-> x*0.55555556g -17.777778g) (<- y*1.8g 32g) $ units "1 degF" "1 degC" WARNING: conversion of non-proportional quantities. -17.222222 SEE ALSO bc(1) HISTORY The units utility first appeared in NetBSD and was ported to FreeBSD 2.2.0. The manual page was significantly rewritten in FreeBSD 13.0 by Mateusz Piotrowski <0mp@FreeBSD.org>. AUTHORS Adrian Mariano <adrian@cam.cornell.edu> BUGS The effect of including a “/” in a prefix is surprising. Exponents entered by the user can be only one digit. You can work around this by multiplying several terms.
The user must use “|” to indicate division of numbers and “/” to indicate division of symbols. This distinction should not be necessary. The program contains various arbitrary limits on the length of the units converted and on the length of the data file. The program should use a hash table to store units so that it does not take so long to load the units list and check for duplication. It is not possible to convert a negative value. The units program does not handle reductions of long lists of units very well: $ units "$(yes m | head -n 154)" "$(yes cm | head -n 154)" * 1e+308 / 1e-308 $ units "$(yes m | head -n 333)" "$(yes cm | head -n 333)" * inf / 0 $ units "$(yes m | head -n 500)" "$(yes cm | head -n 500)" units: memory overflow in unit reduction conformability error 1 m^500 1 centi cm^499 $ units "$(yes m | head -n 501)" "$(yes cm | head -n 501)" units: memory overflow in unit reduction units: memory overflow in unit reduction units: memory overflow in unit reduction conformability error 1 m^500 1 centi cm^499 macOS 14.5 March 17, 2020 macOS 14.5
curl-config
curl-config displays information about the curl and libcurl installation.
curl-config - Get information about a libcurl installation
curl-config [options]
--ca Displays the built-in path to the CA cert bundle this libcurl uses. --cc Displays the compiler used to build libcurl. --cflags Set of compiler options (CFLAGS) to use when compiling files that use libcurl. Currently that is only the include path to the curl include files. --checkfor [version] Specify the oldest possible libcurl version string you want, and this script will return 0 if the current installation is new enough or it returns 1 and outputs a text saying that the current version is not new enough. (Added in 7.15.4) --configure Displays the arguments given to configure when building curl. --feature Lists what particular main features the installed libcurl was built with. At the time of writing, this list may include SSL, KRB4 or IPv6. Do not assume any particular order. The keywords will be separated by newlines. There may be none, one, or several keywords in the list. --help Displays the available options. --libs Shows the complete set of libs and other linker options you will need in order to link your application with libcurl. --prefix This is the prefix used when libcurl was installed. Libcurl is then installed in $prefix/lib and its header files are installed in $prefix/include and so on. The prefix is set with "configure --prefix". --protocols Lists what particular protocols the installed libcurl was built to support. At the time of writing, this list may include HTTP, HTTPS, FTP, FTPS, FILE, TELNET, LDAP, DICT and many more. Do not assume any particular order. The protocols will be listed using uppercase and are separated by newlines. There may be none, one, or several protocols in the list. (Added in 7.13.0) --ssl-backends Lists the SSL backends that were enabled when libcurl was built. It might be no, one or several names. If more than one name, they will appear comma-separated. (Added in 7.58.0) --static-libs Shows the complete set of libs and other linker options you will need in order to link your application with libcurl statically. 
(Added in 7.17.1) --version Outputs version information about the installed libcurl. --vernum Outputs version information about the installed libcurl, in numerical mode. This shows the version number, in hexadecimal, using 8 bits for each part: major, minor, and patch numbers. This makes libcurl 7.7.4 appear as 070704 and libcurl 12.13.14 appear as 0c0d0e... Note that the initial zero might be omitted. (This option was broken in the 7.15.0 release.)
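A build script commonly combines --checkfor with --cflags and --libs. The source file fetch.c and the 7.58.0 floor are placeholders for this sketch, and the fallbacks keep it harmless where curl-config or the compile step is unavailable:

```shell
if command -v curl-config >/dev/null 2>&1; then
    # --checkfor returns 0 only if the installed libcurl is new enough.
    if curl-config --checkfor 7.58.0; then
        cc $(curl-config --cflags) -o fetch fetch.c $(curl-config --libs) \
            || echo "compile failed (fetch.c is a placeholder)"
    fi
else
    echo "curl-config not installed; skipping"
fi
```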
What linker options do I need when I link with libcurl? $ curl-config --libs What compiler options do I need when I compile using libcurl functions? $ curl-config --cflags How do I know if libcurl was built with SSL support? $ curl-config --feature | grep SSL What's the installed libcurl version? $ curl-config --version How do I build a single file with a one-line command? $ `curl-config --cc --cflags` -o example source.c `curl-config --libs` SEE ALSO curl(1) curl-config January 26 2024 curl-config(1)
htmltree5.34
null
htmltree - Parse the given HTML file(s) and dump the parse tree
htmltree -D3 -w file1 file2 file3 Options: -D[number] sets HTML::TreeBuilder::Debug to that figure. -w turns on $tree->warn(1) for the new tree -h Help message perl v5.34.0 2024-04-13 HTMLTREE(1)
null
null
lwp-mirror5.34
This program can be used to mirror a document from a WWW server. The document is only transferred if the remote copy is newer than the local copy. If the local copy is newer, nothing happens. Use the "-v" option to print the version number of this program. The timeout value is specified with the "-t" option; it is the time that the program will wait for a response from the remote server before it fails. The default unit for the timeout value is seconds. You might append "m" or "h" to the timeout value to make it minutes or hours, respectively. Because this program is implemented using the LWP library, it only supports the protocols that LWP supports. SEE ALSO lwp-request, LWP AUTHOR Gisle Aas <gisle@aas.no> perl v5.34.0 2020-04-14 LWP-MIRROR(1)
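A typical mirror run with an explicit timeout might look like this. The URL and local filename are placeholders, and the guard plus fallback keep the sketch safe offline or where LWP is not installed:

```shell
if command -v lwp-mirror >/dev/null 2>&1; then
    # Fetch index.html only if the remote copy is newer; give up after 10s.
    lwp-mirror -t 10 http://www.example.com/index.html index.html \
        || echo "mirror failed (network unavailable?)"
else
    echo "lwp-mirror not installed; skipping"
fi
```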
lwp-mirror - Simple mirror utility
lwp-mirror [-v] [-t timeout] <url> <local file>
null
null
ptargrep5.34
This utility allows you to apply pattern matching to the contents of files contained in a tar archive. You might use this to identify all files in an archive which contain lines matching the specified pattern and either print out the pathnames or extract the files. The pattern will be used as a Perl regular expression (as opposed to a simple grep regex). Multiple tar archive filenames can be specified - they will each be processed in turn.
ptargrep - Apply pattern matching to the contents of files in a tar archive
ptargrep [options] <pattern> <tar file> ... Options: --basename|-b ignore directory paths from archive --ignore-case|-i do case-insensitive pattern matching --list-only|-l list matching filenames rather than extracting matches --verbose|-v write debugging message to STDERR --help|-? detailed help message
--basename (alias -b) When matching files are extracted, ignore the directory path from the archive and write to the current directory using the basename of the file from the archive. Beware: if two matching files in the archive have the same basename, the second file extracted will overwrite the first. --ignore-case (alias -i) Make pattern matching case-insensitive. --list-only (alias -l) Print the pathname of each matching file from the archive to STDOUT. Without this option, the default behaviour is to extract each matching file. --verbose (alias -v) Log debugging info to STDERR. --help (alias -?) Display this documentation. COPYRIGHT Copyright 2010 Grant McLean <grantm@cpan.org> This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.34.1 2024-04-13 PTARGREP(1)
null
kgetcred
kgetcred obtains a ticket for a service. Usually tickets for services are obtained automatically when needed, but sometimes, for some odd reason, you may want to obtain a particular ticket or one of a special type. Supported options: --canonicalize requests that the KDC canonicalize the principal. -c cache, --cache=cache the credential cache to use. -e enctype, --enctype=enctype encryption type to use. --no-transit-check requests that the KDC doesn't do transit checking. --version --help SEE ALSO kinit(1), klist(1) HEIMDAL March 12, 2004 HEIMDAL
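A usage sketch; the service principal, realm, cache path, and enctype name below are all placeholder examples, and a valid ticket-granting ticket (from kinit) must already be present:

```shell
# Obtain a ticket for a specific service principal; the principal and
# realm are placeholders, and a TGT must already exist in the cache:
kgetcred host/server.example.com@EXAMPLE.COM

# Same request, but into an explicit credential cache with a chosen
# encryption type (cache path and enctype name are examples):
kgetcred --cache=FILE:/tmp/krb5cc_test \
         --enctype=aes256-cts-hmac-sha1-96 host/server.example.com@EXAMPLE.COM
```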
kgetcred – get a ticket for a particular service
kgetcred [--canonicalize] [-c cache | --cache=cache] [-e enctype | --enctype=enctype] [--no-transit-check] [--version] [--help] service
null
null
rev
The rev utility copies the specified files to the standard output, reversing the order of characters in every line. If no files are specified, the standard input is read.
rev – reverse lines of a file
rev [file ...]
null
Reverse the text from stdin: $ echo -e "reverse \t these\ntwo lines" | rev eseht esrever senil owt macOS 14.5 June 27, 2020 macOS 14.5
SplitForks
Tools supporting Carbon development, including /usr/bin/SplitForks, were deprecated with Xcode 6. SplitForks takes a Macintosh HFS or HFS Extended ("HFS+") two-fork file and converts it into AppleDouble format, with the data fork in one file and the resource fork and file system metadata in another. /usr/bin/SplitForks takes the following flags and arguments: -s Strip the resource fork from the original file. The default is to leave the resource fork in place after copying it to its AppleDouble metadata file. -v Produce verbose diagnostics to standard output. file The file to split. FILES foo Data fork of file 'foo' NOTES SplitForks will fail with error 2 if the designated file is not on an HFS or Extended HFS file system volume. SEE ALSO FixupResourceForks(1), MvMac(1), CpMac(1) STANDARDS Consult RFC 1740 for details on AppleSingle/AppleDouble formats. Mac OS X April 12, 2004 Mac OS X
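A usage sketch of the flags above; "myfile" is a placeholder name and must live on an HFS or HFS+ volume for the command to succeed:

```shell
# Split a two-fork file into its data fork plus an AppleDouble metadata
# file, with verbose diagnostics ("myfile" is a placeholder):
/usr/bin/SplitForks -v myfile

# Same, but also strip the resource fork from the original file:
/usr/bin/SplitForks -s myfile
```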
/usr/bin/SplitForks – Divide a two-fork HFS file into AppleDouble format resource and data files. (DEPRECATED)
/usr/bin/SplitForks [-s] [-v] file
null
null
uniq
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If input_file is a single dash (‘-’) or absent, the standard input is read. If output_file is absent, standard output is used for output. The second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are not adjacent, so it may be necessary to sort the files first. The following options are available: -c, --count Precede each output line with the count of the number of times the line occurred in the input, followed by a single space. -d, --repeated Output a single copy of each line that is repeated in the input. -D, --all-repeated [septype] Output all lines that are repeated (like -d, but each copy of the repeated line is written). The optional septype argument controls how to separate groups of repeated lines in the output; it must be one of the following values: none Do not separate groups of lines (this is the default). prepend Output an empty line before each group of lines. separate Output an empty line after each group of lines. -f num, --skip-fields num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one. -i, --ignore-case Case insensitive comparison of lines. -s chars, --skip-chars chars Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f, --skip-fields option, the first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is character one. -u, --unique Only output lines that are not repeated in the input. ENVIRONMENT The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS The uniq utility exits 0 on success, and >0 if an error occurs.
uniq – report or filter out repeated lines in a file
uniq [-c | -d | -D | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
null
Assuming a file named cities.txt with the following content: Madrid Lisbon Madrid The following command reports three different lines since identical elements are not adjacent: $ uniq -u cities.txt Madrid Lisbon Madrid Sort the file and count the number of identical lines: $ sort cities.txt | uniq -c 1 Lisbon 2 Madrid Assuming the following content for the file cities.txt: madrid Madrid Lisbon Show repeated lines ignoring case: $ uniq -d -i cities.txt madrid Same as above, but showing the whole group of repeated lines: $ uniq -D -i cities.txt madrid Madrid Report the number of identical lines ignoring the first character of every line: $ uniq -s 1 -c cities.txt 2 madrid 1 Lisbon COMPATIBILITY The historic +number and -number options have been deprecated but are still supported in this implementation. SEE ALSO sort(1) STANDARDS The uniq utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”) as amended by Cor. 1-2002. HISTORY A uniq command appeared in Version 3 AT&T UNIX. macOS 14.5 June 7, 2020 macOS 14.5
tclsh
Tclsh is a shell-like application that reads Tcl commands from its standard input or from a file and evaluates them. If invoked with no arguments then it runs interactively, reading Tcl commands from standard input and printing command results and error messages to standard output. It runs until the exit command is invoked or until it reaches end-of-file on its standard input. If there exists a file .tclshrc (or tclshrc.tcl on the Windows platforms) in the home directory of the user, interactive tclsh evaluates the file as a Tcl script just before reading the first command from standard input. SCRIPT FILES If tclsh is invoked with arguments then the first few arguments specify the name of a script file, and, optionally, the encoding of the text data stored in that script file. Any additional arguments are made available to the script as variables (see below). Instead of reading commands from standard input tclsh will read Tcl commands from the named file; tclsh will exit when it reaches the end of the file. The end of the file may be marked either by the physical end of the medium, or by the character, “\032” (“\u001a”, control-Z). If this character is present in the file, the tclsh application will read text up to but not including the character. An application that requires this character in the file may safely encode it as “\032”, “\x1a”, or “\u001a”; or may generate it by use of commands such as format or binary. There is no automatic evaluation of .tclshrc when the name of a script file is presented on the tclsh command line, but the script file can always source it if desired. If you create a Tcl script in a file whose first line is #!/usr/bin/tclsh then you can invoke the script file directly from your shell if you mark the file as executable. This assumes that tclsh has been installed in the default location in /usr/bin; if it is installed somewhere else then you will have to modify the above line to match. Many UNIX systems do not allow the #! 
line to exceed about 30 characters in length, so be sure that the tclsh executable can be accessed with a short file name. An even better approach is to start your script files with the following three lines: #!/bin/sh # the next line restarts using tclsh \ exec tclsh "$0" "$@" This approach has three advantages over the approach in the previous paragraph. First, the location of the tclsh binary does not have to be hard-wired into the script: it can be anywhere in your shell search path. Second, it gets around the 30-character file name limit in the previous approach. Third, this approach will work even if tclsh is itself a shell script (this is done on some systems in order to handle multiple architectures or operating systems: the tclsh script selects one of several binaries to run). The three lines cause both sh and tclsh to process the script, but the exec is only executed by sh. sh processes the script first; it treats the second line as a comment and executes the third line. The exec statement causes the shell to stop processing and instead to start up tclsh to reprocess the entire script. When tclsh starts up, it treats all three lines as comments, since the backslash at the end of the second line causes the third line to be treated as part of the comment on the second line. You should note that it is also common practice to install tclsh with its version number as part of the name. This has the advantage of allowing multiple versions of Tcl to exist on the same system at once, but also the disadvantage of making it harder to write scripts that start up uniformly across different versions of Tcl.
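The sh/tclsh restart trick described above can be exercised end to end; the script name hello.tcl is a made-up example:

```shell
# Write a script that sh re-executes under tclsh (hello.tcl is a
# placeholder name). sh runs the exec on line 3, while tclsh sees
# lines 2-3 as one comment because of the trailing backslash:
cat > hello.tcl <<'EOF'
#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" "$@"
puts "argc=$argc argv=$argv"
EOF
chmod +x hello.tcl
./hello.tcl one two
# prints: argc=2 argv=one two
```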
Otherwise, contains the name by which tclsh was invoked. tcl_interactive Contains 1 if tclsh is running interactively (no fileName was specified and standard input is a terminal- like device), 0 otherwise. PROMPTS When tclsh is invoked interactively it normally prompts for each command with “% ”. You can change the prompt by setting the variables tcl_prompt1 and tcl_prompt2. If variable tcl_prompt1 exists then it must consist of a Tcl script to output a prompt; instead of outputting a prompt tclsh will evaluate the script in tcl_prompt1. The variable tcl_prompt2 is used in a similar way when a newline is typed but the current command is not yet complete; if tcl_prompt2 is not set then no prompt is output for incomplete commands. STANDARD CHANNELS See Tcl_StandardChannels for more explanations. SEE ALSO encoding(n), fconfigure(n), tclvars(n) KEYWORDS argument, interpreter, prompt, script file, shell Tcl tclsh(1)
tclsh - Simple shell containing Tcl interpreter
tclsh ?-encoding name? ?fileName arg arg ...?
null
null
binhex5.30.pl
null
null
null
null
null