NAME
       unzip - list, test and extract compressed files in a ZIP archive

SYNOPSIS
       unzip [-Z] [-cflptTuvz[abjnoqsCDKLMUVWX$/:^]] file[.zip] [file(s) ...] [-x xfile(s) ...] [-d exdir]

DESCRIPTION
       unzip will list, test, or extract files from a ZIP archive, commonly found on MS-DOS systems. The default behavior (with no options) is to extract into the current directory (and subdirectories below it) all files from the specified ZIP archive. A companion program, zip(1L), creates ZIP archives; both programs are compatible with archives created by PKWARE's PKZIP and PKUNZIP for MS-DOS, but in many cases the program options or default behaviors differ.

ARGUMENTS
       file[.zip]
              Path of the ZIP archive(s). If the file specification is a wildcard, each matching file is processed in an order determined by the operating system (or file system). Only the filename can be a wildcard; the path itself cannot. Wildcard expressions are similar to those supported in commonly used Unix shells (sh, ksh, csh) and may contain:

              *      matches a sequence of 0 or more characters
              ?      matches exactly 1 character
              [...]  matches any single character found inside the brackets; ranges are specified by a beginning character, a hyphen, and an ending character. If an exclamation point or a caret (`!' or `^') follows the left bracket, then the range of characters within the brackets is complemented (that is, anything except the characters inside the brackets is considered a match). To specify a verbatim left bracket, the three-character sequence ``[[]'' must be used.

              (Be sure to quote any character that might otherwise be interpreted or modified by the operating system, particularly under Unix and VMS.) If no matches are found, the specification is assumed to be a literal filename; and if that also fails, the suffix .zip is appended. Note that self-extracting ZIP files are supported, as with any other ZIP archive; just specify the .exe suffix (if any) explicitly.

       [file(s)]
              An optional list of archive members to be processed, separated by spaces. (VMS versions compiled with VMSCLI defined must delimit files with commas instead. See -v in OPTIONS below.) Regular expressions (wildcards) may be used to match multiple members; see above. Again, be sure to quote expressions that would otherwise be expanded or modified by the operating system.

       [-x xfile(s)]
              An optional list of archive members to be excluded from processing. Since wildcard characters normally match directory separators (`/') (for exceptions see the option -W), this option may be used to exclude any files that are in subdirectories. For example, ``unzip foo *.[ch] -x */*'' would extract all C source files in the main directory, but none in any subdirectories. Without the -x option, all C source files in all directories within the zipfile would be extracted.

       [-d exdir]
              An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction into an arbitrary directory (always assuming one has permission to write to it). This option need not appear at the end of the command line; it is also accepted before the zipfile specification (with the normal options), immediately after the zipfile specification, or between the file(s) and the -x option. The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. In particular, ``-d ~'' (tilde) is expanded by Unix C shells into the name of the user's home directory, but ``-d~'' is treated as a literal subdirectory ``~'' of the current directory.
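The wildcard rules above, including the ``unzip foo *.[ch] -x */*'' exclusion idiom, can be sketched with Python's fnmatch module, which (like unzip's matcher, and unlike pathname globbing) lets `*` and `?` match the `/` directory separator. This is only an approximation for illustration, not unzip's own matching code:

```python
# Sketch of unzip-style wildcard selection using Python's fnmatch,
# whose '*' and '?' also match '/' (as unzip's do by default).
from fnmatch import fnmatchcase

members = ["main.c", "util.h", "README", "src/extra.c", "doc/notes.txt"]

# "unzip foo *.[ch] -x */*": include C sources, exclude anything in a
# subdirectory ("*/*" matches any name containing a directory component).
selected = [m for m in members
            if fnmatchcase(m, "*.[ch]") and not fnmatchcase(m, "*/*")]
print(selected)  # only the top-level C sources remain
```

Note that "src/extra.c" *does* match "*.[ch]" (the `*` crosses the slash); it is removed only by the "*/*" exclusion, which is exactly why the man page's example needs the -x clause.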
OPTIONS
       Note that, in order to support obsolescent hardware, unzip's usage screen is limited to 22 or 23 lines and should therefore be considered only a reminder of the basic unzip syntax rather than an exhaustive list of all possible flags. The exhaustive list follows:

       -Z     zipinfo(1L) mode. If the first option on the command line is -Z, the remaining options are taken to be zipinfo(1L) options. See the appropriate manual page for a description of these options.

       -A     [OS/2, Unix DLL] print extended help for the DLL's programming interface (API).

       -c     extract files to stdout/screen (``CRT''). This option is similar to the -p option except that the name of each file is printed as it is extracted, the -a option is allowed, and ASCII-EBCDIC conversion is automatically performed if appropriate. This option is not listed in the unzip usage screen.

       -f     freshen existing files, i.e., extract only those files that already exist on disk and that are newer than the disk copies. By default unzip queries before overwriting, but the -o option may be used to suppress the queries. Note that under many operating systems, the TZ (timezone) environment variable must be set correctly in order for -f and -u to work properly (under Unix the variable is usually set automatically). The reasons for this are somewhat subtle but have to do with the differences between DOS-format file times (always local time) and Unix-format times (always in GMT/UTC) and the necessity to compare the two. A typical TZ value is ``PST8PDT'' (US Pacific time with automatic adjustment for Daylight Savings Time or ``summer time'').

       -l     list archive files (short format). The names, uncompressed file sizes and modification dates and times of the specified files are printed, along with totals for all files specified. If UnZip was compiled with OS2_EAS defined, the -l option also lists columns for the sizes of stored OS/2 extended attributes (EAs) and OS/2 access control lists (ACLs). In addition, the zipfile comment and individual file comments (if any) are displayed. If a file was archived from a single-case file system (for example, the old MS-DOS FAT file system) and the -L option was given, the filename is converted to lowercase and is prefixed with a caret (^).

       -p     extract files to pipe (stdout). Nothing but the file data is sent to stdout, and the files are always extracted in binary format, just as they are stored (no conversions).

       -t     test archive files. This option extracts each specified file in memory and compares the CRC (cyclic redundancy check, an enhanced checksum) of the expanded file with the original file's stored CRC value.

       -T     [most OSes] set the timestamp on the archive(s) to that of the newest file in each one. This corresponds to zip's -go option except that it can be used on wildcard zipfiles (e.g., ``unzip -T \*.zip'') and is much faster.

       -u     update existing files and create new ones if needed. This option performs the same function as the -f option, extracting (with query) files that are newer than those with the same name on disk, and in addition it extracts those files that do not already exist on disk. See -f above for information on setting the timezone properly.

       -v     list archive files (verbose format) or show diagnostic version info. This option has evolved and now behaves as both an option and a modifier. As an option it has two purposes: when a zipfile is specified with no other options, -v lists archive files verbosely, adding to the basic -l info the compression method, compressed size, compression ratio and 32-bit CRC. In contrast to most of the competing utilities, unzip removes the 12 additional header bytes of encrypted entries from the compressed size numbers. Therefore, compressed size and compression ratio figures are independent of the entry's encryption status and show the correct compression performance. (The complete size of the encrypted compressed data stream for zipfile entries is reported by the more verbose zipinfo(1L) reports; see the separate manual.) When no zipfile is specified (that is, the complete command is simply ``unzip -v''), a diagnostic screen is printed. In addition to the normal header with release date and version, unzip lists the home Info-ZIP ftp site and where to find a list of other ftp and non-ftp sites; the target operating system for which it was compiled, as well as (possibly) the hardware on which it was compiled, the compiler and version used, and the compilation date; any special compilation options that might affect the program's operation (see also DECRYPTION below); and any options stored in environment variables that might do the same (see ENVIRONMENT OPTIONS below). As a modifier it works in conjunction with other options (e.g., -t) to produce more verbose or debugging output; this is not yet fully implemented but will be in future releases.

       -z     display only the archive comment.

MODIFIERS
       -a     convert text files. Ordinarily all files are extracted exactly as they are stored (as ``binary'' files). The -a option causes files identified by zip as text files (those with the `t' label in zipinfo listings, rather than `b') to be automatically extracted as such, converting line endings, end-of-file characters and the character set itself as necessary. (For example, Unix files use line feeds (LFs) for end-of-line (EOL) and have no end-of-file (EOF) marker; Macintoshes use carriage returns (CRs) for EOLs; and most PC operating systems use CR+LF for EOLs and control-Z for EOF. In addition, IBM mainframes and the Michigan Terminal System use EBCDIC rather than the more common ASCII character set, and NT supports Unicode.) Note that zip's identification of text files is by no means perfect; some ``text'' files may actually be binary and vice versa. unzip therefore prints ``[text]'' or ``[binary]'' as a visual check for each file it extracts when using the -a option. The -aa option forces all files to be extracted as text, regardless of the supposed file type. On VMS, see also -S.

       -b     [general] treat all files as binary (no text conversions). This is a shortcut for ---a.

       -b     [Tandem] force the creation of files with filecode type 180 ('C') when extracting Zip entries marked as ``text''. (On Tandem, -a is enabled by default; see above.)

       -b     [VMS] auto-convert binary files (see -a above) to fixed-length, 512-byte record format. Doubling the option (-bb) forces all files to be extracted in this format. When extracting to standard output (-c or -p option in effect), the default conversion of text record delimiters is disabled for binary (-b) resp. all (-bb) files.

       -B     [when compiled with UNIXBACKUP defined] save a backup copy of each overwritten file. The backup file gets the name of the target file with a tilde and optionally a unique sequence number (up to 5 digits) appended. The sequence number is applied whenever another file with the original name plus tilde already exists. When used together with the ``overwrite all'' option -o, numbered backup files are never created. In this case, all backup files are named as the original file with an appended tilde, and existing backup files are deleted without notice. This feature works similarly to the default behavior of emacs(1) in many locations. Example: the old copy of ``foo'' is renamed to ``foo~''. Warning: users should be aware that the -B option does not prevent loss of existing data under all circumstances. For example, when unzip is run in overwrite-all mode, an existing ``foo~'' file is deleted before unzip attempts to rename ``foo'' to ``foo~''. When this rename attempt fails (because of file locks, insufficient privileges, or ...), the extraction of ``foo~'' gets cancelled, but the old backup file is already lost. A similar scenario takes place when the sequence number range for numbered backup files gets exhausted (99999, or 65535 for 16-bit systems). In this case, the backup file with the maximum sequence number is deleted and replaced by the new backup version without notice.

       -C     use case-insensitive matching for the selection of archive entries from the command-line list of extract selection patterns. unzip's philosophy is ``you get what you ask for'' (this is also responsible for the -L/-U change; see the relevant options below). Because some file systems are fully case-sensitive (notably those under the Unix operating system) and because both ZIP archives and unzip itself are portable across platforms, unzip's default behavior is to match both wildcard and literal filenames case-sensitively. That is, specifying ``makefile'' on the command line will only match ``makefile'' in the archive, not ``Makefile'' or ``MAKEFILE'' (and similarly for wildcard specifications). Since this does not correspond to the behavior of many other operating/file systems (for example, OS/2 HPFS, which preserves mixed case but is not sensitive to it), the -C option may be used to force all filename matches to be case-insensitive. In the example above, all three files would then match ``makefile'' (or ``make*'', or similar). The -C option affects file specs in both the normal file list and the excluded-file list (xlist). Please note that the -C option affects neither the search for the zipfile(s) nor the matching of archive entries to existing files on the extraction path. On a case-sensitive file system, unzip will never try to overwrite a file ``FOO'' when extracting an entry ``foo''!

       -D     skip restoration of timestamps for extracted items. Normally, unzip tries to restore all meta-information for extracted items that is supplied in the Zip archive (and does not require privileges or impose a security risk). By specifying -D, unzip is told to suppress restoration of timestamps for directories explicitly created from Zip archive entries. This option only applies to ports that support setting timestamps for directories (currently AtheOS, BeOS, MacOS, OS/2, Unix, VMS, Win32; for other unzip ports, -D has no effect). The doubled option -DD forces suppression of timestamp restoration for all extracted entries (files and directories), setting the timestamps of all extracted entries to the current time. On VMS, the default setting for this option is -D for consistency with the behaviour of BACKUP: file timestamps are restored, while timestamps of extracted directories are left at the current time. To enable restoration of directory timestamps, the negated option --D should be specified. On VMS, the option -D disables timestamp restoration for all extracted Zip archive items. (Here, a single -D on the command line combines with the default -D to do what an explicit -DD does on other systems.)

       -F     [Acorn only] suppress removal of NFS filetype extension from stored filenames.

       -F     [non-Acorn systems supporting long filenames with embedded commas, and only if compiled with ACORN_FTYPE_NFS defined] translate filetype information from ACORN RISC OS extra field blocks into a NFS filetype extension and append it to the names of the extracted files. (When the stored filename appears to already have an appended NFS filetype extension, it is replaced by the info from the extra field.)

       -j     junk paths. The archive's directory structure is not recreated; all files are deposited in the extraction directory (by default, the current one).

       -J     [BeOS only] junk file attributes. The file's BeOS file attributes are not restored, just the file's data.

       -K     [AtheOS, BeOS, Unix only] retain SUID/SGID/Tacky file attributes. Without this flag, these attribute bits are cleared for security reasons.
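The case-folding behaviour that -C adds (described above) can be sketched in a few lines of Python; this is an illustrative model, not unzip's actual selection code. Folding both the pattern and the member name is what makes ``makefile'' select ``Makefile'' and ``MAKEFILE'' as well:

```python
# Sketch of the -C (case-insensitive selection) behaviour: fold both the
# archive member name and the pattern before matching.
from fnmatch import fnmatchcase

def member_match(name, pattern, case_insensitive=False):
    """Match an archive member against a selection pattern, optionally
    ignoring case as unzip's -C option does."""
    if case_insensitive:
        name, pattern = name.lower(), pattern.lower()
    return fnmatchcase(name, pattern)

names = ["makefile", "Makefile", "MAKEFILE"]
exact  = [n for n in names if member_match(n, "makefile")]
folded = [n for n in names if member_match(n, "make*", case_insensitive=True)]
print(exact)   # default: only the exact-case name matches
print(folded)  # with -C semantics: all three match
```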
       -L     convert to lowercase any filename originating on an uppercase-only operating system or file system. (This was unzip's default behavior in releases prior to 5.11; the new default behavior is identical to the old behavior with the -U option, which is now obsolete and will be removed in a future release.) Depending on the archiver, files archived under single-case file systems (VMS, old MS-DOS FAT, etc.) may be stored as all-uppercase names; this can be ugly or inconvenient when extracting to a case-preserving file system such as OS/2 HPFS or a case-sensitive one such as under Unix. By default unzip lists and extracts such filenames exactly as they're stored (excepting truncation, conversion of unsupported characters, etc.); this option causes the names of all files from certain systems to be converted to lowercase. The -LL option forces conversion of every filename to lowercase, regardless of the originating file system.

       -M     pipe all output through an internal pager similar to the Unix more(1) command. At the end of a screenful of output, unzip pauses with a ``--More--'' prompt; the next screenful may be viewed by pressing the Enter (Return) key or the space bar. unzip can be terminated by pressing the ``q'' key and, on some systems, the Enter/Return key. Unlike Unix more(1), there is no forward-searching or editing capability. Also, unzip doesn't notice if long lines wrap at the edge of the screen, effectively resulting in the printing of two or more lines and the likelihood that some text will scroll off the top of the screen before being viewed. On some systems the number of available lines on the screen is not detected, in which case unzip assumes the height is 24 lines.

       -n     never overwrite existing files. If a file already exists, skip the extraction of that file without prompting. By default unzip queries before extracting any file that already exists; the user may choose to overwrite only the current file, overwrite all files, skip extraction of the current file, skip extraction of all existing files, or rename the current file.

       -N     [Amiga] extract file comments as Amiga filenotes. File comments are created with the -c option of zip(1L), or with the -N option of the Amiga port of zip(1L), which stores filenotes as comments.

       -o     overwrite existing files without prompting. This is a dangerous option, so use it with care. (It is often used with -f, however, and is the only way to overwrite directory EAs under OS/2.)

       -P password
              use password to decrypt encrypted zipfile entries (if any). THIS IS INSECURE! Many multi-user operating systems provide ways for any user to see the current command line of any other user; even on stand-alone systems there is always the threat of over-the-shoulder peeking. Storing the plaintext password as part of a command line in an automated script is even worse. Whenever possible, use the non-echoing, interactive prompt to enter passwords. (And where security is truly important, use strong encryption such as Pretty Good Privacy instead of the relatively weak encryption provided by standard zipfile utilities.)

       -q     perform operations quietly (-qq = even quieter). Ordinarily unzip prints the names of the files it's extracting or testing, the extraction methods, any file or zipfile comments that may be stored in the archive, and possibly a summary when finished with each archive. The -q[q] options suppress the printing of some or all of these messages.

       -s     [OS/2, NT, MS-DOS] convert spaces in filenames to underscores. Since all PC operating systems allow spaces in filenames, unzip by default extracts filenames with spaces intact (e.g., ``EA DATA. SF''). This can be awkward, however, since MS-DOS in particular does not gracefully support spaces in filenames. Conversion of spaces to underscores can eliminate the awkwardness in some cases.

       -S     [VMS] convert text files (-a, -aa) into Stream_LF record format, instead of the text-file default, variable-length record format. (Stream_LF is the default record format of VMS unzip. It is applied unless conversion (-a, -aa and/or -b, -bb) is requested or a VMS-specific entry is processed.)

       -U     [UNICODE_SUPPORT only] modify or disable UTF-8 handling. When UNICODE_SUPPORT is available, the option -U forces unzip to escape all non-ASCII characters from UTF-8 coded filenames as ``#Uxxxx'' (for UCS-2 characters, or ``#Lxxxxxx'' for Unicode codepoints needing 3 octets). This option is mainly provided for debugging purposes when the fairly new UTF-8 support is suspected to mangle extracted filenames. The option -UU entirely disables the recognition of UTF-8 encoded filenames; the handling of filename encodings within unzip then falls back to the behaviour of previous versions. [old, obsolete usage] leave filenames uppercase if created under MS-DOS, VMS, etc. See -L above.

       -V     retain (VMS) file version numbers. VMS files can be stored with a version number, in the format file.ext;##. By default the ``;##'' version numbers are stripped, but this option allows them to be retained. (On file systems that limit filenames to particularly short lengths, the version numbers may be truncated or stripped regardless of this option.)

       -W     [only when the WILD_STOP_AT_DIR compile-time option is enabled] modifies the pattern matching routine so that both `?' (single-char wildcard) and `*' (multi-char wildcard) do not match the directory separator character `/'. (The two-character sequence ``**'' acts as a multi-char wildcard that includes the directory separator in its matched characters.) Examples:

                     "*.c"   matches "foo.c" but not "mydir/foo.c"
                     "**.c"  matches both "foo.c" and "mydir/foo.c"
                     "*/*.c" matches "bar/foo.c" but not "baz/bar/foo.c"
                     "??*/*" matches "ab/foo" and "abc/foo" but not "a/foo" or "a/b/foo"

              This modified behaviour is equivalent to the pattern matching style used by the shells of some of UnZip's supported target OSs (one example is Acorn RISC OS). This option may not be available on systems where the Zip archive's internal directory separator character `/' is allowed as a regular character in native operating system filenames. (Currently, UnZip uses the same pattern matching rules for both wildcard zipfile specifications and zip entry selection patterns in most ports. For systems allowing `/' as a regular filename character, the -W option would not work as expected on a wildcard zipfile specification.)

       -X     [VMS, Unix, OS/2, NT, Tandem] restore owner/protection info (UICs and ACL entries) under VMS, or user and group info (UID/GID) under Unix, or access control lists (ACLs) under certain network-enabled versions of OS/2 (Warp Server with IBM LAN Server/Requester 3.0 to 5.0; Warp Connect with IBM Peer 1.0), or security ACLs under Windows NT. In most cases this will require special system privileges, and doubling the option (-XX) under NT instructs unzip to use privileges for extraction; but under Unix, for example, a user who belongs to several groups can restore files owned by any of those groups, as long as the user IDs match his or her own. Note that ordinary file attributes are always restored--this option applies only to optional, extra ownership info available on some operating systems. [NT's access control lists do not appear to be especially compatible with OS/2's, so no attempt is made at cross-platform portability of access privileges. It is not clear under what conditions this would ever be useful anyway.]
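The -W matching rules shown in the examples above can be modelled with a small pattern-to-regex translator. This is an illustrative re-implementation, not UnZip's source (it ignores bracket classes, for instance), but it reproduces the man page's four examples exactly:

```python
# Sketch of -W (WILD_STOP_AT_DIR) matching: '?' and '*' no longer cross '/',
# while the two-character sequence '**' still does.
import re

def translate_w(pattern):
    """Translate a -W-style wildcard pattern into an anchored regex."""
    out, i = [], 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":      # '**' crosses directory boundaries
            out.append(".*"); i += 2
        elif pattern[i] == "*":           # '*' stops at '/'
            out.append("[^/]*"); i += 1
        elif pattern[i] == "?":           # '?' matches one non-'/' character
            out.append("[^/]"); i += 1
        else:                             # everything else is literal
            out.append(re.escape(pattern[i])); i += 1
    return "".join(out) + r"\Z"

def wmatch(name, pattern):
    return re.match(translate_w(pattern), name) is not None

print(wmatch("mydir/foo.c", "*.c"), wmatch("mydir/foo.c", "**.c"))
```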
       -Y     [VMS] treat archived file name endings of ``.nnn'' (where ``nnn'' is a decimal number) as if they were VMS version numbers (``;nnn''). (The default is to treat them as file types.) Example: "a.b.3" -> "a.b;3".

       -$     [MS-DOS, OS/2, NT] restore the volume label if the extraction medium is removable (e.g., a diskette). Doubling the option (-$$) allows fixed media (hard disks) to be labelled as well. By default, volume labels are ignored.

       -/ extensions
              [Acorn only] overrides the extension list supplied by the Unzip$Ext environment variable. During extraction, filename extensions that match one of the items in this extension list are swapped in front of the base name of the extracted file.

       -:     [all but Acorn, VM/CMS, MVS, Tandem] allows extraction of archive members into locations outside of the current ``extraction root folder''. For security reasons, unzip normally removes ``parent dir'' path components (``../'') from the names of extracted files. This safety feature (new for version 5.50) prevents unzip from accidentally writing files to ``sensitive'' areas outside the active extraction folder tree head. The -: option lets unzip switch back to its previous, more liberal behaviour, to allow exact extraction of (older) archives that used ``../'' components to create multiple directory trees at the level of the current extraction folder. This option does not enable writing explicitly to the root directory (``/''); to achieve that, it is necessary to set the extraction target folder to root (e.g. -d /). However, when the -: option is specified, it is still possible to implicitly write to the root directory by specifying enough ``../'' path components within the zip archive. Use this option with extreme caution.

       -^     [Unix only] allow control characters in names of extracted ZIP archive entries. On Unix, a file name may contain any (8-bit) character code with two exceptions: '/' (the directory delimiter) and NUL (0x00, the C string termination indicator), unless the specific file system has more restrictive conventions. Generally, this makes it possible to embed ASCII control characters (or even sophisticated control sequences) in file names, at least on 'native' Unix file systems. However, making use of this Unix ``feature'' is highly suspicious. Embedded control characters in file names might have nasty side effects when displayed on screen by some listing code without sufficient filtering. And, for ordinary users, such file names may be difficult to handle (e.g. when trying to specify them for open, copy, move, or delete operations). Therefore, unzip applies a filter by default that removes potentially dangerous control characters from the extracted file names. The -^ option overrides this filter in the rare case that embedded filename control characters are to be intentionally restored.

       -2     [VMS] unconditionally force conversion of file names to ODS2-compatible names. The default is to adapt to the destination file system, preserving case and extended file name characters on an ODS5 destination file system, and applying the ODS2-compatibility file name filtering on an ODS2 destination file system.

ENVIRONMENT OPTIONS
       unzip's default behavior may be modified via options placed in an environment variable. This can be done with any option, but it is probably most useful with the -a, -L, -C, -q, -o, or -n modifiers: make unzip auto-convert text files by default, make it convert filenames from uppercase systems to lowercase, make it match names case-insensitively, make it quieter, or make it always overwrite or never overwrite files as it extracts them. For example, to make unzip act as quietly as possible, only reporting errors, one would use one of the following commands:

              Unix Bourne shell:          UNZIP=-qq; export UNZIP
              Unix C shell:               setenv UNZIP -qq
              OS/2 or MS-DOS:             set UNZIP=-qq
              VMS (quotes for lowercase): define UNZIP_OPTS "-qq"

       Environment options are, in effect, considered to be just like any other command-line options, except that they are effectively the first options on the command line. To override an environment option, one may use the ``minus operator'' to remove it. For instance, to override one of the quiet-flags in the example above, use the command

              unzip --q[other options] zipfile

       The first hyphen is the normal switch character, and the second is a minus sign, acting on the q option. Thus the effect here is to cancel one quantum of quietness. To cancel both quiet flags, two (or more) minuses may be used:

              unzip -t--q zipfile
              unzip ---qt zipfile

       (the two are equivalent). This may seem awkward or confusing, but it is reasonably intuitive: just ignore the first hyphen and go from there. It is also consistent with the behavior of Unix nice(1). As suggested by the examples above, the default variable names are UNZIP_OPTS for VMS (where the symbol used to install unzip as a foreign command would otherwise be confused with the environment variable), and UNZIP for all other operating systems. For compatibility with zip(1L), UNZIPOPT is also accepted (don't ask). If both UNZIP and UNZIPOPT are defined, however, UNZIP takes precedence. unzip's diagnostic option (-v with no zipfile name) can be used to check the values of all four possible unzip and zipinfo environment variables. The timezone variable (TZ) should be set according to the local timezone in order for the -f and -u options to operate correctly. See the description of -f above for details. This variable may also be necessary to get timestamps of extracted files set correctly.
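The reason TZ matters for -f and -u, as described above, is that ZIP entries carry DOS date/time fields (local wall-clock time, 2-second resolution) while Unix mtimes are UTC seconds. A minimal sketch of the decoding step such a comparison requires (illustrative, not unzip's actual code):

```python
# Sketch: decoding a ZIP entry's DOS date/time pair into a Unix timestamp.
# mktime() consults the TZ setting -- a wrong TZ skews every freshen (-f)
# and update (-u) comparison by the zone offset.
import time

def dos_to_unix(dos_date, dos_time):
    """Interpret a 16-bit DOS date/time pair in the *local* timezone."""
    year   = ((dos_date >> 9) & 0x7F) + 1980
    month  = (dos_date >> 5) & 0x0F
    day    = dos_date & 0x1F
    hour   = (dos_time >> 11) & 0x1F
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2        # stored in units of 2 seconds
    return time.mktime((year, month, day, hour, minute, second, 0, 0, -1))

# 2000-01-01 12:00:00 encoded as a DOS date/time pair
ts = dos_to_unix(0x2821, 0x6000)
print(time.localtime(ts)[:6])
```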
The WIN32 (Win9x/ME/NT4/2K/XP/2K3) port of unzip gets the timezone configuration from the registry, assuming it is correctly set in the Control Panel. The TZ variable is ignored for this port. DECRYPTION Encrypted archives are fully supported by Info-ZIP software, but due to United States export restrictions, de-/encryption support might be disabled in your compiled binary. However, since spring 2000, US export restrictions have been liberated, and our source archives do now include full crypt code. In case you need binary distributions with crypt support enabled, see the file ``WHERE'' in any Info-ZIP source or binary distribution for locations both inside and outside the US. Some compiled versions of unzip may not support decryption. To check a version for crypt support, either attempt to test or extract an encrypted archive, or else check unzip's diagnostic screen (see the -v option above) for ``[decryption]'' as one of the special compilation options. As noted above, the -P option may be used to supply a password on the command line, but at a cost in security. The preferred decryption method is simply to extract normally; if a zipfile member is encrypted, unzip will prompt for the password without echoing what is typed. unzip continues to use the same password as long as it appears to be valid, by testing a 12-byte header on each file. The correct password will always check out against the header, but there is a 1-in-256 chance that an incorrect password will as well. (This is a security feature of the PKWARE zipfile format; it helps prevent brute-force attacks that might otherwise gain a large speed advantage by testing only the header.) In the case that an incorrect password is given but it passes the header test anyway, either an incorrect CRC will be generated for the extracted data or else unzip will fail during the extraction because the ``decrypted'' bytes do not constitute a valid compressed data stream. 
If the first password fails the header check on some file, unzip will prompt for another password, and so on until all files are extracted. If a password is not known, entering a null password (that is, just a carriage return or ``Enter'') is taken as a signal to skip all further prompting. Only unencrypted files in the archive(s) will thereafter be extracted. (In fact, that's not quite true; older versions of zip(1L) and zipcloak(1L) allowed null passwords, so unzip checks each encrypted file to see if the null password works. This may result in ``false positives'' and extraction errors, as noted above.) Archives encrypted with 8-bit passwords (for example, passwords with accented European characters) may not be portable across systems and/or other archivers. This problem stems from the use of multiple encoding methods for such characters, including Latin-1 (ISO 8859-1) and OEM code page 850. DOS PKZIP 2.04g uses the OEM code page; Windows PKZIP 2.50 uses Latin-1 (and is therefore incompatible with DOS PKZIP); Info- ZIP uses the OEM code page on DOS, OS/2 and Win3.x ports but ISO coding (Latin-1 etc.) everywhere else; and Nico Mak's WinZip 6.x does not allow 8-bit passwords at all. UnZip 5.3 (or newer) attempts to use the default character set first (e.g., Latin-1), followed by the alternate one (e.g., OEM code page) to test passwords. On EBCDIC systems, if both of these fail, EBCDIC encoding will be tested as a last resort. (EBCDIC is not tested on non-EBCDIC systems, because there are no known archivers that encrypt using EBCDIC encoding.) ISO character encodings other than Latin-1 are not supported. The new addition of (partially) Unicode (resp. UTF-8) support in UnZip 6.0 has not yet been adapted to the encryption password handling in unzip. 
On systems that use UTF-8 as native character encoding, unzip simply tries decryption with the native UTF-8 encoded password; the built-in attempts to check the password in translated encoding have not yet been adapted for UTF-8 support and will consequently fail.
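The skip-encrypted-members behavior described above can be mirrored outside of unzip. As an illustrative sketch (not part of unzip itself), Python's standard zipfile module supports reading archives protected with the same traditional PKWARE decryption; a supplied password is ignored for unencrypted members, and a wrong or missing password on an encrypted member raises an error that a caller can treat as "skip this file":

```python
import io
import zipfile

def extract_with_password(zip_bytes, password=None):
    """Read every member, skipping encrypted ones the password cannot open.

    Mirrors unzip's behavior: unencrypted members ignore the password,
    and a bad/missing password on an encrypted member is treated as "skip".
    """
    extracted = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            try:
                extracted[info.filename] = zf.read(info.filename, pwd=password)
            except RuntimeError:  # bad or missing password on an encrypted member
                continue
    return extracted

# Build a small unencrypted archive in memory for demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("letter.txt", "hello")

# As with unzip, the password is simply ignored for unencrypted members.
result = extract_with_password(buf.getvalue(), password=b"ignored")
print(result["letter.txt"].decode())
```

Note that the stdlib module only reads this encryption format; creating encrypted archives requires zip(1L) or a third-party library.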
|
To use unzip to extract all members of the archive letters.zip into the current directory and subdirectories below it, creating any subdirectories as necessary:

    unzip letters

To extract all members of letters.zip into the current directory only:

    unzip -j letters

To test letters.zip, printing only a summary message indicating whether the archive is OK or not:

    unzip -tq letters

To test all zipfiles in the current directory, printing only the summaries:

    unzip -tq \*.zip

(The backslash before the asterisk is only required if the shell expands wildcards, as in Unix; double quotes could have been used instead, as in the source examples below.) To extract to standard output all members of letters.zip whose names end in .tex, auto-converting to the local end-of-line convention and piping the output into more(1):

    unzip -ca letters \*.tex | more

To extract the binary file paper1.dvi to standard output and pipe it to a printing program:

    unzip -p articles paper1.dvi | dvips

To extract all FORTRAN and C source files--*.f, *.c, *.h, and Makefile--into the /tmp directory:

    unzip source.zip "*.[fch]" Makefile -d /tmp

(the double quotes are necessary only in Unix and only if globbing is turned on).
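The shell-style member selection used above (patterns such as "*.[fch]") can be approximated programmatically. The following sketch uses Python's stdlib fnmatch against an archive's name list; it is an illustration of the matching rules, not unzip's own implementation:

```python
import fnmatch
import io
import zipfile

def members_matching(zf, patterns):
    """Return member names matching any shell-style pattern, similar to
    the selection done by: unzip source.zip "*.[fch]" Makefile"""
    return [name for name in zf.namelist()
            if any(fnmatch.fnmatchcase(name, p) for p in patterns)]

# Build a small archive in memory for demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name in ("main.c", "util.h", "io.f", "Makefile", "README"):
        zf.writestr(name, "")

with zipfile.ZipFile(buf) as zf:
    selected = members_matching(zf, ["*.[fch]", "Makefile"])

print(sorted(selected))
```

fnmatch.fnmatchcase is used rather than fnmatch.fnmatch so that matching stays case-sensitive on all platforms, matching unzip's default (without -C) behavior.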
To extract all FORTRAN and C source files, regardless of case (e.g., both *.c and *.C, and any makefile, Makefile, MAKEFILE or similar):

    unzip -C source.zip "*.[fch]" makefile -d /tmp

To extract any such files but convert any uppercase MS-DOS or VMS names to lowercase and convert the line-endings of all of the files to the local standard (without respect to any files that might be marked ``binary''):

    unzip -aaCL source.zip "*.[fch]" makefile -d /tmp

To extract only newer versions of the files already in the current directory, without querying (NOTE: be careful of unzipping in one timezone a zipfile created in another--ZIP archives other than those created by Zip 2.1 or later contain no timezone information, and a ``newer'' file from an eastern timezone may, in fact, be older):

    unzip -fo sources

To extract newer versions of the files already in the current directory and to create any files not already there (same caveat as previous example):

    unzip -uo sources

To display a diagnostic screen showing which unzip and zipinfo options are stored in environment variables, whether decryption support was compiled in, the compiler with which unzip was compiled, etc.:

    unzip -v

In the last five examples, assume that UNZIP or UNZIP_OPTS is set to -q. To do a singly quiet listing:

    unzip -l file.zip

To do a doubly quiet listing:

    unzip -ql file.zip

(Note that the ``.zip'' is generally not necessary.) To do a standard listing:

    unzip --ql file.zip
or
    unzip -l-q file.zip
or
    unzip -l--q file.zip

(Extra minuses in options don't hurt.)

TIPS
The current maintainer, being a lazy sort, finds it very useful to define a pair of aliases: tt for ``unzip -tq'' and ii for ``unzip -Z'' (or ``zipinfo''). One may then simply type ``tt zipfile'' to test an archive, something that is worth making a habit of doing. With luck unzip will report ``No errors detected in compressed data of zipfile.zip,'' after which one may breathe a sigh of relief.
The maintainer also finds it useful to set the UNZIP environment variable to ``-aL'' and is tempted to add ``-C'' as well. His ZIPINFO variable is set to ``-z''.

DIAGNOSTICS
The exit status (or error level) approximates the exit codes defined by PKWARE and takes on the following values, except under VMS:

0   normal; no errors or warnings detected.
1   one or more warning errors were encountered, but processing completed successfully anyway. This includes zipfiles where one or more files was skipped due to unsupported compression method or encryption with an unknown password.
2   a generic error in the zipfile format was detected. Processing may have completed successfully anyway; some broken zipfiles created by other archivers have simple work-arounds.
3   a severe error in the zipfile format was detected. Processing probably failed immediately.
4   unzip was unable to allocate memory for one or more buffers during program initialization.
5   unzip was unable to allocate memory or unable to obtain a tty to read the decryption password(s).
6   unzip was unable to allocate memory during decompression to disk.
7   unzip was unable to allocate memory during in-memory decompression.
8   [currently not used]
9   the specified zipfiles were not found.
10  invalid options were specified on the command line.
11  no matching files were found.
50  the disk is (or was) full during extraction.
51  the end of the ZIP archive was encountered prematurely.
80  the user aborted unzip prematurely with control-C (or similar).
81  testing or extraction of one or more files failed due to unsupported compression methods or unsupported decryption.
82  no files were found due to bad decryption password(s). (If even one file is successfully processed, however, the exit status is 1.)

VMS interprets standard Unix (or PC) return values as other, scarier-looking things, so unzip instead maps them into VMS-style status codes.
The current mapping is as follows: 1 (success) for normal exit, 0x7fff0001 for warning errors, and (0x7fff000? + 16*normal_unzip_exit_status) for all other errors, where the `?' is 2 (error) for unzip values 2, 9-11 and 80-82, and 4 (fatal error) for the remaining ones (3-8, 50, 51). In addition, there is a compilation option to expand upon this behavior: defining RETURN_CODES results in a human-readable explanation of what the error status means.

BUGS
Multi-part archives are not yet supported, except in conjunction with zip. (All parts must be concatenated together in order, and then ``zip -F'' (for zip 2.x) or ``zip -FF'' (for zip 3.x) must be performed on the concatenated archive in order to ``fix'' it. Also, zip 3.0 and later can combine multi-part (split) archives into a combined single-file archive using ``zip -s- inarchive -O outarchive''. See the zip 3 manual page for more information.) This will definitely be corrected in the next major release.

Archives read from standard input are not yet supported, except with funzip (and then only the first member of the archive can be extracted).

Archives encrypted with 8-bit passwords (e.g., passwords with accented European characters) may not be portable across systems and/or other archivers. See the discussion in DECRYPTION above.

unzip's -M (``more'') option tries to take into account automatic wrapping of long lines. However, the code may fail to detect the correct wrapping locations. First, TAB characters (and similar control sequences) are not taken into account; they are handled as ordinary printable characters. Second, depending on the actual system / OS port, unzip may not detect the true screen geometry but rather rely on "commonly used" default dimensions. The correct handling of tabs would require the implementation of a query for the actual tabulator setup on the output console.

Dates, times and permissions of stored directories are not restored except under Unix.
(On Windows NT and successors, timestamps are now restored.) [MS-DOS] When extracting or testing files from an archive on a defective floppy diskette, if the ``Fail'' option is chosen from DOS's ``Abort, Retry, Fail?'' message, older versions of unzip may hang the system, requiring a reboot. This problem appears to be fixed, but control-C (or control-Break) can still be used to terminate unzip. Under DEC Ultrix, unzip would sometimes fail on long zipfiles (bad CRC, not always reproducible). This was apparently due either to a hardware bug (cache memory) or an operating system bug (improper handling of page faults?). Since Ultrix has been abandoned in favor of Digital Unix (OSF/1), this may not be an issue anymore. [Unix] Unix special files such as FIFO buffers (named pipes), block devices and character devices are not restored even if they are somehow represented in the zipfile, nor are hard-linked files relinked. Basically the only file types restored by unzip are regular files, directories and symbolic (soft) links. [OS/2] Extended attributes for existing directories are only updated if the -o (``overwrite all'') option is given. This is a limitation of the operating system; because directories only have a creation time associated with them, unzip has no way to determine whether the stored attributes are newer or older than those on disk. In practice this may mean a two-pass approach is required: first unpack the archive normally (with or without freshening/updating existing files), then overwrite just the directory entries (e.g., ``unzip -o foo */''). [VMS] When extracting to another directory, only the [.foo] syntax is accepted for the -d option; the simple Unix foo syntax is silently ignored (as is the less common VMS foo.dir syntax). [VMS] When the file being extracted already exists, unzip's query only allows skipping, overwriting or renaming; there should additionally be a choice for creating a new version of the file. 
In fact, the ``overwrite'' choice does create a new version; the old version is not overwritten or deleted.

SEE ALSO
funzip(1L), zip(1L), zipcloak(1L), zipgrep(1L), zipinfo(1L), zipnote(1L), zipsplit(1L)

URL
The Info-ZIP home page is currently at http://www.info-zip.org/pub/infozip/ or ftp://ftp.info-zip.org/pub/infozip/ .

AUTHORS
The primary Info-ZIP authors (current semi-active members of the Zip-Bugs workgroup) are: Ed Gordon (Zip, general maintenance, shared code, Zip64, Win32, Unix, Unicode); Christian Spieler (UnZip maintenance coordination, VMS, MS-DOS, Win32, shared code, general Zip and UnZip integration and optimization); Onno van der Linden (Zip); Mike White (Win32, Windows GUI, Windows DLLs); Kai Uwe Rommel (OS/2, Win32); Steven M. Schweda (VMS, Unix, support of new features); Paul Kienitz (Amiga, Win32, Unicode); Chris Herborth (BeOS, QNX, Atari); Jonathan Hudson (SMS/QDOS); Sergio Monesi (Acorn RISC OS); Harald Denker (Atari, MVS); John Bush (Solaris, Amiga); Hunter Goatley (VMS, Info-ZIP Site maintenance); Steve Salisbury (Win32); Steve Miller (Windows CE GUI); Johnny Lee (MS-DOS, Win32, Zip64); and Dave Smith (Tandem NSK).

The following people were former members of the Info-ZIP development group and provided major contributions to key parts of the current code: Greg ``Cave Newt'' Roelofs (UnZip, unshrink decompression); Jean-loup Gailly (deflate compression); Mark Adler (inflate decompression, fUnZip).

The author of the original unzip code upon which Info-ZIP's was based is Samuel H. Smith; Carl Mascott did the first Unix port; and David P. Kirschbaum organized and led Info-ZIP in its early days with Keith Petersen hosting the original mailing list at WSMR-SimTel20.

The full list of contributors to UnZip has grown quite large; please refer to the CONTRIBS file in the UnZip source distribution for a relatively complete version.

VERSIONS
v1.2    15 Mar 89   Samuel H. Smith
v2.0     9 Sep 89   Samuel H. Smith
v2.x    fall 1989   many Usenet contributors
v3.0     1 May 90   Info-ZIP (DPK, consolidator)
v3.1    15 Aug 90   Info-ZIP (DPK, consolidator)
v4.0     1 Dec 90   Info-ZIP (GRR, maintainer)
v4.1    12 May 91   Info-ZIP
v4.2    20 Mar 92   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.0    21 Aug 92   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.01   15 Jan 93   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.1     7 Feb 94   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.11    2 Aug 94   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.12   28 Aug 94   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.2    30 Apr 96   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.3    22 Apr 97   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.31   31 May 97   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.32    3 Nov 97   Info-ZIP (Zip-Bugs subgroup, GRR)
v5.4    28 Nov 98   Info-ZIP (Zip-Bugs subgroup, SPC)
v5.41   16 Apr 00   Info-ZIP (Zip-Bugs subgroup, SPC)
v5.42   14 Jan 01   Info-ZIP (Zip-Bugs subgroup, SPC)
v5.5    17 Feb 02   Info-ZIP (Zip-Bugs subgroup, SPC)
v5.51   22 May 04   Info-ZIP (Zip-Bugs subgroup, SPC)
v5.52   28 Feb 05   Info-ZIP (Zip-Bugs subgroup, SPC)
v6.0    20 Apr 09   Info-ZIP (Zip-Bugs subgroup, SPC)

Info-ZIP 20 April 2009 (v6.0) UNZIP(1L)
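The VMS status mapping quoted in the DIAGNOSTICS section above can be spelled out in code. The sketch below encodes the stated rule literally (severity 2 for unzip statuses 2, 9-11 and 80-82; severity 4 for 3-8, 50 and 51) and is an illustration of the described arithmetic, not unzip's actual source:

```python
def vms_status(unzip_status):
    """Map a Unix-style unzip exit status to a VMS-style status code,
    following the rule described in the manual text."""
    if unzip_status == 0:
        return 1                      # success
    if unzip_status == 1:
        return 0x7FFF0001             # warning
    # The '?' digit: 2 (error) for statuses 2, 9-11, 80-82; 4 (fatal) otherwise.
    severity = 2 if unzip_status in (2, 9, 10, 11, 80, 81, 82) else 4
    return 0x7FFF0000 + severity + 16 * unzip_status

print(hex(vms_status(2)))    # generic zipfile-format error
print(hex(vms_status(50)))   # disk full
```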
|
db_stat
|
The db_stat utility displays statistics for Berkeley DB environments. The options are as follows:

-C  Display internal information about the lock region. (The output from this option is often both voluminous and meaningless, and is intended only for debugging.)
    A  Display all information.
    c  Display lock conflict matrix.
    l  Display lockers within hash chains.
    m  Display region memory information.
    o  Display objects within hash chains.
    p  Display lock region parameters.
-c  Display lock region statistics, as described in DB_ENV->lock_stat.
-d  Display database statistics for the specified file, as described in DB->stat. If the database contains multiple databases and the -s flag is not specified, the statistics are for the internal database that describes the other databases the file contains, and not for the file as a whole.
-e  Display current environment statistics.
-f  Display only those database statistics that can be acquired without traversing the database.
-h  Specify a home directory for the database environment; by default, the current working directory is used.
-l  Display log region statistics, as described in DB_ENV->log_stat.
-M  Display internal information about the shared memory buffer pool. (The output from this option is often both voluminous and meaningless, and is intended only for debugging.)
    A  Display all information.
    h  Display buffers within hash chains.
    m  Display region memory information.
-m  Display shared memory buffer pool statistics, as described in DB_ENV->memp_stat.
-N  Do not acquire shared region mutexes while running. Other problems, such as potentially fatal errors in Berkeley DB, will be ignored as well. This option is intended only for debugging errors, and should not be used under any other circumstances.
-P  Specify an environment password.
Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments. -r Display replication statistics, as described in DB_ENV->rep_stat. -s Display statistics for the specified database contained in the file specified with the -d flag. -t Display transaction region statistics, as described in DB_ENV->txn_stat. -V Write the library version number to the standard output, and exit. -Z Reset the statistics after reporting them; valid only with the -c, -e, -l, -m, and -t options. Values normally displayed in quantities of bytes are displayed as a combination of gigabytes (GB), megabytes (MB), kilobytes (KB), and bytes (B). Otherwise, values smaller than 10 million are displayed without any special notation, and values larger than 10 million are displayed as a number followed by "M". The db_stat utility may be used with a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_stat should always be given the chance to detach from the environment and exit gracefully. To cause db_stat to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). The db_stat utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open. SEE ALSO db_archive(1), db_checkpoint(1), db_deadlock(1), db_dump(1), db_load(1), db_printlog(1), db_recover(1), db_upgrade(1), db_verify(1) Darwin December 3, 2003 Darwin
|
db_stat
|
db_stat -d file [-fN] [-h home] [-P password] [-s database]
db_stat [-celmNrtVZ] [-C Aclmop] [-h home] [-M Ahm] [-P password]
| null | null |
rmiregistry
|
The rmiregistry command creates and starts a remote object registry on the specified port on the current host. If the port is omitted, then the registry is started on port 1099. The rmiregistry command produces no output and is typically run in the background, for example: rmiregistry & A remote object registry is a bootstrap naming service that's used by RMI servers on the same host to bind remote objects to names. Clients on local and remote hosts can then look up remote objects and make remote method invocations. The registry is typically used to locate the first remote object on which an application needs to call methods. That object then provides application-specific support for finding other objects. The methods of the java.rmi.registry.LocateRegistry class are used to get a registry operating on the local host or local host and port. The URL-based methods of the java.rmi.Naming class operate on a registry and can be used to: β’ Bind the specified name to a remote object β’ Return an array of the names bound in the registry β’ Return a reference, a stub, for the remote object associated with the specified name β’ Rebind the specified name to a new remote object β’ Destroy the binding for the specified name that's associated with a remote object
|
rmiregistry - create and start a remote object registry on the specified port on the current host
|
rmiregistry [options] [port]
|
options
    Options for the rmiregistry command, described below.
port
    The number of a port on the current host at which to start the remote object registry.
-Joption
    Used with any Java option to pass the option following the -J (no spaces between the -J and the option) to the Java interpreter.

JDK 22 2024 RMIREGISTRY(1)
|
json_pp
|
json_pp converts between some input and output formats (one of them is JSON). This program was copied from json_xs and modified. The default input format is json, and the default output format is json with the pretty option enabled.
|
json_pp - JSON::PP command utility
|
json_pp [-v] [-f from_format] [-t to_format] [-json_opt options_to_json1[,options_to_json2[,...]]]
|
-f from_format
    Reads data in the given format from STDIN. Format types:
        json    as JSON
        eval    as Perl code
-t to_format
    Writes data in the given format to STDOUT. Format types:
        null    no action
        json    as JSON
        dumper  as Data::Dumper
-json_opt options_to_json
    Options passed to JSON::PP. Acceptable options are:
        ascii latin1 utf8 pretty indent space_before space_after relaxed canonical allow_nonref allow_singlequote allow_barekey allow_bignum loose escape_slash indent_length
    Multiple options must be separated by commas:
        Right: -json_opt pretty,canonical
        Wrong: -json_opt pretty -json_opt canonical
-v
    Verbose option; currently it has no effect.
-V
    Prints the version and exits.
|
$ perl -e'print q|{"foo":"γγ","bar":1234567890000000000000000}|' |\ json_pp -f json -t dumper -json_opt pretty,utf8,allow_bignum $VAR1 = { 'bar' => bless( { 'value' => [ '0000000', '0000000', '5678900', '1234' ], 'sign' => '+' }, 'Math::BigInt' ), 'foo' => "\x{3042}\x{3044}" }; $ perl -e'print q|{"foo":"γγ","bar":1234567890000000000000000}|' |\ json_pp -f json -t dumper -json_opt pretty $VAR1 = { 'bar' => '1234567890000000000000000', 'foo' => "\x{e3}\x{81}\x{82}\x{e3}\x{81}\x{84}" }; SEE ALSO JSON::PP, json_xs AUTHOR Makamaka Hannyaharamitu, <makamaka[at]cpan.org> COPYRIGHT AND LICENSE Copyright 2010 by Makamaka Hannyaharamitu This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.38.2 2023-11-28 JSON_PP(1)
|
test-yaml
| null | null | null | null | null |
ptardiff5.30
|
ptardiff is a small program that diffs an extracted archive against an unextracted one, using the Perl module Archive::Tar. This effectively lets you view changes made to an archive's contents. Provide the program with an ARCHIVE_FILE and it will look up all the files within the archive, scan the current working directory for a file with the same name, and diff it against the contents of the archive.
|
ptardiff - program that diffs an extracted archive against an unextracted one
|
ptardiff ARCHIVE_FILE
ptardiff -h

    $ tar -xzf Acme-Buffy-1.3.tar.gz
    $ vi Acme-Buffy-1.3/README
    [...]
    $ ptardiff Acme-Buffy-1.3.tar.gz > README.patch
|
-h
    Prints this help message.

SEE ALSO
tar(1), Archive::Tar.

perl v5.30.3 2024-04-13 PTARDIFF(1)
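The diff-against-archive idea can be sketched with Python's stdlib (tarfile plus difflib). This is an illustrative re-implementation of the concept only, not ptardiff's actual Perl code; the helper name and in-memory setup are invented for the demonstration:

```python
import difflib
import io
import tarfile

def tar_diff(tar_bytes, local_files):
    """Unified diff of each archive member against a same-named local copy,
    in the spirit of ptardiff (archive contents vs. working directory)."""
    out = []
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tf:
        for member in tf.getmembers():
            if not member.isfile():
                continue
            archived = tf.extractfile(member).read().decode().splitlines()
            local = local_files.get(member.name, "").splitlines()
            out.extend(difflib.unified_diff(
                archived, local,
                fromfile=member.name + " (archive)",
                tofile=member.name + " (local)",
                lineterm=""))
    return "\n".join(out)

# Build a tiny archive in memory and pretend the local copy was edited.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    data = b"original line\n"
    info = tarfile.TarInfo("README")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

diff = tar_diff(buf.getvalue(), {"README": "edited line\n"})
print(diff)
```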
|
zprint
|
zprint displays data about Mach zones (allocation buckets). By default, zprint will print out information about all Mach zones. If the optional name is specified, zprint will print information about each zone for which name is a substring of the zone's name. zprint interprets the following options:

-c  (Default) zprint prints zone info in columns. Long zone names are truncated with '$', and spaces are replaced with '.', to allow for sorting by column. Pageable and collectible zones are shown with 'P' and 'C' on the far right, respectively. Zones with preposterously large maximum sizes are shown with '----' in the max size and max num elts fields.
-d  Display deltas over time, showing any zones that have achieved a new maximum current allocation size during the interval. If the total allocation sizes are being displayed for the zones in question, it will also display the deltas if the total allocations have doubled.
-h  (Default) Shows headings for the columns printed with the -c option. It may be useful to override this option when sorting by column.
-l  (Default) Show all wired memory information after the zone information.
-L  Do not show all wired memory information after the zone information.
-s  zprint sorts the zones, showing the zone wasting the most memory first.
-t  For each zone, zprint calculates the total size of allocations from the zone over the life of the zone.
-w  For each zone, zprint calculates how much space is allocated but not currently in use, the space wasted by the zone.

Any option (including default options) can be overridden by specifying the option in upper-case; for example, -C overrides the default option -c.

DIAGNOSTICS
The zprint utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
ioclasscount(1), lskq(1), lsmp(1)

macOS May 2, 2016 macOS
|
zprint - show information about kernel zones
|
zprint [-cdhlLstw] [name]
| null | null |
kinit
|
kinit obtains and caches an initial ticket-granting ticket for principal. If principal is absent, kinit chooses an appropriate principal name based on existing credential cache contents or the local username of the user invoking kinit. Some options modify the choice of principal name.
|
kinit - obtain and cache Kerberos ticket-granting ticket
|
kinit [-V] [-l lifetime] [-s start_time] [-r renewable_life] [-p | -P] [-f | -F] [-a] [-A] [-C] [-E] [-v] [-R] [-k [-i | -t keytab_file]] [-c cache_name] [-n] [-S service_name] [-I input_ccache] [-T armor_ccache] [-X attribute[=value]] [--request-pac | --no-request-pac] [principal]
|
-V display verbose output. -l lifetime (duration string.) Requests a ticket with the lifetime lifetime. For example, kinit -l 5:30 or kinit -l 5h30m. If the -l option is not specified, the default ticket lifetime (configured by each site) is used. Specifying a ticket lifetime longer than the maximum ticket lifetime (configured by each site) will not override the configured maximum ticket lifetime. -s start_time (duration string.) Requests a postdated ticket. Postdated tickets are issued with the invalid flag set, and need to be resubmitted to the KDC for validation before use. start_time specifies the duration of the delay before the ticket can become valid. -r renewable_life (duration string.) Requests renewable tickets, with a total lifetime of renewable_life. -f requests forwardable tickets. -F requests non-forwardable tickets. -p requests proxiable tickets. -P requests non-proxiable tickets. -a requests tickets restricted to the host's local address[es]. -A requests tickets not restricted by address. -C requests canonicalization of the principal name, and allows the KDC to reply with a different client principal from the one requested. -E treats the principal name as an enterprise name. -v requests that the ticket-granting ticket in the cache (with the invalid flag set) be passed to the KDC for validation. If the ticket is within its requested time range, the cache is replaced with the validated ticket. -R requests renewal of the ticket-granting ticket. Note that an expired ticket cannot be renewed, even if the ticket is still within its renewable life. Note that renewable tickets that have expired as reported by klist(1) may sometimes be renewed using this option, because the KDC applies a grace period to account for client-KDC clock skew. See krb5.conf(5) clockskew setting. -k [-i | -t keytab_file] requests a ticket, obtained from a key in the local host's keytab. 
The location of the keytab may be specified with the -t keytab_file option, or with the -i option to specify the use of the default client keytab; otherwise the default keytab will be used. By default, a host ticket for the local host is requested, but any principal may be specified. On a KDC, the special keytab location KDB: can be used to indicate that kinit should open the KDC database and look up the key directly. This permits an administrator to obtain tickets as any principal that supports authentication based on the key. -n Requests anonymous processing. Two types of anonymous principals are supported. For fully anonymous Kerberos, configure pkinit on the KDC and configure pkinit_anchors in the client's krb5.conf(5). Then use the -n option with a principal of the form @REALM (an empty principal name followed by the at-sign and a realm name). If permitted by the KDC, an anonymous ticket will be returned. A second form of anonymous tickets is supported; these realm-exposed tickets hide the identity of the client but not the client's realm. For this mode, use kinit -n with a normal principal name. If supported by the KDC, the principal (but not realm) will be replaced by the anonymous principal. As of release 1.8, the MIT Kerberos KDC only supports fully anonymous operation. -I input_ccache Specifies the name of a credentials cache that already contains a ticket. When obtaining that ticket, if information about how that ticket was obtained was also stored to the cache, that information will be used to affect how new credentials are obtained, including preselecting the same methods of authenticating to the KDC. -T armor_ccache Specifies the name of a credentials cache that already contains a ticket. If supported by the KDC, this cache will be used to armor the request, preventing offline dictionary attacks and allowing the use of additional preauthentication mechanisms. Armoring also makes sure that the response from the KDC is not modified in transit. 
-c cache_name
    use cache_name as the Kerberos 5 credentials (ticket) cache location. If this option is not used, the default cache location is used. The default cache location may vary between systems. If the KRB5CCNAME environment variable is set, its value is used to locate the default cache. If a principal name is specified and the type of the default cache supports a collection (such as the DIR type), an existing cache containing credentials for the principal is selected or a new one is created and becomes the new primary cache. Otherwise, any existing contents of the default cache are destroyed by kinit.
-S service_name
    specify an alternate service name to use when getting initial tickets.
-X attribute[=value]
    specify a pre-authentication attribute and value to be interpreted by pre-authentication modules. The acceptable attribute and value values vary from module to module. This option may be specified multiple times to specify multiple attributes. If no value is specified, it is assumed to be "yes". The following attributes are recognized by the PKINIT pre-authentication mechanism:
    X509_user_identity=value
        specify where to find user's X509 identity information
    X509_anchors=value
        specify where to find trusted X509 anchor information
    flag_RSA_PROTOCOL[=yes]
        specify use of RSA, rather than the default Diffie-Hellman protocol
    disable_freshness[=yes]
        disable sending freshness tokens (for testing purposes only)
--request-pac | --no-request-pac
    mutually exclusive. If --request-pac is set, ask the KDC to include a PAC in authdata; if --no-request-pac is set, ask the KDC not to include a PAC; if neither is set, the KDC will follow its default, which is typically to include a PAC if doing so is supported.

ENVIRONMENT
See kerberos(7) for a description of Kerberos environment variables.

FILES
KCM:                     default location of Kerberos 5 credentials cache
FILE:/etc/krb5.keytab    default location for the local host's keytab.
SEE ALSO klist(1), kdestroy(1), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KINIT(1)
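The duration strings accepted by -l, -s and -r above (e.g. 5:30 or 5h30m) can be parsed mechanically. The sketch below handles just the two common forms shown in the text; it is an illustrative approximation, not MIT krb5's actual duration parser (which accepts additional forms):

```python
import re

def parse_duration(s):
    """Parse '5:30' (hours:minutes[:seconds]) or '5h30m10s' into seconds.
    Only these two common krb5 duration forms are handled here."""
    if ":" in s:
        parts = [int(p) for p in s.split(":")]
        hours, minutes = parts[0], parts[1]
        seconds = parts[2] if len(parts) > 2 else 0
        return hours * 3600 + minutes * 60 + seconds
    total = 0
    for value, unit in re.findall(r"(\d+)([dhms])", s):
        total += int(value) * {"d": 86400, "h": 3600, "m": 60, "s": 1}[unit]
    return total

print(parse_duration("5:30"))    # the manual's kinit -l 5:30 example
print(parse_duration("5h30m"))   # the manual's kinit -l 5h30m example
```

Both example forms from the text denote the same lifetime (5 hours 30 minutes).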
|
mib2c
|
The mib2c tool is designed to take a portion of the MIB tree (as defined by a MIB file) and generate the template C code necessary to implement the relevant management objects within it. In order to implement a new MIB module, three files are necessary: - MIB definition file - C header file - C implementation file. The mib2c tool uses the MIB definition file to produce the two C code files. Thus, mib2c generates a template that you can edit to add logic necessary to obtain information from the operating system or application to complete the module. MIBNODE is the top level mib node you want to generate code for. You must give mib2c a mib node (e.g. ifTable) on the command line, not a mib file. This is the single most common mistake. The mib2c tool accepts both SMIv1 and SMIv2 MIBs. mib2c needs to be able to find and load a MIB file in order to generate C code for the MIB. To enable mib2c to find the MIB file, set the MIBS environment variable to include the MIB file you are using. An example of setting this environment variable is: MIBS=+NET-SNMP-TUTORIAL-MIB or MIBS=ALL The first example ensures that mib2c finds the NET-SNMP-TUTORIAL-MIB mib, in addition to the default MIB modules. The default list of MIB modules is set when the suite is first configured and built and basically corresponds to the list of modules that the agent supports. The second example ensures that mib2c finds all MIBs in the search location for MIB files. The default search location for MIB files is /usr/share/snmp/mibs. This search location can be modified by the MIBDIRS environment variable. Both the MIB files to be loaded and the MIB file search location can also be configured in the snmp.conf file. Please see snmp.conf(5) for more information. The generated *.c and *.h files will be created in the current working directory.
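Setting MIBS for a single mib2c invocation does not require touching the shell profile. The sketch below only constructs the environment and command line from the examples above (the actual subprocess call is left commented out, since mib2c and the example MIB may not be installed); the config file and MIB node names are taken from this manual's own examples:

```python
import os

# Extend the default MIB list for one invocation, as described above:
# a leading '+' appends to, rather than replaces, the default modules.
env = dict(os.environ)
env["MIBS"] = "+NET-SNMP-TUTORIAL-MIB"

cmd = ["mib2c", "-c", "mib2c.scalar.conf", "ucdDemoPublic"]

# subprocess.run(cmd, env=env, check=True)  # would run mib2c with MIBS set
print(env["MIBS"], " ".join(cmd))
```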
|
mib2c -- generate template code for extending the agent
|
mib2c [-h] -c CONFIGFILE [-I PATH] [-f OUTNAME] [-i][-s][-q][-S VAR=VAL] MIBNODE
|
-h Display a help message. -c CONFIGFILE Use CONFIGFILE when generating code. These files will be searched for first in the current directory and then in the /usr/share directory (which is where the default mib2c configuration files can be found). Running mib2c without the -c CONFIGFILE option will display a description of the valid values for CONFIGFILE, that is, the available config files, including new ones that you might author. For example, % mib2c ifTable will display a description of the currently available values for CONFIGFILE. The following values are supported for CONFIGFILE: mib2c.mfd.conf mib2c.scalar.conf mib2c.int_watch.conf mib2c.iterate.conf mib2c.create-dataset.conf mib2c.array-user.conf mib2c.column_defines.conf mib2c.column_enums.conf GENERATING CODE FOR SCALAR OBJECTS: If you're writing code for some scalars, run: mib2c -c mib2c.scalar.conf MIBNODE If you want to magically "tie" integer variables to integer scalars, use: mib2c -c mib2c.int_watch.conf MIBNODE GENERATING CODE FOR TABLES: The recommended configuration file for tables is the MIBs for Dummies, or MFD, configuration file. It hides as much of the SNMP details as possible, generating small, easy to understand functions. It is also the most flexible and well documented configuration file. See the agent/mibgroup/if- mib/ifTable/ifTable*.c files for an example: mib2c -c mib2c.mfd.conf MIBNODE If your table data is kept somewhere else (e.g. it's in the kernel and not in the memory of the agent itself) and you need to "iterate" over it to find the right data for the SNMP row being accessed. See the agent/mibgroup/mibII/vacm_context.c file for an example: mib2c -c mib2c.iterate.conf MIBNODE If your table data is kept in the agent (i.e. it's not located in an external source) and is purely data driven (i.e. you do not need to perform any work when a set occurs). 
See the agent/mibgroup/examples/data_set.c file for an example of such a table: mib2c -c mib2c.create-dataset.conf MIBNODE If your table data is kept in the agent (i.e. it's not located in an external source), and you can keep your data sorted by the table index but you do need to perform work when a set occurs, use the array-user configuration file: mib2c -c mib2c.array-user.conf MIBNODE GENERATING HEADER FILE DEFINITIONS To generate just a header with a define for each column number in your table: mib2c -c mib2c.column_defines.conf MIBNODE To generate just a header with a define for each enum for any column containing enums: mib2c -c mib2c.column_enums.conf MIBNODE GENERATING CODE FOR THE 4.X LINE OF CODE (THE OLDER API) mib2c -c mib2c.old-api.conf MIBNODE -I PATH Search for configuration files in PATH. Multiple paths can be specified using multiple -I switches or by using one switch with a comma-separated list of paths. -f OUTNAME Place the output code into OUTNAME.c and OUTNAME.h. Normally, mib2c will place the output code into files which correspond to the table names it is generating code for, which is probably what you want anyway. -i Do not run indent on the resulting code. -s Do not look for MIBNODE.sed and run sed with it on the resulting code. (Such a sed script is useful for shortening long MIB variable names in the code.) -q Run in "quiet" mode, which minimizes the status messages mib2c generates. -S VAR=VAL Preset a variable VAR, in the mib2c.*.conf file, to the value VAL. None of the existing mib2c configuration files (mib2c.*.conf) currently makes use of this feature, however, so this option should be considered available only for future use.
|
The following generates C template code for the header and implementation files to implement UCD-DEMO-MIB::ucdDemoPublic. % mib2c -c mib2c.scalar.conf ucdDemoPublic writing to ucdDemoPublic.h writing to ucdDemoPublic.c running indent on ucdDemoPublic.h running indent on ucdDemoPublic.c The resulting ucdDemoPublic.c and ucdDemoPublic.h files are generated in the current working directory. The following generates C template code for the header and implementation files for the module to implement TCP-MIB::tcpConnTable. % mib2c -c mib2c.iterate.conf tcpConnTable writing to tcpConnTable.h writing to tcpConnTable.c running indent on tcpConnTable.h running indent on tcpConnTable.c The resulting tcpConnTable.c and tcpConnTable.h files are generated in the current working directory. SEE ALSO snmpcmd(1), snmp.conf(5) V5.6.2.1 05 Apr 2010 MIB2C(1)
|
od
|
The od utility is a filter which displays the specified files, or standard input if no files are specified, in a user specified format. The options are as follows: -A base Specify the input address base. The argument base may be one of d, o, x or n, which specify decimal, octal, hexadecimal addresses or no address, respectively. -a Output named characters. Equivalent to -t a. -B, -o Output octal shorts. Equivalent to -t o2. -b Output octal bytes. Equivalent to -t o1. -c Output C-style escaped characters. Equivalent to -t c. -D Output unsigned decimal ints. Equivalent to -t u4. -d Output unsigned decimal shorts. Equivalent to -t u2. -e, -F Output double-precision floating point numbers. Equivalent to -t fD. -f Output single-precision floating point numbers. Equivalent to -t fF. -H, -X Output hexadecimal ints. Equivalent to -t x4. -h, -x Output hexadecimal shorts. Equivalent to -t x2. -I, -L, -l Output signed decimal longs. Equivalent to -t dL. -i Output signed decimal ints. Equivalent to -t dI. -j skip Skip skip bytes of the combined input before dumping. The number may be followed by one of b, k, m or g which specify the units of the number as blocks (512 bytes), kilobytes, megabytes and gigabytes, respectively. -N length Dump at most length bytes of input. -O Output octal ints. Equivalent to -t o4. -s Output signed decimal shorts. Equivalent to -t d2. -t type Specify the output format. The type argument is a string containing one or more of the following kinds of type specifiers: a Named characters (ASCII). Control characters are displayed using the following names: 000 NUL 001 SOH 002 STX 003 ETX 004 EOT 005 ENQ 006 ACK 007 BEL 008 BS 009 HT 00A NL 00B VT 00C FF 00D CR 00E SO 00F SI 010 DLE 011 DC1 012 DC2 013 DC3 014 DC4 015 NAK 016 SYN 017 ETB 018 CAN 019 EM 01A SUB 01B ESC 01C FS 01D GS 01E RS 01F US 020 SP 07F DEL c Characters in the default character set. 
Non-printing characters are represented as 3-digit octal character codes, except the following characters, which are represented as C escapes: NUL \0 alert \a backspace \b newline \n carriage-return \r tab \t vertical tab \v Multi-byte characters are displayed in the area corresponding to the first byte of the character. The remaining bytes are shown as "**". [d|o|u|x][C|S|I|L|n] Signed decimal (d), octal (o), unsigned decimal (u) or hexadecimal (x). Followed by an optional size specifier, which may be either C (char), S (short), I (int), L (long), or a byte count as a decimal integer. f[F|D|L|n] Floating-point number. Followed by an optional size specifier, which may be either F (float), D (double) or L (long double). -v Write all input data, instead of replacing lines of duplicate values with a "*". Multiple options that specify output format may be used; the output will contain one line for each format. If no output format is specified, -t oS is assumed. ENVIRONMENT The LANG, LC_ALL and LC_CTYPE environment variables affect the execution of od as described in environ(7). EXIT STATUS The od utility exits 0 on success, and >0 if an error occurs.
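The address-base, skip, length, and type flags combine freely. As a small sketch (the input string and byte positions are illustrative, not from the manual), the following dumps two bytes of stdin in hexadecimal with the address column suppressed:

```shell
# -An suppresses the address column, -t x1 prints one-byte hex values,
# -j 1 skips the first input byte, -N 2 stops after two bytes.
printf 'ABC' | od -An -t x1 -j 1 -N 2    # prints the bytes 42 43
```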
|
od - octal, decimal, hex, ASCII dump
|
od [-aBbcDdeFfHhIiLlOosvXx] [-A base] [-j skip] [-N length] [-t type] [[+]offset[.][Bb]] [file ...]
| null |
Dump stdin and show the output using named characters and C-style escaped characters:
$ echo "FreeBSD: The power to serve" | od -a -c
0000000    F   r   e   e   B   S   D   :  sp   T   h   e  sp   p   o   w
           F   r   e   e   B   S   D   :       T   h   e       p   o   w
0000020    e   r  sp   t   o  sp   s   e   r   v   e  nl
           e   r       t   o       s   e   r   v   e  \n
0000034
Dump stdin, skipping the first 13 bytes, using named characters and dumping no more than 5 bytes:
$ echo "FreeBSD: The power to serve" | od -An -a -j 13 -N 5
   p   o   w   e   r
COMPATIBILITY The traditional -s option to extract string constants is not supported; consider using strings(1) instead. SEE ALSO hexdump(1), strings(1) STANDARDS The od utility conforms to IEEE Std 1003.1-2001 ("POSIX.1"). HISTORY An od command appeared in Version 1 AT&T UNIX. macOS 14.5 December 22, 2011
|
bsdtar
|
tar creates and manipulates streaming archive files. This implementation can extract from tar, pax, cpio, zip, jar, ar, xar, rpm, 7-zip, and ISO 9660 cdrom images and can create tar, pax, cpio, ar, zip, 7-zip, and shar archives. The first synopsis form shows a "bundled" option word. This usage is provided for compatibility with historical implementations. See COMPATIBILITY below for details. The other synopsis forms show the preferred usage. The first option to tar is a mode indicator from the following list: -c Create a new archive containing the specified items. The long option form is --create. -r Like -c, but new entries are appended to the archive. Note that this only works on uncompressed archives stored in regular files. The -f option is required. The long option form is --append. -t List archive contents to stdout. The long option form is --list. -u Like -r, but new entries are added only if they have a modification date newer than the corresponding entry in the archive. Note that this only works on uncompressed archives stored in regular files. The -f option is required. The long form is --update. -x Extract to disk from the archive. If a file with the same name appears more than once in the archive, each copy will be extracted, with later copies overwriting (replacing) earlier copies. The long option form is --extract. In -c, -r, or -u mode, each specified file or directory is added to the archive in the order specified on the command line. By default, the contents of each directory are also archived. In extract or list mode, the entire command line is read and parsed before the archive is opened. The pathnames or patterns on the command line indicate which items in the archive should be processed. Patterns are shell-style globbing patterns as documented in tcsh(1).
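A minimal round trip through the c, t, and x modes, using only flags common to bsdtar and GNU tar (all paths here are scratch names for illustration, not from the manual):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
echo "hello" > "$tmp/src/greeting.txt"
tar -cf "$tmp/demo.tar" -C "$tmp" src      # c mode: create the archive
tar -tf "$tmp/demo.tar"                    # t mode: list its contents
mkdir "$tmp/out"
tar -xf "$tmp/demo.tar" -C "$tmp/out"      # x mode: extract elsewhere
cat "$tmp/out/src/greeting.txt"
rm -rf "$tmp"
```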
|
tar - manipulate tape archives
|
tar [bundled-flags ⟨args⟩] [⟨file⟩ | ⟨pattern⟩ ...] tar {-c} [options] [files | directories] tar {-r | -u} -f archive-file [options] [files | directories] tar {-t | -x} [options] [patterns]
|
Unless specifically stated otherwise, options are applicable in all operating modes. @archive (c and r modes only) The specified archive is opened and the entries in it will be appended to the current archive. As a simple example, tar -c -f - newfile @original.tar writes a new archive to standard output containing a file newfile and all of the entries from original.tar. In contrast, tar -c -f - newfile original.tar creates a new archive with only two entries. Similarly, tar -czf - --format pax @- reads an archive from standard input (whose format will be determined automatically) and converts it into a gzip-compressed pax-format archive on stdout. In this way, tar can be used to convert archives from one format to another. -a, --auto-compress (c mode only) Use the archive suffix to decide the format and compression. As a simple example, tar -a -cf archive.tgz source.c source.h creates a new archive with restricted pax format and gzip compression, tar -a -cf archive.tar.bz2.uu source.c source.h creates a new archive with restricted pax format and bzip2 compression and uuencode compression, tar -a -cf archive.zip source.c source.h creates a new archive with zip format, tar -a -jcf archive.tgz source.c source.h ignores the "-j" option and creates a new archive with restricted pax format and gzip compression, and tar -a -jcf archive.xxx source.c source.h, where the suffix is unknown or absent, creates a new archive with restricted pax format and bzip2 compression. --acls (c, r, u, x modes only) Archive or extract POSIX.1e or NFSv4 ACLs. This is the reverse of --no-acls and the default behavior in c, r, and u modes (except on Mac OS X) or if tar is run in x mode as root. On Mac OS X this option translates extended ACLs to NFSv4 ACLs. To store extended ACLs the --mac-metadata option is preferred. -B, --read-full-blocks Ignored for compatibility with other tar(1) implementations. 
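The suffix-driven behavior of -a can be checked directly. This sketch (filenames illustrative) verifies that a .gz suffix selects gzip by inspecting the well-known gzip magic bytes; both bsdtar and GNU tar honor -a:

```shell
tmp=$(mktemp -d)
echo "data" > "$tmp/file.txt"
# -a: pick the compression from the .gz suffix instead of an explicit -z
tar -a -cf "$tmp/archive.tar.gz" -C "$tmp" file.txt
od -An -t x1 -N 2 "$tmp/archive.tar.gz"   # gzip streams start with 1f 8b
rm -rf "$tmp"
```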
-b blocksize, --block-size blocksize Specify the block size, in 512-byte records, for tape drive I/O. As a rule, this argument is only needed when reading from or writing to tape drives, and usually not even then as the default block size of 20 records (10240 bytes) is very common. -C directory, --cd directory, --directory directory In c and r mode, this changes the directory before adding the following files. In x mode, change directories after opening the archive but before extracting entries from the archive. --chroot (x mode only) chroot() to the current directory after processing any -C options and before extracting any files. --clear-nochange-fflags (x mode only) Before removing file system objects to replace them, clear platform-specific file attributes or file flags that might prevent removal. --exclude pattern Do not process files or directories that match the specified pattern. Note that exclusions take precedence over patterns or filenames specified on the command line. --exclude-vcs Do not process files or directories internally used by the version control systems "Arch", "Bazaar", "CVS", "Darcs", "Mercurial", "RCS", "SCCS", "SVN" and "git". --fflags (c, r, u, x modes only) Archive or extract platform-specific file attributes or file flags. This is the reverse of --no-fflags and the default behavior in c, r, and u modes or if tar is run in x mode as root. --format format (c, r, u mode only) Use the specified format for the created archive. Supported formats include "cpio", "pax", "shar", and "ustar". Other formats may also be supported; see libarchive-formats(5) for more information about currently-supported formats. In r and u modes, when extending an existing archive, the format specified here must be compatible with the format of the existing archive on disk. -f file, --file file Read the archive from or write the archive to the specified file. The filename can be - for standard input or standard output. 
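A sketch of --exclude (the pattern and filenames are illustrative): the backup file matching '*~' never enters the archive, since exclusions are applied while the archive is written:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/proj"
touch "$tmp/proj/main.c" "$tmp/proj/main.c~"
# The quoted pattern keeps the shell from expanding it before tar sees it
tar -cf "$tmp/proj.tar" --exclude='*~' -C "$tmp" proj
tar -tf "$tmp/proj.tar"   # lists proj/ and proj/main.c, but not main.c~
rm -rf "$tmp"
```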
The default varies by system; on FreeBSD, the default is /dev/sa0; on Linux, the default is /dev/st0. --gid id Use the provided group id number. On extract, this overrides the group id in the archive; the group name in the archive will be ignored. On create, this overrides the group id read from disk; if --gname is not also specified, the group name will be set to match the group id. --gname name Use the provided group name. On extract, this overrides the group name in the archive; if the provided group name does not exist on the system, the group id (from the archive or from the --gid option) will be used instead. On create, this sets the group name that will be stored in the archive; the name will not be verified against the system group database. -H (c and r modes only) Symbolic links named on the command line will be followed; the target of the link will be archived, not the link itself. -h (c and r modes only) Synonym for -L. -I Synonym for -T. --help Show usage. --hfsCompression (x mode only) Mac OS X specific (v10.6 or later). Compress extracted regular files with HFS+ compression. --ignore-zeros An alias of --options read_concatenated_archives for compatibility with GNU tar. --include pattern Process only files or directories that match the specified pattern. Note that exclusions specified with --exclude take precedence over inclusions. If no inclusions are explicitly specified, all entries are processed by default. The --include option is especially useful when filtering archives. For example, the command tar -c -f new.tar --include='*foo*' @old.tgz creates a new archive new.tar containing only the entries from old.tgz containing the string "foo". -J, --xz (c mode only) Compress the resulting archive with xz(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes XZ compression automatically when reading archives. -j, --bzip, --bzip2, --bunzip2 (c mode only) Compress the resulting archive with bzip2(1). 
In extract or list modes, this option is ignored. Note that this tar implementation recognizes bzip2 compression automatically when reading archives. -k, --keep-old-files (x mode only) Do not overwrite existing files. In particular, if a file appears more than once in an archive, later copies will not overwrite earlier copies. --keep-newer-files (x mode only) Do not overwrite existing files that are newer than the versions appearing in the archive being extracted. -L, --dereference (c and r modes only) All symbolic links will be followed. Normally, symbolic links are archived as such. With this option, the target of the link will be archived instead. -l, --check-links (c and r modes only) Issue a warning message unless all links to each file are archived. --lrzip (c mode only) Compress the resulting archive with lrzip(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes lrzip compression automatically when reading archives. --lz4 (c mode only) Compress the archive with lz4-compatible compression before writing it. In extract or list modes, this option is ignored. Note that this tar implementation recognizes lz4 compression automatically when reading archives. --zstd (c mode only) Compress the archive with zstd-compatible compression before writing it. In extract or list modes, this option is ignored. Note that this tar implementation recognizes zstd compression automatically when reading archives. --lzma (c mode only) Compress the resulting archive with the original LZMA algorithm. In extract or list modes, this option is ignored. Use of this option is discouraged and new archives should be created with --xz instead. Note that this tar implementation recognizes LZMA compression automatically when reading archives. --lzop (c mode only) Compress the resulting archive with lzop(1). In extract or list modes, this option is ignored. 
Note that this tar implementation recognizes LZO compression automatically when reading archives. -m, --modification-time (x mode only) Do not extract modification time. By default, the modification time is set to the time stored in the archive. --mac-metadata (c, r, u and x mode only) Mac OS X specific. Archive or extract extended ACLs and extended file attributes using copyfile(3) in AppleDouble format. This is the reverse of --no-mac-metadata and the default behavior in c, r, and u modes or if tar is run in x mode as root. -n, --norecurse, --no-recursion Do not operate recursively on the content of directories. --newer date (c, r, u modes only) Only include files and directories newer than the specified date. This compares ctime entries. --newer-mtime date (c, r, u modes only) Like --newer, except it compares mtime entries instead of ctime entries. --newer-than file (c, r, u modes only) Only include files and directories newer than the specified file. This compares ctime entries. --newer-mtime-than file (c, r, u modes only) Like --newer-than, except it compares mtime entries instead of ctime entries. --nodump (c and r modes only) Honor the nodump file flag by skipping this file. --nopreserveHFSCompression (x mode only) Mac OS X specific (v10.6 or later). Do not compress extracted regular files which were compressed with HFS+ compression before being archived. By default, compress the regular files again with HFS+ compression. --null (use with -I or -T) Filenames or patterns are separated by null characters, not by newlines. This is often used to read filenames output by the -print0 option to find(1). --no-acls (c, r, u, x modes only) Do not archive or extract POSIX.1e or NFSv4 ACLs. This is the reverse of --acls and the default behavior if tar is run as non-root in x mode (on Mac OS X as any user in c, r, u and x modes). --no-fflags (c, r, u, x modes only) Do not archive or extract file attributes or file flags. 
This is the reverse of --fflags and the default behavior if tar is run as non-root in x mode. --no-mac-metadata (x mode only) Mac OS X specific. Do not archive or extract ACLs and extended file attributes using copyfile(3) in AppleDouble format. This is the reverse of --mac-metadata and the default behavior if tar is run as non-root in x mode. --no-read-sparse (c, r, u modes only) Do not read sparse file information from disk. This is the reverse of --read-sparse. --no-safe-writes (x mode only) Do not create temporary files and use rename(2) to replace the original ones. This is the reverse of --safe-writes. --no-same-owner (x mode only) Do not extract owner and group IDs. This is the reverse of --same-owner and the default behavior if tar is run as non-root. --no-same-permissions (x mode only) Do not extract full permissions (SGID, SUID, sticky bit, file attributes or file flags, extended file attributes and ACLs). This is the reverse of -p and the default behavior if tar is run as non-root. --no-xattrs (c, r, u, x modes only) Do not archive or extract extended file attributes. This is the reverse of --xattrs and the default behavior if tar is run as non-root in x mode. --numeric-owner This is equivalent to --uname "" --gname "". On extract, it causes user and group names in the archive to be ignored in favor of the numeric user and group ids. On create, it causes user and group names to not be stored in the archive. -O, --to-stdout (x, t modes only) In extract (-x) mode, files will be written to standard out rather than being extracted to disk. In list (-t) mode, the file listing will be written to stderr rather than the usual stdout. -o (x mode) Use the user and group of the user running the program rather than those specified in the archive. Note that this has no significance unless -p is specified, and the program is being run by the root user. 
In this case, the file modes and flags from the archive will be restored, but ACLs or owner information in the archive will be discarded. -o (c, r, u mode) A synonym for --format ustar. --older date (c, r, u modes only) Only include files and directories older than the specified date. This compares ctime entries. --older-mtime date (c, r, u modes only) Like --older, except it compares mtime entries instead of ctime entries. --older-than file (c, r, u modes only) Only include files and directories older than the specified file. This compares ctime entries. --older-mtime-than file (c, r, u modes only) Like --older-than, except it compares mtime entries instead of ctime entries. --one-file-system (c, r, and u modes) Do not cross mount points. --options options Select optional behaviors for particular modules. The argument is a text string containing comma-separated keywords and values. These are passed to the modules that handle particular formats to control how those formats will behave. Each option has one of the following forms: key=value The key will be set to the specified value in every module that supports it. Modules that do not support this key will ignore it. key The key will be enabled in every module that supports it. This is equivalent to key=1. !key The key will be disabled in every module that supports it. module:key=value, module:key, module:!key As above, but the corresponding key and value will be provided only to modules whose name matches module. The complete list of supported modules and keys for create and append modes is in archive_write_set_options(3) and for extract and list modes in archive_read_set_options(3). Examples of supported options: iso9660:joliet Support Joliet extensions. This is enabled by default; use !joliet or iso9660:!joliet to disable. iso9660:rockridge Support Rock Ridge extensions. This is enabled by default; use !rockridge or iso9660:!rockridge to disable. 
gzip:compression-level A decimal integer from 1 to 9 specifying the gzip compression level. gzip:timestamp Store timestamp. This is enabled by default; use !timestamp or gzip:!timestamp to disable. lrzip:compression=type Use type as compression method. Supported values are bzip2, gzip, lzo (ultra fast), and zpaq (best, extremely slow). lrzip:compression-level A decimal integer from 1 to 9 specifying the lrzip compression level. lz4:compression-level A decimal integer from 1 to 9 specifying the lz4 compression level. lz4:stream-checksum Enable stream checksum. This is enabled by default; use lz4:!stream-checksum to disable. lz4:block-checksum Enable block checksum (disabled by default). lz4:block-size A decimal integer from 4 to 7 specifying the lz4 compression block size (7 is set by default). lz4:block-dependence Use the previous block of the block being compressed for a compression dictionary to improve compression ratio. zstd:compression-level A decimal integer specifying the zstd compression level. Supported values depend on the library version; common values are from 1 to 22. zstd:threads Specify the number of worker threads to use. Setting threads to the special value 0 makes zstd(1) use as many threads as there are CPU cores on the system. lzop:compression-level A decimal integer from 1 to 9 specifying the lzop compression level. xz:compression-level A decimal integer from 0 to 9 specifying the xz compression level. xz:threads Specify the number of worker threads to use. Setting threads to the special value 0 makes xz(1) use as many threads as there are CPU cores on the system. mtree:keyword The mtree writer module allows you to specify which mtree keywords will be included in the output. Supported keywords include: cksum, device, flags, gid, gname, indent, link, md5, mode, nlink, rmd160, sha1, sha256, sha384, sha512, size, time, uid, uname. The default is equivalent to: "device, flags, gid, gname, link, mode, nlink, size, time, type, uid, uname". 
mtree:all Enables all of the above keywords. You can also use mtree:!all to disable all keywords. mtree:use-set Enable generation of /set lines in the output. mtree:indent Produce human-readable output by indenting options and splitting lines to fit into 80 columns. zip:compression=type Use type as compression method. Supported values are store (uncompressed) and deflate (gzip algorithm). zip:encryption Enable encryption using traditional zip encryption. zip:encryption=type Use type as encryption type. Supported values are zipcrypt (traditional zip encryption), aes128 (WinZip AES-128 encryption) and aes256 (WinZip AES-256 encryption). read_concatenated_archives Ignore zeroed blocks in the archive, which occurs when multiple tar archives have been concatenated together. Without this option, only the contents of the first concatenated archive would be read. This option is comparable to the -i, --ignore-zeros option of GNU tar. If a provided option is not supported by any module, that is a fatal error. -P, --absolute-paths Preserve pathnames. By default, absolute pathnames (those that begin with a / character) have the leading slash removed both when creating archives and extracting from them. Also, tar will refuse to extract archive entries whose pathnames contain .. or whose target directory would be altered by a symlink. This option suppresses these behaviors. -p, --insecure, --preserve-permissions (x mode only) Preserve file permissions. Attempt to restore the full permissions, including file modes, file attributes or file flags, extended file attributes and ACLs, if available, for each item extracted from the archive. This is the reverse of --no-same-permissions and the default if tar is being run as root. It can be partially overridden by also specifying --no-acls, --no-fflags, --no-mac-metadata or --no-xattrs. --passphrase passphrase The passphrase is used to extract or create an encrypted archive. 
Currently, zip is the only supported format that supports encryption. You shouldn't use this option unless you realize how insecure use of this option is. --posix (c, r, u mode only) Synonym for --format pax. -q, --fast-read (x and t mode only) Extract or list only the first archive entry that matches each pattern or filename operand. Exit as soon as each specified pattern or filename has been matched. By default, the archive is always read to the very end, since there can be multiple entries with the same name and, by convention, later entries overwrite earlier entries. This option is provided as a performance optimization. --read-sparse (c, r, u modes only) Read sparse file information from disk. This is the reverse of --no-read-sparse and the default behavior. -S (x mode only) Extract files as sparse files. For every block on disk, check first whether it contains only NUL bytes and, if so, seek over it rather than writing it. This works similarly to the conv=sparse option of dd. -s pattern Modify file or archive member names according to pattern. The pattern has the format /old/new/[ghHprRsS] where old is a basic regular expression, new is the replacement string of the matched part, and the optional trailing letters modify how the replacement is handled. If old is not matched, the pattern is skipped. Within new, ~ is substituted with the match, \1 to \9 with the content of the corresponding captured group. The optional trailing g specifies that matching should continue after the matched part and stop on the first unmatched pattern. The optional trailing s specifies that the pattern applies to the value of symbolic links. The optional trailing p specifies that after a successful substitution the original path name and the new path name should be printed to standard error. Optional trailing H, R, or S characters suppress substitutions for hardlink targets, regular filenames, or symlink targets, respectively. 
Optional trailing h, r, or s characters enable substitutions for hardlink targets, regular filenames, or symlink targets, respectively. The default is hrs, which applies substitutions to all names. In particular, it is never necessary to specify h, r, or s. --safe-writes (x mode only) Extract files atomically. By default tar unlinks the original file with the same name as the extracted file (if it exists), and then creates it immediately under the same name and writes to it. For a short period of time, applications trying to access the file might not find it, or see incomplete results. If --safe-writes is enabled, tar first creates a unique temporary file, then writes the new contents to the temporary file, and finally renames the temporary file to its final name atomically using rename(2). This guarantees that an application accessing the file will either see the old contents or the new contents at all times. --same-owner (x mode only) Extract owner and group IDs. This is the reverse of --no-same-owner and the default behavior if tar is run as root. --strip-components count Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after checking inclusion/exclusion patterns but before security checks. -T filename, --files-from filename In x or t mode, tar will read the list of names to be extracted from filename. In c mode, tar will read names to be archived from filename. The special name "-C" on a line by itself will cause the current directory to be changed to the directory specified on the following line. Names are terminated by newlines unless --null is specified. Note that --null also disables the special handling of lines containing "-C". Note: If you are generating lists of files using find(1), you probably want to use -n as well. --totals (c, r, u modes only) After archiving all files, print a summary to stderr. 
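The -T and --null options pair naturally with find -print0; in this sketch (paths illustrative) a filename containing a space survives the pipeline intact because names are NUL-terminated rather than newline-terminated:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo "x" > "$tmp/data/a file.txt"
# find -print0 emits NUL-terminated names; --null -T - reads them back
( cd "$tmp" && find data -type f -print0 | tar -cf list.tar --null -T - )
tar -tf "$tmp/list.tar"
rm -rf "$tmp"
```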
-U, --unlink, --unlink-first (x mode only) Unlink files before creating them. This can be a minor performance optimization if most files already exist, but can make things slower if most files do not already exist. This flag also causes tar to remove intervening directory symlinks instead of reporting an error. See the SECURITY section below for more details. --uid id Use the provided user id number and ignore the user name from the archive. On create, if --uname is not also specified, the user name will be set to match the user id. --uname name Use the provided user name. On extract, this overrides the user name in the archive; if the provided user name does not exist on the system, it will be ignored and the user id (from the archive or from the --uid option) will be used instead. On create, this sets the user name that will be stored in the archive; the name is not verified against the system user database. --use-compress-program program Pipe the input (in x or t mode) or the output (in c mode) through program instead of using the builtin compression support. -v, --verbose Produce verbose output. In create and extract modes, tar will list each file name as it is read from or written to the archive. In list mode, tar will produce output similar to that of ls(1). An additional -v option will also provide ls-like details in create and extract mode. --version Print version of tar and libarchive, and exit. -w, --confirmation, --interactive Ask for confirmation for every action. -X filename, --exclude-from filename Read a list of exclusion patterns from the specified file. See --exclude for more information about the handling of exclusions. --xattrs (c, r, u, x modes only) Archive or extract extended file attributes. This is the reverse of --no-xattrs and the default behavior in c, r, and u modes or if tar is run in x mode as root. -y (c mode only) Compress the resulting archive with bzip2(1). In extract or list modes, this option is ignored. 
Note that this tar implementation recognizes bzip2 compression automatically when reading archives. -Z, --compress, --uncompress (c mode only) Compress the resulting archive with compress(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes compress compression automatically when reading archives. -z, --gunzip, --gzip (c mode only) Compress the resulting archive with gzip(1). In extract or list modes, this option is ignored. Note that this tar implementation recognizes gzip compression automatically when reading archives. ENVIRONMENT The following environment variables affect the execution of tar: TAR_READER_OPTIONS The default options for format readers and compression readers. The --options option overrides this. TAR_WRITER_OPTIONS The default options for format writers and compression writers. The --options option overrides this. LANG The locale to use. See environ(7) for more information. TAPE The default device. The -f option overrides this. Please see the description of the -f option above for more details. TZ The timezone to use when displaying dates. See environ(7) for more information. EXIT STATUS The tar utility exits 0 on success, and >0 if an error occurs.
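As the notes above state, compression must be requested explicitly when creating an archive but is detected automatically when reading one. A quick sketch (filenames illustrative):

```shell
tmp=$(mktemp -d)
echo "hi" > "$tmp/f"
tar -czf "$tmp/a.tgz" -C "$tmp" f   # -z is needed to gzip on create
tar -tf "$tmp/a.tgz"                # no -z: gzip is recognized automatically
rm -rf "$tmp"
```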
|
The following creates a new archive called file.tar.gz that contains two files source.c and source.h: tar -czf file.tar.gz source.c source.h To view a detailed table of contents for this archive: tar -tvf file.tar.gz To extract all entries from the archive on the default tape drive: tar -x To examine the contents of an ISO 9660 cdrom image: tar -tf image.iso To move file hierarchies, invoke tar as tar -cf - -C srcdir . | tar -xpf - -C destdir or more traditionally cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -) In create mode, the list of files and directories to be archived can also include directory change instructions of the form -Cfoo/baz and archive inclusions of the form @archive-file. For example, the command line tar -c -f new.tar foo1 @old.tgz -C/tmp foo2 will create a new archive new.tar. tar will read the file foo1 from the current directory and add it to the output archive. It will then read each entry from old.tgz and add those entries to the output archive. Finally, it will switch to the /tmp directory and add foo2 to the output archive. An input file in mtree(5) format can be used to create an output archive with arbitrary ownership, permissions, or names that differ from existing data on disk: $ cat input.mtree #mtree usr/bin uid=0 gid=0 mode=0755 type=dir usr/bin/ls uid=0 gid=0 mode=0755 type=file content=myls $ tar -cvf output.tar @input.mtree The --newer and --newer-mtime switches accept a variety of common date and time specifications, including “12 Mar 2005 7:14:29pm”, “2005-03-12 19:14”, “5 minutes ago”, and “19:14 PST May 1”. The --options argument can be used to control various details of archive generation or reading. For example, you can generate mtree output which only contains type, time, and uid keywords: tar -cf file.tar --format=mtree --options='!all,type,time,uid' dir or you can set the compression level used by gzip or xz compression: tar -czf file.tar --options='compression-level=9'. 
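The hierarchy-move idiom shown above can be tried end to end; a runnable sketch with hypothetical directory names:

```shell
cd "$(mktemp -d)"
mkdir -p srcdir/sub destdir
echo contents > srcdir/sub/a.txt
# Two tars joined by a pipe: one writes the archive to stdout, the
# other reads it from stdin, each changing directory first with -C.
tar -cf - -C srcdir . | tar -xpf - -C destdir
cat destdir/sub/a.txt    # -> contents
```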
For more details, see the explanation of the archive_read_set_options() and archive_write_set_options() API calls that are described in archive_read(3) and archive_write(3). COMPATIBILITY The bundled-arguments format is supported for compatibility with historic implementations. It consists of an initial word (with no leading - character) in which each character indicates an option. Arguments follow as separate words. The order of the arguments must match the order of the corresponding characters in the bundled command word. For example, tar tbf 32 file.tar specifies three flags t, b, and f. The b and f flags both require arguments, so there must be two additional items on the command line. The 32 is the argument to the b flag, and file.tar is the argument to the f flag. The mode options c, r, t, u, and x and the options b, f, l, m, o, v, and w comply with SUSv2. For maximum portability, scripts that invoke tar should use the bundled-argument format above, should limit themselves to the c, t, and x modes, and the b, f, m, v, and w options. Additional long options are provided to improve compatibility with other tar implementations. SECURITY Certain security issues are common to many archiving programs, including tar. In particular, carefully-crafted archives can request that tar extract files to locations outside of the target directory. This can potentially be used to cause unwitting users to overwrite files they did not intend to overwrite. If the archive is being extracted by the superuser, any file on the system can potentially be overwritten. There are three ways this can happen. Although tar has mechanisms to protect against each one, savvy users should be aware of the implications: • Archive entries can have absolute pathnames. By default, tar removes the leading / character from filenames before restoring them to guard against this problem. • Archive entries can have pathnames that include .. components. 
By default, tar will not extract files containing .. components in their pathname. • Archive entries can exploit symbolic links to restore files to other directories. An archive can restore a symbolic link to another directory, then use that link to restore a file into that directory. To guard against this, tar checks each extracted path for symlinks. If the final path element is a symlink, it will be removed and replaced with the archive entry. If -U is specified, any intermediate symlink will also be unconditionally removed. If neither -U nor -P is specified, tar will refuse to extract the entry. To protect yourself, you should be wary of any archives that come from untrusted sources. You should examine the contents of an archive with tar -tf filename before extraction. You should use the -k option to ensure that tar will not overwrite any existing files or the -U option to remove any pre-existing files. You should generally not extract archives while running with super-user privileges. Note that the -P option to tar disables the security checks above and allows you to extract an archive while preserving any absolute pathnames, .. components, or symlinks to other directories. SEE ALSO bzip2(1), compress(1), cpio(1), gzip(1), mt(1), pax(1), shar(1), xz(1), libarchive(3), libarchive-formats(5), tar(5) STANDARDS There is no current POSIX standard for the tar command; it appeared in ISO/IEC 9945-1:1996 (“POSIX.1”) but was dropped from IEEE Std 1003.1-2001 (“POSIX.1”). The options supported by this implementation were developed by surveying a number of existing tar implementations as well as the old POSIX specification for tar and the current POSIX specification for pax. The ustar and pax interchange file formats are defined by IEEE Std 1003.1-2001 (“POSIX.1”) for the pax command. HISTORY A tar command appeared in Seventh Edition Unix, which was released in January, 1979. There have been numerous other implementations, many of which extended the file format. 
John Gilmore's pdtar public-domain implementation (circa November, 1987) was quite influential, and formed the basis of GNU tar. GNU tar was included as the standard system tar in FreeBSD beginning with FreeBSD 1.0. This is a complete re-implementation based on the libarchive(3) library. It was first released with FreeBSD 5.4 in May, 2005. BUGS This program follows ISO/IEC 9945-1:1996 (“POSIX.1”) for the definition of the -l option. Note that GNU tar prior to version 1.15 treated -l as a synonym for the --one-file-system option. The -C dir option may differ from historic implementations. All archive output is written in correctly-sized blocks, even if the output is being compressed. Whether or not the last output block is padded to a full block size varies depending on the format and the output device. For tar and cpio formats, the last block of output is padded to a full block size if the output is being written to standard output or to a character or block device such as a tape drive. If the output is being written to a regular file, the last block will not be padded. Many compressors, including gzip(1) and bzip2(1), complain about the null padding when decompressing an archive created by tar, although they still extract it correctly. The compression and decompression is implemented internally, so there may be insignificant differences between the compressed output generated by tar -czf - file and that generated by tar -cf - file | gzip The default should be to read and write archives to the standard I/O paths, but tradition (and POSIX) dictates otherwise. The r and u modes require that the archive be uncompressed and located in a regular file on disk. Other archives can be modified using c mode with the @archive-file extension. To archive a file called @foo or -foo you must specify it as ./@foo or ./-foo, respectively. In create mode, a leading ./ is always removed. A leading / is stripped unless the -P option is specified. 
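The ./ escape described above for files named @foo or -foo can be sketched as follows (file names are hypothetical):

```shell
cd "$(mktemp -d)"
echo x > ./@foo
echo y > ./-foo
# The leading ./ keeps @foo from being read as an archive inclusion
# and -foo from being parsed as a command-line option.
tar -cf t.tar ./@foo ./-foo
tar -tf t.tar
```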
There needs to be better support for file selection on both create and extract. There is not yet any support for multi-volume archives. Converting between dissimilar archive formats (such as tar and cpio) using the @- convention can cause hard link information to be lost. (This is a consequence of the incompatible ways that different archive formats store hardlink information.) macOS 14.5 January 31, 2020 macOS 14.5
|
perl
|
Perl officially stands for Practical Extraction and Report Language, except when it doesn't. Perl was originally a language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. It quickly became a good language for many system management tasks. Over the years, Perl has grown into a general-purpose programming language. It's widely used for everything from quick "one-liners" to full-scale application development. The language is intended to be practical (easy to use, efficient, complete) rather than beautiful (tiny, elegant, minimal). It combines (in the author's opinion, anyway) some of the best features of sed, awk, and sh, making it familiar and easy to use for Unix users to whip up quick solutions to annoying problems. Its general-purpose programming facilities support procedural, functional, and object- oriented programming paradigms, making Perl a comfortable language for the long haul on major projects, whatever your bent. Perl's roots in text processing haven't been forgotten over the years. It still boasts some of the most powerful regular expressions to be found anywhere, and its support for Unicode text is world-class. It handles all kinds of structured text, too, through an extensive collection of extensions. Those libraries, collected in the CPAN, provide ready-made solutions to an astounding array of problems. When they haven't set the standard themselves, they steal from the best -- just like Perl itself. AVAILABILITY Perl is available for most operating systems, including virtually all Unix-like platforms. See "Supported Platforms" in perlport for a listing. ENVIRONMENT See "ENVIRONMENT" in perlrun. AUTHOR Larry Wall <larry@wall.org>, with the help of oodles of other folks. 
If your Perl success stories and testimonials may be of help to others who wish to advocate the use of Perl in their applications, or if you wish to simply express your gratitude to Larry and the Perl developers, please write to perl-thanks@perl.org . FILES "@INC" locations of perl libraries "@INC" above is a reference to the built-in variable of the same name; see perlvar for more information. SEE ALSO https://www.perl.org/ the Perl homepage https://www.perl.com/ Perl articles https://www.cpan.org/ the Comprehensive Perl Archive https://www.pm.org/ the Perl Mongers DIAGNOSTICS Using the "use strict" pragma ensures that all variables are properly declared and prevents other misuses of legacy Perl features. These are enabled by default within the scope of "use v5.12" (or higher). The "use warnings" pragma produces some lovely diagnostics. It is enabled by default when you say "use v5.35" (or higher). One can also use the -w flag, but its use is normally discouraged, because it gets applied to all executed Perl code, including that not under your control. See perldiag for explanations of all Perl's diagnostics. The "use diagnostics" pragma automatically turns Perl's normally terse warnings and errors into these longer forms. Compilation errors will tell you the line number of the error, with an indication of the next token or token type that was to be examined. (In a script passed to Perl via -e switches, each -e is counted as one line.) Setuid scripts have additional constraints that can produce error messages such as "Insecure dependency". See perlsec. Did we mention that you should definitely consider using the use warnings pragma? BUGS The behavior implied by the use warnings pragma is not mandatory. Perl is at the mercy of your machine's definitions of various operations such as type casting, atof(), and floating-point output with sprintf(). If your stdio requires a seek or eof between reads and writes on a particular stream, so does Perl. 
(This doesn't apply to sysread() and syswrite().) While none of the built-in data types have any arbitrary size limits (apart from memory size), there are still a few arbitrary limits: a given variable name may not be longer than 251 characters. Line numbers displayed by diagnostics are internally stored as short integers, so they are limited to a maximum of 65535 (higher numbers usually being affected by wraparound). You may submit your bug reports (be sure to include full configuration information as output by the myconfig program in the perl source tree, or by "perl -V") to <https://github.com/Perl/perl5/issues>. Perl actually stands for Pathologically Eclectic Rubbish Lister, but don't tell anyone I said that. NOTES The Perl motto is "There's more than one way to do it." Divining how many more is left as an exercise to the reader. The three principal virtues of a programmer are Laziness, Impatience, and Hubris. See the Camel Book for why. perl v5.38.2 2023-11-28 PERL(1)
|
perl - The Perl 5 language interpreter
|
perl [ -sTtuUWX ] [ -hv ] [ -V[:configvar] ] [ -cw ] [ -d[t][:debugger] ] [ -D[number/list] ] [ -pna ] [ -Fpattern ] [ -l[octal] ] [ -0[octal/hexadecimal] ] [ -Idir ] [ -m[-]module ] [ -M[-]'module...' ] [ -f ] [ -C [number/list] ] [ -S ] [ -x[dir] ] [ -i[extension] ] [ [-e|-E] 'command' ] [ -- ] [ programfile ] [ argument ]... For more information on these options, you can run "perldoc perlrun". GETTING HELP The perldoc program gives you access to all the documentation that comes with Perl. You can get more documentation, tutorials and community support online at <https://www.perl.org/>. If you're new to Perl, you should start by running "perldoc perlintro", which is a general intro for beginners and provides some background to help you navigate the rest of Perl's extensive documentation. Run "perldoc perldoc" to learn more things you can do with perldoc. For ease of access, the Perl manual has been split up into several sections. Overview perl Perl overview (this section) perlintro Perl introduction for beginners perlrun Perl execution and options perltoc Perl documentation table of contents Tutorials perlreftut Perl references short introduction perldsc Perl data structures intro perllol Perl data structures: arrays of arrays perlrequick Perl regular expressions quick start perlretut Perl regular expressions tutorial perlootut Perl OO tutorial for beginners perlperf Perl Performance and Optimization Techniques perlstyle Perl style guide perlcheat Perl cheat sheet perltrap Perl traps for the unwary perldebtut Perl debugging tutorial perlfaq Perl frequently asked questions perlfaq1 General Questions About Perl perlfaq2 Obtaining and Learning about Perl perlfaq3 Programming Tools perlfaq4 Data Manipulation perlfaq5 Files and Formats perlfaq6 Regexes perlfaq7 Perl Language Issues perlfaq8 System Interaction perlfaq9 Networking Reference Manual perlsyn Perl syntax perldata Perl data structures perlop Perl operators and precedence perlsub Perl subroutines perlfunc 
Perl built-in functions perlopentut Perl open() tutorial perlpacktut Perl pack() and unpack() tutorial perlpod Perl plain old documentation perlpodspec Perl plain old documentation format specification perldocstyle Perl style guide for core docs perlpodstyle Perl POD style guide perldiag Perl diagnostic messages perldeprecation Perl deprecations perllexwarn Perl warnings and their control perldebug Perl debugging perlvar Perl predefined variables perlre Perl regular expressions, the rest of the story perlrebackslash Perl regular expression backslash sequences perlrecharclass Perl regular expression character classes perlreref Perl regular expressions quick reference perlref Perl references, the rest of the story perlform Perl formats perlobj Perl objects perltie Perl objects hidden behind simple variables perlclass Perl class syntax perldbmfilter Perl DBM filters perlipc Perl interprocess communication perlfork Perl fork() information perlnumber Perl number semantics perlthrtut Perl threads tutorial perlport Perl portability guide perllocale Perl locale support perluniintro Perl Unicode introduction perlunicode Perl Unicode support perlunicook Perl Unicode cookbook perlunifaq Perl Unicode FAQ perluniprops Index of Unicode properties in Perl perlunitut Perl Unicode tutorial perlebcdic Considerations for running Perl on EBCDIC platforms perlsec Perl security perlsecpolicy Perl security report handling policy perlmod Perl modules: how they work perlmodlib Perl modules: how to write and use perlmodstyle Perl modules: how to write modules with style perlmodinstall Perl modules: how to install from CPAN perlnewmod Perl modules: preparing a new module for distribution perlpragma Perl modules: writing a user pragma perlutil utilities packaged with the Perl distribution perlfilter Perl source filters perldtrace Perl's support for DTrace perlglossary Perl Glossary Internals and C Language Interface perlembed Perl ways to embed perl in your C or C++ application perldebguts 
Perl debugging guts and tips perlxstut Perl XS tutorial perlxs Perl XS application programming interface perlxstypemap Perl XS C/Perl type conversion tools perlclib Internal replacements for standard C library functions perlguts Perl internal functions for those doing extensions perlcall Perl calling conventions from C perlmroapi Perl method resolution plugin interface perlreapi Perl regular expression plugin interface perlreguts Perl regular expression engine internals perlclassguts Internals of class syntax perlapi Perl API listing (autogenerated) perlintern Perl internal functions (autogenerated) perliol C API for Perl's implementation of IO in Layers perlapio Perl internal IO abstraction interface perlhack Perl hackers guide perlsource Guide to the Perl source tree perlinterp Overview of the Perl interpreter source and how it works perlhacktut Walk through the creation of a simple C code patch perlhacktips Tips for Perl core C code hacking perlpolicy Perl development policies perlgov Perl Rules of Governance perlgit Using git with the Perl repository History perlhist Perl history records perldelta Perl changes since previous version perl5381delta Perl changes in version 5.38.1 perl5380delta Perl changes in version 5.38.0 perl5363delta Perl changes in version 5.36.3 perl5362delta Perl changes in version 5.36.2 perl5361delta Perl changes in version 5.36.1 perl5360delta Perl changes in version 5.36.0 perl5343delta Perl changes in version 5.34.3 perl5342delta Perl changes in version 5.34.2 perl5341delta Perl changes in version 5.34.1 perl5340delta Perl changes in version 5.34.0 perl5321delta Perl changes in version 5.32.1 perl5320delta Perl changes in version 5.32.0 perl5303delta Perl changes in version 5.30.3 perl5302delta Perl changes in version 5.30.2 perl5301delta Perl changes in version 5.30.1 perl5300delta Perl changes in version 5.30.0 perl5283delta Perl changes in version 5.28.3 perl5282delta Perl changes in version 5.28.2 perl5281delta Perl changes in 
version 5.28.1 perl5280delta Perl changes in version 5.28.0 perl5263delta Perl changes in version 5.26.3 perl5262delta Perl changes in version 5.26.2 perl5261delta Perl changes in version 5.26.1 perl5260delta Perl changes in version 5.26.0 perl5244delta Perl changes in version 5.24.4 perl5243delta Perl changes in version 5.24.3 perl5242delta Perl changes in version 5.24.2 perl5241delta Perl changes in version 5.24.1 perl5240delta Perl changes in version 5.24.0 perl5224delta Perl changes in version 5.22.4 perl5223delta Perl changes in version 5.22.3 perl5222delta Perl changes in version 5.22.2 perl5221delta Perl changes in version 5.22.1 perl5220delta Perl changes in version 5.22.0 perl5203delta Perl changes in version 5.20.3 perl5202delta Perl changes in version 5.20.2 perl5201delta Perl changes in version 5.20.1 perl5200delta Perl changes in version 5.20.0 perl5184delta Perl changes in version 5.18.4 perl5182delta Perl changes in version 5.18.2 perl5181delta Perl changes in version 5.18.1 perl5180delta Perl changes in version 5.18.0 perl5163delta Perl changes in version 5.16.3 perl5162delta Perl changes in version 5.16.2 perl5161delta Perl changes in version 5.16.1 perl5160delta Perl changes in version 5.16.0 perl5144delta Perl changes in version 5.14.4 perl5143delta Perl changes in version 5.14.3 perl5142delta Perl changes in version 5.14.2 perl5141delta Perl changes in version 5.14.1 perl5140delta Perl changes in version 5.14.0 perl5125delta Perl changes in version 5.12.5 perl5124delta Perl changes in version 5.12.4 perl5123delta Perl changes in version 5.12.3 perl5122delta Perl changes in version 5.12.2 perl5121delta Perl changes in version 5.12.1 perl5120delta Perl changes in version 5.12.0 perl5101delta Perl changes in version 5.10.1 perl5100delta Perl changes in version 5.10.0 perl589delta Perl changes in version 5.8.9 perl588delta Perl changes in version 5.8.8 perl587delta Perl changes in version 5.8.7 perl586delta Perl changes in version 5.8.6 perl585delta 
Perl changes in version 5.8.5 perl584delta Perl changes in version 5.8.4 perl583delta Perl changes in version 5.8.3 perl582delta Perl changes in version 5.8.2 perl581delta Perl changes in version 5.8.1 perl58delta Perl changes in version 5.8.0 perl561delta Perl changes in version 5.6.1 perl56delta Perl changes in version 5.6 perl5005delta Perl changes in version 5.005 perl5004delta Perl changes in version 5.004 Miscellaneous perlbook Perl book information perlcommunity Perl community information perldoc Look up Perl documentation in Pod format perlexperiment A listing of experimental features in Perl perlartistic Perl Artistic License perlgpl GNU General Public License Language-Specific perlcn Perl for Simplified Chinese (in UTF-8) perljp Perl for Japanese (in EUC-JP) perlko Perl for Korean (in EUC-KR) perltw Perl for Traditional Chinese (in Big5) Platform-Specific perlaix Perl notes for AIX perlamiga Perl notes for AmigaOS perlandroid Perl notes for Android perlbs2000 Perl notes for POSIX-BC BS2000 perlcygwin Perl notes for Cygwin perlfreebsd Perl notes for FreeBSD perlhaiku Perl notes for Haiku perlhpux Perl notes for HP-UX perlhurd Perl notes for Hurd perlirix Perl notes for Irix perllinux Perl notes for Linux perlmacosx Perl notes for Mac OS X perlopenbsd Perl notes for OpenBSD perlos2 Perl notes for OS/2 perlos390 Perl notes for OS/390 perlos400 Perl notes for OS/400 perlplan9 Perl notes for Plan 9 perlqnx Perl notes for QNX perlriscos Perl notes for RISC OS perlsolaris Perl notes for Solaris perlsynology Perl notes for Synology perltru64 Perl notes for Tru64 perlvms Perl notes for VMS perlvos Perl notes for Stratus VOS perlwin32 Perl notes for Windows Stubs for Deleted Documents perlboot perlbot perlrepository perltodo perltooc perltoot On a Unix-like system, these documentation files will usually also be available as manpages for use with the man program. 
Some documentation is not available as man pages, so if a cross- reference is not found by man, try it with perldoc. Perldoc can also take you directly to documentation for functions (with the -f switch). See "perldoc --help" (or "perldoc perldoc" or "man perldoc") for other helpful options perldoc has to offer. In general, if something strange has gone wrong with your program and you're not sure where you should look for help, try making your code comply with use strict and use warnings. These will often point out exactly where the trouble is.
| null | null |
findrule
|
"findrule" mostly borrows the interface from GNU find(1) to provide a command-line interface onto the File::Find::Rule heirarchy of modules. The syntax for expressions is the rule name, preceded by a dash, followed by an optional argument. If the argument is an opening parenthesis it is taken as a list of arguments, terminated by a closing parenthesis. Some examples: find -file -name ( foo bar ) files named "foo" or "bar", below the current directory. find -file -name foo -bar files named "foo", that have pubs (for this is what our ficticious "bar" clause specifies), below the current directory. find -file -name ( -bar ) files named "-bar", below the current directory. In this case if we'd have omitted the parenthesis it would have parsed as a call to name with no arguments, followed by a call to -bar. Supported switches I'm very slack. Please consult the File::Find::Rule manpage for now, and prepend - to the commands that you want. Extra bonus switches findrule automatically loads all of your installed File::Find::Rule::* extension modules, so check the documentation to see what those would be. AUTHOR Richard Clamp <richardc@unixbeard.net> from a suggestion by Tatsuhiko Miyagawa COPYRIGHT Copyright (C) 2002 Richard Clamp. All Rights Reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO File::Find::Rule perl v5.34.0 2015-12-03 FINDRULE(1)
|
findrule - command line wrapper to File::Find::Rule USAGE findrule [path...] [expression]
| null | null | null |
tbtdiagnose
|
Use tbtdiagnose to collect diagnostic information to help with investigation of Thunderbolt issues. macOS Sierra December 7, 2016 macOS Sierra
|
tbtdiagnose - collects diagnostic information to help troubleshoot Thunderbolt issues.
|
tbtdiagnose
| null | null |
mcxquery
|
mcxquery is a utility to determine the effective managed preferences for a user logging in to a workgroup from a specific computer. -user Specify the short name of the user record to read managed preferences from. If this parameter is omitted, or a value of "=" specified, the short name of the currently logged in console user will be used. -group Specify the short name of the group record to read managed preferences from. A value of "=" may be specified to mean the name of the workgroup (if any) chosen for the current login session. -computer Specify the computer record to read managed preferences from. The computer can be specified using either an Ethernet MAC address (e.g. "11:22:33:44:55:66"), a Hardware UUID (e.g. "00112233-4455-6677-8899-AABBCCDDEEFF") or the short name of the computer record itself (e.g. "lab1_12"). If this parameter is omitted, or a value of "=" specified, the record for the current computer will be used.
|
mcxquery – Managed Client (MCX) compositor query tool
|
mcxquery [options] [-user recordName] [-group recordName] [-computer spec] options: -o path Writes output to a file at the specified path. -format space | tab | xml Specifies the format of the output. -computerOnly Ignore values for -user and -group. -useCache Return the cached computer settings in the local node if they are available. -raw Dumps Directory Service data for records contributing to managed preferences. -forApple Convenience for specifying options when sending bug reports to Apple. Currently enables "-raw" and "-format xml". See usage example below. -version Displays the current version of ManagedClient.
| null |
mcxquery -user jane -group science -computer lab1_12 Displays the managed preferences that would be in effect if user "jane" logged in using workgroup "science" from the computer specified in the "lab1_12" computer record. mcxquery -user jane -group science -computer 11:22:33:44:55:66 Displays the managed preferences that would be in effect if user "jane" logged in using workgroup "science" from a computer with an Ethernet MAC address of 11:22:33:44:55:66. mcxquery -user = -group = -computer guest Displays the managed preferences that would be in effect if the current user logged in using the current workgroup into a computer not specified by any computer record (i.e. a "guest" computer). mcxquery -user jane -group math Displays the managed preferences that would be in effect if the user "jane" logged into the "math" workgroup on the current computer. mcxquery -o /tmp/report.txt -format xml -user jane Writes the managed preferences that would be in effect if user "jane" logged into the current computer without a workgroup. The report is written in XML format to /tmp/report.txt. mcxquery -computerOnly -computer lab1_12 Displays the managed preferences for the computer specified in the "lab1_12" computer record only. Useful for determining managed settings when computer is at login window. mcxquery -computerOnly -computer 00112233-4455-6677-8899-AABBCCDDEEFF Displays the managed preferences for the computer with the Hardware UUID "00112233-4455-6677-8899-AABBCCDDEEFF". Supported on Mac OS X 10.6 and later. mcxquery -forApple -o results.plist Creates a plist, suitable for submitting along with bug reports to Apple, containing the managed preferences for the current user on the current computer. Also includes relevant records from Directory Services. Supported on Mac OS X 10.7 and later. SEE ALSO dscl(1) MacOSX April 14, 2017 MacOSX
|
lwp-download
|
The lwp-download program will save the file at url to a local file. If local path is not specified, then the current directory is assumed. If local path is a directory, then the last segment of the path of the url is appended to form a local filename. If the url path ends with a slash, the name "index" is used. With the -s option, the last segment of the filename is picked up from server-provided sources such as the Content-Disposition header or any redirect URLs. A file extension to match the server-reported Content-Type might also be appended. If a file with the produced filename already exists, then lwp-download will prompt before it overwrites and will fail if its standard input is not a terminal. This form of invocation will also fail if no acceptable filename can be derived from the sources mentioned above. If local path is not a directory, then it is simply used as the path to save into. If the file already exists, it is overwritten. The lwp-download program is implemented using the libwww-perl library. It is better suited to download big files than the lwp-request program because it does not store the file in memory. Another benefit is that it will keep you updated about its progress, and there are not many options to worry about. Use the "-a" option to save the file in text (ASCII) mode. This might make a difference on DOSish systems. EXAMPLE Fetch the newest and greatest perl version: $ lwp-download http://www.perl.com/CPAN/src/latest.tar.gz Saving to 'latest.tar.gz'... 11.4 MB received in 8 seconds (1.43 MB/sec) AUTHOR Gisle Aas <gisle@aas.no> perl v5.34.0 2020-04-14 LWP-DOWNLOAD(1)
|
lwp-download - Fetch large files from the web
|
lwp-download [-a] [-s] <url> [<local path>] Options: -a save the file in ASCII mode -s use HTTP headers to guess output filename
| null | null |
ppdpo
|
ppdpo extracts UI strings from PPDC source files and updates either a GNU gettext or macOS strings format message catalog source file for translation. This program is deprecated and will be removed in a future release of CUPS.
|
ppdpo - ppd message catalog generator (deprecated)
|
ppdpo [ -D name[=value] ] [ -I include-directory ] [ -o output-file ] source-file
|
ppdpo supports the following options: -D name[=value] Sets the named variable for use in the source file. It is equivalent to using the #define directive in the source file. -I include-directory Specifies an alternate include directory. Multiple -I options can be supplied to add additional directories. -o output-file Specifies the output file. The supported extensions are .po or .po.gz for GNU gettext format message catalogs and .strings for macOS strings files. NOTES PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO ppdc(1), ppdhtml(1), ppdi(1), ppdmerge(1), ppdcfile(5), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS ppdpo(1)
| null |
crontab
|
The crontab utility is the program used to install, deinstall or list the tables used to drive the cron(8) daemon in Vixie Cron. Each user can have their own crontab, and they are not intended to be edited directly. (Darwin note: Although cron(8) and crontab(5) are officially supported under Darwin, their functionality has been absorbed into launchd(8), which provides a more flexible way of automatically executing commands. See launchctl(1) for more information.) If the /usr/lib/cron/cron.allow file exists, then you must be listed therein in order to be allowed to use this command. If the /usr/lib/cron/cron.allow file does not exist but the /usr/lib/cron/cron.deny file does exist, then you must not be listed in the /usr/lib/cron/cron.deny file in order to use this command. If neither of these files exists, then depending on site-dependent configuration parameters, only the super user will be allowed to use this command, or all users will be able to use this command. The format of these files is one username per line, with no leading or trailing whitespace. Lines of other formats will be ignored, and so can be used for comments. The first form of this command is used to install a new crontab from some named file or standard input if the pseudo-filename “-” is given. The following options are available: -u Specify the name of the user whose crontab is to be tweaked. If this option is not given, crontab examines “your” crontab, i.e., the crontab of the person executing the command. Note that su(1) can confuse crontab and that if you are running inside of su(1) you should always use the -u option for safety's sake. -l Display the current crontab on standard output. -r Remove the current crontab. -e Edit the current crontab using the editor specified by the VISUAL or EDITOR environment variables. The specified editor must edit the file in place; any editor that unlinks the file and recreates it cannot be used. 
After you exit from the editor, the modified crontab will be installed automatically. FILES /usr/lib/cron/cron.allow /usr/lib/cron/cron.deny DIAGNOSTICS A fairly informative usage message appears if you run it with a bad command line. SEE ALSO crontab(5), compat(5), cron(8), launchctl(1) STANDARDS The crontab command conforms to IEEE Std 1003.2 ("POSIX.2"). The new command syntax differs from previous versions of Vixie Cron, as well as from the classic SVR3 syntax. AUTHORS Paul Vixie <paul@vix.com> macOS 14.5 December 29, 1993
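The install/list/remove cycle described above can be sketched as a short shell session. The file path and the cron job line below are hypothetical examples; the install commands are shown commented out because cron access is governed by the site-dependent cron.allow/cron.deny rules described above.

```shell
# Hypothetical crontab file; each non-comment line is one cron entry
# (see crontab(5) for the field layout: minute hour day month weekday command).
cat > /tmp/example.crontab <<'EOF'
# run a nightly backup at 02:30
30 2 * * * /usr/local/bin/backup.sh
EOF
# crontab /tmp/example.crontab   # first form: install the table from a file
# crontab -l                     # display the installed table
# crontab -r                     # remove the installed table
grep -v '^#' /tmp/example.crontab
```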
|
crontab β maintain crontab files for individual users (V3)
|
crontab [-u user] file crontab [-u user] { -l | -r | -e }
| null | null |
reset
|
The tput utility uses the terminfo database to make the values of terminal-dependent capabilities and information available to the shell (see sh(1)), to initialize or reset the terminal, or return the long name of the requested terminal type. The result depends upon the capability's type: string tput writes the string to the standard output. No trailing newline is supplied. integer tput writes the decimal value to the standard output, with a trailing newline. boolean tput simply sets the exit code (0 for TRUE if the terminal has the capability, 1 for FALSE if it does not), and writes nothing to the standard output. Before using a value returned on the standard output, the application should test the exit code (e.g., $?, see sh(1)) to be sure it is 0. (See the EXIT CODES and DIAGNOSTICS sections.) For a complete list of capabilities and the capname associated with each, see terminfo(5). -Ttype indicates the type of terminal. Normally this option is unnecessary, because the default is taken from the environment variable TERM. If -T is specified, then the shell variables LINES and COLUMNS will also be ignored. capname indicates the capability from the terminfo database. When termcap support is compiled in, the termcap name for the capability is also accepted. parms If the capability is a string that takes parameters, the arguments parms will be instantiated into the string. Most parameters are numbers. Only a few terminfo capabilities require string parameters; tput uses a table to decide which to pass as strings. Normally tput uses tparm (3X) to perform the substitution. If no parameters are given for the capability, tput writes the string without performing the substitution. -S allows more than one capability per invocation of tput. The capabilities must be passed to tput from the standard input instead of from the command line (see example). Only one capname is allowed per line. 
The -S option changes the meaning of the 0 and 1 boolean and string exit codes (see the EXIT CODES section). Again, tput uses a table and the presence of parameters in its input to decide whether to use tparm (3X), and how to interpret the parameters. -V reports the version of ncurses which was used in this program, and exits. init If the terminfo database is present and an entry for the user's terminal exists (see -Ttype, above), the following will occur: (1) if present, the terminal's initialization strings will be output as detailed in the terminfo(5) section on Tabs and Initialization, (2) any delays (e.g., newline) specified in the entry will be set in the tty driver, (3) tabs expansion will be turned on or off according to the specification in the entry, and (4) if tabs are not expanded, standard tabs will be set (every 8 spaces). If an entry does not contain the information needed for any of the four above activities, that activity will silently be skipped. reset Instead of putting out initialization strings, the terminal's reset strings will be output if present (rs1, rs2, rs3, rf). If the reset strings are not present, but initialization strings are, the initialization strings will be output. Otherwise, reset acts identically to init. longname If the terminfo database is present and an entry for the user's terminal exists (see -Ttype above), then the long name of the terminal will be put out. The long name is the last name in the first line of the terminal's description in the terminfo database [see term(5)]. If tput is invoked by a link named reset, this has the same effect as tput reset. See tset(1) for comparison, which has similar behavior.
|
tput, reset - initialize a terminal or query terminfo database
|
tput [-Ttype] capname [parms ... ] tput [-Ttype] init tput [-Ttype] reset tput [-Ttype] longname tput -S << tput -V
| null |
tput init Initialize the terminal according to the type of terminal in the environmental variable TERM. This command should be included in everyone's .profile after the environmental variable TERM has been exported, as illustrated on the profile(5) manual page. tput -T5620 reset Reset an AT&T 5620 terminal, overriding the type of terminal in the environmental variable TERM. tput cup 0 0 Send the sequence to move the cursor to row 0, column 0 (the upper left corner of the screen, usually known as the "home" cursor position). tput clear Echo the clear-screen sequence for the current terminal. tput cols Print the number of columns for the current terminal. tput -T450 cols Print the number of columns for the 450 terminal. bold=`tput smso` offbold=`tput rmso` Set the shell variables bold, to begin stand-out mode sequence, and offbold, to end standout mode sequence, for the current terminal. This might be followed by a prompt: echo "${bold}Please type in your name: ${offbold}\c" tput hc Set exit code to indicate if the current terminal is a hard copy terminal. tput cup 23 4 Send the sequence to move the cursor to row 23, column 4. tput cup Send the terminfo string for cursor-movement, with no parameters substituted. tput longname Print the long name from the terminfo database for the type of terminal specified in the environmental variable TERM. tput -S <<! > clear > cup 10 10 > bold > ! This example shows tput processing several capabilities in one invocation. It clears the screen, moves the cursor to position 10, 10 and turns on bold (extra bright) mode. The list is terminated by an exclamation mark (!) on a line by itself. 
FILES /usr/share/terminfo compiled terminal description database /usr/share/tabset/* tab settings for some terminals, in a format appropriate to be output to the terminal (escape sequences that set margins and tabs); for more information, see the "Tabs and Initialization" section of terminfo(5) EXIT CODES If the -S option is used, tput checks for errors from each line, and if any errors are found, will set the exit code to 4 plus the number of lines with errors. If no errors are found, the exit code is 0. No indication of which line failed can be given so exit code 1 will never appear. Exit codes 2, 3, and 4 retain their usual interpretation. If the -S option is not used, the exit code depends on the type of capname: boolean a value of 0 is set for TRUE and 1 for FALSE. string a value of 0 is set if the capname is defined for this terminal type (the value of capname is returned on standard output); a value of 1 is set if capname is not defined for this terminal type (nothing is written to standard output). integer a value of 0 is always set, whether or not capname is defined for this terminal type. To determine if capname is defined for this terminal type, the user must test the value written to standard output. A value of -1 means that capname is not defined for this terminal type. other reset or init may fail to find their respective files. In that case, the exit code is set to 4 + errno. Any other exit code indicates an error; see the DIAGNOSTICS section. DIAGNOSTICS tput prints the following error messages and sets the corresponding exit codes. exit code error message ───────────────────────────────────────────────────────────────────── 0 (capname is a numeric variable that is not specified in the terminfo(5) database for this terminal type, e.g. tput -T450 lines and tput -T2621 xmc) 1 no error message is printed, see the EXIT CODES section. 
2 usage error 3 unknown terminal type or no terminfo database 4 unknown terminfo capability capname >4 error occurred in -S ───────────────────────────────────────────────────────────────────── PORTABILITY The longname and -S options, and the parameter-substitution features used in the cup example, are not supported in BSD curses or in AT&T/USL curses before SVr4. X/Open documents only the operands for clear, init and reset. In this implementation, clear is part of the capname support. Other implementations of tput on SVr4-based systems such as Solaris, IRIX64 and HPUX as well as others such as AIX and Tru64 provide support for capname operands. A few platforms such as FreeBSD and NetBSD recognize termcap names rather than terminfo capability names in their respective tput commands. Most implementations which provide support for capname operands use the tparm function to expand parameters in it. That function expects a mixture of numeric and string parameters, requiring tput to know which type to use. This implementation uses a table to determine that for the standard capname operands, and an internal library function to analyze nonstandard capname operands. Other implementations may simply guess that an operand containing only digits is intended to be a number. SEE ALSO clear(1), stty(1), tabs(1), terminfo(5), curs_termcap(3X). This describes ncurses version 5.7 (patch 20081102). tput(1)
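The EXIT CODES rules above can be exercised from a script. This is a minimal sketch, assuming an xterm entry is installed in the terminfo database; -T makes the query independent of the caller's TERM setting.

```shell
# Query an integer capability for an explicit terminal type, then check
# the exit code before trusting the value written to standard output.
cols=$(tput -Txterm cols)
if [ $? -eq 0 ]; then
    echo "xterm reports $cols columns"
else
    echo "capability lookup failed" >&2
fi
```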
|
kextutil
|
The kextutil program is used to explicitly load kernel extensions (kexts), diagnose problems with kexts, and to generate symbol files for debugging kexts. In order to load a kext into the kernel kextutil must run as the superuser; for all other uses it can run as any user. kextutil is the developer utility for kext loading in the Darwin OS and in macOS. Software and installers should use kextload(8) instead of this program. The arguments and options available are these: kext The pathname of a kext bundle to load or otherwise use. Kexts can also be specified by CFBundleIdentifier with the -bundle-id option. -a identifier@address, -address identifier@address Treat the kext whose CFBundleIdentifier is identifier as being loaded at address when generating symbol files and not loading. When generating symbols, any dependencies with unspecified addresses are skipped. Use this option repeatedly to name every nonkernel dependency for which you want symbols. This option implies the use of the -no-load option. See also -use-load-addresses and -no-load. -arch Use the specified architecture for generating symbols and performing tests. If loading into the kernel or getting load addresses from the kernel, the specified arch must match that of the running kernel. -A, -use-load-addresses When generating symbol files and not loading, look up all dependency kext addresses within the running kernel. This option implies the use of the -no-load option. See also -address and -no-load. -b identifier, -bundle-id identifier Look up the kext whose CFBundleIdentifier is identifier within the set of known kexts and load it. The kext of the highest CFBundleVersion with the given identifier is used; in the case of version ties, the last such kext specified on the command line is used. See the -dependency, -no-system-extensions, and -repository options for more information. -c, -no-caches Ignore any repository cache files and scan all kext bundles to gather information. 
If this option is not given, kextutil attempts to use cache files and (when running as root) to create them if they are out of date or don't exist. -d kext, -dependency kext Add kext and its plugins to the set of known kexts for resolving dependencies. This is useful for adding a single kext from a directory. See "Explicitly Specifying Dependencies" for more information, as well as the -no-system-extensions and -repository options. -e, -no-system-extensions Don't use the contents of /System/Library/Extensions/ or /Library/Extensions/ as the default repository of kexts. If you use this option you will have to explicitly specify all dependencies of the kext being loaded or otherwise worked on using the -dependency and -repository options. See "Explicitly Specifying Dependencies" for more information. -h, -help Print a help message describing each option flag and exit with a success result, regardless of any other options on the command line. -i, -interactive Interactive mode; pause after loading each specified kext and wait for user input to start the kext and send its personalities to the kernel. This allows for debugger setup when the kext needs to be debugged during its earliest stages of running. -I, -interactive-all Interactive mode, as described above, for each specified kext and all of their dependencies. -k kernel_file, -kernel kernel_file Link against the given kernel_file. Allowed only with the -no-load option to generate debug symbols. By default kextutil attempts to get link symbols from the kernel at /System/Library/Kernels/kernel. -l, -load-only Load and start the kext only; don't send I/O Kit personalities to the kernel to begin matching. Matching may still occur if the personalities are present from an earlier load operation. You may want to use kextunload(8) before loading a kext with this option. -m, -match-only Don't load the kext, but do send its personalities to the kernel to begin matching. 
Use this option after you have loaded a driver with -load-only and after setting up the debugger. -n, -no-load Neither load the kext nor send personalities to the kernel. This option is for use when generating debug symbols only with the -symbols option, or when diagnosing kexts with the -print-diagnostics option. For convenience in development, this option implies the -no-authentication option. See also the -address and -use-load-addresses options. -p personality, -personality personality Send only the named personalities from the kext to the kernel. Repeat for each personality desired, or use the -interactive option to have kextutil ask for each personality. -q, -quiet Quiet mode; print no informational or error messages. If kextutil is run with -quiet in a way that might require user interaction, as with the -interactive and -interactive-all options, and some uses of -no-load, the program silently exits with an error status. -r directory, -repository directory Use directory as a repository of kexts. This adds to the set of known kexts for resolving dependencies or looking up by CFBundleIdentifier when using the -bundle-id option. This is not recursive; only the directory's immediate contents (and their plugins) are scanned. See "Explicitly Specifying Dependencies" for more information, as well as the -dependency and -no-system-extensions options. -s directory, -symbols directory Write all generated symbol files into directory. The directory must already exist. Symbol files are named after the CFBundleIdentifier of each kext with a .sym suffix appended. -t, -print-diagnostics Perform all possible tests on the specified kexts, even with options that implicitly disable some tests, and indicate whether the kext is loadable, or if not, what problems it has. Note that tests are performed in three stages, validation, authentication, and dependency resolution; a failure at any stage can make tests in further stages impossible. 
Thus, a kext with validation failures may have unreported authentication problems or missing dependencies. Additionally, some tests require being run as root. -v [0-6 | 0x####], -verbose [0-6 | 0x####] Verbose mode; print information about program operation. Higher levels of verbosity include all lower levels. By default kextutil prints only warnings and errors. You can specify a level from 0-6, or a hexadecimal log specification (as described in kext_logging(8)). The levels of verbose output are: 0 Print only errors (that is, suppress warnings); see also -quiet. 1 (or none) Print basic information about program operation. 2 Print basic information about the link/load operation. 3 Print more information about user-kernel interaction, link/load operation, and processing of I/O Kit Personalities. 4 Print detailed information about module start and C++ class construction. 5 Print internal debug information, including checks for loaded kexts. 6 Identical to level 5 but for all kexts read by the program. To ease debug loading of kexts, the verbose levels 1-6 in kextutil implicitly set the OSBundleEnableKextLogging property for each kext specified on the command line to true. See kext_logging(8) for more information on verbose logging. -x, -safe-boot Run kextutil as if in safe boot mode (indicating startup with the Shift key held down). Kexts that don't specify a proper value for the OSBundleRequired info dictionary property will not load. This option implies the use of the -no-caches option. Note that if the system has actually started up in safe boot mode, this option is redundant. There is no way to simulate non-safe boot mode for a system running in safe boot mode. -z, -no-authentication Don't authenticate kexts. This option is for convenience during development, and is allowed only for operations that don't actually load a kext into the kernel (such as when generating symbols). -Z, -no-resolve-dependencies Don't try to resolve dependencies. 
This option is allowed only when using the -no-load and -print-diagnostics options to test a kext for problems. It is not allowed with the -symbols option as generating symbols requires dependencies to be resolved. -- End of all options. Only kext names follow.
|
kextutil β load, diagnose problems with, and generate symbols for kernel extensions (kexts)
|
kextutil [options] [--] [kext] ... DEPRECATED The kextutil utility has been deprecated. Please use the kmutil(8) equivalents: kmutil load, or kmutil print-diagnostics.
| null |
Here are the common uses and usage patterns for kextutil. Basic Loading To load a kext you must run kextutil as the superuser and supply a kext bundle name; no options are required: kextutil TabletDriver.kext Alternatively, you can use the -bundle-id (-b) option to specify a kext by its CFBundleIdentifier: kextutil -b com.mycompany.driver.TabletDriver With no additional options kextutil looks in /System/Library/Extensions/ and /Library/Extensions/ for a kext with the given CFBundleIdentifier. Adding repository directories with the -repository (-r) option or individual kexts with the -dependency (-d) option expands the set of kexts that kextutil looks among: kextutil -r ${USER}/Library/Extensions TabletDriver.kext Diagnosing Kexts kextutil prints diagnostic information about kexts by default, but some options cause certain tests to be skipped. To ensure that all tests are performed, use the -print-diagnostics (-t) option. The -print-diagnostics option is typically used with -no-load (-n) after a load failure to pinpoint a problem. It can be used with any other set of options, however. If you want to validate a kext in isolation, as in a build environment where dependencies may not be available, you can use the -no-system-extensions (-e) and -no-resolve-dependencies (-Z) options to omit the /System/Library/Extensions/ and /Library/Extensions/ repositories and to suppress dependency resolution, respectively: kextutil -entZ PacketSniffer.kext Only validation and authentication checks are performed. Generating Debug Symbols When Loading To generate a symbol file for use with gdb when loading a kext, use the -symbols (-s) option to specify a directory where symbol files will be written for the kext being loaded and all its dependencies. 
kextutil -s ~/ksyms PacketSniffer.kext Generating Debug Symbols For an Already-Loaded Kext If you want to generate symbols for a kext that's already loaded, whether on the same system or on another, use the -symbols (-s) option along with the -no-load (-n) option. Since in this case addresses must be known for the kext and all its dependencies, though, you must specify them. If you don't indicate them on the command line, kextutil asks for the load address of each kext needed. To get these addresses you can use kextstat(8) on the machine you're generating symbols for, the showallkmods gdb(1) macro defined by the kgmacros file in the Kernel Development Kit, or consult a panic backtrace. kextutil -n -s ~/ksyms GrobbleEthernet.kext enter the hexadecimal load addresses for these modules: com.apple.iokit.IONetworkingFamily: 0x1001000 ... Alternatively, if you know the CFBundleIdentifiers of all the kexts, you can use the -address (-a) option for each kext (you needn't specify -no-load when using the -address option): kextutil -s ~/ksyms \ -a com.apple.iokit.IONetworkingFamily@0x1001000 \ -a com.apple.iokit.IOPCIFamily@0x1004000 \ -a com.mycompany.driver.GrobbleEthernet@0x1007000 \ GrobbleEthernet.kext Simplest of all, however, provided you can run kextutil on the same machine as the loaded kext, is to use the -use-load-addresses (-A) option, which checks with the kernel for all loaded kexts and automatically gets their load addresses. kextutil -s ~/ksyms -A GrobbleEthernet.kext Explicitly Specifying Dependencies Because kextutil resolves dependencies automatically, it's possible that a kext other than the one you intend might get used as a dependency (as when there are multiple copies of the same version, or if you're working with a different version of a kext that's already in /System/Library/Extensions/). By default, when loading a kext into the kernel, kextutil checks which versions of possible dependencies are already loaded in order to assure a successful load. 
When not loading and not using -use-load-addresses, however, it always chooses the highest versions of any dependencies, and in the case of a tie it chooses from kexts specified on the command line using the -dependency or -repository options, or as command line arguments (in decreasing order of priority). For precise control over the set of extensions used to resolve dependencies, use the -no-system-extensions (-e) option along with the -dependency (-d), and -repository (-r) options. The -no-system-extensions option excludes the standard /System/Library/Extensions/ and /Library/Extensions/ directories, leaving the set of candidate extensions for dependency resolution entirely up to you. To specify candidate dependencies you use either -dependency (-d), which names a single kext as a candidate, or -repository (-r), which adds an entire directory of extensions. kextutil -n -s ~/ksyms -e \ -d /System/Library/Extensions/System.kext \ -r ~/TestKexts -d JoystickSupport.kext JoystickDriver.kext Note also that if you use -no-system-extensions (-e), you must supply at least some version of System.kext in order to supply information about the kernel. This should always match the kernel you're linking against, which is by default the installed kernel on the machine you're using kextutil on; you can use the -kernel (-k) option to specify a different kernel file. You may also need to explicitly specify other library or family kexts. Debug Loading an I/O Kit Driver Pure I/O Kit driver kexts have empty module-start routines, but trigger matching and driver instance creation on load. 
If you need to debug an I/O Kit driver's early startup code, you can load the driver on the target machine without starting matching by using the -load-only (-l) option: kextutil -l DiskController.kext Once you have done this, you can use the generated symbol file in your debug session to set breakpoints and then trigger matching by running kextutil again on the target machine with the -match-only (-m) option: kextutil -m DiskController.kext You may wish to use the -personality (-p) option as well in order to send selected personalities to the kernel. Alternatively, you can use the -interactive (-i) option for the whole process, which causes kextutil to pause just before loading any personalities and then to ask you for each personality whether that one should be sent to the kernel: kextutil -i DiskController.kext DiskController.kext appears to be loadable (not including linkage for on-disk libraries). Load DiskController.kext and its dependencies into the kernel [Y/n]? y Loading DiskController.kext. DiskController.kext successfully loaded (or already loaded). DiskController.kext and its dependencies are now loaded, but not started (unless they were already running). You may now set breakpoints in the debugger before starting them. start DiskController.kext [Y/n]? y DiskController.kext started. send personalities for DiskController.kext [Y/n]? y send personality Test Match Personality [Y/n]? y Debug Loading a Kext with a Module-Start Routine In order to debug a kext's module-start routine, you must use the -interactive (-i) or -interactive-all (-I) option, which pause after loading and before calling the module-start function, so that you can set up your debugging session as needed before proceeding. FILES /System/Library/Extensions/ The standard system repository of kernel extensions. /Library/Extensions/ The standard repository of non Apple kernel extensions. 
/System/Library/Caches/com.apple.kext.caches/* Contains all kext caches for a Mac OS X 10.6 (Snow Leopard) system: prelinked kernel, mkext, and system kext info caches. /System/Library/Kernels/kernel The default kernel file. DIAGNOSTICS kextutil exits with a zero status upon success. Upon failure, it prints an error message and continues processing remaining kexts if possible, then exits with a nonzero status. For a kext to be loadable, it must be valid, authentic, have all dependencies met (that is, all dependencies must be found and loadable). A valid kext has a well formed bundle, info dictionary, and executable. An authentic kext's component files are owned by root:wheel, with permissions nonwritable by group and other. If your kext fails to load, try using the -print-diagnostics (-t) option to print diagnostics related to validation and authentication. BUGS Many single-letter options are inconsistent in meaning with (or directly contradictory to) the same letter options in other kext tools. SEE ALSO kmutil(8), kernelmanagerd(8), kextcache(8), kextd(8), kextload(8), kextstat(8), kextunload(8), kext_logging(8) Darwin November 14, 2012 Darwin
|
par5.30.pl
| null | null | null | null | null |
bzcat
|
bzip2 compresses files using the Burrows-Wheeler block sorting text compression algorithm, and Huffman coding. Compression is generally considerably better than that achieved by more conventional LZ77/LZ78-based compressors, and approaches the performance of the PPM family of statistical compressors. The command-line options are deliberately very similar to those of GNU gzip, but they are not identical. bzip2 expects a list of file names to accompany the command-line flags. Each file is replaced by a compressed version of itself, with the name "original_name.bz2". Each compressed file has the same modification date, permissions, and, when possible, ownership as the corresponding original, so that these properties can be correctly restored at decompression time. File name handling is naive in the sense that there is no mechanism for preserving original file names, permissions, ownerships or dates in filesystems which lack these concepts, or have serious file name length restrictions, such as MS-DOS. bzip2 and bunzip2 will by default not overwrite existing files. If you want this to happen, specify the -f flag. If no file names are specified, bzip2 compresses from standard input to standard output. In this case, bzip2 will decline to write compressed output to a terminal, as this would be entirely incomprehensible and therefore pointless. bunzip2 (or bzip2 -d) decompresses all specified files. Files which were not created by bzip2 will be detected and ignored, and a warning issued. bzip2 attempts to guess the filename for the decompressed file from that of the compressed file as follows: filename.bz2 becomes filename filename.bz becomes filename filename.tbz2 becomes filename.tar filename.tbz becomes filename.tar anyothername becomes anyothername.out If the file does not end in one of the recognised endings, .bz2, .bz, .tbz2 or .tbz, bzip2 complains that it cannot guess the name of the original file, and uses the original name with .out appended. 
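The compress/decompress naming rules above can be demonstrated with a short round trip; the file path is an arbitrary example. The -k flag keeps the input file and -f forces overwriting an existing archive.

```shell
# Compress keeping the original, then decompress; bzip2 derives the
# decompressed name by stripping the recognised .bz2 suffix.
printf 'hello\n' > /tmp/demo.txt
bzip2 -kf /tmp/demo.txt          # creates /tmp/demo.txt.bz2, keeps demo.txt
rm /tmp/demo.txt
bunzip2 /tmp/demo.txt.bz2        # restores /tmp/demo.txt, removes the .bz2
cat /tmp/demo.txt
```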
As with compression, supplying no filenames causes decompression from standard input to standard output. bunzip2 will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing (-t) of concatenated compressed files is also supported. You can also compress or decompress files to the standard output by giving the -c flag. Multiple files may be compressed and decompressed like this. The resulting outputs are fed sequentially to stdout. Compression of multiple files in this manner generates a stream containing multiple compressed file representations. Such a stream can be decompressed correctly only by bzip2 version 0.9.0 or later. Earlier versions of bzip2 will stop after decompressing the first file in the stream. bzcat (or bzip2 -dc) decompresses all specified files to the standard output. bzip2 will read arguments from the environment variables BZIP2 and BZIP, in that order, and will process them before any arguments read from the command line. This gives a convenient way to supply default arguments. Compression is always performed, even if the compressed file is slightly larger than the original. Files of less than about one hundred bytes tend to get larger, since the compression mechanism has a constant overhead in the region of 50 bytes. Random data (including the output of most file compressors) is coded at about 8.05 bits per byte, giving an expansion of around 0.5%. As a self-check for your protection, bzip2 uses 32-bit CRCs to make sure that the decompressed version of a file is identical to the original. This guards against corruption of the compressed data, and against undetected bugs in bzip2 (hopefully very unlikely). The chances of data corruption going undetected are microscopic, about one chance in four billion for each file processed. 
Be aware, though, that the check occurs upon decompression, so it can only tell you that something is wrong. It can't help you recover the original uncompressed data. You can use bzip2recover to try to recover data from damaged files. Return values: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, &c), 2 to indicate a corrupt compressed file, 3 for an internal consistency error (eg, bug) which caused bzip2 to panic.
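The concatenated-stream and integrity-testing behavior described above can be sketched in a few commands; the /tmp file names are arbitrary examples.

```shell
# Build two separate bzip2 streams, concatenate the archives, and verify
# that decompression yields the concatenation of the originals.
printf 'first\n'  | bzip2 > /tmp/a.bz2
printf 'second\n' | bzip2 > /tmp/b.bz2
cat /tmp/a.bz2 /tmp/b.bz2 > /tmp/ab.bz2
bzip2 -t /tmp/ab.bz2      # integrity test only; exits 0 if all streams are sound
bzcat /tmp/ab.bz2         # decompress every stream to standard output
```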
|
bzip2, bunzip2 - a block-sorting file compressor, v1.0.8 bzcat - decompresses files to stdout bzip2recover - recovers data from damaged bzip2 files
|
bzip2 [ -cdfkqstvzVL123456789 ] [ filenames ... ] bunzip2 [ -fkvsVL ] [ filenames ... ] bzcat [ -s ] [ filenames ... ] bzip2recover filename
|
-c --stdout Compress or decompress to standard output. -d --decompress Force decompression. bzip2, bunzip2 and bzcat are really the same program, and the decision about what actions to take is done on the basis of which name is used. This flag overrides that mechanism, and forces bzip2 to decompress. -z --compress The complement to -d: forces compression, regardless of the invocation name. -t --test Check integrity of the specified file(s), but don't decompress them. This really performs a trial decompression and throws away the result. -f --force Force overwrite of output files. Normally, bzip2 will not overwrite existing output files. Also forces bzip2 to break hard links to files, which it otherwise wouldn't do. bzip2 normally declines to decompress files which don't have the correct magic header bytes. If forced (-f), however, it will pass such files through unmodified. This is how GNU gzip behaves. -k --keep Keep (don't delete) input files during compression or decompression. -s --small Reduce memory usage, for compression, decompression and testing. Files are decompressed and tested using a modified algorithm which only requires 2.5 bytes per block byte. This means any file can be decompressed in 2300k of memory, albeit at about half the normal speed. During compression, -s selects a block size of 200k, which limits memory use to around the same figure, at the expense of your compression ratio. In short, if your machine is low on memory (8 megabytes or less), use -s for everything. See MEMORY MANAGEMENT below. -q --quiet Suppress non-essential warning messages. Messages pertaining to I/O errors and other critical events will not be suppressed. -v --verbose Verbose mode -- show the compression ratio for each file processed. Further -v's increase the verbosity level, spewing out lots of information which is primarily of interest for diagnostic purposes. -L --license -V --version Display the software version, license terms and conditions. 
-1 (or --fast) to -9 (or --best) Set the block size to 100 k, 200 k .. 900 k when compressing. Has no effect when decompressing. See MEMORY MANAGEMENT below. The --fast and --best aliases are primarily for GNU gzip compatibility. In particular, --fast doesn't make things significantly faster. And --best merely selects the default behaviour. -- Treats all subsequent arguments as file names, even if they start with a dash. This is so you can handle files with names beginning with a dash, for example: bzip2 -- -myfilename. --repetitive-fast --repetitive-best These flags are redundant in versions 0.9.5 and above. They provided some coarse control over the behaviour of the sorting algorithm in earlier versions, which was sometimes useful. 0.9.5 and above have an improved algorithm which renders these flags irrelevant. MEMORY MANAGEMENT bzip2 compresses large files in blocks. The block size affects both the compression ratio achieved, and the amount of memory needed for compression and decompression. The flags -1 through -9 specify the block size to be 100,000 bytes through 900,000 bytes (the default) respectively. At decompression time, the block size used for compression is read from the header of the compressed file, and bunzip2 then allocates itself just enough memory to decompress the file. Since block sizes are stored in compressed files, it follows that the flags -1 to -9 are irrelevant to and so ignored during decompression. Compression and decompression requirements, in bytes, can be estimated as: Compression: 400k + ( 8 x block size ) Decompression: 100k + ( 4 x block size ), or 100k + ( 2.5 x block size ) Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size. 
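A minimal round trip exercising the flags above (assumes bzip2 on PATH; the file name is illustrative):

```shell
tmp=$(mktemp -d)
printf 'hello bzip2\n' > "$tmp/data.txt"

bzip2 -k "$tmp/data.txt"        # compress; -k keeps data.txt alongside data.txt.bz2
bzip2 -t "$tmp/data.txt.bz2"    # -t: trial decompression only, writes nothing
bzip2 -dc "$tmp/data.txt.bz2"   # -d -c: decompress to stdout, prints "hello bzip2"

rm -r "$tmp"
```

Because bunzip2 and bzcat are the same program under other names, the last line could equally be written `bzcat "$tmp/data.txt.bz2"`.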
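The two estimates can be reproduced with plain shell arithmetic (values in kbytes; N is the digit from -1 to -9, so the block size is N x 100k):

```shell
N=9                      # flag -9, the default 900k block size
block=$((N * 100))       # block size in kbytes

echo "compression:      $((400 + 8 * block))k"      # 7600k for -9
echo "decompression:    $((100 + 4 * block))k"      # 3700k for -9
echo "decompression -s: $((100 + 5 * block / 2))k"  # 2350k for -9 (2.5 x block)
```

These figures match the -9 row of the memory-usage table below.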
For files compressed with the default 900k block size, bunzip2 will require about 3700 kbytes to decompress. To support decompression of any file on a 4 megabyte machine, bunzip2 has an option to decompress using approximately half this amount of memory, about 2300 kbytes. Decompression speed is also halved, so you should use this option only where necessary. The relevant flag is -s. In general, try and use the largest block size memory constraints allow, since that maximises the compression achieved. Compression and decompression speed are virtually unaffected by block size. Another significant point applies to files which fit in a single block -- that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag -9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes. Here is a table which summarises the maximum memory usage for different block sizes. Also recorded is the total compressed size for 14 files of the Calgary Text Compression Corpus totalling 3,141,622 bytes. This column gives some feel for how compression varies with block size. These figures tend to understate the advantage of larger block sizes for larger files, since the Corpus is dominated by smaller files.

               Compress   Decompress   Decompress   Corpus
        Flag     usage      usage       -s usage     Size

         -1      1200k       500k         350k      914704
         -2      2000k       900k         600k      877703
         -3      2800k      1300k         850k      860338
         -4      3600k      1700k        1100k      846899
         -5      4400k      2100k        1350k      845160
         -6      5200k      2500k        1600k      838626
         -7      6100k      2900k        1850k      834096
         -8      6800k      3300k        2100k      828642
         -9      7600k      3700k        2350k      828642

RECOVERING DATA FROM DAMAGED FILES bzip2 compresses files in blocks, usually 900kbytes long. Each block is handled independently.
If a media or transmission error causes a multi-block .bz2 file to become damaged, it may be possible to recover data from the undamaged blocks in the file. The compressed representation of each block is delimited by a 48-bit pattern, which makes it possible to find the block boundaries with reasonable certainty. Each block also carries its own 32-bit CRC, so damaged blocks can be distinguished from undamaged ones. bzip2recover is a simple program whose purpose is to search for blocks in .bz2 files, and write each block out into its own .bz2 file. You can then use bzip2 -t to test the integrity of the resulting files, and decompress those which are undamaged. bzip2recover takes a single argument, the name of the damaged file, and writes a number of files "rec00001file.bz2", "rec00002file.bz2", etc, containing the extracted blocks. The output filenames are designed so that the use of wildcards in subsequent processing -- for example, "bzip2 -dc rec*file.bz2 > recovered_data" -- processes the files in the correct order. bzip2recover should be of most use dealing with large .bz2 files, as these will contain many blocks. It is clearly futile to use it on damaged single-block files, since a damaged block cannot be recovered. If you wish to minimise any potential data loss through media or transmission errors, you might consider compressing with a smaller block size. PERFORMANCE NOTES The sorting phase of compression gathers together similar strings in the file. Because of this, files containing very long runs of repeated symbols, like "aabaabaabaab ..." (repeated several hundred times) may compress more slowly than normal. Versions 0.9.5 and above fare much better than previous versions in this respect. The ratio between worst-case and average-case compression time is in the region of 10:1. For previous versions, this figure was more like 100:1. You can use the -vvvv option to monitor progress in great detail, if you want. 
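A sketch of that recovery workflow on an undamaged archive, assuming bzip2recover is installed alongside bzip2 (250k of incompressible input forces several 100k blocks under -1; file names are illustrative):

```shell
tmp=$(mktemp -d) && cd "$tmp"
head -c 250000 /dev/urandom > big   # random data spans multiple -1 (100k) blocks
bzip2 -1k big                       # big.bz2; -k keeps the original for comparison
bzip2recover big.bz2                # writes rec00001big.bz2, rec00002big.bz2, ...
bzip2 -t rec*big.bz2                # every piece passes, since nothing was damaged
bzip2 -dc rec*big.bz2 > recovered   # wildcard order reassembles the stream
cmp big recovered && echo intact
```

On a genuinely damaged archive the -t step is where the bad blocks are weeded out before reassembly.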
Decompression speed is unaffected by these phenomena. bzip2 usually allocates several megabytes of memory to operate in, and then charges all over it in a fairly random fashion. This means that performance, both for compressing and decompressing, is largely determined by the speed at which your machine can service cache misses. Because of this, small changes to the code to reduce the miss rate have been observed to give disproportionately large performance improvements. I imagine bzip2 will perform best on machines with very large caches. CAVEATS I/O error messages are not as helpful as they could be. bzip2 tries hard to detect I/O errors and exit cleanly, but the details of what the problem is sometimes seem rather misleading. This manual page pertains to version 1.0.8 of bzip2. Compressed data created by this version is entirely forwards and backwards compatible with the previous public releases, versions 0.1pl2, 0.9.0, 0.9.5, 1.0.0, 1.0.1, 1.0.2 and above, but with the following exception: 0.9.0 and above can correctly decompress multiple concatenated compressed files. 0.1pl2 cannot do this; it will stop after decompressing just the first file in the stream. bzip2recover versions prior to 1.0.2 used 32-bit integers to represent bit positions in compressed files, so they could not handle compressed files more than 512 megabytes long. Versions 1.0.2 and above use 64-bit ints on some platforms which support them (GNU supported targets, and Windows). To establish whether or not bzip2recover was built with such a limitation, run it without arguments. In any event you can build yourself an unlimited version if you can recompile it with MaybeUInt64 set to be an unsigned 64-bit integer. AUTHOR Julian Seward, jseward@acm.org. 
https://sourceware.org/bzip2/ The ideas embodied in bzip2 are due to (at least) the following people: Michael Burrows and David Wheeler (for the block sorting transformation), David Wheeler (again, for the Huffman coder), Peter Fenwick (for the structured coding model in the original bzip, and many refinements), and Alistair Moffat, Radford Neal and Ian Witten (for the arithmetic coder in the original bzip). I am much indebted for their help, support and advice. See the manual in the source distribution for pointers to sources of documentation. Christian von Roques encouraged me to look for faster sorting algorithms, so as to speed up compression. Bela Lubkin encouraged me to improve the worst-case compression performance. Donna Robinson XMLised the documentation. The bz* scripts are derived from those of GNU gzip. Many people sent patches, helped with portability problems, lent machines, gave advice and were generally helpful. bzip2(1)
| null |
dbicadmin
| null |
dbicadmin - utility for administrating DBIx::Class schemata
|
dbicadmin: [-I] [long options...]

  # deploy a schema to a database
  dbicadmin --schema=MyApp::Schema \
    --connect='["dbi:SQLite:my.db", "", ""]' \
    --deploy

  # update an existing record
  dbicadmin --schema=MyApp::Schema --class=Employee \
    --connect='["dbi:SQLite:my.db", "", ""]' \
    --op=update --set='{ "name": "New_Employee" }'
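In the same style, a hypothetical query: select rows matching a --where clause (schema, class and database names are placeholders; the command is guarded so the sketch is a no-op where dbicadmin is not installed):

```shell
# Hypothetical: list Employee rows whose name matches the --where clause.
if command -v dbicadmin >/dev/null 2>&1; then
  dbicadmin --schema=MyApp::Schema --class=Employee \
            --connect='["dbi:SQLite:my.db", "", ""]' \
            --op=select --where='{ "name": "New_Employee" }'
fi
```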
|
Actions --create Create version diffs (needs --preversion) --upgrade Upgrade the database to the current schema --install Install the schema version tables to an existing database --deploy Deploy the schema to the database --select Select data from the schema --insert Insert data into the schema --update Update data in the schema --delete Delete data from the schema --op compatibility option all of the above can be supplied as --op=<action> --help display this help Arguments --config-file or --config Supply the config file for parsing by Config::Any --connect-info Supply the connect info as trailing options e.g. --connect-info dsn=<dsn> user=<user> password=<pass> --connect Supply the connect info as a JSON-encoded structure, e.g. --connect='["dsn","user","pass"]' --schema-class The class of the schema to load --config-stanza Where in the config to find the connection_info, supply in form MyApp::Model::DB --resultset or --resultset-class or --class The resultset to operate on for data manipulation --sql-dir The directory where sql diffs will be created --sql-type The RDBMS flavour you wish to use --version Supply a version to install --preversion The previous version to diff against --set JSON data used to perform data operations --attrs JSON string to be used for the second argument for search --where JSON string to be used for the where clause of search --force Be forceful with some operations --trace Turn on DBIx::Class trace output --quiet Be less verbose -I Same as perl's -I, prepended to current @INC AUTHORS See "AUTHORS" in DBIx::Class LICENSE You may distribute this code under the same terms as Perl itself perl v5.34.0 2018-01-29 DBICADMIN(1)
| null |
ex
|
Vim is a text editor that is upwards compatible to Vi. It can be used to edit all kinds of plain text. It is especially useful for editing programs. There are a lot of enhancements above Vi: multi level undo, multi windows and buffers, syntax highlighting, command line editing, filename completion, on-line help, visual selection, etc.. See ":help vi_diff.txt" for a summary of the differences between Vim and Vi. While running Vim a lot of help can be obtained from the on-line help system, with the ":help" command. See the ON-LINE HELP section below. Most often Vim is started to edit a single file with the command vim file More generally Vim is started with: vim [options] [filelist] If the filelist is missing, the editor will start with an empty buffer. Otherwise exactly one out of the following four may be used to choose one or more files to be edited. file .. A list of filenames. The first one will be the current file and read into the buffer. The cursor will be positioned on the first line of the buffer. You can get to the other files with the ":next" command. To edit a file that starts with a dash, precede the filelist with "--". - The file to edit is read from stdin. Commands are read from stderr, which should be a tty. -t {tag} The file to edit and the initial cursor position depends on a "tag", a sort of goto label. {tag} is looked up in the tags file, the associated file becomes the current file and the associated command is executed. Mostly this is used for C programs, in which case {tag} could be a function name. The effect is that the file containing that function becomes the current file and the cursor is positioned on the start of the function. See ":help tag-commands". -q [errorfile] Start in quickFix mode. The file [errorfile] is read and the first error is displayed. If [errorfile] is omitted, the filename is obtained from the 'errorfile' option (defaults to "AztecC.Err" for the Amiga, "errors.err" on other systems). 
Further errors can be jumped to with the ":cn" command. See ":help quickfix". Vim behaves differently, depending on the name of the command (the executable may still be the same file). vim The "normal" way, everything is default. ex Start in Ex mode. Go to Normal mode with the ":vi" command. Can also be done with the "-e" argument. view Start in read-only mode. You will be protected from writing the files. Can also be done with the "-R" argument. gvim gview The GUI version. Starts a new window. Can also be done with the "-g" argument. evim eview The GUI version in easy mode. Starts a new window. Can also be done with the "-y" argument. rvim rview rgvim rgview Like the above, but with restrictions. It will not be possible to start shell commands, or suspend Vim. Can also be done with the "-Z" argument.
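The "ex" personality lends itself to scripted edits: with -e -s, Ex commands are read from stdin and nothing is drawn on screen. A sketch, guarded in case Vim is not installed (file and commands are illustrative):

```shell
tmp=$(mktemp)
printf 'hello world\n' > "$tmp"

# Silent Ex mode: substitute on every line, then write and quit.
if command -v vim >/dev/null 2>&1; then
  vim -e -s "$tmp" <<'EOF'
%s/world/vim/
wq
EOF
fi

cat "$tmp"
rm -f "$tmp"
```

The same commands could be piped to the "ex" name directly; -e -s merely makes the choice of mode explicit.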
|
vim - Vi IMproved, a programmer's text editor
|
vim [options] [file ..] vim [options] - vim [options] -t tag vim [options] -q [errorfile] ex view gvim gview evim eview rvim rview rgvim rgview
|
The options may be given in any order, before or after filenames. Options without an argument can be combined after a single dash. +[num] For the first file the cursor will be positioned on line "num". If "num" is missing, the cursor will be positioned on the last line. +/{pat} For the first file the cursor will be positioned in the line with the first occurrence of {pat}. See ":help search-pattern" for the available search patterns. +{command} -c {command} {command} will be executed after the first file has been read. {command} is interpreted as an Ex command. If the {command} contains spaces it must be enclosed in double quotes (this depends on the shell that is used). Example: vim "+set si" main.c Note: You can use up to 10 "+" or "-c" commands. -S {file} {file} will be sourced after the first file has been read. This is equivalent to -c "source {file}". {file} cannot start with '-'. If {file} is omitted "Session.vim" is used (only works when -S is the last argument). --cmd {command} Like using "-c", but the command is executed just before processing any vimrc file. You can use up to 10 of these commands, independently from "-c" commands. -A If Vim has been compiled with ARABIC support for editing right-to-left oriented files and Arabic keyboard mapping, this option starts Vim in Arabic mode, i.e. 'arabic' is set. Otherwise an error message is given and Vim aborts. -b Binary mode. A few options will be set that make it possible to edit a binary or executable file. -C Compatible. Set the 'compatible' option. This will make Vim behave mostly like Vi, even though a .vimrc file exists. -d Start in diff mode. There should be between two and eight file name arguments. Vim will open all the files and show differences between them. Works like vimdiff(1). -d {device}, -dev {device} Open {device} for use as a terminal. Only on the Amiga. Example: "-d con:20/30/600/150". -D Debugging. Go to debugging mode when executing the first command from a script.
-e Start Vim in Ex mode, just like the executable was called "ex". -E Start Vim in improved Ex mode, just like the executable was called "exim". -f Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. On the Amiga, Vim is not restarted to open a new window. This option should be used when Vim is executed by a program that will wait for the edit session to finish (e.g. mail). On the Amiga the ":sh" and ":!" commands will not work. --nofork Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. -F If Vim has been compiled with FKMAP support for editing right-to-left oriented files and Farsi keyboard mapping, this option starts Vim in Farsi mode, i.e. 'fkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -g If Vim has been compiled with GUI support, this option enables the GUI. If no GUI support was compiled in, an error message is given and Vim aborts. --gui-dialog-file {name} When using the GUI, instead of showing a dialog, write the title and message of the dialog to file {name}. The file is created or appended to. Only useful for testing, to avoid that the test gets stuck on a dialog that can't be seen. Without the GUI the argument is ignored. --help, -h, -? Give a bit of help about the command line arguments and options. After this Vim exits. -H If Vim has been compiled with RIGHTLEFT support for editing right-to-left oriented files and Hebrew keyboard mapping, this option starts Vim in Hebrew mode, i.e. 'hkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -i {viminfo} Specifies the filename to use when reading or writing the viminfo file, instead of the default "~/.viminfo". This can also be used to skip the use of the .viminfo file, by giving the name "NONE". -L Same as -r. -l Lisp mode. Sets the 'lisp' and 'showmatch' options on. -m Modifying files is disabled. Resets the 'write' option. 
You can still modify the buffer, but writing a file is not possible. -M Modifications not allowed. The 'modifiable' and 'write' options will be unset, so that changes are not allowed and files can not be written. Note that these options can be set to enable making modifications. -N No-compatible mode. Resets the 'compatible' option. This will make Vim behave a bit better, but less Vi compatible, even though a .vimrc file does not exist. -n No swap file will be used. Recovery after a crash will be impossible. Handy if you want to edit a file on a very slow medium (e.g. floppy). Can also be done with ":set uc=0". Can be undone with ":set uc=200". -nb Become an editor server for NetBeans. See the docs for details. -o[N] Open N windows stacked. When N is omitted, open one window for each file. -O[N] Open N windows side by side. When N is omitted, open one window for each file. -p[N] Open N tab pages. When N is omitted, open one tab page for each file. -P {parent-title} Win32 GUI only: Specify the title of the parent application. When possible, Vim will run in an MDI window inside the application. {parent-title} must appear in the window title of the parent application. Make sure that it is specific enough. Note that the implementation is still primitive. It won't work with all applications and the menu doesn't work. -R Read-only mode. The 'readonly' option will be set. You can still edit the buffer, but will be prevented from accidentally overwriting a file. If you do want to overwrite a file, add an exclamation mark to the Ex command, as in ":w!". The -R option also implies the -n option (see above). The 'readonly' option can be reset with ":set noro". See ":help 'readonly'". -r List swap files, with information about using them for recovery. -r {file} Recovery mode. The swap file is used to recover a crashed editing session. The swap file is a file with the same filename as the text file with ".swp" appended. See ":help recovery". -s Silent mode. 
Only when started as "Ex" or when the "-e" option was given before the "-s" option. -s {scriptin} The script file {scriptin} is read. The characters in the file are interpreted as if you had typed them. The same can be done with the command ":source! {scriptin}". If the end of the file is reached before the editor exits, further characters are read from the keyboard. -T {terminal} Tells Vim the name of the terminal you are using. Only required when the automatic way doesn't work. Should be a terminal known to Vim (builtin) or defined in the termcap or terminfo file. --not-a-term Tells Vim that the user knows that the input and/or output is not connected to a terminal. This will avoid the warning and the two second delay that would happen. --ttyfail When stdin or stdout is not a terminal (tty) then exit right away. -u {vimrc} Use the commands in the file {vimrc} for initializations. All the other initializations are skipped. Use this to edit a special kind of files. It can also be used to skip all initializations by giving the name "NONE". See ":help initialization" within vim for more details. -U {gvimrc} Use the commands in the file {gvimrc} for GUI initializations. All the other GUI initializations are skipped. It can also be used to skip all GUI initializations by giving the name "NONE". See ":help gui-init" within vim for more details. -V[N] Verbose. Give messages about which files are sourced and for reading and writing a viminfo file. The optional number N is the value for 'verbose'. Default is 10. -V[N]{filename} Like -V and set 'verbosefile' to {filename}. The result is that messages are not displayed but written to the file {filename}. {filename} must not start with a digit. --log {filename} If Vim has been compiled with eval and channel feature, start logging and write entries to {filename}. This works like calling ch_logfile({filename}, 'ao') very early during startup. -v Start Vim in Vi mode, just like the executable was called "vi".
This only has effect when the executable is called "ex". -w{number} Set the 'window' option to {number}. -w {scriptout} All the characters that you type are recorded in the file {scriptout}, until you exit Vim. This is useful if you want to create a script file to be used with "vim -s" or ":source!". If the {scriptout} file exists, characters are appended. -W {scriptout} Like -w, but an existing file is overwritten. -x Use encryption when writing files. Will prompt for a crypt key. -X Don't connect to the X server. Shortens startup time in a terminal, but the window title and clipboard will not be used. -y Start Vim in easy mode, just like the executable was called "evim" or "eview". Makes Vim behave like a click-and-type editor. -Z Restricted mode. Works like the executable starts with "r". -- Denotes the end of the options. Arguments after this will be handled as a file name. This can be used to edit a filename that starts with a '-'. --clean Do not use any personal configuration (vimrc, plugins, etc.). Useful to see if a problem reproduces with a clean Vim setup. --echo-wid GTK GUI only: Echo the Window ID on stdout. --literal Take file name arguments literally, do not expand wildcards. This has no effect on Unix where the shell expands wildcards. --noplugin Skip loading plugins. Implied by -u NONE. --remote Connect to a Vim server and make it edit the files given in the rest of the arguments. If no server is found a warning is given and the files are edited in the current Vim. --remote-expr {expr} Connect to a Vim server, evaluate {expr} in it and print the result on stdout. --remote-send {keys} Connect to a Vim server and send {keys} to it. --remote-silent As --remote, but without the warning when no server is found. --remote-wait As --remote, but Vim does not exit until the files have been edited. --remote-wait-silent As --remote-wait, but without the warning when no server is found. --serverlist List the names of all Vim servers that can be found. 
--servername {name} Use {name} as the server name. Used for the current Vim, unless used with a --remote argument, then it's the name of the server to connect to. --socketid {id} GTK GUI only: Use the GtkPlug mechanism to run gvim in another window. --startuptime {fname} During startup write timing messages to the file {fname}. --version Print version information and exit. --windowid {id} Win32 GUI only: Make gvim try to use the window {id} as a parent, so that it runs inside that window. ON-LINE HELP Type ":help" in Vim to get started. Type ":help subject" to get help on a specific subject. For example: ":help ZZ" to get help for the "ZZ" command. Use <Tab> and CTRL-D to complete subjects (":help cmdline-completion"). Tags are present to jump from one place to another (sort of hypertext links, see ":help"). All documentation files can be viewed in this way, for example ":help syntax.txt". FILES /usr/local/share/vim/vim??/doc/*.txt The Vim documentation files. Use ":help doc-file-list" to get the complete list. vim?? is the short version number, like vim91 for Vim 9.1 /usr/local/share/vim/vim??/doc/tags The tags file used for finding information in the documentation files. /usr/local/share/vim/vim??/syntax/syntax.vim System wide syntax initializations. /usr/local/share/vim/vim??/syntax/*.vim Syntax files for various languages. /usr/local/share/vim/vimrc System wide Vim initializations. ~/.vimrc, ~/.vim/vimrc, $XDG_CONFIG_HOME/vim/vimrc Your personal Vim initializations (first one found is used). /usr/local/share/vim/gvimrc System wide gvim initializations. ~/.gvimrc, ~/.vim/gvimrc, $XDG_CONFIG_HOME/vim/gvimrc Your personal gvim initializations (first one found is used). /usr/local/share/vim/vim??/optwin.vim Script used for the ":options" command, a nice way to view and set options. /usr/local/share/vim/vim??/menu.vim System wide menu initializations for gvim. /usr/local/share/vim/vim??/bugreport.vim Script to generate a bug report. See ":help bugs".
/usr/local/share/vim/vim??/filetype.vim Script to detect the type of a file by its name. See ":help 'filetype'". /usr/local/share/vim/vim??/scripts.vim Script to detect the type of a file by its contents. See ":help 'filetype'". /usr/local/share/vim/vim??/print/*.ps Files used for PostScript printing. For recent info read the VIM home page: <URL:http://www.vim.org/> SEE ALSO vimtutor(1) AUTHOR Most of Vim was made by Bram Moolenaar, with a lot of help from others. See ":help credits" in Vim. Vim is based on Stevie, worked on by: Tim Thompson, Tony Andrews and G.R. (Fred) Walter. Although hardly any of the original code remains. BUGS Probably. See ":help todo" for a list of known problems. Note that a number of things that may be regarded as bugs by some, are in fact caused by a too-faithful reproduction of Vi's behaviour. And if you think other things are bugs "because Vi does it differently", you should take a closer look at the vi_diff.txt file (or type :help vi_diff.txt when in Vim). Also have a look at the 'compatible' and 'cpoptions' options. 2024 Jun 04 VIM(1)
| null |
perlbug
|
This program is designed to help you generate bug reports (and thank-you notes) about perl5 and the modules which ship with it. In most cases, you can just run it interactively from a command line without any special arguments and follow the prompts. If you have found a bug with a non-standard port (one that was not part of the standard distribution), a binary distribution, or a non-core module (such as Tk, DBI, etc), then please see the documentation that came with that distribution to determine the correct place to report bugs. Bug reports should be submitted to the GitHub issue tracker at <https://github.com/Perl/perl5/issues>. The perlbug@perl.org address no longer automatically opens tickets. You can use this tool to compose your report and save it to a file which you can then submit to the issue tracker. In extreme cases, perlbug may not work well enough on your system to guide you through composing a bug report. In those cases, you may be able to use perlbug -d or perl -V to get system configuration information to include in your issue report. When reporting a bug, please run through this checklist: What version of Perl are you running? Type "perl -v" at the command line to find out. Are you running the latest released version of perl? Look at <http://www.perl.org/> to find out. If you are not using the latest released version, please try to replicate your bug on the latest stable release. Note that reports about bugs in old versions of Perl, especially those which indicate you haven't also tested the current stable release of Perl, are likely to receive less attention from the volunteers who build and maintain Perl than reports about bugs in the current release. Are you sure what you have is a bug? A significant number of the bug reports we get turn out to be documented features in Perl. Make sure the issue you've run into isn't intentional by glancing through the documentation that comes with the Perl distribution.
Given the sheer volume of Perl documentation, this isn't a trivial undertaking, but if you can point to documentation that suggests the behaviour you're seeing is wrong, your issue is likely to receive more attention. You may want to start with perldoc perltrap for pointers to common traps that new (and experienced) Perl programmers run into. If you're unsure of the meaning of an error message you've run across, see perldoc perldiag for an explanation. If the message isn't in perldiag, it probably isn't generated by Perl. You may have luck consulting your operating system documentation instead. If you are on a non-UNIX platform, see perldoc perlport, as some features may be unimplemented or work differently. You may be able to figure out what's going wrong using the Perl debugger. For information about how to use the debugger, see perldoc perldebug. Do you have a proper test case? The easier it is to reproduce your bug, the more likely it will be fixed -- if nobody can duplicate your problem, it probably won't be addressed. A good test case has most of these attributes: short, simple code; few dependencies on external commands, modules, or libraries; no platform-dependent code (unless it's a platform-specific bug); clear, simple documentation. A good test case is almost always a good candidate to be included in Perl's test suite. If you have the time, consider writing your test case so that it can be easily included into the standard test suite. Have you included all relevant information? Be sure to include the exact error messages, if any. "Perl gave an error" is not an exact error message. If you get a core dump (or equivalent), you may use a debugger (dbx, gdb, etc) to produce a stack trace to include in the bug report. NOTE: unless your Perl has been compiled with debug info (often -g), the stack trace is likely to be somewhat hard to use because it will most probably contain only the function names and not their arguments.
If possible, recompile your Perl with debug info and reproduce the crash and the stack trace. Can you describe the bug in plain English? The easier it is to understand a reproducible bug, the more likely it will be fixed. Any insight you can provide into the problem will help a great deal. In other words, try to analyze the problem (to the extent you can) and report your discoveries. Can you fix the bug yourself? If so, that's great news; bug reports with patches are likely to receive significantly more attention and interest than those without patches. Please submit your patch via the GitHub Pull Request workflow as described in perldoc perlhack. You may also send patches to perl5-porters@perl.org. When sending a patch, create it using "git format-patch" if possible, though a unified diff created with "diff -pu" will do nearly as well. Your patch may be returned with requests for changes, or requests for more detailed explanations about your fix. Here are a few hints for creating high-quality patches: Make sure the patch is not reversed (the first argument to diff is typically the original file, the second argument your changed file). Make sure you test your patch by applying it with "git am" or the "patch" program before you send it on its way. Try to follow the same style as the code you are trying to patch. Make sure your patch really does work ("make test", if the thing you're patching is covered by Perl's test suite). Can you use "perlbug" to submit a thank-you note? Yes, you can do this by either using the "-T" option, or by invoking the program as "perlthanks". Thank-you notes are good. It makes people smile. Please make your issue title informative. "a bug" is not informative. Neither is "perl crashes" nor is "HELP!!!". These don't help. A compact description of what's wrong is fine. Having done your bit, please be prepared to wait, to be told the bug is in your code, or possibly to get no reply at all. 
The volunteers who maintain Perl are busy folks, so if your problem is an obvious bug in your own code, is difficult to understand or is a duplicate of an existing report, you may not receive a personal reply. If it is important to you that your bug be fixed, do monitor the issue tracker (you will be subscribed to notifications for issues you submit or comment on) and the commit logs to development versions of Perl, and encourage the maintainers with kind words or offers of frosty beverages. (Please do be kind to the maintainers. Harassing or flaming them is likely to have the opposite effect of the one you want.) Feel free to update the ticket about your bug on <https://github.com/Perl/perl5/issues> if a new version of Perl is released and your bug is still present.
|
perlbug - how to submit bug reports on Perl
|
perlbug [ -v ] [ -a address ] [ -s subject ] [ -b body | -f inputfile ] [ -F outputfile ] [ -r returnaddress ] [ -e editor ] [ -c adminaddress | -C ] [ -S ] [ -t ] [ -d ] [ -h ] [ -T ]
perlbug [ -v ] [ -r returnaddress ] [ -ok | -okay | -nok | -nokay ]
perlthanks
|
-a  Address to send the report to instead of saving to a file.
-b  Body of the report. If not included on the command line, or in a file with -f, you will get a chance to edit the report.
-C  Don't send copy to administrator when sending report by mail.
-c  Address to send copy of report to when sending report by mail. Defaults to the address of the local perl administrator (recorded when perl was built).
-d  Data mode (the default if you redirect or pipe output). This prints out your configuration data, without saving or mailing anything. You can use this with -v to get more complete data.
-e  Editor to use.
-f  File containing the body of the report. Use this to quickly send a prepared report.
-F  File to output the results to. Defaults to perlbug.rep.
-h  Prints a brief summary of the options.
-ok  Report successful build on this system to perl porters. Forces -S and -C. Forces and supplies values for -s and -b. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old.
-okay  As -ok except it will report on older systems.
-nok  Report unsuccessful build on this system. Forces -C. Forces and supplies a value for -s, then requires you to edit the report and say what went wrong. Alternatively, a prepared report may be supplied using -f. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old.
-nokay  As -nok except it will report on older systems.
-p  The names of one or more patch files or other text attachments to be included with the report. Multiple files must be separated with commas.
-r  Your return address. The program will ask you to confirm its default if you don't use this option.
-S  Save or send the report without asking for confirmation.
-s  Subject to include with the report. You will be prompted if you don't supply one on the command line.
-t  Test mode. Makes it possible to command perlbug from a pipe or file, for testing purposes.
-T  Send a thank-you note instead of a bug report.
-v  Include verbose configuration data in the report.

AUTHORS Kenneth Albanowski (<kjahds@kjahds.com>), subsequently doctored by Gurusamy Sarathy (<gsar@activestate.com>), Tom Christiansen (<tchrist@perl.com>), Nathan Torkington (<gnat@frii.com>), Charles F. Randall (<cfr@pobox.com>), Mike Guy (<mjtg@cam.ac.uk>), Dominic Dunlop (<domo@computer.org>), Hugo van der Sanden (<hv@crypt.org>), Jarkko Hietaniemi (<jhi@iki.fi>), Chris Nandor (<pudge@pobox.com>), Jon Orwant (<orwant@media.mit.edu>), Richard Foley (<richard.foley@rfi.net>), Jesse Vincent (<jesse@bestpractical.com>), and Craig A. Berry (<craigberry@mac.com>).

SEE ALSO perl(1), perldebug(1), perldiag(1), perlport(1), perltrap(1), diff(1), patch(1), dbx(1), gdb(1)

BUGS None known (guess what must have been used to report them?)

perl v5.38.2  2023-11-28  PERLBUG(1)
| null |
jvisualvm
| null | null | null | null | null |
footprint
|
The footprint utility gathers and displays memory consumption information for the specified processes or memory graph files. footprint will display all addressable memory used by the specified processes, but it emphasizes memory considered 'dirty' by the kernel for purposes of accounting. If multiple processes are specified, footprint will de-duplicate multiply mapped objects and will display shared objects separately from private ones. footprint must be run as root when inspecting processes that are not owned by the current user. Processes are specified using a PID, exact process name, or partial process name. Memory information will be displayed for all processes matching any provided name.
|
footprint - gathers memory information about one or more processes
|
footprint [-j path] [-f bytes|formatted|pages] [--sort column] [-p name|pid] [-x name|pid] [-t] [-s] [-v] [-y] [-w] [--swapped] [--wired] [-a] process-name | pid | memgraph [...]
footprint --sample interval ...
footprint -h, --help
|
-a, --all  target all processes (will take much longer)
-j, --json path  also save a JSON representation of the data to the specified path
-f, --format bytes|formatted|pages  textual output should be formatted in bytes, pages, or human-readable formatted (default)
--sort column  textual output should be sorted by the given column name, for example dirty (default), clean, category, etc.
-p, --proc name  target the given process by name (can be used multiple times)
-p, --pid pid  target the given process by pid (can be used multiple times)
-x, --exclude name|pid  exclude the given process by name or pid (can be used multiple times); often used with --all to exclude some processes from analysis
-t, --targetChildren  in addition to the supplied processes, target their children, grandchildren, etc.
-s, --skip  skip processes that are dirty tracked and have no outstanding XPC transactions (i.e., are "clean")
--minFootprint MiB  skip processes with a footprint less than the provided minimum MiB
--forkCorpse  analyze a forked corpse of the target process rather than the original process. Due to system resource limits on corpses, this argument is not compatible with --all or with attempting to analyze more than a couple of processes.
-v  display all regions and vmmap-like output of address space layout. Without this flag the default output is a summary of the totals for each memory category.
-w, --wide  show wide output with all columns and full paths (implies --swapped --wired)
--swapped  show swapped/compressed column
--wired  show wired memory column
--vmObjectDirty  interpret dirty memory as viewed by VM objects in the kernel, rather than the default behavior which interprets dirty memory through the pmap. This mode may calculate a total footprint that does not match what is shown in other tools such as top(1) or Activity Monitor.app.
However, it can provide insight into dirty memory that is by design not included in the default mode, such as dirty file-backed memory or a VM region mapped into a process that is normally accounted to only the process that created it. The --vmObjectDirty mode was the default in versions prior to macOS 10.15.
--unmapped  search all processes for memory owned by the target processes but not mapped into their address spaces (see the discussion in MEMORY ACCOUNTING for more details)
--sample interval  Start footprint in sampling mode, gathering data every interval seconds (which can be fractional, like 0.5). Text output will be a concatenation of the usual text output with added timestamps. JSON output will contain a "samples" array with many of the same key/values that would normally be at the top level. All other command line options are also supported in sampling mode.
--sample-duration duration  Time limit on the number of seconds to sample when combined with --sample. When this flag is omitted or set to 0, sampling continues until <ctrl-c>.
-h, --help  display help and exit

COLUMNS Column names between parentheses indicate that they are a subset of one or more non-parenthesized columns.
Dirty  Memory that has been written to by a process, which includes "Swapped", purgeable non-volatile memory, and implicitly written memory such as zero-filled. A process's footprint is equal to the total of all dirty memory.
(Swapped)  A subset of "Dirty" memory that has been compressed or swapped out by the kernel.
Clean  Resident memory which is neither "Dirty" nor "Reclaimable".
Reclaimable  Resident memory that has been explicitly marked as available for reuse. Memory can be marked reclaimable when it is made purgeable volatile (including purgeable empty) or by using madvise(2) with advice such as MADV_FREE. Reclaimable memory can be taken away from a process at any time in response to system memory pressure.
(Wired)  Memory that has been wired down (e.g., by calling mlock(2)).
This memory is usually a subset of "Dirty" and cannot be paged out.
Regions  The count of VM regions contributing to this entry. Each binary segment contained within the shared cache region is considered a separate region for display purposes.
Category  A descriptive name for this entry, such as a human-readable name for a VM_MEMORY_* tag, a path to a mapped file, or a segment of a loaded binary.

INVESTIGATING MEMORY FOOTPRINT footprint provides an efficient calculation of a process's memory footprint and a high-level overview of the various categories of memory contributing to that footprint. The details that it provides can be used as a starting point in an investigation. Prioritize reducing "Dirty" memory: dirty memory cannot be automatically reclaimed by the kernel and is directly used by various parts of the OS as a measure of a process's contribution to system memory pressure. Next, focus on reducing "Reclaimable" memory, especially purgeable volatile memory, which will become dirty when marked non-volatile. Although this memory can be cheaply reclaimed by the kernel, purgeable volatile memory is commonly used as a cache of data that may be expensive for a user process to recreate (such as decoded image data). "Clean" memory can also be cheaply taken by the kernel, but unlike "Reclaimable" it can be restored automatically by the kernel without the help of a user process. For example, clean file-backed data can be automatically evicted from memory and re-read from disk on demand. Having too much clean memory can still be a performance problem, since large working sets can cause thrashing when loading and unloading various parts of a process under low-memory situations. Lastly, avoid using "Wired" memory as much as possible, since it cannot be paged out or reclaimed.

Malloc memory  Memory allocated by malloc(3) is one of the most common forms of memory, making up what is usually referred to as the 'heap'. This memory will have a category prefixed with 'MALLOC_'.
malloc(3) allocates VM regions on a process's behalf; the contents of those regions will be the individual allocations representing objects and data in a process. Refer to the heap(1) tool to further categorize the objects contained within a malloc memory region, or leaks(1) to detect a subset of heap memory that is no longer reachable.

Binary segments  Loaded binaries will be visible as an entry with both the segment type and the path to the binary, most often __TEXT, __DATA, or __LINKEDIT segments. Non-shared-cache binaries and pages in the __DATA segment (such as those that contain modified global variables) can often have dirty memory.

Mapped files  File-backed memory allocated using mmap(2) will show up as 'mapped file' along with the path to the file.

VM allocations  Most other types of memory can be tagged with a name that indicates what subsystem allocated the region (see mmap(2) for more information). For instance, Foundation.framework may allocate memory and tag it with VM_MEMORY_FOUNDATION, which appears in footprint's output as 'Foundation'. Processes are able to allocate memory with their own tags by using an appropriate tag in the range VM_MEMORY_APPLICATION_SPECIFIC_1-VM_MEMORY_APPLICATION_SPECIFIC_16. Memory which does not fall into one of the previous categories and has not been explicitly tagged will be marked 'untagged ("VM_ALLOCATE")'.

Kernel memory  In the special case of analyzing kernel_task, footprint's output and categories will mirror much of the data also available via zprint(1). This is memory allocated by the kernel or a kernel extension and is generally unavailable to userspace directly. Despite the restricted access, userspace programs often influence when and how much memory the kernel allocates (e.g., for resources allocated on behalf of a user process).
For malloc and VM allocated memory, details about when and where the memory was allocated can often be obtained by enabling MallocStackLogging and using malloc_history(1) to view the backtrace at the time of each allocation. Xcode.app and Instruments.app also provide visual tools for debugging memory, such as Xcode's Memory Graph Debugger. vmmap(1) provides a similar view to footprint, but with an emphasis on displaying the raw metrics returned by the kernel rather than the simplified and more processed view of footprint. One important difference is that vmmap(1)'s "DIRTY" column does not include the compressed or swapped memory found in the "SWAPPED" column. Additionally, vmmap(1) can only operate on a single process and contains additional information such as a malloc zone summary.

MEMORY ACCOUNTING Determining what dirty memory should and should not be accounted to a process is a difficult problem. Memory can be shared by many processes, it can sometimes be allocated on your behalf by other processes, and no matter how the accounting is done it can often be expensive to calculate accurately. Many operating systems have historically exposed memory metrics such as Virtual Size (VSIZE) and Resident Size (RSIZE/RPRVT/RSS/etc.). Metrics such as these, while useful in their own respect, are not great indicators of the amount of physical memory required by a process to run (and therefore the memory pressure that a process applies to the system). For instance, Virtual Size includes allocations that may not be backed by physical memory, and Resident Size includes clean and volatile purgeable memory that can be reclaimed by the kernel (as described earlier). On the other hand, analyzing the dirty memory reported by the underlying VM objects mapped into a process (the approach taken by --vmObjectDirty), while more accurate, is expensive and cannot be done in real time for systems that need to frequently know the memory footprint of a process.
Apple platforms instead keep track of the 'physical footprint' by using a per-process ledger in the kernel that is kept up to date by the pmap and other subsystems. This ledger is cheap to query, suitably accurate, and provides additional features such as tracking peak memory and the ability to charge one process for memory that is no longer mapped into it or that may have been allocated by another process. In cases where footprint is unable to analyze a portion of 'physical footprint' that is not mapped into a process, this memory will be listed as 'Owned physical footprint (unmapped)'. If this memory is mapped into another userspace process, then the --unmapped argument can be used to search all processes for a mapping of the same VM object, which if found will provide a better description and show which process(es) have mapped the memory. This also happens by default when targeting all processes via --all. Any memory still listed as "(unmapped)" after using --unmapped is likely not mapped into any userspace process and is instead only referenced by the kernel or drivers. The exact definition of this 'physical footprint' ledger is complicated and subject to change, but suffice it to say that the default mode of footprint aims to present an accurate memory breakdown that matches the value reported by the ledger. Most other diagnostic tools, such as the 'MEM' column in top(1), the 'Memory' column in Activity Monitor.app, and the Memory Debug Gauge in Xcode.app, query this ledger to populate their metrics. Physical footprint can potentially be split into multiple subcategories, such as network-related memory, graphics, etc. When a memory allocation (either directly mapped into a process, or owned but unmapped) has such a classification, footprint will append it to the category name, such as 'IOKit (graphics)' or 'Owned physical footprint (unmapped) (media)'.

SEE ALSO vmmap(1), heap(1), leaks(1), malloc_history(1), zprint(1)

OS X  April 15, 2022  OS X
| null |
lsappinfo
| null |
lsappinfo - Control and query CoreApplicationServices about the app state on the system

KEY VALUES In numerous places a key can be set to a value. The format of value can be any of the following:
• "string"  A string, surrounded by double quotes.
• numeric-digits | -numeric-digits | numeric-digits.numeric-digits[E]numeric-digits  A numeric value, either an integer type or a double floating point type.
• $hex-digits  A numeric value given by the hex value hex-digits.
• "ASN:0xAAAA:0xBBBB:"  An ASN, where AAAA and BBBB are the values for an application ASN.
• App:str  An ASN, where str matches one of the application-specifier formats.
• ( [[str,] str] )  A CFArrayRef, where each str is converted as if it were a key value.
• true  The kCFBooleanTrue value.
• false  The kCFBooleanFalse value.
• null  The kCFNull value.
• Any of the application information item, or launch modifier strings  The equivalent, exported LaunchServices CFStringRef key for the item or launch modifier.

APPLICATION INFORMATION ITEM KEYS
• asn  An application ASN, which is a unique identifier assigned to each application when the application is launched; it persists until the application exits and is likely unique for the entire time a user is logged in. When displayed, an ASN looks like "ASN:0x0-0x1f01f:".
• parentasn  The ASN of the application which launched this application.
• bundlename  The bundle name, if one exists, for the application.
• bundlenamelc  The bundle name, if one exists, for the application, but with every upper case character converted into the equivalent lower case character.
• bundlepath  The bundle path, if the application is bundled.
• executablepath  The executable path of the application.
• filetype  The file type of the application, if it has one.
• filecreator  The creator type of the application, if it has one.
• pid  The pid of the application.
• filename  The filename of the executable (the last component of the executable path), converted into a lowercase string.
• bundlelastcomponent  The last component of the bundle path, converted into a lowercase string.
• displayname | name  The display name of this application.
• bundleid  The bundle identifier of the application, if one exists.
• applicationtype  The type of the application (generally "Foreground", "Background", or "UIElement").
• allowedtobecomefrontmost  The application is allowed to be frontmost.
• version  The version string for the application, if it has one.
• presentationmode  The UIPresentationMode for this application (only for foreground applications), generally one of "Normal", "ContentSupressed", "ContentHidden", "Suppressed", "AllHidden".
• presentationoptions
• session  A number indicating which audit session this application is running in.
• hidden  If this application is a foreground application, then "true" if it is hidden, or "false" if it is not hidden.
• changecount  A number which changes whenever any item in the application's information dictionary is changed.
• debuglevel
• isregistered  If this application has registered, then "true", otherwise "false".
• isready  If this application has entered its main runloop and is able to respond to requests to hide or show itself, "true", otherwise "false".
• isstopped  If this application was launched stopped, and it has not been started yet, then "true", otherwise "false" or not present.
• launchedinquarantine  If this application was launched in a quarantined state, then "true", otherwise "false" or not present.
• arch  The architecture of the code running this application, generally "x86_64" or "i386".
• recordingAppleEvents  If this application is recording AppleEvents, then "true", otherwise "false" or not present.
• supressRelaunch  If this application should not be re-launched after a logout and login, then "true", otherwise "false" or not present.
• applicationTypeToRestore
• applicationWasTerminatedByTAL
• isthrottled  If this application was launched in the throttled state, and it has not been unthrottled, then "true", otherwise "false" or not present.
• applicationWouldBeTerminatedByTALKey
• launchedhidden  If the application was launched hidden, then "true", otherwise "false" or not present. This is not whether the application is currently hidden, just whether at the time it was launched the request was to have it hide itself.
• launchandhideothers  If the application was launched and asked to hide all other applications, then "true", otherwise "false" or not present. This is not whether the application is currently hidden, just whether at the time it was launched the request was to have it hide all other applications.
• launchForPersistence  If the application was launched with launchForPersistence=true, then "true", otherwise "false" or not present.

LAUNCHMODIFIER KEYS
• async=[true|false]  Launch asynchronously.
• refcon=[#]  Launch with the given numeric refcon.
• nofront=[true|false]  If true, do not bring the application to the front when it finishes launching.
• stopped=[true|false]  Launch the process but do not start it.
• launchandhide=[true|false]  Launch the process and cause it to hide itself when it finishes launching.
• launchandhideothers=[true|false]  Launch the process and cause it to hide all other applications when it finishes launching.
• launchForPersistence=[true|false]
• launchWithASLRDisabled=[true|false]

NOTIFICATION CODES Notifications are sent out by LaunchServices when various conditions arise. Each notification has a type, called the notification-code, a dictionary of data items which are specific to the notification, a time the notification was sent, and an optional affected ASN.
• launch  Sent when an application is launched.
• creation  Sent when an entry for an application is created on the system and associated with an ASN.
• birth  Sent when an
• death  Sent when an application exits.
• abnormaldeath  Sent when an application exits with a non-zero exit status.
• childDeath  Sent when an application exits, with affected ASN set to the parent ASN of the application which exited.
• abnormalChildDeath  Sent when an application exits with a non-zero exit status, with affected ASN set to the parent ASN of the application which exited.
• launchFailure  Sent when an application launch fails, after a launch notification has been sent out.
• appCreation  Sent when an application is "created", which happens immediately after the application is created and certain items are added into the application information dictionary.
• childAppCreation  Sent when an application is "created", which happens immediately after the application is created and certain items are added into the application information dictionary, with affected ASN set to the parent ASN of this application.
• appReady  Sent when an application signals to LaunchServices that it is ready to accept hide/show events, generally when it has entered its main runloop.
• childAppReady  Sent when an application signals to LaunchServices that it is ready to accept hide/show events, generally when it has entered its main runloop, with affected ASN set to the parent ASN of the application which signalled ready.
• readyToAcceptAppleEvents  Sent when an application signals that it is ready to accept AppleEvents.
• launchTimedOut
• launchFinished
• allTALAppsRegistered  Sent when talagentd decides that all applications which were launched for persistence have registered.
• becameFrontmost  Sent when an application is made into the front application.
• lostFrontmost  Sent when an application which previously was the front application is no longer the front application.
• orderChanged  Sent when the front-to-back order of the application list changes.
• bringForwardRequest  Someone has requested that the application with affected ASN make itself frontmost.
• menuBarAcquired  Sent when the application which is responsible for drawing the menu bar (generally the frontmost foreground application) changes.
• menuBarLost  Sent when the application which was responsible for drawing the menu bar (generally the frontmost foreground application) is no longer responsible.
• hidden  Sent when the application is hidden.
• shown  Sent when the application is shown.
• showRequest  Someone has requested that the application with the affected application ASN should show (un-hide) itself.
• hideRequest  Someone has requested that the application with the affected application ASN should hide itself.
• pullwindowsforward  Someone has requested that the application with the affected application ASN should show itself and pull all of its windows forward.
• appInfoChanged  Sent when the information for the application is changed.
• appInfoKeyAdded  Sent when a key is added to the information for the application. The data for the notification will include the key being added and its value.
• appInfoKeyChanged  Sent when a value for an item in the application information is changed. The data for the notification will include the key being changed and its new and old values.
• appInfoKeyRemoved  Sent when the value for an item in the application information is removed. The data for the notification will include the key being removed and its value.
• appTypeChanged  Sent when the "ApplicationType" key in the application information is changed.
• appNameChanged  Sent when the application name in the application information is changed.
• wantsAttentionChanged  Sent when the LSWantsAttention key in the application information is changed.
• presentationModeChanged  Sent when an application changes its presentation mode.
• pidChanged  Sent when an application changes its pid.
In practice this can never happen, except when LaunchServices launches a process which itself forks or spawns a new process, and then checks in from that new pid.
• frontPresentationModeChanged  Sent when the presentation mode of the system changes, generally when the foreground application changes its own presentation mode or when the front application changes and the old and new applications have different presentation modes.
• presentationModeChangedBecauseFrontApplicationChanged  Sent when the presentation mode of the system changes only because the front application changed and the old and new applications have different presentation modes.
• launchrequest
• started  Sent when a formerly stopped application is started.
• sessionLauncherRegister  Sent when the ASN of the session launcher application registers with LaunchServices.
• sessionLauncherUnregistered  Sent when the application registered as the session launcher unregisters or exits.
• nextAppToBringForwardAtQuitRegistered  Sent when the meta-information item for the next-application-to-bring-forward ASN is changed.
• nextAppToBringForwardAtQuitUnregistered
• systemProcessRegistered  Sent when the system process (generally loginwindow) registers with LaunchServices.
• systemProcessUnregistered  Sent when the system process (generally loginwindow) unregisters with LaunchServices.
• frontReservationCreated  Sent when a front-reservation is created.
• frontReservationDestroyed  Sent when a front reservation is destroyed.
• permittedFrontASNsChanged  Sent when the array of permitted-front-applications changes.
• suppressRelaunch  Sent when an application changes its "LSSupressRelaunch" key.
• terminatedByTALChanged  Sent when an application changes its "TerminatedByTAL" key.
• launchedThrottledChanged  Sent when an application changes
• applicationWouldBeTerminatedByTALChanged
• applicationProgressValueChanged
• applicationVisualNotification
• wakeup  Request that the application with affected ASN resume running its main runloop.
• sessionCreated  Sent when a session is created, generally when the first application registers inside the session. Affected ASN is always NULL, since this does not refer to any particular application.
• sessionDestroyed  Sent when a session is destroyed. Affected ASN is always NULL, since this does not refer to any particular application.
• invalid  This represents an invalid notification code, and is never sent.
• all  This represents all notification codes; it is never sent, but is used when specifying which notifications to listen for.
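The `$hex-digits` value format and the ASN display form described under KEY VALUES are ordinary hexadecimal numbers. As a small illustration (the specific ASN value is just the example shown above), the low word of "ASN:0x0-0x1f01f:" converts to decimal with printf(1):

```shell
# Convert the low word of the example ASN "ASN:0x0-0x1f01f:" to decimal.
printf '%d\n' 0x1f01f
```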
|
lsappinfo [options] [ command [command options] ] ...

COMMON COMMANDS
• front  Show the front application.
• find [ key=value ]+  Show the ASN of all applications which have the given key/value present in their application information. For key, the actual CFString value for the key can be used, or any of the aliases described below under Key Strings. For value, see the rules below under Key Values.
• info [-only information-item-key] [-app app-specifier] [-long] [app-specifier]  Show the information for the application app-specifier.
• list  Show the application list and information about each running application.
• listen [+notificationcode]* [-notificationcode]* [-addasn asn] [-removeasn asn] [ -id # ] duration [--]  Listen for the given notifications (those with '+', excluding those with '-') and display each one and its payload. Notifications are displayed as they are received while this tool is executing a wait or forever command.
• launch [[launch-modifier=value]+ [launch-option=value]+ [-arg argument] [path-to-bundle] [--]  Launch an application with CoreApplicationServices in LaunchServices. At the minimum, the execpath must be included as one of the launch-options, or -poseas and a path-to-bundle. This is a fairly low level operation and does not handle a number of conditions that the higher level functions do.
• metainfo  Show the meta information, which is the session-wide information which CoreApplicationServices maintains for each login session.
• processList  Show the application list, in ascending ASN order.
• restart  Ask launchservicesd to restart. The requestor must be privileged.
• sharedmemory  Show the shared memory information page for this session.
• unlisten [ -id ID ] [ -all ]  Unlisten to all notifications on notification ID.
• visibleProcessList  Show the visible (front-to-back) application list.

UNCOMMON COMMANDS
• allocateASN  Ask launchservicesd to allocate an ASN, and print it out.
• createFile PATH  Create a file at the given path.
• disconnect  Disconnect from launchservicesd.
• file path  Open the file at path and read lines, treating each one as if it were passed to lsappinfo on the command line.
• forever  Wait forever before executing the next command.
• log [ -d | -i | -n | -w | -e | -c | -a ] [ -B ] [ -sender processname ] [ string ... -- ]  If an option is given, dump any LaunchServices logging information on the system until the process is terminated with control-C. If a string is provided, log that string to syslog.
• removeFile PATH  Remove the file at the given path.
• server [ -xpcservicename ARG ] [ -local ] [ -duration DURATION ] [ -file FILEPATH ] [ -gone FILEGONEPATH ] [ -forever ]  Start up the launchservicesd server in process, with the optional given xpc service name, or if -local then processing xpc requests from future commands for this same process. Terminate the server after the given DURATION seconds, or when the file at FILEPATH exists, or when the file at FILEGONEPATH is deleted, or never if -forever.
• setinfo [-app app-specifier] [app-info-item=value]+ [--]  Set the values for the given application information items in the specified application.
• setmetainfo [meta-info-item=value]+ [--]
• wait [ -duration duration ] [ -file FILEPATH ] [ -gone FILEPATHGONE ] duration  Wait for duration seconds before executing the next command, or if FILEPATH is given until that file exists, or if FILEPATHGONE is given until that file no longer exists.
• writePIDToFile PATH  Write the current process's pid to a file at PATH.
|
• -v | --verbose
    Be more verbose about many operations.
• -q | --quiet
    Be less verbose about many operations.
• -defaultSession
    Use kLSDefaultSessionID as the sessionID passed to all calls (the default).
• -currentSession
    Use kLSCurrentSessionID as the sessionID passed to all calls.
• -debug | -info | -notice | -warning | -err | -critical | -alert | -emergency
    Set the log level for this process to the given level.

APPLICATION SPECIFIERS
There are different ways to indicate what application the commands operate on, collectively called the app-specifier. This may be one of the following.
• "ASN:0xAAAA:0xBBBB:" where AAAA and BBBB are the values for an application ASN.
• "0xBBBB" where BBBB is the value from the lower part of an application ASN for which the upper part of the ASN is 0x0.
• "#" where # is a decimal value above 10, representing the application with the pid #.
• "name" where name is the display name of a running application.
• "bundleid" where bundleid is the bundle id of a running application.
• "me" the ASN of the lsappinfo tool.

KEY STRINGS
Any string from this set will map to the corresponding constant from the LaunchServices header files.
kCFBundleNameKey kLSASNKey kLSASNToBringForwardAtNextApplicationExitKey kLSAllowedToBecomeFrontmostKey kLSApplicationBackgroundOnlyTypeKey kLSApplicationBackgroundPriorityKey kLSApplicationCountKey kLSApplicationDesiresAttentionKey, kLSApplicationForegroundPriorityKey kLSApplicationForegroundTypeKey kLSApplicationHasRegisteredKey kLSApplicationHasSignalledItIsReadyKey kLSApplicationInStoppedStateKey kLSApplicationInThrottledStateAfterLaunchKey kLSApplicationInformationSeedKey kLSApplicationIsHiddenKey kLSApplicationListSeedKey kLSApplicationReadyToBeFrontableKey kLSApplicationTypeKey kLSApplicationTypeToRestoreKey kLSApplicationUIElementTypeKey kLSApplicationVersionKey kLSApplicationWasTerminatedByTALKey kLSApplicationWouldBeTerminatedByTALKey kLSArchitectureKey kLSBundleIdentifierLowerCaseKey kLSBundlePathDeviceIDKey kLSBundlePathINodeKey kLSBundlePathKey kLSCheckInTimeKey kLSDebugLevelKey kLSDisplayNameKey kLSExecutableFormatCFMKey kLSExecutableFormatKey kLSExecutableFormatMachOKey kLSExecutableFormatPoundBangKey kLSExecutablePathDeviceIDKey kLSExecutablePathINodeKey kLSExecutablePathKey kLSExitStatusKey kLSFileCreatorKey kLSFileTypeKey kLSFlavorKey kLSFrontApplicationSeedKey kLSHiddenApplicationCountKey kLSLaunchTimeKey kLSLaunchedByLaunchServicesKey kLSLaunchedByLaunchServicesThruForkExecKey kLSLaunchedByLaunchServicesThruLaunchDKey kLSLaunchedByLaunchServicesThruSessionLauncherKey kLSLaunchedInQuarantineKey kLSMenuBarOwnerApplicationSeedKey kLSModifierLaunchedForPersistenceKey kLSModifierRefConKey kLSNotifyBecameFrontmostAnotherLaunchKey kLSNotifyBecameFrontmostFirstActivationKey kLSNotifyLaunchRequestLaunchModifiersKey kLSOriginalExecutablePathDeviceIDKey kLSOriginalExecutablePathINodeKey kLSOriginalExecutablePathKey kLSOriginalPIDKey kLSPIDKey kLSParentASNKey kLSParentASNWasInferredKey kLSPersistenceSuppressRelaunchAtLoginKey kLSPreviousASNKey kLSPreviousPresentationModeKey kLSPreviousValueKey kLSRecordingAppleEventsKey kLSRequiresCarbonKey kLSSessionIDKey 
kLSShellExecutablePathKey kLSUIDsInSessionKey kLSUIPresentationModeAllHiddenValue kLSUIPresentationModeAllSuppressedValue kLSUIPresentationModeContentHiddenValue kLSUIPresentationModeContentSuppressedValue kLSUIPresentationModeKey kLSUIPresentationModeNormalValue kLSUIPresentationOptionsKey kLSUnhiddenApplicationCountKey kLSVisibleApplicationCountKey kLSVisibleApplicationListSeedKey kLSWantsToComeForwardAtRegistrationTimeKey launchedThrottled

Likewise, these short strings also map to the corresponding constants.

allowedtobecomefrontmost applicationTypeToRestore applicationWasTerminatedByTAL applicationtype arch asn bundleid bundlelastcomponent bundlename bundlenamelc bundlepath changecount creator debuglevel displayname execpath executablepath filecreator filename filetype hidden isconnectedtowindowserver isready isregistered isstopped isthrottled launchedForPersistence launchedinquarantine
|
• List all of the running applications
    lsappinfo list
• Show all the notifications which are being sent out
    lsappinfo listen +all forever
• Show the notifications sent out whenever the front application is changed, for the next 60 seconds
    lsappinfo listen +becameFrontmost wait 60
• Launch TextEdit.app, asynchronously, and don't bring it to the front
    lsappinfo launch nofront=true async=true /Applications/TextEdit.app/
• Find the ASN for the running application "TextEdit", by bundle id
    lsappinfo find bundleid=com.apple.TextEdit
• Find the ASN for the running application "TextEdit", by name
    lsappinfo find name="TextEdit"
• Show the information for the running application "TextEdit"
    lsappinfo info "TextEdit"

04/01/2013 LSAPPINFO(8)
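lsappinfo output is line-oriented "key"=value text, so it scripts reasonably well. Below is a minimal sketch of extracting one value; the sample line and exact quoting are assumptions (fabricated, not captured output), so verify the real format on your system.

```shell
# Hypothetical helper: pull the numeric pid out of a "pid"=NNN line,
# e.g. from:  lsappinfo info -only pid "TextEdit" | pid_from_info
pid_from_info() {
  sed -n 's/^"pid"=\([0-9][0-9]*\)$/\1/p'
}

# Illustrative sample input (not real lsappinfo output):
printf '"pid"=863\n' | pid_from_info    # -> 863
```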
|
lwp-download
|
The lwp-download program will save the file at url to a local file. If local path is not specified, then the current directory is assumed. If local path is a directory, then the last segment of the path of the url is appended to form a local filename. If the url path ends with a slash, the name "index" is used. With the -s option, pick up the last segment of the filename from server-provided sources like the Content-Disposition header or any redirect URLs. A file extension to match the server-reported Content-Type might also be appended. If a file with the produced filename already exists, then lwp-download will prompt before it overwrites, and will fail if its standard input is not a terminal. This form of invocation will also fail if no acceptable filename can be derived from the sources mentioned above. If local path is not a directory, then it is simply used as the path to save into. If the file already exists, it's overwritten. The lwp-download program is implemented using the libwww-perl library. It is better suited to download big files than the lwp-request program because it does not store the file in memory. Another benefit is that it will keep you updated about its progress, and that there aren't many options to worry about. Use the "-a" option to save the file in text (ASCII) mode. Might make a difference on DOSish systems.

EXAMPLE
Fetch the newest and greatest perl version:

    $ lwp-download http://www.perl.com/CPAN/src/latest.tar.gz
    Saving to 'latest.tar.gz'...
    11.4 MB received in 8 seconds (1.43 MB/sec)

AUTHOR
Gisle Aas <gisle@aas.no>

perl v5.34.0 2020-04-14 LWP-DOWNLOAD(1)
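The default filename rule described above (last path segment of the URL, or "index" when the path ends in a slash) can be sketched in shell. This is an approximation of the documented behaviour, not lwp-download's actual code:

```shell
# Approximate lwp-download's default local-filename derivation:
# last URL path segment, or "index" for a trailing slash.
derive_name() {
  case $1 in
    */) echo index ;;          # path ends in slash
    *)  echo "${1##*/}" ;;     # strip everything up to the last slash
  esac
}

derive_name http://www.perl.com/CPAN/src/latest.tar.gz   # -> latest.tar.gz
derive_name http://www.perl.com/CPAN/src/                # -> index
```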
|
lwp-download - Fetch large files from the web
|
lwp-download [-a] [-s] <url> [<local path>] Options: -a save the file in ASCII mode -s use HTTP headers to guess output filename
| null | null |
db_load
|
The db_load utility reads from the standard input and loads it into the database file. The database file is created if it does not already exist. The input to db_load must be in the output format specified by the db_dump utility, or as specified for the -T option below.

The options are as follows:

-c  Specify configuration options ignoring any value they may have based on the input. The command-line format is name=value. See the Supported Keywords section below for a list of keywords supported by the -c option.

-f  Read from the specified input file instead of from the standard input.

-h  Specify a home directory for the database environment. If a home directory is specified, the database environment is opened using the Db.DB_INIT_LOCK, Db.DB_INIT_LOG, Db.DB_INIT_MPOOL, Db.DB_INIT_TXN, and Db.DB_USE_ENVIRON flags to DB_ENV->open. (This means that db_load can be used to load data into databases while they are in use by other processes.) If the DB_ENV->open call fails, or if no home directory is specified, the database is still updated, but the environment is ignored; for example, no locking is done.

-n  Do not overwrite existing keys in the database when loading into an already existing database. If a key/data pair cannot be loaded into the database for this reason, a warning message is displayed on the standard error output, and the key/data pair are skipped.

-P  Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments.

-T  The -T option allows non-Berkeley DB applications to easily load text files into databases.
If the database to be created is of type Btree or Hash, or the keyword keys is specified as set, the input must be paired lines of text, where the first line of the pair is the key item, and the second line of the pair is its corresponding data item. If the database to be created is of type Queue or Recno and the keyword keys is not set, the input must be lines of text, where each line is a new data item for the database.

A simple escape mechanism, where newline and backslash (\) characters are special, is applied to the text input. Newline characters are interpreted as record separators. Backslash characters in the text will be interpreted in one of two ways: If the backslash character precedes another backslash character, the pair will be interpreted as a literal backslash. If the backslash character precedes any other character, the two characters following the backslash will be interpreted as a hexadecimal specification of a single character; for example, \0a is a newline character in the ASCII character set. For this reason, any backslash or newline characters that naturally occur in the text input must be escaped to avoid misinterpretation by db_load.

If the -T option is specified, the underlying access method type must be specified using the -t option.

-t  Specify the underlying access method. If no -t option is specified, the database will be loaded into a database of the same type as was dumped; for example, a Hash database will be created if a Hash database was dumped. Btree and Hash databases may be converted from one to the other. Queue and Recno databases may be converted from one to the other. If the -k option was specified on the call to db_dump then Queue and Recno databases may be converted to Btree or Hash, with the key being the integer record number.

-V  Write the library version number to the standard output, and exit.
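When preparing -T input by hand, the escape rules above can be applied with standard tools. A sketch (sed/awk here stand in for whatever generates your data; db_load itself is not invoked):

```shell
# Double literal backslashes so db_load does not treat them as escapes.
bs_escape() { sed 's/\\/\\\\/g'; }

# Encode embedded newlines as \0a (hex for ASCII newline) so they are
# not read as record separators.
nl_escape() { awk 'NR > 1 { printf "\\0a" } { printf "%s", $0 } END { print "" }'; }

printf 'DOMAIN\\user\n' | bs_escape    # -> DOMAIN\\user
printf 'two\nlines\n'   | nl_escape    # -> two\0alines
```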
The db_load utility may be used with a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_load should always be given the chance to detach from the environment and exit gracefully. To cause db_load to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). The db_load utility exits 0 on success, 1 if one or more key/data pairs were not loaded into the database because the key already existed, and >1 if an error occurs.
|
db_load
|
db_load [-nTV] [-c name=value] [-f file] [-h home] [-P password] [-t btree | hash | queue | recno] file
| null |
The db_load utility can be used to load text files into databases. For example, the following command loads the standard UNIX /etc/passwd file into a database, with the login name as the key item and the entire password entry as the data item:

    awk -F: '{print $1; print $0}' < /etc/passwd |
        sed 's/\\/\\\\/g' | db_load -T -t hash passwd.db

Note that backslash characters naturally occurring in the text are escaped to avoid interpretation as escape characters by db_load.

ENVIRONMENT
DB_HOME  If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open.

SUPPORTED KEYWORDS
The following keywords are supported for the -c command-line option to the db_load utility. See DB->open for further discussion of these keywords and what values should be specified. The parenthetical listing specifies how the value part of the name=value pair is interpreted. Items listed as (boolean) expect value to be 1 (set) or 0 (unset). Items listed as (number) convert value to a number. Items listed as (string) use the string value without modification.

bt_minkey (number)    The minimum number of keys per page.
chksum (boolean)      Enable page checksums.
database (string)     The database to load.
db_lorder (number)    The byte order for integers in the stored database metadata.
db_pagesize (number)  The size of database pages, in bytes.
duplicates (boolean)  The value of the Db.DB_DUP flag.
dupsort (boolean)     The value of the Db.DB_DUPSORT flag.
extentsize (number)   The size of database extents, in pages, for Queue databases configured to use extents.
h_ffactor (number)    The density within the Hash database.
h_nelem (number)      The size of the Hash database.
keys (boolean)        Specify whether keys are present for Queue or Recno databases.
re_len (number)       Specify fixed-length records of the specified length.
re_pad (string)       Specify the fixed-length record pad character.
recnum (boolean)      The value of the Db.DB_RECNUM flag.
renumber (boolean) The value of the Db.DB_RENUMBER flag. subdatabase (string) The subdatabase to load. SEE ALSO db_archive(1), db_checkpoint(1), db_deadlock(1), db_dump(1), db_printlog(1), db_recover(1), db_stat(1), db_upgrade(1), db_verify(1) Darwin December 3, 2003 Darwin
|
fwkdp
|
Use fwkdp to act as a proxy for the kernel debugging KDP protocol over FireWire. It will also accept kernel core dump images transmitted over FireWire. Additionally, fwkdp can be used to set the boot-args necessary for a target machine which is to be debugged. As a complete technology, FireWireKDP redirects the kernel debugging KDP protocol over FireWire. It enables live LLDB debugging of a trapped kernel over a FireWire cable, just as you would over Ethernet. It provides the following advantages over remote Ethernet kernel debugging: - Available sooner in the kernel's startup. - Available until right when the cpu is powered down at sleep and almost as soon as the cpu is powered when waking. - No IP network configuration is required. FireWireKDP also enables Remote Kernel Core Dumps over FireWire. This allows you to debug a static kernel at a later time without the need to be connected at the time of debug. To enable kernel core dumps, see section "CORE DUMPS". For more info on debugging with Kernel Core Dumps, please see: Technical Note TN2118: Debugging With Kernel Core Dumps. FireWireKDP works in two parts: kernel software on the target side (machine to be debugged) and user-space software on the side of the host. Now, the target side software is integrated into the OS. This means that AppleFireWireKDP.kext is no longer necessary. See the installation instructions below.
|
fwkdp β FireWire KDP Tool
|
fwkdp [--setargs[=boot-args]] [--proxy] [--core] [--verbose] [--disable] [--erase] [--ioff] [--restart] [--help]
|
--setargs[=boot-args], -r[boot-args]
    Sets the nvram boot-args on the current machine to boot-args. This flag should only be used on the target machine (which is contrary to all other usage cases, when it is used on the host). If boot-args is not passed, the tool will prompt the user as to which boot-args are to be set.
--proxy, -p
    Use proxy mode only.
--core, -c
    Use core dump receive mode only.
--verbose, -v
    Verbose mode.
--disable, -x
    Sets the nvram boot-args on the current machine to "debug=0x146" which disables kprintf logging. This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host).
--erase, -e
    Deletes the boot-args variable from nvram. This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host).
--ioff
    Turns off interactive mode.
--restart
    Automatically restarts the machine only after the nvram has been modified by this tool.
--help, -h
    Displays usage info for fwkdp.

COMPATIBILITY
FireWireKDP doesn't interfere with the loading of the normal FireWire stack - it only touches the FireWire hardware when the kernel debugger is invoked, either by a panic, NMI, trap, or similar. Furthermore, FireWireKDP is designed to work cooperatively with FireWireKPrintf. To use both you must use a combination of boot-args such as "debug=0x14e kdp_match_name=firewire". To use FireWireKDP on a non-built-in FireWire interface (e.g. when using a Thunderbolt to FireWire adapter) add fwkdp=0x8000 to your boot-args.

USAGE
Connect two Macs via FireWire and follow the steps below.

On the target (machine to be debugged):
1. Use fwkdp to set the kernel boot arguments to enable live debugging:
       % fwkdp -r
   If using FireWireKDP with FireWireKPrintf try:
       % sudo nvram boot-args="debug=0x14e kdp_match_name=firewire"
2. Reboot the target.
3. Break into debug mode as you would with Ethernet. (NMI button, panic, debugger traps, etc.)
On the debugger machine:
1. Run fwkdp:
       % fwkdp
   The FireWireKDP tool defaults to both proxy and core-dump receive mode. It is a stateless translator that shunts data between the network KDP/UDP port and the FireWire connection. Once started it can be left running indefinitely.
2. Run LLDB with the target operating system's symbol file.
       % lldb kernel.development
   See the Apple Development Kits webpage for the proper "Kernel Debug Kit" which will contain the proper "kernel.development" or "kernel.debug" symbol file. See step 6 for more info.
3. Within LLDB, allow script loading to import the appropriate kernel macros (commonly found along with symbolic mach_kernel).
       (lldb) settings set target.load-script-from-symbol-file true
4. Within LLDB, attach using kdp-remote.
       (lldb) kdp-remote localhost
5. The connection should be established. Use lldb as you would over Ethernet.
6. For more info on remote kernel debugging, please see "Two-machine Debugging" of the I/O Kit Device Driver Design Guidelines and Technical Note TN2118: Debugging With Kernel Core Dumps.

CORE DUMPS
To capture kernel core dumps, set the proper bits of the boot-args' debug variable and kdp_match_name equal to "firewire". In addition, an IP address for the receiving computer is also required, although it's meaningless over FireWire. On the target machine, set the boot-args and restart.

    % sudo nvram boot-args="debug=0xd46 _panicd_ip=1.2.3.4 kdp_match_name=firewire"

Connect the machine to be debugged to a second Mac with a FireWire cable. Run "fwkdp" from a Terminal window on the second Mac; it will wait for the target to transmit its core after it drops to the debugger (panic, NMI, etc.). For more info on debugging with Kernel Core Dumps, please see Technical Note TN2118: Debugging With Kernel Core Dumps.

NOTES
Post-Panic Hot-Plugs
Some Macs do not support post-panic debugging after hot-plugging another Mac. To avoid this problem, keep a debugger Mac connected in anticipation of a panic.
64-bit Debugging
FireWireKDP does work when running the kernel in 64-bit mode.

Sleep/Wake Notes
FireWireKDP will work if the target has been through a sleep/wake cycle. However, if FireWireKDP has run (e.g. drop into debugger and continue) on the target once, it might not work again if the machine is sleep/wake cycled afterwards. Therefore, if you would like to debug a sleep/wake issue with FireWireKDP, do not sleep between breaks to the debugger.

Other FireWire Devices
To avoid conflicts it is best not to have other FireWire devices plugged into the host or target machines while using any FireWire debugging tools. However, it is possible to connect more than one target machine to a single host (e.g. to collect core dumps).

Second FireWire Interface
FireWireKDP does not work on multiple FireWire interfaces. Please use a built-in FireWire port without installing any FireWire add-in cards.

FILES
/usr/bin/fwkdp is installed as part of the Mac OS X Developer Tools.

SEE ALSO
fwkpfv(1)

Mac OS X July 7, 2015 Mac OS X
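The debug= boot-args used above are ORed bitmasks, so values can be composed arithmetically. Assuming the 0x8 bit is the kprintf-redirection flag (an assumption about XNU's debug mask, which would account for the difference between the 0x146 and 0x14e values on this page), the combination can be checked in shell:

```shell
# Compose the FireWireKDP + FireWireKPrintf value from the base mask
# plus the (assumed) kprintf bit 0x8.
printf '0x%x\n' $(( 0x146 | 0x8 ))    # -> 0x14e

# Note: fwkdp=0x8000 (for non-built-in interfaces) is a separate
# boot-args variable, not part of the debug= mask.
```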
| null |
piconv
|
piconv is a Perl version of iconv, a character encoding converter widely available for various Unixen today. This script was primarily a technology demonstrator for Perl 5.8.0, but you can use piconv in place of iconv for virtually any case. piconv converts the character encoding of either STDIN or files specified in the argument and prints out to STDOUT.

Here is the list of options. Some options can be given in short format (-f) or long format (--from).

-f,--from from_encoding
    Specifies the encoding you are converting from. Unlike iconv, this option can be omitted. In such cases, the current locale is used.
-t,--to to_encoding
    Specifies the encoding you are converting to. Unlike iconv, this option can be omitted. In such cases, the current locale is used. Therefore, when both -f and -t are omitted, piconv just acts like cat.
-s,--string string
    Uses string instead of file for the source of text.
-l,--list
    Lists all available encodings, one per line, in case-insensitive order. Note that only the canonical names are listed; many aliases exist. For example, the names are case-insensitive, and many standard and common aliases work, such as "latin1" for "ISO-8859-1", or "ibm850" instead of "cp850", or "winlatin1" for "cp1252". See Encode::Supported for a full discussion.
-r,--resolve encoding_alias
    Resolves encoding_alias to the Encode canonical encoding name.
-C,--check N
    Checks the validity of the stream if N = 1. When N = -1, something interesting happens when it encounters an invalid character.
-c  Same as "-C 1".
-p,--perlqq
    Transliterates characters missing in the encoding to \x{HHHH}, where HHHH is the hexadecimal Unicode code point.
--htmlcref
    Transliterates characters missing in the encoding to &#NNN;, where NNN is the decimal Unicode code point.
--xmlcref
    Transliterates characters missing in the encoding to &#xHHHH;, where HHHH is the hexadecimal Unicode code point.
-h,--help
    Shows usage.
-D,--debug
    Invokes debugging mode. Primarily for Encode hackers.
-S,--scheme scheme
    Selects which scheme is to be used for conversion. Available schemes are as follows:
    from_to        Uses Encode::from_to for conversion. This is the default.
    decode_encode  Input strings are decode()d then encode()d. A straight two-step implementation.
    perlio         The new perlIO layer is used. NI-S' favorite. You should use this option if you are using UTF-16 and other encodings for which the linefeed is not $/.
    Like the -D option, this is also for Encode hackers.

SEE ALSO
iconv(1) locale(3) Encode Encode::Supported Encode::Alias PerlIO

perl v5.38.2 2023-11-28 PICONV(1)
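The three missing-character fallback notations (-p/--perlqq, --htmlcref, --xmlcref) can be reproduced with printf for any code point; U+00E9 (e-acute, decimal 233) is used here only as an example:

```shell
cp=233                       # U+00E9, e-acute

printf '\\x{%X}\n' "$cp"     # perlqq form:  \x{E9}
printf '&#%d;\n'   "$cp"     # --htmlcref:   &#233;
printf '&#x%X;\n'  "$cp"     # --xmlcref:    &#xE9;
```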
|
piconv -- iconv(1), reinvented in perl
|
piconv [-f from_encoding] [-t to_encoding] [-p|--perlqq|--htmlcref|--xmlcref] [-C N|-c] [-D] [-S scheme] [-s string|file...] piconv -l piconv -r encoding_alias piconv -h
| null | null |
read
|
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last.

If a command specified to the shell contains a slash '/', the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying 'echo' causes a builtin command to be executed under shells that support the echo builtin command, specifying '/bin/echo' or './echo' does not.

While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands "built-in commands" and some of them "reserved words". Users of other shells may need to consult an info(1) page or other sources of documentation.

Commands marked "No**" under External do exist externally, but are implemented as scripts using a builtin command of the same name.

Command      External   csh(1)   sh(1)
!            No         No       Yes
%            No         Yes      No
.            No         No       Yes
:            No         Yes      Yes
@            No         Yes      Yes
[            Yes        No       Yes
{            No         No       Yes
}            No         No       Yes
alias        No**       Yes      Yes
alloc        No         Yes      No
bg           No**       Yes      Yes
bind         No         No       Yes
bindkey      No         Yes      No
break        No         Yes      Yes
breaksw      No         Yes      No
builtin      No         No       Yes
builtins     No         Yes      No
case         No         Yes      Yes
cd           No**       Yes      Yes
chdir        No         Yes      Yes
command      No**       No       Yes
complete     No         Yes      No
continue     No         Yes      Yes
default      No         Yes      No
dirs         No         Yes      No
do           No         No       Yes
done         No         No       Yes
echo         Yes        Yes      Yes
echotc       No         Yes      No
elif         No         No       Yes
else         No         Yes      Yes
end          No         Yes      No
endif        No         Yes      No
endsw        No         Yes      No
esac         No         No       Yes
eval         No         Yes      Yes
exec         No         Yes      Yes
exit         No         Yes      Yes
export       No         No       Yes
false        Yes        No       Yes
fc           No**       No       Yes
fg           No**       Yes      Yes
filetest     No         Yes      No
fi           No         No       Yes
for          No         No       Yes
foreach      No         Yes      No
getopts      No**       No       Yes
glob         No         Yes      No
goto         No         Yes      No
hash         No**       No       Yes
hashstat     No         Yes      No
history      No         Yes      No
hup          No         Yes      No
if           No         Yes      Yes
jobid        No         No       Yes
jobs         No**       Yes      Yes
kill         Yes        Yes      Yes
limit        No         Yes      No
local        No         No       Yes
log          No         Yes      No
login        Yes        Yes      No
logout       No         Yes      No
ls-F         No         Yes      No
nice         Yes        Yes      No
nohup        Yes        Yes      No
notify       No         Yes      No
onintr       No         Yes      No
popd         No         Yes      No
printenv     Yes        Yes      No
printf       Yes        No       Yes
pushd        No         Yes      No
pwd          Yes        No       Yes
read         No**       No       Yes
readonly     No         No       Yes
rehash       No         Yes      No
repeat       No         Yes      No
return       No         No       Yes
sched        No         Yes      No
set          No         Yes      Yes
setenv       No         Yes      No
settc        No         Yes      No
setty        No         Yes      No
setvar       No         No       Yes
shift        No         Yes      Yes
source       No         Yes      No
stop         No         Yes      No
suspend      No         Yes      No
switch       No         Yes      No
telltc       No         Yes      No
test         Yes        No       Yes
then         No         No       Yes
time         Yes        Yes      No
times        No         No       Yes
trap         No         No       Yes
true         Yes        No       Yes
type         No**       No       Yes
ulimit       No**       No       Yes
umask        No**       Yes      Yes
unalias      No**       Yes      Yes
uncomplete   No         Yes      No
unhash       No         Yes      No
unlimit      No         Yes      No
unset        No         Yes      Yes
unsetenv     No         Yes      No
until        No         No       Yes
wait         No**       Yes      Yes
where        No         Yes      No
which        Yes        Yes      No
while        No         Yes      Yes

SEE ALSO
csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1)

HISTORY
The builtin manual page first appeared in FreeBSD 3.4.
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
|
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
|
See the built-in command description in the appropriate shell manual page.
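The slash rule described in the DESCRIPTION is easy to observe with the POSIX command -v builtin (behaviour shown for sh; csh resolves names differently):

```shell
# A bare name may resolve to a builtin; the shell reports just the name.
command -v cd          # -> cd

# A name containing a slash never resolves to a builtin; the shell
# reports the path itself (assuming /bin/echo exists and is executable).
command -v /bin/echo   # -> /bin/echo
```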
| null | null |
snmpusm
|
snmpusm is an SNMP application that can be used to do simple maintenance on the users known to an SNMP agent, by manipulating the agent's User-based Security Module (USM) table. The user needs write access to the usmUserTable MIB table. This tool can be used to create, delete, clone, and change the passphrase of users configured on a running SNMP agent.
|
snmpusm - creates and maintains SNMPv3 users on a network entity
|
snmpusm [COMMON OPTIONS] [-Cw] AGENT create USER [CLONEFROM-USER]
snmpusm [COMMON OPTIONS] AGENT delete USER
snmpusm [COMMON OPTIONS] AGENT cloneFrom USER CLONEFROM-USER
snmpusm [COMMON OPTIONS] [-Ca] [-Cx] AGENT passwd OLD-PASSPHRASE NEW-PASSPHRASE [USER]
snmpusm [COMMON OPTIONS] <-Ca | -Cx> -Ck AGENT passwd OLD-KEY-OR-PASSPHRASE NEW-KEY-OR-PASSPHRASE [USER]
snmpusm [COMMON OPTIONS] [-Ca] [-Cx] AGENT changekey [USER]
|
Common options for all snmpusm commands:

-CE ENGINE-ID
    Set usmUserEngineID to be used as part of the index of the usmUserTable. Default is to use the contextEngineID (set via -E or probed) as the usmUserEngineID.
-Cp STRING
    Set the usmUserPublic value of the (new) user to the specified STRING.

Options for the passwd and changekey commands:

-Ca  Change the authentication key.
-Cx  Change the privacy key.
-Ck  Allows use of a localized key (must start with 0x) instead of a passphrase. When this option is used, either the -Ca or -Cx option (but not both) must also be used.

CREATING USERS
An unauthenticated SNMPv3 user can be created using the command

    snmpusm [OPTIONS] AGENT create USER

This constructs an (inactive) entry in the usmUserTable, with no authentication or privacy settings. In principle, this user should be usable for 'noAuthNoPriv' requests, but in practice the Net-SNMP agent will not allow such an entry to be made active. The user can be created via the createAndWait operation instead by using the -Cw flag. This will prevent the user from being marked as active in any agent until explicitly activated later via the activate command.

In order to activate this entry, it is necessary to "clone" an existing user, using the command

    snmpusm [OPTIONS] AGENT cloneFrom USER CLONEFROM-USER

The USER entry then inherits the same authentication and privacy settings (including pass phrases) as the CLONEFROM user.

These two steps can be combined into one, by using the command

    snmpusm [OPTIONS] AGENT create USER CLONEFROM-USER

The two forms of the create sub-command require that the user being created does not already exist. The cloneFrom sub-command requires that the user being cloned to does already exist.

Cloning is the only way to specify which authentication and privacy protocols to use for a given user, and it is only possible to do this once. Subsequent attempts to reclone onto the same user will appear to succeed, but will be silently ignored.
This (somewhat unexpected) behaviour is mandated by the SNMPv3 USM specifications (RFC 3414). To change the authentication and privacy settings for a given user, it is necessary to delete and recreate the user entry. This is not necessary for simply changing the pass phrases (see below).

This means that the agent must be initialized with at least one user for each combination of authentication and privacy protocols. See the snmpd.conf(5) manual page for details of the createUser configuration directive.

DELETING USERS
A user can be deleted from the usmUserTable using the command

    snmpusm [OPTIONS] AGENT delete USER

CHANGING PASS PHRASES
User profiles contain private keys that are never transmitted over the wire in clear text (regardless of whether the administration requests are encrypted or not). To change the secret key for a user, it is necessary to specify the user's old passphrase as well as the new one. This uses the command

    snmpusm [OPTIONS] [-Ca] [-Cx] AGENT passwd OLD-PASSPHRASE NEW-PASSPHRASE [USER]

After cloning a new user entry from the appropriate template, you should immediately change the new user's passphrase. If USER is not specified, this command will change the passphrase of the (SNMPv3) user issuing the command. If the -Ca or -Cx options are specified, then only the authentication or privacy keys are changed. If these options are not specified, then both the authentication and privacy keys are changed.

    snmpusm [OPTIONS] [-Ca] [-Cx] AGENT changekey [USER]

This command changes the key in a perfect-forward-secrecy compliant way through a Diffie-Hellman exchange. The remote agent must support the SNMP-USM-DH-OBJECTS-MIB for this command to work. The resulting keys are printed to the console and may then be set in future command invocations using the --defAuthLocalizedKey and --defPrivLocalizedKey options or in your snmp.conf file using the defAuthLocalizedKey and defPrivLocalizedKey keywords.
Note that since these keys are randomly generated based on a Diffie-Hellman exchange, they are no longer derived from a more easily typed password. They are, however, much more secure.

To change from a localized key back to a password, the following variant of the passwd sub-command is used:

    snmpusm [OPTIONS] <-Ca | -Cx> -Ck AGENT passwd OLD-KEY-OR-PASSPHRASE NEW-KEY-OR-PASSPHRASE [USER]

Either the -Ca or the -Cx option must be specified. The OLD-KEY-OR-PASSPHRASE and/or NEW-KEY-OR-PASSPHRASE arguments can either be a passphrase or a localized key starting with "0x", e.g. as printed out by the changekey sub-command.

Note that snmpusm REQUIRES an argument specifying the agent to query as described in the snmpcmd(1) manual page.
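Since auth/priv protocols can only be set by cloning, a common snmpd.conf pattern is to seed the agent with one createUser template per protocol combination. A sketch with placeholder user names and passphrases (not taken from this manual; delete the templates once real users have been cloned from them):

```
# snmpd.conf: one template user per auth/priv protocol combination
createUser tmpl-md5-des  MD5 template_passphrase DES
createUser tmpl-md5-aes  MD5 template_passphrase AES
createUser tmpl-sha-des  SHA template_passphrase DES
createUser tmpl-sha-aes  SHA template_passphrase AES

# grant write access to whichever template you clone from
rwuser tmpl-sha-aes
```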
|
Let's assume for our examples that the following VACM and USM configuration lines were in the snmpd.conf file for a Net-SNMP agent. These lines set up a default user called "initial" with the authentication passphrase "setup_passphrase" so that we can perform the initial setup of an agent: # VACM configuration entries rwuser initial # let's add the new user we'll create too: rwuser wes # USM configuration entries createUser initial MD5 setup_passphrase DES Note: the "initial" user's setup should be removed after creating a real user that you grant administrative privileges to (like the user "wes" we'll be creating in this example). Note: passphrases must be a minimum of 8 characters in length. Create a new user snmpusm -v3 -u initial -n "" -l authNoPriv -a MD5 -A setup_passphrase localhost create wes initial Creates a new user, here named "wes", using the user "initial" to do it. "wes" is cloned from "initial" in the process, so he inherits that user's passphrase ("setup_passphrase"). Change the user's passphrase snmpusm -v 3 -u wes -n "" -l authNoPriv -a MD5 -A setup_passphrase localhost passwd setup_passphrase new_passphrase After creating the user "wes" with the same passphrase as the "initial" user, we need to change his passphrase for him. The above command changes it from "setup_passphrase", which was inherited from the initial user, to "new_passphrase". Test the new user snmpget -v 3 -u wes -n "" -l authNoPriv -a MD5 -A new_passphrase localhost sysUpTime.0 If the above commands were successful, this command should have properly performed an authenticated SNMPv3 GET request to the agent. Now, go remove the VACM snmpd.conf entry for the "initial" user and you have a valid user "wes" that you can use for future transactions instead of "initial". WARNING Manipulating the usmUserTable using this command can only be done using SNMPv3. This command will not work with the community-based versions, even if they have write access to the table. 
SEE ALSO snmpd.conf(5), snmp.conf(5), RFC 3414 V5.6.2.1 11 Dec 2009 SNMPUSM(1)
|
tkpp
|
Tkpp is a GUI frontend to pp, which can turn perl scripts into stand-alone PAR files, perl scripts or executables. You can save the generated command line, and load and save your Tkpp GUI configuration. Below is a short explanation of the tkpp GUI. Menu File -> Save command line When you build or display the command line in the Tkpp GUI, you can save the command line in a separate file. This command line can be executed from a terminal. File -> Save configuration You can save your GUI configuration (all options used) to load and execute it next time. File -> Load configuration Load your saved configuration file. All saved options will be set in the GUI. File -> Exit Close Tkpp. Help -> Tkpp documentation Display the POD documentation of Tkpp. Help -> pp documentation Display the POD documentation of pp. Help -> About Tkpp Display the version and authors of Tkpp. Help -> About pp Display the version and authors of pp (pp --version). Tabs GUI There are five tabs in the GUI: General Options, Information, Size, Other Options and Output. Together the tabs cover all options which can be used with pp, and all default pp options are kept. Set the options as you want; when you have finished, you can display the command line or start building your package. The Output tab shows error and verbose messages. NOTES On Win32 systems, the build is executed in a separate process, so the GUI is not frozen. The first time you use Tkpp, it will tell you to install some CPAN modules needed by the GUI (like Tk, Tk::ColoredButton...). SEE ALSO pp, PAR AUTHORS Tkpp was written by Doug Gruber and rewritten by Djibril Ousmanou. In the event this application breaks, you get both pieces :-) COPYRIGHT Copyright 2003, 2004, 2005, 2006, 2011, 2014, 2015 by Doug Gruber <doug(a)dougthug.com>, Audrey Tang <cpan@audreyt.org> and Djibril Ousmanou <djibel(a)cpan.org>. 
Neither this program nor the associated pp program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.34.0 2020-03-08 TKPP(1)
|
tkpp - frontend to pp written in Perl/Tk.
|
You just have to execute the command line: tkpp
| null | null |
test-yaml
| null | null | null | null | null |
execsnoop
|
execsnoop prints details of new processes as they are executed. Details such as UID, PID and argument listing are printed out. This program is very useful to examine short lived processes that would not normally appear in a prstat or "ps -ef" listing. Sometimes applications will run hundreds of short lived processes in their normal startup cycle, a behaviour that is easily monitored with execsnoop. Since this uses DTrace, only users with root privileges can run this command.
|
execsnoop - snoop new process execution. Uses DTrace.
|
execsnoop [-a|-A|-ejhsvZ] [-c command]
|
-a print all data -A dump all data, space delimited -e safe output, parseable. This prevents the ARGS field containing "\n"s, to assist postprocessing. -j print project ID -s print start time, us -v print start time, string -Z print zonename -c command command name to snoop
|
Default output, print processes as they are executed, # execsnoop Print human readable timestamps, # execsnoop -v Print zonename, # execsnoop -Z Snoop this command only, # execsnoop -c ls FIELDS UID User ID PID Process ID PPID Parent Process ID COMM command name for the process ARGS argument listing for the process ZONE zonename PROJ project ID TIME timestamp for the exec event, us STRTIME timestamp for the exec event, string DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT execsnoop will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO dtrace(1M), truss(1) version 1.20 July 2, 2005 execsnoop(1m)
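The flattened output lends itself to simple post-processing. A hedged sketch that tallies executions per command from saved execsnoop output; the sample lines are fabricated for illustration, assuming the default UID, PID, PPID and ARGS columns:

```shell
# Count executions per command word ($4 = first word of ARGS),
# most frequent first. Input would normally be a saved capture.
awk 'NR > 1 { count[$4]++ } END { for (c in count) print count[c], c }' <<'EOF' | sort -rn
UID    PID   PPID ARGS
100   2424   2405 ls -l
100   2425   2405 ls
100   2426   2405 date
EOF
# prints: 2 ls
#         1 date
```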
|
pagesize
|
The pagesize utility prints the size of a page of memory in bytes, as returned by getpagesize(3). This program is useful in constructing portable shell scripts. SEE ALSO getpagesize(3) HISTORY The pagesize command appeared in 4.2BSD. macOS 14.5 June 6, 1993 macOS 14.5
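As a sketch of the "portable shell scripts" use mentioned above: pagesize(1) is a BSD/macOS utility, so a script that must also run elsewhere can fall back to the POSIX getconf PAGE_SIZE query, which reports the same value:

```shell
# Prefer pagesize(1); fall back to getconf on systems without it.
PAGE_SIZE=$( { pagesize || getconf PAGE_SIZE; } 2>/dev/null | head -n 1 )
echo "system page size: ${PAGE_SIZE} bytes"
```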
|
pagesize β print system page size
|
pagesize
| null | null |
lp
|
lp submits files for printing or alters a pending job. Use a filename of "-" to force printing from the standard input. THE DEFAULT DESTINATION CUPS provides many ways to set the default destination. The LPDEST and PRINTER environment variables are consulted first. If neither are set, the current default set using the lpoptions(1) command is used, followed by the default set using the lpadmin(8) command.
|
lp - print files
|
lp [ -E ] [ -U username ] [ -c ] [ -d destination[/instance] ] [ -h hostname[:port] ] [ -m ] [ -n num-copies ] [ -o option[=value] ] [ -q priority ] [ -s ] [ -t title ] [ -H handling ] [ -P page-list ] [ -- ] [ file(s) ] lp [ -E ] [ -U username ] [ -c ] [ -h hostname[:port] ] [ -i job-id ] [ -n num-copies ] [ -o option[=value] ] [ -q priority ] [ -t title ] [ -H handling ] [ -P page-list ]
|
The following options are recognized by lp: -- Marks the end of options; use this to print a file whose name begins with a dash (-). -E Forces encryption when connecting to the server. -U username Specifies the username to use when connecting to the server. -c This option is provided for backwards-compatibility only. On systems that support it, this option forces the print file to be copied to the spool directory before printing. In CUPS, print files are always sent to the scheduler via IPP which has the same effect. -d destination Prints files to the named printer. -h hostname[:port] Chooses an alternate server. -i job-id Specifies an existing job to modify. -m Sends an email when the job is completed. -n copies Sets the number of copies to print. -o "name=value [ ... name=value ]" Sets one or more job options. See "COMMON JOB OPTIONS" below. -q priority Sets the job priority from 1 (lowest) to 100 (highest). The default priority is 50. -s Do not report the resulting job IDs (silent mode). -t "name" Sets the job name. -H hh:mm -H hold -H immediate -H restart -H resume Specifies when the job should be printed. A value of immediate will print the file immediately, a value of hold will hold the job indefinitely, and a UTC time value (HH:MM) will hold the job until the specified UTC (not local) time. Use a value of resume with the -i option to resume a held job. Use a value of restart with the -i option to restart a completed job. -P page-list Specifies which pages to print in the document. The list can contain a list of numbers and ranges (#-#) separated by commas, e.g., "1,3-5,16". The page numbers refer to the output pages and not the document's original pages - options like "number-up" can affect the numbering of the pages. COMMON JOB OPTIONS Aside from the printer-specific options reported by the lpoptions(1) command, the following generic options are available: -o job-sheets=name Prints a cover page (banner) with the document. 
The "name" can be "classified", "confidential", "secret", "standard", "topsecret", or "unclassified". -o media=size Sets the page size to size. Most printers support at least the size names "a4", "letter", and "legal". -o number-up={2|4|6|9|16} Prints 2, 4, 6, 9, or 16 document (input) pages on each output page. -o orientation-requested=4 Prints the job in landscape (rotated 90 degrees counter- clockwise). -o orientation-requested=5 Prints the job in landscape (rotated 90 degrees clockwise). -o orientation-requested=6 Prints the job in reverse portrait (rotated 180 degrees). -o print-quality=3 -o print-quality=4 -o print-quality=5 Specifies the output quality - draft (3), normal (4), or best (5). -o sides=one-sided Prints on one side of the paper. -o sides=two-sided-long-edge Prints on both sides of the paper for portrait output. -o sides=two-sided-short-edge Prints on both sides of the paper for landscape output. CONFORMING TO Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. The -q option accepts a different range of values than the Solaris lp command, matching the IPP job priority values (1-100, 100 is highest priority) instead of the Solaris values (0-39, 0 is highest priority).
|
Print two copies of a document to the default printer: lp -n 2 filename Print a double-sided legal document to a printer called "foo": lp -d foo -o media=legal -o sides=two-sided-long-edge filename Print a presentation document 2-up to a printer called "bar": lp -d bar -o number-up=2 filename SEE ALSO cancel(1), lpadmin(8), lpoptions(1), lpq(1), lpr(1), lprm(1), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright Β© 2007-2019 by Apple Inc. 26 April 2019 CUPS lp(1)
|
base64
|
The uuencode and uudecode utilities are used to transmit binary files over transmission mediums that do not support other than simple ASCII data. The b64encode utility is synonymous with uuencode with the -m flag specified. The b64decode utility is synonymous with uudecode with the -m flag specified. The base64 utility acts as a base64 decoder when passed the --decode (or -d) flag and as a base64 encoder otherwise. As a decoder it only accepts raw base64 input and as an encoder it does not produce the framing lines. base64 reads standard input or file if it is provided and writes to standard output. Options --wrap (or -w) and --ignore-garbage (or -i) are accepted for compatibility with GNU base64, but the latter is unimplemented and silently ignored. The uuencode utility reads file (or by default the standard input) and writes an encoded version to the standard output, or output_file if one has been specified. The encoding uses only printing ASCII characters and includes the mode of the file and the operand name for use by uudecode. The uudecode utility transforms uuencoded files (or by default, the standard input) into the original form. The resulting file is named either name or (depending on options passed to uudecode) output_file and will have the mode of the original file except that setuid and execute bits are not retained. The uudecode utility ignores any leading and trailing lines. The following options are available for uuencode: -m Use the Base64 method of encoding, rather than the traditional uuencode algorithm. -r Produce raw output by excluding the initial and final framing lines. -o output_file Output to output_file instead of standard output. The following options are available for uudecode: -c Decode more than one uuencoded file from file if possible. -i Do not overwrite files. -m When used with the -r flag, decode Base64 input instead of traditional uuencode input. Without -r it has no effect. 
-o output_file Output to output_file instead of any pathname contained in the input data. -p Decode file and write output to standard output. -r Decode raw (or broken) input, which is missing the initial and possibly the final framing lines. The input is assumed to be in the traditional uuencode encoding, but if the -m flag is used, or if the utility is invoked as b64decode, then the input is assumed to be in Base64 format. -s Do not strip output pathname to base filename. By default uudecode deletes any prefix ending with the last slash '/' for security reasons. Additionally, b64encode accepts the following option: -w column Wrap encoded output after column. The following options are available for base64: -b count, --break=count Insert line breaks every count characters. The default is 0, which generates an unbroken stream. -d, -D, --decode Decode incoming Base64 stream into binary data. -h, --help Print usage summary and exit. -i input_file, --input=input_file Read input from input_file. The default is stdin; passing β-β also represents stdin. -o output_file, --output=output_file Write output to output_file. The default is stdout; passing β-β also represents stdout. bintrans is a generic utility that can run any of the aforementioned encoders and decoders. It can also run algorithms that are not available through a dedicated program: qp is a quoted-printable converter and accepts the following options: -u Decode. -o output_file Output to output_file instead of standard output.
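A minimal round trip through the base64 encoder and decoder. Only the plain encode/decode invocations are shown, since the remaining flags differ between the BSD utility described here and GNU base64:

```shell
# Encode five bytes, then decode them back.
printf 'hello' | base64            # prints: aGVsbG8=
printf 'aGVsbG8=' | base64 -d      # prints: hello
```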
|
bintrans, base64, uuencode, uudecode β encode/decode a binary file
|
bintrans [algorithm] [...] uuencode [-m] [-r] [-o output_file] [file] name uudecode [-cimprs] [file ...] uudecode [-i] -o output_file b64encode [-r] [-w column] [-o output_file] [file] name b64decode [-cimprs] [file ...] b64decode [-i] -o output_file [file] base64 [-h | -D | -d] [-b count] [-i input_file] [-o output_file]
| null |
The following example packages up a source tree, compresses it, uuencodes it and mails it to a user on another system. When uudecode is run on the target system, the file ``src_tree.tar.Z'' will be created which may then be uncompressed and extracted into the original tree. tar cf - src_tree | compress | uuencode src_tree.tar.Z | mail user@example.com The following example unpacks all uuencoded files from your mailbox into your current working directory. uudecode -c < $MAIL The following example extracts a compressed tar archive from your mailbox uudecode -o /dev/stdout < $MAIL | zcat | tar xfv - SEE ALSO basename(1), compress(1), mail(1), uucp(1) (ports/net/freebsd-uucp), uuencode(5) HISTORY The uudecode and uuencode utilities appeared in 4.0BSD. BUGS Files encoded using the traditional algorithm are expanded by 35% (3 bytes become 4 plus control information). macOS 14.5 April 18, 2022 macOS 14.5
|
ptargrep
|
This utility allows you to apply pattern matching to the contents of files contained in a tar archive. You might use this to identify all files in an archive which contain lines matching the specified pattern and either print out the pathnames or extract the files. The pattern will be used as a Perl regular expression (as opposed to a simple grep regex). Multiple tar archive filenames can be specified - they will each be processed in turn.
|
ptargrep - Apply pattern matching to the contents of files in a tar archive
|
ptargrep [options] <pattern> <tar file> ... Options: --basename|-b ignore directory paths from archive --ignore-case|-i do case-insensitive pattern matching --list-only|-l list matching filenames rather than extracting matches --verbose|-v write debugging message to STDERR --help|-? detailed help message
|
--basename (alias -b) When matching files are extracted, ignore the directory path from the archive and write to the current directory using the basename of the file from the archive. Beware: if two matching files in the archive have the same basename, the second file extracted will overwrite the first. --ignore-case (alias -i) Make pattern matching case-insensitive. --list-only (alias -l) Print the pathname of each matching file from the archive to STDOUT. Without this option, the default behaviour is to extract each matching file. --verbose (alias -v) Log debugging info to STDERR. --help (alias -?) Display this documentation. COPYRIGHT Copyright 2010 Grant McLean <grantm@cpan.org> This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.38.2 2023-11-28 PTARGREP(1)
| null |
jar
|
The jar command is a general-purpose archiving and compression tool, based on the ZIP and ZLIB compression formats. Initially, the jar command was designed to package Java applets (not supported since JDK 11) or applications; however, beginning with JDK 9, users can use the jar command to create modular JARs. For transportation and deployment, it's usually more convenient to package modules as modular JARs. The syntax for the jar command resembles the syntax for the tar command. It has several main operation modes, defined by one of the mandatory operation arguments. Other arguments are either options that modify the behavior of the operation or are required to perform the operation. When modules or the components of an application (files, images and sounds) are combined into a single archive, they can be downloaded by a Java agent (such as a browser) in a single HTTP transaction, rather than requiring a new connection for each piece. This dramatically improves download times. The jar command also compresses files, which further improves download time. The jar command also enables individual entries in a file to be signed so that their origin can be authenticated. A JAR file can be used as a class path entry, whether or not it's compressed. An archive becomes a modular JAR when you include a module descriptor, module-info.class, in the root of the given directories or in the root of the .jar archive. The following operations described in Operation Modifiers Valid Only in Create and Update Modes are valid only when creating or updating a modular jar or updating an existing non-modular jar: β’ --module-version β’ --hash-modules β’ --module-path Note: All mandatory or optional arguments for long options are also mandatory or optional for any corresponding short options. MAIN OPERATION MODES When using the jar command, you must specify the operation for it to perform. 
You specify the operation mode for the jar command by including the appropriate operation arguments described in this section. You can mix an operation argument with other one-letter options. Generally the operation argument is the first argument specified on the command line. -c or --create Creates the archive. -i FILE or --generate-index=FILE Generates index information for the specified JAR file. This option is deprecated and may be removed in a future release. -t or --list Lists the table of contents for the archive. -u or --update Updates an existing JAR file. -x or --extract Extracts the named (or all) files from the archive. -d or --describe-module Prints the module descriptor or automatic module name. OPERATION MODIFIERS VALID IN ANY MODE You can use the following options to customize the actions of any operation mode included in the jar command. -C DIR Changes to the specified directory and includes the files specified at the end of the command line. jar [OPTION ...] [ [--release VERSION] [-C dir] files] -f FILE or --file=FILE Specifies the archive file name. --release VERSION Creates a multirelease JAR file. Places all files specified after the option into a versioned directory of the JAR file named META-INF/versions/VERSION/, where VERSION must be a positive integer whose value is 9 or greater. At run time, where more than one version of a class exists in the JAR, the JDK will use the first one it finds, searching initially in the directory tree whose VERSION number matches the JDK's major version number. It will then look in directories with successively lower VERSION numbers, and finally look in the root of the JAR. -v or --verbose Sends or prints verbose output to standard output. 
OPERATION MODIFIERS VALID ONLY IN CREATE AND UPDATE MODES You can use the following options to customize the actions of the create and the update main operation modes: -e CLASSNAME or --main-class=CLASSNAME Specifies the application entry point for standalone applications bundled into a modular or executable modular JAR file. -m FILE or --manifest=FILE Includes the manifest information from the given manifest file. -M or --no-manifest Doesn't create a manifest file for the entries. --module-version=VERSION Specifies the module version, when creating or updating a modular JAR file, or updating a non-modular JAR file. --hash-modules=PATTERN Computes and records the hashes of modules matched by the given pattern that depend directly or indirectly on a modular JAR file being created or a non-modular JAR file being updated. -p or --module-path Specifies the location of module dependence for generating the hash. @file Reads jar options and file names from a text file as if they were supplied on the command line. OPERATION MODIFIERS VALID ONLY IN CREATE, UPDATE, AND GENERATE-INDEX MODES You can use the following options to customize the actions of the create (-c or --create), the update (-u or --update) and the generate-index (-i or --generate-index=FILE) main operation modes: -0 or --no-compress Stores without using ZIP compression. --date=TIMESTAMP The timestamp in ISO-8601 extended offset date-time with optional time-zone format, to use for the timestamp of the entries, e.g. "2022-02-12T12:30:00-05:00". OTHER OPTIONS The following options are recognized by the jar command and not used with operation modes: -h or --help[:compat] Displays the command-line help for the jar command or optionally the compatibility help. --help-extra Displays help on extra options. --version Prints the program version. EXAMPLES OF JAR COMMAND SYNTAX β’ Create an archive, classes.jar, that contains two class files, Foo.class and Bar.class. 
jar --create --file classes.jar Foo.class Bar.class β’ Create an archive, classes.jar, that contains two class files, Foo.class and Bar.class, setting the last modified date and time to 2021 Jan 6 12:36:00. jar --create --date="2021-01-06T14:36:00+02:00" --file=classes.jar Foo.class Bar.class β’ Create an archive, classes.jar, by using an existing manifest, mymanifest, that contains all of the files in the directory foo/. jar --create --file classes.jar --manifest mymanifest -C foo/ β’ Create a modular JAR archive, foo.jar, where the module descriptor is located in classes/module-info.class. jar --create --file foo.jar --main-class com.foo.Main --module-version 1.0 -C foo/classes resources β’ Update an existing non-modular JAR, foo.jar, to a modular JAR file. jar --update --file foo.jar --main-class com.foo.Main --module-version 1.0 -C foo/ module-info.class β’ Create a versioned or multi-release JAR, foo.jar, that places the files in the classes directory at the root of the JAR, and the files in the classes-10 directory in the META-INF/versions/10 directory of the JAR. In this example, the classes/com/foo directory contains two classes, com.foo.Hello (the entry point class) and com.foo.NameProvider, both compiled for JDK 8. The classes-10/com/foo directory contains a different version of the com.foo.NameProvider class, this one containing JDK 10 specific code and compiled for JDK 10. Given this setup, create a multirelease JAR file foo.jar by running the following command from the directory containing the directories classes and classes-10. jar --create --file foo.jar --main-class com.foo.Hello -C classes . --release 10 -C classes-10 . 
The JAR file foo.jar now contains: % jar -tf foo.jar META-INF/ META-INF/MANIFEST.MF com/ com/foo/ com/foo/Hello.class com/foo/NameProvider.class META-INF/versions/10/com/ META-INF/versions/10/com/foo/ META-INF/versions/10/com/foo/NameProvider.class As well as other information, the file META-INF/MANIFEST.MF will contain the following lines to indicate that this is a multirelease JAR file with an entry point of com.foo.Hello. ... Main-Class: com.foo.Hello Multi-Release: true Assuming that the com.foo.Hello class calls a method on the com.foo.NameProvider class, running the program using JDK 10 will ensure that the com.foo.NameProvider class is the one in META-INF/versions/10/com/foo/. Running the program using JDK 8 will ensure that the com.foo.NameProvider class is the one at the root of the JAR, in com/foo. β’ Create an archive, my.jar, by reading options and lists of class files from the file classes.list. Note: To shorten or simplify the jar command, you can provide an arg file that lists the files to include in the JAR file and pass it to the jar command with the at sign (@) as a prefix. jar --create --file my.jar @classes.list If one or more entries in the arg file cannot be found, then the jar command fails without creating the JAR file. JDK 22 2024 JAR(1)
|
jar - create an archive for classes and resources, and manipulate or restore individual classes or resources from an archive
|
jar [OPTION ...] [ [--release VERSION] [-C dir] files] ...
| null | null |
h2ph
|
h2ph converts any C header files specified to the corresponding Perl header file format. It is most easily run while in /usr/include: cd /usr/include; h2ph * sys/* or cd /usr/include; h2ph * sys/* arpa/* netinet/* or cd /usr/include; h2ph -r -l . The output files are placed in the hierarchy rooted at Perl's architecture-dependent library directory. You can specify a different hierarchy with a -d switch. If run with no arguments, h2ph filters standard input to standard output.
|
h2ph - convert .h C header files to .ph Perl header files
|
h2ph [-d destination directory] [-r | -a] [-l] [-h] [-e] [-D] [-Q] [headerfiles]
|
-d destination_dir Put the resulting .ph files beneath destination_dir, instead of beneath the default Perl library location ($Config{'installsitearch'}). -r Run recursively; if any of headerfiles are directories, then run h2ph on all files in those directories (and their subdirectories, etc.). -r and -a are mutually exclusive. -a Run automagically; convert headerfiles, as well as any .h files which they include. This option will search for .h files in all directories which your C compiler ordinarily uses. -a and -r are mutually exclusive. -l Symbolic links will be replicated in the destination directory. If -l is not specified, then links are skipped over. -h Put 'hints' in the .ph files which will help in locating problems with h2ph. In those cases when you require a .ph file containing syntax errors, instead of the cryptic [ some error condition ] at (eval mmm) line nnn you will see the slightly more helpful [ some error condition ] at filename.ph line nnn However, the .ph files almost double in size when built using -h. -e If an error is encountered during conversion, the output file will be removed and a warning emitted instead of terminating the conversion immediately. -D Include the code from the .h file as a comment in the .ph file. This is primarily used for debugging h2ph. -Q 'Quiet' mode; don't print out the names of the files being converted. ENVIRONMENT No environment variables are used. FILES /usr/include/*.h /usr/include/sys/*.h etc. AUTHOR Larry Wall SEE ALSO perl(1) DIAGNOSTICS The usual warnings if it can't read or write the files involved. BUGS Doesn't construct the %sizeof array for you. It doesn't handle all C constructs, but it does attempt to isolate definitions inside evals so that you can get at the definitions that it can translate. It's only intended as a rough tool. You may need to dicker with the files produced. You have to run this program by hand; it's not run as part of the Perl installation. 
Doesn't handle complicated expressions built piecemeal, a la: enum { FIRST_VALUE, SECOND_VALUE, #ifdef ABC THIRD_VALUE #endif }; Doesn't necessarily locate all of your C compiler's internally-defined symbols. perl v5.38.2 2023-11-28 H2PH(1)
| null |
readlink
|
The stat utility displays information about the file pointed to by file. Read, write, or execute permissions of the named file are not required, but all directories listed in the pathname leading to the file must be searchable. If no argument is given, stat displays information about the file descriptor for standard input. When invoked as readlink, only the target of the symbolic link is printed. If the given argument is not a symbolic link and the -f option is not specified, readlink will print nothing and exit with an error. If the -f option is specified, the output is canonicalized by following every symlink in every component of the given path recursively. readlink will resolve both absolute and relative paths, and return the absolute pathname corresponding to file. In this case, the argument does not need to be a symbolic link. The information displayed is obtained by calling lstat(2) with the given argument and evaluating the returned structure. The default format displays the st_dev, st_ino, st_mode, st_nlink, st_uid, st_gid, st_rdev, st_size, st_atime, st_mtime, st_ctime, st_birthtime, st_blksize, st_blocks, and st_flags fields, in that order. The options are as follows: -F As in ls(1), display a slash (β/β) immediately after each pathname that is a directory, an asterisk (β*β) after each that is executable, an at sign (β@β) after each symbolic link, a percent sign (β%β) after each whiteout, an equal sign (β=β) after each socket, and a vertical bar (β|β) after each that is a FIFO. The use of -F implies -l. -L Use stat(2) instead of lstat(2). The information reported by stat will refer to the target of file, if file is a symbolic link, and not to file itself. If the link is broken or the target does not exist, fall back on lstat(2) and report information about the link. -f format Display information using the specified format. See the Formats section for a description of valid formats. -l Display output in ls -lT format. 
-n Do not force a newline to appear at the end of each piece of output. -q Suppress failure messages if calls to stat(2) or lstat(2) fail. When run as readlink, error messages are automatically suppressed. -r Display raw information. That is, for all the fields in the stat structure, display the raw, numerical value (for example, times in seconds since the epoch, etc.). -s Display information in βshell outputβ format, suitable for initializing variables. -t timefmt Display timestamps using the specified format. This format is passed directly to strftime(3). -x Display information in a more verbose way as known from some Linux distributions. Formats Format strings are similar to printf(3) formats in that they start with %, are then followed by a sequence of formatting characters, and end in a character that selects the field of the struct stat which is to be formatted. If the % is immediately followed by one of n, t, %, or @, then a newline character, a tab character, a percent character, or the current file number is printed, otherwise the string is examined for the following: Any of the following optional flags: # Selects an alternate output form for octal and hexadecimal output. Non-zero octal output will have a leading zero, and non-zero hexadecimal output will have β0xβ prepended to it. + Asserts that a sign indicating whether a number is positive or negative should always be printed. Non-negative numbers are not usually printed with a sign. - Aligns string output to the left of the field, instead of to the right. 0 Sets the fill character for left padding to the β0β character, instead of a space. space Reserves a space at the front of non-negative signed output fields. A β+β overrides a space if both are used. Then the following fields: size An optional decimal digit string specifying the minimum field width. 
prec    An optional precision composed of a decimal point β.β and a decimal digit string that indicates the maximum string length, the number of digits to appear after the decimal point in floating point output, or the minimum number of digits to appear in numeric output.

fmt     An optional output format specifier which is one of D, O, U, X, F, or S. These represent signed decimal output, octal output, unsigned decimal output, hexadecimal output, floating point output, and string output, respectively. Some output formats do not apply to all fields. Floating point output only applies to timespec fields (the a, m, and c fields). The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with:

        amc   Display date in strftime(3) format.
        dr    Display actual device name.
        f     Display the flags of file as in ls -lTdo.
        gu    Display group or user name.
        p     Display the mode of file as in ls -lTd.
        N     Displays the name of file.
        T     Displays the type of file.
        Y     Insert a β -> β into the output. Note that the default output format for Y is a string, but if specified explicitly, these four characters are prepended.

sub     An optional sub field specifier (high, middle, low). Only applies to the p, d, r, and T output formats. It can be one of the following:

        H   βHighβ β specifies the major number for devices from r or d, the βuserβ bits for permissions from the string form of p, the file βtypeβ bits from the numeric forms of p, and the long output form of T.

        L   βLowβ β specifies the minor number for devices from r or d, the βotherβ bits for permissions from the string form of p, the βuserβ, βgroupβ, and βotherβ bits from the numeric forms of p, and the ls -F style output character for file type when used with T (the use of L for this is optional).

        M   βMiddleβ β specifies the βgroupβ bits for permissions from the string output form of p, or the βsuidβ, βsgidβ, and βstickyβ bits for the numeric forms of p.
datum   A required field specifier, being one of the following:

        d         Device upon which file resides (st_dev).
        i         file's inode number (st_ino).
        p         File type and permissions (st_mode).
        l         Number of hard links to file (st_nlink).
        u, g      User ID and group ID of file's owner (st_uid, st_gid).
        r         Device number for character and block device special files (st_rdev).
        a, m, c, B
                  The time file was last accessed or modified, or when the inode was last changed, or the birth time of the inode (st_atime, st_mtime, st_ctime, st_birthtime).
        z         The size of file in bytes (st_size).
        b         Number of blocks allocated for file (st_blocks).
        k         Optimal file system I/O operation block size (st_blksize).
        f         User defined flags for file.
        v         Inode generation number (st_gen).

        The following five field specifiers are not drawn directly from the data in struct stat, but are:

        N         The name of the file.
        R         The absolute pathname corresponding to the file.
        T         The file type, either as in ls -F or in a more descriptive form if the sub field specifier H is given.
        Y         The target of a symbolic link.
        Z         Expands to βmajor,minorβ from the rdev field for character or block special devices and gives size output for all others.

Only the % and the field specifier are required. Most field specifiers default to U as an output form, with the exception of p which defaults to O; a, m, and c which default to D; and Y, T, and N which default to S.

EXIT STATUS
The stat and readlink utilities exit 0 on success, and >0 if an error occurs.
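The default output described above is a thin wrapper over a single lstat(2) call. As a rough sketch (not part of the stat utility itself), the same struct stat members can be read in Python via os.lstat; platform-dependent fields such as st_flags and st_birthtime are omitted here:

```python
import os

def lstat_fields(path):
    # Gather the struct stat fields that stat(1)'s default format
    # prints, using lstat() so a symbolic link is not followed.
    st = os.lstat(path)
    return {
        "st_dev": st.st_dev,
        "st_ino": st.st_ino,
        "st_mode": oct(st.st_mode),
        "st_nlink": st.st_nlink,
        "st_uid": st.st_uid,
        "st_gid": st.st_gid,
        "st_size": st.st_size,
        "st_mtime": int(st.st_mtime),
    }

print(lstat_fields("."))
```

Passing the result through a formatter of your choice reproduces the spirit of the -f format option.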
|
stat, readlink β display file status
|
stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file ...] readlink [-fn] [file ...]
| null |
If no options are specified, the default format is "%d %i %Sp %l %Su %Sg %r %z \"%Sa\" \"%Sm\" \"%Sc\" \"%SB\" %k %b %#Xf %N".

    > stat /tmp/bar
    0 78852 -rw-r--r-- 1 root wheel 0 0 "Jul 8 10:26:03 2004" "Jul 8 10:26:03 2004" "Jul 8 10:28:13 2004" "Jan 1 09:00:00 1970" 16384 0 0 /tmp/bar

Given a symbolic link βfooβ that points from /tmp/foo to /, you would use stat as follows:

    > stat -F /tmp/foo
    lrwxrwxrwx 1 jschauma cs 1 Apr 24 16:37:28 2002 /tmp/foo@ -> /
    > stat -LF /tmp/foo
    drwxr-xr-x 16 root wheel 512 Apr 19 10:57:54 2002 /tmp/foo/

To initialize some shell variables, you could use the -s flag as follows:

    > csh
    % eval set `stat -s .cshrc`
    % echo $st_size $st_mtimespec
    1148 1015432481
    > sh
    $ eval $(stat -s .profile)
    $ echo $st_size $st_mtimespec
    1148 1015432481

In order to get a list of file types including files pointed to if the file is a symbolic link, you could use the following format:

    $ stat -f "%N: %HT%SY" /tmp/*
    /tmp/bar: Symbolic Link -> /tmp/foo
    /tmp/output25568: Regular File
    /tmp/blah: Directory
    /tmp/foo: Symbolic Link -> /

In order to get a list of the devices, their types and the major and minor device numbers, formatted with tabs and linebreaks, you could use the following format:

    stat -f "Name: %N%n%tType: %HT%n%tMajor: %Hr%n%tMinor: %Lr%n%n" /dev/*
    [...]
    Name: /dev/wt8
            Type: Block Device
            Major: 3
            Minor: 8
    Name: /dev/zero
            Type: Character Device
            Major: 2
            Minor: 12

In order to determine the permissions set on a file separately, you could use the following format:

    > stat -f "%Sp -> owner=%SHp group=%SMp other=%SLp" .
    drwxr-xr-x -> owner=rwx group=r-x other=r-x

In order to determine the three files that have been modified most recently, you could use the following format:

    > stat -f "%m%t%Sm %N" /tmp/* | sort -rn | head -3 | cut -f2-
    Apr 25 11:47:00 2002 /tmp/blah
    Apr 25 10:36:34 2002 /tmp/bar
    Apr 24 16:47:35 2002 /tmp/foo

To display a file's modification time:

    > stat -f %m /tmp/foo
    1177697733

To display the same modification time in a readable format:

    > stat -f %Sm /tmp/foo
    Apr 27 11:15:33 2007

To display the same modification time in a readable and sortable format:

    > stat -f %Sm -t %Y%m%d%H%M%S /tmp/foo
    20070427111533

To display the same in UTC:

    > sh
    $ TZ= stat -f %Sm -t %Y%m%d%H%M%S /tmp/foo
    20070427181533

SEE ALSO
file(1), ls(1), lstat(2), readlink(2), stat(2), printf(3), strftime(3)

HISTORY
The stat utility appeared in NetBSD 1.6 and FreeBSD 4.10.

AUTHORS
The stat utility was written by Andrew Brown <atatat@NetBSD.org>. This man page was written by Jan Schaumann <jschauma@NetBSD.org>.

macOS 14.5 June 22, 2017
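The -s βshell outputβ format shown above is a flat sequence of key=value pairs, which is easy to consume outside a shell as well. A minimal sketch; the sample line is illustrative, not captured from a live system:

```python
# Parse stat(1)'s -s "shell output" format: space-separated
# key=value pairs suitable for eval in sh/csh. The sample line
# below is illustrative only.
sample = "st_dev=16777220 st_ino=78852 st_mode=0100644 st_nlink=1 st_size=1148"

def parse_stat_s(line):
    fields = {}
    for pair in line.split():
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields

print(parse_stat_s(sample)["st_size"])  # prints "1148"
```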
|
opensnoop
|
opensnoop tracks file opens. As a process issues a file open, details such as the UID, PID, and pathname are printed out. The returned file descriptor is also printed; a value of -1 indicates an error. This can be useful for troubleshooting, to determine whether applications are attempting to open files that do not exist. Since opensnoop uses DTrace, only users with root privileges can run this command.
|
opensnoop - snoop file opens as they occur. Uses DTrace.
|
opensnoop [-a|-A|-ceFghstvxZ] [-f pathname] [-n name] [-p PID]
|
-a            print all data
-A            dump all data, space delimited
-c            print current working directory of process
-e            print errno value
-F            print the flags passed to open
-g            print full command arguments
-s            print start time, us
-t            print user stack trace
-v            print start time, string
-x            only print failed opens
-Z            print zonename
-f pathname   file pathname to snoop
-n name       process name to snoop
-p PID        process ID to snoop
|
Default output, print file opens by process as they occur:

    # opensnoop

Print human readable timestamps:

    # opensnoop -v

See error codes:

    # opensnoop -e

Snoop this file only:

    # opensnoop -f /etc/passwd

FIELDS
    ZONE      Zone name
    UID       User ID
    PID       Process ID
    PPID      Parent Process ID
    FD        File Descriptor (-1 is error)
    FLAGS     Flags passed to open
    ERR       errno value (see /usr/include/sys/errno.h)
    CWD       current working directory of process
    PATH      pathname for file open
    COMM      command name for the process
    ARGS      argument listing for the process
    TIME      timestamp for the open event, us
    STRTIME   timestamp for the open event, string

DOCUMENTATION
See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output.

EXIT
opensnoop will run forever until Ctrl-C is hit.

BUGS
Occasionally the pathname for the file open cannot be read, and the following error will be seen:

    dtrace: error on enabled probe ID 6 (...): invalid address

This is normal behaviour.

AUTHOR
Brendan Gregg [Sydney, Australia]

SEE ALSO
dtrace(1M), truss(1)

version 1.60 January 12, 2006 opensnoop(1m)
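The default output is a fixed set of whitespace-separated columns drawn from the FIELDS list above (UID, PID, COMM, FD, PATH is the usual default set; check your version). A hedged sketch of post-processing one line, using an illustrative sample rather than live DTrace output:

```python
# Split one line of opensnoop's default output into its documented
# columns. The sample line is illustrative; real output requires
# DTrace and root privileges.
sample = "  501  1234  cat         3 /etc/passwd"

def parse_opensnoop(line):
    uid, pid, comm, fd, path = line.split(None, 4)
    return {"UID": int(uid), "PID": int(pid), "COMM": comm,
            "FD": int(fd), "PATH": path}

rec = parse_opensnoop(sample)
print(rec)
```

A negative FD in the parsed record corresponds to a failed open, matching the -x option's filter.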
|
rake
| null | null | null | null | null |
rvim
|
Vim is a text editor that is upwards compatible to Vi. It can be used to edit all kinds of plain text. It is especially useful for editing programs. There are a lot of enhancements above Vi: multi level undo, multi windows and buffers, syntax highlighting, command line editing, filename completion, on-line help, visual selection, etc.. See ":help vi_diff.txt" for a summary of the differences between Vim and Vi. While running Vim a lot of help can be obtained from the on-line help system, with the ":help" command. See the ON-LINE HELP section below. Most often Vim is started to edit a single file with the command vim file More generally Vim is started with: vim [options] [filelist] If the filelist is missing, the editor will start with an empty buffer. Otherwise exactly one out of the following four may be used to choose one or more files to be edited. file .. A list of filenames. The first one will be the current file and read into the buffer. The cursor will be positioned on the first line of the buffer. You can get to the other files with the ":next" command. To edit a file that starts with a dash, precede the filelist with "--". - The file to edit is read from stdin. Commands are read from stderr, which should be a tty. -t {tag} The file to edit and the initial cursor position depends on a "tag", a sort of goto label. {tag} is looked up in the tags file, the associated file becomes the current file and the associated command is executed. Mostly this is used for C programs, in which case {tag} could be a function name. The effect is that the file containing that function becomes the current file and the cursor is positioned on the start of the function. See ":help tag-commands". -q [errorfile] Start in quickFix mode. The file [errorfile] is read and the first error is displayed. If [errorfile] is omitted, the filename is obtained from the 'errorfile' option (defaults to "AztecC.Err" for the Amiga, "errors.err" on other systems). 
Further errors can be jumped to with the ":cn" command. See ":help quickfix". Vim behaves differently, depending on the name of the command (the executable may still be the same file). vim The "normal" way, everything is default. ex Start in Ex mode. Go to Normal mode with the ":vi" command. Can also be done with the "-e" argument. view Start in read-only mode. You will be protected from writing the files. Can also be done with the "-R" argument. gvim gview The GUI version. Starts a new window. Can also be done with the "-g" argument. evim eview The GUI version in easy mode. Starts a new window. Can also be done with the "-y" argument. rvim rview rgvim rgview Like the above, but with restrictions. It will not be possible to start shell commands, or suspend Vim. Can also be done with the "-Z" argument.
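The name-based behaviour above amounts to a dispatch on argv[0]. A simplified illustration; the MODES table and mode labels are shorthand for this sketch, not Vim internals:

```python
import os

# Map invocation names to startup modes, mirroring the table above.
# A leading "r" adds restricted mode, as with rvim/rview/rgvim/rgview.
MODES = {
    "vim":  set(),
    "ex":   {"ex"},
    "view": {"readonly"},
    "gvim": {"gui"},
    "evim": {"gui", "easy"},
}

def startup_modes(argv0):
    name = os.path.basename(argv0)
    base = name.lstrip("r")          # rgvim -> gvim, rview -> view
    modes = set(MODES.get(base, set()))
    if name.startswith("r"):
        modes.add("restricted")
    return modes
```

For example, startup_modes("/usr/bin/rgvim") yields both the GUI and restricted modes, matching the description of rgvim.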
|
vim - Vi IMproved, a programmer's text editor
|
vim [options] [file ..] vim [options] - vim [options] -t tag vim [options] -q [errorfile] ex view gvim gview evim eview rvim rview rgvim rgview
|
The options may be given in any order, before or after filenames. Options without an argument can be combined after a single dash. +[num] For the first file the cursor will be positioned on line "num". If "num" is missing, the cursor will be positioned on the last line. +/{pat} For the first file the cursor will be positioned in the line with the first occurrence of {pat}. See ":help search-pattern" for the available search patterns. +{command} -c {command} {command} will be executed after the first file has been read. {command} is interpreted as an Ex command. If the {command} contains spaces it must be enclosed in double quotes (this depends on the shell that is used). Example: vim "+set si" main.c Note: You can use up to 10 "+" or "-c" commands. -S {file} {file} will be sourced after the first file has been read. This is equivalent to -c "source {file}". {file} cannot start with '-'. If {file} is omitted "Session.vim" is used (only works when -S is the last argument). --cmd {command} Like using "-c", but the command is executed just before processing any vimrc file. You can use up to 10 of these commands, independently from "-c" commands. -A If Vim has been compiled with ARABIC support for editing right-to-left oriented files and Arabic keyboard mapping, this option starts Vim in Arabic mode, i.e. 'arabic' is set. Otherwise an error message is given and Vim aborts. -b Binary mode. A few options will be set that make it possible to edit a binary or executable file. -C Compatible. Set the 'compatible' option. This will make Vim behave mostly like Vi, even though a .vimrc file exists. -d Start in diff mode. There should be two to eight file name arguments. Vim will open all the files and show differences between them. Works like vimdiff(1). -d {device}, -dev {device} Open {device} for use as a terminal. Only on the Amiga. Example: "-d con:20/30/600/150". -D Debugging. Go to debugging mode when executing the first command from a script.
-e Start Vim in Ex mode, just like the executable was called "ex". -E Start Vim in improved Ex mode, just like the executable was called "exim". -f Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. On the Amiga, Vim is not restarted to open a new window. This option should be used when Vim is executed by a program that will wait for the edit session to finish (e.g. mail). On the Amiga the ":sh" and ":!" commands will not work. --nofork Foreground. For the GUI version, Vim will not fork and detach from the shell it was started in. -F If Vim has been compiled with FKMAP support for editing right-to-left oriented files and Farsi keyboard mapping, this option starts Vim in Farsi mode, i.e. 'fkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -g If Vim has been compiled with GUI support, this option enables the GUI. If no GUI support was compiled in, an error message is given and Vim aborts. --gui-dialog-file {name} When using the GUI, instead of showing a dialog, write the title and message of the dialog to file {name}. The file is created or appended to. Only useful for testing, to avoid that the test gets stuck on a dialog that can't be seen. Without the GUI the argument is ignored. --help, -h, -? Give a bit of help about the command line arguments and options. After this Vim exits. -H If Vim has been compiled with RIGHTLEFT support for editing right-to-left oriented files and Hebrew keyboard mapping, this option starts Vim in Hebrew mode, i.e. 'hkmap' and 'rightleft' are set. Otherwise an error message is given and Vim aborts. -i {viminfo} Specifies the filename to use when reading or writing the viminfo file, instead of the default "~/.viminfo". This can also be used to skip the use of the .viminfo file, by giving the name "NONE". -L Same as -r. -l Lisp mode. Sets the 'lisp' and 'showmatch' options on. -m Modifying files is disabled. Resets the 'write' option. 
You can still modify the buffer, but writing a file is not possible. -M Modifications not allowed. The 'modifiable' and 'write' options will be unset, so that changes are not allowed and files can not be written. Note that these options can be set to enable making modifications. -N No-compatible mode. Resets the 'compatible' option. This will make Vim behave a bit better, but less Vi compatible, even though a .vimrc file does not exist. -n No swap file will be used. Recovery after a crash will be impossible. Handy if you want to edit a file on a very slow medium (e.g. floppy). Can also be done with ":set uc=0". Can be undone with ":set uc=200". -nb Become an editor server for NetBeans. See the docs for details. -o[N] Open N windows stacked. When N is omitted, open one window for each file. -O[N] Open N windows side by side. When N is omitted, open one window for each file. -p[N] Open N tab pages. When N is omitted, open one tab page for each file. -P {parent-title} Win32 GUI only: Specify the title of the parent application. When possible, Vim will run in an MDI window inside the application. {parent-title} must appear in the window title of the parent application. Make sure that it is specific enough. Note that the implementation is still primitive. It won't work with all applications and the menu doesn't work. -R Read-only mode. The 'readonly' option will be set. You can still edit the buffer, but will be prevented from accidentally overwriting a file. If you do want to overwrite a file, add an exclamation mark to the Ex command, as in ":w!". The -R option also implies the -n option (see above). The 'readonly' option can be reset with ":set noro". See ":help 'readonly'". -r List swap files, with information about using them for recovery. -r {file} Recovery mode. The swap file is used to recover a crashed editing session. The swap file is a file with the same filename as the text file with ".swp" appended. See ":help recovery". -s Silent mode. 
Only when started as "Ex" or when the "-e" option was given before the "-s" option. -s {scriptin} The script file {scriptin} is read. The characters in the file are interpreted as if you had typed them. The same can be done with the command ":source! {scriptin}". If the end of the file is reached before the editor exits, further characters are read from the keyboard. -T {terminal} Tells Vim the name of the terminal you are using. Only required when the automatic way doesn't work. Should be a terminal known to Vim (builtin) or defined in the termcap or terminfo file. --not-a-term Tells Vim that the user knows that the input and/or output is not connected to a terminal. This will avoid the warning and the two second delay that would happen. --ttyfail When stdin or stdout is not a terminal (tty) then exit right away. -u {vimrc} Use the commands in the file {vimrc} for initializations. All the other initializations are skipped. Use this to edit a special kind of files. It can also be used to skip all initializations by giving the name "NONE". See ":help initialization" within vim for more details. -U {gvimrc} Use the commands in the file {gvimrc} for GUI initializations. All the other GUI initializations are skipped. It can also be used to skip all GUI initializations by giving the name "NONE". See ":help gui-init" within vim for more details. -V[N] Verbose. Give messages about which files are sourced and for reading and writing a viminfo file. The optional number N is the value for 'verbose'. Default is 10. -V[N]{fname} Like -V and set 'verbosefile' to {fname}. The result is that messages are not displayed but written to the file {fname}. {fname} must not start with a digit. --log {fname} If Vim has been compiled with eval and channel feature, start logging and write entries to {fname}. This works like calling ch_logfile({fname}, 'ao') very early during startup. -v Start Vim in Vi mode, just like the executable was called "vi".
This only has effect when the executable is called "ex". -w{number} Set the 'window' option to {number}. -w {scriptout} All the characters that you type are recorded in the file {scriptout}, until you exit Vim. This is useful if you want to create a script file to be used with "vim -s" or ":source!". If the {scriptout} file exists, characters are appended. -W {scriptout} Like -w, but an existing file is overwritten. -x Use encryption when writing files. Will prompt for a crypt key. -X Don't connect to the X server. Shortens startup time in a terminal, but the window title and clipboard will not be used. -y Start Vim in easy mode, just like the executable was called "evim" or "eview". Makes Vim behave like a click-and-type editor. -Z Restricted mode. Works like the executable starts with "r". -- Denotes the end of the options. Arguments after this will be handled as a file name. This can be used to edit a filename that starts with a '-'. --clean Do not use any personal configuration (vimrc, plugins, etc.). Useful to see if a problem reproduces with a clean Vim setup. --echo-wid GTK GUI only: Echo the Window ID on stdout. --literal Take file name arguments literally, do not expand wildcards. This has no effect on Unix where the shell expands wildcards. --noplugin Skip loading plugins. Implied by -u NONE. --remote Connect to a Vim server and make it edit the files given in the rest of the arguments. If no server is found a warning is given and the files are edited in the current Vim. --remote-expr {expr} Connect to a Vim server, evaluate {expr} in it and print the result on stdout. --remote-send {keys} Connect to a Vim server and send {keys} to it. --remote-silent As --remote, but without the warning when no server is found. --remote-wait As --remote, but Vim does not exit until the files have been edited. --remote-wait-silent As --remote-wait, but without the warning when no server is found. --serverlist List the names of all Vim servers that can be found. 
--servername {name} Use {name} as the server name. Used for the current Vim, unless used with a --remote argument, then it's the name of the server to connect to. --socketid {id} GTK GUI only: Use the GtkPlug mechanism to run gvim in another window. --startuptime {file} During startup write timing messages to the file {fname}. --version Print version information and exit. --windowid {id} Win32 GUI only: Make gvim try to use the window {id} as a parent, so that it runs inside that window. ON-LINE HELP Type ":help" in Vim to get started. Type ":help subject" to get help on a specific subject. For example: ":help ZZ" to get help for the "ZZ" command. Use <Tab> and CTRL-D to complete subjects (":help cmdline-completion"). Tags are present to jump from one place to another (sort of hypertext links, see ":help"). All documentation files can be viewed in this way, for example ":help syntax.txt". FILES /usr/local/share/vim/vim??/doc/*.txt The Vim documentation files. Use ":help doc-file-list" to get the complete list. vim?? is short version number, like vim91 for Vim 9.1 /usr/local/share/vim/vim??/doc/tags The tags file used for finding information in the documentation files. /usr/local/share/vim/vim??/syntax/syntax.vim System wide syntax initializations. /usr/local/share/vim/vim??/syntax/*.vim Syntax files for various languages. /usr/local/share/vim/vimrc System wide Vim initializations. ~/.vimrc, ~/.vim/vimrc, $XDG_CONFIG_HOME/vim/vimrc Your personal Vim initializations (first one found is used). /usr/local/share/vim/gvimrc System wide gvim initializations. ~/.gvimrc, ~/.vim/gvimrc, $XDG_CONFIG_HOME/vim/gvimrc Your personal gvim initializations (first one found is used). /usr/local/share/vim/vim??/optwin.vim Script used for the ":options" command, a nice way to view and set options. /usr/local/share/vim/vim??/menu.vim System wide menu initializations for gvim. /usr/local/share/vim/vim??/bugreport.vim Script to generate a bug report. See ":help bugs". 
/usr/local/share/vim/vim??/filetype.vim Script to detect the type of a file by its name. See ":help 'filetype'". /usr/local/share/vim/vim??/scripts.vim Script to detect the type of a file by its contents. See ":help 'filetype'". /usr/local/share/vim/vim??/print/*.ps Files used for PostScript printing. For recent info read the VIM home page: <URL:http://www.vim.org/> SEE ALSO vimtutor(1) AUTHOR Most of Vim was made by Bram Moolenaar, with a lot of help from others. See ":help credits" in Vim. Vim is based on Stevie, worked on by: Tim Thompson, Tony Andrews and G.R. (Fred) Walter. Although hardly any of the original code remains. BUGS Probably. See ":help todo" for a list of known problems. Note that a number of things that may be regarded as bugs by some, are in fact caused by a too-faithful reproduction of Vi's behaviour. And if you think other things are bugs "because Vi does it differently", you should take a closer look at the vi_diff.txt file (or type :help vi_diff.txt when in Vim). Also have a look at the 'compatible' and 'cpoptions' options. 2024 Jun 04 VIM(1)
| null |
jcmd
|
The jcmd utility is used to send diagnostic command requests to the JVM. It must be used on the same machine on which the JVM is running, and have the same effective user and group identifiers that were used to launch the JVM. Each diagnostic command has its own set of arguments. To display the description, syntax, and a list of available arguments for a diagnostic command, use the name of the command as the argument. For example:

    jcmd pid help command

If arguments contain spaces, then you must surround them with single or double quotation marks (' or "). In addition, you must escape single or double quotation marks with a backslash (\) to prevent the operating system shell from processing quotation marks. Alternatively, you can surround these arguments with single quotation marks and then with double quotation marks (or with double quotation marks and then with single quotation marks).

If you specify the process identifier (pid) or the main class (main-class) as the first argument, then the jcmd utility sends the diagnostic command request to the Java process with the specified identifier or to all Java processes with the specified name of the main class. You can also send the diagnostic command request to all available Java processes by specifying 0 as the process identifier.

COMMANDS FOR JCMD
The command must be a valid jcmd diagnostic command for the selected JVM. The list of available commands for jcmd is obtained by running the help command (jcmd pid help) where pid is the process ID for a running Java process. If the pid is 0, commands will be sent to all Java processes. The main class argument will be used to match, either partially or fully, the class used to start Java. If no options are given, it lists the running Java process identifiers that are not in separate docker processes along with the main class and command-line arguments that were used to launch the process (the same as using -l).
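Because of the shell-quoting rules above, arguments with spaces need careful quoting. Python's shlex tokenizes like a POSIX shell, so it shows what the jcmd process will actually receive; the PID and the dump path below are illustrative:

```python
import shlex

# How a POSIX shell tokenizes a jcmd invocation whose argument
# contains spaces. The PID (1234) and dump path are illustrative.
cmdline = 'jcmd 1234 GC.heap_dump "/tmp/my dump.hprof"'
print(shlex.split(cmdline))
# The quoted path arrives as a single argument, spaces intact.
```

Passing the argument list directly to an exec-style API (e.g. subprocess with a list argument) sidesteps shell quoting entirely.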
The following commands are available:

help [options] [arguments]
    For more information about a specific command.
    arguments:
    β’ command name: The name of the command for which we want help (STRING, no default value)
    Note: The following options must be specified using either key or key=value syntax.
    options:
    β’ -all: (Optional) Show help for all commands (BOOLEAN, false)

Compiler.CodeHeap_Analytics [function] [granularity]
    Print CodeHeap analytics
    Impact: Low: Depends on code heap size and content. Holds CodeCache_lock during analysis step, usually sub-second duration.
    arguments:
    β’ function: (Optional) Function to be performed (aggregate, UsedSpace, FreeSpace, MethodCount, MethodSpace, MethodAge, MethodNames, discard) (STRING, all)
    β’ granularity: (Optional) Detail level - smaller value -> more detail (INT, 4096)

Compiler.codecache
    Prints code cache layout and bounds.
    Impact: Low

Compiler.codelist
    Prints all compiled methods in code cache that are alive.
    Impact: Medium

Compiler.directives_add arguments
    Adds compiler directives from a file.
    Impact: Low
    arguments:
    β’ filename: The name of the directives file (STRING, no default value)

Compiler.directives_clear
    Remove all compiler directives.
    Impact: Low

Compiler.directives_print
    Prints all active compiler directives.
    Impact: Low

Compiler.directives_remove
    Remove latest added compiler directive.
    Impact: Low

Compiler.memory [options]
    Print compilation footprint
    Impact: Medium: Pause time depends on number of compiled methods
    Note: The options must be specified using either key or key=value syntax.
    options:
    β’ -H: (Optional) Human readable format (BOOLEAN, false)
    β’ -s: (Optional) Minimum memory size (MEMORY SIZE, 0)

Compiler.perfmap (Linux only)
    Write map file for Linux perf tool.
    Impact: Low

Compiler.queue
    Prints methods queued for compilation.
    Impact: Low

GC.class_histogram [options]
    Provides statistics about the Java heap usage.
    Impact: High --- depends on Java heap size and content.
    Note: The options must be specified using either key or key=value syntax.
    options:
    β’ -all: (Optional) Inspects all objects, including unreachable objects (BOOLEAN, false)
    β’ -parallel: (Optional) Number of parallel threads to use for heap inspection. 0 (the default) means let the VM determine the number of threads to use. 1 means use one thread (disable parallelism). For any other value the VM will try to use the specified number of threads, but might use fewer. (INT, 0)

GC.finalizer_info
    Provides information about the Java finalization queue.
    Impact: Medium

GC.heap_dump [options] [arguments]
    Generates a HPROF format dump of the Java heap.
    Impact: High --- depends on the Java heap size and content. Request a full GC unless the -all option is specified.
    Note: The following options must be specified using either key or key=value syntax.
    options:
    β’ -all: (Optional) Dump all objects, including unreachable objects (BOOLEAN, false)
    β’ -gz: (Optional) If specified, the heap dump is written in gzipped format using the given compression level. 1 (recommended) is the fastest, 9 the strongest compression. (INT, 1)
    β’ -overwrite: (Optional) If specified, the dump file will be overwritten if it exists (BOOLEAN, false)
    β’ -parallel: (Optional) Number of parallel threads to use for heap dump. The VM will try to use the specified number of threads, but might use fewer. (INT, 1)
    arguments:
    β’ filename: The name of the dump file (STRING, no default value)

GC.heap_info
    Provides generic Java heap information.
    Impact: Medium

GC.run
    Calls java.lang.System.gc().
    Impact: Medium --- depends on the Java heap size and content.

GC.run_finalization
    Calls java.lang.System.runFinalization().
    Impact: Medium --- depends on the Java content.

JFR.check [options]
    Show information about a running flight recording
    Impact: Low
    Note: The options must be specified using either key or key=value syntax. If no parameters are entered, information for all active recordings is shown.
    options:
    β’ name: (Optional) Name of the flight recording. (STRING, no default value)
    β’ verbose: (Optional) Flag for printing the event settings for the recording (BOOLEAN, false)

JFR.configure [options]
    Set the parameters for a flight recording
    Impact: Low
    Note: The options must be specified using either key or key=value syntax. If no parameters are entered, the current settings are displayed.
    options:
    β’ dumppath: (Optional) Path to the location where a recording file is written in case the VM runs into a critical error, such as a system crash. (STRING, The default location is the current directory)
    β’ globalbuffercount: (Optional) Number of global buffers. This option is a legacy option: change the memorysize parameter to alter the number of global buffers. This value cannot be changed once JFR has been initialized. (STRING, default determined by the value for memorysize)
    β’ globalbuffersize: (Optional) Size of the global buffers, in bytes. This option is a legacy option: change the memorysize parameter to alter the size of the global buffers. This value cannot be changed once JFR has been initialized. (STRING, default determined by the value for memorysize)
    β’ maxchunksize: (Optional) Maximum size of an individual data chunk in bytes if one of the following suffixes is not used: 'm' or 'M' for megabytes OR 'g' or 'G' for gigabytes. This value cannot be changed once JFR has been initialized. (STRING, 12M)
    β’ memorysize: (Optional) Overall memory size, in bytes if one of the following suffixes is not used: 'm' or 'M' for megabytes OR 'g' or 'G' for gigabytes. This value cannot be changed once JFR has been initialized. (STRING, 10M)
    β’ repositorypath: (Optional) Path to the location where recordings are stored until they are written to a permanent file. (STRING, The default location is the temporary directory for the operating system. On Linux operating systems, the temporary directory is /tmp.
On Windows, the temporary directory is specified by the TMP environment variable.) • preserve-repository={true|false} : Specifies whether files stored in the disk repository should be kept after the JVM has exited. If false, files are deleted. By default, this parameter is disabled. • stackdepth: (Optional) Stack depth for stack traces. Setting this value greater than the default of 64 may cause a performance degradation. This value cannot be changed once JFR has been initialized. (LONG, 64) • thread_buffer_size: (Optional) Local buffer size for each thread in bytes if one of the following suffixes is not used: 'k' or 'K' for kilobytes or 'm' or 'M' for megabytes. Overriding this parameter could reduce performance and is not recommended. This value cannot be changed once JFR has been initialized. (STRING, 8k) • samplethreads: (Optional) Flag for activating thread sampling. (BOOLEAN, true) JFR.dump [options] Write data to a file while a flight recording is running Impact: Low Note: The options must be specified using either key or key=value syntax. No options are required. The recording continues to run after the data is written. options: • begin: (Optional) Specify the time from which recording data will be included in the dump file. The format is specified as local time. (STRING, no default value) • end: (Optional) Specify the time to which recording data will be included in the dump file. The format is specified as local time. (STRING, no default value) Note: For both begin and end, the time must be in a format that can be read by java.time.LocalTime::parse(STRING), java.time.LocalDateTime::parse(STRING) or java.time.Instant::parse(STRING). For example, "13:20:15", "2020-03-17T09:00:00" or "2020-03-17T09:00:00Z". Note: begin and end times correspond to the timestamps found within the recorded information in the flight recording data.
Another option is to use a time relative to the current time that is specified by a negative integer followed by "s", "m" or "h". For example, "-12h", "-15m" or "-30s" β’ filename: (Optional) Name of the file to which the flight recording data is dumped. If no filename is given, a filename is generated from the PID and the current date. The filename may also be a directory in which case, the filename is generated from the PID and the current date in the specified directory. If %p and/or %t is specified in the filename, it expands to the JVM's PID and the current timestamp, respectively. (STRING, no default value) β’ maxage: (Optional) Length of time for dumping the flight recording data to a file. (INTEGER followed by 's' for seconds 'm' for minutes or 'h' for hours, no default value) β’ maxsize: (Optional) Maximum size for the amount of data to dump from a flight recording in bytes if one of the following suffixes is not used: 'm' or 'M' for megabytes OR 'g' or 'G' for gigabytes. (STRING, no default value) β’ name: (Optional) Name of the recording. If no name is given, data from all recordings is dumped. (STRING, no default value) β’ path-to-gc-root: (Optional) Flag for saving the path to garbage collection (GC) roots at the time the recording data is dumped. The path information is useful for finding memory leaks but collecting it can cause the application to pause for a short period of time. Turn on this flag only when you have an application that you suspect has a memory leak. (BOOLEAN, false) JFR.start [options] Start a flight recording Impact: Low Note: The options must be specified using either key or key=value syntax. If no parameters are entered, then a recording is started with default values. 
options: β’ delay: (Optional) Length of time to wait before starting to record (INTEGER followed by 's' for seconds 'm' for minutes or 'h' for hours, 0s) β’ disk: (Optional) Flag for also writing the data to disk while recording (BOOLEAN, true) β’ dumponexit: (Optional) Flag for writing the recording to disk when the Java Virtual Machine (JVM) shuts down. If set to 'true' and no value is given for filename, the recording is written to a file in the directory where the process was started. The file name is a system-generated name that contains the process ID, the recording ID and the current time stamp. (For example: id-1-2019_12_12_10_41.jfr) (BOOLEAN, false) β’ duration: (Optional) Length of time to record. Note that 0s means forever (INTEGER followed by 's' for seconds 'm' for minutes or 'h' for hours, 0s) β’ filename: (Optional) Name of the file to which the flight recording data is written when the recording is stopped. If no filename is given, a filename is generated from the PID and the current date and is placed in the directory where the process was started. The filename may also be a directory in which case, the filename is generated from the PID and the current date in the specified directory. If %p and/or %t is specified in the filename, it expands to the JVM's PID and the current timestamp, respectively. (STRING, no default value) β’ maxage: (Optional) Maximum time to keep the recorded data on disk. This parameter is valid only when the disk parameter is set to true. Note 0s means forever. (INTEGER followed by 's' for seconds 'm' for minutes or 'h' for hours, 0s) β’ maxsize: (Optional) Maximum size of the data to keep on disk in bytes if one of the following suffixes is not used: 'm' or 'M' for megabytes OR 'g' or 'G' for gigabytes. This parameter is valid only when the disk parameter is set to 'true'. The value must not be less than the value for the maxchunksize parameter set with the JFR.configure command. 
(STRING, 0 (no maximum size)) β’ name: (Optional) Name of the recording. If no name is provided, a name is generated. Make note of the generated name that is shown in the response to the command so that you can use it with other commands. (STRING, system-generated default name) β’ path-to-gc-root: (Optional) Flag for saving the path to garbage collection (GC) roots at the end of a recording. The path information is useful for finding memory leaks but collecting it is time consuming. Turn on this flag only when you have an application that you suspect has a memory leak. If the settings parameter is set to 'profile', then the information collected includes the stack trace from where the potential leaking object was allocated. (BOOLEAN, false) β’ settings: (Optional) Name of the settings file that identifies which events to record. To specify more than one file, separate the names with a comma (','). Include the path if the file is not in JAVA-HOME/lib/jfr. The following profiles are included with the JDK in the JAVA-HOME/lib/jfr directory: 'default.jfc': collects a predefined set of information with low overhead, so it has minimal impact on performance and can be used with recordings that run continuously; 'profile.jfc': Provides more data than the 'default.jfc' profile, but with more overhead and impact on performance. Use this configuration for short periods of time when more information is needed. Use none to start a recording without a predefined configuration file. (STRING, JAVA-HOME/lib/jfr/default.jfc) Event settings and .jfc options can be specified using the following syntax: β’ option: (Optional) Specifies the option value to modify. To list available options, use the JAVA_HOME/bin/jfr tool. β’ event-setting: (Optional) Specifies the event setting value to modify. Use the form: <event-name>#<setting-name>=<value> To add a new event setting, prefix the event name with '+'. 
You can specify values for multiple event settings and .jfc options by separating them with a whitespace. In case of a conflict between a parameter and a .jfc option, the parameter will take precedence. The whitespace delimiter can be omitted for timespan values, i.e. 20ms. For more information about the settings syntax, see Javadoc of the jdk.jfr package. JFR.stop [options] Stop a flight recording Impact: Low Note: The options must be specified using either key or key=value syntax. If no parameters are entered, then no recording is stopped. options: β’ filename: (Optional) Name of the file to which the recording is written when the recording is stopped. If %p and/or %t is specified in the filename, it expands to the JVM's PID and the current timestamp, respectively. If no path is provided, the data from the recording is discarded. (STRING, no default value) β’ name: (Optional) Name of the recording (STRING, no default value) JVMTI.agent_load [arguments] Loads JVMTI native agent. Impact: Low arguments: β’ library path: Absolute path of the JVMTI agent to load. (STRING, no default value) β’ agent option: (Optional) Option string to pass the agent. (STRING, no default value) JVMTI.data_dump Signal the JVM to do a data-dump request for JVMTI. Impact: High ManagementAgent.start [options] Starts remote management agent. Impact: Low --- no impact Note: The following options must be specified using either key or key=value syntax. 
options: • config.file: (Optional) Sets com.sun.management.config.file (STRING, no default value) • jmxremote.host: (Optional) Sets com.sun.management.jmxremote.host (STRING, no default value) • jmxremote.port: (Optional) Sets com.sun.management.jmxremote.port (STRING, no default value) • jmxremote.rmi.port: (Optional) Sets com.sun.management.jmxremote.rmi.port (STRING, no default value) • jmxremote.ssl: (Optional) Sets com.sun.management.jmxremote.ssl (STRING, no default value) • jmxremote.registry.ssl: (Optional) Sets com.sun.management.jmxremote.registry.ssl (STRING, no default value) • jmxremote.authenticate: (Optional) Sets com.sun.management.jmxremote.authenticate (STRING, no default value) • jmxremote.password.file: (Optional) Sets com.sun.management.jmxremote.password.file (STRING, no default value) • jmxremote.access.file: (Optional) Sets com.sun.management.jmxremote.access.file (STRING, no default value) • jmxremote.login.config: (Optional) Sets com.sun.management.jmxremote.login.config (STRING, no default value) • jmxremote.ssl.enabled.cipher.suites: (Optional) Sets com.sun.management.jmxremote.ssl.enabled.cipher.suites (STRING, no default value) • jmxremote.ssl.enabled.protocols: (Optional) Sets com.sun.management.jmxremote.ssl.enabled.protocols (STRING, no default value) • jmxremote.ssl.need.client.auth: (Optional) Sets com.sun.management.jmxremote.ssl.need.client.auth (STRING, no default value) • jmxremote.ssl.config.file: (Optional) Sets com.sun.management.jmxremote.ssl.config.file (STRING, no default value) • jmxremote.autodiscovery: (Optional) Sets com.sun.management.jmxremote.autodiscovery (STRING, no default value) • jdp.port: (Optional) Sets com.sun.management.jdp.port (INT, no default value) • jdp.address: (Optional) Sets com.sun.management.jdp.address (STRING, no default value) • jdp.source_addr: (Optional) Sets com.sun.management.jdp.source_addr (STRING, no default value) • jdp.ttl: (Optional) Sets com.sun.management.jdp.ttl (INT, no default value) • jdp.pause: (Optional) Sets com.sun.management.jdp.pause (INT, no default value) • jdp.name: (Optional) Sets com.sun.management.jdp.name (STRING, no default value) ManagementAgent.start_local Starts the local management agent. Impact: Low --- no impact ManagementAgent.status Print the management agent status. Impact: Low --- no impact ManagementAgent.stop Stops the remote management agent. Impact: Low --- no impact System.dump_map [options] (Linux only) Dumps an annotated process memory map to an output file. Impact: Low Note: The following options must be specified using either key or key=value syntax. options: • -H: (Optional) Human readable format (BOOLEAN, false) • -F: (Optional) File path (STRING, "vm_memory_map_.txt") System.map [options] (Linux only) Prints an annotated process memory map of the VM process. Impact: Low Note: The following options must be specified using either key or key=value syntax. options: • -H: (Optional) Human readable format (BOOLEAN, false) System.native_heap_info (Linux only) Attempts to output information regarding native heap usage through malloc_info(3). If unsuccessful outputs "Error: " and a reason. Impact: Low System.trim_native_heap (Linux only) Attempts to free up memory by trimming the C-heap. Impact: Low Thread.dump_to_file [options] filepath Dump threads, with stack traces, to a file in plain text or JSON format. Impact: Medium: Depends on the number of threads.
Note: The following options must be specified using either key or key=value syntax. options: β’ -overwrite: (Optional) May overwrite existing file (BOOLEAN, false) β’ -format: (Optional) Output format ("plain" or "json") (STRING, plain) arguments: β’ filepath: The file path to the output file (STRING, no default value) Thread.print [options] Prints all threads with stacktraces. Impact: Medium --- depends on the number of threads. Note: The following options must be specified using either key or key=value syntax. options: β’ -e: (Optional) Print extended thread information (BOOLEAN, false) β’ -l: (Optional) Prints java.util.concurrent locks (BOOLEAN, false) VM.cds [arguments] Dump a static or dynamic shared archive that includes all currently loaded classes. Impact: Medium --- pause time depends on number of loaded classes arguments: β’ subcmd: must be either static_dump or dynamic_dump (STRING, no default value) β’ filename: (Optional) Name of the shared archive to be dumped (STRING, no default value) If filename is not specified, a default file name is chosen using the pid of the target JVM process. For example, java_pid1234_static.jsa, java_pid5678_dynamic.jsa, etc. If filename is not specified as an absolute path, the archive file is created in a directory relative to the current directory of the target JVM process. If dynamic_dump is specified, the target JVM must be started with the JVM option -XX:+RecordDynamicDumpInfo. VM.class_hierarchy [options] [arguments] Print a list of all loaded classes, indented to show the class hierarchy. The name of each class is followed by the ClassLoaderData* of its ClassLoader, or "null" if it is loaded by the bootstrap class loader. Impact: Medium --- depends on the number of loaded classes. Note: The following options must be specified using either key or key=value syntax. options: β’ -i: (Optional) Inherited interfaces should be printed. 
(BOOLEAN, false) β’ -s: (Optional) If a classname is specified, print its subclasses in addition to its superclasses. Without this option only the superclasses will be printed. (BOOLEAN, false) arguments: β’ classname: (Optional) The name of the class whose hierarchy should be printed. If not specified, all class hierarchies are printed. (STRING, no default value) VM.classes [options] Print all loaded classes Impact: Medium: Depends on number of loaded classes. The following options must be specified using either key or key=value syntax. options: β’ -verbose: (Optional) Dump the detailed content of a Java class. Some classes are annotated with flags: F = has, or inherits, a non-empty finalize method, f = has final method, W = methods rewritten, C = marked with @Contended annotation, R = has been redefined, S = is shared class (BOOLEAN, false) VM.classloader_stats Print statistics about all ClassLoaders. Impact: Low VM.classloaders [options] Prints classloader hierarchy. Impact: Medium --- Depends on number of class loaders and classes loaded. The following options must be specified using either key or key=value syntax. options: β’ show-classes: (Optional) Print loaded classes. (BOOLEAN, false) β’ verbose: (Optional) Print detailed information. (BOOLEAN, false) β’ fold: (Optional) Show loaders of the same name and class as one. (BOOLEAN, true) VM.command_line Print the command line used to start this VM instance. Impact: Low VM.dynlibs Print loaded dynamic libraries. Impact: Low VM.events [options] Print VM event logs Impact: Low --- Depends on event log size. options: Note: The following options must be specified using either key or key=value syntax. β’ log: (Optional) Name of log to be printed. If omitted, all logs are printed. (STRING, no default value) β’ max: (Optional) Maximum number of events to be printed (newest first). If omitted, all events are printed. (STRING, no default value) VM.flags [options] Print the VM flag options and their current values. 
Impact: Low Note: The following options must be specified using either key or key=value syntax. options: β’ -all: (Optional) Prints all flags supported by the VM (BOOLEAN, false). VM.info Print information about the JVM environment and status. Impact: Low VM.log [options] Lists current log configuration, enables/disables/configures a log output, or rotates all logs. Impact: Low options: Note: The following options must be specified using either key or key=value syntax. β’ output: (Optional) The name or index (#) of output to configure. (STRING, no default value) β’ output_options: (Optional) Options for the output. (STRING, no default value) β’ what: (Optional) Configures what tags to log. (STRING, no default value ) β’ decorators: (Optional) Configures which decorators to use. Use 'none' or an empty value to remove all. (STRING, no default value) β’ disable: (Optional) Turns off all logging and clears the log configuration. (BOOLEAN, no default value) β’ list: (Optional) Lists current log configuration. (BOOLEAN, no default value) β’ rotate: (Optional) Rotates all logs. (BOOLEAN, no default value) VM.metaspace [options] Prints the statistics for the metaspace Impact: Medium --- Depends on number of classes loaded. Note: The following options must be specified using either key or key=value syntax. options: β’ basic: (Optional) Prints a basic summary (does not need a safepoint). (BOOLEAN, false) β’ show-loaders: (Optional) Shows usage by class loader. (BOOLEAN, false) β’ show-classes: (Optional) If show-loaders is set, shows loaded classes for each loader. (BOOLEAN, false) β’ by-chunktype: (Optional) Break down numbers by chunk type. (BOOLEAN, false) β’ by-spacetype: (Optional) Break down numbers by loader type. (BOOLEAN, false) β’ vslist: (Optional) Shows details about the underlying virtual space. (BOOLEAN, false) β’ chunkfreelist: (Optional) Shows details about global chunk free lists (ChunkManager). 
(BOOLEAN, false) β’ scale: (Optional) Memory usage in which to scale. Valid values are: 1, KB, MB or GB (fixed scale) or "dynamic" for a dynamically chosen scale. (STRING, dynamic) VM.native_memory [options] Print native memory usage Impact: Medium Note: The following options must be specified using either key or key=value syntax. options: β’ summary: (Optional) Requests runtime to report current memory summary, which includes total reserved and committed memory, along with memory usage summary by each subsystem. (BOOLEAN, false) β’ detail: (Optional) Requests runtime to report memory allocation >= 1K by each callsite. (BOOLEAN, false) β’ baseline: (Optional) Requests runtime to baseline current memory usage, so it can be compared against in later time. (BOOLEAN, false) β’ summary.diff: (Optional) Requests runtime to report memory summary comparison against previous baseline. (BOOLEAN, false) β’ detail.diff: (Optional) Requests runtime to report memory detail comparison against previous baseline, which shows the memory allocation activities at different callsites. (BOOLEAN, false) β’ statistics: (Optional) Prints tracker statistics for tuning purpose. (BOOLEAN, false) β’ scale: (Optional) Memory usage in which scale, KB, MB or GB (STRING, KB) VM.set_flag [arguments] Sets VM flag option using the provided value. Impact: Low arguments: β’ flag name: The name of the flag that you want to set (STRING, no default value) β’ string value: (Optional) The value that you want to set (STRING, no default value) VM.stringtable [options] Dump string table. Impact: Medium --- depends on the Java content. Note: The following options must be specified using either key or key=value syntax. options: β’ -verbose: (Optional) Dumps the content of each string in the table (BOOLEAN, false) VM.symboltable [options] Dump symbol table. Impact: Medium --- depends on the Java content. Note: The following options must be specified using either key or key=value syntax). 
options: • -verbose: (Optional) Dumps the content of each symbol in the table (BOOLEAN, false) VM.system_properties Print system properties. Impact: Low VM.systemdictionary Prints the statistics for dictionary hashtable sizes and bucket length. Impact: Medium Note: The following options must be specified using either key or key=value syntax. options: • verbose: (Optional) Dump the content of each dictionary entry for all class loaders (BOOLEAN, false) VM.uptime [options] Print VM uptime. Impact: Low Note: The following options must be specified using either key or key=value syntax. options: • -date: (Optional) Adds a prefix with the current date (BOOLEAN, false) VM.version Print JVM version information. Impact: Low JDK 22 2024 JCMD(1)
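The JFR commands described above compose into a simple record/dump/stop workflow. The sketch below is illustrative only: the PID 12345, the recording name demo, and the /tmp paths are placeholders, and each call is made non-fatal with `|| true` because no JVM with that PID may exist.

```shell
# Hypothetical JFR session; substitute a real PID taken from `jcmd -l`.
PID=12345

if command -v jcmd >/dev/null 2>&1; then
  # Start a named recording with the bundled low-overhead configuration.
  jcmd "$PID" JFR.start name=demo settings=default.jfc duration=60s || true
  # Snapshot the data collected so far; the recording keeps running.
  jcmd "$PID" JFR.dump name=demo filename=/tmp/demo.jfr || true
  # Stop the recording and write the final file.
  jcmd "$PID" JFR.stop name=demo filename=/tmp/demo.jfr || true
fi
```

When chasing a suspected leak, the option descriptions above suggest combining settings=profile.jfc on JFR.start with path-to-gc-root=true on JFR.dump, at the cost of a short application pause.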
|
jcmd - send diagnostic command requests to a running Java Virtual Machine (JVM)
|
jcmd [pid | main-class] command... | PerfCounter.print | -f filename jcmd [-l] jcmd -h pid When used, the jcmd utility sends the diagnostic command request to the process ID for the Java process. main-class When used, the jcmd utility sends the diagnostic command request to all Java processes with the specified name of the main class. command The command must be a valid jcmd command for the selected JVM. The list of available commands for jcmd is obtained by running the help command (jcmd pid help) where pid is the process ID for the running Java process. If the pid is 0, commands will be sent to all Java processes. The main class argument will be used to match, either partially or fully, the class used to start Java. If no options are given, it lists the running Java process identifiers with the main class and command-line arguments that were used to launch the process (the same as using -l). PerfCounter.print Prints the performance counters exposed by the specified Java process. -f filename Reads and executes commands from a specified file, filename. -l Displays the list of Java Virtual Machine process identifiers that are not running in a separate docker process along with the main class and command-line arguments that were used to launch the process. If the JVM is in a docker process, you must use tools such as ps to look up the PID. Note: Using jcmd without arguments is the same as using jcmd -l. -h Displays the jcmd utility's command-line help.
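As a concrete illustration of the -f option, several diagnostic commands can be batched into one file and replayed against a process. The file path is arbitrary and the PID in the commented-out invocation is a placeholder:

```shell
# Build a command file for `jcmd <pid> -f`; one diagnostic command per line.
cat > /tmp/jcmd-batch.txt <<'EOF'
VM.version
VM.flags
Thread.print -l
EOF

# Then run it against a real process, e.g. (PID 12345 is hypothetical):
# jcmd 12345 -f /tmp/jcmd-batch.txt
```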
| null | null |
dmc
|
dmc(1) configures the Disk Mount Conditioner. The Disk Mount Conditioner is a kernel provided service that can degrade the disk I/O being issued to specific mount points, providing the illusion that the I/O is executing on a slower device. It can also cause the conditioned mount point to advertise itself as a different device type, e.g. the disk type of an SSD could be set to an HDD. This behavior consequently changes various parameters such as read-ahead settings, disk I/O throttling, etc., which normally have different behavior depending on the underlying device type. COMMANDS Common command parameters: • mount - the mount point to be used in the command • profile-name - the name of a profile as shown in dmc list • profile-index - the index of a profile as shown in dmc list dmc start mount [profile-name|profile-index [-boot]] Start the Disk Mount Conditioner on the given mount point with the current settings (from dmc status) or the given profile, if provided. Optionally configure the profile to remain enabled across reboots, if -boot is supplied. dmc stop mount Disable the Disk Mount Conditioner on the given mount point. Also disables any settings that persist across reboot via the -boot flag provided to dmc start, if any. dmc status mount [-json] Display the current settings (including on/off state), optionally as JSON dmc show profile-name|profile-index Display the settings of the given profile dmc list Display all profile names and indices dmc select mount profile-name|profile-index Choose a different profile for the given mount point without enabling or disabling the Disk Mount Conditioner dmc configure mount type access-time read-throughput write-throughput [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt] Select custom parameters for the given mount point rather than using the settings provided by a default profile. See dmc list for example parameter settings for various disk presets. • type - 'SSD' or 'HDD'.
The type determines how various system behaviors like disk I/O throttling and read-ahead algorithms affect the issued I/O. Additionally, choosing 'HDD' will attempt to simulate seek times, including drive spin-up from idle. • access-time - latency in microseconds for a single I/O. For SSD types this latency is applied exactly as specified to all I/O. For HDD types, the latency scales based on a simulated seek time (thus making the access-time the maximum latency or seek penalty). • read-throughput - integer specifying megabytes-per-second maximum throughput for disk reads • write-throughput - integer specifying megabytes-per-second maximum throughput for disk writes • ioqueue-depth - maximum number of commands that a device can accept • maxreadcnt - maximum byte count per read • maxwritecnt - maximum byte count per write • segreadcnt - maximum physically disjoint segments processed per read • segwritecnt - maximum physically disjoint segments processed per write dmc help | -h Display help text
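Putting the configure parameters together, a session on a hypothetical external volume might look like the following sketch. dmc exists only on macOS and typically requires root, and /Volumes/ExtDisk is a placeholder, so the commands are guarded:

```shell
VOL=/Volumes/ExtDisk   # placeholder mount point

if command -v dmc >/dev/null 2>&1 && [ -d "$VOL" ]; then
  # SSD type, 100us access time, 100 MB/s reads, 50 MB/s writes.
  sudo dmc configure "$VOL" SSD 100 100 50
  sudo dmc status "$VOL" -json   # confirm the active settings
  sudo dmc stop "$VOL"           # restore normal performance
fi
```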
|
dmc - controls the Disk Mount Conditioner
|
dmc start mount [profile-name|profile-index [-boot]] dmc stop mount dmc status mount [-json] dmc show profile-name|profile-index dmc list dmc select mount profile-name|profile-index dmc configure mount type access-time read-throughput write-throughput [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt] dmc help | -h
| null |
dmc start / '5400 HDD' Turn on the Disk Mount Conditioner for the boot volume, acting like a 5400 RPM hard drive. dmc configure /Volumes/ExtDisk SSD 100 100 50 Configure an external disk to use custom parameters to degrade performance as if it were a slow SSD with 100 microsecond latencies, 100MB/s read throughput, and 50MB/s write throughput. IMPORTANT The Disk Mount Conditioner is not a 'simulator'. It can only degrade (or 'condition') the I/O such that a faster disk device behaves like a slower device, not vice-versa. For example, a 5400 RPM hard drive cannot be conditioned to act like an SSD that is capable of a higher throughput than the theoretical limitations of the hard disk. In addition to running dmc stop, rebooting is also a sufficient way to clear any existing settings and disable Disk Mount Conditioner on all mount points (unless started with -boot). SEE ALSO nlc(1) January 2018 DMC(1)
|
vimtutor
|
Vimtutor starts the Vim tutor. It copies the tutor file first, so that it can be modified without changing the original file. The Vimtutor is useful for people who want to learn their first Vim commands. The optional argument -g starts vimtutor with gvim rather than vim, if the GUI version of vim is available, or falls back to Vim if gvim is not found. The optional [language] argument is the two-letter name of a language, like "it" or "es". If the [language] argument is missing, the language of the current locale will be used. If a tutor in this language is available, it will be used. Otherwise the English version will be used. Vim is always started in Vi compatible mode. FILES /opt/homebrew/Cellar/vim/9.1.0600/share/vim/vim91/tutor/tutor[.language] The Vimtutor text file(s). /opt/homebrew/Cellar/vim/9.1.0600/share/vim/vim91/tutor/tutor.vim The Vim script used to copy the Vimtutor text file. AUTHOR The Vimtutor was originally written for Vi by Michael C. Pierce and Robert K. Ware, Colorado School of Mines using ideas supplied by Charles Smith, Colorado State University. E-mail: bware@mines.colorado.edu (now invalid). It was modified for Vim by Bram Moolenaar. For the names of the translators see the tutor files. SEE ALSO vim(1) 2001 April 2 VIMTUTOR(1)
|
vimtutor - the Vim tutor
|
vimtutor [-g] [language]
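The locale fallback described above amounts to using the two-letter language code of the current locale when no [language] argument is given. The sketch below only derives such a code from $LANG for illustration (the en_US.UTF-8 fallback is an assumption, and vimtutor itself is deliberately not launched):

```shell
# Derive a two-letter language code from $LANG, mirroring the selection
# the description attributes to vimtutor when no argument is supplied.
lang_code=$(printf '%s' "${LANG:-en_US.UTF-8}" | cut -c1-2)
echo "would run: vimtutor $lang_code"
```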
| null | null |
prove5.34
| null |
prove - Run tests through a TAP harness. USAGE prove [options] [files or directories]
| null |
Boolean options: -v, --verbose Print all test lines. -l, --lib Add 'lib' to the path for your tests (-Ilib). -b, --blib Add 'blib/lib' and 'blib/arch' to the path for your tests -s, --shuffle Run the tests in random order. -c, --color Colored test output (default). --nocolor Do not color test output. --count Show the X/Y test count when not verbose (default) --nocount Disable the X/Y test count. -D --dry Dry run. Show test that would have run. -f, --failures Show failed tests. -o, --comments Show comments. --ignore-exit Ignore exit status from test scripts. -m, --merge Merge test scripts' STDERR with their STDOUT. -r, --recurse Recursively descend into directories. --reverse Run the tests in reverse order. -q, --quiet Suppress some test output while running tests. -Q, --QUIET Only print summary results. -p, --parse Show full list of TAP parse errors, if any. --directives Only show results with TODO or SKIP directives. --timer Print elapsed time after each test. --trap Trap Ctrl-C and print summary on interrupt. --normalize Normalize TAP output in verbose output -T Enable tainting checks. -t Enable tainting warnings. -W Enable fatal warnings. -w Enable warnings. -h, --help Display this help -?, Display this help -V, --version Display the version -H, --man Longer manpage for prove --norc Don't process default .proverc Options that take arguments: -I Library paths to include. -P Load plugin (searches App::Prove::Plugin::*.) -M Load a module. -e, --exec Interpreter to run the tests ('' for compiled tests.) --ext Set the extension for tests (default '.t') --harness Define test harness to use. See TAP::Harness. --formatter Result formatter to use. See FORMATTERS. --source Load and/or configure a SourceHandler. See SOURCE HANDLERS. -a, --archive out.tgz Store the resulting TAP in an archive file. -j, --jobs N Run N test jobs in parallel (try 9.) --state=opts Control prove's persistent state. 
--statefile=file Use `file` instead of `.prove` for state --rc=rcfile Process options from rcfile --rules Rules for parallel vs sequential processing. NOTES .proverc If ~/.proverc or ./.proverc exist they will be read and any options they contain processed before the command line options. Options in .proverc are specified in the same way as command line options: # .proverc --state=hot,fast,save -j9 Additional option files may be specified with the "--rc" option. Default option file processing is disabled by the "--norc" option. Under Windows and VMS the option file is named _proverc rather than .proverc and is sought only in the current directory. Reading from "STDIN" If you have a list of tests (or URLs, or anything else you want to test) in a file, you can add them to your tests by using a '-': prove - < my_list_of_things_to_test.txt See the "README" in the "examples" directory of this distribution. Default Test Directory If no files or directories are supplied, "prove" looks for all files matching the pattern "t/*.t". Colored Test Output Colored test output using TAP::Formatter::Color is the default, but if output is not to a terminal, color is disabled. You can override this by adding the "--color" switch. Color support requires Term::ANSIColor and, on windows platforms, also Win32::Console::ANSI. If the necessary module(s) are not installed colored output will not be available. Exit Code If the tests fail "prove" will exit with non-zero status. Arguments to Tests It is possible to supply arguments to tests. To do so separate them from prove's own arguments with the arisdottle, '::'. For example prove -v t/mytest.t :: --url http://example.com would run t/mytest.t with the options '--url http://example.com'. When running multiple tests they will each receive the same arguments. "--exec" Normally you can just pass a list of Perl tests and the harness will know how to execute them. 
However, if your tests are not written in Perl or if you want all tests invoked exactly the same way, use the "-e" or "--exec" switch: prove --exec '/usr/bin/ruby -w' t/ prove --exec '/usr/bin/perl -Tw -mstrict -Ilib' t/ prove --exec '/path/to/my/customer/exec' "--merge" If you need to make sure your diagnostics are displayed in the correct order relative to test results you can use the "--merge" option to merge the test scripts' STDERR into their STDOUT. This guarantees that STDOUT (where the test results appear) and STDERR (where the diagnostics appear) will stay in sync. The harness will display any diagnostics your tests emit on STDERR. Caveat: this is a bit of a kludge. In particular note that if anything that appears on STDERR looks like a test result the test harness will get confused. Use this option only if you understand the consequences and can live with the risk. "--trap" The "--trap" option will attempt to trap SIGINT (Ctrl-C) during a test run and display the test summary even if the run is interrupted. "--state" You can ask "prove" to remember the state of previous test runs and select and/or order the tests to be run based on that saved state. The "--state" switch requires an argument which must be a comma-separated list of one or more of the following options. "last" Run the same tests as the last time the state was saved. This makes it possible, for example, to recreate the ordering of a shuffled test. # Run all tests in random order $ prove -b --state=save --shuffle # Run them again in the same order $ prove -b --state=last "failed" Run only the tests that failed on the last run. # Run all tests $ prove -b --state=save # Run failures $ prove -b --state=failed If you also specify the "save" option, newly passing tests will be excluded from subsequent runs. # Repeat until no more failures $ prove -b --state=failed,save "passed" Run only the passed tests from last time. Useful to make sure that no new problems have been introduced.
"all" Run all tests in normal order. Multple options may be specified, so to run all tests with the failures from last time first: $ prove -b --state=failed,all,save "hot" Run the tests that most recently failed first. The last failure time of each test is stored. The "hot" option causes tests to be run in most-recent- failure order. $ prove -b --state=hot,save Tests that have never failed will not be selected. To run all tests with the most recently failed first use $ prove -b --state=hot,all,save This combination of options may also be specified thus $ prove -b --state=adrian "todo" Run any tests with todos. "slow" Run the tests in slowest to fastest order. This is useful in conjunction with the "-j" parallel testing switch to ensure that your slowest tests start running first. $ prove -b --state=slow -j9 "fast" Run test tests in fastest to slowest order. "new" Run the tests in newest to oldest order based on the modification times of the test scripts. "old" Run the tests in oldest to newest order. "fresh" Run those test scripts that have been modified since the last test run. "save" Save the state on exit. The state is stored in a file called .prove (_prove on Windows and VMS) in the current directory. The "--state" switch may be used more than once. $ prove -b --state=hot --state=all,save --rules The "--rules" option is used to control which tests are run sequentially and which are run in parallel, if the "--jobs" option is specified. The option may be specified multiple times, and the order matters. The most practical use is likely to specify that some tests are not "parallel-ready". Since mentioning a file with --rules doesn't cause it to be selected to run as a test, you can "set and forget" some rules preferences in your .proverc file. Then you'll be able to take maximum advantage of the performance benefits of parallel testing, while some exceptions are still run in parallel. 
--rules examples # All tests are allowed to run in parallel, except those starting with "p" --rules='seq=t/p*.t' --rules='par=**' # All tests must run in sequence except those starting with "p", which should be run in parallel --rules='par=t/p*.t' --rules resolution • By default, all tests are eligible to be run in parallel. Specifying any of your own rules removes this one. • "First match wins". The first rule that matches a test will be the one that applies. • Any test which does not match a rule will be run in sequence at the end of the run. • The existence of a rule does not imply selecting a test. You must still specify the tests to run. • Specifying a rule to allow tests to run in parallel does not make them run in parallel. You still need to specify the number of parallel "jobs" in your Harness object. --rules Glob-style pattern matching We implement our own glob-style pattern matching for --rules. Here are the supported patterns: ** is any number of characters, including /, within a pathname * is zero or more characters within a filename/directory name ? is exactly one character within a filename/directory name {foo,bar,baz} is any of foo, bar or baz. \ is an escape character More advanced specifications for parallel vs sequence run rules If you need more advanced management of what runs in parallel vs in sequence, see the associated 'rules' documentation in TAP::Harness and TAP::Parser::Scheduler. If what's possible directly through "prove" is not sufficient, you can write your own harness to access these features directly. @INC prove introduces a separation between "options passed to the perl which runs prove" and "options passed to the perl which runs tests"; this distinction is by design. Thus the perl which is running a test starts with the default @INC. Additional library directories can be added via the "PERL5LIB" environment variable, via -Ifoo in "PERL5OPT" or via the "-Ilib" option to prove.
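The --rules glob syntax and the "first match wins" resolution described above map naturally onto regular expressions. The following Python sketch is a hypothetical re-implementation for illustration only (the real logic lives in TAP::Parser::Scheduler, written in Perl):

```python
import re

def rule_glob_to_regex(pattern):
    """Translate a --rules glob into an anchored regex.

    Supports the patterns from the prove man page: ** (any chars,
    including '/'), * and ? (within one path component), {a,b,c}
    alternation, and backslash escaping.  Illustrative sketch only.
    """
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if pattern[i:i + 2] == "**":          # any chars, including '/'
            out.append(".*")
            i += 2
            continue
        if c == "*":                           # any chars except '/'
            out.append("[^/]*")
        elif c == "?":                         # exactly one char, not '/'
            out.append("[^/]")
        elif c == "{":                         # {foo,bar,baz} -> (foo|bar|baz)
            j = pattern.index("}", i)
            alts = pattern[i + 1:j].split(",")
            out.append("(" + "|".join(map(re.escape, alts)) + ")")
            i = j
        elif c == "\\":                        # escape the next character
            i += 1
            out.append(re.escape(pattern[i]))
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile("^" + "".join(out) + "$")

def classify(test, rules):
    """First match wins; unmatched tests run sequentially at the end."""
    for mode, pattern in rules:                # e.g. ("seq", "t/p*.t")
        if rule_glob_to_regex(pattern).match(test):
            return mode
    return "seq"

rules = [("seq", "t/p*.t"), ("par", "**")]
print(classify("t/parallel_unsafe.t", rules))  # seq: caught by t/p*.t
print(classify("t/other.t", rules))            # par: falls through to **
```

Note how `*` deliberately compiles to `[^/]*`, so it cannot cross a directory boundary, while `**` can; that is the distinction the man page draws between the two.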
Taint Mode Normally when a Perl program is run in taint mode the contents of the "PERL5LIB" environment variable do not appear in @INC. Because "PERL5LIB" is often used during testing to add build directories to @INC prove passes the names of any directories found in "PERL5LIB" as -I switches. The net effect of this is that "PERL5LIB" is honoured even when prove is run in taint mode. FORMATTERS You can load a custom TAP::Parser::Formatter: prove --formatter MyFormatter SOURCE HANDLERS You can load custom TAP::Parser::SourceHandlers, to change the way the parser interprets particular sources of TAP. prove --source MyHandler --source YetAnother t If you want to provide config to the source you can use: prove --source MyCustom \ --source Perl --perl-option 'foo=bar baz' --perl-option avg=0.278 \ --source File --file-option extensions=.txt --file-option extensions=.tmp t --source pgTAP --pgtap-option pset=format=html --pgtap-option pset=border=2 Each "--$source-option" option must specify a key/value pair separated by an "=". If an option can take multiple values, just specify it multiple times, as with the "extensions=" examples above. If the option should be a hash reference, specify the value as a second pair separated by a "=", as in the "pset=" examples above (escape "=" with a backslash). All "--sources" are combined into a hash, and passed to "new" in TAP::Harness's "sources" parameter. See TAP::Parser::IteratorFactory for more details on how configuration is passed to SourceHandlers. PLUGINS Plugins can be loaded using the "-Pplugin" syntax, eg: prove -PMyPlugin This will search for a module named "App::Prove::Plugin::MyPlugin", or failing that, "MyPlugin". If the plugin can't be found, "prove" will complain & exit. You can pass arguments to your plugin by appending "=arg1,arg2,etc" to the plugin name: prove -PMyPlugin=fou,du,fafa Please check individual plugin documentation for more details. 
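The -P plugin loading rules above (search App::Prove::Plugin::<name> first, fall back to the bare name, and split any "=arg1,arg2" suffix into arguments) can be sketched in a few lines. This is a hypothetical illustration in Python, not App::Prove's actual code, and `parse_plugin_spec` is an invented helper name:

```python
def parse_plugin_spec(spec):
    """Split a -P argument like 'MyPlugin=fou,du,fafa' into the module
    search candidates and the argument list, per the prove man page.
    Hypothetical sketch; the real loader is Perl code in App::Prove."""
    name, _, raw_args = spec.partition("=")
    args = raw_args.split(",") if raw_args else []
    # prove tries App::Prove::Plugin::<name> first, then <name> itself
    candidates = ["App::Prove::Plugin::" + name, name]
    return candidates, args

candidates, args = parse_plugin_spec("MyPlugin=fou,du,fafa")
print(candidates)  # ['App::Prove::Plugin::MyPlugin', 'MyPlugin']
print(args)        # ['fou', 'du', 'fafa']
```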
Available Plugins For an up-to-date list of plugins available, please check CPAN: <http://search.cpan.org/search?query=App%3A%3AProve+Plugin> Writing Plugins Please see "PLUGINS" in App::Prove. perl v5.34.1 2024-04-13 PROVE(1)
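The --source configuration rules described under SOURCE HANDLERS above (each "--$source-option" is a key=value pair, repeated keys accumulate multiple values, and a second "=" turns the value into a hash entry) can be sketched as follows. This is an illustrative Python model of the described behavior, not the actual TAP::Harness code:

```python
def add_source_option(config, raw):
    """Fold one --<source>-option value such as 'extensions=.txt' or
    'pset=format=html' into a config dict: repeated keys accumulate
    into a list, and a second '=' creates a nested hash entry.
    Hypothetical sketch of the rules in the prove man page."""
    key, _, value = raw.partition("=")
    if "=" in value:                            # second pair -> nested hash
        subkey, _, subval = value.partition("=")
        config.setdefault(key, {})[subkey] = subval
    elif key in config:                         # repeated option -> list
        if not isinstance(config[key], list):
            config[key] = [config[key]]
        config[key].append(value)
    else:
        config[key] = value
    return config

cfg = {}
for opt in ["extensions=.txt", "extensions=.tmp",
            "pset=format=html", "pset=border=2"]:
    add_source_option(cfg, opt)
print(cfg)  # {'extensions': ['.txt', '.tmp'], 'pset': {'format': 'html', 'border': '2'}}
```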
| null |
spfquery5.34
|
spfquery checks if a given set of e-mail parameters (e.g., the SMTP sender's IP address) matches the responsible domain's Sender Policy Framework (SPF) policy. For more information on SPF see <http://www.openspf.org>. Preferred Usage The following usage forms are preferred over the legacy forms used by older spfquery versions: The --identity form checks if the given ip-address is an authorized SMTP sender for the given "helo" hostname, "mfrom" envelope sender e-mail address, or "pra" (so-called purported responsible address) e-mail address, depending on the value of the --scope option (which defaults to mfrom if omitted). The --file form reads "ip-address identity [helo-identity]" tuples from the file with the specified filename, or from standard input if filename is -, and checks them against the specified scope (mfrom by default). Both forms support an optional --versions option, which specifies a comma-separated list of the SPF version numbers of SPF records that may be used. 1 means that "v=spf1" records should be used. 2 means that "spf2.0" records should be used. Defaults to 1,2, i.e., uses any SPF records that are available. Records of a higher version are preferred. Legacy Usage spfquery versions before 2.500 featured the following usage forms, which are discouraged but still supported for backwards compatibility: The --helo form checks if the given ip-address is an authorized SMTP sender for the "HELO" hostname given as the identity (so-called "HELO" check). The --mfrom form checks if the given ip-address is an authorized SMTP sender for the envelope sender email-address (or domain) given as the identity (so-called "MAIL FROM" check). If a domain is given instead of an e-mail address, "postmaster" will be substituted for the localpart. The --pra form checks if the given ip-address is an authorized SMTP sender for the PRA (Purported Responsible Address) e-mail address given as the identity.
Other Usage The --version form prints version information of spfquery. The --help form prints usage information for spfquery.
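The --file input format described above ("ip-address identity [helo-identity]", one tuple per line) is simple to parse. A Python sketch for illustration (not Mail::SPF code; the assumption that comments start with '#' is mine, made to show how a --keep-comments style option might carry them through):

```python
def parse_spfquery_line(line):
    """Parse one 'ip-address identity [helo-identity]' line, as read
    by spfquery --file.  Returns None for blank lines; '#' comment
    lines (an assumed syntax) are passed through tagged as comments."""
    stripped = line.strip()
    if not stripped:
        return None
    if stripped.startswith("#"):
        return ("comment", stripped)
    fields = stripped.split()
    ip, identity = fields[0], fields[1]
    helo = fields[2] if len(fields) > 2 else None   # optional third field
    return ("check", ip, identity, helo)

print(parse_spfquery_line("127.0.0.1 user@example.com helohost.example.com"))
# ('check', '127.0.0.1', 'user@example.com', 'helohost.example.com')
```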
|
spfquery - (Mail::SPF) - Checks if a given set of e-mail parameters matches a domain's SPF policy VERSION 2.501
|
Preferred usage:

 spfquery [--versions|-v 1|2|1,2] [--scope|-s helo|mfrom|pra]
     --identity|--id identity --ip-address|--ip ip-address
     [--helo-identity|--helo-id helo-identity] [OPTIONS]

 spfquery [--versions|-v 1|2|1,2] [--scope|-s helo|mfrom|pra]
     --file|-f filename|- [OPTIONS]

Legacy usage:

 spfquery --helo helo-identity --ip-address|--ip ip-address [OPTIONS]

 spfquery --mfrom mfrom-identity --ip-address|--ip ip-address
     [--helo helo-identity] [OPTIONS]

 spfquery --pra pra-identity --ip-address|--ip ip-address [OPTIONS]

Other usage:

 spfquery --version|-V
 spfquery --help
|
Standard Options The preferred and legacy forms optionally take any of the following OPTIONS: --default-explanation string --def-exp string Use the specified string as the default explanation if the authority domain does not specify an explanation string of its own. --hostname hostname Use hostname as the host name of the local system instead of auto- detecting it. --keep-comments --no-keep-comments Do (not) print any comments found when reading from a file or from standard input. --sanitize (currently ignored) --no-sanitize (currently ignored) Do (not) sanitize the output by condensing consecutive white-space into a single space and replacing non-printable characters with question marks. Enabled by default. --debug (currently ignored) Print out debug information. Black Magic Options Several options that were supported by earlier versions of spfquery are considered black magic (i.e. potentially dangerous for the innocent user) and are thus disabled by default. If the Mail::SPF::BlackMagic Perl module is installed, they may be enabled by specifying --enable-black-magic. --max-dns-interactive-terms n Evaluate a maximum of n DNS-interactive mechanisms and modifiers per SPF check. Defaults to 10. Do not override the default unless you know what you are doing! --max-name-lookups-per-term n Perform a maximum of n DNS name look-ups per mechanism or modifier. Defaults to 10. Do not override the default unless you know what you are doing! --authorize-mxes-for email-address|domain,... Consider all the MXes of the comma-separated list of email- addresses and domains as inherently authorized. --tfwl Perform "trusted-forwarder.org" accreditation checking. --guess spf-terms Use spf-terms as a default record if no SPF record is found. --local spf-terms Process spf-terms as local policy before resorting to a default result (the implicit or explicit "all" mechanism at the end of the domain's SPF record). 
For example, this could be used for whitelisting one's secondary MXes: "mx:mydomain.example.org".

--override domain=spf-record
--fallback domain=spf-record
 Set overrides and fallbacks. Each option can be specified multiple times. For example:

 --override example.org='v=spf1 -all'
 --override '*.example.net'='v=spf1 a mx -all'
 --fallback example.com='v=spf1 -all'

RESULT CODES

pass The specified IP address is an authorized SMTP sender for the identity.

fail The specified IP address is not an authorized SMTP sender for the identity.

softfail The specified IP address is not an authorized SMTP sender for the identity, however the authority domain is still testing out its SPF policy.

neutral The identity's authority domain makes no assertion about the status of the IP address.

permerror A permanent error occurred while evaluating the authority domain's policy (e.g., a syntax error in the SPF record). Manual intervention is required from the authority domain.

temperror A temporary error occurred while evaluating the authority domain's policy (e.g., a DNS error). Try again later.

none There is no applicable SPF policy for the identity domain.

EXIT CODES

 Result | Exit code
 -----------+-----------
 pass | 0
 fail | 1
 softfail | 2
 neutral | 3
 permerror | 4
 temperror | 5
 none | 6
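The result-to-exit-code table above is a fixed mapping, which makes it easy to mirror in a wrapper around spfquery. A small Python sketch; the `is_rejectable` policy shown is an example of mine, not something spfquery prescribes:

```python
# Exit codes copied from the spfquery man page's EXIT CODES table.
EXIT_CODES = {
    "pass": 0, "fail": 1, "softfail": 2, "neutral": 3,
    "permerror": 4, "temperror": 5, "none": 6,
}

def is_rejectable(result):
    """Example policy: treat only a hard 'fail' as grounds for
    rejecting mail; softfail and neutral are advisory under SPF."""
    return EXIT_CODES.get(result) == 1

print(EXIT_CODES["softfail"])    # 2
print(is_rejectable("fail"))     # True
print(is_rejectable("neutral"))  # False
```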
|
spfquery --scope mfrom --id user@example.com --ip 1.2.3.4
spfquery --file test_data
echo "127.0.0.1 user@example.com helohost.example.com" | spfquery -f -

COMPATIBILITY

spfquery has undergone the following interface changes compared to earlier versions:

2.500

• A new preferred usage style for performing individual SPF checks has been introduced. The new style accepts a unified --identity option and an optional --scope option that specifies the type (scope) of the identity. In contrast, the legacy usage style requires a separate usage form for every supported scope. See "Preferred usage" and "Legacy usage" for details.

• The former "unknown" and "error" result codes have been renamed to "permerror" and "temperror", respectively, in order to comply with RFC 4408 terminology.

• SPF checks with an empty identity are no longer supported. In the case of an empty "MAIL FROM" SMTP transaction parameter, perform a check with the "helo" scope directly.

• The --debug and --(no-)sanitize options are currently ignored by this version of spfquery. They will again be supported in the future.

• Several features that were supported by earlier versions of spfquery are considered black magic and thus are now disabled by default. See "Black Magic Options".

• Several option names have been deprecated. This is a list of them and their preferred synonyms:

 Deprecated options | Preferred options
 ---------------------+-----------------------------
 --sender, -s | --mfrom
 --ipv4, -i | --ip-address, --ip
 --name | --hostname
 --max-lookup-count, | --max-dns-interactive-terms
 --max-lookup |
 --rcpt-to, -r | --authorize-mxes-for
 --trusted | --tfwl

SEE ALSO
Mail::SPF, spfd(8)
<http://tools.ietf.org/html/rfc4408>

AUTHORS
This version of spfquery is a complete rewrite by Julian Mehnle <julian@mehnle.net>, based on an earlier version written by Meng Weng Wong <mengwong+spf@pobox.com> and Wayne Schlitt <wayne@schlitt.net>.

perl v5.34.0 2024-04-13 SPFQUERY(1)
|
perlivp5.34
|
The perlivp program is set up at Perl source code build time to test the Perl version it was built under. It can be used after running: make install (or your platform's equivalent procedure) to verify that perl and its libraries have been installed correctly. A correct installation is verified by output that looks like: ok 1 ok 2 etc.
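The "ok 1 / ok 2" output described above follows the familiar test-protocol convention of numbered ok lines. A minimal Python sketch of what "verifying" such output means, for illustration only (perlivp itself is a Perl program and works differently internally):

```python
import re

def verify_ok_output(lines):
    """Return True if every line is 'ok N' with N counting 1, 2, 3, ...
    A 'not ok' line or an out-of-sequence number indicates a failed
    installation check.  Illustrative sketch, not perlivp's code."""
    expected = 1
    for line in lines:
        m = re.fullmatch(r"ok (\d+)", line.strip())
        if m is None or int(m.group(1)) != expected:
            return False
        expected += 1
    return True

print(verify_ok_output(["ok 1", "ok 2"]))      # True
print(verify_ok_output(["ok 1", "not ok 2"]))  # False
```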
|
perlivp - Perl Installation Verification Procedure
|
perlivp [-p] [-v] [-h]
|
-h help
 Prints out a brief help message.

-p print preface
 Gives a description of each test prior to performing it.

-v verbose
 Gives more detailed information about each test, after it has been performed. Note that any failed tests ought to print out some extra information whether or not -v is thrown.

DIAGNOSTICS

• print "# Perl binary '$perlpath' does not appear executable.\n";
 Likely to occur for a perl binary that was not properly installed. Correct by conducting a proper installation.

• print "# Perl version '$]' installed, expected $ivp_VERSION.\n";
 Likely to occur for a perl that was not properly installed. Correct by conducting a proper installation.

• print "# Perl \@INC directory '$_' does not appear to exist.\n";
 Likely to occur for a perl library tree that was not properly installed. Correct by conducting a proper installation.

• print "# Needed module '$_' does not appear to be properly installed.\n";
 One of the two modules that is used by perlivp was not present in the installation. This is a serious error since it adversely affects perlivp's ability to function. You may be able to correct this by performing a proper perl installation.

• print "# Required module '$_' does not appear to be properly installed.\n";
 An attempt to "eval "require $module"" failed, even though the list of extensions indicated that it should succeed. Correct by conducting a proper installation.

• print "# Unnecessary module 'bLuRfle' appears to be installed.\n";
 This test not coming out ok could indicate that you have in fact installed a bLuRfle.pm module or that the "eval " require \"$module_name.pm\"; "" test may give misleading results with your installation of perl. If yours is the latter case then please let the author know.

• print "# file",+($#missing == 0) ? '' : 's'," missing from installation:\n";
 One or more files turned up missing according to a run of "ExtUtils::Installed -> validate()" over your installation. Correct by conducting a proper installation.
For further information on how to conduct a proper installation consult the INSTALL file that comes with the perl source and the README file for your platform. AUTHOR Peter Prymmer perl v5.34.1 2024-04-13 PERLIVP(1)
| null |
perldoc5.30
|
perldoc looks up documentation in .pod format that is embedded in the perl installation tree or in a perl script, and displays it using a variety of formatters. This is primarily used for the documentation for the perl library modules. Your system may also have man pages installed for those modules, in which case you can probably just use the man(1) command. If you are looking for a table of contents to the Perl library modules documentation, see the perltoc page.
|
perldoc - Look up Perl documentation in Pod format.
|
perldoc [-h] [-D] [-t] [-u] [-m] [-l] [-U] [-F] [-i] [-V] [-T] [-r] [-d destination_file] [-o formatname] [-M FormatterClassName] [-w formatteroption:value] [-n nroff-replacement] [-X] [-L language_code] PageName|ModuleName|ProgramName|URL Examples: perldoc -f BuiltinFunction perldoc -L it -f BuiltinFunction perldoc -q FAQ Keyword perldoc -L fr -q FAQ Keyword perldoc -v PerlVariable perldoc -a PerlAPI See below for more description of the switches.
|
-h Prints out a brief help message. -D Describes search for the item in detail. -t Display docs using plain text converter, instead of nroff. This may be faster, but it probably won't look as nice. -u Skip the real Pod formatting, and just show the raw Pod source (Unformatted) -m module Display the entire module: both code and unformatted pod documentation. This may be useful if the docs don't explain a function in the detail you need, and you'd like to inspect the code directly; perldoc will find the file for you and simply hand it off for display. -l Display only the file name of the module found. -U When running as the superuser, don't attempt drop privileges for security. This option is implied with -F. NOTE: Please see the heading SECURITY below for more information. -F Consider arguments as file names; no search in directories will be performed. Implies -U if run as the superuser. -f perlfunc The -f option followed by the name of a perl built-in function will extract the documentation of this function from perlfunc. Example: perldoc -f sprintf -q perlfaq-search-regexp The -q option takes a regular expression as an argument. It will search the question headings in perlfaq[1-9] and print the entries matching the regular expression. Example: perldoc -q shuffle -a perlapifunc The -a option followed by the name of a perl api function will extract the documentation of this function from perlapi. Example: perldoc -a newHV -v perlvar The -v option followed by the name of a Perl predefined variable will extract the documentation of this variable from perlvar. Examples: perldoc -v '$"' perldoc -v @+ perldoc -v DATA -T This specifies that the output is not to be sent to a pager, but is to be sent directly to STDOUT. -d destination-filename This specifies that the output is to be sent neither to a pager nor to STDOUT, but is to be saved to the specified filename. 
Example: "perldoc -oLaTeX -dtextwrapdocs.tex Text::Wrap" -o output-formatname This specifies that you want Perldoc to try using a Pod-formatting class for the output format that you specify. For example: "-oman". This is actually just a wrapper around the "-M" switch; using "-oformatname" just looks for a loadable class by adding that format name (with different capitalizations) to the end of different classname prefixes. For example, "-oLaTeX" currently tries all of the following classes: Pod::Perldoc::ToLaTeX Pod::Perldoc::Tolatex Pod::Perldoc::ToLatex Pod::Perldoc::ToLATEX Pod::Simple::LaTeX Pod::Simple::latex Pod::Simple::Latex Pod::Simple::LATEX Pod::LaTeX Pod::latex Pod::Latex Pod::LATEX. -M module-name This specifies the module that you want to try using for formatting the pod. The class must at least provide a "parse_from_file" method. For example: "perldoc -MPod::Perldoc::ToChecker". You can specify several classes to try by joining them with commas or semicolons, as in "-MTk::SuperPod;Tk::Pod". -w option:value or -w option This specifies an option to call the formatter with. For example, "-w textsize:15" will call "$formatter->textsize(15)" on the formatter object before it is used to format the object. For this to be valid, the formatter class must provide such a method, and the value you pass should be valid. (So if "textsize" expects an integer, and you do "-w textsize:big", expect trouble.) You can use "-w optionname" (without a value) as shorthand for "-w optionname:TRUE". This is presumably useful in cases of on/off features like: "-w page_numbering". You can use an "=" instead of the ":", as in: "-w textsize=15". This might be more (or less) convenient, depending on what shell you use. -X Use an index if it is present. The -X option looks for an entry whose basename matches the name given on the command line in the file "$Config{archlib}/pod.idx". The pod.idx file should contain fully qualified filenames, one per line. 
-L language_code This allows one to specify the language code for the desired language translation. If the "POD2::<language_code>" package isn't installed in your system, the switch is ignored. All available translation packages are to be found under the "POD2::" namespace. See POD2::IT (or POD2::FR) to see how to create new localized "POD2::*" documentation packages and integrate them into Pod::Perldoc. PageName|ModuleName|ProgramName|URL The item you want to look up. Nested modules (such as "File::Basename") are specified either as "File::Basename" or "File/Basename". You may also give a descriptive name of a page, such as "perlfunc". For URLs, HTTP and HTTPS are the only kind currently supported. For simple names like 'foo', when the normal search fails to find a matching page, a search with the "perl" prefix is tried as well. So "perldoc intro" is enough to find/render "perlintro.pod". -n some-formatter Specify replacement for groff -r Recursive search. -i Ignore case. -V Displays the version of perldoc you're running. SECURITY Because perldoc does not run properly tainted, and is known to have security issues, when run as the superuser it will attempt to drop privileges by setting the effective and real IDs to nobody's or nouser's account, or -2 if unavailable. If it cannot relinquish its privileges, it will not run. See the "-U" option if you do not want this behavior but beware that there are significant security risks if you choose to use "-U". Since 3.26, using "-F" as the superuser also implies "-U" as opening most files and traversing directories requires privileges that are above the nobody/nogroup level. ENVIRONMENT Any switches in the "PERLDOC" environment variable will be used before the command line arguments. Useful values for "PERLDOC" include "-oterm", "-otext", "-ortf", "-oxml", and so on, depending on what modules you have on hand; or the formatter class may be specified exactly with "-MPod::Perldoc::ToTerm" or the like. 
"perldoc" also searches directories specified by the "PERL5LIB" (or "PERLLIB" if "PERL5LIB" is not defined) and "PATH" environment variables. (The latter is so that embedded pods for executables, such as "perldoc" itself, are available.) In directories where either "Makefile.PL" or "Build.PL" exist, "perldoc" will add "." and "lib" first to its search path, and as long as you're not the superuser will add "blib" too. This is really helpful if you're working inside of a build directory and want to read through the docs even if you have a version of a module previously installed. "perldoc" will use, in order of preference, the pager defined in "PERLDOC_PAGER", "MANPAGER", or "PAGER" before trying to find a pager on its own. ("MANPAGER" is not used if "perldoc" was told to display plain text or unformatted pod.) When using perldoc in it's "-m" mode (display module source code), "perldoc" will attempt to use the pager set in "PERLDOC_SRC_PAGER". A useful setting for this command is your favorite editor as in "/usr/bin/nano". (Don't judge me.) One useful value for "PERLDOC_PAGER" is "less -+C -E". Having PERLDOCDEBUG set to a positive integer will make perldoc emit even more descriptive output than the "-D" switch does; the higher the number, the more it emits. CHANGES Up to 3.14_05, the switch -v was used to produce verbose messages of perldoc operation, which is now enabled by -D. SEE ALSO perlpod, Pod::Perldoc AUTHOR Current maintainer: Mark Allen "<mallen@cpan.org>" Past contributors are: brian d foy "<bdfoy@cpan.org>" Adriano R. Ferreira "<ferreira@cpan.org>", Sean M. Burke "<sburke@cpan.org>", Kenneth Albanowski "<kjahds@kjahds.com>", Andy Dougherty "<doughera@lafcol.lafayette.edu>", and many others. perl v5.30.3 2019-10-21 PERLDOC(1)
| null |
xgettext5.30.pl
| null | null | null | null | null |
iperf3-darwin
|
iperf3-darwin is a tool for performing network throughput measurements, based on the ESnet iperf3 project and modified to use functionality specific to Darwin. For a full list of options see: iperf3-darwin -h AUTHORS A list of the contributors to iperf3 can be found within the documentation located at https://software.es.net/iperf/dev.html#authors. SEE ALSO https://software.es.net/iperf Darwin July 22, 2022 Darwin
|
iperf3-darwin - perform network throughput tests
|
iperf3-darwin -s [options] iperf3-darwin -c server [options] iperf3-darwin -h iperf3-darwin -v
| null | null |
prove
| null |
prove - Run tests through a TAP harness. USAGE prove [options] [files or directories]
| null |
Boolean options: -v, --verbose Print all test lines. -l, --lib Add 'lib' to the path for your tests (-Ilib). -b, --blib Add 'blib/lib' and 'blib/arch' to the path for your tests -s, --shuffle Run the tests in random order. -c, --color Colored test output (default). --nocolor Do not color test output. --count Show the X/Y test count when not verbose (default) --nocount Disable the X/Y test count. -D --dry Dry run. Show test that would have run. -f, --failures Show failed tests. -o, --comments Show comments. --ignore-exit Ignore exit status from test scripts. -m, --merge Merge test scripts' STDERR with their STDOUT. -r, --recurse Recursively descend into directories. --reverse Run the tests in reverse order. -q, --quiet Suppress some test output while running tests. -Q, --QUIET Only print summary results. -p, --parse Show full list of TAP parse errors, if any. --directives Only show results with TODO or SKIP directives. --timer Print elapsed time after each test. --trap Trap Ctrl-C and print summary on interrupt. --normalize Normalize TAP output in verbose output -T Enable tainting checks. -t Enable tainting warnings. -W Enable fatal warnings. -w Enable warnings. -h, --help Display this help -?, Display this help -V, --version Display the version -H, --man Longer manpage for prove --norc Don't process default .proverc Options that take arguments: -I Library paths to include. -P Load plugin (searches App::Prove::Plugin::*.) -M Load a module. -e, --exec Interpreter to run the tests ('' for compiled tests.) --ext Set the extension for tests (default '.t') --harness Define test harness to use. See TAP::Harness. --formatter Result formatter to use. See FORMATTERS. --source Load and/or configure a SourceHandler. See SOURCE HANDLERS. -a, --archive out.tgz Store the resulting TAP in an archive file. -j, --jobs N Run N test jobs in parallel (try 9.) --state=opts Control prove's persistent state. 
--statefile=file Use `file` instead of `.prove` for state --rc=rcfile Process options from rcfile --rules Rules for parallel vs sequential processing. NOTES .proverc If ~/.proverc or ./.proverc exist they will be read and any options they contain processed before the command line options. Options in .proverc are specified in the same way as command line options: # .proverc --state=hot,fast,save -j9 Additional option files may be specified with the "--rc" option. Default option file processing is disabled by the "--norc" option. Under Windows and VMS the option file is named _proverc rather than .proverc and is sought only in the current directory. Reading from "STDIN" If you have a list of tests (or URLs, or anything else you want to test) in a file, you can add them to your tests by using a '-': prove - < my_list_of_things_to_test.txt See the "README" in the "examples" directory of this distribution. Default Test Directory If no files or directories are supplied, "prove" looks for all files matching the pattern "t/*.t". Colored Test Output Colored test output using TAP::Formatter::Color is the default, but if output is not to a terminal, color is disabled. You can override this by adding the "--color" switch. Color support requires Term::ANSIColor and, on windows platforms, also Win32::Console::ANSI. If the necessary module(s) are not installed colored output will not be available. Exit Code If the tests fail "prove" will exit with non-zero status. Arguments to Tests It is possible to supply arguments to tests. To do so separate them from prove's own arguments with the arisdottle, '::'. For example prove -v t/mytest.t :: --url http://example.com would run t/mytest.t with the options '--url http://example.com'. When running multiple tests they will each receive the same arguments. "--exec" Normally you can just pass a list of Perl tests and the harness will know how to execute them. 
However, if your tests are not written in Perl or if you want all tests invoked exactly the same way, use the "-e", or "--exec" switch: prove --exec '/usr/bin/ruby -w' t/ prove --exec '/usr/bin/perl -Tw -mstrict -Ilib' t/ prove --exec '/path/to/my/customer/exec' "--merge" If you need to make sure your diagnostics are displayed in the correct order relative to test results you can use the "--merge" option to merge the test scripts' STDERR into their STDOUT. This guarantees that STDOUT (where the test results appear) and STDERR (where the diagnostics appear) will stay in sync. The harness will display any diagnostics your tests emit on STDERR. Caveat: this is a bit of a kludge. In particular note that if anything that appears on STDERR looks like a test result the test harness will get confused. Use this option only if you understand the consequences and can live with the risk. "--trap" The "--trap" option will attempt to trap SIGINT (Ctrl-C) during a test run and display the test summary even if the run is interrupted "--state" You can ask "prove" to remember the state of previous test runs and select and/or order the tests to be run based on that saved state. The "--state" switch requires an argument which must be a comma separated list of one or more of the following options. "last" Run the same tests as the last time the state was saved. This makes it possible, for example, to recreate the ordering of a shuffled test. # Run all tests in random order $ prove -b --state=save --shuffle # Run them again in the same order $ prove -b --state=last "failed" Run only the tests that failed on the last run. # Run all tests $ prove -b --state=save # Run failures $ prove -b --state=failed If you also specify the "save" option newly passing tests will be excluded from subsequent runs. # Repeat until no more failures $ prove -b --state=failed,save "passed" Run only the passed tests from last time. Useful to make sure that no new problems have been introduced. 
"all" Run all tests in normal order. Multiple options may be specified, so to run all tests with the failures from last time first: $ prove -b --state=failed,all,save "hot" Run the tests that most recently failed first. The last failure time of each test is stored. The "hot" option causes tests to be run in most-recent- failure order. $ prove -b --state=hot,save Tests that have never failed will not be selected. To run all tests with the most recently failed first use $ prove -b --state=hot,all,save This combination of options may also be specified thus $ prove -b --state=adrian "todo" Run any tests with todos. "slow" Run the tests in slowest to fastest order. This is useful in conjunction with the "-j" parallel testing switch to ensure that your slowest tests start running first. $ prove -b --state=slow -j9 "fast" Run test tests in fastest to slowest order. "new" Run the tests in newest to oldest order based on the modification times of the test scripts. "old" Run the tests in oldest to newest order. "fresh" Run those test scripts that have been modified since the last test run. "save" Save the state on exit. The state is stored in a file called .prove (_prove on Windows and VMS) in the current directory. The "--state" switch may be used more than once. $ prove -b --state=hot --state=all,save --rules The "--rules" option is used to control which tests are run sequentially and which are run in parallel, if the "--jobs" option is specified. The option may be specified multiple times, and the order matters. The most practical use is likely to specify that some tests are not "parallel-ready". Since mentioning a file with --rules doesn't cause it to be selected to run as a test, you can "set and forget" some rules preferences in your .proverc file. Then you'll be able to take maximum advantage of the performance benefits of parallel testing, while some exceptions are still run in parallel. 
--rules examples # All tests are allowed to run in parallel, except those starting with "p" --rules='seq=t/p*.t' --rules='par=**' # All tests must run in sequence except those starting with "p", which should be run in parallel --rules='par=t/p*.t' --rules resolution β’ By default, all tests are eligible to be run in parallel. Specifying any of your own rules removes this one. β’ "First match wins". The first rule that matches a test will be the one that applies. β’ Any test which does not match a rule will be run in sequence at the end of the run. β’ The existence of a rule does not imply selecting a test. You must still specify the tests to run. β’ Specifying a rule to allow tests to run in parallel does not make them run in parallel. You still need to specify the number of parallel "jobs" in your Harness object. --rules Glob-style pattern matching We implement our own glob-style pattern matching for --rules. Here are the supported patterns: ** is any number of characters, including /, within a pathname * is zero or more characters within a filename/directory name ? is exactly one character within a filename/directory name {foo,bar,baz} is any of foo, bar or baz. \ is an escape character More advanced specifications for parallel vs sequence run rules If you need more advanced management of what runs in parallel vs in sequence, see the associated 'rules' documentation in TAP::Harness and TAP::Parser::Scheduler. If what's possible directly through "prove" is not sufficient, you can write your own harness to access these features directly. @INC prove introduces a separation between "options passed to the perl which runs prove" and "options passed to the perl which runs tests"; this distinction is by design. Thus the perl which is running a test starts with the default @INC. Additional library directories can be added via the "PERL5LIB" environment variable, via -Ifoo in "PERL5OPT" or via the "-Ilib" option to prove. 
Taint Mode Normally when a Perl program is run in taint mode the contents of the "PERL5LIB" environment variable do not appear in @INC. Because "PERL5LIB" is often used during testing to add build directories to @INC prove passes the names of any directories found in "PERL5LIB" as -I switches. The net effect of this is that "PERL5LIB" is honoured even when prove is run in taint mode. FORMATTERS You can load a custom TAP::Parser::Formatter: prove --formatter MyFormatter SOURCE HANDLERS You can load custom TAP::Parser::SourceHandlers, to change the way the parser interprets particular sources of TAP. prove --source MyHandler --source YetAnother t If you want to provide config to the source you can use: prove --source MyCustom \ --source Perl --perl-option 'foo=bar baz' --perl-option avg=0.278 \ --source File --file-option extensions=.txt --file-option extensions=.tmp t --source pgTAP --pgtap-option pset=format=html --pgtap-option pset=border=2 Each "--$source-option" option must specify a key/value pair separated by an "=". If an option can take multiple values, just specify it multiple times, as with the "extensions=" examples above. If the option should be a hash reference, specify the value as a second pair separated by a "=", as in the "pset=" examples above (escape "=" with a backslash). All "--sources" are combined into a hash, and passed to "new" in TAP::Harness's "sources" parameter. See TAP::Parser::IteratorFactory for more details on how configuration is passed to SourceHandlers. PLUGINS Plugins can be loaded using the "-Pplugin" syntax, eg: prove -PMyPlugin This will search for a module named "App::Prove::Plugin::MyPlugin", or failing that, "MyPlugin". If the plugin can't be found, "prove" will complain & exit. You can pass arguments to your plugin by appending "=arg1,arg2,etc" to the plugin name: prove -PMyPlugin=fou,du,fafa Please check individual plugin documentation for more details. 
Available Plugins For an up-to-date list of plugins available, please check CPAN: <http://search.cpan.org/search?query=App%3A%3AProve+Plugin> Writing Plugins Please see "PLUGINS" in App::Prove. perl v5.38.2 2023-11-28 PROVE(1)
| null |
newproc.d
|
newproc.d is a DTrace one-liner to snoop new processes as they are run. The argument listing is printed. This is useful to identify short-lived processes that are usually difficult to spot using traditional tools. Docs/oneliners.txt and Docs/Examples/oneliners_examples.txt in the DTraceToolkit contain this as a one-liner that can be cut and pasted to run. Since this uses DTrace, only users with root privileges can run this command.
|
newproc.d - snoop new processes. Uses DTrace.
|
newproc.d
| null |
This prints new processes until Ctrl-C is hit. # newproc.d FIELDS CPU The CPU that received the event ID A DTrace probe ID for the event FUNCTION:NAME The DTrace probe name for the event remaining fields These contain the argument listing for the new process DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT newproc.d will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO execsnoop(1M), dtrace(1M), truss(1) version 1.00 May 15, 2005 newproc.d(1m)
|
gencat
|
The gencat utility merges the text NLS input files input-files... into a formatted message catalog file output-file. The file output-file will be created if it does not already exist. If output-file does exist, its messages will be included in the new output-file. If set and message numbers collide, the new message text defined in input-files... will replace the old message text currently contained in output-file. INPUT FILES The format of a message text source file is defined below. Note that the fields of a message text source line are separated by a single space character: any other space characters are considered to be part of the field contents. $set n comment This line specifies the set identifier of the following messages until the next $set or end-of-file appears. The argument n is the set identifier which is defined as a number in the range [1, {NL_SETMAX}]. Set identifiers must occur in ascending order within a single source file, but need not be contiguous. Any string following a space following the set identifier is treated as a comment. If no $set directive is specified in a given source file, all messages will be located in the default message set NL_SETD. $del n comment This line deletes messages from set n from a message catalog. The n specifies a set number. Any string following a space following the set number is treated as a comment. $ comment A line beginning with $ followed by a space is treated as a comment. m message-text A message line consists of a message identifier m in the range [1, {NL_MSGMAX}]. The message-text is stored in the message catalog with the set identifier specified by the last $set directive, and the message identifier m. If the message-text is empty, and there is a space character following the message identifier, an empty string is stored in the message catalog. 
If the message-text is empty, and if there is no space character following the message identifier, then the existing message in the current set with the specified message identifier is deleted from the catalog. Message identifiers must be in ascending order within a single set, but need not be contiguous. The message-text length must be in the range [0, {NL_TEXTMAX}]. $quote c This line specifies an optional quote character c which can be used to surround message-text so that trailing space or empty messages are visible in message source files. By default, or if an empty $quote directive is specified, no quoting of message-text will be recognized. Empty lines in message source files are ignored. The effect of lines beginning with any character other than those described above is undefined. Text strings can contain the following special characters and escape sequences. In addition, if a quote character is defined, it may be escaped as well to embed a literal quote character. \n line feed \t horizontal tab \v vertical tab \b backspace \r carriage return \f form feed \\ backslash \ooo octal number in the range [000, 377] A backslash character immediately before the end of the line in a file is used to continue the line onto the next line, e.g.: 1 This line is continued \ on this line. If the character following the backslash is not one of those specified, the backslash is ignored. DIAGNOSTICS The gencat utility exits 0 on success, and >0 if an error occurs. SEE ALSO catclose(3), catgets(3), catopen(3) STANDARDS The gencat utility is compliant with the X/Open Portability Guide Issue 4 (βXPG4β) standard. AUTHORS This manual page was originally written by Ken Stailey and later revised by Terry Lambert. BUGS A message catalog file created from a blank input file cannot be revised; it must be deleted and recreated. macOS 14.5 June 11, 1997 macOS 14.5
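A minimal end-to-end sketch of the source format above (greet.msg and greet.cat are invented names; this assumes a system gencat such as the one shipped with glibc or macOS):

```shell
# Compile a small catalog: one set, two messages, the second
# continued across source lines with a trailing backslash.
cat > greet.msg <<'EOF'
$set 1 greeting messages
1 Hello, world
2 Goodbye \
for now
EOF
gencat greet.cat greet.msg
ls -l greet.cat
```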
|
gencat β NLS catalog compiler
|
gencat output-file input-files...
| null | null |
mdutil
|
The mdutil command is useful for managing the metadata stores for mounted volumes. The following options are available: -i on | off Sets the indexing status for the provided volumes to on or off. Note that indexing may be delayed due to low disk space or other conditions. -d Disables Spotlight searches on the provided volume. -E This flag will cause each local store for the volumes indicated to be erased. The stores will be rebuilt if appropriate. -s Display the indexing status of the listed volumes. -a Apply command to all stores on all volumes. -t Resolve files from file id with an optional volume path or device id. -p Spotlight caches indexes of some network devices locally. This option requests that local caches be flushed to the appropriate network device. -V volume-path Apply command to all stores on the specified volume. -v Print verbose information when available. -r plugins Ask the server to reimport files for UTIs claimed by the listed plugin. -L volume-path List the directory contents of the Spotlight index on the specified volume. -P volume-path Dump the VolumeConfig.plist for the specified volume. -X volume-path Remove the Spotlight index directory on the specified volume. This does not disable indexing. Spotlight will reevaluate the volume when it is unmounted and remounted, the machine is rebooted, or an explicit index command such as 'mdutil -i' or 'mdutil -E' is run for the volume. SEE ALSO mdfind(1), mds(8), mdimport(1) Mac OS X September 1, 2005 Mac OS X
|
mdutil β manage the metadata stores used by Spotlight
|
mdutil [-pEsav] [-i on | off] mountPoint ...
| null | null |
znew
|
The znew utility uncompresses files compressed by compress(1) and recompresses them with gzip(1). The options are as follows: -f Overwrite existing β.gzβ files. Unless this option is specified, znew refuses to overwrite existing files. -t Test integrity of the gzipped file before deleting the original file. If the integrity check fails, the original β.Zβ file is not removed. -v Print a report specifying the achieved compression ratios. -9 Use the -9 mode of gzip(1), achieving better compression at the cost of slower execution. -K Keep the original β.Zβ file if it uses less disk blocks than the gzipped one. A disk block is 1024 bytes. SEE ALSO gzip(1) CAVEATS The znew utility tries to maintain the file mode of the original file. If the original file is not writable, it is not able to do that and znew will print a warning. macOS 14.5 January 26, 2007 macOS 14.5
|
znew β convert compressed files to gzipped files
|
znew [-ftv9K] file ...
| null | null |
podchecker
|
podchecker will read the given input files looking for POD syntax errors in the POD documentation and will print any errors it finds to STDERR. At the end, it will print a status message indicating the number of errors found. Directories are ignored; an appropriate warning message is printed. podchecker invokes the podchecker() function exported by Pod::Checker Please see "podchecker()" in Pod::Checker for more details. RETURN VALUE podchecker returns a 0 (zero) exit status if all specified POD files are ok. ERRORS podchecker returns the exit status 1 if at least one of the given POD files has syntax errors. The status 2 indicates that at least one of the specified files does not contain any POD commands. Status 1 overrides status 2. If you want unambiguous results, call podchecker with one single argument only. SEE ALSO Pod::Simple and Pod::Checker AUTHORS Please report bugs using <http://rt.cpan.org>. Brad Appleton <bradapp@enteract.com>, Marek Rouchal <marekr@cpan.org> Based on code for Pod::Text::pod2text(1) written by Tom Christiansen <tchrist@mox.perl.com> perl v5.38.2 2023-11-28 PODCHECKER(1)
|
podchecker - check the syntax of POD format documentation files
|
podchecker [-help] [-man] [-(no)warnings] [file ...] OPTIONS AND ARGUMENTS -help Print a brief help message and exit. -man Print the manual page and exit. -warnings -nowarnings Turn on/off printing of warnings. Repeating -warnings increases the warning level, i.e. more warnings are printed. Currently increasing to level two causes flagging of unescaped "<,>" characters. file The pathname of a POD file to syntax-check (defaults to standard input).
| null | null |
mcxrefresh
|
mcxrefresh is a utility to force the client to re-read the managed preferences on the server for a user. This tool must be run with elevated privileges. Note that this may return a zero status if the user could not be found but there were computer settings available. You must use the -a parameter to enter a password if you are requesting a refresh of an Active Directory server. -u Specify the numeric user id of the user to be refreshed. A user id of zero can be used to refresh the client at the login window. -n Specify the short name of the user to be refreshed. -a Ask for authentication and pass it to ManagedClient. MacOSX October 27, 2009 MacOSX
|
mcxrefresh β Managed Client (MCX) preference refresh tool
|
mcxrefresh [-u uid] [-n username] [-a]
| null | null |
colrm
|
The colrm utility removes selected columns from the lines of a file. A column is defined as a single character in a line. Input is read from the standard input. Output is written to the standard output. If only the start column is specified, columns numbered less than the start column will be written. If both start and stop columns are specified, columns numbered less than the start column or greater than the stop column will be written. Column numbering starts with one, not zero. Tab characters increment the column count to the next multiple of eight. Backspace characters decrement the column count by one. ENVIRONMENT The LANG, LC_ALL and LC_CTYPE environment variables affect the execution of colrm as described in environ(7). EXIT STATUS The colrm utility exits 0 on success, and >0 if an error occurs.
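For example, the single-argument form keeps only the columns before the start column (colrm ships with util-linux on Linux and with macOS):

```shell
# Keep columns 1-3; everything from column 4 onward is removed.
echo 12345678 | colrm 4
# prints: 123
```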
|
colrm β remove columns from a file
|
colrm [start [stop]]
| null |
Show columns below 3 (c) and above 5 (e): $ echo -e "abcdefgh\n12345678" | colrm 3 5 abfgh 12678 Specifying a start column bigger than the number of columns in the file is allowed and shows all the columns: $ echo "abcdefgh" | colrm 100 abcdefgh Using 1 as start column will show nothing: $ echo "abcdefgh" | colrm 1 SEE ALSO awk(1), column(1), cut(1), paste(1) HISTORY The colrm utility first appeared in 1BSD. AUTHORS Jeff Schriebman wrote the original version of colrm in November 1974. macOS 14.5 June 23, 2020 macOS 14.5
|
dbilogstrip
|
Replaces any hex addresses, e.g., 0x128f72ce with "0xN". Replaces any references to process id or thread id, like "pid#6254" with "pidN". So a DBI trace line like this: -> STORE for DBD::DBM::st (DBI::st=HASH(0x19162a0)~0x191f9c8 'f_params' ARRAY(0x1922018)) thr#1800400 will look like this: -> STORE for DBD::DBM::st (DBI::st=HASH(0xN)~0xN 'f_params' ARRAY(0xN)) thrN perl v5.34.0 2024-04-13 DBILOGSTRIP(1)
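The same normalization can be approximated with sed, which is handy when dbilogstrip itself is not installed (a rough sketch, not the real tool):

```shell
# Collapse hex addresses to 0xN and pid/thread ids to pidN/thrN.
printf "DBI::st=HASH(0x19162a0)~0x191f9c8 thr#1800400\n" |
  sed -E -e 's/0x[0-9a-fA-F]+/0xN/g' -e 's/(pid|thr)#[0-9]+/\1N/g'
# prints: DBI::st=HASH(0xN)~0xN thrN
```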
|
dbilogstrip - filter to normalize DBI trace logs for diff'ing
|
Read DBI trace file "dbitrace.log" and write out a stripped version to "dbitrace_stripped.log" dbilogstrip dbitrace.log > dbitrace_stripped.log Run "yourscript.pl" twice, each with different sets of arguments, with DBI_TRACE enabled. Filter the output and trace through "dbilogstrip" into a separate file for each run. Then compare using diff. (This example assumes you're using a standard shell.) DBI_TRACE=2 perl yourscript.pl ...args1... 2>&1 | dbilogstrip > dbitrace1.log DBI_TRACE=2 perl yourscript.pl ...args2... 2>&1 | dbilogstrip > dbitrace2.log diff -u dbitrace1.log dbitrace2.log
| null | null |
pagestuff
|
pagestuff shows how the structure of a Mach-O or universal file corresponds to logical pages on the current system. Structural information includes the location and extent of file headers, sections and segments, symbol tables, code signatures, etc. When displaying a universal file, all architectures will be shown unless otherwise specified by the -arch flag. The options to pagestuff(1) are: -arch arch_type Specifies the architecture, arch_type, of the file for pagestuff to operate on when the file is a universal file. (See arch(3) for the currently known arch_types.) When this option is used the logical page numbers start from the beginning of the architecture file within the universal file. -pagesize pagesize Specifies the page size to use when computing logical page boundaries. By default pagestuff will use the page size of the current system. -a Display all pages in the file. -p Print a list of the sections of the specified Mach-O file, with offsets and lengths. Note that the size(1) tool displays a much more concise listing given the `-l -m -x' arguments. SEE ALSO size(1), arch(3), Mach-O(5). Apple, Inc. June 23, 2020 PAGESTUFF(1)
|
pagestuff - Mach-O file page analysis tool
|
pagestuff file [-arch arch_flag] [[-a] [-p] | [pagenumber...]]
| null | null |
crlrefresh
|
Crlrefresh is a UNIX command-line program which is used to refresh and update the contents of the system-wide cache of Certificate Revocation Lists (CRLs). CRLs, which are optionally used as part of the procedure for verifying X.509 certificates, are typically fetched from the network using a URL which appears in (some) certificates. Caching CRLs is an optimization to avoid costs of network latency and/or unavailability. Each CRL has a finite validity time which is specified in the CRL itself. This validity time may be as short as one day, or it may be much longer. Crlrefresh examines the contents of the CRL cache and updates - via network fetch - all CRLs which are currently, or will soon be, invalid. Crlrefresh is also used to fetch specific CRLs and certificates from the network; CRLs fetched via crlrefresh will be added to the CRL cache as well as provided to the specified output file (or to stdout if no output file is provided). The URL specified in the f and F commands must have schema "http:" or "ldap:". Typically, crlrefresh would be run on a regular basis via one of the configuration files used by the cron(8) program. CRLREFRESH OPTION SUMMARY s=stale_period Specify the time in days which, having elapsed after a CRL has expired, causes the CRL to be deleted from the CRL cache. The default is 10 days. o=expire_overlap Specify the time in seconds prior to a CRL's expiration when a refresh action will attempt to replace the CRL with a fresh copy. p Purge all entries from the CRL cache, ensuring refresh with fresh CRLs. Normally, CRLs whose expiration date is more than expire_overlap past the current time are not refreshed. f Perform full cryptographic verification of all CRLs in the CRL cache. Normally this step is only performed when a CRL is actually used to validate a certificate. k=keychain_name The full path to the CRL cache (which is always a keychain). The default is /var/db/crls/crlcache.db. v Provide verbose output during operation. 
F=output_file_name When fetching a CRL or certificate, specifies the destination to which the fetched entity will be written. If this is not specified then the fetched entity is sent to stdout. n When fetching a CRL, this inhibits the addition of the fetched CRL to the system CRL cache. v Execute in verbose mode. FILES /var/db/crls/crlcache.db System CRL cache database SEE ALSO cron(8) Apple Computer, Inc. April 13, 2004 CRLREFRESH(1)
|
crlrefresh - update and maintain system-wide CRL cache
|
crlrefresh command [command-args] [options] crlrefresh r [options] crlrefresh f URL [options] crlrefresh F URI [options] CRLREFRESH COMMAND SUMMARY r Refresh the entire CRL cache f Fetch a CRL from specified URL F Fetch a Certificate from specified URL
| null | null |
headerdoc2html
|
Headerdoc2html processes the header file or files and generates HTML documentation based on specially-formatted comments. The options are as follows: -H The -H option turns on inclusion of the htmlHeader line, as specified in the config file. -X The -X option switches from HTML to XML output -d The -d option turns on extra debugging output. -h The -h option causes headerdoc to output an XML file containing metadata about the resulting document. -p The -p option turns on the C preprocessor. -q The -q option causes headerdoc to be excessively quiet. -u The -u option causes headerdoc to produce unsorted output. -v The -v option causes headerdoc to print version information. If no options are specified, headerdoc will produce directories containing its standard HTML output. For a complete list of flags, see the HeaderDoc User Guide. FILES /$HOME/Library/Preferences/com.apple.headerDoc2HTML.config SEE ALSO gatherheaderdoc(1) For more information, see the HeaderDoc User Guide. It can be found in /Developer/Documentation/ if you have the Xcode Tools package installed, or at <http://developer.apple.com/mac/library/documentation/DeveloperTools/Conceptual/HeaderDoc/> in the reference library. Darwin June 13, 2003 Darwin
|
headerdoc2html β header documentation processor
|
headerdoc2html [-HXdhquvx] [-o output_dir] file [file ...]
| null | null |
jrunscript
|
The jrunscript command is a language-independent command-line script shell. The jrunscript command supports both an interactive (read-eval- print) mode and a batch (-f option) mode of script execution. By default, JavaScript is the language used, but the -l option can be used to specify a different language. By using Java to scripting language communication, the jrunscript command supports an exploratory programming style. If JavaScript is used, then before it evaluates a user defined script, the jrunscript command initializes certain built-in functions and objects, which are documented in the API Specification for jrunscript JavaScript built-in functions. OPTIONS FOR THE JRUNSCRIPT COMMAND -cp path or -classpath path Indicates where any class files are that the script needs to access. -Dname=value Sets a Java system property. -Jflag Passes flag directly to the Java Virtual Machine where the jrunscript command is running. -l language Uses the specified scripting language. By default, JavaScript is used. To use other scripting languages, you must specify the corresponding script engine's JAR file with the -cp or -classpath option. -e script Evaluates the specified script. This option can be used to run one-line scripts that are specified completely on the command line. -encoding encoding Specifies the character encoding used to read script files. -f script-file Evaluates the specified script file (batch mode). -f - Enters interactive mode to read and evaluate a script from standard input. -help or -? Displays a help message and exits. -q Lists all script engines available and exits. ARGUMENTS If arguments are present and if no -e or -f option is used, then the first argument is the script file and the rest of the arguments, if any, are passed as script arguments. If arguments and the -e or the -f option are used, then all arguments are passed as script arguments. If arguments -e and -f are missing, then the interactive mode is used. 
EXAMPLE OF EXECUTING INLINE SCRIPTS jrunscript -e "print('hello world')" jrunscript -e "cat('http://www.example.com')" EXAMPLE OF USING SPECIFIED LANGUAGE AND EVALUATE THE SCRIPT FILE jrunscript -l js -f test.js EXAMPLE OF INTERACTIVE MODE jrunscript js> print('Hello World\n'); Hello World js> 34 + 55 89.0 js> t = new java.lang.Thread(function() { print('Hello World\n'); }) Thread[Thread-0,5,main] js> t.start() js> Hello World js> RUN SCRIPT FILE WITH SCRIPT ARGUMENTS In this example, the test.js file is the script file. The arg1, arg2, and arg3 arguments are passed to the script. The script can access these arguments with an arguments array. jrunscript test.js arg1 arg2 arg3 JDK 22 2024 JRUNSCRIPT(1)
|
jrunscript - run a command-line script shell that supports interactive and batch modes
|
Note: This tool is experimental and unsupported. jrunscript [options] [arguments]
|
This represents the jrunscript command-line options that can be used. See Options for the jrunscript Command. arguments Arguments, when used, follow immediately after options or the command name. See Arguments.
| null |
pp5.34
|
pp creates standalone executables from Perl programs, using the compressed packager provided by PAR, and dependency detection heuristics offered by Module::ScanDeps. Source files are compressed verbatim without compilation. You may think of pp as "perlcc that works without hassle". :-) A GUI interface is also available as the tkpp command. It does not provide the compilation-step acceleration provided by perlcc (however, see -f below for byte-compiled, source-hiding techniques), but makes up for it with better reliability, smaller executable size, and full retrieval of original source code. When a single input program is specified, the resulting executable will behave identically to that program. However, when multiple programs are packaged, the produced executable will run the one that has the same basename as $0 (i.e. the filename used to invoke it). If nothing matches, it dies with the error "Can't open perl script "$0"".
|
pp - PAR Packager
|
pp [ -ABCEFILMPTSVXacdefghilmnoprsuvxz ] [ parfile | scriptfile ]...
|
Options are available in a short form and a long form. For example, the three lines below are all equivalent: % pp -o output.exe input.pl % pp --output output.exe input.pl % pp --output=output.exe input.pl Since the command lines can become sufficiently long to reach the limits imposed by some shells, it is possible to have pp read some of its options from one or more text files. The basic usage is to just include an argument starting with an 'at' (@) sigil. This argument will be interpreted as a file to read options from. Mixing ordinary options and @file options is possible. This is implemented using the Getopt::ArgvFile module, so read its documentation for advanced usage. -a, --addfile=FILE|DIR Add an extra file into the package. If the file is a directory, recursively add all files inside that directory, with links turned into actual files. By default, files are placed under "/" inside the package with their original names. You may override this by appending the target filename after a ";", like this: % pp -a "old_filename.txt;new_filename.txt" % pp -a "old_dirname;new_dirname" You may specify "-a" multiple times. -A, --addlist=FILE Read a list of file/directory names from FILE, adding them into the package. Each line in FILE is taken as an argument to -a above. You may specify "-A" multiple times. -B, --bundle Bundle core modules in the resulting package. This option is enabled by default, except when "-p" or "-P" is specified. Since PAR version 0.953, this also strips any local paths from the list of module search paths @INC before running the contained script. -C, --clean Clean up temporary files extracted from the application at runtime. By default, these files are cached in the temporary directory; this allows the program to start up faster next time. -c, --compile Run "perl -c inputfile" to determine additional run-time dependencies. -cd, --cachedeps=FILE Use FILE to cache detected dependencies. Creates FILE unless present. 
This will speed up the scanning process on subsequent runs. -d, --dependent Reduce the executable size by not including a copy of perl interpreter. Executables built this way will need a separate perl5x.dll or libperl.so to function correctly. This option is only available if perl is built as a shared library. -e, --eval=STRING Package a one-liner, much the same as "perl -e '...'" -E, --evalfeature=STRING Behaves just like "-e", except that it implicitly enables all optional features (in the main compilation unit) with Perl 5.10 and later. See feature. -x, --execute Run "perl inputfile" to determine additional run-time dependencies. Using this option, pp may be able to detect the use of modules that can't be determined by static analysis of "inputfile". Examples are stuff loaded by run-time loaders like Module::Runtime or "plugin" loaders like Module::Loader. Note that which modules are detected depends on which parts of your program are exercised when running "inputfile". E.g. if your program immediately terminates when run as "perl inputfile" because it lacks mandatory arguments, then this option will probably have no effect. You may use --xargs to supply arguments in this case. --xargs=STRING If -x is given, splits the "STRING" using the function "shellwords" from Text::ParseWords and passes the result as @ARGV when running "perl inputfile". -X, --exclude=MODULE Exclude the given module from the dependency search path and from the package. If the given file is a zip or par or par executable, all the files in the given file (except MANIFEST, META.yml and script/*) will be excluded and the output file will "use" the given file at runtime. -f, --filter=FILTER Filter source script(s) with a PAR::Filter subclass. You may specify multiple such filters. If you wish to hide the source code from casual prying, this will do: % pp -f Bleach source.pl If you are more serious about hiding your source code, you should have a look at Steve Hay's PAR::Filter::Crypto module. 
Make sure you understand the Filter::Crypto caveats! -g, --gui Build an executable that does not have a console window. This option is ignored on non-MSWin32 platforms or when "-p" is specified. -h, --help Show basic usage information. -I, --lib=DIR Add the given directory to the perl module search path. May be specified multiple times. -l, --link=FILE|LIBRARY Add the given shared library (a.k.a. shared object or DLL) into the packed file. Also accepts names under library paths; i.e. "-l ncurses" means the same thing as "-l libncurses.so" or "-l /usr/local/lib/libncurses.so" in most Unixes. May be specified multiple times. -L, --log=FILE Log the output of packaging to a file rather than to stdout. -F, --modfilter=FILTER[=REGEX], Filter included perl module(s) with a PAR::Filter subclass. You may specify multiple such filters. By default, the PodStrip filter is applied. In case that causes trouble, you can turn this off by setting the environment variable "PAR_VERBATIM" to 1. Since PAR 0.958, you can use an optional regular expression (REGEX above) to select the files in the archive which should be filtered. Example: pp -o foo.exe -F Bleach=warnings\.pm$ foo.pl This creates a binary executable foo.exe from foo.pl packaging all files as usual except for files ending in "warnings.pm" which are filtered with PAR::Filter::Bleach. -M, --module=MODULE Add the specified module into the package, along with its dependencies. The following variants may be used to add whole module namespaces: -M Foo::** Add every module in the "Foo" namespace except "Foo" itself, i.e. add "Foo::Bar", "Foo::Bar::Quux" etc up to any depth. -M Foo::* Add every module at level 1 in the "Foo" namespace, i.e. add "Foo::Bar", but neither "Foo::Bar::Quux" nor "Foo". -M Foo:: Shorthand for "-M Foo -M Foo:**": every module in the "Foo" namespace including "Foo" itself. Instead of a module name, MODULE may also be specified as a filename relative to the @INC path, i.e. 
"-M Module/ScanDeps.pm" means the same thing as "-M Module::ScanDeps". If MODULE has an extension that is not ".pm"/".ix"/".al", it will not be scanned for dependencies, and will be placed under "/" instead of "/lib/" inside the PAR file. This use is deprecated -- consider using the -a option instead. You may specify "-M" multiple times. -m, --multiarch Build a multi-architecture PAR file. Implies -p. -n, --noscan Skip the default static scanning altogether, using run-time dependencies from -c or -x exclusively. -N, --namespace=NAMESPACE Add all modules in the namespace into the package, along with their dependencies. If "NAMESPACE" is something like "Foo::Bar" then this will add all modules "Foo/Bar/Quux.pm", "Foo/Bar/Fred/Barnie.pm" etc that can be located in your module search path. It mimics the behaviour of "plugin" loaders like Module::Loader. This is different from using "-M Foo::Bar::", as the latter insists on adding "Foo/Bar.pm" which might not exist in the above "plugin" scenario. You may specify "-N" multiple times. -o, --output=FILE File name for the final packaged executable. -p, --par Create PAR archives only; do not package to a standalone binary. -P, --perlscript Create stand-alone perl script; do not package to a standalone binary. -r, --run Run the resulting packaged script after packaging it. --reusable EXPERIMENTAL Make the packaged executable reusable for running arbitrary, external Perl scripts as if they were part of the package: pp -o myapp --reusable someapp.pl ./myapp --par-options --reuse otherapp.pl The second line will run otherapp.pl instead of someapp.pl. -S, --save Do not delete generated PAR file after packaging. -s, --sign Cryptographically sign the generated PAR or binary file using Module::Signature. -T, --tempcache Set the program unique part of the cache directory name that is used if the program is run without -C. If not set, a hash of the executable is used. 
When the program is run, its contents are extracted to a temporary directory. On Unix systems, this is commonly /tmp/par-USER/cache-XXXXXXX. USER is replaced by the name of the user running the program, but "spelled" in hex. XXXXXXX is either a hash of the executable or the value passed to the "-T" or "--tempcache" switch. -u, --unicode Package Unicode support (essentially utf8_heavy.pl and everything below the directory unicore in your perl library). This option exists because it is impossible to detect using static analysis if your program needs Unicode support at runtime. (Note: If your program contains "use utf8" this does not imply it needs Unicode support. It merely says that your program is written in UTF-8.) If your packed program exits with an error message like Can't locate utf8_heavy.pl in @INC (@INC contains: ...) try to pack it with "-u" (or use "-x"). -v, --verbose[=NUMBER] Increase verbosity of output; NUMBER is an integer from 1 to 3, 3 being the most verbose. Defaults to 1 if specified without an argument. Alternatively, -vv sets verbose level to 2, and -vvv sets it to 3. -V, --version Display the version number and copyrights of this program. -z, --compress=NUMBER Set zip compression level; NUMBER is an integer from 0 to 9, 0 = no compression, 9 = max compression. Defaults to 6 if -z is not used. ENVIRONMENT PP_OPTS Command-line options (switches). Switches in this variable are taken as if they were on every pp command line. NOTES Here are some recipes showing how to utilize pp to bundle source.pl with all its dependencies, on target machines with different expected settings: Stand-alone setup: To make a stand-alone executable, suitable for running on a machine that doesn't have perl installed: % pp -o packed.exe source.pl # makes packed.exe # Now, deploy 'packed.exe' to target machine... 
$ packed.exe # run it Perl interpreter only, without core modules: To make a packed .pl file including core modules, suitable for running on a machine that has a perl interpreter, but where you want to be sure of the versions of the core modules that your program uses: % pp -B -P -o packed.pl source.pl # makes packed.pl # Now, deploy 'packed.pl' to target machine... $ perl packed.pl # run it Perl with core modules installed: To make a packed .pl file without core modules, relying on the target machine's perl interpreter and its core libraries. This produces a significantly smaller file than the previous version: % pp -P -o packed.pl source.pl # makes packed.pl # Now, deploy 'packed.pl' to target machine... $ perl packed.pl # run it Perl with PAR.pm and its dependencies installed: Make a separate archive and executable that uses the archive. This relies upon the perl interpreter and libraries on the target machine. % pp -p source.pl # makes source.par % echo "use PAR 'source.par';" > packed.pl; % cat source.pl >> packed.pl; # makes packed.pl # Now, deploy 'source.par' and 'packed.pl' to target machine... $ perl packed.pl # run it, perl + core modules required Note that even if your perl was built with a shared library, the 'Stand-alone executable' above will not need a separate perl5x.dll or libperl.so to function correctly. But even in this case, the underlying system libraries such as libc must be compatible between the host and target machines. Use "--dependent" if you are willing to ship the shared library with the application, which can significantly reduce the executable size. SEE ALSO tkpp, par.pl, parl, perlcc PAR, PAR::Packer, Module::ScanDeps Getopt::Long, Getopt::ArgvFile ACKNOWLEDGMENTS Simon Cozens, Tom Christiansen and Edward Peschko for writing perlcc; this program tries to mimic its interface as closely as possible, and has copied liberally from their code. Jan Dubois for writing the exetype.pl utility, which has been partially adapted into the "-g" flag. 
Mattia Barbon for providing the "myldr" binary loader code. Jeff Goff for suggesting the name pp. AUTHORS Audrey Tang <cpan@audreyt.org>, Steffen Mueller <smueller@cpan.org> Roderich Schupp <rschupp@cpan.org> You can write to the mailing list at <par@perl.org>, or send an empty mail to <par-subscribe@perl.org> to participate in the discussion. Please submit bug reports to <bug-par-packer@rt.cpan.org>. COPYRIGHT Copyright 2002-2009 by Audrey Tang <cpan@audreyt.org>. Neither this program nor the associated parl program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.34.0 2020-03-08 PP(1)
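The cache path described under "-T" above names the user "spelled" in hex. A minimal shell sketch, assuming the spelling is simply two lowercase hex digits per byte of the user name (the user name dave below is hypothetical):

```shell
# Sketch: reproduce the hex "spelling" of the user name in the
# /tmp/par-USER/cache-XXXXXXX path. Assumption: each byte of the
# user name is rendered as two lowercase hex digits.
user=dave                                          # hypothetical user
hexuser=$(printf '%s' "$user" | od -An -tx1 | tr -d ' \n')
echo "/tmp/par-$hexuser"                           # /tmp/par-64617665
```

The exact encoding used by PAR may differ; this only illustrates the byte-to-hex idea.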
|
Note: When running on Microsoft Windows, the a.out below will be a.exe instead. % pp hello.pl # Pack 'hello.pl' into executable 'a.out' % pp -o hello hello.pl # Pack 'hello.pl' into executable 'hello' # (or 'hello.exe' on Win32) % pp -o foo foo.pl bar.pl # Pack 'foo.pl' and 'bar.pl' into 'foo' % ./foo # Run 'foo.pl' inside 'foo' % mv foo bar; ./bar # Run 'bar.pl' inside 'foo' % mv bar baz; ./baz # Error: Can't open perl script "baz" % pp -p file # Creates a PAR file, 'a.par' % pp -o hello a.par # Pack 'a.par' to executable 'hello' % pp -S -o hello file # Combine the two steps above % pp -p -o out.par file # Creates 'out.par' from 'file' % pp -B -p -o out.par file # same as above, but bundles core modules # and removes any local paths from @INC % pp -P -o out.pl file # Creates 'out.pl' from 'file' % pp -B -P -o out.pl file # same as above, but bundles core modules # and removes any local paths from @INC # (-B is assumed when making executables) % pp -e "print 123" # Pack a one-liner into 'a.out' % pp -p -e "print 123" # Creates a PAR file 'a.par' % pp -P -e "print 123" # Creates a perl script 'a.pl' % pp -c hello # Check dependencies from "perl -c hello" % pp -x hello # Check dependencies from "perl hello" % pp -n -x hello # same as above, but skips static scanning % pp -I /foo hello # Extra include paths % pp -M Foo::Bar hello # Extra modules in the include path % pp -M abbrev.pl hello # Extra libraries in the include path % pp -X Foo::Bar hello # Exclude modules % pp -a data.txt hello # Additional data files % pp -r hello # Pack 'hello' into 'a.out', runs 'a.out' % pp -r hello a b c # Pack 'hello' into 'a.out', runs 'a.out' # with arguments 'a b c' % pp hello --log=c # Pack 'hello' into 'a.out', logs # messages into 'c' # Pack 'hello' into a console-less 'out.exe' (Win32 only) % pp --gui -o out.exe hello % pp @file hello.pl # Pack 'hello.pl' but read _additional_ # options from file 'file'
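The @file form in the last example above can be sketched end to end. The option file name and its contents are hypothetical, and the pp invocation itself is shown only as a comment since it requires PAR::Packer to be installed:

```shell
# Hypothetical option file for pp; options are read from it as if
# they had been given on the command line (via Getopt::ArgvFile).
cat > pp.args <<'EOF'
--compress=9
--lib=./mylib
EOF
# With PAR::Packer installed, one would then run (not executed here):
#   pp @pp.args -o hello hello.pl
grep -c '^--' pp.args                  # 2 option lines
```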
|
filtercalltree
|
filtercalltree reads a file containing a call tree, as generated by the sample(1) or malloc_history(1) commands, and filters or prunes it as specified by the options.
|
filtercalltree β Filter or prune a call tree file generated by sample or malloc_history
|
filtercalltree call-tree-file [-invertCallTree] [-pruneCount count] [-pruneMallocSize size] [-chargeSystemLibraries] [-chargeLibrary libraryName] [-keepBoundaries]
|
-invertCallTree Print the call tree from hottest to coldest stack frame. -pruneCount count Remove branches of the call tree that have count less than count. -pruneMallocSize size Remove branches of the call tree that have malloc size less than size, such as 500K or 1.2M. -chargeSystemLibraries Remove stack frames from all libraries in /System and /usr, while still charging their cost to the caller. -chargeLibrary library-name Remove stack frames from library-name, while still charging their cost to the caller. This argument can be repeated for multiple libraries. -keepBoundaries When charging libraries to callers, keep the top call into excluded libraries. SEE ALSO malloc_history(1), sample(1) macOS 14.5 May 7, 2011 macOS 14.5
| null |
afhash
|
Audio File Hash writes an SHA-1 hash to an audio file or prints (to stdout) the hash contained in an audio file
|
afhash β Audio File Hash
|
afhash [-h] audiofile1 audiofile2
|
-h print help text -w write hash code to audio file -x print hash code from audio file (if present) -c compare hash codes from two audio files Darwin February 13, 2007 Darwin
| null |
sandbox-exec
|
The sandbox-exec command is DEPRECATED. Developers who wish to sandbox an app should instead adopt the App Sandbox feature described in the App Sandbox Design Guide. The sandbox-exec command enters a sandbox using a profile specified by the -f, -n, or -p option and executes command with arguments. The options are as follows: -f profile-file Read the profile from the file named profile-file. -n profile-name Use the pre-defined profile profile-name. -p profile-string Specify the profile to be used on the command line. -D key=value Set the profile parameter key to value. SEE ALSO sandbox_init(3), sandbox(7), sandboxd(8) Mac OS X March 9, 2017 Mac OS X
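As an illustration of the -f option, a profile can be written to a file and passed to sandbox-exec. The profile below is only a sketch of a deny-by-default, read-only policy, not a vetted or complete one, and the file name is arbitrary:

```shell
# Write a minimal sandbox profile (SBPL) to a file; contents are a
# sketch: deny everything by default, then allow reads and exec.
cat > /tmp/readonly.sb <<'EOF'
(version 1)
(deny default)
(allow process-exec)
(allow file-read*)
EOF
# On macOS one would then run (deprecated; not executed here):
#   sandbox-exec -f /tmp/readonly.sb ls /
grep -c '^(allow' /tmp/readonly.sb     # 2 allow rules
```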
|
sandbox-exec β execute within a sandbox (DEPRECATED)
|
sandbox-exec [-f profile-file] [-n profile-name] [-p profile-string] [-D key=value ...] command [arguments ...]
| null | null |
snmpvacm
|
snmpvacm is an SNMP application that can be used to do simple maintenance on the View-based Access Control Model (VACM) tables of an SNMP agent. The SNMPv3 VACM specifications (see RFC2575) define assorted tables to specify groups of users, MIB views, and authorised access settings. These snmpvacm commands effectively create or delete rows in the appropriate one of these tables, and match the equivalent configure directives which are documented in the snmpd.conf(5) man page. A fuller explanation of how these operate can be found in the project FAQ. SUB-COMMANDS createSec2Group MODEL SECURITYNAME GROUPNAME Create an entry in the SNMPv3 security name to group table. This table allows a single access control entry to be applied to a number of users (or 'principals'), and is indexed by the security model and security name values. MODEL An integer representing the security model, taking one of the following values: 1 - reserved for SNMPv1 2 - reserved for SNMPv2c 3 - User-based Security Model (USM) SECURITYNAME A string representing the security name for a principal (represented in a security-model-independent format). For USM-based requests, the security name is the same as the username. GROUPNAME A string identifying the group that this entry (i.e. security name/model pair) should belong to. This group name will then be referenced in the access table (see createAccess below). deleteSec2Group MODEL SECURITYNAME Delete an entry from the SNMPv3 security name to group table, thus removing access control settings for the given principal. The entry to be removed is indexed by the MODEL and SECURITYNAME values, which should match those used in the corresponding createSec2Group command (or equivalent). createView [-Ce] NAME SUBTREE MASK Create an entry in the SNMPv3 MIB view table. A MIB view consists of a family of view subtrees which may be individually included in or (occasionally) excluded from the view. 
Each view subtree is defined by a combination of an OID subtree together with a bit string mask. The view table is indexed by the view name and subtree OID values. [-Ce] An optional flag to indicate that this view subtree should be excluded from the named view. If not specified, the default is to include the subtree in the view. When constructing a view from a mixture of included and excluded subtrees, the excluded subtrees should be defined first - particularly if the named view is already referenced in one or more access entries.
|
snmpvacm - creates and maintains SNMPv3 View-based Access Control entries on a network entity NAME A string identifying a particular MIB view, of which this OID subtree/mask forms part (possibly the only part). SUBTREE The OID defining the root of the subtree to add to (or exclude from) the named view. MASK A bit mask indicating which sub-identifiers of the associated subtree OID should be regarded as significant. deleteView NAME SUBTREE Delete an entry from the SNMPv3 view table, thus removing the subtree from the given MIB view. Removing the final (or only) subtree will result in the deletion of the view. The entry to be removed is indexed by the NAME and SUBTREE values, which should match those used in the corresponding createView command (or equivalent). When removing subtrees from a mixed view (i.e. containing both included and excluded subtrees), the included subtrees should be removed first. createAccess GROUPNAME [CONTEXTPREFIX] MODEL LEVEL CONTEXTMATCH READVIEW WRITEVIEW NOTIFYVIEW Create an entry in the SNMPv3 access table, thus allowing a certain level of access to particular MIB views for the principals in the specified group (given suitable security model and levels in the request). The access table is indexed by the group name, context prefix, security model and security level values. GROUPNAME The name of the group that this access entry applies to (as set up by a createSec2Group command, or equivalent) CONTEXTPREFIX A string representing a context name (or collection of context names) which this access entry applies to. The interpretation of this string depends on the value of the CONTEXTMATCH field (see below). If omitted, this will default to the null context "". 
MODEL An integer representing the security model, taking one of the following values: 1 - reserved for SNMPv1 2 - reserved for SNMPv2c 3 - User-based Security Model (USM) LEVEL An integer representing the minimal security level, taking one of the following values: 1 - noAuthNoPriv 2 - authNoPriv 3 - authPriv This access entry will be applied to requests of this level or higher (where authPriv is higher than authNoPriv which is in turn higher than noAuthNoPriv). CONTEXTMATCH Indicates how to interpret the CONTEXTPREFIX value. If this field has the value '1' (representing 'exact') then the context name of a request must match the CONTEXTPREFIX value exactly for this access entry to be applicable to that request. If this field has the value '2' (representing 'prefix') then the initial substring of the context name of a request must match the CONTEXTPREFIX value for this access entry to be applicable to that request. This provides a simple form of wildcarding. READVIEW The name of the MIB view (as set up by createView or equivalent) defining the MIB objects for which this request may request the current values. If there is no view with this name, then read access is not granted. WRITEVIEW The name of the MIB view (as set up by createView or equivalent) defining the MIB objects for which this request may potentially SET new values. If there is no view with this name, then write access is not granted. NOTIFYVIEW The name of the MIB view (as set up by createView or equivalent) defining the MIB objects which may be included in a notification request. Note that this aspect of access control is not currently supported. deleteAccess GROUPNAME [CONTEXTPREFIX] MODEL LEVEL Delete an entry from the SNMPv3 access table, thus removing the specified access control settings. The entry to be removed is indexed by the group name, context prefix, security model and security level values, which should match those used in the corresponding createAccess command (or equivalent). 
createAuth GROUPNAME [CONTEXTPREFIX] MODEL LEVEL AUTHTYPE CONTEXTMATCH VIEW Create an entry in the Net-SNMP extension to the standard access table, thus allowing a certain type of access to the MIB view for the principals in the specified group. The interpretation of GROUPNAME, CONTEXTPREFIX, MODEL, LEVEL and CONTEXTMATCH is the same as for the createAccess directive. The extension access table is indexed by the group name, context prefix, security model, security level and authtype values. AUTHTYPE The style of access that this entry should be applied to. See snmpd.conf(5) and snmptrapd.conf(5) for details of valid tokens. VIEW The name of the MIB view (as set up by createView or equivalent) defining the MIB objects for which this style of access is authorized. deleteAuth GROUPNAME [CONTEXTPREFIX] MODEL LEVEL AUTHTYPE Delete an entry from the extension access table, thus removing the specified access control settings. The entry to be removed is indexed by the group name, context prefix, security model, security level and authtype values, which should match those used in the corresponding createAuth command (or equivalent). Note that snmpvacm REQUIRES an argument specifying the agent to query, as described in the snmpcmd(1) manual page.
|
snmpvacm [COMMON OPTIONS] AGENT createSec2Group MODEL SECURITYNAME GROUPNAME snmpvacm [COMMON OPTIONS] AGENT deleteSec2Group MODEL SECURITYNAME snmpvacm [COMMON OPTIONS] AGENT createView [-Ce] NAME SUBTREE MASK snmpvacm [COMMON OPTIONS] AGENT deleteView NAME SUBTREE snmpvacm [COMMON OPTIONS] AGENT createAccess GROUPNAME [CONTEXTPREFIX] MODEL LEVEL CONTEXTMATCH READVIEW WRITEVIEW NOTIFYVIEW snmpvacm [COMMON OPTIONS] AGENT deleteAccess GROUPNAME [CONTEXTPREFIX] MODEL LEVEL snmpvacm [COMMON OPTIONS] AGENT createAuth GROUPNAME [CONTEXTPREFIX] MODEL LEVEL AUTHTYPE CONTEXTMATCH VIEW snmpvacm [COMMON OPTIONS] AGENT deleteAuth GROUPNAME [CONTEXTPREFIX] MODEL LEVEL AUTHTYPE
| null |
Given a pre-existing user dave (which could be set up using the snmpusm(1) command), we could configure full read-write access to the whole OID tree using the commands: snmpvacm localhost createSec2Group 3 dave RWGroup snmpvacm localhost createView all .1 80 snmpvacm localhost createAccess RWGroup 3 1 1 all all none This creates a new security group named "RWGroup" containing the SNMPv3 user "dave", a new view "all" containing the full OID tree based on .iso(1), and then allows those users in the group "RWGroup" (i.e. "dave") both read- and write-access to the view "all" (i.e. the full OID tree) when using authenticated SNMPv3 requests. As a second example, we could set up read-only access to a portion of the OID tree using the commands: snmpvacm localhost createSec2Group 3 wes ROGroup snmpvacm localhost createView sysView system fe snmpvacm localhost createAccess ROGroup 3 0 1 sysView none none This creates a new security group named "ROGroup" containing the (pre-existing) user "wes", a new view "sysView" containing just the OID tree based on .iso(1).org(3).dod(6).inet(1).mgmt(2).mib-2(1).system(1), and then allows those users in the group "ROGroup" (i.e. "wes") read-access, but not write-access to the view "sysView" (i.e. the system group). EXIT STATUS The following exit values are returned: 0 - Successful completion 1 - A usage syntax error (which displays a suitable usage message) or a request timeout. 2 - An error occurred while executing the command (which also displays a suitable error message). LIMITATIONS This utility does not support the configuration of new community strings, so is only of use for setting up new access control for SNMPv3 requests. It can be used to amend the access settings for existing community strings, but not to set up new ones. The use of numeric parameters for secLevel and contextMatch parameters is less than intuitive. These commands do not provide the full flexibility of the equivalent config file directives. 
There is (currently) no equivalent to the one-shot configure directives rouser and rwuser. SEE ALSO snmpcmd(1), snmpusm(1), snmpd.conf(5), snmp.conf(5), RFC 2575, Net-SNMP project FAQ V5.6.2.1 05 Sep 2006 SNMPVACM(1)
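In the read-only example above, the view mask fe controls which sub-identifiers of the subtree OID are significant. Expanding the hex mask into its bit string can be sketched in plain shell:

```shell
# Sketch: expand the view mask "fe" into bits. A 1 bit means the
# corresponding sub-identifier of the subtree OID must match exactly;
# a 0 bit acts as a wildcard for that position.
n=$((0xfe)); bits=
while [ "$n" -gt 0 ]; do bits=$((n % 2))$bits; n=$((n / 2)); done
echo "$bits"   # 11111110: first seven sub-identifiers are significant
```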
|
stapler
|
The stapler utility attaches tickets for notarized executables to app bundles, disk images, and packages. Developer ID requires apps to be notarized before distribution. A ticket contains a list of the code signatures for executables within a supported file format. The stapler utility downloads and attaches (staples) a ticket to these files, enabling Gatekeeper to verify that executables they contain have been properly notarized. Stapling is performed automatically by Xcode as part of the Developer ID distribution workflow for notarized apps. The stapler utility must be applied separately to a supported file format that was built or packaged with command-line tools, before distributing it. This enables Gatekeeper to verify the ticket offline. Stapling does not invalidate the code signature and must be run after an executable or archive has been code-signed and notarized with Developer ID. Code-signing a supported file format invalidates any stapled tickets, so stapler staple must be run again if this occurs. stapler requires internet access to retrieve tickets when stapling or validating. SUPPORTED FILE FORMATS stapler works only with UDIF disk images, signed "flat" installer packages, and certain code-signed executable bundles such as ".app". Passing an unsigned "flat" installer package or an unsigned executable bundle in path to stapler is considered an error.
|
stapler β Attach and validate tickets for notarized executables
|
stapler staple [-q] [-v] path stapler validate [-q] [-v] path
|
The options are as follows: staple Retrieves a ticket and attaches it to the supported file format at path. The executable must have completed the notarization process. validate Validates a stapled ticket. This includes verifying the contents and comparing it to the latest ticket retrieved from the ticketing service. -q, --quiet When validating or attaching tickets, stapler will only return the exit code. --verbose overrides this option. -v, --verbose Sets the output of stapler to include additional diagnostic output. Without the verbose option, no output is produced upon success.
|
stapler staple Example.app Retrieve and staple a ticket to Example.app. stapler validate -v SampleInstaller.pkg Validate the ticket stapled to a package with verbose output. DIAGNOSTICS stapler returns 0 on successful stapling or validation. If an error occurs, it returns one of the non-zero codes defined in sysexits(3). stapler exits upon encountering the first error. It may exit with codes other than those listed below in less common scenarios. [EX_USAGE] Options appear malformed or are missing. [EX_NOINPUT] path cannot be found, is not code-signed, or is not of a supported file format, or, if the validate option is passed, the existing ticket is missing or invalid. [EX_DATAERR] The ticket data is invalid. [EX_NOPERM] The ticket has been revoked by the ticketing service. [EX_NOHOST] The path has not been previously notarized or the ticketing service returns an unexpected response. [EX_CANTCREAT] The ticket has been retrieved from the ticketing service and was properly validated but the ticket could not be written out to disk. SEE ALSO codesign(1), spctl(8), syspolicyd(8) HISTORY The stapler command first appeared in macOS 10.14. BUGS stapler can only act on one path per invocation. If multiple paths are specified, stapler will only process the last path specified. The folder containing path must be writable. If an executable bundle contains a symlink at Contents/CodeResources, it must be manually deleted before staple will function. macOS 14.5 May 15, 2018 macOS 14.5
|
codecctl
| null | null | null | null | null |
fwkpfv
|
Use fwkpfv to receive FireWire kprintf logging. FireWireKPrintf redirects "kprintf()" logging to FireWire. Kernel printfs or "kprintfs" are used by many kernel services as a low level logging mechanism. They can also be used in third party kernel extensions. FireWire kprintfs are available very early in the kernel's startup and right until the cpu is powered down at sleep, restarted, or shutdown. Similarly, they are available almost as soon as the cpu is powered when waking. They can be useful for debugging kernel code (including KEXTs), particularly sleep/wake issues where the display and/or ethernet is unavailable.
|
fwkpfv β FireWire kprintf viewer
|
fwkpfv [--appendlog] [--openlog] [--newlog] [--prefix] [--publish] [--single] [--buffer=boot-args] [--setargs[=boot-args]] [--setpm] [--disable] [--erase] [--ioff] [--restart] [--help]
|
The available options are as follows: --appendlog, -a Append output logging to /tmp/fwkpf.log. --openlog, -o Open log file with Console.app. Only valid with -a. --newlog, -n Create a new log file, rather than append. Only valid with -a. --prefix, -p Prefix logger machine's ID to each log. --publish, -k Do not publish FireWire unit directory keys. --single, -s Use a single window even if multiple loggers are present. --buffer=size, -b size Sets the host's pseudo address space queue buffer to size in bytes. Increasing this value may help avoid potential packet loss. The default buffer size is 204,800 bytes. --file=path Sets the log file path, if in use, to path, given as a path to a file. The tilde character is not allowed. --setargs[=boot-args], -r[boot-args] Sets the nvram boot-args on the current machine to boot-args. This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host). If boot-args is not passed, the tool will prompt the user as to which boot-args are to be set. --setpm, -m Sets the nvram boot-args on the current machine to "debug=0x14e io=0x880". This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host). --disable, -x Sets the nvram boot-args on the current machine to "debug=0x146" which disables kprintf logging. This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host). --erase, -e Deletes the boot-args variable from nvram. This flag should only be used on the target machine (which is contrary to typical usage cases, when this tool is used on the host). --ioff Turns off interactive mode. --restart Automatically restarts the machine only after the nvram has been modified by this tool. --help, -h Displays usage info for fwkpfv. 
COMPATIBILITY Unlike in the past, Mac OS X 10.5 has integrated FireWireKPrintf functionality, so it is not necessary to install a separate kext to enable kprintf logging over FireWire. While the symbol for kprintf() is available at all times, the calls are essentially ignored unless activated with a boot argument (see below). While the new FireWireKPrintf is integrated with the normal FireWire stack, once the machine begins logging kprintfs via FireWire, normal FireWire services will stop until the machine is restarted. Once in logging mode, all typical FireWire services (like FireWire hard disk access) will be unavailable. It is expected that any devices connected before logging will be forcefully removed. If you need to log while also using the FireWire stack, please use FireLog (see the FireWireSDK). The new integrated FireWireKPrintf cannot be used while the old AppleFireWireKPrintf.kext is installed. Remove it to use the integrated version. The new viewer will be able to capture kprintf logs from the old-style AppleFireWireKPrintf.kext, however, the old-style viewer will not work with the integrated FireWireKPrintf services. USAGE To use the FireWireKPrintf, two machines must be set up as follows: - On the Target machine (to be debugged): 1. Boot the Mac from the partition you wish to use. 2. Set kernel boot arguments to enable kernel printfs: % sudo nvram boot-args="debug=0x8" While only the debug bit equivalent to '0x8' is required for kprintf logging, you may want to set the debug variable to '0x14e' to enable other debugging services. For more information on the debug flags, please see Technical Note TN2118. For more logging options, please see <Kernel/IOKit/IOKitDebug.h>. 3. Restart the target Mac. 4. Disconnect any FireWire device. 
- On the debugger machine with Mac OS X and Developer Tools installed, run from Terminal.app: % fwkpfv (If the machine is running Tiger: Run the FireWireKPrintf viewer tool included in the FireWireSDK available at http://developer.apple.com/sdk/.) - Connect the two machines together using a FireWire cable. - After 5 seconds, you should see output in the viewer and kprintf logging is flowing. Note: At this point, normal FireWire services will cease to exist on the target until the machine is restarted. TARGET-SIDE OPTIONS FireWireKPrintf implements a few options that can be set as a "boot-arg," much like the "debug" variable. The "fwkpf" variable specifies the timestamp format (calculated on the target, before transmission), timestamp padding, verbose kprintf printing, and synchronous mode. To set the "fwkpf" variable, choose a timestamp unit and add any of the "additive" options. The default timestamp is 0x4 (microseconds). Timestamp Formats (not additive): 0x0 Converted FW Cycle Time Units (c) - Classic time format shown as "Seconds.Microseconds". The Second unit rolls over every 128 seconds. Driven by the FireWire clock. 0x1 Absolute Time Units (a) - "Absolute" time units derived directly from the kernel's uptime clock. 0x2 FireWire Time Units (w) - Shown as "Seconds:Cycles:Offset". Driven by the FireWire clock. Seconds rollover every 128 seconds. 8000 cycles per second. 3072 offset counts per cycle. Equivalent to FireBug's time format. 0x3 Nanoseconds Time Units (n) - The kernel's uptime clock converted to nanoseconds. 0x4 Microseconds Time Units (u) - The kernel's uptime clock converted to microseconds. 0x5 Milliseconds Time Units (m) - The kernel's uptime clock converted to milliseconds. 0x6 Seconds Time Units (s) - Shown as "Seconds:Milliseconds:Microseconds". Converted from kernel's uptime clock. 0x7 Day Time Units (d) - "Days:Hours:Minutes:Seconds:Milliseconds:Microseconds". Converted from kernel's uptime clock. 
0xF No Time Units (-) - No time units, displayed as "-". Additive Options: 0x10 Append output logging to /Library/Logs/FireWireKPrintf.log. 0x100 Open log file with Console.app. Only valid with "-o". 0x800 Create a new log file, rather than append. Only valid with "-o". 0x8000 Prefix logger machine's ID to each log. For example, if you wish to display microsecond time units with padding, synchronous mode enabled, and verbose printing disabled, the target's boot-args would be as follows: "debug=0x14e fwkpf=0x114". On the target, run the following in Terminal.app: % sudo nvram boot-args="debug=0x14e fwkpf=0x114" If not defined, the "fwkpf" variable defaults to "0x004". DECIPHERING THE OUTPUT Once the viewer is running, the target machine is logging, and both machines are connected with a FireWire cable, you will see output similar to the following: % fwkpfv Welcome to FireWireKPrintf (viewer x.x.x). c>50.097255 AppleFWOHCI_KPF version x.x.x c>55.110783 AppleFWOHCI_KPF (re)initialized c>55.110793 Log saver c>55.129614 in.c: warning can't plumb proto if=fw0 type 144 error=17 'Welcome to FireWireKPrintf (viewer x.x.x)' signifies the viewer tool started correctly. If multiple interfaces are present on the debugger machine, it will give an interface count. 'AppleFWOHCI_KPF version x.x.x' signifies the AppleFireWireKPrintf kext has (re)initialized the FireWire hardware for use by FireWireKPrintf. 'FWKPF: Time Format->...' displays the time format declared in the target's boot-args. See the "Options" section of this document to select a different time format. 'c>13.481567' displays the time at which the kprintf call was logged. Prefixed with the letter that corresponds to the time formats listed above. The format of this time log is displayed upon start and can be changed in the target's boot-args. See above. '... in.c: warning can't...' the const char * string from the kprintf() call; the log. (This is a normal log.) 
TROUBLESHOOTING If you are seeing the following symptoms: There is no output from the fwkpfv tool on the second machine: - Make sure the two machines are connected with a good FireWire cable. - Run "nvram boot-args" and verify that the boot-args are set correctly. - Be sure you're using the new fwkpfv utility, version 2.1 or newer. The machine hangs at boot: - Sometimes the console will hang at boot when there is a high volume of logging to screen. Try booting in non-verbose mode or limiting the volume of logging. Remove the "-v" from your machine's boot-args. Or remove "io=0x80". DISABLING To disable FireWireKPrintf, delete the target machine's boot-args. Within Terminal.app run the following: - % sudo nvram -d boot-args - OR set the boot-args variable to your previous setting. - Restart the target Mac. NOTES Other debug/boot-arg options: For more information on the debug flags, please see Technical Note TN2118. Setting the boot-arg variable "io" to "0x80" will turn on a significant volume of power management logging, which may be useful while debugging sleep/wake issues. Similarly, adding the "-v" argument to the boot-args will enable Mac OS X's verbose mode. This may be useful for watching local logging during boot or shutdown. For example, to add power management logging and verbose mode: % sudo nvram boot-args="debug=0x8 io=0x80 -v" The timestamps for very early boot logs are inconsistent: FireWireKPrintf tries to catch kprintf calls as soon as its start() routine is called. All kprintf calls after this point will be saved until the FireWire hardware has been initialized completely (which is also early in the boot process); however, the timestamps for these very early logs will reference the time they were sent via FireWire, not when kprintf() was called. All timestamps can be assumed accurate after the log from FireWireKPrintf that reads something similar to: FWKPF: Time Format->... 
The timestamps for very early wake logs are inconsistent: Similar to very early boot logs, kprintf() calls by the kernel very early upon wake will be saved and sent after the FireWire hardware has had time to initialize. Likewise, the timestamps for these early logs may reflect a yet-to-be-initialized CPU time. These timestamps will be extremely large and clearly recognizable. Synchronous or Non-Synchronous?: With the exception of the two cases above (very early boot and very early wake), when the FireWire hardware cannot be initialized without stopping kernel progression, all FireWireKPrintf logs are sent synchronously. This means that if the log is sent successfully, it is guaranteed to be on the wire before the call returns. If the log cannot be sent, an error will be written to system.log. How do I know if I have enabled FireWireKPrintf and have 'normal' FireWire disabled?: The "FireWire" tab of "/Applications/Utilities/System Profiler.app" will allow you to see if FireWireKPrintf has disabled normal FireWire services. If FireWire is disabled, unplug any FireWire cables and restart the Mac to restore normal FireWire services. Additionally, be mindful to restart machines that have dropped into logging mode as soon as you have finished using FireWireKPrintf logging. My boot-args disappeared unexpectedly: Some applications, such as the Startup Disk preference pane, set the boot-args themselves. Therefore, it is always best to boot to the partition you wish to debug, set the boot-args, and then restart. My FireWire drive mounts on a second machine and then disappears off the first: When a viewer Mac is connected to a logging (target) Mac, all normal FireWire services stop, including FireWire disk access. It may take a few moments for the disk to disappear on the logging Mac, but once you have connected a viewer Mac, it will be impossible to use a FireWire hard disk without restarting. 
I see an error when I first connect: The following log is often shown when you first connect: in.c: warning can't plumb proto if=fw0 type 144 error=17 It is a normal log from a different part of the system and should not be of any concern. Compatibility with Intel and PPC: FireWireKPrintf works on both Intel and PowerPC-based Macs. The integrated FireWireKPrintf and fwkpfv are new for Leopard and are not included in any previous OS release. Other FireWire Devices: To avoid conflicts it is best not to have other FireWire devices plugged into the host or target machines while using FireWireKPrintf. Having more than 2 nodes total (i.e. the two CPUs) may cause unexpected results. Logging from multiple machines: The fwkpfv utility is able to receive logging from multiple machines. Connecting more than one logging target machine to a viewer will result in individual Terminal windows showing machine-specific logs. A full, unparsed log is saved to "/Library/Logs/fwkpf.log". You may also force a machine ID prefix on each log by specifying the "-i" flag to fwkpfv. Using FireWireKPrintf with FireWireKDP: FireWireKPrintf is compatible with FireWireKDP. To use both, it is recommended to set the boot-args using the following command: % sudo nvram boot-args="debug=0x14e kdp_match_name=firewire" Of course, you may modify or add boot-args to suit your needs (see note above). How do I clear the viewer?: Remember, you can clear the scrollback buffer of Terminal.app by selecting "Clear Scrollback" (or Cmd-K) from the "Scrollback" menu. Why do I see different logging with different machines?: The "built-in" kprintf output is target machine specific. This is due to special casing of hardware and other states. It may also vary with operating system version and even kext versions. Remember, a developer can change their kprintf() calls at any time. 
Can I see more logging?: Most Macs have the ability to output a significant volume of power management logging, which may be useful while debugging sleep/wake issues. Many options are defined in <Kernel/IOKit/IOKitDebug.h>. What about FireLog?: FireLog and FireWireKPrintf are different, both in theory and practice. FireLog is a high speed logging system which requires a framework. Most importantly, FireLog uses a buffering system (in a pull manner) to prevent the loss of logs during high logging volume or low processing time. Conversely, FireWireKPrintf employs a push method of sending each log onto the wire as soon as it is available. Furthermore, FireWireKPrintf is available sooner in the kernel's startup. FireLog is an excellent solution if you need high speed logging. FILES /usr/bin/fwkpfv is installed as part of the Mac OS X Developer Tools. SEE ALSO fwkdp(1) Mac OS X September 15, 2008 Mac OS X
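The various boot-args discussed in this page (debug bits, the "fwkpf" timestamp format, power-management logging, verbose mode) can be composed into a single nvram command. A minimal sketch that only prints the resulting command, since nvram itself must be run as root on the target Mac; the particular flag values are taken from the sections above:

```shell
# Compose a boot-args string from the pieces discussed in this page.
# This only prints the command; run the printed line on the target Mac.
DEBUG="debug=0x14e"      # kprintf bit (0x8) plus other common debug services
FWKPF="fwkpf=0x4"        # microsecond timestamps (the default format)
PM="io=0x80"             # verbose power-management logging
echo sudo nvram boot-args=\"$DEBUG $FWKPF $PM -v\"
```

Running the block prints `sudo nvram boot-args="debug=0x14e fwkpf=0x4 io=0x80 -v"`, which you can paste into Terminal.app on the target before restarting it.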
| null |
sftp
|
sftp is a file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. The destination may be specified either as [user@]host[:path] or as a URI in the form sftp://[user@]host[:port][/path]. If the destination includes a path and it is not a directory, sftp will retrieve files automatically if a non-interactive authentication method is used; otherwise it will do so after successful interactive authentication. If no path is specified, or if the path is a directory, sftp will log in to the specified host and enter interactive command mode, changing to the remote directory if one was specified. An optional trailing slash can be used to force the path to be interpreted as a directory. Since the destination formats use colon characters to delimit host names from path names or port numbers, IPv6 addresses must be enclosed in square brackets to avoid ambiguity. The options are as follows: -4 Forces sftp to use IPv4 addresses only. -6 Forces sftp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -a Attempt to continue interrupted transfers rather than overwriting existing partial or complete copies of files. If the partial contents differ from those being transferred, then the resultant file is likely to be corrupt. -B buffer_size Specify the size of the buffer that sftp uses when transferring files. Larger buffers require fewer round trips at the cost of higher memory consumption. The default is 32768 bytes. -b batchfile Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction, it should be used in conjunction with non-interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details). 
A batchfile of β-β may be used to indicate standard input. sftp will abort if any of the following commands fail: get, put, reget, reput, rename, ln, rm, mkdir, chdir, ls, lchdir, copy, cp, chmod, chown, chgrp, lpwd, df, symlink, and lmkdir. Termination on error can be suppressed on a command-by-command basis by prefixing the command with a β-β character (for example, -rm /tmp/blah*). Echo of the command may be suppressed by prefixing the command with a β@β character. These two prefixes may be combined in any order, for example -@ls /bsd. -C Enables compression (via ssh's -C flag). -c cipher Selects the cipher to use for encrypting the data transfers. This option is directly passed to ssh(1). -D sftp_server_command Connect directly to a local sftp server (rather than via ssh(1)). A command and arguments may be specified, for example "/path/sftp-server -el debug3". This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh(1). This option is directly passed to ssh(1). -f Requests that files be flushed to disk immediately after transfer. When uploading files, this feature is only enabled if the server implements the "fsync@openssh.com" extension. -i identity_file Selects the file from which the identity (private key) for public key authentication is read. This option is directly passed to ssh(1). -J destination Connect to the target host by first making an sftp connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -N Disables quiet mode, e.g. to override the implicit quiet mode set by the -b flag. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). 
This is useful for specifying options for which there is no separate sftp command-line flag. For example, to specify an alternate port use: sftp -oPort=24. For full details of the options listed below, and their possible values, see ssh_config(5). AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. -p Preserves modification times, access times, and modes from the original files transferred. -q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R num_requests Specify how many requests may be outstanding at any one time. Increasing this may slightly improve file transfer speed but will increase memory usage. The default is 64 outstanding requests. -r Recursively copy entire directories when uploading and downloading. Note that sftp does not follow symbolic links encountered in the tree traversal. -S program Name of the program to use for the encrypted connection. The program must understand ssh(1) options. 
-s subsystem | sftp_server Specifies the SSH2 subsystem or the path for an sftp server on the remote host. A path is useful when the remote sshd(8) does not have an sftp subsystem configured. -v Raise logging level. This option is also passed to ssh. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used. INTERACTIVE COMMANDS Once in interactive mode, sftp understands a set of commands similar to those of ftp(1). Commands are case insensitive. Pathnames that contain spaces must be enclosed in quotes. Any special characters contained within pathnames that are recognized by glob(3) must be escaped with backslashes (β\β). bye Quit sftp. cd [path] Change remote directory to path. If path is not specified, then change directory to the one the session started in. chgrp [-h] grp path Change group of file path to grp. path may contain glob(7) characters and may match multiple files. grp must be a numeric GID. If the -h flag is specified, then symlinks will not be followed. Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. chmod [-h] mode path Change permissions of file path to mode. path may contain glob(7) characters and may match multiple files. If the -h flag is specified, then symlinks will not be followed. Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. chown [-h] own path Change owner of file path to own. path may contain glob(7) characters and may match multiple files. own must be a numeric UID. If the -h flag is specified, then symlinks will not be followed. 
Note that this is only supported by servers that implement the "lsetstat@openssh.com" extension. copy oldpath newpath Copy remote file from oldpath to newpath. Note that this is only supported by servers that implement the "copy-data" extension. cp oldpath newpath Alias to copy command. df [-hi] [path] Display usage information for the filesystem holding the current directory (or path if specified). If the -h flag is specified, the capacity information will be displayed using "human-readable" suffixes. The -i flag requests display of inode information in addition to capacity information. This command is only supported on servers that implement the "statvfs@openssh.com" extension. exit Quit sftp. get [-afpR] remote-path [local-path] Retrieve the remote-path and store it on the local machine. If the local path name is not specified, it is given the same name it has on the remote machine. remote-path may contain glob(7) characters and may match multiple files. If it does and local-path is specified, then local-path must specify a directory. If the -a flag is specified, then attempt to resume partial transfers of existing files. Note that resumption assumes that any partial copy of the local file matches the remote copy. If the remote file contents differ from the partial local copy then the resultant file is likely to be corrupt. If the -f flag is specified, then fsync(2) will be called after the file transfer has completed to flush the file to disk. If the -p flag is specified, then full file permissions and access times are copied too. If the -R flag is specified then directories will be copied recursively. Note that sftp does not follow symbolic links when performing recursive transfers. help Display help text. lcd [path] Change local directory to path. If path is not specified, then change directory to the local user's home directory. lls [ls-options [path]] Display local directory listing of either path or current directory if path is not specified. 
ls-options may contain any flags supported by the local system's ls(1) command. path may contain glob(7) characters and may match multiple files. lmkdir path Create local directory specified by path. ln [-s] oldpath newpath Create a link from oldpath to newpath. If the -s flag is specified the created link is a symbolic link, otherwise it is a hard link. lpwd Print local working directory. ls [-1afhlnrSt] [path] Display a remote directory listing of either path or the current directory if path is not specified. path may contain glob(7) characters and may match multiple files. The following flags are recognized and alter the behaviour of ls accordingly: -1 Produce single columnar output. -a List files beginning with a dot (β.β). -f Do not sort the listing. The default sort order is lexicographical. -h When used with a long format option, use unit suffixes: Byte, Kilobyte, Megabyte, Gigabyte, Terabyte, Petabyte, and Exabyte in order to reduce the number of digits to four or fewer using powers of 2 for sizes (K=1024, M=1048576, etc.). -l Display additional details including permissions and ownership information. -n Produce a long listing with user and group information presented numerically. -r Reverse the sort order of the listing. -S Sort the listing by file size. -t Sort the listing by last modification time. lumask umask Set local umask to umask. mkdir path Create remote directory specified by path. progress Toggle display of progress meter. put [-afpR] local-path [remote-path] Upload local-path and store it on the remote machine. If the remote path name is not specified, it is given the same name it has on the local machine. local-path may contain glob(7) characters and may match multiple files. If it does and remote-path is specified, then remote-path must specify a directory. If the -a flag is specified, then attempt to resume partial transfers of existing files. Note that resumption assumes that any partial copy of the remote file matches the local copy. 
If the local file contents differ from the remote partial copy then the resultant file is likely to be corrupt. If the -f flag is specified, then a request will be sent to the server to call fsync(2) after the file has been transferred. Note that this is only supported by servers that implement the "fsync@openssh.com" extension. If the -p flag is specified, then full file permissions and access times are copied too. If the -R flag is specified then directories will be copied recursively. Note that sftp does not follow symbolic links when performing recursive transfers. pwd Display remote working directory. quit Quit sftp. reget [-fpR] remote-path [local-path] Resume download of remote-path. Equivalent to get with the -a flag set. reput [-fpR] local-path [remote-path] Resume upload of local-path. Equivalent to put with the -a flag set. rename oldpath newpath Rename remote file from oldpath to newpath. rm path Delete remote file specified by path. rmdir path Remove remote directory specified by path. symlink oldpath newpath Create a symbolic link from oldpath to newpath. version Display the sftp protocol version. !command Execute command in local shell. ! Escape to local shell. ? Synonym for help. SEE ALSO ftp(1), ls(1), scp(1), ssh(1), ssh-add(1), ssh-keygen(1), ssh_config(5), glob(7), sftp-server(8), sshd(8) T. Ylonen and S. Lehtinen, SSH File Transfer Protocol, draft-ietf-secsh-filexfer-00.txt, January 2001, work in progress material. macOS 14.5 December 16, 2022 macOS 14.5
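The batch mode described under the -b flag can be sketched end to end. The file name and the remote paths inside the batch file below are hypothetical, and the sftp invocation itself is left commented out because it needs a reachable server with non-interactive (key-based) authentication:

```shell
# Build a batch file. Inside it, the '-' prefix keeps the batch alive if
# that command fails, and the '@' prefix suppresses echoing of the command.
cat > /tmp/demo.batch <<'EOF'
-rm /tmp/blah*
@lpwd
cd /upload
put report.txt
EOF
# With a key loaded, this runs without prompting (hypothetical host):
#   sftp -b /tmp/demo.batch user@example.com
cat /tmp/demo.batch
```

Because -rm is prefixed with β-β, a failed remote deletion will not abort the cd and put that follow it.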
|
sftp β OpenSSH secure file transfer
|
sftp [-46AaCfNpqrv] [-B buffer_size] [-b batchfile] [-c cipher] [-D sftp_server_command] [-F ssh_config] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] [-P port] [-R num_requests] [-S program] [-s subsystem | sftp_server] [-X sftp_option] destination
| null | null |
wc
|
The wc utility displays the number of lines, words, and bytes contained in each input file, or standard input (if no file is specified) to the standard output. A line is defined as a string of characters delimited by a β¨newlineβ© character. Characters beyond the final β¨newlineβ© character will not be included in the line count. A word is defined as a string of characters delimited by white space characters. White space characters are the set of characters for which the iswspace(3) function returns true. If more than one input file is specified, a line of cumulative counts for all the files is displayed on a separate line after the output for the last file. The following options are available: --libxo Generate output via libxo(3) in a selection of different human and machine readable formats. See xo_parse_args(3) for details on command line arguments. -L Write the length of the line containing the most bytes (default) or characters (when -m is provided) to standard output. When more than one file argument is specified, the longest input line of all files is reported as the value of the final βtotalβ. -c The number of bytes in each input file is written to the standard output. This will cancel out any prior usage of the -m option. -l The number of lines in each input file is written to the standard output. -m The number of characters in each input file is written to the standard output. If the current locale does not support multibyte characters, this is equivalent to the -c option. This will cancel out any prior usage of the -c option. -w The number of words in each input file is written to the standard output. When an option is specified, wc only reports the information requested by that option. The order of output always takes the form of line, word, byte, and file name. The default action is equivalent to specifying the -c, -l and -w options. If no files are specified, the standard input is used and no file name is displayed. 
When reading from standard input, wc accepts input until receiving EOF (control-D in most environments). If wc receives a SIGINFO (see the status argument for stty(1)) signal, the interim data will be written to the standard error output in the same format as the standard completion message. ENVIRONMENT The LANG, LC_ALL and LC_CTYPE environment variables affect the execution of wc as described in environ(7). EXIT STATUS The wc utility exits 0 on success, and >0 if an error occurs.
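As a quick local illustration of the counting behavior above (the file name is arbitrary, and the byte count for the multibyte line assumes a UTF-8 environment):

```shell
# Default counts on a small sample file.
printf 'one two\nthree\n' > /tmp/wc-demo.txt
wc -l /tmp/wc-demo.txt          # 2 lines
wc -w /tmp/wc-demo.txt          # 3 words
wc -c /tmp/wc-demo.txt          # 14 bytes
# -m counts characters rather than bytes; in a multibyte locale they differ:
printf 'héllo\n' | wc -c        # 7 bytes ('é' is two bytes in UTF-8)
```

With no option flags, `wc /tmp/wc-demo.txt` prints all three counts on one line in line, word, byte order.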
|
wc β word, line, character, and byte count
|
wc [--libxo] [-Lclmw] [file ...]
| null |
Count the number of characters, words and lines in each of the files report1 and report2 as well as the totals for both: wc -mlw report1 report2 Find the longest line in a list of files: wc -L file1 file2 file3 | fgrep total COMPATIBILITY Historically, the wc utility was documented to define a word as a βmaximal string of characters delimited by <space>, <tab> or <newline> charactersβ. The implementation, however, did not handle non-printing characters correctly so that β ^D^E β counted as 6 spaces, while βfoo^D^Ebarβ counted as 8 characters. 4BSD systems after 4.3BSD modified the implementation to be consistent with the documentation. This implementation defines a βwordβ in terms of the iswspace(3) function, as required by IEEE Std 1003.2 (βPOSIX.2β). The -L option is a non-standard FreeBSD extension, compatible with the -L option of the GNU wc utility. SEE ALSO iswspace(3), libxo(3), xo_parse_args(3) STANDARDS The wc utility conforms to IEEE Std 1003.1-2001 (βPOSIX.1β). HISTORY A wc command appeared in Version 1 AT&T UNIX. macOS 14.5 April 11, 2020 macOS 14.5
|
jps
|
The jps command lists the instrumented Java HotSpot VMs on the target system. The command is limited to reporting information on JVMs for which it has the access permissions. If the jps command is run without specifying a hostid, then it searches for instrumented JVMs on the local host. If started with a hostid, then it searches for JVMs on the indicated host, using the specified protocol and port. A jstatd process is assumed to be running on the target host. The jps command reports the local JVM identifier, or lvmid, for each instrumented JVM found on the target system. The lvmid is typically, but not necessarily, the operating system's process identifier for the JVM process. With no options, the jps command lists each Java application's lvmid followed by the short form of the application's class name or jar file name. The short form of the class name or JAR file name omits the class's package information or the JAR files path information. The jps command uses the Java launcher to find the class name and arguments passed to the main method. If the target JVM is started with a custom launcher, then the class or JAR file name, and the arguments to the main method aren't available. In this case, the jps command outputs the string Unknown for the class name, or JAR file name, and for the arguments to the main method. The list of JVMs produced by the jps command can be limited by the permissions granted to the principal running the command. The command lists only the JVMs for which the principal has access rights as determined by operating system-specific access control mechanisms. HOST IDENTIFIER The host identifier, or hostid, is a string that indicates the target system. The syntax of the hostid string corresponds to the syntax of a URI: [protocol:][[//]hostname][:port][/servername] protocol The communications protocol. If the protocol is omitted and a hostname isn't specified, then the default protocol is a platform-specific, optimized, local protocol. 
If the protocol is omitted and a host name is specified, then the default protocol is rmi. hostname A host name or IP address that indicates the target host. If you omit the hostname parameter, then the target host is the local host. port The default port for communicating with the remote server. If the hostname parameter is omitted or the protocol parameter specifies an optimized, local protocol, then the port parameter is ignored. Otherwise, treatment of the port parameter is implementation-specific. For the default rmi protocol, the port parameter indicates the port number for the rmiregistry on the remote host. If the port parameter is omitted, and the protocol parameter indicates rmi, then the default rmiregistry port (1099) is used. servername The treatment of this parameter depends on the implementation. For the optimized, local protocol, this field is ignored. For the rmi protocol, this parameter is a string that represents the name of the RMI remote object on the remote host. See the jstatd command -n option. OUTPUT FORMAT OF THE JPS COMMAND The output of the jps command has the following pattern: lvmid [ [ classname | JARfilename | "Unknown"] [ arg* ] [ jvmarg* ] ] All output tokens are separated by white space. An arg value that includes embedded white space introduces ambiguity when attempting to map arguments to their actual positional parameters. Note: It's recommended that you don't write scripts to parse jps output because the format might change in future releases. If you write scripts that parse jps output, then expect to modify them for future releases of this tool.
|
jps - list the instrumented JVMs on the target system
|
Note: This command is experimental and unsupported. jps [-q] [-mlvV] [hostid] jps [-help]
|
-q Suppresses the output of the class name, JAR file name, and arguments passed to the main method, producing a list of only local JVM identifiers. -mlvV You can specify any combination of these options. β’ -m displays the arguments passed to the main method. The output may be null for embedded JVMs. β’ -l displays the full package name for the application's main class or the full path name to the application's JAR file. β’ -v displays the arguments passed to the JVM. β’ -V suppresses the output of the class name, JAR file name, and arguments passed to the main method, producing a list of only local JVM identifiers. hostid The identifier of the host for which the process report should be generated. The hostid can include optional components that indicate the communications protocol, port number, and other implementation specific data. See Host Identifier. -help Displays the help message for the jps command.
|
This section provides examples of the jps command. List the instrumented JVMs on the local host: jps 18027 Java2Demo.JAR 18032 jps 18005 jstat The following example lists the instrumented JVMs on a remote host. This example assumes that the jstatd server and either its internal RMI registry or a separate external rmiregistry process are running on the remote host on the default port (port 1099). It also assumes that the local host has appropriate permissions to access the remote host. This example includes the -l option to output the long form of the class names or JAR file names. jps -l remote.domain 3002 /opt/jdk1.7.0/demo/jfc/Java2D/Java2Demo.JAR 2857 sun.tools.jstatd.jstatd The following example lists the instrumented JVMs on a remote host with a nondefault port for the RMI registry. This example assumes that the jstatd server, with an internal RMI registry bound to port 2002, is running on the remote host. This example also uses the -m option to include the arguments passed to the main method of each of the listed Java applications. jps -m remote.domain:2002 3002 /opt/jdk1.7.0/demo/jfc/Java2D/Java2Demo.JAR 3102 sun.tools.jstatd.jstatd -p 2002 JDK 22 2024 JPS(1)
|
rsync
|
rsync is a program that behaves in much the same way that rcp does, but has many more options and uses the rsync remote-update protocol to greatly speed up file transfers when the destination file is being updated. The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection, using an efficient checksum-search algorithm described in the technical report that accompanies this package. Some of the additional features of rsync are: o support for copying links, devices, owners, groups, and permissions o exclude and exclude-from options similar to GNU tar o a CVS exclude mode for ignoring the same files that CVS would ignore o can use any transparent remote shell, including ssh or rsh o does not require super-user privileges o pipelining of file transfers to minimize latency costs o support for anonymous or authenticated rsync daemons (ideal for mirroring) GENERAL Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts). There are two different ways for rsync to contact a remote system: using a remote-shell program as the transport (such as ssh or rsh) or contacting an rsync daemon directly via TCP. The remote-shell transport is used whenever the source or destination path contains a single colon (:) separator after a host specification. Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified (see also the "USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION" section for an exception to this latter rule). As a special case, if a single source arg is specified without a destination, the files are listed in an output format similar to "ls -l". As expected, if neither the source nor the destination path specifies a remote host, the copy occurs locally (see also the --list-only option). 
SETUP See the file README for installation instructions. Once installed, you can use rsync to any machine that you can access via a remote shell (as well as some that you can access using the rsync daemon-mode protocol). For remote transfers, a modern rsync uses ssh for its communications, but it may have been configured to use a different remote shell by default, such as rsh or remsh. You can also specify any remote shell you like, either by using the -e command line option, or by setting the RSYNC_RSH environment variable. Note that rsync must be installed on both the source and destination machines.

USAGE You use rsync in the same way you use rcp. You must specify a source and a destination, one of which may be remote. Perhaps the best way to explain the syntax is with some examples:

rsync -t *.c foo:src/

This would transfer all files matching the pattern *.c from the current directory to the directory src on the machine foo. If any of the files already exist on the remote system then the rsync remote-update protocol is used to update the file by sending only the differences. See the tech report for details.

rsync -avz foo:src/bar /data/tmp

This would recursively transfer all files from the directory src/bar on the machine foo into the /data/tmp/bar directory on the local machine. The files are transferred in "archive" mode, which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer. Additionally, compression will be used to reduce the size of data portions of the transfer.

rsync -avz foo:src/bar/ /data/tmp

A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name", but in both cases the attributes of the containing directory are transferred to the containing directory on the destination. 
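The trailing-slash distinction can be verified with a purely local transfer; the directory names below are illustrative, and the sketch assumes rsync is installed:

```shell
# "src/foo" copies the directory itself; "src/foo/" copies only
# its contents, one level higher at the destination.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/foo" "$tmp/a" "$tmp/b"
echo data > "$tmp/src/foo/file"

rsync -a "$tmp/src/foo"  "$tmp/a"   # creates a/foo/file
rsync -a "$tmp/src/foo/" "$tmp/b"   # creates b/file (contents only)
```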
In other words, each of the following commands copies the files in the same way, including their setting of the attributes of /dest/foo:

rsync -av /src/foo /dest
rsync -av /src/foo/ /dest/foo

Note also that host and module references don't require a trailing slash to copy the contents of the default directory. For example, both of these copy the remote directory's contents into "/dest":

rsync -av host: /dest
rsync -av host::module /dest

You can also use rsync in local-only mode, where both the source and destination don't have a ':' in the name. In this case it behaves like an improved copy command. Finally, you can list all the (listable) modules available from a particular rsync daemon by leaving off the module name:

rsync somehost.mydomain.com::

See the following section for more details.

ADVANCED USAGE The syntax for requesting multiple files from a remote host involves using quoted spaces in the SRC. Some examples:

rsync host::'modname/dir1/file1 modname/dir2/file2' /dest

This would copy file1 and file2 into /dest from an rsync daemon. Each additional arg must include the same "modname/" prefix as the first one, and must be preceded by a single space. All other spaces are assumed to be a part of the filenames.

rsync -av host:'dir1/file1 dir2/file2' /dest

This would copy file1 and file2 into /dest using a remote shell. This word-splitting is done by the remote shell, so if it doesn't work it means that the remote shell isn't configured to split its args based on whitespace (a very rare setting, but not unknown). If you need to transfer a filename that contains whitespace, you'll need to either escape the whitespace in a way that the remote shell will understand, or use wildcards in place of the spaces. Two examples of this are:

rsync -av host:'file\ name\ with\ spaces' /dest
rsync -av host:file?name?with?spaces /dest

This latter example assumes that your shell passes through unmatched wildcards. If it complains about "no match", put the name in quotes. 
CONNECTING TO AN RSYNC DAEMON It is also possible to use rsync without a remote shell as the transport. In this case you will directly connect to a remote rsync daemon, typically using TCP port 873. (This obviously requires the daemon to be running on the remote system, so refer to the STARTING AN RSYNC DAEMON TO ACCEPT CONNECTIONS section below for information on that.) Using rsync in this way is the same as using it with a remote shell except that:

o you either use a double colon :: instead of a single colon to separate the hostname from the path, or you use an rsync:// URL.
o the first word of the "path" is actually a module name.
o the remote daemon may print a message of the day when you connect.
o if you specify no path name on the remote daemon then the list of accessible paths on the daemon will be shown.
o if you specify no local destination then a listing of the specified files on the remote daemon is provided.
o you must not specify the --rsh (-e) option.

An example that copies all the files in a remote module named "src":

rsync -av host::src /dest

Some modules on the remote daemon may require authentication. If so, you will receive a password prompt when you connect. You can avoid the password prompt by setting the environment variable RSYNC_PASSWORD to the password you want to use or using the --password-file option. This may be useful when scripting rsync. WARNING: On some systems environment variables are visible to all users. On those systems using --password-file is recommended. You may establish the connection via a web proxy by setting the environment variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy. Note that your web proxy's configuration must support proxy connections to port 873. 
USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION It is sometimes useful to use various features of an rsync daemon (such as named modules) without actually allowing any new socket connections into a system (other than what is already required to allow remote-shell access). Rsync supports connecting to a host using a remote shell and then spawning a single-use "daemon" server that expects to read its config file in the home dir of the remote user. This can be useful if you want to encrypt a daemon-style transfer's data, but since the daemon is started up fresh by the remote user, you may not be able to use features such as chroot or change the uid used by the daemon. (For another way to encrypt a daemon transfer, consider using ssh to tunnel a local port to a remote machine and configure a normal rsync daemon on that remote host to only allow connections from "localhost".) From the user's perspective, a daemon transfer via a remote-shell connection uses nearly the same command-line syntax as a normal rsync-daemon transfer, with the only exception being that you must explicitly set the remote shell program on the command-line with the --rsh=COMMAND option. (Setting the RSYNC_RSH in the environment will not turn on this functionality.) For example:

rsync -av --rsh=ssh host::module /dest

If you need to specify a different remote-shell user, keep in mind that the user@ prefix in front of the host is specifying the rsync-user value (for a module that requires user-based authentication). This means that you must give the '-l user' option to ssh when specifying the remote-shell, as in this example that uses the short version of the --rsh option:

rsync -av -e "ssh -l ssh-user" rsync-user@host::module /dest

The "ssh-user" will be used at the ssh level; the "rsync-user" will be used to log-in to the "module". 
STARTING AN RSYNC DAEMON TO ACCEPT CONNECTIONS In order to connect to an rsync daemon, the remote system needs to have a daemon already running (or it needs to have configured something like inetd to spawn an rsync daemon for incoming connections on a particular port). For full information on how to start a daemon that will handle incoming socket connections, see the rsyncd.conf(5) man page -- that is the config file for the daemon, and it contains the full details for how to run the daemon (including stand-alone and inetd configurations). If you're using one of the remote-shell transports for the transfer, there is no need to manually start an rsync daemon.
|
rsync - faster, flexible replacement for rcp
|
rsync [OPTION]... SRC [SRC]... DEST
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
rsync [OPTION]... SRC [SRC]... [USER@]HOST::DEST
rsync [OPTION]... SRC [SRC]... rsync://[USER@]HOST[:PORT]/DEST
rsync [OPTION]... SRC
rsync [OPTION]... [USER@]HOST:SRC [DEST]
rsync [OPTION]... [USER@]HOST::SRC [DEST]
rsync [OPTION]... rsync://[USER@]HOST[:PORT]/SRC [DEST]
|
rsync uses the GNU long options package. Many of the command line options have two variants, one short and one long. These are shown below, separated by commas. Some options only have a long variant. The '=' for options that take a parameter is optional; whitespace can be used instead. --help Print a short help page describing the options available in rsync and exit. For backward-compatibility with older versions of rsync, the help will also be output if you use the -h option without any other args. --version print the rsync version number and exit. -v, --verbose This option increases the amount of information you are given during the transfer. By default, rsync works silently. A single -v will give you information about what files are being transferred and a brief summary at the end. Two -v flags will give you information on what files are being skipped and slightly more information at the end. More than two -v flags should only be used if you are debugging rsync. Note that the names of the transferred files that are output are done using a default --out-format of "%n%L", which tells you just the name of the file and, if the item is a link, where it points. At the single -v level of verbosity, this does not mention when a file gets its attributes changed. If you ask for an itemized list of changed attributes (either --itemize-changes or adding "%i" to the --out-format setting), the output (on the client) increases to mention all items that are changed in any way. See the --out-format option for more details. -q, --quiet This option decreases the amount of information you are given during the transfer, notably suppressing information messages from the remote server. This flag is useful when invoking rsync from cron. --no-motd This option affects the information that is output by the client at the start of a daemon transfer. 
This suppresses the message-of-the-day (MOTD) text, but it also affects the list of modules that the daemon sends in response to the "rsync host::" request (due to a limitation in the rsync protocol), so omit this option if you want to request the list of modules from the daemon. -I, --ignore-times Normally rsync will skip any files that are already the same size and have the same modification time-stamp. This option turns off this "quick check" behavior, causing all files to be updated. --size-only Normally rsync will not transfer any files that are already the same size and have the same modification time-stamp. With the --size-only option, files will not be transferred if they have the same size, regardless of timestamp. This is useful when starting to use rsync after using another mirroring system which may not preserve timestamps exactly. --modify-window When comparing two timestamps, rsync treats the timestamps as being equal if they differ by no more than the modify-window value. This is normally 0 (for an exact match), but you may find it useful to set this to a larger value in some situations. In particular, when transferring to or from an MS Windows FAT filesystem (which represents times with a 2-second resolution), --modify-window=1 is useful (allowing times to differ by up to 1 second). -c, --checksum This forces the sender to checksum every regular file using a 128-bit MD4 checksum. It does this during the initial file-system scan as it builds the list of all available files. The receiver then checksums its version of each file (if it exists and it has the same size as its sender-side counterpart) in order to decide which files need to be updated: files with either a changed size or a changed checksum are selected for transfer. Since this whole-file checksumming of all files on both sides of the connection occurs in addition to the automatic checksum verifications that occur during a file's transfer, this option can be quite slow. 
Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking its whole-file checksum, but that automatic after-the-transfer verification has nothing to do with this option's before-the- transfer "Does this file need to be updated?" check. -a, --archive This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything (with -H being a notable omission). The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied. Note that -a does not preserve hardlinks, because finding multiply-linked files is expensive. You must separately specify -H. --no-OPTION You may turn off one or more implied options by prefixing the option name with "no-". Not all options may be prefixed with a "no-": only options that are implied by other options (e.g. --no-D, --no-perms) or have different defaults in various circumstances (e.g. --no-whole-file, --no-blocking-io, --no-dirs). You may specify either the short or the long option name after the "no-" prefix (e.g. --no-R is the same as --no-relative). For example: if you want to use -a (--archive) but don't want -o (--owner), instead of converting -a into -rlptgD, you could specify -a --no-o (or -a --no-owner). The order of the options is important: if you specify --no-r -a, the -r option would end up being turned on, the opposite of -a --no-r. Note also that the side-effects of the --files-from option are NOT positional, as it affects the default state of several options and slightly changes the meaning of -a (see the --files-from option for more details). -r, --recursive This tells rsync to copy directories recursively. See also --dirs (-d). -R, --relative Use relative paths. This means that the full path names specified on the command line are sent to the server rather than just the last parts of the filenames. 
This is particularly useful when you want to send several different directories at the same time. For example, if you used this command: rsync -av /foo/bar/baz.c remote:/tmp/ ... this would create a file named baz.c in /tmp/ on the remote machine. If instead you used rsync -avR /foo/bar/baz.c remote:/tmp/ then a file named /tmp/foo/bar/baz.c would be created on the remote machine -- the full path name is preserved. To limit the amount of path information that is sent, you have a couple options: (1) With a modern rsync on the sending side (beginning with 2.6.7), you can insert a dot and a slash into the source path, like this: rsync -avR /foo/./bar/baz.c remote:/tmp/ That would create /tmp/bar/baz.c on the remote machine. (Note that the dot must be followed by a slash, so "/foo/." would not be abbreviated.) (2) For older rsync versions, you would need to use a chdir to limit the source path. For example, when pushing files: (cd /foo; rsync -avR bar/baz.c remote:/tmp/) (Note that the parens put the two commands into a sub-shell, so that the "cd" command doesn't remain in effect for future commands.) If you're pulling files, use this idiom (which doesn't work with an rsync daemon): rsync -avR --rsync-path="cd /foo; rsync" \ remote:bar/baz.c /tmp/ --no-implied-dirs This option affects the default behavior of the --relative option. When it is specified, the attributes of the implied directories from the source names are not included in the transfer. This means that the corresponding path elements on the destination system are left unchanged if they exist, and any missing implied directories are created with default attributes. This even allows these implied path elements to have big differences, such as being a symlink to a directory on one side of the transfer, and a real directory on the other side. 
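Behavior (1) above is easy to confirm with a local transfer (temp paths illustrative; assumes rsync 2.6.7 or later for the "/./" marker):

```shell
# With -R, only the path portion after "/./" is recreated
# under the destination.
tmp=$(mktemp -d)
mkdir -p "$tmp/foo/bar" "$tmp/dest"
echo x > "$tmp/foo/bar/baz.c"

rsync -aR "$tmp/foo/./bar/baz.c" "$tmp/dest/"
# result: dest/bar/baz.c -- no "foo" level and no $tmp prefix
```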
For instance, if a command-line arg or a files-from entry told rsync to transfer the file "path/foo/file", the directories "path" and "path/foo" are implied when --relative is used. If "path/foo" is a symlink to "bar" on the destination system, the receiving rsync would ordinarily delete "path/foo", recreate it as a directory, and receive the file into the new directory. With --no-implied-dirs, the receiving rsync updates "path/foo/file" using the existing path elements, which means that the file ends up being created in "path/bar". Another way to accomplish this link preservation is to use the --keep-dirlinks option (which will also affect symlinks to directories in the rest of the transfer). In a similar but opposite scenario, if the transfer of "path/foo/file" is requested and "path/foo" is a symlink on the sending side, running without --no-implied-dirs would cause rsync to transform "path/foo" on the receiving side into an identical symlink, and then attempt to transfer "path/foo/file", which might fail if the duplicated symlink did not point to a directory on the receiving side. Another way to avoid this sending of a symlink as an implied directory is to use --copy-unsafe-links, or --copy-dirlinks (both of which also affect symlinks in the rest of the transfer -- see their descriptions for full details). -b, --backup With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. Note that if you don't specify --backup-dir, (1) the --omit-dir-times option will be implied, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a "protect" filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. 
Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of '*', the auto-added rule would never be reached). --backup-dir=DIR In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). --suffix=SUFFIX This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string. -u, --update This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modify time equal to the source file's, it will be updated if the sizes are different.) In the current implementation of --update, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory or a symlink where the destination has a file, the transfer would occur regardless of the timestamps. This might change in the future (feel free to comment on this on the mailing list if you have an opinion). --inplace This causes rsync not to create a new copy of the file and then move it into place. Instead rsync will overwrite the existing file, meaning that the rsync algorithm can't accomplish the full amount of network reduction it might be able to otherwise (since it does not yet try to sort data matches). 
One exception to this is if you combine the option with --backup, since rsync is smart enough to use the backup file as the basis file for the transfer. This option is useful for transfer of large files with block-based changes or appended data, and also on systems that are disk bound, not network bound. The option implies --partial (since an interrupted transfer does not delete the file), but conflicts with --partial-dir and --delay-updates. Prior to rsync 2.6.4 --inplace was also incompatible with --compare-dest and --link-dest. WARNING: The file's data will be in an inconsistent state during the transfer (and possibly afterward if the transfer gets interrupted), so you should not use this option to update files that are in use. Also note that rsync will be unable to update a file in-place that is not writable by the receiving user. --append This causes rsync to update a file by appending data onto the end of the file, which presumes that the data that already exists on the receiving side is identical with the start of the file on the sending side. If that is not true, the file will fail the checksum test, and the resend will do a normal --inplace update to correct the mismatched data. Only files on the receiving side that are shorter than the corresponding file on the sending side (as well as new files) are sent. Implies --inplace, but does not conflict with --sparse (though the --sparse option will be auto-disabled if a resend of the already-existing data is required). -d, --dirs Tell the sending side to include any directories that are encountered. Unlike --recursive, a directory's contents are not copied unless the directory name specified is "." or ends with a trailing slash (e.g. ".", "dir/.", "dir/", etc.). Without this option or the --recursive option, rsync will skip all directories it encounters (and output a message to that effect for each one). If you specify both --dirs and --recursive, --recursive takes precedence. 
-l, --links When symlinks are encountered, recreate the symlink on the destination. -L, --copy-links When symlinks are encountered, the item that they point to (the referent) is copied, rather than the symlink. In older versions of rsync, this option also had the side-effect of telling the receiving side to follow symlinks, such as symlinks to directories. In a modern rsync such as this one, you'll need to specify --keep-dirlinks (-K) to get this extra behavior. The only exception is when sending files to an rsync that is too old to understand -K -- in that case, the -L option will still have the side-effect of -K on that older receiving rsync. --copy-unsafe-links This tells rsync to copy the referent of symbolic links that point outside the copied tree. Absolute symlinks are also treated like ordinary files, and so are any symlinks in the source path itself when --relative is used. This option has no additional effect if --copy-links was also specified. --safe-links This tells rsync to ignore any symbolic links which point outside the copied tree. All absolute symlinks are also ignored. Using this option in conjunction with --relative may give unexpected results. -k, --copy-dirlinks This option causes the sending side to treat a symlink to a directory as though it were a real directory. This is useful if you don't want symlinks to non-directories to be affected, as they would be using --copy-links. Without this option, if the sending side has replaced a directory with a symlink to a directory, the receiving side will delete anything that is in the way of the new symlink, including a directory hierarchy (as long as --force or --delete is in effect). See also --keep-dirlinks for an analogous option for the receiving side. -K, --keep-dirlinks This option causes the receiving side to treat a symlink to a directory as though it were a real directory, but only if it matches a real directory from the sender. 
Without this option, the receiver's symlink would be deleted and replaced with a real directory. For example, suppose you transfer a directory "foo" that contains a file "file", but "foo" is a symlink to directory "bar" on the receiver. Without --keep-dirlinks, the receiver deletes symlink "foo", recreates it as a directory, and receives the file into the new directory. With --keep-dirlinks, the receiver keeps the symlink and "file" ends up in "bar". See also --copy-dirlinks for an analogous option for the sending side. -H, --hard-links This tells rsync to look for hard-linked files in the transfer and link together the corresponding files on the receiving side. Without this option, hard-linked files in the transfer are treated as though they were separate files. Note that rsync can only detect hard links if both parts of the link are in the list of files being sent. -p, --perms This option causes the receiving rsync to set the destination permissions to be the same as the source permissions. (See also the --chmod option for a way to modify what rsync considers to be the source permissions.) When this option is off, permissions are set as follows: o Existing files (including updated files) retain their existing permissions, though the --executability option might change just the execute permission for the file. o New files get their "normal" permission bits set to the source file's permissions masked with the receiving end's umask setting, and their special permission bits disabled except in the case where a new directory inherits a setgid bit from its parent directory. Thus, when --perms and --executability are both disabled, rsync's behavior is the same as that of other file-copy utilities, such as cp(1) and tar(1). In summary: to give destination files (both old and new) the source permissions, use --perms. 
To give new files the destination-default permissions (while leaving existing files unchanged), make sure that the --perms option is off and use --chmod=ugo=rwX (which ensures that all non-masked bits get enabled). If you'd care to make this latter behavior easier to type, you could define a popt alias for it, such as putting this line in the file ~/.popt (this defines the -s option, and includes --no-g to use the default group of the destination dir): rsync alias -s --no-p --no-g --chmod=ugo=rwX You could then use this new option in a command such as this one: rsync -asv src/ dest/ (Caveat: make sure that -a does not follow -s, or it will re-enable the "--no-*" options.) The preservation of the destination's setgid bit on newly-created directories when --perms is off was added in rsync 2.6.7. Older rsync versions erroneously preserved the three special permission bits for newly-created files when --perms was off, while overriding the destination's setgid bit setting on a newly-created directory. (Keep in mind that it is the version of the receiving rsync that affects this behavior.) --executability This option causes rsync to preserve the executability (or non-executability) of regular files when --perms is not enabled. A regular file is considered to be executable if at least one 'x' is turned on in its permissions. When an existing destination file's executability differs from that of the corresponding source file, rsync modifies the destination file's permissions as follows: o To make a file non-executable, rsync turns off all its 'x' permissions. o To make a file executable, rsync turns on each 'x' permission that has a corresponding 'r' permission enabled. If --perms is enabled, this option is ignored. --chmod This option tells rsync to apply one or more comma-separated "chmod" strings to the permission of the files in the transfer. 
The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled. In addition to the normal parsing rules specified in the chmod(1) manpage, you can specify an item that should only apply to a directory by prefixing it with a 'D', or specify an item that should only apply to a file by prefixing it with a 'F'. For example: --chmod=Dg+s,ug+w,Fo-w,+X It is also legal to specify multiple --chmod options, as each additional option is just appended to the list of changes to make. See the --perms and --executability options for how the resulting permission value can be applied to the files in the transfer. -o, --owner This option causes rsync to set the owner of the destination file to be the same as the source file, but only if the receiving rsync is being run as the super-user (see also the --super option to force rsync to attempt super-user activities). Without this option, the owner is set to the invoking user on the receiving side. The preservation of ownership will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion). -g, --group This option causes rsync to set the group of the destination file to be the same as the source file. If the receiving program is not running as the super-user (or if --no-super was specified), only groups that the invoking user on the receiving side is a member of will be preserved. Without this option, the group is set to the default group of the invoking user on the receiving side. The preservation of group information will associate matching names by default, but may fall back to using the ID number in some circumstances (see also the --numeric-ids option for a full discussion). 
--devices This option causes rsync to transfer character and block device files to the remote system to recreate these devices. This option has no effect if the receiving rsync is not run as the super-user and --super is not specified. --specials This option causes rsync to transfer special files such as named sockets and fifos. -D The -D option is equivalent to --devices --specials. -t, --times This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t or -a will cause the next transfer to behave as if it used -I, causing all files to be updated (though the rsync algorithm will make the update fairly efficient if the files haven't actually changed, you're still much better off using -t). -O, --omit-dir-times This tells rsync to omit directories when it is preserving modification times (see --times). If NFS is sharing the directories on the receiving side, it is a good idea to use -O. This option is inferred if you use --backup without --backup-dir. --super This tells the receiving side to attempt super-user activities even if the receiving rsync wasn't run by the super-user. These activities include: preserving users via the --owner option, preserving all groups (not just the current user's groups) via the --groups option, and copying devices via the --devices option. This is useful for systems that allow such activities without being the super-user, and also for ensuring that you will get errors if the receiving side isn't being run as the super-user. To turn off super-user activities, the super-user can use --no-super. -S, --sparse Try to handle sparse files efficiently so they take up less space on the destination. Conflicts with --inplace because it's not possible to overwrite data in a sparse fashion. 
NOTE: Don't use this option when the destination is a Solaris "tmpfs" filesystem. It doesn't seem to handle seeks over null regions correctly and ends up corrupting the files. -n, --dry-run This tells rsync to not do any file transfers, instead it will just report the actions it would have taken. -W, --whole-file With this option the incremental rsync algorithm is not used and the whole file is sent as-is instead. The transfer may be faster if this option is used when the bandwidth between the source and destination machines is higher than the bandwidth to disk (especially when the "disk" is actually a networked filesystem). This is the default when both the source and destination are specified as local paths. -x, --one-file-system This tells rsync to avoid crossing a filesystem boundary when recursing. This does not limit the user's ability to specify items to copy from multiple filesystems, just rsync's recursion through the hierarchy of each directory that the user specified, and also the analogous recursion on the receiving side during deletion. Also keep in mind that rsync treats a "bind" mount to the same device as being on the same filesystem. If this option is repeated, rsync omits all mount-point directories from the copy. Otherwise, it includes an empty directory at each mount-point it encounters (using the attributes of the mounted directory because those of the underlying mount-point directory are inaccessible). If rsync has been told to collapse symlinks (via --copy-links or --copy-unsafe-links), a symlink to a directory on another device is treated like a mount-point. Symlinks to non-directories are unaffected by this option. --existing, --ignore-non-existing This tells rsync to skip creating files (including directories) that do not exist yet on the destination. If this option is combined with the --ignore-existing option, no files will be updated (which can be useful if all you want to do is to delete extraneous files). 
--ignore-existing This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directories, or nothing would get done). See also --existing.

--remove-source-files This tells rsync to remove from the sending side the files (meaning non-directories) that are a part of the transfer and have been successfully duplicated on the receiving side.

--delete This tells rsync to delete extraneous files from the receiving side (ones that aren't on the sending side), but only for the directories that are being synchronized. You must have asked rsync to send the whole directory (e.g. "dir" or "dir/") without using a wildcard for the directory's contents (e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus gets a request to transfer individual files, not the files' parent directory. Files that are excluded from transfer are also excluded from being deleted unless you use the --delete-excluded option or mark the rules as only matching on the sending side (see the include/exclude modifiers in the FILTER RULES section). Prior to rsync 2.6.7, this option had no effect unless --recursive was in effect. Beginning with 2.6.7, deletions will also occur when --dirs (-d) is in effect, but only for directories whose contents are being copied. This option can be dangerous if used incorrectly! It is a very good idea to run first using the --dry-run option (-n) to see what files would be deleted, to make sure important files aren't listed. If the sending side detects any I/O errors, then the deletion of any files at the destination will be automatically disabled. This is to prevent temporary filesystem failures (such as NFS errors) on the sending side from causing a massive deletion of files on the destination. You can override this with the --ignore-errors option. The --delete option may be combined with one of the --delete-WHEN options without conflict, as well as --delete-excluded.
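The wildcard caveat above is a common trap. A minimal illustration, with hypothetical paths:

```shell
# OK: rsync sees the directory itself, so it can tell which files in
# dest/ are extraneous and remove them:
rsync -av --delete src/ dest/

# No useful deletions: the shell expands the wildcard into individual
# file arguments, so rsync never sees src/ itself and cannot tell what
# in dest/ is extraneous:
rsync -av --delete src/* dest/
```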
However, if none of the --delete-WHEN options are specified, rsync will currently choose the --delete-before algorithm. A future version may change this to choose the --delete-during algorithm. See also --delete-after.

--delete-before Request that the file-deletions on the receiving side be done before the transfer starts. This is the default if --delete or --delete-excluded is specified without one of the --delete-WHEN options. See --delete (which is implied) for more details on file-deletion. Deleting before the transfer is helpful if the filesystem is tight for space and removing extraneous files would help to make the transfer possible. However, it does introduce a delay before the start of the transfer, and this delay might cause the transfer to time out (if --timeout was specified).

--delete-during, --del Request that the file-deletions on the receiving side be done incrementally as the transfer happens. This is a faster method than choosing the before- or after-transfer algorithm, but it is only supported beginning with rsync version 2.6.4. See --delete (which is implied) for more details on file-deletion.

--delete-after Request that the file-deletions on the receiving side be done after the transfer has completed. This is useful if you are sending new per-directory merge files as a part of the transfer and you want their exclusions to take effect for the delete phase of the current transfer. See --delete (which is implied) for more details on file-deletion.

--delete-excluded In addition to deleting the files on the receiving side that are not on the sending side, this tells rsync to also delete any files on the receiving side that are excluded (see --exclude). See the FILTER RULES section for a way to make individual exclusions behave this way on the receiver, and for a way to protect files from --delete-excluded. See --delete (which is implied) for more details on file-deletion.
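The timing variants above can be summarized side by side (hypothetical paths; each line is an alternative, not a sequence):

```shell
rsync -av --delete-before src/ dest/   # frees space first, but delays the start
rsync -av --delete-during src/ dest/   # incremental; needs rsync >= 2.6.4
rsync -av --delete-after  src/ dest/   # lets newly sent merge files affect deletion
```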
--ignore-errors Tells --delete to go ahead and delete files even when there are I/O errors. --force This option tells rsync to delete a non-empty directory when it is to be replaced by a non-directory. This is only relevant if deletions are not active (see --delete for details). Note for older rsync versions: --force used to still be required when using --delete-after, and it used to be non-functional unless the --recursive option was also enabled. --max-delete=NUM This tells rsync not to delete more than NUM files or directories (NUM must be non-zero). This is useful when mirroring very large trees to prevent disasters. --max-size=SIZE This tells rsync to avoid transferring any file that is larger than the specified SIZE. The SIZE value can be suffixed with a string to indicate a size multiplier, and may be a fractional value (e.g. "--max-size=1.5m"). The suffixes are as follows: "K" (or "KiB") is a kibibyte (1024), "M" (or "MiB") is a mebibyte (1024*1024), and "G" (or "GiB") is a gibibyte (1024*1024*1024). If you want the multiplier to be 1000 instead of 1024, use "KB", "MB", or "GB". (Note: lower-case is also accepted for all values.) Finally, if the suffix ends in either "+1" or "-1", the value will be offset by one byte in the indicated direction. Examples: --max-size=1.5mb-1 is 1499999 bytes, and --max-size=2g+1 is 2147483649 bytes. --min-size=SIZE This tells rsync to avoid transferring any file that is smaller than the specified SIZE, which can help in not transferring small, junk files. See the --max-size option for a description of SIZE. -B, --block-size=BLOCKSIZE This forces the block size used in the rsync algorithm to a fixed value. It is normally selected based on the size of each file being updated. See the technical report for details. -e, --rsh=COMMAND This option allows you to choose an alternative remote shell program to use for communication between the local and remote copies of rsync. 
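The size-suffix arithmetic for --max-size can be checked directly with shell arithmetic. A small sketch reproducing the two offset examples from the text (the commented rsync line uses hypothetical paths):

```shell
# "1.5mb-1": decimal multiplier (1000*1000), then offset down by one byte
echo $(( 15 * 100000 - 1 ))               # 1499999
# "2g+1": binary multiplier (1024^3), then offset up by one byte
echo $(( 2 * 1024 * 1024 * 1024 + 1 ))    # 2147483649
# Typical use: skip anything over about 100 MiB
# rsync -av --max-size=100m src/ dest/
```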
Typically, rsync is configured to use ssh by default, but you may prefer to use rsh on a local network. If this option is used with [user@]host::module/path, then the remote shell COMMAND will be used to run an rsync daemon on the remote host, and all data will be transmitted through that remote shell connection, rather than through a direct socket connection to a running rsync daemon on the remote host. See the section "USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION" above. Command-line arguments are permitted in COMMAND provided that COMMAND is presented to rsync as a single argument. You must use spaces (not tabs or other whitespace) to separate the command and args from each other, and you can use single- and/or double-quotes to preserve spaces in an argument (but not backslashes). Note that doubling a single-quote inside a single-quoted string gives you a single-quote; likewise for double-quotes (though you need to pay attention to which quotes your shell is parsing and which quotes rsync is parsing). Some examples: -e 'ssh -p 2234' -e 'ssh -o "ProxyCommand nohup ssh firewall nc -w1 %h %p"' (Note that ssh users can alternately customize site-specific connect options in their .ssh/config file.) You can also choose the remote shell program using the RSYNC_RSH environment variable, which accepts the same range of values as -e. See also the --blocking-io option which is affected by this option. --rsync-path=PROGRAM Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. 
For instance: rsync -avR --rsync-path="cd /a/b && rsync" hst:c/d /e/

-C, --cvs-exclude This is a useful shorthand for excluding a broad range of files that you often don't want to transfer between systems. It uses the same algorithm that CVS uses to determine if a file should be ignored. The exclude list is initialized to:

RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS .make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak *.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe *.Z *.elc *.ln core .svn/

then files listed in a $HOME/.cvsignore are added to the list and any files listed in the CVSIGNORE environment variable (all cvsignore names are delimited by whitespace). Finally, any file is ignored if it is in the same directory as a .cvsignore file and matches one of the patterns listed therein. Unlike rsync's filter/exclude files, these patterns are split on whitespace. See the cvs(1) manual for more information.

If you're combining -C with your own --filter rules, you should note that these CVS excludes are appended at the end of your own rules, regardless of where the -C was placed on the command-line. This makes them a lower priority than any rules you specified explicitly. If you want to control where these CVS excludes get inserted into your filter rules, you should omit the -C as a command-line option and use a combination of --filter=:C and --filter=-C (either on your command-line or by putting the ":C" and "-C" rules into a filter file with your other rules). The first option turns on the per-directory scanning for the .cvsignore file. The second option does a one-time import of the CVS excludes mentioned above.

-f, --filter=RULE This option allows you to add rules to selectively exclude certain files from the list of files to be transferred. This is most useful in combination with a recursive transfer. You may use as many --filter options on the command line as you like to build up the list of files to exclude.
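The :C/-C combination described above can be sketched as follows (hypothetical paths and a hypothetical *.tmp rule); here the CVS excludes are placed ahead of the user's own rule instead of being appended after it:

```shell
rsync -av --filter=':C' --filter='-C' --filter='- *.tmp' src/ dest/
```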
See the FILTER RULES section for detailed information on this option.

-F The -F option is a shorthand for adding two --filter rules to your command. The first time it is used is a shorthand for this rule:

--filter='dir-merge /.rsync-filter'

This tells rsync to look for per-directory .rsync-filter files that have been sprinkled through the hierarchy and use their rules to filter the files in the transfer. If -F is repeated, it is a shorthand for this rule:

--filter='exclude .rsync-filter'

This filters out the .rsync-filter files themselves from the transfer. See the FILTER RULES section for detailed information on how these options work.

--exclude=PATTERN This option is a simplified form of the --filter option that defaults to an exclude rule and does not allow the full rule-parsing syntax of normal filter rules. See the FILTER RULES section for detailed information on this option.

--exclude-from=FILE This option is related to the --exclude option, but it specifies a FILE that contains exclude patterns (one per line). Blank lines in the file and lines starting with ';' or '#' are ignored. If FILE is -, the list will be read from standard input.

--include=PATTERN This option is a simplified form of the --filter option that defaults to an include rule and does not allow the full rule-parsing syntax of normal filter rules. See the FILTER RULES section for detailed information on this option.

--include-from=FILE This option is related to the --include option, but it specifies a FILE that contains include patterns (one per line). Blank lines in the file and lines starting with ';' or '#' are ignored. If FILE is -, the list will be read from standard input.

--files-from=FILE Using this option allows you to specify the exact list of files to transfer (as read from the specified FILE or - for standard input).
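A minimal sketch of an exclude file for --exclude-from (the patterns and paths are hypothetical; the rsync line is commented out since the source tree does not exist here):

```shell
# One pattern per line; blank lines and lines starting with ';' or '#'
# are ignored by rsync:
cat > /tmp/demo-excludes <<'EOF'
# build artifacts
*.o
.cache/
EOF
# rsync -av --exclude-from=/tmp/demo-excludes src/ dest/
```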
It also tweaks the default behavior of rsync to make transferring just the specified files and directories easier:

o The --relative (-R) option is implied, which preserves the path information that is specified for each item in the file (use --no-relative or --no-R if you want to turn that off).

o The --dirs (-d) option is implied, which will create directories specified in the list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you want to turn that off).

o The --archive (-a) option's behavior does not imply --recursive (-r), so specify it explicitly, if you want it.

o These side-effects change the default state of rsync, so the position of the --files-from option on the command-line has no bearing on how other options are parsed (e.g. -a works the same before or after --files-from, as do --no-R and all other options).

The file names that are read from the FILE are all relative to the source dir -- any leading slashes are removed and no ".." references are allowed to go higher than the source dir. For example, take this command:

rsync -a --files-from=/tmp/foo /usr remote:/backup

If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash), the immediate contents of the directory would also be sent (without needing to be explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r option was enabled, that dir's entire hierarchy would also be transferred (keep in mind that -r needs to be specified explicitly with --files-from, since it is not implied by -a). Also note that the effect of the (enabled by default) --relative option is to duplicate only the path info that is read from the file -- it does not force the duplication of the source-spec path (/usr in this case).
In addition, the --files-from file can be read from the remote host instead of the local host if you specify a "host:" in front of the file (the host must match one end of the transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end of the transfer". For example:

rsync -a --files-from=:/path/file-list src:/ /tmp/copy

This would copy all the files specified in the /path/file-list file that was located on the remote "src" host.

-0, --from0 This tells rsync that the rules/filenames it reads from a file are terminated by a null ('\0') character, not a NL, CR, or CR+LF. This affects --exclude-from, --include-from, --files-from, and any merged files specified in a --filter rule. It does not affect --cvs-exclude (since all names read from a .cvsignore file are split on whitespace).

-T, --temp-dir=DIR This option instructs rsync to use DIR as a scratch directory when creating temporary copies of the files transferred on the receiving side. The default behavior is to create each temporary file in the same directory as the associated destination file. This option is most often used when the receiving disk partition does not have enough free space to hold a copy of the largest file in the transfer. In this case (i.e. when the scratch directory is on a different disk partition), rsync will not be able to rename each received temporary file over the top of the associated destination file, but instead must copy it into place. Rsync does this by copying the file over the top of the destination file, which means that the destination file will contain truncated data during this copy.
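A common pairing for --from0 is find -print0, which keeps filenames containing spaces or newlines intact. A sketch with a hypothetical source tree and destination host:

```shell
# Build a null-terminated list of config files, then transfer exactly
# those, preserving their paths relative to /:
find /srv/data -name '*.conf' -print0 > /tmp/conf-list
rsync -a --from0 --files-from=/tmp/conf-list / backup:/srv-copy/
```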
If this were not done this way (even if the destination file were first removed, the data locally copied to a temporary file in the destination directory, and then renamed into place) it would be possible for the old file to continue taking up disk space (if someone had it open), and thus there might not be enough room to fit the new version on the disk at the same time.

If you are using this option for reasons other than a shortage of disk space, you may wish to combine it with the --delay-updates option, which will ensure that all copied files get put into subdirectories in the destination hierarchy, awaiting the end of the transfer.

If you don't have enough room to duplicate all the arriving files on the destination partition, another way to tell rsync that you aren't overly concerned about disk space is to use the --partial-dir option with a relative path; because this tells rsync that it is OK to stash off a copy of a single file in a subdir in the destination hierarchy, rsync will use the partial-dir as a staging area to bring over the copied file, and then rename it into place from there. (Specifying a --partial-dir with an absolute path does not have this side-effect.)

-y, --fuzzy This option tells rsync that it should look for a basis file for any destination file that is missing. The current algorithm looks in the same directory as the destination file for either a file that has an identical size and modified-time, or a similarly-named file. If found, rsync uses the fuzzy basis file to try to speed up the transfer. Note that the use of the --delete option might get rid of any potential fuzzy-match files, so either use --delete-after or specify some filename exclusions if you need to prevent this.

--compare-dest=DIR This option instructs rsync to use DIR on the destination machine as an additional hierarchy to compare destination files against when doing transfers (if the files are missing in the destination directory).
If a file is found in DIR that is identical to the sender's file, the file will NOT be transferred to the destination directory. This is useful for creating a sparse backup of just files that have changed from an earlier backup. Beginning in version 2.6.4, multiple --compare-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match. If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer. If DIR is a relative path, it is relative to the destination directory. See also --copy-dest and --link-dest. --copy-dest=DIR This option behaves like --compare-dest, but rsync will also copy unchanged files found in DIR to the destination directory using a local copy. This is useful for doing transfers to a new destination while leaving existing files intact, and then doing a flash-cutover when all files have been successfully transferred. Multiple --copy-dest directories may be provided, which will cause rsync to search the list in the order specified for an unchanged file. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer. If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --link-dest. --link-dest=DIR This option behaves like --copy-dest, but unchanged files are hard linked from DIR to the destination directory. The files must be identical in all preserved attributes (e.g. permissions, possibly ownership) in order for the files to be linked together. An example: rsync -av --link-dest=$PWD/prior_dir host:src_dir/ new_dir/ Beginning in version 2.6.4, multiple --link-dest directories may be provided, which will cause rsync to search the list in the order specified for an exact match. 
If a match is found that differs only in attributes, a local copy is made and the attributes updated. If a match is not found, a basis file from one of the DIRs will be selected to try to speed up the transfer.

Note that if you combine this option with --ignore-times, rsync will not link any files together because it only links identical files together as a substitute for transferring the file, never as an additional check after the file is updated. If DIR is a relative path, it is relative to the destination directory. See also --compare-dest and --copy-dest. Note that rsync versions prior to 2.6.1 had a bug that could prevent --link-dest from working properly for a non-super-user when -o was specified (or implied by -a). You can work around this bug by avoiding the -o option when sending to an old rsync.

-z, --compress With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection. Note that this option typically achieves better compression ratios than can be achieved by using a compressing remote shell or a compressing transport because it takes advantage of the implicit information in the matching data blocks that are not explicitly sent over the connection.

--compress-level=NUM Explicitly set the compression level to use (see --compress) instead of letting it default. If NUM is non-zero, the --compress option is implied.

--numeric-ids With this option rsync will transfer numeric group and user IDs rather than using user and group names and mapping them at both ends. By default rsync will use the username and groupname to determine what ownership to give files. The special uid 0 and the special group 0 are never mapped via user/group names even if the --numeric-ids option is not specified.
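The classic use of --link-dest is rotating snapshot backups. A minimal sketch with hypothetical paths: each run hard-links unchanged files against the previous snapshot, so only changed files consume new disk space.

```shell
today=$(date +%Y-%m-%d)
# Unchanged files are hard-linked from the previous snapshot:
rsync -av --link-dest=/backups/latest src/ "/backups/$today/"
# Point "latest" at the snapshot just made, for the next run:
ln -snf "/backups/$today" /backups/latest
```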
If a user or group has no name on the source system or it has no match on the destination system, then the numeric ID from the source system is used instead. See also the comments on the "use chroot" setting in the rsyncd.conf manpage for information on how the chroot setting affects rsync's ability to look up the names of the users and groups and what you can do about it. --timeout=TIMEOUT This option allows you to set a maximum I/O timeout in seconds. If no data is transferred for the specified time then rsync will exit. The default is 0, which means no timeout. --address By default rsync will bind to the wildcard address when connecting to an rsync daemon. The --address option allows you to specify a specific IP address (or hostname) to bind to. See also this option in the --daemon mode section. --port=PORT This specifies an alternate TCP port number to use rather than the default of 873. This is only needed if you are using the double-colon (::) syntax to connect with an rsync daemon (since the URL syntax has a way to specify the port as a part of the URL). See also this option in the --daemon mode section. --sockopts This option can provide endless fun for people who like to tune their systems to the utmost degree. You can set all sorts of socket options which may make transfers faster (or slower!). Read the man page for the setsockopt() system call for details on some of the options you may be able to set. By default no special socket options are set. This only affects direct socket connections to a remote rsync daemon. This option also exists in the --daemon mode section. --blocking-io This tells rsync to use blocking I/O when launching a remote shell transport. If the remote shell is either rsh or remsh, rsync defaults to using blocking I/O, otherwise it defaults to using non-blocking I/O. (Note that ssh prefers non-blocking I/O.) 
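As the --port text notes, the port can be given either as an option or inside a URL. Two equivalent sketches, with a hypothetical host and module name:

```shell
rsync -av --port=8873 host::module/path /dest/
rsync -av rsync://host:8873/module/path /dest/   # same transfer via URL syntax
```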
-i, --itemize-changes Requests a simple itemized list of the changes that are being made to each file, including attribute changes. This is exactly the same as specifying --out-format='%i %n%L'. If you repeat the option, unchanged files will also be output, but only if the receiving rsync is at least version 2.6.7 (you can use -vv with older versions of rsync, but that also turns on the output of other verbose messages).

The "%i" escape has a cryptic output that is 9 letters long. The general format is like the string YXcstpogz, where Y is replaced by the type of update being done, X is replaced by the file-type, and the other letters represent attributes that may be output if they are being modified.

The update types that replace the Y are as follows:

o A < means that a file is being transferred to the remote host (sent).

o A > means that a file is being transferred to the local host (received).

o A c means that a local change/creation is occurring for the item (such as the creation of a directory or the changing of a symlink, etc.).

o A h means that the item is a hard link to another item (requires --hard-links).

o A . means that the item is not being updated (though it might have attributes that are being modified).

The file-types that replace the X are: f for a file, a d for a directory, an L for a symlink, a D for a device, and an S for a special file (e.g. named sockets and fifos).

The other letters in the string above are the actual letters that will be output if the associated attribute for the item is being updated or a "." for no change. Three exceptions to this are: (1) a newly created item replaces each letter with a "+", (2) an identical item replaces the dots with spaces, and (3) an unknown attribute replaces each letter with a "?" (this can happen when talking to an older rsync).
The attribute that is associated with each letter is as follows: o A c means the checksum of the file is different and will be updated by the file transfer (requires --checksum). o A s means the size of the file is different and will be updated by the file transfer. o A t means the modification time is different and is being updated to the sender's value (requires --times). An alternate value of T means that the time will be set to the transfer time, which happens anytime a symlink is transferred, or when a file or device is transferred without --times. o A p means the permissions are different and are being updated to the sender's value (requires --perms). o An o means the owner is different and is being updated to the sender's value (requires --owner and super-user privileges). o A g means the group is different and is being updated to the sender's value (requires --group and the authority to set the group). o The z slot is reserved for future use. One other output is possible: when deleting files, the "%i" will output the string "*deleting" for each item that is being removed (assuming that you are talking to a recent enough rsync that it logs deletions instead of outputting them as a verbose message). --out-format=FORMAT This allows you to specify exactly what the rsync client outputs to the user on a per-update basis. The format is a text string containing embedded single-character escape sequences prefixed with a percent (%) character. For a list of the possible escape characters, see the "log format" setting in the rsyncd.conf manpage. Specifying this option will mention each file, dir, etc. that gets updated in a significant way (a transferred file, a recreated symlink/device, or a touched directory). In addition, if the itemize-changes escape (%i) is included in the string, the logging of names increases to mention any item that is changed in any way (as long as the receiving side is at least 2.6.4). 
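Two plausible itemize strings, read against the legend above (illustrative only; the file names and the final rsync line use hypothetical paths):

```shell
#   >f.st....   file.txt   a received regular file; size and mtime differ
#   cd+++++++   newdir/    a directory created locally on the receiver
rsync -avi src/ dest/
```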
See the --itemize-changes option for a description of the output of "%i".

The --verbose option implies a format of "%n%L", but you can use --out-format without --verbose if you like, or you can override the format of its per-file output using this option. Rsync will output the out-format string prior to a file's transfer unless one of the transfer-statistic escapes is requested, in which case the logging is done at the end of the file's transfer. When this late logging is in effect and --progress is also specified, rsync will also output the name of the file being transferred prior to its progress information (followed, of course, by the out-format output).

--log-file=FILE This option causes rsync to log what it is doing to a file. This is similar to the logging that a daemon does, but can be requested for the client side and/or the server side of a non-daemon transfer. If specified as a client option, transfer logging will be enabled with a default format of "%i %n%L". See the --log-file-format option if you wish to override this. Here's an example command that requests the remote side to log what is happening:

rsync -av --rsync-path="rsync --log-file=/tmp/rlog" src/ dest/

This is very useful if you need to debug why a connection is closing unexpectedly.

--log-file-format=FORMAT This allows you to specify exactly what per-update logging is put into the file specified by the --log-file option (which must also be specified for this option to have any effect). If you specify an empty string, updated files will not be mentioned in the log file. For a list of the possible escape characters, see the "log format" setting in the rsyncd.conf manpage.

--stats This tells rsync to print a verbose set of statistics on the file transfer, allowing you to tell how effective the rsync algorithm is for your data. The current statistics are as follows:

o Number of files is the count of all "files" (in the generic sense), which includes directories, symlinks, etc.
o Number of files transferred is the count of normal files that were updated via the rsync algorithm, which does not include created dirs, symlinks, etc.

o Total file size is the total sum of all file sizes in the transfer. This does not count any size for directories or special files, but does include the size of symlinks.

o Total transferred file size is the total sum of all file sizes for just the transferred files.

o Literal data is how much unmatched file-update data we had to send to the receiver for it to recreate the updated files.

o Matched data is how much data the receiver got locally when recreating the updated files.

o File list size is how big the file-list data was when the sender sent it to the receiver. This is smaller than the in-memory size for the file list due to some compressing of duplicated data when rsync sends the list.

o File list generation time is the number of seconds that the sender spent creating the file list. This requires a modern rsync on the sending side for this to be present.

o File list transfer time is the number of seconds that the sender spent sending the file list to the receiver.

o Total bytes sent is the count of all the bytes that rsync sent from the client side to the server side.

o Total bytes received is the count of all non-message bytes that rsync received on the client side from the server side. "Non-message" bytes means that we don't count the bytes for a verbose message that the server sent to us, which makes the stats more consistent.

-8, --8-bit-output This tells rsync to leave all high-bit characters unescaped in the output instead of trying to test them to see if they're valid in the current locale and escaping the invalid ones. All control characters (but never tabs) are always escaped, regardless of this option's setting. The escape idiom that started in 2.6.7 is to output a literal backslash (\) and a hash (#), followed by exactly 3 octal digits. For example, a newline would output as "\#012".
A literal backslash that is in a filename is not escaped unless it is followed by a hash and 3 digits (0-9). -h, --human-readable Output numbers in a more human-readable format. This makes big numbers output using larger units, with a K, M, or G suffix. If this option was specified once, these units are K (1000), M (1000*1000), and G (1000*1000*1000); if the option is repeated, the units are powers of 1024 instead of 1000. --partial By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster. --partial-dir=DIR A better way to keep partial files than the --partial option is to specify a DIR that will be used to hold the partial data (instead of writing it out to the destination file). On the next transfer, rsync will use a file found in this dir as data to speed up the resumption of the transfer and then delete it after it has served its purpose. Note that if --whole-file is specified (or implied), any partial-dir file that is found for a file that is being updated will simply be removed (since rsync is sending files without using the incremental rsync algorithm). Rsync will create the DIR if it is missing (just the last dir -- not the whole path). This makes it easy to use a relative path (such as "--partial-dir=.rsync-partial") to have rsync create the partial-directory in the destination file's directory when needed, and then remove it again when the partial file is deleted. If the partial-dir value is not an absolute path, rsync will add an exclude rule at the end of all your existing excludes. This will prevent the sending of any partial-dir files that may exist on the sending side, and will also prevent the untimely deletion of partial-dir items on the receiving side. 
An example: the above --partial-dir option would add the equivalent of "--exclude=.rsync-partial/" at the end of any other filter rules.

If you are supplying your own exclude rules, you may need to add your own exclude/hide/protect rule for the partial-dir because (1) the auto-added rule may be ineffective at the end of your other rules, or (2) you may wish to override rsync's exclude choice. For instance, if you want to make rsync clean up any left-over partial-dirs that may be lying around, you should specify --delete-after and add a "risk" filter rule, e.g. -f 'R .rsync-partial/'. (Avoid using --delete-before or --delete-during unless you don't need rsync to use any of the left-over partial-dir data during the current run.)

IMPORTANT: the --partial-dir should not be writable by other users or it is a security risk. E.g. AVOID "/tmp".

You can also set the partial-dir value with the RSYNC_PARTIAL_DIR environment variable. Setting this in the environment does not force --partial to be enabled, but rather it affects where partial files go when --partial is specified. For instance, instead of using --partial-dir=.rsync-tmp along with --progress, you could set RSYNC_PARTIAL_DIR=.rsync-tmp in your environment and then just use the -P option to turn on the use of the .rsync-tmp dir for partial transfers. The only times that the --partial option does not look for this environment value are (1) when --inplace was specified (since --inplace conflicts with --partial-dir), and (2) when --delay-updates was specified (see below).

For the purposes of the daemon-config's "refuse options" setting, --partial-dir does not imply --partial. This is so that a refusal of the --partial option can be used to disallow the overwriting of destination files with a partial transfer, while still allowing the safer idiom provided by --partial-dir.
--delay-updates
This option puts the temporary file from each updated file into a holding directory until the end of the transfer, at which time all the files are renamed into place in rapid succession. This attempts to make the updating of the files a little more atomic. By default the files are placed into a directory named ".~tmp~" in each file's destination directory, but if you've specified the --partial-dir option, that directory will be used instead. See the comments in the --partial-dir section for a discussion of how this ".~tmp~" dir will be excluded from the transfer, and what you can do if you want rsync to clean up old ".~tmp~" dirs that might be lying around. Conflicts with --inplace and --append.

This option uses more memory on the receiving side (one bit per file transferred) and also requires enough free disk space on the receiving side to hold an additional copy of all the updated files. Note also that you should not use an absolute path to --partial-dir unless (1) there is no chance of any of the files in the transfer having the same name (since all the updated files will be put into a single directory if the path is absolute) and (2) there are no mount points in the hierarchy (since the delayed updates will fail if they can't be renamed into place).

See also the "atomic-rsync" perl script in the "support" subdir for an update algorithm that is even more atomic (it uses --link-dest and a parallel hierarchy of files).

-m, --prune-empty-dirs
This option tells the receiving rsync to get rid of empty directories from the file-list, including nested directories that have no non-directory children. This is useful for avoiding the creation of a bunch of useless directories when the sending rsync is recursively scanning a hierarchy of files using include/exclude/filter rules.

Because the file-list is actually being pruned, this option also affects what directories get deleted when a delete is active.
However, keep in mind that excluded files and directories can prevent existing items from being deleted (because an exclude hides source files and protects destination files).

You can prevent the pruning of certain empty directories from the file-list by using a global "protect" filter. For instance, this option would ensure that the directory "emptydir" was kept in the file-list:

--filter 'protect emptydir/'

Here's an example that copies all .pdf files in a hierarchy, only creating the necessary destination directories to hold the .pdf files, and ensures that any superfluous files and directories in the destination are removed (note the hide filter of non-directories being used instead of an exclude):

rsync -avm --del --include='*.pdf' -f 'hide,! */' src/ dest

If you didn't want to remove superfluous destination files, the more time-honored options of "--include='*/' --exclude='*'" would work fine in place of the hide-filter (if that is more natural to you).

--progress
This option tells rsync to print information showing the progress of the transfer. This gives a bored user something to watch. Implies --verbose if it wasn't already specified.

While rsync is transferring a regular file, it updates a progress line that looks like this:

782448 63% 110.64kB/s 0:00:04

In this example, the receiver has reconstructed 782448 bytes or 63% of the sender's file, which is being reconstructed at a rate of 110.64 kilobytes per second, and the transfer will finish in 4 seconds if the current rate is maintained until the end.

These statistics can be misleading if the incremental transfer algorithm is in use. For example, if the sender's file consists of the basis file followed by additional data, the reported rate will probably drop dramatically when the receiver gets to the literal data, and the transfer will probably take much longer to finish than the receiver estimated as it was finishing the matched part of the file.
When the file transfer finishes, rsync replaces the progress line with a summary line that looks like this:

1238099 100% 146.38kB/s 0:00:08 (xfer#5, to-check=169/396)

In this example, the file was 1238099 bytes long in total, the average rate of transfer for the whole file was 146.38 kilobytes per second over the 8 seconds that it took to complete, it was the 5th transfer of a regular file during the current rsync session, and there are 169 more files for the receiver to check (to see if they are up-to-date or not) remaining out of the 396 total files in the file-list.

-P
The -P option is equivalent to --partial --progress. Its purpose is to make it much easier to specify these two options for a long transfer that may be interrupted.

--password-file
This option allows you to provide a password in a file for accessing a remote rsync daemon. Note that this option is only useful when accessing an rsync daemon using the built-in transport, not when using a remote shell as the transport. The file must not be world readable. It should contain just the password as a single line.

--list-only
This option will cause the source files to be listed instead of transferred. This option is inferred if there is a single source arg and no destination specified, so its main uses are: (1) to turn a copy command that includes a destination arg into a file-listing command, (2) to be able to specify more than one local source arg (note: be sure to include the destination), or (3) to avoid the automatically added "-r --exclude='/*/*'" options that rsync usually uses as a compatibility kluge when generating a non-recursive listing. Caution: keep in mind that a source arg with a wild-card is expanded by the shell into multiple args, so it is never safe to try to list such an arg without using this option. For example:

rsync -av --list-only foo* dest/

--bwlimit=KBPS
This option allows you to specify a maximum transfer rate in kilobytes per second.
This option is most effective when using rsync with large files (several megabytes and up). Due to the nature of rsync transfers, blocks of data are sent, then if rsync determines the transfer was too fast, it will wait before sending the next data block. The result is an average transfer rate equaling the specified limit. A value of zero specifies no limit. --write-batch=FILE Record a file that can later be applied to another identical destination with --read-batch. See the "BATCH MODE" section for details, and also the --only-write-batch option. --only-write-batch=FILE Works like --write-batch, except that no updates are made on the destination system when creating the batch. This lets you transport the changes to the destination system via some other means and then apply the changes via --read-batch. Note that you can feel free to write the batch directly to some portable media: if this media fills to capacity before the end of the transfer, you can just apply that partial transfer to the destination and repeat the whole process to get the rest of the changes (as long as you don't mind a partially updated destination system while the multi-update cycle is happening). Also note that you only save bandwidth when pushing changes to a remote system because this allows the batched data to be diverted from the sender into the batch file without having to flow over the wire to the receiver (when pulling, the sender is remote, and thus can't write the batch). --read-batch=FILE Apply all of the changes stored in FILE, a file previously generated by --write-batch. If FILE is -, the batch data will be read from standard input. See the "BATCH MODE" section for details. --protocol=NUM Force an older protocol version to be used. This is useful for creating a batch file that is compatible with an older version of rsync. 
For instance, if rsync 2.6.4 is being used with the --write-batch option, but rsync 2.6.3 is what will be used to run the --read-batch option, you should use "--protocol=28" when creating the batch file to force the older protocol version to be used in the batch file (assuming you can't upgrade the rsync on the reading system).

-4, --ipv4 or -6, --ipv6
Tells rsync to prefer IPv4/IPv6 when creating sockets. This only affects sockets that rsync has direct control over, such as the outgoing socket when directly contacting an rsync daemon. See also these options in the --daemon mode section.

--checksum-seed=NUM
Set the MD4 checksum seed to the integer NUM. This 4-byte checksum seed is included in each block and file MD4 checksum calculation. By default the checksum seed is generated by the server and defaults to the current time(). This option is used to set a specific checksum seed, which is useful for applications that want repeatable block and file checksums, or in the case where the user wants a more random checksum seed. Note that setting NUM to 0 causes rsync to use the default of time() for checksum seed.

-E, --extended-attributes
Apple-specific option to copy extended attributes, resource forks, and ACLs. Requires at least Mac OS X 10.4 or suitably patched rsync.

--cache
Apple-specific option to enable filesystem caching of rsync file I/O. Otherwise fcntl(F_NOCACHE) is used to limit memory growth.

DAEMON OPTIONS

The options allowed when starting an rsync daemon are as follows:

--daemon
This tells rsync that it is to run as a daemon. The daemon you start running may be accessed using an rsync client using the host::module or rsync://host/module/ syntax. If standard input is a socket then rsync will assume that it is being run via inetd, otherwise it will detach from the current terminal and become a background daemon. The daemon will read the config file (rsyncd.conf) on each connect made by a client and respond to requests accordingly.
See the rsyncd.conf(5) man page for more details.

--address
By default rsync will bind to the wildcard address when run as a daemon with the --daemon option. The --address option allows you to specify a specific IP address (or hostname) to bind to. This makes virtual hosting possible in conjunction with the --config option. See also the "address" global option in the rsyncd.conf manpage.

--bwlimit=KBPS
This option allows you to specify a maximum transfer rate in kilobytes per second for the data the daemon sends. The client can still specify a smaller --bwlimit value, but their requested value will be rounded down if they try to exceed it. See the client version of this option (above) for some extra details.

--config=FILE
This specifies an alternate config file to use in place of the default. This is only relevant when --daemon is specified. The default is /etc/rsyncd.conf unless the daemon is running over a remote shell program and the remote user is not the super-user; in that case the default is rsyncd.conf in the current directory (typically $HOME).

--no-detach
When running as a daemon, this option instructs rsync to not detach itself and become a background process. This option is required when running as a service on Cygwin, and may also be useful when rsync is supervised by a program such as daemontools or AIX's System Resource Controller. --no-detach is also recommended when rsync is run under a debugger. This option has no effect if rsync is run from inetd or sshd.

--port=PORT
This specifies an alternate TCP port number for the daemon to listen on rather than the default of 873. See also the "port" global option in the rsyncd.conf manpage.

--log-file=FILE
This option tells the rsync daemon to use the given log-file name instead of using the "log file" setting in the config file.

--log-file-format=FORMAT
This option tells the rsync daemon to use the given FORMAT string instead of using the "log format" setting in the config file.
It also enables "transfer logging" unless the string is empty, in which case transfer logging is turned off.

--sockopts
This overrides the socket options setting in the rsyncd.conf file and has the same syntax.

-v, --verbose
This option increases the amount of information the daemon logs during its startup phase. After the client connects, the daemon's verbosity level will be controlled by the options that the client used and the "max verbosity" setting in the module's config section.

-4, --ipv4 or -6, --ipv6
Tells rsync to prefer IPv4/IPv6 when creating the incoming sockets that the rsync daemon will use to listen for connections. One of these options may be required in older versions of Linux to work around an IPv6 bug in the kernel (if you see an "address already in use" error when nothing else is using the port, try specifying --ipv6 or --ipv4 when starting the daemon).

-h, --help
When specified after --daemon, print a short help page describing the options available for starting an rsync daemon.

FILTER RULES

The filter rules allow for flexible selection of which files to transfer (include) and which files to skip (exclude). The rules either directly specify include/exclude patterns or they specify a way to acquire more include/exclude patterns (e.g. to read them from a file).

As the list of files/directories to transfer is built, rsync checks each name to be transferred against the list of include/exclude patterns in turn, and the first matching pattern is acted on: if it is an exclude pattern, then that file is skipped; if it is an include pattern then that filename is not skipped; if no matching pattern is found, then the filename is not skipped.

Rsync builds an ordered list of filter rules as specified on the command-line. Filter rules have the following syntax:

RULE [PATTERN_OR_FILENAME]
RULE,MODIFIERS [PATTERN_OR_FILENAME]

You have your choice of using either short or long RULE names, as described below.
If you use a short-named rule, the ',' separating the RULE from the MODIFIERS is optional. The PATTERN or FILENAME that follows (when present) must come after either a single space or an underscore (_). Here are the available rule prefixes:

exclude, -    specifies an exclude pattern.
include, +    specifies an include pattern.
merge, .      specifies a merge-file to read for more rules.
dir-merge, :  specifies a per-directory merge-file.
hide, H       specifies a pattern for hiding files from the transfer.
show, S       files that match the pattern are not hidden.
protect, P    specifies a pattern for protecting files from deletion.
risk, R       files that match the pattern are not protected.
clear, !      clears the current include/exclude list (takes no arg)

When rules are being read from a file, empty lines are ignored, as are comment lines that start with a "#".

Note that the --include/--exclude command-line options do not allow the full range of rule parsing as described above -- they only allow the specification of include/exclude patterns plus a "!" token to clear the list (and the normal comment parsing when rules are read from a file). If a pattern does not begin with "- " (dash, space) or "+ " (plus, space), then the rule will be interpreted as if "+ " (for an include option) or "- " (for an exclude option) were prefixed to the string. A --filter option, on the other hand, must always contain either a short or long rule name at the start of the rule.

Note also that the --filter, --include, and --exclude options take one rule/pattern each. To add multiple ones, you can repeat the options on the command-line, use the merge-file syntax of the --filter option, or the --include-from/--exclude-from options.

INCLUDE/EXCLUDE PATTERN RULES

You can include and exclude files by specifying patterns using the "+", "-", etc. filter rules (as introduced in the FILTER RULES section above).
The include/exclude rules each specify a pattern that is matched against the names of the files that are going to be transferred. These patterns can take several forms:

o if the pattern starts with a / then it is anchored to a particular spot in the hierarchy of files, otherwise it is matched against the end of the pathname. This is similar to a leading ^ in regular expressions. Thus "/foo" would match a file named "foo" at either the "root of the transfer" (for a global rule) or in the merge-file's directory (for a per-directory rule). An unqualified "foo" would match any file or directory named "foo" anywhere in the tree because the algorithm is applied recursively from the top down; it behaves as if each path component gets a turn at being the end of the file name. Even the unanchored "sub/foo" would match at any point in the hierarchy where a "foo" was found within a directory named "sub". See the section on ANCHORING INCLUDE/EXCLUDE PATTERNS for a full discussion of how to specify a pattern that matches at the root of the transfer.

o if the pattern ends with a / then it will only match a directory, not a file, link, or device.

o rsync chooses between doing a simple string match and wildcard matching by checking if the pattern contains one of these three wildcard characters: '*', '?', and '['.

o a '*' matches any non-empty path component (it stops at slashes).

o use '**' to match anything, including slashes.

o a '?' matches any character except a slash (/).

o a '[' introduces a character class, such as [a-z] or [[:alpha:]].

o in a wildcard pattern, a backslash can be used to escape a wildcard character, but it is matched literally when no wildcards are present.

o if the pattern contains a / (not counting a trailing /) or a "**", then it is matched against the full pathname, including any leading directories. If the pattern doesn't contain a / or a "**", then it is matched only against the final component of the filename.
(Remember that the algorithm is applied recursively so "full filename" can actually be any portion of a path from the starting directory on down.)

o a trailing "dir_name/***" will match both the directory (as if "dir_name/" had been specified) and all the files in the directory (as if "dir_name/**" had been specified). (This behavior is new for version 2.6.7.)

Note that, when using the --recursive (-r) option (which is implied by -a), every subcomponent of every path is visited from the top down, so include/exclude patterns get applied recursively to each subcomponent's full name (e.g. to include "/foo/bar/baz" the subcomponents "/foo" and "/foo/bar" must not be excluded). The exclude patterns actually short-circuit the directory traversal stage when rsync finds the files to send. If a pattern excludes a particular parent directory, it can render a deeper include pattern ineffectual because rsync did not descend through that excluded section of the hierarchy. This is particularly important when using a trailing '*' rule. For instance, this won't work:

+ /some/path/this-file-will-not-be-found
+ /file-is-included
- *

This fails because the parent directory "some" is excluded by the '*' rule, so rsync never visits any of the files in the "some" or "some/path" directories. One solution is to ask for all directories in the hierarchy to be included by using a single rule: "+ */" (put it somewhere before the "- *" rule), and perhaps use the --prune-empty-dirs option. Another solution is to add specific include rules for all the parent dirs that need to be visited.
For instance, this set of rules works fine:

+ /some/
+ /some/path/
+ /some/path/this-file-is-found
+ /file-also-included
- *

Here are some examples of exclude/include matching:

o "- *.o" would exclude all filenames matching *.o

o "- /foo" would exclude a file (or directory) named foo in the transfer-root directory

o "- foo/" would exclude any directory named foo

o "- /foo/*/bar" would exclude any file named bar which is at two levels below a directory named foo in the transfer-root directory

o "- /foo/**/bar" would exclude any file named bar two or more levels below a directory named foo in the transfer-root directory

o The combination of "+ */", "+ *.c", and "- *" would include all directories and C source files but nothing else (see also the --prune-empty-dirs option)

o The combination of "+ foo/", "+ foo/bar.c", and "- *" would include only the foo directory and foo/bar.c (the foo directory must be explicitly included or it would be excluded by the "*")

MERGE-FILE FILTER RULES

You can merge whole files into your filter rules by specifying either a merge (.) or a dir-merge (:) filter rule (as introduced in the FILTER RULES section above).

There are two kinds of merged files -- single-instance ('.') and per-directory (':'). A single-instance merge file is read one time, and its rules are incorporated into the filter list in the place of the "." rule. For per-directory merge files, rsync will scan every directory that it traverses for the named file, merging its contents when the file exists into the current list of inherited rules. These per-directory rule files must be created on the sending side because it is the sending side that is being scanned for the available files to transfer. These rule files may also need to be transferred to the receiving side if you want them to affect what files don't get deleted (see PER-DIRECTORY RULES AND DELETE below).

Some examples:

merge /etc/rsync/default.rules
. /etc/rsync/default.rules
dir-merge .per-dir-filter
dir-merge,n- .non-inherited-per-dir-excludes
:n- .non-inherited-per-dir-excludes

The following modifiers are accepted after a merge or dir-merge rule:

o A - specifies that the file should consist of only exclude patterns, with no other rule-parsing except for in-file comments.

o A + specifies that the file should consist of only include patterns, with no other rule-parsing except for in-file comments.

o A C is a way to specify that the file should be read in a CVS-compatible manner. This turns on 'n', 'w', and '-', but also allows the list-clearing token (!) to be specified. If no filename is provided, ".cvsignore" is assumed.

o A e will exclude the merge-file name from the transfer; e.g. "dir-merge,e .rules" is like "dir-merge .rules" and "- .rules".

o An n specifies that the rules are not inherited by subdirectories.

o A w specifies that the rules are word-split on whitespace instead of the normal line-splitting. This also turns off comments. Note: the space that separates the prefix from the rule is treated specially, so "- foo + bar" is parsed as two rules (assuming that prefix-parsing wasn't also disabled).

o You may also specify any of the modifiers for the "+" or "-" rules (below) in order to have the rules that are read in from the file default to having that modifier set. For instance, "merge,-/ .excl" would treat the contents of .excl as absolute-path excludes, while "dir-merge,s .filt" and ":sC" would each make all their per-directory rules apply only on the sending side.

The following modifiers are accepted after a "+" or "-":

o A "/" specifies that the include/exclude rule should be matched against the absolute pathname of the current item. For example, "-/ /etc/passwd" would exclude the passwd file any time the transfer was sending files from the "/etc" directory, and "-/ subdir/foo" would always exclude "foo" when it is in a dir named "subdir", even if "foo" is at the root of the current transfer.
o A "!" specifies that the include/exclude should take effect if the pattern fails to match. For instance, "-! */" would exclude all non-directories.

o A C is used to indicate that all the global CVS-exclude rules should be inserted as excludes in place of the "-C". No arg should follow.

o An s is used to indicate that the rule applies to the sending side. When a rule affects the sending side, it prevents files from being transferred. The default is for a rule to affect both sides unless --delete-excluded was specified, in which case default rules become sender-side only. See also the hide (H) and show (S) rules, which are an alternate way to specify sending-side includes/excludes.

o An r is used to indicate that the rule applies to the receiving side. When a rule affects the receiving side, it prevents files from being deleted. See the s modifier for more info. See also the protect (P) and risk (R) rules, which are an alternate way to specify receiver-side includes/excludes.

Per-directory rules are inherited in all subdirectories of the directory where the merge-file was found unless the 'n' modifier was used. Each subdirectory's rules are prefixed to the inherited per-directory rules from its parents, which gives the newest rules a higher priority than the inherited rules. The entire set of dir-merge rules are grouped together in the spot where the merge-file was specified, so it is possible to override dir-merge rules via a rule that got specified earlier in the list of global rules. When the list-clearing rule ("!") is read from a per-directory file, it only clears the inherited rules for the current merge file.

Another way to prevent a single rule from a dir-merge file from being inherited is to anchor it with a leading slash. Anchored rules in a per-directory merge-file are relative to the merge-file's directory, so a pattern "/foo" would only match the file "foo" in the directory where the dir-merge filter file was found.
Here's an example filter file which you'd specify via --filter=". file":

merge /home/user/.global-filter
- *.gz
dir-merge .rules
+ *.[ch]
- *.o

This will merge the contents of the /home/user/.global-filter file at the start of the list and also turns the ".rules" filename into a per-directory filter file. All rules read in prior to the start of the directory scan follow the global anchoring rules (i.e. a leading slash matches at the root of the transfer).

If a per-directory merge-file is specified with a path that is a parent directory of the first transfer directory, rsync will scan all the parent dirs from that starting point to the transfer directory for the indicated per-directory file. For instance, here is a common filter (see -F):

--filter=': /.rsync-filter'

That rule tells rsync to scan for the file .rsync-filter in all directories from the root down through the parent directory of the transfer prior to the start of the normal directory scan of the file in the directories that are sent as a part of the transfer. (Note: for an rsync daemon, the root is always the same as the module's "path".)

Some examples of this pre-scanning for per-directory files:

rsync -avF /src/path/ /dest/dir
rsync -av --filter=': ../../.rsync-filter' /src/path/ /dest/dir
rsync -av --filter=': .rsync-filter' /src/path/ /dest/dir

The first two commands above will look for ".rsync-filter" in "/" and "/src" before the normal scan begins looking for the file in "/src/path" and its subdirectories. The last command avoids the parent-dir scan and only looks for the ".rsync-filter" files in each directory that is a part of the transfer.

If you want to include the contents of a ".cvsignore" in your patterns, you should use the rule ":C", which creates a dir-merge of the .cvsignore file, but parsed in a CVS-compatible manner.
You can use this to affect where the --cvs-exclude (-C) option's inclusion of the per-directory .cvsignore file gets placed into your rules by putting the ":C" wherever you like in your filter rules. Without this, rsync would add the dir-merge rule for the .cvsignore file at the end of all your other rules (giving it a lower priority than your command-line rules). For example:

cat <<EOT | rsync -avC --filter='. -' a/ b
+ foo.o
:C
- *.old
EOT

rsync -avC --include=foo.o -f :C --exclude='*.old' a/ b

Both of the above rsync commands are identical. Each one will merge all the per-directory .cvsignore rules in the middle of the list rather than at the end. This allows their dir-specific rules to supersede the rules that follow the :C instead of being subservient to all your rules.

To affect the other CVS exclude rules (i.e. the default list of exclusions, the contents of $HOME/.cvsignore, and the value of $CVSIGNORE) you should omit the -C command-line option and instead insert a "-C" rule into your filter rules; e.g. "--filter=-C".

LIST-CLEARING FILTER RULE

You can clear the current include/exclude list by using the "!" filter rule (as introduced in the FILTER RULES section above). The "current" list is either the global list of rules (if the rule is encountered while parsing the filter options) or a set of per-directory rules (which are inherited in their own sub-list, so a subdirectory can use this to clear out the parent's rules).

ANCHORING INCLUDE/EXCLUDE PATTERNS

As mentioned earlier, global include/exclude patterns are anchored at the "root of the transfer" (as opposed to per-directory patterns, which are anchored at the merge-file's directory). If you think of the transfer as a subtree of names that are being sent from sender to receiver, the transfer-root is where the tree starts to be duplicated in the destination directory. This root governs where patterns that start with a / match.
Because the matching is relative to the transfer-root, changing the trailing slash on a source path or changing your use of the --relative option affects the path you need to use in your matching (in addition to changing how much of the file tree is duplicated on the destination host). The following examples demonstrate this.

Let's say that we want to match two source files, one with an absolute path of "/home/me/foo/bar", and one with a path of "/home/you/bar/baz". Here is how the various command choices differ for a 2-source transfer:

Example cmd: rsync -a /home/me /home/you /dest
+/- pattern: /me/foo/bar
+/- pattern: /you/bar/baz
Target file: /dest/me/foo/bar
Target file: /dest/you/bar/baz

Example cmd: rsync -a /home/me/ /home/you/ /dest
+/- pattern: /foo/bar (note missing "me")
+/- pattern: /bar/baz (note missing "you")
Target file: /dest/foo/bar
Target file: /dest/bar/baz

Example cmd: rsync -a --relative /home/me/ /home/you /dest
+/- pattern: /home/me/foo/bar (note full path)
+/- pattern: /home/you/bar/baz (ditto)
Target file: /dest/home/me/foo/bar
Target file: /dest/home/you/bar/baz

Example cmd: cd /home; rsync -a --relative me/foo you/ /dest
+/- pattern: /me/foo/bar (starts at specified path)
+/- pattern: /you/bar/baz (ditto)
Target file: /dest/me/foo/bar
Target file: /dest/you/bar/baz

The easiest way to see what name you should filter is to just look at the output when using --verbose and put a / in front of the name (use the --dry-run option if you're not yet ready to copy any files).

PER-DIRECTORY RULES AND DELETE

Without a delete option, per-directory rules are only relevant on the sending side, so you can feel free to exclude the merge files themselves without affecting the transfer.
To make this easy, the 'e' modifier adds this exclude for you, as seen in these two equivalent commands:

    rsync -av --filter=': .excl' --exclude=.excl host:src/dir /dest
    rsync -av --filter=':e .excl' host:src/dir /dest

However, if you want to do a delete on the receiving side AND you want some files to be excluded from being deleted, you'll need to be sure that the receiving side knows what files to exclude. The easiest way is to include the per-directory merge files in the transfer and use --delete-after, because this ensures that the receiving side gets all the same exclude rules as the sending side before it tries to delete anything:

    rsync -avF --delete-after host:src/dir /dest

However, if the merge files are not a part of the transfer, you'll need to either specify some global exclude rules (i.e. specified on the command line), or you'll need to maintain your own per-directory merge files on the receiving side. An example of the first is this (assume that the remote .rules files exclude themselves):

    rsync -av --filter=': .rules' --filter='. /my/extra.rules' --delete host:src/dir /dest

In the above example the extra.rules file can affect both sides of the transfer, but (on the sending side) the rules are subservient to the rules merged from the .rules files because they were specified after the per-directory merge rule.

In one final example, the remote side is excluding the .rsync-filter files from the transfer, but we want to use our own .rsync-filter files to control what gets deleted on the receiving side. To do this we must specifically exclude the per-directory merge files (so that they don't get deleted) and then put rules into the local files to control what else should not get deleted. Like one of these commands:

    rsync -av --filter=':e /.rsync-filter' --delete \
        host:src/dir /dest
    rsync -avFF --delete host:src/dir /dest

BATCH MODE

Batch mode can be used to apply the same set of updates to many identical systems.
Suppose one has a tree which is replicated on a number of hosts. Now suppose some changes have been made to this source tree and those changes need to be propagated to the other hosts. In order to do this using batch mode, rsync is run with the write-batch option to apply the changes made to the source tree to one of the destination trees. The write-batch option causes the rsync client to store in a "batch file" all the information needed to repeat this operation against other, identical destination trees.

To apply the recorded changes to another destination tree, run rsync with the read-batch option, specifying the name of the same batch file, and the destination tree. Rsync updates the destination tree using the information stored in the batch file.

For convenience, one additional file is created when the write-batch option is used. This file's name is created by appending ".sh" to the batch filename. The .sh file contains a command-line suitable for updating a destination tree using that batch file. It can be executed using a Bourne (or Bourne-like) shell, optionally passing in an alternate destination tree pathname which is then used instead of the original path.

Generating the batch file once saves having to perform the file status, checksum, and data block generation more than once when updating multiple destination trees. Multicast transport protocols can be used to transfer the batch update files in parallel to many hosts at once, instead of sending the same data to every host individually.
Examples:

    $ rsync --write-batch=foo -a host:/source/dir/ /adest/dir/
    $ scp foo* remote:
    $ ssh remote ./foo.sh /bdest/dir/

    $ rsync --write-batch=foo -a /source/dir/ /adest/dir/
    $ ssh remote rsync --read-batch=- -a /bdest/dir/ <foo

In these examples, rsync is used to update /adest/dir/ from /source/dir/ and the information to repeat this operation is stored in "foo" and "foo.sh". The host "remote" is then updated with the batched data going into the directory /bdest/dir. The differences between the two examples reveal some of the flexibility you have in how you deal with batches:

o The first example shows that the initial copy doesn't have to be local -- you can push or pull data to/from a remote host using either the remote-shell syntax or rsync daemon syntax, as desired.

o The first example uses the created "foo.sh" file to get the right rsync options when running the read-batch command on the remote host.

o The second example reads the batch data via standard input so that the batch file doesn't need to be copied to the remote machine first. This example avoids the foo.sh script because it needed to use a modified --read-batch option, but you could edit the script file if you wished to make use of it (just be sure that no other option is trying to use standard input, such as the "--exclude-from=-" option).

Caveats:

The read-batch option expects the destination tree that it is updating to be identical to the destination tree that was used to create the batch update fileset. When a difference between the destination trees is encountered the update might be discarded with a warning (if the file appears to be up-to-date already) or the file-update may be attempted and then, if the file fails to verify, the update discarded with an error. This means that it should be safe to re-run a read-batch operation if the command got interrupted.
If you wish to force the batched-update to always be attempted regardless of the file's size and date, use the -I option (when reading the batch). If an error occurs, the destination tree will probably be in a partially updated state. In that case, rsync can be used in its regular (non-batch) mode of operation to fix up the destination tree.

The rsync version used on all destinations must be at least as new as the one used to generate the batch file. Rsync will die with an error if the protocol version in the batch file is too new for the batch-reading rsync to handle. See also the --protocol option for a way to have the creating rsync generate a batch file that an older rsync can understand. (Note that batch files changed format in version 2.6.3, so mixing versions older than that with newer versions will not work.)

When reading a batch file, rsync will force the value of certain options to match the data in the batch file if you didn't set them to the same as the batch-writing command. Other options can (and should) be changed. For instance --write-batch changes to --read-batch, --files-from is dropped, and the --filter/--include/--exclude options are not needed unless one of the --delete options is specified.

The code that creates the BATCH.sh file transforms any filter/include/exclude options into a single list that is appended as a "here" document to the shell script file. An advanced user can use this to modify the exclude list if a change in what gets deleted by --delete is desired. A normal user can ignore this detail and just use the shell script as an easy way to run the appropriate --read-batch command for the batched data.

The original batch mode in rsync was based on "rsync+", but the latest version uses a new implementation.

SYMBOLIC LINKS

Three basic behaviors are possible when rsync encounters a symbolic link in the source directory. By default, symbolic links are not transferred at all.
A message "skipping non-regular file" is emitted for any symlinks that exist. If --links is specified, then symlinks are recreated with the same target on the destination. Note that --archive implies --links. If --copy-links is specified, then symlinks are "collapsed" by copying their referent, rather than the symlink.

rsync also distinguishes "safe" and "unsafe" symbolic links. An example where this might be used is a web site mirror that wishes to ensure that the rsync module it copies does not include symbolic links to /etc/passwd in the public section of the site. Using --copy-unsafe-links will cause any links to be copied as the file they point to on the destination. Using --safe-links will cause unsafe links to be omitted altogether. (Note that you must specify --links for --safe-links to have any effect.)

Symbolic links are considered unsafe if they are absolute symlinks (start with /), empty, or if they contain enough ".." components to ascend from the directory being copied.

Here's a summary of how the symlink options are interpreted. The list is in order of precedence, so if your combination of options isn't mentioned, use the first line that is a complete subset of your options:

--copy-links  Turn all symlinks into normal files (leaving no symlinks for any other options to affect).

--links --copy-unsafe-links  Turn all unsafe symlinks into files and duplicate all safe symlinks.

--copy-unsafe-links  Turn all unsafe symlinks into files, noisily skip all safe symlinks.

--links --safe-links  Duplicate safe symlinks and skip unsafe ones.

--links  Duplicate all symlinks.

DIAGNOSTICS

rsync occasionally produces error messages that may seem a little cryptic. The one that seems to cause the most confusion is "protocol version mismatch -- is your shell clean?". This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport.
The way to diagnose this problem is to run your remote shell like this:

    ssh remotehost /bin/true > out.dat

then look at out.dat. If everything is working correctly then out.dat should be a zero-length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins.

If you are having trouble debugging filter patterns, then try specifying the -vv option. At this level of verbosity rsync will show why each individual file is included or excluded.

EXIT VALUES

0   Success
1   Syntax or usage error
2   Protocol incompatibility
3   Errors selecting input/output files, dirs
4   Requested action not supported: an attempt was made to manipulate 64-bit files on a platform that cannot support them; or an option was specified that is supported by the client and not by the server.
5   Error starting client-server protocol
6   Daemon unable to append to log-file
10  Error in socket I/O
11  Error in file I/O
12  Error in rsync protocol data stream
13  Errors with program diagnostics
14  Error in IPC code
20  Received SIGUSR1 or SIGINT
21  Some error returned by waitpid()
22  Error allocating core memory buffers
23  Partial transfer due to error
24  Partial transfer due to vanished source files
25  The --max-delete limit stopped deletions
30  Timeout in data send/receive

ENVIRONMENT VARIABLES

CVSIGNORE  The CVSIGNORE environment variable supplements any ignore patterns in .cvsignore files. See the --cvs-exclude option for more details.

RSYNC_RSH  The RSYNC_RSH environment variable allows you to override the default shell used as the transport for rsync. Command line options are permitted after the command name, just as in the -e option.
RSYNC_PROXY  The RSYNC_PROXY environment variable allows you to redirect your rsync client to use a web proxy when connecting to an rsync daemon. You should set RSYNC_PROXY to a hostname:port pair.

RSYNC_PASSWORD  Setting RSYNC_PASSWORD to the required password allows you to run authenticated rsync connections to an rsync daemon without user intervention. Note that this does not supply a password to a shell transport such as ssh.

USER or LOGNAME  The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to "nobody".

HOME  The HOME environment variable is used to find the user's default .cvsignore file.

FILES

/etc/rsyncd.conf or rsyncd.conf

SEE ALSO

rsyncd.conf(5) fcntl(2)

BUGS

times are transferred as *nix time_t values

When transferring to FAT filesystems rsync may re-sync unmodified files. See the comments on the --modify-window option.

file permissions, devices, etc. are transferred as native numerical values

see also the comments on the --delete option

Please report bugs! See the website at http://rsync.samba.org/

VERSION

This man page is current for version 2.6.9 of rsync.

INTERNAL OPTIONS

The options --server and --sender are used internally by rsync, and should never be typed by a user under normal circumstances. Some awareness of these options may be needed in certain scenarios, such as when setting up a login that can only run an rsync command. For instance, the support directory of the rsync distribution has an example script named rrsync (for restricted rsync) that can be used with a restricted ssh login.

CREDITS

rsync is distributed under the GNU public license. See the file COPYING for details.

A web site is available at http://rsync.samba.org/. The site includes an FAQ-O-Matic which may cover questions unanswered by this manual page. The primary ftp site for rsync is ftp://rsync.samba.org/pub/rsync.

We would be delighted to hear from you if you like this program.
This program uses the excellent zlib compression library written by Jean-loup Gailly and Mark Adler. THANKS Thanks to Richard Brent, Brendan Mackay, Bill Waite, Stephen Rothwell and David Bell for helpful suggestions, patches and testing of rsync. I've probably missed some people, my apologies if I have. Especial thanks also to: David Dykstra, Jos Backus, Sebastian Krahmer, Martin Pool, Wayne Davison, J.W. Schultz. AUTHOR rsync was originally written by Andrew Tridgell and Paul Mackerras. Many people have later contributed to it. Mailing lists for support and development are available at http://lists.samba.org 6 Nov 2006 rsync(1)
|
Here are some examples of how I use rsync.

To backup my wife's home directory, which consists of large MS Word files and mail folders, I use a cron job that runs

    rsync -Cavz . arvidsjaur:backup

each night over a PPP connection to a duplicate directory on my machine "arvidsjaur".

To synchronize my samba source trees I use the following Makefile targets:

    get:
            rsync -avuzb --exclude '*~' samba:samba/ .
    put:
            rsync -Cavuzb . samba:samba/
    sync: get put

this allows me to sync with a CVS directory at the other end of the connection. I then do CVS operations on the remote machine, which saves a lot of time as the remote CVS protocol isn't very efficient.

I mirror a directory between my "old" and "new" ftp sites with the command:

    rsync -az -e ssh --delete ~ftp/pub/samba nimbus:"~ftp/pub/tridge"

This is launched from cron every few hours.

OPTIONS SUMMARY

Here is a short summary of the options available in rsync. Please refer to the detailed description below for a complete description. -v, --verbose increase verbosity -q, --quiet suppress non-error messages --no-motd suppress daemon-mode MOTD (see caveat) -c, --checksum skip based on checksum, not mod-time & size -a, --archive archive mode; same as -rlptgoD (no -H) --no-OPTION turn off an implied OPTION (e.g.
--no-D) -r, --recursive recurse into directories -R, --relative use relative path names --no-implied-dirs don't send implied dirs with --relative -b, --backup make backups (see --suffix & --backup-dir) --backup-dir=DIR make backups into hierarchy based in DIR --suffix=SUFFIX backup suffix (default ~ w/o --backup-dir) -u, --update skip files that are newer on the receiver --inplace update destination files in-place --append append data onto shorter files -d, --dirs transfer directories without recursing -l, --links copy symlinks as symlinks -L, --copy-links transform symlink into referent file/dir --copy-unsafe-links only "unsafe" symlinks are transformed --safe-links ignore symlinks that point outside the tree -k, --copy-dirlinks transform symlink to dir into referent dir -K, --keep-dirlinks treat symlinked dir on receiver as dir -H, --hard-links preserve hard links -p, --perms preserve permissions --executability preserve executability --chmod=CHMOD affect file and/or directory permissions -o, --owner preserve owner (super-user only) -g, --group preserve group --devices preserve device files (super-user only) --specials preserve special files -D same as --devices --specials -t, --times preserve times -O, --omit-dir-times omit directories when preserving times --super receiver attempts super-user activities -S, --sparse handle sparse files efficiently -n, --dry-run show what would have been transferred -W, --whole-file copy files whole (without rsync algorithm) -x, --one-file-system don't cross filesystem boundaries -B, --block-size=SIZE force a fixed checksum block-size -e, --rsh=COMMAND specify the remote shell to use --rsync-path=PROGRAM specify the rsync to run on remote machine --existing skip creating new files on receiver --ignore-existing skip updating files that exist on receiver --remove-source-files sender removes synchronized files (non-dir) --del an alias for --delete-during --delete delete extraneous files from dest dirs --delete-before receiver 
deletes before transfer (default) --delete-during receiver deletes during xfer, not before --delete-after receiver deletes after transfer, not before --delete-excluded also delete excluded files from dest dirs --ignore-errors delete even if there are I/O errors --force force deletion of dirs even if not empty --max-delete=NUM don't delete more than NUM files --max-size=SIZE don't transfer any file larger than SIZE --min-size=SIZE don't transfer any file smaller than SIZE --partial keep partially transferred files --partial-dir=DIR put a partially transferred file into DIR --delay-updates put all updated files into place at end -m, --prune-empty-dirs prune empty directory chains from file-list --numeric-ids don't map uid/gid values by user/group name --timeout=TIME set I/O timeout in seconds -I, --ignore-times don't skip files that match size and time --size-only skip files that match in size --modify-window=NUM compare mod-times with reduced accuracy -T, --temp-dir=DIR create temporary files in directory DIR -y, --fuzzy find similar file for basis if no dest file --compare-dest=DIR also compare received files relative to DIR --copy-dest=DIR ... 
and include copies of unchanged files --link-dest=DIR hardlink to files in DIR when unchanged -z, --compress compress file data during the transfer --compress-level=NUM explicitly set compression level -C, --cvs-exclude auto-ignore files in the same way CVS does -f, --filter=RULE add a file-filtering RULE -F same as --filter='dir-merge /.rsync-filter' repeated: --filter='- .rsync-filter' --exclude=PATTERN exclude files matching PATTERN --exclude-from=FILE read exclude patterns from FILE --include=PATTERN don't exclude files matching PATTERN --include-from=FILE read include patterns from FILE --files-from=FILE read list of source-file names from FILE -0, --from0 all *from/filter files are delimited by 0s --address=ADDRESS bind address for outgoing socket to daemon --port=PORT specify double-colon alternate port number --sockopts=OPTIONS specify custom TCP options --blocking-io use blocking I/O for the remote shell --stats give some file-transfer stats -8, --8-bit-output leave high-bit chars unescaped in output -h, --human-readable output numbers in a human-readable format --progress show progress during transfer -P same as --partial --progress -i, --itemize-changes output a change-summary for all updates --out-format=FORMAT output updates using the specified FORMAT --log-file=FILE log what we're doing to the specified FILE --log-file-format=FMT log updates using the specified FMT --password-file=FILE read password from FILE --list-only list the files instead of copying them --bwlimit=KBPS limit I/O bandwidth; KBytes per second --write-batch=FILE write a batched update to FILE --only-write-batch=FILE like --write-batch but w/o updating dest --read-batch=FILE read a batched update from FILE --protocol=NUM force an older protocol version to be used --checksum-seed=NUM set block/file checksum seed (advanced) -4, --ipv4 prefer IPv4 -6, --ipv6 prefer IPv6 -E, --extended-attributes copy extended attributes, resource forks --cache disable fcntl(F_NOCACHE) --version print 
version number (-h) --help show this help (see below for -h comment) Rsync can also be run as a daemon, in which case the following options are accepted: --daemon run as an rsync daemon --address=ADDRESS bind to the specified address --bwlimit=KBPS limit I/O bandwidth; KBytes per second --config=FILE specify alternate rsyncd.conf file --no-detach do not detach from the parent --port=PORT listen on alternate port number --log-file=FILE override the "log file" setting --log-file-format=FMT override the "log format" setting --sockopts=OPTIONS specify custom TCP options -v, --verbose increase verbosity -4, --ipv4 prefer IPv4 -6, --ipv6 prefer IPv6 -h, --help show this help (if used after --daemon)
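As a concrete illustration of the --filter/--exclude options summarized above, here is a minimal local sketch. The directory and file names are illustrative, and the commands are guarded so they are skipped where rsync is not installed:

```shell
# Hypothetical local transfer demonstrating a simple exclude filter.
# Guarded in case rsync is unavailable on this system.
if command -v rsync >/dev/null 2>&1; then
    mkdir -p src dst
    printf 'keep\n' > src/keep.txt
    printf 'old\n'  > src/skip.old
    # '- *.old' is an exclude rule: skip.old is not transferred
    rsync -a --filter='- *.old' src/ dst/
    ls dst
else
    echo "rsync not available; skipping"
fi
```

The same effect could be had with --exclude='*.old'; --filter simply accepts the full rule syntax, including merge-file rules.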
|
osacompile
|
osacompile compiles the given files, or standard input if none are listed, into a single output script. Files may be plain text or other compiled scripts. The options are as follows:

-l language  Override the language for any plain text files. Normally, plain text files are compiled as AppleScript.

-e command  Enter one line of a script. Script commands given via -e are prepended to the normal source, if any. Multiple -e options may be given to build up a multi-line script. Because most scripts use characters that are special to many shell programs (e.g., AppleScript uses single and double quote marks, "(", ")", and "*"), the command will have to be correctly quoted and escaped to get it past the shell intact.

-o name  Place the output in the file name. If -o is not specified, the resulting script is placed in the file "a.scpt". The value of -o partly determines the output file format; see below.

-x  Save the resulting script as execute-only.

The following options are only relevant when creating a new bundled applet or droplet:

-s  Stay-open applet.

-u  Use startup screen.

The following options control the packaging of the output file. You should only need them for compatibility with classic Mac OS or for custom file formats.

-d  Place the resulting script in the data fork of the output file. This is the default.

-r type:id  Place the resulting script in the resource fork of the output file, in the specified resource.

-t type  Set the output file type to type, where type is a four-character code. If this option is not specified, the file type will not be set.

-c creator  Set the output file creator to creator, where creator is a four-character code. If this option is not specified, the creator code will not be set.

If no options are specified, osacompile produces a Mac OS X format script file: data fork only, with no type or creator code.
If the -o option is specified and the file does not already exist, osacompile uses the filename extension to determine what type of file to create. If the filename ends with β.appβ, it creates a bundled applet or droplet. If the filename ends with β.scptdβ, it creates a bundled compiled script. Otherwise, it creates a flat file with the script data placed according to the values of the -d and -r options.
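For instance, a minimal sketch of the -o extension behavior described above (osacompile is a macOS tool, so the commands are guarded; the inline scripts and file names are illustrative):

```shell
# Guarded in case osacompile (macOS only) is unavailable.
if command -v osacompile >/dev/null 2>&1; then
    # Compile an inline one-line script into a flat compiled script file
    osacompile -o hello.scpt -e 'return "hello"'
    # The .app extension makes osacompile produce a bundled applet instead
    osacompile -o Hello.app -e 'display dialog "Hello"'
else
    echo "osacompile not available; skipping"
fi
```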
|
osacompile - compile AppleScripts and other OSA language scripts
|
osacompile [-l language] [-e command] [-o name] [-d] [-r type:id] [-t type] [-c creator] [-x] [-s] [-u] [file ...]
| null |
To produce a script compatible with classic Mac OS: osacompile -r scpt:128 -t osas -c ToyS example.applescript SEE ALSO osascript(1), osalang(1) Mac OS X November 12, 2008 Mac OS X
|
yamlpp-load5.30
| null | null | null | null | null |
hdxml2manxml
| null |
hdxml2manxml - HeaderDoc XML to MPGL translator
|
hdxml2manxml [-M man_section] filename [...]
|
The available options are as follows:

-M        the manual section for the output files

filename  the filename(s) to be processed

DESCRIPTION

This tool was designed to translate from headerdoc's XML output to an mxml file for use with xml2man. The tool takes a list of XML files generated with headerdoc2html (with the -X flag) and outputs a series of .mxml files (suitable for use with xml2man) in the current directory.

SEE ALSO

For more information on xml2man, see: xml2man(1)

Darwin August 28, 2002 Darwin
| null |
gzcat
|
The gzip program compresses and decompresses files using Lempel-Ziv coding (LZ77). If no files are specified, gzip will compress from standard input, or decompress to standard output. When in compression mode, each file will be replaced with another file with the suffix, set by the -S suffix option, added, if possible. In decompression mode, each file will be checked for existence, as will the file with the suffix added. Each file argument must contain a separate complete archive; when multiple files are indicated, each is decompressed in turn. In the case of gzcat the resulting data is then concatenated in the manner of cat(1). If invoked as gunzip then the -d option is enabled. If invoked as zcat or gzcat then both the -c and -d options are enabled. This version of gzip is also capable of decompressing files compressed using compress(1), bzip2(1), lzip, or xz(1).
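A short sketch of the concatenation behavior described above (the archive names are illustrative):

```shell
# Create two separate complete archives...
printf 'one\n' | gzip > a.gz
printf 'two\n' | gzip > b.gz
# ...then decompress each in turn, concatenating the results in the
# manner of cat(1); gzip -cd behaves like gzcat/zcat here.
gzip -cd a.gz b.gz
# prints:
#   one
#   two
```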
|
gzip, gunzip, zcat - compression/decompression tool using Lempel-Ziv coding (LZ77)
|
gzip [-cdfhkLlNnqrtVv] [-S suffix] file [file [...]] gunzip [-cfhkLNqrtVv] [-S suffix] file [file [...]] zcat [-fhV] file [file [...]]
|
The following options are available:

-1, --fast, -2, -3, -4, -5, -6, -7, -8, -9, --best  These options change the compression level used, with the -1 option being the fastest, with less compression, and the -9 option being the slowest, with optimal compression. The default compression level is 6.

-c, --stdout, --to-stdout  This option specifies that output will go to the standard output stream, leaving files intact.

-d, --decompress, --uncompress  This option selects decompression rather than compression.

-f, --force  This option turns on force mode. This allows files with multiple links, symbolic links to regular files, overwriting of pre-existing files, reading from or writing to a terminal, and when combined with the -c option, allowing non-compressed data to pass through unchanged.

-h, --help  This option prints a usage summary and exits.

-k, --keep  This option prevents gzip from deleting input files after (de)compression.

-L, --license  This option prints the gzip license.

-l, --list  This option displays information about the file's compressed and uncompressed size, ratio, and uncompressed name. With the -v option, it also displays the compression method, CRC, and the date and time embedded in the file.

-N, --name  This option causes the stored filename in the input file to be used as the name of the output file.

-n, --no-name  This option stops the filename and timestamp from being stored in the output file.

-q, --quiet  With this option, no warnings or errors are printed.

-r, --recursive  This option is used to gzip the files in a directory tree individually, using the fts(3) library.

-S suffix, --suffix suffix  This option changes the default suffix from .gz to suffix.

-t, --test  This option will test compressed files for integrity.

-V, --version  This option prints the version of the gzip program.

-v, --verbose  This option turns on verbose mode, which prints the compression ratio for each file compressed.
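A round-trip sketch of a few of these options (the file name is illustrative):

```shell
printf 'hello world\n' > demo.txt
gzip -9 -k demo.txt     # best compression; -k keeps demo.txt alongside demo.txt.gz
gzip -t demo.txt.gz     # -t: integrity test (silent on success)
gzip -l demo.txt.gz     # -l: list compressed/uncompressed sizes and ratio
gzip -cd demo.txt.gz    # decompress to standard output, leaving the archive intact
```

Without -k, the original demo.txt would have been replaced by demo.txt.gz.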
ENVIRONMENT

If the environment variable GZIP is set, it is parsed as a white-space separated list of options handled before any options on the command line. Options on the command line will override anything in GZIP.

EXIT STATUS

The gzip utility exits 0 on success, 1 on errors, and 2 if a warning occurs.

SIGNALS

gzip responds to the following signals:

SIGINFO  Report progress to standard error.

SEE ALSO

bzip2(1), compress(1), xz(1), fts(3), zlib(3)

HISTORY

The gzip program was originally written by Jean-loup Gailly, licensed under the GNU Public Licence. Matthew R. Green wrote a simple front end for NetBSD 1.3 distribution media, based on the freely re-distributable zlib library. It was enhanced to be mostly feature-compatible with the original GNU gzip program for NetBSD 2.0. This implementation of gzip was ported based on the NetBSD gzip version 20181111, and first appeared in FreeBSD 7.0.

AUTHORS

This implementation of gzip was written by Matthew R. Green <mrg@eterna.com.au> with unpack support written by Xin LI <delphij@FreeBSD.org>.

BUGS

According to RFC 1952, the recorded file size is stored in a 32-bit integer, therefore, it cannot represent files larger than 4GB. This limitation also applies to the -l option of the gzip utility.

macOS 14.5 January 7, 2019 macOS 14.5
| null |
delv
|
delv (Domain Entity Lookup & Validation) is a tool for sending DNS queries and validating the results, using the same internal resolver and validator logic as named. delv will send to a specified name server all queries needed to fetch and validate the requested data; this includes the original requested query, subsequent queries to follow CNAME or DNAME chains, and queries for DNSKEY, DS and DLV records to establish a chain of trust for DNSSEC validation. It does not perform iterative resolution, but simulates the behavior of a name server configured for DNSSEC validating and forwarding.

By default, responses are validated using built-in DNSSEC trust anchors for the root zone (".") and for the ISC DNSSEC lookaside validation zone ("dlv.isc.org"). Records returned by delv are either fully validated or were not signed. If validation fails, an explanation of the failure is included in the output; the validation process can be traced in detail. Because delv does not rely on an external server to carry out validation, it can be used to check the validity of DNS responses in environments where local name servers may not be trustworthy.

Unless it is told to query a specific name server, delv will try each of the servers listed in /etc/resolv.conf. If no usable server addresses are found, delv will send queries to the localhost addresses (127.0.0.1 for IPv4, ::1 for IPv6).

When no command line arguments or options are given, delv will perform an NS query for "." (the root zone).

SIMPLE USAGE

A typical invocation of delv looks like:

    delv @server name type

where:

server is the name or IP address of the name server to query. This can be an IPv4 address in dotted-decimal notation or an IPv6 address in colon-delimited notation. When the supplied server argument is a hostname, delv resolves that name before querying that name server (note, however, that this initial lookup is not validated by DNSSEC).
If no server argument is provided, delv consults /etc/resolv.conf; if an address is found there, it queries the name server at that address. If either of the -4 or -6 options are in use, then only addresses for the corresponding transport will be tried. If no usable addresses are found, delv will send queries to the localhost addresses (127.0.0.1 for IPv4, ::1 for IPv6).
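By way of illustration (delv ships with BIND 9 and may not be installed, so the commands are guarded; the server address and query names are placeholders, and the queries themselves need network access):

```shell
if command -v delv >/dev/null 2>&1; then
    # Validated lookup via the servers in /etc/resolv.conf
    delv example.com A || true
    # Query one specific server (address is illustrative)
    delv @192.0.2.53 example.com MX || true
    # Reverse lookup; the query type becomes PTR
    delv -x 192.0.2.1 || true
else
    echo "delv not available; skipping"
fi
```

The `|| true` guards only keep the sketch from aborting where no network is available; they are not part of normal usage.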
|
delv - DNS lookup and validation utility

name is the domain name to be looked up.

type indicates what type of query is required - ANY, A, MX, etc. type can be any valid query type. If no type argument is supplied, delv will perform a lookup for an A record.
|
delv [@server] [-4] [-6] [-a anchor-file] [-b address] [-c class] [-d level] [-i] [-m] [-p port#] [-q name] [-t type] [-x addr] [name] [type] [class] [queryopt...] delv [-h] delv [-v] delv [queryopt...] [query...]
|
-a anchor-file  Specifies a file from which to read DNSSEC trust anchors. The default is /etc/bind.keys, which is included with BIND 9 and contains trust anchors for the root zone (".") and for the ISC DNSSEC lookaside validation zone ("dlv.isc.org"). Keys that do not match the root or DLV trust-anchor names are ignored; these key names can be overridden using the +dlv=NAME or +root=NAME options. Note: When reading the trust anchor file, delv treats managed-keys statements and trusted-keys statements identically. That is, for a managed key, it is the initial key that is trusted; RFC 5011 key management is not supported. delv will not consult the managed-keys database maintained by named. This means that if either of the keys in /etc/bind.keys is revoked and rolled over, it will be necessary to update /etc/bind.keys to use DNSSEC validation in delv.

-b address  Sets the source IP address of the query to address. This must be a valid address on one of the host's network interfaces or "0.0.0.0" or "::". An optional source port may be specified by appending "#<port>".

-c class  Sets the query class for the requested data. Currently, only class "IN" is supported in delv and any other value is ignored.

-d level  Sets the systemwide debug level to level. The allowed range is from 0 to 99. The default is 0 (no debugging). Debugging traces from delv become more verbose as the debug level increases. See the +mtrace, +rtrace, and +vtrace options below for additional debugging details.

-h  Display the delv help usage output and exit.

-i  Insecure mode. This disables internal DNSSEC validation. (Note, however, this does not set the CD bit on upstream queries. If the server being queried is performing DNSSEC validation, then it will not return invalid data; this can cause delv to time out. When it is necessary to examine invalid data to debug a DNSSEC problem, use dig +cd.)

-m  Enables memory usage debugging.
-p port# Specifies a destination port to use for queries instead of the standard DNS port number 53. This option would be used with a name server that has been configured to listen for queries on a non-standard port number. -q name Sets the query name to name. While the query name can be specified without using the -q, it is sometimes necessary to disambiguate names from types or classes (for example, when looking up the name "ns", which could be misinterpreted as the type NS, or "ch", which could be misinterpreted as class CH). -t type Sets the query type to type, which can be any valid query type supported in BIND 9 except for zone transfer types AXFR and IXFR. As with -q, this is useful to distinguish the query name from a type or class when they are ambiguous. The default query type is "A", unless the -x option is supplied to indicate a reverse lookup, in which case it is "PTR". -v Print the delv version and exit. -x addr Performs a reverse lookup, mapping an address to a name. addr is an IPv4 address in dotted-decimal notation, or a colon-delimited IPv6 address. When -x is used, there is no need to provide the name or type arguments. delv automatically performs a lookup for a name like 11.12.13.10.in-addr.arpa and sets the query type to PTR. IPv6 addresses are looked up using nibble format under the IP6.ARPA domain. -4 Forces delv to only use IPv4. -6 Forces delv to only use IPv6. QUERY OPTIONS delv provides a number of query options which affect the way results are displayed, and in some cases the way lookups are performed. Each query option is identified by a keyword preceded by a plus sign (+). Some keywords set or reset an option. These may be preceded by the string no to negate the meaning of that keyword. Other keywords assign values to options like the timeout interval. They have the form +keyword=value.
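The PTR name construction that -x performs (shown above for 10.13.12.11) follows a fixed rule that can be sketched with Python's standard ipaddress module; this is an illustration of the naming scheme, not delv's own code:

```python
import ipaddress

def reverse_query_name(addr: str) -> str:
    """Build the PTR query name that a reverse lookup of addr uses."""
    # reverse_pointer handles both families: dotted-decimal IPv4
    # under in-addr.arpa, and nibble-format IPv6 under ip6.arpa.
    return ipaddress.ip_address(addr).reverse_pointer

print(reverse_query_name("10.13.12.11"))  # 11.12.13.10.in-addr.arpa
```

Running delv -x 10.13.12.11 queries exactly this name with type PTR.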
The query options are: +[no]cdflag Controls whether to set the CD (checking disabled) bit in queries sent by delv. This may be useful when troubleshooting DNSSEC problems from behind a validating resolver. A validating resolver will block invalid responses, making it difficult to retrieve them for analysis. Setting the CD flag on queries will cause the resolver to return invalid responses, which delv can then validate internally and report the errors in detail. +[no]class Controls whether to display the CLASS when printing a record. The default is to display the CLASS. +[no]ttl Controls whether to display the TTL when printing a record. The default is to display the TTL. +[no]rtrace Toggle resolver fetch logging. This reports the name and type of each query sent by delv in the process of carrying out the resolution and validation process, including the original query and all subsequent queries to follow CNAMEs and to establish a chain of trust for DNSSEC validation. This is equivalent to setting the debug level to 1 in the "resolver" logging category. Setting the systemwide debug level to 1 using the -d option will produce the same output (but will affect other logging categories as well). +[no]mtrace Toggle message logging. This produces a detailed dump of the responses received by delv in the process of carrying out the resolution and validation process. This is equivalent to setting the debug level to 10 for the "packets" module of the "resolver" logging category. Setting the systemwide debug level to 10 using the -d option will produce the same output (but will affect other logging categories as well). +[no]vtrace Toggle validation logging. This shows the internal process of the validator as it determines whether an answer is validly signed, unsigned, or invalid. This is equivalent to setting the debug level to 3 for the "validator" module of the "dnssec" logging category.
Setting the systemwide debug level to 3 using the -d option will produce the same output (but will affect other logging categories as well). +[no]short Provide a terse answer. The default is to print the answer in a verbose form. +[no]comments Toggle the display of comment lines in the output. The default is to print comments. +[no]rrcomments Toggle the display of per-record comments in the output (for example, human-readable key information about DNSKEY records). The default is to print per-record comments. +[no]crypto Toggle the display of cryptographic fields in DNSSEC records. The contents of these fields are unnecessary to debug most DNSSEC validation failures, and removing them makes it easier to see the common failures. The default is to display the fields. When omitted, they are replaced by the string "[omitted]"; in the DNSKEY case, the key id is displayed as the replacement, e.g. "[ key id = value ]". +[no]trust Controls whether to display the trust level when printing a record. The default is to display the trust level. +[no]split[=W] Split long hex- or base64-formatted fields in resource records into chunks of W characters (where W is rounded up to the nearest multiple of 4). +nosplit or +split=0 causes fields not to be split at all. The default is 56 characters, or 44 characters when multiline mode is active. +[no]all Set or clear the display options +[no]comments, +[no]rrcomments, and +[no]trust as a group. +[no]multiline Print long records (such as RRSIG, DNSKEY, and SOA records) in a verbose multi-line format with human-readable comments. The default is to print each record on a single line, to facilitate machine parsing of the delv output. +[no]dnssec Indicates whether to display RRSIG records in the delv output. The default is to do so. Note that (unlike in dig) this does not control whether to request DNSSEC records or whether to validate them.
DNSSEC records are always requested, and validation will always occur unless suppressed by the use of -i or +noroot and +nodlv. +[no]root[=ROOT] Indicates whether to perform conventional (non-lookaside) DNSSEC validation, and if so, specifies the name of a trust anchor. The default is to validate using a trust anchor of "." (the root zone), for which there is a built-in key. If specifying a different trust anchor, then -a must be used to specify a file containing the key. +[no]dlv[=DLV] Indicates whether to perform DNSSEC lookaside validation, and if so, specifies the name of the DLV trust anchor. The default is to perform lookaside validation using a trust anchor of "dlv.isc.org", for which there is a built-in key. If specifying a different name, then -a must be used to specify a file containing the DLV key. macOS NOTICE The delv command does not use the host name and address resolution or the DNS query routing mechanisms used by other processes running on macOS. The results of name or address queries printed by delv may differ from those found by other processes that use the macOS native name and address resolution mechanisms. The results of DNS queries may also differ from queries that use the macOS DNS routing library. FILES /etc/bind.keys /etc/resolv.conf SEE ALSO dig(1), named(8), RFC4034, RFC4035, RFC4431, RFC5074, RFC5155. AUTHOR Internet Systems Consortium, Inc. COPYRIGHT Copyright © 2014-2016 Internet Systems Consortium, Inc. ("ISC") ISC 2018-05-25 DELV(1)
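The +split=W behaviour described above has two quirks worth noting: W is rounded up to the nearest multiple of 4, and 0 disables splitting entirely. A small Python sketch of that rule (an illustration, not taken from delv's source):

```python
def split_field(data: str, width: int = 56) -> list[str]:
    """Chunk a long hex/base64 field the way +split=W describes."""
    if width <= 0:
        return [data]              # +nosplit / +split=0: leave intact
    width = -(-width // 4) * 4     # round W up to a multiple of 4
    return [data[i:i + width] for i in range(0, len(data), width)]

print(split_field("A" * 100, 30))  # four chunks: 32 + 32 + 32 + 4 chars
```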
| null |
thermal
| null | null | null | null | null |
pico
|
Pico is a simple, display-oriented text editor based on the Alpine message system composer. As with Alpine, commands are displayed at the bottom of the screen, and context-sensitive help is provided. As characters are typed they are immediately inserted into the text. Editing commands are entered using control-key combinations. As a work-around for communications programs that swallow certain control characters, you can emulate a control key by pressing ESCAPE twice, followed by the desired control character, e.g. "ESC ESC c" would be equivalent to entering a ctrl-c. The editor has five basic features: paragraph justification, searching, block cut/paste, a spelling checker, and a file browser. Paragraph justification (or filling) takes place in the paragraph that contains the cursor, or, if the cursor is between lines, in the paragraph immediately below. Paragraphs are delimited by blank lines, or by lines beginning with a space or tab. Unjustification can be done immediately after justification using the control-U key combination. String searches are not sensitive to case. A search begins at the current cursor position and wraps around the end of the text. The most recent search string is offered as the default in subsequent searches. Blocks of text can be moved, copied or deleted with creative use of the command for mark (ctrl-^), delete (ctrl-k) and undelete (ctrl-u). The delete command will remove text between the "mark" and the current cursor position, and place it in the "cut" buffer. The undelete command effects a "paste" at the current cursor position. The spell checker examines all words in the text. It then offers, in turn, each misspelled word for correction while highlighting it in the text. Spell checking can be cancelled at any time. Alternatively, pico will substitute for the default spell checking routine a routine defined by the SPELL environment variable. The replacement routine should read standard input and write standard output. 
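The SPELL protocol above is deliberately minimal: the replacement routine reads the text on standard input and writes the suspect words on standard output. A toy Python routine of that shape (the KNOWN word list is a stand-in for a real dictionary):

```python
import re

KNOWN = {"the", "quick", "brown", "fox"}   # stand-in for a real dictionary

def misspelled(text: str) -> list[str]:
    """Return the words in text that the word list does not contain."""
    words = re.findall(r"[A-Za-z]+", text)
    return [w for w in words if w.lower() not in KNOWN]

# A real SPELL script would print misspelled(sys.stdin.read()), one per line.
print(misspelled("the quikc brown fox"))  # ['quikc']
```

Pointing the SPELL environment variable at such a script substitutes it for pico's built-in checker.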
The file browser is offered as an option in the "Read File" and "Write Out" command prompts. It is intended to help in searching for specific files and navigating directory hierarchies. Filenames with sizes and names of directories in the current working directory are presented for selection. The current working directory is displayed on the top line of the display while the list of available commands takes up the bottom two. Several basic file manipulation functions are supported: file renaming, copying, and deletion. More specific help is available in pico's online help.
|
pico - simple text editor in the style of the Alpine Composer Syntax pico [ options ] [ file ]
| null |
+n Causes pico to be started with the cursor located n lines into the file. (Note: no space between "+" sign and number) -a Display all files including those beginning with a period (.). -b Enable word wrap. -d Rebind the "delete" key so the character the cursor is on is rubbed out rather than the character to its left. -e Enable file name completion. -f Use function keys for commands. This option is supported only in conjunction with UW Enhanced NCSA telnet. -h List valid command line options. -j Enable "Goto" command in the file browser. This enables the command to permit explicitly telling pico which directory to visit. -g Enable "Show Cursor" mode in file browser. Causes the cursor to be positioned before the current selection rather than placed at the lower left of the display. -k Causes "Cut Text" command to remove characters from the cursor position to the end of the line rather than remove the entire line. -m Enable mouse functionality. This only works when pico is run from within an X Window System "xterm" window. -nn The -nn option enables new mail notification. The n argument is optional, and specifies how often, in seconds, your mailbox is checked for new mail. For example, -n60 causes pico to check for new mail once every minute. The default interval is 180 seconds, while the minimum allowed is 30. (Note: no space between "n" and the number) -o dir Sets operating directory. Only files within this directory are accessible. Likewise, the file browser is limited to the specified directory subtree. -rn Sets the column used to limit the "Justify" command's right margin. -s speller Specify an alternate program, speller, to use when spell checking. -t Enable "tool" mode. Intended for when pico is used as the editor within other tools (e.g., Elm, Pnews). Pico will not prompt for save on exit, and will not rename the buffer during the "Write Out" command. -v View the file only, disallowing any editing. -version Print Pico version and exit.
-w Disable word wrap (thus allow editing of long lines). -x Disable keymenu at the bottom of the screen. -z Enable ^Z suspension of pico. -p Preserve the "start" and "stop" characters, typically Ctrl-Q and Ctrl-S, which are sometimes used in communications paths to control data flow between devices that operate at different speeds. -Q quotestr Set the quote string. Especially useful when composing email, setting this allows the quote string to be checked for when Justifying paragraphs. A common quote string is "> ". -W word_separators If characters listed here appear in the middle of a word surrounded by alphanumeric characters, that word is split into two words. This is used by the Forward and Backward word commands and by the spell checker. -q Termcap or terminfo definitions for input escape sequences are used in preference to sequences defined by default. This option is only available if pico was compiled with the TERMCAP_WINS define turned on. -setlocale_ctype Do setlocale(LC_CTYPE) if available. Default is to not do this setlocale. -no_setlocale_collate Do not do setlocale(LC_COLLATE). Default is to do this setlocale. Lastly, when a running pico is disconnected (i.e., receives a SIGHUP), pico will save the current work if needed before exiting. Work is saved under the current filename with ".save" appended. If the current work is unnamed, it is saved under the filename "pico.save". Color Support If your terminal supports colors, Pico can be configured to color text. Users can configure the color of the text, the text in the key menu, the titlebar, and the messages and prompt in the status line. As an added feature, Pico can also be used to configure the color of up to three different levels of quoted text, and the signature of an email message. This is useful when Pico is used as a tool (with the -t command line switch). Pico can tell you the number of colors that your terminal supports, when started with the switch -color_codes.
In addition Pico will print a table showing the numerical code of every color supported in that terminal. In order to configure colors, one must use these numerical codes. For example, 0 is for black, so in order to configure a black color, one must use its code, the number 0. In order to activate colors, one must use the option -ncolors with a numerical value indicating the number of colors that your terminal supports; for example, -ncolors 256 indicates that the user wishes to use a table of 256 colors. All options that control color are four letter options. Their last two letters are either "fc" or "bc", indicating foreground color and background color, respectively. The first two letters indicate the type of text that is being configured; for example, "nt" stands for normal text, so that -ntfc represents the color of the normal text, while -ntbc represents the color of the background of normal text. Here is a complete list of the color options supported by Pico. -color_codes displays the number of colors supported by the terminal, and a table showing the association of colors and numerical codes -ncolors number activates color support in Pico, and tells Pico how many colors to use. Depending on your terminal, number could be 8, 16, or 256. -ntfc num specifies the number num of the color to be used to color normal text. -ntbc num specifies the number num of the color of the background for normal text. -rtfc num specifies the number num of the color of reverse text. Default: same as background color of normal text (if specified.) -rtbc num specifies the number num of the color of the background of reverse text. Default: same as color of normal text (if specified.) -tbfc num specifies the number num of the color of text of the title bar. Default: same as foreground color of reverse text. -tbbc num specifies the number num of the color in the background of the title bar. -klfc num specifies the number num of the color of the text of the key label.
-klbc num specifies the number num of the color in the background of the key label. -knfc num specifies the number num of the color of the text of the key name. -knbc num specifies the number num of the color of the background of the key name. -stfc num specifies the number num of the color of the text of the status line. -stbc num specifies the number num of the color of the background of the status line. -prfc num specifies the number num of the color of the text of a prompt. -prbc num specifies the number num of the color of the background of a prompt. -q1fc num specifies the number num of the color of the text of level one of quoted text. -q1bc num specifies the number num of the color of the background of level one of quoted text. If the option -q1fc is used, the default value of this option is the background color of normal text. -q2fc num specifies the number num of the color of text of level two of quoted text. -q2bc num specifies the number num of the color of the background of level two of quoted text. If the option -q2fc is used, the default value of this option is the background color of normal text. -q3fc num specifies the number num of the color of text of level three of quoted text. -sbfc num specifies the number num of the color of text of signature block text. -sbbc num specifies the number num of the color of the background of signature block text. Bugs The manner in which lines longer than the display width are dealt with is not immediately obvious. Lines that continue beyond the edge of the display are indicated by a '$' character at the end of the line. Long lines are scrolled horizontally as the cursor moves through them. Files pico.save Unnamed interrupted work saved here. *.save Interrupted work on a named file is saved here. Authors Michael Seibel <mikes@cac.washington.edu> Laurence Lundblade <lgl@cac.washington.edu> Pico was originally derived from MicroEmacs 3.6, by Dave G. Conroy. Copyright 1989-2008 by the University of Washington.
See Also alpine(1) Source distribution (part of the Alpine Message System): $Date: 2009-02-02 13:54:23 -0600 (Mon, 02 Feb 2009) $ Version 5.09 pico(1)
| null |
IOMFB_FDR_Loader
| null | null | null | null | null |
shasum
|
Running shasum is often the quickest way to compute SHA message digests. The user simply feeds data to the script through files or standard input, and then collects the results from standard output. The following command shows how to compute digests for typical inputs such as the NIST test vector "abc": perl -e "print qq(abc)" | shasum Or, if you want to use SHA-256 instead of the default SHA-1, simply say: perl -e "print qq(abc)" | shasum -a 256 Since shasum mimics the behavior of the combined GNU sha1sum, sha224sum, sha256sum, sha384sum, and sha512sum programs, you can install this script as a convenient drop-in replacement. Unlike the GNU programs, shasum encompasses the full SHA standard by allowing partial-byte inputs. This is accomplished through the BITS option (-0). The following example computes the SHA-224 digest of the 7-bit message 0001100: perl -e "print qq(0001100)" | shasum -0 -a 224 AUTHOR Copyright (C) 2003-2023 Mark Shelor <mshelor@cpan.org>. SEE ALSO shasum is implemented using the Perl module Digest::SHA. perl v5.38.2 2023-11-28 SHASUM(1)
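Because shasum implements the standard FIPS 180-4 algorithms, its output for the "abc" test vector above can be cross-checked against any other implementation, for example Python's hashlib:

```python
import hashlib

msg = b"abc"  # NIST test vector, as piped to shasum above
print(hashlib.sha1(msg).hexdigest())
# a9993e364706816aba3e25717850c26c9cd0d89d
print(hashlib.sha256(msg).hexdigest())
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

The first line matches `shasum`'s default output, the second matches `shasum -a 256`.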
|
shasum - Print or Check SHA Checksums
|
Usage: shasum [OPTION]... [FILE]... Print or check SHA checksums. With no FILE, or when FILE is -, read standard input. -a, --algorithm 1 (default), 224, 256, 384, 512, 512224, 512256 -b, --binary read in binary mode -c, --check read SHA sums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -U, --UNIVERSAL read in Universal Newlines mode produces same digest on Windows/Unix/Mac -0, --01 read in BITS mode ASCII '0' interpreted as 0-bit, ASCII '1' interpreted as 1-bit, all other characters ignored The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files -q, --quiet don't print OK for each successfully verified file -s, --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines -h, --help display this help and exit -v, --version output version information and exit When verifying SHA-512/224 or SHA-512/256 checksums, indicate the algorithm explicitly using the -a option, e.g. shasum -a 512224 -c checksumfile The sums are computed as described in FIPS PUB 180-4. When checking, the input should be a former output of this program. The default mode is to print a line with checksum, a character indicating type (`*' for binary, ` ' for text, `U' for UNIVERSAL, `^' for BITS), and name for each FILE. The line starts with a `\' character if the FILE name contains either newlines or backslashes, which are then replaced by the two-character sequences `\n' and `\\' respectively. Report shasum bugs to mshelor@cpan.org
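The escaping rule for filenames containing newlines or backslashes can be sketched as follows (illustrative Python, not shasum's Perl source):

```python
def checksum_line(digest: str, filename: str, mode: str = " ") -> str:
    """Format one output line: digest, type character, name.

    mode is '*' (binary), ' ' (text), 'U' (UNIVERSAL), or '^' (BITS).
    Names containing newlines or backslashes are escaped, and the
    whole line is then prefixed with a backslash.
    """
    escaped = filename.replace("\\", "\\\\").replace("\n", "\\n")
    prefix = "\\" if escaped != filename else ""
    return f"{prefix}{digest} {mode}{escaped}"

print(checksum_line("a9993e36...", "plain.txt"))        # text-mode line
print(checksum_line("a9993e36...", "odd\nname", "*"))   # escaped, binary
```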
| null | null |
comm
|
The comm utility reads file1 and file2, which should be sorted lexically, and produces three text columns as output: lines only in file1; lines only in file2; and lines in both files. The filename ``-'' means the standard input. The following options are available: -1 Suppress printing of column 1, lines only in file1. -2 Suppress printing of column 2, lines only in file2. -3 Suppress printing of column 3, lines common to both. -i Case insensitive comparison of lines. Each column will have a number of tab characters prepended to it equal to the number of lower numbered columns that are being printed. For example, if column number two is being suppressed, lines printed in column number one will not have any tabs preceding them, and lines printed in column number three will have one. The comm utility assumes that the files are lexically sorted; all characters participate in line comparisons. ENVIRONMENT The LANG, LC_ALL, LC_COLLATE, and LC_CTYPE environment variables affect the execution of comm as described in environ(7). EXIT STATUS The comm utility exits 0 on success, and >0 if an error occurs.
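The three-column merge and the tab-prefix rule described above can be sketched in Python (an illustration of the algorithm, not the utility's C source):

```python
def comm_columns(lines1, lines2, suppress=()):
    """Merge two sorted line sequences into comm's three columns.

    suppress is a subset of {1, 2, 3}, mirroring the -1/-2/-3 options.
    """
    cols = [c for c in (1, 2, 3) if c not in suppress]
    out, i, j = [], 0, 0
    while i < len(lines1) or j < len(lines2):
        a = lines1[i] if i < len(lines1) else None
        b = lines2[j] if j < len(lines2) else None
        if b is None or (a is not None and a < b):
            col, line, i = 1, a, i + 1          # only in file1
        elif a is None or b < a:
            col, line, j = 2, b, j + 1          # only in file2
        else:
            col, line, i, j = 3, a, i + 1, j + 1  # in both
        if col in cols:
            # one tab per lower-numbered column still being printed
            out.append("\t" * cols.index(col) + line)
    return out

print(comm_columns(["a", "b", "c", "d"], ["b", "d"]))
```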
|
comm – select or reject lines common to two files
|
comm [-123i] file1 file2
| null |
Assuming a file named example.txt with the following contents: a b c d Show lines only in example.txt, lines only in stdin and common lines: $ echo -e "B\nc" | comm example.txt - B a b c d Show only common lines doing case insensitive comparisons: $ echo -e "B\nc" | comm -1 -2 -i example.txt - b c SEE ALSO cmp(1), diff(1), sort(1), uniq(1) STANDARDS The comm utility conforms to IEEE Std 1003.2-1992 (“POSIX.2”). The -i option is an extension to the POSIX standard. HISTORY A comm command appeared in Version 4 AT&T UNIX. macOS 14.5 July 27, 2020 macOS 14.5
|
mpsgraphtool
| null | null | null | null | null |
resolveLinks
| null |
resolveLinks – Resolves link requests in collections of HTML files
|
resolveLinks [-a] [-b basepath] [-d debugflags] [-D] [-h] [-i installpath] [-n] [-N] [-P] [-r refanchorprefix] [-s seedfile] [-S seedfilebasepath] [-t nthreads] [-x xreffile] directory
|
The available options are as follows: -a Treat external (seeded) paths as absolute. If passed after a -s argument, modifies that argument. If passed before any -s arguments, modifies all seed files. This may be specified multiple times. -b Base path. Paths in -x output file are generated relative to this path. If this ends with a trailing /, resolveLinks assumes that the contents of the specified directory will be installed in the location specified by the -i flag. Otherwise, it is assumed that the directory itself will be installed there. (The cp command behaves similarly.) If unspecified, defaults to /. -d Sets debug flags (bitmask). -D Disables dot printing (for cleaner debugging). -h Prints a usage summary. -i Location where this directory will eventually be installed. Used for generating relative paths for external (seeded) paths. If unspecified, the value of the -b flag is used. -n Disables all file writes (except for the seed output file). -N Disables name-based matching (normally used for unknown symbol names in user-entered link requests). Disabling this matching can provide a performance gain for large doc trees. -P Disables partial matching. Disabling this matching can provide a performance gain for large doc trees. -r Additional reference anchor prefix. The default, "apple_ref", is always active; if you use something else, add a -r option. This may be specified multiple times. -s A seed file generated by the -x option from a previous run of the tool. Used to add additional external cross references from other folders. This may be passed multiple times. -S A base path prepended to the immediate previous seed file for path purposes. This may be passed multiple times (once per seed file). -t The number of threads to use during resolution. Default is 2. -x An output cross-reference file. You can pass this file to future runs of the tool by using the -s flag. directory the directory to be processed. 
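The interaction of -b and -i amounts to path rewriting: a target under the base path is re-expressed relative to the eventual install location. A rough Python sketch of that idea (the function name and details are assumptions for illustration, not the tool's actual algorithm):

```python
import posixpath

def external_path(target: str, basepath: str, installpath: str) -> str:
    """Rewrite a link target under basepath (-b) to where it will
    live once the tree is installed at installpath (-i)."""
    rel = posixpath.relpath(target, basepath)
    return posixpath.join(installpath, rel)

print(external_path("/build/docs/api/index.html",
                    "/build/docs", "/usr/share/doc/proj"))
# /usr/share/doc/proj/api/index.html
```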
ENVIRONMENT This program is a helper tool that is usually run by the gatherHeaderDoc tool to generate links within HeaderDoc output. Although it is usually not run directly, it can be useful to do so when linking together multiple documentation sets in an installed set. SEE ALSO For more information on gatherHeaderDoc and HeaderDoc, see headerdoc2html(1) and gatherheaderdoc(1). Darwin May 27, 2010 Darwin
| null |
xcdebug
| null | null | null | null | null |
dbicadmin5.30
| null |
dbicadmin - utility for administrating DBIx::Class schemata
|
dbicadmin: [-I] [long options...] deploy a schema to a database dbicadmin --schema=MyApp::Schema \ --connect='["dbi:SQLite:my.db", "", ""]' \ --deploy update an existing record dbicadmin --schema=MyApp::Schema --class=Employee \ --connect='["dbi:SQLite:my.db", "", ""]' \ --op=update --set='{ "name": "New_Employee" }'
|
Actions --create Create version diffs (needs --preversion) --upgrade Upgrade the database to the current schema --install Install the schema version tables to an existing database --deploy Deploy the schema to the database --select Select data from the schema --insert Insert data into the schema --update Update data in the schema --delete Delete data from the schema --op compatibility option all of the above can be supplied as --op=<action> --help display this help Arguments --config-file or --config Supply the config file for parsing by Config::Any --connect-info Supply the connect info as trailing options e.g. --connect-info dsn=<dsn> user=<user> password=<pass> --connect Supply the connect info as a JSON-encoded structure, e.g. --connect='["dsn","user","pass"]' --schema-class The class of the schema to load --config-stanza Where in the config to find the connection_info, supply in form MyApp::Model::DB --resultset or --resultset-class or --class The resultset to operate on for data manipulation --sql-dir The directory where sql diffs will be created --sql-type The RDBMs flavour you wish to use --version Supply a version install --preversion The previous version to diff against --set JSON data used to perform data operations --attrs JSON string to be used for the second argument for search --where JSON string to be used for the where clause of search --force Be forceful with some operations --trace Turn on DBIx::Class trace output --quiet Be less verbose -I Same as perl's -I, prepended to current @INC AUTHORS See "AUTHORS" in DBIx::Class LICENSE You may distribute this code under the same terms as Perl itself perl v5.30.3 2018-01-29 DBICADMIN(1)
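The --connect argument is ordinary JSON whose three elements correspond to DBI's dsn, user, and password (matching the SQLite example in the synopsis). Decoding it in Python just to show the shape:

```python
import json

connect = '["dbi:SQLite:my.db", "", ""]'   # as passed to --connect
dsn, user, password = json.loads(connect)  # positional: dsn, user, pass
print(dsn)  # dbi:SQLite:my.db
```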
| null |
lpq
|
lpq shows the current print queue status on the named printer. Jobs queued on the default destination will be shown if no printer or class is specified on the command-line. The +interval option allows you to continuously report the jobs in the queue until the queue is empty; the list of jobs is shown once every interval seconds.
|
lpq - show printer queue status
|
lpq [ -E ] [ -U username ] [ -h server[:port] ] [ -P destination[/instance] ] [ -a ] [ -l ] [ +interval ]
|
lpq supports the following options: -E Forces encryption when connecting to the server. -P destination[/instance] Specifies an alternate printer or class name. -U username Specifies an alternate username. -a Reports jobs on all printers. -h server[:port] Specifies an alternate server. -l Requests a more verbose (long) reporting format. SEE ALSO cancel(1), lp(1), lpr(1), lprm(1), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS lpq(1)
| null |
java
|
The java command starts a Java application. It does this by starting the Java Virtual Machine (JVM), loading the specified class, and calling that class's main() method. The method must be declared public and static, it must not return any value, and it must accept a String array as a parameter. The method declaration has the following form: public static void main(String[] args) In source-file mode, the java command can launch a class declared in a source file. See Using Source-File Mode to Launch Source-Code Programs for a description of using the source-file mode. Note: You can use the JDK_JAVA_OPTIONS launcher environment variable to prepend its content to the actual command line of the java launcher. See Using the JDK_JAVA_OPTIONS Launcher Environment Variable. By default, the first argument that isn't an option of the java command is the fully qualified name of the class to be called. If -jar is specified, then its argument is the name of the JAR file containing class and resource files for the application. The startup class must be indicated by the Main-Class manifest header in its manifest file. Arguments after the class file name or the JAR file name are passed to the main() method. javaw Windows: The javaw command is identical to java, except that with javaw there's no associated console window. Use javaw when you don't want a command prompt window to appear. The javaw launcher will, however, display a dialog box with error information if a launch fails. USING SOURCE-FILE MODE TO LAUNCH SOURCE-CODE PROGRAMS To launch a class declared in a source file, run the java launcher in source-file mode. Entering source-file mode is determined by two items on the java command line: • The first item on the command line that is not an option or part of an option. In other words, the item in the command line that would otherwise be the main class name. • The --source version option, if present.
If the class identifies an existing file that has a .java extension, or if the --source option is specified, then source-file mode is selected. The source file is then compiled and run. The --source option can be used to specify the source version N of the source code. This determines the API that can be used. When you set --source N, you can only use the public API that was defined in JDK N. Note: The valid values of N change for each release, with new values added and old values removed. You'll get an error message if you use a value of N that is no longer supported. The supported values of N are the current Java SE release (22) and a limited number of previous releases, detailed in the command-line help for javac, under the --source and --release options. If the file does not have the .java extension, the --source option must be used to tell the java command to use the source-file mode. The --source option is used for cases when the source file is a "script" to be executed and the name of the source file does not follow the normal naming conventions for Java source files. In source-file mode, the effect is as though the source file is compiled into memory, and the first class found in the source file is executed. Any arguments placed after the name of the source file in the original command line are passed to the compiled class when it is executed. For example, if a file were named HelloWorld.java and contained a class named HelloWorld, then the source-file mode command to launch the class would be: java HelloWorld.java This use of source-file mode is informally equivalent to using the following two commands: javac -d <memory> --source-path <source-root> HelloWorld.java java --class-path <memory> HelloWorld where <source-root> is computed from the package of the class being launched. In source-file mode, any additional command-line options are processed as follows: • The launcher scans the options specified before the source file for any that are relevant in order to compile the source file.
This includes: --class-path, --module-path, --add-exports, --add-modules, --limit-modules, --patch-module, --upgrade-module-path, and any variant forms of those options. It also includes the --enable-preview option, described in JEP 12.

• No provision is made to pass any additional options to the compiler, such as -processor or -Werror.

• Command-line argument files (@-files) may be used in the standard way. Long lists of arguments for either the VM or the program being invoked may be placed in files specified on the command line by prefixing the filename with an @ character.

In source-file mode, compilation proceeds as follows:

• Any command-line options that are relevant to the compilation environment are taken into account. These include: --class-path/-classpath/-cp, --module-path/-p, --add-exports, --add-modules, --limit-modules, --patch-module, --upgrade-module-path, --enable-preview.

• The root of the source tree, <source-root>, is computed from the package of the class being launched. For example, if HelloWorld.java declared its classes to be in the hello package, then the file HelloWorld.java is expected to reside in the directory somedir/hello/. In this case, somedir is computed to be the root of the source tree.

• The root of the source tree serves as the source-path for compilation, so that other source files found in that tree that are needed by HelloWorld can be compiled.

• Annotation processing is disabled, as if -proc:none were in effect.

• If a version is specified, via the --source option, the value is used as the argument for an implicit --release option for the compilation. This sets both the source version accepted by the compiler and the system API that may be used by the code in the source file.

• If a module-info.java file exists in the <source-root> directory, its module declaration is used to define a named module that will contain all the classes compiled from .java files in the source tree.
If module-info.java does not exist, all the classes compiled from source files will be compiled in the context of the unnamed module.

• The source file that is launched should contain one or more top-level classes, the first of which is taken as the class to be executed.

• For the source file that is launched, the compiler does not enforce the optional restriction defined at the end of JLS 7.6, that a type in a named package should exist in a file whose name is composed from the type name followed by the .java extension.

• If a source file contains errors, appropriate error messages are written to the standard error stream, and the launcher exits with a non-zero exit code.

In source-file mode, execution proceeds as follows:

• The class to be executed is the first top-level class found in the source file. It must contain a declaration of an entry-point main method.

• The compiled classes are loaded by a custom class loader that delegates to the application class loader. This implies that classes appearing on the application class path cannot refer to any classes declared in source files.

• If a module-info.java file exists in the <source-root> directory, then all the classes compiled from .java files in the source tree will be in that module, which will serve as the root module for the execution of the program. If module-info.java does not exist, the compiled classes are executed in the context of an unnamed module, as though --add-modules=ALL-DEFAULT were in effect. This is in addition to any other --add-modules options that may have been specified on the command line.

• Any arguments appearing after the name of the file on the command line are passed to the main method in the obvious way.

• It is an error if there is a class on the application class path whose name is the same as that of the class to be executed.

See JEP 458: Launch Multi-File Source-Code Programs [https://openjdk.org/jeps/458] for complete details.
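As an illustration of the first rule above, a single source file may declare several top-level classes; in source-file mode the first one declared is the class that gets launched (all names below are hypothetical):

```java
// Launcher.java: run with `java Launcher.java` in source-file mode.
// The first top-level class, Launcher, is the one executed.
class Launcher {
    public static void main(String[] args) {
        System.out.println(run());
    }

    // Separated out so the behavior is easy to verify.
    static int run() {
        return Helper.twice(21);
    }
}

// A second top-level class in the same file; it is compiled together
// with Launcher but is not the class that the launcher executes.
class Helper {
    static int twice(int n) {
        return n * 2;
    }
}
```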
USING THE JDK_JAVA_OPTIONS LAUNCHER ENVIRONMENT VARIABLE

JDK_JAVA_OPTIONS prepends its content to the options parsed from the command line. The content of the JDK_JAVA_OPTIONS environment variable is a list of arguments separated by white-space characters (as determined by isspace()). These are prepended to the command-line arguments passed to the java launcher. The encoding requirement for the environment variable is the same as for the java command line on the system. The content of JDK_JAVA_OPTIONS is treated in the same manner as options specified on the command line.

Single (') or double (") quotes can be used to enclose arguments that contain whitespace characters. All content between the open quote and the first matching close quote is preserved by simply removing the pair of quotes. If a matching quote is not found, the launcher will abort with an error message. @-files are supported as they are specified in the command line. However, as in @-files, use of a wildcard is not supported.

In order to mitigate potential misuse of JDK_JAVA_OPTIONS behavior, options that specify the main class (such as -jar) or cause the java launcher to exit without executing the main class (such as -h) are disallowed in the environment variable. If any of these options appear in the environment variable, the launcher will abort with an error message. When JDK_JAVA_OPTIONS is set, the launcher prints a message to stderr as a reminder.

Example:

$ export JDK_JAVA_OPTIONS='-g @file1 -Dprop=value @file2 -Dws.prop="white spaces"'
$ java -Xint @file3

is equivalent to the command line:

java -g @file1 -Dprop=value @file2 -Dws.prop="white spaces" -Xint @file3

OVERVIEW OF JAVA OPTIONS

The java command supports a wide range of options in the following categories:

• Standard Options for Java: Options guaranteed to be supported by all implementations of the Java Virtual Machine (JVM).
They're used for common actions, such as checking the version of the JRE, setting the class path, enabling verbose output, and so on.

• Extra Options for Java: General purpose options that are specific to the Java HotSpot Virtual Machine. They aren't guaranteed to be supported by all JVM implementations, and are subject to change. These options start with -X.

The advanced options aren't recommended for casual use. These are developer options used for tuning specific areas of the Java HotSpot Virtual Machine operation that often have specific system requirements and may require privileged access to system configuration parameters. Several examples of performance tuning are provided in Performance Tuning Examples. These options aren't guaranteed to be supported by all JVM implementations and are subject to change. Advanced options start with -XX.

• Advanced Runtime Options for Java: Control the runtime behavior of the Java HotSpot VM.

• Advanced JIT Compiler Options for java: Control the dynamic just-in-time (JIT) compilation performed by the Java HotSpot VM.

• Advanced Serviceability Options for Java: Enable gathering system information and performing extensive debugging.

• Advanced Garbage Collection Options for Java: Control how garbage collection (GC) is performed by the Java HotSpot VM.

Boolean options are used to either enable a feature that's disabled by default or disable a feature that's enabled by default. Such options don't require a parameter. Boolean -XX options are enabled using the plus sign (-XX:+OptionName) and disabled using the minus sign (-XX:-OptionName).

For options that require an argument, the argument may be separated from the option name by a space, a colon (:), or an equal sign (=), or the argument may directly follow the option (the exact syntax differs for each option).
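The -XX:+Name / -XX:-Name convention can be made concrete with a small parser sketch (the class and method names are hypothetical, not part of the JDK):

```java
import java.util.HashMap;
import java.util.Map;

class BooleanFlags {
    // Map flags like -XX:+UseG1GC / -XX:-CompactStrings to name -> enabled.
    static Map<String, Boolean> parse(String... flags) {
        Map<String, Boolean> parsed = new HashMap<>();
        for (String flag : flags) {
            if (flag.startsWith("-XX:+")) {
                parsed.put(flag.substring(5), Boolean.TRUE);   // plus sign enables
            } else if (flag.startsWith("-XX:-")) {
                parsed.put(flag.substring(5), Boolean.FALSE);  // minus sign disables
            }
        }
        return parsed;
    }
}
```

For example, parse("-XX:+UseG1GC", "-XX:-CompactStrings") yields UseG1GC enabled and CompactStrings disabled, mirroring how the JVM interprets these prefixes.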
If you're expected to specify the size in bytes, then you can use no suffix, or use the suffix k or K for kilobytes (KB), m or M for megabytes (MB), or g or G for gigabytes (GB). For example, to set the size to 8 GB, you can specify either 8g, 8192m, 8388608k, or 8589934592 as the argument. If you are expected to specify the percentage, then use a number from 0 to 1. For example, specify 0.25 for 25%.

The following sections describe the options that are deprecated, obsolete, and removed:

• Deprecated Java Options: Accepted and acted upon --- a warning is issued when they're used.

• Obsolete Java Options: Accepted but ignored --- a warning is issued when they're used.

• Removed Java Options: Removed --- using them results in an error.

STANDARD OPTIONS FOR JAVA

These are the most commonly used options supported by all implementations of the JVM.

Note: To specify an argument for a long option, you can use either --name=value or --name value.

-agentlib:libname[=options]

Loads the specified native agent library. After the library name, a comma-separated list of options specific to the library can be used. If the option -agentlib:foo is specified, then the JVM attempts to load the library named foo using the platform-specific naming conventions and locations:

• Linux and other POSIX-like platforms: The JVM attempts to load the library named libfoo.so in the location specified by the LD_LIBRARY_PATH system variable.

• macOS: The JVM attempts to load the library named libfoo.dylib in the location specified by the DYLD_LIBRARY_PATH system variable.

• Windows: The JVM attempts to load the library named foo.dll in the location specified by the PATH system variable.
The following example shows how to load the Java Debug Wire Protocol (JDWP) library and listen for the socket connection on port 8000, suspending the JVM before the main class loads:

-agentlib:jdwp=transport=dt_socket,server=y,address=8000

-agentpath:pathname[=options]

Loads the native agent library specified by the absolute path name. This option is equivalent to -agentlib but uses the full path and file name of the library.

--class-path classpath, -classpath classpath, or -cp classpath

Specifies a list of directories, JAR files, and ZIP archives to search for class files. On Windows, semicolons (;) separate entities in this list; on other platforms it is a colon (:). Specifying classpath overrides any setting of the CLASSPATH environment variable. If the class path option isn't used and classpath isn't set, then the user class path consists of the current directory (.).

As a special convenience, a class path element that contains a base name of an asterisk (*) is considered equivalent to specifying a list of all the files in the directory with the extension .jar or .JAR. A Java program can't tell the difference between the two invocations. For example, if the directory mydir contains a.jar and b.JAR, then the class path element mydir/* is expanded to a.jar:b.JAR, except that the order of JAR files is unspecified. All .jar files in the specified directory, even hidden ones, are included in the list. A class path entry consisting of an asterisk (*) expands to a list of all the jar files in the current directory. The CLASSPATH environment variable, where defined, is similarly expanded. Class path wildcard expansion occurs before the Java VM is started. Java programs never see wildcards that aren't expanded, except by querying the environment, such as by calling System.getenv("CLASSPATH").

--disable-@files

Can be used anywhere on the command line, including in an argument file, to prevent further @filename expansion.
This option stops expanding @-argfiles after the option.

--enable-preview

Allows classes to depend on preview features [https://docs.oracle.com/en/java/javase/12/language/index.html#JSLAN-GUID-5A82FE0E-0CA4-4F1F-B075-564874FE2823] of the release.

--enable-native-access module[,module...]

Native access involves access to code or data outside the Java runtime. This is generally unsafe and, if done incorrectly, might crash the JVM or result in memory corruption. Methods that provide native access are restricted, and by default their use causes warnings. This option allows code in the specified modules to use restricted methods without warnings. module can be ALL-UNNAMED to indicate code on the class path. When this option is present, any use of restricted methods by code outside the specified modules causes an IllegalCallerException.

--finalization=value

Controls whether the JVM performs finalization of objects. Valid values are "enabled" and "disabled". Finalization is enabled by default, so the value "enabled" does nothing. The value "disabled" disables finalization, so that no finalizers are invoked.

--module-path modulepath... or -p modulepath

Specifies where to find application modules with a list of path elements. The elements of a module path can be a file path to a module or a directory containing modules. Each module is either a modular JAR or an exploded-module directory. On Windows, semicolons (;) separate path elements in this list; on other platforms it is a colon (:).

--upgrade-module-path modulepath...

Specifies where to find module replacements of upgradeable modules in the runtime image with a list of path elements. The elements of a module path can be a file path to a module or a directory containing modules. Each module is either a modular JAR or an exploded-module directory. On Windows, semicolons (;) separate path elements in this list; on other platforms it is a colon (:).

--add-modules module[,module...]
Specifies the root modules to resolve in addition to the initial module. module can also be ALL-DEFAULT, ALL-SYSTEM, and ALL-MODULE-PATH.

--list-modules

Lists the observable modules and then exits.

-d module_name or --describe-module module_name

Describes a specified module and then exits.

--dry-run

Creates the VM but doesn't execute the main method. The --dry-run option might be useful for validating command-line options, such as the module system configuration.

--validate-modules

Validates all modules and exits. This option is helpful for finding conflicts and other errors with modules on the module path.

-Dproperty=value

Sets a system property value. The property variable is a string with no spaces that represents the name of the property. The value variable is a string that represents the value of the property. If value is a string with spaces, then enclose it in quotation marks (for example -Dfoo="foo bar").

-disableassertions[:[packagename]...|:classname] or -da[:[packagename]...|:classname]

Disables assertions. By default, assertions are disabled in all packages and classes. With no arguments, -disableassertions (-da) disables assertions in all packages and classes. With the packagename argument ending in ..., the switch disables assertions in the specified package and any subpackages. If the argument is simply ..., then the switch disables assertions in the unnamed package in the current working directory. With the classname argument, the switch disables assertions in the specified class.

The -disableassertions (-da) option applies to all class loaders and to system classes (which don't have a class loader). There's one exception to this rule: If the option is provided with no arguments, then it doesn't apply to system classes. This makes it easy to disable assertions in all classes except for system classes. The -disablesystemassertions option enables you to disable assertions in all system classes.
To explicitly enable assertions in specific packages or classes, use the -enableassertions (-ea) option. Both options can be used at the same time. For example, to run the MyClass application with assertions enabled in the package com.wombat.fruitbat (and any subpackages) but disabled in the class com.wombat.fruitbat.Brickbat, use the following command:

java -ea:com.wombat.fruitbat... -da:com.wombat.fruitbat.Brickbat MyClass

-disablesystemassertions or -dsa

Disables assertions in all system classes.

-enableassertions[:[packagename]...|:classname] or -ea[:[packagename]...|:classname]

Enables assertions. By default, assertions are disabled in all packages and classes. With no arguments, -enableassertions (-ea) enables assertions in all packages and classes. With the packagename argument ending in ..., the switch enables assertions in the specified package and any subpackages. If the argument is simply ..., then the switch enables assertions in the unnamed package in the current working directory. With the classname argument, the switch enables assertions in the specified class.

The -enableassertions (-ea) option applies to all class loaders and to system classes (which don't have a class loader). There's one exception to this rule: If the option is provided with no arguments, then it doesn't apply to system classes. This makes it easy to enable assertions in all classes except for system classes. The -enablesystemassertions option provides a separate switch to enable assertions in all system classes. To explicitly disable assertions in specific packages or classes, use the -disableassertions (-da) option. If a single command contains multiple instances of these switches, then they're processed in order, before loading any classes.
For example, to run the MyClass application with assertions enabled only in the package com.wombat.fruitbat (and any subpackages) but disabled in the class com.wombat.fruitbat.Brickbat, use the following command:

java -ea:com.wombat.fruitbat... -da:com.wombat.fruitbat.Brickbat MyClass

-enablesystemassertions or -esa

Enables assertions in all system classes.

-help, -h, or -?

Prints the help message to the error stream.

--help

Prints the help message to the output stream.

-javaagent:jarpath[=options]

Loads the specified Java programming language agent. See java.lang.instrument.

--show-version

Prints the product version to the output stream and continues.

-showversion

Prints the product version to the error stream and continues.

--show-module-resolution

Shows module resolution output during startup.

-splash:imagepath

Shows the splash screen with the image specified by imagepath. HiDPI scaled images are automatically supported and used if available. The unscaled image file name, such as image.ext, should always be passed as the argument to the -splash option. The most appropriate scaled image provided is picked up automatically. For example, to show the splash.gif file from the images directory when starting your application, use the following option:

-splash:images/splash.gif

See the SplashScreen API documentation for more information.

-verbose:class

Displays information about each loaded class.

-verbose:gc

Displays information about each garbage collection (GC) event.

-verbose:jni

Displays information about the use of native methods and other Java Native Interface (JNI) activity.

-verbose:module

Displays information about the modules in use.

--version

Prints product version to the output stream and exits.

-version

Prints product version to the error stream and exits.

-X

Prints the help on extra options to the error stream.

--help-extra

Prints the help on extra options to the output stream.
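A well-known idiom makes the effect of -ea and -da visible from inside a program: an assignment placed inside an assert statement runs only when assertions are enabled. A minimal sketch (class and method names are illustrative):

```java
public class AssertDemo {
    // The assignment inside `assert` executes only when assertions are
    // enabled for this class, so `enabled` reveals the current state.
    static boolean assertionsEnabled() {
        boolean enabled = false;
        assert enabled = true;
        return enabled;
    }

    public static void main(String[] args) {
        // Prints true under `java -ea AssertDemo`, false under plain `java AssertDemo`.
        System.out.println("assertions enabled: " + assertionsEnabled());
    }
}
```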
@argfile

Specifies one or more argument files prefixed by @ used by the java command. It isn't uncommon for the java command line to be very long because of the .jar files needed in the classpath. The @argfile option overcomes command-line length limitations by enabling the launcher to expand the contents of argument files after shell expansion, but before argument processing. The contents of the argument files are expanded as if they had been specified on the command line, until the --disable-@files option is encountered.

The argument files can also contain the main class name and all options. If an argument file contains all of the options required by the java command, then the command line could simply be:

java @argfile

See java Command-Line Argument Files for a description and examples of using @-argfiles.

EXTRA OPTIONS FOR JAVA

The following java options are general purpose options that are specific to the Java HotSpot Virtual Machine.

-Xbatch

Disables background compilation. By default, the JVM compiles the method as a background task, running the method in interpreter mode until the background compilation is finished. The -Xbatch flag disables background compilation so that compilation of all methods proceeds as a foreground task until completed. This option is equivalent to -XX:-BackgroundCompilation.

-Xbootclasspath/a:directories|zip|JAR-files

Specifies a list of directories, JAR files, and ZIP archives to append to the end of the default bootstrap class path. On Windows, semicolons (;) separate entities in this list; on other platforms it is a colon (:).

-Xcheck:jni

Performs additional checks for Java Native Interface (JNI) functions. The following checks are considered indicative of significant problems with the native code, and the JVM terminates with an irrecoverable error in such cases:

• The thread doing the call is not attached to the JVM.

• The thread doing the call is using the JNIEnv belonging to another thread.
• A parameter validation check fails:

  • A jfieldID, or jmethodID, is detected as being invalid. For example:

    • Of the wrong type

    • Associated with the wrong class

  • A parameter of the wrong type is detected.

  • An invalid parameter value is detected. For example:

    • NULL where not permitted

    • An out-of-bounds array index, or frame capacity

    • A non-UTF-8 string

    • An invalid JNI reference

    • An attempt to use a ReleaseXXX function on a parameter not produced by the corresponding GetXXX function

The following checks only result in warnings being printed:

• A JNI call was made without checking for a pending exception from a previous JNI call, and the current call is not safe when an exception may be pending.

• A class descriptor is in decorated format (Lname;) when it should not be.

• A NULL parameter is allowed, but its use is questionable.

• Calling other JNI functions in the scope of Get/ReleasePrimitiveArrayCritical or Get/ReleaseStringCritical.

Expect a performance degradation when this option is used.

-Xcomp

Testing mode to exercise JIT compilers. This option should not be used in production environments.

-Xdebug

Does nothing; deprecated for removal in a future release.

-Xdiag

Shows additional diagnostic messages.

-Xint

Runs the application in interpreted-only mode. Compilation to native code is disabled, and all bytecode is executed by the interpreter. The performance benefits offered by the just-in-time (JIT) compiler aren't present in this mode.

-Xinternalversion

Displays more detailed JVM version information than the -version option, and then exits.

-Xlog:option

Configures or enables logging with the Java Virtual Machine (JVM) unified logging framework. See Enable Logging with the JVM Unified Logging Framework.

-Xmixed

Executes all bytecode by the interpreter except for hot methods, which are compiled to native code. On by default. Use -Xint to switch off.
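The difference between -Xint and the default mixed mode is easiest to observe with a compute-heavy method; a minimal timing sketch (the class and method names are illustrative):

```java
public class HotLoop {
    // A hot method: in mixed mode the JIT compiles this to native code,
    // while under -Xint it stays interpreted and runs much slower.
    static long sum(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sum(100_000_000L);
        long millis = (System.nanoTime() - start) / 1_000_000;
        // Compare the elapsed time of `java HotLoop` vs `java -Xint HotLoop`.
        System.out.println("sum = " + result + " (" + millis + " ms)");
    }
}
```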
-Xmn size

Sets the initial and maximum size (in bytes) of the heap for the young generation (nursery) in the generational collectors. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The young generation region of the heap is used for new objects. GC is performed in this region more often than in other regions. If the size for the young generation is too small, then a lot of minor garbage collections are performed. If the size is too large, then only full garbage collections are performed, which can take a long time to complete. It is recommended that you do not set the size of the young generation for the G1 collector, and that for other collectors you keep the size of the young generation greater than 25% and less than 50% of the overall heap size. The following examples show how to set the initial and maximum size of the young generation to 256 MB using various units:

-Xmn256m
-Xmn262144k
-Xmn268435456

Instead of the -Xmn option to set both the initial and maximum size of the heap for the young generation, you can use -XX:NewSize to set the initial size and -XX:MaxNewSize to set the maximum size.

-Xms size

Sets the minimum and the initial size (in bytes) of the heap. This value must be a multiple of 1024 and greater than 1 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The following examples show how to set the size of allocated memory to 6 MB using various units:

-Xms6291456
-Xms6144k
-Xms6m

If you do not set this option, then the initial size will be set as the sum of the sizes allocated for the old generation and the young generation. The initial size of the heap for the young generation can be set using the -Xmn option or the -XX:NewSize option.

Note that the -XX:InitialHeapSize option can also be used to set the initial heap size.
If it appears after -Xms on the command line, then the initial heap size gets set to the value specified with -XX:InitialHeapSize.

-Xmx size

Specifies the maximum size (in bytes) of the heap. This value must be a multiple of 1024 and greater than 2 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value is chosen at runtime based on system configuration. For server deployments, -Xms and -Xmx are often set to the same value. The following examples show how to set the maximum allowed size of allocated memory to 80 MB using various units:

-Xmx83886080
-Xmx81920k
-Xmx80m

The -Xmx option is equivalent to -XX:MaxHeapSize.

-Xnoclassgc

Disables garbage collection (GC) of classes. This can save some GC time, which shortens interruptions during the application run. When you specify -Xnoclassgc at startup, the class objects in the application are left untouched during GC and are always considered live. This can result in more memory being permanently occupied which, if not used carefully, throws an out-of-memory exception.

-Xrs

Reduces the use of operating system signals by the JVM. Shutdown hooks enable the orderly shutdown of a Java application by running user cleanup code (such as closing database connections) at shutdown, even if the JVM terminates abruptly.

• Non-Windows:

  • The JVM catches signals to implement shutdown hooks for unexpected termination. The JVM uses SIGHUP, SIGINT, and SIGTERM to initiate the running of shutdown hooks.

  • Applications embedding the JVM frequently need to trap signals such as SIGINT or SIGTERM, which can lead to interference with the JVM signal handlers. The -Xrs option is available to address this issue. When -Xrs is used, the signal masks for SIGINT, SIGTERM, SIGHUP, and SIGQUIT aren't changed by the JVM, and signal handlers for these signals aren't installed.
• Windows:

  • The JVM watches for console control events to implement shutdown hooks for unexpected termination. Specifically, the JVM registers a console control handler that begins shutdown-hook processing and returns TRUE for CTRL_C_EVENT, CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, and CTRL_SHUTDOWN_EVENT.

  • The JVM uses a similar mechanism to implement the feature of dumping thread stacks for debugging purposes. The JVM uses CTRL_BREAK_EVENT to perform thread dumps.

  • If the JVM is run as a service (for example, as a servlet engine for a web server), then it can receive CTRL_LOGOFF_EVENT but shouldn't initiate shutdown because the operating system doesn't actually terminate the process. To avoid possible interference such as this, the -Xrs option can be used. When the -Xrs option is used, the JVM doesn't install a console control handler, implying that it doesn't watch for or process CTRL_C_EVENT, CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, or CTRL_SHUTDOWN_EVENT.

There are two consequences of specifying -Xrs:

• Non-Windows: SIGQUIT thread dumps aren't available.

• Windows: Ctrl + Break thread dumps aren't available.

User code is responsible for causing shutdown hooks to run, for example, by calling System.exit() when the JVM is to be terminated.

-Xshare:mode

Sets the class data sharing (CDS) mode. Possible mode arguments for this option include the following:

auto    Use shared class data if possible (default).

on      Require using shared class data, otherwise fail.

Note: The -Xshare:on option is used for testing purposes only. It may cause the VM to unexpectedly exit during start-up when the CDS archive cannot be used (for example, when certain VM parameters are changed, or when a different JDK is used). This option should not be used in production environments.

off     Do not attempt to use shared class data.

-XshowSettings

Shows all settings and then continues.

-XshowSettings:category

Shows settings and continues.
Possible category arguments for this option include the following:

all         Shows all categories of settings. This is the default value.

locale      Shows settings related to locale.

properties  Shows settings related to system properties.

vm          Shows the settings of the JVM.

system      Linux only: Shows host system or container configuration and continues.

-Xss size

Sets the thread stack size (in bytes). Append the letter k or K to indicate KB, m or M to indicate MB, or g or G to indicate GB. The actual size may be rounded up to a multiple of the system page size as required by the operating system. The default value depends on the platform. For example:

• Linux/x64: 1024 KB

• Linux/Aarch64: 2048 KB

• macOS/x64: 1024 KB

• macOS/Aarch64: 2048 KB

• Windows: The default value depends on virtual memory

The following examples set the thread stack size to 1024 KB in different units:

-Xss1m
-Xss1024k
-Xss1048576

This option is similar to -XX:ThreadStackSize.

--add-reads module=target-module(,target-module)*

Updates module to read the target-module, regardless of the module declaration. target-module can be ALL-UNNAMED to read all unnamed modules.

--add-exports module/package=target-module(,target-module)*

Updates module to export package to target-module, regardless of module declaration. target-module can be ALL-UNNAMED to export to all unnamed modules.

--add-opens module/package=target-module(,target-module)*

Updates module to open package to target-module, regardless of module declaration.

--limit-modules module[,module...]

Specifies the limit of the universe of observable modules.

--patch-module module=file(;file)*

Overrides or augments a module with classes and resources in JAR files or directories.

--source version

Sets the version of the source in source-file mode.

EXTRA OPTIONS FOR MACOS

The following extra options are macOS specific.

-XstartOnFirstThread

Runs the main() method on the first (AppKit) thread.
-Xdock:name=application_name

Overrides the default application name displayed in dock.

-Xdock:icon=path_to_icon_file

Overrides the default icon displayed in dock.

ADVANCED OPTIONS FOR JAVA

These java options can be used to enable other advanced options.

-XX:+UnlockDiagnosticVMOptions

Unlocks the options intended for diagnosing the JVM. By default, this option is disabled and diagnostic options aren't available. Command line options that are enabled with the use of this option are not supported. If you encounter issues while using any of these options, it is very likely that you will be required to reproduce the problem without using any of these unsupported options before Oracle Support can assist with an investigation. It is also possible that any of these options may be removed or their behavior changed without any warning.

-XX:+UnlockExperimentalVMOptions

Unlocks the options that provide experimental features in the JVM. By default, this option is disabled and experimental features aren't available.

ADVANCED RUNTIME OPTIONS FOR JAVA

These java options control the runtime behavior of the Java HotSpot VM.

-XX:ActiveProcessorCount=x

Overrides the number of CPUs that the VM will use to calculate the size of thread pools it will use for various operations such as Garbage Collection and ForkJoinPool. The VM normally determines the number of available processors from the operating system. This flag can be useful for partitioning CPU resources when running multiple Java processes in docker containers. This flag is honored even if UseContainerSupport is not enabled. See -XX:-UseContainerSupport for a description of enabling and disabling container support.

-XX:AllocateHeapAt=path

Takes a path to the file system and uses memory mapping to allocate the object heap on the memory device. Using this option enables the HotSpot VM to allocate the Java object heap on an alternative memory device, such as an NV-DIMM, specified by the user.
Alternative memory devices that have the same semantics as DRAM, including the semantics of atomic operations, can be used instead of DRAM for the object heap without changing the existing application code. All other memory structures (such as the code heap, metaspace, and thread stacks) continue to reside in DRAM. Some operating systems expose non-DRAM memory through the file system. Memory-mapped files in these file systems bypass the page cache and provide a direct mapping of virtual memory to the physical memory on the device. The existing heap related flags (such as -Xmx and -Xms) and garbage-collection related flags continue to work as before. -XX:-CompactStrings Disables the Compact Strings feature. By default, this option is enabled. When this option is enabled, Java Strings containing only single-byte characters are internally represented and stored as single-byte-per-character Strings using ISO-8859-1 / Latin-1 encoding. This reduces, by 50%, the amount of space required for Strings containing only single-byte characters. For Java Strings containing at least one multibyte character: these are represented and stored as 2 bytes per character using UTF-16 encoding. Disabling the Compact Strings feature forces the use of UTF-16 encoding as the internal representation for all Java Strings. Cases where it may be beneficial to disable Compact Strings include the following: β’ When it's known that an application overwhelmingly will be allocating multibyte character Strings β’ In the unexpected event where a performance regression is observed in migrating from Java SE 8 to Java SE 9 and an analysis shows that Compact Strings introduces the regression In both of these scenarios, disabling Compact Strings makes sense. -XX:ErrorFile=filename Specifies the path and file name to which error data is written when an irrecoverable error occurs. 
By default, this file is created in the current working directory and named hs_err_pidpid.log where pid is the identifier of the process that encountered the error. The following example shows how to set the default log file (note that the identifier of the process is specified as %p): -XX:ErrorFile=./hs_err_pid%p.log β’ Non-Windows: The following example shows how to set the error log to /var/log/java/java_error.log: -XX:ErrorFile=/var/log/java/java_error.log β’ Windows: The following example shows how to set the error log file to C:/log/java/java_error.log: -XX:ErrorFile=C:/log/java/java_error.log If the file exists, and is writeable, then it will be overwritten. Otherwise, if the file can't be created in the specified directory (due to insufficient space, permission problem, or another issue), then the file is created in the temporary directory for the operating system: β’ Non-Windows: The temporary directory is /tmp. β’ Windows: The temporary directory is specified by the value of the TMP environment variable; if that environment variable isn't defined, then the value of the TEMP environment variable is used. -XX:+ExtensiveErrorReports Enables the reporting of more extensive error information in the ErrorFile. This option can be turned on in environments where maximal information is desired - even if the resulting logs may be quite large and/or contain information that might be considered sensitive. The information can vary from release to release, and across different platforms. By default this option is disabled. -XX:FlightRecorderOptions=parameter=value (or) -XX:FlightRecorderOptions:parameter=value Sets the parameters that control the behavior of JFR. Multiple parameters can be specified by separating them with a comma. The following list contains the available JFR parameter=value entries: globalbuffersize=size Specifies the total amount of primary memory used for data retention. The default value is based on the value specified for memorysize. 
Change the memorysize parameter to alter the size of global buffers. maxchunksize=size Specifies the maximum size (in bytes) of the data chunks in a recording. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the maximum size of data chunks is set to 12 MB. The minimum allowed is 1 MB. memorysize=size Determines how much buffer memory should be used, and sets the globalbuffersize and numglobalbuffers parameters based on the size specified. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the memory size is set to 10 MB. numglobalbuffers Specifies the number of global buffers used. The default value is based on the memory size specified. Change the memorysize parameter to alter the number of global buffers. old-object-queue-size=number-of-objects Maximum number of old objects to track. By default, the number of objects is set to 256. preserve-repository={true|false} Specifies whether files stored in the disk repository should be kept after the JVM has exited. If false, files are deleted. By default, this parameter is disabled. repository=path Specifies the repository (a directory) for temporary disk storage. By default, the system's temporary directory is used. retransform={true|false} Specifies whether event classes should be retransformed using JVMTI. If false, instrumentation is added when event classes are loaded. By default, this parameter is enabled. stackdepth=depth Stack depth for stack traces. By default, the depth is set to 64 method calls. The maximum is 2048. Values greater than 64 could create significant overhead and reduce performance. threadbuffersize=size Specifies the per-thread local buffer size (in bytes). By default, the local buffer size is set to 8 kilobytes, with a minimum value of 4 kilobytes. Overriding this parameter could reduce performance and is not recommended. 
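Multiple JFR parameters from the list above are joined with commas into a single -XX:FlightRecorderOptions argument. The sketch below only assembles and prints the resulting launch line; the parameter values and the application name are illustrative, not recommendations:

```shell
# Comma-separated JFR parameters; MyApp is a placeholder application.
params='memorysize=20m,stackdepth=128,repository=/tmp/jfr-repo'
echo "java -XX:FlightRecorderOptions=$params MyApp"
```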
-XX:LargePageSizeInBytes=size Sets the maximum large page size (in bytes) used by the JVM. The size argument must be a valid page size supported by the environment to have any effect. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. By default, the size is set to 0, meaning that the JVM will use the default large page size for the environment as the maximum size for large pages. See Large Pages. The following example describes how to set the large page size to 1 gigabyte (GB): -XX:LargePageSizeInBytes=1g -XX:MaxDirectMemorySize=size Sets the maximum total size (in bytes) of direct-buffer allocations made through the java.nio package. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. If not set, the flag is ignored and the JVM chooses the size for NIO direct-buffer allocations automatically. The following examples illustrate how to set the NIO size to 1024 KB in different units: -XX:MaxDirectMemorySize=1m -XX:MaxDirectMemorySize=1024k -XX:MaxDirectMemorySize=1048576 -XX:-MaxFDLimit Disables the attempt to set the soft limit for the number of open file descriptors to the hard limit. By default, this option is enabled on all platforms, but is ignored on Windows. The only time that you may need to disable this is on macOS, where its use imposes a maximum of 10240, which is lower than the actual system maximum. -XX:NativeMemoryTracking=mode Specifies the mode for tracking JVM native memory usage. Possible mode arguments for this option include the following: off Instructs not to track JVM native memory usage. This is the default behavior if you don't specify the -XX:NativeMemoryTracking option. summary Tracks memory usage only by JVM subsystems, such as Java heap, class, code, and thread. detail In addition to tracking memory usage by JVM subsystems, tracks memory usage by individual CallSite, individual virtual memory region, and its committed regions.
-XX:TrimNativeHeapInterval=millis Interval, in ms, at which the JVM will trim the native heap. Lower values will reclaim memory more eagerly at the cost of higher overhead. A value of 0 (default) disables native heap trimming. Native heap trimming is performed in a dedicated thread. This option is only supported on Linux with GNU C Library (glibc). -XX:+NeverActAsServerClassMachine Enables the "Client VM emulation" mode, which only uses the C1 JIT compiler, a 32 MB code cache, and the Serial GC. The maximum amount of memory that the JVM may use (controlled by the -XX:MaxRAM=n flag) is set to 1 GB by default. The string "emulated-client" is added to the JVM version string. By default the flag is set to true only on Windows in 32-bit mode and false in all other cases. The "Client VM emulation" mode will not be enabled if any of the following flags are used on the command line: -XX:{+|-}TieredCompilation -XX:CompilationMode=mode -XX:TieredStopAtLevel=n -XX:{+|-}EnableJVMCI -XX:{+|-}UseJVMCICompiler -XX:ObjectAlignmentInBytes=alignment Sets the memory alignment of Java objects (in bytes). By default, the value is set to 8 bytes. The specified value should be a power of 2, and must be within the range of 8 and 256 (inclusive). This option makes it possible to use compressed pointers with large Java heap sizes. The heap size limit in bytes is calculated as: 4GB * ObjectAlignmentInBytes Note: As the alignment value increases, the unused space between objects also increases. As a result, you may not realize any benefits from using compressed pointers with large Java heap sizes. -XX:OnError=string Sets a custom command or a series of semicolon-separated commands to run when an irrecoverable error occurs. If the string contains spaces, then it must be enclosed in quotation marks.
β’ Non-Windows: The following example shows how the -XX:OnError option can be used to run the gcore command to create a core image, and start the gdb debugger to attach to the process in case of an irrecoverable error (the %p designates the current process identifier): -XX:OnError="gcore %p;gdb -p %p" β’ Windows: The following example shows how the -XX:OnError option can be used to run the userdump.exe utility to obtain a crash dump in case of an irrecoverable error (the %p designates the current process identifier). This example assumes that the path to the userdump.exe utility is specified in the PATH environment variable: -XX:OnError="userdump.exe %p" -XX:OnOutOfMemoryError=string Sets a custom command or a series of semicolon-separated commands to run when an OutOfMemoryError exception is first thrown. If the string contains spaces, then it must be enclosed in quotation marks. For an example of a command string, see the description of the -XX:OnError option. -XX:+PrintCommandLineFlags Enables printing of ergonomically selected JVM flags that appeared on the command line. It can be useful to know the ergonomic values set by the JVM, such as the heap space size and the selected garbage collector. By default, this option is disabled and flags aren't printed. -XX:+PreserveFramePointer Selects between using the RBP register as a general purpose register (-XX:-PreserveFramePointer) and using the RBP register to hold the frame pointer of the currently executing method (-XX:+PreserveFramePointer). If the frame pointer is available, then external profiling tools (for example, Linux perf) can construct more accurate stack traces. -XX:+PrintNMTStatistics Enables printing of collected native memory tracking data at JVM exit when native memory tracking is enabled (see -XX:NativeMemoryTracking). By default, this option is disabled and native memory tracking data isn't printed.
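-XX:NativeMemoryTracking (described earlier) and -XX:+PrintNMTStatistics are commonly combined so that a native-memory summary is printed when the JVM exits. The sketch below only assembles and prints the launch line, without starting a JVM; the application name is a placeholder:

```shell
# Track native memory at summary granularity and dump totals at JVM exit.
nmt='-XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics'
echo "java $nmt MyApp"
```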
-XX:SharedArchiveFile=path Specifies the path and name of the class data sharing (CDS) archive file. See Application Class Data Sharing. -XX:+VerifySharedSpaces If this option is specified, the JVM will load a CDS archive file only if it passes an integrity check based on CRC32 checksums. The purpose of this flag is to check for unintentional damage to CDS archive files in transmission or storage. To guarantee the security and proper operation of CDS, the user must ensure that the CDS archive files used by Java applications cannot be modified without proper authorization. -XX:SharedArchiveConfigFile=shared_config_file Specifies additional shared data added to the archive file. -XX:SharedClassListFile=file_name Specifies the text file that contains the names of the classes to store in the class data sharing (CDS) archive. This file contains the full name of one class per line, except slashes (/) replace dots (.). For example, to specify the classes java.lang.Object and hello.Main, create a text file that contains the following two lines: java/lang/Object hello/Main The classes that you specify in this text file should include the classes that are commonly used by the application. They may include any classes from the application, extension, or bootstrap class paths. See Application Class Data Sharing. -XX:+ShowCodeDetailsInExceptionMessages Enables printing of improved NullPointerException messages. When an application throws a NullPointerException, the option enables the JVM to analyze the program's bytecode instructions to determine precisely which reference is null, and describes the source with a null-detail message. The null-detail message is calculated and returned by NullPointerException.getMessage(), and will be printed as the exception message along with the method, filename, and line number. By default, this option is enabled. -XX:+ShowMessageBoxOnError Enables the display of a dialog box when the JVM experiences an irrecoverable error.
This prevents the JVM from exiting and keeps the process active so that you can attach a debugger to it to investigate the cause of the error. By default, this option is disabled. -XX:StartFlightRecording=parameter=value Starts a JFR recording for the Java application. This option is equivalent to the JFR.start diagnostic command that starts a recording during runtime. You can set the following parameter=value entries when starting a JFR recording: delay=time Specifies the delay between the Java application launch time and the start of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days (for example, specifying 10m means 10 minutes). By default, there's no delay, and this parameter is set to 0. disk={true|false} Specifies whether to write data to disk while recording. By default, this parameter is enabled. dumponexit={true|false} Specifies if the running recording is dumped when the JVM shuts down. If enabled and a filename is not entered, the recording is written to a file in the directory where the process was started. The file name is a system- generated name that contains the process ID, recording ID, and current timestamp, similar to hotspot-pid-47496-id-1-2018_01_25_19_10_41.jfr. By default, this parameter is disabled. duration=time Specifies the duration of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days (for example, specifying 5h means 5 hours). By default, the duration isn't limited, and this parameter is set to 0. filename=path Specifies the path and name of the file to which the recording is written when the recording is stopped, for example: β’ recording.jfr β’ /home/user/recordings/recording.jfr β’ c:\recordings\recording.jfr If %p and/or %t is specified in the filename, it expands to the JVM's PID and the current timestamp, respectively. name=identifier Takes both the name and the identifier of a recording. 
maxage=time Specifies the maximum age of disk data to keep for the recording. This parameter is valid only when the disk parameter is set to true. Append s to specify the time in seconds, m for minutes, h for hours, or d for days (for example, specifying 30s means 30 seconds). By default, the maximum age isn't limited, and this parameter is set to 0s. maxsize=size Specifies the maximum size (in bytes) of disk data to keep for the recording. This parameter is valid only when the disk parameter is set to true. The value must not be less than the value for the maxchunksize parameter set with -XX:FlightRecorderOptions. Append m or M to specify the size in megabytes, or g or G to specify the size in gigabytes. By default, the maximum size of disk data isn't limited, and this parameter is set to 0. path-to-gc-roots={true|false} Specifies whether to collect the path to garbage collection (GC) roots at the end of a recording. By default, this parameter is disabled. The path to GC roots is useful for finding memory leaks, but collecting it is time-consuming. Enable this option only when you start a recording for an application that you suspect has a memory leak. If the settings parameter is set to profile, the stack trace from where the potential leaking object was allocated is included in the information collected. settings=path Specifies the path and name of the event settings file (of type JFC). By default, the default.jfc file is used, which is located in JAVA_HOME/lib/jfr. This default settings file collects a predefined set of information with low overhead, so it has minimal impact on performance and can be used with recordings that run continuously. A second settings file is also provided, profile.jfc, which provides more data than the default configuration, but can have more overhead and impact performance. Use this configuration for short periods of time when more information is needed. You can specify values for multiple parameters by separating them with a comma. 
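Putting several of the parameters above together, a comma-joined -XX:StartFlightRecording argument might look like the following. The sketch only prints the command; the parameter values and application name are illustrative:

```shell
# 60-second recording, at most 200 MB of disk data, written to a file whose
# name embeds the JVM's pid via %p (MyApp is a placeholder).
rec='duration=60s,maxsize=200m,filename=rec-%p.jfr'
echo "java -XX:StartFlightRecording=$rec MyApp"
```

Note that maxsize only applies when the disk parameter is true, which is its default.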
Event settings and .jfc options can be specified using the following syntax: option=value Specifies the option value to modify. To list available options, use the JAVA_HOME/bin/jfr tool. event-setting=value Specifies the event setting value to modify. Use the form: <event-name>#<setting-name>=<value>. To add a new event setting, prefix the event name with '+'. You can specify values for multiple event settings and .jfc options by separating them with a comma. In case of a conflict between a parameter and a .jfc option, the parameter will take precedence. The whitespace delimiter can be omitted for timespan values, i.e. 20ms. For more information about the settings syntax, see Javadoc of the jdk.jfr package. -XX:ThreadStackSize=size Sets the Java thread stack size (in kilobytes). Use of a scaling suffix, such as k, results in the scaling of the kilobytes value so that -XX:ThreadStackSize=1k sets the Java thread stack size to 1024*1024 bytes or 1 megabyte. The default value depends on the platform. For example: β’ Linux/x64: 1024 KB β’ Linux/Aarch64: 2048 KB β’ macOS/x64: 1024 KB β’ macOS/Aarch64: 2048 KB β’ Windows: The default value depends on virtual memory The following examples show how to set the thread stack size to 1 megabyte in different units: -XX:ThreadStackSize=1k -XX:ThreadStackSize=1024 This option is similar to -Xss. -XX:-UseCompressedOops Disables the use of compressed pointers. By default, this option is enabled, and compressed pointers are used. This will automatically limit the maximum ergonomically determined Java heap size to the maximum amount of memory that can be covered by compressed pointers. By default this range is 32 GB. With compressed oops enabled, object references are represented as 32-bit offsets instead of 64-bit pointers, which typically increases performance when running the application with Java heap sizes smaller than the compressed oops pointer range. This option works only for 64-bit JVMs. 
It's possible to use compressed pointers with Java heap sizes greater than 32 GB. See the -XX:ObjectAlignmentInBytes option. -XX:-UseContainerSupport Linux only: The VM now provides automatic container detection support, which allows the VM to determine the amount of memory and number of processors that are available to a Java process running in docker containers. It uses this information to allocate system resources. The default for this flag is true, and container support is enabled by default. It can be disabled with -XX:-UseContainerSupport. Unified Logging is available to help to diagnose issues related to this support. Use -Xlog:os+container=trace for maximum logging of container information. See Enable Logging with the JVM Unified Logging Framework for a description of using Unified Logging. -XX:+UseLargePages Enables the use of large page memory. By default, this option is disabled and large page memory isn't used. See Large Pages. -XX:+UseTransparentHugePages Linux only: Enables the use of large pages that can dynamically grow or shrink. This option is disabled by default. You may encounter performance problems with transparent huge pages as the OS moves other pages around to create huge pages; this option is made available for experimentation. -XX:+AllowUserSignalHandlers Non-Windows: Enables installation of signal handlers by the application. By default, this option is disabled and the application isn't allowed to install signal handlers. -XX:VMOptionsFile=filename Allows user to specify VM options in a file, for example, java -XX:VMOptionsFile=/var/my_vm_options HelloWorld. -XX:UseBranchProtection=mode Linux AArch64 only: Specifies the branch protection mode. All options other than none require the VM to have been built with branch protection enabled. In addition, for full protection, any native libraries provided by applications should be compiled with the same level of protection. 
Possible mode arguments for this option include the following: none Do not use branch protection. This is the default value. standard Enables all branch protection modes available on the current platform. pac-ret Enables protection against ROP based attacks. (AArch64 8.3+ only) ADVANCED JIT COMPILER OPTIONS FOR JAVA These java options control the dynamic just-in-time (JIT) compilation performed by the Java HotSpot VM. -XX:AllocateInstancePrefetchLines=lines Sets the number of lines to prefetch ahead of the instance allocation pointer. By default, the number of lines to prefetch is set to 1: -XX:AllocateInstancePrefetchLines=1 -XX:AllocatePrefetchDistance=size Sets the size (in bytes) of the prefetch distance for object allocation. Memory about to be written with the value of new objects is prefetched up to this distance starting from the address of the last allocated object. Each Java thread has its own allocation point. Negative values denote that prefetch distance is chosen based on the platform. Positive values are bytes to prefetch. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value is set to -1. The following example shows how to set the prefetch distance to 1024 bytes: -XX:AllocatePrefetchDistance=1024 -XX:AllocatePrefetchInstr=instruction Sets the prefetch instruction to prefetch ahead of the allocation pointer. Possible values are from 0 to 3. The actual instructions behind the values depend on the platform. By default, the prefetch instruction is set to 0: -XX:AllocatePrefetchInstr=0 -XX:AllocatePrefetchLines=lines Sets the number of cache lines to load after the last object allocation by using the prefetch instructions generated in compiled code. The default value is 1 if the last allocated object was an instance, and 3 if it was an array. 
The following example shows how to set the number of loaded cache lines to 5: -XX:AllocatePrefetchLines=5 -XX:AllocatePrefetchStepSize=size Sets the step size (in bytes) for sequential prefetch instructions. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, g or G to indicate gigabytes. By default, the step size is set to 16 bytes: -XX:AllocatePrefetchStepSize=16 -XX:AllocatePrefetchStyle=style Sets the generated code style for prefetch instructions. The style argument is an integer from 0 to 3: 0 Don't generate prefetch instructions. 1 Execute prefetch instructions after each allocation. This is the default setting. 2 Use the thread-local allocation block (TLAB) watermark pointer to determine when prefetch instructions are executed. 3 Generate one prefetch instruction per cache line. -XX:+BackgroundCompilation Enables background compilation. This option is enabled by default. To disable background compilation, specify -XX:-BackgroundCompilation (this is equivalent to specifying -Xbatch). -XX:CICompilerCount=threads Sets the number of compiler threads to use for compilation. By default, the number of compiler threads is selected automatically depending on the number of CPUs and memory available for compiled code. The following example shows how to set the number of threads to 2: -XX:CICompilerCount=2 -XX:+UseDynamicNumberOfCompilerThreads Dynamically create compiler thread up to the limit specified by -XX:CICompilerCount. This option is enabled by default. -XX:CompileCommand=command,method[,option] Specifies a command to perform on a method. For example, to exclude the indexOf() method of the String class from being compiled, use the following: -XX:CompileCommand=exclude,java/lang/String.indexOf Note that the full class name is specified, including all packages and subpackages separated by a slash (/). 
For easier cut-and-paste operations, it's also possible to use the method name format produced by the -XX:+PrintCompilation and -XX:+LogCompilation options: -XX:CompileCommand=exclude,java.lang.String::indexOf If the method is specified without the signature, then the command is applied to all methods with the specified name. However, you can also specify the signature of the method in the class file format. In this case, you should enclose the arguments in quotation marks, because otherwise the shell treats the semicolon as a command end. For example, if you want to exclude only the indexOf(String) method of the String class from being compiled, use the following: -XX:CompileCommand="exclude,java/lang/String.indexOf,(Ljava/lang/String;)I" You can also use the asterisk (*) as a wildcard for class and method names. For example, to exclude all indexOf() methods in all classes from being compiled, use the following: -XX:CompileCommand=exclude,*.indexOf The commas and periods are aliases for spaces, making it easier to pass compiler commands through a shell. You can pass arguments to -XX:CompileCommand using spaces as separators by enclosing the argument in quotation marks: -XX:CompileCommand="exclude java/lang/String indexOf" Note that after parsing the commands passed on the command line using the -XX:CompileCommand options, the JIT compiler then reads commands from the .hotspot_compiler file. You can add commands to this file or specify a different file using the -XX:CompileCommandFile option. To add several commands, either specify the -XX:CompileCommand option multiple times, or separate each argument with the new line separator (\n). The following commands are available: break Sets a breakpoint when debugging the JVM to stop at the beginning of compilation of the specified method. compileonly Excludes all methods from compilation except for the specified method. As an alternative, you can use the -XX:CompileOnly option, which lets you specify several methods. 
dontinline Prevents inlining of the specified method. exclude Excludes the specified method from compilation. help Prints a help message for the -XX:CompileCommand option. inline Attempts to inline the specified method. log Excludes compilation logging (with the -XX:+LogCompilation option) for all methods except for the specified method. By default, logging is performed for all compiled methods. option Passes a JIT compilation option to the specified method in place of the last argument (option). The compilation option is set at the end, after the method name. For example, to enable the BlockLayoutByFrequency option for the append() method of the StringBuffer class, use the following: -XX:CompileCommand=option,java/lang/StringBuffer.append,BlockLayoutByFrequency You can specify multiple compilation options, separated by commas or spaces. print Prints generated assembler code after compilation of the specified method. quiet Instructs not to print the compile commands. By default, the commands that you specify with the -XX:CompileCommand option are printed; for example, if you exclude from compilation the indexOf() method of the String class, then the following is printed to standard output: CompilerOracle: exclude java/lang/String.indexOf You can suppress this by specifying the -XX:CompileCommand=quiet option before other -XX:CompileCommand options. -XX:CompileCommandFile=filename Sets the file from which JIT compiler commands are read. By default, the .hotspot_compiler file is used to store commands performed by the JIT compiler. Each line in the command file represents a command, a class name, and a method name for which the command is used. For example, this line prints assembly code for the toString() method of the String class: print java/lang/String toString If you're using commands for the JIT compiler to perform on methods, then see the -XX:CompileCommand option. 
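As described above for -XX:CompileCommandFile, commands can also be collected in a file, one per line. The following runnable sketch writes such a file; the file name is arbitrary and the method choices are illustrative:

```shell
# One command per line: command, class name, method name.
cat > my.hotspot_compiler <<'EOF'
exclude java/lang/String indexOf
print java/lang/String toString
EOF
cat my.hotspot_compiler
# Launch with: java -XX:CompileCommandFile=my.hotspot_compiler MyApp
```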
-XX:CompilerDirectivesFile=file Adds directives from a file to the directives stack when a program starts. See Compiler Control [https://docs.oracle.com/en/java/javase/12/vm/compiler-control1.html#GUID-94AD8194-786A-4F19-BFFF-278F8E237F3A]. The -XX:CompilerDirectivesFile option has to be used together with the -XX:+UnlockDiagnosticVMOptions option that unlocks diagnostic JVM options. -XX:+CompilerDirectivesPrint Prints the directives stack when the program starts or when a new directive is added. The -XX:+CompilerDirectivesPrint option has to be used together with the -XX:+UnlockDiagnosticVMOptions option that unlocks diagnostic JVM options. -XX:CompileOnly=methods Sets the list of methods (separated by commas) to which compilation should be restricted. Only the specified methods are compiled. -XX:CompileOnly=method1,method2,...,methodN is an alias for: -XX:CompileCommand=compileonly,method1 -XX:CompileCommand=compileonly,method2 ... -XX:CompileCommand=compileonly,methodN -XX:CompileThresholdScaling=scale Provides unified control of first compilation. This option controls when methods are first compiled for both the tiered and the nontiered modes of operation. The CompileThresholdScaling option has a floating point value between 0 and +Inf and scales the thresholds corresponding to the current mode of operation (both tiered and nontiered). Setting CompileThresholdScaling to a value less than 1.0 results in earlier compilation while values greater than 1.0 delay compilation. Setting CompileThresholdScaling to 0 is equivalent to disabling compilation. -XX:+DoEscapeAnalysis Enables the use of escape analysis. This option is enabled by default. To disable the use of escape analysis, specify -XX:-DoEscapeAnalysis. -XX:InitialCodeCacheSize=size Sets the initial code cache size (in bytes). Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value depends on the platform.
The initial code cache size shouldn't be less than the system's minimal memory page size. The following example shows how to set the initial code cache size to 32 KB: -XX:InitialCodeCacheSize=32k -XX:+Inline Enables method inlining. This option is enabled by default to increase performance. To disable method inlining, specify -XX:-Inline. -XX:InlineSmallCode=size Sets the maximum code size (in bytes) for already compiled methods that may be inlined. This flag only applies to the C2 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value depends on the platform and on whether tiered compilation is enabled. In the following example it is set to 1000 bytes: -XX:InlineSmallCode=1000 -XX:+LogCompilation Enables logging of compilation activity to a file named hotspot.log in the current working directory. You can specify a different log file path and name using the -XX:LogFile option. By default, this option is disabled and compilation activity isn't logged. The -XX:+LogCompilation option has to be used together with the -XX:+UnlockDiagnosticVMOptions option that unlocks diagnostic JVM options. You can enable verbose diagnostic output with a message printed to the console every time a method is compiled by using the -XX:+PrintCompilation option. -XX:FreqInlineSize=size Sets the maximum bytecode size (in bytes) of a hot method to be inlined. This flag only applies to the C2 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value depends on the platform. In the following example it is set to 325 bytes: -XX:FreqInlineSize=325 -XX:MaxInlineSize=size Sets the maximum bytecode size (in bytes) of a cold method to be inlined. This flag only applies to the C2 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes.
By default, the maximum bytecode size is set to 35 bytes: -XX:MaxInlineSize=35 -XX:C1MaxInlineSize=size Sets the maximum bytecode size (in bytes) of a cold method to be inlined. This flag only applies to the C1 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. By default, the maximum bytecode size is set to 35 bytes: -XX:C1MaxInlineSize=35 -XX:MaxTrivialSize=size Sets the maximum bytecode size (in bytes) of a trivial method to be inlined. This flag only applies to the C2 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. By default, the maximum bytecode size of a trivial method is set to 6 bytes: -XX:MaxTrivialSize=6 -XX:C1MaxTrivialSize=size Sets the maximum bytecode size (in bytes) of a trivial method to be inlined. This flag only applies to the C1 compiler. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. By default, the maximum bytecode size of a trivial method is set to 6 bytes: -XX:C1MaxTrivialSize=6 -XX:MaxNodeLimit=nodes Sets the maximum number of nodes to be used during single method compilation. By default the value depends on the features enabled. In the following example the maximum number of nodes is set to 100,000: -XX:MaxNodeLimit=100000 -XX:NonNMethodCodeHeapSize=size Sets the size in bytes of the code segment containing nonmethod code, such as compiler buffers and the bytecode interpreter. This code type stays in the code cache forever. This flag is used only if -XX:SegmentedCodeCache is enabled. -XX:NonProfiledCodeHeapSize=size Sets the size in bytes of the code segment containing nonprofiled methods. This flag is used only if -XX:SegmentedCodeCache is enabled. -XX:+OptimizeStringConcat Enables the optimization of String concatenation operations. This option is enabled by default. 
To disable the optimization of String concatenation operations, specify -XX:-OptimizeStringConcat. -XX:+PrintAssembly Enables printing of assembly code for bytecoded and native methods by using the external hsdis-<arch>.so or .dll library. For 64-bit VM on Windows, it's hsdis-amd64.dll. This lets you see the generated code, which may help you diagnose performance issues. By default, this option is disabled and assembly code isn't printed. The -XX:+PrintAssembly option has to be used together with the -XX:UnlockDiagnosticVMOptions option that unlocks diagnostic JVM options. -XX:ProfiledCodeHeapSize=size Sets the size in bytes of the code segment containing profiled methods. This flag is used only if -XX:SegmentedCodeCache is enabled. -XX:+PrintCompilation Enables verbose diagnostic output from the JVM by printing a message to the console every time a method is compiled. This lets you see which methods actually get compiled. By default, this option is disabled and diagnostic output isn't printed. You can also log compilation activity to a file by using the -XX:+LogCompilation option. -XX:+PrintInlining Enables printing of inlining decisions. This lets you see which methods are getting inlined. By default, this option is disabled and inlining information isn't printed. The -XX:+PrintInlining option has to be used together with the -XX:+UnlockDiagnosticVMOptions option that unlocks diagnostic JVM options. -XX:ReservedCodeCacheSize=size Sets the maximum code cache size (in bytes) for JIT-compiled code. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default maximum code cache size is 240 MB; if you disable tiered compilation with the option -XX:-TieredCompilation, then the default size is 48 MB. This option has a limit of 2 GB; otherwise, an error is generated. The maximum code cache size shouldn't be less than the initial code cache size; see the option -XX:InitialCodeCacheSize. 
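Several of the compilation-diagnostic flags above are locked behind -XX:+UnlockDiagnosticVMOptions. As a sketch of how they combine on one command line (Main is a hypothetical application class; the flag selection is illustrative, not prescriptive):

```shell
# -XX:+UnlockDiagnosticVMOptions must appear before the diagnostic
# flags it unlocks (-XX:+PrintInlining and -XX:+LogCompilation here).
java -XX:+UnlockDiagnosticVMOptions \
     -XX:+PrintCompilation \
     -XX:+PrintInlining \
     -XX:+LogCompilation -XX:LogFile=compile.log \
     Main
```

PrintCompilation output goes to the console, while LogCompilation writes its data to compile.log for offline analysis.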
-XX:RTMAbortRatio=abort_ratio Specifies the RTM abort ratio as a percentage (%) of all executed RTM transactions. If the number of aborted transactions becomes greater than this ratio, then the compiled code is deoptimized. This ratio is used when the -XX:+UseRTMDeopt option is enabled. The default value of this option is 50. This means that the compiled code is deoptimized if 50% of all transactions are aborted. -XX:RTMRetryCount=number_of_retries Specifies the number of times that the RTM locking code is retried, when it is aborted or busy, before falling back to the normal locking mechanism. The default value for this option is 5. The -XX:+UseRTMLocking option must be enabled. -XX:+SegmentedCodeCache Enables segmentation of the code cache, without which the code cache consists of one large segment. With -XX:+SegmentedCodeCache, separate segments will be used for non-method, profiled method, and non-profiled method code. The segments are not resized at runtime. The advantages are better control of the memory footprint, reduced code fragmentation, and better CPU iTLB (instruction translation lookaside buffer) and instruction cache behavior due to improved locality. The feature is enabled by default if tiered compilation is enabled (-XX:+TieredCompilation) and the reserved code cache size (-XX:ReservedCodeCacheSize) is at least 240 MB. -XX:StartAggressiveSweepingAt=percent Forces stack scanning of active methods to aggressively remove unused code when only the given percentage of the code cache is free. The default value is 10%. -XX:-TieredCompilation Disables the use of tiered compilation. By default, this option is enabled. -XX:UseSSE=version Enables the use of the SSE instruction set of a specified version. By default, it is set to the highest supported version available (x86 only). -XX:UseAVX=version Enables the use of the AVX instruction set of a specified version. By default, it is set to the highest supported version available (x86 only). 
-XX:+UseAES Enables hardware-based AES intrinsics for hardware that supports it. This option is on by default on hardware that has the necessary instructions. The -XX:+UseAES option is used in conjunction with UseAESIntrinsics. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseAESIntrinsics Enables AES intrinsics. Specifying -XX:+UseAESIntrinsics is equivalent to also enabling -XX:+UseAES. To disable hardware-based AES intrinsics, specify -XX:-UseAES -XX:-UseAESIntrinsics. For example, to enable hardware AES, use the following flags: -XX:+UseAES -XX:+UseAESIntrinsics Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseAESCTRIntrinsics Analogous to -XX:+UseAESIntrinsics, enables AES/CTR intrinsics. -XX:+UseGHASHIntrinsics Controls the use of GHASH intrinsics. Enabled by default on platforms that support the corresponding instructions. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseChaCha20Intrinsics Enables ChaCha20 intrinsics. This option is on by default for supported platforms. To disable ChaCha20 intrinsics, specify -XX:-UseChaCha20Intrinsics. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UsePoly1305Intrinsics Enables Poly1305 intrinsics. This option is on by default for supported platforms. To disable Poly1305 intrinsics, specify -XX:-UsePoly1305Intrinsics. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseBASE64Intrinsics Controls the use of accelerated BASE64 encoding routines for java.util.Base64. Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseAdler32Intrinsics Controls the use of the Adler32 checksum algorithm intrinsic for java.util.zip.Adler32. Enabled by default on platforms that support it. 
Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseCRC32Intrinsics Controls the use of CRC32 intrinsics for java.util.zip.CRC32. Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseCRC32CIntrinsics Controls the use of CRC32C intrinsics for java.util.zip.CRC32C. Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseSHA Enables hardware-based intrinsics for SHA crypto hash functions for some hardware. The UseSHA option is used in conjunction with the UseSHA1Intrinsics, UseSHA256Intrinsics, and UseSHA512Intrinsics options. The UseSHA and UseSHA*Intrinsics flags are enabled by default on machines that support the corresponding instructions. This feature is applicable only when using the sun.security.provider.Sun provider for SHA operations. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. To disable all hardware-based SHA intrinsics, specify -XX:-UseSHA. To disable only a particular SHA intrinsic, use the corresponding option. For example: -XX:-UseSHA256Intrinsics. -XX:+UseSHA1Intrinsics Enables intrinsics for the SHA-1 crypto hash function. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseSHA256Intrinsics Enables intrinsics for the SHA-224 and SHA-256 crypto hash functions. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseSHA512Intrinsics Enables intrinsics for the SHA-384 and SHA-512 crypto hash functions. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseMathExactIntrinsics Enables intrinsification of various java.lang.Math.*Exact() functions. Enabled by default. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. 
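Because the intrinsic flags above are diagnostic, toggling a single intrinsic also requires the unlock flag on the same command line. A sketch (MyApp is a placeholder class name):

```shell
# Turn off only the SHA-224/SHA-256 intrinsic; other SHA intrinsics
# stay at their platform defaults.
java -XX:+UnlockDiagnosticVMOptions -XX:-UseSHA256Intrinsics MyApp
```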
-XX:+UseMultiplyToLenIntrinsic Enables intrinsification of BigInteger.multiplyToLen(). Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseSquareToLenIntrinsic Enables intrinsification of BigInteger.squareToLen(). Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseMulAddIntrinsic Enables intrinsification of BigInteger.mulAdd(). Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseMontgomeryMultiplyIntrinsic Enables intrinsification of BigInteger.montgomeryMultiply(). Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseMontgomerySquareIntrinsic Enables intrinsification of BigInteger.montgomerySquare(). Enabled by default on platforms that support it. Flags that control intrinsics now require the option -XX:+UnlockDiagnosticVMOptions. -XX:+UseCMoveUnconditionally Generates CMove (scalar and vector) instructions regardless of profitability analysis. -XX:+UseCodeCacheFlushing Enables flushing of the code cache before shutting down the compiler. This option is enabled by default. To disable flushing of the code cache before shutting down the compiler, specify -XX:-UseCodeCacheFlushing. -XX:+UseCondCardMark Enables checking if the card is already marked before updating the card table. This option is disabled by default. It should be used only on machines with multiple sockets, where it increases the performance of Java applications that rely on concurrent operations. -XX:+UseCountedLoopSafepoints Keeps safepoints in counted loops. Its default value depends on whether the selected garbage collector requires low latency safepoints. 
-XX:LoopStripMiningIter=number_of_iterations Controls the number of iterations in the inner strip mined loop. Strip mining transforms counted loops into two-level nested loops. Safepoints are kept in the outer loop while the inner loop can execute at full speed. This option controls the maximum number of iterations in the inner loop. The default value is 1,000. -XX:LoopStripMiningIterShortLoop=number_of_iterations Controls loop strip mining optimization. Loops with fewer iterations than the specified value will not have safepoints in them. The default value is 1/10th of -XX:LoopStripMiningIter. -XX:+UseFMA Enables hardware-based FMA intrinsics for hardware where FMA instructions are available (such as Intel and ARM64). FMA intrinsics are generated for the java.lang.Math.fma(a, b, c) methods that calculate the value of (a * b + c) expressions. -XX:+UseRTMDeopt Autotunes RTM locking depending on the abort ratio. This ratio is specified by the -XX:RTMAbortRatio option. If the number of aborted transactions exceeds the abort ratio, then the method containing the lock is deoptimized and recompiled with all locks as normal locks. This option is disabled by default. The -XX:+UseRTMLocking option must be enabled. -XX:+UseRTMLocking Generates Restricted Transactional Memory (RTM) locking code for all inflated locks, with the normal locking mechanism as the fallback handler. This option is disabled by default. Options related to RTM are available only on x86 CPUs that support Transactional Synchronization Extensions (TSX). RTM is part of Intel's TSX, an x86 instruction set extension that facilitates the creation of multithreaded applications. RTM introduces the new instructions XBEGIN, XABORT, XEND, and XTEST. The XBEGIN and XEND instructions enclose a set of instructions to run as a transaction. If no conflict is found when running the transaction, then the memory and register modifications are committed together at the XEND instruction. 
The XABORT instruction can be used to explicitly abort a transaction and the XTEST instruction checks if a set of instructions is being run in a transaction. A lock on a transaction is inflated when another thread tries to access the same transaction, thereby blocking the thread that didn't originally request access to the transaction. RTM requires that a fallback set of operations be specified in case a transaction aborts or fails. An RTM lock is a lock that has been delegated to the TSX's system. RTM improves performance for highly contended locks with low conflict in a critical region (which is code that must not be accessed by more than one thread concurrently). RTM also improves the performance of coarse-grain locking, which typically doesn't perform well in multithreaded applications. (Coarse-grain locking is the strategy of holding locks for long periods to minimize the overhead of taking and releasing locks, while fine-grained locking is the strategy of trying to achieve maximum parallelism by locking only when necessary and unlocking as soon as possible.) Also, for lightly contended locks that are used by different threads, RTM can reduce false cache line sharing, also known as cache line ping-pong. This occurs when multiple threads from different processors are accessing different resources, but the resources share the same cache line. As a result, the processors repeatedly invalidate the cache lines of other processors, which forces them to read from main memory instead of their cache. -XX:+UseSuperWord Enables the transformation of scalar operations into superword operations. Superword is a vectorization optimization. This option is enabled by default. To disable the transformation of scalar operations into superword operations, specify -XX:-UseSuperWord. ADVANCED SERVICEABILITY OPTIONS FOR JAVA These java options provide the ability to gather system information and perform extensive debugging. 
-XX:+DisableAttachMechanism Disables the mechanism that lets tools attach to the JVM. By default, this option is disabled, meaning that the attach mechanism is enabled and you can use diagnostics and troubleshooting tools such as jcmd, jstack, jmap, and jinfo. Note: Tools such as jcmd, jinfo, jmap, and jstack shipped with the JDK aren't supported when used from one JDK version to troubleshoot a different JDK version. -XX:+DTraceAllocProbes Linux and macOS: Enables dtrace tool probes for object allocation. -XX:+DTraceMethodProbes Linux and macOS: Enables dtrace tool probes for method-entry and method-exit. -XX:+DTraceMonitorProbes Linux and macOS: Enables dtrace tool probes for monitor events. -XX:+HeapDumpOnOutOfMemoryError Enables the dumping of the Java heap to a file in the current directory by using the heap profiler (HPROF) when a java.lang.OutOfMemoryError exception is thrown. You can explicitly set the heap dump file path and name using the -XX:HeapDumpPath option. By default, this option is disabled and the heap isn't dumped when an OutOfMemoryError exception is thrown. -XX:HeapDumpPath=path Sets the path and file name for writing the heap dump provided by the heap profiler (HPROF) when the -XX:+HeapDumpOnOutOfMemoryError option is set. By default, the file is created in the current working directory, and it's named java_pid<pid>.hprof where <pid> is the identifier of the process that caused the error. The following example shows how to set the default file explicitly (%p represents the current process identifier): -XX:HeapDumpPath=./java_pid%p.hprof • Non-Windows: The following example shows how to set the heap dump file to /var/log/java/java_heapdump.hprof: -XX:HeapDumpPath=/var/log/java/java_heapdump.hprof • Windows: The following example shows how to set the heap dump file to C:/log/java/java_heapdump.log: -XX:HeapDumpPath=C:/log/java/java_heapdump.log -XX:LogFile=path Sets the path and file name to where log data is written. 
By default, the file is created in the current working directory, and it's named hotspot.log. • Non-Windows: The following example shows how to set the log file to /var/log/java/hotspot.log: -XX:LogFile=/var/log/java/hotspot.log • Windows: The following example shows how to set the log file to C:/log/java/hotspot.log: -XX:LogFile=C:/log/java/hotspot.log -XX:+PrintClassHistogram Enables printing of a class instance histogram after one of the following events: • Non-Windows: Control+\ (SIGQUIT) • Windows: Control+C (SIGTERM) By default, this option is disabled. Setting this option is equivalent to running the jmap -histo command, or the jcmd pid GC.class_histogram command, where pid is the current Java process identifier. -XX:+PrintConcurrentLocks Enables printing of java.util.concurrent locks after one of the following events: • Non-Windows: Control+\ (SIGQUIT) • Windows: Control+C (SIGTERM) By default, this option is disabled. Setting this option is equivalent to running the jstack -l command or the jcmd pid Thread.print -l command, where pid is the current Java process identifier. -XX:+PrintFlagsRanges Prints the range specified and allows automatic testing of the values. See Validate Java Virtual Machine Flag Arguments. -XX:+PerfDataSaveToFile If enabled, saves jstat binary data when the Java application exits. This binary data is saved in a file named hsperfdata_pid, where pid is the process identifier of the Java application that you ran. Use the jstat command to display the performance data contained in this file as follows: jstat -class file:///path/hsperfdata_pid jstat -gc file:///path/hsperfdata_pid -XX:+UsePerfData Enables the perfdata feature. This option is enabled by default to allow JVM monitoring and performance testing. Disabling it suppresses the creation of the hsperfdata_userid directories. To disable the perfdata feature, specify -XX:-UsePerfData. 
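The serviceability flags above are commonly combined. For example, to capture a heap dump with a process-specific name on OutOfMemoryError (a sketch, assuming /var/log/java exists and is writable; MyApp is a placeholder class name):

```shell
# %p in the dump path expands to the JVM process identifier,
# so concurrent JVMs don't overwrite each other's dumps.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/java/java_pid%p.hprof \
     MyApp
```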
ADVANCED GARBAGE COLLECTION OPTIONS FOR JAVA These java options control how garbage collection (GC) is performed by the Java HotSpot VM. -XX:+AggressiveHeap Enables Java heap optimization. This sets various parameters to be optimal for long-running jobs with intensive memory allocation, based on the configuration of the computer (RAM and CPU). By default, the option is disabled and the heap sizes are configured less aggressively. -XX:+AlwaysPreTouch Requests the VM to touch every page on the Java heap after requesting it from the operating system and before handing memory out to the application. By default, this option is disabled and all pages are committed as the application uses the heap space. -XX:ConcGCThreads=threads Sets the number of threads used for concurrent GC. By default, threads is set to approximately 1/4 of the number of parallel garbage collection threads, so the default value depends on the number of CPUs available to the JVM. For example, to set the number of threads for concurrent GC to 2, specify the following option: -XX:ConcGCThreads=2 -XX:+DisableExplicitGC Enables the option that disables processing of calls to the System.gc() method. This option is disabled by default, meaning that calls to System.gc() are processed. If processing of calls to System.gc() is disabled, then the JVM still performs GC when necessary. -XX:+ExplicitGCInvokesConcurrent Enables invoking of concurrent GC by using the System.gc() request. This option is disabled by default and can be enabled only with the -XX:+UseG1GC option. -XX:G1AdaptiveIHOPNumInitialSamples=number When -XX:+G1UseAdaptiveIHOP is enabled, this option sets the number of completed marking cycles used to gather samples until G1 adaptively determines the optimum value of -XX:InitiatingHeapOccupancyPercent. Before that, G1 uses the value of -XX:InitiatingHeapOccupancyPercent directly for this purpose. The default value is 3. 
-XX:G1HeapRegionSize=size Sets the size of the regions into which the Java heap is subdivided when using the garbage-first (G1) collector. The value is a power of 2 and can range from 1 MB to 32 MB. The default region size is determined ergonomically based on the heap size with a goal of approximately 2048 regions. The following example sets the size of the subdivisions to 16 MB: -XX:G1HeapRegionSize=16m -XX:G1HeapWastePercent=percent Sets the percentage of heap that you're willing to waste. The Java HotSpot VM doesn't initiate the mixed garbage collection cycle when the reclaimable percentage is less than the heap waste percentage. The default is 5 percent. -XX:G1MaxNewSizePercent=percent Sets the percentage of the heap size to use as the maximum for the young generation size. The default value is 60 percent of your Java heap. This is an experimental flag. This setting replaces the -XX:DefaultMaxNewGenPercent setting. -XX:G1MixedGCCountTarget=number Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections. The goal for mixed collections is to be within this target number. -XX:G1MixedGCLiveThresholdPercent=percent Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle. The default occupancy is 85 percent. This is an experimental flag. This setting replaces the -XX:G1OldCSetRegionLiveThresholdPercent setting. -XX:G1NewSizePercent=percent Sets the percentage of the heap to use as the minimum for the young generation size. The default value is 5 percent of your Java heap. This is an experimental flag. This setting replaces the -XX:DefaultMinNewGenPercent setting. -XX:G1OldCSetRegionThresholdPercent=percent Sets an upper limit on the number of old regions to be collected during a mixed garbage collection cycle. The default is 10 percent of the Java heap. 
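The default region size described under -XX:G1HeapRegionSize (a power of two between 1 MB and 32 MB, aiming for roughly 2048 regions) can be approximated with a little arithmetic. This is a rough sketch of the ergonomic, not the JVM's exact rounding logic:

```shell
# Approximate the ergonomic G1 region size for a 4 GB heap:
# aim for ~2048 regions, round down to a power of two, clamp to 1..32 MB.
heap_mb=4096
target=$((heap_mb / 2048))
region_mb=1
while [ $((region_mb * 2)) -le "$target" ] && [ "$region_mb" -lt 32 ]; do
  region_mb=$((region_mb * 2))
done
echo "region size: ${region_mb} MB"
```

For a 4 GB heap this yields 2 MB regions, i.e. about 2048 regions; setting -XX:G1HeapRegionSize explicitly overrides the ergonomic.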
-XX:G1ReservePercent=percent Sets the percentage of the heap (0 to 50) that's reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector. When you increase or decrease the percentage, ensure that you adjust the total Java heap by the same amount. By default, this option is set to 10%. The following example sets the reserved heap to 20%: -XX:G1ReservePercent=20 -XX:+G1UseAdaptiveIHOP Controls adaptive calculation of the old generation occupancy to start background work preparing for an old generation collection. If enabled, G1 uses -XX:InitiatingHeapOccupancyPercent for the first few times as specified by the value of -XX:G1AdaptiveIHOPNumInitialSamples, and after that adaptively calculates a new optimum value for the initiating occupancy automatically. Otherwise, the old generation collection process always starts at the old generation occupancy determined by -XX:InitiatingHeapOccupancyPercent. The default is enabled. -XX:InitialHeapSize=size Sets the initial size (in bytes) of the memory allocation pool. This value must be either 0, or a multiple of 1024 and greater than 1 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value is selected at run time based on the system configuration. The following examples show how to set the size of allocated memory to 6 MB using various units: -XX:InitialHeapSize=6291456 -XX:InitialHeapSize=6144k -XX:InitialHeapSize=6m If you set this option to 0, then the initial size is set as the sum of the sizes allocated for the old generation and the young generation. The size of the heap for the young generation can be set using the -XX:NewSize option. Note that the -Xms option sets both the minimum and the initial heap size of the heap. If -Xms appears after -XX:InitialHeapSize on the command line, then the initial heap size gets set to the value specified with -Xms. 
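The three equivalent -XX:InitialHeapSize spellings above work because the k/m/g suffixes are binary multipliers (1k = 1024 bytes); a quick arithmetic check:

```shell
# 6m and 6144k both expand to 6291456 bytes, matching the raw-byte example.
bytes_from_m=$((6 * 1024 * 1024))
bytes_from_k=$((6144 * 1024))
echo "$bytes_from_m $bytes_from_k"
```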
-XX:InitialRAMPercentage=percent Sets the initial amount of memory that the JVM will use for the Java heap before applying ergonomics heuristics as a percentage of the maximum amount determined as described in the -XX:MaxRAM option. The default value is 1.5625 percent. The following example shows how to set the percentage of the initial amount of memory used for the Java heap: -XX:InitialRAMPercentage=5 -XX:InitialSurvivorRatio=ratio Sets the initial survivor space ratio used by the throughput garbage collector (which is enabled by the -XX:+UseParallelGC option). Adaptive sizing is enabled by default with the throughput garbage collector by using the -XX:+UseParallelGC option, and the survivor space is resized according to the application behavior, starting with the initial value. If adaptive sizing is disabled (using the -XX:-UseAdaptiveSizePolicy option), then the -XX:SurvivorRatio option should be used to set the size of the survivor space for the entire execution of the application. The following formula can be used to calculate the initial size of survivor space (S) based on the size of the young generation (Y), and the initial survivor space ratio (R): S=Y/(R+2) The 2 in the equation denotes two survivor spaces. The larger the value specified as the initial survivor space ratio, the smaller the initial survivor space size. By default, the initial survivor space ratio is set to 8. If the default value for the young generation space size is used (2 MB), then the initial size of the survivor space is 0.2 MB. The following example shows how to set the initial survivor space ratio to 4: -XX:InitialSurvivorRatio=4 -XX:InitiatingHeapOccupancyPercent=percent Sets the percentage of the old generation occupancy (0 to 100) at which to start the first few concurrent marking cycles for the G1 garbage collector. By default, the initiating value is set to 45%. A value of 0 implies nonstop concurrent GC cycles from the beginning until G1 adaptively sets this value. 
See also the -XX:G1UseAdaptiveIHOP and -XX:G1AdaptiveIHOPNumInitialSamples options. The following example shows how to set the initiating heap occupancy to 75%: -XX:InitiatingHeapOccupancyPercent=75 -XX:MaxGCPauseMillis=time Sets a target for the maximum GC pause time (in milliseconds). This is a soft goal, and the JVM will make its best effort to achieve it. The specified value doesn't adapt to your heap size. By default, for G1 the maximum pause time target is 200 milliseconds. The other generational collectors do not use a pause time goal by default. The following example shows how to set the maximum target pause time to 500 ms: -XX:MaxGCPauseMillis=500 -XX:MaxHeapSize=size Sets the maximum size (in bytes) of the memory allocation pool. This value must be a multiple of 1024 and greater than 2 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value is selected at run time based on the system configuration. For server deployments, the options -XX:InitialHeapSize and -XX:MaxHeapSize are often set to the same value. The following examples show how to set the maximum allowed size of allocated memory to 80 MB using various units: -XX:MaxHeapSize=83886080 -XX:MaxHeapSize=81920k -XX:MaxHeapSize=80m The -XX:MaxHeapSize option is equivalent to -Xmx. -XX:MaxHeapFreeRatio=percent Sets the maximum allowed percentage of free heap space (0 to 100) after a GC event. If free heap space expands above this value, then the heap is shrunk. By default, this value is set to 70%. Minimize the Java heap size by lowering the values of the parameters MaxHeapFreeRatio (default value is 70%) and MinHeapFreeRatio (default value is 40%) with the command-line options -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio. 
Lowering MaxHeapFreeRatio to as low as 10% and MinHeapFreeRatio to 5% has successfully reduced the heap size without too much performance regression; however, results may vary greatly depending on your application. Try different values for these parameters until they're as low as possible yet still retain acceptable performance. -XX:MaxHeapFreeRatio=10 -XX:MinHeapFreeRatio=5 Customers trying to keep the heap small should also add the option -XX:-ShrinkHeapInSteps. See Performance Tuning Examples for a description of using this option to keep the Java heap small by reducing the dynamic footprint for embedded applications. -XX:MaxMetaspaceSize=size Sets the maximum amount of native memory that can be allocated for class metadata. By default, the size isn't limited. The amount of metadata for an application depends on the application itself, other running applications, and the amount of memory available on the system. The following example shows how to set the maximum class metadata size to 256 MB: -XX:MaxMetaspaceSize=256m -XX:MaxNewSize=size Sets the maximum size (in bytes) of the heap for the young generation (nursery). The default value is set ergonomically. -XX:MaxRAM=size Sets the maximum amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics. The default value is the maximum amount of available memory to the JVM process or 128 GB, whichever is lower. The maximum amount of available memory to the JVM process is the minimum of the machine's physical memory and any constraints set by the environment (e.g. container). Specifying this option disables automatic use of compressed oops if the combined result of this and other options influencing the maximum amount of memory is larger than the range of memory addressable by compressed oops. See -XX:UseCompressedOops for further information about compressed oops. 
The following example shows how to set the maximum amount of available memory for sizing the Java heap to 2 GB: -XX:MaxRAM=2G -XX:MaxRAMPercentage=percent Sets the maximum amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics as a percentage of the maximum amount determined as described in the -XX:MaxRAM option. The default value is 25 percent. Specifying this option disables automatic use of compressed oops if the combined result of this and other options influencing the maximum amount of memory is larger than the range of memory addressable by compressed oops. See -XX:UseCompressedOops for further information about compressed oops. The following example shows how to set the percentage of the maximum amount of memory used for the Java heap: -XX:MaxRAMPercentage=75 -XX:MinRAMPercentage=percent Sets the maximum amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics as a percentage of the maximum amount determined as described in the -XX:MaxRAM option for small heaps. A small heap is a heap of approximately 125 MB. The default value is 50 percent. The following example shows how to set the percentage of the maximum amount of memory used for the Java heap for small heaps: -XX:MinRAMPercentage=75 -XX:MaxTenuringThreshold=threshold Sets the maximum tenuring threshold for use in adaptive GC sizing. The largest value is 15. The default value is 15 for the parallel (throughput) collector. The following example shows how to set the maximum tenuring threshold to 10: -XX:MaxTenuringThreshold=10 -XX:MetaspaceSize=size Sets the size of the allocated class metadata space that triggers a garbage collection the first time it's exceeded. This threshold for a garbage collection is increased or decreased depending on the amount of metadata used. The default size depends on the platform. -XX:MinHeapFreeRatio=percent Sets the minimum allowed percentage of free heap space (0 to 100) after a GC event. 
If free heap space falls below this value, then the heap is expanded. By default, this value is set to 40%. Minimize Java heap size by lowering the values of the parameters MaxHeapFreeRatio (default value is 70%) and MinHeapFreeRatio (default value is 40%) with the command-line options -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio. Lowering MaxHeapFreeRatio to as low as 10% and MinHeapFreeRatio to 5% has successfully reduced the heap size without too much performance regression; however, results may vary greatly depending on your application. Try different values for these parameters until they're as low as possible, yet still retain acceptable performance. -XX:MaxHeapFreeRatio=10 -XX:MinHeapFreeRatio=5 Customers trying to keep the heap small should also add the option -XX:-ShrinkHeapInSteps. See Performance Tuning Examples for a description of using this option to keep the Java heap small by reducing the dynamic footprint for embedded applications. -XX:MinHeapSize=size Sets the minimum size (in bytes) of the memory allocation pool. This value must be either 0, or a multiple of 1024 and greater than 1 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. The default value is selected at run time based on the system configuration. The following examples show how to set the minimum size of allocated memory to 6 MB using various units: -XX:MinHeapSize=6291456 -XX:MinHeapSize=6144k -XX:MinHeapSize=6m If you set this option to 0, then the minimum size is set to the same value as the initial size. -XX:NewRatio=ratio Sets the ratio between young and old generation sizes. By default, this option is set to 2. The following example shows how to set the young-to-old ratio to 1: -XX:NewRatio=1 -XX:NewSize=size Sets the initial size (in bytes) of the heap for the young generation (nursery). Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. 
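The grow/shrink rule governed by -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio can be sketched as follows. This is an illustrative model, not the JVM's actual sizing code; the 40% and 70% defaults come from the text above:

```python
# Illustrative sketch of the resize rule for -XX:MinHeapFreeRatio and
# -XX:MaxHeapFreeRatio: after a GC event the heap grows when free space
# is below the minimum ratio and may shrink when it is above the maximum.

def heap_resize_action(used, committed, min_free_pct=40, max_free_pct=70):
    free_pct = 100.0 * (committed - used) / committed
    if free_pct < min_free_pct:
        return "expand"
    if free_pct > max_free_pct:
        return "shrink"
    return "keep"

# 80% used leaves only 20% free, below the 40% default, so expand:
print(heap_resize_action(used=800, committed=1000))   # expand
# With the aggressive 5/10 settings above, 20% free exceeds the 10%
# maximum, so the heap is shrunk toward the target instead:
print(heap_resize_action(800, 1000, min_free_pct=5, max_free_pct=10))  # shrink
```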
The young generation region of the heap is used for new objects. GC is performed in this region more often than in other regions. If the size for the young generation is too low, then a large number of minor GCs are performed. If the size is too high, then only full GCs are performed, which can take a long time to complete. It is recommended that you keep the size for the young generation greater than 25% and less than 50% of the overall heap size. The following examples show how to set the initial size of the young generation to 256 MB using various units: -XX:NewSize=256m -XX:NewSize=262144k -XX:NewSize=268435456 The -XX:NewSize option is equivalent to -Xmn. -XX:ParallelGCThreads=threads Sets the number of the stop-the-world (STW) worker threads. The default value depends on the number of CPUs available to the JVM and the garbage collector selected. For example, to set the number of threads for G1 GC to 2, specify the following option: -XX:ParallelGCThreads=2 -XX:+ParallelRefProcEnabled Enables parallel reference processing. By default, this option is disabled. -XX:+PrintAdaptiveSizePolicy Enables printing of information about adaptive-generation sizing. By default, this option is disabled. -XX:+ScavengeBeforeFullGC Enables GC of the young generation before each full GC. This option is enabled by default. It is recommended that you don't disable it, because scavenging the young generation before a full GC can reduce the number of objects reachable from the old generation space into the young generation space. To disable GC of the young generation before each full GC, specify the option -XX:-ScavengeBeforeFullGC. -XX:SoftRefLRUPolicyMSPerMB=time Sets the amount of time (in milliseconds) a softly reachable object is kept active on the heap after the last time it was referenced. The default value is one second of lifetime per free megabyte in the heap. 
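The generation split implied by -XX:NewRatio (described earlier) is simple arithmetic: a ratio of N makes the old generation N times the size of the young generation, so the young generation gets 1/(N+1) of the heap. A quick illustrative check against the 25%-50% guidance above:

```python
# Illustrative arithmetic for -XX:NewRatio: a ratio of N means the old
# generation is N times the size of the young generation.

def young_gen_fraction(new_ratio):
    return 1.0 / (new_ratio + 1)

# Default NewRatio=2 gives the young generation 1/3 of the heap,
# which satisfies the 25%-50% guidance above.
frac = young_gen_fraction(2)
print(round(frac, 4))      # 0.3333
print(0.25 < frac < 0.50)  # True
# NewRatio=1 gives a 50/50 split.
print(young_gen_fraction(1))  # 0.5
```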
The -XX:SoftRefLRUPolicyMSPerMB option accepts integer values representing milliseconds per one megabyte of the current heap size (for Java HotSpot Client VM) or the maximum possible heap size (for Java HotSpot Server VM). This difference means that the Client VM tends to flush soft references rather than grow the heap, whereas the Server VM tends to grow the heap rather than flush soft references. In the latter case, the value of the -Xmx option has a significant effect on how quickly soft references are garbage collected. The following example shows how to set the value to 2.5 seconds: -XX:SoftRefLRUPolicyMSPerMB=2500 -XX:-ShrinkHeapInSteps Incrementally reduces the Java heap to the target size, specified by the option -XX:MaxHeapFreeRatio. This option is enabled by default. If disabled, then it immediately reduces the Java heap to the target size instead of requiring multiple garbage collection cycles. Disable this option if you want to minimize the Java heap size. You will likely encounter performance degradation when this option is disabled. See Performance Tuning Examples for a description of using the MaxHeapFreeRatio option to keep the Java heap small by reducing the dynamic footprint for embedded applications. -XX:StringDeduplicationAgeThreshold=threshold Identifies String objects reaching the specified age that are considered candidates for deduplication. An object's age is a measure of how many times it has survived garbage collection. This is sometimes referred to as tenuring. Note: String objects that are promoted to an old heap region before this age has been reached are always considered candidates for deduplication. The default value for this option is 3. See the -XX:+UseStringDeduplication option. -XX:SurvivorRatio=ratio Sets the ratio between eden space size and survivor space size. By default, this option is set to 8. 
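Assuming the conventional layout of one eden space plus two survivor spaces (an assumption about the collector layout, not stated in the text above), the split implied by -XX:SurvivorRatio can be sketched as:

```python
# Illustrative arithmetic for -XX:SurvivorRatio, assuming one eden space
# plus two survivor spaces, with eden = ratio * (one survivor space).

def young_gen_split(young_size, survivor_ratio=8):
    survivor = young_size / (survivor_ratio + 2)
    eden = survivor * survivor_ratio
    return eden, survivor

# A 100 MB young generation with the default ratio of 8 gives an 80 MB
# eden and two 10 MB survivor spaces:
eden, survivor = young_gen_split(100, survivor_ratio=8)
print(eden, survivor)  # 80.0 10.0
```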
The following example shows how to set the eden/survivor space ratio to 4: -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=percent Sets the desired percentage of survivor space (0 to 100) used after young garbage collection. By default, this option is set to 50%. The following example shows how to set the target survivor space ratio to 30%: -XX:TargetSurvivorRatio=30 -XX:TLABSize=size Sets the initial size (in bytes) of a thread-local allocation buffer (TLAB). Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, or g or G to indicate gigabytes. If this option is set to 0, then the JVM selects the initial size automatically. The following example shows how to set the initial TLAB size to 512 KB: -XX:TLABSize=512k -XX:+UseAdaptiveSizePolicy Enables the use of adaptive generation sizing. This option is enabled by default. To disable adaptive generation sizing, specify -XX:-UseAdaptiveSizePolicy and set the size of the memory allocation pool explicitly. See the -XX:SurvivorRatio option. -XX:+UseG1GC Enables the use of the garbage-first (G1) garbage collector. It's a server-style garbage collector, targeted for multiprocessor machines with a large amount of RAM. This option meets GC pause time goals with high probability, while maintaining good throughput. The G1 collector is recommended for applications requiring large heaps (sizes of around 6 GB or larger) with limited GC latency requirements (a stable and predictable pause time below 0.5 seconds). By default, this option is enabled and G1 is used as the default garbage collector. -XX:+UseGCOverheadLimit Enables the use of a policy that limits the proportion of time spent by the JVM on GC before an OutOfMemoryError exception is thrown. This option is enabled, by default, and the parallel GC will throw an OutOfMemoryError if more than 98% of the total time is spent on garbage collection and less than 2% of the heap is recovered. 
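The 98%/2% rule quoted above for -XX:+UseGCOverheadLimit can be expressed as a simple predicate (illustrative only; the real policy evaluates these fractions over consecutive collections):

```python
# Illustrative predicate for the -XX:+UseGCOverheadLimit policy: the
# parallel GC gives up with an OutOfMemoryError when more than 98% of
# total time goes to GC and less than 2% of the heap is recovered.

def gc_overhead_limit_exceeded(gc_time_fraction, heap_recovered_fraction):
    return gc_time_fraction > 0.98 and heap_recovered_fraction < 0.02

print(gc_overhead_limit_exceeded(0.99, 0.01))  # True  -> OutOfMemoryError
print(gc_overhead_limit_exceeded(0.50, 0.30))  # False -> keep running
```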
When the heap is small, this feature can be used to prevent applications from running for long periods of time with little or no progress. To disable this option, specify the option -XX:-UseGCOverheadLimit. -XX:+UseNUMA Enables performance optimization of an application on a machine with nonuniform memory architecture (NUMA) by increasing the application's use of lower latency memory. By default, this option is disabled and no optimization for NUMA is made. The option is available only when the parallel garbage collector is used (-XX:+UseParallelGC). -XX:+UseParallelGC Enables the use of the parallel scavenge garbage collector (also known as the throughput collector) to improve the performance of your application by leveraging multiple processors. By default, this option is disabled and the default collector is used. -XX:+UseSerialGC Enables the use of the serial garbage collector. This is generally the best choice for small and simple applications that don't require any special functionality from garbage collection. By default, this option is disabled and the default collector is used. -XX:+UseStringDeduplication Enables string deduplication. By default, this option is disabled. To use this option, you must enable the garbage-first (G1) garbage collector. String deduplication reduces the memory footprint of String objects on the Java heap by taking advantage of the fact that many String objects are identical. Instead of each String object pointing to its own character array, identical String objects can point to and share the same character array. -XX:+UseTLAB Enables the use of thread-local allocation blocks (TLABs) in the young generation space. This option is enabled by default. To disable the use of TLABs, specify the option -XX:-UseTLAB. -XX:+UseZGC Enables the use of the Z garbage collector (ZGC). This is a low latency garbage collector, providing max pause times of a few milliseconds, at some throughput cost. 
Pause times are independent of what heap size is used. Supports heap sizes from 8 MB to 16 TB. -XX:ZAllocationSpikeTolerance=factor Sets the allocation spike tolerance for ZGC. By default, this option is set to 2.0. This factor describes the level of allocation spikes to expect. For example, using a factor of 3.0 means the current allocation rate can be expected to triple at any time. -XX:ZCollectionInterval=seconds Sets the maximum interval (in seconds) between two GC cycles when using ZGC. By default, this option is set to 0 (disabled). -XX:ZFragmentationLimit=percent Sets the maximum acceptable heap fragmentation (in percent) for ZGC. By default, this option is set to 25. Using a lower value will cause the heap to be compacted more aggressively, to reclaim more memory at the cost of using more CPU time. -XX:+ZProactive Enables proactive GC cycles when using ZGC. By default, this option is enabled. ZGC will start a proactive GC cycle if doing so is expected to have minimal impact on the running application. This is useful if the application is mostly idle or allocates very few objects, but you still want to keep the heap size down and allow reference processing to happen even when there is a lot of free space on the heap. -XX:+ZUncommit Enables uncommitting of unused heap memory when using ZGC. By default, this option is enabled. Uncommitting unused heap memory will lower the memory footprint of the JVM, and make that memory available for other processes to use. -XX:ZUncommitDelay=seconds Sets the amount of time (in seconds) that heap memory must have been unused before being uncommitted. By default, this option is set to 300 (5 minutes). Committing and uncommitting memory are relatively expensive operations. Using a lower value will cause heap memory to be uncommitted earlier, at the risk of soon having to commit it again. DEPRECATED JAVA OPTIONS These java options are deprecated and might be removed in a future JDK release.

They're still accepted and acted upon, but a warning is issued when they're used. -Xfuture Enables strict class-file format checks that enforce close conformance to the class-file format specification. Developers should use this flag when developing new code. Stricter checks may become the default in future releases. -Xloggc:filename Sets the file to which verbose GC events information should be redirected for logging. The -Xloggc option overrides -verbose:gc if both are given with the same java command. -Xloggc:filename is replaced by -Xlog:gc:filename. See Enable Logging with the JVM Unified Logging Framework. Example: -Xlog:gc:garbage-collection.log -XX:+FlightRecorder Enables the use of Java Flight Recorder (JFR) during the runtime of the application. Since JDK 8u40 this option has not been required to use JFR. -XX:InitialRAMFraction=ratio Sets the initial amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics as a ratio of the maximum amount determined as described in the -XX:MaxRAM option. The default value is 64. Use the option -XX:InitialRAMPercentage instead. -XX:MaxRAMFraction=ratio Sets the maximum amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics as a fraction of the maximum amount determined as described in the -XX:MaxRAM option. The default value is 4. Specifying this option disables automatic use of compressed oops if the combined result of this and other options influencing the maximum amount of memory is larger than the range of memory addressable by compressed oops. See -XX:UseCompressedOops for further information about compressed oops. Use the option -XX:MaxRAMPercentage instead. -XX:MinRAMFraction=ratio Sets the maximum amount of memory that the JVM may use for the Java heap before applying ergonomics heuristics as a fraction of the maximum amount determined as described in the -XX:MaxRAM option for small heaps. A small heap is a heap of approximately 125 MB. 
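The deprecated fraction options relate to their percentage replacements by percentage = 100 / fraction. An illustrative check (the InitialRAMPercentage equivalence of 1.5625 is an inference from this formula, not stated in the text above):

```python
# The deprecated -XX:*RAMFraction options express the same quantity as
# the replacement -XX:*RAMPercentage options: percentage = 100 / fraction.

def fraction_to_percentage(fraction):
    return 100.0 / fraction

print(fraction_to_percentage(4))   # 25.0   (MaxRAMFraction default -> MaxRAMPercentage default)
print(fraction_to_percentage(2))   # 50.0   (MinRAMFraction default -> MinRAMPercentage default)
print(fraction_to_percentage(64))  # 1.5625 (InitialRAMFraction default; inferred equivalence)
```

Note that the defaults line up: MaxRAMFraction=4 corresponds to the 25 percent default of -XX:MaxRAMPercentage, and MinRAMFraction=2 to the 50 percent default of -XX:MinRAMPercentage.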
The default value is 2. Use the option -XX:MinRAMPercentage instead. OBSOLETE JAVA OPTIONS These java options are still accepted but ignored, and a warning is issued when they're used. --illegal-access=parameter Controlled relaxed strong encapsulation, as defined in JEP 261 [https://openjdk.org/jeps/261#Relaxed-strong-encapsulation]. This option was deprecated in JDK 16 by JEP 396 [https://openjdk.org/jeps/396] and made obsolete in JDK 17 by JEP 403 [https://openjdk.org/jeps/403]. -XX:+UseHugeTLBFS Linux only: This option is the equivalent of specifying -XX:+UseLargePages. This option is disabled by default. This option pre-allocates all large pages up-front, when memory is reserved; consequently the JVM can't dynamically grow or shrink large pages memory areas; see -XX:UseTransparentHugePages if you want this behavior. -XX:+UseSHM Linux only: Enables the JVM to use shared memory to set up large pages. REMOVED JAVA OPTIONS No documented java options have been removed in JDK 22. For the lists and descriptions of options removed in previous releases see the Removed Java Options section in: β’ The java Command, Release 21 [https://docs.oracle.com/en/java/javase/21/docs/specs/man/java.html] β’ The java Command, Release 20 [https://docs.oracle.com/en/java/javase/20/docs/specs/man/java.html] β’ The java Command, Release 19 [https://docs.oracle.com/en/java/javase/19/docs/specs/man/java.html] β’ The java Command, Release 18 [https://docs.oracle.com/en/java/javase/18/docs/specs/man/java.html] β’ The java Command, Release 17 [https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html] β’ The java Command, Release 16 [https://docs.oracle.com/en/java/javase/16/docs/specs/man/java.html] β’ The java Command, Release 15 [https://docs.oracle.com/en/java/javase/15/docs/specs/man/java.html] β’ The java Command, Release 14 [https://docs.oracle.com/en/java/javase/14/docs/specs/man/java.html] β’ The java Command, Release 13 
[https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.html] • Java Platform, Standard Edition Tools Reference, Release 12 [https://docs.oracle.com/en/java/javase/12/tools/java.html#GUID-3B1CE181-CD30-4178-9602-230B800D4FAE] • Java Platform, Standard Edition Tools Reference, Release 11 [https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-741FC470-AA3E-494A-8D2B-1B1FE4A990D1] • Java Platform, Standard Edition Tools Reference, Release 10 [https://docs.oracle.com/javase/10/tools/java.htm#JSWOR624] • Java Platform, Standard Edition Tools Reference, Release 9 [https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624] • Java Platform, Standard Edition Tools Reference, Release 8 for Oracle JDK on Windows [https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html#BGBCIEFC] • Java Platform, Standard Edition Tools Reference, Release 8 for Oracle JDK on Solaris, Linux, and macOS [https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html#BGBCIEFC] JAVA COMMAND-LINE ARGUMENT FILES You can shorten or simplify the java command by using @ argument files to specify one or more text files that contain arguments, such as options and class names, which are passed to the java command. This lets you create java commands of any length on any operating system. In the command line, use the at sign (@) prefix to identify an argument file that contains java options and class names. When the java command encounters a file beginning with the at sign (@), it expands the contents of that file into an argument list just as they would be specified on the command line. The java launcher expands the argument file contents until it encounters the --disable-@files option. You can use the --disable-@files option anywhere on the command line, including in an argument file, to stop @ argument file expansion.
The following items describe the syntax of java argument files: • The argument file must contain only ASCII characters or characters in a system default encoding that's ASCII friendly, such as UTF-8. • The argument file size must not exceed MAXINT (2,147,483,647) bytes. • The launcher doesn't expand wildcards that are present within an argument file. • Use white space or newline characters to separate arguments included in the file. • White space includes a white space character, \t, \n, \r, and \f. For example, a path with a space, such as c:\Program Files, can be specified as either "c:\\Program Files" or, to avoid an escape, c:\Program" "Files. • Any option that contains spaces, such as a path component, must be enclosed in its entirety within quotation ('"') characters. • A string within quotation marks may contain the characters \n, \r, \t, and \f. They are converted to their respective ASCII codes. • If a file name contains embedded spaces, then put the whole file name in double quotation marks. • File names in an argument file are relative to the current directory, not to the location of the argument file. • Use the number sign # in the argument file to identify comments. All characters following the # are ignored until the end of line. • Additional at sign @ prefixes to @-prefixed options act as an escape (the first @ is removed and the rest of the arguments are presented to the launcher literally). • Lines may be continued using the continuation character (\) at the end-of-line. The two lines are concatenated with the leading white spaces trimmed. To prevent trimming the leading white spaces, a continuation character (\) may be placed at the first column. • Because backslash (\) is an escape character, a backslash character must be escaped with another backslash character. • Partial quote is allowed and is closed by an end-of-file.
β’ An open quote stops at end-of-line unless \ is the last character, which then joins the next line by removing all leading white space characters. β’ Wildcards (*) aren't allowed in these lists (such as specifying *.java). β’ Use of the at sign (@) to recursively interpret files isn't supported. Example of Open or Partial Quotes in an Argument File In the argument file, -cp "lib/ cool/ app/ jars this is interpreted as: -cp lib/cool/app/jars Example of a Backslash Character Escaped with Another Backslash Character in an Argument File To output the following: -cp c:\Program Files (x86)\Java\jre\lib\ext;c:\Program Files\Java\jre9\lib\ext The backslash character must be specified in the argument file as: -cp "c:\\Program Files (x86)\\Java\\jre\\lib\\ext;c:\\Program Files\\Java\\jre9\\lib\\ext" Example of an EOL Escape Used to Force Concatenation of Lines in an Argument File In the argument file, -cp "/lib/cool app/jars:\ /lib/another app/jars" This is interpreted as: -cp /lib/cool app/jars:/lib/another app/jars Example of Line Continuation with Leading Spaces in an Argument File In the argument file, -cp "/lib/cool\ \app/jars" This is interpreted as: -cp /lib/cool app/jars Examples of Using Single Argument File You can use a single argument file, such as myargumentfile in the following example, to hold all required java arguments: java @myargumentfile Examples of Using Argument Files with Paths You can include relative paths in argument files; however, they're relative to the current working directory and not to the paths of the argument files themselves. In the following example, path1/options and path2/options represent argument files with different paths. 
Any relative paths that they contain are relative to the current working directory and not to the argument files: java @path1/options @path2/classes CODE HEAP STATE ANALYTICS Overview There are occasions when having insight into the current state of the JVM code heap would be helpful to answer questions such as: • Why was the JIT turned off and then on again and again? • Where has all the code heap space gone? • Why is the method sweeper not working effectively? To provide this insight, a code heap state analytics feature has been implemented that enables on-the-fly analysis of the code heap. The analytics process is divided into two parts. The first part examines the entire code heap and aggregates all information that is believed to be useful or important. The second part consists of several independent steps that print the collected information with an emphasis on different aspects of the data. Data collection and printing are done on an "on request" basis. Syntax Requests for real-time, on-the-fly analysis can be issued with the following command: jcmd pid Compiler.CodeHeap_Analytics [function] [granularity] If you are only interested in what the code heap looks like after running a sample workload, you can use the command line option: -Xlog:codecache=Trace To see the code heap state when a "CodeCache full" condition exists, start the VM with the command line option: -Xlog:codecache=Debug See CodeHeap State Analytics (OpenJDK) [https://bugs.openjdk.org/secure/attachment/75649/JVM_CodeHeap_StateAnalytics_V2.pdf] for a detailed description of the code heap state analytics feature, the supported functions, and the granularity options. ENABLE LOGGING WITH THE JVM UNIFIED LOGGING FRAMEWORK You use the -Xlog option to configure or enable logging with the Java Virtual Machine (JVM) unified logging framework. The framework provides a common logging system for all components of the JVM.
GC logging for the JVM has been changed to use the new logging framework. The mapping of old GC flags to the corresponding new Xlog configuration is described in Convert GC Logging Flags to Xlog. In addition, runtime logging has also been changed to use the JVM unified logging framework. The mapping of legacy runtime logging flags to the corresponding new Xlog configuration is described in Convert Runtime Logging Flags to Xlog. The following provides a quick reference to the -Xlog command and syntax for options: -Xlog Enables JVM logging on an info level. -Xlog:help Prints -Xlog usage syntax and available tags, levels, and decorators along with example command lines with explanations. -Xlog:disable Turns off all logging and clears all configuration of the logging framework including the default configuration for warnings and errors. -Xlog[:option] Applies multiple arguments in the order that they appear on the command line. Multiple -Xlog arguments for the same output override each other in their given order. The option is set as: [tag-selection][:[output][:[decorators][:output-options]]] Omitting the tag-selection defaults to a tag-set of all and a level of info. The tag-selection is either all or of the form tag[+...]. The all tag is a meta tag consisting of all tag-sets available. The asterisk * in a tag-set definition denotes a wildcard tag match. Matching with a wildcard selects all tag-sets that contain at least the specified tags. Without the wildcard, only exact matches of the specified tag-sets are selected. The output-options are filecount=file-count, filesize=file-size (with an optional K, M, or G suffix), and foldmultilines=<true|false>. When foldmultilines is true, a log event that consists of multiple lines will be folded into a single line by replacing newline characters with the sequence '\' and 'n' in the output. Existing single backslash characters will also be replaced with a sequence of two backslashes so that the conversion can be reversed.
This option is safe to use with UTF-8 character encodings, but other encodings may not work. For example, it may incorrectly convert multi-byte sequences in Shift JIS and BIG5. Default Configuration When the -Xlog option and nothing else is specified on the command line, the default configuration is used. The default configuration logs all messages with a level that matches either warning or error regardless of what tags the message is associated with. The default configuration is equivalent to entering the following on the command line: -Xlog:all=warning:stdout:uptime,level,tags Controlling Logging at Runtime Logging can also be controlled at run time through Diagnostic Commands (with the jcmd utility). Everything that can be specified on the command line can also be specified dynamically with the VM.log command. As the diagnostic commands are automatically exposed as MBeans, you can use JMX to change the logging configuration at run time. -Xlog Tags and Levels Each log message has a level and a tag set associated with it. The level of the message corresponds to its details, and the tag set corresponds to what the message contains or which JVM component it involves (such as gc, jit, or os). Mapping GC flags to the Xlog configuration is described in Convert GC Logging Flags to Xlog. Mapping legacy runtime logging flags to the corresponding Xlog configuration is described in Convert Runtime Logging Flags to Xlog. Available log levels: • off • trace • debug • info • warning • error Available log tags: There are dozens of log tags which, in the right combinations, enable a range of logging output. The full set of available log tags can be seen using -Xlog:help. Specifying all instead of a tag combination matches all tag combinations. -Xlog Output The -Xlog option supports the following types of outputs: • stdout --- Sends output to stdout • stderr --- Sends output to stderr • file=filename --- Sends output to text file(s).
When using file=filename, specifying %p and/or %t in the file name expands to the JVM's PID and startup timestamp, respectively. You can also configure text files to handle file rotation based on file size and a number of files to rotate. For example, to rotate the log file every 10 MB and keep 5 files in rotation, specify the options filesize=10M, filecount=5. The target size of the files isn't guaranteed to be exact; it's an approximate value. Files are rotated by default with up to 5 rotated files of target size 20 MB, unless configured otherwise. Specifying filecount=0 means that the log file shouldn't be rotated. There's a possibility of the pre-existing log file getting overwritten. -Xlog Output Mode By default, logging messages are output synchronously: each log message is written to the designated output when the logging call is made. But you can instead use asynchronous logging mode by specifying: -Xlog:async Write all logging asynchronously. In asynchronous logging mode, log sites enqueue all logging messages to an intermediate buffer and a standalone thread is responsible for flushing them to the corresponding outputs. The intermediate buffer is bounded and on buffer exhaustion the enqueuing message is discarded. Log entry write operations are guaranteed non-blocking. The option -XX:AsyncLogBufferSize=N specifies the memory budget in bytes for the intermediate buffer. The default value should be big enough to cater for most cases. Users can provide a custom value to trade memory overhead for log accuracy if they need to. Decorations Logging messages are decorated with information about the message. You can configure each output to use a custom set of decorators. The order of the output is always the same as listed in the table. You can configure the decorations to be used at run time. Decorations are prepended to the log message. For example: [6.567s][info][gc,old] Old collection complete Omitting decorators defaults to uptime, level, and tags.
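The default decoration set (uptime, level, tags) produces lines like the example above. A minimal sketch of that formatting (illustrative only, not the JVM's implementation):

```python
# Illustrative rendering of the default decorations (uptime, level,
# tags) prepended to a log message, matching the example above.

def decorate(message, uptime_seconds, level, tags):
    return f"[{uptime_seconds:.3f}s][{level}][{','.join(tags)}] {message}"

print(decorate("Old collection complete", 6.567, "info", ["gc", "old"]))
# [6.567s][info][gc,old] Old collection complete
```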
The none decorator is special and is used to turn off all decorations. The decorators time (t), utctime (utc), uptime (u), timemillis (tm), uptimemillis (um), timenanos (tn), uptimenanos (un), hostname (hn), pid (p), tid (ti), level (l), and tags (tg) can also be specified as none for no decoration.

Logging Messages Decorations

    Decoration           Description
    time or t            Current time and date in ISO-8601 format.
    utctime or utc       Universal Time Coordinated or Coordinated Universal Time.
    uptime or u          Time since the start of the JVM in seconds and milliseconds. For example, 6.567s.
    timemillis or tm     The same value as generated by System.currentTimeMillis().
    uptimemillis or um   Milliseconds since the JVM started.
    timenanos or tn      The same value as generated by System.nanoTime().
    uptimenanos or un    Nanoseconds since the JVM started.
    hostname or hn       The host name.
    pid or p             The process identifier.
    tid or ti            The thread identifier.
    level or l           The level associated with the log message.
    tags or tg           The tag-set associated with the log message.

Convert GC Logging Flags to Xlog

Legacy GC Logging Flags to Xlog Configuration Mapping

    Legacy GC Flag                      Xlog Configuration            Comment
    G1PrintHeapRegions                  -Xlog:gc+region=trace         Not Applicable
    GCLogFileSize                       No configuration available    Log rotation is handled by the framework.
    NumberOfGCLogFiles                  Not Applicable                Log rotation is handled by the framework.
    PrintAdaptiveSizePolicy             -Xlog:gc+ergo*=level          Use a level of debug for most of the information, or a level of trace for all of what was logged for PrintAdaptiveSizePolicy.
    PrintGC                             -Xlog:gc                      Not Applicable
    PrintGCApplicationConcurrentTime    -Xlog:safepoint               Note that PrintGCApplicationConcurrentTime and PrintGCApplicationStoppedTime are logged on the same tag and aren't separated in the new logging.
    PrintGCApplicationStoppedTime       -Xlog:safepoint               See the comment for PrintGCApplicationConcurrentTime.
    PrintGCCause                        Not Applicable                GC cause is now always logged.
    PrintGCDateStamps                   Not Applicable                Date stamps are logged by the framework.
    PrintGCDetails                      -Xlog:gc*                     Not Applicable
    PrintGCID                           Not Applicable                GC ID is now always logged.
    PrintGCTaskTimeStamps               -Xlog:gc+task*=debug          Not Applicable
    PrintGCTimeStamps                   Not Applicable                Time stamps are logged by the framework.
    PrintHeapAtGC                       -Xlog:gc+heap=trace           Not Applicable
    PrintReferenceGC                    -Xlog:gc+ref*=debug           Note that in the old logging, PrintReferenceGC had an effect only if PrintGCDetails was also enabled.
    PrintStringDeduplicationStatistics  -Xlog:gc+stringdedup*=debug   Not Applicable
    PrintTenuringDistribution           -Xlog:gc+age*=level           Use a level of debug for the most relevant information, or a level of trace for all of what was logged for PrintTenuringDistribution.
    UseGCLogFileRotation                Not Applicable                Log rotation is handled by the framework.

Convert Runtime Logging Flags to Xlog

These legacy flags are no longer recognized and will cause an error if used directly. Use their unified logging equivalent instead.

Runtime Logging Flags to Xlog Configuration Mapping

    Legacy Runtime Flag        Xlog Configuration         Comment
    TraceExceptions            -Xlog:exceptions=info      Not Applicable
    TraceClassLoading          -Xlog:class+load=level     Use level=info for regular information, or level=debug for additional information. In Unified Logging syntax, -verbose:class equals -Xlog:class+load=info,class+unload=info.
    TraceClassLoadingPreorder  -Xlog:class+preorder=debug Not Applicable
    TraceClassUnloading        -Xlog:class+unload=level   Use level=info for regular information, or level=trace for additional information.
In Unified Logging syntax, -verbose:class equals -Xlog:class+load=info,class+unload=info.
    VerboseVerification
        -Xlog:verification=info
    TraceClassPaths
        -Xlog:class+path=info
    TraceClassResolution
        -Xlog:class+resolve=debug
    TraceClassInitialization
        -Xlog:class+init=info
    TraceLoaderConstraints
        -Xlog:class+loader+constraints=info
    TraceClassLoaderData
        -Xlog:class+loader+data=level. Use level=debug for regular information or level=trace for additional information.
    TraceSafepointCleanupTime
        -Xlog:safepoint+cleanup=info
    TraceSafepoint
        -Xlog:safepoint=debug
    TraceMonitorInflation
        -Xlog:monitorinflation=debug
    TraceRedefineClasses
        -Xlog:redefine+class*=level. level=info, debug, and trace provide increasing amounts of information.

-Xlog Usage Examples

The following are -Xlog examples.

-Xlog
    Logs all messages by using the info level to stdout with uptime, levels, and tags decorations. This is equivalent to using: -Xlog:all=info:stdout:uptime,levels,tags

-Xlog:gc
    Logs messages tagged with the gc tag using the info level to stdout. The default configuration for all other messages at level warning is in effect.

-Xlog:gc,safepoint
    Logs messages tagged either with the gc or safepoint tags, both using the info level, to stdout, with default decorations. Messages tagged with both gc and safepoint won't be logged.

-Xlog:gc+ref=debug
    Logs messages tagged with both gc and ref tags, using the debug level to stdout, with default decorations. Messages tagged only with one of the two tags won't be logged.

-Xlog:gc=debug:file=gc.txt:none
    Logs messages tagged with the gc tag using the debug level to a file called gc.txt with no decorations. The default configuration for all other messages at level warning is still in effect.
-Xlog:gc=trace:file=gctrace.txt:uptimemillis,pid:filecount=5,filesize=1024
    Logs messages tagged with the gc tag using the trace level to a rotating file set with 5 files with size 1 MB with the base name gctrace.txt and uses decorations uptimemillis and pid. The default configuration for all other messages at level warning is still in effect.

-Xlog:gc::uptime,tid
    Logs messages tagged with the gc tag using the default info level to the default output stdout and uses decorations uptime and tid. The default configuration for all other messages at level warning is still in effect.

-Xlog:gc*=info,safepoint*=off
    Logs messages tagged with at least gc using the info level, but turns off logging of messages tagged with safepoint. Messages tagged with both gc and safepoint won't be logged.

-Xlog:disable -Xlog:safepoint=trace:safepointtrace.txt
    Turns off all logging, including warnings and errors, and then enables messages tagged with safepoint using trace level to the file safepointtrace.txt. The default configuration doesn't apply, because the command line started with -Xlog:disable.

Complex -Xlog Usage Examples

The following describes a few complex examples of using the -Xlog option.

-Xlog:gc+class*=debug
    Logs messages tagged with at least gc and class tags using the debug level to stdout. The default configuration for all other messages at the level warning is still in effect.

-Xlog:gc+meta*=trace,class*=off:file=gcmetatrace.txt
    Logs messages tagged with at least the gc and meta tags using the trace level to the file gcmetatrace.txt, but turns off all messages tagged with class. Messages tagged with gc, meta, and class won't be logged, as class* is set to off. The default configuration for all other messages at level warning is in effect, except for those that include class.

-Xlog:gc+meta=trace
    Logs messages tagged with exactly the gc and meta tags using the trace level to stdout. The default configuration for all other messages at level warning is still in effect.
-Xlog:gc+class+heap*=debug,meta*=warning,threads*=off
    Logs messages tagged with at least the gc, class, and heap tags using the debug level to stdout, but logs messages tagged with meta only at the warning level, and turns off all messages tagged with threads. The default configuration for all other messages at the level warning is in effect, except for those that include threads.

VALIDATE JAVA VIRTUAL MACHINE FLAG ARGUMENTS

The values provided to all Java Virtual Machine (JVM) command-line flags are validated and, if the input value is invalid or out-of-range, then an appropriate error message is displayed. Whether they're set ergonomically, in a command line, by an input tool, or through the APIs (for example, classes contained in the package java.lang.management), the values provided to all JVM command-line flags are validated. Ergonomics are described in Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide.

Range and constraints are validated either when all flags have their values set during JVM initialization or when a flag's value is changed during runtime (for example, using the jcmd tool). The JVM is terminated if a value violates either the range or constraint check, and an appropriate error message is printed on the error stream.

For example, if a flag violates a range or a constraint check, then the JVM exits with an error:

    java -XX:AllocatePrefetchStyle=5 -version
    intx AllocatePrefetchStyle=5 is outside the allowed range [ 0 ... 3 ]
    Improperly specified VM option 'AllocatePrefetchStyle=5'
    Error: Could not create the Java Virtual Machine.
    Error: A fatal exception has occurred. Program will exit.

The flag -XX:+PrintFlagsRanges prints the range of all the flags. This flag allows automatic testing of the flags by the values provided by the ranges. For the flags that have the ranges specified, the type, name, and the actual range is printed in the output. For example,

    intx ThreadStackSize [ 0 ...
9007199254740987 ] {pd product}

For the flags that don't have the range specified, the values aren't displayed in the print out. For example:

    size_t NewSize [ ... ] {product}

This helps to identify the flags that need to be implemented. The automatic testing framework can skip those flags that don't have values and aren't implemented.

LARGE PAGES

Large pages, also known as huge pages, are memory pages that are significantly larger than the standard memory page size (which varies depending on the processor and operating system). Large pages optimize processor Translation-Lookaside Buffers.

A Translation-Lookaside Buffer (TLB) is a page translation cache that holds the most-recently used virtual-to-physical address translations. A TLB is a scarce system resource. A TLB miss can be costly because the processor must then read from the hierarchical page table, which may require multiple memory accesses. By using a larger memory page size, a single TLB entry can represent a larger memory range. This results in less pressure on a TLB, and memory-intensive applications may have better performance.

However, using large pages can negatively affect system performance. For example, when a large amount of memory is pinned by an application, it may create a shortage of regular memory and cause excessive paging in other applications and slow down the entire system. Also, a system that has been up for a long time could produce excessive fragmentation, which could make it impossible to reserve enough large page memory. When this happens, either the OS or JVM reverts to using regular pages.

Linux and Windows support large pages.

Large Pages Support for Linux

Linux supports large pages since version 2.6. To check if your environment supports large pages, try the following:

    # cat /proc/meminfo | grep Huge
    HugePages_Total: 0
    HugePages_Free: 0
    ...
    Hugepagesize: 2048 kB

If the output contains items prefixed with "Huge", then your system supports large pages.
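The Hugepagesize line can be turned into a byte count with a little shell arithmetic. The sketch below parses a captured /proc/meminfo excerpt (sample data embedded in the script, not read from a live system); the field names are the standard Linux ones, but the values are only an example:

```shell
# Parse a captured /proc/meminfo excerpt (sample data, not read live).
meminfo='HugePages_Total:       0
HugePages_Free:        0
Hugepagesize:       2048 kB'

# Hugepagesize is reported in kB; multiply by 1024 to get bytes.
printf '%s\n' "$meminfo" | awk '/^Hugepagesize/ { print $2 * 1024 }'
```

With the sample value of 2048 kB this prints 2097152, i.e. a 2 MB default large page size.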
The values may vary depending on environment. The Hugepagesize field shows the default large page size in your environment, and the other fields show details for large pages of this size. Newer kernels have support for multiple large page sizes. To list the supported page sizes, run this:

    # ls /sys/kernel/mm/hugepages/
    hugepages-1048576kB  hugepages-2048kB

The above environment supports 2 MB and 1 GB large pages, but they need to be configured so that the JVM can use them. When using large pages and not enabling transparent huge pages (option -XX:+UseTransparentHugePages), the number of large pages must be pre-allocated. For example, to enable 8 GB of memory to be backed by 2 MB large pages, login as root and run:

    # echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

It is always recommended to check the value of nr_hugepages after the request to make sure the kernel was able to allocate the requested number of large pages.

Note: The values contained in /proc and /sys reset after you reboot your system, so you may want to set them in an initialization script (for example, rc.local or sysctl.conf).

If you configure the OS kernel parameters to enable use of large pages, the Java processes may allocate large pages for the Java heap as well as other internal areas, for example:

    • Code cache
    • Marking bitmaps

Consequently, if you configure the nr_hugepages parameter to the size of the Java heap, then the JVM can still fail to allocate the heap using large pages because other areas such as the code cache might already have used some of the configured large pages.

Large Pages Support for Windows

To use large pages support on Windows, the administrator must first assign additional privileges to the user who is running the application:

    1. Select Control Panel, Administrative Tools, and then Local Security Policy.
    2. Select Local Policies and then User Rights Assignment.
    3. Double-click Lock pages in memory, then add users and/or groups.
    4. Reboot your system.
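The 4096 figure used in the Linux nr_hugepages example above is simple arithmetic: the requested memory divided by the large page size. A quick shell check of that calculation:

```shell
# How many 2 MB large pages are needed to back 8 GB of memory?
heap_bytes=$((8 * 1024 * 1024 * 1024))   # 8 GB
page_bytes=$((2 * 1024 * 1024))          # 2 MB large page
echo $((heap_bytes / page_bytes))        # prints 4096
```

The same arithmetic applies to any other supported page size, e.g. 1 GB pages would need only 8 entries for the same amount of memory.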
Note that these steps are required even if it's the administrator who's running the application, because administrators by default don't have the privilege to lock pages in memory.

APPLICATION CLASS DATA SHARING

Application Class Data Sharing (AppCDS) stores classes used by your applications in an archive file. Since these classes are stored in a format that can be loaded very quickly (compared to classes stored in a JAR file), AppCDS can improve the start-up time of your applications. In addition, AppCDS can reduce the runtime memory footprint by sharing parts of these classes across multiple processes.

Classes in the CDS archive are stored in an optimized format that's about 2 to 5 times larger than classes stored in JAR files or the JDK runtime image. Therefore, it's a good idea to archive only those classes that are actually used by your application. These usually are just a small portion of all available classes. For example, your application may use only a few APIs provided by a large library.

Using CDS Archives

By default, in most JDK distributions, unless -Xshare:off is specified, the JVM starts up with a default CDS archive, which is usually located in JAVA_HOME/lib/server/classes.jsa (or JAVA_HOME\bin\server\classes.jsa on Windows). This archive contains about 1300 core library classes that are used by most applications.

To use CDS for the exact set of classes used by your application, you can use the -XX:SharedArchiveFile option, which has the general form:

    -XX:SharedArchiveFile=<static_archive>:<dynamic_archive>

    • The <static_archive> overrides the default CDS archive.
    • The <dynamic_archive> provides additional classes that can be loaded on top of those in the <static_archive>.
    • On Windows, the above path delimiter : should be replaced with ;

(The names "static" and "dynamic" are used for historical reasons. The only significance is that the "static" archive is loaded first and the "dynamic" archive is loaded second.)
The JVM can use up to two archives. To use only a single <static_archive>, you can omit the <dynamic_archive> portion:

    -XX:SharedArchiveFile=<static_archive>

For convenience, the <dynamic_archive> records the location of the <static_archive>. Therefore, you can omit the <static_archive> by saying only:

    -XX:SharedArchiveFile=<dynamic_archive>

Manually Creating CDS Archives

CDS archives can be created manually using several methods:

    • -Xshare:dump
    • -XX:ArchiveClassesAtExit
    • jcmd VM.cds

One common operation in all these methods is a "trial run", where you run the application once to determine the classes that should be stored in the archive.

Creating a Static CDS Archive File with -Xshare:dump

The following steps create a static CDS archive file that contains all the classes used by the test.Hello application.

    1. Create a list of all classes used by the test.Hello application. The following command creates a file named hello.classlist that contains a list of all classes used by this application:

           java -Xshare:off -XX:DumpLoadedClassList=hello.classlist -cp hello.jar test.Hello

       The classpath specified by the -cp parameter must contain only JAR files.

    2. Create a static archive, named hello.jsa, that contains all the classes in hello.classlist:

           java -Xshare:dump -XX:SharedArchiveFile=hello.jsa -XX:SharedClassListFile=hello.classlist -cp hello.jar

    3. Run the application test.Hello with the archive hello.jsa:

           java -XX:SharedArchiveFile=hello.jsa -cp hello.jar test.Hello

    4. Optional: Verify that the test.Hello application is using the class contained in the hello.jsa shared archive:

           java -XX:SharedArchiveFile=hello.jsa -cp hello.jar -Xlog:class+load test.Hello

       The output of this command should contain the following text:

           [info][class,load] test.Hello source: shared objects file

By default, when the -Xshare:dump option is used, the JVM runs in interpreter-only mode (as if the -Xint option were specified).
This is required for generating deterministic output in the shared archive file, i.e., the exact same archive will be generated, bit-for-bit, every time you dump it. However, if deterministic output is not needed, and you have a large classlist, you can explicitly add -Xmixed to the command-line to enable the JIT compiler. This will speed up the archive creation.

Creating a Dynamic CDS Archive File with -XX:ArchiveClassesAtExit

Advantages of dynamic CDS archives are:

    • They usually use less disk space, since they don't need to store the classes that are already in the static archive.
    • They are created with one fewer step than the comparable static archive.

The following steps create a dynamic CDS archive file that contains the classes that are used by the test.Hello application, excluding those that are already in the default CDS archive.

    1. Create a dynamic CDS archive, named hello.jsa, that contains all the classes in hello.jar loaded by the application test.Hello:

           java -XX:ArchiveClassesAtExit=hello.jsa -cp hello.jar test.Hello

    2. Run the application test.Hello with the shared archive hello.jsa:

           java -XX:SharedArchiveFile=hello.jsa -cp hello.jar test.Hello

    3. Optional: Repeat step 4 of the previous section to verify that the test.Hello application is using the class contained in the hello.jsa shared archive.

It's also possible to create a dynamic CDS archive with a non-default static CDS archive. For example:

    java -XX:SharedArchiveFile=base.jsa -XX:ArchiveClassesAtExit=hello.jsa -cp hello.jar test.Hello

To run the application using this dynamic CDS archive:

    java -XX:SharedArchiveFile=base.jsa:hello.jsa -cp hello.jar test.Hello

(On Windows, the above path delimiter : should be replaced with ;)

As mentioned above, the name of the static archive can be skipped:

    java -XX:SharedArchiveFile=hello.jsa -cp hello.jar test.Hello

Creating CDS Archive Files with jcmd

The previous two sections require you to modify the application's start-up script in order to create a CDS archive.
Sometimes this could be difficult, for example, if the application's class path is set up by complex routines. The jcmd VM.cds command provides a less intrusive way for creating a CDS archive by connecting to a running JVM process. You can create either a static archive:

    jcmd <pid> VM.cds static_dump my_static_archive.jsa

or a dynamic archive:

    jcmd <pid> VM.cds dynamic_dump my_dynamic_archive.jsa

To use the resulting archive file in a subsequent run of the application without modifying the application's start-up script, you can use the following technique:

    env JAVA_TOOL_OPTIONS=-XX:SharedArchiveFile=my_static_archive.jsa bash app_start.sh

Note: to use jcmd <pid> VM.cds dynamic_dump, the JVM process identified by <pid> must be started with -XX:+RecordDynamicDumpInfo, which can also be passed to the application start-up script with the same technique:

    env JAVA_TOOL_OPTIONS=-XX:+RecordDynamicDumpInfo bash app_start.sh

Creating a Dynamic CDS Archive File with -XX:+AutoCreateSharedArchive

-XX:+AutoCreateSharedArchive is a more convenient way of creating/using CDS archives. Unlike the methods of manual CDS archive creation described in the previous section, with -XX:+AutoCreateSharedArchive, it's no longer necessary to have a separate trial run. Instead, you can always run the application with the same command-line and enjoy the benefits of CDS automatically:

    java -XX:+AutoCreateSharedArchive -XX:SharedArchiveFile=hello.jsa -cp hello.jar test.Hello

If the specified archive file exists and was created by the same version of the JDK, then it will be loaded as a dynamic archive; otherwise it is ignored at VM startup. At VM exit, if the specified archive file does not exist, it will be created. If it exists but was created with a different (but post-JDK 19) version of the JDK, then it will be replaced. In both cases the archive will be ready to be loaded the next time the JVM is launched with the same command line.
If the specified archive file exists but was created by a JDK version prior to JDK 19, then it will be ignored: neither loaded at startup, nor replaced at exit.

Developers should note that the contents of the CDS archive file are specific to each build of the JDK. Therefore, if you switch to a different JDK build, -XX:+AutoCreateSharedArchive will automatically recreate the archive to match the JDK. If you intend to use this feature with an existing archive, you should make sure that the archive is created by at least version 19 of the JDK.

Restrictions on Class Path and Module Path

    • Neither the class path (-classpath and -Xbootclasspath/a) nor the module path (--module-path) can contain non-empty directories.
    • Only modular JAR files are supported in --module-path. Exploded modules are not supported.
    • The class path used at archive creation time must be the same as (or a prefix of) the class path used at run time. (There's no such requirement for the module path.)
    • The CDS archive cannot be loaded if any JAR files in the class path or module path are modified after the archive is generated.
    • If any of the VM options --upgrade-module-path, --patch-module or --limit-modules are specified, CDS is disabled. This means that the JVM will execute without loading any CDS archives. In addition, if you try to create a CDS archive with any of these 3 options specified, the JVM will report an error.

PERFORMANCE TUNING EXAMPLES

You can use the Java advanced runtime options to optimize the performance of your applications.
Tuning for Higher Throughput

Use the following commands and advanced options to achieve higher throughput performance for your application:

    java -server -XX:+UseParallelGC -XX:+UseLargePages -Xmn10g -Xms26g -Xmx26g

Tuning for Lower Response Time

Use the following commands and advanced options to achieve lower response times for your application:

    java -XX:+UseG1GC -XX:MaxGCPauseMillis=100

Keeping the Java Heap Small and Reducing the Dynamic Footprint of Embedded Applications

Use the following advanced runtime options to keep the Java heap small and reduce the dynamic footprint of embedded applications:

    -XX:MaxHeapFreeRatio=10 -XX:MinHeapFreeRatio=5

Note: The defaults for these two options are 70% and 40% respectively. Because performance sacrifices can occur when using these small settings, you should optimize for a small footprint by reducing these settings as much as possible without introducing unacceptable performance degradation.

EXIT STATUS

The following exit values are typically returned by the launcher when the launcher is called with the wrong arguments, serious errors, or exceptions thrown by the JVM. However, a Java application may choose to return any value by using the API call System.exit(exitValue). The values are:

    • 0: Successful completion
    • >0: An error occurred

JDK 22                                  2024                          JAVA(1)
|
java - launch a Java application
|
To launch a class file:

    java [options] mainclass [args ...]

To launch the main class in a JAR file:

    java [options] -jar jarfile [args ...]

To launch the main class in a module:

    java [options] -m module[/mainclass] [args ...]

or

    java [options] --module module[/mainclass] [args ...]

To launch a source-file program:

    java [options] source-file [args ...]

-Xlog[:[what][:[output][:[decorators][:output-options[,...]]]]]
-Xlog:directive

what
    Specifies a combination of tags and levels of the form tag1[+tag2...][*][=level][,...]. Unless the wildcard (*) is specified, only log messages tagged with exactly the tags specified are matched. See -Xlog Tags and Levels.

output
    Sets the type of output. Omitting the output type defaults to stdout. See -Xlog Output.

decorators
    Configures the output to use a custom set of decorators. Omitting decorators defaults to uptime, level, and tags. See Decorations.

output-options
    Sets the -Xlog logging output options.

directive
    A global option or subcommand: help, disable, async
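Reading the grammar above, an -Xlog option is just colon-separated what, output, and decorators parts. Purely as an illustration of that structure (xlog_opt is a hypothetical helper written for this sketch, not part of the JDK), the pieces can be assembled like this:

```shell
# Hypothetical helper: assemble an -Xlog option from its three main parts.
xlog_opt() {
    # $1 = what (tags/levels), $2 = output, $3 = decorators
    printf -- '-Xlog:%s:%s:%s\n' "$1" "$2" "$3"
}

xlog_opt 'gc*=debug' 'file=gc.txt' 'uptime,tid'
# prints: -Xlog:gc*=debug:file=gc.txt:uptime,tid
```

The resulting string is what you would pass on the java command line, e.g. java $(xlog_opt 'gc*=debug' 'file=gc.txt' 'uptime,tid') -version.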
|
Optional: Specifies command-line options separated by spaces. See Overview of Java Options for a description of available options.

mainclass
    Specifies the name of the class to be launched. Command-line entries following classname are the arguments for the main method.

-jar jarfile
    Executes a program encapsulated in a JAR file. The jarfile argument is the name of a JAR file with a manifest that contains a line in the form Main-Class:classname that defines the class with the public static void main(String[] args) method that serves as your application's starting point. When you use -jar, the specified JAR file is the source of all user classes, and other class path settings are ignored. If you're using JAR files, then see jar.

-m or --module module[/mainclass]
    Executes the main class in a module specified by mainclass if it is given, or, if it is not given, the value in the module. In other words, mainclass can be used when it is not specified by the module, or to override the value when it is specified. See Standard Options for Java.

source-file
    Only used to launch a source-file program. Specifies the source file that contains the main class when using source-file mode. See Using Source-File Mode to Launch Source-Code Programs.

args ...
    Optional: Arguments following mainclass, source-file, -jar jarfile, and -m or --module module/mainclass are passed as arguments to the main class.
| null |
bundle
|
Bundler manages an application's dependencies through its entire life across many machines systematically and repeatably. See the bundler website http://bundler.io for information on getting started, and Gemfile(5) for more information on the Gemfile format.
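A minimal workflow sketch, assuming Bundler is installed: write a Gemfile, then let bundle install resolve it. The gem names used here (rack, rspec) are only examples; the install and exec steps are shown commented out because they need network access and a Ruby toolchain:

```shell
# Write an example Gemfile into a scratch directory.
tmp=$(mktemp -d)
cat > "$tmp/Gemfile" <<'EOF'
source "https://rubygems.org"

gem "rack"
gem "rspec", group: :test
EOF

# bundle install        # would resolve the Gemfile and write Gemfile.lock
# bundle exec rspec     # would run rspec against the bundled gem versions

cat "$tmp/Gemfile"
```

Once bundle install has produced a Gemfile.lock, the same lockfile makes installs repeatable on every machine, which is the point of the tool.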
|
bundle - Ruby Dependency Management
|
bundle COMMAND [--no-color] [--verbose] [ARGS]
|
--no-color
    Print all output without color

--retry, -r
    Specify the number of times you wish to attempt network commands

--verbose, -V
    Print out additional logging information

BUNDLE COMMANDS

We divide bundle subcommands into primary commands and utilities:

PRIMARY COMMANDS

bundle install(1)
    Install the gems specified by the Gemfile or Gemfile.lock

bundle update(1)
    Update dependencies to their latest versions

bundle package(1)
    Package the .gem files required by your application into the vendor/cache directory

bundle exec(1)
    Execute a script in the current bundle

bundle config(1)
    Specify and read configuration options for Bundler

bundle help(1)
    Display detailed help for each subcommand

UTILITIES

bundle add(1)
    Add the named gem to the Gemfile and run bundle install

bundle binstubs(1)
    Generate binstubs for executables in a gem

bundle check(1)
    Determine whether the requirements for your application are installed and available to Bundler

bundle show(1)
    Show the source location of a particular gem in the bundle

bundle outdated(1)
    Show all of the outdated gems in the current bundle

bundle console(1)
    Start an IRB session in the current bundle

bundle open(1)
    Open an installed gem in the editor

bundle lock(1)
    Generate a lockfile for your dependencies

bundle viz(1)
    Generate a visual representation of your dependencies

bundle init(1)
    Generate a simple Gemfile, placed in the current directory

bundle gem(1)
    Create a simple gem, suitable for development with Bundler

bundle platform(1)
    Display platform compatibility information

bundle clean(1)
    Clean up unused gems in your Bundler directory

bundle doctor(1)
    Display warnings
about common problems

PLUGINS

When running a command that isn't listed in PRIMARY COMMANDS or UTILITIES, Bundler will try to find an executable on your path named bundler-<command> and execute it, passing down any extra arguments to it.

OBSOLETE

These commands are obsolete and should no longer be used:

    • bundle cache(1)
    • bundle show(1)

November 2018                                                      BUNDLE(1)
| null |
gzip
|
The gzip program compresses and decompresses files using Lempel-Ziv coding (LZ77). If no files are specified, gzip will compress from standard input, or decompress to standard output. When in compression mode, each file will be replaced with another file with the suffix, set by the -S suffix option, added, if possible. In decompression mode, each file will be checked for existence, as will the file with the suffix added. Each file argument must contain a separate complete archive; when multiple files are indicated, each is decompressed in turn. In the case of gzcat the resulting data is then concatenated in the manner of cat(1). If invoked as gunzip then the -d option is enabled. If invoked as zcat or gzcat then both the -c and -d options are enabled. This version of gzip is also capable of decompressing files compressed using compress(1), bzip2(1), lzip, or xz(1).
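The replace-with-suffix behaviour described above can be seen in a small round trip. This sketch assumes a gzip/gunzip on PATH and uses -k so the original file is kept alongside the compressed copy:

```shell
# Round trip: compress a file, keep the original, decompress to stdout.
tmp=$(mktemp -d)
printf 'hello, gzip\n' > "$tmp/sample.txt"

gzip -k "$tmp/sample.txt"        # creates sample.txt.gz; -k keeps sample.txt
gunzip -c "$tmp/sample.txt.gz"   # decompress to stdout, like zcat
```

Without -k, gzip would have replaced sample.txt with sample.txt.gz, which is the default behaviour the description refers to.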
|
gzip, gunzip, zcat β compression/decompression tool using Lempel-Ziv coding (LZ77)
|
gzip [-cdfhkLlNnqrtVv] [-S suffix] file [file [...]] gunzip [-cfhkLNqrtVv] [-S suffix] file [file [...]] zcat [-fhV] file [file [...]]
|
The following options are available:

-1, --fast
-2, -3, -4, -5, -6, -7, -8
-9, --best
    These options change the compression level used, with the -1 option being the fastest, with less compression, and the -9 option being the slowest, with optimal compression. The default compression level is 6.

-c, --stdout, --to-stdout
    This option specifies that output will go to the standard output stream, leaving files intact.

-d, --decompress, --uncompress
    This option selects decompression rather than compression.

-f, --force
    This option turns on force mode. This allows files with multiple links, symbolic links to regular files, overwriting of pre-existing files, reading from or writing to a terminal, and when combined with the -c option, allowing non-compressed data to pass through unchanged.

-h, --help
    This option prints a usage summary and exits.

-k, --keep
    This option prevents gzip from deleting input files after (de)compression.

-L, --license
    This option prints the gzip license.

-l, --list
    This option displays information about the file's compressed and uncompressed size, ratio, and uncompressed name. With the -v option, it also displays the compression method, CRC, and the date and time embedded in the file.

-N, --name
    This option causes the stored filename in the input file to be used as the output file.

-n, --no-name
    This option stops the filename and timestamp from being stored in the output file.

-q, --quiet
    With this option, no warnings or errors are printed.

-r, --recursive
    This option is used to gzip the files in a directory tree individually, using the fts(3) library.

-S suffix, --suffix suffix
    This option changes the default suffix from .gz to suffix.

-t, --test
    This option will test compressed files for integrity.

-V, --version
    This option prints the version of the gzip program.

-v, --verbose
    This option turns on verbose mode, which prints the compression ratio for each file compressed.
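The -l option and the GZIP environment variable (see ENVIRONMENT) can be combined as in the following sketch; the file name and contents are arbitrary examples:

```shell
tmp=$(mktemp -d)
printf 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n' > "$tmp/data.txt"

# Options in the GZIP environment variable are applied before any
# command-line options, so this compresses at level 9.
GZIP=-9 gzip "$tmp/data.txt"

# -l lists compressed size, uncompressed size, ratio, and uncompressed name.
gzip -l "$tmp/data.txt.gz"
```

Note that some newer GNU gzip releases print a deprecation warning for the GZIP environment variable while still honoring it; passing -9 directly on the command line avoids the warning.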
ENVIRONMENT

If the environment variable GZIP is set, it is parsed as a white-space separated list of options handled before any options on the command line. Options on the command line will override anything in GZIP.

EXIT STATUS

The gzip utility exits 0 on success, 1 on errors, and 2 if a warning occurs.

SIGNALS

gzip responds to the following signals:

SIGINFO
    Report progress to standard error.

SEE ALSO

bzip2(1), compress(1), xz(1), fts(3), zlib(3)

HISTORY

The gzip program was originally written by Jean-loup Gailly, licensed under the GNU Public Licence. Matthew R. Green wrote a simple front end for the NetBSD 1.3 distribution media, based on the freely re-distributable zlib library. It was enhanced to be mostly feature-compatible with the original GNU gzip program for NetBSD 2.0. This implementation of gzip was ported based on the NetBSD gzip version 20181111, and first appeared in FreeBSD 7.0.

AUTHORS

This implementation of gzip was written by Matthew R. Green <mrg@eterna.com.au>, with unpack support written by Xin LI <delphij@FreeBSD.org>.

BUGS

According to RFC 1952, the recorded file size is stored in a 32-bit integer; therefore, it cannot represent files larger than 4GB. This limitation also applies to the -l option of the gzip utility.

macOS 14.5                    January 7, 2019                     macOS 14.5
| null |