|command|description|name|synopsis|options|examples|
|---|---|---|---|---|---|
piconv5.34
|
piconv is a Perl version of iconv, a character encoding converter widely available for various Unixen today. This script was primarily a technology demonstrator for Perl 5.8.0, but you can use piconv in place of iconv for virtually any case. piconv converts the character encoding of either STDIN or files specified in the argument and prints out to STDOUT. Here is the list of options. Some options come in a short format (-f) or a long one (--from). -f,--from from_encoding Specifies the encoding you are converting from. Unlike iconv, this option can be omitted. In such cases, the current locale is used. -t,--to to_encoding Specifies the encoding you are converting to. Unlike iconv, this option can be omitted. In such cases, the current locale is used. Therefore, when both -f and -t are omitted, piconv just acts like cat. -s,--string string Uses string instead of a file as the source of text. -l,--list Lists all available encodings, one per line, in case-insensitive order. Note that only the canonical names are listed; many aliases exist. For example, the names are case-insensitive, and many standard and common aliases work, such as "latin1" for "ISO-8859-1", or "ibm850" instead of "cp850", or "winlatin1" for "cp1252". See Encode::Supported for a full discussion. -r,--resolve encoding_alias Resolves encoding_alias to the Encode canonical encoding name. -C,--check N Check the validity of the stream if N = 1. When N = -1, something interesting happens when it encounters an invalid character. -c Same as "-C 1". -p,--perlqq Transliterate characters missing in the encoding to \x{HHHH} where HHHH is the hexadecimal Unicode code point. --htmlcref Transliterate characters missing in the encoding to &#NNN; where NNN is the decimal Unicode code point. --xmlcref Transliterate characters missing in the encoding to &#xHHHH; where HHHH is the hexadecimal Unicode code point. -h,--help Show usage. -D,--debug Invokes debugging mode. Primarily for Encode hackers.
-S,--scheme scheme Selects which scheme is to be used for conversion. Available schemes are as follows: from_to Uses Encode::from_to for conversion. This is the default. decode_encode Input strings are decode()d then encode()d. A straight two- step implementation. perlio The new perlIO layer is used. NI-S' favorite. You should use this option if you are using UTF-16 and others which linefeed is not $/. Like the -D option, this is also for Encode hackers. SEE ALSO iconv(1) locale(3) Encode Encode::Supported Encode::Alias PerlIO perl v5.34.1 2024-04-13 PICONV(1)
|
piconv -- iconv(1), reinvented in perl
|
piconv [-f from_encoding] [-t to_encoding] [-p|--perlqq|--htmlcref|--xmlcref] [-C N|-c] [-D] [-S scheme] [-s string|file...] piconv -l piconv -r encoding_alias piconv -h
| null | null |
basename
|
The basename utility deletes any prefix ending with the last slash ‘/’ character present in string (after first stripping trailing slashes), and a suffix, if given. The suffix is not stripped if it is identical to the remaining characters in string. The resulting filename is written to the standard output. A non-existent suffix is ignored. If -a is specified, then every argument is treated as a string as if basename were invoked with just one argument. If -s is specified, then the suffix is taken as its argument, and all other arguments are treated as a string. The dirname utility deletes the filename portion, beginning with the last slash ‘/’ character to the end of string (after first stripping trailing slashes), and writes the result to the standard output. EXIT STATUS The basename and dirname utilities exit 0 on success, and >0 if an error occurs.
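The stripping rules described above can be checked directly; the path and file names here are arbitrary examples:

```shell
basename /usr/lib/libm.a          # last path component: libm.a
basename /usr/lib/libm.a .a       # suffix stripped: libm
basename .a .a                    # suffix identical to the remainder: not stripped
basename -a -s .c main.c util.c   # -s suffix applied to every argument
dirname /usr/bin/trail            # directory portion: /usr/bin
```

Note the third line: because the suffix equals everything that remains of the string, it is left alone, exactly as the description states.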
|
basename, dirname – return filename or directory portion of pathname
|
basename string [suffix] basename [-a] [-s suffix] string [...] dirname string [...]
| null |
The following line sets the shell variable FOO to /usr/bin. FOO=`dirname /usr/bin/trail` SEE ALSO csh(1), sh(1), basename(3), dirname(3) STANDARDS The basename and dirname utilities are expected to be IEEE Std 1003.2 (“POSIX.2”) compatible. HISTORY The basename and dirname utilities first appeared in 4.4BSD. macOS 14.5 May 26, 2020 macOS 14.5
|
nclist
|
ncctl controls the caller's kernel Kerberos credentials for any of the specified path's associated NFS mounts. If no paths are specified then all the caller's associated credentials for all NFS file systems are acted upon by the command given. When an NFS file system is mounted using Kerberos through the “sec=” option or by the export specified on the server, the resulting session context is stored in a table for each mount. If the user decides to finish his or her session or chooses to use a different credential, then ncctl can be called to invalidate or change those credentials in the kernel. ncctl supports the following commands: init, set Set the mount or mounts to obtain credentials from the associated principal. Any current credential is unset. destroy, unset Unset the current credentials on the mount or mounts. list, get List the principal(s) set on the mount or mounts for this session. If no principal was set, then display “Default credential” followed by “[from ⟨principal name⟩]” if the access succeeded and “[kinit needed]” if not. If there has been no access to the file system then display “Credentials are not set”. Note the second synopsis is equivalent to ncctl [-Pv] {init | set} [-F] -p principal The third synopsis is equivalent to ncctl [-Pv] {destroy | unset} And the last synopsis is equivalent to ncctl [-Pv] {list | get} Kerberos keeps a collection of credentials which can be seen by using klist -A. The current default credential can be seen with klist without any arguments. kswitch can be used to switch the default to a different Kerberos credential. kdestroy can be used to remove all or a particular Kerberos credential. New Kerberos credentials can be obtained and added to the collection by calling kinit, and those credentials can be used when accessing the mount. See kinit(1), klist(1), kswitch(1), and kdestroy(1). ncctl can set any principal from the associated Kerberos credentials or can destroy and unset credentials currently on the mount.
When accessing a Kerberos mounted NFS file system, if no principal is set on the mount, when the kernel needs credentials it will make an up call to the gssd daemon, and whatever default credentials are available at the time will be used. The options are as follows: -h, --help Print a help summary of the command and then exit. -v, --verbose Be verbose and show what file system is being operated on and any resulting errors. -P, --nofollow If the trailing component resolves to a symbolic link, do not resolve the link but use the current path to determine any associated NFS file system. -p, --principal ⟨principal⟩ For the init, set and ncinit commands, set the principal to ⟨principal⟩. This option is required for these commands. This option is not valid for other commands. -F, --force For the init, set and ncinit commands, do not check for the presence of the required principal in the Kerberos cache collection. This may be useful if Kerberos credentials will be obtained later. WARNING: If the credential is incorrectly set it may not work, and no access to the file system will ever be allowed until another set or unset operation takes place. This option is not valid for other commands.
|
ncctl – Control NFS kernel credentials
|
ncctl [-Pvh] {{init | set} [-F] -p principal | {destroy | unset} | {list | get}} [path ...] ncinit [-PvhF] -p principal [path ...] ncdestroy [-Pvh] [path ...] nclist [-Pvh] [path ...]
| null |
If leaving for the day: $ kdestroy -A $ ncdestroy Let's say a user does $ kinit user@FOO.COM and through the automounter accesses a path /Network/Servers/someserver/Sources/foo/bar where the mount of /Network/Servers/someserver/Sources/foo was done with user@FOO.COM. $ cat /Network/Servers/someserver/Sources/foo/bar cat: /Network/Servers/someserver/Sources/foo/bar: Permission denied The user realizes that in order to have access on the server his identity should be user2@BAR.COM. So: $ kinit user2@BAR.COM $ ncctl set -p user2@BAR.COM Now the local user can access bar. To see your credentials $ nclist /Network/Servers/someserver/Sources/foo: user2@BAR.COM If the user destroys his credentials and then acquires new ones $ ncdestroy $ nclist -v /private/tmp/mp : No credentials are set. /Network/Servers/xs1/release : NFS mount is not using Kerberos. $ kinit user user@FOO.COM's password: ****** $ klist Credentials cache: API:648E3003-0A6B-4BB3-8447-1D5034F98EAE Principal: user@FOO.COM Issued Expires Principal Dec 15 13:57:57 2014 Dec 15 23:57:57 2014 krbtgt/FOO.COM@FOO.COM $ ls /private/tmp/mp filesystemui.socket= sysdiagnose.tar.gz x mtrecorder/ systemstats/ z $ nclist /private/tmp/mp : Default credential [from user@FOO.COM] NOTES As mentioned above, credentials are per session, so the console session's credential cache collection is separate from a collection of credentials obtained in an ssh session, even by the same user. Kerberos will set the default credential with klist or kswitch. However, the default credential can change without the user's knowledge, because of renewals, or because some other script or program in the user's session does a kswitch (krb5_cc_set_default_name()) or kinit on the user's behalf. kinit may not prompt for a password if the Kerberos password for the principal is in the user's keychain. ncctl with the set command will allow a user to change the mapping of the local user identity to a different one on the server.
It is up to the user to decide which identity will be used. Previous versions of the gssd daemon would attempt to select credentials if they were not set, by choosing credentials in the same realm as the server. This was imperfect and has been removed. There may be multiple credentials in the same realm, or a user may prefer a cross realm principal. It is highly recommended that, after accessing a mount (typically through the automounter), a user with access to multiple credentials set the credential on the mount that they want to use. The current default credential will be used by the automounter on first mount. If you do not explicitly set the credentials to use, then if the server expires the credential, the client will use the current default credential at the time of renewal, and that may be a different identity. If using mount directly, a user can select which credential to use for the mount and thereafter (at least until a new ncctl set command is run) by using the principal=⟨principal⟩ option. It is also possible to select the realm to use with the realm=⟨realm⟩ option. The latter can be useful to administrators in automounter maps. There is currently no way to remember what the chosen identity is for a given mount after it has been unmounted. So for automounted mounts a reference is taken on the mount point, so unmounts will not happen until all credentials on a mount with a set principal have been destroyed. Forced unmounts will not be affected. nclist or ncctl get can be used to see what credentials are actually being used, and ncdestroy or ncctl unset can be used to destroy that session's credential. Accessing the mount after its credentials have been destroyed will cause the default credential to be used until the next ncinit or ncctl set. Default credentials for an automounted NFS mount will not prevent the unmounting of the file system.
DIAGNOSTICS The ncctl command will exit with 1 if any of the supplied paths doesn't exist or there is an error returned for any path tried. If all paths exist and no errors are returned the exit status will be 0. SEE ALSO kdestroy(1), kinit(1), klist(1), kswitch(1), mount_nfs(8) BUGS There should be an option to kdestroy to destroy cached NFS contexts. macOS 14.5 January 14, 2015 macOS 14.5
|
objdump
|
The llvm-objdump utility prints the contents of object files and final linked images named on the command line. If no file name is specified, llvm-objdump will attempt to read from a.out. If - is used as a file name, llvm-objdump will process a file on its standard input stream. COMMANDS At least one of the following commands is required, and some commands can be combined with other commands: -a, --archive-headers Display the information contained within an archive's headers. -d, --disassemble Disassemble all executable sections found in the input files. On some architectures (AArch64, PPC64, x86), all known instructions are disassembled by default. On the others, --mcpu or --mattr is needed to enable some instruction sets. Disabled instructions are displayed as <unknown>. -D, --disassemble-all Disassemble all sections found in the input files. --disassemble-symbols=<symbol1[,symbol2,...]> Disassemble only the specified symbols. Takes demangled symbol names when --demangle is specified, otherwise takes mangled symbol names. Implies --disassemble. --dwarf=<value> Dump the specified DWARF debug sections. The supported values are: frames - .debug_frame -f, --file-headers Display the contents of the overall file header. --fault-map-section Display the content of the fault map section. -h, --headers, --section-headers Display summaries of the headers for each section. --help Display usage information and exit. Does not stack with other commands. -p, --private-headers Display format-specific file headers. -r, --reloc Display the relocation entries in the file. -R, --dynamic-reloc Display the dynamic relocation entries in the file. --raw-clang-ast Dump the raw binary contents of the clang AST section. -s, --full-contents Display the contents of each section. -t, --syms Display the symbol table. -T, --dynamic-syms Display the contents of the dynamic symbol table. -u, --unwind-info Display the unwind info of the input(s).
This operation is only currently supported for COFF and Mach-O object files. -v, --version Display the version of the llvm-objdump executable. Does not stack with other commands. -x, --all-headers Display all available header information. Equivalent to specifying --archive-headers, --file-headers, --private-headers, --reloc, --section-headers, and --syms.
|
llvm-objdump - LLVM's object file dumper
|
llvm-objdump [commands] [options] [filenames...]
|
llvm-objdump supports the following options: --adjust-vma=<offset> Increase the displayed address in disassembly or section header printing by the specified offset. --arch-name=<string> Specify the target architecture when disassembling. Use --version for a list of available targets. --build-id=<string> Look up the object using the given build ID, specified as a hexadecimal string. The found object is handled as if it were an input filename. -C, --demangle Demangle symbol names in the output. --debug-file-directory <path> Provide a path to a directory with a .build-id subdirectory to search for debug information for stripped binaries. Multiple instances of this argument are searched in the order given. --debuginfod, --no-debuginfod Whether or not to try debuginfod lookups for debug binaries. Unless specified, debuginfod is only enabled if libcurl was compiled in (LLVM_ENABLE_CURL) and at least one server URL was provided by the environment variable DEBUGINFOD_URLS. --debug-vars=<format> Print the locations (in registers or memory) of source-level variables alongside disassembly. format may be unicode or ascii, defaulting to unicode if omitted. --debug-vars-indent=<width> Distance to indent the source-level variable display, relative to the start of the disassembly. Defaults to 52 characters. -j, --section=<section1[,section2,...]> Perform commands on the specified sections only. For Mach-O use segment,section to specify the section name. -l, --line-numbers When disassembling, display source line numbers. Implies --disassemble. -M, --disassembler-options=<opt1[,opt2,...]> Pass target-specific disassembler options. Available options: • reg-names-std: ARM only (default). Print in ARM's instruction set documentation, with r13/r14/r15 replaced by sp/lr/pc. • reg-names-raw: ARM only. Use r followed by the register number. • no-aliases: AArch64 and RISC-V only. Print raw instruction mnemonic instead of pseudo instruction mnemonic. • numeric: RISC-V only.
Print raw register names instead of ABI mnemonic. (e.g. print x1 instead of ra) • att: x86 only (default). Print in the AT&T syntax. • intel: x86 only. Print in the intel syntax. --mcpu=<cpu-name> Target a specific CPU type for disassembly. Specify --mcpu=help to display available CPUs. --mattr=<a1,+a2,-a3,...> Enable/disable target-specific attributes. Specify --mattr=help to display the available attributes. --no-leading-addr, --no-addresses When disassembling, do not print leading addresses for instructions or inline relocations. --no-print-imm-hex Do not use hex format for immediate values in disassembly output (default). --no-show-raw-insn When disassembling, do not print the raw bytes of each instruction. --offloading Display the content of the LLVM offloading section. --prefix=<prefix> When disassembling with the --source option, prepend prefix to absolute paths. --prefix-strip=<level> When disassembling with the --source option, strip out level initial directories from absolute paths. This option has no effect without --prefix. --print-imm-hex Use hex format when printing immediate values in disassembly output. -S, --source When disassembling, display source interleaved with the disassembly. Implies --disassemble. --show-lma Display the LMA column when dumping ELF section headers. Defaults to off unless any section has different VMA and LMAs. --start-address=<address> When disassembling, only disassemble from the specified address. When printing relocations, only print the relocations patching offsets from at least address. When printing symbols, only print symbols with a value of at least address. --stop-address=<address> When disassembling, only disassemble up to, but not including the specified address. When printing relocations, only print the relocations patching offsets up to address. When printing symbols, only print symbols with a value up to address. 
--symbolize-operands When disassembling, symbolize a branch target operand to print a label instead of a real address. When printing a PC-relative global symbol reference, print it as an offset from the leading symbol. When a bb-address-map section is present (i.e., the object file is built with -fbasic-block-sections=labels), labels are retrieved from that section instead. Only works with PowerPC objects or X86 linked images. Example: A non-symbolized branch instruction with a local target and pc-relative memory access like cmp eax, dword ptr [rip + 4112] jge 0x20117e <_start+0x25> might become <L0>: cmp eax, dword ptr <g> jge <L0> --triple=<string> Target triple to disassemble for, see --version for available targets. -w, --wide Ignored for compatibility with GNU objdump. --x86-asm-syntax=<style> Deprecated. When used with --disassemble, choose style of code to emit from X86 backend. Supported values are: att AT&T-style assembly intel Intel-style assembly The default disassembly style is att. -z, --disassemble-zeroes Do not skip blocks of zeroes when disassembling. @<FILE> Read command-line options and commands from response file <FILE>. MACH-O ONLY OPTIONS AND COMMANDS --arch=<architecture> Specify the architecture to disassemble. see --version for available architectures. --archive-member-offsets Print the offset to each archive member for Mach-O archives (requires --archive-headers). --bind Display binding info --data-in-code Display the data in code table. --dis-symname=<name> Disassemble just the specified symbol's instructions. --chained-fixups Print chained fixup information. --dyld-info Print bind and rebase information used by dyld to resolve external references in a final linked binary. --dylibs-used Display the shared libraries used for linked files. --dsym=<string> Use .dSYM file for debug info. --dylib-id Display the shared library's ID for dylib files. --exports-trie Display exported symbols. 
--function-starts Print the function starts table for Mach-O objects. -g Print line information from debug info if available. --full-leading-addr Print the full leading address when disassembling. --indirect-symbols Display the indirect symbol table. --info-plist Display the info plist section as strings. --lazy-bind Display lazy binding info. --link-opt-hints Display the linker optimization hints. -m, --macho Use Mach-O specific object file parser. Commands and other options may behave differently when used with --macho. --no-leading-headers Do not print any leading headers. --no-symbolic-operands Do not print symbolic operands when disassembling. --non-verbose Display the information for Mach-O objects in non-verbose or numeric form. --objc-meta-data Display the Objective-C runtime meta data. --private-header Display only the first format specific file header. --rebase Display rebasing information. --rpaths Display runtime search paths for the binary. --universal-headers Display universal headers. --weak-bind Display weak binding information. XCOFF ONLY OPTIONS AND COMMANDS --symbol-description Add symbol description to disassembly output. BUGS To report bugs, please visit <https://github.com/llvm/llvm-project/labels/tools:llvm-objdump/>. SEE ALSO llvm-nm(1), llvm-otool(1), llvm-readelf(1), llvm-readobj(1) AUTHOR Maintained by the LLVM Team (https://llvm.org/). COPYRIGHT 2003-2024, LLVM Project 11 2024-01-28 LLVM-OBJDUMP(1)
| null |
install
|
The file(s) are copied to the target file or directory. If the destination is a directory, then the file is copied into directory with its original filename. If the target file already exists, it is either renamed to file.old if the -b option is given or overwritten if permissions allow. An alternate backup suffix may be specified via the -B option's argument. The options are as follows: -B suffix Use suffix as the backup suffix if -b is given. -b Back up any existing files before overwriting them by renaming them to file.old. See -B for specifying a different backup suffix. -C Copy the file. If the target file already exists and the files are the same, then don't change the modification time of the target. -c Copy the file. This is actually the default. The -c option is only included for backwards compatibility. -d Create directories. Missing parent directories are created as required. -f Specify the target's file flags; see chflags(1) for a list of possible flags and their meanings. -g Specify a group. A numeric GID is allowed. -M Disable all use of mmap(2). -m Specify an alternate mode. The default mode is set to rwxr-xr-x (0755). The specified mode may be either an octal or symbolic value; see chmod(1) for a description of possible mode values. -o Specify an owner. A numeric UID is allowed. -p Preserve the modification time. Copy the file, as if the -C (compare and copy) option is specified, except if the target file doesn't already exist or is different, then preserve the modification time of the file. -S Safe copy. Normally, install unlinks an existing target before installing the new file. With the -S flag a temporary file is used and then renamed to be the target. The reason this is safer is that if the copy or rename fails, the existing target is left untouched. -s install exec's the command strip(1) to strip binaries so that install can be portable over a large number of systems and binary types. 
-v Causes install to show when -C actually installs something. By default, install preserves all file flags, with the exception of the “nodump” flag. The install utility attempts to prevent moving a file onto itself. Installing /dev/null creates an empty file. DIAGNOSTICS The install utility exits 0 on success, and 1 otherwise. FILES INS@XXXX If either the -S option is specified, or the -C or -p option is used in conjunction with the -s option, temporary files named INS@XXXX, where XXXX is decided by mkstemp(3), are created in the target directory. COMPATIBILITY Historically, install moved files by default. The default was changed to copy in FreeBSD 4.4. SEE ALSO chflags(1), chgrp(1), chmod(1), cp(1), mv(1), strip(1), mmap(2), chown(8) HISTORY The install utility appeared in 4.2BSD. BUGS Temporary files may be left in the target directory if install exits abnormally. File flags cannot be set by fchflags(2) over an NFS file system. Other file systems do not have a concept of flags. install will only warn when flags could not be set on a file system that does not support them. install with -v falsely says a file is copied when -C snaps hard links. macOS 14.5 May 7, 2001 macOS 14.5
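A minimal sketch of the -d and -m options and the default copy behavior, using scratch paths from mktemp(1):

```shell
dir=$(mktemp -d)
printf 'hello\n' > "$dir/app.conf"
install -d -m 755 "$dir/etc"               # create the target directory
install -m 644 "$dir/app.conf" "$dir/etc"  # copy into it with an explicit mode
ls -l "$dir/etc/app.conf"
```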
|
install – install binaries
|
install [-bCcMpSsv] [-B suffix] [-f flags] [-g group] [-m mode] [-o owner] file1 file2 install [-bCcMpSsv] [-B suffix] [-f flags] [-g group] [-m mode] [-o owner] file1 ... fileN directory install -d [-v] [-g group] [-m mode] [-o owner] directory ...
| null | null |
native2ascii
| null | null | null | null | null |
c99
|
This is the name of the C language compiler as required by the IEEE Std 1003.1-2001 (“POSIX.1”) standard. The c99 compiler accepts the following options: -c Suppress the link-edit phase of the compilation, and do not remove any object files that are produced. -D name[=value] Define name as if by a C-language #define directive. If no “=value” is given, a value of 1 will be used. Note that in order to request a translation as specified by IEEE Std 1003.1-2001 (“POSIX.1”), you need to define _POSIX_C_SOURCE=200112L either in the source or using this option. The -D option has lower precedence than the -U option. That is, if name is used in both a -U and a -D option, name will be undefined regardless of the order of the options. The -D option may be specified more than once. -E Copy C-language source files to the standard output, expanding all preprocessor directives; no compilation will be performed. -g Produce symbolic information in the object or executable files. -I directory Change the algorithm for searching for headers whose names are not absolute pathnames to look in the directory named by the directory pathname before looking in the usual places. Thus, headers whose names are enclosed in double-quotes ("") will be searched for first in the directory of the file with the #include line, then in directories named in -I options, and last in the usual places. For headers whose names are enclosed in angle brackets (⟨⟩), the header will be searched for only in directories named in -I options and then in the usual places. Directories named in -I options shall be searched in the order specified. The -I option may be specified more than once. -L directory Change the algorithm of searching for the libraries named in the -l objects to look in the directory named by the directory pathname before looking in the usual places. Directories named in -L options will be searched in the order specified. The -L option may be specified more than once. 
-o outfile Use the pathname outfile, instead of the default a.out, for the executable file produced. -O optlevel If optlevel is zero, disable all optimizations. Otherwise, enable optimizations at the specified level. -s Produce object and/or executable files from which symbolic and other information not required for proper execution has been removed (stripped). -U name Remove any initial definition of name. The -U option may be specified more than once. -W 32|64 Set the pointer size for the compiled code to either 32 or 64 bits. If not specified, the pointer size matches the current host architecture. An operand is either in the form of a pathname or the form -l library. At least one operand of the pathname form needs to be specified. Supported operands are of the form: file.c A C-language source file to be compiled and optionally linked. The operand must be of this form if the -c option is used. file.a A library of object files, as produced by ar(1), passed directly to the link editor. file.o An object file produced by c99 -c, and passed directly to the link editor. -l library Search the library named liblibrary.a. A library will be searched when its name is encountered, so the placement of a -l operand is significant. SEE ALSO ar(1), c89(1), cc(1) STANDARDS The c99 utility interface conforms to IEEE Std 1003.1-2001 (“POSIX.1”). macOS 14.5 October 7, 2002 macOS 14.5
|
c99 – standard C language compiler
|
c99 [-cEgs] [-D name[=value]] ... [-I directory ...] [-L directory ...] [-o outfile] [-O optlevel] [-U name ...] [-W 32|64] operand ...
| null | null |
desdp
|
desdp generates a scripting definition (“sdef”) from the specified scriptable application and writes it to standard output. The original dictionary may be either an aete resource or a set of Cocoa suite definition files (scriptSuite/scriptTerminology pairs). desdp is primarily useful for developers with an existing scriptable application who want a shortcut to creating an sdef(5) file. While the resulting sdef will contain all the information in the original dictionary, it will probably not be perfect, since sdef(5) is more expressive than either of the older aete or suite definition formats. For instance, aete cannot specify which commands an object responds to, and suite definitions cannot specify the ordering of terms. SEE ALSO sdef(5), sdp(1) BUGS desdp does not yet correctly support Cocoa “Synonym” sections or synonymous terms or codes in aete. Mac OS X June 6, 2002 Mac OS X
|
desdp – scripting definition generator
|
desdp application
| null | null |
binhex
|
applesingle, binhex, macbinary are implemented as a single tool with multiple names. All invocations support the three verbs encode, decode, and probe. If multiple files are passed to probe, the exit status will be non-zero only if all files contain data in the specified encoding.
|
applesingle, binhex, macbinary – encode and decode files
|
<tool> probe file ... <tool> [decode] [-c] [-fv] [-C dir] [-o outfile] [file ...] <tool> -h | -V applesingle encode [-cfv] [-s suf] [-C dir] [-o outfile] file ... binhex encode [-R] [-cfv] [-s suf] [-C dir] [-o outfile] file ... macbinary encode [-t 1-3] [-cfv] [-s suf] [-C dir] [-o outfile] file ...
|
-f, --force perform the operation even if the output file already exists -h, --help display version and usage, then quit -v, --verbose be verbose -V, --version display version, then quit -c, --pipe, --from-stdin, --to-stdout For decode, read encoded data from the standard input. For encode, write encoded data to the standard output. Currently, "plain" data must be written to and from specified filenames (see also mount_fdesc(8)). -C, --directory dir create output files in dir -o, --rename name Use name for output, overriding any stored or default name. For encode, the appropriate suffix will be added to name. -o implies only one file to be encoded or decoded. -s, --suffix .suf override the default suffix for the given encoding -R, --no-runlength-encoding don't use BinHex runlength compression when encoding -t, --type 1-3 Specify MacBinary encoding type. Type 1 is undesirable because it has neither a checksum nor a signature and is thus difficult to recognize. DIAGNOSTICS In general, the tool returns a non-zero exit status if it fails. Darwin 14 November 2005 Darwin
| null |
perlbug5.34
|
This program is designed to help you generate bug reports (and thank-you notes) about perl5 and the modules which ship with it. In most cases, you can just run it interactively from a command line without any special arguments and follow the prompts. If you have found a bug with a non-standard port (one that was not part of the standard distribution), a binary distribution, or a non-core module (such as Tk, DBI, etc), then please see the documentation that came with that distribution to determine the correct place to report bugs. Bug reports should be submitted to the GitHub issue tracker at <https://github.com/Perl/perl5/issues>. The perlbug@perl.org address no longer automatically opens tickets. You can use this tool to compose your report and save it to a file which you can then submit to the issue tracker. In extreme cases, perlbug may not work well enough on your system to guide you through composing a bug report. In those cases, you may be able to use perlbug -d or perl -V to get system configuration information to include in your issue report. When reporting a bug, please run through this checklist: What version of Perl are you running? Type "perl -v" at the command line to find out. Are you running the latest released version of perl? Look at <http://www.perl.org/> to find out. If you are not using the latest released version, please try to replicate your bug on the latest stable release. Note that reports about bugs in old versions of Perl, especially those which indicate you haven't also tested the current stable release of Perl, are likely to receive less attention from the volunteers who build and maintain Perl than reports about bugs in the current release. Are you sure what you have is a bug? A significant number of the bug reports we get turn out to be documented features in Perl. Make sure the issue you've run into isn't intentional by glancing through the documentation that comes with the Perl distribution.
Given the sheer volume of Perl documentation, this isn't a trivial undertaking, but if you can point to documentation that suggests the behaviour you're seeing is wrong, your issue is likely to receive more attention. You may want to start with perldoc perltrap for pointers to common traps that new (and experienced) Perl programmers run into. If you're unsure of the meaning of an error message you've run across, see perldoc perldiag for an explanation. If the message isn't in perldiag, it probably isn't generated by Perl. You may have luck consulting your operating system documentation instead. If you are on a non-UNIX platform, see perldoc perlport, as some features may be unimplemented or work differently. You may be able to figure out what's going wrong using the Perl debugger. For information about how to use the debugger, see perldoc perldebug. Do you have a proper test case? The easier it is to reproduce your bug, the more likely it will be fixed -- if nobody can duplicate your problem, it probably won't be addressed. A good test case has most of these attributes: short, simple code; few dependencies on external commands, modules, or libraries; no platform-dependent code (unless it's a platform-specific bug); clear, simple documentation. A good test case is almost always a good candidate to be included in Perl's test suite. If you have the time, consider writing your test case so that it can be easily included into the standard test suite. Have you included all relevant information? Be sure to include the exact error messages, if any. "Perl gave an error" is not an exact error message. If you get a core dump (or equivalent), you may use a debugger (dbx, gdb, etc) to produce a stack trace to include in the bug report. NOTE: unless your Perl has been compiled with debug info (often -g), the stack trace is likely to be somewhat hard to use because it will most probably contain only the function names and not their arguments. 
If possible, recompile your Perl with debug info and reproduce the crash and the stack trace. Can you describe the bug in plain English? The easier it is to understand a reproducible bug, the more likely it will be fixed. Any insight you can provide into the problem will help a great deal. In other words, try to analyze the problem (to the extent you can) and report your discoveries. Can you fix the bug yourself? If so, that's great news; bug reports with patches are likely to receive significantly more attention and interest than those without patches. Please submit your patch via the GitHub Pull Request workflow as described in perldoc perlhack. You may also send patches to perl5-porters@perl.org. When sending a patch, create it using "git format-patch" if possible, though a unified diff created with "diff -pu" will do nearly as well. Your patch may be returned with requests for changes, or requests for more detailed explanations about your fix. Here are a few hints for creating high-quality patches: Make sure the patch is not reversed (the first argument to diff is typically the original file, the second argument your changed file). Make sure you test your patch by applying it with "git am" or the "patch" program before you send it on its way. Try to follow the same style as the code you are trying to patch. Make sure your patch really does work ("make test", if the thing you're patching is covered by Perl's test suite). Can you use "perlbug" to submit a thank-you note? Yes, you can do this by either using the "-T" option, or by invoking the program as "perlthanks". Thank-you notes are good. They make people smile. Please make your issue title informative. "a bug" is not informative. Neither is "perl crashes" nor is "HELP!!!". These don't help. A compact description of what's wrong is fine. Having done your bit, please be prepared to wait, to be told the bug is in your code, or possibly to get no reply at all. 
The volunteers who maintain Perl are busy folks, so if your problem is an obvious bug in your own code, is difficult to understand or is a duplicate of an existing report, you may not receive a personal reply. If it is important to you that your bug be fixed, do monitor the issue tracker (you will be subscribed to notifications for issues you submit or comment on) and the commit logs to development versions of Perl, and encourage the maintainers with kind words or offers of frosty beverages. (Please do be kind to the maintainers. Harassing or flaming them is likely to have the opposite effect of the one you want.) Feel free to update the ticket about your bug on <https://github.com/Perl/perl5/issues> if a new version of Perl is released and your bug is still present.
|
perlbug - how to submit bug reports on Perl
|
perlbug perlbug [ -v ] [ -a address ] [ -s subject ] [ -b body | -f inputfile ] [ -F outputfile ] [ -r returnaddress ] [ -e editor ] [ -c adminaddress | -C ] [ -S ] [ -t ] [ -d ] [ -h ] [ -T ] perlbug [ -v ] [ -r returnaddress ] [ -ok | -okay | -nok | -nokay ] perlthanks
|
-a Address to send the report to instead of saving to a file. -b Body of the report. If not included on the command line, or in a file with -f, you will get a chance to edit the report. -C Don't send copy to administrator when sending report by mail. -c Address to send copy of report to when sending report by mail. Defaults to the address of the local perl administrator (recorded when perl was built). -d Data mode (the default if you redirect or pipe output). This prints out your configuration data, without saving or mailing anything. You can use this with -v to get more complete data. -e Editor to use. -f File containing the body of the report. Use this to quickly send a prepared report. -F File to output the results to. Defaults to perlbug.rep. -h Prints a brief summary of the options. -ok Report successful build on this system to perl porters. Forces -S and -C. Forces and supplies values for -s and -b. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old. -okay As -ok except it will report on older systems. -nok Report unsuccessful build on this system. Forces -C. Forces and supplies a value for -s, then requires you to edit the report and say what went wrong. Alternatively, a prepared report may be supplied using -f. Only prompts for a return address if it cannot guess it (for use with make). Honors return address specified with -r. You can use this with -v to get more complete data. Only makes a report if this system is less than 60 days old. -nokay As -nok except it will report on older systems. -p The names of one or more patch files or other text attachments to be included with the report. Multiple files must be separated with commas. -r Your return address. The program will ask you to confirm its default if you don't use this option. 
-S Save or send the report without asking for confirmation. -s Subject to include with the report. You will be prompted if you don't supply one on the command line. -t Test mode. Makes it possible to command perlbug from a pipe or file, for testing purposes. -T Send a thank-you note instead of a bug report. -v Include verbose configuration data in the report. AUTHORS Kenneth Albanowski (<kjahds@kjahds.com>), subsequently doctored by Gurusamy Sarathy (<gsar@activestate.com>), Tom Christiansen (<tchrist@perl.com>), Nathan Torkington (<gnat@frii.com>), Charles F. Randall (<cfr@pobox.com>), Mike Guy (<mjtg@cam.ac.uk>), Dominic Dunlop (<domo@computer.org>), Hugo van der Sanden (<hv@crypt.org>), Jarkko Hietaniemi (<jhi@iki.fi>), Chris Nandor (<pudge@pobox.com>), Jon Orwant (<orwant@media.mit.edu>), Richard Foley (<richard.foley@rfi.net>), Jesse Vincent (<jesse@bestpractical.com>), and Craig A. Berry (<craigberry@mac.com>). SEE ALSO perl(1), perldebug(1), perldiag(1), perlport(1), perltrap(1), diff(1), patch(1), dbx(1), gdb(1) BUGS None known (guess what must have been used to report them?) perl v5.34.1 2024-04-13 PERLBUG(1)
| null |
lsmp
|
The lsmp command prints information about every active right in a task's port space, giving a view into the inter-process communication behavior of that task. Following is an explanation of each symbol and value in the output. name : Task unique name for a port. A "-" signifies that this is a member of a port-set. ipc-object : A unique identifier for a kernel object. A "+" sign implies that this entry is expanded from the ipc-object above. rights : Rights corresponding to this name. Possible values are recv, send, send-once and port-set. flags : Flags indicating port status. T : Port has tempowner set G : Port is guarded S : Port has strict guarding restrictions I : Port has importance donation flag set R : Port is marked reviving P : Port has task pointer set boost : Importance boost count reqs : Notifications armed on this port. D : Dead name notification N : No sender notification P : Port Destroy requests recv : Number of recv rights for this name. send : Number of send rights stored at this name. This does NOT reflect the total number of send rights for this recv right. sonce : Number of outstanding send-once rights for this receive right. oref : Do send rights exist somewhere for this receive right? qlimit : Queue limit for this port. If the oref column shows -> then it indicates the queue limit on the destination port. A <- indicates that this port right is destined to receive messages from the process referred to in the identifier column. msgcount : Number of messages enqueued on this port. See qlimit for -> and <- explanations. context : Mach port context value. identifier : A unique identifier for a kernel object or task's name for this right. This field is described by the type column. SEE ALSO ddt(1), top(1) macOS July 24, 2012
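The single-letter flags column described above is a compact legend; a minimal Python sketch of that legend (the dictionary and helper names are made up for illustration, not part of lsmp):

```python
# Decodes the lsmp "flags" column using the legend above.
# FLAG_MEANINGS and decode_flags are hypothetical, not an lsmp API.
FLAG_MEANINGS = {
    "T": "tempowner set",
    "G": "guarded",
    "S": "strict guarding restrictions",
    "I": "importance donation flag set",
    "R": "marked reviving",
    "P": "task pointer set",
}

def decode_flags(flags: str) -> list:
    """Expand a flags value such as 'GS' into its documented meanings."""
    return [FLAG_MEANINGS[c] for c in flags if c in FLAG_MEANINGS]

print(decode_flags("GS"))  # ['guarded', 'strict guarding restrictions']
```

Unknown characters are simply skipped here; lsmp itself prints the raw letters.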
|
lsmp – Display mach port information for processes on the system
|
lsmp -h lsmp -p <pid> Show mach port usage for <pid>. Run with root privileges to see detailed info about port destinations etc. lsmp -v Show detailed information for kernel-object-based ports, including thread ports and special ports attached to them. lsmp -a Show mach port usage for all tasks in the system. lsmp -j <path> Save output as JSON to <path>.
| null | null |
xmllint
|
The xmllint program parses one or more XML files, specified on the command line as XML-FILE (or the standard input if the filename provided is - ). It prints various types of output, depending upon the options selected. It is useful for detecting errors both in XML code and in the XML parser itself. xmllint is included in libxml(3).
|
xmllint - command line XML tool
|
xmllint [--version | --debug | --shell | --xpath "XPath_expression" | --debugent | --copy | --recover | --noent | --noout | --nonet | --path "PATH(S)" | --load-trace | --htmlout | --nowrap | --valid | --postvalid | --dtdvalid URL | --dtdvalidfpi FPI | --timing | --output FILE | --repeat | --insert | --compress | --html | --xmlout | --push | --memory | --maxmem NBBYTES | --nowarning | --noblanks | --nocdata | --format | --encode ENCODING | --dropdtd | --nsclean | --testIO | --catalogs | --nocatalogs | --auto | --xinclude | --noxincludenode | --loaddtd | --dtdattr | --stream | --walker | --pattern PATTERNVALUE | --chkregister | --relaxng SCHEMA | --schema SCHEMA | --c14n] {XML-FILE(S)... | -} xmllint --help
|
xmllint accepts the following options (in alphabetical order): --auto Generate a small document for testing purposes. --catalogs Use the SGML catalog(s) from SGML_CATALOG_FILES. Otherwise XML catalogs starting from /etc/xml/catalog are used by default. --chkregister Turn on node registration. Useful for developers testing libxml(3) node tracking code. --compress Turn on gzip(1) compression of output. --copy Test the internal copy implementation. --c14n Use the W3C XML Canonicalisation (C14N) to serialize the result of parsing to stdout. It keeps comments in the result. --dtdvalid URL Use the DTD specified by a URL for validation. --dtdvalidfpi FPI Use the DTD specified by a Formal Public Identifier FPI for validation; note that this will require a catalog exporting that Formal Public Identifier to work. --debug Parse a file and output an annotated tree of the in-memory version of the document. --debugent Debug the entities defined in the document. --dropdtd Remove DTD from output. --dtdattr Fetch external DTD and populate the tree with inherited attributes. --encode ENCODING Output in the given encoding. Note that this works for the full document, not for fragments or results from XPath queries. --format Reformat and reindent the output. The XMLLINT_INDENT environment variable controls the indentation. The default value is two spaces " ". --help Print out a short usage summary for xmllint. --html Use the HTML parser. --htmlout Output results as an HTML file. This causes xmllint to output the necessary HTML tags surrounding the result tree output so the results can be displayed/viewed in a browser. --insert Test for valid insertions. --loaddtd Fetch an external DTD. --load-trace Display all the documents loaded during the processing to stderr. --maxmem NBBYTES Test the parser memory support. NBBYTES is the maximum number of bytes the library is allowed to allocate. 
This can also be used to make sure batch processing of XML files will not exhaust the virtual memory of the server running them. --memory Parse from memory. --noblanks Drop ignorable blank spaces. --nocatalogs Do not use any catalogs. --nocdata Substitute CDATA sections with equivalent text nodes. --noent Substitute entity values for entity references. By default, xmllint leaves entity references in place. --nonet Do not use the Internet to fetch DTDs or entities. --noout Suppress output. By default, xmllint outputs the result tree. --nowarning Do not emit warnings from the parser and/or validator. --nowrap Do not output HTML doc wrapper. --noxincludenode Do XInclude processing but do not generate XInclude start and end nodes. --nsclean Remove redundant namespace declarations. --output FILE Define a file path where xmllint will save the result of parsing. Usually the program builds a tree and saves it on stdout; with this option the resulting XML instance will be saved to a file. --path "PATH(S)" Use the (space- or colon-separated) list of filesystem paths specified by PATH(S) to load DTDs or entities. Enclose space-separated lists by quotation marks. --pattern PATTERNVALUE Used to exercise the pattern recognition engine, which can be used with the reader interface to the parser. It allows selecting some nodes in the document based on an XPath (subset) expression. Used for debugging. --postvalid Validate after parsing has completed. --push Use the push mode of the parser. --recover Output any parsable portions of an invalid document. --relaxng SCHEMA Use the RelaxNG file named SCHEMA for validation. --repeat Repeat 100 times, for timing or profiling. --schema SCHEMA Use a W3C XML Schema file named SCHEMA for validation. --shell Run a navigating shell. Details on available commands in shell mode are below (see the section called “SHELL COMMANDS”). --xpath "XPath_expression" Run an XPath expression given as argument and print the result. 
In case of a nodeset result, each node in the node set is serialized in full in the output. In case of an empty node set the "XPath set is empty" result will be shown and an error exit code will be returned. --stream Use streaming API - useful when used in combination with --relaxng or --valid options for validation of files that are too large to be held in memory. --testIO Test user input/output support. --timing Output information about the time it takes xmllint to perform the various steps. --valid Determine if the document is a valid instance of the included Document Type Definition (DTD). A DTD to be validated against also can be specified at the command line using the --dtdvalid option. By default, xmllint also checks to determine if the document is well-formed. --version Display the version of libxml(3) used. --walker Test the walker module, which is a reader interface but for a document tree, instead of using the reader API on an unparsed document it works on an existing in-memory tree. Used for debugging. --xinclude Do XInclude processing. --xmlout Used in conjunction with --html. Usually when HTML is parsed the document is saved with the HTML serializer. But with this option the resulting document is saved with the XML serializer. This is primarily used to generate XHTML from HTML input. SHELL COMMANDS xmllint offers an interactive shell mode invoked with the --shell command. Available commands in shell mode include (in alphabetical order): base Display XML base of the node. bye Leave the shell. cat NODE Display the given node or the current one. cd PATH Change the current node to the given path (if unique) or root if no argument is given. dir PATH Dumps information about the node (namespace, attributes, content). du PATH Show the structure of the subtree under the given path or the current node. exit Leave the shell. help Show this help. free Display memory usage. load FILENAME Load a new document with the given filename. 
ls PATH List contents of the given path or the current directory. pwd Display the path to the current node. quit Leave the shell. save FILENAME Save the current document to the given filename or to the original name. validate Check the document for errors. write FILENAME Write the current node to the given filename. ENVIRONMENT SGML_CATALOG_FILES SGML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. This can be done by setting the SGML_CATALOG_FILES environment variable to a list of catalogs. An empty one should deactivate loading the default /etc/sgml/catalog catalog. XML_CATALOG_FILES XML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. This can be done by setting the XML_CATALOG_FILES environment variable to a space-separated list of catalogs. Use percent-encoding to escape spaces or other characters. An empty variable should deactivate loading the default /etc/xml/catalog catalog. XML_DEBUG_CATALOG Setting the environment variable XML_DEBUG_CATALOG to non-zero using the export command outputs debugging information related to catalog operations. XMLLINT_INDENT Setting the environment variable XMLLINT_INDENT controls the indentation. The default value is two spaces " ". DIAGNOSTICS xmllint return codes provide information that can be used when calling it from scripts. 0 No error 1 Unclassified 2 Error in DTD 3 Validation error 4 Validation error 5 Error in schema compilation 6 Error writing output 7 Error in pattern (generated when --pattern option is used) 8 Error in Reader registration (generated when --chkregister option is used) 9 Out of memory error 10 XPath evaluation error SEE ALSO libxml(3) More information can be found at • libxml(3) web page https://gitlab.gnome.org/GNOME/libxml2 AUTHORS John Fleck <jfleck@inkstain.net> Author. Ziying Sherwin <sherwin@nlm.nih.gov> Author. Heiko Rupp <hwr@pilhuhn.de> Author. COPYRIGHT Copyright © 2001, 2004 libxml2 02/19/2022 XMLLINT(1)
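The return codes in the DIAGNOSTICS table above are intended for use from scripts; a small Python sketch of the mapping (the dictionary and helper names are illustrative only, not part of libxml2):

```python
# Exit-status table copied from the DIAGNOSTICS section above.
# XMLLINT_STATUS and explain_status are illustrative names only.
XMLLINT_STATUS = {
    0: "No error",
    1: "Unclassified",
    2: "Error in DTD",
    3: "Validation error",
    4: "Validation error",
    5: "Error in schema compilation",
    6: "Error writing output",
    7: "Error in pattern",
    8: "Error in Reader registration",
    9: "Out of memory error",
    10: "XPath evaluation error",
}

def explain_status(code: int) -> str:
    """Translate an xmllint exit status into the documented description."""
    return XMLLINT_STATUS.get(code, "Unknown exit status %d" % code)

print(explain_status(3))  # Validation error
```

In a shell script the status would come from an invocation such as `xmllint --noout --valid doc.xml` followed by an inspection of `$?`.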
| null |
yamlpp-highlight
| null | null | null | null | null |
ipcount5.34
| null | null | null | null | null |
last
|
The last utility will either list the sessions of specified users, ttys, and hosts, in reverse time order, or list the users logged in at a specified date and time. Each line of output contains the user name, the tty from which the session was conducted, any hostname, the start and stop times for the session, and the duration of the session. If the session is still continuing or was cut short by a crash or shutdown, last will so indicate. The following options are available: --libxo Generate output via libxo(3) in a selection of different human and machine readable formats. See xo_parse_args(3) for details on command line arguments. -d date Specify the snapshot date and time. All users logged in at the snapshot date and time will be reported. This may be used with the -f option to derive the results from stored utx.log files. When this argument is provided, all other options except for -f and -n are ignored. The argument should be in the form [[CC]YY][MMDD]hhmm[.SS] where each pair of letters represents the following: CC The first two digits of the year (the century). YY The second two digits of the year. If YY is specified, but CC is not, a value for YY between 69 and 99 results in a CC value of 19. Otherwise, a CC value of 20 is used. MM Month of the year, from 1 to 12. DD Day of the month, from 1 to 31. hh Hour of the day, from 0 to 23. mm Minute of the hour, from 0 to 59. SS Second of the minute, from 0 to 60. If the CC and YY letter pairs are not specified, the values default to the current year. If the SS letter pair is not specified, the value defaults to 0. -f file Read the file file instead of the default, /var/log/utx.log. -h host Host names may be names or internet numbers. -n maxrec Limit the report to maxrec lines. -s Report the duration of the login session in seconds, instead of the default days, hours and minutes. -t tty Specify the tty. Tty names may be given fully or abbreviated, for example, “last -t 03” is equivalent to “last -t tty03”. 
-w Widen the duration field to show seconds, as well as the default days, hours and minutes. -y Report the year in the session start time. If multiple arguments are given, and a snapshot time is not specified, the information which applies to any of the arguments is printed, e.g., “last root -t console” would list all of “root's” sessions as well as all sessions on the console terminal. If no users, hostnames or terminals are specified, last prints a record of all logins and logouts. The pseudo-user reboot logs in at reboots of the system, thus “last reboot” will give an indication of mean time between reboot. If last is interrupted, it indicates to what date the search has progressed. If interrupted with a quit signal last indicates how far the search has progressed and then continues. FILES /var/log/utx.log login data base
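The [[CC]YY][MMDD]hhmm[.SS] argument to -d, including the century-defaulting rule for YY, can be sketched in Python (a toy parser written for this description; it is not last's actual code and skips input validation):

```python
def parse_snapshot(spec: str, default_year: int = 2023) -> tuple:
    """Parse last(1)'s -d argument [[CC]YY][MMDD]hhmm[.SS].

    Returns (year, month, day, hour, minute, second); month and day are
    None when MMDD is omitted.  Toy sketch only.
    """
    date, _, ss = spec.partition(".")
    second = int(ss) if ss else 0            # SS defaults to 0
    hour, minute = int(date[-4:-2]), int(date[-2:])
    rest = date[:-4]
    month = day = None
    if rest:
        month, day = int(rest[-4:-2]), int(rest[-2:])
        rest = rest[:-4]
    if not rest:
        year = default_year                  # CC and YY default to the current year
    elif len(rest) == 2:
        yy = int(rest)
        # YY given without CC: 69-99 means 19xx, anything else 20xx
        year = (1900 if 69 <= yy <= 99 else 2000) + yy
    else:
        year = int(rest)                     # full CCYY given
    return year, month, day, hour, minute, second

print(parse_snapshot("12072023"))  # (2023, 12, 7, 20, 23, 0)
```

For example, "7012072023" resolves to December 7, 1970 at 20:23 because YY=70 falls in the 69-99 range.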
|
last – indicate last logins of users and ttys
|
last [--libxo] [-swy] [-d [[CC]YY][MMDD]hhmm[.SS]] [-f file] [-h host] [-n maxrec] [-t tty] [user ...]
| null |
Show logins in pts/14 with the duration in seconds and limit the report to two lines: $ last -n2 -s -t pts/14 bob pts/1 Wed Dec 9 11:08 still logged in bob pts/2 Mon Dec 7 20:10 - 20:23 ( 776) Show active logins at ‘December 7th 20:23’ of the current year: $ last -d 12072023 bob pts/1 Mon Dec 7 20:10 - 20:23 (00:12) bob pts/6 Mon Dec 7 19:24 - 22:27 (03:03) alice ttyv0 Mon Dec 7 19:18 - 22:27 (03:09) SEE ALSO lastcomm(1), getutxent(3), libxo(3), xo_parse_args(3), ac(8) HISTORY The last utility first appeared in 1BSD. AUTHORS The original version was written by Howard P. Katseff; Keith Bostic rewrote it in 1986/87 to add functionality and to improve code quality. Philip Paeps added libxo(3) support in August 2018. BUGS If a login shell should terminate abnormally for some reason, it is likely that a logout record will not be written to the utx.log file. In this case, last will indicate the logout time as "shutdown". macOS 14.5 January 9, 2021
|
vm_stat
|
vm_stat displays Mach virtual memory statistics. If the optional interval is specified, then vm_stat will display the statistics every interval seconds. In this case, each line of output displays the change in each statistic (an interval count of 1 displays the values per second). However, the first line of output following each banner displays the system-wide totals for each statistic. If a count is provided, the command will terminate after count intervals. The following values are displayed: Pages free the total number of free pages in the system. Pages active the total number of pages currently in use and pageable. Pages inactive the total number of pages on the inactive list. Pages speculative the total number of pages on the speculative list. Pages throttled the total number of pages on the throttled list (not wired but not pageable). Pages wired down the total number of pages wired down. That is, pages that cannot be paged out. Pages purgeable the total number of purgeable pages. Translation faults the number of times the "vm_fault" routine has been called. Pages copy-on-write the number of faults that caused a page to be copied (generally caused by copy-on-write faults). Pages zero filled the total number of pages that have been zero-filled on demand. Pages reactivated the total number of pages that have been moved from the inactive list to the active list (reactivated). Pages purged the total number of pages that have been purged. File-backed pages the total number of pages that are file-backed (non-swap) Anonymous pages the total number of pages that are anonymous Uncompressed pages the total number of pages (uncompressed) held within the compressor Pages used by VM compressor: the number of pages used to store compressed VM pages. Pages decompressed the total number of pages that have been decompressed by the VM compressor. Pages compressed the total number of pages that have been compressed by the VM compressor. 
Pageins the total number of requests for pages from a pager (such as the inode pager). Pageouts the total number of pages that have been paged out. Swapins the total number of compressed pages that have been swapped back in from disk. Swapouts the total number of compressed pages that have been swapped out to disk. If interval is not specified, then vm_stat displays all accumulated statistics along with the page size. macOS August 13, 1997
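The interval behaviour described above (the first line after each banner shows system-wide totals, and each later line shows the change during one interval) can be modelled with a short Python sketch (illustrative only, not vm_stat's code):

```python
def interval_view(samples: list) -> list:
    """Mimic vm_stat's interval output for one counter: the first value is
    the absolute total, and each following value is the per-interval change."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

# A cumulative counter (e.g. Pageins) sampled three times, one interval apart:
print(interval_view([1000, 1040, 1100]))  # [1000, 40, 60]
```

With an interval of 1 second, the later values are effectively per-second rates, as the description notes.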
|
vm_stat – show Mach virtual memory statistics
|
vm_stat [[-c count] interval]
| null | null |
syslog
|
syslog is a command-line utility for a variety of tasks relating to the Apple System Log (ASL) facility. It provides mechanisms for sending and viewing log messages, copying log messages to ASL format data store files, and for controlling the flow of log messages from client processes. When invoked with the -help option, syslog prints a usage message. NOTE: Most system logs have moved to a new logging system. See log(1) for more information. SENDING MESSAGES The -s option is used to send log messages to the syslogd(8) log message daemon, either locally or to a remote server if the -r host option is used. There are two main forms of the command. If the -k option is used, then it must be followed by a list of keys and values. A structured message will be sent to the server with the keys and values given as arguments. If a key or a value has embedded white space, it must be enclosed in quotes. Note that the text of the log message should be supplied as a value following the “Message” key. If the -k option is not specified, then the rest of the command line is treated as the message text. The text may be preceded by -l level to set the log level (priority) of the message. Levels may be an integer value corresponding to the log levels specified in syslog(3) or asl(3), or they may be a string. String values are case insensitive, and should be one of: Emergency (level 0) Alert (level 1) Critical (level 2) Error (level 3) Warning (level 4) Notice (level 5) Info (level 6) Debug (level 7) The string “Panic” is an alias for “Emergency”. If the -l option is omitted, the log level defaults to 7 (Debug). syslog only requires one or two leading characters for a level specification. A single character suffices in most cases. Use “P” or “Em” for Panic / Emergency, and “Er” or “X” for Error. READING MESSAGES The syslogd daemon filters and saves log messages to different output streams. One module saves messages to files specified in the syslog.conf(5) file. 
Those log files may be examined with any file printing or editing utility, e.g. cat /var/log/system.log Another module saves messages in a data store (/var/log/asl). If invoked with no arguments, syslog fetches all messages from the active data store. Messages are then printed to standard output, subject to formatting options and character encoding as described below. Some log messages are read-access controlled, so only messages that are readable by the user running syslog will be fetched and printed. If invoked with the -C option, syslog fetches and prints console messages. The -C option is actually an alias for the expression: -k Facility com.apple.console See the EXPRESSIONS section below for more details. Individual ASL data store files may be read by providing one or more file names as arguments to the -f option. This may be useful when searching archived files, files on alternate disk volumes, or files created as export files with the -x option. The -d option may be followed by a list of directory paths. syslog will read or search all ASL data store files in those directories. Any files that are not readable will be skipped. Specifying -d with the name “archive” will open all readable files in the default ASL archive directory /var/log/asl.archive. Specifying -d with the name “store” will open all readable files in the ASL store directory /var/log/asl. Legacy ASL database files that were written by syslogd on Mac OS X 10.5 (Leopard) may also be read using the -f option. However only one such legacy database may be read or searched at a time. Note that a legacy database may be read and copied into a new ASL data store format file using a combination of -f and -x options. The -B option causes syslog to start processing messages beginning at the time of the last system startup. If used in conjunction with -w, all messages since the last system startup are displayed, or matched against an expression, before syslog waits for new messages. 
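The level names and one-or-two-character abbreviations accepted by -l (listed under SENDING MESSAGES above) can be sketched in Python. This toy resolver handles integers, full names, unique prefixes, and the “Panic” alias, but deliberately omits the special “X” shorthand; it is not syslog's actual code:

```python
# Level table from the SENDING MESSAGES section; resolve_level is a toy helper.
LEVELS = ["Emergency", "Alert", "Critical", "Error",
          "Warning", "Notice", "Info", "Debug"]

def resolve_level(spec) -> int:
    """Map an integer, full name, or abbreviated name to a level 0-7."""
    text = str(spec)
    if text.isdigit():
        return int(text)
    name = text.lower()
    if "panic".startswith(name):        # "Panic" is an alias for Emergency
        return 0
    hits = [i for i, lv in enumerate(LEVELS) if lv.lower().startswith(name)]
    if len(hits) != 1:
        raise ValueError("ambiguous or unknown level: %r" % (spec,))
    return hits[0]

print(resolve_level("Er"), resolve_level("w"), resolve_level("P"))  # 3 4 0
```

Note that a bare "E" is rejected as ambiguous (Emergency vs. Error), which is why the text above recommends "Em" or "Er".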
The -w option causes syslog to wait for new messages. By default, syslog prints the last 10 messages, then waits for new messages to be added to the data store. A number following the -w option specifies the number of messages to print and overrides the default value of 10. For example: syslog -w 20 Use the value “all” to view all messages in the data store before watching for new messages. The value “boot” will display messages since the last system startup before watching for new messages. Specifying “-w boot” is equivalent to using -w and -B together. Using syslog with the -w option is similar to watching a log file using, e.g. tail -f /var/log/system.log The -w option can only be used when reading the system's ASL data store or when reading a single data store file, and when printing messages to standard output. If the -x file option is specified, messages are copied to the named file rather than being printed. The file will be created if it does not exist. When called without the -x option, messages are printed to standard output. Messages are printed in a format similar to that used in the system.log file, except that the message priority level is printed between angle-brackets. The output format may be changed by specifying the -F format option. Non-printable and control characters are encoded by default. Text encoding may be controlled using the -E option (see below). The value of format may be one of the following: bsd Format used by the syslogd daemon for system log files, e.g. /var/log/system.log. std Standard (default) format. Similar to “bsd”, but includes the message priority level. raw Prints the complete message structure. Each key/value pair is enclosed in square brackets. Embedded closing brackets and white space are escaped. Time stamps are printed as seconds since the epoch by default, but may also be printed in local time or UTC if the -T option is specified (see below). xml The list of messages is printed as an XML property list. 
Each message is represented as a dictionary in an array. Dictionary keys represent message keys. Dictionary values are strings. Each of the format styles above may optionally be followed by a dot character and an integer value, for example: syslog -F std.4 This causes sub-second time values to be printed. In the example above, 4 decimal digits would be printed. The sub-second time values come from the value of the TimeNanoSec key in the ASL message. If the TimeNanoSec key is missing, a value of zero is used. The value of the format argument may also be a custom print format string. A custom format should in most cases be enclosed in single quotes to prevent the shell from substituting special characters and breaking at white space. Custom format strings may include variables of the form “$Name”, “$(Name)”, or “$((Name)(format))”, which will be expanded to the value associated with the named key. For example, the command: syslog -F '$Time $Host $(Sender)[$(PID)] <$((Level)(str))>: $Message' produces output similar to the “std” format. The simple “$Name” form is sufficient in most cases. However, the second form, “$(Name)”, must be used if the name is not delimited by white space. The third form allows different formats of the value to be printed. For example, a message priority level may appear as an integer value (e.g. “3”) or as a string (“Error”). The following print formats are known. $((Level)(str)) Formats a Level value as a string, for example “Error”, “Alert”, “Warning”, and so on. Note that $(Level) or $Level formats the value as an integer 0 through 7. $((Time)(sec)) Formats a Time value as the number of seconds since the Epoch. $((Time)(raw)) Alias for $((Time)(sec)). $((Time)(local)) Formats a Time value as a string of the form “Mmm dd hh:mm:ss”, where Mmm is the abbreviation for the month, dd is the date (1 - 31) and hh:mm:ss is the time. The local timezone is used. $((Time)(lcl)) Alias for $((Time)(local)). 
$((Time)(utc)) Formats a Time value as a string of the form “yyyy-mm-dd hh:mm:ssZ”, using Coordinated Universal Time, or the “Zulu” time zone. $((Time)(zulu)) Alias for $((Time)(utc)). $((Time)(X)) Where X may be any letter in the range A - Z or a - z. Formats the Time using the format “yyyy-mm-dd hh:mm:ssX”, using the specified nautical timezone. Z is the same as UTC/Zulu time. Timezones A - M (except J) decrease by one hour to the east of the Zulu time zone. Timezones N - Y increase by one hour to the west of Z. M and Y have the same clock time, but differ by one day. J is used to indicate the local timezone. When printing using $((Time)(J)), the output format is “yyyy-mm-dd hh:mm:ss”, without a trailing timezone letter. $((Time)(JZ)) Specifies the local timezone. The timezone offset from UTC follows the date and time. The time is formatted as “yyyy-mm-dd hh:mm:ss[+|-]HH[:MM]”. Minutes in the timezone offset are only printed if they are non-zero. $((Time)(ISO8601)) Specifies the local timezone and ISO 8601 extended format. The timezone offset from UTC follows the date and time. The time is formatted as “yyyy-mm-ddThh:mm:ss[+|-]HH[:MM]”. Minutes in the timezone offset are only printed if they are non-zero. Note that this differs from “JZ” format only in that a “T” character separates the date and time. $((Time)(ISO8601B)) Specifies the local timezone and ISO 8601 basic format, in the form: “yyyymmddThhmmss[+|-]HH[:MM]”. $((Time)(ISO8601Z)) Specifies UTC/Zulu time and ISO 8601 extended format, in the form: “yyyy-mm-ddThh:mm:ssZ”. $((Time)(ISO8601BZ)) Specifies UTC/Zulu time and ISO 8601 basic format, in the form: “yyyymmddThhmmssZ”. $((Time)([+|-]HH[:MM])) Specifies an offset (+ or -) of the indicated number of hours (HH) and optionally minutes (MM) to UTC. The value is formatted as a string of the form “yyyy-mm-dd hh:mm:ss[+|-]HH[:MM]”. Minutes in the timezone offset are only printed if they are non-zero. 
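The ISO 8601 shapes above can be reproduced with date(1) for a quick sanity check. This is only an illustration — syslog formats timestamps internally; `-r sec` is the BSD/macOS spelling and `-d @sec` the GNU one, hence the fallback:

```shell
# Render the ISO8601Z (extended) and ISO8601BZ (basic) Zulu formats
# for a fixed epoch time, portably across BSD/macOS and GNU date.
ts=1700000000
ext=$(date -u -r "$ts" +"%Y-%m-%dT%H:%M:%SZ" 2>/dev/null || date -u -d "@$ts" +"%Y-%m-%dT%H:%M:%SZ")
bas=$(date -u -r "$ts" +"%Y%m%dT%H%M%SZ" 2>/dev/null || date -u -d "@$ts" +"%Y%m%dT%H%M%SZ")
echo "$ext"   # 2023-11-14T22:13:20Z
echo "$bas"   # 20231114T221320Z
```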
Each of the print formats listed above for Time values may optionally be followed by a dot character and an integer value. In that case, sub-second time values will be printed. For example, the following line prints messages with a UTC time format, and includes 6 digits of sub-second time:

     syslog -F '$((Time)(utc.6)) $Host $(Sender)[$(PID)] <$((Level)(str))>: $Message'

If a custom format is not being used to specify the format for Time values, then Time values are generally converted to local time, except when the -F raw option is used, in which case times are printed as the number of seconds since the epoch. The -T format option may be used to control the format used for timestamps. The value of format may be one of the following:

     sec or raw    Times are printed as the number of seconds since the epoch.
     local or lcl  Times are converted to the local time zone, and printed with the format mmm dd hh:mm:ss where mmm is the month name abbreviated as three characters.
     utc or zulu   Times are converted to UTC, and printed with the format yyyy-mm-dd hh:mm:ssZ
     A-Z           Times are converted to the indicated nautical time zone, printed in the same format as UTC. “J” is interpreted as the local timezone and printed in the same format, but without a trailing timezone letter.
     JZ            Interpreted as the local timezone and printed with the format yyyy-mm-dd hh:mm:ss[+|-]HH[:MM]. The trailing “[+|-]HH[:MM]” string represents the local timezone offset from UTC in hours, or in hours and minutes if minutes are non-zero.
     ISO8601       Times are printed with the format specified by ISO 8601: yyyy-mm-ddThh:mm:ss[+|-]HH[:MM]. This is the same as the “JZ” format, except that a “T” character separates the date and time components.
     [+|-]hh[:mm]  The specified offset is used to adjust time.

Each of the time formats above may optionally be followed by a dot character and an integer value. In that case, sub-second time values will be printed. For example:

     syslog -T bsd.3

The -u option is a short form for -T utc.
By default, control characters and non-printable characters are encoded in the output stream. In some cases this may make messages less natural in appearance. The encoding is designed to preserve all the information in the log message, and to prevent malicious users from spoofing or obscuring information in log messages. Text in the “std”, “bsd”, and “raw” formats is encoded as it is by the vis utility with the -c option. Newlines and tabs are also encoded as "\n" and "\t" respectively. In “raw” format, space characters embedded in log message keys are encoded as "\s" and embedded brackets are escaped to print as "\[" and "\]". XML format output requires that keys are valid UTF8 strings. Keys which are not valid UTF8 are ignored, and the associated value is not printed. Values that contain legal UTF8 are printed as strings. Ampersand, less than, greater than, quotation mark, and apostrophe characters are encoded according to XML conventions. Embedded control characters are encoded as “&#xNN;” where NN is the character's hexadecimal value. Values that do not contain legal UTF8 are encoded in base-64 and printed as data objects. The -E format option may be used to explicitly control the text encoding. The value of format may be one of the following: safe This is the default encoding for syslog output. Encodes backspace characters as ^H. Carriage returns are mapped to newlines. A tab character is appended after newlines so that message text is indented. vis The C-style backslash encoding similar to that produced by the “vis -c” command, as described above. none No encoding is used. The intent of the “safe” encoding is to prevent obvious message spoofing or damage. The appearance of messages printed will depend on terminal settings and UTF-8 string handling. It is possible that messages printed using the “safe” or “none” options may be garbled or subject to manipulation through the use of control characters and control sequences embedded in user-supplied message text. 
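The newline/tab mapping described above can be imitated in a couple of lines. This is only an illustration of the documented mapping — the real encoding is applied inside syslog with the full vis(3)-style rules:

```shell
# Encode embedded tabs and newlines the way the "std"/"bsd" formats
# describe: tab -> \t, newline -> \n (illustration only).
enc=$(printf 'line1\tcol2\nline2' | awk 'NR > 1 { printf "\\n" } { gsub(/\t/, "\\\\t"); printf "%s", $0 }')
echo "$enc"   # line1\tcol2\nline2
```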
The “vis” encoding should be used to view messages if there is any suspicion that message text may have been used to manipulate the printed representation. If no further command line options are specified, syslog displays all messages, or copies all messages to a data store file. However, an expression may be specified using the -k and -o options. EXPRESSIONS Expressions specify matching criteria. They may be used to search for messages of interest. A simple expression has the form: -k key [[op] val] The -k option may be followed by one, two, or three arguments. A single argument causes a match to occur if a message has the specified key, regardless of value. If two arguments are specified, a match occurs when a message has exactly the specified value for a given key. For example, to find all messages sent by the portmap process: syslog -k Sender portmap Note that the -C option is treated as an alias for the expression: -k Facility com.apple.console This provides a quick way to search for console messages. If three arguments are given, they are of the form -k key operation value. syslog supports the following matching operators: eq equal ne not equal gt greater than ge greater than or equal to lt less than le less than or equal to Additionally, the operator may be preceded by one or more of the following modifiers: C case-fold R regular expression (see regex(3)) S substring A prefix Z suffix N numeric comparison More complex search expressions may be built by combining two or more simple expressions. A complex expression that has more than one “-k key [[op] val]” term matches a message if all of the key-value operations match. Logically, the result is an AND of all of key-value operations. For example: syslog -k Sender portmap -k Time ge -2h finds all messages sent by portmap in the last 2 hours (-2h means "two hours ago"). The -o option may be used to build even more complex searches by providing an OR operation. 
If two or more sub-expressions are given, separated by -o options, then a match occurs if a message matches any of the sub-expressions. For example, to find all messages which have either a “Sender” value of “portmap” or a numeric priority level of 4 or less:

     syslog -k Sender portmap -o -k Level Nle 4

Log priority levels are internally handled as an integer value between 0 and 7. Level values in expressions may either be given as integers, or as string equivalents. See the table of string values in the SENDING MESSAGES section for details. The example query above could also be specified with the command:

     syslog -k Sender portmap -o -k Level Nle warning

A special convention exists for matching time stamps. An unsigned integer value is regarded as the given number of seconds since 0 hours, 0 minutes, 0 seconds, January 1, 1970, Coordinated Universal Time. A negative integer value is regarded as the given number of seconds before the current time. For example, to find all messages of Error priority level (3) or less which were logged in the last 30 seconds:

     syslog -k Level Nle error -k Time ge -30

A relative time value may be optionally followed by one of the characters “s”, “m”, “h”, “d”, or “w” to specify seconds, minutes, hours, days, or weeks respectively. Upper case may be used equivalently. A week is taken to be 7 complete days (i.e. 604800 seconds).

FILTERING CONTROLS Clients of the Apple System Log facility using either the asl(3) or syslog(3) interfaces may specify a log filter mask. The mask specifies which messages should be sent to the syslogd daemon by specifying a yes/no setting for each priority level. Many clients set a filter mask to avoid sending relatively unimportant messages. Debug or Info priority level messages are generally only useful for debugging operations. By setting a filter mask, a process can improve performance by avoiding spending time sending messages that are in most cases unnecessary.
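The suffixes are fixed multipliers, so a relative value is easy to convert to seconds. A small sketch (the helper name is ours, not part of syslog):

```shell
# sec_per: seconds per relative-time suffix unit, per the list above.
sec_per() {
  case "$1" in
    s|S) echo 1 ;;     m|M) echo 60 ;;     h|H) echo 3600 ;;
    d|D) echo 86400 ;; w|W) echo 604800 ;;
  esac
}
# "-2h" in an expression means "two hours ago", i.e. now minus this many seconds:
echo $(( 2 * $(sec_per h) ))   # 7200
```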
The -c option may be used to control filtering. In addition to the internal filter mask value that processes may set as described above, the system maintains a global “master” filter mask. This filter is normally “off”, meaning that it has no effect. If a value is set for the master filter mask, it overrides the local filter mask for all processes. Root user access is required to set the master filter mask value. The current setting of the master filter mask may be inspected using:

     syslog -c 0

The value of the master filter mask may be set by providing a second argument following -c 0. The value may be a set of characters from the set “pacewnid”. These correspond to the priority levels Emergency (Panic), Alert, Critical, Error, Warning, Notice, Info, and Debug. The character “x” may be used for Error, as it is used for sending messages. The master filter mask may be deactivated with:

     syslog -c 0 off

Since it is common to use the filter mask as a “cutoff” mechanism, for example to cut off messages with Debug and Info priority, a single character from the list above may be specified, preceded by a minus sign. In this case, syslog uses a filter mask starting at level 0 (Emergency) “up to” the given level. For example, to set the master filter mask to cause all processes to log messages from Emergency up to Debug:

     syslog -c 0 -d

While the master filter mask may be set to control the messages produced by all processes, another filter mask may be specified for an individual process. If a per-process filter mask is set, it overrides both the local filter mask and the master filter mask. The current setting for a per-process filter mask may be inspected using -c process, where process is either a PID or the name of a process. If a name is used, it must uniquely identify a process. To set a per-process filter mask, a second argument may be supplied following -c process as described above for the master filter mask.
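The character-to-level correspondence above can be written out explicitly. This helper is ours (not part of syslog) and just mirrors the documented table:

```shell
# Map a "pacewnid" filter character to its numeric priority level,
# 0 = Emergency (Panic) ... 7 = Debug; "x" is an accepted alias for Error.
level_of() {
  case "$1" in
    p) echo 0 ;; a) echo 1 ;; c) echo 2 ;; e|x) echo 3 ;;
    w) echo 4 ;; n) echo 5 ;; i) echo 6 ;; d) echo 7 ;;
  esac
}
level_of x   # 3 (Error)
```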
Root access is required to set the per-process filter mask for system (UID 0) processes. The syslogd server follows filtering rules specified in the /etc/asl.conf file. When the remote-control mechanism is used to change the filter of a process, syslogd will save any messages received from that process until the remote-control filter is turned off. SERVER CONFIGURATION When syslogd starts up, and when it receives a HUP signal, it re-reads its configuration settings from /etc/asl.conf. It is sometimes useful to change configuration parameters temporarily, without needing to make changes to the configuration file. Any of the configuration options that may be set in the file (following an ``='' character) may also be sent to syslogd using the -config flag (without an ``='' character). For example, to temporarily disable the kernel message-per-second limit: syslog -config mps_limit 0 Note that only the superuser (root) may change configuration parameters. In addition to the parameter setting options that are described in the asl.conf(5) manual page, an additional option: syslog -config reset will cause syslogd to reset its configuration. ASL OUTPUT MODULES ASL Output Modules are named configuration bundles used by the ASL server syslogd, and by the ASL filesystem manager aslmanager. The /etc/asl.conf file represents the system's primary output module, and is given the name “com.apple.asl”. Other modules are read from files in the /etc/asl directory. File names serve as module names. ASL Output Modules are described in detail in asl.conf(5). When invoked with -module, syslog prints a summary of all loaded ASL Output Modules. The summary includes the output files and ASL store directories used by each module, a list of the module's configuration rules, and the module's current enabled or disabled status. -module name prints a summary for the module with the given name. 
ASL Output Modules may be enabled or disabled using the command:

     syslog -module name enable [0]

Note that only the superuser (root) may enable or disable a module. The name '*' (including the single-quote characters) may be used to change the status of all ASL Output Modules, excluding the primary com.apple.asl module. com.apple.asl may be enabled or disabled, but only specifically by name. If a module includes rotated files, the command:

     syslog -module name checkpoint [file]

will force the module to checkpoint all of its rotated files, or just the single optionally named file. The name '*' (including the single-quote characters) may be used to force checkpointing of all rotated files for all ASL Output Modules, including the primary com.apple.asl module. Note that only the superuser (root) may force files to be checkpointed. The checkpoint action sends a command to syslogd and waits for a reply to be returned. This means that any files currently in use will be checkpointed when the syslog command completes. SEE ALSO log(1), logger(1), asl(3), syslog(3), asl.conf(5), syslogd(8) HISTORY The syslog utility appeared in Mac OS X 10.4. Mac OS X October 18, 2004
|
syslog – Apple System Log utility
|
syslog -help syslog -s [-r host] [-l level] message... syslog -s [-r host] -k key val [key val] ... syslog -C syslog [-f file ...] [-d dir ...] [-B] [-w [n]] [-F format] [-T format] [-E format] expression syslog [-f file ...] [-d dir ...] -x file expression syslog -c process [mask] syslog -config [options] syslog -module [name [action]]
| null | null |
net-snmp-cert
| null | null | null | null | null |
treereg5.30
|
"Treereg" translates a tree grammar specification file (default extension ".trg" describing a set of tree patterns and the actions to modify them using tree-terms like: TIMES(NUM, $x) and { $NUM->{VAL} == 0) => { $NUM } which says that wherever an abstract syntax tree representing the product of a numeric expression with value 0 times any other kind of expression, the "TIMES" tree can be substituted by its left child. The compiler produces a Perl module containing the subroutines implementing those sets of pattern-actions. EXAMPLE Consider the following "eyapp" grammar (see the "Parse::Eyapp" documentation to know more about "Parse::Eyapp" grammars): ---------------------------------------------------------- nereida:~/LEyapp/examples> cat Rule6.yp %{ use Data::Dumper; %} %right '=' %left '-' '+' %left '*' '/' %left NEG %tree %% line: exp { $_[1] } ; exp: %name NUM NUM | %name VAR VAR | %name ASSIGN VAR '=' exp | %name PLUS exp '+' exp | %name MINUS exp '-' exp | %name TIMES exp '*' exp | %name DIV exp '/' exp | %name UMINUS '-' exp %prec NEG | '(' exp ')' { $_[2] } /* Let us simplify a bit the tree */ ; %% sub _Error { die "Syntax error.\n"; } sub _Lexer { my($parser)=shift; $parser->YYData->{INPUT} or $parser->YYData->{INPUT} = <STDIN> or return('',undef); $parser->YYData->{INPUT}=~s/^\s+//; for ($parser->YYData->{INPUT}) { s/^([0-9]+(?:\.[0-9]+)?)// and return('NUM',$1); s/^([A-Za-z][A-Za-z0-9_]*)// and return('VAR',$1); s/^(.)//s and return($1,$1); } } sub Run { my($self)=shift; $self->YYParse( yylex => \&_Lexer, yyerror => \&_Error ); } ---------------------------------------------------------- Compile it using "eyapp": ---------------------------------------------------------- nereida:~/LEyapp/examples> eyapp Rule6.yp nereida:~/LEyapp/examples> ls -ltr | tail -1 -rw-rw---- 1 pl users 4976 2006-09-15 19:56 Rule6.pm ---------------------------------------------------------- Now consider this tree grammar: 
---------------------------------------------------------- nereida:~/LEyapp/examples> cat Transform2.trg %{ my %Op = (PLUS=>'+', MINUS => '-', TIMES=>'*', DIV => '/'); %} fold: 'TIMES|PLUS|DIV|MINUS':bin(NUM($n), NUM($m)) => { my $op = $Op{ref($bin)}; $n->{attr} = eval "$n->{attr} $op $m->{attr}"; $_[0] = $NUM[0]; } zero_times_whatever: TIMES(NUM($x), .) and { $x->{attr} == 0 } => { $_[0] = $NUM } whatever_times_zero: TIMES(., NUM($x)) and { $x->{attr} == 0 } => { $_[0] = $NUM } /* rules related with times */ times_zero = zero_times_whatever whatever_times_zero; ---------------------------------------------------------- Compile it with "treereg": ---------------------------------------------------------- nereida:~/LEyapp/examples> treereg Transform2.trg nereida:~/LEyapp/examples> ls -ltr | tail -1 -rw-rw---- 1 pl users 1948 2006-09-15 19:57 Transform2.pm ---------------------------------------------------------- The following program makes use of both modules "Rule6.pm" and "Transform2.pm": ---------------------------------------------------------- nereida:~/LEyapp/examples> cat foldand0rule6_3.pl #!/usr/bin/perl -w use strict; use Rule6; use Parse::Eyapp::YATW; use Data::Dumper; use Transform2; $Data::Dumper::Indent = 1; my $parser = new Rule6(); my $t = $parser->Run; print "\n***** Before ******\n"; print Dumper($t); $t->s(@Transform2::all); print "\n***** After ******\n"; print Dumper($t); ---------------------------------------------------------- When the program runs with input "b*(2-2)" produces the following output: ---------------------------------------------------------- nereida:~/LEyapp/examples> foldand0rule6_3.pl b*(2-2) ***** Before ****** $VAR1 = bless( { 'children' => [ bless( { 'children' => [ bless( { 'children' => [], 'attr' => 'b', 'token' => 'VAR' }, 'TERMINAL' ) ] }, 'VAR' ), bless( { 'children' => [ bless( { 'children' => [ bless( { 'children' => [], 'attr' => '2', 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ), bless( { 'children' => [ bless( 
{ 'children' => [], 'attr' => '2', 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ) ] }, 'MINUS' ) ] }, 'TIMES' ); ***** After ****** $VAR1 = bless( { 'children' => [ bless( { 'children' => [], 'attr' => 0, 'token' => 'NUM' }, 'TERMINAL' ) ] }, 'NUM' ); ---------------------------------------------------------- See also the section "Compiling: More Options" in Parse::Eyapp for a more contrived example. SEE ALSO • Parse::Eyapp • eyapptut • The pdf file in <http://nereida.deioc.ull.es/~pl/perlexamples/Eyapp.pdf> • <http://nereida.deioc.ull.es/~pl/perlexamples/section_eyappts.html> (Spanish) • eyapp • treereg • Parse::Yapp • yacc(1) • bison(1) • the classic book "Compilers: Principles, Techniques, and Tools" by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman (Addison-Wesley 1986) • Parse::RecDescent AUTHOR Casiano Rodriguez-Leon LICENSE AND COPYRIGHT Copyright © 2006, 2007, 2008, 2009, 2010, 2011, 2012 Casiano Rodriguez-Leon. Copyright © 2017 William N. Braswell, Jr. All Rights Reserved. Parse::Yapp is Copyright © 1998, 1999, 2000, 2001, Francois Desarmenien. Parse::Yapp is Copyright © 2017 William N. Braswell, Jr. All Rights Reserved. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available. perl v5.30.3 2017-06-14 TREEREG(1)
|
treereg - Compiler for Tree Regular Expressions
|
treereg [-m packagename] [[no]syntax] [[no]numbers] [-severity 0|1|2|3] \ [-p treeprefix] [-o outputfile] [-lib /path/to/library/] -i filename[.trg] treereg [-m packagename] [[no]syntax] [[no]numbers] [-severity 0|1|2|3] \ [-p treeprefix] [-lib /path/to/library/] [-o outputfile] filename[.trg] treereg -v treereg -h
|
Options can be used both with one dash and double dash. It is not necessary to write the full name of the option. A disambiguation prefix suffices.

• "-i[n] filename" Input file. Extension ".trg" is assumed if no extension is provided.

• "-o[ut] filename" Output file. By default it is the name of the input file (concatenated with .pm).

• "-m[od] packagename" Name of the package containing the generated subroutines. By default it is the longest prefix of the input file name that conforms to the classic definition of an identifier, "[a-z_A-Z]\w*".

• "-l[ib] /path/to/library/" Specifies that "/path/to/library/" will be included in @INC. Useful when the "syntax" option is on. Can be inserted as many times as necessary.

• "-p[refix] treeprefix" Tree nodes automatically generated using "Parse::Eyapp" are objects blessed into the name of the production. To avoid crashes the programmer may prefix the class names with a given prefix when calling the parser; for example:

     $self->YYParse( yylex => \&_Lexer, yyerror => \&_Error, yyprefix => __PACKAGE__."::")

The "-prefix treeprefix" option simplifies the process of writing the tree grammar so that instead of writing with the full names

     CLASS::TIMES(CLASS::NUM, $x) and { $NUM->{VAL} == 0 } => { $NUM }

it can be written:

     TIMES(NUM, $x) and { $NUM->{VAL} == 0 } => { $NUM }

• "-n[umbers]" Produces "#line" directives.

• "-non[umbers]" Disables source file line numbering embedded in your parser.

• "-sy[ntax]" Checks that Perl code is syntactically correct.

• "-nosy[ntax]" Does not check the syntax of Perl code.

• "-se[verity] number"

     - 0 = Don't check arity (default). Matching does not check the arity. The actual node being visited may have more children.
     - 1 = Check arity. Matching requires equality between the number of children of the actual node and that of the pattern.
     - 2 = Check arity and give a warning
     - 3 = Check arity, give a warning and exit

• "-v[ersion]" Gives the version.

• "-u[sage]" Prints the usage info.

• "-h[elp]" Prints this help.
| null |
nslookup
|
Nslookup is a program to query Internet domain name servers. Nslookup has two modes: interactive and non-interactive. Interactive mode allows the user to query name servers for information about various hosts and domains or to print a list of hosts in a domain. Non-interactive mode is used to print just the name and requested information for a host or domain.

ARGUMENTS Interactive mode is entered in the following cases:

1. when no arguments are given (the default name server will be used)

2. when the first argument is a hyphen (-) and the second argument is the host name or Internet address of a name server.

Non-interactive mode is used when the name or Internet address of the host to be looked up is given as the first argument. The optional second argument specifies the host name or address of a name server. Options can also be specified on the command line if they precede the arguments and are prefixed with a hyphen. For example, to change the default query type to host information, and the initial timeout to 10 seconds, type:

     nslookup -query=hinfo -timeout=10

The -version option causes nslookup to print the version number and exit immediately.

INTERACTIVE COMMANDS

host [server] Look up information for host using the current default server or using server, if specified. If host is an Internet address and the query type is A or PTR, the name of the host is returned. If host is a name and does not have a trailing period, the search list is used to qualify the name. To look up a host not in the current domain, append a period to the name.

server domain
lserver domain Change the default server to domain; lserver uses the initial server to look up information about domain, while server uses the current default server. If an authoritative answer can't be found, the names of servers that might have the answer are returned.

root, finger, ls, view, help, ? not implemented

exit Exits the program.
set keyword[=value] This command is used to change state information that affects the lookups. Valid keywords are: all Prints the current values of the frequently used options to set. Information about the current default server and host is also printed. class=value Change the query class to one of: IN the Internet class CH the Chaos class HS the Hesiod class ANY wildcard The class specifies the protocol group of the information. (Default = IN; abbreviation = cl) [no]debug Turn on or off the display of the full response packet and any intermediate response packets when searching. (Default = nodebug; abbreviation = [no]deb) [no]d2 Turn debugging mode on or off. This displays more about what nslookup is doing. (Default = nod2) domain=name Sets the search list to name. [no]search If the lookup request contains at least one period but doesn't end with a trailing period, append the domain names in the domain search list to the request until an answer is received. (Default = search) port=value Change the default TCP/UDP name server port to value. (Default = 53; abbreviation = po) querytype=value type=value Change the type of the information query. (Default = A; abbreviations = q, ty) [no]recurse Tell the name server to query other servers if it does not have the information. (Default = recurse; abbreviation = [no]rec) ndots=number Set the number of dots (label separators) in a domain that will disable searching. Absolute names always stop searching. retry=number Set the number of retries to number. timeout=number Change the initial timeout interval for waiting for a reply to number seconds. [no]vc Always use a virtual circuit when sending requests to the server. (Default = novc) [no]fail Try the next nameserver if a nameserver responds with SERVFAIL or a referral (nofail) or terminate query (fail) on such a response. (Default = nofail) RETURN VALUES nslookup returns with an exit status of 1 if any query failed, and 0 otherwise. 
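The exit-status convention in RETURN VALUES lends itself to scripting. A sketch — nslookup is stubbed here so the example runs without a resolver; drop the stub function to use the real binary:

```shell
# Stub standing in for the real binary: simulate a failed query (exit 1).
nslookup() { return 1; }
if nslookup nosuchhost.invalid >/dev/null 2>&1; then
  result="resolved"
else
  result="query failed"
fi
echo "$result"   # query failed
```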
macOS NOTICE The nslookup command does not use the host name and address resolution or the DNS query routing mechanisms used by other processes running on macOS. The results of name or address queries printed by nslookup may differ from those found by other processes that use the macOS native name and address resolution mechanisms. The results of DNS queries may also differ from queries that use the macOS DNS routing library. FILES /etc/resolv.conf SEE ALSO dig(1), host(1), named(8). AUTHOR Internet Systems Consortium, Inc. COPYRIGHT Copyright © 2004-2007, 2010, 2013-2016 Internet Systems Consortium, Inc. ("ISC") ISC 2018-05-25 NSLOOKUP(1)
|
nslookup - query Internet name servers interactively
|
nslookup [-option] [name | -] [server]
| null | null |
vimdiff
|
Vimdiff starts Vim on two to eight files. Each file gets its own window. The differences between the files are highlighted. This is a nice way to inspect changes and to move changes from one version to another version of the same file. See vim(1) for details about Vim itself. When started as gvimdiff the GUI will be started, if available. In each window the 'diff' option will be set, which causes the differences to be highlighted. The 'wrap' and 'scrollbind' options are set to make the text look good. The 'foldmethod' option is set to "diff", which puts ranges of lines without changes in a fold. 'foldcolumn' is set to two to make it easy to spot the folds and open or close them.
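Since vimdiff opens an interactive editor, the invocations below are shown commented out; the sketch prepares two scratch versions of a file (the paths are ours) and uses diff(1) for a non-interactive look at the same change:

```shell
# Two versions of the same file, differing in one line.
printf 'alpha\nbeta\n'  > /tmp/vimdiff_demo_a.txt
printf 'alpha\ngamma\n' > /tmp/vimdiff_demo_b.txt
# vimdiff    /tmp/vimdiff_demo_a.txt /tmp/vimdiff_demo_b.txt   # vertical splits (default)
# vimdiff -o /tmp/vimdiff_demo_a.txt /tmp/vimdiff_demo_b.txt   # horizontal splits
d=$(diff /tmp/vimdiff_demo_a.txt /tmp/vimdiff_demo_b.txt || true)
printf '%s\n' "$d"
```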
|
vimdiff - edit between two and eight versions of a file with Vim and show differences
|
vimdiff [options] file1 file2 [file3 [file4 [file5 [file6 [file7 [file8]]]]]] gvimdiff
|
Vertical splits are used to align the lines, as if the "-O" argument was used. To use horizontal splits instead, use the "-o" argument. For all other arguments see vim(1). SEE ALSO vim(1) AUTHOR Most of Vim was made by Bram Moolenaar, with a lot of help from others. See ":help credits" in Vim. 2001 March 30 VIMDIFF(1)
| null |
topsyscall
|
This program continually prints a report of the top system calls, refreshing the display every second or at the interval specified on the command line. Since this uses DTrace, only users with root privileges can run this command.
|
topsyscall - top syscalls by syscall name. Uses DTrace.
|
topsyscall [-Cs] [interval [count]]
|
-C don't clear the screen -s print per second values
|
Default output, 1 second updates, # topsyscall Print every 5 seconds, # topsyscall 5 Print a scrolling output, # topsyscall -C FIELDS load avg load averages, see uptime(1) syscalls total syscalls in this interval syscalls/s syscalls per second SYSCALL system call name COUNT total syscalls in this interval COUNT/s syscalls per second DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT topsyscall will run until Ctrl-C is hit, or the specified interval is reached. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO dtrace(1M), prstat(1M) version 0.90 June 13, 2005 topsyscall(1m)
|
libnetcfg
|
The libnetcfg utility can be used to configure libnet. Starting from perl 5.8 libnet is part of the standard Perl distribution, but libnetcfg can be used for any libnet installation. USAGE Without arguments libnetcfg displays the current configuration.

     $ libnetcfg
     # old config ./libnet.cfg
     daytime_hosts        ntp1.none.such
     ftp_int_passive      0
     ftp_testhost         ftp.funet.fi
     inet_domain          none.such
     nntp_hosts           nntp.none.such
     ph_hosts
     pop3_hosts           pop.none.such
     smtp_hosts           smtp.none.such
     snpp_hosts
     test_exist           1
     test_hosts           1
     time_hosts           ntp.none.such
     # libnetcfg -h for help
     $

It tells where the old configuration file was found (if found). The "-h" option will show a usage message. To change the configuration you will need to use either the "-c" or the "-d" options. The default name of the old configuration file is "libnet.cfg", unless otherwise specified using the -i option, "-i oldfile", and it is searched first from the current directory, and then from your module path. The default name of the new configuration file is "libnet.cfg", and by default it is written to the current directory, unless otherwise specified using the -o option, "-o newfile". SEE ALSO Net::Config, libnetFAQ AUTHORS Graham Barr, the original Configure script of libnet. Jarkko Hietaniemi, conversion into libnetcfg for inclusion into Perl 5.8. perl v5.38.2 2023-11-28 LIBNETCFG(1)
|
libnetcfg - configure libnet
| null | null | null |
cpan5.30
|
This script provides a command interface (not a shell) to CPAN. At the moment it uses CPAN.pm to do the work, but it is not a one-shot command runner for CPAN.pm.
|
cpan - easily interact with CPAN from the command line
|
# with arguments and no switches, installs specified modules cpan module_name [ module_name ... ] # with switches, installs modules with extra behavior cpan [-cfFimtTw] module_name [ module_name ... ] # use local::lib cpan -I module_name [ module_name ... ] # one time mirror override for faster mirrors cpan -p ... # with just the dot, install from the distribution in the # current directory cpan . # without arguments, starts CPAN.pm shell cpan # without arguments, but some switches cpan [-ahpruvACDLOPX]
|
-a Creates a CPAN.pm autobundle with CPAN::Shell->autobundle. -A module [ module ... ] Shows the primary maintainers for the specified modules. -c module Runs a `make clean` in the specified module's directories. -C module [ module ... ] Show the Changes files for the specified modules. -D module [ module ... ] Show the module details. This prints one line for each out-of-date module (that is, a module installed locally but with a newer version on CPAN). Each line has three columns: module name, local version, and CPAN version. -f Force the specified action, when it normally would have failed. Use this to install a module even if its tests fail. When you use this option, -i is not optional for installing a module when you need to force it: % cpan -f -i Module::Foo -F Turn off CPAN.pm's attempts to lock anything. You should be careful with this since you might end up with multiple scripts trying to muck in the same directory. This isn't so much of a concern if you're loading a special config with "-j", and that config sets up its own work directories. -g module [ module ... ] Downloads to the current directory the latest distribution of the module. -G module [ module ... ] UNIMPLEMENTED Download to the current directory the latest distribution of the modules, unpack each distribution, and create a git repository for each distribution. If you want this feature, check out Yanick Champoux's "Git::CPAN::Patch" distribution. -h Print a help message and exit. When you specify "-h", it ignores all of the other options and arguments. -i module [ module ... ] Install the specified modules. With no other switches, this switch is implied. -I Load "local::lib" (think like "-I" for loading lib paths). Too bad "-l" was already taken. -j Config.pm Load the file that has the CPAN configuration data. This should have the same format as the standard CPAN/Config.pm file, which defines $CPAN::Config as an anonymous hash. -J Dump the configuration in the same format that CPAN.pm uses. 
This is useful for checking the configuration as well as using the dump as a starting point for a new, custom configuration. -l List all installed modules with their versions. -L author [ author ... ] List the modules by the specified authors. -m Make the specified modules. -M mirror1,mirror2,... A comma-separated list of mirrors to use for just this run. The "-P" option can find them for you automatically. -n Do a dry run, but don't actually install anything. (unimplemented) -O Show the out-of-date modules. -p Ping the configured mirrors and print a report. -P Find the best mirrors you could be using and use them for the current session. -r Recompiles dynamically loaded modules with CPAN::Shell->recompile. -s Drop into the CPAN.pm shell. The script does this automatically if you don't specify any arguments. -t module [ module ... ] Run a `make test` on the specified modules. -T Do not test modules. Simply install them. -u Upgrade all installed modules. Blindly doing this can really break things, so keep a backup. -v Print the script version and CPAN.pm version, then exit. -V Print detailed information about the cpan client. -w UNIMPLEMENTED Turn on cpan warnings. This checks various things, like directory permissions, and tells you about problems you might have. -x module [ module ... ] Find close matches to the named modules that you think you might have mistyped. This requires the optional installation of Text::Levenshtein or Text::Levenshtein::Damerau. -X Dump all the namespaces to standard output.
|
# print a help message cpan -h # print the version numbers cpan -v # create an autobundle cpan -a # recompile modules cpan -r # upgrade all installed modules cpan -u # install modules ( sole -i is optional ) cpan -i Netscape::Bookmarks Business::ISBN # force install modules ( must use -i ) cpan -fi CGI::Minimal URI # install modules but without testing them cpan -Ti CGI::Minimal URI Environment variables There are several components in CPAN.pm that use environment variables. The build tools, ExtUtils::MakeMaker and Module::Build, use some, while others matter to the levels above them. Some of these are specified by the Perl Toolchain Gang: Lancaster Consensus: <https://github.com/Perl-Toolchain-Gang/toolchain-site/blob/master/lancaster-consensus.md> Oslo Consensus: <https://github.com/Perl-Toolchain-Gang/toolchain-site/blob/master/oslo-consensus.md> NONINTERACTIVE_TESTING Assume no one is paying attention and skip prompts for distributions that do that correctly. cpan(1) sets this to 1 unless it already has a value (even if that value is false). PERL_MM_USE_DEFAULT Use the default answer for prompted questions. cpan(1) sets this to 1 unless it already has a value (even if that value is false). CPAN_OPTS As with "PERL5OPT", a string of additional cpan(1) options to add to those you specify on the command line. CPANSCRIPT_LOGLEVEL The log level to use, with either the embedded, minimal logger or Log::Log4perl if it is installed. Possible values are the same as the "Log::Log4perl" levels: "TRACE", "DEBUG", "INFO", "WARN", "ERROR", and "FATAL". The default is "INFO". GIT_COMMAND The path to the "git" binary to use for the Git features. The default is "/usr/local/bin/git". EXIT VALUES The script exits with zero if it thinks that everything worked, or a positive number if it thinks that something failed. Note, however, that in some cases it has to divine a failure by the output of things it does not control. 
For now, the exit codes are vague: 1 An unknown error 2 There was an external problem 4 There was an internal problem with the script 8 A module failed to install TO DO * one shot configuration values from the command line BUGS * none noted SEE ALSO Most behaviour, including environment variables and configuration, comes directly from CPAN.pm. SOURCE AVAILABILITY This code is on GitHub in the CPAN.pm repository: https://github.com/andk/cpanpm The source used to be tracked separately in another GitHub repo, but the canonical source is now in the above repo. CREDITS Japheth Cleaver added the bits to allow a forced install (-f). Jim Brandt suggested and provided the initial implementation for the up-to-date and Changes features. Adam Kennedy pointed out that exit() causes problems on Windows where this script ends up with a .bat extension. AUTHOR brian d foy, "<bdfoy@cpan.org>" COPYRIGHT Copyright (c) 2001-2015, brian d foy, All Rights Reserved. You may redistribute this under the same terms as Perl itself. perl v5.30.3 2024-04-13 CPAN(1)
|
c++filt
|
llvm-cxxfilt is a symbol demangler that can be used as a replacement for the GNU c++filt tool. It takes a series of symbol names and prints their demangled form on the standard output stream. If a name cannot be demangled, it is simply printed as is. If no names are specified on the command-line, names are read interactively from the standard input stream. When reading names from standard input, each input line is split on characters that are not part of valid Itanium name manglings, i.e. characters that are not alphanumeric, '.', '$', or '_'. Separators between names are copied to the output as is. EXAMPLE $ llvm-cxxfilt _Z3foov _Z3bari not_mangled foo() bar(int) not_mangled $ cat input.txt | _Z3foov *** _Z3bari *** not_mangled | $ llvm-cxxfilt < input.txt | foo() *** bar(int) *** not_mangled |
|
llvm-cxxfilt - LLVM symbol name demangler
|
llvm-cxxfilt [options] [mangled names...]
|
--format=<value>, -s Mangling scheme to assume. Valid values are auto (default, auto-detect the style) and gnu (assume GNU/Itanium style). --help, -h Print a summary of command line options. --no-strip-underscore, -n Do not strip a leading underscore. This is the default for all platforms except Mach-O based hosts. --strip-underscore, -_ Strip a single leading underscore, if present, from each input name before demangling. On by default on Mach-O based platforms. --types, -t Attempt to demangle names as type names as well as function names. --version Display the version of the llvm-cxxfilt executable. @<FILE> Read command-line options from response file <FILE>. EXIT STATUS llvm-cxxfilt returns 0 unless it encounters a usage error, in which case a non-zero exit code is returned. SEE ALSO llvm-nm(1) AUTHOR Maintained by the LLVM Team (https://llvm.org/). COPYRIGHT 2003-2024, LLVM Project 11 2024-01-28 LLVM-CXXFILT(1)
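The argument behavior described above can be checked quickly with the GNU-compatible c++filt named in this page's title (llvm-cxxfilt accepts the same positional arguments); the mangled names below are the same ones used in the EXAMPLE section:

```shell
# Demangle names passed as arguments; unmangleable input is passed
# through unchanged, exactly as described above.
c++filt _Z3foov        # prints: foo()
c++filt not_mangled    # prints: not_mangled
```

With no arguments, both tools instead read names interactively from standard input and copy any separators through to the output.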
| null |
avbdiagnose
|
The avbdiagnose tool is used to capture a snapshot of the current AVB system state and help diagnose common issues with AVB. avbdiagnose inspects the system to determine that it actually has AVB-capable interfaces and that at least one of these has been enabled. avbdiagnose will produce a number of warnings which may not be errors, depending on the setup of the system. Things such as missing local or remote attributes for MSRP will be flagged as warnings, but they are not errors if the Mac is not sourcing or sinking streams as appropriate for the warning. avbdiagnose will flag potential errors and warnings and suggest filing a radar report at http://bugreporter.apple.com. Please attach the generated file at /tmp/avbdiagnose-<date>.bz2 to the bug report. An error or warning report may be the result of a network device. Please use your best judgement before filing the bug report. The following options are available: --no-enumeration Disable the reading of the AVDECC AEM from the device and archiving it in the result. --no-coreaudio Disable dumping of the state of the AVB audio driver device tree. --no-acmp-state Disable reading and dumping of the ACMP state of the entities. --stream-info Enable sending of the AVDECC AECP AEM GET_STREAM_INFO command to each of the possible stream sources and sinks and include the results in the info dump. --stream-counters Enable sending of the AVDECC AECP AEM GET_COUNTER command to each of the possible stream sources and sinks and include the results in the info dump. --no-info-tree Disable dumping of the state of the AVB info tree. --no-timesync Disable dumping of the state of the Time Sync info tree. --no-finder-reveal Disable revealing of the output file in Finder. FILES /tmp/avbdiagnose-<date>.bz2 output The information gathered by avbdiagnose, including the command line output, an ioreg dump, and the current system.log and kernel.log files. Darwin 26/04/15 Darwin
|
avbdiagnose – diagnostic tool for AVB.
|
avbdiagnose
| null | null |
nice
|
The nice utility runs utility at an altered scheduling priority, by incrementing its “nice” value by the specified increment, or a default value of 10. The lower the nice value of a process, the higher its scheduling priority. The superuser may specify a negative increment in order to run a utility with a higher scheduling priority. Some shells may provide a builtin nice command which is similar or identical to this utility. Consult the builtin(1) manual page. ENVIRONMENT The PATH environment variable is used to locate the requested utility if the name contains no ‘/’ characters. EXIT STATUS If utility is invoked, the exit status of nice is the exit status of utility. An exit status of 126 indicates utility was found, but could not be executed. An exit status of 127 indicates utility could not be found.
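The default increment of 10 can be observed directly. As a sketch (this relies on the GNU coreutils nice, which prints the current nice value when invoked with no utility; that behavior is an extension not described in this page):

```shell
# Show the shell's current niceness, then run nice under nice to see
# the increment applied on top of it.
nice            # e.g. 0
nice -n 5 nice  # prints the previous value plus 5, e.g. 5
```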
|
nice – execute a utility at an altered scheduling priority
|
nice [-n increment] utility [argument ...]
| null |
Execute utility ‘date’ at priority 5 assuming the priority of the shell is 0: nice -n 5 date Execute utility ‘date’ at priority -19 assuming the priority of the shell is 0 and you are the super-user: nice -n 16 nice -n -35 date COMPATIBILITY The traditional -increment option has been deprecated but is still supported. SEE ALSO builtin(1), csh(1), getpriority(2), setpriority(2), renice(8) STANDARDS The nice utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”). HISTORY A nice utility appeared in Version 4 AT&T UNIX. macOS 14.5 February 24, 2011 macOS 14.5
|
xcscontrol
| null | null | null | null | null |
truncate
|
The truncate utility adjusts the length of each regular file given on the command-line, or performs space management with the given offset and length over a regular file given on the command-line. The following options are available: -c Do not create files if they do not exist. The truncate utility does not treat this as an error. No error messages are displayed and the exit value is not affected. -r rfile Truncate or extend files to the length of the file rfile. -s [+|-|%|/]size[SUFFIX] If the size argument is preceded by a plus sign (+), files will be extended by this number of bytes. If the size argument is preceded by a dash (-), file lengths will be reduced by no more than this number of bytes, to a minimum length of zero bytes. If the size argument is preceded by a percent sign (%), files will be rounded up to a multiple of this number of bytes. If the size argument is preceded by a slash sign (/), files will be rounded down to a multiple of this number of bytes, to a minimum length of zero bytes. Otherwise, the size argument specifies an absolute length to which all files should be extended or reduced as appropriate. The size, offset and length arguments may be suffixed with one of K, M, G or T (either upper or lower case) to indicate a multiple of Kilobytes, Megabytes, Gigabytes or Terabytes respectively. Exactly one of the -r or -s options must be specified. If a file is made smaller, its extra data is lost. If a file is made larger, it will be extended as if by writing bytes with the value zero. If the file does not exist, it is created unless the -c option is specified. Note that, while truncating a file causes space on disk to be freed, extending a file does not cause space to be allocated. To extend a file and actually allocate the space, it is necessary to explicitly write data to it, using (for example) the shell's ‘>>’ redirection syntax, or dd(1). EXIT STATUS The truncate utility exits 0 on success, and >0 if an error occurs. 
If the operation fails for an argument, truncate will issue a diagnostic and continue processing the remaining arguments.
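The size-prefix arithmetic described above can be sketched on a small file (GNU truncate accepts the same -, %, and / prefixes; `stat -c %s` is the GNU spelling of "print file size", the macOS equivalent being `stat -f %z`):

```shell
# Demonstrate the - (reduce by) and % (round up to a multiple of)
# prefixes on a 5-byte file.
printf 'hello' > demo.bin    # 5 bytes
truncate -s -3 demo.bin      # reduce by 3 bytes        -> 2 bytes
truncate -s %4 demo.bin      # round up to multiple of 4 -> 4 bytes
stat -c %s demo.bin          # prints: 4
```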
|
truncate – truncate, extend the length of files, or perform space management in files
|
truncate [-c] -s [+|-|%|/]size[SUFFIX] file ... truncate [-c] -r rfile file ...
| null |
Adjust the size of the file test_file to 10 Megabytes but do not create it if it does not exist: truncate -c -s +10M test_file Same as above but create the file if it does not exist: truncate -s +10M test_file ls -l test_file -rw-r--r-- 1 root wheel 10485760 Jul 22 18:48 test_file Adjust the size of test_file to the size of the kernel and create another file test_file2 with the same size: truncate -r /boot/kernel/kernel test_file test_file2 ls -l /boot/kernel/kernel test_file* -r-xr-xr-x 1 root wheel 31352552 May 15 14:18 /boot/kernel/kernel* -rw-r--r-- 1 root wheel 31352552 Jul 22 19:15 test_file -rw-r--r-- 1 root wheel 31352552 Jul 22 19:15 test_file2 Downsize test_file by 5 Megabytes: # truncate -s -5M test_file ls -l test_file* -rw-r--r-- 1 root wheel 26109672 Jul 22 19:17 test_file -rw-r--r-- 1 root wheel 31352552 Jul 22 19:15 test_file2 SEE ALSO dd(1), touch(1), fspacectl(2), truncate(2) STANDARDS The truncate utility conforms to no known standards. HISTORY The truncate utility first appeared in FreeBSD 4.2. AUTHORS The truncate utility was written by Sheldon Hearn <sheldonh@starjuice.net>. Hole-punching support of this utility was developed by Ka Ho Ng <khng@FreeBSD.org>. macOS 14.5 August 19, 2021 macOS 14.5
|
calendar
|
The calendar utility checks the current directory for a file named calendar and displays lines that fall into the specified date range. On the day before a weekend (normally Friday), events for the next three days are displayed. The following options are available: -A num Print lines from today and the next num days (forward, future). -a Process the ``calendar'' files for users found in /etc/passwd and mail the results to them. This can result in multiple messages for specific files, since /etc/passwd does not require home directories to be unique. In particular, by default root, toor and daemon share the same home directory. If this directory contains calendar information, calendar will process the file three times. This option requires super-user privileges. -B num Print lines from today and the previous num days (backward, past). -D moon|sun Print UTC offset, longitude and moon or sun information. -d Debug option: print current date information. -F friday Specify which day of the week is ``Friday'' (the day before the weekend begins). Default is 5. -f calendarfile Use calendarfile as the default calendar file. -l longitude Perform lunar and solar calculations from this longitude. If neither longitude nor UTC offset is specified, the calculations will be based on the difference between UTC time and localtime. If both are specified, UTC offset overrides longitude. -t dd[.mm[.year]] For test purposes only: set date directly to argument values. -U UTC-offset Perform lunar and solar calculations from this UTC offset. If neither UTC offset nor longitude is specified, the calculations will be based on the difference between UTC time and localtime. If both are specified, UTC offset overrides longitude. -W num Print lines from today and the next num days (forward, future). Ignore weekends when calculating the number of days. FILE FORMAT To handle calendars in your national code table you can specify “LANG=<locale_name>” in the calendar file as early as possible. 
To handle the local name of sequences, you can specify them as: “SEQUENCE=<first> <second> <third> <fourth> <fifth> <last>” in the calendar file as early as possible. The names of the following special days are recognized: Easter Catholic Easter. Paskha Orthodox Easter. NewMoon The lunar New Moon. FullMoon The lunar Full Moon. MarEquinox The solar equinox in March. JunSolstice The solar solstice in June. SepEquinox The solar equinox in September. DecSolstice The solar solstice in December. ChineseNewYear The first day of the Chinese year. These names may be reassigned to their local names via an assignment like “Easter=Pasen” in the calendar file. Other lines should begin with a month and day. They may be entered in almost any format, either numeric or as character strings. If the proper locale is set, national month and weekday names can be used. A single asterisk (``*'') matches every month. A day without a month matches that day of every week. A month without a day matches the first of that month. Two numbers default to the month followed by the day. Lines with leading tabs default to the last entered date, allowing multiple line specifications for a single date. The names of the recognized special days may be followed by a positive or negative integer, like: “Easter+3” or “Paskha-4”. Weekdays may be followed by ``-4'' ... ``+5'' (aliases for last, first, second, third, fourth) for moving events like ``the last Monday in April''. By convention, dates followed by an asterisk are not fixed, i.e., change from year to year. Day descriptions start after the first <tab> character in the line; if the line does not contain a <tab> character, it is not displayed. If the first character in the line is a <tab> character, it is treated as a continuation of the previous line. The calendar file is preprocessed by a limited subset of cpp(1) internally, allowing the inclusion of shared files such as lists of company holidays or meetings. 
This limited subset consists of #include, #define, #undef, #ifdef, #ifndef, #else, #warning, and #error. Conditions can be nested and the consistency of opening and closing instructions is checked. Only the first word after #define is used as the name of the condition variable being defined. More than one word following #ifdef, #ifndef, or #undef is considered a syntax error, since names cannot include white-space. Included files are parsed in a global scope with regard to the condition variables being defined or tested therein. All conditional blocks are implicitly closed at the end of a file, and missing #endif instructions are assumed to be present on implied succeeding lines. If the shared file is not referenced by a full pathname, calendar searches in the current (or home) directory first, and then in the directory /usr/share/calendar. Blank lines and text protected by the C comment syntax ‘/* ... */’ or ‘//’ are ignored, but the latter only at the beginning of a line or after white space to allow for URLs in calendar entries. Some possible calendar entries (<tab> characters highlighted by \t sequence): LANG=C Easter=Ostern #include <calendar.usholiday> #include <calendar.birthday> 6/15\tJune 15 (if ambiguous, will default to month/day). Jun. 15\tJune 15. 15 June\tJune 15. Thursday\tEvery Thursday. June\tEvery June 1st. 15 *\t15th of every month. 2010/4/15\t15 April 2010 May Sun+2\tsecond Sunday in May (Muttertag) 04/SunLast\tlast Sunday in April, \tsummer time in Europe Easter\tEaster Ostern-2\tGood Friday (2 days before Easter) Paskha\tOrthodox Easter FILES calendar file in current directory. ~/.calendar calendar HOME directory. A chdir is done into this directory if it exists. ~/.calendar/calendar calendar file to use if no calendar file exists in the current directory. ~/.calendar/nomail do not send mail if this file exists. /usr/share/calendar system wide location of calendar files provided as part of the operating system. 
/usr/local/share/calendar system wide location for calendar files not provided by the operating system. The order of precedence in searches for a calendar file is: current directory, ~/.calendar, /usr/local/share/calendar, /usr/share/calendar. Files of similar names are ignored in lower precedence locations. COMPATIBILITY The calendar program previously selected lines which had the correct date anywhere in the line. This is no longer true: the date is only recognized when it occurs at the beginning of a line. SEE ALSO at(1), mail(1), cron(8) HISTORY A calendar command appeared in Version 7 AT&T UNIX. NOTES Chinese New Year is calculated at 120 degrees east of Greenwich, which roughly corresponds with the east coast of China. For people west of China, this may mean that the start of Chinese New Year and the day of the related new moon differ. The phases of the moon and the longitude of the sun are calculated against the local position, which corresponds with 30 degrees times the time difference towards Greenwich. The new and full moons happen on the day indicated: they might fall in the early night or in the late evening. This does not indicate that they start in the night on that date. Because of minor differences between the output of the formulas used and other sources on the Internet, Druids and Werewolves should double-check the start and end time of solar and lunar events. BUGS calendar only recognizes the cpp directives #include, #define, #ifdef, #ifndef and #else. It supports nested conditions, but does not perform any validation on the correct use and nesting of conditions. #endif without a prior #ifdef or #define is ignored, and #else outside a conditional section skips input lines up to the next #endif. There is no way to properly specify the local position needed for solar and lunar calculations. macOS 14.5 July 31, 2022 macOS 14.5
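The file-format rules above can be sketched as a small calendar file (the entries here are hypothetical; the key points are the literal <tab> before each description and the leading <tab> for a continuation line):

```shell
# Build a minimal calendar file; printf '\t' emits the required literal tab.
{
  printf 'LANG=C\n'
  printf 'Easter=Pasen\n'                          # local name for a special day
  printf '6/15\tJune 15 birthday reminder\n'       # month/day, then <tab>, then text
  printf '\tsecond line of the same June 15 entry\n'  # leading <tab> = continuation
} > my.calendar
# calendar -f my.calendar -A 365   # would scan the whole year, if calendar(1) is installed
```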
|
calendar – reminder service
|
calendar [-A num] [-a] [-B num] [-D moon|sun] [-d] [-F friday] [-f calendarfile] [-l longitude] [-t dd[.mm[.year]]] [-U UTC-offset] [-W num]
| null | null |
locale
|
locale displays information about the current locale, or a list of all available locales. When locale is run with no arguments, it will display the current source of each locale category. When locale is given the name of a category, it acts as if it had been given each keyword in that category. For each keyword it is given, the current value is displayed.
|
locale – display locale settings
|
locale [-a|m] locale [-ck] name [...]
|
-a Lists all public locales. -c name ... Lists the category name before each keyword, unless it is the same category as the previously displayed keyword. -k name ... Displays the name of each keyword prior to its value. -m Lists all available public charmaps. Darwin locales do not support charmaps, so list all CODESETs instead. OPERANDS The following operand is supported: name is the name of a keyword or category to display. A list of all keywords and categories can be shown with the following command: locale -ck LC_ALL ENVIRONMENT LANG Used as a substitute for any unset LC_* variable. If LANG is unset, it will act as if set to "C". If any of LANG or LC_* are set to invalid values, locale acts as if they are all unset. LC_ALL Will override the setting of all other LC_* variables. LC_COLLATE Sets the locale for the LC_COLLATE category. LC_CTYPE Sets the locale for the LC_CTYPE category. LC_MESSAGES Sets the locale for the LC_MESSAGES category. LC_MONETARY Sets the locale for the LC_MONETARY category. LC_NUMERIC Sets the locale for the LC_NUMERIC category. LC_TIME Sets the locale for the LC_TIME category. SEE ALSO localedef(1), localeconv(3), nl_langinfo(3), setlocale(3) STANDARDS The locale utility conforms to IEEE Std 1003.1-2001 (``POSIX.1''). HISTORY locale appeared in Mac OS X 10.4 Darwin August 27, 2004 Darwin
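A quick sketch of the -k and -ck output (assumes a POSIX-conforming locale(1); run under the C locale so the value is predictable):

```shell
# Query one keyword; -k prints the keyword name with its value, and
# -c additionally prints the category it belongs to (LC_NUMERIC here).
LC_ALL=C locale -k decimal_point    # prints: decimal_point="."
LC_ALL=C locale -ck decimal_point   # same, preceded by LC_NUMERIC
```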
| null |
strip
|
strip removes or modifies the symbol table attached to the output of the assembler and link editor. This is useful to save space after a program has been debugged and to limit dynamically bound symbols. strip no longer removes relocation entries under any condition. Instead, it updates the external relocation entries (and indirect symbol table entries) to reflect the resulting symbol table. strip prints an error message for those symbols not in the resulting symbol table that are needed by an external relocation entry or an indirect symbol table. The link editor ld(1) is the only program that can strip relocation entries and know if it is safe to do so. When strip is used with no options on an executable file, it checks that file to see if it uses the dynamic link editor. If it does, the effect of the strip command is the same as using the -u and -r options. If the file does not use the dynamic link editor (e.g. -preload or -static), the effect of strip without any options is to completely remove the symbol table. The options -S, -x, and -X have the same effect as the ld(1) options. The options to strip(1) can be combined to trim the symbol table to just what is desired. You should trim the symbol table of files used with dynamic linking so that only those symbols intended to be external interfaces are saved. Files used with dynamic linking include executables, objects that are loaded (usually bundles), and dynamic shared libraries. Only global symbols are used by the dynamic linking process. You should strip all non-global symbols. When an executable is built with all its dependent dynamic shared libraries, it is typically stripped with: % strip -u -r executable which saves all undefined symbols (usually defined in the dynamic shared libraries) and all global symbols defined in the executable referenced by the dynamic libraries (as marked by the static link editor when the executable was built). 
This is the maximum level of stripping for an executable that will still allow the program to run correctly with its libraries. If the executable loads objects, however, the global symbols that the objects reference from the executable also must not be stripped. In this case, when linking the executable you should use the `-exported_symbols_list` option of the link editor ld(1) to limit which symbols can be referenced by the objects. Then you only need to strip local and debug symbols, like this: % strip -x -S executable For objects that will be loaded into an executable, you should trim the symbol table to limit the global symbols the executable will see. This would be done with: % strip -s interface_symbols -u object which would leave only the undefined symbols and symbols listed in the file interface_symbols in the object file. In this case, strip(1) has updated the relocation entries and indirect symbol table to reflect the new symbol table. For dynamic shared libraries, the maximum level of stripping is usually -x (to remove all non-global symbols). STRIPPING FILES FOR USE WITH RUNTIME LOADED CODE Trimming the symbol table for programs that load code at runtime allows you to control the interface that the executable wants to provide to the objects that it will load; it will not have to publish symbols that are not part of its interface. For example, an executable that wishes to allow only a subset of its global symbols but all of the statically linked shared library's globals to be used would be stripped with: % strip -s interface_symbols -A executable where the file interface_symbols would contain only those symbols from the executable that it wishes the code loaded at runtime to have access to. 
Another example: an object that is made up of a number of other objects and will be loaded into an executable would be built and then stripped with: % ld -o relocatable.o -r a.o b.o c.o % strip -s interface_symbols -u relocatable.o which would leave only the undefined symbols and symbols listed in the file interface_symbols in the object file. In this case strip(1) has updated the relocation entries to reflect the new symbol table.
|
strip - remove symbols
|
strip [ option ] name ...
|
The first set of options indicates symbols that are to be saved in the resulting output file. -u Save all undefined symbols. This is intended for use with relocatable objects to save symbols referred to by external relocation entries. Note that common symbols are also referred to by external relocation entries and this flag does not save those symbols. -r Save all symbols referenced dynamically. -s filename Save the symbol table entries for the global symbols listed in filename. The symbol names listed in filename must be one per line. Leading and trailing white space are not part of the symbol name. Lines starting with # are ignored, as are lines with only white space. -R filename Remove the symbol table entries for the global symbols listed in filename. This file has the same format as the -s filename option above. This option is usually used in combination with other options that save some symbols, -S, -x, etc. -i Ignore symbols listed in the -s filename or -R filename options that are not in the files to be stripped (this is normally an error). -d filename Save the debugging symbol table entries for each source file name listed in filename. The source file names listed in filename must be one per line with no other white space in the file except the newlines on the end of each line. And they must be just the base name of the source file without any leading directories. This option works only with the stab(5) debugging format; it has no effect when using the DWARF debugging format. -A Save all global absolute symbols except those with a value of zero, and save Objective C class symbols. This is intended for use with programs that load code at runtime and want the loaded code to use symbols from the shared libraries (this is only used with NEXTSTEP 3.3 and earlier releases). -n Save all N_SECT global symbols. 
This is intended for use with executable programs in combination with -A to remove the symbols needed for correct static link editing which are not needed for use with runtime loading interfaces, where using the -s filename option would be too much trouble (this is only used with NEXTSTEP 3.3 and earlier releases). These options specify symbols to be removed from the resulting output file. -S Remove the debugging symbol table entries (those created by the -g option to cc(1) and other compilers). -X Remove the local symbols whose names begin with `L'. -T The intent of this flag is to remove Swift symbols from the Mach-O symbol table. It removes the symbols whose names begin with `_$S' or `_$s' only when it finds an __objc_imageinfo section with a non-zero Swift version. In the future, the implementation of this flag may change to match the intent. When used together with -R or -s files, the Swift symbols will also be removed from the global symbol lists used by dyld. -N In binaries that use the dynamic linker, remove all nlist symbols and the string table. Setting the environment variable STRIP_NLISTS has the same effect. -x Remove all local symbols (saving only global symbols). -c Remove the section contents of a dynamic library, creating a stub library output file. And the last options: - Treat all remaining arguments as file names and not options. -D When stripping a static library, set the archive's SYMDEF file's user id, group id, date, and file mode to reasonable defaults. See the libtool(1) documentation for -D for more information. -o output Write the result into the file output. -v Print the arguments passed to other tools run by strip(1) when processing object files. -no_uuid Remove any LC_UUID load commands. -no_split_info Remove the LC_SEGMENT_SPLIT_INFO load command and its payload. -no_atom_info Remove the LC_ATOM_INFO load command and its payload. -no_code_signature_warning Don't warn when the code signature would be invalid in the output. 
-arch arch_type Specifies the architecture, arch_type, of the file for strip(1) to operate on when the file is a universal file. (See arch(3) for the currently known arch_types.) The arch_type can be "all" to operate on all architectures in the file, which is the default. SEE ALSO ld(1), libtool(1), cc(1)
|
When creating a stub library the -c and -x options are typically used: strip -x -c libfoo -o libfoo.stripped LIMITATIONS Not every layout of a Mach-O file can be stripped by this program. But all layouts produced by the Apple compiler system can be stripped. Apple Inc. June 23, 2023 STRIP(1)
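The one-symbol-per-line file format accepted by the -s and -R options can be sketched as follows. The file and library names below are hypothetical, and the strip invocation itself is only shown in a comment since it requires Apple's strip(1):

```shell
# Build a symbols file for -s: one global symbol per line; lines starting
# with # and lines with only white space are ignored.
cat > keep-symbols.txt <<'EOF'
# public entry points to keep
_main
_my_exported_function
EOF
# On macOS one would then run:
#   strip -s keep-symbols.txt -o libfoo.stripped libfoo
# Count the symbol lines (everything except the comment):
grep -cv '^#' keep-symbols.txt
```

The count printed is 2, one per saved symbol.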
|
symbolscache
|
The symbolscache command may be used to list, add, and delete entries from the global symbolscache. Darwin 9/20/10 Darwin
|
symbolscache – display and modify symbolscache information
| null | null | null |
trace
|
trace records and modifies files of software events used for performance analysis. A trace file captures what the system was doing over a period of time, like which threads are scheduled, what memory is used for the first time, and thousands of other kinds of events from software running in the kernel, user space, or on coprocessors. RECORD Trace files (with the .atrc extension) capture how a Darwin system behaves for a period of time. By default, they include a selection of kdebug trace events, Unified Logging information, and metadata to support analysis, like symbols and machine configuration. The record subcommand creates these files from the current system, according to a plan and options passed in on the command line. The file-name positional argument is used as a prefix and can include path components. The path to the file is derived by adding an incrementing number at the end, followed by the file extension. To write to a particular file path, end the argument with ‘.atrc’. The default plan produces files readable by Instruments System Trace and spindump(1). Plans support safe configuration by the user with ‘layers’ and ‘providers’. Layers are listed by the help output for trace record and alter basic configuration of the plan, like which events are collected. Listing providers is not yet implemented, but they add more complex features, like custom data sources beyond kdebug trace. Unified Logging support is implemented as a provider, for instance. This subcommand is opinionated about unsafe operations, and requires any options that may impact the reliability of the tool to also include the --unsafe flag to acknowledge that the files produced may be unusable. Experimental features are treated similarly, requiring a --experimental flag while they are still being vetted. --help | -h Present a help message for the record subcommand. --plan Use a non-default plan. Must be one of those listed by trace plans. 
--add layer-or-provider Add a layer or provider to the chosen plan, augmenting its behavior. The list of layers is shown in the help message or trace plans. The list of providers can be obtained using trace providers. --provider-name:option-name=option-value Set the option option-name to option-value for use by the provider named provider-name. The list of possible options is reported by trace providers. --omit layer-or-provider Omit a default layer or provider from the chosen plan. --overwrite Allow the output file to overwrite a pre-existing file. --compress Compress the events in the output file. --notify-after-start notification-name Emit a Darwin notification named notification-name with notify(3) after starting the trace session. Other systems can use this notification to stage their workloads, either with the notify(3) interfaces or notifyutil(1). For instance, ‘notifyutil -1 ktrace-start’ will wait for the notification named ktrace-start to be published and then exit. This option can be specified multiple times to send additional notifications. --notify-after-end notification-name Emit a Darwin notification named notification-name with notify(3) after the trace session has finished. --end-after-duration duration End tracing after the specified time period elapses. --end-on-notification notification-name End tracing when a Darwin notification matching the notification-name is published with notify(3). --end-on-kdebug-event event-id End tracing when a kdebug event with the given event-id is emitted. This is currently experimental and unsafe if the event is not part of the plan. --end-after-kdebug-events-size size-bytes End tracing when the file reaches the specified size-bytes number of bytes for kdebug events. --trailing-duration duration Only include events within the specified duration before trace is ended. In other words, keep a ring buffer of events, dropping any that are older than duration time in the past. 
This can be used to reduce the impact of recording's I/O on storage, at the cost of higher CPU usage spent processing incoming events. --start-on-notification notification-name Wait to start tracing until a Darwin notification matching the notification-name is published with notify(3) or notifyutil(1). For instance, ‘notifyutil -p ktrace-end’ publishes a notification named ktrace-end. --profiling-interval duration Fire the profiling timer at a different rate than the plan specifies. The duration argument accepts suffixes of us, ms, and s. The following options are unsafe and may produce an unusable trace file. --unsafe Allow unsafe options to be used. --experimental Allow experimental plans and options to be used. --kdebug-buffer-size size-with-suffix Override the default buffer size for the kdebug trace system. Smaller buffers are likely to lose events, while larger buffers can have a more significant impact on the system. --kdebug-filter-include filter-description Specify additional kdebug events to include in the trace file, following a filter description. Filter descriptions are a comma-separated list of either of two rules: C0x01 Filter all events in the given class; in this case, class 1. S0x0140 Filter events in a particular subclass, where the top byte is the class and the bottom byte is the subclass within that class. In this case, class 1 and subclass 0x40. Additional events may require changes to the buffer size. --kdebug-filter-exclude filter-description Prevent kdebug events from being included in the trace file, following a filter description. Some events are necessary for particular analysis tools. --prioritize-collection Increase the priority of the collection thread, at the cost of potentially interfering with the workload being measured. AMEND trace amend adds more information to previously-recorded trace files from providers. --add provider-name At least one provider must be added to the amending process. 
--provider-name:option-name=option-value Set options for the provider to amend with, as described in trace providers. TRIM trace trim removes events from a trace file except for those within a specified time range. --from time-spec Removes all events before the provided time-spec, which is a number interpreted based on its prefix: @ event timestamp + seconds since the start of tracing - seconds before the end of tracing --to time-spec Removes all events after the provided time-spec. --output | -o path Write the trimmed file to the specified path. PLANS trace plans lists the plans available to trace record and the layers that can be added to them. --verbose Print additional information about each plan, like its documentation. --experimental Show experimental plans. PROVIDERS trace providers lists the providers available to trace record and the options that can be passed to them. --experimental Show experimental providers. ENVIRONMENT KTRACE_PLAN_PATH Redirect the tool to search for plans under the directory path set in this variable. This requires the --experimental flag. KTRACE The ‘ktrace’ feature is supported by two kernel subsystems: kdebug provides the event format and buffering system and kperf emits sampling information as events based on triggers. The event format used by kdebug is simple and constraining, but effective. Events are classified using a 32-bit debug ID: class subclass code function ╭──────┬───────┬─────────────┬─╮ │ 8 │ 8 │ 14 │2│ ╰──────┴───────┴─────────────┴─╯ ╰──────────────╯ │ class-subclass 00│ ╰──────────────────────────────╯ │ event ID │ ╰──────────────────────────────╯ debug ID Classes are assigned in <sys/kdebug.h> for broad parts of the system. Each class can assign its own subclasses. The class-subclass is the finest granularity that can be filtered on. Codes are for specific events in each subclass, and functions denote whether the event is a start (DBG_FUNC_START), end (DBG_FUNC_END), or impulse (left unset). 
An event ID is a debug ID with the function bits set to 0. Events also contain a timestamp, 4 pointer-sized arguments, the ID of the thread that emitted the event, and the CPU ID on which it was emitted. The CPU ID may be greater than the number of CPUs on the system — denoting a coprocessor event. Trace files can be analyzed with dedicated tools, including fs_usage(1), spindump(1), or Instruments, depending on how they were recorded and the filters in effect. EXIT STATUS The trace utility exits 0 on success, and >0 if an error occurs. SEE ALSO fs_usage(1), notify(3), ktrace(5), and ktrace(1) Darwin December 1, 2023 Darwin
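The debug ID bit layout described above maps directly to arithmetic. As an illustration (the field values here are made up, not from a real trace):

```shell
# Compose a kdebug debug ID from its fields:
# class (8 bits) | subclass (8 bits) | code (14 bits) | function (2 bits)
class=0x01; subclass=0x40; code=0x3; func=1   # func 1 = DBG_FUNC_START
debugid=$(( (class << 24) | (subclass << 16) | (code << 2) | func ))
printf 'debug ID:       0x%08x\n' "$debugid"
# The class-subclass pair is the finest filtering granularity (cf. the
# S0x0140 filter rule for trace record):
printf 'class-subclass: 0x%04x\n' $(( debugid >> 16 ))
# An event ID is the debug ID with the function bits cleared:
printf 'event ID:       0x%08x\n' $(( debugid & ~0x3 ))
```

This prints 0x0140000d, 0x0140, and 0x0140000c respectively.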
|
trace – record and modify trace files
|
trace record file-name [options] trace amend file-path --add provider [options] trace trim file-name [options] trace plans [options] trace providers [options]
| null | null |
tops
|
tops is a tool that performs in-place substitutions on source files according to a set of rules. Each tops rule describes a particular translation. For example, one tops rule might specify that occurrences of the token 'Application' should be converted to 'NSApplication'. In tops syntax, this rule will appear as: replace "Application" with "NSApplication"
|
tops - perform in-place substitutions on code.
|
tops [-help] [-verbose] [-nocontext] [-nofileinfo] [-semiverbose] [-dont] (-scriptfile script_name) | (find "search_pattern" [where ("symbol"...) isOneOf {("match"...)...}] ...) | (replace "search_pattern" with "replacement_pattern" | same [where ("symbol"...) isOneOf {("match"...)...}]... [within ("symbol") {...}]... [error "message"] [warning "message"]) | ( replacemethod "selector" with "new_selector" { [replace "symbol" with "symbol_replacement"]... } [where ("symbol"...) isOneOf {("match" ...)...}]... [within ("symbol") {...}]... [error "message"] [warning "message"] ) [-classfile classfile] [filename ...]
|
-help Displays the tops syntax line. -verbose Prints out the source code lines that are being changed by the command. -nocontext Instead of printing the whole source code line that is being changed or searched for, shows only the portion of the line that has the change. -nofileinfo Does not print the file name and line number information in verbose messages. -semiverbose Shows how much of the file has been processed. -dont Shows what changes would be made to the source code without actually performing the changes. -scriptfile script_name Specifies the script file containing the rules that tops will apply to your code. The script file can contain three types of rules: find, replace, and replacemethod. It also can contain C- style comments, /* ... */. find "search_pattern" Locates all occurrences of search_pattern in the file. search_pattern can contain literal strings and tokens in angle brackets, as described below. where ("symbol"...) isOneOf {("match"...)...} When search_pattern contains tokens in angle brackets, further refines what the token specified by symbol should match. replace "search_pattern" with "replacement_pattern" | same Replaces all occurrences of search_pattern in the file with replacement_pattern. same replaces search_pattern with itself. You usually use same when you want to print out an error or warning message instead of replacing the code. within ("symbol") {...} Specifies further conversions within one of the tokens specified in search_pattern. find, replace, and replacemethod rules can appear within the angle brackets. error "message" Generates an #error message located at search_pattern. warning "message" Generates a #warning message located at search_pattern. replacemethod "selector" with "new_selector" Replaces all invocations, declarations, implementations, and @selector expressions using the method selector with new_selector. -classfile classfile Specifies a file that describes the class hierarchy used by the files being processed. 
filename ... Specifies the source file(s) you want to convert. You can specify more than one filename, separated by spaces. The files are converted in place; no backups are created. If no file is specified, the tops commands are performed on standard input. The simplest search pattern is a literal string, such as "Application". Within the search pattern, you can define tokens that specify a particular syntax element rather than a literal string. The tokens have the form: <type label> where: type Specifies the type of syntax element the token can match with. label Is a unique label that you assign to the token. type can be one of the following: a Matches any sequence of tokens. b Matches any balanced sequence of tokens, that is, a sequence of tokens within parentheses or curly braces. e Matches any expression. This is the default. s Matches any string. t Matches any one token. w Matches white space, including comments. In a replacemethod rule, three subtokens are defined for each token you specify in the selector. For each token <foo> in the selector, replacemethod defines the following. The Examples section shows an example of using one of these. <foo_arg> Represents the tokens in the invocation of the method, that is, what is supplied for the foo argument. <foo_type> Represents the type for foo that appears in the declaration. <foo_param> Represents the parameter in the declaration. replacemethod also defines the following labels: <implementation> Represents the body of the method implementation (not including curly braces). <receiver> Represents the receiver of the message. <call> Represents the entire method invocation (including the square brackets).
|
The following is a typical tops command invocation. The script file MyRules.tops contains the find, replace, and replacemethod rules that are performed on the files in MyProjectDir. The -semiverbose option means that name of the file being processed and the progress of the command will be printed to standard output. tops -semiverbose -scriptfile MyRules.tops MyProjectDir/*.[hm] The following is a typical rule that a tops script file would contain. The rule renames the method removeRowAt:andFree: to removeRow:andRelease: in all invocations, declarations, implementations, and @selector expressions. replacemethod "removeRowAt:andFree:" with "removeRow:andRelease:" The following rule marks all calls to the function NXGetNamedObject() with the error message. same means replace this function with itself. NXGetNamedObject() will still appear in the file, but it will be marked by the error message. <b args> specifies to replace all of the arguments in between the parentheses as well. replace "NXGetNamedObject(<b args>)" with same error "ApplicationConversion: NXGetNamedObject() is obsolete. Replace with nib file outlets." The following rule renames the method in all occurrences, and swaps the second and third argument in all invocations and declarations. replacemethod "browser:fillMatrix:<2>inColumn:<3>" with "browser:createRowsForColumn:<3>inMatrix:<2>" The following rule renames the method in all occurrences. In the invocations, it reverses the value specified for the flag argument. replacemethod "myMethod:<flag>" with "myNewMethod:<flag>" { replace "<flag_arg>" with "!<flag_arg>" } The following rule renames the method initContent:style:backing:buttonMask:defer: to initWithContentRect:styleMask:backing:defer: in all occurrences. In the declarations of this method, it changes the type for the style argument to be unsigned int and the type for the backing argument to be NSBackingStoreType. 
replacemethod "<old>" with "<new>" { replace "<style_type>" with "(unsigned int)" replace "<backing_type>" with "(NSBackingStoreType)" } where ("<old>", "<new>") isOneOf { ("initContent:style:<style> backing:<backing> buttonMask:<bmask> defer:<flag>", "initWithContentRect:styleMask:<style> backing:<backing> defer:<flag>"), } The following rule renames the method minFrameWidth:forStyle:buttonMask: to minFrameWidthWithTitle:styleMask: in all occurrences. Within invocations of this method, it changes the style argument to be the logical OR of the previous style argument and the previous button mask argument. Within method declarations, it changes the type for the style argument to be unsigned int. Within the implementation of this method, it changes all uses of the button mask argument to the style argument. replacemethod "minFrameWidth:forStyle:<style> buttonMask:<bmask>" with "minFrameWidthWithTitle:styleMask:<style>" { replace "<style_arg>" with "<style_arg>|<bmask_arg>" replace "<style_type>" with "(unsigned int)" } within ("<implementation>") { replace "<bmask_param>" with "<style_param>" } Apple Computer, Inc. March 14, 1995 TOPS(1)
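tops ships only with Apple's developer tools, but the behavior of a simple token rule can be approximated for illustration. As a rough analog (GNU sed syntax; the file name Demo.m is hypothetical), the rule replace "Application" with "NSApplication" acts on whole tokens, so longer identifiers such as ApplicationKit are left alone:

```shell
printf 'id app = [Application new]; // ApplicationKit untouched\n' > Demo.m
# \b matches word boundaries, so only the standalone token is replaced:
sed -i 's/\bApplication\b/NSApplication/g' Demo.m
cat Demo.m
```

The result is `id app = [NSApplication new]; // ApplicationKit untouched`. Unlike sed, tops understands the language syntax, which is what makes its method-level rules possible.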
|
lprm
|
lprm cancels print jobs that have been queued for printing. If no arguments are supplied, the current job on the default destination is canceled. You can specify one or more job ID numbers to cancel those jobs or use the - option to cancel all jobs.
|
lprm - cancel print jobs
|
lprm [ -E ] [ -U username ] [ -h server[:port] ] [ -P destination[/instance] ] [ - ] [ job-id(s) ]
|
The lprm command supports the following options: -E Forces encryption when connecting to the server. -P destination[/instance] Specifies the destination printer or class. -U username Specifies an alternate username. -h server[:port] Specifies an alternate server. CONFORMING TO The CUPS version of lprm is compatible with the standard Berkeley command of the same name.
|
Cancel the current job on the default printer: lprm Cancel job 1234: lprm 1234 Cancel all jobs: lprm - SEE ALSO cancel(1), lp(1), lpq(1), lpr(1), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS lprm(1)
|
tiffutil
|
tiffutil lets you manipulate TIFF files. The list of options (also available by running the program without any options) follows: tiffutil -none infile [-out outfile] -lzw infile [-out outfile] -packbits infile [-out outfile] -cat infile1 [infile2 ...] [-out outfile] -catnosizecheck infile1 [infile2 ...] [-out outfile] -cathidpicheck infile1 [infile2 ...] [-out outfile] -extract num infile [-out outfile] -info infile -verboseinfo infile -dump infile -none, -lzw, and -packbits options specify the compression format to be applied to the images in the TIFF file. -none specifies no compression; -packbits specifies PackBits compression; -lzw specifies standard Lempel- Ziv & Welch compression (no prediction scheme). -cat allows combining multiple TIFF files into one. The images are copied without any change in tag values. If the real sizes (pixel size divided by dpi) of the images being combined are not the same, a warning will be generated. This makes sure that NSImage can successfully choose the right size image out of the generated TIFF file. Use -catnosizecheck to bypass the size check. -cathidpicheck can be used to write an output file conforming to Apple's guidelines for resolution independent bitmap images, and will generate warnings if the supplied images do not have the recommended size relationship. For best results, ensure that the larger file has a filename of the form <basename>@2x.png. -extract allows extracting an individual image from a TIFF file; specify num = 0 for the first image in the file. -info prints information about TIFF images. -verboseinfo is the same, except most of the tables are displayed in full. -dump simply lists all of the tags in the file without trying to interpret them; it is handy when trying to figure out why a TIFF file won't load or display properly. For options which write images out, the output goes to "out.tiff" unless an output file name is specified after a -out keyword. 
This keyword and the file must be the last items on the command line. -info, -verboseinfo, and -dump write their output to the standard output. If there are multiple images in a TIFF file the specified operation will be performed on all of them. SECURITY NOTE: This version of tiffutil SHOULD NOT be used with untrusted files. CREDITS Parts of tiffutil were based on the freely distributable "tiffcp" and "tiffinfo" programs written by Sam Leffler and made available with v3.0 of his excellent TIFF library. The TIFF library and the tiffcp and tiffinfo programs are: Copyright (c) 1988, 1989, 1990, 1991, 1992 Sam Leffler Copyright (c) 1991, 1992 Silicon Graphics, Inc. macOS September 2, 2010 macOS
|
tiffutil - manipulates tiff files
|
tiffutil <option> [<arguments>] [-out <outfile>]
| null | null |
du
|
The du utility displays the file system block usage for each file argument and for each directory in the file hierarchy rooted in each directory argument. If no file is specified, the block usage of the hierarchy rooted in the current directory is displayed. The options are as follows: -A Display the apparent size instead of the disk usage. This can be helpful when operating on compressed volumes or sparse files. -B blocksize Calculate block counts in blocksize byte blocks. This is different from the -h, -k, -m, --si and -g options or setting BLOCKSIZE and gives an estimate of how much space the examined file hierarchy would require on a filesystem with the given blocksize. Unless in -A mode, blocksize is rounded up to the next multiple of 512. -H Symbolic links on the command line are followed, symbolic links in file hierarchies are not followed. -I mask Ignore files and directories matching the specified mask. -L Symbolic links on the command line and in file hierarchies are followed. -P No symbolic links are followed. This is the default. -a Display an entry for each file in a file hierarchy. -c Display a grand total. -d depth Display an entry for all files and directories depth directories deep. -g Display block counts in 1073741824-byte (1 GiB) blocks. -h “Human-readable” output. Use unit suffixes: Byte, Kilobyte, Megabyte, Gigabyte, Terabyte and Petabyte based on powers of 1024. -k Display block counts in 1024-byte (1 kiB) blocks. -l If a file has multiple hard links, count its size multiple times. The default behavior of du is to count files with multiple hard links only once. When the -l option is specified, the hard link checks are disabled, and these files are counted (and displayed) as many times as they are found. -m Display block counts in 1048576-byte (1 MiB) blocks. -n Ignore files and directories with user “nodump” flag (UF_NODUMP) set. -r Generate messages about directories that cannot be read, files that cannot be opened, and so on. 
This is the default case. This option exists solely for conformance with X/Open Portability Guide Issue 4 (“XPG4”). -s Display an entry for each specified file. (Equivalent to -d 0) --si “Human-readable” output. Use unit suffixes: Byte, Kilobyte, Megabyte, Gigabyte, Terabyte and Petabyte based on powers of 1000. -t threshold Display only entries for which size exceeds threshold. If threshold is negative, display only entries for which size is less than the absolute value of threshold. -x File system mount points are not traversed. The du utility counts the storage used by symbolic links and not the files they reference unless the -H or -L option is specified. If either the -H or -L option is specified, storage used by any symbolic links which are followed is not counted (or displayed). The -H, -L and -P options override each other and the command's actions are determined by the last one specified. Files having multiple hard links are counted (and displayed) a single time per du execution. Directories having multiple hard links (typically Time Machine backups) are counted a single time per du execution. The -h, -k, -m and --si options all override each other; the last one specified determines the block counts used. ENVIRONMENT BLOCKSIZE If the environment variable BLOCKSIZE is set, and the -h, -k, -m or --si options are not specified, the block counts will be displayed in units of that block size. If BLOCKSIZE is not set, and the -h, -k, -m or --si options are not specified, the block counts will be displayed in 512-byte blocks.
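The rounding that -B performs can be sketched with shell arithmetic. This is only an illustration of the block accounting described above, not part of du itself:

```shell
# A 1000-byte file, with a 300-byte blocksize requested via -B:
bytes=1000 requested=300
# Unless -A is given, blocksize is rounded up to the next multiple of 512:
blocksize=$(( ( (requested + 511) / 512 ) * 512 ))
# Each file contributes its size rounded up to whole blocks:
blocks=$(( (bytes + blocksize - 1) / blocksize ))
echo "blocksize=$blocksize blocks=$blocks"
```

This prints `blocksize=512 blocks=2`: the requested 300-byte blocksize becomes 512, and the 1000-byte file is charged two blocks.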
|
du – display disk usage statistics
|
du [-Aclnx] [-H | -L | -P] [-g | -h | -k | -m] [-a | -s | -d depth] [-B blocksize] [-I mask] [-t threshold] [file ...]
| null |
Show disk usage for all files in the current directory. Output is in human-readable form: # du -ah Summarize disk usage in the current directory: # du -hs Summarize disk usage for a specific directory: # du -hs /home Show name and size of all C files in a specific directory. Also display a grand total at the end: # du -ch /usr/src/sys/kern/*.c SEE ALSO df(1), chflags(2), fts(3), symlink(7), quot(8) STANDARDS The du utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-cdhP], as well as the BLOCKSIZE environment variable, are extensions to that specification. The flag [-r] is accepted but ignored, for compatibility with systems implementing the obsolete X/Open Commands and Utilities Issue 5 (“XCU5”) standard. HISTORY The du utility and its -a and -s options first appeared in Version 1 AT&T UNIX. The -r option first appeared in AT&T System III UNIX and is available since FreeBSD 3.5. The -k and -x options first appeared in 4.3BSD-Reno and -H in 4.4BSD. The -c and -L options first appeared in the GNU fileutils; -L and -P are available since 4.4BSD-Lite1, -c since FreeBSD 2.2.6. The -d option first appeared in FreeBSD 2.2, -h first appeared in FreeBSD 4.0. AUTHORS This version of du was written by Chris Newcomb for 4.3BSD-Reno in 1989. macOS 14.5 August 1, 2019 macOS 14.5
|
debinhex5.34.pl
| null | null | null | null | null |
diff3
|
The diff3 utility compares the contents of three different versions of a file, file1, file2 and file3, writing the result to the standard output. The options describe different methods of merging and purging the separate versions into a new file. diff3 is used by rcs(1) to merge specific versions or create new versions. The options are as follows: -3, --easy-only Produces an output script suitable for ed(1) with changes specific only to file3. -A, --show-all Output all changes, bracketing conflicts. -a, --text Treat all files as ASCII. -E, --show-overlap -X Similar to -e and -x, respectively, but treat overlapping changes (i.e., changes that would be noted with ==== in the normal listing) differently. The overlapping lines from both files will be inserted by the edit script, bracketed by "<<<<<<" and ">>>>>>" lines. -e, --ed Produces output in a form suitable as an input script for the ed(1) utility. The script may then be used to merge differences common between all three files and differences specific to file1 and file3. In other words, the -e option ignores differences specific to file1 and file2, and those specific to file2 and file3. It is useful for backing out changes specific to file2 only. --help Prints usage information and exits. -i Appends 'w' and 'q' ed(1) commands. -L, --label Defines labels to print instead of file names file1, file2 and file3. -m, --merge Merge output instead of generating ed script. -T, --initial-tab In the normal listing, use a tab instead of two spaces at the beginning of each line. In modes that produce an ed(1) script, this option changes nothing. -x, --overlap-only Produces an output script suitable for ed(1) with changes specific only to all three versions. --diff-program program Use program instead of the default diff(1) to compare files. --strip-trailing-cr Strip trailing carriage return on input files. --version Prints version information and exits. 
The -E option is used by RCS merge(1) to ensure that overlapping changes in the merged files are preserved and brought to someone's attention. For example, suppose lines 7-8 are changed in both file1 and file2. Applying the edit script generated by the command $ diff3 -E file1 file2 file3 to file1 results in the file: lines 1-6 of file1 <<<<<<< file1 lines 7-8 of file1 ======= lines 7-8 of file3 >>>>>>> file3 rest of file1 The default output of diff3 makes notation of the differences between all files, and those differences specific to each pair of files. The changes are described by the commands necessary for ed(1) to create the desired target from the different versions. See diff(1) for a description of the commands. ==== The lines beneath this notation are ranges of lines which are different between all files. ====n The lines beneath this notation are ranges of lines which are exclusively different in file n. SEE ALSO diff(1), ed(1), sdiff(1) HISTORY A diff3 command appeared in Version 7 AT&T UNIX. BUGS The -e option cannot catch and change lines which have ‘.’ as the first and only character on the line. The resulting script will fail on that line as ‘.’ is an ed(1) editing command. macOS 14.5 June 23, 2022 macOS 14.5
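Assuming a system where diff3 is installed (e.g. GNU diffutils), a minimal three-way merge with -m looks like this; the file names are illustrative:

```shell
printf 'one\ntwo\nthree\n' > mine
printf 'one\ntwo\nthree\n' > older
printf 'one\nTWO\nthree\n' > yours
# mine is unchanged from the common ancestor (older), so the edit in
# yours merges cleanly, with no conflict markers:
diff3 -m mine older yours
```

The merged output takes the change from yours, printing `one`, `TWO`, `three`.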
|
diff3 – 3-way differential file comparison
|
diff3 [-3AaEeimTXx] [--diff-program program] [--strip-trailing-cr] [-L | --label label1] [-L | --label label2] [-L | --label label3] file1 file2 file3 diff3 [--help] [--version]
| null | null |
cupstestppd
|
cupstestppd tests the conformance of PPD files to the Adobe PostScript Printer Description file format specification version 4.3. It can also be used to list the supported options and available fonts in a PPD file. The results of testing and any other output are sent to the standard output. The first form of cupstestppd tests one or more PPD files on the command-line. The second form tests the PPD file provided on the standard input.
|
cupstestppd - test conformance of ppd files
|
cupstestppd [ -I category ] [ -R rootdir ] [ -W category ] [ -q ] [ -r ] [ -v[v] ] filename.ppd[.gz] [ ... filename.ppd[.gz] ] cupstestppd [ -R rootdir ] [ -W category ] [ -q ] [ -r ] [ -v[v] ] -
|
cupstestppd supports the following options: -I filename Ignores all PCFileName warnings. -I filters Ignores all filter errors. -I profiles Ignores all profile errors. -R rootdir Specifies an alternate root directory for the filter, pre-filter, and other support file checks. -W constraints Report all UIConstraint errors as warnings. -W defaults Except for size-related options, report all default option errors as warnings. -W filters Report all filter errors as warnings. -W profiles Report all profile errors as warnings. -W sizes Report all media size errors as warnings. -W translations Report all translation errors as warnings. -W all Report all of the previous errors as warnings. -W none Report all of the previous errors as errors. -q Specifies that no information should be displayed. -r Relaxes the PPD conformance requirements so that common whitespace, control character, and formatting problems are not treated as hard errors. -v Specifies that detailed conformance testing results should be displayed rather than the concise PASS/FAIL/ERROR status. -vv Specifies that all information in the PPD file should be displayed in addition to the detailed conformance testing results. The -q, -v, and -vv options are mutually exclusive. EXIT STATUS cupstestppd returns zero on success and non-zero on error. The error codes are as follows: 1 Bad command-line arguments or missing PPD filename. 2 Unable to open or read PPD file. 3 The PPD file contains format errors that cannot be skipped. 4 The PPD file does not conform to the Adobe PPD specification.
|
The following command will test all PPD files under the current directory and print the names of each file that does not conform: find . -name \*.ppd \! -exec cupstestppd -q '{}' \; -print The next command tests all PPD files under the current directory and prints detailed conformance testing results for the files that do not conform: find . -name \*.ppd \! -exec cupstestppd -q '{}' \; \ -exec cupstestppd -v '{}' \; NOTES PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO lpadmin(8), CUPS Online Help (http://localhost:631/help), Adobe PostScript Printer Description File Format Specification, Version 4.3. COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS cupstestppd(1)
|
json_pp5.30
|
json_pp converts data between several input and output formats (one of which is JSON). This program was copied from json_xs and modified. The default input format is json and the default output format is json with the pretty option.
|
json_pp - JSON::PP command utility
|
json_pp [-v] [-f from_format] [-t to_format] [-json_opt options_to_json1[,options_to_json2[,...]]]
|
-f from_format
     Reads data in the given format from STDIN. Format types:

     json    as JSON
     eval    as Perl code

-t to_format
     Writes data in the given format to STDOUT. Format types:

     null    no action
     json    as JSON
     dumper  as Data::Dumper

-json_opt options_to_json
     Options passed to JSON::PP. Acceptable options are:

     ascii latin1 utf8 pretty indent space_before space_after relaxed canonical
     allow_nonref allow_singlequote allow_barekey allow_bignum loose escape_slash

     Multiple options must be separated by commas:

     Right: -json_opt pretty,canonical
     Wrong: -json_opt pretty -json_opt canonical

-v
     Verbose option; currently has no effect.

-V
     Prints version and exits.
|
$ perl -e'print q|{"foo":"XX","bar":1234567890000000000000000}|' |\
    json_pp -f json -t dumper -json_opt pretty,utf8,allow_bignum
$VAR1 = {
          'bar' => bless( {
                            'value' => [
                                         '0000000',
                                         '0000000',
                                         '5678900',
                                         '1234'
                                       ],
                            'sign' => '+'
                          }, 'Math::BigInt' ),
          'foo' => "\x{3042}\x{3044}"
        };

$ perl -e'print q|{"foo":"XX","bar":1234567890000000000000000}|' |\
    json_pp -f json -t dumper -json_opt pretty
$VAR1 = {
          'bar' => '1234567890000000000000000',
          'foo' => "\x{e3}\x{81}\x{82}\x{e3}\x{81}\x{84}"
        };

SEE ALSO
     JSON::PP, json_xs

AUTHOR
     Makamaka Hannyaharamitu, <makamaka[at]cpan.org>

COPYRIGHT AND LICENSE
     Copyright 2010 by Makamaka Hannyaharamitu

     This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

perl v5.30.3 2024-04-13 JSON_PP(1)
|
sw_vers
|
sw_vers prints macOS version information for the currently running operating system on the local machine. When executed with no options sw_vers prints a short list of version properties:

     % sw_vers
     ProductName:            macOS
     ProductVersion:         13.0
     ProductVersionExtra:    (a)
     BuildVersion:           22A100

The ProductName property provides the name of the operating system release (typically "macOS"). The ProductVersion property defines the version of the operating system release (for example, "11.3" or "12.0"). The ProductVersionExtra property defines the Rapid Security Response version, if one is installed on the operating system (for example, "(a)" or "(b)"). The BuildVersion property provides the specific revision of the operating system as generated by the macOS build system.
|
sw_vers – print macOS system version information
|
sw_vers sw_vers --productName sw_vers --productVersion sw_vers --productVersionExtra sw_vers --buildVersion
|
The output of sw_vers can be refined by the following options. These long-form options can also be passed in lowercase for convenience.

--productName
     Print only the value of the ProductName property.

--productVersion
     Print only the value of the ProductVersion property.

--productVersionExtra
     Print only the value of the ProductVersionExtra property.

--buildVersion
     Print only the value of the BuildVersion property.
|
     % sw_vers --productName
     macOS
     % sw_vers --productVersion
     13.0
     % sw_vers --productVersionExtra
     (a)
     % sw_vers --buildVersion
     22A100

COMPATIBILITY
     Previous versions of sw_vers respected the SYSTEM_VERSION_COMPAT environment variable to provide compatibility fallback versions for scripts which did not support the macOS 11.0+ version transition. This is no longer supported; versions returned by sw_vers will always reflect the real system version.

     sw_vers is backwards compatible with previous versions which expect options passed with a single dash, as in: -productName

FILES
     /System/Library/CoreServices/SystemVersion.plist

macOS 14.5 October 27, 2022 macOS 14.5
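A hedged sketch of how the single-value options are typically consumed in scripts; the "13.0" minimum-version threshold is an arbitrary assumption for illustration:

```shell
# Gate a script on a minimum macOS version using sw_vers --productVersion.
required="13.0"
current=$(sw_vers --productVersion 2>/dev/null || echo "0")
# sort -V orders version strings component-wise; the first line is the smaller.
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "macOS $current meets the $required minimum"
else
    echo "macOS $current is older than $required" >&2
fi
```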
|
fs_usage
|
The fs_usage utility presents an ongoing display of system call usage information pertaining to filesystem activity. It requires root privileges due to the kernel tracing facility it uses to operate. By default, the activity monitored includes all system processes except the running fs_usage process, Terminal, telnetd, telnet, sshd, rlogind, tcsh, csh, sh, and zsh. These defaults can be overridden such that output is limited to include or exclude a list of processes specified by the user.

The output presented by fs_usage is formatted according to the size of your window. A narrow window will display fewer columns of data. Use a wide window for maximum data display. You may override the window formatting restrictions by forcing a wide display with the -w option. In this case, the data displayed will wrap when the window is not wide enough.

The options are as follows:

-e
     Specifying the -e option generates output that excludes sampling of the running fs_usage tool. If a list of process IDs or commands is also given, then those processes are also excluded from the sampled output.

-w
     Specifying the -w option forces a wider, more detailed output, regardless of the window size.

-f mode
     Specifying the -f option turns on output filtering based on the mode provided. Multiple filtering options can be specified. By default, no output filtering occurs. The supported modes are:

     network     Network-related events are displayed.
     filesys     Filesystem-related events are displayed.
     pathname    Pathname-related events are displayed.
     exec        Exec and spawn events are displayed.
     diskio      Disk I/O events are displayed.
     cachehit    In addition, show cache hits.

-b
     Specifying the -b option annotates disk I/O events with BootCache info (if available).

-t seconds
     Specifies a run timeout in seconds. fs_usage will run for no longer than the timeout specified.

-R raw_file
     Specifies a raw trace file to process.
-S start_time
     If -R is selected, specifies the start time in microseconds to begin processing entries from the raw trace file. Entries with timestamps before the specified start time will be skipped.

-E end_time
     If -R is selected, specifies the ending time in microseconds to stop processing entries from the raw trace file. Entries with timestamps beyond the specified ending time will be skipped.

pid | cmd
     The sampled data can be limited to a list of process IDs or commands. When a command name is given, all processes with that name will be sampled. Using the -e option has the opposite effect, excluding sampled data relating to the given list of process IDs or commands.

The data columns displayed are as follows:

TIMESTAMP
     TOD when the call occurred. Wide mode will have microsecond granularity.

CALL
     The name of the network or filesystem related call, page-in, page-out, or physical disk access.

FILE DESCRIPTOR
     Of the form F=x, where x is a file descriptor. Depending on the type of system call, this will be either an input value or a return value.

BYTE COUNT
     Of the form B=x, where x is the number of bytes requested by the call.

[ERRNO]
     On error, the errno is displayed in brackets.

PATHNAME
     Pathname of the file accessed (up to the last 28 bytes).

FAULT ADDRESS
     Of the form A=0xnnnnnnnn, where 0xnnnnnnnn is the address being faulted.

DISK BLOCK NUMBER
     Of the form D=0xnnnnnnnn, where 0xnnnnnnnn is the block number of the physical disk block being read or written.

OFFSET
     Of the form O=0xnnnnnnnn, where 0xnnnnnnnn is a file offset.

SELECT RETURN
     Of the form S=x, where x is the number of ready descriptors returned by the select(2) system call. If S=0, the time limit expired.

TIME INTERVAL(W)
     The elapsed time spent in the system call. A 'W' after the elapsed time indicates the process was scheduled out during this file activity. In this case, the elapsed time includes the wait time.

PROCESS NAME
     The process that made the system call.
Wide mode will append the thread id to the process name (e.g., Mail.nnn).

SAMPLE USAGE
     fs_usage -w -f filesys Mail

     fs_usage will display filesystem-related data for all instances of processes named Mail. Maximum data output will be displayed in the window.

SEE ALSO
     dyld(1), latency(1), sc_usage(1), top(1)

macOS November 7, 2002 macOS
|
fs_usage – report system calls and page faults related to filesystem activity in real-time
|
fs_usage [-e] [-w] [-f mode] [-b] [-t seconds] [-R rawfile [-S start_time -E end_time]] [pid | cmd [pid | cmd [...]]]
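Combining the synopsis flags, a sketch of a typical invocation (sudo is assumed because fs_usage requires root; "Mail" is an arbitrary process name chosen for illustration):

```shell
# Sample filesystem and disk I/O events for 10 seconds in wide format,
# excluding fs_usage itself, limited to processes named "Mail".
sudo fs_usage -w -e -f filesys -f diskio -t 10 Mail
```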
|
host
|
host is a simple utility for performing DNS lookups. It is normally used to convert names to IP addresses and vice versa. When no arguments or options are given, host prints a short summary of its command line arguments and options. name is the domain name that is to be looked up. It can also be a dotted-decimal IPv4 address or a colon-delimited IPv6 address, in which case host will by default perform a reverse lookup for that address. server is an optional argument which is either the name or IP address of the name server that host should query instead of the server or servers listed in /etc/resolv.conf.
|
host - DNS lookup utility
|
host [-aCdlnrsTwv] [-c class] [-N ndots] [-R number] [-t type] [-W wait] [-m flag] [-4] [-6] [-v] [-V] {name} [server]
|
-4
     Use IPv4 only for query transport. See also the -6 option.

-6
     Use IPv6 only for query transport. See also the -4 option.

-a
     "All". The -a option is normally equivalent to -v -t ANY. It also affects the behaviour of the -l list zone option.

-c class
     Query class: this can be used to look up HS (Hesiod) or CH (Chaosnet) class resource records. The default class is IN (Internet).

-C
     Check consistency: host will query the SOA records for zone name from all the listed authoritative name servers for that zone. The list of name servers is defined by the NS records that are found for the zone.

-d
     Print debugging traces. Equivalent to the -v verbose option.

-i
     Obsolete. Use the IP6.INT domain for reverse lookups of IPv6 addresses as defined in RFC1886 and deprecated in RFC4159. The default is to use IP6.ARPA as specified in RFC3596.

-l
     List zone: the host command performs a zone transfer of zone name and prints out the NS, PTR and address records (A/AAAA). Together, the -l -a options print all records in the zone.

-N ndots
     The number of dots that have to be in name for it to be considered absolute. The default value is that defined using the ndots statement in /etc/resolv.conf, or 1 if no ndots statement is present. Names with fewer dots are interpreted as relative names and will be searched for in the domains listed in the search or domain directive in /etc/resolv.conf.

-r
     Non-recursive query: setting this option clears the RD (recursion desired) bit in the query. This should mean that the name server receiving the query will not attempt to resolve name. The -r option enables host to mimic the behavior of a name server by making non-recursive queries and expecting to receive answers to those queries that can be referrals to other name servers.

-R number
     Number of retries for UDP queries: if number is negative or zero, the number of retries will default to 1. The default value is 1.
-s
     Do not send the query to the next nameserver if any server responds with a SERVFAIL response, which is the reverse of normal stub resolver behavior.

-t type
     Query type: the type argument can be any recognized query type: CNAME, NS, SOA, TXT, DNSKEY, AXFR, etc. When no query type is specified, host automatically selects an appropriate query type. By default, it looks for A, AAAA, and MX records. If the -C option is given, queries will be made for SOA records. If name is a dotted-decimal IPv4 address or colon-delimited IPv6 address, host will query for PTR records. If a query type of IXFR is chosen, the starting serial number can be specified by appending an equal sign followed by the starting serial number (like -t IXFR=12345678).

-T
     TCP: by default, host uses UDP when making queries. The -T option makes it use a TCP connection when querying the name server. TCP will be automatically selected for queries that require it, such as zone transfer (AXFR) requests.

-m flag
     Memory usage debugging: the flag can be record, usage, or trace. You can specify the -m option more than once to set multiple flags.

-v
     Verbose output. Equivalent to the -d debug option.

-V
     Print the version number and exit.

-w
     Wait forever: the query timeout is set to the maximum possible. See also the -W option.

-W wait
     Timeout: wait for up to wait seconds for a reply. If wait is less than one, the wait interval is set to one second. By default, host will wait for 5 seconds for UDP responses and 10 seconds for TCP connections. See also the -w option.

macOS NOTICE
     The host command does not use the host name and address resolution or the DNS query routing mechanisms used by other processes running on macOS. The results of name or address queries printed by host may differ from those found by other processes that use the macOS native name and address resolution mechanisms. The results of DNS queries may also differ from queries that use the macOS DNS routing library.
IDN SUPPORT
     If host has been built with IDN (internationalized domain name) support, it can accept and display non-ASCII domain names. host appropriately converts the character encoding of a domain name before sending a request to the DNS server or displaying a reply from the server. If you'd like to turn off IDN support for some reason, define the IDN_DISABLE environment variable. IDN support is disabled if the variable is set when host runs.

FILES
     /etc/resolv.conf

SEE ALSO
     dig(1), named(8).

AUTHOR
     Internet Systems Consortium, Inc.

COPYRIGHT
     Copyright © 2004, 2005, 2007-2009, 2014-2016 Internet Systems Consortium, Inc. ("ISC")
     Copyright © 2000-2002 Internet Software Consortium.

ISC 2018-05-25 HOST(1)
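A short sketch tying the options together; example.com and the resolver address 8.8.8.8 are illustrative choices, not from this manual:

```shell
# Query MX records over TCP against a specific server, waiting at most
# 3 seconds for a reply.
host -t MX -T -W 3 example.com 8.8.8.8

# A dotted-decimal address triggers a reverse (PTR) lookup by default.
host 8.8.8.8
```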
|
jarsigner
|
The jarsigner tool has two purposes: • To sign Java Archive (JAR) files. • To verify the signatures and integrity of signed JAR files. The JAR feature enables the packaging of class files, images, sounds, and other digital data in a single file for faster and easier distribution. A tool named jar enables developers to produce JAR files. (Technically, any ZIP file can also be considered a JAR file, although when created by the jar command or processed by the jarsigner command, JAR files also contain a META-INF/MANIFEST.MF file.) A digital signature is a string of bits that is computed from some data (the data being signed) and the private key of an entity (a person, company, and so on). Similar to a handwritten signature, a digital signature has many useful characteristics: • Its authenticity can be verified by a computation that uses the public key corresponding to the private key used to generate the signature. • It can't be forged, assuming the private key is kept secret. • It is a function of the data signed and thus can't be claimed to be the signature for other data as well. • The signed data can't be changed. If the data is changed, then the signature can't be verified as authentic. To generate an entity's signature for a file, the entity must first have a public/private key pair associated with it and one or more certificates that authenticate its public key. A certificate is a digitally signed statement from one entity that says that the public key of another entity has a particular value. The jarsigner command uses key and certificate information from a keystore to generate digital signatures for JAR files. A keystore is a database of private keys and their associated X.509 certificate chains that authenticate the corresponding public keys. The keytool command is used to create and administer keystores. The jarsigner command uses an entity's private key to generate a signature. 
The signed JAR file contains, among other things, a copy of the certificate from the keystore for the public key corresponding to the private key used to sign the file. The jarsigner command can verify the digital signature of the signed JAR file using the certificate inside it (in its signature block file). The jarsigner command can generate signatures that include a time stamp that enables a systems administrator or deployer to check whether the JAR file was signed while the signing certificate was still valid. In addition, APIs allow applications to obtain the timestamp information.

At this time, the jarsigner command can only sign JAR files created by the jar command or zip files. JAR files are the same as zip files, except they also have a META-INF/MANIFEST.MF file. A META-INF/MANIFEST.MF file is created when the jarsigner command signs a zip file.

The default jarsigner command behavior is to sign a JAR or zip file. Use the -verify option to verify a signed JAR file. The jarsigner command also attempts to validate the signer's certificate after signing or verifying. During validation, it checks the revocation status of each certificate in the signer's certificate chain when the -revCheck option is specified. If there is a validation error or any other problem, the command generates warning messages. If you specify the -strict option, then the command treats severe warnings as errors. See Errors and Warnings.

KEYSTORE ALIASES
     All keystore entities are accessed with unique aliases. When you use the jarsigner command to sign a JAR file, you must specify the alias for the keystore entry that contains the private key needed to generate the signature. If no output file is specified, it overwrites the original JAR file with the signed JAR file.

     Keystores are protected with a password, so the store password must be specified. You are prompted for it when you don't specify it on the command line.
Similarly, private keys are protected in a keystore with a password, so the private key's password must be specified, and you are prompted for the password when you don't specify it on the command line and it isn't the same as the store password. KEYSTORE LOCATION The jarsigner command has a -keystore option for specifying the URL of the keystore to be used. The keystore is by default stored in a file named .keystore in the user's home directory, as determined by the user.home system property. Linux and macOS: user.home defaults to the user's home directory. The input stream from the -keystore option is passed to the KeyStore.load method. If NONE is specified as the URL, then a null stream is passed to the KeyStore.load method. NONE should be specified when the KeyStore class isn't file based, for example, when it resides on a hardware token device. KEYSTORE IMPLEMENTATION The KeyStore class provided in the java.security package supplies a number of well-defined interfaces to access and modify the information in a keystore. You can have multiple different concrete implementations, where each implementation is for a particular type of keystore. Currently, there are two command-line tools that use keystore implementations (keytool and jarsigner). The default keystore implementation is PKCS12. This is a cross platform keystore based on the RSA PKCS12 Personal Information Exchange Syntax Standard. This standard is primarily meant for storing or transporting a user's private keys, certificates, and miscellaneous secrets. There is another built-in implementation, provided by Oracle. It implements the keystore as a file with a proprietary keystore type (format) named JKS. It protects each private key with its individual password, and also protects the integrity of the entire keystore with a (possibly different) password. 
Keystore implementations are provider-based, which means the application interfaces supplied by the KeyStore class are implemented in terms of a Service Provider Interface (SPI). There is a corresponding abstract KeystoreSpi class, also in the java.security package, that defines the Service Provider Interface methods that providers must implement. The term provider refers to a package or a set of packages that supply a concrete implementation of a subset of services that can be accessed by the Java Security API. To provide a keystore implementation, clients must implement a provider and supply a KeystoreSpi subclass implementation, as described in How to Implement a Provider in the Java Cryptography Architecture [https://www.oracle.com/pls/topic/lookup?ctx=en/java/javase&id=security_guide_implement_provider_jca]. Applications can choose different types of keystore implementations from different providers, with the getInstance factory method in the KeyStore class. A keystore type defines the storage and data format of the keystore information and the algorithms used to protect private keys in the keystore and the integrity of the keystore itself. Keystore implementations of different types aren't compatible. The jarsigner commands can read file-based keystores from any location that can be specified using a URL. In addition, these commands can read non-file-based keystores such as those provided by MSCAPI on Windows and PKCS11 on all platforms. For the jarsigner and keytool commands, you can specify a keystore type at the command line with the -storetype option. If you don't explicitly specify a keystore type, then the tools choose a keystore implementation based on the value of the keystore.type property specified in the security properties file. The security properties file is called java.security, and it resides in the JDK security properties directory, java.home/conf/security. 
Each tool gets the keystore.type value and then examines all the installed providers until it finds one that implements keystores of that type. It then uses the keystore implementation from that provider.

The KeyStore class defines a static method named getDefaultType that lets applications retrieve the value of the keystore.type property. The following line of code creates an instance of the default keystore type as specified in the keystore.type property:

     KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());

The default keystore type is pkcs12, which is a cross platform keystore based on the RSA PKCS12 Personal Information Exchange Syntax Standard. This is specified by the following line in the security properties file:

     keystore.type=pkcs12

Case doesn't matter in keystore type designations. For example, JKS is the same as jks.

To have the tools utilize a keystore implementation other than the default, you can change that line to specify a different keystore type. For example, if you want to use Oracle's jks keystore implementation, then change the line to the following:

     keystore.type=jks

SUPPORTED ALGORITHMS
     By default, the jarsigner command signs a JAR file using one of the following algorithms and block file extensions depending on the type and size of the private key:

     Default Signature Algorithms and Block File Extensions

     keyalg       key size   default sigalg              block file extension
     ─────────────────────────────────────────────────────────────────────────
     DSA          any size   SHA256withDSA               .DSA
     RSA          < 624      SHA256withRSA               .RSA
                  <= 7680    SHA384withRSA
                  > 7680     SHA512withRSA
     EC           < 512      SHA384withECDSA             .EC
                  >= 512     SHA512withECDSA
     RSASSA-PSS   < 624      RSASSA-PSS (with SHA-256)   .RSA
                  <= 7680    RSASSA-PSS (with SHA-384)
                  > 7680     RSASSA-PSS (with SHA-512)
     EdDSA        255        Ed25519                     .EC
                  448        Ed448

     • If an RSASSA-PSS key is encoded with parameters, then jarsigner will use the same parameters in the signature.
Otherwise, jarsigner will use parameters that are determined by the size of the key as specified in the table above. For example, a 3072-bit RSASSA-PSS key will use RSASSA-PSS as the signature algorithm and SHA-384 as the hash and MGF1 algorithms.

     • If a key algorithm is not listed in this table, the .DSA extension is used when signing a JAR file.

These default signature algorithms can be overridden by using the -sigalg option.

The jarsigner command uses the jdk.jar.disabledAlgorithms and jdk.security.legacyAlgorithms security properties to determine which algorithms are considered a security risk. If the JAR file was signed with any algorithms that are disabled, it will be treated as an unsigned JAR file. If the JAR file was signed with any legacy algorithms, it will be treated as signed, with an informational warning to inform users that the legacy algorithm will be disabled in a future update. For detailed verification output, include -J-Djava.security.debug=jar. The jdk.jar.disabledAlgorithms and jdk.security.legacyAlgorithms security properties are defined in the java.security file (located in the JDK's $JAVA_HOME/conf/security directory).

Note: In order to improve out of the box security, default key size and signature algorithm names are periodically updated to stronger values with each release of the JDK. If interoperability with older releases of the JDK is important, please make sure the defaults are supported by those releases, or alternatively use the -sigalg option to override the default values at your own risk.

THE SIGNED JAR FILE
     When the jarsigner command is used to sign a JAR file, the output signed JAR file is exactly the same as the input JAR file, except that it has two additional files placed in the META-INF directory:

     • A signature file with an .SF extension
     • A signature block file with a .DSA, .RSA, or .EC extension

     The base file names for these two files come from the value of the -sigfile option.
For example, when the option is -sigfile MKSIGN, the files are named MKSIGN.SF and MKSIGN.RSA. In this document, we assume the signer always uses an RSA key. If no -sigfile option appears on the command line, then the base file name for the .SF and the signature block files is the first 8 characters of the alias name specified on the command line, all converted to uppercase. If the alias name has fewer than 8 characters, then the full alias name is used. If the alias name contains any characters that aren't allowed in a signature file name, then each such character is converted to an underscore (_) character in forming the file name. Valid characters include letters, digits, underscores, and hyphens. SIGNATURE FILE A signature file (.SF file) looks similar to the manifest file that is always included in a JAR file when the jarsigner command is used to sign the file. For each source file included in the JAR file, the .SF file has two lines, such as in the manifest file, that list the following: • File name • Name of the digest algorithm (SHA) • SHA digest value Note: The name of the digest algorithm (SHA) and the SHA digest value are on the same line. In the manifest file, the SHA digest value for each source file is the digest (hash) of the binary data in the source file. In the .SF file, the digest value for a specified source file is the hash of the two lines in the manifest file for the source file. The signature file, by default, includes a header with a hash of the whole manifest file. The header also contains a hash of the manifest header. The presence of the header enables verification optimization. See JAR File Verification. SIGNATURE BLOCK FILE The .SF file is signed and the signature is placed in the signature block file. This file also contains, encoded inside it, the certificate or certificate chain from the keystore that authenticates the public key corresponding to the private key used for signing. 
The file has the extension .DSA, .RSA, or .EC, depending on the key algorithm used. See the table in Supported Algorithms. SIGNATURE TIME STAMP The jarsigner command used with the following options generates and stores a signature time stamp when signing a JAR file: • -tsa url • -tsacert alias • -tsapolicyid policyid • -tsadigestalg algorithm See Options for jarsigner. JAR FILE VERIFICATION A successful JAR file verification occurs when the signatures are valid, and none of the files that were in the JAR file when the signatures were generated have changed since then. JAR file verification involves the following steps: 1. Verify the signature of the .SF file. The verification ensures that the signature stored in each signature block file was generated using the private key corresponding to the public key whose certificate (or certificate chain) also appears in the signature block file. It also ensures that the signature is a valid signature of the corresponding signature (.SF) file, and thus the .SF file wasn't tampered with. 2. Verify the digest listed in each entry in the .SF file with each corresponding section in the manifest. The .SF file by default includes a header that contains a hash of the entire manifest file. When the header is present, the verification can check to see whether or not the hash in the header matches the hash of the manifest file. If there is a match, then verification proceeds to the next step. If there is no match, then a less optimized verification is required to ensure that the hash in each source file information section in the .SF file equals the hash of its corresponding section in the manifest file. See Signature File. One reason the hash of the manifest file that is stored in the .SF file header might not equal the hash of the current manifest file is that it might contain sections for newly added files after the file was signed. 
For example, suppose one or more files were added to the signed JAR file (using the jar tool) that already contains a signature and a .SF file. If the JAR file is signed again by a different signer, then the manifest file is changed (sections are added to it for the new files by the jarsigner tool) and a new .SF file is created, but the original .SF file is unchanged. A verification is still considered successful if none of the files that were in the JAR file when the original signature was generated have been changed since then. This is because the hashes in the non-header sections of the .SF file equal the hashes of the corresponding sections in the manifest file.

3. Read each file in the JAR file that has an entry in the .SF file. While reading, compute the file's digest and compare the result with the digest for this file in the manifest section. The digests should be the same or verification fails. If any serious verification failures occur during the verification process, then the process is stopped and a security exception is thrown. The jarsigner command catches and displays the exception.

4. Check for disabled algorithm usage. See Supported Algorithms.

Note: You should read any additional warnings (or errors if you specified the -strict option), as well as the content of the certificate (by specifying the -verbose and -certs options) to determine if the signature can be trusted.

MULTIPLE SIGNATURES FOR A JAR FILE
     A JAR file can be signed by multiple people by running the jarsigner command on the file multiple times and specifying the alias for a different person each time, as follows:

     jarsigner myBundle.jar susan
     jarsigner myBundle.jar kevin

     When a JAR file is signed multiple times, there are multiple .SF and signature block files in the resulting JAR file, one pair for each signature.
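Verifying a multiply-signed JAR can be sketched as follows; the flags are the standard verification options described in this page, and the exact output varies by JDK release:

```shell
# Verify every signature on the JAR, print each signer's certificates,
# and treat severe warnings as errors.
jarsigner -verify -verbose -certs -strict myBundle.jar
```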
In the previous example, the output JAR file includes files with the following names:

     SUSAN.SF
     SUSAN.RSA
     KEVIN.SF
     KEVIN.RSA

OPTIONS FOR JARSIGNER
     The following sections describe the options for the jarsigner. Be aware of the following standards:

     • All option names are preceded by a hyphen sign (-).
     • The options can be provided in any order.
     • Items that are in italics or underlined (option values) represent the actual values that must be supplied.
     • The -storepass, -keypass, -sigfile, -sigalg, -digestalg, -signedjar, and TSA-related options are only relevant when signing a JAR file; they aren't relevant when verifying a signed JAR file. The -keystore option is relevant for signing and verifying a JAR file. In addition, aliases are specified when signing and verifying a JAR file.

-keystore url
     Specifies the URL that tells the keystore location. This defaults to the file .keystore in the user's home directory, as determined by the user.home system property.

     A keystore is required when signing. You must explicitly specify a keystore when the default keystore doesn't exist or if you want to use one other than the default.

     A keystore isn't required when verifying, but if one is specified or the default exists and the -verbose option was also specified, then additional information is output regarding whether or not any of the certificates used to verify the JAR file are contained in that keystore.
The -keystore argument can be a file name and path specification rather than a URL, in which case it is treated the same as a file: URL, for example, the following are equivalent: • -keystore filePathAndName • -keystore file:filePathAndName If the Sun PKCS #11 provider was configured in the java.security security properties file (located in the JDK's $JAVA_HOME/conf/security directory), then the keytool and jarsigner tools can operate on the PKCS #11 token by specifying these options: -keystore NONE -storetype PKCS11 For example, the following command lists the contents of the configured PKCS#11 token: keytool -keystore NONE -storetype PKCS11 -list -storepass [:env | :file] argument Specifies the password that is required to access the keystore. This is only needed when signing (not verifying) a JAR file. In that case, if a -storepass option isn't provided at the command line, then the user is prompted for the password. If the modifier env or file isn't specified, then the password has the value argument. Otherwise, the password is retrieved as follows: • env: Retrieve the password from the environment variable named argument. • file: Retrieve the password from the file named argument. Note: The password shouldn't be specified on the command line or in a script unless it is for testing purposes, or you are on a secure system. -storetype storetype Specifies the type of keystore to be instantiated. The default keystore type is the one that is specified as the value of the keystore.type property in the security properties file, which is returned by the static getDefaultType method in java.security.KeyStore. The PIN for a PKCS #11 token can also be specified with the -storepass option. If none is specified, then the keytool and jarsigner commands prompt for the token PIN. If the token has a protected authentication path (such as a dedicated PIN-pad or a biometric reader), then the -protected option must be specified and no password options can be specified. 
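The env and file modifiers of -storepass described above can be sketched as follows. The keystore name, alias, JAR name, and password are all placeholders, and the jarsigner invocations are shown via echo so the sketch runs without a real keystore:

```shell
# -storepass:env reads the password from an environment variable, and
# -storepass:file reads it from a file; both keep the literal password
# out of the command line (and therefore out of shell history).
export KS_PASS='changeit'                  # placeholder password
printf '%s' 'changeit' > /tmp/ks_pass.txt  # placeholder password file

# Drop the leading 'echo' on a real system with a keystore named mystore.
echo jarsigner -keystore mystore -storepass:env KS_PASS app.jar mykey
echo jarsigner -keystore mystore -storepass:file /tmp/ks_pass.txt app.jar mykey
```

On a real system, also restrict the password file's permissions (for example with chmod 600) before using -storepass:file.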
-keypass [:env | :file] argument Specifies the password used to protect the private key of the keystore entry addressed by the alias specified on the command line. The password is required when using jarsigner to sign a JAR file. If no password is provided on the command line, and the required password is different from the store password, then the user is prompted for it. If the modifier env or file isn't specified, then the password has the value argument. Otherwise, the password is retrieved as follows: • env: Retrieve the password from the environment variable named argument. • file: Retrieve the password from the file named argument. Note: The password shouldn't be specified on the command line or in a script unless it is for testing purposes, or you are on a secure system. -certchain file Specifies the certificate chain to be used when the certificate chain associated with the private key of the keystore entry that is addressed by the alias specified on the command line isn't complete. This can happen when the keystore is located on a hardware token where there isn't enough capacity to hold a complete certificate chain. The file can be a sequence of concatenated X.509 certificates, or a single PKCS#7 formatted data block, either in binary encoding format or in printable encoding format (also known as Base64 encoding) as defined by Internet RFC 1421 Certificate Encoding Standard [http://tools.ietf.org/html/rfc1421]. -sigfile file Specifies the base file name to be used for the generated .SF and signature block files. For example, if file is DUKESIGN, then the generated .SF and signature block files are named DUKESIGN.SF and DUKESIGN.RSA, and placed in the META-INF directory of the signed JAR file. The characters in the file must come from the set a-zA-Z0-9_-. Only letters, numbers, underscore, and hyphen characters are allowed. All lowercase characters are converted to uppercase for the .SF and signature block file names.
If no -sigfile option appears on the command line, then the base file name for the .SF and signature block files is the first 8 characters of the alias name specified on the command line, all converted to upper case. If the alias name has fewer than 8 characters, then the full alias name is used. If the alias name contains any characters that aren't valid in a signature file name, then each such character is converted to an underscore (_) character to form the file name. -signedjar file Specifies the name of the signed JAR file. -digestalg algorithm Specifies the name of the message digest algorithm to use when digesting the entries of a JAR file. For a list of standard message digest algorithm names, see the Java Security Standard Algorithm Names Specification. If this option isn't specified, then SHA-384 is used. There must either be a statically installed provider supplying an implementation of the specified algorithm or the user must specify one with the -addprovider or -providerClass options; otherwise, the command will not succeed. -sigalg algorithm Specifies the name of the signature algorithm to use to sign the JAR file. This algorithm must be compatible with the private key used to sign the JAR file. If this option isn't specified, then a default algorithm matching the private key is used, as described in the Supported Algorithms section. There must either be a statically installed provider supplying an implementation of the specified algorithm or you must specify one with the -addprovider or -providerClass option; otherwise, the command doesn't succeed. For a list of standard signature algorithm names, see the Java Security Standard Algorithm Names Specification. -verify Verifies a signed JAR file. -verbose[:suboptions] When the -verbose option appears on the command line, it indicates that jarsigner uses verbose mode when signing or verifying, with the suboptions determining how much information is shown.
This causes jarsigner to output extra information about the progress of the JAR signing or verification. The suboptions can be all, grouped, or summary. If the -certs option is also specified, then the default mode (or suboption all) displays each entry as it is being processed, and after that, the certificate information for each signer of the JAR file. If the -certs and the -verbose:grouped suboptions are specified, then entries with the same signer info are grouped and displayed together with their certificate information. If -certs and the -verbose:summary suboptions are specified, then entries with the same signer information are grouped and displayed together with their certificate information, and details about each entry are summarized and displayed as one entry (and more). See Example of Verifying a Signed JAR File and Example of Verification with Certificate Information. -certs If the -certs option appears on the command line with the -verify and -verbose options, then the output includes certificate information for each signer of the JAR file. This information includes the name of the type of certificate (stored in the signature block file) that certifies the signer's public key, and if the certificate is an X.509 certificate (an instance of java.security.cert.X509Certificate), then the distinguished name of the signer. The keystore is also examined. If no keystore value is specified on the command line, then the default keystore file (if any) is checked. If the public key certificate for a signer matches an entry in the keystore, then the alias name for the keystore entry for that signer is displayed in parentheses. -revCheck This option enables revocation checking of certificates when signing or verifying a JAR file. The jarsigner command attempts to make network connections to fetch OCSP responses and CRLs if the -revCheck option is specified on the command line.
Note that revocation checks are not enabled unless this option is specified. -tsa url If -tsa http://example.tsa.url appears on the command line when signing a JAR file, then a time stamp is generated for the signature. The URL, http://example.tsa.url, identifies the location of the Time Stamping Authority (TSA) and overrides any URL found with the -tsacert option. The -tsa option doesn't require the TSA public key certificate to be present in the keystore. To generate the time stamp, jarsigner communicates with the TSA with the Time-Stamp Protocol (TSP) defined in RFC 3161. When successful, the time stamp token returned by the TSA is stored with the signature in the signature block file. -tsacert alias When -tsacert alias appears on the command line when signing a JAR file, a time stamp is generated for the signature. The alias identifies the TSA public key certificate in the keystore that is in effect. The entry's certificate is examined for a Subject Information Access extension that contains a URL identifying the location of the TSA. The TSA public key certificate must be present in the keystore when using the -tsacert option. -tsapolicyid policyid Specifies the object identifier (OID) that identifies the policy ID to be sent to the TSA server. If this option isn't specified, no policy ID is sent and the TSA server will choose a default policy ID. Object identifiers are defined by X.696, which is an ITU Telecommunication Standardization Sector (ITU-T) standard. These identifiers are typically period-separated sets of non-negative digits, such as 1.2.3.4. -tsadigestalg algorithm Specifies the message digest algorithm that is used to generate the message imprint to be sent to the TSA server. If this option isn't specified, SHA-384 will be used. See Supported Algorithms. For a list of standard message digest algorithm names, see the Java Security Standard Algorithm Names Specification.
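As an aside on naming: the default signature-file base name used when -sigfile is omitted (the first 8 characters of the alias, uppercased, with characters outside a-zA-Z0-9_- mapped to underscores) can be sketched in shell. The alias jane.doe here is a made-up example:

```shell
# Compute the default .SF/.RSA base name for an alias, per the rule above:
# truncate to 8 characters, replace invalid characters, then uppercase.
alias_name='jane.doe'                  # hypothetical alias
base=$(printf '%.8s' "$alias_name" |
       tr -c 'A-Za-z0-9_-' '_' |
       tr 'a-z' 'A-Z')
echo "$base"   # JANE_DOE -> META-INF/JANE_DOE.SF and META-INF/JANE_DOE.RSA
```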
-internalsf In the past, the signature block file generated when a JAR file was signed included a complete encoded copy of the .SF file (signature file) also generated. This behavior has been changed. To reduce the overall size of the output JAR file, the signature block file by default doesn't contain a copy of the .SF file anymore. If -internalsf appears on the command line, then the old behavior is utilized. This option is useful for testing. In practice, don't use the -internalsf option because it incurs higher overhead. -sectionsonly If the -sectionsonly option appears on the command line, then the .SF file (signature file) generated when a JAR file is signed doesn't include a header that contains a hash of the whole manifest file. It contains only the information and hashes related to each individual source file included in the JAR file. See Signature File. By default, this header is added, as an optimization. When the header is present, whenever the JAR file is verified, the verification can first check to see whether the hash in the header matches the hash of the whole manifest file. When there is a match, verification proceeds to the next step. When there is no match, it is necessary to do a less optimized verification that the hash in each source file information section in the .SF file equals the hash of its corresponding section in the manifest file. See JAR File Verification. The -sectionsonly option is primarily used for testing. It shouldn't be used other than for testing because using it incurs higher overhead. -protected Values can be either true or false. Specify true when a password must be specified through a protected authentication path such as a dedicated PIN reader. -providerName providerName If more than one provider was configured in the java.security security properties file, then you can use the -providerName option to target a specific provider instance. The argument to this option is the name of the provider. 
For the Oracle PKCS #11 provider, providerName is of the form SunPKCS11-TokenName, where TokenName is the name suffix that the provider instance has been configured with, as detailed in the configuration attributes table. For example, the following command lists the contents of the PKCS #11 keystore provider instance with name suffix SmartCard: jarsigner -keystore NONE -storetype PKCS11 -providerName SunPKCS11-SmartCard -list -addprovider name [-providerArg arg] Adds a security provider by name (such as SunPKCS11) and an optional configure argument. The value of the security provider is the name of a security provider that is defined in a module. Used with the -providerArg ConfigFilePath option, the keytool and jarsigner tools install the provider dynamically and use ConfigFilePath for the path to the token configuration file. The following example shows a command to list a PKCS #11 keystore when the Oracle PKCS #11 provider wasn't configured in the security properties file. jarsigner -keystore NONE -storetype PKCS11 -addprovider SunPKCS11 -providerArg /mydir1/mydir2/token.config -providerClass provider-class-name [-providerArg arg] Used to specify the name of cryptographic service provider's master class file when the service provider isn't listed in the java.security security properties file. Adds a security provider by fully-qualified class name and an optional configure argument. Note: The preferred way to load PKCS11 is by using modules. See -addprovider. -providerPath classpath Used to specify the classpath for providers specified by the -providerClass option. Multiple paths should be separated by the system-dependent path-separator character. -Jjavaoption Passes through the specified javaoption string directly to the Java interpreter. The jarsigner command is a wrapper around the interpreter. This option shouldn't contain any spaces. It is useful for adjusting the execution environment or memory usage. 
For a list of possible interpreter options, type java -h or java -X at the command line. -strict During the signing or verifying process, the command may issue warning messages. If you specify this option, the exit code of the tool reflects the severe warning messages that this command found. See Errors and Warnings. -conf url Specifies a pre-configured options file. Read the keytool documentation for details. The property keys supported are "jarsigner.all" for all actions, "jarsigner.sign" for signing, and "jarsigner.verify" for verification. jarsigner arguments including the JAR file name and alias name(s) cannot be set in this file. -version Prints the program version. ERRORS AND WARNINGS During the signing or verifying process, the jarsigner command may issue various errors or warnings. If there is a failure, the jarsigner command exits with code 1. If there is no failure, but there are one or more severe warnings, the jarsigner command exits with code 0 when the -strict option is not specified, or exits with the OR-value of the warning codes when -strict is specified. If there are only informational warnings, or no warnings at all, the command always exits with code 0. For example, if a certificate used to sign an entry is expired and has a KeyUsage extension that doesn't allow it to sign a file, the jarsigner command exits with code 12 (=4+8) when the -strict option is specified. Note: Exit codes are reused because only the values from 0 to 255 are legal on Linux and macOS. The following sections describe the names, codes, and descriptions of the errors and warnings that the jarsigner command can issue. FAILURE Reasons why the jarsigner command fails include (but aren't limited to) a command line parsing error, the inability to find a keypair to sign the JAR file, or a failed verification of a signed JAR file. failure Code 1. The signing or verifying fails. SEVERE WARNINGS Note: Severe warnings are reported as errors if you specify the -strict option.
Reasons why the jarsigner command issues a severe warning include an error in the certificate used to sign the JAR file or other problems with the signed JAR file. hasExpiredCert Code 4. This JAR contains entries whose signer certificate has expired. hasExpiredTsaCert Code 4. The timestamp has expired. notYetValidCert Code 4. This JAR contains entries whose signer certificate isn't yet valid. chainNotValidated Code 4. This JAR contains entries whose certificate chain isn't validated. tsaChainNotValidated Code 64. The timestamp is invalid because this JAR contains entries whose TSA certificate chain isn't validated. signerSelfSigned Code 4. This JAR contains entries whose signer certificate is self signed. disabledAlg Code 4. An algorithm used is considered a security risk and is disabled. badKeyUsage Code 8. This JAR contains entries whose signer certificate's KeyUsage extension doesn't allow code signing. badExtendedKeyUsage Code 8. This JAR contains entries whose signer certificate's ExtendedKeyUsage extension doesn't allow code signing. badNetscapeCertType Code 8. This JAR contains entries whose signer certificate's NetscapeCertType extension doesn't allow code signing. hasUnsignedEntry Code 16. This JAR contains unsigned entries which haven't been integrity-checked. notSignedByAlias Code 32. This JAR contains signed entries which aren't signed by the specified alias(es). aliasNotInStore Code 32. This JAR contains signed entries that aren't signed by an alias in this keystore. INFORMATIONAL WARNINGS Informational warnings include those that aren't errors but are regarded as bad practice. They don't have a code. extraAttributesDetected The POSIX file permissions and/or symlink attributes are detected during signing or verifying a JAR file. The jarsigner tool preserves these attributes in the newly signed file but warns that these attributes are unsigned and not protected by the signature.
hasExpiringCert This JAR contains entries whose signer certificate expires within six months. hasExpiringTsaCert The timestamp will expire within one year on YYYY-MM-DD. legacyAlg An algorithm used is considered a security risk but isn't disabled. noTimestamp This JAR contains signatures that don't include a timestamp. Without a timestamp, users may not be able to validate this JAR file after the signer certificate's expiration date (YYYY-MM-DD) or after any future revocation date. EXAMPLE OF SIGNING A JAR FILE Use the following command to sign bundle.jar with the private key of a user whose keystore alias is jane in a keystore named mystore in the working directory and name the signed JAR file sbundle.jar: jarsigner -keystore /working/mystore -storepass keystore_password -keypass private_key_password -signedjar sbundle.jar bundle.jar jane There is no -sigfile specified in the previous command, so the generated .SF and signature block files to be placed in the signed JAR file have default names based on the alias name. They are named JANE.SF and JANE.RSA. If you want to be prompted for the store password and the private key password, then you could shorten the previous command to the following: jarsigner -keystore /working/mystore -signedjar sbundle.jar bundle.jar jane If the keystore is the default keystore (.keystore in your home directory), then you don't need to specify a keystore, as follows: jarsigner -signedjar sbundle.jar bundle.jar jane If you want the signed JAR file to overwrite the input JAR file (bundle.jar), then you don't need to specify a -signedjar option, as follows: jarsigner bundle.jar jane EXAMPLE OF VERIFYING A SIGNED JAR FILE To verify a signed JAR file to ensure that the signature is valid and the JAR file hasn't been tampered with, use a command such as the following: jarsigner -verify ButtonDemo.jar When the verification is successful, jar verified is displayed. Otherwise, an error message is displayed.
You can get more information when you use the -verbose option. A sample use of jarsigner with the -verbose option follows: jarsigner -verify -verbose ButtonDemo.jar s 866 Tue Sep 12 20:08:48 EDT 2017 META-INF/MANIFEST.MF 825 Tue Sep 12 20:08:48 EDT 2017 META-INF/ORACLE_C.SF 7475 Tue Sep 12 20:08:48 EDT 2017 META-INF/ORACLE_C.RSA 0 Tue Sep 12 20:07:54 EDT 2017 META-INF/ 0 Tue Sep 12 20:07:16 EDT 2017 components/ 0 Tue Sep 12 20:07:16 EDT 2017 components/images/ sm 523 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo$1.class sm 3440 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo.class sm 2346 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo.jnlp sm 172 Tue Sep 12 20:07:16 EDT 2017 components/images/left.gif sm 235 Tue Sep 12 20:07:16 EDT 2017 components/images/middle.gif sm 172 Tue Sep 12 20:07:16 EDT 2017 components/images/right.gif s = signature was verified m = entry is listed in manifest k = at least one certificate was found in keystore - Signed by "CN="Oracle America, Inc.", OU=Software Engineering, O="Oracle America, Inc.", L=Redwood City, ST=California, C=US" Digest algorithm: SHA-256 Signature algorithm: SHA256withRSA, 2048-bit key Timestamped by "CN=Symantec Time Stamping Services Signer - G4, O=Symantec Corporation, C=US" on Tue Sep 12 20:08:49 UTC 2017 Timestamp digest algorithm: SHA-1 Timestamp signature algorithm: SHA1withRSA, 2048-bit key jar verified. The signer certificate expired on 2018-02-01. However, the JAR will be valid until the timestamp expires on 2020-12-29. EXAMPLE OF VERIFICATION WITH CERTIFICATE INFORMATION If you specify the -certs option with the -verify and -verbose options, then the output includes certificate information for each signer of the JAR file. 
The information includes the certificate type, the signer distinguished name information (when it is an X.509 certificate), and in parentheses, the keystore alias for the signer when the public key certificate in the JAR file matches the one in a keystore entry, for example: jarsigner -keystore $JAVA_HOME/lib/security/cacerts -verify -verbose -certs ButtonDemo.jar s k 866 Tue Sep 12 20:08:48 EDT 2017 META-INF/MANIFEST.MF >>> Signer X.509, CN="Oracle America, Inc.", OU=Software Engineering, O="Oracle America, Inc.", L=Redwood City, ST=California, C=US [certificate is valid from 2017-01-30, 7:00 PM to 2018-02-01, 6:59 PM] X.509, CN=Symantec Class 3 SHA256 Code Signing CA, OU=Symantec Trust Network, O=Symantec Corporation, C=US [certificate is valid from 2013-12-09, 7:00 PM to 2023-12-09, 6:59 PM] X.509, CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US (verisignclass3g5ca [jdk]) [trusted certificate] >>> TSA X.509, CN=Symantec Time Stamping Services Signer - G4, O=Symantec Corporation, C=US [certificate is valid from 2012-10-17, 8:00 PM to 2020-12-29, 6:59 PM] X.509, CN=Symantec Time Stamping Services CA - G2, O=Symantec Corporation, C=US [certificate is valid from 2012-12-20, 7:00 PM to 2020-12-30, 6:59 PM] 825 Tue Sep 12 20:08:48 EDT 2017 META-INF/ORACLE_C.SF 7475 Tue Sep 12 20:08:48 EDT 2017 META-INF/ORACLE_C.RSA 0 Tue Sep 12 20:07:54 EDT 2017 META-INF/ 0 Tue Sep 12 20:07:16 EDT 2017 components/ 0 Tue Sep 12 20:07:16 EDT 2017 components/images/ smk 523 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo$1.class [entry was signed on 2017-09-12, 4:08 PM] >>> Signer X.509, CN="Oracle America, Inc.", OU=Software Engineering, O="Oracle America, Inc.", L=Redwood City, ST=California, C=US [certificate is valid from 2017-01-30, 7:00 PM to 2018-02-01, 6:59 PM] X.509, CN=Symantec Class 3 SHA256 Code Signing CA, OU=Symantec Trust Network, O=Symantec 
Corporation, C=US [certificate is valid from 2013-12-09, 7:00 PM to 2023-12-09, 6:59 PM] X.509, CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US (verisignclass3g5ca [jdk]) [trusted certificate] >>> TSA X.509, CN=Symantec Time Stamping Services Signer - G4, O=Symantec Corporation, C=US [certificate is valid from 2012-10-17, 8:00 PM to 2020-12-29, 6:59 PM] X.509, CN=Symantec Time Stamping Services CA - G2, O=Symantec Corporation, C=US [certificate is valid from 2012-12-20, 7:00 PM to 2020-12-30, 6:59 PM] smk 3440 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo.class ... smk 2346 Tue Sep 12 20:07:16 EDT 2017 components/ButtonDemo.jnlp ... smk 172 Tue Sep 12 20:07:16 EDT 2017 components/images/left.gif ... smk 235 Tue Sep 12 20:07:16 EDT 2017 components/images/middle.gif ... smk 172 Tue Sep 12 20:07:16 EDT 2017 components/images/right.gif ... s = signature was verified m = entry is listed in manifest k = at least one certificate was found in keystore - Signed by "CN="Oracle America, Inc.", OU=Software Engineering, O="Oracle America, Inc.", L=Redwood City, ST=California, C=US" Digest algorithm: SHA-256 Signature algorithm: SHA256withRSA, 2048-bit key Timestamped by "CN=Symantec Time Stamping Services Signer - G4, O=Symantec Corporation, C=US" on Tue Sep 12 20:08:49 UTC 2017 Timestamp digest algorithm: SHA-1 Timestamp signature algorithm: SHA1withRSA, 2048-bit key jar verified. The signer certificate expired on 2018-02-01. However, the JAR will be valid until the timestamp expires on 2020-12-29. If the certificate for a signer isn't an X.509 certificate, then there is no distinguished name information. In that case, just the certificate type and the alias are shown. For example, if the certificate is a PGP certificate, and the alias is bob, then you would get: PGP, (bob). JDK 22 2024 JARSIGNER(1)
|
jarsigner - sign and verify Java Archive (JAR) files
|
jarsigner [options] jar-file alias jarsigner -verify [options] jar-file [alias ...] jarsigner -version
|
The command-line options. See Options for jarsigner. -verify The -verify option can take zero or more keystore alias names after the JAR file name. When the -verify option is specified, the jarsigner command checks that the certificate used to verify each signed entry in the JAR file matches one of the keystore aliases. The aliases are defined in the keystore specified by -keystore or the default keystore. If you also specify the -strict option, and the jarsigner command detects severe warnings, the message, "jar verified, with signer errors" is displayed. jar-file The JAR file to be signed. If you also specified the -strict option, and the jarsigner command detected severe warnings, the message, "jar signed, with signer errors" is displayed. alias The aliases are defined in the keystore specified by -keystore or the default keystore. -version The -version option prints the program version of jarsigner.
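When scripting jarsigner with -strict, the exit code is the bitwise OR of the severe-warning codes listed under Errors and Warnings, so a wrapper script can recover the individual warnings with bitwise AND. A minimal sketch using the documented codes 4 (hasExpiredCert) and 8 (badKeyUsage):

```shell
# An expired signer certificate (4) whose KeyUsage also forbids code
# signing (8) yields exit code 12, exactly as in the -strict example.
code=$(( 4 | 8 ))
echo "$code"    # 12

# A calling script can test each warning bit separately:
[ $(( code & 4 )) -ne 0 ] && echo "expired-certificate warning present"
[ $(( code & 8 )) -ne 0 ] && echo "key-usage warning present"
```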
|
app-sso
|
app-sso is used to control and get information about the Kerberos Single Sign-on (SSO) extension via the command line. The Kerberos SSO extension simplifies using Kerberos authentication with an Active Directory based Kerberos realm. It also allows the user to use Active Directory specific functions such as password changes and password expiration notifications. Note that app-sso cannot be used to completely configure the Kerberos SSO extension. Configuring the Kerberos SSO extension requires a user approved MDM enrollment, as well as an MDM solution that can build and deliver an appropriately configured Extensible SSO configuration profile payload. See your MDM vendor's documentation for additional information. COMMANDS -a, --authenticate REALM Display the login dialog for the specified realm, or if the user has already configured the Kerberos SSO extension, acquire a new credential. Returns success upon acquiring a new credential or if the user already has a valid credential. -u, --username The username for authentication. The user will not be able to change this username on the login screen. -f, --force Display the login screen even if the user is already authenticated. -q, --quiet Suppress the information that is normally printed after authentication. -d, --logout REALM Logs out any user logged into the specified realm. -c, --changepassword REALM Displays the "Change Password" dialog for the specified realm. -l, --listrealms Prints the list of configured realms. -i, --realminfo REALM Print information about the currently configured realm. This includes information such as the current site code, network home directory and date the user's password expires. -v, --verbose Print the complete site code cache in the results. -s, --sitecode REALM Perform a site lookup for the specified realm. -v, --verbose Print the complete site code cache in the results. -r, --reset [REALM] Reset the cache for the specified realm. 
If a realm isn't specified, reset caches for all realms. -k, --keychainoption REALM Resets the "login automatically" option for the specified realm. -p, --proceedusersetup REALM Allow user setup to proceed if you are using "delayUserSetup" in your configuration profile. -t, --sharedsettings REALM Prints the Kerberos settings that are shared with other processes for the specified realm. For diagnostic purposes only; not intended for scripting. -j, --json Format the output of this command as JSON instead of property list format. -h, --help Print a synopsis of the above document.
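A sketch of scripting these subcommands together. The realm and username are placeholders from this page's examples, and whether -j pairs with -i this way is an assumption; since app-sso exists only on macOS, the wrapper echoes each command so the sketch runs anywhere:

```shell
REALM='PRETENDCO.COM'            # placeholder realm
run() { echo "$@"; }             # stand-in; use run() { "$@"; } on a real Mac

run app-sso -a "$REALM" -u jappleseed -q   # authenticate quietly
run app-sso -i "$REALM" -j                 # realm info as JSON (assumed pairing)
run app-sso -r "$REALM"                    # reset this realm's cache
```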
|
app-sso – A tool used to control and get information about the Kerberos SSO extension.
|
app-sso [command] Commands: -a, --authenticate REALM [options ...] -u, --username USERNAME -f, --force -q, --quiet -d, --logout REALM -c, --changepassword REALM -l, --listrealms -i, --realminfo REALM -v, --verbose -s, --sitecode REALM -v, --verbose -r, --reset [REALM] -k, --keychainoption REALM -p, --proceedusersetup REALM -t, --sharedsettings REALM -j, --json -h, --help
|
Print information about the PRETENDCO.COM realm: app-sso -i PRETENDCO.COM Authenticate to the PRETENDCO.COM realm as jappleseed: app-sso -a PRETENDCO.COM -u jappleseed Kerberos Extension UI Options startInSmartCardMode The default behavior of the KerberosExtension is to start in the UI mode last used by the user. To force it to start in SmartCard mode, run this defaults command: defaults write com.apple.AppSSOKerberos.KerberosExtension startInSmartCardMode -bool true allowSmartCard The default behavior of the KerberosExtension is to show both password and SmartCard authentication in the UI. To hide SmartCards, run this defaults command: defaults write com.apple.AppSSOKerberos.KerberosExtension allowSmartCard -bool false allowPassword The default behavior of the KerberosExtension is to show both password and SmartCard authentication in the UI. To hide passwords, run this defaults command: defaults write com.apple.AppSSOKerberos.KerberosExtension allowPassword -bool false identityIssuerAutoSelectFilter The default behavior of the KerberosExtension is to auto-select an available identity if one is available. If more than one is available, then the identityIssuerAutoSelectFilter can be used to filter the issuer names. If one is left, then it will be auto-selected. The value should include any wild cards. To enable it, run this defaults command with the correct filter value: defaults write com.apple.AppSSOKerberos.KerberosExtension identityIssuerAutoSelectFilter 'Apple CA*' macOS January 28, 2020 macOS
|
devmodectl
|
nscurl
|
logger
|
The logger utility provides a shell command interface to the syslog(3) system log module. The following options are available: -i Log the process id of the logger process with each line. This flag is ignored and the process id is always logged. -s Log the message to standard error, as well as the system log. -f file Read the contents of the specified file into syslog. This option is ignored when a message is also specified. -p pri Enter the message with the specified priority. The priority may be specified numerically or as a facility.level pair. For example, “-p local3.info” logs the message(s) as informational level in the local3 facility. The default is “user.notice”. -t tag Mark every line in the log with the specified tag rather than the default of current login name. message Write the message to log; if not specified, and the -f flag is not provided, standard input is logged. EXIT STATUS The logger utility exits 0 on success, and >0 if an error occurs.
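A sketch combining the options above; the facility, tag, and messages are examples, and logger is wrapped in echo so the sketch runs even where no syslog daemon is listening (drop the echo to actually log):

```shell
log() { echo logger "$@"; }   # stand-in wrapper; remove 'echo' to really log

log -p local3.info -t backup "nightly backup finished"  # facility.level pair
log -s -p user.err -t backup "nightly backup FAILED"    # also copy to stderr
```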
|
logger – make entries in the system log
|
logger [-is] [-f file] [-p pri] [-t tag] [message ...]
| null |
logger System rebooted logger -p local0.notice -t HOSTIDM -f /dev/idmc SEE ALSO syslog(3), syslogd(8) STANDARDS The logger command is expected to be IEEE Std 1003.2 (“POSIX.2”) compatible. macOS 14.5 March 16, 2022 macOS 14.5
|
caffeinate
|
caffeinate creates assertions to alter system sleep behavior. If no assertion flags are specified, caffeinate creates an assertion to prevent idle sleep. If a utility is specified, caffeinate creates the assertions on the utility's behalf, and those assertions will persist for the duration of the utility's execution. Otherwise, caffeinate creates the assertions directly, and those assertions will persist until caffeinate exits. Available options: -d Create an assertion to prevent the display from sleeping. -i Create an assertion to prevent the system from idle sleeping. -m Create an assertion to prevent the disk from idle sleeping. -s Create an assertion to prevent the system from sleeping. This assertion is valid only when the system is running on AC power. -u Create an assertion to declare that the user is active. If the display is off, this option turns the display on and prevents the display from going into idle sleep. If a timeout is not specified with the '-t' option, then this assertion is taken with a default timeout of 5 seconds. -t Specifies the timeout value in seconds for which this assertion has to be valid. The assertion is dropped after the specified timeout. The timeout value is not used when a utility is invoked with this command. -w Waits for the process with the specified pid to exit. Once the process exits, the assertion is also released. This option is ignored when used with the utility option. EXAMPLE caffeinate -i make caffeinate forks a process, execs "make" in it, and holds an assertion that prevents idle sleep as long as that process is running. SEE ALSO pmset(1) LOCATION /usr/bin/caffeinate Darwin November 9, 2012 Darwin
|
caffeinate – prevent the system from sleeping on behalf of a utility
|
caffeinate [-disu] [-t timeout] [-w pid] [utility arguments...]
| null | null |
pod2html5.30
|
Converts files from pod format (see perlpod) to HTML format. ARGUMENTS pod2html takes the following arguments: help --help Displays the usage message. htmldir --htmldir=name Sets the directory to which all cross references in the resulting HTML file will be relative. Not passing this causes all links to be absolute since this is the value that tells Pod::Html the root of the documentation tree. Do not use this and --htmlroot in the same call to pod2html; they are mutually exclusive. htmlroot --htmlroot=URL Sets the base URL for the HTML files. When cross-references are made, the HTML root is prepended to the URL. Do not use this if relative links are desired: use --htmldir instead. Do not pass both this and --htmldir to pod2html; they are mutually exclusive. infile --infile=name Specify the pod file to convert. Input is taken from STDIN if no infile is specified. outfile --outfile=name Specify the HTML file to create. Output goes to STDOUT if no outfile is specified. podroot --podroot=name Specify the base directory for finding library pods. podpath --podpath=name:...:name Specify which subdirectories of the podroot contain pod files whose HTML converted forms can be linked-to in cross-references. cachedir --cachedir=name Specify which directory is used for storing cache. Default directory is the current working directory. flush --flush Flush the cache. backlink --backlink Turn =head1 directives into links pointing to the top of the HTML file. nobacklink --nobacklink Do not turn =head1 directives into links pointing to the top of the HTML file (default behaviour). header --header Create header and footer blocks containing the text of the "NAME" section. noheader --noheader Do not create header and footer blocks containing the text of the "NAME" section (default behaviour). poderrors --poderrors Include a "POD ERRORS" section in the outfile if there were any POD errors in the infile (default behaviour). 
nopoderrors --nopoderrors Do not include a "POD ERRORS" section in the outfile if there were any POD errors in the infile. index --index Generate an index at the top of the HTML file (default behaviour). noindex --noindex Do not generate an index at the top of the HTML file. recurse --recurse Recurse into subdirectories specified in podpath (default behaviour). norecurse --norecurse Do not recurse into subdirectories specified in podpath. css --css=URL Specify the URL of a cascading style sheet to link from the resulting HTML file. Default is no style sheet. title --title=title Specify the title of the resulting HTML file. quiet --quiet Don't display mostly harmless warning messages. noquiet --noquiet Display mostly harmless warning messages (default behaviour). But this is not the same as "verbose" mode. verbose --verbose Display progress messages. noverbose --noverbose Do not display progress messages (default behaviour). AUTHOR Tom Christiansen, <tchrist@perl.com>. BUGS See Pod::Html for a list of known bugs in the translator. SEE ALSO perlpod, Pod::Html COPYRIGHT This program is distributed under the Artistic License. perl v5.30.3 2024-04-13 POD2HTML(1)
|
pod2html - convert .pod files to .html files
|
pod2html --help --htmldir=<name> --htmlroot=<URL> --infile=<name> --outfile=<name> --podpath=<name>:...:<name> --podroot=<name> --cachedir=<name> --flush --recurse --norecurse --quiet --noquiet --verbose --noverbose --index --noindex --backlink --nobacklink --header --noheader --poderrors --nopoderrors --css=<URL> --title=<name>
| null | null |
say
|
This tool uses the Speech Synthesis manager to convert input text to audible speech and either play it through the sound output device chosen in System Preferences or save it to an AIFF file.
|
say - Convert text to audible speech
|
say [-v voice] [-r rate] [-o outfile [audio format options] | -n name:port | -a device] [-f file | string ...]
|
string Specify the text to speak on the command line. This can consist of multiple arguments, which are considered to be separated by spaces. -f file, --input-file=file Specify a file to be spoken. If file is - or neither this parameter nor a message is specified, read from standard input. -v voice, --voice=voice Specify the voice to be used. Default is the voice selected in System Preferences. To obtain a list of voices installed in the system, specify '?' as the voice name. -r rate, --rate=rate Speech rate to be used, in words per minute. -o out.aiff, --output-file=file Specify the path for an audio file to be written. AIFF is the default and should be supported for most voices, but some voices support many more file formats. -n name, --network-send=name -n name:port, --network-send=name:port -n :port, --network-send=:port -n :, --network-send=: Specify a service name (default "AUNetSend") and/or IP port to be used for redirecting the speech output through AUNetSend. -a ID, --audio-device=ID -a name, --audio-device=name Specify, by ID or name prefix, an audio device to be used to play the audio. To obtain a list of audio output devices, specify '?' as the device name. --progress Display a progress meter during synthesis. -i, --interactive, --interactive=markup Print the text line by line during synthesis, highlighting words as they are spoken. Markup can be one of • A terminfo capability as described in terminfo(5), e.g. bold, smul, setaf 1. • A color name, one of black, red, green, yellow, blue, magenta, cyan, or white. • A foreground and background color from the above list, separated by a slash, e.g. green/black. If the foreground color is omitted, only the background color is set. If markup is not specified, it defaults to smso, i.e. reverse video. If the input is a TTY, text is spoken line by line, and the output file, if specified, will only contain audio for the last line of the input. Otherwise, text is spoken all at once. 
AUDIO FORMATS Starting in MacOS X 10.6, file formats other than AIFF may be specified, although not all third party synthesizers may initially support them. In simple cases, the file format can be inferred from the extension, although generally some of the options below are required for finer grained control: --file-format=format The format of the file to write (AIFF, caff, m4af, WAVE). Generally, it's easier to specify a suitable file extension for the output file. To obtain a list of writable file formats, specify '?' as the format name. --data-format=format The format of the audio data to be stored. Formats other than linear PCM are specified by giving their format identifiers (aac, alac). Linear PCM formats are specified as a sequence of: Endianness (optional) One of BE (big endian) or LE (little endian). Default is native endianness. Data type One of F (float), I (integer), or, rarely, UI (unsigned integer). Sample size One of 8, 16, 24, 32, 64. Most available file formats only support a subset of these sample formats. To obtain a list of audio data formats for a file format specified explicitly or by file name, specify '?' as the format name. The format identifier optionally can be followed by @samplerate and /hexflags for the format. --channels=channels The number of channels. This will generally be of limited use, as most speech synthesizers produce mono audio only. --bit-rate=rate The bit rate for formats like AAC. To obtain a list of valid bit rates, specify '?' as the rate. In practice, not all of these bit rates will be available for a given format. --quality=quality The audio converter quality level between 0 (lowest) and 127 (highest). ERRORS say returns 0 if the text was spoken successfully, otherwise non-zero. Diagnostic messages will be printed to standard error.
|
say Hello, World say -v Alex -o hi -f hello_world.txt say --interactive=/green spending each day the color of the leaves say -o hi.aac 'Hello, [[slnc 200]] World' say -o hi.m4a --data-format=alac Hello, World. say -o hi.caf --data-format=LEF32@8000 Hello, World say -v '?' say --file-format=? say --file-format=caff --data-format=? say -o hi.m4a --bit-rate=? SEE ALSO "Speech Synthesis Programming Guide" 1.0 2020-08-13 SAY(1)
|
net-snmp-create-v3-user
|
The net-snmp-create-v3-user shell script is designed to create a new user in the net-snmp configuration file (/var/net-snmp/snmpd.conf by default).
|
net-snmp-create-v3-user - create an SNMPv3 user in the net-snmp configuration file
|
net-snmp-create-v3-user [-ro] [-a authpass] [-x privpass] [-X DES|AES] [username]
|
--version displays the net-snmp version number -ro create a user with read-only permissions -a authpass specify the authentication password -x privpass specify the encryption password -X DES|AES specify the encryption algorithm V5.6.2.1 17 Sep 2008 net-snmp-create-v3-user(1)
| null |
zipdetails5.34
|
Zipdetails displays information about the internal record structure of zip files. It is not concerned with displaying any details of the compressed data stored in the zip file. The program assumes prior understanding of the internal structure of a Zip file. You should have a copy of the Zip APPNOTE file at hand to help understand the output from this program ("SEE ALSO" for details). Default Behaviour By default the program expects to be given a well-formed zip file. It will navigate the Zip file by first parsing the zip central directory at the end of the file. If that is found, it will then walk through the zip records starting at the beginning of the file. Any badly formed zip data structures encountered are likely to terminate the program. If the program finds any structural problems with the zip file it will print a summary at the end of the output report. The set of error cases reported is very much a work in progress, so don't rely on this feature to find all the possible errors in a zip file. If you have suggestions for use-cases where this could be enhanced please consider creating an enhancement request (see "SUPPORT"). Scan-Mode If you do have a potentially corrupt zip file, particularly where the central directory at the end of the file is absent/incomplete, you can try using the "--scan" option to search for zip records that are still present. When Scan-mode is enabled, the program will walk the zip file from the start blindly looking for the 4-byte signatures that precede each of the zip data structures. If it finds any of the recognised signatures it will attempt to dump the associated zip record. For very large zip files, this operation can take a long time to run. Note that the 4-byte signatures used in zip files can sometimes match with random data stored in the zip file, so care is needed interpreting the results.
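The 4-byte signatures mentioned above are easy to observe directly. This sketch builds a small zip file with Python's zipfile module (assumed to be available; zipdetails itself is not required) and prints the local-file-header signature that starts every well-formed zip entry:

```shell
# Build an in-memory zip and show its first four bytes: the
# local-file-header signature that zip tools scan for.
python3 - <<'EOF'
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("hello.txt", "hello")

print(buf.getvalue()[:4])
EOF
# -> b'PK\x03\x04'
```

Running zipdetails against a file built this way would report this signature as the first local header record.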
|
zipdetails - display the internal structure of zip files
|
zipdetails [-v][--scan] zipfile.zip zipdetails -h zipdetails --version
|
-h Display help --scan Walk the zip file looking for possible zip records. Can be error-prone. See "Scan-Mode" -v Enable Verbose mode. See "Verbose Output". --version Display version number of the program and exit. Default Output By default zipdetails will output the details of the zip file in three columns. Column 1 This contains the offset from the start of the file in hex. Column 2 This contains a textual description of the field. Column 3 If the field contains a numeric value it will be displayed in hex. Zip stores most numbers in little-endian format - the value displayed will have the little-endian encoding removed. Next, is an optional description of what the value means. Verbose Output If the "-v" option is present, column 1 is expanded to include • The offset from the start of the file in hex. • The length of the field in hex. • A hex dump of the bytes in the field in the order they are stored in the zip file. LIMITATIONS The following zip file features are not supported by this program: • Multi-part archives. • The strong encryption features defined in the "APPNOTE" document. TODO Error handling is a work in progress. If the program encounters a problem reading a zip file it is likely to terminate with an unhelpful error message. SUPPORT General feedback/questions/bug reports should be sent to <https://github.com/pmqs/IO-Compress/issues> (preferred) or <https://rt.cpan.org/Public/Dist/Display.html?Name=IO-Compress>. SEE ALSO The primary reference for Zip files is the "APPNOTE" document available at <http://www.pkware.com/documents/casestudies/APPNOTE.TXT>. An alternative reference is the Info-Zip appnote. This is available from <ftp://ftp.info-zip.org/pub/infozip/doc/> The "zipinfo" program that comes with the info-zip distribution (<http://www.info-zip.org/>) can also display details of the structure of a zip file. See also Archive::Zip::SimpleZip, IO::Compress::Zip, IO::Uncompress::Unzip. AUTHOR Paul Marquess pmqs@cpan.org. 
COPYRIGHT Copyright (c) 2011-2021 Paul Marquess. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. perl v5.34.1 2024-04-13 ZIPDETAILS(1)
| null |
config_data5.34
|
The "config_data" tool provides a command-line interface to the configuration of Perl modules. By "configuration", we mean something akin to "user preferences" or "local settings". This is a formalization and abstraction of the systems that people like Andreas Koenig ("CPAN::Config"), Jon Swartz ("HTML::Mason::Config"), Andy Wardley ("Template::Config"), and Larry Wall (perl's own Config.pm) have developed independently. The configuration system employed here was developed in the context of "Module::Build". Under this system, configuration information for a module "Foo", for example, is stored in a module called "Foo::ConfigData") (I would have called it "Foo::Config", but that was taken by all those other systems mentioned in the previous paragraph...). These "...::ConfigData" modules contain the configuration data, as well as publicly accessible methods for querying and setting (yes, actually re-writing) the configuration data. The "config_data" script (whose docs you are currently reading) is merely a front-end for those methods. If you wish, you may create alternate front-ends. The two types of data that may be stored are called "config" values and "feature" values. A "config" value may be any perl scalar, including references to complex data structures. It must, however, be serializable using "Data::Dumper". A "feature" is a boolean (1 or 0) value. USAGE This script functions as a basic getter/setter wrapper around the configuration of a single module. On the command line, specify which module's configuration you're interested in, and pass options to get or set "config" or "feature" values. The following options are supported: module Specifies the name of the module to configure (required). feature When passed the name of a "feature", shows its value. The value will be 1 if the feature is enabled, 0 if the feature is not enabled, or empty if the feature is unknown. When no feature name is supplied, the names and values of all known features will be shown. 
config When passed the name of a "config" entry, shows its value. The value will be displayed using "Data::Dumper" (or similar) as perl code. When no config name is supplied, the names and values of all known config entries will be shown. set_feature Sets the given "feature" to the given boolean value. Specify the value as either 1 or 0. set_config Sets the given "config" entry to the given value. eval If the "--eval" option is used, the values in "set_config" will be evaluated as perl code before being stored. This allows moderately complicated data structures to be stored. For really complicated structures, you probably shouldn't use this command-line interface, just use the Perl API instead. help Prints a help message, including a few examples, and exits. AUTHOR Ken Williams, kwilliams@cpan.org COPYRIGHT Copyright (c) 1999, Ken Williams. All rights reserved. This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Module::Build(3), perl(1). perl v5.34.0 2024-04-13 CONFIG_DATA(1)
|
config_data - Query or change configuration of Perl modules
|
# Get config/feature values config_data --module Foo::Bar --feature bazzable config_data --module Foo::Bar --config magic_number # Set config/feature values config_data --module Foo::Bar --set_feature bazzable=1 config_data --module Foo::Bar --set_config magic_number=42 # Print a usage message config_data --help
| null | null |
plutil
|
plutil can be used to check the syntax of property list files, or convert a plist file from one format to another. Specifying - as an input file reads from stdin. The first argument indicates the operation to perform, one of: -help Show the usage information for the command and exit. -p Print the property list in a human-readable fashion. The output format is not stable and not designed for machine parsing. The purpose of this command is to be able to easily read the contents of a plist file, no matter what format it is in. -lint Check the named property list files for syntax errors. This is the default command option if none is specified. -convert fmt Convert the named file to the indicated format and write back to the file system. If the file can't be loaded due to invalid syntax, the operation fails. This is the only option that supports the objc and swift formats. -convert objc -header Converts the named file to Obj-C literal syntax and creates a .h file. Useful for first-time conversions to literal syntax and only supported with the objc format. -insert keypath -type [value] [-append] Insert a value into the property list before writing it out. value is required unless type is dictionary or array. If -append is specified, keypath is expected to reference an array and the value will be appended to the end of the array. -replace keypath -type value Overwrite an existing value in the property list before writing it out. -remove keypath Removes the value at keypath from the property list before writing it out. -extract keypath fmt [-expect expect_type] Outputs the value at keypath in the property list as a new plist of type fmt. Optionally fails if -expect expect_type is used and the value at keypath does not match that type. -type keypath [-expect expect_type] Outputs the type of the value at keypath in the property list. Optionally fails if -expect expect_type is used and the value at keypath does not match that type. 
-create fmt Creates an empty plist of the specified fmt. There are a few additional options: -- Specifies that all further arguments are file names -n When used with -extract using the raw format, will not print a terminating newline character. This aids use in shell interpolation. -s Don't print anything on success. -r For JSON, add whitespace and indentation to make the output more human-readable and sort the keys like -p does. -o path Specify an alternate path name for the result of the -convert operation; this option is only useful with a single file to be converted. Specifying - as the path outputs to stdout. -e extension Specify an alternate extension for converted files, and the output file names are otherwise the same. ARGUMENTS fmt is one of: xml1 for version 1 of the XML plist format binary1 for version 1 of the binary plist format json for the JSON format swift to convert from plist to swift literal syntax objc to convert from plist to Obj-C literal syntax raw when used with -extract, will print the unencapsulated value at the keypath. See RAW VALUES AND EXPECTED TYPES below. The result will be output to stdout unless -o is specified. keypath is a key-value coding key path, with one extension: a numerical path component applied to an array will act on the object at that index in the array or insert it into the array if the numerical path component is the last one in the key path. type is one of: -bool YES if passed "YES" or "true", otherwise NO -integer any valid 64 bit integer -float any valid 64 bit float -string UTF8 encoded string -date date in XML property list format, not supported if outputting JSON -data a base-64 encoded string -xml an XML property list, useful for inserting compound values -json JSON fragment, useful for inserting compound values -array An empty array, when used with -insert. Does not accept a value. -dictionary An empty dictionary, when used with -insert. Does not accept a value. 
value will be assigned to the keypath specified with the -insert or -replace flags. RAW VALUES AND EXPECTED TYPES With -extract keypath raw the value printed depends on its type. Following are the possible expect_type values and how they will be printed when encountered with -extract keypath raw bool the string "true" or "false" integer the numeric value float the floating point value with no specific precision string the raw unescaped string, UTF8-encoded date the RFC3339-encoded string representation in UTC time zone data a base64-encoded string representation of the data array a number indicating the count of elements in the array dictionary each key in the dictionary will be printed on a new line in alpha-sorted order The above expect_type string is itself printed when -type keypath is used. DIAGNOSTICS The plutil command exits 0 on success, and 1 on failure. SEE ALSO plist(5) STANDARDS The plutil command obeys no one's rules but its own. HISTORY The plutil command first appeared in macOS 10.2. The raw format type, -type command, -expect option, and -append option first appeared in macOS 12. macOS March 29, 2021 macOS
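plutil ships only with macOS. As a rough cross-platform stand-in, the xml1/binary1 conversions described above can be sketched with Python's plistlib module; the dictionary below is made-up sample data, not anything plutil itself produces:

```shell
# Round-trip a property list through binary and XML form, mirroring
# `plutil -convert binary1` followed by `plutil -convert xml1`.
python3 - <<'EOF'
import plistlib

data = {"name": "demo", "count": 3, "enabled": True}   # sample data
binary = plistlib.dumps(data, fmt=plistlib.FMT_BINARY)
xml = plistlib.dumps(plistlib.loads(binary), fmt=plistlib.FMT_XML)

assert plistlib.loads(xml) == data   # loads() auto-detects the format
print("round-trip ok")
EOF
# -> round-trip ok
```

Like plutil -convert, the conversion is lossless for the plist data types shown here.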
|
plutil – property list utility
|
plutil [command_option] [other_options] file ...
| null | null |
python3
|
Python is an interpreted, interactive, object-oriented programming language that combines remarkable power with very clear syntax. For an introduction to programming in Python, see the Python Tutorial. The Python Library Reference documents built-in and standard types, constants, functions and modules. Finally, the Python Reference Manual describes the syntax and semantics of the core language in (perhaps too) much detail. (These documents may be located via the INTERNET RESOURCES below; they may be installed on your system as well.) Python's basic power can be extended with your own modules written in C or C++. On most systems such modules may be dynamically loaded. Python is also adaptable as an extension language for existing applications. See the internal documentation for hints. Documentation for installed Python modules and packages can be viewed by running the pydoc program. COMMAND LINE OPTIONS -B Don't write .pyc files on import. See also PYTHONDONTWRITEBYTECODE. -b Issue warnings about str(bytes_instance), str(bytearray_instance) and comparing bytes/bytearray with str. (-bb: issue errors) -c command Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command). --check-hash-based-pycs mode Configure how Python evaluates the up-to-dateness of hash-based .pyc files. -d Turn on parser debugging output (for experts only, depending on compilation options). -E Ignore environment variables like PYTHONPATH and PYTHONHOME that modify the behavior of the interpreter. -h , -? , --help Prints the usage for the interpreter executable and exits. --help-env Prints help about Python-specific environment variables and exits. --help-xoptions Prints help about implementation-specific -X options and exits. --help-all Prints complete usage information and exits. -i When a script is passed as the first argument or the -c option is used, enter interactive mode after executing the script or the command. 
It does not read the $PYTHONSTARTUP file. This can be useful to inspect global variables or a stack trace when a script raises an exception. -I Run Python in isolated mode. This also implies -E, -P and -s. In isolated mode sys.path contains neither the script's directory nor the user's site-packages directory. All PYTHON* environment variables are ignored, too. Further restrictions may be imposed to prevent the user from injecting malicious code. -m module-name Searches sys.path for the named module and runs the corresponding .py file as a script. This terminates the option list (following options are passed as arguments to the module). -O Remove assert statements and any code conditional on the value of __debug__; augment the filename for compiled (bytecode) files by adding .opt-1 before the .pyc extension. -OO Do -O and also discard docstrings; change the filename for compiled (bytecode) files by adding .opt-2 before the .pyc extension. -P Don't automatically prepend a potentially unsafe path to sys.path such as the current directory, the script's directory or an empty string. See also the PYTHONSAFEPATH environment variable. -q Do not print the version and copyright messages. These messages are also suppressed in non-interactive mode. -s Don't add user site directory to sys.path. -S Disable the import of the module site and the site-dependent manipulations of sys.path that it entails. Also disable these manipulations if site is explicitly imported later. -u Force the stdout and stderr streams to be unbuffered. This option has no effect on the stdin stream. -v Print a message each time a module is initialized, showing the place (filename or built-in module) from which it is loaded. When given twice, print a message for each file that is checked for when searching for a module. Also provides information on module cleanup at exit. -V , --version Prints the Python version number of the executable and exits. When given twice, print more information about the build. 
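The -m option above is easy to try with json.tool, a standard-library module commonly run this way (assumes a python3 on PATH):

```shell
# -m searches sys.path for the module and runs it as a script;
# json.tool reads JSON on stdin and pretty-prints it.
echo '{"key": "value"}' | python3 -m json.tool
# prints:
# {
#     "key": "value"
# }
```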
-W argument Warning control. Python's warning machinery by default prints warning messages to sys.stderr. The simplest settings apply a particular action unconditionally to all warnings emitted by a process (even those that are otherwise ignored by default): -Wdefault # Warn once per call location -Werror # Convert to exceptions -Walways # Warn every time -Wmodule # Warn once per calling module -Wonce # Warn once per Python process -Wignore # Never warn The action names can be abbreviated as desired and the interpreter will resolve them to the appropriate action name. For example, -Wi is the same as -Wignore . The full form of argument is: action:message:category:module:lineno Empty fields match all values; trailing empty fields may be omitted. For example -W ignore::DeprecationWarning ignores all DeprecationWarning warnings. The action field is as explained above but only applies to warnings that match the remaining fields. The message field must match the whole printed warning message; this match is case-insensitive. The category field matches the warning category (e.g. "DeprecationWarning"). This must be a class name; the match tests whether the actual warning category of the message is a subclass of the specified warning category. The module field matches the (fully-qualified) module name; this match is case-sensitive. The lineno field matches the line number, where zero matches all line numbers and is thus equivalent to an omitted line number. Multiple -W options can be given; when a warning matches more than one option, the action for the last matching option is performed. Invalid -W options are ignored (though, a warning message is printed about invalid options when the first warning is issued). Warnings can also be controlled using the PYTHONWARNINGS environment variable and from within a Python program using the warnings module. For example, the warnings.filterwarnings() function can be used to use a regular expression on the warning message. 
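The filter actions described above can be exercised from the shell (assumes a python3 on PATH):

```shell
# "error" converts a matching DeprecationWarning into an exception, so
# the interpreter exits non-zero; "ignore" suppresses it entirely.
python3 -W error::DeprecationWarning \
    -c 'import warnings; warnings.warn("old", DeprecationWarning)' \
    2>/dev/null || echo "warning raised as exception"
# -> warning raised as exception

python3 -W ignore::DeprecationWarning \
    -c 'import warnings; warnings.warn("old", DeprecationWarning)'
# (no output)
```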
-X option Set implementation-specific option. The following options are available: -X faulthandler: enable faulthandler -X showrefcount: output the total reference count and number of used memory blocks when the program finishes or after each statement in the interactive interpreter. This only works on debug builds -X tracemalloc: start tracing Python memory allocations using the tracemalloc module. By default, only the most recent frame is stored in a traceback of a trace. Use -X tracemalloc=NFRAME to start tracing with a traceback limit of NFRAME frames -X importtime: show how long each import takes. It shows module name, cumulative time (including nested imports) and self time (excluding nested imports). Note that its output may be broken in multi-threaded applications. Typical usage is python3 -X importtime -c 'import asyncio' -X dev: enable CPython's "development mode", introducing additional runtime checks which are too expensive to be enabled by default. It will not be more verbose than the default if the code is correct: new warnings are only emitted when an issue is detected. Effect of the developer mode: * Add default warning filter, as -W default * Install debug hooks on memory allocators: see the PyMem_SetupDebugHooks() C function * Enable the faulthandler module to dump the Python traceback on a crash * Enable asyncio debug mode * Set the dev_mode attribute of sys.flags to True * io.IOBase destructor logs close() exceptions -X utf8: enable UTF-8 mode for operating system interfaces, overriding the default locale-aware mode. -X utf8=0 explicitly disables UTF-8 mode (even when it would otherwise activate automatically). See PYTHONUTF8 for more details -X pycache_prefix=PATH: enable writing .pyc files to a parallel tree rooted at the given directory instead of to the code tree. 
-X warn_default_encoding: enable opt-in EncodingWarning for 'encoding=None' -X no_debug_ranges: disable the inclusion of the tables mapping extra location information (end line, start column offset and end column offset) to every instruction in code objects. This is useful when smaller code objects and pyc files are desired as well as suppressing the extra visual location indicators when the interpreter displays tracebacks. -X frozen_modules=[on|off]: whether or not frozen modules should be used. The default is "on" (or "off" if you are running a local build). -X int_max_str_digits=number: limit the size of int<->str conversions. This helps avoid denial of service attacks when parsing untrusted data. The default is sys.int_info.default_max_str_digits. 0 disables. -x Skip the first line of the source. This is intended for a DOS specific hack only. Warning: the line numbers in error messages will be off by one! INTERPRETER INTERFACE The interpreter interface resembles that of the UNIX shell: when called with standard input connected to a tty device, it prompts for commands and executes them until an EOF is read; when called with a file name argument or with a file as standard input, it reads and executes a script from that file; when called with -c command, it executes the Python statement(s) given as command. Here command may contain multiple statements separated by newlines. Leading whitespace is significant in Python statements! In non-interactive mode, the entire input is parsed before it is executed. If available, the script name and additional arguments thereafter are passed to the script in the Python variable sys.argv, which is a list of strings (you must first import sys to be able to access it). If no script name is given, sys.argv[0] is an empty string; if -c is used, sys.argv[0] contains the string '-c'. Note that options interpreted by the Python interpreter itself are not placed in sys.argv. 
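The sys.argv behaviour described above can be checked directly; note that an interpreter option (-B here) is consumed by python3 itself and never reaches the script:

```shell
# With -c, argv[0] is the string '-c'; the remaining arguments follow.
python3 -B -c 'import sys; print(sys.argv)' one two
# -> ['-c', 'one', 'two']
```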
In interactive mode, the primary prompt is `>>>'; the second prompt (which appears when a command is not complete) is `...'. The prompts can be changed by assignment to sys.ps1 or sys.ps2. The interpreter quits when it reads an EOF at a prompt. When an unhandled exception occurs, a stack trace is printed and control returns to the primary prompt; in non-interactive mode, the interpreter exits after printing the stack trace. The interrupt signal raises the KeyboardInterrupt exception; other UNIX signals are not caught (except that SIGPIPE is sometimes ignored, in favor of the IOError exception). Error messages are written to stderr. FILES AND DIRECTORIES These are subject to difference depending on local installation conventions; ${prefix} and ${exec_prefix} are installation-dependent and should be interpreted as for GNU software; they may be the same. The default for both is /usr/local. ${exec_prefix}/bin/python Recommended location of the interpreter. ${prefix}/lib/python<version> ${exec_prefix}/lib/python<version> Recommended locations of the directories containing the standard modules. ${prefix}/include/python<version> ${exec_prefix}/include/python<version> Recommended locations of the directories containing the include files needed for developing Python extensions and embedding the interpreter. ENVIRONMENT VARIABLES PYTHONSAFEPATH If this is set to a non-empty string, don't automatically prepend a potentially unsafe path to sys.path such as the current directory, the script's directory or an empty string. See also the -P option. PYTHONHOME Change the location of the standard Python libraries. By default, the libraries are searched in ${prefix}/lib/python<version> and ${exec_prefix}/lib/python<version>, where ${prefix} and ${exec_prefix} are installation-dependent directories, both defaulting to /usr/local. When $PYTHONHOME is set to a single directory, its value replaces both ${prefix} and ${exec_prefix}. 
To specify different values for these, set $PYTHONHOME to ${prefix}:${exec_prefix}. PYTHONPATH Augments the default search path for module files. The format is the same as the shell's $PATH: one or more directory pathnames separated by colons. Non-existent directories are silently ignored. The default search path is installation dependent, but generally begins with ${prefix}/lib/python<version> (see PYTHONHOME above). The default search path is always appended to $PYTHONPATH. If a script argument is given, the directory containing the script is inserted in the path in front of $PYTHONPATH. The search path can be manipulated from within a Python program as the variable sys.path. PYTHONPLATLIBDIR Override sys.platlibdir. PYTHONSTARTUP If this is the name of a readable file, the Python commands in that file are executed before the first prompt is displayed in interactive mode. The file is executed in the same name space where interactive commands are executed so that objects defined or imported in it can be used without qualification in the interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file. PYTHONOPTIMIZE If this is set to a non-empty string it is equivalent to specifying the -O option. If set to an integer, it is equivalent to specifying -O multiple times. PYTHONDEBUG If this is set to a non-empty string it is equivalent to specifying the -d option. If set to an integer, it is equivalent to specifying -d multiple times. PYTHONDONTWRITEBYTECODE If this is set to a non-empty string it is equivalent to specifying the -B option (don't try to write .pyc files). PYTHONINSPECT If this is set to a non-empty string it is equivalent to specifying the -i option. PYTHONIOENCODING If this is set before running the interpreter, it overrides the encoding used for stdin/stdout/stderr, in the syntax encodingname:errorhandler The errorhandler part is optional and has the same meaning as in str.encode. 
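The interaction between PYTHONPATH and sys.path can be demonstrated with a throwaway directory; this sketch (the temporary directory stands in for any real module directory) shows the directory appearing in the child interpreter's search path:

```python
import os
import subprocess
import sys
import tempfile

# A directory named in PYTHONPATH shows up in the child
# interpreter's sys.path, ahead of the installation defaults.
with tempfile.TemporaryDirectory() as extra:
    env = dict(os.environ, PYTHONPATH=extra)
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.path)"],
        capture_output=True, text=True, check=True, env=env,
    )
    print(extra in result.stdout)  # → True
```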
For stderr, the errorhandler part is ignored; the handler will always be 'backslashreplace'. PYTHONNOUSERSITE If this is set to a non-empty string it is equivalent to specifying the -s option (Don't add the user site directory to sys.path). PYTHONUNBUFFERED If this is set to a non-empty string it is equivalent to specifying the -u option. PYTHONVERBOSE If this is set to a non-empty string it is equivalent to specifying the -v option. If set to an integer, it is equivalent to specifying -v multiple times. PYTHONWARNINGS If this is set to a comma-separated string it is equivalent to specifying the -W option for each separate value. PYTHONHASHSEED If this variable is set to "random", a random value is used to seed the hashes of str and bytes objects. If PYTHONHASHSEED is set to an integer value, it is used as a fixed seed for generating the hash() of the types covered by the hash randomization. Its purpose is to allow repeatable hashing, such as for selftests for the interpreter itself, or to allow a cluster of python processes to share hash values. The integer must be a decimal number in the range [0,4294967295]. Specifying the value 0 will disable hash randomization. PYTHONINTMAXSTRDIGITS Limit the maximum digit characters in an int value when converting from a string and when converting an int back to a str. A value of 0 disables the limit. Conversions to or from bases 2, 4, 8, 16, and 32 are never limited. PYTHONMALLOC Set the Python memory allocators and/or install debug hooks. The available memory allocators are malloc and pymalloc. The available debug hooks are debug, malloc_debug, and pymalloc_debug. When Python is compiled in debug mode, the default is pymalloc_debug and the debug hooks are automatically used. Otherwise, the default is pymalloc. PYTHONMALLOCSTATS If set to a non-empty string, Python will print statistics of the pymalloc memory allocator every time a new pymalloc object arena is created, and on shutdown. 
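The repeatable hashing that PYTHONHASHSEED enables can be verified across processes; this sketch (the string 'spam' and seed 42 are arbitrary choices) shows two separate interpreters agreeing on a str hash when given the same fixed seed:

```python
import os
import subprocess
import sys

def str_hash(seed):
    # Ask a child interpreter for hash('spam') under a fixed seed.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    result = subprocess.run(
        [sys.executable, "-c", "print(hash('spam'))"],
        capture_output=True, text=True, check=True, env=env,
    )
    return result.stdout.strip()

# With the same fixed seed, independent processes produce the same
# hash; with "random" (the default), they generally would not.
print(str_hash("42") == str_hash("42"))  # → True
```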
This variable is ignored if the $PYTHONMALLOC environment variable is used to force the malloc(3) allocator of the C library, or if Python is configured without pymalloc support. PYTHONASYNCIODEBUG If this environment variable is set to a non-empty string, enable the debug mode of the asyncio module. PYTHONTRACEMALLOC If this environment variable is set to a non-empty string, start tracing Python memory allocations using the tracemalloc module. The value of the variable is the maximum number of frames stored in a traceback of a trace. For example, PYTHONTRACEMALLOC=1 stores only the most recent frame. PYTHONFAULTHANDLER If this environment variable is set to a non-empty string, faulthandler.enable() is called at startup: install a handler for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals to dump the Python traceback. This is equivalent to the -X faulthandler option. PYTHONEXECUTABLE If this environment variable is set, sys.argv[0] will be set to its value instead of the value got through the C runtime. Only works on Mac OS X. PYTHONUSERBASE Defines the user base directory, which is used to compute the path of the user site-packages directory and installation paths for python -m pip install --user. PYTHONPROFILEIMPORTTIME If this environment variable is set to a non-empty string, Python will show how long each import takes. This is exactly equivalent to setting -X importtime on the command line. PYTHONBREAKPOINT If this environment variable is set to 0, it disables the default debugger. It can be set to the callable of your debugger of choice. Debug-mode variables Setting these variables only has an effect in a debug build of Python, that is, if Python was configured with the --with-pydebug build option. PYTHONDUMPREFS If this environment variable is set, Python will dump objects and reference counts still alive after shutting down the interpreter. 
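The PYTHONBREAKPOINT behavior described above can be exercised without an interactive debugger; in this sketch (the printed message is arbitrary), setting the variable to 0 turns the built-in breakpoint() into a no-op so the child process runs to completion instead of stopping in pdb:

```python
import os
import subprocess
import sys

# With PYTHONBREAKPOINT=0, breakpoint() does nothing, so the child
# process does not block waiting for debugger input.
env = dict(os.environ, PYTHONBREAKPOINT="0")
result = subprocess.run(
    [sys.executable, "-c", "breakpoint(); print('no debugger')"],
    capture_output=True, text=True, check=True, env=env,
)
print(result.stdout.strip())  # → no debugger
```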
AUTHOR The Python Software Foundation: https://www.python.org/psf/ INTERNET RESOURCES Main website: https://www.python.org/ Documentation: https://docs.python.org/ Developer resources: https://devguide.python.org/ Downloads: https://www.python.org/downloads/ Module repository: https://pypi.org/ Newsgroups: comp.lang.python, comp.lang.python.announce LICENSING Python is distributed under an Open Source license. See the file "LICENSE" in the Python source distribution for information on terms & conditions for accessing and otherwise using Python and for a DISCLAIMER OF ALL WARRANTIES. PYTHON(1)
|
python - an interpreted, interactive, object-oriented programming language
|
python [ -B ] [ -b ] [ -d ] [ -E ] [ -h ] [ -i ] [ -I ] [ -m module-name ] [ -q ] [ -O ] [ -OO ] [ -P ] [ -s ] [ -S ] [ -u ] [ -v ] [ -V ] [ -W argument ] [ -x ] [ -X option ] [ -? ] [ --check-hash-based-pycs default | always | never ] [ --help ] [ --help-env ] [ --help-xoptions ] [ --help-all ] [ -c command | script | - ] [ arguments ]
| null | null |
avbutil
|
The avbutil executable is used for managing AVB features and settings. The following options are available: --virtual-audio enable if-name [--stream-count stream-count] [--channel-count channel-count] [--no-44.1k] [--no-48k] [--no-88.2k] [--no-96k] [--no-176.4k] [--no-192k] [--no-am824] [--no-aaf-int] [--no-aaf-float] [--config-per-count] Enable the builtin virtual audio entity on the specified interface. With no additional arguments this enables the builtin model; with additional arguments this dynamically creates an entity model with the specified parameters. --stream-count defines how many audio streams to create, --channel-count is how many audio channels in each stream. --no-44.1k, --no-48k, --no-88.2k, --no-96k, --no-176.4k and --no-192k disable each sample rate. --no-am824 disables the IEC-61883-6 AM824 stream format, --no-aaf-int disables the AAF 24 bits in a 32 bit integer PCM stream format and --no-aaf-float disables the AAF floating point PCM stream format. --config-per-count creates multiple configurations with each count of streams, e.g. if --stream-count is 4 then it creates 4 configurations: the first with 1 audio stream, the second with 2, the third with 3 and the fourth with 4. The default is 1 stream with 8 channels per stream and all sample rates enabled. Note that IEC-61883-6 AM824 streaming only supports 48k, 96k and 192k sample rates, and no stream formats will be created for those at 44.1k, 88.2k or 176.4k. --virtual-audio disable if-name Disable the builtin virtual audio entity on the specified interface. --virtual-audio list List the set of interfaces with a builtin virtual audio entity enabled. An interface must be present and enabled for AVB use to enable the virtual audio entity on that interface. A virtual audio entity can always be removed from an interface, regardless of whether the interface is present. 
--custom-audio add unique-id if-name path-to-entity-model Add the custom audio device with the given AEMXML or AEMPLIST entity model on the specified interface. --custom-audio remove unique-id if-name Disable the virtual audio device on the specified interface and remove it. --custom-audio list List the enabled custom virtual audio devices. --controller [launch | enable | disable] Launch, enable or disable the general AVDECC Controller. Passing no arguments is the equivalent of passing enable and then passing launch. Note that enable and disable have no effect and are kept for legacy support. The general AVDECC Controller is part of the AVB Audio Configuration utility that lives in the system CoreServices directory. The launch command is provided as a convenience for not having to find the application. --acquire-mode enable | disable | status Enable, disable or check the current status of the acquire mode AVB audio controller, that is, the controller that provides the functionality of the acquire checkboxes in the Network Device Browser window of the Audio MIDI Setup application. --acquire-mtt-tune enable | disable | status Enable, disable or check the current status of the acquire mode Max Transit Time automatic latency tuning. --convert-aem xml-to-plist xml-path plist-path Convert AEM xml file to AEM plist file. --convert-aem plist-to-xml plist-path xml-path Convert AEM plist file to AEM xml file. --convert-aem xml-to-c xml-path output-path Convert AEM xml file to a series of C data arrays in a C file. --convert-aem plist-to-c plist-path output-path Convert AEM plist file to a series of C data arrays in a C file. --mvrp add bsd_name VLAN-ID Add the VLAN ID to the attributes being registered. This will only persist while avbutil is running. --mvrp remove bsd_name VLAN-ID Remove the VLAN ID from the attributes being registered by avbutil. Note that this will not remove a registration from another application. 
--mvrp list bsd_name List all of the MVRP attributes being registered. Darwin 11/3/22 Darwin
|
avbutil – manage AVB features and settings.
|
avbutil
| null | null |
xcode-select
|
xcode-select controls the location of the developer directory used by xcrun(1), xcodebuild(1), cc(1), and other Xcode and BSD development tools. This also controls the locations searched by man(1) for developer tool manpages. This allows you to easily switch between different versions of the Xcode tools and can be used to update the path to Xcode if it is moved after installation. Usage When multiple Xcode applications are installed on a system (e.g. /Applications/Xcode.app, containing the latest Xcode, and /Applications/Xcode-beta.app, containing a beta) use xcode-select --switch path/to/Xcode.app to specify the Xcode that you wish to use for command line developer tools. After setting a developer directory, all of the xcode-select provided developer tool shims (see FILES) will automatically invoke the version of the tool inside the selected developer directory. Your own scripts, makefiles, and other tools can also use xcrun(1) to easily look up tools inside the active developer directory, making it easy to switch them between different versions of the Xcode tools and allowing them to function properly on systems where the Xcode application has been installed to a non-default location.
|
xcode-select - Manages the active developer directory for Xcode and BSD tools.
|
xcode-select [-h|--help] [-s|--switch <path>] [-p|--print-path] [-v|--version]
|
-h, --help Prints the usage message. -s <path>, --switch <path> Sets the active developer directory to the given path, for example /Applications/Xcode-beta.app. This command must be run with superuser permissions (see sudo(8)), and will affect all users on the system. To set the path without superuser permissions or only for the current shell session, use the DEVELOPER_DIR environment variable instead (see ENVIRONMENT). -p, --print-path Prints the path to the currently selected developer directory. This is useful for inspection, but scripts and other tools should use xcrun(1) to locate tools inside the active developer directory. -r, --reset Unsets any user-specified developer directory, so that the developer directory will be found via the default search mechanism. This command must be run with superuser permissions (see sudo(8)), and will affect all users on the system. -v, --version Prints xcode-select version information. --install Opens a user interface dialog to request automatic installation of the command line developer tools. ENVIRONMENT DEVELOPER_DIR Overrides the active developer directory. When DEVELOPER_DIR is set, its value will be used instead of the system-wide active developer directory. Note that for historical reasons, the developer directory is considered to be the Developer content directory inside the Xcode application (for example /Applications/Xcode.app/Contents/Developer). You can set the environment variable to either the actual Developer content directory, or the Xcode application directory -- the xcode-select provided shims will automatically convert the environment variable into the full Developer content path.
|
xcode-select --switch /Applications/Xcode.app/Contents/Developer Select /Applications/Xcode.app/Contents/Developer as the active developer directory. xcode-select --switch /Applications/Xcode.app As above, selects /Applications/Xcode.app/Contents/Developer as the active developer directory. The Developer content directory is automatically inferred by xcode-select. /usr/bin/xcodebuild Runs xcodebuild out of the active developer directory. /usr/bin/xcrun --find xcodebuild Use xcrun to locate xcodebuild inside the active developer directory. env DEVELOPER_DIR="/Applications/Xcode-beta.app" /usr/bin/xcodebuild Execute xcodebuild using an alternate developer directory. FILES /usr/bin/xcrun Used to find or run arbitrary commands from the active developer directory. See xcrun(1) for more information. /usr/bin/actool /usr/bin/agvtool /usr/bin/desdp /usr/bin/genstrings /usr/bin/ibtool /usr/bin/ictool /usr/bin/opendiff /usr/bin/pip3 /usr/bin/python3 /usr/bin/sdef /usr/bin/sdp /usr/bin/stapler /usr/bin/xcodebuild /usr/bin/xcscontrol /usr/bin/xcsdiagnose /usr/bin/xctrace /usr/bin/xed Runs the matching Xcode tool from within the active developer directory. 
/usr/bin/DeRez /usr/bin/GetFileInfo /usr/bin/ResMerger /usr/bin/Rez /usr/bin/SetFile /usr/bin/SplitForks /usr/bin/ar /usr/bin/as /usr/bin/asa /usr/bin/bm4 /usr/bin/bison /usr/bin/c89 /usr/bin/c99 /usr/bin/clang++ /usr/bin/clang /usr/bin/clangd /usr/bin/cmpdylib /usr/bin/codesign_allocate /usr/bin/cpp /usr/bin/ctags /usr/bin/ctf_insert /usr/bin/dsymutil /usr/bin/dwarfdump /usr/bin/flex++ /usr/bin/flex /usr/bin/g++ /usr/bin/gatherheaderdoc /usr/bin/gcc /usr/bin/gcov /usr/bin/git-receive-pack /usr/bin/git-shell /usr/bin/git-upload-archive /usr/bin/git-upload-pack /usr/bin/git /usr/bin/gm4 /usr/bin/gnumake /usr/bin/gperf /usr/bin/hdxml2manxml /usr/bin/headerdoc2html /usr/bin/indent /usr/bin/install_name_tool /usr/bin/ld /usr/bin/lex /usr/bin/libtool /usr/bin/lipo /usr/bin/lldb /usr/bin/lorder /usr/bin/m4 /usr/bin/make /usr/bin/mig /usr/bin/nm /usr/bin/nmedit /usr/bin/objdump /usr/bin/otool /usr/bin/pagestuff /usr/bin/ranlib /usr/bin/resolveLinks /usr/bin/rpcgen /usr/bin/segedit /usr/bin/size /usr/bin/strings /usr/bin/strip /usr/bin/swift /usr/bin/swiftc /usr/bin/unifdef /usr/bin/unifdefall /usr/bin/vtool /usr/bin/xml2man /usr/bin/yacc Runs the matching BSD tool from within the active developer directory. SEE ALSO xcrun(1), xcodebuild(1) HISTORY The xcode-select command first appeared in Xcode 3.0. Mac OS X June 24, 2019 XCODE-SELECT(1)
|
libnetcfg5.30
|
The libnetcfg utility can be used to configure libnet. Starting from perl 5.8, libnet is part of the standard Perl distribution, but libnetcfg can be used for any libnet installation. USAGE Without arguments libnetcfg displays the current configuration. $ libnetcfg # old config ./libnet.cfg daytime_hosts ntp1.none.such ftp_int_passive 0 ftp_testhost ftp.funet.fi inet_domain none.such nntp_hosts nntp.none.such ph_hosts pop3_hosts pop.none.such smtp_hosts smtp.none.such snpp_hosts test_exist 1 test_hosts 1 time_hosts ntp.none.such # libnetcfg -h for help $ It tells where the old configuration file was found (if found). The "-h" option will show a usage message. To change the configuration you will need to use either the "-c" or the "-d" options. The name of the old configuration file is by default "libnet.cfg", unless otherwise specified using the -i option, "-i oldfile", and it is searched for first in the current directory, and then in your module path. The default name of the new configuration file is "libnet.cfg", and by default it is written to the current directory, unless otherwise specified using the -o option, "-o newfile". SEE ALSO Net::Config, libnetFAQ AUTHORS Graham Barr, the original Configure script of libnet. Jarkko Hietaniemi, conversion into libnetcfg for inclusion into Perl 5.8. perl v5.30.3 2024-04-13 LIBNETCFG(1)
|
libnetcfg - configure libnet
| null | null | null |
powermetrics
|
powermetrics gathers and displays CPU usage statistics (divided into time spent in user mode and supervisor mode), timer and interrupt wakeup frequency (total and, for near-idle workloads, those that resulted in package idle exits), and on supported platforms, interrupt frequencies (categorized by CPU number), package C-state statistics (an indication of the time the core complex + integrated graphics, if any, were in low-power idle states), and CPU frequency distribution during the sample. The tool may also display estimated power consumed by various SoC subsystems, such as CPU, GPU, and ANE (Apple Neural Engine). Note: Average power values reported by powermetrics are estimated and may be inaccurate - hence they should not be used for any comparison between devices, but can be used to help optimize apps for energy efficiency. -h, --help Print help message. -s samplers, --samplers samplers Comma separated list of samplers and sampler groups. Run with -h to see a list of samplers and sampler groups. Specifying "default" will display the default set, and specifying "all" will display all supported samplers. -o file, --output-file file Output to file instead of stdout. 
-b size, --buffer-size size Set output buffer size (0=none, 1=line) -i N, --sample-rate N sample every N ms (0=disabled) [default: 5000ms] -n N, --sample-count N Obtain N periodic samples (0=infinite) [default: 0] -t N, --wakeup-cost N Assume package idle wakeups have a CPU time cost of N us when using hybrid sort orders using idle wakeups with time-based metrics -r method, --order method Order process list using specified method [default: composite] [pid] process identifier [wakeups] total package idle wakeups (alias: -W) [cputime] total CPU time used (alias: -C) [composite] energy number, see --show-process-energy (alias: -O) -f format, --format format Display data in specified format [default: text] [text] human-readable text output [plist] machine-readable property list, NUL-separated -a N, --poweravg N Display poweravg every N samples (0=disabled) [default: 10] --hide-cpu-duty-cycle Hide CPU duty cycle data --show-initial-usage Print initial sample for entire uptime --show-usage-summary Print final usage summary when exiting --show-pstates Show pstate distribution. Only available on certain hardware. --show-plimits Show plimits, forced idle and RMBS. Only available on certain hardware. --show-cpu-qos Show per cpu QOS breakdowns. --show-process-coalition Group processes by coalitions and show per coalition information. Processes that have exited during the sample will still have their time billed to the coalition, making this useful for disambiguating DEAD_TASK time. --show-responsible-pid Show responsible pid for xpc services and parent pid --show-process-wait-times Show per-process sfi wait time info --show-process-qos-tiers Show per-process qos latency and throughput tier --show-process-io Show per-process io information --show-process-gpu Show per-process gpu time. This is only available on certain hardware. --show-process-netstats Show per-process network information --show-process-qos Show QOS times aggregated by process. 
Per thread information is not available. --show-process-energy Show per-process energy impact number. This number is a rough proxy for the total energy the process uses, including CPU, GPU, disk io and networking. The weighting of each is platform specific. Enabling this implicitly enables sampling of all the above per-process statistics. --show-process-samp-norm Show CPU time normalized by the sample window, rather than the process start time. For example, a process that launched 1 second before the end of a 5 second sample window and ran continuously until the end of the window will show up as 200 ms/s here and 1000 ms/s in the regular column. --show-process-ipc Show per-process Instructions and cycles on ARM machines. Use with --show-process-amp to show cluster stats. --show-all Enables all samplers and displays all the available information for each sampler. This tool also implements special behavior upon receipt of certain signals to aid with the automated collection of data: SIGINFO take an immediate sample SIGIO flush any buffered output SIGINT/SIGTERM/SIGHUP stop sampling and exit OUTPUT Guidelines for energy reduction CPU time, deadlines and interrupt wakeups: Lower is better Interrupt counts: Lower is better C-state residency: Higher is better Running Tasks 1. CPU time consumed by threads assigned to that process, broken down into time spent in user space and kernel mode. 2. Counts of "short" timers (where the time-to-deadline was < 5 milliseconds in the future at the point of timer creation) which woke up threads from that process. High frequency timers, which typically have short time-to-deadlines, can result in significant energy consumption. 3. A count of total interrupt level wakeups which resulted in dispatching a thread from the process in question. For example, if a thread were blocked in a usleep() system call, a timer interrupt would cause that thread to be dispatched, and would increment this counter. 
For workloads with a significant idle component, this metric is useful to study in conjunction with the package idle exit metric reported below. 4. A count of "package idle exits" induced by timers/device interrupts which awakened threads from the process in question. This is a subset of the interrupt wakeup count. Timers and other interrupts that trigger "package idle exits" have a greater impact on energy consumption relative to other interrupts. With the exception of some Mac Pro systems, Mac and iOS systems are typically single package systems, wherein all CPUs are part of a single processor complex (typically a single IC die) with shared logic that can include (depending on system specifics) shared last level caches, an integrated memory controller etc. When all CPUs in the package are idle, the hardware can power-gate significant portions of the shared logic in addition to each individual processor's logic, as well as take measures such as placing DRAM into self-refresh (also referred to as auto-refresh), placing interconnects into lower-power states etc. Hence a timer or interrupt that triggers an exit from this package idle state results in a greater increase in power than a timer that occurred when the CPU in question was already executing. The process initiating a package idle wakeup may also be the "prime mover", i.e. it may be the trigger for further activity in its own or other processes. This metric is most useful when the system is relatively idle, as with typical light workloads such as web browsing and movie playback; with heavier workloads, the CPU activity can be high enough such that package idle entry is relatively rare, thus masking package idle exits due to the process/thread in question. 5. If any processes arrived and vanished during the inter-sample interval, or a previously sampled process vanished, their statistics are reflected in the row labeled "DEAD_TASKS". 
This can identify issues involving transient processes which may be spawned too frequently. dtrace ("execsnoop") or other tools can then be used to identify the transient processes in question. Running powermetrics in coalition mode, (see below), will also help track down transient process issues, by billing the coalition to which the process belongs. Interrupt Distribution The interrupts sampler reports interrupt frequencies, classified by interrupt vector and associated device, on a per-CPU basis. Mac OS currently assigns all device interrupts to CPU0, but timers and interprocessor interrupts can occur on other CPUs. Interrupt frequencies can be useful in identifying misconfigured devices or areas of improvement in interrupt load, and can serve as a proxy for identifying device activity across the sample interval. For example, during a network-heavy workload, an increase in interrupts associated with Airport wireless ("ARPT"), or wired ethernet ("ETH0" "ETH1" etc.) is not unexpected. However, if the interrupt frequency for a given device is non-zero when the device is not active (e.g. if "HDAU" interrupts, for High Definition Audio, occur even when no audio is playing), that may be a driver error. The int_sources sampler attributes interrupts to the responsible InterruptEventSources, which helps disambiguate the cause of an interrupt if the vector serves more than one source. Battery Statistics The battery sampler reports battery discharge rates, current and maximum charge levels, cycle counts and degradation from design capacity across the interval in question, if a delta was reported by the battery management unit. Note that the battery controller data may arrive out-of- phase with respect to powermetrics samples, which can cause aliasing issues across short sample intervals. 
Discharge rates across discontinuities such as sleep/wake may also be inaccurate on some systems; however, the rate of change of the total charge level across longer intervals is a useful indicator of total system load. Powermetrics does not filter discharge rates for A/C connect/disconnect events, system sleep residency etc. Battery discharge rates are typically not comparable across machine models. Processor Energy Usage The cpu_power sampler reports data derived from the Intel energy models; as of the Sandy Bridge Intel microarchitecture, the Intel power control unit internally maintains an energy consumption model whose details are proprietary, but are likely based on duty cycles for individual execution units, current voltage/frequency etc. These numbers are not strictly accurate but are correlated with actual energy consumption. This section lists: power dissipated by the processor package, which includes the CPU cores, the integrated GPU and the system agent (integrated memory controller, last level cache), and separately, CPU core power and GT (integrated GPU) power (the latter two in a forthcoming version). The energy model data is generally not comparable across machine models. The cpu_power sampler next reports, on processors with Nehalem and newer microarchitectures, hardware derived processor frequency and idle residency information, labeled "P-states" and "C-states" respectively in Intel terminology. C-states are further classified into "package c-states" and per-core c-states. The processor enters a "c-state" in the scheduler's idle loop, which results in clock-gating or power-gating CPU core and, potentially, package logic, considerably reducing power dissipation. High package c-state residency is a goal to strive for, as energy consumption of the CPU complex, integrated memory controller if any and DRAM is significantly reduced when in a package c-state. 
Package c-states occur when all CPU cores within the package are idle, and the on-die integrated GPU, if any (SandyBridge mobile and beyond), is also idle. Powermetrics reports package c-state residency as a fraction of the time sampled. This is available on Nehalem microarchitecture and newer processors. Note that some systems, such as Mac Pros, do not enable "package" c-states. Powermetrics also reports per-core c-state residencies, signifying when the core in question (which can include multiple SMTs or "hyperthreads") is idle, as well as active/inactive duty cycle histograms for each logical processor within the core. This is available on Nehalem microarchitecture and newer processors. This section also lists the average clock frequency at which the given logical processor executed when not idle within the sampled interval, expressed as both an absolute frequency in MHz and as a percentage of the nominal rated frequency. These average frequencies can vary due to the operating system's demand based dynamic voltage and frequency scaling. Some systems can execute at frequencies greater than the nominal or "P1" frequency, which is termed "turbo mode" on Intel systems. Such operation will manifest as > 100% of nominal frequency. Lengthy execution in turbo mode is typically energy inefficient, as those frequencies have high voltage requirements, resulting in a correspondingly quadratic increase in power that the reduction in execution time is insufficient to outweigh. Current systems typically have a single voltage/frequency domain per package, but as the processors can execute out-of-phase, they may display different average execution frequencies. Disk Usage and Network Activity The network and disk samplers report deltas in disk and network activity that occurred during the sample. Specifying --show-process-netstats and --show-process-io will also give you this information on a per-process basis in the tasks sampler. 
Backlight level The battery sampler also reports the instantaneous value of the backlight luminosity level. This value is likely not comparable across systems and machine models, but can be useful when comparing scenarios on a given system. Devices The devices sampler, for each device, reports the time spent in each of the device's states over the course of the sample. The meaning of the different states is specific to each device. Powermetrics denotes low power states with an "L", device usable states with a "U" and power on states with an "O". SMC The smc sampler displays information supplied by the System Management Controller. On supported platforms, this includes fan speed and information from various temperature sensors. These are instantaneous values taken at the end of the sample window, and do not necessarily reflect the values at other times in the window. Thermal The thermal sampler displays the current thermal pressure the system is under. This is an instantaneous value taken at the end of the sample window, and does not necessarily reflect the value at other times in the window. SFI The sfi sampler shows system-wide selective forced idle statistics. Selective forced idle is a mechanism the operating system uses to limit system power while minimizing user impact, by throttling certain threads on the system. Each thread belongs to an SFI class, and this sampler displays how much each SFI class is currently being throttled, or nothing if no class is throttled. These are instantaneous values taken at the end of the sample window, and do not necessarily reflect the values at other times in the window. To get SFI wait time statistics on a per-process basis use --show-process-wait-times. KNOWN ISSUES Changes in system time and sleep/wake can cause minor inaccuracies in reported CPU time. Darwin 5/1/12 Darwin
|
powermetrics
|
powermetrics [-i sample_interval_ms] [-r order] [-t wakeup_cost] [-o output_file] [-n sample_count]
| null | null |
encguess5.34
|
The encoding identification is done by checking one encoding type at a time until all but the right type are eliminated. The set of encoding types to try is defined by the -s parameter and defaults to ascii, utf8 and UTF-16/32 with BOM. This can be overridden by passing one or more encoding types via the -s parameter. If you need to pass in multiple suspect encoding types, use a quoted string with a space separating each value. SEE ALSO Encode::Guess, Encode::Detect LICENSE AND COPYRIGHT Copyright 2015 Michael LaGrasta and Dan Kogai. This program is free software; you can redistribute it and/or modify it under the terms of the Artistic License (2.0). You may obtain a copy of the full license at: <http://www.perlfoundation.org/artistic_license_2_0> perl v5.34.1 2024-04-13 ENCGUESS(1)
|
encguess - guess character encodings of files VERSION $Id: encguess,v 0.3 2020/12/02 01:28:17 dankogai Exp dankogai $
|
encguess [switches] filename... SWITCHES -h show this message and exit. -s specify a list of "suspect encoding types" to test, separated by either ":" or "," -S output a list of all acceptable encoding types that can be used with the -s param -u suppress display of unidentified types EXAMPLES: • Guess encoding of a file named "test.txt", using only the default suspect types. encguess test.txt • Guess the encoding type of a file named "test.txt", using the suspect types "euc-jp,shiftjis,7bit-jis". encguess -s euc-jp,shiftjis,7bit-jis test.txt encguess -s euc-jp:shiftjis:7bit-jis test.txt • Guess the encoding type of several files, do not display results for unidentified files. encguess -us euc-jp,shiftjis,7bit-jis test*.txt
| null | null |
moose-outdated
| null | null | null | null | null |
net-server
|
The net-server program gives a simple way to test out code and try port connection parameters. Though the running server can be robust enough for full-time use, it is anticipated that this binary will just be used for basic testing of net-server ports, acting as a simple echo server, or for running development scripts as CGI.
|
net-server - Base Net::Server starting module
|
net-server [base type] [net server arguments] net-server PreFork ipv '*' net-server HTTP net-server HTTP app foo.cgi net-server HTTP app foo.cgi app /=bar.cgi net-server HTTP port 8080 port 8443/ssl ipv '*' server_type PreFork --SSL_key_file=my.key --SSL_cert_file=my.crt access_log_file STDERR
|
"base type" The very first argument may be a Net::Server flavor. This is given as shorthand for writing out server_type "ServerFlavor". Additionally, this allows types such as HTTP and PSGI, which are not true Net::Server base types, to subclass other server types via an additional server_type argument. net-server PreFork net-server HTTP # becomes an HTTP server in the Fork flavor net-server HTTP server_type PreFork # preforking HTTP server "port" Port to bind upon. Default is 80 if running an HTTP server as root, 8080 if running an HTTP server as non-root, or 20203 otherwise. Multiple values can be given for binding to multiple ports. All of the methods for specifying port attributes enumerated in Net::Server and Net::Server::Proto are available here. net-server port 20201 net-server port 20202 net-server port 20203/IPv6 "host" Host to bind to. Default is *. Will bind to an IPv4 socket if an IPv4 address is given. Will bind to an IPv6 socket if an IPv6 address is given (requires installation of IO::Socket::INET6). If a hostname is given and "ipv" is still set to 4, an IPv4 socket will be created. If a hostname is given and "ipv" is set to 6, an IPv6 socket will be created. If a hostname is given and "ipv" is set to * (default), a lookup will be performed and any available IPv4 or IPv6 addresses will be bound. The "ipv" parameter can be set directly, or passed along in the port, or additionally can be passed as part of the hostname. net-server host localhost net-server host localhost/IPv4 There are many more options available. Please see the Net::Server documentation. AUTHOR Paul Seamons <paul@seamons.com> LICENSE This package may be distributed under the terms of either the GNU General Public License or the Perl Artistic License perl v5.34.0 2017-08-10 NET-SERVER(1)
| null |
cvmkdir
|
The cvmkdir command creates a Xsan File System directory and attaches an affinity parameter (key) to it. If no option is used and the directory exists, the cvmkdir command displays the assigned affinity. Once an affinity is assigned to a directory, it cannot be altered. If no key is specified and the directory does not exist, the directory will not be created. An affinity may be dissociated from a directory by specifying an empty key (e.g., ""). See snfs_config(5) for details about affinities to storage pools.
|
cvmkdir - Create an Xsan Directory with an Affinity
|
cvmkdir [-k key] dirname
|
-k key Specify to the file system what affinity (key) to associate with the directory. All new sub-directories and files created beneath this directory inherit its affinity. If the affinity is changed or removed only files or directories created after the change are affected. dirname The path of the directory to be created. SEE ALSO cvmkfile(1), cvaffinity(1), snfs_config(5) Xsan File System June 2014 CVMKDIR(1)
| null |
ldapmodify
|
ldapmodify is a shell-accessible interface to the ldap_add_ext(3), ldap_modify_ext(3), ldap_delete_ext(3) and ldap_rename(3) library calls. ldapadd is implemented as a hard link to the ldapmodify tool. When invoked as ldapadd the -a (add new entry) flag is turned on automatically. ldapmodify opens a connection to an LDAP server, binds, and modifies or adds entries. The entry information is read from standard input or from a file through the use of the -f option.
|
ldapmodify, ldapadd - LDAP modify entry and LDAP add entry tools
|
ldapmodify [-a] [-c] [-S file] [-n] [-v] [-M[M]] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-P {2|3}] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] [-f file] ldapadd [-c] [-S file] [-n] [-v] [-M[M]] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-P {2|3}] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] [-f file]
|
-a Add new entries. The default for ldapmodify is to modify existing entries. If invoked as ldapadd, this flag is always set. -c Continuous operation mode. Errors are reported, but ldapmodify will continue with modifications. The default is to exit after reporting an error. -S file Records that were skipped due to an error are written to file, and the error message returned by the server is added as a comment. Most useful in conjunction with -c. -n Show what would be done, but don't actually modify entries. Useful for debugging in conjunction with -v. -v Use verbose mode, with many diagnostics written to standard output. -M[M] Enable manage DSA IT control. -MM makes control critical. -d debuglevel Set the LDAP debugging level to debuglevel. ldapmodify must be compiled with LDAP_DEBUG defined for this option to have any effect. -f file Read the entry modification information from file instead of from standard input. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password for simple authentication. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -H ldapuri Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected. -h ldaphost Specify an alternate host on which the ldap server is running. Deprecated in favor of -H. -p ldapport Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H. -P {2|3} Specify the LDAP protocol version to use. -O security-properties Specify SASL security properties. -e [!]ext[=extparam] -E [!]ext[=extparam] Specify general extensions with -e and search extensions with -E. 
'!' indicates criticality. General extensions: [!]assert=<filter> (an RFC 4515 Filter) [!]authzid=<authzid> ("dn:<dn>" or "u:<user>") [!]manageDSAit [!]noop ppolicy [!]postread[=<attrs>] (a comma-separated attribute list) [!]preread[=<attrs>] (a comma-separated attribute list) abandon, cancel (SIGINT sends abandon/cancel; not really controls) Search extensions: [!]domainScope (domain scope) [!]mv=<filter> (matched values filter) [!]pr=<size>[/prompt|noprompt] (paged results/prompt) [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...] (server side sorting) [!]subentries[=true|false] (subentries) [!]sync=ro[/<cookie>] (LDAP Sync refreshOnly) rp[/<cookie>][/<slimit>] (LDAP Sync refreshAndPersist) -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -R realm Specify the realm of authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful. INPUT FORMAT The contents of file (or standard input if no -f flag is given on the command line) must conform to the format defined in ldif(5) (LDIF as defined in RFC 2849).
|
Assuming that the file /tmp/entrymods exists and has the contents: dn: cn=Modify Me,dc=example,dc=com changetype: modify replace: mail mail: modme@example.com - add: title title: Grand Poobah - add: jpegPhoto jpegPhoto:< file:///tmp/modme.jpeg - delete: description - the command: ldapmodify -f /tmp/entrymods will replace the contents of the "Modify Me" entry's mail attribute with the value "modme@example.com", add a title of "Grand Poobah", and the contents of the file "/tmp/modme.jpeg" as a jpegPhoto, and completely remove the description attribute. Assuming that the file /tmp/newentry exists and has the contents: dn: cn=Barbara Jensen,dc=example,dc=com objectClass: person cn: Barbara Jensen cn: Babs Jensen sn: Jensen title: the world's most famous mythical manager mail: bjensen@example.com uid: bjensen the command: ldapadd -f /tmp/newentry will add a new entry for Babs Jensen, using the values from the file /tmp/newentry. Assuming that the file /tmp/entrymods exists and has the contents: dn: cn=Barbara Jensen,dc=example,dc=com changetype: delete the command: ldapmodify -f /tmp/entrymods will remove Babs Jensen's entry. DIAGNOSTICS Exit status is zero if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error. SEE ALSO ldapadd(1), ldapdelete(1), ldapmodrdn(1), ldapsearch(1), ldap.conf(5), ldap(3), ldap_add_ext(3), ldap_delete_ext(3), ldap_modify_ext(3), ldap_modrdn_ext(3), ldif(5), slapd.replog(5) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPMODIFY(1)
|
iofile.d
|
This prints the total I/O wait times for each filename by process. This can help determine why an application is performing poorly by identifying which files it is waiting on, and the total wait times. Both disk and NFS I/O are measured. Since this uses DTrace, only users with root privileges can run this command.
|
iofile.d - I/O wait time by file and process. Uses DTrace.
|
iofile.d
| null |
Sample until Ctrl-C is hit then print report, # iofile.d FIELDS PID process ID CMD process name TIME total wait time for disk events, us FILE file pathname BASED ON /usr/demo/dtrace/iocpu.d DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT iofile.d will sample until Ctrl-C is hit. SEE ALSO iosnoop(1M), dtrace(1M) version 0.70 July 24, 2005 iofile.d(1m)
|
join
|
The join utility performs an “equality join” on the specified files and writes the result to the standard output. The “join field” is the field in each file by which the files are compared. The first field in each line is used by default. There is one line in the output for each pair of lines in file1 and file2 which have identical join fields. Each output line consists of the join field, the remaining fields from file1 and then the remaining fields from file2. The default field separators are tab and space characters. In this case, multiple tabs and spaces count as a single field separator, and leading tabs and spaces are ignored. The default output field separator is a single space character. Many of the options use file and field numbers. Both file numbers and field numbers are 1 based, i.e., the first file on the command line is file number 1 and the first field is field number 1. The following options are available: -a file_number In addition to the default output, produce a line for each unpairable line in file file_number. -e string Replace empty output fields with string. -o list The -o option specifies the fields that will be output from each file for each line with matching join fields. Each element of list has either the form file_number.field, where file_number is a file number and field is a field number, or the form ‘0’ (zero), representing the join field. The elements of list must be either comma (‘,’) or whitespace separated. (The latter requires quoting to protect it from the shell, or, a simpler approach is to use multiple -o options.) -t char Use character char as a field delimiter for both input and output. Every occurrence of char in a line is significant. -v file_number Do not display the default output, but display a line for each unpairable line in file file_number. The options -v 1 and -v 2 may be specified at the same time. -1 field Join on the field'th field of file1. -2 field Join on the field'th field of file2. 
When the default field delimiter characters are used, the files to be joined should be ordered in the collating sequence of sort(1), using the -b option, on the fields on which they are to be joined, otherwise join may not report all field matches. When the field delimiter characters are specified by the -t option, the collating sequence should be the same as sort(1) without the -b option. If one of the arguments file1 or file2 is ‘-’, the standard input is used. EXIT STATUS The join utility exits 0 on success, and >0 if an error occurs.
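The sorting requirement above can be seen with a quick sketch (the file names and contents are made up for illustration): sorting both inputs with sort -b before joining guarantees that every pair of matching join fields is reported.

```shell
# Two unsorted files keyed on field 1.
printf 'b 2\na 1\n' > keys.txt
printf 'b beta\na alpha\n' > names.txt

# With the default field delimiters, join expects its inputs in
# sort -b order on the join field; sort first, then join.
sort -b keys.txt > keys.sorted
sort -b names.txt > names.sorted
join keys.sorted names.sorted
```

This prints one merged line per key ("a 1 alpha" and "b 2 beta"); running join on the unsorted files instead may silently drop matches.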
|
join – relational database operator
|
join [-a file_number | -v file_number] [-e string] [-o list] [-t char] [-1 field] [-2 field] file1 file2
| null |
Assuming a file named nobel_laureates.txt with information about some of the first Nobel Peace Prize laureates: 1901,Jean Henri Dunant,M 1901,Frederic Passy,M 1902,Elie Ducommun,M 1905,Baroness Bertha Sophie Felicita Von Suttner,F 1910,Permanent International Peace Bureau, and a second file nobel_nationalities.txt with their nationalities: Jean Henri Dunant,Switzerland Frederic Passy,France Elie Ducommun,Switzerland Baroness Bertha Sophie Felicita Von Suttner Join the two files using the second column from first file and the default first column from second file specifying a custom field delimiter: $ join -t, -1 2 nobel_laureates.txt nobel_nationalities.txt Jean Henri Dunant,1901,M,Switzerland Frederic Passy,1901,M,France Elie Ducommun,1902,M,Switzerland Baroness Bertha Sophie Felicita Von Suttner,1905,F Show only the year and the nationality of the laureate using ‘<<NULL>>’ to replace empty fields: $ join -e "<<NULL>>" -t, -1 2 -o "1.1 2.2" nobel_laureates.txt nobel_nationalities.txt 1901,Switzerland 1901,France 1902,Switzerland 1905,<<NULL>> Show only lines from first file which do not have a match in second file: $ join -v1 -t, -1 2 nobel_laureates.txt nobel_nationalities.txt Permanent International Peace Bureau,1910, Assuming a file named capitals.txt with the following content: Belgium,Brussels France,Paris Italy,Rome Switzerland Show the name and capital of the country where the laureate was born. This example uses nobel_nationalities.txt as a bridge but does not show any information from that file. Also see the note about sort(1) above to understand why we need to sort the intermediate result. 
$ join -t, -1 2 -o 1.2 2.2 nobel_laureates.txt nobel_nationalities.txt | \ sort -k2 -t, | join -t, -e "<<NULL>>" -1 2 -o 1.1 2.2 - capitals.txt Elie Ducommun,<<NULL>> Jean Henri Dunant,<<NULL>> COMPATIBILITY For compatibility with historic versions of join, the following options are available: -a In addition to the default output, produce a line for each unpairable line in both file1 and file2. -j1 field Join on the field'th field of file1. -j2 field Join on the field'th field of file2. -j field Join on the field'th field of both file1 and file2. -o list ... Historical implementations of join permitted multiple arguments to the -o option. These arguments were of the form file_number.field_number as described for the current -o option. This has obvious difficulties in the presence of files named 1.2. These options are available only so historic shell scripts do not require modification and should not be used. SEE ALSO awk(1), comm(1), paste(1), sort(1), uniq(1) STANDARDS The join command conforms to IEEE Std 1003.1-2001 (“POSIX.1”). macOS 14.5 June 20, 2020 macOS 14.5
|
SafeEjectGPU
|
The SafeEjectGPU command is used to prepare for safe eject/disconnect of eGPUs from the system. This involves interacting with apps to migrate off of ejecting eGPU(s), and triggering the eject itself. This tool can also be used to view what GPUs are attached to the system, their eject status, and what apps hold references to each. A list of commands and their descriptions - note that commands affecting state are capitalized, and that multiple (including repeated) commands can occupy the same command line: gpus Lists attributes of GPUs currently attached to system (gpuid, vendor/model, flags) gpuid <gpuid> Specifies which GPU(s) subsequent commands apply to. The default (0x0000) means all eGPUs. See output of gpus command for valid <gpuid> values (of the form 0x7005) to use. gpuids <gpuid1>,... Comma-separated list of GPU(s) for the app to select from. See output of gpus command for valid <gpuid> values (of the form 0x7005) to use. apps Lists apps holding references to specified GPU - and app attributes/properties like PID, RPID, USER, PROCESS, APIS (Metal, GL/CL, GVA), BUNDLE_IDENTIFIER, PATH, GPUEjectPolicy and GPUSelectionPolicy where specified. status Shows eject state of specified eGPU(s) (Present, Initiated, Finalized). Eject Performs the full Eject sequence ( Initiate + Relaunch + Finalize ) of specified GPU(s). Initiate Initiates eject of specified eGPU(s). These eGPUs are temporarily hidden from API instantiations. Relaunch Interacts with apps that hold references to specified eGPU(s) - to facilitate migration to remaining GPUs. Finalize Finalizes eject of specified eGPU(s) - must be physically unplugged before they can be used again. Cancel Cancels initiated eject of specified GPU(s) - instead of Finalized. RelaunchPID <PID> Apply relaunch stimulus to one particular PID - for app relaunch stimulus testing. 
RelaunchPIDOnGPU <PID> Apply relaunch stimulus to one particular PID with a limited set of GPUs to select from; use gpuids to limit the GPUs seen by an app. LaunchOnGPU <path> Launch an application from the given bundle path with a limited set of GPUs; use gpuids to limit the GPUs seen by an app. If an instance of the app is already running, this command has no effect.
|
SafeEjectGPU – Facilitate safe eject/disconnect of eGPU(s) from system
|
SafeEjectGPU [gpuid <gpuid>] [gpuids <gpuid1>,<gpuid2>,...] [gpus] [apps] [status] [Eject] [Initiate] [Relaunch] [Finalize] [Cancel] [RelaunchPID <PID>] ...
| null |
$ SafeEjectGPU gpus List eGPUs. Output is useful for cut-n-paste of example specified gpuid values used below $ SafeEjectGPU gpus apps status List all eGPUs and Apps on all eGPUs along with eject status of all eGPUs $ SafeEjectGPU Eject Perform full Eject sequence on all eGPUs $ SafeEjectGPU gpuid 0x7005 Eject Perform full Eject sequence on specified eGPU $ SafeEjectGPU gpus apps gpuid 0x7153 apps Lists all eGPUs and apps on all eGPUs and on integrated GPU as well $ SafeEjectGPU Initiate RelaunchPID 12345 Cancel Hide eGPUs and send relaunch stimulus to PID without doing full eject $ SafeEjectGPU gpuids 0x7005,0x7153 RelaunchPIDOnGPU <pid> Limits GPU selection for PID to either eGPU or Integrated GPU on relaunch $ SafeEjectGPU gpuids 0x7005 LaunchOnGPU /Applications/Calculator.app Launches calculator app on specified eGPU PLIST PROPERTIES The following properties are generally inferred. Some values can be specified in the app's Info.plist. They affect eGPU eject and API selection behaviors. Generally, these properties won't need to be specified: GPUEjectPolicy Inferred/Settable GPUEjectPolicy values for dealing with apps that need to drop references to ejecting eGPU. Established in the app bundle's Info.plist. Possible values: relaunch Send AppKit quit-with-save event followed by open-with-restore (relaunch app using alternate GPU(s)). wait Just wait for GPU references to drop (without sending events or signals). kill Use sigKill to force app exit (for apps that will relaunch via launchd - using alternate GPU(s)). ignore Ignore - necessary for some internal GPU/display components - working to eliminate its use. Inferred-Only GPUEjectPolicy values (you can't specify these values, but you'll see them as defaulted/inferred policies in apps output): wrelaunch Wait momentarily for processing of Metal GPU change notifications before resorting to relaunch (as necessary). jrelaunch Just relaunch without waiting (since OpenGL/OpenCL are in use). 
rwait When a process is subordinate to another, "responsible", process (see RPID column), Eject actions apply to the responsible process, who in turn deals with subordinates to eliminate their ejecting eGPU references. GPUSelectionPolicy Settable values that affect instantiation of Metal and OpenGL/CL contexts (wrt eGPU use). Established in app bundle's Info.plist. Possible values: avoidRemovable Avoid creation of MTLCommandQueues, and OpenGL/CL contexts on eGPUs. preferRemovable Prefer creation of MTLCommandQueues, and OpenGL/CL contexts on eGPUs. SEE ALSO plist(5) sudo(8) launchd(8) HISTORY The command line SafeEjectGPU tool first appeared in the 10.13.4 release of Mac OS X. Mac OS X January 22, 2018 Mac OS X
|
snmpdelta
|
snmpdelta will monitor the specified integer valued OIDs, and report changes over time. AGENT identifies a target SNMP agent, which is instrumented to monitor the given objects. At its simplest, the AGENT specification will consist of a hostname or an IPv4 address. In this situation, the command will attempt communication with the agent, using UDP/IPv4 to port 161 of the given target host. See snmpcmd(1) for a full list of the possible formats for AGENT. OID is an object identifier which uniquely identifies the object type within a MIB. Multiple OIDs can be specified on a single snmpdelta command.
|
snmpdelta - Monitor delta differences in SNMP Counter values
|
snmpdelta [ COMMON OPTIONS ] [ -Cf ] [ -Ct ] [ -Cs ] [ -CS ] [ -Cm ] [ -CF configfile ] [ -Cl ] [ -Cp period ] [ -CP peaks ] [ -Ck ] [ -CT ] AGENT OID [ OID ... ]
|
COMMON OPTIONS Please see snmpcmd(1) for a list of possible values for COMMON OPTIONS as well as their descriptions. -Cf Don't fix errors and retry the request. Without this option, if multiple oids have been specified for a single request and if the request for one or more of the oids fails, snmpdelta will retry the request so that data for oids apart from the ones that failed will still be returned. Specifying -Cf tells snmpdelta not to retry a request, even if there are multiple oids specified. -Ct Flag will determine time interval from the monitored entity. -Cs Flag will display a timestamp. -CS Generates a "sum count" in addition to the individual instance counts. The "sum count" is the total of all the individual deltas for each time period. -Cm Prints the max value ever attained. -CF configfile Tells snmpdelta to read its configuration from the specified file. This option allows the input to be set up in advance rather than having to be specified on the command line. -Cl Tells snmpdelta to write its configuration to files whose names correspond to the MIB instances monitored. For example, snmpdelta -Cl localhost ifInOctets.1 will create a file "localhost-ifInOctets.1". -Cp period Specifies the number of seconds between polling periods. Polling constitutes sending a request to the agent. The default polling period is one second. -CP peaks Specifies the reporting period in number of polling periods. If this option is specified, snmpdelta polls the agent peaks number of times before reporting the results. The result reported includes the average value over the reporting period. In addition, the highest polled value within the reporting period is shown. -Ck When the polling period (-Cp) is an increment of 60 seconds and the timestamp is displayed in the output (-Cs), then the default display shows the timestamp in the format hh:mm mm/dd. This option causes the timestamp format to be hh:mm:ss mm/dd. -CT Makes snmpdelta print its output in tabular form. 
-Cv vars/pkt Specifies the maximum number of oids allowed to be packaged in a single PDU. Multiple PDUs can be created in a single request. The default value of variables per packet is 60. This option is useful if a request response results in an error because the packet is too big. Note that snmpdelta REQUIRES an argument specifying the agent to query and at least one OID argument, as described in the snmpcmd(1) manual page.
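Conceptually, snmpdelta simply differences successive polls of a monotonically increasing counter. That arithmetic can be sketched with awk over a made-up series of counter readings (the numbers below are invented, not real SNMP data):

```shell
# Invented counter readings taken one polling period apart; each output
# line is the per-period delta, as snmpdelta would report it.
printf '1000\n1158\n1342\n1500\n' |
awk 'NR > 1 { print "delta/1 sec:", $1 - prev } { prev = $1 }'
```

Four readings produce three deltas (158, 184, 158), mirroring the per-second figures shown in the real snmpdelta transcripts in the EXAMPLES section.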
|
$ snmpdelta -c public -v 1 -Cs localhost IF-MIB::ifInUcastPkts.3 IF-MIB::ifOutUcastPkts.3 [20:15:43 6/14] ifInUcastPkts.3 /1 sec: 158 [20:15:43 6/14] ifOutUcastPkts.3 /1 sec: 158 [20:15:44 6/14] ifInUcastPkts.3 /1 sec: 184 [20:15:44 6/14] ifOutUcastPkts.3 /1 sec: 184 [20:15:45 6/14] ifInUcastPkts.3 /1 sec: 184 [20:15:45 6/14] ifOutUcastPkts.3 /1 sec: 184 [20:15:46 6/14] ifInUcastPkts.3 /1 sec: 158 [20:15:46 6/14] ifOutUcastPkts.3 /1 sec: 158 [20:15:47 6/14] ifInUcastPkts.3 /1 sec: 184 [20:15:47 6/14] ifOutUcastPkts.3 /1 sec: 184 [20:15:48 6/14] ifInUcastPkts.3 /1 sec: 184 [20:15:48 6/14] ifOutUcastPkts.3 /1 sec: 184 [20:15:49 6/14] ifInUcastPkts.3 /1 sec: 158 [20:15:49 6/14] ifOutUcastPkts.3 /1 sec: 158 ^C $ snmpdelta -c public -v 1 -Cs -CT localhost IF-MIB::ifInUcastPkts.3 IF-MIB::ifOutUcastPkts.3 localhost ifInUcastPkts.3 ifOutUcastPkts.3 [20:15:59 6/14] 184.00 184.00 [20:16:00 6/14] 158.00 158.00 [20:16:01 6/14] 184.00 184.00 [20:16:02 6/14] 184.00 184.00 [20:16:03 6/14] 158.00 158.00 [20:16:04 6/14] 184.00 184.00 [20:16:05 6/14] 184.00 184.00 [20:16:06 6/14] 158.00 158.00 ^C The following example uses a number of options. Since the -Cl option is specified, the output is sent to a file and not to the screen. $ snmpdelta -c public -v 1 -Ct -Cs -CS -Cm -Cl -Cp 60 -CP 60 interlink.sw.net.cmu.edu .1.3.6.1.2.1.2.2.1.16.3 .1.3.6.1.2.1.2.2.1.16.4 SEE ALSO snmpcmd(1), variables(5). V5.6.2.1 25 Jul 2003 SNMPDELTA(1)
|
ssh-keyscan
|
ssh-keyscan is a utility for gathering the public SSH host keys of a number of hosts. It was designed to aid in building and verifying ssh_known_hosts files, the format of which is documented in sshd(8). ssh-keyscan provides a minimal interface suitable for use by shell and perl scripts. ssh-keyscan uses non-blocking socket I/O to contact as many hosts as possible in parallel, so it is very efficient. The keys from a domain of 1,000 hosts can be collected in tens of seconds, even when some of those hosts are down or do not run sshd(8). For scanning, one does not need login access to the machines that are being scanned, nor does the scanning process involve any encryption. Hosts to be scanned may be specified by hostname, address or by CIDR network range (e.g. 192.168.16/28). If a network range is specified, then all addresses in that range will be scanned. The options are as follows: -4 Force ssh-keyscan to use IPv4 addresses only. -6 Force ssh-keyscan to use IPv6 addresses only. -c Request certificates from target hosts instead of plain keys. -D Print keys found as SSHFP DNS records. The default is to print keys in a format usable as a ssh(1) known_hosts file. -f file Read hosts or “addrlist namelist” pairs from file, one per line. If ‘-’ is supplied instead of a filename, ssh-keyscan will read from the standard input. Names read from a file must start with an address, hostname or CIDR network range to be scanned. Addresses and hostnames may optionally be followed by comma- separated name or address aliases that will be copied to the output. For example: 192.168.11.0/24 10.20.1.1 happy.example.org 10.0.0.1,sad.example.org -H Hash all hostnames and addresses in the output. Hashed names may be used normally by ssh(1) and sshd(8), but they do not reveal identifying information should the file's contents be disclosed. -O option Specify a key/value option. 
At present, only a single option is supported: hashalg=algorithm Selects a hash algorithm to use when printing SSHFP records using the -D flag. Valid algorithms are “sha1” and “sha256”. The default is to print both. -p port Connect to port on the remote host. -T timeout Set the timeout for connection attempts. If timeout seconds have elapsed since a connection was initiated to a host or since the last time anything was read from that host, the connection is closed and the host in question considered unavailable. The default is 5 seconds. -t type Specify the type of the key to fetch from the scanned hosts. The possible values are “dsa”, “ecdsa”, “ed25519”, “ecdsa-sk”, “ed25519-sk”, or “rsa”. Multiple values may be specified by separating them with commas. The default is to fetch “rsa”, “ecdsa”, “ed25519”, “ecdsa-sk”, and “ed25519-sk” keys. -v Verbose mode: print debugging messages about progress. If an ssh_known_hosts file is constructed using ssh-keyscan without verifying the keys, users will be vulnerable to man in the middle attacks. On the other hand, if the security model allows such a risk, ssh-keyscan can help in the detection of tampered keyfiles or man in the middle attacks which have begun after the ssh_known_hosts file was created. FILES /etc/ssh/ssh_known_hosts
|
ssh-keyscan – gather SSH public keys from servers
|
ssh-keyscan [-46cDHv] [-f file] [-O option] [-p port] [-T timeout] [-t type] [host | addrlist namelist]
| null |
Print the RSA host key for machine hostname: $ ssh-keyscan -t rsa hostname Search a network range, printing all supported key types: $ ssh-keyscan 192.168.0.64/25 Find all hosts from the file ssh_hosts which have new or different keys from those in the sorted file ssh_known_hosts: $ ssh-keyscan -t rsa,dsa,ecdsa,ed25519 -f ssh_hosts | \ sort -u - ssh_known_hosts | diff ssh_known_hosts - SEE ALSO ssh(1), sshd(8) Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, 2006. AUTHORS David Mazieres <dm@lcs.mit.edu> wrote the initial version, and Wayne Davison <wayned@users.sourceforge.net> added support for protocol version 2. macOS 14.5 February 10, 2023 macOS 14.5
|
renice
|
The renice utility alters the scheduling priority of one or more running processes. The following target parameters are interpreted as process ID's (the default), process group ID's, user ID's or user names. The renice'ing of a process group causes all processes in the process group to have their scheduling priority altered. The renice'ing of a user causes all processes owned by the user to have their scheduling priority altered. The following options are available: -n Instead of changing the specified processes to the given priority, interpret the following argument as an increment to be applied to the current priority of each process. -g Interpret target parameters as process group ID's. -p Interpret target parameters as process ID's (the default). -u Interpret target parameters as user names or user ID's. Users other than the super-user may only alter the priority of processes they own, and can only monotonically increase their ``nice value'' within the range 0 to PRIO_MAX (20). (This prevents overriding administrative fiats.) The super-user may alter the priority of any process and set the priority to any value in the range PRIO_MIN (-20) to PRIO_MAX. Useful priorities are: 20 (the affected processes will run only when nothing else in the system wants to), 0 (the ``base'' scheduling priority), anything negative (to make things go very fast). FILES /etc/passwd to map user names to user ID's
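A short sketch of the -n form against a process ID (the background sleep stands in for a real job):

```shell
# Start a throwaway background job, raise its nice value by 4 (an
# unprivileged user may only increase it), then clean up.
sleep 60 &
pid=$!
renice -n 4 -p "$pid"
kill "$pid"
```

With -u the same increment would instead apply to every process owned by the named users.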
|
renice – alter priority of running processes
|
renice priority [[-gpu] target] renice -n increment [[-gpu] target]
| null |
Change the priority of process ID's 987 and 32, and all processes owned by users daemon and root. renice +1 987 -u daemon root -p 32 SEE ALSO nice(1), rtprio(1), getpriority(2), setpriority(2) STANDARDS The renice utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”). HISTORY The renice utility appeared in 4.0BSD. BUGS Non super-users cannot increase scheduling priorities of their own processes, even if they were the ones that decreased the priorities in the first place. macOS 14.5 October 27, 2020 macOS 14.5
|
xxd
|
xxd creates a hex dump of a given file or standard input. It can also convert a hex dump back to its original binary form. Like uuencode(1) and uudecode(1) it allows the transmission of binary data in a `mail-safe' ASCII representation, but has the advantage of decoding to standard output. Moreover, it can be used to perform binary file patching.
|
xxd - make a hex dump or do the reverse.
|
xxd -h[elp] xxd [options] [infile [outfile]] xxd -r[evert] [options] [infile [outfile]]
|
If no infile is given, standard input is read. If infile is specified as a `-' character, then input is taken from standard input. If no outfile is given (or a `-' character is in its place), results are sent to standard output. Note that a "lazy" parser is used which does not check for more than the first option letter, unless the option is followed by a parameter. Spaces between a single option letter and its parameter are optional. Parameters to options can be specified in decimal, hexadecimal or octal notation. Thus -c8, -c 8, -c 010 and -cols 8 are all equivalent. -a | -autoskip Toggle autoskip: A single '*' replaces NUL-lines. Default off. -b | -bits Switch to bits (binary digits) dump, rather than hex dump. This option writes octets as eight digits "1"s and "0"s instead of a normal hexadecimal dump. Each line is preceded by a line number in hexadecimal and followed by an ASCII (or EBCDIC) representation. The command line switches -p, -i do not work with this mode. -c cols | -cols cols Format <cols> octets per line. Default 16 (-i: 12, -ps: 30, -b: 6). Max 256. No maximum for -ps. With -ps, 0 results in one long line of output. -C | -capitalize Capitalize variable names in C include file style, when using -i. -d show offset in decimal instead of hex. -E | -EBCDIC Change the character encoding in the righthand column from ASCII to EBCDIC. This does not change the hexadecimal representation. The option is meaningless in combinations with -r, -p or -i. -e Switch to little-endian hex dump. This option treats byte groups as words in little-endian byte order. The default grouping of 4 bytes may be changed using -g. This option only applies to the hex dump, leaving the ASCII (or EBCDIC) representation unchanged. The command line switches -r, -p, -i do not work with this mode. -g bytes | -groupsize bytes Separate the output of every <bytes> bytes (two hex characters or eight bit digits each) by a whitespace. Specify -g 0 to suppress grouping. 
<Bytes> defaults to 2 in normal mode, 4 in little-endian mode and 1 in bits mode. Grouping does not apply to PostScript or include style. -h | -help Print a summary of available commands and exit. No hex dumping is performed. -i | -include Output in C include file style. A complete static array definition is written (named after the input file), unless xxd reads from stdin. -l len | -len len Stop after writing <len> octets. -n name | -name name Override the variable name output when -i is used. The array is named name and the length is named name_len. -o offset Add <offset> to the displayed file position. -p | -ps | -postscript | -plain Output in PostScript continuous hex dump style. Also known as plain hex dump style. -r | -revert Reverse operation: convert (or patch) hex dump into binary. If not writing to stdout, xxd writes into its output file without truncating it. Use the combination -r -p to read plain hexadecimal dumps without line number information and without a particular column layout. Additional whitespace and line breaks are allowed anywhere. Use the combination -r -b to read a bits dump instead of a hex dump. -R when In the output the hex-value and the value are both colored with the same color depending on the hex-value. Mostly helping to differentiate printable and non-printable characters. when is never, always, or auto (default: auto). When the $NO_COLOR environment variable is set, colorization will be disabled. -seek offset When used after -r: revert with <offset> added to file positions found in hex dump. -s [+][-]seek Start at <seek> bytes abs. (or rel.) infile offset. + indicates that the seek is relative to the current stdin file position (meaningless when not reading from stdin). - indicates that the seek should be that many characters from the end of the input (or if combined with +: before the current stdin file position). Without -s option, xxd starts at the current file position. -u Use upper-case hex letters. Default is lower-case. 
-v | -version Show version string. CAVEATS xxd -r has some built-in magic while evaluating line number information. If the output file is seekable, then the line numbers at the start of each hex dump line may be out of order, lines may be missing, or overlapping. In these cases xxd will lseek(2) to the next position. If the output file is not seekable, only gaps are allowed, which will be filled by null-bytes. xxd -r never generates parse errors. Garbage is silently skipped. When editing hex dumps, please note that xxd -r skips everything on the input line after reading enough columns of hexadecimal data (see option -c). This also means that changes to the printable ASCII (or EBCDIC) columns are always ignored. Reverting a plain (or PostScript) style hex dump with xxd -r -p does not depend on the correct number of columns. Here, anything that looks like a pair of hex digits is interpreted. Note the difference between % xxd -i file and % xxd -i < file xxd -s +seek may be different from xxd -s seek, as lseek(2) is used to "rewind" input. A '+' makes a difference if the input source is stdin, and if stdin's file position is not at the start of the file by the time xxd is started and given its input. The following examples may help to clarify (or further confuse!): Rewind stdin before reading; needed because the `cat' has already read to the end of stdin. % sh -c "cat > plain_copy; xxd -s 0 > hex_copy" < file Hex dump from file position 0x480 (=1024+128) onwards. The `+' sign means "relative to the current position", thus the `128' adds to the 1k where dd left off. % sh -c "dd of=plain_snippet bs=1k count=1; xxd -s +128 > hex_snippet" < file Hex dump from file position 0x100 (=1024-768) onwards. % sh -c "dd of=plain_snippet bs=1k count=1; xxd -s +-768 > hex_snippet" < file However, this is a rare situation and the use of `+' is rarely needed. The author prefers to monitor the effect of xxd with strace(1) or truss(1), whenever -s is used.
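The core options above can be sketched as a round trip (guarded, since xxd ships with Vim and may be absent on some systems):

```shell
printf 'hello' > sample.bin
if command -v xxd >/dev/null 2>&1; then
    xxd sample.bin > sample.hex     # canonical hex dump
    xxd -r sample.hex > sample.out  # revert the dump to binary
    cmp sample.bin sample.out       # the round trip is lossless
    plain=$(xxd -p sample.bin)      # plain/PostScript style, no offsets
else
    plain=68656c6c6f                # what xxd -p prints for "hello"
fi
echo "$plain"
```

The plain dump is what `xxd -r -p` expects back: hex digit pairs with no offsets or column layout.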
|
Print everything but the first three lines (hex 0x30 bytes) of file. % xxd -s 0x30 file Print 3 lines (hex 0x30 bytes) from the end of file. % xxd -s -0x30 file Print 120 bytes as a continuous hex dump with 20 octets per line. % xxd -l 120 -ps -c 20 xxd.1 2e54482058584420312022417567757374203139 39362220224d616e75616c207061676520666f72 20787864220a2e5c220a2e5c222032317374204d 617920313939360a2e5c22204d616e2070616765 20617574686f723a0a2e5c2220202020546f6e79 204e7567656e74203c746f6e79407363746e7567 Hex dump the first 120 bytes of this man page with 12 octets per line. % xxd -l 120 -c 12 xxd.1 0000000: 2e54 4820 5858 4420 3120 2241 .TH XXD 1 "A 000000c: 7567 7573 7420 3139 3936 2220 ugust 1996" 0000018: 224d 616e 7561 6c20 7061 6765 "Manual page 0000024: 2066 6f72 2078 7864 220a 2e5c for xxd"..\ 0000030: 220a 2e5c 2220 3231 7374 204d "..\" 21st M 000003c: 6179 2031 3939 360a 2e5c 2220 ay 1996..\" 0000048: 4d61 6e20 7061 6765 2061 7574 Man page aut 0000054: 686f 723a 0a2e 5c22 2020 2020 hor:..\" 0000060: 546f 6e79 204e 7567 656e 7420 Tony Nugent 000006c: 3c74 6f6e 7940 7363 746e 7567 <tony@sctnug Display just the date from the file xxd.1 % xxd -s 0x36 -l 13 -c 13 xxd.1 0000036: 3231 7374 204d 6179 2031 3939 36 21st May 1996 Copy input_file to output_file and prepend 100 bytes of value 0x00. % xxd input_file | xxd -r -s 100 > output_file Patch the date in the file xxd.1 % echo "0000037: 3574 68" | xxd -r - xxd.1 % xxd -s 0x36 -l 13 -c 13 xxd.1 0000036: 3235 7468 204d 6179 2031 3939 36 25th May 1996 Create a 65537 byte file with all bytes 0x00, except for the last one which is 'A' (hex 0x41). % echo "010000: 41" | xxd -r > file Hex dump this file with autoskip. % xxd -a -c 12 file 0000000: 0000 0000 0000 0000 0000 0000 ............ * 000fffc: 0000 0000 41 ....A Create a 1 byte file containing a single 'A' character. The number after '-r -s' adds to the line numbers found in the file; in effect, the leading bytes are suppressed. 
% echo "010000: 41" | xxd -r -s -0x10000 > file Use xxd as a filter within an editor such as vim(1) to hex dump a region marked between `a' and `z'. :'a,'z!xxd Use xxd as a filter within an editor such as vim(1) to recover a binary hex dump marked between `a' and `z'. :'a,'z!xxd -r Use xxd as a filter within an editor such as vim(1) to recover one line of a hex dump. Move the cursor over the line and type: !!xxd -r Read single characters from a serial line % xxd -c1 < /dev/term/b & % stty < /dev/term/b -echo -opost -isig -icanon min 1 % echo -n foo > /dev/term/b RETURN VALUES The following error values are returned: 0 no errors encountered. -1 operation not supported (xxd -r -i still impossible). 1 error while parsing options. 2 problems with input file. 3 problems with output file. 4,5 desired seek position is unreachable. SEE ALSO uuencode(1), uudecode(1), patch(1) WARNINGS The tool's weirdness matches its creator's brain. Use entirely at your own risk. Copy files. Trace it. Become a wizard. VERSION This manual page documents xxd version 1.7 AUTHOR (c) 1990-1997 by Juergen Weigert <jnweiger@informatik.uni-erlangen.de> Distribute freely and credit me, make money and share with me, lose money and don't ask me. Manual page started by Tony Nugent <tony@sctnugen.ppp.gu.edu.au> <T.Nugent@sct.gu.edu.au> Small changes by Bram Moolenaar. Edited by Juergen Weigert. Manual page for xxd August 1996 XXD(1)
|
clangd
| null | null | null | null | null |
scandeps.pl
|
scandeps.pl is a simple-minded utility that prints out the "PREREQ_PM" section needed by modules. If the option "-T" is specified and you have CPANPLUS installed, modules that are part of an earlier module's distribution will be denoted with "S"; modules without a distribution name on CPAN are marked with "?". Also, if the "-B" option is specified, modules that belong to a perl distribution on CPAN (and are thus not installable by "CPAN.pm" or "CPANPLUS.pm") are marked with "C". Finally, modules that have loadable shared object files (usually needing a compiler to install) are marked with "X"; with the "-V" flag, those files (and all other files found) will be listed before the main output. Additionally, all module files that the scanned code depends on but were not found (and thus not scanned recursively) are listed. These may include genuinely missing modules or false positives, that is, modules your code does not depend on (on this particular platform) but that were picked up by the heuristic anyway.
|
scandeps.pl - Scan file prerequisites
|
% scandeps.pl *.pm # Print PREREQ_PM section for *.pm % scandeps.pl -e 'STRING' # Scan a one-liner % scandeps.pl -B *.pm # Include core modules % scandeps.pl -V *.pm # Show autoload/shared/data files % scandeps.pl -R *.pm # Don't recurse % scandeps.pl -C CACHEFILE # use CACHEFILE to cache dependencies
|
-e, --eval=STRING Scan STRING as a string containing perl code. -c, --compile Compiles the code and inspects its %INC, in addition to static scanning. -x, --execute Executes the code and inspects its %INC, in addition to static scanning. You may use --xargs to specify @ARGV when executing the code. --xargs=STRING If -x is given, splits the "STRING" using the function "shellwords" from Text::ParseWords and passes the result as @ARGV when executing the code. -B, --bundle Include core modules in the output and the recursive search list. -R, --no-recurse Only show dependencies found in the files listed and do not recurse. -V, --verbose Verbose mode: Output all files found during the process; show dependencies between modules and availability. Additionally, warns of any missing dependencies. If you find missing dependencies that aren't really dependencies, you have probably found false positives. -C, --cachedeps=CACHEFILE Use CACHEFILE to speed up the scanning process by caching dependencies. Creates CACHEFILE if it does not exist yet. -T, --modtree Retrieves module information from CPAN if you have CPANPLUS installed. SEE ALSO Module::ScanDeps, CPANPLUS::Backend, PAR ACKNOWLEDGMENTS Simon Cozens, for suggesting this script to be written. AUTHORS Audrey Tang <autrijus@autrijus.org> COPYRIGHT Copyright 2003, 2004, 2005, 2006 by Audrey Tang <autrijus@autrijus.org>. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See <http://www.perl.com/perl/misc/Artistic.html> perl v5.34.0 2019-01-15 SCANDEPS(1)
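For instance (guarded, since scandeps.pl is installed with Module::ScanDeps and may be absent), caching plus a one-liner scan might look like this:

```shell
if command -v scandeps.pl >/dev/null 2>&1; then
    # Print the PREREQ_PM section for a one-liner, caching the results
    # in deps.cache so a second run over the same code is faster.
    scandeps.pl -C deps.cache -e 'use File::Spec; 1' || true
fi
```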
| null |
nettop
|
The nettop program displays a list of sockets or routes. The counts for network structures are updated periodically. While the program is running the following keys may be used: q Quit Up Arrow Scroll up Down Arrow Scroll down Right Arrow Scroll Right Left Arrow Scroll Left d Toggle delta output r Redraw screen x Toggle human readable numbers e Expand all c Collapse all h Bring up the help menu j Bring up the column selection menu. In this mode you can enable/disable columns and change their order. p Bring up the process selection menu. In this mode you can enable/disable processes for display. l Change to logging mode, redisplay the current data, and quit.
|
nettop – Display updated information about the network
|
nettop [-ncd] [-m <mode>] [-t <type>] [-s <seconds>] [-p <process-name|pid>] [-n] [-l <samples>] [-L <samples>] [-P] [-j|k|J <column-name[,column-name]...>]
|
A list of flags and their descriptions: -m <mode> Specify the mode. By default, nettop will monitor TCP and UDP sockets. The following modes are supported: tcp Only TCP sockets will be monitored udp Only UDP sockets will be monitored route Instead of sockets, the routing table will be monitored -t <type> Specify the type of interface. By default, all interfaces will be monitored. Multiple interface types may be specified. The following types are supported: wifi WiFi interfaces wired Wired interfaces loopback Loopback interfaces awdl Apple Wireless Direct Link interfaces expensive Interfaces marked as "expensive", for example via hotspot undefined Cases where the underlying socket is not associated with an interface external The combination of all defined non-loopback interfaces. -n Disable address to name resolution -c Less intensive use of the CPU - draws less often -d Delta mode -x Extended display of numbers instead of human readable suffixes such as MiB -P Display per-process summary only, skipping details of open connections. This is equivalent to selecting "Collapse All" in the interactive menu. -s <delay> Set the delay between updates to <delay> seconds. The default delay between updates is 1 second. -l <samples> Use logging mode and display <samples> samples, even if standard output is a terminal. 0 is treated as infinity. Rather than redisplaying, output is periodically printed in raw form. -L <samples> Use logging mode and display <samples> samples, even if standard output is a terminal. Output will be in comma-separated values (CSV) form. 0 is treated as infinity. Rather than redisplaying, output is periodically printed in raw form. -p <process-name|pid> Select a process for display. A numeric argument identifies a process by its pid. Alternatively a process name may be given, in which case all processes with that name will be displayed. 
The name must be an exact match for the name displayed by nettop, which may require that the name be truncated, for example launchd.develop instead of launchd.development. The option may be repeated to select multiple processes. -j <column name list> Specify a list of column headings to be included in the display. List items are separated by commas. For example, -j uuid,rtt_var -J <column name list> Specify a list of column headings that are to be the only ones included in the display. List items are separated by commas. For example, -J uuid,rtt_var. The ordering is currently as per nettop default, but may change in future revisions to match the order of the supplied column names. For future compatibility it is recommended that any names supplied here are given an order that matches the output. -k <column name list> Specify a list of column headings to be excluded from the display. List items are separated by commas. For example, -k rcvsize,rtt_avg Darwin 4/5/10 Darwin
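A hedged sketch (macOS only; nettop is a Darwin tool, hence the guard) combining the mode, interface-type and logging options:

```shell
if command -v nettop >/dev/null 2>&1; then
    # Log five CSV samples of TCP sockets on Wi-Fi interfaces,
    # per-process summary only, with name resolution disabled.
    nettop -m tcp -t wifi -n -P -L 5
fi
```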
| null |
ppdmerge
|
ppdmerge merges two or more PPD files into a single, multi-language PPD file. This program is deprecated and will be removed in a future release of CUPS.
|
ppdmerge - merge ppd files (deprecated)
|
ppdmerge [ -o output-ppd-file ] ppd-file ppd-file [ ... ppd-file ]
|
ppdmerge supports the following options: -o output-ppd-file Specifies the PPD file to create. If not specified, the merged PPD file is written to the standard output. If the output file already exists, it is silently overwritten. NOTES PPD files are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). ppdmerge does not check whether the merged PPD files are for the same device. Merging of different device PPDs will yield unpredictable results. SEE ALSO ppdc(1), ppdhtml(1), ppdi(1), ppdpo(1), ppdcfile(5), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS ppdmerge(1)
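A sketch with placeholder filenames (guarded; ppdmerge is part of CUPS and the input PPDs named here are hypothetical):

```shell
if command -v ppdmerge >/dev/null 2>&1; then
    # Merge English and German PPDs for the same device into one
    # multi-language file; the printer-*.ppd names are placeholders.
    ppdmerge -o printer-multilang.ppd printer-en.ppd printer-de.ppd || true
fi
```

Note that the output file is silently overwritten if it already exists.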
| null |
snmpconf
|
snmpconf is a simple Perl script that walks you through setting up a configuration file step by step. It should be fairly straightforward to use: merely run it and answer its questions. In its default mode of operation, it prompts the user with menus showing sections of the various configuration files it knows about. When the user selects a section, a sub-menu is shown listing the descriptions of the tokens that can be created in that section. When a description is selected, the user is prompted with questions that construct the configuration line in question. Finally, when the user quits the program, any configuration files that have been edited by the user are saved to the local directory, fully commented. A particularly useful option is the -g switch, which walks a user through a specific set of configuration questions. Run: snmpconf -g basic_setup for an example.
|
snmpconf - creates and modifies SNMP configuration files
|
snmpconf [OPTIONS] [fileToCreate] Start with: snmpconf -g basic_setup Or even just: snmpconf
|
-f Force overwriting existing files in the current directory without prompting the user if this is a desired thing to do. -i When finished, install the files into the location where the global system commands expect to find them. -p When finished, install the files into the user's home directory's .snmp subdirectory (where the applications will also search for configuration files). -I DIRECTORY When finished, install the files into the directory DIRECTORY. -a Don't ask any questions. Simply read in the various known configuration files and write them back out again. This has the effect of "auto-commenting" the configuration files for you. See the NEAT TRICKS section below. -r all|none Read in either all or none of the found configuration files. Normally snmpconf prompts you for which files you wish to read in. Reading in these configuration files will merge these files with the results of the questions that it asks of you. -R FILE,... Read in a specific list of configuration files. -g GROUPNAME Groups of configuration entries can be created that can be used to walk a user through a series of questions to create an initial configuration file. There are no menus to navigate, just a list of questions. Run: snmpconf -g basic_setup for a good example. -G List all the known groups. -c CONFIGDIR snmpconf uses a directory of configuration information to learn about the files and questions that it should be asking. This option tells snmpconf to use a different location for configuring itself. -q Run slightly more quietly. Since this is an interactive program, I don't recommend this option since it only removes information from the output that is designed to help you. -d Turn on lots of debugging output. -D Add even more debugging output in the form of Perl variable dumps. NEAT TRICKS snmpconf -g basic_setup Have I mentioned this command enough yet? It's designed to walk someone through an initial setup for the snmpd(8) daemon. Really, you should try it. 
snmpconf -R /usr/local/snmp/snmpd.conf -a -f snmpd.conf Automatically reads in an snmpd.conf file (for example) and adds comments to them describing what each token does. Try it. It's cool. NOTES snmpconf is actually a very generic utility that could be easily configured to help construct just about any kind of configuration file. Its default configuration set of files are SNMP based. SEE ALSO snmpd(8), snmp_config(5), snmp.conf(5), snmpd.conf(5) V5.6.2.1 25 Feb 2003 SNMPCONF(1)
| null |
xsltproc
|
xsltproc is a command line tool for applying XSLT stylesheets to XML documents. It is part of libxslt(3), the XSLT C library for GNOME. While it was developed as part of the GNOME project, it can operate independently of the GNOME desktop. xsltproc is invoked from the command line with the name of the stylesheet to be used followed by the name of the file or files to which the stylesheet is to be applied. It reads from the standard input if a filename is given as - . If a stylesheet is included in an XML document with a Stylesheet Processing Instruction, no stylesheet needs to be named on the command line. xsltproc will automatically detect the included stylesheet and use it. By default, output is to stdout. You can specify a file for output using the -o or --output option.
|
xsltproc - command line XSLT processor
|
xsltproc [[-V | --version] [-v | --verbose] [{-o | --output} {FILE | DIRECTORY}] | --timing | --repeat | --debug | --novalid | --noout | --maxdepth VALUE | --maxvars VALUE | --maxparserdepth VALUE | --huge | --seed-rand VALUE | --html | --encoding ENCODING | --param PARAMNAME PARAMVALUE | --stringparam PARAMNAME PARAMVALUE | --nonet | --path "PATH(S)" | --load-trace | --catalogs | --xinclude | --xincludestyle | [--profile | --norman] | --dumpextensions | --nowrite | --nomkdir | --writesubtree PATH | --nodtdattr] [STYLESHEET] {XML-FILE... | -}
|
xsltproc accepts the following options (in alphabetical order): --catalogs Use the SGML catalog specified in SGML_CATALOG_FILES to resolve the location of external entities. By default, xsltproc looks for the catalog specified in XML_CATALOG_FILES. If that is not specified, it uses /etc/xml/catalog. --debug Output an XML tree of the transformed document for debugging purposes. --dumpextensions Dumps the list of all registered extensions on stdout. --html The input document is an HTML file. --load-trace Display all the documents loaded during the processing to stderr. --maxdepth VALUE Adjust the maximum depth of the template stack before libxslt(3) concludes it is in an infinite loop. The default is 3000. --maxvars VALUE Maximum number of variables. The default is 15000. --maxparserdepth VALUE Maximum element nesting level of parsed XML documents. The default is 256. --huge Relax hardcoded limits of the XML parser by setting the XML_PARSE_HUGE parser option. --seed-rand VALUE Initialize pseudo random number generator with specific seed. --nodtdattr Do not apply default attributes from the document's DTD. --nomkdir Refuses to create directories. --nonet Do not use the Internet to fetch DTDs, entities or documents. --noout Do not output the result. --novalid Skip loading the document's DTD. --nowrite Refuses to write to any file or resource. -o or --output FILE | DIRECTORY Direct output to the given FILE. Using the option with a DIRECTORY directs the output files to the specified directory. This can be useful for multiple outputs (also known as "chunking") or manpage processing. Important The given directory must already exist. Note Make sure that FILE and DIRECTORY follow the “URI reference computation” as described in RFC 2396 and later. This means that, e.g., -o directory may not work, but -o directory/ will. --encoding ENCODING Specify the encoding of the input. 
--param PARAMNAME PARAMVALUE Pass a parameter of name PARAMNAME and value PARAMVALUE to the stylesheet. You may pass multiple name/value pairs up to a maximum of 32. If the value being passed is a string, you can use --stringparam instead, to avoid additional quote characters that appear in string expressions. Note: the XPath expression must be UTF-8 encoded. --path "PATH(S)" Use the (space- or colon-separated) list of filesystem paths specified by PATHS to load DTDs, entities or documents. Enclose space-separated lists by quotation marks. --profile or --norman Output profiling information detailing the amount of time spent in each part of the stylesheet. This is useful in optimizing stylesheet performance. --repeat Run the transformation 20 times. Used for timing tests. --stringparam PARAMNAME PARAMVALUE Pass a parameter of name PARAMNAME and value PARAMVALUE where PARAMVALUE is a string rather than a node identifier. Note: The string must be UTF-8 encoded. --timing Display the time used for parsing the stylesheet, parsing the document and applying the stylesheet and saving the result. Displayed in milliseconds. -v or --verbose Output each step taken by xsltproc in processing the stylesheet and the document. -V or --version Show the version of libxml(3) and libxslt(3) used. --writesubtree PATH Allow file write only within the PATH subtree. --xinclude Process the input document using the XInclude specification. More details on this can be found in the XInclude specification: http://www.w3.org/TR/xinclude/ --xincludestyle Process the stylesheet with XInclude. ENVIRONMENT SGML_CATALOG_FILES SGML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. This can be done by setting the SGML_CATALOG_FILES environment variable to a list of catalogs. An empty one should deactivate loading the default /etc/sgml/catalog catalog. XML_CATALOG_FILES XML catalog behavior can be changed by redirecting queries to the user's own set of catalogs. 
This can be done by setting the XML_CATALOG_FILES environment variable to a list of catalogs. An empty one should deactivate loading the default /etc/xml/catalog catalog. DIAGNOSTICS xsltproc return codes provide information that can be used when calling it from scripts. 0 No error (normal operation) 1 No argument 2 Too many parameters 3 Unknown option 4 Failed to parse the stylesheet 5 Error in the stylesheet 6 Error in one of the documents 7 Unsupported xsl:output method 8 String parameter contains both quote and double-quotes 9 Internal processing error 10 Processing was stopped by a terminating message 11 Could not write the result to the output file SEE ALSO libxml(3), libxslt(3) More information can be found at • libxml(3) web page https://gitlab.gnome.org/GNOME/libxslt • W3C XSLT page http://www.w3.org/TR/xslt AUTHOR John Fleck <jfleck@inkstain.net> Author. COPYRIGHT Copyright © 2001, 2002 libxslt 08/17/2022 XSLTPROC(1)
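The --stringparam option can be exercised end to end with a minimal stylesheet (the filenames greet.xsl and in.xml are invented for this sketch, and the run is guarded in case xsltproc is absent):

```shell
# A stylesheet with one overridable parameter, applied to a trivial
# document.  --stringparam avoids the extra quoting --param needs.
cat > greet.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:param name="who" select="'world'"/>
  <xsl:template match="/">hello, <xsl:value-of select="$who"/></xsl:template>
</xsl:stylesheet>
EOF
printf '<doc/>' > in.xml
if command -v xsltproc >/dev/null 2>&1; then
    xsltproc --stringparam who reader greet.xsl in.xml
fi
```

Without --stringparam the template falls back to the default value of the parameter ('world').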
| null |
osascript
|
osascript executes the given OSA script, which may be plain text or a compiled script (.scpt) created by Script Editor or osacompile(1). By default, osascript treats plain text as AppleScript, but you can change this using the -l option. To get a list of the OSA languages installed on your system, use osalang(1). osascript will look for the script in one of the following three places: 1. Specified line by line using -e switches on the command line. 2. Contained in the file specified by the first filename on the command line. This file may be plain text or a compiled script. 3. Passed in using standard input. This works only if there are no filename arguments; to pass arguments to a STDIN-read script, you must explicitly specify “-” for the script name. Any arguments following the script will be passed as a list of strings to the direct parameter of the “run” handler. For example, in AppleScript: a.scpt: on run argv return "hello, " & item 1 of argv & "." end run % osascript a.scpt world hello, world. The options are as follows: -e statement Enter one line of a script. If -e is given, osascript will not look for a filename in the argument list. Multiple -e options may be given to build up a multi-line script. Because most scripts use characters that are special to many shell programs (for example, AppleScript uses single and double quote marks, “(”, “)”, and “*”), the statement will have to be correctly quoted and escaped to get it past the shell intact. -i Interactive mode: osascript will prompt for one line at a time, and print the result, if applicable, after each line. Any script supplied as a command argument using -e or programfile will be loaded, but not executed, before starting the interactive prompt. -l language Override the language for any plain text files. Normally, plain text files are compiled as AppleScript. -s flags Modify the output style. The flags argument is a string consisting of any of the modifier characters e, h, o, and s. 
Multiple modifiers can be concatenated in the same string, and multiple -s options can be specified. The modifiers come in exclusive pairs; if conflicting modifiers are specified, the last one takes precedence. The meanings of the modifier characters are as follows:

h    Print values in human-readable form (default).

s    Print values in recompilable source form.

     osascript normally prints its results in human-readable form: strings do not have quotes around them, characters are not escaped, braces for lists and records are omitted, etc. This is generally more useful, but can introduce ambiguities. For example, the lists ‘{"foo", "bar"}’ and ‘{{"foo", {"bar"}}}’ would both be displayed as ‘foo, bar’. To see the results in an unambiguous form that could be recompiled into the same value, use the s modifier.

e    Print script errors to stderr (default).

o    Print script errors to stdout.

     osascript normally prints script errors to stderr, so downstream clients only see valid results. When running automated tests, however, using the o modifier lets you distinguish script errors, which you care about matching, from other diagnostic output, which you don't.

SEE ALSO
     osacompile(1), osalang(1), AppleScript Language Guide

HISTORY
     osascript in Mac OS X 10.0 would translate ‘\r’ characters in the output to ‘\n’ and provided c and r modifiers for the -s option to change this. osascript now always leaves the output alone; pipe through tr(1) if necessary.

     Prior to Mac OS X 10.4, osascript did not allow passing arguments to the script.

Mac OS X                      April 24, 2014                      Mac OS X
|
osascript – execute OSA scripts (AppleScript, JavaScript, etc.)
|
osascript [-l language] [-i] [-s flags] [-e statement | programfile] [argument ...]
| null | null |
iotop
|
iotop tracks disk I/O by process, and prints a summary report that is refreshed every interval. It measures disk events that have made it past system caches. Since it uses DTrace, only users with root privileges can run this command.
|
iotop - display top disk I/O events by process. Uses DTrace.
|
iotop [-C] [-D|-o|-P] [-j|-Z] [-d device] [-f filename] [-m mount_point] [-t top] [interval [count]]
|
-C              don't clear the screen
-D              print delta times - elapsed, us
-j              print project ID
-o              print disk delta times, us
-P              print %I/O (disk delta times)
-Z              print zone ID
-d device       instance name to snoop (eg, dad0)
-f filename     full pathname of file to snoop
-m mount_point  mountpoint for filesystem to snoop
-t top          print top number only
|
Default output, print summary every 5 seconds,
     # iotop

One second samples,
     # iotop 1

print %I/O (time based),
     # iotop -P

Snoop events on the root filesystem only,
     # iotop -m /

Print top 20 lines only,
     # iotop -t 20

Print 12 x 5 second samples, scrolling,
     # iotop -C 5 12

FIELDS
     UID        user ID
     PID        process ID
     PPID       parent process ID
     PROJ       project ID
     ZONE       zone ID
     CMD        command name for the process
     DEVICE     device name
     MAJ        device major number
     MIN        device minor number
     D          direction, Read or Write
     BYTES      total size of operations, bytes
     ELAPSED    total elapsed times from request to completion, us (this is the elapsed time from the disk request (strategy) to the disk completion (iodone))
     DISKTIME   total times for disk to complete request, us (this is the time for the disk to complete that event since its last event (time between iodones), or, the time to the strategy if the disk had been idle)
     %I/O       percent disk I/O, based on time (DISKTIME)
     load       1 minute load average
     disk_r     total disk read Kb for sample
     disk_w     total disk write Kb for sample

DOCUMENTATION
     See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output.

EXIT
     iotop runs until Ctrl-C is pressed or, if a count was given, until that many samples have been printed.

AUTHOR
     Brendan Gregg [Sydney, Australia]

SEE ALSO
     iosnoop(1M), dtrace(1M)

version 0.75                  October 25, 2005                   iotop(1m)
|
lex
|
Generates programs that perform pattern-matching on text.

Table Compression:
     -Ca, --align      trade off larger tables for better memory alignment
     -Ce, --ecs        construct equivalence classes
     -Cf               do not compress tables; use -f representation
     -CF               do not compress tables; use -F representation
     -Cm, --meta-ecs   construct meta-equivalence classes
     -Cr, --read       use read() instead of stdio for scanner input
     -f, --full        generate fast, large scanner. Same as -Cfr
     -F, --fast        use alternate table representation. Same as -CFr
     -Cem              default compression (same as --ecs --meta-ecs)

Debugging:
     -d, --debug       enable debug mode in scanner
     -b, --backup      write backing-up information to lex.backup
     -p, --perf-report write performance report to stderr
     -s, --nodefault   suppress default rule to ECHO unmatched text
     -T, --trace       flex should run in trace mode
     -w, --nowarn      do not generate warnings
     -v, --verbose     write summary of scanner statistics to stdout
     --hex             use hexadecimal numbers instead of octal in debug outputs

FILES
     -o, --outfile=FILE     specify output filename
     -S, --skel=FILE        specify skeleton file
     -t, --stdout           write scanner on stdout instead of lex.yy.c
     --yyclass=NAME         name of C++ class
     --header-file=FILE     create a C header file in addition to the scanner
     --tables-file[=FILE]   write tables to FILE

Scanner behavior:
     -7, --7bit             generate 7-bit scanner
     -8, --8bit             generate 8-bit scanner
     -B, --batch            generate batch scanner (opposite of -I)
     -i, --case-insensitive ignore case in patterns
     -l, --lex-compat       maximal compatibility with original lex
     -X, --posix-compat     maximal compatibility with POSIX lex
     -I, --interactive      generate interactive scanner (opposite of -B)
     --yylineno             track line count in yylineno

Generated code:
     -+, --c++              generate C++ scanner class
     -Dmacro[=defn]         #define macro defn (default defn is '1')
     -L, --noline           suppress #line directives in scanner
     -P, --prefix=STRING    use STRING as prefix instead of "yy"
     -R, --reentrant        generate a reentrant C scanner
     --bison-bridge         scanner for bison pure parser
     --bison-locations      include yylloc support
     --stdinit              initialize yyin/yyout to stdin/stdout
     --nounistd             do not include <unistd.h>
     --noFUNCTION           do not generate a particular FUNCTION

Miscellaneous:
     -c                     do-nothing POSIX option
     -n                     do-nothing POSIX option
     -?
     -h, --help             produce this help message
     -V, --version          report flex version

SEE ALSO
     The full documentation for flex is maintained as a Texinfo manual. If the info and flex programs are properly installed at your site, the command info flex should give you access to the complete manual.

The Flex Project                  May 2017                         FLEX(1)
|
flex - the fast lexical analyser generator
|
flex [OPTIONS] [FILE]...
| null | null |
encguess
|
The encoding identification is done by checking one encoding type at a time until all but the right type are eliminated. The set of encoding types to try is defined by the -s parameter and defaults to ascii, utf8 and UTF-16/32 with BOM. This can be overridden by passing one or more encoding types via the -s parameter. If you need to pass in multiple suspect encoding types, use a quoted string with a space separating each value.

SEE ALSO
     Encode::Guess, Encode::Detect

LICENSE AND COPYRIGHT
     Copyright 2015 Michael LaGrasta and Dan Kogai.

     This program is free software; you can redistribute it and/or modify it under the terms of the Artistic License (2.0). You may obtain a copy of the full license at: <http://www.perlfoundation.org/artistic_license_2_0>

perl v5.38.2                  2023-11-28                      ENCGUESS(1)
|
encguess - guess character encodings of files

VERSION
     $Id: encguess,v 0.3 2020/12/02 01:28:17 dankogai Exp $
|
encguess [switches] filename...

SWITCHES
     -h   show this message and exit.
     -s   specify a list of "suspect encoding types" to test, separated by either ":" or ","
     -S   output a list of all acceptable encoding types that can be used with the -s param
     -u   suppress display of unidentified types

EXAMPLES:
     • Guess encoding of a file named "test.txt", using only the default suspect types.

          encguess test.txt

     • Guess the encoding type of a file named "test.txt", using the suspect types "euc-jp,shiftjis,7bit-jis".

          encguess -s euc-jp,shiftjis,7bit-jis test.txt
          encguess -s euc-jp:shiftjis:7bit-jis test.txt

     • Guess the encoding type of several files; do not display results for unidentified files.

          encguess -us euc-jp,shiftjis,7bit-jis test*.txt
| null | null |
eyapp5.34
|
The eyapp compiler is a front-end to the Parse::Eyapp module, which lets you compile Parse::Eyapp grammar input files into Perl LALR(1) Object Oriented parser modules.

OPTIONS IN DETAIL

-v   Creates a file grammar.output describing your parser. It will show you a summary of conflicts, rules, the DFA (Deterministic Finite Automaton) states and overall usage of the parser. Implies option "-N". To produce a more detailed description of the states, the LALR tables aren't compacted. Use the combination "-vN" to produce an ".output" file corresponding to the compacted tables.

-s   Create a standalone module in which the parsing driver is included. The modules including the LALR driver (Parse::Eyapp::Driver), those for AST manipulations (Parse::Eyapp::Node and Parse::Eyapp::YATW) and Parse::Eyapp::Base are included - almost verbatim - inside the generated module. Note that if you have more than one parser module called from a program, to have it standalone, you need this option only for one of your grammars.

-n   Disable source file line numbering embedded in your parser module. I don't know why one should need it, but it's there.

-m module
     Gives your parser module the package name (or name space or module name or class name or whatever-you-call-it) of module. It defaults to grammar.

-o outfile
     The compiled output file will be named outfile for your parser module. It defaults to grammar.pm or, if you specified the option -m A::Module::Name (see above), to Name.pm.

-c grammar[.eyp]
     Produces as output (STDOUT) the grammar without the actions. Only the syntactic parts are displayed. Comments will also be stripped if the "-v" option is added.

-t filename
     The -t filename option allows you to specify a file which should be used as a template for generating the parser output. The default is to use the internal template defined in Parse::Eyapp::Output.pm. 
For how to write your own template and which substitutions are available, have a look at the module Parse::Eyapp::Output.pm: it should be obvious.

-b shebang
     If you work on systems that understand so-called shebangs, and your generated parser is directly an executable script, you can specify one with the -b option, e.g.:

          eyapp -b '/usr/local/bin/perl -w' -o myscript.pl myscript.yp

     This will output a file called myscript.pl whose very first line is:

          #!/usr/local/bin/perl -w

     The argument is mandatory, but if you specify an empty string, the value of $Config{perlpath} will be used instead.

-B prompt
     Adds a modulino call '__PACKAGE__->main(<prompt>) unless caller();' as the very last line of the output file. The argument is mandatory.

-C grammar.eyp
     An abbreviation for the combined use of -b '' and -B ''

-T grammar.eyp
     Equivalent to %tree.

-N grammar.eyp
     Equivalent to the directive %nocompact. Do not compact LALR action tables.

-l   Do not provide a default lexical analyzer. By default "eyapp" builds a lexical analyzer from your "%token = /regexp/" definitions.

grammar
     The input grammar file. If no suffix is given, and the file does not exist, an attempt to open the file with a suffix of .eyp is tried before exiting.

-V   Display current version of Parse::Eyapp and gracefully exits.

-h   Display the usage screen.

EXAMPLE
     The following "eyapp" program translates an infix expression like "2+3*4" to postfix: "2 3 4 * +"

          %token NUM = /([0-9]+(?:\.[0-9]+)?)/
          %token VAR = /([A-Za-z][A-Za-z0-9_]*)/
          %right  '='
          %left   '-' '+'
          %left   '*' '/'
          %left   NEG
          %defaultaction { "$left $right $op"; }

          %%
          line: $exp  { print "$exp\n" }
          ;

          exp:
              $NUM                  { $NUM }
            | $VAR                  { $VAR }
            | VAR.left '='.op exp.right
            | exp.left '+'.op exp.right
            | exp.left '-'.op exp.right
            | exp.left '*'.op exp.right
            | exp.left '/'.op exp.right
            | '-' $exp %prec NEG    { "$exp NEG" }
            | '(' $exp ')'          { $exp }
          ;
          %%

     Notice that there is no need to write lexer and error report subroutines. 
First, we compile the grammar:

     pl@nereida:~/LEyapp/examples/eyappintro$ eyapp -o postfix.pl -C Postfix.eyp

If we use the "-C" option and no "main()" was written, one default "main" sub is provided. We can now execute the resulting program:

     pl@nereida:~/LEyapp/examples/eyappintro$ ./postfix.pl -c 'a = 2*3 +b'
     a 2 3 * b + =

When a non-conformant input is given, it produces an accurate error message:

     pl@nereida:~/LEyapp/examples/eyappintro$ ./postfix.pl -c 'a = 2**3 +b'
     Syntax error near '*'.
     Expected one of these terminals: '-' 'NUM' 'VAR' '('
     There were 1 errors during parsing

AUTHOR
     Casiano Rodriguez-Leon

COPYRIGHT
     Copyright © 2006, 2007, 2008, 2009, 2010, 2011, 2012 Casiano Rodriguez-Leon. Copyright © 2017 William N. Braswell, Jr. All Rights Reserved.

     Parse::Yapp is Copyright © 1998, 1999, 2000, 2001, Francois Desarmenien. Parse::Yapp is Copyright © 2017 William N. Braswell, Jr. All Rights Reserved.

     This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available.

SEE ALSO
     • Parse::Eyapp,
     • perldoc vgg,
     • The tutorial Parsing Strings and Trees with "Parse::Eyapp" (An Introduction to Compiler Construction in seven pages),
     • The pdf file in <http://nereida.deioc.ull.es/~pl/perlexamples/Eyapp.pdf>
     • <http://nereida.deioc.ull.es/~pl/perlexamples/section_eyappts.html> (Spanish),
     • eyapp,
     • treereg,
     • Parse::Yapp,
     • yacc(1),
     • bison(1),
     • the classic book "Compilers: Principles, Techniques, and Tools" by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman (Addison-Wesley 1986),
     • Parse::RecDescent.

perl v5.34.0                  2017-06-14                         EYAPP(1)
|
eyapp - A Perl front-end to the Parse::Eyapp module
|
eyapp [options] grammar[.eyp]
eyapp -V
eyapp -h

grammar
     The grammar file. If no suffix is given, and the file does not exist, .eyp is added
| null | null |
ctf_insert
|
ctf_insert inserts CTF (Compact C Type Format) data into a mach_kernel binary, storing the data in a newly created (__CTF,__ctf) section. This section must not be present in the input file.

ctf_insert(1) must be passed one -arch argument for each architecture in a universal file, or exactly one -arch for a thin file.

input
     specifies the input mach_kernel.

-o output
     specifies the output file.

-arch arch file
     specifies a file of CTF data to be used for the specified arch in a Mach-O or universal file. The file's content will be stored in a newly created (__CTF,__ctf) section.

SEE ALSO
     otool(1), segedit(1).

Apple, Inc.                   June 23, 2020                  CTF_INSERT(1)
|
ctf_insert - insert Compact C Type Format data into a mach_kernel file
|
ctf_insert input [ -arch arch file ]... -o output
| null | null |
csplit
|
The csplit utility splits file into pieces using the patterns args. If file is a dash (‘-’), csplit reads from standard input.

Files are created with a prefix of “xx” and two decimal digits. The size of each file is written to standard output as it is created. If an error occurs whilst files are being created, or a HUP, INT, or TERM signal is received, all files previously written are removed.

The options are as follows:

-f prefix   Create file names beginning with prefix, instead of “xx”.

-k          Do not remove previously created files if an error occurs or a HUP, INT, or TERM signal is received.

-n number   Create file names beginning with number of decimal digits after the prefix, instead of 2.

-s          Do not write the size of each output file to standard output as it is created.

The args operands may be a combination of the following patterns:

/regexp/[[+|-]offset]
     Create a file containing the input from the current line to (but not including) the next line matching the given basic regular expression. An optional offset from the line that matched may be specified.

%regexp%[[+|-]offset]
     Same as above but a file is not created for the output.

line_no
     Create a file containing the input from the current line to (but not including) the specified line number.

{num}
     Repeat the previous pattern the specified number of times. If it follows a line number pattern, a new file will be created for each line_no lines, num times.

The first line of the file is line number 1 for historic reasons. After all the patterns have been processed, the remaining input data (if there is any) will be written to a new file. Requesting to split at a line before the current line number or past the end of the file will result in an error.

ENVIRONMENT
     The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of csplit as described in environ(7).

EXIT STATUS
     The csplit utility exits 0 on success, and >0 if an error occurs.
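The pattern mechanics can be illustrated with a small hypothetical session (the file name, prefix, and contents are invented for the example):

```shell
# Split a file at each line beginning with "CHAPTER".
printf 'intro\nCHAPTER 1\nbody one\nCHAPTER 2\nbody two\n' > book.txt

# The first pattern splits at the first match; {1} repeats it once more.
csplit -s -f chap book.txt '/^CHAPTER/' '{1}'

ls chap*    # chap00 holds "intro"; chap01 and chap02 hold one chapter each
```

Without -s, csplit would also print the byte size of each piece as it is created.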
|
csplit – split files based on context
|
csplit [-ks] [-f prefix] [-n number] file args ...
| null |
Split the mdoc(7) file foo.1 into one file for each section (up to 21 plus one for the rest, if any):

     csplit -k foo.1 '%^\.Sh%' '/^\.Sh/' '{20}'

Split standard input after the first 99 lines and every 100 lines thereafter:

     csplit -k - 100 '{19}'

SEE ALSO
     sed(1), split(1), re_format(7)

STANDARDS
     The csplit utility conforms to IEEE Std 1003.1-2001 (“POSIX.1”).

HISTORY
     A csplit command appeared in PWB UNIX.

BUGS
     Input lines are limited to LINE_MAX (2048) bytes in length.

macOS 14.5                    February 6, 2014                  macOS 14.5
|
swift-inspect
| null | null | null | null | null |
splain5.30
|
The "diagnostics" Pragma This module extends the terse diagnostics normally emitted by both the perl compiler and the perl interpreter (from running perl with a -w switch or "use warnings"), augmenting them with the more explicative and endearing descriptions found in perldiag. Like the other pragmata, it affects the compilation phase of your program rather than merely the execution phase. To use in your program as a pragma, merely invoke use diagnostics; at the start (or near the start) of your program. (Note that this does enable perl's -w flag.) Your whole compilation will then be subject(ed :-) to the enhanced diagnostics. These still go out STDERR. Due to the interaction between runtime and compiletime issues, and because it's probably not a very good idea anyway, you may not use "no diagnostics" to turn them off at compiletime. However, you may control their behaviour at runtime using the disable() and enable() methods to turn them off and on respectively. The -verbose flag first prints out the perldiag introduction before any other diagnostics. The $diagnostics::PRETTY variable can generate nicer escape sequences for pagers. Warnings dispatched from perl itself (or more accurately, those that match descriptions found in perldiag) are only displayed once (no duplicate descriptions). User code generated warnings a la warn() are unaffected, allowing duplicate user messages to be displayed. This module also adds a stack trace to the error message when perl dies. This is useful for pinpointing what caused the death. The -traceonly (or just -t) flag turns off the explanations of warning messages leaving just the stack traces. So if your script is dieing, run it again with perl -Mdiagnostics=-traceonly my_bad_script to see the call stack at the time of death. By supplying the -warntrace (or just -w) flag, any warnings emitted will also come with a stack trace. 
The splain Program While apparently a whole nuther program, splain is actually nothing more than a link to the (executable) diagnostics.pm module, as well as a link to the diagnostics.pod documentation. The -v flag is like the "use diagnostics -verbose" directive. The -p flag is like the $diagnostics::PRETTY variable. Since you're post-processing with splain, there's no sense in being able to enable() or disable() processing. Output from splain is directed to STDOUT, unlike the pragma.
|
diagnostics, splain - produce verbose warning diagnostics
|
Using the "diagnostics" pragma:

     use diagnostics;
     use diagnostics -verbose;
     enable diagnostics;
     disable diagnostics;

Using the "splain" standalone filter program:

     perl program 2>diag.out
     splain [-v] [-p] diag.out

Using diagnostics to get stack traces from a misbehaving script:

     perl -Mdiagnostics=-traceonly my_script.pl
| null |
The following file is certain to trigger a few errors at both runtime and compiletime:

     use diagnostics;
     print NOWHERE "nothing\n";
     print STDERR "\n\tThis message should be unadorned.\n";
     warn "\tThis is a user warning";
     print "\nDIAGNOSTIC TESTER: Please enter a <CR> here: ";
     my $a, $b = scalar <STDIN>;
     print "\n";
     print $x/$y;

If you prefer to run your program first and look at its problem afterwards, do this:

     perl -w test.pl 2>test.out
     ./splain < test.out

Note that this is not in general possible in shells of more dubious heritage, as the theoretical

     (perl -w test.pl >/dev/tty) >& test.out
     ./splain < test.out

Because you just moved the existing stdout to somewhere else.

If you don't want to modify your source code, but still have on-the-fly warnings, do this:

     exec 3>&1; perl -w test.pl 2>&1 1>&3 3>&- | splain 1>&2 3>&-

Nifty, eh?

If you want to control warnings on the fly, do something like this. Make sure you do the "use" first, or you won't be able to get at the enable() or disable() methods.

     use diagnostics; # checks entire compilation phase
     print "\ntime for 1st bogus diags: SQUAWKINGS\n";
     print BOGUS1 'nada';
     print "done with 1st bogus\n";

     disable diagnostics; # only turns off runtime warnings
     print "\ntime for 2nd bogus: (squelched)\n";
     print BOGUS2 'nada';
     print "done with 2nd bogus\n";

     enable diagnostics; # turns back on runtime warnings
     print "\ntime for 3rd bogus: SQUAWKINGS\n";
     print BOGUS3 'nada';
     print "done with 3rd bogus\n";

     disable diagnostics;
     print "\ntime for 4th bogus: (squelched)\n";
     print BOGUS4 'nada';
     print "done with 4th bogus\n";

INTERNALS
     Diagnostic messages derive from the perldiag.pod file when available at runtime. Otherwise, they may be embedded in the file itself when the splain package is built. See the Makefile for details. 
If an extant $SIG{__WARN__} handler is discovered, it will continue to be honored, but only after the diagnostics::splainthis() function (the module's $SIG{__WARN__} interceptor) has had its way with your warnings. There is a $diagnostics::DEBUG variable you may set if you're desperately curious what sorts of things are being intercepted. BEGIN { $diagnostics::DEBUG = 1 } BUGS Not being able to say "no diagnostics" is annoying, but may not be insurmountable. The "-pretty" directive is called too late to affect matters. You have to do this instead, and before you load the module. BEGIN { $diagnostics::PRETTY = 1 } I could start up faster by delaying compilation until it should be needed, but this gets a "panic: top_level" when using the pragma form in Perl 5.001e. While it's true that this documentation is somewhat subserious, if you use a program named splain, you should expect a bit of whimsy. AUTHOR Tom Christiansen <tchrist@mox.perl.com>, 25 June 1995. perl v5.30.3 2024-04-13 SPLAIN(1)
|
iconv
|
The iconv program converts text from one encoding to another encoding. More precisely, it converts from the encoding given for the -f option to the encoding given for the -t option. Either of these encodings defaults to the encoding of the current locale. All the inputfiles are read and converted in turn; if no inputfile is given, the standard input is used. The converted text is printed to standard output.

The encodings permitted are system dependent. For the libiconv implementation, they are listed in the iconv_open(3) manual page.

Options controlling the input and output format:

-f encoding, --from-code=encoding
     Specifies the encoding of the input.

-t encoding, --to-code=encoding
     Specifies the encoding of the output.

Options controlling conversion problems:

-c   When this option is given, characters that cannot be converted are silently discarded, instead of leading to a conversion error.

--unicode-subst=formatstring
     When this option is given, Unicode characters that cannot be represented in the target encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the Unicode code point. The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument.

--byte-subst=formatstring
     When this option is given, bytes in the input that are not valid in the source encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the byte's value. The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument.

--widechar-subst=formatstring
     When this option is given, wide characters in the input that are not valid in the source encoding are replaced with a placeholder string that is constructed from the given formatstring, applied to the byte's value. 
The formatstring must be a format string in the same format as for the printf command or the printf() function, taking either no argument or exactly one unsigned integer argument.

Options controlling error output:

-s, --silent
     When this option is given, error messages about invalid or unconvertible characters are omitted, but the actual converted text is unaffected.

The iconv -l or iconv --list command lists the names of the supported encodings, in a system dependent format. For the libiconv implementation, the names are printed in upper case, separated by whitespace, and alias names of an encoding are listed on the same line as the encoding itself.
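A small hypothetical round trip (the file names are invented; the two encodings used are available in essentially every iconv implementation):

```shell
# 0xE9 is "é" in ISO-8859-1; convert it to UTF-8 and back.
printf '\351\n' > latin1.txt
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
od -An -tx1 utf8.txt    # the accented letter becomes the two bytes c3 a9
iconv -f UTF-8 -t ISO-8859-1 utf8.txt > roundtrip.txt
cmp latin1.txt roundtrip.txt && echo 'round trip OK'
```

The substitution options (--byte-subst and friends) described above are libiconv extensions and are not accepted by every iconv, so this sketch sticks to the portable -f/-t form.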
|
iconv - character set conversion
|
iconv [OPTION...] [-f encoding] [-t encoding] [inputfile ...] iconv -l
| null |
iconv -f ISO-8859-1 -t UTF-8
     converts input from the old West-European encoding ISO-8859-1 to Unicode.

iconv -f KOI8-R --byte-subst="<0x%x>" --unicode-subst="<U+%04X>"
     converts input from the old Russian encoding KOI8-R to the locale encoding, substituting an angle bracket notation with hexadecimal numbers for invalid bytes and for valid but unconvertible characters.

iconv --list
     lists the supported encodings.

CONFORMING TO
     POSIX:2001

SEE ALSO
     iconv_open(3), locale(7)

GNU                           March 31, 2007                      ICONV(1)
|
captoinfo
|
captoinfo looks in each given text file for termcap descriptions. For each one found, an equivalent terminfo description is written to standard output. Termcap tc capabilities are translated directly to terminfo use capabilities.

If no file is given, then the environment variable TERMCAP is used for the filename or entry. If TERMCAP is a full pathname to a file, only the terminal whose name is specified in the environment variable TERM is extracted from that file. If the environment variable TERMCAP is not set, then the file /usr/share/terminfo is read.

-v   print out tracing information on standard error as the program runs.

-V   print out the version of the program in use on standard error and exit.

-1   cause the fields to print out one to a line. Otherwise, the fields will be printed several to a line to a maximum width of 60 characters.

-w   change the output to width characters.

FILES
     /usr/share/terminfo   Compiled terminal description database.

TRANSLATIONS FROM NONSTANDARD CAPABILITIES
     Some obsolete nonstandard capabilities will automatically be translated into standard (SVr4/XSI Curses) terminfo capabilities by captoinfo. Whenever one of these automatic translations is done, the program will issue a notification to stderr, inviting the user to check that it has not mistakenly translated a completely unknown and random capability and/or syntax error. 
Nonstd Std From Terminfo name name capability ─────────────────────────────────────────────── BO mr AT&T enter_reverse_mode CI vi AT&T cursor_invisible CV ve AT&T cursor_normal DS mh AT&T enter_dim_mode EE me AT&T exit_attribute_mode FE LF AT&T label_on FL LO AT&T label_off XS mk AT&T enter_secure_mode EN @7 XENIX key_end GE ae XENIX exit_alt_charset_mode GS as XENIX enter_alt_charset_mode HM kh XENIX key_home LD kL XENIX key_dl PD kN XENIX key_npage PN po XENIX prtr_off PS pf XENIX prtr_on PU kP XENIX key_ppage RT @8 XENIX kent UP ku XENIX kcuu1 KA k; Tek key_f10 KB F1 Tek key_f11 KC F2 Tek key_f12 KD F3 Tek key_f13 KE F4 Tek key_f14 KF F5 Tek key_f15 BC Sb Tek set_background FC Sf Tek set_foreground HS mh Iris enter_dim_mode XENIX termcap also used to have a set of extension capabilities for forms drawing, designed to take advantage of the IBM PC high-half graphics. They were as follows: Cap Graphic ───────────────────────────── G2 upper left G3 lower left G1 upper right G4 lower right GR pointing right GL pointing left GU pointing up GD pointing down GH horizontal line GV vertical line GC intersection G6 upper left G7 lower left G5 upper right G8 lower right Gr tee pointing right Gr tee pointing left Gu tee pointing up Gd tee pointing down Gh horizontal line Gv vertical line Gc intersection GG acs magic cookie count If the single-line capabilities occur in an entry, they will automatically be composed into an acsc string. The double-line capabilities and GG are discarded with a warning message. IBM's AIX has a terminfo facility descended from SVr1 terminfo but incompatible with the SVr4 format. The following AIX extensions are automatically translated: IBM XSI ───────────── ksel kslt kbtab kcbt font0 s0ds font1 s1ds font2 s2ds font3 s3ds Additionally, the AIX box1 capability will be automatically translated to an acsc string. Hewlett-Packard's terminfo library supports two nonstandard terminfo capabilities meml (memory lock) and memu (memory unlock). 
These will be discarded with a warning message. NOTES This utility is actually a link to tic(1M), running in -I mode. You can use other tic options such as -f and -x. The trace option is not identical to SVr4's. Under SVr4, instead of following the -v with a trace level n, you repeat it n times. SEE ALSO infocmp(1M), curses(3X), terminfo(5) This describes ncurses version 5.7 (patch 20081102). AUTHOR Eric S. Raymond <esr@snark.thyrsus.com> and Thomas E. Dickey <dickey@invisible-island.net> captoinfo(1M)
|
captoinfo - convert a termcap description into a terminfo description
|
captoinfo [-vn width] [-V] [-1] [-w width] file . . .
| null | null |
snmpinform
|
snmptrap is an SNMP application that uses the SNMP TRAP operation to send information to a network manager. One or more object identifiers (OIDs) can be given as arguments on the command line. A type and a value must accompany each object identifier. Each variable name is given in the format specified in variables(5).

When invoked as snmpinform, or when -Ci is added to the command line flags of snmptrap, it sends an INFORM-PDU, expecting a response from the trap receiver, retransmitting if required. Otherwise it sends a TRAP-PDU or TRAP2-PDU.

If any of the required version 1 parameters, enterprise-oid, agent, and uptime, are specified as empty, it defaults to 1.3.6.1.4.1.3.1.1 (enterprises.cmu.1.1), hostname, and host-uptime respectively.

The TYPE is a single character, one of:

     i   INTEGER
     u   UNSIGNED
     c   COUNTER32
     s   STRING
     x   HEX STRING
     d   DECIMAL STRING
     n   NULLOBJ
     o   OBJID
     t   TIMETICKS
     a   IPADDRESS
     b   BITS

which are handled in the same way as the snmpset command. For example:

     snmptrap -v 1 -c public manager enterprises.spider test-hub 3 0 '' interfaces.iftable.ifentry.ifindex.1 i 1

will send a generic linkUp trap to manager, for interface 1.
|
snmptrap, snmpinform - sends an SNMP notification to a manager
|
snmptrap -v 1 [COMMON OPTIONS] AGENT enterprise-oid agent generic-trap specific-trap uptime [OID TYPE VALUE]... snmptrap -v [2c|3] [COMMON OPTIONS] [-Ci] AGENT uptime trap-oid [OID TYPE VALUE]... snmpinform -v [2c|3] [COMMON OPTIONS] AGENT uptime trap-oid [OID TYPE VALUE]...
|
snmptrap takes the common options described in the snmpcmd(1) manual page in addition to the -Ci option described above. Note that snmptrap REQUIRES an argument specifying the agent to query as described there. SEE ALSO snmpcmd(1), snmpset(1), variables(5). V5.6.2.1 19 Jun 2003 SNMPTRAP(1)
| null |
lsbom
|
The lsbom command interprets the contents of binary bom (bom(5)) files. For each file in a bom, lsbom prints the file path and/or requested information. If no options are given, lsbom will display the output formatted such that each line contains the path of the entry, its mode (octal), and its UID/GID. There are slight differences in the output for plain files, directories, symbolic links, and device files as follows: plain files the UID/GID is followed by the file size and a 32-bit CRC checksum of the file's contents. symbolic links the UID/GID is followed by the size and checksum of the link path, and the link path itself. device files the UID/GID is followed by the device number. The -p option can be used to specify a user-defined format for lsbom's output. The format string consists of one or more characters described below where each character represents a data type. Data types will be separated by tab characters, and each line will end with a newline character. One can use this mechanism to create output similar to the ls(1) command. The options are: -h print full usage -b list block devices -c list character devices -d list directories -f list files -l list symbolic links -m print modified times (for plain files only) -s print only the path of each file -x suppress modes for directories and symlinks --arch archVal when displaying plain files that represent Universal Mach-O binaries, print the size and checksum of the file contents for the specified archVal (either "ppc", "ppc64", or "i386") -p parameters print only some of the results Note: each option can only be used once: c 32-bit checksum f file name F file name with quotes (i.e. "/mach_kernel") g group id G group name m file mode (permissions) M symbolic file mode (i.e. "dr-xr-xr-x" ) s file size S formatted size t mod time T formatted mod time u user id U user name / user id/group id ? user name/group name
|
lsbom – list contents of a bom file
|
lsbom [-b] [-c] [-d] [-f] [-l] [-m] [-s] [-x] [--arch archVal] [-p parameters] bom ... lsbom -h | --help
| null |
lsbom bomfile list the contents of bomfile lsbom -s bomfile list only the paths of the contents of the bomfile lsbom -f -l bomfile list the plain files and symbolic links of the bomfile (but not directories or devices) lsbom -p MUGsf bomfile list the contents of bomfile displaying only the files' modes, user name, group name, size, and filename SEE ALSO bom(5), ditto(8), mkbom(8), pkgutil(1) HISTORY The lsbom command appeared in NeXTSTEP as a tool to browse the contents of bom files used during installation. The -p flag appeared in Mac OS X 10.1 in an attempt to make lsbom's output more convenient for human beings. Mac OS X May 7, 2008 Mac OS X
|
pbcopy
|
pbcopy takes the standard input and places it in the specified pasteboard. If no pasteboard is specified, the general pasteboard will be used by default. The input is placed in the pasteboard as plain text data unless it begins with the Encapsulated PostScript (EPS) file header or the Rich Text Format (RTF) file header, in which case it is placed in the pasteboard as one of those data types. pbpaste removes the data from the pasteboard and writes it to the standard output. It normally looks first for plain text data in the pasteboard and writes that to the standard output; if no plain text data is in the pasteboard it looks for Encapsulated PostScript; if no EPS is present it looks for Rich Text. If none of those types is present in the pasteboard, pbpaste produces no output. * Encoding: pbcopy and pbpaste use locale environment variables to determine the encoding to be used for input and output. For example, absent other locale settings, setting the environment variable LANG=en_US.UTF-8 will cause pbcopy and pbpaste to use UTF-8 for input and output. If an encoding cannot be determined from the locale, the standard C encoding will be used. Use of UTF-8 is recommended. Note that by default the Terminal application uses the UTF-8 encoding and automatically sets the appropriate locale environment variable.
|
pbcopy, pbpaste - provide copying and pasting to the pasteboard (the Clipboard) from command line
|
pbcopy [-help] [-pboard {general | ruler | find | font}] pbpaste [-help] [-pboard {general | ruler | find | font}] [-Prefer {txt | rtf | ps}]
|
-pboard {general | ruler | find | font} specifies which pasteboard to copy to or paste from. If no pasteboard is given, the general pasteboard will be used by default. -Prefer {txt | rtf | ps} tells pbpaste what type of data to look for in the pasteboard first. As stated above, pbpaste normally looks first for plain text data; however, by specifying -Prefer ps you can tell pbpaste to look first for Encapsulated PostScript. If you specify -Prefer rtf, pbpaste looks first for Rich Text format. In any case, pbpaste looks for the other formats if the preferred one is not found. The txt option replaces the deprecated ascii option, which continues to function as before. Both indicate a preference for plain text. SEE ALSO ADC Reference Library: Cocoa > Interapplication Communication > Copying and Pasting Carbon > Interapplication Communication > Pasteboard Manager Programming Guide Carbon > Interapplication Communication > Pasteboard Manager Reference BUGS There is no way to tell pbpaste to get only a specified data type. Apple Computer, Inc. January 12, 2005 PBCOPY(1)
| null |
sort
|
The sort utility sorts text and binary files by lines. A line is a record separated from the subsequent record by a newline (default) or NUL '\0' character (-z option). A record can contain any printable or unprintable characters. Comparisons are based on one or more sort keys extracted from each line of input, and are performed lexicographically, according to the current locale's collating rules and the specified command-line options that can tune the actual sorting behavior. By default, if keys are not given, sort uses entire lines for comparison. The command line options are as follows: -c, --check, -C, --check=silent|quiet Check that the single input file is sorted. If the file is not sorted, sort produces the appropriate error messages and exits with code 1, otherwise returns 0. If -C or --check=silent is specified, sort produces no output. This is a "silent" version of -c. -m, --merge Merge only. The input files are assumed to be pre-sorted. If they are not sorted the output order is undefined. -o output, --output=output Print the output to the output file instead of the standard output. -S size, --buffer-size=size Use size for the maximum size of the memory buffer. Size modifiers %,b,K,M,G,T,P,E,Z,Y can be used. If a memory limit is not explicitly specified, sort takes up to about 90% of available memory. If the file size is too big to fit into the memory buffer, the temporary disk files are used to perform the sorting. -T dir, --temporary-directory=dir Store temporary files in the directory dir. The default path is the value of the environment variable TMPDIR or /var/tmp if TMPDIR is not defined. -u, --unique Unique keys. Suppress all lines that have a key that is equal to an already processed one. This option, similarly to -s, implies a stable sort. If used with -c or -C, sort also checks that there are no lines with duplicate keys. -s Stable sort. This option maintains the original record order of records that have an equal key. 
This is a non-standard feature, but it is widely accepted and used. --version Print the version and silently exits. --help Print the help text and silently exits. The following options override the default ordering rules. When ordering options appear independently of key field specifications, they apply globally to all sort keys. When attached to a specific key (see -k), the ordering options override all global ordering options for the key they are attached to. -b, --ignore-leading-blanks Ignore leading blank characters when comparing lines. -d, --dictionary-order Consider only blank spaces and alphanumeric characters in comparisons. -f, --ignore-case Convert all lowercase characters to their uppercase equivalent before comparison, that is, perform case-independent sorting. -g, --general-numeric-sort, --sort=general-numeric Sort by general numerical value. As opposed to -n, this option handles general floating points. It has a more permissive format than that allowed by -n but it has a significant performance drawback. -h, --human-numeric-sort, --sort=human-numeric Sort by numerical value, but take into account the SI suffix, if present. Sort first by numeric sign (negative, zero, or positive); then by SI suffix (either empty, or `k' or `K', or one of `MGTPEZY', in that order); and finally by numeric value. The SI suffix must immediately follow the number. For example, '12345K' sorts before '1M', because M is "larger" than K. This sort option is useful for sorting the output of a single invocation of 'df' command with -h or -H options (human- readable). -i, --ignore-nonprinting Ignore all non-printable characters. -M, --month-sort, --sort=month Sort by month abbreviations. Unknown strings are considered smaller than the month names. -n, --numeric-sort, --sort=numeric Sort fields numerically by arithmetic value. 
Fields are supposed to have optional blanks in the beginning, an optional minus sign, zero or more digits (including decimal point and possible thousand separators). -R, --random-sort, --sort=random Sort by a random order. This is a random permutation of the inputs except that the equal keys sort together. It is implemented by hashing the input keys and sorting the hash values. The hash function is chosen randomly. The hash function is randomized by /dev/random content, or by file content if it is specified by --random-source. Even if multiple sort fields are specified, the same random hash function is used for all of them. -r, --reverse Sort in reverse order. -V, --version-sort Sort version numbers. The input lines are treated as file names in form PREFIX VERSION SUFFIX, where SUFFIX matches the regular expression "(.([A-Za-z~][A-Za-z0-9~]*)?)*". The files are compared by their prefixes and versions (leading zeros are ignored in version numbers, see example below). If an input string does not match the pattern, then it is compared using the byte compare function. All string comparisons are performed in C locale, the locale environment setting is ignored. Example: $ ls sort* | sort -V sort-1.022.tgz sort-1.23.tgz sort-1.23.1.tgz sort-1.024.tgz sort-1.024.003. sort-1.024.003.tgz sort-1.024.07.tgz sort-1.024.009.tgz The treatment of field separators can be altered using these options: -b, --ignore-leading-blanks Ignore leading blank space when determining the start and end of a restricted sort key (see -k). If -b is specified before the first -k option, it applies globally to all key specifications. Otherwise, -b can be attached independently to each field argument of the key specifications. Note that sort keys specified with the -k option may have a variable number of leading whitespace characters that will affect the result, as described below in the -t option description. 
-k field1[,field2], --key=field1[,field2] Define a restricted sort key that has the starting position field1, and optional ending position field2 of a key field. The -k option may be specified multiple times, in which case subsequent keys are compared when earlier keys compare equal. The -k option replaces the obsolete options +pos1 and -pos2, but the old notation is also supported. -t char, --field-separator=char Use char as a field separator character. The initial char is not considered to be part of a field when determining key offsets. Each occurrence of char is significant (for example, “charchar” delimits an empty field). If -t is not specified, the default field separator is a sequence of blank space characters, and consecutive blank spaces do not delimit an empty field; however, the initial blank space is considered part of a field when determining key offsets. To use NUL as field separator, use -t '\0'. -z, --zero-terminated Use NUL as record separator. By default, records in the files are supposed to be separated by the newline characters. With this option, NUL ('\0') is used as a record separator character. Other options: --batch-size=num Specify maximum number of files that can be opened by sort at once. This option affects behavior when having many input files or using temporary files. The default value is 16. --compress-program=PROGRAM Use PROGRAM to compress temporary files. PROGRAM must compress standard input to standard output, when called without arguments. When called with argument -d it must decompress standard input to standard output. If PROGRAM fails, sort must exit with error. An example of PROGRAM that can be used here is bzip2. --random-source=filename In random sort, the file content is used as the source of the 'seed' data for the hash function choice. Two invocations of random sort with the same seed data will use the same hash function and will produce the same result if the input is also identical. 
By default, file /dev/random is used. --debug Print some extra information about the sorting process to the standard output. --parallel Set the maximum number of execution threads. Default number equals to the number of CPUs. --files0-from=filename Take the input file list from the file filename. The file names must be separated by NUL (like the output produced by the command "find ... -print0"). --radixsort Try to use radix sort, if the sort specifications allow. The radix sort can only be used for trivial locales (C and POSIX), and it cannot be used for numeric or month sort. Radix sort is very fast and stable. --mergesort Use mergesort. This is a universal algorithm that can always be used, but it is not always the fastest. --qsort Try to use quick sort, if the sort specifications allow. This sort algorithm cannot be used with -u and -s. --heapsort Try to use heap sort, if the sort specifications allow. This sort algorithm cannot be used with -u and -s. --mmap Try to use file memory mapping system call. It may increase speed in some cases. The following operands are available: file The pathname of a file to be sorted, merged, or checked. If no file operands are specified, or if a file operand is -, the standard input is used. A field is defined as a maximal sequence of characters other than the field separator and record separator (newline by default). Initial blank spaces are included in the field unless -b has been specified; the first blank space of a sequence of blank spaces acts as the field separator and is included in the field (unless -t is specified). For example, all blank spaces at the beginning of a line are considered to be part of the first field. Fields are specified by the -k field1[,field2] command-line option. If field2 is missing, the end of the key defaults to the end of the line. 
The arguments field1 and field2 have the form m.n (m,n > 0) and can be followed by one or more of the modifiers b, d, f, i, n, g, M and r, which correspond to the options discussed above. When b is specified it applies only to the field1 or field2 where it is specified, while the rest of the modifiers apply to the whole key field regardless of whether they are specified with field1, field2, or both. A field1 position specified by m.n is interpreted as the nth character from the beginning of the mth field. A missing .n in field1 means ‘.1’, indicating the first character of the mth field; if the -b option is in effect, n is counted from the first non-blank character in the mth field; m.1b refers to the first non-blank character in the mth field. 1.n refers to the nth character from the beginning of the line; if n is greater than the length of the line, the field is taken to be empty. nth positions are always counted from the field beginning, even if the field is shorter than the number of specified positions. Thus, the key can really start from a position in a subsequent field. A field2 position specified by m.n is interpreted as the nth character (including separators) from the beginning of the mth field. A missing .n indicates the last character of the mth field; m = 0 designates the end of a line. Thus the option -k v.x,w.y is synonymous with the obsolete option +v-1.x-1 -w-1.y; when y is omitted, -k v.x,w is synonymous with +v-1.x-1 -w.0. The obsolete +pos1 -pos2 option is still supported, except for -w.0b, which has no -k equivalent. ENVIRONMENT LC_COLLATE Locale settings to be used to determine the collation for sorting records. LC_CTYPE Locale settings used for case conversion and classification of characters, that is, which characters are considered whitespaces, etc. LC_MESSAGES Locale settings that determine the language of output messages that sort prints out. LC_NUMERIC Locale settings that determine the number format used in numeric sort. 
LC_TIME Locale settings that determine the month format used in month sort. LC_ALL Locale settings that override all of the above locale settings. This environment variable can be used to set all these settings to the same value at once. LANG Used as a last resort to determine different kinds of locale- specific behavior if neither the respective environment variable, nor LC_ALL are set. TMPDIR Path to the directory in which temporary files will be stored. Note that TMPDIR may be overridden by the -T option. GNUSORT_NUMERIC_COMPATIBILITY If defined -t will not override the locale numeric symbols, that is, thousand separators and decimal separators. By default, if we specify -t with the same symbol as the thousand separator or decimal point, the symbol will be treated as the field separator. Older behavior was less definite; the symbol was treated as both field separator and numeric separator, simultaneously. This environment variable enables the old behavior. GNUSORT_COMPATIBLE_BLANKS Use 'space' symbols as field separators (as modern GNU sort does). FILES /var/tmp/.bsdsort.PID.* Temporary files. /dev/random Default seed file for the random sort. EXIT STATUS The sort utility shall exit with one of the following values: 0 Successfully sorted the input files or if used with -c or -C, the input file already met the sorting criteria. 1 On disorder (or non-uniqueness) with the -c or -C options. 2 An error occurred. SEE ALSO comm(1), join(1), uniq(1) STANDARDS The sort utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-ghRMSsTVz] are extensions to the POSIX specification. All long options are extensions to the specification, some of them are provided for compatibility with GNU versions and some of them are own extensions. The old key notations +pos1 and -pos2 come from older versions of sort and are still supported but their use is highly discouraged. HISTORY A sort command first appeared in Version 1 AT&T UNIX. 
AUTHORS Gabor Kovesdan <gabor@FreeBSD.org>, Oleg Moskalenko <mom040267@gmail.com> NOTES This implementation of sort has no limits on input line length (other than imposed by available memory) or any restrictions on bytes allowed within lines. The performance depends highly on locale settings, efficient choice of sort keys and key complexity. The fastest sort is with locale C, on whole lines, with option -s. In general, locale C is the fastest, followed by single-byte locales, with multi-byte locales the slowest, but the correct collation order is always respected. As for the key specification, the simpler the keys are to process, the faster the sort will be. When sorting by arithmetic value, using -n results in much better performance than -g, so its use is encouraged whenever possible. macOS 14.5 September 4, 2019 macOS 14.5
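Several of the behaviors described above are easy to demonstrate from a POSIX shell. This is a sketch assuming a sort(1) that supports the -n, -V, -t, -k, -s, -u, and -c options described here (both the BSD implementation documented above and GNU sort accept them):

```shell
# -n compares arithmetic value; plain sort compares bytes, so "10" < "2".
printf '10\n2\n33\n' | sort      # 10 2 33 (lexicographic)
printf '10\n2\n33\n' | sort -n   # 2 10 33 (numeric)

# -V treats embedded version numbers numerically: 1.9 sorts before 1.10.
printf 'sort-1.10.tgz\nsort-1.9.tgz\n' | sort -V

# A restricted key: sort passwd-style records on the numeric third field.
printf 'root:x:0\ndaemon:x:1\nalice:x:1000\nbob:x:500\n' | sort -t: -k3,3n

# -s keeps equal-keyed records in input order; -u keeps one record per key.
printf 'a 2\nb 1\na 1\n' | sort -s -k1,1   # a 2, a 1, b 1
printf 'a 2\nb 1\na 1\n' | sort -u -k1,1   # a 2, b 1

# -c reports disorder through a non-zero exit status.
printf 'b\na\n' | sort -c 2>/dev/null || echo 'not sorted'
```

Note that which record of an equal-keyed run survives under -u follows from the stable sort that -u implies: the first record in input order is kept.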
|
sort – sort or merge records (lines) of text and binary files
|
sort [-bcCdfghiRMmnrsuVz] [-k field1[,field2]] [-S memsize] [-T dir] [-t char] [-o output] [file ...] sort --help sort --version
| null | null |
xpath
|
xpath uses the XML::XPath perl module to make XPath queries to any XML document. The XML::XPath module aims to comply exactly with the XPath specification at "http://www.w3.org/TR/xpath" and yet allows extensions to be added in the form of functions. The script takes any number of XPath pointers and tries to apply them to each XML document given on the command line. If no file arguments are given, the query is done using "STDIN" as an XML document. When multiple queries exist, the result of each query is used as the context for the next, and only the result of the last one is output. The context of the first query is always the root of the current document.
|
xpath - a script to query XPath statements in XML documents.
|
xpath [-s suffix] [-p prefix] [-n] [-q] -e query [-e query] ... [file] ...
|
-q Be quiet. Output only errors (and no separator) on stderr. -n Never use an external DTD, i.e., instantiate the XML::Parser module with 'ParseParamEnt => 0'. -s suffix Place "suffix" at the end of each entry. Default is a linefeed. -p prefix Place "prefix" preceding each entry. Default is nothing. BUGS The author of this man page is not very fluent in English. Please send him (fabien@tzone.org) any corrections concerning this text. SEE ALSO XML::XPath LICENSE AND COPYRIGHT This module is copyright 2000 AxKit.com Ltd. This is free software, and as such comes with NO WARRANTY. No dates are used in this module. You may distribute this module under the terms of either the Gnu GPL, or the Artistic License (the same terms as Perl itself). For support, please subscribe to the Perl-XML mailing list at the URL <http://listserv.activestate.com/mailman/listinfo/perl-xml> perl v5.34.0 2017-07-27 XPATH(1)
| null |
snmpdf
|
snmpdf is simply a networked version of the typical df command. It checks the disk space on the remote machine by examining the HOST-RESOURCES-MIB's hrStorageTable or the UCD-SNMP-MIB's dskTable. By default, the hrStorageTable is preferred as it typically contains more information. However, the -Cu argument can be passed to snmpdf to force the usage of the dskTable. AGENT identifies a target SNMP agent, which is instrumented to monitor the given objects. At its simplest, the AGENT specification will consist of a hostname or an IPv4 address. In this situation, the command will attempt communication with the agent, using UDP/IPv4 to port 161 of the given target host. See the snmpcmd(1) manual page for a full list of the possible formats for AGENT. See the snmpd.conf(5) manual page on setting up the dskTable using the disk directive in the snmpd.conf file.
|
snmpdf - display disk space usage on a network entity via SNMP
|
snmpdf [COMMON OPTIONS] [-Cu] AGENT
|
COMMON OPTIONS Please see snmpcmd(1) for a list of possible values for COMMON OPTIONS as well as their descriptions. -Cu Forces the command to use dskTable in mib UCD-SNMP-MIB instead of the default to determine the storage information. Generally, the default use of hrStorageTable in mib HOST-RESOURCES-MIB is preferred because it typically contains more information.
|
% snmpdf -v 2c -c public localhost Description size (kB) Used Available Used% / 7524587 2186910 5337677 29% /proc 0 0 0 0% /etc/mnttab 0 0 0 0% /var/run 1223088 32 1223056 0% /tmp 1289904 66848 1223056 5% /cache 124330 2416 121914 1% /vol 0 0 0 0% Real Memory 524288 447456 76832 85% Swap Space 1420296 195192 1225104 13% SEE ALSO snmpd.conf(5), snmp.conf(5) V5.6.2.1 25 Jul 2003 SNMPDF(1)
|
whoami
|
The whoami utility has been obsoleted by the id(1) utility, and is equivalent to “id -un”. The command “id -p” is suggested for normal interactive use. The whoami utility displays your effective user ID as a name. EXIT STATUS The whoami utility exits 0 on success, and >0 if an error occurs. SEE ALSO id(1) macOS 14.5 June 6, 1993 macOS 14.5
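The stated equivalence with id(1) can be checked directly. A minimal sketch, assuming a POSIX shell with both utilities on the PATH:

```shell
# whoami resolves the effective user ID to a login name, same as id -un.
whoami
id -un

# The two commands agree for any effective UID.
test "$(whoami)" = "$(id -un)" && echo 'equivalent'
```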
|
whoami – display effective user id
|
whoami
| null | null |
sc_usage
|
sc_usage displays an ongoing sample of system call and page fault usage statistics for a given process in a “top-like” fashion. It requires root privileges due to the kernel tracing facility it uses to operate. Page faults can be of the following types: PAGE_IN page had to be read from disk ZERO_FILL page was created and zero filled COW page was copied from another page CACHE_HIT page was found in the cache The arguments are as follows: -c When the -c option is specified, it expects a path to a codefile that contains the mappings for the system calls. This option overrides the default location of the system call codefile which is found in /usr/share/misc/trace.codes. -e Specifying the -e option generates output that is sorted by call count. This overrides the default sort by time. -l The -l option causes sc_usage to turn off its continuous window updating style of output and instead output as a continuous scrolling of data. -s By default, sc_usage updates its output at one second intervals. This sampling interval may be changed by specifying the -s option. Enter the interval in seconds. pid | cmd | -E execute The last argument must be a process id, a running command name, or, using the -E option, an execution path followed by optional arguments. The system call usage data for the process or command is displayed. If the -E flag is used, sc_usage will launch the executable, pass along any optional arguments and display system call usage data for that executable. 
The data columns displayed are as follows: TYPE the system call type NUMBER the system call count CPU_TIME the amount of cpu time consumed WAIT_TIME the absolute time the process is waiting CURRENT_TYPE the current system call type LAST_PATHNAME_WAITED_FOR for each active thread, the last pathname that was referenced by a system call that blocked CUR_WAIT_TIME the cumulative time that a thread has been blocked THRD# the thread number PRI current scheduling priority The sc_usage command also displays some global state in the first few lines of output, including the number of preemptions, context switches, threads, faults and system calls, found during the sampling period. The current time and the elapsed time that the command has been running is also displayed here. The sc_usage command is also SIGWINCH savvy, so adjusting your window geometry may change the list of system calls being displayed. Typing a ‘q’ will cause sc_usage to exit immediately. Typing any other character will cause sc_usage to reset its counters and the display. SAMPLE USAGE sc_usage Finder -e -s2 sc_usage will sort the Finder process usage data according to system call count and update the output at 2 second intervals. SEE ALSO fs_usage(1), latency(1), top(1) macOS October 28, 2002 macOS
|
sc_usage – show system call usage statistics
|
sc_usage [-c codefile] [-e] [-l] [-s interval] pid | cmd | -E execute
| null | null |
jot
|
The jot utility is used to print out increasing, decreasing, random, or redundant data, usually numbers, one per line. The following options are available: -r Generate random data instead of the default sequential data. -b word Just print word repetitively. -w word Print word with the generated data appended to it. Octal, hexadecimal, exponential, ASCII, zero padded, and right-adjusted representations are possible by using the appropriate printf(3) conversion specification inside word, in which case the data are inserted rather than appended. -c This is an abbreviation for -w %c. -s string Print data separated by string. Normally, newlines separate data. -n Do not print the final newline normally appended to the output. -p precision Print only as many digits or characters of the data as indicated by the integer precision. In the absence of -p, the precision is the greater of the precisions of begin and end. The -p option is overridden by whatever appears in a printf(3) conversion following -w. The last four arguments indicate, respectively, the number of data, the lower bound, the upper bound, and the step size or, for random data, the seed. While at least one of them must appear, any of the other three may be omitted, and will be considered as such if given as - or as an empty string. Any three of these arguments determines the fourth. If four are specified and the given and computed values of reps conflict, the lower value is used. If one or two are specified, defaults are assigned starting with s, which assumes a default of 1 (or -1 if begin and end specify a descending range). Then the default values are assigned to the leftmost omitted arguments until three arguments are set. Defaults for the four arguments are, respectively, 100, 1, 100, and 1, except that when random data are requested, the seed, s, is picked randomly. The reps argument is expected to be an unsigned integer, and if given as zero is taken to be infinite. 
The begin and end arguments may be given as real numbers or as characters representing the corresponding value in ASCII. The last argument must be a real number. Random numbers are obtained through arc4random(3) when no seed is specified, and through random(3) when a seed is given. When jot is asked to generate random integers or characters with begin and end values in the range of the random number generator function and no format is specified with one of the -w, -b, or -p options, jot will arrange for all the values in the range to appear in the output with an equal probability. In all other cases be careful to ensure that the output format's rounding or truncation will not skew the distribution of output values in an unintended way. The name jot derives in part from iota, a function in APL. Rounding and truncation The jot utility uses double precision floating point arithmetic internally. Before printing a number, it is converted depending on the output format used. If no output format is specified or the output format is a floating point format (‘E’, ‘G’, ‘e’, ‘f’, or ‘g’), the value is rounded using the printf(3) function, taking into account the requested precision. If the output format is an integer format (‘D’, ‘O’, ‘U’, ‘X’, ‘c’, ‘d’, ‘i’, ‘o’, ‘u’, or ‘x’), the value is converted to an integer value by truncation. As an illustration, consider the following command: $ jot 6 1 10 0.5 1 2 2 2 3 4 By requesting an explicit precision of 1, the values generated before rounding can be seen. The .5 values are rounded down if the integer part is even, up otherwise. $ jot -p 1 6 1 10 0.5 1.0 1.5 2.0 2.5 3.0 3.5 By offsetting the values slightly, the values generated by the following command are always rounded down: $ jot -p 0 6 .9999999999 10 0.5 1 1 2 2 3 3 Another way of achieving the same result is to force truncation by specifying an integer format: $ jot -w %d 6 1 10 0.5 EXIT STATUS The jot utility exits 0 on success, and >0 if an error occurs.
|
jot – print sequential or random data
|
jot [-cnr] [-b word] [-w word] [-s string] [-p precision] [reps [begin [end [s]]]]
| null |
The command jot - 1 10 prints the integers from 1 to 10, while the command jot 21 -1 1.00 prints 21 evenly spaced numbers increasing from -1 to 1. The ASCII character set is generated with jot -c 128 0 and the strings xaa through xaz with jot -w xa%c 26 a while 20 random 8-letter strings are produced with jot -r -c 160 a z | rs -g 0 8 Infinitely many yes's may be obtained through jot -b yes 0 and thirty ed(1) substitution commands applying to lines 2, 7, 12, etc. is the result of jot -w %ds/old/new/ 30 2 - 5 The stuttering sequence 9, 9, 8, 8, 7, etc. can be produced by truncating the output precision and a suitable choice of step size, as in jot -w %d - 9.5 0 -.5 and a file containing exactly 1024 bytes is created with jot -b x 512 > block Finally, to set tabs four spaces apart starting from column 10 and ending in column 132, use expand -`jot -s, - 10 132 4` and to print all lines 80 characters or longer, grep `jot -s "" -b . 80` DIAGNOSTICS The following diagnostic messages deserve special explanation: illegal or unsupported format '%s' The requested conversion format specifier for printf(3) was not of the form %[#][ ][{+,-}][0-9]*[.[0-9]*]? where “?” must be one of [l]{d,i,o,u,x} or {c,e,f,g,D,E,G,O,U,X} range error in conversion A value to be printed fell outside the range of the data type associated with the requested output format. too many conversions More than one conversion format specifier has been supplied, but only one is allowed. SEE ALSO ed(1), expand(1), rs(1), seq(1), yes(1), arc4random(3), printf(3), random(3) HISTORY The jot utility first appeared in 4.2BSD. AUTHORS John A. Kunze macOS 14.5 September 21, 2019 macOS 14.5
|