as
The as command translates assembly code in the named files to object code. If no files are specified, as reads from stdin. All undefined symbols in the assembly are treated as global. The output of the assembly is left in the file a.out by default. The program /usr/bin/as is actually a driver that executes assemblers for specific target architectures. If no target architecture is specified, it defaults to the architecture of the host it is running on.
as - Mac OS X Mach-O GNU-based assemblers
as [ option ... ] [ file ... ]
-o name
        Write the output to the file name instead of the default a.out.

-arch arch_type
        Specifies the target architecture, arch_type, of the assembler to be executed. The target assemblers for each architecture are in /usr/libexec/gcc/darwin/arch_type/as or /usr/local/libexec/gcc/darwin/arch_type/as. There is only one assembler per architecture family; if the specified target architecture is a machine-specific implementation, the assembler for that architecture family is executed (e.g., /usr/libexec/gcc/darwin/ppc/as for -arch ppc604e). See arch(3) for the currently known arch_types.

-arch_multiple
        Precede any displayed messages with a line stating the program name (as) and the architecture (from the -arch arch_type flag), to distinguish which architecture the error messages refer to. When the cc(1) driver program is run with multiple -arch flags, it invokes as with the -arch_multiple option.

-force_cpusubtype_ALL
        By default, the assembler produces the CPU subtype ALL for the object file it is assembling if it finds no implementation-specific instructions. Also by default, the assembler allows implementation-specific instructions and combines the CPU subtypes for those specific implementations. The combining of specific implementations is architecture-dependent; if some combination of instructions is not allowed, an error is generated. With the -force_cpusubtype_ALL flag, all instructions are allowed and the object file's CPU subtype is the ALL subtype. If the target architecture specified is a machine-specific implementation (e.g., -arch ppc603, -arch i486), the assembler flags as errors any instructions that are not supported on that architecture, and produces an object file with the CPU subtype for that specific implementation (even if no implementation-specific instructions are used). The -force_cpusubtype_ALL flag is the default for all x86 and x86_64 architectures.

-dynamic
        Enables dynamic linking features. This is the default.
-static
        Causes the assembler to treat any dynamic-linking features as errors. Also causes the .text directive to not include the pure_instructions section attribute.

--
        Use stdin for the assembly source input.

-n
        Instructs the assembler not to assume that the assembly file starts with a .text directive. Use this option when the output file is not to contain a (__TEXT,__text) section, or when that section is not to be the first one in the output file.

-f
        Fast; skip the assembler preprocessor (``app''). The assembler preprocessor can also be turned off by starting the assembly file with "#NO_APP\n". This is intended for use by compilers that produce assembly code in a strict "clean" format specifying exactly where whitespace can go. The assembler preprocessor needs to be run on hand-written assembly files and/or files that have been preprocessed by the C preprocessor cpp. This is typically needed when assembly files are assembled through the cc(1) command, which automatically runs the C preprocessor on assembly source files. The assembler preprocessor strips out excess spaces, turns single-quoted characters into decimal constants, and turns # <number> <filename> <level> into .line <number>;.file <filename> pairs. When the assembler preprocessor has been turned off by a "#NO_APP\n" at the start of a file, it can be turned back on and off again with pairs of "#APP\n" and "#NO_APP\n" at the beginnings of lines. This is used by the compiler to wrap assembly statements produced from asm() statements.

-g
        Produce debugging information for the symbolic debugger gdb(1) so that the assembly source can be debugged symbolically. The debugger depends on correct use of the C preprocessor's #include directive or the assembler's .include directive: any include file that produces instructions in the (__TEXT,__text) section must be included while a .text directive is in effect.
        In other words, there must be a .text directive before the include, and the .text directive must still be in effect at the end of the include file. Otherwise, the debugger will get confused when stopped in that assembly file.

-v
        Display the version of the assembler (both the Mac OS X version and the GNU version it is based on).

-V
        Print the path and the command line of the assembler that the assembler driver is using.

-Idir
        Add the directory dir to the list of directories to search for files included with the .include directive. The default place to search is the current directory.

-W
        Suppress warnings.

-L
        Save non-global defined labels beginning with an 'L'; these labels are normally discarded to save space in the resultant symbol table. The compiler generates such temporary labels.

-q
        Use the clang(1) integrated assembler instead of the GNU-based system assembler. This is the default for the x86 and ARM architectures.

-Q
        Use the GNU-based system assembler. Note that Apple's built-in system assemblers are deprecated; programs that rely on these assemblers should move to the clang(1) integrated assembler instead, using the -q flag.

Assembler options for the PowerPC processors

-static_branch_prediction_Y_bit
        Treat a single trailing '+' or '-' after a conditional PowerPC branch instruction as a static branch prediction that sets the Y-bit in the opcode. Pairs of trailing "++" or "--" always set the AT-bits. This is the default for Mac OS X.

-static_branch_prediction_AT_bits
        Treat a single trailing '+' or '-' after a conditional PowerPC branch instruction as a static branch prediction that sets the AT-bits in the opcode. Pairs of trailing "++" or "--" always set the AT-bits, but with this option a warning is issued if this syntax is used. With this flag the assembler behaves like the IBM tools.

-no_ppc601
        Treat any PowerPC 601 instructions as errors.

FILES
        a.out    output file

SEE ALSO
        cc(1), ld(1), nm(1), otool(1), arch(3), Mach-O(5)

Apple Inc.                June 23, 2020                AS(1)
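A few typical invocations illustrate the driver behaviour described above. This is a sketch only: hello.s is a hypothetical source file, and the -arch and -force_cpusubtype_ALL flags are specific to the Mac OS X driver documented here.

```shell
# Assemble a source file into a named object instead of the default a.out
as -o hello.o hello.s

# Ask the driver for a specific target architecture, forcing the ALL
# CPU subtype on the resulting object file (Mac OS X driver flags)
as -arch x86_64 -force_cpusubtype_ALL -o hello.o hello.s

# Inspect the resulting object's symbol table (see nm(1))
nm hello.o
```

Undefined symbols left in hello.o are treated as global, so they are expected to be resolved later by ld(1).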
splain
The "diagnostics" Pragma

This module extends the terse diagnostics normally emitted by both the perl compiler and the perl interpreter (from running perl with a -w switch or "use warnings"), augmenting them with the more explicative and endearing descriptions found in perldiag. Like the other pragmata, it affects the compilation phase of your program rather than merely the execution phase.

To use in your program as a pragma, merely invoke use diagnostics; at the start (or near the start) of your program. (Note that this does enable perl's -w flag.) Your whole compilation will then be subject(ed :-) to the enhanced diagnostics. These still go out STDERR.

Due to the interaction between runtime and compiletime issues, and because it's probably not a very good idea anyway, you may not use "no diagnostics" to turn them off at compiletime. However, you may control their behaviour at runtime using the disable() and enable() methods to turn them off and on respectively.

The -verbose flag first prints out the perldiag introduction before any other diagnostics. The $diagnostics::PRETTY variable can generate nicer escape sequences for pagers.

Warnings dispatched from perl itself (or more accurately, those that match descriptions found in perldiag) are only displayed once (no duplicate descriptions). User-generated warnings a la warn() are unaffected, allowing duplicate user messages to be displayed.

This module also adds a stack trace to the error message when perl dies. This is useful for pinpointing what caused the death. The -traceonly (or just -t) flag turns off the explanations of warning messages, leaving just the stack traces. So if your script is dying, run it again with

    perl -Mdiagnostics=-traceonly my_bad_script

to see the call stack at the time of death. By supplying the -warntrace (or just -w) flag, any warnings emitted will also come with a stack trace.
The splain Program While apparently a whole nuther program, splain is actually nothing more than a link to the (executable) diagnostics.pm module, as well as a link to the diagnostics.pod documentation. The -v flag is like the "use diagnostics -verbose" directive. The -p flag is like the $diagnostics::PRETTY variable. Since you're post-processing with splain, there's no sense in being able to enable() or disable() processing. Output from splain is directed to STDOUT, unlike the pragma.
diagnostics, splain - produce verbose warning diagnostics
Using the "diagnostics" pragma:

    use diagnostics;
    use diagnostics -verbose;
    enable diagnostics;
    disable diagnostics;

Using the "splain" standalone filter program:

    perl program 2>diag.out
    splain [-v] [-p] diag.out

Using diagnostics to get stack traces from a misbehaving script:

    perl -Mdiagnostics=-traceonly my_script.pl
The following file is certain to trigger a few errors at both runtime and compiletime:

    use diagnostics;
    print NOWHERE "nothing\n";
    print STDERR "\n\tThis message should be unadorned.\n";
    warn "\tThis is a user warning";
    print "\nDIAGNOSTIC TESTER: Please enter a <CR> here: ";
    my $a, $b = scalar <STDIN>;
    print "\n";
    print $x/$y;

If you prefer to run your program first and look at its problems afterwards, do this:

    perl -w test.pl 2>test.out
    ./splain < test.out

Note that this is not in general possible in shells of more dubious heritage, as the theoretical

    (perl -w test.pl >/dev/tty) >& test.out
    ./splain < test.out

just moved the existing stdout to somewhere else.

If you don't want to modify your source code, but still want on-the-fly warnings, do this:

    exec 3>&1; perl -w test.pl 2>&1 1>&3 3>&- | splain 1>&2 3>&-

Nifty, eh?

If you want to control warnings on the fly, do something like this. Make sure you do the "use" first, or you won't be able to get at the enable() or disable() methods.

    use diagnostics; # checks entire compilation phase
    print "\ntime for 1st bogus diags: SQUAWKINGS\n";
    print BOGUS1 'nada';
    print "done with 1st bogus\n";

    disable diagnostics; # only turns off runtime warnings
    print "\ntime for 2nd bogus: (squelched)\n";
    print BOGUS2 'nada';
    print "done with 2nd bogus\n";

    enable diagnostics; # turns back on runtime warnings
    print "\ntime for 3rd bogus: SQUAWKINGS\n";
    print BOGUS3 'nada';
    print "done with 3rd bogus\n";

    disable diagnostics;
    print "\ntime for 4th bogus: (squelched)\n";
    print BOGUS4 'nada';
    print "done with 4th bogus\n";

INTERNALS

Diagnostic messages derive from the perldiag.pod file when available at runtime. Otherwise, they may be embedded in the file itself when the splain package is built. See the Makefile for details.
If an extant $SIG{__WARN__} handler is discovered, it will continue to be honored, but only after the diagnostics::splainthis() function (the module's $SIG{__WARN__} interceptor) has had its way with your warnings. There is a $diagnostics::DEBUG variable you may set if you're desperately curious what sorts of things are being intercepted.

    BEGIN { $diagnostics::DEBUG = 1 }

BUGS

Not being able to say "no diagnostics" is annoying, but may not be insurmountable.

The "-pretty" directive is called too late to affect matters. You have to do this instead, and before you load the module:

    BEGIN { $diagnostics::PRETTY = 1 }

I could start up faster by delaying compilation until it should be needed, but this gets a "panic: top_level" when using the pragma form in Perl 5.001e.

While it's true that this documentation is somewhat subserious, if you use a program named splain, you should expect a bit of whimsy.

AUTHOR

Tom Christiansen <tchrist@mox.perl.com>, 25 June 1995.

perl v5.34.1                2024-04-13                SPLAIN(1)
fold
The fold utility is a filter which folds the contents of the specified files, or the standard input if no files are specified, breaking the lines to have a maximum of 80 columns.

The options are as follows:

-b
        Count width in bytes rather than column positions.

-s
        Fold lines after the last blank character within the first width column positions (or bytes).

-w width
        Specify a line width to use instead of the default 80 columns. The width value should be a multiple of 8 if tabs are present, or the tabs should be expanded using expand(1) before using fold.

ENVIRONMENT

The LANG, LC_ALL and LC_CTYPE environment variables affect the execution of fold as described in environ(7).
fold – fold long lines for finite width output device
fold [-bs] [-w width] [file ...]
Fold text in standard input with a width of 15 columns:

    $ echo "I am smart enough to know that I am dumb" | fold -w 15
    I am smart enou
    gh to know that
     I am dumb

Same as above but breaking lines after the last blank character:

    $ echo "I am smart enough to know that I am dumb" | fold -s -w 15
    I am smart
    enough to know
    that I am dumb

SEE ALSO
        expand(1), fmt(1)

STANDARDS
        The fold utility conforms to IEEE Std 1003.1-2001 ("POSIX.1").

HISTORY
        The fold utility first appeared in 1BSD. It was rewritten for 4.3BSD-Reno to improve speed and modernize style. The -b and -s options were added to NetBSD 1.0 for IEEE Std 1003.2 ("POSIX.2") compliance.

AUTHORS
        Bill Joy wrote the original version of fold on June 28, 1977. Kevin Ruddy rewrote the command in 1990, and J. T. Conklin added the missing options in 1993.

BUGS
        If underlining (see ul(1)) is present it may be messed up by folding.

macOS 14.5                October 29, 2020                macOS 14.5
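The advice above about tab characters matters in practice: fold's column arithmetic only lines up with tab stops when the width is a multiple of 8, so expanding tabs first is the safer habit. A small sketch (the input text is made up):

```shell
# Expand tabs to spaces before folding, so every column position
# corresponds to exactly one character; -s then breaks at blanks.
printf 'one\ttwo\tthree four five six seven\n' | expand | fold -s -w 16
```

Without the expand(1) step, each tab would occupy several display columns while fold counts positions, and lines could come out wider than intended.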
gnumake
The purpose of the make utility is to determine automatically which pieces of a large program need to be recompiled, and to issue the commands to recompile them. The manual describes the GNU implementation of make, which was written by Richard Stallman and Roland McGrath, and is currently maintained by Paul Smith. Our examples show C programs, since they are most common, but you can use make with any programming language whose compiler can be run with a shell command. In fact, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.

To prepare to use make, you must write a file called the makefile that describes the relationships among the files in your program, and states the commands for updating each file. In a program, typically the executable file is updated from object files, which are in turn made by compiling source files.

Once a suitable makefile exists, each time you change some source files, this simple shell command:

    make

suffices to perform all necessary recompilations. The make program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base.

make executes commands in the makefile to update one or more target names, where name is typically a program. If no -f option is present, make will look for the makefiles GNUmakefile, makefile, and Makefile, in that order.

Normally you should call your makefile either makefile or Makefile. (We recommend Makefile because it appears prominently near the beginning of a directory listing, right near other important files such as README.) The first name checked, GNUmakefile, is not recommended for most makefiles. You should use this name only if you have a makefile that is specific to GNU make and will not be understood by other versions of make.
If makefile is `-', the standard input is read. make updates a target if it depends on prerequisite files that have been modified since the target was last modified, or if the target does not exist.
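The dependency chain described above can be tried end-to-end with a throwaway makefile. The filenames here are illustrative, and the sketch assumes cc and GNU make are on PATH; note that recipe lines in a makefile must begin with a tab character, which is why the file is written with printf.

```shell
mkdir -p /tmp/make-demo && cd /tmp/make-demo
printf 'int main(void){return 0;}\n' > hello.c

# Two rules: link hello from hello.o, and compile hello.o from hello.c.
printf 'hello: hello.o\n\tcc -o hello hello.o\n\nhello.o: hello.c\n\tcc -c hello.c\n' > Makefile

make          # first run: compiles hello.o, then links hello
make          # second run: nothing to do, targets are up to date
touch hello.c # bump the source's modification time
make          # only the out-of-date targets are rebuilt
```

The second invocation does nothing because both targets are newer than their prerequisites; touching hello.c makes hello.o (and therefore hello) out of date again.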
make - GNU make utility to maintain groups of programs
make [ -f makefile ] [ options ] ... [ targets ] ...

WARNING
        This man page is an extract of the documentation of GNU make. It is updated only occasionally, because the GNU project does not use nroff. For complete, current documentation, refer to the Info file make.info, which is made from the Texinfo source file make.texi.
-b, -m
        These options are ignored for compatibility with other versions of make.

-B, --always-make
        Unconditionally make all targets.

-C dir, --directory=dir
        Change to directory dir before reading the makefiles or doing anything else. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make.

-d
        Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied: everything interesting about how make decides what to do.

--debug[=FLAGS]
        Print debugging information in addition to normal processing. If the FLAGS are omitted, the behavior is the same as if -d was specified. FLAGS may be a for all debugging output (same as using -d), b for basic debugging, v for more verbose basic debugging, i for showing implicit rules, j for details on invocation of commands, and m for debugging while remaking makefiles.

-e, --environment-overrides
        Give variables taken from the environment precedence over variables from makefiles.

-f file, --file=file, --makefile=file
        Use file as a makefile.

-i, --ignore-errors
        Ignore all errors in commands executed to remake files.

-I dir, --include-dir=dir
        Specifies a directory dir to search for included makefiles. If several -I options are used to specify several directories, the directories are searched in the order specified. Unlike the arguments to other flags of make, directories given with -I flags may come directly after the flag: -Idir is allowed, as well as -I dir. This syntax is allowed for compatibility with the C preprocessor's -I flag.

-j [jobs], --jobs[=jobs]
        Specifies the number of jobs (commands) to run simultaneously.
        If there is more than one -j option, the last one is effective. If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.

-k, --keep-going
        Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other dependencies of these targets can be processed all the same.

-l [load], --load-average[=load]
        Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least load (a floating-point number). With no argument, removes a previous load limit.

-L, --check-symlink-times
        Use the latest mtime between symlinks and target.

-n, --just-print, --dry-run, --recon
        Print the commands that would be executed, but do not execute them.

-o file, --old-file=file, --assume-old=file
        Do not remake the file file even if it is older than its dependencies, and do not remake anything on account of changes in file. Essentially the file is treated as very old and its rules are ignored.

-p, --print-data-base
        Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the -v switch (see below). To print the data base without trying to remake any files, use make -p -f/dev/null.

-q, --question
        ``Question mode''. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, nonzero otherwise.

-r, --no-builtin-rules
        Eliminate use of the built-in implicit rules. Also clear out the default list of suffixes for suffix rules.

-R, --no-builtin-variables
        Don't define any built-in variables.

-s, --silent, --quiet
        Silent operation; do not print the commands as they are executed.

-S, --no-keep-going, --stop
        Cancel the effect of the -k option.
        This is never necessary except in a recursive make where -k might be inherited from the top-level make via MAKEFLAGS, or if you set -k in MAKEFLAGS in your environment.

-t, --touch
        Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of make.

-v, --version
        Print the version of the make program plus a copyright, a list of authors, and a notice that there is no warranty.

-w, --print-directory
        Print a message containing the working directory before and after other processing. This may be useful for tracking down errors from complicated nests of recursive make commands.

--no-print-directory
        Turn off -w, even if it was turned on implicitly.

-W file, --what-if=file, --new-file=file, --assume-new=file
        Pretend that the target file has just been modified. When used with the -n flag, this shows you what would happen if you were to modify that file. Without -n, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make.

--warn-undefined-variables
        Warn when an undefined variable is referenced.

EXIT STATUS
        GNU make exits with a status of zero if all makefiles were successfully parsed and no targets that were built failed. A status of one will be returned if the -q flag was used and make determines that a target needs to be rebuilt. A status of two will be returned if any errors were encountered.

SEE ALSO
        The GNU Make Manual

BUGS
        See the chapter `Problems and Bugs' in The GNU Make Manual.

AUTHOR
        This manual page contributed by Dennis Morse of Stanford University. It has been reworked by Roland McGrath. Further updates contributed by Mike Frysinger.

COPYRIGHT
        Copyright (C) 1992, 1993, 1996, 1999 Free Software Foundation, Inc. This file is part of GNU make.
GNU make is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. GNU make is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNU make; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA. GNU 22 August 1989 MAKE(1)
log
log is used to access system-wide log messages created by os_log, os_trace and other logging systems. Some commands require root privileges.

Available commands and their options:

help
        General help, or help specific to a command argument.

collect
        Collect the system logs into a .logarchive that can be viewed later with tools such as log or Console. If an output path is not specified, system_logs.logarchive will be created in the current directory.

        --output path
                Save the archive to the specified path or file. If the path is a directory, a file named system_logs.logarchive will be created in the specified directory. If the path contains the extension .logarchive, a new logarchive will be created with that name at the specified path.

        --start date/time
                Limits the content capture to the date and time forward to now. The following date/time formats are accepted: "YYYY-MM-DD", "YYYY-MM-DD HH:MM:SS", "YYYY-MM-DD HH:MM:SSZZZZZ"

        --last num[m|h|d]
                Limits the captured events to the period starting at the given interval ago from the current time. Time is assumed in seconds unless specified. Example: "--last 2m" or "--last 3h"

        --size num[k|m]
                The amount of data to be captured in kilobytes or megabytes. This is an approximation, as the actual size may be more than requested. Example: "--size 100k" or "--size 20m"

        --device
                Collect system logs from a paired device (first device found).

        --device-name name
                Collect system logs from the paired device with the given name.

        --device-udid UDID
                Collect system logs from the paired device with the given UDID.

config
        Configure, reset or read settings for the logging system. config commands can act system-wide or on a subsystem. If not specified, system-wide is assumed. If subsystem is specified, category is optional. Requires root access.

        --reset | --status
                Option to show or reset the current settings for the system or a specific subsystem. If reset or status is not specified, a change to the configuration is assumed.
        For example, "log config --reset --subsystem com.mycompany.mysubsystem" will reset the subsystem to its default settings. "log config --status" will show the current system-wide logging settings. "log config --mode "level: default"" will set the system log level to default.

        --subsystem name
                Set or get the mode for a specified subsystem.

        --category name
                Set or get the mode for a specified category. If category is supplied, subsystem is required.

        --process pid
                Set the mode for a specified pid.

        --mode mode(s)
                Will enable the given mode. Modes include:

                level: {off | default | info | debug}
                        The level is a hierarchy, e.g. debug implies debug, info, and default.

                persist: {off | default | info | debug}
                        The persist mode is a hierarchy, e.g. debug implies debug, info, and default.

erase
        Delete selected log data from the system. If no arguments are specified, the main log datastore and inflight log data will be deleted.

        --all
                Deletes the main log datastore and inflight log data, as well as time-to-live (TTL) data and the fault and error content.

        --ttl
                Deletes time-to-live log content.

show
        Shows the contents of the system log datastore, an archive, or a specific tracev3 file. If a file or archive is not specified, the system datastore will be shown. If it is from a future system version that log cannot understand, it exits with EX_DATAERR (65) and an error message. The output contains only default level messages unless --info and/or --debug are specified. The output does not contain signposts unless --signpost is specified.

        --archive archive
                Display events stored in the given archive. The archive must be a valid log archive bundle with the suffix .logarchive.

        --file file
                Display events stored in the given .tracev3 file. In order to be decoded, the file must be contained within a valid .logarchive bundle, or be part of the system logs directory.

        --[no-]pager
                Enable or disable pagination of output via less.

        --predicate filter
                Filters messages based on the provided predicate, based on NSPredicate.
                A compound predicate or multiple predicates can be provided. See the section "PREDICATE-BASED FILTERING" below.

        --process pid | process
                The process on which to operate. This option can be passed more than once to operate on multiple processes.

        --source
                Include symbol names and source line numbers for messages, if available.

        --style style
                Control the output formatting of events:

                default
                        Human-readable output. ISO-8601 date (microsecond precision and timezone offset), thread ID, log type, activity ID, process ID, TTL, process, subsystem, category and message content.

                compact
                        Compact human-readable output. ISO-8601 date (millisecond precision), abbreviated log type, process, process ID, thread ID, subsystem, category and message content. This output uses less horizontal space to indicate event metadata than the default style.

                json
                        JSON output. Event data is synthesized as an array of JSON dictionaries.

                ndjson
                        Line-delimited JSON output. Event data is synthesized as JSON dictionaries, each emitted on a single line. A trailing record, identified by the inclusion of a "finished" field, is emitted to indicate the end of events.

                syslog
                        syslog-style output intended to be more compatible with the output format used by syslog(1).

        --color auto | always | none
                Control the display of colorized output. By default, log will disable colorized output when not directed to a terminal, unless overridden using always.

        --start date/time
                Shows content starting from the provided date. The following date/time formats are accepted: "YYYY-MM-DD", "YYYY-MM-DD HH:MM:SS", "YYYY-MM-DD HH:MM:SSZZZZZ"

        --end date/time
                Shows content up to the provided date. The following date/time formats are accepted: "YYYY-MM-DD", "YYYY-MM-DD HH:MM:SS", "YYYY-MM-DD HH:MM:SSZZZZZ"

        --last time[m|h|d] | boot
                Shows events that occurred within the given time relative to the end of the log archive, or beginning at the last boot contained within the log archive. Time may be specified as minutes, hours or days.
                Time is assumed in seconds unless specified. Example: "--last 2m" or "--last 3h"

        --timezone local | timezone
                Displays content in the local timezone, or a specified timezone (see tzset(3)). If not specified, the output is displayed in the timezone at the time the entry was written to the source archive or file.

        --[no-]info
                Disable or enable info-level messages in the output. (By default, info messages are not displayed.)

        --[no-]debug
                Disable or enable debug-level messages in the output. (By default, debug messages are not displayed.)

        --[no-]signpost
                Disable or enable the display of signposts in the output. (By default, signposts are not displayed.)

stats
        Shows a breakdown of the events contained within a log datastore or archive. The following options can be supplied to all modes of log stats:

        --archive archive
                Display statistics for events stored in the given archive. The archive must be a valid log archive bundle with the suffix .logarchive.

        --sort events | bytes
                Sort tabulated data output by number of events, or number of bytes.

        --count count | all
                Limit tabulated data to the given number of lines; all displays all entries in tables.

        --style human | json
                Control the format style of the requested output mode.

        In addition, one of the following output modes can be supplied:

        --overview
                Displays statistics for the entire archive.

        --per-book
                Displays statistics per log book, the subsections of a log archive.

        --per-file
                Displays statistics per file in the archive.

        --sender sender
                Displays statistics for a given sender image name.

        --process process
                Displays statistics for a given originating process.

        --predicate predicate
                Displays statistics for all events matching the given predicate.

stream
        Stream activities, log data or trace messages for the system or from a given process. By default, the command assumes system-wide streaming. Specifying a process id with the --process option will narrow the results.

        --level default | info | debug
                Shows messages at the specified level and below.
                The level is a hierarchy: specifying debug implies debug, info and default.

        --predicate filter
                Filters messages using the provided predicate based on NSPredicate. A compound predicate or multiple predicates can be provided. See the section "PREDICATE-BASED FILTERING" below.

        --process pid | process
                The process on which to operate. This option can be passed more than once to operate on multiple processes.

        --style default | compact | json | syslog
                Output the content in a different style.

        --color auto | always | none
                Highlight certain types of log messages. In auto, highlighting will be disabled if the output is detected to be non-TTY.

        --source
                Include symbol names and source line numbers for messages, if available.

        --timeout time[m|h|d]
                Time out the stream operation after the specified time, e.g. "--timeout 5m", "--timeout 1h". If minutes, hours or days are not specified, seconds will be used.

        --type activity | log | trace
                Dictates the type of events to stream from a process. By default, all types are streamed unless otherwise specified. Pass an appropriate --type for each requested type of event.

PREDICATE-BASED FILTERING

Using predicate-based filters via the --predicate option allows users to focus on messages based on the provided filter criteria. For detailed information on the use of predicate-based filtering, please refer to the Predicate Programming Guide:

    https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Predicates/Articles/pSyntax.html

The filter argument defines one or more pattern clauses following NSPredicate rules. See log help predicates for the full list of supported keys. Supported keys include:

eventType
        The type of event: activityCreateEvent, activityTransitionEvent, logEvent, signpostEvent, stateEvent, timesyncEvent, traceEvent and userActionEvent.

eventMessage
        The pattern within the message text, or activity name, of a log/trace entry.
messageType For logEvent and traceEvent, the type of the message itself: default, info, debug, error or fault. process The name of the process that originated the event. processImagePath The full path of the process that originated the event. sender The name of the library, framework, kernel extension, or mach-o image that originated the event. senderImagePath The full path of the library, framework, kernel extension, or mach-o image that originated the event. subsystem The subsystem used to log an event. Only works with log messages generated with os_log(3) APIs. category The category used to log an event. Only works with log messages generated with os_log(3) APIs. When category is used, the subsystem filter should also be provided. PREDICATE-BASED FILTERING EXAMPLES Filter for specific subsystem: log show --predicate 'subsystem == "com.example.my_subsystem"' Filter for specific subsystem and category: log show --predicate '(subsystem == "com.example.my_subsystem") && (category == "desired_category")' Filter for specific subsystem and categories: log show --predicate '(subsystem == "com.example.my_subsystem") && (category IN { "category1", "category2" })' Filter for a specific subsystem and sender(s): log show --predicate '(subsystem == "com.example.my_subsystem") && ((senderImagePath ENDSWITH "mybinary") || (senderImagePath ENDSWITH "myframework"))' PREDICATE-BASED FILTERING EXAMPLES WITH LOG LINE log show system_logs.logarchive --predicate 'subsystem == "com.example.subsystem" and category contains "CHECK"' Timestamp Thread Type Activity PID 2016-06-13 11:46:37.248693-0700 0x7c393 Default 0x0 10371 timestamp: [com.example.subsystem.CHECKTIME] Time is 06/13/2016 11:46:37 log show --predicate 'processImagePath endswith "hidd" and senderImagePath contains[cd] "IOKit"' --info Timestamp Thread Type Activity PID 2016-06-10 13:54:34.593220-0700 0x250 Info 0x0 113 hidd: (IOKit) [com.apple.iohid.default] Loaded 6 HID plugins ENVIRONMENT The following environment 
variables affect the execution of log: LOG_COLORS Controls the color of text output from log show. This string is a concatenation of pairs of the format fb, where f is the foreground color and b is the background color. The color designators are as follows: a black b red c green d brown e blue f magenta g cyan h light grey A bold black, usually shows up as dark grey B bold red C bold green D bold brown, usually shows up as yellow E bold blue F bold magenta G bold cyan H bold light grey; looks like bright white x default foreground or background Note that the above are standard ANSI colors. The actual display may differ depending on the color capabilities of the terminal in use. The order of the attributes is as follows: 1. timestamp 2. thread identifier 3. event type 4. activity identifier 5. process identifier 6. time-to-live 7. process name 8. sender image name 9. subsystem 10. category 11. event message 12. highlight color The default is "xxxxxxxxxxxxFxdxcxExxxxA", i.e. bold magenta process name, yellow sender, green subsystem, bold blue category and dark grey background for highlighted lines. LOG_STYLE Control the default output style of log show: default, compact, json or syslog. OS_ACTIVITY_MODE Change the mode of launched processes to: info Enables info level messages. Does not override logging Preferences that have info level disabled. debug Enables debug level messages, which include info level messages. Does not override logging Preferences that have info level or debug level disabled. OS_ACTIVITY_STREAM Change the type of streaming enabled. live Live streaming from the process using IPC. OS_ACTIVITY_PROPAGATE_MODE If set, will propagate the mode settings via activities. FILES You can control the execution of log show and log stream with a configuration file located at ~/.logrc. 
Given a ~/.logrc like this: # .logrc - default log(1) arguments, handy predicate shortcuts show: --style compact --last 1h --info # turn back off with --no-info --no-debug # turn back on with --debug predicate: app 'process == "application"' errors 'process == "application" and messageType == error' s 'process == "application" and ' # adjacent strings 'subsystem == "com.example.support"' # get merged log show would automatically run as though the arguments --style compact --last 1h --info --no-debug were passed in. Explicit options will override the arguments provided by ~/.logrc. Furthermore, running with --predicate app would be the same as using: --predicate 'process == "application"' The syntax of the ~/.logrc file is made of comments, section headers, options, words, and single-quoted strings. Comments start with the hash character and run to the end of the line. Otherwise, contents are whitespace-separated. The structure of the ~/.logrc file is broken into sections. Section headers are specified by a word and a colon. There are three kinds of sections. The show: and stream: sections operate similarly. Their contents are literal options and arguments that will be passed to the respective command as if they were entered on the command line. The predicate: section creates aliases for predicates. It is made up of pairs of: word 'predicate' where word is a combination of letters (presumably a simple, easy-to-type one) and predicate is some filtering logic, as described in the PREDICATE-BASED FILTERING section above. The predicate is delimited by single quotes, but adjacent quoted elements are "glued" together; this helps in making long predicates easier to read and write. SEE ALSO os_log(3), os_trace(3) Darwin May 10, 2016 Darwin
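The predicates and predicate aliases described above are plain NSPredicate strings, so scripts can assemble them before handing them to log. The sketch below is illustrative only; make_predicate is an invented helper, not part of log(1):

```shell
#!/bin/sh
# Compose a log(1) predicate for a subsystem and optional category.
# make_predicate is a hypothetical helper, not part of log(1) itself.
make_predicate() {
    subsystem=$1
    category=$2
    if [ -n "$category" ]; then
        printf '(subsystem == "%s") && (category == "%s")' "$subsystem" "$category"
    else
        printf 'subsystem == "%s"' "$subsystem"
    fi
}

# On macOS this could feed log show directly, e.g.:
#   log show --predicate "$(make_predicate com.example.my_subsystem)"
make_predicate com.example.my_subsystem desired_category
echo
```

Building the predicate with printf avoids hand-counting nested quotes on the command line.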
log – Access system wide log messages created by os_log, os_trace and other logging systems.
log [command [options]] log help [command] log collect [--output path] [--start date/time] [--size num [k|m]] [--last num [m|h|d]] [--device | --device-name name | --device-udid UDID] log config [--reset | --status] [--mode mode(s)] [--subsystem name [--category name]] [--process pid] log erase [--all] [--ttl] log show [--archive archive | --file file] [--predicate filter] [--process pid | process] [--source] [--style default | compact | json | ndjson | syslog] [--color auto | always | none] [--start date/time] [--end date/time] [--[no-]info] [--[no-]debug] [--[no-]pager] [--[no-]signpost] [--last time [m|h|d]] [--timezone local | timezone] log stats [--archive archive] [--sort events | bytes] [--count count | all] [--overview | --per-book | --per-file | --sender sender | --process process | --predicate predicate] log stream [--level default | info | debug] [--predicate filter] [--process pid | process] [--source] [--style default | compact | json | syslog] [--color auto | always | none] [--timeout time [m|h|d]] [--type activity | log | trace]
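As a sketch of how the modes in the synopsis above combine, the guarded script below shows a few representative invocations. The process and archive names are hypothetical, and log(1) exists only on macOS, so the guard makes the script a no-op elsewhere:

```shell
#!/bin/sh
# Representative log(1) invocations; names below are hypothetical.
if command -v log >/dev/null 2>&1; then
    # Archive the last hour of log data (typically needs admin rights):
    sudo log collect --last 1h --output ./system_logs.logarchive

    # Show messages from one process over the last 30 minutes, compactly:
    log show --predicate 'process == "backupd"' --style compact --last 30m

    # Stream info-level messages from a process, giving up after a minute:
    log stream --process Finder --level info --timeout 1m
fi
```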
tmutil
tmutil provides methods of controlling and interacting with Time Machine, as well as examining and manipulating Time Machine backups. Common abilities include restoring data from backups, editing exclusions, and comparing backups. Several, but not all, verbs require root and Full Disk Access privileges. Full Disk Access privileges can be granted to the Terminal application used to run tmutil from the Privacy tab in the Security & Privacy preference pane. BACKUP STRUCTURE Throughout this manual, specific language is used to describe particular "realms" associated with Time Machine backups. It is important to understand this terminology to make effective use of tmutil and its manual. backup source A volume currently being backed up by Time Machine. backup disk The HFS+ or APFS volume that contains Time Machine backups. backup destination In the case of a local destination, a synonym for backup disk. For network destinations, this is the AFP or SMB share on which the backup disk image resides. backup disk image (or backup image) A sparsebundle that, when mounted, is the backing store for a volume that is a backup disk. backup store The top-level "Backups.backupdb" directory at the root of an HFS+ backup disk. E.g., /Volumes/Chronoton/Backups.backupdb n.b. APFS backup disks do not have backup stores. machine directory On HFS+, a directory inside a backup store that contains all the backups for a particular computer. On APFS, the root of the backup disk is a machine directory. For local HFS+ destinations, a backup store can contain multiple machine directories, all for separate computers. E.g., /Volumes/Chronoton/Backups.backupdb/thermopylae backup A directory inside a machine directory or APFS backup volume snapshot that represents a single initial or incremental backup of one computer. 
E.g., /Volumes/Chronoton/Backups.backupdb/thermopylae/2011-07-03-123456 com.apple.TimeMachine.2011-07-03-123456.backup local snapshot (or snapshot) An APFS snapshot of an APFS source volume included in the backup. E.g., com.apple.TimeMachine.2011-07-03-123456.local volume store A directory inside a backup that represents a single initial or incremental backup of one backup source. E.g., /Volumes/Chronoton/Backups.backupdb/thermopylae/2011-07-03-123456/Mac HD /Volumes/.timemachine/*/2011-07-03-123456.backup/2011-07-03-123456.backup/Mac HD VERBS Each verb is listed with its description and individual arguments. setdestination [-ap] arg Configure a local HFS+ or APFS volume, AFP share, or SMB share as a backup destination. Requires root and Full Disk Access privileges. When the -a option is provided, arg will be added to the list of destinations. Time Machine will automatically choose a backup destination from the list when performing backups. When the -a option is not provided, the current list of destinations will be replaced by arg. If you wish to set an HFS+ or APFS volume as the backup destination, arg should be the mount point of the volume in question. When setting an AFP or SMB destination arg takes the form: protocol://user[:pass]@host/share In the AFP and SMB cases, the password component of the URL is optional; you may instead specify the -p option to enter the password at a non-echoing interactive prompt. This is of particular interest to the security-conscious, as all arguments provided to a program are visible by all users on the system via the ps tool. destinationinfo [-X] Print information about destinations currently configured for use with Time Machine. For each backup destination, the following information may be displayed: Name The volume label as shown in Finder. Kind Whether the destination is locally attached storage or a network device. URL In the case of a network destination, the URL used for Time Machine configuration. 
Mount Point If the volume is currently mounted, the path in the file system at which it was mounted. ID The unique identifier for the destination. setquota destination_id quota_in_gb Set the quota for the destination with the specified unique identifier to the specified number of gigabytes. To obtain the unique identifier for a destination, see destinationinfo. The new quota will take effect on the next backup to this destination. Requires root and Full Disk Access privileges. removedestination identifier Remove the destination with the specified unique identifier from the Time Machine configuration. To obtain the unique identifier for a destination, see destinationinfo. Requires root and Full Disk Access privileges. addexclusion [-pv] item ... Configure an exclusion that tells Time Machine not to back up a file, directory, or volume during future backups. There are three kinds of user-configurable exclusions in Time Machine: The first kind of exclusion, which is the default behavior for the addexclusion verb, is a location-independent ("sticky") exclusion that follows a file or directory. When the file or directory is moved, the exclusion goes with the item to the new location. Additionally, when the item is copied, the copy retains the exclusion. The second kind of exclusion is a fixed-path exclusion. With this, you tell Time Machine that you want a specific path to be excluded, agnostic of the item at that path. If there is no file or directory at the specified path, the exclusion has no effect; if the item previously at the path has been moved or renamed, the item is not excluded, because it does not currently reside at the excluded path. As a consequence of these semantics, moving a file or directory to the path will cause the item to be excluded--fixed-path exclusions are not automatically cleaned up when items are moved or deleted and will take effect again once an item exists at an excluded path. The third kind of exclusion is a volume exclusion. 
These track volumes based on file system UUID, which is persistent across volume name and mount path changes. Erasing the volume will cause Time Machine to apply default behavior for the newly erased volume. The -p option configures fixed-path exclusions. The -v option configures volume exclusions. Both require root and Full Disk Access privileges. The -v option is the only supported way to exclude or unexclude a volume; behavior is undefined if a sticky or fixed-path exclusion is specified. removeexclusion [-pv] item ... Configure Time Machine to back up a file, directory, or volume during future backups. This verb follows the same usage, exclusion style, and privilege semantics as addexclusion. isexcluded [-X] item ... Determine if a file, directory, or volume are excluded from Time Machine backups. When the -X option is provided, output will be printed in XML property list format. # example output for an excluded item thermopylae:~ thoth$ tmutil isexcluded /Users/admin/Desktop/foo.txt [Excluded] /Users/admin/Desktop/foo.txt # example output for an item that is not excluded thermopylae:~ thoth$ tmutil isexcluded /Users/admin/Desktop/bar.txt [Included] /Users/admin/Desktop/bar.txt enable Turn on automatic backups. Requires root and Full Disk Access privileges. disable Turn off automatic backups. Requires root and Full Disk Access privileges. startbackup [-a | --auto] [-b | --block] [-r | --rotation] [-d | --destination dest_id] Begin a backup if one is not already running. Options: --auto Run the backup in a mode similar to system- scheduled backups. --block Wait (block) until the backup is finished before exiting. --rotation Allow automatic destination rotation during the backup. --destination Perform the backup to the destination corresponding to the specified ID. The --auto option provides a supported mechanism with which to trigger "automatic-like" backups, similar to automatic backups that are scheduled by the system. 
While this is not identical to true system-scheduled backups, it provides custom schedulers the ability to achieve some (but not all) behavior normally exhibited when operating in automatic mode. stopbackup Cancel a backup currently in progress. compare [-@acdefglmnstuEUX] [-D depth] [-I name] [backup_path | path1 path2] Perform a backup diff. If no arguments are provided, tmutil will compare the computer to the latest backup. If a backup path is provided as the sole argument, tmutil will compare the computer to the specified backup. If two path arguments are provided, tmutil will compare those two items to each other. tmutil will attempt to inform you when you have asked it to do something that doesn't make sense or isn't supported. The compare verb allows you to specify what properties to compare. If you specify no property options, tmutil assumes a default property set of -@gmstu. Specifying any property option overrides the default set. Options: -a Compare all supported metadata. -n No metadata comparison. -@ Compare extended attributes. -c Compare creation times. -d Compare file data forks. -e Compare ACLs. -f Compare file flags. -g Compare GIDs. -m Compare file modes. -s Compare sizes. -t Compare modification times. -u Compare UIDs. -D Limit traversal depth to depth levels from the beginning of iteration. -E Don't take exclusions into account when comparing items inside volumes. -I Ignore paths with a path component equal to name during iteration. This may be specified multiple times. -U Ignore logical volume identity (volume UUIDs) when directly comparing a local volume or volume store to a volume store. -X Print output in XML property list format. verifychecksums path ... Compute a checksum of data contained within a backup and verify the result(s) against checksum information computed at the time of backup. No output is generated for matching checksums. Issues are reported using the following legend: ! 
The file's current checksum does not match the expected recorded checksum. ? The file's recorded checksum is invalid. Beginning in OS X 10.11, Time Machine records checksums of files copied into backups. Checksums are not retroactively computed for files that were copied by earlier releases of OS X. restore [-v] src ... dst Restore the item src, which is inside a backup, to the location dst. The dst argument mimics the destination path semantics of the cp tool. You may provide multiple source paths to restore. The last path argument must be a destination. When using the restore verb, tmutil behaves largely like Finder. Custom Time Machine metadata (extended security and other) will be removed from the restored data, and other metadata will be preserved. Root and Full Disk Access privileges may be required to perform restores. When restoring with tmutil as root, ownership of the restored items will match the state of the items in the backup. delete [-d backup_mount_point -t timestamp] [-p path] Deletes the backups with the specified timestamp from the backup volume mounted at the specified mountpoint. The -t option followed by a timestamp can be used multiple times to specify multiple backups to delete. For HFS backup disks, a specific path to delete can also be specified using the -p option. This verb can delete items from backups that were not made by, or are not claimed by, the current machine. Requires root and Full Disk Access privileges. deleteinprogress machine_directory Delete all in-progress backups for a machine directory. Requires root and Full Disk Access privileges. On APFS backup destinations, this reverts the destination volume to the last backup. latestbackup [-d backup_mount_point [-m [-t]]] List this computer's latest completed backup. The -d option specifies a destination volume to list backups from. When -m is provided, latestbackup will attempt to mount the backups and list their mounted paths. 
The -t option will show only the backup timestamp rather than the full name or path. Requires root and Full Disk Access privileges. listbackups [-d backup_mount_point [-m [-t]]] List all of this computer's completed backups. The -d option specifies a destination volume to list backups from. When -m is provided, listbackups will attempt to mount backups and list their mounted paths. The -t option will show only the backup timestamp rather than the full name or path. Requires root and Full Disk Access privileges. machinedirectory Print the path to the current machine directory for this computer. calculatedrift machine_directory Analyze the backups in an HFS machine directory and determine the amount of change between each. Averages are printed after all backups have been analyzed. This may require root and Full Disk Access privileges, depending on the contents of the machine directory. uniquesize path ... Analyze the specified path in an HFS+ backup or path to an APFS backup and determine its unique size. The figure reported by uniquesize represents things that only exist in the specified path; things that are present in other backups are not tallied. inheritbackup {machine_directory | sparsebundle} Claim a machine directory or sparsebundle for use by the current machine. Requires root and Full Disk Access privileges. Machine directories and sparsebundles are owned by one computer at a time, and are tracked by unique identifiers rather than computer name, host name, or ethernet address. The inheritbackup verb reassigns the identity of the specified item, reconfiguring it so the current host recognizes it during backups. When inheriting a sparsebundle, the machine directory within will also be claimed. Inheriting is typically only one step in the process of configuring a backup for use by a machine. You may also need to use setdestination, associatedisk, or both, depending on the situation. 
One machine can own multiple machine directories and sparsebundles, but it is ill-advised for them to reside in the same place. In such a situation, which will be chosen during a backup is undefined. As a result, inheritbackup will attempt to detect possible identity collisions before making changes. associatedisk mount_point snapshot_volume Bind a volume store directory to the specified local disk, thereby reconfiguring the backup history. Requires root and Full Disk Access privileges. In Mac OS X, HFS+ and APFS volumes have a persistent UUID that is assigned when the file system is created. Time Machine uses this identifier to make an association between a source volume and a volume store. Erasing the source volume creates a new file system on the disk, and the previous UUID is not retained. The new UUID causes the source volume -> volume store association to be broken. If one were just erasing the volume and starting over, it would likely be of no real consequence, and the new UUID would not be a concern; when erasing a volume in order to clone another volume to it, recreating the association may be desired. A concrete example of when and how you would use associatedisk: After having problems with a volume, you decide to erase it and manually restore its contents from a Time Machine backup or copy of another nature. (I.e., not via Time Machine System Restore or Migration Assistant.) On your next incremental backup, the data will be copied anew, as though none of it had been backed up before. Technically, it is true that the data has not been backed up, given the new UUID. However, this is probably not what you want Time Machine to do. 
You would then use associatedisk to reconfigure the backup so it appears that this volume has been backed up previously: thermopylae:~ thoth$ sudo tmutil associatedisk [-a] "/Volumes/MyNewStuffDisk" "/Volumes/Chronoton/Backups.backupdb/thermopylae/Latest/MyStuff" The result of the above command would associate the volume store MyStuff in the specified backup with the source volume MyNewStuffDisk. The volume store would also be renamed to match. The -a option tells associatedisk to find all volume stores in the same machine directory that match the identity of MyStuff, and then perform the association on all of them. localsnapshot Create new local Time Machine snapshots of all APFS volumes included in the Time Machine backup. listlocalsnapshots mount_point List local Time Machine snapshots of the specified volume. listlocalsnapshotdates [mount_point] List the creation dates of all local Time Machine snapshots. Specify mount_point to list snapshot creation dates from a specific volume. Listed dates are formatted YYYY-MM-DD-HHMMSS. deletelocalsnapshots {mount_point | date} If a date is specified, delete all local Time Machine snapshots on all mounted disks for the specified date (formatted YYYY-MM-DD-HHMMSS). If a disk is specified, delete all local Time Machine snapshots on the specified disk. thinlocalsnapshots mount_point [purge_amount] [urgency] Thin local Time Machine snapshots for the specified volume. When purge_amount and urgency are specified, tmutil will attempt (with urgency level 1-4) to reclaim purge_amount in bytes by thinning snapshots. If urgency is not specified, the default urgency will be used. EXIT STATUS In most situations, tmutil exits 0 on success, >0 otherwise. Mac OS X 10 June 2015 Mac OS X
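Note that thinlocalsnapshots takes purge_amount as a raw byte count. A quick sketch of deriving it from gigabytes (the trailing 2 is an assumed mid-range urgency level; the command is printed rather than executed so the sketch is portable):

```shell
#!/bin/sh
# thinlocalsnapshots expects purge_amount in bytes; compute 10 GB worth.
purge_gb=10
purge_bytes=$((purge_gb * 1000 * 1000 * 1000))
# Print the command a macOS user would then run:
echo "tmutil thinlocalsnapshots / $purge_bytes 2"
```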
tmutil – Time Machine utility
tmutil verb [options]
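A few representative invocations of the verbs above, as a sketch. The paths are hypothetical, and tmutil exists only on macOS, so the guard makes this a no-op elsewhere:

```shell
#!/bin/sh
# Representative tmutil invocations; paths below are hypothetical.
if command -v tmutil >/dev/null 2>&1; then
    # Inspect the configured backup destinations:
    tmutil destinationinfo

    # Keep a scratch directory out of future backups (sticky exclusion):
    tmutil addexclusion "$HOME/Scratch"
    tmutil isexcluded "$HOME/Scratch"

    # Start a backup and wait for it to complete:
    sudo tmutil startbackup --block
fi
```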
certtool
Certtool is a UNIX command-line program which is used to create key pairs, certificates, and certificate signing requests; to import externally generated certificates and Certificate Revocation Lists (CRLs) into a Keychain, and to display the contents of certificates and CRLs.
certtool - create key pairs, certificates and certificate signing requests for use with Keychains
certtool command [command-args] [options] certtool c [options] certtool r outFileName [options] certtool V inFileName [options] certtool C domainName [options] certtool i inFileName [options] certtool d inFileName [options] certtool I inFileName [options] certtool D inFileName [options] certtool y [options] CERTTOOL COMMAND SUMMARY c Create keypair and Certificate r Create CSR V Verify CSR C Create a System Identity i Import Certificate d Display Certificate I Import a CRL D Display CRL y Display all certs and CRLs in keychain CERTTOOL OPTION SUMMARY c Create the keychain, if one is needed. d Create a CSR in DER format; default is PEM k=keychainName Specify the Keychain to use for the operation. If keychainName starts with a '/', an absolute path is assumed; otherwise, the specified filename is relative to the user's Library/Keychains directory. p=passphrase Specify the keychain passphrase when creating a keychain r=privateKeyFileName Optional private key, for Import Certificate only f=[18f] Private Key Format = PKCS1/PKCS8/FIPS186; default is PKCS1 (openssl) x=[aSsm] Extended Key Usage: a=Any; s=SSL Client; S=SSL Server; m=SMIME a Generate private key with default ACL u Generate private key with ACL limiting access to current user P Don't create System Identity if one already exists for specified domain h Print usage message v Execute in verbose mode.
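The command summary above maps onto a typical CSR workflow. The sketch below uses hypothetical file and keychain names; certtool exists only on macOS (and 'certtool r' prompts interactively for key and RDN details), so the guard makes it a no-op elsewhere:

```shell
#!/bin/sh
# CSR workflow sketch; file and keychain names are hypothetical.
if command -v certtool >/dev/null 2>&1; then
    # Create a keypair and PEM CSR, creating the keychain if needed,
    # with the private key ACL limited to the current user:
    certtool r myserver.csr k=server.keychain c u

    # Verify the CSR before sending it to a CA:
    certtool V myserver.csr

    # After the CA returns a certificate, import it alongside the key:
    certtool i myserver.crt k=server.keychain
fi
```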
Generating a Self-Signed Certificate This command generates a key pair and a self-signed (root) certificate and places them in a keychain. The root cert is signed by the private key generated during this command. The cert generated by this command is totally untrustworthy and cannot be used in the "real world"; the primary use of this command is to facilitate early development of SSL server applications based on SecureTransport. In particular, "real world" SSL clients (e.g., web browsers) will complain to varying degrees when they attempt to connect to an SSL server which presents a cert which is generated by this command. Some browsers, after a fair amount of handholding, will allow you to conditionally "trust" this cert. # CertTool c [options] The available options are: k=keyChainName Where "keyChainName" is the name of the keychain into which keys and the cert will be added. The specified keychain must exist. If it doesn't exist and you want the keychain created for you, specify the 'c' option. If no keychain is specified, keys and certs are added to the default keychain. c Specifies that the designated keychain is to be created. x=[aSsm] Specifies an optional Extended Key Usage extension. Values are 'a' for ExtendedKeyUseAny; 's' for SSL client (ClientAuth); 'S' for SSL server (ServerAuth); and 'm' for S/MIME (EmailProtection). a Results in the private key being created with a default ACL. If not specified, the private key is created with no ACL. u Create the private key with an ACL limiting access to the current user. This is an interactive command; you will be prompted for a number of different items which are used to generate the keypair and the cert. A sample session follows. # CertTool k=certkc Enter key and certificate label: testCert Please specify parameters for the key pair you will generate. 
r RSA d DSA f FEE e ECDSA Select key algorithm by letter: r Valid key sizes for RSA are 1024..2048; default is 2048 Enter key size in bits or CR for default: 2048 You have selected algorithm RSA, key size 2048 bits. OK (y/anything)? y Enter cert/key usage (s=signing, b=signing AND encrypting): b ...Generating key pair... Note: you will be prompted for the Keychain's passphrase by the Keychain system at this point if the specified keychain is not open and you have not specified the passphrase via the 'p' option. Please specify the algorithm with which your certificate will be signed. s RSA with SHA1 2 RSA with SHA256 3 RSA with SHA384 5 RSA with SHA512 Select signature algorithm by letter: s You have selected algorithm RSA with SHA1. OK (y/anything)? y ...creating certificate... You will now specify the various components of the certificate's Relative Distinguished Name (RDN). An RDN has a number of components, all of which are optional, but at least one of which must be present. Note that if you are creating a certificate for use in an SSL/TLS server, the Common Name component of the RDN must match exactly the host name of the server. This must not be an IP address, but the actual domain name, e.g. www.apple.com. Entering a CR for a given RDN component results in no value for that component. Common Name (e.g. www.apple.com) : 10.0.61.5 Country (e.g. US) : Organization (e.g. Apple, Inc.) : Apple Organization Unit (e.g. Apple Software Engineering) : State/Province (e.g. California) : California Email Address (e.g. username@apple.com) : You have specified: Common Name : 10.0.61.5 Organization : Apple State/Province : California Is this OK (y/anything)? y ..cert stored in Keychain. # The "Common Name" portion of the RDN - in the above case, "10.0.61.5" - MUST match the host name of the machine you'll be running an SSL/TLS server on. 
(In this case the test machine doesn't have an actual hostname; it's DHCP'd behind a firewall which is why "10.0.61.5" was specified for Common Name.) This is part of SSL's certificate verification; it prevents an attack using DNS spoofing. A brief note about cert/key usage: the normal configuration of SecureTransport is that the server cert specified in SSLSetCertificate() is capable of both signing and encryption. If this cert is only capable of signing, you must create a second keychain containing a cert which is capable of encryption, and pass that to SSLSetEncryptionCertificate(). Generating a Certificate Signing Request (CSR) A CSR is the standard means by which an administrator of a web server provides information to a Certificate Authority (CA) in order to obtain a valid certificate which is signed by the CA. This type of cert is used in the real world; certs signed by CAs such as Verisign and Thawte are recognized by most web browsers when performing SSL transactions. The general procedure for obtaining a "real" cert is: • Generate a key pair • Generate a CSR • Provide the CSR and some other information and/or documentation to the CA • CA sends you a certificate which is signed by the CA. • You import that certificate, obtained from the CA, into your keychain. The items in that keychain can now be used in SecureTransport's SSLSetCertificate() call. This command performs the first two steps in the above procedure. See the section below entitled "Importing a Certificate" for information on importing the resulting certificate into your keychain. The format of this command is # CertTool r outFileName [options] The resulting CSR will be written to "outFileName". The available options are: k=keyChainName Where "KeyChainName" is the name of the keychain into which keys and the cert will be added. If no keychain is specified, keys and certs are added to the default keychain. The specified keychain must exist unless you specify the 'c' option. 
d The 'd' option tells CertTool to create the CSR in DER-encoded format. The default is PEM-encoded, which is what most CAs expect. PEM encoded data consists of printable ASCII text which can, for example, be pasted into an email message. DER-encoded data is nonprintable binary data. c Specifies that the designated keychain is to be created. a Results in the private key being created with a default ACL. If not specified, the private key is created with no ACL. u Create the private key with an ACL limiting access to the current user. This is an interactive command; you will be prompted for a number of different items which are used to generate the keypair and the CSR. The prompts given, and the format of the data you must supply, are identical to the data shown in the sample session in Section 2. Verifying a CSR A CSR contains, among other things, the public key which was generated as described above. The CSR is signed with the associated private key. Thus the integrity of a CSR can be verified by extracting its public key and verifying the signature of the CSR. This command performs this integrity check. The format of this command is # CertTool V inFileName [options] The only available option is the 'd' flag, which as described above in the section entitled "Generating a Certificate Signing Request", indicates that the CSR is in DER format rather than the default PEM format. A typical (successful) run of this command is like so: # CertTool V myCsr.pem ...CSR verified successfully. A large number of things can go wrong if the verification fails; suffice it to say that if you see anything other than the above success message, you have a bad or corrupted CSR. Creating a System Identity This creates a key pair and a self-signed (root) certificate in the System keychain, and registers the result in the System Identity database as being the identity associated with the specified domain name. 
The domain name is typically a string of the form "com.apple.somedomain...". You must be running as root to execute this command. The format of this command is # CertTool C domainName [options] The available options are: u Create the private key with an ACL limiting access to the current user. If not specified, the private key will be created with a default ACL. P Don't create a system identity if one already exists for the specified domain. Importing a Certificate from a Certificate Authority Once you have negotiated with your CA, and provided them with the CSR generated as described above as well as any other information, documentation, and payment they require, the CA will provide you with a certificate. Use this command to add that certificate to the keychain containing the keypair you generated previously. The format of this command is # CertTool i inFileName [options] The cert to import is obtained from "inFileName". The available options are: k=keyChainName Where "keyChainName" is the name of the keychain to which the cert will be added. If no keychain is specified, the cert is added to the default keychain. The specified keychain typically contains the keypair you generated previously. (Note that you can import a certificate into a keychain which does not contain keys you generated, but there will be no linkage between the imported certificate and a private key if you do this.) If the keychain is not open when this command is executed, you will be prompted by the Keychain system for its passphrase. r=privateKeyFileName Where "privateKeyFileName" is the name of the optional private key file to be imported along with the certificate. This option is used to import cert/key pairs which are generated by other means, such as OpenSSL. f=privateKeyFormat Where "privateKeyFormat" is the format of the private key specified with the 'r' option. The formats are: '1' for PKCS1 (OpenSSL format), '8' (PKCS8), and 'f' (FIPS186, BSAFE format). 
The default is OpenSSL format for both RSA and DSA keys. d Specifies DER format as described above. The default is PEM format. c Specifies that the designated keychain is to be created. Displaying a Certificate This displays the contents of an existing certificate, obtained from a file. The format of this command is # CertTool d inFileName [options] The cert to display is obtained from "inFileName". The only available option is the 'd' flag, specifying DER format as described above. The default is PEM format. Actually, in the absence of this option, CertTool will correctly determine the format of the certificate (PEM or DER). Importing a CRL This command is used to add a Certificate Revocation List (CRL) to a keychain. The format of this command is # CertTool I inFileName [options] The CRL to import is obtained from "inFileName". The available options are: k=keyChainName Where "keyChainName" is the name of the keychain to which the CRL will be added. If no keychain is specified, the CRL is added to the default keychain. If the keychain is not open when this command is executed, you will be prompted by the Keychain system for its passphrase. d Specifies DER format as described above. The default is PEM format. c Specifies that the designated keychain is to be created. Displaying a CRL This displays the contents of an existing Certificate Revocation List (CRL), obtained from a file. The format of this command is # CertTool D inFileName [options] The CRL to display is obtained from "inFileName". The only available option is the 'd' flag, specifying DER format as described above. The default is PEM format. Displaying Certificates and CRLs in a keychain This displays the contents of all certificates and CRLs in a keychain. The format of this command is # CertTool y [options] The available options are: k=keyChainName Where "keyChainName" is the name of the keychain to display. v Specifies verbose mode. 
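For comparison, the key-pair generation, CSR creation, and CSR verification steps described above can be sketched with openssl(1) (see SEE ALSO). This is not CertTool's own mechanism, the keys do not land in a keychain, and the file names and Common Name here are purely illustrative:

```shell
# Illustrative openssl(1) equivalent of "CertTool r" plus "CertTool V".
openssl genrsa -out server.key 2048                 # generate an RSA key pair
openssl req -new -key server.key -out server.csr \
        -subj "/CN=10.0.61.5"                       # emit a PEM-encoded CSR
openssl req -in server.csr -noout -verify           # integrity-check the CSR
```

The verify step, like "CertTool V", extracts the public key from the CSR and checks the CSR's self-signature with it.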
Certificate Authorities and CSRs As mentioned above, the general procedure for obtaining a "real" cert is: • Generate a key pair • Generate a CSR • Provide the CSR and some other information and/or documentation to the CA • CA sends you a certificate which is signed by the CA. • You import that certificate, obtained from the CA, into your keychain. The items in that keychain can now be used in SecureTransport's SSLSetCertificate() call. One CA with an excellent web-based interface for obtaining a cert is Verisign (http://www.verisign.com/products/site/index.html). You can get a free 14-day trial certificate using nothing but CertTool, Verisign's web site, and email. You need to provide some personal information. Paste the CSR generated as described in the section entitled "Generating a Certificate Signing Request" into a form on the web site. A few minutes later Verisign emails you a certificate, which you import into your keychain. The whole process takes less than 10 minutes. The free certificate obtained in this manner is signed by a temporary root cert which is not recognized by any browsers, but Verisign also provides a means of installing this temporary root cert into your browser, directly from their web site. Typically one would use the free, temporary cert to perform initial configuration of a server and to wring out the general SSL infrastructure. Once you feel comfortable with the operation of the server, then it's time to buy a "real" certificate which will allow your web server to be trusted by any browser. Thawte has a similar, very friendly service at http://www.thawte.com/. Note that, for early web server development and/or testing, you can skip the entire procedure described above and just generate your own self-signed root cert as described above. No CA is involved; no CSR is generated; no cert needs to be imported - CertTool generates a cert for you and immediately adds it to your keychain. 
Bear in mind that this option will require support from the various SSL clients you'll be testing with, none of which will recognize your root cert by default. FILES /System/Library/Keychains/X509Anchors System root certificate database /Library/Keychains/System.keychain System Keychain SEE ALSO openssl(1) Apple Computer, Inc. March 19, 2003 CERTTOOL(1)
vmmap
vmmap displays the virtual memory regions allocated in a specified process, helping a programmer understand how memory is being used, and what the purposes of memory at a given address may be. vmmap requires one argument -- either the process ID or the full or partial executable name of the process to examine, or the pathname of a memory graph file generated by leaks or the Xcode Memory Graph Debugger. If the optional address is given, information is only shown for the VM region containing that address (if any) and the regions around it.
vmmap – Display the virtual memory regions allocated in a process
vmmap [-s] [-w] [-v] [-pages] [-interleaved] [-submap] [-allSplitLibs] [-noCoalesce] [-summary] pid | partial-executable-name | memory-graph-file [address]
-s, -sortBySize Print regions and malloc zones sorted by size (dirty + swapped). -w, -wide Print wide output, to show full paths of mapped files. -v, -verbose Equivalent to -w -submap -allSplitLibs -noCoalesce -pages Print region sizes in page counts rather than kilobytes. -interleaved Print all regions in ascending order of starting address, rather than printing all non-writable regions followed by all writable regions. -submap Print information about VM submaps. -allSplitLibs Print information about all shared system split libraries, even those not loaded by this process. -noCoalesce Do not coalesce adjacent identical regions. Default is to coalesce for more concise output. -summary Print only the summary of VM usage, not the individual region detail. EXPLANATION OF OUTPUT For each region, vmmap describes the starting address, ending address, size of the region (in kilobytes or pages), read/write permissions for the page, sharing mode for the page, and the purpose of the pages. The size of the virtual memory region represents the virtual memory pages reserved, but not necessarily allocated. For example, using the vm_allocate Mach system call reserves the pages, but physical memory won't be allocated for the page until the memory is actually touched. A memory-mapped file may have a virtual memory page reserved, but the pages are not instantiated until a read or write happens. Thus, this size may not correctly describe the application's true memory usage. By default, the sizes are shown in kilobytes or megabytes. If the -pages flag is given, then the sizes are in number of VM pages. The protection mode describes whether the memory is readable, writable, or executable. Each virtual memory region has a current permission, and a maximum permission. In the line for a virtual memory region, the current permission is displayed first, the maximum permission second. 
For example, the first page of an application (starting at address 0x00000000) permits neither reads, writes, nor execution ("---"), ensuring that any reads or writes to address 0, or dereferences of a NULL pointer, immediately cause a bus error. Pages representing an executable always have the execute and read bits set ("r-x"). The current permissions usually do not permit writing to the region. However, the maximum permissions allow writing so that the debugger can request write access to a page to insert breakpoints. Permissions for executables appear as "r-x/rwx" to indicate these permissions. The share mode describes whether pages are shared between processes, and what happens when pages are modified. Private pages (PRV) are pages only visible to this process. They are allocated as they are written to, and can be paged out to disk. Copy-on-write (COW) pages are shared by multiple processes (or shared by a single process in multiple locations). When the page is modified, the writing process then receives its own private copy of the page. Empty (NUL) sharing implies that the page does not really exist in physical memory. Aliased (ALI) and shared (SHM) memory is shared between processes. The share mode typically describes the general mode controlling the region. For example, as copy-on-write pages are modified, they become private to the application. Even with the private pages, the region is still COW until all pages become private. Once all pages are private, then the share mode would change to private. The far left column names the purpose of the memory: malloc regions, stack, text or data segment, etc. For regions loaded from binaries, the far right shows the library loaded into the memory. If the -submap flag is given, then vmmap's output includes descriptions of submaps. A submap is a shared set of virtual memory page descriptions that the operating system can reuse between multiple processes. 
Submaps minimize the operating system's memory usage by representing the virtual memory regions only once. Submaps can either be shared by all processes (machine-wide) or local to the process (process-only). (Understanding where submaps are located is irrelevant for most developers, but may be interesting for anyone working with low levels of the virtual memory system.) For example, one submap contains the read-only portions of the most common dynamic libraries. These libraries are needed by most programs on the system, and because they are read-only, they will never be changed. As a result, the operating system shares these pages between all the processes, and only needs to create a single data structure to describe how this memory is laid out in every process. That section of memory is referred to as the "split library region", and it is shared system-wide. So, technically, all of the dynamic libraries that have been loaded into that region are in the VM map of every process, even though some processes may not be using some of those libraries. By default, vmmap shows only those shared system split libraries that have been loaded into the specified target process. If the -allSplitLibs flag is given, information about all shared system split libraries will be printed, regardless of whether they've been loaded into the specified target process or not. If the contents of a machine-wide submap are changed -- for example, the debugger makes a section of memory for a dylib writable so it can insert debugging traps -- then the submap becomes local, and the kernel will allocate memory to store the extra copy. % FRAG, fragmentation, in the MALLOC ZONE summary is computed by the following method: % FRAG = 100 - (100 * Allocated / (Dirty + Swapped)) Dirty and swapped are memory which has been written to by the process. Allocated is the number of bytes currently allocated from malloc. 
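The % FRAG computation above can be checked by hand; here is a small shell sketch in which the byte counts are made up purely for illustration:

```shell
# Illustrative % FRAG calculation; Allocated/Dirty/Swapped values are invented.
allocated=600000
dirty=800000
swapped=200000
# % FRAG = 100 - (100 * Allocated / (Dirty + Swapped)), integer arithmetic
frag=$(( 100 - (100 * allocated / (dirty + swapped)) ))
echo "$frag"   # prints 40
```

A zone with little overhead has Allocated close to Dirty + Swapped, so % FRAG approaches zero.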
SEE ALSO heap(1), leaks(1), malloc_history(1), stringdups(1), lsof(8) The heap, leaks, and malloc_history commands can be used to look at various aspects of a process's memory usage. The lsof command can be used to get a list of open and mapped files in one or more processes, which can help determine why a volume can't be unmounted or ejected, for example. The Xcode developer tools also include Instruments, a graphical application that can give information similar to that provided by vmmap. The Allocations instrument graphically displays dynamic, real-time information about the object and memory use in an application (including VM allocations), as well as backtraces of where the allocations occurred. The VM Tracker instrument in the Allocations template graphically displays information about the virtual memory regions in a process. macOS 14.5 August 9, 2022 macOS 14.5
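Some representative invocations, in the style of the synopsis above; the process name, PID, and address shown are illustrative:

```
# vmmap Finder                  # all regions of the process named Finder
# vmmap -summary 1234           # only the VM usage summary for PID 1234
# vmmap -pages -w 1234          # sizes in pages, wide output with full paths
# vmmap 1234 0x104e8c000        # only the region containing this address
```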
null
system-override
system-override modifies system overrides. Some of the commands require the device to be booted into the Recovery OS. Invoke system-override with no arguments to see a full usage statement. macOS March 9, 2023 macOS
system-override – configure system overrides
system-override command
null
null
h2xs
h2xs builds a Perl extension from C header files. The extension will include functions which can be used to retrieve the value of any #define statement which was in the C header files. The module_name will be used for the name of the extension. If module_name is not supplied then the name of the first header file will be used, with the first character capitalized. If the extension might need extra libraries, they should be included here. The extension Makefile.PL will take care of checking whether the libraries actually exist and how they should be loaded. The extra libraries should be specified in the form -lm -lposix, etc, just as on the cc command line. By default, the Makefile.PL will search through the library path determined by Configure. That path can be augmented by including arguments of the form -L/another/library/path in the extra-libraries argument. In spite of its name, h2xs may also be used to create a skeleton pure Perl module. See the -X option.
h2xs - convert .h C header files to Perl extensions
h2xs [OPTIONS ...] [headerfile ... [extra_libraries]] h2xs -h|-?|--help
-A, --omit-autoload Omit all autoload facilities. This is the same as -c but also removes the "use AutoLoader" statement from the .pm file. -B, --beta-version Use an alpha/beta style version number. Causes version number to be "0.00_01" unless -v is specified. -C, --omit-changes Omits creation of the Changes file, and adds a HISTORY section to the POD template. -F, --cpp-flags=addflags Additional flags to specify to C preprocessor when scanning header for function declarations. Writes these options in the generated Makefile.PL too. -M, --func-mask=regular expression Selects functions/macros to process. -O, --overwrite-ok Allows a pre-existing extension directory to be overwritten. -P, --omit-pod Omit the autogenerated stub POD section. -X, --omit-XS Omit the XS portion. Used to generate a skeleton pure Perl module. "-c" and "-f" are implicitly enabled. -a, --gen-accessors Generate an accessor method for each element of structs and unions. The generated methods are named after the element name; will return the current value of the element if called without additional arguments; and will set the element to the supplied value (and return the new value) if called with an additional argument. Embedded structures and unions are returned as a pointer rather than the complete structure, to facilitate chained calls. These methods all apply to the Ptr type for the structure; additionally two methods are constructed for the structure type itself, "_to_ptr" which returns a Ptr type pointing to the same structure, and a "new" method to construct and return a new structure, initialised to zeroes. -b, --compat-version=version Generates a .pm file which is backwards compatible with the specified perl version. For versions < 5.6.0, the changes are: - no use of 'our' (uses 'use vars' instead) - no 'use warnings' Specifying a compatibility version higher than the version of perl you are using to run h2xs will have no effect. 
If unspecified, h2xs will default to compatibility with the version of perl you are using to run h2xs. -c, --omit-constant Omit constant() from the .xs file and corresponding specialised "AUTOLOAD" from the .pm file. -d, --debugging Turn on debugging messages. -e, --omit-enums=[regular expression] If regular expression is not given, skip all constants that are defined in a C enumeration. Otherwise skip only those constants that are defined in an enum whose name matches regular expression. Since regular expression is optional, make sure that this switch is followed by at least one other switch if you omit regular expression and have some pending arguments such as header-file names. This is ok: h2xs -e -n Module::Foo foo.h This is not ok: h2xs -n Module::Foo -e foo.h In the latter, foo.h is taken as the regular expression. -f, --force Allows an extension to be created for a header even if that header is not found in standard include directories. -g, --global Include code for safely storing static data in the .xs file. Extensions that do not make use of static data can ignore this option. -h, -?, --help Print the usage, help and version for this h2xs and exit. -k, --omit-const-func For function arguments declared as "const", omit the const attribute in the generated XS code. -m, --gen-tied-var Experimental: for each variable declared in the header file(s), declare a perl variable of the same name magically tied to the C variable. -n, --name=module_name Specifies a name to be used for the extension, e.g., -n RPC::DCE -o, --opaque-re=regular expression Use "opaque" data type for the C types matched by the regular expression, even if these types are "typedef"-equivalent to types from typemaps. Should not be used without -x. This may be useful since, say, types which are "typedef"-equivalent to integers may represent OS-related handles, and one may want to work with these handles in OO-way, as in "$handle->do_something()". Use "-o ." 
if you want to handle all the "typedef"ed types as opaque types. The type-to-match is whitewashed (except for commas, which have no whitespace before them, and multiple "*" which have no whitespace between them). -p, --remove-prefix=prefix Specify a prefix which should be removed from the Perl function names, e.g., -p sec_rgy_ This sets up the XS PREFIX keyword and removes the prefix from functions that are autoloaded via the constant() mechanism. -s, --const-subs=sub1,sub2 Create a perl subroutine for the specified macros rather than autoload with the constant() subroutine. These macros are assumed to have a return type of char *, e.g., -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid. -t, --default-type=type Specify the internal type that the constant() mechanism uses for macros. The default is IV (signed integer). Currently all macros found during the header scanning process will be assumed to have this type. Future versions of "h2xs" may gain the ability to make educated guesses. --use-new-tests When --compat-version (-b) is present the generated tests will use "Test::More" rather than "Test" which is the default for versions before 5.6.2. "Test::More" will be added to PREREQ_PM in the generated "Makefile.PL". --use-old-tests Will force the generation of test code that uses the older "Test" module. --skip-exporter Do not use "Exporter" and/or export any symbol. --skip-ppport Do not use "Devel::PPPort": no portability to older versions. --skip-autoloader Do not use the module "AutoLoader"; but keep the constant() function and "sub AUTOLOAD" for constants. --skip-strict Do not use the pragma "strict". --skip-warnings Do not use the pragma "warnings". -v, --version=version Specify a version number for this extension. This version number is added to the templates. The default is 0.01, or 0.00_01 if "-B" is specified. The version specified should be numeric. -x, --autogen-xsubs Automatically generate XSUBs based on function declarations in the header file. 
The package "C::Scan" should be installed. If this option is specified, the name of the header file may look like "NAME1,NAME2". In this case NAME1 is used instead of the specified string, but XSUBs are emitted only for the declarations included from file NAME2. Note that some types of arguments/return-values for functions may result in XSUB-declarations/typemap-entries which need hand- editing. Such may be objects which cannot be converted from/to a pointer (like "long long"), pointers to functions, or arrays. See also the section on "LIMITATIONS of -x".
# Default behavior, extension is Rusers h2xs rpcsvc/rusers # Same, but extension is RUSERS h2xs -n RUSERS rpcsvc/rusers # Extension is rpcsvc::rusers. Still finds <rpcsvc/rusers.h> h2xs rpcsvc::rusers # Extension is ONC::RPC. Still finds <rpcsvc/rusers.h> h2xs -n ONC::RPC rpcsvc/rusers # Without constant() or AUTOLOAD h2xs -c rpcsvc/rusers # Creates templates for an extension named RPC h2xs -cfn RPC # Extension is ONC::RPC. h2xs -cfn ONC::RPC # Extension is a pure Perl module with no XS code. h2xs -X My::Module # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines and enums h2xs can find # in foo.h. h2xs -b 5.5.3 -n Lib::Foo foo.h # Extension is Lib::Foo which works at least with Perl5.005_03. # Constants are created for all #defines but only for enums # whose names do not start with 'bar_'. h2xs -b 5.5.3 -e '^bar_' -n Lib::Foo foo.h # Makefile.PL will look for library -lrpc in # additional directory /opt/net/lib h2xs rpcsvc/rusers -L/opt/net/lib -lrpc # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names h2xs -n DCE::rgynbase -p sec_rgy_ dce/rgynbase # Extension is DCE::rgynbase # prefix "sec_rgy_" is dropped from perl function names # subroutines are created for sec_rgy_wildcard_name and # sec_rgy_wildcard_sid h2xs -n DCE::rgynbase -p sec_rgy_ \ -s sec_rgy_wildcard_name,sec_rgy_wildcard_sid dce/rgynbase # Make XS without defines in perl.h, but with function declarations # visible from perl.h. Name of the extension is perl1. # When scanning perl.h, define -DEXT=extern -DdEXT= -DINIT(x)= # Extra backslashes below because the string is passed to shell. # Note that a directory with perl header files would # be added automatically to include path. h2xs -xAn perl1 -F "-DEXT=extern -DdEXT= -DINIT\(x\)=" perl.h # Same with function declaration in proto.h as visible from perl.h. 
h2xs -xAn perl2 perl.h,proto.h # Same but select only functions which match /^av_/ h2xs -M '^av_' -xAn perl2 perl.h,proto.h # Same but treat SV* etc as "opaque" types h2xs -o '^[S]V \*$' -M '^av_' -xAn perl2 perl.h,proto.h Extension based on .h and .c files Suppose that you have some C files implementing some functionality, and the corresponding header files. How to create an extension which makes this functionality accessible in Perl? The example below assumes that the header files are interface_simple.h and interface_hairy.h, and you want the perl module be named as "Ext::Ension". If you need some preprocessor directives and/or linking with external libraries, see the flags "-F", "-L" and "-l" in "OPTIONS". Find the directory name Start with a dummy run of h2xs: h2xs -Afn Ext::Ension The only purpose of this step is to create the needed directories, and let you know the names of these directories. From the output you can see that the directory for the extension is Ext/Ension. Copy C files Copy your header files and C files to this directory Ext/Ension. Create the extension Run h2xs, overwriting older autogenerated files: h2xs -Oxan Ext::Ension interface_simple.h interface_hairy.h h2xs looks for header files after changing to the extension directory, so it will find your header files OK. Archive and test As usual, run cd Ext/Ension perl Makefile.PL make dist make make test Hints It is important to do "make dist" as early as possible. This way you can easily merge(1) your changes to autogenerated files if you decide to edit your ".h" files and rerun h2xs. Do not forget to edit the documentation in the generated .pm file. Consider the autogenerated files as skeletons only, you may invent better interfaces than what h2xs could guess. Consider this section as a guideline only, some other options of h2xs may better suit your needs. ENVIRONMENT No environment variables are used. AUTHOR Larry Wall and others SEE ALSO perl, perlxstut, ExtUtils::MakeMaker, and AutoLoader. 
DIAGNOSTICS The usual warnings if it cannot read or write the files involved. LIMITATIONS of -x h2xs would not distinguish whether an argument to a C function which is of the form, say, "int *", is an input, output, or input/output parameter. In particular, argument declarations of the form int foo(n) int *n should be better rewritten as int foo(n) int &n if "n" is an input parameter. Additionally, h2xs has no facilities to intuit that a function int foo(addr,l) char *addr int l takes a pair of address and length of data at this address, so it is better to rewrite this function as int foo(sv) SV *sv PREINIT: STRLEN len; char *s; CODE: s = SvPV(sv,len); RETVAL = foo(s, len); OUTPUT: RETVAL or alternately static int my_foo(SV *sv) { STRLEN len; char *s = SvPV(sv,len); return foo(s, len); } MODULE = foo PACKAGE = foo PREFIX = my_ int foo(sv) SV *sv See perlxs and perlxstut for additional details. perl v5.38.2 2023-11-28 H2XS(1)
csrutil
csrutil modifies System Integrity Protection settings. Some of the commands require the device to be booted into the Recovery OS. Invoke csrutil with no arguments to see a full usage statement. macOS June 15, 2017 macOS
csrutil – Configure system security policies
csrutil command [arguments ...]
null
null
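A typical query from a normally booted system with the default configuration looks like this (the exact wording may vary between macOS releases):

```
$ csrutil status
System Integrity Protection status: enabled.
```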
shar
The shar command writes a sh(1) shell script to the standard output which will recreate the file hierarchy specified by the command line operands. Directories will be recreated and must be specified before the files they contain (the find(1) utility does this correctly). The shar command is normally used for distributing files by ftp(1) or mail(1).
shar – create a shell archive of files
shar file ...
null
To create a shell archive of the program ls(1) and mail it to Rick: cd ls shar `find . -print` | mail -s "ls source" rick To recreate the program directory: mkdir ls cd ls ... <delete header lines and examine mailed archive> ... sh archive SEE ALSO compress(1), mail(1), tar(1), uuencode(1) HISTORY The shar command appeared in 4.4BSD. BUGS The shar command makes no provisions for special types of files or files containing magic characters. The shar command cannot handle files without a newline ('\n') as the last character. It is easy to insert trojan horses into shar files. It is strongly recommended that all shell archive files be examined before running them through sh(1). Archives produced using this implementation of shar may be easily examined with the command: egrep -av '^[X#]' shar.file macOS 14.5 January 31, 2019 macOS 14.5
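The X-stuffing scheme alluded to in BUGS (and visible in the egrep example above) can be illustrated by building a tiny archive by hand. This is a simplified sketch of the format, not shar's exact output; the file and directory names are illustrative:

```shell
# Build a minimal hand-rolled shell archive and unpack it with sh(1).
mkdir -p demo && echo 'hello' > demo/greeting.txt
{
  printf '%s\n' '# This is a shell archive.  Remove anything before this line.'
  printf '%s\n' 'mkdir -p demo'
  printf '%s\n' 'sed "s/^X//" > demo/greeting.txt << \XEOF'
  sed 's/^/X/' demo/greeting.txt          # X-stuff each content line
  printf '%s\n' 'XEOF'
} > archive.sh
rm -rf demo
sh archive.sh                             # recreates demo/greeting.txt
```

The leading "X" on every content line is what lets the archive survive mailers that mangle lines, and why `egrep -av '^[X#]'` shows everything that is not plain data.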
ldappasswd
ldappasswd is a tool to set the password of an LDAP user. ldappasswd uses the LDAPv3 Password Modify (RFC 3062) extended operation. ldappasswd sets the password associated with the user [or an optionally specified user]. If the new password is not specified on the command line and the user doesn't enable prompting, the server will be asked to generate a password for the user. ldappasswd is neither designed nor intended to be a replacement for passwd(1) and should not be installed as such.
ldappasswd - change the password of an LDAP entry
ldappasswd [-A] [-a oldPasswd] [-t oldpasswdfile] [-D binddn] [-d debuglevel] [-H ldapuri] [-h ldaphost] [-n] [-p ldapport] [-S] [-s newPasswd] [-T newpasswdfile] [-v] [-W] [-w passwd] [-y passwdfile] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] [user]
-A Prompt for old password. This is used instead of specifying the password on the command line. -a oldPasswd Set the old password to oldPasswd. -t oldPasswdFile Set the old password to the contents of oldPasswdFile. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -d debuglevel Set the LDAP debugging level to debuglevel. ldappasswd must be compiled with LDAP_DEBUG defined for this option to have any effect. -H ldapuri Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected. -h ldaphost Specify an alternate host on which the ldap server is running. Deprecated in favor of -H. -p ldapport Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H. -n Do not set password. (Can be useful when used in conjunction with -v or -d) -S Prompt for new password. This is used instead of specifying the password on the command line. -s newPasswd Set the new password to newPasswd. -T newPasswdFile Set the new password to the contents of newPasswdFile. -v Increase the verbosity of output. Can be specified multiple times. -W Prompt for bind password. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password to bind with. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -O security-properties Specify SASL security properties. -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -R realm Specify the realm of authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. 
-X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username>. -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful. SEE ALSO ldap_sasl_bind(3), ldap_extended_operation(3), ldap_start_tls_s(3) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPPASSWD(1)
null
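A representative session; the server URI, bind DN, and target user below are illustrative. With -S and -W, ldappasswd prompts for the new password and the bind password rather than taking either on the command line:

```
$ ldappasswd -H ldap://ldap.example.com -x \
      -D "cn=admin,dc=example,dc=com" -W -S \
      "uid=jdoe,ou=people,dc=example,dc=com"
New password:
Re-enter new password:
Enter LDAP Password:
```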
ResMerger
Tools supporting Carbon development, including /usr/bin/ResMerger, were deprecated with Xcode 6. The /usr/bin/ResMerger command merges the Carbon Resource Manager resource data in multiple files into a single file. The output file may be one of the input files. The /usr/bin/ResMerger command takes the following flags and arguments: -fileCreator <fileCreator> Sets the HFS creator type of the output file. The default is '????'. -fileType <fileType> Sets the HFS file type of the output file. The default is '????'. -[a]ppend Append to output file, rather than overwriting it. -srcIs RSRC | DF The fork in which to look for resources in the input file(s). The default is the data fork (DF). -dstIs RSRC | DF The fork in which to write resources in the output file. The default is the data fork (DF). -skip <type> Specifies resource type to skip during the resource merge. May be used multiple times, to specify multiple resource types to skip. file Specifies one or more input files. Note that as there is only one -srcIs flag, the input files must be homogeneous; that is, the resources must be in the same fork of all input files. -o <dest-file> Specifies the output file. If the fork designated by -dstIs exists, it is overwritten unless the -a flag is provided; if it does not exist, it is created. SEE ALSO Rez(1), DeRez(1), RezWack(1), UnRezWack(1), SplitForks(1) Mac OS X April 12, 2004 Mac OS X
/usr/bin/ResMerger – Merges resource forks or files into one resource file (DEPRECATED)
/usr/bin/ResMerger [-fileCreator <fileCreator>] [-fileType <fileType>] [-[a]ppend] [-srcIs RSRC | DF] [-dstIs RSRC | DF] file ... -o <dest-file>
null
null
bitesize.d
This produces a report of the size of disk events caused by processes. These are the disk events sent by the block I/O driver. If applications must use the disks, we generally prefer they do so sequentially with large I/O sizes, or larger "bites". Since this uses DTrace, only users with root privileges can run this command.
bitesize.d - analyse disk I/O size by process. Uses DTrace.
bitesize.d
null
Sample until Ctrl-C is hit then print report, # bitesize.d FIELDS PID process ID CMD command and argument list value size in bytes count number of I/O operations NOTES The application may be requesting smaller sized operations, which are being rounded up to the nearest sector size or UFS block size. To analyse what the application is requesting, DTraceToolkit programs such as Proc/fddist may help. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT bitesize.d will sample until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO iosnoop(1M), seeksize(1M), dtrace(1M) version 1.00 June 15, 2005 bitesize.d(1m)
db_checkpoint
The db_checkpoint utility is a daemon process that monitors the database log, and periodically calls DB_ENV->txn_checkpoint to checkpoint it. The options are as follows: -1 Checkpoint the log once, regardless of whether or not there has been activity since the last checkpoint and then exit. -h Specify a home directory for the database environment; by default, the current working directory is used. -k Checkpoint the database at least as often as every kbytes of log file are written. -L Log the execution of the db_checkpoint utility to the specified file in the following format, where ### is the process ID, and the date is the time the utility was started. db_checkpoint: ### Wed Jun 15 01:23:45 EDT 1995 This file will be removed if the db_checkpoint utility exits gracefully. -P Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite the memory containing the command-line arguments. -p Checkpoint the database at least every min minutes if there has been any activity since the last checkpoint. -V Write the library version number to the standard output, and exit. -v Write the time of each checkpoint attempt to the standard output. At least one of the -1, -k, and -p options must be specified. The db_checkpoint utility uses a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a Berkeley DB environment, db_checkpoint should always be given the chance to detach from the environment and exit gracefully. To cause db_checkpoint to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT). 
The db_checkpoint utility does not attempt to create the Berkeley DB shared memory regions if they do not already exist. The application that creates the region should be started first, and once the region is created, the db_checkpoint utility should be started. The DB_ENV->txn_checkpoint method is the underlying method used by the db_checkpoint utility. See the db_checkpoint utility source code for an example of using DB_ENV->txn_checkpoint in an IEEE/ANSI Std 1003.1 (POSIX) environment. The db_checkpoint utility exits 0 on success, and >0 if an error occurs. ENVIRONMENT DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as described in DB_ENV->open. SEE ALSO db_archive(1), db_deadlock(1), db_dump(1), db_load(1), db_printlog(1), db_recover(1), db_stat(1), db_upgrade(1), db_verify(1) Darwin December 3, 2003 Darwin
db_checkpoint
db_checkpoint [-1Vv] [-h home] [-k kbytes] [-L file] [-P password] [-p min]
null
null
funzip
funzip without a file argument acts as a filter; that is, it assumes that a ZIP archive (or a gzip'd(1) file) is being piped into standard input, and it extracts the first member from the archive to stdout. When stdin comes from a tty device, funzip assumes that this cannot be a stream of (binary) compressed data and shows a short help text, instead. If there is a file argument, then input is read from the specified file instead of from stdin. A password for encrypted zip files can be specified on the command line (preceding the file name, if any) by prefixing the password with a dash. Note that this constitutes a security risk on many systems; currently running processes are often visible via simple commands (e.g., ps(1) under Unix), and command-line histories can be read. If the first entry of the zip file is encrypted and no password is specified on the command line, then the user is prompted for a password and the password is not echoed on the console. Given the limitation on single-member extraction, funzip is most useful in conjunction with a secondary archiver program such as tar(1). The following section includes an example illustrating this usage in the case of disk backups to tape.
funzip - filter for extracting from a ZIP archive in a pipe
funzip [-password] [input[.zip|.gz]] ARGUMENTS [-password] Optional password to be used if ZIP archive is encrypted. Decryption may not be supported at some sites. See DESCRIPTION for more details. [input[.zip|.gz]] Optional input archive file specification. See DESCRIPTION for details.
null
To use funzip to extract the first member file of the archive test.zip and to pipe it into more(1): funzip test.zip | more To use funzip to test the first member file of test.zip (any errors will be reported on standard error): funzip test.zip > /dev/null To use zip and funzip in place of compress(1) and zcat(1) (or gzip(1L) and gzcat(1L)) for tape backups: tar cf - . | zip -7 | dd of=/dev/nrst0 obs=8k dd if=/dev/nrst0 ibs=8k | funzip | tar xf - (where, for example, nrst0 is a SCSI tape drive). BUGS When piping an encrypted file into more and allowing funzip to prompt for password, the terminal may sometimes be reset to a non-echo mode. This is apparently due to a race condition between the two programs; funzip changes the terminal mode to non-echo before more reads its state, and more then ``restores'' the terminal to this mode before exiting. To recover, run funzip on the same file but redirect to /dev/null rather than piping into more; after prompting again for the password, funzip will reset the terminal properly. There is presently no way to extract any member but the first from a ZIP archive. This would be useful in the case where a ZIP archive is included within another archive. In the case where the first member is a directory, funzip simply creates the directory and exits. The functionality of funzip should be incorporated into unzip itself (future release). SEE ALSO gzip(1L), unzip(1L), unzipsfx(1L), zip(1L), zipcloak(1L), zipinfo(1L), zipnote(1L), zipsplit(1L) URL The Info-ZIP home page is currently at http://www.info-zip.org/pub/infozip/ or ftp://ftp.info-zip.org/pub/infozip/ . AUTHOR Mark Adler (Info-ZIP) Info-ZIP 20 April 2009 (v3.95) FUNZIP(1L)
pod2readme5.34
This utility will use Pod::Readme to extract a README file from a POD document. It works by extracting and filtering the POD, and then calling the appropriate filter program to convert the POD to another format.
pod2readme - Intelligently generate a README file from POD USAGE pod2readme [-cfho] [long options...] input-file [output-file] [target] Intelligently generate a README file from POD -t --target target type (default: 'readme') -f --format output format (default: 'text') -b --backup backup output file -o --output output filename (default based on target) -c --stdout output to stdout (console) -F --force only update if files are changed -h --help print usage and exit
pod2readme -f markdown lib/MyApp.pm
"--backup" By default, "pod2readme" will back up the output file. To disable this, use the "--no-backup" option. "--output" Specifies the name of the output file. If omitted, it will use the second command line argument, or default to the "--target" plus the corresponding extension of the "--format". For all intents and purposes, the default is README. If a format other than "text" is chosen, then the appropriate extension will be added, e.g. for "markdown", the default output file is README.md. "--target" The target of the filter, which defaults to "readme". "--format" The output format, which defaults to "text". Other supported formats are "github", "html", "latex", "man", "markdown", "pod", "rtf", and "xhtml". You can also use "gfm" instead of "github". Similarly, you can use "md" for "markdown". "--stdout" If enabled, it will output to the console instead of "--output". "--force" By default, the README will only be generated if the source files have been changed. Using "--force" will force the file to be updated. Note: POD format files will always be updated. "--help" Prints the usage and exits. SEE ALSO pod2text, pod2markdown. perl v5.34.0 2018-10-31 POD2README(1)
null
power_report.sh
null
null
null
null
null
iosnoop
iosnoop prints I/O events as they happen, with useful details such as UID, PID, block number, size, filename, etc. This is useful to determine the process responsible for using the disks, as well as details on what activity the process is requesting. Behaviour such as random or sequential I/O can be observed by reading the block numbers. Since this uses DTrace, only users with root privileges can run this command.
iosnoop - snoop I/O events as they occur. Uses DTrace.
iosnoop [-a|-A|-Deghinostv] [-d device] [-f filename] [-m mount_point] [-n name] [-p PID]
-a print all data -A dump all data, space delimited -D print time delta, us (elapsed) -e print device name -i print device instance -N print major and minor numbers -o print disk delta time, us -s print start time, us -t print completion time, us -v print completion time, string -d device instance name to snoop (eg, dad0) -f filename full pathname of file to snoop -m mount_point mountpoint for filesystem to snoop -n name process name -p PID process ID
Default output, print I/O activity as it occurs, # iosnoop Print human readable timestamps, # iosnoop -v Print major and minor numbers, # iosnoop -N Snoop events on the root filesystem only, # iosnoop -m / FIELDS UID User ID PID Process ID PPID Parent Process ID COMM command name for the process ARGS argument listing for the process SIZE size of the operation, bytes BLOCK disk block for the operation (location relative to this filesystem; more useful with the -N option to print major and minor numbers) STIME timestamp for the disk request, us TIME timestamp for the disk completion, us DELTA elapsed time from request to completion, us (this is the elapsed time from the disk request (strategy) to the disk completion (iodone)) DTIME time for disk to complete request, us (this is the time for the disk to complete that event since its last event (time between iodones), or, the time to the strategy if the disk had been idle) STRTIME timestamp for the disk completion, string DEVICE device name INS device instance number D direction, Read or Write MOUNT mount point FILE filename (basename) for I/O operation NOTES When filtering on PID or process name, be aware that poor disk event times may be due to events that have been filtered away, for example another process that may be seeking the disk heads elsewhere. DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT iosnoop will run forever until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO iotop(1M), dtrace(1M) version 1.50 July 25, 2005 iosnoop(1m)
atsutil
The atsutil tool controls some aspects of the font registration system. It may be used to remove font registration databases and caches, or to list active fonts. fonts enumerates all fonts available to processes in the OS. It also performs a consistency check of the fonts available via font management APIs. databases will remove fontd System or User databases along with any cache files. Removing databases may cause the loss of font registration state: fonts activated outside the standard font directories, font faces disabled, and font libraries. New databases will be regenerated from fonts installed in the standard font directories after the user logs out, restarts, or the fontd server is restarted.
atsutil – font registration system utility
atsutil fonts [-list] atsutil databases [-remove | -removeUser] atsutil help
fonts [-list] -list lists and performs a consistency check on registered fonts. databases [-remove | -removeUser] -remove remove fontd databases for the active user and the system (the system databases are used when no one is logged in and by some background processes). -removeUser remove fontd databases for the active user only. SEE ALSO fontd(8) HISTORY atsutil first appeared in Mac OS X 10.5. Mac OS X 2008-12-06 Mac OS X
null
osalang
osalang prints information about installed OSA languages. With no options, it prints an unadorned list of language names to standard output. These names can be passed to the -l options of osacompile(1) and osascript(1). The options are as follows: -d Only print the default language. -l List in long format. For each language, osalang will print its component subtype, manufacturer, and capability flags. There are eight groups of optional routines that scripting components can support. Each flag is either a letter, meaning the group is supported, or ‘-’, meaning it is not. The letters map to the following groups: c compiling scripts. g getting source data. x coercing script values. e manipulating the event create and send functions. r recording scripts. v “convenience” APIs to execute scripts in one step. d manipulating dialects. h using scripts to handle Apple Events. -L Same as -l, but also prints the description of each component after its name. SEE ALSO osacompile(1), osascript(1) Mac OS X May 1, 2001 Mac OS X
osalang – information about installed OSA languages
osalang [-dlL]
null
null
dirname
The basename utility deletes any prefix ending with the last slash ‘/’ character present in string (after first stripping trailing slashes), and a suffix, if given. The suffix is not stripped if it is identical to the remaining characters in string. The resulting filename is written to the standard output. A non-existent suffix is ignored. If -a is specified, then every argument is treated as a string as if basename were invoked with just one argument. If -s is specified, then the suffix is taken as its argument, and all other arguments are treated as a string. The dirname utility deletes the filename portion, beginning with the last slash ‘/’ character to the end of string (after first stripping trailing slashes), and writes the result to the standard output. EXIT STATUS The basename and dirname utilities exit 0 on success, and >0 if an error occurs.
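The stripping rules above are easy to check interactively (the paths are illustrative):

```shell
basename /usr/src/main.c .c     # directory prefix and .c suffix removed: prints "main"
basename -a -s .o foo.o bar.o   # several arguments at once: prints "foo", then "bar"
dirname /usr/src/main.c         # filename portion removed: prints "/usr/src"
```

Note that both utilities operate purely on the strings given; the paths need not exist.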
basename, dirname – return filename or directory portion of pathname
basename string [suffix] basename [-a] [-s suffix] string [...] dirname string [...]
null
The following line sets the shell variable FOO to /usr/bin. FOO=`dirname /usr/bin/trail` SEE ALSO csh(1), sh(1), basename(3), dirname(3) STANDARDS The basename and dirname utilities are expected to be IEEE Std 1003.2 (“POSIX.2”) compatible. HISTORY The basename and dirname utilities first appeared in 4.4BSD. macOS 14.5 May 26, 2020 macOS 14.5
pod2man
pod2man is a wrapper script around the Pod::Man module, using it to generate *roff input from POD source. The resulting *roff code is suitable for display on a terminal using nroff(1), normally via man(1), or printing using troff(1). By default (on non-EBCDIC systems), pod2man outputs UTF-8 manual pages. Its output should work with the man program on systems that use groff (most Linux distributions) or mandoc (most BSD variants), but may result in mangled output on older UNIX systems. To choose a different, possibly more backward-compatible output mangling on such systems, use "--encoding=roff" (the default in earlier Pod::Man versions). See the --encoding option and "ENCODING" in Pod::Man for more details. input is the file to read for POD source (the POD can be embedded in code). If input isn't given, it defaults to "STDIN". output, if given, is the file to which to write the formatted output. If output isn't given, the formatted output is written to "STDOUT". Several POD files can be processed in the same pod2man invocation (saving module load and compile times) by providing multiple pairs of input and output files on the command line. --section, --release, --center, --date, and --official can be used to set the headers and footers to use. If not given, Pod::Man will assume various defaults. See below for details.
pod2man - Convert POD data to formatted *roff input
pod2man [--center=string] [--date=string] [--encoding=encoding] [--errors=style] [--fixed=font] [--fixedbold=font] [--fixeditalic=font] [--fixedbolditalic=font] [--guesswork=rule[,rule...]] [--name=name] [--nourls] [--official] [--release=version] [--section=manext] [--quotes=quotes] [--lquote=quote] [--rquote=quote] [--stderr] [--utf8] [--verbose] [input [output] ...] pod2man --help
Each option is annotated with the version of podlators in which that option was added with its current meaning. -c string, --center=string [1.00] Sets the centered page header for the ".TH" macro to string. The default is "User Contributed Perl Documentation", but also see --official below. -d string, --date=string [4.00] Set the left-hand footer string for the ".TH" macro to string. By default, the first of POD_MAN_DATE, SOURCE_DATE_EPOCH, the modification date of the input file, or the current date (if input comes from "STDIN") will be used, and the date will be in UTC. See "CLASS METHODS" in Pod::Man for more details. -e encoding, --encoding=encoding [5.00] Specifies the encoding of the output. encoding must be an encoding recognized by the Encode module (see Encode::Supported). The default on non-EBCDIC systems is UTF-8. If the output contains characters that cannot be represented in this encoding, that is an error that will be reported as configured by the --errors option. If error handling is other than "die", the unrepresentable character will be replaced with the Encode substitution character (normally "?"). If the "encoding" option is set to the special value "groff" (the default on EBCDIC systems), or if the Encode module is not available and the encoding is set to anything other than "roff" (see below), Pod::Man will translate all non-ASCII characters to "\[uNNNN]" Unicode escapes. These are not traditionally part of the *roff language, but are supported by groff and mandoc and thus by the majority of manual page processors in use today. If encoding is set to the special value "roff", pod2man will do its historic transformation of (some) ISO 8859-1 characters into *roff escapes that may be adequate in troff and may be readable (if ugly) in nroff. This was the default behavior of versions of pod2man before 5.00. With this encoding, all other non-ASCII characters will be replaced with "X". 
It may be required for very old troff and nroff implementations that do not support UTF-8, but its representation of any non-ASCII character is very poor and often specific to European languages. Its use is discouraged. WARNING: The input encoding of the POD source is independent from the output encoding, and setting this option does not affect the interpretation of the POD input. Unless your POD source is US-ASCII, its encoding should be declared with the "=encoding" command in the source. If this is not done, Pod::Simple will attempt to guess the encoding and may be successful if it's Latin-1 or UTF-8, but it will produce warnings. See perlpod(1) for more information. --errors=style [2.5.0] Set the error handling style. "die" says to throw an exception on any POD formatting error. "stderr" says to report errors on standard error, but not to throw an exception. "pod" says to include a POD ERRORS section in the resulting documentation summarizing the errors. "none" ignores POD errors entirely, as much as possible. The default is "die". --fixed=font [1.0] The fixed-width font to use for verbatim text and code. Defaults to "CW". Some systems may want "CR" instead. Only matters for troff output. --fixedbold=font [1.0] Bold version of the fixed-width font. Defaults to "CB". Only matters for troff output. --fixeditalic=font [1.0] Italic version of the fixed-width font (something of a misnomer, since most fixed-width fonts only have an oblique version, not an italic version). Defaults to "CI". Only matters for troff output. --fixedbolditalic=font [1.0] Bold italic (in theory, probably oblique in practice) version of the fixed-width font. Pod::Man doesn't assume you have this, and defaults to "CB". Some systems (such as Solaris) have this font available as "CX". Only matters for troff output. --guesswork=rule[,rule...] 
[5.00] By default, pod2man applies some default formatting rules based on guesswork and regular expressions that are intended to make writing Perl documentation easier and require less explicit markup. These rules may not always be appropriate, particularly for documentation that isn't about Perl. This option allows turning all or some of it off. The special rule "all" enables all guesswork. This is also the default for backward compatibility reasons. The special rule "none" disables all guesswork. Otherwise, the value of this option should be a comma-separated list of one or more of the following keywords: functions Convert function references like foo() to bold even if they have no markup. The function name accepts valid Perl characters for function names (including ":"), and the trailing parentheses must be present and empty. manref Make the first part (before the parentheses) of man page references like foo(1) bold even if they have no markup. The section must be a single number optionally followed by lowercase letters. quoting If no guesswork is enabled, any text enclosed in C<> is surrounded by double quotes in nroff (terminal) output unless the contents are already quoted. When this guesswork is enabled, quote marks will also be suppressed for Perl variables, function names, function calls, numbers, and hex constants. variables Convert Perl variable names to a fixed-width font even if they have no markup. This transformation will only be apparent in troff output, or some other output format (unlike nroff terminal output) that supports fixed-width fonts. Any unknown guesswork name is silently ignored (for potential future compatibility), so be careful about spelling. -h, --help [1.00] Print out usage information. -l, --lax [1.00] No longer used. pod2man used to check its input for validity as a manual page, but this should now be done by podchecker(1) instead. Accepted for backward compatibility; this option no longer does anything. 
--language=language [5.00] Add commands telling groff that the input file is in the given language. The value of this setting must be a language abbreviation for which groff provides supplemental configuration, such as "ja" (for Japanese) or "zh" (for Chinese). This adds: .mso <language>.tmac .hla <language> to the start of the file, which configure correct line breaking for the specified language. Without these commands, groff may not know how to add proper line breaks for Chinese and Japanese text if the man page is installed into the normal man page directory, such as /usr/share/man. On many systems, this will be done automatically if the man page is installed into a language-specific man page directory, such as /usr/share/man/zh_CN. In that case, this option is not required. Unfortunately, the commands added with this option are specific to groff and will not work with other troff and nroff implementations. --lquote=quote --rquote=quote [4.08] Sets the quote marks used to surround C<> text. --lquote sets the left quote mark and --rquote sets the right quote mark. Either may also be set to the special value "none", in which case no quote mark is added on that side of C<> text (but the font is still changed for troff output). Also see the --quotes option, which can be used to set both quotes at once. If both --quotes and one of the other options is set, --lquote or --rquote overrides --quotes. -n name, --name=name [4.08] Set the name of the manual page for the ".TH" macro to name. Without this option, the manual name is set to the uppercased base name of the file being converted unless the manual section is 3, in which case the path is parsed to see if it is a Perl module path. If it is, a path like ".../lib/Pod/Man.pm" is converted into a name like "Pod::Man". This option, if given, overrides any automatic determination of the name. 
Although one does not have to follow this convention, be aware that the convention for UNIX manual pages is for the title to be in all- uppercase, even if the command isn't. (Perl modules traditionally use mixed case for the manual page title, however.) This option is probably not useful when converting multiple POD files at once. When converting POD source from standard input, the name will be set to "STDIN" if this option is not provided. Providing this option is strongly recommended to set a meaningful manual page name. --nourls [2.5.0] Normally, L<> formatting codes with a URL but anchor text are formatted to show both the anchor text and the URL. In other words: L<foo|http://example.com/> is formatted as: foo <http://example.com/> This flag, if given, suppresses the URL when anchor text is given, so this example would be formatted as just "foo". This can produce less cluttered output in cases where the URLs are not particularly important. -o, --official [1.00] Set the default header to indicate that this page is part of the standard Perl release, if --center is not also given. -q quotes, --quotes=quotes [4.00] Sets the quote marks used to surround C<> text to quotes. If quotes is a single character, it is used as both the left and right quote. Otherwise, it is split in half, and the first half of the string is used as the left quote and the second is used as the right quote. quotes may also be set to the special value "none", in which case no quote marks are added around C<> text (but the font is still changed for troff output). Also see the --lquote and --rquote options, which can be used to set the left and right quotes independently. If both --quotes and one of the other options is set, --lquote or --rquote overrides --quotes. -r version, --release=version [1.00] Set the centered footer for the ".TH" macro to version. By default, this is set to the version of Perl you run pod2man under. 
Setting this to the empty string will cause some *roff implementations to use the system default value. Note that some system "an" macro sets assume that the centered footer will be a modification date and will prepend something like "Last modified: ". If this is the case for your target system, you may want to set --release to the last modified date and --date to the version number. -s string, --section=string [1.00] Set the section for the ".TH" macro. The standard section numbering convention is to use 1 for user commands, 2 for system calls, 3 for functions, 4 for devices, 5 for file formats, 6 for games, 7 for miscellaneous information, and 8 for administrator commands. There is a lot of variation here, however; some systems (like Solaris) use 4 for file formats, 5 for miscellaneous information, and 7 for devices. Still others use 1m instead of 8, or some mix of both. About the only section numbers that are reliably consistent are 1, 2, and 3. By default, section 1 will be used unless the file ends in ".pm", in which case section 3 will be selected. --stderr [2.1.3] By default, pod2man dies if any errors are detected in the POD input. If --stderr is given and no --errors flag is present, errors are sent to standard error, but pod2man does not abort. This is equivalent to "--errors=stderr" and is supported for backward compatibility. -u, --utf8 [2.1.0] This option used to tell pod2man to produce UTF-8 output. Since this is now the default as of version 5.00, it is ignored and does nothing. -v, --verbose [1.11] Print out the name of each output file as it is being generated. EXIT STATUS As long as all documents processed result in some output, even if that output includes errata (a "POD ERRORS" section generated with "--errors=pod"), pod2man will exit with status 0. If any of the documents being processed do not result in an output document, pod2man will exit with status 1. 
If there are syntax errors in a POD document being processed and the error handling style is set to the default of "die", pod2man will abort immediately with exit status 255. DIAGNOSTICS If pod2man fails with errors, see Pod::Man and Pod::Simple for information about what those errors might mean.
pod2man program > program.1 pod2man SomeModule.pm /usr/perl/man/man3/SomeModule.3 pod2man --section=7 note.pod > note.7 If you would like to print out a lot of man pages continuously, you probably want to set the C and D registers to set contiguous page numbering and even/odd paging, at least on some versions of man(7). troff -man -rC1 -rD1 perl.1 perldata.1 perlsyn.1 ... To get index entries on "STDERR", turn on the F register, as in: troff -man -rF1 perl.1 The indexing merely outputs messages via ".tm" for each major page, section, subsection, item, and any "X<>" directives. AUTHOR Russ Allbery <rra@cpan.org>, based on the original pod2man by Larry Wall and Tom Christiansen. COPYRIGHT AND LICENSE Copyright 1999-2001, 2004, 2006, 2008, 2010, 2012-2019, 2022 Russ Allbery <rra@cpan.org> This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Pod::Man, Pod::Simple, man(1), nroff(1), perlpod(1), podchecker(1), perlpodstyle(1), troff(1), man(7) The man page documenting the an macro set may be man(5) instead of man(7) on your system. The current version of this script is always available from its web site at <https://www.eyrie.org/~eagle/software/podlators/>. It is also part of the Perl core distribution as of 5.6.0. perl v5.38.2 2023-11-28 POD2MAN(1)
enc2xs
enc2xs builds a Perl extension for use by Encode from either Unicode Character Mapping files (.ucm) or Tcl Encoding Files (.enc). Besides being used internally during the build process of the Encode module, you can use enc2xs to add your own encoding to perl. No knowledge of XS is necessary. Quick Guide If you want to know as little about Perl as possible but need to add a new encoding, just read this chapter and forget the rest. 0. Have a .ucm file ready. You can get it from somewhere or you can write your own from scratch or you can grab one from the Encode distribution and customize it. For the UCM format, see the next Chapter. In the example below, I'll call my theoretical encoding myascii, defined in my.ucm. "$" is a shell prompt. $ ls -F my.ucm 1. Issue a command as follows; $ enc2xs -M My my.ucm generating Makefile.PL generating My.pm generating README generating Changes Now take a look at your current directory. It should look like this. $ ls -F Makefile.PL My.pm my.ucm t/ The following files were created. Makefile.PL - MakeMaker script My.pm - Encode submodule t/My.t - test file 1.1. If you want *.ucm installed together with the modules, do as follows; $ mkdir Encode $ mv *.ucm Encode $ enc2xs -M My Encode/*ucm 2. Edit the files generated. You don't have to if you have no time AND no intention to give it to someone else. But it is a good idea to edit the pod and to add more tests. 3. Now issue a command all Perl Mongers love: $ perl Makefile.PL Writing Makefile for Encode::My 4. Now all you have to do is make. $ make cp My.pm blib/lib/Encode/My.pm /usr/local/bin/perl /usr/local/bin/enc2xs -Q -O \ -o encode_t.c -f encode_t.fnm Reading myascii (myascii) Writing compiled form 128 bytes in string tables 384 bytes (75%) saved spotting duplicates 1 bytes (0.775%) saved using substrings .... chmod 644 blib/arch/auto/Encode/My/My.bs $ The time it takes varies depending on how fast your machine is and how large your encoding is. 
Unless you are working on something big like euc-tw, it won't take too long. 5. You can "make install" already but you should test first. $ make test PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib \ -e 'use Test::Harness qw(&runtests $verbose); \ $verbose=0; runtests @ARGV;' t/*.t t/My....ok All tests successful. Files=1, Tests=2, 0 wallclock secs ( 0.09 cusr + 0.01 csys = 0.09 CPU) 6. If you are content with the test result, just "make install" 7. If you want to add your encoding to Encode's demand-loading list (so you don't have to "use Encode::YourEncoding"), run enc2xs -C to update Encode::ConfigLocal, a module that controls local settings. After that, "use Encode;" is enough to load your encodings on demand. The Unicode Character Map Encode uses the Unicode Character Map (UCM) format for source character mappings. This format is used by IBM's ICU package and was adopted by Nick Ing-Simmons for use with the Encode module. Since UCM is more flexible than Tcl's Encoding Map and far more user-friendly, this is the recommended format for Encode now. A UCM file looks like this. # # Comments # <code_set_name> "US-ascii" # Required <code_set_alias> "ascii" # Optional <mb_cur_min> 1 # Required; usually 1 <mb_cur_max> 1 # Max. # of bytes/char <subchar> \x3F # Substitution char # CHARMAP <U0000> \x00 |0 # <control> <U0001> \x01 |0 # <control> <U0002> \x02 |0 # <control> .... <U007C> \x7C |0 # VERTICAL LINE <U007D> \x7D |0 # RIGHT CURLY BRACKET <U007E> \x7E |0 # TILDE <U007F> \x7F |0 # <control> END CHARMAP • Anything that follows "#" is treated as a comment. • The header section continues until a line containing the word CHARMAP. This section has a form of <keyword> value, one pair per line. Strings used as values must be quoted. Barewords are treated as numbers. \xXX represents a byte. Most of the keywords are self-explanatory. subchar means substitution character, not subcharacter. 
When you decode a Unicode sequence to this encoding but no matching character is found, the byte sequence defined here will be used. For most cases, the value here is \x3F; in ASCII, this is a question mark. • CHARMAP starts the character map section. Each line has a form as follows: <UXXXX> \xXX.. |0 # comment ^ ^ ^ | | +- Fallback flag | +-------- Encoded byte sequence +-------------- Unicode Character ID in hex The format is roughly the same as a header section except for the fallback flag: | followed by 0..3. The meaning of the possible values is as follows: |0 Round trip safe. A character decoded to Unicode encodes back to the same byte sequence. Most characters have this flag. |1 Fallback for unicode -> encoding. When seen, enc2xs adds this character for the encode map only. |2 Skip sub-char mapping should there be no code point. |3 Fallback for encoding -> unicode. When seen, enc2xs adds this character for the decode map only. • And finally, END OF CHARMAP ends the section. When you are manually creating a UCM file, you should copy ascii.ucm or an existing encoding which is close to yours, rather than write your own from scratch. When you do so, make sure you leave at least U0000 to U0020 as is, unless your environment is EBCDIC. CAVEAT: not all features in UCM are implemented. For example, icu:state is not used. Because of that, you need to write a perl module if you want to support algorithmical encodings, notably the ISO-2022 series. Such modules include Encode::JP::2022_JP, Encode::KR::2022_KR, and Encode::TW::HZ. Coping with duplicate mappings When you create a map, you SHOULD make your mappings round-trip safe. That is, encode('your-encoding', decode('your-encoding', $data)) eq $data stands for all characters that are marked as "|0". Here is how to make sure: • Sort your map in Unicode order. • When you have a duplicate entry, mark either one with '|1' or '|3'. • And make sure the '|1' or '|3' entry FOLLOWS the '|0' entry. 
Here is an example from big5-eten. <U2550> \xF9\xF9 |0 <U2550> \xA2\xA4 |3 Internally Encoding -> Unicode and Unicode -> Encoding Map looks like this; E to U U to E -------------------------------------- \xF9\xF9 => U2550 U2550 => \xF9\xF9 \xA2\xA4 => U2550 So it is round-trip safe for \xF9\xF9. But if the line above is upside down, here is what happens. E to U U to E -------------------------------------- \xA2\xA4 => U2550 U2550 => \xF9\xF9 (\xF9\xF9 => U2550 is now overwritten!) The Encode package comes with ucmlint, a crude but sufficient utility to check the integrity of a UCM file. Check under the Encode/bin directory for this. When in doubt, you can use ucmsort, yet another utility under Encode/bin directory. Bookmarks • ICU Home Page <http://www.icu-project.org/> • ICU Character Mapping Tables <http://site.icu-project.org/charts/charset> • ICU:Conversion Data <http://www.icu-project.org/userguide/conversion-data.html> SEE ALSO Encode, perlmod, perlpod perl v5.38.2 2023-11-28 ENC2XS(1)
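The way the two maps are built from flagged entries can be sketched in Python (a conceptual model only, not how enc2xs actually compiles its tables; the tuple representation of the UCM lines is invented for illustration):

```python
# Conceptual model: each UCM line becomes (unicode_id, bytes, flag).
# |0 feeds both maps; |1 feeds only the encode map; |3 only the decode map.
entries = [
    ("U2550", b"\xF9\xF9", 0),  # round-trip safe entry
    ("U2550", b"\xA2\xA4", 3),  # decode-only fallback, FOLLOWS the |0 entry
]

to_unicode = {}  # E to U (decode map)
to_bytes = {}    # U to E (encode map)
for ucs, raw, flag in entries:
    if flag in (0, 3):
        to_unicode[raw] = ucs
    if flag in (0, 1):
        to_bytes[ucs] = raw  # a later entry for the same ucs would overwrite

# Round-trip safe: U2550 still encodes back to \xF9\xF9 ...
assert to_bytes["U2550"] == b"\xF9\xF9"
# ... while both byte sequences decode to U2550.
assert to_unicode[b"\xF9\xF9"] == to_unicode[b"\xA2\xA4"] == "U2550"
```

Marking the duplicate with |3 keeps it out of the encode map, so the |0 entry's round trip survives regardless of which byte sequence is decoded.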
enc2xs -- Perl Encode Module Generator
enc2xs -[options] enc2xs -M ModName mapfiles... enc2xs -C
null
null
shasum5.34
Running shasum is often the quickest way to compute SHA message digests. The user simply feeds data to the script through files or standard input, and then collects the results from standard output. The following command shows how to compute digests for typical inputs such as the NIST test vector "abc": perl -e "print qq(abc)" | shasum Or, if you want to use SHA-256 instead of the default SHA-1, simply say: perl -e "print qq(abc)" | shasum -a 256 Since shasum mimics the behavior of the combined GNU sha1sum, sha224sum, sha256sum, sha384sum, and sha512sum programs, you can install this script as a convenient drop-in replacement. Unlike the GNU programs, shasum encompasses the full SHA standard by allowing partial-byte inputs. This is accomplished through the BITS option (-0). The following example computes the SHA-224 digest of the 7-bit message 0001100: perl -e "print qq(0001100)" | shasum -0 -a 224 AUTHOR Copyright (C) 2003-2018 Mark Shelor <mshelor@cpan.org>. SEE ALSO shasum is implemented using the Perl module Digest::SHA. perl v5.34.1 2024-04-13 SHASUM(1)
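For comparison, the same "abc" test vector can be reproduced with Python's hashlib (shown only to illustrate the expected digests; hashlib works on whole bytes, so it cannot express shasum's partial-byte BITS mode):

```python
import hashlib

# The NIST "abc" test vector, computed with hashlib instead of
# piping into shasum.
print(hashlib.sha1(b"abc").hexdigest())
# a9993e364706816aba3e25717850c26c9cd0d89d

# The SHA-256 equivalent of `shasum -a 256`:
print(hashlib.sha256(b"abc").hexdigest())
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```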
shasum - Print or Check SHA Checksums
Usage: shasum [OPTION]... [FILE]... Print or check SHA checksums. With no FILE, or when FILE is -, read standard input. -a, --algorithm 1 (default), 224, 256, 384, 512, 512224, 512256 -b, --binary read in binary mode -c, --check read SHA sums from the FILEs and check them --tag create a BSD-style checksum -t, --text read in text mode (default) -U, --UNIVERSAL read in Universal Newlines mode produces same digest on Windows/Unix/Mac -0, --01 read in BITS mode ASCII '0' interpreted as 0-bit, ASCII '1' interpreted as 1-bit, all other characters ignored The following five options are useful only when verifying checksums: --ignore-missing don't fail or report status for missing files -q, --quiet don't print OK for each successfully verified file -s, --status don't output anything, status code shows success --strict exit non-zero for improperly formatted checksum lines -w, --warn warn about improperly formatted checksum lines -h, --help display this help and exit -v, --version output version information and exit When verifying SHA-512/224 or SHA-512/256 checksums, indicate the algorithm explicitly using the -a option, e.g. shasum -a 512224 -c checksumfile The sums are computed as described in FIPS PUB 180-4. When checking, the input should be a former output of this program. The default mode is to print a line with checksum, a character indicating type (`*' for binary, ` ' for text, `U' for UNIVERSAL, `^' for BITS), and name for each FILE. The line starts with a `\' character if the FILE name contains either newlines or backslashes, which are then replaced by the two-character sequences `\n' and `\\' respectively. Report shasum bugs to mshelor@cpan.org
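A minimal sketch of the verification step (what `shasum -c` does for a single checksum line) in Python; the filename and file contents are invented:

```python
import hashlib

# Each line of a checksum file is "digest<space><type char><filename>",
# where the type char is ' ' for text, '*' for binary, etc.
files = {"greeting.txt": b"hello\n"}  # stand-in for reading from disk

def check_line(line):
    digest, rest = line.rstrip("\n").split(" ", 1)
    type_char, name = rest[0], rest[1:]
    return hashlib.sha1(files[name]).hexdigest() == digest

# Build the line the way `shasum greeting.txt` would, then verify it.
line = hashlib.sha1(files["greeting.txt"]).hexdigest() + "  greeting.txt"
print(check_line(line))  # True
```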
null
null
iopending
This samples the number of disk events that are pending and plots a distribution graph. By doing this the "serialness" or "parallelness" of disk behaviour can be distinguished. A high occurrence of a pending value of more than 1 is an indication of saturation. Since this uses DTrace, only users with root privileges can run this command.
iopending - plot number of pending disk events. Uses DTrace.
iopending [-c] [-d device] [-f filename] [-m mount_point] [interval [count]]
-c clear screen -d device instance name to snoop (eg, dad0) -f filename full pathname of file to snoop -m mount_point mountpoint for filesystem to snoop
Default output, print I/O summary every 1 second, # iopending Print 10 second samples, # iopending 10 Print 12 x 5 second samples, # iopending 5 12 Snoop events on the root filesystem only, # iopending -m / FIELDS value number of pending events, 0 == idle count number of samples @ 1000 Hz load 1 min load average disk_r total disk read Kb for sample disk_w total disk write Kb for sample IDEA Dr Rex di Bona DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT iopending will run forever until Ctrl-C is hit, or the specified count is reached. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO iosnoop(1M), iotop(1M), dtrace(1M) version 0.60 November 1, 2005 iopending(1m)
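What iopending measures can be modelled in a few lines of Python: a counter tracks outstanding I/O, and periodic samples of that counter form the plotted distribution (the event trace below is invented; the real script samples at 1000 Hz via DTrace):

```python
from collections import Counter

# Conceptual model of iopending: the pending counter is incremented
# on each disk request start and decremented on each completion;
# each "sample" records the current depth into the distribution.
events = ["start", "start", "sample", "done", "sample",
          "start", "sample", "done", "done", "sample"]

pending = 0
dist = Counter()
for ev in events:
    if ev == "start":
        pending += 1
    elif ev == "done":
        pending -= 1
    else:  # "sample"
        dist[pending] += 1

# Samples concentrated at pending > 1 would indicate parallel,
# possibly saturated, disk behaviour.
print(sorted(dist.items()))  # [(0, 1), (1, 1), (2, 2)]
```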
ncal
The cal utility displays a simple calendar in traditional format and ncal offers an alternative layout, more options and the date of Easter. The new format is a little cramped but it makes a year fit on a 25x80 terminal. If arguments are not specified, the current month is displayed. The options are as follows: -h Turns off highlighting of today. -J Display Julian Calendar, if combined with the -e option, display date of Easter according to the Julian Calendar. -e Display date of Easter (for western churches). -j Display Julian days (days one-based, numbered from January 1). -m month Display the specified month. If month is specified as a decimal number, it may be followed by the letter ‘f’ or ‘p’ to indicate the following or preceding month of that number, respectively. -o Display date of Orthodox Easter (Greek and Russian Orthodox Churches). -p Print the country codes and switching days from Julian to Gregorian Calendar as they are assumed by ncal. The country code as determined from the local environment is marked with an asterisk. -s country_code Assume the switch from Julian to Gregorian Calendar at the date associated with the country_code. If not specified, ncal tries to guess the switch date from the local environment or falls back to September 2, 1752. This was when Great Britain and her colonies switched to the Gregorian Calendar. -w Print the number of the week below each week column. -y Display a calendar for the specified year. -3 Display the previous, current and next month surrounding today. -A number Display the number of months after the current month. -B number Display the number of months before the current month. -C Switch to cal mode. -N Switch to ncal mode. -d yyyy-mm Use yyyy-mm as the current date (for debugging of date selection). -H yyyy-mm-dd Use yyyy-mm-dd as the current date (for debugging of highlighting). 
A single parameter specifies the year (1–9999) to be displayed; note the year must be fully specified: “cal 89” will not display a calendar for 1989. Two parameters denote the month and year; the month is either a number between 1 and 12, or a full or abbreviated name as specified by the current locale. Month and year default to those of the current system clock and time zone (so “cal -m 8” will display a calendar for the month of August in the current year). Not all options can be used together. For example “-3 -A 2 -B 3 -y -m 7” would mean: show me the three months around the seventh month, three before that, two after that and the whole year. ncal will warn about these combinations. A year starts on January 1. Highlighting of dates is disabled if stdout is not a tty. SEE ALSO calendar(3), strftime(3) STANDARDS The cal utility is compliant with the X/Open System Interfaces option of the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-3hyJeopw], as well as the ability to specify a month name as a single argument, are extensions to that specification. The week number computed by -w is compliant with the ISO 8601 specification. HISTORY A cal command appeared in Version 1 AT&T UNIX. The ncal command appeared in FreeBSD 2.2.6. AUTHORS The ncal command and manual were written by Wolfgang Helbig <helbig@FreeBSD.org>. BUGS The assignment of Julian–Gregorian switching dates to country codes is historically naive for many countries. Not all options are compatible and using them in different orders will give varying results. It is not possible to display Monday as the first day of the week with cal. macOS 14.5 March 7, 2019 macOS 14.5
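The ISO 8601 week numbering used by the -w option (and the traditional Sunday-first month layout) can be reproduced with Python's standard library; this illustrates the rules, not ncal itself:

```python
import calendar
from datetime import date

# ncal -w prints ISO 8601 week numbers; isocalendar() follows the
# same definition. March 7, 2019 falls in ISO week 10.
print(date(2019, 3, 7).isocalendar()[1])  # 10

# cal's traditional single-month layout, weeks starting on Sunday:
calendar.setfirstweekday(calendar.SUNDAY)
print(calendar.month(2019, 3))
```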
cal, ncal – displays a calendar and the date of Easter
cal [-3hjy] [-A number] [-B number] [[month] year] cal [-3hj] [-A number] [-B number] -m month [year] ncal [-3hjJpwy] [-A number] [-B number] [-s country_code] [[month] year] ncal [-3hJeo] [-A number] [-B number] [year] ncal [-CN] [-H yyyy-mm-dd] [-d yyyy-mm]
null
null
yamlpp-load-dump5.34
null
null
null
null
null
chflags
The chflags utility modifies the file flags of the listed files as specified by the flags operand. The options are as follows: -f Do not display a diagnostic message if chflags could not modify the flags for file, nor modify the exit status to reflect such failures. -H If the -R option is specified, symbolic links on the command line are followed and hence unaffected by the command. (Symbolic links encountered during traversal are not followed.) -h If the file is a symbolic link, change the file flags of the link itself rather than the file to which it points. -L If the -R option is specified, all symbolic links are followed. -P If the -R option is specified, no symbolic links are followed. This is the default. -R Change the file flags of the file hierarchies rooted in the files, instead of just the files themselves. Beware of unintentionally matching the “..” hard link to the parent directory when using wildcards like “.*”. -v Cause chflags to be verbose, showing filenames as the flags are modified. If the -v option is specified more than once, the old and new flags of the file will also be printed, in octal notation. -x Do not cross mount points. The flags are specified as an octal number or a comma separated list of keywords. The following keywords are currently defined: arch, archived set the archived flag (super-user only) nodump set the nodump flag (owner or super-user only) opaque set the opaque flag (owner or super-user only) [Directory is opaque when viewed through a union mount] sappnd, sappend set the system append-only flag (super-user only) schg, schange, simmutable set the system immutable flag (super-user only) uappnd, uappend set the user append-only flag (owner or super-user only) uchg, uchange, uimmutable set the user immutable flag (owner or super-user only) hidden set the hidden flag [Hide item from GUI] Putting the letters “no” before or removing the letters “no” from a keyword causes the flag to be cleared. 
For example: nouchg clear the user immutable flag (owner or super-user only) dump clear the nodump flag (owner or super-user only) Unless the -H or -L options are given, chflags on a symbolic link always succeeds and has no effect. The -H, -L and -P options are ignored unless the -R option is specified. In addition, these options override each other and the command's actions are determined by the last one specified. You can use "ls -lO" to see the flags of existing files. If chflags receives a SIGINFO signal (see the status argument for stty(1)), then the current filename as well as the old and new flags are displayed. EXIT STATUS The chflags utility exits 0 on success, and >0 if an error occurs.
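The keyword handling described above ("no" in front of a keyword clears the flag, and removing "no" from nodump clears that flag) can be sketched as follows; the numeric bit values are illustrative stand-ins, not necessarily the kernel's UF_*/SF_* constants:

```python
# Sketch of chflags-style keyword parsing over a small flag table.
FLAGS = {"uchg": 0x2, "nodump": 0x1, "hidden": 0x8000}

def parse_flags(spec, current=0):
    for word in spec.split(","):
        if word in FLAGS:                            # plain keyword sets it
            current |= FLAGS[word]
        elif word.startswith("no") and word[2:] in FLAGS:
            current &= ~FLAGS[word[2:]]              # "no" prefix clears it
        elif word == "dump":                         # "no" removed from nodump
            current &= ~FLAGS["nodump"]
        else:
            raise ValueError(f"unknown flag: {word}")
    return current

print(parse_flags("uchg,hidden"))   # 32770 (0x8002)
print(parse_flags("nouchg", 0x2))   # 0
```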
chflags – change file flags
chflags [-fhvx] [-R [-H | -L | -P]] flags file ...
null
Recursively clear all flags on files and directories contained within the foobar directory hierarchy: chflags -R 0 foobar SEE ALSO ls(1), chflags(2), stat(2), fts(3), symlink(7) HISTORY The chflags command first appeared in 4.4BSD. BUGS Only a limited number of utilities are chflags aware. Some of these tools include ls(1), cp(1), find(1), install(1), dump(8), and restore(8). In particular a tool which is not currently chflags aware is the pax(1) utility. macOS 14.5 June 12, 2018 macOS 14.5
derq
The derq command queries DER encoded entitlements using the CoreEntitlements library. It currently supports querying from a Mach-O, a file or input stream, as well as directly from a process using csops(2). After a successful execution of the query statements on the input, derq will output the active DER context to the output.
derq – Query and manipulate DER entitlements.
derq query [--pretty] [--raw] [--xml] [-f format] [-i input] [-o output] ⟨query statements⟩ derq csops [-p pid] [-o output] [--xml] ⟨query statements⟩ derq macho [-i input] [-o output] [--xml] ⟨query statements⟩
A list of flags and their descriptions: --pretty When specified, derq will print the active context in a textual representation to stderr. --raw Signifies that the input might not be a DER encoded entitlements blob. This forces derq to treat the input as a raw DER object. In particular, this means that if a V1 entitlements blob is passed in, the active context will be set to the outer metadata object, and not the inner entitlements dictionary. --xml Instruct the macho or csops subcommands to query the embedded XML blob instead of the embedded DER blob. Using this flag on the query command will change the output format to be an XML plist. -i input Allows you to specify which file should be used as the input. If not specified "-" is assumed, which signifies that the input will follow on stdin. -o output Allows you to specify which file should be used as the output. If not specified "-" is assumed, which signifies that derq should use stdout for output. -p pid Specifies the pid of a running process from which derq should extract the DER entitlements blob to be used as input. -f format Specifies what format the input is. If this flag isn't passed in, DER is assumed. The other supported format is "xml". query statements ... A space-separated list of operations to be executed left-to-right. The operation syntax is described in SYNTAX. SYNTAX DERQL has a very simple syntax that consists of a series of operations that are executed one after another. Execution stops either when the last operation is executed or an operation induces the execution engine into an invalid state. There are many operations that can produce an invalid state, such as selecting a key that doesn't exist, or indexing an array past the bounds. Invalid state is also produced when a matching operation fails. Currently derq supports 4 operations: CESelectIndex This operation selects an index in a zero-indexed array.
Any query statement that starts with a number character (0-9) implies the start of a CESelectIndex operation. Example invocation: % derq query -i - -o - 1 Will select the second element in the array passed in on stdin and output the selected value to stdout. CESelectDictValue This operation selects the value associated with the passed in key in the actively selected dictionary. Any query statement that does not imply another operation will be parsed as CESelectDictValue; that is, any query statement that starts with an alphanumeric sequence will be treated as a CESelectDictValue operation. Example: % derq query application-identifier Will select the value that belongs to the key "application-identifier" from the dictionary passed in on stdin and output the selected value to stdout. CEMatchBool This operation produces a valid output if the currently selected value is a boolean that has the value of true. Execution of this operation does not modify the selection. Any query statement that starts with "?" signifies this operation. Example: % derq query get-task-allow ? Will return a valid boolean only if the value for the key "get-task-allow" is a boolean and has the value of true. CEMatchString This operation produces a valid output if the currently selected value is a string that is equal to the passed in value. Execution of this operation does not modify the selection. Any query statement that starts with "=" signifies this operation. Example: % derq query useractivity-team-identifier =appleiwork Will return a valid string only if the value for the key "useractivity-team-identifier" is exactly equal to "appleiwork".
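The left-to-right execution model can be sketched as a tiny evaluator over an already-decoded entitlements dictionary (a conceptual Python model, not how derq or CoreEntitlements is implemented; the entitlement values are invented):

```python
# Conceptual DERQL model: each statement transforms or checks the
# current selection; any failure puts the engine in an invalid state
# (modelled here as returning None) and stops execution.
def run_query(root, statements):
    sel = root
    for st in statements:
        if st[0].isdigit():                          # CESelectIndex
            i = int(st)
            if not isinstance(sel, list) or i >= len(sel):
                return None
            sel = sel[i]
        elif st.startswith("?"):                     # CEMatchBool
            if sel is not True:
                return None
        elif st.startswith("="):                     # CEMatchString
            if sel != st[1:]:
                return None
        else:                                        # CESelectDictValue
            if not isinstance(sel, dict) or st not in sel:
                return None
            sel = sel[st]
    return sel

ents = {"get-task-allow": True,
        "keychain-access-groups": ["team.example", "team.other"]}
print(run_query(ents, ["get-task-allow", "?"]))                        # True
print(run_query(ents, ["keychain-access-groups", "0", "=team.example"]))
```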
To check if a file has the string "secret-entitlement" as the first value in an array in a file named "application.entitlements": % derq query -i application.entitlements 0 =secret-entitlement To verify the DER entitlements validity of process 666 and to check that it has the "com.apple.application-identifier" equal to "P9Z4AN7VHQ.com.apple.radar.gm": % derq csops -pid 666 com.apple.application-identifier =P9Z4AN7VHQ.com.apple.radar.gm To check if the first array element of a key "com.apple.security.iokit-user-client-class" is equal to "AppleImage4UserClient": % derq query com.apple.security.iokit-user-client-class 0 =AppleImage4UserClient DIAGNOSTICS The derq utility exits 0 on success, and >0 if an error occurs. In particular EX_DATAERR (66) is returned if the query could not be satisfied or resulted in invalid state. NOTES The correct pronunciation of derq sounds similar to "dirk". SEE ALSO codesign(1) Darwin February 10, 2021 Darwin
fg
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last. If a command specified to the shell contains a slash ‘/’, the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying “echo” causes a builtin command to be executed under shells that support the echo builtin command, specifying “/bin/echo” or “./echo” does not. While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands “built-in commands” and some of them “reserved words”. Users of other shells may need to consult an info(1) page or other sources of documentation. Commands marked “No**” under External do exist externally, but are implemented as scripts using a builtin command of the same name.
     Command      External    csh(1)    sh(1)
     !            No          No        Yes
     %            No          Yes       No
     .            No          No        Yes
     :            No          Yes       Yes
     @            No          Yes       Yes
     [            Yes         No        Yes
     {            No          No        Yes
     }            No          No        Yes
     alias        No**        Yes       Yes
     alloc        No          Yes       No
     bg           No**        Yes       Yes
     bind         No          No        Yes
     bindkey      No          Yes       No
     break        No          Yes       Yes
     breaksw      No          Yes       No
     builtin      No          No        Yes
     builtins     No          Yes       No
     case         No          Yes       Yes
     cd           No**        Yes       Yes
     chdir        No          Yes       Yes
     command      No**        No        Yes
     complete     No          Yes       No
     continue     No          Yes       Yes
     default      No          Yes       No
     dirs         No          Yes       No
     do           No          No        Yes
     done         No          No        Yes
     echo         Yes         Yes       Yes
     echotc       No          Yes       No
     elif         No          No        Yes
     else         No          Yes       Yes
     end          No          Yes       No
     endif        No          Yes       No
     endsw        No          Yes       No
     esac         No          No        Yes
     eval         No          Yes       Yes
     exec         No          Yes       Yes
     exit         No          Yes       Yes
     export       No          No        Yes
     false        Yes         No        Yes
     fc           No**        No        Yes
     fg           No**        Yes       Yes
     filetest     No          Yes       No
     fi           No          No        Yes
     for          No          No        Yes
     foreach      No          Yes       No
     getopts      No**        No        Yes
     glob         No          Yes       No
     goto         No          Yes       No
     hash         No**        No        Yes
     hashstat     No          Yes       No
     history      No          Yes       No
     hup          No          Yes       No
     if           No          Yes       Yes
     jobid        No          No        Yes
     jobs         No**        Yes       Yes
     kill         Yes         Yes       Yes
     limit        No          Yes       No
     local        No          No        Yes
     log          No          Yes       No
     login        Yes         Yes       No
     logout       No          Yes       No
     ls-F         No          Yes       No
     nice         Yes         Yes       No
     nohup        Yes         Yes       No
     notify       No          Yes       No
     onintr       No          Yes       No
     popd         No          Yes       No
     printenv     Yes         Yes       No
     printf       Yes         No        Yes
     pushd        No          Yes       No
     pwd          Yes         No        Yes
     read         No**        No        Yes
     readonly     No          No        Yes
     rehash       No          Yes       No
     repeat       No          Yes       No
     return       No          No        Yes
     sched        No          Yes       No
     set          No          Yes       Yes
     setenv       No          Yes       No
     settc        No          Yes       No
     setty        No          Yes       No
     setvar       No          No        Yes
     shift        No          Yes       Yes
     source       No          Yes       No
     stop         No          Yes       No
     suspend      No          Yes       No
     switch       No          Yes       No
     telltc       No          Yes       No
     test         Yes         No        Yes
     then         No          No        Yes
     time         Yes         Yes       No
     times        No          No        Yes
     trap         No          No        Yes
     true         Yes         No        Yes
     type         No**        No        Yes
     ulimit       No**        No        Yes
     umask        No**        Yes       Yes
     unalias      No**        Yes       Yes
     uncomplete   No          Yes       No
     unhash       No          Yes       No
     unlimit      No          Yes       No
     unset        No          Yes       Yes
     unsetenv     No          Yes       No
     until        No          No        Yes
     wait         No**        Yes       Yes
     where        No          Yes       No
     which        Yes         Yes       No
     while        No          Yes       Yes
SEE ALSO csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1) HISTORY The builtin manual page first appeared in FreeBSD 3.4.
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
See the built-in command description in the appropriate shell manual page.
null
null
rwbypid.d
This script tracks the number of reads and writes at the syscall level by processes, printing the totals in a report. This matches reads and writes whether they succeed or not. Since this uses DTrace, only users with root privileges can run this command.
rwbypid.d - read/write calls by PID. Uses DTrace.
rwbypid.d
null
This samples until Ctrl-C is hit. # rwbypid.d FIELDS PID process ID CMD process name DIR direction, Read or Write COUNT total calls DOCUMENTATION See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT rwbypid.d will sample until Ctrl-C is hit. AUTHOR Brendan Gregg [Sydney, Australia] SEE ALSO rwbbypid.d(1M), dtrace(1M) version 1.00 June 28, 2005 rwbypid.d(1m)
jmap
The jmap command prints details of a specified running process. Note: This command is unsupported and might not be available in future releases of the JDK. On Windows Systems where the dbgeng.dll file isn't present, the Debugging Tools for Windows must be installed to make these tools work. The PATH environment variable should contain the location of the jvm.dll file that's used by the target process or the location from which the core dump file was produced. OPTIONS FOR THE JMAP COMMAND -clstats pid Connects to a running process and prints class loader statistics of Java heap. -finalizerinfo pid Connects to a running process and prints information on objects awaiting finalization. -histo[:live] pid Connects to a running process and prints a histogram of the Java object heap. If the live suboption is specified, it then counts only live objects. -dump:dump_options pid Connects to a running process and dumps the Java heap. The dump_options include: • live --- When specified, dumps only the live objects; if not specified, then dumps all objects in the heap. • format=b --- Dumps the Java heap in hprof binary format • file=filename --- Dumps the heap to filename Example: jmap -dump:live,format=b,file=heap.bin pid JDK 22 2024 JMAP(1)
jmap - print details of a specified process
Note: This command is experimental and unsupported. jmap [options] pid
This represents the jmap command-line options. See Options for the jmap Command. pid The process ID for which the information specified by the options is to be printed. The process must be a Java process. To get a list of Java processes running on a machine, use either the ps command or, if the JVM processes are not running in a separate docker instance, the jps command.
null
dscl
dscl is a general-purpose utility for operating on Directory Service directory nodes. Its commands allow one to create, read, and manage Directory Service data. If invoked without any commands, dscl runs in an interactive mode, reading commands from standard input. Interactive processing is terminated by the quit command. Leading dashes ("-") are optional for all commands. dscl operates on a datasource specified on the command line. This may be a node name or a Mac OS X Server (10.2 or later) host specified by DNS hostname or IP address. Node names may be absolute paths beginning with a slash ("/"), or relative domain paths beginning with a dot (".") character, which specifies the local domain, or "..", specifying the local domain's parent. If the hostname or IP address form is used then the user must specify the -u option and either the -P or -p options to specify an administrative user and password on the remote host to authenticate with to the remote host. The exception to this is if "localhost" is specified. Passing passwords on the command line is inherently insecure and can cause password exposure. For better security do not provide the password as part of the command and you will be securely prompted. The datasource may also be specified as "localonly" in which case a separate DirectoryService daemon process is activated which contains only the Local plugin for use by dscl. If no file path is provided then access goes only to the registered local nodes on the system. However, if the -f option is specified then access is added to the local node "/Local/Target" which points to the database located at the provided filepath. One example is to provide the filepath of "/Volumes/Build100/var/db/dslocal/nodes/Default" and then access to that database is provided via the nodename "/Local/Target". PATH SPECIFICATION There are two modes of operation when specifying paths to operate on. The two modes correspond to whether the datasource is a node or a host. 
In the case of specifying a node, the top level of paths will be record types. Example top level paths would be: /Users/alice /Groups/admin In the case of specifying a host as a data source, the top level of paths corresponds to Open Directory plug-ins and Search Paths. One can specify the plug-in to traverse to a node name, after which the paths are equivalent to the former usage. The following might be the equivalent paths to the above paths: /NetInfo/root/Users/alice /LDAPv3/10.0.1.42/Groups/admin If path components contain keys or values with embedded slash characters, the slash characters must be escaped with a leading backslash character. Since the shell also processes escape characters, an extra backslash is required to correctly specify an escape. For example, to read a mount record with the name "ldaphost:/Users" in the "/Mounts" path, the following path would be used: dscl . -read /Mounts/ldaphost:\/Users All pathnames are case-sensitive. COMMANDS The action of each command is described below. Some commands have aliases. For example, "cat" and "." are aliases for "read". Command aliases are listed in parentheses. read (cat .) Usage: read [path [key ...]] Prints a directory. The property key is followed by a colon, then a space-separated list of the values for that property. If any value contains embedded spaces, the list will instead be displayed one entry per line, starting on the line after the key. If the -raw flag for raw output has been given, then read prints the full DirectoryService API constant for record and attribute types. If the -url flag has been specified then printed record path attribute values are encoded in the style of URLs. This is useful if a script or program is trying to process the output since values will not have any spaces or other control characters. readall Usage: readall [path [key ...]] readall prints all the records of a given type.
The output of readall is formatted in the same way as that of read, with a "-" on a line by itself as a delimiter between records.

readpl
Usage: readpl path key plist_path
Prints the contents of plist_path. The plist_path is followed by a colon, then a space, and then the value for the path. If the plist_path is the key for a dictionary or array, its contents are displayed in plist form after the plist_path. If plist_path is the key for a string, number, bool, date, or data object, only the value is printed after the plist_path.

readpli
Usage: readpli path key value_index plist_path
Prints the contents of plist_path for the plist at value_index of the key. The plist_path is followed by a colon, then a space, and then the value for the path. If the plist_path is the key for a dictionary or array, its contents are displayed in plist form after the plist_path. If plist_path is the key for a string, number, bool, date, or data object, only the value is printed after the plist_path.

list (ls)
Usage: list path
Lists the subdirectories of the given directory. Subdirectories are listed one per line. In the case of listing a search path, the names are preceded by an index number that can act as a shortcut and be used in place of the name when specifying a path. When used in interactive mode, the path is optional; with no path given, the current directory will be used.

search
Usage: search path key val
Searches for records that match a pattern. The search is rooted at the given path. The path may be a node path or a record type path. Valid keys are Directory Service record attribute types.

create (mk)
Usage: create record_path [key [val ...]]
Creates a record, property, or value. If only a record path is given, the create command will create the record if it does not exist. If a key is given, then a property with that key will be created. WARNING - If a property with the given key already exists, it will be destroyed and a new property will be created in its place.
To add values to an existing property, use the append or merge commands. If values are included in the command, these values will be set for the given key. NOTE - Not all directory nodes support a property without a value. An error will be given if you attempt to create a property with no value in such a directory node.

createpl
Usage: createpl record_path key plist_path val1 [val2 ...]
Creates a string or array of strings at plist_path. If you are creating a value at the root of a plist that is an array, simply use "0" as the plist_path. If only val1 is specified, a string will be created at plist_path. If val1 val2 ... are specified, an array of strings will be created at plist_path. WARNING - If a value with the given plist_path already exists, it will be destroyed and a new value will be created in its place.

createpli
Usage: createpli record_path key value_index plist_path val1 [val2 ...]
Creates a string or array of strings at plist_path for the plist at value_index of the key. If you are creating a value at the root of a plist that is an array, simply use "0" as the plist_path. If only val1 is specified, a string will be created at plist_path. If val1 val2 ... are specified, an array of strings will be created at plist_path. WARNING - If a value with the given plist_path already exists, it will be destroyed and a new value will be created in its place.

append
Usage: append record_path key val ...
Appends one or more values to a property in a given record. The property is created if it does not exist.

merge
Usage: merge record_path key val ...
Appends one or more values to a property in a given directory if the property does not already have those values. The property is created if it does not exist.

change
Usage: change record_path key old_val new_val
Replaces the given old value in the list of values of the given key with the new value in the specified record.
changei
Usage: changei path key index val
Replaces the value at the given index in the list of values of the given key with the new value in the specified record. index is an integer value. An index of 1 specifies the first value. An index greater than the number of values in the list will result in an error.

diff
Usage: diff path1 path2 key ...
Compares the data from path1 and path2, looking at the specified keys (or all keys if none are specified).

delete (rm)
Usage: delete path [key [val ...]]
Deletes a directory, property, or value. If a directory path is given, the delete command will delete the directory. This can only be used on record type and record paths. If a key is given, then a property with that key will be deleted. If one or more values are given, those values will be removed from the property with the given key.

deletepl
Usage: deletepl record_path key plist_path [val ...]
Deletes a value in a plist. If no values are given, deletepl deletes the plist_path. If one or more values are given, deletepl deletes the values within plist_path.

deletepli
Usage: deletepli record_path key value_index plist_path [val ...]
Deletes a value for the plist at value_index of the key. If no values are given, deletepli deletes the plist_path. If one or more values are given, deletepli deletes the values within plist_path.

passwd
Usage: passwd user_path [new_password | old_password new_password]
Changes a password for a user. The user must be specified by full path, not just a username. If you are authenticated to the node (either by specifying the -u and -P flags or by using the auth command when in interactive mode), then you can simply specify a new password. If you are not authenticated, or if FileVault is enabled, then the user's old password must be specified. If passwords are not specified while in interactive mode, you will be prompted for them. Passing these passwords on the command line is inherently insecure and can cause password exposure.
For better security, do not provide the password as part of the command and you will be securely prompted.

INTERACTIVE COMMANDS

cd
Usage: cd dir
Sets the current directory. Path names for other dscl commands may be relative to the current directory.

pushd (pd)
Usage: pushd path
Similar to the pushd command commonly found in Unix shells. When a path is specified, it sets the current directory while pushing the previous directory onto the directory stack. If no path is specified, it exchanges the top two elements of the directory stack. It also prints the final directory stack.

popd
Usage: popd
Pops the directory stack and returns to the new top directory. It also prints the final directory stack.

auth (su)
Usage: auth [user [password]]
Authenticates as the named user, or as "root" if no user is specified. If a password is supplied, that password is used for authentication; otherwise the command prompts for a password. If dscl is run in host mode, the current directory must be within the subdirectories of a node when this command is run.

authonly
Usage: authonly [user [password]]
Verifies the password of the named user, or of "root" if no user is specified. If a password is supplied, that password is used for authentication; otherwise the command prompts for a password. If dscl is run in host mode, the current directory must be within the subdirectories of a node when this command is run.

quit (q)
Usage: quit
Ends processing of interactive commands and terminates the program.

command history
The up and down arrow keys scan through the command history.

tab completion
When pathnames are being typed, pressing the tab key triggers a search to auto-complete the typed partial subdirectory name. It also attempts to correct capitalization in the process.
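The extra backslash described under PATH SPECIFICATION can be checked without touching Directory Services at all: the shell consumes one level of escaping before dscl ever sees the argument. A minimal sketch:

```shell
# The shell strips one backslash from the unquoted argument, so a tool
# such as dscl would receive "ldaphost:\/Users" with a single escaped slash.
path_arg=/Mounts/ldaphost:\\/Users
printf '%s\n' "$path_arg"
```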
dscl – Directory Service command line utility
dscl [options] [datasource [command]]

options:
    -p          prompt for password
    -u user     authenticate as user
    -P password authentication password
    -f filepath targeted local node database file path
    -raw        don't strip off prefix from DirectoryService API constants
    -plist      print out record(s) or attribute(s) in XML plist format
    -url        print record attribute values in URL-style encoding
    -q          quiet - no interactive prompt

commands:
    -read [path [key ...]]
    -readall [path [key ...]]
    -readpl path key plist_path
    -readpli path key value_index plist_path
    -list path [key]
    -search path key val
    -create record_path [key [val ...]]
    -createpl record_path key plist_path val1 [val2 ...]
    -createpli record_path key value_index plist_path val1 [val2 ...]
    -append record_path key val ...
    -merge record_path key val ...
    -delete path [key [val ...]]
    -deletepl record_path key plist_path [val ...]
    -deletepli record_path key value_index plist_path [val ...]
    -change record_path key old_val new_val
    -changei record_path key val_index new_val
    -diff path1 path2 [key ...]
    -passwd user_path [new_password | old_password new_password]

available only in interactive mode:
    -cd dir
    -pushd [dir]
    -popd
    -auth [user [password]]
    -authonly [user [password]]
    -quit
View a record in the local directory node:
    dscl . -read /Users/www

Create or replace the UserShell attribute value for the www user record:
    dscl . -create /Users/www UserShell /usr/bin/false

Create or replace the test key of the mcx_application_data:loginwindow plist value for the MCXSettings attribute of the user1 user record:
    dscl . -createpl /Users/user1 MCXSettings mcx_application_data:loginwindow:test value

List the uniqueID values for all user records on a given node:
    dscl /LDAPv3/ldap.company.com -list /Users UniqueID

Append a value that has spaces in it:
    dscl . -append /Users/www Comment "This is a comment"

DIAGNOSTICS
dscl will return -1 (255) on error.

SEE ALSO
DirectoryService(8), DirectoryServiceAttributes(7)

MacOSX August 25, 2003 MacOSX
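The append example works because shell quoting keeps "This is a comment" a single argument. As a sketch (dscl itself is macOS-only, so this only counts the words the shell would hand to it, mirroring the Comment example above):

```shell
# Simulate the word splitting for the Comment example: the quoted
# value "This is a comment" remains a single argument.
set -- dscl . -append /Users/www Comment "This is a comment"
argc=$#
last=$6
echo "$argc words; last argument: $last"
```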
leaks
leaks identifies leaked memory -- memory that the application has allocated, but has been lost and cannot be freed. Specifically, leaks examines a specified process's memory for values that may be pointers to malloc-allocated buffers. Any buffer reachable from a pointer in writable global memory (e.g., __DATA segments), a register, or on the stack is assumed to be memory in use. Any buffer reachable from a pointer in a reachable malloc-allocated buffer is also assumed to be in use. The buffers which are not reachable are leaks; the buffers could never be freed because no pointer exists in memory to the buffer, and thus free() could never be called for these buffers. Such buffers waste memory; removing them can reduce swapping and memory usage. Leaks are particularly dangerous for long-running programs, for eventually the leaks could fill memory and cause the application to crash. leaks requires one argument -- either the process ID or the full or partial executable name of the process to examine, or the pathname of a memory graph file generated by leaks or the Xcode Memory Graph Debugger. (Unless the -atExit -- command argument is given, see below for more details.) Once the leaked buffers have been identified, leaks analyzes them to find "root leaks" (those which are not referenced by any other buffer) and "root cycles" (cycles of objects which reference or retain each other, but which are not referenced by any other buffer outside the cycle). Then, it identifies the tree of buffers which are referenced by those root leaks and root cycles, if any. leaks then prints each such "leak tree". If the MallocStackLogging environment variable was set when the application was launched, leaks also prints a stack trace describing where the buffer was allocated. MEMORY GRAPH FILES A memory graph file archives the memory state of a process for further analysis at a later time, on a different machine, or by other people. 
It includes information about all VM and malloc nodes in the process, and the references between them. Memory graph files can be generated by leaks using the -outputGraph option (and the -fullContent option if desired), or by examining a live process with the Xcode Memory Graph Debugger then using the Export Memory Graph menu item from the File menu. The standard filename suffix for memory graph files is ".memgraph". These files can be used as input to various commands including leaks, heap, stringdups, vmmap, malloc_history, footprint, and the Xcode Memory Graph Debugger.
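A common workflow with memory graph files is: snapshot the process, exercise the suspected leak, then compare. The sketch below only assembles the command lines, since leaks(1) is macOS-only; the pid and path are placeholders:

```shell
pid=1234                        # placeholder target process ID
before=/tmp/before.memgraph     # placeholder snapshot path

# First capture a baseline graph, then later report only new leaks.
snap_cmd="leaks -quiet -outputGraph $before $pid"
diff_cmd="leaks -quiet -diffFrom=$before $pid"
printf '%s\n' "$snap_cmd" "$diff_cmd"
```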
leaks – Search a process's memory for unreferenced malloc buffers
leaks [options] pid | partial-executable-name | memory-graph-file
leaks [options] -atExit -- command

Options: [-list] [-groupByType] [-nostacks] [-nosources] [-quiet]
         [-exclude symbol] [-outputGraph path] [-fullContent]
         [-readonlyContent] [-noContent] [-fullStackHistory]
         [-diffFrom=<memgraph>] [-traceTree address] [-referenceTree]
         [-autoreleasePools] [-debug=<mode>] [-conservative]
-list
    Print the leaks as a list ("classic"-style) rather than as a tree. Warning: this option may be removed in the future.

-groupByType
    When printing a tree of leaked objects, group the children of a node in the tree by type, rather than showing individual instances.

-nostacks
    Do not print backtraces of leaked blocks even if the target process has the MallocStackLogging environment variable set.

-nosources
    Do not print sourceFile:lineNumber in backtraces. This can improve performance when examining a process with a huge number of debug symbols.

-quiet
    Do not print the process description header or binary image list.

-exclude symbol
    Exclude leaked blocks whose backtraces include the specified symbol. This option can be repeated for multiple symbols. This allows ignoring leaks that, for example, are allocated in libraries for which you do not have source code.

-outputGraph path
    Generate a memory graph file containing information about all VM and malloc nodes, and the references between them. path can be a path to a file, or just a directory name; in the latter case a filename with the ".memgraph" suffix will be generated. By default (for security) when generating a memory graph file, descriptions of the content of some objects will be included but ONLY if they are backed by read-only memory in Mach-O binary images or the dyld shared cache. To store full content, pass the -fullContent flag.

-fullContent
    When generating a memory graph file, include descriptions of the content of various objects, as would be shown by heap <pid> -addresses all, and as needed by stringdups <pid>. (Full content is the default when targeting a live process without generating a memory graph file.)

-readonlyContent
    When running leaks against a live target process, print descriptions of the content of memory only if it is backed by read-only memory. (Read-only content is the default when generating memory graph files.)
-noContent
    Do not print descriptions of the content of leaked memory, or save descriptions of allocation memory into memory graph files. Although that information can be useful for recognizing the contents of a buffer and understanding why it might be leaked, it could expose confidential information from the process if you, for example, file bug reports with that output included.

-fullStackHistory
    When generating a memory graph file, include all available MallocStackLogging backtraces, including those for historical allocations that have been freed.

-diffFrom=<memgraph>
    Show only the new leaks since the specified memgraph.

-traceTree address
    Print a reverse tree of references, from the given block up to the process "roots" (e.g., global data, registers, or locations on stacks). This is useful for determining what is holding onto a buffer such that it has not been freed, and is similar to the information shown in the Xcode Memory Graph Debugger.

-referenceTree
    Print a top-down tree of all malloc allocations and dynamically-allocated VM regions in the process. This can be useful for getting an overall sense of how memory is held by the process. The -groupByType argument can also be passed to summarize the data. In this reference tree mode, each allocation appears only once in the output. Some attempt is made to prioritize which reference to an allocation should be considered the "owning" allocation when deciding where in the tree to show it, but since allocations often have several or numerous references to them (some of which may be false or stale references) and only one can be the "parent" in this output, allocations are sometimes shown in the wrong place in the tree.

-autoreleasePools
    Print the contents of all autorelease pools of all threads of the process, and trees of memory that are only held by those allocations.
If the autorelease pools were popped, the additional memory held only by autorelease pool entries would be released.

-debug=<mode>
    This flag offers several additional, more detailed modes of output, intended for debugging and deeper investigations. Use -debug=help to get more information about the various debug modes.

-conservative
    Ignore type information and scan byte-by-byte for pointers, conservatively assuming that all references are owning references.

-atExit -- command
    Launches the specified command and runs leaks when that process exits. The -atExit argument should be the last argument, followed by -- and the command to launch. For example:

        $ leaks -quiet -atExit -- /bin/ls -lt /tmp/

    Using -atExit will automatically set MallocStackLogging=lite for the specified command so that stack backtraces can be shown for leaked allocations. To use a different setting of that environment variable, such as YES or NO, set it prior to running leaks. For example:

        $ MallocStackLogging=YES leaks -quiet -atExit -- /bin/ls -lt /tmp/

ENVIRONMENT
The leaks command may detect more leaks if the target process is run with the MallocScribble environment variable set. If this variable is set, then when malloc blocks are deallocated they are filled with 0x55 bytes, overwriting any "stale" data such as pointers remaining in those blocks. This reduces the number of false pointers remaining in the process memory.

EXIT STATUS
The leaks command exits with one of the following values:
    0    No leaks were detected.
    1    One or more leaks were detected.
    >1   An error occurred.

SEE ALSO
malloc(3), heap(1), malloc_history(1), stringdups(1), vmmap(1), footprint(1), DevToolsSecurity(1)

The Xcode Memory Graph Debugger graphically shows malloc blocks and VM regions (both leaked and non-leaked), and the references between them. The Xcode developer tools also include Instruments, a graphical application that can give information similar to that provided by leaks.
The Allocations instrument graphically displays dynamic, real-time information about the object and memory use in an application, including backtraces of where the allocations occurred. The Leaks instrument performs memory leak analysis. macOS 14.5 March 15, 2022 macOS 14.5
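Scripts can gate on the exit status convention documented above (0 = no leaks, 1 = leaks, >1 = error). A hedged sketch: the verdict function is hypothetical, and since leaks itself is macOS-only the demo feeds it status codes directly rather than running leaks.

```shell
# Map a leaks-style exit status to a human-readable verdict:
# 0 = no leaks, 1 = one or more leaks, >1 = an error occurred.
leaks_verdict() {
  if [ "$1" -eq 0 ]; then
    echo "clean"
  elif [ "$1" -eq 1 ]; then
    echo "leaks detected"
  else
    echo "error"
  fi
}

leaks_verdict 0
leaks_verdict 1
leaks_verdict 3
```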
perl5.34
Perl officially stands for Practical Extraction and Report Language, except when it doesn't. Perl was originally a language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. It quickly became a good language for many system management tasks. Over the years, Perl has grown into a general-purpose programming language. It's widely used for everything from quick "one-liners" to full-scale application development. The language is intended to be practical (easy to use, efficient, complete) rather than beautiful (tiny, elegant, minimal). It combines (in the author's opinion, anyway) some of the best features of sed, awk, and sh, making it familiar and easy to use for Unix users to whip up quick solutions to annoying problems. Its general-purpose programming facilities support procedural, functional, and object- oriented programming paradigms, making Perl a comfortable language for the long haul on major projects, whatever your bent. Perl's roots in text processing haven't been forgotten over the years. It still boasts some of the most powerful regular expressions to be found anywhere, and its support for Unicode text is world-class. It handles all kinds of structured text, too, through an extensive collection of extensions. Those libraries, collected in the CPAN, provide ready-made solutions to an astounding array of problems. When they haven't set the standard themselves, they steal from the best -- just like Perl itself. AVAILABILITY Perl is available for most operating systems, including virtually all Unix-like platforms. See "Supported Platforms" in perlport for a listing. ENVIRONMENT See "ENVIRONMENT" in perlrun. AUTHOR Larry Wall <larry@wall.org>, with the help of oodles of other folks. 
If your Perl success stories and testimonials may be of help to others who wish to advocate the use of Perl in their applications, or if you wish to simply express your gratitude to Larry and the Perl developers, please write to perl-thanks@perl.org.

FILES
    "@INC"    locations of perl libraries

"@INC" above is a reference to the built-in variable of the same name; see perlvar for more information.

SEE ALSO
    https://www.perl.org/    the Perl homepage
    https://www.perl.com/    Perl articles
    https://www.cpan.org/    the Comprehensive Perl Archive
    https://www.pm.org/      the Perl Mongers

DIAGNOSTICS
Using the "use strict" pragma ensures that all variables are properly declared and prevents other misuses of legacy Perl features. The "use warnings" pragma produces some lovely diagnostics. One can also use the -w flag, but its use is normally discouraged, because it gets applied to all executed Perl code, including that not under your control. See perldiag for explanations of all Perl's diagnostics. The "use diagnostics" pragma automatically turns Perl's normally terse warnings and errors into these longer forms.

Compilation errors will tell you the line number of the error, with an indication of the next token or token type that was to be examined. (In a script passed to Perl via -e switches, each -e is counted as one line.)

Setuid scripts have additional constraints that can produce error messages such as "Insecure dependency". See perlsec.

Did we mention that you should definitely consider using the use warnings pragma?

BUGS
The behavior implied by the use warnings pragma is not mandatory. Perl is at the mercy of your machine's definitions of various operations such as type casting, atof(), and floating-point output with sprintf(). If your stdio requires a seek or eof between reads and writes on a particular stream, so does Perl. (This doesn't apply to sysread() and syswrite().)
While none of the built-in data types have any arbitrary size limits (apart from memory size), there are still a few arbitrary limits: a given variable name may not be longer than 251 characters. Line numbers displayed by diagnostics are internally stored as short integers, so they are limited to a maximum of 65535 (higher numbers usually being affected by wraparound). You may submit your bug reports (be sure to include full configuration information as output by the myconfig program in the perl source tree, or by "perl -V") to <https://github.com/Perl/perl5/issues>. Perl actually stands for Pathologically Eclectic Rubbish Lister, but don't tell anyone I said that. NOTES The Perl motto is "There's more than one way to do it." Divining how many more is left as an exercise to the reader. The three principal virtues of a programmer are Laziness, Impatience, and Hubris. See the Camel Book for why. perl v5.34.1 2022-02-26 PERL(1)
perl - The Perl 5 language interpreter
perl [ -sTtuUWX ] [ -hv ] [ -V[:configvar] ] [ -cw ] [ -d[t][:debugger] ] [ -D[number/list] ] [ -pna ] [ -Fpattern ] [ -l[octal] ] [ -0[octal/hexadecimal] ] [ -Idir ] [ -m[-]module ] [ -M[-]'module...' ] [ -f ] [ -C [number/list] ] [ -S ] [ -x[dir] ] [ -i[extension] ] [ [-e|-E] 'command' ] [ -- ] [ programfile ] [ argument ]... For more information on these options, you can run "perldoc perlrun". GETTING HELP The perldoc program gives you access to all the documentation that comes with Perl. You can get more documentation, tutorials and community support online at <https://www.perl.org/>. If you're new to Perl, you should start by running "perldoc perlintro", which is a general intro for beginners and provides some background to help you navigate the rest of Perl's extensive documentation. Run "perldoc perldoc" to learn more things you can do with perldoc. For ease of access, the Perl manual has been split up into several sections. Overview perl Perl overview (this section) perlintro Perl introduction for beginners perlrun Perl execution and options perltoc Perl documentation table of contents Tutorials perlreftut Perl references short introduction perldsc Perl data structures intro perllol Perl data structures: arrays of arrays perlrequick Perl regular expressions quick start perlretut Perl regular expressions tutorial perlootut Perl OO tutorial for beginners perlperf Perl Performance and Optimization Techniques perlstyle Perl style guide perlcheat Perl cheat sheet perltrap Perl traps for the unwary perldebtut Perl debugging tutorial perlfaq Perl frequently asked questions perlfaq1 General Questions About Perl perlfaq2 Obtaining and Learning about Perl perlfaq3 Programming Tools perlfaq4 Data Manipulation perlfaq5 Files and Formats perlfaq6 Regexes perlfaq7 Perl Language Issues perlfaq8 System Interaction perlfaq9 Networking Reference Manual perlsyn Perl syntax perldata Perl data structures perlop Perl operators and precedence perlsub Perl subroutines perlfunc 
Perl built-in functions perlopentut Perl open() tutorial perlpacktut Perl pack() and unpack() tutorial perlpod Perl plain old documentation perlpodspec Perl plain old documentation format specification perldocstyle Perl style guide for core docs perlpodstyle Perl POD style guide perldiag Perl diagnostic messages perldeprecation Perl deprecations perllexwarn Perl warnings and their control perldebug Perl debugging perlvar Perl predefined variables perlre Perl regular expressions, the rest of the story perlrebackslash Perl regular expression backslash sequences perlrecharclass Perl regular expression character classes perlreref Perl regular expressions quick reference perlref Perl references, the rest of the story perlform Perl formats perlobj Perl objects perltie Perl objects hidden behind simple variables perldbmfilter Perl DBM filters perlipc Perl interprocess communication perlfork Perl fork() information perlnumber Perl number semantics perlthrtut Perl threads tutorial perlport Perl portability guide perllocale Perl locale support perluniintro Perl Unicode introduction perlunicode Perl Unicode support perlunicook Perl Unicode cookbook perlunifaq Perl Unicode FAQ perluniprops Index of Unicode properties in Perl perlunitut Perl Unicode tutorial perlebcdic Considerations for running Perl on EBCDIC platforms perlsec Perl security perlsecpolicy Perl security report handling policy perlmod Perl modules: how they work perlmodlib Perl modules: how to write and use perlmodstyle Perl modules: how to write modules with style perlmodinstall Perl modules: how to install from CPAN perlnewmod Perl modules: preparing a new module for distribution perlpragma Perl modules: writing a user pragma perlutil utilities packaged with the Perl distribution perlfilter Perl source filters perldtrace Perl's support for DTrace perlglossary Perl Glossary Internals and C Language Interface perlembed Perl ways to embed perl in your C or C++ application perldebguts Perl debugging guts and tips 
perlxstut Perl XS tutorial perlxs Perl XS application programming interface perlxstypemap Perl XS C/Perl type conversion tools perlclib Internal replacements for standard C library functions perlguts Perl internal functions for those doing extensions perlcall Perl calling conventions from C perlmroapi Perl method resolution plugin interface perlreapi Perl regular expression plugin interface perlreguts Perl regular expression engine internals perlapi Perl API listing (autogenerated) perlintern Perl internal functions (autogenerated) perliol C API for Perl's implementation of IO in Layers perlapio Perl internal IO abstraction interface perlhack Perl hackers guide perlsource Guide to the Perl source tree perlinterp Overview of the Perl interpreter source and how it works perlhacktut Walk through the creation of a simple C code patch perlhacktips Tips for Perl core C code hacking perlpolicy Perl development policies perlgov Perl Rules of Governance perlgit Using git with the Perl repository History perlhist Perl history records perldelta Perl changes since previous version perl5340delta Perl changes in version 5.34.0 perl5321delta Perl changes in version 5.32.1 perl5320delta Perl changes in version 5.32.0 perl5303delta Perl changes in version 5.30.3 perl5302delta Perl changes in version 5.30.2 perl5301delta Perl changes in version 5.30.1 perl5300delta Perl changes in version 5.30.0 perl5283delta Perl changes in version 5.28.3 perl5282delta Perl changes in version 5.28.2 perl5281delta Perl changes in version 5.28.1 perl5280delta Perl changes in version 5.28.0 perl5263delta Perl changes in version 5.26.3 perl5262delta Perl changes in version 5.26.2 perl5261delta Perl changes in version 5.26.1 perl5260delta Perl changes in version 5.26.0 perl5244delta Perl changes in version 5.24.4 perl5243delta Perl changes in version 5.24.3 perl5242delta Perl changes in version 5.24.2 perl5241delta Perl changes in version 5.24.1 perl5240delta Perl changes in version 5.24.0 perl5224delta 
Perl changes in version 5.22.4 perl5223delta Perl changes in version 5.22.3 perl5222delta Perl changes in version 5.22.2 perl5221delta Perl changes in version 5.22.1 perl5220delta Perl changes in version 5.22.0 perl5203delta Perl changes in version 5.20.3 perl5202delta Perl changes in version 5.20.2 perl5201delta Perl changes in version 5.20.1 perl5200delta Perl changes in version 5.20.0 perl5184delta Perl changes in version 5.18.4 perl5182delta Perl changes in version 5.18.2 perl5181delta Perl changes in version 5.18.1 perl5180delta Perl changes in version 5.18.0 perl5163delta Perl changes in version 5.16.3 perl5162delta Perl changes in version 5.16.2 perl5161delta Perl changes in version 5.16.1 perl5160delta Perl changes in version 5.16.0 perl5144delta Perl changes in version 5.14.4 perl5143delta Perl changes in version 5.14.3 perl5142delta Perl changes in version 5.14.2 perl5141delta Perl changes in version 5.14.1 perl5140delta Perl changes in version 5.14.0 perl5125delta Perl changes in version 5.12.5 perl5124delta Perl changes in version 5.12.4 perl5123delta Perl changes in version 5.12.3 perl5122delta Perl changes in version 5.12.2 perl5121delta Perl changes in version 5.12.1 perl5120delta Perl changes in version 5.12.0 perl5101delta Perl changes in version 5.10.1 perl5100delta Perl changes in version 5.10.0 perl589delta Perl changes in version 5.8.9 perl588delta Perl changes in version 5.8.8 perl587delta Perl changes in version 5.8.7 perl586delta Perl changes in version 5.8.6 perl585delta Perl changes in version 5.8.5 perl584delta Perl changes in version 5.8.4 perl583delta Perl changes in version 5.8.3 perl582delta Perl changes in version 5.8.2 perl581delta Perl changes in version 5.8.1 perl58delta Perl changes in version 5.8.0 perl561delta Perl changes in version 5.6.1 perl56delta Perl changes in version 5.6 perl5005delta Perl changes in version 5.005 perl5004delta Perl changes in version 5.004 Miscellaneous perlbook Perl book information perlcommunity Perl 
community information perldoc Look up Perl documentation in Pod format perlexperiment A listing of experimental features in Perl perlartistic Perl Artistic License perlgpl GNU General Public License Language-Specific perlcn Perl for Simplified Chinese (in UTF-8) perljp Perl for Japanese (in EUC-JP) perlko Perl for Korean (in EUC-KR) perltw Perl for Traditional Chinese (in Big5) Platform-Specific perlaix Perl notes for AIX perlamiga Perl notes for AmigaOS perlandroid Perl notes for Android perlbs2000 Perl notes for POSIX-BC BS2000 perlcygwin Perl notes for Cygwin perldos Perl notes for DOS perlfreebsd Perl notes for FreeBSD perlhaiku Perl notes for Haiku perlhpux Perl notes for HP-UX perlhurd Perl notes for Hurd perlirix Perl notes for Irix perllinux Perl notes for Linux perlmacos Perl notes for Mac OS (Classic) perlmacosx Perl notes for Mac OS X perlnetware Perl notes for NetWare perlopenbsd Perl notes for OpenBSD perlos2 Perl notes for OS/2 perlos390 Perl notes for OS/390 perlos400 Perl notes for OS/400 perlplan9 Perl notes for Plan 9 perlqnx Perl notes for QNX perlriscos Perl notes for RISC OS perlsolaris Perl notes for Solaris perlsynology Perl notes for Synology perltru64 Perl notes for Tru64 perlvms Perl notes for VMS perlvos Perl notes for Stratus VOS perlwin32 Perl notes for Windows Stubs for Deleted Documents perlboot perlbot perlrepository perltodo perltooc perltoot On a Unix-like system, these documentation files will usually also be available as manpages for use with the man program. Some documentation is not available as man pages, so if a cross- reference is not found by man, try it with perldoc. Perldoc can also take you directly to documentation for functions (with the -f switch). See "perldoc --help" (or "perldoc perldoc" or "man perldoc") for other helpful options perldoc has to offer. 
In general, if something strange has gone wrong with your program and you're not sure where you should look for help, try making your code comply with use strict and use warnings. These will often point out exactly where the trouble is.
nbdst
nbdst is used during NetBoot to associate a shadow file with the disk image being used as the root device. After the shadow file is attached, subsequent writes to the root device are redirected to the shadow file, which normally resides on local storage. nbdst is invoked by /etc/rc.netboot.

ARGUMENTS
The following arguments must be specified:

devnode     The device node of the root device, in the form "disk0".

shadowfile  Path to a shadow file that will be created and associated with the NetBoot root device.
nbdst – NetBoot deferred shadow tool
nbdst [-recycle | -preallocate size] devnode shadowfile
-recycle           If a shadow file already exists, reset it and use it again. Otherwise, information written to an existing shadow file will reappear. Reusing a previous shadow file without resetting it requires that the shadow file be created with the same base image.

-preallocate size  Set the shadow file to the given size up front. This forces a reset of the shadow file (like -recycle).

NOTE
nbdst can only be run as root.

SEE ALSO
hdiutil(1), hdik(8)

macOS                          29 Apr 2003                          macOS
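For illustration, a typical invocation might look like the following. This is a hypothetical sketch: the device node and the shadow path are placeholders, and nbdst must be run as root, normally from /etc/rc.netboot rather than by hand.

```
# Attach a reusable shadow file to the NetBoot root device (hypothetical paths).
nbdst -recycle disk0 /private/var/netboot/shadowfile
```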
net-server
The net-server program gives a simple way to test out code and try port connection parameters. Though the running server can be robust enough for full-time use, it is anticipated that this binary will mostly be used for basic testing of net-server ports, for acting as a simple echo server, or for running development scripts as CGI.
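The description above mentions acting as a simple echo server. For readers unfamiliar with that pattern, here is a minimal sketch of a TCP echo server and client in Python. This only illustrates the concept; it is not Net::Server itself, and the helper name serve_once is made up for this example.

```python
import socket
import threading

def serve_once(sock):
    """Accept one connection and echo bytes back until the client closes."""
    conn, _addr = sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

# Bind to localhost; port 0 lets the OS pick a free ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,), daemon=True)
t.start()

# Client side: send a line and read the echo back.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello\n")
    reply = client.recv(1024)

t.join(timeout=5)
server.close()
print(reply.decode().strip())  # prints "hello"
```

A real Net::Server instance adds forking/preforking, logging, and privilege handling around this same accept/echo loop.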
net-server - Base Net::Server starting module
net-server [base type] [net server arguments]

net-server PreFork ipv '*'
net-server HTTP
net-server HTTP app foo.cgi
net-server HTTP app foo.cgi app /=bar.cgi
net-server HTTP port 8080 port 8443/ssl ipv '*' server_type PreFork --SSL_key_file=my.key --SSL_cert_file=my.crt access_log_file STDERR
"base type" The very first argument may be a Net::Server flavor. This is given as shorthand for writing out server_type "ServerFlavor". Additionally, this allows types such as HTTP and PSGI, which are not true Net::Server base types, to subclass other server types via an additional server_type argument. net-server PreFork net-server HTTP # becomes a HTTP server in the Fork flavor net-server HTTP server_type PreFork # preforking HTTP server "port" Port to bind upon. Default is 80 if running a HTTP server as root, 8080 if running a HTTP server as non-root, or 20203 otherwise. Multiple value can be given for binding to multiple ports. All of the methods for specifying port attributes enumerated in Net::Server and Net::Server::Proto are available here. net-server port 20201 net-server port 20202 net-server port 20203/IPv6 "host" Host to bind to. Default is *. Will bind to an IPv4 socket if an IPv4 address is given. Will bind to an IPv6 socket if an IPv6 address is given (requires installation of IO::Socket::INET6). If a hostname is given and "ipv" is still set to 4, an IPv4 socket will be created. If a hostname is given and "ipv" is set to 6, an IPv6 socket will be created. If a hostname is given and "ipv" is set to * (default), a lookup will be performed and any available IPv4 or IPv6 addresses will be bound. The "ipv" parameter can be set directly, or passed along in the port, or additionally can be passed as part of the hostname. net-server host localhost net-server host localhost/IPv4 There are many more options available. Please see the Net::Server documentation. AUTHOR Paul Seamons <paul@seamons.com> LICENSE This package may be distributed under the terms of either the GNU General Public License or the Perl Artistic License perl v5.34.0 2017-08-10 NET-SERVER(1)
mnthome
The mnthome command unmounts the AFP (AppleShare) home directory that was automounted as guest, and remounts it with the correct privileges by logging into the AFP server using the current username and password. This command also allows you to turn guest access off on your AFP server and still have AFP home directories work with "su".

When you ssh into another computer using an account that has an AFP home directory, or you "su <netuser>" where <netuser> is an AFP home directory user, the resulting home directory will not have the correct access privileges. This is because automount assumes NFS behavior, in which all computers share the same user/group privileges; it mounts volumes using "no security" and lets the client enforce privileges based on the current user. AFP is different, since privileges are based on the user that logged into the server. Because automount does not put up an authentication dialog asking for a user name and password, it mounts the fileserver using a guest login. Thus you end up with world access privileges, and the privileges are shown via "mapping". You would also have to allow guest access to that sharepoint on the server.

Mapping makes all files and folders appear to be owned by the current user, even those items not really owned by the current user. The server provides user access rights (UARights), which are a summary of the access rights regardless of the category (owner, group, world) from which they were obtained. When doing "mapping", the AppleShare client takes these UARights and shows them as the owner rights. So everything looks like it is owned by the current user, with the owner rights set to the UARights; thus if you had access to a file or folder before, you still do.

The options are:

-v  Display version number.

-d  Print debugging information.
-m mntpath
    Mount at an alternative mount point, given as a path to an existing directory. Normally, the volume is mounted in /Network/Servers/ or /var/automount/Network/Servers/.

-n  Do not force the unmount of the previous mount point.

-b  Exec the user's shell after mounting the home directory.

-p password
    Specify the password. If this option is not used, the user will be prompted to enter a password.

-i  Display information about the AFP home mount point.

-u  Attempt to unmount the current home directory mount.

-x mount point
    Display information about the given mount point, which must be a path to an existing AFP mount point.

-s  Skip the preflight check of whether the currently mounted home directory is already correctly mounted for the user.
mnthome – mount an AFP (AppleShare) home directory with the correct privileges
mnthome [-v] [-d] [-m mntpath] [-n] [-b] [-p password] [-i] [-x mount point] [-u] [-s]
The following example illustrates how to mount an AFP home directory:

    mnthome

This example shows how to print debugging information and provide a password:

    mnthome -d -p foobar

SEE ALSO
    mount(2), unmount(2), mount(8), mount_afp(8)

BUGS
    I get the mounting URL from the "home_loc" attribute and the mount path from the "home" attribute (with the path from home_loc subtracted out). If your AFP home directory automounts in a different location, then you need to use the -m option to specify an alternative mount point.

    I can't figure out how to cd out of the current home directory so I can do the unmount and then restore the user back into the new home directory. If you are in the AFP home directory when you use mnthome, you automatically get put back into that same directory when mnthome leaves. If mnthome works, then your current directory is a dead directory, and you need to "cd ~" to get to your new home directory.

    If the server with the home directory was already mounted by another user, you will not be able to replace it with a mount made by your user ID. The original mount must first be unmounted by the mounting user or root.

HISTORY
    The mnthome command first appeared in Mac OS X version 10.3.

RETURN VALUES
    0         mnthome successfully remounted the AFP home directory.

    [EINVAL]  Invalid arguments were passed in.

    [EPERM]   The current AFP home directory could not be unmounted by mnthome because the current user does not have the correct access. The current AFP home directory was probably mounted by another user first.

    [EAUTH]   Incorrect password.

Mac OS X                      August 4, 2004                      Mac OS X
jshell
JShell provides a way to interactively evaluate declarations, statements, and expressions of the Java programming language, making it easier to learn the language, explore unfamiliar code and APIs, and prototype complex code. Java statements, variable definitions, method definitions, class definitions, import statements, and expressions are accepted. The bits of code entered are called snippets. As snippets are entered, they're evaluated, and feedback is provided. Feedback varies from the results and explanations of actions to nothing, depending on the snippet entered and the feedback mode chosen. Errors are described regardless of the feedback mode. Start with the verbose mode to get the most feedback while learning the tool. Command-line options are available for configuring the initial environment when JShell is started. Within JShell, commands are available for modifying the environment as needed. Existing snippets can be loaded from a file to initialize a JShell session, or at any time within a session. Snippets can be modified within the session to try out different variations and make corrections. To keep snippets for later use, save them to a file. OPTIONS FOR JSHELL --add-exports module/package Specifies a package to be considered as exported from its defining module. --add-modules module[,module...] Specifies the root modules to resolve in addition to the initial module. -Cflag Provides a flag to pass to the compiler. To pass more than one flag, provide an instance of this option for each flag or flag argument needed. --class-path path Specifies the directories and archives that are searched to locate class files. This option overrides the path in the CLASSPATH environment variable. If the environment variable isn't set and this option isn't used, then the current directory is searched. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. 
--enable-preview Allows code to depend on the preview features of this release. --execution specification Specifies an alternate execution engine, where specification is an ExecutionControl spec. See the documentation of the package jdk.jshell.spi for the syntax of the spec. --feedback mode Sets the initial level of feedback provided in response to what's entered. The initial level can be overridden within a session by using the /set feedback mode command. The default is normal. The following values are valid for mode: verbose Provides detailed feedback for entries. Additional information about the action performed is displayed after the result of the action. The next prompt is separated from the feedback by a blank line. normal Provides an average amount of feedback. The next prompt is separated from the feedback by a blank line. concise Provides minimal feedback. The next prompt immediately follows the code snippet or feedback. silent Provides no feedback. The next prompt immediately follows the code snippet. custom Provides custom feedback based on how the mode is defined. Custom feedback modes are created within JShell by using the /set mode command. --help or -h or -? Prints a summary of standard options and exits the tool. --help-extra or -X Prints a summary of nonstandard options and exits the tool. Nonstandard options are subject to change without notice. -Jflag Provides a flag to pass to the runtime system. To pass more than one flag, provide an instance of this option for each flag or flag argument needed. --module-path modulepath Specifies where to find application modules. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. --no-startup Prevents startup scripts from running when JShell starts. Use this option to run only the scripts entered on the command line when JShell is started, or to start JShell without any preloaded information if no scripts are entered. 
This option can't be used if the --startup option is used. -q Sets the feedback mode to concise, which is the same as entering --feedback concise. -Rflag Provides a flag to pass to the remote runtime system. To pass more than one flag, provide an instance of this option for each flag or flag argument to pass. -s Sets the feedback mode to silent, which is the same as entering --feedback silent. --show-version Prints version information and enters the tool. --startup file Overrides the default startup script for this session. The script can contain any valid code snippets or commands. The script can be a local file or one of the following predefined scripts: DEFAULT Loads the default entries, which are commonly used as imports. JAVASE Imports all Java SE packages. PRINTING Defines print, println, and printf as jshell methods for use within the tool. TOOLING Defines javac, jar, and other methods for running JDK tools via their command-line interface within the jshell tool. For more than one script, provide a separate instance of this option for each script. Startup scripts are run when JShell is first started and when the session is restarted with the /reset, /reload, or /env command. Startup scripts are run in the order in which they're entered on the command line. This option can't be used if the --no-startup option is used. -v Sets the feedback mode to verbose, which is the same as entering --feedback verbose. --version Prints version information and exits the tool. JSHELL COMMANDS Within the jshell tool, commands are used to modify the environment and manage code snippets. /drop {name|id|startID-endID} [{name|id|startID-endID}...] Drops snippets identified by name, ID, or ID range, making them inactive. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of code snippets. /edit [option] Opens an editor. 
If no option is entered, then the editor opens with the active snippets. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] Opens the editor with the snippets identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of code snippets. -all Opens the editor with all snippets, including startup snippets and snippets that failed, were overwritten, or were dropped. -start Opens the editor with startup snippets that were evaluated when JShell was started. To exit edit mode, close the editor window, or respond to the prompt provided if the -wait option was used when the editor was set. Use the /set editor command to specify the editor to use. If no editor is set, then the following environment variables are checked in order: JSHELLEDITOR, VISUAL, and EDITOR. If no editor is set in JShell and none of the editor environment variables is set, then a simple default editor is used. /env [options] Displays the environment settings, or updates the environment settings and restarts the session. If no option is entered, then the current environment settings are displayed. If one or more options are entered, then the session is restarted as follows: • Updates the environment settings with the provided options. • Resets the execution state. • Runs the startup scripts. • Silently replays the history in the order entered. The history includes all valid snippets or /drop commands entered at the jshell prompt, in scripts entered on the command line, or scripts entered with the /open command. Environment settings entered on the command line or provided with a previous /reset, /env, or /reload command are maintained unless an option is entered that overwrites the setting. The following options are valid: --add-modules module[,module...] 
Specifies the root modules to resolve in addition to the initial module. --add-exports source-module/package=target-module[,target-module]* Adds an export of package from source-module to target- module. --class-path path Specifies the directories and archives that are searched to locate class files. This option overrides the path in the CLASSPATH environment variable. If the environment variable isn't set and this option isn't used, then the current directory is searched. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. --module-path modulepath Specifies where to find application modules. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. /exit [integer-expression-snippet] Exits the tool. If no snippet is entered, the exit status is zero. If a snippet is entered and the result of the snippet is an integer, the result is used as the exit status. If an error occurs, or the result of the snippet is not an integer, an error is displayed and the tool remains active. /history Displays what was entered in this session. /help [command|subject] Displays information about commands and subjects. If no options are entered, then a summary of information for all commands and a list of available subjects are displayed. If a valid command is provided, then expanded information for that command is displayed. If a valid subject is entered, then information about that subject is displayed. The following values for subject are valid: context Describes the options that are available for configuring the environment. intro Provides an introduction to the tool. shortcuts Describes keystrokes for completing commands and snippets. See Input Shortcuts. /imports Displays the current active imports, including those from the startup scripts and scripts that were entered on the command line when JShell was started. 
/list [option] Displays a list of snippets and their IDs. If no option is entered, then all active snippets are displayed, but startup snippets aren't. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] Displays the snippets identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. -all Displays all snippets, including startup snippets and snippets that failed, were overwritten, or were dropped. IDs that begin with s are startup snippets. IDs that begin with e are snippets that failed. -start Displays startup snippets that were evaluated when JShell was started. /methods [option] Displays information about the methods that were entered. If no option is entered, then the name, parameter types, and return type of all active methods are displayed. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] Displays information for methods identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of code snippets. -all Displays information for all methods, including those added when JShell was started, and methods that failed, were overwritten, or were dropped. -start Displays information for startup methods that were added when JShell was started. /open file Opens the script specified and reads the snippets into the tool. The script can be a local file or one of the following predefined scripts: DEFAULT Loads the default entries, which are commonly used as imports. JAVASE Imports all Java SE packages. PRINTING Defines print, println, and printf as jshell methods for use within the tool. TOOLING Defines javac, jar, and other methods for running JDK tools via their command-line interface within the jshell tool. 
/reload [options] Restarts the session as follows: • Updates the environment settings with the provided options, if any. • Resets the execution state. • Runs the startup scripts. • Replays the history in the order entered. The history includes all valid snippets or /drop commands entered at the jshell prompt, in scripts entered on the command line, or scripts entered with the /open command. Environment settings entered on the command line or provided with a previous /reset, /env, or /reload command are maintained unless an option is entered that overwrites the setting. The following options are valid: --add-modules module[,module...] Specifies the root modules to resolve in addition to the initial module. --add-exports source-module/package=target-module[,target-module]* Adds an export of package from source-module to target- module. --class-path path Specifies the directories and archives that are searched to locate class files. This option overrides the path in the CLASSPATH environment variable. If the environment variable isn't set and this option isn't used, then the current directory is searched. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. --module-path modulepath Specifies where to find application modules. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. -quiet Replays the valid history without displaying it. Errors are displayed. -restore Resets the environment to the state at the start of the previous run of the tool or to the last time a /reset, /reload, or /env command was executed in the previous run. The valid history since that point is replayed. Use this option to restore a previous JShell session. /reset [options] Discards all entered snippets and restarts the session as follows: • Updates the environment settings with the provided options, if any. • Resets the execution state. 
• Runs the startup scripts. History is not replayed. All code that was entered is lost. Environment settings entered on the command line or provided with a previous /reset, /env, or /reload command are maintained unless an option is entered that overwrites the setting. The following options are valid: --add-modules module[,module...] Specifies the root modules to resolve in addition to the initial module. --add-exports source-module/package=target-module[,target-module]* Adds an export of package from source-module to target- module. --class-path path Specifies the directories and archives that are searched to locate class files. This option overrides the path in the CLASSPATH environment variable. If the environment variable isn't set and this option isn't used, then the current directory is searched. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. --module-path modulepath Specifies where to find application modules. For Linux and macOS, use a colon (:) to separate items in the path. For Windows, use a semicolon (;) to separate items. /save [options] file Saves snippets and commands to the file specified. If no options are entered, then active snippets are saved. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] Saves the snippets and commands identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of the code snippets. -all Saves all snippets, including startup snippets and snippets that were overwritten or failed. -history Saves the sequential history of all commands and snippets entered in the current session. -start Saves the current startup settings. If no startup scripts were provided, then an empty file is saved. 
/set [setting] Sets configuration information, including the external editor, startup settings, and feedback mode. This command is also used to create a custom feedback mode with customized prompt, format, and truncation values. If no setting is entered, then the current setting for the editor, startup settings, and feedback mode are displayed. The following values are valid for setting: editor [options] [command] Sets the command used to start an external editor when the /edit command is entered. The command can include command arguments separated by spaces. If no command or options are entered, then the current setting is displayed. The following options are valid: -default Sets the editor to the default editor provided with JShell. This option can't be used if a command for starting an editor is entered. -delete Sets the editor to the one in effect when the session started. If used with the -retain option, then the retained editor setting is deleted and the editor is set to the first of the following environment variables found: JSHELLEDITOR, VISUAL, or EDITOR. If none of the editor environment variables are set, then this option sets the editor to the default editor. This option can't be used if a command for starting an editor is entered. -retain Saves the editor setting across sessions. If no other option or a command is entered, then the current setting is saved. -wait Prompts the user to indicate when editing is complete. Otherwise control returns to JShell when the editor exits. Use this option if the editor being used exits immediately, for example, when an edit window already exists. This option is valid only when a command for starting an editor is entered. feedback [mode] Sets the feedback mode used to respond to input. If no mode is entered, then the current mode is displayed. The following modes are valid: concise, normal, silent, verbose, and any custom mode created with the /set mode command. 
format mode field "format-string" selector Sets the format of the feedback provided in response to input. If no mode is entered, then the current formats for all fields for all feedback modes are displayed. If only a mode is entered, then the current formats for that mode are displayed. If only a mode and field are entered, then the current formats for that field are displayed. To define a format, the following arguments are required: mode Specifies a feedback mode to which the response format is applied. Only custom modes created with the /set mode command can be modified. field Specifies a context-specific field to which the response format is applied. The fields are described in the online help, which is accessed from JShell using the /help /set format command. "format-string" Specifies the string to use as the response format for the specified field and selector. The structure of the format string is described in the online help, which is accessed from JShell using the /help /set format command. selector Specifies the context in which the response format is applied. The selectors are described in the online help, which is accessed from JShell using the /help /set format command. mode [mode-name] [existing-mode] [options] Creates a custom feedback mode with the mode name provided. If no mode name is entered, then the settings for all modes are displayed, which includes the mode, prompt, format, and truncation settings. If the name of an existing mode is provided, then the settings from the existing mode are copied to the mode being created. The following options are valid: -command|-quiet Specifies the level of feedback displayed for commands when using the mode. This option is required when creating a feedback mode. Use -command to show information and verification feedback for commands. Use -quiet to show only essential feedback for commands, such as error messages. -delete Deletes the named feedback mode for this session. 
The name of the mode to delete is required. To permanently delete a retained mode, use the -retain option with this option. Predefined modes can't be deleted. -retain Saves the named feedback mode across sessions. The name of the mode to retain is required. Configure the new feedback mode using the /set prompt, /set format, and /set truncation commands. To start using the new mode, use the /set feedback command. prompt mode "prompt-string" "continuation-prompt-string" Sets the prompts for input within JShell. If no mode is entered, then the current prompts for all feedback modes are displayed. If only a mode is entered, then the current prompts for that mode are displayed. To define a prompt, the following arguments are required: mode Specifies the feedback mode to which the prompts are applied. Only custom modes created with the /set mode command can be modified. "prompt-string" Specifies the string to use as the prompt for the first line of input. "continuation-prompt-string" Specifies the string to use as the prompt for the additional input lines needed to complete a snippet. start [-retain] [file [file...]|option] Sets the names of the startup scripts used when the next /reset, /reload, or /env command is entered. If more than one script is entered, then the scripts are run in the order entered. If no scripts or options are entered, then the current startup settings are displayed. The scripts can be local files or one of the following predefined scripts: DEFAULT Loads the default entries, which are commonly used as imports. JAVASE Imports all Java SE packages. PRINTING Defines print, println, and printf as jshell methods for use within the tool. TOOLING Defines javac, jar, and other methods for running JDK tools via their command-line interface within the jshell tool. The following options are valid: -default Sets the startup settings to the default settings. -none Specifies that no startup settings are used. 
Use the -retain option to save the start setting across sessions. truncation mode length selector Sets the maximum length of a displayed value. If no mode is entered, then the current truncation values for all feedback modes are displayed. If only a mode is entered, then the current truncation values for that mode are displayed. To define truncation values, the following arguments are required: mode Specifies the feedback mode to which the truncation value is applied. Only custom modes created with the /set mode command can be modified. length Specifies the unsigned integer to use as the maximum length for the specified selector. selector Specifies the context in which the truncation value is applied. The selectors are described in the online help, which is accessed from JShell using the /help /set truncation command. /types [option] Displays classes, interfaces, and enums that were entered. If no option is entered, then all current active classes, interfaces, and enums are displayed. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] Displays information for classes, interfaces, and enums identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of the code snippets. -all Displays information for all classes, interfaces, and enums, including those added when JShell was started, and classes, interfaces, and enums that failed, were overwritten, or were dropped. -start Displays information for startup classes, interfaces, and enums that were added when JShell was started. /vars [option] Displays the name, type, and value of variables that were entered. If no option is entered, then all current active variables are displayed. The following options are valid: {name|id|startID-endID} [{name|id|startID-endID}...] 
Displays information for variables identified by name, ID, or ID range. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. Use the /list command to see the IDs of the code snippets.

-all Displays information for all variables, including those added when JShell was started, and variables that failed, were overwritten, or were dropped.

-start Displays information for startup variables that were added when JShell was started.

/? Same as the /help command.

/! Reruns the last snippet.

/{name|id|startID-endID} [{name|id|startID-endID}...] Reruns the snippets identified by ID, range of IDs, or name. For a range of IDs, provide the starting ID and ending ID separated with a hyphen. To provide a list, separate the items in the list with a space. The first item in the list must be an ID or ID range. Use the /list command to see the IDs of the code snippets.

/-n Reruns the n-th previous snippet. For example, if 15 code snippets were entered, then /-4 runs the 12th snippet. Commands aren't included in the count.

INPUT SHORTCUTS The following shortcuts are available for entering commands and snippets in JShell.

Tab completion <tab> When entering snippets, commands, subcommands, command arguments, or command options, use the Tab key to automatically complete the item. If the item can't be determined from what was entered, then possible options are provided. When entering a method call, use the Tab key after the method call's opening parenthesis to see the parameters for the method. If the method has more than one signature, then all signatures are displayed. Pressing the Tab key a second time displays the description of the method and the parameters for the first signature. Continue pressing the Tab key for a description of any additional signatures.
Shift+<Tab> V
    After entering a complete expression, use this key sequence to convert the expression to a variable declaration of a type determined by the type of the expression.

Shift+<Tab> M
    After entering a complete expression or statement, use this key sequence to convert the expression or statement to a method declaration. If an expression is entered, the return type is based on the type of the expression.

Shift+<Tab> I
    When an identifier is entered that can't be resolved, use this key sequence to show possible imports that resolve the identifier based on the content of the specified class path.

Command abbreviations
    An abbreviation of a command is accepted if the abbreviation uniquely identifies a command. For example, /l is recognized as the /list command. However, /s isn't a valid abbreviation because it can't be determined if the /set or /save command is meant. Use /se for the /set command or /sa for the /save command. Abbreviations are also accepted for subcommands, command arguments, and command options. For example, use /m -a to display all methods.

History navigation
    A history of what was entered is maintained across sessions. Use the up and down arrows to scroll through commands and snippets from the current and past sessions. Use the Ctrl key with the up and down arrows to skip all but the first line of multiline snippets.

History search
    Use the Ctrl+R key combination to search the history for the string entered. The prompt changes to show the string and the match. Ctrl+R searches backwards from the current location in the history through earlier entries. Ctrl+S searches forward from the current location in the history through later entries.

INPUT EDITING
The editing capabilities of JShell are similar to those of other common shells. Keyboard keys and key combinations provide line editing shortcuts. The Ctrl key and Meta key are used in key combinations.
If your keyboard doesn't have a Meta key, then the Alt key is often mapped to provide Meta key functionality.

Line Editing Shortcuts

      Key or Key Combination    Action
      ────────────────────────────────────────────────────
      Return         Enter the current line.
      Left arrow     Move the cursor to the left one character.
      Right arrow    Move the cursor to the right one character.
      Ctrl+A         Move the cursor to the beginning of the line.
      Ctrl+E         Move the cursor to the end of the line.
      Meta+B         Move the cursor to the left one word.
      Meta+F         Move the cursor to the right one word.
      Delete         Delete the character under the cursor.
      Backspace      Delete the character before the cursor.
      Ctrl+K         Delete the text from the cursor to the end of the line.
      Meta+D         Delete the text from the cursor to the end of the word.
      Ctrl+W         Delete the text from the cursor to the previous white space.
      Ctrl+Y         Paste the most recently deleted text into the line.
      Meta+Y         After Ctrl+Y, press to cycle through the previously deleted text.

EXAMPLE OF STARTING AND STOPPING A JSHELL SESSION
JShell is provided with the JDK. To start a session, enter jshell on the command line. A welcome message is printed, and a prompt for entering commands and snippets is provided.

      % jshell
      |  Welcome to JShell -- Version 9
      |  For an introduction type: /help intro

      jshell>

To see which snippets were automatically loaded when JShell started, use the /list -start command. The default startup snippets are import statements for common packages. The ID for each snippet begins with the letter s, which indicates it's a startup snippet.

      jshell> /list -start

         s1 : import java.io.*;
         s2 : import java.math.*;
         s3 : import java.net.*;
         s4 : import java.nio.file.*;
         s5 : import java.util.*;
         s6 : import java.util.concurrent.*;
         s7 : import java.util.function.*;
         s8 : import java.util.prefs.*;
         s9 : import java.util.regex.*;
        s10 : import java.util.stream.*;

      jshell>

To end the session, use the /exit command.
      jshell> /exit
      |  Goodbye
      %

EXAMPLE OF ENTERING SNIPPETS
Snippets are Java statements, variable definitions, method definitions, class definitions, import statements, and expressions. Terminating semicolons are automatically added to the end of a completed snippet if they're missing. The following example shows two variables and a method being defined, and the method being run. Note that a scratch variable is automatically created to hold the result because no variable was provided.

      jshell> int a=4
      a ==> 4

      jshell> int b=8
      b ==> 8

      jshell> int square(int i1) {
         ...> return i1 * i1;
         ...> }
      |  created method square(int)

      jshell> square(b)
      $5 ==> 64

EXAMPLE OF CHANGING SNIPPETS
Change the definition of a variable, method, or class by entering it again. The following example shows a method being defined and the method run:

      jshell> String grade(int testScore) {
         ...> if (testScore >= 90) {
         ...> return "Pass";
         ...> }
         ...> return "Fail";
         ...> }
      |  created method grade(int)

      jshell> grade(88)
      $3 ==> "Fail"

To change the method grade to allow more students to pass, enter the method definition again and change the pass score to 80. Use the up arrow key to retrieve the previous entries to avoid having to reenter them and make the change in the if statement. The following example shows the new definition and reruns the method to show the new result:

      jshell> String grade(int testScore) {
         ...> if (testScore >= 80) {
         ...> return "Pass";
         ...> }
         ...> return "Fail";
         ...> }
      |  modified method grade(int)

      jshell> grade(88)
      $5 ==> "Pass"

For snippets that are more than a few lines long, or to make more than a few changes, use the /edit command to open the snippet in an editor. After the changes are complete, close the edit window to return control to the JShell session. The following example shows the command and the feedback provided when the edit window is closed. The /list command is used to show that the pass score was changed to 85.
      jshell> /edit grade
      |  modified method grade(int)

      jshell> /list grade

         6 : String grade(int testScore) {
                 if (testScore >= 85) {
                     return "Pass";
                 }
                 return "Fail";
             }

EXAMPLE OF CREATING A CUSTOM FEEDBACK MODE
The feedback mode determines the prompt that's displayed, the feedback messages that are provided as snippets are entered, and the maximum length of a displayed value. Predefined feedback modes are provided. Commands for creating custom feedback modes are also provided.

Use the /set mode command to create a new feedback mode. In the following example, the new mode, mymode, is based on the predefined feedback mode normal, and verifying command feedback is displayed:

      jshell> /set mode mymode normal -command
      |  Created new feedback mode: mymode

Because the new mode is based on the normal mode, the prompts are the same. The following example shows how to see what prompts are used and then changes the prompts to custom strings. The first string represents the standard JShell prompt. The second string represents the prompt for additional lines in multiline snippets.

      jshell> /set prompt mymode
      |  /set prompt mymode "\njshell> " " ...> "

      jshell> /set prompt mymode "\nprompt$ " " continue$ "

The maximum length of a displayed value is controlled by the truncation setting. Different types of values can have different lengths. The following example sets an overall truncation value of 72, and a truncation value of 500 for variable value expressions:

      jshell> /set truncation mymode 72

      jshell> /set truncation mymode 500 varvalue

The feedback displayed after snippets are entered is controlled by the format setting and is based on the type of snippet entered and the action taken for that snippet. In the predefined mode normal, the string created is displayed when a method is created.
The following example shows how to change that string to defined:

      jshell> /set format mymode action "defined" added-primary

Use the /set feedback command to start using the feedback mode that was just created. The following example shows the custom mode in use:

      jshell> /set feedback mymode
      |  Feedback mode: mymode

      prompt$ int square (int num1){
      continue$ return num1*num1;
      continue$ }
      |  defined method square(int)

      prompt$

JDK 22                              2024                            JSHELL(1)
jshell - interactively evaluate declarations, statements, and expressions of the Java programming language in a read-eval-print loop (REPL)
jshell [options] [load-files]
options
    Command-line options, separated by spaces. See Options for jshell.

load-files
    One or more scripts to run when the tool is started. Scripts can contain any valid code snippets or JShell commands. The script can be a local file or one of the following predefined scripts:

    DEFAULT
        Loads the default entries, which are commonly used as imports.
    JAVASE
        Imports all Java SE packages.
    PRINTING
        Defines print, println, and printf as jshell methods for use within the tool.
    TOOLING
        Defines javac, jar, and other methods for running JDK tools via their command-line interface within the jshell tool.

    For more than one script, use a space to separate the names. Scripts are run in the order in which they're entered on the command line. Command-line scripts are run after startup scripts. To run a script after JShell is started, use the /open command.

    To accept input from standard input and suppress the interactive I/O, enter a hyphen (-) for load-files. This option enables the use of the jshell tool in pipe chains.
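The pipe-chain usage described above can be sketched as follows (a hypothetical session assuming a JDK's jshell is on the PATH; the snippet itself is purely illustrative):

```shell
# Evaluate a snippet non-interactively: '-' for load-files makes jshell
# read snippets from standard input and suppress the interactive I/O.
printf 'System.out.println(2 + 2)\n/exit\n' | jshell -q -
```

The -q flag selects the concise feedback mode, reducing the messages printed around the snippet's own output.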
dsexport
The dsexport utility exports records from OpenDirectory. The first argument is the path to the output file. If the file already exists, it will be overwritten. The second argument is the path to the OpenDirectory node from which the records will be read. The third argument is the type of record to export. If the record type does not begin with ‘dsRecTypeStandard:’ or ‘dsRecTypeNative:’, the dsexport utility will determine whether the node supports a standard record type with the specified name; otherwise, dsexport will assume that the record type is native. A warning will be printed if the record type is converted. Standard record types can be listed using the following command: ‘dscl -raw . -list /’.
dsexport – export records from OpenDirectory
dsexport [--N] [-r record_list] [-e exclude_attributes] [-a address -u username [-p password]] output_file node_path record_type
The options are as follows:

--N     Export all attributes, including native attributes. By default, dsexport only exports standard attributes.

-r record_list
        Comma-separated list of records to export from the specified node. The -r option may be used multiple times to specify additional records to export. If the -r option is not specified, dsexport will attempt to export all records.

-e exclude_attributes
        Comma-separated list of attributes that should not be exported. The -e option may be used multiple times to specify additional attributes to exclude. The following attributes are always excluded: ‘dsAttrTypeStandard:AppleMetaNodeLocation’, ‘dsAttrTypeStandard:RecordType’, ‘dsAttrTypeNative:objectClass’.

-a address
        Address of the desired proxy machine.

-u username
        Username to use for the proxy connection.

-p password
        Password to use for the proxy connection. If the -p option is not specified, dsexport will interactively prompt for the password.

NOTES
When using an LDAP node, please be aware that dsexport can only export as many records as the LDAP server is willing to return. If the LDAP server has several thousand users, you may want to raise the maximum number of search results that the server returns. This can be done in Server Admin (my.server.com > OpenDirectory > Settings > Protocols tab). By default this is set to 11000 results.
Export all user records from the local node to ‘export.out’:

      $ dsexport export.out /Local/Default dsRecTypeStandard:Users

Export the group records for ‘admin’ and ‘staff’ from the LDAPv3 node on a proxy machine ‘proxy.machine.com’:

      $ dsexport export.out /LDAPv3/127.0.0.1 dsRecTypeStandard:Groups -r admin,staff -a proxy.machine.com -u diradmin -p password

Export augmented users from the LDAPv3 node, including native attributes but excluding the PasswordPlus attribute:

      $ dsexport augments.out /LDAPv3/127.0.0.1 dsRecTypeStandard:Augments --N -e "dsAttrTypeStandard:PasswordPlus"

EXIT STATUS
The dsexport utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
dscl(1), dsimport(1), DirectoryService(8)

macOS 14.5               20 November 2008               macOS 14.5
mdimport
mdimport is used to test Spotlight plug-ins, list the installed plug-ins and schema, and re-index files handled by a plug-in when a new plug-in is installed.

The following options are available:

-i      Request Spotlight to import file or recursively import directory. The files will be imported using the normal mechanisms and attributes will be stored in the Spotlight index. This is the implied switch if none are specified.

-t      Request Spotlight to test import file, sending the result back to mdimport for possible further processing. The attributes will not be stored in the Spotlight index. This is useful to test Spotlight import plug-ins.

-d level
        Print debugging information. This requires -t.
        -d1  print summary of test import
        -d2  print summary of import and all attributes, except kMDItemTextContent
        -d3  print summary of import and all attributes

-o outfile
        Store attributes into outfile. This requires -t.

-p      Print out performance information gathered during the run. This requires -t.

-A      Print out the list of all of the attributes and their localizations in the current language and exit.

-X      Print the schema file and exit.

-L      Print the list of installed importers and exit.

-r      Ask the server to reimport files for UTIs claimed by the listed plugin. For example, the following would cause all of the chat files on the system to be reimported:

        mdimport -r /System/Library/Spotlight/Chat.mdimporter

NOTES
-f is obsolete in Leopard and beyond.

SEE ALSO
mdfind(1), mdutil(1), mdls(1)

Mac OS X                January 16, 2019                Mac OS X
mdimport – import file hierarchies into the metadata datastore.
mdimport [-itpAXLr] [-d level] [-o outfile] [file | directory ...]
id
The id utility displays the user and group names and numeric IDs of the calling process to the standard output. If the real and effective IDs are different, both are displayed; otherwise only the real ID is displayed. If a user (login name or user ID) is specified, the user and group IDs of that user are displayed. In this case, the real and effective IDs are assumed to be the same.

The options are as follows:

-A      Display the process audit user ID and other process audit properties, which requires privilege.

-F      Display the full name of the user.

-G      Display the different group IDs (effective, real and supplementary) as white-space separated numbers, in no particular order.

-P      Display the id as a password file entry.

-a      Ignored for compatibility with other id implementations.

-g      Display the effective group ID as a number.

-n      Display the name of the user or group ID for the -G, -g and -u options instead of the number. If any of the ID numbers cannot be mapped into names, the number will be displayed as usual.

-p      Make the output human-readable. If the user name returned by getlogin(2) is different from the login name referenced by the user ID, the name returned by getlogin(2) is displayed, preceded by the keyword “login”. The user ID as a name is displayed, preceded by the keyword “uid”. If the effective user ID is different from the real user ID, the real user ID is displayed as a name, preceded by the keyword “euid”. If the effective group ID is different from the real group ID, the real group ID is displayed as a name, preceded by the keyword “rgid”. The list of groups to which the user belongs is then displayed as names, preceded by the keyword “groups”. Each display is on a separate line.

-r      Display the real ID for the -g and -u options instead of the effective ID.

-u      Display the effective user ID as a number.

EXIT STATUS
The id utility exits 0 on success, and >0 if an error occurs.
id – return user identity
id [user] id -A id -F [user] id -G [-n] [user] id -P [user] id -g [-nr] [user] id -p [user] id -u [-nr] [user]
Show information for the user ‘bob’ as a password file entry:

      $ id -P bob
      bob:*:0:0::0:0:Robert:/bob:/usr/local/bin/bash

Same output as groups(1) for the root user:

      $ id -Gn root
      wheel operator

Show human readable information about ‘alice’:

      $ id -p alice
      uid alice
      groups alice webcamd vboxusers

Assuming the user ‘bob’ executed “su -l” to simulate a root login, compare the result of the following commands:

      # id -un
      root
      # who am i
      bob         pts/5    Dec  4 19:51

SEE ALSO
groups(1), who(1)

STANDARDS
The id function is expected to conform to IEEE Std 1003.2 (“POSIX.2”).

HISTORY
The historic groups(1) command is equivalent to “id -Gn [user]”. The historic whoami(1) command is equivalent to “id -un”. The id command appeared in 4.4BSD.

macOS 14.5               March 5, 2011               macOS 14.5
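The mapping between numeric IDs and names described under -n can also be checked with a short portable session (a sketch; the actual output depends on the calling user):

```shell
# Effective user ID, numeric vs. name form (-n maps numbers to names).
id -u
id -un
# The group lists correspond the same way.
id -G
id -Gn
```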
m4
The m4 utility is a macro processor that can be used as a front end to any language (e.g., C, ratfor, fortran, lex, and yacc). If no input files are given, m4 reads from the standard input, otherwise files specified on the command line are processed in the given order. Input files can be regular files, files in the m4 include paths, or a single dash (‘-’), denoting standard input. m4 writes the processed text to the standard output, unless told otherwise. Macro calls have the form name(argument1[, argument2, ..., argumentN]). There cannot be any space following the macro name and the open parenthesis (‘(’). If the macro name is not followed by an open parenthesis it is processed with no arguments. Macro names consist of a leading alphabetic or underscore possibly followed by alphanumeric or underscore characters, e.g., valid macro names match the pattern “[a-zA-Z_][a-zA-Z0-9_]*”. In arguments to macros, leading unquoted space, tab, and newline (‘\n’) characters are ignored. To quote strings, use left and right single quotes (e.g., ‘ this is a string with a leading space’). You can change the quote characters with the changequote built-in macro. Most built-ins do not make any sense without arguments, and hence are not recognized as special when not followed by an open parenthesis. The options are as follows: -Dname[=value], --define=name[=value] Define the symbol name to have some value (or NULL). -d [[+|-]flags], --debug=[[+|-]flags] Set or unset trace flags. The trace flags are as follows: a print macro arguments. c print macro expansion over several lines. e print result of macro expansion. f print filename location. l print line number. q quote arguments and expansion with the current quotes. t start with all macros traced. x number macro expansions. V turn on all options. If "+" or "-" is used, the specified flags are added to or removed from the set of active trace flags, respectively; otherwise, the specified flags replace the set of active trace flags. 
Specifying this option without an argument is equivalent to specifying it with the argument "aeq". By default, trace is set to "eq". -E, --fatal-warnings Set warnings to be fatal. When a single -E flag is specified, if warnings are issued, execution continues but m4 will exit with a non-zero exit status. When multiple -E flags are specified, execution will halt upon issuing the first warning and m4 will exit with a non-zero exit status. This behaviour matches GNU m4 1.4.9 and later. -G, --traditional Disable GNU compatibility mode (see -g below). -g, --gnu Enable GNU compatibility mode. In this mode, translit handles simple character ranges (e.g., ‘a-z’), regular expressions mimic Emacs behavior, multiple m4wrap calls are handled as a stack, the number of diversions is unlimited, empty names for macro definitions are allowed, undivert can be used to include files, and eval understands ‘0rbase:value’ numbers. In macOS, this option is on by default. Use -G to revert to traditional behavior. -I dirname, --include=dirname Add directory dirname to the include path. -o filename, --error-output=filename Send trace output to filename. -P, --prefix-builtins Prefix all built-in macros with ‘m4_’. For example, instead of writing define, use m4_define. -s, --synclines Output line synchronization directives, suitable for cpp(1). -t macro, --trace=macro Turn tracing on for macro. -Uname, --undefine=name Undefine the symbol name. SYNTAX m4 provides the following built-in macros. They may be redefined, losing their original meaning. Return values are null unless otherwise stated. builtin(name) Calls a built-in by its name, overriding possible redefinitions. changecom(startcomment, endcomment) Changes the start comment and end comment sequences. Comment sequences may be up to five characters long. The default values are the hash sign and the newline character. # This is a comment With no arguments, comments are turned off. 
With one single argument, the end comment sequence is set to the newline character. changequote(beginquote, endquote) Defines the open quote and close quote sequences. Quote sequences may be up to five characters long. The default values are the backquote character and the quote character. `Here is a quoted string' With no arguments, the default quotes are restored. With one single argument, the close quote sequence is set to the newline character. decr(arg) Decrements the argument arg by 1. The argument arg must be a valid numeric string. define(name, value) Define a new macro named by the first argument name to have the value of the second argument value. Each occurrence of ‘$n’ (where n is 0 through 9) is replaced by the n'th argument. ‘$0’ is the name of the calling macro. Undefined arguments are replaced by a null string. ‘$#’ is replaced by the number of arguments; ‘$*’ is replaced by all arguments comma separated; ‘$@’ is the same as ‘$*’ but all arguments are quoted against further expansion. defn(name, ...) Returns the quoted definition for each argument. This can be used to rename macro definitions (even for built-in macros). divert(num) There are 10 output queues (numbered 0-9). At the end of processing m4 concatenates all the queues in numerical order to produce the final output. Initially the output queue is 0. The divert macro allows you to select a new output queue (an invalid argument passed to divert causes output to be discarded). divnum Returns the current output queue number. dnl Discard input characters up to and including the next newline. dumpdef(name, ...) Prints the names and definitions for the named items, or for everything if no arguments are passed. errprint(msg) Prints the first argument on the standard error output stream. esyscmd(cmd) Passes its first argument to a shell and returns the shell's standard output. Note that the shell shares its standard input and standard error with m4. 
eval(expr[,radix[,minimum]]) Computes the first argument as an arithmetic expression using 32-bit arithmetic. Operators are the standard C ternary, arithmetic, logical, shift, relational, bitwise, and parentheses operators. You can specify octal, decimal, and hexadecimal numbers as in C. The optional second argument radix specifies the radix for the result and the optional third argument minimum specifies the minimum number of digits in the result. expr(expr) This is an alias for eval. format(formatstring, arg1, ...) Returns formatstring with escape sequences substituted with arg1 and following arguments, in a way similar to printf(3). This built-in is only available in GNU-m4 compatibility mode, and the only parameters implemented are there for autoconf compatibility: left-padding flag, an optional field width, a maximum field width, *-specified field widths, and the %s and %c data type. ifdef(name, yes, no) If the macro named by the first argument is defined then return the second argument, otherwise the third. If there is no third argument, the value is NULL. The word "unix" is predefined. ifelse(a, b, yes, ...) If the first argument a matches the second argument b then ifelse() returns the third argument yes. If the match fails the three arguments are discarded and the next three arguments are used until there is zero or one arguments left, either this last argument or NULL is returned if no other matches were found. include(name) Returns the contents of the file specified in the first argument. If the file is not found as is, look through the include path: first the directories specified with -I on the command line, then the environment variable M4PATH, as a colon-separated list of directories. Include aborts with an error message if the file cannot be included. incr(arg) Increments the argument by 1. The argument must be a valid numeric string. 
index(string, substring) Returns the index of the second argument in the first argument (e.g., index(the quick brown fox jumped, fox) returns 16). If the second argument is not found index returns -1. indir(macro, arg1, ...) Indirectly calls the macro whose name is passed as the first argument, with the remaining arguments passed as first, ... arguments. len(arg) Returns the number of characters in the first argument. Extra arguments are ignored. m4exit(code) Immediately exits with the return value specified by the first argument, 0 if none. m4wrap(todo) Allows you to define what happens at the final EOF, usually for cleanup purposes (e.g., m4wrap("cleanup(tempfile)") causes the macro cleanup to be invoked after all other processing is done). Multiple calls to m4wrap() get inserted in sequence at the final EOF. maketemp(template) Like mkstemp. mkstemp(template) Invokes mkstemp(3) on the first argument, and returns the modified string. This can be used to create unique temporary file names. paste(file) Includes the contents of the file specified by the first argument without any macro processing. Aborts with an error message if the file cannot be included. patsubst(string, regexp, replacement) Substitutes a regular expression in a string with a replacement string. Usual substitution patterns apply: an ampersand (‘&’) is replaced by the string matching the regular expression. The string ‘\#’, where ‘#’ is a digit, is replaced by the corresponding back-reference. popdef(arg, ...) Restores the pushdefed definition for each argument. pushdef(macro, def) Takes the same arguments as define, but it saves the definition on a stack for later retrieval by popdef(). regexp(string, regexp, replacement) Finds a regular expression in a string. If no further arguments are given, it returns the first match position or -1 if no match. If a third argument is provided, it returns the replacement string, with sub-patterns replaced. shift(arg1, ...) 
Returns all but the first argument, the remaining arguments are quoted and pushed back with commas in between. The quoting nullifies the effect of the extra scan that will subsequently be performed. sinclude(file) Similar to include, except it ignores any errors. spaste(file) Similar to paste(), except it ignores any errors. substr(string, offset, length) Returns a substring of the first argument starting at the offset specified by the second argument and the length specified by the third argument. If no third argument is present it returns the rest of the string. syscmd(cmd) Passes the first argument to the shell. Nothing is returned. sysval Returns the return value from the last syscmd. traceon(arg, ...) Enables tracing of macro expansions for the given arguments, or for all macros if no argument is given. traceoff(arg, ...) Disables tracing of macro expansions for the given arguments, or for all macros if no argument is given. translit(string, mapfrom, mapto) Transliterate the characters in the first argument from the set given by the second argument to the set given by the third. You cannot use tr(1) style abbreviations. undefine(name1, ...) Removes the definition for the macros specified by its arguments. undivert(arg, ...) Flushes the named output queues (or all queues if no arguments). unix A pre-defined macro for testing the OS platform. __line__ Returns the current file's line number. __file__ Returns the current file's name. EXIT STATUS The m4 utility exits 0 on success, and >0 if an error occurs. But note that the m4exit macro can modify the exit status, as can the -E flag. SEE ALSO B. W. Kernighan and D. M. Ritchie, The M4 Macro Processor, AT&T Bell Laboratories, Computing Science Technical Report, 59, July 1977. STANDARDS The m4 utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. 
The flags [-dEGgIPot] and the macros builtin, esyscmd, expr, format, indir, paste, patsubst, regexp, spaste, unix, __line__, and __file__ are extensions to that specification. maketemp is not supposed to be a synonym for mkstemp, but instead to be an insecure temporary file name creation function. It is marked by IEEE Std 1003.1-2008 (“POSIX.1”) as being obsolescent and should not be used if portability is a concern. The output format of traceon and dumpdef are not specified in any standard, are likely to change and should not be relied upon. The current format of tracing is closely modelled on gnu-m4, to allow autoconf to work. The built-ins pushdef and popdef handle macro definitions as a stack. However, define interacts with the stack in an undefined way. In this implementation, define replaces the top-most definition only. Other implementations may erase all definitions on the stack instead. All built-ins do expand without arguments in many other m4. Many other m4 have dire size limitations with respect to buffer sizes. AUTHORS Ozan Yigit <oz@sis.yorku.ca> and Richard A. O'Keefe <ok@goanna.cs.rmit.OZ.AU>. GNU-m4 compatibility extensions by Marc Espie <espie@cvs.openbsd.org>. macOS 14.5 June 21, 2023 macOS 14.5
m4 – macro language processor
m4 [-EGgPs] [-Dname[=value]] [-d [[+-]flags]] [-I dirname] [-o filename] [-t macro] [-Uname] [file ...]
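A quick sketch of the define, eval, and -D facilities described above (assuming a POSIX m4 such as GNU or BSD m4 on the PATH; the macro names sq and VERSION are illustrative):

```shell
# Define a squaring macro; dnl discards the rest of the define line
# so no stray newline appears in the output.
m4 <<'EOF'
define(`sq', `eval($1 * $1)')dnl
sq(7)
EOF
# prints: 49

# Predefine a symbol from the command line with -D.
echo 'VERSION' | m4 -DVERSION=1.0
# prints: 1.0
```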
procsystime
procsystime prints details on system call times for processes; both elapsed times and on-CPU times can be printed. The elapsed times are interesting, as they help identify syscalls that take some time to complete (during which the process may have slept). CPU time helps us identify syscalls that are consuming CPU cycles to run. Since this uses DTrace, only users with root privileges can run this command.
procsystime - analyse system call times. Uses DTrace.
procsystime [-acehoT] [ -p PID | -n name | command ]
-a        print all data
-c        print syscall counts
-e        print elapsed times, ns
-o        print CPU times, ns
-T        print totals
-p PID    examine this PID
-n name   examine processes which have this name
Print elapsed times for PID 1871,

      # procsystime -p 1871

Print elapsed times for processes called "tar",

      # procsystime -n tar

Print CPU times for "tar" processes,

      # procsystime -on tar

Print syscall counts for "tar" processes,

      # procsystime -cn tar

Print elapsed and CPU times for "tar" processes,

      # procsystime -eon tar

Print all details for "bash" processes,

      # procsystime -aTn bash

Run and print details for "df -h",

      # procsystime df -h

FIELDS
      SYSCALL      System call name
      TIME (ns)    Total time, nanoseconds
      COUNT        Number of occurrences

DOCUMENTATION
      See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output.

EXIT
      procsystime will sample until Ctrl-C is hit.

AUTHOR
      Brendan Gregg [Sydney, Australia]

SEE ALSO
      dtruss(1M), dtrace(1M), truss(1)

version 1.00             September 22, 2005             procsystime(1m)
dbilogstrip
Replaces any hex addresses, e.g., 0x128f72ce with "0xN". Replaces any references to process id or thread id, like "pid#6254" with "pidN". So a DBI trace line like this:

      -> STORE for DBD::DBM::st (DBI::st=HASH(0x19162a0)~0x191f9c8 'f_params' ARRAY(0x1922018)) thr#1800400

will look like this:

      -> STORE for DBD::DBM::st (DBI::st=HASH(0xN)~0xN 'f_params' ARRAY(0xN)) thrN

perl v5.34.0             2024-04-13             DBILOGSTRIP(1)
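The normalization described above can be approximated with a sed one-liner (a sketch, not the actual dbilogstrip implementation):

```shell
# Replace hex addresses with 0xN and pid#/thr# numbers with pidN/thrN,
# roughly mimicking what dbilogstrip does to a DBI trace line.
printf 'DBI::st=HASH(0x19162a0)~0x191f9c8 thr#1800400\n' \
  | sed -E 's/0x[0-9a-fA-F]+/0xN/g; s/(pid|thr)#[0-9]+/\1N/g'
# prints: DBI::st=HASH(0xN)~0xN thrN
```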
dbilogstrip - filter to normalize DBI trace logs for diff'ing
Read DBI trace file "dbitrace.log" and write out a stripped version to "dbitrace_stripped.log":

      dbilogstrip dbitrace.log > dbitrace_stripped.log

Run "yourscript.pl" twice, each with different sets of arguments, with DBI_TRACE enabled. Filter the output and trace through "dbilogstrip" into a separate file for each run. Then compare using diff. (This example assumes you're using a standard shell.)

      DBI_TRACE=2 perl yourscript.pl ...args1... 2>&1 | dbilogstrip > dbitrace1.log
      DBI_TRACE=2 perl yourscript.pl ...args2... 2>&1 | dbilogstrip > dbitrace2.log
      diff -u dbitrace1.log dbitrace2.log
null
null
cd
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last.

If a command specified to the shell contains a slash ‘/’, the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying “echo” causes a builtin command to be executed under shells that support the echo builtin command, specifying “/bin/echo” or “./echo” does not.

While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands “built-in commands” and some of them “reserved words”. Users of other shells may need to consult an info(1) page or other sources of documentation. Commands marked “No**” under External do exist externally, but are implemented as scripts using a builtin command of the same name.

Command      External   csh(1)   sh(1)
!            No         No       Yes
%            No         Yes      No
.            No         No       Yes
:            No         Yes      Yes
@            No         Yes      Yes
[            Yes        No       Yes
{            No         No       Yes
}            No         No       Yes
alias        No**       Yes      Yes
alloc        No         Yes      No
bg           No**       Yes      Yes
bind         No         No       Yes
bindkey      No         Yes      No
break        No         Yes      Yes
breaksw      No         Yes      No
builtin      No         No       Yes
builtins     No         Yes      No
case         No         Yes      Yes
cd           No**       Yes      Yes
chdir        No         Yes      Yes
command      No**       No       Yes
complete     No         Yes      No
continue     No         Yes      Yes
default      No         Yes      No
dirs         No         Yes      No
do           No         No       Yes
done         No         No       Yes
echo         Yes        Yes      Yes
echotc       No         Yes      No
elif         No         No       Yes
else         No         Yes      Yes
end          No         Yes      No
endif        No         Yes      No
endsw        No         Yes      No
esac         No         No       Yes
eval         No         Yes      Yes
exec         No         Yes      Yes
exit         No         Yes      Yes
export       No         No       Yes
false        Yes        No       Yes
fc           No**       No       Yes
fg           No**       Yes      Yes
filetest     No         Yes      No
fi           No         No       Yes
for          No         No       Yes
foreach      No         Yes      No
getopts      No**       No       Yes
glob         No         Yes      No
goto         No         Yes      No
hash         No**       No       Yes
hashstat     No         Yes      No
history      No         Yes      No
hup          No         Yes      No
if           No         Yes      Yes
jobid        No         No       Yes
jobs         No**       Yes      Yes
kill         Yes        Yes      Yes
limit        No         Yes      No
local        No         No       Yes
log          No         Yes      No
login        Yes        Yes      No
logout       No         Yes      No
ls-F         No         Yes      No
nice         Yes        Yes      No
nohup        Yes        Yes      No
notify       No         Yes      No
onintr       No         Yes      No
popd         No         Yes      No
printenv     Yes        Yes      No
printf       Yes        No       Yes
pushd        No         Yes      No
pwd          Yes        No       Yes
read         No**       No       Yes
readonly     No         No       Yes
rehash       No         Yes      No
repeat       No         Yes      No
return       No         No       Yes
sched        No         Yes      No
set          No         Yes      Yes
setenv       No         Yes      No
settc        No         Yes      No
setty        No         Yes      No
setvar       No         No       Yes
shift        No         Yes      Yes
source       No         Yes      No
stop         No         Yes      No
suspend      No         Yes      No
switch       No         Yes      No
telltc       No         Yes      No
test         Yes        No       Yes
then         No         No       Yes
time         Yes        Yes      No
times        No         No       Yes
trap         No         No       Yes
true         Yes        No       Yes
type         No**       No       Yes
ulimit       No**       No       Yes
umask        No**       Yes      Yes
unalias      No**       Yes      Yes
uncomplete   No         Yes      No
unhash       No         Yes      No
unlimit      No         Yes      No
unset        No         Yes      Yes
unsetenv     No         Yes      No
until        No         No       Yes
wait         No**       Yes      Yes
where        No         Yes      No
which        Yes        Yes      No
while        No         Yes      Yes

SEE ALSO
    csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1)
HISTORY
    The builtin manual page first appeared in FreeBSD 3.4.
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
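The slash rule described above can be observed with the command builtin. A small illustration, assuming a POSIX sh and that /bin/echo exists on the system:

```shell
# A bare name can resolve to a builtin, while a name containing a
# slash always refers to the external utility on the filesystem.
command -v cd          # normally prints just "cd": resolved as a builtin
command -v /bin/echo   # prints "/bin/echo": a path is never a builtin
```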
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
See the built-in command description in the appropriate shell manual page.
null
null
quota
Quota displays users' disk usage and limits. By default only the user quotas are printed.

Options:
-g      Print group quotas for the group of which the user is a member. The optional -u flag is equivalent to the default.
-v      Display quotas on filesystems even where no storage is allocated.
-q      Print a more terse message, containing only information on filesystems where usage is over quota.

Specifying both -g and -u displays both the user quotas and the group quotas (for the user). Only the super-user may use the -u flag and the optional user argument to view the limits of other users. Non-super-users can use the -g flag and optional group argument to view only the limits of groups of which they are members. The -q flag takes precedence over the -v flag.

Quota reports the quotas of all the filesystems that have a mount option file located at their roots. If quota exits with a non-zero status, one or more filesystems are over quota.

FILES
Each of the following quota files is located at the root of the mounted filesystem. The mount option files are empty files whose existence indicates that quotas are to be enabled for that filesystem.
    .quota.user         data file containing user quotas
    .quota.group        data file containing group quotas
    .quota.ops.user     mount option file used to enable user quotas
    .quota.ops.group    mount option file used to enable group quotas

HISTORY
    The quota command appeared in 4.2BSD.
SEE ALSO
    quotactl(2), edquota(8), quotacheck(8), quotaon(8), repquota(8)
BSD 4.2                  March 28, 2002                  BSD 4.2
quota – display disk usage and limits
quota [-g] [-u] [-v | -q] quota [-u] [-v | -q] user quota [-g] [-v | -q] group
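Since quotas are enabled per filesystem by empty mount option files at the filesystem root, enabling them is just a matter of creating those files. An illustrative sketch only; a scratch directory stands in for a real mount point here, and a real setup would also run quotacheck(8)/quotaon(8) with appropriate privileges.

```shell
# Create the empty "mount option" files whose existence enables quotas.
fsroot=$(mktemp -d)                # stand-in for a mounted filesystem root
touch "$fsroot/.quota.ops.user"    # empty file: enable user quotas
touch "$fsroot/.quota.ops.group"   # empty file: enable group quotas
ls -a "$fsroot"
```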
null
null
uuto
The uuto program may be used to conveniently send files to a particular user on a remote system. It will arrange for mail to be sent to the remote user when the files arrive on the remote system, and he or she may easily retrieve the files using the uupick program. Note that uuto does not provide any security--any user on the remote system can examine the files. The last argument specifies the system and user name to which to send the files. The other arguments are the files or directories to be sent. The uuto program is actually just a trivial shell script which invokes the uucp program with the appropriate arguments.
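Since uuto is described above as a trivial wrapper over uucp, its argument handling can be sketched roughly as follows. This is a hypothetical illustration, not the actual Taylor UUCP script (which differs in detail; Taylor uucp also offers a -t/--uuto option that performs this rewriting internally), and the final uucp command is only echoed, not executed.

```shell
# Split the trailing system!user destination and build the uucp target.
dest='omega!jane'                # last argument: system!user (example values)
files='report.txt'               # remaining arguments: files to send
sys=${dest%%!*}                  # remote system name  -> "omega"
user=${dest#*!}                  # remote user name    -> "jane"
# files arrive under ~/receive/<user>/<local host>/ for uupick(1) to fetch
echo uucp "$files" "$sys!~/receive/$user/$(uname -n)/"
```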
uuto - send files to a particular user on a remote system
uuto files... system!user
Any option which may be given to uucp may also be given to uuto.

SEE ALSO
    uucp(1), uupick(1)
AUTHOR
    Ian Lance Taylor <ian@airs.com>. Text for this man page comes from the Taylor UUCP, version 1.07, Info documentation.
Taylor UUCP 1.07                                          uuto(1)
null
fuser
The fuser utility shall write to standard output the process IDs of processes running on the local system that have one or more named files open. For block special devices, all processes using any file on that device are listed. The fuser utility shall write to standard error additional information about the named files indicating how the file is being used. Any output for processes running on remote systems that have a named file open is unspecified. A user may need appropriate privilege to invoke the fuser utility.
fuser - list process IDs of all processes that have one or more files open
fuser [ -cfu ] file ...
The fuser utility shall conform to the Base Definitions volume of IEEE Std 1003.1-2001, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -c The file is treated as a mount point and the utility shall report on any files open in the file system. -f The report shall be only for the named files. -u The user name, in parentheses, associated with each process ID written to standard output shall be written to standard error. OPERANDS The following operand shall be supported: file A pathname on which the file or file system is to be reported. STDIN Not used. INPUT FILES The user database. ENVIRONMENT VARIABLES The following environment variables shall affect the execution of fuser: LANG Provide a default value for the internationalization variables that are unset or null. (See the Base Definitions volume of IEEE Std 1003.1-2001, Section 8.2, Internationalization Variables for the precedence of internationalization variables used to determine the values of locale categories.) LC_ALL If set to a non-empty string value, override the values of all the other internationalization variables. LC_CTYPE Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single-byte as opposed to multi-byte characters in arguments). LC_MESSAGES Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error. NLSPATH Determine the location of message catalogs for the processing of LC_MESSAGES . ASYNCHRONOUS EVENTS Default. STDOUT The fuser utility shall write the process ID for each process using each file given as an operand to standard output in the following format: "%d", <process_id> STDERR The fuser utility shall write diagnostic messages to standard error. The fuser utility also shall write the following to standard error: * The pathname of each named file is written followed immediately by a colon. 
* For each process ID written to standard output, the character 'c' shall be written to standard error if the process is using the file as its current directory and the character 'r' shall be written to standard error if the process is using the file as its root directory. Implementations may write other alphabetic characters to indicate other uses of files. * When the -u option is specified, characters indicating the use of the file shall be followed immediately by the user name, in parentheses, corresponding to the process' real user ID. If the user name cannot be resolved from the process' real user ID, the process' real user ID shall be written instead of the user name. When standard output and standard error are directed to the same file, the output shall be interleaved so that the filename appears at the start of each line, followed by the process ID and characters indicating the use of the file. Then, if the -u option is specified, the user name or user ID for each process using that file shall be written. A <newline> shall be written to standard error after the last output described above for each file operand. OUTPUT FILES None. EXTENDED DESCRIPTION None. EXIT STATUS The following exit values shall be returned: 0 Successful completion. >0 An error occurred. CONSEQUENCES OF ERRORS Default. The following sections are informative. APPLICATION USAGE None.
The command: fuser -fu . writes to standard output the process IDs of processes that are using the current directory and writes to standard error an indication of how those processes are using the directory and the user names associated with the processes that are using the current directory. RATIONALE The definition of the fuser utility follows existing practice. FUTURE DIRECTIONS None. SEE ALSO None. COPYRIGHT Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1, 2003 Edition, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 6, Copyright (C) 2001-2003 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html . IEEE/The Open Group 2003 FUSER(P)
dsimport
dsimport is a tool for importing records into an Open Directory source.

USAGE
filepath is the path of the file to be imported. nodepath is the path of the Open Directory node where the records should be imported.

A flag that specifies how to handle conflicting records:
O    Overwrite any existing records that have the same record name, UID or GID. All previous attribute values are deleted.
M    Merge import data with existing records, or create the record if it does not exist.
I    Ignore the record if there is a conflicting name, UID or GID.
A    Append the data to existing records, but do not create a record if it does not exist.
N    No duplicate checking should be done. Note this could cause failures and/or a slower import process.

A list of options and their descriptions:

--crypt signifies that all user passwords are crypt-based. Entries in the import file can also be prefixed with {CRYPT} on a per-record basis if not all users are crypt-based. By default all passwords are assumed to be provided as listed in the import file.

--force attribute value forces a specific value for the named attribute for all records during the import. The new value will overwrite any value specified in the import file. This option may be specified multiple times for forcing more than one attribute.

--groupid value is the GID used for any records that do not specify a primary GID.

--grouppreset value designates a preset record to be applied to imported group records.

--loglevel value changes the amount of logging detail output to the log file.

--outputfile value outputs a plist to the specified file with a list of changed users or groups and rejected records due to name conflicts. Also includes a list of deleted records (overwrite mode), and lists of records that failed and succeeded during import. The format of this file is likely to change in a future release of Mac OS X.

--password value is the admin's password for import operations.
Used to authenticate to the directory node during import. A secure prompt will be used for interactive input if not supplied via parameter. Using the prompt method is the most secure method of providing the password to dsimport.

--recordformat string passes in the delimiters, attributes, and record type to specify the order and names of attributes in the file to be imported. An example record format string:

    0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 7 dsAttrTypeStandard:RecordName dsAttrTypeStandard:Password dsAttrTypeStandard:UniqueID dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:RealName dsAttrTypeStandard:NFSHomeDirectory dsAttrTypeStandard:UserShell

A special value of IGNORE can be used for values that should be ignored in the import file on a record-by-record basis.

--recordtype type overrides the record type defined in the import file. For example, to import ComputerGroups as ComputerLists, use: --recordtype dsRecTypeStandard:ComputerLists. The opposite works for importing ComputerLists as ComputerGroups, and so on.

--remotehost hostname | ipaddress connects to a remote host at the network address specified. Commonly used to import to a remote Mac OS X Server.

--remoteusername value specifies the user name to use for the remote connection.

--remotepassword value specifies the password to use for the remote connection. A secure prompt will be used to ask for the password if --remoteusername is specified and --remotepassword is not. Using the prompt method is the most secure method of providing the password to dsimport.

--startid value indicates the ID number to start with when the import tool generates user or group IDs for any import file that lacks an ID as part of the import data.

--template StandardUser | StandardGroup is used for delimited import of files that lack field descriptions. StandardUser contains the following fields, in order:
1. RecordName
2. Password
3. UniqueID
4. PrimaryGroupID
5. DistinguishedName
6. NFSHomeDirectory
7. UserShell

StandardGroup contains the following fields, in order:
1. RecordName
2. Password
3. PrimaryGroupID
4. GroupMembership

--username value is the admin username to use when importing records. If this is not specified the current user is the default name. Also, if used in conjunction with --remotehost then this admin user will be used for the Open Directory node whereas the username provided in --remoteusername will be used for the remote connection. If this option is left off but --remoteusername is provided, then the remote username will be used for both the connection and for importing records.

--userpreset value designates a preset record to be applied to imported user records.
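To make the StandardUser template concrete, here is a hypothetical import file with one record in the template's field order. The colon delimiter, the record values, and the node/admin names in the commented-out invocation are all illustrative assumptions; the dsimport call itself is not run because it needs an Open Directory node and admin rights.

```shell
# One record in StandardUser order: RecordName, Password, UniqueID,
# PrimaryGroupID, DistinguishedName, NFSHomeDirectory, UserShell.
cat > users.txt <<'EOF'
jdoe:*:1042:20:Jane Doe:/Users/jdoe:/bin/zsh
EOF
# dsimport users.txt /Local/Default I --template StandardUser --username administrator
```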
dsimport
dsimport filepath nodepath O|M|A|I|N [options] dsimport --version dsimport --help
null
To import a standard dsexport file into the Local database: dsimport myimportFile /Local/Default I --username administrator --password adminpassword FILES /usr/bin/dsimport ~/Library/Logs/ImportExport SEE ALSO DirectoryService(8) dsexport(1) Darwin Fri June 24 2008 Darwin
git-shell
This is a login shell for SSH accounts to provide restricted Git access. It permits execution only of server-side Git commands implementing the pull/push functionality, plus custom commands present in a subdirectory named git-shell-commands in the user’s home directory. COMMANDS git shell accepts the following commands after the -c option: git receive-pack <argument>, git upload-pack <argument>, git upload-archive <argument> Call the corresponding server-side command to support the client’s git push, git fetch, or git archive --remote request. cvs server Imitate a CVS server. See git-cvsserver(1). If a ~/git-shell-commands directory is present, git shell will also handle other, custom commands by running "git-shell-commands/<command> <arguments>" from the user’s home directory. INTERACTIVE USE By default, the commands above can be executed only with the -c option; the shell is not interactive. If a ~/git-shell-commands directory is present, git shell can also be run interactively (with no arguments). If a help command is present in the git-shell-commands directory, it is run to provide the user with an overview of allowed actions. Then a "git> " prompt is presented at which one can enter any of the commands from the git-shell-commands directory, or exit to close the connection. Generally this mode is used as an administrative interface to allow users to list repositories they have access to, create, delete, or rename repositories, or change repository descriptions and permissions. If a no-interactive-login command exists, then it is run and the interactive shell is aborted.
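A custom command in ~/git-shell-commands, as described above, is just an executable script. A hedged sketch of a hypothetical "list" helper that an interactive git-shell user could run; the assumption that bare repositories live directly under $HOME is illustrative.

```shell
# Install a "list" command that shows the user's bare repositories.
mkdir -p "$HOME/git-shell-commands"
cat > "$HOME/git-shell-commands/list" <<'EOF'
#!/bin/sh
ls -d "$HOME"/*.git 2>/dev/null || echo "no repositories"
EOF
chmod +x "$HOME/git-shell-commands/list"
```

At the "git> " prompt, the user would then type `list` to run it.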
git-shell - Restricted login shell for Git-only SSH access
chsh -s $(command -v git-shell) <user> git clone <user>@localhost:/path/to/repo.git ssh <user>@localhost
null
To disable interactive logins, displaying a greeting instead:

    $ chsh -s /usr/bin/git-shell
    $ mkdir $HOME/git-shell-commands
    $ cat >$HOME/git-shell-commands/no-interactive-login <<\EOF
    #!/bin/sh
    printf '%s\n' "Hi $USER! You've successfully authenticated, but I do not"
    printf '%s\n' "provide interactive shell access."
    exit 128
    EOF
    $ chmod +x $HOME/git-shell-commands/no-interactive-login

To enable git-cvsserver access (which should generally have the no-interactive-login example above as a prerequisite, as creating the git-shell-commands directory allows interactive logins):

    $ cat >$HOME/git-shell-commands/cvs <<\EOF
    if ! test $# = 1 && test "$1" = "server"
    then
        echo >&2 "git-cvsserver only handles \"server\""
        exit 1
    fi
    exec git cvsserver server
    EOF
    $ chmod +x $HOME/git-shell-commands/cvs

SEE ALSO
    ssh(1), git-daemon(1), contrib/git-shell-commands/README
GIT
    Part of the git(1) suite
Git 2.41.0                  2023-06-01                  GIT-SHELL(1)
ldapsearch
ldapsearch is a shell-accessible interface to the ldap_search_ext(3) library call. ldapsearch opens a connection to an LDAP server, binds, and performs a search using specified parameters. The filter should conform to the string representation for search filters as defined in RFC 4515. If not provided, the default filter, (objectClass=*), is used. If ldapsearch finds one or more entries, the attributes specified by attrs are returned. If * is listed, all user attributes are returned. If + is listed, all operational attributes are returned. If no attrs are listed, all user attributes are returned. If only 1.1 is listed, no attributes will be returned. The search results are displayed using an extended version of LDIF. Option -L controls the format of the output.
ldapsearch - LDAP search tool
ldapsearch [-n] [-c] [-u] [-v] [-t[t]] [-T path] [-F prefix] [-A] [-L[L[L]]] [-M[M]] [-S attribute] [-d debuglevel] [-f file] [-x] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-b searchbase] [-s {base|one|sub|children}] [-a {never|always|search|find}] [-P {2|3}] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-l timelimit] [-z sizelimit] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-X authzid] [-Y mech] [-Z[Z]] filter [attrs...]
-n      Show what would be done, but don't actually perform the search. Useful for debugging in conjunction with -v.

-c      Continuous operation mode. Errors are reported, but ldapsearch will continue with searches. The default is to exit after reporting an error. Only useful in conjunction with -f.

-u      Include the User Friendly Name form of the Distinguished Name (DN) in the output.

-v      Run in verbose mode, with many diagnostics written to standard output.

-t[t]   A single -t writes retrieved non-printable values to a set of temporary files. This is useful for dealing with values containing non-character data such as jpegPhoto or audio. A second -t writes all retrieved values to files.

-T path
        Write temporary files to the directory specified by path (default: /var/tmp/).

-F prefix
        URL prefix for temporary files. Default is file://path where path is /var/tmp/ or specified with -T.

-A      Retrieve attributes only (no values). This is useful when you just want to see if an attribute is present in an entry and are not interested in the specific values.

-L      Search results are displayed in LDAP Data Interchange Format detailed in ldif(5). A single -L restricts the output to LDIFv1. A second -L disables comments. A third -L disables printing of the LDIF version. The default is to use an extended version of LDIF.

-M[M]   Enable manage DSA IT control. -MM makes the control critical.

-S attribute
        Sort the entries returned based on attribute. The default is not to sort entries returned. If attribute is a zero-length string (""), the entries are sorted by the components of their Distinguished Name. See ldap_sort(3) for more details. Note that ldapsearch normally prints out entries as it receives them. The use of the -S option defeats this behavior, causing all entries to be retrieved, then sorted, then printed.

-d debuglevel
        Set the LDAP debugging level to debuglevel. ldapsearch must be compiled with LDAP_DEBUG defined for this option to have any effect.
-f file
        Read a series of lines from file, performing one LDAP search for each line. In this case, the filter given on the command line is treated as a pattern where the first and only occurrence of %s is replaced with a line from file. Any other occurrence of the % character in the pattern will be regarded as an error. Where it is desired that the search filter include a % character, the character should be encoded as \25 (see RFC 4515). If file is a single - character, then the lines are read from standard input. ldapsearch will exit when the first non-successful search result is returned, unless -c is used.

-x      Use simple authentication instead of SASL.

-D binddn
        Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value.

-W      Prompt for simple authentication. This is used instead of specifying the password on the command line.

-w passwd
        Use passwd as the password for simple authentication.

-y passwdfile
        Use the complete contents of passwdfile as the password for simple authentication.

-H ldapuri
        Specify URI(s) referring to the ldap server(s); a list of URI, separated by whitespace or commas is expected; only the protocol/host/port fields are allowed. As an exception, if no host/port is specified, but a DN is, the DN is used to look up the corresponding host(s) using the DNS SRV records, according to RFC 2782. The DN must be a non-empty sequence of AVAs whose attribute type is "dc" (domain component), and must be escaped according to RFC 2396.

-h ldaphost
        Specify an alternate host on which the ldap server is running. Deprecated in favor of -H.

-p ldapport
        Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H.

-b searchbase
        Use searchbase as the starting point for the search instead of the default.

-s {base|one|sub|children}
        Specify the scope of the search to be one of base, one, sub, or children to specify a base object, one-level, subtree, or children search.
The default is sub. Note: children scope requires the LDAPv3 subordinate feature extension.

-a {never|always|search|find}
        Specify how alias dereferencing is done. Should be one of never, always, search, or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when searching, or dereferenced only when locating the base object for the search. The default is to never dereference aliases.

-P {2|3}
        Specify the LDAP protocol version to use.

-e [!]ext[=extparam]
-E [!]ext[=extparam]
        Specify general extensions with -e and search extensions with -E. '!' indicates criticality.
        General extensions:
            [!]assert=<filter>              (an RFC 4515 Filter)
            [!]authzid=<authzid>            ("dn:<dn>" or "u:<user>")
            [!]manageDSAit
            [!]noop
            ppolicy
            [!]postread[=<attrs>]           (a comma-separated attribute list)
            [!]preread[=<attrs>]            (a comma-separated attribute list)
            abandon, cancel                 (SIGINT sends abandon/cancel; not really controls)
        Search extensions:
            [!]domainScope                  (domain scope)
            [!]mv=<filter>                  (matched values filter)
            [!]pr=<size>[/prompt|noprompt]  (paged results/prompt)
            [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]  (server side sorting)
            [!]subentries[=true|false]      (subentries)
            [!]sync=ro[/<cookie>]           (LDAP Sync refreshOnly)
                    rp[/<cookie>][/<slimit>]  (LDAP Sync refreshAndPersist)
            [!]vlv=<before>/<after>(/<offset>/<count>|:<value>)  (virtual list view)

-l timelimit
        Wait at most timelimit seconds for a search to complete. A timelimit of 0 (zero) or none means no limit. A timelimit of max means the maximum integer allowable by the protocol. A server may impose a maximal timelimit which only the root user may override.

-z sizelimit
        Retrieve at most sizelimit entries for a search. A sizelimit of 0 (zero) or none means no limit. A sizelimit of max means the maximum integer allowable by the protocol. A server may impose a maximal sizelimit which only the root user may override.

-O security-properties
        Specify SASL security properties.

-I      Enable SASL Interactive mode. Always prompt.
Default is to prompt only as needed.

-Q      Enable SASL Quiet mode. Never prompt.

-U authcid
        Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used.

-R realm
        Specify the realm of the authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used.

-X authzid
        Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username>

-Y mech
        Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows.

-Z[Z]   Issue the StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful.

OUTPUT FORMAT
If one or more entries are found, each entry is written to standard output in LDAP Data Interchange Format or ldif(5):

    version: 1

    # bjensen, example, net
    dn: uid=bjensen,dc=example,dc=net
    objectClass: person
    objectClass: dcObject
    uid: bjensen
    cn: Barbara Jensen
    sn: Jensen
    ...

If the -t option is used, the URI of a temporary file is used in place of the actual value. If the -A option is given, only the "attributename" part is written.

EXAMPLE
The following command:

    ldapsearch -LLL "(sn=smith)" cn sn telephoneNumber

will perform a subtree search (using the default search base and other parameters defined in ldap.conf(5)) for entries with a surname (sn) of smith. The common name (cn), surname (sn) and telephoneNumber values will be retrieved and printed to standard output. The output might look something like this if two entries are found:

    dn: uid=jts,dc=example,dc=com
    cn: John Smith
    cn: John T. Smith
    sn: Smith
    sn;lang-en: Smith
    sn;lang-de: Schmidt
    telephoneNumber: 1 555 123-4567

    dn: uid=sss,dc=example,dc=com
    cn: Steve Smith
    cn: Steve S. Smith
    sn: Smith
    sn;lang-en: Smith
    sn;lang-de: Schmidt
    telephoneNumber: 1 555 765-4321

The command:

    ldapsearch -LLL -u -t "(uid=xyz)" jpegPhoto audio

will perform a subtree search using the default search base for entries with a user id of "xyz". The user friendly form of the entry's DN will be output after the line that contains the DN itself, and the jpegPhoto and audio values will be retrieved and written to temporary files. The output might look like this if one entry with one value for each of the requested attributes is found:

    dn: uid=xyz,dc=example,dc=com
    ufn: xyz, example, com
    audio:< file:///tmp/ldapsearch-audio-a19924
    jpegPhoto:< file:///tmp/ldapsearch-jpegPhoto-a19924

This command:

    ldapsearch -LLL -s one -b "c=US" "(o=University*)" o description

will perform a one-level search at the c=US level for all entries whose organization name (o) begins with University. The organization name and description attribute values will be retrieved and printed to standard output, resulting in output similar to this:

    dn: o=University of Alaska Fairbanks,c=US
    o: University of Alaska Fairbanks
    description: Preparing Alaska for a brave new yesterday
    description: leaf node only

    dn: o=University of Colorado at Boulder,c=US
    o: University of Colorado at Boulder
    description: No personnel information
    description: Institution of education and research

    dn: o=University of Colorado at Denver,c=US
    o: University of Colorado at Denver
    o: UCD
    o: CU/Denver
    o: CU-Denver
    description: Institute for Higher Learning and Research

    dn: o=University of Florida,c=US
    o: University of Florida
    o: UFl
    description: Warper of young minds
    ...

DIAGNOSTICS
Exit status is zero if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error.
SEE ALSO ldapadd(1), ldapdelete(1), ldapmodify(1), ldapmodrdn(1), ldap.conf(5), ldif(5), ldap(3), ldap_search_ext(3), ldap_sort(3) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPSEARCH(1)
null
command
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last. If a command specified to the shell contains a slash ‘/’, the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying “echo” causes a builtin command to be executed under shells that support the echo builtin command, specifying “/bin/echo” or “./echo” does not. While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands “built-in commands” and some of them “reserved words”. Users of other shells may need to consult an info(1) page or other sources of documentation. Commands marked “No**” under External do exist externally, but are implemented as scripts using a builtin command of the same name.

     Command       External   csh(1)   sh(1)
     !             No         No       Yes
     %             No         Yes      No
     .             No         No       Yes
     :             No         Yes      Yes
     @             No         Yes      Yes
     [             Yes        No       Yes
     {             No         No       Yes
     }             No         No       Yes
     alias         No**       Yes      Yes
     alloc         No         Yes      No
     bg            No**       Yes      Yes
     bind          No         No       Yes
     bindkey       No         Yes      No
     break         No         Yes      Yes
     breaksw       No         Yes      No
     builtin       No         No       Yes
     builtins      No         Yes      No
     case          No         Yes      Yes
     cd            No**       Yes      Yes
     chdir         No         Yes      Yes
     command       No**       No       Yes
     complete      No         Yes      No
     continue      No         Yes      Yes
     default       No         Yes      No
     dirs          No         Yes      No
     do            No         No       Yes
     done          No         No       Yes
     echo          Yes        Yes      Yes
     echotc        No         Yes      No
     elif          No         No       Yes
     else          No         Yes      Yes
     end           No         Yes      No
     endif         No         Yes      No
     endsw         No         Yes      No
     esac          No         No       Yes
     eval          No         Yes      Yes
     exec          No         Yes      Yes
     exit          No         Yes      Yes
     export        No         No       Yes
     false         Yes        No       Yes
     fc            No**       No       Yes
     fg            No**       Yes      Yes
     filetest      No         Yes      No
     fi            No         No       Yes
     for           No         No       Yes
     foreach       No         Yes      No
     getopts       No**       No       Yes
     glob          No         Yes      No
     goto          No         Yes      No
     hash          No**       No       Yes
     hashstat      No         Yes      No
     history       No         Yes      No
     hup           No         Yes      No
     if            No         Yes      Yes
     jobid         No         No       Yes
     jobs          No**       Yes      Yes
     kill          Yes        Yes      Yes
     limit         No         Yes      No
     local         No         No       Yes
     log           No         Yes      No
     login         Yes        Yes      No
     logout        No         Yes      No
     ls-F          No         Yes      No
     nice          Yes        Yes      No
     nohup         Yes        Yes      No
     notify        No         Yes      No
     onintr        No         Yes      No
     popd          No         Yes      No
     printenv      Yes        Yes      No
     printf        Yes        No       Yes
     pushd         No         Yes      No
     pwd           Yes        No       Yes
     read          No**       No       Yes
     readonly      No         No       Yes
     rehash        No         Yes      No
     repeat        No         Yes      No
     return        No         No       Yes
     sched         No         Yes      No
     set           No         Yes      Yes
     setenv        No         Yes      No
     settc         No         Yes      No
     setty         No         Yes      No
     setvar        No         No       Yes
     shift         No         Yes      Yes
     source        No         Yes      No
     stop          No         Yes      No
     suspend       No         Yes      No
     switch        No         Yes      No
     telltc        No         Yes      No
     test          Yes        No       Yes
     then          No         No       Yes
     time          Yes        Yes      No
     times         No         No       Yes
     trap          No         No       Yes
     true          Yes        No       Yes
     type          No**       No       Yes
     ulimit        No**       No       Yes
     umask         No**       Yes      Yes
     unalias       No**       Yes      Yes
     uncomplete    No         Yes      No
     unhash        No         Yes      No
     unlimit       No         Yes      No
     unset         No         Yes      Yes
     unsetenv      No         Yes      No
     until         No         No       Yes
     wait          No**       Yes      Yes
     where         No         Yes      No
     which         Yes        Yes      No
     while         No         Yes      Yes

SEE ALSO csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1) HISTORY The builtin manual page first appeared in FreeBSD 3.4. 
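The slash rule described above can be checked interactively. A minimal sh(1) sketch (the exact wording of type's output varies between shells):

```shell
# With no slash, the builtin wins; a slash forces the external program,
# even though the last path component matches a builtin name.
command -v echo        # resolves to the builtin: prints "echo"
command -v /bin/echo   # a path is taken literally: prints "/bin/echo"
type echo              # e.g. "echo is a shell builtin"
```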
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
See the built-in command description in the appropriate shell manual page.
null
null
snmp-bridge-mib
The snmp-bridge-mib is an extension to net-snmp. It collects information about a bridge in a Linux system and exports it for query from other (remote) systems for management purposes. CONFIGURATION: The preferred method for snmp-bridge-mib to attach to net-snmp is agentx. For this to work, you will have to add the following line to /etc/snmp/snmpd.conf: master agentx Then restart snmpd and start snmp-bridge-mib. snmp-bridge-mib will then connect to the running snmpd daemon. Another way of attaching snmp-bridge-mib is to run it as an embedded perl module. You have to add perl do "path to location of snmp-bridge-mib" to /etc/snmp/snmpd.conf and restart snmpd. EXAMPLE: Follow the instructions in this manpage and type perl /usr/bin/snmp-bridge-mib br0 You'll get the following output: registering at .1.3.6.1.2.1.17 running as a subagent. dot1qbridge agent started. NET-SNMP version 5.4.2.1 AgentX subagent connected Now it's time for a first test: $ export MIBS=+BRIDGE-MIB $ snmpwalk localhost .1.3.6.1.2.1.17 The output produced should look like BRIDGE-MIB::dot1dStpBridgeHelloTime = INTEGER: 199 centi-seconds BRIDGE-MIB::dot1dStpBridgeForwardDelay = INTEGER: 1499 centi-seconds BRIDGE-MIB::dot1dStpPort.1 = INTEGER: 1 BRIDGE-MIB::dot1dStpPort.3 = INTEGER: 3 BRIDGE-MIB::dot1dStpPortPriority.1 = INTEGER: 32 BRIDGE-MIB::dot1dStpPortPriority.3 = INTEGER: 32 BRIDGE-MIB::dot1dStpPortState.1 = INTEGER: disabled(1) BRIDGE-MIB::dot1dStpPortState.3 = INTEGER: disabled(1) BRIDGE-MIB::dot1dStpPortEnable.1 = INTEGER: disabled(2) BRIDGE-MIB::dot1dStpPortEnable.3 = INTEGER: disabled(2) BRIDGE-MIB::dot1dStpPortPathCost.1 = INTEGER: 2 BRIDGE-MIB::dot1dStpPortPathCost.3 = INTEGER: 4 BRIDGE-MIB::dot1dStpPortDesignatedRoot.1 = STRING: "8000.001018382c78" BRIDGE-MIB::dot1dStpPortDesignatedRoot.3 = STRING: "8000.001018382c78" BRIDGE-MIB::dot1dStpPortDesignatedCost.1 = INTEGER: 0 BRIDGE-MIB::dot1dStpPortDesignatedCost.3 = INTEGER: 0 BRIDGE-MIB::dot1dStpPortDesignatedBridge.1 = STRING: "8000.001018382c78" 
BRIDGE-MIB::dot1dStpPortDesignatedBridge.3 = STRING: "8000.001018382c78" BRIDGE-MIB::dot1dStpPortDesignatedPort.1 = STRING: "32769" BRIDGE-MIB::dot1dStpPortDesignatedPort.3 = STRING: "32770" BRIDGE-MIB::dot1dStpPortPathCost32.1 = INTEGER: 2 BRIDGE-MIB::dot1dStpPortPathCost32.3 = INTEGER: 4 BRIDGE-MIB::dot1dTpLearnedEntryDiscards = Counter32: 0 BRIDGE-MIB::dot1dTpAgingTime = INTEGER: 300 seconds BRIDGE-MIB::dot1dTpFdbAddress.´...8,x´ = STRING: 0:10:18:38:2c:78 BRIDGE-MIB::dot1dTpFdbAddress.´.!^/B|´ = STRING: 0:21:5e:2f:42:7c BRIDGE-MIB::dot1dTpFdbPort.´...8,x´ = INTEGER: 1 BRIDGE-MIB::dot1dTpFdbPort.´.!^/B|´ = INTEGER: 3 BRIDGE-MIB::dot1dTpFdbStatus.´...8,x´ = INTEGER: learned(3) BRIDGE-MIB::dot1dTpFdbStatus.´.!^/B|´ = INTEGER: learned(3) BRIDGE-MIB::dot1dTpPort.1 = INTEGER: 1 BRIDGE-MIB::dot1dTpPort.3 = INTEGER: 3 BRIDGE-MIB::dot1dTpPortMaxInfo.1 = INTEGER: 1500 bytes BRIDGE-MIB::dot1dTpPortMaxInfo.3 = INTEGER: 1500 bytes BRIDGE-MIB::dot1dTpPortInFrames.1 = Counter32: 18082 frames BRIDGE-MIB::dot1dTpPortInFrames.3 = Counter32: 1546072 frames BRIDGE-MIB::dot1dTpPortOutFrames.1 = Counter32: 11601 frames BRIDGE-MIB::dot1dTpPortOutFrames.3 = Counter32: 10988 frames BRIDGE-MIB::dot1dTpPortInDiscards.1 = Counter32: 0 frames BRIDGE-MIB::dot1dTpPortInDiscards.3 = Counter32: 0 frames BUGS 1. snmp-bridge-mib currently only supports one bridge, which has to be specified on the commandline. 2. Not all elements of RFC 4188 are implemented, because they are either not available in sysfs or difficult to extract from the existing data. 
SEE ALSO snmpd.conf(5), Net::SNMP(3) AUTHOR Jens Osterkamp <jens@linux.vnet.ibm.com> Developer COPYRIGHT Copyright © 2009, 2010 IBM Corp., All Rights Reserved Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. http://www.ibm.com/ v6 26 Mar 2010 SNMP-BRIDGE-MIB(1)
snmp-bridge-mib - provide Linux bridge information via SNMP
snmp-bridge-mib {bridge} ARGUMENTS The following arguments are required: bridge The name of the Linux bridge for which you want to provide information via SNMP, e.g. br0.
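The agentx setup described in CONFIGURATION can be sketched as a short shell session. This is a sketch only: the restart command and the bridge name br0 are assumptions that vary by distribution and setup.

```shell
# 1. Tell snmpd to act as an AgentX master (directive from this man page):
echo 'master agentx' >> /etc/snmp/snmpd.conf

# 2. Restart the daemon so it picks up the new directive:
systemctl restart snmpd        # or: /etc/init.d/snmpd restart

# 3. Start the subagent for the bridge of interest (br0 here):
perl /usr/bin/snmp-bridge-mib br0
```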
null
null
egrep
The grep utility searches any given input files, selecting lines that match one or more patterns. By default, a pattern matches an input line if the regular expression (RE) in the pattern matches the input line without its trailing newline. An empty expression matches every line. Each input line that matches at least one of the patterns is written to the standard output. grep is used for simple patterns and basic regular expressions (BREs); egrep can handle extended regular expressions (EREs). See re_format(7) for more information on regular expressions. fgrep is quicker than both grep and egrep, but can only handle fixed patterns (i.e., it does not interpret regular expressions). Patterns may consist of one or more lines, allowing any of the pattern lines to match a portion of the input. zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utilities. bzgrep, bzegrep, and bzfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the bzip2(1) compression utility. The following options are available: -A num, --after-context=num Print num lines of trailing context after each match. See also the -B and -C options. -a, --text Treat all files as ASCII text. Normally grep will simply print “Binary file ... matches” if files contain binary characters. Use of this option forces grep to output lines matching the specified pattern. -B num, --before-context=num Print num lines of leading context before each match. See also the -A and -C options. -b, --byte-offset The offset in bytes of a matched pattern is displayed in front of the respective matched line. -C num, --context=num Print num lines of leading and trailing context surrounding each match. See also the -A and -B options. -c, --count Only a count of selected lines is written to standard output. 
--colour=[when], --color=[when] Mark up the matching text with the expression stored in the GREP_COLOR environment variable. The possible values of when are “never”, “always” and “auto”. -D action, --devices=action Specify the demanded action for devices, FIFOs and sockets. The default action is “read”, which means that they are read as if they were normal files. If the action is set to “skip”, devices are silently skipped. -d action, --directories=action Specify the demanded action for directories. It is “read” by default, which means that the directories are read in the same manner as normal files. Other possible values are “skip” to silently ignore the directories, and “recurse” to read them recursively, which has the same effect as the -R and -r options. -E, --extended-regexp Interpret pattern as an extended regular expression (i.e., force grep to behave as egrep). -e pattern, --regexp=pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’). --exclude pattern If specified, it excludes files matching the given filename pattern from the search. Note that --exclude and --include patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include pattern is specified, all files are searched that are not excluded. Patterns are matched to the full path specified, not only to the filename component. --exclude-dir pattern If -R is specified, it excludes directories matching the given filename pattern from the search. Note that --exclude-dir and --include-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. If no --include-dir pattern is specified, all directories are searched that are not excluded. 
-F, --fixed-strings Interpret pattern as a set of fixed strings (i.e., force grep to behave as fgrep). -f file, --file=file Read one or more newline separated patterns from file. Empty pattern lines match every input line. Newlines are not considered part of a pattern. If file is empty, nothing is matched. -G, --basic-regexp Interpret pattern as a basic regular expression (i.e., force grep to behave as traditional grep). -H Always print filename headers with output lines. -h, --no-filename Never print filename headers (i.e., filenames) with output lines. --help Print a brief help message. -I Ignore binary files. This option is equivalent to the “--binary-files=without-match” option. -i, --ignore-case Perform case insensitive matching. By default, grep is case sensitive. --include pattern If specified, only files matching the given filename pattern are searched. Note that --include and --exclude patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. Patterns are matched to the full path specified, not only to the filename component. --include-dir pattern If -R is specified, only directories matching the given filename pattern are searched. Note that --include-dir and --exclude-dir patterns are processed in the order given. If a name matches multiple patterns, the latest matching rule wins. -J, --bz2decompress Decompress the bzip2(1) compressed file before looking for the text. -L, --files-without-match Only the names of files not containing selected lines are written to standard output. Pathnames are listed once per file searched. If the standard input is searched, the string “(standard input)” is written unless a --label is specified. -l, --files-with-matches Only the names of files containing selected lines are written to standard output. grep will only search a file until a match has been found, making searches potentially less expensive. Pathnames are listed once per file searched. 
If the standard input is searched, the string “(standard input)” is written unless a --label is specified. --label Label to use in place of “(standard input)” for a file name where a file name would normally be printed. This option applies to -H, -L, and -l. --mmap Use mmap(2) instead of read(2) to read input, which can result in better performance under some circumstances but can cause undefined behaviour. -M, --lzma Decompress the LZMA compressed file before looking for the text. -m num, --max-count=num Stop reading the file after num matches. -n, --line-number Each output line is preceded by its relative line number in the file, starting at line 1. The line number counter is reset for each file processed. This option is ignored if -c, -L, -l, or -q is specified. --null Prints a zero-byte after the file name. -O If -R is specified, follow symbolic links only if they were explicitly listed on the command line. The default is not to follow symbolic links. -o, --only-matching Prints only the matching part of the lines. -p If -R is specified, no symbolic links are followed. This is the default. -q, --quiet, --silent Quiet mode: suppress normal output. grep will only search a file until a match has been found, making searches potentially less expensive. -R, -r, --recursive Recursively search subdirectories listed. (i.e., force grep to behave as rgrep). -S If -R is specified, all symbolic links are followed. The default is not to follow symbolic links. -s, --no-messages Silent mode. Nonexistent and unreadable files are ignored (i.e., their error messages are suppressed). -U, --binary Search binary files, but do not attempt to print them. -u This option has no effect and is provided only for compatibility with GNU grep. -V, --version Display version information and exit. -v, --invert-match Selected lines are those not matching any of the specified patterns. 
-w, --word-regexp The expression is searched for as a word (as if surrounded by ‘[[:<:]]’ and ‘[[:>:]]’; see re_format(7)). This option has no effect if -x is also specified. -x, --line-regexp Only input lines selected against an entire fixed string or regular expression are considered to be matching lines. -y Equivalent to -i. Obsoleted. -z, --null-data Treat input and output data as sequences of lines terminated by a zero-byte instead of a newline. -X, --xz Decompress the xz(1) compressed file before looking for the text. -Z, --decompress Force grep to behave as zgrep. --binary-files=value Controls searching and printing of binary files. Options are: binary (default) Search binary files but do not print them. without-match Do not search binary files. text Treat all files as text. --line-buffered Force output to be line buffered. By default, output is line buffered when standard output is a terminal and block buffered otherwise. If no file arguments are specified, the standard input is used. Additionally, “-” may be used in place of a file name, anywhere that a file name is accepted, to read from standard input. This includes both -f and file arguments. ENVIRONMENT GREP_OPTIONS May be used to specify default options that will be placed at the beginning of the argument list. Backslash-escaping is not supported, unlike the behavior in GNU grep. EXIT STATUS The grep utility exits with one of the following values: 0 One or more lines were selected. 1 No lines were selected. >1 An error occurred.
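The ordering rule for --include and --exclude ("the latest matching rule wins") can be seen with a small sketch; the file names here are made up for illustration. (GNU grep's precedence rule differs slightly, but the result for this example is the same.)

```shell
# A file matching both patterns is governed by the later rule:
# main_test.c matches --include='*.c' and --exclude='*test*',
# and the later --exclude wins, so it is skipped.
dir=$(mktemp -d)
printf 'TODO: fix\n' > "$dir/main.c"
printf 'TODO: fix\n' > "$dir/main_test.c"
grep -R -l --include='*.c' --exclude='*test*' TODO "$dir"
# lists only .../main.c
```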
grep, egrep, fgrep, rgrep, bzgrep, bzegrep, bzfgrep, zgrep, zegrep, zfgrep – file pattern searcher
grep [-abcdDEFGHhIiJLlMmnOopqRSsUVvwXxZz] [-A num] [-B num] [-C num] [-e pattern] [-f file] [--binary-files=value] [--color[=when]] [--colour[=when]] [--context=num] [--label] [--line-buffered] [--null] [pattern] [file ...]
null
- Find all occurrences of the pattern ‘patricia’ in a file: $ grep 'patricia' myfile - Same as above but looking only for complete words: $ grep -w 'patricia' myfile - Count occurrences of the exact pattern ‘FOO’ : $ grep -c FOO myfile - Same as above but ignoring case: $ grep -c -i FOO myfile - Find all occurrences of the pattern ‘.Pp’ at the beginning of a line: $ grep '^\.Pp' myfile The apostrophes ensure the entire expression is evaluated by grep instead of by the user's shell. The caret ‘^’ matches the null string at the beginning of a line, and the ‘\’ escapes the ‘.’, which would otherwise match any character. - Find all lines in a file which do not contain the words ‘foo’ or ‘bar’: $ grep -v -e 'foo' -e 'bar' myfile - Peruse the file ‘calendar’ looking for either 19, 20, or 25 using extended regular expressions: $ egrep '19|20|25' calendar - Show matching lines and the name of the ‘*.h’ files which contain the pattern ‘FIXME’. Do the search recursively from the /usr/src/sys/arm directory $ grep -H -R FIXME --include="*.h" /usr/src/sys/arm/ - Same as above but show only the name of the matching file: $ grep -l -R FIXME --include="*.h" /usr/src/sys/arm/ - Show lines containing the text ‘foo’. The matching part of the output is colored and every line is prefixed with the line number and the offset in the file for those lines that matched. 
$ grep -b --colour -n foo myfile - Show lines that match the extended regular expression patterns read from the standard input: $ echo -e 'Free\nBSD\nAll.*reserved' | grep -E -f - myfile - Show lines from the output of the pciconf(8) command matching the specified extended regular expression along with three lines of leading context and one line of trailing context: $ pciconf -lv | grep -B3 -A1 -E 'class.*=.*storage' - Suppress any output and use the exit status to show an appropriate message: $ grep -q foo myfile && echo File matches SEE ALSO bzip2(1), compress(1), ed(1), ex(1), gzip(1), sed(1), xz(1), zgrep(1), re_format(7) STANDARDS The grep utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification. The flags [-AaBbCDdGHhILmopRSUVw] are extensions to that specification, and the behaviour of the -f flag when used with an empty pattern file is left undefined. All long options are provided for compatibility with GNU versions of this utility. Historic versions of the grep utility also supported the flags [-ruy]. This implementation supports those options; however, their use is strongly discouraged. HISTORY The grep command first appeared in Version 6 AT&T UNIX. BUGS The grep utility does not normalize Unicode input, so a pattern containing composed characters will not match decomposed input, and vice versa. macOS 14.5 November 10, 2021 macOS 14.5
notifyutil
notifyutil is a command-line utility for interacting with the notify(3) notification system and the notifyd(8) server. It may be used to post notifications, detect and report notifications, and to examine and set the state values associated with notification keys. If notifyutil is used to monitor one or more notification keys, it prints the notification key when the corresponding notification is received. The -v (verbose) and -q (quiet) flags, if specified, modify the output behavior. The -v flag causes notifyutil to print a time stamp, the notification key, the current state value for that key, and the type of the notification (port, file, etc). The -q flag suppresses any output except for state values fetched following a -g command. Commands listed in the table below are processed in left to right order from the command line. -p key Post a notification for key. -w key Register for key and wait forever for notifications. -# key Register for key and wait for # (an integer) notifications. E.g. -1 key waits for a single notification. -g key Get state value for key. -s key val Set state value for key. -port Use mach port notifications for subsequent -w or -# registrations. This is the default registration type. -file Use file descriptor notifications for subsequent registrations. -check Use shared memory notifications for subsequent registrations. -signal [#] Use signal notifications for subsequent registrations. Signal 1 (HUP) is the default, but an alternate signal may be specified. -dispatch Use dispatch for subsequent registrations. When invoked with any combination of -w and -# actions, notifyutil registers for notification for the specified key(s). If any key is given with a -w action, notifyutil runs until interrupted with Control-C. If all registrations are invoked with -#, the program continues to run until the corresponding number of notifications for each key have been received. 
By default, notifyutil uses mach port registration (using notify_register_mach_port()) for keys given with a -w or -# flag. The -file command causes notifyutil to use notify_register_file_descriptor() for any subsequent -w or -# registrations. Similarly, -check causes notifyutil to use notify_register_check() for subsequent registrations, -signal switches to notify_register_signal(), and -dispatch causes it to use notify_register_dispatch() for subsequent registrations. If any registrations are made following the use of the -check command, notifyutil will start a timer and check for shared memory notifications every 100 milliseconds. An alternate timer value may be set following the -z flag. The -M flag causes notifyutil to multiplex all notifications over a single mach connection with notifyd. Notifications (except shared memory notifications) are received and redistributed by a dispatch handler. The -R flag causes notifyutil to regenerate all its registrations in the unlikely event that notifyd restarts. Note that a notification key and its associated state variable only exist when there are one or more current registrations for that key. Setting the state for a key that has no registrations has no effect. Thus the command notifyutil -s foo.bar 123 -g foo.bar will print foo.bar 0 unless foo.bar is registered by some other process. However, the command notifyutil -w foo.bar -s foo.bar 123 -g foo.bar prints foo.bar 123 since the “-w foo.bar” registration ensures the key and its state variable exist before the value is set, and continue to exist when the value is fetched. SEE ALSO notify(3), notifyd(8) Mac OS X November 4, 2011 Mac OS X
notifyutil – notification command line utility
notifyutil [-q] [-v] [-z msec] [-M] [-R] [command ...]
null
null
cc
clang is a C, C++, and Objective-C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link. While Clang is highly integrated, it is important to understand the stages of compilation in order to understand how to invoke it. These stages are: Driver The clang executable is actually a small driver which controls the overall execution of other tools such as the compiler, assembler and linker. Typically you do not need to interact with the driver, but you transparently use it to run the other tools. Preprocessing This stage handles tokenization of the input source file, macro expansion, #include expansion and handling of other preprocessor directives. The output of this stage is typically called a ".i" (for C), ".ii" (for C++), ".mi" (for Objective-C), or ".mii" (for Objective-C++) file. Parsing and Semantic Analysis This stage parses the input file, translating preprocessor tokens into a parse tree. Once in the form of a parse tree, it applies semantic analysis to compute types for expressions as well as to determine whether the code is well formed. This stage is responsible for generating most of the compiler warnings as well as parse errors. The output of this stage is an "Abstract Syntax Tree" (AST). Code Generation and Optimization This stage translates an AST into low-level intermediate code (known as "LLVM IR") and ultimately to machine code. This phase is responsible for optimizing the generated code and handling target-specific code generation. The output of this stage is typically called a ".s" file or "assembly" file. Clang also supports the use of an integrated assembler, in which the code generator produces object files directly. This avoids the overhead of generating the ".s" file and of calling the target assembler. 
Assembler This stage runs the target assembler to translate the output of the compiler into a target object file. The output of this stage is typically called a ".o" file or "object" file. Linker This stage runs the target linker to merge multiple object files into an executable or dynamic library. The output of this stage is typically called an "a.out", ".dylib" or ".so" file. Clang Static Analyzer The Clang Static Analyzer is a tool that scans source code to try to find bugs through code analysis. This tool uses many parts of Clang and is built into the same driver. Please see <https://clang-analyzer.llvm.org> for more details on how to use the static analyzer.
clang - the Clang C, C++, and Objective-C compiler
clang [options] filename ...
Stage Selection Options -E Run the preprocessor stage. -fsyntax-only Run the preprocessor, parser and semantic analysis stages. -S Run the previous stages as well as LLVM generation and optimization stages and target-specific code generation, producing an assembly file. -c Run all of the above, plus the assembler, generating a target ".o" object file. no stage selection option If no stage selection option is specified, all stages above are run, and the linker is run to combine the results into an executable or shared library. Language Selection and Mode Options -x <language> Treat subsequent input files as having type language. -std=<standard> Specify the language standard to compile for. Supported values for the C language are: c89 c90 iso9899:1990 ISO C 1990 iso9899:199409 ISO C 1990 with amendment 1 gnu89 gnu90 ISO C 1990 with GNU extensions c99 iso9899:1999 ISO C 1999 gnu99 ISO C 1999 with GNU extensions c11 iso9899:2011 ISO C 2011 gnu11 ISO C 2011 with GNU extensions c17 iso9899:2017 ISO C 2017 gnu17 ISO C 2017 with GNU extensions The default C language standard is gnu17, except on PS4, where it is gnu99. Supported values for the C++ language are: c++98 c++03 ISO C++ 1998 with amendments gnu++98 gnu++03 ISO C++ 1998 with amendments and GNU extensions c++11 ISO C++ 2011 with amendments gnu++11 ISO C++ 2011 with amendments and GNU extensions c++14 ISO C++ 2014 with amendments gnu++14 ISO C++ 2014 with amendments and GNU extensions c++17 ISO C++ 2017 with amendments gnu++17 ISO C++ 2017 with amendments and GNU extensions c++20 ISO C++ 2020 with amendments gnu++20 ISO C++ 2020 with amendments and GNU extensions c++2b Working draft for ISO C++ 2023 gnu++2b Working draft for ISO C++ 2023 with GNU extensions The default C++ language standard is gnu++98. Supported values for the OpenCL language are: cl1.0 OpenCL 1.0 cl1.1 OpenCL 1.1 cl1.2 OpenCL 1.2 cl2.0 OpenCL 2.0 The default OpenCL language standard is cl1.0. 
Supported values for the CUDA language are: cuda NVIDIA CUDA(tm) -stdlib=<library> Specify the C++ standard library to use; supported options are libstdc++ and libc++. If not specified, platform default will be used. -rtlib=<library> Specify the compiler runtime library to use; supported options are libgcc and compiler-rt. If not specified, platform default will be used. -ansi Same as -std=c89. -ObjC, -ObjC++ Treat source input files as Objective-C and Objective-C++ inputs respectively. -trigraphs Enable trigraphs. -ffreestanding Indicate that the file should be compiled for a freestanding, not a hosted, environment. Note that it is assumed that a freestanding environment will additionally provide memcpy, memmove, memset and memcmp implementations, as these are needed for efficient codegen for many programs. -fno-builtin Disable special handling and optimizations of well-known library functions, like strlen() and malloc(). -fno-builtin-<function> Disable special handling and optimizations for the specific library function. For example, -fno-builtin-strlen removes any special handling for the strlen() library function. -fno-builtin-std-<function> Disable special handling and optimizations for the specific C++ standard library function in namespace std. For example, -fno-builtin-std-move_if_noexcept removes any special handling for the std::move_if_noexcept() library function. For C standard library functions that the C++ standard library also provides in namespace std, use -fno-builtin-<function> instead. -fmath-errno Indicate that math functions should be treated as updating errno. -fpascal-strings Enable support for Pascal-style strings with "\pfoo". -fms-extensions Enable support for Microsoft extensions. -fmsc-version= Set _MSC_VER. Defaults to 1300 on Windows. Not set otherwise. -fborland-extensions Enable support for Borland extensions. -fwritable-strings Make all string literals default to writable. This disables uniquing of strings and other optimizations. 
-flax-vector-conversions, -flax-vector-conversions=<kind>, -fno-lax-vector-conversions Allow loose type checking rules for implicit vector conversions. Possible values of <kind>: • none: allow no implicit conversions between vectors • integer: allow implicit bitcasts between integer vectors of the same overall bit-width • all: allow implicit bitcasts between any vectors of the same overall bit-width <kind> defaults to integer if unspecified. -fblocks Enable the "Blocks" language feature. -fobjc-abi-version=version Select the Objective-C ABI version to use. Available versions are 1 (legacy "fragile" ABI), 2 (non-fragile ABI 1), and 3 (non-fragile ABI 2). -fobjc-nonfragile-abi-version=<version> Select the Objective-C non-fragile ABI version to use by default. This will only be used as the Objective-C ABI when the non-fragile ABI is enabled (either via -fobjc-nonfragile-abi, or because it is the platform default). -fobjc-nonfragile-abi, -fno-objc-nonfragile-abi Enable use of the Objective-C non-fragile ABI. On platforms for which this is the default ABI, it can be disabled with -fno-objc-nonfragile-abi. Target Selection Options Clang fully supports cross compilation as an inherent part of its design. Depending on how your version of Clang is configured, it may have support for a number of cross compilers, or may only support a native target. -arch <architecture> Specify the architecture to build for (Mac OS X specific). -target <architecture> Specify the architecture to build for (all platforms). -mmacosx-version-min=<version> When building for macOS, specify the minimum version supported by your application. -miphoneos-version-min When building for iPhone OS, specify the minimum version supported by your application. --print-supported-cpus Print out a list of supported processors for the given target (specified through --target=<architecture> or -arch <architecture>). If no target is specified, the system default target will be used. -mcpu=?, -mtune=? 
Acts as an alias for --print-supported-cpus. -march=<cpu> Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=i486, the compiler is allowed to generate instructions that are valid on i486 and later processors, but which may not exist on earlier ones. Code Generation Options -O0, -O1, -O2, -O3, -Ofast, -Os, -Oz, -Og, -O, -O4 Specify which optimization level to use: -O0 Means "no optimization": this level compiles the fastest and generates the most debuggable code. -O1 Somewhere between -O0 and -O2. -O2 Moderate level of optimization which enables most optimizations. -O3 Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster). -Ofast Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. -Os Like -O2 with extra optimizations to reduce code size. -Oz Like -Os (and thus -O2), but reduces code size further. -Og Like -O1. In future versions, this option might disable different optimizations in order to improve debuggability. -O Equivalent to -O1. -O4 and higher Currently equivalent to -O3 -g, -gline-tables-only, -gmodules Control debug information output. Note that Clang debug information works best at -O0. When more than one option starting with -g is specified, the last one wins: -g Generate debug information. -gline-tables-only Generate only line table debug information. This allows for symbolicated backtraces with inlining information, but does not include any information about variables, their locations or types. -gmodules Generate debug information that contains external references to types defined in Clang modules or precompiled headers instead of emitting redundant debug type information into every object file. 
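A few illustrative invocations of the optimization and target options above (source file names are hypothetical):

```shell
# Optimize aggressively for size:
clang -Oz -c tiny.c -o tiny.o

# Generate code valid on i486 and later processors, which may not run
# on earlier ones:
clang -march=i486 -c legacy.c -o legacy.o

# Ask the compiler which CPUs the current target supports:
clang --print-supported-cpus
```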
This option transparently switches the Clang module format to object file containers that hold the Clang module together with the debug information. When compiling a program that uses Clang modules or precompiled headers, this option produces complete debug information with faster compile times and much smaller object files. This option should not be used when building static libraries for distribution to other machines because the debug info will contain references to the module cache on the machine the object files in the library were built on. -fstandalone-debug -fno-standalone-debug Clang supports a number of optimizations to reduce the size of debug information in the binary. They work based on the assumption that the debug type information can be spread out over multiple compilation units. For instance, Clang will not emit type definitions for types that are not needed by a module and could be replaced with a forward declaration. Further, Clang will only emit type info for a dynamic C++ class in the module that contains the vtable for the class. The -fstandalone-debug option turns off these optimizations. This is useful when working with 3rd-party libraries that don't come with debug information. This is the default on Darwin. Note that Clang will never emit type information for types that are not referenced at all by the program. -feliminate-unused-debug-types By default, Clang does not emit type information for types that are defined but not used in a program. To retain the debug info for these unused types, the negation -fno-eliminate-unused-debug-types can be used. -fexceptions Enable generation of unwind information. This allows exceptions to be thrown through Clang compiled stack frames. This is on by default in x86-64. -ftrapv Generate code to catch integer overflow errors. Signed integer overflow is undefined in C. With this flag, extra code is generated to detect this and abort when it happens. 
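As a sketch of the debug-info and overflow-checking flags described above (file names are placeholders):

```shell
# Retain full debug type info, e.g. when linking against 3rd-party
# libraries built without debug information:
clang -g -fstandalone-debug -c viewer.c

# Build with runtime detection of signed integer overflow; the
# program aborts if overflow occurs:
clang -ftrapv -c counter.c
```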
-fvisibility This flag sets the default visibility level. -fcommon, -fno-common This flag specifies that variables without initializers get common linkage. It can be disabled with -fno-common. -ftls-model=<model> Set the default thread-local storage (TLS) model to use for thread-local variables. Valid values are: "global-dynamic", "local-dynamic", "initial-exec" and "local-exec". The default is "global-dynamic". The default model can be overridden with the tls_model attribute. The compiler will try to choose a more efficient model if possible. -flto, -flto=full, -flto=thin, -emit-llvm Generate output files in LLVM formats, suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options). The default for -flto is "full", in which the LLVM bitcode is suitable for monolithic Link Time Optimization (LTO), where the linker merges all such modules into a single combined module for optimization. With "thin", ThinLTO compilation is invoked instead. NOTE: On Darwin, when using -flto along with -g and compiling and linking in separate steps, you also need to pass -Wl,-object_path_lto,<lto-filename>.o at the linking step to instruct the ld64 linker not to delete the temporary object file generated during Link Time Optimization (this flag is automatically passed to the linker by Clang if compilation and linking are done in a single step). This allows debugging the executable as well as generating the .dSYM bundle using dsymutil(1). Driver Options -### Print (but do not run) the commands to run for this compilation. --help Display available options. -Qunused-arguments Do not emit any warnings for unused driver arguments. -Wa,<args> Pass the comma separated arguments in args to the assembler. -Wl,<args> Pass the comma separated arguments in args to the linker. 
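The Darwin LTO note above can be sketched as a separate compile-and-link sequence; the file names are placeholders:

```shell
# Compile with LTO and debug info in separate steps on Darwin:
clang -flto -g -c a.c
clang -flto -g -c b.c

# At link time, tell ld64 to keep the temporary LTO object file:
clang -flto -g a.o b.o -Wl,-object_path_lto,lto_tmp.o -o app

# Generate the .dSYM bundle from the preserved object:
dsymutil app
```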
-Wp,<args>
    Pass the comma separated arguments in args to the preprocessor.

-Xanalyzer <arg>
    Pass arg to the static analyzer.

-Xassembler <arg>
    Pass arg to the assembler.

-Xlinker <arg>
    Pass arg to the linker.

-Xpreprocessor <arg>
    Pass arg to the preprocessor.

-o <file>
    Write output to file.

-print-file-name=<file>
    Print the full library path of file.

-print-libgcc-file-name
    Print the library path for the currently used compiler runtime library ("libgcc.a" or "libclang_rt.builtins.*.a").

-print-prog-name=<name>
    Print the full program path of name.

-print-search-dirs
    Print the paths used for finding libraries and programs.

-save-temps
    Save intermediate compilation results.

-save-stats, -save-stats=cwd, -save-stats=obj
    Save internal code generation (LLVM) statistics to a file in the current directory ("-save-stats"/"-save-stats=cwd") or the directory of the output file ("-save-stats=obj").

-integrated-as, -no-integrated-as
    Used to enable and disable, respectively, the use of the integrated assembler. Whether the integrated assembler is on by default is target dependent.

-time
    Time individual commands.

-ftime-report
    Print timing summary of each stage of compilation.

-v  Show commands to run and use verbose output.

Diagnostics Options

-fshow-column, -fshow-source-location, -fcaret-diagnostics, -fdiagnostics-fixit-info, -fdiagnostics-parseable-fixits, -fdiagnostics-print-source-range-info, -fprint-source-range-info, -fdiagnostics-show-option, -fmessage-length
    These options control how Clang prints out information about diagnostics (errors and warnings). Please see the Clang User's Manual for more information.

Preprocessor Options

-D<macroname>=<value>
    Adds an implicit #define into the predefines buffer which is read before the source file is preprocessed.

-U<macroname>
    Adds an implicit #undef into the predefines buffer which is read before the source file is preprocessed.
-include <filename> Adds an implicit #include into the predefines buffer which is read before the source file is preprocessed. -I<directory> Add the specified directory to the search path for include files. -F<directory> Add the specified directory to the search path for framework include files. -nostdinc Do not search the standard system directories or compiler builtin directories for include files. -nostdlibinc Do not search the standard system directories for include files, but do search compiler builtin include directories. -nobuiltininc Do not search clang's builtin directory for include files. ENVIRONMENT TMPDIR, TEMP, TMP These environment variables are checked, in order, for the location to write temporary files used during the compilation process. CPATH If this environment variable is present, it is treated as a delimited list of paths to be added to the default system include path list. The delimiter is the platform dependent delimiter, as used in the PATH environment variable. Empty components in the environment variable are ignored. C_INCLUDE_PATH, OBJC_INCLUDE_PATH, CPLUS_INCLUDE_PATH, OBJCPLUS_INCLUDE_PATH These environment variables specify additional paths, as for CPATH, which are only used when processing the appropriate language. MACOSX_DEPLOYMENT_TARGET If -mmacosx-version-min is unspecified, the default deployment target is read from this environment variable. This option only affects Darwin targets. BUGS To report bugs, please visit <https://github.com/llvm/llvm-project/issues/>. Most bug reports should include preprocessed source files (use the -E option) and the full output of the compiler, along with information to reproduce. SEE ALSO as(1), ld(1) AUTHOR Maintained by the Clang / LLVM Team (<http://clang.llvm.org>) COPYRIGHT 2007-2024, The Clang Team 11 January 28, 2024 CLANG(1)
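As a sketch of the preprocessor options in combination (the macro, header, and file names are hypothetical):

```shell
# Predefine VERSION, force-include config.h at the top of the
# translation unit, and add ./include to the header search path;
# -E stops after preprocessing and prints the result:
clang -DVERSION=2 -include config.h -Iinclude -E main.c
```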
dsmemberutil
dsmemberutil is a command-line utility that implements the membership API calls.

FLAGS

A list of flags and their descriptions:

    -h  Lists the options for calling dsmemberutil.
    -v  Causes dsmemberutil to operate in verbose mode.

COMMANDS

The action of each command is described below:

    getuuid -ugUGsS value
        Takes any of the options and returns the associated UUID.
    getid -UGsSX value
        Takes any of the options and returns the associated UID or GID, depending on the option provided.
    getsid -ugUGX value
        Takes any of the options and returns the associated SID.
    checkmembership -uUxs param -gGXS param
        Reports whether the user or group identified by the given options is a member of the group.
    flushcache
        Flushes the current membership cache.

Legacy commands such as dumpstate and statistics are gone. See odutil(1) for show cache and statistics operations.

The following options are available. In some cases -x/-X and -s/-S can be used synonymously due to the nature of the value.

    -u uid   Use the user with UID uid.
    -U name  Use the user with the given name.
    -s sid   Use the user with SID sid.
    -x uuid  Use the user with UUID uuid.
    -g gid   Use the group with GID gid.
    -G name  Use the group with the given name.
    -S sid   Use the group with SID sid.
    -X uuid  Use the group with UUID uuid.
dsmemberutil – various operations for the membership APIs, including state dump, check memberships, UUIDs, etc.
dsmemberutil [-v] [-h] command [options]
Get a user's uuid: % dsmemberutil getuuid -u 501 EEA4F2F6-B268-49E7-9C6F-E3C4A37DA4FD Get a group's uuid % dsmemberutil getuuid -g 0 ABCDEFAB-CDEF-ABCD-EFAB-CDEF00000000 Get a user's or group's id from a uuid % dsmemberutil getid -X ABCDEFAB-CDEF-ABCD-EFAB-CDEF0000000C gid: 12 Check a user's membership in a group (using UID and GID) % dsmemberutil checkmembership -u 501 -g 0 user is not a member of the group Check a user's membership in a group (using names) % dsmemberutil checkmembership -U root -G wheel user is a member of the group SEE ALSO odutil(1), dseditgroup(1), dscacheutil(1) Darwin January 1, 2007 Darwin
podselect
podselect will read the given input files looking for pod documentation and will print out (in raw pod format) all sections that match one or more of the given section specifications. If no section specifications are given, then all pod sections encountered are output. podselect invokes the podselect() function exported by Pod::Select. Please see "podselect()" in Pod::Select for more details.

SEE ALSO
    Pod::Parser and Pod::Select

AUTHOR
    Brad Appleton <bradapp@enteract.com>
    Please report bugs using <http://rt.cpan.org>.
    Based on code for Pod::Text::pod2text(1) written by Tom Christiansen <tchrist@mox.perl.com>

perl v5.30.3 2024-04-13 PODSELECT(1)
podselect - print selected sections of pod documentation on standard output
podselect [-help] [-man] [-section section-spec] [file ...] OPTIONS AND ARGUMENTS -help Print a brief help message and exit. -man Print the manual page and exit. -section section-spec Specify a section to include in the output. See "SECTION SPECIFICATIONS" in Pod::Parser for the format to use for section-spec. This option may be given multiple times on the command line. file The pathname of a file from which to select sections of pod documentation (defaults to standard input).
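For example, to extract just two sections of a module's pod documentation (the module file name is a placeholder):

```shell
# Print only the SYNOPSIS and DESCRIPTION sections, in raw pod format;
# -section may be repeated to select multiple sections:
podselect -section SYNOPSIS -section DESCRIPTION MyModule.pm
```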
whereis
The whereis utility checks the standard binary and manual page directories for the specified programs, printing out the paths of any it finds. The supplied program names are first stripped of leading path name components and any single trailing extension added by gzip(1), compress(1), or bzip2(1). The default path searched is the string returned by the sysctl(8) utility for the “user.cs_path” string, with /usr/libexec and the current user's $PATH appended. Manual pages are searched by default along the $MANPATH.

The following options are available:

    -B  Specify directories to search for binaries. Requires the -f option.
    -M  Specify directories to search for manual pages. Requires the -f option.
    -a  Report all matches instead of only the first of each requested type.
    -b  Search for binaries.
    -f  Delimits the list of directories after the -B, -M, or -S options, and indicates the beginning of the program list.
    -m  Search for manual pages.
    -q  (“quiet”) Suppress the output of the utility name in front of the normal output line. This can be handy for use in a backquote substitution on a shell command line; see EXAMPLES.
    -u  Search for “unusual” entries. A file is said to be unusual if it does not have at least one entry of each requested type. Only the name of the unusual entry is printed.
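The command-substitution use of -q mentioned above might look like this (assuming a binary match for ls exists on the search path):

```shell
# Without -q the output would begin with "ls:", breaking the
# substitution; -b restricts the search to binaries:
ls -l "$(whereis -q -b ls)"
```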
whereis – locate programs
whereis [-abmqu] [-BM dir ... -f] program ...
The following finds all utilities under /usr/bin that do not have documentation: whereis -m -u /usr/bin/* SEE ALSO find(1), locate(1), man(1), which(1), sysctl(8) HISTORY The whereis utility appeared in 3.0BSD. This version re-implements the historical functionality that was lost in 4.4BSD. AUTHORS This implementation of the whereis command was written by Jörg Wunsch. BUGS This re-implementation of the whereis utility is not bug-for-bug compatible with historical versions. It is believed to be compatible with the version that was shipping with FreeBSD 2.2 through FreeBSD 4.5 though. macOS 14.5 August 22, 2002 macOS 14.5
tccutil
The tccutil command manages the privacy database, which stores decisions the user has made about whether apps may access personal data. One command is currently supported: reset Reset all decisions for the specified service, causing apps to prompt again the next time they access the service. If a bundle identifier is specified, the service will be reset for that bundle only.
tccutil – manage the privacy database
tccutil command service [bundle_id]
To reset all decisions about whether apps may access the address book: tccutil reset AddressBook tccutil reset All com.apple.Terminal Darwin April 3, 2012 Darwin
find
The find utility recursively descends the directory tree for each path listed, evaluating an expression (composed of the “primaries” and “operands” listed below) in terms of each file in the tree. The options are as follows: -E Interpret regular expressions following the -regex and -iregex primaries as extended (modern) regular expressions rather than basic regular expressions (BRE's). The re_format(7) manual page fully describes both formats. -H Cause the file information and file type (see stat(2)) returned for each symbolic link specified on the command line to be those of the file referenced by the link, not the link itself. If the referenced file does not exist, the file information and type will be for the link itself. File information of all symbolic links not on the command line is that of the link itself. -L Cause the file information and file type (see stat(2)) returned for each symbolic link to be those of the file referenced by the link, not the link itself. If the referenced file does not exist, the file information and type will be for the link itself. This option is equivalent to the deprecated -follow primary. -P Cause the file information and file type (see stat(2)) returned for each symbolic link to be those of the link itself. This is the default. -X Permit find to be safely used in conjunction with xargs(1). If a file name contains any of the delimiting characters used by xargs(1), a diagnostic message is displayed on standard error, and the file is skipped. The delimiting characters include single (“ ' ”) and double (“ " ”) quotes, backslash (“\”), space, tab and newline characters. However, you may wish to consider the -print0 primary in conjunction with “xargs -0” as an effective alternative. -d Cause find to perform a depth-first traversal. This option is a BSD-specific equivalent of the -depth primary specified by IEEE Std 1003.1-2001 (“POSIX.1”). Refer to its description under PRIMARIES for more information. 
-f path
    Add path to the list of paths that will be recursed into. This is useful when path begins with a character that would otherwise be interpreted as an expression, namely “!”, “(” and “-”.

-s  Cause find to traverse the file hierarchies in lexicographical order, i.e., alphabetical order within each directory. Note: ‘find -s’ and ‘find | sort’ may give different results. For example, ‘find -s’ puts a directory ‘foo’ with all its contents before a directory ‘foo.’ but ‘find | sort’ puts the directory name ‘foo.’ before any string like ‘foo/bar’ because ‘.’ goes before ‘/’ in ASCII. In locales other than C, results may vary more due to collation differences.

-x  Prevent find from descending into directories that have a device number different than that of the file from which the descent began. This option is equivalent to the deprecated -xdev primary.

PRIMARIES

All primaries which take a numeric argument allow the number to be preceded by a plus sign (“+”) or a minus sign (“-”). A preceding plus sign means “more than n”, a preceding minus sign means “less than n” and neither means “exactly n”.

-Bmin n
    True if the difference between the time of a file's inode creation and the time find was started, rounded up to the next full minute, is n minutes.

-Bnewer file
    Same as -newerBm.

-Btime n[smhdw]
    If no units are specified, this primary evaluates to true if the difference between the time of a file's inode creation and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods. If units are specified, this primary evaluates to true if the difference between the time of a file's inode creation and the time find was started is exactly n units. Please refer to the -atime primary description for information on supported time units.

-acl
    May be used in conjunction with other primaries to locate files with extended ACLs. See acl(3) for more information.
-amin [-|+]n True if the difference between the file last access time and the time find was started, rounded up to the next full minute, is more than n (+n), less than n (-n), or exactly n minutes ago. -anewer file Same as -neweram. -atime n[smhdw] If no units are specified, this primary evaluates to true if the difference between the file last access time and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods. If units are specified, this primary evaluates to true if the difference between the file last access time and the time find was started is exactly n units. Possible time units are as follows: s second m minute (60 seconds) h hour (60 minutes) d day (24 hours) w week (7 days) Any number of units may be combined in one -atime argument, for example, “-atime -1h30m”. Units are probably only useful when used in conjunction with the + or - modifier. -cmin [-|+]n True if the difference between the time of last change of file status information and the time find was started, rounded up to the next full minute, is more than n (+n), less than n (-n), or exactly n minutes ago. -cnewer file Same as -newercm. -ctime n[smhdw] If no units are specified, this primary evaluates to true if the difference between the time of last change of file status information and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods. If units are specified, this primary evaluates to true if the difference between the time of last change of file status information and the time find was started is exactly n units. Please refer to the -atime primary description for information on supported time units. -d Non-portable, BSD-specific version of depth. GNU find implements this as a primary in mistaken emulation of FreeBSD find. -delete Delete found files and/or directories. Always returns true. This executes from the current working directory as find recurses down the tree. 
It will not attempt to delete a filename with a “/” character in its pathname relative to “.” for security reasons. Depth-first traversal processing is implied by this option. The -delete primary will fail to delete a directory if it is not empty. Following symlinks is incompatible with this option. -depth Always true; same as the non-portable -d option. Cause find to perform a depth-first traversal, i.e., directories are visited in post-order and all entries in a directory will be acted on before the directory itself. By default, find visits directories in pre-order, i.e., before their contents. Note, the default is not a breadth-first traversal. The -depth primary can be useful when find is used with cpio(1) to process files that are contained in directories with unusual permissions. It ensures that you have write permission while you are placing files in a directory, then sets the directory's permissions as the last thing. -depth n True if the depth of the file relative to the starting point of the traversal is n. -empty True if the current file or directory is empty. -exec utility [argument ...] ; True if the program named utility returns a zero value as its exit status. Optional arguments may be passed to the utility. The expression must be terminated by a semicolon (“;”). If you invoke find from a shell you may need to quote the semicolon if the shell would otherwise treat it as a control operator. If the string “{}” appears anywhere in the utility name or the arguments it is replaced by the pathname of the current file. Utility will be executed from the directory from which find was executed. Utility and arguments are not subject to the further expansion of shell patterns and constructs. -exec utility [argument ...] {} + Same as -exec, except that “{}” is replaced with as many pathnames as possible for each invocation of utility. This behaviour is similar to that of xargs(1). 
The primary always returns true; if at least one invocation of utility returns a non-zero exit status, find will return a non-zero exit status. -execdir utility [argument ...] ; The -execdir primary is identical to the -exec primary with the exception that utility will be executed from the directory that holds the current file. The filename substituted for the string “{}” is not qualified. -execdir utility [argument ...] {} + Same as -execdir, except that “{}” is replaced with as many pathnames as possible for each invocation of utility. This behaviour is similar to that of xargs(1). The primary always returns true; if at least one invocation of utility returns a non-zero exit status, find will return a non-zero exit status. -flags [-|+]flags,notflags The flags are specified using symbolic names (see chflags(1)). Those with the "no" prefix (except "nodump") are said to be notflags. Flags in flags are checked to be set, and flags in notflags are checked to be not set. Note that this is different from -perm, which only allows the user to specify mode bits that are set. If flags are preceded by a dash (“-”), this primary evaluates to true if at least all of the bits in flags and none of the bits in notflags are set in the file's flags bits. If flags are preceded by a plus (“+”), this primary evaluates to true if any of the bits in flags is set in the file's flags bits, or any of the bits in notflags is not set in the file's flags bits. Otherwise, this primary evaluates to true if the bits in flags exactly match the file's flags bits, and none of the flags bits match those of notflags. -fstype type True if the file is contained in a file system of type type. The lsvfs(1) command can be used to find out the types of file systems that are available on the system. In addition, there are two pseudo-types, “local” and “rdonly”. 
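As a sketch of the batching “{} +” form of -exec described above (the file pattern is illustrative):

```shell
# Remove editor backup files, passing as many pathnames as possible
# to each rm invocation, xargs-style; quoting '{}' and the pattern
# protects them from the shell:
find . -name '*.bak' -type f -exec rm -- '{}' +
```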
The former matches any file system physically mounted on the system where the find is being executed and the latter matches any file system which is mounted read-only. -gid gname The same thing as -group gname for compatibility with GNU find. GNU find imposes a restriction that gname is numeric, while find does not. -group gname True if the file belongs to the group gname. If gname is numeric and there is no such group name, then gname is treated as a group ID. -ignore_readdir_race Ignore errors because a file or a directory is deleted after reading the name from a directory. This option does not affect errors occurring on starting points. -ilname pattern Like -lname, but the match is case insensitive. This is a GNU find extension. -iname pattern Like -name, but the match is case insensitive. -inum n True if the file has inode number n. -ipath pattern Like -path, but the match is case insensitive. -iregex pattern Like -regex, but the match is case insensitive. -iwholename pattern The same thing as -ipath, for GNU find compatibility. -links n True if the file has n links. -lname pattern Like -name, but the contents of the symbolic link are matched instead of the file name. Note that this only matches broken symbolic links if symbolic links are being followed. This is a GNU find extension. -ls This primary always evaluates to true. The following information for the current file is written to standard output: its inode number, size in 512-byte blocks, file permissions, number of hard links, owner, group, size in bytes, last modification time, and pathname. If the file is a block or character special file, the device number will be displayed instead of the size in bytes. If the file is a symbolic link, the pathname of the linked-to file will be displayed preceded by “->”. The format is identical to that produced by “ls -dgils”. -maxdepth n Always true; descend at most n directory levels below the command line arguments. 
If any -maxdepth primary is specified, it applies to the entire expression even if it would not normally be evaluated. “-maxdepth 0” limits the whole search to the command line arguments. -mindepth n Always true; do not apply any tests or actions at levels less than n. If any -mindepth primary is specified, it applies to the entire expression even if it would not normally be evaluated. “-mindepth 1” processes all but the command line arguments. -mmin [-|+]n True if the difference between the file last modification time and the time find was started, rounded up to the next full minute, is more than n (+n), less than n (-n), or exactly n minutes ago. -mnewer file Same as -newer. -mount The same thing as -xdev, for GNU find compatibility. -mtime n[smhdw] If no units are specified, this primary evaluates to true if the difference between the file last modification time and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods. If units are specified, this primary evaluates to true if the difference between the file last modification time and the time find was started is exactly n units. Please refer to the -atime primary description for information on supported time units. -name pattern True if the last component of the pathname being examined matches pattern. Special shell pattern matching characters (“[”, “]”, “*”, and “?”) may be used as part of pattern. These characters may be matched explicitly by escaping them with a backslash (“\”). -newer file True if the current file has a more recent last modification time than file. -newerXY file True if the current file has a more recent last access time (X=a), inode creation time (X=B), change time (X=c), or modification time (X=m) than the last access time (Y=a), inode creation time (Y=B), change time (Y=c), or modification time (Y=m) of file. In addition, if Y=t, then file is instead interpreted as a direct date specification of the form understood by ISO8601 or RFC822. 
Note that -newermm is equivalent to -newer. -nogroup True if the file belongs to an unknown group. -noignore_readdir_race Turn off the effect of -ignore_readdir_race. This is default behaviour. -noleaf This option is for GNU find compatibility. In GNU find it disables an optimization not relevant to find, so it is ignored. -nouser True if the file belongs to an unknown user. -ok utility [argument ...] ; The -ok primary is identical to the -exec primary with the exception that find requests user affirmation for the execution of the utility by printing a message to the terminal and reading a response. If the response is not affirmative (‘y’ in the “POSIX” locale), the command is not executed and the value of the -ok expression is false. -okdir utility [argument ...] ; The -okdir primary is identical to the -execdir primary with the same exception as described for the -ok primary. -path pattern True if the pathname being examined matches pattern. Special shell pattern matching characters (“[”, “]”, “*”, and “?”) may be used as part of pattern. These characters may be matched explicitly by escaping them with a backslash (“\”). Slashes (“/”) are treated as normal characters and do not have to be matched explicitly. -perm [-|+]mode The mode may be either symbolic (see chmod(1)) or an octal number. If the mode is symbolic, a starting value of zero is assumed and the mode sets or clears permissions without regard to the process' file mode creation mask. If the mode is octal, only bits 07777 (S_ISUID | S_ISGID | S_ISTXT | S_IRWXU | S_IRWXG | S_IRWXO) of the file's mode bits participate in the comparison. If the mode is preceded by a dash (“-”), this primary evaluates to true if at least all of the bits in the mode are set in the file's mode bits. If the mode is preceded by a plus (“+”), this primary evaluates to true if any of the bits in the mode are set in the file's mode bits. 
Otherwise, this primary evaluates to true if the bits in the mode exactly match the file's mode bits. Note, the first character of a symbolic mode may not be a dash (“-”). -print This primary always evaluates to true. It prints the pathname of the current file to standard output. If none of -exec, -ls, -print, -print0, or -ok is specified, the given expression shall be effectively replaced by ( given expression ) -print. -print0 This primary always evaluates to true. It prints the pathname of the current file to standard output, followed by an ASCII NUL character (character code 0). -prune This primary always evaluates to true. It causes find to not descend into the current file. Note, the -prune primary has no effect if the -d option was specified. -quit Causes find to terminate immediately. -regex pattern True if the whole path of the file matches pattern using regular expression. To match a file named “./foo/xyzzy”, you can use the regular expression “.*/[xyz]*” or “.*/foo/.*”, but not “xyzzy” or “/foo/”. -samefile name True if the file is a hard link to name. If the command option -L is specified, it is also true if the file is a symbolic link and points to name. -size n[ckMGTP] True if the file's size, rounded up, in 512-byte blocks is n. If n is followed by a c, then the primary is true if the file's size is n bytes (characters). Similarly if n is followed by a scale indicator then the file's size is compared to n scaled as: k kilobytes (1024 bytes) M megabytes (1024 kilobytes) G gigabytes (1024 megabytes) T terabytes (1024 gigabytes) P petabytes (1024 terabytes) -sparse True if the current file is sparse, i.e. has fewer blocks allocated than expected based on its size in bytes. This might also match files that have been compressed by the filesystem. -type t True if the file is of the specified type. 
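For instance, using the dash form of -perm with an octal mode (all bits of 0644 must be set; the directory contents are hypothetical):

```shell
# Regular files whose mode includes at least rw-r--r--:
find . -type f -perm -644 -print
```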
Possible file types are as follows: b block special c character special d directory f regular file l symbolic link p FIFO s socket -uid uname The same thing as -user uname for compatibility with GNU find. GNU find imposes a restriction that uname is numeric, while find does not. -user uname True if the file belongs to the user uname. If uname is numeric and there is no such user name, then uname is treated as a user ID. -wholename pattern The same thing as -path, for GNU find compatibility. -xattr True if the file has any extended attributes. -xattrname name True if the file has an extended attribute with the specified name. OPERATORS The primaries may be combined using the following operators. The operators are listed in order of decreasing precedence. ( expression ) This evaluates to true if the parenthesized expression evaluates to true. ! expression -not expression This is the unary NOT operator. It evaluates to true if the expression is false. -false Always false. -true Always true. expression -and expression expression expression The -and operator is the logical AND operator. As it is implied by the juxtaposition of two expressions it does not have to be specified. The expression evaluates to true if both expressions are true. The second expression is not evaluated if the first expression is false. expression -or expression The -or operator is the logical OR operator. The expression evaluates to true if either the first or the second expression is true. The second expression is not evaluated if the first expression is true. All operands and primaries must be separate arguments to find. Primaries which themselves take arguments expect each argument to be a separate argument to find. ENVIRONMENT The LANG, LC_ALL, LC_COLLATE, LC_CTYPE, LC_MESSAGES and LC_TIME environment variables affect the execution of the find utility as described in environ(7).
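The -perm, -type, and -print0 primaries and the operator precedence rules above can be illustrated with a short self-contained session (the /tmp/findex tree is invented for this sketch):

```shell
# Illustrative only; the /tmp/findex paths are hypothetical.
mkdir -p /tmp/findex/sub
touch /tmp/findex/a.c /tmp/findex/sub/b.txt

# -perm with a leading dash: true if at least these bits are set.
find /tmp/findex -type f -perm -u=r -print

# Implicit -and binds tighter than -or; parentheses must be escaped
# from the shell.
find /tmp/findex \( -name '*.c' -or -name '*.txt' \) -print

# -print0 pairs with xargs -0 to handle arbitrary filenames safely.
find /tmp/findex -type f -print0 | xargs -0 ls -l

rm -r /tmp/findex
```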
find – walk a file hierarchy
find [-H | -L | -P] [-EXdsx] [-f path] path ... [expression] find [-H | -L | -P] [-EXdsx] -f path [path ...] [expression]
The following examples are shown as given to the shell: find / \! -name "*.c" -print Print out a list of all the files whose names do not end in .c. find / -newer ttt -user wnj -print Print out a list of all the files owned by user “wnj” that are newer than the file ttt. find / \! \( -newer ttt -user wnj \) -print Print out a list of all the files which are not both newer than ttt and owned by “wnj”. find / \( -newer ttt -or -user wnj \) -print Print out a list of all the files that are either owned by “wnj” or that are newer than ttt. find / -newerct '1 minute ago' -print Print out a list of all the files whose inode change time is more recent than the current time minus one minute. find / -type f -exec echo {} \; Use the echo(1) command to print out a list of all the files. find -L /usr/ports/packages -type l -exec rm -- {} + Delete all broken symbolic links in /usr/ports/packages. find /usr/src -name CVS -prune -o -depth +6 -print Find files and directories that are at least seven levels deep in the working directory /usr/src. find /usr/src -name CVS -prune -o -mindepth 7 -print Is not equivalent to the previous example, since -prune is not evaluated below level seven. COMPATIBILITY The -follow primary is deprecated; the -L option should be used instead. See the STANDARDS section below for details. SEE ALSO chflags(1), chmod(1), locate(1), lsvfs(1), whereis(1), which(1), xargs(1), stat(2), acl(3), fts(3), getgrent(3), getpwent(3), strmode(3), ascii(7), re_format(7), symlink(7) STANDARDS The find utility syntax is a superset of the syntax specified by the IEEE Std 1003.1-2001 (“POSIX.1”) standard. All the single character options except -H and -L as well as -amin, -anewer, -cmin, -cnewer, -delete, -empty, -fstype, -iname, -inum, -iregex, -ls, -maxdepth, -mindepth, -mmin, -not, -path, -print0, -regex, -sparse and all of the -B* birthtime related primaries are extensions to IEEE Std 1003.1-2001 (“POSIX.1”). 
Historically, the -d, -L and -x options were implemented using the primaries -depth, -follow, and -xdev. These primaries always evaluated to true. As they were really global variables that took effect before the traversal began, some legal expressions could have unexpected results. An example is the expression -print -o -depth. As -print always evaluates to true, the standard order of evaluation implies that -depth would never be evaluated. This is not the case. The operator -or was implemented as -o, and the operator -and was implemented as -a. Historic implementations of the -exec and -ok primaries did not replace the string “{}” in the utility name or the utility arguments if it had preceding or following non-whitespace characters. This version replaces it no matter where in the utility name or arguments it appears. The -E option was inspired by the equivalent grep(1) and sed(1) options. HISTORY A simple find command appeared in Version 1 AT&T UNIX and was removed in Version 3 AT&T UNIX. It was rewritten for Version 5 AT&T UNIX and was later enhanced for the Programmer's Workbench (PWB). These changes were later incorporated in Version 7 AT&T UNIX. BUGS The special characters used by find are also special characters to many shell programs. In particular, the characters “*”, “[”, “]”, “?”, “(”, “)”, “!”, “\” and “;” may have to be escaped from the shell. As there is no delimiter separating options and file names or file names and the expression, it is difficult to specify files named -xdev or !. These problems are handled by the -f option and the getopt(3) “--” construct. The -delete primary does not interact well with other options that change the file system tree traversal. The -mindepth and -maxdepth primaries are actually global options (as documented above). They should probably be replaced by options which look like options. macOS 14.5 January 23, 2023 macOS 14.5
jstat
The jstat command displays performance statistics for an instrumented Java HotSpot VM. The target JVM is identified by its virtual machine identifier, or vmid option. The jstat command supports two types of options, general options and output options. General options cause the jstat command to display simple usage and version information. Output options determine the content and format of the statistical output. All options and their functionality are subject to change or removal in future releases. GENERAL OPTIONS If you specify one of the general options, then you can't specify any other option or parameter. -help Displays a help message. -options Displays a list of static options. See Output Options for the jstat Command. OUTPUT OPTIONS FOR THE JSTAT COMMAND If you don't specify a general option, then you can specify output options. Output options determine the content and format of the jstat command's output, and consist of a single statOption, plus any of the other output options (-h, -t, and -J). The statOption must come first. Output is formatted as a table, with columns that are separated by spaces. A header row with titles describes the columns. Use the -h option to set the frequency at which the header is displayed. Column header names are consistent among the different options. In general, if two options provide a column with the same name, then the data source for the two columns is the same. Use the -t option to display a time-stamp column, labeled Timestamp as the first column of output. The Timestamp column contains the elapsed time, in seconds, since the target JVM started. The resolution of the time stamp is dependent on various factors and is subject to variation due to delayed thread scheduling on heavily loaded systems. Use the interval and count parameters to determine how frequently and how many times, respectively, the jstat command displays its output. 
Note: Don't write scripts to parse the jstat command's output because the format might change in future releases. If you write scripts that parse the jstat command output, then expect to modify them for future releases of this tool. -statOption Determines the statistics information that the jstat command displays. The following lists the available options. Use the -options general option to display the list of options for a particular platform installation. See Stat Options and Output. class: Displays statistics about the behavior of the class loader. compiler: Displays statistics about the behavior of the Java HotSpot VM Just-in-Time compiler. gc: Displays statistics about the behavior of the garbage collected heap. gccapacity: Displays statistics about the capacities of the generations and their corresponding spaces. gccause: Displays a summary about garbage collection statistics (same as -gcutil), with the cause of the last and current (when applicable) garbage collection events. gcnew: Displays statistics about the behavior of the new generation. gcnewcapacity: Displays statistics about the sizes of the new generations and their corresponding spaces. gcold: Displays statistics about the behavior of the old generation and metaspace statistics. gcoldcapacity: Displays statistics about the sizes of the old generation. gcmetacapacity: Displays statistics about the sizes of the metaspace. gcutil: Displays a summary about garbage collection statistics. printcompilation: Displays Java HotSpot VM compilation method statistics. -JjavaOption Passes javaOption to the Java application launcher. For example, -J-Xms48m sets the startup memory to 48 MB. For a complete list of options, see java. STAT OPTIONS AND OUTPUT The following information summarizes the columns that the jstat command outputs for each statOption. -class option Class loader statistics. Loaded: Number of classes loaded. Bytes: Number of KB loaded. Unloaded: Number of classes unloaded. 
Bytes: Number of KB unloaded. Time: Time spent performing class loading and unloading operations. -compiler option Java HotSpot VM Just-in-Time compiler statistics. Compiled: Number of compilation tasks performed. Failed: Number of compilation tasks that failed. Invalid: Number of compilation tasks that were invalidated. Time: Time spent performing compilation tasks. FailedType: Compile type of the last failed compilation. FailedMethod: Class name and method of the last failed compilation. -gc option Garbage collected heap statistics. S0C: Current survivor space 0 capacity (KB). S1C: Current survivor space 1 capacity (KB). S0U: Survivor space 0 utilization (KB). S1U: Survivor space 1 utilization (KB). EC: Current eden space capacity (KB). EU: Eden space utilization (KB). OC: Current old space capacity (KB). OU: Old space utilization (KB). MC: Metaspace Committed Size (KB). MU: Metaspace utilization (KB). CCSC: Compressed class committed size (KB). CCSU: Compressed class space used (KB). YGC: Number of young generation garbage collection (GC) events. YGCT: Young generation garbage collection time. FGC: Number of full GC events. FGCT: Full garbage collection time. GCT: Total garbage collection time. -gccapacity option Memory pool generation and space capacities. NGCMN: Minimum new generation capacity (KB). NGCMX: Maximum new generation capacity (KB). NGC: Current new generation capacity (KB). S0C: Current survivor space 0 capacity (KB). S1C: Current survivor space 1 capacity (KB). EC: Current eden space capacity (KB). OGCMN: Minimum old generation capacity (KB). OGCMX: Maximum old generation capacity (KB). OGC: Current old generation capacity (KB). OC: Current old space capacity (KB). MCMN: Minimum metaspace capacity (KB). MCMX: Maximum metaspace capacity (KB). MC: Metaspace Committed Size (KB). CCSMN: Compressed class space minimum capacity (KB). CCSMX: Compressed class space maximum capacity (KB). CCSC: Compressed class committed size (KB). 
YGC: Number of young generation GC events. FGC: Number of full GC events. -gccause option This option displays the same summary of garbage collection statistics as the -gcutil option, but includes the causes of the last garbage collection event and (when applicable), the current garbage collection event. In addition to the columns listed for -gcutil, this option adds the following columns: LGCC: Cause of last garbage collection GCC: Cause of current garbage collection -gcnew option New generation statistics. S0C: Current survivor space 0 capacity (KB). S1C: Current survivor space 1 capacity (KB). S0U: Survivor space 0 utilization (KB). S1U: Survivor space 1 utilization (KB). TT: Tenuring threshold. MTT: Maximum tenuring threshold. DSS: Desired survivor size (KB). EC: Current eden space capacity (KB). EU: Eden space utilization (KB). YGC: Number of young generation GC events. YGCT: Young generation garbage collection time. -gcnewcapacity option New generation space size statistics. NGCMN: Minimum new generation capacity (KB). NGCMX: Maximum new generation capacity (KB). NGC: Current new generation capacity (KB). S0CMX: Maximum survivor space 0 capacity (KB). S0C: Current survivor space 0 capacity (KB). S1CMX: Maximum survivor space 1 capacity (KB). S1C: Current survivor space 1 capacity (KB). ECMX: Maximum eden space capacity (KB). EC: Current eden space capacity (KB). YGC: Number of young generation GC events. FGC: Number of full GC events. -gcold option Old generation size statistics. MC: Metaspace Committed Size (KB). MU: Metaspace utilization (KB). CCSC: Compressed class committed size (KB). CCSU: Compressed class space used (KB). OC: Current old space capacity (KB). OU: Old space utilization (KB). YGC: Number of young generation GC events. FGC: Number of full GC events. FGCT: Full garbage collection time. GCT: Total garbage collection time. -gcoldcapacity option Old generation statistics. OGCMN: Minimum old generation capacity (KB). 
OGCMX: Maximum old generation capacity (KB). OGC: Current old generation capacity (KB). OC: Current old space capacity (KB). YGC: Number of young generation GC events. FGC: Number of full GC events. FGCT: Full garbage collection time. GCT: Total garbage collection time. -gcmetacapacity option Metaspace size statistics. MCMN: Minimum metaspace capacity (KB). MCMX: Maximum metaspace capacity (KB). MC: Metaspace Committed Size (KB). CCSMN: Compressed class space minimum capacity (KB). CCSMX: Compressed class space maximum capacity (KB). YGC: Number of young generation GC events. FGC: Number of full GC events. FGCT: Full garbage collection time. GCT: Total garbage collection time. -gcutil option Summary of garbage collection statistics. S0: Survivor space 0 utilization as a percentage of the space's current capacity. S1: Survivor space 1 utilization as a percentage of the space's current capacity. E: Eden space utilization as a percentage of the space's current capacity. O: Old space utilization as a percentage of the space's current capacity. M: Metaspace utilization as a percentage of the space's current capacity. CCS: Compressed class space utilization as a percentage. YGC: Number of young generation GC events. YGCT: Young generation garbage collection time. FGC: Number of full GC events. FGCT: Full garbage collection time. GCT: Total garbage collection time. -printcompilation option Java HotSpot VM compiler method statistics. Compiled: Number of compilation tasks performed by the most recently compiled method. Size: Number of bytes of byte code of the most recently compiled method. Type: Compilation type of the most recently compiled method. Method: Class name and method name identifying the most recently compiled method. Class name uses a slash (/) instead of a dot (.) as a name space separator. The method name is the method within the specified class. The format for these two fields is consistent with the HotSpot -XX:+PrintCompilation option. 
VIRTUAL MACHINE IDENTIFIER The syntax of the vmid string corresponds to the syntax of a URI: [protocol:][//]lvmid[@hostname[:port]/servername] The vmid string can vary from a simple integer that represents a local JVM to a more complex construction that specifies a communications protocol, port number, and other implementation-specific values. protocol The communications protocol. If the protocol value is omitted and a host name isn't specified, then the default protocol is a platform-specific optimized local protocol. If the protocol value is omitted and a host name is specified, then the default protocol is rmi. lvmid The local virtual machine identifier for the target JVM. The lvmid is a platform-specific value that uniquely identifies a JVM on a system. The lvmid is the only required component of a virtual machine identifier. The lvmid is typically, but not necessarily, the operating system's process identifier for the target JVM process. You can use the jps command to determine the lvmid, provided the JVM process is not running in a separate Docker container. You can also determine the lvmid on Linux and macOS platforms with the ps command, and on Windows with the Windows Task Manager. hostname A host name or IP address that indicates the target host. If the hostname value is omitted, then the target host is the local host. port The default port for communicating with the remote server. If the hostname value is omitted or the protocol value specifies an optimized, local protocol, then the port value is ignored. Otherwise, treatment of the port parameter is implementation-specific. For the default rmi protocol, the port value indicates the port number for the rmiregistry on the remote host. If the port value is omitted and the protocol value indicates rmi, then the default rmiregistry port (1099) is used. servername The treatment of the servername parameter depends on implementation. 
For the optimized local protocol, this field is ignored. For the rmi protocol, it represents the name of the RMI remote object on the remote host.
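Putting the vmid components together, some plausible forms look like the following (the process IDs and host name are hypothetical):

```shell
# Hypothetical pids/hosts, for illustration only.
jstat -gcutil 21891 250 7                        # lvmid only: local JVM
jstat -gcutil 40496@remote.domain 1000           # hostname given, so rmi is implied
jstat -gcutil rmi://40496@remote.domain:2020 1s  # explicit protocol and rmiregistry port
```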
jstat - monitor JVM statistics
Note: This command is experimental and unsupported.

jstat generalOptions
jstat outputOptions [-t] [-h lines] vmid [interval [count]]

generalOptions A single general command-line option. See General Options. outputOptions An option reported by the -options option. One or more output options that consist of a single statOption, plus any of the -t, -h, and -J options. See Output Options for the jstat Command. -t Displays a time-stamp column as the first column of output. The time stamp is the time since the start time of the target JVM. -h n Displays a column header every n samples (output rows), where n is a positive integer. The default value is 0, which displays the column header of the first row of data. vmid A virtual machine identifier, which is a string that indicates the target JVM. See Virtual Machine Identifier. interval The sampling interval in the specified units, seconds (s) or milliseconds (ms). Default units are milliseconds. This must be a positive integer. When specified, the jstat command produces its output at each interval. count The number of samples to display. The default value is infinity, which causes the jstat command to display statistics until the target JVM terminates or the jstat command is terminated. This value must be a positive integer.
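Combining the parameters above, a single invocation can mix a statOption with -t, -h, and a sampling schedule. A minimal sketch, assuming a local JVM with a hypothetical lvmid of 21891:

```shell
# Sample the garbage-collected heap every second, 30 times,
# with a Timestamp column and the header repeated every 10 rows.
# (21891 is a made-up lvmid; substitute one reported by jps.)
jstat -gc -t -h 10 21891 1s 30
```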
This section presents some examples of monitoring a local JVM with an lvmid of 21891. THE GCUTIL OPTION This example attaches to lvmid 21891 and takes 7 samples at 250 millisecond intervals and displays the output as specified by the -gcutil option. The output of this example shows that a young generation collection occurred between the third and fourth sample. The collection took 0.078 seconds and promoted objects from the eden space (E) to the old space (O), resulting in an increase of old space utilization from 66.80% to 68.19%. Before the collection, the survivor space was 97.02% utilized, but after this collection it's 91.03% utilized.

jstat -gcutil 21891 250 7
  S0     S1     E      O      M     CCS    YGC   YGCT   FGC   FGCT    GCT
  0.00  97.02  70.31  66.80  95.52  89.14    7  0.300     0  0.000  0.300
  0.00  97.02  86.23  66.80  95.52  89.14    7  0.300     0  0.000  0.300
  0.00  97.02  96.53  66.80  95.52  89.14    7  0.300     0  0.000  0.300
 91.03   0.00   1.98  68.19  95.89  91.24    8  0.378     0  0.000  0.378
 91.03   0.00  15.82  68.19  95.89  91.24    8  0.378     0  0.000  0.378
 91.03   0.00  17.80  68.19  95.89  91.24    8  0.378     0  0.000  0.378
 91.03   0.00  17.80  68.19  95.89  91.24    8  0.378     0  0.000  0.378

REPEAT THE COLUMN HEADER STRING This example attaches to lvmid 21891 and takes samples at 250 millisecond intervals and displays the output as specified by the -gcnew option. In addition, it uses the -h3 option to output the column header after every 3 lines of data. In addition to showing the repeating header string, this example shows that between the second and third samples, a young GC occurred. Its duration was 0.001 seconds. The collection found enough active data that the survivor space 0 utilization (S0U) would have exceeded the desired survivor size (DSS). As a result, objects were promoted to the old generation (not visible in this output), and the tenuring threshold (TT) was lowered from 31 to 2. Another collection occurs between the fifth and sixth samples. This collection found very few survivors and returned the tenuring threshold to 31. 
jstat -gcnew -h3 21891 250
 S0C    S1C    S0U    S1U  TT MTT  DSS     EC      EU     YGC   YGCT
 64.0   64.0    0.0   31.7 31  31  32.0   512.0   178.6   249  0.203
 64.0   64.0    0.0   31.7 31  31  32.0   512.0   355.5   249  0.203
 64.0   64.0   35.4    0.0  2  31  32.0   512.0    21.9   250  0.204
 S0C    S1C    S0U    S1U  TT MTT  DSS     EC      EU     YGC   YGCT
 64.0   64.0   35.4    0.0  2  31  32.0   512.0   245.9   250  0.204
 64.0   64.0   35.4    0.0  2  31  32.0   512.0   421.1   250  0.204
 64.0   64.0    0.0   19.0 31  31  32.0   512.0    84.4   251  0.204
 S0C    S1C    S0U    S1U  TT MTT  DSS     EC      EU     YGC   YGCT
 64.0   64.0    0.0   19.0 31  31  32.0   512.0   306.7   251  0.204

INCLUDE A TIME STAMP FOR EACH SAMPLE This example attaches to lvmid 21891 and takes 3 samples at 250 millisecond intervals. The -t option is used to generate a time stamp for each sample in the first column. The Timestamp column reports the elapsed time in seconds since the start of the target JVM. In addition, the -gcoldcapacity output shows the old generation capacity (OGC) and the old space capacity (OC) increasing as the heap expands to meet allocation or promotion demands. The old generation capacity (OGC) has grown from 11,696 KB to 13,820 KB after the eighty-first full garbage collection (FGC). The maximum capacity of the generation (and space) is 60,544 KB (OGCMX), so it still has room to expand.

Timestamp   OGCMN    OGCMX      OGC       OC      YGC  FGC   FGCT    GCT
    150.1  1408.0  60544.0  11696.0  11696.0     194   80  2.874  3.799
    150.4  1408.0  60544.0  13820.0  13820.0     194   81  2.938  3.863
    150.7  1408.0  60544.0  13820.0  13820.0     194   81  2.938  3.863

MONITOR INSTRUMENTATION FOR A REMOTE JVM This example attaches to lvmid 40496 on the system named remote.domain using the -gcutil option, with samples taken every second indefinitely. The lvmid is combined with the name of the remote host to construct a vmid of 40496@remote.domain. This vmid results in the use of the rmi protocol to communicate to the default jstatd server on the remote host. The jstatd server is located using the rmiregistry command on remote.domain that's bound to the default port of the rmiregistry command (port 1099). 
jstat -gcutil 40496@remote.domain 1000 ... output omitted JDK 22 2024 JSTAT(1)
zcat
The gzip program compresses and decompresses files using Lempel-Ziv coding (LZ77). If no files are specified, gzip will compress from standard input, or decompress to standard output. When in compression mode, each file will be replaced with another file with the suffix, set by the -S suffix option, added, if possible. In decompression mode, each file will be checked for existence, as will the file with the suffix added. Each file argument must contain a separate complete archive; when multiple files are indicated, each is decompressed in turn. In the case of gzcat the resulting data is then concatenated in the manner of cat(1). If invoked as gunzip then the -d option is enabled. If invoked as zcat or gzcat then both the -c and -d options are enabled. This version of gzip is also capable of decompressing files compressed using compress(1), bzip2(1), lzip, or xz(1).
gzip, gunzip, zcat – compression/decompression tool using Lempel-Ziv coding (LZ77)
gzip [-cdfhkLlNnqrtVv] [-S suffix] file [file [...]] gunzip [-cfhkLNqrtVv] [-S suffix] file [file [...]] zcat [-fhV] file [file [...]]
The following options are available: -1, --fast -2, -3, -4, -5, -6, -7, -8 -9, --best These options change the compression level used, with the -1 option being the fastest, with less compression, and the -9 option being the slowest, with optimal compression. The default compression level is 6. -c, --stdout, --to-stdout This option specifies that output will go to the standard output stream, leaving files intact. -d, --decompress, --uncompress This option selects decompression rather than compression. -f, --force This option turns on force mode. This allows files with multiple links, symbolic links to regular files, overwriting of pre-existing files, reading from or writing to a terminal, and when combined with the -c option, allowing non-compressed data to pass through unchanged. -h, --help This option prints a usage summary and exits. -k, --keep This option prevents gzip from deleting input files after (de)compression. -L, --license This option prints gzip license. -l, --list This option displays information about the file's compressed and uncompressed size, ratio, uncompressed name. With the -v option, it also displays the compression method, CRC, date and time embedded in the file. -N, --name This option causes the stored filename in the input file to be used as the output file. -n, --no-name This option stops the filename and timestamp from being stored in the output file. -q, --quiet With this option, no warnings or errors are printed. -r, --recursive This option is used to gzip the files in a directory tree individually, using the fts(3) library. -S suffix, --suffix suffix This option changes the default suffix from .gz to suffix. -t, --test This option will test compressed files for integrity. -V, --version This option prints the version of the gzip program. -v, --verbose This option turns on verbose mode, which prints the compression ratio for each file compressed. 
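A short session exercising several of these flags (the demo file name is invented for illustration):

```shell
# Hypothetical file for demonstration purposes.
printf 'hello\n' > /tmp/demo.txt

gzip -k /tmp/demo.txt        # -k keeps demo.txt alongside demo.txt.gz
gzip -l /tmp/demo.txt.gz     # list compressed/uncompressed sizes and ratio
gzip -t /tmp/demo.txt.gz     # integrity test; silent, exits 0 on success
zcat /tmp/demo.txt.gz        # decompress to stdout, archive left intact

rm /tmp/demo.txt /tmp/demo.txt.gz
```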
ENVIRONMENT If the environment variable GZIP is set, it is parsed as a white-space separated list of options handled before any options on the command line. Options on the command line will override anything in GZIP. EXIT STATUS The gzip utility exits 0 on success, 1 on errors, and 2 if a warning occurs. SIGNALS gzip responds to the following signals: SIGINFO Report progress to standard error. SEE ALSO bzip2(1), compress(1), xz(1), fts(3), zlib(3) HISTORY The gzip program was originally written by Jean-loup Gailly, licensed under the GNU Public Licence. Matthew R. Green wrote a simple front end for NetBSD 1.3 distribution media, based on the freely re-distributable zlib library. It was enhanced to be mostly feature-compatible with the original GNU gzip program for NetBSD 2.0. This implementation of gzip was ported based on the NetBSD gzip version 20181111, and first appeared in FreeBSD 7.0. AUTHORS This implementation of gzip was written by Matthew R. Green <mrg@eterna.com.au> with unpack support written by Xin LI <delphij@FreeBSD.org>. BUGS According to RFC 1952, the recorded file size is stored in a 32-bit integer, therefore, it cannot represent files larger than 4GB. This limitation also applies to the -l option of the gzip utility. macOS 14.5 January 7, 2019 macOS 14.5
unifdefall
The unifdef utility selectively processes conditional cpp(1) directives. It removes from a file both the directives and any additional text that they specify should be removed, while otherwise leaving the file alone. The unifdef utility acts on #if, #ifdef, #ifndef, #elif, #else, and #endif lines. A directive is only processed if the symbols specified on the command line are sufficient to allow unifdef to get a definite value for its control expression. If the result is false, the directive and the following lines under its control are removed. If the result is true, only the directive is removed. An #ifdef or #ifndef directive is passed through unchanged if its controlling symbol is not specified on the command line. Any #if or #elif control expression that has an unknown value or that unifdef cannot parse is passed through unchanged. By default, unifdef ignores #if and #elif lines with constant expressions; it can be told to process them by specifying the -k flag on the command line. It understands a commonly-used subset of the expression syntax for #if and #elif lines: integer constants, integer values of symbols defined on the command line, the defined() operator, the operators !, <, >, <=, >=, ==, !=, &&, ||, and parenthesized expressions. A kind of “short circuit” evaluation is used for the && operator: if either operand is definitely false then the result is false, even if the value of the other operand is unknown. Similarly, if either operand of || is definitely true then the result is true. In most cases, the unifdef utility does not distinguish between object-like macros (without arguments) and function-like macros (with arguments). If a macro is not explicitly defined, or is defined with the -D flag on the command line, its arguments are ignored. If a macro is explicitly undefined on the command line with the -U flag, it may not have any arguments since this leads to a syntax error. 
The unifdef utility understands just enough about C to know when one of the directives is inactive because it is inside a comment, or affected by a backslash-continued line. It spots unusually-formatted preprocessor directives and knows when the layout is too odd for it to handle. A script called unifdefall can be used to remove all conditional cpp(1) directives from a file. It uses unifdef -s and cpp -dM to get lists of all the controlling symbols and their definitions (or lack thereof), then invokes unifdef with appropriate arguments to process the file.
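The behaviour described above can be sketched with a small invented input file: a symbol given on the command line is resolved definitively, while an unmentioned symbol's directives pass through untouched.

```shell
# Hypothetical input demonstrating definite vs. unknown symbols.
cat > /tmp/demo.c <<'EOF'
#ifdef FOO
kept_when_foo_defined();
#else
removed_when_foo_defined();
#endif
#ifdef MYSTERY
passed_through_untouched();
#endif
EOF

# FOO is defined, so its #ifdef/#else/#endif are resolved and removed;
# MYSTERY is not mentioned, so its directives survive unchanged.
# Note: unifdef exits 1 when the output differs from the input.
unifdef -DFOO /tmp/demo.c
rm /tmp/demo.c
```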
unifdef, unifdefall – remove preprocessor conditionals from code
unifdef [-bBcdeKknsStV] [-Ipath] [-Dsym[=val]] [-Usym] [-iDsym[=val]] [-iUsym] ... [-o outfile] [infile] unifdefall [-Ipath] ... file
-Dsym=val Specify that a symbol is defined to a given value which is used when evaluating #if and #elif control expressions. -Dsym Specify that a symbol is defined to the value 1. -Usym Specify that a symbol is undefined. If the same symbol appears in more than one argument, the last occurrence dominates. -b Replace removed lines with blank lines instead of deleting them. Mutually exclusive with the -B option. -B Compress blank lines around a deleted section. Mutually exclusive with the -b option. -c If the -c flag is specified, then the operation of unifdef is complemented, i.e., the lines that would have been removed or blanked are retained and vice versa. -d Turn on printing of debugging messages. -e Because unifdef processes its input one line at a time, it cannot remove preprocessor directives that span more than one line. The most common example of this is a directive with a multi-line comment hanging off its right hand end. By default, if unifdef has to process such a directive, it will complain that the line is too obfuscated. The -e option changes the behaviour so that, where possible, such lines are left unprocessed instead of reporting an error. -K Always treat the result of && and || operators as unknown if either operand is unknown, instead of short-circuiting when unknown operands can't affect the result. This option is for compatibility with older versions of unifdef. -k Process #if and #elif lines with constant expressions. By default, sections controlled by such lines are passed through unchanged because they typically start “#if 0” and are used as a kind of comment to sketch out future or past development. It would be rude to strip them out, just as it would be for normal comments. -n Add #line directives to the output following any deleted lines, so that errors produced when compiling the output file correspond to line numbers in the input file. -o outfile Write output to the file outfile instead of the standard output. 
If outfile is the same as the input file, the output is written to a temporary file which is renamed into place when unifdef completes successfully. -s Instead of processing the input file as usual, this option causes unifdef to produce a list of symbols that appear in expressions that unifdef understands. It is useful in conjunction with the -dM option of cpp(1) for creating unifdef command lines. -S Like the -s option, but the nesting depth of each symbol is also printed. This is useful for working out the number of possible combinations of interdependent defined/undefined symbols. -t Disables parsing for C comments and line continuations, which is useful for plain text. -iDsym[=val] -iUsym Ignore #ifdefs. If your C code uses #ifdefs to delimit non-C lines, such as comments or code which is under construction, then you must tell unifdef which symbols are used for that purpose so that it will not try to parse comments and line continuations inside those #ifdefs. You can specify ignored symbols with -iDsym[=val] and -iUsym similar to -Dsym[=val] and -Usym above. -Ipath Specifies to unifdefall an additional place to look for #include files. This option is ignored by unifdef for compatibility with cpp(1) and to simplify the implementation of unifdefall. -V Print version details. The unifdef utility copies its output to stdout and will take its input from stdin if no file argument is given. The unifdef utility works nicely with the -Dsym option of diff(1). EXIT STATUS The unifdef utility exits 0 if the output is an exact copy of the input, 1 if not, and 2 if in trouble. DIAGNOSTICS Too many levels of nesting. Inappropriate #elif, #else or #endif. Obfuscated preprocessor control line. Premature EOF (with the line number of the most recent unterminated #if). EOF in comment. SEE ALSO cpp(1), diff(1) HISTORY The unifdef command appeared in 2.9BSD. ANSI C support was added in FreeBSD 4.7. AUTHORS The original implementation was written by Dave Yost ⟨Dave@Yost.com⟩. 
Tony Finch ⟨dot@dotat.at⟩ rewrote it to support ANSI C. BUGS Expression evaluation is very limited. Preprocessor control lines split across more than one physical line (because of comments or backslash-newline) cannot be handled in every situation. Trigraphs are not recognized. There is no support for symbols with different definitions at different points in the source file. The text-mode and ignore functionality does not correspond to modern cpp(1) behaviour. macOS 14.5 March 11, 2010 macOS 14.5
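As a worked example of the behaviour described above (a hedged sketch: the file name example.c and the symbols FOO and BAR are purely illustrative, and unifdef itself is only invoked if it is installed):

```shell
# Build a small input file with one conditional block (file name is illustrative)
printf '%s\n' \
    '#ifdef FOO' \
    'int x = 1;' \
    '#else' \
    'int x = 0;' \
    '#endif' > example.c

# If unifdef is available, resolve FOO as defined and BAR as undefined.
# Note unifdef exits 1 when the output differs from the input, 0 when identical.
if command -v unifdef >/dev/null 2>&1; then
    unifdef -DFOO -UBAR -o example.out.c example.c
    cat example.out.c
else
    echo "unifdef not installed; skipping"
fi
```

With FOO defined, only the `int x = 1;` branch survives; adding -b would replace the removed lines with blank lines instead of deleting them.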
null
jlink
The jlink tool links a set of modules, along with their transitive dependencies, to create a custom runtime image. Note: Developers are responsible for updating their custom runtime images. JLINK OPTIONS --add-modules mod [, mod...] Adds the named modules, mod, to the default set of root modules. The default set of root modules is empty. --bind-services Link service provider modules and their dependencies. -c={0|1|2} or --compress={0|1|2} Enable compression of resources: • 0: No compression • 1: Constant string sharing • 2: ZIP --disable-plugin pluginname Disables the specified plug-in. See jlink Plug-ins for the list of supported plug-ins. --endian {little|big} Specifies the byte order of the generated image. The default value is the format of your system's architecture. -h or --help Prints the help message. --ignore-signing-information Suppresses a fatal error when signed modular JARs are linked in the runtime image. The signature-related files of the signed modular JARs aren't copied to the runtime image. --launcher command=module or --launcher command=module/main Specifies the launcher command name for the module or the command name for the module and main class (the module and the main class names are separated by a slash (/)). --limit-modules mod [, mod...] Limits the universe of observable modules to those in the transitive closure of the named modules, mod, plus the main module, if any, plus any further modules specified in the --add-modules option. --list-plugins Lists available plug-ins, which you can access through command-line options; see jlink Plug-ins. -p or --module-path modulepath Specifies the module path. If this option is not specified, then the default module path is $JAVA_HOME/jmods. This directory contains the java.base module and the other standard and JDK modules. If this option is specified but the java.base module cannot be resolved from it, then the jlink command appends $JAVA_HOME/jmods to the module path. 
--no-header-files Excludes header files. --no-man-pages Excludes man pages. --output path Specifies the location of the generated runtime image. --save-opts filename Saves jlink options in the specified file. --suggest-providers [name, ...] Suggest providers that implement the given service types from the module path. --version Prints version information. @filename Reads options from the specified file. An options file is a text file that contains the options and values that you would typically enter in a command prompt. Options may appear on one line or on several lines. You may not specify environment variables for path names. You may comment out lines by prefixing a hash symbol (#) to the beginning of the line. The following is an example of an options file for the jlink command: #Wed Dec 07 00:40:19 EST 2016 --module-path mlib --add-modules com.greetings --output greetingsapp JLINK PLUG-INS Note: Plug-ins not listed in this section aren't supported and are subject to change. For plug-in options that require a pattern-list, the value is a comma-separated list of elements, with each element using one of the following forms: • glob-pattern • glob:glob-pattern • regex:regex-pattern • @filename, where filename is the name of a file that contains patterns to be used, one pattern per line. For a complete list of all available plug-ins, run the command jlink --list-plugins. Plugin compress Compresses all resources in the output image. • Level 0: No compression • Level 1: Constant string sharing • Level 2: ZIP An optional pattern-list filter can be specified to list the pattern of files to include. Plugin include-locales Includes the list of locales where langtag is a BCP 47 language tag. This option supports locale matching as defined in RFC 4647. Ensure that you add the module jdk.localedata when using this option. Example: --add-modules jdk.localedata --include-locales=en,ja,*-IN Plugin order-resources Orders the specified paths in priority order. 
If @filename is specified, then each line in pattern-list must be an exact match for the paths to be ordered. Example: --order-resources=/module-info.class,@classlist,/java.base/java/lang/ Plugin strip-debug Strips debug information from the output image. Plugin generate-cds-archive Generate CDS archive if the runtime image supports the CDS feature. JLINK EXAMPLES The following command creates a runtime image in the directory greetingsapp. This command links the module com.greetings, whose module definition is contained in the directory mlib.

    jlink --module-path mlib --add-modules com.greetings --output greetingsapp

The following command lists the modules in the runtime image greetingsapp:

    greetingsapp/bin/java --list-modules
    com.greetings
    java.base@11
    java.logging@11
    org.astro@1.0

The following command creates a runtime image in the directory compressedrt that's stripped of debug symbols, uses compression to reduce space, and includes French language locale information:

    jlink --add-modules jdk.localedata --strip-debug --compress=2 --include-locales=fr --output compressedrt

The following example compares the size of the runtime image compressedrt with fr_rt, which isn't stripped of debug symbols and doesn't use compression:

    jlink --add-modules jdk.localedata --include-locales=fr --output fr_rt
    du -sh ./compressedrt ./fr_rt
    23M ./compressedrt
    36M ./fr_rt

The following example lists the providers that implement java.security.Provider:

    jlink --suggest-providers java.security.Provider
    Suggested providers:
      java.naming provides java.security.Provider used by java.base
      java.security.jgss provides java.security.Provider used by java.base
      java.security.sasl provides java.security.Provider used by java.base
      java.smartcardio provides java.security.Provider used by java.base
      java.xml.crypto provides java.security.Provider used by java.base
      jdk.crypto.cryptoki provides java.security.Provider used by java.base
      jdk.crypto.ec provides java.security.Provider used by java.base
      jdk.crypto.mscapi provides java.security.Provider used by java.base
      jdk.security.jgss provides java.security.Provider used by java.base

The following example creates a custom runtime image named mybuild that includes only java.naming and jdk.crypto.cryptoki and their dependencies but no other providers. Note that these dependencies must exist in the module path:

    jlink --add-modules java.naming,jdk.crypto.cryptoki --output mybuild

The following command is similar to the one that creates a runtime image named greetingsapp, except that it will link the modules resolved from root modules with service binding; see the Configuration.resolveAndBind method.

    jlink --module-path mlib --add-modules com.greetings --output greetingsapp --bind-services

The following command lists the modules in the runtime image greetingsapp created by this command:

    greetingsapp/bin/java --list-modules
    com.greetings
    java.base@11
    java.compiler@11
    java.datatransfer@11
    java.desktop@11
    java.logging@11
    java.management@11
    java.management.rmi@11
    java.naming@11
    java.prefs@11
    java.rmi@11
    java.security.jgss@11
    java.security.sasl@11
    java.smartcardio@11
    java.xml@11
    java.xml.crypto@11
    jdk.accessibility@11
    jdk.charsets@11
    jdk.compiler@11
    jdk.crypto.cryptoki@11
    jdk.crypto.ec@11
    jdk.crypto.mscapi@11
    jdk.internal.opt@11
    jdk.jartool@11
    jdk.javadoc@11
    jdk.jdeps@11
    jdk.jfr@11
    jdk.jlink@11
    jdk.localedata@11
    jdk.management@11
    jdk.management.jfr@11
    jdk.naming.dns@11
    jdk.naming.rmi@11
    jdk.security.auth@11
    jdk.security.jgss@11
    jdk.zipfs@11
    org.astro@1.0

JDK 22 2024 JLINK(1)
jlink - assemble and optimize a set of modules and their dependencies into a custom runtime image
jlink [options] --module-path modulepath --add-modules module [, module...]
options Command-line options separated by spaces. See jlink Options. modulepath The path where the jlink tool discovers observable modules. These modules can be modular JAR files, JMOD files, or exploded modules. module The names of the modules to add to the runtime image. The jlink tool adds these modules and their transitive dependencies. --compress={0|1|2}[:filter=pattern-list] --include-locales=langtag[,langtag]* --order-resources=pattern-list --strip-debug --generate-cds-archive
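The @filename options-file form described in the jlink options above can be sketched as follows (hypothetical: the module path mlib and module com.greetings mirror the examples in this page, and jlink is only invoked if it is present):

```shell
# Write the options into a plain-text options file (comments start with #)
cat > jlink-opts.txt <<'EOF'
# build greetingsapp from the mlib module path
--module-path mlib
--add-modules com.greetings
--output greetingsapp
EOF

# Pass the whole file to jlink with the @filename syntax, if jlink is installed
if command -v jlink >/dev/null 2>&1; then
    jlink @jlink-opts.txt || echo "jlink failed (mlib is a placeholder path)"
else
    echo "jlink not installed; skipping"
fi
```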
null
enc2xs
enc2xs builds a Perl extension for use by Encode from either Unicode Character Mapping files (.ucm) or Tcl Encoding Files (.enc). Besides being used internally during the build process of the Encode module, you can use enc2xs to add your own encoding to perl. No knowledge of XS is necessary. Quick Guide If you want to know as little about Perl as possible but need to add a new encoding, just read this chapter and forget the rest. 0. Have a .ucm file ready. You can get it from somewhere or you can write your own from scratch or you can grab one from the Encode distribution and customize it. For the UCM format, see the next Chapter. In the example below, I'll call my theoretical encoding myascii, defined in my.ucm. "$" is a shell prompt.

    $ ls -F
    my.ucm

1. Issue a command as follows:

    $ enc2xs -M My my.ucm
    generating Makefile.PL
    generating My.pm
    generating README
    generating Changes

Now take a look at your current directory. It should look like this.

    $ ls -F
    Makefile.PL My.pm my.ucm t/

The following files were created.

    Makefile.PL - MakeMaker script
    My.pm       - Encode submodule
    t/My.t      - test file

1.1. If you want *.ucm installed together with the modules, do as follows:

    $ mkdir Encode
    $ mv *.ucm Encode
    $ enc2xs -M My Encode/*ucm

2. Edit the files generated. You don't have to if you have no time AND no intention to give it to someone else. But it is a good idea to edit the pod and to add more tests. 3. Now issue a command all Perl Mongers love:

    $ perl Makefile.PL
    Writing Makefile for Encode::My

4. Now all you have to do is make.

    $ make
    cp My.pm blib/lib/Encode/My.pm
    /usr/local/bin/perl /usr/local/bin/enc2xs -Q -O \
        -o encode_t.c -f encode_t.fnm
    Reading myascii (myascii)
    Writing compiled form
    128 bytes in string tables
    384 bytes (75%) saved spotting duplicates
    1 bytes (0.775%) saved using substrings
    ....
    chmod 644 blib/arch/auto/Encode/My/My.bs
    $

The time it takes varies depending on how fast your machine is and how large your encoding is. 
Unless you are working on something big like euc-tw, it won't take too long. 5. You can "make install" already but you should test first.

    $ make test
    PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib \
        -e 'use Test::Harness qw(&runtests $verbose); \
        $verbose=0; runtests @ARGV;' t/*.t
    t/My....ok
    All tests successful.
    Files=1, Tests=2, 0 wallclock secs
     ( 0.09 cusr + 0.01 csys = 0.09 CPU)

6. If you are content with the test result, just "make install". 7. If you want to add your encoding to Encode's demand-loading list (so you don't have to "use Encode::YourEncoding"), run enc2xs -C to update Encode::ConfigLocal, a module that controls local settings. After that, "use Encode;" is enough to load your encodings on demand. The Unicode Character Map Encode uses the Unicode Character Map (UCM) format for source character mappings. This format is used by IBM's ICU package and was adopted by Nick Ing-Simmons for use with the Encode module. Since UCM is more flexible than Tcl's Encoding Map and far more user-friendly, this is the recommended format for Encode now. A UCM file looks like this.

    #
    # Comments
    #
    <code_set_name> "US-ascii" # Required
    <code_set_alias> "ascii"   # Optional
    <mb_cur_min> 1             # Required; usually 1
    <mb_cur_max> 1             # Max. # of bytes/char
    <subchar> \x3F             # Substitution char
    #
    CHARMAP
    <U0000> \x00 |0 # <control>
    <U0001> \x01 |0 # <control>
    <U0002> \x02 |0 # <control>
    ....
    <U007C> \x7C |0 # VERTICAL LINE
    <U007D> \x7D |0 # RIGHT CURLY BRACKET
    <U007E> \x7E |0 # TILDE
    <U007F> \x7F |0 # <control>
    END CHARMAP

• Anything that follows "#" is treated as a comment. • The header section continues until a line containing the word CHARMAP. This section has a form of <keyword> value, one pair per line. Strings used as values must be quoted. Barewords are treated as numbers. \xXX represents a byte. Most of the keywords are self-explanatory. subchar means substitution character, not subcharacter. 
When you decode a Unicode sequence to this encoding but no matching character is found, the byte sequence defined here will be used. For most cases, the value here is \x3F; in ASCII, this is a question mark. • CHARMAP starts the character map section. Each line has a form as follows:

    <UXXXX> \xXX.. |0 # comment
       ^      ^     ^
       |      |     +- Fallback flag
       |      +------- Encoded byte sequence
       +-------------- Unicode Character ID in hex

The format is roughly the same as a header section except for the fallback flag: | followed by 0..3. The meaning of the possible values is as follows: |0 Round trip safe. A character decoded to Unicode encodes back to the same byte sequence. Most characters have this flag. |1 Fallback for unicode -> encoding. When seen, enc2xs adds this character for the encode map only. |2 Skip sub-char mapping should there be no code point. |3 Fallback for encoding -> unicode. When seen, enc2xs adds this character for the decode map only. • And finally, END OF CHARMAP ends the section. When you are manually creating a UCM file, you should copy ascii.ucm or an existing encoding which is close to yours, rather than write your own from scratch. When you do so, make sure you leave at least U0000 to U0020 as is, unless your environment is EBCDIC. CAVEAT: not all features in UCM are implemented. For example, icu:state is not used. Because of that, you need to write a perl module if you want to support algorithmic encodings, notably the ISO-2022 series. Such modules include Encode::JP::2022_JP, Encode::KR::2022_KR, and Encode::TW::HZ. Coping with duplicate mappings When you create a map, you SHOULD make your mappings round-trip safe. That is, "encode('your-encoding', decode('your-encoding', $data)) eq $data" stands for all characters that are marked as "|0". Here is how to make sure: • Sort your map in Unicode order. • When you have a duplicate entry, mark either one with '|1' or '|3'. • And make sure the '|1' or '|3' entry FOLLOWS the '|0' entry. 
Here is an example from big5-eten.

    <U2550> \xF9\xF9 |0
    <U2550> \xA2\xA4 |3

Internally Encoding -> Unicode and Unicode -> Encoding Map looks like this;

    E to U               U to E
    --------------------------------------
    \xF9\xF9 => U2550    U2550 => \xF9\xF9
    \xA2\xA4 => U2550

So it is round-trip safe for \xF9\xF9. But if the line above is upside down, here is what happens.

    E to U               U to E
    --------------------------------------
    \xA2\xA4 => U2550    U2550 => \xF9\xF9
    (\xF9\xF9 => U2550 is now overwritten!)

The Encode package comes with ucmlint, a crude but sufficient utility to check the integrity of a UCM file. Check under the Encode/bin directory for this. When in doubt, you can use ucmsort, yet another utility under Encode/bin directory. Bookmarks • ICU Home Page <http://www.icu-project.org/> • ICU Character Mapping Tables <http://site.icu-project.org/charts/charset> • ICU:Conversion Data <http://www.icu-project.org/userguide/conversion-data.html> SEE ALSO Encode, perlmod, perlpod perl v5.30.3 2024-04-13 ENC2XS(1)
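The round-trip property described above can be spot-checked from the shell with Encode itself (a sketch; it assumes a perl with the core Encode module, and uses ascii purely as a stand-in for your encoding):

```shell
# encode(decode($data)) should reproduce $data for all |0 mappings
if command -v perl >/dev/null 2>&1; then
    perl -MEncode -e '
        my $data = "Hello";
        my $rt = encode("ascii", decode("ascii", $data));
        print(($rt eq $data) ? "round-trip safe\n" : "NOT round-trip safe\n");
    ' > roundtrip.txt
else
    echo "perl not installed; skipping" > roundtrip.txt
fi
cat roundtrip.txt
```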
enc2xs -- Perl Encode Module Generator
enc2xs -[options]
enc2xs -M ModName mapfiles...
enc2xs -C
null
null
pridist.d
This is a simple DTrace script that samples, at 1000 Hz, which process is on the CPUs and what its priority is. A distribution plot is printed. With priorities, the higher the priority, the better chance the process (actually, the thread) has of being scheduled. This idea came from the script /usr/demo/dtrace/profpri.d, which produces similar output for one particular PID. Since this uses DTrace, only users with root privileges can run this command.
pridist.d - process priority distribution. Uses DTrace.
pridist.d
null
This samples until Ctrl-C is hit.

    # pridist.d

FIELDS
    CMD    process name
    PID    process ID
    value  process priority
    count  number of samples of at least this priority

BASED ON /usr/demo/dtrace/profpri.d DOCUMENTATION DTrace Guide "profile Provider" chapter (docs.sun.com) See the DTraceToolkit for further documentation under the Docs directory. The DTraceToolkit docs may include full worked examples with verbose descriptions explaining the output. EXIT pridist.d will sample until Ctrl-C is hit. SEE ALSO dispadmin(1M), dtrace(1M) version 0.90 June 13, 2005 pridist.d(1m)
size
Size (without the -m option) prints the (decimal) number of bytes required by the __TEXT, __DATA and __OBJC segments. All other segments are totaled and that size is listed in the `others' column. The final two columns are the sum in decimal and hexadecimal. If no file is specified, a.out is used. The options to size(1) are: - Treat the remaining arguments as names of object files, not options to size(1). -m Print the sizes of the Mach-O segments and sections as well as the total sizes of the sections in each segment and the total size of the segments in the file. -l When used with the -m option, also print the addresses and offsets of the sections and segments. -x When used with the -m option, print the values in hexadecimal (with leading 0x's) rather than decimal. -arch arch_type Specifies the architecture, arch_type, of the file for size(1) to operate on when the file is a universal file. (See arch(3) for the currently known arch_types.) The arch_type can be "all" to operate on all architectures in the file. The default is to display only the host architecture, if the file contains it; otherwise, all architectures in the file are shown. SEE ALSO otool(1) BUGS The size of common symbols can't be reflected in any of the numbers for relocatable object files. Apple Computer, Inc. July 28, 2005 SIZE(1)
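A quick look at the default and -m output formats (hedged: /bin/ls is just a convenient target, and on platforms whose size(1) lacks Mach-O support the command falls back to the default format):

```shell
# Try Mach-O segment reporting first; fall back to the default format
if command -v size >/dev/null 2>&1; then
    size -m /bin/ls > size-out.txt 2>&1 || size /bin/ls > size-out.txt
else
    echo "size not installed; skipping" > size-out.txt
fi
cat size-out.txt
```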
size - print the size of the sections in an object file
size [ option ... ] [ object ... ]
null
null
timesyncanalyse
The timesyncanalyse executable is used for analysis of test data for time synchronization testing. The following modes are available: audio Determine the time error between two audio signals and how it changes over time. Produces both plots and CSV data files (and scripts to plot them) of the calculated time error, the Allan Deviation (ADEV), the Modified Allan Deviation (MADEV), the Time Deviation (TDEV) and the Maximum Time Interval Error (MTIE). time-error Load the time error CSV file as previously output by a call to the tool in audio mode and analyse the data. Produces both plots and CSV data files (and scripts to plot them) of the Allan Deviation (ADEV), the Modified Allan Deviation (MADEV), the Time Deviation (TDEV) and the Maximum Time Interval Error (MTIE). The following options are available for the audio mode: --channel-a path The path to the audio file for channel A of the analysis; only the first channel of the file is used. --channel-b path The path to the audio file for channel B of the analysis; only the first channel of the file is used. The sample rate and length of the file must match those of channel A. --output path Specifies the path to the directory to put all of the output data from the analysis. Creates the directory if it does not exist. [--name name] The name to use for the results. This is used in the title of the plots and the naming of the output files. If not provided the default of audio_analysis is used. --interval interval The interval, in samples, at which the analysis is performed. [--upscale upscale] The amount of upsampling to perform on the audio before running the analysis. If not specified 1 is used. --length length The length, in samples, of the section of the original audio that is used for the correlation to determine the error. [--type quick|resampler|post-resampler] Select which type of drift correlator is used. 
Quick uses a quadratic interpolation post correlation, resampler performs resampling of the signal before correlation and post-resampler performs resampling of the correlated signal. Quadratic interpolation results can have some error present, but it runs much faster than resampling. Quadratic interpolation can be used to do a quick analysis before spending time doing the full analysis. Post resampling, where the correlation signal is upsampled, is much quicker than resampling prior to correlation but can produce slightly different results when the drift is right on the edge of a quantization level. post-resampler is the default mode if nothing is specified. [--audio-limit seconds] Limit the audio analysis to the first N seconds of audio in the file. [--window-lower lower-limit] Specify the smallest window size to perform the analysis on. This directly relates to the smallest observation interval plotted, where the observation interval = the window length * the time error sampling period (number of seconds between time error points). The sampling period of the time error data is the interval / sampling rate of the audio file. The default (and smallest) value is 2. [--window-upper upper-limit] Specify the largest window size to perform the analysis on. This directly relates to the largest observation interval plotted, where the observation interval = the window length * the time error sampling period (number of seconds between time error points). The sampling period of the time error data is the interval / sampling rate of the audio file. The default value is the number of time error points in the data. [--window-step step-size] Specify the window size step to step between each analysis calculation. Adjusting this value will speed up the analysis of the data but produce lower resolution plots. The default value is 1. [--adev | --no-adev] Either calculate or don't calculate the Allan Deviation on the time error data. If unspecified the Allan deviation is calculated. 
[--madev | --no-madev] Either calculate or don't calculate the Modified Allan Deviation on the time error data. If unspecified the Modified Allan deviation is calculated. [--tdev | --no-tdev] Either calculate or don't calculate the Time Deviation on the time error data. If unspecified the Time deviation is calculated. [--rmstie | --no-rmstie] Either calculate or don't calculate the Root Mean Squared Time Interval Error on the time error data. If unspecified the RMSTIE is calculated. [--mtie | --no-mtie] Either calculate or don't calculate the Maximum Time Interval Error on the time error data. If unspecified the MTIE is calculated. The following options are available for the time-error mode: --data path The path to the time error CSV file. Both the time and the time error should be in seconds (floating point). --output path Specifies the path to the directory to put all of the output data from the analysis. Creates the directory if it does not exist. [--name name] The name to use for the results. This is used in the title of the plots and the naming of the output files. If not provided the default of time-error_analysis is used. [--window-lower lower-limit] Specify the smallest window size to perform the analysis on. This directly relates to the smallest observation interval plotted, where the observation interval = the window length * the time error sampling period (number of seconds between time error points). The default (and smallest) value is 2. [--window-upper upper-limit] Specify the largest window size to perform the analysis on. This directly relates to the largest observation interval plotted, where the observation interval = the window length * the time error sampling period (number of seconds between time error points). The default value is the number of time error points in the data. [--window-step step-size] Specify the window size step to step between each analysis calculation. 
Adjusting this value will speed up the analysis of the data but produce lower resolution plots. The default value is 1. [--adev | --no-adev] Either calculate or don't calculate the Allan Deviation on the time error data. If unspecified the Allan deviation is calculated. [--madev | --no-madev] Either calculate or don't calculate the Modified Allan Deviation on the time error data. If unspecified the Modified Allan deviation is calculated. [--tdev | --no-tdev] Either calculate or don't calculate the Time Deviation on the time error data. If unspecified the Time deviation is calculated. [--rmstie | --no-rmstie] Either calculate or don't calculate the Root Mean Squared Time Interval Error on the time error data. If unspecified the RMSTIE is calculated. [--mtie | --no-mtie] Either calculate or don't calculate the Maximum Time Interval Error on the time error data. If unspecified the MTIE is calculated. Darwin 2/29/16 Darwin
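Putting the audio-mode options above together (entirely illustrative: the capture file names, the 48 kHz-derived sample counts, and the output directory are placeholders, and the tool is only invoked if it exists on the system):

```shell
# Analyse two 48 kHz captures, computing a time error point once per second
# of audio (--interval 48000 samples at 48 kHz = 1 s), using the fast
# quadratic-interpolation correlator for a first pass
if command -v timesyncanalyse >/dev/null 2>&1; then
    timesyncanalyse audio \
        --channel-a channel_a.wav \
        --channel-b channel_b.wav \
        --output results \
        --name lab_run \
        --interval 48000 \
        --length 4800 \
        --type quick > tsa-out.txt 2>&1
else
    echo "timesyncanalyse not available; skipping" > tsa-out.txt
fi
cat tsa-out.txt
```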
timesyncanalyse – time synchronization analysis tool.
timesyncanalyse mode <arguments>
null
null
uttype
This command can be used to query the system programmatically about Uniform Type Identifiers (UTTypes for short) known to it. EXAMPLE List all declared UTTypes uttype --all List all UTTypes that claim the filename extension "xyz" and which conform to "public.data" uttype --extension 'xyz' --conformsto 'public.data' List extended information about the UTType "com.apple.package" uttype --verbose "com.apple.package" List all supported flags and options uttype --help SEE ALSO lsregister(1) HISTORY First appeared in macOS 11. macOS August 2020 UTTYPE(1)
uttype - Information about Uniform Type Identifiers
uttype [flags] [options] [identifier1 [identifier2 [...]]]
null
null
uustat
The uustat command can display various types of status information about the UUCP system. It can also be used to cancel or rejuvenate requests made by uucp(1) or uux(1). By default uustat displays all jobs queued up for the invoking user, as if given the --user option with the appropriate argument. If any of the -a, --all, -e, --executions, -s, --system, -S, --not-system, -u, --user, -U, --not-user, -c, --command, -C, --not-command, -o, --older-than, -y, --younger-than options are given, then all jobs which match the combined specifications are displayed. The -K or --kill-all option may be used to kill off a selected group of jobs, such as all jobs more than 7 days old.
uustat - UUCP status inquiry and control
uustat -a
uustat --all
uustat [ -eKRiMNQ ] [ -sS system ] [ -uU user ] [ -cC command ] [ -oy hours ] [ -B lines ] [ --executions ] [ --kill-all ] [ --rejuvenate-all ] [ --prompt ] [ --mail ] [ --notify ] [ --no-list ] [ --system system ] [ --not-system system ] [ --user user ] [ --not-user user ] [ --command command ] [ --not-command command ] [ --older-than hours ] [ --younger-than hours ] [ --mail-lines lines ]
uustat [ -kr jobid ] [ --kill jobid ] [ --rejuvenate jobid ]
uustat -q [ -sS system ] [ -oy hours ] [ --system system ] [ --not-system system ] [ --older-than hours ] [ --younger-than hours ]
uustat --list [ -sS system ] [ -oy hours ] [ --system system ] [ --not-system system ] [ --older-than hours ] [ --younger-than hours ]
uustat -m
uustat --status
uustat -p
uustat --ps
The following options may be given to uustat. -a, --all List all queued file transfer requests. -e, --executions List queued execution requests rather than queued file transfer requests. Queued execution requests are processed by uuxqt (8) rather than uucico (8). Queued execution requests may be waiting for some file to be transferred from a remote system. They are created by an invocation of uux (1). -s system, --system system List all jobs queued up for the named system. These options may be specified multiple times, in which case all jobs for all the systems will be listed. If used with --list only the systems named will be listed. -S system, --not-system system List all jobs queued for systems other than the one named. These options may be specified multiple times, in which case no jobs from any of the specified systems will be listed. If used with --list only the systems not named will be listed. These options may not be used with -s or --system. -u user, --user user List all jobs queued up for the named user. These options may be specified multiple times, in which case all jobs for all the users will be listed. -U user, --not-user user List all jobs queued up for users other than the one named. These options may be specified multiple times, in which case no jobs from any of the specified users will be listed. These options may not be used with -u or --user. -c command, --command command List all jobs requesting the execution of the named command. If command is ALL this will list all jobs requesting the execution of some command (as opposed to simply requesting a file transfer). These options may be specified multiple times, in which case all jobs requesting any of the commands will be listed. -C command, --not-command command List all jobs requesting execution of some command other than the named command, or, if command is ALL, list all jobs that simply request a file transfer (as opposed to requesting the execution of some command). 
These options may be specified multiple times, in which case no job requesting one of the specified commands will be listed. These options may not be used with -c or --command. -o hours, --older-than hours List all queued jobs older than the given number of hours. If used with --list only systems whose oldest job is older than the given number of hours will be listed. -y hours, --younger-than hours List all queued jobs younger than the given number of hours. If used with --list only systems whose oldest job is younger than the given number of hours will be listed. -k jobid, --kill jobid Kill the named job. The job id is shown by the default output format, as well as by the -j or --jobid option to uucp (1) or uux (1). A job may only be killed by the user who created the job, or by the UUCP administrator or the superuser. The -k or --kill options may be used multiple times on the command line to kill several jobs. -r jobid, --rejuvenate jobid Rejuvenate the named job. This will mark it as having been invoked at the current time, affecting the output of the -o, --older-than, -y, or --younger-than options and preserving it from any automated cleanup daemon. The job id is shown by the default output format, as well as by the -j or --jobid options to uucp (1) or uux (1). A job may only be rejuvenated by the user who created the job, or by the UUCP administrator or the superuser. The -r or --rejuvenate options may be used multiple times on the command line to rejuvenate several jobs. -q, --list Display the status of commands, executions and conversations for all remote systems for which commands or executions are queued. The -s, --system, -S, --not-system, -o, --older-than, -y, and --younger-than options may be used to restrict the systems which are listed. Systems for which no commands or executions are queued will never be listed. -m, --status Display the status of conversations for all remote systems. 
-p, --ps Display the status of all processes holding UUCP locks on systems or ports. -i, --prompt For each listed job, prompt whether to kill the job or not. If the first character of the input line is y or Y the job will be killed. -K, --kill-all Automatically kill each listed job. This can be useful for automatic cleanup scripts, in conjunction with the --mail and --notify options. -R, --rejuvenate-all Automatically rejuvenate each listed job. This may not be used with --kill-all. -M, --mail For each listed job, send mail to the UUCP administrator. If the job is killed (due to --kill-all or --prompt with an affirmative response) the mail will indicate that. A comment specified by the --comment option may be included. If the job is an execution, the initial portion of its standard input will be included in the mail message; the number of lines to include may be set with the --mail-lines option (the default is 100). If the standard input contains null characters, it is assumed to be a binary file and is not included. -N, --notify For each listed job, send mail to the user who requested the job. The mail is identical to that sent by the -M or --mail options. -W comment, --comment comment Specify a comment to be included in mail sent with the -M, --mail, -N, or --notify options. -B lines, --mail-lines lines When the -M, --mail, -N, or --notify options are used to send mail about an execution with standard input, this option controls the number of lines of standard input to include in the message. The default is 100. -Q, --no-list Do not actually list the job, but only take any actions indicated by the -i, --prompt, -K, --kill-all, -M, --mail, -N or --notify options. -x type, --debug type Turn on particular debugging types. The following types are recognized: abnormal, chat, handshake, uucp-proto, proto, port, config, spooldir, execute, incoming, outgoing. Only abnormal, config, spooldir and execute are meaningful for uustat. 
Multiple types may be given, separated by commas, and the --debug option may appear multiple times. A number may also be given, which will turn on that many types from the foregoing list; for example, --debug 2 is equivalent to --debug abnormal,chat. -I file, --config file Set configuration file to use. This option may not be available, depending upon how uustat was compiled. -v, --version Report version information and exit. --help Print a help message and exit.
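The numeric form of --debug described above (a number turns on that many types from the list) can be sketched as a simple mapping. This is an illustrative sketch, not uustat's implementation:

```python
# Illustrative sketch: expand a uustat "--debug" argument into a list of
# debugging type names, mirroring the behaviour described above.
DEBUG_TYPES = [
    "abnormal", "chat", "handshake", "uucp-proto", "proto",
    "port", "config", "spooldir", "execute", "incoming", "outgoing",
]

def expand_debug(arg):
    """Turn a --debug argument into a list of type names."""
    if arg.isdigit():
        return DEBUG_TYPES[:int(arg)]   # e.g. "2" -> first two types
    return arg.split(",")               # e.g. "abnormal,chat"

print(expand_debug("2"))                # ['abnormal', 'chat']
print(expand_debug("abnormal,chat"))    # ['abnormal', 'chat']
```

This makes the stated equivalence concrete: --debug 2 and --debug abnormal,chat select the same types.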
uustat --all Display status of all jobs. A sample output line is as follows: bugsA027h bugs ian 04-01 13:50 Executing rmail ian@airs.com (sending 1283 bytes) The format is jobid system user queue-date command (size) The jobid may be passed to the --kill or --rejuvenate options. The size indicates how much data is to be transferred to the remote system, and is absent for a file receive request. The --system, --not-system, --user, --not-user, --command, --not-command, --older-than, and --younger-than options may be used to control which jobs are listed. uustat --executions Display status of queued up execution requests. A sample output line is as follows: bugs bugs!ian 05-20 12:51 rmail ian The format is system requestor queue-date command The --system, --not-system, --user, --not-user, --command, --not-command, --older-than, and --younger-than options may be used to control which requests are listed. uustat --list Display status for all systems with queued up commands. A sample output line is as follows: bugs 4C (1 hour) 0X (0 secs) 04-01 14:45 Dial failed This indicates the system, the number of queued commands, the age of the oldest queued command, the number of queued local executions, the age of the oldest queued execution, the date of the last conversation, and the status of that conversation. uustat --status Display conversation status for all remote systems. A sample output line is as follows: bugs 04-01 15:51 Conversation complete This indicates the system, the date of the last conversation, and the status of that conversation. If the last conversation failed, uustat will indicate how many attempts have been made to call the system. If the retry period is currently preventing calls to that system, uustat also displays the time when the next call will be permitted. uustat --ps Display the status of all processes holding UUCP locks. The output format is system dependent, as uustat simply invokes ps (1) on each process holding a lock. 
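The default output format shown above (jobid system user queue-date command (size)) can be split mechanically. A sketch under the assumption that the queue date always occupies two whitespace-separated tokens, as in the sample line:

```python
# Illustrative sketch: parse one line of "uustat --all" output into the
# fields documented above.  The trailing "(size)" is optional; it is
# absent for a file receive request.
def parse_uustat_line(line):
    jobid, system, user, date, time, rest = line.split(maxsplit=5)
    size = None
    if rest.endswith(")") and "(" in rest:
        rest, _, paren = rest.rpartition("(")
        size = paren.rstrip(")")
        rest = rest.rstrip()
    return {"jobid": jobid, "system": system, "user": user,
            "queue_date": f"{date} {time}", "command": rest, "size": size}

sample = ("bugsA027h bugs ian 04-01 13:50 "
          "Executing rmail ian@airs.com (sending 1283 bytes)")
print(parse_uustat_line(sample))
```

The extracted jobid field is what would be passed to --kill or --rejuvenate.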
uustat --command rmail --older-than 168 --kill-all --no-list --mail --notify --comment "Queued for over 1 week" This will kill all rmail commands that have been queued up waiting for delivery for over 1 week (168 hours). For each such command, mail will be sent both to the UUCP administrator and to the user who requested the rmail execution. The mail message sent will include the string given by the --comment option. The --no-list option prevents any of the jobs from being listed on the terminal, so any output from the program will be error messages. SEE ALSO ps(1), rmail(1), uucp(1), uux(1), uucico(8), uuxqt(8) AUTHOR Ian Lance Taylor (ian@airs.com) Taylor UUCP 1.07 uustat(1)
snmpwalk
snmpwalk is an SNMP application that uses SNMP GETNEXT requests to query a network entity for a tree of information. An object identifier (OID) may be given on the command line. This OID specifies which portion of the object identifier space will be searched using GETNEXT requests. All variables in the subtree below the given OID are queried and their values presented to the user. Each variable name is given in the format specified in variables(5). If no OID argument is present, snmpwalk will search the subtree rooted at SNMPv2-SMI::mib-2 (including any MIB object values from other MIB modules, that are defined as lying within this subtree). If the network entity has an error processing the request packet, an error packet will be returned and a message will be shown, helping to pinpoint why the request was malformed. If the tree search causes attempts to search beyond the end of the MIB, the message "End of MIB" will be displayed.
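The walk described above can be sketched against a toy in-memory agent. This is an illustrative model of how repeated GETNEXT requests traverse a subtree, not the Net-SNMP implementation; the MIB table and its values are made up:

```python
# Illustrative sketch: a GETNEXT walk over a toy agent.  OIDs are
# compared component-by-component as integers, the way an agent orders
# them; the walk stops when a reply falls outside the requested subtree.
def oid_key(oid):
    return tuple(int(p) for p in oid.split("."))

# A toy agent: a table of OID -> value (values are invented).
MIB = {
    "1.3.6.1.2.1.1.1.0": "SunOS zeus.net.cmu.edu",
    "1.3.6.1.2.1.1.3.0": "155274552",
    "1.3.6.1.2.1.2.1.0": "3",
}

def getnext(oid):
    """Return the first (oid, value) pair ordered after `oid`, or None."""
    later = sorted((k for k in MIB if oid_key(k) > oid_key(oid)), key=oid_key)
    return (later[0], MIB[later[0]]) if later else None

def walk(root):
    """Repeat GETNEXT until the reply leaves the subtree under `root`."""
    prefix, results, oid = oid_key(root), [], root
    while (nxt := getnext(oid)) is not None:
        oid, value = nxt
        if oid_key(oid)[:len(prefix)] != prefix:
            break                    # walked past the end of the subtree
        results.append((oid, value))
    return results

print(walk("1.3.6.1.2.1.1"))         # only the two variables under ...1
```

Note that the starting OID itself is never returned by GETNEXT; that is why snmpwalk offers the -Ci option described below.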
snmpwalk - retrieve a subtree of management values using SNMP GETNEXT requests
snmpwalk [APPLICATION OPTIONS] [COMMON OPTIONS] AGENT [OID]
-Cc Do not check whether the returned OIDs are increasing. Some agents (LaserJets are an example) return OIDs out of order, but can complete the walk anyway. Other agents return OIDs that are out of order and can cause snmpwalk to loop indefinitely. By default, snmpwalk tries to detect this behavior and warns you when it hits an agent acting illegally. Use -Cc to turn off this check. -CE {OID} End the walk at the specified OID, rather than a simple subtree. This can be used to walk a partial subtree, selected columns of a table, or even two or more tables within a single command. -Ci Include the given OID in the search range. Normally snmpwalk uses GETNEXT requests starting with the OID you specified and returns all results in the MIB subtree rooted at that OID. Sometimes, you may wish to include the OID specified on the command line in the printed results if it is a valid OID in the tree itself. This option lets you do this explicitly. -CI By default, the given OID will be retrieved automatically if the main subtree walk returns no usable values. This allows a walk of a single instance to behave as generally expected, and return the specified instance value. This option turns off this final GET request, so a walk of a single instance will return nothing. -Cp Upon completion of the walk, print the number of variables found. -Ct Upon completion of the walk, print the total wall-clock time it took to collect the data (in seconds). Note that the timer is started just before the beginning of the data request series and stopped just after it finishes. Most importantly, this means that it does not include snmp library initialization, shutdown, argument processing, and any other overhead. In addition to these options, snmpwalk takes the common options described in the snmpcmd(1) manual page.
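The ordering check that -Cc disables compares OIDs numerically, component by component; a plain string comparison would get it wrong. An illustrative sketch (not the Net-SNMP implementation):

```python
# Illustrative sketch of the ordering check that -Cc turns off: each
# returned OID must be strictly greater than the one before it, compared
# component-by-component as integers.  Plain string comparison is wrong:
# "1.2.10" sorts before "1.2.9" as a string but after it as an OID.
def oid_key(oid):
    return tuple(int(p) for p in oid.split("."))

def is_increasing(oids):
    return all(oid_key(a) < oid_key(b) for a, b in zip(oids, oids[1:]))

assert "1.2.10" < "1.2.9"                    # string order: misleading
assert oid_key("1.2.10") > oid_key("1.2.9")  # numeric order: correct
print(is_increasing(["1.2.9", "1.2.10", "1.3.1"]))  # True
print(is_increasing(["1.2.9", "1.2.9"]))            # False: would loop
```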
Note that snmpwalk REQUIRES an argument specifying the agent to query and at most one OID argument, as described there. The command: snmpwalk -Os -c public -v 1 zeus system will retrieve all of the variables under system: sysDescr.0 = STRING: "SunOS zeus.net.cmu.edu 4.1.3_U1 1 sun4m" sysObjectID.0 = OID: enterprises.hp.nm.hpsystem.10.1.1 sysUpTime.0 = Timeticks: (155274552) 17 days, 23:19:05 sysContact.0 = STRING: "" sysName.0 = STRING: "zeus.net.cmu.edu" sysLocation.0 = STRING: "" sysServices.0 = INTEGER: 72 (plus the contents of the sysORTable). The command: snmpwalk -Os -c public -v 1 -CE sysORTable zeus system will retrieve the scalar values, but omit the sysORTable. SEE ALSO snmpcmd(1), snmpbulkwalk(1), variables(5). V5.6.2.1 28 May 2007 SNMPWALK(1)
bintrans
The uuencode and uudecode utilities are used to transmit binary files over transmission mediums that do not support other than simple ASCII data. The b64encode utility is synonymous with uuencode with the -m flag specified. The b64decode utility is synonymous with uudecode with the -m flag specified. The base64 utility acts as a base64 decoder when passed the --decode (or -d) flag and as a base64 encoder otherwise. As a decoder it only accepts raw base64 input and as an encoder it does not produce the framing lines. base64 reads standard input or file if it is provided and writes to standard output. Options --wrap (or -w) and --ignore-garbage (or -i) are accepted for compatibility with GNU base64, but the latter is unimplemented and silently ignored. The uuencode utility reads file (or by default the standard input) and writes an encoded version to the standard output, or output_file if one has been specified. The encoding uses only printing ASCII characters and includes the mode of the file and the operand name for use by uudecode. The uudecode utility transforms uuencoded files (or by default, the standard input) into the original form. The resulting file is named either name or (depending on options passed to uudecode) output_file and will have the mode of the original file except that setuid and execute bits are not retained. The uudecode utility ignores any leading and trailing lines. The following options are available for uuencode: -m Use the Base64 method of encoding, rather than the traditional uuencode algorithm. -r Produce raw output by excluding the initial and final framing lines. -o output_file Output to output_file instead of standard output. The following options are available for uudecode: -c Decode more than one uuencoded file from file if possible. -i Do not overwrite files. -m When used with the -r flag, decode Base64 input instead of traditional uuencode input. Without -r it has no effect. 
-o output_file Output to output_file instead of any pathname contained in the input data. -p Decode file and write output to standard output. -r Decode raw (or broken) input, which is missing the initial and possibly the final framing lines. The input is assumed to be in the traditional uuencode encoding, but if the -m flag is used, or if the utility is invoked as b64decode, then the input is assumed to be in Base64 format. -s Do not strip output pathname to base filename. By default uudecode deletes any prefix ending with the last slash '/' for security reasons. Additionally, b64encode accepts the following option: -w column Wrap encoded output after column. The following options are available for base64: -b count, --break=count Insert line breaks every count characters. The default is 0, which generates an unbroken stream. -d, -D, --decode Decode incoming Base64 stream into binary data. -h, --help Print usage summary and exit. -i input_file, --input=input_file Read input from input_file. The default is stdin; passing “-” also represents stdin. -o output_file, --output=output_file Write output to output_file. The default is stdout; passing “-” also represents stdout. bintrans is a generic utility that can run any of the aforementioned encoders and decoders. It can also run algorithms that are not available through a dedicated program: qp is a quoted-printable converter and accepts the following options: -u Decode. -o output_file Output to output_file instead of standard output.
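The encodings these tools implement are all available in Python's standard library, which makes the option semantics easy to demonstrate. An illustrative sketch, not the bintrans implementation:

```python
# Illustrative sketch using Python's standard library to mirror the
# encoders described above (uuencode -m / b64encode, base64 --decode,
# and the qp quoted-printable converter).
import base64
import quopri
import textwrap

data = b"hello, world"

# uuencode -m / b64encode: Base64 body, wrapped like "-w column".
body = base64.b64encode(data).decode()
wrapped = "\n".join(textwrap.wrap(body, 60))
print(wrapped)                       # aGVsbG8sIHdvcmxk

# base64 --decode / b64decode -r: raw Base64, no framing lines.
assert base64.b64decode(body) == data

# qp / qp -u: quoted-printable, as handled by the quopri module.
qp = quopri.encodestring("café".encode("utf-8"))
assert quopri.decodestring(qp).decode("utf-8") == "café"
```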
bintrans, base64, uuencode, uudecode – encode/decode a binary file
bintrans [algorithm] [...] uuencode [-m] [-r] [-o output_file] [file] name uudecode [-cimprs] [file ...] uudecode [-i] -o output_file b64encode [-r] [-w column] [-o output_file] [file] name b64decode [-cimprs] [file ...] b64decode [-i] -o output_file [file] base64 [-h | -D | -d] [-b count] [-i input_file] [-o output_file]
null
The following example packages up a source tree, compresses it, uuencodes it and mails it to a user on another system. When uudecode is run on the target system, the file ``src_tree.tar.Z'' will be created which may then be uncompressed and extracted into the original tree. tar cf - src_tree | compress | uuencode src_tree.tar.Z | mail user@example.com The following example unpacks all uuencoded files from your mailbox into your current working directory. uudecode -c < $MAIL The following example extracts a compressed tar archive from your mailbox uudecode -o /dev/stdout < $MAIL | zcat | tar xfv - SEE ALSO basename(1), compress(1), mail(1), uucp(1) (ports/net/freebsd-uucp), uuencode(5) HISTORY The uudecode and uuencode utilities appeared in 4.0BSD. BUGS Files encoded using the traditional algorithm are expanded by 35% (3 bytes become 4 plus control information). macOS 14.5 April 18, 2022 macOS 14.5
llvm-g++
null
null
null
null
null
yamlpp-parse-emit5.34
null
null
null
null
null
scp
scp copies files between hosts on a network. scp uses the SFTP protocol over a ssh(1) connection for data transfer, and uses the same authentication and provides the same security as a login session. scp will ask for passwords or passphrases if they are needed for authentication. The source and target may be specified as a local pathname, a remote host with optional path in the form [user@]host:[path], or a URI in the form scp://[user@]host[:port][/path]. Local file names can be made explicit using absolute or relative pathnames to avoid scp treating file names containing ‘:’ as host specifiers. When copying between two remote hosts, if the URI format is used, a port cannot be specified on the target if the -R option is used. The options are as follows: -3 Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts. Note that, when using the legacy SCP protocol (via the -O flag), this option selects batch mode for the second host as scp cannot ask for passwords or passphrases for both hosts. This mode is the default. -4 Forces scp to use IPv4 addresses only. -6 Forces scp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -B Selects batch mode (prevents asking for passwords or passphrases). -C Compression enable. Passes the -C flag to ssh(1) to enable compression. -c cipher Selects the cipher to use for encrypting the data transfer. This option is directly passed to ssh(1). -D sftp_server_path Connect directly to a local SFTP server program rather than a remote one via ssh(1). This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh. This option is directly passed to ssh(1). -i identity_file Selects the file from which the identity (private key) for public key authentication is read. 
This option is directly passed to ssh(1). -J destination Connect to the target host by first making an scp connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -O Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the SCP protocol may be necessary for servers that do not implement SFTP, for backwards- compatibility for particular filename wildcard patterns and for expanding paths with a ‘~’ prefix for older SFTP servers. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate scp command-line flag. For full details of the options listed below, and their possible values, see ssh_config(5). 
AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. Note that this option is written with a capital ‘P’, because -p is already reserved for preserving the times and mode bits of the file. -p Preserves modification times, access times, and file mode bits from the source file. -q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R Copies between two remote hosts are performed by connecting to the origin host and executing scp there. This requires that scp running on the origin host can authenticate to the destination host without requiring a password. -r Recursively copy entire directories. Note that scp follows symbolic links encountered in the tree traversal. -S program Name of program to use for the encrypted connection. The program must understand ssh(1) options. -T Disable strict filename checking. 
By default when copying files from a remote host to a local directory scp checks that the received filenames match those requested on the command-line to prevent the remote end from sending unexpected or unwanted files. Because of differences in how various operating systems and shells interpret filename wildcards, these checks may cause wanted files to be rejected. This option disables these checks at the expense of fully trusting that the server will not send unexpected filenames. -v Verbose mode. Causes scp and ssh(1) to print debugging messages about their progress. This is helpful in debugging connection, authentication, and configuration problems. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used. EXIT STATUS The scp utility exits 0 on success, and >0 if an error occurs. SEE ALSO sftp(1), ssh(1), ssh-add(1), ssh-agent(1), ssh-keygen(1), ssh_config(5), sftp-server(8), sshd(8) HISTORY scp is based on the rcp program in BSD source code from the Regents of the University of California. Since OpenSSH 9.0, scp has used the SFTP protocol for transfers by default. AUTHORS Timo Rinne <tri@iki.fi> Tatu Ylonen <ylo@cs.hut.fi> CAVEATS The legacy SCP protocol (selected by the -O flag) requires execution of the remote user's shell to perform glob(3) pattern matching. This requires careful quoting of any characters that have special meaning to the remote shell, such as quote characters. macOS 14.5 December 16, 2022 macOS 14.5
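The -o pass-through described in the options above lends itself to scripting. A sketch building an scp argument vector (the host names and option values are hypothetical; nothing is executed here):

```python
# Illustrative sketch: compose an scp argument vector with ssh_config(5)
# options passed through -o, as described in the options above.
def scp_command(source, target, *, port=None, options=()):
    argv = ["scp"]
    if port is not None:
        argv += ["-P", str(port)]   # capital P: -p preserves times/modes
    for opt in options:
        argv += ["-o", opt]         # one -o per ssh_config option
    return argv + [source, target]

cmd = scp_command("notes.txt", "user@example.org:backup/",
                  port=2222, options=["ConnectTimeout=10", "BatchMode=yes"])
print(" ".join(cmd))
```

Passing the command as a list (e.g. to subprocess.run) avoids shell quoting issues with filenames, which matters given the quoting caveats noted above for the legacy SCP protocol.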
scp – OpenSSH secure file copy
scp [-346ABCOpqRrsTv] [-c cipher] [-D sftp_server_path] [-F ssh_config] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] [-P port] [-S program] [-X sftp_option] source ... target
null
null
tnameserv
null
null
null
null
null
true
The true utility always returns with an exit code of zero. Some shells may provide a builtin true command which is identical to this utility. Consult the builtin(1) manual page. SEE ALSO builtin(1), csh(1), false(1), sh(1) STANDARDS The true utility is expected to be IEEE Std 1003.2 (“POSIX.2”) compatible. macOS 14.5 June 9, 1993 macOS 14.5
true – return true value
true
null
null
afinfo
Audio File Info prints out information about an audio file to stdout. Darwin February 13, 2007 Darwin
afinfo – Audio File Info
afinfo audiofile
null
null
hidutil
See hidutil [help] for more information. March 11, 2016
hidutil – HID event system debug utility
hidutil [command] [options]
null
null
ldapadd
ldapmodify is a shell-accessible interface to the ldap_add_ext(3), ldap_modify_ext(3), ldap_delete_ext(3) and ldap_rename(3) library calls. ldapadd is implemented as a hard link to the ldapmodify tool. When invoked as ldapadd the -a (add new entry) flag is turned on automatically. ldapmodify opens a connection to an LDAP server, binds, and modifies or adds entries. The entry information is read from standard input or from file through the use of the -f option.
ldapmodify, ldapadd - LDAP modify entry and LDAP add entry tools
ldapmodify [-a] [-c] [-S file] [-n] [-v] [-M[M]] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-P {2|3}] [-e [!]ext[=extparam]] [-E [!]ext[=extparam]] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] [-f file] ldapadd [-c] [-S file] [-n] [-v] [-M[M]] [-d debuglevel] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H ldapuri] [-h ldaphost] [-p ldapport] [-P {2|3}] [-O security-properties] [-I] [-Q] [-U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] [-f file]
-a Add new entries. The default for ldapmodify is to modify existing entries. If invoked as ldapadd, this flag is always set. -c Continuous operation mode. Errors are reported, but ldapmodify will continue with modifications. The default is to exit after reporting an error. -S file Records that were skipped due to an error are written to file, and the error message returned by the server is added as a comment. Most useful in conjunction with -c. -n Show what would be done, but don't actually modify entries. Useful for debugging in conjunction with -v. -v Use verbose mode, with many diagnostics written to standard output. -M[M] Enable manage DSA IT control. -MM makes control critical. -d debuglevel Set the LDAP debugging level to debuglevel. ldapmodify must be compiled with LDAP_DEBUG defined for this option to have any effect. -f file Read the entry modification information from file instead of from standard input. -x Use simple authentication instead of SASL. -D binddn Use the Distinguished Name binddn to bind to the LDAP directory. For SASL binds, the server is expected to ignore this value. -W Prompt for simple authentication. This is used instead of specifying the password on the command line. -w passwd Use passwd as the password for simple authentication. -y passwdfile Use complete contents of passwdfile as the password for simple authentication. -H ldapuri Specify URI(s) referring to the ldap server(s); only the protocol/host/port fields are allowed; a list of URIs, separated by whitespace or commas, is expected. -h ldaphost Specify an alternate host on which the ldap server is running. Deprecated in favor of -H. -p ldapport Specify an alternate TCP port where the ldap server is listening. Deprecated in favor of -H. -P {2|3} Specify the LDAP protocol version to use. -O security-properties Specify SASL security properties. -e [!]ext[=extparam] -E [!]ext[=extparam] Specify general extensions with -e and search extensions with -E.
'!' indicates criticality. General extensions: [!]assert=<filter> (an RFC 4515 Filter) [!]authzid=<authzid> ("dn:<dn>" or "u:<user>") [!]manageDSAit [!]noop ppolicy [!]postread[=<attrs>] (a comma-separated attribute list) [!]preread[=<attrs>] (a comma-separated attribute list) abandon, cancel (SIGINT sends abandon/cancel; not really controls) Search extensions: [!]domainScope (domain scope) [!]mv=<filter> (matched values filter) [!]pr=<size>[/prompt|noprompt] (paged results/prompt) [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...] (server side sorting) [!]subentries[=true|false] (subentries) [!]sync=ro[/<cookie>] (LDAP Sync refreshOnly) rp[/<cookie>][/<slimit>] (LDAP Sync refreshAndPersist) -I Enable SASL Interactive mode. Always prompt. Default is to prompt only as needed. -Q Enable SASL Quiet mode. Never prompt. -U authcid Specify the authentication ID for SASL bind. The form of the ID depends on the actual SASL mechanism used. -R realm Specify the realm of authentication ID for SASL bind. The form of the realm depends on the actual SASL mechanism used. -X authzid Specify the requested authorization ID for SASL bind. authzid must be one of the following formats: dn:<distinguished name> or u:<username> -Y mech Specify the SASL mechanism to be used for authentication. If it's not specified, the program will choose the best mechanism the server knows. -Z[Z] Issue StartTLS (Transport Layer Security) extended operation. If you use -ZZ, the command will require the operation to be successful. INPUT FORMAT The contents of file (or standard input if no -f flag is given on the command line) must conform to the format defined in ldif(5) (LDIF as defined in RFC 2849).
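LDIF change records of the kind ldapmodify reads are plain text and easy to generate programmatically. An illustrative sketch (the DN and attribute values are taken from the examples below; a "-" line separates the individual changes within one record, per RFC 2849):

```python
# Illustrative sketch: build an LDIF "changetype: modify" record of the
# form ldapmodify accepts on standard input or via -f.
def ldif_modify(dn, changes):
    lines = [f"dn: {dn}", "changetype: modify"]
    for op, attr, values in changes:
        lines.append(f"{op}: {attr}")            # replace / add / delete
        lines += [f"{attr}: {v}" for v in values]
        lines.append("-")                        # terminates each change
    return "\n".join(lines) + "\n"

record = ldif_modify("cn=Modify Me,dc=example,dc=com", [
    ("replace", "mail", ["modme@example.com"]),
    ("add", "title", ["Grand Poobah"]),
    ("delete", "description", []),
])
print(record)
```

Note this sketch handles only simple textual values; values needing base64 encoding (the `::` form in ldif(5)) are out of scope here.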
Assuming that the file /tmp/entrymods exists and has the contents: dn: cn=Modify Me,dc=example,dc=com changetype: modify replace: mail mail: modme@example.com - add: title title: Grand Poobah - add: jpegPhoto jpegPhoto:< file:///tmp/modme.jpeg - delete: description - the command: ldapmodify -f /tmp/entrymods will replace the contents of the "Modify Me" entry's mail attribute with the value "modme@example.com", add a title of "Grand Poobah", and the contents of the file "/tmp/modme.jpeg" as a jpegPhoto, and completely remove the description attribute. Assuming that the file /tmp/newentry exists and has the contents: dn: cn=Barbara Jensen,dc=example,dc=com objectClass: person cn: Barbara Jensen cn: Babs Jensen sn: Jensen title: the world's most famous mythical manager mail: bjensen@example.com uid: bjensen the command: ldapadd -f /tmp/newentry will add a new entry for Babs Jensen, using the values from the file /tmp/newentry. Assuming that the file /tmp/entrymods exists and has the contents: dn: cn=Barbara Jensen,dc=example,dc=com changetype: delete the command: ldapmodify -f /tmp/entrymods will remove Babs Jensen's entry. DIAGNOSTICS Exit status is zero if no errors occur. Errors result in a non-zero exit status and a diagnostic message being written to standard error. SEE ALSO ldapadd(1), ldapdelete(1), ldapmodrdn(1), ldapsearch(1), ldap.conf(5), ldap(3), ldap_add_ext(3), ldap_delete_ext(3), ldap_modify_ext(3), ldap_modrdn_ext(3), ldif(5), slapd.replog(5) AUTHOR The OpenLDAP Project <http://www.openldap.org/> ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 LDAPMODIFY(1)
pwpolicy
pwpolicy manipulates password policies.
pwpolicy – gets and sets password policies
pwpolicy [-h] pwpolicy [-v] [-a authenticator] [-p password] [-u username | -c computername] [-n nodename] command command-arg pwpolicy [-v] [-a authenticator] [-p password] [-u username | -c computername] [-n nodename] command "policy1=value1 policy2=value2 ..."
-a name of the authenticator -c name of the computer account to modify -p password (omit this option for a secure prompt) -u name of the user account to modify -n use a specific directory node; the search node is used by default. -v verbose -h help Commands -getglobalpolicy Get global policies. DEPRECATED. -setglobalpolicy Set global policies. DEPRECATED. -getpolicy Get policies for a user. DEPRECATED. --get-effective-policy Gets the combination of global and user policies that apply to the user. DEPRECATED. -setpolicy Set policies for a user. DEPRECATED. -setpassword Set a new password for a user. Non- administrators can use this command to change their own passwords. -enableuser Enable a user account that was disabled by a password policy event. -disableuser Disable a user account. -getglobalhashtypes Returns the default list of password hashes stored on disk for this system. -setglobalhashtypes Edits the default list of password hashes stored on disk for this system. -gethashtypes Returns a list of password hashes stored on disk for a user account. -sethashtypes Edits the list of password hashes stored on disk for a user account. -setaccountpolicies Sets (replaces) the account policies for the specified user. If no user is specified, sets the global account policies. Takes one argument: the name of the file containing the policies. -getaccountpolicies Gets the account policies for the specified user. If no user is specified, gets the global account policies. -clearaccountpolicies Removes all of the account policies for the specified user. If no user is specified, removes the global account policies. -authentication-allowed Determines if the policies allow the user to authenticate. Account Policies Account policies are the replacement for the deprecated legacy global and user policies. Account policies are specified as a dictionary containing three keys, one key for each policy category.
Note that the dictionary is not required to contain all of the policy categories. Valid keys for the policy categories are: policyCategoryAuthentication Controls when a user may login/authenticate. policyCategoryPasswordChange Determines if/when a user is required to change their password. policyCategoryPasswordContent Controls the set of allowable characters in a password. Each policy category contains an array of individual policy dictionaries. Valid keys in the policy dictionary are: policyIdentifier A user-defined unique identifier for the policy. policyParameters An optional key that contains a dictionary of parameters to be used in the policy or used for display purposes. policyContent The actual policy string, from which an NSPredicate can be created. Any valid NSPredicate keyword may be used, as well as certain parameters from the user's record and the policy's parameters dictionary. Below is an example account policy dictionary. Not all policy categories need be present in the dictionary.
<dict>
    <key>policyCategoryAuthentication</key>
    <array>
        <dict>
            <key>policyContent</key>
            <string>policyAttributeFailedAuthentications &lt; policyAttributeMaximumFailedAuthentications</string>
            <key>policyIdentifier</key>
            <string>failed auths</string>
        </dict>
    </array>
    <key>policyCategoryPasswordChange</key>
    <array>
        <dict>
            <key>policyContent</key>
            <string>policyAttributeCurrentTime &gt; policyAttributeLastPasswordChangeTime + policyAttributeExpiresEveryNDays * DAYS_TO_SECONDS</string>
            <key>policyIdentifier</key>
            <string>Change every 30 days</string>
            <key>policyParameters</key>
            <dict>
                <key>policyAttributeExpiresEveryNDays</key>
                <integer>30</integer>
            </dict>
        </dict>
    </array>
    <key>policyCategoryPasswordContent</key>
    <array>
        <dict>
            <key>policyContent</key>
            <string>policyAttributePassword matches '.{3,}+'</string>
            <key>policyIdentifier</key>
            <string>com.apple.policy.legacy.minChars</string>
            <key>policyParameters</key>
            <dict>
                <key>minimumLength</key>
                <integer>3</integer>
            </dict>
        </dict>
    </array>
</dict>
Account Policy Keywords The following keywords may be used in the policy content. The values from the user's record will be substituted for the keyword when the policy is evaluated. User-defined keywords may also be used, as long as the keyword is present in the policy's parameters dictionary. policyAttributePassword User's new password. policyAttributePasswordHashes Hashes of the new password. Compared against the history. policyAttributePasswordHistory User's password history. policyAttributePasswordHistoryDepth How much password history to keep. policyAttributeCurrentDate Current date and time as an NSDate. Use for comparing localized NSDates. policyAttributeCurrentTime Current date and time in seconds. Used for date/time calculations, i.e. date + interval. policyAttributeCurrentDayOfWeek Current day of the week (integer). policyAttributeCurrentTimeOfDay Current time of day (0000 to 2359). policyAttributeFailedAuthentications Number of consecutive failed authentication attempts.
     policyAttributeMaximumFailedAuthentications
             Maximum allowed consecutive failed authentication attempts.
     policyAttributeLastFailedAuthenticationTime
             Time of the last failed authentication.
     policyAttributeLastAuthenticationTime
             Time of the last successful authentication.
     policyAttributeLastPasswordChangeTime
             Time of the last password change.
     policyAttributeNewPasswordRequiredTime
             Time when a new password is required.
     policyAttributeCreationTime
             Time when the account was created.
     policyAttributeConsecutiveCharacters
             Number of consecutive (i.e. a run of the same) characters in a password.
     policyAttributeMaximumConsecutiveCharacters
             Maximum number of consecutive characters allowed in a password.
     policyAttributeSequentialCharacters
             Number of sequential (ascending or descending) characters in a password.
     policyAttributeMaximumSequentialCharacters
             Maximum allowed number of sequential (ascending or descending) characters in a password.
     policyAttributeExpiresEveryNDays
             Expires every n number of days.
     policyAttributeDaysUntilExpiration
             Synonym for the above.
     policyAttributeEnableOnDate
             Date on which the account is enabled (localized NSDate).
     policyAttributeExpiresOnDate
             Date on which the account will expire (localized NSDate).
     policyAttributeEnableOnDayOfWeek
             Day of week on which the account is enabled (integer).
     policyAttributeExpiresOnDayOfWeek
             Day of week on which the account will expire (integer).
     policyAttributeEnableAtTimeOfDay
             Time of day at which the account is enabled (integer, 0000-2359).
     policyAttributeExpiresAtTimeOfDay
             Time of day at which the account will expire (integer, 0000-2359).

   Legacy Global Policies (DEPRECATED)
     usingHistory
             0 = user can reuse the current password, 1 = user cannot reuse the current password, 2-15 = user cannot reuse the last n passwords.
     usingExpirationDate
             If 1, user is required to change password on the date in expirationDateGMT.
     usingHardExpirationDate
             If 1, user's account is disabled on the date in hardExpireDateGMT.
     requiresAlpha
             If 1, user's password is required to have a character in [A-Z][a-z].
     requiresNumeric
             If 1, user's password is required to have a character in [0-9].
     expirationDateGMT
             Date for the password to expire; format must be: mm/dd/yy
     hardExpireDateGMT
             Date for the user's account to be disabled; format must be: mm/dd/yy
     validAfter
             Date for the user's account to be enabled; format must be: mm/dd/yy
     maxMinutesUntilChangePassword
             User is required to change the password at this interval.
     maxMinutesUntilDisabled
             User's account is disabled after this interval.
     maxMinutesOfNonUse
             User's account is disabled if it is not accessed within this interval.
     maxFailedLoginAttempts
             User's account is disabled if the failed login count exceeds this number.
     minChars
             Passwords must contain at least minChars characters.
     maxChars
             Passwords are limited to maxChars characters.

   Additional Legacy User Policies (DEPRECATED)
     isDisabled
             If 1, user account is not allowed to authenticate, ever.
     isAdminUser
             If 1, this user can administer accounts on the password server.
     newPasswordRequired
             If 1, the user will be prompted for a new password at the next authentication. Applications that do not support changing the password will not authenticate.
     canModifyPasswordforSelf
             If 1, the user can change the password.

   Stored Hash Types
     CRAM-MD5
             Required for IMAP.
     RECOVERABLE
             Required for APOP and WebDAV. Only available on Mac OS X Server edition.
     SALTED-SHA512-PBKDF2
             The default for loginwindow.
     SALTED-SHA512
             Legacy hash for loginwindow.
     SMB-NT
             Required for compatibility with Windows NT/XP file sharing.
     SALTED-SHA1
             Legacy hash for loginwindow.
     SHA1
             Legacy hash for loginwindow.
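To illustrate how a policyContent expression is evaluated, the following Python sketch mirrors the "Change every 30 days" policy from the example plist above. This is illustrative only: the function name and the assumed value of DAYS_TO_SECONDS are not part of pwpolicy, which evaluates these expressions internally.

```python
import time

DAYS_TO_SECONDS = 24 * 60 * 60  # assumed value of the policy constant

def password_change_required(last_change_time, expires_every_n_days, now=None):
    # Mirrors: policyAttributeCurrentTime > policyAttributeLastPasswordChangeTime
    #          + policyAttributeExpiresEveryNDays * DAYS_TO_SECONDS
    now = time.time() if now is None else now
    return now > last_change_time + expires_every_n_days * DAYS_TO_SECONDS

changed = 1_000_000  # time of the last password change, in seconds
assert password_change_required(changed, 30, now=changed + 31 * DAYS_TO_SECONDS)
assert not password_change_required(changed, 30, now=changed + 29 * DAYS_TO_SECONDS)
```

A password changed 31 days ago trips a 30-day policy; one changed 29 days ago does not, because the comparison is a strict greater-than.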
     To get global policies:
           pwpolicy -getglobalpolicy

     To set global policies:
           pwpolicy -a authenticator -setglobalpolicy "minChars=4 maxFailedLoginAttempts=3"

     To get policies for a specific user account:
           pwpolicy -u user -getpolicy
           pwpolicy -u user -n /NetInfo/DefaultLocalNode -getpolicy

     To set policies for a specific user account:
           pwpolicy -a authenticator -u user -setpolicy "minChars=4 maxFailedLoginAttempts=3"

     To change the password for a user:
           pwpolicy -a authenticator -u user -setpassword newpassword

     To set the list of hash types for local accounts:
           pwpolicy -a authenticator -setglobalhashtypes SMB-LAN-MANAGER off SMB-NT on

SEE ALSO
     PasswordService(8)

Mac OS X                       13 November 2002                       Mac OS X
profiles
profiles is used to handle various profile types on macOS. Starting with macOS 11.0 (profiles tool 8.0 or later), this tool cannot be used to install configuration profiles. You should add your profiles using the System Settings Profiles preference pane. Additionally, startup profiles are no longer supported.

VERBS
     Each command verb is listed with its description and optional individual arguments. Most commands use the -type option to determine which kind of profile should be used in the command. For those commands, if no type is specified, the default will be to use the configuration profile type.

     help
             Shows abbreviated help.

     list -type profile_type -user user_name -output output_path
             List profiles for a user or, when running as root, the device.

     show -type profile_type -user user_name -output output_path -cached
             Show expanded information for profiles. If the type is set to enrollment, this will instead show the current server DEP configuration. Obtaining the enrollment information may be rate limited to 10 times every 23 hours, at which point it will try to display the information from the local cache. Should this occur, a note will be displayed with the enrollment information. If the -cached option is set, the enrollment information will only be obtained from the local cache.

     remove -type profile_type -user user_name -identifier identifier -uuid uuid -path file_path -forced -all
             Remove profiles. Attempting to remove a configuration profile requiring a removal password without the correct password will fail.

     status -type profile_type
             Display status of the profiles installed on this client. When displaying the enrollment type status, if the MDM enrollment was user approved, the status output will show "(User Approved)".

     sync -type configuration
             For configuration profiles, synchronize the currently installed set of profiles with the local users and remove any configuration profiles that belong to users that no longer exist on this computer.
renew -type profile_type -identifier identifier -output output_path For configuration profiles, renews any certificates for the specified profile. For Device Enrollment Program (DEP) enrollments, retry to obtain the device enrollment configuration, and re-enable the user notification if enrollment wasn't completed. With enrollments, this call may be rate limited to 10 times every 23 hours, though it should try to enroll with cached information if it is available. validate -type profile_type -path file_path For provisioning profiles, validate the provisioning profile located at the file_path. For enrollments, re-validate the installed DEP server information and update any local information, displaying any major changes. If this information is different from the current enrolled server, this will not unenroll the client from the current server. This call may be rate limited to 10 times every 23 hours. version Displays current tool version.
profiles – Profiles Tool for macOS.
profiles verb [options]
     -type profile_type
             The profile_type can be one of: "configuration", "provisioning", "bootstraptoken", or "enrollment" (DEP). If a command requires a profile type and none is specified, "configuration" will be used.

     -path file_path
             A file path or "-" to represent stdout. When used by the remove command for startup profiles, this should only contain the file name of the profile.

     -user user_name
             An OD short user name. In most cases, if no user is specified, the current user will be used. If no user option was specified and the process runs as root, the computer/device profiles will be used in the command.

     -uuid profile_uuid
             A canonical form UUID to specify a profile's PayloadUUID, such as 5A15247B-899C-474D-B1D7-DBD82BDE5684. Only used by the remove provisioning profile command.

     -identifier profile_identifier
             A profile identifier (PayloadIdentifier) to specify a profile.

     -output output_path
             The output path location. The output_path argument must be specified to use this option. Use 'stdout' to send this information to the console. File output will be written as an XML plist file, or you can use 'stdout-xml' to write XML to the console. The top-level key of the dictionary will contain either the user name, or _computerLevel for device or provisioning profile information.

     -password password
             An optional password used when removing a configuration profile which requires the password removal option.

     -forced
             This will prevent confirmation requests, and when trying to remove all configuration profiles for a user, it will ignore any errors during removal.

     -all
             For configuration profiles, when running as root, the use of this option with the list or show commands will display all profiles installed on the system. When removing profiles, using this option will remove all profiles for that user (or device).

     -cached
             A flag to indicate that information should only be obtained from the local cache. This is currently used for the show command's enrollment option.
-verbose Display additional information. PROFILE TYPES configuration A configuration profile. provisioning A provisioning profile. enrollment A device enrollment program (DEP) or mobile device management (MDM) enrollment profile or feature. bootstraptoken Bootstrap Token options. Requires MDM supervised client.
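Because -output can emit an XML plist whose top-level keys are user names or _computerLevel, its output can be post-processed with standard plist tooling. A minimal Python sketch follows; the sample payload and the per-profile key name (ProfileIdentifier) are hypothetical illustrations, not a guaranteed output schema.

```python
import plistlib

# Hypothetical capture of `profiles list -output stdout-xml`. Top-level
# keys are user names or _computerLevel, per the -output description above.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>_computerLevel</key>
  <array>
    <dict><key>ProfileIdentifier</key><string>com.example.profile1</string></dict>
  </array>
</dict></plist>"""

data = plistlib.loads(sample)
for owner, entries in data.items():
    # _computerLevel marks device-scoped profiles; other keys are user names.
    scope = "device" if owner == "_computerLevel" else "user " + owner
    for entry in entries:
        print(scope, entry["ProfileIdentifier"])
```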
     profiles remove -path /profiles/testfile2.mobileconfig
             Removes the configuration profile '/profiles/testfile2.mobileconfig' for the current user.

     profiles list -type provisioning
             Displays a list of installed provisioning profiles.

     profiles list -all
             When running as root, this will list all configuration profiles on the system.

     profiles show
             Displays extended information for installed configuration profiles for the current user.

     profiles status -type startup
             Displays information on whether or not startup profiles are set up.

     profiles remove -identifier com.example.profile1 -password pass
             Removes any installed profiles with the identifier com.example.profile1 for the current user, using a removal password of 'pass'.

     profiles show -type enrollment
             Displays the current DEP configuration information.

     profiles renew -type enrollment
             Re-enables the DEP user notification enrollment messages.

     profiles install -type bootstraptoken
             Creates or updates the Bootstrap Token APFS record and escrows the information to the server.

     profiles show -type enrollment -cached
             Displays the cached information of an existing DEP enrollment configuration.

SEE ALSO
     profiles.old(1)

macOS                            March 24, 2022                          macOS
yacc
On Darwin, yacc is a wrapper for bison(1), equivalent to bison -y. SEE ALSO bison(1) Darwin May 8, 2005 Darwin
yacc – parser generator
kswitch
kswitch makes the specified credential cache the primary cache for the collection, if a cache collection is available.
kswitch - switch primary ticket cache
kswitch {-c cachename|-p principal}
     -c cachename
             Directly specifies the credential cache to be made primary.

     -p principal
             Causes the cache collection to be searched for a cache containing credentials for principal. If one is found, that cache is made primary.

ENVIRONMENT
     See kerberos(7) for a description of Kerberos environment variables.

FILES
     KCM:    Default location of Kerberos 5 credentials cache

SEE ALSO
     kinit(1), kdestroy(1), klist(1), kerberos(7)

AUTHOR
     MIT

COPYRIGHT
     1985-2022, MIT

1.20.1                                                              KSWITCH(1)
xcodebuild
xcodebuild builds one or more targets contained in an Xcode project, or builds a scheme contained in an Xcode workspace or Xcode project. Usage To build an Xcode project, run xcodebuild from the directory containing your project (i.e. the directory containing the name.xcodeproj package). If you have multiple projects in this directory you will need to use -project to indicate which project should be built. By default, xcodebuild builds the first target listed in the project, with the default build configuration. The order of the targets is a property of the project and is the same for all users of the project. To build an Xcode workspace, you must pass both the -workspace and -scheme options to define the build. The parameters of the scheme will control which targets are built and how they are built, although you may pass other options to xcodebuild to override some parameters of the scheme. There are also several options that display info about the installed version of Xcode or about projects or workspaces in the local directory, but which do not initiate an action. These include -list, -showBuildSettings, -showdestinations, -showsdks, -showTestPlans, -usage, and -version.
xcodebuild – build Xcode projects and workspaces
xcodebuild [-project name.xcodeproj] [[-target targetname] ... | -alltargets] [-configuration configurationname] [-sdk [sdkfullpath | sdkname]] [action ...] [buildsetting=value ...] [-userdefault=value ...] xcodebuild [-project name.xcodeproj] -scheme schemename [[-destination destinationspecifier] ...] [-destination-timeout value] [-configuration configurationname] [-sdk [sdkfullpath | sdkname]] [action ...] [buildsetting=value ...] [-userdefault=value ...] xcodebuild -workspace name.xcworkspace -scheme schemename [[-destination destinationspecifier] ...] [-destination-timeout value] [-configuration configurationname] [-sdk [sdkfullpath | sdkname]] [action ...] [buildsetting=value ...] [-userdefault=value ...] xcodebuild -version [-sdk [sdkfullpath | sdkname]] [infoitem] xcodebuild -showsdks xcodebuild -showBuildSettings [-project name.xcodeproj | [-workspace name.xcworkspace -scheme schemename]] xcodebuild -showdestinations [-project name.xcodeproj | [-workspace name.xcworkspace -scheme schemename]] xcodebuild -showTestPlans [-project name.xcodeproj | -workspace name.xcworkspace] -scheme schemename xcodebuild -list [-project name.xcodeproj | -workspace name.xcworkspace] xcodebuild -exportArchive -archivePath xcarchivepath -exportPath destinationpath -exportOptionsPlist path xcodebuild -exportNotarizedApp -archivePath xcarchivepath -exportPath destinationpath xcodebuild -exportLocalizations -project name.xcodeproj -localizationPath path [[-exportLanguage language] ...] xcodebuild -importLocalizations -project name.xcodeproj -localizationPath path
-project name.xcodeproj Build the project name.xcodeproj. Required if there are multiple project files in the same directory. -target targetname Build the target specified by targetname. -alltargets Build all the targets in the specified project. -workspace name.xcworkspace Build the workspace name.xcworkspace. -scheme schemename Build the scheme specified by schemename. Required if building a workspace. -destination destinationspecifier Use the destination device described by destinationspecifier. Defaults to a destination that is compatible with the selected scheme. See the Destinations section below for more details. -destination-timeout timeout Use the specified timeout when searching for a destination device. The default is 30 seconds. -configuration configurationname Use the build configuration specified by configurationname when building each target. -arch architecture Use the architecture specified by architecture when building each target. -sdk [sdkfullpath | sdkname] Build an Xcode project or workspace against the specified SDK, using build tools appropriate for that SDK. The argument may be an absolute path to an SDK, or the canonical name of an SDK. -showsdks Lists all available SDKs that Xcode knows about, including their canonical names suitable for use with -sdk. Does not initiate a build. -showBuildSettings Lists the build settings for targets in a project or workspace. Does not initiate a build. Use with -target or with -scheme. With -scheme, optionally pass a build action (such as build or test) to use targets from the matching scheme action. -showdestinations Lists the valid destinations for a project or workspace and scheme. Does not initiate a build. Use with -project or -workspace and -scheme. -showBuildTimingSummary Display a report of the timings of all the commands invoked during the build. -showTestPlans Lists the test plans (if any) associated with the specified scheme. Does not initiate a build. Use with -scheme. 
-list Lists the targets and configurations in a project, or the schemes in a workspace. Does not initiate a build. Use with -project or -workspace. -enableAddressSanitizer [YES | NO] Turns the address sanitizer on or off. This overrides the setting for the launch action of a scheme in a workspace. -enableThreadSanitizer [YES | NO] Turns the thread sanitizer on or off. This overrides the setting for the launch action of a scheme in a workspace. -enableUndefinedBehaviorSanitizer [YES | NO] Turns the undefined behavior sanitizer on or off. This overrides the setting for the launch action of a scheme in a workspace. -enableCodeCoverage [YES | NO] Turns code coverage on or off during testing. This overrides the setting for the test action of a scheme in a workspace. -testLanguage language Specifies ISO 639-1 language during testing. This overrides the setting for the test action of a scheme in a workspace. -testRegion region Specifies ISO 3166-1 region during testing. This overrides the setting for the test action of a scheme in a workspace. -derivedDataPath path Overrides the folder that should be used for derived data when performing an action on a scheme in a workspace. -resultBundlePath path Writes a bundle to the specified path with results from performing an action on a scheme in a workspace. If the path already exists, xcodebuild will exit with an error. Intermediate directories will be created automatically. The bundle contains build logs, code coverage reports, XML property lists with test results, screenshots and other attachments collected during testing, and various diagnostic logs. -allowProvisioningUpdates Allow xcodebuild to communicate with the Apple Developer website. For automatically signed targets, xcodebuild will create and update profiles, app IDs, and certificates. For manually signed targets, xcodebuild will download missing or updated provisioning profiles. Requires a developer account to have been added in Xcode's Accounts preference pane. 
-allowProvisioningDeviceRegistration Allow xcodebuild to register your destination device on the Apple Developer website if necessary. Requires -allowProvisioningUpdates. -authenticationKeyPath Specifies the path to an authentication key issued by App Store Connect. If specified, xcodebuild will authenticate with the Apple Developer website using this credential. Requires -authenticationKeyID and -authenticationKeyIssuerID. -authenticationKeyID Specifies the key identifier associated with the App Store Connect authentication key at -authenticationKeyPath. This string can be located in the users and access details for your provider at ⟨URL: https://appstoreconnect.apple.com ⟩ -authenticationKeyIssuerID Specifies the App Store Connect issuer identifier associated with the authentication key at -authenticationKeyPath. This string can be located in the users and access details for your provider at ⟨URL: https://appstoreconnect.apple.com ⟩ -exportArchive Specifies that an archive should be distributed. Requires -archivePath and -exportOptionsPlist. For exporting, -exportPath is also required. Cannot be passed along with an action. -exportNotarizedApp Export an archive that has been notarized by Apple. Requires -archivePath and -exportPath. -archivePath xcarchivepath Specifies the path for the archive produced by the archive action, or specifies the archive that should be exported when -exportArchive or -exportNotarizedApp is passed. -exportPath destinationpath Specifies the destination for the exported product, including the name of the exported file. -exportOptionsPlist path Specifies options for -exportArchive. xcodebuild -help can print the full set of available options. -exportLocalizations Exports localizations to XLIFF files. Requires -project and -localizationPath. Cannot be passed along with an action. -importLocalizations Imports localizations from an XLIFF file. Requires -project and -localizationPath. Cannot be passed along with an action. 
-localizationPath Specifies a path to a directory or a single XLIFF localization file. -exportLanguage language Specifies optional ISO 639-1 languages included in a localization export. May be repeated to specify multiple languages. If omitted, the export includes only development language strings. action ... Specify one or more actions to perform. Available actions are: build Build the target in the build root (SYMROOT). This is the default action, and is used if no action is given. build-for-testing Build the target and associated tests in the build root (SYMROOT). This will also produce an xctestrun file in the build root. This requires specifying a scheme. analyze Build and analyze a target or scheme from the build root (SYMROOT). This requires specifying a scheme. archive Archive a scheme from the build root (SYMROOT). This requires specifying a scheme. test Test a scheme from the build root (SYMROOT). This requires specifying a scheme and optionally a destination. test-without-building Test compiled bundles. If a scheme is provided with -scheme then the command finds bundles in the build root (SYMROOT). If an xctestrun file is provided with -xctestrun then the command finds bundles at paths specified in the xctestrun file. docbuild Build the target and associated documentation in the build root (SYMROOT). installsrc Copy the source of the project to the source root (SRCROOT). install Build the target and install it into the target's installation directory in the distribution root (DSTROOT). clean Remove build products and intermediate files from the build root (SYMROOT). -xcconfig filename Load the build settings defined in filename when building all targets. These settings will override all other settings, including settings passed individually on the command line. -testProductsPath path-to-xctestproducts Specifies the xctestproducts path. XCTestProducts are a unified test product format for XCTest. 
Can only be used with the build-for-testing or test-without-building actions. Cannot be used with -workspace or -project. When used with build-for-testing, the path will be used as the destination where the xctestproducts archive is written. Example path: MyProject_MyScheme.xctestproducts When used with test-without-building, the path will be used as the source of the xctestproducts archive to use for testing. -xctestrun xctestrunpath Specifies test run parameters. Can only be used with the test-without-building action. Cannot be used with -workspace or -project. See ⟨URL: x-man-page://5/xcodebuild.xctestrun ⟩ for file format details. -testPlan test-plan-name Specifies which test plan associated with the scheme should be used for testing. Pass the name of the .xctestplan file without its extension. -skip-testing test-identifier, -only-testing test-identifier Constrain test targets, classes, or methods in test actions. -only-testing constrains a test action to only testing a specified identifier, and excluding all other identifiers. -skip-testing constrains a test action to skip testing a specified identifier, but including all other identifiers. Test identifiers have the form TestTarget[/TestClass[/TestMethod]]. The TestTarget component of an identifier is the name of a unit or UI testing bundle as shown in the Test Navigator. An xcodebuild command can combine multiple constraint options, but -only-testing has precedence over -skip-testing. -skip-test-configuration test-configuration-name, -only-test-configuration test-configuration-name Constrain test configurations in test actions. -only-test-configuration constrains a test action to only test a specified test configuration within a test plan, and exclude all other test configurations. -skip-test-configuration constrains a test action to skip a specified test configuration, but include all other test configurations. 
Each test configuration name must match the name of a configuration specified in a test plan and is case-sensitive. An xcodebuild command can combine multiple constraint options, but -only-test-configuration has precedence over -skip-test-configuration. -disable-concurrent-destination-testing Do not run tests on the specified destinations concurrently. The full test suite will run to completion on a given destination before it begins on the next. -maximum-concurrent-test-device-destinations number If multiple device destinations are specified (and -disable-concurrent-destination-testing is not passed), only test on number devices at a time. For example, if four iOS devices are specified, but number is 2, the full test suite will run on each device, but only two devices will be testing at a given time. -maximum-concurrent-test-simulator-destinations number If multiple simulator destinations are specified (and -disable-concurrent-destination-testing is not passed), only test on number simulators at a time. For example, if four iOS simulators are specified, but number is 2, the full test suite will run on each simulator, but only two simulators will be testing at a given time. -parallel-testing-enabled [YES | NO] Overrides the per-target setting in the scheme for running tests in parallel. -parallel-testing-worker-count number Spawn exactly number test runners when executing tests in parallel. Overrides -maximum-parallel-testing-workers, if it is specified. -maximum-parallel-testing-workers number Limit the number of test runners that will be spawned when running tests in parallel to number. 
-parallelize-tests-among-destinations If parallel testing is enabled (either via the -parallel-testing-enabled option, or on an individual test-target basis) and multiple destination specifiers are passed, distribute test classes among the destinations, instead of running the entire test suite on each destination (which is the default behavior when multiple destination specifiers are passed). -test-timeouts-enabled [YES | NO] Enable or disable test timeout behavior. This value takes precedence over the value specified in the test plan. -default-test-execution-time-allowance seconds The default execution time an individual test is given to execute, if test timeouts are enabled. -maximum-test-execution-time-allowance seconds The maximum execution time an individual test is given to execute, regardless of the test's preferred allowance. -test-iterations number If specified, tests will run number times. May be used in conjunction with either -retry-tests-on-failure or -run-tests-until-failure, in which case this will become the maximum number of iterations. -retry-tests-on-failure If specified, tests will retry on failure. May be used in conjunction with -test-iterations number, in which case number will be the maximum number of iterations. Otherwise, a maximum of 3 is assumed. May not be used with -run-tests-until-failure. -run-tests-until-failure If specified, tests will run until they fail. May be used in conjunction with -test-iterations number, in which case number will be the maximum number of iterations. Otherwise, a maximum of 100 is assumed. May not be used with -retry-tests-on-failure. -test-repetition-relaunch-enabled [YES | NO] Whether or not each repetition of test should use a new process for its execution. Must be used in conjunction with -test-iterations, -retry-tests-on-failure, or -run-tests-until-failure. If not specified, tests will repeat in the same process. 
-collect-test-diagnostics [on-failure | never] Whether or not verbose and long-running diagnostics, like sysdiagnoses or log archives, are collected when testing. If not specified, the value in the test plan will be used. -enumerate-tests If specified alongside either the test or test-without-building actions, the set of tests that would normally execute will instead be listed/enumerated, and the list of tests will be output to either stdout (the default), or to a file whose location is specified via the -test-enumeration-output-path option. The format of the list of tests is controlled via the -test-enumeration-style and -test-enumeration-format options. Note that a -destination specifier must be supplied in order to enumerate the tests. -test-enumeration-style [hierarchical | flat] Whether tests should be enumerated in a hierarchical organization (the default), meaning grouped by test plan, target, and class, or as a flat list of test identifiers that can subsequently be passed to the -skip-testing and -only-testing options. -test-enumeration-format [text | json] Whether tests should be enumerated as human-readable text (the default), or as machine-parseable JSON. -test-enumeration-output-path [path | -] Specifies a file path where the list of tests computed by the -enumerate-tests option will be written to disk. If - is supplied, the data will be written to stdout (which is also the default if this option is omitted). -dry-run, -n Print the commands that would be executed, but do not execute them. -skipUnavailableActions Skip actions that cannot be performed instead of failing. This option is only honored if -scheme is passed. buildsetting=value Set the build setting buildsetting to value. A detailed reference of Xcode build settings can be found at: ⟨URL: https://developer.apple.com/documentation/xcode/build- settings-reference ⟩ -userdefault=value Set the user default userdefault to value. 
-toolchain [identifier | name] Use a given toolchain, specified with either an identifier or name. -quiet Do not print any output except for warnings and errors. -verbose Provide additional status output. -version Display version information for this install of Xcode. Does not initiate a build. When used in conjunction with -sdk, the version of the specified SDK is displayed, or all SDKs if -sdk is given no argument. Additionally, a single line of the reported version information may be returned if infoitem is specified. -license Show the Xcode and SDK license agreements. Allows for accepting the license agreements without launching Xcode itself, which is useful for headless systems. Must be run as a privileged user. -checkFirstLaunchStatus Check if any additional system components or configuration tasks need to be performed. Exits with a non-zero code if additional system content needs to be installed, using -runFirstLaunch. Exits with 0 if system components are up-to-date. -runFirstLaunch Install additional system components and agree to the license. -downloadAllPlatforms Download and install all available platforms. -usage Displays usage information for xcodebuild. Destinations The -destination option takes as its argument a destination specifier describing the device (or devices) to use as a destination. A destination specifier is a single argument consisting of a set of comma- separated key=value pairs. The -destination option may be specified multiple times to cause xcodebuild to perform the specified action on multiple destinations. Destination specifiers may include the platform key to specify one of the supported destination platforms. There are additional keys which should be supplied depending on the platform of the device you are selecting. Some devices may take time to look up. The -destination-timeout option can be used to specify the amount of time to wait before a device is considered unavailable. If unspecified, the default timeout is 30 seconds. 
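Since a destination specifier is just comma-separated key=value pairs, it can be parsed mechanically. The sketch below is illustrative only; it does not reproduce xcodebuild's own parsing or validate platform-specific keys.

```python
def parse_destination(spec):
    # Split a destination specifier such as
    # "platform=iOS Simulator,name=iPhone 14,OS=latest"
    # into a dict of key=value pairs. Values may contain spaces;
    # escaped commas and key validation are not handled here.
    pairs = {}
    for part in spec.split(","):
        key, _, value = part.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

dest = parse_destination("platform=iOS Simulator,name=iPhone 14,OS=latest")
assert dest == {"platform": "iOS Simulator", "name": "iPhone 14", "OS": "latest"}
```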
Currently, xcodebuild supports these platforms: macOS The local Mac, referred to in the Xcode interface as My Mac, and which supports the following keys: arch The architecture to use, e.g. arm64 or x86_64. variant The optional variant to use, e.g. Mac Catalyst or macOS. iOS An iOS device, which supports the following keys: id The identifier of the device to use, as shown in the Devices window. A valid destination specifier must provide either id or name, but not both. name The name of the device to use. A valid destination specifier must provide either id or name, but not both. iOS Simulator A simulated iOS device, which supports the following keys: id The identifier of the simulated device to use, as shown in the Devices window. A valid destination specifier must provide either id or name, but not both. name The name of the simulated device to use. A valid destination specifier must provide either id or name, but not both. OS When specifying the simulated device by name, the iOS version for that simulated device, such as 6.0, or the string latest (the default) to indicate the most recent version of iOS supported by this version of Xcode. watchOS A watchOS app is always built and deployed nested inside of an iOS app. To use a watchOS device as your destination, specify a scheme which is configured to run a WatchKit app, and specify the iOS platform destination that is paired with the watchOS device you want to use. watchOS Simulator A watchOS Simulator app is always built and deployed nested inside of an iOS Simulator app. To use a watchOS Simulator device as your destination, specify a scheme which is configured to run a WatchKit app, and specify the iOS Simulator platform destination that is paired with the watchOS Simulator device you want to use. tvOS A tvOS device, which supports the following keys: id The identifier of the device to use, as shown in the Devices window. A valid destination specifier must provide either id or name, but not both. 
name The name of the device to use. A valid destination specifier must provide either id or name, but not both. tvOS Simulator A simulated tvOS device, which supports the following keys: id The identifier of the simulated device to use, as shown in the Devices window. A valid destination specifier must provide either id or name, but not both. name The name of the simulated device to use. A valid destination specifier must provide either id or name, but not both. OS When specifying the simulated device by name, the tvOS version for that simulated device, such as 9.0, or the string latest (the default) to indicate the most recent version of tvOS supported by this version of Xcode. DriverKit The DriverKit environment, which supports the following key: arch The architecture to use, e.g. arm64 or x86_64. Some actions (such as building) may be performed without an actual device present. To build against a platform generically instead of a specific device, the destination specifier may be prefixed with the optional string "generic/", indicating that the platform should be targeted generically. An example of a generic destination is the "Any iOS Device" destination displayed in Xcode's UI when no physical iOS device is present. Testing on Multiple Destinations When more than one destination is specified with the -destination option, xcodebuild tests on those destinations concurrently. In this mode, xcodebuild automatically chooses the number of devices and simulators that are used simultaneously. All enabled tests in the scheme or xctestrun file are run on each destination. Distributing Archives The -exportArchive option specifies that xcodebuild should distribute the archive specified by -archivePath using the options specified by -exportOptionsPlist. xcodebuild -help can print the full set of available inputs to -exportOptionsPlist. The product can either be uploaded to Apple or exported locally. The exported product will be placed at the path specified by -exportPath. 
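The options plist given to -exportOptionsPlist is an ordinary XML property list. A minimal sketch follows; the "development" method and the team ID are placeholder values, and xcodebuild -help prints the full set of supported keys:

```shell
# Hypothetical sketch: write a minimal export options plist.
# "development" and "ABCDE12345" are placeholders for a real
# distribution method and team identifier.
cat > export.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>method</key>
    <string>development</string>
    <key>teamID</key>
    <string>ABCDE12345</string>
</dict>
</plist>
EOF
# The plist is then passed to the export step, e.g.:
#   xcodebuild -exportArchive -archivePath MyApp.xcarchive \
#       -exportPath ExportDestination -exportOptionsPlist export.plist
```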
Archives that have been uploaded to the Apple notary service can be distributed using the -exportNotarizedApp option. This specifies that xcodebuild should export a notarized app from the archive specified by -archivePath and place the exported product at the path specified by -exportPath. If the archive has not completed processing by the notary service, or processing failed, then xcodebuild will exit and emit informational or error messages. When uploading an archive using the -exportArchive option, or exporting a notarized archive using the -exportNotarizedApp option, an Apple ID account belonging to the archive's development team is required. Enter the credentials for the Apple ID account using Xcode's Accounts preference pane before invoking xcodebuild. Environment Variables The following environment variables affect the execution of xcodebuild: XCODE_XCCONFIG_FILE Set to a path to a file, build settings in that file will be loaded and used when building all targets. These settings will override all other settings, including settings passed individually on the command line, and those in the file passed with the -xcconfig option. TEST_RUNNER_<VAR> Set an environment variable whose name is prefixed with TEST_RUNNER_ to have that variable passed, with its prefix stripped, to all test runner processes launched during a test action. For example, TEST_RUNNER_Foo=Bar xcodebuild test ... sets the environment variable Foo=Bar in the test runner's environment. Existing variables may be modified using the special token __CURRENT_VALUE__ to represent their current value. For example, TEST_RUNNER_Foo=__CURRENT_VALUE__:Bar appends the string :Bar to any existing value of Foo. Exit Codes xcodebuild exits with codes defined by sysexits(3). It will exit with EX_OK on success. 
On failure, it will commonly exit with EX_USAGE if any options appear malformed, EX_NOINPUT if any input files cannot be found, EX_IOERR if any files cannot be read or written, and EX_SOFTWARE if the commands given to xcodebuild fail. It may exit with other codes in less common scenarios.
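These exit codes can be checked from a wrapper script. The sketch below maps the common sysexits(3) constants to messages; the numeric values are the standard ones from sysexits(3), and the commented xcodebuild invocation uses a placeholder scheme name:

```shell
# Map common xcodebuild exit statuses to human-readable messages.
# The numeric values are the standard sysexits(3) constants.
describe_exit() {
    case "$1" in
        0)  echo "succeeded" ;;
        64) echo "malformed options (EX_USAGE)" ;;
        66) echo "input files not found (EX_NOINPUT)" ;;
        70) echo "build commands failed (EX_SOFTWARE)" ;;
        74) echo "file read/write error (EX_IOERR)" ;;
        *)  echo "other failure ($1)" ;;
    esac
}
# Typical use (MyScheme is a placeholder):
#   xcodebuild -scheme MyScheme build
#   describe_exit $?
describe_exit 70
```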
xcodebuild clean install Cleans the build directory; then builds and installs the first target in the Xcode project in the directory from which xcodebuild was started. xcodebuild -project MyProject.xcodeproj -target Target1 -target Target2 -configuration Debug Builds the targets Target1 and Target2 in the project MyProject.xcodeproj using the Debug configuration. xcodebuild -target MyTarget OBJROOT=/Build/MyProj/Obj.root SYMROOT=/Build/MyProj/Sym.root Builds the target MyTarget in the Xcode project in the directory from which xcodebuild was started, putting intermediate files in the directory /Build/MyProj/Obj.root and the products of the build in the directory /Build/MyProj/Sym.root. xcodebuild -sdk macosx10.6 Builds the Xcode project in the directory from which xcodebuild was started against the macOS 10.6 SDK. The canonical names of all available SDKs can be viewed using the -showsdks option. xcodebuild -workspace MyWorkspace.xcworkspace -scheme MyScheme Builds the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace. xcodebuild archive -workspace MyWorkspace.xcworkspace -scheme MyScheme Archives the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace. xcodebuild build-for-testing -workspace MyWorkspace.xcworkspace -scheme MyScheme -destination generic/platform=iOS Build tests and associated targets in the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace using the generic iOS device destination. The command also writes test parameters from the scheme to an xctestrun file in the built products directory. xcodebuild test-without-building -workspace MyWorkspace.xcworkspace -scheme MyScheme -destination 'platform=iOS Simulator,name=iPhone 5s' -destination 'platform=iOS,name=My iPad' Tests the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace using both the iOS Simulator and the device named iPhone 5s for the latest version of iOS. The command assumes the test bundles are in the build root (SYMROOT). 
(Note that the shell requires arguments to be quoted or otherwise escaped if they contain spaces.) xcodebuild test-without-building -xctestrun MyTestRun.xctestrun -destination 'platform=iOS Simulator,name=iPhone 5s' -destination 'platform=iOS,name=My iPad' Tests using both the iOS Simulator and the device named iPhone 5s. Test bundle paths and other test parameters are specified in MyTestRun.xctestrun. The command requires project binaries and does not require project source code. xcodebuild test -workspace MyWorkspace.xcworkspace -scheme MyScheme -destination 'platform=macOS,arch=x86_64' Tests the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace using the destination described as My Mac 64-bit in Xcode. xcodebuild test -workspace MyWorkspace.xcworkspace -scheme MyScheme -destination 'platform=macOS,arch=x86_64' -only-testing MyTests/FooTests/testFooWithBar Tests the scheme MyScheme in the Xcode workspace MyWorkspace.xcworkspace using the destination described as My Mac 64-bit in Xcode. Only the test testFooWithBar of the test suite FooTests, part of the MyTests testing bundle target, will be run. xcodebuild -exportArchive -archivePath MyMobileApp.xcarchive -exportPath ExportDestination -exportOptionsPlist 'export.plist' Exports the archive MyMobileApp.xcarchive to the path ExportDestination using the options specified in export.plist. xcodebuild -exportLocalizations -project MyProject.xcodeproj -localizationPath MyDirectory -exportLanguage zh-hans -exportLanguage es-MX Exports two XLIFF files to MyDirectory from MyProject.xcodeproj containing development language strings and translations for Simplified Chinese and Mexican Spanish. xcodebuild -exportLocalizations -project MyProject.xcodeproj -localizationPath MyDirectory Export a single XLIFF file to MyDirectory from MyProject.xcodeproj containing only development language strings. (In this case, the -exportLanguage parameter has been excluded.) 
xcodebuild -importLocalizations -project MyProject.xcodeproj -localizationPath MyLocalizations.xliff Imports localizations from MyLocalizations.xliff into MyProject.xcodeproj. Translations with issues will be reported but not imported. SEE ALSO ibtool(1), sysexits(3), xcode-select(1), xcrun(1), xed(1) Xcode Build Settings Reference ⟨URL: https://developer.apple.com/documentation/xcode/build-settings-reference ⟩ macOS June 20, 2016 macOS
tkpp
Tkpp is a GUI frontend to pp, which can turn Perl scripts into stand-alone PAR archives, Perl scripts, or executables. You can save the generated command line, and save and reload your Tkpp GUI configuration. Below is a short explanation of the Tkpp GUI. Menu File -> Save command line When you build or display a command line in the Tkpp GUI, you can save it to a separate file. This command line can then be executed from a terminal. File -> Save configuration Save your GUI configuration (all options used) so that it can be loaded and executed next time. File -> Load configuration Load a saved configuration file. All saved options will be set in the GUI. File -> Exit Close Tkpp. Help -> Tkpp documentation Display the POD documentation of Tkpp. Help -> pp documentation Display the POD documentation of pp. Help -> About Tkpp Display the version and authors of Tkpp. Help -> About pp Display the version and authors of pp (pp --version). Tabs GUI There are five tabs in the GUI: General Options, Information, Size, Other Options, and Output. Together, the tabs expose all options that can be used with pp, and all pp defaults are kept. Set the options as you like; when you have finished, you can display the command line or start building your package. Error and verbose messages appear in the Output tab. NOTES On Win32 systems, the build runs in a separate process, so the GUI does not freeze. The first time you use Tkpp, it will prompt you to install some CPAN modules needed by the GUI (such as Tk, Tk::ColoredButton...). SEE ALSO pp, PAR AUTHORS Tkpp was written by Doug Gruber and rewritten by Djibril Ousmanou. In the event this application breaks, you get both pieces :-) COPYRIGHT Copyright 2003, 2004, 2005, 2006, 2011, 2014, 2015 by Doug Gruber <doug(a)dougthug.com>, Audrey Tang <cpan@audreyt.org> and Djibril Ousmanou <djibel(a)cpan.org>. 
Neither this program nor the associated pp program impose any licensing restrictions on files generated by their execution, in accordance with the 8th article of the Artistic License: "Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package." Therefore, you are absolutely free to place any license on the resulting executable, as long as the packed 3rd-party libraries are also available under the Artistic License. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See LICENSE. perl v5.30.3 2020-03-08 TKPP(1)
tkpp - frontend to pp written in Perl/Tk.
To launch the GUI, just execute the command line: tkpp
jobs
Shell builtin commands are commands that can be executed within the running shell's process. Note that, in the case of csh(1) builtin commands, the command is executed in a subshell if it occurs as any component of a pipeline except the last.

If a command specified to the shell contains a slash ‘/’, the shell will not execute a builtin command, even if the last component of the specified command matches the name of a builtin command. Thus, while specifying “echo” causes a builtin command to be executed under shells that support the echo builtin command, specifying “/bin/echo” or “./echo” does not.

While some builtin commands may exist in more than one shell, their operation may be different under each shell which supports them. Below is a table which lists shell builtin commands, the standard shells that support them and whether they exist as standalone utilities. Only builtin commands for the csh(1) and sh(1) shells are listed here. Consult a shell's manual page for details on the operation of its builtin commands. Beware that the sh(1) manual page, at least, calls some of these commands “built-in commands” and some of them “reserved words”. Users of other shells may need to consult an info(1) page or other sources of documentation. Commands marked “No**” under External do exist externally, but are implemented as scripts using a builtin command of the same name.

     Command      External    csh(1)    sh(1)
     !            No          No        Yes
     %            No          Yes       No
     .            No          No        Yes
     :            No          Yes       Yes
     @            No          Yes       Yes
     [            Yes         No        Yes
     {            No          No        Yes
     }            No          No        Yes
     alias        No**        Yes       Yes
     alloc        No          Yes       No
     bg           No**        Yes       Yes
     bind         No          No        Yes
     bindkey      No          Yes       No
     break        No          Yes       Yes
     breaksw      No          Yes       No
     builtin      No          No        Yes
     builtins     No          Yes       No
     case         No          Yes       Yes
     cd           No**        Yes       Yes
     chdir        No          Yes       Yes
     command      No**        No        Yes
     complete     No          Yes       No
     continue     No          Yes       Yes
     default      No          Yes       No
     dirs         No          Yes       No
     do           No          No        Yes
     done         No          No        Yes
     echo         Yes         Yes       Yes
     echotc       No          Yes       No
     elif         No          No        Yes
     else         No          Yes       Yes
     end          No          Yes       No
     endif        No          Yes       No
     endsw        No          Yes       No
     esac         No          No        Yes
     eval         No          Yes       Yes
     exec         No          Yes       Yes
     exit         No          Yes       Yes
     export       No          No        Yes
     false        Yes         No        Yes
     fc           No**        No        Yes
     fg           No**        Yes       Yes
     filetest     No          Yes       No
     fi           No          No        Yes
     for          No          No        Yes
     foreach      No          Yes       No
     getopts      No**        No        Yes
     glob         No          Yes       No
     goto         No          Yes       No
     hash         No**        No        Yes
     hashstat     No          Yes       No
     history      No          Yes       No
     hup          No          Yes       No
     if           No          Yes       Yes
     jobid        No          No        Yes
     jobs         No**        Yes       Yes
     kill         Yes         Yes       Yes
     limit        No          Yes       No
     local        No          No        Yes
     log          No          Yes       No
     login        Yes         Yes       No
     logout       No          Yes       No
     ls-F         No          Yes       No
     nice         Yes         Yes       No
     nohup        Yes         Yes       No
     notify       No          Yes       No
     onintr       No          Yes       No
     popd         No          Yes       No
     printenv     Yes         Yes       No
     printf       Yes         No        Yes
     pushd        No          Yes       No
     pwd          Yes         No        Yes
     read         No**        No        Yes
     readonly     No          No        Yes
     rehash       No          Yes       No
     repeat       No          Yes       No
     return       No          No        Yes
     sched        No          Yes       No
     set          No          Yes       Yes
     setenv       No          Yes       No
     settc        No          Yes       No
     setty        No          Yes       No
     setvar       No          No        Yes
     shift        No          Yes       Yes
     source       No          Yes       No
     stop         No          Yes       No
     suspend      No          Yes       No
     switch       No          Yes       No
     telltc       No          Yes       No
     test         Yes         No        Yes
     then         No          No        Yes
     time         Yes         Yes       No
     times        No          No        Yes
     trap         No          No        Yes
     true         Yes         No        Yes
     type         No**        No        Yes
     ulimit       No**        No        Yes
     umask        No**        Yes       Yes
     unalias      No**        Yes       Yes
     uncomplete   No          Yes       No
     unhash       No          Yes       No
     unlimit      No          Yes       No
     unset        No          Yes       Yes
     unsetenv     No          Yes       No
     until        No          No        Yes
     wait         No**        Yes       Yes
     where        No          Yes       No
     which        Yes         Yes       No
     while        No          Yes       Yes

SEE ALSO csh(1), dash(1), echo(1), false(1), info(1), kill(1), login(1), nice(1), nohup(1), printenv(1), printf(1), pwd(1), sh(1), test(1), time(1), true(1), which(1), zsh(1) HISTORY The builtin manual page first appeared in FreeBSD 3.4. 
AUTHORS This manual page was written by Sheldon Hearn <sheldonh@FreeBSD.org>. macOS 14.5 December 21, 2010 macOS 14.5
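The slash rule described above is easy to check from a shell; type and command -v report how a name will be resolved (the exact output wording varies by shell):

```shell
# 'cd' exists only as a builtin; 'echo' exists both as a builtin and
# as the external utility /bin/echo.
type cd               # typically reports a shell builtin
command -v echo       # the name or path the shell will use
# A command name containing a slash always bypasses the builtin:
/bin/echo "runs the external utility"
```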
builtin, !, %, ., :, @, [, {, }, alias, alloc, bg, bind, bindkey, break, breaksw, builtins, case, cd, chdir, command, complete, continue, default, dirs, do, done, echo, echotc, elif, else, end, endif, endsw, esac, eval, exec, exit, export, false, fc, fg, filetest, fi, for, foreach, getopts, glob, goto, hash, hashstat, history, hup, if, jobid, jobs, kill, limit, local, log, login, logout, ls-F, nice, nohup, notify, onintr, popd, printenv, printf, pushd, pwd, read, readonly, rehash, repeat, return, sched, set, setenv, settc, setty, setvar, shift, source, stop, suspend, switch, telltc, test, then, time, times, trap, true, type, ulimit, umask, unalias, uncomplete, unhash, unlimit, unset, unsetenv, until, wait, where, which, while – shell built-in commands
See the built-in command description in the appropriate shell manual page.
c89
This is the name of the C language compiler as required by the IEEE Std 1003.1-2001 (“POSIX.1”) standard. The c89 compiler accepts the following options: -c Suppress the link-edit phase of the compilation, and do not remove any object files that are produced. -D name[=value] Define name as if by a C-language #define directive. If no “=value” is given, a value of 1 will be used. Note that in order to request a translation as specified by IEEE Std 1003.1-2001 (“POSIX.1”), you need to define _POSIX_C_SOURCE=200112L either in the source or using this option. The -D option has lower precedence than the -U option. That is, if name is used in both a -U and a -D option, name will be undefined regardless of the order of the options. The -D option may be specified more than once. -E Copy C-language source files to the standard output, expanding all preprocessor directives; no compilation will be performed. -g Produce symbolic information in the object or executable files. -I directory Change the algorithm for searching for headers whose names are not absolute pathnames to look in the directory named by the directory pathname before looking in the usual places. Thus, headers whose names are enclosed in double-quotes ("") will be searched for first in the directory of the file with the #include line, then in directories named in -I options, and last in the usual places. For headers whose names are enclosed in angle brackets (⟨⟩), the header will be searched for only in directories named in -I options and then in the usual places. Directories named in -I options shall be searched in the order specified. The -I option may be specified more than once. -L directory Change the algorithm of searching for the libraries named in the -l objects to look in the directory named by the directory pathname before looking in the usual places. Directories named in -L options will be searched in the order specified. The -L option may be specified more than once. 
-o outfile Use the pathname outfile, instead of the default a.out, for the executable file produced. -O optlevel If optlevel is zero, disable all optimizations. Otherwise, enable optimizations at the specified level. -s Produce object and/or executable files from which symbolic and other information not required for proper execution has been removed (stripped). -U name Remove any initial definition of name. The -U option may be specified more than once. -W 32|64 Set the pointer size for the compiled code to either 32 or 64 bits. If not specified, the pointer size matches the current host architecture. An operand is either in the form of a pathname or the form -l library. At least one operand of the pathname form needs to be specified. Supported operands are of the form: file.c A C-language source file to be compiled and optionally linked. The operand must be of this form if the -c option is used. file.a A library of object files, as produced by ar(1), passed directly to the link editor. file.o An object file produced by c89 -c, and passed directly to the link editor. -l library Search the library named liblibrary.a. A library will be searched when its name is encountered, so the placement of a -l operand is significant. SEE ALSO ar(1), c99(1), cc(1) STANDARDS The c89 utility interface conforms to IEEE Std 1003.1-2001 (“POSIX.1”). Since it is a wrapper around GCC, it is limited to the C89 features that GCC actually implements. macOS 14.5 October 7, 2002 macOS 14.5
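A typical invocation combining these options and operand forms might look like the following sketch; the source files, include and library paths, and program name are placeholders, and -l m requests the math library as described under the -l operand:

```shell
# Hypothetical sketch: compile two translation units as strict POSIX C
# (-c suppresses linking; -D defines the feature-test macro), then link
# the objects against libm with custom header and library search paths.
c89 -c -D _POSIX_C_SOURCE=200112L -I ./include main.c
c89 -c -D _POSIX_C_SOURCE=200112L -I ./include util.c
c89 -o myprog -L ./lib main.o util.o -l m
```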
c89 – standard C language compiler
c89 [-cEgs] [-D name[=value]] ... [-I directory ...] [-L directory ...] [-o outfile] [-O optlevel] [-U name ...] [-W 32|64] operand ...
sudo
sudo allows a permitted user to execute a command as the superuser or another user, as specified by the security policy. The invoking user's real (not effective) user-ID is used to determine the user name with which to query the security policy. sudo supports a plugin architecture for security policies, auditing, and input/output logging. Third parties can develop and distribute their own plugins to work seamlessly with the sudo front-end. The default security policy is sudoers, which is configured via the file /private/etc/sudoers, or via LDAP. See the Plugins section for more information. The security policy determines what privileges, if any, a user has to run sudo. The policy may require that users authenticate themselves with a password or another authentication mechanism. If authentication is required, sudo will exit if the user's password is not entered within a configurable time limit. This limit is policy-specific; the default password prompt timeout for the sudoers security policy is 0 minutes. Security policies may support credential caching to allow the user to run sudo again for a period of time without requiring authentication. By default, the sudoers policy caches credentials on a per-terminal basis for 5 minutes. See the timestamp_type and timestamp_timeout options in sudoers(5) for more information. By running sudo with the -v option, a user can update the cached credentials without running a command. On systems where sudo is the primary method of gaining superuser privileges, it is imperative to avoid syntax errors in the security policy configuration files. For the default security policy, sudoers(5), changes to the configuration files should be made using the visudo(8) utility which will ensure that no syntax errors are introduced. When invoked as sudoedit, the -e option (described below), is implied. Security policies and audit plugins may log successful and failed attempts to run sudo. 
If an I/O plugin is configured, the running command's input and output may be logged as well. The options are as follows: -A, --askpass Normally, if sudo requires a password, it will read it from the user's terminal. If the -A (askpass) option is specified, a (possibly graphical) helper program is executed to read the user's password and output the password to the standard output. If the SUDO_ASKPASS environment variable is set, it specifies the path to the helper program. Otherwise, if sudo.conf(5) contains a line specifying the askpass program, that value will be used. For example: # Path to askpass helper program Path askpass /usr/X11R6/bin/ssh-askpass If no askpass program is available, sudo will exit with an error. -a type, --auth-type=type Use the specified BSD authentication type when validating the user, if allowed by /etc/login.conf. The system administrator may specify a list of sudo-specific authentication methods by adding an “auth-sudo” entry in /etc/login.conf. This option is only available on systems that support BSD authentication. -B, --bell Ring the bell as part of the password prompt when a terminal is present. This option has no effect if an askpass program is used. -b, --background Run the given command in the background. It is not possible to use shell job control to manipulate background processes started by sudo. Most interactive commands will fail to work properly in background mode. -C num, --close-from=num Close all file descriptors greater than or equal to num before executing a command. Values less than three are not permitted. By default, sudo will close all open file descriptors other than standard input, standard output, and standard error when executing a command. The security policy may restrict the user's ability to use this option. The sudoers policy only permits use of the -C option when the administrator has enabled the closefrom_override option. 
-c class, --login-class=class Run the command with resource limits and scheduling priority of the specified login class. The class argument can be either a class name as defined in /etc/login.conf, or a single ‘-’ character. If class is -, the default login class of the target user will be used. Otherwise, the command must be run as the superuser (user-ID 0), or sudo must be run from a shell that is already running as the superuser. If the command is being run as a login shell, additional /etc/login.conf settings, such as the umask and environment variables, will be applied, if present. This option is only available on systems with BSD login classes. -D directory, --chdir=directory Run the command in the specified directory instead of the current working directory. The security policy may return an error if the user does not have permission to specify the working directory. -E, --preserve-env Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment. --preserve-env=list Indicates to the security policy that the user wishes to add the comma-separated list of environment variables to those preserved from the user's environment. The security policy may return an error if the user does not have permission to preserve the environment. This option may be specified multiple times. -e, --edit Edit one or more files instead of running a command. In lieu of a path name, the string "sudoedit" is used when consulting the security policy. If the user is authorized by the policy, the following steps are taken: 1. Temporary copies are made of the files to be edited with the owner set to the invoking user. 2. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). 
If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used. 3. If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed. To help prevent the editing of unauthorized files, the following restrictions are enforced unless explicitly allowed by the security policy: • Symbolic links may not be edited (version 1.8.15 and higher). • Symbolic links along the path to be edited are not followed when the parent directory is writable by the invoking user unless that user is root (version 1.8.16 and higher). • Files located in a directory that is writable by the invoking user may not be edited unless that user is root (version 1.8.16 and higher). Users are never allowed to edit device special files. If the specified file does not exist, it will be created. Unlike most commands run by sudo, the editor is run with the invoking user's environment unmodified. If the temporary file becomes empty after editing, the user will be prompted before it is installed. If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the edited copy will remain in a temporary file. -g group, --group=group Run the command with the primary group set to group instead of the primary group specified by the target user's password database entry. The group may be either a group name or a numeric group-ID (GID) prefixed with the ‘#’ character (e.g., ‘#0’ for GID 0). When running a command as a GID, many shells require that the ‘#’ be escaped with a backslash (‘\’). If no -u option is specified, the command will be run as the invoking user. In either case, the primary group will be set to group. The sudoers policy permits any of the target user's groups to be specified via the -g option as long as the -P option is not in use. 
-H, --set-home Request that the security policy set the HOME environment variable to the home directory specified by the target user's password database entry. Depending on the policy, this may be the default behavior. -h, --help Display a short help message to the standard output and exit. -h host, --host=host Run the command on the specified host if the security policy plugin supports remote commands. The sudoers plugin does not currently support running remote commands. This may also be used in conjunction with the -l option to list a user's privileges for the remote host. -i, --login Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile, or .login will be read by the shell. If a command is specified, it is passed to the shell as a simple command using the -c option. The command and any args are concatenated, separated by spaces, after escaping each character (including white space) with a backslash (‘\’) except for alphanumerics, underscores, hyphens, and dollar signs. If no command is specified, an interactive shell is executed. sudo attempts to change to that user's home directory before running the shell. The command is run with an environment similar to the one a user would receive at log in. Most shells behave differently when a command is specified as compared to an interactive session; consult the shell's manual for details. The Command environment section in the sudoers(5) manual documents how the -i option affects the environment in which a command is run when the sudoers policy is in use. -K, --remove-timestamp Similar to the -k option, except that it removes every cached credential for the user, regardless of the terminal or parent process ID. The next time sudo is run, a password must be entered if the security policy requires authentication. It is not possible to use the -K option in conjunction with a command or other option. 
This option does not require a password. Not all security policies support credential caching. -k, --reset-timestamp When used without a command, invalidates the user's cached credentials for the current session. The next time sudo is run in the session, a password must be entered if the security policy requires authentication. By default, the sudoers policy uses a separate record in the credential cache for each terminal (or parent process ID if no terminal is present). This prevents the -k option from interfering with sudo commands run in a different terminal session. See the timestamp_type option in sudoers(5) for more information. This option does not require a password, and was added to allow a user to revoke sudo permissions from a .logout file. When used in conjunction with a command or an option that may require a password, this option will cause sudo to ignore the user's cached credentials. As a result, sudo will prompt for a password (if one is required by the security policy) and will not update the user's cached credentials. Not all security policies support credential caching. -l, --list If no command is specified, list the privileges for the invoking user (or the user specified by the -U option) on the current host. A longer list format is used if this option is specified multiple times and the security policy supports a verbose output format. If a command is specified and is permitted by the security policy, the fully-qualified path to the command is displayed along with any args. If a command is specified but not allowed by the policy, sudo will exit with a status value of 1. -N, --no-update Do not update the user's cached credentials, even if the user successfully authenticates. Unlike the -k flag, existing cached credentials are used if they are valid. To detect when the user's cached credentials are valid (or when no authentication is required), the following can be used: sudo -Nnv Not all security policies support credential caching. 
-n, --non-interactive Avoid prompting the user for input of any kind. If a password is required for the command to run, sudo will display an error message and exit. -P, --preserve-groups Preserve the invoking user's group vector unaltered. By default, the sudoers policy will initialize the group vector to the list of groups the target user is a member of. The real and effective group-IDs, however, are still set to match the target user. -p prompt, --prompt=prompt Use a custom password prompt with optional escape sequences. The following percent (‘%’) escape sequences are supported by the sudoers policy: %H expanded to the host name including the domain name (only if the machine's host name is fully qualified or the fqdn option is set in sudoers(5)) %h expanded to the local host name without the domain name %p expanded to the name of the user whose password is being requested (respects the rootpw, targetpw, and runaspw flags in sudoers(5)) %U expanded to the login name of the user the command will be run as (defaults to root unless the -u option is also specified) %u expanded to the invoking user's login name %% two consecutive ‘%’ characters are collapsed into a single ‘%’ character The custom prompt will override the default prompt specified by either the security policy or the SUDO_PROMPT environment variable. On systems that use PAM, the custom prompt will also override the prompt specified by a PAM module unless the passprompt_override flag is disabled in sudoers. -R directory, --chroot=directory Change to the specified root directory (see chroot(8)) before running the command. The security policy may return an error if the user does not have permission to specify the root directory. -r role, --role=role Run the command with an SELinux security context that includes the specified role. -S, --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. 
-s, --shell Run the shell specified by the SHELL environment variable if it is set or the shell specified by the invoking user's password database entry. If a command is specified, it is passed to the shell as a simple command using the -c option. The command and any args are concatenated, separated by spaces, after escaping each character (including white space) with a backslash (‘\’) except for alphanumerics, underscores, hyphens, and dollar signs. If no command is specified, an interactive shell is executed. Most shells behave differently when a command is specified as compared to an interactive session; consult the shell's manual for details. -t type, --type=type Run the command with an SELinux security context that includes the specified type. If no type is specified, the default type is derived from the role. -U user, --other-user=user Used in conjunction with the -l option to list the privileges for user instead of for the invoking user. The security policy may restrict listing other users' privileges. When using the sudoers policy, the -U option is restricted to the root user and users with either the “list” privilege for the specified user or the ability to run any command as root or user on the current host. -T timeout, --command-timeout=timeout Used to set a timeout for the command. If the timeout expires before the command has exited, the command will be terminated. The security policy may restrict the user's ability to set timeouts. The sudoers policy requires that user-specified timeouts be explicitly enabled. -u user, --user=user Run the command as a user other than the default target user (usually root). The user may be either a user name or a numeric user-ID (UID) prefixed with the ‘#’ character (e.g., ‘#0’ for UID 0). When running commands as a UID, many shells require that the ‘#’ be escaped with a backslash (‘\’). Some security policies may restrict UIDs to those listed in the password database. 
The sudoers policy allows UIDs that are not in the password database as long as the targetpw option is not set. Other security policies may not support this. -V, --version Print the sudo version string as well as the version string of any configured plugins. If the invoking user is already root, the -V option will display the options passed to configure when sudo was built; plugins may display additional information such as default options. -v, --validate Update the user's cached credentials, authenticating the user if necessary. For the sudoers plugin, this extends the sudo timeout for another 5 minutes by default, but does not run a command. Not all security policies support cached credentials. -- The -- is used to delimit the end of the sudo options. Subsequent options are passed to the command. Options that take a value may only be specified once unless otherwise indicated in the description. This is to help guard against problems caused by poorly written scripts that invoke sudo with user-controlled input. Environment variables to be set for the command may also be passed as options to sudo in the form VAR=value, for example LD_LIBRARY_PATH=/usr/local/pkg/lib. Environment variables may be subject to restrictions imposed by the security policy plugin. The sudoers policy subjects environment variables passed as options to the same restrictions as existing environment variables with one important difference. If the setenv option is set in sudoers, the command to be run has the SETENV tag set or the command matched is ALL, the user may set variables that would otherwise be forbidden. See sudoers(5) for more information. COMMAND EXECUTION When sudo executes a command, the security policy specifies the execution environment for the command. 
Typically, the real and effective user and group IDs are set to match those of the target user, as specified in the password database, and the group vector is initialized based on the group database (unless the -P option was specified). The following parameters may be specified by security policy: • real and effective user-ID • real and effective group-ID • supplementary group-IDs • the environment list • current working directory • file creation mode mask (umask) • scheduling priority (aka nice value) Process model There are two distinct ways sudo can run a command. If an I/O logging plugin is configured to log terminal I/O, or if the security policy explicitly requests it, a new pseudo-terminal (“pty”) is allocated and fork(2) is used to create a second sudo process, referred to as the monitor. The monitor creates a new terminal session with itself as the leader and the pty as its controlling terminal, calls fork(2) again, sets up the execution environment as described above, and then uses the execve(2) system call to run the command in the child process. The monitor exists to relay job control signals between the user's terminal and the pty the command is being run in. This makes it possible to suspend and resume the command normally. Without the monitor, the command would be in what POSIX terms an “orphaned process group” and it would not receive any job control signals from the kernel. When the command exits or is terminated by a signal, the monitor passes the command's exit status to the main sudo process and exits. After receiving the command's exit status, the main sudo process passes the command's exit status to the security policy's close function, as well as the close function of any configured audit plugin, and exits. If no pty is used, sudo calls fork(2), sets up the execution environment as described above, and uses the execve(2) system call to run the command in the child process. 
The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's close function, as well as the close function of any configured audit plugins, and exits. As a special case, if the policy plugin does not define a close function, sudo will execute the command directly instead of calling fork(2) first. The sudoers policy plugin will only define a close function when I/O logging is enabled, a pty is required, an SELinux role is specified, the command has an associated timeout, or the pam_session or pam_setcred options are enabled. Both pam_session and pam_setcred are enabled by default on systems using PAM. On systems that use PAM, the security policy's close function is responsible for closing the PAM session. It may also log the command's exit status. Signal handling When the command is run as a child of the sudo process, sudo will relay signals it receives to the command. The SIGINT and SIGQUIT signals are only relayed when the command is being run in a new pty or when the signal was sent by a user process, not the kernel. This prevents the command from receiving SIGINT twice each time the user enters control-C. Some signals, such as SIGSTOP and SIGKILL, cannot be caught and thus will not be relayed to the command. As a general rule, SIGTSTP should be used instead of SIGSTOP when you wish to suspend a command being run by sudo. As a special case, sudo will not relay signals that were sent by the command it is running. This prevents the command from accidentally killing itself. On some systems, the reboot(8) utility sends SIGTERM to all non-system processes other than itself before rebooting the system. This prevents sudo from relaying the SIGTERM signal it received back to reboot(8), which might then exit before the system was actually rebooted, leaving it in a half-dead state similar to single user mode. 
Note, however, that this check only applies to the command run by sudo and not any other processes that the command may create. As a result, running a script that calls reboot(8) or shutdown(8) via sudo may cause the system to end up in this undefined state unless reboot(8) or shutdown(8) is run using the exec() family of functions instead of system() (which interposes a shell between the command and the calling process). Plugins Plugins may be specified via Plugin directives in the sudo.conf(5) file. They may be loaded as dynamic shared objects (on systems that support them), or compiled directly into the sudo binary. If no sudo.conf(5) file is present, or if it doesn't contain any Plugin lines, sudo will use sudoers(5) for the policy, auditing, and I/O logging plugins. See the sudo.conf(5) manual for details of the /private/etc/sudo.conf file and the sudo_plugin(5) manual for more information about the sudo plugin architecture. EXIT VALUE Upon successful execution of a command, the exit status from sudo will be the exit status of the program that was executed. If the command terminated due to receipt of a signal, sudo will send itself the same signal that terminated the command. If the -l option was specified without a command, sudo will exit with a value of 0 if the user is allowed to run sudo and they authenticated successfully (as required by the security policy). If a command is specified with the -l option, the exit value will only be 0 if the command is permitted by the security policy, otherwise it will be 1. If there is an authentication failure, a configuration/permission problem, or if the given command cannot be executed, sudo exits with a value of 1. In the latter case, the error string is printed to the standard error. If sudo cannot stat(2) one or more entries in the user's PATH, an error is printed to the standard error. (If the directory does not exist or if it is not really a directory, the entry is ignored and no error is printed.) 
This should not happen under normal circumstances. The most common reason for stat(2) to return “permission denied” is if you are running an automounter and one of the directories in your PATH is on a machine that is currently unreachable. SECURITY NOTES sudo tries to be safe when executing external commands. To prevent command spoofing, sudo checks "." and "" (both denoting current directory) last when searching for a command in the user's PATH (if one or both are in the PATH). Depending on the security policy, the user's PATH environment variable may be modified, replaced, or passed unchanged to the program that sudo executes. Users should never be granted sudo privileges to execute files that are writable by the user or that reside in a directory that is writable by the user. If the user can modify or replace the command there is no way to limit what additional commands they can run. By default, sudo will only log the command it explicitly runs. If a user runs a command such as ‘sudo su’ or ‘sudo sh’, subsequent commands run from that shell are not subject to sudo's security policy. The same is true for commands that offer shell escapes (including most editors). If I/O logging is enabled, subsequent commands will have their input and/or output logged, but there will not be traditional logs for those commands. Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not inadvertently give the user an effective root shell. For information on ways to address this, see the Preventing shell escapes section in sudoers(5). To prevent the disclosure of potentially sensitive information, sudo disables core dumps by default while it is executing (they are re-enabled for the command that is run). This historical practice dates from a time when most operating systems allowed set-user-ID processes to dump core by default. 
To aid in debugging sudo crashes, you may wish to re-enable core dumps by setting “disable_coredump” to false in the sudo.conf(5) file as follows: Set disable_coredump false See the sudo.conf(5) manual for more information. ENVIRONMENT sudo utilizes the following environment variables. The security policy has control over the actual content of the command's environment. EDITOR Default editor to use in -e (sudoedit) mode if neither SUDO_EDITOR nor VISUAL is set. MAIL Set to the mail spool of the target user when the -i option is specified, or when env_reset is enabled in sudoers (unless MAIL is present in the env_keep list). HOME Set to the home directory of the target user when the -i or -H options are specified, when the -s option is specified and set_home is set in sudoers, when always_set_home is enabled in sudoers, or when env_reset is enabled in sudoers and HOME is not present in the env_keep list. LOGNAME Set to the login name of the target user when the -i option is specified, when the set_logname option is enabled in sudoers, or when the env_reset option is enabled in sudoers (unless LOGNAME is present in the env_keep list). PATH May be overridden by the security policy. SHELL Used to determine shell to run with -s option. SUDO_ASKPASS Specifies the path to a helper program used to read the password if no terminal is available or if the -A option is specified. SUDO_COMMAND Set to the command run by sudo, including any args. The args are truncated at 4096 characters to prevent a potential execution error. SUDO_EDITOR Default editor to use in -e (sudoedit) mode. SUDO_GID Set to the group-ID of the user who invoked sudo. SUDO_PROMPT Used as the default password prompt unless the -p option was specified. SUDO_PS1 If set, PS1 will be set to its value for the program being run. SUDO_UID Set to the user-ID of the user who invoked sudo. SUDO_USER Set to the login name of the user who invoked sudo. USER Set to the same value as LOGNAME, described above. 
VISUAL Default editor to use in -e (sudoedit) mode if SUDO_EDITOR is not set. FILES /private/etc/sudo.conf sudo front-end configuration
sudo, sudoedit - execute a command as another user
sudo -h | -K | -k | -V sudo -v [-ABkNnS] [-g group] [-h host] [-p prompt] [-u user] sudo -l [-ABkNnS] [-g group] [-h host] [-p prompt] [-U user] [-u user] [command [arg ...]] sudo [-ABbEHnPS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-T timeout] [-u user] [VAR=value] [-i | -s] [command [arg ...]] sudoedit [-ABkNnS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-T timeout] [-u user] file ...
The following examples assume a properly configured security policy. To get a file listing of an unreadable directory: $ sudo ls /usr/local/protected To list the home directory of user yaz on a machine where the file system holding ~yaz is not exported as root: $ sudo -u yaz ls ~yaz To edit the index.html file as user www: $ sudoedit -u www ~www/htdocs/index.html To view system logs only accessible to root and users in the adm group: $ sudo -g adm more /var/log/syslog To run an editor as jim with a different primary group: $ sudoedit -u jim -g audio ~jim/sound.txt To shut down a machine: $ sudo shutdown -r +15 "quick reboot" To make a usage listing of the directories in the /home partition. The commands are run in a sub-shell to allow the ‘cd’ command and file redirection to work. $ sudo sh -c "cd /home ; du -s * | sort -rn > USAGE" DIAGNOSTICS Error messages produced by sudo include: editing files in a writable directory is not permitted By default, sudoedit does not permit editing a file when any of the parent directories are writable by the invoking user. This avoids a race condition that could allow the user to overwrite an arbitrary file. See the sudoedit_checkdir option in sudoers(5) for more information. editing symbolic links is not permitted By default, sudoedit does not follow symbolic links when opening files. See the sudoedit_follow option in sudoers(5) for more information. effective uid is not 0, is sudo installed setuid root? sudo was not run with root privileges. The sudo binary must be owned by the root user and have the set-user-ID bit set. Also, it must not be located on a file system mounted with the ‘nosuid’ option or on an NFS file system that maps uid 0 to an unprivileged uid. effective uid is not 0, is sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? sudo was not run with root privileges. The sudo binary has the proper owner and permissions but it still did not run with root privileges. 
The most common reason for this is that the file system the sudo binary is located on is mounted with the ‘nosuid’ option or it is an NFS file system that maps uid 0 to an unprivileged uid. fatal error, unable to load plugins An error occurred while loading or initializing the plugins specified in sudo.conf(5). invalid environment variable name One or more environment variable names specified via the -E option contained an equal sign (‘=’). The arguments to the -E option should be environment variable names without an associated value. no password was provided When sudo tried to read the password, it did not receive any characters. This may happen if no terminal is available (or the -S option is specified) and the standard input has been redirected from /dev/null. a terminal is required to read the password sudo needs to read the password but there is no mechanism available for it to do so. A terminal is not present to read the password from, sudo has not been configured to read from the standard input, the -S option was not used, and no askpass helper has been specified either via the sudo.conf(5) file or the SUDO_ASKPASS environment variable. no writable temporary directory found sudoedit was unable to find a usable temporary directory in which to store its intermediate files. The “no new privileges” flag is set, which prevents sudo from running as root. sudo was run by a process that has the Linux “no new privileges” flag set. This causes the set-user-ID bit to be ignored when running an executable, which will prevent sudo from functioning. The most likely cause for this is running sudo within a container that sets this flag. Check the documentation to see if it is possible to configure the container such that the flag is not set. sudo must be owned by uid 0 and have the setuid bit set sudo was not run with root privileges. The sudo binary does not have the correct owner or permissions. It must be owned by the root user and have the set-user-ID bit set. 
sudoedit is not supported on this platform It is only possible to run sudoedit on systems that support setting the effective user-ID. timed out reading password The user did not enter a password before the password timeout (5 minutes by default) expired. you do not exist in the passwd database Your user-ID does not appear in the system passwd database. you may not specify environment variables in edit mode It is only possible to specify environment variables when running a command. When editing a file, the editor is run with the user's environment unmodified. SEE ALSO su(1), stat(2), login_cap(3), passwd(5), sudo.conf(5), sudo_plugin(5), sudoers(5), sudoers_timestamp(5), sudoreplay(8), visudo(8) HISTORY See the HISTORY.md file in the sudo distribution (https://www.sudo.ws/about/history/) for a brief history of sudo. AUTHORS Many people have worked on sudo over the years; this version consists of code written primarily by: Todd C. Miller See the CONTRIBUTORS.md file in the sudo distribution (https://www.sudo.ws/about/contributors/) for an exhaustive list of people who have contributed to sudo. CAVEATS There is no easy way to prevent a user from gaining a root shell if that user is allowed to run arbitrary commands via sudo. Also, many programs (such as editors) allow the user to run commands via shell escapes, thus avoiding sudo's checks. However, on most systems it is possible to prevent shell escapes with the sudoers(5) plugin's noexec functionality. It is not meaningful to run the ‘cd’ command directly via sudo, e.g., $ sudo cd /usr/local/protected since when the command exits the parent process (your shell) will still be the same. The -D option can be used to run a command in a specific directory. Running shell scripts via sudo can expose the same kernel bugs that make set-user-ID shell scripts unsafe on some operating systems (if your OS has a /dev/fd/ directory, set-user-ID shell scripts are generally safe). 
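The ‘cd’ caveat above can be sidestepped as described; a sketch, with a placeholder directory name:

```shell
# `sudo cd /usr/local/protected` has no lasting effect, since the
# directory change dies with the child process.  Instead, run the
# command from the target directory with -D, if the security
# policy permits it:
sudo -D /usr/local/protected ls
```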
BUGS If you believe you have found a bug in sudo, you can submit a bug report at https://bugzilla.sudo.ws/ SUPPORT Limited free support is available via the sudo-users mailing list, see https://www.sudo.ws/mailman/listinfo/sudo-users to subscribe or search the archives. DISCLAIMER sudo is provided “AS IS” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. See the LICENSE.md file distributed with sudo or https://www.sudo.ws/about/license/ for complete details. Sudo 1.9.13p2 January 16, 2023 SUDO(8)
bioutil
bioutil allows viewing and changing biometrics configuration, both system-wide and user-specific. It also allows listing and deleting enrolled biometric templates.
bioutil – tool for viewing/changing biometrics configuration and listing/deleting enrolled biometric templates
bioutil {-r | -w [-f { 0 | 1 }] [-u { 0 | 1 }] [-a { 0 | 1 }] [-o <seconds>] | [--<X>timeout <seconds>]} | [-c] | [-p] | [-d <uid>] [-s]
-r, --read Read biometrics configuration. -w, --write Write biometrics configuration. -s, --system Indicates that system-wide configuration is to be read/written (user-specific configuration is the default) or that a system- wide list/delete operation is to be performed. -f, --function Enables (1) or disables (0) overall biometrics functionality (system-wide configuration only). -u, --unlock Enables (1) or disables (0) biometrics for unlock. -a, --applepay Enables (1) or disables (0) biometrics for ApplePay (user- specific configuration only). -o, --timeout Sets biometric timeout (in seconds, system-wide configuration only). Deprecated, please use --btimeout instead. --btimeout Sets biometric timeout (in seconds, system-wide configuration only). --mtimeout Sets match timeout (in seconds, system-wide configuration only). --ptimeout Sets passcode input timeout (in seconds, system-wide configuration only). -c, --count Provides number of enrolled biometric templates of the current user or of all users (when run with -s as an administrator) -p, --purge Deletes all enrolled biometric templates of the current user or of all users (when run with -s as an administrator) -d, --delete Deletes all enrolled biometric templates of the user with given user ID (must be run as an administrator)
bioutil -r Reads biometrics configuration for the current user. bioutil -r -s Reads system-wide biometrics configuration. bioutil -w -u 1 Enables biometrics for unlock for the current user. sudo bioutil -w -s -u 0 Disables biometrics for unlock for the whole system. sudo bioutil -w -s --btimeout 86400 Sets biometric timeout to 24h. bioutil -c Prints the number of enrolled biometric templates of the current user. bioutil -p Deletes all enrolled biometric templates of the current user. sudo bioutil -c -s Prints numbers of enrolled biometric templates of all enrolled users. sudo bioutil -p -s Deletes all biometric templates from the system. sudo bioutil -s -d 501 Deletes all biometric templates of user 501. Darwin 15/02/16 Darwin
tmdiagnose
The tmdiagnose tool gathers system and backup information in order to assist Apple when investigating issues related to Time Machine. A great deal of information is harvested, spanning system state, system and backup configuration, and snapshot details. What tmdiagnose Collects: • A spindump of the system • Several seconds of fs_usage output • Several seconds of top output • Individual samples of backupd and Finder • Power Management state and logs • IOKit registry information • A netstat inspection • A list of all open files on the system • All system logs • All kernel logs • All install logs • All fsck_hfs logs • All WiFi logs • Disk Utility logs for all local users • The system migration log • All spin and crash reports for the following processes: backupd mds diskimages-helper DesktopServicesHelper newfs_hfs NetAuthSysAgent • Recent spin and crash reports for the following processes, for all local users: Finder Locum System Settings SystemUIServer iPhoto Mail MailTimeMachineHelper Address Book • Basic information about reachable AirPort and Time Capsule devices • The FindSystemFiles cache • Information about disks and mounted network shares • Information about attached disk images • A System Profiler report • A list of software that has been installed via the darwinup tool • Time Machine preferences, caches, and configuration information • Comprehensive information about Time Machine backups and snapshots The high-level backup structure for the current machine directory is recreated inside the diagnostic bundle, including the capture of various Time Machine log files contained within snapshots. Items in this "skeleton" structure carry the extended attributes of the originals. Recreations of local snapshots are also captured in the same manner. The tmdiagnose tool will run the Spotlight diagnostic tool mddiagnose automatically, if it is available. 
What tmdiagnose Doesn't Collect: • No user files are harvested from within backups • No authentication credentials are harvested from the system • No authentication credentials are harvested for reachable AirPort and Time Capsule devices
tmdiagnose – gather information to aid in diagnosing Time Machine issues
tmdiagnose -h tmdiagnose [-d seconds] [-f path] [-m] [-r]
-h Print full usage. -d sec Delay the start of the diagnostic by sec seconds. The system alert sound will play two times when the diagnostic begins. -f path Write the diagnostic to the specified path. -m Gather memory diagnostics for backupd and Finder. -r Do not reveal the diagnostic in Finder when finished. EXIT STATUS tmdiagnose exits with status 0 if there were no internal errors encountered during the diagnostic, or >0 when an error unrelated to external state occurs or unusable input is provided by the user. Mac OS X 6 September 2011 Mac OS X
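Putting the options above together, a hypothetical invocation might look like this (the output path is a placeholder):

```shell
# Wait 10 seconds before starting, write the diagnostic to a chosen
# path, and do not reveal it in Finder when finished:
sudo tmdiagnose -d 10 -r -f ~/Desktop/TimeMachine.tmdiagnose
```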
plockstat
plockstat - front-end to DTrace to print statistics about POSIX mutexes and read/write locks
plockstat [-vACHV] [-n count] [-s depth] [-e secs] [-x opt[=val]] command [arg...] plockstat [-vACHV] [-n count] [-s depth] [-e secs] [-x opt[=val]] -p pid OVERVIEW The plockstat command is a front-end to DTrace that can be used to print statistics about POSIX mutexes and read/write locks. Since OS X 10.11, in order to use this, your process must be run with DYLD_LIBRARY_PATH set to contain /usr/lib/system/introspection, which contains the necessary static DTrace probes: DYLD_LIBRARY_PATH=/usr/lib/system/introspection
-v print a message when tracing starts -A trace contention and hold events (same as -CH) -C trace contention events for mutexes and rwlocks -H trace hold events for mutexes and rwlocks -V print the dtrace script to run -n count display only 'count' entries for each event type -s depth show stack trace up to 'depth' entries -e secs exit after specified seconds -x arg[=val] enable a DTrace runtime option or a D compiler option -p pid attach and trace the specified process id SEE ALSO dtrace(1) 1.0 July 2007 PLOCKSTAT(1)
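A hypothetical invocation combining the options above (the pid is a placeholder; requires root privileges and the DYLD_LIBRARY_PATH setting described earlier):

```shell
# Trace contention events in an already-running process for 5
# seconds, showing at most 10 entries per event type:
sudo plockstat -C -e 5 -n 10 -p 1234
```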
serialver
The serialver command returns the serialVersionUID for one or more classes in a form suitable for copying into an evolving class. When called with no arguments, the serialver command prints a usage line. OPTIONS FOR SERIALVER -classpath path-files Sets the search path for application classes and resources. Separate classes and resources with a colon (:). -Joption Passes the specified option to the Java Virtual Machine, where option is one of the options described on the reference page for the Java application launcher. For example, -J-Xms48m sets the startup memory to 48 MB. NOTES The serialver command loads and initializes the specified classes in its virtual machine, and by default, it doesn't set a security manager. If the serialver command is to be run with untrusted classes, then a security manager can be set with the following option: -J-Djava.security.manager When necessary, a security policy can be specified with the following option: -J-Djava.security.policy=policy_file JDK 22 2024 SERIALVER(1)
serialver - return the serialVersionUID for one or more classes in a form suitable for copying into an evolving class
serialver [options] [classnames]
options This represents the command-line options for the serialver command. See Options for serialver. classnames The classes for which serialVersionUID is to be returned.
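A hypothetical invocation (the class path and class name are placeholders):

```shell
# Print the serialVersionUID of a compiled class found on the given
# class path, in a form suitable for pasting into the source:
serialver -classpath build/classes com.example.Account
```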