command | description | name | synopsis | options | examples
|---|---|---|---|---|---|
skywalkctl
|
skywalkctl is a utility used to interact with the Skywalk subsystem, which provides the plumbing between various networking-related pieces of software and hardware. It should only be used in a test and debug context. Using it for any other purpose is strongly discouraged. It is composed of the multiple commands listed below. Use skywalkctl COMMAND help to see the usage of each command. COMMANDS show Prints a brief overview of the Skywalk runtime. flow Display Skywalk flows. flow-adv Display flow advisory information. flow-owner Display flow owner information. flow-route Display flow route information. channel Display channels with performance and error statistics. memory Display memory usage. provider Display Skywalk nexus providers and their child instances. netns Display information about port reservations. log Display and manipulate Skywalk kernel logging verbosity. enable Enable/disable Skywalk via boot-args. SEE ALSO netstat(1), lsof(8) July 24, 2018 SKYWALKCTL(8)
|
skywalkctl - Interact with Skywalk subsystem
|
skywalkctl <command> [ <args> ]
| null | null |
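As a sketch, the skywalkctl subcommands listed above might be exercised like this on a macOS test machine. The `command -v` guard is an addition of this example (not from the man page) so the script is a harmless no-op on systems without Skywalk:

```shell
# Illustrative only: Skywalk inspection commands from the man page above.
# skywalkctl is macOS-specific and meant for test/debug use; elsewhere we skip.
if command -v skywalkctl >/dev/null 2>&1; then
  skywalkctl show         # brief overview of the Skywalk runtime
  skywalkctl memory       # memory usage
  skywalkctl netns help   # per-command usage, per "skywalkctl COMMAND help"
  status="ran"
else
  status="skipped"
fi
echo "$status"
```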
mkpassdb
|
mkpassdb creates or modifies the password server database directly. mkpassdb must be run as root; it will exit otherwise. The -list command is the only exception. This tool's purpose is to create and manage the password server database. It performs operations that are not supported by the password server protocol because of security concerns. These operations include the creation and destruction of the database itself, the creation of the RSA security keys that establish the identity of the password server, the trusted mechanism list, and the genesis of administrator accounts. It also allows the root account to make some password server changes on the local system. -deleteslot Invalidates a slot ID in the database. -dump Outputs all of the User IDs and their corresponding user names. If a slot-ID is specified, it prints out more detailed information for a single slot. If the [-v] option is used, additional columns are included. -header Outputs the database header information. -kerberize Attempts to add kerberos principals for all non-kerberos accounts in password server. -key Outputs the RSA public key stored in the database. -list Outputs all of the SASL mechanisms available to the password server. -mergedb This command is a low-level command that is invoked by a higher-level tool in normal usage. Refer to the restoredb command in the slapconfig man page. This command merges a snapshot of the password server database into the current database whether or not the daemon is running. This command takes existing LDAP users, looks for their data in the specified db file, and merges their db information. If there is data in the db without a corresponding LDAP user or computer, it is not merged. The identity elements of the password server, including RSA keys and replica name, are changed to the snapshot's contents. -mergeparent This command is a low-level command that is invoked by a higher-level tool in normal usage. 
Refer to the mergedb command in the slapconfig man page. This command merges a snapshot of the password server database into the current database whether or not the daemon is running. This command takes existing LDAP users, looks for their data in the specified db file, and merges their db information. If there is data in the db without a corresponding LDAP user or computer, it is not merged. The current identity of the password server is preserved. -setadmin Promotes a slot-ID to have administrator privileges for the password server. By default, administrators set with mkpassdb receive the most privileged rank (0). -setglobalpolicy Sets the default policies for all users. -setkerberos Assigns a Kerberos realm to a password server account. -setkeyagent Promotes a slot-ID to have enough administrator privileges to retrieve session keys on behalf of other accounts. -setcomputeraccount Informs the password server that the account belongs to a computer rather than a user. Computer accounts are not subject to policies and do not expire. Using the optional "off" argument changes the state back to a user account. -setrealm Sets the password server's SASL realm. -getreplicationinterval Gets the number of seconds between replication attempts. -setreplicationinterval Sets the number of seconds between replication attempts. -rekeydb Generates a new RSA public/private key pair for the database. Valid sizes are 1024, 2048, or 3072. This command should be invoked by a higher-level tool. If run from the command line, existing users will not be able to authenticate. The PasswordService daemon must be turned off with "NeST -stoppasswordserver" before this command can be used.
|
mkpassdb – Mac OS X Server Password Server database creation tool
|
mkpassdb -deleteslot slot-ID mkpassdb -dump [-v] mkpassdb -dump [slot-ID] mkpassdb -header mkpassdb -kerberize mkpassdb -key mkpassdb -list mkpassdb -mergedb path mkpassdb -mergeparent path mkpassdb -setadmin slot-ID [admin-class (0-7)] mkpassdb -setglobalpolicy "policy1=value1 policy2=value2 etc." mkpassdb -setkerberos slot-ID KerberosRealm mkpassdb -setkeyagent slot-ID mkpassdb -setcomputeraccount [off] mkpassdb -setrealm realm mkpassdb -getreplicationinterval mkpassdb -setreplicationinterval seconds [policy] mkpassdb -rekeydb [key-size-in-bits] mkpassdb [-u user] [-m mech] [-a] [-b] [-e count] [-n replica-name] [-o] [-p] [-q]
|
The following options are available: -a add a new administrator to an existing database. -b add a new non-administrative user to an existing database. -e expand the database to a fixed number of records. If the number is greater than the current size of the database, then the database is expanded; otherwise, no action is performed. This option is used by other setup tools when establishing a replica database. There is no reason to use it from the command line. -m mech establishes a mechanism as weak. If a mechanism is considered weak, then it can be used to verify passwords but the password server will not allow write operations to its database. The mechanisms SMB-NT, SMB-LAN-MANAGER, CRYPT, and APOP are always in the weak list. Directory Services uses DHX to perform write operations to the password server. -n name Assign a name to a replica -o overwrite an existing database. Replacing an existing database is extremely destructive and should not be done unless all password server users have been removed from the directory system. -p prompt for a password -q quiet -u user Add this user name to the database. USAGE In typical usage, mkpassdb is invoked by another tool. It is used directly on rare occasion. FILES & FOLDERS /Library/Preferences/com.apple.passwordserver.plist - the PasswordService preferences file /usr/sbin/PasswordService - the password service daemon /var/db/authserver/authservermain - password database (guard this) /var/db/authserver/authserverfree - list of free (reusable) slots in the database /var/db/authserver/authserverreplicas - table of password server replicas SEE ALSO NeST(8) PasswordService(8) slapconfig(8) Mac OS X Server 21 February 2002 Mac OS X Server
| null |
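A minimal sketch of mkpassdb invocations built from the SYNOPSIS above. mkpassdb shipped with Mac OS X Server and must be run as root (-list is the documented exception); the `command -v` guard is added here so the script simply skips on other systems:

```shell
# Illustrative only: mkpassdb queries from the man page above.
if command -v mkpassdb >/dev/null 2>&1; then
  mkpassdb -list       # available SASL mechanisms (no root required)
  mkpassdb -header     # database header information (run as root)
  mkpassdb -dump -v    # all slot IDs and user names, with extra columns (root)
  status="ran"
else
  status="skipped"
fi
echo "$status"
```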
postconf
|
By default, the postconf(1) command displays the values of main.cf configuration parameters, and warns about possible mis-typed parameter names (Postfix 2.9 and later). The command can also change main.cf configuration parameter values, or display other configuration information about the Postfix mail system. Options: -a List the available SASL plug-in types for the Postfix SMTP server. The plug-in type is selected with the smtpd_sasl_type configuration parameter by specifying one of the names listed below. cyrus This server plug-in is available when Postfix is built with Cyrus SASL support. dovecot This server plug-in uses the Dovecot authentication server, and is available when Postfix is built with any form of SASL support. This feature is available with Postfix 2.3 and later. -A List the available SASL plug-in types for the Postfix SMTP client. The plug-in type is selected with the smtp_sasl_type or lmtp_sasl_type configuration parameters by specifying one of the names listed below. cyrus This client plug-in is available when Postfix is built with Cyrus SASL support. This feature is available with Postfix 2.3 and later. -b [template_file] Display the message text that appears at the beginning of delivery status notification (DSN) messages, expanding $name expressions with actual values as described in bounce(5). To override the bounce_template_file parameter setting, specify a template file name at the end of the "postconf -b" command line. Specify an empty file name to display built-in templates (in shell language: ""). This feature is available with Postfix 2.3 and later. -c config_dir The main.cf configuration file is in the named directory instead of the default configuration directory. -C class,... When displaying main.cf parameters, select only parameters from the specified class(es): builtin Parameters with built-in names. service Parameters with service-defined names (the first field of a master.cf entry plus a Postfix-defined suffix). 
user Parameters with user-defined names. all All the above classes. The default is as if "-C all" is specified. This feature is available with Postfix 2.9 and later. -d Print main.cf default parameter settings instead of actual settings. Specify -df to fold long lines for human readability (Postfix 2.9 and later). -e Edit the main.cf configuration file, and update parameter settings with the "name=value" pairs on the postconf(1) command line. With -M, edit the master.cf configuration file, and replace one or more service entries with new values as specified with "service/type=value" on the postconf(1) command line. With -F, edit the master.cf configuration file, and replace one or more service fields with new values as specified with "service/type/field=value" on the postconf(1) command line. Currently, the "command" field contains the command name and command arguments. This may change in the near future, so that the "command" field contains only the command name, and a new "arguments" pseudofield contains the command arguments. With -P, edit the master.cf configuration file, and add or update one or more service parameter settings (-o parameter=value settings) with new values as specified with "service/type/parameter=value" on the postconf(1) command line. In all cases the file is copied to a temporary file then renamed into place. Specify quotes to protect special characters and whitespace on the postconf(1) command line. The -e option is no longer needed with Postfix version 2.8 and later. -f Fold long lines when printing main.cf or master.cf configuration file entries, for human readability. This feature is available with Postfix 2.9 and later. -F Show master.cf per-entry field settings (by default all services and all fields), formatted as "service/type/field=value", one per line. Specify -Ff to fold long lines. Specify one or more "service/type/field" instances on the postconf(1) command line to limit the output to fields of interest.
Trailing parameter name or service type fields that are omitted will be handled as "*" wildcard fields. This feature is available with Postfix 2.11 and later. -h Show parameter or attribute values without the "name = " label that normally precedes the value. -H Show parameter or attribute names without the " = value" that normally follows the name. This feature is available with Postfix 3.1 and later. -l List the names of all supported mailbox locking methods. Postfix supports the following methods: flock A kernel-based advisory locking method for local files only. This locking method is available on systems with a BSD compatible library. fcntl A kernel-based advisory locking method for local and remote files. dotlock An application-level locking method. An application locks a file named filename by creating a file named filename.lock. The application is expected to remove its own lock file, as well as stale lock files that were left behind after abnormal program termination. -m List the names of all supported lookup table types. In Postfix configuration files, lookup tables are specified as type:name, where type is one of the types listed below. The table name syntax depends on the lookup table type as described in the DATABASE_README document. btree A sorted, balanced tree structure. Available on systems with support for Berkeley DB databases. cdb A read-optimized structure with no support for incremental updates. Available on systems with support for CDB databases. This feature is available with Postfix 2.2 and later. cidr A table that associates values with Classless Inter-Domain Routing (CIDR) patterns. This is described in cidr_table(5). This feature is available with Postfix 2.2 and later. dbm An indexed file type based on hashing. Available on systems with support for DBM databases. environ The UNIX process environment array. The lookup key is the environment variable name; the table name is ignored. 
Originally implemented for testing, someone may find this useful someday. fail A table that reliably fails all requests. The lookup table name is used for logging. This table exists to simplify Postfix error tests. This feature is available with Postfix 2.9 and later. hash An indexed file type based on hashing. Available on systems with support for Berkeley DB databases. inline (read-only) A non-shared, in-memory lookup table. Example: "inline:{ key=value, { key = text with whitespace or comma }}". Key-value pairs are separated by whitespace or comma; whitespace after "{" and before "}" is ignored. Inline tables eliminate the need to create a database file for just a few fixed elements. See also the static: map type. This feature is available with Postfix 3.0 and later. internal A non-shared, in-memory hash table. Its contents are lost when a process terminates. lmdb OpenLDAP LMDB database (a memory-mapped, persistent file). Available on systems with support for LMDB databases. This is described in lmdb_table(5). This feature is available with Postfix 2.11 and later. ldap (read-only) LDAP database client. This is described in ldap_table(5). memcache Memcache database client. This is described in memcache_table(5). This feature is available with Postfix 2.9 and later. mysql (read-only) MySQL database client. Available on systems with support for MySQL databases. This is described in mysql_table(5). pcre (read-only) A lookup table based on Perl Compatible Regular Expressions. The file format is described in pcre_table(5). pgsql (read-only) PostgreSQL database client. This is described in pgsql_table(5). This feature is available with Postfix 2.1 and later. pipemap (read-only) A lookup table that constructs a pipeline of tables. Example: "pipemap:{type_1:name_1, ..., type_n:name_n}". Each "pipemap:" query is given to the first table. Each lookup result becomes the query for the next table in the pipeline, and the last table produces the final result. 
When any table lookup produces no result, the pipeline produces no result. The first and last characters of the "pipemap:" table name must be "{" and "}". Within these, individual maps are separated with comma or whitespace. This feature is available with Postfix 3.0 and later. proxy Postfix proxymap(8) client for shared access to Postfix databases. The table name syntax is type:name. This feature is available with Postfix 2.0 and later. randmap (read-only) An in-memory table that performs random selection. Example: "randmap:{result_1, ..., result_n}". Each table query returns a random choice from the specified results. The first and last characters of the "randmap:" table name must be "{" and "}". Within these, individual results are separated with comma or whitespace. To give a specific result more weight, specify it multiple times. This feature is available with Postfix 3.0 and later. regexp (read-only) A lookup table based on regular expressions. The file format is described in regexp_table(5). sdbm An indexed file type based on hashing. Available on systems with support for SDBM databases. This feature is available with Postfix 2.2 and later. socketmap (read-only) Sendmail-style socketmap client. The table name is inet:host:port:name for a TCP/IP server, or unix:pathname:name for a UNIX-domain server. This is described in socketmap_table(5). This feature is available with Postfix 2.10 and later. sqlite (read-only) SQLite database. This is described in sqlite_table(5). This feature is available with Postfix 2.8 and later. static (read-only) A table that always returns its name as lookup result. For example, static:foobar always returns the string foobar as lookup result. Specify "static:{ text with whitespace }" when the result contains whitespace; this form ignores whitespace after "{" and before "}". See also the inline: map. The form "static:{text}" is available with Postfix 3.0 and later. tcp (read-only) TCP/IP client. 
The protocol is described in tcp_table(5). texthash (read-only) Produces similar results as hash: files, except that you don't need to run the postmap(1) command before you can use the file, and that it does not detect changes after the file is read. This feature is available with Postfix 2.8 and later. unionmap (read-only) A table that sends each query to multiple lookup tables and that concatenates all found results, separated by comma. The table name syntax is the same as for pipemap. This feature is available with Postfix 3.0 and later. unix (read-only) A limited view of the UNIX authentication database. The following tables are implemented: unix:passwd.byname The table is the UNIX password database. The key is a login name. The result is a password file entry in passwd(5) format. unix:group.byname The table is the UNIX group database. The key is a group name. The result is a group file entry in group(5) format. Other table types may exist depending on how Postfix was built. -M Show master.cf file contents instead of main.cf file contents. Specify -Mf to fold long lines for human readability. Specify zero or more arguments, each with a service-name or service-name/service-type pair, where service-name is the first field of a master.cf entry and service-type is one of (inet, unix, fifo, or pass). If service-name or service-name/service-type is specified, only the matching master.cf entries will be output. For example, "postconf -Mf smtp" will output all services named "smtp", and "postconf -Mf smtp/inet" will output only the smtp service that listens on the network. Trailing service type fields that are omitted will be handled as "*" wildcard fields. This feature is available with Postfix 2.9 and later. The syntax was changed from "name.type" to "name/type", and "*" wildcard support was added with Postfix 2.11. -n Show only configuration parameters that have explicit name=value settings in main.cf. 
Specify -nf to fold long lines for human readability (Postfix 2.9 and later). -o name=value Override main.cf parameter settings. This feature is available with Postfix 2.10 and later. -p Show main.cf parameter settings. This is the default. This feature is available with Postfix 2.11 and later. -P Show master.cf service parameter settings (by default all services and all parameters), formatted as "service/type/parameter=value", one per line. Specify -Pf to fold long lines. Specify one or more "service/type/parameter" instances on the postconf(1) command line to limit the output to parameters of interest. Trailing parameter name or service type fields that are omitted will be handled as "*" wildcard fields. This feature is available with Postfix 2.11 and later. -t [template_file] Display the templates for text that appears at the beginning of delivery status notification (DSN) messages, without expanding $name expressions. To override the bounce_template_file parameter setting, specify a template file name at the end of the "postconf -t" command line. Specify an empty file name to display built-in templates (in shell language: ""). This feature is available with Postfix 2.3 and later. -T mode If Postfix is compiled without TLS support, the -T option produces no output. Otherwise, if an invalid mode is specified, the -T option reports an error and exits with a non-zero status code. The valid modes are: compile-version Output the OpenSSL version that Postfix was compiled with (i.e. the OpenSSL version in a header file). The output format is the same as with the command "openssl version". run-version Output the OpenSSL version that Postfix is linked with at runtime (i.e. the OpenSSL version in a shared library). public-key-algorithms Output the lower-case names of the supported public-key algorithms, one per-line. This feature is available with Postfix 3.1 and later. -v Enable verbose logging for debugging purposes. 
Multiple -v options make the software increasingly verbose. -x Expand $name in main.cf or master.cf parameter values. The expansion is recursive. This feature is available with Postfix 2.10 and later. -X Edit the main.cf configuration file, and remove the parameters named on the postconf(1) command line. Specify a list of parameter names, not "name=value" pairs. With -M, edit the master.cf configuration file, and remove one or more service entries as specified with "service/type" on the postconf(1) command line. With -P, edit the master.cf configuration file, and remove one or more service parameter settings (-o parameter=value settings) as specified with "service/type/parameter" on the postconf(1) command line. In all cases the file is copied to a temporary file then renamed into place. Specify quotes to protect special characters on the postconf(1) command line. There is no postconf(1) command to perform the reverse operation. This feature is available with Postfix 2.10 and later. Support for -M and -P was added with Postfix 2.11. -# Edit the main.cf configuration file, and comment out the parameters named on the postconf(1) command line, so that those parameters revert to their default values. Specify a list of parameter names, not "name=value" pairs. With -M, edit the master.cf configuration file, and comment out one or more service entries as specified with "service/type" on the postconf(1) command line. In all cases the file is copied to a temporary file then renamed into place. Specify quotes to protect special characters on the postconf(1) command line. There is no postconf(1) command to perform the reverse operation. This feature is available with Postfix 2.6 and later. Support for -M was added with Postfix 2.11. DIAGNOSTICS Problems are reported to the standard error stream. ENVIRONMENT MAIL_CONFIG Directory with Postfix configuration files. CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. 
The text below provides only a parameter summary. See postconf(5) for more details including examples. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. bounce_template_file (empty) Pathname of a configuration file with bounce message templates. FILES /etc/postfix/main.cf, Postfix configuration parameters /etc/postfix/master.cf, Postfix master daemon configuration SEE ALSO bounce(5), bounce template file format master(5), master.cf configuration file syntax postconf(5), main.cf configuration file syntax README FILES Use "postconf readme_directory" or "postconf html_directory" to locate this information. DATABASE_README, Postfix lookup table overview LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTCONF(1)
|
postconf - Postfix configuration utility
|
Managing main.cf: postconf [-dfhHnopvx] [-c config_dir] [-C class,...] [parameter ...] postconf [-epv] [-c config_dir] parameter=value ... postconf -# [-pv] [-c config_dir] parameter ... postconf -X [-pv] [-c config_dir] parameter ... Managing master.cf service entries: postconf -M [-fovx] [-c config_dir] [service[/type] ...] postconf -M [-ev] [-c config_dir] service/type=value ... postconf -M# [-v] [-c config_dir] service/type ... postconf -MX [-v] [-c config_dir] service/type ... Managing master.cf service fields: postconf -F [-fhHovx] [-c config_dir] [service[/type[/field]] ...] postconf -F [-ev] [-c config_dir] service/type/field=value ... Managing master.cf service parameters: postconf -P [-fhHovx] [-c config_dir] [service[/type[/parameter]] ...] postconf -P [-ev] [-c config_dir] service/type/parameter=value ... postconf -PX [-v] [-c config_dir] service/type/parameter ... Managing bounce message templates: postconf -b [-v] [-c config_dir] [template_file] postconf -t [-v] [-c config_dir] [template_file] Managing TLS features: postconf -T mode [-v] [-c config_dir] Managing other configuration: postconf -a|-A|-l|-m [-v] [-c config_dir]
| null | null |
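A few common postconf invocations, taken from the options described above (the `command -v` guard is added here so the sketch is a no-op where Postfix is not installed):

```shell
# Illustrative only: everyday postconf usage per the man page above.
if command -v postconf >/dev/null 2>&1; then
  postconf mail_version      # print one main.cf parameter
  postconf -n                # only parameters set explicitly in main.cf
  postconf -d mail_version   # the built-in default instead of the actual setting
  postconf -Mf smtp/inet     # the smtp/inet service entry from master.cf, folded
  status="ran"
else
  status="skipped"
fi
echo "$status"
```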
createhomedir
|
createhomedir provides several options for creating and populating home directories.
|
createhomedir – create and populate home directories on the local computer.
|
createhomedir [-scbalLih] [-n directoryDomainName] [-u username]
|
-s creates home directories for server home paths only (default). -c creates home directories for local home paths only. -b creates home directories for both server and local home paths. -a creates home directories for users defined in all directory domains of the server's search path. -l creates home directories for users defined in the local directory domain. -L causes the created home directory to be localized. -n directoryDomainName creates home directories for users defined in a specific directory domain in the server's search path. -u username creates a home directory for a specific user defined in the domain(s) identified in the -a, -l, or -n parameter. If you omit the -a, -l, and -n parameters when you use the -u parameter, -a is assumed. -i reads username list from standard input and creates specified home directories. Each username should be on its own line. -h usage help. FILES /usr/sbin/createhomedir location of tool CAVEATS When using the -a option, search limits of various directory servers (such as Open Directory or Active Directory) can prevent all possible home directories from being created. In this case, you may need to specify the usernames explicitly. Mac OS X Thu Oct 13 2004 Mac OS X
| null |
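A minimal sketch of createhomedir usage per the options above. The account names ("alice", "bob") are placeholders, not from the man page, and the `command -v` guard is added so the script skips cleanly on non-macOS systems:

```shell
# Illustrative only: createhomedir invocations from the man page above.
if command -v createhomedir >/dev/null 2>&1; then
  createhomedir -c -u alice                    # local home path for one user
  printf 'alice\nbob\n' | createhomedir -c -i  # usernames from standard input
  status="ran"
else
  status="skipped"
fi
echo "$status"
```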
ocspd
|
ocspd performs caching and network fetching of Certificate Revocation Lists (CRLs) and Online Certificate Status Protocol (OCSP) responses. It is used by Security.framework during certificate verification. Security.framework communicates with ocspd via a private RPC interface. When Security.framework determines that a CRL is needed, or that it needs to perform an OCSP transaction, it performs an RPC to ocspd which then examines its cache to see if the appropriate CRL or OCSP response exists and is still valid. If so, that entity is returned to Security.framework. If no entry is found in cache, ocspd obtains it from the network, saving the result in cache before returning it to Security.framework. This command is not intended to be invoked directly. FILES /private/var/db/crls/crlcache.db CRL cache /private/var/db/crls/ocspcache.db OCSP response cache HISTORY ocspd was first introduced in Mac OS X version 10.4 (Tiger). Darwin Thurs Mar 31 2005 Darwin
|
ocspd – OCSP and CRL Daemon
|
ocspd
| null | null |
iostat
|
The iostat utility displays kernel I/O statistics on terminal, device and cpu operations. The first statistics that are printed are averaged over the system uptime. To get information about the current activity, a suitable wait time should be specified, so that the subsequent sets of printed statistics will be averaged over that time. The options are as follows: -? Display a usage statement and exit. -c Repeat the display count times. If no wait interval is specified, the default is 1 second. -C Display CPU statistics. This is on by default, unless -d is specified. -d Display only device statistics. If this flag is turned on, only device statistics will be displayed, unless -C or -U or -T is also specified to enable the display of CPU, load average or TTY statistics. -I Display total statistics for a given time period, rather than average statistics for each second during that time period. -K In the blocks transferred display (-o), display block count in kilobytes rather than the device native block size. -n Display up to devs number of devices. The iostat utility will display fewer devices if there are not devs devices present. -o Display old-style iostat device statistics. Sectors per second, transfers per second, and milliseconds per seek are displayed. If -I is specified, total blocks/sectors, total transfers, and milliseconds per seek are displayed. -T Display TTY statistics. This is on by default, unless -d is specified. -U Display system load averages. This is on by default, unless -d is specified. -w Pause wait seconds between each display. If no repeat count is specified, the default is infinity. The iostat utility displays its information in the following format: tty tin characters read from terminals tout characters written to terminals devices Device operations. The header of the field is the device name and unit number. 
The iostat utility will display as many devices as will fit in a standard 80 column screen, or the maximum number of devices in the system, whichever is smaller. If -n is specified on the command line, iostat will display the smaller of the requested number of devices, and the maximum number of devices in the system. To force iostat to display specific drives, their names may be supplied on the command line. The iostat utility will not display more devices than will fit in an 80 column screen, unless the -n argument is given on the command line to specify a maximum number of devices to display, or the list of specified devices exceeds 80 columns. If fewer devices are specified on the command line than will fit in an 80 column screen, iostat will show only the specified devices. The standard iostat device display shows the following statistics: KB/t kilobytes per transfer tps transfers per second MB/s megabytes per second The standard iostat device display, with the -I flag specified, shows the following statistics: KB/t kilobytes per transfer xfrs total number of transfers MB total number of megabytes transferred The old-style iostat display (using -o) shows the following statistics: sps sectors transferred per second tps transfers per second msps average milliseconds per transaction The old-style iostat display, with the -I flag specified, shows the following statistics: blk total blocks/sectors transferred xfr total transfers msps average milliseconds per transaction cpu us % of cpu time in user mode sy % of cpu time in system mode id % of cpu time in idle mode
|
iostat – report I/O statistics
|
iostat [-CUdKIoT?] [-c count] [-n devs] [-w wait] [drives]
| null |
iostat -w 1 disk0 disk2 Display statistics for the first and third disk devices every second ad infinitum. iostat -c 2 Display the statistics for the first four devices in the system twice, with a one second display interval. iostat -Iw 3 Display total statistics every three seconds ad infinitum. iostat -odICTw 2 -c 9 Display total statistics using the old-style output format 9 times, with a two second interval between each measurement/display. The -d flag generally disables the TTY and CPU displays, but since the -T and -C flags are given, the TTY and CPU displays will be displayed. SEE ALSO netstat(1), nfsstat(1), ps(1), top(1), vm_stat(1) The sections starting with ``Interpreting system activity'' in Installing and Operating 4.3BSD. HISTORY This version of iostat first appeared in FreeBSD 3.0. macOS 14.5 May 22, 2015
|
cvdb
|
cvdb provides a mechanism for developers and system administrators to extract debugging information from the Xsan File System client filesystem. It can be used by system administrators to change the level of system logging that the client filesystem performs. There is also a switch to retrieve various statistics.

USAGE
cvdb is a multi-purpose debugging tool, performing a variety of functions. A rich set of options provides the user with control over various debug and logging functions. The main features of cvdb are as follows:

      •   Control debug logging.
      •   Control the level and verbosity of syslog logging.
      •   Retrieve statistics.
|
cvdb - Xsan Client File System Debugger
|
cvdb [options]
|
-g    Retrieve the debug log from a running system. The log pointers are reset after this command, so that the next invocation of cvdb -g will retrieve new information from the buffer.

-C    Continuously snap the trace. (Only useful with the -g option.)

-S stopfile
      Stop snapping the trace when the file stopfile appears. (Only useful when also using the -g and -C options.)

-D msec
      Delay msec milliseconds between trace snaps. The default is 1000 msec, or one second. (Only useful when also using the -C and -g options.)

-F    Save the trace output to files named cvdbout.000000, cvdbout.000001, etc. instead of writing to standard output. These files will appear in the current working directory. (Only useful when also using the -C and -g options.)

-n cnt
      After writing cnt files, overwrite the cvdbout files starting with cvdbout.000000. This will essentially "wrap" the trace output.

-N name
      Use name instead of cvdbout for the cvdb output files. (Only useful when also using the -C, -g, and -F options.)

-d    Disable debug logging. This is the initial (start-up) default.

-e    Enable debug logging. Disabled by default. Note: care should be taken when enabling logging in a production environment as this can significantly reduce file system performance.

-m modules=bitvector logmask=bitvector
      Specify the trace points for a given module or modules.

-l    List the current trace points and their mask values.

-L    List the available trace/debug points.

-s syslog={none|notice|info|debug}
      Set the syslog logging value. The default at mount time is notice. See mount_acfs(8) for more information.

-R size=[nbytes[k|m|g]]
      Resize the debug log. By default, the size of the log is 4MB. The minimum allowed size is 32768 bytes.

-v    Be verbose about the operations.

-i    Print various statistics about the directory cache. If enabled and configured, the directory cache contains a number of buffers of directory contents. This cache is shared by all mounted Xsan file systems.
Without -v, the following are printed:

      •   The number of directory buffers currently cached and the maximum number allowed.
      •   The number of times a buffer has been "hit" in the cache.
      •   The number of times a cache search missed and required an RPC to the MDC.
      •   The number of times a read of the directory re-used the LAST buffer that was used on the previous read of the same directory (similar to a cache hit but doesn't probe the cache).
      •   The number of times a read of a directory specified the EOF offset.
      •   The number of times the directory cache for a specific directory was invalidated. For example, if the directory contents changed after it was read and a subsequent read directory was done, thereby causing the invalidation.

If -v is also specified, -i displays more statistics. Note that there are 2 hashes in the directory cache: one for all buffers and one by directory and file system.

      •   The number of entries in the hash used to find dir cache buffers.
      •   The # of searches using the directory cache buffer hash.
      •   The total # of probes searching the directory cache for buffers. This can be larger than searches in the hash since multiple buffers may hit the same hash bucket.
      •   The maximum probes after hitting a particular hash bucket (for buffers).
      •   The maximum probes in the hash by directory and file system.

-b    Print various statistics about each buffer cache. The only other option that can be used with this is -v. There are buffer caches per cachebufsize, see mount_acfs(8). For each buffer cache, the following is printed:

      •   # of mounted file systems using this buffer cache
      •   # of buffers and total memory used
      •   # of cache hits (and percentage)
      •   # of cache misses (and percentage)
      •   # of checks for write throttling to prevent over-use by one file system. Write throttles only occur when more than 1 file system is using the cache.
      •   # of times writes were throttled

If the -v option is also used with -b, the following additional statistics are printed for each buffer cache:

      •   buffercachecap, see mount_acfs(8)
      •   buffercachewant (internal, means thread is waiting for a buffer)
      •   bufhashsize (internal, # of entries in hash used to search buffers)
      •   bcdirtycnt (internal, # of buffers with "dirty" data queued in cache)
      •   dirty_ndone (internal, bcdirtycnt + buffers being written)
      •   flusheractive (internal, flag indicating buffer flusher is active)
      •   deferredflush (internal, # of buffers deferred after files are closed)
      •   dirtywaiters (internal, # of threads waiting due to throttling)
      •   rsvd max (internal, maximum amount of reserved space seen)
      •   non-zero rsvd min (internal, minimum amount of reserved space seen > 0)
      •   successful rsvd requests (internal, # of times reserved space was needed)
      •   failed rsvd requests (internal, # of times reserved space not available)

-B    Print buffer cache statistics using a curses based display that refreshes every second. Statistics are maintained separately for reads and writes, for each cache segment, and each mount point. Statistics labeled Cumulative are those representing the totals since the command was invoked or since the last reset. Those labeled Current represent the change in the last one second, roughly corresponding to the display refresh interval. Two keystrokes are interactively recognized on systems supporting curses. A q, quit, will cause the display to terminate. An r, reset, will reset the cumulative counters to zeros. The -B option is intended to be used to analyze performance of the buffer cache with various applications, I/O subsystems, and various configuration parameters. The refreshing display is supported on clients that have a curses capability. Other clients will produce a line oriented output with similar content. A deadman timer will terminate the display after 30 seconds with no file systems mounted. This is to avoid hanging during file system shutdown.
-x    Print distributed LAN proxy client and server statistics. The only other options that can be used with this are -X and -f. The proxy statistics are collected at both the client and server ends of each proxy connection. The client will have a connection entry for each path to a proxy server for each proxy client file system. A proxy server will have a connection entry for each path to each client which has the file system mounted. Note: The distributed LAN proxy options are only available on platforms which support the distributed LAN client or server. The following information is displayed for each proxy connection:

      Client/Server System ID   This IP address identifies the remote host.
      Client IP Addr            The IP address of the Client side of the connection.
      Server IP Addr            The IP address of the Server side of the connection.
      Read Bytes/Sec            Measured recent read performance of the connection.
      Write Bytes/Sec           Measured recent write performance of the connection.
      FS Read Bytes/Sec         Measured recent read performance for all connections for this file system.
      FS Write Bytes/Sec        Measured recent write performance for all connections for this file system.
      Queued I/O                Outstanding I/O (backlog) for this connection. The backlog is meaningful for client side connections only.

-X option
      Dump statistics for each path in comma separated value (CSV) format. (Only useful with the -x option.) The following options are available:

      1   Dump remote endpoint IP address and backlog in bytes. This option is only relevant for client mounts.
      2   Dump remote endpoint IP address and read bytes per second.
      3   Dump remote endpoint IP address and write bytes per second.

-f fsname
      Specifies the file system name associated with an action option. For proxy statistics (-x option), filter on connections for the given file system. This parameter is required for the read/write statistics (-y or -Y) option.

-U    NOTE: Not intended for general use. Only use when recommended by Quantum Support as a fault injection tool.
This option resets the network connection to the proxy peer for all proxy connections on all file systems for which this node is either a proxy client or gateway. This simulates an unexpected network disconnect and reconnect. It is intended to test the robustness of the error handling and reconnect logic in the StorNext DLC proxy client and gateway systems.

-y, -Y
      Display the read/write statistics for the file system specified with the -f option (required). If -Y, also clear the stats.

-Z    NOTE: Not intended for general use. Only use when recommended by Quantum Support as a fault injection tool. This option resets the network connection to the file system manager for all active file systems. This simulates an unexpected network disconnect and reconnect. It is intended to test the robustness of the error handling and reconnect logic in the StorNext file system.

-z    NOTE: Not intended for general use. Only use when recommended by Quantum Support as a performance measuring tool. Setting this option could result in data corruption, loss of data, or unintended exposure of uninitialized disk data!! This option turns on the DEVNULL capability and only applies to linux clients. Once enabled, this option will continue to be enabled until reboot. When this option is enabled, all I/O for files with the DEVNULL affinity is not performed at the lowest level. The code paths are all executed, including the allocation of space, but the data is not read or written to disk. Instead, writes simply complete the I/O and return, and reads zero out the "read" buffer and complete the I/O. Files without the DEVNULL affinity are unaffected by this setting. Before attempting to use this capability, make sure no one is already using DEVNULL as an affinity on any file system the client has access to. Then, modify the file system configuration file, snfs_config(5), for the file system under test to contain DEVNULL as an affinity on at least one stripe group that can hold data.
Next, restart the fsm. Then, use cvmkdir(1) with -k DEVNULL to create a directory to hold files to be used for this test. Finally, enable the feature with this option, cvdb -z.

DEBUG LOGGING
Developing code that runs in the kernel is very different than programming a user-level application. To assist plugin developers who may not be familiar with the kernel environment, Xsan provides a simple "tracepoint like" debugging mechanism. This mechanism allows developers to use printf-like statements to assist in debugging their code. To use the debugging facility, each module (typically a ".c" file) must declare a structure of type ModuleLogInfo_t. This structure defines the name of the module as it will appear in the debug statements, and indicates the debug level that is in effect for that module.

      ModuleLogInfo_t MyLogModule = { "mymodule_name", DEBUGLOG_NONE};

To use the facility, each module must call the AddLogModule() routine. This is typically done when the module is first initialized (in the xxx_start() routine for a plugin). When logging is no longer required (as when the plugin is unloaded), the module should call RemoveLogModule() to free up the system resources. Logging is not enabled by default. To enable logging at any time, specify the enable flag (-e):

      shrubbery % cvdb -e

To disable logging, specify the disable flag:

      shrubbery % cvdb -d -v
      Disabling debug logging

The level of debugging is controlled via a 64-bit mask. This allows each module to have 64 different, discrete trace/log points. If the log point is enabled when the code is executed, the trace point will be dumped to the circular buffer. A complete listing of all the pre-defined trace points can be obtained via:

      rabbit % cvdb -L
      Trace points:
          cvENTRY     0x0001
          cvEXIT      0x0002
          cvINFO      0x0004
          cvNOTE      0x0008
          cvWARN      0x0010
          cvMEM       0x0020
          cvNUKE      0x0040
          cvLOOKUP    0x0080
          cvGATE      0x0100
          cvSTRAT     0x0200
          cvRWCVP     0x0400

These trace points would then be used to control the verbosity of logging.
Using the example above, if the cvEXIT and cvINFO trace points are enabled, then only those trace points would be dumped to the log. To enable the trace points, the first step is to determine the ID of the module. This is done with the list command.

      shrubbery % cvdb -l
      Module 'cvfs_memalloc'   module 0x000001  logmask 0x0000000000000000
      Module 'cvfs_fsmsubr'    module 0x000002  logmask 0x0000000000000000
      Module 'cvfs_fsmdir'     module 0x000004  logmask 0x0000000000000000
      Module 'cvfs_fsmvfsops'  module 0x000008  logmask 0x0000000000000000
      Module 'cvfs_fsmvnops'   module 0x000010  logmask 0x0000000000000000
      Module 'cvfs_sockio'     module 0x000020  logmask 0x0000000000000000
      Module 'cvfs_subr'       module 0x000040  logmask 0x0000000000000000
      Module 'cvfs_vfsops'     module 0x000080  logmask 0x0000000000000000
      Module 'cvfs_vnops'      module 0x000100  logmask 0x0000000000000000
      Module 'cvfs_dmon'       module 0x000200  logmask 0x0000000000000000
      Module 'cvfs_rwlock'     module 0x000400  logmask 0x0000000000000000
      Module 'cvfs_rw'         module 0x000800  logmask 0x0000000000000000
      Module 'cvfs_fsmtokops'  module 0x001000  logmask 0x0000000000000000
      Module 'cvfs_extent'     module 0x002000  logmask 0x0000000000000000
      Module 'cvfs_plugin'     module 0x004000  logmask 0x0000000000000000
      Module 'cvfs_disk'       module 0x008000  logmask 0x0000000000000000

To enable the cvENTRY and cvEXIT trace points of the plugin, rwlock, vnops, and memalloc routines, use the modules command.

      shrubbery % cvdb -m modules=0x4501 logmask=3

The bit masks are additive, not replacement. This means that modules and trace points you do not specify are unaffected. To turn on all debugging on all trace points, specify minus one (-1).

      shrubbery % cvdb -m modules=-1 logmask=-1

Once the module has been added to the system, log messages will then be dumped into a 1 meg circular buffer. Modules may find it convenient to declare a macro in each file so that the form of log messages will be the same in each file.
For example, the following macro definition and log function would dump information to the log buffer if the trace point is enabled:

      #define LOGINFO (&MyLogModule)

      LogMsg(LOGINFO, cvEXIT, "Plugin read return error %d bytes %llx", error, num_bytes);

To extract the messages from the log on a running system, use the -g option of cvdb.

SYSLOG
The Xsan client file system can log certain events so that they show up on the system console and in the system log, /var/adm/SYSLOG. The verbosity of messages can be controlled via the syslog parameter. The default is to log all messages. See syslogd(1M) for more information on setting up system logging. There are four log levels: none, notice, info, and debug. The levels are prioritized so that the debug level is the most verbose; setting the level to none will turn off logging completely. The events that are logged at each level are as follows:

      notice
          •   reconnection with the FSM
      info
          •   all notice messages, plus
          •   socket daemon termination
      debug
          •   Currently unused

The log level is set to debug by default.

BUSY UNMOUNTS
Occasionally, it will be impossible to unmount the Xsan volume even when it appears that all processes are no longer using the volume. The problem is that the processes are most likely in the zombie state; while they do not show up in ps, they can be found using crash. Usually, these processes are waiting on a lock in the Xsan file system, or waiting for a response from the FSM.
DEBUG LOGGING EXAMPLES

To enable logging:
      cvdb -e

To disable logging:
      cvdb -d

To retrieve (get) log information on a running system:
      cvdb -g > cvdbout

To continuously retrieve log information on a running system, snapping the trace once per second:
      cvdb -g -C > cvdbout

To continuously retrieve log information on a running system, snapping the trace once every two seconds and stopping when the file named STOP appears:
      cvdb -g -C -D 2000 -S STOP > cvdbout

To continuously retrieve log information on a running system, saving the output to files named cvdbout.000000, cvdbout.000001, etc. and wrapping after 100 files have been written:
      cvdb -g -C -F -n 100

To continuously snap traces named /tmp/snap.000000, /tmp/snap.000001, etc.:
      cvdb -g -C -F -N /tmp/snap

To list all the modules and their enabled trace points:
      cvdb -l

To set trace points in individual modules:
      cvdb -m modules=bitmask_of_modules logmask=tracepoints

To resize the log to 12 megabytes:
      cvdb -R 12m

To dump out all the pre-defined trace points:
      cvdb -L

SEE ALSO
      syslogd(1M), umount(8), cvdbset(8)

Xsan File System                 January 2018                        CVDB(8)
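The modules=0x4501 value used in the DEBUG LOGGING section is just the bitwise OR of the module IDs shown by cvdb -l, and logmask=3 is cvENTRY|cvEXIT. Plain shell arithmetic is enough to verify this (a sketch using the IDs from the listing above):

```shell
# plugin (0x004000) | rwlock (0x000400) | vnops (0x000100) | memalloc (0x000001)
modules=$(printf '0x%04x' $(( 0x004000 | 0x000400 | 0x000100 | 0x000001 )))
echo "$modules"                # prints: 0x4501

# logmask 3 is cvENTRY (0x0001) | cvEXIT (0x0002)
logmask=$(( 0x0001 | 0x0002 ))
echo "$logmask"                # prints: 3
```

Because the masks are additive, running cvdb -m twice with different module bits leaves the earlier bits enabled.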
|
sa
|
The sa utility reports on, cleans up, and generally maintains system accounting files. The sa utility is able to condense the information in /var/account/acct into the summary files /var/account/savacct and /var/account/usracct, which contain system statistics according to command name and login id, respectively. This condensation is desirable because on a large system, /var/account/acct can grow by hundreds of blocks per day. The summary files are normally read before the accounting file, so that reports include all available information. If file names are supplied, they are read instead of /var/account/acct. After each file is read, if the summary files are being updated, an updated summary will be saved to disk. Only one report is printed, after the last file is processed.

The labels used in the output indicate the following, except where otherwise specified by individual options:

      avio    Average number of I/O operations per execution
      cp      Sum of user and system time, in minutes
      cpu     Same as cp
      k       CPU-time averaged core usage, in 1k units
      k*sec   CPU storage integral, in 1k-core seconds
      re      Real time, in minutes
      s       System time, in minutes
      tio     Total number of I/O operations
      u       User time, in minutes

The options to sa are:

-a    List all command names, including those containing unprintable characters and those used only once. By default, sa places all names containing unprintable characters and those used only once under the name ``***other''.

-b    If printing command statistics, sort output by the sum of user and system time divided by number of calls.

-c    In addition to the number of calls and the user, system and real times for each command, print their percentage of the total over all commands.

-d    If printing command statistics, sort by the average number of disk I/O operations. If printing user statistics, print the average number of disk I/O operations per user.

-D    If printing command statistics, sort and print by the total number of disk I/O operations.
-f    Force no interactive threshold comparison with the -v option.

-i    Do not read in the summary files.

-j    Instead of the total minutes per category, give seconds per call.

-k    If printing command statistics, sort by the cpu-time average memory usage. If printing user statistics, print the cpu-time average memory usage.

-K    If printing command statistics, print and sort by the cpu-storage integral.

-l    Separate system and user time; normally they are combined.

-m    Print per-user statistics rather than per-command statistics.

-n    Sort by number of calls.

-P file
      Use the specified file for accessing the per-command accounting summary database, instead of the default /var/account/savacct.

-q    Create no output other than error messages.

-r    Reverse order of sort.

-s    Truncate the accounting files when done and merge their data into the summary files.

-t    For each command, report the ratio of real time to the sum of user and system cpu times. If the cpu time is too small to report, ``*ignore*'' appears in this field.

-U file
      Use the specified file for accessing the per-user accounting summary database, instead of the default /var/account/usracct.

-u    Superseding all other flags, for each entry in the accounting file, print the user ID, total seconds of cpu usage, total memory usage, number of I/O operations performed, and command name.

-v cutoff
      For each command used cutoff times or fewer, print the command name and await a reply from the terminal. If the reply begins with ``y'', add the command to the category ``**junk**''. This flag is used to strip garbage from the report.

By default, per-command statistics will be printed. The number of calls, the total elapsed time in minutes, total cpu and user time in minutes, average number of I/O operations, and CPU-time averaged core usage will be printed.
If the -m option is specified, per-user statistics will be printed, including the user name, the number of commands invoked, total cpu time used (in minutes), total number of I/O operations, and CPU storage integral for each user. If the -u option is specified, the uid, user and system time (in seconds), CPU storage integral, I/O usage, and command name will be printed for each entry in the accounting data file. If the -u flag is specified, all flags other than -q are ignored. If the -m flag is specified, only the -b, -d, -i, -k, -q, and -s flags are honored.

FILES
      /var/account/acct      raw accounting data file
      /var/account/savacct   per-command accounting summary database
      /var/account/usracct   per-user accounting summary database

EXIT STATUS
The sa utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
      lastcomm(1), acct(5), ac(8), accton(8)

HISTORY
sa first appeared in Version 5 AT&T UNIX. sa was rewritten for NetBSD 0.9A from the specification provided by various systems' manual pages.

AUTHORS
Chris G. Demetriou <cgd@postgres.berkeley.edu>

CAVEATS
While the behavior of the options in this version of sa was modeled after the original version, there are some intentional differences and undoubtedly some unintentional ones as well. In particular, the -q option has been added, and the -m option now understands more options than it used to. The formats of the summary files created by this version of sa are very different from those used by the original version. This is not considered a problem, however, because the accounting record format has changed as well (since user ids are now 32 bits).

BUGS
The number of options to this program is absurd, especially considering that there is not much logic behind their lettering. The field labels should be more consistent. The VM system does not record the CPU storage integral.

macOS 14.5                    February 14, 2020                    macOS 14.5
|
sa – print system accounting statistics
|
sa [-abcdDfijkKlmnqrstu] [-P file] [-U file] [-v cutoff] [file ...]
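Two of the derived columns in the sa description follow directly from the raw counters: cp is u plus s, and avio is tio divided by the number of calls. A sketch with made-up accounting numbers, not real sa data:

```shell
# For a command invoked 4 times, with 2.0 user minutes, 1.0 system
# minutes, and 100 total I/O operations, compute the cp and avio columns.
stats=$(awk 'BEGIN {
    calls = 4; u = 2.0; s = 1.0; tio = 100
    printf "cp=%.1f avio=%.1f", u + s, tio / calls
}')
echo "$stats"    # prints: cp=3.0 avio=25.0
```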
|
rpc.lockd
|
The rpc.lockd utility provides monitored and unmonitored file and record locking services in an NFS environment. To monitor the status of hosts requesting locks, the locking daemon typically operates in conjunction with rpc.statd(8). The rpc.lockd utility runs whenever its services are needed to support either the NFS server (see nfsd(8)) or the NFS client (an NFS file system on which file locking requests have been made). The daemon will remain running until a short time after its services are no longer needed to support either the NFS client or NFS server. rpc.lockd will also make sure that the statd service (on which it depends) is running.

The following is a list of command line options that are available. However, since rpc.lockd is normally started by launchd(8), configuration of these options should be controlled using the equivalent settings in the NFS configuration file. See nfs.conf(5) for a list of tunable parameters.

-d    The -d option causes debugging information to be written to syslog, recording all RPC transactions to the daemon. These messages are logged with level LOG_DEBUG and facility LOG_DAEMON. Specifying a debug_level of 1 results in the generation of one log line per protocol operation. Higher debug levels can be specified, causing display of operation arguments and internal operations of the daemon.

-g    The -g option allows specifying the grace period (in seconds). During the grace period rpc.lockd only accepts requests from hosts which are reinitialising locks which existed before the server restart. The default is 45 seconds.

-x    The -x option tells rpc.lockd how long to cache state records for monitored hosts. Setting it to zero will disable the cache, which will make lock and unlock requests from a single client more expensive because of additional interaction with the client's statd. The default cache time is 60 seconds.

Error conditions are logged to syslog, irrespective of the debug level, using log level LOG_ERR and facility LOG_DAEMON.
FILES
      /var/run/lockd.pid
            The pid of the current lockd daemon.
      /System/Library/LaunchDaemons/com.apple.lockd.plist
            The lockd service's property list file for launchd(8).
      /usr/include/rpcsvc/nlm_prot.x
            RPC protocol specification for the network lock manager protocol.

SEE ALSO
      nfs.conf(5), rpc.statd(8), syslog(3), launchd(8)

BUGS
The current implementation serialises lock requests that could be shared.

STANDARDS
The implementation is based on the specification in Protocols for X/Open PC Interworking: XNFS, Issue 4, X/Open CAE Specification C218, ISBN 1 872630 66 9.

HISTORY
A version of rpc.lockd appeared in SunOS 4.

macOS 14.5                     January 9, 2007                     macOS 14.5
|
rpc.lockd – NFS file locking daemon
|
rpc.lockd [-d debug_level] [-g grace_period] [-x host_monitor_cache_period]
|
uuxqt
|
The uuxqt daemon executes commands requested by uux(1) from either the local system or from remote systems. It is started automatically by the uucico(8) daemon (unless uucico(8) is given the -q or --nouuxqt option). There is normally no need to run this command, since it will be invoked by uucico(8). However, it can be used to provide greater control over the processing of the work queue. Multiple invocations of uuxqt may be run at once, as controlled by the max-uuxqts configuration command.
|
uuxqt - UUCP execution daemon
|
uuxqt [ options ]
|
The following options may be given to uuxqt.

-c command, --command command
      Only execute requests for the specified command. For example:
            uuxqt --command rmail

-s system, --system system
      Only execute requests originating from the specified system.

-x type, --debug type
      Turn on particular debugging types. The following types are recognized: abnormal, chat, handshake, uucp-proto, proto, port, config, spooldir, execute, incoming, outgoing. Only abnormal, config, spooldir and execute are meaningful for uuxqt. Multiple types may be given, separated by commas, and the --debug option may appear multiple times. A number may also be given, which will turn on that many types from the foregoing list; for example, --debug 2 is equivalent to --debug abnormal,chat. The debugging output is sent to the debugging file, which may be printed using uulog -D.

-I file, --config file
      Set configuration file to use. This option may not be available, depending upon how uuxqt was compiled.

-v, --version
      Report version information and exit.

--help
      Print a help message and exit.

SEE ALSO
      uucp(1), uux(1), uucico(8)

AUTHOR
Ian Lance Taylor (ian@airs.com)

Taylor UUCP 1.07                                                    uuxqt(8)
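The numeric form of --debug enables a prefix of the ordered type list, so --debug 2 equals --debug abnormal,chat. A sketch of that mapping in plain shell (debug_types is a hypothetical helper name; the type ordering is the one listed above):

```shell
# Expand a numeric --debug level into the equivalent comma-separated
# type list (the first N entries of the ordered list).
debug_types() {
    n=$1; out=; i=0
    for t in abnormal chat handshake uucp-proto proto port config \
             spooldir execute incoming outgoing; do
        i=$((i + 1))
        [ "$i" -gt "$n" ] && break
        out="${out:+$out,}$t"
    done
    printf '%s\n' "$out"
}
debug_types 2    # prints: abnormal,chat
```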
| null |
mkfile
|
mkfile creates one or more files that are suitable for use as NFS-mounted swap areas. The sticky bit is set, and the file is padded with zeroes by default. Non-root users must set the sticky bit using chmod(1). The default size unit is bytes, but the following suffixes may be used to multiply by the given factor: b (512), k (1024), m (1048576), and g (1073741824).
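The suffix arithmetic can be sketched as a small shell helper (to_bytes is a hypothetical name for illustration; mkfile performs this conversion internally):

```shell
# Convert a mkfile-style size argument into bytes:
# b multiplies by 512, k by 1024, m by 1048576, g by 1073741824.
to_bytes() {
    num=${1%?}                 # all but the last character
    case $1 in
        *b) echo $(( num * 512 )) ;;
        *k) echo $(( num * 1024 )) ;;
        *m) echo $(( num * 1048576 )) ;;
        *g) echo $(( num * 1073741824 )) ;;
        *)  echo "$1" ;;       # no suffix: plain bytes
    esac
}
to_bytes 4m      # prints: 4194304
to_bytes 512b    # prints: 262144
```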
|
mkfile – create a file
|
mkfile [-nv] size[b|k|m|g] file ...
|
-n    Create an empty filename. The size is noted, but disk blocks aren't allocated until data is written to them.

-v    Verbose. Report the names and sizes of created files.

WARNING
If a client's swap file is removed and recreated, it must be re-exported before the client will be able to access it. This action may only be done when the client is not running.

SEE ALSO
      chmod(2), stat(2), sticky(7)

macOS                        September 1, 1997                         macOS
|
softwareupdate
|
Software Update checks for new and updated versions of your software based on information about your computer and current software. Invoke softwareupdate by specifying a command followed by zero or more args. softwareupdate requires admin authentication for all commands except --list. If you run softwareupdate as a normal admin user, you will be prompted for a password where required. Alternatively, you can run softwareupdate as root and avoid all further authentication prompts.

The following commands are available:

-l | --list
      List all available updates.

-i | --install
      Each update specified by args is downloaded and installed. args can be one of the following:

      -r | --recommended
            All updates that are recommended for your system. These are prefixed with a * character in the --list output.

      --os-only
            Only macOS updates.

      --safari-only
            Only Safari updates.

      -R | --restart
            Automatically restart (or shut down) if required to complete installation. If the user invoking this tool is logged in then macOS will attempt to quit all applications, logout, and restart. If the user is not logged in, macOS will trigger a forced reboot if necessary. If you wish to always perform a forced reboot, pass -f (--force).

      -a | --all
            All updates that are applicable to your system, including non-recommended ones, which are prefixed with a - character in the --list output. (Non-recommended updates are uncommon in any case.)

      item ...
            One or more specified updates. The --list output shows the item names you can specify here, prefixed by the * or - characters. See EXAMPLES.

--stdinpass
      Password to authenticate as an owner. Apple Silicon only.

--user
      Local username to authenticate as an owner. Apple Silicon only.

--list-full-installers
      List the available macOS Installers.

--fetch-full-installer
      Install the latest recommended macOS Installer. Use --full-installer-version to specify the version of macOS to install, e.g. --full-installer-version 10.15. Use --launch-installer to launch the installer automatically after it has been downloaded.

--install-rosetta
      Install Rosetta. Only applies to Apple silicon Macs. Pass --agree-to-license to agree to the software license agreement without any user interaction.

-d | --download
      Each update specified by args is downloaded but not installed. The values of args are the same as for the --install command. Updates downloaded with --download can be subsequently installed with --install, or through System Settings (as long as they remain applicable to your system). Updates are downloaded to /Library/Updates, but are not designed to be installed by double-clicking the packages in that directory: always use --install or System Settings to actually perform the install.

--schedule
      Returns the per-machine automatic (background) check preference.

-h | --help
      Print command usage.
|
softwareupdate – system software update tool
|
softwareupdate command [args ...]
|
The following examples are shown as given to the shell:

softwareupdate --list

      Software Update Tool
      Finding available software
      Software Update found the following new or updated software:
         * Label: MacBookAirEFIUpdate2.4-2.4
              Title: MacBook Air EFI Firmware Update, Version: 2.4, Size: 3817K, Recommended: YES, Action: restart,
         * Label: ProAppsQTCodecs-1.0
              Title: ProApps QuickTime codecs, Version: 1.0, Size: 968K, Recommended: YES,

sudo softwareupdate --install ProAppsQTCodecs-1.0

      Software Update Tool
      Finding available software
      Downloading ProApps QuickTime codecs
      Downloaded ProApps QuickTime codecs
      Installing ProApps QuickTime codecs
      Done with ProApps QuickTime codecs
      Done.

sudo softwareupdate --schedule

      Automatic check is on

Mac OS X                     September 11, 2012                     Mac OS X
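For scripting, the item labels can be scraped from saved --list output with awk; a sketch run against sample output in the format shown above (captured here in a heredoc rather than from a live softwareupdate invocation):

```shell
# Pull the "* Label: NAME" lines out of saved --list output; the
# resulting names are what --install and --download accept.
labels=$(awk -F': ' '/^\* Label:/ { print $2 }' <<'EOF'
* Label: MacBookAirEFIUpdate2.4-2.4
    Title: MacBook Air EFI Firmware Update, Version: 2.4, Size: 3817K,
* Label: ProAppsQTCodecs-1.0
    Title: ProApps QuickTime codecs, Version: 1.0, Size: 968K,
EOF
)
printf '%s\n' "$labels"
# prints:
# MacBookAirEFIUpdate2.4-2.4
# ProAppsQTCodecs-1.0
```

Each extracted name could then be passed to sudo softwareupdate --install on a real system.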
|
unsetpassword
|
zic
|
The zic program reads text from the file(s) named on the command line and creates the timezone information format (TZif) files specified in this input. If a filename is “-”, standard input is read. The following options are available: --version Output version information and exit. --help Output short usage message and exit. -b bloat Output backward-compatibility data as specified by bloat. If bloat is fat, generate additional data entries that work around potential bugs or incompatibilities in older software, such as software that mishandles the 64-bit generated data. If bloat is slim, keep the output files small; this can help check for the bugs and incompatibilities. The default is slim, as software that mishandles 64-bit data typically mishandles timestamps after the year 2038 anyway. Also see the -r option for another way to alter output size. -D Do not create directories. -d directory Create time conversion information files in the named directory rather than in the standard directory named below. -l timezone Use timezone as local time. The zic utility will act as if the input contained a link line of the form Link timezone localtime If timezone is ‘-’, any already-existing link is removed. -L filename Read leap second information from the file with the given name. If this option is not used, no leap second information appears in output files. -p timezone Use timezone 's rules when handling nonstandard TZ strings like “EET-2EEST” that lack transition rules. The zic utility will act as if the input contained a link line of the form Link timezone posixrules This feature is obsolete and poorly supported. Among other things it should not be used for timestamps after the year 2037, and it should not be combined with -b slim if timezone 's transitions are at standard time or Universal Time (UT) instead of local time. If timezone is ‘-’, any already-existing link is removed. 
-r [@lo][/@hi] Limit the applicability of output files to timestamps in the range from lo (inclusive) to hi (exclusive), where lo and hi are possibly-signed decimal counts of seconds since the Epoch (1970-01-01 00:00:00 UTC). Omitted counts default to extreme values. The output files use UT offset 0 and abbreviation “-00” in place of the omitted timestamp data. For example, -r -@0 omits data intended for negative timestamps (i.e., before the Epoch), and -r -@0/@2147483648 outputs data intended only for nonnegative timestamps that fit into 31-bit signed integers. Although this option typically reduces the output file's size, the size can increase due to the need to represent the timestamp range boundaries, particularly if hi causes a TZif file to contain explicit entries for pre- hi transitions rather than concisely representing them with an extended POSIX TZ string. Also see the -b slim option for another way to shrink output size. -R -@hi Generate redundant trailing explicit transitions for timestamps that occur less than hi seconds since the Epoch, even though the transitions could be more concisely represented via the extended POSIX TZ string. This option does not affect the represented timestamps. Although it accommodates nonstandard TZif readers that ignore the extended POSIX TZ string, it increases the size of the altered output files. -t file When creating local time information, put the configuration link in the named file rather than in the standard location. -v Be more verbose, and complain about the following situations: • The input specifies a link to a link, something not supported by some older parsers, including zic itself through release 2022e. • A year that appears in a data file is outside the range of representable years. • A time of 24:00 or more appears in the input. Pre-1998 versions of zic prohibit 24:00, and pre-2007 versions prohibit times greater than 24:00. • A rule goes past the start or end of the month. 
Pre-2004 versions of zic prohibit this. • A time zone abbreviation uses a ‘%z’ format. Pre-2015 versions of zic do not support this. • A timestamp contains fractional seconds. Pre-2018 versions of zic do not support this. • The input contains abbreviations that are mishandled by pre-2018 versions of zic due to a longstanding coding bug. These abbreviations include “L” for “Link”, “mi” for “min”, “Sa” for “Sat”, and “Su” for “Sun”. • The output file does not contain all the information about the long-term future of a timezone, because the future cannot be summarized as an extended POSIX TZ string. For example, as of 2019 this problem occurs for Iran's daylight-saving rules for the predicted future, as these rules are based on the Iranian calendar, which cannot be represented. • The output contains data that may not be handled properly by client code designed for older zic output formats. These compatibility issues affect only timestamps before 1970 or after the start of 2038. • The output contains a truncated leap second table, which can cause some older TZif readers to misbehave. This can occur if the -L option is used, and either an Expires line is present or the -r option is also used. • The output file contains more than 1200 transitions, which may be mishandled by some clients. The current reference client supports at most 2000 transitions; pre-2014 versions of the reference client support at most 1200 transitions. • A time zone abbreviation has fewer than 3 or more than 6 characters. POSIX requires at least 3, and requires implementations to support at least 6. • An output file name contains a byte that is not an ASCII letter, “-”, “/”, or “_”; or it contains a file name component that contains more than 14 bytes or that starts with “-”. FILES Input files use the format described in this section; output files use tzfile(5) format. 
Input files should be text files, that is, they should be a series of zero or more lines, each ending in a newline byte and containing at most 2048 bytes counting the newline, and without any NUL bytes. The input text's encoding is typically UTF-8 or ASCII; it should have a unibyte representation for the POSIX Portable Character Set (PPCS) https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap06.html and the encoding's non-unibyte characters should consist entirely of non- PPCS bytes. Non-PPCS characters typically occur only in comments: although output file names and time zone abbreviations can contain nearly any character, other software will work better if these are limited to the restricted syntax described under the -v option. Input lines are made up of fields. Fields are separated from one another by one or more white space characters. The white space characters are space, form feed, carriage return, newline, tab, and vertical tab. Leading and trailing white space on input lines is ignored. An unquoted sharp character (#) in the input introduces a comment which extends to the end of the line the sharp character appears on. White space characters and sharp characters may be enclosed in double quotes (") if they're to be used as part of a field. Any line that is blank (after comment stripping) is ignored. Nonblank lines are expected to be of one of three types: rule lines, zone lines, and link lines. Names must be in English and are case insensitive. They appear in several contexts, and include month and weekday names and keywords such as “maximum”, “only”, “Rolling”, and “Zone”. A name can be abbreviated by omitting all but an initial prefix; any abbreviation must be unambiguous in context. A rule line has the form Rule NAME FROM TO - IN ON AT SAVE LETTER/S For example: Rule US 1967 1973 - Apr lastSun 2:00w 1:00d D The fields that make up a rule line are: NAME Gives the name of the rule set that contains this line. 
The name must start with a character that is neither an ASCII digit nor “-” nor “+”. To allow for future extensions, an unquoted name should not contain characters from the set “‘!$%&'()*,/:;<=>?@[]^`{|}~’”. FROM Gives the first year in which the rule applies. Any signed integer year can be supplied; the proleptic Gregorian calendar is assumed, with year 0 preceding year 1. The word minimum (or an abbreviation) means the indefinite past. The word maximum (or an abbreviation) means the indefinite future. Rules can describe times that are not representable as time values, with the unrepresentable times ignored; this allows rules to be portable among hosts with differing time value types. TO Gives the final year in which the rule applies. In addition to minimum and maximum (as above), the word only (or an abbreviation) may be used to repeat the value of the FROM field. - Is a reserved field and should always contain ‘-’ for compatibility with older versions of zic. It was previously known as the TYPE field, which could contain values to allow a separate script to further restrict in which “types” of years the rule would apply. IN Names the month in which the rule takes effect. Month names may be abbreviated. ON Gives the day on which the rule takes effect. Recognized forms include: 5 the fifth of the month lastSun the last Sunday in the month lastMon the last Monday in the month Sun>=8 first Sunday on or after the eighth Sun<=25 last Sunday on or before the 25th A weekday name (e.g., ‘Sunday’) or a weekday name preceded by “last” (e.g., ‘lastSunday’) may be abbreviated or spelled out in full. There must be no white space characters within the ON field. The “<=” and “>=” constructs can result in a day in the neighboring month; for example, the IN-ON combination “Oct Sun>=31” stands for the first Sunday on or after October 31, even if that Sunday occurs in November. AT Gives the time of day at which the rule takes effect, relative to 00:00, the start of a calendar day. 
Recognized forms include: 2 time in hours 2:00 time in hours and minutes 01:28:14 time in hours, minutes, and seconds 00:19:32.13 time with fractional seconds 12:00 midday, 12 hours after 00:00 15:00 3 PM, 15 hours after 00:00 24:00 end of day, 24 hours after 00:00 260:00 260 hours after 00:00 -2:30 2.5 hours before 00:00 - equivalent to 0 Although zic rounds times to the nearest integer second (breaking ties to the even integer), the fractions may be useful to other applications requiring greater precision. The source format does not specify any maximum precision. Any of these forms may be followed by the letter ‘w’ if the given time is local or “wall clock” time, ‘s’ if the given time is standard time without any adjustment for daylight saving, or ‘u’ (or ‘g’ or ‘z’) if the given time is universal time; in the absence of an indicator, local (wall clock) time is assumed. These forms ignore leap seconds; for example, if a leap second occurs at 00:59:60 local time, ‘1:00’ stands for 3601 seconds after local midnight instead of the usual 3600 seconds. The intent is that a rule line describes the instants when a clock/calendar set to the type of time specified in the AT field would show the specified date and time of day. SAVE Gives the amount of time to be added to local standard time when the rule is in effect, and whether the resulting time is standard or daylight saving. This field has the same format as the AT field except with a different set of suffix letters: ‘s’ for standard time and ‘d’ for daylight saving time. The suffix letter is typically omitted, and defaults to ‘s’ if the offset is zero and to ‘d’ otherwise. Negative offsets are allowed; in Ireland, for example, daylight saving time is observed in winter and has a negative offset relative to Irish Standard Time. The offset is merely added to standard time; for example, zic does not distinguish a 10:30 standard time plus an 0:30 SAVE from a 10:00 standard time plus a 1:00 SAVE. 
LETTER/S Gives the “variable part” (for example, the “S” or “D” in “EST” or “EDT”) of time zone abbreviations to be used when this rule is in effect. If this field is ‘-’, the variable part is null. A zone line has the form Zone NAME STDOFF RULES FORMAT [UNTIL] For example: Zone Asia/Amman 2:00 Jordan EE%sT 2017 Oct 27 01:00 The fields that make up a zone line are: NAME The name of the timezone. This is the name used in creating the time conversion information file for the timezone. It should not contain a file name component “.” or “..”; a file name component is a maximal substring that does not contain “/”. STDOFF The amount of time to add to UT to get standard time, without any adjustment for daylight saving. This field has the same format as the AT and SAVE fields of rule lines, except without suffix letters; begin the field with a minus sign if time must be subtracted from UT. RULES The name of the rules that apply in the timezone or, alternatively, a field in the same format as a rule-line SAVE column, giving the amount of time to be added to local standard time and whether the resulting time is standard or daylight saving. If this field is ‘-’ then standard time always applies. When an amount of time is given, only the sum of standard time and this amount matters. FORMAT The format for time zone abbreviations. The pair of characters ‘%s’ is used to show where the “variable part” of the time zone abbreviation goes. Alternatively, a format can use the pair of characters ‘%z’ to stand for the UT offset in the form ± hh, ± hhmm, or ± hhmmss, using the shortest form that does not lose information, where hh, mm, and ss are the hours, minutes, and seconds east (+) or west (-) of UT. Alternatively, a slash (/) separates standard and daylight abbreviations. To conform to POSIX, a time zone abbreviation should contain only alphanumeric ASCII characters, ‘+’ and ‘-’. By convention, the time zone abbreviation ‘-00’ is a placeholder that means local time is unspecified. 
UNTIL The time at which the UT offset or the rule(s) change for a location. It takes the form of one to four fields YEAR [MONTH [DAY [TIME]]]. If this is specified, the time zone information is generated from the given UT offset and rule change until the time specified, which is interpreted using the rules in effect just before the transition. The month, day, and time of day have the same format as the IN, ON, and AT fields of a rule; trailing fields can be omitted, and default to the earliest possible value for the missing fields. The next line must be a “continuation” line; this has the same form as a zone line except that the string “Zone” and the name are omitted, as the continuation line will place information starting at the time specified as the “until” information in the previous line in the file used by the previous line. Continuation lines may contain “until” information, just as zone lines do, indicating that the next line is a further continuation. If a zone changes at the same instant that a rule would otherwise take effect in the earlier zone or continuation line, the rule is ignored. A zone or continuation line L with a named rule set starts with standard time by default: that is, any of L 's timestamps preceding L 's earliest rule use the rule in effect after L 's first transition into standard time. In a single zone it is an error if two rules take effect at the same instant, or if two zone changes take effect at the same instant. If a continuation line subtracts N seconds from the UT offset after a transition that would be interpreted to be later if using the continuation line's UT offset and rules, the “until” time of the previous zone or continuation line is interpreted according to the continuation line's UT offset and rules, and any rule that would otherwise take effect in the next N seconds is instead assumed to take effect simultaneously. 
For example: # Rule NAME FROM TO - IN ON AT SAVE LETTER/S Rule US 1967 2006 - Oct lastSun 2:00 0 S Rule US 1967 1973 - Apr lastSun 2:00 1:00 D # Zone NAME STDOFF RULES FORMAT [UNTIL] Zone America/Menominee -5:00 - EST 1973 Apr 29 2:00 -6:00 US C%sT Here, an incorrect reading would be that there were two clock changes on 1973-04-29, the first from 02:00 EST (-05) to 01:00 CST (-06), and the second an hour later from 02:00 CST (-06) to 03:00 CDT (-05). However, zic interprets this more sensibly as a single transition from 02:00 EST (-05) to 02:00 CDT (-05). A link line has the form Link TARGET LINK-NAME For example: Link Europe/Istanbul Asia/Istanbul The TARGET field should appear as the NAME field in some zone line or as the LINK-NAME field in some link line. The LINK-NAME field is used as an alternative name for that zone; it has the same syntax as a zone line's NAME field. Links can chain together, although the behavior is unspecified if a chain of one or more links does not terminate in a Zone name. A link line can appear before the line that defines the link target. For example: Link Greenwich G_M_T Link Etc/GMT Greenwich Zone Etc/GMT 0 - GMT The two links are chained together, and G_M_T, Greenwich, and Etc/GMT all name the same zone. Except for continuation lines, lines may appear in any order in the input. However, the behavior is unspecified if multiple zone or link lines define the same name. The file that describes leap seconds can have leap lines and an expiration line. Leap lines have the following form: Leap YEAR MONTH DAY HH:MM:SS CORR R/S For example: Leap 2016 Dec 31 23:59:60 + S The YEAR, MONTH, DAY, and HH:MM:SS fields tell when the leap second happened. The CORR field should be ‘+’ if a second was added or ‘-’ if a second was skipped. 
The R/S field should be (an abbreviation of) “Stationary” if the leap second time given by the other fields should be interpreted as UTC or (an abbreviation of) “Rolling” if the leap second time given by the other fields should be interpreted as local (wall clock) time. Rolling leap seconds were implemented back when it was not clear whether common practice was rolling or stationary, with concerns that one would see Times Square ball drops where there'd be a “3... 2... 1... leap... Happy New Year” countdown, placing the leap second at midnight New York time rather than midnight UTC. However, this countdown style does not seem to have caught on, which means rolling leap seconds are not used in practice; also, they are not supported if the -r option is used. The expiration line, if present, has the form: Expires YEAR MONTH DAY HH:MM:SS For example: Expires 2020 Dec 28 00:00:00 The YEAR, MONTH, DAY, and HH:MM:SS fields give the expiration timestamp in UTC for the leap second table. EXTENDED EXAMPLE Here is an extended example of zic input, intended to illustrate many of its features. # Rule NAME FROM TO - IN ON AT SAVE LETTER/S Rule Swiss 1941 1942 - May Mon>=1 1:00 1:00 S Rule Swiss 1941 1942 - Oct Mon>=1 2:00 0 - Rule EU 1977 1980 - Apr Sun>=1 1:00u 1:00 S Rule EU 1977 only - Sep lastSun 1:00u 0 - Rule EU 1978 only - Oct 1 1:00u 0 - Rule EU 1979 1995 - Sep lastSun 1:00u 0 - Rule EU 1981 max - Mar lastSun 1:00u 1:00 S Rule EU 1996 max - Oct lastSun 1:00u 0 - # Zone NAME STDOFF RULES FORMAT [UNTIL] Zone Europe/Zurich 0:34:08 - LMT 1853 Jul 16 0:29:45.50 - BMT 1894 Jun 1:00 Swiss CE%sT 1981 1:00 EU CE%sT Link Europe/Zurich Europe/Vaduz In this example, the EU rules are for the European Union and for its predecessor organization, the European Communities. The timezone is named Europe/Zurich and it has the alias Europe/Vaduz. 
This example says that Zurich was 34 minutes and 8 seconds east of UT until 1853-07-16 at 00:00, when the legal offset was changed to 7°26′22.50″, which works out to 0:29:45.50; zic treats this by rounding it to 0:29:46. After 1894-06-01 at 00:00 the UT offset became one hour and Swiss daylight saving rules (defined with lines beginning with “Rule Swiss”) apply. From 1981 to the present, EU daylight saving rules have applied, and the UTC offset has remained at one hour. In 1941 and 1942, daylight saving time applied from the first Monday in May at 01:00 to the first Monday in October at 02:00. The pre-1981 EU daylight-saving rules have no effect here, but are included for completeness. Since 1981, daylight saving has begun on the last Sunday in March at 01:00 UTC. Until 1995 it ended the last Sunday in September at 01:00 UTC, but this changed to the last Sunday in October starting in 1996. For purposes of display, “LMT” and “BMT” were initially used, respectively. Since Swiss rules and later EU rules were applied, the time zone abbreviation has been CET for standard time and CEST for daylight saving time. FILES /etc/localtime Default local timezone file. /usr/share/zoneinfo Default timezone information directory. NOTES For areas with more than two types of local time, you may need to use local standard time in the AT field of the earliest transition time's rule to ensure that the earliest transition time recorded in the compiled file is correct. If, for a particular timezone, a clock advance caused by the start of daylight saving coincides with and is equal to a clock retreat caused by a change in UT offset, zic produces a single transition to daylight saving at the new UT offset without any change in local (wall clock) time. To get separate transitions use multiple zone continuation lines specifying transition instants using universal time. SEE ALSO tzfile(5), zdump(8) macOS 14.5 January 21, 2023 macOS 14.5
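The Rule and Zone syntax described above can be exercised end to end with a small input file. The sketch below is not taken from this manual: the file name tz-example.txt, the output directory ./zoneinfo, the rule-set name Ex, and the zone name Example/Town are all invented for illustration, and the compile step is skipped on systems without zic installed.

```shell
# Write a toy input file using the Rule/Zone line syntax described above.
cat > tz-example.txt <<'EOF'
# Rule NAME FROM TO   - IN  ON      AT   SAVE LETTER/S
Rule   Ex   1990 max  - Apr Sun>=1  2:00 1:00 D
Rule   Ex   1990 max  - Oct lastSun 2:00 0    S
# Zone NAME          STDOFF RULES FORMAT
Zone   Example/Town  -5:00  Ex    E%sT
EOF
# Compile into ./zoneinfo and dump a few transitions; guarded so the
# sketch is a no-op where zic is not available.
if command -v zic >/dev/null 2>&1; then
    zic -d ./zoneinfo tz-example.txt
    zdump -v ./zoneinfo/Example/Town 2>/dev/null | head -n 4
fi
```

Because the rules run to "max", the compiled TZif file can represent the long-term future concisely as an extended POSIX TZ string, so zic -v would not complain about an unsummarizable future here.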
|
zic – timezone compiler
|
zic [--help] [--version] [-Dsv] [-b slim | fat] [-d directory] [-g gid] [-l localtime] [-L leapseconds] [-m mode] [-p posixrules] [-r [@lo][/@hi]] [-R -@hi] [-t localtime-link] [-u uid] [filename ...]
| null | null |
nfs4mapid
|
In the first form, nfs4mapid shows translations from NFSv4 string representations of users, and with the -G option, groups, to the corresponding local uids and gids. The second form shows the translations from OpenDirectory GUIDs to NFSv4 strings. The well-known string names (which are distinguished by a trailing ‘@’ ), such as "OWNER@" and "GROUP@", are represented locally by GUIDs and may not map to uids or gids. To map those GUIDs to NFSv4 strings, use this form. The first form can be used to map the well-known ids to GUIDs; nfs4mapid does this by looking at the trailing ‘@’ sign. Note that NFSv4 well-known names are always groups and are used in ACEs. In the third form, it shows the mapping from uids to the NFSv4 user@domain form. Similarly, in the last form it shows the mapping from gids to the NFSv4 group@domain form. nfs4mapid will also show the intermediate GUID translation if used. The NFSv4 domain name should be set with dscl(1). See opendirectory(8) for instructions. -G Map an NFSv4 string to a gid. -u Map a uid to an NFSv4 user@domain string. -g Map a gid to an NFSv4 group@domain string. NOTES nfs4mapid uses a privileged NFS client system call to pass the translation request down to the kernel, so results will be the same as for a request coming from an NFSv4 server. Because of this, nfs4mapid must be run with root privileges. SEE ALSO dscl(1), nfs(5), opendirectoryd(8), mount_nfs(8) HISTORY The nfs4mapid utility first appeared in OS X 10.10. macOS 14.5 February 20, 2014 macOS 14.5
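The invocation forms can be sketched as a shell session. Since nfs4mapid is macOS-only and must run as root, the commands below are printed rather than executed, and the user name, group name, GUID, uid, gid, and domain are all hypothetical examples, not values from this manual.

```shell
# Print (not run) one invocation per form; real runs need root on macOS.
show() { printf '%s\n' "$*"; }
show nfs4mapid ed@example.com                       # first form: user string -> uid
show nfs4mapid -G staff@example.com                 # first form with -G: group string -> gid
show nfs4mapid GROUP@                               # well-known name -> GUID
show nfs4mapid 01234567-89AB-CDEF-0123-456789ABCDEF # second form: GUID -> NFSv4 string
show nfs4mapid -u 501                               # third form: uid -> user@domain string
show nfs4mapid -g 20                                # last form: gid -> group@domain string
```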
|
nfs4mapid – show NFSv4 mappings from uids or gids to over-the-wire string names, and from string names to uids or gids.
|
nfs4mapid [-G] string-name
nfs4mapid [-G] GUID
nfs4mapid -u uid
nfs4mapid -g gid
| null | null |
postsuper
|
The postsuper(1) command does maintenance jobs on the Postfix queue. Use of the command is restricted to the superuser. See the postqueue(1) command for unprivileged queue operations such as listing or flushing the mail queue. By default, postsuper(1) performs the operations requested with the -s and -p command-line options on all Postfix queue directories - this includes the incoming, active and deferred directories with mail files and the bounce, defer, trace and flush directories with log files. Options: -c config_dir The main.cf configuration file is in the named directory instead of the default configuration directory. See also the MAIL_CONFIG environment setting below. -d queue_id Delete one message with the named queue ID from the named mail queue(s) (default: hold, incoming, active and deferred). To delete multiple files, specify the -d option multiple times, or specify a queue_id of - to read queue IDs from standard input. For example, to delete all mail with exactly one recipient user@example.com: mailq | tail -n +2 | grep -v '^ *(' | awk 'BEGIN { RS = "" } # $7=sender, $8=recipient1, $9=recipient2 { if ($8 == "user@example.com" && $9 == "") print $1 } ' | tr -d '*!' | postsuper -d - Specify "-d ALL" to remove all messages; for example, specify "-d ALL deferred" to delete all mail in the deferred queue. As a safety measure, the word ALL must be specified in upper case. Warning: Postfix queue IDs are reused (always with Postfix <= 2.8; and with Postfix >= 2.9 when enable_long_queue_ids=no). There is a very small possibility that postsuper deletes the wrong message file when it is executed while the Postfix mail system is delivering mail. The scenario is as follows: 1) The Postfix queue manager deletes the message that postsuper(1) is asked to delete, because Postfix is finished with the message (it is delivered, or it is returned to the sender). 
2) New mail arrives, and the new message is given the same queue ID as the message that postsuper(1) is supposed to delete. The probability for reusing a deleted queue ID is about 1 in 2**15 (the number of different microsecond values that the system clock can distinguish within a second). 3) postsuper(1) deletes the new message, instead of the old message that it should have deleted. -h queue_id Put mail "on hold" so that no attempt is made to deliver it. Move one message with the named queue ID from the named mail queue(s) (default: incoming, active and deferred) to the hold queue. To hold multiple files, specify the -h option multiple times, or specify a queue_id of - to read queue IDs from standard input. Specify "-h ALL" to hold all messages; for example, specify "-h ALL deferred" to hold all mail in the deferred queue. As a safety measure, the word ALL must be specified in upper case. Note: while mail is "on hold" it will not expire when its time in the queue exceeds the maximal_queue_lifetime or bounce_queue_lifetime setting. It becomes subject to expiration after it is released from "hold". This feature is available in Postfix 2.0 and later. -H queue_id Release mail that was put "on hold". Move one message with the named queue ID from the named mail queue(s) (default: hold) to the deferred queue. To release multiple files, specify the -H option multiple times, or specify a queue_id of - to read queue IDs from standard input. Note: specify "postsuper -r" to release mail that was kept on hold for a significant fraction of $maximal_queue_lifetime or $bounce_queue_lifetime, or longer. Specify "-H ALL" to release all mail that is "on hold". As a safety measure, the word ALL must be specified in upper case. This feature is available in Postfix 2.0 and later. -p Purge old temporary files that are left over after system or software crashes. 
-r queue_id Requeue the message with the named queue ID from the named mail queue(s) (default: hold, incoming, active and deferred). To requeue multiple files, specify the -r option multiple times, or specify a queue_id of - to read queue IDs from standard input. Specify "-r ALL" to requeue all messages. As a safety measure, the word ALL must be specified in upper case. A requeued message is moved to the maildrop queue, from where it is copied by the pickup(8) and cleanup(8) daemons to a new queue file. In many respects its handling differs from that of a new local submission. • The message is not subjected to the smtpd_milters or non_smtpd_milters settings. When mail has passed through an external content filter, this would produce incorrect results with Milter applications that depend on original SMTP connection state information. • The message is subjected again to mail address rewriting and substitution. This is useful when rewriting rules or virtual mappings have changed. The address rewriting context (local or remote) is the same as when the message was received. • The message is subjected to the same content_filter settings (if any) as used for new local mail submissions. This is useful when content_filter settings have changed. Warning: Postfix queue IDs are reused (always with Postfix <= 2.8; and with Postfix >= 2.9 when enable_long_queue_ids=no). There is a very small possibility that postsuper(1) requeues the wrong message file when it is executed while the Postfix mail system is running, but no harm should be done. This feature is available in Postfix 1.1 and later. -s Structure check and structure repair. This should be done once before Postfix startup. • Rename files whose name does not match the message file inode number. This operation is necessary after restoring a mail queue from a different machine or from backup, when queue files were created with Postfix <= 2.8 or with "enable_long_queue_ids = no". 
• Move queue files that are in the wrong place in the file system hierarchy and remove subdirectories that are no longer needed. File position rearrangements are necessary after a change in the hash_queue_names and/or hash_queue_depth configuration parameters. • Rename queue files created with "enable_long_queue_ids = yes" to short names, for migration to Postfix <= 2.8. The procedure is as follows: # postfix stop # postconf enable_long_queue_ids=no # postsuper Run postsuper(1) repeatedly until it stops reporting file name changes. -S A redundant version of -s that requires that long file names also match the message file inode number. This option exists for testing purposes, and is available with Postfix 2.9 and later. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. DIAGNOSTICS Problems are reported to the standard error stream and to syslogd(8). postsuper(1) reports the number of messages deleted with -d, the number of messages requeued with -r, and the number of messages whose queue file name was fixed with -s. The report is written to the standard error stream and to syslogd(8). ENVIRONMENT MAIL_CONFIG Directory with the main.cf file. BUGS Mail that is not sanitized by Postfix (i.e. mail in the maildrop queue) cannot be placed "on hold". CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. The text below provides only a parameter summary. See postconf(5) for more details including examples. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. hash_queue_depth (1) The number of subdirectory levels for queue directories listed with the hash_queue_names parameter. hash_queue_names (deferred, defer) The names of queue directories that are split across multiple subdirectory levels. 
import_environment (see 'postconf -d' output) The list of environment parameters that a privileged Postfix process will import from a non-Postfix parent process, or name=value environment overrides. queue_directory (see 'postconf -d' output) The location of the Postfix top-level queue directory. syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". Available in Postfix version 2.9 and later: enable_long_queue_ids (no) Enable long, non-repeating, queue IDs (queue file names). SEE ALSO sendmail(1), Sendmail-compatible user interface postqueue(1), unprivileged queue operations LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTSUPER(1)
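The recipient-selection awk program from the -d example above can be dry-run against fabricated mailq-style output before wiring it to "postsuper -d -". The queue IDs and addresses below are made up, and the header line that "tail -n +2" would strip is omitted:

```shell
# Two fabricated queue records in mailq's blank-line-separated format:
# queue ID (with a '*' or '!' status marker), size, four date fields,
# sender, then one indented recipient line per recipient.
sample='4C9D8E2B31*     1024 Tue Feb  1 12:00:00  sender@example.org
                                         user@example.com

7A0B1C2D33!     2048 Tue Feb  1 12:05:00  other@example.org
                                         user@example.com
                                         second@example.net'
# Same selection as the manual's -d pipeline: keep records whose only
# recipient is user@example.com, print the queue ID, strip the marker.
ids=$(printf '%s\n' "$sample" |
    awk 'BEGIN { RS = "" }
         { if ($8 == "user@example.com" && $9 == "") print $1 }' |
    tr -d '*!')
printf '%s\n' "$ids"    # only the single-recipient message matches: 4C9D8E2B31
```

The two-recipient record is excluded because its ninth whitespace-separated field ($9, the second recipient) is non-empty; in paragraph mode (RS = "") awk treats each blank-line-separated record as one unit and splits fields across its lines.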
|
postsuper - Postfix superintendent
|
postsuper [-psSv] [-c config_dir] [-d queue_id] [-h queue_id] [-H queue_id] [-r queue_id] [directory ...]
| null | null |
htpasswd
| null |
htpasswd - Manage user files for basic authentication
|
htpasswd [ -c ] [ -i ] [ -m | -B | -d | -s | -p ] [ -C cost ] [ -D ] [ -v ] passwdfile username
htpasswd -b [ -c ] [ -m | -B | -d | -s | -p ] [ -C cost ] [ -D ] [ -v ] passwdfile username password
htpasswd -n [ -i ] [ -m | -B | -d | -s | -p ] [ -C cost ] username
htpasswd -nb [ -m | -B | -d | -s | -p ] [ -C cost ] username password

SUMMARY htpasswd is used to create and update the flat-files used to store usernames and passwords for basic authentication of HTTP users. If htpasswd cannot access a file, such as not being able to write to the output file or not being able to read the file in order to update it, it returns an error status and makes no changes. Resources available from the Apache HTTP server can be restricted to just the users listed in the files created by htpasswd. This program can only manage usernames and passwords stored in a flat-file. It can encrypt and display password information for use in other types of data stores, though. To use a DBM database see dbmmanage or htdbm. htpasswd encrypts passwords using either bcrypt, a version of MD5 modified for Apache, SHA1, or the system's crypt() routine. Files managed by htpasswd may contain a mixture of different encoding types of passwords; some user records may have bcrypt or MD5-encrypted passwords while others in the same file may have passwords encrypted with crypt(). This manual page only lists the command line arguments. For details of the directives necessary to configure user authentication in httpd see the Apache manual, which is part of the Apache distribution or can be found at http://httpd.apache.org/.
|
-b Use batch mode; i.e., get the password from the command line rather than prompting for it. This option should be used with extreme care, since the password is clearly visible on the command line. For script use see the -i option. Available in 2.4.4 and later. -i Read the password from stdin without verification (for script usage). -c Create the passwdfile. If passwdfile already exists, it is rewritten and truncated. This option cannot be combined with the -n option. -n Display the results on standard output rather than updating a file. This is useful for generating password records acceptable to Apache for inclusion in non-text data stores. This option changes the syntax of the command line, since the passwdfile argument (usually the first one) is omitted. It cannot be combined with the -c option. -m Use MD5 encryption for passwords. This is the default (since version 2.2.18). -B Use bcrypt encryption for passwords. This is currently considered to be very secure. -C This flag is only allowed in combination with -B (bcrypt encryption). It sets the computing time used for the bcrypt algorithm (higher is more secure but slower, default: 5, valid: 4 to 17). -d Use crypt() encryption for passwords. This is not supported by the httpd server on Windows and Netware. This algorithm limits the password length to 8 characters. This algorithm is insecure by today's standards. It used to be the default algorithm until version 2.2.17. -s Use SHA encryption for passwords. Facilitates migration from/to Netscape servers using the LDAP Directory Interchange Format (ldif). This algorithm is insecure by today's standards. -p Use plaintext passwords. Though htpasswd will support creation on all platforms, the httpd daemon will only accept plain text passwords on Windows and Netware. -D Delete user. If the username exists in the specified htpasswd file, it will be deleted. -v Verify password. 
Verify that the given password matches the password of the user stored in the specified htpasswd file. Available in 2.4.5 and later. passwdfile Name of the file to contain the user name and password. If -c is given, this file is created if it does not already exist, or rewritten and truncated if it does exist. username The username to create or update in passwdfile. If username does not exist in this file, an entry is added. If it does exist, the password is changed. password The plaintext password to be encrypted and stored in the file. Only used with the -b flag. EXIT STATUS htpasswd returns a zero status ("true") if the username and password have been successfully added or updated in the passwdfile. htpasswd returns 1 if it encounters some problem accessing files, 2 if there was a syntax problem with the command line, 3 if the password was entered interactively and the verification entry didn't match, 4 if its operation was interrupted, 5 if a value is too long (username, filename, password, or final computed record), 6 if the username contains illegal characters (see the Restrictions section), and 7 if the file is not a valid password file.
|
htpasswd /usr/local/etc/apache/.htpasswd-users jsmith Adds or modifies the password for user jsmith. The user is prompted for the password. The password will be encrypted using the modified Apache MD5 algorithm. If the file does not exist, htpasswd will do nothing except return an error. htpasswd -c /home/doe/public_html/.htpasswd jane Creates a new file and stores a record in it for user jane. The user is prompted for the password. If the file exists and cannot be read, or cannot be written, it is not altered and htpasswd will display a message and return an error status. htpasswd -db /usr/web/.htpasswd-all jones Pwd4Steve Encrypts the password from the command line (Pwd4Steve) using the crypt() algorithm, and stores it in the specified file. SECURITY CONSIDERATIONS Web password files such as those managed by htpasswd should not be within the Web server's URI space -- that is, they should not be fetchable with a browser. This program is not safe as a setuid executable. Do not make it setuid. The use of the -b option is discouraged, since when it is used the unencrypted password appears on the command line. When using the crypt() algorithm, note that only the first 8 characters of the password are used to form the password. If the supplied password is longer, the extra characters will be silently discarded. The SHA encryption format does not use salting: for a given password, there is only one encrypted representation. The crypt() and MD5 formats permute the representation by prepending a random salt string, to make dictionary attacks against the passwords more difficult. The SHA and crypt() formats are insecure by today's standards. RESTRICTIONS On the Windows platform, passwords encrypted with htpasswd are limited to no more than 255 characters in length. Longer passwords will be truncated to 255 characters. The MD5 algorithm used by htpasswd is specific to the Apache software; passwords encrypted using it will not be usable with other Web servers. 
Usernames are limited to 255 bytes and may not include the character :. The cost of computing a bcrypt password hash value increases with the number of rounds specified by the -C option. The apr-util library enforces a maximum number of rounds of 17 in version 1.6.0 and later. Apache HTTP Server 2019-08-09 HTPASSWD(1)
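The {SHA} records written by htpasswd -s are plain base64-encoded, unsalted SHA-1 digests, so their format can be reproduced without htpasswd itself. A minimal sketch, assuming openssl(1) is available (the username and password here are illustrative):

```shell
# Build a record equivalent to `htpasswd -nbs jane password`
# (assumes openssl(1); "jane"/"password" are example values).
user="jane"
pass="password"
hash=$(printf '%s' "$pass" | openssl dgst -binary -sha1 | openssl base64)
printf '%s:{SHA}%s\n' "$user" "$hash"
# → jane:{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=
```

Because the scheme is unsalted, identical passwords always yield identical records, which is one reason the SECURITY CONSIDERATIONS section above calls the SHA format insecure by today's standards.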
|
rotatelogs
|
rotatelogs - Piped logging program to rotate Apache logs
|
rotatelogs [ -l ] [ -L linkname ] [ -p program ] [ -f ] [ -D ] [ -t ] [ -v ] [ -e ] [ -c ] [ -n number-of-files ] logfile rotationtime|filesize(B|K|M|G) [ offset ] SUMMARY rotatelogs is a simple program for use in conjunction with Apache's piped logfile feature. It supports rotation based on a time interval or maximum size of the log.
|
-l Causes the use of local time rather than GMT as the base for the interval or for strftime(3) formatting with size-based rotation. -L linkname Causes a hard link to be made from the current logfile to the specified link name. This can be used to watch the log continuously across rotations using a command like tail -F linkname. If the linkname is not an absolute path, it is relative to rotatelogs' working directory, which is the ServerRoot when rotatelogs is run by the server. -p program If given, rotatelogs will execute the specified program every time a new log file is opened. The filename of the newly opened file is passed as the first argument to the program. If executing after a rotation, the old log file is passed as the second argument. rotatelogs does not wait for the specified program to terminate before continuing to operate, and will not log any error code returned on termination. The spawned program uses the same stdin, stdout, and stderr as rotatelogs itself, and also inherits the environment. -f Causes the logfile to be opened immediately, as soon as rotatelogs starts, instead of waiting for the first logfile entry to be read (for non-busy sites, there may be a substantial delay between when the server is started and when the first request is handled, meaning that the associated logfile does not "exist" until then, which causes problems from some automated logging tools) -D Creates the parent directories of the path that the log file will be placed in if they do not already exist. This allows strftime(3) formatting to be used in the path and not just the filename. -t Causes the logfile to be truncated instead of rotated. This is useful when a log is processed in real time by a command like tail, and there is no need for archived data. No suffix will be added to the filename, however format strings containing '%' characters will be respected. -T Causes all but the initial logfile to be truncated when opened. 
This is useful when the format string contains something that will loop around, such as the day of the month. Available in 2.4.56 and later. -v Produce verbose output on STDERR. The output contains the result of the configuration parsing, and all file open and close actions. -e Echo logs through to stdout. Useful when logs need to be further processed in real time by a further tool in the chain. -c Create log file for each interval, even if empty. -n number-of-files Use a circular list of filenames without timestamps. This option overwrites log files at startup and during rotation. With -n 3, the series of log files opened would be "logfile", "logfile.1", "logfile.2", then overwriting "logfile". When this program first opens "logfile", the file will only be truncated if -t is also provided. Every subsequent rotation will always begin with truncation of the target file. For size based rotation without -t and existing log files in place, this option may result in unintuitive behavior such as initial log entries being sent to "logfile.1", and entries in "logfile.1" not being preserved even if later "logfile.n" have not yet been used. Available in 2.4.5 and later. logfile The path plus basename of the logfile. If logfile includes any '%' characters, it is treated as a format string for strftime(3). Otherwise, the suffix .nnnnnnnnnn is automatically added and is the time in seconds (unless the -t option is used). Both formats compute the start time from the beginning of the current period. For example, if a rotation time of 86400 is specified, the hour, minute, and second fields created from the strftime(3) format will all be zero, referring to the beginning of the current 24-hour period (midnight). When using strftime(3) filename formatting, be sure the log file format has enough granularity to produce a different file name each time the logs are rotated. Otherwise rotation will overwrite the same file instead of starting a new one. 
For example, if logfile was /var/log/errorlog.%Y-%m-%d with log rotation at 5 megabytes, but 5 megabytes was reached twice in the same day, the same log file name would be produced and log rotation would keep writing to the same file. If the logfile is not an absolute path, it is relative to rotatelogs' working directory, which is the ServerRoot when rotatelogs is run by the server. rotationtime The time between log file rotations in seconds. The rotation occurs at the beginning of this interval. For example, if the rotation time is 3600, the log file will be rotated at the beginning of every hour; if the rotation time is 86400, the log file will be rotated every night at midnight. (If no data is logged during an interval, no file will be created.) filesize(B|K|M|G) The maximum file size, followed by exactly one of the letters B (Bytes), K (KBytes), M (MBytes) or G (GBytes). When time and size are specified, the size must be given after the time. Rotation will occur whenever either time or size limits are reached. offset The number of minutes offset from UTC. If omitted, zero is assumed and UTC is used. For example, to use local time in the zone UTC -5 hours, specify a value of -300 for this argument. In most cases, -l should be used instead of specifying an offset.
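Both filename styles described above can be previewed from an ordinary shell. A sketch, assuming date(1) accepts strftime(3)-style conversions (true of GNU and BSD date); rotatelogs itself is a C program, so this only illustrates the arithmetic:

```shell
# Numeric suffix: seconds since the epoch, rounded down to the start
# of the current rotation period (86400 s = one day).
rotationtime=86400
now=$(date +%s)
echo "logfile.$(( now - now % rotationtime ))"

# strftime-style name: date(1) accepts the same conversions, so a
# pattern can be checked for sufficient granularity before use.
date +"errorlog.%Y-%m-%d-%H_%M_%S"
```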
|
CustomLog "|bin/rotatelogs /var/log/logfile 86400" common This creates the files /var/log/logfile.nnnn where nnnn is the system time at which the log nominally starts (this time will always be a multiple of the rotation time, so you can synchronize cron scripts with it). At the end of each rotation time (here after 24 hours) a new log is started. CustomLog "|bin/rotatelogs -l /var/log/logfile.%Y.%m.%d 86400" common This creates the files /var/log/logfile.yyyy.mm.dd where yyyy is the year, mm is the month, and dd is the day of the month. Logging will switch to a new file every day at midnight, local time. CustomLog "|bin/rotatelogs /var/log/logfile 5M" common This configuration will rotate the logfile whenever it reaches a size of 5 megabytes. ErrorLog "|bin/rotatelogs /var/log/errorlog.%Y-%m-%d-%H_%M_%S 5M" This configuration will rotate the error logfile whenever it reaches a size of 5 megabytes, and the suffix of the logfile name will be of the form errorlog.YYYY-mm-dd-HH_MM_SS. CustomLog "|bin/rotatelogs -t /var/log/logfile 86400" common This creates the file /var/log/logfile, truncating the file at startup and then truncating the file once per day. It is expected in this scenario that a separate process (such as tail) would process the file in real time. CustomLog "|bin/rotatelogs -T /var/log/logfile.%d 86400" common If the server is started (or restarted) on the first of the month, this appends to /var/log/logfile.01. When a log entry is written on the second of the month, /var/log/logfile.02 is truncated and new entries will be added to the top. This example keeps approximately one month's worth of logs without external maintenance. PORTABILITY The following logfile format string substitutions should be supported by all strftime(3) implementations, see the strftime(3) man page for library-specific extensions.
• %A - full weekday name (localized) • %a - 3-character weekday name (localized) • %B - full month name (localized) • %b - 3-character month name (localized) • %c - date and time (localized) • %d - 2-digit day of month • %H - 2-digit hour (24 hour clock) • %I - 2-digit hour (12 hour clock) • %j - 3-digit day of year • %M - 2-digit minute • %m - 2-digit month • %p - am/pm of 12 hour clock (localized) • %S - 2-digit second • %U - 2-digit week of year (Sunday first day of week) • %W - 2-digit week of year (Monday first day of week) • %w - 1-digit weekday (Sunday first day of week) • %X - time (localized) • %x - date (localized) • %Y - 4-digit year • %y - 2-digit year • %Z - time zone name • %% - literal `%' Apache HTTP Server 2023-03-05 ROTATELOGS(8)
|
universalaccessd
|
universalaccessd provides universal access services. There are no configuration options to universalaccessd. Users should not run universalaccessd manually. macOS October 6, 2011 macOS
|
universalaccessd – universal access server
|
universalaccessd
|
audit
|
The audit utility controls the state of the audit system. One of the following flags is required as an argument to audit: -e Forces the audit system to immediately remove audit log files that meet the expiration criteria specified in the audit control file without doing a log rotation. -i Initializes and starts auditing. This option is currently for Mac OS X only and requires auditd(8) to be configured to run under launchd(8). -n Forces the audit system to close the existing audit log file and rotate to a new log file in a location specified in the audit control file. Also, audit log files that meet the expiration criteria specified in the audit control file will be removed. -s Specifies that the audit system should [re]synchronize its configuration from the audit control file. A new log file will be created. The attributable flags parameter from the audit_control(5) configuration file is set at login time and is not synchronized with this flag. -t Specifies that the audit system should terminate. Log files are closed and renamed to indicate the time of the shutdown. -c Specifies that audit should query the audit condition and exit successfully only if auditing is enabled in the kernel. NOTES The auditd(8) daemon must already be running. Optionally, it can be configured to be started on-demand by launchd(8) (Mac OS X only). The audit utility requires audit administrator privileges for successful operation. FILES /etc/security/audit_control Audit policy file used to configure the auditing system. SEE ALSO audit(4), audit_control(5), auditd(8), launchd(8) HISTORY The OpenBSM implementation was created by McAfee Research, the security division of McAfee Inc., under contract to Apple Computer Inc. in 2004. It was subsequently adopted by the TrustedBSD Project as the foundation for the OpenBSM distribution. AUTHORS This software was created by McAfee Research, the security research division of McAfee, Inc., under contract to Apple Computer Inc. 
Additional authors include Wayne Salamon, Robert Watson, and SPARTA Inc. The Basic Security Module (BSM) interface to audit records and audit event stream format were defined by Sun Microsystems. macOS 14.5 January 29, 2009 macOS 14.5
|
audit – audit management utility DEPRECATION NOTICE The audit(4) subsystem has been deprecated since macOS 11.0, disabled since macOS 14.0, and WILL BE REMOVED in a future version of macOS. Applications that require a security event stream should use the EndpointSecurity(7) API instead. On this version of macOS, you can re-enable audit(4) by renaming or copying /etc/security/audit_control.example to /etc/security/audit_control, re-enabling the system/com.apple.auditd service by running launchctl enable system/com.apple.auditd as root, and rebooting.
|
audit -e | -i | -n | -s | -t | -c
|
httpd
|
httpd - Apache Hypertext Transfer Protocol Server
|
httpd [ -d serverroot ] [ -f config ] [ -C directive ] [ -c directive ] [ -D parameter ] [ -e level ] [ -E file ] [ -k start|restart|graceful|stop|graceful-stop ] [ -h ] [ -l ] [ -L ] [ -S ] [ -t ] [ -v ] [ -V ] [ -X ] [ -M ] [ -T ] On Windows systems, the following additional arguments are available: httpd [ -k install|config|uninstall ] [ -n name ] [ -w ] SUMMARY httpd is the Apache HyperText Transfer Protocol (HTTP) server program. It is designed to be run as a standalone daemon process. When used like this it will create a pool of child processes or threads to handle requests. In general, httpd should not be invoked directly, but rather should be invoked via apachectl on Unix-based systems or as a service on Windows NT, 2000 and XP and as a console application on Windows 9x and ME.
|
-d serverroot Set the initial value for the ServerRoot directive to serverroot. This can be overridden by the ServerRoot directive in the configuration file. The default is /usr. -f config Uses the directives in the file config on startup. If config does not begin with a /, then it is taken to be a path relative to the ServerRoot. The default is /etc/apache2/httpd.conf. -k start|restart|graceful|stop|graceful-stop Signals httpd to start, restart, or stop. See Stopping Apache httpd for more information. -C directive Process the configuration directive before reading config files. -c directive Process the configuration directive after reading config files. -D parameter Sets a configuration parameter which can be used with <IfDefine> sections in the configuration files to conditionally skip or process commands at server startup and restart. Also can be used to set certain less-common startup parameters including -DNO_DETACH (prevent the parent from forking) and -DFOREGROUND (prevent the parent from calling setsid() et al). -e level Sets the LogLevel to level during server startup. This is useful for temporarily increasing the verbosity of the error messages to find problems during startup. -E file Send error messages during server startup to file. -h Output a short summary of available command line options. -l Output a list of modules compiled into the server. This will not list dynamically loaded modules included using the LoadModule directive. -L Output a list of directives provided by static modules, together with expected arguments and places where the directive is valid. Directives provided by shared modules are not listed. -M Dump a list of loaded Static and Shared Modules. -S Show the settings as parsed from the config file (currently only shows the virtualhost settings). -T (Available in 2.3.8 and later) Skip document root check at startup/restart. -t Run syntax tests for configuration files only. 
The program immediately exits after these syntax parsing tests with either a return code of 0 (Syntax OK) or return code not equal to 0 (Syntax Error). If -D DUMP_VHOSTS is also set, details of the virtual host configuration will be printed. If -D DUMP_MODULES is set, all loaded modules will be printed. -v Print the version of httpd, and then exit. -V Print the version and build parameters of httpd, and then exit. -X Run httpd in debug mode. Only one worker will be started and the server will not detach from the console. The following arguments are available only on the Windows platform: -k install|config|uninstall Install Apache httpd as a Windows NT service; change startup options for the Apache httpd service; and uninstall the Apache httpd service. -n name The name of the Apache httpd service to signal. -w Keep the console window open on error so that the error message can be read. Apache HTTP Server 2018-07-06 HTTPD(8)
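Since -t exits with status 0 only when the configuration parses cleanly, it pairs naturally with -k in maintenance scripts. A sketch (the stand-in function at the top exists only so the fragment runs where httpd is absent; remove it on a real server):

```shell
# Stand-in so this sketch is self-contained; remove on a real server.
if ! command -v httpd >/dev/null 2>&1; then
    httpd() { return 0; }   # pretend the syntax check passed
fi

# Only reload when `httpd -t` reports Syntax OK (exit status 0).
if httpd -t; then
    echo "syntax OK, reloading"
    # httpd -k graceful    # lets current requests finish before restart
else
    echo "syntax error, server left untouched" >&2
fi
```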
|
slapauth
|
Slapauth is used to check the behavior of slapd(8) in mapping identities for authentication and authorization purposes, as specified in slapd.conf(5). It opens the slapd.conf(5) configuration file or the slapd-config(5) backend, reads in the authz-policy/olcAuthzPolicy and authz-regexp/olcAuthzRegexp directives, and then parses the ID list given on the command-line.
|
slapauth - Check a list of string-represented IDs for LDAP authc/authz
|
/usr/sbin/slapauth [-d debug-level] [-f slapd.conf] [-F confdir] [-M mech] [-o option[=value]] [-R realm] [-U authcID] [-v] [-X authzID] ID [...]
|
-d debug-level enable debugging messages as defined by the specified debug-level; see slapd(8) for details. -f slapd.conf specify an alternative slapd.conf(5) file. -F confdir specify a config directory. If both -f and -F are specified, the config file will be read and converted to config directory format and written to the specified directory. If neither option is specified, an attempt to read the default config directory will be made before trying to use the default config file. If a valid config directory exists then the default config file is ignored. -M mech specify a mechanism. -o option[=value] Specify an option with a(n optional) value. Possible generic options/values are: syslog=<subsystems> (see `-s' in slapd(8)) syslog-level=<level> (see `-S' in slapd(8)) syslog-user=<user> (see `-l' in slapd(8)) -R realm specify a realm. -U authcID specify an ID to be used as authcID throughout the test session. If present, and if no authzID is given, the IDs in the ID list are treated as authzID. -X authzID specify an ID to be used as authzID throughout the test session. If present, and if no authcID is given, the IDs in the ID list are treated as authcID. If both authcID and authzID are given via command line switch, the ID list cannot be present. -v enable verbose mode.
|
The command /usr/sbin/slapauth -f /etc/openldap/slapd.conf -v \ -U bjorn -X u:bjensen tests whether the user bjorn can assume the identity of the user bjensen, provided the directives authz-policy from and authz-regexp "^uid=([^,]+).*,cn=auth$" "ldap:///dc=example,dc=net??sub?uid=$1" are defined in slapd.conf(5). SEE ALSO ldap(3), slapd(8), slaptest(8) "OpenLDAP Administrator's Guide" (http://www.OpenLDAP.org/doc/admin/) ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 SLAPAUTH(8C)
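The rewrite that the authz-regexp directive performs in the example above can be illustrated with an ordinary extended regular expression. slapd applies its own matcher internally; sed(1) here is only a stand-in:

```shell
# Map a SASL authentication DN to an LDAP search URL, mimicking
# authz-regexp "^uid=([^,]+).*,cn=auth$"
#              "ldap:///dc=example,dc=net??sub?uid=$1"
printf 'uid=bjorn,cn=plain,cn=auth\n' |
  sed -E 's|^uid=([^,]+).*,cn=auth$|ldap:///dc=example,dc=net??sub?uid=\1|'
# → ldap:///dc=example,dc=net??sub?uid=bjorn
```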
|
slappasswd
|
Slappasswd is used to generate a userPassword value suitable for use with ldapmodify(1), the slapd.conf(5) rootpw configuration directive, or the slapd-config(5) olcRootPW configuration directive.
|
slappasswd - OpenLDAP password utility
|
/usr/sbin/slappasswd [-v] [-u] [-g|-s secret|-T file] [-h hash] [-c salt-format] [-n]
|
-v enable verbose mode. -u Generate RFC 2307 userPassword values (the default). Future versions of this program may generate alternative syntaxes by default. This option is provided for forward compatibility. -s secret The secret to hash. If this, -g and -T are absent, the user will be prompted for the secret to hash. -s, -g and -T are mutually exclusive flags. -g Generate the secret. If this, -s and -T are absent, the user will be prompted for the secret to hash. -s, -g and -T are mutually exclusive flags. If this is present, {CLEARTEXT} is used as scheme. -g and -h are mutually exclusive flags. -T "file" Hash the contents of the file. If this, -g and -s are absent, the user will be prompted for the secret to hash. -s, -g and -T are mutually exclusive flags. -h "scheme" If -h is specified, one of the following RFC 2307 schemes may be specified: {CRYPT}, {MD5}, {SMD5}, {SSHA}, and {SHA}. The default is {SSHA}. Note that scheme names may need to be protected, due to { and }, from expansion by the user's command interpreter. {SHA} and {SSHA} use the SHA-1 algorithm (FIPS 160-1), the latter with a seed. {MD5} and {SMD5} use the MD5 algorithm (RFC 1321), the latter with a seed. {CRYPT} uses crypt(3). {CLEARTEXT} indicates that the new password should be added to userPassword as clear text. Unless {CLEARTEXT} is used, this flag is incompatible with option -g. -c crypt-salt-format Specify the format of the salt passed to crypt(3) when generating {CRYPT} passwords. This string needs to be in sprintf(3) format and may include one (and only one) %s conversion. This conversion will be substituted with a string of random characters from [A-Za-z0-9./]. For example, '%.2s' provides a two-character salt and '$1$%.8s' tells some versions of crypt(3) to use an MD5 algorithm and provides 8 random characters of salt. The default is '%s', which provides 31 characters of salt. -n Omit the trailing newline; useful to pipe the credentials into a command.
LIMITATIONS The practice of storing hashed passwords in userPassword violates Standard Track (RFC 4519) schema specifications and may hinder interoperability. A new attribute type, authPassword, to hold hashed passwords has been defined (RFC 3112), but is not yet implemented in slapd(8). It should also be noted that the behavior of crypt(3) is platform specific. SECURITY CONSIDERATIONS Use of hashed passwords does not protect passwords during protocol transfer. TLS or other eavesdropping protections should be in-place before using LDAP simple bind. The hashed password values should be protected as if they were clear text passwords. SEE ALSO ldappasswd(1), ldapmodify(1), slapd(8), slapd.conf(5), slapd-config(5), RFC 2307, RFC 4519, RFC 3112 "OpenLDAP Administrator's Guide" (http://www.OpenLDAP.org/doc/admin/) ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 SLAPPASSWD(8C)
|
localemanager
|
localemanager creates, destroys, and edits OpenDirectory Server Locale information. Locales are collections of OpenDirectory servers to assist clients in locating the nearest OpenDirectory Server. To use OpenDirectory Server Locales, simply create a locale on an OD server with the createLocale operation. Then add servers and subnets to the locale. All localemanager operations are performed on the local OpenDirectory node. The first time a locale is created, a DefaultLocale will automatically be created as well. The DefaultLocale will be used for any clients that don't match a subnet in any other locale. Before a locale can be configured, the server must already be an OpenDirectory server. Locales can be defined on each of the OD servers or on a single OD server in the group of OD master/replicas. For the latter, the locale information will get replicated to all of the other servers but locales will need to be "enabled" on the other servers by running the command localemanager enableLocales. Commands: help Displays the commands and options. createLocale Creates a new locale on the local OD server. This command requires the -l option. deleteLocale Deletes a locale from the local OD server. This command requires the -l option. showLocale Displays the current locale(s). The -l option can be used to display a specific locale. If -l is not specified, all locales are displayed. enableLocales Enables the use of locales on an OD server. This command is automatically run the first time any localemanager command is run on an OD server. Therefore this command only needs to be run if no other localemanager commands have been (or will be) run on this server. addSubnet Adds a new subnet to the specified locale. This command requires the -l and -subnet options. removeSubnet Removes a subnet from the specified locale. This command requires the -l and -subnet options. addServer Adds a server to the specified locale. This command requires the -l and -server options.
If the -i option is specified, that IP address will be used by locale clients. This may be useful for multi-homed servers to restrict locale clients to a specific network interface. If the -i option is not specified, the IP address(es) will be looked up. removeServer Removes a server from the specified locale. This command requires the -l and -server options. If the -i option is specified, only that IP address will be removed from the locale. If the -i option is not specified, all of the server's IP addresses will be removed from the locale. Options: -l locale Locale name. -subnet 192.168.0.0/16 Subnet specified in CIDR notation. -server server.example.com Server fully-qualified domain name. -i 192.168.1.1 Use this IP address for the server. Typically used to limit locale clients to a specific interface on a multi-homed server. FILES /var/log/localemanager.log localemanager log file. SEE ALSO slapconfig(8) HISTORY First introduced in Mac OS X 10.7 Darwin 3/11/10 Darwin
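The -subnet option takes CIDR notation, and locale clients are matched against these subnets. The following is a plain-shell illustration of how CIDR containment works; it is not localemanager's actual matching code:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_subnet ADDRESS NETWORK/BITS — succeed if ADDRESS is inside the subnet.
in_subnet() {
    net=${2%/*}; bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_subnet 192.168.1.1 192.168.0.0/16 && echo "served by this locale"
```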
|
localemanager – Configure OpenDirectory Server Locales
|
localemanager operation [-l localename] [-subnet 1.2.3.4/5] [-server servername] [-i IP address]
|
asr
|
asr efficiently copies disk images onto volumes, either directly or via a multicast network stream. asr can also accurately clone volumes without the use of an intermediate disk image. In its first form, asr copies source (usually a disk image, potentially on an HTTP server) to target. source can be specified using a path in the filesystem, or an http or https URL. It can also be an asr:// URL to indicate a multicast source. asr can also be invoked with its second form to act as a multicast server. In its third form, asr will restore a multicast disk image to a file instead of disk volume. In its fourth form, asr prepares a disk image to be restored efficiently, adding whole-volume checksum information. help and version provide usage and version information, respectively. source and target can be /dev entries or volume mountpoints. For more information on restoring to or from APFS filesystems, see the RESTORING WITH APFS FILESYSTEMS section below. If restoring a multicast disk image to a file, file can be a path to a local file or directory. If the specified path is a file, the disk image is given the specified name. If a directory, the name of the disk image being multicast is used. When specifying server, source has to be a UDIF disk image. Restoring from a multicast stream is accomplished by passing an asr:// URL as source. When restoring APFS volumes, asr supports restoring snapshots from the source volume, as well as restoring snapshot deltas. See the RESTORING WITH APFS SNAPSHOTS section below. asr supports restoring systems with a Read-Only System Volume (ROSV). For more information, see the RESTORING WITH READ-ONLY SYSTEM VOLUMES section below. asr needs to be run as root (see sudo(8)) in order to accomplish its tasks. VERBS Each verb is listed with its description and individual arguments. restore restores a disk image or volume to another volume (including a mounted disk image) --source can be a disk image, /dev entry, or volume mountpoint.
In the latter two cases, the volume must be unmountable or mounted read-only in order for an erase block-copy to occur (thus, one cannot erase block-copy the root filesystem as the source, unless it happened to be mounted read-only). --target can be a /dev entry, or volume mountpoint. Must be unmountable in order for an erase block-copy to occur. If source specifies an image of an APFS container, then target can specify a mounted APFS volume. See the RESTORING WITH APFS FILESYSTEMS section below for details. --file when performing a multicast restore, --file can be specified instead of --target. If the specified path is a file, the disk image is given the specified name. If a directory, the name of the disk image being multicast is used. --erase erases target and is required. --erase must always be used, as file copies are no longer supported by asr. If source is an asr:// URL for restoring from a multicast stream, --erase must be passed (multicasting only supports erase block-copy restores). Passing --erase with --file indicates any existing file should be overwritten when doing a multicast file copy. --format HFS+ | HFSX specifies the destination filesystem format, when --erase is also given. If not specified, the destination will be formatted with the same filesystem format as the source. If multicasting, the --format specified must be block copy compatible with the source. --format is ignored if --erase is not used. Note: HFS Journaling is an attribute of the source image, and is not affected by --format. --noprompt suppresses the prompt which usually occurs before target is erased. newfs_hfs(8) will be called on target and once you start writing new data, there isn't much hope for recovery. You have been warned. --timeout num specifies num seconds that a multicast client should wait when no payload data has been received over a multicast stream before exiting, allowing the client to stop in case of server failure/stoppage. It defaults to 0 (i.e. never time out).
--puppetstrings provide progress output that is easy for another program to parse. Any program trying to interpret asr's progress should use --puppetstrings. --noverify skips the verification steps normally taken to ensure that a volume has been properly restored. --noverify allows images which have not been scanned to be restored. Skipping verification is dangerous for a number of reasons and should never be used in production systems. --allowfragmentedcatalog allows restores to proceed even if the source's catalog file is fragmented (in particular, if it has more than 8 extents). By default such restores are disallowed. Catalog fragmentation is undesirable and in most cases it is better to fix the problem on the source (e.g. by running fsck_hfs -r on it), but --allowfragmentedcatalog is provided for situations where such a change is impractical. This option only makes sense if the source specifies an HFS+ filesystem variant. It is otherwise ignored. --corestorageconvert Cause target to be converted to a Core Storage LVG at the end of the restore. After the copy and verify are complete, asr will create a new Core Storage Logical Volume Group (LVG), using the partition represented by target as its only physical volume (PV). The volume contents restored from source will be present as a single logical volume (LV) exported from this LVG. If target is already a Core Storage LV, then this option has no effect. --SHA1 forces the restore to use the SHA-1 hash in the image during verification. If the image doesn't contain a SHA-1 hash, then an error will be raised. --SHA256 forces the restore to use the SHA2-256 hash in the image during verification. If the image doesn't contain a SHA2-256 hash, then an error will be raised. --sourcevolumename tells asr which volume in the source container to invert when doing an APFS restore. It is an error if more than one volume has the specified name. You can see the volume names and UUIDs by running asr with the info verb.
See the section RESTORING WITH APFS FILESYSTEMS below for when this option is necessary. --sourcevolumeUUID tells asr which volume in the source container to invert when doing an APFS restore. You can see the volume names and UUIDs by running asr with the info verb. See the section RESTORING WITH APFS FILESYSTEMS below for when this option is necessary. --useReplication forces asr to use replication for restoring APFS volumes (see the section REPLICATION AND THE INVERTER below). This is the default, but there may be a preference setting to use the inverter instead; specifying --useReplication overrides that preference setting. --useInverter forces asr to use the inverter for restoring APFS volumes (see the section REPLICATION AND THE INVERTER below). This overrides any preference setting. --toSnapshot specifies the snapshot on the source APFS volume to restore to the target APFS volume. The argument must be either the name or UUID of a snapshot on source. See the RESTORING WITH APFS SNAPSHOTS section below for more details. --fromSnapshot names a snapshot on the source APFS volume to use in combination with --toSnapshot to specify a snapshot delta to restore to the target APFS volume. The argument must be either the name or UUID of a snapshot on both source and target. See the RESTORING WITH APFS SNAPSHOTS section below for more details. restoreexact performs the same operation as restore, taking all the same options, but with the following difference: for an HFS Plus volume, the target partition is resized to exactly match the size of the source partition/volume, if such a resize can be done. If the target partition needs to grow and there is not enough space, then the operation will fail. If it needs to shrink, then it should always be able to do so, possibly leaving free space in the target disk's partition map. Because the target exactly matches the source in size, all volume structures should be identical in source and target upon completion of the restore. 
restoreexact is not allowed with APFS volumes (see the section RESTORING WITH APFS FILESYSTEMS below), so its use is deprecated. server multicasts source over the network. Requires --erase be passed in by clients (multicasting only supports erase block-copy restores). --source source has to be a UDIF disk image. A path to a disk image on a local/remote volume can be passed in, or an http:// URL to a disk image that is accessible via a web server. --interface the network interface to be used for multicasting (e.g. en0) instead of the default network interface. --config server requires a configuration file to be passed, in standard property list format. The following keys/options configure the various parameters for multicast operation. Required Data Rate this is the desired data rate in bytes per second. On average, the stream will go slightly slower than this speed, but will never exceed it. It's a number in the plist (-int when set with defaults(1)). Note: The performance/reliability of the networking infrastructure being multicast on is an important factor in determining what data rate can be supported. Excessive/bursty packet loss for a given data rate could be due to an inability of the server/client to send/receive multicast data at that rate, but it's equally important to verify that the network infrastructure can support multicasting at the requested rate. Multicast Address this is the multicast address for the data stream. It's a string in the plist. Optional Client Data Rate this is the rate the slowest client can write data to its target in bytes per second. If asr misses data on the first pass (x's during progress) and slowing the Data Rate doesn't resolve it, setting the Client Data Rate will dynamically regulate the speed of the multicast stream to allow clients more time to write the data. It's a number in the plist (-int when set with defaults(1)). 
DNS Service Discovery whether the server should be advertised via DNS Service Discovery, a.k.a. Bonjour (tm). It defaults to true. It's a boolean in the plist (-bool when set with defaults(1)). Loop Suspend a limit on the number of times to multicast the image file when no clients have started a restore operation. Once exceeded, the server will stop the stream and wait for new clients before multicasting the image file. It defaults to 0 (i.e. never stop multicasting once a client starts the stream), and should not be set to <2. It's a number in the plist (-int when set with defaults(1)). Multicast TTL the time to live on the multicast packets (for multicasting through routers). It defaults to 3. It cannot be set to 0, and should not be set to 1 (otherwise, it could adversely affect some network routers). It's a number in the plist (-int when set with defaults(1)). Port the port for the initial client-server handshake, version checks, multicast restore metadata, and stream data. It defaults to 7800. This should only be included/modified if the default port cannot be used. It's a number in the plist (-int when set with defaults(1)). imagescan calculates checksums of the data in the provided image and stores them in the image. These checksums are used to ensure proper restores. By default, a SHA2-256 hash is used. imagescan also determines whether the disk image is properly ordered for multicasting, and rewrites the file in the proper order if not. If the image has to be reordered, it will require free disk space equal to the size of the disk image being scanned. --nostream bypasses the check/reordering of a disk image file for multicasting. By default disk images will be rewritten in a way that's necessary for multicasting. --allowfragmentedcatalog bypasses the check for a fragmented catalog file. By default that check is done and scanning won't be allowed on an image that has a fragmented catalog file. It is usually a better idea to fix the image (e.g. 
run fsck_hfs -r on a writable copy of it) than to use --allowfragmentedcatalog, but it is provided in case fixing the image is impractical. info reports the image metadata which was placed in the image by a previous use of the imagescan verb. Requires --source. The report is written to standard output. --plist writes the output as an XML-formatted plist, suitable for parsing by another program. RESTORING WITH APFS FILESYSTEMS Individual APFS volumes cannot be restored directly, because their device nodes don't allow I/O from a standard process. However, asr can restore entire APFS containers, including all volumes. Or it can restore valid system configurations, which achieves the effect of restoring a single system. This requires understanding what is meant by a valid system. In order for an APFS volume to be bootable, it must contain a properly installed macOS system. It must also be part of an APFS container which also has two special volumes in it: a Preboot volume and a Recovery volume. A container may have arbitrarily many system volumes in it, but it must have only one Preboot volume and one Recovery volume, each with the corresponding APFS volume role set (see diskutil(1) for information on roles). The Preboot and Recovery volumes contain information which is tied to each system volume in the container. So for a system volume to be bootable, that information needs to be set up in the Preboot and Recovery volumes. A system which is part of a container that has these two special volumes, and for which the requisite information is set up in those volumes, will be referred to here as a valid system. If the source of a restore is an APFS image (i.e. 
an image which contains an APFS container), then asr does different things depending on how target was specified: Volume Restore If the target is an individual volume within an existing APFS container, then asr will block restore the APFS container to a file within that volume, after which it will invert the volume within the restored container, erasing the previous contents of the target volume and replacing them with the source volume contents. If the source container only has a single non-special volume (i.e. not Preboot or Recovery), then that is the volume which will be inverted. If the source container has more than one non-special volume, then either the --sourcevolumename or --sourcevolumeUUID option must be present and must specify the volume to invert. Additionally, if the volume being inverted is a valid system (as defined above), then the relevant contents of both the Preboot and Recovery volumes will be copied from the source to the target, creating those volumes on the target if necessary. Volume Restore with Creation If the target is a synthesized APFS whole disk or Apple_APFS partition, and the --erase option is not present, then asr will create a new volume in the given container, after which it will do a volume restore to that new volume, as with the previous section. All other volumes in the container are preserved. Volume Restore with Erase If the target is a synthesized APFS whole disk or any disk partition, and the --erase option is present, then asr will erase the existing partition, create a new APFS container and a new volume in it, after which it will do a volume restore to that new volume, as with the previous section. See the EXAMPLES section below for some command lines that show these operations. REPLICATION AND THE INVERTER As of macOS Catalina, the standard mechanism for restoring APFS volumes is to use the internal APFS replication capability. 
While this should be sufficient for most needs, asr does provide the ability to use a legacy restore mechanism, which involves running the apfs_invert program. Restoring with the inverter has some limitations (e.g. all volumes in the target container must be unmounted, the source volume can't have any snapshots in it, etc), so using the default APFS replication is usually the better choice. However, in the event that invert restores are desired, that option can be selected. The logic asr uses for this is as follows, from lowest to highest priority: - By default, use replication. - Look for a preference in the domain com.apple.asr with key "ForceInvert" and a Boolean value. - Look for a --useReplication or --useInverter option on the command line. RESTORING WITH APFS SNAPSHOTS APFS volumes may contain snapshots, which are point-in-time captures of all volume state (including directory hierarchy, file existence and file content). To distinguish between a snapshot and the current state of a volume, we will here refer to that current state as the "live volume." Snapshots can be identified by name or UUID. Names are unique within a single volume, but two volumes can have snapshots with the same name that are unrelated in content. By contrast, snapshot UUIDs are unique, in the sense that two snapshots on different volumes that have the same UUID must refer to identical content, a situation that will typically arise by restoring a snapshot, as described in this section. In addition to restoring a live volume (either currently known to the system or from an image), asr also supports restoring a snapshot from the source volume. The result of such a restore is that the target volume ends up looking like the source volume at the time of the given snapshot, rather than like the live source volume. Additionally, the target volume will contain that state as a snapshot of its own, with the same name and UUID as the restored snapshot in the source. 
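Returning to the inverter-selection logic above: the middle priority level is a preference in the domain com.apple.asr with the Boolean key "ForceInvert". A sketch of setting it (the exact defaults(1) invocation and scope are assumptions; only the domain and key come from this page, and a command-line option still wins):

```shell
# Prefer the legacy inverter for APFS restores via the preference this
# page names (domain com.apple.asr, key ForceInvert). Hypothetical usage;
# commands are printed for review rather than executed.
PREF_CMD="sudo defaults write com.apple.asr ForceInvert -bool true"
# A --useReplication option on the command line overrides the preference.
OVERRIDE_CMD="sudo asr restore --source /images/apfs.dmg --target /dev/disk3 --erase --useReplication"
echo "$PREF_CMD"
echo "$OVERRIDE_CMD"
```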
See the EXAMPLES section below for some command lines that show snapshot restores. asr also supports restoring the difference between two snapshots, referred to as a "snapshot delta." In this case there must be both a "from" snapshot and a "to" snapshot on the source volume, the target must be specified as a specific volume rather than a whole container, and the target volume must already contain a snapshot which is identical to the source's "from" snapshot. The result of a snapshot delta restore is that the target ends up looking like the source's "to" snapshot, similar to a regular snapshot restore as described above. But the restore only needs to copy over the difference between the two snapshots, so it may save considerable time and/or network or bus resources. Note that a snapshot delta restore can still discard data from the target volume, so asr does require using the --erase option when doing a snapshot delta restore. Again, see the EXAMPLES section below for some command line examples of snapshot delta restores. Note that restoring with snapshots and snapshot deltas is only allowed when using replication (see the REPLICATION AND THE INVERTER section above). RESTORING WITH READ-ONLY SYSTEM VOLUMES macOS Catalina supports a Read-Only System Volume (ROSV) configuration, in which the standard macOS system install is split across two volumes. The two are referred to as the System and Data volumes (which is how their corresponding APFS roles are set; see diskutil(1) for more on APFS roles); they are combined into a volume group, and the System volume is mounted read-only. asr has support for restoring ROSV volume groups. If the source is a disk image containing an ROSV volume group, or an existing volume that is part of a volume group, then both volumes will be restored to the target, and the target volumes will be combined as appropriate into a new group on the target. 
Since the source and the target may each be part of a group or not, there are several cases to consider: Creating New Volumes If the specified target is a container rather than a volume, then new volumes will always be created, whether the source is a single volume or part of a group. Source is Group, Target is Single The specified target will be erased and replaced with the System- role volume in the source group, and a new volume will be created for the Data-role volume. Source is Group, Target is Group Both of the volumes in the target group will be replaced by the corresponding volumes in the source group. Source is Single, Target is Group The System-role volume in the target is replaced by the source volume, and the Data-role volume in the target is deleted. SNAPSHOTS AND ROSV VOLUME GROUPS asr can restore snapshots and snapshot deltas from any volume in a volume group, but the behavior is different between snapshot restores and snapshot delta restores. When doing a snapshot restore (i.e. using the --toSnapshot option without the --fromSnapshot option), each volume in the source volume group is examined to see if it contains the specified "to" snapshot. Each volume in the group which contains the snapshot will be copied as a snapshot replication, as described in the RESTORING WITH APFS SNAPSHOTS section, above. Each volume in the group which does not contain the snapshot will be copied as a live volume replication. So all volumes in the group are restored, and only those which contain the given "to" snapshot will have a snapshot restore performed. Note that if the "to" snapshot is specified by name, multiple volumes in the source group may have a snapshot with that name, though those snapshots need not be related in any way. By contrast, snapshot delta restores (i.e. using both the --toSnapshot and --fromSnapshot options) are only ever performed on a single volume. The source volume can be any volume (i.e. 
it need not have any particular role), but whether or not it's in a group, that will be the only volume restored. So if there are multiple volumes which have snapshots with the same names and you want to do a snapshot delta restore for all of them, then you must invoke asr once for each such volume. BUFFERING The following options control how asr uses memory. These options can have a significant impact on performance. asr is optimized for copying between devices (different disk drives, from a network volume to a local disk, etc). As such, asr defaults to using eight one-megabyte buffers. These buffers are wired down (occupying physical memory). For partition-to-partition copies on the same device, one large buffer (e.g. 32 MB) is much faster than the default eight medium-sized ones. For multicast, four 256 KB buffers are the default. Custom buffering for multicast operation is not recommended. --csumbuffers and --csumbuffersize allow a different buffer configuration for checksumming operations. One checksum buffer offers the best performance. The default is one 1 MB buffer. Custom checksum buffering is not recommended. Like mkfile(8), size defaults to bytes but can be followed by a multiplier character (e.g. 'm'). --buffers num specifies that num buffers should be used. --buffersize size specifies the size of each buffer. --csumbuffers num specifies that num buffers should be used for checksumming operations (which only affect the target). Custom checksum buffering is not recommended. --csumbuffersize size specifies the size of each buffer used for checksumming. Custom checksum buffering is not recommended. OTHER OPTIONS --verbose enables verbose progress and error messages. --debug enables other progress and error messages.
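The buffering advice above can be expressed concretely. A sketch of a partition-to-partition copy on the same device, using one large 32 MB buffer instead of the default eight 1 MB buffers (device nodes are hypothetical; the command is printed for review rather than executed):

```shell
# Same-device partition copy: a single large buffer is much faster than
# the default eight medium-sized ones, per the BUFFERING section.
# The 'm' suffix is a size multiplier, as with mkfile(8).
CMD="sudo asr restore --source /dev/disk0s3 --target /dev/disk0s4 --erase --buffers 1 --buffersize 32m"
echo "$CMD"
```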
|
asr – Apple Software Restore; copy volumes (e.g. from disk images)
|
asr verb [options]
asr restore[exact] --source source --target target [options]
asr server --source source --config configuration [options]
asr restore --source asr://source --file file [options]
asr imagescan --source image [options]
asr help | file ... version
| null |
Volume cloning: sudo asr restore --source /Volumes/Classic --target /Volumes/install --erase Restoring: sudo asr restore -s <compressedimage> -t <targetvol> --erase Will erase the target and potentially do a block copy restore. Multicast server: asr server --source <compressedimage> --config <configuration.plist> Will start up a multicast server for the specified image, using the parameters in the configuration.plist. The image will not start multicasting on the network until a client attempts to start a restore. The server will continue to multicast the image until the process is terminated. An example multicast configuration file: defaults write /tmp/streamconfig "Data Rate" -int 6000000 defaults write /tmp/streamconfig "Multicast Address" <mcastaddr> (will create the file /tmp/streamconfig.plist) <mcastaddr> should be appropriate for your network infrastructure and policy, usually from a range assigned by your network administrator. Multicast client sudo asr restore --source asr://<hostname> --target <targetvol> --erase Multicast client restoring to a file sudo asr restore --source asr://<hostname> --file <file> --erase Will receive the multicast stream from <hostname> and save it to a file. If <file> is a directory, the name of the streamed disk image will be used to save the file. --erase causes any existing file with the same name to be overwritten. Restoring a single APFS volume sudo asr restore -s <APFS image> -t /Volumes/MyAPFSVolume --erase In this case the contents of MyAPFSVolume will be replaced by the contents of the source container's single APFS volume, possibly including any associated data for the Preboot and Recovery volumes, if the source is a valid system. If the source has more than one non-special volume, this is an error. No other volumes in the target will be affected. 
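The two defaults(1) lines in the multicast server example above produce a property list equivalent to the following sketch (the multicast address and the optional keys shown are illustrative; only "Data Rate" and "Multicast Address" are required, per the server options):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Required: desired data rate in bytes per second -->
    <key>Data Rate</key>
    <integer>6000000</integer>
    <!-- Required: multicast address, from a range assigned by your network administrator -->
    <key>Multicast Address</key>
    <string>239.192.0.1</string>
    <!-- Optional keys; defaults shown. See the server --config options above. -->
    <key>Multicast TTL</key>
    <integer>3</integer>
    <key>Port</key>
    <integer>7800</integer>
</dict>
</plist>
```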
Restoring one of many APFS volumes sudo asr restore -s <APFS image> --sourcevolumename SourceVolume -t /Volumes/MyAPFSVolume --erase This tells asr to select the volume named "SourceVolume" from the given APFS image. If there is no volume with that name, or if more than one volume has that name, it is an error. Use the info verb to see the volume names and UUIDs for an image. No other volumes in the target will be affected. Creating a new APFS volume on the fly sudo asr restore -s <APFS image> --sourcevolumename SourceVolume -t /dev/disk2 Here we get the same effect as the last example, except that asr will create a new volume on the target APFS container disk, given by /dev/disk2, and use that newly created volume as the target. Any volumes which already existed in the container will still be there after the restore. Overwriting the existing container sudo asr restore -s <APFS image> --sourcevolumename SourceVolume -t /dev/disk2 --erase Like the last example, we restore to a new volume on the target APFS container disk. However, in this case we are erasing the target, so any volumes which already existed are destroyed. Looking at an image's volume names/UUIDs asr info -s <APFS image> Assuming this image has been previously scanned (using the imagescan verb), this will display the volumes' names and UUIDs so they can be used with the --sourcevolumename or --sourcevolumeUUID options. Restoring a snapshot sudo asr restore -s <APFS image> -t /dev/disk2 --toSnapshot Snap1 This assumes that the image volume has a snapshot named Snap1. During the restore, asr will create a new volume in the container at /dev/disk2 and use that volume as the target of the restore. The resulting target volume will have the same contents as Snap1 on the source volume, and it will also have a snapshot with the same name (Snap1) and UUID as Snap1 on the source. 
This snapshot will match the live target volume right after the restore; the live volume can subsequently change, but the snapshot will remain the same. Restoring a snapshot delta sudo asr restore -s <APFS image> -t /Volumes/Target --erase --fromSnapshot Snap1 --toSnapshot Snap2 This assumes that the image volume has a snapshot named Snap1 and another snapshot named Snap2. Furthermore, the target volume (mounted here at "/Volumes/Target") must also contain Snap1, with the same UUID and content. The result of the restore will be that the target volume will have the same contents as Snap2 on the source volume, and it will also gain a snapshot with the same name (Snap2) and UUID as Snap2 on the source. The restore will only need to copy the difference between the two snapshots, rather than the entire contents of Snap2. HOW TO USE ASR asr requires a properly created disk image for most efficient operation. This image is most easily made with the Disk Utility application's "Image from Folder" function in OS X 10.3. Disk Copy from OS X 10.2.3 (v55.6) or later can also be used. Basic steps for imaging and restoring a volume: 1. Set up the source volume the way you want it. 2. Use Disk Utility's "File -> New Image -> Image from Folder..." function and select the root of the volume. Save the image as read-only or compressed. "File -> New Image -> Image from <device>" is not recommended for restorable images. 3. Scan the image with "Images -> Scan Image for Restore..." 4. Select a volume and click on the "Restore" button. Then click on the "Image..." button to select the image you have scanned. Click Restore. BLOCK COPY RESTORE REQUIREMENTS asr can block copy restore HFS+/HFSX filesystems and resize the source filesystem to fit in the target's partition if the source filesystem data blocks will fit within the target partition's space (resizing the filesystem geometry as appropriate). HFS+ can be used as the source of a block copy to either an HFS+ or HFSX destination. 
However, an HFSX source can only be used to block copy to an HFSX destination. This is because case collisions of file names could occur when converting from an HFSX filesystem to HFS+. Certain non-HFS+/HFSX filesystems will block copy restore, but the target partition will be resized to match the size of the source image/partition, with no filesystem resizing occurring. COMPATIBILITY asr maintains compatibility with previous syntax, e.g. asr -source source -target target [options] asr -source source -server configuration [options] asr -source asr://source -file file [options] asr -imagescan [options] image asr -h | file ... -v where -source, -target, and -file are equivalent to --source, --target, and --file respectively, and all [options] are equivalent to their -- descriptions. asr -server configuration is superseded by asr server --config configuration. The following deprecated options also remain: -nocheck this option is deprecated, but remains for script compatibility. Use -noverify instead. -blockonly this option is deprecated, but remains for script compatibility. On by default. Note that if an image scanned with -blockonly cannot be block-copied to a particular target, an error will occur, since the file-copy information was omitted. Note: Compatibility with previous syntax is not guaranteed in the next major OS release. ERRORS asr will exit with status 1 if it cannot complete the requested operation. A human-readable error message will be printed in most cases. If asr has already started writing to the target volume when the error occurs, then it will erase the target, leaving it in a valid (but empty) state. It will, however, leave it unmounted. Some of the error messages which asr prints are generated by the underlying subsystems that it uses, and their meaning is not always obvious. Here are some useful guidelines: 1. asr does some preflight testing before it starts actually copying data. 
Errors that show up during this preflighting are usually clear (e.g. "There is not enough space in volume "Macintosh HD" to do the restore.") 2. If an error occurs during the copy, it might be because there is corruption in the source image file. Try running "hdiutil verify" with the image. A common error message which indicates this is "codec overrun". 3. Errors which occur during the copy and which don't have an obvious cause (i.e. the error message is difficult to interpret) may be transient in nature (e.g. there was an I/O error on the disk), and it is worth simply trying the restore again. HISTORY Apple Software Restore got its start as a field service restoration tool used to reconfigure computers' software to 'factory' state. It later became a more general software restore mechanism and software installation helper application for various Apple computer products. ASR has been used in manufacturing processes and in shipping computers' System Software Installers. For Mac OS X, asr was rewritten as a command line tool for manufacturing and professional customers. asr is the backend for the Mac OS X Software Restore application that shipped on Macintosh computers as well as the Scan and Restore functionality in Disk Utility. Multicast support was added to allow multiple clients to erase restore an image from a multicast network stream. Per its history, most functionality in asr was originally focused on HFS+ volumes, but it has expanded to also include APFS. SEE ALSO hdiutil(1), df(1), bless(8), ditto(1), and what(1) Mac OS X December 10, 2020 Mac OS X
|
cvupdatefs
|
The cvupdatefs program is used to commit a configuration change to an Xsan volume. Possible configuration changes include storage pool list modification as well as volume journal modification. The volume update program must be run on the machine that the File System Manager (FSM) is running on. This utility reads the configuration file and compares it against the current on-disk metadata configuration. If there are differences between the configuration and the on-disk metadata, the utility will display what changes need to be made to bring the volume metadata up to date. NOTE: All metadata modifications must be made on a stopped volume. It is recommended that the volume be stopped and cvfsck(8) be run before making any changes to a volume configuration. Maintaining a backup of the original volume configuration file is also strongly recommended. When a successful update is completed, the new configuration file is stored in the on-disk metadata and the previous one is saved in /Library/Logs/Xsan/data/<volume_name>/config_history/*.cfg.<TIMESTAMP>
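The recommended sequence above (stop the volume, check it, then commit the change) can be sketched as follows. The volume name is hypothetical and the cvadmin stop/start invocation is an assumption based on cvadmin(8) usage; the commands are printed for review rather than executed:

```shell
# Sketch of a cvupdatefs workflow against a stopped, checked volume.
VOL=MyVolume
STOP_CMD="cvadmin -e 'stop $VOL'"     # volume must be stopped first
CHECK_CMD="cvfsck $VOL"               # verify metadata consistency
UPDATE_CMD="cvupdatefs $VOL"          # review the displayed changes, answer yes
START_CMD="cvadmin -e 'start $VOL'"   # restart; verify with cvadmin 'show'
printf '%s\n' "$STOP_CMD" "$CHECK_CMD" "$UPDATE_CMD" "$START_CMD"
```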
|
cvupdatefs - Commit an Xsan volume configuration change
|
cvupdatefs [-bdfFhlnSv] [-c pathname] [[-M] -R NewVolName] [VolName] [VolPath]
|
-b Build info - log the build information. -c pathname Provide a specific path to the previous configuration file that is to be used. This option is used to force cvfsck to be run as a sub-process to ensure that the volume metadata is consistent prior to doing a capacity or stripegroup expansion, or any journal changes. -d Debug - use to turn on internal debugging only. -F Force. This option has been deprecated and replaced with -y. It will cause the same action as that option. -f Failure mode - do not fail if there is a configuration mismatch or other serious abnormal condition detected. Note: This option is not intended for general use. Use only if instructed by Apple support. Incorrect use may result in an unusable file system. -h Help - print the synopsis for this command. -l Log - log when the update finishes. -n Read-only - set metadata to read-only mode. -M Allow managed file systems with the -R option. -R NewVolName Rename - Provide a new volume name to rename an existing unmanaged volume. Use with -M to rename a managed file system. The existing config file will be renamed, and the existing data directory containing logs will be migrated to the new name. See the section below for further details about using this option. -s Slice rebuild - Rebuild the free slice trees to their optimal sizes. When extending the LUNs in a storage pool, by default cvupdatefs will just add enough additional slices to hold the additional free space. When the -s option is given, it will instead rebuild the slice trees, which usually results in larger slices. If the LUNs have been previously extended, this option will allow the slice trees to be rebuilt without extending the LUNs. -S Status - write a status plist to /var/run/cvupdatefs_status_<FS>.plist. -U When a storage pool is added, check whether any disks included in the storage pool being added are currently in use in another file system that is visible to the cluster. 
In some configurations, this may take a long time. If there are disks in use, the operation is aborted. -v Verbose - turn on verbose reporting methods. -y Yes - Bypass the prompt and answer yes to the basic warning about proceeding. If the prompt warning is for an unusual condition, this option will not bypass that prompt. -W Do not use copy-on-write (COW) when applying changes. Once the volume configuration has been changed to reflect the stripe group or journal changes, the cvupdatefs utility may be run. When cvupdatefs is run it will display a listing of the storage pools which will be modified, followed by a prompt. If this list accurately reflects the changes made to the configuration file, then answering 'yes' at the prompt will allow the utility to make the needed changes. Once the utility has completed, the volume may be started again. After starting the volume, the 'show' command in cvadmin(8) may be used to verify the new storage pools. The 'show' command will list all of the stripe groups on the volume, including the newly created storage pool(s). Also, if the location of the volume journal has changed, this too will be reflected by the cvadmin 'show' command. WARNINGS It is very important that the volume be in a consistent state before cvupdatefs is run. If the volume is in a bad state, cvupdatefs could introduce data corruption. It is recommended that cvfsck be executed on the volume before any changes are made. If cvfsck does not finish with a clean volume, do not make any configuration changes until the volume is clean. ADDING A STORAGE POOL The first step in adding storage pools is to modify the volume's configuration file to reflect the desired changes. For notes on the volume configuration format, refer to snfs_config(5). In addition to adding StripeGroup configuration entries, associated Disk and DiskType entries for any new disks must be included. Currently the ordering of storage pools in the configuration file and in the metadata must match. 
Thus, when adding new storage pool configuration entries to the configuration file they must always be added to the end of the StripeGroup configuration section. cvupdatefs will abort if a new storage pool is detected anywhere but the end of the file. INCREASING THE STRIPE DEPTH OF AN EXISTING STORAGE POOL Warning: This option is not recommended and its use is deprecated. Adding a new stripe group is the recommended way to expand the capacity of a file system. The stripe depth is the number of disks in the storage pool and is a key factor in the amount of parallel I/O that can be accomplished. This choice should ideally be made before the volume is created, thus eliminating the need for cvupdatefs to modify this value by adding disks to the storage pool. Consult the StorNext File System Tuning Guide for information on configuring for optimal file system performance. Warning: When a storage pool is populated with file data, adding disks will increase free space fragmentation of the storage pool in proportion to the amount of pre-existing file data. It is important to avoid fragmentation, which severely impacts performance and functionality of the volume. If the storage pool contains little or no file data, expansion will not result in free space fragmentation. The snfsdefrag utility can be used to relocate pre-existing file data to a different storage pool. When new disks are added to an existing storage pool, the new disks must exactly match the existing disks in size. All new disks must be added at the end of the disk list in the configuration file's StripeGroup section. New disks cannot be added to a storage pool containing metadata or journal. A new storage pool must be added if additional capacity or performance is needed for metadata or journal operations. The cvupdatefs utility can be used to relocate the journal to a new storage pool. 
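Tying the above together, a new storage pool appended at the end of the StripeGroup configuration section might look like the following fragment. All names, sizes, and exact key spellings here are illustrative assumptions; snfs_config(5) is the authoritative reference for the configuration format:

```text
# Illustrative fragment only: new DiskType, Disk, and StripeGroup entries
# for an added data storage pool, appended after all existing StripeGroups.
[DiskType DataDrive]
  Sectors 1951448064
  SectorSize 512

[Disk CvfsDisk4]
  Type DataDrive

[StripeGroup DataPool2]
  Status UP
  StripeBreadth 64
  Node CvfsDisk4 0
```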
MODIFYING VOLUME JOURNAL CONFIGURATION cvupdatefs will also detect changes in the journal configuration and modify the metadata accordingly. Journal changes include moving the journal to a new storage pool and increasing or decreasing the size of the journal. JournalSize (Located in the Global section) Modifying this value will change the size of the on-disk journal. Journal (Located in the Storage Pool section) Setting this entry to yes will place the on-disk journal on the given storage pool. NOTE: There may only be one journal storage pool per volume. REMOVING A JOURNAL-ONLY STRIPE GROUP For Linux MDCs, if a stripe group has only the journal attribute, i.e. no metadata and no userdata, and the journal is moved to another stripe group, the former journal-only stripe group is left with no attributes pertaining to content type. If it is desired that this stripe group be retired and the disks used for other purposes, you can set the status to down after the journal is moved. Note that the status must be up during the journal move operation because the journal recovery must be executed prior to moving the journal. The behavior is similar on Windows MDCs, except that there is no explicit userdata attribute in the ASCII config file. This means that with no journal and no metadata, userdata is assumed. If the desire is to retire the former journal-only stripe group, care should be taken to not run the file system after moving the journal off of the stripe group. Set the status to down immediately after moving the journal and before starting the FSM. CORRECTING MISCONFIGURED STORAGE POOLS cvupdatefs has a limited ability to address configuration errors. For example, if a storage pool was added but the configuration file shows incorrect disk sizes, this option could be used to rewrite that stripe group. Metadata and Journal storage pools cannot be rewritten. In addition, data only storage pools that may be overwritten must be empty. 
The types of changes that can be made to a storage pool are as follows: 1) Resize disk definitions in a storage pool 2) Modify stripe breadth in a storage pool 3) Modify the disk list in a storage pool Warning: Always use this option with extreme caution. Configuration errors could lead to data loss. RENAMING A VOLUME Warning: Renaming a volume that is managed requires additional steps, documented in the StorNext documentation center. When following those instructions, the -M option must be used when invoking this command with the -R option. Otherwise, renaming a volume is only allowed on an unmanaged volume. Without the -M option, if cvupdatefs(8) detects that the volume is managed, it will print an error message and exit without doing the rename. The -R option for renaming a volume should be used with care, as there are several things that get modified as part of this process. It is highly recommended that cvfsck(8) be run before renaming a volume. The volume must be unmounted on all SAN and DLAN clients, and the volume stopped, see cvadmin(8). If a client has the volume mounted when it is renamed, the client might need to be rebooted in order to unmount the old volume name. On Windows, use the Client Configuration Tool to unmount the volume before renaming it. The volume that is being renamed will have been configured in one of three modes: non-HA, HA or manual HA, and how it was configured will change how to rename the volume. Non-HA mode There are no extra steps needed when renaming a volume that is not in HA mode. HA mode When a volume is being used in HA mode, prior to running the rename command on the primary, on the secondary the /Library/Logs/Xsan/data/VolName directory should be manually renamed to /Library/Logs/Xsan/data/NewVolName. When the rename command is then run on the primary, the HA sync processes will propagate all the other configuration changes to the secondary. 
Wait for the HA sync to complete before continuing. Manual HA mode In manual HA mode, the rename command should be run on both MDCs. When run on the second MDC, cvupdatefs(8) will recognize that the name in the ICB has been changed, but will proceed if NewVolName is the same as the name in the ICB. In manual HA mode there is no need to manually rename /Library/Logs/Xsan/data/VolName since that will happen as part of running cvupdatefs -R on the second MDC. After changing the name of a volume, the change needs to be manually reflected in the /etc/fstab, /etc/vfstab or /etc/vstab files on all the clients before they remount the volume and the corresponding directories renamed or created. Windows StorNext SAN and DLAN Clients mounts will need to be remapped. Run the Client Configuration Tool to re-map the mount with new file system name. For any client that is operating as an Xsan volume Proxy Client, check to see if it has a /Library/Preferences/Xsan/dpserver.VolName file. If it does, it will need to be renamed to /Library/Preferences/Xsan/dpserver.NewVolName. If something goes wrong during the rename operation, cvupdatefs(8) will revert any partial changes, but it is still possible that in some corner cases it will not be able to fully revert the changes, and manual intervention will be required. Files that are modified and/or renamed during the rename operation include: /Library/Logs/Xsan/data/VolName /Library/Logs/Xsan/data/NewVolName /Library/Preferences/Xsan/VolName.cfg /Library/Preferences/Xsan/NewVolName.cfg /Library/Preferences/Xsan/fsmlist as well as the ICB in the volume itself. The OS dependent files that need to be manually updated include: /etc/fstab /etc/vfstab /etc/vstab Windows registry via the Windows Client Configuration Tool ENABLING CASE INSENSITIVE If a change in the file system configuration is detected such that case insensitive is being enabled, cvupdatefs invokes cvfsck as a sub- process to check for name collisions. 
If name collisions are detected, the update operation will be aborted. It is strongly recommended that cvfsck -A be run prior to attempting the change using cvupdatefs. EXIT VALUES cvupdatefs will return one of the following condition codes upon exit. 0 - No error, no changes made to the volume 1 - No error, changes have been made to the volume 2 - Configuration or volume state error, no changes made 3 - ICB error, improper volume found, no changes made 4 - Case conversion found name collisions, no changes made NOTES IMPORTANT: It is highly recommended to run cvfsck(8) prior to making any configuration changes. By default, cvupdatefs uses a copy-on-write (COW) store and applies changes to metadata at the very end. This is beneficial for performance and allows easier recovery if any issues are encountered that prevent successful completion. When COW is enabled, temporary space is typically consumed from /tmp or similar, depending on the platform. However, the temporary directory can be set using the TMPDIR environment variable. As noted above, COW can be disabled using the -W option, in which case no temporary space is used. FILES /Library/Preferences/Xsan/*.cfg /Library/Logs/Xsan/data/<volume_name>/config_history/*.cfg.<TIMESTAMP> SEE ALSO snfs_config(5), cvfsck(8), cvadmin(8) Xsan Volume November 2021 CVUPDATEFS(8)
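The exit values above lend themselves to scripted checks. The following POSIX shell helper is a sketch only; the function name is ours and is not part of cvupdatefs:

```shell
# Sketch: map cvupdatefs exit codes (see EXIT VALUES above) to messages.
# The function name is illustrative, not part of the cvupdatefs tool.
interpret_cvupdatefs_status() {
  case "$1" in
    0) echo "ok: no changes made to the volume" ;;
    1) echo "ok: changes have been made to the volume" ;;
    2) echo "error: configuration or volume state problem" ;;
    3) echo "error: ICB error, improper volume found" ;;
    4) echo "error: case conversion found name collisions" ;;
    *) echo "error: unexpected status $1" ;;
  esac
}
```

A wrapper script might run cvfsck first, then cvupdatefs, and pass $? to this function to decide whether the volume may be restarted.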
| null |
bless
|
bless is used to modify the volume bootability characteristics of filesystems, as well as select the active boot device. bless has 6 modes of execution: Folder Mode, Mount Mode, Device Mode, NetBoot Mode, Info Mode, and Unbless Mode. Folder Mode allows you to select a directory on a mounted volume to act as the “blessed” directory, which causes the system firmware to look in that directory for boot code. EFI-based systems also support a “blessed” system file, which is the primary mechanism of specifying the booter for a volume for those systems. In Folder Mode, if you are operating on an HFS+ volume, the HFS+ Volume Header is updated to reflect the files/directories given, which persists even if the volume is moved to another system or NVRAM is cleared. Mount Mode does not make permanent modifications to the filesystem, but rather sets the system firmware to boot from the specified volume, assuming it has been properly blessed. This is a subset of the functionality of Folder Mode with the --setBoot option, but is convenient when you don't want to change or interrogate the filesystem for its blessed status. Device Mode is similar to Mount Mode, but allows selection of unmounted filesystems, for instance while in single user mode. It can also perform certain offline modifications to the filesystem, but is not generally recommended. NetBoot Mode sets the system firmware to boot from the network, using a URL syntax to specify the protocol and server. bless only sets the local system to go into NetBoot mode, and does not communicate to the server what image should be used, if there are multiple images. Some other mechanism, such as using Startup Disk, should be used to select that. Info Mode will print out the currently-blessed directory of a volume, or if no mountpoint is specified, the active boot device that the firmware is set to boot from. Unbless Mode complements Folder Mode, and clears the persistent blessed folder and file information on HFS+ volumes. 
NOTE: bless must be run as the root user. Additionally, --help can be used to display the command-line usage summary. OPTIONS FOR INTEL ARCHITECTURE BASED DEVICES: FILE/FOLDER MODE Folder Mode has the following options: --folder directory Set this directory to be the Mac OS X/Darwin blessed directory, containing a BootX secondary loader for New World machines. --file file Set this file to be the Mac OS X/Darwin blessed boot file, containing a booter for EFI-based systems. If this option is not provided, a default boot file is used based on the blessed directory. --bootinfo [file] Create a BootX file in the Mac OS X/Darwin system folder using file as a source. If file is not provided, a default is used (see FILES), using a path relative to the mountpoint you are blessing. This attempts to ensure that a BootX is used that is compatible with the OS on the target volume. --bootefi [file] Create a boot.efi file in the Mac OS X/Darwin system folder using file as a source. If file is not provided, a default is used (see FILES), using a path relative to the mountpoint you are blessing. This attempts to ensure that a boot.efi is used that is compatible with the OS on the target volume. If --file is also provided, the new file will be created at that path instead. --label name Render a text label used in the firmware-based OS picker. --labelfile file Use a pre-rendered label for the firmware-based OS picker. --setBoot Set the system to boot off the specified partition. This is implemented in a platform-specific manner. On Open Firmware-based systems, the boot-device variable is modified. On EFI-based systems, the efi-boot-device variable is changed. This is not supported on Apple Silicon based systems. --nextonly Only change the boot device selection for the next boot. This is only supported on EFI-based systems. --shortform Use an abbreviated device path form. This option can allow for booting from new devices, at the expense of boot time performance. 
This is only supported on EFI-based systems. --legacy If --setBoot is given, set the firmware to boot a legacy BIOS-based operating system from the specified disk. The active flag of an MBR-partitioned disk is not modified, which can be done with fdisk(8). This is only supported on EFI-based systems. --legacydrivehint device Instruct the firmware to treat the specified whole disk as the primary, master IDE drive. This is only supported on EFI-based systems. --options Set load options associated with the new boot option. This is only supported on EFI-based systems, and in general should be avoided. Instead, use nvram(8) to set "boot-args", which will work with both Open Firmware- and EFI-based systems. --personalize Attempts to do a personalization operation on the target, which validates the SecureBoot bundle and ensures that the relevant boot files are signed and valid for this particular machine. This may require network access, in order to check the signatures. Only one of the following snapshot options can be activated at the same time: --create-snapshot Attempts to create an APFS root snapshot of the target APFS system volume and set it as root snapshot of the system volume. The target system will boot from this snapshot on its next boot. --snapshot Set specific snapshot (uuid) as root snapshot of the system volume. The target system will boot from this snapshot on its next boot. --snapshotname Set specific snapshot (name) as root snapshot of the system volume. The target system will boot from this snapshot on its next boot. --last-sealed-snapshot Reverts to using the previously signed APFS root snapshot, re-enabling Authenticated Root Volume. The target system will boot from this sealed snapshot on its next boot. --quiet Do not print any output --verbose Print verbose output MOUNT MODE Mount Mode has the following options: --mount directory Use the volume mounted at directory to change the active boot device, in conjunction with --setBoot. 
The volume must already be properly blessed. --file file Instead of allowing the firmware to discover the booter based on the blessed directory or file, pass an explicit path to the firmware to boot from. This can be used to run EFI applications or EFI booters for alternate OSes, but should not normally be used. This is only supported on EFI-based systems. --setBoot Same as for Folder Mode. --nextonly Same as for Folder Mode. --shortform Same as for Folder Mode. --legacy Same as for Folder Mode. --legacydrivehint device Same as for Folder Mode. --options Same as for Folder Mode. --personalize Same as for Folder Mode. --create-snapshot Same as for Folder Mode. --snapshot Same as for Folder Mode. --snapshotname Same as for Folder Mode. --last-sealed-snapshot Same as for Folder Mode. --bootefi This enables copying required boot objects when --create-snapshot or --last-sealed-snapshot is given. --quiet Do not print any output --verbose Print verbose output DEVICE MODE Device Mode has the following options: --device device Use the block device device to change the active boot device. No volumes should be mounted from device, and the filesystem should already be properly blessed. --label name Set the firmware-based OS picker label for the unmounted filesystem, using name, which should be in UTF-8 encoding. --labelfile file Use a pre-rendered label with the firmware-based OS picker. --setBoot Set the system to boot off the specified partition, as with Folder and Mount Modes. --startupfile file Add the file as the HFS+ StartupFile, and update other information on disk as appropriate for the startup file type. --nextonly Same as for Folder Mode. --shortform Same as for Folder Mode. --options Same as for Folder Mode. --legacy Same as for Folder Mode. --legacydrivehint device Same as for Folder Mode. 
--quiet Do not print any output --verbose Print verbose output NETBOOT MODE NetBoot Mode has the following options: --netboot Instead of setting the active boot selection to a disk-based volume, set the system to NetBoot. --server protocol://[interface@]server A URL specification of how to boot the system. Currently, the only protocol supported is BSDP ("bsdp"), Apple's Boot Service Discovery Protocol. The interface is optional, and the server is the IPv4 address of the server in dotted-quad notation. If there is not a specific server you'd like to use, pass "255.255.255.255" to have the firmware broadcast for the first available server. Examples of this notation would be "bsdp://255.255.255.255" and "bsdp://en1@17.203.12.203". --nextonly Same as for Folder Mode. --options Same as for Folder Mode. --quiet Do not print any output --verbose Print verbose output INFO MODE Info Mode has the following options: --info [directory] Print out the blessed system folder for the volume mounted at directory. If directory is not specified, print information for the currently selected boot device (which may not necessarily be ‘/’). This is not supported on Apple Silicon based systems. --getBoot Print out the logical boot device, based on what is currently selected. This option will take into account the fact that the firmware may be pointing to an auxiliary booter partition, and will print out the corresponding root partition for those cases. If the system is configured to NetBoot, a URL matching the format of the --server specification for NetBoot mode will be printed. --plist Output all information in Property List (.plist) format, suitable for parsing by CoreFoundation. This is most useful when bless is executed from another program and its standard output must be parsed. 
--quiet Do not print any output --verbose Print verbose output --version Print bless version and exit immediately UNBLESS MODE Unbless Mode has the following options: --unbless directory Use the HFS+ volume mounted at directory and unset any persistent blessed files/directories in the HFS+ Volume Header. OPTIONS FOR APPLE SILICON DEVICES: NOTE: You may be prompted for admin credentials when running bless on an Apple silicon platform (beyond running the tool as an admin user). However, if the volume has been previously blessed by a different OS instance, then these credentials may not be necessary or used to bless the target OS. FOLDER MODE - Available only for external/removable devices Folder Mode has the following options: --folder directory Set this directory to be the Mac OS X/Darwin blessed directory, containing a booter for EFI-based systems. --file file Set this file to be the Mac OS X/Darwin blessed boot file, containing a booter for EFI-based systems. If this option is not provided, a default boot file is used based on the blessed directory. --personalize Attempts to do a personalization operation on the target, which validates the SecureBoot bundle and ensures that the relevant boot files are signed and valid for this particular machine. This may require network access, in order to check the signatures. --quiet Do not print any output --verbose Print verbose output MOUNT MODE Mount Mode has the following options: --mount directory Use the volume mounted at directory to change the active boot device, in conjunction with --setBoot. The volume must already be properly blessed. --nextonly Only change the boot device selection for the next boot. --create-snapshot Attempts to create an APFS root snapshot of the target APFS system volume and set it as root snapshot of the system volume. The target system will boot from this snapshot on its next boot. --snapshot Set specific snapshot (uuid) as root snapshot of the system volume. 
The target system will boot from this snapshot on its next boot. --snapshotname Set specific snapshot (name) as root snapshot of the system volume. The target system will boot from this snapshot on its next boot. --last-sealed-snapshot Reverts to using the previously signed APFS root snapshot, re-enabling Authenticated Root Volume. The target system will boot from this sealed snapshot on its next boot. --user Collect a local owner username to authorize boot policy modification. --stdinpass Collect a local owner password from stdin without prompting. --passprompt Explicitly ask to be prompted for the password. --quiet Do not print any output --verbose Print verbose output DEVICE MODE Device Mode has the following options: --device device Use the block device device to change the active boot device. No volumes should be mounted from device, and the filesystem should already be properly blessed. --setBoot Set the system to boot off the specified volume, as with Mount Mode. --nextonly Same as for Mount Mode. --user Collect a local owner username to authorize boot policy modification. --stdinpass Collect a local owner password from stdin without prompting. --passprompt Explicitly ask to be prompted for the password. --quiet Do not print any output --verbose Print verbose output INFO MODE Info Mode has the following options: --info [directory] (Available only for external/removable devices) Print out the blessed system folder for the volume mounted at directory. If directory is not specified, print information for the currently selected boot device (which may not necessarily be ‘/’). --getBoot Print out the logical boot device, based on what is currently selected. This option will take into account the fact that the firmware may be pointing to an auxiliary booter partition, and will print out the corresponding root partition for those cases. 
--plist Output all information in Property List (.plist) format, suitable for parsing by CoreFoundation. This is most useful when bless is executed from another program and its standard output must be parsed. --user Collect a local owner username to authorize boot policy modification. --stdinpass Collect a local owner password from stdin without prompting. --passprompt Explicitly ask to be prompted for the password. --quiet Do not print any output --verbose Print verbose output --version Print bless version and exit immediately FILES Booter for EFI-based systems, used with the --bootefi flag. If the argument to --bootefi is omitted, this file will be used as the default input. Typical blessed folder for Mac OS X and Darwin
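The --server notation described for NetBoot Mode can be validated before invoking bless. The following POSIX shell helper is a sketch only; it is not part of bless, and the regular expression reflects the bsdp://[interface@]a.b.c.d form given above:

```shell
# Sketch: check that a string matches the bsdp://[interface@]a.b.c.d
# notation accepted by bless --netboot --server. Illustrative helper,
# not part of bless itself.
is_valid_bsdp_url() {
  printf '%s\n' "$1" |
    grep -Eq '^bsdp://([A-Za-z0-9]+@)?([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
```

A wrapper could call this before running, e.g., bless --netboot --server "bsdp://en1@17.203.12.203" and fail early on a malformed URL.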
|
bless – set volume bootability and startup disk options
|
bless --help bless --folder directory [--file file] [--bootefi [file]] [--label name | --labelfile file] [--setBoot] [--nextonly] [--shortform] [--legacy] [--legacydrivehint device] [--options string] [--personalize] [--create-snapshot] [--snapshot] [--snapshotname] [--last-sealed-snapshot] [--quiet | --verbose] bless --mount directory [--file file] [--setBoot] [--nextonly] [--shortform] [--legacy] [--legacydrivehint device] [--options string] [--personalize] [--snapshot] [--snapshotname] [--create-snapshot] [--last-sealed-snapshot] [--quiet | --verbose] bless --device device [--label name | --labelfile file] [--startupfile file] [--setBoot] [--nextonly] [--shortform] [--legacy] [--legacydrivehint device] [--options string] [--quiet | --verbose] bless --netboot --server url [--nextonly] [--options string] [--quiet | --verbose] bless --info [directory] [--getBoot] [--plist] [--quiet | --verbose] [--version] bless --unbless directory
| null |
FOLDER MODE To bless a volume with only Mac OS X or Darwin, and create the BootX and boot.efi files as needed: bless --folder "/Volumes/Mac OS X/System/Library/CoreServices" --bootefi MOUNT MODE To set a volume containing either Mac OS 9 or Mac OS X to be the active volume: bless --mount "/Volumes/Mac OS" --setBoot NETBOOT MODE To set the system to NetBoot and broadcast for an available server: bless --netboot --server bsdp://255.255.255.255 INFO MODE To gather information about the currently selected volume (as determined by the firmware), suitable for piping to a program capable of parsing Property Lists: bless --info --plist SEE ALSO mount(8), newfs(8), nvram(8) Mac OS X July 6, 2022 Mac OS X
|
cvgather
|
The cvgather program is used to collect debug information from a volume. This creates a tar file of the system's Xsan File System debug logs, configuration, version information and disk devices. The cvgather program will collect client debug information on client machines and server information on server machines, as well as portmapper information from all machines. System log files as well as Xsan log files are included. At the user's option, cvgather also collects core files from user space utilities, such as the fsm, and also from the operating system kernel, when available. USAGE When the operator encounters an error using Xsan and wishes to send debugging information to Apple technical support, the cvgather utility may be run. The following command arguments and options affect the behavior of cvgather. -f VolName Specify the name of the volume for which debugging information should be collected. Some information is universal to all installed volumes, while some is unique to each file system. -k Collect the core file from the operating system kernel. This option is not supported on Linux. The -k option collects the kernel core from the default location for the machine's operating system. To collect the kernel core from another location use -K. -K KernelCore Collect the kernel core file from any file. You must specify the full filename as well as the path. -n NumberOfCvlogs Specify the number of cvlog files to include in the tarball. If this option is not selected, 4 will be used. This is the default number of cvlogs used by the fsm. -o OutputFile Specify the name of the output file. This name is appended with a _timestamp and the suffix '.tgz'. The timestamp is appended to the filename to allow for the existence of multiple tar files. If this option is not selected, the name of the volume will be used as a default. -p SnfsPath Specify the file path to the Xsan install directory. 
If this option is not selected, the path /System/Library/Filesystems/acfs.fs/Contents will be used as a default. -s Gather symbol information without core files. -u Collect the core file from user executables, such as the fsm. By default, if they exist, cvgather will pick up a file named "core" and the most recently modified "core.*" file on systems that support core file names with extensions. The -u option collects core files from the 'debug' directory in the Xsan directory. To collect user core files from another location or core files with non-standard names use -U. -U UserCore Collect the user core file from any file. You must specify the full filename as well as the path. -r Show numerical addresses instead of trying to determine symbolic host names -x Exclude files that are collected by pse_snapshot. Note that this option is intended to be used by pse_snapshot only and not for general use. The behavior of this option may change without warning. When cvgather is run it will create a tar file that can simply be e-mailed to Apple technical support for evaluation. It is recommended that the tar file be compressed into a standard compression format such as compress, gzip, or bzip2. NOTES IMPORTANT: cvgather creates a number of temporary files, thus must have write privileges for the directory in which it is run. These files, as well as the output tar file can be very large, especially when the kernel core file is included, thus adequate disk space must be available. Several important log files are only accessible by the root user, thus it is important that cvgather be run with root privileges to gather the entire range of useful information. 
FILES /Library/Preferences/Xsan/*.cfg /Library/Logs/Xsan/debug/cvfsd.out /Library/Logs/Xsan/debug/nssdbg.out /Library/Logs/Xsan/debug/fsmpm.out /Library/Logs/Xsan/data/<volume_name>/log/cvlog* LIMITATIONS Only the Linux platform is supported with cvgather. The Windows version of cvgather contains different features and options. See the StorNext Windows "Gather Debugging Info (cvgather.exe)" help page for more information. SEE ALSO cvdb(8), cvversions(1), cvfsid(8), cvlabel(8) Xsan File System December 2021 CVGATHER(8)
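The output-name convention described under -o can be sketched as a small shell function. This is illustrative only: the page states that a timestamp and the '.tgz' suffix are appended, but the exact timestamp format used here is an assumption, not documented behavior.

```shell
# Sketch: mimic the -o naming convention, <name>_<timestamp>.tgz.
# The %Y%m%d%H%M%S timestamp format is assumed for illustration;
# cvgather's actual timestamp format may differ.
cvgather_output_name() {
  printf '%s_%s.tgz' "$1" "$(date +%Y%m%d%H%M%S)"
}
```

Such a helper is useful in wrapper scripts that need to predict or glob for the tar file cvgather produced.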
|
cvgather - Compile debugging information for an Xsan File System (The Windows version of cvgather contains different features and options. See the StorNext Windows "Gather Debugging Info (cvgather.exe)" help page for more information.)
|
cvgather -f VolName [-sukxr] [-o OutputFile] [-n NumberOfCvlogs] [-U UserCore] [-K KernelCore] [-p SnfsPath]
| null | null |
xscertadmin
|
Manage Certificate Revocation Lists (CRLs) in an Open Directory Environment. COMMANDS list [-x] [-v] [-d ⟨debug_level⟩] [-a | ⟨issuer_common_name⟩] add [-x] [-v] [-d ⟨debug_level⟩] [-r ⟨reason_code⟩] [-t ⟨type⟩] ⟨user/record name⟩ add [-x] [-v] [-d ⟨debug_level⟩] [-r ⟨reason_code⟩] [-t ⟨type⟩] -f ⟨file_of_names⟩ add [-x] [-v] [-d ⟨debug_level⟩] [-r ⟨reason_code⟩] -i ⟨issuer_common_name⟩ -s ⟨serial_number⟩ COMMON OPTIONS -x/--xml Returns output to stdout in xml format. -v/--version Print the version of the tool. -d/--debug ⟨debug_level⟩ Sets the level of diagnostic message detail, 0 for off and 1 for on. LIST OPTIONS -a/--all List all CRLs ADD OPTIONS -f/--file A file containing names. -i/--issuer The common name of the issuer of the certificate. -r/--reason The revocation reason code. -s/--serial The serial number of the certificate to revoke. -t/--type The record type: user, device, computer or recordType. REASON CODES 0 unspecified 1 keyCompromise 2 cACompromise 3 affiliationChanged 4 superseded 5 cessationOfOperation 6 certificateHold 9 privilegeWithdrawn 10 aACompromise MacOSX Fri March 11 2011 MacOSX
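The reason codes above can be tabulated in a small helper for scripts that wrap xscertadmin add -r. This is a sketch; the function is ours, not part of xscertadmin, and codes 7 and 8 are deliberately absent to match the REASON CODES table:

```shell
# Sketch: name for each xscertadmin revocation reason code listed above.
# Codes 7 and 8 are intentionally absent, matching the REASON CODES table.
reason_name() {
  case "$1" in
    0) echo unspecified ;;
    1) echo keyCompromise ;;
    2) echo cACompromise ;;
    3) echo affiliationChanged ;;
    4) echo superseded ;;
    5) echo cessationOfOperation ;;
    6) echo certificateHold ;;
    9) echo privilegeWithdrawn ;;
    10) echo aACompromise ;;
    *) echo unknown ;;
  esac
}
```

A wrapper might log "revoking with reason $(reason_name "$code")" before invoking xscertadmin add -r "$code".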
|
xscertadmin -- process Certificate Revocation Lists in an OD environment
|
xscertadmin command [common options] [command options]
| null | null |
visudo
|
visudo edits the sudoers file in a safe fashion, analogous to vipw(8). visudo locks the sudoers file against multiple simultaneous edits, performs basic validity checks, and checks for syntax errors before installing the edited file. If the sudoers file is currently being edited you will receive a message to try again later. visudo parses the sudoers file after editing and will not save the changes if there is a syntax error. Upon finding an error, visudo will print a message stating the line number(s) where the error occurred and the user will receive the “What now?” prompt. At this point the user may enter ‘e’ to re-edit the sudoers file, ‘x’ to exit without saving the changes, or ‘Q’ to quit and save changes. The ‘Q’ option should be used with extreme caution because if visudo believes there to be a syntax error, so will sudo. If ‘e’ is typed to edit the sudoers file after a syntax error has been detected, the cursor will be placed on the line where the error occurred (if the editor supports this feature). There are two sudoers settings that determine which editor visudo will run. editor A colon (‘:’) separated list of editors allowed to be used with visudo. visudo will choose the editor that matches the user's SUDO_EDITOR, VISUAL, or EDITOR environment variable if possible, or the first editor in the list that exists and is executable. sudo does not preserve the SUDO_EDITOR, VISUAL, or EDITOR environment variables unless they are present in the env_keep list or the env_reset option is disabled in the sudoers file. The default editor path is /usr/bin/vi which can be set at compile time via the --with-editor configure option. env_editor If set, visudo will use the value of the SUDO_EDITOR, VISUAL, or EDITOR environment variables before falling back on the default editor list. visudo is typically run as root so this option may allow a user with visudo privileges to run arbitrary commands as root without logging. 
An alternative is to place a colon-separated list of “safe” editors in the editor variable. visudo will then only use SUDO_EDITOR, VISUAL, or EDITOR if they match a value specified in editor. If the env_reset flag is enabled, the SUDO_EDITOR, VISUAL, and/or EDITOR environment variables must be present in the env_keep list for the env_editor flag to function when visudo is invoked via sudo. The default value is on, which can be set at compile time via the --with-env-editor configure option. The options are as follows: -c, --check Enable check-only mode. The existing sudoers file (and any other files it includes) will be checked for syntax errors. If the path to the sudoers file was not specified, visudo will also check the file ownership and permissions (see the -O and -P options). A message will be printed to the standard output describing the status of sudoers unless the -q option was specified. If the check completes successfully, visudo will exit with a value of 0. If an error is encountered, visudo will exit with a value of 1. -f sudoers, --file=sudoers Specify an alternate sudoers file location, see below. As of version 1.8.27, the sudoers path can be specified without using the -f option. -h, --help Display a short help message to the standard output and exit. -I, --no-includes Disable the editing of include files unless there is a pre- existing syntax error. By default, visudo will edit the main sudoers file and any files included via @include or #include directives. Files included via @includedir or #includedir are never edited unless they contain a syntax error. -O, --owner Enforce the default ownership (user and group) of the sudoers file. In edit mode, the owner of the edited file will be set to the default. In check mode (-c), an error will be reported if the owner is incorrect. This option is enabled by default if the sudoers file was not specified. -P, --perms Enforce the default permissions (mode) of the sudoers file. 
In edit mode, the permissions of the edited file will be set to the default. In check mode (-c), an error will be reported if the file permissions are incorrect. This option is enabled by default if the sudoers file was not specified. -q, --quiet Enable quiet mode. In this mode details about syntax errors are not printed. This option is only useful when combined with the -c option. -s, --strict Enable strict checking of the sudoers file. If an alias is referenced but not actually defined or if there is a cycle in an alias, visudo will consider this a syntax error. It is not possible to differentiate between an alias and a host name or user name that consists solely of uppercase letters, digits, and the underscore (‘_’) character. -V, --version Print the visudo and sudoers grammar versions and exit. A sudoers file may be specified instead of the default, /private/etc/sudoers. The temporary file used is the specified sudoers file with “.tmp” appended to it. In check-only mode only, ‘-’ may be used to indicate that sudoers will be read from the standard input. Because the policy is evaluated in its entirety, it is not sufficient to check an individual sudoers include file for syntax errors. Debugging and sudoers plugin arguments visudo versions 1.8.4 and higher support a flexible debugging framework that is configured via Debug lines in the sudo.conf(5) file. Starting with sudo 1.8.12, visudo will also parse the arguments to the sudoers plugin to override the default sudoers path name, user-ID, group-ID, and file mode. These arguments, if present, should be listed after the path to the plugin (i.e., after sudoers.so). Multiple arguments may be specified, separated by white space. For example: Plugin sudoers_policy sudoers.so sudoers_mode=0400 The following arguments are supported: sudoers_file=pathname The sudoers_file argument can be used to override the default path to the sudoers file. 
sudoers_uid=user-ID The sudoers_uid argument can be used to override the default owner of the sudoers file. It should be specified as a numeric user-ID. sudoers_gid=group-ID The sudoers_gid argument can be used to override the default group of the sudoers file. It must be specified as a numeric group-ID (not a group name). sudoers_mode=mode The sudoers_mode argument can be used to override the default file mode for the sudoers file. It should be specified as an octal value. For more information on configuring sudo.conf(5), refer to its manual. ENVIRONMENT The following environment variables may be consulted depending on the value of the editor and env_editor sudoers settings: SUDO_EDITOR Invoked by visudo as the editor to use VISUAL Used by visudo if SUDO_EDITOR is not set EDITOR Used by visudo if neither SUDO_EDITOR nor VISUAL is set FILES /private/etc/sudo.conf Sudo front-end configuration /private/etc/sudoers List of who can run what /private/etc/sudoers.tmp Default temporary file used by visudo DIAGNOSTICS In addition to reporting sudoers syntax errors, visudo may produce the following messages: sudoers file busy, try again later. Someone else is currently editing the sudoers file. /private/etc/sudoers: Permission denied You didn't run visudo as root. you do not exist in the passwd database Your user-ID does not appear in the system passwd database. Warning: {User,Runas,Host,Cmnd}_Alias referenced but not defined Either you are trying to use an undeclared {User,Runas,Host,Cmnd}_Alias or you have a user or host name listed that consists solely of uppercase letters, digits, and the underscore (‘_’) character. In the latter case, you can ignore the warnings (sudo will not complain). The message is prefixed with the path name of the sudoers file and the line number where the undefined alias was used. In -s (strict) mode these are errors, not warnings. 
Warning: unused {User,Runas,Host,Cmnd}_Alias The specified {User,Runas,Host,Cmnd}_Alias was defined but never used. The message is prefixed with the path name of the sudoers file and the line number where the unused alias was defined. You may wish to comment out or remove the unused alias. Warning: cycle in {User,Runas,Host,Cmnd}_Alias The specified {User,Runas,Host,Cmnd}_Alias includes a reference to itself, either directly or through an alias it includes. The message is prefixed with the path name of the sudoers file and the line number where the cycle was detected. This is only a warning unless visudo is run in -s (strict) mode as sudo will ignore cycles when parsing the sudoers file. unknown defaults entry "name" The sudoers file contains a Defaults setting not recognized by visudo. SEE ALSO vi(1), sudo.conf(5), sudoers(5), sudo(8), vipw(8) AUTHORS Many people have worked on sudo over the years; this version consists of code written primarily by: Todd C. Miller See the CONTRIBUTORS.md file in the sudo distribution (https://www.sudo.ws/about/contributors/) for an exhaustive list of people who have contributed to sudo. CAVEATS There is no easy way to prevent a user from gaining a root shell if the editor used by visudo allows shell escapes. BUGS If you believe you have found a bug in visudo, you can submit a bug report at https://bugzilla.sudo.ws/ SUPPORT Limited free support is available via the sudo-users mailing list, see https://www.sudo.ws/mailman/listinfo/sudo-users to subscribe or search the archives. DISCLAIMER visudo is provided “AS IS” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. See the LICENSE.md file distributed with sudo or https://www.sudo.ws/about/license/ for complete details. Sudo 1.9.13p2 January 16, 2023 VISUDO(8)
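The interaction between the editor and env_editor settings described above can be sketched in Python. This is an illustrative model of the documented precedence (SUDO_EDITOR, then VISUAL, then EDITOR, then the editor list), not visudo's actual implementation; pick_editor is a hypothetical helper name.

```python
import os

def pick_editor(allowed, env, env_editor=True):
    """Sketch of visudo's documented editor selection.  With env_editor
    set, honor SUDO_EDITOR, VISUAL, then EDITOR directly; otherwise only
    honor them when they match an entry in the colon-separated 'editor'
    list, falling back to the first existing, executable entry there."""
    candidates = [env.get(v) for v in ("SUDO_EDITOR", "VISUAL", "EDITOR")]
    candidates = [c for c in candidates if c]
    allowed_list = allowed.split(":")
    if env_editor and candidates:
        return candidates[0]
    for c in candidates:
        if c in allowed_list:
            return c
    for path in allowed_list:
        # First editor in the list that exists and is executable.
        if os.path.exists(path) and os.access(path, os.X_OK):
            return path
    return None
```

Note that with env_editor disabled, an arbitrary value of VISUAL is ignored unless it appears verbatim in the "safe" editor list, which is the hardening the manual recommends.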
|
visudo - edit the sudoers file
|
visudo [-chIOPqsV] [[-f] sudoers]
| null | null |
uusched
|
The uusched program is actually just a shell script which invokes the uucico daemon. It is provided for backward compatibility. It causes uucico to call all systems for which there is work. Any option which may be given to uucico may also be given to uusched. For more details, see uucico(8). SEE ALSO uucp(1), uux(1), uustat(1), uucico(8), uuxqt(8) AUTHOR Ian Lance Taylor <ian@airs.com>. Text for this Manpage comes from Taylor UUCP, version 1.07 Info documentation. Taylor UUCP 1.07 uusched(8)
|
uusched - schedule the uucp file transport program
|
uusched [ options ]
| null | null |
dsconfigldap
|
dsconfigldap allows addition or removal of LDAP server configurations. Presented below is a discussion of possible parameters. Usage has three intents: add server config, remove server config, or display help. The options and their descriptions are as follows: -f Bindings will be established or dropped in conjunction with the addition or removal of the LDAP server configuration. -v This enables the logging to stdout of the details of the operations. This can be redirected to a file. -i You will be prompted for a password to use in conjunction with a specified username. -s This ensures that no clear text passwords will be sent to the LDAP server during authentication. This will only be enabled if the server supports non-cleartext methods. -e This ensures that if the server is capable of supporting encryption methods (i.e., SSL or Kerberos) that encryption will be enforced at all times via policy. -m This ensures that man-in-the-middle protection will be enforced via Kerberos, if the server supports the capability. -g This ensures that packet signing capabilities will be enforced via Kerberos, if the server supports the capability. -x Connection to the LDAP server will only be made over SSL. -S Will skip updating the search policies. -N Will assume Yes for installing certificates. -h Display usage statement. -a servername This is either the fully qualified domain name or correct IP address of the LDAP server to be added to the DirectoryService LDAPv3 configuration. -r servername This is either the fully qualified domain name or correct IP address of the LDAP server to be removed from the DirectoryService LDAPv3 configuration. -n configname This is the UI configuration label that is to be given to the LDAP server configuration. -c computerid This is the name to be used for directory binding to the LDAP server. If none is given, the first substring, before a period, of the hostname (the defined environment variable "HOST") is used. 
-u username Username of a privileged network user to be used in authenticated directory binding. -p password Password for the privileged network user. This is a less secure method of providing a password, as it may be viewed via process list. For stronger security leave the option off and you will be prompted for a password. -l username Username of a local administrator. -q password Password for the local administrator. This is a less secure method of providing a password, as it may be viewed via process list. For stronger security leave the option off and you will be prompted for a password.
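The default computerid described for -c (the hostname truncated at the first period) can be sketched as follows. This is an illustration of the documented rule, not dsconfigldap's actual code; default_computer_id is a hypothetical name.

```python
import os

def default_computer_id(host=None):
    """Sketch of the -c default: the substring of the hostname before
    the first period.  The HOST environment variable fallback mirrors
    the variable mentioned in the option description above."""
    if host is None:
        host = os.environ.get("HOST", "")
    return host.split(".", 1)[0]
```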
|
dsconfigldap – LDAP server configuration/binding add/remove tool.
|
dsconfigldap [-fvixsgmeSN] -a servername [-n configname] [-c computerid] [-u username] [-p password] [-l username] [-q password] dsconfigldap [-fviSN] -r servername [-u username] [-p password] [-l username] [-q password] options: -f force authenticated binding/unbinding -v verbose logging to stdout -i prompt for passwords as required -x choose SSL connection -s enforce secure authentication only -g enforce packet signing security policy -m enforce man-in-middle security policy -e enforce encryption security policy -S do not update search policies -N do not prompt about adding certificates -h display usage statement -a servername add config of servername -r servername remove config of servername -n configname name given to LDAP server config -c computerid name used if binding to directory -u username privileged network username -p password privileged network user password -l username local admin username -q password local admin password
| null |
dsconfigldap -a ldap.company.com The LDAP server config for the LDAP server ldap.company.com will be added. If authenticated directory binding is required by the LDAP server, then this call will fail. Otherwise, the following parameters configname, computerid, and local admin name will respectively pick up these defaults: IP address of the LDAP servername, substring up to first period of fully qualified hostname, and username of the user in the shell from which this tool was invoked. dsconfigldap -r ldap.company.com The LDAP server config for the LDAP server ldap.company.com will be removed but not unbound since no network user credentials were supplied. The local admin name will be the username of the user in the shell from which this tool was invoked. SEE ALSO opendirectoryd(8), odutil(1) Mac OS X April 24 2010 Mac OS X
|
smbdiagnose
|
The smbdiagnose tool gathers system information and network traffic in order to assist Apple when investigating issues related to SMB file sharing. A great deal of information is harvested, spanning system state, system configuration, and network traffic related to SMB file sharing. What smbdiagnose Collects: • A network packet trace of SMB file sharing traffic that occurs during the time the tool is run. This packet trace may include file names and file data. It may also include the authentication exchange used to log into the SMB server. • Enhanced debug logs for the SMB server process. • If the -s option is specified, the default set of information collected by sysdiagnose(1). What smbdiagnose Doesn't Collect: • No authentication credentials are harvested from the system
|
smbdiagnose – gather information to aid in diagnosing SMB file sharing issues
|
smbdiagnose -h smbdiagnose [-f path] [-s]
|
-h Print full usage. -f path Write the diagnostic to the specified path. The default is “~/Desktop”. -s Collect sysdiagnose(1) information. The default is not to collect sysdiagnose(1) information. EXIT STATUS smbdiagnose exits with status 0 if there were no internal errors encountered during the diagnostic, or >0 when an error unrelated to external state occurs or unusable input is provided by the user. macOS 15 July 2016 macOS
| null |
cfprefsd
|
cfprefsd provides preferences services for the CFPreferences and NSUserDefaults APIs. There are no configuration options to cfprefsd. Users should not run cfprefsd manually. Mac OS X October 25th, 2011 Mac OS X
|
cfprefsd – defaults server
|
cfprefsd
| null | null |
auditd
|
The auditd daemon responds to requests from the audit(8) utility and notifications from the kernel. It manages the resulting audit log files and specified log file locations. The options are as follows: -d Starts the daemon in debug mode; it will not daemonize. -l This option is for when auditd is configured to start on-demand using launchd(8). Optionally, the audit review group "audit" may be created. Non-privileged users that are members of this group may read the audit trail log files. NOTE To assure uninterrupted audit support, the auditd daemon should not be started and stopped manually. Instead, the audit(8) command should be used to inform the daemon to change state/configuration after altering the audit_control file. If auditd is started on-demand by launchd(8) then auditing should only be started and stopped with audit(8). On Mac OS X, auditd uses the asl(3) API for writing system log messages. Therefore, only the audit administrator and members of the audit review group will be able to read the system log entries. FILES /var/audit Default directory for storing audit log files. /etc/security The directory containing the auditing configuration files audit_class(5), audit_control(5), audit_event(5), and audit_warn(5). COMPATIBILITY The historical -h and -s flags are now configured using audit_control(5) policy flags ahlt and cnt, and are no longer available as arguments to auditd. SEE ALSO asl(3), libauditd(3), audit(4), audit_class(5), audit_control(5), audit_event(5), audit_warn(5), audit(8), launchd(8) HISTORY The OpenBSM implementation was created by McAfee Research, the security division of McAfee Inc., under contract to Apple Computer Inc. in 2004. It was subsequently adopted by the TrustedBSD Project as the foundation for the OpenBSM distribution. AUTHORS This software was created by McAfee Research, the security research division of McAfee, Inc., under contract to Apple Computer Inc. 
Additional authors include Wayne Salamon, Robert Watson, and SPARTA Inc. The Basic Security Module (BSM) interface to audit records and audit event stream format were defined by Sun Microsystems. macOS 14.5 December 11, 2008 macOS 14.5
|
auditd – audit log management daemon DEPRECATION NOTICE The audit(4) subsystem has been deprecated since macOS 11.0, disabled since macOS 14.0, and WILL BE REMOVED in a future version of macOS. Applications that require a security event stream should use the EndpointSecurity(7) API instead. On this version of macOS, you can re-enable audit(4) by renaming or copying /etc/security/audit_control.example to /etc/security/audit_control, re-enabling the system/com.apple.auditd service by running launchctl enable system/com.apple.auditd as root, and rebooting.
|
auditd [-d | -l]
| null | null |
uuchk
|
The uuchk program displays information read from the UUCP configuration files. It should be used to ensure that UUCP has been configured correctly. The -s or --system options may be used to display the configuration for just the specified system, rather than for all systems. The uuchk program also supports the standard UUCP program options.
|
uuchk - display information read from the UUCP configuration files.
|
uuchk [ options ]
|
-s system, --system system Display the configuration for just the specified system, rather than for all systems. Standard UUCP options: -I file, --config Set configuration file to use. -v, --version Report version information and exit. --help Print a help message and exit. SEE ALSO uucp(1) AUTHOR Ian Lance Taylor <ian@airs.com>. Text for this Manpage comes from Taylor UUCP, version 1.07 Info documentation. Taylor UUCP 1.07 uuchk(8)
| null |
cvfsck
|
The cvfsck program can check and repair Xsan file system metadata corruption due to a system crash, bad disk or other catastrophic failure. This program also has the ability to list all of the existing files and their pertinent statistics, such as inode number, size, file type and location in the volume. If the volume is active, it may only be checked in a Read-only mode. In this mode, modifications are noted, but not committed. The -n option may be used to perform a read only check as well. The file system checking program must be run on the machine where the File System Services are running. cvfsck reads the configuration file and compares the configuration against a saved copy that is stored in the metadata. It is important that the configuration file (see snfs_config(5)) accurately reflect the current state of the volume. If you need to change a parameter in a current configuration, save a copy of the configuration first or make sure /Library/Logs/Xsan/data/VolName/config_history/*.cfg.<TIMESTAMP> already has a recent copy. Once the configuration file has been validated with the metadata version, if the configuration file is different and cvfsck is not in read-only mode, the new configuration is stored in the metadata and the previous version is written to /Library/Logs/Xsan/data/VolName/config_history/*.cfg.<TIMESTAMP>. After validating the configuration file, cvfsck reads all of the metadata, checks it for any inconsistencies, and the volume is repaired to resolve these issues or if in read-only mode, any problems are reported. By default, modifications are first written to a file in the local volume instead of the Xsan disks. All fixes are made to this local file, including journal replay. When all problems are fixed and the run is complete, the user is asked if the changes should be copied to the actual Xsan disks. If the user responds "y", the changes are made. An answer of "n" indicates that the volume should not be changed. 
This allows the user to easily gauge the extent of problems with a volume before committing to the repair. The user can override this behavior with the -n, -y, and -Y options.
|
cvfsck - Check and Recover a Xsan Volume
|
cvfsck [options] [VolName] [VolPath]
|
NOTE: If no action flags are specified (-e, -f, -g, -j, -F, -K, -M, -p, -r, -s, -t, -w, -x, -q), then cvfsck runs in a verbose read-only mode equivalent to -nv. -4 If there are files with unconverted or partially converted xattr chains that contain xattrs greater than 4KiB in length, destroy the oversized xattrs so conversion can continue. Use with caution. -A Scan directories for name collisions that would occur on a case-insensitive file system. -a This option can only be used with -f and is used to tell cvfsck to print totals (all). When used, a line is printed after each storage pool showing how many free space fragments exist for that storage pool. In addition, at the end of the run, this option prints the grand total of free space fragments for all storage pools. -B separator Use separator instead of comma (,) for the character used to partition fields when the -x option is specified. -c pathname Provide a specific path to a configuration file that is to be used, overriding the implicit location. This option is used when cvupdatefs invokes cvfsck as a sub-process to ensure that the volume metadata is consistent prior to doing a capacity or stripegroup expansion. -d Internal debug use. This option dumps a significant amount of data to the standard output device. -e Report statistics for extents in each file. This reporting option enables all the same file statistics that the -r flag enables. In addition, the -e flag enables statistic reporting for each extent in a file. All extent data is displayed immediately following the parent file's information. See the -r flag description for file statistics output. The extent stats are output in the following order: Extent#, Stripe group, File relative block, Base block, End block No checking is done. This flag implies -r and -n flags. No tracing is enabled for this report option. -E Erase, i.e. "scrub", on-disk free space. Cvfsck will write zeros over all free space on the disk. 
It works in conjunction with the -P option that reports the last block actually scrubbed in case of a crash during a scrub operation. This is intended for Linux. -f Report free space fragmentation. Each separate chunk of free allocation blocks is tallied based on the chunk's size. After all free chunks are accounted for, a report is displayed showing the counts for each unique sized free space chunk. Free space fragmentation is reported separately for each storage pool. The free space report is sorted from smallest contiguous allocation chunk to largest. The "Pct." column indicates percentage of the storage pool space the given sized chunks make up. The "(sum)" column indicates what percentage of the total storage pool space is taken up by chunks smaller than, and equal to the given size. The "Chunk Size" gives the chunk's size in volume blocks, and the "Chunk Count" column displays how many instances of this sized chunk are located in this storage pool's free space. For more information on fragmentation see the snfsdefrag(1) and sgdefrag(8) pages. No checking is done. Implies -n flag. See also -a that is used to get more output. -F This option causes cvfsck to make use of the compressed cache even when the configured value of bufferCacheSize is less than or equal to 1GB. It also sizes the cache to hold all metadata which can dramatically improve performance for aged file systems having large file counts. This option can cause cvfsck to use a lot of memory, so it is advisable to first obtain an estimate using the -q option. -g Print journal recovery log. With this flag cvfsck reports contents of the metadata journal. For debugging use only. Implies -n flag. -i Print inode summary report. With this flag cvfsck scans the inode list and reports inode statistics information then exits. This includes a breakdown of the count of inode types, hard links, and size of the largest directory. 
This is normally reported as part of the 'Building Inode Index Database' phase anyway but with this flag cvfsck exits after printing the inode summary report and skips the rest of the operations. This allows the inode summary report to run pretty fast. Implies -n flag. -j Execute journal recovery and then exit. Running journal recovery will ensure all operations have been committed to disk, and that the metadata state is up to date. It is recommended that cvfsck is run with the -j flag before any read-only checks or volume reports are run. -J Dump raw journal to a file named jrnraw.dat and then exit. For debugging use only. -K Forces the journal to be cleared and reset. WARNING: Resetting the journal may introduce metadata inconsistency. After the journal reset has been completed, run cvfsck to verify and repair any metadata inconsistency. Use this option with extreme caution. -l This option will log any problems to the system log. This is mainly used on system startup where a file system check may be automatically started by the Xsan File System Services. -L This option forces all orphaned inodes (valid inodes which are not linked in to the directory tree) to be reattached in the lost+found directory. If this option is not present, cvfsck examines the RPL attribute on the inodes and tries to reattach them to the directory that used to hold them. In either usage, it tries to name the inodes using the name in the RPL attribute. If there is no RPL attribute, the inode number is used as a name. If that name already exists, the inode will be reattached using that name followed by a dash and a random number. -M Performs simple checks that attempt to determine whether a new metadata dump is needed. If the checks find that a dump is needed, cvfsck will exit with status 1 and print an explanation. If the checks do not find that a dump is needed, cvfsck will exit with status 0. 
If an error occurs while performing the checks, cvfsck will print an explanation and exit with status 2. This option is useful only on managed file systems. Note: these checks are not exhaustive, and, in some cases, cvfsck will exit with status 0 when a new dump is actually required. -m size This option is used to specify the amount of memory in bytes to be used for the internal cache used to hold inode information. For larger file systems, this can improve the performance of cvfsck. Note that the memory estimate produced using the -q option will be increased by the amount specified with this option. The 'k', 'm', and 'g' extensions are recognized for this option. For example, -m 2g can be used to specify 2GB. -n This option allows a volume to be checked in a read-only mode. Modifications are written to a file in the local volume instead of the Xsan disks. All fixes that would be made if cvfsck was run without the -n option are made to this local file, including journal replay. When the run is complete, the local file is thrown away. The volume itself is never changed. -O If cvfsck is run on a file system while the FSM for that file system is active, cvfsck runs in shared mode. This means that it runs in read-only mode and only a small subset of the usual checking is performed. This is because the FSM changing the file system may confuse a full cvfsck and cause problems. The -O option causes cvfsck to perform full (read-only) checking anyway. Strange behavior may be observed. -p StripeGroupName This option provides a method for deleting all files that have blocks allocated on the given stripe group. All files that have at least one data extent on the given stripe group will be deleted, even if they have extents on other stripe groups as well. WARNING: Use this option with extreme caution. This option could remove files that the user did not intend to remove, and there are no methods to recover files that have been deleted with this option. 
-q This option causes cvfsck to generate an estimate of disk and memory requirements and then exit. Any other options that will get used when performing the actual check should also be specified to improve estimate accuracy. For example, if the intent is to run cvfsck -m2g -F VolName, then to generate the estimate, run cvfsck -q -m2g -F VolName. Note that the base memory requirements will typically be around 600MB, so this should be taken into account when also using the -m option. -Q This option causes cvfsck to print qustat statistics just before exiting. -P Report progress of an Erase operation. This flag enables the writing of a file in /Library/Logs/Xsan/debug of the last block on a given stripe group that has been scrubbed. The files are created on a stripe group by stripe group basis as /Library/Logs/Xsan/data/cvfsck_<VolName>_sg<StripeGroupOrdinal>. This is intended for Linux use. -r This report option shows information on file state. Information for each file is output in the following order: Inode#, Mode, Size, Block count, Extent count, Storage pools, Affinity, Path No tracing is enabled for this report option. -R This option helps repair a file system which had cvmkfs accidentally run on it. First, cvfsck restores file system state which was saved by cvmkfs in /Library/Logs/Xsan/debug/VolName.cvmkfs. Then, it continues as usual to fix any other problems it may encounter. The COW layer treats the restoration of saved state the same as any other file system modification. This option is only useful if the accidental cvmkfs is detected before the file system is mounted and changed. Using it at any other time is not advised. If unsure, please contact customer support. -s StripeGroupName THIS FUNCTIONALITY IS ONLY SUPPORTED ON MANAGED FILE SYSTEMS Provides a method for restoring data on the given storage pool from a stored copy. cvfsck will truncate all files with data on the given storage pool. 
As such, all data blocks on that storage pool will be inaccessible and subsequent access of these files will trigger data retrieval from a stored copy. All data will be lost for files that do not have any stored copies. The cvfsck output will indicate whether or not the ALL_COPIES_MADE flag is set in the SNEA attribute for each truncated file. Files outside of all managed directories that have data on the given storage pool will be deleted from the file system. NOTE: Use this option with extreme caution because it can result in permanent data loss. -T directory This option specifies the directory where all temporary files created by cvfsck will be placed. If this option is omitted all temporary files will be placed in the system's default temporary folder. NOTE: cvfsck does honor the use of TMPDIR/TEMP environment variables. -t This option is used to check the work of the -U option on thin provisioned devices in the given file system. It causes cvfsck to print in sn_dmap(1) -v format, from the file system's perspective, an idea of what should be unmapped and mapped on each thin provisioned device in the given file system. One can then somewhat compare the sn_dmap(1) output with the cvfsck -t output as follows: any "mapped" space indicated by sn_dmap(1) must also be mapped in the cvfsck output. But, unmapped space from sn_dmap(1) may show up as mapped in the cvfsck output since the space has been allocated but not yet written. Going the other way, any mapped space from cvfsck may or may not be mapped from sn_dmap, depending if the allocated space was actually written. Any unmapped space (immediately after running cvfsck -U) must also appear as unmapped from sn_dmap(1). In addition, this option verifies this final condition (checking ummapped space) printing lines where unmapped space is incorrectly still mapped. 
If all is copacetic, the following output appears for each device: checked NNN unmapped entries with 0 errors where NNN is the number of unmapped pieces checked. This currently only works on Linux and is intended as a debugging tool for quality assurance and development. -U This option is for use with thin provisioned devices in the given file system. It causes UNMAP or TRIM operations for all file system free space. Cvfsck will issue the appropriate UNMAP/TRIM device operations for every free chunk in the file system. See also the -t option. This currently only works on Linux and only with Quantum QXS series storage. -v Use verbose reporting methods. -w This option specifies that cvfsck is allowed to make modifications to the file system to correct any problems that are found. -W This option causes cvfsck to always clean up any orphaned "Wopens" inodes that may have been generated when an earlier metadata restore from the metadata archive was performed using an older version of Xsan. Normally, cvfsck will only clean up these inodes if other metadata inconsistencies are detected prior to the orphan inode phase. -x Report statistics. No checking is done. Implies -e, -r and -n flags. All values are in decimal. Data is output in this order: Inode#, Mode, Size, Block Count, Affinity, Path, Extent Count, Extent Number, Storage pool, File Relative Block, Base, End, Depth, Breadth By default, fields are comma separated. However, the separating character can be changed using the -B option. See also the -z option. No tracing is enabled for this report option. -X (Engineering use only.) Free all inodes in extended attribute chains. Extended attributes present in these inodes will be deleted. -y Fix any problems found in the file system without prompting for confirmation. The default behavior is to display the extent of the changes that will be made and prompt for whether or not to make the changes. 
The fixes are first made to a file on the local volume (specified by -T). When all fixes are complete, they are copied into the actual Xsan disks. -Y Same behavior as -y except that the changes are not buffered through the local volume as they are by default. -z Enclose the Path field displayed by -x with double quotes. If the Path itself contains double quotes, replace each of them with two double quote characters. -Z Remove all NT Security Descriptors from the file system. This is useful when ACLs are being abandoned to allow the use of the unixPermBits Security Model. This option should only be used when recommended by Apple support. All StorNext systems must first unmount the file system prior to running cvfsck -Z since it modifies the metadata causing the FSM to prevent currently mounted clients from reconnecting. Running cvfsck -Z can take a long time for large file systems because all inodes have to be scanned for security descriptors. Also, since the metadata is updated by cvfsck -Z, if the metadataArchive parameter is set to true in the file system configuration file, a new metadata archive will be generated when the FSM is restarted. Note that when running cvfsck -Z, the file system must be configured such that the securityModel is NOT acl, and enforceAcls, quotas, and windowsSecurity are all disabled either explicitly or by setting the securityModel to unixpermbits. After running cvfsck -Z, unix permissions on files and directories should be updated if needed. VolName Specifies a volume to check. Otherwise all volumes on this system will be displayed for selection. VolPath Forces the program to use VolPath/data instead of /Library/Logs/Xsan/data to locate the volumes. EXIT VALUES cvfsck will return one of the following condition codes upon exit. 0 - No error, no changes made to the file system 1 - Inconsistencies encountered, changes have been made to the file system - A read-only cvfsck will return 1 if journal replay is needed. 
- A read-only cvfsck will only print the needed fixes and not commit changes to the metadata. 2 - Fatal error, cvfsck run aborted 3 - Name collisions found, no repair needed 4 - Name collisions found, file system successfully repaired NOTES It is strongly recommended that the user not run cvfsck with the -y or -Y options until the extent of any metadata corruption is known. Unless running cvfsck in read-only mode, the file system should be unmounted from all machines before a check is performed. In the event that repairs are required and cvfsck modifies metadata, it will report this at the end of the check. If this occurs, any machines that continue to mount the file system should be rebooted before restarting the file system. In order to ensure minimum run time, cvfsck should be run on an idle FSS server. Extraneous I/O and processor usage will severely impact the performance of cvfsck. CRC checks are now done on all Windows Security descriptors. Windows Security Descriptors with inconsistent CRCs are removed, causing affected files to inherit permissions from the parent folder. Cvfsck limits the number of trace files to 100. It starts removing the oldest trace file if the maximum number of trace files in /Library/Logs/Xsan/data/VolName/trace is exceeded before a new file is created. NOTE: On large file systems cvfsck may require hundreds of megabytes or more of local system disk space for working files. If the output of -x is to be used with Excel, consider the use of the -z option so that lines having pathnames containing commas can be parsed. If the output of -x is to be used with Unix tools such as awk, Perl, or Python, consider using the -B option with a field separator such as '|' or similar that does not appear as a character in a pathname. 
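As the note above suggests, -x output produced with a '|' separator is easy to slice with standard Unix tools. A minimal portable sketch: the two records below are fabricated, not real cvfsck output, and printf stands in for `cvfsck -x -B'|' VolName`:

```shell
# Simulate two records of `cvfsck -x -B'|'` style output (values are
# made up) and extract the first and third fields (Inode#, Size).
printf '100|0644|4096|8|none|/a/file,with,comma\n101|0755|0|0|none|/b/dir\n' \
  | awk -F'|' '{ print $1, $3 }'
```

Because the separator is '|' rather than a comma, the comma embedded in the first pathname does not disturb the field split.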
FILES /Library/Logs/Xsan/data/* /Library/Logs/Xsan/data/VolName/config_history/*.cfg.<TIMESTAMP> /Library/Preferences/Xsan/*.cfg SEE ALSO snfs_config(5) cvmkfile(1), cvupdatefs(8), cvadmin(8), sgdefrag(8), snfsdefrag(1) Xsan File System April 2021 CVFSCK(8)
| null |
kextfind
|
The kextfind utility locates and prints information, or generates reports, about kernel extensions (kexts) matching the search criteria in query from among those in the named directory and extension arguments. If no directories or extensions are specified, kextfind searches /System/Library/Extensions and /Library/Extensions. Searches are performed via kext management logic as used by kextload(8) and kextd(8), by which only kexts directly in the repository directory or kexts explicitly named (and their immediate plugins) are eligible; this is specifically not an exhaustive, recursive filesystem search. Construct your search using any of the query and command predicates listed below. You can combine predicates with the logical operators -and, -or, and -not, and group them with parentheses. Query command predicates generally print some bit of information about a kext, such as its pathname or bundle identifier, followed by either a newline or an ASCII NUL. You can also generate a tab-delimited report using the -report keyword after the query expression; if you do, you must not specify any of the command predicates described below. If no command predicate or report is specified, kextfind implicitly executes a -print command predicate for each kext matching the query.
|
kextfind – find kernel extensions (kexts) based on a variety of criteria and print information
|
kextfind [options] [--] [kext_or_directory ...] [query] [-report [-no-header] report_predicate ...] DEPRECATED The kextfind utility has been deprecated. Please use the kmutil(8) equivalent: kmutil find.
|
-h, -help Print a help message describing each option flag and exit with a success result, regardless of any other options on the command line. -set-arch arch Set the architecture used for such things as architecture-specific properties to arch. You can only perform a query with one such architecture; searches for multiple executable architectures are possible, for example, but you can't search for two architecture-specific values of a single property. -i, -case-insensitive Perform case-insensitive comparisons for all property, match property, and bundle identifier query predicates when values are strings. Has no effect when property values are numbers or booleans. You can also use this option with individual property query predicates. -s, -substring Perform substring searches for all property, match property, and bundle identifier query predicates when values are strings. Has no effect when property values are numbers or booleans. You can also use this option with individual property query predicates. -no-paths Print no paths for kexts, just their bundle names, and for info dictionary and executable files, their paths relative to the kext itself. This can be ambiguous with plugins of the same name and when searching multiple repositories. -relative-paths Print pathnames relative to kexts' repositories (which can be ambiguous if multiple repositories are being searched). -0, -nul Make the -echo and all -print... command predicates except for -print-diagnostics emit an ASCII NUL character (character code 0) in place of any newlines. This is useful when sending the output to xargs(1). You can also use this flag individually with those command predicates. -f kext_or_directory, -search-item kext_or_directory Specifies a kext or directory of kexts to search. May be specified multiple times. While you can normally just list them without an option flag, these are provided to prevent ambiguity with the query expression. 
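The effect of -nul can be seen with a small portable simulation; here printf stands in for `kextfind -print -nul` output (the kext paths are hypothetical), and xargs -0 splits on the NUL bytes so the embedded space in a bundle name survives intact:

```shell
# NUL-delimited pathnames (as kextfind -print -nul would emit) consumed
# safely by xargs -0; the space in "Foo Bar.kext" is preserved.
printf '/tmp/Foo Bar.kext\0/tmp/Baz.kext\0' | xargs -0 -n1 echo
```

With newline-delimited output instead, xargs would split "Foo Bar.kext" into two arguments, which is exactly the problem -nul avoids.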
-e, -system-extensions Adds /System/Library/Extensions and /Library/Extensions to the list of directories to search. If you don't specify any directories or kexts, this is used by default. -- End of options. QUERY PREDICATES Descriptions of all available search criteria and commands follow, grouped by general category. Search by Bundle Name, or Info Dictionary or Match (Personality) Properties Most of these predicates take the -case-insensitive (-i) and -substring (-s) options as described above. -b [-i|-case-insensitive] [-s|-substring] identifier -bundle-id [-i|-case-insensitive] [-s|-substring] identifier True if the kext's bundle identifier matches identifier. This is equivalent to -property CFBundleIdentifier identifier. -dup -duplicate-id True if any other kext has the same bundle identifier as the current kext. -B [-i|-case-insensitive] [-s|-substring] name -bundle-name [-i|-case-insensitive] [-s|-substring] name True if the kext's bundle name matches name. -m [-i|-case-insensitive] [-s|-substring] name value -match-property [-i|-case-insensitive] [-s|-substring] name value True if the kext has at least one personality that contains value as a string, number, or boolean value (expressible as “true”, “yes”, “1” or “false”, “no”, “0”) for the named property. -me name -match-property-exists name True if the kext has at least one personality containing any value for the named property. -p [-i|-case-insensitive] [-s|-substring] name value -property [-i|-case-insensitive] [-s|-substring] name value True if the kext's info dictionary contains value as a string, number, or boolean value (expressible as “true”, “yes”, “1” or “false”, “no”, “0”) for the named property. -pe name -property-exists name True if the kext's info dictionary contains any value for the named property. Search by Loaded/Loadable -a, -authentic True if the kext is owned by root:wheel and has proper permissions. -d, -dependencies-met True if the kext has all its dependencies met. 
-nd, -dependencies-missing True if the kext is missing dependencies (or can't have its dependencies resolved). -na, -inauthentic True if the kext is not owned by root:wheel or has improper permissions (or can't be so authenticated). -nv, -invalid True if the kext is not valid. -l, -loadable True if the kext appears to be loadable. (It may still fail to load due to link errors.) -loaded True if the kext is currently loaded (if its bundle identifier, version, and executable UUID match a kext loaded in the kernel). -nl, -nonloadable True if the kext can't be loaded because it is invalid, inauthentic, or missing dependencies. -v, -valid True if the kext is valid. -w, -warnings True if any warnings are noted while validating the kext. Search by Executable, Architecture, or Symbol -arch arch1[,arch2...] True if the kext contains all of the named CPU architectures (separated by commas only with no spaces), and possibly others, in its executable. -ax arch1[,arch2...], -arch-exact arch1[,arch2...] True if the kext contains all of the named CPU architectures (separated by commas only with no spaces), and no others, in its executable. -dsym symbol, -defines-symbol symbol True if the kext defines the named symbol in any of its architectures. The name must match exactly with the (possibly mangled) symbol in the kext's executable. Such names typically begin with at least one underscore; see nm(1). A kext must also be a library for others to link against it (see -library). -x, -executable True if the kext declares an executable via the CFBundleExecutable property (whether it actually has one or not; that is, if the kext declares one but it's missing, this predicate is true even though the kext is invalid). -nx, -no-executable True if the kext does not declare an executable via the CFBundleExecutable property. -rsym symbol, -references-symbol symbol True if the kext has an undefined reference to the named symbol in any of its architectures. 
The name must match exactly with the (possibly mangled) symbol in the kext's executable. Such names typically begin with at least one underscore; see nm(1). Search by Miscellaneous Attribute -debug True if the kext has a top-level OSBundleEnableKextLogging property set to true, or if any of its personalities has an IOKitDebug property other than zero. (Note: As of Mac OS X 10.6 (Snow Leopard), the property OSBundleDebugLevel is no longer used.) -has-plugins True if the kext contains plugins. -integrity { correct|modified|no-receipt|not-apple|unknown } OBSOLETE. As of Mac OS X 10.6 (Snow Leopard), kext integrity is not used and this predicate always evaluates to false. -kernel-resource True if the kext represents a resource built into the kernel. -lib, -library True if the kext is a library that other kexts can link against. -plugin True if the kext is a plugin of another kext. Search by Startup Requirement These options find kexts that are used at startup or allowed to load during safe boot. They should be combined with the -or operator. (The standard system mkext file contains console, local-root, and root kexts, so you would specify “\( -console -or -local-root -or -root \)”.) -C, -console True if the kext is potentially required for console-mode startup (same as -p OSBundleRequired Console but always case-sensitive). -L, -local-root True if the kext is potentially required for local-root startup (same as -p OSBundleRequired Local-Root but always case-sensitive). -N, -network-root True if the kext is potentially required for network-root startup (same as -p OSBundleRequired Network-Root but always case-sensitive). -R, -root True if the kext is potentially required for root startup (same as -p OSBundleRequired Root but always case-sensitive). -S, -safe-boot True if the kext is potentially allowed to load during safe boot (same as -p OSBundleRequired 'Safe Boot' but always case-sensitive). 
Search by Version -compatible-with-version version True if the kext is a library kext compatible with the given version. -V [ne|gt|ge|lt|le]version[-version] -version [ne|gt|ge|lt|le]version[-version] True if the kext's version matches the version expression. You can either specify an operator before a single version, or a range of versions. Remember that nonfinal versions such as 1.0d21 compare as less than final versions (in this case 1.0); construct your version expression accordingly. See also -library. QUERY COMMAND PREDICATES These predicates print information about kexts that match the query, or run a utility on the kext bundle directory, its info dictionary file, or its executable. Except for -exec, these all have a true result for purposes of query evaluation. The -echo and all -print... command predicates except for -print-diagnostics accept a -nul (-0) option to emit an ASCII NUL character (character code 0) in place of any newlines. This is useful when sending the output to xargs(1). -echo [-n|-no-newline] [-0|-nul] string Prints string followed by a newline. You can specify -n or -no-newline to omit the newline. If you specify both -n and -nul, string is not followed by either a newline or an ASCII NUL character. -exec utility [argument ...] ; True if the program named utility returns a zero value as its exit status. Optional arguments may be passed to the utility. The expression must be terminated by a semicolon (“;”). If you invoke kextfind from a shell you may need to quote the semicolon if the shell would otherwise treat it as a control operator. The strings “{}”, “{info-dictionary}”, and “{executable}”, appearing anywhere in the utility name or the arguments are replaced by the pathname of the current kext, its info dictionary, or its executable, respectively. utility will be executed from the directory from which kextfind was executed. utility and arguments are not subject to the further expansion of shell patterns and constructs. 
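The -exec grammar, including the “{}” substitution and the escaped terminating semicolon, mirrors find(1), so its behavior can be tried with find itself. A runnable sketch (the /tmp/kf-demo path and file name are arbitrary):

```shell
# find(1) shares kextfind's -exec grammar: {} is replaced by the current
# pathname, and the terminating ; must be escaped from the shell.
mkdir -p /tmp/kf-demo && touch /tmp/kf-demo/a.txt
find /tmp/kf-demo -name 'a.txt' -exec echo found: {} \;
```

The same escaping concern applies to kextfind: an unquoted `;` would be consumed by the shell as a command separator before kextfind ever saw it.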
-print [-0|-nul] Prints the pathname of the kext. If no command predicate is specified, the query as a whole becomes equivalent to ( query ) -and -print. -print0 Equivalent to -print -nul, for all you find(1) users out there. -pa [-0|-nul] -print-arches [-0|-nul] Prints the names of all the architectures in the kext executable (if it has one), separated by commas. -print-dependencies [-0|-nul] Prints the pathnames of all direct and indirect dependencies of the kext. -print-dependents [-0|-nul] Prints the pathnames of all direct and indirect dependents of the kext. -pdiag -print-diagnostics Prints validation and authentication failures, missing dependencies, and warnings for the kext. -px [-0|-nul] -print-executable [-0|-nul] Prints the pathname to the kext's executable file. -pid [-0|-nul] -print-info-dictionary [-0|-nul] Prints the pathname to the kext's info dictionary file. (You can use “-exec cat {info-dictionary} \;” or “-exec pl -input {info-dictionary} \;” to print the contents of the file.) -print-integrity [-0|-nul] OBSOLETE. As of Mac OS X 10.6 (Snow Leopard), kext integrity is not used and this command prints “n/a” for “not applicable”. -print-plugins [-0|-nul] Prints the pathnames of all plugins of the kext. -pm [-0|-nul] name -print-match-property [-0|-nul] name For each matching personality in the kext, if the named property exists, prints the personality's name, a colon, then name followed by an equals sign and the property's value. Results in true even if the property does not exist for any personality. -pp [-0|-nul] name -print-property [-0|-nul] name If the top-level property exists, prints name followed by an equals sign and its value. Results in true even if the property does not exist. OPERATORS The query primaries may be combined using the following operators. The operators are listed in order of decreasing precedence. ( expression ) This evaluates to true if the parenthesized expression evaluates to true. 
Note that in many shells parentheses are special characters and must be escaped or quoted. ! expression -not expression This is the unary NOT operator. It evaluates to true if expression is false, to false if expression is true. Note that in many shells “!” is a special character and must be escaped or quoted. expression -and expression expression expression The -and operator is the logical AND operator. It is implied by the juxtaposition of two expressions and therefore need not be specified. It evaluates to true if both expressions are true. If the first expression is false, the second expression is not evaluated. expression -or expression The -or operator is the logical OR operator. It evaluates to true if either expression is true. If the first expression is true, the second expression is not evaluated. REPORTS Use the following predicates in a report expression to generate a tab-delimited format, one kext per line, suitable for further processing (or immediate edification). The report normally starts with a header line labeling each column; you can skip this by following -report directly with -no-header. The report predicate keywords are almost all the same as query predicates, but have different purposes (and arguments in several cases). In general, where a query predicate is looking for a value, a report predicate is retrieving it. Thus, the property predicates only take the name of the property, and print the value of that property for the kext being examined. Report predicates based on attributes with multiple values, such as -print-dependencies, print the number of values rather than the values themselves. Finally, report predicates for yes/no questions print “yes” or “no”. Note that many shorthands for inverted meanings, such as -invalid, are not available for reports (they would only be confusing). 
Others, such as -match-property, could generate multiple values that would be impossible to embed meaningfully in plain tab-delimited text (and knowing how many of them there are is not useful). Value Report Predicates -b, -bundle-id Prints the kext's bundle identifier. -B, -bundle-name Prints the kext's bundle name. -integrity, -print-integrity OBSOLETE. As of Mac OS X 10.6 (Snow Leopard), kext integrity is not used and this command prints “n/a” for “not applicable”. -V, -version Prints the kext's version. -print Prints the kext's pathname. -pa, -print-arches Prints the names of the architectures, if any, in the kext executable. -print-dependencies Prints the number of dependencies found for the kext. -print-dependents Prints the number of kexts found that depend on the kext. -px, -print-executable Prints the pathname of the kext's executable (if it has one). -pid, -print-info-dictionary Prints the pathname of the kext's info dictionary. -print-plugins Prints the number of plugin kexts the kext has. -p name, -property name -pp name, -print-property name Prints the value for the top-level info dictionary property with key name. If the key is not defined, prints “<null>”. -sym symbol, -symbol symbol Prints “references” or “defines” if the kext references or defines symbol. (This is the only report predicate that is not also a query predicate.) Yes/No Report Predicates -arch arch1[,arch2...] “yes” if the kext contains all the named architectures (and possibly others), “no” otherwise. -ax arch1[,arch2...], -arch-exact arch1[,arch2...] “yes” if the kext contains exactly the named architectures (and no others), “no” otherwise. -a, -authentic -debug -d, -dependencies-met -dup, -duplicate-identifier -x, -executable -has-plugins -kernel-resource -lib, -library -l, -loadable -loaded -plugin -w, -warnings -v, -valid
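Because report output is plain tab-delimited text, one kext per line, standard Unix tools can post-process it. A portable simulation (the two rows below are fabricated report lines of bundle id then version, standing in for real `kextfind -report -no-header -b -V` output):

```shell
# Select the version column (field 2) from tab-delimited report rows.
printf 'com.example.driver\t1.2.0\ncom.example.other\t2.0\n' \
  | awk -F'\t' '{ print $2 }'
```

Passing -no-header, as described above, keeps the column-label line out of such a pipeline.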
|
The following examples are shown as given to the shell: kextfind -case-insensitive -not -bundle-id -substring 'com.apple.' -print Print a list of all non-Apple kexts. kextfind \( -nonloadable -or -warnings \) -print -print-diagnostics Print a list of all kexts that aren't loadable or that have any warnings, along with what's wrong with each. kextfind -nonloadable -print-dependents | sort | uniq Print a list of all kexts that can't be loaded because of problems with their dependencies. kextfind -defines-symbol __ZTV14IONetworkStack Print a list of all kexts that define the symbol __ZTV14IONetworkStack. kextfind -relative-paths -arch-exact ppc,i386 Print a list of all kexts that contain only ppc and i386 code. kextfind -debug -print -pp OSBundleDebugLevel -pm IOKitDebug Print a list of all kexts that have debug options set, along with the values of the debug options. kextfind -m IOProviderClass IOMedia -print -exec pl -input {info-dictionary} \; Print a list of all kexts that match on IOMedia, along with their info dictionaries. kextfind -no-paths -nl -report -print -v -a -d Print a report of kexts that can't be loaded, with hints as to the problems. DIAGNOSTICS The kextfind utility exits with a status of 0 on completion (whether or not any kexts are found), or with a nonzero status if an error occurs. SEE ALSO find(1), kmutil(8), kernelmanagerd(8), kextcache(8), kextd(8), kextload(8), kextstat(8), kextunload(8), xargs(1) BUGS Many single-letter options are inconsistent in meaning with (or directly contradictory to) the same letter options in other kext tools. Several special characters used by kextfind are also special characters to many shell programs. In particular, the characters “!”, “(”, and “)”, may have to be escaped from the shell. Darwin November 14, 2012 Darwin
|
diskutil
|
diskutil manipulates the structure of local disks. It provides information about, and allows the administration of, partitioning schemes, layouts, and formats of disks. This includes hard disks, solid state disks, optical discs, disk images, APFS volumes, CoreStorage volumes, and AppleRAID sets. It generally manipulates whole volumes instead of individual files and directories. CAUTION Many diskutil commands, if improperly used, can result in data loss. Most commands do not present confirmation prompts. You should back up your data before using any of these commands. VERBS Each command verb is listed with its description and individual arguments. list [-plist] [internal | external] [physical | virtual] [device] List disks, including internal and external disks, whole disks and partitions, and various kinds of virtual or offline disks. If no argument is given, then all whole disks and their partitions are listed. You can limit the number of disks shown by specifying filtering arguments such as internal above, and/or a device disk. When limiting by a disk, you can specify either a whole disk, e.g. disk0, or any of its slices, e.g. disk0s3, but filtering is only done at the whole disk level (disk0s3 is a synonym for disk0 in this case). If -plist is specified, then a property list will be emitted instead of the normal user-readable output. A script could interpret the results of diskutil list -plist and use diskutil info -plist as well as diskutil listFilesystems -plist for more detailed information. The top-to-bottom appearance of all whole disks is sorted in numerical order by unit (whole disk) number. However, within each whole disk's "sublist" of partitions, the ordering indicates actual on-disk location. The first disk item listed represents the partition which is located most near the beginning of its encompassing whole disk, and so on. 
When viewed this way, the slice (partition) parts of the BSD disk identifiers may, in certain circumstances, not appear in numerical order. This is normal and is likely the result of a recent partition map editing operation in which volumes were kept mounted. Note that both human-readable and plist output are sorted as described above. See the DEVICES section below for the various forms that the device specification may take for this and all of the other diskutil verbs. info | information [-plist] device | -all Get detailed information about a specific whole disk or partition. If -plist is specified, then a property list instead of the normal user-readable output will be emitted. If -all is specified, then all disks (whole disks and their partitions) are processed. activity Continuously display system-wide disk manipulation activity as reported by the Disk Arbitration framework until interrupted with a signal (e.g. by typing Control-C). This can be useful to watch system-wide activity of disks coming on-line or being ejected, volumes on disks being mounted or unmounted, volumes being renamed, etc. However, this output must never be parsed; programs should become Disk Arbitration clients instead. For debugging information, such as the monitoring of applications dissenting (attempting to deny) activities for disks for which they have registered an interest, you must use the logging features of the diskarbitrationd daemon. Programs needing this information must become Disk Arbitration clients. listFilesystems [-plist] Show the file system personalities available for formatting in diskutil when using the erasing and partitioning verbs. This is a subset of the complete set of personalities exported by the various file system bundles that may be installed in the system. Also shown are some shortcut aliases for common personalities. See the FORMAT section below for more details. 
If -plist is specified, then a property list instead of the normal user-readable output will be emitted. unmount | umount [force] device Unmount a single volume. Force will force-unmount the volume (less kind to any open files; see also umount (8)). Up to a few seconds (or more) may be required for any Disk Arbitration dissenters in the system to approve the unmount, and/or for the file system to flush data. This verb gives up and returns failure after a maximum of 1 minute in most situations. unmountDisk | umountDisk [force] device Given a disk containing a partition map, unmount all of its volumes. That is, unmounts are attempted for the map's partitions containing file system volumes, as well as for "virtual" volumes exported by storage systems which import data from the map's partitions. Storage systems supported include APFS, AppleRAID, and CoreStorage. Force will force-unmount the volumes (less kind to any open files; see also umount (8)). You should specify a whole disk, but all volumes of the whole disk are attempted to be unmounted even if you specify a partition. eject device Eject a disk. Media will become offline for the purposes of being a data store for file systems or being a member of constructs such as software RAID or direct data. Additionally, removable media will become eligible for safe manual removal; automatically-removable media will begin its physical (motorized) eject sequence. mount [readOnly] [nobrowse] [-mountOptions option [, option]] [-mountPoint path] device Mount a single volume. If readOnly is specified, then the file system is mounted read-only, even if writing is supported or allowed by the volume's underlying file system, device, media, or user (e.g. the super-user). If nobrowse is specified, then the file system is mounted with a recommendation to prevent display (e.g. by the Finder) to the end user. 
These options are equivalent to passing rdonly or nobrowse as "-o" arguments to the appropriate file system bundle's mount (8) program. If -mountOptions is specified, then the argument strings you specify will be passed (by diskarbitrationd) verbatim to "-o"; multiple arguments must be separated with commas. Up to a few seconds (or much longer in rare cases) may be required for any Disk Arbitration dissenters or disk claimers in the system to approve the mount, and/or for the file system to complete a minimal fsck(8). (For example, Disk Arbitration might invoke fsck_apfs -q before mounting an APFS Volume.) This verb gives up and returns failure after a maximum of 1 minute in most situations. If -mountPoint is specified, then your path, rather than the standard path of /Volumes/VolumeName or /System/Volumes/VolumeName, will be used as the view into the volume file content; a directory at that path must already exist. mountDisk device Mount all mountable and UI-browsable volumes on the given partition map; that is, a mount is attempted on the directly- mountable volume, if any, on each of the whole disk's partitions. However, "virtual" volumes, such as those are implied by e.g. Core Storage Physical Volumes, AppleRAID Members, etc., are not handled. You should specify a whole disk, but all volumes of the whole disk are attempted to be mounted even if you specify a partition. rename | renameVolume device name Rename a volume. Volume names are subject to file system- specific alphabet and length restrictions. enableJournal device Enable journaling on an HFS+ volume. This works whether or not the volume is currently mounted (the volume is temporarily mounted if necessary). Ownership of the affected disk is required. disableJournal [force] device Disable journaling on an HFS+ volume. This normally works whether or not the volume is currently mounted (the volume is temporarily mounted if necessary). 
If the force option is specified, then journaling is disabled directly on disk; in this case, the volume must not be mounted. Ownership of the affected disk is required. moveJournal external journalDevice device Create a 512MB Apple_Journal partition using the journalDevice partition to serve as a journal for the volume device. For best results, journalDevice should be a partition on a different whole-disk than the volume itself. The journal for device will be moved externally onto the newly created Apple_Journal partition. Since the journalDevice you specify will invariably be larger than 512MB, a new HFS+ partition will be created following the Apple_Journal partition to fill the remaining space. Moving the journal works whether or not the volume is mounted, provided journaling is enabled on that volume. No errors are currently supported to flag attempts to move journals on volumes that do not have journaling enabled. If you have multiple volumes for which you want external journals, each must have its own external Apple_Journal partition. Ownership of the affected disks is required. moveJournal internal device Move the journal for device back locally (onto that same device). Ownership of the affected disk is required. enableOwnership device Enable ownership of a volume. The on-root-disk Volume Database at /var/db/volinfo.database is manipulated such that the User and Group ID settings of files, directories, and links (file system objects, or "FSOs") on the target volume are taken into account. This setting for a particular volume is persistent across ejects and injects of that volume as seen by the current OS, even across reboots of that OS, because of the entries in this OS's Volume Database. Note thus that the setting is not kept on the target disk, nor is it in-memory. For some locations of devices (e.g. internal hard disks), consideration of ownership settings on FSOs is the default. For others (e.g. plug-in USB disks), it is not. 
When ownership is disabled, Owner and Group ID settings on FSOs appear to the user and programs as the current user and group instead of their actual on-disk settings, in order to make it easy to use a plug-in disk of which the user has physical possession. When ownership is enabled, the Owner and Group ID settings that exist on the disk are taken into account for determining access, and exact settings are written to the disk as FSOs are created. A common reason for having to enable ownership is when a disk is to contain FSOs whose User and Group ID settings, and thus permissions behavior overall, is critically important, such as when the plug-in disk contains system files to be changed or added to. See also the vsdbutil(8) command. Running as root is required. disableOwnership device Disable ownership of a volume. See enableOwnership above. Running as root is required. verifyVolume device Verify the file system data structures of a volume. The appropriate fsck program is executed and the volume is attempted to be left mounted or unmounted as it was before the command. Any underlying Storage System (e.g. Core Storage, APFS) is verified before the target volume itself. In certain cases, "live" verify, including of the boot volume, is supported. Ownership of the disk to be verified is required. repairVolume device Repair the file system data structures of a volume. The appropriate fsck program is executed and the volume is attempted to be left mounted or unmounted as it was before the command. Any underlying Storage System (e.g. Core Storage, APFS) is repaired before the given target volume. In most cases (e.g. except mount-read-only), the target volume must be unmountable; in all cases, the underlying storage media must be writable. "Live" repair (e.g. of a file-writable mounted volume) is not supported. Ownership of the affected disk is required. verifyDisk device Verify the partition map layout of a whole disk intended for booting or data use on a Macintosh. 
The checks further include, but are not limited to, the integrity of the EFI System Partition, the integrity of any Core Storage Physical Volume partitions, and provisioning of space for boot loaders. Ownership of the disk to be verified is required; it must be a whole disk and must have a partition map. repairDisk device Repair the partition map layout of a whole disk intended for booting or data use on a Macintosh. The repairs further include, but are not limited to, the repair or creation of an EFI System Partition, the integrity of any Core Storage Physical Volume partitions, and the provisioning of space for boot loaders. Ownership of the affected disk is required; it must be a whole disk and must have a partition map. resetFusion For Fusion Drive machines (two internal disk device hardware configurations), reset the disk devices in the machine to the factory-like state of one empty Fusion volume. This command will only run on a machine that contains exactly two internal disk devices: one solid-state device (SSD) and one rotational device (HDD), or, alternatively, two solid-state devices. This command must be able to make a positive identification thereof. If these requirements are met, you are prompted, and if you confirm, the erase and reset begins. Both internal disk devices are (re)-partitioned with GPT maps, and then they are turned into (used to create) an APFS Fusion Drive Container with one APFS Volume. All internal-disk data is lost. This includes any "extra" partitions (e.g. for Boot Camp or other "user" purposes). No system software is installed and no user data is restored. After running this command, you should (re)-install macOS on the machine (on the newly-created APFS Volume); otherwise, the machine will not be usable (bootable). You generally must be booted from the Internet Recovery System (CMD-OPT-R) or from an externally-connected macOS boot disk (e.g. a USB drive), because you cannot erase a disk that hosts the currently-running macOS.
Externally-connected disk(s) are not affected. Ownership of the affected disks is required. eraseDisk [-noEFI] format name [APM[Format] | MBR[Format] | GPT[Format]] device Erase an existing disk, removing all volumes and writing out a new partitioning scheme containing one new empty file system volume. If the partitioning scheme is not specified, then an appropriate one for the current machine is chosen. Format is discussed below in the section for the partitionDisk verb. Ownership of the affected disk is required. If -noEFI is specified, do not create an EFI partition on the target disk. eraseVolume format name device Write out a new empty file system volume (erasing any current file system volume) on an existing partition. The partition remains but its data is lost. Format is discussed below in the section for the partitionDisk verb. If you specify Free Space for format, the partition itself is deleted (removed entirely) from the partition map instead of merely being erased. Ownership of the affected disk is required. reformat device Erase an existing volume by writing out a new empty file system of the same personality (type) and with the same volume name. Ownership of the affected disk is required. eraseOptical [quick] device Erase optical media (CD/RW, DVD/RW, etc.). Quick specifies whether the disc recording system software should do a full erase or a quick erase. Ownership of the affected disk is required. zeroDisk [force] [short] device Erase a device, writing zeros to the media. The device can be a whole-disk or a partition. In either case, in order to be useful again, zeroed whole-disks will need to be (re)partitioned, or zeroed partitions will need to be (re)formatted with a file system, e.g. by using the partitionDisk, eraseDisk, or eraseVolume verbs. If you desire a more sophisticated erase algorithm or if you need to erase only free space not in use for files, use the secureErase verb.
The force parameter causes best-effort, non-error-terminating, forced unmounts and shared-mode writes to be attempted; however, this is still no guarantee against drivers which claim the disk exclusively. In such cases, you may have to first unmount all overlying logical volumes (e.g. CoreStorage or AppleRAID). If a disk is partially damaged in just a certain unlucky way, you might even have to un-install a kext or erase the disk elsewhere. The short parameter causes only a minimal amount of zeros to be written ("wipefs"); this is quick. You can use this to prevent inadvertent identification by software, e.g. as filesystem data. Ownership of the affected disk is required. randomDisk [times] device Erase a whole disk, writing random data to the media. Times is the optional (defaults to 1) number of times to write random information. The device can be a whole-disk or a partition. In either case, in order to be useful again, randomized whole-disks will need to be (re)partitioned, or randomized partitions will need to be (re)formatted with a file system, e.g. by using the partitionDisk, eraseDisk, or eraseVolume verbs. If you desire a more sophisticated erase algorithm or if you need to erase only free space not in use for files, use the secureErase verb. Ownership of the affected disk is required. secureErase [freespace] level device Erase, using a "secure" (but see the NOTE below) method, either a whole-disk (including all of its partitions if partitioned), or, only the free space (not in use for files) on a currently-mounted volume. Secure erasing makes it harder to recover data using "file recovery" software. Erasing a whole-disk will leave it useless until it is partitioned again. Erasing freespace on a volume will leave your files intact, indeed, from an end-user perspective, it will appear unchanged, with the exception that it will have attempted to make it impossible to recover deleted files. 
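A sketch of the erase verbs above, again with hypothetical device identifiers:

```shell
# Repartition disk3 with a GPT map and a single empty JHFS+ volume
diskutil eraseDisk JHFS+ Backup GPT disk3

# Erase only the volume on partition disk3s2; the partition remains
diskutil eraseVolume JHFS+ Scratch disk3s2

# Zero the whole of disk4, forcing unmounts if necessary
diskutil zeroDisk force disk4

# Overwrite disk4 twice with random data
diskutil randomDisk 2 disk4
```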
If you need to erase all contents of a partition but not its hosting whole-disk, use the zeroDisk or randomDisk verbs. Ownership of the affected disk is required. Level should be one of the following: • 0 - Single-pass zero fill erase. • 1 - Single-pass random fill erase. • 2 - Seven-pass erase, consisting of zero fills and all-ones fills plus a final random fill. • 3 - Gutmann algorithm 35-pass erase. • 4 - Three-pass erase, consisting of two random fills plus a final zero fill. NOTE: This kind of secure erase is no longer considered safe. Modern devices have wear-leveling, block-sparing, and possibly-persistent cache hardware, which cannot be completely erased by these commands. The modern solution for quickly and securely erasing your data is encryption. Strongly-encrypted data can be instantly "erased" by destroying (or losing) the key (password), because this renders your data irretrievable in practical terms. Consider using APFS encryption (FileVault). partitionDisk device [-noEFI] [numberOfPartitions] [APM[Format] | MBR[Format] | GPT[Format]] [part1Format part1Name part1Size part2Format part2Name part2Size part3Format part3Name part3Size ...] (re)Partition a disk, removing all volumes. All volumes on this disk will be destroyed. The device parameter specifies which whole disk is to be partitioned. The optional numberOfPartitions parameter specifies the number of partitions to create; if given, then the number of parameter triplets (see below) is expected to match; otherwise, the number of triplets given determines the number of partitions created. If -noEFI is specified, do not create an EFI partition on the target disk. The optional partitioning scheme parameter forces a particular partitioning scheme; if not specified, a suitable default is chosen. They are: • APM[Format] specifies that an Apple Partition Map scheme should be used.
This is the traditional Apple partitioning scheme used to start up a PowerPC-based Macintosh computer, to use the disk as a non-startup disk with any Mac, or to create a multiplatform-compatible startup disk. • MBR[Format] specifies that a Master Boot Record scheme should be used. This is the DOS/Windows-compatible partitioning scheme. • GPT[Format] specifies that a GUID Partitioning Table scheme should be used. This is the partitioning scheme used to start up an Intel-based Macintosh computer. For each partition, a triplet of the desired file system format, volume name, and size must be specified. Several other diskutil verbs allow these triplets as well (and for them, the numberOfPartitions parameter is also optional). The triplets must be as follows: • Format names are of the form jhfs+, HFS+, MS-DOS, etc.; a list of formattable file systems (more precisely, specific file system personalities exported by the installed file system bundles) and common aliases is available from the listFilesystems verb. Format guides diskutil both in what partition type to set for the partitions (slices) as well as what file system structures to initialize therein, using the file system bundle's plist's FormatExecutable setting which usually points to the appropriate formatter program such as newfs_hfs(8). You can specify a format of Free Space to skip an area of the disk. You can specify the partition type manually and directly with a format of %<human-readable partition type>% such as %Apple_HFS% or %<GPT partition type UUID constant>% such as %48465300-0000-11AA-AA11-00306543ECAC%; these imply a name of %noformat% (below). Human-readable types must be known to the system but UUID types (GPT scheme only) can be arbitrary. • Names are the initial volume names; they must conform to file system specific restrictions.
If a name of %noformat% is specified, then the partition is left blank such that the partition space is carved out, the partition type is set according to the file system format name or explicit type, the partition space is partially erased ("wiped"), but a file system structure is not initialized with any file system's formatter program, e.g. newfs_hfs(8). This is useful for setting up partitions that will contain user-defined (not necessarily file system) data. For a triplet whose format is Free Space or a directly-specified partition type, its name is ignored but a dummy name must nevertheless be present. • Sizes are floating point numbers followed by a letter or percent sign as described in the SIZES section at the end of this page (e.g. 165536000, 55.3T, 678M, 75%, R). In addition to explicitly-requested partitions, space (gaps) might be allocated to satisfy certain filesystems' position and length alignment requirements; space might be allocated for possible future booter partition insertion; and indeed, actual booter partitions might be implicitly created. In particular, there is a rule that unrecognized partitions 1GiB or larger automatically acquire booters. Thus, if you create an arbitrary partition with e.g. diskutil partitionDisk disk0 gpt %11112222-1111-2222-1111-111122221111% %noformat% 3gib jhfs+ Untitled r, then a booter partition will also be created. You can always delete that booter with diskutil eraseVolume "Free Space" dummy disk0s3. The last partition is usually automatically lengthened to the end of the partition map (disk). You can specify an exact size for your last partition by specifying it as the penultimate triplet and specifying an additional (last) triplet as Free Space. Or you can use the R (remainder) size specifier for one of your middle partitions while specifying an exact size for your last partition. Ownership of the affected disk is required. 
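For example, a hypothetical three-way split of disk3 using the triplets described above (FAT volume names are conventionally upper-case):

```shell
# Partition disk3 (hypothetical) under a GPT map into three volumes:
# 100 GB JHFS+, 16 GB FAT32, and JHFS+ filling the remainder (R)
diskutil partitionDisk disk3 3 GPT JHFS+ Primary 100g MS-DOS TRANSFER 16g JHFS+ Spare R
```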
resizeVolume device limits | mapsize [-plist] | R | size [numberOfPartitions] [part1Format part1Name part1Size part2Format part2Name part2Size part3Format part3Name part3Size ...] Non-destructively resize a volume (partition); you may increase or decrease its size. Alternatively, take no action and print information. Specifying limits instead of size takes no action, but instead prints the range of valid values for the target partition, taking into account current file system and partition map conditions such as files in use and other (immovable) partitions following the target. Specifying mapsize instead of size takes no action, but instead prints the size of the encompassing whole-disk device, as well as the size of the entire partition map (all partitions less map overhead). The whole-disk device might be larger than the partition map if the whole-disk device has grown since the partition map was created. Growing a whole-disk device is possible with certain enterprise disk (RAID) systems. The -plist option will print partition or whole-disk size inquiry information in property list format. You can grow a volume (partition) (back) to its maximum size possible, provided no new partitions have been created that are in the way, by specifying R for the new volume size. You should use R instead of attempting an absolute value such as 100% because the latter cannot count partition map overhead. When decreasing the size, new partitions may optionally be created to fill the newly-freed space. To do this, specify the numberOfPartitions, format, name, and size parameters in the same manner as the triplet description for the partitionDisk verb. Resizing a volume that is currently set as the computer's startup disk will invalidate that setting; use the Startup Disk System Preferences panel or bless(8) to reset the resized volume as the startup disk. Device refers to a volume; the volume's file system must be journaled HFS+.
Valid sizes are a number followed by a capital letter multiplier or percent sign suffix as described in the SIZES section at the end of this page (e.g. 1.5T, 128M, 50%). Ownership of the affected disk is required. splitPartition device [numberOfPartitions] [part1Format part1Name part1Size part2Format part2Name part2Size part3Format part3Name part3Size ...] Destructively split a volume into multiple partitions. You must supply a list of new partitions to create in the space of the old partition; specify these with the numberOfPartitions, format, name, and size parameters in the same manner as the triplet description for the partitionDisk verb. For one of your triplets, you can optionally specify the R meta-size in lieu of a constant number value for the size parameter: the substituted value will be exactly the amount of space necessary to complete the re-filling of the original partition with all of your triplets. Device refers to a volume. Ownership of the affected disk is required. mergePartitions [force] format name fromDevice toDevice Merge two or more partitions on a disk. All data on merged partitions other than the first will be lost. Data on the first partition will be lost as well if the force argument is given. If force is not given, and the first partition has a resizable file system (e.g. JHFS+), the file system will be preserved and grown in a data-preserving manner; your format and name parameters are ignored in this case. If force is not given, and the first partition is not resizable, you are prompted if you want to format. You will also be prompted to format if the first partition has an (HFS) Allocation Block Size which is too small to support the required growth of the first partition; see the -b option for newfs_hfs(8). If force is given, the final resulting partition is always (re)formatted. You should do this if you wish to (re)format to a new file system type. You will be prompted to confirm.
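Sketches of the resizing and merging verbs, with hypothetical device identifiers:

```shell
# Print the valid size range for resizing the volume on disk0s2
diskutil resizeVolume disk0s2 limits

# Shrink disk0s2 to 80 GB, filling the freed space with a new JHFS+ volume
diskutil resizeVolume disk0s2 80g JHFS+ Extra R

# Merge disk0s3 through disk0s4; without force, the first partition's
# file system is preserved and grown (format and name are then ignored)
diskutil mergePartitions JHFS+ NotUsed disk0s3 disk0s4
```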
Format and name must always be given, but they have an effect only when force is given. Merged partitions are required to be ordered sequentially on disk (see diskutil list for the actual on-disk ordering). All partitions in the range, except for the first one, must be unmountable. Ownership of the affected disk is required. addPartition device format name size Create a new partition following an existing partition. The new partition will start immediately beyond the end (start + size) of the existing partition. If device is a partition, then a new partition will be created in the gap that follows it, formatted with the file system personality format, with an initial volume name of name, extending for size, in the same manner as the triplet description for the partitionDisk verb. If device is a (partition map-bearing) whole disk, then the new partition will automatically be placed last in the map. Alternatively, you can create a new partition without any formatting by providing the partition type manually. To do so, pass a format parameter in the form of % followed by a raw GPT UUID or valid human-readable ioContent string followed by %, together with %noformat% for name. In this usage, any old on-disk data at the location of the new partition will be "wiped" (partially set to zeroes) to avoid any undesired interpretation. You can request fit-to-fill by specifying a size of 0. The partition map scheme must be GPT. A gap must exist at the target location, which will generally not be the case unless you have resized or deleted partitions. The partition map must contain at least one entry (the EFI partition suffices). Ownership of the affected disk is required. APFS | ap apfsVerb [...] Apple APFS is a system of virtual volumes. APFS verbs can be used to create, manipulate and destroy APFS Containers and their APFS Volumes. 
Apple APFS defines these types of objects: • Container - An APFS Container imports one or more APFS Physical Store disks and exports zero or more APFS Volume disks. Zero or more APFS Containers can exist in (might be attached to) the system at any one time. While attached, the "handle" by which an APFS Container is identified is its APFS Container Reference disk (device), e.g. "disk5" or "/dev/disk5". You should treat this as an opaque reference token. The Container Reference disk is a synthesized whole-disk which is exported by APFS for identification purposes only; it has no storage. It is associated with the AppleAPFSContainerScheme node in the IO Registry. While APFS Volume device identifiers appear to be of a related form, you should never use the Container Reference as a basis to create device identifiers yourself; use the listing verbs with their plist options instead. An APFS Container has a certain fixed size (capacity) which, via its Physical Store(s), uses physical space on a device. An APFS Container can be resized, but this is not a part of normal operation. • Physical Store - An APFS Physical Store is a disk which is imported into (that is, which backs, indeed defines) an APFS Container. An APFS Container can import more than one Physical Store, e.g. for Fusion-style Containers. An APFS Physical Store disk is not necessarily a disk from a partition map; it could be e.g. an AppleRAID Set disk. Therefore, you must never assume that an APFS Physical Store's disk identifier is a 2-part form such as disk0s2. • Volume - An APFS Volume is an [un]mountable file system volume which is exported from an APFS Container. Zero or more APFS Volumes may be exported out of an APFS Container. An APFS Volume is identified by its device node, e.g. "disk5s1" or "/dev/disk5s1". The term volumeDevice is used below to refer to this device node. APFS Volumes have no specified "size" (capacity).
Instead, all APFS Volumes consume capacity out of the remaining free space of their parent APFS Container, consuming or returning such capacity as user file data is added or deleted. Note that this means that all Volumes within a Container compete for the Container's remaining capacity. However, you can manage Volume allocation with the optional reserve and quota size values. The optional reserve size requests an assured minimum capacity for an APFS Volume. If successfully created, the Volume is guaranteed to be able to store at least this many bytes of user file data. Note that beyond this, the Volume might be able to store even more until constrained by reaching zero free space in its parent Container or by reaching a quota, if any. You can use a reserve to prevent running out of capacity due to competition from other Volumes or from a Container shrink attempt. The optional quota size applies a maximum capacity to an APFS Volume, placing a limit on the number of bytes of user file data which can be stored on the Volume. Note that you might not be able to reach this limit if its parent Container becomes full first. You can use a quota to enforce accounting or to manage against "unfair" premature filling-up of the parent Container due solely to this Volume at the expense of sibling Volumes. APFS Volumes can be tagged with zero or more role metadata flags which give a hint as to their intended use. Not all combinations of flags are valid, and not all flags are allowed to be set or changed by a user. Efficient file copy cloning (copy-on-write) is supported; see copyfile(3) COPYFILE_CLONE. Optional volume-level encryption is supported (see also Volume Groups below). An APFS Volume can be in an encrypted state because it was converted from a Core Storage encrypted volume, or because it was created as encrypted from its inception (e.g. with the diskutil apfs addVolume -passphrase verb) or because FileVault was enabled on it at some later time. 
On machines that support hardware encryption, the on-disk-device data for local volumes is encrypted even if FileVault is not enabled; this is termed "encrypted at rest". The format of an APFS Volume's device identifier (volumeDevice) is that of a slice disk of a special whole-disk; both disks are synthesized by APFS. The "whole" identifier number (a positive possibly-multi-digit integer) is arbitrary, and the "slice" numbers (positive possibly-multi-digit integers) count up from 1 with each new Volume. Deleting Volumes may cause gaps in the numbering. This form appears the same as a partition (map) scheme and partitions, but it is completely unrelated. For example: If "disk3s2" is a Physical Store defining a Container, then "disk5s1", "disk5s2", and "disk5s3" might be the Container's Volumes; "disk5" exists but is never used directly. Although it has a device node, an APFS Volume's data may only be accessed through its files; 3rd-party code cannot open an APFS Volume device node to "directly" access its on-disk bytes. • Snapshot - An APFS Snapshot represents a read-only copy of its parent (or "base") APFS Volume, frozen at the moment of its creation. An APFS Volume can have zero or more associated APFS Snapshots. APFS Snapshots are normally not discoverable unless the "base" or one of the snapshots is mounted. APFS Snapshots are uniquely identified with a UUID (preferred) or within their parent Volume's namespace by either a numeric identifier or by their name; they can be renamed, but APFS will never allow duplication of names (within a Volume) to occur. APFS Snapshots are mountable; when a Snapshot is mounted, its mount point (separate from and simultaneous with its parent Volume) provides a read-only historic version of the Volume content at Snapshot creation time. You can revert the present state of an APFS Volume back to equality with a Snapshot in its history. This is a destructive reset/restore operation: Once a Volume is reverted, it cannot be brought forward.
Any Snapshots between the revert point and the present are lost as well. You can delete a Snapshot; this removes the possibility of ever reverting to that Snapshot's state, but does not affect the Volume's present-time content. An APFS Snapshot mount point's "source device" (its statfs(2) f_mntfromname shown by the mount(8) command) is not necessarily a device node (e.g. disk0s2) as is common; it can be the Snapshot name followed by the '@' character and the "parent" Volume's device node, e.g. "SnapName123@/dev/disk2s1". See the mount_apfs(8) -s and fs_snapshot_create(2) commands. However, it is also possible for f_mntfromname to have a 3-part form ("diskCsVsS") if you are rooted (booted) from an APFS Snapshot; in this case, its "base" Volume (e.g. "diskCsV") will not be mounted. • Volume Group - Collections of APFS Volumes can be associated with each other via an APFS Volume Group. Zero or more APFS Volume Groups may exist on any given APFS Container. The "members" (APFS Volumes) of any particular APFS Volume Group must all be on the same APFS Container. There is no such thing as an "empty" (zero-member) APFS Volume Group. APFS Volume Groups are identified using their Volume Group ID (a UUID). Assignment of this ID may be deferred in some cases. A primary use for APFS Volume Groups is realization of macOS installations in which "System"-role (for the operating system) and "Data"-role (for user data) APFS Volumes are functionally linked (overlaid file namespace, crypto info), yet separated for reasons of security, backup, and software update. Cryptographic identity, if any, is shared among all members of an APFS Volume Group. APFS itself has no provision for backing up your data. Backups should always be performed on a regular basis and before modifying any APFS Container using these commands. The following is a list of APFS sub-verbs with their descriptions and individual arguments. list [-plist] [containerReferenceDevice] Display APFS objects as a tree.
APFS Container(s) are shown with their imported Physical Store(s) and exported Volume(s). All currently-attached APFS Containers in the system are listed unless you specify a containerReferenceDevice, which limits the output to that specific APFS Container family. If -plist is specified, then a property list will be emitted instead of the normal user-readable output. convert device [-dryrun] [-prebootSource yourStagingDirectory] [-noPrebootAdditions] Non-destructively convert an HFS volume to an APFS Container with a single (but see below) APFS Volume. The APFS Container can later be manipulated (e.g. adding and deleting APFS Volumes) as usual. This verb can be used to convert nonbootable "data"-only volumes as well as "macOS" volumes (see below). The source HFS volume can be located on a GPT partition or on an encrypted or non-encrypted, Fusion or non-Fusion CoreStorage logical volume (LV). In the latter case, the CoreStorage logical volume group (LVG) is dismantled, including automatic removal of any related Boot Camp Assistant partition(s). If -dryrun is specified, all calculations, checks, and some data moving is performed, but your disk is left as valid HFS (headers remain declared as HFS, partition types are left alone, etc). For volumes currently or planned to be macOS-bearing (and bootable), you can optionally specify -prebootSource with your own staging directory of macOS boot items; a Preboot Role APFS Volume with a UUID directory will automatically be created as part of the conversion process to facilitate macOS bootstrap. Normally your directory should be writable; additional (cryptographic and EFI rendering) items are automatically added to your directory prior to conversion and are not removed afterwards. You can opt-out of automatic item addition with the -noPrebootAdditions option. Ownership of the affected disks is required.
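For example (disk0s2 is a hypothetical HFS+ volume):

```shell
# Rehearse the conversion: calculations and checks run, but the
# volume is left as valid HFS+
diskutil apfs convert disk0s2 -dryrun

# Perform the actual in-place conversion to APFS
diskutil apfs convert disk0s2
```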
create device [device] name Convenience verb which creates an empty APFS Container and then adds one APFS Volume with the given name. The APFS Volume will have default attributes such as no encryption, no capacity reserve nor quota, etc. If you specify more than one device, a Fusion Container is created, with the performance parts assigned automatically. This is a combination of the diskutil apfs createContainer and diskutil apfs addVolume verbs. Ownership of the affected disks is required. createContainer [-main] device [-secondary] [device] Create an empty APFS Container. The device(s) specified become APFS Physical Stores. If you specify more than one device, a Fusion Container is created. For Fusion cases, if you do not explicitly use the -main and -secondary options, the performance duties are assigned automatically; this is preferred. Rotational vs. solid-state hardware design must be detectable; this is often not the case for external disks. Solid-state hardware is welcome but not required; it is the identification which holds as a hard requirement with this usage. Alternatively, you can explicitly specify -main and -secondary devices; if you do so, you must specify both. The "main" device is assumed to be "faster" (you should use solid-state hardware if available), while the "secondary" device is assumed to be "slower" and is often used to store OS-associated "auxiliary" data such as a Boot Camp Assistant partition. You cannot mix the use of disks from a disk image and not from a disk image. After running this command, you may add APFS Volumes with the diskutil apfs addVolume verb; you must do this at least once in order to "use" the new Container. Ownership of the affected disks is required. deleteContainer [-force] containerReferenceDevice | physicalStoreDevice [newName] [newFormat newName newSize] Destroy an existing APFS Container, including all of its APFS Volumes. Data on all of those volumes will be lost. 
You can identify the APFS Container by its Container Reference disk (preferred), or by one of its Physical Store disk(s). The APFS Volumes are unmounted first; this process may not succeed if one or more is busy, in which case deleteContainer is aborted and all data is left intact (although some volumes might now be unmounted). Otherwise, all APFS Volumes are deleted, their encryption-store entries are removed as applicable, the parent APFS Container is deleted, and the APFS Container's former Physical Store(s) are disposed of as follows: If you did not specify a newName and all Physical Stores are partitions, then those partitions are deleted (turned into free space). You might then wish to use diskutil addPartition to re-purpose the newly-created gap in your partition map. If you did specify a newName, or if one or more Physical Stores are whole disks (e.g. AppleRAID), then they are reformatted (as something other than APFS) with volume name(s) based on newName. If you specified the triplet of newFormat newName newSize in the same manner as when using the partitionDisk verb, then they are each reformatted with the specified format and volume names based on newName. Only a newSize of 0 (fit-to-fill) is currently supported. If your APFS Container is damaged, a Container Reference for it might not exist or it might not be functional. In this case, you can reclaim your former APFS Physical Store disk(s) by specifying the -force option; this activates an alternate last-resort mode. In this mode, if you had more than one Physical Store (e.g. the Fusion case) and the Container is sufficiently damaged, you might have to delete each Physical Store manually. You should normally avoid this mode. Ownership of the affected disks is required. resizeContainer containerReferenceDevice | physicalStoreDevice limits [-plist] | size [part1Format part1Name part1Size part2Format part2Name part2Size part3Format part3Name part3Size ...] 
Resize an existing APFS Container by specifying either an APFS Container Reference (preferred) or an APFS Physical Store partition, plus a proposed new size. Alternatively, take no action and print constraint information. The operation is live, non-destructive, and does not mount or unmount any APFS Volumes. If you specify an APFS Container Reference and that Container imports more than one Physical Store (in e.g. Fusion setups), the appropriate Physical Store will be chosen automatically. Specifying limits instead of a size causes no action to be taken, but instead prints a range of valid values, taking into account various constraints; the -plist option will print this information in property list format. Shrinks are constrained by the amount of data usage by all APFS Volumes on the targeted or implied APFS Container. Contributing to this data usage is the file content on the APFS Volumes, the existence of quotas and/or reserves, the usage of APFS Snapshots (e.g. by Time Machine), and metadata overhead. Grows are constrained by the amount of partition map free space trailing the targeted or implied Physical Store partition. When shrinking, new partitions may optionally be created to fill the newly-freed space. To do this, specify the format, name, and size parameters in the same manner as the triplet description for the partitionDisk verb. You can specify a size of zero (0) to grow the targeted APFS Physical Store such that all remaining space is filled to the next partition or the end of the partition map. Ownership of the affected disks is required, and all APFS Volumes on the Container must be unlocked. addVolume containerReferenceDevice filesystem name [-passprompt] | [-passphrase passphrase] | [-stdinpassphrase] [-passphraseHint passphraseHint] [-reserve reserve] [-quota quota] [-role roles] [-group[With] | -sibling groupDevice] [-nomount] [-mountpoint mountpoint] Add a new APFS Volume to an existing APFS Container. 
Files can then be stored on this newly-created APFS Volume. The filesystem parameter sets the permanent APFS personality for this new APFS Volume; you should specify APFS or Case-sensitive APFS. The new APFS Volume will be unencrypted unless you specify one of the passphrase options, in which case the volume will be encrypted from the beginning of its existence (as opposed to having encryption applied later); the user which is added will be the "Disk User". The optional passphraseHint is a user-defined string that can be displayed even while an encrypted APFS Volume is locked. APFS Volumes have no fixed size; they allocate backing store on an as-needed basis. You can specify the reserve parameter to guarantee a minimum amount of space for your volume; at least that many bytes will be available for file data. You can also specify the quota parameter to limit your volume's file usage to a maximum amount; no more than that many bytes will be available for file data, even if there is otherwise enough space in the parent APFS Container. You can specify both reserve and quota simultaneously; however, the reserve is not allowed to be larger than the quota. APFS Volumes can be tagged with certain role meta-data flags. You can supply the roles parameter with any combination of one or more of the meta-data flags from the APFS VOLUME ROLES section below, or 0 as a no-op for scripting convenience. Note that you may be limited to only one role at a time, and other restrictions may apply. If you specify -groupWith, your new APFS Volume will become a member of the same APFS Volume Group as the APFS Volume groupDevice. If groupDevice is not yet associated with any group, one will be created automatically when appropriate. 
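To make the reserve/quota rule concrete, here is a minimal sketch (Python; the helper name and wrapper approach are illustrative, not part of diskutil) of a script assembling an addVolume invocation while enforcing that the reserve may not be larger than the quota:

```python
# Hypothetical helper: build a `diskutil apfs addVolume` argument vector,
# enforcing the documented rule that reserve must not exceed quota.
def add_volume_args(container, filesystem, name, reserve=None, quota=None):
    if reserve is not None and quota is not None and reserve > quota:
        raise ValueError("reserve is not allowed to be larger than the quota")
    args = ["diskutil", "apfs", "addVolume", container, filesystem, name]
    if reserve is not None:
        args += ["-reserve", str(reserve)]   # minimum guaranteed bytes
    if quota is not None:
        args += ["-quota", str(quota)]       # maximum allowed bytes
    return args

# Illustrative values; on macOS one would then run subprocess.run(args, check=True).
args = add_volume_args("disk1", "APFS", "Scratch",
                       reserve=1_000_000_000, quota=5_000_000_000)
```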
The new APFS Volume is explicitly mounted after creation; you can specify -nomount to leave it unmounted, or you can supply a "custom" mountpoint path, in which case you must be root, the directory must already exist, and you must delete the directory yourself when you unmount. Ownership of the affected disks is required. deleteVolume volumeDevice Remove the given APFS Volume from its APFS Container. All of the Volume's data will be lost. Additionally, a best-effort (error ignored) attempt is made to remove any corresponding XART, Preboot, and Recovery entries. Ownership of the affected disks is required. deleteVolumeGroup volumeGroupUUID Remove all APFS Volumes belonging to the given APFS Volume Group from its APFS Container. All of the Volumes' data will be lost. Additionally, a best-effort (error ignored) attempt is made to remove any corresponding XART, Preboot, and Recovery entries for each Volume. It is then positively verified that the Volume Group no longer exists. Removal will not start unless all Volumes in the Group can first be successfully unmounted. Ownership of the parent APFS Container is required. eraseVolume volumeDevice -name newName [-passprompt] | [-passphrase passphrase] | [-stdinpassphrase] [-passphraseHint passphraseHint] [-role roles] [-group[With] | -sibling groupDevice] Erase the contents of an existing APFS Volume; all of its data will be lost. Unlike diskutil apfs deleteVolume, the APFS Volume is not removed from its APFS Container. The "new" APFS Volume will inherit the APFS file system type (Case-sensitive or not) but will not inherit attributes such as name, reserve, quota, role, or encryption status. The "new" APFS Volume will be unencrypted, unless you supply passphrase options in the same manner as diskutil apfs addVolume, in which case it will be encrypted and initially accessible by the "Disk User". The -role and -groupWith options function in the same manner as diskutil apfs addVolume. 
If you need more control, you should delete and (re-)add the Volume instead. Ownership of the affected disks is required. changeVolumeRole | chrole volumeDevice roles Change the role metadata flags of an existing APFS Volume. The roles should be any combination of one or more of the role meta-data flags from APFS VOLUME ROLES section below. Unspecified flags are left alone, use of lower-case causes flags to be cleared, and use of upper-case causes flags to be set. Alternatively, clear will remove all flags, or 0 can be used as a no-op for scripting convenience. You should not make any assumptions about the usage or legal combinations of role flags. Ownership of the affected disks is required. unlockVolume | unlock volumeDevice [-user disk | -user cryptoUserUUID | -recoverykeychain file] [-passphrase passphrase] | [-stdinpassphrase] [-nomount | -mountpoint mountpoint] [-systemreadwrite] [-verify] [-plist] Unlock and mount an encrypted and locked APFS Volume or verify a passphrase. If you do not supply the -user option, then all cryptographic users on that APFS Volume are searched for a match; if you supply -user disk then the Disk UUID (which equals the APFS Volume UUID) user is assumed; if you supply -user with a UUID then that specific user is assumed; if you instead supply -recoverykeychain then the Institutional Recovery user (see below) is assumed. You will be prompted interactively for a passphrase unless you specify a passphrase parameter with -passphrase or pipe your passphrase into stdin and use -stdinpassphrase. As an alternative to a passphrase, you can specify -recoverykeychain with a full path to a keychain file if an Institutional Recovery Key has been previously set up on the APFS Volume. The keychain must be unlocked; see security(1) and fdesetup(8) for more information. This command will normally mount the APFS Volume after unlocking; if part of a Volume Group "System"/"Data"-role pair, both will be mounted. 
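To unlock non-interactively without exposing the passphrase in the process argument list, the text above suggests piping it to standard input with -stdinpassphrase. A sketch (Python; the wrapper function is hypothetical, and the actual subprocess call runs on macOS only, so it is gated behind a dry_run flag here):

```python
import subprocess

def unlock_volume(volume, passphrase, user=None, nomount=False, dry_run=True):
    # Assemble a `diskutil apfs unlockVolume` invocation that reads the
    # passphrase from stdin (-stdinpassphrase) so it never appears in argv.
    args = ["diskutil", "apfs", "unlockVolume", volume, "-stdinpassphrase"]
    if user is not None:
        args += ["-user", user]   # e.g. "disk" or a cryptographic user UUID
    if nomount:
        args.append("-nomount")
    if dry_run:                   # macOS-only call, skipped in this sketch
        return args
    subprocess.run(args, input=passphrase.encode(), check=True)
    return args

cmd = unlock_volume("disk1s2", "secret", user="disk", nomount=True)
```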
If (one of the) volume(s) is of the "System"-role, then it will be mounted as read-only unless you specify the -systemreadwrite option. You can skip the explicit mounting step with the -nomount option, or specify a "custom" mountpoint with the -mountpoint option. If you specify your own mountpoint path, it must exist and you must have write privileges on it (e.g. usually you must be root). Specifying -verify will test passphrase correctness without affecting the locked or unlocked state. If -plist is specified, then a property list will be emitted instead of the normal user-readable output; this list provides additional detail. To re-lock the volume, unmount it, e.g. with diskutil unmount or diskutil apfs lockVolume. Ownership of the affected disks is required. lockVolume | lock volumeDevice Unmount and lock an encrypted unlocked APFS Volume. This is mostly a synonym for diskutil unmount. Ownership of the affected disks is required. listCryptoUsers | listUsers | listCryptoKeys | listKeys [-plist] volumeDevice Show all cryptographic users and special-purpose (e.g. recovery) "users" (keys) that are currently associated with the given APFS Volume, each by their Cryptographic User UUID and usage "type". The usual purpose of an APFS Cryptographic User is to authenticate for unlocking its APFS Volume; any of its users can do so. An APFS Volume need not be encrypted in order to contain crypto users; indeed, other than the Disk User, they should be added before encrypting. Types of Cryptographic Users include the at-most-one-per-Volume "Disk" user, whose UUID value always matches its Volume's UUID; iCloud or personal "Recovery Keys", which are not users per se, but instead store partial crypto keys and are paired with corresponding "Recovery Users" and have fixed-constant UUID values; and, most commonly, "Open Directory" users, whose UUID values match corresponding local macOS Open Directory (OD) account user GUIDs (e.g. 
the common local user accounts; see dscl(1) for more information). If -plist is specified, then a property list will be emitted instead of the normal user-readable output. changePassphrase | changeCryptoUserPassphrase | passwd volumeDevice -user disk | cryptoUserUUID [-oldPassphrase oldPassphrase | -oldStdinpassphrase] [-newPassphrase newPassphrase | -newStdinpassphrase] Change the passphrase of the given cryptographic user associated with the given APFS Volume. The old and new passphrases are specified in the same manner as diskutil apfs addVolume; you will be interactively prompted as necessary if you do not specify both. Ownership of the affected disks is required. setPassphraseHint | setCryptoUserPassphraseHint | hint volumeDevice -user disk | cryptoUserUUID -hint hintMessage | -clear Set an arbitrary hint string to aid recall of a passphrase for the given cryptographic user associated with the given APFS Volume. Specifying -clear will clear any existing hint (no hint is the default). Ownership of the affected disks is required. encryptVolume | encrypt | enableFileVault volumeDevice -user disk | existingCryptoUserUUID [-passphrase existingOrNewPassphrase | -stdinpassphrase] Start encryption of a currently-unencrypted APFS Volume ("Enable FileVault"). Depending on hardware, the operation may be accomplished immediately, or it may proceed "in the background". You can supply an existing cryptographic user UUID, in which case you must supply its corresponding passphrase, or you can supply disk (or the Disk/Volume UUID) and the corresponding passphrase of the "Disk User", provided the "Disk User" already exists. Alternatively, if no users exist yet on this APFS Volume, you can still supply disk (or the Disk/Volume UUID), and a "Disk User" will be created with a new passphrase which you supply. This is the only way, using diskutil, in which an APFS Volume that has no cryptographic users on it yet can acquire the first such user. 
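The "Disk User" referenced here is identifiable programmatically: per the listCryptoUsers description above, its UUID always equals the APFS Volume's own UUID. A trivial sketch (Python; the function name and the sample UUID are illustrative):

```python
# Classify a cryptographic user UUID relative to its APFS Volume UUID.
# Per listCryptoUsers: the "Disk" user's UUID always matches the Volume's
# UUID; any other UUID belongs to some other user type (e.g. an Open
# Directory user or a recovery key).
def classify_crypto_user(user_uuid: str, volume_uuid: str) -> str:
    if user_uuid.upper() == volume_uuid.upper():
        return "Disk"
    return "Other"

vol = "11111111-2222-3333-4444-555555555555"   # illustrative Volume UUID
kind = classify_crypto_user(vol, vol)
```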
The passphrase, interactive or not, is specified in the same manner as diskutil apfs addVolume. Ownership of the affected disks is required. decryptVolume | decrypt | disableFileVault volumeDevice [-user disk | existingCryptoUserUUID | -recoverykeychain file] [-passphrase existingPassphrase | -stdinpassphrase] Start decryption of a currently-encrypted APFS Volume ("Disable FileVault"). Depending on hardware, the operation may be accomplished immediately, or it may proceed "in the background". The APFS Volume must be in an unlocked state before invoking this operation. Additionally, this operation itself requires that you authenticate. Any existing cryptographic user and its passphrase on the APFS Volume can be supplied, using -user with either a UUID or the word disk to specify the "Disk User". If a "Disk User" exists on the APFS Volume and you omit the -user parameter, then the "Disk User" is assumed. As an alternative to a passphrase, you can specify -recoverykeychain with a full path, in the same fashion as the unlockVolume verb. If you do not supply a passphrase, yet one is required, you will be prompted interactively by cryptographic user UUID. Ownership of the affected disks is required. listSnapshots | listVolumeSnapshots [-plist] volumeDevice | volumeSnapshotDevice Show all APFS Snapshots currently associated with the given APFS Volume, each with information such as its Snapshot UUID, Snapshot Name, numeric XID identifier, and possibly other fields. If applicable, the unique APFS Snapshot which might be limiting APFS Container resizing is identified. If you are rooted (booted) from an APFS Snapshot, you can specify the appropriate 3-part BSD identifier (e.g. "disk1s2s1"). If -plist is specified, then a property list will be emitted instead of the normal user-readable output. deleteSnapshot volumeDevice -uuid snapshotUUID | -xid xid | -name snapshotName [-wait] Remove the given APFS Snapshot from its APFS Volume. 
The ability to restore the state of the APFS Volume back to that point in its evolution will be lost. Snapshot removal proceeds in the background and might not be finished when this command exits unless you specify -wait. Ownership of the affected disks is required. list[Volume]Groups [-plist] [containerReferenceDevice] Display the relationships among APFS Volumes which are defined by APFS Volume Groups. For each currently-attached APFS Container in the system, the Container's APFS Volume Groups are shown; for each APFS Volume Group, the Group's membership list of APFS Volumes is shown. If -plist is specified, then a property list will be emitted instead of the normal user-readable output. defragment containerDevice | volumeDevice status | enable | disable Manage automatic background defragmentation of user file data at the APFS Container or Volume level. Enablement of defragmentation at the APFS Container level means that any future Volumes which are created out of that Container will have defragmentation enabled by default. Ownership of the affected disks is required. updatePreboot volumeDevice [-od openDirectoryPath] Examine the given APFS Volume's cryptographic user (unlock) records, correlating against matching macOS user metadata (e.g. avatar pictures, password hints, etc) and use this information to update the target volume's associated Preboot Volume. The Preboot Volume is used at EFI firmware time to present a login user interface and to load and boot macOS. MacOS user metadata is sourced from macOS and Open Directory (OD) database files that are searched for on the given volumeDevice, which is normally expected to be a macOS installation. You can use a different Open Directory database by supplying the -od option with a full path, e.g. "/Volumes/SomeOtherMacOSVolume/var/db/dslocal/nodes/Default", or with / to use the currently-running macOS (even if volumeDevice is not). 
Redirecting the database source can lead to loss of access; it must never be done unless you have a precise reason. If some user cannot log in or login metadata is out of date, diskutil apfs updatePreboot / can be used as a repair. You should normally never have to use this verb; the Preboot Volume is updated automatically when you use Users & Groups in System Preferences. Ownership of the affected disks is required. syncPatchUsers volumeDevice Perform a specific, rarely-needed repair of APFS cryptographic user lock records. If the target volume is part of an APFS Volume Group, all APFS cryptographic user record lock data is copied from the System-role volume, if any, to the Data-role volume, if any. You must never use this command unless you know precisely why you are doing so. Ownership of the affected disks is required. appleRAID | ar raidVerb [...] AppleRAID verbs can be used to create, manipulate and destroy AppleRAID volumes (Software RAID). AppleRAID supports three basic types of RAID sets: • "stripe" - Striped Volume (RAID 0) • "mirror" - Mirrored Volume (RAID 1) • "concat" - Concatenated Volume (Spanning) Of these three basic types, only the "mirror" type increases fault-tolerance. Mirrors may have more than two disks to further increase their fault-tolerance. Striped and concatenated volumes are, in fact, more vulnerable to faults than single disk volumes. From these basic types, "stacked" or "nested" RAID volumes can be created. Stacked RAID sets that make use of mirrored RAID sets are fault-tolerant. For example, these are some of the more common combinations of stacked RAID sets: • RAID 50 - A striped RAID set of hardware RAID 5 disks. • RAID 10 - A striped RAID set of mirrored RAID sets. • RAID 0+1 - A mirrored RAID set of striped RAID sets. • Concatenated Mirror - A concatenation of mirrored RAID sets. When creating new RAID sets or adding disks, if possible, it is better to specify the entire disk instead of a partition on that disk. 
This allows the software to reformat the entire disk using the most current partition layouts. When using whole disks, the type of partitioning used is selected based on the platform type (PPC = APMFormat, Intel = GPTFormat). GPT and APM partition formats cannot be mixed in the same RAID set. In addition to whole disk and partition device names, AppleRAID uses UUIDs to refer to existing RAID sets and their members. Existing RAID sets may also be specified by mount point (e.g. /Volumes/raidset). In many cases, using the UUID for the device argument is preferred because disk device names may change over time when disks are added, disks are removed, or the system is rebooted. If RAID members have been physically disconnected from the system or are no longer responding, you must use the member's UUID as the command argument. Messages in the system log will refer to RAID sets and their member disks by UUID. For more information on specifying device arguments, see the DEVICES section below. AppleRAID is not a replacement for backing up your data. Backups should always be performed on a regular basis and before modifying any RAID set using these commands. The following is a list of appleRAID sub-verbs with their descriptions and individual arguments. list [-plist | UUID] Display AppleRAID volumes with current status and associated member disks. If UUID is specified, only list the RAID set with that AppleRAID Set UUID. If -plist is specified, then a property list will be emitted instead of user-formatted output. The -plist and UUID arguments may not both be specified. diskutil listRAID and diskutil checkRAID are deprecated synonyms for diskutil appleRAID list. create mirror | stripe | concat setName format devices ... Create a new RAID set consisting of multiple disks and/or RAID sets. setName is used for both the name of the created RAID volume and the RAID set itself (as displayed in list). e.g. 'diskutil createRAID stripe MyArray JHFS+ disk1 disk2 disk3 disk4'. 
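The basic create verb composes into the stacked sets described earlier. A hedged sketch (Python; the set names and disk identifiers are illustrative assumptions) of the two-stage build of a RAID 10 array, i.e. a stripe over two mirrors:

```python
def raid_create(kind, set_name, fmt, *devices):
    # Mirrors the synopsis: diskutil appleRAID create <kind> <setName> <format> <devices...>
    return ["diskutil", "appleRAID", "create", kind, set_name, fmt, *devices]

# Step 1: create two mirrors (each exports a new virtual disk device).
mirror_a = raid_create("mirror", "MirrorA", "JHFS+", "disk2", "disk3")
mirror_b = raid_create("mirror", "MirrorB", "JHFS+", "disk4", "disk5")
# Step 2: stripe over the devices exported by the mirrors. The identifiers
# disk6/disk7 are hypothetical -- query `diskutil appleRAID list` for the
# actual exported devices (or, better, use their UUIDs).
raid10 = raid_create("stripe", "Raid10", "JHFS+", "disk6", "disk7")
```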
Ownership of the affected disks is required. diskutil createRAID is a deprecated synonym for diskutil appleRAID create. delete raidVolume Destroy an existing RAID set. If the RAID set is a mirror with a resizable file system, delete will attempt to convert each of the member partitions back into a non-RAID volume while retaining the contained file system. For concatenated RAID sets with a resizable file system, delete will attempt to shrink the file system to fit on the first member partition and convert that to a non-RAID volume. Ownership of the affected disks is required. diskutil destroyRAID is a deprecated synonym for diskutil appleRAID delete. repairMirror raidVolume newDevice Repair a degraded mirror by adding a "new" disk given as newDevice to the RAID mirror set whose exported disk device or set UUID is given as raidVolume. The new disk must be the same size or larger than the existing disks in the RAID set. After running this command, you should manually remove the old (orphaned, failed) member(s) with diskutil appleRAID remove. Ownership of the affected disk is required. diskutil repairMirror is a deprecated synonym for diskutil appleRAID repairMirror. add type newDevice raidVolume Add a new member or hot spare to an existing RAID set. Type can be either member or spare. New disks are added live, the RAID volume does not need to be unmounted. Mirrored volumes support adding both members and hot spares, concatenated volumes only support adding members. When adding to a mirrored RAID set, the new disk must be the same size or larger than the existing disks in the RAID set. Adding a hot spare to a mirror will enable autorebuilding for that mirror. Adding a new member to a concatenated RAID set appends the member and expands the RAID volume. Ownership of the affected disk is required. diskutil addToRAID is a deprecated synonym for diskutil appleRAID add. remove oldDevice raidVolume Remove a member or spare from an existing RAID set. 
Old disks are removed live; the RAID volume does not need to be unmounted. For missing devices, oldDevice must be the device's UUID. Online mirror members with a resizable file system will be converted to non-RAID volumes; spare and offline members will be marked free. For concatenated RAID sets, only the last member can be removed. For resizable file systems, remove will first attempt to shrink the concatenated RAID set so that the file system fits on the remaining disks. Ownership of the affected disk is required. diskutil removeFromRAID is a deprecated synonym for diskutil appleRAID remove. enable mirror | concat device Convert a non-RAID disk partition containing a resizable file system (such as JHFS+) into an unpaired mirror or single disk concatenated RAID set. Disks that were originally partitioned on Mac OS X 10.2 Jaguar or earlier or were partitioned to be Mac OS 9 compatible may not be resizable. Ownership of the affected disk is required. diskutil enableRAID is a deprecated synonym for diskutil appleRAID enable. update key value raidVolume Update the key value parameters of an existing RAID set. Valid keys are: • AutoRebuild - If true, the system attempts to rebuild degraded mirrored volumes automatically. When looking for devices for rebuild, AppleRAID first looks for hot spares and then degraded members. Use a value of "1" for true and "0" for false. • SetTimeout - Controls how long the system waits (in seconds) for a missing device before degrading a mirrored RAID set. Also controls the amount of time you have to disconnect all devices from an unmounted mirror without degrading it. Ownership of the affected disk is required. diskutil updateRAID is a deprecated synonym for diskutil appleRAID update. coreStorage | cs coreStorageVerb [...] CoreStorage verbs can be used to gather information about, and to remove, CoreStorage volumes. 
CoreStorage maintains a world of virtual disks, somewhat like RAID, in which one can easily add or remove imported backing store disks, as well as exported usable volumes, to or from a pool (or several pools). This provides the user with flexibility in allocating their hardware; user or operating system data can span multiple physical disks seamlessly, for example. CoreStorage is deprecated in favor of Apple APFS. Apple CoreStorage defines four types of objects, instances of which are uniquely represented by a UUID: • Logical Volume Group (LVG) • Physical Volume (PV) • Logical Volume Family (LVF) • Logical Volume (LV) The Logical Volume Group (LVG) is the top or "pool" level; zero or more may exist during any OS boot time session. An LVG imports one or more Physical Volumes (PVs). A PV represents a device that feeds the LVG storage space; a PV is normally real media but it can be a disk image or even an AppleRAID Set. A disk offered to be a PV must be a partition and the encompassing scheme must be GPT. An LVG exports zero or more Logical Volume Families (LVFs). An LVF contains properties which govern and bind together all of its descendant Logical Volumes (LVs). These properties provide settings for Full Disk Encryption (FDE) (such as whether the LVs are encrypted, which users have access, etc) and other services. However, at the present time, for the creation of any new LVFs, only zero or one LVF per LVG is supported. A Logical Volume Family (LVF) exports one or more Logical Volumes (LVs). However, exactly one LV per LVF is currently supported. A Logical Volume (LV) exports a dev node, upon which a file system (such as Journaled HFS+) resides. For more information on specifying device arguments, see the DEVICES section below. The following is a list of coreStorage sub-verbs with their descriptions and individual arguments. 
list [-plist | UUID] Display a tree view of the CoreStorage world for all current logical volume groups (LVGs) with member disks (PVs) and exported volumes (LVFs and LVs), with properties and status for each level. If -plist is specified then a property list will be emitted instead of the formatted tree output; the UUIDs can be used with the diskutil coreStorage information verb to get properties for the object represented by that UUID. If UUID is specified then an attempt is made to list only that UUID (whatever type of CoreStorage object it may represent). The -plist and UUID arguments may not both be specified. info | information [-plist] UUID | device Display properties of the CoreStorage object (LVG, PV, LVF, or LV) associated with the given CoreStorage UUID or disk. delete | deleteLVG lvgUUID | lvgName Delete a CoreStorage logical volume group. All logical volume families with their logical volumes are removed, the logical volume group is destroyed, and the now-orphaned physical volumes are erased and partition-typed as Journaled HFS+. unlockVolume | unlockLV [-nomount] lvUUID [-stdinpassphrase] | [-passphrase passphrase] | [-recoverykeychain file] Unlock a logical volume and file system, causing it to be attached and mounted. Data is then accessible in plain form to the file system and applications, while the on-physical-disk backing bytes remain in encrypted form. A credential must be supplied; you must supply either a "Disk" user passphrase or a recovery keychain file. If no -passphrase option is specified, you will be prompted interactively; else, your passphrase is used. Or, if you specify -stdinpassphrase then the standard input is read (e.g. so that the passphrase can be securely piped in without having to expose it). Alternatively, you can specify -recoverykeychain with a path to a keychain file. The keychain must be unlocked; see security(1) for more information. 
The locked state means that the CoreStorage driver has not been given authentication information (a passphrase) to interpret the encrypted bytes on disk and thus export a dev node. This verb unlocks a logical volume family (LVF) and its logical volumes (LVs) by providing that authentication; as the LVs thus appear as dev nodes, any file systems upon them are automatically mounted unless the -nomount option is given. To re-lock the volume, make it offline again by ejecting it, e.g. with diskutil eject. image [--stdinpassphrase] [--verbose] [--plist] diskimageverb The diskutil image verbs use the DiskImages framework, along with the StorageKit framework, to manipulate disk images. Currently only attach, info, and create are supported. Add the verbose flag for verbose output. Add the plist flag for the diskimageverb to produce output in plist format; this flag is intended for other programs to consume the output instead of attempting to parse human-readable text. Add stdinpassphrase to read the passphrase from stdin instead of prompting for one. The following is a list of diskutil image sub-verbs with their descriptions and individual arguments. attach [--readOnly] [--nobrowse] [--mountPoint mountPoint] [--mountOptions opt[,opt]*] [--mountPolicy mountPolicy] [--noMount] image-url Attach a disk image as a device. Upon success, the disk identifier matching the attached disk image is returned. image-url can be one of: • Path to an existing disk image file • HTTP[S] URL representing a disk image served over HTTP[S] • RAM disk in format ram://<size>. See SIZES. If units are not specified, the size will be multiplied by the block size (512 bytes). Note that RAM disks cannot be mounted as created, since they do not yet contain a file system. If readOnly is specified, then the disk image is attached read-only, even if writing is supported or allowed by the volume's underlying file system, device, media, or user (e.g. the super-user). 
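For instance, because a unitless ram:// size is a count of 512-byte blocks, ram://4194304 requests a 2 GiB RAM disk. The arithmetic, as a sketch:

```python
# A unitless ram://<size> value counts 512-byte device blocks (see above).
BLOCK_SIZE = 512

def ram_url_bytes(blocks: int) -> int:
    # Total RAM disk capacity in bytes for ram://<blocks>.
    return blocks * BLOCK_SIZE

# ram://4194304 -> 4194304 * 512 = 2147483648 bytes (2 GiB)
size = ram_url_bytes(4194304)
```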
Mount Policy: Can be one of noMount, autoMount, or forceMount. By default, an attempt is made to mount the disk image after attaching it (unless it is a RAM disk); if the image contains no mountable file system, the operation fails (forceMount). Specifying the noMount flag skips any mount attempt and only attaches the disk image. Lastly, autoMount attempts a mount after a successful attach; if no mountable file system is present, the operation still ends successfully. The noMount flag behaves the same as specifying mountPolicy=noMount. Note that you cannot specify this flag together with a conflicting mount policy. Mount-only options (cannot be used with noMount): If mountPoint is supplied, it will be used as the path to view into the volume; a directory at the supplied path must already exist. Note that when specifying this option, the disk image should contain only a single mountable entity; otherwise the operation will fail. Additionally, you can adjust the mount operation by passing mountOptions; see mount(8). You can supply multiple options separated by ','; these options will be used during the mount operation. Lastly, nobrowse can be supplied to hide the mounted volume in the disk image from GUI applications (such as Finder). info [--extra] image-url Print out information about a disk image. The operation includes an attach of the image in order to retrieve various details. If the image is already attached, it will remain attached. Otherwise, the image is ejected upon completion. If --extra is specified, additional information will be retrieved for some image types. create [-e, --encrypt] [blank | from] ... Create a disk image, either blank or from an existing source. With encrypt, the newly created disk image is encrypted using AES-256. 
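A sketch of assembling a `diskutil image create blank` invocation for an encrypted sparse bundle (Python; the image path, volume name, and exact flag ordering are assumptions based on the synopsis above, not verified behavior):

```python
def create_blank_args(image_path, fmt="UDSB", size=None, volume_name=None, encrypt=False):
    # Follows the synopsis:
    #   create [-e, --encrypt] blank [--format format] [--size size]
    #          [--volumeName name] image-path
    args = ["diskutil", "image", "create"]
    if encrypt:
        args.append("--encrypt")
    args.append("blank")
    args += ["--format", fmt]
    if size is not None:
        args += ["--size", size]           # exact byte count; see SIZES
    if volume_name is not None:
        args += ["--volumeName", volume_name]
    args.append(image_path)
    return args

cmd = create_blank_args("/tmp/scratch.sparsebundle", size="1000000000",
                        volume_name="Scratch", encrypt=True)
```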
When creating from a source with encryption, which applies to both the source and the destination, stdinpassphrase applies the first passphrase to the source and the second to the destination. At the end of the operation the image is not mounted. Use blank to create a blank disk image. Use from to create the disk image from a source (e.g. disk3 / an existing disk image). blank [--format format] [--size size] [--volumeName name] image-path Creates a blank disk image. The disk image will be created at image-path, which should be a writable path. format • RAW - RAW read-write format (previously known as UDRW) • UDSB - Sparse bundle name creates an APFS file system with the requested volume name on the image (default: untitled). size specifies the size of the newly created image; see SIZES. from [--format format] [--source source] Creates a disk image from a source (e.g. disk3 or a disk image URL). The disk image will be created at image-path, which should be a writable path. format • RAW - RAW read-write format (previously known as UDRW) • UDRO - UDIF read-only uncompressed • UDZO - UDIF read-only zlib-compressed • ULFO - UDIF read-only lzfse-compressed • ULMO - UDIF read-only lzma-compressed • UDSB - Sparse bundle source should be the source for the disk image to be created from (e.g. disk3 / an existing disk image). resize [-s, --size] [--image-only] image-url Resizes an existing disk image represented by image-url, or prints disk image resize limits when no size is specified. If the image contains a resizable file system (e.g. APFS), it will be resized and image limits will be calculated accordingly. If the image contains one or more partitions, only the last one will be resized based on the input. Only GPT partition tables, or partition-less images with APFS, are supported. The image should not be attached prior to resizing. size The new size of the disk image; it should not exceed the maximum from the disk image limits. 
Can be either a size as described in the SIZES section, or 'min' to resize to the minimum size as reported by diskutil. image-only Will only resize the disk image, adjusting a secondary GPT table to the new size if there is one. DEVICES A device parameter for any of the above commands (except where explicitly required otherwise) can usually be any of the following: • The disk identifier (see below). Any entry of the form of disk*, e.g. disk1s9. • The device node entry containing the disk identifier. Any entry of the form of /dev/[r]disk*, e.g. /dev/disk2. • The volume mount point. Any entry of the form of /Volumes/*, e.g. /Volumes/Untitled. In most cases, a "custom" mount point e.g. /your/custom/mountpoint/here is also accepted. • The URL form of any of the volume mount point forms described above. E.g. file:///Volumes/Untitled or file:///. • A UUID. Any entry of the form of e.g. 11111111-2222-3333-4444-555555555555. The UUID can be a "media" UUID which IOKit places in an IOMedia node as derived from e.g. a GPT map's partition UUID, or it can be an AppleRAID (or CoreStorage) set (LV) or member (PV) UUID. • A volume name, e.g. Untitled. This match is only attempted if the given device is not of the form [/dev/][r]disk*, nor [/Volumes/]*. The match attempt is against the intrinsic volume label, not against the terminal component, if mounted, of its mount point. DISK IDENTIFIER The (BSD) disk identifier string variously identifies a physical or logical device unit, a session (if any) upon that device, a partition (slice) upon that session (if any), a virtual logical volume, or a moment in a volume's evolution. It may take the form of diskU, diskUsP, diskUsQ, diskUsQsP, diskC, diskCsV, or diskCsVsS where C, P, Q, S, U, and V are positive decimal integers (possibly multi-digit), and where: • U is the device unit. It may refer to hardware (e.g. a hard drive, optical drive, or memory card) or a virtual "drive" constructed by software (e.g. 
an AppleRAID Set, Apple Disk Image, CoreStorage LV, etc). • C is an APFS Container. This is a virtual disk constructed by APFS to represent a collection of APFS Volumes. Multiple APFS Containers can be active simultaneously. • Q is the session and is only included for optical media; it refers to the number of times recording has taken place on the currently-inserted medium (disc). • P is a partition in some partitioning scheme. A partitioning scheme divides up a device unit and is also called a "partition map" or simply a "map". Upon a partition, the raw data that underlies a user-visible file system is usually present, but it may also contain specialized data for certain 3rd-party database programs, or data required for the system software (e.g. EFI partitions, booter partitions, APM partition map data, etc), or, notably, it might contain backing-store physical volumes for AppleRAID, CoreStorage, APFS, or other (3rd-party) Storage Systems. For example, a partition disk0s2 might contain APFS data and have a partition type of Apple_APFS; this partition would then be termed an APFS Physical Store, out of which an APFS Container disk1 is defined, out of which an APFS Volume disk1s1 is exported. • V is an APFS Volume; it refers to a virtual logical volume that is shared out of an APFS Container. For example, exported from an APFS Container designated as disk1 there might be an APFS Volume disk1s1, mountable as a file system and usable for file storage via its mountpoint path. • S is an APFS Snapshot; it refers to a frozen moment in time of the state of files on an APFS Volume. For example, if APFS Container disk6 has an APFS Volume disk6s3, and two APFS Snapshots have been "taken" on it, these, when mounted, might be designated as disk6s3s1 and disk6s3s2. Zero or more snapshots can be persistently defined on a volume, but only "active" (mounted) snapshots have disk identifiers. Some units (e.g. 
floppy disks, RAID sets) contain file system data upon their "whole" device instead of containing a partitioning scheme with partitions. Note that some of the forms appear the same and must be distinguished by context. For example, diskUsP, diskUsQ, and diskCsV are all 2-part forms that can mean different things: For non-optical media, it identifies a partition (on a partition map) upon which (file system) data is stored; for optical media, it identifies a session upon which an entire partition map (with its partitions with file systems) is stored; for an APFS setup, it identifies an APFS Volume. As another example, in "stacked" cases (CoreStorage on AppleRAID or APFS on AppleRAID), the 1-part diskU form becomes a CoreStorage PV or APFS PhysicalStore, in contrast with the more-common 2-part form. It is important for software to avoid relying on numerical ordering of any of the parts. Activities including but not limited to partition deletions and insertions, partition resizing, virtual volume deletions and additions, device ejects and attachments due to media insertion cycles, plug cycles, authentication lock cycles or reboots, can all cause (temporary) gaps and non-increments in the numerical ordering of any of the parts. You must rely on more persistent means of identification, such as the various UUIDs. SIZES Wherever a size is emitted as an output, it is presented as a base-ten approximation to the precision of one fractional decimal digit and a base-ten SI multiplier, often accompanied by a precise count in bytes. Scripts should refrain from parsing this human-readable output and use the -plist option instead. Wherever a size is to be supplied by you as an input, you can provide values in a number of different ways, some absolute and some context-sensitive. Values are interpreted as base ten and must be positive with no preceding "+". An integer without a suffix is taken to mean an exact number of bytes (e.g. 5368709120). 
Multiplier suffixes are optional, must follow your value immediately without whitespace, and allow your value to be a real number (e.g. 5.1234t). Some of the specifiers below should not have a preceding value at all (e.g. the letter R for "remainder"). Power-of-ten suffixes: • B is bytes (not blocks) where the multiplier is 1. This suffix may be omitted. • K[B] is power of ten kilobytes where the multiplier is 1000 (1 x 10^3). • M[B] is power of ten megabytes where the multiplier is 1000000 (1 x 10^6). • G[B] is power of ten gigabytes where the multiplier is 1000000000 (1 x 10^9). • T[B] is power of ten terabytes where the multiplier is 1000000000000 (1 x 10^12). • P[B] is power of ten petabytes where the multiplier is 1000000000000000 (1 x 10^15). • E[B] is power of ten exabytes where the multiplier is 1000000000000000000 (1 x 10^18). Power-of-two suffixes: • Ki[B] is power of two kibibytes where the multiplier is 1024 (1 x 2^10). • Mi[B] is power of two mebibytes where the multiplier is 1048576 (1 x 2^20). • Gi[B] is power of two gibibytes where the multiplier is 1073741824 (1 x 2^30). • Ti[B] is power of two tebibytes where the multiplier is 1099511627776 (1 x 2^40). • Pi[B] is power of two pebibytes where the multiplier is 1125899906842624 (1 x 2^50). • Ei[B] is power of two exbibytes where the multiplier is 1152921504606846976 (1 x 2^60). The following are useful when working with devices and partition maps: • S | UAM ("sectors") is 512-byte units (device-independent) where the multiplier is always 512. • DBS ("device block size") is the device-dependent native block size of the encompassing whole disk, if applicable, where the multiplier is often 512, but not always; indeed it might not be a power of two. In certain contexts (e.g. when asking to "use all space available", or when building partition triplets) you can provide a relative value as follows: • 0 (the number zero) is a request to allocate "all possible". 
This may mean different things in different contexts. For partition maps, this requests allocation until the start of the following partition or the end of the partition map's allocatable space. • % (with a preceding number) is a percentage of the whole-disk size, the partition map size, or other allocatable size, as appropriate by context. Use of % is not supported in all situations. • R (with no preceding number) specifies the remainder of the whole-disk size or other allocatable size after all other triplets in the group are taken into account. It need not be in the last triplet. It must only appear in at most one triplet among all triplets. Use of R is not supported in all situations; in some such cases, a value of 0 is more appropriate. You can provide an operating system-defined constant value as follows: • %recovery% (with no preceding number) is the customary size of pre-macOS-13.0 Recovery Partitions. Note again that B refers to bytes and S and UAM refer to a constant multiplier of 512; the latter are useful when working with tools such as gpt (8) or df (1). Note also that this multiplier is not a "block" size as actually implemented by the underlying device driver and/or hardware, nor is it an "allocation block", which is a file system's minimum unit of backing store usage, often formatting-option-dependent. Examples: 10G (10 gigabytes), 4.23tb (4.23 terabytes), 5M (5 megabytes), 4GiB (exactly 2^32 bytes), 126000 (exactly 126000 bytes), 25.4% (25.4 percent of whole disk size). FORMAT The format parameter for the erasing and partitioning verbs is the file system personality name. You can determine this name by looking in a file system bundle's /System/Library/Filesystems/<fs>.fs/Contents/Info.plist and looking at the keys for the FSPersonalities dictionary, or by using the listFilesystems verb, which also lists shortcut aliases for common personalities (these shortcuts are defined by diskutil for use with it only). 
Common examples include JHFS+, JHFSX, MS-DOS, etc, as nicknames for the canonical forms from the file system bundles such as "Case-sensitive HFS+". APFS VOLUME ROLES APFS Volumes can be tagged with certain role meta-data flags. Supported flags are: • B - Preboot (boot loader) • R - Recovery • V - VM (swap space) • I - Installer • T - Backup (Time Machine) • S - System • D - Data • E - Update • X - XART (hardware security) • H - Hardware • C - Sidecar (Time Machine) • Y - Enterprise (data)
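The suffix rules in the SIZES section above can be sketched in code. The following is a minimal, hypothetical Python sketch (not part of diskutil) mapping the documented multipliers to byte counts:

```python
# Hypothetical sketch of the SIZES suffix rules above; not diskutil code.
POW10 = {"K": 1000, "M": 1000**2, "G": 1000**3,
         "T": 1000**4, "P": 1000**5, "E": 1000**6}
POW2 = {"K": 1024, "M": 1024**2, "G": 1024**3,
        "T": 1024**4, "P": 1024**5, "E": 1024**6}

def to_bytes(spec: str) -> int:
    s = spec.strip()
    if s.upper().endswith("B"):      # the trailing B is optional
        s = s[:-1]
    table = POW10                    # power-of-ten suffixes: K, M, G, ...
    if s[-1:] in ("i", "I"):         # power-of-two suffixes: Ki, Mi, Gi, ...
        table, s = POW2, s[:-1]
    unit = s[-1:].upper()
    if unit in table:
        return int(float(s[:-1]) * table[unit])
    if unit == "S":                  # "sectors": always 512-byte units
        return int(float(s[:-1])) * 512
    return int(s)                    # bare value: exact number of bytes
```

With the examples from the text, `to_bytes("10G")` yields 10 gigabytes, `to_bytes("4GiB")` exactly 2^32 bytes, and `to_bytes("126000")` exactly 126000 bytes.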
|
diskutil – modify, verify and repair local disks
|
diskutil [quiet] verb [subVerb] [options]
| null |
Erase a whole disk (device) diskutil eraseDisk JHFS+ Untitled disk3 Erase a volume (or format a partition or virtual disk) diskutil eraseVolume jhfs+ UntitledHFS /Volumes/SomeDisk Erase and (re)-partition a disk (device) with three partitions diskutil partitionDisk disk3 HFSX Foo1 10G JHFS+ Foo2 10G MS-DOS FOO3 0 Erase and format with a different volume file system diskutil eraseVolume ExFAT FOO disk3s2 Remove a partition from a partition map (results in free space) diskutil eraseVolume free free disk3s2 Add a new partition to a partition map (into free space) diskutil addPartition disk3s2 ExFat FOO 0 diskutil addPartition disk3s2 %Apple_HFS% %noformat% 2.5g diskutil addPartition disk3 ExFat FOO 50% Convert a HFS disk to APFS diskutil apfs convert disk3s2 Create a new APFS Container with three new APFS Volumes diskutil apfs createContainer disk0s2 diskutil apfs addVolume disk8 APFS MyVolume1 diskutil apfs addVolume disk8 APFS MyVolume2 -passprompt diskutil apfs addVolume disk8 APFS MyVolume3 -quota 10g diskutil apfs list Encrypt an APFS Volume (enable FileVault) diskutil apfs encryptVolume disk8s1 -user disk Lock or unlock an APFS Volume diskutil apfs list disk8 diskutil apfs lockVolume disk8s1 diskutil apfs unlockVolume disk8s1 (tries all users) diskutil apfs unlockVolume disk8s2 -user USERUUID (tries specific user) Decrypt an APFS Volume (disable FileVault) diskutil apfs listUsers disk8s1 diskutil apfs decryptVolume disk8s1 -user USERUUID Remove an APFS Volume from its APFS Container altogether diskutil apfs deleteVolume disk8s3 Resize an HFS volume and create a volume after it diskutil resizeVolume /Volumes/SomeDisk 50g MS-DOS DOS 0 Resize an HFS volume and leave all remaining space as unused diskutil resizeVolume /Volumes/SomeDisk 12g Merge two partitions into a new partition diskutil mergePartitions JHFS+ not disk1s3 disk1s5 Split a partition into three new ones diskutil splitPartition /Volumes/SomeDisk JHFS+ vol1 12g MS-DOS VOL2 8g JHFS+ vol3 0 Create an 
AppleRAID diskutil createRAID mirror MirroredVolume JHFS+ disk1 disk2 Destroy an AppleRAID diskutil destroyRAID /Volumes/MirroredVolume Repair a damaged AppleRAID diskutil repairMirror /Volumes/MirroredVolume disk3 Convert volume into an AppleRAID volume diskutil enableRAID mirror /Volumes/ExistingVolume Erase a partition, shrink, associate a pre-macOS-13.0 Recovery Partition diskutil splitPartition disk8s2 JHFS+ MacHD R %Apple_Boot% %noformat% %recovery% Partition a disk with the MBR partitioning scheme (e.g. for a camera) diskutil partitionDisk disk3 MBR MS-DOS CAM1 0 Partition a disk with the (deprecated) APM partitioning scheme diskutil partitionDisk disk3 APM HFS+ vol1 15% Journaled\ HFS+ vol2 R Journaled\ HFS+ vol3 25% Free\ Space volX 10g Attach a 100MiB RAM disk image diskutil image attach --mountPolicy=noMount ram://100MiB Attach a read-only disk image at some specified mount point diskutil image attach --readOnly --mountPoint /tmp/myMountPoint /tmp/myImage.dmg Creating a blank 100MiB disk image with UDSB format diskutil image create -s 100MiB -f UDSB /tmp/myImage.dmg Creating a blank 100MiB disk image with APFS volume named MyVolume diskutil image create -s 100MiB --volumeName MyVolume /tmp/myImage.dmg Creating a disk image from an existing disk diskutil image create from --source disk3 /tmp/myImage.dmg Resizing a disk image: diskutil image resize --size 100G /tmp/my.dmg Resize to minimal size: diskutil image resize --size min /tmp/my.dmg Print resize limits in plist format diskutil image resize --plist /tmp/my.dmg ERRORS diskutil will exit with status based on sysexits (see sysexits(3)) or 1 if it cannot complete the requested operation; this includes cases in which usage text is printed. Before diskutil returns with non EX_OK status, it prints a message which might include an explanation local to diskutil, an error string from the DiskManagement or MediaKit frameworks, an underlying POSIX error, or some combination. 
SEE ALSO authopen(1), drutil(1), hdiutil(1), apfs.util(8), corestoraged(8), diskarbitrationd(8), diskmanagementd(8), diskmanagementstartup(8), fdesetup(8), fsck_apfs(8), fsck_hfs(8), hfs.util(8), ioreg(8), mount(8), mount_apfs(8), msdos.util(8), newfs_apfs(8), newfs_hfs(8), sysexits(3), ufs.util(8), umount(8), vsdbutil(8) HISTORY The eraseDisk and partitionDisk verbs had an option to add Mac OS 9 drivers (in partitions designated for that purpose); there was also a repairOS9Permissions verb. These have been removed. Starting with Mac OS X 10.6, the input and output notation of disk and partition sizes use power-of-10 suffixes. In the past this had been power-of-2, regardless of the suffix (e.g. G, Gi, GiB) used for display or accepted as input. Starting with Mac OS X 10.11, the B suffix is optional even for "bare" numeric values. Starting with Mac OS X 10.11, the verify- and repairPermissions verbs have been removed. Starting with macOS 10.12, the plist output of partitions from diskutil list -plist is presented in on-disk (not BSD slice name, e.g. disk0s2) order. This mimics the order of outputs from programs such as gpt (1). The human-readable output always has been, and remains, in on-disk order. Starting with macOS 10.13.2, APFS cryptographic user authentication is required even when disabling FileVault. Starting with macOS 10.14, partitions on all media above 1GiB in size will default to 1MiB alignment, regardless of the partitioning scheme. This is significant for MBR partition maps and their use in appliances such as cameras. Free-space requests will not be aligned. Starting with macOS 11.0, certain Core Storage manipulation verbs have been removed. macOS 27 October 2021 macOS
|
cvfsid
|
cvfsid provides a mechanism for displaying the SNFS identifier for the executing system. For customers using client-based licensing, SNFS identifiers are used to generate individual client licenses. This identifier string is submitted to Apple Technical Support for license authorization keys. See the installation instructions for additional information on SNFS licensing.
|
cvfsid - Display SNFS System Identifier
|
/System/Library/Filesystems/acfs.fs/Contents/bin/cvfsid [-?Ghnl]
|
-h, -? Display help -G Gather mode. NOTE: Not intended for general use. Only use when recommended by Apple Support. -l List the local host's Authorizing IDs, IP addresses, and MACs. (Linux only.) -n Display the network interface information in a compact, machine readable form. (Linux only.) When executed without options, cvfsid prints the information required to generate a license for the host on which it is executed. Simply execute the program on each participating system, and either Email or Fax the identifiers to Apple Technical Support for authorization keys. After the license keys are received cut-and-paste them into the file /Library/Preferences/Xsan/license.dat on the system that runs the CVFS File System Manager. FILES /Library/Preferences/Xsan/license.dat SEE ALSO cvfs(8), snfs_config(5), Xsan File System June 2014 CVFSID(8)
| null |
appleh16camerad
| null | null | null | null | null |
sso_util
|
sso_util is a tool for setting up, interrogating and removing Kerberos configurations within the Apple Single Sign On environment. This tool can configure services, create and consume encrypted config records and tear down Kerberos installations Commands for sso_util : info [-p] [-g | -l | -L | -r dir_node_path [dir_node_path]] Returns information about the current Single Sign On environment info command arguments: -p Returns the data in XML format -g Returns the default Kerberos realm name -l Returns a list of the services sso_util knows how to Kerberize -L Returns the default Kerberos log file paths -r dir_node_path Returns whether or not the given node has a Kerberos record associated with it. If it does, it returns the default realm name. If dir_node_path is '.' (default) it also returns all the realm names available on the search path dir_node_path specifies the directory node in which to search for the computer record configure -r REALM -a admin_name [-p password] service Configures Kerberized services on the local machine for the given realm configure command arguments: -r REALM Kerberos realm for the service principals -a admin_name Account name of an administrator authorized to make changes in the Kerberos database -p password Password for the above administrator. The password can also be stored in a file and the path to the file can be passed as an environment variable - SSO_PASSWD_PATH. 
service Service can be any number of afp, ftp, imap, pop, smtp, ssh, fcsvr, DNS, or all useconfig [-u] [-R record_name] [-f dir_node_path] -a admin_name [-p password] Uses a secure config record to configure a server for Kerberos useconfig command arguments: -u Forces the update, ignoring that the update may already have been installed -R record_name Name of the Computer record containing the secure config record -f dir_node_path Specifies the directory node in which to find the given computer record -a admin_name Account name of a user authorized to use the secure config record (see generateconfig) -p password Password for the above user. The password can also be stored in a file and the path to the file can be passed as an environment variable - SSO_PASSWD_PATH.
|
sso_util – Kerberos – Open Directory Single Sign On
|
sso_util command [-args]
| null |
To configure a server in realm FOO.COM when you have the Kerberos administrator's password. Store the password in a file and set env var SSO_PASSWD_PATH to the file path sso_util configure -r FOO.COM -a kerberos_admin all To create a secure config record to allow the delegated administrators, Fred and Barney, to configure a server named fred.foo.com in realm FOO.COM (using an existing computer record). The Open Directory Master for foo.com is odmaster.foo.com. This can be run on any server and neither Fred nor Barney need to have the Kerberos administrator's password. Store the password in a file and set env var SSO_PASSWD_PATH to the file path. sso_util generateconfig -r FOO.COM -R fred.foo.com -f /LDAPv3/odmaster.foo.com -U Fred,Barney -a kerberos_admin all To use the secure config record to allow Barney to configure the server named fred.foo.com. Store the password in a file and set env var SSO_PASSWD_PATH to the file path. sso_util useconfig -R fred.foo.com -f /LDAPv3/odmaster.foo.com -a Barney FILES /etc/krb5.keytab The configure and useconfig commands create or modify the krb5.keytab file. DIAGNOSTICS You can add -v debug_level to any of the sso_util commands. Debug level 1 provides status information, higher levels add progressively more levels of detail. The maximum is level 7. NOTES The sso_util tool is used by the Apple Single Sign On system to set up Kerberized services integrated with the rest of the Single Sign On components. SEE ALSO kdc(8), kdcsetup(8), kerberos(8), krbservicesetup(8) Darwin Tue Mar 11 2003 Darwin
|
vsdbutil
|
vsdbutil manipulates the volume status DB. The following options are available: -a adopts (activates) on-disk ownership on the specified path -c checks the status of the ownership usage on the specified path -d disowns (deactivates) the on-disk ownership on the specified path -i initializes the ownership database to include all mounted HFS+ and APFS volumes -x clears the entry associated with the specified path from the database -h prints out a simple help message The vsdbutil command is deprecated; using a volume UUID in fstab(5) is preferred. FILES /var/db/volinfo.database Database of volumes managed via vsdbutil. SEE ALSO diskutil(8), mount(8), fstab(5) Darwin December 19, 2001 Darwin
|
vsdbutil – manipulates the volume status DB.
|
vsdbutil [-a path] vsdbutil [-c path] [-d path] [-i] vsdbutil [-h]
| null | null |
sntpd
|
sntpd is an SNTP server daemon, which can provide time synchronization services to clients such as timed(8). The following options are available: -L Overwrite any existing state with a header describing a stratum 1 uncalibrated local clock. This is useful for starting a root (primary) server that doesn't use sntp(1) to synchronize with a higher stratum server. -z state_file Specify where to get the header data from. When unspecified, the default location is state.bin in the current working directory. When managed by launchd, the default system location is /var/sntpd/state.bin. If the file does not exist, it will be created and populated with a template indicating that the server is unsynchronized. FILES /var/sntpd/state.bin SNTP header data /System/Library/LaunchDaemons/com.apple.sntpd.plist launchd job SEE ALSO launchctl(1), sntp(1), clock_gettime(3), timed(8) HISTORY sntpd first appeared in macOS 11.1 Darwin October 16, 2020 Darwin
|
sntpd – very Simple Network Time Protocol server daemon
|
launchctl enable system/com.apple.sntpd sntpd [-L] [-z state_file]
| null | null |
ddns-confgen
|
tsig-keygen and ddns-confgen are invocation methods for a utility that generates keys for use in TSIG signing. The resulting keys can be used, for example, to secure dynamic DNS updates to a zone or for the rndc command channel. When run as tsig-keygen, a domain name can be specified on the command line which will be used as the name of the generated key. If no name is specified, the default is tsig-key. When run as ddns-confgen, the generated key is accompanied by configuration text and instructions that can be used with nsupdate and named when setting up dynamic DNS, including an example update-policy statement. (This usage is similar to the rndc-confgen command for setting up command channel security.) Note that named itself can configure a local DDNS key for use with nsupdate -l: it does this when a zone is configured with update-policy local;. ddns-confgen is only needed when a more elaborate configuration is required: for instance, if nsupdate is to be used from a remote system.
|
ddns-confgen - ddns key generation tool
|
tsig-keygen [-a algorithm] [-h] [-r randomfile] [name] ddns-confgen [-a algorithm] [-h] [-k keyname] [-q] [-r randomfile] [-s name | -z zone]
|
-a algorithm Specifies the algorithm to use for the TSIG key. Available choices are: hmac-md5, hmac-sha1, hmac-sha224, hmac-sha256, hmac-sha384 and hmac-sha512. The default is hmac-sha256. Options are case-insensitive, and the "hmac-" prefix may be omitted. -h Prints a short summary of options and arguments. -k keyname Specifies the key name of the DDNS authentication key. The default is ddns-key when neither the -s nor -z option is specified; otherwise, the default is ddns-key as a separate label followed by the argument of the option, e.g., ddns-key.example.com. The key name must have the format of a valid domain name, consisting of letters, digits, hyphens and periods. -q (ddns-confgen only.) Quiet mode: Print only the key, with no explanatory text or usage examples; this is essentially identical to tsig-keygen. -r randomfile Specifies a source of random data for generating the authorization. If the operating system does not provide a /dev/random or equivalent device, the default source of randomness is keyboard input. randomfile specifies the name of a character device or file containing random data to be used instead of the default. The special value keyboard indicates that keyboard input should be used. -s name (ddns-confgen only.) Generate configuration example to allow dynamic updates of a single hostname. The example named.conf text shows how to set an update policy for the specified name using the "name" nametype. The default key name is ddns-key.name. Note that the "self" nametype cannot be used, since the name to be updated may differ from the key name. This option cannot be used with the -z option. -z zone (ddns-confgen only.) Generate configuration example to allow dynamic updates of a zone: The example named.conf text shows how to set an update policy for the specified zone using the "zonesub" nametype, allowing updates to all subdomain names within that zone. This option cannot be used with the -s option. 
SEE ALSO nsupdate(1), named.conf(5), named(8), BIND 9 Administrator Reference Manual. AUTHOR Internet Systems Consortium, Inc. COPYRIGHT Copyright © 2009, 2014-2016 Internet Systems Consortium, Inc. ("ISC") ISC 2014-03-06 DDNS-CONFGEN(8)
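Conceptually, the generated key material is just random bytes, base64-encoded, paired with an algorithm name. A hypothetical Python sketch (not ISC's implementation, which is in C) of what an hmac-sha256 secret amounts to:

```python
# Hypothetical sketch; tsig-keygen's actual implementation is part of BIND.
import base64
import secrets

def make_tsig_secret(bits: int = 256) -> str:
    """Return a base64-encoded random secret, e.g. for hmac-sha256."""
    return base64.b64encode(secrets.token_bytes(bits // 8)).decode("ascii")

def key_statement(name: str, secret: str,
                  algorithm: str = "hmac-sha256") -> str:
    """Render a named.conf-style key statement like the tool prints."""
    return ('key "%s" {\n\talgorithm %s;\n\tsecret "%s";\n};'
            % (name, algorithm, secret))
```

Pasting such a key statement into both named.conf and the client's configuration is what lets the two sides authenticate each other's updates.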
| null |
ioalloccount
|
ioalloccount displays some accounting of memory allocated by IOKit allocators, including object instances, in the kernel. This information is useful for tracking leaks. Instance counts can also be found in the root of the IORegistry in the “IOKitDiagnostics” property.
|
ioalloccount – Summarize IOKit memory usage.
|
ioalloccount
| null |
ioalloccount Instance allocation = 0x0022c718 = 2225 K Container allocation = 0x00141bad = 1286 K IOMalloc allocation = 0x00638221 = 6368 K Pageable allocation = 0x00f4f000 = 15676 K SEE ALSO ioclasscount(8), ioreg(8) Darwin November 6, 2008 Darwin
|
uasysdiagnose
|
uasysdiagnose is used to dump a diagnostic state from useractivityd. macOS 14.5 June 15, 2017 macOS 14.5
|
uasysdiagnose – Produce system diagnostics for UserActivity.framework
| null | null | null |
dev_mkdb
|
The dev_mkdb command creates a db(3) hash access method database in “/var/run/dev.db” which contains the names of all of the character and block special files in the “/dev” directory, using the file type and the st_rdev field as the key. Keys are a structure containing a mode_t followed by a dev_t, with any padding zero'd out. The former is the type of the file (st_mode & S_IFMT), the latter is the st_rdev field. FILES /dev Device directory. /var/run/dev.db Database file. SEE ALSO ps(1), stat(2), db(3), devname(3), kvm_nlist(3), ttyname(3), kvm_mkdb(8) HISTORY The dev_mkdb command appeared in 4.4BSD. macOS 14.5 June 6, 1993 macOS 14.5
|
dev_mkdb – create /dev database
|
dev_mkdb
| null | null |
hdik
|
hdik is a simple tool that can be used to attach disk images directly to the DiskImages driver. The end result is functionally similar to passing -kernel to hdiutil(1)'s attach verb. hdik does not rely upon the presence of DiskImages or other high-level frameworks. The DiskImages driver only supports a selection of disk image formats: UDRW, UDRO, UDZO, ULFO, SPARSE (UDSP). It also supports shadow files. hdiutil(1)'s imageinfo verb indicates whether a particular image is kernel compatible. hdik requires root access to perform its functions. In the first form, an image to attach must be provided: imagefile path to the disk image file to attach. In its second form, hdik issues an eject command to the specified device. The argument is the full device node path (e.g. /dev/disk2). Any volumes mounted from the device must be unmounted first, or the command will fail. See umount(8).
|
hdik – lightweight tool to attach and mount disk images in-kernel
|
hdik imagefile [options] hdik -e device
|
-shadow [shadowfile] Use a shadow file in conjunction with the data in the image. This option prevents modification of the original image and allows read-only images to be used as read/write images. When blocks are being read from the image, blocks present in the shadow file override blocks in the base image. When blocks are being written, the writes will be redirected to the shadow file. If not specified, -shadow defaults to <imagename>.shadow. If the shadow file does not exist, it is created. -nomount Suppress automatic mounting of filesystems contained within the image. This will result in /dev entries being created, but will not mount any volumes. -drivekey keyname=value Specify a key/value pair for the IOHDIXHDDrive object created (shows up in the IOKit registry of devices which is viewable with ioreg(8)). SEE ALSO hdiutil(1), diskarbitrationd(8), diskutil(8), umount(8), ioreg(8) macOS 20 Mar 2014 macOS
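The shadow-file semantics described above (reads prefer shadow blocks; writes never touch the base image) can be modeled in a few lines. A hypothetical Python sketch, not hdik's implementation:

```python
# Hypothetical model of shadow-file behavior; the real driver works on
# fixed-size device blocks, not Python objects.
class ShadowedImage:
    def __init__(self, base_blocks):
        self.base = list(base_blocks)   # the (possibly read-only) base image
        self.shadow = {}                # block index -> overriding data

    def read(self, index):
        # Blocks present in the shadow file override blocks in the base image.
        return self.shadow.get(index, self.base[index])

    def write(self, index, data):
        # Writes go to the shadow; the base image is never modified.
        self.shadow[index] = data
```

After a `write`, a `read` of the same block returns the shadowed data while the base image is untouched, which is what lets a read-only image behave as read/write.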
| null |
slapdn
|
Slapdn is used to check the conformance of a DN based on the schema defined in slapd(8) and that loaded via slapd.conf(5). It opens the slapd.conf(5) configuration file or the slapd-config(5) backend, reads in the schema definitions, and then parses the DN list given on the command-line.
|
slapdn - Check a list of string-represented LDAP DNs based on schema syntax
|
/usr/sbin/slapdn [-d debug-level] [-f slapd.conf] [-F confdir] [-N|-P] [-o option[=value]] [-v] DN [...]
|
-d debug-level enable debugging messages as defined by the specified debug-level; see slapd(8) for details. -f slapd.conf specify an alternative slapd.conf(5) file. -F confdir specify a config directory. If both -f and -F are specified, the config file will be read and converted to config directory format and written to the specified directory. If neither option is specified, an attempt to read the default config directory will be made before trying to use the default config file. If a valid config directory exists then the default config file is ignored. -N only output a normalized form of the DN, suitable to be used in a normalization tool; incompatible with -P. -o option[=value] Specify an option with a(n optional) value. Possible generic options/values are: syslog=<subsystems> (see `-s' in slapd(8)) syslog-level=<level> (see `-S' in slapd(8)) syslog-user=<user> (see `-l' in slapd(8)) -P only output a prettified form of the DN, suitable to be used in a check and beautification tool; incompatible with -N. -v enable verbose mode.
|
To check a DN give the command: /usr/sbin/slapdn -f /etc/openldap/slapd.conf -v DN SEE ALSO ldap(3), slapd(8), slaptest(8) "OpenLDAP Administrator's Guide" (http://www.OpenLDAP.org/doc/admin/) ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 SLAPDN(8C)
|
vpnd
|
vpnd allows external hosts to tunnel via L2TP over IPSec or via PPTP from an insecure external network (such as the Internet) into a "secure" internal network, such as a corporate network. All traffic through the tunnel is encrypted to provide secure communications, with L2TP/IPSec providing a higher level of security than PPTP. vpnd listens for incoming connections, pairs each one with an available internal IP address, and passes the connection to pppd(8) with appropriate parameters. Parameters for vpnd are specified in a system configuration (plist) file in XML format. This file contains a dictionary of configurations each identified by a key referred to as a server_id. Parameters include the tunneling protocol, IP addresses to be assigned to clients, PPP parameters etc. vpnd is launched for a particular configuration by using the -i option which takes the server_id to be run as an argument. vpnd can also be run without the -i option. In this case it will check the configuration file for a special array which contains a list of configurations to be run and will fork and exec a copy of vpnd for each server_id to be run. Running multiple vpnd processes simultaneously for a particular protocol is not allowed. vpnd will be launched during the boot process by a startup item if the field VPNSERVER is defined in /etc/hostconfig with the value -YES-. Typically, in this case it will be launched without the -i option and will check the configuration file to determine which configuration(s) are to be run. vpnd logs items of interest to the system log. A different log path can be specified in the configuration file.
|
vpnd – Mac OS X VPN service daemon
|
vpnd [-d | -n | -x] [-i server_id] vpnd [-h]
|
The following options are available: -d Do not move to background and print log strings to the terminal. -h Print usage summary and exit. -i Server_id in the plist file that defines the configuration to be run. -n Do not move to background, print log information to the terminal, and quit after validating the argument list. -x Do not move to background.
|
In the default invocation, vpnd will read the list of configurations to run from the configuration file and launch them. This default configuration may be enabled at startup by setting VPNSERVER to -YES-. To specify a particular configuration to run, use vpnd -i server_id FILES & FOLDERS /usr/sbin/vpnd /etc/hostconfig /System/Library/StartupItems/NetworkExtensions /Library/Preferences/SystemConfiguration/com.apple.RemoteAccessServers.plist SEE ALSO pppd(8) vpnd(5) Mac OS X 21 August 2003 Mac OS X
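The configuration lookup described above can be illustrated with a short Python sketch. The key names "Servers" and "ActiveServers" are assumptions made for this example; the man page only specifies a dictionary of configurations keyed by server_id plus a special array listing the configurations to run by default.

```python
import plistlib

# A stand-in for com.apple.RemoteAccessServers.plist. The key names
# "Servers" and "ActiveServers" are assumptions for this sketch; the man
# page only describes a dictionary of configurations keyed by server_id
# plus a special array naming the configurations to run by default.
sample = {
    "Servers": {
        "com.example.l2tp": {"Server": {"SubType": "L2TP"}},
        "com.example.pptp": {"Server": {"SubType": "PPTP"}},
    },
    "ActiveServers": ["com.example.l2tp", "com.example.missing"],
}

def active_server_ids(plist_bytes):
    """Return the server_ids vpnd would fork and exec a copy for."""
    config = plistlib.loads(plist_bytes)
    servers = config.get("Servers", {})
    # Ignore active entries that have no matching configuration.
    return [sid for sid in config.get("ActiveServers", []) if sid in servers]

print(active_server_ids(plistlib.dumps(sample)))
```

With the sample data, only com.example.l2tp would be launched, since it is both listed as active and present in the configuration dictionary.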
|
postqueue
|
The postqueue(1) command implements the Postfix user interface for queue management. It implements operations that are traditionally available via the sendmail(1) command. See the postsuper(1) command for queue operations that require super-user privileges such as deleting a message from the queue or changing the status of a message. The following options are recognized: -c config_dir The main.cf configuration file is in the named directory instead of the default configuration directory. See also the MAIL_CONFIG environment setting below. -f Flush the queue: attempt to deliver all queued mail. This option implements the traditional "sendmail -q" command, by contacting the Postfix qmgr(8) daemon. Warning: flushing undeliverable mail frequently will result in poor delivery performance of all other mail. -i queue_id Schedule immediate delivery of deferred mail with the specified queue ID. This option implements the traditional sendmail -qI command, by contacting the flush(8) server. This feature is available with Postfix version 2.4 and later. -j Produce a queue listing in JSON format, based on output from the showq(8) daemon. The result is a stream of zero or more JSON objects, one per queue file. Each object is followed by a newline character to support simple streaming parsers. See "JSON OBJECT FORMAT" below for details. This feature is available in Postfix 3.1 and later. -p Produce a traditional sendmail-style queue listing. This option implements the traditional mailq command, by contacting the Postfix showq(8) daemon. Each queue entry shows the queue file ID, message size, arrival time, sender, and the recipients that still need to be delivered. If mail could not be delivered upon the last attempt, the reason for failure is shown. The queue ID string is followed by an optional status character: * The message is in the active queue, i.e. the message is selected for delivery. ! The message is in the hold queue, i.e. 
no further delivery attempt will be made until the mail is taken off hold. -s site Schedule immediate delivery of all mail that is queued for the named site. A numerical site must be specified as a valid RFC 5321 address literal enclosed in [], just like in email addresses. The site must be eligible for the "fast flush" service. See flush(8) for more information about the "fast flush" service. This option implements the traditional "sendmail -qRsite" command, by contacting the Postfix flush(8) daemon. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. As of Postfix 2.3, this option is available for the super-user only. JSON OBJECT FORMAT Each JSON object represents one queue file; it is emitted as a single text line followed by a newline character. Object members have string values unless indicated otherwise. Programs should ignore object members that are not listed here; the list of members is expected to grow over time. queue_name The name of the queue where the message was found. Note that the contents of the mail queue may change while it is being listed; some messages may appear more than once, and some messages may be missed. queue_id The queue file name. The queue_id may be reused within a Postfix instance unless "enable_long_queue_ids = true" and time is monotonic. Even then, the queue_id is not expected to be unique between different Postfix instances. Management tools that require a unique name should combine the queue_id with the myhostname setting of the Postfix instance. arrival_time The number of seconds since the start of the UNIX epoch. message_size The number of bytes in the message header and body. This number does not include message envelope information. It is approximately equal to the number of bytes that would be transmitted via SMTP including the <CR><LF> line endings. sender The envelope sender address. 
recipients An array containing zero or more objects with members: address One recipient address. delay_reason If present, the reason for delayed delivery. Delayed recipients may have no delay reason, for example, while delivery is in progress, or after the system was stopped before it could record the reason. SECURITY This program is designed to run with set-group ID privileges, so that it can connect to Postfix daemon processes. STANDARDS RFC 7159 (JSON notation) DIAGNOSTICS Problems are logged to syslogd(8) and to the standard error stream. ENVIRONMENT MAIL_CONFIG Directory with the main.cf file. In order to avoid exploitation of set-group ID privileges, a non-standard directory is allowed only if: • The name is listed in the standard main.cf file with the alternate_config_directories configuration parameter. • The command is invoked by the super-user. CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. The text below provides only a parameter summary. See postconf(5) for more details including examples. alternate_config_directories (empty) A list of non-default Postfix configuration directories that may be specified with "-c config_directory" on the command line (in the case of sendmail(1), with "-C config_directory"), or via the MAIL_CONFIG environment parameter. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. command_directory (see 'postconf -d' output) The location of all postfix administrative commands. fast_flush_domains ($relay_domains) Optional list of destinations that are eligible for per-destination logfiles with mail that is queued to those destinations. import_environment (see 'postconf -d' output) The list of environment parameters that a Postfix process will import from a non-Postfix parent process. queue_directory (see 'postconf -d' output) The location of the Postfix top-level queue directory. 
syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". trigger_timeout (10s) The time limit for sending a trigger to a Postfix daemon (for example, the pickup(8) or qmgr(8) daemon). Available in Postfix version 2.2 and later: authorized_flush_users (static:anyone) List of users who are authorized to flush the queue. authorized_mailq_users (static:anyone) List of users who are authorized to view the queue. FILES /var/spool/postfix, mail queue SEE ALSO qmgr(8), queue manager showq(8), list mail queue flush(8), fast flush service sendmail(1), Sendmail-compatible user interface postsuper(1), privileged queue operations README FILES Use "postconf readme_directory" or "postconf html_directory" to locate this information. ETRN_README, Postfix ETRN howto LICENSE The Secure Mailer license must be distributed with this software. HISTORY The postqueue command was introduced with Postfix version 1.1. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTQUEUE(1)
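The JSON listing produced by -j is easy to consume with a streaming parser, since each queue file is one JSON object on its own line. Here is a minimal Python sketch over an invented two-message sample; the field names follow the JSON OBJECT FORMAT section above, but every value is illustrative:

```python
import json

# Hypothetical two-line stream in the style of "postqueue -j" output:
# one JSON object per line, one object per queue file.
stream = "\n".join([
    json.dumps({"queue_name": "deferred", "queue_id": "9C25842E7",
                "arrival_time": 1700000000, "message_size": 2048,
                "sender": "alice@example.com",
                "recipients": [{"address": "bob@example.net",
                                "delay_reason": "connection timed out"}]}),
    json.dumps({"queue_name": "active", "queue_id": "B7F1C11A2",
                "arrival_time": 1700000100, "message_size": 512,
                "sender": "carol@example.com",
                "recipients": [{"address": "dave@example.org"}]}),
])

def summarize(lines):
    """Tally queued bytes per queue, ignoring object members we don't know
    about, as the format description advises."""
    totals = {}
    for line in lines.splitlines():
        obj = json.loads(line)
        totals[obj["queue_name"]] = totals.get(obj["queue_name"], 0) + obj["message_size"]
    return totals

print(summarize(stream))
```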
|
postqueue - Postfix queue control
|
To flush the mail queue: postqueue [-v] [-c config_dir] -f postqueue [-v] [-c config_dir] -i queue_id postqueue [-v] [-c config_dir] -s site To list the mail queue: postqueue [-v] [-c config_dir] -j postqueue [-v] [-c config_dir] -p
| null | null |
setkey
|
setkey adds, updates, dumps, or flushes Security Association Database (SAD) entries as well as Security Policy Database (SPD) entries in the kernel. setkey takes a series of operations from standard input (if invoked with -c) or the file named filename (if invoked with -f filename). (no flag) Dump the SAD entries or SPD entries contained in the specified file. -? Print short help. -a setkey usually does not display dead SAD entries with -D. If -a is also specified, the dead SAD entries will be displayed as well. A dead SAD entry is one that has expired but remains in the system because it is referenced by some SPD entries. -D Dump the SAD entries. If -P is also specified, the SPD entries are dumped. If -p is specified, the ports are displayed. -F Flush the SAD entries. If -P is also specified, the SPD entries are flushed. -H Add hexadecimal dump in -x mode. -h On NetBSD, synonym for -H. On other systems, synonym for -?. -k Use semantics used in kernel. Available only in Linux. See also -r. -l Loop forever with short output on -D. -n No action. The program will check validity of the input, but no changes to the SPD will be made. -r Use semantics described in IPsec RFCs. This mode is default. For details see section RFC vs Linux kernel semantics. Available only in Linux. See also -k. -x Loop forever and dump all the messages transmitted to the PF_KEY socket. -xx prints the unformatted timestamps. -V Print version string. -v Be verbose. The program will dump messages exchanged on the PF_KEY socket, including messages sent from other processes to the kernel. Configuration syntax With -c or -f on the command line, setkey accepts the following configuration syntax. Lines starting with hash signs (‘#’) are treated as comment lines. add [-46n] src dst protocol spi [extensions] algorithm ... ; Add an SAD entry. add can fail for multiple reasons, including when the key length does not match the specified algorithm. get [-46n] src dst protocol spi ; Show an SAD entry. 
delete [-46n] src dst protocol spi ; Remove an SAD entry. deleteall [-46n] src dst protocol ; Remove all SAD entries that match the specification. flush [protocol] ; Clear all SAD entries matched by the options. -F on the command line achieves the same functionality. dump [protocol] ; Dumps all SAD entries matched by the options. -D on the command line achieves the same functionality. spdadd [-46n] src_range dst_range upperspec policy ; Add an SPD entry. spdadd tagged tag policy ; Add an SPD entry based on a PF tag. tag must be a string surrounded by double quotes. spddelete [-46n] src_range dst_range upperspec -P direction ; Delete an SPD entry. spdflush ; Clear all SPD entries. -FP on the command line achieves the same functionality. spddump ; Dumps all SPD entries. -DP on the command line achieves the same functionality. Meta-arguments are as follows: src dst Source/destination of the secure communication is specified as an IPv4/v6 address, and an optional port number between square brackets. setkey can resolve a FQDN into numeric addresses. If the FQDN resolves into multiple addresses, setkey will install multiple SAD/SPD entries into the kernel by trying all possible combinations. -4, -6, and -n restrict the address resolution of FQDN in certain ways. -4 and -6 restrict results into IPv4/v6 addresses only, respectively. -n avoids FQDN resolution and requires addresses to be numeric addresses. protocol protocol is one of following: esp ESP based on rfc2406 esp-old ESP based on rfc1827 ah AH based on rfc2402 ah-old AH based on rfc1826 ipcomp IPComp tcp TCP-MD5 based on rfc2385 spi Security Parameter Index (SPI) for the SAD and the SPD. spi must be a decimal number, or a hexadecimal number with a “0x” prefix. SPI values between 0 and 255 are reserved for future use by IANA and cannot be used. TCP-MD5 associations must use 0x1000 and therefore only have per-host granularity at this time. 
extensions take some of the following: -m mode Specify a security protocol mode for use. mode is one of following: transport, tunnel, or any. The default value is any. -r size Specify window size of bytes for replay prevention. size must be decimal number in 32-bit word. If size is zero or not specified, replay checks don't take place. -u id Specify the identifier of the policy entry in the SPD. See policy. -f pad_option defines the content of the ESP padding. pad_option is one of following: zero-pad All the paddings are zero. random-pad A series of randomized values are used. seq-pad A series of sequential increasing numbers started from 1 are used. -f nocyclic-seq Don't allow cyclic sequence numbers. -lh time -ls time Specify hard/soft life time duration of the SA measured in seconds. -bh bytes -bs bytes Specify hard/soft life time duration of the SA measured in bytes transported. algorithm -E ealgo key Specify an encryption algorithm ealgo for ESP. -E ealgo key -A aalgo key Specify an encryption algorithm ealgo, as well as a payload authentication algorithm aalgo, for ESP. -A aalgo key Specify an authentication algorithm for AH. -C calgo [-R] Specify a compression algorithm for IPComp. If -R is specified, the spi field value will be used as the IPComp CPI (compression parameter index) on wire as- is. If -R is not specified, the kernel will use well-known CPI on wire, and spi field will be used only as an index for kernel internal usage. key must be a double-quoted character string, or a series of hexadecimal digits preceded by “0x”. Possible values for ealgo, aalgo, and calgo are specified in the Algorithms sections. src_range dst_range These select the communications that should be secured by IPsec. They can be an IPv4/v6 address or an IPv4/v6 address range, and may be accompanied by a TCP/UDP port specification. This takes the following form: address address/prefixlen address[port] address/prefixlen[port] prefixlen and port must be a decimal number. 
The square brackets around port are necessary and are not manpage metacharacters. For FQDN resolution, the rules applicable to src and dst apply here as well. upperspec Upper-layer protocol to be used. You can use one of the words in /etc/protocols as upperspec, or icmp6, ip4, or any. any stands for “any protocol”. You can also use the protocol number. You can specify a type and/or a code of ICMPv6 when the upper-layer protocol is ICMPv6. The specification must be placed after icmp6. A type is separated from a code by single comma. A code must always be specified. When a zero is specified, the kernel deals with it as a wildcard. Note that the kernel can not distinguish a wildcard from an ICMPv6 type of zero. For example, the following means that the policy doesn't require IPsec for any inbound Neighbor Solicitation: spdadd ::/0 ::/0 icmp6 135,0 -P in none ; NOTE: upperspec does not work against forwarding case at this moment, as it requires extra reassembly at the forwarding node (currently not implemented). There are many protocols in /etc/protocols, protocols other than TCP, UDP, and ICMP may not be suitable to use with IPsec. You have to consider carefully what to use. policy policy is in one of the following three formats: -P direction [priority specification] discard -P direction [priority specification] none -P direction [priority specification] ipsec protocol/mode/src-dst/level [...] You must specify the direction of its policy as direction. Either out, in, or fwd can be used. priority specification is used to control the placement of the policy within the SPD. Policy position is determined by a signed integer where higher priorities indicate the policy is placed closer to the beginning of the list and lower priorities indicate the policy is placed closer to the end of the list. Policies with equal priorities are added at the end of groups of such policies. 
Priority can only be specified when setkey has been compiled against kernel headers that support policy priorities (Linux >= 2.6.6). If the kernel does not support priorities, a warning message will be printed the first time a priority specification is used. Policy priority takes one of the following formats: {priority,prio} offset offset is an integer in the range from -2147483647 to 2147483648. {priority,prio} base {+,-} offset base is either low (-1073741824), def (0), or high (1073741824) offset is an unsigned integer. It can be up to 1073741824 for positive offsets, and up to 1073741823 for negative offsets. discard means the packet matching indexes will be discarded. none means that IPsec operation will not take place onto the packet. ipsec means that IPsec operation will take place onto the packet. The protocol/mode/src-dst/level part specifies the rule how to process the packet. Either ah, esp, or ipcomp must be used as protocol. mode is either transport or tunnel. If mode is tunnel, you must specify the end-point addresses of the SA as src and dst with ‘-’ between these addresses, which is used to specify the SA to use. If mode is transport, both src and dst can be omitted. level is to be one of the following: default, use, require, or unique. If the SA is not available in every level, the kernel will ask the key exchange daemon to establish a suitable SA. default means the kernel consults the system wide default for the protocol you specified, e.g. the esp_trans_deflev sysctl variable, when the kernel processes the packet. use means that the kernel uses an SA if it's available, otherwise the kernel keeps normal operation. require means SA is required whenever the kernel sends a packet matched with the policy. unique is the same as require; in addition, it allows the policy to match the unique out-bound SA. If you just specify the policy level unique, racoon(8) will configure the SA for the policy.
If you configure the SA by manual keying for that policy, you can put a decimal number as the policy identifier after unique separated by a colon ‘:’ like: unique:number in order to bind this policy to the SA. number must be between 1 and 32767. It corresponds to extensions -u of the manual SA configuration. When you want to use an SA bundle, you can define multiple rules. For example, if an IP header was followed by an AH header followed by an ESP header followed by an upper layer protocol header, the rule would be: esp/transport//require ah/transport//require; The rule order is very important. When NAT-T is enabled in the kernel, policy matching for ESP over UDP packets may be done on endpoint addresses and port (this depends on the system. System that do not perform the port check cannot support multiple endpoints behind the same NAT). When using ESP over UDP, you can specify port numbers in the endpoint addresses to get the correct matching. Here is an example: spdadd 10.0.11.0/24[any] 10.0.11.33/32[any] any -P out ipsec esp/tunnel/192.168.0.1[4500]-192.168.1.2[30000]/require ; These ports must be left unspecified (which defaults to 0) for anything other than ESP over UDP. They can be displayed in SPD dump using setkey -DPp. Note that “discard” and “none” are not in the syntax described in ipsec_set_policy(3). There are a few differences in the syntax. See ipsec_set_policy(3) for detail. Algorithms The following list shows the supported algorithms. protocol and algorithm are almost orthogonal. 
These authentication algorithms can be used as aalgo in -A aalgo of the protocol parameter: algorithm keylen (bits) comment hmac-md5 128 ah: rfc2403 128 ah-old: rfc2085 hmac-sha1 160 ah: rfc2404 160 ah-old: 128bit ICV (no document) keyed-md5 128 ah: 96bit ICV (no document) 128 ah-old: rfc1828 keyed-sha1 160 ah: 96bit ICV (no document) 160 ah-old: 128bit ICV (no document) null 0 to 2048 for debugging hmac-sha256 256 ah: 96bit ICV (draft-ietf-ipsec-ciph-sha-256-00) 256 ah-old: 128bit ICV (no document) hmac-sha384 384 ah: 96bit ICV (no document) 384 ah-old: 128bit ICV (no document) hmac-sha512 512 ah: 96bit ICV (no document) 512 ah-old: 128bit ICV (no document) hmac-ripemd160 160 ah: 96bit ICV (RFC2857) ah-old: 128bit ICV (no document) aes-xcbc-mac 128 ah: 96bit ICV (RFC3566) 128 ah-old: 128bit ICV (no document) tcp-md5 8 to 640 tcp: rfc2385 These encryption algorithms can be used as ealgo in -E ealgo of the protocol parameter: algorithm keylen (bits) comment des-cbc 64 esp-old: rfc1829, esp: rfc2405 3des-cbc 192 rfc2451 null 0 to 2048 rfc2410 blowfish-cbc 40 to 448 rfc2451 cast128-cbc 40 to 128 rfc2451 des-deriv 64 ipsec-ciph-des-derived-01 3des-deriv 192 no document rijndael-cbc 128/192/256 rfc3602 twofish-cbc 0 to 256 draft-ietf-ipsec-ciph-aes-cbc-01 aes-ctr 160/224/288 draft-ietf-ipsec-ciph-aes-ctr-03 Note that the first 128 bits of a key for aes-ctr will be used as AES key, and the remaining 32 bits will be used as nonce. These compression algorithms can be used as calgo in -C calgo of the protocol parameter: algorithm comment deflate rfc2394 RFC vs Linux kernel semantics The Linux kernel uses the fwd policy instead of the in policy for packets what are forwarded through that particular box. In kernel mode, setkey manages and shows policies and SAs exactly as they are stored in the kernel. 
In RFC mode, setkey creates fwd policies for every in policy inserted (not implemented yet) and filters out all fwd policies. RETURN VALUES The command exits with 0 on success, and non-zero on errors.
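The policy grammar above can be made concrete with a small helper that composes an spdadd line. The function is hypothetical, but its tunnel-mode output matches the ESP tunnel example shown later in the EXAMPLES section:

```python
def spdadd(src_range, dst_range, upperspec, direction, protocol="esp",
           mode="tunnel", endpoints=None, level="require"):
    """Compose an spdadd line following the protocol/mode/src-dst/level
    grammar described above. endpoints, required in tunnel mode, is a
    (src, dst) pair naming the SA end-point addresses; transport mode
    omits them, leaving the empty src-dst field in the rule."""
    if mode == "tunnel":
        if endpoints is None:
            raise ValueError("tunnel mode needs SA end-point addresses")
        sa = f"{endpoints[0]}-{endpoints[1]}"
    else:
        sa = ""
    rule = f"{protocol}/{mode}/{sa}/{level}"
    return f"spdadd {src_range} {dst_range} {upperspec} -P {direction} ipsec {rule} ;"

print(spdadd("10.0.11.41/32[21]", "10.0.11.33/32[any]", "any", "out",
             endpoints=("192.168.0.1", "192.168.1.2")))
```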
|
setkey – manually manipulate the IPsec SA/SP database
|
setkey [-knrv] file ... setkey [-knrv] -c setkey [-krv] -f filename setkey [-aklPrv] -D setkey [-Pvp] -F setkey [-H] -x setkey [-?V]
| null |
add 3ffe:501:4819::1 3ffe:501:481d::1 esp 123457 -E des-cbc 0x3ffe05014819ffff ; add -6 myhost.example.com yourhost.example.com ah 123456 -A hmac-sha1 "AH SA configuration!" ; add 10.0.11.41 10.0.11.33 esp 0x10001 -E des-cbc 0x3ffe05014819ffff -A hmac-md5 "authentication!!" ; get 3ffe:501:4819::1 3ffe:501:481d::1 ah 123456 ; flush ; dump esp ; spdadd 10.0.11.41/32[21] 10.0.11.33/32[any] any -P out ipsec esp/tunnel/192.168.0.1-192.168.1.2/require ; add 10.1.10.34 10.1.10.36 tcp 0x1000 -A tcp-md5 "TCP-MD5 BGP secret" ; SEE ALSO ipsec_set_policy(3), racoon(8), sysctl(8) Changed manual key configuration for IPsec, October 1999, http://www.kame.net/newsletter/19991007/. HISTORY The setkey command first appeared in the WIDE Hydrangea IPv6 protocol stack kit. The command was completely re-designed in June 1998. BUGS setkey should report and handle syntax errors better. For IPsec gateway configuration, src_range and dst_range with TCP/UDP port numbers does not work, as the gateway does not reassemble packets (it cannot inspect upper-layer headers). macOS 14.5 March 19, 2004 macOS 14.5
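Since add can fail when the key length does not match the specified algorithm, a pre-check of a hex key against the tables in the Algorithms section is easy to sketch. Only a few ESP encryption algorithms are transcribed here, and the helper name is illustrative:

```python
# Key lengths in bits for a few ESP encryption algorithms, transcribed
# from the Algorithms section above. A tuple is an inclusive range; a
# set lists the exact sizes allowed. This is a partial table.
EALGO_KEYLEN_BITS = {
    "des-cbc": {64},
    "3des-cbc": {192},
    "blowfish-cbc": (40, 448),
    "cast128-cbc": (40, 128),
    "rijndael-cbc": {128, 192, 256},
    "aes-ctr": {160, 224, 288},  # last 32 bits are the nonce
}

def key_ok(ealgo, key_hex):
    """Check that a 0x-prefixed hex key fits the algorithm's key length."""
    bits = (len(key_hex) - 2) * 4  # 4 bits per hex digit after the 0x prefix
    spec = EALGO_KEYLEN_BITS[ealgo]
    if isinstance(spec, tuple):
        lo, hi = spec
        return lo <= bits <= hi
    return bits in spec

# The des-cbc key from the first example above is 16 hex digits = 64 bits.
print(key_ok("des-cbc", "0x3ffe05014819ffff"))
```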
|
repairHomePermissions
| null | null | null | null | null |
spindump
|
spindump is used by various system components to create reports when an unresponsive application is force quit. Reports are stored at: /Library/Logs/DiagnosticReports/ For normal application force quits spindump will display a dialog to offer the choice to view more details and/or send a report to Apple. ------------------------------------- When run manually, spindump samples user and kernel callstacks for every process in the system. Spindump supports two display formats for callstacks, heavy and timeline, and includes a binary representation of its data at the end of reports for re-reporting with different options (see -i ). Spindump can also parse reports in timeline format even without a binary representation to re-report them in heavy format. When displayed in heavy format, callstacks are sorted by count and each unique callstack is displayed once. In this snippet: 84 __CFRunLoopRun + 1161 (CoreFoundation + 460665) [0x7fff8d662779] Address 0x7fff8d662779 was sampled 84 times in total, not necessarily consecutively. The address corresponds to __CFRunLoopRun in CoreFoundation. When displayed in timeline format, callstacks are sorted so that the leaf frames in the call tree are presented in chronological order. Each frame includes a time range of consecutive callstacks in which the frame was seen, which can be compared with the range of other frames to determine concurrency. If multiple samples of the same callstack were not consecutive, the callstack will be displayed multiple times. In this snippet: 23 __CFRunLoopRun + 1161 (CoreFoundation + 460665) [0x7fff8d662779] 50-72 Address 0x7fff8d662779 was sampled 23 times consecutively from the 50th to 72nd sample. In timeline format, spindump notes state changes for threads, e.g.: <darwinbg, timers coalesced> which indicates the change in state for the samples that follow. Any state not mentioned is unchanged from previous samples.
The state information spindump reports includes thread QoS, darwinbg, importance inheritance boost, suppression for App Nap, latency QoS (timers), I/O throttling tier, and cpu priority. Leaf frames will indicate whether the thread was running/runnable or suspended. A leading star (*) indicates a kernel frame or library. ARGUMENTS pid or partial-name the process to be sorted topmost in the report. "-notarget" may be used to avoid providing a target process when specifying a duration and interval. duration the duration of the sampling in seconds. If not specified, the default of 10 seconds is used. interval the number of milliseconds between samples. If not specified, the default of 10 milliseconds is used. -i path Read in the file located at path rather than sampling the live system. Supported file formats are: * Spindump text files containing a spindump binary format * Spindump text files without a spindump binary format written in timeline mode (with limited options, and only callstacks will be updated; summary information will not change) * Tailspin files * Concatenated kcdata stackshots * Concatenated microstackshots -o path Specifies where the report should be written. If path is a file, it will be overwritten. If path is a directory, a file will be created inside that directory with the name following the format <appname>_<pid>.spindump.txt. If a file by that name already exists, spindump will add a unique number to the filename. If not specified, spindump will output reports to files inside /tmp. -indexRange s-e Only include samples in the given range -startIndex s Omit frames before sample number s -endIndex e Omit frames after sample number e -heavy Sort callstacks by count (default) -timeline Sort callstacks chronologically -wait Wait for the process to exist before sampling. If the process already exists, spindump will begin sampling immediately.
-onlyRunnable Only display runnable callstacks -onlyBlocked Only display non-runnable callstacks -onlyTarget Only sample the target process (allows faster sampling rates) -proc proc If -onlyTarget is provided, sample proc as well. This option may be specified multiple times -sampleWithoutTarget Keep sampling for the entire sampling duration even if the target process exits -timelimit t Exit after t seconds even if the report hasn't been saved -stdout Print the report to stdout (in addition to writing to a file) -noFile Do not output to a file. Implies -stdout and will include the binary format in the stdout output -noBinary Do not include the spindump binary format at the bottom of the report (the file will not be able to be re-parsed) -noText Do not include the textual report, only include the binary format -open appname Specifies an app in which to open the resulting report -reveal Reveal the resulting report in Finder -siginfo After sampling, wait for SIGINFO before generating the report -delayonsignal d Stop sampling d seconds after receiving a signal to stop sampling instead of stopping immediately -threadprioritythreshold t Filter out any threads that have a priority below the given threshold. Pass a negative number to filter out threads that have a priority above the given threshold's absolute value -nothrottle Do not throttle sampling rate on excessive memory growth -noProcessingWhileSampling Do not parse stackshots until done sampling -displayIdleWorkQueueThreads Display idle work queue threads in the textual report (by default they are omitted) -aggregateCallTreesByThread Group call trees by thread ID rather than by dispatch queue -aggregateCallTreesByProcess Each process gets one call tree for all threads -omitFramesBelowSampleCount c Omit frames with count below c MICROSTACKSHOTS Microstackshots are gathered by the kernel to provide extremely lightweight sampling of single threads at a time. 
They can be viewed in spindump via the microstackshot command line options: -microstackshots Report on interrupt microstackshots, which provide a sampling of where cpu time is spent. This is the default mode if -microstackshots_io is not provided -microstackshots_io Report on I/O microstackshots, which provide a sampling of where file backed memory is dirtied -microstackshots_datastore path When reporting microstackshots, read from this location rather than using the live system's microstackshots. When saving with -microstackshots_save, write to this location -microstackshots_save Save microstackshot from the live system to the path specified by -microstackshots_datastore instead of generating a textual report -microstackshots_starttime date Only report microstackshots after this time. The date can be a string like "11/14/12 12:00am" or a single number representing a unix timestamp in seconds -microstackshots_endtime date Only report microstackshots before this time. The date can be a string like "11/14/12 12:00am" or a single number representing a unix timestamp in seconds -microstackshots_pid pid Only report microstackshots for the given process id -microstackshots_threadid thread_id Only report microstackshots for the given thread id -microstackshots_dsc_path path Path to a directory containing dyld shared cache layout files. If not specified, spindump uses the historical information for the current machine -batteryonly Only include microstackshots taken while the machine was running on battery power -aconly Only include microstackshots taken while the machine was running on AC power -useridleonly Only include microstackshots taken while the user was idle -useractiveonly Only include microstackshots taken while the user was active SEE ALSO SubmitDiagInfo(8), sample(1) Darwin April 19, 2016 Darwin
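The heavy-format frame line quoted in the description ("84 __CFRunLoopRun + 1161 ...") can be split into its fields with a regular expression. The pattern below is inferred from that single example, so treat it as a sketch rather than a complete grammar for spindump reports:

```python
import re

# Fields: sample count, symbol + offset, library + offset, address.
# Inferred from the one heavy-format example in the description above.
FRAME = re.compile(
    r"^\s*(?P<count>\d+)\s+(?P<symbol>.+?) \+ (?P<offset>\d+)"
    r" \((?P<library>[^+]+) \+ (?P<lib_offset>\d+)\) \[(?P<address>0x[0-9a-f]+)\]"
)

def parse_frame(line):
    """Return the frame's fields as a dict, or None if the line differs."""
    m = FRAME.match(line)
    return m.groupdict() if m else None

frame = parse_frame(
    "84 __CFRunLoopRun + 1161 (CoreFoundation + 460665) [0x7fff8d662779]")
print(frame["count"], frame["symbol"], frame["address"])
```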
|
spindump – Profile entire system during a time interval
|
spindump [pid | partial-name [duration [interval]]] [<options>]
| null | null |
cupsreject
|
The cupsaccept command instructs the printing system to accept print jobs to the specified destinations. The cupsreject command instructs the printing system to reject print jobs to the specified destinations. The -r option sets the reason for rejecting print jobs. If not specified, the reason defaults to "Reason Unknown".
|
cupsaccept/cupsreject - accept/reject jobs sent to a destination
|
cupsaccept [ -E ] [ -U username ] [ -h hostname[:port] ] destination(s) cupsreject [ -E ] [ -U username ] [ -h hostname[:port] ] [ -r reason ] destination(s)
|
The following options are supported by both cupsaccept and cupsreject: -E Forces encryption when connecting to the server. -U username Sets the username that is sent when connecting to the server. -h hostname[:port] Chooses an alternate server. -r "reason" Sets the reason string that is shown for a printer that is rejecting jobs. CONFORMING TO The cupsaccept and cupsreject commands correspond to the System V printing system commands "accept" and "reject", respectively. Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. Finally, the CUPS versions may ask the user for an access password depending on the printing system configuration. SEE ALSO cancel(1), cupsenable(8), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS cupsaccept(8)
| null |
traceroute
|
The Internet is a large and complex aggregation of network hardware, connected together by gateways. Tracking the route one's packets follow (or finding the miscreant gateway that's discarding your packets) can be difficult. traceroute utilizes the IP protocol `time to live' field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host. The only mandatory parameter is the destination host name or IP number. The default probe datagram length is 40 bytes, but this may be increased by specifying a packet size (in bytes) after the destination host name. Other options are: -a Turn on AS# lookups for each hop encountered. -A as_server Turn on AS# lookups and use the given server instead of the default. -d Enable socket level debugging. -D When an ICMP response to our probe datagram is received, print the differences between the transmitted packet and the packet quoted by the ICMP response. A key showing the location of fields within the transmitted packet is printed, followed by the original packet in hex, followed by the quoted packet in hex. Bytes that are unchanged in the quoted packet are shown as underscores. Note, the IP checksum and the TTL of the quoted packet are not expected to match. By default, only one probe per hop is sent with this option. -E Detect ECN bleaching. Set the IPTOS_ECN_ECT1 bit and report if that value has been bleached or mangled. -e Firewall evasion mode. Use fixed destination ports for UDP and TCP probes. The destination port does NOT increment with each packet sent. -f first_ttl Set the initial time-to-live used in the first outgoing probe packet. -F Set the "don't fragment" bit. -g gateway Specify a loose source route gateway (8 maximum). -i iface Specify a network interface to obtain the source IP address for outgoing probe packets. This is normally only useful on a multi-homed host. (See the -s flag for another way to do this.) -I Use ICMP ECHO instead of UDP datagrams.
(A synonym for "-P icmp"). -M first_ttl Set the initial time-to-live value used in outgoing probe packets. The default is 1, i.e., start with the first hop. -m max_ttl Set the max time-to-live (max number of hops) used in outgoing probe packets. The default is net.inet.ip.ttl hops (the same default used for TCP connections). -n Print hop addresses numerically rather than symbolically and numerically (saves a nameserver address-to-name lookup for each gateway found on the path). -P proto Send packets of specified IP protocol. The currently supported protocols are: UDP , TCP , GRE and ICMP Other protocols may also be specified (either by name or by number), though traceroute does not implement any special knowledge of their packet formats. This option is useful for determining which router along a path may be blocking packets based on IP protocol number. But see BUGS below. -p port Protocol specific. For UDP and TCP, sets the base port number used in probes (default is 33434). traceroute hopes that nothing is listening on UDP ports base to base+nhops-1 at the destination host (so an ICMP PORT_UNREACHABLE message will be returned to terminate the route tracing). If something is listening on a port in the default range, this option can be used to pick an unused port range. -q nqueries Set the number of probes per ``ttl'' to nqueries (default is three probes). -r Bypass the normal routing tables and send directly to a host on an attached network. If the host is not on a directly-attached network, an error is returned. This option can be used to ping a local host through an interface that has no route through it (e.g., after the interface was dropped by routed(8)). -s src_addr Use the following IP address (which must be given as an IP number, not a hostname) as the source address in outgoing probe packets. 
On hosts with more than one IP address, this option can be used to force the source address to be something other than the IP address of the interface the probe packet is sent on. If the IP address is not one of this machine's interface addresses, an error is returned and nothing is sent. (See the -i flag for another way to do this.) -S Print a summary of how many probes were not answered for each hop. -t tos Set the type-of-service in probe packets to the following value (default zero). The value must be a decimal integer in the range 0 to 255. This option can be used to see if different types-of- service result in different paths. (If you are not running a 4.4BSD or later system, this may be academic since the normal network services like telnet and ftp don't let you control the TOS). Not all values of TOS are legal or meaningful - see the IP spec for definitions. Useful values are probably ‘-t 16’ (low delay) and ‘-t 8’ (high throughput). -v Verbose output. Received ICMP packets other than TIME_EXCEEDED and UNREACHABLEs are listed. -w Set the time (in seconds) to wait for a response to a probe (default 5 sec.). -x Toggle IP checksums. Normally, this prevents traceroute from calculating IP checksums. In some cases, the operating system can overwrite parts of the outgoing packet but not recalculate the checksum (so in some cases the default is to not calculate checksums and using -x causes them to be calculated). Note that checksums are usually required for the last hop when using ICMP ECHO probes ( -I ). So they are always calculated when using ICMP. -z pausemsecs Set the time (in milliseconds) to pause between probes (default 0). Some systems such as Solaris and routers such as Ciscos rate limit ICMP messages. A good value to use with this is 500 (e.g. 1/2 second). 
This program attempts to trace the route an IP packet would follow to some internet host by launching UDP probe packets with a small ttl (time to live) then listening for an ICMP "time exceeded" reply from a gateway. We start our probes with a ttl of one and increase by one until we get an ICMP "port unreachable" (which means we got to "host") or hit a max (which defaults to net.inet.ip.ttl hops & can be changed with the -m flag). Three probes (changed with -q flag) are sent at each ttl setting and a line is printed showing the ttl, address of the gateway and round trip time of each probe. If the probe answers come from different gateways, the address of each responding system will be printed. If there is no response within a 5 sec. timeout interval (changed with the -w flag), a "*" is printed for that probe. We don't want the destination host to process the UDP probe packets so the destination port is set to an unlikely value (if some clod on the destination is using that value, it can be changed with the -p flag). A sample use and output might be: [yak 71]% traceroute nis.nsf.net. traceroute to nis.nsf.net (35.1.1.48), 64 hops max, 38 byte packet 1 helios.ee.lbl.gov (128.3.112.1) 19 ms 19 ms 0 ms 2 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 39 ms 19 ms 3 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 39 ms 19 ms 4 ccngw-ner-cc.Berkeley.EDU (128.32.136.23) 39 ms 40 ms 39 ms 5 ccn-nerif22.Berkeley.EDU (128.32.168.22) 39 ms 39 ms 39 ms 6 128.32.197.4 (128.32.197.4) 40 ms 59 ms 59 ms 7 131.119.2.5 (131.119.2.5) 59 ms 59 ms 59 ms 8 129.140.70.13 (129.140.70.13) 99 ms 99 ms 80 ms 9 129.140.71.6 (129.140.71.6) 139 ms 239 ms 319 ms 10 129.140.81.7 (129.140.81.7) 220 ms 199 ms 199 ms 11 nic.merit.edu (35.1.1.48) 239 ms 239 ms 239 ms Note that lines 2 & 3 are the same. This is due to a buggy kernel on the 2nd hop system - lbl-csam.arpa - that forwards packets with a zero ttl (a bug in the distributed version of 4.3 BSD). 
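The TTL walk described above can be sketched as a toy simulation (illustration only — real traceroute sends UDP probes over raw sockets with increasing time-to-live and has no such path list; the addresses below are taken from the sample output):

```python
def trace(path, max_ttl=64):
    """Simulate traceroute's TTL walk over a known path.

    path: list of gateway addresses; the last entry is the destination.
    A probe sent with time-to-live ttl expires at path[ttl-1], which
    answers with ICMP "time exceeded" -- unless the probe reaches the
    destination, whose "port unreachable" reply terminates the trace.
    """
    hops = []
    for ttl in range(1, max_ttl + 1):
        hop = path[ttl - 1]        # gateway where this TTL reaches zero
        hops.append(hop)
        if hop == path[-1]:        # destination reached: port unreachable
            break
    return hops

print(trace(["128.3.112.1", "128.32.216.1", "35.1.1.48"]))
# -> ['128.3.112.1', '128.32.216.1', '35.1.1.48']
```

Stopping early at `max_ttl` models the `-m` flag: hops beyond the limit are simply never probed.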
Note that you have to guess what path the packets are taking cross-country since the NSFNet (129.140) doesn't supply address-to-name translations for its NSSes. A more interesting example is: [yak 72]% traceroute allspice.lcs.mit.edu. traceroute to allspice.lcs.mit.edu (18.26.0.115), 64 hops max 1 helios.ee.lbl.gov (128.3.112.1) 0 ms 0 ms 0 ms 2 lilac-dmc.Berkeley.EDU (128.32.216.1) 19 ms 19 ms 19 ms 3 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 19 ms 19 ms 4 ccngw-ner-cc.Berkeley.EDU (128.32.136.23) 19 ms 39 ms 39 ms 5 ccn-nerif22.Berkeley.EDU (128.32.168.22) 20 ms 39 ms 39 ms 6 128.32.197.4 (128.32.197.4) 59 ms 119 ms 39 ms 7 131.119.2.5 (131.119.2.5) 59 ms 59 ms 39 ms 8 129.140.70.13 (129.140.70.13) 80 ms 79 ms 99 ms 9 129.140.71.6 (129.140.71.6) 139 ms 139 ms 159 ms 10 129.140.81.7 (129.140.81.7) 199 ms 180 ms 300 ms 11 129.140.72.17 (129.140.72.17) 300 ms 239 ms 239 ms 12 * * * 13 128.121.54.72 (128.121.54.72) 259 ms 499 ms 279 ms 14 * * * 15 * * * 16 * * * 17 * * * 18 ALLSPICE.LCS.MIT.EDU (18.26.0.115) 339 ms 279 ms 279 ms Note that the gateways 12, 14, 15, 16 & 17 hops away either don't send ICMP "time exceeded" messages or send them with a ttl too small to reach us. 14 - 17 are running the MIT C Gateway code that doesn't send "time exceeded"s. God only knows what's going on with 12. The silent gateway 12 in the above may be the result of a bug in the 4.[23] BSD network code (and its derivatives): 4.x (x <= 3) sends an unreachable message using whatever ttl remains in the original datagram. Since, for gateways, the remaining ttl is zero, the ICMP "time exceeded" is guaranteed to not make it back to us. 
The behavior of this bug is slightly more interesting when it appears on the destination system: 1 helios.ee.lbl.gov (128.3.112.1) 0 ms 0 ms 0 ms 2 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 19 ms 39 ms 3 lilac-dmc.Berkeley.EDU (128.32.216.1) 19 ms 39 ms 19 ms 4 ccngw-ner-cc.Berkeley.EDU (128.32.136.23) 39 ms 40 ms 19 ms 5 ccn-nerif35.Berkeley.EDU (128.32.168.35) 39 ms 39 ms 39 ms 6 csgw.Berkeley.EDU (128.32.133.254) 39 ms 59 ms 39 ms 7 * * * 8 * * * 9 * * * 10 * * * 11 * * * 12 * * * 13 rip.Berkeley.EDU (128.32.131.22) 59 ms ! 39 ms ! 39 ms ! Notice that there are 12 "gateways" (13 is the final destination) and exactly the last half of them are "missing". What's really happening is that rip (a Sun-3 running Sun OS3.5) is using the ttl from our arriving datagram as the ttl in its ICMP reply. So, the reply will time out on the return path (with no notice sent to anyone since ICMP's aren't sent for ICMP's) until we probe with a ttl that's at least twice the path length. I.e., rip is really only 7 hops away. A reply that returns with a ttl of 1 is a clue this problem exists. traceroute prints a "!" after the time if the ttl is <= 1. Since vendors ship a lot of obsolete (DEC's Ultrix, Sun 3.x) or non-standard (HPUX) software, expect to see this problem frequently and/or take care picking the target host of your probes. 
Other possible annotations after the time are !H, !N, or !P (host, network or protocol unreachable), !S (source route failed), !F-<pmtu> (fragmentation needed - the RFC1191 Path MTU Discovery value is displayed), !U or !W (destination network/host unknown), !I (source host is isolated), !A (communication with destination network administratively prohibited), !Z (communication with destination host administratively prohibited), !Q (for this ToS the destination network is unreachable), !T (for this ToS the destination host is unreachable), !X (communication administratively prohibited), !V (host precedence violation), !C (precedence cutoff in effect), or !<num> (ICMP unreachable code <num>). These are defined by RFC1812 (which supersedes RFC1716). If almost all the probes result in some kind of unreachable, traceroute will give up and exit. This program is intended for use in network testing, measurement and management. It should be used primarily for manual fault isolation. Because of the load it could impose on the network, it is unwise to use traceroute during normal operations or from automated scripts. AUTHOR Implemented by Van Jacobson from a suggestion by Steve Deering. Debugged by a cast of thousands with particularly cogent suggestions or fixes from C. Philip Wood, Tim Seaver and Ken Adelman. SEE ALSO netstat(1), ping(8), traceroute6(8) BUGS When using protocols other than UDP, functionality is reduced. In particular, the last packet will often appear to be lost, because even though it reaches the destination host, there's no way to know that because no ICMP message is sent back. In the TCP case, traceroute should listen for a RST from the destination host (or an intermediate router that's filtering packets), but this is not implemented yet. The AS number capability reports information that may sometimes be inaccurate due to discrepancies between the contents of the routing database server and the current state of the Internet. BSD 4.3 May 29, 2008 BSD 4.3
|
traceroute – print the route packets take to network host
|
traceroute [-adDeEFISnrvx] [-A as_server] [-f first_ttl] [-g gateway] [-i iface] [-M first_ttl] [-m max_ttl] [-P proto] [-p port] [-q nqueries] [-s src_addr] [-t tos] [-w waittime] [-z pausemsecs] host [packetsize]
| null | null |
quotaoff
|
Quotaon announces to the system that disk quotas should be enabled on one or more filesystems. Quotaoff announces to the system that the specified filesystems should have disk quotas turned off. The filesystem must be mounted and it must have the appropriate mount option file located at its root, the .quota.ops.user file for user quota configuration, and the .quota.ops.group file for group quota configuration. Quotaon also expects each filesystem to have the appropriate quota data files located at its root, the .quota.user file for user data, and the .quota.group file for group data. These filenames and their root location cannot be overridden. By default, quotaon will attempt to enable both user and group quotas. By default, quotaoff will disable both user and group quotas. Available options: -a If the -a flag is supplied in place of any filesystem names, quotaon/quotaoff will enable/disable any filesystems with an existing mount option file at its root. The mount option file specifies the types of quotas that are to be configured. -g Only group quotas will be enabled/disabled. The mount option file, .quota.ops.group, must exist at the root of the filesystem. -u Only user quotas will be enabled/disabled. The mount option file, .quota.ops.user, must exist at the root of the filesystem. -v Causes quotaon and quotaoff to print a message for each filesystem where quotas are turned on or off. Specifying both -g and -u is equivalent to the default. Quotas for both users and groups will automatically be turned on at filesystem mount if the appropriate mount option file and binary data file is in place at its root. FILES Each of the following quota files is located at the root of the mounted filesystem. The mount option files are empty files whose existence indicates that quotas are to be enabled for that filesystem. 
.quota.user data file containing user quotas .quota.group data file containing group quotas .quota.ops.user mount option file used to enable user quotas .quota.ops.group mount option file used to enable group quotas SEE ALSO quota(1), quotactl(2), edquota(8), quotacheck(8), repquota(8) HISTORY The quotaon command appeared in 4.2BSD. BSD 4.2 October 17, 2002 BSD 4.2
|
quotaon, quotaoff – turn filesystem quotas on and off
|
quotaon [-g] [-u] [-v] filesystem ... quotaon [-g] [-u] [-v] -a quotaoff [-g] [-u] [-v] filesystem ... quotaoff [-g] [-u] [-v] -a
| null | null |
newsyslog
|
The newsyslog utility should be scheduled to run periodically by cron(8). When it is executed it archives log files if necessary. If a log file is determined to require archiving, newsyslog rearranges the files so that “logfile” is empty, “logfile.0” has the last period's logs in it, “logfile.1” has the next to last period's logs in it, and so on, up to a user-specified number of archived logs. Optionally the archived logs can be compressed to save space. A log can be archived for three reasons: 1. It is larger than the configured size (in kilobytes). 2. A configured number of hours have elapsed since the log was last archived. 3. This is the specific configured hour for rotation of the log. The granularity of newsyslog is dependent on how often it is scheduled to run by cron(8). Since the program is quite fast, it may be scheduled to run every hour without any ill effects, and mode three (above) assumes that this is so.
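The rearrangement described above can be sketched as follows (a minimal illustration, assuming none of newsyslog's compression, ownership, or signalling behavior):

```python
import os

def rotate(logfile, count):
    """Sketch of newsyslog-style archiving: drop logfile.(count-1),
    shift each logfile.N up to logfile.(N+1), move logfile to
    logfile.0, and recreate an empty logfile for the new period."""
    oldest = f"{logfile}.{count - 1}"
    if os.path.exists(oldest):
        os.remove(oldest)                    # oldest archive falls off the end
    for n in range(count - 2, -1, -1):       # shift from highest to lowest
        src = f"{logfile}.{n}"
        if os.path.exists(src):
            os.rename(src, f"{logfile}.{n + 1}")
    if os.path.exists(logfile):
        os.rename(logfile, f"{logfile}.0")   # last period's logs
    open(logfile, "w").close()               # start the new period empty
```

The real utility additionally compresses archives, preserves owner/group and mode from the configuration entry, and signals syslogd(8) (or the configured daemon) after rotating.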
|
newsyslog – maintain system log files to manageable sizes
|
newsyslog [-CFNnrsv] [-R tagname] [-a directory] [-d directory] [-f config_file] [file ...]
|
The following options can be used with newsyslog: -f config_file Instruct newsyslog to use config_file instead of /etc/newsyslog.conf and /etc/newsyslog.d/*.conf for its configuration file. -a directory Specify a directory into which archived log files will be written. If a relative path is given, it is appended to the path of each log file and the resulting path is used as the directory into which the archived log for that log file will be written. If an absolute path is given, all archived logs are written into the given directory. If any component of the path directory does not exist, it will be created when newsyslog is run. -d directory Specify a directory which all log files will be relative to. To allow archiving of logs outside the root, the directory passed to the -a option is unaffected. -v Place newsyslog in verbose mode. In this mode it will print out each log and its reasons for either trimming that log or skipping it. -n Cause newsyslog not to trim the logs, but to print out what it would do if this option were not specified. -r Remove the restriction that newsyslog must be running as root. Of course, newsyslog will not be able to send a HUP signal to syslogd(8) so this option should only be used in debugging. -s Specify that newsyslog should not send any signals to any daemon processes that it would normally signal when rotating a log file. For any log file which is rotated, this option will usually also mean the rotated log file will not be compressed if there is a daemon which would have been signalled without this option. However, this option is most likely to be useful when specified with the -R option, and in that case the compression will be done. -C If specified once, then newsyslog will create any log files which do not exist, and which have the C flag specified in their config file entry. If specified multiple times, then newsyslog will create all log files which do not already exist. 
If log files are given on the command-line, then the -C or -CC will only apply to those specific log files. -F Force newsyslog to trim the logs, even if the trim conditions have not been met. This option is useful for diagnosing system problems by providing you with fresh logs that contain only the problems. -N Do not perform any rotations. This option is intended to be used with the -C or -CC options when creating log files is the only objective. -R tagname Specify that newsyslog should rotate a given list of files, even if trim conditions are not met for those files. The tagname is only used in the messages written to the log files which are rotated. This differs from the -F option in that one or more log files must also be specified, so that newsyslog will only operate on those specific files. This option is mainly intended for the daemons or programs which write some log files, and want to trigger a rotate based on their own criteria. With this option they can execute newsyslog to trigger the rotate when they want it to happen, and still give the system administrator a way to specify the rules of rotation (such as how many backup copies are kept, and what kind of compression is done). When a daemon does execute newsyslog with the -R option, it should make sure all of the log files are closed before calling newsyslog, and then it should re-open the files after newsyslog returns. Usually the calling process will also want to specify the -s option, so newsyslog will not send a signal to the very process which called it to force the rotate. Skipping the signal step will also mean that newsyslog will return faster, since newsyslog normally waits a few seconds after any signal that is sent. If additional command line arguments are given, newsyslog will only examine log files that match those arguments; otherwise, it will examine all files listed in the configuration file(s). 
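The configuration file read via -f (or the defaults listed under FILES) holds one line per log in the newsyslog.conf(5) format; the log names and values below are purely illustrative:

```
# logfilename        [owner:group]  mode count size when  flags
/var/log/system.log                 640  7     *    @T00  J
/var/log/myapp.log   root:admin     640  5     1000 *     J
```

Here a `*` size means "never rotate on size", `@T00` requests rotation daily at midnight, a numeric size is a threshold in kilobytes, and the `J` flag asks for bzip2 compression of archived logs.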
FILES /etc/newsyslog.conf newsyslog configuration file /etc/newsyslog.d/ newsyslog configuration directory COMPATIBILITY Previous versions of the newsyslog utility used the dot (``.'') character to distinguish the group name. Beginning with FreeBSD 3.3, this has been changed to a colon (``:'') character so that user and group names may contain the dot character. The dot (``.'') character is still accepted for backwards compatibility. HISTORY The newsyslog utility originated from NetBSD and first appeared in FreeBSD 2.2. AUTHORS Theodore Ts'o, MIT Project Athena Copyright 1987, Massachusetts Institute of Technology SEE ALSO bzip2(1), gzip(1), syslog(3), newsyslog.conf(5), chown(8), syslogd(8) BUGS Does not yet automatically read the logs to find security breaches. macOS 14.5 February 24, 2005 macOS 14.5
| null |
postfix
|
This command is reserved for the superuser. To submit mail, use the Postfix sendmail(1) command. The postfix(1) command controls the operation of the Postfix mail system: start or stop the master(8) daemon, do a health check, and other maintenance. By default, the postfix(1) command sets up a standardized environment and runs the postfix-script shell script to do the actual work. However, when support for multiple Postfix instances is configured, postfix(1) executes the command specified with the multi_instance_wrapper configuration parameter. This command will execute the command for each applicable Postfix instance. The following commands are implemented: check Warn about bad directory/file ownership or permissions, and create missing directories. start Start the Postfix mail system. This also runs the configuration check described above. stop Stop the Postfix mail system in an orderly fashion. If possible, running processes are allowed to terminate at their earliest convenience. Note: in order to refresh the Postfix mail system after a configuration change, do not use the start and stop commands in succession. Use the reload command instead. abort Stop the Postfix mail system abruptly. Running processes are signaled to stop immediately. flush Force delivery: attempt to deliver every message in the deferred mail queue. Normally, attempts to deliver delayed mail happen at regular intervals, the interval doubling after each failed attempt. Warning: flushing undeliverable mail frequently will result in poor delivery performance of all other mail. reload Re-read configuration files. Running processes terminate at their earliest convenience. status Indicate if the Postfix mail system is currently running. set-permissions [name=value ...] Set the ownership and permissions of Postfix related files and directories, as specified in the postfix-files file. Specify name=value to override and update specific main.cf configuration parameters. 
Use this, for example, to change the mail_owner or setgid_group setting for an already installed Postfix system. This feature is available in Postfix 2.1 and later. With Postfix 2.0 and earlier, use "$config_directory/post-install set-permissions". tls subcommand Enable opportunistic TLS in the Postfix SMTP client or server, and manage Postfix SMTP server TLS private keys and certificates. See postfix-tls(1) for documentation. This feature is available in Postfix 3.1 and later. upgrade-configuration [name=value ...] Update the main.cf and master.cf files with information that Postfix needs in order to run: add or update services, and add or update configuration parameter settings. Specify name=value to override and update specific main.cf configuration parameters. This feature is available in Postfix 2.1 and later. With Postfix 2.0 and earlier, use "$config_directory/post-install upgrade-configuration". The following options are implemented: -c config_dir Read the main.cf and master.cf configuration files in the named directory instead of the default configuration directory. Use this to distinguish between multiple Postfix instances on the same host. With Postfix 2.6 and later, this option forces the postfix(1) command to operate on the specified Postfix instance only. This behavior is inherited by postfix(1) commands that run as a descendant of the current process. -D (with postfix start only) Run each Postfix daemon under control of a debugger as specified via the debugger_command configuration parameter. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. ENVIRONMENT The postfix(1) command exports the following environment variables before executing the postfix-script file: MAIL_CONFIG This is set when the -c command-line option is present. With Postfix 2.6 and later, this environment variable forces the postfix(1) command to operate on the specified Postfix instance only. 
This behavior is inherited by postfix(1) commands that run as a descendant of the current process. MAIL_VERBOSE This is set when the -v command-line option is present. MAIL_DEBUG This is set when the -D command-line option is present. CONFIGURATION PARAMETERS The following main.cf configuration parameters are exported as environment variables with the same names: config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. command_directory (see 'postconf -d' output) The location of all postfix administrative commands. daemon_directory (see 'postconf -d' output) The directory with Postfix support programs and daemon programs. html_directory (see 'postconf -d' output) The location of Postfix HTML files that describe how to build, configure or operate a specific Postfix subsystem or feature. mail_owner (postfix) The UNIX system account that owns the Postfix queue and most Postfix daemon processes. mailq_path (see 'postconf -d' output) Sendmail compatibility feature that specifies where the Postfix mailq(1) command is installed. manpage_directory (see 'postconf -d' output) Where the Postfix manual pages are installed. newaliases_path (see 'postconf -d' output) Sendmail compatibility feature that specifies the location of the newaliases(1) command. queue_directory (see 'postconf -d' output) The location of the Postfix top-level queue directory. readme_directory (see 'postconf -d' output) The location of Postfix README files that describe how to build, configure or operate a specific Postfix subsystem or feature. sendmail_path (see 'postconf -d' output) A Sendmail compatibility feature that specifies the location of the Postfix sendmail(1) command. setgid_group (postdrop) The group ownership of set-gid Postfix commands and of group-writable Postfix directories. 
Available in Postfix version 2.5 and later: data_directory (see 'postconf -d' output) The directory with Postfix-writable data files (for example: caches, pseudo-random numbers). Available in Postfix version 3.0 and later: meta_directory (see 'postconf -d' output) The location of non-executable files that are shared among multiple Postfix instances, such as postfix-files, dynamicmaps.cf, and the multi-instance template files main.cf.proto and master.cf.proto. shlib_directory (see 'postconf -d' output) The location of Postfix dynamically-linked libraries (libpostfix-*.so), and the default location of Postfix database plugins (postfix-*.so) that have a relative pathname in the dynamicmaps.cf file. Available in Postfix version 3.1 and later: openssl_path (openssl) The location of the OpenSSL command line program openssl(1). Other configuration parameters: import_environment (see 'postconf -d' output) The list of environment parameters that a Postfix process will import from a non-Postfix parent process. syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". Available in Postfix version 2.6 and later: multi_instance_directories (empty) An optional list of non-default Postfix configuration directories; these directories belong to additional Postfix instances that share the Postfix executable files and documentation with the default Postfix instance, and that are started, stopped, etc., together with the default Postfix instance. multi_instance_wrapper (empty) The pathname of a multi-instance manager command that the postfix(1) command invokes when the multi_instance_directories parameter value is non-empty. multi_instance_group (empty) The optional instance group name of this Postfix instance. multi_instance_name (empty) The optional instance name of this Postfix instance. 
multi_instance_enable (no) Allow this Postfix instance to be started, stopped, etc., by a multi-instance manager. FILES Prior to Postfix version 2.6, all of the following files were in $config_directory. Some files are now in $daemon_directory so that they can be shared among multiple instances that run the same Postfix version. Use the command "postconf config_directory" or "postconf daemon_directory" to expand the names into their actual values. $config_directory/main.cf, Postfix configuration parameters $config_directory/master.cf, Postfix daemon processes $daemon_directory/postfix-files, file/directory permissions $daemon_directory/postfix-script, administrative commands $daemon_directory/post-install, post-installation configuration $daemon_directory/dynamicmaps.cf, plug-in database clients SEE ALSO Commands: postalias(1), create/update/query alias database postcat(1), examine Postfix queue file postconf(1), Postfix configuration utility postfix(1), Postfix control program postfix-tls(1), Postfix TLS management postkick(1), trigger Postfix daemon postlock(1), Postfix-compatible locking postlog(1), Postfix-compatible logging postmap(1), Postfix lookup table manager postmulti(1), Postfix multi-instance manager postqueue(1), Postfix mail queue control postsuper(1), Postfix housekeeping mailq(1), Sendmail compatibility interface newaliases(1), Sendmail compatibility interface sendmail(1), Sendmail compatibility interface Postfix configuration: bounce(5), Postfix bounce message templates master(5), Postfix master.cf file syntax postconf(5), Postfix main.cf file syntax postfix-wrapper(5), Postfix multi-instance API Table-driven mechanisms: access(5), Postfix SMTP access control table aliases(5), Postfix alias database canonical(5), Postfix input address rewriting generic(5), Postfix output address rewriting header_checks(5), body_checks(5), Postfix content inspection relocated(5), Users that have moved transport(5), Postfix routing table virtual(5), Postfix virtual 
aliasing Table lookup mechanisms: cidr_table(5), Associate CIDR pattern with value ldap_table(5), Postfix LDAP client lmdb_table(5), Postfix LMDB database driver memcache_table(5), Postfix memcache client mysql_table(5), Postfix MYSQL client nisplus_table(5), Postfix NIS+ client pcre_table(5), Associate PCRE pattern with value pgsql_table(5), Postfix PostgreSQL client regexp_table(5), Associate POSIX regexp pattern with value socketmap_table(5), Postfix socketmap client sqlite_table(5), Postfix SQLite database driver tcp_table(5), Postfix client-server table lookup Daemon processes: anvil(8), Postfix connection/rate limiting bounce(8), defer(8), trace(8), Delivery status reports cleanup(8), canonicalize and enqueue message discard(8), Postfix discard delivery agent dnsblog(8), DNS black/whitelist logger error(8), Postfix error delivery agent flush(8), Postfix fast ETRN service local(8), Postfix local delivery agent master(8), Postfix master daemon oqmgr(8), old Postfix queue manager pickup(8), Postfix local mail pickup pipe(8), deliver mail to non-Postfix command postscreen(8), Postfix zombie blocker proxymap(8), Postfix lookup table proxy server qmgr(8), Postfix queue manager qmqpd(8), Postfix QMQP server scache(8), Postfix connection cache manager showq(8), list Postfix mail queue smtp(8), lmtp(8), Postfix SMTP+LMTP client smtpd(8), Postfix SMTP server spawn(8), run non-Postfix server tlsmgr(8), Postfix TLS cache and randomness manager tlsproxy(8), Postfix TLS proxy server trivial-rewrite(8), Postfix address rewriting verify(8), Postfix address verification virtual(8), Postfix virtual delivery agent Other: syslogd(8), system logging README FILES Use "postconf readme_directory" or "postconf html_directory" to locate this information. 
OVERVIEW, overview of Postfix commands and processes BASIC_CONFIGURATION_README, Postfix basic configuration ADDRESS_REWRITING_README, Postfix address rewriting SMTPD_ACCESS_README, SMTP relay/access control CONTENT_INSPECTION_README, Postfix content inspection QSHAPE_README, Postfix queue analysis LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA TLS support by: Lutz Jaenicke Brandenburg University of Technology Cottbus, Germany Victor Duchovni Morgan Stanley SASL support originally by: Till Franke SuSE Rhein/Main AG 65760 Eschborn, Germany LMTP support originally by: Philip A. Prindeville Mirapoint, Inc. USA. Amos Gouaux University of Texas at Dallas P.O. Box 830688, MC34 Richardson, TX 75083, USA IPv6 support originally by: Mark Huizer, Eindhoven University, The Netherlands Jun-ichiro 'itojun' Hagino, KAME project, Japan The Linux PLD project Dean Strik, Eindhoven University, The Netherlands POSTFIX(1)
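As a sketch of how the multi_instance_* parameters described above fit together (the instance name and directory below are hypothetical), the default instance's main.cf might carry:

```
# Default-instance main.cf fragment (hypothetical secondary instance)
multi_instance_wrapper = $command_directory/postmulti -p --
multi_instance_directories = /etc/postfix-out

# And in the secondary instance's /etc/postfix-out/main.cf:
# multi_instance_name = postfix-out
# multi_instance_enable = yes
```

With this in place, "postfix start" and "postfix stop" act on both instances through the wrapper, while "postfix -c /etc/postfix-out <command>" restricts the operation to the secondary instance.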
|
postfix - Postfix control program
|
postfix [-Dv] [-c config_dir] command
| null | null |
cvfsdb
|
The cvfsdb command is a tool for debugging an Xsan volume. WARNING: Apple Internal use only. The cvfsdb command can easily damage an Xsan volume, and should only be used under the direction of customer support.
|
cvfsdb - Xsan File System debugging tool SYNOPSIS cvfsdb VolName
| null |
VolName The volume to debug. COMMANDS The cvfsdb command is an interactive program that contains built-in help on commands and usage. help [command] Display help information. If command is omitted, the help command will display a list of commands that cvfsdb can understand. If command is provided, command-specific help will be given. exit, q, quit Exit cvfsdb. The output of any command can be redirected using: command | shell_command Redirect the output of command to shell_command using popen(3). command > file Redirect the output of command into file, which will be overwritten if it exists. command >> file Append the output of command to file. SEE ALSO popen(3) Xsan File System June 2014 CVFSDB(8)
| null |
snmptrapd
|
snmptrapd is an SNMP application that receives and logs SNMP TRAP and INFORM messages. Note: the default is to listen on UDP port 162 on all IPv4 interfaces. Since 162 is a privileged port, snmptrapd must typically be run as root.
|
snmptrapd - Receive and log SNMP trap messages.
|
snmptrapd [OPTIONS] [LISTENING ADDRESSES]
|
-a Ignore authenticationFailure traps. -A Append to the log file rather than truncating it. Note that this needs to come before any -Lf options that it should apply to. -c FILE Read FILE as a configuration file (or a comma-separated list of configuration files). -C Do not read any configuration files except the one optionally specified by the -c option. -d Dump (in hexadecimal) the sent and received SNMP packets. -D[TOKEN[,...]] Turn on debugging output for the given TOKEN(s). Try ALL for extremely verbose output. -f Do not fork() from the calling shell. -F FORMAT When logging to standard output, use the format in the string FORMAT. See the section FORMAT SPECIFICATIONS below for more details. -h, --help Display a brief usage message and then exit. -H Display a list of configuration file directives understood by the trap daemon and then exit. -I [-]INITLIST Specifies which modules should (or should not) be initialized when snmptrapd starts up. If the comma-separated INITLIST is preceded with a '-', it is the list of modules that should not be started. Otherwise this is the list of the only modules that should be started. To get a list of compiled modules, run snmptrapd with the arguments -Dmib_init -H (assuming debugging support has been compiled in). -L[efos] Specify where logging output should be directed (standard error or output, to a file or via syslog). See LOGGING OPTIONS in snmpcmd(1) for details. -m MIBLIST Specifies a colon separated list of MIB modules to load for this application. This overrides the environment variable MIBS. See snmpcmd(1) for details. -M DIRLIST Specifies a colon separated list of directories to search for MIBs. This overrides the environment variable MIBDIRS. See snmpcmd(1) for details. -n Do not attempt to translate source addresses of incoming packets into hostnames. -p FILE Save the process ID of the trap daemon in FILE. -O [abeEfnqQsStTuUvxX] Specifies how MIB objects and other output should be displayed. 
See the section OUTPUT OPTIONS in the snmpcmd(1) manual page for details. -t Do not log traps to syslog. This is useful if you want the snmptrapd application to only run traphandle hooks and not to log any traps to any location. -v, --version Print version information for the trap daemon and then exit. -x ADDRESS Connect to the AgentX master agent on the specified address, rather than the default AGENTX_SOCKET. See snmpd(8) for details of the format of such addresses. --name="value" Specifies any token ("name") supported in the snmptrapd.conf file and sets its value to "value", overriding the corresponding token in the snmptrapd.conf file. See snmptrapd.conf(5) for the full list of tokens. FORMAT SPECIFICATIONS snmptrapd interprets format strings similarly to printf(3). It understands the following formatting sequences: %% a literal % %a the contents of the agent-addr field of the PDU (v1 TRAPs only) %A the hostname corresponding to the contents of the agent-addr field of the PDU, if available, otherwise the contents of the agent-addr field of the PDU (v1 TRAPs only). 
%b PDU source address (Note: this is not necessarily an IPv4 address) %B PDU source hostname if available, otherwise PDU source address (see note above) %h current hour on the local system %H the hour field from the sysUpTime.0 varbind %j current minute on the local system %J the minute field from the sysUpTime.0 varbind %k current second on the local system %K the seconds field from the sysUpTime.0 varbind %l current day of month on the local system %L the day of month field from the sysUpTime.0 varbind %m current (numeric) month on the local system %M the numeric month field from the sysUpTime.0 varbind %N enterprise string %q trap sub-type (numeric, in decimal) %P security information from the PDU (community name for v1/v2c, user and context for v3) %t decimal number of seconds since the operating system epoch (as returned by time(2)) %T the value of the sysUpTime.0 varbind in seconds %v list of variable-bindings from the notification payload. These will be separated by a tab, or by a comma and a blank if the alternate form is requested. See also %V. %V specifies the variable-bindings separator. This takes a sequence of characters, up to the next % (to embed a % in the string, use \%) %w trap type (numeric, in decimal) %W trap description %y current year on the local system %Y the year field from the sysUpTime.0 varbind In addition to these values, an optional field width, precision, and flag value may also be specified, just as in printf(3). 
The following flags are supported: - left justify 0 use leading zeros # use alternate form The "use alternate form" flag changes the behavior of various format string sequences: Time information will be displayed based on GMT (rather than the local timezone) The variable-bindings will be a comma-separated list (rather than a tab-separated one) The system uptime will be broken down into a human-meaningful format (rather than being a simple integer) Examples: To get a message like "14:03 TRAP3.1 from humpty.ucd.edu" you could use something like this: snmptrapd -P -F "%02.2h:%02.2j TRAP%w.%q from %A\n" If you want the same thing but in GMT rather than local time, use snmptrapd -P -F "%#02.2h:%#02.2j TRAP%w.%q from %A\n" LISTENING ADDRESSES By default, snmptrapd listens for incoming SNMP TRAP and INFORM packets on UDP port 162 on all IPv4 interfaces. However, it is possible to modify this behaviour by specifying one or more listening addresses as arguments to snmptrapd. See the snmpd(8) manual page for more information about the format of listening addresses. NOTIFICATION-LOG-MIB SUPPORT As of net-snmp 5.0, the snmptrapd application supports the NOTIFICATION-LOG-MIB. It does this by opening an AgentX subagent connection to the master snmpd agent and registering the notification log tables. As long as the snmpd application is started first, it will attach itself to it and thus you should be able to view the last recorded notifications via the nlmLogTable and nlmLogVariableTable. See the snmptrapd.conf file and the "doNotRetainNotificationLogs" token for turning off this support. See the NOTIFICATION-LOG-MIB for more details about the MIB itself. EXTENSIBILITY AND CONFIGURATION See the snmptrapd.conf(5) manual page. SEE ALSO snmpcmd(1), snmpd(8), printf(3), snmptrapd.conf(5), syslog(8), variables(5) V5.6.2.1 30 Mar 2011 SNMPTRAPD(8)
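The EXTENSIBILITY AND CONFIGURATION section defers to snmptrapd.conf(5). As a minimal, hedged sketch of such a configuration (the script path /usr/local/bin/log-trap.sh is hypothetical), a traphandle directive passes each received notification to an external program on its standard input:

```
# snmptrapd.conf (location varies by installation)
# Accept v1/v2c notifications with community "public" for logging
# and for handler execution.
authCommunity log,execute public
# Run this program for any notification not matched by a more
# specific traphandle entry.
traphandle default /usr/local/bin/log-trap.sh
```

The handler receives the sender hostname, sender address, and the variable bindings of the notification, one per line, on standard input.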
| null |
chroot
|
The chroot utility changes its current and root directories to the supplied directory newroot and then exec's command with provided arguments, if supplied, or an interactive copy of the user's login shell. The options are as follows: -G group[,group ...] Run the command with the permissions of the specified groups. -g group Run the command with the permissions of the specified group. -u user Run the command as the user. ENVIRONMENT The following environment variable is referenced by chroot: SHELL If set, the string specified by SHELL is interpreted as the name of the shell to exec. If the variable SHELL is not set, /bin/sh is used.
|
chroot – change root directory
|
chroot [-G group[,group ...]] [-g group] [-u user] newroot [command [arg ...]]
| null |
Example 1: Chrooting into a New Root Directory The following command opens the csh(1) shell after chrooting to the standard root directory. # chroot / /bin/csh Example 2: Execution of a Command with a Changed Root Directory The following command changes a root directory with chroot and then runs ls(1) to list the contents of /sbin. # chroot /tmp/testroot ls /sbin SEE ALSO chdir(2), chroot(2), setgid(2), setgroups(2), setuid(2), getgrnam(3), environ(7) HISTORY The chroot utility first appeared in AT&T System III UNIX and 4.3BSD-Reno. macOS 14.5 July 20, 2021 macOS 14.5
|
arp
|
The arp utility displays and modifies the Internet-to-Ethernet address translation tables used by the address resolution protocol (arp(4)). With no flags, the program displays the current ARP entry for hostname. The host may be specified by name or by number, using Internet dot notation. Available options: -a The program displays or deletes all of the current ARP entries. -d A super-user may delete an entry for the host called hostname with the -d flag. If the pub keyword is specified, only the “published” ARP entry for this host will be deleted. If the ifscope keyword is specified, the entry specific to the interface will be deleted. Alternatively, the -d flag may be combined with the -a flag to delete all entries. -i interface Limit the operation scope to the ARP entries on interface. Applicable only to the following operations: display one, display all, delete all. -l Show link-layer reachability information. -n Show network addresses as numbers (normally arp attempts to display addresses symbolically). -s hostname ether_addr Create an ARP entry for the host called hostname with the Ethernet address ether_addr. The Ethernet address is given as six hex bytes separated by colons. The entry will be permanent unless the word temp is given in the command. If the word pub is given, the entry will be “published”; i.e., this system will act as an ARP server, responding to requests for hostname even though the host address is not its own. In this case the ether_addr can be given as auto in which case the interfaces on this host will be examined, and if one of them is found to occupy the same subnet, its Ethernet address will be used. If the only keyword is also specified, this will create a “published (proxy only)” entry. This type of entry is created automatically if arp detects that a routing table entry for hostname already exists. 
If the reject keyword is specified, the entry will be marked so that traffic to the host will be discarded and the sender will be notified the host is unreachable. The blackhole keyword is similar in that traffic is discarded, but the sender is not notified. These can be used to block external traffic to a host without using a firewall. If the ifscope keyword is specified, the entry will be set with an additional property that strictly associates the entry with the interface. This allows for the presence of multiple entries with the same destination on different interfaces. -S hostname ether_addr Is just like -s except any existing ARP entry for this host will be deleted first. -f filename Cause the file filename to be read and multiple entries to be set in the ARP tables. Entries in the file should be of the form hostname ether_addr [temp] [pub [only]] [ifscope interface] with argument meanings as given above. Leading whitespace and empty lines are ignored. A ‘#’ character will mark the rest of the line as a comment. -x Show extended link-layer reachability information in addition to that shown by the -l flag. SEE ALSO inet(3), arp(4), ifconfig(8), ndp(8) HISTORY The arp utility appeared in 4.3BSD. macOS 14.5 March 18, 2008 macOS 14.5
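Following the synopsis forms above (the host name, MAC address, and interface are hypothetical; modifying entries requires root), a published entry can be created, inspected, and removed like so:

```
arp -s fileserver 00:1c:42:9e:12:af pub
arp -i en0 -l -a
arp -d fileserver pub
```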
|
arp – address resolution display and control
|
arp [-n] [-i interface] hostname arp [-n] [-i interface] [-l] -a arp -d hostname [pub] [ifscope interface] arp -d [-i interface] -a arp -s hostname ether_addr [temp] [reject] [blackhole] [pub [only]] [ifscope interface] arp -S hostname ether_addr [temp] [reject] [blackhole] [pub [only]] [ifscope interface] arp -f filename
| null | null |
cupsaccept
|
The cupsaccept command instructs the printing system to accept print jobs to the specified destinations. The cupsreject command instructs the printing system to reject print jobs to the specified destinations. The -r option sets the reason for rejecting print jobs. If not specified, the reason defaults to "Reason Unknown".
|
cupsaccept/cupsreject - accept/reject jobs sent to a destination
|
cupsaccept [ -E ] [ -U username ] [ -h hostname[:port] ] destination(s) cupsreject [ -E ] [ -U username ] [ -h hostname[:port] ] [ -r reason ] destination(s)
|
The following options are supported by both cupsaccept and cupsreject: -E Forces encryption when connecting to the server. -U username Sets the username that is sent when connecting to the server. -h hostname[:port] Chooses an alternate server. -r "reason" Sets the reason string that is shown for a printer that is rejecting jobs. CONFORMING TO The cupsaccept and cupsreject commands correspond to the System V printing system commands "accept" and "reject", respectively. Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. Finally, the CUPS versions may ask the user for an access password depending on the printing system configuration. SEE ALSO cancel(1), cupsenable(8), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS cupsaccept(8)
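For illustration (the destination name LaserJet is hypothetical; both commands typically require administrative privileges):

```
cupsaccept LaserJet
cupsreject -r "printer is being serviced" LaserJet
```

Jobs sent to the destination after the second command are rejected with the given reason until cupsaccept is run again.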
| null |
nvram
|
The nvram command allows manipulation of firmware NVRAM variables. It can be used to get or set a variable. It can also be used to print all of the variables or set a list of variables from a file. Changes to NVRAM variables are only saved by a clean restart or shutdown. In principle, name can be any string. In practice, not all strings will be accepted. Some variables require administrator privilege to get or set. The given value must match the data type required for name. Binary data can be set using the %xx notation, where xx is the hex value of the byte. The type for new variables is always binary data.
|
nvram – manipulate firmware NVRAM variables
|
nvram [-x] [-p] [-f filename] [-d name] [-r name] [-c] [name [= value [...]]]
|
-d name Deletes the named firmware variable. -r name Deletes the named firmware variable and returns error code if any. -f filename Set firmware variables from a text file. The file must be a list of "name value" statements. The first space on each line is taken to be the separator between "name" and "value". If the last character of a line is \, the value extends to the next line. -x Use XML format for reading and writing variables. This option must be used before the -p or -f options, since arguments are processed in order. -c Delete all of the firmware variables. -p Print all of the firmware variables.
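As an illustration of the -f file format (variable names are taken from the EXAMPLES section or are invented), each line is a "name value" statement split at the first space, and a trailing \ continues the value onto the next line:

```
boot-args -v
my-variable String One%00String Two%00%00
long-variable the value starts here \
and continues on this line
```

Such a file would be applied with nvram -f filename.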
|
example% nvram boot-args="-s rd=*hd:10" Set the boot-args variable to "-s rd=*hd:10". This would specify single user mode with the root device in hard drive partition 10. example% nvram my-variable="String One%00String Two%00%00" Create a new variable, my-variable, containing a list of two C-strings that is terminated by a NUL. example% nvram -d my-variable Deletes the variable named my-variable. macOS January 25, 2021 macOS
|
chown
|
The chown utility changes the user ID and/or the group ID of the specified files. Symbolic links named by arguments are silently left unchanged unless -h is used. The options are as follows: -H If the -R option is specified, symbolic links on the command line are followed and hence unaffected by the command. (Symbolic links encountered during traversal are not followed.) -L If the -R option is specified, all symbolic links are followed. -P If the -R option is specified, no symbolic links are followed. Instead, the user and/or group ID of the link itself are modified. This is the default. For matching behavior when using chown without the -R option, the -h option should be used instead. -R Change the user ID and/or the group ID of the file hierarchies rooted in the files, instead of just the files themselves. Beware of unintentionally matching the “..” hard link to the parent directory when using wildcards like “.*”. -f Do not report any failure to change file owner or group, nor modify the exit status to reflect such failures. -h If the file is a symbolic link, change the user ID and/or the group ID of the link itself. -n Interpret user ID and group ID as numeric, avoiding name lookups. -v Cause chown to be verbose, showing files as the owner is modified. If the -v flag is specified more than once, chown will print the filename, followed by the old and new numeric user/group ID. -x File system mount points are not traversed. The -H, -L and -P options are ignored unless the -R option is specified. In addition, these options override each other and the command's actions are determined by the last one specified. The owner and group operands are both optional, however, one must be specified. If the group operand is specified, it must be preceded by a colon (``:'') character. The owner may be either a numeric user ID or a user name. If a user name is also a numeric user ID, the operand is used as a user name. The group may be either a numeric group ID or a group name. 
If a group name is also a numeric group ID, the operand is used as a group name. The ownership of a file may only be altered by a super-user for obvious security reasons. Similarly, only a member of a group can change a file's group ID to that group. If chown receives a SIGINFO signal (see the status argument for stty(1)), then the current filename as well as the old and new file owner and group are displayed. EXIT STATUS The chown utility exits 0 on success, and >0 if an error occurs. COMPATIBILITY Previous versions of the chown utility used the dot (``.'') character to distinguish the group name. This has been changed to be a colon (``:'') character so that user and group names may contain the dot character. On previous versions of this system, symbolic links did not have owners. The -v and -x options are non-standard and their use in scripts is not recommended. LEGACY DESCRIPTION In legacy mode, the -R and -RP options do not change the user ID or the group ID of symbolic links. SEE ALSO chgrp(1), chmod(1), find(1), chown(2), fts(3), compat(5), symlink(7) STANDARDS The chown utility is expected to be IEEE Std 1003.2 (“POSIX.2”) compliant. HISTORY A chown utility appeared in Version 1 AT&T UNIX. macOS 14.5 August 24, 2022 macOS 14.5
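The recursive behavior described above can be sketched as follows; this minimal example (the directory layout is invented) re-owns a temporary tree to the invoking user's own numeric IDs, which needs no special privileges:

```shell
# Minimal sketch: recursively re-own a temporary tree to the current
# user's numeric user and group IDs (a same-owner change, so no root
# privileges are required).
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch "$dir/sub/file"
chown -R "$(id -u):$(id -g)" "$dir"
# Prints the numeric uid and gid now owning the deepest file.
ls -ln "$dir/sub/file" | awk '{print $3, $4}'
rm -r "$dir"
```

Substituting another user's ID in place of $(id -u) would require super-user privileges, as noted above.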
|
chown – change file owner and group
|
chown [-fhnvx] [-R [-H | -L | -P]] owner[:group] file ... chown [-fhnvx] [-R [-H | -L | -P]] :group file ...
| null | null |
mkextunpack
|
The mkextunpack program lists the contents of a multikext file, mkext_file, or unarchives the contents into output_directory (which must exist). The -v option causes mkextunpack to print the name of each kext as it is found. DIAGNOSTICS mkextunpack exits with a zero status upon success. Upon failure, it prints an error message and exits with a nonzero status. With a nonsegregated format 1 mkext file, wherein each kext may contain a universal binary, mkextunpack simply unpacks the contents. With an mkext file segregated by architecture (that is, with distinct internal archives of architecture-specific kexts), mkextunpack attempts by default to unpack or list kexts for the current machine's architecture. To choose a particular architecture to extract or list, use the -a option. There is no simple way to unpack a segregated mkext file into a set of kexts with universal binaries, but you can unpack each of its component architectures to separate directories for examination. SEE ALSO kmutil(8), kernelmanagerd(8), kextcache(8) BUGS Many single-letter options are inconsistent in meaning with (or directly contradictory to) the same letter options in other kext tools. For version 1 mkext files, note that the file format doesn't record the original filenames of the kexts, so mkextunpack has to guess at what they are. It does this by using the value of the CFBundleExecutable property of the kext's info dictionary (Project Builder sets this to the base name of the kext bundle by default, but the developer can change it). If that property doesn't exist, the last component of the CFBundleIdentifier is used. Duplicates have an incrementing index appended to the name. Kexts that have no CFBundleExecutable or CFBundleIdentifier property are named “NameUnknown-n.kext”, where n is a number. Darwin March 6, 2009 Darwin
|
mkextunpack – extract or list the contents of a multikext (mkext) archive
|
mkextunpack [-v] [-a arch] [-d output_directory] mkext_file DEPRECATED The mkextunpack utility has been deprecated.
| null | null |
postlog
|
The postlog(1) command implements a Postfix-compatible logging interface for use in, for example, shell scripts. By default, postlog(1) logs the text given on the command line as one record. If no text is specified on the command line, postlog(1) reads from standard input and logs each input line as one record. Logging is sent to syslogd(8); when the standard error stream is connected to a terminal, logging is sent there as well. The following options are implemented: -c config_dir Read the main.cf configuration file in the named directory instead of the default configuration directory. -i Include the process ID in the logging tag. -p priority (default: info) Specifies the logging severity: info, warn, error, fatal, or panic. With Postfix 3.1 and later, the program will pause for 1 second after reporting a fatal or panic condition, just like other Postfix programs. -t tag Specifies the logging tag, that is, the identifying name that appears at the beginning of each logging record. A default tag is used when none is specified. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. ENVIRONMENT MAIL_CONFIG Directory with the main.cf file. CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. The text below provides only a parameter summary. See postconf(5) for more details including examples. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. import_environment (see 'postconf -d' output) The list of environment parameters that a privileged Postfix process will import from a non-Postfix parent process, or name=value environment overrides. syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". 
SEE ALSO postconf(5), configuration parameters syslogd(8), syslog daemon LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTLOG(1)
|
postlog - Postfix-compatible logging utility
|
postlog [-iv] [-c config_dir] [-p priority] [-t tag] [text...]
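For illustration (the tag names and message text are hypothetical), the first command logs a single record with an explicit tag and severity, while the second logs each line that a pipeline writes, using the stdin form described above:

```
postlog -t nightly-backup -p warn "backup skipped: volume not mounted"
make build 2>&1 | postlog -i -t build
```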
| null | null |
installer
|
The installer command is used to install macOS installer packages to a specified domain or volume. The installer command installs a single package per invocation, which is specified with the -package parameter ( -pkg is accepted as a synonym). It may be either a single package or a metapackage. In the case of the metapackage, the packages which are part of the default install will be installed unless disqualified by a package's check tool(s). The target volume is specified with the -target parameter ( -tgt is accepted as a synonym). It must already be mounted when the installer command is invoked. The installer command requires root privileges to run. If a package requires authentication (set in a package's .info file), the installer must be either run as root or with the sudo(8) command (but see further discussion under the -store option). The installer is not responsible for rebooting the machine after installing. Use reboot(8) or shutdown(8) -r now to reboot the system. The installer displays two forms of output: the default terse output, intended for parsing by scripting languages when automating installs, and verbose output, which provides additional information and descriptive error messages. A list of flags and their descriptions: -dominfo Displays a list of domains into which the software package can be installed. For example: LocalSystem or CurrentUserHomeDirectory. The domains listed are those which are available and enabled when the command is run. -volinfo Displays a list of volumes onto which the software package can be installed. The volumes listed are the mounted volumes available when the command is run. -pkginfo Displays a list of packages that can be installed onto the target volume. If a metapackage is given as the package source, all of its subpackages are listed. -query flag Queries a package for information about the metadata. See -help for supported flags. 
-allowUntrusted Allow install of a package signed by an untrusted (or expired) certificate. -dumplog Detailed log information is always sent to syslog using the LOG_INSTALL facility (and will wind up in /var/log/install.log). -dumplog additionally writes this log to standard error output. -help Displays the help screen describing the list of parameters. -verbose Displays more descriptive information than the default output. Use this parameter in conjunction with -pkginfo and -volinfo information requests to see more readable output. The default output is formatted for scripting. -verboseR Displays the same information as -verbose except the output is formatted for easy parsing. -vers Displays the version of this command. -config Formats the command line installation arguments for later use. The output is sent to stdout, but can be redirected to a file to create a configuration file. When specifying this option, an installation is not actually performed. This configuration file can be supplied as the argument to the -file parameter instead of typing a long series of installation arguments. The config file can be used to perform multiple identical installs. You can create a config file as follows: installer -pkg ~/Documents/Foo.pkg -target / -config > /tmp/configfile.plist -plist Formats the installer output into an XML file, which is sent by default to stdout. Use this parameter for -dominfo, -volinfo, and -pkginfo. -file pathToFile Specifies the path to the XML file containing parameter information in the key/value dictionary. This file can be used instead of the command line parameters, and supersedes any parameters on the command line. When you type this parameter, you type the path to the XML file. Use with a config file generated by -config. For example: installer -file /tmp/configfile.plist -lang ISOLanguageCode Default language of installed system (ISO format). This is only necessary when performing a system (OS) install, otherwise it is ignored. 
There is no verification done to make sure that the language being set actually exists on the machine; however, the ISO language code is verified to ensure that it is valid. -listiso Display the list of valid ISO language codes the installer recognizes. -showChoiceChangesXML Prints to stdout the install choices for the package (specified with -pkg) in an XML format. This allows choice attributes to be modified and applied at install-time using -applyChoiceChangesXML. See CHOICE CHANGES FILE for details of this XML format. -applyChoiceChangesXML pathToXMLFile Applies the install choice changes specified in pathToXMLFile to the default choices in the package before installation. This allows the command-line installer to customize what gets installed. See CHOICE CHANGES FILE for details of this XML format. Any problems encountered while applying the choice changes will be reported to the LOG_INSTALL facility (i.e. to /var/log/install.log), and also to stdout if -dumplog is used. -showChoicesAfterApplyingChangesXML pathToXMLFile Applies the install choice changes specified in pathToXMLFile to the default choices in the package, and then dumps the resulting choice state to stdout. The input and output XML format is as described in CHOICE CHANGES FILE. Since changing one choice in a package can implicitly change other choices, this option allows you to confirm that a particular choiceChanges file will have the intended effect. You must specify a -target when using this option, since the evaluated choices can also change with the state of the target disk. -showChoicesXML Prints to stdout the install choices for the package (specified with -pkg) in a hierarchical XML format. This is not the same format as used with -applyChoiceChangesXML. This option is provided for System Image Utility only. -store Install the product archive specified by -package, in the same way that it would be installed through the Mac App Store. In this mode, no other options are supported. 
(You can specify -target, but the only allowable value is the root volume mount point, /). For best Mac App Store fidelity, run installer as an admin user (not using sudo); you will be prompted for your admin user's password before the install begins. This mode is provided for testing a product archive before submission to the Mac App Store. See productbuild(1) for how to create a product archive. DEVICES A device parameter for the target is any one of the following: 1) Any of the values returned by -dominfo 2) The device node entry. Any entry of the form of /dev/disk*. ex: /dev/disk2 3) The disk identifier. Any entry of the form of disk*. ex: disk1s9 4) The volume mount point. Any entry of the form of /Volumes/Mountpoint. ex: /Volumes/Untitled 5) The volume UUID. ex: 376C4046-083E-334F-AF08-62FAFBC4E352 CHOICE CHANGES FILE A “choiceChanges” file allows individual installer choices to be selected or deselected. A template choiceChanges file for a given package can be generated with the -showChoiceChangesXML option, and is interpreted as follows. The choiceChanges file is a property list containing an array of dictionaries. Each dictionary has the following three keys: Key Description choiceIdentifier Identifier for the choice to be modified (string) choiceAttribute One of the attribute names described below (string) attributeSetting A setting that depends on the choiceAttribute, described below (number or string) The choiceAttribute and attributeSetting values are as follows: choiceAttribute attributeSetting Description selected (number) 1 to select the choice, 0 to deselect it enabled (number) 1 to enable the choice, 0 to disable it visible (number) 1 to show the choice, 0 to hide it customLocation (string) path at which to install the choice (see below) Note that there can be multiple dictionaries for the same choiceIdentifier, since there can be multiple attributes set for a single choice. 
The customLocation attribute can be set for a choice only if that choice explicitly allows a user-defined path. That is, if the choice would have a Location popup when viewed in the Customize pane of the Installer application, it can be set via customLocation. (Otherwise, installation paths cannot be arbitrarily modified, since the package author must account for custom install locations for the installation to work properly.)
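The key/value tables above map directly onto plist XML. As a minimal hedged sketch (the choice identifier com.example.documentation is hypothetical), a choiceChanges file that both deselects one choice and hides it, using two dictionaries for the same choiceIdentifier, could look like:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
    <dict>
        <key>choiceIdentifier</key>
        <string>com.example.documentation</string>
        <key>choiceAttribute</key>
        <string>selected</string>
        <key>attributeSetting</key>
        <integer>0</integer>
    </dict>
    <dict>
        <key>choiceIdentifier</key>
        <string>com.example.documentation</string>
        <key>choiceAttribute</key>
        <string>visible</string>
        <key>attributeSetting</key>
        <integer>0</integer>
    </dict>
</array>
</plist>
```

Such a file would then be passed via -applyChoiceChangesXML, e.g. installer -pkg Foo.pkg -target / -applyChoiceChangesXML choices.plist (package and file names hypothetical).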
|
installer – system software and package installer tool.
|
installer [-dominfo] [-volinfo] [-pkginfo] [-showChoicesXML] [-showChoicesAfterApplyingChangesXML <pathToXMLFile>] [-applyChoiceChangesXML <pathToXMLFile>] [-query <flag>] [-allowUntrusted] [-dumplog] [-help] [-verbose | -verboseR] [-vers] [-config] [-plist] [-file <pathToFile>] [-lang <ISOLanguageCode>] [-listiso] -pkg <pathToPackage> -target device
| null |
installer -dominfo -pkg InstallMe.pkg
installer -volinfo -pkg InstallMe.pkg
installer -pkginfo -pkg DeveloperTools.mpkg
installer -pkg OSInstall.mpkg -target LocalSystem
installer -pkg OSInstall.mpkg -target / -lang en
installer -pkg DeveloperTools.mpkg -target /
installer -pkg InstallMe.pkg -target "/Volumes/Macintosh HD2"
installer -pkg InstallMe.pkg -file /tmp/InstallConfigFile
installer -pkg InstallMe.pkg -target /dev/disk0s5

ENVIRONMENT
COMMAND_LINE_INSTALL  Set when performing an installation using the installer command.

FILES
/usr/sbin/installer  Software package installer tool

SEE ALSO
syslog.conf(5), reboot(8), shutdown(8), softwareupdate(8), sudo(8), systemsetup(8)

HISTORY
The command line installer tool first appeared in the 10.2 release of Mac OS X.

macOS April 19, 2007 macOS
|
pppd
|
PPP is the protocol used for establishing internet links over dial-up modems, DSL connections, and many other types of point-to-point links. The pppd daemon works together with the kernel PPP driver to establish and maintain a PPP link with another system (called the peer) and to negotiate Internet Protocol (IP) addresses for each end of the link. Pppd can also authenticate the peer and/or supply authentication information to the peer. PPP can be used with other network protocols besides IP, but such use is becoming increasingly rare. FREQUENTLY USED OPTIONS ttyname Use the serial port called ttyname to communicate with the peer. The string "/dev/" is prepended to ttyname to form the name of the device to open. If no device name is given, or if the name of the terminal connected to the standard input is given, pppd will use that terminal, and will not fork to put itself in the background. A value for this option from a privileged source cannot be overridden by a non-privileged user. speed An option that is a decimal number is taken as the desired baud rate for the serial device. On systems such as 4.4BSD and NetBSD, any speed can be specified. Other systems (e.g. Linux, SunOS) only support the commonly-used baud rates. asyncmap map This option sets the Async-Control-Character-Map (ACCM) for this end of the link. The ACCM is a set of 32 bits, one for each of the ASCII control characters with values from 0 to 31, where a 1 bit indicates that the corresponding control character should not be used in PPP packets sent to this system. The map is encoded as a hexadecimal number (without a leading 0x) where the least significant bit (00000001) represents character 0 and the most significant bit (80000000) represents character 31. Pppd will ask the peer to send these characters as a 2-byte escape sequence. If multiple asyncmap options are given, the values are ORed together. 
If no asyncmap option is given, the default is zero, so pppd will ask the peer not to escape any control characters. To escape transmitted characters, use the escape option. auth Require the peer to authenticate itself before allowing network packets to be sent or received. This option is the default if the system has a default route. If neither this option nor the noauth option is specified, pppd will only allow the peer to use IP addresses to which the system does not already have a route. call name Read options from the file /etc/ppp/peers/name. This file may contain privileged options, such as noauth, even if pppd is not being run by root. The name string may not begin with / or include .. as a pathname component. The format of the options file is described below. connect script Usually there is something which needs to be done to prepare the link before the PPP protocol can be started; for instance, with a dial-up modem, commands need to be sent to the modem to dial the appropriate phone number. This option specifies a command for pppd to execute (by passing it to a shell) before attempting to start PPP negotiation. The chat(8) program is often useful here, as it provides a way to send arbitrary strings to a modem and respond to received characters. A value for this option from a privileged source cannot be overridden by a non-privileged user. crtscts Specifies that pppd should set the serial port to use hardware flow control using the RTS and CTS signals in the RS-232 interface. If neither the crtscts, the nocrtscts, the cdtrcts nor the nocdtrcts option is given, the hardware flow control setting for the serial port is left unchanged. Some serial ports (such as Macintosh serial ports) lack a true RTS output. Such serial ports use this mode to implement unidirectional flow control. The serial port will suspend transmission when requested by the modem (via CTS) but will be unable to request the modem to stop sending to the computer. 
This mode retains the ability to use DTR as a modem control line. defaultroute Add a default route to the system routing tables, using the peer as the gateway, when IPCP negotiation is successfully completed. This entry is removed when the PPP connection is broken. This option is privileged if the nodefaultroute option has been specified. disconnect script Execute the command specified by script, by passing it to a shell, after pppd has terminated the link. This command could, for example, issue commands to the modem to cause it to hang up if hardware modem control signals were not available. The disconnect script is not run if the modem has already hung up. A value for this option from a privileged source cannot be overridden by a non-privileged user. escape xx,yy,... Specifies that certain characters should be escaped on transmission (regardless of whether the peer requests them to be escaped with its async control character map). The characters to be escaped are specified as a list of hex numbers separated by commas. Note that almost any character can be specified for the escape option, unlike the asyncmap option which only allows control characters to be specified. The characters which may not be escaped are those with hex values 0x20 - 0x3f or 0x5e. file name Read options from file name (the format is described below). The file must be readable by the user who has invoked pppd. init script Execute the command specified by script, by passing it to a shell, to initialize the serial line. This script would typically use the chat(8) program to configure the modem to enable auto answer. A value for this option from a privileged source cannot be overridden by a non-privileged user. lock Specifies that pppd should create a UUCP-style lock file for the serial device to ensure exclusive access to the device. mru n Set the MRU [Maximum Receive Unit] value to n. Pppd will ask the peer to send packets of no more than n bytes. 
The value of n must be between 128 and 16384; the default is 1500. A value of 296 works well on very slow links (40 bytes for TCP/IP header + 256 bytes of data). Note that for the IPv6 protocol, the MRU must be at least 1280. mtu n Set the MTU [Maximum Transmit Unit] value to n. Unless the peer requests a smaller value via MRU negotiation, pppd will request that the kernel networking code send data packets of no more than n bytes through the PPP network interface. Note that for the IPv6 protocol, the MTU must be at least 1280. passive Enables the "passive" option in the LCP. With this option, pppd will attempt to initiate a connection; if no reply is received from the peer, pppd will then just wait passively for a valid LCP packet from the peer, instead of exiting, as it would without this option.
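The asyncmap bit encoding described above can be worked out mechanically. As a sketch, this shell fragment computes the map value that asks the peer to escape the XON (17) and XOFF (19) control characters, a common requirement on links that use software flow control:

```shell
# Build an asyncmap where bit c is set for each control character c
# that the peer should escape (here: XON = 17, XOFF = 19).
map=0
for c in 17 19; do
    map=$(( map | (1 << c) ))
done
printf 'asyncmap %08x\n' "$map"    # prints "asyncmap 000a0000"
```

The resulting option line, asyncmap a0000, could then be given on the command line or placed in an options file.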
|
pppd - Point-to-Point Protocol Daemon
|
pppd [ options ]
|
<local_IP_address>:<remote_IP_address> Set the local and/or remote interface IP addresses. Either one may be omitted. The IP addresses can be specified with a host name or in decimal dot notation (e.g. 150.234.56.78). The default local address is the (first) IP address of the system (unless the noipdefault option is given). The remote address will be obtained from the peer if not specified in any option. Thus, in simple cases, this option is not required. If a local and/or remote IP address is specified with this option, pppd will not accept a different value from the peer in the IPCP negotiation, unless the ipcp-accept-local and/or ipcp-accept-remote options are given, respectively. ipv6 <local_interface_identifier>,<remote_interface_identifier> Set the local and/or remote 64-bit interface identifier. Either one may be omitted. The identifier must be specified in standard ascii notation of IPv6 addresses (e.g. ::dead:beef). If the ipv6cp-use-ipaddr option is given, the local identifier is the local IPv4 address (see above). On systems which support a unique persistent id, such as EUI-48 derived from the Ethernet MAC address, the ipv6cp-use-persistent option can be used to replace the ipv6 <local>,<remote> option. Otherwise the identifier is randomized. active-filter filter-expression Specifies a packet filter to be applied to data packets to determine which packets are to be regarded as link activity, and therefore reset the idle timer, or cause the link to be brought up in demand-dialing mode. This option is useful in conjunction with the idle option if there are packets being sent or received regularly over the link (for example, routing information packets) which would otherwise prevent the link from ever appearing to be idle. The filter-expression syntax is as described for tcpdump(1), except that qualifiers which are inappropriate for a PPP link, such as ether and arp, are not permitted. 
Generally the filter expression should be enclosed in single-quotes to prevent whitespace in the expression from being interpreted by the shell. This option is currently only available under Linux, and requires that the kernel was configured to include PPP filtering support (CONFIG_PPP_FILTER). Note that it is possible to apply different constraints to incoming and outgoing packets using the inbound and outbound qualifiers. allow-ip address(es) Allow peers to use the given IP address or subnet without authenticating themselves. The parameter is parsed as for each element of the list of allowed IP addresses in the secrets files (see the AUTHENTICATION section below). allow-number number Allow peers to connect from the given telephone number. A trailing `*' character will match all numbers beginning with the leading part. bsdcomp nr,nt Request that the peer compress packets that it sends, using the BSD-Compress scheme, with a maximum code size of nr bits, and agree to compress packets sent to the peer with a maximum code size of nt bits. If nt is not specified, it defaults to the value given for nr. Values in the range 9 to 15 may be used for nr and nt; larger values give better compression but consume more kernel memory for compression dictionaries. Alternatively, a value of 0 for nr or nt disables compression in the corresponding direction. Use nobsdcomp or bsdcomp 0 to disable BSD-Compress compression entirely. cdtrcts Use a non-standard hardware flow control (i.e. DTR/CTS) to control the flow of data on the serial port. If neither the crtscts, the nocrtscts, the cdtrcts nor the nocdtrcts option is given, the hardware flow control setting for the serial port is left unchanged. Some serial ports (such as Macintosh serial ports) lack a true RTS output. Such serial ports use this mode to implement true bi-directional flow control. The sacrifice is that this flow control mode does not permit using DTR as a modem control line. 
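The active-filter option described above is easiest to see in context. The following options-file fragment (the timings and the filter are illustrative, not defaults) keeps a demand-dialed link from being held up by routing chatter:

```
# illustrative /etc/ppp/options fragment
demand                            # bring the link up only when traffic appears
idle 300                          # drop the link after 5 minutes of inactivity
active-filter 'not udp port 520'  # ignore RIP packets when judging activity
```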
chap-interval n If this option is given, pppd will rechallenge the peer every n seconds. chap-max-challenge n Set the maximum number of CHAP challenge transmissions to n (default 10). chap-restart n Set the CHAP restart interval (retransmission timeout for challenges) to n seconds (default 3). connect-delay n Wait for up to n milliseconds after the connect script finishes for a valid PPP packet from the peer. At the end of this time, or when a valid PPP packet is received from the peer, pppd will commence negotiation by sending its first LCP packet. The default value is 1000 (1 second). This wait period only applies if the connect or pty option is used. debug Enables connection debugging facilities. If this option is given, pppd will log the contents of all control packets sent or received in a readable form. The packets are logged through syslog with facility daemon and level debug. This information can be directed to a file by setting up /etc/syslog.conf appropriately (see syslog.conf(5)). default-asyncmap Disable asyncmap negotiation, forcing all control characters to be escaped for both the transmit and the receive direction. default-mru Disable MRU [Maximum Receive Unit] negotiation. With this option, pppd will use the default MRU value of 1500 bytes for both the transmit and receive direction. deflate nr,nt Request that the peer compress packets that it sends, using the Deflate scheme, with a maximum window size of 2**nr bytes, and agree to compress packets sent to the peer with a maximum window size of 2**nt bytes. If nt is not specified, it defaults to the value given for nr. Values in the range 9 to 15 may be used for nr and nt; larger values give better compression but consume more kernel memory for compression dictionaries. Alternatively, a value of 0 for nr or nt disables compression in the corresponding direction. Use nodeflate or deflate 0 to disable Deflate compression entirely. 
(Note: pppd requests Deflate compression in preference to BSD-Compress if the peer can do either.) demand Initiate the link only on demand, i.e. when data traffic is present. With this option, the remote IP address must be specified by the user on the command line or in an options file. Pppd will initially configure the interface and enable it for IP traffic without connecting to the peer. When traffic is available, pppd will connect to the peer and perform negotiation, authentication, etc. When this is completed, pppd will commence passing data packets (i.e., IP packets) across the link. The demand option implies the persist option. If this behaviour is not desired, use the nopersist option after the demand option. The idle and holdoff options are also useful in conjunction with the demand option. domain d Append the domain name d to the local host name for authentication purposes. For example, if gethostname() returns the name porsche, but the fully qualified domain name is porsche.Quotron.COM, you could specify domain Quotron.COM. Pppd would then use the name porsche.Quotron.COM for looking up secrets in the secrets file, and as the default name to send to the peer when authenticating itself to the peer. This option is privileged. dryrun With the dryrun option, pppd will print out all the option values which have been set and then exit, after parsing the command line and options files and checking the option values, but before initiating the link. The option values are logged at level info, and also printed to standard output unless the device on standard output is the device that pppd would be using to communicate with the peer. dump With the dump option, pppd will print out all the option values which have been set. This option is like the dryrun option except that pppd proceeds as normal rather than exiting. endpoint <epdisc> Sets the endpoint discriminator sent by the local machine to the peer during multilink negotiation to <epdisc>. 
The default is to use the MAC address of the first ethernet interface on the system, if any, otherwise the IPv4 address corresponding to the hostname, if any, provided it is not in the multicast or locally-assigned IP address ranges, or the localhost address. The endpoint discriminator can be the string null or of the form type:value, where type is a decimal number or one of the strings local, IP, MAC, magic, or phone. The value is an IP address in dotted-decimal notation for the IP type, or a string of bytes in hexadecimal, separated by periods or colons for the other types. For the MAC type, the value may also be the name of an ethernet or similar network interface. This option is currently only available under Linux. hide-password When logging the contents of PAP packets, this option causes pppd to exclude the password string from the log. This is the default. holdoff n Specifies how many seconds to wait before re-initiating the link after it terminates. This option only has any effect if the persist or demand option is used. The holdoff period is not applied if the link was terminated because it was idle. idle n Specifies that pppd should disconnect if the link is idle for n seconds. The link is idle when no data packets (i.e. IP packets) are being sent or received. Note: it is not advisable to use this option with the persist option without the demand option. If the active-filter option is given, data packets which are rejected by the specified activity filter also count as the link being idle. ipcp-accept-local With this option, pppd will accept the peer's idea of our local IP address, even if the local IP address was specified in an option. ipcp-accept-remote With this option, pppd will accept the peer's idea of its (remote) IP address, even if the remote IP address was specified in an option. ipcp-max-configure n Set the maximum number of IPCP configure-request transmissions to n (default 10). 
ipcp-max-failure n Set the maximum number of IPCP configure-NAKs returned before starting to send configure-Rejects instead to n (default 10). ipcp-max-terminate n Set the maximum number of IPCP terminate-request transmissions to n (default 3). ipcp-restart n Set the IPCP restart interval (retransmission timeout) to n seconds (default 3). ipparam string Provides an extra parameter to the ip-up and ip-down scripts. If this option is given, the string supplied is given as the 6th parameter to those scripts. ipv6cp-max-configure n Set the maximum number of IPv6CP configure-request transmissions to n (default 10). ipv6cp-max-failure n Set the maximum number of IPv6CP configure-NAKs returned before starting to send configure-Rejects instead to n (default 10). ipv6cp-max-terminate n Set the maximum number of IPv6CP terminate-request transmissions to n (default 3). ipv6cp-restart n Set the IPv6CP restart interval (retransmission timeout) to n seconds (default 3). ipx Enable the IPXCP and IPX protocols. This option is presently only supported under Linux, and only if your kernel has been configured to include IPX support. ipx-network n Set the IPX network number in the IPXCP configure request frame to n, a hexadecimal number (without a leading 0x). There is no valid default. If this option is not specified, the network number is obtained from the peer. If the peer does not have the network number, the IPX protocol will not be started. ipx-node n:m Set the IPX node numbers. The two node numbers are separated from each other with a colon character. The first number n is the local node number. The second number m is the peer's node number. Each node number is a hexadecimal number, at most 10 digits long. The node numbers on the ipx-network must be unique. There is no valid default. If this option is not specified then the node numbers are obtained from the peer. ipx-router-name <string> Set the name of the router. This is a string and is sent to the peer as information data. 
ipx-routing n Set the routing protocol to be received by this option. More than one instance of ipx-routing may be specified. The 'none' option (0) may be specified as the only instance of ipx-routing. The values may be 0 for NONE, 2 for RIP/SAP, and 4 for NLSP. ipxcp-accept-local Accept the peer's NAK for the node number specified in the ipx-node option. If a node number was specified, and non-zero, the default is to insist that the value be used. If you include this option then you will permit the peer to override the entry of the node number. ipxcp-accept-network Accept the peer's NAK for the network number specified in the ipx-network option. If a network number was specified, and non-zero, the default is to insist that the value be used. If you include this option then you will permit the peer to override the entry of the network number. ipxcp-accept-remote Use the peer's network number specified in the configure request frame. If a node number was specified for the peer and this option was not specified, the peer will be forced to use the value which you have specified. ipxcp-max-configure n Set the maximum number of IPXCP configure request frames which the system will send to n. The default is 10. ipxcp-max-failure n Set the maximum number of IPXCP NAK frames which the local system will send before it rejects the options. The default value is 3. ipxcp-max-terminate n Set the maximum number of IPXCP terminate request frames before the local system considers that the peer is not listening to them. The default value is 3. kdebug n Enable debugging code in the kernel-level PPP driver. The argument values depend on the specific kernel driver, but in general a value of 1 will enable general kernel debug messages. (Note that these messages are usually only useful for debugging the kernel driver itself.) 
For the Linux 2.2.x kernel driver, the value is a sum of bits: 1 to enable general debug messages, 2 to request that the contents of received packets be printed, and 4 to request that the contents of transmitted packets be printed. On most systems, messages printed by the kernel are logged by syslog(1) to a file as directed in the /etc/syslog.conf configuration file. ktune Enables pppd to alter kernel settings as appropriate. Under Linux, pppd will enable IP forwarding (i.e. set /proc/sys/net/ipv4/ip_forward to 1) if the proxyarp option is used, and will enable the dynamic IP address option (i.e. set /proc/sys/net/ipv4/ip_dynaddr to 1) in demand mode if the local address changes. lcp-echo-failure n If this option is given, pppd will presume the peer to be dead if n LCP echo-requests are sent without receiving a valid LCP echo-reply. If this happens, pppd will terminate the connection. Use of this option requires a non-zero value for the lcp-echo-interval parameter. This option can be used to enable pppd to terminate after the physical connection has been broken (e.g., the modem has hung up) in situations where no hardware modem control lines are available. lcp-echo-interval n If this option is given, pppd will send an LCP echo-request frame to the peer every n seconds. Normally the peer should respond to the echo-request by sending an echo-reply. This option can be used with the lcp-echo-failure option to detect that the peer is no longer connected. lcp-max-configure n Set the maximum number of LCP configure-request transmissions to n (default 10). lcp-max-failure n Set the maximum number of LCP configure-NAKs returned before starting to send configure-Rejects instead to n (default 10). lcp-max-terminate n Set the maximum number of LCP terminate-request transmissions to n (default 3). lcp-restart n Set the LCP restart interval (retransmission timeout) to n seconds (default 3). linkname name Sets the logical name of the link to name. 
Pppd will create a file named ppp-name.pid in /var/run (or /etc/ppp on some systems) containing its process ID. This can be useful in determining which instance of pppd is responsible for the link to a given peer system. This is a privileged option. local Don't use the modem control lines. With this option, pppd will ignore the state of the CD (Carrier Detect) signal from the modem and will not change the state of the DTR (Data Terminal Ready) signal. logfd n Send log messages to file descriptor n. Pppd will send log messages to at most one file or file descriptor (as well as sending the log messages to syslog), so this option and the logfile option are mutually exclusive. The default is for pppd to send log messages to stdout (file descriptor 1), unless the serial port is already open on stdout. logfile filename Append log messages to the file filename (as well as sending the log messages to syslog). The file is opened with the privileges of the user who invoked pppd, in append mode. login Use the system password database for authenticating the peer using PAP, and record the user in the system wtmp file. Note that the peer must have an entry in the /etc/ppp/pap-secrets file as well as the system password database to be allowed access. maxconnect n Terminate the connection when it has been available for network traffic for n seconds (i.e. n seconds after the first network control protocol comes up). maxfail n Terminate after n consecutive failed connection attempts. A value of 0 means no limit. The default value is 10. modem Use the modem control lines. This option is the default. With this option, pppd will wait for the CD (Carrier Detect) signal from the modem to be asserted when opening the serial device (unless a connect script is specified), and it will drop the DTR (Data Terminal Ready) signal briefly when the connection is terminated and before executing the connect script. On Ultrix, this option implies hardware flow control, as for the crtscts option. 
mp Enables the use of PPP multilink; this is an alias for the `multilink' option. This option is currently only available under Linux. mppe-stateful Allow MPPE to use stateful mode. Stateless mode is still attempted first. The default is to disallow stateful mode. mpshortseq Enables the use of short (12-bit) sequence numbers in multilink headers, as opposed to 24-bit sequence numbers. This option is only available under Linux, and only has any effect if multilink is enabled (see the multilink option). mrru n Sets the Maximum Reconstructed Receive Unit to n. The MRRU is the maximum size for a received packet on a multilink bundle, and is analogous to the MRU for the individual links. This option is currently only available under Linux, and only has any effect if multilink is enabled (see the multilink option). ms-dns <addr> If pppd is acting as a server for Microsoft Windows clients, this option allows pppd to supply one or two DNS (Domain Name Server) addresses to the clients. The first instance of this option specifies the primary DNS address; the second instance (if given) specifies the secondary DNS address. (This option was present in some older versions of pppd under the name dns-addr.) ms-wins <addr> If pppd is acting as a server for Microsoft Windows or "Samba" clients, this option allows pppd to supply one or two WINS (Windows Internet Name Services) server addresses to the clients. The first instance of this option specifies the primary WINS address; the second instance (if given) specifies the secondary WINS address. multilink Enables the use of the PPP multilink protocol. If the peer also supports multilink, then this link can become part of a bundle between the local system and the peer. If there is an existing bundle to the peer, pppd will join this link to that bundle, otherwise pppd will create a new bundle. See the MULTILINK section below. This option is currently only available under Linux. 
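As a sketch of the server-side options above (all addresses below are illustrative placeholders, not defaults), a pppd instance serving Windows clients might combine them in an options fragment like:

```
# illustrative server-side options fragment
ms-dns 192.0.2.1        # primary DNS offered to the client
ms-dns 192.0.2.2        # secondary DNS
ms-wins 192.0.2.3       # primary WINS server
proxyarp                # make the peer reachable on the local ethernet
```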
name name Set the name of the local system for authentication purposes to name. This is a privileged option. With this option, pppd will use lines in the secrets files which have name as the second field when looking for a secret to use in authenticating the peer. In addition, unless overridden with the user option, name will be used as the name to send to the peer when authenticating the local system to the peer. (Note that pppd does not append the domain name to name.) noaccomp Disable Address/Control compression in both directions (send and receive). noauth Do not require the peer to authenticate itself. This option is privileged. nobsdcomp Disables BSD-Compress compression; pppd will not request or agree to compress packets using the BSD-Compress scheme. noccp Disable CCP (Compression Control Protocol) negotiation. This option should only be required if the peer is buggy and gets confused by requests from pppd for CCP negotiation. nocrtscts Disable hardware flow control (i.e. RTS/CTS) on the serial port. If neither the crtscts nor the nocrtscts nor the cdtrcts nor the nocdtrcts option is given, the hardware flow control setting for the serial port is left unchanged. nocdtrcts This option is a synonym for nocrtscts. Either of these options will disable both forms of hardware flow control. nodefaultroute Disable the defaultroute option. The system administrator who wishes to prevent users from creating default routes with pppd can do so by placing this option in the /etc/ppp/options file. nodeflate Disables Deflate compression; pppd will not request or agree to compress packets using the Deflate scheme. nodetach Don't detach from the controlling terminal. Without this option, if a serial device other than the terminal on the standard input is specified, pppd will fork to become a background process. noendpoint Disables pppd from sending an endpoint discriminator to the peer or accepting one from the peer (see the MULTILINK section below). 
This option should only be required if the peer is buggy. noip Disable IPCP negotiation and IP communication. This option should only be required if the peer is buggy and gets confused by requests from pppd for IPCP negotiation. noipv6 Disable IPv6CP negotiation and IPv6 communication. This option should only be required if the peer is buggy and gets confused by requests from pppd for IPv6CP negotiation. noipdefault Disables the default behaviour when no local IP address is specified, which is to determine (if possible) the local IP address from the hostname. With this option, the peer will have to supply the local IP address during IPCP negotiation (unless it is specified explicitly on the command line or in an options file). noipx Disable the IPXCP and IPX protocols. This option should only be required if the peer is buggy and gets confused by requests from pppd for IPXCP negotiation. noktune Opposite of the ktune option; disables pppd from changing system settings. nolog Do not send log messages to a file or file descriptor. This option cancels the logfd and logfile options. nomagic Disable magic number negotiation. With this option, pppd cannot detect a looped-back line. This option should only be needed if the peer is buggy. nomp Disables the use of PPP multilink. This option is currently only available under Linux. nomppe Disables MPPE (Microsoft Point to Point Encryption). This is the default. nomppe-40 Disable 40-bit encryption with MPPE. nomppe-128 Disable 128-bit encryption with MPPE. nomppe-stateful Disable MPPE stateful mode. This is the default. nompshortseq Disables the use of short (12-bit) sequence numbers in the PPP multilink protocol, forcing the use of 24-bit sequence numbers. This option is currently only available under Linux, and only has any effect if multilink is enabled. nomultilink Disables the use of PPP multilink. This option is currently only available under Linux. 
nopcomp Disable protocol field compression negotiation in both the receive and the transmit direction. nopersist Exit once a connection has been made and terminated. This is the default unless the persist or demand option has been specified. nopredictor1 Do not accept or agree to Predictor-1 compression. noproxyarp Disable the proxyarp option. The system administrator who wishes to prevent users from creating proxy ARP entries with pppd can do so by placing this option in the /etc/ppp/options file. notty Normally, pppd requires a terminal device. With this option, pppd will allocate itself a pseudo-tty master/slave pair and use the slave as its terminal device. Pppd will create a child process to act as a `character shunt' to transfer characters between the pseudo-tty master and its standard input and output. Thus pppd will transmit characters on its standard output and receive characters on its standard input even if they are not terminal devices. This option increases the latency and CPU overhead of transferring data over the ppp interface as all of the characters sent and received must flow through the character shunt process. An explicit device name may not be given if this option is used. novj Disable Van Jacobson style TCP/IP header compression in both the transmit and the receive direction. novjccomp Disable the connection-ID compression option in Van Jacobson style TCP/IP header compression. With this option, pppd will not omit the connection-ID byte from Van Jacobson compressed TCP/IP headers, nor ask the peer to do so. papcrypt Indicates that all secrets in the /etc/ppp/pap-secrets file which are used for checking the identity of the peer are encrypted, and thus pppd should not accept a password which, before encryption, is identical to the secret from the /etc/ppp/pap-secrets file. pap-max-authreq n Set the maximum number of PAP authenticate-request transmissions to n (default 10). 
pap-restart n
        Set the PAP restart interval (retransmission timeout) to n seconds (default 3).

pap-timeout n
        Set the maximum time that pppd will wait for the peer to authenticate itself with PAP to n seconds (0 means no limit).

pass-filter filter-expression
        Specifies a packet filter to be applied to data packets being sent or received to determine which packets should be allowed to pass. Packets which are rejected by the filter are silently discarded. This option can be used to prevent specific network daemons (such as routed) using up link bandwidth, or to provide a very basic firewall capability. The filter-expression syntax is as described for tcpdump(1), except that qualifiers which are inappropriate for a PPP link, such as ether and arp, are not permitted. Generally the filter expression should be enclosed in single-quotes to prevent whitespace in the expression from being interpreted by the shell. Note that it is possible to apply different constraints to incoming and outgoing packets using the inbound and outbound qualifiers. This option is currently only available under Linux, and requires that the kernel was configured to include PPP filtering support (CONFIG_PPP_FILTER).

password password-string
        Specifies the password to use for authenticating to the peer. Use of this option is discouraged, as the password is likely to be visible to other users on the system (for example, by using ps(1)).

persist Do not exit after a connection is terminated; instead try to reopen the connection. The maxfail option still has an effect on persistent connections.

plugin filename
        This option is deprecated starting in macOS 10.14. The NEPacketTunnelProvider API should be used for VPN plugins.

predictor1
        Request that the peer compress frames that it sends using Predictor-1 compression, and agree to compress transmitted frames with Predictor-1 if requested. This option has no effect unless the kernel driver supports Predictor-1 compression.
privgroup group-name
        Allows members of group group-name to use privileged options. This is a privileged option. Use of this option requires care as there is no guarantee that members of group-name cannot use pppd to become root themselves. Consider it equivalent to putting the members of group-name in the kmem or disk group.

proxyarp
        Add an entry to this system's ARP [Address Resolution Protocol] table with the IP address of the peer and the Ethernet address of this system. This will have the effect of making the peer appear to other systems to be on the local ethernet.

pty script
        Specifies that the command script is to be used to communicate rather than a specific terminal device. Pppd will allocate itself a pseudo-tty master/slave pair and use the slave as its terminal device. The script will be run in a child process with the pseudo-tty master as its standard input and output. An explicit device name may not be given if this option is used. (Note: if the record option is used in conjunction with the pty option, the child process will have pipes on its standard input and output.)

receive-all
        With this option, pppd will accept all control characters from the peer, including those marked in the receive asyncmap. Without this option, pppd will discard those characters as specified in RFC1662. This option should only be needed if the peer is buggy.

record filename
        Specifies that pppd should record all characters sent and received to a file named filename. This file is opened in append mode, using the user's user-ID and permissions. This option is implemented using a pseudo-tty and a process to transfer characters between the pseudo-tty and the real serial device, so it will increase the latency and CPU overhead of transferring data over the ppp interface. The characters are stored in a tagged format with timestamps, which can be displayed in readable form using the pppdump(8) program.
remotename name
        Set the assumed name of the remote system for authentication purposes to name.

remotenumber number
        Set the assumed telephone number of the remote system for authentication purposes to number.

refuse-chap
        With this option, pppd will not agree to authenticate itself to the peer using CHAP.

refuse-mschap
        With this option, pppd will not agree to authenticate itself to the peer using MS-CHAP.

refuse-mschap-v2
        With this option, pppd will not agree to authenticate itself to the peer using MS-CHAPv2.

refuse-eap
        With this option, pppd will not agree to authenticate itself to the peer using EAP.

refuse-pap
        With this option, pppd will not agree to authenticate itself to the peer using PAP.

require-chap
        Require the peer to authenticate itself using CHAP [Challenge Handshake Authentication Protocol] authentication.

require-mppe
        Require the use of MPPE (Microsoft Point to Point Encryption). This option disables all other compression types. This option enables both 40-bit and 128-bit encryption. In order for MPPE to successfully come up, you must have authenticated with either MS-CHAP or MS-CHAPv2. This option is presently only supported under Linux, and only if your kernel has been configured to include MPPE support.

require-mppe-40
        Require the use of MPPE, with 40-bit encryption.

require-mppe-128
        Require the use of MPPE, with 128-bit encryption.

require-mschap
        Require the peer to authenticate itself using MS-CHAP [Microsoft Challenge Handshake Authentication Protocol] authentication.

require-mschap-v2
        Require the peer to authenticate itself using MS-CHAPv2 [Microsoft Challenge Handshake Authentication Protocol, Version 2] authentication.

require-eap
        Require the peer to authenticate itself using EAP [Extensible Authentication Protocol] authentication.

require-pap
        Require the peer to authenticate itself using PAP [Password Authentication Protocol] authentication.
show-password
        When logging the contents of PAP packets, this option causes pppd to show the password string in the log message.

silent  With this option, pppd will not transmit LCP packets to initiate a connection until a valid LCP packet is received from the peer (as for the `passive' option with ancient versions of pppd).

sync    Use synchronous HDLC serial encoding instead of asynchronous. The device used by pppd with this option must have sync support. Currently supports Microgate SyncLink adapters under Linux and FreeBSD 2.2.8 and later.

unit num
        Sets the ppp unit number (for a ppp0 or ppp1 etc interface name) for outbound connections.

updetach
        With this option, pppd will detach from its controlling terminal once it has successfully established the ppp connection (to the point where the first network control protocol, usually the IP control protocol, has come up).

usehostname
        Enforce the use of the hostname (with domain name appended, if given) as the name of the local system for authentication purposes (overrides the name option). This option is not normally needed since the name option is privileged.

usepeerdns
        Ask the peer for up to 2 DNS server addresses. The addresses supplied by the peer (if any) are passed to the /etc/ppp/ip-up script in the environment variables DNS1 and DNS2, and the environment variable USEPEERDNS will be set to 1. In addition, pppd will create an /etc/ppp/resolv.conf file containing one or two nameserver lines with the address(es) supplied by the peer.

user name
        Sets the name used for authenticating the local system to the peer to name.

vj-max-slots n
        Sets the number of connection slots to be used by the Van Jacobson TCP/IP header compression and decompression code to n, which must be between 2 and 16 (inclusive).

welcome script
        Run the executable or shell command specified by script before initiating PPP negotiation, after the connect script (if any) has completed.
        A value for this option from a privileged source cannot be overridden by a non-privileged user.

xonxoff Use software flow control (i.e. XON/XOFF) to control the flow of data on the serial port.

OPTIONS FILES
Options can be taken from files as well as the command line. Pppd reads options from the files /etc/ppp/options, ~/.ppprc and /etc/ppp/options.ttyname (in that order) before processing the options on the command line. (In fact, the command-line options are scanned to find the terminal name before the options.ttyname file is read.) In forming the name of the options.ttyname file, the initial /dev/ is removed from the terminal name, and any remaining / characters are replaced with dots.

An options file is parsed into a series of words, delimited by whitespace. Whitespace can be included in a word by enclosing the word in double-quotes ("). A backslash (\) quotes the following character. A hash (#) starts a comment, which continues until the end of the line. There is no restriction on using the file or call options within an options file.

SECURITY
pppd provides system administrators with sufficient access control that PPP access to a server machine can be provided to legitimate users without fear of compromising the security of the server or the network it's on. This control is provided through restrictions on which IP addresses the peer may use, based on its authenticated identity (if any), and through restrictions on which options a non-privileged user may use. Several of pppd's options are privileged, in particular those which permit potentially insecure configurations; these options are only accepted in files which are under the control of the system administrator, or if pppd is being run by root.

The default behaviour of pppd is to allow an unauthenticated peer to use a given IP address only if the system does not already have a route to that IP address.
For example, a system with a permanent connection to the wider internet will normally have a default route, and thus all peers will have to authenticate themselves in order to set up a connection. On such a system, the auth option is the default. On the other hand, a system where the PPP link is the only connection to the internet will not normally have a default route, so the peer will be able to use almost any IP address without authenticating itself.

As indicated above, some security-sensitive options are privileged, which means that they may not be used by an ordinary non-privileged user running a setuid-root pppd, either on the command line, in the user's ~/.ppprc file, or in an options file read using the file option. Privileged options may be used in the /etc/ppp/options file or in an options file read using the call option. If pppd is being run by the root user, privileged options can be used without restriction.

When opening the device, pppd uses either the invoking user's user ID or the root UID (that is, 0), depending on whether the device name was specified by the user or the system administrator. If the device name comes from a privileged source, that is, /etc/ppp/options or an options file read using the call option, pppd uses full root privileges when opening the device. Thus, by creating an appropriate file under /etc/ppp/peers, the system administrator can allow users to establish a ppp connection via a device which they would not normally have permission to access. Otherwise pppd uses the invoking user's real UID when opening the device.

AUTHENTICATION
Authentication is the process whereby one peer convinces the other of its identity. This involves the first peer sending its name to the other, together with some kind of secret information which could only come from the genuine authorized user of that name. In such an exchange, we will call the first peer the "client" and the other the "server".
The client has a name by which it identifies itself to the server, and the server also has a name by which it identifies itself to the client. Generally the genuine client shares some secret (or password) with the server, and authenticates itself by proving that it knows that secret. Very often, the names used for authentication correspond to the internet hostnames of the peers, but this is not essential.

At present, pppd supports three authentication protocols: the Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), and Extensible Authentication Protocol (EAP). PAP involves the client sending its name and a cleartext password to the server to authenticate itself. In contrast, the server initiates the CHAP authentication exchange by sending a challenge to the client (the challenge packet includes the server's name). The client must respond with a response which includes its name plus a hash value derived from the shared secret and the challenge, in order to prove that it knows the secret.

The PPP protocol, being symmetrical, allows both peers to require the other to authenticate itself. In that case, two separate and independent authentication exchanges will occur. The two exchanges could use different authentication protocols, and in principle, different names could be used in the two exchanges.

The default behaviour of pppd is to agree to authenticate if requested, and to not require authentication from the peer. However, pppd will not agree to authenticate itself with a particular protocol if it has no secrets which could be used to do so.

Pppd stores secrets for use in authentication in secrets files (/etc/ppp/pap-secrets for PAP, /etc/ppp/chap-secrets for CHAP/MS-CHAP/MS-CHAPv2). Both secrets files have the same format. The secrets files can contain secrets for pppd to use in authenticating itself to other systems, as well as secrets for pppd to use when authenticating other systems to itself.
Each line in a secrets file contains one secret. A given secret is specific to a particular combination of client and server - it can only be used by that client to authenticate itself to that server. Thus each line in a secrets file has at least 3 fields: the name of the client, the name of the server, and the secret. These fields may be followed by a list of the IP addresses that the specified client may use when connecting to the specified server.

A secrets file is parsed into words as for an options file, so the client name, server name and secrets fields must each be one word, with any embedded spaces or other special characters quoted or escaped. Note that case is significant in the client and server names and in the secret.

If the secret starts with an `@', what follows is assumed to be the name of a file from which to read the secret. A "*" as the client or server name matches any name. When selecting a secret, pppd takes the best match, i.e. the match with the fewest wildcards.

Any following words on the same line are taken to be a list of acceptable IP addresses for that client. If there are only 3 words on the line, or if the first word is "-", then all IP addresses are disallowed. To allow any address, use "*". A word starting with "!" indicates that the specified address is not acceptable. An address may be followed by "/" and a number n, to indicate a whole subnet, i.e. all addresses which have the same value in the most significant n bits. In this form, the address may be followed by a plus sign ("+") to indicate that one address from the subnet is authorized, based on the ppp network interface unit number in use. In this case, the host part of the address will be set to the unit number plus one.

Thus a secrets file contains both secrets for use in authenticating other hosts, plus secrets which we use for authenticating ourselves to others.
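As an illustration of these rules, a hypothetical /etc/ppp/chap-secrets might look like the following. All client names, secrets, file names and addresses here are invented for the example:

```
# client      server    secret                 acceptable IP addresses
joespc        server    "joe's secret"         192.168.1.10
susanpc       server    @/etc/ppp/susan.key    172.16.2.0/24
*             server    "guest-pass"           !192.168.1.1 192.168.1.64/26
server        joespc    "reverse secret"       -
```

The first line limits joespc to a single address; the second reads its secret from a file and allows any address in a /24 subnet; the third matches any client name (a wildcard, so it is only chosen when no better match exists) and excludes the server's own address; the last is a secret this system would use to authenticate itself to joespc, with "-" disallowing all addresses since it is never used for checking a peer.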
When pppd is authenticating the peer (checking the peer's identity), it chooses a secret with the peer's name in the first field and the name of the local system in the second field. The name of the local system defaults to the hostname, with the domain name appended if the domain option is used. This default can be overridden with the name option, except when the usehostname option is used.

When pppd is choosing a secret to use in authenticating itself to the peer, it first determines what name it is going to use to identify itself to the peer. This name can be specified by the user with the user option. If this option is not used, the name defaults to the name of the local system, determined as described in the previous paragraph. Then pppd looks for a secret with this name in the first field and the peer's name in the second field. Pppd will know the name of the peer if CHAP or EAP authentication is being used, because the peer will have sent it in the challenge packet. However, if PAP is being used, pppd will have to determine the peer's name from the options specified by the user. The user can specify the peer's name directly with the remotename option. Otherwise, if the remote IP address was specified by a name (rather than in numeric form), that name will be used as the peer's name. Failing that, pppd will use the null string as the peer's name.

When authenticating the peer with PAP, the supplied password is first compared with the secret from the secrets file. If the password doesn't match the secret, the password is encrypted using crypt() and checked against the secret again. Thus secrets for authenticating the peer can be stored in encrypted form if desired. If the papcrypt option is given, the first (unencrypted) comparison is omitted, for better security. Furthermore, if the login option was specified, the username and password are also checked against the system password database.
Thus, the system administrator can set up the pap-secrets file to allow PPP access only to certain users, and to restrict the set of IP addresses that each user can use. Typically, when using the login option, the secret in /etc/ppp/pap-secrets would be "", which will match any password supplied by the peer. This avoids the need to have the same secret in two places.

Authentication must be satisfactorily completed before IPCP (or any other Network Control Protocol) can be started. If the peer is required to authenticate itself, and fails to do so, pppd will terminate the link (by closing LCP). If IPCP negotiates an unacceptable IP address for the remote host, IPCP will be closed. IP packets can only be sent or received when IPCP is open.

In some cases it is desirable to allow some hosts which can't authenticate themselves to connect and use one of a restricted set of IP addresses, even when the local host generally requires authentication. If the peer refuses to authenticate itself when requested, pppd takes that as equivalent to authenticating with PAP using the empty string for the username and password. Thus, by adding a line to the pap-secrets file which specifies the empty string for the client and password, it is possible to allow restricted access to hosts which refuse to authenticate themselves.

ROUTING
When IPCP negotiation is completed successfully, pppd will inform the kernel of the local and remote IP addresses for the ppp interface. This is sufficient to create a host route to the remote end of the link, which will enable the peers to exchange IP packets. Communication with other machines generally requires further modification to routing tables and/or ARP (Address Resolution Protocol) tables. In most cases the defaultroute and/or proxyarp options are sufficient for this, but in some cases further intervention is required. The /etc/ppp/ip-up script can be used for this.
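Where the stock options are not enough, routing fixups are typically done in /etc/ppp/ip-up. The sketch below shows the shape such a script might take, using the parameter order documented in the SCRIPTS section; the default values and the 10.1.0.0 route target are invented so the sketch can be exercised outside pppd:

```shell
#!/bin/sh
# Sketch of an /etc/ppp/ip-up script. pppd invokes it as:
#   ip-up interface-name tty-device speed local-IP-address remote-IP-address ipparam
# The defaults below exist only so the sketch can run without pppd.
IFNAME="${1:-ppp0}"
TTYDEV="${2:-/dev/tty00}"
SPEED="${3:-115200}"
IPLOCAL="${4:-192.0.2.1}"
IPREMOTE="${5:-192.0.2.2}"

SUMMARY="ip-up: $IFNAME $IPLOCAL -> $IPREMOTE"
echo "$SUMMARY"

# A real script might add a route to a hypothetical network reachable
# via the peer, for example:
#   route add -net 10.1.0.0 "$IPREMOTE"
```

Remember that pppd runs these scripts as root with an empty environment and does not wait for them to finish.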
Sometimes it is desirable to add a default route through the remote host, as in the case of a machine whose only connection to the Internet is through the ppp interface. The defaultroute option causes pppd to create such a default route when IPCP comes up, and delete it when the link is terminated.

In some cases it is desirable to use proxy ARP, for example on a server machine connected to a LAN, in order to allow other hosts to communicate with the remote host. The proxyarp option causes pppd to look for a network interface on the same subnet as the remote host (an interface supporting broadcast and ARP, which is up and not a point-to-point or loopback interface). If found, pppd creates a permanent, published ARP entry with the IP address of the remote host and the hardware address of the network interface found.

When the demand option is used, the interface IP addresses have already been set at the point when IPCP comes up. If pppd has not been able to negotiate the same addresses that it used to configure the interface (for example when the peer is an ISP that uses dynamic IP address assignment), pppd has to change the interface IP addresses to the negotiated addresses. This may disrupt existing connections, and the use of demand dialing with peers that do dynamic IP address assignment is not recommended.

MULTILINK
Multilink PPP provides the capability to combine two or more PPP links between a pair of machines into a single `bundle', which appears as a single virtual PPP link which has the combined bandwidth of the individual links. Currently, multilink PPP is only supported under Linux.

Pppd detects that the link it is controlling is connected to the same peer as another link using the peer's endpoint discriminator and the authenticated identity of the peer (if it authenticates itself). The endpoint discriminator is a block of data which is hopefully unique for each peer.
Several types of data can be used, including locally-assigned strings of bytes, IP addresses, MAC addresses, randomly generated strings of bytes, or E-164 phone numbers. The endpoint discriminator sent to the peer by pppd can be set using the endpoint option.

In some circumstances the peer may send no endpoint discriminator or a non-unique value. The optional bundle option adds an extra string which is added to the peer's endpoint discriminator and authenticated identity when matching up links to be joined together in a bundle. The bundle option can also be used to allow the establishment of multiple bundles between the local system and the peer. Pppd uses a TDB database in /var/run/pppd.tdb to match up links.

Assuming that multilink is enabled and the peer is willing to negotiate multilink, then when pppd is invoked to bring up the first link to the peer, it will detect that no other link is connected to the peer and create a new bundle, that is, another ppp network interface unit. When another pppd is invoked to bring up another link to the peer, it will detect the existing bundle and join its link to it. Currently, if the first pppd terminates (for example, because of a hangup or a received signal) the bundle is destroyed.
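To make the sequence above concrete, a pair of invocations on a multilink-capable (i.e. Linux) system might look like the following; the device names, speed, bundle string and user name are purely illustrative:

```
# First link: no bundle exists yet for this peer, so pppd creates a
# new one (a new pppN network interface unit).
pppd /dev/ttyS0 115200 multilink bundle joelink user joe

# Second invocation to the same peer: the endpoint discriminator,
# authenticated identity and the "joelink" string match the entry in
# /var/run/pppd.tdb, so this link joins the existing bundle.
pppd /dev/ttyS1 115200 multilink bundle joelink user joe
```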
EXAMPLES
The following examples assume that the /etc/ppp/options file contains the auth option (as in the default /etc/ppp/options file in the ppp distribution).

Probably the most common use of pppd is to dial out to an ISP. This can be done with a command such as

        pppd call isp

where the /etc/ppp/peers/isp file is set up by the system administrator to contain something like this:

        ttyS0 19200 crtscts
        connect '/usr/sbin/chat -v -f /etc/ppp/chat-isp'
        noauth

In this example, we are using chat to dial the ISP's modem and go through any logon sequence required. The /etc/ppp/chat-isp file contains the script used by chat; it could for example contain something like this:

        ABORT "NO CARRIER"
        ABORT "NO DIALTONE"
        ABORT "ERROR"
        ABORT "NO ANSWER"
        ABORT "BUSY"
        ABORT "Username/Password Incorrect"
        "" "at"
        OK "at&d0&c1"
        OK "atdt2468135"
        "name:" "^Umyuserid"
        "word:" "\qmypassword"
        "ispts" "\q^Uppp"
        "~-^Uppp-~"

See the chat(8) man page for details of chat scripts.

Pppd can also be used to provide a dial-in ppp service for users. If the users already have login accounts, the simplest way to set up the ppp service is to let the users log in to their accounts and run pppd (installed setuid-root) with a command such as

        pppd proxyarp

To allow a user to use the PPP facilities, you need to allocate an IP address for that user's machine and create an entry in /etc/ppp/pap-secrets or /etc/ppp/chap-secrets (depending on which authentication method the PPP implementation on the user's machine supports), so that the user's machine can authenticate itself. For example, if Joe has a machine called "joespc" which is to be allowed to dial in to the machine called "server" and use the IP address joespc.my.net, you would add an entry like this to /etc/ppp/pap-secrets or /etc/ppp/chap-secrets:

        joespc server "joe's secret" joespc.my.net

Alternatively, you can create a username called (for example) "ppp", whose login shell is pppd and whose home directory is /etc/ppp.
Options to be used when pppd is run this way can be put in /etc/ppp/.ppprc.

If your serial connection is any more complicated than a piece of wire, you may need to arrange for some control characters to be escaped. In particular, it is often useful to escape XON (^Q) and XOFF (^S), using asyncmap a0000. If the path includes a telnet, you probably should escape ^] as well (asyncmap 200a0000). If the path includes an rlogin, you will need to use the escape ff option on the end which is running the rlogin client, since many rlogin implementations are not transparent; they will remove the sequence [0xff, 0xff, 0x73, 0x73, followed by any 8 bytes] from the stream.

DIAGNOSTICS
Messages are sent to the syslog daemon using facility LOG_RAS. (This can be overridden by recompiling pppd with the macro LOG_PPP defined as the desired facility.) See the syslog(8) documentation for details of where the syslog daemon will write the messages. On most systems, the syslog daemon uses the /etc/syslog.conf file to specify the destination(s) for syslog messages. You may need to edit that file to suit.

The debug option causes the contents of all control packets sent or received to be logged, that is, all LCP, PAP, CHAP, EAP or IPCP packets. This can be useful if the PPP negotiation does not succeed or if authentication fails. If debugging is enabled at compile time, the debug option also causes other debugging messages to be logged.

Debugging can also be enabled or disabled by sending a SIGUSR1 signal to the pppd process. This signal acts as a toggle.

EXIT STATUS
The exit status of pppd is set to indicate whether any error was detected, or the reason for the link being terminated. The values used are:

0       Pppd has detached, or otherwise the connection was successfully established and terminated at the peer's request.

1       An immediately fatal error of some kind occurred, such as an essential system call failing, or running out of virtual memory.
2       An error was detected in processing the options given, such as two mutually exclusive options being used.

3       Pppd is not setuid-root and the invoking user is not root.

4       The kernel does not support PPP, for example, the PPP kernel driver is not included or cannot be loaded.

5       Pppd terminated because it was sent a SIGINT, SIGTERM or SIGHUP signal.

6       The serial port could not be locked.

7       The serial port could not be opened.

8       The connect script failed (returned a non-zero exit status).

9       The command specified as the argument to the pty option could not be run.

10      The PPP negotiation failed, that is, it didn't reach the point where at least one network protocol (e.g. IP) was running.

11      The peer system failed (or refused) to authenticate itself.

12      The link was established successfully and terminated because it was idle.

13      The link was established successfully and terminated because the connect time limit was reached.

14      Callback was negotiated and an incoming call should arrive shortly.

15      The link was terminated because the peer is not responding to echo requests.

16      The link was terminated by the modem hanging up.

17      The PPP negotiation failed because serial loopback was detected.

18      The init script failed (returned a non-zero exit status).

19      We failed to authenticate ourselves to the peer.

SCRIPTS
Pppd invokes scripts at various stages in its processing which can be used to perform site-specific ancillary processing. These scripts are usually shell scripts, but could be executable code files instead. Pppd does not wait for the scripts to finish. The scripts are executed as root (with the real and effective user-id set to 0), so that they can do things such as update routing tables or run privileged daemons. Be careful that the contents of these scripts do not compromise your system's security.
Pppd runs the scripts with standard input, output and error redirected to /dev/null, and with an environment that is empty except for some environment variables that give information about the link. The environment variables that pppd sets are:

DEVICE  The name of the serial tty device being used.

IFNAME  The name of the network interface being used.

IPLOCAL The IP address for the local end of the link. This is only set when IPCP has come up.

IPREMOTE
        The IP address for the remote end of the link. This is only set when IPCP has come up.

PEERNAME
        The authenticated name of the peer. This is only set if the peer authenticates itself.

SPEED   The baud rate of the tty device.

ORIG_UID
        The real user-id of the user who invoked pppd.

PPPLOGNAME
        The username of the real user-id that invoked pppd. This is always set.

For the ip-down and auth-down scripts, pppd also sets the following variables giving statistics for the connection:

CONNECT_TIME
        The number of seconds from when the PPP negotiation started until the connection was terminated.

BYTES_SENT
        The number of bytes sent (at the level of the serial port) during the connection.

BYTES_RCVD
        The number of bytes received (at the level of the serial port) during the connection.

LINKNAME
        The logical name of the link, set with the linkname option.

DNS1    If the peer supplies DNS server addresses, this variable is set to the first DNS server address supplied.

DNS2    If the peer supplies DNS server addresses, this variable is set to the second DNS server address supplied.

Pppd invokes the following scripts, if they exist. It is not an error if they don't exist.

/etc/ppp/auth-up
        A program or script which is executed after the remote system successfully authenticates itself. It is executed with the parameters

        interface-name peer-name user-name tty-device speed

        Note that this script is not executed if the peer doesn't authenticate itself, for example when the noauth option is used.
/etc/ppp/auth-down
        A program or script which is executed when the link goes down, if /etc/ppp/auth-up was previously executed. It is executed in the same manner with the same parameters as /etc/ppp/auth-up.

/etc/ppp/ip-up
        A program or script which is executed when the link is available for sending and receiving IP packets (that is, IPCP has come up). It is executed with the parameters

        interface-name tty-device speed local-IP-address remote-IP-address ipparam

/etc/ppp/ip-down
        A program or script which is executed when the link is no longer available for sending and receiving IP packets. This script can be used for undoing the effects of the /etc/ppp/ip-up script. It is invoked in the same manner and with the same parameters as the ip-up script.

/etc/ppp/ipv6-up
        Like /etc/ppp/ip-up, except that it is executed when the link is available for sending and receiving IPv6 packets. It is executed with the parameters

        interface-name tty-device speed local-link-local-address remote-link-local-address ipparam

/etc/ppp/ipv6-down
        Similar to /etc/ppp/ip-down, but it is executed when IPv6 packets can no longer be transmitted on the link. It is executed with the same parameters as the ipv6-up script.

/etc/ppp/ipx-up
        A program or script which is executed when the link is available for sending and receiving IPX packets (that is, IPXCP has come up).
        It is executed with the parameters

        interface-name tty-device speed network-number local-IPX-node-address remote-IPX-node-address local-IPX-routing-protocol remote-IPX-routing-protocol local-IPX-router-name remote-IPX-router-name ipparam pppd-pid

        The local-IPX-routing-protocol and remote-IPX-routing-protocol field may be one of the following:

        NONE      to indicate that there is no routing protocol
        RIP       to indicate that RIP/SAP should be used
        NLSP      to indicate that Novell NLSP should be used
        RIP NLSP  to indicate that both RIP/SAP and NLSP should be used

/etc/ppp/ipx-down
        A program or script which is executed when the link is no longer available for sending and receiving IPX packets. This script can be used for undoing the effects of the /etc/ppp/ipx-up script. It is invoked in the same manner and with the same parameters as the ipx-up script.

FILES
/var/run/pppn.pid (BSD or Linux), /etc/ppp/pppn.pid (others)
        Process-ID for pppd process on ppp interface unit n.

/var/run/ppp-name.pid (BSD or Linux), /etc/ppp/ppp-name.pid (others)
        Process-ID for pppd process for logical link name (see the linkname option).

/etc/ppp/pap-secrets
        Usernames, passwords and IP addresses for PAP authentication. This file should be owned by root and not readable or writable by any other user. Pppd will log a warning if this is not the case.

/etc/ppp/chap-secrets
        Names, secrets and IP addresses for CHAP/MS-CHAP/MS-CHAPv2 authentication. As for /etc/ppp/pap-secrets, this file should be owned by root and not readable or writable by any other user. Pppd will log a warning if this is not the case.

/etc/ppp/options
        System default options for pppd, read before user default options or command-line options.

~/.ppprc
        User default options, read before /etc/ppp/options.ttyname.

/etc/ppp/options.ttyname
        System default options for the serial port being used, read after ~/.ppprc.
In forming the ttyname part of this filename, an initial /dev/ is stripped from the port name (if present), and any slashes in the remaining part are converted to dots. /etc/ppp/peers A directory containing options files which may contain privileged options, even if pppd was invoked by a user other than root. The system administrator can create options files in this directory to permit non-privileged users to dial out without requiring the peer to authenticate, but only to certain trusted peers. SEE ALSO RFC1144 Jacobson, V. Compressing TCP/IP headers for low-speed serial links. February 1990. RFC1321 Rivest, R. The MD5 Message-Digest Algorithm. April 1992. RFC1332 McGregor, G. PPP Internet Protocol Control Protocol (IPCP). May 1992. RFC1334 Lloyd, B.; Simpson, W.A. PPP authentication protocols. October 1992. RFC1661 Simpson, W.A. The Point-to-Point Protocol (PPP). July 1994. RFC1662 Simpson, W.A. PPP in HDLC-like Framing. July 1994. RFC2284 Blunk, L.; Vollbrecht, J., PPP Extensible Authentication Protocol (EAP). March 1998. RFC2472 Haskin, D. IP Version 6 over PPP December 1998. NOTES Some limited degree of control can be exercised over a running pppd process by sending it a signal from the list below. SIGINT, SIGTERM These signals cause pppd to terminate the link (by closing LCP), restore the serial device settings, and exit. SIGHUP This signal causes pppd to terminate the link, restore the serial device settings, and close the serial device. If the persist or demand option has been specified, pppd will try to reopen the serial device and start another connection (after the holdoff period). Otherwise pppd will exit. If this signal is received during the holdoff period, it causes pppd to end the holdoff period immediately. SIGUSR1 This signal toggles the state of the debug option. SIGUSR2 This signal causes pppd to renegotiate compression. This can be useful to re-enable compression after it has been disabled as a result of a fatal decompression error. 
(Fatal decompression errors generally indicate a bug in one or other implementation.) AUTHORS Paul Mackerras (Paul.Mackerras@samba.org), based on earlier work by Drew Perkins, Brad Clements, Karl Fox, Greg Christy, and Brad Parker. PPPD(8)
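The /etc/ppp/options.ttyname naming rule described under FILES above (strip an initial /dev/ if present, convert any remaining slashes to dots) can be sketched in Python; the helper name is illustrative and not part of pppd itself:

```python
def options_filename(port):
    """Derive the /etc/ppp/options.ttyname filename for a serial port.

    Mirrors the rule from the pppd man page: an initial /dev/ is
    stripped from the port name (if present), and any slashes in the
    remaining part are converted to dots.
    """
    if port.startswith("/dev/"):
        port = port[len("/dev/"):]
    return "/etc/ppp/options." + port.replace("/", ".")

# e.g. /dev/cu.modem -> /etc/ppp/options.cu.modem
#      /dev/pts/1    -> /etc/ppp/options.pts.1
```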
|
httpd-wrapper
|
This tool extracts environment variable definitions from a specific plist file and then execs the httpd executable, thereby allowing variables to be used in Apache's parameterized config files. It also gathers minimal metrics on the usage of Apache. FILES /etc/apache2/env.plist The file where environment variables are defined /var/log/apache2/httpd-wrapper.log The file where errors are logged SEE ALSO httpd(8) macOS Sept. 20, 2016 macOS
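The behavior described above (read environment variables from a plist, then exec httpd) can be sketched with Python's plistlib. The file path and exec step come from this page; the plist layout (a flat dictionary of string values) is an assumption for illustration:

```python
import os
import plistlib

def load_env(plist_bytes):
    """Parse a plist and return its top-level dictionary as environment
    variables (assumed here to be a flat string-to-string mapping)."""
    return {str(k): str(v) for k, v in plistlib.loads(plist_bytes).items()}

# Stand-in for the contents of /etc/apache2/env.plist.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict><key>SERVER_NAME</key><string>example.test</string></dict>
</plist>"""

env = dict(os.environ, **load_env(sample))
# A real wrapper would now exec the server with the merged environment, e.g.:
#   os.execve("/usr/sbin/httpd", ["httpd", "-D", "FOREGROUND"], env)
```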
|
httpd-wrapper – Wrapper script for httpd web server
| null | null | null |
system_profiler
|
system_profiler reports on the hardware and software configuration of the system. It can generate plain text reports, XML reports which can be opened with System Information.app, or JSON reports. Progress and error messages are printed to stderr while actual report data is printed to stdout. Redirect stderr to /dev/null to suppress progress and error messages. The following options are available: -xml Generates a report in XML format. If the XML report is redirected to a file with a ".spx" suffix that file can be opened with System Information.app. -json Generates a report in JSON format. -listDataTypes Lists the available datatypes. -detailLevel level Specifies the level of detail for the report: mini (report with no personal information), basic (basic hardware and network information), full (all available information). -timeout Specifies the maximum time to wait in seconds for results. If some information is not available within the specified time limit then an incomplete or partial report will be generated. The default timeout is 180 seconds. Specifying a timeout of 0 means no timeout. -usage Prints usage info and examples.
|
system_profiler – reports system hardware and software configuration.
|
system_profiler [-usage] system_profiler [-listDataTypes] system_profiler [-xml] dataType1 ... dataTypeN system_profiler [-xml] [-detailLevel level] system_profiler [-json] dataType1 ... dataTypeN system_profiler [-json] [-detailLevel level]
| null |
system_profiler Generates a text report with the standard detail level. system_profiler -detailLevel mini Generates a short report containing no personal information. system_profiler -listDataTypes Shows a list of the available data types. system_profiler SPSoftwareDataType SPNetworkDataType Generates a text report containing only software and network data. system_profiler -xml > MyReport.spx Creates an XML file which can be opened by System Information.app. AUTHORS Apple Inc. Darwin June 30, 2003 Darwin
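A -json report can be post-processed with standard tools. The sketch below uses a canned sample standing in for the output of system_profiler -json SPSoftwareDataType; the top-level layout assumed here (data type name mapping to a list of item dictionaries) should be verified against real output on your system:

```python
import json

# Stand-in for: system_profiler -json SPSoftwareDataType 2>/dev/null
# The key/item layout is an assumption about the report shape.
report = json.loads("""
{
  "SPSoftwareDataType": [
    {"_name": "os_overview", "os_version": "macOS 13.0"}
  ]
}
""")

# Collect the item names for the requested data type.
names = [item["_name"] for item in report.get("SPSoftwareDataType", [])]
```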
|
fcgistarter
| null |
fcgistarter - Start a FastCGI program
|
fcgistarter -c command -p port [ -i interface ] -N num NOTE: Currently only works on Unix systems.
|
-c command Absolute path of the FastCGI program -p port Port which the program will listen on -i interface Interface which the program will listen on -N num Number of instances of the program Apache HTTP Server 2020-02-08 FCGISTARTER(8)
| null |
bluetoothd
| null | null | null | null | null |
slapschema
|
Slapschema is used to check schema compliance of the contents of a slapd(8) database. It opens the given database determined by the database number or suffix and checks the compliance of its contents with the corresponding schema. Errors are written to standard output or the specified file. Databases configured as subordinate of this one are also output, unless -g is specified. Administrators may need to modify existing schema items, including adding new required attributes to objectClasses, removing existing required or allowed attributes from objectClasses, entirely removing objectClasses, or any other change that may result in making perfectly valid entries no longer compliant with the modified schema. The execution of the slapschema tool after modifying the schema can point out inconsistencies that would otherwise surface only when inconsistent entries need to be modified. The entry records are checked in database order, not superior first order. The entry records will be checked considering all (user and operational) attributes stored in the database. Dynamically generated attributes (such as subschemaSubentry) will not be considered.
|
slapschema - SLAPD in-database schema checking utility
|
/usr/sbin/slapschema [-a filter] [-b suffix] [-c] [-d debug-level] [-f slapd.conf] [-F confdir] [-g] [-H URI] [-l error-file] [-n dbnum] [-o option[=value]] [-s subtree-dn] [-v]
|
-a filter Only check entries matching the asserted filter. For example slapschema -a \ "(!(entryDN:dnSubtreeMatch:=ou=People,dc=example,dc=com))" will check all but the "ou=People,dc=example,dc=com" subtree of the "dc=example,dc=com" database. Deprecated; use -H ldap:///???(filter) instead. -b suffix Use the specified suffix to determine which database to check. The -b option cannot be used in conjunction with the -n option. -c Enable continue (ignore errors) mode. -d debug-level Enable debugging messages as defined by the specified debug-level; see slapd(8) for details. -f slapd.conf Specify an alternative slapd.conf(5) file. -F confdir Specify a config directory. If both -f and -F are specified, the config file will be read and converted to config directory format and written to the specified directory. If neither option is specified, an attempt to read the default config directory will be made before trying to use the default config file. If a valid config directory exists then the default config file is ignored. -g Disable subordinate gluing. Only the specified database will be processed, and not its glued subordinates (if any). -H URI Use dn, scope and filter from URI to only handle matching entries. -l error-file Write errors to the specified file instead of standard output. -n dbnum Check the dbnum-th database listed in the configuration file. The config database, slapd-config(5), is always the first database, so use -n 0 to select it. The -n option cannot be used in conjunction with the -b option. -o option[=value] Specify an option with a(n optional) value. Possible generic options/values are: syslog=<subsystems> (see `-s' in slapd(8)) syslog-level=<level> (see `-S' in slapd(8)) syslog-user=<user> (see `-l' in slapd(8)) -s subtree-dn Only check entries in the subtree specified by this DN. Implies -b subtree-dn if no -b nor -n option is given. Deprecated; use -H ldap:///subtree-dn instead. -v Enable verbose mode. 
LIMITATIONS For some backend types, your slapd(8) should not be running (at least, not in read-write mode) when you do this to ensure consistency of the database. It is always safe to run slapschema with the slapd-bdb(5), slapd-hdb(5), and slapd-null(5) backends.
|
To check the schema compliance of your SLAPD database after modifications to the schema, and put any errors in a file called errors.ldif, give the command: /usr/sbin/slapschema -l errors.ldif SEE ALSO ldap(3), ldif(5), slapd(8) "OpenLDAP Administrator's Guide" (http://www.OpenLDAP.org/doc/admin/) ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 SLAPSCHEMA(8C)
|
kadmin
|
kadmin and kadmin.local are command-line interfaces to the Kerberos V5 administration system. They provide nearly identical functionalities; the difference is that kadmin.local directly accesses the KDC database, while kadmin performs operations using kadmind(8). Except as explicitly noted otherwise, this man page will use "kadmin" to refer to both versions. kadmin provides for the maintenance of Kerberos principals, password policies, and service key tables (keytabs). The remote kadmin client uses Kerberos to authenticate to kadmind using the service principal kadmin/admin or kadmin/ADMINHOST (where ADMINHOST is the fully-qualified hostname of the admin server). If the credentials cache contains a ticket for one of these principals, and the -c credentials_cache option is specified, that ticket is used to authenticate to kadmind. Otherwise, the -p and -k options are used to specify the client Kerberos principal name used to authenticate. Once kadmin has determined the principal name, it requests a service ticket from the KDC, and uses that service ticket to authenticate to kadmind. Since kadmin.local directly accesses the KDC database, it usually must be run directly on the primary KDC with sufficient permissions to read the KDC database. If the KDC database uses the LDAP database module, kadmin.local can be run on any host which can access the LDAP server.
|
kadmin - Kerberos V5 database administration program
|
kadmin [-O|-N] [-r realm] [-p principal] [-q query] [[-c cache_name]|[-k [-t keytab]]|-n] [-w password] [-s admin_server[:port]] [command args...] kadmin.local [-r realm] [-p principal] [-q query] [-d dbname] [-e enc:salt ...] [-m] [-x db_args] [command args...]
|
-r realm Use realm as the default database realm. -p principal Use principal to authenticate. Otherwise, kadmin will append /admin to the primary principal name of the default ccache, the value of the USER environment variable, or the username as obtained with getpwuid, in order of preference. -k Use a keytab to decrypt the KDC response instead of prompting for a password. In this case, the default principal will be host/hostname. If there is no keytab specified with the -t option, then the default keytab will be used. -t keytab Use keytab to decrypt the KDC response. This can only be used with the -k option. -n Requests anonymous processing. Two types of anonymous principals are supported. For fully anonymous Kerberos, configure PKINIT on the KDC and configure pkinit_anchors in the client's krb5.conf(5). Then use the -n option with a principal of the form @REALM (an empty principal name followed by the at-sign and a realm name). If permitted by the KDC, an anonymous ticket will be returned. A second form of anonymous tickets is supported; these realm-exposed tickets hide the identity of the client but not the client's realm. For this mode, use kinit -n with a normal principal name. If supported by the KDC, the principal (but not realm) will be replaced by the anonymous principal. As of release 1.8, the MIT Kerberos KDC only supports fully anonymous operation. -c credentials_cache Use credentials_cache as the credentials cache. The cache should contain a service ticket for the kadmin/admin or kadmin/ADMINHOST (where ADMINHOST is the fully-qualified hostname of the admin server) service; it can be acquired with the kinit(1) program. If this option is not specified, kadmin requests a new service ticket from the KDC, and stores it in its own temporary ccache. -w password Use password instead of prompting for one. Use this option with care, as it may expose the password to other users on the system via the process list. 
-q query Perform the specified query and then exit. -d dbname Specifies the name of the KDC database. This option does not apply to the LDAP database module. -s admin_server[:port] Specifies the admin server which kadmin should contact. -m If using kadmin.local, prompt for the database master password instead of reading it from a stash file. -e "enc:salt ..." Sets the keysalt list to be used for any new keys created. See Keysalt_lists in kdc.conf(5) for a list of possible values. -O Force use of old AUTH_GSSAPI authentication flavor. -N Prevent fallback to AUTH_GSSAPI authentication flavor. -x db_args Specifies the database specific arguments. See the next section for supported options. Starting with release 1.14, if any command-line arguments remain after the options, they will be treated as a single query to be executed. This mode of operation is intended for scripts and behaves differently from the interactive mode in several respects: • Query arguments are split by the shell, not by kadmin. • Informational and warning messages are suppressed. Error messages and query output (e.g. for get_principal) will still be displayed. • Confirmation prompts are disabled (as if -force was given). Password prompts will still be issued as required. • The exit status will be non-zero if the query fails. The -q option does not carry these behavior differences; the query will be processed as if it was entered interactively. The -q option cannot be used in combination with a query in the remaining arguments. DATABASE OPTIONS Database options can be used to override database-specific defaults. Supported options for the DB2 module are: -x dbname=*filename* Specifies the base filename of the DB2 database. -x lockiter Make iteration operations hold the lock for the duration of the entire operation, rather than temporarily releasing the lock while handling each principal. This is the default behavior, but this option exists to allow command line override of a [dbmodules] setting. 
First introduced in release 1.13. -x unlockiter Make iteration operations unlock the database for each principal, instead of holding the lock for the duration of the entire operation. First introduced in release 1.13. Supported options for the LDAP module are: -x host=ldapuri Specifies the LDAP server to connect to by an LDAP URI. -x binddn=bind_dn Specifies the DN used to bind to the LDAP server. -x bindpwd=password Specifies the password or SASL secret used to bind to the LDAP server. Using this option may expose the password to other users on the system via the process list; to avoid this, instead stash the password using the stashsrvpw command of kdb5_ldap_util(8). -x sasl_mech=mechanism Specifies the SASL mechanism used to bind to the LDAP server. The bind DN is ignored if a SASL mechanism is used. New in release 1.13. -x sasl_authcid=name Specifies the authentication name used when binding to the LDAP server with a SASL mechanism, if the mechanism requires one. New in release 1.13. -x sasl_authzid=name Specifies the authorization name used when binding to the LDAP server with a SASL mechanism. New in release 1.13. -x sasl_realm=realm Specifies the realm used when binding to the LDAP server with a SASL mechanism, if the mechanism uses one. New in release 1.13. -x debug=level Sets the OpenLDAP client library debug level. level is an integer to be interpreted by the library. Debugging messages are printed to standard error. New in release 1.12. COMMANDS When using the remote client, available commands may be restricted according to the privileges specified in the kadm5.acl(5) file on the admin server. add_principal add_principal [options] newprinc Creates the principal newprinc, prompting twice for a password. If no password policy is specified with the -policy option, the policy named default is assigned to the principal if it exists. However, creating a policy named default will not automatically assign this policy to previously existing principals. 
This policy assignment can be suppressed with the -clearpolicy option. This command requires the add privilege. Aliases: addprinc, ank Options: -expire expdate (getdate string) The expiration date of the principal. -pwexpire pwexpdate (getdate string) The password expiration date. -maxlife maxlife (duration or getdate string) The maximum ticket life for the principal. -maxrenewlife maxrenewlife (duration or getdate string) The maximum renewable life of tickets for the principal. -kvno kvno The initial key version number. -policy policy The password policy used by this principal. If not specified, the policy default is used if it exists (unless -clearpolicy is specified). -clearpolicy Prevents any policy from being assigned when -policy is not specified. {-|+}allow_postdated -allow_postdated prohibits this principal from obtaining postdated tickets. +allow_postdated clears this flag. {-|+}allow_forwardable -allow_forwardable prohibits this principal from obtaining forwardable tickets. +allow_forwardable clears this flag. {-|+}allow_renewable -allow_renewable prohibits this principal from obtaining renewable tickets. +allow_renewable clears this flag. {-|+}allow_proxiable -allow_proxiable prohibits this principal from obtaining proxiable tickets. +allow_proxiable clears this flag. {-|+}allow_dup_skey -allow_dup_skey disables user-to-user authentication for this principal by prohibiting others from obtaining a service ticket encrypted in this principal's TGT session key. +allow_dup_skey clears this flag. {-|+}requires_preauth +requires_preauth requires this principal to preauthenticate before being allowed to kinit. -requires_preauth clears this flag. When +requires_preauth is set on a service principal, the KDC will only issue service tickets for that service principal if the client's initial authentication was performed using preauthentication. 
{-|+}requires_hwauth +requires_hwauth requires this principal to preauthenticate using a hardware device before being allowed to kinit. -requires_hwauth clears this flag. When +requires_hwauth is set on a service principal, the KDC will only issue service tickets for that service principal if the client's initial authentication was performed using a hardware device to preauthenticate. {-|+}ok_as_delegate +ok_as_delegate sets the okay as delegate flag on tickets issued with this principal as the service. Clients may use this flag as a hint that credentials should be delegated when authenticating to the service. -ok_as_delegate clears this flag. {-|+}allow_svr -allow_svr prohibits the issuance of service tickets for this principal. In release 1.17 and later, user-to-user service tickets are still allowed unless the -allow_dup_skey flag is also set. +allow_svr clears this flag. {-|+}allow_tgs_req -allow_tgs_req specifies that a Ticket-Granting Service (TGS) request for a service ticket for this principal is not permitted. +allow_tgs_req clears this flag. {-|+}allow_tix -allow_tix forbids the issuance of any tickets for this principal. +allow_tix clears this flag. {-|+}needchange +needchange forces a password change on the next initial authentication to this principal. -needchange clears this flag. {-|+}password_changing_service +password_changing_service marks this principal as a password change service principal. {-|+}ok_to_auth_as_delegate +ok_to_auth_as_delegate allows this principal to acquire forwardable tickets to itself from arbitrary users, for use with constrained delegation. {-|+}no_auth_data_required +no_auth_data_required prevents PAC or AD-SIGNEDPATH data from being added to service tickets for the principal. {-|+}lockdown_keys +lockdown_keys prevents keys for this principal from leaving the KDC via kadmind. The chpass and extract operations are denied for a principal with this attribute. The chrand operation is allowed, but will not return the new keys. 
The delete and rename operations are also denied if this attribute is set, in order to prevent a malicious administrator from replacing principals like krbtgt/* or kadmin/* with new principals without the attribute. This attribute can be set via the network protocol, but can only be removed using kadmin.local. -randkey Sets the key of the principal to a random value. -nokey Causes the principal to be created with no key. New in release 1.12. -pw password Sets the password of the principal to the specified string and does not prompt for a password. Note: using this option in a shell script may expose the password to other users on the system via the process list. -e enc:salt,... Uses the specified keysalt list for setting the keys of the principal. See Keysalt_lists in kdc.conf(5) for a list of possible values. -x db_princ_args Indicates database-specific options. The options for the LDAP database module are: -x dn=dn Specifies the LDAP object that will contain the Kerberos principal being created. -x linkdn=dn Specifies the LDAP object to which the newly created Kerberos principal object will point. -x containerdn=container_dn Specifies the container object under which the Kerberos principal is to be created. -x tktpolicy=policy Associates a ticket policy to the Kerberos principal. NOTE: • The containerdn and linkdn options cannot be specified with the dn option. • If the dn or containerdn options are not specified while adding the principal, the principals are created under the principal container configured in the realm or the realm container. • dn and containerdn should be within the subtrees or principal container configured in the realm. Example: kadmin: addprinc jennifer No policy specified for "jennifer@ATHENA.MIT.EDU"; defaulting to no policy. Enter password for principal jennifer@ATHENA.MIT.EDU: Re-enter password for principal jennifer@ATHENA.MIT.EDU: Principal "jennifer@ATHENA.MIT.EDU" created. 
kadmin: modify_principal modify_principal [options] principal Modifies the specified principal, changing the fields as specified. The options to add_principal also apply to this command, except for the -randkey, -pw, and -e options. In addition, the option -clearpolicy will clear the current policy of a principal. This command requires the modify privilege. Alias: modprinc Options (in addition to the addprinc options): -unlock Unlocks a locked principal (one which has received too many failed authentication attempts without enough time between them according to its password policy) so that it can successfully authenticate. rename_principal rename_principal [-force] old_principal new_principal Renames the specified old_principal to new_principal. This command prompts for confirmation, unless the -force option is given. This command requires the add and delete privileges. Alias: renprinc delete_principal delete_principal [-force] principal Deletes the specified principal from the database. This command prompts for deletion, unless the -force option is given. This command requires the delete privilege. Alias: delprinc change_password change_password [options] principal Changes the password of principal. Prompts for a new password if neither -randkey or -pw is specified. This command requires the changepw privilege, or that the principal running the program is the same as the principal being changed. Alias: cpw The following options are available: -randkey Sets the key of the principal to a random value. -pw password Set the password to the specified string. Using this option in a script may expose the password to other users on the system via the process list. -e enc:salt,... Uses the specified keysalt list for setting the keys of the principal. See Keysalt_lists in kdc.conf(5) for a list of possible values. -keepold Keeps the existing keys in the database. This flag is usually not necessary except perhaps for krbtgt principals. 
Example: kadmin: cpw systest Enter password for principal systest@BLEEP.COM: Re-enter password for principal systest@BLEEP.COM: Password for systest@BLEEP.COM changed. kadmin: purgekeys purgekeys [-all|-keepkvno oldest_kvno_to_keep] principal Purges previously retained old keys (e.g., from change_password -keepold) from principal. If -keepkvno is specified, then only purges keys with kvnos lower than oldest_kvno_to_keep. If -all is specified, then all keys are purged. The -all option is new in release 1.12. This command requires the modify privilege. get_principal get_principal [-terse] principal Gets the attributes of principal. With the -terse option, outputs fields as quoted tab-separated strings. This command requires the inquire privilege, or that the principal running the program be the same as the one being listed. Alias: getprinc Examples: kadmin: getprinc tlyu/admin Principal: tlyu/admin@BLEEP.COM Expiration date: [never] Last password change: Mon Aug 12 14:16:47 EDT 1996 Password expiration date: [never] Maximum ticket life: 0 days 10:00:00 Maximum renewable life: 7 days 00:00:00 Last modified: Mon Aug 12 14:16:47 EDT 1996 (bjaspan/admin@BLEEP.COM) Last successful authentication: [never] Last failed authentication: [never] Failed password attempts: 0 Number of keys: 1 Key: vno 1, aes256-cts-hmac-sha384-192 MKey: vno 1 Attributes: Policy: [none] kadmin: getprinc -terse systest systest@BLEEP.COM 3 86400 604800 1 785926535 753241234 785900000 tlyu/admin@BLEEP.COM 786100034 0 0 kadmin: list_principals list_principals [expression] Retrieves all or some principal names. expression is a shell-style glob expression that can contain the wild-card characters ?, *, and []. All principal names matching the expression are printed. If no expression is provided, all principal names are printed. If the expression does not contain an @ character, an @ character followed by the local realm is appended to the expression. This command requires the list privilege. 
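The list_principals expression matching just described (shell-style globs with ?, *, and [], with @ plus the local realm appended when the expression contains no @) can be sketched with Python's fnmatch; the realm value and helper name here are illustrative, not part of kadmin:

```python
from fnmatch import fnmatchcase

def match_principals(expression, principals, local_realm="ATHENA.MIT.EDU"):
    """Return the principals matched by a list_principals-style expression.

    If the expression contains no '@' character, '@' followed by the
    local realm is appended, as the man page describes.
    """
    if "@" not in expression:
        expression += "@" + local_realm
    return [p for p in principals if fnmatchcase(p, expression)]

principals = ["test1@ATHENA.MIT.EDU", "test2@ATHENA.MIT.EDU",
              "admin@ATHENA.MIT.EDU"]
# match_principals("test*", principals) selects only the test principals.
```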
Alias: listprincs, get_principals, getprincs Example: kadmin: listprincs test* test3@SECURE-TEST.OV.COM test2@SECURE-TEST.OV.COM test1@SECURE-TEST.OV.COM testuser@SECURE-TEST.OV.COM kadmin: get_strings get_strings principal Displays string attributes on principal. This command requires the inquire privilege. Alias: getstrs set_string set_string principal name value Sets a string attribute on principal. String attributes are used to supply per-principal configuration to the KDC and some KDC plugin modules. The following string attribute names are recognized by the KDC: require_auth Specifies an authentication indicator which is required to authenticate to the principal as a service. Multiple indicators can be specified, separated by spaces; in this case any of the specified indicators will be accepted. (New in release 1.14.) session_enctypes Specifies the encryption types supported for session keys when the principal is authenticated to as a server. See Encryption_types in kdc.conf(5) for a list of the accepted values. otp Enables One Time Passwords (OTP) preauthentication for a client principal. The value is a JSON string representing an array of objects, each having optional type and username fields. pkinit_cert_match Specifies a matching expression that defines the certificate attributes required for the client certificate used by the principal during PKINIT authentication. The matching expression is in the same format as those used by the pkinit_cert_match option in krb5.conf(5). (New in release 1.16.) This command requires the modify privilege. Alias: setstr Example: set_string host/foo.mit.edu session_enctypes aes128-cts set_string user@FOO.COM otp "[{""type"":""hotp"",""username"":""al""}]" del_string del_string principal key Deletes a string attribute from principal. This command requires the delete privilege. Alias: delstr add_policy add_policy [options] policy Adds a password policy named policy to the database. This command requires the add privilege. 
Alias: addpol The following options are available: -maxlife time (duration or getdate string) Sets the maximum lifetime of a password. -minlife time (duration or getdate string) Sets the minimum lifetime of a password. -minlength length Sets the minimum length of a password. -minclasses number Sets the minimum number of character classes required in a password. The five character classes are lower case, upper case, numbers, punctuation, and whitespace/unprintable characters. -history number Sets the number of past keys kept for a principal. This option is not supported with the LDAP KDC database module. -maxfailure maxnumber Sets the number of authentication failures before the principal is locked. Authentication failures are only tracked for principals which require preauthentication. The counter of failed attempts resets to 0 after a successful attempt to authenticate. A maxnumber value of 0 (the default) disables lockout. -failurecountinterval failuretime (duration or getdate string) Sets the allowable time between authentication failures. If an authentication failure happens after failuretime has elapsed since the previous failure, the number of authentication failures is reset to 1. A failuretime value of 0 (the default) means forever. -lockoutduration lockouttime (duration or getdate string) Sets the duration for which the principal is locked from authenticating if too many authentication failures occur without the specified failure count interval elapsing. A duration of 0 (the default) means the principal remains locked out until it is administratively unlocked with modprinc -unlock. -allowedkeysalts Specifies the key/salt tuples supported for long-term keys when setting or changing a principal's password/keys. See Keysalt_lists in kdc.conf(5) for a list of the accepted values, but note that key/salt tuples must be separated with commas (',') only. To clear the allowed key/salt policy use a value of '-'. 
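The -minclasses counting rule above (five classes: lower case, upper case, numbers, punctuation, and whitespace/unprintable characters) can be sketched as a small checker; this illustrates the rule only and is not kadmin's actual implementation:

```python
import string

def character_classes(password):
    """Count the character classes present in a password, per the five
    classes named by kadmin's -minclasses option: lower case, upper
    case, numbers, punctuation, and whitespace/unprintable."""
    classes = 0
    classes += any(c.islower() for c in password)
    classes += any(c.isupper() for c in password)
    classes += any(c.isdigit() for c in password)
    classes += any(c in string.punctuation for c in password)
    classes += any(c.isspace() or not c.isprintable() for c in password)
    return classes
```

A policy created with -minclasses 3 would then reject any password for which this count is below 3.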
Example: kadmin: add_policy -maxlife "2 days" -minlength 5 guests kadmin: modify_policy modify_policy [options] policy Modifies the password policy named policy. Options are as described for add_policy. This command requires the modify privilege. Alias: modpol delete_policy delete_policy [-force] policy Deletes the password policy named policy. Prompts for confirmation before deletion. The command will fail if the policy is in use by any principals. This command requires the delete privilege. Alias: delpol Example: kadmin: del_policy guests Are you sure you want to delete the policy "guests"? (yes/no): yes kadmin: get_policy get_policy [ -terse ] policy Displays the values of the password policy named policy. With the -terse flag, outputs the fields as quoted strings separated by tabs. This command requires the inquire privilege. Alias: getpol Examples: kadmin: get_policy admin Policy: admin Maximum password life: 180 days 00:00:00 Minimum password life: 00:00:00 Minimum password length: 6 Minimum number of password character classes: 2 Number of old keys kept: 5 Reference count: 17 kadmin: get_policy -terse admin admin 15552000 0 6 2 5 17 kadmin: The "Reference count" is the number of principals using that policy. With the LDAP KDC database module, the reference count field is not meaningful. list_policies list_policies [expression] Retrieves all or some policy names. expression is a shell-style glob expression that can contain the wild-card characters ?, *, and []. All policy names matching the expression are printed. If no expression is provided, all existing policy names are printed. This command requires the list privilege. Aliases: listpols, get_policies, getpols. Examples: kadmin: listpols test-pol dict-only once-a-min test-pol-nopw kadmin: listpols t* test-pol test-pol-nopw kadmin: ktadd ktadd [options] principal ktadd [options] -glob princ-exp Adds a principal, or all principals matching princ-exp, to a keytab file. 
Each principal's keys are randomized in the process. The rules for princ-exp are described in the list_principals command. This command requires the inquire and changepw privileges. With the -glob form, it also requires the list privilege. The options are: -k[eytab] keytab Use keytab as the keytab file. Otherwise, the default keytab is used. -e enc:salt,... Uses the specified keysalt list for setting the new keys of the principal. See Keysalt_lists in kdc.conf(5) for a list of possible values. -q Display less verbose information. -norandkey Do not randomize the keys. The keys and their version numbers stay unchanged. This option cannot be specified in combination with the -e option. An entry for each of the principal's unique encryption types is added, ignoring multiple keys with the same encryption type but different salt types. Alias: xst Example: kadmin: ktadd -k /tmp/foo-new-keytab host/foo.mit.edu Entry for principal host/foo.mit.edu@ATHENA.MIT.EDU with kvno 3, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/tmp/foo-new-keytab kadmin: ktremove ktremove [options] principal [kvno | all | old] Removes entries for the specified principal from a keytab. Requires no permissions, since this does not require database access. If the string "all" is specified, all entries for that principal are removed; if the string "old" is specified, all entries for that principal except those with the highest kvno are removed. Otherwise, the value specified is parsed as an integer, and all entries whose kvno match that integer are removed. The options are: -k[eytab] keytab Use keytab as the keytab file. Otherwise, the default keytab is used. -q Display less verbose information. Alias: ktrem Example: kadmin: ktremove kadmin/admin all Entry for principal kadmin/admin with kvno 3 removed from keytab FILE:/etc/krb5.keytab kadmin: lock Lock database exclusively. Use with extreme caution! This command only works with the DB2 KDC database module. 
unlock Release the exclusive database lock. list_requests Lists available kadmin requests. Aliases: lr, ? quit Exit program. If the database was locked, the lock is released. Aliases: exit, q HISTORY The kadmin program was originally written by Tom Yu at MIT, as an interface to the OpenVision Kerberos administration program. ENVIRONMENT See kerberos(7) for a description of Kerberos environment variables. SEE ALSO kpasswd(1), kadmind(8), kerberos(7) AUTHOR MIT COPYRIGHT 1985-2022, MIT 1.20.1 KADMIN(1)
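For scripted use, the subcommands above can also be issued non-interactively through kadmin's -q option. A minimal sketch of the policy workflow, assuming a hypothetical "guests" policy; the command lines are only assembled and echoed here, since actually running them requires a reachable KDC and admin credentials:

```shell
# Build (but do not run) non-interactive kadmin queries; "guests" is a
# hypothetical policy name.
POLICY="guests"
ADD_Q="add_policy -maxlife \"2 days\" -minlength 5 $POLICY"
GET_Q="get_policy -terse $POLICY"
echo "kadmin -q \"$ADD_Q\""
echo "kadmin -q \"$GET_Q\""
```

On the KDC host itself, kadmin.local accepts the same -q queries without network authentication.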
| null |
spray
|
Spray sends multiple RPC packets to host and records how many of them were correctly received and how long it took. The options are as follows: -c count Send count packets. -d delay Pause delay microseconds between sending each packet. -l length Set the length of the packet that holds the RPC call message to length bytes. Not all values of length are possible because RPC data is encoded using XDR. Spray rounds up to the nearest possible value. Spray is intended for use in network testing, measurement, and management. This command can be very hard on a network and should be used with caution. SEE ALSO netstat(1), ifconfig(8), ping(8) macOS 14.5 July 10, 1995 macOS 14.5
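A representative invocation using the options above might look like the following sketch; the host name, counts and sizes are placeholders, and the command is echoed rather than executed because a real run needs an RPC spray responder on the target and can load the network heavily:

```shell
# Assemble a spray command line: 1000 packets of 512 bytes each, pausing
# 10 microseconds between packets. The target host is hypothetical.
HOST="testhost.example.com"
CMD="spray -c 1000 -d 10 -l 512 $HOST"
echo "$CMD"
```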
|
spray – send many packets to host
|
spray [-c count] [-d delay] [-l length] host ...
| null | null |
cvversions
|
cvversions will display the revision, build level and creation date for the File System Manager (FSM) and client subsystems of the Xsan File System.
|
cvversions - Display Xsan client/server versions
|
cvversions [ -h ] [ -F type ] [[id:]file ...]
|
-h Print a help/usage message and exit. -F type If type is text, cvversions emits version information in text format (this is the default). If type is json, cvversions emits limited version information in a parsable JSON format. file file is searched for an Xsan version string and, if found, is printed in addition to the standard version strings. If present in the argument, id is substituted for file in the program output. USAGE Simply execute the program and record the information shown. Xsan File System April 2017 CVVERSIONS(1)
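Putting the options together, a sketch of typical invocations (echoed rather than executed, since cvversions ships only with Xsan; the id label and file path are hypothetical):

```shell
# Default text report, JSON report, and a labeled per-file version check
# using the optional id: prefix. The "server" label and path are made up.
echo "cvversions"
echo "cvversions -F json"
echo "cvversions server:/some/xsan/binary"
```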
| null |
kextlibs
|
The kextlibs utility searches for library kexts that define symbols needed for linking by kext, printing their bundle identifiers and versions to stdout. If the kext has a multiple-architecture executable, libraries are resolved for each architecture. If any symbols are not found, or are found in multiple libraries, the numbers of such symbols are printed to standard error after the library kext information for each architecture. A handy use of kextlibs is to run it with just the -xml flag and pipe the output to pbcopy(1); if the exit status is zero (indicating no undefined or multiply-defined symbols), you can open your kext's Info.plist file in a text editor and paste the library declarations over the OSBundleLibraries property. You can use kextlibs to find libraries for older releases of macOS using the -repository option to specify an extensions folder to search other than the extensions directories for the root volume (although releases prior to Mac OS X 10.6 (Snow Leopard) don't check for architecture- specific properties, so be sure to check the output and edit as needed). If you don't explicitly specify a repository directory, kextlibs searches the root volume's /System/Library/Extensions and /Library/Extensions directories.
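The clipboard workflow described above can be sketched as follows (echoed rather than executed; the kext bundle name is hypothetical, and on current systems kmutil libraries is the supported equivalent):

```shell
# On success (exit status 0: every undefined symbol found exactly once),
# the XML fragment can be pasted over OSBundleLibraries in Info.plist.
KEXT="MyDriver.kext"   # hypothetical kext bundle
echo "kextlibs -xml $KEXT | pbcopy"
```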
|
kextlibs – find OSBundleLibraries needed by a kext
|
kextlibs [options] [--] kext ... DEPRECATED The kextlibs utility has been deprecated. Please use the kmutil(8) equivalent: kmutil libraries.
|
-h, -help Print a help message describing each option flag and exit with a success result, regardless of any other options on the command line. -all-symbols Print reports on all symbols that remain undefined, all symbols that have been resolved in one library kext each, and all symbols that have multiple definitions in different library kexts. Equivalent to specifying all of -undef-symbols, -onedef-symbols, and -multdef-symbols. Normally only the number of missing and duplicate symbols is printed. -c, -compatible-versions Print the compatible version rather than the current version. -multdef-symbols Print all undefined symbols from kext found in more than one library kext, followed by those library kexts' bundle identifiers and versions (or compatible versions if -compatible-versions was specified). Normally only the number of multiply-defined symbols is printed. -non-kpi Search the compatibility kext, com.apple.kernel.6.0, rather than any of the com.apple.kpi.* system kexts. Use of this option is not recommended: The exact kernel component (mach, bsd, libkern, or iokit) cannot be determined, and the compatible version of com.apple.kernel is locked to its current version, so kexts linking against it can only load against that exact version. -onedef-symbols Print all undefined symbols from kext found in exactly one library kext, followed by that library kext's bundle identifier and version (or compatible version if -compatible-versions was specified). Normally nothing is printed about symbols that are found once. -r directory, -repository directory Search directory for dependencies. This option may be specified multiple times. You can use this to get library declarations relative to a set of extensions other than those of the running system (such as for a different release of macOS), or to include a side directory of library kexts. Note: If you specify a directory with this option, the system extensions folders are not implicitly searched. See -system-extensions. 
-e, -system-extensions Add /System/Library/Extensions and /Library/Extensions to the list of directories to search. If you don't specify any directories or kexts, this is used by default. -undef-symbols Print all undefined symbols from kext that can't be found in any library kexts. Normally only the number of symbols not found is printed. -unsupported Search unsupported library kexts for symbols (by default they are not searched). -v [0-6 | 0x####], -verbose [0-6 | 0x####] Verbose mode; print information about program operation. Higher levels of verbosity include all lower levels. You can specify a level from 0-6, or a hexadecimal log specification (as described in kext_logging(8)). For kextlibs, the decimal levels 1-6 generally have little effect. -xml Print an XML fragment to stdout suitable for copying and pasting directly into an Info.plist file. This option prints information about libraries to stdout, and then prints information about symbols to stderr. In XML mode, if the libraries for all architectures are the same, only one set of OSBundleLibraries is printed; if any differ from any others, architecture-specific listings for all architectures are printed (OSBundleLibraries_i386, OSBundleLibraries_x86_64, and so on). -- End of options. FILES /System/Library/Extensions/ The standard system repository of kernel extensions. /Library/Extensions/ The standard repository of non-Apple kernel extensions. DIAGNOSTICS The kextlibs utility exits with a status of 0 on completion if all undefined symbols are found exactly once; with a status of 1 if any undefined symbols remain, or with a status of 2 if any symbols are found in more than one library kext (whether or not any undefined symbols remain), and with another nonzero status on some other problem. BUGS kextlibs uses a simple algorithm of string matching to resolve symbols, and does not apply any of the patching that the full link process does. 
This can cause it to fail when searching for symbols in a kext built against an SDK for a prior release of macOS than the one on which kextlibs is being used. In such cases, you can run kextlibs against the Extensions folder of that prior release using the -repository option. Many single-letter options are inconsistent in meaning with (or directly contradictory to) the same letter options in other kext tools. SEE ALSO kmutil(8), kernelmanagerd(8), kextutil(8), kextfind(8), kext_logging(8) Darwin November 14, 2012 Darwin
| null |
tcpdump
|
Tcpdump prints out a description of the contents of packets on a network interface that match the Boolean expression; the description is preceded by a time stamp, printed, by default, as hours, minutes, seconds, and fractions of a second since midnight. It can also be run with the -w flag, which causes it to save the packet data to a file for later analysis, and/or with the -r flag, which causes it to read from a saved packet file rather than to read packets from a network interface. It can also be run with the -V flag, which causes it to read a list of saved packet files. In all cases, only packets that match expression will be processed by tcpdump. Tcpdump will, if not run with the -c flag, continue capturing packets until it is interrupted by a SIGINT signal (generated, for example, by typing your interrupt character, typically control-C) or a SIGTERM signal (typically generated with the kill(1) command); if run with the -c flag, it will capture packets until it is interrupted by a SIGINT or SIGTERM signal or the specified number of packets have been processed. 
When tcpdump finishes capturing packets, it will report counts of: packets ``captured'' (this is the number of packets that tcpdump has received and processed); packets ``received by filter'' (the meaning of this depends on the OS on which you're running tcpdump, and possibly on the way the OS was configured - if a filter was specified on the command line, on some OSes it counts packets regardless of whether they were matched by the filter expression and, even if they were matched by the filter expression, regardless of whether tcpdump has read and processed them yet, on other OSes it counts only packets that were matched by the filter expression regardless of whether tcpdump has read and processed them yet, and on other OSes it counts only packets that were matched by the filter expression and were processed by tcpdump); packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0). On platforms that support the SIGINFO signal, such as most BSDs (including macOS) and Digital/Tru64 UNIX, it will report those counts when it receives a SIGINFO signal (generated, for example, by typing your ``status'' character, typically control-T, although on some platforms, such as macOS, the ``status'' character is not set by default, so you must set it with stty(1) in order to use it) and will continue capturing packets. On platforms that do not support the SIGINFO signal, the same can be achieved by using the SIGUSR1 signal. Using the SIGUSR2 signal along with the -w flag will forcibly flush the packet buffer into the output file. Reading packets from a network interface may require that you have special privileges; see the pcap(3PCAP) man page for details. Reading a saved packet file doesn't require special privileges.
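A minimal capture-then-read cycle with the flags above, shown as echoed command lines since live capture needs special privileges and en0 is only a placeholder interface:

```shell
# Capture 100 DNS packets to a savefile, then re-read the file with
# address-to-name resolution disabled (-n) for faster, stable output.
CAPTURE="tcpdump -i en0 -c 100 -w dns.pcap udp port 53"
READBACK="tcpdump -n -r dns.pcap"
echo "$CAPTURE"
echo "$READBACK"
```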
|
tcpdump - dump traffic on a network
|
tcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ] [ -c count ] [ --count ] [ -C file_size ] [ -E spi@ipaddr algo:secret,... ] [ -F file ] [ -G rotate_seconds ] [ -i interface ] [ --immediate-mode ] [ -j tstamp_type ] [ -k (metadata_arg) ] [ -m module ] [ -M secret ] [ --number ] [ --print ] [ -Q packet-metadata-filter ] [ -Q in|out|inout ] [ -r file ] [ -s snaplen ] [ -T type ] [ --version ] [ -V file ] [ -w file ] [ -W filecount ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ --time-stamp-precision=tstamp_precision ] [ --micro ] [ --nano ] [ expression ]
|
-A Print each packet (minus its link level header) in ASCII. Handy for capturing web pages. -b Print the AS number in BGP packets in ASDOT notation rather than ASPLAIN notation. -B buffer_size --buffer-size=buffer_size Set the operating system capture buffer size to buffer_size, in units of KiB (1024 bytes). -c count Exit after receiving count packets. -c skip,count Exit after receiving or displaying count packets. The second form allows passing the number of initial packets to ignore with the skip parameter. The skip parameter is required before the comma but the count parameter is optional after the comma. --count Print only on stderr the packet count when reading capture file(s) instead of parsing/printing the packets. If a filter is specified on the command line, tcpdump counts only packets that were matched by the filter expression. -C file_size Before writing a raw packet to a savefile, check whether the file is currently larger than file_size and, if so, close the current savefile and open a new one. Savefiles after the first savefile will have the name specified with the -w flag, with a number after it, starting at 1 and continuing upward. The units of file_size are millions of bytes (1,000,000 bytes, not 1,048,576 bytes). -d Dump the compiled packet-matching code in a human readable form to standard output and stop. Please mind that although code compilation is always DLT-specific, typically it is impossible (and unnecessary) to specify which DLT to use for the dump because tcpdump uses either the DLT of the input pcap file specified with -r, or the default DLT of the network interface specified with -i, or the particular DLT of the network interface specified with -y and -i respectively. In these cases the dump shows the same exact code that would filter the input file or the network interface without -d. However, when neither -r nor -i is specified, specifying -d prevents tcpdump from guessing a suitable network interface (see -i). 
In this case the DLT defaults to EN10MB and can be set to another valid value manually with -y. -dd Dump packet-matching code as a C program fragment. -ddd Dump packet-matching code as decimal numbers (preceded with a count). -D --list-interfaces Print the list of the network interfaces available on the system and on which tcpdump can capture packets. For each network interface, a number and an interface name, possibly followed by a text description of the interface, are printed. The interface name or the number can be supplied to the -i flag to specify an interface on which to capture. This can be useful on systems that don't have a command to list them (e.g., Windows systems, or UNIX systems lacking ifconfig -a); the number can be useful on Windows 2000 and later systems, where the interface name is a somewhat complex string. The -D flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_findalldevs(3PCAP) function. -e Print the link-level header on each dump line. This can be used, for example, to print MAC layer addresses for protocols such as Ethernet and IEEE 802.11. -E Use spi@ipaddr algo:secret for decrypting IPsec ESP packets that are addressed to addr and contain Security Parameter Index value spi. This combination may be repeated with comma or newline separation. Note that setting the secret for IPv4 ESP packets is supported at this time. Algorithms may be des-cbc, 3des-cbc, blowfish-cbc, rc3-cbc, cast128-cbc, or none. The default is des-cbc. The ability to decrypt packets is only present if tcpdump was compiled with cryptography enabled. secret is the ASCII text for ESP secret key. If preceded by 0x, then a hex value will be read. The option assumes RFC2406 ESP, not RFC1827 ESP. The option is only for debugging purposes, and the use of this option with a true `secret' key is discouraged. By presenting IPsec secret key onto command line you make it visible to others, via ps(1) and other occasions. 
In addition to the above syntax, the syntax file name may be used to have tcpdump read the provided file in. The file is opened upon receiving the first ESP packet, so any special permissions that tcpdump may have been given should already have been given up. -f Print `foreign' IPv4 addresses numerically rather than symbolically (this option is intended to get around serious brain damage in Sun's NIS server — usually it hangs forever translating non-local internet numbers). The test for `foreign' IPv4 addresses is done using the IPv4 address and netmask of the interface on which capture is being done. If that address or netmask are not available, either because the interface on which capture is being done has no address or netmask or because the capture is being done on the Linux "any" interface, which can capture on more than one interface, this option will not work correctly. -F file Use file as input for the filter expression. An additional expression given on the command line is ignored. -g --apple-oneline Do not insert line break after IP header in verbose mode for easier parsing. This is an Apple addition. -G rotate_seconds If specified, rotates the dump file specified with the -w option every rotate_seconds seconds. Savefiles will have the name specified by -w which should include a time format as defined by strftime(3). If no time format is specified, each new file will overwrite the previous. Whenever a generated filename is not unique, tcpdump will overwrite the pre-existing data; providing a time specification that is coarser than the capture period is therefore not advised. If used in conjunction with the -C option, filenames will take the form of `file<count>'. -h --help Print the tcpdump and libpcap version strings, print a usage message, and exit. --version Print the tcpdump and libpcap version strings and exit. -H Attempt to detect 802.11s draft mesh headers. 
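As a sketch of the -G rotation rule above: the -w name should embed a strftime(3) pattern, otherwise each rotation overwrites the previous file. Expanding the same pattern with date(1) shows the shape of one generated filename (interface and pattern are placeholders):

```shell
# Rotate the savefile hourly; every rotation produces a new timestamped name.
PATTERN='trace_%Y-%m-%d_%H.%M.%S.pcap'
echo "tcpdump -i en0 -G 3600 -w '$PATTERN'"
date +"$PATTERN"   # one concrete expansion of the same pattern
```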
-i interface --interface=interface Listen, report the list of link-layer types, report the list of time stamp types, or report the results of compiling a filter expression on interface. If unspecified and if the -d flag is not given, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback), which may turn out to be, for example, ``eth0''. On Linux systems with 2.2 or later kernels, an interface argument of ``any'' can be used to capture packets from all interfaces. Note that captures on the ``any'' device will not be done in promiscuous mode. On Darwin systems version 13 or later, when the interface is unspecified, tcpdump will use a pseudo interface to capture packets on a set of interfaces determined by the kernel (excludes by default loopback and tunnel interfaces). Alternatively, to capture on more than one interface at a time, one may use "pktap" as the interface parameter followed by an optional list of comma separated interface names to include. For example, to capture on the loopback and en0 interface: tcpdump -i pktap,lo0,en0 An interface argument of "all" or "pktap,all" can be used to capture packets from all interfaces, including loopback and tunnel interfaces. A pktap pseudo interface provides for packet metadata using the default PKTAP data link type and files are written in the pcap-ng file format. The RAW data link type must be used to force use of the legacy pcap-savefile(5) file format with a pktap pseudo interface. Note that captures on a pktap pseudo interface will not be done in promiscuous mode. An interface argument of "iptap" can be used to capture packets at the IP layer. This captures packets as they are passed to the input and output routines of the IPv4 and IPv6 protocol handlers of the networking stack. Note that captures will not be done in promiscuous mode. 
If the -D flag is supported, an interface number as printed by that flag can be used as the interface argument, if no interface on the system has that number as a name. -I --monitor-mode Put the interface in "monitor mode"; this is supported only on IEEE 802.11 Wi-Fi interfaces, and supported only on some operating systems. Note that in monitor mode the adapter might disassociate from the network with which it's associated, so that you will not be able to use any wireless networks with that adapter. This could prevent accessing files on a network server, or resolving host names or network addresses, if you are capturing in monitor mode and are not connected to another network with another adapter. This flag will affect the output of the -L flag. If -I isn't specified, only those link-layer types available when not in monitor mode will be shown; if -I is specified, only those link- layer types available when in monitor mode will be shown. --immediate-mode Capture in "immediate mode". In this mode, packets are delivered to tcpdump as soon as they arrive, rather than being buffered for efficiency. This is the default when printing packets rather than saving packets to a ``savefile'' if the packets are being printed to a terminal rather than to a file or pipe. -j tstamp_type --time-stamp-type=tstamp_type Set the time stamp type for the capture to tstamp_type. The names to use for the time stamp types are given in pcap-tstamp(7); not all the types listed there will necessarily be valid for any given interface. -J --list-time-stamp-types List the supported time stamp types for the interface and exit. If the time stamp type cannot be set for the interface, no time stamp types are listed. --time-stamp-precision=tstamp_precision When capturing, set the time stamp precision for the capture to tstamp_precision. Note that availability of high precision time stamps (nanoseconds) and their actual accuracy is platform and hardware dependent. 
Also note that when writing captures made with nanosecond accuracy to a savefile, the time stamps are written with nanosecond resolution, and the file is written with a different magic number, to indicate that the time stamps are in seconds and nanoseconds; not all programs that read pcap savefiles will be able to read those captures. When reading a savefile, convert time stamps to the precision specified by timestamp_precision, and display them with that resolution. If the precision specified is less than the precision of time stamps in the file, the conversion will lose precision. The supported values for timestamp_precision are micro for microsecond resolution and nano for nanosecond resolution. The default is microsecond resolution. --micro --nano Shorthands for --time-stamp-precision=micro or --time-stamp-precision=nano, adjusting the time stamp precision accordingly. When reading packets from a savefile, using --micro truncates time stamps if the savefile was created with nanosecond precision. In contrast, a savefile created with microsecond precision will have trailing zeroes added to the time stamp when --nano is used. -k metadata_arg --apple-md-print metadata_arg Control the display of packet metadata via an optional metadata_arg argument. This is useful when displaying packets saved in the pcap-ng file format or with interfaces that support the PKTAP data link type. By default, when the metadata_arg optional argument is not specified, any available packet metadata information is printed out. 
The metadata_arg argument controls the display of specific packet metadata information using a flag word, where each character corresponds to a type of packet metadata as follows: I interface name (or interface ID) N process name P process ID S service class D direction C comment F flags U process UUID (not shown by default) V verbose printf of pcap-ng blocks (not shown by default) d data link type f flow identifier t trace tag A display all types of metadata This is an Apple modification. -K --dont-verify-checksums Don't attempt to verify IP, TCP, or UDP checksums. This is useful for interfaces that perform some or all of those checksum calculation in hardware; otherwise, all outgoing TCP checksums will be flagged as bad. The option also suppresses truncated bytes missing warnings for ip and ip6 (Apple modification). -l Make stdout line buffered. Useful if you want to see the data while capturing it. E.g., tcpdump -l | tee dat or tcpdump -l > dat & tail -f dat Note that on Windows,``line buffered'' means ``unbuffered'', so that WinDump will write each character individually if -l is specified. -U is similar to -l in its behavior, but it will cause output to be ``packet-buffered'', so that the output is written to stdout at the end of each packet rather than at the end of each line; this is buffered on all platforms, including Windows. -L --list-data-link-types List the known data link types for the interface, in the specified mode, and exit. The list of known data link types may be dependent on the specified mode; for example, on some platforms, a Wi-Fi interface might support one set of data link types when not in monitor mode (for example, it might support only fake Ethernet headers, or might support 802.11 headers but not support 802.11 headers with radio information) and another set of data link types when in monitor mode (for example, it might support 802.11 headers, or 802.11 headers with radio information, only in monitor mode). 
-m module Load SMI MIB module definitions from file module. This option can be used several times to load several MIB modules into tcpdump. -M secret Use secret as a shared secret for validating the digests found in TCP segments with the TCP-MD5 option (RFC 2385), if present. -n Don't convert addresses (i.e., host addresses, port numbers, etc.) to names. -N Don't print domain name qualification of host names. E.g., if you give this flag then tcpdump will print ``nic'' instead of ``nic.ddn.mil''. -# --number Print an optional packet number at the beginning of the line. -O --no-optimize Do not run the packet-matching code optimizer. This is useful only if you suspect a bug in the optimizer. -P --apple-pcapng Use the pcap-ng file format when saving files. This is an Apple addition. -p --no-promiscuous-mode Don't put the interface into promiscuous mode. Note that the interface might be in promiscuous mode for some other reason; hence, `-p' cannot be used as an abbreviation for `ether host {local-hw-addr} or ether broadcast'. --print Print parsed packet output, even if the raw packets are being saved to a file with the -w flag. -Q direction --direction=direction Choose send/receive direction direction for which packets should be captured. Possible values are `in', `out' and `inout'. Not available on all platforms. -Q meta-data-expression --apple-md-filter meta-data-expression See the PACKET METADATA FILTER section below. This is an Apple addition. -q Quick (quiet?) output. Print less protocol information so output lines are shorter. -r file Read packets from file (which was created with the -w option or by other tools that write pcap or pcapng files). Standard input is used if file is ``-''. -S --absolute-tcp-sequence-numbers Print absolute, rather than relative, TCP sequence numbers. -s snaplen --snapshot-length=snaplen Snarf snaplen bytes of data from each packet rather than the default of 262144 bytes. 
Packets truncated because of a limited snapshot are indicated in the output with ``[|proto]'', where proto is the name of the protocol level at which the truncation has occurred. Note that taking larger snapshots both increases the amount of time it takes to process packets and, effectively, decreases the amount of packet buffering. This may cause packets to be lost. Note also that taking smaller snapshots will discard data from protocols above the transport layer, which loses information that may be important. NFS and AFS requests and replies, for example, are very large, and much of the detail won't be available if a too-short snapshot length is selected. If you need to reduce the snapshot size below the default, you should limit snaplen to the smallest number that will capture the protocol information you're interested in. Setting snaplen to 0 sets it to the default of 262144, for backwards compatibility with recent older versions of tcpdump. -T type Force packets selected by "expression" to be interpreted as the specified type. Currently known types are aodv (Ad-hoc On-demand Distance Vector protocol), carp (Common Address Redundancy Protocol), cnfp (Cisco NetFlow protocol), domain (Domain Name System), lmp (Link Management Protocol), pgm (Pragmatic General Multicast), pgm_zmtp1 (ZMTP/1.0 inside PGM/EPGM), ptp (Precision Time Protocol), radius (RADIUS), resp (REdis Serialization Protocol), rpc (Remote Procedure Call), rtcp (Real-Time Applications control protocol), rtp (Real-Time Applications protocol), snmp (Simple Network Management Protocol), someip (SOME/IP), tftp (Trivial File Transfer Protocol), vat (Visual Audio Tool), vxlan (Virtual eXtensible Local Area Network), wb (distributed White Board) and zmtp1 (ZeroMQ Message Transport Protocol 1.0). Note that the pgm type above affects UDP interpretation only, the native PGM is always recognised as IP protocol 113 regardless. UDP-encapsulated PGM is often called "EPGM" or "PGM/UDP". 
Note that the pgm_zmtp1 type above affects interpretation of both native PGM and UDP at once. During the native PGM decoding the application data of an ODATA/RDATA packet would be decoded as a ZeroMQ datagram with ZMTP/1.0 frames. During the UDP decoding in addition to that any UDP packet would be treated as an encapsulated PGM packet. Additional dissectors for non-registered UDP protocols: iperf (iperf 2.x), iperf3 (iperf 3.x), iperf3-64 (iperf 3.x with 64 bits packet ID), suttp (Simple UDP Throughput Test Protocol). -t Don't print a timestamp on each dump line. -tt Print the timestamp, as seconds since January 1, 1970, 00:00:00, UTC, and fractions of a second since that time, on each dump line. -ttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and previous line on each dump line. The default is microsecond resolution. -tttt Print a timestamp, as hours, minutes, seconds, and fractions of a second since midnight, preceded by the date, on each dump line. -ttttt Print a delta (microsecond or nanosecond resolution depending on the --time-stamp-precision option) between current and first line on each dump line. The default is microsecond resolution. -t n An alternate form for specifying the kind of timestamp display where n is a number between 0 and 5 with the following meaning: 0 time 1 no time 2 unformatted timestamp 3 microseconds since previous line 4 date and time 5 microseconds since first line This option may be specified more than once to display more than one kind of timestamp on each dump line. -u Print undecoded NFS handles. 
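The numeric -t n forms above pair off against the repeated-letter flags; a small sketch against a hypothetical savefile (echoed only):

```shell
# -t 4 selects the same display as -tttt (date and time); -t 3 matches
# -ttt (delta since the previous line). dns.pcap is hypothetical.
echo "tcpdump -r dns.pcap -tttt"
echo "tcpdump -r dns.pcap -t 4"
```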
-U
--packet-buffered
If the -w option is not specified, or if it is specified but the --print flag is also specified, make the printed packet output ``packet-buffered''; i.e., as the description of the contents of each packet is printed, it will be written to the standard output, rather than, when not writing to a terminal, being written only when the output buffer fills. If the -w option is specified, make the saved raw packet output ``packet-buffered''; i.e., as each packet is saved, it will be written to the output file, rather than being written only when the output buffer fills. The -U flag will not be supported if tcpdump was built with an older version of libpcap that lacks the pcap_dump_flush(3PCAP) function.
-v When parsing and printing, produce (slightly more) verbose output. For example, the time to live, identification, total length and options in an IP packet are printed. Also enables additional packet integrity checks such as verifying the IP and ICMP header checksum. When writing to a file with the -w option and at the same time not reading from a file with the -r option, report to stderr, once per second, the number of packets captured. In Solaris, FreeBSD and possibly other operating systems this periodic update currently can cause loss of captured packets on their way from the kernel to tcpdump. On Darwin systems when writing to a file with the -w option, the number of packets captured is not updated if there have been no new packets in the last second.
-vv Even more verbose output. For example, additional fields are printed from NFS reply packets, and SMB packets are fully decoded.
-vvv Even more verbose output. For example, telnet SB ... SE options are printed in full. With -X Telnet options are printed in hex as well.
-V file Read a list of filenames from file. Standard input is used if file is ``-''.
-w file Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option.
Standard output is used if file is ``-''. This output will be buffered if written to a file or pipe, so a program reading from the file or pipe may not see packets for an arbitrary amount of time after they are received. Use the -U flag to cause packets to be written as soon as they are received. The MIME type application/vnd.tcpdump.pcap has been registered with IANA for pcap files. The filename extension .pcap appears to be the most commonly used along with .cap and .dmp. Tcpdump itself doesn't check the extension when reading capture files and doesn't add an extension when writing them (it uses magic numbers in the file header instead). However, many operating systems and applications will use the extension if it is present and adding one (e.g. .pcap) is recommended. See pcap-savefile(5) for a description of the file format.
-W filecount Used in conjunction with the -C option, this will limit the number of files created to the specified number, and begin overwriting files from the beginning, thus creating a 'rotating' buffer. In addition, it will name the files with enough leading 0s to support the maximum number of files, allowing them to sort correctly. Used in conjunction with the -G option, this will limit the number of rotated dump files that get created, exiting with status 0 when reaching the limit. If used in conjunction with both -C and -G, the -W option will currently be ignored, and will only affect the file name.
-x When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex. The smaller of the entire packet or snaplen bytes will be printed. Note that this is the entire link-layer packet, so for link layers that pad (e.g. Ethernet), the padding bytes will also be printed when the higher layer packet is shorter than the required padding. In the current implementation this flag may have the same effect as -xx if the packet is truncated.
-xx When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex.
-X When parsing and printing, in addition to printing the headers of each packet, print the data of each packet (minus its link level header) in hex and ASCII. This is very handy for analysing new protocols. In the current implementation this flag may have the same effect as -XX if the packet is truncated.
-XX When parsing and printing, in addition to printing the headers of each packet, print the data of each packet, including its link level header, in hex and ASCII.
-y datalinktype
--linktype=datalinktype
Set the data link type to use while capturing packets (see -L) or just compiling and dumping packet-matching code (see -d) to datalinktype.
-z postrotate-command Used in conjunction with the -C or -G options, this will make tcpdump run " postrotate-command file " where file is the savefile being closed after each rotation. For example, specifying -z gzip or -z bzip2 will compress each savefile using gzip or bzip2. Note that tcpdump will run the command in parallel to the capture, using the lowest priority so that this doesn't disturb the capture process. And in case you would like to use a command that itself takes flags or different arguments, you can always write a shell script that will take the savefile name as the only argument, make the flags & arguments arrangements and execute the command that you want.
-Z user
--relinquish-privileges=user
If tcpdump is running as root, after opening the capture device or input savefile, but before opening any savefiles for output, change the user ID to user and the group ID to the primary group of user. This behavior can also be enabled by default at compile time.
expression selects which packets will be dumped. If no expression is given, all packets on the net will be dumped. Otherwise, only packets for which expression is `true' will be dumped.
For the expression syntax, see pcap-filter(7). The expression argument can be passed to tcpdump as either a single Shell argument, or as multiple Shell arguments, whichever is more convenient. Generally, if the expression contains Shell metacharacters, such as backslashes used to escape protocol names, it is easier to pass it as a single, quoted argument rather than to escape the Shell metacharacters. Multiple arguments are concatenated with spaces before being parsed.
EXAMPLES
To print all packets arriving at or departing from sundown:
   tcpdump host sundown
To print traffic between helios and either hot or ace:
   tcpdump host helios and \( hot or ace \)
To print all IP packets between ace and any host except helios:
   tcpdump ip host ace and not helios
To print all traffic between local hosts and hosts at Berkeley:
   tcpdump net ucb-ether
To print all ftp traffic through internet gateway snup (note that the expression is quoted to prevent the shell from (mis-)interpreting the parentheses):
   tcpdump 'gateway snup and (port ftp or ftp-data)'
To print traffic neither sourced from nor destined for local hosts (if you gateway to one other net, this stuff should never make it onto your local net):
   tcpdump ip and not net localnet
To print the start and end packets (the SYN and FIN packets) of each TCP conversation that involves a non-local host:
   tcpdump 'tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 and not src and dst net localnet'
To print the TCP packets with flags RST and ACK both set (i.e. select only the RST and ACK flags in the flags field, and if the result is "RST and ACK both set", match):
   tcpdump 'tcp[tcpflags] & (tcp-rst|tcp-ack) == (tcp-rst|tcp-ack)'
To print all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets (IPv6 is left as an exercise for the reader):
   tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
To print IP packets longer than 576 bytes sent through gateway snup:
   tcpdump 'gateway snup and ip[2:2] > 576'
To print IP broadcast or multicast packets that were not sent via Ethernet broadcast or multicast:
   tcpdump 'ether[0] & 1 = 0 and ip[16] >= 224'
To print all ICMP packets that are not echo requests/replies (i.e., not ping packets):
   tcpdump 'icmp[icmptype] != icmp-echo and icmp[icmptype] != icmp-echoreply'
OUTPUT FORMAT
The output of tcpdump is protocol dependent.
The following gives a brief description and examples of most of the formats. Timestamps By default, all output lines are preceded by a timestamp. The timestamp is the current clock time in the form hh:mm:ss.frac and is as accurate as the kernel's clock. The timestamp reflects the time the kernel applied a time stamp to the packet. No attempt is made to account for the time lag between when the network interface finished receiving the packet from the network and when the kernel applied a time stamp to the packet; that time lag could include a delay between the time when the network interface finished receiving a packet from the network and the time when an interrupt was delivered to the kernel to get it to read the packet and a delay between the time when the kernel serviced the `new packet' interrupt and the time when it applied a time stamp to the packet. Link Level Headers If the '-e' option is given, the link level header is printed out. On Ethernets, the source and destination addresses, protocol, and packet length are printed. On FDDI networks, the '-e' option causes tcpdump to print the `frame control' field, the source and destination addresses, and the packet length. (The `frame control' field governs the interpretation of the rest of the packet. Normal packets (such as those containing IP datagrams) are `async' packets, with a priority value between 0 and 7; for example, `async4'. Such packets are assumed to contain an 802.2 Logical Link Control (LLC) packet; the LLC header is printed if it is not an ISO datagram or a so-called SNAP packet. On Token Ring networks, the '-e' option causes tcpdump to print the `access control' and `frame control' fields, the source and destination addresses, and the packet length. As on FDDI networks, packets are assumed to contain an LLC packet. Regardless of whether the '-e' option is specified or not, the source routing information is printed for source-routed packets. 
On 802.11 networks, the '-e' option causes tcpdump to print the `frame control' fields, all of the addresses in the 802.11 header, and the packet length. As on FDDI networks, packets are assumed to contain an LLC packet. (N.B.: The following description assumes familiarity with the SLIP compression algorithm described in RFC-1144.) On SLIP links, a direction indicator (``I'' for inbound, ``O'' for outbound), packet type, and compression information are printed out. The packet type is printed first. The three types are ip, utcp, and ctcp. No further link information is printed for ip packets. For TCP packets, the connection identifier is printed following the type. If the packet is compressed, its encoded header is printed out. The special cases are printed out as *S+n and *SA+n, where n is the amount by which the sequence number (or sequence number and ack) has changed. If it is not a special case, zero or more changes are printed. A change is indicated by U (urgent pointer), W (window), A (ack), S (sequence number), and I (packet ID), followed by a delta (+n or -n), or a new value (=n). Finally, the amount of data in the packet and compressed header length are printed. For example, the following line shows an outbound compressed TCP packet, with an implicit connection identifier; the ack has changed by 6, the sequence number by 49, and the packet ID by 6; there are 3 bytes of data and 6 bytes of compressed header: O ctcp * A+6 S+49 I+6 3 (6) ARP/RARP Packets ARP/RARP output shows the type of request and its arguments. The format is intended to be self explanatory. Here is a short sample taken from the start of an `rlogin' from host rtsg to host csam: arp who-has csam tell rtsg arp reply csam is-at CSAM The first line says that rtsg sent an ARP packet asking for the Ethernet address of internet host csam. Csam replies with its Ethernet address (in this example, Ethernet addresses are in caps and internet addresses in lower case). 
This would look less redundant if we had done tcpdump -n:
   arp who-has 128.3.254.6 tell 128.3.254.68
   arp reply 128.3.254.6 is-at 02:07:01:00:01:c4
If we had done tcpdump -e, the fact that the first packet is broadcast and the second is point-to-point would be visible:
   RTSG Broadcast 0806 64: arp who-has csam tell rtsg
   CSAM RTSG 0806 64: arp reply csam is-at CSAM
For the first packet this says the Ethernet source address is RTSG, the destination is the Ethernet broadcast address, the type field contained hex 0806 (type ETHER_ARP) and the total length was 64 bytes.
IPv4 Packets
If the link-layer header is not being printed, for IPv4 packets, IP is printed after the time stamp. If the -v flag is specified, information from the IPv4 header is shown in parentheses after the IP or the link-layer header. The general format of this information is:
   tos tos, ttl ttl, id id, offset offset, flags [flags], proto proto, length length, options (options)
tos is the type of service field; if the ECN bits are non-zero, those are reported as ECT(1), ECT(0), or CE. ttl is the time-to-live; it is not reported if it is zero. id is the IP identification field. offset is the fragment offset field; it is printed whether this is part of a fragmented datagram or not. flags are the MF and DF flags; + is reported if MF is set, and DF is reported if DF is set. If neither are set, . is reported. proto is the protocol ID field. length is the total length field. options are the IP options, if any. Next, for TCP and UDP packets, the source and destination IP addresses and TCP or UDP ports, with a dot between each IP address and its corresponding port, will be printed, with a > separating the source and destination. For other protocols, the addresses will be printed, with a > separating the source and destination. Higher level protocol information, if any, will be printed after that.
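The flags and fragment-offset notation described above can be illustrated with a short Python sketch. This is not tcpdump source; the function name is illustrative, and it simplifies the display by preferring `+` (MF) over `DF` when both bits are set:

```python
# Hedged sketch: decode the IPv4 flags/fragment-offset word (IP header
# bytes 6-7) into the notation described above: "+" when MF is set,
# "DF" when DF is set, "." when neither is set.
def ip_flags_and_offset(byte6, byte7):
    word = (byte6 << 8) | byte7
    df = bool(word & 0x4000)       # Don't Fragment bit
    mf = bool(word & 0x2000)       # More Fragments bit
    offset = (word & 0x1FFF) * 8   # offset field counts 8-byte units
    if mf:
        flags = '+'
    elif df:
        flags = 'DF'
    else:
        flags = '.'
    return flags, offset

print(ip_flags_and_offset(0x40, 0x00))  # ('DF', 0): typical unfragmented packet
print(ip_flags_and_offset(0x20, 0xB9))  # ('+', 1480): a middle fragment
```

The factor of 8 is why fragment offsets printed by tcpdump are always multiples of eight bytes.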
For fragmented IP datagrams, the first fragment contains the higher level protocol header; fragments after the first contain no higher level protocol header. Fragmentation information will be printed only with the -v flag, in the IP header information, as described above. TCP Packets (N.B.:The following description assumes familiarity with the TCP protocol described in RFC-793. If you are not familiar with the protocol, this description will not be of much use to you.) The general format of a TCP protocol line is: src > dst: Flags [tcpflags], seq data-seqno, ack ackno, win window, urg urgent, options [opts], length len Src and dst are the source and destination IP addresses and ports. Tcpflags are some combination of S (SYN), F (FIN), P (PUSH), R (RST), U (URG), W (ECN CWR), E (ECN-Echo) or `.' (ACK), or `none' if no flags are set. Data-seqno describes the portion of sequence space covered by the data in this packet (see example below). Ackno is sequence number of the next data expected the other direction on this connection. Window is the number of bytes of receive buffer space available the other direction on this connection. Urg indicates there is `urgent' data in the packet. Opts are TCP options (e.g., mss 1024). Len is the length of payload data. Iptype, Src, dst, and flags are always present. The other fields depend on the contents of the packet's TCP protocol header and are output only if appropriate. Here is the opening portion of an rlogin from host rtsg to host csam. 
   IP rtsg.1023 > csam.login: Flags [S], seq 768512:768512, win 4096, opts [mss 1024]
   IP csam.login > rtsg.1023: Flags [S.], seq 947648:947648, ack 768513, win 4096, opts [mss 1024]
   IP rtsg.1023 > csam.login: Flags [.], ack 1, win 4096
   IP rtsg.1023 > csam.login: Flags [P.], seq 1:2, ack 1, win 4096, length 1
   IP csam.login > rtsg.1023: Flags [.], ack 2, win 4096
   IP rtsg.1023 > csam.login: Flags [P.], seq 2:21, ack 1, win 4096, length 19
   IP csam.login > rtsg.1023: Flags [P.], seq 1:2, ack 21, win 4077, length 1
   IP csam.login > rtsg.1023: Flags [P.], seq 2:3, ack 21, win 4077, urg 1, length 1
   IP csam.login > rtsg.1023: Flags [P.], seq 3:4, ack 21, win 4077, urg 1, length 1
The first line says that TCP port 1023 on rtsg sent a packet to port login on csam. The S indicates that the SYN flag was set. The packet sequence number was 768512 and it contained no data. (The notation is `first:last' which means `sequence numbers first up to but not including last'.) There was no piggy-backed ACK, the available receive window was 4096 bytes and there was a max-segment-size option requesting an MSS of 1024 bytes. Csam replies with a similar packet except it includes a piggy-backed ACK for rtsg's SYN. Rtsg then ACKs csam's SYN. The `.' means the ACK flag was set. The packet contained no data so there is no data sequence number or length. Note that the ACK sequence number is a small integer (1). The first time tcpdump sees a TCP `conversation', it prints the sequence number from the packet. On subsequent packets of the conversation, the difference between the current packet's sequence number and this initial sequence number is printed. This means that sequence numbers after the first can be interpreted as relative byte positions in the conversation's data stream (with the first data byte each direction being `1'). `-S' will override this feature, causing the original sequence numbers to be output.
On the 6th line, rtsg sends csam 19 bytes of data (bytes 2 through 20 in the rtsg → csam side of the conversation). The PUSH flag is set in the packet. On the 7th line, csam says it's received data sent by rtsg up to but not including byte 21. Most of this data is apparently sitting in the socket buffer since csam's receive window has gotten 19 bytes smaller. Csam also sends one byte of data to rtsg in this packet. On the 8th and 9th lines, csam sends two bytes of urgent, pushed data to rtsg. If the snapshot was small enough that tcpdump didn't capture the full TCP header, it interprets as much of the header as it can and then reports ``[|tcp]'' to indicate the remainder could not be interpreted. If the header contains a bogus option (one with a length that's either too small or beyond the end of the header), tcpdump reports it as ``[bad opt]'' and does not interpret any further options (since it's impossible to tell where they start). If the header length indicates options are present but the IP datagram length is not long enough for the options to actually be there, tcpdump reports it as ``[bad hdr length]''. Capturing TCP packets with particular flag combinations (SYN-ACK, URG-ACK, etc.) There are 8 bits in the control bits section of the TCP header: CWR | ECE | URG | ACK | PSH | RST | SYN | FIN Let's assume that we want to watch packets used in establishing a TCP connection. Recall that TCP uses a 3-way handshake protocol when it initializes a new connection; the connection sequence with regard to the TCP control bits is 1) Caller sends SYN 2) Recipient responds with SYN, ACK 3) Caller sends ACK Now we're interested in capturing packets that have only the SYN bit set (Step 1). Note that we don't want packets from step 2 (SYN-ACK), just a plain initial SYN. What we need is a correct filter expression for tcpdump. 
Recall the structure of a TCP header without options:

    0                            15                              31
   -----------------------------------------------------------------
   |          source port          |       destination port        |
   -----------------------------------------------------------------
   |                        sequence number                        |
   -----------------------------------------------------------------
   |                     acknowledgment number                     |
   -----------------------------------------------------------------
   |  HL   | rsvd  |C|E|U|A|P|R|S|F|          window size          |
   -----------------------------------------------------------------
   |         TCP checksum          |       urgent pointer          |
   -----------------------------------------------------------------

A TCP header usually holds 20 octets of data, unless options are present. The first line of the graph contains octets 0 - 3, the second line shows octets 4 - 7 etc. Starting to count with 0, the relevant TCP control bits are contained in octet 13:

    0             7|             15|             23|             31
   ----------------|---------------|---------------|----------------
   |  HL   | rsvd  |C|E|U|A|P|R|S|F|          window size          |
   ----------------|---------------|---------------|----------------
   |               |  13th octet   |               |               |

Let's have a closer look at octet no. 13:

                   |               |
                   |---------------|
                   |C|E|U|A|P|R|S|F|
                   |---------------|
                   |7   5   3     0|

These are the TCP control bits we are interested in. We have numbered the bits in this octet from 0 to 7, right to left, so the PSH bit is bit number 3, while the URG bit is number 5. Recall that we want to capture packets with only SYN set. Let's see what happens to octet 13 if a TCP datagram arrives with the SYN bit set in its header:

                   |C|E|U|A|P|R|S|F|
                   |---------------|
                   |0 0 0 0 0 0 1 0|
                   |---------------|
                   |7 6 5 4 3 2 1 0|

Looking at the control bits section we see that only bit number 1 (SYN) is set.
Assuming that octet number 13 is an 8-bit unsigned integer in network byte order, the binary value of this octet is

   00000010

and its decimal representation is

   0*2^7 + 0*2^6 + 0*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 2

We're almost done, because now we know that if only SYN is set, the value of the 13th octet in the TCP header, when interpreted as an 8-bit unsigned integer in network byte order, must be exactly 2. This relationship can be expressed as

   tcp[13] == 2

We can use this expression as the filter for tcpdump in order to watch packets which have only SYN set:

   tcpdump -i xl0 tcp[13] == 2

The expression says "let the 13th octet of a TCP datagram have the decimal value 2", which is exactly what we want. Now, let's assume that we need to capture SYN packets, but we don't care if ACK or any other TCP control bit is set at the same time. Let's see what happens to octet 13 when a TCP datagram with SYN-ACK set arrives:

   |C|E|U|A|P|R|S|F|
   |---------------|
   |0 0 0 1 0 0 1 0|
   |---------------|
   |7 6 5 4 3 2 1 0|

Now bits 1 and 4 are set in the 13th octet. The binary value of octet 13 is

   00010010

which translates to decimal

   0*2^7 + 0*2^6 + 0*2^5 + 1*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 18

Now we can't just use 'tcp[13] == 18' in the tcpdump filter expression, because that would select only those packets that have SYN-ACK set, but not those with only SYN set. Remember that we don't care if ACK or any other control bit is set as long as SYN is set. In order to achieve our goal, we need to logically AND the binary value of octet 13 with some other value to preserve the SYN bit. We know that we want SYN to be set in any case, so we'll logically AND the value in the 13th octet with the binary value of a SYN:

        00010010 SYN-ACK             00000010 SYN
   AND  00000010 (we want SYN)  AND  00000010 (we want SYN)
        --------                     --------
   =    00000010                =    00000010

We see that this AND operation delivers the same result regardless whether ACK or another TCP control bit is set.
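The octet-13 arithmetic above can be checked with a few lines of Python (the flag constants follow the bit layout in the diagram; they are written here for illustration, not taken from tcpdump source):

```python
# Worked check of the octet-13 arithmetic: 2 matches only-SYN,
# while masking with 2 matches any packet that has SYN set.
TCP_FIN, TCP_SYN, TCP_RST, TCP_PUSH, TCP_ACK = 0x01, 0x02, 0x04, 0x08, 0x10

only_syn = TCP_SYN               # binary 00000010 = 2
syn_ack  = TCP_SYN | TCP_ACK     # binary 00010010 = 18

# 'tcp[13] == 2' matches only the bare SYN packet:
print(only_syn == 2, syn_ack == 2)           # True False

# 'tcp[13] & 2 == 2' matches both, regardless of other control bits:
print(only_syn & 2 == 2, syn_ack & 2 == 2)   # True True
```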
The decimal representation of the AND value as well as the result of this operation is 2 (binary 00000010), so we know that for packets with SYN set the following relation must hold true:

   ( ( value of octet 13 ) AND ( 2 ) ) == ( 2 )

This points us to the tcpdump filter expression

   tcpdump -i xl0 'tcp[13] & 2 == 2'

Some offsets and field values may be expressed as names rather than as numeric values. For example tcp[13] may be replaced with tcp[tcpflags]. The following TCP flag field values are also available: tcp-fin, tcp-syn, tcp-rst, tcp-push, tcp-ack, tcp-urg. This can be demonstrated as:

   tcpdump -i xl0 'tcp[tcpflags] & tcp-push != 0'

Note that you should use single quotes or a backslash in the expression to hide the AND ('&') special character from the shell.
UDP Packets
UDP format is illustrated by this rwho packet:

   actinide.who > broadcast.who: udp 84

This says that port who on host actinide sent a UDP datagram to port who on host broadcast, the Internet broadcast address. The packet contained 84 bytes of user data. Some UDP services are recognized (from the source or destination port number) and the higher level protocol information printed. In particular, Domain Name service requests (RFC-1034/1035) and Sun RPC calls (RFC-1050) to NFS.
TCP or UDP Name Server Requests
(N.B.:The following description assumes familiarity with the Domain Service protocol described in RFC-1035. If you are not familiar with the protocol, the following description will appear to be written in Greek.)
Name server requests are formatted as

   src > dst: id op? flags qtype qclass name (len)

   h2opolo.1538 > helios.domain: 3+ A? ucbvax.berkeley.edu. (37)

Host h2opolo asked the domain server on helios for an address record (qtype=A) associated with the name ucbvax.berkeley.edu. The query id was `3'. The `+' indicates the recursion desired flag was set. The query length was 37 bytes, excluding the TCP or UDP and IP protocol headers.
The query operation was the normal one, Query, so the op field was omitted. If the op had been anything else, it would have been printed between the `3' and the `+'. Similarly, the qclass was the normal one, C_IN, and omitted. Any other qclass would have been printed immediately after the `A'. A few anomalies are checked and may result in extra fields enclosed in square brackets: If a query contains an answer, authority records or additional records section, ancount, nscount, or arcount are printed as `[na]', `[nn]' or `[nau]' where n is the appropriate count. If any of the response bits are set (AA, RA or rcode) or any of the `must be zero' bits are set in bytes two and three, `[b2&3=x]' is printed, where x is the hex value of header bytes two and three.
TCP or UDP Name Server Responses
Name server responses are formatted as

   src > dst: id op rcode flags a/n/au type class data (len)

   helios.domain > h2opolo.1538: 3 3/3/7 A 128.32.137.3 (273)
   helios.domain > h2opolo.1537: 2 NXDomain* 0/1/0 (97)

In the first example, helios responds to query id 3 from h2opolo with 3 answer records, 3 name server records and 7 additional records. The first answer record is type A (address) and its data is internet address 128.32.137.3. The total size of the response was 273 bytes, excluding TCP or UDP and IP headers. The op (Query) and response code (NoError) were omitted, as was the class (C_IN) of the A record. In the second example, helios responds to query 2 with a response code of non-existent domain (NXDomain) with no answers, one name server and no authority records. The `*' indicates that the authoritative answer bit was set. Since there were no answers, no type, class or data were printed. Other flag characters that might appear are `-' (recursion available, RA, not set) and `|' (truncated message, TC, set). If the `question' section doesn't contain exactly one entry, `[nq]' is printed.
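The `[b2&3=x]` anomaly check described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not tcpdump source: the bit masks follow the RFC 1035 header layout (AA, RA, rcode, and the must-be-zero Z field in bytes two and three), and the helper name is invented:

```python
# Hedged sketch: a DNS *query* should have none of the response-only bits
# (AA, RA, rcode) nor the must-be-zero Z bits set in header bytes 2-3;
# a violation is reported as [b2&3=x] with x the hex value of the word.
def query_anomaly(byte2, byte3):
    word = (byte2 << 8) | byte3
    AA, RA, RCODE, MBZ = 0x0400, 0x0080, 0x000F, 0x0070
    if word & (AA | RA | RCODE | MBZ):
        return '[b2&3=0x%x]' % word
    return ''

print(query_anomaly(0x01, 0x00))  # '': only RD set, a normal query
print(query_anomaly(0x01, 0x80))  # '[b2&3=0x180]': RA set in a query
```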
SMB/CIFS decoding
tcpdump now includes fairly extensive SMB/CIFS/NBT decoding for data on UDP/137, UDP/138 and TCP/139. Some primitive decoding of IPX and NetBEUI SMB data is also done. By default a fairly minimal decode is done, with a much more detailed decode done if -v is used. Be warned that with -v a single SMB packet may take up a page or more, so only use -v if you really want all the gory details. For information on SMB packet formats and what all the fields mean see https://download.samba.org/pub/samba/specs/ and other online resources. The SMB patches were written by Andrew Tridgell (tridge@samba.org).
NFS Requests and Replies
Sun NFS (Network File System) requests and replies are printed as:

   src.sport > dst.nfs: NFS request xid xid len op args
   src.nfs > dst.dport: NFS reply xid xid reply stat len op results

   sushi.1023 > wrl.nfs: NFS request xid 26377 112 readlink fh 21,24/10.73165
   wrl.nfs > sushi.1023: NFS reply xid 26377 reply ok 40 readlink "../var"
   sushi.1022 > wrl.nfs: NFS request xid 8219 144 lookup fh 9,74/4096.6878 "xcolors"
   wrl.nfs > sushi.1022: NFS reply xid 8219 reply ok 128 lookup fh 9,74/4134.3150

In the first line, host sushi sends a transaction with id 26377 to wrl. The request was 112 bytes, excluding the UDP and IP headers. The operation was a readlink (read symbolic link) on file handle (fh) 21,24/10.731657119. (If one is lucky, as in this case, the file handle can be interpreted as a major,minor device number pair, followed by the inode number and generation number.) In the second line, wrl replies `ok' with the same transaction id and the contents of the link. In the third line, sushi asks (using a new transaction id) wrl to lookup the name `xcolors' in directory file 9,74/4096.6878. In the fourth line, wrl sends a reply with the respective transaction id. Note that the data printed depends on the operation type. The format is intended to be self explanatory if read in conjunction with an NFS protocol spec.
Also note that older versions of tcpdump printed NFS packets in a slightly different format: the transaction id (xid) would be printed instead of the non-NFS port number of the packet. If the -v (verbose) flag is given, additional information is printed. For example: sushi.1023 > wrl.nfs: NFS request xid 79658 148 read fh 21,11/12.195 8192 bytes @ 24576 wrl.nfs > sushi.1023: NFS reply xid 79658 reply ok 1472 read REG 100664 ids 417/0 sz 29388 (-v also prints the IP header TTL, ID, length, and fragmentation fields, which have been omitted from this example.) In the first line, sushi asks wrl to read 8192 bytes from file 21,11/12.195, at byte offset 24576. Wrl replies `ok'; the packet shown on the second line is the first fragment of the reply, and hence is only 1472 bytes long (the other bytes will follow in subsequent fragments, but these fragments do not have NFS or even UDP headers and so might not be printed, depending on the filter expression used). Because the -v flag is given, some of the file attributes (which are returned in addition to the file data) are printed: the file type (``REG'', for regular file), the file mode (in octal), the UID and GID, and the file size. If the -v flag is given more than once, even more details are printed. NFS reply packets do not explicitly identify the RPC operation. Instead, tcpdump keeps track of ``recent'' requests, and matches them to the replies using the transaction ID. If a reply does not closely follow the corresponding request, it might not be parsable. 
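The reply-matching technique described above (remember "recent" requests, match replies by transaction ID, give up if the request has aged out) can be sketched with a small bounded cache. The class and its capacity are illustrative, not tcpdump's actual data structure:

```python
# Hedged sketch of matching NFS replies to requests by transaction ID.
from collections import OrderedDict

class XidCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.pending = OrderedDict()   # xid -> operation name

    def request(self, xid, op):
        """Remember a request; forget the oldest if the cache is full."""
        self.pending[xid] = op
        if len(self.pending) > self.capacity:
            self.pending.popitem(last=False)

    def reply(self, xid):
        """Return the matching operation, or None if the request was
        forgotten -- the 'reply might not be parsable' case."""
        return self.pending.pop(xid, None)

cache = XidCache()
cache.request(26377, 'readlink')
print(cache.reply(26377))  # 'readlink'
print(cache.reply(99999))  # None: no matching request remembered
```

Bounding the cache is what makes a late reply unmatchable: once enough newer requests have arrived, the old xid has been evicted.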
AFS Requests and Replies
Transarc AFS (Andrew File System) requests and replies are printed as:

   src.sport > dst.dport: rx packet-type
   src.sport > dst.dport: rx packet-type service call call-name args
   src.sport > dst.dport: rx packet-type service reply call-name args

   elvis.7001 > pike.afsfs: rx data fs call rename old fid 536876964/1/1 ".newsrc.new" new fid 536876964/1/1 ".newsrc"
   pike.afsfs > elvis.7001: rx data fs reply rename

In the first line, host elvis sends a RX packet to pike. This was a RX data packet to the fs (fileserver) service, and is the start of an RPC call. The RPC call was a rename, with the old directory file id of 536876964/1/1 and an old filename of `.newsrc.new', and a new directory file id of 536876964/1/1 and a new filename of `.newsrc'. The host pike responds with a RPC reply to the rename call (which was successful, because it was a data packet and not an abort packet). In general, all AFS RPCs are decoded at least by RPC call name. Most AFS RPCs have at least some of the arguments decoded (generally only the `interesting' arguments, for some definition of interesting). The format is intended to be self-describing, but it will probably not be useful to people who are not familiar with the workings of AFS and RX. If the -v (verbose) flag is given, acknowledgement packets and additional header information is printed, such as the RX call ID, call number, sequence number, serial number, and the RX packet flags. If the -v flag is given twice, additional information is printed, such as the RX call ID, serial number, and the RX packet flags. The MTU negotiation information is also printed from RX ack packets. If the -v flag is given three times, the security index and service id are printed. Error codes are printed for abort packets, with the exception of Ubik beacon packets (because abort packets are used to signify a yes vote for the Ubik protocol). AFS reply packets do not explicitly identify the RPC operation.
Instead, tcpdump keeps track of ``recent'' requests, and matches them to the replies using the call number and service ID. If a reply does not closely follow the corresponding request, it might not be parsable. KIP AppleTalk (DDP in UDP) AppleTalk DDP packets encapsulated in UDP datagrams are de-encapsulated and dumped as DDP packets (i.e., all the UDP header information is discarded). The file /etc/atalk.names is used to translate AppleTalk net and node numbers to names. Lines in this file have the form number name 1.254 ether 16.1 icsd-net 1.254.110 ace The first two lines give the names of AppleTalk networks. The third line gives the name of a particular host (a host is distinguished from a net by the 3rd octet in the number - a net number must have two octets and a host number must have three octets.) The number and name should be separated by whitespace (blanks or tabs). The /etc/atalk.names file may contain blank lines or comment lines (lines starting with a `#'). AppleTalk addresses are printed in the form net.host.port 144.1.209.2 > icsd-net.112.220 office.2 > icsd-net.112.220 jssmag.149.235 > icsd-net.2 (If the /etc/atalk.names doesn't exist or doesn't contain an entry for some AppleTalk host/net number, addresses are printed in numeric form.) In the first example, NBP (DDP port 2) on net 144.1 node 209 is sending to whatever is listening on port 220 of net icsd node 112. The second line is the same except the full name of the source node is known (`office'). The third line is a send from port 235 on net jssmag node 149 to broadcast on the icsd-net NBP port (note that the broadcast address (255) is indicated by a net name with no host number - for this reason it's a good idea to keep node names and net names distinct in /etc/atalk.names). NBP (name binding protocol) and ATP (AppleTalk transaction protocol) packets have their contents interpreted. 
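The /etc/atalk.names format described above is simple enough to parse in a few lines: two-octet numbers name nets, three-octet numbers name hosts, and blank lines or `#' comments are skipped. This sketch is illustrative, not tcpdump's actual code:

```python
def parse_atalk_names(text):
    """Parse /etc/atalk.names-style lines into net and host name maps."""
    nets, hosts = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # blank lines and comments
        number, name = line.split(None, 1)  # whitespace-separated fields
        octets = number.split(".")
        if len(octets) == 2:              # a net number has two octets
            nets[number] = name.strip()
        elif len(octets) == 3:            # a host number has three octets
            hosts[number] = name.strip()
    return nets, hosts

sample = """
# AppleTalk numbers from the manual page
1.254        ether
16.1         icsd-net
1.254.110    ace
"""
nets, hosts = parse_atalk_names(sample)
print(nets["1.254"], hosts["1.254.110"])   # -> ether ace
```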
Other protocols just dump the protocol name (or number if no name is registered for the protocol) and packet size. NBP packets are formatted like the following examples: icsd-net.112.220 > jssmag.2: nbp-lkup 190: "=:LaserWriter@*" jssmag.209.2 > icsd-net.112.220: nbp-reply 190: "RM1140:LaserWriter@*" 250 techpit.2 > icsd-net.112.220: nbp-reply 190: "techpit:LaserWriter@*" 186 The first line is a name lookup request for laserwriters sent by net icsd host 112 and broadcast on net jssmag. The nbp id for the lookup is 190. The second line shows a reply for this request (note that it has the same id) from host jssmag.209 saying that it has a laserwriter resource named "RM1140" registered on port 250. The third line is another reply to the same request saying host techpit has laserwriter "techpit" registered on port 186. ATP packet formatting is demonstrated by the following example: jssmag.209.165 > helios.132: atp-req 12266<0-7> 0xae030001 helios.132 > jssmag.209.165: atp-resp 12266:0 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:1 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:2 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:3 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:4 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:5 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:6 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp*12266:7 (512) 0xae040000 jssmag.209.165 > helios.132: atp-req 12266<3,5> 0xae030001 helios.132 > jssmag.209.165: atp-resp 12266:3 (512) 0xae040000 helios.132 > jssmag.209.165: atp-resp 12266:5 (512) 0xae040000 jssmag.209.165 > helios.132: atp-rel 12266<0-7> 0xae030001 jssmag.209.133 > helios.132: atp-req* 12267<0-7> 0xae030002 Jssmag.209 initiates transaction id 12266 with host helios by requesting up to 8 packets (the `<0-7>'). The hex number at the end of the line is the value of the `userdata' field in the request. Helios responds with 8 512-byte packets. 
The `:digit' following the transaction id gives the packet sequence number in the transaction and the number in parens is the amount of data in the packet, excluding the ATP header. The `*' on packet 7 indicates that the EOM bit was set. Jssmag.209 then requests that packets 3 & 5 be retransmitted. Helios resends them, then jssmag.209 releases the transaction. Finally, jssmag.209 initiates the next request. The `*' on the request indicates that XO (`exactly once') was not set. PACKET METADATA FILTER Use a packet metadata filter expression to match packets against descriptive information about the packet: interface, process, service type, or direction. Note that this is meaningful only with capture files in the Pcap-ng file format or for interfaces supporting the PKTAP data link type. The syntax supports the following operators: or logical or and logical and not negation (...) to group sub-expressions = is equal != is not equal || logical or (alternate) && logical and (alternate) ! negation (alternate) The syntax supports the following keywords to denote which of the packet metadata contents is to be compared: if interface name proc process name pid process ID svc service class dir direction eproc effective process name epid effective process ID dlt data link type For example, to filter packets sent on interface en0 by the process named "nc", or incoming packets not on interface en0: -Q "( if=en0 and proc =nc ) || (if != en0 and dir=in)" Note that a complex packet metadata filter expression needs to be put in quotes as the option -Q takes a single string parameter. Likewise, strings that contain spaces have to be surrounded by quotes. For example: -Q "proc = 'Some App'" Interface names can be filtered by partial string if the unit number is omitted. 
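Because -Q takes a single string, and values containing spaces need their own inner quotes (as in proc = 'Some App'), building the expression programmatically avoids shell-quoting mistakes. A minimal sketch; the helper name is invented:

```python
def metadata_filter(terms):
    """Join (keyword, value) terms with 'and', quoting values that
    contain spaces, as the -Q expression syntax requires."""
    parts = []
    for key, value in terms:
        if " " in value:
            value = "'%s'" % value   # e.g. proc = 'Some App'
        parts.append("%s = %s" % (key, value))
    return " and ".join(parts)

expr = metadata_filter([("if", "en0"), ("proc", "Some App")])
print(expr)   # -> if = en0 and proc = 'Some App'
# The whole expression is passed to tcpdump as one argv element,
# e.g. ["tcpdump", "-Q", expr], so no extra shell quoting is needed.
```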
For example use the following to include interfaces whose name begins with "en": -Q "if = en" The data link type can be specified by number or by one of the following symbolic names: NULL EN10MB PPP RAW ether (same as EN10MB) SEE ALSO stty(1), pcap(3PCAP), bpf(4), nit(4P), pcap-savefile(5), pcap-filter(7), pcap-tstamp(7) https://www.iana.org/assignments/media-types/application/vnd.tcpdump.pcap AUTHORS The original authors are: Van Jacobson, Craig Leres and Steven McCanne, all of the Lawrence Berkeley National Laboratory, University of California, Berkeley, CA. It is currently being maintained by tcpdump.org. The current version is available via HTTPS: https://www.tcpdump.org/ The original distribution is available via anonymous ftp: ftp://ftp.ee.lbl.gov/old/tcpdump.tar.Z IPv6/IPsec support is added by WIDE/KAME project. This program uses OpenSSL/LibreSSL, under specific configurations. BUGS To report a security issue please send an e-mail to security@tcpdump.org. To report bugs and other problems, contribute patches, request a feature, provide generic feedback etc. please see the file CONTRIBUTING in the tcpdump source tree root. NIT doesn't let you watch your own outbound traffic, BPF will. We recommend that you use the latter. On Linux systems with 2.0[.x] kernels: packets on the loopback device will be seen twice; packet filtering cannot be done in the kernel, so that all packets must be copied from the kernel in order to be filtered in user mode; all of a packet, not just the part that's within the snapshot length, will be copied from the kernel (the 2.0[.x] packet capture mechanism, if asked to copy only part of a packet to userspace, will not report the true length of the packet; this would cause most IP packets to get an error from tcpdump); capturing on some PPP devices won't work correctly. We recommend that you upgrade to a 2.2 or later kernel. 
Some attempt should be made to reassemble IP fragments or, at least to compute the right length for the higher level protocol. Name server inverse queries are not dumped correctly: the (empty) question section is printed rather than real query in the answer section. Some believe that inverse queries are themselves a bug and prefer to fix the program generating them rather than tcpdump. A packet trace that crosses a daylight savings time change will give skewed time stamps (the time change is ignored). Filter expressions on fields other than those in Token Ring headers will not correctly handle source-routed Token Ring packets. Filter expressions on fields other than those in 802.11 headers will not correctly handle 802.11 data packets with both To DS and From DS set. ip6 proto should chase header chain, but at this moment it does not. ip6 protochain is supplied for this behavior. Arithmetic expression against transport layer headers, like tcp[0], does not work against IPv6 packets. It only looks at IPv4 packets. 21 December 2020 TCPDUMP(1)
|
xartutil
| null | null | null | null | null |
securityd
|
securityd maintains security contexts and arbitrates cryptographic operations. Access to keychain items is routed through securityd to enforce access controls and to keep private keys out of user process address space. All user interaction with securityd is mediated through the Security Agent. This command is not intended to be invoked directly. HISTORY securityd was first introduced in Mac OS X version 10.0 (Cheetah) as the "Security Server" and was renamed in 10.4 (Tiger). SEE ALSO authd(8), secd(8) Darwin Fri July 20 2018 Darwin
|
securityd – Security context daemon for keychain and cryptographic operations
|
securityd
| null | null |
nlcontrol
|
The NETLOGON channel is a secure connection to a Windows Domain Controller that is used for non-Kerberos user authentication. nlcontrol can be used to manipulate and test the status of the NETLOGON channel.
|
nlcontrol – NETLOGON secure channel utility
|
nlcontrol [options] reconfigure nlcontrol [options] status nlcontrol [options] verify nlcontrol [options] change-password
|
-domain The service will log extensive debug information and may perform extra diagnostic checks. This option is typically only useful for debugging. -help Prints a usage message and exits. -server list The service will listen on each of the TCP ports specified in the comma-separated list. This option is typically only used for debugging. nlcontrol supports the following commands: reconfigure Force the NETLOGON service to re-read its configuration information. This is not necessary in normal operation, since the NETLOGON service will detect relevant configuration changes and re-establish the secure channel automatically. status Print the current status of the NETLOGON channel without altering its state. verify Attempt to verify that the NETLOGON channel is available and working correctly. change-password Bring up the NETLOGON channel and change the password of the machine account. The machine account is used to authenticate to the Domain Controller in order to secure the channel. DIAGNOSTICS nlcontrol will exit with a non-zero error code if the command fails. It may also display a Windows error code, which is typically self-explanatory. HISTORY The nlcontrol utility first appeared in Mac OS 10.7. Darwin Wed Nov 4 17:03:55 PST 2009 Darwin
| null |
quotaon
|
Quotaon announces to the system that disk quotas should be enabled on one or more filesystems. Quotaoff announces to the system that the specified filesystems should have disk quotas turned off. The filesystem must be mounted and it must have the appropriate mount option file located at its root, the .quota.ops.user file for user quota configuration, and the .quota.ops.group file for group quota configuration. Quotaon also expects each filesystem to have the appropriate quota data files located at its root, the .quota.user file for user data, and the .quota.group file for group data. These filenames and their root location cannot be overridden. By default, quotaon will attempt to enable both user and group quotas. By default, quotaoff will disable both user and group quotas. Available options: -a If the -a flag is supplied in place of any filesystem names, quotaon/quotaoff will enable/disable any filesystems with an existing mount option file at its root. The mount option file specifies the types of quotas that are to be configured. -g Only group quotas will be enabled/disabled. The mount option file, .quota.ops.group, must exist at the root of the filesystem. -u Only user quotas will be enabled/disabled. The mount option file, .quota.ops.user, must exist at the root of the filesystem. -v Causes quotaon and quotaoff to print a message for each filesystem where quotas are turned on or off. Specifying both -g and -u is equivalent to the default. Quotas for both users and groups will automatically be turned on at filesystem mount if the appropriate mount option file and binary data file is in place at its root. FILES Each of the following quota files is located at the root of the mounted filesystem. The mount option files are empty files whose existence indicates that quotas are to be enabled for that filesystem. 
.quota.user data file containing user quotas .quota.group data file containing group quotas .quota.ops.user mount option file used to enable user quotas .quota.ops.group mount option file used to enable group quotas SEE ALSO quota(1), quotactl(2), edquota(8), quotacheck(8), repquota(8) HISTORY The quotaon command appeared in 4.2BSD. BSD 4.2 October 17, 2002 BSD 4.2
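The activation rule described above — quotaon considers a quota type configured only when the corresponding fixed-name mount option file exists at the filesystem root — can be sketched like this. Illustrative only, not the actual tool:

```python
import os
import tempfile

def quota_types_configured(fs_root):
    """Return which quota types quotaon would consider configured,
    based on the fixed-name mount option files at the filesystem root."""
    types = []
    if os.path.exists(os.path.join(fs_root, ".quota.ops.user")):
        types.append("user")
    if os.path.exists(os.path.join(fs_root, ".quota.ops.group")):
        types.append("group")
    return types

# Example: a root with only the user mount option file present.
root = tempfile.mkdtemp()
open(os.path.join(root, ".quota.ops.user"), "w").close()  # empty marker file
print(quota_types_configured(root))   # -> ['user']
```

With -a, quotaon would apply this check to every mounted filesystem and act on those returning a non-empty list.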
|
quotaon, quotaoff – turn filesystem quotas on and off
|
quotaon [-g] [-u] [-v] filesystem ... quotaon [-g] [-u] [-v] -a quotaoff [-g] [-u] [-v] filesystem ... quotaoff [-g] [-u] [-v] -a
| null | null |
spctl
|
spctl manages the security assessment policy subsystem. This subsystem maintains and evaluates rules that determine whether the system allows the installation, execution, and other operations on files on the system. spctl requires one command option that determines its principal operation: --add Add rule(s) to the system-wide assessment rule database. -a, --assess Requests that spctl perform an assessment on the files given. --disable Disable one or more rules in the assessment rule database. Disabled rules are not considered when performing assessment, but remain in the database and can be re-enabled later. --enable Enable rule(s) in the assessment rule database, counteracting earlier disabling. --global-disable Disable the assessment subsystem altogether. Operations that would be denied by system policy will be allowed to proceed; assessment APIs always report success. Requires root access. --global-enable Enable the assessment subsystem. Operations that are denied by system policy will fail; assessment APIs report the truth. Requires root access. --remove Remove rule(s) from the assessment rule database. --status Query whether the assessment subsystem is enabled or disabled. In addition, the following options are recognized: --anchor In rule update operations, indicates that the arguments are hashes of anchor certificates. --continue If the assessment of a file fails, continue assessing additional file arguments. Without this option, the first failed assessment terminates operation. --hash In rule update operations, indicates that the arguments are code directory hashes. --ignore-cache Do not query or use the assessment object cache. This may significantly slow down operation. Newly generated assessments may still be stored in the cache. --label label Specifies a string label to attach to new rules, or find in existing rules. Labels are arbitrary strings that are assigned by convention. Rule labels are optional. 
--no-cache Do not place the outcome of any assessments into the assessment object cache. No other assessment may reuse this outcome. This option does not prohibit the use of existing cache entries. --path In rule update operations, indicates that the argument(s) denote paths to files on disk. --priority priority In rule update operations, specifies the priority of the rule(s) created or changed. Priorities are floating-point numbers. Higher numeric values indicate higher priority. --raw When displaying the outcome of an assessment, write it as a "raw" XML plist instead of parsing it in somewhat more friendly form. This is useful when used in scripts, or to access newly invented assessment aspects that spctl does not yet know about. --requirement In rule update operations, indicates that the argument(s) are code requirement source. --reset-default Unconditionally reset the system policy database to its default value. This discards all changes made by administrators. It also heals any corruption to the database. It does not implicitly either enable or disable the facility. This must be done as the super user. Reboot after use. --rule In rule update operations, indicates that the argument(s) are the index numbers of existing rules. -t, --type Specify which type of assessment is desired: execute to assess code execution, install to assess installation of an installer package, and open to assess the opening of documents. The default is to assess execution. -v, --verbose Requests more verbose output. Repeat the option or give it a higher numeric value to increase verbosity. RULE SUBJECTS The system assessment rule database contains entries that match candidates based on Code Requirements. spctl allows you to specify these requirements directly using the --requirement option. In addition, individual programs on disk can be addressed with the --path option (which uses their Designated Requirement). 
The --anchor option takes the hash of a (full) certificate and turns it into a requirement matching any signature based on that anchor certificate. Alternatively, it can take the absolute path of a certificate file on disk, containing the DER form of an anchor certificate. Finally, the --hash option generates a code requirement that denotes only and exactly one program whose CodeDirectory hash is given. The means of specifying subjects does not affect the remaining processing. FILES /var/db/SystemPolicy The system policy database. /var/db/.SystemPolicy-default A copy of the initial distribution version of the system policy database. Useful for starting over if the database gets messed up beyond recognition.
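The database semantics described above — rules matched against a candidate, floating-point priorities with higher numeric values winning, and disabled rules kept but ignored — suggest an evaluation model like the following sketch. The data layout is invented for illustration and is not spctl's actual implementation:

```python
def assess(rules, candidate):
    """Return the verdict of the highest-priority enabled rule that
    matches the candidate. Rules are (priority, matcher, allow, enabled)."""
    best = None
    for priority, matches, allow, enabled in rules:
        if not enabled or not matches(candidate):
            continue   # disabled rules stay in the database but are ignored
        if best is None or priority > best[0]:
            best = (priority, allow)
    return best[1] if best else False

rules = [
    (1.0, lambda c: True, False, True),               # low-priority deny-all
    (5.0, lambda c: c == "Frobozz.app", True, True),  # specific allow rule
    (9.0, lambda c: True, True, False),               # disabled allow-all
]
print(assess(rules, "Frobozz.app"))   # -> True
print(assess(rules, "Unknown.app"))   # -> False
```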
|
spctl – SecAssessment system policy security
|
spctl --assess [-t type] [-] file ... spctl --global-enable | --global-disable spctl --enable | --disable | --remove [-t type] [--path path] [--requirement requirement] [--anchor hash] [--hash hash] spctl --status
| null |
To check whether Mail.app is allowed to run on the local system: spctl -a /Applications/Mail.app To allow Frobozz.app to run on the local system: spctl --add --label "My Stuff" /Applications/Frobozz.app To forbid all code obtained from the Mac App Store from running: spctl --disable --label "Mac App Store" DIAGNOSTICS spctl exits zero on success, or one if an operation has failed. Exit code two indicates unrecognized or unsuitable arguments. If an assessment operation results in denial but no other problem has occurred, the exit code is three. SEE ALSO codesign(1), syspolicyd(1) HISTORY The system policy facility and spctl command first appeared in Mac OS X Lion 10.7.3 as a limited developer preview. macOS 14.5 January 19, 2012 macOS 14.5
|
vipw
|
The vipw utility edits the password file after setting the appropriate locks, and does any necessary processing after the password file is unlocked. If the password file is already locked for editing by another user, vipw will ask you to try again later. The default editor for vipw is vi(1). When run without options, vipw will work with the password files in /etc. The -d option may be used to specify an alternative directory to work with. The vipw utility performs a number of consistency checks on the password entries, and will not allow a password file with a “mangled” entry to be installed. If vipw rejects the new password file, the user is prompted to re-enter the edit session. Once the information has been verified, vipw uses pwd_mkdb(8) to update the user database. This is run in the background, and, at very large sites could take several minutes. Until this update is completed, the password file is unavailable for other updates and the new information is not available to programs. ENVIRONMENT If the following environment variable exists it will be utilized by vipw: EDITOR The editor specified by the string EDITOR will be invoked instead of the default editor vi(1). This can be used to allow a script to non-interactively modify the password file. PW_SCAN_BIG_IDS See pwd_mkdb(8). SEE ALSO chpass(1), passwd(1), passwd(5), pwd_mkdb(8) HISTORY The vipw utility appeared in 4.0BSD. BUGS The mechanism for checking for password file modifications requires that the modification time of the password file changes. This means that in a default configuration where file system timestamps are not calculated with sub-second precision, EDITOR has to run for at least one second. Non-interactive editor scripts should invoke sleep(1) or equivalent to ensure this happens. macOS 14.5 February 14, 2012 macOS 14.5
|
vipw – edit the password file
|
vipw [-d directory]
| null | null |
gssd
|
gssd provides kernel access to the Generic Security Services API (GSS- API). There are no configuration options to gssd. Users should not run gssd manually. Mac OS X June 29, 2006 Mac OS X
|
gssd – Generic Security Services Daemon
|
gssd
| null | null |
slaptest
|
Slaptest is used to check the conformance of the slapd(8) configuration. It opens the slapd.conf(5) configuration file or the slapd-config(5) backend, and parses it according to the general and the backend-specific rules, checking its sanity.
|
slaptest - Check the suitability of the OpenLDAP slapd configuration
|
/usr/sbin/slaptest [-d debug-level] [-f slapd.conf] [-F confdir] [-n dbnum] [-o option[=value]] [-Q] [-u] [-v]
|
-d debug-level enable debugging messages as defined by the specified debug-level; see slapd(8) for details. -f slapd.conf specify an alternative slapd.conf(5) file. -F confdir specify a config directory. If both -f and -F are specified, the config file will be read and converted to config directory format and written to the specified directory. If neither option is specified, slaptest will attempt to read the default config directory before trying to use the default config file. If a valid config directory exists then the default config file is ignored. If dry-run mode is also specified, no conversion will occur. -n dbnum Just open and test the dbnum-th database listed in the configuration file. To only test the config database slapd-config(5), use -n 0 as it is always the first database. -o option[=value] Specify an option with a(n optional) value. Possible generic options/values are: syslog=<subsystems> (see `-s' in slapd(8)) syslog-level=<level> (see `-S' in slapd(8)) syslog-user=<user> (see `-l' in slapd(8)) -Q Be extremely quiet: only the exit code indicates success (0) or not (any other value). -u enable dry-run mode (i.e. don't fail if databases cannot be opened, but config is fine). -v enable verbose mode.
|
To check a slapd.conf(5) give the command: /usr/sbin/slaptest -f /etc/openldap/slapd.conf -v SEE ALSO ldap(3), slapd(8), slapdn(8) "OpenLDAP Administrator's Guide" (http://www.OpenLDAP.org/doc/admin/) ACKNOWLEDGEMENTS OpenLDAP Software is developed and maintained by The OpenLDAP Project <http://www.openldap.org/>. OpenLDAP Software is derived from University of Michigan LDAP 3.3 Release. OpenLDAP 2.4.28 2011/11/24 SLAPTEST(8C)
|
automount
|
automount reads the /etc/auto_master file, and any local or network maps it includes, and mounts autofs on the appropriate mount points to cause mounts to be triggered. It will also attempt to unmount any top-level autofs mounts that correspond to maps no longer found.
|
automount – mount autofs on the appropriate mount points
|
automount [-v] [-c] [-u] [-t timeout]
|
-v Print more detailed information about actions taken by automount. -c Tell automountd(8) to flush any cached information it has. -u Unmount all non-busy automounted mounts. Top-level triggers are preserved. -t timeout Set to timeout seconds the time after which an automounted file system will be unmounted if it hasn't been referred to within that period of time. The default is 10 minutes (600 seconds). FILES /etc/autofs.conf configuration file for automount and automountd. /etc/auto_master The master map contains a list of directories to be controlled by autofs and their associated direct map or indirect maps. SEE ALSO auto_master(5), automountd(8), autofs.conf(5) Darwin July 17, 2010 Darwin
| null |
scselect
|
scselect provides access to the system configuration sets, commonly referred to as "locations". When invoked with no arguments, scselect displays the names and associated identifiers for each defined "location" and indicates which is currently active. scselect also allows the user to select or change the active "location" by specifying its name or identifier. Changing the "location" causes an immediate system reconfiguration, unless the -n option is supplied. At present, the majority of preferences associated with a "location" relate to the system's network configuration. The command line options are as follows: -n Delay changing the system's "location" until the next system boot (or the next time that the system configuration preferences are changed). new-location-name If not specified, a list of the available "location" names and associated identifiers will be reported on standard output. If specified, this argument is matched with the "location" names and identifiers and the matching set is activated. SEE ALSO configd(8) HISTORY The scselect command appeared in Mac OS X Public Beta. Mac OS X November 4, 2003 Mac OS X
|
scselect – Select system configuration "location"
|
scselect [-n] [new-location-name]
| null | null |
graphicssession
|
graphicssession is used by the Apple Mac OS X base system to initialize graphics contexts. It should not be called directly. Mac OS X February 25, 2011 Mac OS X
|
graphicssession – graphics session initialization utility
|
graphicssession
| null | null |
ckksctl
| null | null | null | null | null |
lpinfo
|
lpinfo lists the available devices or drivers known to the CUPS server. The first form (-m) lists the available drivers, while the second form (-v) lists the available devices.
|
lpinfo - show available devices or drivers (deprecated)
|
lpinfo [ -E ] [ -h server[:port] ] [ -l ] [ --device-id device-id-string ] [ --exclude-schemes scheme-list ] [ --include-schemes scheme-list ] [ --language locale ] [ --make-and-model name ] [ --product name ] -m lpinfo [ -E ] [ -h server[:port] ] [ -l ] [ --exclude-schemes scheme-list ] [ --include-schemes scheme-list ] [ --timeout seconds ] -v
|
lpinfo accepts the following options: -E Forces encryption when connecting to the server. -h server[:port] Selects an alternate server. -l Shows a "long" listing of devices or drivers. --device-id device-id-string Specifies the IEEE-1284 device ID to match when listing drivers with the -m option. --exclude-schemes scheme-list Specifies a comma-delimited list of device or PPD schemes that should be excluded from the results. Static PPD files use the "file" scheme. --include-schemes scheme-list Specifies a comma-delimited list of device or PPD schemes that should be included in the results. Static PPD files use the "file" scheme. --language locale Specifies the language to match when listing drivers with the -m option. --make-and-model name Specifies the make and model to match when listing drivers with the -m option. --product name Specifies the product to match when listing drivers with the -m option. --timeout seconds Specifies the timeout when listing devices with the -v option. CONFORMING TO The lpinfo command is unique to CUPS.
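The --include-schemes and --exclude-schemes semantics above — comma-delimited scheme lists applied to device or PPD URIs, with static PPD files using the "file" scheme — amount to a simple filter. A sketch with invented device URIs, not CUPS code:

```python
def filter_by_scheme(uris, include=None, exclude=None):
    """Keep URIs whose scheme passes the comma-delimited include and
    exclude lists, mirroring --include-schemes / --exclude-schemes."""
    inc = include.split(",") if include else None
    exc = exclude.split(",") if exclude else []
    kept = []
    for uri in uris:
        scheme = uri.split(":", 1)[0]   # part before the first colon
        if inc is not None and scheme not in inc:
            continue                     # not on the include list
        if scheme in exc:
            continue                     # explicitly excluded
        kept.append(uri)
    return kept

devices = ["usb://HP/LaserJet", "ipp://printer.local/ipp/print",
           "file:///dev/null"]
print(filter_by_scheme(devices, exclude="file"))
# -> ['usb://HP/LaserJet', 'ipp://printer.local/ipp/print']
```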
|
List all devices: lpinfo -v List all drivers: lpinfo -m List drivers matching "HP LaserJet": lpinfo --make-and-model "HP LaserJet" -m NOTES CUPS printer drivers and backends are deprecated and will no longer be supported in a future feature release of CUPS. Printers that do not support IPP can be supported using applications such as ippeveprinter(1). SEE ALSO lpadmin(8), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS lpinfo(8)
|
smbd
|
smbd runs on a server machine to service SMB protocol requests from SMB client machines.
|
smbd – SMB server daemon
|
smbd [options]
|
-debug The service will log extensive debug information and may perform extra diagnostic checks. In debug mode, smbd will not exit when idle. -help Prints a usage message and exits. -ports list The service will listen on each of the TCP ports specified in the comma-separated list. This option is typically only used for debugging, since smbd is normally launched by launchd(8) on port 445. -stdout Causes smbd to print log messages to standard output instead of the system log. -no-symlinks In normal operation, smbd will respond to client symlink requests but will never follow symlinks itself. This flag causes smbd to restrict client access to symlink operations and to always follow symlinks. In this case, clients will not be aware that symlinks are in use because they will always be directed to the symlink target. FILES /Library/Preferences/SystemConfiguration/com.apple.smb.server.plist The primary configuration for the SMB stack. This file is updated by various system services and should not be edited by hand. /System/Library/LaunchDaemons/com.apple.smbd.plist The smbd service's property list file for launchd(8). SIGNALS SIGHUP This signal causes smbd to re-read its configuration settings. SEE ALSO launchd(8) STANDARDS The SMB protocol is documented as part of the Microsoft Work Group Server Protocol Program (WSPP) technical documentation set, specifically MS-CIFS Common Internet File System Specification MS-SMB Server Message Block (SMB) Protocol Specification HISTORY The smbd utility first appeared in Mac OS 10.7. Darwin Wed Nov 4 17:03:55 PST 2009 Darwin
| null |
serverinfo
| null | null | null | null | null |
cvmkfs
|
cvmkfs will initialize a Xsan volume optionally using volume_name as the name. If no name is supplied, a list of volumes configured will be presented. Active file systems may not be re-initialized. The user will be prompted for a confirmation before initializing the volume. WARNING: This will destroy ANY existing volume data for the named Xsan volume!
|
cvmkfs - Initialize a Xsan Volume
|
cvmkfs [-GF] [-a key] [-n ninode[k|m|g]] [-r[-e][-m]] [-Q] [-R [date:]time] [-X] [volume_name]
|
-a key Set the affinity of the root directory to key. -e When remaking a managed file system in preparation for restoring all metadata from a metadata archive, the -e option specifies that the FSM should restore all user file extents. When this option is not specified, files are truncated, which results in them being restored from backup. Use this option when the metadata disks must be restored but all disks containing user data are intact. This option can only be used in conjunction with the -r option and is ignored when restoring unmanaged file systems. This option also causes thin provision unmapping work to be skipped for all stripegroups that can contain user data. -G Bypass "Press return to continue..." type prompts. These prompts are useful on Windows systems to give the user a chance to read the error message before the window disappears. -F Force. This option has been deprecated and replaced with -X. It will cause the same action as that option. -f Failure mode - do not fail if there is a configuration mismatch or other serious abnormal condition detected. Note: This option is not intended for general use. Use only if instructed by Apple support. Incorrect use may result in an unusable file system. -m When using the -r option to remake a file system in preparation for a metadata restore from the metadata archive, cvmkfs will issue an error message and exit without modifying the file system if the stripe groups are defined to hold both metadata and user data. It does this because it is possible for the restore procedure to inadvertently allocate disk space for metadata that conflicts with user data, resulting in file corruption. The -m option can be used in conjunction with the -r option to override this behavior and force cvmkfs to remake the file system despite the risk of corruption. Use this option only if instructed by Quantum support. -n ninode[k|m|g] Pre-allocate ninode inodes. NOTE: This option has been deprecated. 
-Q This option causes cvmkfs to print qustat statistics just before exiting. -R [date:]time Remake the file system in preparation for restoring all metadata as it existed at the given date and time. The format for the date:time argument is yyyy-mm-dd:hh:mm:ss, for example, "-R 2016-08-24:08:00:00". If the date is not specified, then today is assumed. This option is only valid for managed file systems when metadataArchiveDays is set to a non-zero value in the configuration file and it cannot be used with the -e option to restore file extents. This "historical" restore will truncate all files, forcing all data to be restored from backup. WARNING: It is highly recommended that Quantum Technical Support be contacted before using this option. If used improperly, data could be lost. -r Remake the file system in preparation for restoring all metadata from a metadata archive. This option can only be used when metadataArchive is set to true in the configuration file and a metadata archive exists that is current as of the last time the corresponding FSM was stopped. The remake option can be useful for disaster recovery or for metadata and journal stripe group reconfiguration. For a managed file system, the default behavior is to truncate all of the user data files with the expectation that they have been backed up to another media such as tape. The files will be reloaded when next accessed or through other storage manager actions. It is possible to override this behavior by specifying -e on a managed file system. In this case the same cautions as specified below for unmanaged file systems apply. For an unmanaged file system, there is no backup copy of the user data. The -e option can be specified, but it is ignored and is forced on. The metadata that is restored contains the disk addresses of the user data. This means that all stripe groups that contain user data must be left completely intact. 
Therefore, all thin provision unmap work is skipped for all stripegroups that can contain user data. The following statements apply to both managed and unmanaged file systems. The metadata and journal stripe groups are remade from scratch. This allows the underlying storage on these stripe groups to be replaced and stripe group attributes to be changed. Metadata stripe groups can be converted to data stripe groups. New stripe groups can be added. The journal stripe group can change. WARNING: It is highly recommended that Quantum Technical Support be contacted before using this option. If used improperly, data could be lost or corrupted. -T Normally on Linux, cvmkfs opens all devices in the configuration file to check for thin provisioned devices. This is done to unmap/trim any prior mappings on those devices that are eliminated by this cvmkfs command. This "thin provision work" can be bypassed using the -T option. -U Do a check for disks that are included in the file system that is being made to see if they are currently in use in another file system that is visible to the cluster. In some configurations, this may take a long time. If there are disks in use, the operation is aborted. -X Use expert mode to automatically answer all prompts for verification. This is useful for running cvmkfs as part of a script or automated test. The failure option can be used instead, but with the failure option no configuration transformation validation is done and is therefore not recommended. With the -X option, all of the normal checks are performed and if an error is detected, the command exits with an appropriate message and status. FILES /Library/Logs/Xsan/data/* SEE ALSO cvfs(8), snfs_config(5) Xsan File System January 2018 CVMKFS(8)
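The restore-related options above can be combined; a hypothetical invocation for a managed file system whose metadata disks were replaced but whose user-data disks are intact (the volume name "snfs1" is illustrative) might look like:

```shell
# Remake the file system in preparation for a metadata-archive restore,
# preserving user file extents (-e) so files need not be re-staged from
# backup. cvmkfs will prompt for confirmation:
cvmkfs -r -e snfs1

# Scripted/automated variant: -G skips "Press return" prompts and -X
# auto-answers the verification prompts while keeping all normal checks:
cvmkfs -G -X -r -e snfs1
```

As the man page warns, contact Quantum Technical Support before attempting either form on a production volume.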
| null |
postmap
|
The postmap(1) command creates or queries one or more Postfix lookup tables, or updates an existing one. The input and output file formats are expected to be compatible with: makemap file_type file_name < file_name If the result files do not exist they will be created with the same group and other read permissions as their source file. While the table update is in progress, signal delivery is postponed, and an exclusive, advisory, lock is placed on the entire table, in order to avoid surprises in spectator processes. INPUT FILE FORMAT The format of a lookup table input file is as follows: • A table entry has the form key whitespace value • Empty lines and whitespace-only lines are ignored, as are lines whose first non-whitespace character is a `#'. • A logical line starts with non-whitespace text. A line that starts with whitespace continues a logical line. The key and value are processed as is, except that surrounding white space is stripped off. Whitespace in lookup keys is supported as of Postfix 3.2. When the key specifies email address information, the localpart should be enclosed with double quotes if required by RFC 5322. For example, an address localpart that contains ";", or a localpart that starts or ends with ".". By default the lookup key is mapped to lowercase to make the lookups case insensitive; as of Postfix 2.3 this case folding happens only with tables whose lookup keys are fixed-case strings such as btree:, dbm: or hash:. With earlier versions, the lookup key is folded even with tables where a lookup field can match both upper and lower case text, such as regexp: and pcre:. This resulted in loss of information with $number substitutions. COMMAND-LINE ARGUMENTS -b Enable message body query mode. When reading lookup keys from standard input with "-q -", process the input as if it is an email message in RFC 5322 format. Each line of body content becomes one lookup key. 
By default, the -b option starts generating lookup keys at the first non-header line, and stops when the end of the message is reached. To simulate body_checks(5) processing, enable MIME parsing with -m. With this, the -b option generates no body-style lookup keys for attachment MIME headers and for attached message/* headers. NOTE: with "smtputf8_enable = yes", the -b option disables UTF-8 syntax checks on query keys and lookup results. Specify the -U option to force UTF-8 syntax checks anyway. This feature is available in Postfix version 2.6 and later. -c config_dir Read the main.cf configuration file in the named directory instead of the default configuration directory. -d key Search the specified maps for key and remove one entry per map. The exit status is zero when the requested information was found. If a key value of - is specified, the program reads key values from the standard input stream. The exit status is zero when at least one of the requested keys was found. -f Do not fold the lookup key to lower case while creating or querying a table. With Postfix version 2.3 and later, this option has no effect for regular expression tables. There, case folding is controlled by appending a flag to a pattern. -h Enable message header query mode. When reading lookup keys from standard input with "-q -", process the input as if it is an email message in RFC 5322 format. Each logical header line becomes one lookup key. A multi-line header becomes one lookup key with one or more embedded newline characters. By default, the -h option generates lookup keys until the first non-header line is reached. To simulate header_checks(5) processing, enable MIME parsing with -m. With this, the -h option also generates header-style lookup keys for attachment MIME headers and for attached message/* headers. NOTE: with "smtputf8_enable = yes", the -h option disables UTF-8 syntax checks on query keys and lookup results. 
Specify the -U option to force UTF-8 syntax checks anyway. This feature is available in Postfix version 2.6 and later. -i Incremental mode. Read entries from standard input and do not truncate an existing database. By default, postmap(1) creates a new database from the entries in file_name. -m Enable MIME parsing with "-b" and "-h". This feature is available in Postfix version 2.6 and later. -N Include the terminating null character that terminates lookup keys and values. By default, postmap(1) does whatever is the default for the host operating system. -n Don't include the terminating null character that terminates lookup keys and values. By default, postmap(1) does whatever is the default for the host operating system. -o Do not release root privileges when processing a non-root input file. By default, postmap(1) drops root privileges and runs as the source file owner instead. -p Do not inherit the file access permissions from the input file when creating a new file. Instead, create a new file with default access permissions (mode 0644). -q key Search the specified maps for key and write the first value found to the standard output stream. The exit status is zero when the requested information was found. If a key value of - is specified, the program reads key values from the standard input stream and writes one line of key value output for each key that was found. The exit status is zero when at least one of the requested keys was found. -r When updating a table, do not complain about attempts to update existing entries, and make those updates anyway. -s Retrieve all database elements, and write one line of key value output for each element. The elements are printed in database order, which is not necessarily the same as the original input order. This feature is available in Postfix version 2.2 and later, and is not available for all database types. -u Disable UTF-8 support. UTF-8 support is enabled by default when "smtputf8_enable = yes". 
It requires that keys and values are valid UTF-8 strings. -U With "smtputf8_enable = yes", force UTF-8 syntax checks with the -b and -h options. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. -w When updating a table, do not complain about attempts to update existing entries, and ignore those attempts. Arguments: file_type The database type. To find out what types are supported, use the "postconf -m" command. The postmap(1) command can query any supported file type, but it can create only the following file types: btree The output file is a btree file, named file_name.db. This is available on systems with support for db databases. cdb The output consists of one file, named file_name.cdb. This is available on systems with support for cdb databases. dbm The output consists of two files, named file_name.pag and file_name.dir. This is available on systems with support for dbm databases. hash The output file is a hashed file, named file_name.db. This is available on systems with support for db databases. fail A table that reliably fails all requests. The lookup table name is used for logging only. This table exists to simplify Postfix error tests. sdbm The output consists of two files, named file_name.pag and file_name.dir. This is available on systems with support for sdbm databases. When no file_type is specified, the software uses the database type specified via the default_database_type configuration parameter. file_name The name of the lookup table source file when rebuilding a database. DIAGNOSTICS Problems are logged to the standard error stream and to syslogd(8). No output means that no problems were detected. Duplicate entries are skipped and are flagged with a warning. postmap(1) terminates with zero exit status in case of success (including successful "postmap -q" lookup) and terminates with non-zero exit status in case of failure. 
ENVIRONMENT MAIL_CONFIG Directory with Postfix configuration files. MAIL_VERBOSE Enable verbose logging for debugging purposes. CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. The text below provides only a parameter summary. See postconf(5) for more details including examples. berkeley_db_create_buffer_size (16777216) The per-table I/O buffer size for programs that create Berkeley DB hash or btree tables. berkeley_db_read_buffer_size (131072) The per-table I/O buffer size for programs that read Berkeley DB hash or btree tables. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. default_database_type (see 'postconf -d' output) The default database type for use in newaliases(1), postalias(1) and postmap(1) commands. import_environment (see 'postconf -d' output) The list of environment parameters that a privileged Postfix process will import from a non-Postfix parent process, or name=value environment overrides. smtputf8_enable (yes) Enable preliminary SMTPUTF8 support for the protocols described in RFC 6531..6533. syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". SEE ALSO postalias(1), create/update/query alias database postconf(1), supported database types postconf(5), configuration parameters syslogd(8), system logging README FILES Use "postconf readme_directory" or "postconf html_directory" to locate this information. DATABASE_README, Postfix lookup table overview LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTMAP(1)
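The table-building and query behavior described above can be sketched with a small example (the file name and keys are illustrative):

```shell
# Source file: "key whitespace value" lines; '#' lines are comments.
cat > virtual <<'EOF'
# alias table
info@example.com    joe
sales@example.com   jane
EOF

postmap hash:virtual                        # builds virtual.db next to the source
postmap -q info@example.com hash:virtual    # writes the value ("joe") to stdout
postmap -d sales@example.com hash:virtual   # removes one entry per map
```

Because the -q and -d forms exit zero only when the key was found, they are convenient in shell conditionals.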
|
postmap - Postfix lookup table management
|
postmap [-NbfhimnoprsuUvw] [-c config_dir] [-d key] [-q key] [file_type:]file_name ...
| null | null |
postdrop
|
The postdrop(1) command creates a file in the maildrop directory and copies its standard input to the file. Options: -c config_dir The main.cf configuration file is in the named directory instead of the default configuration directory. See also the MAIL_CONFIG environment setting below. -r Use a Postfix-internal protocol for reading the message from standard input, and for reporting status information on standard output. This is currently the only supported method. -v Enable verbose logging for debugging purposes. Multiple -v options make the software increasingly verbose. As of Postfix 2.3, this option is available for the super-user only. SECURITY The command is designed to run with set-group ID privileges, so that it can write to the maildrop queue directory and so that it can connect to Postfix daemon processes. DIAGNOSTICS Fatal errors: malformed input, I/O error, out of memory. Problems are logged to syslogd(8) and to the standard error stream. When the input is incomplete, or when the process receives a HUP, INT, QUIT or TERM signal, the queue file is deleted. ENVIRONMENT MAIL_CONFIG Directory with the main.cf file. In order to avoid exploitation of set-group ID privileges, a non-standard directory is allowed only if: • The name is listed in the standard main.cf file with the alternate_config_directories configuration parameter. • The command is invoked by the super-user. CONFIGURATION PARAMETERS The following main.cf parameters are especially relevant to this program. The text below provides only a parameter summary. See postconf(5) for more details including examples. alternate_config_directories (empty) A list of non-default Postfix configuration directories that may be specified with "-c config_directory" on the command line, or via the MAIL_CONFIG environment parameter. config_directory (see 'postconf -d' output) The default location of the Postfix main.cf and master.cf configuration files. 
import_environment (see 'postconf -d' output) The list of environment parameters that a Postfix process will import from a non-Postfix parent process. queue_directory (see 'postconf -d' output) The location of the Postfix top-level queue directory. syslog_facility (mail) The syslog facility of Postfix logging. syslog_name (see 'postconf -d' output) A prefix that is prepended to the process name in syslog records, so that, for example, "smtpd" becomes "prefix/smtpd". trigger_timeout (10s) The time limit for sending a trigger to a Postfix daemon (for example, the pickup(8) or qmgr(8) daemon). Available in Postfix version 2.2 and later: authorized_submit_users (static:anyone) List of users who are authorized to submit mail with the sendmail(1) command (and with the privileged postdrop(1) helper command). FILES /var/spool/postfix/maildrop, maildrop queue SEE ALSO sendmail(1), compatibility interface postconf(5), configuration parameters syslogd(8), system logging LICENSE The Secure Mailer license must be distributed with this software. AUTHOR(S) Wietse Venema IBM T.J. Watson Research P.O. Box 704 Yorktown Heights, NY 10598, USA Wietse Venema Google, Inc. 111 8th Avenue New York, NY 10011, USA POSTDROP(1)
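Because postdrop(1) speaks a Postfix-internal protocol, it is not normally run by hand; it is invoked by the set-group-ID sendmail(1) compatibility interface when an unprivileged user submits mail. A hypothetical submission that exercises it (the address is illustrative):

```shell
# sendmail(1) runs the postdrop helper, which writes the queue file into
# the maildrop directory (/var/spool/postfix/maildrop):
printf 'To: user@example.com\nSubject: test\n\nhello\n' | sendmail -t
```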
|
postdrop - Postfix mail posting utility
|
postdrop [-rv] [-c config_dir]
| null | null |
cupsenable
|
cupsenable starts the named printers or classes while cupsdisable stops the named printers or classes.
|
cupsdisable, cupsenable - stop/start printers and classes
|
cupsdisable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ -r reason ] [ --hold ] destination(s) cupsenable [ -E ] [ -U username ] [ -c ] [ -h server[:port] ] [ --release ] destination(s)
|
The following options may be used: -E Forces encryption of the connection to the server. -U username Uses the specified username when connecting to the server. -c Cancels all jobs on the named destination. -h server[:port] Uses the specified server and port. --hold Holds remaining jobs on the named printer. Useful for allowing the current job to complete before performing maintenance. -r "reason" Sets the message associated with the stopped state. If no reason is specified then the message is set to "Reason Unknown". --release Releases pending jobs for printing. Use after running cupsdisable with the --hold option to resume printing. CONFORMING TO Unlike the System V printing system, CUPS allows printer names to contain any printable character except SPACE, TAB, "/", or "#". Also, printer and class names are not case-sensitive. The System V versions of these commands are disable and enable, respectively. They have been renamed to avoid conflicts with the bash(1) built-in commands of the same names. The CUPS versions of disable and enable may ask the user for an access password depending on the printing system configuration. This differs from the System V versions which require the root user to execute these commands. SEE ALSO cupsaccept(8), cupsreject(8), cancel(1), lp(1), lpadmin(8), lpstat(1), CUPS Online Help (http://localhost:631/help) COPYRIGHT Copyright © 2007-2019 by Apple Inc. 26 April 2019 CUPS cupsenable(8)
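A hypothetical maintenance cycle tying these options together (the destination name "LabPrinter" is illustrative):

```shell
# Stop the queue with a reason, letting the current job finish and
# holding the remaining jobs:
cupsdisable -r "replacing toner" --hold LabPrinter

# ...perform maintenance...

# Restart the queue and release the held jobs for printing:
cupsenable --release LabPrinter
```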
| null |