diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/index.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3e3864f34212d213b92538a0f3fda8d1c7d71b0
--- /dev/null
+++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/index.md
@@ -0,0 +1,65 @@
+
+
+## ZooKeeper: Because Coordinating Distributed Systems is a Zoo
+
+ZooKeeper is a high-performance coordination service for
+distributed applications. It exposes common services - such as
+naming, configuration management, synchronization, and group
+services - in a simple interface so you don't have to write them
+from scratch. You can use it off-the-shelf to implement
+consensus, group management, leader election, and presence
+protocols. And you can build on it for your own, specific needs.
+
+The following documents describe concepts and procedures to get
+you started using ZooKeeper. If you have more questions, please
+ask the [mailing list](http://zookeeper.apache.org/mailing_lists.html) or browse the
+archives.
+
++ **ZooKeeper Overview**
+  Technical Overview Documents for Client Developers, Administrators, and Contributors
+    + [Overview](zookeeperOver.html) - a bird's eye view of ZooKeeper, including design concepts and architecture
+    + [Getting Started](zookeeperStarted.html) - a tutorial-style guide for developers to install, run, and program to ZooKeeper
+    + [Release Notes](releasenotes.html) - new developer and user facing features, improvements, and incompatibilities
++ **Developers**
+  Documents for Developers using the ZooKeeper Client API
+    + [API Docs](apidocs/zookeeper-server/index.html) - the technical reference to ZooKeeper Client APIs
+    + [Programmer's Guide](zookeeperProgrammers.html) - a client application developer's guide to ZooKeeper
+    + [ZooKeeper Use Cases](zookeeperUseCases.html) - a series of use cases for ZooKeeper
+    + [ZooKeeper Java Example](javaExample.html) - a simple ZooKeeper client application, written in Java
+    + [Barrier and Queue Tutorial](zookeeperTutorial.html) - sample implementations of barriers and queues
+    + [ZooKeeper Recipes](recipes.html) - higher level solutions to common problems in distributed applications
++ **Administrators & Operators**
+  Documents for Administrators and Operations Engineers of ZooKeeper Deployments
+    + [Administrator's Guide](zookeeperAdmin.html) - a guide for system administrators and anyone else who might deploy ZooKeeper
+    + [Quota Guide](zookeeperQuotas.html) - a guide for system administrators on Quotas in ZooKeeper
+    + [Snapshot and Restore Guide](zookeeperSnapshotAndRestore.html) - a guide for system administrators on taking snapshots of and restoring ZooKeeper
+    + [JMX](zookeeperJMX.html) - how to enable JMX in ZooKeeper
+    + [Hierarchical Quorums](zookeeperHierarchicalQuorums.html) - a guide on how to use hierarchical quorums
+    + [Oracle Quorum](zookeeperOracleQuorums.html) - an introduction to the Oracle Quorum, which increases the availability of a two-instance ZooKeeper cluster by adding a failure detector
+ + [Observers](zookeeperObservers.html) - non-voting ensemble members that easily improve ZooKeeper's scalability + + [Dynamic Reconfiguration](zookeeperReconfig.html) - a guide on how to use dynamic reconfiguration in ZooKeeper + + [ZooKeeper CLI](zookeeperCLI.html) - a guide on how to use the ZooKeeper command line interface + + [ZooKeeper Tools](zookeeperTools.html) - a guide on how to use a series of tools for ZooKeeper + + [ZooKeeper Monitor](zookeeperMonitor.html) - a guide on how to monitor the ZooKeeper + + [Audit Logging](zookeeperAuditLogs.html) - a guide on how to configure audit logs in ZooKeeper Server and what contents are logged. ++ **Contributors** + Documents for Developers Contributing to the ZooKeeper Open Source Project + + [ZooKeeper Internals](zookeeperInternals.html) - assorted topics on the inner workings of ZooKeeper ++ **Miscellaneous ZooKeeper Documentation** + + [Wiki](https://cwiki.apache.org/confluence/display/ZOOKEEPER) + + [FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ) + diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/recipes.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/recipes.md new file mode 100644 index 0000000000000000000000000000000000000000..9d3dec55c62393bb3a7fe2c45df43a7468b0267f --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/recipes.md @@ -0,0 +1,416 @@ + + +# ZooKeeper Recipes and Solutions + +* [A Guide to Creating Higher-level Constructs with ZooKeeper](#ch_recipes) + * [Important Note About Error Handling](#sc_recipes_errorHandlingNote) + * [Out of the Box Applications: Name Service, Configuration, Group Membership](#sc_outOfTheBox) + * [Barriers](#sc_recipes_eventHandles) + * [Double Barriers](#sc_doubleBarriers) + * [Queues](#sc_recipes_Queues) + * [Priority Queues](#sc_recipes_priorityQueues) + * [Locks](#sc_recipes_Locks) + * [Recoverable Errors and the GUID](#sc_recipes_GuidNote) + * [Shared Locks](#Shared+Locks) + * [Revocable Shared Locks](#sc_revocableSharedLocks) + * [Two-phased Commit](#sc_recipes_twoPhasedCommit) + * [Leader Election](#sc_leaderElection) + + + +## A Guide to Creating Higher-level Constructs with ZooKeeper + +In this article, you'll find guidelines for using +ZooKeeper to implement higher order functions. All of them are conventions +implemented at the client and do not require special support from +ZooKeeper. Hopefully the community will capture these conventions in client-side libraries +to ease their use and to encourage standardization. + +One of the most interesting things about ZooKeeper is that even +though ZooKeeper uses _asynchronous_ notifications, you +can use it to build _synchronous_ consistency +primitives, such as queues and locks. As you will see, this is possible +because ZooKeeper imposes an overall order on updates, and has mechanisms +to expose this ordering. + +Note that the recipes below attempt to employ best practices. In +particular, they avoid polling, timers or anything else that would result +in a "herd effect", causing bursts of traffic and limiting +scalability. + +There are many useful functions that can be imagined that aren't +included here - revocable read-write priority locks, as just one example. 
+And some of the constructs mentioned here - locks, in particular - +illustrate certain points, even though you may find other constructs, such +as event handles or queues, a more practical means of performing the same +function. In general, the examples in this section are designed to +stimulate thought. + + + +### Important Note About Error Handling + +When implementing the recipes you must handle recoverable exceptions +(see the [FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/FAQ)). In +particular, several of the recipes employ sequential ephemeral +nodes. When creating a sequential ephemeral node there is an error case in +which the create() succeeds on the server but the server crashes before +returning the name of the node to the client. When the client reconnects its +session is still valid and, thus, the node is not removed. The implication is +that it is difficult for the client to know if its node was created or not. The +recipes below include measures to handle this. + + + +### Out of the Box Applications: Name Service, Configuration, Group Membership + +Name service and configuration are two of the primary applications +of ZooKeeper. These two functions are provided directly by the ZooKeeper +API. + +Another function directly provided by ZooKeeper is _group +membership_. The group is represented by a node. Members of the +group create ephemeral nodes under the group node. Nodes of the members +that fail abnormally will be removed automatically when ZooKeeper detects +the failure. + + + +### Barriers + +Distributed systems use _barriers_ +to block processing of a set of nodes until a condition is met +at which time all the nodes are allowed to proceed. Barriers are +implemented in ZooKeeper by designating a barrier node. The +barrier is in place if the barrier node exists. Here's the +pseudo code: + +1. Client calls the ZooKeeper API's **exists()** function on the barrier node, with + _watch_ set to true. +1. If **exists()** returns false, the + barrier is gone and the client proceeds +1. Else, if **exists()** returns true, + the clients wait for a watch event from ZooKeeper for the barrier + node. +1. When the watch event is triggered, the client reissues the + **exists( )** call, again waiting until + the barrier node is removed. + + + +#### Double Barriers + +Double barriers enable clients to synchronize the beginning and +the end of a computation. When enough processes have joined the barrier, +processes start their computation and leave the barrier once they have +finished. This recipe shows how to use a ZooKeeper node as a +barrier. + +The pseudo code in this recipe represents the barrier node as +_b_. Every client process _p_ +registers with the barrier node on entry and unregisters when it is +ready to leave. A node registers with the barrier node via the **Enter** procedure below, it waits until +_x_ client process register before proceeding with +the computation. (The _x_ here is up to you to +determine for your system.) + +| **Enter** | **Leave** | +|-----------------------------------|-------------------------------| +| 1. Create a name __n_ = _b_+“/”+_p__ | 1. **L = getChildren(b, false)** | +| 2. Set watch: **exists(_b_ + ‘‘/ready’’, true)** | 2. if no children, exit | +| 3. Create child: **create(_n_, EPHEMERAL)** | 3. if _p_ is only process node in L, delete(n) and exit | +| 4. **L = getChildren(b, false)** | 4. if _p_ is the lowest process node in L, wait on highest process node in L | +| 5. if fewer children in L than_x_, wait for watch event | 5. 
else **delete(_n_)**if still exists and wait on lowest process node in L | +| 6. else **create(b + ‘‘/ready’’, REGULAR)** | 6. goto 1 | + +On entering, all processes watch on a ready node and +create an ephemeral node as a child of the barrier node. Each process +but the last enters the barrier and waits for the ready node to appear +at line 5. The process that creates the xth node, the last process, will +see x nodes in the list of children and create the ready node, waking up +the other processes. Note that waiting processes wake up only when it is +time to exit, so waiting is efficient. + +On exit, you can't use a flag such as _ready_ +because you are watching for process nodes to go away. By using +ephemeral nodes, processes that fail after the barrier has been entered +do not prevent correct processes from finishing. When processes are +ready to leave, they need to delete their process nodes and wait for all +other processes to do the same. + +Processes exit when there are no process nodes left as children of +_b_. However, as an efficiency, you can use the +lowest process node as the ready flag. All other processes that are +ready to exit watch for the lowest existing process node to go away, and +the owner of the lowest process watches for any other process node +(picking the highest for simplicity) to go away. This means that only a +single process wakes up on each node deletion except for the last node, +which wakes up everyone when it is removed. + + + +### Queues + +Distributed queues are a common data structure. To implement a +distributed queue in ZooKeeper, first designate a znode to hold the queue, +the queue node. The distributed clients put something into the queue by +calling create() with a pathname ending in "queue-", with the +_sequence_ and _ephemeral_ flags in +the create() call set to true. Because the _sequence_ +flag is set, the new pathname will have the form +_path-to-queue-node_/queue-X, where X is a monotonic increasing number. A +client that wants to be removed from the queue calls ZooKeeper's **getChildren( )** function, with +_watch_ set to true on the queue node, and begins +processing nodes with the lowest number. The client does not need to issue +another **getChildren( )** until it exhausts +the list obtained from the first **getChildren( +)** call. If there are no children in the queue node, the +reader waits for a watch notification to check the queue again. + +###### Note +>There now exists a Queue implementation in ZooKeeper +recipes directory. This is distributed with the release -- +zookeeper-recipes/zookeeper-recipes-queue directory of the release artifact. + + + +#### Priority Queues + +To implement a priority queue, you need only make two simple +changes to the generic [queue +recipe](#sc_recipes_Queues) . First, to add to a queue, the pathname ends with +"queue-YY" where YY is the priority of the element with lower numbers +representing higher priority (just like UNIX). Second, when removing +from the queue, a client uses an up-to-date children list meaning that +the client will invalidate previously obtained children lists if a watch +notification triggers for the queue node. + + + +### Locks + +Fully distributed locks that are globally synchronous, meaning at +any snapshot in time no two clients think they hold the same lock. These +can be implemented using ZooKeeper. As with priority queues, first define +a lock node. + +###### Note +>There now exists a Lock implementation in ZooKeeper +recipes directory. 
This is distributed with the release -- +zookeeper-recipes/zookeeper-recipes-lock directory of the release artifact. + +Clients wishing to obtain a lock do the following: + +1. Call **create( )** with a pathname + of "_locknode_/guid-lock-" and the _sequence_ and + _ephemeral_ flags set. The _guid_ + is needed in case the create() result is missed. See the note below. +1. Call **getChildren( )** on the lock + node _without_ setting the watch flag (this is + important to avoid the herd effect). +1. If the pathname created in step **1** has the lowest sequence number suffix, the + client has the lock and the client exits the protocol. +1. The client calls **exists( )** with + the watch flag set on the path in the lock directory with the next + lowest sequence number. +1. if **exists( )** returns null, go + to step **2**. Otherwise, wait for a + notification for the pathname from the previous step before going to + step **2**. + +The unlock protocol is very simple: clients wishing to release a +lock simply delete the node they created in step 1. + +Here are a few things to notice: + +* The removal of a node will only cause one client to wake up + since each node is watched by exactly one client. In this way, you + avoid the herd effect. + +* There is no polling or timeouts. + +* Because of the way you implement locking, it is easy to see the + amount of lock contention, break locks, debug locking problems, + etc. + + + +#### Recoverable Errors and the GUID + +* If a recoverable error occurs calling **create()** the + client should call **getChildren()** and check for a node + containing the _guid_ used in the path name. + This handles the case (noted [above](#sc_recipes_errorHandlingNote)) of + the create() succeeding on the server but the server crashing before returning the name + of the new node. + + + +#### Shared Locks + +You can implement shared locks by with a few changes to the lock +protocol: + +| **Obtaining a read lock:** | **Obtaining a write lock:** | +|----------------------------|-----------------------------| +| 1. Call **create( )** to create a node with pathname "*guid-/read-*". This is the lock node use later in the protocol. Make sure to set both the _sequence_ and _ephemeral_ flags. | 1. Call **create( )** to create a node with pathname "*guid-/write-*". This is the lock node spoken of later in the protocol. Make sure to set both _sequence_ and _ephemeral_ flags. | +| 2. Call **getChildren( )** on the lock node _without_ setting the _watch_ flag - this is important, as it avoids the herd effect. | 2. Call **getChildren( )** on the lock node _without_ setting the _watch_ flag - this is important, as it avoids the herd effect. | +| 3. If there are no children with a pathname starting with "*write-*" and having a lower sequence number than the node created in step **1**, the client has the lock and can exit the protocol. | 3. If there are no children with a lower sequence number than the node created in step **1**, the client has the lock and the client exits the protocol. | +| 4. Otherwise, call **exists( )**, with _watch_ flag, set on the node in lock directory with pathname starting with "*write-*" having the next lowest sequence number. | 4. Call **exists( ),** with _watch_ flag set, on the node with the pathname that has the next lowest sequence number. | +| 5. If **exists( )** returns _false_, goto step **2**. | 5. If **exists( )** returns _false_, goto step **2**. Otherwise, wait for a notification for the pathname from the previous step before going to step **2**. 
|
+| 6. Otherwise, wait for a notification for the pathname from the previous step before going to step **2** | |
+
+Notes:
+
+* It might appear that this recipe creates a herd effect:
+  when there is a large group of clients waiting for a read
+  lock, all of them are notified more or less simultaneously
+  when the "*write-*" node with the lowest
+  sequence number is deleted. In fact, that's valid behavior:
+  all those waiting reader clients should be released, since
+  they all now hold the lock. The herd effect refers to releasing a
+  "herd" when in fact only a single or a small number of
+  machines can proceed.
+
+* See the [note for Locks](#sc_recipes_GuidNote) on how to use the guid in the node.
+
+
+
+#### Revocable Shared Locks
+
+With minor modifications to the Shared Lock protocol, you can make
+shared locks revocable:
+
+In step **1** of both the read lock and write lock protocols, call
+**getData( )** with _watch_ set, immediately after the
+call to **create( )**. If the client
+subsequently receives a notification for the node it created in step
+**1**, it does another **getData( )** on that node, with
+_watch_ set, and looks for the string "unlock", which
+signals to the client that it must release the lock. This works because,
+under this shared lock protocol, you can ask the client holding
+the lock to give it up by calling **setData()** on the lock node, writing "unlock" to that node.
+
+Note that this protocol requires the lock holder to consent to
+releasing the lock. Such consent is important, especially if the lock
+holder needs to do some processing before releasing the lock. Of course
+you can always implement _Revocable Shared Locks with Freaking
+Laser Beams_ by stipulating in your protocol that the revoker
+is allowed to delete the lock node if the lock holder has not deleted it
+after some length of time.
+
+
+
+### Two-phased Commit
+
+A two-phase commit protocol is an algorithm that lets all clients in
+a distributed system agree either to commit a transaction or to abort it.
+
+In ZooKeeper, you can implement a two-phased commit by having a
+coordinator create a transaction node, say "/app/Tx", and one child node
+per participating site, say "/app/Tx/s_i". When the coordinator creates a
+child node, it leaves the content undefined. Once each site involved in
+the transaction receives the transaction from the coordinator, the site
+reads each child node and sets a watch. Each site then processes the query
+and votes "commit" or "abort" by writing to its respective node. Once the
+write completes, the other sites are notified, and as soon as all sites
+have all the votes, they can decide either "abort" or "commit". Note that a
+node can decide "abort" earlier if some site votes for "abort". (A minimal
+sketch of this voting step appears below.)
+
+An interesting aspect of this implementation is that the only role
+of the coordinator is to decide upon the group of sites, to create the
+ZooKeeper nodes, and to propagate the transaction to the corresponding
+sites. In fact, even propagating the transaction can be done through
+ZooKeeper by writing it in the transaction node.
+
+There are two important drawbacks of the approach described above.
+One is the message complexity, which is O(n²). The second is the
+impossibility of detecting failures of sites through ephemeral nodes. To
+detect the failure of a site using ephemeral nodes, it is necessary that
+the site create the node.
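+
+The following is a minimal sketch of the voting step described above, using the
+standard ZooKeeper Java client. It is illustrative only: the paths ("/app/Tx",
+"/app/Tx/s_i"), the `TwoPhaseCommitSite` class name, and the use of plain string
+votes are assumptions made for the example, and error and session-event handling
+are omitted. The refinements discussed next change who creates the znodes and who
+watches them, not the basic voting idea.
+
+```java
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.data.Stat;
+
+public class TwoPhaseCommitSite {
+    private final ZooKeeper zk;
+    private final String txPath;   // e.g. "/app/Tx", created by the coordinator
+    private final String sitePath; // e.g. "/app/Tx/s_i", this site's own znode
+
+    public TwoPhaseCommitSite(ZooKeeper zk, String txPath, String sitePath) {
+        this.zk = zk;
+        this.txPath = txPath;
+        this.sitePath = sitePath;
+    }
+
+    // Vote by writing "commit" or "abort" into this site's znode.
+    public void vote(String decision) throws KeeperException, InterruptedException {
+        zk.setData(sitePath, decision.getBytes(), -1);
+    }
+
+    // Block until the outcome is known: false as soon as any site votes
+    // "abort", true once every site has voted "commit".
+    public boolean awaitOutcome() throws KeeperException, InterruptedException {
+        while (true) {
+            CountDownLatch changed = new CountDownLatch(1);
+            Watcher watcher = event -> changed.countDown();
+            List<String> sites = zk.getChildren(txPath, watcher);
+            boolean allCommitted = true;
+            for (String site : sites) {
+                byte[] data = zk.getData(txPath + "/" + site, watcher, new Stat());
+                String v = (data == null) ? "" : new String(data);
+                if (v.equals("abort")) {
+                    return false;          // a single "abort" decides the outcome
+                }
+                if (!v.equals("commit")) {
+                    allCommitted = false;  // this site has not voted yet
+                }
+            }
+            if (allCommitted) {
+                return true;
+            }
+            changed.await();               // wait for the next vote to be written
+        }
+    }
+}
+```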
+
+To solve the first problem, you can have only the coordinator
+notified of changes to the transaction nodes, and have it notify the sites
+once it reaches a decision. Note that this approach is scalable,
+but it is also slower, as it requires all communication to go through the
+coordinator.
+
+To address the second problem, you can have the coordinator
+propagate the transaction to the sites, and have each site create its
+own ephemeral node.
+
+
+
+### Leader Election
+
+A simple way of doing leader election with ZooKeeper is to use the
+**SEQUENCE|EPHEMERAL** flags when creating
+znodes that represent "proposals" of clients. The idea is to have a znode,
+say "/election", such that each client creates a child znode "/election/guid-n_"
+with both flags SEQUENCE|EPHEMERAL. With the sequence flag, ZooKeeper
+automatically appends a sequence number that is greater than any
+previously appended to a child of "/election". The process that created
+the znode with the smallest appended sequence number is the leader.
+
+That's not all, though. It is important to watch for failures of the
+leader, so that a new client arises as the new leader if the
+current leader fails. A trivial solution is to have all application
+processes watch the current smallest znode, and check whether they
+are the new leader when the smallest znode goes away (note that the
+smallest znode will go away if the leader fails because the node is
+ephemeral). But this causes a herd effect: upon a failure of the current
+leader, all other processes receive a notification and execute
+getChildren on "/election" to obtain the current list of children of
+"/election". If the number of clients is large, it causes a spike in the
+number of operations that ZooKeeper servers have to process. To avoid the
+herd effect, it is sufficient to watch the next znode down in the
+sequence of znodes. If a client receives a notification that the znode it
+is watching is gone, then it becomes the new leader in the case that there
+is no smaller znode. Note that this avoids the herd effect by not having
+all clients watching the same znode.
+
+Here's the pseudo code:
+
+Let ELECTION be a path of choice of the application. To volunteer to
+be a leader:
+
+1. Create znode z with path "ELECTION/guid-n_" with both SEQUENCE and
+  EPHEMERAL flags;
+1. Let C be the children of "ELECTION", and let i be the sequence
+  number of z;
+1. Watch for changes on "ELECTION/guid-n_j", where j is the largest
+  sequence number such that j < i and n_j is a znode in C;
+
+Upon receiving a notification of znode deletion:
+
+1. Let C be the new set of children of ELECTION;
+1. If z is the smallest node in C, then execute the leader
+  procedure;
+1. Otherwise, watch for changes on "ELECTION/guid-n_j", where j is the
+  largest sequence number such that j < i and n_j is a znode in C;
+
+Notes:
+
+* The fact that a znode has no preceding znode in the list of
+  children does not imply that the creator of this znode is aware that it is
+  the current leader. Applications may consider creating a separate znode
+  to acknowledge that the leader has executed the leader procedure.
+
+* See the [note for Locks](#sc_recipes_GuidNote) on how to use the guid in the node.
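+
+As with the other recipes, the pseudo code above translates fairly directly to
+the ZooKeeper Java client. The sketch below is an illustration only, not part of
+the recipe itself: the class name `LeaderElection`, a pre-created "/election"
+parent znode, and empty znode data are assumptions, the per-client guid prefix is
+reduced to a fixed "guid-" literal, and recoverable-error and session-event
+handling are omitted.
+
+```java
+import java.util.Comparator;
+import java.util.List;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs;
+import org.apache.zookeeper.ZooKeeper;
+
+public class LeaderElection implements Watcher {
+    private static final String ELECTION = "/election";
+    private final ZooKeeper zk;
+    private String myZnode;   // e.g. "/election/guid-n_0000000042"
+
+    public LeaderElection(ZooKeeper zk) {
+        this.zk = zk;
+    }
+
+    // Volunteer to be a leader: create an EPHEMERAL_SEQUENTIAL proposal znode.
+    public void volunteer() throws KeeperException, InterruptedException {
+        myZnode = zk.create(ELECTION + "/guid-n_", new byte[0],
+                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
+        checkLeadership();
+    }
+
+    // Either take leadership or watch only the znode immediately preceding ours,
+    // which is what avoids the herd effect described above.
+    private void checkLeadership() throws KeeperException, InterruptedException {
+        List<String> children = zk.getChildren(ELECTION, false);
+        // The appended sequence numbers are zero-padded, so comparing the suffix
+        // after the final '_' as a string orders the proposals by sequence number.
+        children.sort(Comparator.comparing(
+                (String name) -> name.substring(name.lastIndexOf('_') + 1)));
+        String me = myZnode.substring(ELECTION.length() + 1);
+        int i = children.indexOf(me);
+        if (i == 0) {
+            runLeaderProcedure();                  // smallest sequence number wins
+        } else {
+            String predecessor = ELECTION + "/" + children.get(i - 1);
+            if (zk.exists(predecessor, this) == null) {
+                checkLeadership();                 // predecessor vanished; re-check
+            }
+        }
+    }
+
+    @Override
+    public void process(WatchedEvent event) {
+        if (event.getType() == Event.EventType.NodeDeleted) {
+            try {
+                checkLeadership();
+            } catch (KeeperException | InterruptedException e) {
+                // handle or propagate as appropriate for the application
+            }
+        }
+    }
+
+    private void runLeaderProcedure() {
+        // execute the leader procedure
+    }
+}
+```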
+ + diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md new file mode 100644 index 0000000000000000000000000000000000000000..a27b6ffdd12640e449ede8d2c90417e96eef28a8 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/releasenotes.md @@ -0,0 +1,267 @@ + + +# ZooKeeper 3.0.0 Release Notes + +* [Migration Instructions when Upgrading to 3.0.0](#migration) + * [Migrating Client Code](#migration_code) + * [Watch Management](#Watch+Management) + * [Java API](#Java+API) + * [C API](#C+API) + * [Migrating Server Data](#migration_data) + * [Migrating Server Configuration](#migration_config) +* [Changes Since ZooKeeper 2.2.1](#changes) + +These release notes include new developer and user facing incompatibilities, features, and major improvements. + +* [Migration Instructions](#migration) +* [Changes](#changes) + + +## Migration Instructions when Upgrading to 3.0.0 +
+
+*You should only have to read this section if you are upgrading from a previous version of ZooKeeper to version 3.0.0; otherwise skip down to [changes](#changes).*
+
+A small number of changes in this release break backward compatibility for ZooKeeper client user code and server instance data. The following instructions provide details on how to migrate code and data from version 2.2.1 to version 3.0.0.
+
+Note: ZooKeeper increments the major version number (major.minor.fix) when backward incompatible changes are made to the source base. As part of the migration from SourceForge we changed the package structure (com.yahoo.zookeeper.* to org.apache.zookeeper.*) and felt it was a good time to incorporate some changes that we had been withholding. As a result the following will be required when migrating from version 2.2.1 to version 3.0.0 of ZooKeeper.
+
+* [Migrating Client Code](#migration_code)
+* [Migrating Server Data](#migration_data)
+* [Migrating Server Configuration](#migration_config)
+
+
+### Migrating Client Code
+
+The underlying client-server protocol has changed in version 3.0.0
+of ZooKeeper. As a result, clients must be upgraded along with
+serving clusters to ensure proper operation of the system (old
+pre-3.0.0 clients are not guaranteed to operate against upgraded
+3.0.0 servers and vice-versa).
+
+
+#### Watch Management
+
+In previous releases of ZooKeeper, any watches registered by clients were lost if the client lost its connection to a ZooKeeper server.
+This meant that developers had to track the watches they were interested in and reregister them if a session disconnect event was received.
+In this release the client library tracks the watches that a client has registered and reregisters them when a connection is made to a new server.
+Applications that still manually reregister interest should continue working properly as long as they are able to handle unsolicited watches.
+For example, an old application may register a watch for /foo and /goo, lose the connection, and reregister only /goo.
+As long as the application is able to receive (and probably ignore) a notification for /foo, it does not need to be changed.
+One caveat to the watch management: it is possible to miss an event for the creation and deletion of a znode if you are watching for creation and both the create and the delete happen while the client is disconnected from ZooKeeper.
+
+This release also allows clients to specify call-specific watch functions.
+This gives the developer the ability to modularize logic in different watch functions rather than cramming everything into the watch function attached to the ZooKeeper handle.
+Call-specific watch functions receive all session events for as long as they are active, but will only receive the watch callbacks for which they are registered.
+
+
+#### Java API
+
+1. The Java package structure has changed from **com.yahoo.zookeeper*** to **org.apache.zookeeper***. This will probably affect all of your Java code which makes use of ZooKeeper APIs (typically import statements)
+1. A number of constants used in the client ZooKeeper API were re-specified using enums (rather than ints). See [ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7), [ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132) and [ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139) for full details
+1. 
[ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18) removed KeeperStateChanged, use KeeperStateDisconnected instead
+
+Also see [the current Java API](http://zookeeper.apache.org/docs/current/apidocs/zookeeper-server/index.html)
+
+
+#### C API
+
+1. A number of constants used in the client ZooKeeper API were renamed in order to reduce namespace collision, see [ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6) for full details
+
+
+### Migrating Server Data
+
+The following issues resulted in changes to the on-disk data format (the snapshot and transaction log files contained within the ZK data directory) and require a migration utility to be run.
+
+* [ZOOKEEPER-27 Unique DB identifiers for servers and clients](https://issues.apache.org/jira/browse/ZOOKEEPER-27)
+* [ZOOKEEPER-32 CRCs for ZooKeeper data](https://issues.apache.org/jira/browse/ZOOKEEPER-32)
+* [ZOOKEEPER-33 Better ACL management](https://issues.apache.org/jira/browse/ZOOKEEPER-33)
+* [ZOOKEEPER-38 headers (version+) in log/snap files](https://issues.apache.org/jira/browse/ZOOKEEPER-38)
+
+**The following must be run once, and only once, when upgrading the ZooKeeper server instances to version 3.0.0.**
+
+###### Note
+> The *dataLogDir* and *dataDir* directories referenced below are specified by the *dataLogDir*
+ and *dataDir* settings in your ZooKeeper config file, respectively. *dataLogDir* defaults to
+ the value of *dataDir* if not specified explicitly in the ZooKeeper server config file (in which
+ case provide the same directory for both parameters to the upgrade utility).
+
+1. Shut down the ZooKeeper server cluster.
+1. Back up your *dataLogDir* and *dataDir* directories.
+1. Run the upgrade using
+    * `bin/zkServer.sh upgrade <dataLogDir> <dataDir>`
+
+    or
+
+    * `java -classpath pathtolog4j:pathtozookeeper.jar UpgradeMain <dataLogDir> <dataDir>`
+
+    where `<dataLogDir>` is the directory where all transaction logs (log.*) are stored and `<dataDir>` is the directory where all the snapshots (snapshot.*) are stored.
+1. Restart the cluster.
+
+If you have any failure during the upgrade procedure, keep reading to sanitize your database.
+
+This is how the upgrade works in ZooKeeper; understanding it will help you troubleshoot in case you have problems while upgrading:
+
+1. Upgrade moves files from `<dataLogDir>` and `<dataDir>` to `<dataLogDir>/version-1/` and `<dataDir>/version-1` respectively (the version-1 sub-directory is created by the upgrade utility).
+1. Upgrade creates new version sub-directories `<dataLogDir>/version-2` and `<dataDir>/version-2`.
+1. Upgrade reads the old database from `<dataLogDir>/version-1` and `<dataDir>/version-1` into memory and creates a new upgraded snapshot.
+1. Upgrade writes the new database in `<dataDir>/version-2`.
+
+Troubleshooting:
+
+1. In case you start ZooKeeper 3.0 without upgrading from 2.0 on a 2.0 database - the servers will start up with an empty database.
+   This is because the servers assume that `<dataLogDir>/version-2` and `<dataDir>/version-2` will have the database to start with. Since these will be empty
+   when no upgrade has been run, the servers will start with an empty database. In such a case, shut down the ZooKeeper servers, remove the version-2 directories (remember
+   this will lead to loss of updates made after you started 3.0)
+   and then start the upgrade procedure.
+1. If the upgrade fails while trying to rename files into the version-1 directory, you should try to move all the files under `<dataLogDir>/version-1`
+   and `<dataDir>/version-1` back to `<dataLogDir>` and `<dataDir>` respectively. Then try the upgrade again.
+1. If you do not wish to run with ZooKeeper 3.0 and prefer to run with ZooKeeper 2.0 and have already upgraded - you can run ZooKeeper 2 with
+   the `<dataLogDir>` and `<dataDir>` directories changed to `<dataLogDir>/version-1` and `<dataDir>/version-1`.
Remember that you will lose all the updates that you made after the upgrade. + + +### Migrating Server Configuration + +There is a significant change to the ZooKeeper server configuration file. + +The default election algorithm, specified by the *electionAlg* configuration attribute, has +changed from a default of *0* to a default of *3*. See +[Cluster Options](zookeeperAdmin.html#sc_clusterOptions) section of the administrators guide, specifically +the *electionAlg* and *server.X* properties. + +You will either need to explicitly set *electionAlg* to its previous default value +of *0* or change your *server.X* options to include the leader election port. + + + +## Changes Since ZooKeeper 2.2.1 + +Version 2.2.1 code, documentation, binaries, etc... are still accessible on [SourceForge](http://sourceforge.net/projects/zookeeper) + +| Issue | Notes | +|-------|-------| +|[ZOOKEEPER-43](https://issues.apache.org/jira/browse/ZOOKEEPER-43)|Server side of auto reset watches.| +|[ZOOKEEPER-132](https://issues.apache.org/jira/browse/ZOOKEEPER-132)|Create Enum to replace CreateFlag in ZooKepper.create method| +|[ZOOKEEPER-139](https://issues.apache.org/jira/browse/ZOOKEEPER-139)|Create Enums for WatcherEvent's KeeperState and EventType| +|[ZOOKEEPER-18](https://issues.apache.org/jira/browse/ZOOKEEPER-18)|keeper state inconsistency| +|[ZOOKEEPER-38](https://issues.apache.org/jira/browse/ZOOKEEPER-38)|headers in log/snap files| +|[ZOOKEEPER-8](https://issues.apache.org/jira/browse/ZOOKEEPER-8)|Stat enchaned to include num of children and size| +|[ZOOKEEPER-6](https://issues.apache.org/jira/browse/ZOOKEEPER-6)|List of problem identifiers in zookeeper.h| +|[ZOOKEEPER-7](https://issues.apache.org/jira/browse/ZOOKEEPER-7)|Use enums rather than ints for types and state| +|[ZOOKEEPER-27](https://issues.apache.org/jira/browse/ZOOKEEPER-27)|Unique DB identifiers for servers and clients| +|[ZOOKEEPER-32](https://issues.apache.org/jira/browse/ZOOKEEPER-32)|CRCs for ZooKeeper data| +|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management| +|[ZOOKEEPER-203](https://issues.apache.org/jira/browse/ZOOKEEPER-203)|fix datadir typo in releasenotes| +|[ZOOKEEPER-145](https://issues.apache.org/jira/browse/ZOOKEEPER-145)|write detailed release notes for users migrating from 2.x to 3.0| +|[ZOOKEEPER-23](https://issues.apache.org/jira/browse/ZOOKEEPER-23)|Auto reset of watches on reconnect| +|[ZOOKEEPER-191](https://issues.apache.org/jira/browse/ZOOKEEPER-191)|forrest docs for upgrade.| +|[ZOOKEEPER-201](https://issues.apache.org/jira/browse/ZOOKEEPER-201)|validate magic number when reading snapshot and transaction logs| +|[ZOOKEEPER-200](https://issues.apache.org/jira/browse/ZOOKEEPER-200)|the magic number for snapshot and log must be different| +|[ZOOKEEPER-199](https://issues.apache.org/jira/browse/ZOOKEEPER-199)|fix log messages in persistence code| +|[ZOOKEEPER-197](https://issues.apache.org/jira/browse/ZOOKEEPER-197)|create checksums for snapshots| +|[ZOOKEEPER-198](https://issues.apache.org/jira/browse/ZOOKEEPER-198)|apache license header missing from FollowerSyncRequest.java| +|[ZOOKEEPER-5](https://issues.apache.org/jira/browse/ZOOKEEPER-5)|Upgrade Feature in Zookeeper server.| +|[ZOOKEEPER-194](https://issues.apache.org/jira/browse/ZOOKEEPER-194)|Fix terminology in zookeeperAdmin.xml| +|[ZOOKEEPER-151](https://issues.apache.org/jira/browse/ZOOKEEPER-151)|Document change to server configuration| +|[ZOOKEEPER-193](https://issues.apache.org/jira/browse/ZOOKEEPER-193)|update java 
example doc to compile with latest zookeeper| +|[ZOOKEEPER-187](https://issues.apache.org/jira/browse/ZOOKEEPER-187)|CreateMode api docs missing| +|[ZOOKEEPER-186](https://issues.apache.org/jira/browse/ZOOKEEPER-186)|add new "releasenotes.xml" to forrest documentation| +|[ZOOKEEPER-190](https://issues.apache.org/jira/browse/ZOOKEEPER-190)|Reorg links to docs and navs to docs into related sections| +|[ZOOKEEPER-189](https://issues.apache.org/jira/browse/ZOOKEEPER-189)|forrest build not validated xml of input documents| +|[ZOOKEEPER-188](https://issues.apache.org/jira/browse/ZOOKEEPER-188)|Check that election port is present for all servers| +|[ZOOKEEPER-185](https://issues.apache.org/jira/browse/ZOOKEEPER-185)|Improved version of FLETest| +|[ZOOKEEPER-184](https://issues.apache.org/jira/browse/ZOOKEEPER-184)|tests: An explicit include directive is needed for the usage of memcpy functions| +|[ZOOKEEPER-183](https://issues.apache.org/jira/browse/ZOOKEEPER-183)|Array subscript is above array bounds in od_completion, src/cli.c.| +|[ZOOKEEPER-182](https://issues.apache.org/jira/browse/ZOOKEEPER-182)|zookeeper_init accepts empty host-port string and returns valid pointer to zhandle_t.| +|[ZOOKEEPER-17](https://issues.apache.org/jira/browse/ZOOKEEPER-17)|zookeeper_init doc needs clarification| +|[ZOOKEEPER-181](https://issues.apache.org/jira/browse/ZOOKEEPER-181)|Some Source Forge Documents did not get moved over: javaExample, zookeeperTutorial, zookeeperInternals| +|[ZOOKEEPER-180](https://issues.apache.org/jira/browse/ZOOKEEPER-180)|Placeholder sections needed in document for new topics that the umbrella jira discusses| +|[ZOOKEEPER-179](https://issues.apache.org/jira/browse/ZOOKEEPER-179)|Programmer's Guide "Basic Operations" section is missing content| +|[ZOOKEEPER-178](https://issues.apache.org/jira/browse/ZOOKEEPER-178)|FLE test.| +|[ZOOKEEPER-159](https://issues.apache.org/jira/browse/ZOOKEEPER-159)|Cover two corner cases of leader election| +|[ZOOKEEPER-156](https://issues.apache.org/jira/browse/ZOOKEEPER-156)|update programmer guide with acl details from old wiki page| +|[ZOOKEEPER-154](https://issues.apache.org/jira/browse/ZOOKEEPER-154)|reliability graph diagram in overview doc needs context| +|[ZOOKEEPER-157](https://issues.apache.org/jira/browse/ZOOKEEPER-157)|Peer can't find existing leader| +|[ZOOKEEPER-155](https://issues.apache.org/jira/browse/ZOOKEEPER-155)|improve "the zookeeper project" section of overview doc| +|[ZOOKEEPER-140](https://issues.apache.org/jira/browse/ZOOKEEPER-140)|Deadlock in QuorumCnxManager| +|[ZOOKEEPER-147](https://issues.apache.org/jira/browse/ZOOKEEPER-147)|This is version of the documents with most of the [tbd...] 
scrubbed out| +|[ZOOKEEPER-150](https://issues.apache.org/jira/browse/ZOOKEEPER-150)|zookeeper build broken| +|[ZOOKEEPER-136](https://issues.apache.org/jira/browse/ZOOKEEPER-136)|sync causes hang in all followers of quorum.| +|[ZOOKEEPER-134](https://issues.apache.org/jira/browse/ZOOKEEPER-134)|findbugs cleanup| +|[ZOOKEEPER-133](https://issues.apache.org/jira/browse/ZOOKEEPER-133)|hudson tests failing intermittently| +|[ZOOKEEPER-144](https://issues.apache.org/jira/browse/ZOOKEEPER-144)|add tostring support for watcher event, and enums for event type/state| +|[ZOOKEEPER-21](https://issues.apache.org/jira/browse/ZOOKEEPER-21)|Improve zk ctor/watcher| +|[ZOOKEEPER-142](https://issues.apache.org/jira/browse/ZOOKEEPER-142)|Provide Javadoc as to the maximum size of the data byte array that may be stored within a znode| +|[ZOOKEEPER-93](https://issues.apache.org/jira/browse/ZOOKEEPER-93)|Create Documentation for Zookeeper| +|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election| +|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events| +|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Old leader election can elect a dead leader over and over again| +|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process| +|[ZOOKEEPER-118](https://issues.apache.org/jira/browse/ZOOKEEPER-118)|findbugs flagged switch statement in followerrequestprocessor.run| +|[ZOOKEEPER-115](https://issues.apache.org/jira/browse/ZOOKEEPER-115)|Potential NPE in QuorumCnxManager| +|[ZOOKEEPER-114](https://issues.apache.org/jira/browse/ZOOKEEPER-114)|cleanup ugly event messages in zookeeper client| +|[ZOOKEEPER-112](https://issues.apache.org/jira/browse/ZOOKEEPER-112)|src/java/main ZooKeeper.java has test code embedded into it.| +|[ZOOKEEPER-39](https://issues.apache.org/jira/browse/ZOOKEEPER-39)|Use Watcher objects rather than boolean on read operations.| +|[ZOOKEEPER-97](https://issues.apache.org/jira/browse/ZOOKEEPER-97)|supports optional output directory in code generator.| +|[ZOOKEEPER-101](https://issues.apache.org/jira/browse/ZOOKEEPER-101)|Integrate ZooKeeper with "violations" feature on hudson| +|[ZOOKEEPER-105](https://issues.apache.org/jira/browse/ZOOKEEPER-105)|Catch Zookeeper exceptions and print on the stderr.| +|[ZOOKEEPER-42](https://issues.apache.org/jira/browse/ZOOKEEPER-42)|Change Leader Election to fast tcp.| +|[ZOOKEEPER-48](https://issues.apache.org/jira/browse/ZOOKEEPER-48)|auth_id now handled correctly when no auth ids present| +|[ZOOKEEPER-44](https://issues.apache.org/jira/browse/ZOOKEEPER-44)|Create sequence flag children with prefixes of 0's so that they can be lexicographically sorted.| +|[ZOOKEEPER-108](https://issues.apache.org/jira/browse/ZOOKEEPER-108)|Fix sync operation reordering on a Quorum.| +|[ZOOKEEPER-25](https://issues.apache.org/jira/browse/ZOOKEEPER-25)|Fuse module for Zookeeper.| +|[ZOOKEEPER-58](https://issues.apache.org/jira/browse/ZOOKEEPER-58)|Race condition on ClientCnxn.java| +|[ZOOKEEPER-56](https://issues.apache.org/jira/browse/ZOOKEEPER-56)|Add clover support to build.xml.| +|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|register the ZooKeeper mailing lists with nabble.com| +|[ZOOKEEPER-54](https://issues.apache.org/jira/browse/ZOOKEEPER-54)|remove sleeps in the tests.| +|[ZOOKEEPER-55](https://issues.apache.org/jira/browse/ZOOKEEPER-55)|build.xml fails to retrieve a 
release number from SVN and the ant target "dist" fails| +|[ZOOKEEPER-89](https://issues.apache.org/jira/browse/ZOOKEEPER-89)|invoke WhenOwnerListener.whenNotOwner when the ZK connection fails| +|[ZOOKEEPER-90](https://issues.apache.org/jira/browse/ZOOKEEPER-90)|invoke WhenOwnerListener.whenNotOwner when the ZK session expires and the znode is the leader| +|[ZOOKEEPER-82](https://issues.apache.org/jira/browse/ZOOKEEPER-82)|Make the ZooKeeperServer more DI friendly.| +|[ZOOKEEPER-110](https://issues.apache.org/jira/browse/ZOOKEEPER-110)|Build script relies on svnant, which is not compatible with subversion 1.5 working copies| +|[ZOOKEEPER-111](https://issues.apache.org/jira/browse/ZOOKEEPER-111)|Significant cleanup of existing tests.| +|[ZOOKEEPER-122](https://issues.apache.org/jira/browse/ZOOKEEPER-122)|Fix NPE in jute's Utils.toCSVString.| +|[ZOOKEEPER-123](https://issues.apache.org/jira/browse/ZOOKEEPER-123)|Fix the wrong class is specified for the logger.| +|[ZOOKEEPER-2](https://issues.apache.org/jira/browse/ZOOKEEPER-2)|Fix synchronization issues in QuorumPeer and FastLeader election.| +|[ZOOKEEPER-125](https://issues.apache.org/jira/browse/ZOOKEEPER-125)|Remove unwanted class declaration in FastLeaderElection.| +|[ZOOKEEPER-61](https://issues.apache.org/jira/browse/ZOOKEEPER-61)|Address in client/server test cases.| +|[ZOOKEEPER-75](https://issues.apache.org/jira/browse/ZOOKEEPER-75)|cleanup the library directory| +|[ZOOKEEPER-109](https://issues.apache.org/jira/browse/ZOOKEEPER-109)|cleanup of NPE and Resource issue nits found by static analysis| +|[ZOOKEEPER-76](https://issues.apache.org/jira/browse/ZOOKEEPER-76)|Commit 677109 removed the cobertura library, but not the build targets.| +|[ZOOKEEPER-63](https://issues.apache.org/jira/browse/ZOOKEEPER-63)|Race condition in client close| +|[ZOOKEEPER-70](https://issues.apache.org/jira/browse/ZOOKEEPER-70)|Add skeleton forrest doc structure for ZooKeeper| +|[ZOOKEEPER-79](https://issues.apache.org/jira/browse/ZOOKEEPER-79)|Document jacob's leader election on the wiki recipes page| +|[ZOOKEEPER-73](https://issues.apache.org/jira/browse/ZOOKEEPER-73)|Move ZK wiki from SourceForge to Apache| +|[ZOOKEEPER-72](https://issues.apache.org/jira/browse/ZOOKEEPER-72)|Initial creation/setup of ZooKeeper ASF site.| +|[ZOOKEEPER-71](https://issues.apache.org/jira/browse/ZOOKEEPER-71)|Determine what to do re ZooKeeper Changelog| +|[ZOOKEEPER-68](https://issues.apache.org/jira/browse/ZOOKEEPER-68)|parseACLs in ZooKeeper.java fails to parse elements of ACL, should be lastIndexOf rather than IndexOf| +|[ZOOKEEPER-130](https://issues.apache.org/jira/browse/ZOOKEEPER-130)|update build.xml to support apache release process.| +|[ZOOKEEPER-131](https://issues.apache.org/jira/browse/ZOOKEEPER-131)|Fix Old leader election can elect a dead leader over and over again.| +|[ZOOKEEPER-137](https://issues.apache.org/jira/browse/ZOOKEEPER-137)|client watcher objects can lose events| +|[ZOOKEEPER-117](https://issues.apache.org/jira/browse/ZOOKEEPER-117)|threading issues in Leader election| +|[ZOOKEEPER-128](https://issues.apache.org/jira/browse/ZOOKEEPER-128)|test coverage on async client operations needs to be improved| +|[ZOOKEEPER-127](https://issues.apache.org/jira/browse/ZOOKEEPER-127)|Use of non-standard election ports in config breaks services| +|[ZOOKEEPER-53](https://issues.apache.org/jira/browse/ZOOKEEPER-53)|tests failing on solaris.| +|[ZOOKEEPER-172](https://issues.apache.org/jira/browse/ZOOKEEPER-172)|FLE Test| 
+|[ZOOKEEPER-41](https://issues.apache.org/jira/browse/ZOOKEEPER-41)|Sample startup script| +|[ZOOKEEPER-33](https://issues.apache.org/jira/browse/ZOOKEEPER-33)|Better ACL management| +|[ZOOKEEPER-49](https://issues.apache.org/jira/browse/ZOOKEEPER-49)|SetACL does not work| +|[ZOOKEEPER-20](https://issues.apache.org/jira/browse/ZOOKEEPER-20)|Child watches are not triggered when the node is deleted| +|[ZOOKEEPER-15](https://issues.apache.org/jira/browse/ZOOKEEPER-15)|handle failure better in build.xml:test| +|[ZOOKEEPER-11](https://issues.apache.org/jira/browse/ZOOKEEPER-11)|ArrayList is used instead of List| +|[ZOOKEEPER-45](https://issues.apache.org/jira/browse/ZOOKEEPER-45)|Restructure the SVN repository after initial import | +|[ZOOKEEPER-1](https://issues.apache.org/jira/browse/ZOOKEEPER-1)|Initial ZooKeeper code contribution from Yahoo!| diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getMenu.js b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getMenu.js new file mode 100644 index 0000000000000000000000000000000000000000..6878b2653b86d3468fbd006de80997f887b23910 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/getMenu.js @@ -0,0 +1,45 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +/** + * This script, when included in a html file, can be used to make collapsible menus + * + * Typical usage: + * + */ + +if (document.getElementById){ + document.write('') +} + + +function SwitchMenu(obj, thePath) +{ +var open = 'url("'+thePath + 'chapter_open.gif")'; +var close = 'url("'+thePath + 'chapter.gif")'; + if(document.getElementById) { + var el = document.getElementById(obj); + var title = document.getElementById(obj+'Title'); + + if(el.style.display != "block"){ + title.style.backgroundImage = open; + el.style.display = "block"; + }else{ + title.style.backgroundImage = close; + el.style.display = "none"; + } + }// end - if(document.getElementById) +}//end - function SwitchMenu(obj) diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/menu.js b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/menu.js new file mode 100644 index 0000000000000000000000000000000000000000..06ea471dc57c073917d4619ae6d391203f68c65c --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/menu.js @@ -0,0 +1,48 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. 
+* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +/** + * This script, when included in a html file, can be used to make collapsible menus + * + * Typical usage: + * + */ + +if (document.getElementById){ + document.write('') +} + +function SwitchMenu(obj) +{ + if(document.getElementById) { + var el = document.getElementById(obj); + var title = document.getElementById(obj+'Title'); + + if(obj.indexOf("_selected_")==0&&el.style.display == ""){ + el.style.display = "block"; + title.className = "pagegroupselected"; + } + + if(el.style.display != "block"){ + el.style.display = "block"; + title.className = "pagegroupopen"; + } + else{ + el.style.display = "none"; + title.className = "pagegroup"; + } + }// end - if(document.getElementById) +}//end - function SwitchMenu(obj) diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css new file mode 100644 index 0000000000000000000000000000000000000000..aaa99319acdf30b5b6ce2ec9b6967c6ad704a27b --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/print.css @@ -0,0 +1,54 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. 
+*/ +body { + font-family: Georgia, Palatino, serif; + font-size: 12pt; + background: white; +} + +#tabs, +#menu, +#content .toc { + display: none; +} + +#content { + width: auto; + padding: 0; + float: none !important; + color: black; + background: inherit; +} + +a:link, a:visited { + color: #336699; + background: inherit; + text-decoration: underline; +} + +#top .logo { + padding: 0; + margin: 0 0 2em 0; +} + +#footer { + margin-top: 4em; +} + +acronym { + border: 0; +} \ No newline at end of file diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/screen.css b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/screen.css new file mode 100644 index 0000000000000000000000000000000000000000..9ce32c292dde34b6119c8cb64856bdbf109faf63 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/skin/screen.css @@ -0,0 +1,531 @@ +/* +* Licensed to the Apache Software Foundation (ASF) under one or more +* contributor license agreements. See the NOTICE file distributed with +* this work for additional information regarding copyright ownership. +* The ASF licenses this file to You under the Apache License, Version 2.0 +* (the "License"); you may not use this file except in compliance with +* the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +body { margin: 0px 0px 0px 0px; font-family: Verdana, Helvetica, sans-serif; } + +h1 { font-size : 160%; margin: 0px 0px 0px 0px; padding: 0px; } +h2 { font-size : 140%; margin: 1em 0px 0.8em 0px; padding: 0px; font-weight : bold;} +h3 { font-size : 130%; margin: 0.8em 0px 0px 0px; padding: 0px; font-weight : bold; } +.h3 { margin: 22px 0px 3px 0px; } +h4 { font-size : 120%; margin: 0.7em 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; } +.h4 { margin: 18px 0px 0px 0px; } +h4.faq { font-size : 120%; margin: 18px 0px 0px 0px; padding: 0px; font-weight : bold; text-align: left; } +h5 { font-size : 100%; margin: 14px 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; } + +/** +* table +*/ +table .title { background-color: #000000; } +.ForrestTable { + color: #ffffff; + background-color: #7099C5; + width: 100%; + font-size : 100%; + empty-cells: show; +} +table caption { + padding-left: 5px; + color: white; + text-align: left; + font-weight: bold; + background-color: #000000; +} +.ForrestTable td { + color: black; + background-color: #f0f0ff; +} +.ForrestTable th { text-align: center; } +/** + * Page Header + */ + +#top { + position: relative; + float: left; + width: 100%; + background: #294563; /* if you want a background in the header, put it here */ +} + +#top .breadtrail { + background: #CFDCED; + color: black; + border-bottom: solid 1px white; + padding: 3px 10px; + font-size: 75%; +} +#top .breadtrail a { color: black; } + +#top .header { + float: left; + width: 100%; + background: url("header_white_line.gif") repeat-x bottom; +} + +#top .grouplogo { + padding: 7px 0 10px 10px; + float: left; + text-align: left; +} +#top .projectlogo { + padding: 7px 0 10px 10px; + float: left; + width: 33%; + text-align: right; 
+} +#top .projectlogoA1 { + padding: 7px 0 10px 10px; + float: right; +} +html>body #top .searchbox { + bottom: 0px; +} +#top .searchbox { + position: absolute; + right: 10px; + height: 42px; + font-size: 70%; + white-space: nowrap; + bottom: -1px; /* compensate for IE rendering issue */ + border-radius: 5px 5px 0px 0px; +} + +#top .searchbox form { + padding: 5px 10px; + margin: 0; +} +#top .searchbox p { + padding: 0 0 2px 0; + margin: 0; +} +#top .searchbox input { + font-size: 100%; +} + +#tabs { + clear: both; + padding-left: 10px; + margin: 0; + list-style: none; +} + +#tabs li { + float: left; + margin: 0 3px 0 0; + padding: 0; + border-radius: 5px 5px 0px 0px; +} + +/*background: url("tab-left.gif") no-repeat left top;*/ +#tabs li a { + float: left; + display: block; + font-family: verdana, arial, sans-serif; + text-decoration: none; + color: black; + white-space: nowrap; + padding: 5px 15px 4px; + width: .1em; /* IE/Win fix */ +} + +#tabs li a:hover { + + cursor: pointer; + text-decoration:underline; +} + +#tabs > li a { width: auto; } /* Rest of IE/Win fix */ + +/* Commented Backslash Hack hides rule from IE5-Mac \*/ +#tabs a { float: none; } +/* End IE5-Mac hack */ + +#top .header .current { + background-color: #4C6C8F; +} +#top .header .current a { + font-weight: bold; + padding-bottom: 5px; + color: white; +} +#publishedStrip { + padding-right: 10px; + padding-left: 20px; + padding-top: 3px; + padding-bottom:3px; + color: #ffffff; + font-size : 60%; + font-weight: bold; + background-color: #4C6C8F; + text-align:right; +} + +#level2tabs { +margin: 0; +float:left; +position:relative; + +} + + + +#level2tabs a:hover { + + cursor: pointer; + text-decoration:underline; + +} + +#level2tabs a{ + + cursor: pointer; + text-decoration:none; + background-image: url('chapter.gif'); + background-repeat: no-repeat; + background-position: center left; + padding-left: 6px; + margin-left: 6px; +} + +/* +* border-top: solid #4C6C8F 15px; +*/ +#main { + position: relative; + background: white; + clear:both; +} +#main .breadtrail { + clear:both; + position: relative; + background: #CFDCED; + color: black; + border-bottom: solid 1px black; + border-top: solid 1px black; + padding: 0px 180px; + font-size: 75%; + z-index:10; +} + +img.corner { + width: 15px; + height: 15px; + border: none; + display: block !important; +} + +img.cornersmall { + width: 5px; + height: 5px; + border: none; + display: block !important; +} +/** + * Side menu + */ +#menu a { font-weight: normal; text-decoration: none;} +#menu a:visited { font-weight: normal; } +#menu a:active { font-weight: normal; } +#menu a:hover { font-weight: normal; text-decoration:underline;} + +#menuarea { width:10em;} +#menu { + position: relative; + float: left; + width: 160px; + padding-top: 0px; + padding-bottom: 15px; + top:-18px; + left:10px; + z-index: 20; + background-color: #f90; + font-size : 70%; + border-radius: 0px 0px 15px 15px; +} + +.menutitle { + cursor:pointer; + padding: 3px 12px; + margin-left: 10px; + background-image: url('chapter.gif'); + background-repeat: no-repeat; + background-position: center left; + font-weight : bold; +} + +.menutitle.selected { + background-image: url('chapter_open.gif'); +} + +.menutitle:hover{text-decoration:underline;cursor: pointer;} + +#menu .menuitemgroup { + margin: 0px 0px 6px 8px; + padding: 0px; + font-weight : bold; } + +#menu .selectedmenuitemgroup{ + margin: 0px 0px 0px 8px; + padding: 0px; + font-weight : normal; + + } + +#menu .menuitem { + padding: 2px 0px 1px 13px; + background-image: 
url('page.gif'); + background-repeat: no-repeat; + background-position: center left; + font-weight : normal; + margin-left: 10px; +} + +#menu .selected { + font-style : normal; + margin-right: 10px; + +} +.menuitem .selected { + border-style: solid; + border-width: 1px; +} +#menu .menupageitemgroup { + padding: 3px 0px 4px 6px; + font-style : normal; + border-bottom: 1px solid ; + border-left: 1px solid ; + border-right: 1px solid ; + margin-right: 10px; +} +#menu .menupageitem { + font-style : normal; + font-weight : normal; + border-width: 0px; + font-size : 90%; +} +#menu .searchbox { + text-align: center; +} +#menu .searchbox form { + padding: 3px 3px; + margin: 0; +} +#menu .searchbox input { + font-size: 100%; +} + +#content { + padding: 20px 20px 20px 180px; + margin: 0; + font : small Verdana, Helvetica, sans-serif; + font-size : 80%; +} + +#content ul { + margin: 0; + padding: 0 25px; +} +#content li { + padding: 0 5px; +} +#feedback { + color: black; + background: #CFDCED; + text-align:center; + margin-top: 5px; +} +#feedback #feedbackto { + font-size: 90%; + color: black; +} +#footer { + clear: both; + position: relative; /* IE bugfix (http://www.dracos.co.uk/web/css/ie6floatbug/) */ + width: 100%; + background: #CFDCED; + border-top: solid 1px #4C6C8F; + color: black; +} +#footer .copyright { + position: relative; /* IE bugfix cont'd */ + padding: 5px; + margin: 0; + width: 60%; +} +#footer .lastmodified { + position: relative; /* IE bugfix cont'd */ + float: right; + width: 30%; + padding: 5px; + margin: 0; + text-align: right; +} +#footer a { color: white; } + +#footer #logos { + text-align: left; +} + + +/** + * Misc Styles + */ + +acronym { cursor: help; } +.boxed { background-color: #a5b6c6;} +.underlined_5 {border-bottom: solid 5px #4C6C8F;} +.underlined_10 {border-bottom: solid 10px #4C6C8F;} +/* ==================== snail trail ============================ */ + +.trail { + position: relative; /* IE bugfix cont'd */ + font-size: 70%; + text-align: right; + float: right; + margin: -10px 5px 0px 5px; + padding: 0; +} + +#motd-area { + position:relative; + float:right; + width: 35%; + background-color: #f0f0ff; + border: solid 1px #4C6C8F; + margin: 0px 0px 10px 10px; + padding: 5px; +} + +#minitoc-area { + border-top: solid 1px #4C6C8F; + border-bottom: solid 1px #4C6C8F; + margin: 15px 10% 5px 15px; + /* margin-bottom: 15px; + margin-left: 15px; + margin-right: 10%;*/ + padding-bottom: 7px; + padding-top: 5px; +} +.minitoc { + list-style-image: url('current.gif'); + font-weight: normal; +} + +.abstract{ + text-align:justify; + } + +li p { + margin: 0; + padding: 0; +} + +.pdflink { + position: relative; /* IE bugfix cont'd */ + float: right; + margin: 0px 5px; + padding: 0; +} +.pdflink br { + margin-top: -10px; + padding-left: 1px; +} +.pdflink a { + display: block; + font-size: 70%; + text-align: center; + margin: 0; + padding: 0; +} + +.pdflink img { + display: block; + height: 16px; + width: 16px; +} +.xmllink { + position: relative; /* IE bugfix cont'd */ + float: right; + margin: 0px 5px; + padding: 0; +} +.xmllink br { + margin-top: -10px; + padding-left: 1px; +} +.xmllink a { + display: block; + font-size: 70%; + text-align: center; + margin: 0; + padding: 0; +} + +.xmllink img { + display: block; + height: 16px; + width: 16px; +} +.podlink { + position: relative; /* IE bugfix cont'd */ + float: right; + margin: 0px 5px; + padding: 0; +} +.podlink br { + margin-top: -10px; + padding-left: 1px; +} +.podlink a { + display: block; + font-size: 70%; + text-align: 
center; + margin: 0; + padding: 0; +} + +.podlink img { + display: block; + height: 16px; + width: 16px; +} + +.printlink { + position: relative; /* IE bugfix cont'd */ + float: right; +} +.printlink br { + margin-top: -10px; + padding-left: 1px; +} +.printlink a { + display: block; + font-size: 70%; + text-align: center; + margin: 0; + padding: 0; +} +.printlink img { + display: block; + height: 16px; + width: 16px; +} + +p.instruction { + display: list-item; + list-style-image: url('../instruction_arrow.png'); + list-style-position: outside; + margin-left: 2em; +} \ No newline at end of file diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md new file mode 100644 index 0000000000000000000000000000000000000000..5f42bea59b89889e45e88b249416e05f9ee6648b --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAdmin.md @@ -0,0 +1,2956 @@ + + +# ZooKeeper Administrator's Guide + +### A Guide to Deployment and Administration + +* [Deployment](#ch_deployment) + * [System Requirements](#sc_systemReq) + * [Supported Platforms](#sc_supportedPlatforms) + * [Required Software](#sc_requiredSoftware) + * [Clustered (Multi-Server) Setup](#sc_zkMulitServerSetup) + * [Single Server and Developer Setup](#sc_singleAndDevSetup) +* [Administration](#ch_administration) + * [Designing a ZooKeeper Deployment](#sc_designing) + * [Cross Machine Requirements](#sc_CrossMachineRequirements) + * [Single Machine Requirements](#Single+Machine+Requirements) + * [Provisioning](#sc_provisioning) + * [Things to Consider: ZooKeeper Strengths and Limitations](#sc_strengthsAndLimitations) + * [Administering](#sc_administering) + * [Maintenance](#sc_maintenance) + * [Ongoing Data Directory Cleanup](#Ongoing+Data+Directory+Cleanup) + * [Debug Log Cleanup (logback)](#Debug+Log+Cleanup+Logback) + * [Supervision](#sc_supervision) + * [Monitoring](#sc_monitoring) + * [Logging](#sc_logging) + * [Troubleshooting](#sc_troubleshooting) + * [Configuration Parameters](#sc_configuration) + * [Minimum Configuration](#sc_minimumConfiguration) + * [Advanced Configuration](#sc_advancedConfiguration) + * [Cluster Options](#sc_clusterOptions) + * [Encryption, Authentication, Authorization Options](#sc_authOptions) + * [Experimental Options/Features](#Experimental+Options%2FFeatures) + * [Unsafe Options](#Unsafe+Options) + * [Disabling data directory autocreation](#Disabling+data+directory+autocreation) + * [Enabling db existence validation](#sc_db_existence_validation) + * [Performance Tuning Options](#sc_performance_options) + * [AdminServer configuration](#sc_adminserver_config) + * [Communication using the Netty framework](#Communication+using+the+Netty+framework) + * [Quorum TLS](#Quorum+TLS) + * [Upgrading existing non-TLS cluster with no downtime](#Upgrading+existing+nonTLS+cluster) + * [ZooKeeper Commands](#sc_zkCommands) + * [The Four Letter Words](#sc_4lw) + * [The AdminServer](#sc_adminserver) + * [Data File Management](#sc_dataFileManagement) + * [The Data Directory](#The+Data+Directory) + * [The Log Directory](#The+Log+Directory) + * [File Management](#sc_filemanagement) + * [Recovery - TxnLogToolkit](#Recovery+-+TxnLogToolkit) + * [Things to Avoid](#sc_commonProblems) + * [Best Practices](#sc_bestPractices) + + + +## Deployment + +This section contains information about deploying Zookeeper and +covers 
these topics: + +* [System Requirements](#sc_systemReq) +* [Clustered (Multi-Server) Setup](#sc_zkMulitServerSetup) +* [Single Server and Developer Setup](#sc_singleAndDevSetup) + +The first two sections assume you are interested in installing +ZooKeeper in a production environment such as a datacenter. The final +section covers situations in which you are setting up ZooKeeper on a +limited basis - for evaluation, testing, or development - but not in a +production environment. + + + +### System Requirements + + + +#### Supported Platforms + +ZooKeeper consists of multiple components. Some components are +supported broadly, and other components are supported only on a smaller +set of platforms. + +* **Client** is the Java client + library, used by applications to connect to a ZooKeeper ensemble. +* **Server** is the Java server + that runs on the ZooKeeper ensemble nodes. +* **Native Client** is a client + implemented in C, similar to the Java client, used by applications + to connect to a ZooKeeper ensemble. +* **Contrib** refers to multiple + optional add-on components. + +The following matrix describes the level of support committed for +running each component on different operating system platforms. + +##### Support Matrix + +| Operating System | Client | Server | Native Client | Contrib | +|------------------|--------|--------|---------------|---------| +| GNU/Linux | Development and Production | Development and Production | Development and Production | Development and Production | +| Solaris | Development and Production | Development and Production | Not Supported | Not Supported | +| FreeBSD | Development and Production | Development and Production | Not Supported | Not Supported | +| Windows | Development and Production | Development and Production | Not Supported | Not Supported | +| Mac OS X | Development Only | Development Only | Not Supported | Not Supported | + +For any operating system not explicitly mentioned as supported in +the matrix, components may or may not work. The ZooKeeper community +will fix obvious bugs that are reported for other platforms, but there +is no full support. + + + +#### Required Software + +ZooKeeper runs in Java, release 1.8 or greater +(JDK 8 LTS, JDK 11 LTS, JDK 12 - Java 9 and 10 are not supported). +It runs as an _ensemble_ of ZooKeeper servers. Three +ZooKeeper servers is the minimum recommended size for an +ensemble, and we also recommend that they run on separate +machines. At Yahoo!, ZooKeeper is usually deployed on +dedicated RHEL boxes, with dual-core processors, 2GB of RAM, +and 80GB IDE hard drives. + + + +### Clustered (Multi-Server) Setup + +For reliable ZooKeeper service, you should deploy ZooKeeper in a +cluster known as an _ensemble_. As long as a majority +of the ensemble are up, the service will be available. Because Zookeeper +requires a majority, it is best to use an +odd number of machines. For example, with four machines ZooKeeper can +only handle the failure of a single machine; if two machines fail, the +remaining two machines do not constitute a majority. However, with five +machines ZooKeeper can handle the failure of two machines. + +###### Note +>As mentioned in the +[ZooKeeper Getting Started Guide](zookeeperStarted.html) +, a minimum of three servers are required for a fault tolerant +clustered setup, and it is strongly recommended that you have an +odd number of servers. 
+
+>Usually three servers is more than enough for a production
+install, but for maximum reliability during maintenance, you may
+wish to install five servers. With three servers, if you perform
+maintenance on one of them, you are vulnerable to a failure on one
+of the other two servers during that maintenance. If you have five
+of them running, you can take one down for maintenance, and know
+that you're still OK if one of the other four suddenly fails.
+
+>Your redundancy considerations should include all aspects of
+your environment. If you have three ZooKeeper servers, but their
+network cables are all plugged into the same network switch, then
+the failure of that switch will take down your entire ensemble.
+
+Here are the steps to set up a server that will be part of an
+ensemble. These steps should be performed on every host in the
+ensemble:
+
+1. Install the Java JDK. You can use the native packaging system
+   for your system, or download the JDK from:
+   [http://java.sun.com/javase/downloads/index.jsp](http://java.sun.com/javase/downloads/index.jsp)
+
+2. Set the Java heap size. This is very important to avoid
+   swapping, which will seriously degrade ZooKeeper performance. To
+   determine the correct value, use load tests, and make sure you are
+   well below the usage limit that would cause you to swap. Be
+   conservative - use a maximum heap size of 3GB for a 4GB
+   machine.
+
+3. Install the ZooKeeper Server Package. It can be downloaded
+   from:
+   [http://zookeeper.apache.org/releases.html](http://zookeeper.apache.org/releases.html)
+
+4. Create a configuration file. This file can be called anything.
+   Use the following settings as a starting point:
+
+        tickTime=2000
+        dataDir=/var/lib/zookeeper/
+        clientPort=2181
+        initLimit=5
+        syncLimit=2
+        server.1=zoo1:2888:3888
+        server.2=zoo2:2888:3888
+        server.3=zoo3:2888:3888
+
+   You can find the meanings of these and other configuration
+   settings in the section [Configuration Parameters](#sc_configuration). A word,
+   though, about a few of them here:
+   Every machine that is part of the ZooKeeper ensemble should know
+   about every other machine in the ensemble. You accomplish this with
+   the series of lines of the form **server.id=host:port:port**.
+   (The parameters **host** and **port** are straightforward: for each server
+   you need to specify first a quorum port and then a dedicated port for ZooKeeper leader
+   election.) Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
+   for each ZooKeeper server instance (this can increase availability when multiple physical
+   network interfaces can be used in parallel in the cluster).
+   You attribute the
+   server id to each machine by creating a file named
+   *myid*, one for each server, which resides in
+   that server's data directory, as specified by the configuration file
+   parameter **dataDir**.
+
+5. The myid file
+   consists of a single line containing only the text of that machine's
+   id. So *myid* of server 1 would contain the text
+   "1" and nothing else. The id must be unique within the
+   ensemble and should have a value between 1 and 255.
+   **IMPORTANT:** if you enable extended features such
+   as TTL Nodes (see below) the id must be between 1
+   and 254 due to internal limitations.
+
+6. Create an initialization marker file *initialize*
+   in the same directory as *myid*. This file indicates
+   that an empty data directory is expected. When present, an empty database
+   is created and the marker file deleted. 
When not present, an empty data + directory will mean this peer will not have voting rights and it will not + populate the data directory until it communicates with an active leader. + Intended use is to only create this file when bringing up a new + ensemble. + +7. If your configuration file is set up, you can start a + ZooKeeper server: + + $ java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.conf + + QuorumPeerMain starts a ZooKeeper server, + [JMX](http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/) + management beans are also registered which allows + management through a JMX management console. + The [ZooKeeper JMX + document](zookeeperJMX.html) contains details on managing ZooKeeper with JMX. + See the script _bin/zkServer.sh_, + which is included in the release, for an example + of starting server instances. +8. Test your deployment by connecting to the hosts: + In Java, you can run the following command to execute + simple operations: + + $ bin/zkCli.sh -server 127.0.0.1:2181 + + + +### Single Server and Developer Setup + +If you want to set up ZooKeeper for development purposes, you will +probably want to set up a single server instance of ZooKeeper, and then +install either the Java or C client-side libraries and bindings on your +development machine. + +The steps to setting up a single server instance are the similar +to the above, except the configuration file is simpler. You can find the +complete instructions in the [Installing and +Running ZooKeeper in Single Server Mode](zookeeperStarted.html#sc_InstallingSingleMode) section of the [ZooKeeper Getting Started +Guide](zookeeperStarted.html). + +For information on installing the client side libraries, refer to +the [Bindings](zookeeperProgrammers.html#ch_bindings) +section of the [ZooKeeper +Programmer's Guide](zookeeperProgrammers.html). + + + +## Administration + +This section contains information about running and maintaining +ZooKeeper and covers these topics: + +* [Designing a ZooKeeper Deployment](#sc_designing) +* [Provisioning](#sc_provisioning) +* [Things to Consider: ZooKeeper Strengths and Limitations](#sc_strengthsAndLimitations) +* [Administering](#sc_administering) +* [Maintenance](#sc_maintenance) +* [Supervision](#sc_supervision) +* [Monitoring](#sc_monitoring) +* [Logging](#sc_logging) +* [Troubleshooting](#sc_troubleshooting) +* [Configuration Parameters](#sc_configuration) +* [ZooKeeper Commands](#sc_zkCommands) +* [Data File Management](#sc_dataFileManagement) +* [Things to Avoid](#sc_commonProblems) +* [Best Practices](#sc_bestPractices) + + + +### Designing a ZooKeeper Deployment + +The reliability of ZooKeeper rests on two basic assumptions. + +1. Only a minority of servers in a deployment + will fail. _Failure_ in this context + means a machine crash, or some error in the network that + partitions a server off from the majority. +1. Deployed machines operate correctly. To + operate correctly means to execute code correctly, to have + clocks that work properly, and to have storage and network + components that perform consistently. + +The sections below contain considerations for ZooKeeper +administrators to maximize the probability for these assumptions +to hold true. Some of these are cross-machines considerations, +and others are things you should consider for each and every +machine in your deployment. 
+
+
+#### Cross Machine Requirements
+
+For the ZooKeeper service to be active, there must be a
+majority of non-failing machines that can communicate with
+each other. For a ZooKeeper ensemble with N servers,
+if N is odd, the ensemble is able to tolerate up to N/2
+server failures without losing any znode data;
+if N is even, the ensemble is able to tolerate up to N/2-1
+server failures.
+
+For example, if we have a ZooKeeper ensemble with 3 servers,
+the ensemble is able to tolerate up to 1 (3/2) server failure.
+If we have a ZooKeeper ensemble with 5 servers,
+the ensemble is able to tolerate up to 2 (5/2) server failures.
+If we have a ZooKeeper ensemble with 6 servers, the ensemble
+is also able to tolerate up to 2 (6/2-1) server failures
+without losing data while preventing the "split brain" issue.
+
+A ZooKeeper ensemble usually has an odd number of servers.
+This is because with an even number of servers,
+the failure tolerance is the same as for
+an ensemble with one less server
+(2 failures for both a 5-node and a 6-node ensemble),
+but the ensemble has to maintain extra connections and
+data transfers for one more server.
+
+To achieve the highest probability of tolerating a failure,
+you should try to make machine failures independent. For
+example, if most of the machines share the same switch,
+failure of that switch could cause a correlated failure and
+bring down the service. The same holds true of shared power
+circuits, cooling systems, etc.
+
+
+
+
+#### Single Machine Requirements
+
+If ZooKeeper has to contend with other applications for
+access to resources like storage media, CPU, network, or
+memory, its performance will suffer markedly. ZooKeeper has
+strong durability guarantees, which means it uses storage
+media to log changes before the operation responsible for the
+change is allowed to complete. You should be aware of this
+dependency, then, and take great care if you want to ensure
+that ZooKeeper operations aren’t held up by your media. Here
+are some things you can do to minimize that sort of
+degradation:
+
+* ZooKeeper's transaction log must be on a dedicated
+  device. (A dedicated partition is not enough.) ZooKeeper
+  writes the log sequentially, without seeking. Sharing your
+  log device with other processes can cause seeks and
+  contention, which in turn can cause multi-second
+  delays.
+* Do not put ZooKeeper in a situation that can cause a
+  swap. In order for ZooKeeper to function with any sort of
+  timeliness, it simply cannot be allowed to swap.
+  Therefore, make certain that the maximum heap size given
+  to ZooKeeper is not bigger than the amount of real memory
+  available to ZooKeeper. For more on this, see
+  [Things to Avoid](#sc_commonProblems)
+  below.
+
+
+
+
+### Provisioning
+
+
+
+
+### Things to Consider: ZooKeeper Strengths and Limitations
+
+
+
+
+### Administering
+
+
+
+
+### Maintenance
+
+Little long-term maintenance is required for a ZooKeeper
+cluster; however, you must be aware of the following:
+
+
+
+
+#### Ongoing Data Directory Cleanup
+
+The ZooKeeper [Data
+Directory](#var_datadir) contains files which are a persistent copy
+of the znodes stored by a particular serving ensemble. These
+are the snapshot and transactional log files. As changes are
+made to the znodes these changes are appended to a
+transaction log. Occasionally, when a log grows large, a
+snapshot of the current state of all znodes will be written
+to the filesystem and a new transaction log file is created
+for future transactions. During snapshotting, ZooKeeper may
+continue appending incoming transactions to the old log file.
+Therefore, some transactions which are newer than a snapshot
+may be found in the last transaction log preceding the
+snapshot.
+
+A ZooKeeper server **will not remove
+old snapshots and log files** when using the default
+configuration (see autopurge below); this is the
+responsibility of the operator. Every serving environment is
+different and therefore the requirements of managing these
+files may differ from install to install (backup for example).
+
+The PurgeTxnLog utility implements a simple retention
+policy that administrators can use. The [API docs](index.html) contain details on
+calling conventions (arguments, etc.).
+
+In the following example the last *count* snapshots and
+their corresponding logs are retained and the others are
+deleted. The value of *count* should typically be
+greater than 3 (although not required, this provides 3 backups
+in the unlikely event a recent log has become corrupted). This
+can be run as a cron job on the ZooKeeper server machines to
+clean up the logs daily.
+
+    CLASSPATH='lib/*:conf' java org.apache.zookeeper.server.PurgeTxnLog <dataDir> <snapDir> -n <count>
+
+
+Automatic purging of the snapshots and corresponding
+transaction logs was introduced in version 3.4.0 and can be
+enabled via the following configuration parameters **autopurge.snapRetainCount** and **autopurge.purgeInterval**. For more on
+this, see [Advanced Configuration](#sc_advancedConfiguration)
+below.
+
+
+
+
+#### Debug Log Cleanup (logback)
+
+See the section on [logging](#sc_logging) in this document. It is
+expected that you will set up a rolling file appender using the
+built-in logback feature. The sample configuration file in the
+release tar's `conf/logback.xml` provides an example of
+this.
+
+
+
+
+### Supervision
+
+You will want to have a supervisory process that manages
+each of your ZooKeeper server processes (JVM). The ZK server is
+designed to be "fail fast", meaning that it will shut down
+(process exit) if an error occurs that it cannot recover
+from. As a ZooKeeper serving cluster is highly reliable, this
+means that while the server may go down, the cluster as a whole
+is still active and serving requests. Additionally, as the
+cluster is "self healing", the failed server, once restarted, will
+automatically rejoin the ensemble without any manual
+interaction.
+
+Having a supervisory process such as [daemontools](http://cr.yp.to/daemontools.html) or
+[SMF](http://en.wikipedia.org/wiki/Service\_Management\_Facility)
+(other options for a supervisory process are also available; it's
+up to you which one you would like to use, these are just two
+examples) managing your ZooKeeper server ensures that if the
+process does exit abnormally it will automatically be restarted
+and will quickly rejoin the cluster.
+
+It is also recommended to configure the ZooKeeper server process to
+terminate and dump its heap if an **OutOfMemoryError** occurs. This is achieved
+by launching the JVM with the following arguments on Linux and Windows
+respectively. The *zkServer.sh* and
+*zkServer.cmd* scripts that ship with ZooKeeper set
+these options. 
+ + -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p' + + "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f" + + + +### Monitoring + +The ZooKeeper service can be monitored in one of three primary ways: + +* the command port through the use of [4 letter words](#sc_zkCommands) +* with [JMX](zookeeperJMX.html) +* using the [`zkServer.sh status` command](zookeeperTools.html#zkServer) + + + +### Logging + +ZooKeeper uses **[SLF4J](http://www.slf4j.org)** +version 1.7 as its logging infrastructure. By default ZooKeeper is shipped with +**[LOGBack](http://logback.qos.ch/)** as the logging backend, but you can use +any other supported logging framework of your choice. + +The ZooKeeper default *logback.xml* +file resides in the *conf* directory. Logback requires that +*logback.xml* either be in the working directory +(the directory from which ZooKeeper is run) or be accessible from the classpath. + +For more information about SLF4J, see +[its manual](http://www.slf4j.org/manual.html). + +For more information about Logback, see +[Logback website](http://logback.qos.ch/). + + + +### Troubleshooting + +* *Server not coming up because of file corruption* : + A server might not be able to read its database and fail to come up because of + some file corruption in the transaction logs of the ZooKeeper server. You will + see some IOException on loading ZooKeeper database. In such a case, + make sure all the other servers in your ensemble are up and working. Use "stat" + command on the command port to see if they are in good health. After you have verified that + all the other servers of the ensemble are up, you can go ahead and clean the database + of the corrupt server. Delete all the files in datadir/version-2 and datalogdir/version-2/. + Restart the server. + + + +### Configuration Parameters + +ZooKeeper's behavior is governed by the ZooKeeper configuration +file. This file is designed so that the exact same file can be used by +all the servers that make up a ZooKeeper server assuming the disk +layouts are the same. If servers use different configuration files, care +must be taken to ensure that the list of servers in all of the different +configuration files match. + +###### Note +>In 3.5.0 and later, some of these parameters should be placed in +a dynamic configuration file. If they are placed in the static +configuration file, ZooKeeper will automatically move them over to the +dynamic configuration file. See [Dynamic Reconfiguration](zookeeperReconfig.html) for more information. + + + +#### Minimum Configuration + +Here are the minimum configuration keywords that must be defined +in the configuration file: + +* *clientPort* : + the port to listen for client connections; that is, the + port that clients attempt to connect to. + +* *secureClientPort* : + the port to listen on for secure client connections using SSL. + **clientPort** specifies + the port for plaintext connections while **secureClientPort** specifies the port for SSL + connections. Specifying both enables mixed-mode while omitting + either will disable that mode. + Note that SSL feature will be enabled when user plugs-in + zookeeper.serverCnxnFactory, zookeeper.clientCnxnSocket as Netty. + +* *observerMasterPort* : + the port to listen for observer connections; that is, the + port that observers attempt to connect to. 
+ if the property is set then the server will host observer connections + when in follower mode in addition to when in leader mode and correspondingly + attempt to connect to any voting peer when in observer mode. + +* *dataDir* : + the location where ZooKeeper will store the in-memory + database snapshots and, unless specified otherwise, the + transaction log of updates to the database. + ###### Note + >Be careful where you put the transaction log. A + dedicated transaction log device is key to consistent good + performance. Putting the log on a busy device will adversely + affect performance. + +* *tickTime* : + the length of a single tick, which is the basic time unit + used by ZooKeeper, as measured in milliseconds. It is used to + regulate heartbeats, and timeouts. For example, the minimum + session timeout will be two ticks. + + + +#### Advanced Configuration + +The configuration settings in the section are optional. You can +use them to further fine tune the behaviour of your ZooKeeper servers. +Some can also be set using Java system properties, generally of the +form _zookeeper.keyword_. The exact system +property, when available, is noted below. + +* *dataLogDir* : + (No Java system property) + This option will direct the machine to write the + transaction log to the **dataLogDir** rather than the **dataDir**. This allows a dedicated log + device to be used, and helps avoid competition between logging + and snapshots. + ###### Note + >Having a dedicated log device has a large impact on + throughput and stable latencies. It is highly recommended dedicating a log device and set **dataLogDir** to point to a directory on + that device, and then make sure to point **dataDir** to a directory + _not_ residing on that device. + +* *globalOutstandingLimit* : + (Java system property: **zookeeper.globalOutstandingLimit.**) + Clients can submit requests faster than ZooKeeper can + process them, especially if there are a lot of clients. To + prevent ZooKeeper from running out of memory due to queued + requests, ZooKeeper will throttle clients so that there are no + more than globalOutstandingLimit outstanding requests across + entire ensemble, equally divided. The default limit is 1,000 + and, for example, with 3 members each of them will have + 1000 / 2 = 500 individual limit. + +* *preAllocSize* : + (Java system property: **zookeeper.preAllocSize**) + To avoid seeks ZooKeeper allocates space in the + transaction log file in blocks of preAllocSize kilobytes. The + default block size is 64M. One reason for changing the size of + the blocks is to reduce the block size if snapshots are taken + more often. (Also, see **snapCount** and **snapSizeLimitInKb**). + +* *snapCount* : + (Java system property: **zookeeper.snapCount**) + ZooKeeper records its transactions using snapshots and + a transaction log (think write-ahead log). The number of + transactions recorded in the transaction log before a snapshot + can be taken (and the transaction log rolled) is determined + by snapCount. In order to prevent all of the machines in the quorum + from taking a snapshot at the same time, each ZooKeeper server + will take a snapshot when the number of transactions in the transaction log + reaches a runtime generated random value in the \[snapCount/2+1, snapCount] + range. The default snapCount is 100,000. 
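+
+  As a purely illustrative sketch (the values below are examples rather than
+  recommendations, and depending on your version these keys may need to be passed
+  as the corresponding Java system properties noted above instead of *zoo.cfg*
+  entries), a server with a dedicated log device that snapshots relatively often
+  might combine the options described above along these lines:
+
+      # illustrative zoo.cfg fragment - paths and values are examples only
+      dataDir=/var/lib/zookeeper
+      dataLogDir=/zktxnlog/zookeeper
+      snapCount=50000
+      preAllocSize=32768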
+ +* *commitLogCount* * : + (Java system property: **zookeeper.commitLogCount**) + Zookeeper maintains an in-memory list of last committed requests for fast synchronization with + followers when the followers are not too behind. This improves sync performance in case when your + snapshots are large (>100,000). The default value is 500 which is the recommended minimum. + +* *snapSizeLimitInKb* : + (Java system property: **zookeeper.snapSizeLimitInKb**) + ZooKeeper records its transactions using snapshots and + a transaction log (think write-ahead log). The total size in bytes allowed + in the set of transactions recorded in the transaction log before a snapshot + can be taken (and the transaction log rolled) is determined + by snapSize. In order to prevent all of the machines in the quorum + from taking a snapshot at the same time, each ZooKeeper server + will take a snapshot when the size in bytes of the set of transactions in the + transaction log reaches a runtime generated random value in the \[snapSize/2+1, snapSize] + range. Each file system has a minimum standard file size and in order + to for valid functioning of this feature, the number chosen must be larger + than that value. The default snapSizeLimitInKb is 4,194,304 (4GB). + A non-positive value will disable the feature. + +* *txnLogSizeLimitInKb* : + (Java system property: **zookeeper.txnLogSizeLimitInKb**) + Zookeeper transaction log file can also be controlled more + directly using txnLogSizeLimitInKb. Larger txn logs can lead to + slower follower syncs when sync is done using transaction log. + This is because leader has to scan through the appropriate log + file on disk to find the transaction to start sync from. + This feature is turned off by default and snapCount and snapSizeLimitInKb are the + only values that limit transaction log size. When enabled + Zookeeper will roll the log when any of the limits is hit. + Please note that actual log size can exceed this value by the size + of the serialized transaction. On the other hand, if this value is + set too close to (or smaller than) **preAllocSize**, + it can cause Zookeeper to roll the log for every transaction. While + this is not a correctness issue, this may cause severely degraded + performance. To avoid this and to get most out of this feature, it is + recommended to set the value to N * **preAllocSize** + where N >= 2. + +* *maxCnxns* : + (Java system property: **zookeeper.maxCnxns**) + Limits the total number of concurrent connections that can be made to a + zookeeper server (per client Port of each server ). This is used to prevent certain + classes of DoS attacks. The default is 0 and setting it to 0 entirely removes + the limit on total number of concurrent connections. Accounting for the + number of connections for serverCnxnFactory and a secureServerCnxnFactory is done + separately, so a peer is allowed to host up to 2*maxCnxns provided they are of appropriate types. + +* *maxClientCnxns* : + (No Java system property) + Limits the number of concurrent connections (at the socket + level) that a single client, identified by IP address, may make + to a single member of the ZooKeeper ensemble. This is used to + prevent certain classes of DoS attacks, including file + descriptor exhaustion. The default is 60. Setting this to 0 + entirely removes the limit on concurrent connections. + +* *clientPortAddress* : + **New in 3.3.0:** the + address (ipv4, ipv6 or hostname) to listen for client + connections; that is, the address that clients attempt + to connect to. 
This is optional, by default we bind in + such a way that any connection to the **clientPort** for any + address/interface/nic on the server will be + accepted. + +* *minSessionTimeout* : + (No Java system property) + **New in 3.3.0:** the + minimum session timeout in milliseconds that the server + will allow the client to negotiate. Defaults to 2 times + the **tickTime**. + +* *maxSessionTimeout* : + (No Java system property) + **New in 3.3.0:** the + maximum session timeout in milliseconds that the server + will allow the client to negotiate. Defaults to 20 times + the **tickTime**. + +* *fsync.warningthresholdms* : + (Java system property: **zookeeper.fsync.warningthresholdms**) + **New in 3.3.4:** A + warning message will be output to the log whenever an + fsync in the Transactional Log (WAL) takes longer than + this value. The values is specified in milliseconds and + defaults to 1000. This value can only be set as a + system property. + +* *maxResponseCacheSize* : + (Java system property: **zookeeper.maxResponseCacheSize**) + When set to a positive integer, it determines the size + of the cache that stores the serialized form of recently + read records. Helps save the serialization cost on + popular znodes. The metrics **response_packet_cache_hits** + and **response_packet_cache_misses** can be used to tune + this value to a given workload. The feature is turned on + by default with a value of 400, set to 0 or a negative + integer to turn the feature off. + +* *maxGetChildrenResponseCacheSize* : + (Java system property: **zookeeper.maxGetChildrenResponseCacheSize**) + **New in 3.6.0:** + Similar to **maxResponseCacheSize**, but applies to get children + requests. The metrics **response_packet_get_children_cache_hits** + and **response_packet_get_children_cache_misses** can be used to tune + this value to a given workload. The feature is turned on + by default with a value of 400, set to 0 or a negative + integer to turn the feature off. + +* *autopurge.snapRetainCount* : + (No Java system property) + **New in 3.4.0:** + When enabled, ZooKeeper auto purge feature retains + the **autopurge.snapRetainCount** most + recent snapshots and the corresponding transaction logs in the + **dataDir** and **dataLogDir** respectively and deletes the rest. + Defaults to 3. Minimum value is 3. + +* *autopurge.purgeInterval* : + (No Java system property) + **New in 3.4.0:** The + time interval in hours for which the purge task has to + be triggered. Set to a positive integer (1 and above) + to enable the auto purging. Defaults to 0. + **Suffix support added in 3.10.0:** The interval is specified as an integer with an optional suffix to indicate the time unit. + Supported suffixes are: `ms` for milliseconds, `s` for seconds, `m` for minutes, `h` for hours, and `d` for days. + For example, "10m" represents 10 minutes, and "5h" represents 5 hours. + If no suffix is provided, the default unit is hours. + +* *syncEnabled* : + (Java system property: **zookeeper.observer.syncEnabled**) + **New in 3.4.6, 3.5.0:** + The observers now log transaction and write snapshot to disk + by default like the participants. This reduces the recovery time + of the observers on restart. Set to "false" to disable this + feature. Default is "true" + +* *extendedTypesEnabled* : + (Java system property only: **zookeeper.extendedTypesEnabled**) + **New in 3.5.4, 3.6.0:** Define to `true` to enable + extended features such as the creation of [TTL Nodes](zookeeperProgrammers.html#TTL+Nodes). + They are disabled by default. 
IMPORTANT: when enabled server IDs must + be less than 255 due to internal limitations. + +* *emulate353TTLNodes* : + (Java system property only:**zookeeper.emulate353TTLNodes**). + **New in 3.5.4, 3.6.0:** Due to [ZOOKEEPER-2901] + (https://issues.apache.org/jira/browse/ZOOKEEPER-2901) TTL nodes + created in version 3.5.3 are not supported in 3.5.4/3.6.0. However, a workaround is provided via the + zookeeper.emulate353TTLNodes system property. If you used TTL nodes in ZooKeeper 3.5.3 and need to maintain + compatibility set **zookeeper.emulate353TTLNodes** to `true` in addition to + **zookeeper.extendedTypesEnabled**. NOTE: due to the bug, server IDs + must be 127 or less. Additionally, the maximum support TTL value is `1099511627775` which is smaller + than what was allowed in 3.5.3 (`1152921504606846975`) + +* *watchManagerName* : + (Java system property only: **zookeeper.watchManagerName**) + **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + New watcher manager WatchManagerOptimized is added to optimize the memory overhead in heavy watch use cases. This + config is used to define which watcher manager to be used. Currently, we only support WatchManager and + WatchManagerOptimized. + +* *watcherCleanThreadsNum* : + (Java system property only: **zookeeper.watcherCleanThreadsNum**) + **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + The new watcher manager WatchManagerOptimized will clean up the dead watchers lazily, this config is used to decide how + many thread is used in the WatcherCleaner. More thread usually means larger clean up throughput. The + default value is 2, which is good enough even for heavy and continuous session closing/recreating cases. + +* *watcherCleanThreshold* : + (Java system property only: **zookeeper.watcherCleanThreshold**) + **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + The new watcher manager WatchManagerOptimized will clean up the dead watchers lazily, the cleanup process is relatively + heavy, batch processing will reduce the cost and improve the performance. This setting is used to decide + the batch size. The default one is 1000, we don't need to change it if there is no memory or clean up + speed issue. + +* *watcherCleanIntervalInSeconds* : + (Java system property only:**zookeeper.watcherCleanIntervalInSeconds**) + **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + The new watcher manager WatchManagerOptimized will clean up the dead watchers lazily, the cleanup process is relatively + heavy, batch processing will reduce the cost and improve the performance. Besides watcherCleanThreshold, + this setting is used to clean up the dead watchers after certain time even the dead watchers are not larger + than watcherCleanThreshold, so that we won't leave the dead watchers there for too long. The default setting + is 10 minutes, which usually don't need to be changed. + +* *maxInProcessingDeadWatchers* : + (Java system property only: **zookeeper.maxInProcessingDeadWatchers**) + **New in 3.6.0:** Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + This is used to control how many backlog can we have in the WatcherCleaner, when it reaches this number, it will + slow down adding the dead watcher to WatcherCleaner, which will in turn slow down adding and closing + watchers, so that we can avoid OOM issue. 
By default there is no limit, you can set it to values like + watcherCleanThreshold * 1000. + +* *bitHashCacheSize* : + (Java system property only: **zookeeper.bitHashCacheSize**) + **New 3.6.0**: Added in [ZOOKEEPER-1179](https://issues.apache.org/jira/browse/ZOOKEEPER-1179) + This is the setting used to decide the HashSet cache size in the BitHashSet implementation. Without HashSet, we + need to use O(N) time to get the elements, N is the bit numbers in elementBits. But we need to + keep the size small to make sure it doesn't cost too much in memory, there is a trade off between memory + and time complexity. The default value is 10, which seems a relatively reasonable cache size. + +* *fastleader.minNotificationInterval* : + (Java system property: **zookeeper.fastleader.minNotificationInterval**) + Lower bound for length of time between two consecutive notification + checks on the leader election. This interval determines how long a + peer waits to check the set of election votes and effects how + quickly an election can resolve. The interval follows a backoff + strategy from the configured minimum (this) and the configured maximum + (fastleader.maxNotificationInterval) for long elections. + +* *fastleader.maxNotificationInterval* : + (Java system property: **zookeeper.fastleader.maxNotificationInterval**) + Upper bound for length of time between two consecutive notification + checks on the leader election. This interval determines how long a + peer waits to check the set of election votes and effects how + quickly an election can resolve. The interval follows a backoff + strategy from the configured minimum (fastleader.minNotificationInterval) + and the configured maximum (this) for long elections. + +* *connectionMaxTokens* : + (Java system property: **zookeeper.connection_throttle_tokens**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the maximum number of tokens in the token-bucket. + When set to 0, throttling is disabled. Default is 0. + +* *connectionTokenFillTime* : + (Java system property: **zookeeper.connection_throttle_fill_time**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the interval in milliseconds when the token bucket is re-filled with + *connectionTokenFillCount* tokens. Default is 1. + +* *connectionTokenFillCount* : + (Java system property: **zookeeper.connection_throttle_fill_count**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the number of tokens to add to the token bucket every + *connectionTokenFillTime* milliseconds. Default is 1. + +* *connectionFreezeTime* : + (Java system property: **zookeeper.connection_throttle_freeze_time**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the interval in milliseconds when the dropping + probability is adjusted. When set to -1, probabilistic dropping is disabled. + Default is -1. 
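+
+  To make the token-bucket parameters above concrete, here is an illustrative
+  combination only (not a recommendation, and depending on your version these may
+  need to be supplied as the corresponding Java system properties listed above
+  rather than as *zoo.cfg* entries): a bucket capped at 200 tokens that is refilled
+  with 5 tokens every 10 milliseconds admits roughly 500 new connections per second
+  in steady state, with bursts of up to 200.
+
+      # illustrative values only
+      connectionMaxTokens=200
+      connectionTokenFillTime=10
+      connectionTokenFillCount=5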
+ +* *connectionDropIncrease* : + (Java system property: **zookeeper.connection_throttle_drop_increase**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the dropping probability to increase. The throttler + checks every *connectionFreezeTime* milliseconds and if the token bucket is + empty, the dropping probability will be increased by *connectionDropIncrease*. + The default is 0.02. + +* *connectionDropDecrease* : + (Java system property: **zookeeper.connection_throttle_drop_decrease**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. + This parameter defines the dropping probability to decrease. The throttler + checks every *connectionFreezeTime* milliseconds and if the token bucket has + more tokens than a threshold, the dropping probability will be decreased by + *connectionDropDecrease*. The threshold is *connectionMaxTokens* \* + *connectionDecreaseRatio*. The default is 0.002. + +* *connectionDecreaseRatio* : + (Java system property: **zookeeper.connection_throttle_decrease_ratio**) + **New in 3.6.0:** + This is one of the parameters to tune the server-side connection throttler, + which is a token-based rate limiting mechanism with optional probabilistic + dropping. This parameter defines the threshold to decrease the dropping + probability. The default is 0. + +* *zookeeper.connection_throttle_weight_enabled* : + (Java system property only) + **New in 3.6.0:** + Whether to consider connection weights when throttling. Only useful when connection throttle is enabled, that is, connectionMaxTokens is larger than 0. The default is false. + +* *zookeeper.connection_throttle_global_session_weight* : + (Java system property only) + **New in 3.6.0:** + The weight of a global session. It is the number of tokens required for a global session request to get through the connection throttler. It has to be a positive integer no smaller than the weight of a local session. The default is 3. + +* *zookeeper.connection_throttle_local_session_weight* : + (Java system property only) + **New in 3.6.0:** + The weight of a local session. It is the number of tokens required for a local session request to get through the connection throttler. It has to be a positive integer no larger than the weight of a global session or a renew session. The default is 1. + +* *zookeeper.connection_throttle_renew_session_weight* : + (Java system property only) + **New in 3.6.0:** + The weight of renewing a session. It is also the number of tokens required for a reconnect request to get through the throttler. It has to be a positive integer no smaller than the weight of a local session. The default is 2. + + +* *clientPortListenBacklog* : + (No Java system property) + **New in 3.4.14, 3.5.5, 3.6.0:** + The socket backlog length for the ZooKeeper server socket. This controls + the number of requests that will be queued server-side to be processed + by the ZooKeeper server. Connections that exceed this length will receive + a network timeout (30s) which may cause ZooKeeper session expiry issues. + By default, this value is unset (`-1`) which, on Linux, uses a backlog of + `50`. This value must be a positive number. 
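+
+  On Linux, one way to check the backlog that is actually in effect for the client
+  port is to inspect the listening socket (2181 here is just the example client
+  port); for listening sockets, the `Send-Q` column reported by `ss` shows the
+  configured accept-queue limit:
+
+      ss -ltn 'sport = :2181'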
+ +* *serverCnxnFactory* : + (Java system property: **zookeeper.serverCnxnFactory**) + Specifies ServerCnxnFactory implementation. + This should be set to `NettyServerCnxnFactory` in order to use TLS based server communication. + Default is `NIOServerCnxnFactory`. + +* *flushDelay* : + (Java system property: **zookeeper.flushDelay**) + Time in milliseconds to delay the flush of the commit log. + Does not affect the limit defined by *maxBatchSize*. + Disabled by default (with value 0). Ensembles with high write rates + may see throughput improved with a value of 10-20 ms. + +* *maxWriteQueuePollTime* : + (Java system property: **zookeeper.maxWriteQueuePollTime**) + If *flushDelay* is enabled, this determines the amount of time in milliseconds + to wait before flushing when no new requests are being queued. + Set to *flushDelay*/3 by default (implicitly disabled by default). + +* *maxBatchSize* : + (Java system property: **zookeeper.maxBatchSize**) + The number of transactions allowed in the server before a flush of the + commit log is triggered. + Does not affect the limit defined by *flushDelay*. + Default is 1000. + +* *enforceQuota* : + (Java system property: **zookeeper.enforceQuota**) + **New in 3.7.0:** + Enforce the quota check. When enabled and the client exceeds the total bytes or children count hard quota under a znode, the server will reject the request and reply the client a `QuotaExceededException` by force. + The default value is: false. Exploring [quota feature](http://zookeeper.apache.org/doc/current/zookeeperQuotas.html) for more details. + +* *requestThrottleLimit* : + (Java system property: **zookeeper.request_throttle_max_requests**) + **New in 3.6.0:** + The total number of outstanding requests allowed before the RequestThrottler starts stalling. When set to 0, throttling is disabled. The default is 0. + +* *requestThrottleStallTime* : + (Java system property: **zookeeper.request_throttle_stall_time**) + **New in 3.6.0:** + The maximum time (in milliseconds) for which a thread may wait to be notified that it may proceed processing a request. The default is 100. + +* *requestThrottleDropStale* : + (Java system property: **request_throttle_drop_stale**) + **New in 3.6.0:** + When enabled, the throttler will drop stale requests rather than issue them to the request pipeline. A stale request is a request sent by a connection that is now closed, and/or a request that will have a request latency higher than the sessionTimeout. The default is true. + +* *requestStaleLatencyCheck* : + (Java system property: **zookeeper.request_stale_latency_check**) + **New in 3.6.0:** + When enabled, a request is considered stale if the request latency is higher than its associated session timeout. Disabled by default. + +* *requestStaleConnectionCheck* : + (Java system property: **zookeeper.request_stale_connection_check**) + **New in 3.6.0:** + When enabled, a request is considered stale if the request's connection has closed. Enabled by default. + +* *zookeeper.request_throttler.shutdownTimeout* : + (Java system property only) + **New in 3.6.0:** + The time (in milliseconds) the RequestThrottler waits for the request queue to drain during shutdown before it shuts down forcefully. The default is 10000. + +* *advancedFlowControlEnabled* : + (Java system property: **zookeeper.netty.advancedFlowControl.enabled**) + Using accurate flow control in netty based on the status of ZooKeeper + pipeline to avoid direct buffer OOM. It will disable the AUTO_READ in + Netty. 
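+
+  As an illustration of how the *flushDelay*, *maxWriteQueuePollTime* and
+  *maxBatchSize* options described above interact (example values only, not
+  recommendations, and they can alternatively be set via the corresponding Java
+  system properties): with the settings below, the commit log is flushed once 1000
+  transactions have been batched, once 15 ms have passed since the first queued
+  transaction, or once no new requests have arrived for 5 ms, whichever happens
+  first.
+
+      # illustrative values only
+      flushDelay=15
+      maxWriteQueuePollTime=5
+      maxBatchSize=1000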
+ +* *enableEagerACLCheck* : + (Java system property only: **zookeeper.enableEagerACLCheck**) + When set to "true", enables eager ACL check on write requests on each local + server before sending the requests to quorum. Default is "false". + +* *maxConcurrentSnapSyncs* : + (Java system property: **zookeeper.leader.maxConcurrentSnapSyncs**) + The maximum number of snap syncs a leader or a follower can serve at the same + time. The default is 10. + +* *maxConcurrentDiffSyncs* : + (Java system property: **zookeeper.leader.maxConcurrentDiffSyncs**) + The maximum number of diff syncs a leader or a follower can serve at the same + time. The default is 100. + +* *digest.enabled* : + (Java system property only: **zookeeper.digest.enabled**) + **New in 3.6.0:** + The digest feature is added to detect the data inconsistency inside + ZooKeeper when loading database from disk, catching up and following + leader, its doing incrementally hash check for the DataTree based on + the adHash paper mentioned in + + https://cseweb.ucsd.edu/~daniele/papers/IncHash.pdf + + The idea is simple, the hash value of DataTree will be updated incrementally + based on the changes to the set of data. When the leader is preparing the txn, + it will pre-calculate the hash of the tree based on the changes happened with + formula: + + current_hash = current_hash + hash(new node data) - hash(old node data) + + If it’s creating a new node, the hash(old node data) will be 0, and if it’s a + delete node op, the hash(new node data) will be 0. + + This hash will be associated with each txn to represent the expected hash value + after applying the txn to the data tree, it will be sent to followers with + original proposals. Learner will compare the actual hash value with the one in + the txn after applying the txn to the data tree, and report mismatch if it’s not + the same. + + These digest value will also be persisted with each txn and snapshot on the disk, + so when servers restarted and load data from disk, it will compare and see if + there is hash mismatch, which will help detect data loss issue on disk. + + For the actual hash function, we’re using CRC internally, it’s not a collisionless + hash function, but it’s more efficient compared to collisionless hash, and the + collision possibility is really really rare and can already meet our needs here. + + This feature is backward and forward compatible, so it can safely roll upgrade, + downgrade, enabled and later disabled without any compatible issue. Here are the + scenarios have been covered and tested: + + 1. When leader runs with new code while follower runs with old one, the digest will + be appended to the end of each txn, follower will only read header and txn data, + digest value in the txn will be ignored. It won't affect the follower reads and + processes the next txn. + 2. When leader runs with old code while follower runs with new one, the digest won't + be sent with txn, when follower tries to read the digest, it will throw EOF which + is caught and handled gracefully with digest value set to null. + 3. When loading old snapshot with new code, it will throw IOException when trying to + read the non-exist digest value, and the exception will be caught and digest will + be set to null, which means we won't compare digest when loading this snapshot, + which is expected to happen during rolling upgrade + 4. 
When loading new snapshot with old code, it will finish successfully after deserializing + the data tree, the digest value at the end of snapshot file will be ignored + 5. The scenarios of rolling restart with flags change are similar to the 1st and 2nd + scenarios discussed above, if the leader enabled but follower not, digest value will + be ignored, and follower won't compare the digest during runtime; if leader disabled + but follower enabled, follower will get EOF exception which is handled gracefully. + + Note: the current digest calculation excluded nodes under /zookeeper + due to the potential inconsistency in the /zookeeper/quota stat node, + we can include that after that issue is fixed. + + By default, this feature is enabled, set "false" to disable it. + +* *snapshot.compression.method* : + (Java system property: **zookeeper.snapshot.compression.method**) + **New in 3.6.0:** + This property controls whether or not ZooKeeper should compress snapshots + before storing them on disk (see [ZOOKEEPER-3179](https://issues.apache.org/jira/browse/ZOOKEEPER-3179)). + Possible values are: + - "": Disabled (no snapshot compression). This is the default behavior. + - "gz": See [gzip compression](https://en.wikipedia.org/wiki/Gzip). + - "snappy": See [Snappy compression](https://en.wikipedia.org/wiki/Snappy_(compression)). + +* *snapshot.trust.empty* : + (Java system property: **zookeeper.snapshot.trust.empty**) + **New in 3.5.6:** + This property controls whether or not ZooKeeper should treat missing + snapshot files as a fatal state that can't be recovered from. + Set to true to allow ZooKeeper servers recover without snapshot + files. This should only be set during upgrading from old versions of + ZooKeeper (3.4.x, pre 3.5.3) where ZooKeeper might only have transaction + log files but without presence of snapshot files. If the value is set + during upgrade, we recommend setting the value back to false after upgrading + and restart ZooKeeper process so ZooKeeper can continue normal data + consistency check during recovery process. + Default value is false. + +* *audit.enable* : + (Java system property: **zookeeper.audit.enable**) + **New in 3.6.0:** + By default audit logs are disabled. Set to "true" to enable it. Default value is "false". + See the [ZooKeeper audit logs](zookeeperAuditLogs.html) for more information. + +* *audit.impl.class* : + (Java system property: **zookeeper.audit.impl.class**) + **New in 3.6.0:** + Class to implement the audit logger. By default logback based audit logger org.apache.zookeeper.audit + .Slf4jAuditLogger is used. + See the [ZooKeeper audit logs](zookeeperAuditLogs.html) for more information. + +* *largeRequestMaxBytes* : + (Java system property: **zookeeper.largeRequestMaxBytes**) + **New in 3.6.0:** + The maximum number of bytes of all inflight large request. The connection will be closed if a coming large request causes the limit exceeded. The default is 100 * 1024 * 1024. + +* *largeRequestThreshold* : + (Java system property: **zookeeper.largeRequestThreshold**) + **New in 3.6.0:** + The size threshold after which a request is considered a large request. If it is -1, then all requests are considered small, effectively turning off large request throttling. The default is -1. + +* *outstandingHandshake.limit* + (Java system property only: **zookeeper.netty.server.outstandingHandshake.limit**) + The maximum in-flight TLS handshake connections could have in ZooKeeper, + the connections exceed this limit will be rejected before starting handshake. 
+
    This setting doesn't limit the maximum TLS concurrency, but helps avoid a herd
    effect due to TLS handshake timeouts when there are too many in-flight TLS
    handshakes. Setting it to something like 250 is usually enough to avoid the herd effect.

* *netty.server.earlyDropSecureConnectionHandshakes*
    (Java system property: **zookeeper.netty.server.earlyDropSecureConnectionHandshakes**)
    If the ZooKeeper server is not fully started, drop TCP connections before performing the TLS handshake.
    This is useful in order to prevent flooding the server with many concurrent TLS handshakes after a restart.
    Please note that if you enable this flag the server won't answer 'ruok' commands if it is not fully started.

    The behaviour of dropping the connection was introduced in ZooKeeper 3.7, where it was not possible to disable it.
    Since 3.7.1 and 3.8.0 this feature is disabled by default.

* *throttledOpWaitTime*
    (Java system property: **zookeeper.throttled_op_wait_time**)
    The time in the RequestThrottler queue beyond which a request will be marked as throttled.
    A throttled request will not be processed other than being fed down the pipeline of the server it belongs to,
    in order to preserve the order of all requests.
    The FinalProcessor will issue an error response (new error code: ZTHROTTLEDOP) for these unprocessed requests.
    The intent is for the clients not to retry them immediately.
    When set to 0, no requests will be throttled. The default is 0.

* *learner.closeSocketAsync*
    (Java system property: **zookeeper.learner.closeSocketAsync**)
    (Java system property: **learner.closeSocketAsync**)(Added for backward compatibility)
    **New in 3.7.0:**
    When enabled, a learner will close the quorum socket asynchronously. This is useful for TLS connections where closing a socket might take a long time, block the shutdown process, potentially delay a new leader election, and leave the quorum unavailable. Closing the socket asynchronously avoids blocking the shutdown process despite the long socket closing time, and a new leader election can be started while the socket is being closed.
    The default is false.

* *leader.closeSocketAsync*
    (Java system property: **zookeeper.leader.closeSocketAsync**)
    (Java system property: **leader.closeSocketAsync**)(Added for backward compatibility)
    **New in 3.7.0:**
    When enabled, the leader will close a quorum socket asynchronously. This is useful for TLS connections where closing a socket might take a long time. If disconnecting a follower is initiated in ping() because of a failed SyncLimitCheck, then the long socket closing time will block the sending of pings to other followers. Without receiving pings, the other followers will not send session information to the leader, which causes sessions to expire. Setting this flag to true ensures that pings will be sent regularly.
    The default is false.

* *learner.asyncSending*
    (Java system property: **zookeeper.learner.asyncSending**)
    (Java system property: **learner.asyncSending**)(Added for backward compatibility)
    **New in 3.7.0:**
    Sending and receiving packets in the Learner used to be done synchronously in a critical section. An untimely network issue could cause the followers to hang (see [ZOOKEEPER-3575](https://issues.apache.org/jira/browse/ZOOKEEPER-3575) and [ZOOKEEPER-4074](https://issues.apache.org/jira/browse/ZOOKEEPER-4074)). The new design moves sending packets in the Learner to a separate thread and sends the packets asynchronously. 
The new design is enabled with this parameter (learner.asyncSending). + The default is false. + +* *forward_learner_requests_to_commit_processor_disabled* + (Java system property: **zookeeper.forward_learner_requests_to_commit_processor_disabled**) + When this property is set, the requests from learners won't be enqueued to + CommitProcessor queue, which will help save the resources and GC time on + leader. + + The default value is false. + +* *serializeLastProcessedZxid.enabled* + (Java system property: **zookeeper.serializeLastProcessedZxid.enabled**) + **New in 3.9.0:** + If enabled, ZooKeeper serializes the lastProcessedZxid when snapshot and deserializes it + when restore. Defaults to true. Needs to be enabled for performing snapshot and restore + via admin server commands, as there is no snapshot file name to extract the lastProcessedZxid. + + This feature is backward and forward compatible. Here are the different scenarios. + + 1. Snapshot triggered by server internally + a. When loading old snapshot with new code, it will throw EOFException when trying to + read the non-exist lastProcessedZxid value, and the exception will be caught. + The lastProcessedZxid will be set using the snapshot file name. + + b. When loading new snapshot with old code, it will finish successfully after deserializing the + digest value, the lastProcessedZxid at the end of snapshot file will be ignored. + The lastProcessedZxid will be set using the snapshot file name. + + 2. Sync up between leader and follower + The lastProcessedZxid will not be serialized by leader and deserialized by follower + in both new and old code. It will be set to the lastProcessedZxid sent from leader + via QuorumPacket. + + 3. Snapshot triggered via admin server APIs + The feature flag need to be enabled for the snapshot command to work. + + + +#### Cluster Options + +The options in this section are designed for use with an ensemble +of servers -- that is, when deploying clusters of servers. + +* *electionAlg* : + (No Java system property) + Election implementation to use. A value of "1" corresponds to the + non-authenticated UDP-based version of fast leader election, "2" + corresponds to the authenticated UDP-based version of fast + leader election, and "3" corresponds to TCP-based version of + fast leader election. Algorithm 3 was made default in 3.2.0 and + prior versions (3.0.0 and 3.1.0) were using algorithm 1 and 2 as well. + ###### Note + >The implementations of leader election 1, and 2 were + **deprecated** in 3.4.0. Since 3.6.0 only FastLeaderElection is available, + in case of upgrade you have to shut down all of your servers and + restart them with electionAlg=3 (or by removing the line from the configuration file). > + +* *maxTimeToWaitForEpoch* : + (Java system property: **zookeeper.leader.maxTimeToWaitForEpoch**) + **New in 3.6.0:** + The maximum time to wait for epoch from voters when activating + leader. If leader received a LOOKING notification from one of + its voters, and it hasn't received epoch packets from majority + within maxTimeToWaitForEpoch, then it will goto LOOKING and + elect leader again. + This can be tuned to reduce the quorum or server unavailable + time, it can be set to be much smaller than initLimit * tickTime. + In cross datacenter environment, it can be set to something + like 2s. + +* *initLimit* : + (No Java system property) + Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to + connect and sync to a leader. 
Increased this value as needed, if + the amount of data managed by ZooKeeper is large. + +* *connectToLearnerMasterLimit* : + (Java system property: zookeeper.**connectToLearnerMasterLimit**) + Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to + connect to the leader after leader election. Defaults to the value of initLimit. + Use when initLimit is high so connecting to learner master doesn't result in higher timeout. + +* *leaderServes* : + (Java system property: zookeeper.**leaderServes**) + Leader accepts client connections. Default value is "yes". + The leader machine coordinates updates. For higher update + throughput at the slight expense of read throughput the leader + can be configured to not accept clients and focus on + coordination. The default to this option is yes, which means + that a leader will accept client connections. + ###### Note + >Turning on leader selection is highly recommended when + you have more than three ZooKeeper servers in an ensemble. + +* *server.x=[hostname]:nnnnn[:nnnnn] etc* : + (No Java system property) + servers making up the ZooKeeper ensemble. When the server + starts up, it determines which server it is by looking for the + file *myid* in the data directory. That file + contains the server number, in ASCII, and it should match + **x** in **server.x** in the left hand side of this + setting. + The list of servers that make up ZooKeeper servers that is + used by the clients must match the list of ZooKeeper servers + that each ZooKeeper server has. + There are two port numbers **nnnnn**. + The first followers used to connect to the leader, and the second is for + leader election. If you want to test multiple servers on a single machine, then + different ports can be used for each server. + + + + Since ZooKeeper 3.6.0 it is possible to specify **multiple addresses** for each + ZooKeeper server (see [ZOOKEEPER-3188](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3188)). + To enable this feature, you must set the *multiAddress.enabled* configuration property + to *true*. This helps to increase availability and adds network level + resiliency to ZooKeeper. When multiple physical network interfaces are used + for the servers, ZooKeeper is able to bind on all interfaces and runtime switching + to a working interface in case a network error. The different addresses can be specified + in the config using a pipe ('|') character. A valid configuration using multiple addresses looks like: + + server.1=zoo1-net1:2888:3888|zoo1-net2:2889:3889 + server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889 + server.3=zoo3-net1:2888:3888|zoo3-net2:2889:3889 + + + ###### Note + >By enabling this feature, the Quorum protocol (ZooKeeper Server-Server protocol) will change. + The users will not notice this and when anyone starts a ZooKeeper cluster with the new config, + everything will work normally. However, it's not possible to enable this feature and specify + multiple addresses during a rolling upgrade if the old ZooKeeper cluster didn't support the + *multiAddress* feature (and the new Quorum protocol). In case if you need this feature but you + also need to perform a rolling upgrade from a ZooKeeper cluster older than *3.6.0*, then you + first need to do the rolling upgrade without enabling the MultiAddress feature and later make + a separate rolling restart with the new configuration where **multiAddress.enabled** is set + to **true** and multiple addresses are provided. 
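To see how these cluster options fit together, the following is a minimal `zoo.cfg` sketch for a three-server ensemble; the hostnames `zoo1`..`zoo3`, ports, and paths are illustrative placeholders rather than required values:

```
# Minimal replicated configuration (illustrative values only)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# server.<myid>=<hostname>:<quorum port>:<leader election port>
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```

On each host the *myid* file in the data directory must contain just that server's number, so the machine acting as **server.1** would have a *myid* file whose only content is `1`.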
+ +* *syncLimit* : + (No Java system property) + Amount of time, in ticks (see [tickTime](#id_tickTime)), to allow followers to sync + with ZooKeeper. If followers fall too far behind a leader, they + will be dropped. + +* *group.x=nnnnn[:nnnnn]* : + (No Java system property) + Enables a hierarchical quorum construction."x" is a group identifier + and the numbers following the "=" sign correspond to server identifiers. + The left-hand side of the assignment is a colon-separated list of server + identifiers. Note that groups must be disjoint and the union of all groups + must be the ZooKeeper ensemble. + You will find an example [here](zookeeperHierarchicalQuorums.html) + +* *weight.x=nnnnn* : + (No Java system property) + Used along with "group", it assigns a weight to a server when + forming quorums. Such a value corresponds to the weight of a server + when voting. There are a few parts of ZooKeeper that require voting + such as leader election and the atomic broadcast protocol. By default + the weight of server is 1. If the configuration defines groups, but not + weights, then a value of 1 will be assigned to all servers. + You will find an example [here](zookeeperHierarchicalQuorums.html) + +* *cnxTimeout* : + (Java system property: zookeeper.**cnxTimeout**) + Sets the timeout value for opening connections for leader election notifications. + Only applicable if you are using electionAlg 3. + ###### Note + >Default value is 5 seconds. + +* *quorumCnxnTimeoutMs* : + (Java system property: zookeeper.**quorumCnxnTimeoutMs**) + Sets the read timeout value for the connections for leader election notifications. + Only applicable if you are using electionAlg 3. + ######Note + >Default value is -1, which will then use the syncLimit * tickTime as the timeout. + +* *standaloneEnabled* : + (No Java system property) + **New in 3.5.0:** + When set to false, a single server can be started in replicated + mode, a lone participant can run with observers, and a cluster + can reconfigure down to one node, and up from one node. The + default is true for backwards compatibility. It can be set + using QuorumPeerConfig's setStandaloneEnabled method or by + adding "standaloneEnabled=false" or "standaloneEnabled=true" + to a server's config file. + +* *reconfigEnabled* : + (No Java system property) + **New in 3.5.3:** + This controls the enabling or disabling of + [Dynamic Reconfiguration](zookeeperReconfig.html) feature. When the feature + is enabled, users can perform reconfigure operations through + the ZooKeeper client API or through ZooKeeper command line tools + assuming users are authorized to perform such operations. + When the feature is disabled, no user, including the super user, + can perform a reconfiguration. Any attempt to reconfigure will return an error. + **"reconfigEnabled"** option can be set as + **"reconfigEnabled=false"** or + **"reconfigEnabled=true"** + to a server's config file, or using QuorumPeerConfig's + setReconfigEnabled method. The default value is false. + If present, the value should be consistent across every server in + the entire ensemble. Setting the value as true on some servers and false + on other servers will cause inconsistent behavior depending on which server + is elected as leader. If the leader has a setting of + **"reconfigEnabled=true"**, then the ensemble + will have reconfig feature enabled. If the leader has a setting of + **"reconfigEnabled=false"**, then the ensemble + will have reconfig feature disabled. 
It is thus recommended having a consistent + value for **"reconfigEnabled"** across servers + in the ensemble. + +* *4lw.commands.whitelist* : + (Java system property: **zookeeper.4lw.commands.whitelist**) + **New in 3.5.3:** + A list of comma separated [Four Letter Words](#sc_4lw) + commands that user wants to use. A valid Four Letter Words + command must be put in this list else ZooKeeper server will + not enable the command. + By default the whitelist only contains "srvr" command + which zkServer.sh uses. The rest of four-letter word commands are disabled + by default: attempting to use them will gain a response + ".... is not executed because it is not in the whitelist." + Here's an example of the configuration that enables stat, ruok, conf, and isro + command while disabling the rest of Four Letter Words command: + + 4lw.commands.whitelist=stat, ruok, conf, isro + + +If you really need enable all four-letter word commands by default, you can use +the asterisk option so you don't have to include every command one by one in the list. +As an example, this will enable all four-letter word commands: + + + 4lw.commands.whitelist=* + + +* *tcpKeepAlive* : + (Java system property: **zookeeper.tcpKeepAlive**) + **New in 3.5.4:** + Setting this to true sets the TCP keepAlive flag on the + sockets used by quorum members to perform elections. + This will allow for connections between quorum members to + remain up when there is network infrastructure that may + otherwise break them. Some NATs and firewalls may terminate + or lose state for long-running or idle connections. + Enabling this option relies on OS level settings to work + properly, check your operating system's options regarding TCP + keepalive for more information. Defaults to + **false**. + +* *clientTcpKeepAlive* : + (Java system property: **zookeeper.clientTcpKeepAlive**) + **New in 3.6.1:** + Setting this to true sets the TCP keepAlive flag on the + client sockets. Some broken network infrastructure may lose + the FIN packet that is sent from closing client. These never + closed client sockets cause OS resource leak. Enabling this + option terminates these zombie sockets by idle check. + Enabling this option relies on OS level settings to work + properly, check your operating system's options regarding TCP + keepalive for more information. Defaults to **false**. Please + note the distinction between it and **tcpKeepAlive**. It is + applied for the client sockets while **tcpKeepAlive** is for + the sockets used by quorum members. Currently this option is + only available when default `NIOServerCnxnFactory` is used. + +* *electionPortBindRetry* : + (Java system property only: **zookeeper.electionPortBindRetry**) + Property set max retry count when Zookeeper server fails to bind + leader election port. Such errors can be temporary and recoverable, + such as DNS issue described in [ZOOKEEPER-3320](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3320), + or non-retryable, such as port already in use. + In case of transient errors, this property can improve availability + of Zookeeper server and help it to self recover. + Default value 3. In container environment, especially in Kubernetes, + this value should be increased or set to 0(infinite retry) to overcome issues + related to DNS name resolving. 
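Several of the options above, for example *electionPortBindRetry*, are Java system properties only and cannot be set in `zoo.cfg`. One way to pass them when starting the server through `zkServer.sh` is via `SERVER_JVMFLAGS`, for instance in a `conf/java.env` file; the value below is only an illustrative choice for a container environment:

```
# conf/java.env (picked up by the standard start scripts if present)
# Retry binding the leader election port indefinitely, e.g. while DNS entries settle in Kubernetes.
SERVER_JVMFLAGS="-Dzookeeper.electionPortBindRetry=0 $SERVER_JVMFLAGS"
```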
+ + +* *observer.reconnectDelayMs* : + (Java system property: **zookeeper.observer.reconnectDelayMs**) + When observer loses its connection with the leader, it waits for the + specified value before trying to reconnect with the leader so that + the entire observer fleet won't try to run leader election and reconnect + to the leader at once. + Defaults to 0 ms. + +* *observer.election.DelayMs* : + (Java system property: **zookeeper.observer.election.DelayMs**) + Delay the observer's participation in a leader election upon disconnect + so as to prevent unexpected additional load on the voting peers during + the process. Defaults to 200 ms. + +* *localSessionsEnabled* and *localSessionsUpgradingEnabled* : + **New in 3.5:** + Optional value is true or false. Their default values are false. + Turning on the local session feature by setting *localSessionsEnabled=true*. Turning on + *localSessionsUpgradingEnabled* can upgrade a local session to a global session automatically as required (e.g. creating ephemeral nodes), + which only matters when *localSessionsEnabled* is enabled. + + + +#### Encryption, Authentication, Authorization Options + +The options in this section allow control over +encryption/authentication/authorization performed by the service. + +Beside this page, you can also find useful information about client side configuration in the +[Programmers Guide](zookeeperProgrammers.html#sc_java_client_configuration). +The ZooKeeper Wiki also has useful pages about [ZooKeeper SSL support](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide), +and [SASL authentication for ZooKeeper](https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+and+SASL). + +* *DigestAuthenticationProvider.enabled* : + (Java system property: **zookeeper.DigestAuthenticationProvider.enabled**) + **New in 3.7:** + Determines whether the `digest` authentication provider is + enabled. The default value is **true** for backwards + compatibility, but it may be a good idea to disable this provider + if not used, as it can result in misleading entries appearing in + audit logs + (see [ZOOKEEPER-3979](https://issues.apache.org/jira/browse/ZOOKEEPER-3979)) + +* *DigestAuthenticationProvider.superDigest* : + (Java system property: **zookeeper.DigestAuthenticationProvider.superDigest**) + By default this feature is **disabled** + **New in 3.2:** + Enables a ZooKeeper ensemble administrator to access the + znode hierarchy as a "super" user. In particular no ACL + checking occurs for a user authenticated as + super. + org.apache.zookeeper.server.auth.DigestAuthenticationProvider + can be used to generate the superDigest, call it with + one parameter of "super:". Provide the + generated "super:" as the system property value + when starting each server of the ensemble. + When authenticating to a ZooKeeper server (from a + ZooKeeper client) pass a scheme of "digest" and authdata + of "super:". Note that digest auth passes + the authdata in plaintext to the server, it would be + prudent to use this authentication method only on + localhost (not over the network) or over an encrypted + connection. + +* *DigestAuthenticationProvider.digestAlg* : + (Java system property: **zookeeper.DigestAuthenticationProvider.digestAlg**) + **New in 3.7.0:** + Set ACL digest algorithm. The default value is: `SHA1` which will be deprecated in the future for security issues. + Set this property the same value in all the servers. + + - How to support other more algorithms? 
+ - modify the `java.security` configuration file under `$JAVA_HOME/jre/lib/security/java.security` by specifying: + `security.provider.=`. + + ``` + For example: + set zookeeper.DigestAuthenticationProvider.digestAlg=RipeMD160 + security.provider.3=org.bouncycastle.jce.provider.BouncyCastleProvider + ``` + + - copy the jar file to `$JAVA_HOME/jre/lib/ext/`. + + ``` + For example: + copy bcprov-jdk18on-1.60.jar to $JAVA_HOME/jre/lib/ext/ + ``` + + - How to migrate from one digest algorithm to another? + - 1. Regenerate `superDigest` when migrating to new algorithm. + - 2. `SetAcl` for a znode which already had a digest auth of old algorithm. + +* *IPAuthenticationProvider.usexforwardedfor* : + (Java system property: **zookeeper.IPAuthenticationProvider.usexforwardedfor**) + **New in 3.9.3:** + IPAuthenticationProvider uses the client IP address to authenticate the user. By + default it reads the **Host** HTTP header to detect client IP address. In some + proxy configurations the proxy server adds the **X-Forwarded-For** header to + the request in order to provide the IP address of the original client request. + By enabling **usexforwardedfor** ZooKeeper setting, **X-Forwarded-For** will be preferred + over the standard **Host** header. + Default value is **false**. + +* *X509AuthenticationProvider.superUser* : + (Java system property: **zookeeper.X509AuthenticationProvider.superUser**) + The SSL-backed way to enable a ZooKeeper ensemble + administrator to access the znode hierarchy as a "super" user. + When this parameter is set to an X500 principal name, only an + authenticated client with that principal will be able to bypass + ACL checking and have full privileges to all znodes. + +* *zookeeper.superUser* : + (Java system property: **zookeeper.superUser**) + Similar to **zookeeper.X509AuthenticationProvider.superUser** + but is generic for SASL based logins. It stores the name of + a user that can access the znode hierarchy as a "super" user. + You can specify multiple SASL super users using the + **zookeeper.superUser.[suffix]** notation, e.g.: + `zookeeper.superUser.1=...`. + +* *ssl.authProvider* : + (Java system property: **zookeeper.ssl.authProvider**) + Specifies a subclass of **org.apache.zookeeper.auth.X509AuthenticationProvider** + to use for secure client authentication. This is useful in + certificate key infrastructures that do not use JKS. It may be + necessary to extend **javax.net.ssl.X509KeyManager** and **javax.net.ssl.X509TrustManager** + to get the desired behavior from the SSL stack. To configure the + ZooKeeper server to use the custom provider for authentication, + choose a scheme name for the custom AuthenticationProvider and + set the property **zookeeper.authProvider.[scheme]** to the fully-qualified class name of the custom + implementation. This will load the provider into the ProviderRegistry. + Then set this property **zookeeper.ssl.authProvider=[scheme]** and that provider + will be used for secure authentication. + +* *zookeeper.ensembleAuthName* : + (Java system property only: **zookeeper.ensembleAuthName**) + **New in 3.6.0:** + Specify a list of comma-separated valid names/aliases of an ensemble. A client + can provide the ensemble name it intends to connect as the credential for scheme "ensemble". The EnsembleAuthenticationProvider will check the credential against + the list of names/aliases of the ensemble that receives the connection request. + If the credential is not in the list, the connection request will be refused. 
+ This prevents a client accidentally connecting to a wrong ensemble. + +* *sessionRequireClientSASLAuth* : + (Java system property: **zookeeper.sessionRequireClientSASLAuth**) + **New in 3.6.0:** + When set to **true**, ZooKeeper server will only accept connections and requests from clients + that have authenticated with server via SASL. Clients that are not configured with SASL + authentication, or configured with SASL but failed authentication (i.e. with invalid credential) + will not be able to establish a session with server. A typed error code (-124) will be delivered + in such case, both Java and C client will close the session with server thereafter, + without further attempts on retrying to reconnect. + + This configuration is shorthand for **enforce.auth.enabled=true** and **enforce.auth.scheme=sasl** + + By default, this feature is disabled. Users who would like to opt-in can enable the feature + by setting **sessionRequireClientSASLAuth** to **true**. + + This feature overrules the zookeeper.allowSaslFailedClients option, so even if server is + configured to allow clients that fail SASL authentication to login, client will not be able to + establish a session with server if this feature is enabled. + +* *enforce.auth.enabled* : + (Java system property : **zookeeper.enforce.auth.enabled**) + **New in 3.7.0:** + When set to **true**, ZooKeeper server will only accept connections and requests from clients + that have authenticated with server via configured auth scheme. Authentication schemes + can be configured using property enforce.auth.schemes. Clients that are not + configured with the any of the auth scheme configured at server or configured but failed authentication (i.e. with invalid credential) + will not be able to establish a session with server. A typed error code (-124) will be delivered + in such case, both Java and C client will close the session with server thereafter, + without further attempts on retrying to reconnect. + + By default, this feature is disabled. Users who would like to opt-in can enable the feature + by setting **enforce.auth.enabled** to **true**. + + When **enforce.auth.enabled=true** and **enforce.auth.schemes=sasl** then + zookeeper.allowSaslFailedClients configuration is overruled. So even if server is + configured to allow clients that fail SASL authentication to login, client will not be able to + establish a session with server if this feature is enabled with sasl as authentication scheme. + +* *enforce.auth.schemes* : + (Java system property : **zookeeper.enforce.auth.schemes**) + **New in 3.7.0:** + Comma separated list of authentication schemes. Clients must be authenticated with at least one + authentication scheme before doing any zookeeper operations. + This property is used only when **enforce.auth.enabled** is to **true**. + +* *sslQuorum* : + (Java system property: **zookeeper.sslQuorum**) + **New in 3.5.5:** + Enables encrypted quorum communication. Default is `false`. When enabling this feature, please also consider enabling *leader.closeSocketAsync* + and *learner.closeSocketAsync* to avoid issues associated with the potentially long socket closing time when shutting down an SSL connection. 
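As a concrete sketch of the client authentication enforcement options above (*enforce.auth.enabled*, *enforce.auth.schemes*, and the *sessionRequireClientSASLAuth* shorthand), the following `zoo.cfg` fragment requires every client to authenticate via SASL before performing any operation; it assumes a SASL authentication provider is configured as described under *authProvider* below:

```
# Require SASL-authenticated clients (explicit form)
enforce.auth.enabled=true
enforce.auth.schemes=sasl

# Equivalent shorthand (use one form or the other)
# sessionRequireClientSASLAuth=true
```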
+ +* *ssl.keyStore.location and ssl.keyStore.password* and *ssl.quorum.keyStore.location* and *ssl.quorum.keyStore.password* : + (Java system properties: **zookeeper.ssl.keyStore.location** and **zookeeper.ssl.keyStore.password** and **zookeeper.ssl.quorum.keyStore.location** and **zookeeper.ssl.quorum.keyStore.password**) + **New in 3.5.5:** + Specifies the file path to a Java keystore containing the local + credentials to be used for client and quorum TLS connections, and the + password to unlock the file. + +* *ssl.keyStore.passwordPath* and *ssl.quorum.keyStore.passwordPath* : + (Java system properties: **zookeeper.ssl.keyStore.passwordPath** and **zookeeper.ssl.quorum.keyStore.passwordPath**) + **New in 3.8.0:** + Specifies the file path that contains the keystore password. Reading the password from a file takes precedence over + the explicit password property. + +* *ssl.keyStore.type* and *ssl.quorum.keyStore.type* : + (Java system properties: **zookeeper.ssl.keyStore.type** and **zookeeper.ssl.quorum.keyStore.type**) + **New in 3.5.5:** + Specifies the file format of client and quorum keystores. Values: JKS, PEM, PKCS12 or null (detect by filename). + Default: null. + **New in 3.5.10, 3.6.3, 3.7.0:** + The format BCFKS was added. + +* *ssl.trustStore.location* and *ssl.trustStore.password* and *ssl.quorum.trustStore.location* and *ssl.quorum.trustStore.password* : + (Java system properties: **zookeeper.ssl.trustStore.location** and **zookeeper.ssl.trustStore.password** and **zookeeper.ssl.quorum.trustStore.location** and **zookeeper.ssl.quorum.trustStore.password**) + **New in 3.5.5:** + Specifies the file path to a Java truststore containing the remote + credentials to be used for client and quorum TLS connections, and the + password to unlock the file. + +* *ssl.trustStore.passwordPath* and *ssl.quorum.trustStore.passwordPath* : + (Java system properties: **zookeeper.ssl.trustStore.passwordPath** and **zookeeper.ssl.quorum.trustStore.passwordPath**) + **New in 3.8.0:** + Specifies the file path that contains the truststore password. Reading the password from a file takes precedence over + the explicit password property. + +* *ssl.trustStore.type* and *ssl.quorum.trustStore.type* : + (Java system properties: **zookeeper.ssl.trustStore.type** and **zookeeper.ssl.quorum.trustStore.type**) + **New in 3.5.5:** + Specifies the file format of client and quorum trustStores. Values: JKS, PEM, PKCS12 or null (detect by filename). + Default: null. + **New in 3.5.10, 3.6.3, 3.7.0:** + The format BCFKS was added. + +* *ssl.protocol* and *ssl.quorum.protocol* : + (Java system properties: **zookeeper.ssl.protocol** and **zookeeper.ssl.quorum.protocol**) + **New in 3.5.5:** + Specifies to protocol to be used in client and quorum TLS negotiation. + Default: TLSv1.3 or TLSv1.2 depending on Java runtime version being used. + +* *ssl.enabledProtocols* and *ssl.quorum.enabledProtocols* : + (Java system properties: **zookeeper.ssl.enabledProtocols** and **zookeeper.ssl.quorum.enabledProtocols**) + **New in 3.5.5:** + Specifies the enabled protocols in client and quorum TLS negotiation. + Default: TLSv1.3, TLSv1.2 if value of `protocol` property is TLSv1.3. TLSv1.2 if `protocol` is TLSv1.2. + +* *ssl.ciphersuites* and *ssl.quorum.ciphersuites* : + (Java system properties: **zookeeper.ssl.ciphersuites** and **zookeeper.ssl.quorum.ciphersuites**) + **New in 3.5.5:** + Specifies the enabled cipher suites to be used in client and quorum TLS negotiation. 
+ Default: Enabled cipher suites depend on the Java runtime version being used. + +* *ssl.context.supplier.class* and *ssl.quorum.context.supplier.class* : + (Java system properties: **zookeeper.ssl.context.supplier.class** and **zookeeper.ssl.quorum.context.supplier.class**) + **New in 3.5.5:** + Specifies the class to be used for creating SSL context in client and quorum SSL communication. + This allows you to use custom SSL context and implement the following scenarios: + 1. Use hardware keystore, loaded in using PKCS11 or something similar. + 2. You don't have access to the software keystore, but can retrieve an already-constructed SSLContext from their container. + Default: null + +* *ssl.hostnameVerification* and *ssl.quorum.hostnameVerification* : + (Java system properties: **zookeeper.ssl.hostnameVerification** and **zookeeper.ssl.quorum.hostnameVerification**) + **New in 3.5.5:** + Specifies whether the hostname verification is enabled in client and quorum TLS negotiation process. + Disabling it only recommended for testing purposes. + Default: true + +* *ssl.clientHostnameVerification* and *ssl.quorum.clientHostnameVerification* : + (Java system properties: **zookeeper.ssl.clientHostnameVerification** and **zookeeper.ssl.quorum.clientHostnameVerification**) + **New in 3.9.4:** + Specifies whether the client's hostname verification is enabled in client and quorum TLS negotiation process. + This option requires the corresponding *hostnameVerification* option to be `true`, or it will be ignored. + Default: true for quorum, false for clients + +* *ssl.crl* and *ssl.quorum.crl* : + (Java system properties: **zookeeper.ssl.crl** and **zookeeper.ssl.quorum.crl**) + **New in 3.5.5:** + Specifies whether Certificate Revocation List is enabled in client and quorum TLS protocols. + Default: false + +* *ssl.ocsp* and *ssl.quorum.ocsp* : + (Java system properties: **zookeeper.ssl.ocsp** and **zookeeper.ssl.quorum.ocsp**) + **New in 3.5.5:** + Specifies whether Online Certificate Status Protocol is enabled in client and quorum TLS protocols. + Default: false + +* *ssl.clientAuth* and *ssl.quorum.clientAuth* : + (Java system properties: **zookeeper.ssl.clientAuth** and **zookeeper.ssl.quorum.clientAuth**) + **Added in 3.5.5, but broken until 3.5.7:** + Specifies options to authenticate ssl connections from clients. Valid values are + + * "none": server will not request client authentication + * "want": server will "request" client authentication + * "need": server will "require" client authentication + + Default: "need" + +* *ssl.handshakeDetectionTimeoutMillis* and *ssl.quorum.handshakeDetectionTimeoutMillis* : + (Java system properties: **zookeeper.ssl.handshakeDetectionTimeoutMillis** and **zookeeper.ssl.quorum.handshakeDetectionTimeoutMillis**) + **New in 3.5.5:** + TBD + +* *ssl.sslProvider* : + (Java system property: **zookeeper.ssl.sslProvider**) + **New in 3.9.0:** + Allows to select SSL provider in the client-server communication when TLS is enabled. Netty-tcnative native library + has been added to ZooKeeper in version 3.9.0 which allows us to use native SSL libraries like OpenSSL on supported + platforms. See the available options in Netty-tcnative documentation. Default value is "JDK". + +* *sslQuorumReloadCertFiles* : + (No Java system property) + **New in 3.5.5, 3.6.0:** + Allows Quorum SSL keyStore and trustStore reloading when the certificates on the filesystem change without having to restart the ZK process. 
Default: false + +* *client.certReload* : + (Java system property: **zookeeper.client.certReload**) + **New in 3.7.2, 3.8.1, 3.9.0:** + Allows client SSL keyStore and trustStore reloading when the certificates on the filesystem change without having to restart the ZK process. Default: false + +* *client.portUnification*: + (Java system property: **zookeeper.client.portUnification**) + Specifies that the client port should accept SSL connections + (using the same configuration as the secure client port). + Default: false + +* *authProvider*: + (Java system property: **zookeeper.authProvider**) + You can specify multiple authentication provider classes for ZooKeeper. + Usually you use this parameter to specify the SASL authentication provider + like: `authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider` + +* *kerberos.removeHostFromPrincipal* + (Java system property: **zookeeper.kerberos.removeHostFromPrincipal**) + You can instruct ZooKeeper to remove the host from the client principal name during authentication. + (e.g. the zk/myhost@EXAMPLE.COM client principal will be authenticated in ZooKeeper as zk@EXAMPLE.COM) + Default: false + +* *kerberos.removeRealmFromPrincipal* + (Java system property: **zookeeper.kerberos.removeRealmFromPrincipal**) + You can instruct ZooKeeper to remove the realm from the client principal name during authentication. + (e.g. the zk/myhost@EXAMPLE.COM client principal will be authenticated in ZooKeeper as zk/myhost) + Default: false + +* *kerberos.canonicalizeHostNames* + (Java system property: **zookeeper.kerberos.canonicalizeHostNames**) + **New in 3.7.0:** + Instructs ZooKeeper to canonicalize server host names extracted from *server.x* lines. + This allows using e.g. `CNAME` records to reference servers in configuration files, while still enabling SASL Kerberos authentication between quorum members. + It is essentially the quorum equivalent of the *zookeeper.sasl.client.canonicalize.hostname* property for clients. + The default value is **false** for backwards compatibility. + +* *multiAddress.enabled* : + (Java system property: **zookeeper.multiAddress.enabled**) + **New in 3.6.0:** + Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address) + for each ZooKeeper server instance (this can increase availability when multiple physical + network interfaces can be used parallel in the cluster). Setting this parameter to + **true** will enable this feature. Please note, that you can not enable this feature + during a rolling upgrade if the version of the old ZooKeeper cluster is prior to 3.6.0. + The default value is **false**. + +* *multiAddress.reachabilityCheckTimeoutMs* : + (Java system property: **zookeeper.multiAddress.reachabilityCheckTimeoutMs**) + **New in 3.6.0:** + Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address) + for each ZooKeeper server instance (this can increase availability when multiple physical + network interfaces can be used parallel in the cluster). ZooKeeper will perform ICMP ECHO requests + or try to establish a TCP connection on port 7 (Echo) of the destination host in order to find + the reachable addresses. This happens only if you provide multiple addresses in the configuration. + In this property you can set the timeout in milliseconds for the reachability check. The check happens + in parallel for the different addresses, so the timeout you set here is the maximum time will be taken + by checking the reachability of all addresses. 
+ The default value is **1000**. + + This parameter has no effect, unless you enable the MultiAddress feature by setting *multiAddress.enabled=true*. + +* *fips-mode* : + (Java system property: **zookeeper.fips-mode**) + **New in 3.8.2:** + Enable FIPS compatibility mode in ZooKeeper. If enabled, the following things will be changed in order to comply + with FIPS requirements: + * Custom trust manager (`ZKTrustManager`) that is used for hostname verification will be disabled. As a consequence, + hostname verification is not available in the Quorum protocol, but still can be set in client-server communication. + * DIGEST-MD5 Sasl auth mechanism will be disabled in Quorum and ZooKeeper Sasl clients. Only GSSAPI (Kerberos) + can be used. + + Default: **true** (3.9.0+), **false** (3.8.x) + + + +#### Experimental Options/Features + +New features that are currently considered experimental. + +* *Read Only Mode Server* : + (Java system property: **readonlymode.enabled**) + **New in 3.4.0:** + Setting this value to true enables Read Only Mode server + support (disabled by default). + *localSessionsEnabled* has to be activated to serve clients. + A downgrade of an existing connections is currently not supported. + ROM allows clients sessions which requested ROM support to connect to the + server even when the server might be partitioned from + the quorum. In this mode ROM clients can still read + values from the ZK service, but will be unable to write + values and see changes from other clients. See + ZOOKEEPER-784 for more details. + +* *zookeeper.follower.skipLearnerRequestToNextProcessor* : + (Java system property: **zookeeper.follower.skipLearnerRequestToNextProcessor**) + When our cluster has observers which are connected with ObserverMaster, then turning on this flag might help + you reduce some memory pressure on the Observer Master. If your cluster doesn't have any observers or + they are not connected with ObserverMaster or your Observer's don't make much writes, then using this flag + won't help you. + Currently the change here is guarded behind the flag to help us get more confidence around the memory gains. + In Long run, we might want to remove this flag and set its behavior as the default codepath. + + + +#### Unsafe Options + +The following options can be useful, but be careful when you use +them. The risk of each is explained along with the explanation of what +the variable does. + +* *forceSync* : + (Java system property: **zookeeper.forceSync**) + Requires updates to be synced to media of the transaction + log before finishing processing the update. If this option is + set to no, ZooKeeper will not require updates to be synced to + the media. + +* *jute.maxbuffer* : + (Java system property:**jute.maxbuffer**). + - This option can only be set as a Java system property. + There is no zookeeper prefix on it. It specifies the maximum + size of the data that can be stored in a znode. The unit is: byte. The default is + 0xfffff(1048575) bytes, or just under 1M. + - If this option is changed, the system property must be set on all servers and clients otherwise + problems will arise. 
+
    - When *jute.maxbuffer* on the client side is greater than on the server side and the client writes data
      that exceeds the server-side *jute.maxbuffer*, the server side will get **java.io.IOException: Len error**
    - When *jute.maxbuffer* on the client side is less than on the server side and the client reads data
      that exceeds the client-side *jute.maxbuffer*, the client side will get **java.io.IOException: Unreasonable length**
      or **Packet len is out of range!**
    - This is really a sanity check. ZooKeeper is designed to store data on the order of kilobytes in size.
      In production environments, increasing this property beyond the default value is not recommended for the following reasons:
      - Large znodes cause unwarranted latency spikes and worsen throughput
      - Large znodes make the synchronization time between leader and followers unpredictable and non-convergent (sometimes timing out), making the quorum unstable

* *jute.maxbuffer.extrasize*:
    (Java system property: **zookeeper.jute.maxbuffer.extrasize**)
    **New in 3.5.7:**
    While processing client requests, the ZooKeeper server adds some additional information to
    the requests before persisting them as transactions. Previously this additional information size
    was fixed at 1024 bytes. For many scenarios, especially scenarios where the jute.maxbuffer value
    is more than 1 MB and the request type is multi, this fixed size was insufficient.
    To handle all these scenarios, the additional information size was increased from 1024 bytes
    to the jute.maxbuffer size, and it was also made configurable through jute.maxbuffer.extrasize.
    Generally this property does not need to be configured, as the default value is the most suitable one.

* *skipACL* :
    (Java system property: **zookeeper.skipACL**)
    Skips ACL checks. This results in a boost in throughput,
    but opens up full access to the data tree to everyone.

* *quorumListenOnAllIPs* :
    When set to true the ZooKeeper server will listen
    for connections from its peers on all available IP addresses,
    and not only the address configured in the server list of the
    configuration file. It affects the connections handling the
    ZAB protocol and the Fast Leader Election protocol. Default
    value is **false**.

* *multiAddress.reachabilityCheckEnabled* :
    (Java system property: **zookeeper.multiAddress.reachabilityCheckEnabled**)
    **New in 3.6.0:**
    Since ZooKeeper 3.6.0 you can also [specify multiple addresses](#id_multi_address)
    for each ZooKeeper server instance (this can increase availability when multiple physical
    network interfaces can be used in parallel in the cluster). ZooKeeper will perform ICMP ECHO requests
    or try to establish a TCP connection on port 7 (Echo) of the destination host in order to find
    the reachable addresses. This happens only if you provide multiple addresses in the configuration.
    The reachability check can fail if you hit an ICMP rate limit (e.g. on macOS) when you try to
    start a large cluster (e.g. 11+ ensemble members) on a single machine for testing.

    Default value is **true**. By setting this parameter to 'false' you can disable the reachability checks.
    Please note that disabling the reachability check will prevent the cluster from reconfiguring
    itself properly during network problems, so disabling it is advised only during testing.

    This parameter has no effect, unless you enable the MultiAddress feature by setting *multiAddress.enabled=true*. 
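Because *jute.maxbuffer* is a plain Java system property, any change to it has to be passed on the JVM command line of every server and every client. A sketch using the `SERVER_JVMFLAGS` and `CLIENT_JVMFLAGS` hooks read by the stock scripts follows; the 2 MB value is only an example, and raising the limit at all is discouraged for the reasons listed above:

```
# conf/java.env on every server
SERVER_JVMFLAGS="-Djute.maxbuffer=2097152 $SERVER_JVMFLAGS"

# environment for clients such as zkCli.sh
export CLIENT_JVMFLAGS="-Djute.maxbuffer=2097152 $CLIENT_JVMFLAGS"
```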
+ + + +#### Disabling data directory autocreation + +**New in 3.5:** The default +behavior of a ZooKeeper server is to automatically create the +data directory (specified in the configuration file) when +started if that directory does not already exist. This can be +inconvenient and even dangerous in some cases. Take the case +where a configuration change is made to a running server, +wherein the **dataDir** parameter +is accidentally changed. When the ZooKeeper server is +restarted it will create this non-existent directory and begin +serving - with an empty znode namespace. This scenario can +result in an effective "split brain" situation (i.e. data in +both the new invalid directory and the original valid data +store). As such is would be good to have an option to turn off +this autocreate behavior. In general for production +environments this should be done, unfortunately however the +default legacy behavior cannot be changed at this point and +therefore this must be done on a case by case basis. This is +left to users and to packagers of ZooKeeper distributions. + +When running **zkServer.sh** autocreate can be disabled +by setting the environment variable **ZOO_DATADIR_AUTOCREATE_DISABLE** to 1. +When running ZooKeeper servers directly from class files this +can be accomplished by setting **zookeeper.datadir.autocreate=false** on +the java command line, i.e. **-Dzookeeper.datadir.autocreate=false** + +When this feature is disabled, and the ZooKeeper server +determines that the required directories do not exist it will +generate an error and refuse to start. + +A new script **zkServer-initialize.sh** is provided to +support this new feature. If autocreate is disabled it is +necessary for the user to first install ZooKeeper, then create +the data directory (and potentially txnlog directory), and +then start the server. Otherwise as mentioned in the previous +paragraph the server will not start. Running **zkServer-initialize.sh** will create the +required directories, and optionally set up the myid file +(optional command line parameter). This script can be used +even if the autocreate feature itself is not used, and will +likely be of use to users as this (setup, including creation +of the myid file) has been an issue for users in the past. +Note that this script ensures the data directories exist only, +it does not create a config file, but rather requires a config +file to be available in order to execute. + + + +#### Enabling db existence validation + +**New in 3.6.0:** The default +behavior of a ZooKeeper server on startup when no data tree +is found is to set zxid to zero and join the quorum as a +voting member. This can be dangerous if some event (e.g. a +rogue 'rm -rf') has removed the data directory while the +server was down since this server may help elect a leader +that is missing transactions. Enabling db existence validation +will change the behavior on startup when no data tree is +found: the server joins the ensemble as a non-voting participant +until it is able to sync with the leader and acquire an up-to-date +version of the ensemble data. To indicate an empty data tree is +expected (ensemble creation), the user should place a file +'initialize' in the same directory as 'myid'. This file will +be detected and deleted by the server on startup. + +Initialization validation can be enabled when running +ZooKeeper servers directly from class files by setting +**zookeeper.db.autocreate=false** +on the java command line, i.e. +**-Dzookeeper.db.autocreate=false**. 
+Running **zkServer-initialize.sh** +will create the required initialization file. + + + +#### Performance Tuning Options + +**New in 3.5.0:** Several subsystems have been reworked +to improve read throughput. This includes multi-threading of the NIO communication subsystem and +request processing pipeline (Commit Processor). NIO is the default client/server communication +subsystem. Its threading model comprises 1 acceptor thread, 1-N selector threads and 0-M +socket I/O worker threads. In the request processing pipeline the system can be configured +to process multiple read request at once while maintaining the same consistency guarantee +(same-session read-after-write). The Commit Processor threading model comprises 1 main +thread and 0-N worker threads. + +The default values are aimed at maximizing read throughput on a dedicated ZooKeeper machine. +Both subsystems need to have sufficient amount of threads to achieve peak read throughput. + +* *zookeeper.nio.numSelectorThreads* : + (Java system property only: **zookeeper.nio.numSelectorThreads**) + **New in 3.5.0:** + Number of NIO selector threads. At least 1 selector thread required. + It is recommended to use more than one selector for large numbers + of client connections. The default value is sqrt( number of cpu cores / 2 ). + +* *zookeeper.nio.numWorkerThreads* : + (Java system property only: **zookeeper.nio.numWorkerThreads**) + **New in 3.5.0:** + Number of NIO worker threads. If configured with 0 worker threads, the selector threads + do the socket I/O directly. The default value is 2 times the number of cpu cores. + +* *zookeeper.commitProcessor.numWorkerThreads* : + (Java system property only: **zookeeper.commitProcessor.numWorkerThreads**) + **New in 3.5.0:** + Number of Commit Processor worker threads. If configured with 0 worker threads, the main thread + will process the request directly. The default value is the number of cpu cores. + +* *zookeeper.commitProcessor.maxReadBatchSize* : + (Java system property only: **zookeeper.commitProcessor.maxReadBatchSize**) + Max number of reads to process from queuedRequests before switching to processing commits. + If the value < 0 (default), we switch whenever we have a local write, and pending commits. + A high read batch size will delay commit processing, causing stale data to be served. + If reads are known to arrive in fixed size batches then matching that batch size with + the value of this property can smooth queue performance. Since reads are handled in parallel, + one recommendation is to set this property to match *zookeeper.commitProcessor.numWorkerThread* + (default is the number of cpu cores) or lower. + +* *zookeeper.commitProcessor.maxCommitBatchSize* : + (Java system property only: **zookeeper.commitProcessor.maxCommitBatchSize**) + Max number of commits to process before processing reads. We will try to process as many + remote/local commits as we can till we reach this count. A high commit batch size will delay + reads while processing more commits. A low commit batch size will favor reads. + It is recommended to only set this property when an ensemble is serving a workload with a high + commit rate. If writes are known to arrive in a set number of batches then matching that + batch size with the value of this property can smooth queue performance. A generic + approach would be to set this value to equal the ensemble size so that with the processing + of each batch the current server will probabilistically handle a write related to one of + its direct clients. 
+ Default is "1". Negative and zero values are not supported. + +* *znode.container.checkIntervalMs* : + (Java system property only) + **New in 3.6.0:** The + time interval in milliseconds for each check of candidate container + and ttl nodes. Default is "60000". + +* *znode.container.maxPerMinute* : + (Java system property only) + **New in 3.6.0:** The + maximum number of container and ttl nodes that can be deleted per + minute. This prevents herding during container deletion. + Default is "10000". + +* *znode.container.maxNeverUsedIntervalMs* : + (Java system property only) + **New in 3.6.0:** The + maximum interval in milliseconds that a container that has never had + any children is retained. Should be long enough for your client to + create the container, do any needed work and then create children. + Default is "0" which is used to indicate that containers + that have never had any children are never deleted. + + + +#### Debug Observability Configurations + +**New in 3.6.0:** The following options are introduced to make zookeeper easier to debug. + +* *zookeeper.messageTracker.BufferSize* : + (Java system property only) + Controls the maximum number of messages stored in **MessageTracker**. Value should be positive + integers. The default value is 10. **MessageTracker** is introduced in **3.6.0** to record the + last set of messages between a server (follower or observer) and a leader, when a server + disconnects with leader. These set of messages will then be dumped to zookeeper's log file, + and will help reconstruct the state of the servers at the time of the disconnection and + will be useful for debugging purpose. + +* *zookeeper.messageTracker.Enabled* : + (Java system property only) + When set to "true", will enable **MessageTracker** to track and record messages. Default value + is "false". + + + +#### AdminServer configuration + +**New in 3.9.0:** The following +options are used to configure the [AdminServer](#sc_adminserver). + +* *admin.rateLimiterIntervalInMS* : + (Java system property: **zookeeper.admin.rateLimiterIntervalInMS**) + The time interval for rate limiting admin command to protect the server. + Defaults to 5 mins. + +* *admin.snapshot.enabled* : + (Java system property: **zookeeper.admin.snapshot.enabled**) + The flag for enabling the snapshot command. Defaults to true. + + +* *admin.restore.enabled* : + (Java system property: **zookeeper.admin.restore.enabled**) + The flag for enabling the restore command. Defaults to true. + + +* *admin.needClientAuth* : + (Java system property: **zookeeper.admin.needClientAuth**) + The flag to control whether client auth is needed. Using x509 auth requires true. + Defaults to false. + +**New in 3.7.1:** The following +options are used to configure the [AdminServer](#sc_adminserver). + +* *admin.forceHttps* : + (Java system property: **zookeeper.admin.forceHttps**) + Force AdminServer to use SSL, thus allowing only HTTPS traffic. + Defaults to disabled. + Overwrites **admin.portUnification** settings. + +**New in 3.6.0:** The following +options are used to configure the [AdminServer](#sc_adminserver). + +* *admin.portUnification* : + (Java system property: **zookeeper.admin.portUnification**) + Enable the admin port to accept both HTTP and HTTPS traffic. + Defaults to disabled. + +**New in 3.5.0:** The following +options are used to configure the [AdminServer](#sc_adminserver). + +* *admin.enableServer* : + (Java system property: **zookeeper.admin.enableServer**) + Set to "false" to disable the AdminServer. 
By default the + AdminServer is enabled. + +* *admin.serverAddress* : + (Java system property: **zookeeper.admin.serverAddress**) + The address the embedded Jetty server listens on. Defaults to 0.0.0.0. + +* *admin.serverPort* : + (Java system property: **zookeeper.admin.serverPort**) + The port the embedded Jetty server listens on. Defaults to 8080. + +* *admin.idleTimeout* : + (Java system property: **zookeeper.admin.idleTimeout**) + Set the maximum idle time in milliseconds that a connection can wait + before sending or receiving data. Defaults to 30000 ms. + +* *admin.commandURL* : + (Java system property: **zookeeper.admin.commandURL**) + The URL for listing and issuing commands relative to the + root URL. Defaults to "/commands". + +### Metrics Providers + +**New in 3.6.0:** The following options are used to configure metrics. + + By default ZooKeeper server exposes useful metrics using the [AdminServer](#sc_adminserver). + and [Four Letter Words](#sc_4lw) interface. + + Since 3.6.0 you can configure a different Metrics Provider, that exports metrics + to your favourite system. + + Since 3.6.0 ZooKeeper binary package bundles an integration with [Prometheus.io](https://prometheus.io) + +* *metricsProvider.className* : + Set to "org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider" to + enable Prometheus.io exporter. + +* *metricsProvider.httpHost* : + **New in 3.8.0:** Prometheus.io exporter will start a Jetty server and listen this address, default is "0.0.0.0" + +* *metricsProvider.httpPort* : + Prometheus.io exporter will start a Jetty server and bind to this port, it defaults to 7000. + Prometheus end point will be http://hostname:httPort/metrics. + +* *metricsProvider.exportJvmInfo* : + If this property is set to **true** Prometheus.io will export useful metrics about the JVM. + The default is true. + +* *metricsProvider.numWorkerThreads* : + **New in 3.7.1:** + Number of worker threads for reporting Prometheus summary metrics. + Default value is 1. + If the number is less than 1, the main thread will be used. + +* *metricsProvider.maxQueueSize* : + **New in 3.7.1:** + The max queue size for Prometheus summary metrics reporting task. + Default value is 1000000. + +* *metricsProvider.workerShutdownTimeoutMs* : + **New in 3.7.1:** + The timeout in ms for Prometheus worker threads shutdown. + Default value is 1000ms. + + + +### Communication using the Netty framework + +[Netty](http://netty.io) +is an NIO based client/server communication framework, it +simplifies (over NIO being used directly) many of the +complexities of network level communication for java +applications. Additionally the Netty framework has built +in support for encryption (SSL) and authentication +(certificates). These are optional features and can be +turned on or off individually. + +In versions 3.5+, a ZooKeeper server can use Netty +instead of NIO (default option) by setting the environment +variable **zookeeper.serverCnxnFactory** +to **org.apache.zookeeper.server.NettyServerCnxnFactory**; +for the client, set **zookeeper.clientCnxnSocket** +to **org.apache.zookeeper.ClientCnxnSocketNetty**. + + + +#### Quorum TLS + +*New in 3.5.5* + +Based on the Netty Framework ZooKeeper ensembles can be set up +to use TLS encryption in their communication channels. This section +describes how to set up encryption on the quorum communication. + +Please note that Quorum TLS encapsulates securing both leader election +and quorum communication protocols. + +1. 
Create SSL keystore JKS to store local credentials + +One keystore should be created for each ZK instance. + +In this example we generate a self-signed certificate and store it +together with the private key in `keystore.jks`. This is suitable for +testing purposes, but you probably need an official certificate to sign +your keys in a production environment. + +Please note that the alias (`-alias`) and the distinguished name (`-dname`) +must match the hostname of the machine that is associated with, otherwise +hostname verification won't work. + +``` +keytool -genkeypair -alias $(hostname -f) -keyalg RSA -keysize 2048 -dname "cn=$(hostname -f)" -keypass password -keystore keystore.jks -storepass password +``` + +2. Extract the signed public key (certificate) from keystore + +*This step might only necessary for self-signed certificates.* + +``` +keytool -exportcert -alias $(hostname -f) -keystore keystore.jks -file $(hostname -f).cer -rfc +``` + +3. Create SSL truststore JKS containing certificates of all ZooKeeper instances + +The same truststore (storing all accepted certs) should be shared on +participants of the ensemble. You need to use different aliases to store +multiple certificates in the same truststore. Name of the aliases doesn't matter. + +``` +keytool -importcert -alias [host1..3] -file [host1..3].cer -keystore truststore.jks -storepass password +``` + +4. You need to use `NettyServerCnxnFactory` as serverCnxnFactory, because SSL is not supported by NIO. +Add the following configuration settings to your `zoo.cfg` config file: + +``` +sslQuorum=true +serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory +ssl.quorum.keyStore.location=/path/to/keystore.jks +ssl.quorum.keyStore.password=password +ssl.quorum.trustStore.location=/path/to/truststore.jks +ssl.quorum.trustStore.password=password +``` + +5. Verify in the logs that your ensemble is running on TLS: + +``` +INFO [main:QuorumPeer@1789] - Using TLS encrypted quorum communication +INFO [main:QuorumPeer@1797] - Port unification disabled +... +INFO [QuorumPeerListener:QuorumCnxManager$Listener@877] - Creating TLS-only quorum server socket +``` + + + +#### Upgrading existing non-TLS cluster with no downtime + +*New in 3.5.5* + +Here are the steps needed to upgrade an already running ZooKeeper ensemble +to TLS without downtime by taking advantage of port unification functionality. + +1. Create the necessary keystores and truststores for all ZK participants as described in the previous section + +2. Add the following config settings and restart the first node + +``` +sslQuorum=false +portUnification=true +serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory +ssl.quorum.keyStore.location=/path/to/keystore.jks +ssl.quorum.keyStore.password=password +ssl.quorum.trustStore.location=/path/to/truststore.jks +ssl.quorum.trustStore.password=password +``` + +Note that TLS is not yet enabled, but we turn on port unification. + +3. Repeat step #2 on the remaining nodes. Verify that you see the following entries in the logs: + +``` +INFO [main:QuorumPeer@1791] - Using insecure (non-TLS) quorum communication +INFO [main:QuorumPeer@1797] - Port unification enabled +... +INFO [QuorumPeerListener:QuorumCnxManager$Listener@874] - Creating TLS-enabled quorum server socket +``` + +You should also double-check after each node restart that the quorum become healthy again. + +4. Enable Quorum TLS on each node and do rolling restart: + +``` +sslQuorum=true +portUnification=true +``` + +5. 
Once you verified that your entire ensemble is running on TLS, you could disable port unification +and do another rolling restart + +``` +sslQuorum=true +portUnification=false +``` + + + + +### ZooKeeper Commands + + + +#### The Four Letter Words + +ZooKeeper responds to a small set of commands. Each command is +composed of four letters. You issue the commands to ZooKeeper via telnet +or nc, at the client port. + +Three of the more interesting commands: "stat" gives some +general information about the server and connected clients, +while "srvr" and "cons" give extended details on server and +connections respectively. + +**New in 3.5.3:** +Four Letter Words need to be explicitly white listed before using. +Please refer to **4lw.commands.whitelist** +described in [cluster configuration section](#sc_clusterOptions) for details. +Moving forward, Four Letter Words will be deprecated, please use +[AdminServer](#sc_adminserver) instead. + +* *conf* : + **New in 3.3.0:** Print + details about serving configuration. + +* *cons* : + **New in 3.3.0:** List + full connection/session details for all clients connected + to this server. Includes information on numbers of packets + received/sent, session id, operation latencies, last + operation performed, etc... + +* *crst* : + **New in 3.3.0:** Reset + connection/session statistics for all connections. + +* *dump* : + Lists the outstanding sessions and ephemeral nodes. + +* *envi* : + Print details about serving environment + +* *ruok* : + Tests if the server is running in a non-error state. + When the whitelist enables ruok, the server will respond with `imok` + if it is running, otherwise it will not respond at all. + When ruok is disabled, the server responds with: + "ruok is not executed because it is not in the whitelist." + A response of "imok" does not necessarily indicate that the + server has joined the quorum, just that the server process is active + and bound to the specified client port. Use "stat" for details on + state wrt quorum and client connection information. + +* *srst* : + Reset server statistics. + +* *srvr* : + **New in 3.3.0:** Lists + full details for the server. + +* *stat* : + Lists brief details for the server and connected + clients. + +* *wchs* : + **New in 3.3.0:** Lists + brief information on watches for the server. + +* *wchc* : + **New in 3.3.0:** Lists + detailed information on watches for the server, by + session. This outputs a list of sessions(connections) + with associated watches (paths). Note, depending on the + number of watches this operation may be expensive (ie + impact server performance), use it carefully. + +* *dirs* : + **New in 3.5.1:** + Shows the total size of snapshot and log files in bytes + +* *wchp* : + **New in 3.3.0:** Lists + detailed information on watches for the server, by path. + This outputs a list of paths (znodes) with associated + sessions. Note, depending on the number of watches this + operation may be expensive (ie impact server performance), + use it carefully. + +* *mntr* : + **New in 3.4.0:** Outputs a list + of variables that could be used for monitoring the health of the cluster. 
+ + + $ echo mntr | nc localhost 2185 + zk_version 3.4.0 + zk_avg_latency 0.7561 - be account to four decimal places + zk_max_latency 0 + zk_min_latency 0 + zk_packets_received 70 + zk_packets_sent 69 + zk_outstanding_requests 0 + zk_server_state leader + zk_znode_count 4 + zk_watch_count 0 + zk_ephemerals_count 0 + zk_approximate_data_size 27 + zk_learners 4 - only exposed by the Leader + zk_synced_followers 4 - only exposed by the Leader + zk_pending_syncs 0 - only exposed by the Leader + zk_open_file_descriptor_count 23 - only available on Unix platforms + zk_max_file_descriptor_count 1024 - only available on Unix platforms + + +The output is compatible with java properties format and the content +may change over time (new keys added). Your scripts should expect changes. +ATTENTION: Some of the keys are platform specific and some of the keys are only exported by the Leader. +The output contains multiple lines with the following format: + + + key \t value + + +* *isro* : + **New in 3.4.0:** Tests if + server is running in read-only mode. The server will respond with + "ro" if in read-only mode or "rw" if not in read-only mode. + +* *hash* : + **New in 3.6.0:** + Return the latest history of the tree digest associated with zxid. + +* *gtmk* : + Gets the current trace mask as a 64-bit signed long value in + decimal format. See `stmk` for an explanation of + the possible values. + +* *stmk* : + Sets the current trace mask. The trace mask is 64 bits, + where each bit enables or disables a specific category of trace + logging on the server. Logback must be configured to enable + `TRACE` level first in order to see trace logging + messages. The bits of the trace mask correspond to the following + trace logging categories. + + | Trace Mask Bit Values | | + |-----------------------|---------------------| + | 0b0000000000 | Unused, reserved for future use. | + | 0b0000000010 | Logs client requests, excluding ping requests. | + | 0b0000000100 | Unused, reserved for future use. | + | 0b0000001000 | Logs client ping requests. | + | 0b0000010000 | Logs packets received from the quorum peer that is the current leader, excluding ping requests. | + | 0b0000100000 | Logs addition, removal and validation of client sessions. | + | 0b0001000000 | Logs delivery of watch events to client sessions. | + | 0b0010000000 | Logs ping packets received from the quorum peer that is the current leader. | + | 0b0100000000 | Unused, reserved for future use. | + | 0b1000000000 | Unused, reserved for future use. | + + All remaining bits in the 64-bit value are unused and + reserved for future use. Multiple trace logging categories are + specified by calculating the bitwise OR of the documented values. + The default trace mask is 0b0100110010. Thus, by default, trace + logging includes client requests, packets received from the + leader and sessions. + To set a different trace mask, send a request containing the + `stmk` four-letter word followed by the trace + mask represented as a 64-bit signed long value. This example uses + the Perl `pack` function to construct a trace + mask that enables all trace logging categories described above and + convert it to a 64-bit signed long value with big-endian byte + order. The result is appended to `stmk` and sent + to the server using netcat. The server responds with the new + trace mask in decimal format. 
+ + + $ perl -e "print 'stmk', pack('q>', 0b0011111010)" | nc localhost 2181 + 250 + + +Here's an example of the **ruok** +command: + + + $ echo ruok | nc 127.0.0.1 5111 + imok + + + + +#### The AdminServer + +**New in 3.5.0:** The AdminServer is +an embedded Jetty server that provides an HTTP interface to the four-letter +word commands. By default, the server is started on port 8080, +and commands are issued by going to the URL "/commands/\[command name]", +e.g., http://localhost:8080/commands/stat. The command response is +returned as JSON. Unlike the original protocol, commands are not +restricted to four-letter names, and commands can have multiple names; +for instance, "stmk" can also be referred to as "set_trace_mask". To +view a list of all available commands, point a browser to the URL +/commands (e.g., http://localhost:8080/commands). See the [AdminServer configuration options](#sc_adminserver_config) +for how to change the port and URLs. + +The AdminServer is enabled by default, but can be disabled by either: + +* Setting the zookeeper.admin.enableServer system + property to false. +* Removing Jetty from the classpath. (This option is + useful if you would like to override ZooKeeper's jetty + dependency.) + +Note that the TCP four-letter word interface is still available if +the AdminServer is disabled. + +##### Configuring AdminServer for SSL/TLS +- Generating the **keystore.jks** and **truststore.jks** which can be found in the [Quorum TLS](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#Quorum+TLS). +- Add the following configuration settings to the `zoo.cfg` config file: + +``` +admin.portUnification=true +ssl.quorum.keyStore.location=/path/to/keystore.jks +ssl.quorum.keyStore.password=password +ssl.quorum.trustStore.location=/path/to/truststore.jks +ssl.quorum.trustStore.password=password +``` +- Verify that the following entries in the logs can be seen: + +``` +2019-08-03 15:44:55,213 [myid:] - INFO [main:JettyAdminServer@123] - Successfully loaded private key from /data/software/cert/keystore.jks +2019-08-03 15:44:55,213 [myid:] - INFO [main:JettyAdminServer@124] - Successfully loaded certificate authority from /data/software/cert/truststore.jks + +2019-08-03 15:44:55,403 [myid:] - INFO [main:JettyAdminServer@170] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands +``` + +Available commands include: + +* *connection_stat_reset/crst*: + Reset all client connection statistics. + No new fields returned. + +* *configuration/conf/config* : + Print basic details about serving configuration, e.g. + client port, absolute path to data directory. + +* *connections/cons* : + Information on client connections to server. + Note, depending on the number of client connections this operation may be expensive + (i.e. impact server performance). + Returns "connections", a list of connection info objects. + +* *hash*: + Txn digests in the historical digest list. + One is recorded every 128 transactions. + Returns "digests", a list to transaction digest objects. + +* *dirs* : + Information on logfile directory and snapshot directory + size in bytes. + Returns "datadir_size" and "logdir_size". + +* *dump* : + Information on session expirations and ephemerals. + Note, depending on the number of global sessions and ephemerals + this operation may be expensive (i.e. impact server performance). + Returns "expiry_time_to_session_ids" and "session_id_to_ephemeral_paths" as maps. + +* *environment/env/envi* : + All defined environment variables. 
+ Returns each as its own field. + +* *get_trace_mask/gtmk* : + The current trace mask. Read-only version of *set_trace_mask*. + See the description of the four letter command *stmk* for + more details. + Returns "tracemask". + +* *initial_configuration/icfg* : + Print the text of the configuration file used to start the peer. + Returns "initial_configuration". + +* *is_read_only/isro* : + A true/false if this server is in read-only mode. + Returns "read_only". + +* *last_snapshot/lsnp* : + Information of the last snapshot that zookeeper server has finished saving to disk. + If called during the initial time period between the server starting up + and the server finishing saving its first snapshot, the command returns the + information of the snapshot read when starting up the server. + Returns "zxid" and "timestamp", the latter using a time unit of seconds. + +* *leader/lead* : + If the ensemble is configured in quorum mode then emits the current leader + status of the peer and the current leader location. + Returns "is_leader", "leader_id", and "leader_ip". + +* *monitor/mntr* : + Emits a wide variety of useful info for monitoring. + Includes performance stats, information about internal queues, and + summaries of the data tree (among other things). + Returns each as its own field. + +* *observer_connection_stat_reset/orst* : + Reset all observer connection statistics. Companion command to *observers*. + No new fields returned. + +* *restore/rest* : + Restore database from snapshot input stream on the current server. + Returns the following data in response payload: + "last_zxid": String + Note: this API is rate-limited (once every 5 mins by default) to protect the server + from being over-loaded. + +* *ruok* : + No-op command, check if the server is running. + A response does not necessarily indicate that the + server has joined the quorum, just that the admin server + is active and bound to the specified port. + No new fields returned. + +* *set_trace_mask/stmk* : + Sets the trace mask (as such, it requires a parameter). + Write version of *get_trace_mask*. + See the description of the four letter command *stmk* for + more details. + Returns "tracemask". + +* *server_stats/srvr* : + Server information. + Returns multiple fields giving a brief overview of server state. + +* *snapshot/snap* : + Takes a snapshot of the current server in the datadir and stream out data. + Optional query parameter: + "streaming": Boolean (defaults to true if the parameter is not present) + Returns the following via Http headers: + "last_zxid": String + "snapshot_size": String + Note: this API is rate-limited (once every 5 mins by default) to protect the server + from being over-loaded. + +* *stats/stat* : + Same as *server_stats* but also returns the "connections" field (see *connections* + for details). + Note, depending on the number of client connections this operation may be expensive + (i.e. impact server performance). + +* *stat_reset/srst* : + Resets server statistics. This is a subset of the information returned + by *server_stats* and *stats*. + No new fields returned. + +* *observers/obsr* : + Information on observer connections to server. + Always available on a Leader, available on a Follower if its + acting as a learner master. + Returns "synced_observers" (int) and "observers" (list of per-observer properties). + +* *system_properties/sysp* : + All defined system properties. + Returns each as its own field. + +* *voting_view* : + Provides the current voting members in the ensemble. 
+ Returns "current_config" as a map. + +* *watches/wchc* : + Watch information aggregated by session. + Note, depending on the number of watches this operation may be expensive + (i.e. impact server performance). + Returns "session_id_to_watched_paths" as a map. + +* *watches_by_path/wchp* : + Watch information aggregated by path. + Note, depending on the number of watches this operation may be expensive + (i.e. impact server performance). + Returns "path_to_session_ids" as a map. + +* *watch_summary/wchs* : + Summarized watch information. + Returns "num_total_watches", "num_paths", and "num_connections". + +* *zabstate* : + The current phase of Zab protocol that peer is running and whether it is a + voting member. + Peers can be in one of these phases: ELECTION, DISCOVERY, SYNCHRONIZATION, BROADCAST. + Returns fields "voting" and "zabstate". + + + + +### Data File Management + +ZooKeeper stores its data in a data directory and its transaction +log in a transaction log directory. By default these two directories are +the same. The server can (and should) be configured to store the +transaction log files in a separate directory than the data files. +Throughput increases and latency decreases when transaction logs reside +on a dedicated log devices. + + + +#### The Data Directory + +This directory has two or three files in it: + +* *myid* - contains a single integer in + human readable ASCII text that represents the server id. +* *initialize* - presence indicates lack of + data tree is expected. Cleaned up once data tree is created. +* *snapshot.* - holds the fuzzy + snapshot of a data tree. + +Each ZooKeeper server has a unique id. This id is used in two +places: the *myid* file and the configuration file. +The *myid* file identifies the server that +corresponds to the given data directory. The configuration file lists +the contact information for each server identified by its server id. +When a ZooKeeper server instance starts, it reads its id from the +*myid* file and then, using that id, reads from the +configuration file, looking up the port on which it should +listen. + +The *snapshot* files stored in the data +directory are fuzzy snapshots in the sense that during the time the +ZooKeeper server is taking the snapshot, updates are occurring to the +data tree. The suffix of the *snapshot* file names +is the _zxid_, the ZooKeeper transaction id, of the +last committed transaction at the start of the snapshot. Thus, the +snapshot includes a subset of the updates to the data tree that +occurred while the snapshot was in process. The snapshot, then, may +not correspond to any data tree that actually existed, and for this +reason we refer to it as a fuzzy snapshot. Still, ZooKeeper can +recover using this snapshot because it takes advantage of the +idempotent nature of its updates. By replaying the transaction log +against fuzzy snapshots ZooKeeper gets the state of the system at the +end of the log. + + + +#### The Log Directory + +The Log Directory contains the ZooKeeper transaction logs. +Before any update takes place, ZooKeeper ensures that the transaction +that represents the update is written to non-volatile storage. A new +log file is started when the number of transactions written to the +current log file reaches a (variable) threshold. The threshold is +computed using the same parameter which influences the frequency of +snapshotting (see snapCount and snapSizeLimitInKb above). The log file's +suffix is the first zxid written to that log. 
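+
+As a quick orientation, here is a minimal sketch of inspecting these files from a shell.
+The paths are assumptions for illustration only; substitute the dataDir and dataLogDir
+values from your own zoo.cfg (ZooKeeper keeps the files under a version-2 subdirectory).
+
+```bash
+# Hypothetical locations; adjust to your dataDir / dataLogDir.
+cat /var/lib/zookeeper/myid                              # server id, referenced as server.<id> in the config file
+ls /var/lib/zookeeper/version-2 | grep '^snapshot\.'     # fuzzy snapshots, suffixed with the zxid at snapshot start
+ls /var/lib/zookeeper-txnlog/version-2 | grep '^log\.'   # transaction logs, suffixed with the first zxid they contain
+```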
+
+
+
+#### File Management
+
+The format of snapshot and log files does not change between
+standalone ZooKeeper servers and different configurations of
+replicated ZooKeeper servers. Therefore, you can pull these files from
+a running replicated ZooKeeper server to a development machine with a
+stand-alone ZooKeeper server for troubleshooting.
+
+Using older log and snapshot files, you can look at the previous
+state of ZooKeeper servers and even restore that state.
+
+The ZooKeeper server creates snapshot and log files, but
+never deletes them. The retention policy of the data and log
+files is implemented outside of the ZooKeeper server. The
+server itself only needs the latest complete fuzzy snapshot, all log
+files following it, and the last log file preceding it. The latter
+requirement is necessary to include updates which happened after this
+snapshot was started but went into the existing log file at that time.
+This is possible because snapshotting and rolling over of logs
+proceed somewhat independently in ZooKeeper. See the
+[maintenance](#sc_maintenance) section in
+this document for more details on setting a retention policy
+and maintenance of ZooKeeper storage.
+
+###### Note
+>The data stored in these files is not encrypted. In the case of
+storing sensitive data in ZooKeeper, necessary measures need to be
+taken to prevent unauthorized access. Such measures are external to
+ZooKeeper (e.g., control access to the files) and depend on the
+individual settings in which it is being deployed.
+
+
+
+#### Recovery - TxnLogToolkit
+More details can be found in the [TxnLogToolkit section of the ZooKeeper Tools guide](http://zookeeper.apache.org/doc/current/zookeeperTools.html#zkTxnLogToolkit).
+
+
+
+### Things to Avoid
+
+Here are some common problems you can avoid by configuring
+ZooKeeper correctly:
+
+* *inconsistent lists of servers* :
+    The list of ZooKeeper servers used by the clients must match
+    the list of ZooKeeper servers that each ZooKeeper server has.
+    Things work okay if the client list is a subset of the real list,
+    but things will really act strange if clients have a list of
+    ZooKeeper servers that are in different ZooKeeper clusters. Also,
+    the server lists in each ZooKeeper server configuration file
+    should be consistent with one another.
+
+* *incorrect placement of transaction log* :
+    The most performance-critical part of ZooKeeper is the
+    transaction log. ZooKeeper syncs transactions to media before it
+    returns a response. A dedicated transaction log device is key to
+    consistently good performance. Putting the log on a busy device will
+    adversely affect performance. If you only have one storage device,
+    increase the snapCount so that snapshot files are generated less often;
+    this does not eliminate the problem, but it makes more resources available
+    for the transaction log.
+
+* *incorrect Java heap size* :
+    You should take special care to set your Java max heap size
+    correctly. In particular, you should not create a situation in
+    which ZooKeeper swaps to disk. The disk is death to ZooKeeper.
+    Everything is ordered, so if processing one request causes the server
+    to swap to disk, all other queued requests will probably do the same.
+    DON'T SWAP.
+    Be conservative in your estimates: if you have 4G of RAM, do
+    not set the Java max heap size to 6G or even 4G. For example, it
+    is more likely you would use a 3G heap for a 4G machine, as the
+    operating system and the cache also need memory. The best and only
+    recommended practice for estimating the heap size your system needs
+    is to run load tests, and then make sure you are well below the
+    usage limit that would cause the system to swap.
+
+* *Publicly accessible deployment* :
+    A ZooKeeper ensemble is expected to operate in a trusted computing environment.
+    It is thus recommended to deploy ZooKeeper behind a firewall.
+
+
+
+### Best Practices
+
+For best results, take note of the following good
+ZooKeeper practices:
+
+For multi-tenant installations see the [section](zookeeperProgrammers.html#ch_zkSessions)
+detailing ZooKeeper "chroot" support; this can be very useful
+when deploying many applications/services interfacing to a
+single ZooKeeper cluster.
diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAuditLogs.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAuditLogs.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3954ad676de219ebc593b1f3717aa60ac1aab4e
--- /dev/null
+++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperAuditLogs.md
@@ -0,0 +1,140 @@
+
+
+# ZooKeeper Audit Logging
+
+* [ZooKeeper Audit Logs](#ch_auditLogs)
+* [ZooKeeper Audit Log Configuration](#ch_reconfig_format)
+* [Who is taken as user in audit logs?](#ch_zkAuditUser)
+
+
+## ZooKeeper Audit Logs
+
+Apache ZooKeeper supports audit logs from version 3.6.0. By default audit logs are disabled. To enable audit logs,
+ configure audit.enable=true in conf/zoo.cfg (a minimal example follows the table of audit keys below). Audit logs are not written on every ZooKeeper server, but only on the server to which the client is connected, as depicted in the figure below.
+
+![Audit Logs](images/zkAuditLogs.jpg)
+
+
+The audit log captures detailed information for the operations that are selected to be audited. The audit information is written as a set of key=value pairs for the following keys:
+
+| Key | Value |
+| ----- | ----- |
+|session | client session id |
+|user | comma separated list of users who are associated with a client session. For more on this, see [Who is taken as user in audit logs](#ch_zkAuditUser). |
+|ip | client IP address |
+|operation | any one of the selected operations for audit. Possible values are (serverStart, serverStop, create, delete, setData, setAcl, multiOperation, reconfig, ephemeralZNodeDeleteOnSessionClose) |
+|znode | path of the znode |
+|znode type | type of znode in case of a creation operation |
+|acl | String representation of the znode ACL, like cdrwa (create, delete, read, write, admin). This is logged only for the setAcl operation |
+|result | result of the operation. Possible values are (success/failure/invoked). Result "invoked" is used for the serverStop operation because the stop is logged before ensuring that the server actually stopped. |
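+
+As a minimal sketch (assuming the stock distribution layout, where the server reads
+conf/zoo.cfg and is managed with bin/zkServer.sh), enabling audit logging can look like this:
+
+```bash
+# Append the audit switch to the server configuration and restart the server
+# so the setting takes effect.
+echo "audit.enable=true" >> conf/zoo.cfg
+bin/zkServer.sh restart
+```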
+ +Below are sample audit logs for all operations, where client is connected from 192.168.1.2, client principal is zkcli@HADOOP.COM, server principal is zookeeper/192.168.1.3@HADOOP.COM + + user=zookeeper/192.168.1.3 operation=serverStart result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/a znode_type=persistent result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/a znode_type=persistent result=failure + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/a result=failure + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/a result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setAcl znode=/a acl=world:anyone:cdrwa result=failure + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setAcl znode=/a acl=world:anyone:cdrwa result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/b znode_type=persistent result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=setData znode=/b result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/b result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=multiOperation result=failure + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/a result=failure + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=delete znode=/a result=success + session=0x19344730001 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=create znode=/ephemral znode_type=ephemral result=success + session=0x19344730001 user=zookeeper/192.168.1.3 operation=ephemeralZNodeDeletionOnSessionCloseOrExpire znode=/ephemral result=success + session=0x19344730000 user=192.168.1.2,zkcli@HADOOP.COM ip=192.168.1.2 operation=reconfig znode=/zookeeper/config result=success + user=zookeeper/192.168.1.3 operation=serverStop result=invoked + + + +## ZooKeeper Audit Log Configuration + +By default audit logs are disabled. To enable audit logs configure `audit.enable=true` in _conf/zoo.cfg_. +Audit logging is done using logback. Following is the default logback configuration for audit logs in `conf/logback.xml` + + + + +Change above configuration to customize the auditlog file, number of backups, max file size, custom audit logger etc. + + + +## Who is taken as user in audit logs? + +By default there are only four authentication provider: + +* IPAuthenticationProvider +* SASLAuthenticationProvider +* X509AuthenticationProvider +* DigestAuthenticationProvider + +User is decided based on the configured authentication provider: + +* When IPAuthenticationProvider is configured then authenticated IP is taken as user +* When SASLAuthenticationProvider is configured then client principal is taken as user +* When X509AuthenticationProvider is configured then client certificate is taken as user +* When DigestAuthenticationProvider is configured then authenticated user is user + +Custom authentication provider can override org.apache.zookeeper.server.auth.AuthenticationProvider.getUserName(String id) + to provide user name. If authentication provider is not overriding this method then whatever is stored in + org.apache.zookeeper.data.Id.id is taken as user. 
+ Generally only the user name is stored in this field, but it is up to the custom authentication provider what it stores there.
+ For audit logging, the value of org.apache.zookeeper.data.Id.id is taken as the user.
+
+In the ZooKeeper server, not all operations are performed by clients; some operations are performed by the server itself. For example, when a client closes its session, its ephemeral znodes are deleted by the server. These deletions are not requested by clients directly but are carried out by the server itself; they are called system operations. For these system operations, the user associated with the ZooKeeper server is taken as the user when the operations are audit logged. For example, if the ZooKeeper server principal is zookeeper/hadoop.hadoop.com@HADOOP.COM, then this becomes the system user and all system operations will be logged with this user name.
+
+    user=zookeeper/hadoop.hadoop.com@HADOOP.COM operation=serverStart result=success
+
+
+If there is no user associated with the ZooKeeper server, then the user who started the ZooKeeper server is taken as the user. For example, if the server is started by root, then root is taken as the system user:
+
+    user=root operation=serverStart result=success
+
+
+A single client can attach multiple authentication schemes to a session; in this case all authenticated identities are taken as the user and are presented as a comma-separated list. For example, if a client is authenticated with the principal zkcli@HADOOP.COM and the IP 127.0.0.1, then the audit log for a create-znode operation will look like:
+
+    session=0x10c0bcb0000 user=zkcli@HADOOP.COM,127.0.0.1 ip=127.0.0.1 operation=create znode=/a result=success
+
+
diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md
new file mode 100644
index 0000000000000000000000000000000000000000..7096aa0cc8980729a06461725cd21c3a91658f09
--- /dev/null
+++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperCLI.md
@@ -0,0 +1,573 @@
+
+
+# ZooKeeper-cli: the ZooKeeper command line interface
+
+## Pre-requisites
+Enter the ZooKeeper CLI:
+
+```bash
+# connect to localhost with the default port: 2181
+bin/zkCli.sh
+# connect to a remote host with a timeout of 3s
+bin/zkCli.sh -timeout 3000 -server remoteIP:2181
+# connect to a remote host with the -waitforconnection option to wait for the connection to succeed before executing commands
+bin/zkCli.sh -waitforconnection -timeout 3000 -server remoteIP:2181
+# connect with a custom client configuration properties file
+bin/zkCli.sh -client-configuration /path/to/client.properties
+```
+## help
+Show help for the ZooKeeper commands
+
+```bash
+[zkshell: 1] help
+# a sample one
+[zkshell: 2] h
+ZooKeeper -server host:port cmd args
+    addauth scheme auth
+    close
+    config [-c] [-w] [-s]
+    connect host:port
+    create [-s] [-e] [-c] [-t ttl] path [data] [acl]
+    delete [-v version] path
+    deleteall path
+    delquota [-n|-b|-N|-B] path
+    get [-s] [-w] path
+    getAcl [-s] path
+    getAllChildrenNumber path
+    getEphemerals path
+    history
+    listquota path
+    ls [-s] [-w] [-R] path
+    printwatches on|off
+    quit
+    reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
+    redo cmdno
+    removewatches path [-c|-d|-a] [-l]
+    set [-s] [-v version] path data
+    setAcl [-s] [-v version] [-R] path acl
+    setquota -n|-b|-N|-B val path
+    stat [-w] 
path + sync path + version +``` + +## addauth +Add a authorized user for ACL + +```bash +[zkshell: 9] getAcl /acl_digest_test + Insufficient permission : /acl_digest_test +[zkshell: 10] addauth digest user1:12345 +[zkshell: 11] getAcl /acl_digest_test + 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE= + : cdrwa +# add a super user +# Notice:set zookeeper.DigestAuthenticationProvider +# e.g. zookeeper.DigestAuthenticationProvider.superDigest=zookeeper:qW/HnTfCSoQpB5G8LgkwT3IbiFc= +[zkshell: 12] addauth digest zookeeper:admin +``` + +## close +Close this client/session. + +```bash +[zkshell: 0] close + 2019-03-09 06:42:22,178 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@528] - EventThread shut down for session: 0x10007ab7c550006 + 2019-03-09 06:42:22,179 [myid:] - INFO [main:ZooKeeper@1346] - Session: 0x10007ab7c550006 closed +``` + +## config +Showing the config of quorum membership + +```bash +[zkshell: 17] config + server.1=[2001:db8:1:0:0:242:ac11:2]:2888:3888:participant + server.2=[2001:db8:1:0:0:242:ac11:2]:12888:13888:participant + server.3=[2001:db8:1:0:0:242:ac11:2]:22888:23888:participant + version=0 +``` +## connect +Connect a ZooKeeper server. + +```bash +[zkshell: 4] connect + 2019-03-09 06:43:33,179 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@986] - Socket connection established, initiating session, client: /127.0.0.1:35144, server: localhost/127.0.0.1:2181 + 2019-03-09 06:43:33,189 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1421] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10007ab7c550007, negotiated timeout = 30000 + connect "localhost:2181,localhost:2182,localhost:2183" + +# connect a remote server +[zkshell: 5] connect remoteIP:2181 +``` +## create +Create a znode. + +```bash +# create a persistent_node +[zkshell: 7] create /persistent_node + Created /persistent_node + +# create a ephemeral node +[zkshell: 8] create -e /ephemeral_node mydata + Created /ephemeral_node + +# create the persistent-sequential node +[zkshell: 9] create -s /persistent_sequential_node mydata + Created /persistent_sequential_node0000000176 + +# create the ephemeral-sequential_node +[zkshell: 10] create -s -e /ephemeral_sequential_node mydata + Created /ephemeral_sequential_node0000000174 + +# create a node with the schema +[zkshell: 11] create /zk-node-create-schema mydata digest:user1:+owfoSBn/am19roBPzR1/MfCblE=:crwad + Created /zk-node-create-schema +[zkshell: 12] addauth digest user1:12345 +[zkshell: 13] getAcl /zk-node-create-schema + 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE= + : cdrwa + +# create the container node.When the last child of a container is deleted,the container becomes to be deleted +[zkshell: 14] create -c /container_node mydata + Created /container_node +[zkshell: 15] create -c /container_node/child_1 mydata + Created /container_node/child_1 +[zkshell: 16] create -c /container_node/child_2 mydata + Created /container_node/child_2 +[zkshell: 17] delete /container_node/child_1 +[zkshell: 18] delete /container_node/child_2 +[zkshell: 19] get /container_node + org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /container_node + +# create the ttl node. 
+# set zookeeper.extendedTypesEnabled=true +# Otherwise:KeeperErrorCode = Unimplemented for /ttl_node +[zkshell: 20] create -t 3000 /ttl_node mydata + Created /ttl_node +# after 3s later +[zkshell: 21] get /ttl_node + org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /ttl_node +``` +## delete +Delete a node with a specific path + +```bash +[zkshell: 2] delete /config/topics/test +[zkshell: 3] ls /config/topics/test + Node does not exist: /config/topics/test +``` + +## deleteall +Delete all nodes under a specific path + +```bash +zkshell: 1] ls /config + [changes, clients, topics] +[zkshell: 2] deleteall /config +[zkshell: 3] ls /config + Node does not exist: /config +``` + +## delquota +Delete the quota under a path + +```bash +[zkshell: 1] delquota /quota_test +[zkshell: 2] listquota /quota_test + absolute path is /zookeeper/quota/quota_test/zookeeper_limits + quota for /quota_test does not exist. +[zkshell: 3] delquota -n /c1 +[zkshell: 4] delquota -N /c2 +[zkshell: 5] delquota -b /c3 +[zkshell: 6] delquota -B /c4 + +``` +## get +Get the data of the specific path + +```bash +[zkshell: 10] get /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} + +# -s to show the stat +[zkshell: 11] get -s /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} + cZxid = 0x90000009a + ctime = Sat Jul 28 08:14:09 UTC 2018 + mZxid = 0x9000000a2 + mtime = Sat Jul 28 08:14:12 UTC 2018 + pZxid = 0x90000009a + cversion = 0 + dataVersion = 1 + aclVersion = 0 + ephemeralOwner = 0x0 + dataLength = 60 + numChildren = 0 + +# -w to set a watch on the data change, Notice: turn on the printwatches +[zkshell: 12] get -w /latest_producer_id_block + {"version":1,"broker":0,"block_start":"0","block_end":"999"} +[zkshell: 13] set /latest_producer_id_block mydata + WATCHER:: + WatchedEvent state:SyncConnected type:NodeDataChanged path:/latest_producer_id_block +``` + +## getAcl +Get the ACL permission of one path + +```bash +[zkshell: 4] create /acl_test mydata ip:127.0.0.1:crwda + Created /acl_test +[zkshell: 5] getAcl /acl_test + 'ip,'127.0.0.1 + : cdrwa + [zkshell: 6] getAcl /testwatch + 'world,'anyone + : cdrwa +``` +## getAllChildrenNumber +Get all numbers of children nodes under a specific path + +```bash +[zkshell: 1] getAllChildrenNumber / + 73779 +[zkshell: 2] getAllChildrenNumber /ZooKeeper + 2 +[zkshell: 3] getAllChildrenNumber /ZooKeeper/quota + 0 +``` +## getEphemerals +Get all the ephemeral nodes created by this session + +```bash +[zkshell: 1] create -e /test-get-ephemerals "ephemeral node" + Created /test-get-ephemerals +[zkshell: 2] getEphemerals + [/test-get-ephemerals] +[zkshell: 3] getEphemerals / + [/test-get-ephemerals] +[zkshell: 4] create -e /test-get-ephemerals-1 "ephemeral node" + Created /test-get-ephemerals-1 +[zkshell: 5] getEphemerals /test-get-ephemerals + test-get-ephemerals test-get-ephemerals-1 +[zkshell: 6] getEphemerals /test-get-ephemerals + [/test-get-ephemerals-1, /test-get-ephemerals] +[zkshell: 7] getEphemerals /test-get-ephemerals-1 + [/test-get-ephemerals-1] +``` + +## history +Showing the history about the recent 11 commands that you have executed + +```bash +[zkshell: 7] history + 0 - close + 1 - close + 2 - ls / + 3 - ls / + 4 - connect + 5 - ls / + 6 - ll + 7 - history +``` + +## listquota +Listing the quota of one path + +```bash +[zkshell: 1] listquota /c1 + absolute path is /zookeeper/quota/c1/zookeeper_limits + Output quota for /c1 
count=-1,bytes=-1=;byteHardLimit=-1;countHardLimit=2 + Output stat for /c1 count=4,bytes=0 +``` + +## ls +Listing the child nodes of one path + +```bash +[zkshell: 36] ls /quota_test + [child_1, child_2, child_3] + +# -s to show the stat +[zkshell: 37] ls -s /quota_test + [child_1, child_2, child_3] + cZxid = 0x110000002d + ctime = Thu Mar 07 11:19:07 UTC 2019 + mZxid = 0x110000002d + mtime = Thu Mar 07 11:19:07 UTC 2019 + pZxid = 0x1100000033 + cversion = 3 + dataVersion = 0 + aclVersion = 0 + ephemeralOwner = 0x0 + dataLength = 0 + numChildren = 3 + +# -R to show the child nodes recursely +[zkshell: 38] ls -R /quota_test + /quota_test + /quota_test/child_1 + /quota_test/child_2 + /quota_test/child_3 + +# -w to set a watch on the child change,Notice: turn on the printwatches +[zkshell: 39] ls -w /brokers + [ids, seqid, topics] +[zkshell: 40] delete /brokers/ids + WATCHER:: + WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers +``` + +## printwatches +A switch to turn on/off whether printing watches or not. + +```bash +[zkshell: 0] printwatches + printwatches is on +[zkshell: 1] printwatches off +[zkshell: 2] printwatches + printwatches is off +[zkshell: 3] printwatches on +[zkshell: 4] printwatches + printwatches is on +``` + +## quit +Quit the CLI windows. + +```bash +[zkshell: 1] quit +``` + +## reconfig +Change the membership of the ensemble during the runtime. + +Before using this cli,read the details in the [Dynamic Reconfiguration](zookeeperReconfig.html) about the reconfig feature,especially the "Security" part. + +Pre-requisites: + +1. set reconfigEnabled=true in the zoo.cfg + +2. add a super user or skipAcl,otherwise will get “Insufficient permission”. e.g. addauth digest zookeeper:admin + +```bash +# Change follower 2 to an observer and change its port from 2182 to 12182 +# Add observer 5 to the ensemble +# Remove Observer 4 from the ensemble +[zkshell: 1] reconfig --add 2=localhost:2781:2786:observer;12182 --add 5=localhost:2781:2786:observer;2185 -remove 4 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:2183 + server.5=localhost:2784:2789:observer;0.0.0.0:2185 + version=1c00000002 + +# -members to appoint the membership +[zkshell: 2] reconfig -members server.1=localhost:2780:2785:participant;0.0.0.0:2181,server.2=localhost:2781:2786:observer;0.0.0.0:12182,server.3=localhost:2782:2787:participant;0.0.0.0:12183 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:12183 + version=f9fe0000000c + +# Change the current config to the one in the myNewConfig.txt +# But only if current config version is 2100000010 +[zkshell: 3] reconfig -file /data/software/zookeeper/zookeeper-test/conf/myNewConfig.txt -v 2100000010 + Committed new configuration: + server.1=localhost:2780:2785:participant;0.0.0.0:2181 + server.2=localhost:2781:2786:observer;0.0.0.0:12182 + server.3=localhost:2782:2787:participant;0.0.0.0:2183 + server.5=localhost:2784:2789:observer;0.0.0.0:2185 + version=220000000c +``` + +## redo +Redo the cmd with the index from history. 
+ +```bash +[zkshell: 4] history + 0 - ls / + 1 - get /consumers + 2 - get /hbase + 3 - ls /hbase + 4 - history +[zkshell: 5] redo 3 + [backup-masters, draining, flush-table-proc, hbaseid, master-maintenance, meta-region-server, namespace, online-snapshot, replication, rs, running, splitWAL, switch, table, table-lock] +``` + +## removewatches +Remove the watches under a node. + +```bash +[zkshell: 1] get -w /brokers + null +[zkshell: 2] removewatches /brokers + WATCHER:: + WatchedEvent state:SyncConnected type:DataWatchRemoved path:/brokers + +``` + +## set +Set/update the data on a path. + +```bash +[zkshell: 50] set /brokers myNewData + +# -s to show the stat of this node. +[zkshell: 51] set -s /quota_test mydata_for_quota_test + cZxid = 0x110000002d + ctime = Thu Mar 07 11:19:07 UTC 2019 + mZxid = 0x1100000038 + mtime = Thu Mar 07 11:42:41 UTC 2019 + pZxid = 0x1100000033 + cversion = 3 + dataVersion = 2 + aclVersion = 0 + ephemeralOwner = 0x0 + dataLength = 21 + numChildren = 3 + +# -v to set the data with CAS,the version can be found from dataVersion using stat. +[zkshell: 52] set -v 0 /brokers myNewData +[zkshell: 53] set -v 0 /brokers myNewData + version No is not valid : /brokers +``` + +## setAcl +Set the Acl permission for one node. + +```bash +[zkshell: 28] addauth digest user1:12345 +[zkshell: 30] setAcl /acl_auth_test auth:user1:12345:crwad +[zkshell: 31] getAcl /acl_auth_test + 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE= + : cdrwa + +# -R to set Acl recursely +[zkshell: 32] ls /acl_auth_test + [child_1, child_2] +[zkshell: 33] getAcl /acl_auth_test/child_2 + 'world,'anyone + : cdrwa +[zkshell: 34] setAcl -R /acl_auth_test auth:user1:12345:crwad +[zkshell: 35] getAcl /acl_auth_test/child_2 + 'digest,'user1:+owfoSBn/am19roBPzR1/MfCblE= + : cdrwa + +# -v set Acl with the acl version which can be found from the aclVersion using the stat +[zkshell: 36] stat /acl_auth_test + cZxid = 0xf9fc0000001c + ctime = Tue Mar 26 16:50:58 CST 2019 + mZxid = 0xf9fc0000001c + mtime = Tue Mar 26 16:50:58 CST 2019 + pZxid = 0xf9fc0000001f + cversion = 2 + dataVersion = 0 + aclVersion = 3 + ephemeralOwner = 0x0 + dataLength = 0 + numChildren = 2 +[zkshell: 37] setAcl -v 3 /acl_auth_test auth:user1:12345:crwad +``` + +## setquota +Set the quota in one path. 
+ +```bash +# -n to limit the number of child nodes(included itself) +[zkshell: 18] setquota -n 2 /quota_test +[zkshell: 19] create /quota_test/child_1 + Created /quota_test/child_1 +[zkshell: 20] create /quota_test/child_2 + Created /quota_test/child_2 +[zkshell: 21] create /quota_test/child_3 + Created /quota_test/child_3 +# Notice:don't have a hard constraint,just log the warning info + 2019-03-07 11:22:36,680 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=3 limit=2 + 2019-03-07 11:22:41,861 [myid:1] - WARN [SyncThread:0:DataTree@374] - Quota exceeded: /quota_test count=4 limit=2 + +# -b to limit the bytes(data length) of one path +[zkshell: 22] setquota -b 5 /brokers +[zkshell: 23] set /brokers "I_love_zookeeper" +# Notice:don't have a hard constraint,just log the warning info + WARN [CommitProcWorkThread-7:DataTree@379] - Quota exceeded: /brokers bytes=4206 limit=5 + +# -N count Hard quota +[zkshell: 3] create /c1 +Created /c1 +[zkshell: 4] setquota -N 2 /c1 +[zkshell: 5] listquota /c1 +absolute path is /zookeeper/quota/c1/zookeeper_limits +Output quota for /c1 count=-1,bytes=-1=;byteHardLimit=-1;countHardLimit=2 +Output stat for /c1 count=2,bytes=0 +[zkshell: 6] create /c1/ch-3 +Count Quota has exceeded : /c1/ch-3 + +# -B byte Hard quota +[zkshell: 3] create /c2 +[zkshell: 4] setquota -B 4 /c2 +[zkshell: 5] set /c2 "foo" +[zkshell: 6] set /c2 "foo-bar" +Bytes Quota has exceeded : /c2 +[zkshell: 7] get /c2 +foo +``` + +## stat +Showing the stat/metadata of one node. + +```bash +[zkshell: 1] stat /hbase + cZxid = 0x4000013d9 + ctime = Wed Jun 27 20:13:07 CST 2018 + mZxid = 0x4000013d9 + mtime = Wed Jun 27 20:13:07 CST 2018 + pZxid = 0x500000001 + cversion = 17 + dataVersion = 0 + aclVersion = 0 + ephemeralOwner = 0x0 + dataLength = 0 + numChildren = 15 +``` + +## sync +Sync the data of one node between leader and followers(Asynchronous sync) + +```bash +[zkshell: 14] sync / +[zkshell: 15] Sync is OK +``` + +## version +Show the version of the ZooKeeper client/CLI + +```bash +[zkshell: 1] version +ZooKeeper CLI version: 3.6.0-SNAPSHOT-29f9b2c1c0e832081f94d59a6b88709c5f1bb3ca, built on 05/30/2019 09:26 GMT +``` + +## whoami +Gives all authentication information added into the current session. + + [zkshell: 1] whoami + Auth scheme: User + ip: 127.0.0.1 + [zkshell: 2] addauth digest user1:12345 + [zkshell: 3] whoami + Auth scheme: User + ip: 127.0.0.1 + digest: user1 \ No newline at end of file diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperHierarchicalQuorums.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperHierarchicalQuorums.md new file mode 100644 index 0000000000000000000000000000000000000000..e11f34f2b5929aac45e6377e3a9053a029033f21 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperHierarchicalQuorums.md @@ -0,0 +1,47 @@ + + +# Introduction to hierarchical quorums + +This document gives an example of how to use hierarchical quorums. The basic idea is +very simple. First, we split servers into groups, and add a line for each group listing +the servers that form this group. Next we have to assign a weight to each server. 
+ +The following example shows how to configure a system with three groups of three servers +each, and we assign a weight of 1 to each server: + + + group.1=1:2:3 + group.2=4:5:6 + group.3=7:8:9 + + weight.1=1 + weight.2=1 + weight.3=1 + weight.4=1 + weight.5=1 + weight.6=1 + weight.7=1 + weight.8=1 + weight.9=1 + + +When running the system, we are able to form a quorum once we have a majority of votes from +a majority of non-zero-weight groups. Groups that have zero weight are discarded and not +considered when forming quorums. Looking at the example, we are able to form a quorum once +we have votes from at least two servers from each of two different groups. + + diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperInternals.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperInternals.md new file mode 100644 index 0000000000000000000000000000000000000000..171f33ddb4c0c28696a2a67bf82a53cd70505fd9 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperInternals.md @@ -0,0 +1,382 @@ + + +# ZooKeeper Internals + +* [Introduction](#ch_Introduction) +* [Atomic Broadcast](#sc_atomicBroadcast) + * [Guarantees, Properties, and Definitions](#sc_guaranteesPropertiesDefinitions) + * [Leader Activation](#sc_leaderElection) + * [Active Messaging](#sc_activeMessaging) + * [Summary](#sc_summary) + * [Comparisons](#sc_comparisons) +* [Consistency Guarantees](#sc_consistency) +* [Quorums](#sc_quorum) +* [Logging](#sc_logging) + * [Developer Guidelines](#sc_developerGuidelines) + * [Logging at the Right Level](#sc_rightLevel) + * [Use of Standard slf4j Idioms](#sc_slf4jIdioms) + + + +## Introduction + +This document contains information on the inner workings of ZooKeeper. +It discusses the following topics: + +* [Atomic Broadcast](#sc_atomicBroadcast) +* [Consistency Guarantees](#sc_consistency) +* [Quorums](#sc_quorum) +* [Logging](#sc_logging) + + + +## Atomic Broadcast + +At the heart of ZooKeeper is an atomic messaging system that keeps all of the servers in sync. + + + +### Guarantees, Properties, and Definitions + +The specific guarantees provided by the messaging system used by ZooKeeper are the following: + +* *_Reliable delivery_* : + If a message `m`, is delivered + by one server, message `m` will be eventually delivered by all servers. + +* *_Total order_* : + If a message `a` is + delivered before message `b` by one server, message `a` will be delivered before `b` by all + servers. + +* *_Causal order_* : + If a message `b` is sent after a message `a` has been delivered by the sender of `b`, + message `a` must be ordered before `b`. If a sender sends `c` after sending `b`, `c` must be ordered after `b`. + +The ZooKeeper messaging system also needs to be efficient, reliable, and easy to +implement and maintain. We make heavy use of messaging, so we need the system to +be able to handle thousands of requests per second. Although we can require at +least k+1 correct servers to send new messages, we must be able to recover from +correlated failures such as power outages. When we implemented the system we had +little time and few engineering resources, so we needed a protocol that is +accessible to engineers and is easy to implement. We found that our protocol +satisfied all of these goals. + +Our protocol assumes that we can construct point-to-point FIFO channels between +the servers. 
While similar services usually assume message delivery that can +lose or reorder messages, our assumption of FIFO channels is very practical +given that we use TCP for communication. Specifically we rely on the following property of TCP: + +* *_Ordered delivery_* : + Data is delivered in the same order it is sent and a message `m` is + delivered only after all messages sent before `m` have been delivered. + (The corollary to this is that if message `m` is lost all messages after `m` will be lost.) + +* *_No message after close_* : + Once a FIFO channel is closed, no messages will be received from it. + +FLP proved that consensus cannot be achieved in asynchronous distributed systems +if failures are possible. To ensure that we achieve consensus in the presence of failures +we use timeouts. However, we rely on time for liveness not for correctness. So, +if timeouts stop working (e.g., skewed clocks) the messaging system may +hang, but it will not violate its guarantees. + +When describing the ZooKeeper messaging protocol we will talk of packets, +proposals, and messages: + +* *_Packet_* : + a sequence of bytes sent through a FIFO channel. + +* *_Proposal_* : + a unit of agreement. Proposals are agreed upon by exchanging packets + with a quorum of ZooKeeper servers. Most proposals contain messages, however the + NEW_LEADER proposal is an example of a proposal that does not contain to a message. + +* *_Message_* : + a sequence of bytes to be atomically broadcast to all ZooKeeper + servers. A message put into a proposal and agreed upon before it is delivered. + +As stated above, ZooKeeper guarantees a total order of messages, and it also +guarantees a total order of proposals. ZooKeeper exposes the total ordering using +a ZooKeeper transaction id (_zxid_). All proposals will be stamped with a zxid when +it is proposed and exactly reflects the total ordering. Proposals are sent to all +ZooKeeper servers and committed when a quorum of them acknowledge the proposal. +If a proposal contains a message, the message will be delivered when the proposal +is committed. Acknowledgement means the server has recorded the proposal to persistent storage. +Our quorums have the requirement that any pair of quorum must have at least one server +in common. We ensure this by requiring that all quorums have size (_n/2+1_) where +n is the number of servers that make up a ZooKeeper service. + +The zxid has two parts: the epoch and a counter. In our implementation the zxid +is a 64-bit number. We use the high order 32-bits for the epoch and the low order +32-bits for the counter. Because zxid consists of two parts, zxid can be represented both as a +number and as a pair of integers, (_epoch, count_). The epoch number represents a +change in leadership. Each time a new leader comes into power it will have its +own epoch number. We have a simple algorithm to assign a unique zxid to a proposal: +the leader simply increments the zxid to obtain a unique zxid for each proposal. _Leadership activation will ensure that only one leader uses a given epoch, so our +simple algorithm guarantees that every proposal will have a unique id._ + +ZooKeeper messaging consists of two phases: + +* *_Leader activation_* : + In this phase a leader establishes the correct state of the system + and gets ready to start making proposals. + +* *_Active messaging_* : + In this phase a leader accepts messages to propose and coordinates message delivery. + +ZooKeeper is a holistic protocol. 
We do not focus on individual proposals; rather, we
look at the stream of proposals as a whole. Our strict ordering allows us to do this
efficiently and greatly simplifies our protocol. Leadership activation embodies
this holistic concept. A leader becomes active only when a quorum of followers
(the leader counts as a follower as well; you can always vote for yourself) has synced
up with the leader, i.e. they have the same state. This state consists of all of the
proposals that the leader believes have been committed and the proposal to follow
the leader, the NEW_LEADER proposal. (Hopefully you are thinking to
yourself, _Does the set of proposals that the leader believes has been committed
include all the proposals that really have been committed?_ The answer is _yes_.
Below, we make clear why.)



### Leader Activation

Leader activation includes leader election (`FastLeaderElection`).
ZooKeeper messaging doesn't care about the exact method of electing a leader as long as the following holds:

* The leader has seen the highest zxid of all the followers.
* A quorum of servers have committed to following the leader.

Of these two requirements only the first, the highest zxid among the followers,
needs to hold for correct operation. The second requirement, a quorum of followers,
just needs to hold with high probability. We are going to recheck the second requirement,
so if a failure happens during or after the leader election and quorum is lost,
we will recover by abandoning leader activation and running another election.

After leader election a single server will be designated as a leader and start
waiting for followers to connect. The rest of the servers will try to connect to
the leader. The leader will sync up with the followers by sending any proposals they
are missing, or if a follower is missing too many proposals, it will send a full
snapshot of the state to the follower.

There is a corner case in which a follower that has proposals, `U`, not seen
by a leader arrives. Proposals are seen in order, so the proposals of `U` will have zxids
higher than the zxids seen by the leader. The follower must have arrived after the
leader election, otherwise the follower would have been elected leader given that
it has seen a higher zxid. Since committed proposals must be seen by a quorum of
servers, and a quorum of servers that elected the leader did not see `U`, the proposals
of `U` have not been committed, so they can be discarded. When the follower connects
to the leader, the leader will tell the follower to discard `U`.

A new leader establishes a zxid to start using for new proposals by getting the
epoch, e, of the highest zxid it has seen and setting the next zxid to use to be
(e+1, 0); a small numeric illustration follows the rules below. After the leader
syncs with a follower, it will propose a NEW_LEADER proposal. Once the NEW_LEADER
proposal has been committed, the leader will activate
and start receiving and issuing proposals.

It all sounds complicated but here are the basic rules of operation during leader
activation:

* A follower will ACK the NEW_LEADER proposal after it has synced with the leader.
* A follower will only ACK a NEW_LEADER proposal with a given zxid from a single server.
* A new leader will COMMIT the NEW_LEADER proposal when a quorum of followers has ACKed it.
* A follower will commit any state it received from the leader when the NEW_LEADER proposal is COMMITTED.
* A new leader will not accept new proposals until the NEW_LEADER proposal has been COMMITTED.
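+
+To make the zxid arithmetic above concrete, here is a small illustrative sketch using
+plain shell arithmetic; it is not ZooKeeper code, and the zxid value is made up for the
+example.
+
+```bash
+# Split an example 64-bit zxid into (epoch, counter), then compute the first
+# zxid a new leader would use for epoch e+1, i.e. (e+1, 0).
+zxid=0x100000002                    # made-up example: epoch 1, counter 2
+epoch=$(( zxid >> 32 ))
+counter=$(( zxid & 0xffffffff ))
+echo "epoch=$epoch counter=$counter"
+
+next=$(( (epoch + 1) << 32 ))       # (e+1, 0) as described above
+printf 'first zxid of the new epoch: 0x%016x\n' "$next"
+```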
+ +If leader election terminates erroneously, we don't have a problem since the +NEW_LEADER proposal will not be committed since the leader will not have quorum. +When this happens, the leader and any remaining followers will timeout and go back +to leader election. + + + +### Active Messaging + +Leader Activation does all the heavy lifting. Once the leader is coronated he can +start blasting out proposals. As long as he remains the leader no other leader can +emerge since no other leader will be able to get a quorum of followers. If a new +leader does emerge, +it means that the leader has lost quorum, and the new leader will clean up any +mess left over during her leadership activation. + +ZooKeeper messaging operates similar to a classic two-phase commit. + +![Two phase commit](images/2pc.jpg) + +All communication channels are FIFO, so everything is done in order. Specifically +the following operating constraints are observed: + +* The leader sends proposals to all followers using + the same order. Moreover, this order follows the order in which requests have been + received. Because we use FIFO channels this means that followers also receive proposals in order. +* Followers process messages in the order they are received. This + means that messages will be ACKed in order and the leader will receive ACKs from + followers in order, due to the FIFO channels. It also means that if message `m` + has been written to non-volatile storage, all messages that were proposed before + `m` have been written to non-volatile storage. +* The leader will issue a COMMIT to all followers as soon as a + quorum of followers have ACKed a message. Since messages are ACKed in order, + COMMITs will be sent by the leader as received by the followers in order. +* COMMITs are processed in order. Followers deliver a proposal + message when that proposal is committed. + + + +### Summary + +So there you go. Why does it work? Specifically, why does a set of proposals +believed by a new leader always contain any proposal that has actually been committed? +First, all proposals have a unique zxid, so unlike other protocols, we never have +to worry about two different values being proposed for the same zxid; followers +(a leader is also a follower) see and record proposals in order; proposals are +committed in order; there is only one active leader at a time since followers only +follow a single leader at a time; a new leader has seen all committed proposals +from the previous epoch since it has seen the highest zxid from a quorum of servers; +any uncommitted proposals from a previous epoch seen by a new leader will be committed +by that leader before it becomes active. + + + +### Comparisons + +Isn't this just Multi-Paxos? No, Multi-Paxos requires some way of assuring that +there is only a single coordinator. We do not count on such assurances. Instead +we use the leader activation to recover from leadership change or old leaders +believing they are still active. + +Isn't this just Paxos? Your active messaging phase looks just like phase 2 of Paxos? +Actually, to us active messaging looks just like 2 phase commit without the need to +handle aborts. Active messaging is different from both in the sense that it has +cross proposal ordering requirements. If we do not maintain strict FIFO ordering of +all packets, it all falls apart. Also, our leader activation phase is different from +both of them. 
In particular, our use of epochs allows us to skip blocks of uncommitted +proposals and to not worry about duplicate proposals for a given zxid. + + + + +## Consistency Guarantees + +The [consistency](https://jepsen.io/consistency) guarantees of ZooKeeper lie between sequential consistency and linearizability. In this section, we explain the exact consistency guarantees that ZooKeeper provides. + +Write operations in ZooKeeper are *linearizable*. In other words, each `write` will appear to take effect atomically at some point between when the client issues the request and receives the corresponding response. This means that the writes performed by all the clients in ZooKeeper can be totally ordered in such a way that respects the real-time ordering of these writes. However, merely stating that write operations are linearizable is meaningless unless we also talk about read operations. + +Read operations in ZooKeeper are *not linearizable* since they can return potentially stale data. This is because a `read` in ZooKeeper is not a quorum operation and a server will respond immediately to a client that is performing a `read`. ZooKeeper does this because it prioritizes performance over consistency for the read use case. However, reads in ZooKeeper are *sequentially consistent*, because `read` operations will appear to take effect in some sequential order that furthermore respects the order of each client's operations. A common pattern to work around this is to issue a `sync` before issuing a `read`. This too does **not** strictly guarantee up-to-date data because `sync` is [not currently a quorum operation](https://issues.apache.org/jira/browse/ZOOKEEPER-1675). To illustrate, consider a scenario where two servers simultaneously think they are the leader, something that could occur if the TCP connection timeout is smaller than `syncLimit * tickTime`. Note that this is [unlikely](https://www.amazon.com/ZooKeeper-Distributed-Coordination-Flavio-Junqueira/dp/1449361307) to occur in practice, but should be kept in mind nevertheless when discussing strict theoretical guarantees. Under this scenario, it is possible that the `sync` is served by the “leader” with stale data, thereby allowing the following `read` to be stale as well. The stronger guarantee of linearizability is provided if an actual quorum operation (e.g., a `write`) is performed before a `read`. + +Overall, the consistency guarantees of ZooKeeper are formally captured by the notion of [ordered sequential consistency](http://webee.technion.ac.il/people/idish/ftp/OSC-IPL17.pdf) or `OSC(U)` to be exact, which lies between sequential consistency and linearizability. + + + +## Quorums + +Atomic broadcast and leader election use the notion of quorum to guarantee a consistent +view of the system. By default, ZooKeeper uses majority quorums, which means that every +voting that happens in one of these protocols requires a majority to vote on. One example is +acknowledging a leader proposal: the leader can only commit once it receives an +acknowledgement from a quorum of servers. + +If we extract the properties that we really need from our use of majorities, we have that we only +need to guarantee that groups of processes used to validate an operation by voting (e.g., acknowledging +a leader proposal) pairwise intersect in at least one server. Using majorities guarantees such a property. +However, there are other ways of constructing quorums different from majorities. 
For example, we can assign +weights to the votes of servers, and say that the votes of some servers are more important. To obtain a quorum, +we get enough votes so that the sum of weights of all votes is larger than half of the total sum of all weights. + +A different construction that uses weights and is useful in wide-area deployments (co-locations) is a hierarchical +one. With this construction, we split the servers into disjoint groups and assign weights to processes. To form +a quorum, we have to get a hold of enough servers from a majority of groups G, such that for each group g in G, +the sum of votes from g is larger than half of the sum of weights in g. Interestingly, this construction enables +smaller quorums. If we have, for example, 9 servers, we split them into 3 groups, and assign a weight of 1 to each +server, then we are able to form quorums of size 4. Note that two subsets of processes composed each of a majority +of servers from each of a majority of groups necessarily have a non-empty intersection. It is reasonable to expect +that a majority of co-locations will have a majority of servers available with high probability. + +With ZooKeeper, we provide a user with the ability of configuring servers to use majority quorums, weights, or a +hierarchy of groups. + + + +## Logging + +Zookeeper uses [slf4j](http://www.slf4j.org/index.html) as an abstraction layer for logging. +[Logback](https://logback.qos.ch/) is chosen the logging backend since ZooKeeper version 3.8.0. +For better embedding support, it is planned in the future to leave the decision of choosing the final logging implementation to the end user. +Therefore, always use the slf4j api to write log statements in the code, but configure logback for how to log at runtime. +Note that slf4j has no FATAL level, former messages at FATAL level have been moved to ERROR level. +For information on configuring logback for +ZooKeeper, see the [Logging](zookeeperAdmin.html#sc_logging) section +of the [ZooKeeper Administrator's Guide.](zookeeperAdmin.html) + + + +### Developer Guidelines + +Please follow the [slf4j manual](http://www.slf4j.org/manual.html) when creating log statements within code. +Also read the [FAQ on performance](http://www.slf4j.org/faq.html#logging\_performance), when creating log statements. Patch reviewers will look for the following: + + + +#### Logging at the Right Level + +There are several levels of logging in slf4j. + +It's important to pick the right one. In order of higher to lower severity: + +1. ERROR level designates error events that might still allow the application to continue running. +1. WARN level designates potentially harmful situations. +1. INFO level designates informational messages that highlight the progress of the application at coarse-grained level. +1. DEBUG Level designates fine-grained informational events that are most useful to debug an application. +1. TRACE Level designates finer-grained informational events than the DEBUG. + +ZooKeeper is typically run in production such that log messages of INFO level +severity and higher (more severe) are output to the log. + + + +#### Use of Standard slf4j Idioms + +_Static Message Logging_ + + LOG.debug("process completed successfully!"); + +However when creating parameterized messages are required, use formatting anchors. + + LOG.debug("got {} messages in {} minutes",new Object[]{count,time}); + +_Naming_ + +Loggers should be named after the class in which they are used. 

    public class Foo {
        private static final Logger LOG = LoggerFactory.getLogger(Foo.class);
        ....
        public Foo() {
            LOG.info("constructing Foo");
        }
    }

_Exception handling_

    try {
      // code
    } catch (XYZException e) {
      // do this
      LOG.error("Something bad happened", e);
      // don't do this (generally)
      // LOG.error(e);
      // why? because the "don't do" case hides the stack trace

      // continue processing here as you need... recover or (re)throw
    }
diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..180f1273c1972468a54d869f1e52f20b55693c88
--- /dev/null
+++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md
@@ -0,0 +1,269 @@

# ZooKeeper Monitor Guide

* [New Metrics System](#Metrics-System)
    * [Metrics](#Metrics)
    * [Prometheus](#Prometheus)
    * [Alerting with Prometheus](#Alerting)
    * [Grafana](#Grafana)
    * [InfluxDB](#influxdb)

* [JMX](#JMX)

* [Four letter words](#four-letter-words)

## New Metrics System
The `New Metrics System` has been available since 3.6.0. It provides abundant metrics
to help users monitor ZooKeeper across these topics: znode, network, disk, quorum, leader election,
client, security, failures, watch/session, requestProcessor, and so forth.

### Metrics
All the metrics are defined in `ServerMetrics.java`.

### Pre-requisites:
- Enable the `Prometheus MetricsProvider` by setting the following in `zoo.cfg`:
  ```conf
  metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
  ```

- The port for Prometheus metrics can be configured using:
  ```conf
  metricsProvider.httpPort=7000 # Default port is 7000
  ```

#### Enabling HTTPS for Prometheus Metrics:

ZooKeeper also supports SSL for Prometheus metrics, which provides secure data transmission. To enable this, configure an HTTPS port and set up SSL certificates as follows:

- Define the HTTPS port:
  ```conf
  metricsProvider.httpsPort=4443
  ```

- Configure the SSL key store (holds the server's private key and certificates):
  ```conf
  metricsProvider.ssl.keyStore.location=/path/to/keystore.jks
  metricsProvider.ssl.keyStore.password=your_keystore_password
  metricsProvider.ssl.keyStore.type=jks # Default is JKS
  ```

- Configure the SSL trust store (used to verify client certificates):
  ```conf
  metricsProvider.ssl.trustStore.location=/path/to/truststore.jks
  metricsProvider.ssl.trustStore.password=your_truststore_password
  metricsProvider.ssl.trustStore.type=jks # Default is JKS
  ```

- **Note**: You can enable both HTTP and HTTPS simultaneously by defining both ports:
  ```conf
  metricsProvider.httpPort=7000
  metricsProvider.httpsPort=4443
  ```
### Prometheus
- Running a [Prometheus](https://prometheus.io/) monitoring service is the easiest way to ingest and record ZooKeeper's metrics.

- Install Prometheus:
  Go to the official download [page](https://prometheus.io/download/) and download the latest release.

- Set Prometheus's scraper to target the ZooKeeper cluster endpoints, then start Prometheus (the target hosts below are placeholders for your ZooKeeper servers and their `metricsProvider.httpPort`):

  ```bash
  cat > /tmp/test-zk.yaml <<EOF
  global:
    scrape_interval: 10s
  scrape_configs:
    - job_name: test-zk
      static_configs:
        - targets: ['zk-host-1:7000','zk-host-2:7000','zk-host-3:7000']
  EOF

  nohup ./prometheus \
      --config.file /tmp/test-zk.yaml \
      --web.listen-address ":9090" \
      --storage.tsdb.path "/tmp/test-zk.data" >> /tmp/test-zk.log 2>&1 &
  ```

- Now Prometheus will scrape zk metrics every 10 seconds.
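
Before wiring alerting on top of Prometheus, it can be useful to confirm that a server is actually exposing metrics. The following is a minimal sketch, not part of the official tooling, that fetches the metrics endpoint over plain HTTP; `zk-host-1` and port `7000` are example values matching the `metricsProvider.httpPort` setting above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsEndpointCheck {
    public static void main(String[] args) throws Exception {
        // Example host/port; use your server and the configured metricsProvider.httpPort.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://zk-host-1:7000/metrics"))
                .GET()
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        // Print the first lines of the Prometheus exposition output (e.g. znode_count, avg_latency).
        response.body().lines().limit(20).forEach(System.out::println);
    }
}
```
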
+ + + +### Alerting with Prometheus +- We recommend that you read [Prometheus Official Alerting Page](https://prometheus.io/docs/practices/alerting/) to explore + some principles of alerting + +- We recommend that you use [Prometheus Alertmanager](https://www.prometheus.io/docs/alerting/latest/alertmanager/) which can + help users to receive alerting email or instant message(by webhook) in a more convenient way + +- We provide an alerting example where these metrics should be taken a special attention. Note: this is for your reference only, + and you need to adjust them according to your actual situation and resource environment + + + use ./promtool check rules rules/zk.yml to check the correctness of the config file + cat rules/zk.yml + + groups: + - name: zk-alert-example + rules: + - alert: ZooKeeper server is down + expr: up == 0 + for: 1m + labels: + severity: critical + annotations: + summary: "Instance {{ $labels.instance }} ZooKeeper server is down" + description: "{{ $labels.instance }} of job {{$labels.job}} ZooKeeper server is down: [{{ $value }}]." + + - alert: create too many znodes + expr: znode_count > 1000000 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} create too many znodes" + description: "{{ $labels.instance }} of job {{$labels.job}} create too many znodes: [{{ $value }}]." + + - alert: create too many connections + expr: num_alive_connections > 50 # suppose we use the default maxClientCnxns: 60 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} create too many connections" + description: "{{ $labels.instance }} of job {{$labels.job}} create too many connections: [{{ $value }}]." + + - alert: znode total occupied memory is too big + expr: approximate_data_size /1024 /1024 > 1 * 1024 # more than 1024 MB(1 GB) + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} znode total occupied memory is too big" + description: "{{ $labels.instance }} of job {{$labels.job}} znode total occupied memory is too big: [{{ $value }}] MB." + + - alert: set too many watch + expr: watch_count > 10000 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} set too many watch" + description: "{{ $labels.instance }} of job {{$labels.job}} set too many watch: [{{ $value }}]." + + - alert: a leader election happens + expr: increase(election_time_count[5m]) > 0 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} a leader election happens" + description: "{{ $labels.instance }} of job {{$labels.job}} a leader election happens: [{{ $value }}]." + + - alert: open too many files + expr: open_file_descriptor_count > 300 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} open too many files" + description: "{{ $labels.instance }} of job {{$labels.job}} open too many files: [{{ $value }}]." + + - alert: fsync time is too long + expr: rate(fsynctime_sum[1m]) > 100 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} fsync time is too long" + description: "{{ $labels.instance }} of job {{$labels.job}} fsync time is too long: [{{ $value }}]." 
+ + - alert: take snapshot time is too long + expr: rate(snapshottime_sum[5m]) > 100 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} take snapshot time is too long" + description: "{{ $labels.instance }} of job {{$labels.job}} take snapshot time is too long: [{{ $value }}]." + + - alert: avg latency is too high + expr: avg_latency > 100 + for: 1m + labels: + severity: warning + annotations: + summary: "Instance {{ $labels.instance }} avg latency is too high" + description: "{{ $labels.instance }} of job {{$labels.job}} avg latency is too high: [{{ $value }}]." + + - alert: JvmMemoryFillingUp + expr: jvm_memory_bytes_used / jvm_memory_bytes_max{area="heap"} > 0.8 + for: 5m + labels: + severity: warning + annotations: + summary: "JVM memory filling up (instance {{ $labels.instance }})" + description: "JVM memory is filling up (> 80%)\n labels: {{ $labels }} value = {{ $value }}\n" + + + + +### Grafana +- Grafana has built-in Prometheus support; just add a Prometheus data source: + + ```bash + Name: test-zk + Type: Prometheus + Url: http://localhost:9090 + Access: proxy + ``` +- Then download and import the default ZooKeeper dashboard [template](https://grafana.com/grafana/dashboards/10465) and customize. +- Users can ask for Grafana dashboard account if having any good improvements by writing a email to **dev@zookeeper.apache.org**. + + + +### InfluxDB + +InfluxDB is an open source time series data that is often used to store metrics +from Zookeeper. You can [download](https://portal.influxdata.com/downloads/) the +open source version or create a [free](https://cloud2.influxdata.com/signup) +account on InfluxDB Cloud. In either case, configure the [Apache Zookeeper +Telegraf plugin](https://www.influxdata.com/integration/apache-zookeeper/) to +start collecting and storing metrics from your Zookeeper clusters into your +InfluxDB instance. There is also an [Apache Zookeeper InfluxDB +template](https://www.influxdata.com/influxdb-templates/zookeeper-monitor/) that +includes the Telegraf configurations and a dashboard to get you set up right +away. 
+ + +## JMX +More details can be found in [here](http://zookeeper.apache.org/doc/current/zookeeperJMX.html) + + +## Four letter words +More details can be found in [here](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands) diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md new file mode 100644 index 0000000000000000000000000000000000000000..4c60a3de7e2f81968208692660716bfb2a9c4d61 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperOver.md @@ -0,0 +1,336 @@ + + +# ZooKeeper + +* [ZooKeeper: A Distributed Coordination Service for Distributed Applications](#ch_DesignOverview) + * [Design Goals](#sc_designGoals) + * [Data model and the hierarchical namespace](#sc_dataModelNameSpace) + * [Nodes and ephemeral nodes](#Nodes+and+ephemeral+nodes) + * [Conditional updates and watches](#Conditional+updates+and+watches) + * [Guarantees](#Guarantees) + * [Simple API](#Simple+API) + * [Implementation](#Implementation) + * [Uses](#Uses) + * [Performance](#Performance) + * [Reliability](#Reliability) + * [The ZooKeeper Project](#The+ZooKeeper+Project) + + + +## ZooKeeper: A Distributed Coordination Service for Distributed Applications + +ZooKeeper is a distributed, open-source coordination service for +distributed applications. It exposes a simple set of primitives that +distributed applications can build upon to implement higher level services +for synchronization, configuration maintenance, and groups and naming. It +is designed to be easy to program to, and uses a data model styled after +the familiar directory tree structure of file systems. It runs in Java and +has bindings for both Java and C. + +Coordination services are notoriously hard to get right. They are +especially prone to errors such as race conditions and deadlock. The +motivation behind ZooKeeper is to relieve distributed applications the +responsibility of implementing coordination services from scratch. + + + +### Design Goals + +**ZooKeeper is simple.** ZooKeeper +allows distributed processes to coordinate with each other through a +shared hierarchical namespace which is organized similarly to a standard +file system. The namespace consists of data registers - called znodes, +in ZooKeeper parlance - and these are similar to files and directories. +Unlike a typical file system, which is designed for storage, ZooKeeper +data is kept in-memory, which means ZooKeeper can achieve high +throughput and low latency numbers. + +The ZooKeeper implementation puts a premium on high performance, +highly available, strictly ordered access. The performance aspects of +ZooKeeper means it can be used in large, distributed systems. The +reliability aspects keep it from being a single point of failure. The +strict ordering means that sophisticated synchronization primitives can +be implemented at the client. + +**ZooKeeper is replicated.** Like the +distributed processes it coordinates, ZooKeeper itself is intended to be +replicated over a set of hosts called an ensemble. + +![ZooKeeper Service](images/zkservice.jpg) + +The servers that make up the ZooKeeper service must all know about +each other. They maintain an in-memory image of state, along with a +transaction logs and snapshots in a persistent store. 
As long as a +majority of the servers are available, the ZooKeeper service will be +available. + +Clients connect to a single ZooKeeper server. The client maintains +a TCP connection through which it sends requests, gets responses, gets +watch events, and sends heart beats. If the TCP connection to the server +breaks, the client will connect to a different server. + +**ZooKeeper is ordered.** ZooKeeper +stamps each update with a number that reflects the order of all +ZooKeeper transactions. Subsequent operations can use the order to +implement higher-level abstractions, such as synchronization +primitives. + +**ZooKeeper is fast.** It is +especially fast in "read-dominant" workloads. ZooKeeper applications run +on thousands of machines, and it performs best where reads are more +common than writes, at ratios of around 10:1. + + + +### Data model and the hierarchical namespace + +The namespace provided by ZooKeeper is much like that of a +standard file system. A name is a sequence of path elements separated by +a slash (/). Every node in ZooKeeper's namespace is identified by a +path. + +#### ZooKeeper's Hierarchical Namespace + +![ZooKeeper's Hierarchical Namespace](images/zknamespace.jpg) + + + +### Nodes and ephemeral nodes + +Unlike standard file systems, each node in a ZooKeeper +namespace can have data associated with it as well as children. It is +like having a file-system that allows a file to also be a directory. +(ZooKeeper was designed to store coordination data: status information, +configuration, location information, etc., so the data stored at each +node is usually small, in the byte to kilobyte range.) We use the term +_znode_ to make it clear that we are talking about +ZooKeeper data nodes. + +Znodes maintain a stat structure that includes version numbers for +data changes, ACL changes, and timestamps, to allow cache validations +and coordinated updates. Each time a znode's data changes, the version +number increases. For instance, whenever a client retrieves data it also +receives the version of the data. + +The data stored at each znode in a namespace is read and written +atomically. Reads get all the data bytes associated with a znode and a +write replaces all the data. Each node has an Access Control List (ACL) +that restricts who can do what. + +ZooKeeper also has the notion of ephemeral nodes. These znodes +exists as long as the session that created the znode is active. When the +session ends the znode is deleted. + + + +### Conditional updates and watches + +ZooKeeper supports the concept of _watches_. +Clients can set a watch on a znode. A watch will be triggered and +removed when the znode changes. When a watch is triggered, the client +receives a packet saying that the znode has changed. If the +connection between the client and one of the ZooKeeper servers is +broken, the client will receive a local notification. + +**New in 3.6.0:** Clients can also set +permanent, recursive watches on a znode that are not removed when triggered +and that trigger for changes on the registered znode as well as any children +znodes recursively. + + + +### Guarantees + +ZooKeeper is very fast and very simple. Since its goal, though, is +to be a basis for the construction of more complicated services, such as +synchronization, it provides a set of guarantees. These are: + +* Sequential Consistency - Updates from a client will be applied + in the order that they were sent. +* Atomicity - Updates either succeed or fail. No partial + results. 
* Single System Image - A client will see the same view of the
  service regardless of the server that it connects to. That is, a
  client will never see an older view of the system even if the
  client fails over to a different server with the same session.
* Reliability - Once an update has been applied, it will persist
  from that time forward until a client overwrites the update.
* Timeliness - The client's view of the system is guaranteed to
  be up-to-date within a certain time bound.

### Simple API

One of the design goals of ZooKeeper is providing a very simple
programming interface. As a result, it supports only these
operations:

* *create* :
  creates a node at a location in the tree

* *delete* :
  deletes a node

* *exists* :
  tests if a node exists at a location

* *get data* :
  reads the data from a node

* *set data* :
  writes data to a node

* *get children* :
  retrieves a list of children of a node

* *sync* :
  waits for data to be propagated

### Implementation

[ZooKeeper Components](#zkComponents) shows the high-level components
of the ZooKeeper service. With the exception of the request processor,
each of the servers that make up the ZooKeeper service replicates its
own copy of each of the components.

![ZooKeeper Components](images/zkcomponents.jpg)

The replicated database is an in-memory database containing the
entire data tree. Updates are logged to disk for recoverability, and
writes are serialized to disk before they are applied to the in-memory
database.

Every ZooKeeper server services clients. Clients connect to
exactly one server to submit requests. Read requests are serviced from
the local replica of each server's database. Requests that change the
state of the service, write requests, are processed by an agreement
protocol.

As part of the agreement protocol all write requests from clients
are forwarded to a single server, called the
_leader_. The rest of the ZooKeeper servers, called
_followers_, receive message proposals from the
leader and agree upon message delivery. The messaging layer takes care
of replacing leaders on failures and syncing followers with
leaders.

ZooKeeper uses a custom atomic messaging protocol. Since the
messaging layer is atomic, ZooKeeper can guarantee that the local
replicas never diverge. When the leader receives a write request, it
calculates what the state of the system will be when the write is
applied and transforms this into a transaction that captures this new
state.

### Uses

The programming interface to ZooKeeper is deliberately simple.
With it, however, you can implement higher order operations, such as
synchronization primitives, group membership, ownership, etc.

### Performance

ZooKeeper is designed to be highly performant. But is it? The
results of ZooKeeper's development team at Yahoo! Research indicate
that it is. (See [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW).) It performs especially
well in applications where reads outnumber writes, since writes
involve synchronizing the state of all servers. (Reads outnumbering
writes is typically the case for a coordination service.)

![ZooKeeper Throughput as the Read-Write Ratio Varies](images/zkperfRW-3.2.jpg)

The [ZooKeeper Throughput as the Read-Write Ratio Varies](#zkPerfRW) figure is a throughput
graph of ZooKeeper release 3.2 running on servers with dual 2GHz
Xeon processors and two SATA 15K RPM drives.
One drive was used as a +dedicated ZooKeeper log device. The snapshots were written to +the OS drive. Write requests were 1K writes and the reads were +1K reads. "Servers" indicate the size of the ZooKeeper +ensemble, the number of servers that make up the +service. Approximately 30 other servers were used to simulate +the clients. The ZooKeeper ensemble was configured such that +leaders do not allow connections from clients. + +######Note +>In version 3.2 r/w performance improved by ~2x compared to + the [previous 3.1 release](http://zookeeper.apache.org/docs/r3.1.1/zookeeperOver.html#Performance). + +Benchmarks also indicate that it is reliable, too. +[Reliability in the Presence of Errors](#zkPerfReliability) shows how a deployment responds to +various failures. The events marked in the figure are the following: + +1. Failure and recovery of a follower +1. Failure and recovery of a different follower +1. Failure of the leader +1. Failure and recovery of two followers +1. Failure of another leader + + + +### Reliability + +To show the behavior of the system over time as +failures are injected we ran a ZooKeeper service made up of +7 machines. We ran the same saturation benchmark as before, +but this time we kept the write percentage at a constant +30%, which is a conservative ratio of our expected +workloads. + + + +![Reliability in the Presence of Errors](images/zkperfreliability.jpg) + +There are a few important observations from this graph. First, if +followers fail and recover quickly, then ZooKeeper is able to sustain a +high throughput despite the failure. But maybe more importantly, the +leader election algorithm allows for the system to recover fast enough +to prevent throughput from dropping substantially. In our observations, +ZooKeeper takes less than 200ms to elect a new leader. Third, as +followers recover, ZooKeeper is able to raise throughput again once they +start processing requests. + + + +### The ZooKeeper Project + +ZooKeeper has been +[successfully used](https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy) +in many industrial applications. It is used at Yahoo! as the +coordination and failure recovery service for Yahoo! Message +Broker, which is a highly scalable publish-subscribe system +managing thousands of topics for replication and data +delivery. It is used by the Fetching Service for Yahoo! +crawler, where it also manages failure recovery. A number of +Yahoo! advertising systems also use ZooKeeper to implement +reliable services. + +All users and developers are encouraged to join the +community and contribute their expertise. See the +[Zookeeper Project on Apache](http://zookeeper.apache.org/) +for more information. 
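
To make the operations listed in the Simple API section above concrete, here is a minimal client sketch exercising create, get data, set data, and delete. The connection string `localhost:2181` and the znode path are example values, and error handling and watch processing are omitted for brevity.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class SimpleApiSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // localhost:2181 is an example connection string.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> connected.countDown());
            connected.await();

            // create: add a znode with some initial data
            zk.create("/demo", "v1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

            // get data: the stat structure carries the version used for conditional updates
            Stat stat = new Stat();
            byte[] data = zk.getData("/demo", false, stat);
            System.out.println(new String(data) + " @ version " + stat.getVersion());

            // set data: conditional on the version we just read
            zk.setData("/demo", "v2".getBytes(), stat.getVersion());

            // delete: -1 skips the version check
            zk.delete("/demo", -1);
            zk.close();
        }
    }
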
+ + diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md new file mode 100644 index 0000000000000000000000000000000000000000..8b3e3dad799fe578578671f01387c5ec4329d1ad --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperReconfig.md @@ -0,0 +1,908 @@ + + +# ZooKeeper Dynamic Reconfiguration + +* [Overview](#ch_reconfig_intro) +* [Changes to Configuration Format](#ch_reconfig_format) + * [Specifying the client port](#sc_reconfig_clientport) + * [Specifying multiple server addresses](#sc_multiaddress) + * [The standaloneEnabled flag](#sc_reconfig_standaloneEnabled) + * [The reconfigEnabled flag](#sc_reconfig_reconfigEnabled) + * [Dynamic configuration file](#sc_reconfig_file) + * [Backward compatibility](#sc_reconfig_backward) +* [Upgrading to 3.5.0](#ch_reconfig_upgrade) +* [Dynamic Reconfiguration of the ZooKeeper Ensemble](#ch_reconfig_dyn) + * [API](#ch_reconfig_api) + * [Security](#sc_reconfig_access_control) + * [Retrieving the current dynamic configuration](#sc_reconfig_retrieving) + * [Modifying the current dynamic configuration](#sc_reconfig_modifying) + * [General](#sc_reconfig_general) + * [Incremental mode](#sc_reconfig_incremental) + * [Non-incremental mode](#sc_reconfig_nonincremental) + * [Conditional reconfig](#sc_reconfig_conditional) + * [Error conditions](#sc_reconfig_errors) + * [Additional comments](#sc_reconfig_additional) +* [Rebalancing Client Connections](#ch_reconfig_rebalancing) + + + +## Overview + +Prior to the 3.5.0 release, the membership and all other configuration +parameters of Zookeeper were static - loaded during boot and immutable at +runtime. Operators resorted to ''rolling restarts'' - a manually intensive +and error-prone method of changing the configuration that has caused data +loss and inconsistency in production. + +Starting with 3.5.0, “rolling restarts” are no longer needed! +ZooKeeper comes with full support for automated configuration changes: the +set of Zookeeper servers, their roles (participant / observer), all ports, +and even the quorum system can be changed dynamically, without service +interruption and while maintaining data consistency. Reconfigurations are +performed immediately, just like other operations in ZooKeeper. Multiple +changes can be done using a single reconfiguration command. The dynamic +reconfiguration functionality does not limit operation concurrency, does +not require client operations to be stopped during reconfigurations, has a +very simple interface for administrators and no added complexity to other +client operations. + +New client-side features allow clients to find out about configuration +changes and to update the connection string (list of servers and their +client ports) stored in their ZooKeeper handle. A probabilistic algorithm +is used to rebalance clients across the new configuration servers while +keeping the extent of client migrations proportional to the change in +ensemble membership. + +This document provides the administrator manual for reconfiguration. +For a detailed description of the reconfiguration algorithms, performance +measurements, and more, please see our paper: + +* *Shraer, A., Reed, B., Malkhi, D., Junqueira, F. Dynamic +Reconfiguration of Primary/Backup Clusters. 
In _USENIX Annual +Technical Conference (ATC)_(2012), 425-437* : + Links: [paper (pdf)](https://www.usenix.org/system/files/conference/atc12/atc12-final74.pdf), [slides (pdf)](https://www.usenix.org/sites/default/files/conference/protected-files/shraer\_atc12\_slides.pdf), [video](https://www.usenix.org/conference/atc12/technical-sessions/presentation/shraer), [hadoop summit slides](http://www.slideshare.net/Hadoop\_Summit/dynamic-reconfiguration-of-zookeeper) + +**Note:** Starting with 3.5.3, the dynamic reconfiguration +feature is disabled by default, and has to be explicitly turned on via +[reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option. + + + +## Changes to Configuration Format + + + +### Specifying the client port + +A client port of a server is the port on which the server accepts plaintext (non-TLS) client connection requests +and secure client port is the port on which the server accepts TLS client connection requests. + +Starting with 3.5.0 the +_clientPort_ and _clientPortAddress_ configuration parameters should no longer be used in zoo.cfg. + +Starting with 3.10.0 the +_secureClientPort_ and _secureClientPortAddress_ configuration parameters should no longer be used in zoo.cfg. + +Instead, this information is now part of the server keyword specification, which +becomes as follows: + + server. = ::[:role];[[:]][;[:]] + +- [New in ZK 3.10.0] The client port specification is optional and is to the right of the + first semicolon. The secure client port specification is also optional and is to the right + of the second semicolon. However, both the client port and secure client port specification + cannot be omitted, at least one of them should be present. If the user intends to omit client + port specification and provide only secure client port specification (TLS-only server), a second + semicolon should still be specified to indicate an empty client port specification (see last + example below). In either spec, the port address is optional, and if not specified it defaults + to "0.0.0.0". +- As usual, role is also optional, it can be _participant_ or _observer_ (_participant_ by default). + +Examples of legal server statements: + + server.5 = 125.23.63.23:1234:1235;1236 (non-TLS server) + server.5 = 125.23.63.23:1234:1235;1236;1237 (non-TLS + TLS server) + server.5 = 125.23.63.23:1234:1235;;1237 (TLS-only server) + server.5 = 125.23.63.23:1234:1235:participant;1236 (non-TLS server) + server.5 = 125.23.63.23:1234:1235:observer;1236 (non-TLS server) + server.5 = 125.23.63.23:1234:1235;125.23.63.24:1236 (non-TLS server) + server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236 (non-TLS server) + server.5 = 125.23.63.23:1234:1235:participant;125.23.63.23:1236;125.23.63.23:1237 (non-TLS + TLS server) + server.5 = 125.23.63.23:1234:1235:participant;;125.23.63.23:1237 (TLS-only server) + + + + +### Specifying multiple server addresses + +Since ZooKeeper 3.6.0 it is possible to specify multiple addresses for each +ZooKeeper server (see [ZOOKEEPER-3188](https://issues.apache.org/jira/projects/ZOOKEEPER/issues/ZOOKEEPER-3188)). +This helps to increase availability and adds network level +resiliency to ZooKeeper. When multiple physical network interfaces are used +for the servers, ZooKeeper is able to bind on all interfaces and runtime switching +to a working interface in case a network error. The different addresses can be +specified in the config using a pipe ('|') character. 
+ +Examples for a valid configurations using multiple addresses: + + server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;2188 + server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889|zoo2-net3:2890:3890;2188 + server.2=zoo2-net1:2888:3888|zoo2-net2:2889:3889;zoo2-net1:2188 + server.2=zoo2-net1:2888:3888:observer|zoo2-net2:2889:3889:observer;2188 + + + +### The _standaloneEnabled_ flag + +Prior to 3.5.0, one could run ZooKeeper in Standalone mode or in a +Distributed mode. These are separate implementation stacks, and +switching between them during run time is not possible. By default (for +backward compatibility) _standaloneEnabled_ is set to +_true_. The consequence of using this default is that +if started with a single server the ensemble will not be allowed to +grow, and if started with more than one server it will not be allowed to +shrink to contain fewer than two participants. + +Setting the flag to _false_ instructs the system +to run the Distributed software stack even if there is only a single +participant in the ensemble. To achieve this the (static) configuration +file should contain: + + standaloneEnabled=false** + +With this setting it is possible to start a ZooKeeper ensemble +containing a single participant and to dynamically grow it by adding +more servers. Similarly, it is possible to shrink an ensemble so that +just a single participant remains, by removing servers. + +Since running the Distributed mode allows more flexibility, we +recommend setting the flag to _false_. We expect that +the legacy Standalone mode will be deprecated in the future. + + + +### The _reconfigEnabled_ flag + +Starting with 3.5.0 and prior to 3.5.3, there is no way to disable +dynamic reconfiguration feature. We would like to offer the option of +disabling reconfiguration feature because with reconfiguration enabled, +we have a security concern that a malicious actor can make arbitrary changes +to the configuration of a ZooKeeper ensemble, including adding a compromised +server to the ensemble. We prefer to leave to the discretion of the user to +decide whether to enable it or not and make sure that the appropriate security +measure are in place. So in 3.5.3 the [reconfigEnabled](zookeeperAdmin.html#sc_advancedConfiguration) configuration option is introduced +such that the reconfiguration feature can be completely disabled and any attempts +to reconfigure a cluster through reconfig API with or without authentication +will fail by default, unless **reconfigEnabled** is set to +**true**. + +To set the option to true, the configuration file (zoo.cfg) should contain: + + reconfigEnabled=true + + + +### Dynamic configuration file + +Starting with 3.5.0 we're distinguishing between dynamic +configuration parameters, which can be changed during runtime, and +static configuration parameters, which are read from a configuration +file when a server boots and don't change during its execution. For now, +the following configuration keywords are considered part of the dynamic +configuration: _server_, _group_ +and _weight_. + +Dynamic configuration parameters are stored in a separate file on +the server (which we call the dynamic configuration file). This file is +linked from the static config file using the new +_dynamicConfigFile_ keyword. 
+ +**Example** + +#### zoo_replicated1.cfg + + + tickTime=2000 + dataDir=/zookeeper/data/zookeeper1 + initLimit=5 + syncLimit=2 + dynamicConfigFile=/zookeeper/conf/zoo_replicated1.cfg.dynamic + + +#### zoo_replicated1.cfg.dynamic + + + server.1=125.23.63.23:2780:2783:participant;2791 + server.2=125.23.63.24:2781:2784:participant;2792 + server.3=125.23.63.25:2782:2785:participant;2793 + + +When the ensemble configuration changes, the static configuration +parameters remain the same. The dynamic parameters are pushed by +ZooKeeper and overwrite the dynamic configuration files on all servers. +Thus, the dynamic configuration files on the different servers are +usually identical (they can only differ momentarily when a +reconfiguration is in progress, or if a new configuration hasn't +propagated yet to some of the servers). Once created, the dynamic +configuration file should not be manually altered. Changed are only made +through the new reconfiguration commands outlined below. Note that +changing the config of an offline cluster could result in an +inconsistency with respect to configuration information stored in the +ZooKeeper log (and the special configuration znode, populated from the +log) and is therefore highly discouraged. + +**Example 2** + +Users may prefer to initially specify a single configuration file. +The following is thus also legal: + +#### zoo_replicated1.cfg + + + tickTime=2000 + dataDir=/zookeeper/data/zookeeper1 + initLimit=5 + syncLimit=2 + clientPort= + + +The configuration files on each server will be automatically split +into dynamic and static files, if they are not already in this format. +So the configuration file above will be automatically transformed into +the two files in Example 1. Note that the clientPort and +clientPortAddress lines (if specified) will be automatically removed +during this process, if they are redundant (as in the example above). +The original static configuration file is backed up (in a .bak +file). + + + +### Backward compatibility + +We still support the old configuration format. For example, the +following configuration file is acceptable (but not recommended): + +#### zoo_replicated1.cfg + + tickTime=2000 + dataDir=/zookeeper/data/zookeeper1 + initLimit=5 + syncLimit=2 + clientPort=2791 + server.1=125.23.63.23:2780:2783:participant + server.2=125.23.63.24:2781:2784:participant + server.3=125.23.63.25:2782:2785:participant + + +During boot, a dynamic configuration file is created and contains +the dynamic part of the configuration as explained earlier. In this +case, however, the line "clientPort=2791" will remain in the static +configuration file of server 1 since it is not redundant -- it was not +specified as part of the "server.1=..." using the format explained in +the section [Changes to Configuration Format](#ch_reconfig_format). If a reconfiguration +is invoked that sets the client port of server 1, we remove +"clientPort=2791" from the static configuration file (the dynamic file +now contain this information as part of the specification of server +1). + + + +## Upgrading to 3.5.0 + +Upgrading a running ZooKeeper ensemble to 3.5.0 should be done only +after upgrading your ensemble to the 3.4.6 release. Note that this is only +necessary for rolling upgrades (if you're fine with shutting down the +system completely, you don't have to go through 3.4.6). 
If you attempt a +rolling upgrade without going through 3.4.6 (for example from 3.4.5), you +may get the following error: + + 2013-01-30 11:32:10,663 [myid:2] - INFO [localhost/127.0.0.1:2784:QuorumCnxManager$Listener@498] - Received connection request /127.0.0.1:60876 + 2013-01-30 11:32:10,663 [myid:2] - WARN [localhost/127.0.0.1:2784:QuorumCnxManager@349] - Invalid server id: -65536 + +During a rolling upgrade, each server is taken down in turn and +rebooted with the new 3.5.0 binaries. Before starting the server with +3.5.0 binaries, we highly recommend updating the configuration file so +that all server statements "server.x=..." contain client ports (see the +section [Specifying the client port](#sc_reconfig_clientport)). As explained earlier +you may leave the configuration in a single file, as well as leave the +clientPort/clientPortAddress statements (although if you specify client +ports in the new format, these statements are now redundant). + + + +## Dynamic Reconfiguration of the ZooKeeper Ensemble + +The ZooKeeper Java and C API were extended with getConfig and reconfig +commands that facilitate reconfiguration. Both commands have a synchronous +(blocking) variant and an asynchronous one. We demonstrate these commands +here using the Java CLI, but note that you can similarly use the C CLI or +invoke the commands directly from a program just like any other ZooKeeper +command. + + + +### API + +There are two sets of APIs for both Java and C client. + +* ***Reconfiguration API*** : + Reconfiguration API is used to reconfigure the ZooKeeper cluster. + Starting with 3.5.3, reconfiguration Java APIs are moved into ZooKeeperAdmin class + from ZooKeeper class, and use of this API requires ACL setup and user + authentication (see [Security](#sc_reconfig_access_control) for more information.). + +* ***Get Configuration API*** : + Get configuration APIs are used to retrieve ZooKeeper cluster configuration information + stored in /zookeeper/config znode. Use of this API does not require specific setup or authentication, + because /zookeeper/config is readable to any users. + + + +### Security + +Prior to **3.5.3**, there is no enforced security mechanism +over reconfig so any ZooKeeper clients that can connect to ZooKeeper server ensemble +will have the ability to change the state of a ZooKeeper cluster via reconfig. +It is thus possible for a malicious client to add compromised server to an ensemble, +e.g., add a compromised server, or remove legitimate servers. +Cases like these could be security vulnerabilities on a case by case basis. + +To address this security concern, we introduced access control over reconfig +starting from **3.5.3** such that only a specific set of users +can use reconfig commands or APIs, and these users need be configured explicitly. In addition, +the setup of ZooKeeper cluster must enable authentication so ZooKeeper clients can be authenticated. + +We also provide an escape hatch for users who operate and interact with a ZooKeeper ensemble in a secured +environment (i.e. behind company firewall). For those users who want to use reconfiguration feature but +don't want the overhead of configuring an explicit list of authorized user for reconfig access checks, +they can set ["skipACL"](zookeeperAdmin.html#sc_authOptions) to "yes" which will +skip ACL check and allow any user to reconfigure cluster. + +Overall, ZooKeeper provides flexible configuration options for the reconfigure feature +that allow a user to choose based on user's security requirement. 
+We leave to the discretion of the user to decide appropriate security measure are in place. + +* ***Access Control*** : + The dynamic configuration is stored in a special znode + ZooDefs.CONFIG_NODE = /zookeeper/config. This node by default is read only + for all users, except super user and users that's explicitly configured for write + access. + Clients that need to use reconfig commands or reconfig API should be configured as users + that have write access to CONFIG_NODE. By default, only the super user has full control including + write access to CONFIG_NODE. Additional users can be granted write access through superuser + by setting an ACL that has write permission associated with specified user. + A few examples of how to setup ACLs and use reconfiguration API with authentication can be found in + ReconfigExceptionTest.java and TestReconfigServer.cc. + +* ***Authentication*** : + Authentication of users is orthogonal to the access control and is delegated to + existing authentication mechanism supported by ZooKeeper's pluggable authentication schemes. + See [ZooKeeper and SASL](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL) for more details on this topic. + +* ***Disable ACL check*** : + ZooKeeper supports ["skipACL"](zookeeperAdmin.html#sc_authOptions) option such that ACL + check will be completely skipped, if skipACL is set to "yes". In such cases any unauthenticated + users can use reconfig API. + + + +### Retrieving the current dynamic configuration + +The dynamic configuration is stored in a special znode +ZooDefs.CONFIG_NODE = /zookeeper/config. The new +`config` CLI command reads this znode (currently it is +simply a wrapper to `get /zookeeper/config`). As with +normal reads, to retrieve the latest committed value you should do a +`sync` first. + + [zk: 127.0.0.1:2791(CONNECTED) 3] config + server.1=localhost:2780:2783:participant;localhost:2791 + server.2=localhost:2781:2784:participant;localhost:2792 + server.3=localhost:2782:2785:participant;localhost:2793 + +Notice the last line of the output. This is the configuration +version. The version equals to the zxid of the reconfiguration command +which created this configuration. The version of the first established +configuration equals to the zxid of the NEWLEADER message sent by the +first successfully established leader. When a configuration is written +to a dynamic configuration file, the version automatically becomes part +of the filename and the static configuration file is updated with the +path to the new dynamic configuration file. Configuration files +corresponding to earlier versions are retained for backup +purposes. + +During boot time the version (if it exists) is extracted from the +filename. The version should never be altered manually by users or the +system administrator. It is used by the system to know which +configuration is most up-to-date. Manipulating it manually can result in +data loss and inconsistency. + +Just like a `get` command, the +`config` CLI command accepts the _-w_ +flag for setting a watch on the znode, and _-s_ flag for +displaying the Stats of the znode. It additionally accepts a new flag +_-c_ which outputs only the version and the client +connection string corresponding to the current configuration. For +example, for the configuration above we would get: + + [zk: 127.0.0.1:2791(CONNECTED) 17] config -c + 400000003 localhost:2791,localhost:2793,localhost:2792 + +Note that when using the API directly, this command is called +`getConfig`. 
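
Because the `config` command is currently just a wrapper around reading `/zookeeper/config`, the same information can also be fetched with an ordinary `getData` call. A small sketch in the style of the Java examples in this section, assuming an already-connected `zk` handle:

    // Read the dynamic configuration stored in ZooDefs.CONFIG_NODE (/zookeeper/config).
    Stat stat = new Stat();
    byte[] config = zk.getData(ZooDefs.CONFIG_NODE, false, stat);
    // The payload is the server list shown above; stat carries the znode's version information.
    System.out.println(new String(config));
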
+ +As any read command it returns the configuration known to the +follower to which your client is connected, which may be slightly +out-of-date. One can use the `sync` command for +stronger guarantees. For example using the Java API: + + zk.sync(ZooDefs.CONFIG_NODE, void_callback, context); + zk.getConfig(watcher, callback, context); + +Note: in 3.5.0 it doesn't really matter which path is passed to the +`sync()` command as all the server's state is brought +up to date with the leader (so one could use a different path instead of +ZooDefs.CONFIG_NODE). However, this may change in the future. + + + +### Modifying the current dynamic configuration + +Modifying the configuration is done through the +`reconfig` command. There are two modes of +reconfiguration: incremental and non-incremental (bulk). The +non-incremental simply specifies the new dynamic configuration of the +system. The incremental specifies changes to the current configuration. +The `reconfig` command returns the new +configuration. + +A few examples are in: *ReconfigTest.java*, +*ReconfigRecoveryTest.java* and +*TestReconfigServer.cc*. + + + +#### General + +**Removing servers:** Any server can +be removed, including the leader (although removing the leader will +result in a short unavailability, see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)). The server will not be shut-down automatically. +Instead, it becomes a "non-voting follower". This is somewhat similar +to an observer in that its votes don't count towards the Quorum of +votes necessary to commit operations. However, unlike a non-voting +follower, an observer doesn't actually see any operation proposals and +does not ACK them. Thus a non-voting follower has a more significant +negative effect on system throughput compared to an observer. +Non-voting follower mode should only be used as a temporary mode, +before shutting the server down, or adding it as a follower or as an +observer to the ensemble. We do not shut the server down automatically +for two main reasons. The first reason is that we do not want all the +clients connected to this server to be immediately disconnected, +causing a flood of connection requests to other servers. Instead, it +is better if each client decides when to migrate independently. The +second reason is that removing a server may sometimes (rarely) be +necessary in order to change it from "observer" to "participant" (this +is explained in the section [Additional comments](#sc_reconfig_additional)). + +Note that the new configuration should have some minimal number of +participants in order to be considered legal. If the proposed change +would leave the cluster with less than 2 participants and standalone +mode is enabled (standaloneEnabled=true, see the section [The _standaloneEnabled_ flag](#sc_reconfig_standaloneEnabled)), the reconfig will not be +processed (BadArgumentsException). If standalone mode is disabled +(standaloneEnabled=false) then it's legal to remain with 1 or more +participants. + +**Adding servers:** Before a +reconfiguration is invoked, the administrator must make sure that a +quorum (majority) of participants from the new configuration are +already connected and synced with the current leader. To achieve this +we need to connect a new joining server to the leader before it is +officially part of the ensemble. 
This is done by starting the joining +server using an initial list of servers which is technically not a +legal configuration of the system but (a) contains the joiner, and (b) +gives sufficient information to the joiner in order for it to find and +connect to the current leader. We list a few different options of +doing this safely. + +1. Initial configuration of joiners is comprised of servers in + the last committed configuration and one or more joiners, where + **joiners are listed as observers.** + For example, if servers D and E are added at the same time to (A, + B, C) and server C is being removed, the initial configuration of + D could be (A, B, C, D) or (A, B, C, D, E), where D and E are + listed as observers. Similarly, the configuration of E could be + (A, B, C, E) or (A, B, C, D, E), where D and E are listed as + observers. **Note that listing the joiners as + observers will not actually make them observers - it will only + prevent them from accidentally forming a quorum with other + joiners.** Instead, they will contact the servers in the + current configuration and adopt the last committed configuration + (A, B, C), where the joiners are absent. Configuration files of + joiners are backed up and replaced automatically as this happens. + After connecting to the current leader, joiners become non-voting + followers until the system is reconfigured and they are added to + the ensemble (as participant or observer, as appropriate). +1. Initial configuration of each joiner is comprised of servers + in the last committed configuration + **the + joiner itself, listed as a participant.** For example, to + add a new server D to a configuration consisting of servers (A, B, + C), the administrator can start D using an initial configuration + file consisting of servers (A, B, C, D). If both D and E are added + at the same time to (A, B, C), the initial configuration of D + could be (A, B, C, D) and the configuration of E could be (A, B, + C, E). Similarly, if D is added and C is removed at the same time, + the initial configuration of D could be (A, B, C, D). Never list + more than one joiner as participant in the initial configuration + (see warning below). +1. Whether listing the joiner as an observer or as participant, + it is also fine not to list all the current configuration servers, + as long as the current leader is in the list. For example, when + adding D we could start D with a configuration file consisting of + just (A, D) if A is the current leader. however this is more + fragile since if A fails before D officially joins the ensemble, D + doesn’t know anyone else and therefore the administrator will have + to intervene and restart D with another server list. + +######Note +>##### Warning + +>Never specify more than one joining server in the same initial +configuration as participants. Currently, the joining servers don’t +know that they are joining an existing ensemble; if multiple joiners +are listed as participants they may form an independent quorum +creating a split-brain situation such as processing operations +independently from your main ensemble. It is OK to list multiple +joiners as observers in an initial config. + +If the configuration of existing servers changes or they become unavailable +before the joiner succeeds to connect and learn about configuration changes, the +joiner may need to be restarted with an updated configuration file in order to be +able to connect. 
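
Once a joiner has connected and synced with the leader as a non-voting follower, the administrator issues the actual membership change, either with the `reconfig` CLI command described below or through the reconfiguration API. As a sketch, assuming the list-based `reconfigure` variant of `ZooKeeperAdmin` and an example server string:

    // zkAdmin is an authenticated org.apache.zookeeper.admin.ZooKeeperAdmin handle.
    List<String> joining = new ArrayList<>();
    joining.add("server.4=125.23.63.26:1234:1235;1236"); // example address and ports
    byte[] newConfig = zkAdmin.reconfigure(joining, null, null, -1, new Stat());
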
+ +Finally, note that once connected to the leader, a joiner adopts +the last committed configuration, in which it is absent (the initial +config of the joiner is backed up before being rewritten). If the +joiner restarts in this state, it will not be able to boot since it is +absent from its configuration file. In order to start it you’ll once +again have to specify an initial configuration. + +**Modifying server parameters:** One +can modify any of the ports of a server, or its role +(participant/observer) by adding it to the ensemble with different +parameters. This works in both the incremental and the bulk +reconfiguration modes. It is not necessary to remove the server and +then add it back; just specify the new parameters as if the server is +not yet in the system. The server will detect the configuration change +and perform the necessary adjustments. See an example in the section +[Incremental mode](#sc_reconfig_incremental) and an exception to this +rule in the section [Additional comments](#sc_reconfig_additional). + +It is also possible to change the Quorum System used by the +ensemble (for example, change the Majority Quorum System to a +Hierarchical Quorum System on the fly). This, however, is only allowed +using the bulk (non-incremental) reconfiguration mode. In general, +incremental reconfiguration only works with the Majority Quorum +System. Bulk reconfiguration works with both Hierarchical and Majority +Quorum Systems. + +**Performance Impact:** There is +practically no performance impact when removing a follower, since it +is not being automatically shut down (the effect of removal is that +the server's votes are no longer being counted). When adding a server, +there is no leader change and no noticeable performance disruption. +For details and graphs please see Figures 6, 7 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters). + +The most significant disruption will happen when a leader change +is caused, in one of the following cases: + +1. Leader is removed from the ensemble. +1. Leader's role is changed from participant to observer. +1. The port used by the leader to send transactions to others + (quorum port) is modified. + +In these cases we perform a leader hand-off where the old leader +nominates a new leader. The resulting unavailability is usually +shorter than when a leader crashes since detecting leader failure is +unnecessary and electing a new leader can usually be avoided during a +hand-off (see Figures 6 and 8 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters)). + +When the client port of a server is modified, it does not drop +existing client connections. New connections to the server will have +to use the new client port. + +**Progress guarantees:** Up to the +invocation of the reconfig operation, a quorum of the old +configuration is required to be available and connected for ZooKeeper +to be able to make progress. Once reconfig is invoked, a quorum of +both the old and of the new configurations must be available. The +final transition happens once (a) the new configuration is activated, +and (b) all operations scheduled before the new configuration is +activated by the leader are committed. Once (a) and (b) happen, only a +quorum of the new configuration is required. Note, however, that +neither (a) nor (b) are visible to a client. 
Specifically, when a +reconfiguration operation commits, it only means that an activation +message was sent out by the leader. It does not necessarily mean that +a quorum of the new configuration got this message (which is required +in order to activate it) or that (b) has happened. If one wants to +make sure that both (a) and (b) has already occurred (for example, in +order to know that it is safe to shut down old servers that were +removed), one can simply invoke an update +(`set-data`, or some other quorum operation, but not +a `sync`) and wait for it to commit. An alternative +way to achieve this was to introduce another round to the +reconfiguration protocol (which, for simplicity and compatibility with +Zab, we decided to avoid). + + + +#### Incremental mode + +The incremental mode allows adding and removing servers to the +current configuration. Multiple changes are allowed. For +example: + + > reconfig -remove 3 -add + server.5=125.23.63.23:1234:1235;1236 + +Both the add and the remove options get a list of comma separated +arguments (no spaces): + + > reconfig -remove 3,4 -add + server.5=localhost:2111:2112;2113,6=localhost:2114:2115:observer;2116 + +The format of the server statement is exactly the same as +described in the section [Specifying the client port](#sc_reconfig_clientport) and +includes the client port. Notice that here instead of "server.5=" you +can just say "5=". In the example above, if server 5 is already in the +system, but has different ports or is not an observer, it is updated +and once the configuration commits becomes an observer and starts +using these new ports. This is an easy way to turn participants into +observers and vice versa or change any of their ports, without +rebooting the server. + +ZooKeeper supports two types of Quorum Systems – the simple +Majority system (where the leader commits operations after receiving +ACKs from a majority of voters) and a more complex Hierarchical +system, where votes of different servers have different weights and +servers are divided into voting groups. Currently, incremental +reconfiguration is allowed only if the last proposed configuration +known to the leader uses a Majority Quorum System +(BadArgumentsException is thrown otherwise). + +Incremental mode - examples using the Java API: + + List leavingServers = new ArrayList(); + leavingServers.add("1"); + leavingServers.add("2"); + byte[] config = zk.reconfig(null, leavingServers, null, -1, new Stat()); + + List leavingServers = new ArrayList(); + List joiningServers = new ArrayList(); + leavingServers.add("1"); + joiningServers.add("server.4=localhost:1234:1235;1236"); + byte[] config = zk.reconfig(joiningServers, leavingServers, null, -1, new Stat()); + + String configStr = new String(config); + System.out.println(configStr); + +There is also an asynchronous API, and an API accepting comma +separated Strings instead of List. See +src/java/main/org/apache/zookeeper/ZooKeeper.java. + + + +#### Non-incremental mode + +The second mode of reconfiguration is non-incremental, whereby a +client gives a complete specification of the new dynamic system +configuration. 
The new configuration can either be given in place or
read from a file:

    > reconfig -file newconfig.cfg

//newconfig.cfg is a dynamic config file, see [Dynamic configuration file](#sc_reconfig_file)

    > reconfig -members
    server.1=125.23.63.23:2780:2783:participant;2791,server.2=125.23.63.24:2781:2784:participant;2792,server.3=125.23.63.25:2782:2785:participant;2793

The new configuration may use a different Quorum System. For
example, you may specify a Hierarchical Quorum System even if the
current ensemble uses a Majority Quorum System.

Bulk mode - example using the Java API:

    List<String> newMembers = new ArrayList<String>();
    newMembers.add("server.1=1111:1234:1235;1236");
    newMembers.add("server.2=1112:1237:1238;1239");
    newMembers.add("server.3=1114:1240:1241:observer;1242");

    byte[] config = zk.reconfig(null, null, newMembers, -1, new Stat());

    String configStr = new String(config);
    System.out.println(configStr);

There is also an asynchronous API, and an API accepting a comma-separated
String containing the new members instead of a List. See
src/java/main/org/apache/zookeeper/ZooKeeper.java.



#### Conditional reconfig

Sometimes (especially in non-incremental mode) a new proposed
configuration depends on what the client "believes" to be the current
configuration, and should be applied only to that configuration.
Specifically, the `reconfig` succeeds only if the
last configuration at the leader has the specified version.

    > reconfig -file <filename> -v <version>

In the previously listed Java examples, instead of -1 one could
specify a configuration version to condition the
reconfiguration.



#### Error conditions

In addition to normal ZooKeeper error conditions, a
reconfiguration may fail for the following reasons:

1. another reconfig is currently in progress
   (ReconfigInProgress)
1. the proposed change would leave the cluster with fewer than 2
   participants while standalone mode is enabled; if standalone
   mode is disabled then it is legal to remain with 1 or more
   participants (BadArgumentsException)
1. no quorum of the new configuration was connected and
   up-to-date with the leader when the reconfiguration processing
   began (NewConfigNoQuorum)
1. `-v x` was specified, but the version
   `y` of the latest configuration is not
   `x` (BadVersionException)
1. an incremental reconfiguration was requested but the last
   configuration at the leader uses a Quorum System which is
   different from the Majority system (BadArgumentsException)
1. syntax error (BadArgumentsException)
1. I/O exception when reading the configuration from a file
   (BadArgumentsException)

Most of these are illustrated by test cases in
*ReconfigFailureCases.java*.



#### Additional comments

**Liveness:** To better understand
the difference between incremental and non-incremental
reconfiguration, suppose that client C1 adds server D to the system
while a different client C2 adds server E. With the non-incremental
mode, each client would first invoke `config` to find
out the current configuration, and then locally create a new list of
servers by adding its own suggested server. The new configuration can
then be submitted using the non-incremental
`reconfig` command. After both reconfigurations
complete, only one of E or D will be added (not both), depending on
which client's request arrives second to the leader, overwriting the
previous configuration. The other client can repeat the process until
its change takes effect.
This method guarantees system-wide progress +(i.e., for one of the clients), but does not ensure that every client +succeeds. To have more control C2 may request to only execute the +reconfiguration in case the version of the current configuration +hasn't changed, as explained in the section [Conditional reconfig](#sc_reconfig_conditional). In this way it may avoid blindly +overwriting the configuration of C1 if C1's configuration reached the +leader first. + +With incremental reconfiguration, both changes will take effect as +they are simply applied by the leader one after the other to the +current configuration, whatever that is (assuming that the second +reconfig request reaches the leader after it sends a commit message +for the first reconfig request -- currently the leader will refuse to +propose a reconfiguration if another one is already pending). Since +both clients are guaranteed to make progress, this method guarantees +stronger liveness. In practice, multiple concurrent reconfigurations +are probably rare. Non-incremental reconfiguration is currently the +only way to dynamically change the Quorum System. Incremental +configuration is currently only allowed with the Majority Quorum +System. + +**Changing an observer into a +follower:** Clearly, changing a server that participates in +voting into an observer may fail if error (2) occurs, i.e., if fewer +than the minimal allowed number of participants would remain. However, +converting an observer into a participant may sometimes fail for a +more subtle reason: Suppose, for example, that the current +configuration is (A, B, C, D), where A is the leader, B and C are +followers and D is an observer. In addition, suppose that B has +crashed. If a reconfiguration is submitted where D is said to become a +follower, it will fail with error (3) since in this configuration, a +majority of voters in the new configuration (any 3 voters), must be +connected and up-to-date with the leader. An observer cannot +acknowledge the history prefix sent during reconfiguration, and +therefore it does not count towards these 3 required servers and the +reconfiguration will be aborted. In case this happens, a client can +achieve the same task by two reconfig commands: first invoke a +reconfig to remove D from the configuration and then invoke a second +command to add it back as a participant (follower). During the +intermediate state D is a non-voting follower and can ACK the state +transfer performed during the second reconfig command. + + + +## Rebalancing Client Connections + +When a ZooKeeper cluster is started, if each client is given the same +connection string (list of servers), the client will randomly choose a +server in the list to connect to, which makes the expected number of +client connections per server the same for each of the servers. We +implemented a method that preserves this property when the set of servers +changes through reconfiguration. See Sections 4 and 5.1 in the [paper](https://www.usenix.org/conference/usenixfederatedconferencesweek/dynamic-recon%EF%AC%81guration-primarybackup-clusters). + +In order for the method to work, all clients must subscribe to +configuration changes (by setting a watch on /zookeeper/config either +directly or through the `getConfig` API command). When +the watch is triggered, the client should read the new configuration by +invoking `sync` and `getConfig` and if +the configuration is indeed new invoke the +`updateServerList` API command. 
To avoid mass client +migration at the same time, it is better to have each client sleep a +random short period of time before invoking +`updateServerList`. + +A few examples can be found in: +*StaticHostProviderTest.java* and +*TestReconfig.cc* + +Example (this is not a recipe, but a simplified example just to +explain the general idea): + + public void process(WatchedEvent event) { + synchronized (this) { + if (event.getType() == EventType.None) { + connected = (event.getState() == KeeperState.SyncConnected); + notifyAll(); + } else if (event.getPath()!=null && event.getPath().equals(ZooDefs.CONFIG_NODE)) { + // in prod code never block the event thread! + zk.sync(ZooDefs.CONFIG_NODE, this, null); + zk.getConfig(this, this, null); + } + } + } + + public void processResult(int rc, String path, Object ctx, byte[] data, Stat stat) { + if (path!=null && path.equals(ZooDefs.CONFIG_NODE)) { + String config[] = ConfigUtils.getClientConfigStr(new String(data)).split(" "); // similar to config -c + long version = Long.parseLong(config[0], 16); + if (this.configVersion == null){ + this.configVersion = version; + } else if (version > this.configVersion) { + hostList = config[1]; + try { + // the following command is not blocking but may cause the client to close the socket and + // migrate to a different server. In practice it's better to wait a short period of time, chosen + // randomly, so that different clients migrate at different times + zk.updateServerList(hostList); + } catch (IOException e) { + System.err.println("Error updating server list"); + e.printStackTrace(); + } + this.configVersion = version; + } + } + } diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperSnapshotAndRestore.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperSnapshotAndRestore.md new file mode 100644 index 0000000000000000000000000000000000000000..576f18fdadd5bcc122528b84b400bf0993ab325d --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperSnapshotAndRestore.md @@ -0,0 +1,68 @@ + + +# ZooKeeper Snapshot and Restore Guide + +Zookeeper is designed to withstand machine failures. A Zookeeper cluster can automatically recover +from temporary failures such as machine reboot. It can also tolerate up to (N-1)/2 permanent +failures for a cluster of N members due to hardware failures or disk corruption, etc. When a member +permanently fails, it loses access to the cluster. If the cluster permanently loses more than +(N-1)/2 members, it disastrously fails and loses quorum. Once the quorum is lost, the cluster +cannot reach consensus and therefore cannot continue to accept updates. + +To recover from such disastrous failures, Zookeeper provides snapshot and restore functionalities to +restore a cluster from a snapshot. + +1. Snapshot and restore operate on the connected server via Admin Server APIs +1. Snapshot and restore are rate limited to protect the server from being overloaded +1. Snapshot and restore require authentication and authorization on the root path with ALL permission. +The supported auth schemas are digest, x509 and IP. + +* [Snapshot](#zookeeper_snapshot) +* [Restore](#zookeeper_restore) + + + +## Snapshot +Recovering a cluster needs a snapshot from a ZooKeeper cluster. Users can periodically take +snapshots from a live server which has the highest zxid and stream out data to a local +or external storage/file system (e.g., S3). 
+

  ```bash
  # The snapshot command takes a snapshot from the server it connects to and is rate limited to once every 5 mins by default
  curl -H 'Authorization: digest root:root_passwd' http://hostname:adminPort/commands/snapshot?streaming=true --output snapshotFileName
  ```


## Restore

Restoring a cluster needs a single snapshot as an input stream. Restore can be used to recover a
cluster after quorum loss or to build a brand-new cluster with seed data.

All members should restore using the same snapshot. The following are the recommended steps:

- Block traffic on the client port or client secure port before the restore starts
- Take a snapshot of the latest database state using the snapshot admin server command if applicable
- For each server
    - Move the files in dataDir and dataLogDir to a different location to prevent the restored database
      from being overwritten when the server restarts after the restore
    - Restore the server using the restore admin server command
- Unblock traffic on the client port or client secure port after the restore completes

  ```bash
  # The restore command takes a snapshot as an input stream and restores the db of the server it connects to. It is rate limited to once every 5 mins by default
  curl -H 'Content-Type:application/octet-stream' -H 'Authorization: digest root:root_passwd' -X POST http://hostname:adminPort/commands/restore --data-binary "@snapshotFileName"
  ```
diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md
new file mode 100644
index 0000000000000000000000000000000000000000..a33e83c33b4dfe8d7f48d45c58f0ac1231915d94
--- /dev/null
+++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperStarted.md
@@ -0,0 +1,373 @@
+

# ZooKeeper Getting Started Guide

* [Getting Started: Coordinating Distributed Applications with ZooKeeper](#getting-started-coordinating-distributed-applications-with-zooKeeper)
    * [Pre-requisites](#sc_Prerequisites)
    * [Download](#sc_Download)
    * [Standalone Operation](#sc_InstallingSingleMode)
    * [Managing ZooKeeper Storage](#sc_FileManagement)
    * [Connecting to ZooKeeper](#sc_ConnectingToZooKeeper)
    * [Programming to ZooKeeper](#sc_ProgrammingToZooKeeper)
    * [Running Replicated ZooKeeper](#sc_RunningReplicatedZooKeeper)
    * [Other Optimizations](#other-optimizations)



## Getting Started: Coordinating Distributed Applications with ZooKeeper

This document contains information to get you started quickly with
ZooKeeper. It is aimed primarily at developers hoping to try it out, and
contains simple installation instructions for a single ZooKeeper server, a
few commands to verify that it is running, and a simple programming
example. Finally, as a convenience, there are a few sections regarding
more complicated installations, for example running replicated
deployments, and optimizing the transaction log. However, for the complete
instructions for commercial deployments, please refer to the [ZooKeeper
Administrator's Guide](zookeeperAdmin.html).



### Pre-requisites

See [System Requirements](zookeeperAdmin.html#sc_systemReq) in the Admin guide.



### Download

To get a ZooKeeper distribution, download a recent
[stable](http://zookeeper.apache.org/releases.html) release from one of the Apache Download
Mirrors.
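
For example, assuming the release you downloaded is version X.Y.Z (substitute the actual version number), you might unpack it and change into the resulting directory like this:

    tar -xzf apache-zookeeper-X.Y.Z-bin.tar.gz
    cd apache-zookeeper-X.Y.Z-bin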
+ + + +### Standalone Operation + +Setting up a ZooKeeper server in standalone mode is +straightforward. The server is contained in a single JAR file, +so installation consists of creating a configuration. + +Once you've downloaded a stable ZooKeeper release unpack +it and cd to the root + +To start ZooKeeper you need a configuration file. Here is a sample, +create it in **conf/zoo.cfg**: + + + tickTime=2000 + dataDir=/var/lib/zookeeper + clientPort=2181 + + +This file can be called anything, but for the sake of this +discussion call +it **conf/zoo.cfg**. Change the +value of **dataDir** to specify an +existing (empty to start with) directory. Here are the meanings +for each of the fields: + +* ***tickTime*** : + the basic time unit in milliseconds used by ZooKeeper. It is + used to do heartbeats and the minimum session timeout will be + twice the tickTime. + +* ***dataDir*** : + the location to store the in-memory database snapshots and, + unless specified otherwise, the transaction log of updates to the + database. + +* ***clientPort*** : + the port to listen for client connections + +Now that you created the configuration file, you can start +ZooKeeper: + + + bin/zkServer.sh start + + +ZooKeeper logs messages using _logback_ -- more detail +available in the +[Logging](zookeeperProgrammers.html#Logging) +section of the Programmer's Guide. You will see log messages +coming to the console (default) and/or a log file depending on +the logback configuration. + +The steps outlined here run ZooKeeper in standalone mode. There is +no replication, so if ZooKeeper process fails, the service will go down. +This is fine for most development situations, but to run ZooKeeper in +replicated mode, please see [Running Replicated +ZooKeeper](#sc_RunningReplicatedZooKeeper). + + + +### Managing ZooKeeper Storage + +For long running production systems ZooKeeper storage must +be managed externally (dataDir and logs). See the section on +[maintenance](zookeeperAdmin.html#sc_maintenance) for +more details. + + + +### Connecting to ZooKeeper + + + $ bin/zkCli.sh -server 127.0.0.1:2181 + + +This lets you perform simple, file-like operations. + +Once you have connected, you should see something like: + + + Connecting to localhost:2181 + ... + Welcome to ZooKeeper! + JLine support is enabled + [zkshell: 0] + +From the shell, type `help` to get a listing of commands that can be executed from the client, as in: + + + [zkshell: 0] help + ZooKeeper -server host:port cmd args + addauth scheme auth + close + config [-c] [-w] [-s] + connect host:port + create [-s] [-e] [-c] [-t ttl] path [data] [acl] + delete [-v version] path + deleteall path + delquota [-n|-b] path + get [-s] [-w] path + getAcl [-s] path + getAllChildrenNumber path + getEphemerals path + history + listquota path + ls [-s] [-w] [-R] path + printwatches on|off + quit + reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*] + redo cmdno + removewatches path [-c|-d|-a] [-l] + set [-s] [-v version] path data + setAcl [-s] [-v version] [-R] path acl + setquota -n|-b val path + stat [-w] path + sync path + + +From here, you can try a few simple commands to get a feel for this simple command line interface. First, start by issuing the list command, as +in `ls`, yielding: + + + [zkshell: 8] ls / + [zookeeper] + + +Next, create a new znode by running `create /zk_test my_data`. 
This creates a new znode and associates the string "my_data" with the node. +You should see: + + + [zkshell: 9] create /zk_test my_data + Created /zk_test + + +Issue another `ls /` command to see what the directory looks like: + + + [zkshell: 11] ls / + [zookeeper, zk_test] + + +Notice that the zk_test directory has now been created. + +Next, verify that the data was associated with the znode by running the `get` command, as in: + + + [zkshell: 12] get /zk_test + my_data + cZxid = 5 + ctime = Fri Jun 05 13:57:06 PDT 2009 + mZxid = 5 + mtime = Fri Jun 05 13:57:06 PDT 2009 + pZxid = 5 + cversion = 0 + dataVersion = 0 + aclVersion = 0 + ephemeralOwner = 0 + dataLength = 7 + numChildren = 0 + + +We can change the data associated with zk_test by issuing the `set` command, as in: + + + [zkshell: 14] set /zk_test junk + cZxid = 5 + ctime = Fri Jun 05 13:57:06 PDT 2009 + mZxid = 6 + mtime = Fri Jun 05 14:01:52 PDT 2009 + pZxid = 5 + cversion = 0 + dataVersion = 1 + aclVersion = 0 + ephemeralOwner = 0 + dataLength = 4 + numChildren = 0 + [zkshell: 15] get /zk_test + junk + cZxid = 5 + ctime = Fri Jun 05 13:57:06 PDT 2009 + mZxid = 6 + mtime = Fri Jun 05 14:01:52 PDT 2009 + pZxid = 5 + cversion = 0 + dataVersion = 1 + aclVersion = 0 + ephemeralOwner = 0 + dataLength = 4 + numChildren = 0 + + +(Notice we did a `get` after setting the data and it did, indeed, change. + +Finally, let's `delete` the node by issuing: + + + [zkshell: 16] delete /zk_test + [zkshell: 17] ls / + [zookeeper] + [zkshell: 18] + + +That's it for now. To explore more, see the [Zookeeper CLI](zookeeperCLI.html). + + + +### Programming to ZooKeeper + +ZooKeeper has a Java bindings and C bindings. They are +functionally equivalent. The C bindings exist in two variants: single +threaded and multi-threaded. These differ only in how the messaging loop +is done. For more information, see the [Programming +Examples in the ZooKeeper Programmer's Guide](zookeeperProgrammers.html#ch_programStructureWithExample) for +sample code using the different APIs. + + + +### Running Replicated ZooKeeper + +Running ZooKeeper in standalone mode is convenient for evaluation, +some development, and testing. But in production, you should run +ZooKeeper in replicated mode. A replicated group of servers in the same +application is called a _quorum_, and in replicated +mode, all servers in the quorum have copies of the same configuration +file. + +######Note +>For replicated mode, a minimum of three servers are required, +and it is strongly recommended that you have an odd number of +servers. If you only have two servers, then you are in a +situation where if one of them fails, there are not enough +machines to form a majority quorum. Two servers are inherently +**less** stable than a single server, because there are two single +points of failure. + +The required +**conf/zoo.cfg** +file for replicated mode is similar to the one used in standalone +mode, but with a few differences. Here is an example: + + tickTime=2000 + dataDir=/var/lib/zookeeper + clientPort=2181 + initLimit=5 + syncLimit=2 + server.1=zoo1:2888:3888 + server.2=zoo2:2888:3888 + server.3=zoo3:2888:3888 + +The new entry, **initLimit** is +timeouts ZooKeeper uses to limit the length of time the ZooKeeper +servers in quorum have to connect to a leader. The entry **syncLimit** limits how far out of date a server can +be from a leader. + +With both of these timeouts, you specify the unit of time using +**tickTime**. 
In this example, the timeout
for initLimit is 5 ticks at 2000 milliseconds a tick, or 10
seconds.

The entries of the form _server.X_ list the
servers that make up the ZooKeeper service. When the server starts up,
it knows which server it is by looking for the file
_myid_ in the data directory. That file contains
the server number, in ASCII.

Finally, note the two port numbers after each server
name: "2888" and "3888". Peers use the former port to connect
to other peers. Such a connection is necessary so that peers
can communicate, for example, to agree upon the order of
updates. More specifically, a ZooKeeper server uses this port
to connect followers to the leader. When a new leader arises, a
follower opens a TCP connection to the leader using this
port. Because the default leader election also uses TCP, we
currently require another port for leader election. This is the
second port in the server entry.

######Note
>If you want to test multiple servers on a single
machine, specify the server name
as _localhost_ with unique quorum &
leader election ports (i.e. 2888:3888, 2889:3889, 2890:3890 in
the example above) for each server.X in that server's config
file. Of course separate _dataDir_s and
distinct _clientPort_s are also necessary
(in the above replicated example, running on a
single _localhost_, you would still have
three config files).

>Please be aware that setting up multiple servers on a single
machine will not create any redundancy. If something were to
happen which caused the machine to die, all of the ZooKeeper
servers would be offline. Full redundancy requires that each
server have its own machine. It must be a completely separate
physical server. Multiple virtual machines on the same physical
host are still vulnerable to the complete failure of that host.

>If you have multiple network interfaces in your ZooKeeper machines,
you can also instruct ZooKeeper to bind on all of your interfaces and
automatically switch to a healthy interface in case of a network failure.
For details, see the [Configuration Parameters](zookeeperAdmin.html#id_multi_address).



### Other Optimizations

There are a couple of other configuration parameters that can
greatly increase performance:

* To get low latencies on updates it is important to
  have a dedicated transaction log directory. By default
  transaction logs are put in the same directory as the data
  snapshots and _myid_ file. The dataLogDir
  parameter specifies a different directory to use for the
  transaction logs (see the sketch below).
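
As a small sketch, a **conf/zoo.cfg** with a dedicated transaction log directory might look like the following. The paths are only illustrative, and ideally dataLogDir points at its own physical device so that transaction log writes do not compete with snapshot I/O:

    tickTime=2000
    dataDir=/var/lib/zookeeper
    dataLogDir=/var/lib/zookeeper-txnlog
    clientPort=2181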
+ diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md new file mode 100644 index 0000000000000000000000000000000000000000..d4abe3854677bcf515dcc4a9483a9fedc503d776 --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTools.md @@ -0,0 +1,698 @@ + + +# A series of tools for ZooKeeper + +* [Scripts](#Scripts) + * [zkServer.sh](#zkServer) + * [zkCli.sh](#zkCli) + * [zkEnv.sh](#zkEnv) + * [zkCleanup.sh](#zkCleanup) + * [zkTxnLogToolkit.sh](#zkTxnLogToolkit) + * [zkSnapShotToolkit.sh](#zkSnapShotToolkit) + * [zkSnapshotRecursiveSummaryToolkit.sh](#zkSnapshotRecursiveSummaryToolkit) + * [zkSnapshotComparer.sh](#zkSnapshotComparer) + +* [Benchmark](#Benchmark) + * [YCSB](#YCSB) + * [zk-smoketest](#zk-smoketest) + +* [Testing](#Testing) + * [Fault Injection Framework](#fault-injection) + * [Byteman](#Byteman) + * [Jepsen Test](#jepsen-test) + + + +## Scripts + + + +### zkServer.sh +A command for the operations for the ZooKeeper server. + +```bash +Usage: ./zkServer.sh {start|start-foreground|stop|version|restart|status|upgrade|print-cmd} +# start the server +./zkServer.sh start + +# start the server in the foreground for debugging +./zkServer.sh start-foreground + +# stop the server +./zkServer.sh stop + +# restart the server +./zkServer.sh restart + +# show the status,mode,role of the server +./zkServer.sh status +JMX enabled by default +Using config: /data/software/zookeeper/conf/zoo.cfg +Mode: standalone + +# Deprecated +./zkServer.sh upgrade + +# print the parameters of the start-up +./zkServer.sh print-cmd + +# show the version of the ZooKeeper server +./zkServer.sh version +Apache ZooKeeper, version 3.6.0-SNAPSHOT 06/11/2019 05:39 GMT + +``` + +The `status` command establishes a client connection to the server to execute diagnostic commands. +When the ZooKeeper cluster is started in client SSL only mode (by omitting the clientPort +from the zoo.cfg), then additional SSL related configuration has to be provided before using +the `./zkServer.sh status` command to find out if the ZooKeeper server is running. An example: + + CLIENT_JVMFLAGS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.ssl.trustStore.location=/tmp/clienttrust.jks -Dzookeeper.ssl.trustStore.password=password -Dzookeeper.ssl.keyStore.location=/tmp/client.jks -Dzookeeper.ssl.keyStore.password=password -Dzookeeper.client.secure=true" ./zkServer.sh status + + + + +### zkCli.sh +Look at the [ZooKeeperCLI](zookeeperCLI.html) + + + +### zkEnv.sh +The environment setting for the ZooKeeper server + +```bash +# the setting of log property +ZOO_LOG_DIR: the directory to store the logs +``` + + + +### zkCleanup.sh +Clean up the old snapshots and transaction logs. + +```bash +Usage: + * args dataLogDir [snapDir] -n count + * dataLogDir -- path to the txn log directory + * snapDir -- path to the snapshot directory + * count -- the number of old snaps/logs you want to keep, value should be greater than or equal to 3 +# Keep the latest 5 logs and snapshots +./zkCleanup.sh -n 5 +``` + + + +### zkTxnLogToolkit.sh +TxnLogToolkit is a command line tool shipped with ZooKeeper which +is capable of recovering transaction log entries with broken CRC. 
+

When run without any command line parameters or with the `-h,--help` argument, it outputs the following help page:

    $ bin/zkTxnLogToolkit.sh
    usage: TxnLogToolkit [-dhrv] txn_log_file_name
        -d,--dump      Dump mode. Dump all entries of the log file. (this is the default)
        -h,--help      Print help message
        -r,--recover   Recovery mode. Re-calculate CRC for broken entries.
        -v,--verbose   Be verbose in recovery mode: print all entries, not just fixed ones.
        -y,--yes       Non-interactive mode: repair all CRC errors without asking

The default behaviour is safe: it dumps the entries of the given
transaction log file to the screen (same as using the `-d,--dump` parameter):

    $ bin/zkTxnLogToolkit.sh log.100000001
    ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
    4/5/18 2:15:58 PM CEST session 0x16295bafcc40000 cxid 0x0 zxid 0x100000001 createSession 30000
    CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
    4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
    4/5/18 2:16:12 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x100000003 createSession 30000
    4/5/18 2:17:34 PM CEST session 0x26295bafcc90000 cxid 0x0 zxid 0x200000001 closeSession null
    4/5/18 2:17:34 PM CEST session 0x16295bd23720000 cxid 0x0 zxid 0x200000002 createSession 30000
    4/5/18 2:18:02 PM CEST session 0x16295bd23720000 cxid 0x2 zxid 0x200000003 create '/andor,#626262,v{s{31,s{'world,'anyone}}},F,1
    EOF reached after 6 txns.

There's a CRC error in the 2nd entry of the above transaction log file. In **dump**
mode, the toolkit only prints this information to the screen without touching the original file. In
**recovery** mode (`-r,--recover` flag) the original file still remains
untouched and all transactions are copied over to a new txn log file with a ".fixed" suffix. It recalculates
CRC values and copies over the calculated value if it doesn't match the original txn entry.
By default, the tool works interactively: it asks for confirmation whenever a CRC error is encountered.

    $ bin/zkTxnLogToolkit.sh -r log.100000001
    ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
    CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
    Would you like to fix it (Yes/No/Abort) ?

Answering **Yes** means the newly calculated CRC value will be written
to the new file. **No** means that the original CRC value will be copied over.
**Abort** aborts the entire operation and exits.
(In this case the ".fixed" file is not deleted and is left in a half-complete state: it contains only the entries which
have already been processed, or only the header if the operation was aborted at the first entry.)

    $ bin/zkTxnLogToolkit.sh -r log.100000001
    ZooKeeper Transactional Log File with dbid 0 txnlog format version 2
    CRC ERROR - 4/5/18 2:16:05 PM CEST session 0x16295bafcc40000 cxid 0x1 zxid 0x100000002 closeSession null
    Would you like to fix it (Yes/No/Abort) ? y
    EOF reached after 6 txns.
    Recovery file log.100000001.fixed has been written with 1 fixed CRC error(s)

The default behaviour of recovery is to be silent: only entries with a CRC error are printed to the screen.
One can turn on verbose mode with the `-v,--verbose` parameter to see all records.
Interactive mode can be turned off with the `-y,--yes` parameter. In this case all CRC errors will be fixed
in the new transaction file.
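
Putting these flags together, a non-interactive, verbose recovery run over the example file above could be invoked as follows (the log file name is only illustrative):

    $ bin/zkTxnLogToolkit.sh -r -v -y log.100000001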
+ + + +### zkSnapShotToolkit.sh +Dump a snapshot file to stdout, showing the detailed information of the each zk-node. + +```bash +# help +./zkSnapShotToolkit.sh +/usr/bin/java +USAGE: SnapshotFormatter [-d|-json] snapshot_file + -d dump the data for each znode + -json dump znode info in json format + +# show the each zk-node info without data content +./zkSnapShotToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d +/zk-latencies_4/session_946 + cZxid = 0x00000f0003110b + ctime = Wed Sep 19 21:58:22 CST 2018 + mZxid = 0x00000f0003110b + mtime = Wed Sep 19 21:58:22 CST 2018 + pZxid = 0x00000f0003110b + cversion = 0 + dataVersion = 0 + aclVersion = 0 + ephemeralOwner = 0x00000000000000 + dataLength = 100 + +# [-d] show the each zk-node info with data content +./zkSnapShotToolkit.sh -d /data/zkdata/version-2/snapshot.fa01000186d +/zk-latencies2/session_26229 + cZxid = 0x00000900007ba0 + ctime = Wed Aug 15 20:13:52 CST 2018 + mZxid = 0x00000900007ba0 + mtime = Wed Aug 15 20:13:52 CST 2018 + pZxid = 0x00000900007ba0 + cversion = 0 + dataVersion = 0 + aclVersion = 0 + ephemeralOwner = 0x00000000000000 + data = eHh4eHh4eHh4eHh4eA== + +# [-json] show the each zk-node info with json format +./zkSnapShotToolkit.sh -json /data/zkdata/version-2/snapshot.fa01000186d +[[1,0,{"progname":"SnapshotFormatter.java","progver":"0.01","timestamp":1559788148637},[{"name":"\/","asize":0,"dsize":0,"dev":0,"ino":1001},[{"name":"zookeeper","asize":0,"dsize":0,"dev":0,"ino":1002},{"name":"config","asize":0,"dsize":0,"dev":0,"ino":1003},[{"name":"quota","asize":0,"dsize":0,"dev":0,"ino":1004},[{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1005},{"name":"zookeeper_limits","asize":52,"dsize":52,"dev":0,"ino":1006},{"name":"zookeeper_stats","asize":15,"dsize":15,"dev":0,"ino":1007}]]],{"name":"test","asize":0,"dsize":0,"dev":0,"ino":1008}]] +``` + + +### zkSnapshotRecursiveSummaryToolkit.sh +Recursively collect and display child count and data size for a selected node. + + $./zkSnapshotRecursiveSummaryToolkit.sh + USAGE: + + SnapshotRecursiveSummary + + snapshot_file: path to the zookeeper snapshot + starting_node: the path in the zookeeper tree where the traversal should begin + max_depth: defines the depth where the tool still writes to the output. 0 means there is no depth limit, every non-leaf node's stats will be displayed, 1 means it will only contain the starting node's and it's children's stats, 2 ads another level and so on. This ONLY affects the level of details displayed, NOT the calculation. + +```bash +# recursively collect and display child count and data for the root node and 2 levels below it +./zkSnapshotRecursiveSummaryToolkit.sh /data/zkdata/version-2/snapshot.fa01000186d / 2 + +/ + children: 1250511 + data: 1952186580 +-- /zookeeper +-- children: 1 +-- data: 0 +-- /solr +-- children: 1773 +-- data: 8419162 +---- /solr/configs +---- children: 1640 +---- data: 8407643 +---- /solr/overseer +---- children: 6 +---- data: 0 +---- /solr/live_nodes +---- children: 3 +---- data: 0 +``` + + + +### zkSnapshotComparer.sh +SnapshotComparer is a tool that loads and compares two snapshots with configurable threshold and various filters, and outputs information about the delta. + +The delta includes specific znode paths added, updated, deleted comparing one snapshot to another. + +It's useful in use cases that involve snapshot analysis, such as offline data consistency checking, and data trending analysis (e.g. what's growing under which zNode path during when). 
+ +This tool only outputs information about permanent nodes, ignoring both sessions and ephemeral nodes. + +It provides two tuning parameters to help filter out noise: +1. `--nodes` Threshold number of children added/removed; +2. `--bytes` Threshold number of bytes added/removed. + +#### Locate Snapshots +Snapshots can be found in [Zookeeper Data Directory](zookeeperAdmin.html#The+Data+Directory) which configured in [conf/zoo.cfg](zookeeperStarted.html#sc_InstallingSingleMode) when set up Zookeeper server. + +#### Supported Snapshot Formats +This tool supports uncompressed snapshot format, and compressed snapshot file formats: `snappy` and `gz`. Snapshots with different formats can be compared using this tool directly without decompression. + +#### Running the Tool +Running the tool with no command line argument or an unrecognized argument, it outputs the following help page: + +``` +usage: java -cp org.apache.zookeeper.server.SnapshotComparer + -b,--bytes (Required) The node data delta size threshold, in bytes, for printing the node. + -d,--debug Use debug output. + -i,--interactive Enter interactive mode. + -l,--left (Required) The left snapshot file. + -n,--nodes (Required) The descendant node delta size threshold, in nodes, for printing the node. + -r,--right (Required) The right snapshot file. +``` +Example Command: + +``` +./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1 +``` + +Example Output: +``` +... +Deserialized snapshot in snapshot.44 in 0.002741 seconds +Processed data tree in 0.000361 seconds +Node count: 10 +Total size: 0 +Max depth: 4 +Count of nodes at depth 0: 1 +Count of nodes at depth 1: 2 +Count of nodes at depth 2: 4 +Count of nodes at depth 3: 3 + +Node count: 22 +Total size: 2903 +Max depth: 5 +Count of nodes at depth 0: 1 +Count of nodes at depth 1: 2 +Count of nodes at depth 2: 4 +Count of nodes at depth 3: 7 +Count of nodes at depth 4: 8 + +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 0 +Node found in both trees. Delta: 2903 bytes, 12 descendants +Analysis for depth 1 +Node /zk_test found in both trees. Delta: 2903 bytes, 12 descendants +Analysis for depth 2 +Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants +Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants +Analysis for depth 3 +Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants +Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0 +Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants +Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3 +Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2 +Analysis for depth 4 +Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0 +Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0 +Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0 +Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0 +Node /zk_test/snappy/test_2/xyz found only in right tree. Descendant size: 33. Descendant count: 0 +Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0 +All layers compared. 
+``` + +#### Interactive Mode +Use "-i" or "--interactive" to enter interactive mode: +``` +./bin/zkSnapshotComparer.sh -l /zookeeper-data/backup/snapshot.d.snappy -r /zookeeper-data/backup/snapshot.44 -b 2 -n 1 -i +``` + +There are three options to proceed: +``` +- Press enter to move to print current depth layer; +- Type a number to jump to and print all nodes at a given depth; +- Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'. +``` + +Note: As indicated by the interactive messages, the tool only shows analysis on the result that filtered by tuning parameters bytes threshold and nodes threshold. + +Press enter to print current depth layer: + +``` +Current depth is 0 +Press enter to move to print current depth layer; +... +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 0 +Node found in both trees. Delta: 2903 bytes, 12 descendants +``` + +Type a number to jump to and print all nodes at a given depth: + +(Jump forward) + +``` +Current depth is 1 +... +Type a number to jump to and print all nodes at a given depth; +... +3 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 3 +Node /zk_test/gz/12345 found in both trees. Delta: 9 bytes, 1 descendants +Node /zk_test/gz/a found only in right tree. Descendant size: 721. Descendant count: 0 +Filtered node /zk_test/gz/anotherOne of left size 0, right size 0 +Filtered right node /zk_test/gz/b of size 0 +Node /zk_test/snappy/anotherTest found in both trees. Delta: 1738 bytes, 2 descendants +Node /zk_test/snappy/test_1 found only in right tree. Descendant size: 344. Descendant count: 3 +Node /zk_test/snappy/test_2 found only in right tree. Descendant size: 91. Descendant count: 2 +``` + +(Jump back) + +``` +Current depth is 3 +... +Type a number to jump to and print all nodes at a given depth; +... +0 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 0 +Node found in both trees. Delta: 2903 bytes, 12 descendants +``` + +Out of range depth is handled: + +``` +Current depth is 1 +... +Type a number to jump to and print all nodes at a given depth; +... +10 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Depth must be in range [0, 4] +``` + +Enter an ABSOLUTE path to print the immediate subtree of a node: + +``` +Current depth is 3 +... +Enter an ABSOLUTE path to print the immediate subtree of a node. +/zk_test +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for node /zk_test +Node /zk_test/gz found in both trees. Delta: 730 bytes, 3 descendants +Node /zk_test/snappy found in both trees. Delta: 2173 bytes, 9 descendants +``` + +Invalid path is handled: + +``` +Current depth is 3 +... +Enter an ABSOLUTE path to print the immediate subtree of a node. +/non-exist-path +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for node /non-exist-path +Path /non-exist-path is neither found in left tree nor right tree. +``` + +Invalid input is handled: +``` +Current depth is 1 +- Press enter to move to print current depth layer; +- Type a number to jump to and print all nodes at a given depth; +- Enter an ABSOLUTE path to print the immediate subtree of a node. Path must start with '/'. 
+12223999999999999999999999999999999999999 +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Input 12223999999999999999999999999999999999999 is not valid. Depth must be in range [0, 4]. Path must be an absolute path which starts with '/'. +``` + +Exit interactive mode automatically when all layers are compared: + +``` +Printing analysis for nodes difference larger than 2 bytes or node count difference larger than 1. +Analysis for depth 4 +Node /zk_test/gz/12345/abcdef found only in right tree. Descendant size: 9. Descendant count: 0 +Node /zk_test/snappy/anotherTest/abc found only in right tree. Descendant size: 1738. Descendant count: 0 +Filtered right node /zk_test/snappy/anotherTest/abcd of size 0 +Node /zk_test/snappy/test_1/a found only in right tree. Descendant size: 93. Descendant count: 0 +Node /zk_test/snappy/test_1/b found only in right tree. Descendant size: 251. Descendant count: 0 +Filtered right node /zk_test/snappy/test_1/c of size 0 +Node /zk_test/snappy/test_2/xyz found only in right tree. Descendant size: 33. Descendant count: 0 +Node /zk_test/snappy/test_2/y found only in right tree. Descendant size: 58. Descendant count: 0 +All layers compared. +``` + +Or use `^c` to exit interactive mode anytime. + + + + +## Benchmark + + + +### YCSB + +#### Quick Start + +This section describes how to run YCSB on ZooKeeper. + +#### 1. Start ZooKeeper Server(s) + +#### 2. Install Java and Maven + +#### 3. Set Up YCSB + +Git clone YCSB and compile: + + git clone http://github.com/brianfrankcooper/YCSB.git + # more details in the landing page for instructions on downloading YCSB(https://github.com/brianfrankcooper/YCSB#getting-started). + cd YCSB + mvn -pl site.ycsb:zookeeper-binding -am clean package -DskipTests + +#### 4. Provide ZooKeeper Connection Parameters + +Set connectString, sessionTimeout, watchFlag in the workload you plan to run. + +- `zookeeper.connectString` +- `zookeeper.sessionTimeout` +- `zookeeper.watchFlag` + * A parameter for enabling ZooKeeper's watch, optional values:true or false.the default value is false. + * This parameter cannot test the watch performance, but for testing what effect will take on the read/write requests when enabling the watch. + + ```bash + ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.watchFlag=true + ``` + +Or, you can set configs with the shell command, EG: + + # create a /benchmark namespace for sake of cleaning up the workspace after test. + # e.g the CLI:create /benchmark + ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p zookeeper.sessionTimeout=30000 + +#### 5. Load data and run tests + +Load the data: + + # -p recordcount,the count of records/paths you want to insert + ./bin/ycsb load zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p recordcount=10000 > outputLoad.txt + +Run the workload test: + + # YCSB workloadb is the most suitable workload for read-heavy workload for the ZooKeeper in the real world. 
+ + # -p fieldlength, test the length of value/data-content took effect on performance + ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldlength=1000 + + # -p fieldcount + ./bin/ycsb run zookeeper -s -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p fieldcount=20 + + # -p hdrhistogram.percentiles,show the hdrhistogram benchmark result + ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p hdrhistogram.percentiles=10,25,50,75,90,95,99,99.9 -p histogram.buckets=500 + + # -threads: multi-clients test, increase the **maxClientCnxns** in the zoo.cfg to handle more connections. + ./bin/ycsb run zookeeper -threads 10 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark + + # show the timeseries benchmark result + ./bin/ycsb run zookeeper -threads 1 -P workloads/workloadb -p zookeeper.connectString=127.0.0.1:2181/benchmark -p measurementtype=timeseries -p timeseries.granularity=50 + + # cluster test + ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181,192.168.10.45:2181,192.168.10.27:2181/benchmark + + # test leader's read/write performance by setting zookeeper.connectString to leader's(192.168.10.43:2181) + ./bin/ycsb run zookeeper -P workloads/workloadb -p zookeeper.connectString=192.168.10.43:2181/benchmark + + # test for large znode(by default: jute.maxbuffer is 1048575 bytes/1 MB ). Notice:jute.maxbuffer should also be set the same value in all the zk servers. + ./bin/ycsb run zookeeper -jvm-args="-Djute.maxbuffer=4194304" -s -P workloads/workloadc -p zookeeper.connectString=127.0.0.1:2181/benchmark + + # Cleaning up the workspace after finishing the benchmark. + # e.g the CLI:deleteall /benchmark + + + + +### zk-smoketest + +**zk-smoketest** provides a simple smoketest client for a ZooKeeper ensemble. Useful for verifying new, updated, +existing installations. More details are [here](https://github.com/phunt/zk-smoketest). + + + + +## Testing + + + +### Fault Injection Framework + + + +#### Byteman + +- **Byteman** is a tool which makes it easy to trace, monitor and test the behaviour of Java application and JDK runtime code. +It injects Java code into your application methods or into Java runtime methods without the need for you to recompile, repackage or even redeploy your application. +Injection can be performed at JVM startup or after startup while the application is still running. +- Visit the official [website](https://byteman.jboss.org/) to download the latest release +- A brief tutorial can be found [here](https://developer.jboss.org/wiki/ABytemanTutorial) + + ```bash + Preparations: + # attach the byteman to 3 zk servers during runtime + # 55001,55002,55003 is byteman binding port; 714,740,758 is the zk server pid + ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55001 714 + ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55002 740 + ./bminstall.sh -b -Dorg.jboss.byteman.transform.all -Dorg.jboss.byteman.verbose -p 55003 758 + + # load the fault injection script + ./bmsubmit.sh -p 55002 -l my_zk_fault_injection.btm + # unload the fault injection script + ./bmsubmit.sh -p 55002 -u my_zk_fault_injectionr.btm + ``` + +Look at the below examples to customize your byteman fault injection script + +Example 1: This script makes leader's zxid roll over, to force re-election. 
+ +```bash +cat zk_leader_zxid_roll_over.btm + +RULE trace zk_leader_zxid_roll_over +CLASS org.apache.zookeeper.server.quorum.Leader +METHOD propose +IF true +DO + traceln("*** Leader zxid has rolled over, forcing re-election ***"); + $1.zxid = 4294967295L +ENDRULE +``` + +Example 2: This script makes the leader drop the ping packet to a specific follower. +The leader will close the **LearnerHandler** with that follower, and the follower will enter the state:LOOKING +then re-enter the quorum with the state:FOLLOWING + +```bash +cat zk_leader_drop_ping_packet.btm + +RULE trace zk_leader_drop_ping_packet +CLASS org.apache.zookeeper.server.quorum.LearnerHandler +METHOD ping +AT ENTRY +IF $0.sid == 2 +DO + traceln("*** Leader drops ping packet to sid: 2 ***"); + return; +ENDRULE +``` + +Example 3: This script makes one follower drop ACK packet which has no big effect in the broadcast phrase, since after receiving +the majority of ACKs from the followers, the leader can commit that proposal + +```bash +cat zk_leader_drop_ping_packet.btm + +RULE trace zk.follower_drop_ack_packet +CLASS org.apache.zookeeper.server.quorum.SendAckRequestProcessor +METHOD processRequest +AT ENTRY +IF true +DO + traceln("*** Follower drops ACK packet ***"); + return; +ENDRULE +``` + + + + +### Jepsen Test +A framework for distributed systems verification, with fault injection. +Jepsen has been used to verify everything from eventually-consistent commutative databases to linearizable coordination systems to distributed task schedulers. +more details can be found in [jepsen-io](https://github.com/jepsen-io/jepsen) + +Running the [Dockerized Jepsen](https://github.com/jepsen-io/jepsen/blob/master/docker/README.md) is the simplest way to use the Jepsen. + +Installation: + +```bash +git clone git@github.com:jepsen-io/jepsen.git +cd docker +# maybe a long time for the first init. +./up.sh +# docker ps to check one control node and five db nodes are up +docker ps + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + 8265f1d3f89c docker_control "/bin/sh -c /init.sh" 9 hours ago Up 4 hours 0.0.0.0:32769->8080/tcp jepsen-control + 8a646102da44 docker_n5 "/run.sh" 9 hours ago Up 3 hours 22/tcp jepsen-n5 + 385454d7e520 docker_n1 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n1 + a62d6a9d5f8e docker_n2 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n2 + 1485e89d0d9a docker_n3 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n3 + 27ae01e1a0c5 docker_node "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-node + 53c444b00ebd docker_n4 "/run.sh" 9 hours ago Up 9 hours 22/tcp jepsen-n4 +``` + +Running & Test + +```bash +# Enter into the container:jepsen-control +docker exec -it jepsen-control bash +# Test +cd zookeeper && lein run test --concurrency 10 +# See something like the following to assert that ZooKeeper has passed the Jepsen test +INFO [2019-04-01 11:25:23,719] jepsen worker 8 - jepsen.util 8 :ok :read 2 +INFO [2019-04-01 11:25:23,722] jepsen worker 3 - jepsen.util 3 :invoke :cas [0 4] +INFO [2019-04-01 11:25:23,760] jepsen worker 3 - jepsen.util 3 :fail :cas [0 4] +INFO [2019-04-01 11:25:23,791] jepsen worker 1 - jepsen.util 1 :invoke :read nil +INFO [2019-04-01 11:25:23,794] jepsen worker 1 - jepsen.util 1 :ok :read 2 +INFO [2019-04-01 11:25:24,038] jepsen worker 0 - jepsen.util 0 :invoke :write 4 +INFO [2019-04-01 11:25:24,073] jepsen worker 0 - jepsen.util 0 :ok :write 4 +............................................................................... +Everything looks good! 
ヽ(‘ー`)ノ + +``` + +Reference: +read [this blog](https://aphyr.com/posts/291-call-me-maybe-zookeeper) to learn more about the Jepsen test for the Zookeeper. diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md new file mode 100644 index 0000000000000000000000000000000000000000..366c0a9f081dd2e7d7e80e186d4a805d73a639bc --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperTutorial.md @@ -0,0 +1,666 @@ + + +# Programming with ZooKeeper - A basic tutorial + +* [Introduction](#ch_Introduction) +* [Barriers](#sc_barriers) +* [Producer-Consumer Queues](#sc_producerConsumerQueues) +* [Complete example](#Complete+example) + * [Queue test](#Queue+test) + * [Barrier test](#Barrier+test) + * [Source Listing](#sc_sourceListing) + + + +## Introduction + +In this tutorial, we show simple implementations of barriers and +producer-consumer queues using ZooKeeper. We call the respective classes Barrier and Queue. +These examples assume that you have at least one ZooKeeper server running. + +Both primitives use the following common excerpt of code: + + static ZooKeeper zk = null; + static Integer mutex; + + String root; + + SyncPrimitive(String address) { + if(zk == null){ + try { + System.out.println("Starting ZK:"); + zk = new ZooKeeper(address, 3000, this); + mutex = new Integer(-1); + System.out.println("Finished starting ZK: " + zk); + } catch (IOException e) { + System.out.println(e.toString()); + zk = null; + } + } + } + + synchronized public void process(WatchedEvent event) { + synchronized (mutex) { + mutex.notify(); + } + } + + + +Both classes extend SyncPrimitive. In this way, we execute steps that are +common to all primitives in the constructor of SyncPrimitive. To keep the examples +simple, we create a ZooKeeper object the first time we instantiate either a barrier +object or a queue object, and we declare a static variable that is a reference +to this object. The subsequent instances of Barrier and Queue check whether a +ZooKeeper object exists. Alternatively, we could have the application creating a +ZooKeeper object and passing it to the constructor of Barrier and Queue. + +We use the process() method to process notifications triggered due to watches. +In the following discussion, we present code that sets watches. A watch is internal +structure that enables ZooKeeper to notify a client of a change to a node. For example, +if a client is waiting for other clients to leave a barrier, then it can set a watch and +wait for modifications to a particular node, which can indicate that it is the end of the wait. +This point becomes clear once we go over the examples. + + + +## Barriers + +A barrier is a primitive that enables a group of processes to synchronize the +beginning and the end of a computation. The general idea of this implementation +is to have a barrier node that serves the purpose of being a parent for individual +process nodes. Suppose that we call the barrier node "/b1". Each process "p" then +creates a node "/b1/p". Once enough processes have created their corresponding +nodes, joined processes can start the computation. 
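+
+Before walking through the implementation, here is a minimal usage sketch of how a
+client process might drive the Barrier class developed below. The server address and
+barrier path are the example values used in this tutorial; the group size of 3 is an
+illustrative assumption.
+
+    Barrier b = new Barrier("zoo1.foo.com:2181", "/b1", 3);
+    try {
+        b.enter();   // blocks until all 3 processes have created their child nodes
+        // ... run the distributed computation ...
+        b.leave();   // blocks until all 3 processes have deleted their child nodes
+    } catch (KeeperException e) {
+        System.out.println(e.toString());
+    } catch (InterruptedException e) {
+        System.out.println(e.toString());
+    }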
+ +In this example, each process instantiates a Barrier object, and its constructor takes as parameters: + +* the address of a ZooKeeper server (e.g., "zoo1.foo.com:2181") +* the path of the barrier node on ZooKeeper (e.g., "/b1") +* the size of the group of processes + +The constructor of Barrier passes the address of the Zookeeper server to the +constructor of the parent class. The parent class creates a ZooKeeper instance if +one does not exist. The constructor of Barrier then creates a +barrier node on ZooKeeper, which is the parent node of all process nodes, and +we call root (**Note:** This is not the ZooKeeper root "/"). + + /** + * Barrier constructor + * + * @param address + * @param root + * @param size + */ + Barrier(String address, String root, int size) { + super(address); + this.root = root; + this.size = size; + // Create barrier node + if (zk != null) { + try { + Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + System.out.println("Interrupted exception"); + } + } + + // My node name + try { + name = new String(InetAddress.getLocalHost().getCanonicalHostName().toString()); + } catch (UnknownHostException e) { + System.out.println(e.toString()); + } + } + + +To enter the barrier, a process calls enter(). The process creates a node under +the root to represent it, using its host name to form the node name. It then wait +until enough processes have entered the barrier. A process does it by checking +the number of children the root node has with "getChildren()", and waiting for +notifications in the case it does not have enough. To receive a notification when +there is a change to the root node, a process has to set a watch, and does it +through the call to "getChildren()". In the code, we have that "getChildren()" +has two parameters. The first one states the node to read from, and the second is +a boolean flag that enables the process to set a watch. In the code the flag is true. + + /** + * Join barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + + boolean enter() throws KeeperException, InterruptedException{ + zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.EPHEMERAL); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + + if (list.size() < size) { + mutex.wait(); + } else { + return true; + } + } + } + } + + +Note that enter() throws both KeeperException and InterruptedException, so it is +the responsibility of the application to catch and handle such exceptions. + +Once the computation is finished, a process calls leave() to leave the barrier. +First it deletes its corresponding node, and then it gets the children of the root +node. If there is at least one child, then it waits for a notification (obs: note +that the second parameter of the call to getChildren() is true, meaning that +ZooKeeper has to set a watch on the root node). Upon reception of a notification, +it checks once more whether the root node has any children. 
+ + /** + * Wait until all reach barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + + boolean leave() throws KeeperException, InterruptedException { + zk.delete(root + "/" + name, 0); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() > 0) { + mutex.wait(); + } else { + return true; + } + } + } + } + + + + +## Producer-Consumer Queues + +A producer-consumer queue is a distributed data structure that groups of processes +use to generate and consume items. Producer processes create new elements and add +them to the queue. Consumer processes remove elements from the list, and process them. +In this implementation, the elements are simple integers. The queue is represented +by a root node, and to add an element to the queue, a producer process creates a new node, +a child of the root node. + +The following excerpt of code corresponds to the constructor of the object. As +with Barrier objects, it first calls the constructor of the parent class, SyncPrimitive, +that creates a ZooKeeper object if one doesn't exist. It then verifies if the root +node of the queue exists, and creates if it doesn't. + + /** + * Constructor of producer-consumer queue + * + * @param address + * @param name + */ + Queue(String address, String name) { + super(address); + this.root = name; + // Create ZK node name + if (zk != null) { + try { + Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + System.out.println("Interrupted exception"); + } + } + } + + +A producer process calls "produce()" to add an element to the queue, and passes +an integer as an argument. To add an element to the queue, the method creates a +new node using "create()", and uses the SEQUENCE flag to instruct ZooKeeper to +append the value of the sequencer counter associated to the root node. In this way, +we impose a total order on the elements of the queue, thus guaranteeing that the +oldest element of the queue is the next one consumed. + + /** + * Add element to the queue. + * + * @param i + * @return + */ + + boolean produce(int i) throws KeeperException, InterruptedException{ + ByteBuffer b = ByteBuffer.allocate(4); + byte[] value; + + // Add child with value i + b.putInt(i); + value = b.array(); + zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT_SEQUENTIAL); + + return true; + } + + +To consume an element, a consumer process obtains the children of the root node, +reads the node with smallest counter value, and returns the element. Note that +if there is a conflict, then one of the two contending processes won't be able to +delete the node and the delete operation will throw an exception. + +A call to getChildren() returns the list of children in lexicographic order. +As lexicographic order does not necessarily follow the numerical order of the counter +values, we need to decide which element is the smallest. To decide which one has +the smallest counter value, we traverse the list, and remove the prefix "element" +from each one. + + /** + * Remove first element from the queue. 
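+     * The consumer scans the children of the queue node, picks the child with the
+     * smallest sequence number, reads its 4-byte payload, deletes the znode, and
+     * returns the decoded integer; if the queue is empty it waits on the shared
+     * mutex until a watch notification arrives.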
+ * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + int consume() throws KeeperException, InterruptedException{ + int retvalue = -1; + Stat stat = null; + + // Get the first element available + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() == 0) { + System.out.println("Going to wait"); + mutex.wait(); + } else { + Integer min = new Integer(list.get(0).substring(7)); + for(String s : list){ + Integer tempValue = new Integer(s.substring(7)); + //System.out.println("Temporary value: " + tempValue); + if(tempValue < min) min = tempValue; + } + System.out.println("Temporary value: " + root + "/element" + min); + byte[] b = zk.getData(root + "/element" + min, + false, stat); + zk.delete(root + "/element" + min, 0); + ByteBuffer buffer = ByteBuffer.wrap(b); + retvalue = buffer.getInt(); + + return retvalue; + } + } + } + } + } + + + + +## Complete example + +In the following section you can find a complete command line application to demonstrate the above mentioned +recipes. Use the following command to run it. + + ZOOBINDIR="[path_to_distro]/bin" + . "$ZOOBINDIR"/zkEnv.sh + java SyncPrimitive [Test Type] [ZK server] [No of elements] [Client type] + + + +### Queue test + +Start a producer to create 100 elements + + java SyncPrimitive qTest localhost 100 p + + +Start a consumer to consume 100 elements + + java SyncPrimitive qTest localhost 100 c + + + +### Barrier test + +Start a barrier with 2 participants (start as many times as many participants you'd like to enter) + + java SyncPrimitive bTest localhost 2 + + + +### Source Listing + +#### SyncPrimitive.Java + + import java.io.IOException; + import java.net.InetAddress; + import java.net.UnknownHostException; + import java.nio.ByteBuffer; + import java.util.List; + import java.util.Random; + + import org.apache.zookeeper.CreateMode; + import org.apache.zookeeper.KeeperException; + import org.apache.zookeeper.WatchedEvent; + import org.apache.zookeeper.Watcher; + import org.apache.zookeeper.ZooKeeper; + import org.apache.zookeeper.ZooDefs.Ids; + import org.apache.zookeeper.data.Stat; + + public class SyncPrimitive implements Watcher { + + static ZooKeeper zk = null; + static Integer mutex; + String root; + + SyncPrimitive(String address) { + if(zk == null){ + try { + System.out.println("Starting ZK:"); + zk = new ZooKeeper(address, 3000, this); + mutex = new Integer(-1); + System.out.println("Finished starting ZK: " + zk); + } catch (IOException e) { + System.out.println(e.toString()); + zk = null; + } + } + //else mutex = new Integer(-1); + } + + synchronized public void process(WatchedEvent event) { + synchronized (mutex) { + //System.out.println("Process: " + event.getType()); + mutex.notify(); + } + } + + /** + * Barrier + */ + static public class Barrier extends SyncPrimitive { + int size; + String name; + + /** + * Barrier constructor + * + * @param address + * @param root + * @param size + */ + Barrier(String address, String root, int size) { + super(address); + this.root = root; + this.size = size; + + // Create barrier node + if (zk != null) { + try { + Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + System.out.println("Interrupted exception"); + } + } + + // My node name + try { + name = new 
String(InetAddress.getLocalHost().getCanonicalHostName().toString()); + } catch (UnknownHostException e) { + System.out.println(e.toString()); + } + + } + + /** + * Join barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + + boolean enter() throws KeeperException, InterruptedException{ + zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.EPHEMERAL); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + + if (list.size() < size) { + mutex.wait(); + } else { + return true; + } + } + } + } + + /** + * Wait until all reach barrier + * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + boolean leave() throws KeeperException, InterruptedException{ + zk.delete(root + "/" + name, 0); + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() > 0) { + mutex.wait(); + } else { + return true; + } + } + } + } + } + + /** + * Producer-Consumer queue + */ + static public class Queue extends SyncPrimitive { + + /** + * Constructor of producer-consumer queue + * + * @param address + * @param name + */ + Queue(String address, String name) { + super(address); + this.root = name; + // Create ZK node name + if (zk != null) { + try { + Stat s = zk.exists(root, false); + if (s == null) { + zk.create(root, new byte[0], Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT); + } + } catch (KeeperException e) { + System.out + .println("Keeper exception when instantiating queue: " + + e.toString()); + } catch (InterruptedException e) { + System.out.println("Interrupted exception"); + } + } + } + + /** + * Add element to the queue. + * + * @param i + * @return + */ + + boolean produce(int i) throws KeeperException, InterruptedException{ + ByteBuffer b = ByteBuffer.allocate(4); + byte[] value; + + // Add child with value i + b.putInt(i); + value = b.array(); + zk.create(root + "/element", value, Ids.OPEN_ACL_UNSAFE, + CreateMode.PERSISTENT_SEQUENTIAL); + + return true; + } + + /** + * Remove first element from the queue. 
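+         * Children are created with the PERSISTENT_SEQUENTIAL flag, so each one is
+         * named "element" followed by a zero-padded sequence number; substring(7)
+         * strips the "element" prefix before comparing the numeric suffixes, and the
+         * full child name (minNode) is used to read and delete the selected znode.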
+ * + * @return + * @throws KeeperException + * @throws InterruptedException + */ + int consume() throws KeeperException, InterruptedException{ + int retvalue = -1; + Stat stat = null; + + // Get the first element available + while (true) { + synchronized (mutex) { + List list = zk.getChildren(root, true); + if (list.size() == 0) { + System.out.println("Going to wait"); + mutex.wait(); + } else { + Integer min = new Integer(list.get(0).substring(7)); + String minNode = list.get(0); + for(String s : list){ + Integer tempValue = new Integer(s.substring(7)); + //System.out.println("Temporary value: " + tempValue); + if(tempValue < min) { + min = tempValue; + minNode = s; + } + } + System.out.println("Temporary value: " + root + "/" + minNode); + byte[] b = zk.getData(root + "/" + minNode, + false, stat); + zk.delete(root + "/" + minNode, 0); + ByteBuffer buffer = ByteBuffer.wrap(b); + retvalue = buffer.getInt(); + + return retvalue; + } + } + } + } + } + + public static void main(String args[]) { + if (args[0].equals("qTest")) + queueTest(args); + else + barrierTest(args); + } + + public static void queueTest(String args[]) { + Queue q = new Queue(args[1], "/app1"); + + System.out.println("Input: " + args[1]); + int i; + Integer max = new Integer(args[2]); + + if (args[3].equals("p")) { + System.out.println("Producer"); + for (i = 0; i < max; i++) + try{ + q.produce(10 + i); + } catch (KeeperException e){ + + } catch (InterruptedException e){ + + } + } else { + System.out.println("Consumer"); + + for (i = 0; i < max; i++) { + try{ + int r = q.consume(); + System.out.println("Item: " + r); + } catch (KeeperException e){ + i--; + } catch (InterruptedException e){ + } + } + } + } + + public static void barrierTest(String args[]) { + Barrier b = new Barrier(args[1], "/b1", new Integer(args[2])); + try{ + boolean flag = b.enter(); + System.out.println("Entered barrier: " + args[2]); + if(!flag) System.out.println("Error when entering the barrier"); + } catch (KeeperException e){ + } catch (InterruptedException e){ + } + + // Generate random integer + Random rand = new Random(); + int r = rand.nextInt(100); + // Loop for rand iterations + for (int i = 0; i < r; i++) { + try { + Thread.sleep(100); + } catch (InterruptedException e) { + } + } + try{ + b.leave(); + } catch (KeeperException e){ + + } catch (InterruptedException e){ + + } + System.out.println("Left barrier"); + } + } + diff --git a/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md new file mode 100644 index 0000000000000000000000000000000000000000..98045444457f5841a40a1eeb349292798942732c --- /dev/null +++ b/local-test-zookeeper-delta-02/afc-zookeeper/zookeeper-docs/src/main/resources/markdown/zookeeperUseCases.md @@ -0,0 +1,385 @@ + + +# ZooKeeper Use Cases + +- Applications and organizations using ZooKeeper include (alphabetically) [1]. +- If your use case wants to be listed here. Please do not hesitate, submit a pull request or write an email to **dev@zookeeper.apache.org**, + and then, your use case will be included. +- If this documentation has violated your intellectual property rights or you and your company's privacy, write an email to **dev@zookeeper.apache.org**, + we will handle them in a timely manner. + + +## Free Software Projects + +### [AdroitLogic UltraESB](http://adroitlogic.org/) + - Uses ZooKeeper to implement node coordination, in clustering support. 
This allows the management of the complete cluster, + or any specific node - from any other node connected via JMX. A Cluster wide command framework developed on top of the + ZooKeeper coordination allows commands that fail on some nodes to be retried etc. We also support the automated graceful + round-robin-restart of a complete cluster of nodes using the same framework [1]. + +### [Akka](http://akka.io/) + - Akka is the platform for the next generation event-driven, scalable and fault-tolerant architectures on the JVM. + Or: Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM [1]. + +### [Eclipse Communication Framework](http://www.eclipse.org/ecf) + - The Eclipse ECF project provides an implementation of its Abstract Discovery services using Zookeeper. ECF itself + is used in many projects providing base functionality for communication, all based on OSGi [1]. + +### [Eclipse Gyrex](http://www.eclipse.org/gyrex) + - The Eclipse Gyrex project provides a platform for building your own Java OSGi based clouds. + - ZooKeeper is used as the core cloud component for node membership and management, coordination of jobs executing among workers, + a lock service and a simple queue service and a lot more [1]. + +### [GoldenOrb](http://www.goldenorbos.org/) + - massive-scale Graph analysis [1]. + +### [Juju](https://juju.ubuntu.com/) + - Service deployment and orchestration framework, formerly called Ensemble [1]. + +### [Katta](http://katta.sourceforge.net/) + - Katta serves distributed Lucene indexes in a grid environment. + - Zookeeper is used for node, master and index management in the grid [1]. + +### [KeptCollections](https://github.com/anthonyu/KeptCollections) + - KeptCollections is a library of drop-in replacements for the data structures in the Java Collections framework. + - KeptCollections uses Apache ZooKeeper as a backing store, thus making its data structures distributed and scalable [1]. + +### [Neo4j](https://neo4j.com/) + - Neo4j is a Graph Database. It's a disk based, ACID compliant transactional storage engine for big graphs and fast graph traversals, + using external indices like Lucene/Solr for global searches. + - We use ZooKeeper in the Neo4j High Availability components for write-master election, + read slave coordination and other cool stuff. ZooKeeper is a great and focused project - we like! [1]. + +### [Norbert](http://sna-projects.com/norbert) + - Partitioned routing and cluster management [1]. + +### [spring-cloud-zookeeper](https://spring.io/projects/spring-cloud-zookeeper) + - Spring Cloud Zookeeper provides Apache Zookeeper integrations for Spring Boot apps through autoconfiguration + and binding to the Spring Environment and other Spring programming model idioms. With a few simple annotations + you can quickly enable and configure the common patterns inside your application and build large distributed systems with Zookeeper. + The patterns provided include Service Discovery and Distributed Configuration [38]. + +### [spring-statemachine](https://projects.spring.io/spring-statemachine/) + - Spring Statemachine is a framework for application developers to use state machine concepts with Spring applications. + - Spring Statemachine can provide this feature:Distributed state machine based on a Zookeeper [31,32]. 
+ +### [spring-xd](https://projects.spring.io/spring-xd/) + - Spring XD is a unified, distributed, and extensible system for data ingestion, real time analytics, batch processing, and data export. + The project’s goal is to simplify the development of big data applications. + - ZooKeeper - Provides all runtime information for the XD cluster. Tracks running containers, in which containers modules + and jobs are deployed, stream definitions, deployment manifests, and the like [30,31]. + +### [Talend ESB](http://www.talend.com/products-application-integration/application-integration-esb-se.php) + - Talend ESB is a versatile and flexible, enterprise service bus. + - It uses ZooKeeper as endpoint repository of both REST and SOAP Web services. + By using ZooKeeper Talend ESB is able to provide failover and load balancing capabilities in a very light-weight manner [1]. + +### [redis_failover](https://github.com/ryanlecompte/redis_failover) + - Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby [1]. + + +## Apache Projects + +### [Apache Accumulo](https://accumulo.apache.org/) + - Accumulo is a distributed key/value store that provides expressive, cell-level access labels. + - Apache ZooKeeper plays a central role within the Accumulo architecture. Its quorum consistency model supports an overall + Accumulo architecture with no single points of failure. Beyond that, Accumulo leverages ZooKeeper to store and communication + configuration information for users and tables, as well as operational states of processes and tablets [2]. + +### [Apache Atlas](http://atlas.apache.org) + - Atlas is a scalable and extensible set of core foundational governance services – enabling enterprises to effectively and efficiently meet + their compliance requirements within Hadoop and allows integration with the whole enterprise data ecosystem. + - Atlas uses Zookeeper for coordination to provide redundancy and high availability of HBase,Kafka [31,35]. + +### [Apache BookKeeper](https://bookkeeper.apache.org/) + - A scalable, fault-tolerant, and low-latency storage service optimized for real-time workloads. + - BookKeeper requires a metadata storage service to store information related to ledgers and available bookies. BookKeeper currently uses + ZooKeeper for this and other tasks [3]. + +### [Apache CXF DOSGi](http://cxf.apache.org/distributed-osgi.html) + - Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming + APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, + or CORBA and work over a variety of transports such as HTTP, JMS or JBI. + - The Distributed OSGi implementation at Apache CXF uses ZooKeeper for its Discovery functionality [4]. + +### [Apache Drill](http://drill.apache.org/) + - Schema-free SQL Query Engine for Hadoop, NoSQL and Cloud Storage + - ZooKeeper maintains ephemeral cluster membership information. The Drillbits use ZooKeeper to find other Drillbits in the cluster, + and the client uses ZooKeeper to find Drillbits to submit a query [28]. + +### [Apache Druid](https://druid.apache.org/) + - Apache Druid is a high performance real-time analytics database. + - Apache Druid uses Apache ZooKeeper (ZK) for management of current cluster state. 
The operations that happen over ZK are [27]: + - Coordinator leader election + - Segment "publishing" protocol from Historical and Realtime + - Segment load/drop protocol between Coordinator and Historical + - Overlord leader election + - Overlord and MiddleManager task management + +### [Apache Dubbo](http://dubbo.apache.org) + - Apache Dubbo is a high-performance, java based open source RPC framework. + - Zookeeper is used for service registration discovery and configuration management in Dubbo [6]. + +### [Apache Flink](https://flink.apache.org/) + - Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. + Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale. + - To enable JobManager High Availability you have to set the high-availability mode to zookeeper, configure a ZooKeeper quorum and set up a masters file with all JobManagers hosts and their web UI ports. + Flink leverages ZooKeeper for distributed coordination between all running JobManager instances. ZooKeeper is a separate service from Flink, + which provides highly reliable distributed coordination via leader election and light-weight consistent state storage [23]. + +### [Apache Flume](https://flume.apache.org/) + - Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts + of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant + with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model + that allows for online analytic application. + - Flume supports Agent configurations via Zookeeper. This is an experimental feature [5]. + +### [Apache Fluo](https://fluo.apache.org/) + - Apache Fluo is a distributed processing system that lets users make incremental updates to large data sets. + - Apache Fluo is built on Apache Accumulo which uses Apache Zookeeper for consensus [31,37]. + +### [Apache Griffin](https://griffin.apache.org/) + - Big Data Quality Solution For Batch and Streaming. + - Griffin uses Zookeeper for coordination to provide redundancy and high availability of Kafka [31,36]. + +### [Apache Hadoop](http://hadoop.apache.org/) + - The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across + clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, + each offering local computation and storage. Rather than rely on hardware to deliver high-availability, + the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures. + - The implementation of automatic HDFS failover relies on ZooKeeper for the following things: + - **Failure detection** - each of the NameNode machines in the cluster maintains a persistent session in ZooKeeper. + If the machine crashes, the ZooKeeper session will expire, notifying the other NameNode that a failover should be triggered. + - **Active NameNode election** - ZooKeeper provides a simple mechanism to exclusively elect a node as active. If the current active NameNode crashes, + another node may take a special exclusive lock in ZooKeeper indicating that it should become the next active. 
+
+    - The ZKFailoverController (ZKFC) is a new component: a ZooKeeper client that also monitors and manages the state of the NameNode.
+      Each of the machines which runs a NameNode also runs a ZKFC, and that ZKFC is responsible for:
+        - **Health monitoring** - the ZKFC pings its local NameNode on a periodic basis with a health-check command.
+          So long as the NameNode responds in a timely fashion with a healthy status, the ZKFC considers the node healthy.
+          If the node has crashed, frozen, or otherwise entered an unhealthy state, the health monitor will mark it as unhealthy.
+        - **ZooKeeper session management** - when the local NameNode is healthy, the ZKFC holds a session open in ZooKeeper.
+          If the local NameNode is active, it also holds a special “lock” znode. This lock uses ZooKeeper’s support for “ephemeral” nodes;
+          if the session expires, the lock node will be automatically deleted.
+        - **ZooKeeper-based election** - if the local NameNode is healthy, and the ZKFC sees that no other node currently holds the lock znode,
+          it will itself try to acquire the lock. If it succeeds, then it has “won the election”, and is responsible for running a failover to make its local NameNode active.
+          The failover process is similar to the manual failover described above: first, the previous active is fenced if necessary,
+          and then the local NameNode transitions to active state [7].
+
+### [Apache HBase](https://hbase.apache.org/)
+  - HBase is the Hadoop database. It's an open-source, distributed, column-oriented store.
+  - HBase uses ZooKeeper for master election, server lease management, bootstrapping, and coordination between servers.
+    A distributed Apache HBase installation depends on a running ZooKeeper cluster. All participating nodes and clients
+    need to be able to access the running ZooKeeper ensemble [8].
+  - As you can see, ZooKeeper is a fundamental part of HBase. All operations that require coordination, such as region
+    assignment, master failover, replication, and snapshots, are built on ZooKeeper [20].
+
+### [Apache Helix](http://helix.apache.org/)
+  - A cluster management framework for partitioned and replicated distributed resources.
+  - We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state.
+    Helix uses Apache ZooKeeper to achieve this functionality [21].
+    ZooKeeper provides:
+      - A way to represent PERSISTENT state which remains until it is deleted
+      - A way to represent TRANSIENT/EPHEMERAL state which vanishes when the process that created the state dies
+      - A notification mechanism when there is a change in PERSISTENT and EPHEMERAL state
+
+### [Apache Hive](https://hive.apache.org)
+  - The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed
+    storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
+  - Hive has been using ZooKeeper as a distributed lock manager to support concurrency in HiveServer2 [25,26].
+ +### [Apache Ignite](https://ignite.apache.org/) + - Ignite is a memory-centric distributed database, caching, and processing platform for + transactional, analytical, and streaming workloads delivering in-memory speeds at petabyte scale + - Apache Ignite discovery mechanism goes with a ZooKeeper implementations which allows scaling Ignite clusters to 100s and 1000s of nodes + preserving linear scalability and performance [31,34].​ + +### [Apache James Mailbox](http://james.apache.org/mailbox/) + - The Apache James Mailbox is a library providing a flexible Mailbox storage accessible by mail protocols + (IMAP4, POP3, SMTP,...) and other protocols. + - Uses Zookeeper and Curator Framework for generating distributed unique ID's [31]. + +### [Apache Kafka](https://kafka.apache.org/) + - Kafka is a distributed publish/subscribe messaging system + - Apache Kafka relies on ZooKeeper for the following things: + - **Controller election** + The controller is one of the most important broking entity in a Kafka ecosystem, and it also has the responsibility + to maintain the leader-follower relationship across all the partitions. If a node by some reason is shutting down, + it’s the controller’s responsibility to tell all the replicas to act as partition leaders in order to fulfill the + duties of the partition leaders on the node that is about to fail. So, whenever a node shuts down, a new controller + can be elected and it can also be made sure that at any given time, there is only one controller and all the follower nodes have agreed on that. + - **Configuration Of Topics** + The configuration regarding all the topics including the list of existing topics, the number of partitions for each topic, + the location of all the replicas, list of configuration overrides for all topics and which node is the preferred leader, etc. + - **Access control lists** + Access control lists or ACLs for all the topics are also maintained within Zookeeper. + - **Membership of the cluster** + Zookeeper also maintains a list of all the brokers that are functioning at any given moment and are a part of the cluster [9]. + +### [Apache Kylin](http://kylin.apache.org/) + - Apache Kylin is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark supporting extremely large datasets, + original contributed from eBay Inc. + - Apache Kylin leverages Zookeeper for job coordination [31,33]. + +### [Apache Mesos](http://mesos.apache.org/) + - Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), + enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. + - Mesos has a high-availability mode that uses multiple Mesos masters: one active master (called the leader or leading master) + and several backups in case it fails. The masters elect the leader, with Apache ZooKeeper both coordinating the election + and handling leader detection by masters, agents, and scheduler drivers [10]. + +### [Apache Oozie](https://oozie.apache.org) + - Oozie is a workflow scheduler system to manage Apache Hadoop jobs. + - the Oozie servers use it for coordinating access to the database and communicating with each other. In order to have full HA, + there should be at least 3 ZooKeeper servers [29]. 
+
+### [Apache Pulsar](https://pulsar.apache.org)
+  - Apache Pulsar is an open-source distributed pub-sub messaging system originally created at Yahoo and now part of the Apache Software Foundation.
+  - Pulsar uses Apache ZooKeeper for metadata storage, cluster configuration, and coordination. In a Pulsar instance:
+    - A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
+    - Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as ownership metadata,
+      broker load reports, BookKeeper ledger metadata, and more [24].
+
+### [Apache Solr](https://lucene.apache.org/solr/)
+  - Solr is the popular, blazing-fast, open source enterprise search platform built on Apache Lucene.
+  - In the "Cloud" edition (v4.x and up) of enterprise search engine Apache Solr, ZooKeeper is used for configuration,
+    leader election and more [12,13].
+
+### [Apache Spark](https://spark.apache.org/)
+  - Apache Spark is a unified analytics engine for large-scale data processing.
+  - Utilizing ZooKeeper to provide leader election and some state storage, you can launch multiple Masters in your cluster connected to the same ZooKeeper instance.
+    One will be elected “leader” and the others will remain in standby mode. If the current leader dies, another Master will be elected,
+    recover the old Master’s state, and then resume scheduling [14].
+
+### [Apache Storm](http://storm.apache.org)
+  - Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably
+    process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.
+    Apache Storm is simple, can be used with any programming language, and is a lot of fun to use!
+  - Storm uses ZooKeeper for coordinating the cluster [22].
+
+
+## Companies
+
+### [AGETO](http://www.ageto.de/)
+  - The AGETO RnD team uses ZooKeeper in a variety of internal as well as external consulting projects [1].
+
+### [Benipal Technologies](http://www.benipaltechnologies.com/)
+  - ZooKeeper is used for internal application development with Solr and Hadoop with HBase [1].
+
+### [Box](http://box.net/)
+  - Box uses ZooKeeper for service discovery, service coordination, Solr and Hadoop support, etc [1].
+
+### [Deepdyve](http://www.deepdyve.com/)
+  - We provide search for research and access to high-quality content using advanced search technologies. ZooKeeper is used to
+    manage server state, control index deployment, and a myriad of other tasks [1].
+
+### [Facebook](https://www.facebook.com/)
+  - Facebook uses Zeus ([17,18]), a forked version of ZooKeeper with many scalability and performance enhancements,
+    for configuration management at Facebook scale.
+    It runs a consensus protocol among servers distributed across multiple regions for resilience. If the leader fails,
+    a follower is converted into a new leader.
+
+### [Idium Portal](http://www.idium.no/no/idium_portal/)
+  - Idium Portal is a hosted web-publishing system delivered by the Norwegian company Idium AS.
+  - ZooKeeper is used for cluster messaging, service bootstrapping, and service coordination [1].
+
+### [Makara](http://www.makara.com/)
+  - Using ZooKeeper on a 2-node cluster on VMware Workstation, Amazon EC2, and Zen
+  - Using zkpython
+  - Looking into expanding into a 100-node cluster [1].
+ +### [Midokura](http://www.midokura.com/) + - We do virtualized networking for the cloud computing era. We use ZooKeeper for various aspects of our distributed control plane [1]. + +### [Pinterest](https://www.pinterest.com/) + - Pinterest uses the ZooKeeper for Service discovery and dynamic configuration.Like many large scale web sites, Pinterest’s infrastructure consists of servers that communicate with + backend services composed of a number of individual servers for managing load and fault tolerance. Ideally, we’d like the configuration to reflect only the active hosts, + so clients don’t need to deal with bad hosts as often. ZooKeeper provides a well known pattern to solve this problem [19]. + +### [Rackspace](http://www.rackspace.com/email_hosting) + - The Email & Apps team uses ZooKeeper to coordinate sharding and responsibility changes in a distributed e-mail client + that pulls and indexes data for search. ZooKeeper also provides distributed locking for connections to prevent a cluster from overwhelming servers [1]. + +### [Sematext](http://sematext.com/) + - Uses ZooKeeper in SPM (which includes ZooKeeper monitoring component, too!), Search Analytics, and Logsene [1]. + +### [Tubemogul](http://tubemogul.com/) + - Uses ZooKeeper for leader election, configuration management, locking, group membership [1]. + +### [Twitter](https://twitter.com/) + - ZooKeeper is used at Twitter as the source of truth for storing critical metadata. It serves as a coordination kernel to + provide distributed coordination services, such as leader election and distributed locking. + Some concrete examples of ZooKeeper in action include [15,16]: + - ZooKeeper is used to store service registry, which is used by Twitter’s naming service for service discovery. + - Manhattan (Twitter’s in-house key-value database), Nighthawk (sharded Redis), and Blobstore (in-house photo and video storage), + stores its cluster topology information in ZooKeeper. + - EventBus, Twitter’s pub-sub messaging system, stores critical metadata in ZooKeeper and uses ZooKeeper for leader election. + - Mesos, Twitter’s compute platform, uses ZooKeeper for leader election. + +### [Vast.com](http://www.vast.com/) + - Used internally as a part of sharding services, distributed synchronization of data/index updates, configuration management and failover support [1]. + +### [Wealthfront](http://wealthfront.com/) + - Wealthfront uses ZooKeeper for service discovery, leader election and distributed locking among its many backend services. + ZK is an essential part of Wealthfront's continuous [deployment infrastructure](http://eng.wealthfront.com/2010/05/02/deployment-infrastructure-for-continuous-deployment/) [1]. + +### [Yahoo!](http://www.yahoo.com/) + - ZooKeeper is used for a myriad of services inside Yahoo! for doing leader election, configuration management, sharding, locking, group membership etc [1]. + +### [Zynga](http://www.zynga.com/) + - ZooKeeper at Zynga is used for a variety of services including configuration management, leader election, sharding and more [1]. 
+ + +#### References +- [1] https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy +- [2] https://www.youtube.com/watch?v=Ew53T6h9oRw +- [3] https://bookkeeper.apache.org/docs/4.7.3/getting-started/concepts/#ledgers +- [4] http://cxf.apache.org/dosgi-discovery-demo-page.html +- [5] https://flume.apache.org/FlumeUserGuide.html +- [6] http://dubbo.apache.org/en-us/blog/dubbo-zk.html +- [7] https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html +- [8] https://hbase.apache.org/book.html#zookeeper +- [9] https://www.cloudkarafka.com/blog/2018-07-04-cloudkarafka_what_is_zookeeper.html +- [10] http://mesos.apache.org/documentation/latest/high-availability/ +- [11] http://incubator.apache.org/projects/s4.html +- [12] https://lucene.apache.org/solr/guide/6_6/using-zookeeper-to-manage-configuration-files.html#UsingZooKeepertoManageConfigurationFiles-StartupBootstrap +- [13] https://lucene.apache.org/solr/guide/6_6/setting-up-an-external-zookeeper-ensemble.html +- [14] https://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper +- [15] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/zookeeper-at-twitter.html +- [16] https://blog.twitter.com/engineering/en_us/topics/infrastructure/2018/dynamic-configuration-at-twitter.html +- [17] TANG, C., KOOBURAT, T., VENKATACHALAM, P.,CHANDER, A., WEN, Z., NARAYANAN, A., DOWELL,P., AND KARL, R. Holistic Configuration Management + at Facebook. In Proceedings of the 25th Symposium on Operating System Principles (SOSP’15) (Monterey, CA,USA, Oct. 2015). +- [18] https://www.youtube.com/watch?v=SeZV373gUZc +- [19] https://medium.com/@Pinterest_Engineering/zookeeper-resilience-at-pinterest-adfd8acf2a6b +- [20] https://blog.cloudera.com/what-are-hbase-znodes/ +- [21] https://helix.apache.org/Architecture.html +- [22] http://storm.apache.org/releases/current/Setting-up-a-Storm-cluster.html +- [23] https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/jobmanager_high_availability.html +- [24] https://pulsar.apache.org/docs/en/concepts-architecture-overview/#metadata-store +- [25] https://cwiki.apache.org/confluence/display/Hive/Locking +- [26] *ZooKeeperHiveLockManager* implementation in the [hive](https://github.com/apache/hive/) code base +- [27] https://druid.apache.org/docs/latest/dependencies/zookeeper.html +- [28] https://mapr.com/blog/apache-drill-architecture-ultimate-guide/ +- [29] https://oozie.apache.org/docs/4.1.0/AG_Install.html +- [30] https://docs.spring.io/spring-xd/docs/current/reference/html/ +- [31] https://cwiki.apache.org/confluence/display/CURATOR/Powered+By +- [32] https://projects.spring.io/spring-statemachine/ +- [33] https://www.tigeranalytics.com/blog/apache-kylin-architecture/ +- [34] https://apacheignite.readme.io/docs/cluster-discovery +- [35] http://atlas.apache.org/HighAvailability.html +- [36] http://griffin.apache.org/docs/usecases.html +- [37] https://fluo.apache.org/ +- [38] https://spring.io/projects/spring-cloud-zookeeper